PracHub

Evaluate a free-trial A/B test

Last updated: Mar 29, 2026

Quick Overview

This question evaluates skills in experimental design, causal inference and statistical analysis, metric definition and instrumentation, data-quality debugging, and interpretation of product/business trade-offs in A/B testing.


Company: OpenAI

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: easy

Interview Round: Technical Screen




Scenario

A marketing team ran an A/B test offering a free 1-month trial to users.

  • Control (A): Standard offer (no free month)
  • Treatment (B): Free 1-month trial offer
  • Randomization unit: user (assume one assignment per user)
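
With the user as the randomization unit, assignment is typically made deterministic so the same user always lands in the same arm across sessions. The sketch below (illustrative only; the experiment name and 50/50 split are assumptions, not part of the question) shows one common hash-based approach:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "free_trial_v1") -> str:
    """Deterministically assign a user to control or treatment.

    Hashing (experiment, user_id) gives a stable pseudo-random bucket,
    so a user keeps one assignment no matter how often they return.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0-99
    return "treatment" if bucket < 50 else "control"

# The same user always receives the same assignment:
assert assign_variant("user_123") == assign_variant("user_123")
```

Salting the hash with the experiment name keeps assignments independent across concurrent experiments.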

The team cares about:

  1. Signup rate (do more users start a subscription/trial?)
  2. Retention (do users stick around after the trial / over time?)
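
Both metrics hinge on precise per-user definitions, including an attribution window for "signup" and explicit handling of users whose 30-day follow-up has not yet elapsed. A minimal sketch of such definitions (field names, the 7-day window, and the sample records are all hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical per-user records; field names are assumptions for illustration.
users = [
    {"user_id": "u1", "assigned_at": datetime(2025, 10, 1),
     "signup_at": datetime(2025, 10, 2),
     "last_active_at": datetime(2025, 11, 15)},
    {"user_id": "u2", "assigned_at": datetime(2025, 10, 1),
     "signup_at": None, "last_active_at": None},
]

def signed_up(u, window_days=7):
    """Signup = subscription started within an attribution window of assignment."""
    return (u["signup_at"] is not None
            and u["signup_at"] - u["assigned_at"] <= timedelta(days=window_days))

def retained_d30(u, as_of):
    """D30 retention = active 30+ days after signup.

    Returns None when the user has not had 30 days of follow-up yet
    (right-censored), so censored users are excluded rather than
    silently counted as churned.
    """
    if not signed_up(u):
        return False
    d30 = u["signup_at"] + timedelta(days=30)
    if as_of < d30:
        return None  # insufficient follow-up time
    return u["last_active_at"] is not None and u["last_active_at"] >= d30
```

Returning a three-valued result (retained / not retained / censored) makes the censoring decision explicit instead of burying it in a filter.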

Tasks

  1. Experiment design & setup
    • State the hypothesis and the key product/business risk.
    • Specify the primary metric, diagnostic metrics, and guardrail metrics. Discuss trade-offs (e.g., signup lift vs. low-quality signups).
    • Define precisely what counts as “signup” and what counts as “retained” (e.g., D30 retention, post-trial paid conversion, activity-based retention).
    • Explain how you would handle:
      • users assigned but never exposed to the offer (assignment vs exposure)
      • users with insufficient follow-up time (censoring)
      • seasonality or concurrent campaigns
  2. Analysis plan
    • Describe how you would estimate the treatment effect on signup rate and retention.
    • Which statistical tests/models would you use (e.g., difference in proportions, logistic regression, survival analysis)?
    • How would you compute confidence intervals and communicate uncertainty?
    • What checks would you run before trusting results (e.g., SRM, balance checks, instrumentation validation)?
  3. Common implementation/logic issues (Python code-review style)
    In a typical experiment analysis codebase, list likely logic bugs or setup mistakes you would look for, such as:
    • incorrect attribution window
    • filtering/conditioning on post-treatment behavior
    • mixing per-event vs per-user denominators
    • incorrectly defining “eligible population”
    • double-counting users or handling cross-device users
    • peeking / stopping rules
  4. Decision & next steps
    • Given possible outcomes (signup up, retention down; signup flat, retention up; etc.), outline what you would recommend to stakeholders.
    • Propose at least one follow-up experiment or segmentation to understand heterogeneous effects (e.g., new vs returning users, geo, acquisition channel).
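
The core of the analysis plan above — an SRM check before anything else, then a difference in proportions with a confidence interval — can be sketched with the standard library alone (the numbers at the bottom are illustrative, not real data; a 2-arm SRM check reduces to a z-test on the observed split, though a chi-square test generalizes to more arms):

```python
from math import sqrt
from statistics import NormalDist

def two_prop_ztest(x_a, n_a, x_b, n_b):
    """Difference in proportions (B - A) with a 95% CI and two-sided p-value."""
    p_a, p_b = x_a / n_a, x_b / n_b
    diff = p_b - p_a
    # Pooled SE under H0 for the test; unpooled SE for the CI.
    p_pool = (x_a + x_b) / (n_a + n_b)
    se_test = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    se_ci = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = diff / se_test
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    ci_95 = (diff - 1.96 * se_ci, diff + 1.96 * se_ci)
    return diff, ci_95, p_value

def srm_check(n_a, n_b, expected_ratio=0.5, alpha=0.001):
    """Sample-ratio mismatch: is the observed split consistent with 50/50?

    Returns False when the split is implausible under the design,
    i.e. the experiment's results should not be trusted until the
    assignment/logging pipeline is debugged.
    """
    n = n_a + n_b
    se = sqrt(expected_ratio * (1 - expected_ratio) / n)
    z = (n_a / n - expected_ratio) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return p >= alpha

# Illustrative counts only: 9.0% vs 11.0% signup rate.
diff, ci_95, p_value = two_prop_ztest(x_a=900, n_a=10_000, x_b=1_100, n_b=10_000)
```

A strict alpha (0.001) is conventional for SRM because the check is a gate on data quality, not a treatment-effect test.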

