PracHub

Design and analyze a free-trial A/B test

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a data scientist's competency in end-to-end A/B test design, causal inference, metric definition and observation horizon selection, ITT versus triggered analyses, sample size calculation, bias and interference controls, and statistical decision frameworks.

  • hard
  • OpenAI
  • Analytics & Experimentation
  • Data Scientist


Company: OpenAI

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: Technical Screen



Related Interview Questions

  • Design a free-month experiment - OpenAI (hard)
  • Assess free-month promotion impact - OpenAI (hard)
  • Measure free-month promotion impact - OpenAI (hard)
  • How would you evaluate a free-trial A/B test? - OpenAI (medium)
  • Evaluate a free-trial A/B test - OpenAI (easy)
Posted: Oct 13, 2025, 9:49 PM

A/B Test Design: 1‑Month Free Trial Impact on Paid Subscription Conversion

You are evaluating whether offering a 1‑month free trial increases paid subscription sign‑ups. Assume the product currently requires immediate payment (no trial). The treatment offers a 30‑day free trial that auto‑converts to paid unless canceled. Design an end‑to‑end A/B test and address:

  1. Eligibility and Randomization
    • Who is included/excluded (e.g., prior payers, grace‑period users)?
    • Unit of randomization (user, device, household?)
    • How to prevent reassignment and cross‑device contamination.
  2. Primary Outcome and Horizon
    • Define a single launch‑gating metric that captures true paid conversion given the 30‑day trial delay.
    • Justify an observation window (e.g., paid start within 60 days of first exposure).
    • Specify guardrail metrics (refunds, chargebacks, engagement, infra cost).
  3. ITT vs. Triggered Analyses
    • Describe both intention‑to‑treat and triggered analyses and when each should drive the decision.
    • Handle users who never see the offer or churn before trial end.
  4. Sample Size
    • Compute per‑arm sample size for: baseline paid conversion 4.0%, MDE +0.8 percentage points (absolute), two‑sided α=0.05, power=0.80. Show the formula and the result.
  5. Bias Controls
    • Address seasonality, novelty effects, geographic heterogeneity, and pre‑existing conversion propensity (e.g., CUPED with a pre‑exposure covariate).
  6. Interference and Fraud
    • Detect collusion or referral abuse; protect against multiple sign‑ups.
  7. Decision Framework
    • Specify exact launch criteria (statistical significance, minimum practical effect, guardrail thresholds), how you’d adjust for peeking/sequential looks, and a staged ramp plan if results are borderline.
    • Provide precise analysis steps and examples of tables/figures you would produce.
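The sample-size calculation in item 4 can be reproduced with the standard pooled-variance two-proportion formula, n = (z₁₋α/₂·√(2p̄q̄) + z₁₋β·√(p₁q₁ + p₂q₂))² / (p₂ − p₁)². A minimal sketch using only the Python standard library (the numbers follow from the stated parameters, not from any official solution; other textbook variants of the formula give a figure near 10,300 as well):

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_sample_size(p1: float, p2: float,
                               alpha: float = 0.05,
                               power: float = 0.80) -> int:
    """Per-arm n for a two-sided two-proportion z-test (pooled variance)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for power = 0.80
    p_bar = (p1 + p2) / 2                # average proportion under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Baseline 4.0% conversion, MDE +0.8 pp absolute -> treatment 4.8%
print(two_proportion_sample_size(0.040, 0.048))  # -> 10317 per arm
```

With the 60-day observation window from item 2, roughly 10,300 users per arm must be enrolled early enough that their full window closes before the readout.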
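The CUPED adjustment mentioned in item 5 can be sketched in a few lines: residualize the outcome Y on a pre-exposure covariate X using θ = Cov(X, Y)/Var(X), which shrinks variance without biasing the difference in group means. The data below is simulated purely for illustration; in practice X would be something like each user's pre-period engagement:

```python
import numpy as np

def cuped_adjust(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """CUPED: return y - theta * (x - mean(x)), theta = Cov(x, y) / Var(x)."""
    theta = np.cov(x, y)[0, 1] / x.var(ddof=1)
    return y - theta * (x - x.mean())

# Simulated example: a pre-exposure covariate correlated with the outcome.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)            # pre-period covariate
y = 0.6 * x + rng.normal(size=10_000)  # outcome sharing signal with x
y_adj = cuped_adjust(y, x)

print(y.var(ddof=1), y_adj.var(ddof=1))  # adjusted variance is lower
print(abs(y.mean() - y_adj.mean()))      # the mean is unchanged
```

Because the adjusted mean equals the raw mean in each arm, the estimated treatment effect is preserved while its confidence interval narrows in proportion to the squared correlation between X and Y.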
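For the peeking adjustment in item 7, one common choice is an O'Brien–Fleming-style alpha-spending function, α(t) = 2 − 2Φ(z₁₋α/₂ / √t), which spends almost no alpha at early looks and reserves most of the budget for the final analysis. The sketch below computes only the cumulative spend and per-look increments for four equally spaced looks; exact group-sequential boundaries also account for the correlation between looks (e.g., via R's gsDesign package):

```python
from math import sqrt
from statistics import NormalDist

def obf_alpha_spent(t: float, alpha: float = 0.05) -> float:
    """Cumulative alpha spent at information fraction t (O'Brien-Fleming-style)."""
    z = NormalDist()
    return 2.0 - 2.0 * z.cdf(z.inv_cdf(1.0 - alpha / 2.0) / sqrt(t))

looks = [0.25, 0.50, 0.75, 1.00]     # information fractions for 4 looks
spent = [obf_alpha_spent(t) for t in looks]
increments = [b - a for a, b in zip([0.0] + spent[:-1], spent)]
for t, inc in zip(looks, increments):
    print(f"look at t={t:.2f}: spend {inc:.5f}")
# The per-look budget grows with t; the cumulative spend at t=1 is 0.05.
```

In the staged ramp, the same spending schedule can gate each ramp step, so interim looks at 25/50/75% of the planned sample never exhaust the overall α = 0.05.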

