
Design A/B testing platform

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a candidate's ability to design a scalable, privacy-compliant A/B testing platform, covering deterministic bucketing, traffic allocation, telemetry and event pipelines, experiment governance, and applied statistical methods in the Analytics & Experimentation domain.


Company: Google

Role: Software Engineer

Category: Analytics & Experimentation

Difficulty: hard

Interview Round: Technical Screen

Design an A/B testing platform supporting experiment definition, randomization, exposure logging, metric computation, and guardrails. Describe bucketing, unit consistency, incremental rollout, bias avoidance, and statistical analysis and diagnostics.
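As a concrete example of the statistical planning the question touches on, a minimal sample-size sketch for a proportion metric under a two-sided z-test (the function name and default thresholds are illustrative, not part of the question):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_base: float, mde_abs: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Units needed per arm to detect an absolute lift of `mde_abs` on a
    baseline conversion rate `p_base`, via the standard normal-approximation
    formula for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_power = NormalDist().inv_cdf(power)           # critical value for power
    p_treat = p_base + mde_abs
    variance = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    return ceil((z_alpha + z_power) ** 2 * variance / mde_abs ** 2)
```

For example, detecting a 1-point absolute lift on a 10% baseline at the defaults requires roughly 15,000 units per arm, which is why traffic allocation and experiment duration are first-class configuration in the platform.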


Related Interview Questions

  • Design an A/B test for search ranking - Google (easy)
  • Design an Unbiased Upgrade Experiment - Google (hard)
  • Design a Causal Upgrade Experiment - Google (hard)
  • Design an experiment to measure latency impact - Google (medium)
  • How would you use propensity score matching here - Google (medium)
Sep 6, 2025

Design an A/B Testing Platform (Architecture + Experiment Science)

Context

You are designing an A/B testing platform for a large-scale consumer web/mobile product. The platform must support millions of users, low-latency assignment, privacy compliance, and both real-time and batch analytics. Multiple experiments can run concurrently across different product surfaces.

Requirements

Design the platform end-to-end to support:

  1. Experiment definition and configuration (namespaces/layers, eligibility/targeting, traffic allocation, variants, start/stop).
  2. Deterministic randomization and bucketing with sticky assignment and unit consistency across devices/sessions.
  3. Exposure logging and event telemetry with deduplication and identity stitching.
  4. Metric computation (batch + streaming), including definitions for conversions, retention, ratios, quantiles, and experiment-scoped windows.
  5. Incremental rollout, governance, and guardrails (e.g., SRM, kill switches, safety metrics).
  6. Bias avoidance and experiment hygiene (triggering, intent-to-treat, overlap management, AA tests).
  7. Statistical analysis and diagnostics (power, variance reduction, CIs/p-values, sequential monitoring, multiple testing, cluster-robust errors, diagnostics dashboards).

In your answer, describe:

  • Bucketing and traffic allocation
  • Unit of randomization and unit consistency
  • Incremental rollout and guardrails
  • Bias avoidance practices
  • Statistical analysis and diagnostics
  • A high-level architecture and data flow
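Of the guardrails listed above, the sample ratio mismatch (SRM) check is the most mechanical to automate: compare observed arm counts against the configured split with a chi-square goodness-of-fit test and block readout on failure. A minimal sketch for a two-arm experiment (the function name and the strict alpha are illustrative):

```python
from statistics import NormalDist

def srm_check(n_control: int, n_treatment: int,
              expected_ratio: float = 0.5, alpha: float = 0.001) -> bool:
    """Sample Ratio Mismatch guardrail. Returns True if the observed split
    deviates from the configured split more than chance allows, in which
    case the experiment's results should not be trusted."""
    total = n_control + n_treatment
    exp_c = total * expected_ratio
    exp_t = total * (1 - expected_ratio)
    chi2 = (n_control - exp_c) ** 2 / exp_c + (n_treatment - exp_t) ** 2 / exp_t
    # With df=1, the chi-square statistic is the square of a standard normal,
    # so p = P(Z**2 > chi2) = 2 * (1 - Phi(sqrt(chi2))).
    p_value = 2 * (1 - NormalDist().cdf(chi2 ** 0.5))
    return p_value < alpha
```

A very small alpha is conventional here because SRM is checked continuously on every experiment, and a triggered check usually indicates a logging or assignment bug rather than a statistical fluke.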

