PracHub

Evaluate Metrics and Randomization for Onboarding Tutorial Change

Last updated: Mar 29, 2026

Quick Overview

This question evaluates a data scientist's skills in product experimentation, including selection of step-specific metrics (micro-conversions and time-to-complete), decisions about randomization and unit-of-analysis, and statistical inference with sample-size considerations.

  • medium
  • Confluent
  • Analytics & Experimentation
  • Data Scientist


Company: Confluent

Role: Data Scientist

Category: Analytics & Experimentation

Difficulty: medium

Interview Round: Onsite



Posted: Aug 4, 2025

Scenario

A single step within Confluent’s multi-step user-onboarding tutorial was modified. The product team wants to run an experiment to determine whether the change improves the user experience specifically at that step, while ensuring no negative side effects on the overall onboarding flow.

Assumptions for clarity:

  • The tutorial consists of ordered steps (1…k). Only step i was changed; all other steps remain unchanged.
  • We can instrument events at the step level: step_i_view, step_i_submit, step_i_success, step_i_error, help_click, backtrack, abandon, timestamps.
  • Users may belong to accounts (organizations) with multiple users.
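Given the step-level instrumentation above, the step-specific metrics (micro-conversion rate and time-to-complete) can be computed directly from the event stream. A minimal sketch, assuming a hypothetical log schema of `(user_id, event, timestamp)` tuples; the event names come from the list above, but the data layout and values are illustrative:

```python
from datetime import datetime

# Hypothetical event log rows: (user_id, event, timestamp). Schema and values
# are assumptions for illustration, not from the question.
events = [
    ("u1", "step_i_view",    datetime(2025, 8, 4, 10, 0, 0)),
    ("u1", "step_i_success", datetime(2025, 8, 4, 10, 0, 45)),
    ("u2", "step_i_view",    datetime(2025, 8, 4, 11, 0, 0)),
    ("u2", "abandon",        datetime(2025, 8, 4, 11, 2, 0)),
]

def step_metrics(events):
    """Micro-conversion rate and mean time-to-complete (seconds) for step i."""
    views, successes, durations = {}, {}, []
    for user, event, ts in events:
        if event == "step_i_view" and user not in views:
            views[user] = ts  # first exposure to the modified step
        elif event == "step_i_success" and user in views and user not in successes:
            successes[user] = ts
            durations.append((ts - views[user]).total_seconds())
    conversion = len(successes) / len(views) if views else float("nan")
    mean_ttc = sum(durations) / len(durations) if durations else float("nan")
    return conversion, mean_ttc

conv, ttc = step_metrics(events)  # conv = 0.5, ttc = 45.0
```

The same event stream also yields the secondary metrics hinted at below (error rate from `step_i_error`, drop-off from `abandon`/`backtrack`), each scoped to users who reached step i.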

Questions

  1. Metrics
  • Which primary and secondary metrics would you track that are highly specific to the modified step?
  2. Experiment design
  • At which level would you randomize (user vs. account), and what covariates would you examine to verify comparable groups?
  3. Inference and sizing
  • Which statistical test(s) would you use? How would you compute required sample size and expected runtime? What alternative test would you prefer if the sample size turns out to be very small?
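For question 2, a standard way to verify comparable groups after randomization is the standardized mean difference (SMD) on pre-experiment covariates; |SMD| < 0.1 is a commonly used balance threshold. A minimal sketch, with a hypothetical covariate (prior sessions per user) and illustrative values:

```python
from math import sqrt
from statistics import mean, stdev

def smd(control, treatment):
    """Standardized mean difference for one covariate across the two arms."""
    pooled = sqrt((stdev(control) ** 2 + stdev(treatment) ** 2) / 2)
    return (mean(treatment) - mean(control)) / pooled if pooled else 0.0

# Hypothetical pre-experiment covariate per user (e.g. prior sessions);
# values are illustrative, not from the question.
control = [3, 5, 4, 6, 5, 4]
treatment = [4, 5, 5, 6, 4, 4]
balance = smd(control, treatment)
```

If randomizing at the user level while users cluster within accounts, the same check would be run on account-level covariates as well, since the unit of randomization should match the unit of analysis.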

Hints

Think about micro-conversion rates, time-to-complete, and event drop-offs; discuss unit-of-analysis alignment and balance checks; and consider t/Z tests, with nonparametric or Bayesian alternatives for small samples.
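For question 3, the required sample size for a two-proportion Z test on the step's micro-conversion rate follows from the standard normal-approximation formula; runtime is then sample size divided by eligible traffic. A minimal sketch, where the baseline rate, minimum detectable effect, and daily traffic are illustrative assumptions, not from the question:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Per-arm n for a two-sided two-proportion Z test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p2 - p1) ** 2)

# Assumed baseline step conversion 60%, aiming to detect +5pp:
n = sample_size_per_arm(0.60, 0.65)

# Assumed ~500 eligible users/day per arm -> expected runtime in days:
runtime_days = ceil(n / 500)
```

If the eligible population turns out to be very small, the Z test's normal approximation breaks down; Fisher's exact test (for the conversion rate), the Mann-Whitney U test (for time-to-complete), or a Bayesian posterior comparison would be the usual fallbacks, as the hints suggest.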

