You join a B2B SaaS firm with three public tiers (Basic $25/month, Pro $50/month, Enterprise = sales-quoted). The PM asks for a 2-week A/B test to raise Basic and Pro list prices by 20%. Design the experiment end-to-end and justify or reject a classic A/B design. Specifically:

1) Decide whether to run an A/B price test at the user level vs. a quasi-experimental pre/post design (e.g., geo rollout, staggered rollout, synthetic control, or difference-in-differences). State clear pros and cons, including contamination from users seeking lower prices (cross-bucket opt-out), fairness/brand risk, and organizational overhead.

2) Define the success-metric hierarchy and decision rule: primary = incremental LTV per acquired customer over a fixed horizon; secondaries/guardrails = conversion rate, ARPU, churn at months 1–3, refund rate, support tickets, CAC payback, and revenue per visitor. Provide the exact business decision threshold (e.g., LTV uplift > 0 with 95% confidence, or positive expected value after CAC).

3) Determine the required runtime and sample size so that churn and early retention are observable; show why 2 weeks is insufficient and propose a minimal viable schedule (e.g., an onboarding month plus at least one churn observation window). Include your MDE assumptions, traffic constraints, unit of randomization, bucketing persistence, and plan for sequential looks (alpha spending or Bayesian stopping).

4) Specify instrumentation and analysis: eligibility/exposure rules for visitors vs. signed-up users, identity resolution to prevent bucket switching, outlier handling for discounts/coupons, seasonality controls, and a feature-freeze or feature-flag strategy to avoid confounding. Describe the model you will use to estimate LTV during the experiment (e.g., survival + ARPU, or cohort-based difference-in-differences) and how you'll handle right-censoring.
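A candidate answering 3) could ground the "2 weeks is insufficient" claim with a quick power calculation. A minimal sketch for a two-proportion z-test, assuming a hypothetical 4% baseline visitor-to-paid conversion and a 0.6 pp absolute MDE (both numbers illustrative, not given in the prompt):

```python
import math
from statistics import NormalDist


def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm to detect p1 vs. p2 with a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = NormalDist().inv_cdf(power)           # quantile for target power
    p_bar = (p1 + p2) / 2                       # pooled proportion under H0
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)


# Illustrative: 4% baseline conversion, price increase assumed to drop it to 3.4%
n = n_per_arm(0.04, 0.034)
```

With these illustrative inputs the requirement is roughly 15,500 visitors per arm. For a low-traffic B2B funnel that alone can take many weeks, before any churn observation window even opens, which is one quantitative argument against the 2-week plan.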
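For the sequential-looks plan in 3), a standard choice is O'Brien–Fleming-type boundaries. A sketch using the well-known sqrt(K/k) approximation for K equally spaced interim looks (slightly conservative relative to the exact spending function; the function name is my own):

```python
from statistics import NormalDist


def obf_boundaries(k_looks: int, alpha: float = 0.05) -> list[float]:
    """Approximate O'Brien-Fleming z-boundaries for k_looks equally spaced looks.

    The final boundary equals the fixed-sample critical value; earlier looks
    require progressively stronger evidence, preserving overall alpha.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return [z * (k_looks / k) ** 0.5 for k in range(1, k_looks + 1)]
```

For four looks this yields boundaries near 3.92, 2.77, 2.26, 1.96: peeking early is allowed, but early stopping demands a much larger z-statistic.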
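For the LTV model in 4), one way to handle right-censoring is a discrete-time (monthly) Kaplan–Meier-style retention curve combined with ARPU: LTV over a horizon is ARPU times the sum of monthly survival probabilities. A sketch under simplifying assumptions (constant ARPU, churn recorded at month granularity; function names and the event-tuple shape are mine):

```python
def monthly_survival(events: list[tuple[int, bool]], horizon: int) -> list[float]:
    """Kaplan-Meier-style survival per month from right-censored data.

    events: one (duration_months, churned) tuple per customer; censored
    customers (still active at last observation) carry churned=False.
    """
    surv, s = [], 1.0
    for m in range(1, horizon + 1):
        at_risk = sum(1 for d, _ in events if d >= m)       # still observed at month m
        churned = sum(1 for d, c in events if d == m and c)  # churn events at month m
        if at_risk:
            s *= 1 - churned / at_risk                       # discrete-time hazard update
        surv.append(s)
    return surv


def ltv(arpu: float, surv: list[float]) -> float:
    """Expected revenue per customer over the horizon: ARPU x expected active months."""
    return arpu * sum(surv)
```

Because censored customers stay in the risk set until their observed duration and then drop out without counting as churn, partially observed cohorts contribute information without biasing the hazard, which is exactly the right-censoring problem the prompt asks about.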
5) Outline a rollout/escalation plan if results are positive (e.g., a geo or account-segment ramp), and explain how you'll audit for long-run effects and customer backlash.

In the same assignment, a growth team also wants to test a signup CTA with two factors: color {red, blue} and position {top, bottom}. Design a 2×2 factorial test: specify hypotheses for main and interaction effects, randomization, traffic allocation, power/MDE per effect, multiple-testing correction, analysis of click-through and downstream signup, device/traffic stratification, and when you would prefer chained (sequential) tests over a full factorial. Explain how splitting traffic across added variants shrinks per-arm sample size and inflates the variance of each estimate, and how you would extend test duration to maintain power.
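In the 2×2 design, main and interaction effects on click-through can be estimated directly from per-cell conversion counts. A sketch assuming independent binomial cells and a normal-approximation z-test on the interaction contrast (the function and dict shapes are my own):

```python
from statistics import NormalDist


def factorial_2x2(cells: dict[tuple[str, str], tuple[int, int]]) -> dict[str, float]:
    """Main effects and interaction from a 2x2 test.

    cells: {(color, position): (conversions, n)} for the four arms.
    The interaction contrast is the difference-of-differences of cell rates.
    """
    p = {k: conv / n for k, (conv, n) in cells.items()}
    var = {k: p[k] * (1 - p[k]) / n for k, (_, n) in cells.items()}
    (c1, c2) = sorted({c for c, _ in cells})
    (q1, q2) = sorted({q for _, q in cells})
    main_color = ((p[c1, q1] + p[c1, q2]) - (p[c2, q1] + p[c2, q2])) / 2
    main_pos = ((p[c1, q1] + p[c2, q1]) - (p[c1, q2] + p[c2, q2])) / 2
    interaction = (p[c1, q1] - p[c1, q2]) - (p[c2, q1] - p[c2, q2])
    se_int = sum(var.values()) ** 0.5           # SE of the contrast (independent cells)
    z = interaction / se_int if se_int else 0.0
    p_two = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value for interaction
    return {"main_color": main_color, "main_pos": main_pos,
            "interaction": interaction, "z_interaction": z, "p_interaction": p_two}
```

Note that the interaction contrast sums the variance of all four cells, so it is powered far below either main effect at the same traffic, which is the quantitative core of the "added variants increase variance" point and of the chained-tests vs. full-factorial trade-off.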