Personalized Product Ranking for a Fintech Home Page — End-to-End Design
Context
You are designing a personalized ranking system for a fintech app’s home page. The app offers multiple products (e.g., high-yield savings, credit cards, personal loans, brokerage). Only eligible products should be shown, and certain regulatory and fairness constraints apply. The goal is to maximize long-term value while preventing customer harm and limiting underwriting risk.
Task
Describe an end-to-end solution that addresses the following. Be specific with formulas, thresholds, and trade-offs suitable for a production launch.

Objective and Guardrails
- Define the primary optimization objective as a revenue- or CLV-weighted conversion objective at the list level (top-K items), including discounting for delayed outcomes (an illustrative formulation follows this list).
- Specify guardrail metrics that strictly constrain ineligible impressions, underwriting risk, and customer harm.
- Explicitly state how you would weight click, application start, approval, and funded/activated events, including treatment of delay.
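
For illustration only, one possible shape of such a list-level objective is a position-discounted, CLV-weighted expected value over the top-K slots; the event weights w_e, delay discount gamma, and K below are placeholder choices, not prescribed values.

```latex
% Illustrative list-level objective; w_e, \gamma, and K are placeholders, not prescribed values.
\max_{\pi}\;\mathbb{E}_{u}\!\left[\sum_{k=1}^{K}\frac{1}{\log_2(k+1)}
  \sum_{e\,\in\,\{\text{click},\ \text{app start},\ \text{approval},\ \text{funded}\}}
  w_e\,\gamma^{\,d_e}\,p_\theta\!\left(e \mid u,\, i_k^{\pi}\right)\right],
\qquad w_{\text{funded}} \propto \widehat{\mathrm{CLV}}\!\left(u,\, i_k^{\pi}\right)
```

Here d_e is the expected delay of event e (e.g., in months), gamma < 1 discounts delayed outcomes, and a typical ordering puts w_click well below w_app start, w_approval, and w_funded; the concrete values are launch decisions the answer should justify.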

Data and Features
- Enumerate features by category: eligibility/suitability (e.g., geo, KYC completion, credit profile availability), user behavior (short- and long-term), session context, product attributes, and real-time events (an illustrative feature registry sketch follows this list).
- Identify data that must not be used due to fairness/compliance constraints, and any conditions under which sensitive data may be used (e.g., user consent).
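
As a purely illustrative example (all field names below are hypothetical), a feature registry can make the categories and the fairness/compliance exclusions explicit and enforceable:

```python
# Hypothetical feature registry sketch; names and groupings are illustrative, not prescriptive.
FEATURE_GROUPS = {
    "eligibility_suitability": ["geo_state", "kyc_complete", "credit_profile_available"],
    "user_behavior_short_term": ["sessions_7d", "savings_page_views_7d", "card_offer_clicks_7d"],
    "user_behavior_long_term": ["tenure_days", "products_held", "avg_monthly_deposits_90d"],
    "session_context": ["hour_of_day", "entry_point", "device_type"],
    "product_attributes": ["apr", "apy", "annual_fee", "min_balance"],
    "real_time_events": ["last_event_type", "seconds_since_last_event"],
}

# Never usable as model inputs for fairness/compliance reasons, regardless of availability.
PROHIBITED_FEATURES = {"race", "religion", "gender", "protected_class_proxy_score"}

# Usable only under explicit, logged user consent.
CONSENT_GATED_FEATURES = {"linked_external_account_transactions"}

def assert_feature_policy(feature_names: set[str]) -> None:
    """Fail fast if a training or serving feature set violates the policy above."""
    banned = feature_names & PROHIBITED_FEATURES
    if banned:
        raise ValueError(f"Prohibited features present: {sorted(banned)}")
```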

Model Architecture
- Describe a two-stage approach (candidate generation vs. ranking), including the objective choice (e.g., listwise objectives such as LambdaRank/soft-NDCG vs. pairwise objectives).
- Explain probability calibration and a constrained re-ranker that enforces eligibility, product quotas, and per-user suitability (a minimal serving-path sketch follows this list).
- Provide latency and throughput budgets appropriate for a mobile home page.
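
A minimal sketch of the serving path, assuming a retrieval stage upstream, a ranker whose probabilities have been calibrated (e.g., isotonic or Platt scaling on a recent holdout), and hard eligibility plus per-category quota constraints; all names, the list size, and the quota are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    product_id: str
    p_funded: float      # calibrated probability from the ranking model
    expected_clv: float  # predicted customer lifetime value if funded
    eligible: bool       # output of the eligibility/suitability checks
    category: str        # e.g., "savings", "credit", "lending", "brokerage"

def constrained_rerank(candidates: list[Candidate], k: int = 4,
                       max_per_category: int = 2) -> list[Candidate]:
    """Greedy re-rank: drop ineligible items (hard filter, never a soft penalty),
    cap the per-category quota, and order by calibrated expected value."""
    eligible = [c for c in candidates if c.eligible]
    eligible.sort(key=lambda c: c.p_funded * c.expected_clv, reverse=True)
    chosen: list[Candidate] = []
    per_category: dict[str, int] = {}
    for c in eligible:
        if per_category.get(c.category, 0) >= max_per_category:
            continue
        chosen.append(c)
        per_category[c.category] = per_category.get(c.category, 0) + 1
        if len(chosen) == k:
            break
    return chosen
```

On budgets, a mobile home page typically needs the full retrieve, rank, and re-rank path to finish within a low three-digit millisecond p99; the exact split across stages is one of the trade-offs the answer should state.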

Exploration vs. Exploitation with Safety
- Propose an exploration strategy (e.g., a contextual bandit or epsilon-greedy) that never violates eligibility or risk guardrails (an eligibility-gated sampler sketch follows this list).
- Address cold-start for new users and new products.
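
One way to keep exploration inside the guardrails is to sample only from a list that has already passed eligibility and risk filtering; the epsilon value and swap rule below are placeholders for whatever bandit or epsilon-greedy scheme the answer proposes:

```python
import random

def explore_rank(ranked_safe: list[str], k: int = 4, epsilon: float = 0.05,
                 rng: random.Random | None = None) -> list[str]:
    """Epsilon-greedy over product ids that are ALREADY eligibility- and risk-filtered.

    With probability epsilon, one lower-ranked (still safe) item is promoted into the
    last top-k slot; otherwise the exploit ordering is served unchanged.
    """
    rng = rng or random.Random()
    top = list(ranked_safe[:k])
    tail = ranked_safe[k:]
    if top and tail and rng.random() < epsilon:
        top[-1] = rng.choice(tail)  # still eligible by construction of ranked_safe
    return top
```

For cold-start, the same gating applies: a new product can inherit a prior from similar products and a new user can fall back to a popularity or rules-based ordering until enough signal accrues; both are assumptions the answer should make explicit.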

Offline Evaluation
- Define time-based splits and leakage checks.
- List metrics (e.g., NDCG@K, ERR, CVR@K, expected revenue@K) and describe calibration and stability checks.
- Explain how you would address position bias (e.g., IPS/SNIPS or randomized swaps); an illustrative SNIPS estimator follows this list.
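
For the position-bias point, a self-normalized IPS (SNIPS) estimator over logged impressions is one concrete option; it assumes logged propensities (e.g., from randomized position swaps), and the clipping constant below is a placeholder:

```python
import numpy as np

def snips_value(rewards: np.ndarray, logging_propensity: np.ndarray,
                target_propensity: np.ndarray, clip: float = 10.0) -> float:
    """Self-normalized inverse propensity estimate of a new policy's value.

    rewards: observed reward per logged impression (e.g., funded-conversion value)
    logging_propensity: probability the logging policy showed the item in that slot
    target_propensity: probability the candidate policy would show it there
    """
    w = np.clip(target_propensity / logging_propensity, 0.0, clip)
    return float(np.sum(w * rewards) / np.sum(w))
```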

Online Experimentation
- Present an A/B test plan with pre-registered success and guardrail metrics (e.g., approval rate, complaint rate, bad-rate proxy, drop-off in critical flows), sample size, duration, and sequential testing considerations (a sample-size sketch follows this list).
- Discuss when to use interleaving vs. full-funnel tests and how to attribute downstream approvals with long delays.
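
The pre-registered sample size can be backed by a standard two-proportion power calculation; the baseline rate and minimum detectable effect below are placeholder assumptions, not recommendations:

```python
from scipy.stats import norm

def two_proportion_sample_size(p1: float, p2: float, alpha: float = 0.05,
                               power: float = 0.8) -> int:
    """Approximate per-arm sample size for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(round((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2))

# Placeholder assumptions: 2.0% baseline funded-conversion rate, 0.2pp absolute MDE.
print(two_proportion_sample_size(0.020, 0.022))  # roughly 80k users per arm
```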

Monitoring and Feedback Loops
- Propose drift detection, eligibility bug detection, fairness dashboards, and rollback criteria (an illustrative PSI-based drift check follows this list).
- Provide a fallback ranking strategy when signals are sparse.
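
A population stability index (PSI) check per feature and on the served score distribution is one concrete drift detector; the bin count and the common rule-of-thumb alert threshold of PSI > 0.2 are illustrative defaults, not requirements:

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10, eps: float = 1e-6) -> float:
    """Population Stability Index between a reference sample and a current sample."""
    edges = np.unique(np.quantile(reference, np.linspace(0, 1, bins + 1)))
    clipped = np.clip(current, edges[0], edges[-1])  # keep out-of-range values in the end bins
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(clipped, bins=edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, eps, None)
    cur_frac = np.clip(cur_frac, eps, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Example alerting rule (illustrative): page the on-call if PSI on the served score
# exceeds 0.2 for two consecutive days, and roll back on any eligibility violation.
```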

Include concrete thresholds, formulas, and trade-offs you would set for launch.