##### Scenario
Preparing for a role at Fannie Mae that involves collaboration with QA teams and product stakeholders.
##### Question
What does Fannie Mae do and how does its mission influence the way you would approach your work here? Compare UI testing and backend testing. What unique goals and tools does each require? How would you design test cases for a UI change to ensure both functional integrity and user-experience quality?
##### Hints
Show you know the firm, testing layers, and structured test-case design.
Quick Answer: This question evaluates mission-aware, cross-functional collaboration, QA test-design skill, and a data-science candidate's technical understanding of UI versus backend testing. It falls under the Behavioral & Leadership category and the software testing/quality-assurance domain.
##### Solution
## 1) Fannie Mae’s Mission and How It Shapes My Approach
- What Fannie Mae does:
  - A government-sponsored enterprise (GSE) that provides liquidity, stability, and affordability to the U.S. housing market by purchasing mortgages from lenders, guaranteeing mortgage-backed securities, and setting risk/underwriting standards.
- How the mission shapes my work:
  - Safety and soundness first: Prioritize data quality, model risk management, and robust validation. For any analytics or UI that influences credit decisions or customer communications, I’d insist on clear acceptance criteria, traceability, and auditable logs.
  - Fairness and access: Incorporate bias checks, accessibility requirements (e.g., WCAG/Section 508), and plain-language UX, especially for features that affect consumer outcomes.
  - Compliance and consumer protection: Build tests that verify regulatory constraints (e.g., disclosures, adverse action messaging), PII protection, and least-privilege access.
  - Reliability at scale: Design performance, resilience, and monitoring checks to protect liquidity and continuity of service.
  - Explainability: Favor transparent UI copy and model outputs with clear explanations, so customers and auditors can understand decisions.
## 2) UI Testing vs. Backend Testing
- Goals
  - UI testing focuses on:
    - Correct rendering and interaction flows (clicks, forms, navigation, responsiveness)
    - Usability and accessibility (keyboard-only, screen reader, contrast)
    - Visual consistency and cross-browser/device compatibility
    - Client-side performance (initial load, interaction latency)
  - Backend testing focuses on:
    - API correctness (status codes, schemas, contracts, idempotency)
    - Data integrity and consistency (CRUD, transactions, eventual consistency)
    - Reliability and performance (throughput, latency, timeouts, retries)
    - Security and authorization (authN/Z, input validation, injection)
    - Integration across services, queues, and data stores
- Typical Tools
  - UI: Playwright or Cypress (E2E automation), Selenium (legacy breadth), Axe/Lighthouse (accessibility/performance), Percy/Applitools (visual regression), BrowserStack/Sauce Labs (cross-browser/device), web analytics debuggers (event tagging).
  - Backend: Postman/Newman or REST Assured (API tests), pytest + requests (Pythonic API testing), Pact (contract testing), WireMock (stubs), k6/JMeter (load/perf), OWASP ZAP (DAST), SQL assertions/dbt tests (data validation), Kafka test harnesses (event pipelines).
- How they complement each other
  - UI validates the customer journey; backend guarantees correctness, security, and scale. A strong strategy links UI steps to backend assertions (e.g., a UI filter triggers an API call that returns the expected records, logged with proper audit metadata), as in the sketch below.
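A minimal sketch of that UI-to-backend linkage, using Playwright's Python API. The dashboard URL, selectors, and the `forbearance` query parameter and `loans` payload key are hypothetical stand-ins for the real application:

```python
# Sketch: link a UI action to a backend assertion with Playwright (Python).
# The URL, selectors, and query parameter are hypothetical.
from playwright.sync_api import sync_playwright, expect

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://dashboard.internal.example/loans")

    # Apply the UI filter and capture the API call it triggers.
    with page.expect_response(lambda r: "forbearance=Active" in r.url) as resp_info:
        page.get_by_label("Forbearance Status").select_option("Active")

    response = resp_info.value
    assert response.ok, f"API returned {response.status}"
    payload = response.json()

    # The UI row count must match the API payload count.
    rows = page.locator("table#loans tbody tr")
    expect(rows).to_have_count(len(payload["loans"]))
    browser.close()
```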
## 3) Designing Test Cases for a UI Change (Functional + UX)
Assume: We’re adding a new "Forbearance Status" filter and sortable column to a loan-servicing web dashboard used by call-center agents. The UI calls an internal API and displays loan lists with pagination.
### A. Acceptance Criteria (make them testable)
- Filter values: {Active, Pending, None}. Default = None (no filter).
- Applying a filter updates results within 500 ms client-side after the API response arrives (< 2 s end-to-end at p95).
- Sort by Forbearance Status is stable and toggles ascending/descending.
- User’s previous filter persists within session.
- Accessibility: Focus order is logical; filter is accessible via keyboard; labels have programmatic names; color contrast ≥ 4.5:1.
- Authorization: Agents only see loans they are entitled to view; PII masked per role.
- Analytics: Event "filter_applied" captures user_id (hashed), filter_value, and result_count. No raw PII in analytics.
Map each acceptance criterion to at least one test (traceability) to ensure coverage.
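One lightweight way to enforce that traceability is to tag each test with the criteria it covers and assert full coverage at the end of the run. A pytest sketch; the `ac` marker and `AC-n` IDs are hypothetical conventions:

```python
# Traceability sketch: tag each test with the acceptance criteria it covers.
# Register the custom "ac" marker (pytest.ini / pyproject.toml) to avoid warnings.
import pytest

@pytest.mark.ac("AC-1", "AC-3")
def test_filter_updates_results_and_sort_is_stable():
    ...

@pytest.mark.ac("AC-5")
def test_filter_is_keyboard_accessible():
    ...

def test_every_criterion_is_covered(request):
    # In practice, load the full criteria list from the spec document.
    spec = {"AC-1", "AC-3", "AC-5"}
    covered = set()
    for item in request.session.items:
        marker = item.get_closest_marker("ac")
        if marker:
            covered.update(marker.args)
    assert spec <= covered, f"Uncovered criteria: {spec - covered}"
```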
### B. Functional Test Design
- Positive cases
  - Apply each filter value and verify result sets match backend API and database truth.
  - Sort toggles correctly and remains stable when multiple items share the same status.
  - Pagination remains correct after filtering and sorting.
  - Session persistence: Refresh page; prior filter re-applies.
- Negative/edge cases
  - No results for a valid filter (empty state UX, no errors).
  - Invalid query parameters (UI should sanitize; API returns 400; UI shows graceful message).
  - Large datasets (10k+ loans): pagination and scrolling behave; no memory leaks.
  - Latency/timeouts: Simulate 2–5 s API delays; show spinner; allow cancel/retry; ensure no double submissions.
  - Intermittent network failures: Verify retries/backoff and user messaging.
  - Concurrency: Changing the filter during an in-flight request cancels the prior request (no stale results).
- Data integrity checks (see the cross-validation sketch below)
  - Contract test (Pact): UI expects schema {loan_id, masked_borrower_name, forbearance_status, …}.
  - Cross-validate result counts: UI count equals API payload count equals DB query (in lower envs with seeded data).
  - Audit fields present in API logs (request_id, user role, timestamp).
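A sketch of the count cross-validation and schema check with pytest + requests. The endpoint, the exact schema, and the seeded in-memory database standing in for the lower environment are all hypothetical:

```python
# Cross-validation sketch: API payload vs. database truth in a seeded env.
# Endpoint, schema details, and DB contents are hypothetical.
import sqlite3
import pytest
import requests
from jsonschema import validate

LOAN_SCHEMA = {
    "type": "object",
    "required": ["loan_id", "masked_borrower_name", "forbearance_status"],
    "properties": {
        "loan_id": {"type": "string"},
        "masked_borrower_name": {"type": "string"},
        "forbearance_status": {"enum": ["Active", "Pending", "None"]},
    },
}

@pytest.fixture
def db_conn():
    # Hypothetical stand-in for the seeded lower-env database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE loans (loan_id TEXT, forbearance_status TEXT)")
    conn.executemany(
        "INSERT INTO loans VALUES (?, ?)",
        [("L-1", "Active"), ("L-2", "Active"), ("L-3", "Pending")],
    )
    return conn

def test_active_filter_matches_db(db_conn):
    resp = requests.get(
        "https://api.internal.example/loans",
        params={"forbearance": "Active"},
        timeout=10,
    )
    assert resp.status_code == 200
    loans = resp.json()["loans"]

    # Every record honors the contract and the filter.
    for loan in loans:
        validate(instance=loan, schema=LOAN_SCHEMA)
        assert loan["forbearance_status"] == "Active"

    # API count must equal database truth.
    cur = db_conn.execute(
        "SELECT COUNT(*) FROM loans WHERE forbearance_status = ?", ("Active",)
    )
    assert len(loans) == cur.fetchone()[0]
```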
### C. Accessibility and UX Quality Tests
- Accessibility (use Axe/Lighthouse plus manual checks; a keyboard-only sketch follows this list):
  - Keyboard-only navigation: Tab order, focus ring visibility, Enter/Space activation.
  - Screen reader labels: Filter announced with name, state, and instructions.
  - Contrast and color reliance: Status not conveyed by color alone.
- Usability heuristics
  - Clarity: Labels and help text in plain language.
  - Error/empty states: Actionable guidance; no jargon.
- UX success metrics (set targets and measure in pre-prod and canary):
  - Task success rate: ≥ 95% of agents can apply a filter and locate a loan.
  - Time-on-task: Median ≤ 10 s to find the target loan post-filter.
  - Interaction cost: ≤ 3 clicks to apply filter and sort.
  - Satisfaction: Quick post-task rating or SUS-lite in UAT.
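The keyboard-only check as a Playwright (Python) sketch. The URL, the control label, and the assumption that the widget exposes `aria-expanded` are hypothetical:

```python
# Keyboard-only accessibility sketch (Playwright, Python). Selectors hypothetical.
from playwright.sync_api import sync_playwright, expect

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://dashboard.internal.example/loans")

    # Tab until the filter control receives focus; fail fast after 20 stops.
    filter_control = page.get_by_label("Forbearance Status")
    for _ in range(20):
        page.keyboard.press("Tab")
        if filter_control.evaluate("el => el === document.activeElement"):
            break
    else:
        raise AssertionError("Filter not reachable by keyboard alone")

    # Activate with the keyboard alone and confirm state is exposed to AT.
    page.keyboard.press("Enter")
    expect(filter_control).to_be_focused()
    assert filter_control.get_attribute("aria-expanded") == "true"
    browser.close()
```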
### D. Performance, Security, and Privacy
- Performance
  - p95 end-to-end latency ≤ 2 s after applying the filter (k6 load at realistic RPS; see the load-test sketch after this section).
  - Frontend bundle impact ≤ +20 KB gzipped; no layout shifts (CLS near 0).
- Security/privacy
  - Authorization enforced (attempting to access out-of-scope loans returns 403; UI masks PII per role).
  - Input sanitization: Prevent XSS via filter value; verify CSP headers.
  - Analytics events exclude PII and follow data retention policies.
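The list above names k6/JMeter; as a Python-flavored equivalent, here is a Locust sketch that drives the filter endpoint with a realistic agent mix (host, endpoint, and task weights are hypothetical). Run it with `locust -f loadtest.py --host https://api.internal.example` and compare the reported p95 against the ≤ 2 s SLO:

```python
# Load-test sketch with Locust (a Python alternative to k6/JMeter).
# Endpoint and think-time values are hypothetical.
from locust import HttpUser, task, between

class CallCenterAgent(HttpUser):
    wait_time = between(1, 5)  # simulate agent think time between actions

    @task(3)
    def filter_active(self):
        # "name" groups requests so latency percentiles report per endpoint.
        self.client.get(
            "/loans", params={"forbearance": "Active"}, name="/loans?forbearance"
        )

    @task(1)
    def unfiltered_list(self):
        self.client.get("/loans", name="/loans")
```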
### E. Cross-Browser/Device and Resilience
- Browsers: Chrome/Edge/Firefox (current and two prior versions); Safari where applicable.
- Responsive behavior: Columns collapse gracefully; filter remains accessible.
- Feature flags: Roll out by cohort; enable quick rollback.
- Monitoring: Create dashboards for filter_applied rate, error rate, p95 latency; alert thresholds set.
### F. Example Test Cases (concise)
- TC1 (Positive): Select Active → API request includes filter=Active; response count N; UI shows N; sample of records all status=Active.
- TC2 (Sort): Click column header twice → order toggles asc/desc; stable sort for identical statuses.
- TC3 (Empty State): Select Pending when no data → UI shows "No loans match" with guidance; no errors.
- TC4 (Latency): Inject 3 s delay → spinner visible; no UI freeze; cancel works; results correct when returned.
- TC5 (Accessibility): Keyboard-only can open/select filter; screen reader announces state; contrast passes.
- TC6 (Auth): Login as restricted agent → attempting to view out-of-scope loan returns 403; UI masks PII.
- TC7 (Analytics): Applying Active emits event with filter_value=Active and correct result_count; no PII.
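TC4 as a runnable sketch, injecting the delay through Playwright's route interception. The route pattern, spinner role, and table selector are hypothetical:

```python
# TC4 sketch: inject a 3 s backend delay via Playwright route interception.
# Route pattern and spinner/table selectors are hypothetical.
import time
from playwright.sync_api import sync_playwright, expect

def slow_backend(route):
    time.sleep(3)       # hold the request to simulate a slow API
    route.continue_()   # then let it through unchanged

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.route("**/api/loans*", slow_backend)
    page.goto("https://dashboard.internal.example/loans")

    page.get_by_label("Forbearance Status").select_option("Active")

    # The spinner must appear while the request is in flight...
    expect(page.get_by_role("progressbar")).to_be_visible()
    # ...and correct results must render once the delayed response lands.
    expect(page.get_by_role("progressbar")).to_be_hidden(timeout=10_000)
    expect(page.locator("table#loans tbody tr").first).to_be_visible()
    browser.close()
```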
### G. Collaboration and Governance
- Definition of Done: All mapped tests pass; accessibility checks clean; performance SLOs met; analytics validated; rollback plan documented.
- Traceability: Link user stories → acceptance criteria → tests → monitoring alerts.
- Risk review: QA + Product + InfoSec sign-offs for privacy and accessibility before enabling flag in production.
### H. Guardrails and Validation in Production
- Canary release: Compare baseline vs. canary on error rate, p95 latency, and task success proxies (e.g., filter re-applies per session).
- Automated rollback: Trigger on sustained error rate > 2% or p95 latency > 2.5 s for 10 min (expressed as a simple predicate in the sketch below).
- Post-release review: Analyze events for anomalies; confirm no shifts in user outcomes that might indicate fairness or access issues.
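For illustration only (the real trigger would live in the deployment platform's alerting), the rollback condition can be written as a predicate over a 10-minute window of metric samples; the metric feed is hypothetical:

```python
# Illustrative rollback predicate over a 10-minute metrics window.
# Thresholds come from the guardrail above; the metric feed is hypothetical.
from dataclasses import dataclass

@dataclass
class WindowMetrics:
    error_rate: float     # fraction of failed requests, e.g. 0.021 = 2.1%
    p95_latency_s: float  # 95th-percentile latency in seconds

def should_roll_back(window: list[WindowMetrics]) -> bool:
    """True only if every sample in the window breaches a guardrail (sustained)."""
    return bool(window) and all(
        m.error_rate > 0.02 or m.p95_latency_s > 2.5 for m in window
    )

# Example: one sample per minute from the canary dashboard.
samples = [WindowMetrics(error_rate=0.031, p95_latency_s=2.1)] * 10
assert should_roll_back(samples)
```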
By grounding test design in Fannie Mae’s mission of safety, fairness, compliance, and reliability, we ensure the UI change is not only functionally correct but also supports responsible, auditable, and accessible experiences at scale.