##### Question
Give a concise introduction to your dissertation. Which aspect of your dissertation are you most proud of and why? Describe a time you disagreed with your supervisor and how you handled it. Describe a time you agreed with your supervisor and the outcome.
Quick Answer: This question evaluates research depth alongside interpersonal skills: the dissertation prompts test technical understanding in machine learning, while the supervision prompts test communication, collaboration, and judgment under disagreement.
##### Solution
# How to Answer Effectively
Use brief, structured narratives that show clarity, impact, collaboration, and good judgment. For the project prompts, the STAR/CAR format works well:
- Situation/Context: What was the problem and why it mattered.
- Task/Action: What you decided/did and your role.
- Result: Quantified outcomes, learning, and next steps.
Aim for one paragraph per prompt. Focus on outcomes, metrics, and what you learned.
## 1) Concise Dissertation Introduction
Structure your answer:
1) One-line problem and importance
2) Your thesis claim (what you did that’s novel)
3) Approach and data (methods, scale)
4) Results with numbers (offline and, if relevant, deployment)
5) Impact and your unique role
Template
- Problem: "I studied X because Y users/stakeholders faced Z impact."
- Novelty: "My thesis shows A (new idea) that achieves B."
- Methods/Data: "I designed M and used dataset D (size), overcoming constraint C."
- Results: "We improved metric by P% vs baseline and reduced cost/latency by Q%."
- Role/Impact: "I led [contribution], enabling [adoption/insight]."
Example (ML-oriented)
- "My dissertation focused on reducing label dependence in medical imaging. I developed a self-supervised pretraining method that halves the labeled data needed to reach target performance. Using 1.2M unlabeled chest X-rays and 40k labeled images, I combined masked-image modeling with contrastive learning and a curriculum sampler to handle class imbalance. Compared with a supervised baseline, AUC improved by 3.8 points while labeled data requirements fell by 55%, and inference latency remained under 50 ms on T4 GPUs. I led model design, training pipelines, and a reproducibility suite, which enabled two clinical research groups to replicate our results."
Why this works: It’s specific, quantifies impact, highlights your role, and shows awareness of scale and constraints relevant to ML engineering.
## 2) Aspect You’re Most Proud Of (and Why)
Pick one dimension and tie it to measurable impact or engineering rigor.
- Potential angles: originality of method, reproducibility, deployment-readiness, data-centric rigor, responsible AI safeguards, or cross-functional collaboration.
Example
- "I’m most proud of the end-to-end reproducibility and fairness checks I built. I containerized training with deterministic seeds, data versioning (DVC), and unit tests for data transforms, which cut onboarding time for new collaborators from two weeks to two days. I also added calibration and subgroup performance reporting; after fixing a label leakage issue and rebalancing the training curriculum, worst-group AUC improved by 6 points without sacrificing overall AUC. This made our work more trustworthy and easier to extend."
Why this works: It shows engineering maturity, attention to equity, and measurable outcomes beyond a single metric.
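The deterministic-seeding and data-transform testing mentioned in the example can be sketched in a few lines. This is a minimal, framework-free illustration using only the Python standard library (a real pipeline would also seed NumPy, PyTorch, and CUDA, and use DVC for data versioning); the `transform` and `fingerprint` helpers are hypothetical stand-ins, not the project's actual code:

```python
import random
import hashlib

def seed_everything(seed: int) -> None:
    """Seed every randomness source the pipeline touches.
    (A real project would also seed numpy, torch, and CUDA generators.)"""
    random.seed(seed)

def transform(batch):
    """Toy data transform: shuffle, then normalize to [0, 1]."""
    shuffled = batch[:]
    random.shuffle(shuffled)
    hi = max(shuffled)
    return [x / hi for x in shuffled]

def fingerprint(values) -> str:
    """Hash the transform output so a unit test can assert determinism."""
    payload = ",".join(f"{v:.6f}" for v in values).encode()
    return hashlib.sha256(payload).hexdigest()

# Determinism check: the same seed must yield an identical fingerprint.
seed_everything(42)
run_a = fingerprint(transform([3, 1, 4, 1, 5, 9]))
seed_everything(42)
run_b = fingerprint(transform([3, 1, 4, 1, 5, 9]))
assert run_a == run_b
```

A unit test built this way catches silent nondeterminism in data transforms early, which is exactly what makes results reproducible for new collaborators.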
## 3) Disagreement With Supervisor: Handling and Outcome
Goal: Show respectful dissent, data-driven decision-making, and learning.
Structure
- Situation: The decision at stake and constraints (compute, timeline, risk).
- Action: How you proposed alternatives, de-risked with experiments, and aligned on criteria.
- Result: What happened, metrics, and what you learned.
Example
- Situation: "My supervisor preferred deploying a transformer for our anomaly detection MVP. I was concerned about inference cost and latency on edge devices."
- Action: "I suggested a two-track experiment with pre-agreed criteria: F1 ≥ baseline +2 points and P95 latency ≤ 30 ms on target hardware. I implemented a distilled transformer and a gradient-boosted trees baseline with feature hashing, ran ablations, and profiled both on the edge device."
- Result: "The GBDT achieved +1.8 F1 with 12 ms P95; the distilled transformer achieved +2.4 F1 but 48 ms P95. We shipped the GBDT for the MVP to meet latency SLAs, then revisited the transformer once quantization and kernel fusion brought its P95 to 26 ms, and rolled it out in a later release. I learned to ‘disagree, commit, and iterate’ using lightweight, decision-focused experiments."
Why this works: It’s respectful, quantitative, and demonstrates production-minded trade-offs.
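The "pre-agreed criteria" pattern from this story can be made concrete as a tiny evaluation gate. The sketch below is illustrative only: the thresholds mirror the example's criteria (F1 ≥ baseline + 2 points, P95 ≤ 30 ms), and the candidate numbers are the story's reported results, not real measurements:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    f1: float            # F1 score in points, same scale as baseline
    p95_latency_ms: float

def evaluate(c: Candidate, baseline_f1: float,
             min_f1_gain: float = 2.0, max_p95_ms: float = 30.0) -> dict:
    """Report each pre-agreed criterion separately so the trade-off is explicit."""
    return {
        "f1_gain_ok": c.f1 - baseline_f1 >= min_f1_gain,
        "latency_ok": c.p95_latency_ms <= max_p95_ms,
    }

baseline_f1 = 80.0  # hypothetical baseline score for illustration
gbdt = Candidate("gbdt", f1=81.8, p95_latency_ms=12.0)
distilled = Candidate("distilled_transformer", f1=82.4, p95_latency_ms=48.0)

# Neither candidate clears both bars, which surfaces the trade-off:
# the GBDT misses the F1 target but fits the latency SLA (shipped for MVP);
# the transformer clears F1 but not latency (revisited after optimization).
print(evaluate(gbdt, baseline_f1))       # {'f1_gain_ok': False, 'latency_ok': True}
print(evaluate(distilled, baseline_f1))  # {'f1_gain_ok': True, 'latency_ok': False}
```

Writing the criteria down as code before running the experiment is what keeps the disagreement data-driven rather than opinion-driven.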
## 4) Agreement With Supervisor: Alignment and Outcome
Goal: Show you can align quickly on a sound plan and execute to results.
Structure
- Situation: The decision and success criteria.
- Action: How you collaborated to implement.
- Result: Concrete impact.
Example
- Situation: "We agreed that recall on rare classes was more valuable than slight precision gains for our triage system."
- Action: "We adopted focal loss, class weighting, and data augmentation; added human-in-the-loop relabeling for ambiguous cases; and set a monitoring dashboard for per-class metrics."
- Result: "Minority-class recall improved by 12 points with a 1.3-point precision drop, increasing overall F1 by 3.2 points. The triage queue caught 18% more true positives at the same review capacity."
Why this works: It shows principled prioritization aligned with business/user value and responsible monitoring.
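The focal-loss reweighting mentioned in the Action step can be sketched without any framework. This is the standard binary focal loss (the \(\alpha\)-balanced form from Lin et al.) in plain Python; the `gamma` and `alpha` values are common defaults, not the project's tuned settings:

```python
import math

def binary_focal_loss(p: float, y: int,
                      gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Alpha-balanced binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    p: predicted probability of the positive class; y: true label (0 or 1).
    The (1 - p_t)^gamma factor down-weights easy, confident examples."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))

# A confident correct prediction contributes almost nothing to the loss...
easy = binary_focal_loss(0.95, 1)
# ...while a hard positive (e.g. a rare class the model misses) dominates it.
hard = binary_focal_loss(0.10, 1)
assert hard > 100 * easy
```

This is why the technique boosts minority-class recall: gradient signal concentrates on the rare, hard examples instead of being drowned out by the abundant easy negatives.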
## Pitfalls to Avoid
- Vague claims without numbers or criteria.
- Overemphasis on theory with no path to deployment or measurement.
- Casting blame in disagreements; avoid loaded language.
- Long tangents; aim for 4–6 crisp sentences per prompt.
## Quick Prep Checklist
- Draft a 5-sentence dissertation intro using the template; include one metric and your unique role.
- Choose one ‘proudest aspect’ that demonstrates engineering rigor or real-world impact.
- Prepare one disagreement story and one agreement story using STAR, both with measurable outcomes and learnings.
- Timebox each to 60–120 seconds and practice aloud.