Explain LLM fine-tuning and generative models
Company: Google
Role: Software Engineer
Category: Machine Learning
Difficulty: medium
Interview Round: Technical Screen
## Machine Learning fundamentals (LLM / Generative AI track)
You are being interviewed for an ML role focused on LLMs and generative AI.
### Part A — LLM fine-tuning
1. What are common ways to adapt/fine-tune a pretrained LLM for a downstream task?
2. For each approach, explain **how it works**, **pros/cons**, and **when you would choose it**.
3. Discuss how practical constraints would influence your choice, for example:
- limited labeled data
- strict latency/cost constraints
- need for domain adaptation without forgetting general capabilities
- safety/alignment requirements
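One parameter-efficient method worth knowing cold for Part A is LoRA (low-rank adaptation): the pretrained weights stay frozen and only a low-rank update is trained. Below is a minimal NumPy sketch of the forward pass, with illustrative dimensions chosen for the example (not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained weight matrix (frozen during LoRA fine-tuning).
d_out, d_in, r = 8, 8, 2           # rank r << min(d_out, d_in)
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors; B starts at zero so the adapted model
# initially reproduces the pretrained model exactly.
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))
alpha = 4.0                        # LoRA scaling hyperparameter

def lora_forward(x):
    """y = W x + (alpha / r) * B A x — only A and B would be trained."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B = 0, the adapted output equals the frozen pretrained output.
assert np.allclose(lora_forward(x), W @ x)
```

The key trade-off to articulate: only `r * (d_in + d_out)` parameters per adapted matrix are trained (versus `d_in * d_out` for full fine-tuning), and since `W + (alpha/r) * B A` can be merged back into a single matrix after training, LoRA adds no inference latency.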
### Part B — Generative models
Explain and compare:
- **Autoencoders (AE)**
- **Variational Autoencoders (VAE)**
- **Vector-Quantized VAE (VQ-VAE)**
For each, cover the objective, training behavior, typical failure modes, and common use cases.
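Two mechanisms interviewers often probe in Part B are the VAE's reparameterization trick with its closed-form KL term, and the VQ-VAE's nearest-neighbor codebook lookup. A minimal NumPy sketch of both (the toy vectors and codebook are illustrative, not from any trained model):

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, I),
    keeping sampling differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over dims.
    This is the regularization term in the VAE's ELBO objective."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def quantize(z, codebook):
    """VQ-VAE bottleneck: snap a latent to its nearest codebook vector."""
    dists = np.sum((codebook - z) ** 2, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

# When the approximate posterior equals the prior N(0, I), KL is zero.
mu, log_var = np.zeros(4), np.zeros(4)
assert np.isclose(kl_to_standard_normal(mu, log_var), 0.0)
assert reparameterize(mu, log_var).shape == (4,)

# A latent near (1, 1) maps to the second codebook entry.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
idx, z_q = quantize(np.array([0.9, 1.1]), codebook)
assert idx == 1
```

These sketches also motivate the failure modes worth mentioning: if the KL term dominates, the VAE suffers posterior collapse (latents carry no information); in the VQ-VAE, the hard argmin is non-differentiable, which is why training needs a straight-through gradient estimator plus codebook/commitment losses, and unused codes lead to codebook collapse.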
Quick Answer: This question evaluates two areas. First, LLM adaptation techniques and their trade-offs, from full fine-tuning to parameter-efficient methods, weighed against practical constraints such as limited labeled data, latency/cost budgets, domain adaptation without catastrophic forgetting, and safety/alignment requirements. Second, the generative model families AE, VAE, and VQ-VAE, including their objectives, training behavior, typical failure modes, and common use cases.