Explain Transformer, GPT vs BERT, and PR metrics
Company: TikTok
Role: Machine Learning Engineer
Category: Software Engineering Fundamentals
Difficulty: medium
Interview Round: Technical Screen
Quick Answer: This question evaluates understanding of modern NLP architectures and evaluation metrics: the components of a Transformer block (multi-head self-attention, position-wise feed-forward layers, residual connections, and layer normalization); the key distinctions between GPT (decoder-only, autoregressive language modeling) and BERT (encoder-only, masked language modeling), including how each is pretrained and used; and the interpretation of precision and recall. It is commonly asked for Machine Learning Engineer roles because it probes both conceptual knowledge of architectures and pretraining paradigms and practical judgment about performance trade-offs, such as choosing a decision threshold to balance precision against recall.
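
The precision/recall trade-off mentioned above can be sketched with a small example. This is a minimal illustration with hypothetical scores and labels (none of these numbers come from the question itself): precision is TP / (TP + FP), recall is TP / (TP + FN), and lowering the decision threshold typically raises recall at the cost of precision.

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical model scores and ground-truth labels.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]

for threshold in (0.5, 0.2):
    preds = [1 if s >= threshold else 0 for s in scores]
    p, r = precision_recall(labels, preds)
    print(f"threshold={threshold}: precision={p:.2f} recall={r:.2f}")
# Lowering the threshold flags more examples as positive, which
# raises recall but admits more false positives, lowering precision.
```

In an interview answer, the key point is that the "right" threshold depends on the cost of errors: favor recall when missing positives is expensive (e.g. harmful-content detection), favor precision when false alarms are expensive.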