Scenario
You are designing a long-sequence text classification system under tight inference latency constraints (e.g., large documents or logs that must be classified quickly on GPU/CPU).
Task
- Part A: Contrast RNNs and Transformers in terms of architecture, parallelism, context handling, and training dynamics for long-sequence classification with strict latency budgets (a minimal architectural sketch follows this list).
- Part B: Describe bagging and boosting ensemble techniques, including their goals (variance vs. bias reduction) and when each is preferable under practical constraints.
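
For Part A, a minimal sketch, assuming PyTorch; the vocabulary size, embedding and hidden dimensions, class count, and module names are illustrative assumptions, not a prescribed design:

```python
# Illustrative contrast of the two model families from Part A (PyTorch assumed).
# Both classify a batch of token-ID sequences; all sizes below are made up.
import torch
import torch.nn as nn

VOCAB, EMB, HIDDEN, CLASSES, MAX_LEN = 10_000, 128, 256, 4, 4096


class RNNClassifier(nn.Module):
    """Recurrent baseline: one sequential step per token at inference, so
    latency grows with document length and cannot be parallelized over time."""

    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, CLASSES)

    def forward(self, x):                     # x: (batch, seq_len)
        _, (h_n, _) = self.rnn(self.emb(x))
        return self.head(h_n[-1])             # last hidden state summarizes the sequence


class TransformerClassifier(nn.Module):
    """Self-attention encoder: all positions are processed in parallel, but
    attention cost is quadratic in sequence length, so very long inputs must
    be truncated, chunked, or handled with an efficient-attention variant."""

    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.pos = nn.Embedding(MAX_LEN, EMB)  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=EMB, nhead=4,
                                           dim_feedforward=HIDDEN,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMB, CLASSES)

    def forward(self, x):                      # x: (batch, seq_len), seq_len <= MAX_LEN
        pos = torch.arange(x.size(1), device=x.device)
        h = self.encoder(self.emb(x) + self.pos(pos))
        return self.head(h.mean(dim=1))        # mean-pool token representations


if __name__ == "__main__":
    batch = torch.randint(0, VOCAB, (2, 512))      # two documents, 512 tokens each
    print(RNNClassifier()(batch).shape)            # torch.Size([2, 4])
    print(TransformerClassifier()(batch).shape)    # torch.Size([2, 4])
```

The contrast to develop in Part A is visible here: the recurrent model's inference time scales with the number of sequential time steps, while the Transformer processes all positions in parallel but pays a quadratic attention cost in sequence length, which is why truncation, chunking, or efficient-attention variants become relevant under a strict latency budget.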
Hints
- Address attention and parallelism, sequence length limits, and training stability.
- For ensembles, discuss variance reduction (bagging) and bias reduction (boosting), latency implications, and practical guardrails (an ensemble sketch follows these hints).
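
For Part B, a minimal sketch, assuming scikit-learn and a synthetic dataset; the estimator counts, depths, and learning rate are illustrative assumptions rather than recommended settings:

```python
# Illustrative contrast of the two ensemble families from Part B (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted document features (e.g., pooled embeddings).
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: low-bias base learners (unpruned decision trees by default) trained
# on independent bootstrap samples; averaging their votes reduces variance, and
# because members are independent, training and prediction parallelize via n_jobs.
bagging = BaggingClassifier(n_estimators=100, n_jobs=-1, random_state=0)
bagging.fit(X_train, y_train)

# Boosting: shallow, high-bias learners fitted sequentially, each stage focusing
# on the previous stages' errors, which reduces bias; stages are built one after
# another, and prediction sums every stage, so latency grows with n_estimators.
boosting = GradientBoostingClassifier(n_estimators=200, max_depth=2,
                                      learning_rate=0.1, random_state=0)
boosting.fit(X_train, y_train)

print("bagging  test accuracy:", bagging.score(X_test, y_test))
print("boosting test accuracy:", boosting.score(X_test, y_test))
```

On the latency guardrails the hints ask for: inference cost for either ensemble grows roughly linearly with the number of estimators, so the estimator count is the natural knob under a latency budget; bagging members can additionally be trained and evaluated in parallel, whereas boosting stages must be fitted sequentially.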