PracHub

Design a GPU credit system and scheduler

Last updated: May 5, 2026

Quick Overview

This question evaluates system-design, distributed-systems, and resource-accounting skills, focusing on concurrency control, idempotent APIs, billing/credit models, and scheduler design for heterogeneous GPUs in multi-tenant ML platforms.


Company: OpenAI

Role: Software Engineer

Category: ML System Design

Difficulty: hard

Interview Round: Technical Screen

Design a GPU credit accounting and scheduling service for an ML platform. Users purchase credits, submit training/inference jobs, and consume credits while jobs run. Requirements: credit issuance, balance queries, reservation at submission, metered consumption during execution, partial refunds on preemption/failure, expiration and promotional credits, per-user and per-project budgets, and audit trails. The API must be idempotent and concurrency-safe, with rate limits and protection against double-spend under races. The scheduler should place jobs on heterogeneous GPUs (e.g., A100/H100) based on resource requirements and available quota, supporting fairness across users/teams and preemption policies. Describe schemas and data structures, consistency choices (strong vs. eventual), handling of clock skew, sharding and scaling strategies, and observability. Outline a test plan that captures edge cases and uncovers unspecified requirements.


Posted: Aug 13, 2025

Design a GPU Credit Accounting and Scheduling Service (Technical Screen)

Context

You are designing a backend service for an ML platform that runs training and inference on heterogeneous GPUs (e.g., A100, H100). Users/teams purchase credits and consume them while jobs run. The platform must prevent double-spend under concurrency, schedule fairly across users/teams, and handle preemption/failures with partial refunds.

Assume GPU pricing is per GPU-hour and differs by GPU type. Jobs specify resource requirements (GPU type preferences, count, memory) and may be preempted according to policy. The system is multi-tenant, multi-project, and multi-region.
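Since pricing is per GPU-hour and differs by GPU type, metered consumption reduces to a rate lookup times GPU-seconds. A minimal sketch with made-up rates (the real price table is not specified in the prompt):

```python
from decimal import Decimal

# Hypothetical per-GPU-hour rates in credits; actual pricing is unspecified.
RATES = {"A100": Decimal("2.0"), "H100": Decimal("4.5")}

def job_cost(gpu_type: str, gpu_count: int, runtime_seconds: int) -> Decimal:
    """Metered cost in credits, billed per GPU-second.

    Dividing last keeps the arithmetic exact for whole-second runtimes;
    Decimal (not float) avoids cumulative rounding drift across meter ticks.
    """
    return RATES[gpu_type] * gpu_count * runtime_seconds / Decimal(3600)

# e.g. 8x H100 for 30 minutes: 4.5 * 8 * 1800 / 3600 = 18 credits
```

In practice the meter would emit usage records on a fixed tick (say, every 60 seconds) so that consumption, refunds, and the audit trail all reconcile to the same unit.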

Functional Requirements

  1. Credit lifecycle
    • Issuance (purchases, grants, promotions) and expiration.
    • Balance queries with breakdown (promotional vs paid, expirations).
    • Spend ordering across buckets (e.g., earliest-expiring first).
  2. Reservation and metering
    • Idempotent reservation at job submission that checks budgets/quotas.
    • Metered consumption while jobs run; commit actual usage and partially refund unused holds on completion, preemption, or failure.
  3. Budgets and quotas
    • Per-user and per-project budgets; hierarchical limits (team/org → project → user).
    • Promotional credits with separate policies and expiration.
  4. Scheduling
    • Place jobs on heterogeneous GPUs based on requirements and available quota/credits.
    • Fairness across users/teams; support weights/priority classes and preemption.
  5. Audit and observability
    • Immutable audit trail for all credit and scheduling decisions.
    • Metrics, logs, and traces for SLOs and debugging.
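The spend-ordering requirement above (earliest-expiring first, which naturally burns promotional credits before paid ones) can be sketched as a draw across credit buckets; a partial refund would re-credit the same buckets in reverse order. Bucket fields and float balances are simplifications for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CreditBucket:
    kind: str              # "promo" or "paid"
    balance: float         # production code would use Decimal or integer millicredits
    expires_at: datetime

def draw(buckets, amount, now):
    """Consume `amount` from non-expired buckets, earliest-expiring first.

    Plans the full draw before mutating anything, so an insufficient-funds
    failure leaves all balances untouched. Returns [(bucket, drawn), ...].
    """
    live = sorted((b for b in buckets if b.expires_at > now),
                  key=lambda b: b.expires_at)
    plan = []
    for b in live:
        take = min(b.balance, amount)
        if take > 0:
            plan.append((b, take))
            amount -= take
        if amount == 0:
            break
    if amount > 0:
        raise ValueError("insufficient credits")
    for b, take in plan:
        b.balance -= take
    return plan
```

Returning the per-bucket plan matters: the commit/refund path needs to know which buckets were debited, since promotional and paid credits carry different refund policies.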

Non-Functional Requirements

  • APIs must be idempotent and concurrency-safe with rate limits.
  • Protect against double-spend under races and retries.
  • Clearly state consistency choices (strong vs eventual) and handle clock skew.
  • Sharding/scaling strategies for high throughput.
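Idempotency and double-spend protection can be illustrated with an in-memory account; this is a sketch only, standing in for what a real deployment would get from a transactional store with a unique constraint on (account_id, idempotency_key) rather than a process-local lock:

```python
import threading

class CreditAccount:
    """In-memory sketch of an idempotent reserve/release cycle."""

    def __init__(self, balance: int):
        self.balance = balance              # integer millicredits
        self._holds: dict[str, int] = {}    # idempotency_key -> held amount
        self._lock = threading.Lock()

    def reserve(self, idem_key: str, amount: int) -> int:
        """Place a hold at job submission; safe to retry with the same key."""
        with self._lock:
            if idem_key in self._holds:     # client retry: replay prior result
                return self._holds[idem_key]
            if amount > self.balance:
                raise ValueError("insufficient credits")
            self.balance -= amount          # check-and-debit under one lock:
            self._holds[idem_key] = amount  # no window for a racing double-spend
            return amount

    def release(self, idem_key: str, used: int) -> int:
        """Commit actual usage on completion/preemption; refund the unused hold."""
        with self._lock:
            held = self._holds.pop(idem_key)
            refund = held - used
            self.balance += refund
            return refund
```

The key property to preserve at scale is that the balance check and the debit are one atomic step; sharding by account keeps that atomicity local, so cross-shard coordination is only needed for hierarchical (team/org) budgets.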

Deliverables

Provide:

  1. Architecture overview (components and data flow).
  2. Data schemas and key data structures.
  3. API design and idempotency model.
  4. Scheduling algorithm and preemption policies.
  5. Consistency model and concurrency control (including double-spend protection and clock skew handling).
  6. Sharding and scaling strategy.
  7. Observability plan.
  8. A test plan that exercises edge cases and surfaces unspecified requirements.
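For deliverable 4, one simple fairness policy is weighted fair share: always serve the eligible user with the lowest accumulated-usage-to-weight ratio. The sketch below assumes per-user priority weights and GPU-second usage counters, neither of which is prescribed by the prompt:

```python
def pick_next(pending_by_user, usage, weights):
    """Pop the next job under weighted fair share.

    pending_by_user: user -> FIFO list of pending jobs
    usage:           user -> accumulated GPU-seconds consumed
    weights:         user -> hypothetical priority weight (higher = larger share)
    Returns the chosen job, or None if nothing is pending.
    """
    eligible = [u for u, queue in pending_by_user.items() if queue]
    if not eligible:
        return None
    # Lowest usage/weight ratio = furthest below its fair share.
    user = min(eligible, key=lambda u: usage.get(u, 0) / weights.get(u, 1))
    return pending_by_user[user].pop(0)
```

A fuller answer would layer preemption on the same ratio: when a high-priority job cannot be placed, evict the running job whose owner is furthest above its fair share, and route the evicted job's unused hold through the partial-refund path.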

