PracHub

Implement an LRU cache with follow-ups

Last updated: Mar 29, 2026

Quick Overview

This question evaluates understanding of core data structures and algorithms for designing an efficient LRU cache, as well as knowledge of concurrency and synchronization for thread-safe access.

  • medium
  • LinkedIn
  • Coding & Algorithms
  • Software Engineer


Company: LinkedIn

Role: Software Engineer

Category: Coding & Algorithms

Difficulty: medium

Interview Round: Onsite



Related Interview Questions

  • Count Trips From Vehicle Logs - LinkedIn (easy)
  • Design O(1) Randomized Multiset - LinkedIn (easy)
  • Process Mutable Matrix Sum Queries - LinkedIn (medium)
  • Design a Randomized Multiset - LinkedIn (medium)
  • Can You Place N Objects? - LinkedIn (medium)
Posted: Dec 8, 2025, 8:36 PM

Coding: Implement an LRU Cache and discuss concurrency

Design and implement an in-memory Least Recently Used (LRU) cache data structure.

The cache should:

  • Have a fixed capacity N specified at construction time
  • Store key-value pairs where keys and values are integers (for simplicity)
  • Support the following operations in average O(1) time:
    • get(key):
      • If the key exists, return its value and mark this key as the most recently used
      • If the key does not exist, return -1
    • put(key, value):
      • Insert or update the key with the given value
      • If inserting a new key causes the number of keys to exceed capacity N, evict the least recently used key from the cache before inserting

Clarify and then implement the following:

  1. Describe the data structures you would use to achieve O(1) time for both get and put.
  2. Implement the LRU cache class with constructor(capacity), get(key), and put(key, value) methods.
  3. Explain the time and space complexity of your implementation.
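One common answer to points 1–3 (not the only one) is a hash map from key to list node for O(1) lookup, combined with a doubly linked list ordered from least to most recently used for O(1) reordering and eviction. Both get and put then run in O(1) average time, and space is O(N). A minimal Python sketch, assuming capacity ≥ 1:

```python
class _Node:
    """Doubly linked list node holding one cache entry."""
    __slots__ = ("key", "val", "prev", "next")

    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None


class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity          # assumed >= 1 for this sketch
        self.map = {}                # key -> node, gives O(1) lookup
        # Sentinel head/tail remove null checks; head.next is the LRU entry,
        # tail.prev is the most recently used entry.
        self.head, self.tail = _Node(), _Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_back(self, node):
        # Insert just before tail, i.e. on the most-recently-used side.
        node.prev, node.next = self.tail.prev, self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._unlink(node)
        self._push_back(node)        # mark as most recently used
        return node.val

    def put(self, key, value):
        if key in self.map:          # update existing key
            node = self.map[key]
            node.val = value
            self._unlink(node)
            self._push_back(node)
            return
        if len(self.map) >= self.cap:
            lru = self.head.next     # evict least recently used
            self._unlink(lru)
            del self.map[lru.key]
        node = _Node(key, value)
        self.map[key] = node
        self._push_back(node)
```

In Python specifically, an equivalent and much shorter version can lean on `collections.OrderedDict`, whose `move_to_end` and `popitem(last=False)` provide the same O(1) recency tracking; the hand-rolled list above is what interviewers usually expect you to be able to explain.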

Follow-up (conceptual and design, not necessarily full code):

  4. How would you modify your LRU cache so that it is safe to use from multiple threads concurrently?
    • Discuss possible approaches such as coarse-grained locking, fine-grained locking, or using concurrent data structures.
    • Talk about trade-offs between simplicity, contention, and performance.
  5. What race conditions or consistency issues might occur if multiple threads call get and put at the same time, and how does your design avoid them?
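As a sketch of the simplest follow-up answer, coarse-grained locking: a single mutex serializes every operation, which is easy to reason about but makes all threads contend on one lock. Note that even get mutates shared state (the recency order), so a plain reader/writer lock does not help here. A hypothetical Python sketch using `threading.Lock` and `collections.OrderedDict`:

```python
import threading
from collections import OrderedDict


class ThreadSafeLRUCache:
    """Coarse-grained locking: one lock guards both the map and the
    recency order, so no thread can observe a half-updated cache."""

    def __init__(self, capacity):
        self._lock = threading.Lock()
        self._cap = capacity
        self._data = OrderedDict()   # oldest (LRU) entry first

    def get(self, key):
        with self._lock:
            if key not in self._data:
                return -1
            self._data.move_to_end(key)          # mark most recently used
            return self._data[key]

    def put(self, key, value):
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            self._data[key] = value
            if len(self._data) > self._cap:
                self._data.popitem(last=False)   # evict least recently used
```

Without the lock, two threads could interleave a lookup and a list splice, corrupting the recency order or double-evicting. Finer-grained designs (for example, sharding the key space across several independently locked LRU segments) reduce contention at the cost of only approximate global LRU order, which is one of the trade-offs worth naming in the discussion.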


© 2026 PracHub. All rights reserved.