Implement an LRU-based memoization helper with behavior similar to Python's standard `functools.lru_cache`.
You are given an interface like this:
```python
class LRU:
    def __init__(self, capacity: int, persistence_path: str):
        ...

    def generate_key(self, func, *args, **kwargs):
        # return a deterministic, hashable cache key
        pass

    def call(self, func, *args, **kwargs):
        # if the result for this function call is cached, return it
        # otherwise compute it, cache it, and return it
        pass
```
Requirements:
- Cache results of pure function calls.
- The cache key must include the function identity and its arguments.
- `generate_key` must handle both positional and keyword arguments.
- Different keyword argument orders must produce the same key.
- When the cache exceeds `capacity`, evict the least recently used entry.
- Assume arguments and return values are serializable.
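A minimal in-memory sketch satisfying the requirements above, using `collections.OrderedDict` for recency tracking (the function is identified by module and qualified name, and kwargs are sorted so argument order does not affect the key; `persistence_path` is accepted but unused here, pending the follow-up):

```python
from collections import OrderedDict


class LRU:
    """In-memory LRU memoization sketch; persistence is not handled here."""

    def __init__(self, capacity: int, persistence_path: str = ""):
        self.capacity = capacity
        self.persistence_path = persistence_path
        self._store = OrderedDict()  # key -> result, ordered oldest -> newest

    def generate_key(self, func, *args, **kwargs):
        # Identify the function by module and qualified name, and sort
        # kwargs so different keyword orders produce the same key.
        return (
            func.__module__,
            func.__qualname__,
            args,
            tuple(sorted(kwargs.items())),
        )

    def call(self, func, *args, **kwargs):
        key = self.generate_key(func, *args, **kwargs)
        if key in self._store:
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]
        result = func(*args, **kwargs)
        self._store[key] = result
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
        return result
```

Note the key assumes hashable positional and keyword argument values; unhashable arguments (lists, dicts) would need canonical serialization, e.g. a stable JSON encoding.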
Follow-up: if the process crashes and the in-memory cache is lost, how would you persist enough information to restore the cache after restart while keeping the cache correct? Describe the data you would write, when you would write it, and how recovery would work.
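One common answer is an append-only log: write a record (key, value) on every cache insert, flush at each write, and replay the log on startup, letting later entries win and trimming to capacity at the end. The sketch below illustrates that approach with a hypothetical `PersistentCache` class and JSON-lines records; the class name, file format, and `put`/`get` methods are illustrative assumptions, not part of the original interface:

```python
import json
import os
from collections import OrderedDict


class PersistentCache:
    """Hypothetical sketch: append-only JSON-lines log replayed on startup."""

    def __init__(self, capacity: int, path: str):
        self.capacity = capacity
        self.path = path
        self.store = OrderedDict()  # key -> value, oldest -> newest
        self._recover()
        self._log = open(self.path, "a")

    def _recover(self):
        # Replay the log; later entries win, so replay order approximates
        # recency and the surviving entries keep a sensible eviction order.
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                key = tuple(entry["key"])  # JSON turns tuples into lists
                self.store.pop(key, None)  # re-insert to refresh recency
                self.store[key] = entry["value"]
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)

    def put(self, key, value):
        self.store[key] = value
        self._log.write(json.dumps({"key": list(key), "value": value}) + "\n")
        self._log.flush()  # durability point; fsync for stronger guarantees
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # refresh recency on hit
            return self.store[key]
        return None
```

Correctness hinges on caching only pure functions (a replayed value is still valid) and on flushing before returning, so a crash loses at most the entry being written. A production version would also periodically compact the log, since evicted and superseded entries accumulate in the file.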