You are given two independent coding problems that focus on data structure and API design.
Design a Tic-Tac-Toe game that supports a generalized board size and a customizable win condition.
Implement a class TicTacToe with the following behavior:
- The game is played by two players, numbered `1` and `2`.
- `TicTacToe(rows, cols, K)` initializes an empty board with `rows` rows and `cols` columns. A player wins by placing `K` of their own marks consecutively in a row, a column, or a diagonal.
- `move(row, col, player)` places `player`'s mark at `(row, col)`. `row` and `col` are 0-indexed coordinates on the board, and `player` is either `1` or `2`. The call returns:
  - `0` if no one has won after this move,
  - `1` if player 1 wins as a result of this move,
  - `2` if player 2 wins as a result of this move,
  - `-1` if the move is invalid (out of bounds or the cell is already occupied).
You should design the data structures so that each move call is efficient even when the board is large.
Constraints (you may assume):
- `1 <= rows, cols <= 1000`
- `1 <= K <= max(rows, cols)`
- At most `rows * cols` calls to `move` in total.
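One way to satisfy the spec above is a plain 2-D grid where each `move` scans only the four line directions through the placed cell, costing O(K) per move regardless of board size. The following is a sketch under those assumptions, not the only valid design (the class and method names follow the spec; the internals are illustrative):

```python
class TicTacToe:
    def __init__(self, rows, cols, k):
        self.rows, self.cols, self.k = rows, cols, k
        self.board = [[0] * cols for _ in range(rows)]  # 0 means empty

    def move(self, row, col, player):
        # Reject out-of-bounds or already-occupied cells.
        if not (0 <= row < self.rows and 0 <= col < self.cols):
            return -1
        if self.board[row][col] != 0:
            return -1
        self.board[row][col] = player
        # Check the four line directions through (row, col). Each check
        # walks at most K-1 cells on either side, so a move costs O(K).
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            run = 1
            for sign in (1, -1):
                r, c = row + sign * dr, col + sign * dc
                while (0 <= r < self.rows and 0 <= c < self.cols
                       and self.board[r][c] == player):
                    run += 1
                    r += sign * dr
                    c += sign * dc
            if run >= self.k:
                return player
        return 0
```

Scanning outward from the placed cell avoids re-examining the whole board, which is what keeps each call efficient even at the 1000 x 1000 limit.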
Extend the design with a very simple AI that chooses a random legal move for a given player.
You are given a helper function:
which returns a uniform random integer in the inclusive range [low, high].
Add a method `getNextMove(player)` that returns a random `(row, col)` corresponding to an empty cell where `player` can play next. If there is no empty cell left, it returns `(-1, -1)`.
Design this so that getNextMove runs efficiently, even on a large board with many moves already played.
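One common way to achieve this is to maintain a list of empty cells together with a map from cell to its index in that list; removing a filled cell is then a swap-with-last and pop, and drawing a random empty cell is a single index lookup, both O(1). A minimal sketch of that idea, with `rand_int` standing in for the provided helper (whose real name is not given) and a hypothetical `MoveTracker` kept separate from the win-checking logic:

```python
import random

def rand_int(low, high):
    # Stand-in for the provided helper: uniform integer in [low, high].
    return random.randint(low, high)

class MoveTracker:
    """Tracks empty cells so a uniform random one can be drawn in O(1)."""

    def __init__(self, rows, cols):
        self.empty = [(r, c) for r in range(rows) for c in range(cols)]
        self.index = {cell: i for i, cell in enumerate(self.empty)}

    def fill(self, row, col):
        # Remove (row, col) from the empty list in O(1): swap it with
        # the last element, then pop.
        i = self.index.pop((row, col))
        last = self.empty.pop()
        if i < len(self.empty):
            self.empty[i] = last
            self.index[last] = i

    def get_next_move(self, player):
        # Any empty cell is a legal move for either player; return
        # (-1, -1) when the board is full.
        if not self.empty:
            return (-1, -1)
        return self.empty[rand_int(0, len(self.empty) - 1)]
```

The swap-and-pop trick is what keeps `get_next_move` independent of how many moves have already been played; a naive scan for empty cells would degrade to O(rows * cols).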
A key-value (KV) store receives a large number of requests. Each request is logged with a timestamp (in seconds). You want to support efficient queries about the queries per second (QPS) over arbitrary time ranges.
You are given an initially empty log. Implement a data structure that supports the following operations:
- `record(timestamp)` logs one request. `timestamp` is an integer representing seconds since some fixed epoch. Timestamps arrive in non-decreasing order (each new `timestamp >=` the previous `timestamp`).
- `getQPS(startTime, endTime)` returns the average QPS over the inclusive time range `[startTime, endTime]`. If `count` requests have timestamps `t` such that `startTime <= t <= endTime`, then:

  `QPS = count / (endTime - startTime + 1)`
You should be able to answer `getQPS` queries efficiently after recording a large volume of data.
Constraints (you may assume):
- Up to `N = 10^6` calls to `record`.
- Up to `Q = 10^6` calls to `getQPS`.
The naive implementation might scan all timestamps within [startTime, endTime] for each query, which is too slow when Q is large.
Design and describe a more efficient approach that answers `getQPS(startTime, endTime)` in sublinear time in `N` (for example, `O(log N)` per query).
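One approach that meets this bound: since `record` receives non-decreasing timestamps, appending them to a list keeps the log sorted for free, and each query reduces to two binary searches over that list. A sketch of this idea (the class name and snake_case method names are illustrative, not from the problem statement):

```python
from bisect import bisect_left, bisect_right

class QPSLog:
    def __init__(self):
        # Stays sorted because record() receives non-decreasing timestamps.
        self.ts = []

    def record(self, timestamp):
        self.ts.append(timestamp)  # O(1) amortized

    def get_qps(self, start_time, end_time):
        # Two binary searches bracket the window [start_time, end_time],
        # so each query costs O(log N) regardless of how many requests
        # fall inside it.
        count = bisect_right(self.ts, end_time) - bisect_left(self.ts, start_time)
        return count / (end_time - start_time + 1)
```

If timestamps could arrive out of order, the same query logic would still work but `record` would need an ordered structure (e.g. a balanced tree or an insertion-sorted list) instead of a plain append.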
Your design should be implementable using common data structures (arrays, lists, hash maps, trees, etc.).
Now suppose memory is tight and you are allowed to sacrifice precision to save space. Instead of storing every individual timestamp, you can approximate the QPS.
Design a modified scheme that answers any `[startTime, endTime]` query approximately while using substantially less memory.
You should explain what aggregated data is stored and how `getQPS` will be computed from this aggregated data.
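One common aggregation, offered here purely as an illustration rather than the intended answer, is fixed-width time buckets: store only a request count per bucket, and at query time assume requests are spread uniformly within each bucket, prorating buckets that partially overlap the query window (`bucket_seconds` is an assumed tuning knob, not part of the problem):

```python
class ApproxQPSLog:
    """Approximates QPS from per-bucket counts instead of raw timestamps.

    Memory is proportional to the number of non-empty buckets, not to
    the number of requests N.
    """

    def __init__(self, bucket_seconds=60):
        self.bucket = bucket_seconds
        self.counts = {}  # bucket start time -> number of requests in it

    def record(self, timestamp):
        b = timestamp - timestamp % self.bucket
        self.counts[b] = self.counts.get(b, 0) + 1

    def get_qps(self, start_time, end_time):
        total = 0.0
        b = start_time - start_time % self.bucket
        while b <= end_time:
            # Prorate this bucket by the fraction of it covered by the
            # query window, assuming requests are uniform in the bucket.
            lo = max(b, start_time)
            hi = min(b + self.bucket - 1, end_time)
            total += self.counts.get(b, 0) * (hi - lo + 1) / self.bucket
            b += self.bucket
        return total / (end_time - start_time + 1)
```

The bucket width is the precision/memory dial: wider buckets use less space but blur bursts; error appears only at the two partially covered edge buckets of a query, so long windows stay accurate.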