This question evaluates object-oriented design, debugging, algorithmic reasoning, and probabilistic simulation competencies by requiring fixes to class behavior, implementation of aggregate statistics, and a Monte Carlo-based probability estimator.
You are building software to track how long racers take to complete an obstacle course.

A `Course` has:

- `title` (string)
- `obstacle_count` (int): the number of obstacles in the course

A `Run` has:

- `obstacle_times`: list of per-obstacle completion times (seconds)
- `complete`: boolean that becomes `True` when `len(obstacle_times) == course.obstacle_count`
- `add_obstacle_time(t)`: appends a time for the next obstacle; throws if the run is already complete
- `get_run_time()`: the sum of the recorded times so far (for an incomplete run, the sum over its completed obstacles)

A `RunCollection` has:

- `add_run(run)`: adds a `Run`; rejects runs from a different `Course`
- `personal_best()`: the minimum total time among `complete` runs

First, there is a bug in the `RunCollection` implementation that causes an existing unit test to fail. Fix `RunCollection` so that the provided test suite passes.
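The classes described above can be sketched as follows. This is a hypothetical reconstruction, not the provided starter code: the method bodies, the use of a `complete` property, and the choice of `ValueError` for rejections are all assumptions.

```python
class Course:
    def __init__(self, title, obstacle_count):
        self.title = title                    # string
        self.obstacle_count = obstacle_count  # number of obstacles


class Run:
    def __init__(self, course):
        self.course = course
        self.obstacle_times = []  # per-obstacle completion times (seconds)

    @property
    def complete(self):
        # A run is complete once every obstacle has a recorded time.
        return len(self.obstacle_times) == self.course.obstacle_count

    def add_obstacle_time(self, t):
        if self.complete:
            raise ValueError("run is already complete")
        self.obstacle_times.append(t)

    def get_run_time(self):
        # For an incomplete run, this is the sum over completed obstacles.
        return sum(self.obstacle_times)


class RunCollection:
    def __init__(self, course):
        self.course = course
        self.runs = []

    def add_run(self, run):
        if run.course is not self.course:
            raise ValueError("run belongs to a different course")
        self.runs.append(run)

    def personal_best(self):
        # Defined only over complete runs; None if there are none yet.
        totals = [r.get_run_time() for r in self.runs if r.complete]
        return min(totals) if totals else None
```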
Next, implement `best_of_bests()` in `RunCollection`: the theoretical best total time, computed by taking, for each obstacle `i`, the minimum observed time for obstacle `i` across all runs that reached that obstacle (including incomplete runs), and summing these per-obstacle minima.
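The per-obstacle-minimum idea can be sketched like this. To keep the example self-contained, runs are represented as plain lists of per-obstacle times rather than `Run` objects, and returning `None` when an obstacle has no recorded time is an assumption:

```python
def best_of_bests(runs, obstacle_count):
    """Sum, over each obstacle i, the minimum time recorded for obstacle i
    across all runs that reached it (incomplete runs count too)."""
    total = 0.0
    for i in range(obstacle_count):
        # Only runs that recorded a time for obstacle i contribute.
        times = [r[i] for r in runs if len(r) > i]
        if not times:
            return None  # assumption: undefined if an obstacle has no data
        total += min(times)
    return total
```

For example, with runs `[[3, 5, 2], [4, 1], [2]]` the per-obstacle minima are 2, 1, and 2, so the result is 5.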
Then implement `chance_of_personal_best(test_run)` using simulation. The argument `test_run` is a partially completed `Run` (it may have completed some obstacles but not all). In each simulated trial, for each obstacle `i` not yet completed in `test_run`, the time to complete obstacle `i` is drawn uniformly at random from the set of historical times recorded for obstacle `i` in the run collection (including obstacle times from incomplete runs, as long as that obstacle's time exists). These sampled times are added to `test_run`'s current elapsed time. Estimate the probability that the simulated total beats `personal_best()`.
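A Monte Carlo estimator along the lines described might look like the following sketch. As before, runs are simplified to lists of per-obstacle times; the function signature, the `trials` default, and the injectable `rng` parameter are assumptions, not part of the spec:

```python
import random


def chance_of_personal_best(runs, test_run_times, obstacle_count,
                            personal_best, trials=10_000, rng=None):
    """Estimate the probability that test_run, with its remaining obstacles
    filled in by uniform draws from each obstacle's historical times,
    finishes faster than personal_best."""
    rng = rng or random.Random()
    elapsed = sum(test_run_times)

    # Build the historical pool for each obstacle test_run has not reached.
    pools = []
    for i in range(len(test_run_times), obstacle_count):
        pool = [r[i] for r in runs if len(r) > i]
        if not pool:
            raise ValueError("no historical times for obstacle %d" % i)
        pools.append(pool)

    hits = 0
    for _ in range(trials):
        total = elapsed + sum(rng.choice(pool) for pool in pools)
        if total < personal_best:
            hits += 1
    return hits / trials
```

Passing a seeded `random.Random` as `rng` makes the estimate reproducible, which matters for unit-testing a stochastic function.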
Finally, beyond `best_of_bests()` and `chance_of_personal_best()`, what improvements would you make to the codebase (structure, naming, performance, reliability, determinism of tests, etc.)?
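On the determinism-of-tests point specifically, one concrete improvement is to have any simulation accept an injectable random source instead of using the module-level `random` functions. A minimal sketch (the `estimate` helper and its `simulate_once` callback are hypothetical illustrations, not part of the codebase):

```python
import random


def estimate(simulate_once, trials, rng=None):
    """Run `trials` simulations and return the fraction that succeed.
    Accepting `rng` lets tests pass random.Random(seed) for repeatability."""
    rng = rng or random.Random()
    hits = sum(1 for _ in range(trials) if simulate_once(rng))
    return hits / trials


# In a test, a fixed seed makes the estimate reproducible:
p = estimate(lambda r: r.random() < 0.3, trials=1000, rng=random.Random(42))
```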
Notes: `personal_best()` is defined only over complete runs. Sampling a time for obstacle `i` requires that at least one historical time is recorded for that obstacle.