This question evaluates algorithmic reasoning in constrained pairings and resource allocation, focusing on maximizing aggregate memory under pairwise feasibility constraints while managing large input sizes.
You are helping to optimize the capacity of a cloud system.
There are n servers, where n is always even. The memory capacity of the i-th server is given by an integer array memory of length n.
When there are n = 2 * x servers:
- x servers will be chosen as primary servers.
- x servers will be backup servers.
For every primary server P, there must exist a distinct backup server B such that:
memory[B] >= memory[P]
(Each server is used as either a primary or a backup in exactly one pair, so the n servers are partitioned into x (primary, backup) pairs.)
The system memory capacity is defined as the sum of the memory capacities of all primary servers:
capacity = sum(memory[P] over all primary servers P)
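For illustration (with made-up values, not part of the problem): if memory = [2, 1, 7, 4], one valid partition pairs the servers with memories (1, 2) and (4, 7), giving capacity 1 + 4 = 5. The short sketch below just checks the pairing constraint and computes that sum.

```python
# Hypothetical example; the memory values are invented for illustration.
memory = [2, 1, 7, 4]

# One candidate partition into (primary, backup) pairs, written as
# (primary_memory, backup_memory) value pairs rather than indices.
pairs = [(1, 2), (4, 7)]

# Feasibility: every backup must have at least the primary's memory.
assert all(backup >= primary for primary, backup in pairs)

# System memory capacity = sum of the primary servers' memories.
capacity = sum(primary for primary, _ in pairs)
print(capacity)  # 5
```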
Your task is to determine the maximum possible system memory capacity that can be obtained by:
- partitioning the n servers into n/2 disjoint (primary, backup) pairs, and
- ensuring memory[backup] >= memory[primary] within each pair.
You are given:
- an integer n (the number of servers), and
- an integer array memory of length n, where memory[i] is the memory capacity of the i-th server.
You may assume:
- 2 <= n <= 2 * 10^5, and n is even.
- 1 <= memory[i] <= 10^9.
Design an algorithm efficient for n up to about 2 * 10^5 (i.e., significantly better than O(n^2) time).
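One approach that fits this bound (a sketch, not necessarily the intended reference solution): since within each pair the primary contributes at most the smaller of the two memories, the task reduces to maximizing the sum of per-pair minima, and the standard greedy for that is to sort the memories in ascending order and pair adjacent elements.

```python
def max_system_capacity(memory):
    """O(n log n) greedy sketch: sort ascending, pair adjacent servers,
    and take the smaller server of each pair as the primary.
    (Assumes the pair-adjacent greedy maximizes the sum of pair minima.)"""
    ordered = sorted(memory)
    # After sorting, the pairs are (ordered[0], ordered[1]), (ordered[2], ordered[3]), ...
    # The even-indexed element is the smaller one in its pair, i.e. the primary.
    return sum(ordered[i] for i in range(0, len(ordered), 2))


# Hypothetical usage with made-up values:
print(max_system_capacity([2, 1, 7, 4]))  # 5
```

Sorting dominates the running time, so this sketch runs in O(n log n) time and O(n) extra space, comfortably within n up to 2 * 10^5.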