Median and Order Statistics
Special cases: the median (as in statistics); when n is even there are two, the lower median (the textbook's default) and the upper median.
Easy cases: minimum (or maximum). A simple scan (Algorithm-0) takes exactly n-1 comparisons on an unsorted array, and n-1 is also a lower bound: every element except the winner must lose at least one comparison. Best case, worst case, and average case all coincide at n-1 comparisons.
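A minimal Python sketch of the scan (the function name is my own):

```python
def minimum(a):
    """Minimum of a non-empty list using exactly len(a) - 1 comparisons."""
    best = a[0]
    for x in a[1:]:
        if x < best:        # one comparison per remaining element
            best = x
    return best
```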
To find both the minimum and maximum simultaneously: process elements in pairs, first comparing the two elements of the pair, then the smaller against the current minimum and the larger against the current maximum. This costs 3 comparisons per pair, at most 3*floor(n/2) in total, instead of the 2n-2 of two independent scans.
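The pairing trick can be sketched as follows (function name is my own):

```python
def min_max(a):
    """Min and max of a non-empty list with about 3*floor(n/2) comparisons."""
    n = len(a)
    if n % 2:                       # odd n: the first element seeds both
        lo = hi = a[0]
        i = 1
    else:                           # even n: one comparison settles the first pair
        lo, hi = (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
        i = 2
    while i < n:                    # 3 comparisons per remaining pair
        small, big = (a[i], a[i + 1]) if a[i] < a[i + 1] else (a[i + 1], a[i])
        if small < lo:
            lo = small
        if big > hi:
            hi = big
        i += 2
    return lo, hi
```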
If i is a constant, extend the algorithms to find the i-th smallest (or largest) element in O(n) time.
Selection by sorting takes O(n lg n). Can it be faster if we don't need a total order? How about stopping a sorting algorithm partway, keeping only the part of the order we need?
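As one illustration of a "partial sort", Python's standard library can extract only the i smallest elements with a size-i heap, giving O(n lg i) rather than O(n lg n) (the wrapper function is my own):

```python
import heapq

def ith_smallest(a, i):
    """i-th smallest element (1-indexed) via a partial sort: O(n lg i)."""
    return heapq.nsmallest(i, a)[-1]
```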
The algorithm Randomized-Select (page 186) uses Randomized-Partition (page 154), and has a worst-case running time of Θ(n^2).
The average running time of the algorithm satisfies E[T(n)] ≤ Σ[k=1..n](1/n)E[T(max(k-1, n-k))] + O(n), which can be shown to be O(n). Therefore the average running time is of the same order as a single partition pass, i.e., linear.
Or, think of it this way: with 98% chance a randomly selected pivot lands in the middle 98% of the elements, so each side of the partition has at most 99% of them; then T(n) ≤ T(0.99n) + O(n) = O(n) by the Master Theorem.
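Randomized-Select can be sketched iteratively in Python, using a random pivot with Lomuto partition (a minimal sketch; the variable names are my own):

```python
import random

def randomized_select(a, i):
    """Return the i-th smallest element (1-indexed) of a sequence.
    Expected O(n) time; worst case Theta(n^2) with unlucky pivots."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        # Randomized-Partition: move a random pivot to the end, then Lomuto
        p = random.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]
        pivot, store = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[j], a[store] = a[store], a[j]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        k = store - lo + 1          # rank of the pivot within a[lo..hi]
        if i == k:
            return a[store]
        elif i < k:                 # recurse (iterate) into the low side only
            hi = store - 1
        else:                       # high side; adjust i relative to it
            i -= k
            lo = store + 1
```

Unlike quicksort, only one side of each partition is kept, which is where the linear expected time comes from.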
The SELECT algorithm (page 189): divide the n elements into groups of c (a constant), find the median of each group (using Insertion-Sort), then select the median of the group medians recursively, and finally use the median-of-medians as the pivot to partition. Intuitively, such a pivot reduces the instance size by at least about 1/4: it is greater than half the elements in half the groups, and likewise less than half the elements in the other half of the groups, so neither side of the partition keeps much more than 3/4 of the elements.
The textbook proves that the selection algorithm has linear worst-case running time using c = 5.
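A minimal Python sketch of SELECT with c = 5 (sorted() stands in for Insertion-Sort on the tiny groups; three-way splitting around the pivot is added to handle duplicates):

```python
def select(a, i):
    """i-th smallest element (1-indexed), worst-case linear time."""
    a = list(a)
    if len(a) <= 5:                                     # base case: just sort
        return sorted(a)[i - 1]
    # Lower median of each group of (at most) 5
    groups = [a[j:j + 5] for j in range(0, len(a), 5)]
    medians = [sorted(g)[(len(g) - 1) // 2] for g in groups]
    # Median of medians, found recursively, becomes the pivot
    pivot = select(medians, (len(medians) + 1) // 2)
    lows = [x for x in a if x < pivot]
    highs = [x for x in a if x > pivot]
    n_eq = len(a) - len(lows) - len(highs)              # copies of the pivot
    if i <= len(lows):
        return select(lows, i)
    elif i <= len(lows) + n_eq:
        return pivot
    else:
        return select(highs, i - len(lows) - n_eq)
```

Both recursive calls shrink geometrically (one on n/5 medians, one on at most roughly 7n/10 elements), which is what makes the total work linear.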