**Maximum Flow and Linear Programming**

The Ford-Fulkerson Method finds the maximum flow of a given flow network:

```
Ford-Fulkerson-Method(G, s, t)
1  initialize flow f to 0
2  while there exists an augmenting path p
3      do augment flow f along p
4  return f
```

Intuitively, an augmenting path is a path from the source s to the sink t along which more flow can be pushed.

Tricky point: a later augmenting path can partially "cancel" flow sent along an earlier one.

The amount of additional flow we can push from u to v before exceeding the capacity c(u, v) is the *residual capacity* of (u, v), given by

c_{f}(u, v) = c(u, v) - f(u, v)

The residual network of G is the graph formed by all the edges with positive residual capacity between every pair of vertices.
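As a small executable sketch (the function and variable names are mine, not from the text), residual capacities follow directly from this definition; note that a positive flow f(u, v) also induces a backward residual edge (v, u) with capacity f(u, v), which is exactly what lets a later path cancel earlier flow:

```python
# Residual network c_f for a tiny example; capacities and flow are dicts keyed by edge (u, v).

def residual_network(c, f):
    """Return every edge with positive residual capacity c_f(u, v)."""
    cf = {}
    for (u, v), cap in c.items():
        flow = f.get((u, v), 0)
        if cap - flow > 0:          # forward residual edge: remaining capacity
            cf[(u, v)] = cap - flow
        if flow > 0:                # backward residual edge: room to cancel flow
            cf[(v, u)] = cf.get((v, u), 0) + flow
    return cf

c = {('s', 'u'): 10, ('u', 't'): 4}
f = {('s', 'u'): 4, ('u', 't'): 4}
print(residual_network(c, f))   # {('s', 'u'): 6, ('u', 's'): 4, ('t', 'u'): 4}
```

The saturated edge (u, t) disappears from the residual network, while its flow survives as the backward edge (t, u).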

Example: Figure 26.5.

When the capacities are integers, the runtime of Ford-Fulkerson is bounded by O(Ef), where E is the number of edges in the graph and f is the maximum flow in the graph. This is because each augmenting path can be found in O(E) time and increases the flow by an integer amount which is at least 1.

When the method finds augmenting paths with BFS, it becomes the Edmonds-Karp algorithm, which runs in O(V E^{2}) time.

Example: maximize `x_{1} + x_{2}`, subject to the following constraints:

4x_{1} - x_{2} ≤ 8
2x_{1} + x_{2} ≤ 10
5x_{1} - 2x_{2} ≥ -2
x_{1}, x_{2} ≥ 0

Feasible solution: values of the variables that satisfy all the constraints.

which can be rewritten in matrix/vector form:

maximize c^{T}x (c^{T} is the transpose of c)
subject to Ax ≤ b, x ≥ 0

Consequently, an LP problem in standard form is fully specified by the triple (A, b, c).

A slack variable defined as `s = b - Ax` turns each inequality into an equation, together with the constraint `s ≥ 0`. As a result, the linear program changes from standard form to slack form:

z = v + c^{T}x
s = b - Ax
x ≥ 0
s ≥ 0

where v is a constant term in the objective (initially 0).
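For the two-variable example above, the slack computation s = b - Ax is only a few lines (a sketch; the array and function names are mine, and the ≥ constraint is negated to fit the Ax ≤ b form):

```python
# Slack variables for the example LP; a point x is feasible iff x >= 0 and s >= 0.
A = [[4, -1],
     [2,  1],
     [-5, 2]]      # 5x1 - 2x2 >= -2 rewritten as -5x1 + 2x2 <= 2
b = [8, 10, 2]

def slacks(x):
    """s = b - Ax, one slack value per constraint row."""
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(A, b)]

def is_feasible(x):
    return all(xj >= 0 for xj in x) and all(sj >= 0 for sj in slacks(x))

print(slacks([3, 4]))       # [0, 0, 9]
print(is_feasible([3, 4]))  # True
```

A slack of 0 means the constraint is tight at that point: (3, 4) lies on the first two constraint boundaries.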

The simplex algorithm is described in Section 29.3, and in the following it will be explained using a concrete example.

The following LP problem is in standard form:

maximize 3x_{1} + x_{2} + 2x_{3}
subject to
x_{1} + x_{2} + 3x_{3} ≤ 30
2x_{1} + 2x_{2} + 5x_{3} ≤ 24
4x_{1} + x_{2} + 2x_{3} ≤ 36
x_{1}, x_{2}, x_{3} ≥ 0

Convert the problem into slack form:

z = 3x_{1} + x_{2} + 2x_{3}
x_{4} = 30 - x_{1} - x_{2} - 3x_{3}
x_{5} = 24 - 2x_{1} - 2x_{2} - 5x_{3}
x_{6} = 36 - 4x_{1} - x_{2} - 2x_{3}

A basic solution sets each nonbasic variable to 0: here (x_{1}, ..., x_{6}) = (0, 0, 0, 30, 24, 36), with objective value z = 0.

In each iteration, the simplex algorithm performs a "pivot": it reformulates the problem to get a better basic solution by selecting a nonbasic variable whose coefficient in the objective function is positive (the "entering" variable) and turning it into a basic variable. The "leaving" (basic) variable is the first one that would become negative as the entering variable increases.

When x_{1} is selected as the entering variable, the leaving variable is x_{6}. The last equation above can be reformulated into

x_{1} = 9 - x_{2}/4 - x_{3}/2 - x_{6}/4

Using it to substitute for x_{1} in the other equations yields:

z = 27 + x_{2}/4 + x_{3}/2 - 3x_{6}/4
x_{1} = 9 - x_{2}/4 - x_{3}/2 - x_{6}/4
x_{4} = 21 - 3x_{2}/4 - 5x_{3}/2 + x_{6}/4
x_{5} = 6 - 3x_{2}/2 - 4x_{3} + x_{6}/2

Now the basic solution is (9, 0, 0, 21, 6, 0), and the objective function value is 27.
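The algebra of a pivot can be spot-checked numerically: for any values of the nonbasic variables, the rewritten system must give the same z and the same basic-variable values as the original slack form (a quick sketch using exact fractions; the helper names are mine):

```python
from fractions import Fraction as F

def original(x1, x2, x3):
    """The initial slack form of the example."""
    z  = 3*x1 + x2 + 2*x3
    x4 = 30 - x1 - x2 - 3*x3
    x5 = 24 - 2*x1 - 2*x2 - 5*x3
    x6 = 36 - 4*x1 - x2 - 2*x3
    return z, x4, x5, x6

def pivoted(x2, x3, x6):
    """The system after the first pivot (x1 entered, x6 left)."""
    x1 = 9 - x2/4 - x3/2 - x6/4
    z  = 27 + x2/4 + x3/2 - F(3, 4)*x6
    x4 = 21 - F(3, 4)*x2 - F(5, 2)*x3 + x6/4
    x5 = 6 - F(3, 2)*x2 - 4*x3 + x6/2
    return z, x1, x4, x5

# pick arbitrary nonbasic values, recover x1, and compare both systems
x2, x3, x6 = F(2), F(1), F(3)
z_p, x1, x4_p, x5_p = pivoted(x2, x3, x6)
z_o, x4_o, x5_o, x6_o = original(x1, x2, x3)
assert (z_p, x4_p, x5_p) == (z_o, x4_o, x5_o) and x6_o == x6
```

Because a pivot only rewrites equations, the two systems describe exactly the same set of solutions; only which variables are "basic" changes.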

Do a pivot again to let x_{3} enter and x_{5} leave:

z = 111/4 + x_{2}/16 - x_{5}/8 - 11x_{6}/16
x_{1} = 33/4 - x_{2}/16 + x_{5}/8 - 5x_{6}/16
x_{3} = 3/2 - 3x_{2}/8 - x_{5}/4 + x_{6}/8
x_{4} = 69/4 + 13x_{2}/16 + 5x_{5}/8 - x_{6}/16

Now the basic solution is (33/4, 0, 3/2, 69/4, 0, 0), and the objective function value is 111/4.

Do a pivot again to let x_{2} enter and x_{3} leave:

z = 28 - x_{3}/6 - x_{5}/6 - 2x_{6}/3
x_{1} = 8 + x_{3}/6 + x_{5}/6 - x_{6}/3
x_{2} = 4 - 8x_{3}/3 - 2x_{5}/3 + x_{6}/3
x_{4} = 18 - x_{3}/2 + x_{5}/2

Now all coefficients in the objective function are negative, so the optimal value is z = 28, which corresponds to the solution (8, 4, 0, 18, 0, 0), i.e., x = (8, 4, 0).

If the feasible region of an LP problem is a distortion of an n-dimensional cube, the worst-case complexity of the simplex algorithm may reach Θ(2^{n}). Nevertheless, it is usually remarkably efficient in practice.