Linear programming

In mathematics, linear programming (LP) problems are optimization problems in which the objective function and the constraints are all linear.

Linear programming is an important field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations.



Here is an example of a linear programming problem. Suppose that a farmer has a piece of farm land, say A square kilometres, to be planted with either wheat or barley or some combination of the two. The farmer has a limited amount F of fertilizer and a limited amount P of insecticide, and each crop requires a different amount of each per unit area: (F1, P1) for wheat and (F2, P2) for barley. Let S1 be the selling price of wheat per square kilometre, and S2 the price of barley. If we denote the areas planted with wheat and barley by x1 and x2 respectively, then the optimal number of square kilometres to plant with wheat versus barley can be expressed as a linear programming problem:

maximize S1x1 + S2x2 (maximize the profit - this is the "objective function")
subject to x1 + x2 ≤ A (limit on total area)
F1x1 + F2x2 ≤ F (limit on fertilizer)
P1x1 + P2x2 ≤ P (limit on insecticide)
x1 ≥ 0, x2 ≥ 0 (cannot plant a negative area)
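
As a concrete illustration, such a problem can be handed directly to an off-the-shelf LP solver. The sketch below uses the linprog routine from the SciPy library (an assumption about the available tooling); the numerical values are invented purely for illustration.

    # Minimal sketch of the farmer's problem, assuming SciPy is available.
    # All numbers below are made-up illustrative data.
    from scipy.optimize import linprog

    A_total, F, P = 10.0, 300.0, 90.0   # area, fertilizer and insecticide limits
    F1, F2 = 40.0, 25.0                 # fertilizer needed per km^2 of wheat / barley
    P1, P2 = 10.0, 7.0                  # insecticide needed per km^2 of wheat / barley
    S1, S2 = 120.0, 90.0                # selling price per km^2 of wheat / barley

    # linprog minimizes, so the objective S1*x1 + S2*x2 is negated.
    c = [-S1, -S2]
    A_ub = [[1.0, 1.0],                 # x1 + x2       <= A_total
            [F1, F2],                   # F1*x1 + F2*x2 <= F
            [P1, P2]]                   # P1*x1 + P2*x2 <= P
    b_ub = [A_total, F, P]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)              # optimal areas and the resulting profit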


Geometrically, the linear constraints define a convex polyhedron, which is called the feasible region. Since the objective function is also linear, all local optima are automatically global optima. The linear objective function also implies that an optimal solution can only occur at a boundary point of the feasible region.
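
In matrix form, a linear program of this kind asks for a vector x that maximizes cᵀx subject to Ax ≤ b and x ≥ 0, where c collects the objective coefficients, b the limits, and A the constraint coefficients (this matrix A is unrelated to the area A of the example above). Each inequality cuts space along a hyperplane, and the feasible region is the intersection of the resulting half-spaces, which is why it is a convex polyhedron.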

There are two situations in which no optimal solution can be found. First, if the constraints contradict each other (for instance, x ≥ 2 and x ≤ 1) then the feasible region is empty and there can be no optimal solution, since there are no solutions at all. In this case, the LP is said to be infeasible.

Alternatively, the polyhedron can be unbounded in the direction of the objective function (for example: maximize x1 + 3 x2 subject to x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 10), in which case there is no optimal solution since solutions with arbitrarily high values of the objective function can be constructed.
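
In practice a solver detects and reports both situations. The sketch below again assumes SciPy's linprog; it builds the two small examples just described and prints the status the solver returns (0 denotes an optimum found, 2 infeasible, 3 unbounded in SciPy's convention).

    # Sketch, assuming SciPy: how a solver reports the two pathological cases.
    from scipy.optimize import linprog

    # Infeasible: x >= 2 and x <= 1 cannot both hold (x >= 2 is written -x <= -2).
    infeasible = linprog(c=[1.0], A_ub=[[-1.0], [1.0]], b_ub=[-2.0, 1.0])
    print(infeasible.status, infeasible.message)   # status 2: infeasible

    # Unbounded: maximize x1 + 3*x2 with x1 + x2 >= 10 and x1, x2 >= 0.
    # linprog minimizes, so the objective is negated, and x1 + x2 >= 10
    # becomes -x1 - x2 <= -10.
    unbounded = linprog(c=[-1.0, -3.0], A_ub=[[-1.0, -1.0]], b_ub=[-10.0])
    print(unbounded.status, unbounded.message)     # status 3: unbounded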

Barring these two pathological conditions (which are often ruled out by resource constraints integral to the problem being represented, as above), the optimum is always attained at a vertex of the polyhedron. However, the optimum is not necessarily unique: it is possible to have a set of optimal solutions covering an edge or face of the polyhedron, or even the entire polyhedron (this last situation would occur if the objective function were uniformly equal to zero).


The simplex algorithm solves LP problems by constructing a feasible solution at a vertex of the polyhedron, and then walking along edges of the polyhedron to vertices with successively higher values of the objective function until the optimum is reached. Although this algorithm is quite efficient in practice, and can be guaranteed to find the global optimum if certain precautions against cycling are taken, it has poor worst-case behavior: it is possible to construct a linear programming problem for which the simplex method takes a number of steps exponential in the problem size. In fact, for some time it was not known whether the linear programming problem was NP-complete or polynomial-time solvable.
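
The following is a bare-bones sketch of this idea for problems of the form "maximize c·x subject to Ax ≤ b, x ≥ 0" with b ≥ 0, so that the all-slack starting point is already a vertex of the polyhedron. It is written in Python with NumPy purely for illustration and omits the refinements (pricing strategies, stable factorizations, presolve) that production implementations rely on; Bland's rule is used for the pivot choice to avoid cycling.

    # Tableau simplex sketch for: maximize c.x subject to A x <= b, x >= 0, with b >= 0.
    import numpy as np

    def simplex(c, A, b):
        m, n = A.shape
        # Tableau rows: [A | I | b]; last row holds the objective as [-c | 0 | 0].
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)
        T[:m, -1] = b
        T[m, :n] = -c
        basis = list(range(n, n + m))              # start at the all-slack vertex
        while True:
            # Bland's rule: entering column = lowest index with negative reduced cost.
            enters = next((j for j in range(n + m) if T[m, j] < -1e-9), None)
            if enters is None:                     # no improving edge: vertex is optimal
                break
            # Ratio test: how far we can move along the edge before leaving the polyhedron.
            ratios = [(T[i, -1] / T[i, enters], i) for i in range(m) if T[i, enters] > 1e-9]
            if not ratios:
                raise ValueError("problem is unbounded")
            leaves = min(ratios, key=lambda r: (r[0], basis[r[1]]))[1]
            # Pivot: the entering variable replaces the leaving one in the basis.
            T[leaves] /= T[leaves, enters]
            for i in range(m + 1):
                if i != leaves:
                    T[i] -= T[i, enters] * T[leaves]
            basis[leaves] = enters
        x = np.zeros(n + m)
        for i, j in enumerate(basis):
            x[j] = T[i, -1]
        return x[:n], T[m, -1]                     # optimal point and optimal value

    # Example: the farmer's problem with the illustrative numbers used earlier.
    x_opt, value = simplex(np.array([120.0, 90.0]),
                           np.array([[1.0, 1.0], [40.0, 25.0], [10.0, 7.0]]),
                           np.array([10.0, 300.0, 90.0]))
    print(x_opt, value)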

The first worst-case polynomial-time algorithm for the linear programming problem was proposed by Leonid Khachiyan in 1979. It was based on the ellipsoid method in nonlinear optimization by Naum Shor, which is the generalization of the ellipsoid method in convex optimization by Arkadi Nemirovski, a 2003 John von Neumann Theory Prize winner, and D. Yudin.

However, the practical performance of Khachiyan's algorithm is disappointing: generally, the simplex method is more efficient. Its main importance is that it encouraged research into interior point methods. In contrast to the simplex algorithm, which only progresses along points on the boundary of the feasible region, interior point methods can move through the interior of the feasible region.

In 1984, N. Karmarkar proposed the projective method. This was the first algorithm to perform well both in theory and in practice: its worst-case complexity is polynomial, and experiments on practical problems show that it is reasonably efficient compared to the simplex algorithm. Since then, many interior point methods have been proposed and analysed. A popular interior point method is the Mehrotra predictor-corrector method, which performs very well in practice even though little is known about it theoretically.

The current opinion is that the efficiency of good implementations of simplex-based methods and interior point methods is similar for routine applications of linear programming.
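
One way to see this in practice is that modern solver libraries ship both kinds of method behind the same interface. The sketch below assumes a SciPy version whose linprog wraps the HiGHS solvers, which expose a dual simplex method ('highs-ds') and an interior point method ('highs-ipm'); on a well-posed problem with a unique optimum both should report the same solution.

    # Sketch: the same LP solved with a simplex-type and an interior point method,
    # assuming SciPy's linprog with the HiGHS backends.
    from scipy.optimize import linprog

    c = [-120.0, -90.0]                            # negated: linprog minimizes
    A_ub = [[1.0, 1.0], [40.0, 25.0], [10.0, 7.0]]
    b_ub = [10.0, 300.0, 90.0]

    for method in ("highs-ds", "highs-ipm"):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, method=method)
        print(method, res.x, -res.fun)             # same optimum from either method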

LP solvers are in widespread use for optimization of various problems in industry, such as optimization of flow in transportation networks, many of which can be transformed into linear programming problems only with some difficulty.

Integer unknowns

If the unknown variables are all required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are NP-hard in the worst case, including the bounded-variable formulations that arise in many practical situations. 0-1 integer programming is the special case of integer programming where the variables are required to be 0 or 1 (rather than arbitrary integers); this special case is also NP-hard.

If only some of the unknown variables are required to be integers, then the problem is called a mixed integer programming (MIP) problem. These are generally also NP-hard.
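
Despite the worst-case hardness, small and medium-sized instances are routinely solved by branch-and-bound and branch-and-cut solvers. As a sketch, assuming a SciPy version that provides scipy.optimize.milp (a wrapper around a MIP solver), a tiny mixed integer program can be written as follows; the data are invented for illustration.

    # Sketch of a tiny MIP, assuming scipy.optimize.milp is available:
    # maximize 2*x1 + x2  subject to  x1 + x2 <= 3.5,  x1, x2 >= 0,  x1 integer.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint

    c = np.array([-2.0, -1.0])                     # negated: milp minimizes
    constraint = LinearConstraint(np.array([[1.0, 1.0]]), -np.inf, 3.5)
    integrality = np.array([1, 0])                 # 1 = integer variable, 0 = continuous

    res = milp(c, constraints=constraint, integrality=integrality)
    print(res.x, -res.fun)   # the LP relaxation would take x1 = 3.5; here x1 = 3, x2 = 0.5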

There are however some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integer.
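
A classical instance is the assignment problem: its constraint matrix is the incidence matrix of a bipartite graph, which is totally unimodular, so with integer right-hand sides the plain LP relaxation already has an integer optimal vertex. The sketch below (again assuming SciPy's linprog with a simplex-type method, which returns a vertex) illustrates this on a 2x2 assignment; no integrality constraint is imposed, yet a 0/1 solution is expected.

    # Sketch: 2x2 assignment problem solved as a plain LP, assuming SciPy.
    from scipy.optimize import linprog

    # Variables x[i][j] flattened as [x00, x01, x10, x11]; cost of giving job j to worker i.
    cost = [4.0, 1.0, 2.0, 3.0]
    A_eq = [[1, 1, 0, 0],   # worker 0 does exactly one job
            [0, 0, 1, 1],   # worker 1 does exactly one job
            [1, 0, 1, 0],   # job 0 is done by exactly one worker
            [0, 1, 0, 1]]   # job 1 is done by exactly one worker
    b_eq = [1, 1, 1, 1]

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4, method="highs-ds")
    print(res.x)            # expected: a 0/1 vector, even though integrality was never imposed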

An advanced algorithm for solving large integer linear programs is delayed column generation.
