# Column generation algorithms

Author: Lorena Garcia Fernandez (lgf572)

## Introduction

Column generation techniques are used to solve large linear optimization problems by generating only the variables that have the potential to improve the objective function. This matters for large problems with many variables: with these techniques, not all possibilities need to be listed explicitly, which keeps the formulation tractable.^{[1]}

**Theory, methodology and algorithmic discussions**

**Theory**

The method works as follows: first, the original problem is split into two problems: the master problem and the sub-problem.

- The master problem is the original column-wise (i.e., one column at a time) formulation of the problem, with only a subset of variables being considered.

- The sub-problem is a new problem created to identify a promising new variable. Its objective function is the reduced cost of the new variable with respect to the current dual variables, and its constraints require that the variable obey the problem's naturally occurring constraints. The master problem restricted to only a subset of the variables is referred to as the RMP, or "restricted master problem". From this we can infer that the method is a good fit for problems whose constraint set admits a natural breakdown (i.e., decomposition) into sub-systems representing a well-understood combinatorial structure.

There are different techniques for executing this decomposition of the original problem into master and sub-problems. The theory behind the method relies on the Dantzig-Wolfe decomposition.

In summary, when the master problem is solved, we obtain dual prices for each of its constraints. This information is then used in the objective function of the subproblem. The subproblem is solved; if its objective value is negative, a variable with negative reduced cost has been identified. This variable is added to the master problem, and the master problem is re-solved. Re-solving the master problem generates a new set of dual values, and the process is repeated until no variable with negative reduced cost is found. When the subproblem returns a solution with non-negative reduced cost, we can conclude that the solution to the master problem is optimal.
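The loop just described can be summarized in pseudocode (RMP = master problem restricted to the current subset of columns):

```
initialize the RMP with a subset of columns that yields a feasible solution
repeat:
    solve the LP relaxation of the RMP  ->  primal solution y*, duals u*
    solve the subproblem: find the column a minimizing the
        reduced cost  c_a - u*^T a
    if the minimum reduced cost is >= 0:
        stop: y* is optimal for the full problem
    else:
        add the new column a to the RMP
```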

**Methodology in detail**

To illustrate the algorithm, we will use a common example: the one-dimensional cutting-stock problem.

__Problem Overview__

Given:

- a set of item types I,
- for every item type i ∈ I, its length L_{i} and the number of pieces R_{i} to be produced,
- the length W of the starting objects to be cut.

Objective:

To find the minimum number of objects needed to satisfy the demand of all item types.

Model:

The problem can be modeled as follows:

*min Σ_{s∈S} y_{s}*

*s.t. Σ_{s∈S} N_{is} y_{s} ≥ R_{i}, i ∈ I*

*y_{s} ∈ Z_{+}, s ∈ S*

where:

S: set of all possible cutting patterns that can be used to obtain item types in I from the original objects of length W;

N_{is}: number of pieces of type i ∈ I in the cutting pattern s ∈ S;

y_{s}: number of original objects to be cut with pattern s ∈ S.

The algorithm to solve this problem is built on the solution of the continuous relaxation of the above model, i.e., the model obtained by replacing the integrality constraints y_{s} ∈ Z_{+} with the constraints y_{s} ≥ 0, s ∈ S.

Sometimes |S| is so large that enumerating all patterns is not practical. For this purpose, the column generation procedure below can be used:

Step 0: initialize the problem

Generate a subset S′ of patterns for which the problem has a feasible solution (a typical initialization is to start with the |I| single-item cutting patterns).

Step 1: formulation and solution of the master problem

Solve the master problem restricted to the patterns (i.e., variables) y_{s} with s ∈ S′.

By solving this problem one obtains a primal optimal solution y∗ and a dual optimal solution u∗ such that y∗ and u∗ satisfy the complementary slackness conditions (for example, this can be done with the simplex method).

Step 2: solution of the subproblem

In other words, the next step is to find the solution of the following integer linear programming problem (called the subproblem, or slave problem), with |I| variables and one constraint:

*max Σ_{i∈I} u∗_{i} z_{i}*

*s.t. Σ_{i∈I} L_{i} z_{i} ≤ W*

*z_{i} ∈ Z_{+}, i ∈ I*

Let z∗ denote the optimal solution of this problem.

Step 3: optimality check

As previously highlighted, an optimality check is needed to decide whether the optimal solution has been reached. The condition is:

If *Σ_{i∈I} u∗_{i} z∗_{i} ≤ 1*

then STOP: y∗ is an optimal solution of the full continuous relaxation (including all patterns in S). Otherwise, update the master problem by including in S′ the pattern γ defined by N_{iγ} = z∗_{i} (this means that column z∗ has to be included in the constraint matrix) and go to Step 1.

Finally, one has to go from the optimal solution of the continuous relaxation to a heuristic (i.e., not necessarily optimal but hopefully good) solution of the original problem with integrality constraints. This can be done in at least two different ways:

By rounding up the entries of y∗ (a good choice if these entries are large: 335.4 is not very different from 336). It is worth noting that rounding down is not allowed, since it would produce an infeasible integer solution;

By applying an integer linear programming method (for instance, Branch-and-Bound) to the last master problem that was generated; this is equivalent to solving the original problem (with integrality constraints) restricted to the "good" patterns (those in S′) found in the steps above.

**Numerical example: The Cutting Stock problem**

Suppose we want to solve a numerical instance of the one-dimensional cutting stock problem discussed in the theory section of this wiki.

__Problem Overview__

A company produces steel bars with diameter 45 millimeters and length 33 meters. The company also takes care of cutting the bars for their different customers, who each require different lengths. At the moment, the following demand forecast is expected and must be satisfied:

| Pieces needed | Piece length (m) | Type of item |
|---|---|---|
| 144 | 6 | 1 |
| 105 | 13.5 | 2 |
| 72 | 15 | 3 |
| 30 | 16.5 | 4 |
| 24 | 22.5 | 5 |

The objective is to establish the minimum number of steel bars needed to satisfy the total demand.

A possible model for the problem, proposed by Gilmore and Gomory in the 1960s, is the one below:

**Sets**

K = {1, 2, 3, 4, 5}: set of item types;

S: set of patterns (i.e., possible ways) that can be adopted to cut a given bar into portions of the needed lengths.

**Parameters**

M: bar length (before the cutting process)

L_{k} : length of item k ∈ K;

R_{k} : number of pieces of type k ∈ K required;

N_{ks} : number of pieces of type k ∈ K in pattern s ∈ S

**Decision variables**

Y_{s} : number of bars that should be portioned using pattern s ∈ S

**Model**

*min Σ_{s∈S} y_{s}*

*s.t. Σ_{s∈S} N_{ks} y_{s} ≥ R_{k}, k ∈ K*

*y_{s} ∈ Z_{+}, s ∈ S*

__Solving the problem__

The model assumes the availability of the set S and the parameters N_{ks}. To generate this data, one would have to list all possible cutting patterns. However, the number of possible cutting patterns is enormous, which is why a direct implementation of the model above is not practical in real-world problems. This is when it makes sense to solve the continuous relaxation of the above model: in reality, the demand figures are so high that the number of bars to cut is also large, and therefore a good solution can be determined by rounding up to the next integer each variable y_{s} found by solving the continuous relaxation. In addition, the solution of the relaxed problem becomes the starting point for the application of an exact solution method (for instance, Branch-and-Bound).

Key take-away: In the next steps of this example we will analyze how to solve the continuous relaxation of the model.

As a starting point, we need any feasible solution. Such a solution can be constructed as follows:

- Consider the |K| single-item cutting patterns, where pattern k contains N_{kk} = ⌊M/L_{k}⌋ pieces of type k;
- Set y_{k} = ⌈R_{k}/N_{kk}⌉ for pattern k (the pattern containing only pieces of type k).

This solution could also be arrived at by applying the simplex method to the model (without integrality constraints), considering only the decision variables that correspond to the above single-item patterns:

*min y_{1} + y_{2} + y_{3} + y_{4} + y_{5}*

*s.t. 5y_{1} ≥ 144*

*2y_{2} ≥ 105*

*2y_{3} ≥ 72*

*2y_{4} ≥ 30*

*y_{5} ≥ 24*

*y_{1}, y_{2}, y_{3}, y_{4}, y_{5} ≥ 0*

In fact, solving this problem (for example, with the CPLEX solver in GAMS) gives the solution below:

| Variable | Value |
|---|---|
| y_{1} | 28.8 |
| y_{2} | 52.5 |
| y_{3} | 36 |
| y_{4} | 15 |
| y_{5} | 24 |
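As an illustrative sketch (using SciPy's `linprog` rather than the GAMS/CPLEX setup referenced in the text), the single-item patterns and this restricted master can be reproduced as follows; the ≥ constraints are negated into the ≤ form `linprog` expects, so the duals are the sign-flipped HiGHS marginals:

```python
import math
import numpy as np
from scipy.optimize import linprog

M = 33.0                                     # bar length (m)
L = [6.0, 13.5, 15.0, 16.5, 22.5]            # item lengths L_k
R = np.array([144, 105, 72, 30, 24], float)  # demands R_k

# Single-item pattern k yields floor(M / L_k) pieces of type k.
N = [math.floor(M / Lk) for Lk in L]         # N = [5, 2, 2, 2, 1]
A = np.diag(N).astype(float)

# min sum(y)  s.t.  A y >= R, y >= 0   (stated as -A y <= -R for linprog)
res = linprog(np.ones(5), A_ub=-A, b_ub=-R, bounds=(0, None), method="highs")
y = res.x                    # primal: 28.8, 52.5, 36, 15, 24
u = -res.ineqlin.marginals   # duals:  0.2, 0.5, 0.5, 0.5, 1
```

The duals recovered here are exactly the multipliers u^{T} = c_{B}^{T}B^{−1} of the single-item basis, which are used in the pricing step below.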

Next, a new possible pattern (number 6) will be considered. This pattern contains one piece of item type 1 and one piece of item type 5. The question is whether the solution would remain optimal if this new pattern were allowed. Duality helps answer this question. At every iteration of the simplex method, the outcome is a feasible basic solution (corresponding to some basis B) for the primal problem and a dual solution (the multipliers u^{T} = c_{B}^{T}B^{−1}) that satisfy the complementary slackness conditions. (Note: the dual solution will be feasible only at the last iteration.)

The inclusion of the new pattern "6" corresponds to including a new variable in the primal problem, with objective cost 1 (as each time pattern 6 is chosen, one bar is cut) and corresponding to the following column in the constraint matrix:

*D_{6} = [1 0 0 0 1]^{T}*

This new variable creates a new dual constraint. We then have to check whether this constraint is violated by the current dual solution (or, in other words, whether the reduced cost of the new variable with respect to basis B is negative).

The new dual constraint is:

*1*u _{1} + 0*u_{2} + 0*u_{3} + 0*u_{4} + 1*u_{5} ≤ 1*

The solution of the dual problem can be computed with different software packages, or by hand. For this example, it is:

| Dual variable | Value |
|---|---|
| u_{1} | 0.2 |
| u_{2} | 0.5 |
| u_{3} | 0.5 |
| u_{4} | 0.5 |
| u_{5} | 1 |

Since 0.2+1 = 1.2 > 1, the new constraint is violated.

This means that the current primal solution (in which the new variable is y_{6} = 0) may not be optimal anymore (although it is still feasible). The fact that the dual constraint is violated means the associated primal variable has negative reduced cost:

*c̄_{6} = c_{6} − u^{T}D_{6} = 1 − 1.2 = −0.2*

To improve the solution, the next step is to let y_{6} enter the basis. To do so, we modify the problem by inserting the new variable as below:

*min y_{1} + y_{2} + y_{3} + y_{4} + y_{5} + y_{6}*

*s.t. 5y_{1} + y_{6} ≥ 144*

*2y_{2} ≥ 105*

*2y_{3} ≥ 72*

*2y_{4} ≥ 30*

*y_{5} + y_{6} ≥ 24*

*y_{1}, y_{2}, y_{3}, y_{4}, y_{5}, y_{6} ≥ 0*
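This re-solve can be sketched with the same SciPy setup as before (an illustration, not the original GAMS run): appending the column [1 0 0 0 1]^{T} for pattern 6 improves the objective.

```python
import numpy as np
from scipy.optimize import linprog

R = np.array([144, 105, 72, 30, 24], float)   # demands

# Columns: the five single-item patterns plus the new pattern 6 = [1 0 0 0 1]^T.
A = np.column_stack([np.diag([5, 2, 2, 2, 1]).astype(float),
                     np.array([1, 0, 0, 0, 1], float)])

# min sum(y)  s.t.  A y >= R,  y >= 0
res = linprog(np.ones(6), A_ub=-A, b_ub=-R, bounds=(0, None), method="highs")
print(res.fun)    # objective improves to 151.5 (was 156.3 without pattern 6)
print(res.x[5])   # the new pattern is used: y_6 = 24
```

Pattern 6 takes over the demand of type 5 while also contributing to type 1, which is why the bar count drops.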

If this problem is solved with the simplex method, the optimal solution is found, but restricted to patterns 1 to 6 only. If a new pattern becomes available, one should decide whether to use it by proceeding as above. The difficulty is how to find a pattern (i.e., a variable; i.e., a column of the matrix) whose reduced cost is negative (meaning it would be convenient to include it in the formulation). Notice that the number of possible patterns is exponentially large, and the patterns are not even known explicitly. The question then is:

*Given a basic optimal solution for the problem in which only some variables are included, how can we find (if any exists) a variable with negative reduced cost (i.e., a constraint violated by the current dual solution)?*

This question can be transformed into an optimization problem: in order to see whether a variable with negative reduced cost exists, we can look for the minimum of the reduced costs of all possible variables and check whether this minimum is negative:

*c̄ = min_{z} (1 − u^{T}z)*

Every column of the constraint matrix corresponds to a cutting pattern, and every entry of the column says how many pieces of a certain type are in that pattern. For z to be a possible column of the constraint matrix, the following condition must be satisfied:

*Σ_{k∈K} L_{k} z_{k} ≤ M, z_{k} ∈ Z_{+} for all k ∈ K*

This converts the problem of finding a variable with negative reduced cost into the integer linear programming problem below:

*min 1 − Σ_{k∈K} u_{k} z_{k}*

*s.t. Σ_{k∈K} L_{k} z_{k} ≤ M, z_{k} ∈ Z_{+}*

which, in turn, is equivalent to the formulation below (we just write the objective in maximization form and ignore the additive constant 1):

*max Σ_{k∈K} u_{k} z_{k}*

*s.t. Σ_{k∈K} L_{k} z_{k} ≤ M, z_{k} ∈ Z_{+}*

The coefficients z_{k} of a column with negative reduced cost can be found by solving the above integer "knapsack" problem (a classical problem in integer programming).
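As a sketch (not the original GAMS model), this pricing knapsack can be solved exactly by dynamic programming once the fractional lengths are scaled to integers (here by a factor of 2, so the bar length 33 becomes 66); the function name `price` is illustrative:

```python
def price(u, lengths, capacity, scale=2):
    """Unbounded integer knapsack: max u.z  s.t.  lengths.z <= capacity, z integer."""
    w = [int(l * scale) for l in lengths]     # integer-scaled item lengths
    cap = int(capacity * scale)
    best = [0.0] * (cap + 1)                  # best[c] = max value within capacity c
    take = [None] * (cap + 1)                 # last item added to reach best[c]
    for c in range(1, cap + 1):
        for k, wk in enumerate(w):
            if wk <= c and best[c - wk] + u[k] > best[c]:
                best[c], take[c] = best[c - wk] + u[k], k
    z, c = [0] * len(w), cap                  # recover a maximizing pattern
    while take[c] is not None:
        z[take[c]] += 1
        c -= w[take[c]]
    return best[cap], z

# Duals from the single-item restricted master: u = (0.2, 0.5, 0.5, 0.5, 1).
value, z = price([0.2, 0.5, 0.5, 0.5, 1.0], [6, 13.5, 15, 16.5, 22.5], 33)
# value is 1.2 > 1, so a column with negative reduced cost (1 - 1.2 = -0.2) exists.
```

Note the maximizer is not unique here (one piece each of types 1 and 5, or one of type 1 plus two of type 2, both reach 1.2), so the DP may return either pattern.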

In our example, if we start from the problem restricted to the five single-item patterns, the above problem reads:

*max 0.2z_{1} + 0.5z_{2} + 0.5z_{3} + 0.5z_{4} + z_{5}*

*s.t. 6z_{1} + 13.5z_{2} + 15z_{3} + 16.5z_{4} + 22.5z_{5} ≤ 33*

*z_{1}, z_{2}, z_{3}, z_{4}, z_{5} ∈ Z_{+}*

which has the following optimal solution:

*z ^{T} = [1 0 0 0 1]*

This matches the pattern D_{6} introduced earlier on this page.

__Optimality test__

If the optimal value of the subproblem is at most 1, i.e.

*Σ_{k∈K} u_{k} z∗_{k} ≤ 1*

then y∗ is an optimal solution of the full continuous relaxed problem (that is, including all patterns in S).

If this condition does not hold, we update the master problem by including in S′ the pattern λ defined by N_{kλ} = z∗_{k} (in practical terms, the column z∗ is added to the constraint matrix). Then, go to Step 1.

For this example, the optimal value of the subproblem is 0.2 + 1 = 1.2 > 1, so the optimality condition fails: pattern 6 is added to the master problem and we go back to Step 1, as described in the algorithm discussion on this page. The procedure is repeated until the subproblem's optimal value no longer exceeds 1, at which point an optimal solution of the relaxed continuous problem has been found.

**Algorithm discussion**

The critical part of the method is Step 2, i.e., generating the new columns. It is not reasonable to compute the reduced costs of all variables y_{s}, s ∈ S, as this procedure would simply reduce to the simplex method; indeed, |S| can be very large (as in the cutting-stock problem) or, for some reason, it might not be possible or convenient to enumerate all decision variables. It is then necessary to devise a specific column generation algorithm for each problem; only if such an algorithm exists (and is efficient) can the method be fully developed. In the one-dimensional cutting-stock problem, we transformed the column generation subproblem into an easily solvable integer linear programming problem. In other cases, the computational effort required to solve the subproblem may be so high as to make the full procedure impractical.
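Putting the pieces together, the full loop for the cutting-stock instance above can be sketched as follows (an illustration under the assumptions used earlier: SciPy's `linprog` as the master LP solver and a dynamic-programming knapsack as the pricing routine; `solve_master` and `price` are illustrative names):

```python
import math
import numpy as np
from scipy.optimize import linprog

M, L = 33.0, [6.0, 13.5, 15.0, 16.5, 22.5]    # bar length, item lengths
R = np.array([144, 105, 72, 30, 24], float)   # demands

def solve_master(cols):
    """LP relaxation of the restricted master: min #bars s.t. demand is covered."""
    A = np.array(cols, float).T               # one column per pattern
    res = linprog(np.ones(len(cols)), A_ub=-A, b_ub=-R,
                  bounds=(0, None), method="highs")
    return res.fun, res.x, -res.ineqlin.marginals

def price(u, scale=2):
    """Pricing: unbounded knapsack max u.z s.t. L.z <= M, z integer (DP)."""
    w, cap = [int(l * scale) for l in L], int(M * scale)
    best, take = [0.0] * (cap + 1), [None] * (cap + 1)
    for c in range(1, cap + 1):
        for k, wk in enumerate(w):
            if wk <= c and best[c - wk] + u[k] > best[c]:
                best[c], take[c] = best[c - wk] + u[k], k
    z, c = [0] * len(w), cap                  # recover a maximizing pattern
    while take[c] is not None:
        z[take[c]] += 1
        c -= w[take[c]]
    return best[cap], z

# Step 0: start from the |K| single-item patterns.
cols = [[math.floor(M / L[k]) if i == k else 0 for i in range(5)]
        for k in range(5)]
while True:
    obj, y, u = solve_master(cols)            # Step 1: master problem
    value, z = price(u)                       # Step 2: subproblem
    if value <= 1 + 1e-9:                     # Step 3: optimality check
        break
    cols.append(z)                            # negative reduced cost: add column

# obj is now the optimum of the full continuous relaxation;
# rounding y up gives a feasible integer solution (the heuristic from the text).
```

Termination is guaranteed because each added pattern is new (existing columns have non-negative reduced cost at the master's optimum) and the number of feasible integer patterns is finite.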

**Applications**

As previously mentioned, column generation techniques are most relevant for problems with a high ratio of variables to constraints. Some common applications are:

- Network design
- Logistics, for example determining optimal paths/routes for vehicles; column generation algorithms are used for large delivery networks, often in combination with other methods, to implement real-time solutions for on-demand logistics
- Supply chain scheduling problems

**Conclusions**

Column generation is a way of beginning with a small, manageable part of a problem (specifically, a few of the variables), solving that part, analyzing the partial solution to determine the next part of the problem (specifically, one or more variables) to add to the model, and then re-solving the extended model. These steps are repeated until an optimal solution to the entire problem is achieved.

More formally, column generation is a way of solving a linear programming problem that adds columns (corresponding to constrained variables) during the pricing phase of the simplex method of solving the problem. Generating a column in the primal simplex formulation of a linear programming problem corresponds to adding a constraint in its dual formulation.

Column generation provides an advantage over applying the simplex method directly, since the solver does not need to access all of the problem's variables simultaneously. In fact, a solver can begin work with only the basis (a particular subset of the constrained variables) and then use reduced costs to decide which other variables to access as needed.

**References**

- http://www.math.chalmers.se/Math/Research/Optimization/reports/masters/PerSjogren-final.pdf
- L. A. Wolsey, *Integer Programming*. Wiley, 1998.
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.396.8938&rep=rep1&type=pdf
- https://www.researchgate.net/publication/220209271_Acceleration_of_cutting-plane_and_column_generation_algorithms_Applications_to_network_design
- https://link.springer.com/article/10.1007/s10479-018-2911-2
- https://www.ac.tuwien.ac.at/wp/wp-content/uploads/Martin-Riedler-col_gen-1.pdf
- L. De Giovanni, M. Di Summa, G. Zambelli, *Methods and Models for Combinatorial Optimization*.
- Dantzig-Wolfe decomposition. *Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Dantzig-Wolfe_decomposition&oldid=50750^{[1]}

- ↑
^{1.0}^{1.1}s