

Poster

Predicting Lagrangian Multipliers for Mixed Integer Linear Programs

Francesco Demelas · Joseph Le Roux · Mathieu Lacroix · Axel Parmentier


Abstract:

Lagrangian relaxation stands among the most efficient approaches for solving Mixed Integer Linear Programs (MILPs) with difficult constraints. Given any duals for these constraints, called Lagrangian Multipliers (LMs), it returns a bound on the optimal value of the MILP, and Lagrangian methods seek the LMs giving the best such bound. But these methods generally rely on iterative algorithms resembling gradient descent to maximize the concave, piecewise-linear dual function: the computational burden grows quickly with the number of relaxed constraints. We introduce a deep learning approach that bypasses the descent, effectively amortizing per-instance optimization. A probabilistic encoder based on a graph neural network computes high-dimensional representations of the relaxed constraints, which a decoder turns into LMs. We train the encoder and the decoder jointly by directly optimizing the bound obtained from the predicted multipliers. Experiments on two widely known problems, Multi-Commodity Network Design and Generalized Assignment, show that our approach closes up to 85% of the gap between the continuous relaxation and the best Lagrangian bound, and provides a high-quality warm start for descent-based Lagrangian methods.
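To make the training principle concrete, the sketch below is a minimal PyTorch illustration under simplifying assumptions, not the paper's implementation: the easy set is taken as X = {0,1}^n so the inner Lagrangian subproblem decomposes per variable, a small MLP (the hypothetical MultiplierPredictor, with a made-up per-constraint feature construction) stands in for the probabilistic GNN encoder and decoder, and the training instances are random. Because the subproblem argmin is piecewise constant in the multipliers, backpropagating through the bound recovers the dual subgradient (envelope theorem), which is what lets the bound itself serve as the training loss.

```python
import torch
import torch.nn as nn

def lagrangian_bound(lam, c, A, b):
    """Dual bound g(lam) = min over x in {0,1}^n of c^T x + lam^T (b - A x).

    With X = {0,1}^n the inner minimization decomposes per variable:
    x_j = 1 exactly when the reduced cost (c - A^T lam)_j is negative.
    The comparison makes x_star constant w.r.t. lam, so autograd yields
    the subgradient b - A x_star (envelope theorem).
    """
    reduced_cost = c - A.t() @ lam               # (n,)
    x_star = (reduced_cost < 0).float()          # inner argmin, no grad by construction
    return reduced_cost @ x_star + lam @ b       # equals c^T x* + lam^T (b - A x*)

class MultiplierPredictor(nn.Module):
    """Hypothetical stand-in for the paper's GNN encoder + decoder:
    an MLP mapping per-constraint features to nonnegative multipliers."""
    def __init__(self, d_feat, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_feat, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1), nn.Softplus(),   # Softplus keeps lam >= 0
        )

    def forward(self, feats):                        # feats: (m, d_feat)
        return self.net(feats).squeeze(-1)

# Toy training loop on random instances (illustrative data, not the paper's).
torch.manual_seed(0)
n, m = 20, 5
model = MultiplierPredictor(d_feat=n + 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    c = torch.randn(n)
    A = torch.rand(m, n)
    b = 0.3 * A.sum(dim=1)
    feats = torch.cat([A, b.unsqueeze(1)], dim=1)    # one feature row per relaxed constraint
    lam = model(feats)
    loss = -lagrangian_bound(lam, c, A, b)           # maximize the dual bound
    opt.zero_grad(); loss.backward(); opt.step()

# Predicted multipliers can also warm-start classical subgradient ascent:
with torch.no_grad():
    lam = model(feats).clone()
    for _ in range(100):
        subgrad = b - A @ (c - A.t() @ lam < 0).float()
        lam = torch.clamp(lam + 0.05 * subgrad, min=0.0)  # project onto lam >= 0
```

The final loop shows the warm-start use mentioned in the abstract: starting subgradient ascent from the predicted multipliers rather than from zero, which is where amortized prediction pays off when the descent itself is expensive.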
