Poster

PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming

Bingheng Li · Linxin Yang · Yupeng Chen · Senmiao Wang · Qian Chen · Haitao Mao · Yao Ma · Akang Wang · Tian Ding · Jiliang Tang · Ruoyu Sun


Abstract: Solving large-scale linear programming (LP) problems is an important task in various areas such as communication networks, power systems, finance, and logistics. Recently, two distinct approaches have emerged to expedite LP solving: (i) first-order methods (FOMs); (ii) learning to optimize (L2O). In this work, we propose an FOM-unrolled neural network (NN) called PDHG-Net, and a two-stage L2O method for solving large-scale LP problems. The new architecture PDHG-Net is designed by unrolling the recently emerged PDHG method into a neural network, combined with channel-expansion techniques borrowed from graph neural networks. We prove that the proposed PDHG-Net can recover the PDHG algorithm, and thus can approximate optimal solutions of LP instances with a polynomial number of neurons. We propose a two-stage inference approach: first use PDHG-Net to generate an approximate solution, and then apply the PDHG algorithm to further improve the solution. Experiments show that our approach can significantly accelerate LP solving, achieving up to a 3$\times$ speedup compared to FOMs for large-scale LP problems.
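For readers unfamiliar with the base iteration being unrolled, a minimal sketch of the standard PDHG update for an equality-constrained LP is shown below. This is plain vanilla PDHG, not the paper's PDHG-Net or the two-stage method; the problem form (min c^T x s.t. Ax = b, x >= 0), the step-size rule, and the iteration count are illustrative assumptions.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=5000):
    """Vanilla PDHG for the LP  min c^T x  s.t.  Ax = b, x >= 0.

    Illustrative sketch only (not the paper's PDHG-Net). Step sizes
    tau, sigma are set so that tau * sigma * ||A||_2^2 < 1, a standard
    sufficient condition for convergence of PDHG.
    """
    m, n = A.shape
    tau = sigma = 0.9 / np.linalg.norm(A, 2)  # tau*sigma*||A||^2 = 0.81 < 1
    x = np.zeros(n)   # primal iterate
    y = np.zeros(m)   # dual iterate (multipliers for Ax = b)
    for _ in range(iters):
        # Primal step: gradient move on c - A^T y, then project onto x >= 0.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual step with the usual extrapolated primal point 2*x_new - x.
        y = y + sigma * (b - A @ (2 * x_new - x))
        x = x_new
    return x, y
```

In the paper's two-stage approach, an iteration like this serves two roles: its update rule is the template unrolled into the network's layers, and the plain algorithm is run afterwards, warm-started from the network's output, to polish the approximate solution.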
