Poster

On a Neural Implementation of Brenier's Polar Factorization

Nina Vesseron · Marco Cuturi


Abstract:

In 1991, Brenier proved a theorem that generalizes the polar decomposition for matrices (factorized as PSD times unitary) to any vector field F : R^d → R^d. The theorem, known as the polar factorization theorem, states that any field F can be recovered as the composition of the gradient of a convex function u with a measure-preserving map M, i.e. F = ∇u ∘ M. We propose a practical implementation of this intriguing theoretical result and explore possible uses within machine learning. The theorem is closely related to optimal transport theory, and we borrow from recent advances in neural transport to estimate the potential u and parameterize it as an input convex neural network. Using convex conjugacy, the map M can either be recovered as ∇u* ∘ F, or learned as an auxiliary network. Because M is not injective, however, we consider the extra task of estimating a multivalued map that can approximate M⁻¹ using bridge matching. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as sampling of densities that are not log-concave.
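As a hedged illustration of the matrix analogue the abstract invokes (not the paper's neural method): for a linear field F(x) = Bx, Brenier's factorization F = ∇u ∘ M reduces to the left polar decomposition B = PQ, with convex potential u(y) = ½ yᵀPy (so ∇u(y) = Py, P symmetric PSD) and measure-preserving map M(x) = Qx (Q orthogonal). The matrix B and the SVD-based construction below are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Sketch: for a linear field F(x) = B x, polar factorization F = grad(u) o M
# reduces to the left polar decomposition B = P Q, where
#   u(y) = 0.5 * y^T P y  (convex, since P is symmetric PSD),
#   M(x) = Q x            (orthogonal, hence Lebesgue-measure-preserving).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))  # hypothetical example field

# Left polar decomposition via SVD: B = U S V^T  =>  B = (U S U^T)(U V^T).
U, S, Vt = np.linalg.svd(B)
P = U @ np.diag(S) @ U.T   # symmetric PSD factor, plays the role of grad u
Q = U @ Vt                 # orthogonal factor, plays the role of M

assert np.allclose(B, P @ Q)                      # F = grad(u) o M
assert np.allclose(Q @ Q.T, np.eye(3))            # M preserves the measure

# Check the factorization pointwise at a sample x.
x = rng.standard_normal(3)
grad_u = lambda y: P @ y                          # gradient of u(y) = 0.5 y^T P y
assert np.allclose(B @ x, grad_u(Q @ x))

# Convex conjugacy in this quadratic case: grad(u*) = P^{-1}, so M = grad(u*) o F.
assert np.allclose(np.linalg.solve(P, B @ x), Q @ x)
```

The last assertion mirrors the recovery M = ∇u* ∘ F mentioned in the abstract: for the quadratic potential, ∇u* is simply P⁻¹.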
