

Poster

Subequivariant Reinforcement Learning in 3D Multi-Object Physical Environments

Runfa Chen · Ling Wang · Yu Du · Fuchun Sun · Tianrui Xue · Jianwei Zhang · Wenbing Huang


Abstract:

Learning policies for multi-object systems in 3D environments is far more complicated than in single-object scenarios, because the global state space grows exponentially with the number of objects. One potential way to alleviate this exponential complexity is to divide the global space into independent local views that are invariant to transformations such as translations and rotations. To this end, this paper proposes Subequivariant Hierarchical Neural Networks (SHNN) to facilitate multi-object policy learning. In particular, SHNN first dynamically decouples the global space into local object-level graphs via task assignment. Second, it leverages subequivariant message passing over these local object-level graphs to construct invariant local reference frames, markedly compressing representational redundancy, particularly in gravity-affected environments. Furthermore, to overcome the limitations of existing benchmarks in capturing the subtleties of multi-object systems under Euclidean symmetry, we propose the Multi-object Benchmark (MoBen), a new suite of environments tailored to a wide range of multi-object reinforcement learning tasks. Extensive experiments demonstrate significant gains for SHNN over existing methods on the proposed benchmark, and comprehensive ablations verify the indispensability of task assignment and subequivariance.
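The core idea of subequivariance, as described in the abstract, is to relax full E(3) symmetry to the subgroup that preserves the gravity direction (translations plus rotations about the vertical axis). The toy sketch below is a hypothetical simplification, not the paper's actual SHNN layer: appending the gravity vector `g` to the relative geometric vectors before taking inner products yields features that are invariant to translations and yaw rotations, but correctly change under tilts that alter the relation to gravity. All function names here are illustrative assumptions.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the gravity (z) axis: the symmetry that is preserved."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def subequivariant_message(x_i, x_j, g=np.array([0.0, 0.0, -1.0])):
    """Toy subequivariant message between objects i and j (illustrative only).

    rel = x_i - x_j is translation-invariant; stacking it with the fixed
    gravity direction g and taking the Gram matrix V^T V produces scalars
    that are invariant under rotations about g, but NOT under arbitrary
    3D rotations -- i.e. subequivariant rather than fully O(3)-invariant.
    """
    rel = x_i - x_j                 # translation-invariant relative vector
    V = np.stack([rel, g], axis=1)  # 3 x 2 matrix of geometric vectors
    return V.T @ V                  # 2 x 2 Gram matrix of invariants
```

A quick check of the symmetry: rotating both objects about the z-axis leaves the message unchanged, while a tilt about a horizontal axis (which changes the configuration relative to gravity) does not.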
