

Poster

MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models

Justin Chen · Swarnadeep Saha · Elias Stengel-Eskin · Mohit Bansal


Abstract:

Multi-agent interactions between Large Language Model (LLM) agents have shown major improvements on diverse reasoning tasks. However, these methods involve long generations from multiple models across several rounds, making them expensive. Moreover, these multi-agent approaches fail to provide a single, final model for efficient inference. To address this, we introduce MAGDi, a new method for structured distillation of the reasoning interactions between multiple LLMs into smaller LMs. MAGDi teaches smaller models by representing multi-agent interactions as graphs, augmenting a base student model, and distilling knowledge using three objective functions: next-token prediction, a contrastive loss between correct and incorrect reasoning, and a graph-based objective to model the interaction structure. Experiments on seven widely used commonsense and math reasoning benchmarks show that MAGDi improves the reasoning capabilities of smaller models, outperforming several methods that distill from either a single teacher or multiple teachers. We conduct extensive analyses showing that MAGDi (1) scales positively with stronger base student models, (2) enhances generalizability to out-of-domain tasks, and (3) obtains larger improvements when applying the inference-time technique of self-consistency, which relies on model diversity.
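The three training objectives named in the abstract can be sketched as a weighted combined loss. The sketch below is an illustrative assumption, not the paper's implementation: the function names, the toy graph-smoothing score, and the loss weights are all hypothetical, and the real method operates on LM hidden states rather than scalar scores.

```python
import math

def next_token_loss(token_logprobs):
    # Standard next-token prediction: average negative log-likelihood
    # over the tokens of a teacher's reasoning chain.
    return -sum(token_logprobs) / len(token_logprobs)

def contrastive_loss(pos_score, neg_score, margin=1.0):
    # Margin loss pushing the student's score for correct reasoning
    # above its score for incorrect reasoning.
    return max(0.0, margin - (pos_score - neg_score))

def graph_loss(node_scores, edges, node_labels):
    # Illustrative graph objective: binary cross-entropy on node
    # correctness, where each node's score is averaged with those of
    # its predecessors to reflect the interaction structure.
    preds_of = {i: [] for i in range(len(node_scores))}
    for src, dst in edges:  # edge (src, dst): dst responds to src
        preds_of[dst].append(src)
    losses = []
    for i, label in enumerate(node_labels):
        neighborhood = [node_scores[i]] + [node_scores[j] for j in preds_of[i]]
        s = sum(neighborhood) / len(neighborhood)
        p = 1.0 / (1.0 + math.exp(-s))  # probability the node is correct
        losses.append(-(label * math.log(p) + (1 - label) * math.log(1 - p)))
    return sum(losses) / len(losses)

def magdi_loss(token_logprobs, pos_score, neg_score,
               node_scores, edges, node_labels,
               w_nt=1.0, w_ct=1.0, w_gr=1.0):
    # Weighted sum of the three objectives (weights are assumptions).
    return (w_nt * next_token_loss(token_logprobs)
            + w_ct * contrastive_loss(pos_score, neg_score)
            + w_gr * graph_loss(node_scores, edges, node_labels))
```

The graph here is a toy two-node interaction (node 1 responds to node 0); in the paper, nodes are agent responses labeled correct or incorrect and edges encode the multi-round discussion structure.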
