

Poster

Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation

Xinyi Wang · Alfonso Amayuelas · Kexun Zhang · Liangming Pan · Wenhu Chen · William Wang


Abstract:

Pre-trained language models (LMs) are able to perform complex reasoning without explicit fine-tuning. To understand how pre-training with a next-token prediction objective contributes to the emergence of such reasoning capability, we hypothesize that LMs can derive new conclusions by aggregating indirect reasoning paths seen at pre-training time. We study two important cases of reasoning: logical reasoning with knowledge graphs (KGs) and math reasoning with math word problems (MWPs), and formalize the reasoning paths as random walk paths on the knowledge/reasoning graphs. Analyses of learned LM distributions suggest that a logical-rule-weighted sum of relevant random walk path probabilities is a reasonable way to explain how LMs reason. Experiments and analysis on multiple KG and MWP datasets reveal the effect of training on random walk paths and suggest that augmenting unlabeled random walk reasoning paths of a suitable length can improve real-world multi-step reasoning performance.
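To make the aggregation view concrete, here is a minimal sketch of rule-weighted aggregation of random-walk path probabilities on a toy knowledge graph. The graph, the entity and relation names, and the rule weights are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: aggregate random-walk path probabilities, weighted by
# logical rules, to score an unseen KG triple. All names and weights are
# illustrative assumptions.

from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples.
triples = [
    ("alice", "mother_of", "bob"),
    ("bob", "father_of", "carol"),
    ("alice", "grandmother_of", "carol"),
]

# Adjacency list: head -> list of (relation, tail).
graph = defaultdict(list)
for h, r, t in triples:
    graph[h].append((r, t))


def random_walk_path_prob(start, relations, target):
    """Probability that a uniform random walk from `start`, constrained to
    follow the given relation sequence, ends at `target`."""
    frontier = {start: 1.0}
    for rel in relations:
        nxt = defaultdict(float)
        for node, prob in frontier.items():
            edges = graph[node]
            if not edges:
                continue
            step = prob / len(edges)  # uniform choice over outgoing edges
            for r, t in edges:
                if r == rel:
                    nxt[t] += step
        frontier = nxt
    return frontier.get(target, 0.0)


# Hypothetical rules whose bodies entail "grandmother_of", each with an
# assumed weight (in the paper such weights would be learned/derived).
rules = [
    (["mother_of", "father_of"], 0.7),
    (["mother_of", "mother_of"], 0.3),
]

# Rule-weighted sum of random-walk path probabilities for the query triple.
score = sum(w * random_walk_path_prob("alice", body, "carol") for body, w in rules)
print(f"aggregated score for (alice, grandmother_of, carol): {score:.3f}")
```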
