Poster

Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective

Yang Chen · Cong Fang · Zhouchen Lin · Bing Liu


Abstract:

Foundation Models (FMs) have demonstrated remarkable insights into the relational dynamics of the world, leading to the crucial question: how do these models acquire an understanding of the world's hybrid relations? Traditional statistical learning, particularly for prediction problems, may overlook the rich and inherently structured information in the data, especially regarding the relationships between objects. We introduce a mathematical model that formalizes relational learning as hypergraph recovery to study the pre-training of FMs. In our framework, the world is represented as a hypergraph, with data abstracted as random samples from hyperedges. We theoretically examine the feasibility of a Pre-Trained Model (PTM) recovering this hypergraph and analyze the data efficiency in a minimax near-optimal style. By integrating rich graph theories into the realm of PTMs, our mathematical framework offers powerful tools for an in-depth understanding of pre-training from a unique perspective and can be used in various scenarios. As an example, we extend the framework to entity alignment in multimodal learning.
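To make the setup concrete, below is a minimal, self-contained sketch, not the paper's construction: a toy world hypergraph over named entities, pre-training data drawn as noisy random samples from its hyperedges, and recovery by thresholding empirical sample frequencies. All names here (VERTICES, TRUE_HYPEREDGES, sample_observation, recover_hypergraph) and the noise model are illustrative assumptions.

```python
import random
from collections import Counter

# Toy "world" hypergraph: vertices are entities; hyperedges are relations
# that may join more than two entities at once (hybrid relations).
VERTICES = ["sun", "moon", "earth", "tide", "plant", "light", "growth"]
TRUE_HYPEREDGES = [
    frozenset({"sun", "light"}),
    frozenset({"moon", "earth", "tide"}),              # a 3-ary relation
    frozenset({"sun", "light", "plant", "growth"}),    # a 4-ary relation
]

def sample_observation(rng, noise=0.1):
    """One data point: a random hyperedge, occasionally corrupted."""
    edge = set(rng.choice(TRUE_HYPEREDGES))
    if rng.random() < noise:
        # Swap one vertex for a random distractor to simulate noisy data.
        edge.discard(rng.choice(sorted(edge)))
        edge.add(rng.choice(VERTICES))
    return frozenset(edge)

def recover_hypergraph(samples, threshold):
    """Keep the vertex sets whose empirical frequency beats the noise floor."""
    counts = Counter(samples)
    return {e for e, c in counts.items() if c >= threshold * len(samples)}

rng = random.Random(0)
samples = [sample_observation(rng) for _ in range(2000)]
recovered = recover_hypergraph(samples, threshold=0.15)
print(recovered == set(TRUE_HYPEREDGES))  # should print True at this sample size
```

In this toy setting, recovery succeeds once each true hyperedge's sample count dominates the noise floor, which loosely mirrors the data-efficiency question the abstract studies in a minimax near-optimal sense.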
