

Poster

Emerging Representations of Formal Semantics in Language Models Trained on Programs

Charles Jin · Martin Rinard


Abstract:

We present evidence that language models (LMs) of code can learn to represent the formal semantics of programs, despite being trained only to perform next-token prediction. Specifically, we train a Transformer model on a synthetic corpus of programs written in a domain-specific language for navigating 2D grid world environments. Each program in the corpus is preceded by a (partial) specification in the form of several input-output examples. Despite providing no further inductive biases, we find that a probing classifier is able to extract increasingly accurate representations of program state from the LM hidden states over the course of training, suggesting that the LM acquires an emergent ability to interpret programs in the formal sense. To establish the validity of our results, we also develop a novel interventional baseline for disentangling what is represented by the LM vs. what is learned by the probe, which has broad applicability to semantic probing experiments. In summary, this paper does not propose any new techniques for training LMs of code, but develops an experimental framework for, and provides insights into, the acquisition and representation of formal semantics in statistical models of code.
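To illustrate the probing methodology the abstract describes, the sketch below trains a simple linear classifier to read out a toy notion of program state from frozen hidden states. Everything here is a hypothetical stand-in, not the paper's code: the "hidden states" are synthetic vectors in which each state occupies a noisy linear subspace, and the labels mimic a discrete program-state variable (e.g., the agent's facing direction in a grid world).

```python
# Hedged sketch of a semantic probing experiment. Assumption: hidden states
# that linearly encode a discrete program state should be decodable by a
# linear probe well above chance. All data is synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_examples, hidden_dim, n_states = 2000, 64, 4  # e.g., 4 facing directions

# Synthetic stand-in for LM hidden states: each program state corresponds
# to a random direction in representation space, plus Gaussian noise.
state_labels = rng.integers(0, n_states, size=n_examples)
state_directions = rng.normal(size=(n_states, hidden_dim))
hidden_states = (
    state_directions[state_labels]
    + 0.5 * rng.normal(size=(n_examples, hidden_dim))
)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, state_labels, test_size=0.25, random_state=0
)

# The probe: a logistic-regression classifier trained on hidden states.
# High held-out accuracy indicates the state is linearly decodable.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = probe.score(X_test, y_test)
print(f"probe accuracy: {accuracy:.2f} (chance = {1 / n_states:.2f})")
```

Note that high probe accuracy alone is ambiguous between information represented by the model and computation performed by the probe itself; this ambiguity is exactly what the paper's interventional baseline is designed to resolve.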
