

Poster

Calibration, Entropy Rates, and Memory in Language Models

Mark Braverman · Xinyi Chen · Sham Kakade · Karthik Narasimhan · Cyril Zhang · Yi Zhang

Keywords: [ Applications - Language, Speech and Dialog ] [ Natural Language Processing / Dialogue ] [ Information Theory and Estimation ] [ Deep Sequence Models ] [ Deep Generative Models ]


Abstract:

Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence model and the true distribution, and use these discrepancies to improve the model. Empirically, we show that state-of-the-art language models, including LSTMs and Transformers, are miscalibrated: the entropy rates of their generations drift dramatically upward over time. We then provide provable methods to mitigate this phenomenon. Furthermore, we show how this calibration-based approach can also be used to measure the amount of memory that language models use for prediction.
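As a rough illustration of the entropy-rate diagnostic described in the abstract (a sketch, not the authors' implementation), the snippet below estimates the model's per-step predictive entropy along its own sampled generations; an upward trend over the step index is the drift the paper reports. The model name, prompt, and sampling settings are assumptions chosen for illustration.

```python
# Hypothetical sketch (not the authors' code): track how the per-step
# predictive entropy of a causal LM's own generations evolves with position t.
# Sustained upward drift over t is the miscalibration described in the abstract.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # assumption: any autoregressive LM with a softmax head works
tok = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name).eval()

@torch.no_grad()
def entropy_per_step(prompt: str, gen_len: int = 200, n_samples: int = 8):
    """Average Shannon entropy (nats) of the next-token distribution at each
    generation step, averaged over n_samples sampled continuations."""
    ids = tok(prompt, return_tensors="pt").input_ids
    totals = torch.zeros(gen_len)
    for _ in range(n_samples):
        ctx = ids.clone()
        for t in range(gen_len):
            logits = model(ctx).logits[0, -1]               # next-token logits
            probs = torch.softmax(logits, dim=-1)
            totals[t] += -(probs * torch.log(probs + 1e-12)).sum()
            nxt = torch.multinomial(probs, 1)               # sample continuation
            ctx = torch.cat([ctx, nxt.view(1, 1)], dim=1)
    return totals / n_samples

# If ent[t] rises with t, the entropy rate of generations is drifting upward.
ent = entropy_per_step("The meaning of life is")
```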
