

Poster

Best-Fit Data Packing: Fewer Truncations Improve Language Modeling

Hantian Ding · Zijian Wang · Giovanni Paolini · Varun Kumar · Anoop Deoras · Dan Roth · Stefano Soatto


Abstract:

In large language model training, input documents are typically concatenated and then segmented into sequences of equal length to avoid padding tokens. Despite its efficiency, this approach inevitably breaks many documents into incomplete pieces, leading to excessive truncations that hinder the model from learning to compose logically coherent and factually consistent content grounded in the complete context. To mitigate this problem, we propose Best-fit Packing, a method that packs documents into training sequences through length-aware combinatorial optimization. Our method completely eliminates unnecessary truncations while retaining the same training efficiency as concatenation. Empirical results from both text and code pre-training show that our method achieves superior performance (e.g., +4.7% on reading comprehension; +16.8% in context following; and +9.2% on program synthesis), and effectively reduces closed-domain hallucination by up to 58.3%.
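At its core, the packing step is a bin-packing problem: documents longer than the context window are split into context-length chunks, and the resulting chunks are assembled into fixed-capacity training sequences with no further truncation. The sketch below illustrates this with best-fit decreasing; the function name `best_fit_pack` and the sorted-free-list bookkeeping are illustrative assumptions for exposition, not the authors' optimized implementation.

```python
from bisect import bisect_left, insort

def best_fit_pack(doc_lengths, max_len):
    """Pack document chunks into training sequences of capacity `max_len`
    via best-fit decreasing, so packing itself never truncates a chunk."""
    # Split documents longer than the context window into max_len-sized
    # chunks; these boundary cuts are the only truncations that remain.
    chunks = []
    for n in doc_lengths:
        q, r = divmod(n, max_len)
        chunks.extend([max_len] * q)
        if r:
            chunks.append(r)

    bins = []   # bins[i]: chunk lengths assigned to training sequence i
    free = []   # sorted (remaining_capacity, bin_index) pairs
    for c in sorted(chunks, reverse=True):
        # Best fit: the bin with the smallest remaining capacity >= c.
        i = bisect_left(free, (c, -1))
        if i == len(free):
            bins.append([c])                       # no bin fits: open a new sequence
            insort(free, (max_len - c, len(bins) - 1))
        else:
            rem, j = free.pop(i)                   # tightest fitting bin
            bins[j].append(c)
            insort(free, (rem - c, j))
    return bins

# Example with a 2048-token context: the 3000-token document is chunked
# once at the context boundary; every other document stays intact.
print(best_fit_pack([3000, 1200, 800, 500], max_len=2048))
# -> [[2048], [1200, 800], [952, 500]]
```

Keeping the free capacities in a sorted list makes each placement a binary search plus an insertion, which is one reasonable way to keep the greedy pass fast on large corpora.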
