

Poster

Improving and Accelerating Retrieval-Augmented Generation with Superposition Prompting

Thomas Merth · Qichen Fu · Mohammad Rastegari · Mahyar Najibi


Abstract: Large language models (LLMs) have been widely adopted across industries for a variety of natural language processing tasks. Despite their recent popularity, LLMs exhibit significant drawbacks, particularly when processing long contexts. The computational cost of LLM inference scales quadratically with sequence length, making it expensive to deploy in some real-world text processing applications, such as retrieval-augmented generation (RAG). In the RAG setting, LLMs also exhibit the *distraction* phenomenon, where irrelevant context in the prompt tends to reduce generation quality. To address these drawbacks, we propose a novel prompting methodology, *superposition prompting*, which can be directly applied to any pre-trained transformer-based LLM *without the need for fine-tuning*. At a high level, superposition prompting allows the LLM to process input documents in parallel "prompt paths," discarding paths once they are deemed irrelevant. We demonstrate the capability of our method to simultaneously enhance accuracy and efficiency across a variety of question-answering benchmarks using multiple pre-trained LLMs. For example, our approach facilitates a $93\times$ reduction in compute time while *improving* accuracy by $37\%$ on the NaturalQuestions-Open dataset with the mpt-7b-instruct model.
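To make the "parallel prompt path" idea in the abstract concrete, here is a minimal sketch of the branch-score-prune flow, not the paper's implementation. The function names, the token-overlap relevance heuristic, and the top-k pruning policy below are all illustrative assumptions; the actual method scores and prunes paths using the LLM itself and reuses cached transformer computation across paths.

```python
# Illustrative sketch of processing retrieved documents as independent "prompt paths,"
# then discarding paths deemed irrelevant before generation. All names and the scoring
# heuristic are assumed placeholders, not the paper's actual implementation.

from dataclasses import dataclass


@dataclass
class PromptPath:
    document: str
    score: float = 0.0


def score_path(document: str, query: str) -> float:
    # Stand-in relevance score: word overlap between document and query.
    # (Assumed placeholder; the real method scores each path with the LLM.)
    doc_tokens = set(document.lower().split())
    query_tokens = set(query.lower().split())
    return len(doc_tokens & query_tokens) / max(len(query_tokens), 1)


def superposition_prompt(preamble: str, documents: list[str], query: str,
                         keep_top_k: int = 1) -> list[str]:
    # 1. Branch: build one independent path per retrieved document.
    paths = [PromptPath(document=d) for d in documents]

    # 2. Score each path against the query (processed in parallel in the real method).
    for path in paths:
        path.score = score_path(path.document, query)

    # 3. Prune: discard low-scoring paths, keeping only the top-k survivors.
    survivors = sorted(paths, key=lambda p: p.score, reverse=True)[:keep_top_k]

    # 4. Assemble the final prompt(s) from the surviving paths for generation.
    return [f"{preamble}\n\n{p.document}\n\nQuestion: {query}" for p in survivors]


if __name__ == "__main__":
    docs = [
        "The Eiffel Tower is located in Paris and was completed in 1889.",
        "Photosynthesis converts light energy into chemical energy in plants.",
    ]
    prompts = superposition_prompt("Answer using the context.", docs,
                                   "When was the Eiffel Tower completed?")
    print(prompts[0])
```

In this toy version the pruning step simply drops all but the highest-scoring path; the efficiency gains reported in the abstract come from avoiding quadratic attention over one long concatenated context and from eliminating irrelevant paths early.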
