

Poster

Larimar: LLMs with External Episodic Memory Control

Payel Das · Subhajit Chaudhury · Elliot Nelson · Igor Melnyk · Sarath Swaminathan · Sophie Dai · Aurelie Lozano · Georgios Kollias · Vijil Chenthamarakshan · Jiri Navratil · Soham Dan · Pin-Yu Chen


Abstract:

Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar, a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar not only attains accuracy comparable to that of the most competitive baselines, even in the challenging sequential editing setup, but also excels in speed, yielding speed-ups of up to 100x depending on the base LLM, as well as in flexibility, since the proposed architecture is simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting and input context length generalization with Larimar and show their effectiveness.
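The core idea of editing via memory writes rather than weight updates can be illustrated with a toy sketch. This is not Larimar's actual architecture (which uses a distributed, generative key-value memory coupled to an LLM encoder-decoder); all names here (`EpisodicMemory`, `generate`, `base_lm`) are illustrative placeholders, and the memory is a plain dictionary standing in for the learned memory module:

```python
class EpisodicMemory:
    """Toy external memory: one-shot writes, selective forgetting."""

    def __init__(self):
        self._store = {}  # prompt -> edited answer

    def write(self, prompt, answer):
        # One-shot knowledge update: a single memory write,
        # no retraining or fine-tuning of the base model.
        self._store[prompt] = answer

    def forget(self, prompt):
        # Selective fact forgetting: drop one stored edit only.
        self._store.pop(prompt, None)

    def read(self, prompt):
        return self._store.get(prompt)


def generate(prompt, memory, base_lm):
    """Memory-conditioned generation: an edited fact in memory
    takes precedence over the frozen base LLM's answer."""
    hit = memory.read(prompt)
    return hit if hit is not None else base_lm(prompt)


# Usage: base_lm is a stand-in for a frozen LLM's output.
base_lm = lambda p: "stale answer"
mem = EpisodicMemory()
mem.write("Who is the UK prime minister?", "edited answer")
print(generate("Who is the UK prime minister?", mem, base_lm))  # edited answer
mem.forget("Who is the UK prime minister?")
print(generate("Who is the UK prime minister?", mem, base_lm))  # stale answer
```

Because edits live outside the model's weights, the same mechanism works with any base LLM, which is the sense in which such an architecture is LLM-agnostic.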
