

Poster

LoRA+: Efficient Low Rank Adaptation of Large Models

Soufiane Hayou · Nikhil Ghosh · Bin Yu


Abstract:

In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in (Hu et al., 2021) leads to suboptimal finetuning of models with large width. This is because the adapter matrices A and B in LoRA are updated with the same learning rate in Adam. Using scaling arguments for large-width networks, we demonstrate that using the same learning rate for both matrices does not allow efficient feature learning. We then show that this suboptimality of LoRA can be corrected simply by setting different learning rates for the LoRA adapter matrices A and B with a well-chosen fixed ratio. We call this proposed algorithm LoRA+. In our extensive experiments, LoRA+ improves finetuning speed (up to ∼2× speedup) and performance (1−2% improvements) at the same computational cost as LoRA.
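The sketch below illustrates the core idea of assigning the B matrix a larger learning rate than A by a fixed ratio, using PyTorch optimizer parameter groups. The LoRALinear class, the build_lora_plus_optimizer helper, and the ratio value of 16 are illustrative assumptions, not the authors' released implementation; the paper determines the appropriate ratio from its scaling analysis.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Minimal LoRA adapter: frozen W plus trainable low-rank update B A."""

    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        # Frozen pretrained weight (random here for illustration).
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # LoRA factors: A initialized small, B initialized to zero.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        # y = x W^T + x A^T B^T
        return x @ self.weight.T + (x @ self.lora_A.T) @ self.lora_B.T


def build_lora_plus_optimizer(model, base_lr=1e-4, lr_ratio=16.0):
    """Group parameters so that lora_B gets a learning rate scaled by lr_ratio
    relative to lora_A (the LoRA+ idea); ratio value is illustrative."""
    a_params, b_params = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        (b_params if "lora_B" in name else a_params).append(p)
    return torch.optim.AdamW([
        {"params": a_params, "lr": base_lr},             # lr for A
        {"params": b_params, "lr": base_lr * lr_ratio},  # larger lr for B
    ])


# Usage example: one LoRA layer finetuned with the two learning rates.
model = nn.Sequential(LoRALinear(1024, 1024, rank=8))
optimizer = build_lora_plus_optimizer(model, base_lr=1e-4, lr_ratio=16.0)
loss = model(torch.randn(4, 1024)).pow(2).mean()
loss.backward()
optimizer.step()
```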
