

Poster

ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models

Ziniu Li · Tian Xu · Yushun Zhang · Zhihang Lin · Yang Yu · Ruoyu Sun · Zhi-Quan Luo


Abstract:

Reinforcement Learning from Human Feedback (RLHF) is key for aligning Large Language Models (LLMs) and is typically paired with the Proximal Policy Optimization (PPO) algorithm. While PPO is a powerful method designed for general Reinforcement Learning (RL) tasks, it is overly sophisticated for LLMs, leading to significant memory and computation costs. To make RLHF more efficient, we present a tailored algorithm called ReMax. In particular, ReMax leverages three properties of RLHF that PPO does not exploit: fast simulation, deterministic transitions, and trajectory-level rewards. Building on the renowned REINFORCE algorithm, ReMax does not require training an additional value model as PPO does, and it is further enhanced with a new variance reduction technique. ReMax offers several benefits over PPO: it is simple to implement, eliminates four of PPO's hyper-parameters, cuts GPU memory usage, and shortens training time. When training a 7B model, ReMax uses about 46% less GPU memory than PPO and enables training on A800-80GB GPUs without the memory-saving offloading technique that PPO requires, which also makes PPO about 1.6 times slower. Applying ReMax to a Mistral-7B model resulted in a 94.78% win rate on the AlpacaEval leaderboard and a score of 7.739 on MT-bench, setting a new SOTA for open-source 7B models. These results show the effectiveness of ReMax while addressing the limitations of PPO.
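The sketch below illustrates the kind of REINFORCE-style update with a trajectory-level reward baseline that the abstract describes, where no learned value model is needed. It is a minimal illustration, not the authors' implementation: the objects `policy`, `reward_model`, and their methods `sample`, `greedy`, and `score` are hypothetical placeholders, and the choice of the greedy-decode reward as the variance-reduction baseline is an assumption made for concreteness.

```python
import torch


def remax_style_loss(policy, reward_model, prompt):
    """REINFORCE-style loss with a trajectory-level reward baseline.

    `policy` and `reward_model` are assumed wrappers: `policy.sample` returns
    token ids and their log-probabilities, `policy.greedy` returns a greedy
    decode, and `reward_model.score` returns a scalar trajectory-level reward.
    """
    # Sample a response and score it with the trajectory-level reward model.
    sampled_ids, log_probs = policy.sample(prompt)       # stochastic decode
    r_sample = reward_model.score(prompt, sampled_ids)   # scalar reward

    # Variance-reduction baseline: reward of a greedy decode (an assumed
    # instantiation of the abstract's "new variance reduction technique").
    with torch.no_grad():
        greedy_ids = policy.greedy(prompt)
        baseline = reward_model.score(prompt, greedy_ids)

    # REINFORCE gradient with baseline; no value model is trained.
    advantage = (r_sample - baseline).detach()
    loss = -(advantage * log_probs.sum())
    return loss
```

Because the reward is given only at the trajectory level and transitions are deterministic, a single scalar advantage multiplies the summed log-probabilities of the sampled response, which is what removes the need for PPO's value model and its associated hyper-parameters.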
