

Poster

Iterative Regularized Policy Optimization with Imperfect Demonstrations

Xudong Gong · Feng Dawei · Kele Xu · Yuanzhao Zhai · Chengkang Yao · Weijia Wang · Bo Ding · Huaimin Wang


Abstract:

Imitation learning heavily relies on the quality of the provided demonstrations. In scenarios where demonstrations are imperfect and scarce, a prevalent approach for refining policies is online fine-tuning with reinforcement learning, in which Kullback–Leibler (KL) regularization is often employed to stabilize the learning process. However, our investigation reveals that, on the one hand, imperfect demonstrations can bias the imitation learning process; on the other hand, the KL regularization further constrains the improvement achievable through online policy exploration. To address these issues, we propose Iterative Regularized Policy Optimization (IRPO), a framework that alternates between offline imitation learning and online reinforcement learning exploration. Specifically, the policy learned online serves as the demonstrator for subsequent learning iterations, with a data-boosting mechanism that consistently enhances demonstration quality. Experimental validations conducted across widely used benchmarks and a novel fixed-wing UAV control task consistently demonstrate the effectiveness of IRPO in improving both demonstration quality and policy performance.
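The iterative scheme described above (offline imitation learning, KL-regularized online fine-tuning, then reusing the online policy as the demonstrator with data boosting) can be sketched roughly as follows. This is a minimal illustration only: the function names (`imitation_learn`, `rl_finetune_with_kl`, `rollout`), the return-based quality filter, and the top-k replacement rule are assumptions for exposition and are not the authors' implementation of IRPO.

```python
def irpo(initial_demos, imitation_learn, rl_finetune_with_kl, rollout,
         num_iterations=3, num_rollouts=100):
    """High-level sketch of an IRPO-style loop (hypothetical interfaces).

    initial_demos        : list of (trajectory, return) pairs, possibly imperfect
    imitation_learn      : callable(demos) -> policy trained offline on demos
    rl_finetune_with_kl  : callable(policy) -> policy improved by online RL with
                           a KL regularizer toward the imitation policy
    rollout              : callable(policy) -> (trajectory, return) from the env
    """
    demos = list(initial_demos)
    policy = None
    for _ in range(num_iterations):
        # 1. Offline imitation learning on the current demonstration set.
        policy = imitation_learn(demos)

        # 2. Online RL exploration, regularized toward the imitation policy.
        policy = rl_finetune_with_kl(policy)

        # 3. Data boosting (assumed form): the online policy acts as the
        #    demonstrator for the next iteration; keep the highest-return
        #    trajectories so demonstration quality never degrades.
        candidates = [rollout(policy) for _ in range(num_rollouts)]
        pool = demos + candidates
        pool.sort(key=lambda traj_ret: traj_ret[1], reverse=True)
        demos = pool[:len(demos)]

    return policy, demos
```

In this sketch the demonstration set is replaced only by trajectories with higher returns, which is one simple way to realize the "consistently enhance demonstration quality" property; the paper's actual data-boosting mechanism may differ.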
