

Poster

Breadth-First Exploration in Adaptive Grid-based Reinforcement Learning

Youngsik Yoon · Gangbok Lee · Sungsoo Ahn · Jungseul Ok


Abstract:

Graph-based planners have gained significant attention for goal-conditioned reinforcement learning (RL): they construct a graph whose edges are confident transitions between subgoals and run shortest-path algorithms to exploit those edges. Meanwhile, identifying and avoiding unattainable transitions is also crucial yet overlooked by previous graph-based planners, leading to an excessive number of wasted attempts at unattainable subgoals. To address this oversight, we propose a graph construction method that efficiently manages all achieved and unattained subgoals on a grid graph that adaptively discretizes the goal space. This enables a breadth-first exploration strategy, grounded in local adaptive grid refinement, that prioritizes broad probing of subgoals on a coarse grid over meticulous probing on a dense grid. We show the effectiveness of our method theoretically, through a geometric analysis, and empirically, through extensive experiments.
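The coarse-to-fine idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation): breadth-first probing of subgoal cells at one grid level, followed by local refinement that subdivides only the cells that were attained. The `reachable` oracle and all names below are hypothetical stand-ins for the agent's attainability checks.

```python
from collections import deque

def breadth_first_refine(reachable, start, max_depth):
    """Probe subgoal cells coarse-to-fine on a 2D grid.

    At each level, run a breadth-first probe over 4-connected cells,
    skipping cells judged unattainable, then refine (subdivide) only
    the attained cells before moving to the next, denser level.

    reachable(cell, level) is a hypothetical oracle: True if the agent
    can attain the center of `cell` at grid resolution `level`.
    """
    frontier = {start}
    visited_by_level = []
    for level in range(max_depth):
        # Broad, breadth-first probing at the current (coarser) level.
        seen, queue = set(), deque(frontier)
        while queue:
            cell = queue.popleft()
            if cell in seen or not reachable(cell, level):
                continue  # unattainable or already-probed cells are skipped
            seen.add(cell)
            x, y = cell
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                queue.append(nb)
        visited_by_level.append(seen)
        # Local adaptive refinement: subdivide only the attained cells.
        frontier = {(2 * x + dx, 2 * y + dy)
                    for (x, y) in seen
                    for dx in (0, 1) for dy in (0, 1)}
    return visited_by_level
```

The key design choice mirrored here is the ordering: every cell at a coarse level is probed before any refined cell, so broad exploration precedes fine-grained probing.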
