

Poster

Detecting Influence Structures in Multi-Agent Reinforcement Learning

Fabian R. Pieroth · Katherine Fitch · Lenz Belzner


Abstract:

We consider the problem of quantifying the amount of influence one agent can exert on another in the setting of multi-agent reinforcement learning (MARL). As a step towards a unified approach to expressing agents' interdependencies, we introduce the total and state influence measurement functions. Both are valid for all common MARL systems, such as the discounted reward setting. Additionally, we propose novel quantities, called the total impact measurement (TIM) and state impact measurement (SIM), that characterize one agent's influence on another by the maximum impact it can have on the other agent's expected returns; they represent instances of impact measurement functions in the average reward setting. Furthermore, we provide approximation algorithms for TIM and SIM that simultaneously learn approximations of agents' expected returns, together with error bounds, stability analyses under changes of the policies, and convergence guarantees. The approximation algorithm relies only on observing other agents' actions and is otherwise fully decentralized. Through empirical studies, we validate our approach's effectiveness in identifying intricate influence structures in complex interactions. Our work appears to be the first study of determining influence structures in the multi-agent average reward setting with convergence guarantees.
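To make the notion of "maximum impact on another agent's expected returns" concrete, here is a minimal toy sketch. It is not the paper's TIM/SIM approximation algorithm; it only illustrates the underlying idea in a hypothetical one-shot two-agent matrix game, where agent 2's impact on agent 1 is the largest change agent 2 can induce in agent 1's payoff by switching actions, maximized over agent 1's own action. All payoff values and function names are assumptions for illustration.

```python
import itertools

# Hypothetical payoffs for agent 1, indexed by (action_1, action_2).
payoff_1 = {
    (0, 0): 1.0, (0, 1): 3.0,
    (1, 0): 0.5, (1, 1): 0.5,
}

def impact_on(payoff, n_actions_influenced, n_actions_influencer):
    """Largest payoff change the influencing agent can cause by switching
    between any two of its actions, maximized over the influenced agent's
    own action choice."""
    return max(
        abs(payoff[(a, b1)] - payoff[(a, b2)])
        for a in range(n_actions_influenced)
        for b1, b2 in itertools.combinations(range(n_actions_influencer), 2)
    )

# Agent 2 can shift agent 1's payoff by at most |1.0 - 3.0| = 2.0
# (when agent 1 plays action 0).
print(impact_on(payoff_1, 2, 2))  # → 2.0
```

In the paper's setting this maximal-impact idea is lifted to expected returns under the average reward criterion and estimated online from observed actions; the exhaustive enumeration above is only viable for tiny games.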
