Poster

Dynamic Byzantine-Robust Learning: Adapting to Switching Byzantine Workers

Ron Dorfman · Naseem Yehya · Kfir Levy


Abstract: Byzantine-robust learning has emerged as a prominent fault-tolerant distributed machine learning framework. However, most techniques consider the *static* setting, wherein the identity of Byzantine machines remains fixed during the learning process. This assumption does not capture real-world *dynamic* Byzantine behaviors, which may include transient malfunctions or targeted temporal attacks. Addressing this limitation, we propose $\textsf{DynaBRO}$ -- a new method capable of withstanding $\mathcal{O}(\sqrt{T})$ rounds of Byzantine identity alterations (where $T$ is the total number of training rounds), while matching the asymptotic convergence rate of the static setting. Our method combines a multi-level Monte Carlo (MLMC) gradient estimation technique with robust aggregation of worker updates and incorporates a fail-safe filter to limit bias from dynamic Byzantine strategies. Additionally, by leveraging an adaptive learning rate, our approach eliminates the need for knowing the percentage of Byzantine workers.
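The abstract names multi-level Monte Carlo (MLMC) gradient estimation as a core building block. As a point of reference, the sketch below shows the generic MLMC construction under stated assumptions; it is not the paper's exact algorithm. The oracle `stoch_grad`, the level cap `j_max`, and the geometric level distribution are illustrative choices.

```python
import numpy as np

def mlmc_gradient(stoch_grad, j_max=10, rng=None):
    """Minimal sketch of an MLMC gradient estimator.

    Assumes `stoch_grad()` returns one i.i.d. stochastic gradient as a
    NumPy array; `j_max` caps the sampled level, so at most 2**j_max
    gradients are drawn in a single call.
    """
    rng = rng or np.random.default_rng()
    j = rng.geometric(0.5)          # P(J = j) = 2**-j for j = 1, 2, ...
    g0 = stoch_grad()               # baseline single-sample estimate
    if j > j_max:                   # rare deep levels are truncated
        return g0
    samples = np.stack([stoch_grad() for _ in range(2 ** j)])
    g_fine = samples.mean(axis=0)                    # mean of 2**j gradients
    g_coarse = samples[: 2 ** (j - 1)].mean(axis=0)  # coupled coarse mean
    # Weighting the correction by 1 / P(J = j) = 2**j keeps the estimator's
    # expectation equal to that of a 2**j_max-sample average.
    return g0 + (2 ** j) * (g_fine - g_coarse)
```

With $\Pr(J = j) = 2^{-j}$, the expected number of gradients per call is roughly $1 + j_{\max}$, i.e. logarithmic in the truncation budget $2^{j_{\max}}$, while the estimator retains the low bias of a much larger mini-batch.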
