

Poster

Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution

Eslam Zaher · Maciej Trzaskowski · Quan Nguyen · Fred Roosta


Abstract:

In this paper, we examine the reliability concerns of Integrated Gradients (IG), a prevalent feature attribution method for black-box deep learning models. In particular, we address two predominant challenges associated with IG: the generation of noisy feature visualizations for vision models and the vulnerability to adversarial attributional attacks. Our approach adapts path-based feature attribution so that the attribution path aligns more closely with the intrinsic geometry of the data manifold. Our experiments, which utilise deep generative models applied to several real-world image datasets, demonstrate that IG along geodesics conforms to the curved geometry of the Riemannian data manifold, generating more perceptually intuitive explanations and, consequently, substantially increasing robustness to targeted attributional attacks.
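For context, path-based attribution integrates the model's gradients along a path from a baseline to the input; standard IG takes that path to be a straight line, whereas the method described here follows a geodesic of the data manifold learned by a deep generative model. The sketch below is a minimal Riemann-sum implementation of the generic path formulation, assuming a PyTorch classifier; the function name `path_integrated_gradients` and the `path_points` argument are illustrative, and the geodesic construction itself (which the paper obtains from the generative model's latent space) is not shown.

```python
import torch

def path_integrated_gradients(model, path_points, target_class):
    """Riemann-sum approximation of path-based integrated gradients.

    path_points: tensor of shape (steps + 1, *input_shape) giving a
    discretised path from a baseline to the input. For standard IG this is
    the straight line x' + alpha * (x - x'); a manifold-aware variant would
    instead supply points along a geodesic of the data manifold.
    """
    grads = []
    for point in path_points:
        p = point.unsqueeze(0).clone().requires_grad_(True)
        score = model(p)[0, target_class]          # class score at this path point
        grads.append(torch.autograd.grad(score, p)[0].squeeze(0))
    grads = torch.stack(grads)                     # (steps + 1, *input_shape)

    # Trapezoidal rule: average gradients on each path segment, weight by the
    # segment displacement, and sum along the path.
    avg_grads = 0.5 * (grads[:-1] + grads[1:])
    deltas = path_points[1:] - path_points[:-1]
    return (avg_grads * deltas).sum(dim=0)         # attribution per input feature
```

As a usage note (again assuming an image input `x` of shape (C, H, W) and a matching `baseline`), the standard straight-line path would be built as `path_points = baseline + torch.linspace(0, 1, steps + 1).view(-1, 1, 1, 1) * (x - baseline)`; swapping in geodesic path points is what distinguishes the manifold-aligned variant.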
