Poster

Simplicity Bias of Two-Layer Networks beyond Linearly-Separable Data

Nikita Tsoy · Nikola Konstantinov


Abstract:

Simplicity bias, the propensity of deep models to rely on simple features, has been identified as a potential reason for the limited out-of-distribution (OOD) generalization of neural networks (Shah et al., 2020). Despite its important implications, this phenomenon has only been theoretically confirmed and characterized under strong assumptions on the training dataset, such as linear separability (Lyu et al., 2021). In this work, we characterize simplicity bias for general datasets in the context of two-layer neural networks with small initial weights, trained with gradient flow. Specifically, we prove that in the early training phases, network features cluster around a few directions that do not depend on the size of the hidden layer. Furthermore, for non-linearly-separable datasets with an XOR-like pattern, we precisely identify the learned features and demonstrate that simplicity bias intensifies during later training stages. These results indicate that features learned in the middle stages of training may be more useful for OOD transfer. We support this hypothesis with experiments on image data.
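
A minimal sketch of the setting described in the abstract, not the authors' code: a two-layer ReLU network with small initial weights trained on XOR-like 2D data by full-batch gradient descent with a small step size, as a discrete stand-in for gradient flow. The hidden width, initialization scale, and loss below are illustrative assumptions; the point is to inspect the directions of the first-layer weight vectors, which under simplicity bias should cluster around a few directions regardless of the hidden width.

```python
# Hedged sketch of the setup: two-layer ReLU net, small init, XOR-like data.
import torch

torch.manual_seed(0)

# XOR-like dataset in 2D: the label is the product of the coordinate signs.
n = 512
X = torch.randn(n, 2)
y = torch.sign(X[:, 0] * X[:, 1])

width = 256          # hidden-layer size (assumed; the claimed clustering is width-independent)
init_scale = 1e-3    # small initialization, per the paper's setting
W = init_scale * torch.randn(width, 2, requires_grad=True)   # first-layer weights
a = init_scale * torch.randn(width, requires_grad=True)      # second-layer weights

lr, steps = 0.05, 2000   # small step size approximating gradient flow
for _ in range(steps):
    out = torch.relu(X @ W.T) @ a                            # two-layer network output
    loss = torch.nn.functional.softplus(-y * out).mean()     # logistic loss (assumed)
    loss.backward()
    with torch.no_grad():
        W -= lr * W.grad
        a -= lr * a.grad
        W.grad.zero_()
        a.grad.zero_()

# Directions of the hidden-unit weight vectors (ignoring their norms):
# a histogram of their angles should concentrate on a few bins.
dirs = torch.nn.functional.normalize(W.detach(), dim=1)
angles = torch.atan2(dirs[:, 1], dirs[:, 0])
print(torch.histc(angles, bins=8, min=-torch.pi, max=torch.pi))
```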
