

Poster

On the Nonlinearity of Layer Normalization

Yunhao Ni · Yuxin Guo · Junlong Jia · Lei Huang


Abstract: Layer normalization (LN) is a ubiquitous technique in deep learning, but our theoretical understanding of it remains elusive. This paper investigates a new theoretical direction for LN, regarding its nonlinearity and representation capacity. We study the representation capacity of a network built from a layerwise composition of linear and LN transformations, referred to as an LN-Net. We theoretically show that an LN-Net with only 3 neurons in each layer and $O(m)$ LN layers can correctly classify $m$ samples with any label assignment. We further derive a lower bound on the VC dimension of an LN-Net. The nonlinearity of LN can be amplified by group partition, which we demonstrate theoretically under mild assumptions and support empirically with experiments. Based on our analyses, we further consider designing neural architectures that exploit and amplify the nonlinearity of LN, and their effectiveness is supported by our experiments.
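To make the LN-Net construction concrete, the sketch below shows one plausible instantiation: a narrow network that alternates linear layers with LayerNorm, where LN is the only source of nonlinearity. This is an illustrative assumption, not the authors' released code; the width, depth, and class names (`LNNet`, `width`, `depth`) are chosen here for demonstration only.

```python
# Minimal sketch (assumed, not the paper's official implementation) of an
# "LN-Net": a layerwise composition of linear and LayerNorm transformations,
# with a narrow width (3 neurons per hidden layer) as described in the abstract.
import torch
import torch.nn as nn


class LNNet(nn.Module):
    def __init__(self, in_dim: int, num_classes: int, width: int = 3, depth: int = 8):
        super().__init__()
        layers = []
        dim = in_dim
        for _ in range(depth):
            layers.append(nn.Linear(dim, width))
            layers.append(nn.LayerNorm(width))  # LN supplies the nonlinearity
            dim = width
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(x))


# Usage: forward a small batch through the network.
model = LNNet(in_dim=2, num_classes=2, depth=16)
x = torch.randn(8, 2)
logits = model(x)  # shape (8, 2)
```

Replacing `nn.LayerNorm` with a group-partitioned normalization (e.g. `nn.GroupNorm`) would correspond to the abstract's point that group partition can amplify LN's nonlinearity, though the exact architecture used in the paper's experiments may differ.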
