

Poster

Data-free Neural Representation Compression with Riemannian Neural Dynamics

Zhengqi Pei · Anran Zhang · Shuhui Wang · Xiangyang Ji · Qingming Huang


Abstract: From a physics-inspired view, neural models are equivalent to dynamical systems, implying that computation in neural networks can be interpreted as dynamical interactions between neurons. However, existing works model neuronal interaction as a weight-based linear transformation, with nonlinearity arising only from the activation functions; this constrains the expressiveness and data-fitting ability of the whole neuronal model. Inspired by Riemannian geometry, we interpret neural structures by projecting neurons onto a Riemannian neuronal state space and model neuronal interaction with a Riemannian metric (${\it RieM}$). ${\it RieM}$ provides a more accurate and efficient representation of nonlinear neuron interactions, leading to higher parameter utilization. Building on ${\it RieM}$, we further design a novel data-free neural compression mechanism that requires no additional fine-tuning on real data. Using backbones such as ResNet and Vision Transformer, we conduct extensive experiments on MNIST, CIFAR-100, ImageNet-1k, and COCO object detection. Empirical results demonstrate that, at equal compression rates and computational complexity, models compressed with ${\it RieM}$ achieve superior inference accuracy compared to existing data-free compression methods. Notably, ${\it RieM}$-compressed models even outperform compression methods that use additional real data for fine-tuning.
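For intuition, a Riemannian metric replaces the fixed Euclidean inner product $u^\top v$ with a state-dependent one, $\langle u, v\rangle_x = u^\top G(x)\,v$, where $G(x)$ is positive definite and varies with the point $x$. The sketch below is a minimal, hypothetical PyTorch layer illustrating this general idea only: the interaction between neurons is weighted by a metric that depends on the current neuronal state, so the map is nonlinear in the state even without a separate activation function. The class name, the diagonal parameterization of $G(x)$, and all shapes are our own illustrative assumptions, not the paper's ${\it RieM}$ construction.

    import torch
    import torch.nn as nn

    class MetricInteraction(nn.Module):
        # Illustrative sketch only; NOT the authors' RieM layer.
        # Computes y = (G(x) * x) W^T, where G(x) is a diagonal,
        # state-dependent positive semi-definite metric.
        def __init__(self, dim):
            super().__init__()
            self.metric_factor = nn.Linear(dim, dim, bias=False)  # parameterizes G(x)
            self.weight = nn.Parameter(torch.randn(dim, dim) * dim ** -0.5)

        def forward(self, x):
            L = self.metric_factor(x)        # state-dependent factor, shape (..., dim)
            g = L * L                        # diagonal PSD metric entries, >= 0
            return (g * x) @ self.weight.T   # metric-weighted neuronal interaction

    # Usage: the output depends nonlinearly on x even though
    # no activation function is applied.
    layer = MetricInteraction(8)
    y = layer(torch.randn(4, 8))

Because the metric itself carries nonlinearity, a layer of this kind can, in principle, fit nonlinear structure with fewer parameters than a linear-map-plus-activation layer, which is the parameter-utilization argument the abstract makes.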
