

Poster

ReLUs Are Sufficient for Learning Implicit Neural Representations

Joseph Shenouda · Yamin Zhou · Robert Nowak


Abstract:

Motivated by the growing theoretical understanding of ReLU neural networks (NNs) and the computational advantages of their sparse activations, we revisit the use of ReLU activation functions for learning implicit neural representations (INRs). Inspired by second-order B-spline wavelets, we impose a set of simple constraints on the ReLU neurons in each layer of a deep neural network (DNN) to dramatically mitigate the spectral bias. This in turn makes the network suitable for a variety of INR tasks. Empirically, we demonstrate that, contrary to popular belief, one can learn state-of-the-art INRs with a DNN composed of only ReLU neurons. Next, by leveraging recent theoretical work that characterizes the kinds of functions ReLU neural networks learn, we provide a way to quantify the regularity of the learned function. This offers fresh insights into some of the heuristics commonly employed when training INRs. We substantiate our claims through experiments in signal representation, super-resolution, and computed tomography, demonstrating our method's versatility and effectiveness.
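
To make the wavelet-inspired construction concrete, below is a minimal sketch (not the authors' released code) of how a second-order B-spline "hat" profile can be written exactly as a fixed linear combination of shifted ReLUs and used as the activation of a small coordinate MLP. The class names (BSplineReLUActivation, ReLUINR), the knot placement, and the layer sizes are illustrative assumptions, not the paper's exact constraint set.

import torch
import torch.nn as nn

class BSplineReLUActivation(nn.Module):
    # Hat-shaped activation built entirely from ReLU arithmetic:
    # B(x) = ReLU(x + 1) - 2*ReLU(x) + ReLU(x - 1), supported on [-1, 1].
    # The knots (-1, 0, 1) are an illustrative choice, not the paper's.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + 1.0) - 2.0 * torch.relu(x) + torch.relu(x - 1.0)

class ReLUINR(nn.Module):
    # Small coordinate MLP (hypothetical sizes) whose hidden units use only
    # ReLU operations, mapping input coordinates to signal values.
    def __init__(self, in_dim: int = 2, hidden: int = 256, depth: int = 3, out_dim: int = 1):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), BSplineReLUActivation()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

# Usage sketch: predict pixel intensities from 2D coordinates in [-1, 1]^2.
coords = torch.rand(1024, 2) * 2.0 - 1.0
pred = ReLUINR()(coords)   # shape (1024, 1)

Since any continuous piecewise-linear function with finitely many knots is an exact finite sum of shifted ReLUs, such an activation adds no expressive power beyond ReLUs themselves; the interest lies in how the constraints shape the way those ReLUs are combined.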
