

Poster

Implicit Representations via Operator Learning

Sourav Pal · Harshavardhan Adepu · Clinton Wang · Polina Golland · Vikas Singh


Abstract:

The idea of representing a signal as the weights of a neural network, called Implicit Neural Representations (INRs), has led to exciting implications for compression, view synthesis and 3D volumetric data understanding. An emergent problem setting here pertains to the use of INRs for downstream processing tasks. Despite a few conceptual results, this remains challenging because the INR for a given image/signal often exists in isolation. What does the local region in the neighborhood around a given INR correspond to? Based on this inspiration, we offer an operator theoretic reformulation of the INR model, which we call Operator INR (or O-INR). At a high level, instead of mapping positional encodings to a signal, O-INR maps a function space to another function space. A practical form of this general casting of the problem is obtained by appealing to Integral Transforms. The resultant model can mostly do away with Multi-layer Perceptrons (MLPs) that dominate nearly all existing INR models – we show that convolutions are sufficient and offer numerous benefits in training including numerically stable behavior. We show that O-INR can easily handle most problem settings in the literature, where it meets or exceeds the performance profile of baselines. These benefits come with minimal, if any, compromise.
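
The abstract contrasts the usual coordinate-MLP INR with an operator-style mapping realized via convolutions (discretized integral transforms). The sketch below illustrates that contrast only; the class names, layer widths, and kernel sizes are illustrative assumptions and not the authors' actual O-INR architecture.

```python
# Minimal sketch contrasting a coordinate-MLP INR with a convolutional,
# integral-transform-style mapping, as described in the abstract.
# All names and hyperparameters are illustrative assumptions, not the
# authors' O-INR implementation.
import torch
import torch.nn as nn


class CoordinateMLP(nn.Module):
    """Standard INR: maps (x, y) coordinates to RGB values, point by point."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):          # coords: (N, 2)
        return self.net(coords)         # -> (N, 3)


class ConvolutionalOINR(nn.Module):
    """Operator-style sketch: maps an input function (a coordinate grid
    treated as a 2-channel image) to an output function (an RGB image)
    using only convolutions, i.e., discretized integral transforms."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, 3, kernel_size=3, padding=1),
        )

    def forward(self, grid):            # grid: (B, 2, H, W)
        return self.net(grid)           # -> (B, 3, H, W)


if __name__ == "__main__":
    H = W = 32
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    grid = torch.stack([xs, ys])                        # (2, H, W)

    mlp_out = CoordinateMLP()(grid.reshape(2, -1).T)    # (H*W, 3)
    conv_out = ConvolutionalOINR()(grid.unsqueeze(0))   # (1, 3, H, W)
    print(mlp_out.shape, conv_out.shape)
```

The intended distinction is that the MLP evaluates the signal one coordinate at a time, whereas the convolutional variant transforms an entire input function into an output function in a single pass.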
