

Poster

Graph Geometry-Preserving Autoencoders

Jungbin Lim · Jihwan Kim · Yonghyeon Lee · Cheongjae Jang · Frank Chongwoo Park


Abstract:

When using an autoencoder to learn the low-dimensional manifold of high-dimensional data, it is crucial to find latent representations that preserve the geometry of the data manifold. However, most existing studies assume a Euclidean structure for the high-dimensional data space, which is arbitrary and often fails to reflect the underlying semantic or domain-specific attributes of the data. In this paper, we propose a novel autoencoder regularization framework based on the premise that the geometry of the data manifold can often be better captured by a well-designed similarity graph over the data points. Given such a graph, we use a Riemannian geometric distortion measure as a regularizer that preserves the geometry derived from the graph Laplacian, and we adapt it to larger-scale autoencoder training. Through extensive experiments comparing our method against existing state-of-the-art geometry-preserving and graph-based autoencoders, we show that it learns the most accurate graph geometry-preserving latent structures and is particularly effective for learning dynamics in the latent space.
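To make the graph-based ingredients concrete, the sketch below builds a Gaussian-kernel similarity graph, forms its graph Laplacian, and evaluates a Laplacian quadratic-form penalty trace(Zᵀ L Z) on latent codes. This is a minimal, hypothetical stand-in for the paper's Riemannian geometric distortion measure: the kernel choice, the unnormalized Laplacian, and the quadratic-form penalty are assumptions for illustration, not the authors' exact regularizer.

```python
import numpy as np

def similarity_graph(X, sigma=1.0):
    # Gaussian-kernel similarity graph over the data points
    # (a hypothetical choice; the paper allows any well-designed graph).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def graph_laplacian(W):
    # Unnormalized graph Laplacian L = D - W, with D the degree matrix.
    return np.diag(W.sum(axis=1)) - W

def laplacian_distortion(Z, L):
    # Quadratic-form penalty: trace(Z^T L Z) = 1/2 * sum_ij w_ij ||z_i - z_j||^2.
    # Penalizes latent codes that place graph-similar points far apart.
    return np.trace(Z.T @ L @ Z)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))   # toy high-dimensional data
Z = rng.normal(size=(10, 2))   # toy latent codes (encoder output)
W = similarity_graph(X)
L = graph_laplacian(W)
penalty = laplacian_distortion(Z, L)  # non-negative, since L is PSD
```

In training, a term like this penalty would be added to the reconstruction loss so that the encoder is pulled toward latent configurations consistent with the graph geometry.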
