Similar papers
March 8, 2022
To model manifold data using normalizing flows, we employ isometric autoencoders to design embeddings with explicit inverses that do not distort the probability distribution. Using isometries separates manifold learning from density estimation and enables both parts to be trained to high accuracy. Thus, model selection and tuning are simplified compared to existing injective normalizing flows. Applied to data sets on (approximately) flat manifolds, the combined approach generates...
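A minimal sketch of the isometry idea behind such models, assuming a smooth decoder f: R^d -> R^D (the function and variable names below are illustrative, not the paper's code): penalizing the deviation of J_f^T J_f from the identity pushes the decoder toward a local isometry, which is exactly the property that keeps the embedding from distorting the probability distribution.

import torch

def isometry_penalty(decoder, z):
    """Penalize deviation of the decoder from a local isometry at latents z.

    decoder: callable mapping a single latent of shape (d,) to an ambient
    point of shape (D,); z: batch of latents, shape (batch, d).
    Hypothetical interface; the paper's actual objective may differ.
    """
    # Per-example Jacobians J_f(z), each of shape (D, d).
    J = torch.vmap(torch.func.jacrev(decoder))(z)        # (batch, D, d)
    JtJ = J.transpose(-2, -1) @ J                        # (batch, d, d)
    I = torch.eye(z.shape[-1], device=z.device)
    # J^T J = I means the columns of J are orthonormal, so distances,
    # volumes, and hence densities are preserved by the decoder.
    return ((JtJ - I) ** 2).sum(dim=(-2, -1)).mean()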
June 13, 2018
We investigate learning of the differential geometric structure of a data manifold embedded in a high-dimensional Euclidean space. We first analyze kernel-based algorithms and show that under the usual regularizations, non-probabilistic methods cannot recover the differential geometric structure, but instead find mostly linear manifolds or spaces equipped with teleports. To properly learn the differential geometric structure, non-probabilistic methods must apply regularizatio...
October 2, 2024
Data-driven Riemannian geometry has emerged as a powerful tool for interpretable representation learning, offering improved efficiency in downstream tasks. Moving forward, it is crucial to balance cheap manifold mappings with efficient training algorithms. In this work, we integrate concepts from pullback Riemannian geometry and generative models to propose a framework for data-driven Riemannian geometry that is scalable in both geometry and learning: score-based pullback Rie...
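The pullback construction itself is easy to state in code. Assuming a learned diffeomorphism phi with a tractable inverse (here a toy closed-form pair standing in for the paper's score-based map), geodesics under the pullback metric are straight lines in phi-coordinates mapped back through the inverse, and geodesic distances are Euclidean distances between images:

import numpy as np

# Toy diffeomorphism phi: R^2 -> R^2 and its inverse, a stand-in for a
# trained flow-like map (names are illustrative).
def phi(x):      return np.stack([x[..., 0], x[..., 1] - x[..., 0] ** 2], -1)
def phi_inv(y):  return np.stack([y[..., 0], y[..., 1] + y[..., 0] ** 2], -1)

def pullback_geodesic(x, y, n=20):
    # Under the pullback metric, geodesics are straight lines in
    # phi-coordinates mapped back through phi^{-1}.
    t = np.linspace(0.0, 1.0, n)[:, None]
    line = (1 - t) * phi(x) + t * phi(y)
    return phi_inv(line)

def pullback_distance(x, y):
    # Geodesic distance reduces to Euclidean distance between images.
    return np.linalg.norm(phi(x) - phi(y))

x, y = np.array([0.0, 0.0]), np.array([2.0, 4.0])
path = pullback_geodesic(x, y)   # points along the data-manifold geodesic
print(pullback_distance(x, y))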
June 22, 2017
Traditional manifold learning algorithms assume that the embedded manifold is globally or locally isometric to Euclidean space. Under this assumption, they divide the manifold into a set of overlapping local patches that are locally isometric to linear subsets of Euclidean space. By analyzing the global or local isometry assumptions, it can be shown that the learnt manifold is a flat manifold with zero Riemannian curvature tensor. In general, manifolds may not satisfy these hyp...
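One way to see the flatness limitation empirically is to test how well a local patch is approximated by a plane. The sketch below (an illustrative helper, not from the paper) projects a k-nearest-neighbor patch onto its best-fit 2-plane via local PCA; for data on a curved manifold such as a sphere, the residual is nonzero and grows with patch size, which is what the zero-curvature assumption ignores.

import numpy as np

def local_flatness_residual(X, center_idx, k=15):
    """Measure how far a local patch is from a linear (flat) subset.

    Projects the k nearest neighbors of one point onto their best-fit
    2-plane (local PCA) and reports the relative reconstruction error.
    Near zero -> the local-isometry assumption is reasonable; large ->
    curvature that patch-based methods will distort.
    """
    d2 = ((X - X[center_idx]) ** 2).sum(1)
    patch = X[np.argsort(d2)[:k]]
    patch = patch - patch.mean(0)
    # Singular values of the centered patch; the top two span the plane.
    _, s, _ = np.linalg.svd(patch, full_matrices=False)
    return np.sqrt((s[2:] ** 2).sum() / (s ** 2).sum())

# Points on a unit sphere: curvature shows up as a residual that grows
# with the size of the patch.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
print(local_flatness_residual(X, 0, k=15))
print(local_flatness_residual(X, 0, k=200))   # larger patch, more curvature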
May 23, 2018
Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an ambient Euclidean space. In a number of applications, a priori known spatial constraints can shrink the ambient space into a considerably smaller manifold. Additionally, in these applications the Euclidean geometry might induce a suboptimal similarity measure, which could be improved by choosing a different metric. Euclidean models ignore such information and assign probability mass ...
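The effect of swapping the metric can be made concrete via the pullback metric of a decoder. In the sketch below (assumed interface: a decoder mapping a single latent point into the ambient space), the Riemannian volume element sqrt(det J^T J) measures how a Euclidean latent density is stretched on the decoded manifold; this is precisely the information a purely Euclidean model ignores when it assigns probability mass:

import torch

def riemannian_volume(decoder, z):
    """Volume element sqrt(det J^T J) of a decoder at latents z.

    A Euclidean latent density p(z) corresponds to a manifold density
    p(z) / sqrt(det G(z)) with pullback metric G = J^T J, so a
    non-Euclidean metric reallocates probability mass. Interface is
    illustrative, not the paper's implementation.
    """
    J = torch.vmap(torch.func.jacrev(decoder))(z)   # (batch, D, d)
    G = J.transpose(-2, -1) @ J                     # pullback metric
    return torch.sqrt(torch.linalg.det(G))          # (batch,)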
February 12, 2020
In this paper, we propose a method to learn a minimizing geodesic within a data manifold. Along the learned geodesic, our method can generate high-quality interpolations between two given data samples. Specifically, we use an autoencoder network to map data samples into a latent space and perform interpolation via an interpolation network. We add prior geometric information to regularize our autoencoder for the convexity of representations, so that for any given interpolation ap...
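As a rough sketch of the underlying objective (the paper trains an interpolation network; the direct optimization below is a common stand-in), a minimizing geodesic can be approximated by bending a discrete latent path so that its decoded curve has minimal energy:

import torch

def geodesic_interpolation(decoder, z0, z1, n=16, steps=500, lr=1e-2):
    """Bend a latent path so the decoded curve has minimal energy.

    decoder: callable mapping a batch of latents (n, d) to points (n, D).
    Starts from the straight line between z0 and z1 and minimizes the
    discrete curve energy sum_i ||f(z_{i+1}) - f(z_i)||^2, a standard
    proxy for a minimizing geodesic on the decoded manifold.
    """
    t = torch.linspace(0, 1, n)[:, None]
    path = (1 - t) * z0 + t * z1
    # Endpoints stay fixed; only the interior points are optimized.
    inner = path[1:-1].clone().requires_grad_(True)
    opt = torch.optim.Adam([inner], lr=lr)
    for _ in range(steps):
        full = torch.cat([z0[None], inner, z1[None]])
        pts = decoder(full)
        energy = ((pts[1:] - pts[:-1]) ** 2).sum()
        opt.zero_grad()
        energy.backward()
        opt.step()
    return torch.cat([z0[None], inner.detach(), z1[None]])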
October 14, 2020
In this paper, we propose a point cloud classification method based on graph neural networks and manifold learning. Unlike conventional point cloud analysis methods, this paper uses manifold learning algorithms to embed point cloud features, which better accounts for the geometric continuity of the surface. The intrinsic structure of the point cloud can then be captured in a low-dimensional space and, after being concatenated with features in the original three-dimensional (3D) space, bo...
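A toy version of the "embed, then concatenate with the original 3D features" recipe, with a single k-NN neighborhood-averaging step standing in for both the graph neural network and the manifold learning embedding (all names below are illustrative):

import numpy as np

def knn_graph_features(P, k=8):
    """One graph-convolution-style smoothing step over a point cloud.

    Builds a k-NN graph on the 3D points, averages each point's
    neighborhood, then concatenates the smoothed features with the raw
    3D coordinates. A toy stand-in for the paper's pipeline.
    """
    d = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # (N, N) distances
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]             # skip self
    smoothed = P[nbrs].mean(axis=1)                      # (N, 3) aggregate
    return np.concatenate([P, smoothed], axis=1)         # (N, 6)

P = np.random.default_rng(1).normal(size=(1024, 3))
feats = knn_graph_features(P)   # per-point features for a classifier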
December 26, 2020
We consider the problem of reconstructing an embedding of a compact connected Riemannian manifold in a Euclidean space up to an almost isometry, given information on the intrinsic distances between points from a "sufficiently large" subset. This is one of the classical manifold learning problems. It turns out that the most popular methods for dealing with such a problem, with a long history in data science, namely the classical Multidimensional scaling (MDS) and the Maximum va...
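Classical MDS, the first of the methods mentioned, is short enough to state in full: double-center the squared distance matrix and take the top eigenvectors of the resulting Gram matrix. It recovers an embedding exactly only when the given distances are realizable in flat Euclidean space, which is why a curved manifold comes out only up to an almost isometry.

import numpy as np

def classical_mds(D, dim=2):
    """Embed points from an (n, n) matrix D of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    # Coordinates are eigenvectors scaled by sqrt of (nonnegative) eigenvalues.
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))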
November 24, 2016
Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learni...
October 16, 2024
Rapid growth of high-dimensional datasets in fields such as single-cell RNA sequencing and spatial genomics has led to unprecedented opportunities for scientific discovery, but it also presents unique computational and statistical challenges. Traditional methods struggle with geometry-aware data generation, interpolation along meaningful trajectories, and transporting populations via feasible paths. To address these issues, we introduce Geometry-Aware Generative Autoencoder (...