ID: 2009.01786

Computational Analysis of Deformable Manifolds: from Geometric Modelling to Deep Learning

September 3, 2020


Similar papers 5

Geodesic convolutional neural networks on Riemannian manifolds

January 26, 2015

87% Match
Jonathan Masci, Davide Boscaini, ..., Pierre Vandergheynst
Computer Vision and Pattern Recognition

Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we introduce Geodesic Convolutional Neural Networks (GCNN), a generalization of the convolutional networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract "patches", which are then passed through a cascade of filters a...
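
A rough sketch of the patch-operator idea described above (illustrative only, not the authors' implementation): assuming per-vertex features have already been interpolated onto a local geodesic polar grid of radial and angular bins, a geodesic convolution correlates the grid with a learned filter and takes a maximum over angular rotations to remove the arbitrary angular origin.

```python
import numpy as np

def geodesic_conv(patches, weights):
    """Toy geodesic convolution (illustrative only).

    patches: (V, R, A, C) array of per-vertex features already interpolated
        onto a local geodesic polar grid with R radial bins, A angular bins,
        and C input channels (the patch-extraction step is assumed done).
    weights: (R, A, C, F) filter bank with F output channels.

    Returns a (V, F) array: each vertex's filter response, maximized over
    all A cyclic rotations of the angular axis (angular max pooling, which
    removes the arbitrary origin of the angular coordinate).
    """
    V, R, A, C = patches.shape
    F = weights.shape[-1]
    responses = np.empty((V, A, F))
    for rot in range(A):  # try every cyclic rotation of the angular bins
        rotated = np.roll(patches, shift=rot, axis=2)
        responses[:, rot, :] = np.einsum("vrac,racf->vf", rotated, weights)
    return responses.max(axis=1)

# smoke test with random data: 100 vertices, 5x16 polar bins, 3 channels
rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 5, 16, 3))
weights = rng.normal(size=(5, 16, 3, 8))
print(geodesic_conv(patches, weights).shape)  # (100, 8)
```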


Neural Implicit Manifold Learning for Topology-Aware Density Estimation

June 22, 2022

87% Match
Brendan Leigh Ross, Gabriel Loaiza-Ganem, ..., Jesse C. Cresswell
Machine Learning
Machine Learning

Natural data observed in $\mathbb{R}^n$ is often constrained to an $m$-dimensional manifold $\mathcal{M}$, where $m < n$. This work focuses on the task of building theoretically principled generative models for such data. Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network $f_\theta: \mathbb{R}^m \to \mathbb{R}^n$. These procedures, which we call pushforward models, incur a straightforward limitation: manifolds ...
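
To make the pushforward terminology concrete, here is a minimal sketch with hypothetical dimensions and an untrained network: an m-dimensional latent variable is pushed through f_theta: R^m -> R^n, so every generated sample lies on the (at most) m-dimensional image of f_theta.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, hidden = 2, 10, 32   # illustrative latent / ambient dimensions

# randomly initialized weights standing in for a trained f_theta
W1, b1 = rng.normal(size=(m, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, n)), np.zeros(n)

def f_theta(z):
    """Pushforward map f_theta: R^m -> R^n (a small MLP)."""
    return np.tanh(z @ W1 + b1) @ W2 + b2

# generating a sample means drawing a latent and pushing it forward; all
# outputs lie on the (at most) m-dimensional image of f_theta inside R^n
z = rng.normal(size=(1000, m))
x = f_theta(z)
print(x.shape)  # (1000, 10)
```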


Geometric deep learning on graphs and manifolds using mixture model CNNs

November 25, 2016

87% Match
Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, ..., Michael M. Bronstein
Computer Vision and Pattern Recognition

Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoust...


Geometric Understanding of Deep Learning

May 26, 2018

87% Match
Na Lei, Zhongxuan Luo, ..., David Xianfeng Gu
Machine Learning
Machine Learning

Deep learning is the mainstream technique for many machine learning tasks, including image recognition, machine translation, speech recognition, and so on. It has outperformed conventional methods in various fields and achieved great success. Unfortunately, the understanding of how it works remains unclear, and laying down a theoretical foundation for deep learning is of central importance. In this work, we give a geometric view to understand deep learning: we show that ...


Breaking the Curse of Dimensionality in Deep Neural Networks by Learning Invariant Representations

October 24, 2023

87% Match
Leonardo Petrini
Machine Learning

Artificial intelligence, particularly the subfield of machine learning, has seen a paradigm shift towards data-driven models that learn from and adapt to data. This has resulted in unprecedented advancements in various domains such as natural language processing and computer vision, largely attributed to deep learning, a special class of machine learning models. Deep learning arguably surpasses traditional approaches by learning the relevant features from raw data through a s...


Neural Latent Geometry Search: Product Manifold Inference via Gromov-Hausdorff-Informed Bayesian Optimization

September 9, 2023

87% Match
Haitz Saez de Ocariz Borde, Alvaro Arroyo, Ismael Morales, ..., Xiaowen Dong
Machine Learning
Machine Learning

Recent research indicates that the performance of machine learning models can be improved by aligning the geometry of the latent space with the underlying data structure. Rather than relying solely on Euclidean space, researchers have proposed using hyperbolic and spherical spaces with constant curvature, or combinations thereof, to better model the latent space and enhance model performance. However, little attention has been given to the problem of automatically identifying...
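
As a small illustration of the design space mentioned above (not the paper's Gromov-Hausdorff-informed search itself), the sketch below computes geodesic distances in the three constant-curvature model spaces (Euclidean, spherical, and hyperbolic in the Lorentz model) and combines them into a product-manifold distance.

```python
import numpy as np

def euclidean_dist(x, y):
    return np.linalg.norm(x - y)

def spherical_dist(x, y):
    """Geodesic distance on the unit sphere (inputs assumed unit-norm)."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def hyperbolic_dist(x, y):
    """Distance in the Lorentz (hyperboloid) model of hyperbolic space."""
    lorentz_inner = -x[0] * y[0] + np.dot(x[1:], y[1:])
    return np.arccosh(np.clip(-lorentz_inner, 1.0, None))

def product_dist(parts_x, parts_y, factor_dists):
    """Distance on a product manifold: l2 combination of factor distances."""
    return np.sqrt(sum(d(a, b) ** 2
                       for d, a, b in zip(factor_dists, parts_x, parts_y)))

# toy latent points on a product of sphere, hyperbolic, and Euclidean factors
sx, sy = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])   # unit sphere
hv, hw = np.array([0.3, 0.0]), np.array([-0.2, 0.4])
hx = np.concatenate(([np.sqrt(1 + hv @ hv)], hv))   # lift onto hyperboloid
hy = np.concatenate(([np.sqrt(1 + hw @ hw)], hw))
ex, ey = np.array([0.5, -1.0]), np.array([1.5, 2.0])            # Euclidean
print(product_dist([sx, hx, ex], [sy, hy, ey],
                   [spherical_dist, hyperbolic_dist, euclidean_dist]))
```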


GD-VAEs: Geometric Dynamic Variational Autoencoders for Learning Nonlinear Dynamics and Dimension Reductions

June 10, 2022

87% Match
Ryan Lopez, Paul J. Atzberger
cs.LG
cs.NA
math.DS
math.NA
physics.data-an
stat.ML

We develop data-driven methods incorporating geometric and topological information to learn parsimonious representations of nonlinear dynamics from observations. We develop approaches for learning nonlinear state space models of the dynamics for general manifold latent spaces using training strategies related to Variational Autoencoders (VAEs). Our methods are referred to as Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders and decoders for the syst...
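
One ingredient mentioned above is a latent space that is a manifold rather than flat R^d. A minimal sketch of that idea, with an illustrative choice of manifold (a torus) rather than the authors' construction: unconstrained encoder outputs are projected onto a product of circles before being handed to the decoder.

```python
import numpy as np

def project_to_torus(raw, n_circles=2):
    """Map unconstrained encoder outputs in R^(2*n_circles) onto a torus,
    a product of unit circles, by normalizing each consecutive pair."""
    z = raw.reshape(raw.shape[0], n_circles, 2)
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    return z.reshape(raw.shape[0], -1)

rng = np.random.default_rng(0)
raw_latent = rng.normal(size=(5, 4))   # pretend encoder outputs, two circles
z = project_to_torus(raw_latent)
# every consecutive coordinate pair now has unit norm, so z lies on S^1 x S^1
print(np.linalg.norm(z.reshape(5, 2, 2), axis=-1))
```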


Geometric Neural Operators (GNPs) for Data-Driven Deep Learning of Non-Euclidean Operators

April 16, 2024

87% Match
Blaine Quackenbush, Paul J. Atzberger
Machine Learning
Artificial Intelligence
Optimization and Control
Machine Learning

We introduce Geometric Neural Operators (GNPs) to account for geometric contributions in data-driven deep learning of operators. We show how GNPs can be used (i) to estimate geometric properties, such as the metric and curvatures, (ii) to approximate Partial Differential Equations (PDEs) on manifolds, (iii) to learn solution maps for Laplace-Beltrami (LB) operators, and (iv) to solve Bayesian inverse problems for identifying manifold shapes. The methods allow for handling ge...
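
To illustrate item (i), estimating the metric, the snippet below approximates the first fundamental form g_ij = <d_i x, d_j x> of a parameterized surface by central differences. This is a generic differential-geometry calculation under an assumed example chart, not the GNP architecture itself.

```python
import numpy as np

def surface(u, v):
    """Assumed example chart: a paraboloid embedded in R^3."""
    return np.array([u, v, u**2 + v**2])

def metric_tensor(u, v, h=1e-5):
    """First fundamental form g_ij = <d_i x, d_j x> via central differences."""
    du = (surface(u + h, v) - surface(u - h, v)) / (2 * h)
    dv = (surface(u, v + h) - surface(u, v - h)) / (2 * h)
    return np.array([[du @ du, du @ dv],
                     [dv @ du, dv @ dv]])

g = metric_tensor(0.3, -0.2)
print(g)                            # 2x2 symmetric positive-definite matrix
print(np.sqrt(np.linalg.det(g)))    # area element sqrt(det g) at this point
```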


Representation Learning via Manifold Flattening and Reconstruction

May 2, 2023

87% Match
Michael Psenka, Druv Pai, Vishal Raman, ..., Yi Ma
Machine Learning
Differential Geometry

This work proposes an algorithm for explicitly constructing a pair of neural networks that linearize and reconstruct an embedded submanifold from finite samples of this manifold. The resulting neural networks, called Flattening Networks (FlatNet), are theoretically interpretable, computationally feasible at scale, and generalize well to test data, a balance not typically found in manifold-based learning methods. We present empirical results and comparisons to other mode...
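
As a hedged illustration of the flatten-and-reconstruct interface (FlatNet constructs nonlinear networks; the linear PCA pair below is only a stand-in), the sketch fits a flattening map R^n -> R^m and a reconstruction map R^m -> R^n from samples and checks the round-trip error on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2

# samples near an m-dimensional linear subspace of R^n (plus small noise)
basis = rng.normal(size=(m, n))
coords = rng.normal(size=(400, m))
X = coords @ basis + 0.01 * rng.normal(size=(400, n))

# fit a flatten / reconstruct pair from the samples (here: plain PCA)
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
flatten = lambda x: (x - mean) @ Vt[:m].T      # R^n -> R^m
reconstruct = lambda z: z @ Vt[:m] + mean      # R^m -> R^n

# round-trip error on held-out points drawn from the same subspace
x_test = rng.normal(size=(5, m)) @ basis
print(np.linalg.norm(reconstruct(flatten(x_test)) - x_test, axis=1))
```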


Learning Manifold Implicitly via Explicit Heat-Kernel Learning

October 5, 2020

87% Match
Yufan Zhou, Changyou Chen, Jinhui Xu
Machine Learning
Computer Vision and Pattern Recognition
Machine Learning

Manifold learning is a fundamental problem in machine learning with numerous applications. Most of the existing methods directly learn the low-dimensional embedding of the data in some high-dimensional space, and usually lack the flexibility of being directly applicable to downstream applications. In this paper, we propose the concept of implicit manifold learning, where manifold information is implicitly obtained by learning the associated heat kernel. A heat kernel is the ...
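
For context, the object being learned here can be written down explicitly in the discrete setting: on a point cloud one builds a graph Laplacian L and takes the heat kernel K_t = exp(-tL). The sketch below uses this standard eigendecomposition construction, which is not the paper's learned kernel, on points sampled from a circle.

```python
import numpy as np

def heat_kernel(points, t=0.5, sigma=0.3):
    """Discrete heat kernel exp(-t L) for a Gaussian-affinity graph Laplacian."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))       # affinity between sample points
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian
    evals, evecs = np.linalg.eigh(L)
    return evecs @ np.diag(np.exp(-t * evals)) @ evecs.T

# points sampled from a circle, a simple 1-D manifold embedded in R^2
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]
K = heat_kernel(pts)
print(K.shape, K[0, :3])   # heat from point 0 spreads mostly to its neighbors
```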
