December 17, 2021
Deep Neural Networks are widely used for solving complex problems in many scientific areas, such as speech recognition, machine translation, and image analysis. The strategies employed to investigate their theoretical properties mainly rely on Euclidean geometry, but in recent years new approaches based on Riemannian geometry have been developed. Motivated by some open problems, we study a particular sequence of maps between manifolds, with the last manifold of the sequence ...
June 11, 2017
We employ machine learning techniques to investigate the volume minimum of Sasaki-Einstein base manifolds of non-compact toric Calabi-Yau 3-folds. We find that the minimum volume can be approximated via a second-order multiple linear regression on standard topological quantities obtained from the corresponding toric diagram. The approximation improves further after invoking a convolutional neural network with the full toric diagram of the Calabi-Yau 3-folds as the input. We a...
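For concreteness, here is a minimal sketch of the second-order regression step described above. The feature choice (numbers of vertices, edges, and interior lattice points of the toric diagram) and all numerical values are placeholders for illustration, not data from the paper.

```python
# Hedged sketch: second-order multiple linear regression on a few hypothetical
# topological features of a toric diagram. All numbers are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Rows: toric diagrams; columns: (vertices, edges, interior lattice points).
X = np.array([[3, 3, 0], [4, 4, 1], [5, 5, 2], [4, 4, 2],
              [6, 6, 3], [5, 5, 3], [7, 7, 4], [6, 6, 4]], dtype=float)
y = np.array([0.38, 0.29, 0.22, 0.25, 0.18, 0.20, 0.15, 0.16])  # placeholder "minimum volumes"

# "Second-order" regression = ordinary least squares on all degree-2 monomials
# x_i and x_i * x_j of the input features.
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

print(model.predict(quad.transform(np.array([[5.0, 5.0, 1.0]]))))
```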
October 16, 2023
Recent methods in geometric deep learning have introduced various neural networks to operate over data that lie on Riemannian manifolds. Such networks are often necessary to learn well over graphs with a hierarchical structure or to learn over manifold-valued data encountered in the natural sciences. These networks are often inspired by and directly generalize standard Euclidean neural networks. However, extending Euclidean networks is difficult and has only been done for a s...
March 11, 2024
Data sets tend to live in low-dimensional non-linear subspaces, and ideal data analysis tools should therefore account for this non-linear geometry. The symmetric Riemannian geometry setting is suitable for a variety of reasons. First, it comes with a rich mathematical structure that accounts for a wide range of non-linear geometries and, as empirical evidence from classical non-linear embeddings shows, is able to capture the data geometry. Seco...
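The basic primitives such Riemannian data-analysis tools build on are the exponential and logarithmic maps. Below is a minimal sketch on the simplest symmetric space, the unit sphere; the function names are illustrative, not from the paper.

```python
# Hedged sketch: exponential and logarithmic maps on the unit sphere S^2,
# the elementary operations underlying Riemannian data-analysis tools.
import numpy as np

def exp_map(p, v):
    """Follow the geodesic from p (on the sphere) along the tangent vector v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p
    return np.cos(norm_v) * p + np.sin(norm_v) * (v / norm_v)

def log_map(p, q):
    """Tangent vector at p pointing towards q, with length equal to the geodesic distance."""
    w = q - np.dot(p, q) * p          # project q onto the tangent space at p
    norm_w = np.linalg.norm(w)
    if norm_w < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * (w / norm_w)

p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 0.0, 0.0])
print(np.linalg.norm(log_map(p, q)))  # pi/2, the quarter great circle
```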
January 26, 2024
One of the challenges of heterotic compactification on a Calabi-Yau threefold is to determine the physical $(\mathbf{27})^3$ Yukawa couplings of the resulting four-dimensional $\mathcal{N}=1$ theory. In general, the calculation necessitates knowledge of the Ricci-flat metric. However, in the standard embedding, where the gauge bundle is identified with the tangent bundle, we can compute normalized Yukawa couplings from the Weil-Petersson metric on the moduli space of complex structure deformations of...
April 21, 2022
We review some recent applications of machine learning to algebraic geometry and physics. Problems in algebraic geometry can typically be reformulated as mappings between tensors, which makes them particularly amenable to supervised learning. Additionally, unsupervised methods can provide insight into the structure of such geometrical data. At the heart of this programme is the question of how geometry can be machine learned, and indeed how AI helps one to do mathematics...
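As a toy illustration of the "mappings between tensors" framing, the following sketch trains a small network to map integer tensors to a numerical invariant. Both the tensors and the target are random placeholders, not an actual algebro-geometric dataset.

```python
# Hedged sketch of the "tensors in, invariant out" supervised setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 12, 15))   # 200 integer "configuration" tensors
y = X.sum(axis=(1, 2)).astype(float)         # stand-in for a topological invariant

# Flatten each tensor into a feature vector and fit a small fully connected net.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X.reshape(len(X), -1), y)
print(model.score(X.reshape(len(X), -1), y))
```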
January 2, 2020
Generative models in deep learning allow for sampling probability distributions that approximate data distributions. We propose using generative models for making approximate statistical predictions in the string theory landscape. For vacua admitting a Lagrangian description this can be thought of as learning random tensor approximations of couplings. As a concrete proof-of-principle, we demonstrate in a large ensemble of Calabi-Yau manifolds that Kahler metrics evaluated at ...
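As a generic illustration of "train a generative model, then sample from it" (not the architecture or data from the paper), here is a minimal variational autoencoder on placeholder two-dimensional data.

```python
# Hedged sketch: a generic VAE fit to a placeholder 2-D distribution,
# then sampled by decoding latent draws from the prior.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim=2, latent=2, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))
        self.latent = latent

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return self.dec(z), mu, logvar

# Placeholder "data distribution": a correlated 2-D Gaussian.
data = torch.randn(1024, 2) @ torch.tensor([[1.0, 0.8], [0.0, 0.6]])

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    recon, mu, logvar = model(data)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    loss = (recon - data).pow(2).sum(dim=-1).mean() + kl
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: decode latent draws from the prior to approximate the data distribution.
with torch.no_grad():
    samples = model.dec(torch.randn(10, model.latent))
```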
March 2, 2020
We survey some recent developments on the problem of understanding degenerations of Calabi-Yau manifolds equipped with their Ricci-flat Kahler metrics, with an emphasis on the case when the metrics are volume collapsing.
February 11, 2019
A complete understanding of the widely used over-parameterized deep networks is a key step for AI. In this work we try to give a geometric picture of over-parameterized deep networks using our geometrization scheme. We show that the Riemannian geometry of network complexity plays a key role in understanding the basic properties of over-parameterized deep networks, including generalization, convergence, and parameter sensitivity. We also point out that deep networks share lots ...
December 28, 2005
The first part of this paper discusses general procedures for finding numerical approximations to distinguished Kahler metrics, such as Calabi-Yau metrics, on complex projective manifolds. These procedures are closely related to ideas from Geometric Invariant Theory, and to the asymptotics of high powers of positive line bundles. In the core of the paper these ideas are illustrated by detailed numerical results for a particular K3 surface.
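The iteration at the core of such procedures is Donaldson's T-map, whose fixed point is the balanced metric. Below is a minimal numerical sketch, with random placeholder "section values"; a real computation would evaluate a basis of holomorphic sections of a line bundle power at points sampled on the manifold with known weights.

```python
# Hedged sketch of the balanced-metric (T-map) iteration, with the integral
# replaced by an average over sample points. The complex array s[a, i] holds
# placeholder values of section a at point i.
import numpy as np

rng = np.random.default_rng(1)
n_sections, n_points = 6, 2000
s = rng.normal(size=(n_sections, n_points)) + 1j * rng.normal(size=(n_sections, n_points))

def t_map(G):
    """One application of the T-operator to a Hermitian metric G on the sections."""
    Ginv = np.linalg.inv(G)
    # Fubini-Study-type density sum_{a,b} (G^{-1})_{ab} s_a conj(s_b) at each point.
    density = np.einsum('ab,ai,bi->i', Ginv, s, s.conj()).real
    T = (s / density) @ s.conj().T / n_points
    # Fix the overall scale (the T-map is defined up to normalisation).
    return n_sections * T / np.trace(T).real

# Iterate towards the fixed point G = T(G), the (rescaled) balanced metric.
G = np.eye(n_sections, dtype=complex)
for _ in range(100):
    G = t_map(G)
print(np.linalg.norm(G - t_map(G)))  # should be close to zero at the fixed point
```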