ID: 2001.02515

Einstein's Field Equations as Continuous-Time Recurrent Neural Networks

January 8, 2020


Similar papers (page 2)

The Autodidactic Universe

March 29, 2021

84% Match
Stephon Alexander, William J. Cunningham, Jaron Lanier, Lee Smolin, Stefan Stanojevic, ..., Dave Wecker
Artificial Intelligence
Machine Learning
History and Philosophy of Physics

We present an approach to cosmology in which the Universe learns its own physical laws. It does so by exploring a landscape of possible laws, which we express as a certain class of matrix models. We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine, such as a deep recurrent, cyclic neural network. This establishes a correspondence between each solution of the physical theory and...

Backpropagation through space, time, and the brain

March 25, 2024

84% Match
Benjamin Ellenberger, Paul Haider, Jakob Jordan, Kevin Max, Ismael Jaras, Laura Kriener, ..., Mihai A. Petrovici
Neurons and Cognition
Artificial Intelligence
Machine Learning
Neural and Evolutionary Computing
Signal Processing

Effective learning in neuronal networks requires the adaptation of individual synapses given their relative contribution to solving a task. However, physical neuronal systems -- whether biological or artificial -- are constrained by spatio-temporal locality. How such networks can perform efficient credit assignment remains, to a large extent, an open question. In Machine Learning, the answer is almost universally given by the error backpropagation algorithm, through both spa...

Exploring New Physics Frontiers Through Numerical Relativity

August 29, 2014

83% Match
Vitor Cardoso, Leonardo Gualtieri, ..., Ulrich Sperhake
High Energy Astrophysical Phenomena

The demand to obtain answers to highly complex problems within strong-field gravity has been met with significant progress in the numerical solution of Einstein's equations - along with some spectacular results - in various setups. We review techniques for solving Einstein's equations in generic spacetimes, focusing on fully nonlinear evolutions but also on how to benchmark those results with perturbative approaches. The results address problems in high-energy physics, hologr...

Analysis of Black Hole Solutions in Parabolic Class Using Neural Networks

February 9, 2023

83% Match
Ehsan Hatefi, Armin Hatefi, Roberto J. López-Sastre
Mathematical Physics
Computational Physics
Data Analysis, Statistics and Probability

In this paper, we introduce a numerical method based on Artificial Neural Networks (ANNs) for the analysis of black hole solutions to the Einstein-axion-dilaton system in a high dimensional parabolic class. Leveraging a profile root-finding technique based on General Relativity, we describe an ANN solver to directly tackle the system of ordinary differential equations. Through our extensive numerical analysis, we demonstrate, for the first time, that there is no self-similar c...

Black Hole Weather Forecasting with Deep Learning: A Pilot Study

February 11, 2021

83% Match
Roberta Duarte, Rodrigo Nemmen, João Paulo Navarro
High Energy Astrophysical Phenomena

In this pilot study, we investigate the use of a deep learning (DL) model to temporally evolve the dynamics of gas accreting onto a black hole in the form of a radiatively inefficient accretion flow (RIAF). We have trained a machine to forecast such a spatiotemporally chaotic system -- i.e. black hole weather forecasting -- using a convolutional neural network (CNN) and a training dataset which consists of numerical solutions of the hydrodynamical equations, for a range of in...

Randomized Sparse Neural Galerkin Schemes for Solving Evolution Equations with Deep Networks

October 7, 2023

83% Match
Jules Berman, Benjamin Peherstorfer
Machine Learning
Numerical Analysis

Training neural networks sequentially in time to approximate solution fields of time-dependent partial differential equations can be beneficial for preserving causality and other physics properties; however, the sequential-in-time training is numerically challenging because training errors quickly accumulate and amplify over time. This work introduces Neural Galerkin schemes that update randomized sparse subsets of network parameters at each time step. The randomization avoid...
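The randomized sparse update mechanic at the heart of this scheme can be sketched in miniature. The toy below (a hypothetical simplification, not the paper's full Neural Galerkin scheme) treats each step of a least-squares fit as the "time step" and adjusts only a random subset of the parameters, showing that sparse randomized updates still drive the full parameter vector to the solution:

```python
import numpy as np

# Toy model of randomized sparse updates: at each step only a random
# subset of parameters is adjusted, here by gradient descent on a
# least-squares objective that plays the role of the Galerkin residual.
rng = np.random.default_rng(0)
n_params, subset = 20, 5
X = rng.normal(size=(100, n_params))
theta_true = rng.normal(size=n_params)
y = X @ theta_true
theta = np.zeros(n_params)
for step in range(2000):
    idx = rng.choice(n_params, size=subset, replace=False)  # random sparse subset
    grad = X.T @ (X @ theta - y) / len(y)
    theta[idx] -= 0.1 * grad[idx]          # update only the chosen entries
err = np.linalg.norm(theta - theta_true)
print(f"parameter error after sparse updates: {err:.3f}")
```

Because each step touches only a quarter of the parameters, the per-step work drops accordingly, while the randomization keeps any single parameter from being starved of updates.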

Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations

November 28, 2017

83% Match
Maziar Raissi, Paris Perdikaris, George Em Karniadakis
Artificial Intelligence
Machine Learning
Numerical Analysis
Dynamical Systems

We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this two-part treatise, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we...
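The core idea -- penalize the differential-equation residual at collocation points alongside the boundary/initial data -- can be shown in miniature. The sketch below substitutes a polynomial ansatz for the neural network (a hypothetical simplification so the fit reduces to linear least squares; real PINNs use automatic differentiation on a network) and solves the ODE u' + u = 0, u(0) = 1:

```python
import numpy as np

# PINN loss in miniature: an ansatz u(t) = sum_k a_k t^k is fit by
# penalizing the ODE residual u' + u at collocation points plus a
# heavily weighted initial-condition term u(0) = 1.
deg = 8
t = np.linspace(0.0, 2.0, 50)                       # collocation points
U = np.vander(t, deg + 1, increasing=True)          # rows: u(t_i)
dU = np.hstack([np.zeros((t.size, 1)),
                U[:, :-1] * np.arange(1, deg + 1)]) # rows: u'(t_i)
# Stack residual rows (u' + u = 0) with the initial-condition row.
A = np.vstack([dU + U,
               100.0 * np.vander([0.0], deg + 1, increasing=True)])
b = np.concatenate([np.zeros(t.size), [100.0]])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
u = U @ coef
err = np.max(np.abs(u - np.exp(-t)))                # exact solution is exp(-t)
print(f"max error vs exp(-t): {err:.2e}")
```

The same loss structure -- residual term plus data term -- carries over unchanged when the polynomial is replaced by a network and least squares by stochastic gradient descent.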

Synthesis of recurrent neural networks for dynamical system simulation

December 17, 2015

83% Match
Adam Trischler, Gabriele MT D'Eleuterio
Neural and Evolutionary Computing

We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector field representation of a given dynamical system using backpropagation, then recast, using matrix manipulations, as a recu...
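The recast described above -- a map approximating the vector field wrapped into a recurrent state update -- can be sketched as follows. Here a known linear field stands in for the trained feedforward network (a hypothetical simplification), and an explicit Euler step turns it into the recurrence x_{k+1} = x_k + h f(x_k):

```python
import numpy as np

# A map f approximating the vector field (here a linear harmonic-
# oscillator field standing in for a trained feedforward network)
# becomes a recurrent network once wrapped in an Euler update.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator field
f = lambda x: A @ x                        # surrogate "network"
h = 1e-3
steps = int(np.pi / h)                     # simulate for time ~ pi
x = np.array([1.0, 0.0])
for _ in range(steps):
    x = x + h * f(x)                       # recurrent state update
print(x)                                   # ~ rotation by pi: close to [-1, 0]
```

The recurrent weights are fixed by the trained map and the step size h, which is the sense in which the feedforward network is "recast" rather than retrained.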

A deep learning theory for neural networks grounded in physics

March 18, 2021

83% Match
Benjamin Scellier
Machine Learning

In the last decade, deep learning has become a major component of artificial intelligence. The workhorse of deep learning is the optimization of loss functions by stochastic gradient descent (SGD). Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm. However, the computer architectures on which these neural networks are implemented and trained suffer ...

Quasinormal Modes in Modified Gravity using Physics-Informed Neural Networks

April 17, 2024

83% Match
Raimon Luna, Daniela D. Doneva, José A. Font, ..., Stoytcho S. Yazadjiev
High Energy Astrophysical Phenomena

In this paper, we apply a novel approach based on physics-informed neural networks to the computation of quasinormal modes of black hole solutions in modified gravity. In particular, we focus on the case of Einstein-scalar-Gauss-Bonnet theory, with several choices of the coupling function between the scalar field and the Gauss-Bonnet invariant. This type of calculation introduces a number of challenges with respect to the case of General Relativity, mainly due to the extra co...
