January 8, 2020
We examine a subset of spatially homogeneous and anisotropic solutions to Einstein's field equations, the Bianchi Type A models, and show that they can be written as a continuous-time recurrent neural network (CTRNN). This reformulation recasts potentially complicated nonlinear equations as a simpler dynamical system built from linear combinations of the network weights and logistic sigmoid activation functions. The CTRNN itself is trained by using an explicit Runge-Kutta solver to sample a number of solutions of Einstein's equations for the Bianchi Type A models, and then using a nonlinear least-squares approach to find the optimal set of weights, time-delay constants, and bias parameters that best fit the CTRNN equations to the Einstein equations. As numerical examples, we provide solutions to the Bianchi Type I and II models. We conclude the paper with comments on the probability distributions of the optimal parameters and ideas for future work.
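The training recipe in the abstract (sample trajectories with an explicit Runge-Kutta solver, then fit the CTRNN parameters by nonlinear least squares) can be sketched in miniature. The sketch below is not the paper's implementation: the target is a hypothetical 2-D linear ODE standing in for a sampled Bianchi solution, and SciPy's `solve_ivp` and `least_squares` stand in for whatever solvers the authors used.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_rhs(t, y, tau, W, b):
    # Standard CTRNN form: tau_i * dy_i/dt = -y_i + sum_j W_ij * sigmoid(y_j + b_j)
    return (-y + W @ sigmoid(y + b)) / tau

# Hypothetical target: a 2-D linear system standing in for a sampled Einstein solution.
A = np.array([[-1.0, 0.5],
              [0.0, -0.5]])
t_span, t_eval = (0.0, 5.0), np.linspace(0.0, 5.0, 50)
y0 = np.array([1.0, -0.5])
target = solve_ivp(lambda t, y: A @ y, t_span, y0, t_eval=t_eval, rtol=1e-8)

n = 2
def unpack(p):
    tau = np.exp(p[:n])                  # exp keeps the time constants positive
    W = p[n:n + n * n].reshape(n, n)
    b = p[n + n * n:]
    return tau, W, b

def residual(p):
    tau, W, b = unpack(p)
    fit = solve_ivp(ctrnn_rhs, t_span, y0, t_eval=t_eval,
                    args=(tau, W, b), rtol=1e-6)
    return (fit.y - target.y).ravel()    # pointwise trajectory mismatch

rng = np.random.default_rng(0)
p0 = np.concatenate([np.zeros(n), 0.1 * rng.standard_normal(n * n), np.zeros(n)])
cost0 = 0.5 * np.sum(residual(p0) ** 2)
res = least_squares(residual, p0)        # nonlinear least-squares fit
tau, W, b = unpack(res.x)
```

The exponential reparameterization of the time constants is one simple way to keep them positive during an unconstrained fit; bounds in `least_squares` would work as well.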
Similar papers
September 14, 2023
The Einstein field equations are notoriously challenging to solve due to their complex mathematical form, with few analytical solutions available in the absence of highly symmetric systems or idealized matter distributions. However, accurate solutions are crucial, particularly in systems with strong gravitational fields such as black holes or neutron stars. In this work, we use neural networks and automatic differentiation to solve the Einstein field equations numerically, inspired by the id...
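The strategy that abstract describes, representing the unknown with a network and penalizing the residual of the governing equation, can be illustrated on a toy problem. The example below is an assumption-laden stand-in, not the paper's code: it fits a tiny tanh network to the one-dimensional equation u'' + u = 0 with u(0) = 0, u'(0) = 1, and uses central finite differences in place of automatic differentiation to keep the sketch dependency-free.

```python
import numpy as np
from scipy.optimize import least_squares

# Tiny network u_theta(x): 1 -> 8 -> 1, parameters packed in a flat vector.
n_h = 8
def unpack(p):
    W1 = p[:n_h].reshape(n_h, 1)
    b1 = p[n_h:2 * n_h]
    W2 = p[2 * n_h:].reshape(1, n_h)
    return W1, b1, W2

def u(x, p):
    W1, b1, W2 = unpack(p)
    h = np.tanh(W1 * x + b1[:, None])    # broadcast over collocation points
    return (W2 @ h).ravel()

xs = np.linspace(0.0, np.pi, 25)         # collocation points
eps = 1e-3                               # finite-difference step

def residual(p):
    # Central differences stand in for automatic differentiation here.
    upp = (u(xs + eps, p) - 2.0 * u(xs, p) + u(xs - eps, p)) / eps**2
    pde = upp + u(xs, p)                                    # u'' + u = 0
    bc0 = u(np.array([0.0]), p)                             # u(0) = 0
    up0 = (u(np.array([eps]), p) - u(np.array([-eps]), p)) / (2.0 * eps)
    return np.concatenate([pde, bc0, up0 - 1.0])            # u'(0) = 1

p0 = 0.3 * np.random.default_rng(0).standard_normal(3 * n_h)
cost0 = 0.5 * np.sum(residual(p0) ** 2)
res = least_squares(residual, p0)        # drive the equation residual toward zero
```

With the residual driven down, `u(xs, res.x)` approximates the exact solution sin(x); a real physics-informed solver would differentiate the network exactly via autodiff rather than by finite differences.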
January 5, 2022
We take a novel perspective on long-established classification problems in general relativity by adopting fruitful techniques from machine learning and modern data science. In particular, we model Petrov's classification of spacetimes and show that a feed-forward neural network can achieve a high degree of success. We also show how data visualization techniques with dimensionality reduction can help analyze the underlying patterns in the structure of the different types of...
June 30, 2022
We propose a "Lagrangian Density Space-Time Deep Neural Network" (LDDNN) topology as a network-based function approximator. It is suited to unsupervised training and learns to predict the dynamics of phenomena governed by underlying physics. The prototypical network respects the fundamental conservation laws of nature through the succinctly described Lagrangian and Hamiltonian densities of the system, given a data set of generalized nonlinear partial dif...
November 27, 2020
Deep learning has been widely and actively used in various research areas. Recently, in the gauge/gravity duality, a new deep learning technique, so-called AdS/Deep-Learning (AdS/DL), has been proposed [1, 2]. The goal of this paper is to describe the essence of AdS/DL in the simplest possible setups, for those who want to apply it to the subject of emergent spacetime as a neural network. As prototypical examples, we choose simple classical mechanics problems. This method ...
March 27, 2024
The parametrized post-Einsteinian (ppE) framework and its variants are widely used to probe gravity through gravitational-wave tests that apply to a large class of theories beyond general relativity. However, the ppE framework is not truly theory-agnostic, as it only captures certain types of deviations from general relativity: those that admit a post-Newtonian series representation in the inspiral of coalescing compact objects. Moreover, each type of deviation in the ppE f...
March 21, 2024
Holography relates gravitational theories in five dimensions to four-dimensional quantum field theories in flat space. Under this map, the equation of state of the field theory is encoded in the black hole solutions of the gravitational theory. Solving the five-dimensional Einstein's equations to determine the equation of state is an algorithmic, direct problem. Determining the gravitational theory that gives rise to a prescribed equation of state is a much more challenging, ...
March 28, 2002
This paper gives a self-contained, elementary, and largely pictorial statement of Einstein's equation.
October 28, 2021
A neural network is a dynamical system described by two different types of degrees of freedom: fast-changing non-trainable variables (e.g., the state of neurons) and slow-changing trainable variables (e.g., weights and biases). We show that the non-equilibrium dynamics of the trainable variables can be described by the Madelung equations if the number of neurons is fixed, and by the Schrödinger equation if the learning system is capable of adjusting its own parameters such as the numbe...
February 22, 2022
We demonstrate that the dynamics of neural networks trained with gradient descent and the dynamics of scalar fields in a flat, vacuum-energy-dominated Universe are profoundly related in structure. This duality provides a framework for synergies between the two systems: understanding and explaining neural network dynamics, and new ways of simulating and describing early-Universe models. Working in the continuous-time limit of neural networks, we analytically match the dynamics of ...
January 8, 2024
This short, self-contained article seeks to introduce and survey continuous-time deep learning approaches that are based on neural ordinary differential equations (neural ODEs). It primarily targets readers familiar with ordinary and partial differential equations and their analysis who are curious to see their role in machine learning. Using three examples from machine learning and applied mathematics, we will see how neural ODEs can provide new insights into deep learning a...
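In its simplest form, a neural ODE treats a small network as the right-hand side of an initial-value problem, dy/dt = f_theta(y), integrated with a standard solver. The fragment below is a minimal, untrained illustration of that idea (random weights, SciPy's default RK45 integrator), not an example taken from the article:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Random weights for a tiny 2 -> 8 -> 2 MLP; a real neural ODE would train these
# by backpropagating through the solver or via the adjoint method.
rng = np.random.default_rng(1)
W1, b1 = 0.5 * rng.standard_normal((8, 2)), np.zeros(8)
W2, b2 = 0.5 * rng.standard_normal((2, 8)), np.zeros(2)

def f_theta(t, y):
    # The network itself defines the vector field: dy/dt = W2 tanh(W1 y + b1) + b2
    return W2 @ np.tanh(W1 @ y + b1) + b2

# Integrating the network-defined ODE plays the role of a forward pass.
sol = solve_ivp(f_theta, (0.0, 1.0), np.array([1.0, 0.0]),
                t_eval=np.linspace(0.0, 1.0, 11))
```

The depth of the "network" is replaced by integration time, which is the central idea the surveyed approaches build on.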