ID: cs/0702148

Linking Microscopic and Macroscopic Models for Evolution: Markov Chain Network Training and Conservation Law Approximations

February 25, 2007

Roderick V. N. Melnik
Computer Science
Mathematics
Computational Engineering, Finance, and Science
Information Theory
Numerical Analysis
Neural and Evolutionary Computing

In this paper, a general framework is proposed for analyzing the connection between the training of artificial neural networks via Markov chain dynamics and the approximation of conservation law equations. The framework demonstrates an intrinsic link between microscopic and macroscopic models for evolution via the concept of perturbed generalized dynamic systems. The main result is illustrated with a number of examples in which efficient numerical approximations follow directly from network-based computational models, viewed here as Markov chain approximations. Finally, stability and consistency conditions for such computational models are discussed.
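The link invoked above is easiest to see in its simplest instance. Below is a minimal sketch (an illustration of the general idea, not the paper's construction): the first-order upwind scheme for the linear advection equation u_t + a u_x = 0 can be read as a Markov chain, because with CFL number nu = a*dt/dx in [0, 1] each update is a convex combination of neighboring states. The one-step matrix is then stochastic, so the CFL stability condition is exactly nonnegativity of the transition probabilities, and with periodic boundaries the matrix is doubly stochastic, so total mass is conserved.

```python
import numpy as np

a, dx, dt = 1.0, 0.01, 0.008            # wave speed, grid spacing, time step
nu = a * dt / dx                        # CFL number; here 0.8
assert 0.0 <= nu <= 1.0, "CFL condition = nonnegative transition probabilities"

n = 200
x = np.arange(n) * dx
u = np.exp(-((x - 1.0) ** 2) / 0.01)    # initial profile
mass0 = u.sum()

# One-step operator of the scheme: u_j^{n+1} = (1 - nu) u_j^n + nu u_{j-1}^n.
# Each row is a probability vector, so the Markov-chain view makes max-norm
# stability immediate; with periodic boundaries the matrix is doubly
# stochastic, so the total "mass" is conserved exactly.
S = np.roll(np.eye(n), -1, axis=1)      # S[j, j-1] = 1 (periodic shift)
A = (1.0 - nu) * np.eye(n) + nu * S

for _ in range(150):
    u = A @ u                           # advect the profile to the right

print("mass drift:", abs(u.sum() - mass0))   # ~0 up to round-off
```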

Similar papers

Towards a Theory of Evolution as Multilevel Learning

October 27, 2021

87% Match
Vitaly Vanchurin, Yuri I. Wolf, ..., Eugene V. Koonin
Populations and Evolution
Disordered Systems and Neural Networks
Machine Learning

We apply the theory of learning to physically renormalizable systems in an attempt to develop a theory of biological evolution, including the origin of life, as multilevel learning. We formulate seven fundamental principles of evolution that appear to be necessary and sufficient to render a universe observable and show that they entail the major features of biological evolution, including replication and natural selection. These principles also follow naturally from the theor...


Modeling Global Dynamics from Local Snapshots with Deep Generative Neural Networks

February 10, 2018

86% Match
Scott Gigante, David van Dijk, Kevin Moon, Alexander Strzalkowski, ..., Smita Krishnaswamy
Machine Learning

Complex, high-dimensional stochastic dynamical systems arise in many applications in the natural sciences and especially biology. However, while these systems are difficult to describe analytically, "snapshot" measurements that sample the output of the system are often available. In order to model the dynamics of such systems given snapshot data, or local transitions, we present a deep neural network framework we call the Dynamics Modeling Network, or DyMoN. DyMoN is a neural network...
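The underlying idea admits a small self-contained illustration. This is a hedged sketch of transition learning in general, not the DyMoN architecture; the toy system (a noisy planar rotation) and the tiny two-layer network are invented for the example: fit a network to observed pairs (x_t, x_{t+1}), then iterate the learned map to generate trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: a noisy rotation in the plane; we only observe transitions.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = rng.normal(size=(1000, 2))
Y = X @ R.T + 0.01 * rng.normal(size=X.shape)   # snapshot pairs (x_t, x_{t+1})

# One-hidden-layer network trained by plain gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)              # forward pass
    P = H @ W2 + b2
    G = 2.0 * (P - Y) / len(X)            # gradient of MSE w.r.t. P
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Roll the learned transition operator forward from a new initial state;
# the iterates approximately trace out the underlying rotation.
x = np.array([1.0, 0.0])
for _ in range(5):
    x = np.tanh(x @ W1 + b1) @ W2 + b2
print("state after 5 learned steps:", x)
```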


Dynamical stability and chaos in artificial neural network trajectories along training

April 8, 2024

86% Match
Kaloyan Danovski, Miguel C. Soriano, Lucas Lacasa
Machine Learning
Disordered Systems and Neural Networks
Chaotic Dynamics
Data Analysis, Statistics and Probability

The process of training an artificial neural network involves iteratively adapting its parameters so as to minimize the error of the network's prediction when confronted with a learning task. This iterative change can be naturally interpreted as a trajectory in network space -- a time series of networks -- and thus the training algorithm (e.g., gradient descent optimization of a suitable loss function) can be interpreted as a dynamical system in graph space. In order to illus...
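A toy probe in this spirit (illustrative only, not the authors' protocol or models) is to train two copies of the same model from nearby initializations and track their separation in parameter space; exponential growth of the gap suggests chaotic training dynamics, while decay suggests a stable trajectory. For the convex problem below the gap contracts.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 4))
y = np.sin(X.sum(axis=1))

def grad(w):
    # Gradient of MSE for a linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(X)

w_a = rng.normal(size=4)
w_b = w_a + 1e-6 * rng.normal(size=4)    # nearby initialization
lr = 0.05
for step in range(1, 201):
    w_a -= lr * grad(w_a)
    w_b -= lr * grad(w_b)
    if step % 50 == 0:
        # Separation of the two training trajectories in weight space.
        print(step, np.linalg.norm(w_a - w_b))
```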


Detailed Balanced Chemical Reaction Networks as Generalized Boltzmann Machines

May 12, 2022

86% Match
William Poole, Thomas Ouldridge, ..., Erik Winfree
Molecular Networks
Statistical Mechanics
Machine Learning

Can a micron-sized sack of interacting molecules understand, and adapt to, a constantly fluctuating environment? Cellular life provides an existence proof in the affirmative, but the principles that allow for life's existence are far from being proven. One challenge in engineering and understanding biochemical computation is the intrinsic noise due to chemical fluctuations. In this paper, we draw insights from machine learning theory, chemical reaction network theory, and stat...


Neuromodulated Learning in Deep Neural Networks

December 5, 2018

85% Match
Dennis G. Wilson, Sylvain Cussat-Blanc, ..., Kyle Harrington
Neural and Evolutionary Computing
Machine Learning

In the brain, learning signals change over time and synaptic location, and are applied based on the learning history at the synapse, in the complex process of neuromodulation. Learning in artificial neural networks, on the other hand, is shaped by hyper-parameters set before learning starts, which remain static throughout learning, and which are uniform for the entire network. In this work, we propose a method of deep artificial neuromodulation which applies the concepts of b...
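The contrast being drawn, static global hyperparameters versus signals that vary across the network and over time, can be illustrated with a deliberately simple stand-in. This is not the paper's method; the layers, the stand-in gradient, and the modulation rule below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
layers = {"conv1": rng.normal(size=8), "fc": rng.normal(size=4)}
base_lr = 0.1
trace = {name: 0.0 for name in layers}   # running per-layer gradient-norm trace

def fake_grad(w):
    # Stand-in gradient (illustration only): pulls weights toward zero.
    return w + 0.1 * rng.normal(size=w.shape)

for step in range(100):
    for name, w in layers.items():
        g = fake_grad(w)
        # Modulation-like signal: layers whose gradients have recently been
        # large get a smaller effective learning rate, so the "hyperparameter"
        # varies per layer and per step instead of staying global and static.
        trace[name] = 0.9 * trace[name] + 0.1 * np.linalg.norm(g)
        layers[name] = w - base_lr / (1.0 + trace[name]) * g

print({name: round(np.linalg.norm(w), 3) for name, w in layers.items()})
```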


Machine learning independent conservation laws through neural deflation

March 28, 2023

85% Match
Wei Zhu, Hong-Kun Zhang, P. G. Kevrekidis
Pattern Formation and Solitons

We introduce a methodology for seeking conservation laws within a Hamiltonian dynamical system, which we term "neural deflation". Inspired by deflation methods for steady states of dynamical systems, we propose to iteratively train a number of neural networks to minimize a regularized loss function accounting for the necessity of conserved quantities to be in involution and enforcing functional independence thereof consistently in the infinite-sample limit. The meth...
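The core ingredient, fitting a quantity so that its time derivative along the flow vanishes, can be written in a few lines. The sketch below is a drastic simplification: a quadratic ansatz replaces the neural networks, the system is the 1-DOF harmonic oscillator, and the involution and independence penalties of the full method are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
q, p = rng.normal(size=(2, 500))           # sampled phase-space states

def residual(theta):
    a, b, c = theta
    # dC/dt along the harmonic-oscillator flow q' = p, p' = -q,
    # i.e. (dC/dq) p - (dC/dp) q for C = a q^2 + b p^2 + c q p.
    return (2*a*q + c*p) * p - (2*b*p + c*q) * q

theta = rng.normal(size=3)
for _ in range(2000):
    r = residual(theta)
    # Gradient of mean r^2 w.r.t. (a, b, c); renormalizing each step rules
    # out the trivial conserved quantity C = 0.
    grads = np.array([np.mean(2*r * 2*q*p),
                      np.mean(2*r * (-2*q*p)),
                      np.mean(2*r * (p*p - q*q))])
    theta -= 0.01 * grads
    theta /= np.linalg.norm(theta)

print("learned (a, b, c):", theta)   # ~ +/-(1, 1, 0)/sqrt(2): the energy
```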


Synthesis of recurrent neural networks for dynamical system simulation

December 17, 2015

85% Match
Adam Trischler, Gabriele M. T. D'Eleuterio
Neural and Evolutionary Computing

We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector field representation of a given dynamical system using backpropagation, then recast, using matrix manipulations, as a recu...
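The recasting step can be illustrated in miniature (a simplified variant, not the paper's matrix-manipulation construction): once a feedforward map approximating the vector field f(x) is in hand, wrapping it in an Euler update turns it into a recurrent simulator.

```python
import numpy as np

# Stand-in for a trained feedforward approximation of the vector field of
# the harmonic oscillator, f(q, p) = (p, -q); in practice this map would be
# a network fitted to samples of f by backpropagation.
def net(x):
    q, p = x
    return np.array([p, -q])

h = 0.01
x = np.array([1.0, 0.0])
for _ in range(int(2 * np.pi / h)):     # roughly one oscillation period
    x = x + h * net(x)                  # the feedforward map, run recurrently

print("state after ~one period:", x)    # back near the start, Euler drift aside
```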


Towards Hyperparameter-Agnostic DNN Training via Dynamical System Insights

October 21, 2023

85% Match
Carmel Fiscko, Aayushya Agarwal, Yihan Ruan, Soummya Kar, ..., Bruno Sinopoli
Machine Learning
Systems and Control

We present a stochastic first-order optimization method specialized for deep neural networks (DNNs), ECCO-DNN. This method models the optimization variable trajectory as a dynamical system and develops a discretization algorithm that adaptively selects step sizes based on the trajectory's shape. This provides two key insights: designing the dynamical system for fast continuous-time convergence and developing a time-stepping algorithm to adaptively select step sizes based on p...
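As a generic illustration of trajectory-shaped step-size control (emphatically not the published ECCO-DNN update rule, whose details are in the paper), one can grow the step while successive gradients stay aligned and shrink it when the trajectory turns sharply.

```python
import numpy as np

def loss_grad(w):
    return 2.0 * w * np.array([1.0, 10.0])   # gradient of w1^2 + 10*w2^2

w = np.array([1.0, 1.0])
lr, g_prev = 0.01, None
for _ in range(200):
    g = loss_grad(w)
    if g_prev is not None:
        cos = g @ g_prev / (np.linalg.norm(g) * np.linalg.norm(g_prev) + 1e-12)
        # Grow the step while the trajectory keeps its heading, shrink it
        # on a sharp turn; cap it so the stiff w2 direction stays stable.
        lr = min(lr * 1.1, 0.05) if cos > 0.9 else lr * 0.5
    w = w - lr * g
    g_prev = g

print("final iterate:", w)                    # near the minimum at the origin
```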


Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics

December 8, 2020

85% Match
Daniel Kunin, Javier Sagastuy-Brena, Surya Ganguli, ..., Hidenori Tanaka
Machine Learning
Disordered Systems and Neural Networks
Statistical Mechanics
Neurons and Cognition

Understanding the dynamics of neural network parameters during training is one of the key challenges in building a theoretical foundation for deep learning. A central obstacle is that the motion of a network in high-dimensional parameter space undergoes discrete finite steps along complex stochastic gradients derived from real-world datasets. We circumvent this obstacle through a unifying theoretical framework based on intrinsic symmetries embedded in a network's architecture...
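The simplest case of the symmetry-to-conservation-law correspondence studied here can be checked numerically: if the loss is invariant under rescaling a group of weights (as for weights feeding a normalization layer), the gradient is orthogonal to those weights, so ||w||^2 is conserved under gradient flow and drifts only at O(lr^2) per discrete step. A minimal sketch, assuming a hand-built scale-invariant loss:

```python
import numpy as np

rng = np.random.default_rng(4)
v = rng.normal(size=8)

def loss(w):
    # Invariant under w -> s*w for any s > 0, mimicking weights whose scale
    # a downstream normalization layer removes.
    return np.sum((w / np.linalg.norm(w) - v) ** 2)

def grad(w, eps=1e-6):
    # Central finite differences; accurate enough for a sanity check.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w = rng.normal(size=8)
print("w . grad(w) =", w @ grad(w))        # ~0: the conserved-charge condition
n0 = w @ w
for _ in range(1000):
    w -= 1e-2 * grad(w)
print("relative drift of ||w||^2:", abs(w @ w - n0) / n0)   # small, O(lr^2)/step
```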


Deep neural networks from the perspective of ergodic theory

August 4, 2023

85% Match
Fan Zhang
Machine Learning
Disordered Systems and Neural Networks

The design of deep neural networks remains somewhat of an art rather than a precise science. By tentatively adopting ergodic-theory considerations on top of viewing the network as the time evolution of a dynamical system, with each layer corresponding to a temporal instance, we show that some rules of thumb, which might otherwise appear mysterious, can be given heuristic explanations.
