Similar papers 2
November 2, 2023
Can a micron-sized sack of interacting molecules autonomously learn an internal model of a complex and fluctuating environment? We draw insights from control theory, machine learning theory, chemical reaction network theory, and statistical physics to develop a general architecture whereby a broad class of chemical systems can autonomously learn complex distributions. Our construction takes the form of a chemical implementation of machine learning's optimization workhorse: gradient descent ...
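As a minimal sketch of the underlying idea (an illustration, not this paper's chemical construction): gradient descent can be viewed as a continuous-time flow, dθ/dt = −∇L(θ), which is the kind of dynamics a reaction network could in principle realize. Assuming a simple quadratic loss:

    # Gradient descent as a continuous-time flow: d(theta)/dt = -grad L(theta).
    # Hypothetical illustration; the paper's chemical implementation is not shown.
    import numpy as np

    def grad_loss(theta, target):
        # Quadratic loss L(theta) = 0.5 * ||theta - target||^2, so grad L = theta - target.
        return theta - target

    theta = np.array([2.0, -1.0])   # initial parameters
    target = np.array([0.5, 0.5])   # minimizer of the loss
    dt = 0.01                       # Euler integration step

    for _ in range(1000):
        theta -= dt * grad_loss(theta, target)

    print(theta)  # converges toward target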
March 18, 2021
In the last decade, deep learning has become a major component of artificial intelligence. The workhorse of deep learning is the optimization of loss functions by stochastic gradient descent (SGD). Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm. However, the computer architectures on which these neural networks are implemented and trained suffer ...
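To pin down the mechanics the abstract refers to, here is a minimal SGD-with-backpropagation loop for a one-hidden-layer network written from scratch in numpy; this illustrates the standard algorithm, not the paper's hardware-oriented alternative:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 2))                            # toy inputs
    y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(256, 1))   # noisy targets

    W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
    lr, batch = 0.05, 32

    for step in range(2000):
        idx = rng.integers(0, len(X), size=batch)   # stochastic mini-batch
        x, t = X[idx], y[idx]
        h = np.tanh(x @ W1 + b1)                    # forward pass
        out = h @ W2 + b2
        err = out - t                               # dLoss/dout for 0.5*MSE
        # Backpropagation: chain rule applied layer by layer.
        gW2 = h.T @ err / batch; gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)              # tanh' = 1 - tanh^2
        gW1 = x.T @ dh / batch; gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2              # SGD parameter update
        W1 -= lr * gW1; b1 -= lr * gb1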
June 5, 2024
Many important phenomena in chemistry and biology are realized via dynamical features such as multi-stability, oscillations, and chaos. Construction of novel chemical systems with such finely tuned dynamics is a challenging problem central to the growing field of synthetic biology. In this paper, we address this problem by putting forward a molecular version of a recurrent artificial neural network, which we call a recurrent neural chemical reaction network (RNCRN). We prove ...
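As a hedged illustration of the setting (not the paper's RNCRN construction), mass-action kinetics turns a list of reactions into coupled ODEs; the classic Brusselator network below, integrated with scipy (assumed available), shows how a small reaction system produces sustained oscillations:

    import numpy as np
    from scipy.integrate import solve_ivp

    def brusselator(t, z, a=1.0, b=3.0):
        # Mass-action ODEs for the Brusselator reaction network:
        # dx/dt = a - (b + 1) x + x^2 y,   dy/dt = b x - x^2 y
        x, y = z
        return [a - (b + 1) * x + x**2 * y, b * x - x**2 * y]

    sol = solve_ivp(brusselator, (0, 50), [1.0, 1.0], max_step=0.01)
    print(sol.y[:, -5:])  # concentrations settle onto a limit cycle (oscillations)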
April 11, 2018
We present a method for using neural networks to model evolutionary population dynamics, and draw parallels to recent deep learning advancements in which adversarially trained neural networks engage in coevolutionary interactions. We conduct experiments that demonstrate that models from evolutionary game theory are capable of describing the behavior of these neural population systems.
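For concreteness, a compact example of the kind of model evolutionary game theory provides (an illustration with a hypothetical payoff matrix, not the paper's experiments): replicator dynamics over strategy frequencies.

    import numpy as np

    def replicator_step(p, A, dt=0.01):
        # Replicator dynamics: dp_i/dt = p_i * ((A p)_i - p . A p)
        fitness = A @ p
        return p + dt * p * (fitness - p @ fitness)

    A = np.array([[0.0, 1.5], [1.0, 0.5]])  # hypothetical 2-strategy payoff matrix
    p = np.array([0.9, 0.1])                # initial strategy frequencies
    for _ in range(5000):
        p = replicator_step(p, A)
    print(p)  # frequencies approach the game's mixed equilibrium (0.5, 0.5)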
March 7, 2022
Recent work has focused on data-driven learning of the evolution of unknown systems via deep neural networks (DNNs), with the goal of conducting long-time prediction of the evolution of the unknown system. Training a DNN with low generalization error is a particularly important task in this case, as error accumulates over time. Because of the inherent randomness in DNN training, chiefly in stochastic optimization, there is uncertainty in the resulting prediction, and therefore ...
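A sketch of this setup under assumed details (a linear toy system and a bootstrap ensemble stand in for the paper's method): a model learns the one-step flow map x_{t+1} = F(x_t) from trajectory data, long-time prediction rolls the learned map forward so per-step errors compound, and the spread across an ensemble of trained models quantifies the uncertainty.

    import numpy as np

    rng = np.random.default_rng(1)
    A_true = np.array([[0.99, 0.1], [-0.1, 0.99]])        # unknown system (damped rotation)
    X = rng.normal(size=(500, 2))
    Y = X @ A_true.T + 0.01 * rng.normal(size=X.shape)    # noisy one-step pairs (x_t, x_{t+1})

    rollouts = []
    for _ in range(20):
        idx = rng.integers(0, len(X), size=len(X))        # bootstrap resample
        A_hat, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)  # learned flow map
        x = np.array([1.0, 0.0])
        traj = [x]
        for t in range(200):                              # errors accumulate over the rollout
            x = x @ A_hat
            traj.append(x)
        rollouts.append(np.array(traj))

    spread = np.std([r[-1] for r in rollouts], axis=0)
    print(spread)  # ensemble spread at the final step = uncertainty estimate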
May 23, 1997
We describe the application of tools from statistical mechanics to analyse the dynamics of various classes of supervised learning rules in perceptrons. The character of this paper is mostly that of a cross between a biased non-encyclopedic review and lecture notes: we try to present a coherent and self-contained picture of the basics of this field, to explain the ideas and tricks, to show how the predictions of the theory compare with (simulation) experiments, and to bring to...
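The canonical setting these analyses address can be stated in a few lines (a sketch of the standard teacher-student perceptron with online learning, assuming the usual setup): a student weight vector is trained on examples labeled by a fixed teacher, and for random Gaussian inputs the generalization error depends only on the teacher-student overlap.

    import numpy as np

    rng = np.random.default_rng(2)
    N = 500                                   # input dimension
    teacher = rng.normal(size=N); teacher /= np.linalg.norm(teacher)
    w = np.zeros(N)                           # student weights

    for step in range(20000):
        x = rng.normal(size=N)                # random example
        label = np.sign(teacher @ x)          # teacher-provided target
        if np.sign(w @ x) != label:           # perceptron rule: update on mistakes only
            w += label * x / np.sqrt(N)

    # Generalization error for Gaussian inputs is arccos(overlap) / pi.
    overlap = (w @ teacher) / np.linalg.norm(w)
    print(np.arccos(np.clip(overlap, -1, 1)) / np.pi)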
September 23, 2022
The solution of time-dependent differential equations with neural networks has attracted a lot of attention recently. The central idea is to learn the laws that govern the evolution of the solution from data, which might be polluted with random noise. However, in contrast to other machine learning applications, usually a lot is known about the system at hand. For example, for many dynamical systems physical quantities such as energy or (angular) momentum are exactly conserved ...
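A small illustration of the conserved-quantity idea (assumed details, not this paper's method): when learning pendulum dynamics, the energy H = p²/2 − cos(q) is known to be conserved, and one can enforce it along a model's rollout by projecting back onto the constant-energy surface.

    import numpy as np

    def H(q, p):
        # Pendulum energy: the physically conserved quantity.
        return 0.5 * p**2 - np.cos(q)

    def model_step(q, p, dt=0.1):
        # Stand-in for a learned one-step model (a crude explicit Euler here,
        # which on its own lets the energy drift).
        return q + dt * p, p - dt * np.sin(q)

    q, p = 1.0, 0.0
    E0 = H(q, p)
    for _ in range(1000):
        q, p = model_step(q, p)
        # Project back onto H(q, p) = E0 by rescaling the momentum magnitude.
        p2 = max(2.0 * (E0 + np.cos(q)), 0.0)
        p = np.copysign(np.sqrt(p2), p if p != 0 else 1.0)
    print(H(q, p) - E0)  # ~0: the conserved quantity holds along the rollout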
July 3, 2019
Dynamical systems are capable of performing computation in a reservoir computing paradigm. This paper presents a general representation of these systems as an artificial neural network (ANN). Initially, we implement the simplest dynamical system, a cellular automaton. The mathematical fundamentals behind an ANN are maintained, but the weights of the connections and the activation function are adjusted to work as an update rule in the context of cellular automata. The advantag...
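A sketch of the stated construction under assumed details: an elementary cellular automaton update (Rule 110 here) expressed as a fixed-weight "neural" layer, where the weights [4, 2, 1] encode each cell's neighborhood and the activation is a lookup into the rule's truth table.

    import numpy as np

    RULE = 110
    table = np.array([(RULE >> i) & 1 for i in range(8)])  # rule's truth table

    def ca_layer(state):
        # "Weights" [4, 2, 1] map each (left, self, right) neighborhood
        # to an integer 0..7; the "activation" is a lookup into the table.
        idx = 4 * np.roll(state, 1) + 2 * state + np.roll(state, -1)
        return table[idx]

    state = np.zeros(64, dtype=int)
    state[32] = 1                       # single live cell in the middle
    for _ in range(30):
        print("".join(".#"[c] for c in state))
        state = ca_layer(state)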
December 11, 2022
A machine learning (ML) system must learn not only to match the output of a target function on a training set, but also to generalize to novel situations in order to yield accurate predictions at deployment. In most practical applications, the user cannot exhaustively enumerate every possible input to the model; strong generalization performance is therefore crucial to the development of ML systems which are performant and reliable enough to be deployed in the real world. While ...
October 14, 2021
Recurrent neural networks (RNNs) are widespread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated through time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers or, more recently, imposed constraints ...
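The phenomenon is easy to see numerically (a minimal sketch, not the paper's proposed fix): in a linear recurrence h_t = W h_{t-1}, the backpropagated gradient picks up a factor of Wᵀ per step, so its norm scales roughly like the spectral radius of W raised to the sequence length.

    import numpy as np

    rng = np.random.default_rng(3)

    def grad_norm_through_time(scale, T=100, n=32):
        # Backpropagating through h_t = W h_{t-1} multiplies the gradient
        # by W^T at every step; ||grad|| ~ (spectral radius of W)^T.
        W = scale * rng.normal(size=(n, n)) / np.sqrt(n)
        g = np.ones(n)
        for _ in range(T):
            g = W.T @ g
        return np.linalg.norm(g)

    print(grad_norm_through_time(0.5))   # vanishing: norm on the order of 1e-31
    print(grad_norm_through_time(1.5))   # exploding: norm on the order of 1e+17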