Similar papers 2
June 30, 2022
Associative memory has been a prominent candidate for the computation performed by massively recurrent neocortical networks. Attractor networks implementing associative memory have offered mechanistic explanations for many cognitive phenomena. However, attractor memory models are typically trained using orthogonal or random patterns to avoid interference between memories, which makes them infeasible for naturally occurring complex, correlated stimuli like images. We approac...
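The abstract is truncated, but as a concrete point of reference for the attractor-network setting it describes, here is a minimal Hopfield-style associative memory in numpy. The sizes, the Hebbian outer-product rule, and the random ±1 patterns are illustrative assumptions, not the authors' setup; with random patterns at low load, recall succeeds, which is exactly the interference-free regime the abstract says fails for correlated stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                        # neurons and stored patterns (illustrative sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product storage; zero the diagonal to remove self-coupling
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(probe, steps=50):
    """Synchronous sign updates until the state stops changing."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Corrupt a stored pattern (~20% flipped bits) and let the attractor clean it up
noisy = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
print("overlap with stored pattern:", recall(noisy) @ patterns[0] / N)
```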
December 7, 2021
In this review, we describe the singular success of attractor neural network models in describing how the brain maintains persistent activity states for working memory, corrects errors, and integrates noisy cues. We consider the mechanisms by which simple and forgetful units can organize to collectively generate dynamics on the long timescales required for such computations. We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase ...
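As a minimal illustration of two of the attractor mechanisms this review surveys, persistent activity and integration of noisy cues, here is a toy line-attractor integrator in numpy. The rank-1 weight matrix and the cue statistics are assumptions made for the sketch, not a model taken from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 300
u = rng.standard_normal(N)
u /= np.linalg.norm(u)

# Rank-1 recurrence: eigenvalue 1 along u (the attractor line), 0 elsewhere,
# so activity on the line persists while every other direction decays at once.
W = np.outer(u, u)

h = np.zeros(N)
readout = np.empty(T)
for t in range(T):
    cue = 0.05 + 0.1 * rng.standard_normal() if t < 100 else 0.0  # noisy cue, then delay
    h = W @ h + cue * u
    readout[t] = u @ h

print("accumulated value at cue offset:", readout[99])
print("value after a 200-step delay  :", readout[-1])  # unchanged: persistent memory
```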
September 11, 2016
We consider the implications of the mathematical analysis of neurone-to-neurone dynamical complex networks. We show how the dynamical behaviour of small-scale, strongly connected networks leads naturally to non-binary information processing and thus to multiple-hypothesis decision making, even at the very lowest level of the brain's architecture. In turn, we build on these ideas to address the hard problem of consciousness. We discuss how a proposed "dual hierarchy model", made up ...
January 8, 2021
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, thus making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience and machine intelligence. We train thousands of recurrent neural networks on a working memory task and then perform dynamical systems...
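The abstract breaks off at "dynamical systems..."; a standard concrete instance of such analysis is numerical fixed-point finding in the style of Sussillo and Barak, which minimizes the speed q(h) = ½‖F(h) − h‖² of the autonomous dynamics. The sketch below uses random stand-in weights rather than a trained network, so only the procedure, not the result, reflects the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
N = 50
W = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)  # stand-in for trained weights
b = 0.1 * rng.standard_normal(N)

F = lambda h: np.tanh(W @ h + b)              # autonomous RNN update h -> F(h)
q = lambda h: 0.5 * np.sum((F(h) - h) ** 2)   # "speed"; zero exactly at fixed points

# Seed the searches from states the network itself visits
fixed_points = []
for _ in range(10):
    res = minimize(q, F(rng.standard_normal(N)), method="L-BFGS-B")
    if res.fun < 1e-10:
        fixed_points.append(res.x)

print(f"candidate fixed points found: {len(fixed_points)}")
# Eigenvalues of the Jacobian of F at each point then classify it as a
# stable memory state or a saddle separating such states.
```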
March 11, 2009
The human brain is autonomously active, characterized by self-sustained neural activity that would be present even in the absence of external sensory stimuli. Here we study the interrelation between the self-sustained activity of autonomously active recurrent neural nets and external sensory stimuli. There is no a priori semantic relation between the influx of external stimuli and the patterns generated internally by the autonomous and ongoing brain dynamics. The...
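A common minimal model of such self-sustained activity is a random recurrent rate network with gain above one, which generates ongoing activity with no input at all and can then additionally be driven by an external stimulus. The sketch below assumes Sompolinsky-style random couplings and is an illustration of that generic regime, not the authors' specific network.

```python
import numpy as np

rng = np.random.default_rng(3)
N, g, dt = 200, 1.5, 0.1
W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # gain g > 1: self-sustained regime

def run(T, stimulus=None):
    x = 0.5 * rng.standard_normal(N)
    trace = np.empty(T)
    for t in range(T):
        drive = stimulus(t) if stimulus is not None else 0.0
        x = x + dt * (-x + W @ np.tanh(x) + drive)  # rate dynamics, no plasticity
        trace[t] = np.tanh(x[0])
    return trace

autonomous = run(2000)                               # ongoing activity, no input
driven = run(2000, lambda t: 0.5 * np.sin(0.1 * t))  # same network under sensory drive
print("autonomous activity variance:", autonomous[500:].var())
```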
June 25, 2014
The coding mechanism of sensory memory at the neuron scale is one of the most important questions in neuroscience. We put forward a quantitative neural network model that is self-organized, self-similar, and self-adaptive, like an ecosystem governed by Darwinian theory. According to this model, neural coding is a many-to-one mapping from objects to neurons, and the whole cerebrum is a real-time statistical Turing machine with powerful representing and learning ability...
August 2, 2018
What is the physiological basis of long-term memory? The prevailing view in neuroscience attributes memory acquisition to changes in synaptic efficacy. This view implies that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent dynamics in synaptic strengths. Motivated by these observations, we explore the possibility of memory storage within a global component of network c...
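One way to make "memory in a global component of connectivity" concrete, purely as an illustration and not the authors' mechanism, is to store a memory as a low-rank structure whose leading eigenvector survives ongoing element-wise synaptic fluctuations. All sizes and noise levels below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 300
m = rng.standard_normal(N)
m /= np.linalg.norm(m)

# Memory encoded as a rank-1 "global component" on top of background weights
W = 5.0 * np.outer(m, m) + 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)

def leading_direction(W):
    vals, vecs = np.linalg.eig(W)
    return np.real(vecs[:, np.argmax(np.real(vals))])

for step in range(20):
    # activity-independent drift: every individual synapse fluctuates
    W += 0.1 * rng.standard_normal((N, N)) / np.sqrt(N)

overlap = abs(leading_direction(W) @ m)
print("overlap with stored direction after drift:", overlap)  # stays near 1
```

Individual entries of W change substantially over the drift, yet the global direction carrying the memory barely moves, because the element-wise noise averages out against the large low-rank component.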
May 24, 2018
Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the...
July 28, 2019
Spatiotemporal information processing is fundamental to brain function. The present study investigates a canonical neural network model for spatiotemporal pattern recognition. Specifically, the model consists of two modules: a reservoir subnetwork and a decision-making subnetwork. The former projects complex spatiotemporal patterns into spatially separated neural representations, and the latter reads out these neural representations by integrating information over time; the t...
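A sketch of the two-module architecture the abstract describes: a fixed random reservoir projects input sequences into a high-dimensional state space, and a decision module integrates readout evidence over time. The toy classification task, the ridge-regression readout, and all sizes are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
N_in, N_res, n_classes = 10, 400, 2

# Module 1: fixed random reservoir, rescaled to spectral radius < 1
W_in = 0.5 * rng.standard_normal((N_res, N_in))
W_res = rng.standard_normal((N_res, N_res)) / np.sqrt(N_res)
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def reservoir_states(u_seq):
    """Project a spatiotemporal pattern into high-dimensional reservoir states."""
    x = np.zeros(N_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_res @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy task: which of two noisy spatiotemporal templates was shown?
templates = [rng.standard_normal((30, N_in)) for _ in range(n_classes)]
X, Y = [], []
for label, tpl in enumerate(templates):
    for _ in range(20):
        s = reservoir_states(tpl + 0.3 * rng.standard_normal(tpl.shape))
        X.append(s.mean(axis=0))              # time-averaged reservoir state
        Y.append(np.eye(n_classes)[label])
X, Y = np.array(X), np.array(Y)
W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(N_res), X.T @ Y).T  # ridge readout

# Module 2: leaky accumulators integrate readout evidence over time
def decide(u_seq, leak=0.05):
    evidence = np.zeros(n_classes)
    for x in reservoir_states(u_seq):
        evidence += -leak * evidence + W_out @ x
    return int(np.argmax(evidence))

probe = templates[1] + 0.3 * rng.standard_normal(templates[1].shape)
print("decision:", decide(probe))  # expected: 1
```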
July 27, 2023
Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for variou...
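The RC recipe this abstract summarizes, a fixed random recurrent network whose only trained part is a linear readout, is compact enough to show end to end. Below is a minimal echo state network on a toy one-step prediction task; the spectral-radius rescaling heuristic, the sine signal, and the ridge regularizer are illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T_train, T_test = 200, 1000, 200

# Fixed random reservoir: only the linear readout is ever trained
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # echo state property (heuristic)

u = np.sin(0.2 * np.arange(T_train + T_test + 1))  # toy signal; target = next step

x = np.zeros(N)
states = []
for t in range(T_train + T_test):
    x = np.tanh(W @ x + W_in * u[t])               # low-dim input -> high-dim state
    states.append(x.copy())
states = np.array(states)

# Ridge-regression readout fit on the training portion only
A, y = states[:T_train], u[1:T_train + 1]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)

pred = states[T_train:] @ w_out
print("one-step test MSE:", np.mean((pred - u[T_train + 1:]) ** 2))
```

The linear solve is the entire training step, which is the practical appeal of RC: the recurrent weights never change, so there is no backpropagation through time.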