Similar papers
May 15, 2021
In this article, we present a cognitive architecture that is built from powerful yet simple neural models. Specifically, we describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory. The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales than what is possible with existing cognitive architectures.
June 18, 2018
The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular, working memory. Here, we study to what extent a group of randomly connected units (namely an Echo State Network, ESN) can store and maintain (as output) an arbitrary real value from a streamed input, i.e. can act as a sustained working memory unit. Furthermore, we explore to what extent such an architecture can take advantage of the stored value in order to produce non-linear ...
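A minimal sketch of the Echo State Network idea above: a randomly connected reservoir driven by a streamed scalar input, with a ridge-regressed linear readout trained to recall the input a few steps back. This is the standard delayed-recall demonstration, not the paper's exact gated working-memory setup; the sizes, spectral radius, and delay are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, delay = 200, 2000, 3

# Random reservoir, rescaled to spectral radius 0.9 (a common echo-state heuristic)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

u = rng.uniform(-1, 1, T)              # streamed scalar input
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # reservoir state update
    X[t] = x

# Ridge-regress a linear readout that reproduces the input `delay` steps back
washout = 100
Xw, y = X[washout:], u[washout - delay: T - delay]
w_out = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(N), Xw.T @ y)
r2 = 1 - np.mean((Xw @ w_out - y) ** 2) / np.var(y)
```

Only the readout is trained; the reservoir stays fixed, which is what makes the "group of randomly connected units" framing apt.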
April 6, 2017
The brain must robustly store a large number of memories, corresponding to the many events encountered over a lifetime. However, the number of memory states in existing neural network models either grows weakly with network size or recall fails catastrophically with vanishingly little noise. We construct an associative content-addressable memory with exponentially many stable states and robust error-correction. The network possesses expander graph connectivity on a restricted...
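For contrast with the exponential-capacity construction described above, here is the classical Hopfield associative memory it improves on, whose capacity grows only linearly (roughly 0.14N patterns) with network size. Hebbian outer-product weights store a few patterns; sign updates then correct a corrupted probe. The sizes and corruption level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 120, 3
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product weights, zero self-connections
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Corrupt one stored pattern by flipping 10 bits, then recall it
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1

x = probe
for _ in range(10):                    # synchronous sign updates to a fixed point
    x = np.where(W @ x >= 0, 1, -1)

overlap = np.mean(x == patterns[0])
```

With only three stored patterns the corrupted probe is reliably cleaned up; pushing P toward ~0.14N is where this classical scheme fails, which is the failure mode the expander-graph construction is designed to escape.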
May 13, 2002
We introduce a new biologically-motivated model of sequential spatial memory which is based on the principle of winnerless competition (WLC). We implement this mechanism in a two-layer neural network structure and present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of pre-recorded sequences of spatial patterns.
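Winnerless competition can be illustrated with the canonical minimal instance: a three-unit competitive Lotka-Volterra (May-Leonard) system with asymmetric inhibition, in which activity visits each unit in turn rather than settling on a single winner. This is a textbook WLC example, not the paper's two-layer spatial-memory network; the parameters (a < 1 < b, a + b > 2) are the standard choice for a heteroclinic cycle.

```python
import numpy as np

a, b, dt = 0.5, 2.0, 0.01
rho = np.array([[1, a, b],             # unit i is strongly inhibited by unit i-1
                [b, 1, a],
                [a, b, 1]])

x = np.array([0.3, 0.2, 0.1])
winners = []
for step in range(40000):
    x = x + dt * x * (1 - rho @ x) + 1e-7   # tiny floor keeps orbits off the axes
    winners.append(int(np.argmax(x)))
```

Tracking the momentary winner shows the activity cycling through all three units, the sequential switching that the WLC learning rule shapes into retrievable spatial-pattern sequences.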
November 10, 2022
By incorporating feedback loops that engender amplification and damping, so that output is not proportional to input, biological neural networks become highly nonlinear and thus very likely chaotic in nature. Research in control theory reveals that strange attractors can be approximated by a collection of cycles, and can be collapsed into a more coherent state centered on one of them if we exert control. We speculate that human memories are encoded by such cycles, and can be re...
September 13, 2016
Efforts at understanding the computational processes in the brain have met with limited success, despite their importance and potential uses in building intelligent machines. We propose a simple new model which draws on recent findings in Neuroscience and the Applied Mathematics of interacting Dynamical Systems. The Feynman Machine is a Universal Computer for Dynamical Systems, analogous to the Turing Machine for symbolic computing, but with several important differences. We ...
November 6, 2002
We propose a new self-organizing mechanism behind the emergence of memory in which temporal sequences of stimuli are transformed into spatial activity patterns. In particular, the memory emerges despite the absence of temporal correlations in the stimuli. This suggests that neural systems may prepare a spatial structure for processing information before the information itself is available. A simple model illustrating the mechanism is presented based on three principles: (1) C...
January 23, 2019
Information coding by precise timing of spikes can be faster and more energy-efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a novel type of attractor neural network in complex state space, and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building...
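A related classical construction, a Noest-style phasor network, gives the flavor of an attractor network in complex state space: each unit carries a unit-modulus complex state (a phase), Hebbian-style complex weights store phase patterns, and iterated phase-only updates pull a noisy probe back onto a stored pattern. This is a simplified stand-in, not the paper's phase-to-timing construction; sizes and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 128, 2
# Store P patterns of unit-modulus complex "phasors" (one phase per neuron)
patterns = np.exp(1j * rng.uniform(0, 2 * np.pi, (P, N)))

# Complex Hebbian weights W_ij = (1/N) sum_mu xi_i conj(xi_j), zero diagonal
W = patterns.T @ patterns.conj() / N
np.fill_diagonal(W, 0)

# Probe: first stored pattern with random phase jitter on every unit
z = patterns[0] * np.exp(1j * rng.normal(0, 0.5, N))
for _ in range(10):
    h = W @ z
    z = h / np.abs(h)                  # phase-only update: keep unit modulus

overlap = np.abs(np.vdot(patterns[0], z)) / N
```

In a spiking implementation each recovered phase would then be read out as a precise spike time within an oscillation cycle, which is the phase-to-timing mapping the abstract alludes to.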
December 3, 2022
Neural activity in the brain exhibits correlated fluctuations that may strongly influence the properties of neural population coding. However, how such correlated neural fluctuations may arise from the intrinsic neural circuit dynamics and subsequently affect the computational properties of neural population activity remains poorly understood. The main difficulty lies in resolving the nonlinear coupling between correlated fluctuations and the overall dynamics of the system. ...
December 13, 2011
Matching animal-like flexibility in recognition, and the ability to quickly incorporate new information, remains difficult; these limits are yet to be adequately addressed in neural models and recognition algorithms. This work proposes a configuration for recognition that maintains the same function as conventional algorithms but avoids combinatorial problems. Feedforward recognition algorithms such as classical artificial neural networks and machine learning algorithms are known to ...