August 22, 2005
Similar papers
March 13, 2014
The human brain is a dynamical system whose extremely complex sensor-driven neural processes give rise to conceptual, logical cognition. Understanding the interplay between nonlinear neural dynamics and concept-level cognition remains a major scientific challenge. Here I propose a mechanism of neurodynamical organization, called conceptors, which unites nonlinear dynamics with basic principles of conceptual abstraction and logic. It becomes possible to learn, store, abstract,...
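As a rough illustration of the conceptor idea from this line of work: a conceptor C = R(R + α⁻²I)⁻¹, built from the correlation matrix R of network state snapshots, softly projects onto the subspace those states occupy. The sketch below is a minimal numpy demonstration of that filtering property; the network size, subspace dimension, noise level, and aperture α are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 20, 500

# Reservoir states confined (up to small noise) to a 3-D subspace of R^N.
basis = np.linalg.qr(rng.normal(size=(N, 3)))[0]
X = basis @ rng.normal(size=(3, T)) + 0.01 * rng.normal(size=(N, T))

# Conceptor: C = R (R + aperture^-2 I)^-1, with R the state correlation matrix.
R = X @ X.T / T
aperture = 10.0
C = R @ np.linalg.inv(R + np.eye(N) / aperture**2)

# C nearly preserves directions inside the occupied subspace
# and strongly damps directions outside it.
inside = basis[:, 0]
outside = np.linalg.qr(np.c_[basis, rng.normal(size=(N, 1))])[0][:, 3]
print(np.linalg.norm(C @ inside) > 0.9, np.linalg.norm(C @ outside) < 0.1)  # → True True
```

The aperture trades off how aggressively small-variance directions are suppressed: a larger α lets C pass more of the state space through.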
May 15, 2021
The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should st...
March 1, 2023
Despite evidence for the existence of engrams as memory support structures in our brains, there is no consensus framework in neuroscience as to what their physical implementation might be. Here we propose how we might design a computer system to implement engrams using neural networks, with the main aim of exploring new ideas using machine learning techniques, guided by challenges in neuroscience. Building on autoencoders, we propose latent neural spaces as indexes for storin...
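A minimal sketch of the latent-index idea: compress stored items into a low-dimensional latent space, use the latent codes as the index, and retrieve by nearest latent neighbour of a noisy cue. A linear autoencoder (PCA) stands in for the paper's learned networks here, and all sizes, the noise level, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
memories = rng.normal(size=(50, 32))          # 50 stored items ("engrams")
mean = memories.mean(axis=0)

# Linear autoencoder: the top-8 principal directions act as encoder/decoder.
_, _, Vt = np.linalg.svd(memories - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt[:8].T
decode = lambda z: z @ Vt[:8] + mean

index = encode(memories)                      # latent codes serve as the index

cue = memories[7] + 0.1 * rng.normal(size=32) # noisy cue for a stored memory
hit = int(np.argmin(np.linalg.norm(index - encode(cue), axis=1)))
restored = decode(index[hit])                 # approximate reconstruction
print(hit)                                    # retrieves memory 7
```

The design point is that matching happens in the compact latent space, not on the raw patterns; the decoder then maps the indexed code back toward the stored content.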
January 31, 2016
Spontaneously evolving living systems can be modelled as continuous-time dynamical systems (DSs), whose evolution rules are determined by their velocity vector fields. We point out that because of their architectural plasticity, biological neural networks belong to a novel type of DSs whose velocity field is plastic, albeit within bounds, and affected by sensory stimuli. We introduce DSs with fully plastic velocity fields self-organising under the influence of stimuli, called...
May 10, 2002
We all are fascinated by the phenomena of intelligent behavior, as generated both by our own brains and by the brains of other animals. As physicists we would like to understand if there are some general principles that govern the structure and dynamics of the neural circuits that underlie these phenomena. At the molecular level there is an extraordinary universality, but these mechanisms are surprisingly complex. This raises the question of how the brain selects from these d...
October 16, 2021
Neural systems are well known for their ability to learn and store information as memories. Even more impressive is their ability to abstract these memories to create complex internal representations, enabling advanced functions such as the spatial manipulation of mental representations. While recurrent neural networks (RNNs) are capable of representing complex information, the exact mechanisms by which dynamical neural systems perform abstraction are still not well understood,...
July 24, 2007
A recent experiment suggests that neural circuits may alternatively implement continuous or discrete attractors, depending on the training setup. In recurrent neural network models, continuous and discrete attractors are separately modeled by distinct forms of synaptic prescriptions (learning rules). Here, we report a solvable network model, endowed with Hebbian synaptic plasticity, which is able to learn either discrete or continuous attractors, depending on the frequency o...
May 1, 2007
We investigate dynamical systems characterized by a time series of distinct semi-stable activity patterns, as observed in cortical neural activity. We propose and discuss a general mechanism allowing for an adiabatic continuation between attractor networks and a specific adjoined transient-state network, which is strictly dissipative. Dynamical systems with transient states retain functionality when their working point is autoregulated, avoiding prolonged pe...
November 25, 2014
Memories in the brain fall into two categories: short-term and long-term. Long-term memories persist for a lifetime, while short-term ones last from a few milliseconds to a few minutes. Within short-term memory research, there is debate about which neural structure could implement it: the mechanisms responsible for long-term memories appear inadequate for the task. Instead, it has been proposed that short-term memories could be sustained by the persistent ac...
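The persistent-activity idea can be sketched with a single self-exciting rate unit: with strong enough recurrence the quiescent state and a high-activity state coexist, so a brief input pulse switches the unit into a state that outlasts the stimulus. The time constant, gain, and recurrent weight below are illustrative choices, not parameters from the paper.

```python
import numpy as np

tau, w, dt = 10.0, 1.2, 0.1   # membrane time constant, recurrent weight, step

def f(x):
    return np.tanh(x)          # saturating rate nonlinearity

r = 0.0
trace = []
for t in np.arange(0, 100, dt):
    I = 1.0 if t < 20 else 0.0               # brief input pulse
    r += dt / tau * (-r + f(w * r + I))      # leaky rate dynamics with recurrence
    trace.append(r)

# With w > 1 the pulse kicks the unit onto a nonzero stable branch,
# where recurrent excitation sustains activity long after the input ends.
print(trace[-1] > 0.5)
```

Without the pulse the unit stays silent, so the sustained activity itself encodes whether the stimulus occurred: a toy short-term memory.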
September 1, 2014
The state space of a conventional Hopfield network typically exhibits many different attractors, of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimizat...
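For reference, a minimal Hopfield network with the standard Hebbian prescription shows how a stored pattern acts as an attractor with its own basin. The network size and corruption level are illustrative, and this sketch does not include the state-alteration mechanism the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
pattern = rng.choice([-1, 1], size=N)        # one stored binary pattern

# Hebbian weights: outer product of the stored pattern, zero diagonal.
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0)

# Start from a corrupted copy (flip 10 of the 64 units).
state = pattern.copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1

# Asynchronous updates: each unit takes the sign of its local field.
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))  # → True: the state falls back into the attractor
```

With many stored patterns the basins compete, which is exactly where the basin-enlargement question the abstract raises becomes interesting.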