ID: 1510.03891

Nonlinear memory capacity of parallel time-delay reservoir computers in the processing of multidimensional signals

October 13, 2015

Similar papers (page 2)

Hierarchical Architectures in Reservoir Computing Systems

May 14, 2021

88% Match
John Moon, Wei D. Lu (University of Michigan)
Emerging Technologies
Artificial Intelligence
Machine Learning

Reservoir computing (RC) offers efficient temporal data processing with a low training cost by separating recurrent neural networks into a fixed network with recurrent connections and a trainable linear network. The quality of the fixed network, called reservoir, is the most important factor that determines the performance of the RC system. In this paper, we investigate the influence of the hierarchical reservoir structure on the properties of the reservoir and the performanc...
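
As context for this and the following entries, the split the abstract describes, a fixed recurrent reservoir plus a trainable linear readout, can be illustrated with a minimal echo-state-network sketch in Python (the sizes, spectral radius, and delayed-copy target below are illustrative assumptions, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 1000

# Fixed (untrained) reservoir: random input weights and a random recurrent
# matrix rescaled to a chosen spectral radius.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

u = rng.uniform(-1, 1, (T, n_in))       # input signal
y = np.roll(u[:, 0], 3)                 # toy target: the input delayed by 3 steps

# Drive the reservoir and collect its states.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Only the linear readout is trained (ridge regression); W and W_in stay fixed.
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ y)
print("training MSE:", np.mean((states @ W_out - y) ** 2))

Only W_out is fitted; keeping W and W_in fixed is what gives RC its low training cost, and the quality of that fixed reservoir is what the paper above investigates.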

Risk bounds for reservoir computing

October 30, 2019

88% Match
Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
Machine Learning

We analyze the practices of reservoir computing in the framework of statistical learning theory. In particular, we derive finite sample upper bounds for the generalization error committed by specific families of reservoir computing systems when processing discrete-time inputs under various hypotheses on their dependence structure. Non-asymptotic bounds are explicitly written down in terms of the multivariate Rademacher complexities of the reservoir systems and the weak depend...
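
For orientation only, bounds of the kind mentioned typically take the schematic form below, written here for i.i.d. samples and a generic hypothesis class; the paper's results are non-asymptotic versions adapted to dependent inputs and specific reservoir families, so this is not its exact statement:

\[
  \mathbb{E}\big[\ell(h)\big]
  \;\le\;
  \frac{1}{n}\sum_{t=1}^{n}\ell\big(h(z_t)\big)
  \;+\; 2\,\mathfrak{R}_n(\ell\circ\mathcal{H})
  \;+\; c\,\sqrt{\frac{\log(1/\delta)}{n}}
  \qquad \text{with probability at least } 1-\delta,
\]

where \(\mathfrak{R}_n(\ell\circ\mathcal{H})\) denotes the (multivariate) Rademacher complexity of the loss class and \(c\) is a constant depending on the loss.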

Limits to Reservoir Learning

July 26, 2023

88% Match
Anthony M. Polloreno
Machine Learning
Information Theory

In this work, we bound a machine's ability to learn based on computational limitations implied by physicality. We start by considering the information processing capacity (IPC), a normalized measure of the expected squared error of a collection of signals to a complete basis of functions. We use the IPC to measure the degradation under noise of the performance of reservoir computers, a particular kind of recurrent network, when constrained by physical considerations. First, w...
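
A single IPC term can be estimated as the fraction of a target basis function's variance that a linear readout of the reservoir states recovers; the toy reservoir, noise level, and basis function below are illustrative, not the paper's setup:

import numpy as np

def capacity(states, target, lam=1e-8):
    """C = 1 - min_w ||target - states @ w||^2 / ||target||^2, in [0, 1]."""
    w = np.linalg.solve(states.T @ states + lam * np.eye(states.shape[1]),
                        states.T @ target)
    err = target - states @ w
    return 1.0 - np.sum(err ** 2) / np.sum(target ** 2)

rng = np.random.default_rng(1)
T, N = 5000, 100
u = rng.uniform(-1, 1, T)

# Toy reservoir with fading memory; additive noise stands in for physical degradation.
W_in = rng.uniform(-1, 1, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
x, states = np.zeros(N), np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t]) + 0.01 * rng.normal(size=N)
    states[t] = x

# One basis function: a degree-2 Legendre polynomial of the input one step back.
target = 0.5 * (3 * np.roll(u, 1) ** 2 - 1)
print("capacity for this basis function:", capacity(states, target))
# Summing such terms over a complete orthogonal basis of functions of past
# inputs gives the total IPC; noise in the states lowers each term.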

Reducing hyperparameter dependence by external timescale tailoring

July 17, 2023

88% Match
Lina C. Jaurigue, Kathy Lüdge
Computational Physics
Neural and Evolutionary Computing

Task specific hyperparameter tuning in reservoir computing is an open issue, and is of particular relevance for hardware implemented reservoirs. We investigate the influence of directly including externally controllable task specific timescales on the performance and hyperparameter sensitivity of reservoir computing approaches. We show that the need for hyperparameter optimisation can be reduced if timescales of the reservoir are tailored to the specific task. Our results are...
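
One plausible, purely illustrative way to expose such an externally controllable timescale is to pre-filter the input with a tunable first-order low-pass filter before it drives the reservoir; this sketches the general idea, not the specific scheme of the paper:

import numpy as np

def lowpass(u, tau):
    """First-order filter: y[t] = (1 - 1/tau) * y[t-1] + (1/tau) * u[t]."""
    y = np.zeros_like(u)
    a = 1.0 / tau
    for t in range(1, len(u)):
        y[t] = (1 - a) * y[t - 1] + a * u[t]
    return y

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 2000)
for tau in (1.0, 5.0, 20.0):   # external, task-matched timescale knob
    u_f = lowpass(u, tau)
    print(f"tau={tau:5.1f}  autocorr(lag 1) = {np.corrcoef(u_f[:-1], u_f[1:])[0, 1]:.3f}")

The filter constant tau is set by the task's correlation time, so the reservoir's internal hyperparameters need less re-tuning from task to task.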

Reservoir Computing Benchmarks: a review, a taxonomy, some best practices

May 10, 2024

88% Match
Chester Wringe, Martin Trefzer, Susan Stepney
Emerging Technologies
Machine Learning
Neural and Evolutionary Computing

Reservoir Computing is an Unconventional Computation model to perform computation on a variety of substrates, such as RNNs or physical materials. The method takes a "black-box" approach, training only the outputs of the system it is built on. As such, evaluating the computational capacity of these systems can be challenging. We review and critique the evaluation methods used in the field of Reservoir Computing. We introduce a categorisation of benchmark tasks. We review ...

Multi-parallel-task Time-delay Reservoir Computing combining a Silicon Microring with WDM

October 25, 2023

87% Match
Bernard J. Giron Castro, Christophe Peucheret, ..., Francesco Da Ros
Neural and Evolutionary Computing
Emerging Technologies
Machine Learning
Optics

We numerically demonstrate a microring-based time-delay reservoir computing scheme that simultaneously solves three tasks involving time-series prediction, classification, and wireless channel equalization. Each task performed on a wavelength-multiplexed channel achieves state-of-the-art performance with optimized power and frequency detuning.

Optimizing Memory in Reservoir Computers

January 5, 2022

87% Match
Thomas L. Carroll
Neural and Evolutionary Computing

A reservoir computer is a way of using a high dimensional dynamical system for computation. One way to construct a reservoir computer is by connecting a set of nonlinear nodes into a network. Because the network creates feedback between nodes, the reservoir computer has memory. If the reservoir computer is to respond to an input signal in a consistent way (a necessary condition for computation), the memory must be fading; that is, the influence of the initial conditions fades...
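
Fading memory is commonly quantified by the linear memory capacity: for each delay k a linear readout is trained to reconstruct the input from k steps earlier, and the squared correlations are summed over k. The sketch below uses an illustrative random reservoir, not the networks studied in the paper:

import numpy as np

rng = np.random.default_rng(3)
T, N = 6000, 80
u = rng.uniform(-1, 1, T)

# Fixed random reservoir driven by the input.
W_in = rng.uniform(-0.2, 0.2, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

x, states = np.zeros(N), np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

washout = 200                       # discard the initial transient
S, uu = states[washout:], u[washout:]
MC = 0.0
for k in range(1, 40):
    X, target = S[k:], uu[:-k]      # state at time t vs. input at time t - k
    w = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ target)
    MC += np.corrcoef(X @ w, target)[0, 1] ** 2
print("total linear memory capacity ~", round(MC, 2))

With fading memory the per-delay terms decay toward zero as k grows, so the sum stays finite (it is bounded by the number of nodes N).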

Benchmarking Learning Efficiency in Deep Reservoir Computing

September 29, 2022

87% Match
Hugo Cisneros, Josef Sivic, Tomas Mikolov
Machine Learning

It is common to evaluate the performance of a machine learning model by measuring its predictive power on a test dataset. This approach favors complicated models that can smoothly fit complex functions and generalize well from training data points. Although essential components of intelligence, speed and data efficiency of this learning process are rarely reported or compared between different candidate models. In this paper, we introduce a benchmark of increasingly difficult...

Reservoir Computing in robotics: a review

June 6, 2022

87% Match
Paolo Baldini
Robotics
Emerging Technologies

Reservoir Computing is a relatively new framework created to allow the usage of powerful but complex systems as computational mediums. The basic approach consists in training only a readout layer, exploiting the innate separation and transformation provided by the previous, untrained system. This approach has shown to possess great computational capabilities and is successfully used to achieve many tasks. This review aims to represent the current 'state-of-the-art' of the usa...

Efficient Design of Hardware-Enabled Reservoir Computing in FPGAs

May 4, 2018

87% Match
Bogdan Penkovsky, Laurent Larger, Daniel Brunner
Emerging Technologies
Machine Learning
Neural and Evolutionary Computing

In this work, we propose a new approach towards the efficient optimization and implementation of reservoir computing hardware reducing the required domain expert knowledge and optimization effort. First, we adapt the reservoir input mask to the structure of the data via linear autoencoders. We therefore incorporate the advantages of dimensionality reduction and dimensionality expansion achieved by conventional algorithmically efficient linear algebra procedures of principal c...
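
The data-adapted input mask can be pictured as a linear projection learned from the data (a linear autoencoder reduces to PCA); the sketch below uses plain numpy PCA, with the shapes and the mask construction chosen for illustration rather than taken from the authors' pipeline:

import numpy as np

rng = np.random.default_rng(4)
n_samples, n_features, n_virtual_nodes = 500, 32, 64

X = rng.normal(size=(n_samples, n_features))   # raw input data

# PCA via SVD on centered data: rows of Vt are the principal directions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 8                                          # kept components (dimensionality reduction)
proj = Vt[:k]                                  # (k, n_features)

# Expand the k-dimensional projection back up to the virtual-node dimension with
# a fixed random matrix (dimensionality expansion), giving a data-adapted input
# mask for a time-delay reservoir.
expand = rng.uniform(-1, 1, (n_virtual_nodes, k))
mask = expand @ proj                           # (n_virtual_nodes, n_features)

masked_input = X @ mask.T                      # what would drive the delay loop
print(masked_input.shape)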
