Similar papers
March 15, 2018
An analog computer makes use of continuously changeable quantities of a system, such as its electrical, mechanical, or hydraulic properties, to solve a given problem. While these devices are usually computationally more powerful than their digital counterparts, they suffer from analog noise which does not allow for error control. We will focus on analog computers based on active electrical networks comprised of resistors, capacitors, and operational amplifiers which are capab...
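As a rough sketch of the principle behind such networks (not the circuit studied in this paper), an ideal op-amp integrator with input resistor R and feedback capacitor C obeys dVout/dt = -Vin/(RC), so feeding its output back to its input solves dx/dt = -x/(RC). The Python snippet below emulates that loop with a small Euler step; the component values are placeholders.

    import math

    # Minimal sketch, not the authors' circuit: an ideal op-amp inverting
    # integrator with input resistor R and feedback capacitor C implements
    #   dVout/dt = -Vin / (R * C),
    # so wiring its output back to its input "solves" dx/dt = -x/(RC).
    # We emulate that continuous behaviour with a small Euler step.
    R = 1e6        # 1 MOhm input resistor (illustrative value)
    C = 1e-6       # 1 uF feedback capacitor (illustrative value)
    tau = R * C    # time constant of the feedback loop, 1 s here

    x = 1.0        # initial output voltage (volts)
    dt = 1e-3      # emulation step, far smaller than tau
    for _ in range(1000):               # emulate 1 s of "analog" time
        x += dt * (-x / tau)            # integrator fed by its own output

    print(f"emulated x(1s) = {x:.4f}, analytic e^-1 = {math.exp(-1):.4f}")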
June 14, 2024
In this paper, we build a general model of memristors suitable for the simulation of event-based systems, such as hardware spiking neural networks and, more generally, neuromorphic computing systems. We extend an existing general model of memristors - the Generalised Metastable Switch Model - to an event-driven setting, eliminating the errors associated with discrete-time approximation, as well as offering potential improvements in terms of computational efficiency for simulation. We ...
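The Generalised Metastable Switch Model itself is not reproduced here; the sketch below only illustrates what "event-driven" buys over fixed-timestep simulation, using an assumed first-order state model with made-up equilibrium and rate functions: between two events the applied voltage is constant, so the state can be advanced in a single closed-form step instead of a loop of small time steps.

    import math

    # Loose sketch of an event-driven state update (not the paper's exact
    # model).  Assume a memristor whose internal state x in [0, 1] relaxes
    # toward a voltage-dependent target x_inf(v) with rate k(v).  Between two
    # events the applied voltage is constant, so instead of stepping a
    # discrete-time loop we jump the state forward in closed form:
    #   x(t + dt) = x_inf + (x(t) - x_inf) * exp(-k * dt)

    def x_inf(v, v0=0.2):
        """Hypothetical voltage-dependent equilibrium state (logistic)."""
        return 1.0 / (1.0 + math.exp(-v / v0))

    def rate(v, k0=10.0, v0=0.2):
        """Hypothetical voltage-dependent switching rate (1/s)."""
        return k0 * math.cosh(v / v0)

    def advance(x, v, dt):
        """Exact relaxation over an interval of constant voltage v."""
        xi = x_inf(v)
        return xi + (x - xi) * math.exp(-rate(v) * dt)

    # Event trace: (voltage held since previous event, elapsed time in s)
    events = [(0.5, 1e-3), (0.0, 5e-3), (-0.5, 1e-3)]
    x = 0.1
    for v, dt in events:
        x = advance(x, v, dt)
        print(f"v={v:+.2f} V for {dt*1e3:.1f} ms -> x={x:.3f}")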
May 1, 2023
This paper introduces a novel simulation tool for analyzing and training neural network models tailored for compute-in-memory hardware. The tool leverages physics-based device models to enable the design of neural network models and their parameters that are more hardware-accurate. The initial study focused on modeling a CMOS-based floating-gate transistor and memristor device using measurement data from a fabricated device. Additionally, the tool incorporates hardware constr...
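The snippet below is a generic illustration of the kind of hardware constraint such a tool imposes, not the paper's physics-based device models: ideal weights are clipped to an assumed conductance window and snapped to a finite number of programmable levels before the in-memory matrix-vector product. The range and level count are placeholders.

    import numpy as np

    # Illustrative sketch: map ideal weights onto a finite set of
    # programmable conductance levels before the array computes y = W x.
    G_MIN, G_MAX, N_LEVELS = 1e-6, 1e-4, 32          # assumed device limits

    def quantize_to_device(w):
        """Clip weights to the device range and snap to discrete levels."""
        levels = np.linspace(G_MIN, G_MAX, N_LEVELS)
        w = np.clip(w, G_MIN, G_MAX)
        idx = np.abs(w[..., None] - levels).argmin(axis=-1)
        return levels[idx]

    rng = np.random.default_rng(0)
    w_ideal = rng.uniform(G_MIN, G_MAX, size=(4, 4))
    w_hw = quantize_to_device(w_ideal)
    x = rng.uniform(0, 1, size=4)
    print("ideal  y:", w_ideal @ x)
    print("device y:", w_hw @ x)     # what the in-memory array would compute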
November 9, 2017
We study the performance of stochastically trained deep neural networks (DNNs) whose synaptic weights are implemented using emerging memristive devices that exhibit limited dynamic range, resolution, and variability in their programming characteristics. We show that a key device parameter for optimizing the learning efficiency of DNNs is the variability in the devices' programming characteristics. DNNs with such memristive synapses, even with dynamic range as low as $15$ and only $32$ di...
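The sketch below mimics a single imperfect memristive synapse with the dynamic range of about 15 and 32 discrete levels quoted in the abstract; the variability model (Gaussian jitter on each programming pulse) is an assumption for illustration, not the paper's device model.

    import numpy as np

    # Sketch of a noisy, quantized weight update on a memristive synapse.
    G_MIN = 1.0
    G_MAX = 15.0 * G_MIN          # dynamic range of ~15
    N_LEVELS = 32
    STEP = (G_MAX - G_MIN) / (N_LEVELS - 1)

    rng = np.random.default_rng(1)

    def program(g, direction, variability=0.3):
        """Apply one potentiation (+1) or depression (-1) pulse."""
        dg = direction * STEP * (1.0 + variability * rng.standard_normal())
        return float(np.clip(g + dg, G_MIN, G_MAX))

    g = G_MIN
    for _ in range(10):           # ten potentiation pulses
        g = program(g, +1)
    print(f"conductance after 10 noisy SET pulses: {g:.2f} (max {G_MAX:.1f})")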
December 11, 2019
The development of memristive device technologies has reached a level of maturity to enable the design of complex and large-scale hybrid memristive-CMOS neural processing systems. These systems offer promising solutions for implementing novel in-memory computing architectures for machine learning and data analysis problems. We argue that they are also ideal building blocks for the integration in neuromorphic electronic circuits suitable for ultra-low power brain-inspired sens...
September 20, 2017
Neuromorphic circuits mimic partial functionalities of the brain in a bio-inspired information-processing sense in order to achieve efficiencies similar to those of biological systems. While there are common mathematical models for neurons, which can be realized as nonlinear oscillating circuits, an electrical representation of the synaptic coupling between them is more challenging. Since this coupling strength depends on past learning, it should include a kind of me...
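As a toy illustration of the coupling problem described here (not the paper's circuit), the sketch below couples two FitzHugh-Nagumo oscillators through a conductance g that slowly grows with their co-activity, standing in for a memristive synapse whose strength reflects past learning; both the neuron model and the update rule are illustrative choices.

    # Toy sketch: two oscillating "neurons" coupled by a conductance g
    # that remembers past co-activity (a stand-in for a memristive synapse).
    def fhn_step(v, w, i_in, dt=0.01):
        dv = v - v**3 / 3 - w + i_in
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        return v + dt * dv, w + dt * dw

    v1, w1 = -1.0, 1.0
    v2, w2 = 1.0, 0.0
    g = 0.05                               # initial coupling strength
    for _ in range(20000):
        v1, w1 = fhn_step(v1, w1, 0.5 + g * (v2 - v1))
        v2, w2 = fhn_step(v2, w2, 0.5 + g * (v1 - v2))
        # coupling "learns": grows when both neurons are depolarised together
        g = min(0.5, g + 1e-5 * max(0.0, v1) * max(0.0, v2))
    print(f"coupling after simulation: g = {g:.3f}")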
January 29, 2022
Neuromorphic engineering has led to the necessary process of rethinking how we process and integrate information, analyze data, and use the resulting insights to improve computation and avoid the high power and latency of current Artificial Intelligence (AI) hardware. Current neuromorphic processors are, however, limited by digital technologies, which cannot reproduce the abilities of biological neural computation in terms of power, latency, and area cost. In this paper, we...
March 11, 2024
Memristive devices hold promise to improve the scale and efficiency of machine learning and neuromorphic hardware, thanks to their compact size, low power consumption, and the ability to perform matrix multiplications in constant time. However, on-chip training with memristor arrays still faces challenges, including device-to-device and cycle-to-cycle variations, switching non-linearity, and especially SET and RESET asymmetry. To combat device non-linearity and asymmetry, we ...
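The abstract is cut off before the proposed method, so the sketch below shows only one widely used way of sidestepping SET/RESET asymmetry, not necessarily the authors' approach: each weight is stored as the difference of two conductances, and every update is realised by potentiating whichever device moves the weight in the desired direction. The device behaviour and noise figures are assumptions.

    import numpy as np

    # Hedged sketch: encode each weight as w = G_plus - G_minus and realise
    # any update by potentiating only one of the two devices, avoiding the
    # weaker and noisier RESET direction entirely.
    rng = np.random.default_rng(2)
    G_MAX = 1.0

    def potentiate(g, dg_nominal=0.02, cycle_to_cycle=0.2):
        """Noisy, saturating SET pulse (illustrative device behaviour)."""
        dg = dg_nominal * (1 - g / G_MAX)               # non-linear SET
        dg *= 1 + cycle_to_cycle * rng.standard_normal()
        return float(np.clip(g + dg, 0.0, G_MAX))

    g_plus, g_minus = 0.3, 0.3                          # weight starts at 0
    for delta in [+1, +1, -1, +1, -1, -1, -1]:          # desired update signs
        if delta > 0:
            g_plus = potentiate(g_plus)                 # increase the weight
        else:
            g_minus = potentiate(g_minus)               # decrease the weight
        print(f"w = {g_plus - g_minus:+.4f}")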
March 29, 2024
We like and need Information and Communications Technologies (ICT) for data processing. This is measurable in the exponential growth of data processed by ICT, e.g., ICT for cryptocurrency mining and search engines. So far, the energy demand for computing technology has increased by a factor of 1.38 every ten years due to the exponentially increasing use of ICT systems as computing devices. The energy consumption of ICT systems is expected to rise from 1500 TWh (8% of global e...
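For scale, the quoted growth compounds as 1500 TWh x 1.38^(t / 10 years); the short calculation below simply unrolls that, with the 20-year horizon chosen for illustration rather than taken from the paper.

    # Quick arithmetic check of the growth figure quoted above: a factor of
    # 1.38 every ten years applied to 1500 TWh.
    base_twh = 1500
    factor_per_decade = 1.38
    for decades in range(3):
        print(f"after {decades*10:>2} years: "
              f"{base_twh * factor_per_decade**decades:,.0f} TWh")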
August 21, 2011
We present new computational building blocks based on memristive devices. These blocks can be used to implement either supervised or unsupervised learning modules. This is achieved using a crosspoint architecture, which is an efficient array implementation for nanoscale two-terminal memristive devices. Based on these blocks and an experimentally verified SPICE macromodel for the memristor, we demonstrate, firstly, that Spike-Timing-Dependent Plasticity (STDP) can be implem...
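For reference, the pair-based STDP rule that such crosspoint synapses are usually asked to reproduce is a weight change of A+ exp(-dt/tau) when the presynaptic spike precedes the postsynaptic one (dt = t_post - t_pre >= 0) and -A- exp(dt/tau) otherwise. The sketch below evaluates it for a few spike pairs with illustrative constants, independent of the paper's SPICE macromodel.

    import math

    # Standard pair-based STDP rule; amplitudes and time constant are
    # illustrative, not taken from the paper.
    A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
    TAU = 20e-3                        # 20 ms plasticity window

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in seconds)."""
        dt = t_post - t_pre
        if dt >= 0:                    # pre before post -> potentiate
            return A_PLUS * math.exp(-dt / TAU)
        return -A_MINUS * math.exp(dt / TAU)   # post before pre -> depress

    w = 0.5
    for t_pre, t_post in [(0.000, 0.005), (0.050, 0.045), (0.100, 0.102)]:
        w += stdp_dw(t_pre, t_post)
        print(f"pre={t_pre*1e3:.0f} ms, post={t_post*1e3:.0f} ms -> w={w:.4f}")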