Feed-Forward learning algorithm for resistive memories

Dev Narayan Yadav, Phrangboklang Lyngton Thangkhiew, Kamalika Datta, Sandip Chakraborty, Rolf Drechsler, Indranil Sengupta

In: Journal of Systems Architecture: Embedded Software Design (JSA), vol. 131, Elsevier, 2022.


Resistive memory systems, due to their inherent ability to perform Vector–Matrix Multiplication (VMM), have drawn the attention of researchers seeking to realize machine learning applications with low overheads. In resistive memory systems, each memory cell (synapse/neuron) stores a weight in the form of a resistance/conductance value. Memristor-based resistive memory has been widely explored in this regard because of its small size and low power consumption. The inference quality of a neural network depends on how efficiently and accurately the weights are stored in the synapses. The weights are calculated using various training algorithms, such as back-propagation (BP), least mean square (LMS), and random weight change (RWC). The training accuracy of existing algorithms is directly related to the algorithm complexity and the time devoted to training. This paper presents a training algorithm that requires only an additional set of memristors and a threshold gate for training, and achieves an accuracy similar to that of existing algorithms without using any complex circuitry. The method can update synapse weights in parallel and requires fewer epochs for training an application. Experimental results on standard benchmarks reveal that the method achieves an average speedup as compared to state-of-the-art methods.
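To illustrate the VMM capability the abstract refers to, the following is a minimal sketch (not taken from the paper) of how an idealized memristive crossbar computes a vector–matrix product in a single analog step: each cell stores a weight as a conductance, applying input voltages to the rows produces, by Ohm's law per cell and Kirchhoff's current law per column, output currents equal to the product of the voltage vector and the conductance matrix. All names and values here are illustrative assumptions.

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    """Ideal crossbar VMM: column currents from row voltages.

    Each cell (i, j) stores a weight as conductance G[i][j] (siemens).
    Applying voltage V[i] on row i contributes current V[i] * G[i][j]
    to column j; the column wire sums these contributions, so the
    output current vector is I = V @ G -- a one-step analog
    multiply-accumulate.
    """
    voltages = np.asarray(voltages, dtype=float)
    conductances = np.asarray(conductances, dtype=float)
    return voltages @ conductances

# Illustrative example: 2 input rows driving 3 output columns.
G = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 2.0]])   # stored weights (conductances)
V = np.array([0.2, 0.4])          # applied row voltages
I = crossbar_vmm(V, G)            # column currents: [0.4, 0.8, 0.9]
```

A digital system would need one multiply-accumulate per cell; the crossbar performs all of them concurrently in the analog domain, which is the low-overhead property that motivates resistive-memory machine learning accelerators.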

Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence)