Hardware approach enables significantly faster training of deep neural networks

SEP 24, 2018
Phase change memory devices in cross-bar arrays massively parallelize the computation required to train fully connected, multinode neural networks by encoding each weight across multiple resistive memory devices.

DOI: 10.1063/1.5061584

While modern computers are very good at raw computation, they are not inherently good at classifying or recognizing objects the way that humans can. In recent years, neural networks — systems of interconnected nodes that mimic human synapses — have enabled computers to perform classification as well as, and sometimes better than, humans. However, these deep neural networks, which comprise many layers of neurons, require significant computational resources to train because of the large number of calculations and iterations their training algorithms involve.

New research demonstrates a hardware, rather than software, approach that significantly speeds up the training of neural networks. The method uses phase change memory devices arranged in a cross-bar array, instead of conventional graphics processing units (GPUs), to perform highly parallelized computation, with each weight encoded in pairs of neighboring memory elements along the array, called conductance pairs.
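In broad strokes, a cross-bar array performs an analog vector-matrix multiplication in a single step: input voltages applied to the rows produce column currents that sum every row's contribution at once. The NumPy sketch below illustrates only the equivalent arithmetic of that read operation, with a single conductance pair per weight; the array size and values are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical 4x3 cross-bar: each cell stores a non-negative conductance.
# A signed weight is represented as the difference of a (G+, G-) pair.
rng = np.random.default_rng(0)
g_plus = rng.uniform(0.0, 1.0, size=(4, 3))   # G+ conductances
g_minus = rng.uniform(0.0, 1.0, size=(4, 3))  # G- conductances
weights = g_plus - g_minus                    # effective signed weights

# Input activations arrive as row voltages; each column current sums the
# contributions of every row at once (Kirchhoff's current law), so the
# whole vector-matrix product happens in one analog step.
voltages = rng.uniform(-1.0, 1.0, size=4)
currents = voltages @ weights                 # I_j = sum_i V_i * W[i, j]
print(currents)
```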

Deep neural networks trained with this hardware approach achieve training accuracy similar to that of conventional methods, while reducing both the required training time and the required electrical power by a factor of approximately 100. The primary innovation enabling this achievement is the use of multiple conductance pairs of varying significance in the hardware loop.
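A minimal sketch of how multiple conductance pairs of varying significance might together encode a single weight, in the spirit of the source article's title; the significance factor and the pair values below are illustrative assumptions, not parameters from the paper.

```python
def decode_weight(pairs, significance=4.0):
    """Decode a weight stored across several conductance pairs.

    `pairs` is a list of (g_plus, g_minus) tuples ordered from most to
    least significant; each pair contributes its difference, scaled by a
    decreasing power of the (assumed) significance factor.
    """
    weight = 0.0
    for k, (g_plus, g_minus) in enumerate(pairs):
        weight += (g_plus - g_minus) * significance ** (len(pairs) - 1 - k)
    return weight

# Example: two pairs, the first carrying the coarse (high-significance)
# part of the weight and the second a fine correction.
print(decode_weight([(0.8, 0.3), (0.4, 0.5)]))  # 4*(0.5) + 1*(-0.1) = 1.9
```

Spreading a weight over several pairs also means frequent small updates can be absorbed by the least-significant pair, which is one way redundancy can reduce wear on any single device.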

This research aims to define the phase change memory device parameters required for neural network training, in order to help justify the large investments that large-scale manufacturing would require. Another challenge that could prevent or delay the technology from becoming ubiquitous is the limited endurance of the devices under repeated programming. Building in redundancy through multiple conductance pairs can extend the working lifetime to the order of years while maintaining the same performance.

Source: “Perspective on training fully connected networks with resistive memories: Device requirements for multiple conductances of varying significance,” by Giorgio Cristiano, Massimo Giordano, Stefano Ambrogio, Louis P. Romero, Christina Cheng, Pritish Narayanan, Hsinyu Tsai, Robert M. Shelby, and Geoffrey W. Burr, Journal of Applied Physics (2018). The article can be accessed at https://doi.org/10.1063/1.5042462.
