Dissertation Defense

Neural Network Implementations on Analog In-Memory-Computing Fabric

Qiwen Wang
WHERE:
1017 Dow Building

Deep neural networks (DNNs), driven by their unprecedented capabilities, have achieved widespread adoption. However, the computational requirements of DNNs present great challenges for traditional architectures, particularly due to the memory bottleneck. Analog in-memory computing (IMC) systems hold great promise in meeting these challenges. However, computation accuracy is an additional concern in analog computing systems, even though DNNs are known for their fault tolerance. In general, for analog computing, accuracy needs to be ensured before any performance benefit can become material.

This dissertation presents studies on the implementation of DNNs on analog IMC systems from an accuracy perspective, under realistic memory-device and system non-idealities. In this work, the impacts of the non-idealities were evaluated, methods to mitigate these impacts were developed, and memory device performance requirements were established. Deterministic error sources including memory device on/off ratio, programming precision, array size limitation, and ADC characteristics were considered. Stochastic error sources including device programming variation, device defects, and ADC noise were also considered. In particular, a tiled architecture was developed to mitigate the effects of limited practical memory array sizes, and the consequences of this architecture were carefully studied. Both DNN inference and training operations on analog IMC systems were studied.
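To make the setting concrete, the sketch below simulates a tiled analog IMC matrix-vector product with a few of the error sources named above: a finite conductance on/off ratio, stochastic programming variation, and per-tile ADC quantization. It is a minimal illustration only; the tile size, on/off ratio, noise level, and ADC resolution are hypothetical values chosen for demonstration and are not taken from the dissertation, nor is this the author's actual simulation framework.

import numpy as np

# Illustrative assumptions (not from the dissertation):
TILE = 128                    # rows/columns per memory tile
G_ON, G_OFF = 100e-6, 1e-6    # conductances giving a 100x on/off ratio
SIGMA_PROG = 0.02             # relative device programming variation
ADC_BITS = 8                  # ADC resolution per tile column

def weights_to_conductance(w):
    """Map real-valued weights onto a differential conductance pair."""
    w_max = np.max(np.abs(w)) + 1e-12
    g_pos = G_OFF + (G_ON - G_OFF) * np.clip(w, 0, None) / w_max
    g_neg = G_OFF + (G_ON - G_OFF) * np.clip(-w, 0, None) / w_max
    # Stochastic programming variation applied to every device
    g_pos *= 1 + SIGMA_PROG * np.random.randn(*g_pos.shape)
    g_neg *= 1 + SIGMA_PROG * np.random.randn(*g_neg.shape)
    return g_pos, g_neg, w_max

def adc_quantize(x, bits=ADC_BITS):
    """Uniform quantization of column outputs, modeling a finite-precision ADC."""
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (bits - 1)
    return np.round(x / scale * levels) / levels * scale

def tiled_imc_matvec(W, x):
    """Compute W @ x tile by tile: each tile does an analog dot product,
    its result is ADC-converted, and partial sums are accumulated digitally."""
    out = np.zeros(W.shape[0])
    for r in range(0, W.shape[0], TILE):
        for c in range(0, W.shape[1], TILE):
            w_tile = W[r:r + TILE, c:c + TILE]
            x_tile = x[c:c + TILE]
            g_pos, g_neg, w_max = weights_to_conductance(w_tile)
            # Analog multiply-accumulate as summed column currents
            i_col = g_pos @ x_tile - g_neg @ x_tile
            # Rescale currents back to weight units, then quantize
            out[r:r + TILE] += adc_quantize(i_col / (G_ON - G_OFF) * w_max)
    return out

# Compare against the ideal digital result
W = np.random.randn(512, 512)
x = np.random.randn(512)
err = np.linalg.norm(tiled_imc_matvec(W, x) - W @ x) / np.linalg.norm(W @ x)
print(f"relative error: {err:.3%}")

The accumulation of per-tile partial sums in the digital domain is the basic idea behind a tiled architecture: a large weight matrix that cannot fit in one practical-sized crossbar is split across many smaller arrays, at the cost of extra ADC conversions whose quantization and noise then enter the error budget.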

Chair: Professor Wei D. Lu