Dissertation Defense

Analog In-Memory Computing on Non-Volatile Crossbar Arrays

Justin M. Correll
1008 EECS Building


Efficient large-scale matrix operations are essential for the future of AI and machine learning. However, conventional computing architectures separate “processing” from “memory,” which induces significant latency and wastes energy. To remove this bottleneck, processing and memory are combined: the computation is performed in the physical domain with analog values on Multi-Level Cell (MLC) Resistive Random-Access Memory (ReRAM) crossbars. The computation proceeds in parallel and in place with O(1) time complexity, and energy consumption is amortized across the crossbar array structure. This new computing paradigm presents challenges from the bottom up, spanning devices, circuits, integration, systems, and algorithms.
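The in-place, O(1)-time multiply can be illustrated with a small numerical sketch (the variable names and sizes here are illustrative assumptions, not details from the prototypes): weights are stored as bitcell conductances, inputs are applied as row voltages, and Ohm's and Kirchhoff's laws produce every column's dot product simultaneously.

```python
import numpy as np

# Sketch of an analog crossbar matrix-vector multiply.
# Weights are stored as bitcell conductances G (siemens); inputs are
# applied as row voltages V. By Ohm's law each cell contributes
# I = G * V, and Kirchhoff's current law sums the currents on every
# column wire at once, so the whole multiply happens in one step.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductance matrix: 4 rows x 3 columns
V = rng.uniform(0.0, 0.2, size=4)         # input voltages on the 4 rows

# Column output currents: column j collects sum_i G[i, j] * V[i]
I_out = G.T @ V

# The same result computed cell by cell, as the physics does in parallel
I_check = np.array([sum(G[i, j] * V[i] for i in range(4)) for j in range(3)])
assert np.allclose(I_out, I_check)
```

Because every row-column product happens concurrently in the array, the latency does not grow with the matrix dimensions; only the peripheral circuitry (DACs, ADCs) scales with array size.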

This work presents two fully integrated CMOS-ReRAM in-memory compute IC prototypes. The first chip demonstrates the mixed-signal support circuitry required to operate a fully parallel, post-processed passive ReRAM crossbar array; an on-chip processor directs ReRAM bitcell programming and demonstrates proof-of-concept neural network tasks. The second chip introduces analog bit-serial operation with MLC 1T1R ReRAM crossbar arrays in a complete RISC-V based system-on-chip. A convolutional neural network is mapped onto the prototype hardware to demonstrate AI at the edge with state-of-the-art energy efficiency.
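The bit-serial idea can be sketched numerically (a minimal illustration; the function name, bit width, and digital shift-and-accumulate shown here are assumptions, not the chip's actual datapath): the digital input vector is applied one bit-plane per cycle, each cycle performs an analog crossbar multiply with binary inputs, and the running sum is shifted so each bit-plane lands at its binary weight.

```python
import numpy as np

# Sketch of bit-serial matrix-vector multiplication.
# Each cycle applies one bit-plane of the input vector to the crossbar
# (a binary-input analog multiply), and the accumulator is shifted left
# so successive bit-planes are weighted by powers of two.

def bit_serial_mvm(W, x, n_bits=4):
    """Compute W.T @ x by applying x one bit-plane at a time, MSB first."""
    acc = np.zeros(W.shape[1], dtype=np.int64)
    for b in range(n_bits - 1, -1, -1):
        bits = (x >> b) & 1          # current input bit-plane (0/1 per row)
        acc = (acc << 1) + W.T @ bits  # shift previous sum, add this plane
    return acc

W = np.array([[1, 2], [3, 4], [5, 6]])  # weights (conductance levels)
x = np.array([3, 1, 2])                 # small digital input vector

assert np.array_equal(bit_serial_mvm(W, x), W.T @ x)
```

Trading time for precision this way keeps the analog operation binary-input per cycle, which relaxes DAC requirements at the cost of one crossbar cycle per input bit.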

Chair: Professor Michael P. Flynn