Dissertation Defense

Scalable, High-Performance Accelerators for Neural Network and Signal Processing: Logical and Physical Design Considerations

Sung-Gun Cho

Abstract:

The pervasive deployment of machine learning, from edge applications to the cloud, demands advances in computing, in the form of greatly improved computational capacity per unit power and cost at sufficiently large scale. This work explores the design space of neural network accelerators, combining both logical and physical mapping considerations to support the rapid scaling up of future machine learning hardware.

This dissertation first studies the design of a scalable spiking neural network (SNN) accelerator in a globally asynchronous, locally synchronous (GALS) architecture. By taking advantage of an asynchronous network-on-chip (NoC) and algorithm-architecture co-design, a multi-channel neuromorphic SNN hardware design can be efficiently scaled to a large size.
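For context only, the sketch below shows the kind of elementary operation such SNN hardware evaluates in parallel: a discrete-time leaky integrate-and-fire (LIF) neuron update. The neuron model, parameters, and function names are assumptions for illustration and are not the dissertation's hardware design.

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One discrete-time update of a layer of leaky integrate-and-fire neurons.

    v         : membrane potentials, shape (n_neurons,)
    spikes_in : binary input spike vector, shape (n_inputs,)
    weights   : synaptic weights, shape (n_neurons, n_inputs)
    """
    v = leak * v + weights @ spikes_in               # leak, then accumulate weighted input spikes
    spikes_out = (v >= threshold).astype(np.uint8)   # fire when the potential crosses threshold
    v = np.where(spikes_out == 1, 0.0, v)            # reset neurons that fired
    return v, spikes_out

# Example: 4 neurons driven by 3 input spike channels over 5 time steps
rng = np.random.default_rng(0)
v = np.zeros(4)
w = rng.normal(0.0, 0.5, size=(4, 3))
for t in range(5):
    v, out = lif_step(v, rng.integers(0, 2, size=3), w)
    print(t, out)
```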

The second part of this work investigates a low-latency, computation-efficient systolic array for matrix-matrix multiplication (MMM) and matrix-vector multiplication (MVM), the elementary operations underlying deep learning models. A prototype chip is integrated with an FPGA to provide efficient acceleration and the versatility to support a wide range of deep learning models.
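As a point of reference, the sketch below simulates the dataflow of a generic output-stationary systolic array computing C = A x B, stepping values through a grid of processing elements (PEs) cycle by cycle. It is a minimal illustration of the systolic MMM concept under assumed skewing and register names, not the prototype chip's actual microarchitecture.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle sketch of an output-stationary systolic array computing A @ B.

    Rows of A stream in from the left (skewed by row index), columns of B stream
    in from the top (skewed by column index), and each PE accumulates one C[i, j].
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    h = np.zeros((M, N))   # horizontal registers: A values held at each PE
    v = np.zeros((M, N))   # vertical registers: B values held at each PE

    for t in range(M + N + K - 2):  # enough cycles to drain the whole array
        a_in = np.array([A[i, t - i] if 0 <= t - i < K else 0.0 for i in range(M)])
        b_in = np.array([B[t - j, j] if 0 <= t - j < K else 0.0 for j in range(N)])
        h = np.hstack([a_in[:, None], h[:, :-1]])   # shift A values one PE to the right
        v = np.vstack([b_in[None, :], v[:-1, :]])   # shift B values one PE down
        C += h * v                                  # every PE multiplies and accumulates
    return C

A = np.random.rand(4, 6)
B = np.random.rand(6, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```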

Chair: Associate Professor Zhengya Zhang
REMOTE: https://umich.zoom.us/j/95735304253