Distributed, Intelligent Audio Sensing Enabled by Low-Power Integrated Technologies
This event is free and open to the public.
Distributed audio sensing promises to enable a wide variety of applications that improve human life. However, despite continued efforts, state-of-the-art audio sensor nodes remain centimeter-scale in size, preventing truly ubiquitous and unobtrusive deployment. In this dissertation, we explore a way to develop a millimeter-scale wireless audio sensor node by combining integrated silicon technology, machine learning, and low-power circuit techniques.
This dissertation first presents an audio processing chip that performs audio acquisition and compression while consuming microwatt-level power. A new low-power compression algorithm and its hardware accelerator consume only 1.5 μW while providing 4-32× real-time audio compression. Second, an on-sensor neural network processor with picowatt-level standby power is introduced for sensor applications. With a custom instruction set architecture, a compact SIMD microarchitecture, and ultra-low-leakage SRAM, the processor is successfully integrated into an acoustic object detection sensor system to demonstrate its efficacy. The next part of this dissertation proposes a voice activity detector as a wake-up method for the sensor node, using a mixer-based architecture and a neural network classifier. By sequentially scanning 4 kHz of frequency bands and down-converting them to below 500 Hz, feature extraction power is minimized, and the neural network processor employs computational sprinting to reduce the leakage contribution. Measurement results show 91.5%/90% speech/non-speech hit rates at 10 dB SNR under babble noise. Finally, two generations of complete wireless audio sensor nodes with millimeter-scale form factors will be demonstrated.
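To illustrate the mixer-based down-conversion idea behind the voice activity detector, the following is a minimal numerical sketch, not the chip's implementation: a speech-band tone is multiplied by a local oscillator so that its difference frequency falls below 500 Hz, where subsequent feature extraction can run at much lower power. The specific frequencies (a 2 kHz input tone, a 1.8 kHz oscillator, a 16 kHz sample rate) are illustrative assumptions.

```python
import numpy as np

fs = 16000                              # illustrative sample rate
t = np.arange(fs) / fs                  # 1 second of samples
tone = np.cos(2 * np.pi * 2000 * t)     # speech-band component at 2 kHz

# Mixing with a 1.8 kHz local oscillator produces components at the
# difference (200 Hz) and sum (3.8 kHz) frequencies.
lo = np.cos(2 * np.pi * 1800 * t)
mixed = tone * lo

# Crude 500 Hz low-pass via FFT masking (the chip would use an analog
# filter); only the 200 Hz difference component survives.
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spectrum[freqs > 500] = 0
baseband = np.fft.irfft(spectrum)

# Dominant component of the baseband signal is now below 500 Hz.
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(baseband)))]
print(peak_hz)  # 200.0
```

Sweeping the oscillator frequency across the band realizes the sequential scan described above: each 500 Hz slice of the 4 kHz input range is translated to baseband in turn, so the feature-extraction circuitry only ever processes low-frequency signals.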
Co-Chairs: Dennis Sylvester and Hun-Seok Kim