Low-Power Localization Systems with Hardware-Efficient Deep Neural Networks
This event is free and open to the public.
A localization system identifies the position of an agent or of surrounding objects from information gathered by various sensors. It enables a wide range of practical applications, such as autonomous navigation, self-driving cars, and virtual reality. In recent years, deep neural networks have achieved great success in various computer vision tasks, including more accurate localization, but at the cost of substantial computational complexity. Deploying such systems on energy-constrained mobile IoT platforms therefore remains a major challenge because of the tension between system performance and power consumption. This thesis presents several practical approaches to developing energy-efficient localization systems. The first work reduces the complexity of learning-based visual-inertial odometry systems by finding an efficient network architecture through neural architecture search and by adaptively disabling the visual sensor modality on the fly. The second work introduces a new hardware-efficient heterogeneous transform-domain neural network that reduces computational complexity by replacing convolution operations with element-wise multiplications, learning sparse-orthogonal weights, and applying efficient quantization based on canonical-signed-digit representation. Together, these works explore different yet effective ways to balance system performance and power consumption on mobile platforms: reducing deep neural network complexity and adaptively selecting and fusing sensor modalities.
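The canonical-signed-digit (CSD) quantization mentioned in the abstract can be illustrated with a short sketch. The thesis's own encoder is not given here, so the function below (`to_csd`, a hypothetical name) is a minimal, standard CSD conversion, not the thesis implementation: each quantized weight is written with digits in {-1, 0, +1} such that no two adjacent digits are nonzero, which minimizes the number of nonzero digits and lets a hardware multiplier reduce to a few shift-and-add operations.

```python
def to_csd(n: int) -> list[int]:
    """Convert an integer to canonical-signed-digit form (LSB first).

    Digits are in {-1, 0, +1}, and no two adjacent digits are nonzero,
    which minimizes the number of shift-add terms in a multiplier.
    """
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
        else:
            # Look at the two lowest bits: emit +1 for ...01, -1 for ...11,
            # so that after subtracting the digit, n is divisible by 4.
            d = 1 if n % 4 == 1 else -1
            digits.append(d)
            n -= d
        n //= 2
    return digits


def csd_value(digits: list[int]) -> int:
    """Reconstruct the integer from its CSD digits (LSB first)."""
    return sum(d << i for i, d in enumerate(digits))


# Example: 7 = 0b111 needs three add terms in plain binary, but its CSD
# form [-1, 0, 0, 1] means 8 - 1, i.e. only two shift-add terms.
print(to_csd(7))             # [-1, 0, 0, 1]
print(csd_value(to_csd(7)))  # 7
```

With weights stored this way, multiplying an activation x by the weight 7 becomes (x << 3) - x, which is why CSD quantization is attractive for low-power hardware.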
Chair: Professor Hun-Seok Kim