Robust and Efficient Semantic Sensor Registration for Mobile Robotics in Unorganized, Natural Scenes
This event is free and open to the public.
Advances in sensing and computing hardware have led to increased interest in registration algorithms. In particular, the proliferation of 3D light detection and ranging (LIDAR) sensors and RGBD cameras requires efficient, robust, and accurate estimation algorithms for use in robotic mapping, localization, and tracking tasks. Most modern approaches to autonomous driving require localizing and calibrating multiple LIDAR sensors, both of which are registration tasks. Meanwhile, tasks in the domain of indoor robotics require localizing both the robot and objects of interest in the environment. The registration problem is that of finding the three-dimensional transformation between two measurements. This can include consecutive measurements (producing an odometry estimate), measurements from disparate points in time (such as for localization and mapping), and measurements from different sensors (such as for calibrating multiple sensors on a platform).

This thesis focuses on leveraging semantic inference to enable efficient and robust sensor registration. In robotics, semantic inference is increasingly used for downstream reasoning tasks. This thesis explores how that inference can also serve upstream tasks such as egomotion estimation, object pose estimation, and multisensor calibration.
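To make the registration problem concrete: when correspondences between two point sets are known, the optimal rigid transform has a closed-form solution via the singular value decomposition (the classical Kabsch/Umeyama method). The sketch below is illustrative only and is not the method proposed in this thesis; the function name `register_points` and its interface are assumptions for the example.

```python
import numpy as np

def register_points(src, dst):
    """Estimate the rigid transform (R, t) with dst_i ~ R @ src_i + t,
    given known one-to-one correspondences (Kabsch/Umeyama method).

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    # Center both point sets on their centroids.
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered points.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true

R_est, t_est = register_points(P, Q)
```

Real sensor data lacks known correspondences, which is why iterative schemes such as ICP alternate between data association and this closed-form alignment step; the thesis's contribution is in using semantic inference to make that association robust and efficient.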
Chairs: Professors Ryan M. Eustice and Odest C. Jenkins