Dissertation Defense

Safe End-to-end Learning-based Robot Autonomy via Integrated Perception, Planning, and Control

Glen Chou
WHERE:
2300 Ford Robotics Building

Password: 577060


To deploy robots in unstructured, human-centric environments, we must guarantee their ability to safely and reliably complete tasks. In such environments, uncertainty runs rampant and robots invariably need data to refine their autonomy stack. While machine learning can leverage data to obtain components of this stack, e.g., task constraints, dynamics, and perception modules, blindly trusting these potentially unreliable models can compromise safety. Determining how to use these learned components while retaining unified, system-level guarantees on safety and robustness remains an urgent open problem.

In this defense, I will present two lines of research towards achieving safe learning-based autonomy. First, I will discuss how to use human task demonstrations to learn the hard constraints that must be satisfied to safely complete the demonstrated task, and how to guarantee safety by planning with the learned constraints in an uncertainty-aware fashion. Second, I will discuss how to determine where, and to what extent, learned perception and dynamics modules can be trusted. We imbue the planner with this knowledge to guarantee safe goal reachability when controlling from high-dimensional observations (e.g., images). We demonstrate that these theoretical guarantees translate to empirical success on high-dimensional, underactuated robots, both in simulation and on hardware.


Co-Chairs: Professor Dmitry Berenson and Professor Necmiye Ozay