Dissertation Defense

Semantic Robot Programming for Taskable Goal-Directed Manipulation

Zhen Zeng
GM Conference Room, Lurie Engineering Center (4th floor)


Autonomous robots have the potential to help people be more productive in homes, hospitals, factories, and similar environments. It is therefore beneficial to provide pathways that enable end-users to program an arbitrary robot to perform an arbitrary task in an arbitrary world. Advances in robot Programming by Demonstration (PbD) have made it possible for end-users to program robot behavior through demonstrations. However, it remains a challenge for end-users to program robot behavior in a manner that is intuitive, robust, generalizable, and scalable.

In this dissertation, we introduce the concept of Semantic Robot Programming (SRP), whose objective is to enable an end-user to intuitively program a robot by providing a workspace demonstration of the goal scene for a task. Given RGB-D observations of the start and goal scenes, we develop scene estimation techniques that robustly ground the start and goal states into semantic maps under perceptual uncertainty. To scale SRP to large workspaces, such as an entire building floor, we propose a semantic mapping approach that simultaneously detects and localizes objects across an observed scene. In addition, we develop a probabilistic approach to modeling Generalized Object Permanence (GOP), which predicts the locations of objects when they are out of view. Finally, we propose active visual object search strategies that enable robots to search for and gather objects to accomplish various tasks in large-scale environments under uncertainty.

Chair: Odest Chadwicke Jenkins