Control Seminar

Perception of People and Scenes for Robot Learning from Demonstration

Chad Jenkins
Associate Professor, Computer Science and Engineering

We are at the dawn of a robotics revolution, in which visions of interconnected heterogeneous robots in widespread use will become a reality. Similar to the "app stores" of modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated developers and researchers. For people to program autonomous robots fluently, a robot must be able to interpret commands that accord with a human's model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that the critical missing component is the grounding of symbols that conceptually ties low-level perception to user programs and high-level reasoning systems. Such a grounding will enable robots to perform tasks that require extended goal-directed autonomy and to work fluidly with human partners.

Towards making robot programming more accessible and general, I will present our work on improving the perception of people and scenes to enable robot learning from human demonstration. Robot learning from demonstration (LfD) has emerged as a compelling alternative to explicit coding in a programming language: robots are programmed implicitly from a user's demonstrations. Phrasing LfD as a statistical regression problem, I will present our multivalued regression algorithms for learning robot controllers in the face of perceptual aliasing. I will also describe how such regressors can be used within physics-based estimation systems to learn controllers for humanoids from monocular video of human motion. Finally, with respect to learning sequential manipulation tasks, our recent work aims to perceive axiomatic descriptions of scenes from depth images for planning goal-directed behavior.
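The abstract frames LfD as statistical regression from perceived states to demonstrated actions, with multivalued regression handling perceptual aliasing: a single perceived state may admit several distinct valid actions, so averaging them (as ordinary regression would) produces an invalid command. As a rough illustration only, and not the algorithms presented in the talk, the sketch below fits a joint Gaussian mixture to toy (state, action) demonstration pairs and returns the per-component conditional means as candidate actions; all data, names, and parameters here are invented for illustration.

```python
# Minimal sketch (assumed toy setup, not the speaker's method) of
# "multivalued" regression for LfD under perceptual aliasing: recover the
# modes of p(action | state) from a joint Gaussian mixture over
# (state, action) demonstration pairs, instead of a single averaged output.
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy 1-D demonstrations: near state ~0 the demonstrator sometimes turned
# left (action -1) and sometimes right (action +1) -- an aliased state.
rng = np.random.default_rng(0)
states = rng.uniform(-0.2, 0.2, size=200)
actions = rng.choice([-1.0, 1.0], size=200) + 0.05 * rng.standard_normal(200)
data = np.column_stack([states, actions])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

def candidate_actions(state, gmm):
    """Per-component conditional means of p(action | state): each is one
    plausible action, rather than their (invalid) average."""
    cands = []
    for k in range(gmm.n_components):
        mu = gmm.means_[k]          # joint mean [mu_state, mu_action]
        cov = gmm.covariances_[k]   # 2x2 joint covariance
        # Gaussian conditioning: mu_a + C_as / C_ss * (state - mu_s)
        cands.append(mu[1] + cov[1, 0] / cov[0, 0] * (state - mu[0]))
    return cands

print(candidate_actions(0.0, gmm))  # two action modes, near -1 and +1
```

A single-valued regressor trained on the same data would predict roughly 0 at the aliased state, an action no demonstrator ever took; keeping the conditional modes separate is the essential idea behind multivalued approaches.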

Odest Chadwicke Jenkins, Ph.D., is an Associate Professor of Computer Science and Engineering at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), his M.S. in Computer Science at Georgia Tech (1998), and his Ph.D. in Computer Science at the University of Southern California (2003). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins was named a Sloan Research Fellow in 2009. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based human tracking from video. His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR) for his research in learning dynamical primitives from human motion, the Air Force Office of Scientific Research (AFOSR) for his work in manifold learning and multi-robot coordination, and the National Science Foundation (NSF) for robot learning from multivalued human demonstrations.

Sponsored by

ECE - Systems

Faculty Host

Jim Freudenberg