AI Seminar

Game Playing Meets Game Theory: Strategic Learning from Simulated Play

Michael P. Wellman
Professor, Computer Science and Engineering, University of Michigan
WHERE:
3725 Beyster Building

Abstract:

Recent breakthroughs in AI game playing — AlphaGo (Go), AlphaZero (Chess, Shogi, and Go), Libratus and DeepStack (Poker) — have demonstrated superhuman performance in a range of recreational strategy games. Extending beyond such artificial domains presents several challenges, but the basic idea of learning from simulated play employed in most of these systems is broadly applicable to any domain that can be accurately simulated. This thread of work naturally dovetails with methods developed in the Strategic Reasoning Group at Michigan for reasoning about simulation-based games. I will recap some of this work, with emphasis on how new advances in deep reinforcement learning can contribute to a major broadening of the scope of game-theoretic reasoning for complex multiagent domains.
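To give a concrete flavor of what reasoning about simulation-based games involves, the minimal sketch below estimates an empirical payoff matrix for a small, invented two-player symmetric game by averaging noisy simulated play, then searches for approximate pure-strategy Nash equilibria. The strategy names, payoff values, and simulator are illustrative assumptions for this sketch, not material from the talk; in practice the simulator would be the actual domain model (a market, auction, or security scenario).

```python
import itertools
import random

# Hypothetical strategies for a small symmetric two-player game.
# These names and the simulator below are illustrative assumptions.
STRATEGIES = ["aggressive", "balanced", "cautious"]

def simulate_payoff(s1, s2):
    """Noisy simulator: sampled payoff to player 1 when the players use s1 and s2.
    A real application would run the domain simulator here instead."""
    base = {
        ("aggressive", "aggressive"): -1.0,
        ("aggressive", "balanced"):    2.0,
        ("aggressive", "cautious"):    3.0,
        ("balanced",   "aggressive"):  0.0,
        ("balanced",   "balanced"):    1.0,
        ("balanced",   "cautious"):    2.0,
        ("cautious",   "aggressive"):  0.5,
        ("cautious",   "balanced"):    0.5,
        ("cautious",   "cautious"):    1.0,
    }[(s1, s2)]
    return base + random.gauss(0.0, 0.5)  # simulation noise

def estimate_game(num_samples=200):
    """Build the empirical payoff matrix by averaging repeated simulated play."""
    payoffs = {}
    for s1, s2 in itertools.product(STRATEGIES, repeat=2):
        samples = [simulate_payoff(s1, s2) for _ in range(num_samples)]
        payoffs[(s1, s2)] = sum(samples) / num_samples
    return payoffs

def approximate_pure_equilibria(payoffs, tolerance=0.1):
    """Profiles where neither player gains more than `tolerance` by deviating
    to another pure strategy, under the symmetric-game payoff table."""
    equilibria = []
    for s1, s2 in itertools.product(STRATEGIES, repeat=2):
        u1 = payoffs[(s1, s2)]
        u2 = payoffs[(s2, s1)]  # symmetry: player 2's payoff reuses the same table
        best1 = max(payoffs[(d, s2)] for d in STRATEGIES)
        best2 = max(payoffs[(d, s1)] for d in STRATEGIES)
        if best1 - u1 <= tolerance and best2 - u2 <= tolerance:
            equilibria.append((s1, s2))
    return equilibria

if __name__ == "__main__":
    empirical_game = estimate_game()
    print("Approximate pure-strategy equilibria:",
          approximate_pure_equilibria(empirical_game))
```

The same pattern scales up in spirit: simulation replaces an analytically specified payoff function, learned policies (for example, from deep reinforcement learning) replace the hand-enumerated strategies, and equilibrium analysis is run over the resulting empirical game.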

Bio:

Michael P. Wellman is Professor of Computer Science & Engineering at the University of Michigan. He received a PhD from the Massachusetts Institute of Technology in 1988 for his work in qualitative probabilistic reasoning and decision-theoretic planning. From 1988 to 1992, Wellman conducted research in these areas at the USAF’s Wright Laboratory. For the past 25 years, his research has focused on computational market mechanisms and game-theoretic reasoning methods, with applications in electronic commerce, finance, and cyber-security. As Chief Market Technologist for TradingDynamics, Inc., he designed configurable auction technology for dynamic business-to-business commerce. Wellman previously served as Chair of the ACM Special Interest Group on Electronic Commerce (SIGecom), and as Executive Editor of the Journal of Artificial Intelligence Research. He is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery.

Organizer:

AI Lab