Communications and Signal Processing Seminar
Reinforcement Learning for Mean Field Games with Strategic Complementarities
This event is free and open to the public.
ABSTRACT: Mean Field Games (MFG) are those in which each agent assumes that the states of all others are drawn in an i.i.d. manner from a common belief distribution, and optimizes accordingly. The equilibrium concept here is the Mean Field Equilibrium (MFE), and algorithms for learning MFE in dynamic MFGs are unknown in general due to the non-stationary evolution of the belief distribution. Our focus is on an important subclass that possesses a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement of the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We introduce both a model-free and a model-based approach to learning T-MFE under unknown transition probabilities, using the trembling-hand idea to enable exploration. We analyze the sample complexity of both algorithms. We also develop a scheme for concurrently sampling the system with a large number of agents that negates the need for a simulator, even though the model is non-stationary. Finally, we empirically evaluate the performance of the proposed algorithms via examples motivated by real-world applications. This is joint work with Kiyeob Lee, Desik Rengarajan, and Dileep Kalathil.
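To give a flavor of the kind of computation involved, the sketch below shows a generic fixed-point iteration for a mean-field equilibrium with softmax (trembling-hand-style) randomization: at a fixed population distribution `mu`, each agent solves a soft best response, and the induced policy then updates `mu`. All names, the reward structure, and the use of a softmax policy are illustrative assumptions for this sketch, not the algorithm presented in the talk.

```python
import numpy as np

def trembling_hand_mfe(P, reward_fn, beta=5.0, gamma=0.9,
                       n_outer=500, tol=1e-10):
    """Illustrative fixed-point iteration for a randomized MFE.

    P         : (A, S, S) array, P[a, s, s'] = transition probability.
    reward_fn : mu -> (S, A) reward array; the dependence on the
                population distribution mu is the mean-field coupling.
    beta      : inverse temperature; the softmax policy supplies the
                trembling-hand-style randomization (every action keeps
                positive probability, which drives exploration).
    """
    n_actions, n_states, _ = P.shape
    mu = np.full(n_states, 1.0 / n_states)   # start from uniform belief
    for _ in range(n_outer):
        r = reward_fn(mu)
        # soft (entropy-regularized) value iteration at fixed mu
        Q = np.zeros((n_states, n_actions))
        for _ in range(1000):
            V = np.log(np.exp(beta * Q).sum(axis=1)) / beta
            Q_new = r + gamma * np.einsum('asp,p->sa', P, V)
            if np.max(np.abs(Q_new - Q)) < tol:
                break
            Q = Q_new
        # softmax best response: strictly randomized policy
        logits = beta * (Q - Q.max(axis=1, keepdims=True))
        pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        # population update: one step of the Markov chain induced by pi
        P_pi = np.einsum('sa,asp->sp', pi, P)
        mu_new = mu @ P_pi
        if np.max(np.abs(mu_new - mu)) < tol:
            mu = mu_new
            break
        mu = mu_new
    return mu, pi
```

A toy instance with strategic-complementarity flavor would make the reward for an action increase with the mass of the population already in the corresponding state, e.g. `reward_fn = lambda mu: np.eye(2) * (1.0 + mu)[:, None]`.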
BIO: Srinivas Shakkottai received a PhD in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2007. He was a postdoctoral scholar in Management Science and Engineering at Stanford University in 2007. He joined Texas A&M University in 2008, where he is currently a professor of Computer Engineering in the Department of Electrical and Computer Engineering.
His research interests include caching and content distribution, wireless networks, multi-agent learning and game theory, cyber-physical systems, and data collection and analytics. He serves as an Associate Editor of IEEE/ACM Transactions on Networking.
Srinivas is the recipient of the Defense Threat Reduction Agency Young Investigator Award (2009) and the NSF CAREER Award (2012), as well as research awards from Cisco (2008) and Google (2010). He also received an Outstanding Professor Award (2013), the Select Young Faculty Fellowship (2014), and the Engineering Genesis Award (2019) at Texas A&M University.