Efficient Methods for Optimizing Decentralized Multi-Agent Systems
This event is free and open to the public.
Applications of decentralized multi-agent systems are ubiquitous today, from autonomous driving and gaming AI to sensor networks. Depending on the application setting, the theoretical framework commonly used is distributed optimization, decentralized control, or multi-agent reinforcement learning (MARL). However, existing methods may not scale, may rely on strong but limiting assumptions, or may depend on open problems that remain unresolved in practice. In this thesis, we aim to develop more general and efficient computational algorithms for optimizing such systems that better fit the applications, where efficiency is measured in terms of memory, communication overhead, convergence speed, and regret.

On the distributed optimization side, we propose a localization scheme that exploits partial dependency structure in the objective functions to reduce the memory and communication required, along with a convergence rate analysis of the proposed scheme. We also study an approximation framework that allows us to relax the Lipschitz-gradient assumption commonly made in the literature, and we investigate nonlinear consensus schemes from a stochastic approximation point of view. To overcome non-stationarity and the curse of dimensionality in MARL, we first devise algorithms that achieve near-optimal regret for a class of problems with a hierarchical information structure; for more general settings, we extend the notion of an approximate information state to certain multi-agent cases, which can be used to design scalable, low-regret learning algorithms without knowledge of the models.
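To give a concrete sense of the distributed optimization setting the abstract describes, the following is a minimal sketch of decentralized gradient descent with consensus averaging. It is illustrative only, not the thesis's proposed localization scheme: the agents, their quadratic objectives, and the ring network are all assumptions made up for this example.

```python
# Illustrative sketch: decentralized gradient descent with consensus averaging.
# Each agent i holds a local quadratic objective f_i(x) = 0.5 * (x - c_i)^2;
# the minimizer of sum_i f_i is the mean of the c_i. Agents alternate a local
# gradient step with averaging over a ring communication network.
import numpy as np

rng = np.random.default_rng(0)
n_agents = 5
targets = np.arange(n_agents, dtype=float)   # c_i = 0,1,2,3,4; optimum = 2.0

# Doubly stochastic mixing matrix for a ring: each agent averages with
# its two neighbors (weights 1/2 self, 1/4 each neighbor).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = rng.normal(size=n_agents)   # each agent's local estimate of the optimum
step = 0.1
for _ in range(500):
    grad = x - targets          # gradient of each agent's local objective
    x = W @ (x - step * grad)   # local gradient step, then consensus averaging

# With a constant step size the estimates settle in an O(step) neighborhood
# of the global optimum 2.0 rather than converging to it exactly.
print(np.round(x, 3))
```

The constant-step bias visible here is one of the standard trade-offs in this literature; a diminishing step size recovers exact convergence at the cost of speed, which is one reason convergence-rate analyses of such schemes matter.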
Chair: Professor Vijay Subramanian