ECE Seminar

Robust and Adaptive Online Decision Making

Chen-Yu Wei
Ph.D. Student
University of Southern California
WHERE:
1005 EECS Building
Abstract:
Reinforcement learning is typically built upon the assumption that the environment is uncorrupted and fixed. This assumption no longer holds when there is adversarial corruption or the transition dynamics are non-stationary. Many standard reinforcement learning algorithms are vulnerable to these factors; for example, a tiny amount of corruption can completely alter an algorithm's behavior. In the first part of the talk, I will present robust algorithms that achieve optimal performance under corruption, as well as reduction techniques that turn a standard algorithm designed for stationary environments into one that is robust to non-stationarity. These reductions are black-box, general, and optimal for a wide range of problems.
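To make the fragility concrete, here is a minimal toy sketch in Python (illustrative only, not the algorithms from the talk): a greedy learner on two Bernoulli arms, where an adversary who corrupts just the first three observed rewards of the better arm can lock the learner onto the worse arm for the rest of the run. The arm means, horizon, and corruption budget are all made-up values for illustration.

import random

random.seed(0)

T = 10_000                 # number of rounds
MEANS = [0.9, 0.5]         # Bernoulli means; arm 0 is truly better
INIT_PULLS = 3             # forced exploration pulls per arm
BUDGET = 3                 # adversary may corrupt at most 3 observations

def run(corrupt):
    counts, sums, budget, total = [0, 0], [0.0, 0.0], BUDGET, 0.0
    for _ in range(T):
        if counts[0] < INIT_PULLS:
            arm = 0
        elif counts[1] < INIT_PULLS:
            arm = 1
        else:
            # Greedy: play the arm with the higher empirical mean.
            arm = 0 if sums[0] / counts[0] > sums[1] / counts[1] else 1
        reward = 1.0 if random.random() < MEANS[arm] else 0.0
        total += reward                        # true reward actually collected
        if corrupt and budget > 0 and arm == 0:
            reward, budget = 0.0, budget - 1   # adversary zeroes the observation
        counts[arm] += 1
        sums[arm] += reward                    # learner sees only the corrupted value
    return total / T

print("avg reward, no corruption:", run(False))  # typically ~0.9
print("avg reward, 3 corruptions:", run(True))   # typically ~0.5: stuck on the worse arm

Three corrupted observations pin arm 0's empirical mean at zero, so the greedy rule never revisits it; robust algorithms are designed to avoid exactly this failure mode.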

In the second part, I will focus on decentralized multi-agent reinforcement learning. Decentralized algorithms are easy to implement, versatile across different types of games, and scalable to systems with many agents, but they often suffer from non-convergence. We will discuss algorithmic techniques that facilitate the convergence of the system without introducing extra coordination or communication overhead (a generic illustration is sketched below).
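As a standard, textbook-style illustration of the non-convergence phenomenon and of one technique that helps: in the bilinear zero-sum game f(x, y) = x*y, simultaneous gradient descent-ascent spirals away from the unique equilibrium at the origin, whereas an "optimistic" variant converges, even though each player still uses only its own local gradient, with no coordination or communication. The game, step size, and update rules below are a generic sketch, not the specific algorithms presented in the talk.

ETA = 0.1      # step size
STEPS = 2000

def gda(x, y):
    # Plain simultaneous gradient descent-ascent: x minimizes, y maximizes x*y.
    for _ in range(STEPS):
        x, y = x - ETA * y, y + ETA * x
    return x, y

def ogda(x, y):
    # Optimistic gradient descent-ascent: extrapolate with the previous gradient.
    gx_prev, gy_prev = y, x
    for _ in range(STEPS):
        gx, gy = y, x                     # current gradients of f(x, y) = x*y
        x = x - ETA * (2 * gx - gx_prev)
        y = y + ETA * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

print("GDA  final point:", gda(1.0, 1.0))   # norm blows up: the play cycles outward
print("OGDA final point:", ogda(1.0, 1.0))  # near (0, 0): converges to equilibrium

The only change between the two updates is the extrapolation term, yet it turns divergent cycling into convergence, which is the flavor of algorithmic fix the talk examines for decentralized systems.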

Bio:

Chen-Yu Wei is a Computer Science Ph.D. candidate at the University of Southern California, advised by Haipeng Luo. His research focuses on online decision making, reinforcement learning, and learning in games. He is a recipient of the Best Paper Award at the Conference on Learning Theory (COLT 2021), the Best Paper Award at the Conference on Algorithmic Learning Theory (ALT 2022), a Simons-Berkeley Research Fellowship (2022), and the Best Research Assistant Award from the Department of Computer Science at the University of Southern California (2020).

Faculty Host

Lei Ying
Professor, Electrical Engineering and Computer Science
University of Michigan