Communications and Signal Processing Seminar

Parallel and Distributed Algorithms for Large-scale Optimization

Gesualdo Scutari
Assistant Professor
State University of New York, Department of Electrical Engineering

Special CSP Seminar
Large-scale systems are nowadays ubiquitous. Examples and applications
include wireless communication networks; electricity-grid, sensor, and cloud networks;
and machine learning and signal processing, to name just a few.
The analysis and design of distributed/parallel algorithms in this context is a
challenging task. In this talk we will present our ongoing work in this area.

In the first part of the talk, we will introduce a novel and flexible decomposition
framework for the parallel optimization of general nonconvex (nonsmooth) utility
functions, subject to private and/or coupling (nonconvex) constraints.
Our scheme is very general and flexible, and enjoys many desirable features, such as:
i) it is parallel, with a degree of parallelism that can be chosen by the user, ranging from Jacobi schemes (all variables are updated in parallel) to sequential ones (only one variable is updated at each iteration), covering virtually all the possibilities in between;
ii) it permits greedy and/or random updates of only some (block) variables at each iteration (a feature that turns out to be very effective numerically);
iii) it easily leads to distributed implementations;
iv) unlike current (accelerated) gradient-like schemes, it requires no knowledge of the utility function parameters (e.g., the Lipschitz constant);
v) it can tackle nonconvex and nonseparable utility functions;
vi) it easily allows for inexact solution of the subproblems (in some large-scale problems the cost of computing the exact solution of all the subproblems can be prohibitive);
vii) it unifies and generalizes several existing successive convex approximation-based algorithms and their convergence properties, including (proximal) gradient and Newton-type methods, block coordinate (parallel) descent schemes, and difference-of-convex-functions methods; and
viii) it appears to be numerically efficient.
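To give a rough feel for the "parallelism knob" described in i) and ii), here is a minimal sketch (not the speaker's actual algorithm) of block-coordinate descent on a least-squares objective. The function name `block_descent` and the parameter `blocks_per_iter` are illustrative assumptions: setting `blocks_per_iter` to the number of blocks gives a Jacobi-style fully parallel update, setting it to 1 gives a sequential scheme, and blocks are chosen greedily by gradient magnitude.

```python
import numpy as np

def block_descent(A, b, n_blocks=4, blocks_per_iter=None, iters=500):
    """Minimize f(x) = 0.5*||Ax - b||^2 by block-coordinate descent.

    blocks_per_iter controls the degree of parallelism (illustrative):
      1         -> sequential (one block per iteration)
      n_blocks  -> fully parallel (Jacobi-style, all blocks at once)
    Blocks are selected greedily by gradient-norm (one possible rule).
    """
    n = A.shape[1]
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    k = blocks_per_iter or n_blocks
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # simple safe stepsize
    for _ in range(iters):
        g = A.T @ (A @ x - b)               # gradient at the current point
        # greedy selection: the k blocks with the largest gradient norm
        scores = [np.linalg.norm(g[I]) for I in blocks]
        chosen = np.argsort(scores)[-k:]
        for j in chosen:  # independent updates -> parallelizable
            I = blocks[j]
            x[I] -= step * g[I]
    return x
```

Because the gradient is computed once per outer iteration, the chosen block updates are mutually independent and could be dispatched to separate workers.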
The second part of the talk is devoted to applying the proposed framework to a variety of problems in different areas, including (time permitting): i) optimization problems in machine learning (e.g., LASSO and logistic regression); ii) MRI image reconstruction; and iii) the design of various wireless multi-user interfering systems.
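For the LASSO application, a classical successive-convex-approximation instance is the proximal-gradient (ISTA-style) iteration, in which each convex subproblem is solved exactly in closed form by soft-thresholding. This textbook sketch is only illustrative; unlike the framework described above (feature iv), it does use the Lipschitz constant of the gradient.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form solution of min_u 0.5*(u - v)^2 + t*|u|, elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, iters=2000):
    """Proximal-gradient solver for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration linearizes the smooth term and solves the resulting
    convex subproblem exactly via soft-thresholding; the coordinate-wise
    updates are independent and hence parallelizable.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        x = soft_threshold(x - g / L, lam / L)
    return x
```

At a solution, the subgradient optimality condition implies |(A^T(Ax - b))_i| <= lam for every coordinate, which gives a simple check of convergence.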

Gesualdo Scutari (S'05–M'06–SM'11) received the Electrical Engineering and Ph.D. degrees (both with honors) from the University of Rome "La Sapienza", Rome, Italy, in 2001 and 2005, respectively. He is an Assistant Professor with the Department of Electrical Engineering at State University of New York (SUNY) at Buffalo, Buffalo, NY. He had previously held several research appointments, namely, at University of California at Berkeley, Berkeley, CA; Hong Kong University of Science and Technology, Hong Kong; University of Rome "La Sapienza", Rome, Italy; and University of Illinois at Urbana-Champaign, Urbana, IL.
His primary research interests focus on theoretical and algorithmic issues related to continuous (large-scale) optimization, equilibrium programming, and their applications to signal processing, communications and networking; medical imaging; machine learning; and distributed decisions.
Dr. Scutari is an Associate Editor of IEEE Transactions on Signal Processing, and he served as an Associate Editor of IEEE Signal Processing Letters. He serves on the IEEE Signal Processing Society Technical Committee on Signal Processing for Communications (SPCOM). Dr. Scutari received the Best Student Paper Award at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2006, the 2013 NSF Faculty Early Career Development (CAREER) Award, and the 2013 UB Young Investigator Award.

Sponsored by

University of Michigan, Department of Electrical Engineering & Computer Science