AI Seminar

Convex Methods for Latent Representation Learning

Dale Schuurmans
Professor of Computer Science
University of Alberta

Automated feature discovery is a fundamental problem in data analysis. Although classical feature learning methods fail to guarantee optimal solutions in general, convex reformulations have been developed for a number of such problems. Most of these reformulations are based on one of two key strategies: relaxing pairwise representations, or exploiting induced matrix norms. Despite their use of relaxation, convex reformulations can yield significant improvements in solution quality by eliminating local minima. I will discuss a few recent convex reformulations for representative learning problems, including robust regression, hidden-layer network training, and multi-view learning, demonstrating how latent representation discovery can co-occur with parameter optimization while admitting globally optimal relaxed solutions. In some cases, meaningful rounding guarantees can also be achieved.
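To make the "induced matrix norm" strategy concrete, the sketch below shows the standard nuclear-norm relaxation of low-rank latent representation learning, where the nonconvex rank penalty is replaced by the nuclear norm, leaving a convex problem with no local minima. This is an illustrative example, not code from the talk; the data, the regularization weight lam, and all variable names are assumptions, and the cvxpy library is used for the convex solve.

    # Illustrative sketch: nuclear-norm relaxation of low-rank
    # latent representation learning (one "induced matrix norm"
    # strategy). Not the speaker's code; lam and the synthetic
    # data are assumptions for illustration.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    # Noisy observations of a rank-5 matrix
    X = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))
    X += 0.1 * rng.standard_normal(X.shape)

    Z = cp.Variable(X.shape)
    lam = 1.0  # regularization weight (assumed)

    # min_Z ||X - Z||_F^2 + lam * ||Z||_*
    # Convex in Z, so any local solution is globally optimal.
    objective = cp.Minimize(cp.sum_squares(X - Z) + lam * cp.normNuc(Z))
    cp.Problem(objective).solve()

    print("recovered rank:", np.linalg.matrix_rank(Z.value, tol=1e-3))

Here the nuclear norm acts as a convex surrogate for rank, which is the same relaxation idea the abstract applies to richer problems such as hidden-layer network training and multi-view learning.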

Sponsored by

Toyota

Faculty Host

Michael Wellman