Distinguished Lecture

Provable Bounds for Machine Learning: Getting Around Intractability

Sanjeev Arora
Charles C. Fitzmorris Professor of Computer Science
Princeton
WHERE:
1670 Beyster Building

Abstract – Many tasks in machine learning (especially unsupervised learning) are provably intractable: NP-complete or worse. Researchers have developed heuristic algorithms to solve these tasks in practice, but in most cases these heuristics come with no provable guarantees on their running time or on the quality of the solutions they return. Can we change this state of affairs?

This talk will suggest that the answer is yes, and describe some of our recent work.

(a) A new algorithm for learning topic models that provably works under some reasonable assumptions and, in practice, is up to 50 times faster than existing software such as Mallet (ICML 2013).
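
To give a flavor of the kind of structural assumption such guarantees rest on, here is a minimal NumPy sketch of an anchor-word selection step of the sort used in separability-based topic-model algorithms: greedily pick the k rows of a row-normalized word co-occurrence matrix that lie furthest outside the span of the rows picked so far. This is an illustrative sketch under assumed inputs, not the authors' implementation; all names are ours.

    import numpy as np

    def find_anchors(Q, k):
        # Q: word-word co-occurrence counts (V x V). Row-normalize so row i
        # is the distribution over words co-occurring with word i.
        Q = Q / Q.sum(axis=1, keepdims=True)
        # Start from the row furthest from the origin, then repeatedly take
        # the row with the largest component outside the current span
        # (Gram-Schmidt-style furthest-point selection).
        anchors = [int(np.argmax(np.linalg.norm(Q, axis=1)))]
        b = Q[anchors[0]] / np.linalg.norm(Q[anchors[0]])
        residual = Q - np.outer(Q @ b, b)
        for _ in range(1, k):
            j = int(np.argmax(np.linalg.norm(residual, axis=1)))
            anchors.append(j)
            b = residual[j] / np.linalg.norm(residual[j])
            residual = residual - np.outer(residual @ b, b)
        return anchors  # indices of k candidate anchor words

    rng = np.random.default_rng(0)
    Q = rng.random((50, 50))
    print(find_anchors(Q + Q.T, 5))  # toy symmetric co-occurrence matrix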

(b) A new algorithm with provable guarantees that learns a class of deep nets. We rely on the generative view of deep nets implicit in the works of Hinton and others. Our generative model is an n-node multilayer neural net in which each node has degree at most n^γ for some γ < 1 and each edge carries a random weight in [-1, 1]. Our algorithm learns almost all networks in this class in polynomial time. We also show that each layer of our randomly generated neural net is a denoising autoencoder (a central object in deep learning).
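
As a rough illustration of the model class described above (not the authors' code), the sketch below samples such a random sparse network and generates an observation from a sparse top-layer activation. The hard-threshold nonlinearity and all parameter values are assumptions made for the sake of the example.

    import numpy as np

    def sample_network(n, gamma, depth, rng):
        # Each layer maps n nodes to n nodes; every node connects to at most
        # n**gamma random nodes in the layer below, with i.i.d. edge weights
        # drawn uniformly from [-1, 1], as in the model described above.
        d = max(1, int(n ** gamma))
        layers = []
        for _ in range(depth):
            W = np.zeros((n, n))
            for i in range(n):
                nbrs = rng.choice(n, size=d, replace=False)
                W[i, nbrs] = rng.uniform(-1.0, 1.0, size=d)
            layers.append(W)
        return layers

    def generate(layers, h, threshold=0.0):
        # Top-down generation with a hard-threshold (0/1) nonlinearity,
        # assumed here purely for illustration.
        x = h.astype(float)
        for W in layers:
            x = (W @ x > threshold).astype(float)
        return x

    rng = np.random.default_rng(1)
    net = sample_network(n=200, gamma=0.5, depth=3, rng=rng)
    h = (rng.random(200) < 0.05).astype(float)  # sparse top-layer activation
    x = generate(net, h)                        # observed bottom-layer sample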

Biography – Sanjeev Arora is the Charles C. Fitzmorris Professor of Computer Science at Princeton University. His research spans several areas of theoretical computer science. He has received the ACM-EATCS Gödel Prize (in 2001 and 2010), the Packard Fellowship (1997), the ACM Infosys Foundation Award in the Computing Sciences (2012), the Fulkerson Prize (2012), and the Simons Investigator Award (2012). He served as the founding director of the Center for Computational Intractability at Princeton.

Sponsored by

CSE