Communications and Signal Processing Seminar

Non-linear models for matrix completion

Greg Ongie, Postdoctoral Research Fellow, University of Michigan

Linear models have been very successful for a variety of inference problems where there is missing or corrupted data. A key example is low-rank matrix completion, where the missing entries of a matrix are imputed by exploiting low-dimensional linear structure. However, many real-world datasets lack linear structure yet still exhibit low-dimensional non-linear structure that can be exploited. We introduce a systematic framework for learning low-dimensional non-linear models in problems with missing data. The approach has close connections to subspace clustering, manifold learning, and kernel methods in machine learning. As a special case, we focus on the problem of matrix completion where we assume the data belong to a low-dimensional algebraic variety, i.e., each data point is a solution to a system of polynomial equations, and study the sampling requirements of this model. We propose an efficient matrix completion algorithm that recovers synthetically generated data up to the predicted sampling complexity bounds and outperforms traditional low-rank matrix completion and subspace clustering approaches on real datasets arising in computer vision.
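To make the variety assumption concrete, the following minimal numpy sketch (an editorial illustration, not the speaker's algorithm) shows why such data admit low-rank structure after a polynomial lifting: points on the unit circle satisfy x^2 + y^2 - 1 = 0, so while the raw data matrix is full rank, the matrix of degree-2 monomial features is rank-deficient, and it is this lifted low-rank structure that a variety-based completion method can exploit. The function name `monomial_lift` is invented for this example.

```python
import numpy as np

# Sample 200 points on the unit circle, an algebraic variety defined by
# the single polynomial equation x1^2 + x2^2 - 1 = 0.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=200)
X = np.vstack([np.cos(theta), np.sin(theta)])  # 2 x 200 data matrix

def monomial_lift(X):
    """Map each column (x1, x2) to all monomials of degree <= 2:
    [1, x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = X
    return np.vstack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

Phi = monomial_lift(X)  # 6 x 200 lifted matrix

# The raw data matrix is full rank (2), but the lifted matrix has rank 5 < 6:
# the one-dimensional null space of Phi^T encodes the defining equation
# x1^2 + x2^2 - 1 = 0 of the variety.
print("rank of raw data matrix:", np.linalg.matrix_rank(X))
print("rank of lifted matrix:  ", np.linalg.matrix_rank(Phi))
```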
I am a postdoctoral fellow in the Electrical Engineering and Computer Science Department at the University of Michigan, Ann Arbor, co-mentored by Laura Balzano and Jeff Fessler. My research interests include optimization, compressed sensing, and machine learning, with applications to image reconstruction in MRI and related inverse problems. I have published work on off-the-grid compressive signal and image recovery; fast algorithms for large-scale structured low-rank matrix completion; generalizations of total variation regularization; and non-convex approaches to large-scale sparsity-regularized inverse problems.

Sponsored by

ECE

Faculty Host

Dave Neuhoff