Theory Seminar

Scalable Sparse Approximation of a Sample Mean

Clayton Scott, Associate Professor, University of Michigan

We examine the problem of approximating the mean of a set
of vectors as a sparse linear combination of those vectors.
This problem is motivated by a common methodology in
machine learning where a probability distribution is represented
as the sample mean of kernel functions. A sparse approximation
of this kernel mean function is desirable in applications where
the function must be evaluated efficiently. However,
existing sparse approximation algorithms such as matching pursuit
and basis pursuit scale quadratically in the sample size, and
are therefore impractical for large samples. We introduce an incoherence-based
approximation bound, and examine bound minimization
as a sparse approximation strategy. In the context of sparsely
approximating a kernel mean function, the bound is efficiently
minimized by solving an appropriate instance of the k-center
problem, and the resulting algorithm has linear complexity in
the sample size.
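The sketch below is a minimal illustration (not the speaker's implementation) of the general idea: pick k representative samples with the classic greedy farthest-point heuristic for the k-center problem, which runs in time linear in the sample size for fixed k, and evaluate the kernel mean using only those centers. The Gaussian kernel, uniform weights, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between rows of X and rows of Y (assumed kernel).
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def greedy_k_center(X, k, seed=0):
    # Farthest-point traversal: a standard 2-approximation to k-center,
    # using O(n k) distance evaluations, i.e. linear in the sample size n.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [int(rng.integers(n))]
    dist = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))          # farthest point from current centers
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(centers)

def sparse_kernel_mean(X, k, bandwidth=1.0):
    # Full kernel mean: f(x) = (1/n) * sum_i k(x, x_i), O(n) per evaluation.
    # Sparse surrogate (illustrative): uniform average over the k selected centers.
    centers = greedy_k_center(X, k)
    def f_sparse(Xq):
        return gaussian_kernel(Xq, X[centers], bandwidth).mean(axis=1)
    return f_sparse, centers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 2))
    f_sparse, centers = sparse_kernel_mean(X, k=50)

    Xq = rng.normal(size=(10, 2))
    full = gaussian_kernel(Xq, X).mean(axis=1)   # exact kernel mean, O(n) per query
    approx = f_sparse(Xq)                        # sparse approximation, O(k) per query
    print("max abs error:", np.max(np.abs(full - approx)))
```

The talk's contribution concerns how the choice of centers follows from minimizing an incoherence-based approximation bound; the greedy selection above is only a stand-in to make the linear-time structure of the approach concrete.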

Sponsored by

EECS