Dissertation Defense

Robust Learning From Multiple Information Sources

Tianpei Xie

Abstract:

Traditional learning models, which focus on medium-sized, single-source data, often fail to achieve reliable performance when data come from multiple heterogeneous sources (views). This thesis develops robust multi-view data processing methods that are insensitive to corruptions and anomalies in the data set. The central theme behind all of these methods is the use of information-theoretic measures: as parsimonious representations of uncertainty in the data, as robust optimization surrogates that allow for efficient learning, and as flexible and reliable discrepancy measures for data fusion.

More specifically, the thesis makes the following contributions: 1) We propose a maximum entropy-based discriminative learning model that performs nonparametric classification and anomaly detection simultaneously. 2) We consider a multi-view classification problem on a statistical manifold where class labels are provided as probability density functions and may not be consistent across views due to noise corruption. 3) We present a method for estimating the parameters (partial correlations) of a Gaussian graphical model that learns a sparse sub-network topology from partially observed relational data. From a multi-view learning perspective, this can be seen as a two-view learning system with asymmetric information flow between an internal view and an external view.
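
As a generic illustration of the quantity estimated in contribution 3 (not the estimator developed in the thesis), the partial correlations of a Gaussian graphical model can be read directly off its precision (inverse covariance) matrix. A minimal NumPy sketch, assuming the precision matrix has already been estimated, with a hypothetical sparse example:

    import numpy as np

    def partial_correlations(precision: np.ndarray) -> np.ndarray:
        """Convert a precision (inverse covariance) matrix into partial correlations:
        rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj), with unit diagonal by convention."""
        d = np.sqrt(np.diag(precision))
        rho = -precision / np.outer(d, d)
        np.fill_diagonal(rho, 1.0)
        return rho

    # Hypothetical sparse precision matrix: zero entries correspond to absent edges
    # in the graph, so the partial-correlation matrix inherits the same sparsity pattern.
    omega = np.array([[ 2.0, -0.8,  0.0],
                      [-0.8,  2.5, -0.6],
                      [ 0.0, -0.6,  1.5]])
    print(partial_correlations(omega))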

Sponsored by

Professor Alfred Hero III

Faculty Host

Professor Alfred Hero III