Randomized Methods for Data Discovery and Decomposition
In recent years, the topics of "artificial intelligence for scientific discovery" and "scientific machine learning" have gained wider attention in the computational and scientific communities. One of the main goals is to develop and transfer machine learning algorithms to scientific and engineering applications in modeling, design, and control. The long-term goals are to provide automated approaches that support and accelerate data-driven discovery, high-consequence decision making, and prototyping.
In this talk, I will discuss sparsity-promoting random feature methods and their applications to scientific modeling and engineering design problems. These methods address some of the challenges of approximating high-dimensional systems when one has limited data corrupted by noise and outliers. In particular, I will show that the algorithms perform well on benchmark tests across a wide range of applications. In addition, our methods come with theoretical guarantees of success in the form of generalization and complexity bounds. The main applications of interest include learning governing equations from time-series data, high-dimensional surrogate modeling, and intrinsic signal decomposition, with examples in time-series forecasting, AI for scientific discovery, and aerospace design.
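To give a flavor of the approach described above, the following is a minimal, illustrative sketch of a sparsity-promoting random feature fit: random Fourier features are drawn once, and the coefficients are obtained by l1-regularized least squares solved with ISTA (soft-thresholded gradient steps). The target function, feature count, regularization weight, and step size are all hypothetical choices for this toy example, not the specific algorithms or parameters of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem: noisy samples of a smooth target.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(2 * x).ravel() + 0.05 * rng.standard_normal(200)

# Random Fourier features phi_j(x) = cos(w_j x + b_j); weights are
# sampled once and never trained.
n_features = 300
W = rng.standard_normal((1, n_features))
b = rng.uniform(0.0, 2.0 * np.pi, n_features)
Phi = np.cos(x @ W + b)

# Sparsity-promoting fit: minimize ||Phi c - y||^2 + lam * ||c||_1
# via ISTA (gradient step followed by soft-thresholding).
lam = 1.0
step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant
c = np.zeros(n_features)
for _ in range(500):
    grad = Phi.T @ (Phi @ c - y)
    c = c - step * grad
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)

n_active = int(np.count_nonzero(c))
rmse = float(np.sqrt(np.mean((Phi @ c - y) ** 2)))
print(f"active features: {n_active}/{n_features}, train RMSE: {rmse:.3f}")
```

The l1 penalty drives most of the 300 random-feature coefficients exactly to zero, so the fitted model uses only a small active subset, which is the sense in which the method is "sparsity-promoting."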
Dr. Hayden Schaeffer is an Associate Professor in the Department of Mathematical Sciences and is affiliated with the Center for Nonlinear Analysis at Carnegie Mellon University. He holds a Ph.D. and a master's degree in Computational and Applied Mathematics from UCLA and a B.A. from Cornell. He has received an NSF CAREER award and an AFOSR Young Investigator Award. Previously, he was an NSF Mathematical Sciences Postdoctoral Research Fellow, a von Karman Instructor at Caltech, a UC President's Postdoctoral Fellow at UC Irvine, and a Collegium of University Teaching Fellow at UCLA.