Communications and Signal Processing Seminar

Generalization and Inductive Bias in Neural Networks

Cengiz Pehlevan
Assistant Professor of Applied Mathematics
Harvard University

ABSTRACT: I will present a theory that describes generalization and inductive bias in neural networks using kernel methods and statistical mechanics. This theory accurately predicts the generalization performance of neural networks and generic kernels on real data, and elucidates an inductive bias to explain data with “simple functions”, which are identified by solving a related kernel eigenfunction problem on the data distribution. This notion of simplicity allows us to characterize whether a network is compatible with a learning task, facilitating good generalization performance from a small number of training examples. I will present applications of this theory to artificial and biological neural systems, and real datasets. I will discuss extensions to out-of-distribution generalization and data-dependent kernel descriptions of neural networks.

BIO: Cengiz (pronounced “Jen-ghiz”) Pehlevan is an assistant professor of applied mathematics at Harvard SEAS. His research interests are in theoretical neuroscience and the theory of neural computation. Cengiz comes to Harvard SEAS from the Flatiron Institute’s Center for Computational Biology (CCB), where he was a research scientist in the neuroscience group. Before CCB, Cengiz was a postdoctoral associate at Janelia Research Campus, and before that a Swartz Fellow at Harvard. Cengiz received a doctorate in physics from Brown University and undergraduate degrees in physics and electrical engineering from Boğaziçi University, Turkey.

Join Zoom Meeting

Meeting ID: 922 1113 6360

Passcode: XXXXXX (Will be sent via e-mail to attendees)

Zoom Passcode information is also available upon request to Katherine Godwin ([email protected]).

See full seminar by Professor Pehlevan

Faculty Host

Necmiye Ozay
Associate Professor of EECS
University of Michigan