- Time: 1:30 – 2:20pm, Tuesday, October 20, 2009
- Place: CSE 503
- Speaker: Adam Klivans (UT Austin)
## Abstract

In the early 1990s, Haussler and, independently, Kearns, Schapire, and Sellie defined the agnostic model of learning, in which a learner is presented with examples whose labels may have been arbitrarily corrupted. The goal of the learner is to output a hypothesis that is competitive with the best-fitting function from some fixed concept class. This problem captures learning in a harsh noise model and is essentially equivalent to the well-known Empirical Risk Minimization (ERM) task from machine learning. If no assumptions are made on the marginal distribution of the examples, the problem is known to be intractable even for simple concept classes.
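To make the ERM view concrete, here is a minimal sketch (not from the talk) of agnostic learning over the toy concept class of 1-D threshold functions: the labels may disagree with every threshold, and the learner simply returns the threshold with the fewest empirical mistakes.

```python
def erm_threshold(points, labels):
    """Empirical Risk Minimization over 1-D thresholds.

    Each hypothesis is sign(x - t) for some threshold t.  Labels may be
    arbitrarily corrupted; we return the t minimizing the number of
    disagreements (the empirical risk), which is competitive with the
    best threshold in the class by definition.
    """
    # Candidate thresholds: the data points themselves, plus one value
    # above the maximum (the "always -1" hypothesis is t = min(points)).
    candidates = sorted(points) + [max(points) + 1.0]
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum(1 for x, y in zip(points, labels)
                  if (1 if x >= t else -1) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Example: true threshold 0.5, with two labels adversarially flipped.
points = [i / 10 for i in range(10)]
labels = [1 if x >= 0.5 else -1 for x in points]
labels[2] = 1    # corrupt the label at 0.2
labels[7] = -1   # corrupt the label at 0.7
t, err = erm_threshold(points, labels)
```

Brute-force search over thresholds is of course only feasible in one dimension; the hardness results the abstract alludes to concern richer classes and arbitrary distributions.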

In this talk, we assume that the marginal distribution has some structure (such as being Gaussian, or uniform over {0,1}^n) and show that the corresponding agnostic learning problem becomes tractable for several well-studied concept classes, including intersections of halfspaces and other geometric sets. Our approach draws on techniques from analysis and convex geometry.
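For readers unfamiliar with the concept class, here is a small illustrative sketch (not part of the talk) of what an intersection-of-halfspaces hypothesis computes: a point is accepted only if it satisfies every linear constraint.

```python
def halfspace(w, theta):
    """Return the indicator of the halfspace {x : <w, x> >= theta}."""
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) >= theta

def intersection(halfspaces):
    """A point is in the intersection iff every halfspace accepts it."""
    return lambda x: all(h(x) for h in halfspaces)

# Example: the unit square [0,1]^2 as an intersection of 4 halfspaces.
unit_square = intersection([
    halfspace((1, 0), 0),    # x >= 0
    halfspace((-1, 0), -1),  # x <= 1
    halfspace((0, 1), 0),    # y >= 0
    halfspace((0, -1), -1),  # y <= 1
])
```

Even this simple class is hard to learn agnostically under arbitrary distributions, which is what makes distributional assumptions such as Gaussian marginals necessary.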