TIME: 1:30-2:20 pm, April 17, 2007

PLACE: CSE 403

TITLE: Reductions to the noisy parity problem

SPEAKER: Parikshit Gopalan
         University of Washington

ABSTRACT:
The problem of learning parity functions on the Boolean hypercube from
random examples in the presence of random classification noise (a.k.a.
the noisy parity problem) is a notorious open problem in computational
learning. The best algorithm known to date, due to Blum, Kalai and
Wasserman, runs in time 2^{O(n/log n)}. The problem is widely believed
to be hard.
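
To fix notation, here is a minimal Python sketch of the problem (toy
parameters, not from the talk), together with the trivial 2^n
brute-force learner that BKW improves upon:

    # Noisy parity: uniform x in {0,1}^n, label <s,x> mod 2, each label
    # flipped independently with probability eta (random classification noise).
    import itertools
    import random

    def noisy_parity_examples(s, eta, m, rng):
        """Draw m examples of the parity indexed by s, at noise rate eta."""
        examples = []
        for _ in range(m):
            x = [rng.randint(0, 1) for _ in range(len(s))]
            label = sum(si * xi for si, xi in zip(s, x)) % 2
            if rng.random() < eta:  # independent label flip
                label ^= 1
            examples.append((x, label))
        return examples

    def brute_force_parity(examples, n):
        """Return the candidate parity agreeing with the most examples.
        This enumerates all 2^n parities; the point of BKW is to beat
        this, running in time 2^{O(n/log n)}."""
        best, best_agree = None, -1
        for cand in itertools.product([0, 1], repeat=n):
            agree = sum((sum(c * xi for c, xi in zip(cand, x)) % 2) == y
                        for x, y in examples)
            if agree > best_agree:
                best, best_agree = cand, agree
        return best

    rng = random.Random(0)
    n, eta = 8, 0.1  # toy sizes
    s = [rng.randint(0, 1) for _ in range(n)]
    examples = noisy_parity_examples(s, eta, 200, rng)
    print(brute_force_parity(examples, n) == tuple(s))  # True w.h.p.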

We show that a number of central problems in uniform-distribution
learning on the hypercube, such as learning juntas, decision trees and
DNFs, are reducible to the noisy parity problem. For instance, the
problem of learning k-juntas reduces to learning a noisy parity of k
unknown variables. This gives evidence in favor of the commonly held
belief that noisy parity is a hard problem.
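
The reduction itself is more involved; the following toy sketch
(hypothetical junta and parameters) only illustrates the basic
correspondence it exploits: examples labeled by a junta f are
simultaneously noisy examples of every parity chi_S with which f
correlates, at noise rate Pr[f(x) != chi_S(x)]:

    # A hypothetical 3-junta on n variables: f depends only on x0, x1, x2.
    import itertools
    import random

    def f(x):
        return (x[0] & x[1]) ^ x[2]

    rng = random.Random(1)
    n, m = 8, 20000
    samples = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]

    # Empirical disagreement of f with each parity over the relevant variables.
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(3), r) for r in range(4)):
        disagree = sum(f(x) != sum(x[i] for i in S) % 2 for x in samples) / m
        print(f"S={S}: noise rate vs chi_S = {disagree:.3f}")
    # Parities whose rate is bounded away from 1/2 (here S=(2,) at ~0.25
    # and S=(0,1,2) at ~0.75) are the ones f correlates with, so a
    # noisy-parity learner on the k relevant variables can recover such an S.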

On the positive side, we use the same techniques to show that the
problem of learning parity functions with adversarial noise reduces to
learning parity with random noise. Combined with the BKW algorithm,
this gives the first non-trivial algorithm for this harder noise model.

Based on joint work with Vitaly Feldman, Subhash Khot and Ashok Ponnuswami.