Tentative Order for Material to be Covered
- Defining machine learning, applications, supervised, unsupervised,
semi-supervised, and reinforcement learning, examples (polynomials,
conjunctive concepts).
- Key perspectives: function approximation, overfitting, bias, variance.
- Random variables, probabilities, Bayes rule, MLE, MAP, conditional
independence.
- Multinomial naive Bayes, logistic regression, gradient ascent.
- Evaluating learning systems.
- Generative and discriminative models, bias-variance decomposition,
overfitting, regularization.
- Feature selection, kernel functions, practical issues.
- Probabilistic graphical models, inference (variable elimination, Gibbs
sampling), learning Bayesian networks.
- Expectation maximization.
- Hidden Markov models.
- Backpropagation, artificial neural networks, deep belief networks.
- Support vector machines.
- Semi-supervised learning, co-training.
- Learning ensembles (bagging, stacking, boosting).
- Learning theory.
- Clustering and dimensionality reduction.
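As a small taste of the material above, here is an illustrative sketch (not course-provided code) of one listed topic, logistic regression trained by gradient ascent on the log-likelihood; the toy dataset and learning-rate choice are arbitrary assumptions for demonstration only.

```python
import math

def sigmoid(z):
    # Logistic function: maps a real score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    """Fit weights (last entry is the bias) by batch gradient ascent
    on the conditional log-likelihood of the labels."""
    n = len(xs[0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        grad = [0.0] * (n + 1)
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])
            # Gradient of the log-likelihood: (y - p) * x_j per feature.
            for j in range(n):
                grad[j] += (y - p) * x[j]
            grad[-1] += (y - p)  # bias term
        w = [wi + lr * g for wi, g in zip(w, grad)]  # ascent step
    return w

def predict(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])

# Toy, linearly separable data: label 1 when the feature sum is large.
xs = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
ys = [0, 0, 1, 1]
w = train_logistic(xs, ys)
```

The update direction `(y - p) * x` is exactly the gradient of the log-likelihood for the logistic model, which is why this is gradient ascent rather than descent.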