| Date | Content | Reading | Slides |
|---|---|---|---|
| Introduction to ML: Maximum likelihood, linear methods, overfitting, regularization, optimization | |||
| Weds 1/3 | Welcome/Overview; Maximum likelihood estimation | Probability review: Murphy 2.1-2.4, 2.6, 2.8, 3.1-3.2; Statistics review, maximum likelihood: Murphy 4.2 | slides, annotated slides |
| Mon 1/8 | Linear regression | Linear algebra review: Murphy 7.1-7.3; Matrix calculus review: Murphy 7.8; Maximum likelihood regression: Murphy 4.2; Linear regression: Murphy 11-11.2 | slides, annotated slides |
| Weds 1/10 | Linear regression with basis functions; Cross-validation | Maximum likelihood regression: Murphy 4.2; Linear regression: Murphy 11-11.2 | slides, annotated slides; Linear regression demo: .ipynb, .html, diabetes.txt; Polynomial regression demo: .ipynb, .html |
| Weds 1/17 | Bias-variance trade-off | Bias-variance trade-off: Murphy 4.7.6 | slides, annotated slides; Bias-variance trade-off demo: .ipynb (included in slides) |
| Mon 1/22 | Ridge regularization | Ridge regression: Murphy 11.3-11.4 | slides, annotated slides |
| Weds 1/24 | Lasso; Gradient descent | Lasso regression: Murphy 11.4; Gradient descent: Murphy 8-8.2.1 | slides, annotated slides |
| Video | Gradient descent; Stochastic gradient descent | Gradient descent: Murphy 8-8.2.1; Stochastic gradient descent: Murphy 8.4-8.4.4 | Lecture video; slides, annotated slides; Gradient descent demo: .ipynb, .html |
| Mon 1/29 | Prediction pitfalls; Convexity | | slides, annotated slides |
| Weds 1/31 | Classification; Logistic regression | Logistic regression: Murphy 10-10.2.4, 10.3-10.3.3 | slides, annotated slides |
| Non-linear methods | |||
| Mon 2/5 | Kernel methods | Kernels: Bishop 6-6.2; Murphy 17, 17.1, 17.3.4, 17.3.9 | Logistic regression demo: .ipynb, .html; slides, annotated slides |
| Weds 2/7 | Midterm | | |
| Mon 2/12 | Bootstrap; Neural network basics | Bootstrap: Efron and Hastie 10.2, 11-11.2; Neural networks: Murphy 13-13.4.3 | slides, annotated slides; TensorFlow playground |
| Weds 2/14 | Backpropagation in neural networks; Non-parametric methods; Nearest neighbors | Nearest neighbors: Murphy 16.1 | slides, annotated slides |
| Weds 2/21 | More non-parametric methods; Tree-based methods | Trees, random forests: Murphy 18; Gradient-boosted trees: Murphy 18 | slides, annotated slides |
| Unsupervised Learning | |||
| Mon 2/26 | PCA; SVD | PCA, singular value decomposition: Murphy 20.1; Kernel PCA: Murphy 20.4.6 | slides, annotated slides |
| Weds 2/28 | More matrix decompositions; Autoencoders; K-means; Gaussian mixture models (GMMs) | Autoencoders: Murphy 20.3, 22.1; K-means, GMM: Murphy 21.3-21.5 | slides, annotated slides |
| Domain-specific models | |||
| Mon 3/4 | Feature extraction; Domain-specific models; CNNs | Self-supervised and transfer learning: Murphy 19; CNNs: ZLLS 7-8 | slides, annotated slides |
| Weds 3/6 | Optional (canceled): Self-supervised learning; Sequence models and text processing | Self-supervised and transfer learning: Murphy 19; CNNs: ZLLS 7-8; Sequence models: ZLLS 9-11 | slides |
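The schedule above references a gradient-descent demo notebook. As a minimal, self-contained sketch of that idea (not the course's own demo; the data and learning rate here are illustrative assumptions), fitting a linear regression by gradient descent on the mean squared error might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from y = 2x + 1 plus Gaussian noise (illustrative, not course data)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(100)

# Design matrix with a bias column, so w = [slope, intercept]
A = np.hstack([X, np.ones((X.shape[0], 1))])

# Gradient descent on the mean squared error (1/n) * ||A w - y||^2
w = np.zeros(2)
lr = 0.5  # step size chosen small enough for this problem's curvature
for _ in range(500):
    grad = (2.0 / len(y)) * A.T @ (A @ w - y)
    w -= lr * grad

print(w)  # close to the true parameters [2.0, 1.0]
```

The same loop with a random minibatch in place of the full design matrix gives stochastic gradient descent, the topic of the video lecture.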