Project Option 3: Supervised Learning: Comparing Trainable Classifiers.
CSE 415: Introduction to Artificial Intelligence
The University of Washington, Seattle, Winter 2018

Classifier Inference: Using Python and NumPy/SciPy, implement two different kinds of classifiers. Then compare their performance on multiple criteria (accuracy, training time, etc.). Finally, implement either bagging or boosting to combine them, and demonstrate quantitatively what advantage the ensemble provides. The choice of datasets is up to you. Ideally there will be at least one standard dataset commonly used in classifier design, plus at least one original dataset that you either create or find online and that has not already been used in this kind of experiment. Suggested standard datasets are: MNIST (handwritten numerals), CIFAR-10 (small color images), and the Fisher Iris dataset (measurements for 150 flowers).
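As a concrete starting point, the comparison could be sketched in pure NumPy on synthetic two-class data. The two classifiers here (nearest-centroid and k-nearest-neighbors) are illustrative placeholders, not required choices, and the synthetic data stands in for whichever dataset you pick:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data as a stand-in for a real dataset (e.g., Iris).
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)),
               rng.normal(+1.0, 1.0, (n, 2))])
y = np.repeat([0, 1], n)
idx = rng.permutation(2 * n)          # shuffle, then split in half
X, y = X[idx], y[idx]
Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]

class NearestCentroid:
    """Classifier 1: label each point by its closest class mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

class KNN:
    """Classifier 2: k-nearest-neighbors with majority vote (binary labels)."""
    def __init__(self, k=5):
        self.k = k
    def fit(self, X, y):
        self.X_, self.y_ = X, y
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.X_[None, :, :]) ** 2).sum(axis=2)
        votes = self.y_[d.argsort(axis=1)[:, :self.k]]
        return (votes.mean(axis=1) > 0.5).astype(int)

# Compare the two classifiers on accuracy and training time.
results = {}
for name, clf in [("centroid", NearestCentroid()), ("knn", KNN(k=5))]:
    t0 = time.perf_counter()
    clf.fit(Xtr, ytr)
    results[name] = {"train_time": time.perf_counter() - t0,
                     "accuracy": float((clf.predict(Xte) == yte).mean())}
    print(name, results[name])
```

For a real project you would repeat this over several train/test splits and report averages, but the same loop structure carries over directly.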

Emilia writes the following about support resources for this option:

"The scikit-learn.org site looks like a good resource:
- Random Forests: http://scikit-learn.org/stable/modules/ensemble.html
- Neural Networks: http://scikit-learn.org/stable/modules/neural_networks_supervised.html
- K Nearest Neighbors: http://scikit-learn.org/stable/modules/neighbors.html
Given the popularity of this field, there are also a lot of different tutorials online, so the students should have no problem coming up with more suggestions of good resources." -- Emilia
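Building on those scikit-learn pages, the bagging half of the project can be prototyped in a few lines. This sketch assumes scikit-learn is installed; the Iris dataset, k-NN base learner, and parameter values are illustrative choices, not part of the assignment spec:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                      random_state=0, stratify=y)

# A single base classifier, for comparison.
base = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
base_acc = base.score(Xte, yte)

# Bagging: 25 copies of the same learner, each trained on a bootstrap
# sample of the training set; predictions are combined by voting.
bag = BaggingClassifier(KNeighborsClassifier(n_neighbors=3),
                        n_estimators=25, random_state=0).fit(Xtr, ytr)
bag_acc = bag.score(Xte, yte)

print(f"base k-NN accuracy:   {base_acc:.3f}")
print(f"bagged k-NN accuracy: {bag_acc:.3f}")
```

On a small, well-separated dataset like Iris the gain from bagging may be modest; the quantitative comparison the project asks for is more interesting on noisier data such as MNIST or CIFAR-10 subsets.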