Assignment 6: Perceptron Learning and Classification

CSE 415: Introduction to Artificial Intelligence
The University of Washington, Seattle, Spring 2021
Due: Monday, May 24 via Gradescope at 11:59 PM.
Early Bird Bonus Deadline: Wednesday, May 19 at 11:59 PM.

[Image: Iris setosa]

Introduction

The perceptron is a basic computational model for a neuron or simple neural network that served as a starting point for the development of techniques such as deep learning. By studying the perceptron, we can gain familiarity with many of the basic concepts of neural networks and ML as well as better understand how such techniques complement those of state-space approaches in AI.


Perceptron Model

This assignment is about how perceptrons are trained, and it involves training and testing two kinds of perceptrons to perform classification on multi-dimensional data.

We will primarily use a dataset derived from the classic Fisher Iris dataset, though other datasets will be used as well.
In Part A, we'll consider a 2-class classification problem, and you'll implement a very basic binary classifier and the standard perceptron learning algorithm.
In Part B, we'll consider a multi-class classification problem and you'll implement both the classifier and the learning algorithm to handle that problem.
In both parts you'll be expected to submit a report whose questions you can answer with the aid of the code you implement.

Part A: Binary Classification

Begin by downloading the A6 starter files. In Part A you'll be using only the following subset of these files:
A6-Part-A-Report.docx
plot_test.py
binary_perceptron.py
plot_bp.py
remapper.py
run_2_class_2_feature_iris_data.py
iris-lengths-only-2-class-training.csv
iris-lengths-only-2-class-testing.csv
ring-data.csv
Next, run plot_test.py. If it fails because the matplotlib module is missing, install matplotlib. You can typically install it by typing the following on a command line:
pip3 install matplotlib
Depending on how your Python is set up, that might not work; in that case, try substituting "pip" for "pip3". However, on some systems, "pip" will only install it for Python 2.7. If you have trouble installing matplotlib, the staff will try to facilitate your setup through posts on Ed or in office hours.

The file A6-Part-A-Report.docx is a template for your Part A report. It contains ten questions that you should answer.

Binary Perceptron

The binary_perceptron.py file is where you will implement your main binary classifier and your training algorithm. The other files will import and run your binary_perceptron code. Implement the indicated methods in binary_perceptron.py and use them to answer some of the questions in the report file.
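For reference, the standard perceptron learning rule can be sketched as below. This is a minimal illustration only: the function names, the weight representation, and the handling of the bias term (a constant 1 appended to each input) are assumptions of this sketch, and the interface actually required is the one specified in the comments of binary_perceptron.py.

```python
# Minimal sketch of a binary perceptron and its training rule.
# Names and representation are illustrative, not the starter-file API.

def classify(weights, x_vector):
    """Return +1 or -1 from the sign of the weighted sum; the last
    weight acts as the bias, paired with a constant 1 input."""
    s = sum(w * x for w, x in zip(weights, x_vector + [1]))
    return 1 if s > 0 else -1

def train_for_an_epoch(weights, labeled_examples, alpha=1.0):
    """One pass over the data; on each misclassified example, nudge the
    weights toward the correct side of the decision boundary."""
    changed = 0
    for x_vector, label in labeled_examples:
        if classify(weights, x_vector) != label:
            for i, xi in enumerate(x_vector + [1]):
                weights[i] += alpha * label * xi
            changed += 1
    return changed  # 0 means every example was classified correctly
```

On linearly separable data, repeating `train_for_an_epoch` until it returns 0 yields a separating boundary; on non-separable data (such as the unremapped ring data) it never converges.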

Plotting Data

Apart from the file binary_perceptron.py, there is also a file plot_bp.py that implements a class PlotBinaryPerceptron. This class can be used to plot the binary perceptron given any dataset with 2 features and labels +1 (positive) and -1 (negative).
The PlotBinaryPerceptron class already has a placeholder dataset, however, in order to provide the dataset of your choice to train the perceptron, you need to extend the class.
The file run_2_class_2_feature_iris_data.py provides an example of how to do so. It makes use of the Fisher Iris dataset with 2 features and 2 classes, and contains both a training and a test set.
Similarly, you'll need to run a binary perceptron on another dataset, ring-data.csv. (Note: this dataset doesn't have separate training and test sets, so you won't need to test for errors.)

To do so, create a file called run_ring_data.py and implement a class PlotRingBP that can plot a binary perceptron using ring-data.csv. It must also have an instance variable IS_REMAPPED. When the variable is False, the class must plot the dataset as it is; when it is True, the class plots a remapped version of the dataset (using the function provided in remapper.py).
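The point of the remapping can be illustrated with a tiny sketch. The actual remapping function is the one provided in remapper.py; the squared-feature map below is only an assumed example of the idea.

```python
# Illustrative only: the real remapping lives in the starter file
# remapper.py. Ring-shaped data is not linearly separable in the raw
# (x1, x2) space, but squaring the features turns a circular boundary
# x1^2 + x2^2 = r^2 into the straight line u + v = r^2 in (u, v) space.

def square_features(x1, x2):  # hypothetical remap; see remapper.py
    return (x1 * x1, x2 * x2)

inside = square_features(1.0, 1.0)    # a point inside a radius-2 circle
outside = square_features(2.0, 2.0)   # a point outside that circle
# After remapping, the line u + v = 4 separates the two points.
assert sum(inside) < 4.0 < sum(outside)
```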

Turn-In Instructions

Turn-ins for Part A at Gradescope are:

A6-Part-A-Report.pdf (the report file as a .pdf)
binary_perceptron.py
run_ring_data.py
Submit them by the early-bird deadline to Gradescope to get the early bird bonus. Note that your files should not import any additional modules apart from the standard Python library.

Part B: Multi-Class Classification

In this part, you'll use the remaining files from the starter files collection:
A6-Part-B-Report.docx
ternary_perceptron.py
run_3_class_4_feature_iris_data.py
plot_tp.py
iris-all-features-3-class-training.csv
iris-all-features-3-class-testing.csv
synthetic_data.csv

Ternary Perceptron

In this section we will consider the problem of multi-class classification for datasets with 3 classes. The first dataset we use is the all-features Iris Dataset with 3 classes. In order to do multi-class classification of the dataset, you'll first have to implement the ternary perceptron in the file ternary_perceptron.py and run it using run_3_class_4_feature_iris_data.py.
Similar to the binary case, there is a file plot_tp.py with a parent class PlotTernaryPerceptron, of which the class in run_3_class_4_feature_iris_data.py is a child. It contains the methods required to plot any dataset with d features and 3 classes by selecting 2 of the features and using them to plot the data in 2D.
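One common formulation of a multi-class perceptron keeps a separate weight vector per class, predicts by taking the class with the largest dot product, and on a mistake moves the true class's weights toward the input and the predicted class's weights away from it. The sketch below illustrates that idea; the function names, signatures, and bias handling are assumptions, so follow the specification in ternary_perceptron.py for your actual implementation.

```python
# Sketch of a three-class perceptron: one weight vector per class.
# Names and representation are illustrative, not the starter-file API.

def classify(weights, x_vector):
    """weights is a list of three weight vectors, one per class 0/1/2;
    predict the class whose weights give the largest dot product."""
    augmented = x_vector + [1]  # constant 1 input for the bias term
    scores = [sum(w * x for w, x in zip(wc, augmented)) for wc in weights]
    return scores.index(max(scores))

def update_weights(weights, x_vector, label, alpha=1.0):
    """On a mistake, move the true class's weights toward x and the
    (wrongly) predicted class's weights away from x."""
    predicted = classify(weights, x_vector)
    if predicted != label:
        augmented = x_vector + [1]
        for i, xi in enumerate(augmented):
            weights[label][i] += alpha * xi
            weights[predicted][i] -= alpha * xi
    return predicted != label  # True if an update was made
```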

Apart from the Iris dataset, we will also use the ternary perceptron to train on a synthetic dataset synthetic_data.csv. This synthetic dataset has only 2 features for each data point, but three classes 0, 1, and 2. In a new file called run_synth_data_ternary.py, write a class called PlotMultiTP that again extends the class in plot_tp.py and plots the ternary perceptron on the synthetic dataset. Since the dataset contains only 2 features for the x vector, you'll simply select both features for the 2D plot.

Once again, use these files to answer some of the questions in the Part B report.

One-Vs-All Classification

Instead of using a ternary classifier to classify the multi-class data, an alternative is to use the binary classifier itself.
In One-Vs-All classification, we designate one of the classes as the positive class (+1) and treat the points of all the other classes as belonging to a single negative class (-1). For example, if we treat all the data points of class 0 as the positive class (+1), then all the points in classes 1 and 2 are treated as the negative class (-1).
Now that we have reduced the data to only 2 classes, we can train a binary perceptron to separate them. We then repeat this process, designating each of the classes in turn as the positive class, with all the remaining classes as the negative class.
If there are n classes, we end up with n classifiers.
For a new data point, each trained binary classifier then predicts whether the point belongs to its designated positive class or not.
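The relabeling step can be sketched as a one-line helper. This helper is hypothetical (it is not part of the starter files); it simply shows the reduction from a multi-class dataset to the +1/-1 labels a binary perceptron expects.

```python
# Hypothetical helper, not part of the starter files: relabel a
# multi-class dataset so one chosen class becomes +1 and all other
# classes become -1, ready for an ordinary binary perceptron.

def one_vs_all_labels(examples, positive_class):
    return [(x, 1 if y == positive_class else -1) for x, y in examples]

# With n classes we would train n binary perceptrons, one per choice
# of positive class, e.g.:
#   classifiers = [train(one_vs_all_labels(data, c)) for c in range(n)]
```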
In a file run_synth_data_1_vs_all.py, implement a class called PlotMultiBPOneVsAll that makes use of the functionality from plot_bp for binary classification, but adapts it to One-Vs-All classification. Given the index of the positive class as input (implement it as an instance variable POSITIVE), it should perform a One-Vs-All classification and plot the resulting class boundary separating that positive class from the remaining classes. For example, when POSITIVE=0, i.e., when class 0 is treated as the positive class (+1), the plot should look like this:

One-Vs-One Classification

Another way to do multi-class classification is the One-Vs-One approach, where we run a separate classifier for every pair of classes. Each classifier considers only the data points of 2 of the classes and learns to differentiate between them, i.e., it classifies each point as one of those two classes. Repeating this for every pair of classes, n classes give C(n, 2) = n(n-1)/2 such classifiers, each predicting one label out of 2 possibilities. Finally, at test time, to determine the class of a new data point we collect the class predicted by each classifier as a vote, and the majority vote determines the predicted class.
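The pair enumeration and the voting step can be sketched with the standard library. These helper names are hypothetical (not part of the starter files); they only illustrate the bookkeeping described above.

```python
# Hypothetical helpers, not part of the starter files.
from collections import Counter
from itertools import combinations

def one_vs_one_pairs(n_classes):
    """All C(n, 2) unordered class pairs; one binary classifier per pair."""
    return list(combinations(range(n_classes), 2))

def majority_vote(pair_predictions):
    """pair_predictions maps each class pair to the class that pair's
    classifier predicted for a new point; the most common class wins."""
    return Counter(pair_predictions.values()).most_common(1)[0][0]

# Three classes give C(3, 2) = 3 classifiers. If the (0,1) classifier
# votes 0, (0,2) votes 0, and (1,2) votes 2, the majority vote is 0.
```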
In a file run_synth_data_1_vs_1.py, implement a class called PlotMultiBPOneVsOne that makes use of the functionality from plot_bp such that, given the indices of a pair of classes, it performs a binary classification and plots the resulting class boundary. Make use of an instance variable CLASSES such that if self.CLASSES = (0, 1), the class plots the boundary separating classes 0 and 1, treating 0 as the positive class (+1) and 1 as the negative class (-1). In that scenario, the plot should look like this:

Use all these scripts in answering Part B of the report.

Turn-In Instructions

Turn-ins for Part B at Gradescope are:

A6-Part-B-Report.pdf (the report file as a .pdf)
ternary_perceptron.py
run_synth_data_ternary.py
run_synth_data_1_vs_1.py
run_synth_data_1_vs_all.py

Autograder

Some portions of the code you submit (mainly the perceptron) will be graded by an autograder. Please ensure that all the code you write is present in the specified format so that the autograder can evaluate your code correctly.
As for the remaining code, you have a bit more leeway in how you design your scripts, as long as they meet the requirements in the spec. Note that some tests may depend on prior tests being correct when there are dependencies between the functions.
Also, please note that the autograder tests will not reveal detailed output, since the autograder cannot securely reveal exactly what a student's error is. You will need to develop your own tests to figure out what the problem is; however, from the names of the tests, you should be able to identify the problematic method.
The code may also be evaluated for neatness and formatting, and clear code will be helpful if partial credit is to be awarded.

Updates and Corrections

Last updated May 19 at 12:30 PM.
Added a few lines about the Autograder.
Modified the starter files to make the comments a bit clearer, fixed a typo in the Part A Report.

If needed, updates and corrections will be posted here and/or on Ed.