CSEP 546: Data Mining (Spring 2012)
Assignment 2: Collaborative Filtering and Bayesian Networks

Please submit both your code and your writeup online by 6:30pm PST on Thursday, May 3, 2012. Provide all code (sufficiently commented) so that we can run it ourselves. Submit your writeup as a PDF, limited to four pages with reasonable fonts and margins.

Problem 1: Collaborative Filtering on Netflix Ratings

1.0 Read the paper "Empirical Analysis of Predictive Algorithms for Collaborative Filtering" (Breese, Heckerman, and Kadie, 1998). You need to read at least through Section 2.1, and are encouraged to read further if you have time.

1.1 The dataset we will be using is a subset of the movie ratings data from the Netflix Prize. You can download it here. It contains a training set, a test set, a movies file, a dataset description file, and a README file. The training and test sets are both subsets of the Netflix training data. You will use the ratings in the training set to predict those in the test set, and compare your predictions against the actual ratings in the test set. The evaluation metrics you need to measure are the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE). The dataset description file further describes the dataset and will help you get started. The README file is from the original set of Netflix files, and has been included to comply with the terms of use for this data.
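
For reference, here is a minimal sketch of the two metrics, assuming the predicted and actual ratings are held in parallel Python lists (all names here are illustrative):

    import math

    def mean_absolute_error(predicted, actual):
        # MAE: mean of the absolute differences between predicted and true ratings.
        return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

    def root_mean_squared_error(predicted, actual):
        # RMSE: square root of the mean squared difference.
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))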

1.2 Implement the collaborative filtering algorithm described in Section 2.1 of the paper (Equations 1 and 2; ignore Section 2.1.2) for making the predictions.
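
For concreteness, here is one possible (unoptimized) sketch of the prediction defined by Equations 1 and 2. It assumes the training ratings are stored as a dict mapping each user ID to a dict of movie-ID-to-rating pairs, with per-user mean ratings precomputed; these data structures and names are illustrative only, and you are free to organize your implementation differently.

    import math

    # Assumed (illustrative) layout: `ratings` maps user ID -> {movie ID: rating},
    # and `means` maps user ID -> that user's mean training rating.

    def pearson_weight(ratings_a, ratings_i, mean_a, mean_i):
        # Equation 2: Pearson correlation over the movies rated by both users.
        common = set(ratings_a) & set(ratings_i)
        if not common:
            return 0.0
        num = sum((ratings_a[j] - mean_a) * (ratings_i[j] - mean_i) for j in common)
        den = math.sqrt(sum((ratings_a[j] - mean_a) ** 2 for j in common) *
                        sum((ratings_i[j] - mean_i) ** 2 for j in common))
        return num / den if den else 0.0

    def predict(active, movie, ratings, means):
        # Equation 1: the active user's mean rating plus a normalized, weighted sum
        # of other users' deviations from their own means on this movie.
        weighted_sum, weight_norm = 0.0, 0.0
        for user, user_ratings in ratings.items():
            if user == active or movie not in user_ratings:
                continue
            w = pearson_weight(ratings[active], user_ratings, means[active], means[user])
            weighted_sum += w * (user_ratings[movie] - means[user])
            weight_norm += abs(w)  # kappa normalizes by the sum of |w(a, i)|
        return means[active] + (weighted_sum / weight_norm if weight_norm else 0.0)

Computing the weights naively for every test pair can be slow on this dataset, so you will likely want to precompute and cache per-user means and rating dictionaries.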

Extra credit: Add yourself as a new user to the training set. To do this, you will need to create a new user ID for yourself. Select some movies that you have seen among those in the training set, and add your ratings for them to the training set. Extend your system to output predictions for the movies you haven't rated, ranked in order of decreasing predicted rating. Do you agree with the predictions of your system? Check out some of the top-ranked movies that you haven't seen (but only after you have finished work on the project).

Problem writeup: Report the MAE and RMSE of your predictions on the test set, and briefly describe your implementation.

Problem 2: Bayesian Networks

2.1 You've lost contact with your safari leader and now find yourself confronted by a legion of 100 marmosets. Your training tells you that you must give a present to the one king in order to avoid being clawed. Fortunately, you have some Raisinets and are a master of conditional probabilities. A lilliputian marmoset wearing a top hat approaches you directly. The king always wears a hat, and the custom has caught on with 5% of the others. What is the probability that this is the king?

Before you get a chance to decide, a moustachioed marmoset pushes the first one out of the way. He too is wearing a hat. Good thing you memorized the 'M' section of the encyclopedia: the king is seen with a moustache half the time, whereas only 1% of ordinary monkeys have one. What is the probability that he is the king? Assume that moustaches and hats are independent of each other given the identity of the marmoset.
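
Both parts are direct applications of Bayes' rule. As a sketch of the setup (writing "king" for the event that the marmoset in front of you is the king):

    P(king | hat) = P(hat | king) P(king) / [ P(hat | king) P(king) + P(hat | not king) P(not king) ]

and, because hats and moustaches are conditionally independent given identity,

    P(king | hat, moustache) ∝ P(hat | king) P(moustache | king) P(king).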

2.2 Consider the following Bayesian network:

Justify your answers.

2.3 Consider a Bayesian network with four Boolean nodes A, B, C, and D, where A is the parent of B, and B is the parent of C and D. Suppose you have a training set composed of the following examples in the form (A,B,C,D), with "?" indicating a missing value: (0,1,1,1), (1,1,0,0), (1,0,0,0), (1,0,1,1), (1,?,0,1). Show the first iteration of the EM algorithm (initial parameters, E-step, M-step), assuming the parameters are initialized ignoring missing values.
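
To clarify what each piece of the answer involves (the notation here is illustrative, not required): the initial parameters are the CPT entries, e.g. P(B=1 | A=1), estimated by counting only the examples in which the relevant values are observed; the E-step computes the posterior over the missing value of B in the last example, given its observed values and the current parameters,

    P(B=1 | A=1, C=0, D=1) ∝ P(B=1 | A=1) P(C=0 | B=1) P(D=1 | B=1),

and the M-step re-estimates the CPTs from expected counts, treating the incomplete example as a fractional count for each of its two completions.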