CSE 446: Machine Learning (Winter 2009)

Assignment 1: Decision tree learning for cancer diagnosis

Due Date: Friday, January 23, 2009 (the third week of the quarter). Hand in your report in class and submit your code online.

In this mini-project, you will implement a decision-tree learning algorithm and apply it to breast cancer diagnosis. For each patient, an image of a fine needle aspirate (FNA) of a breast mass was taken, and nine features potentially correlated with breast cancer were extracted from the image. Your task is to implement a decision tree learner, train it on this data, and predict for new patients whether they have breast cancer. The data was provided by Dr. William H. Wolberg and his colleagues at the University of Wisconsin and can be found in the U.C. Irvine Machine Learning Repository.

  1. Download the training and test data. Each patient is represented by one line, with columns separated by commas: the first column is an identifier number, the last is the class (benign or malignant), and the rest are attribute values, integers ranging from 1 to 10. The attributes are (in case you are curious): Clump Thickness, Uniformity of Cell Size, Uniformity of Cell Shape, Marginal Adhesion, Single Epithelial Cell Size, Bare Nuclei, Bland Chromatin, Normal Nucleoli, Mitoses. (Note that the UCI documentation page specifies a different number of attributes, because it refers to a set of several related datasets. For detailed information about the dataset we use here, see this document.) A short parsing sketch appears after this list.

  2. Implement the ID3 decision tree learner, as described in Chapter 8 of Duda et al. or Chapter 3 of Mitchell (one possible skeleton is sketched after this list). You may program in C/C++, Java, or Python. Your program should assume input in the above format. For initial debugging, it is recommended that you construct a very simple data set (e.g., one generated from a Boolean formula) and test your program on it.

  3. Implement both accuracy (misclassification impurity) and information gain (entropy impurity) as evaluation criteria. Also, implement split stopping using the chi-square test (see Duda et al. 8.3.3 for details, and the sketch after this list). (If you have forgotten everything you learned in Stat 101, check out Appendix A.6 in Duda et al.)

  4. Use your algorithm to train a decision tree classifier and report its accuracy on the test set. Compare accuracies obtained by varying the evaluation criterion and the confidence level used for split stopping. For the latter, use 1%, 5%, and 100% (i.e., no stopping: you always grow the full tree). A driver loop for these experiments is sketched after this list.

  5. You should make sure your code correctly implements the ID3 algorithm and the various options above. If, despite your best efforts, your code still appears to contain bugs (e.g., it produces strange results), describe what you have attempted in debugging and what you suspect the remaining problems are.

  6. Turn in the following:

    • Your code. Submit your files here. The due date is 11:30 AM, January 23, 2009. Your code should contain appropriate comments to facilitate understanding.

    • A report of at most 5 pages (letter size, 1 inch margins, 12pt font) that includes:

      • A high-level description of how your code works.
      • The results you obtain under various settings.
      • Which evaluation criterion and confidence level work well? Why?
      • Do you see evidence of overfitting in some experiments? Explain.
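
The sketches below are referenced from steps 1-4 above. They are illustrative Python fragments written under stated assumptions, not required starter code; all function and file names are our own inventions.

For step 1, parsing the comma-separated format might look as follows. The file names are placeholders for whatever you call your downloaded copies, and the '?' check is a precaution: some copies of the UCI data mark missing attribute values that way.

    def load_data(path):
        """Read one patient per line: identifier, 9 attribute values (1-10), class.

        Returns a list of (attributes, label) pairs; the identifier is dropped
        because it carries no diagnostic information.
        """
        examples = []
        with open(path) as f:
            for line in f:
                fields = line.strip().split(',')
                if len(fields) != 11 or '?' in fields:
                    continue  # skip blank lines and records with missing values
                attrs = [int(v) for v in fields[1:-1]]  # the nine attribute values
                examples.append((attrs, fields[-1]))    # fields[-1] is the class
        return examples

    train = load_data('train.data')  # placeholder file names
    test = load_data('test.data')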
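
For step 2, one possible skeleton of the ID3 learner. It grows a multiway tree greedily, at each node choosing the attribute whose split minimizes the weighted impurity of the children (with entropy impurity this is exactly maximizing information gain). The impurity function and the keep_split hook are parameters so that the same code serves step 3.

    import math
    from collections import Counter

    class Node:
        def __init__(self, label):
            self.label = label    # majority class at this node (used at leaves)
            self.attr = None      # index of the attribute tested here; None at a leaf
            self.children = {}    # attribute value -> subtree

    def entropy(labels):
        """Entropy impurity of a list of class labels."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def id3(examples, attrs, impurity=entropy, keep_split=None):
        labels = [y for _, y in examples]
        node = Node(Counter(labels).most_common(1)[0][0])
        if len(set(labels)) == 1 or not attrs:
            return node  # pure node, or no attributes left: make a leaf
        def weighted_child_impurity(a):
            groups = {}
            for x, y in examples:
                groups.setdefault(x[a], []).append(y)
            return sum(len(g) / len(labels) * impurity(g) for g in groups.values())
        best = min(attrs, key=weighted_child_impurity)
        branches = {}
        for x, y in examples:
            branches.setdefault(x[best], []).append((x, y))
        # Split stopping (step 3): give keep_split a chance to veto the split.
        if keep_split and not keep_split(labels, [[y for _, y in b] for b in branches.values()]):
            return node
        node.attr = best
        rest = [a for a in attrs if a != best]
        for v, subset in branches.items():
            node.children[v] = id3(subset, rest, impurity, keep_split)
        return node

    def classify(node, x):
        while node.attr is not None and x[node.attr] in node.children:
            node = node.children[x[node.attr]]
        return node.label  # the majority label covers attribute values never seen in training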
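
For step 3, the second impurity measure and the chi-square stopping test. We read the assignment's "confidence level" as the significance threshold on the test's p-value, since 100% is said to grow the full tree; check that interpretation against Duda et al. 8.3.3. SciPy is used here only to compute the p-value; in C/C++ or Java, a small table of chi-square critical values serves the same purpose.

    from collections import Counter
    from scipy.stats import chi2

    def misclassification(labels):
        """Misclassification (accuracy-based) impurity: 1 - largest class fraction."""
        return 1.0 - Counter(labels).most_common(1)[0][1] / len(labels)

    def split_is_significant(parent_labels, child_label_lists, alpha=0.05):
        """Chi-square split-stopping test: keep a split only if the association
        between branch and class is significant at level alpha. alpha=1.0
        accepts every split, so the full tree is grown."""
        if alpha >= 1.0:
            return True
        n = len(parent_labels)
        parent_counts = Counter(parent_labels)
        stat = 0.0
        for child in child_label_lists:
            child_counts = Counter(child)
            for label, total in parent_counts.items():
                # Count expected in this branch if class were independent of branch.
                expected = total * len(child) / n
                stat += (child_counts[label] - expected) ** 2 / expected
        df = (len(child_label_lists) - 1) * (len(parent_counts) - 1)
        if df == 0:
            return False  # a single branch (or a single class) is never a useful split
        return chi2.sf(stat, df) <= alpha  # sf(stat, df) is the p-value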
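
Finally, for step 4, the experiments reduce to a small loop over the two criteria and the three stopping thresholds, using the hypothetical names from the sketches above.

    def accuracy(tree, examples):
        return sum(classify(tree, x) == y for x, y in examples) / len(examples)

    for impurity in (entropy, misclassification):
        for alpha in (0.01, 0.05, 1.0):  # the assignment's 1%, 5%, and 100%
            tree = id3(train, list(range(9)), impurity=impurity,
                       keep_split=lambda p, c: split_is_significant(p, c, alpha))
            print('%s  alpha=%s  test accuracy=%.3f'
                  % (impurity.__name__, alpha, accuracy(tree, test)))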

Good luck, and have fun!