Last Updated: January 15
CSE 332 Winter 2014

CSE 332 Project 2: Shake-n-Bacon

Outline


Due Dates & Turn In


Working with a Partner


You are strongly encouraged, but not required, to work with a partner. You can use the Discussion Board to find a partner, and you should complete the Catalyst Survey by the above due date if you wish to work as a team. No more than two students may form a team. You may divide the work however you wish, but place the names of both partners at the top of all files (you may attribute one partner as the primary author). Team members will receive the same project grade unless there is an extreme circumstance (notify us before the deadline).

If you work on a team, you should:
  1. Turn in only *one* submission per team for each phase.
  2. Work together and make sure you both understand the answers to all write-up questions.
  3. Understand how your partner's code is structured and how it works.
  4. Test all of your code together to be sure it integrates properly.

Introduction to Word Frequency Analysis


A world-famous UW history professor wants you to settle a very old debate: who wrote Shakespeare's plays, Shakespeare or Sir Francis Bacon? You protest that this question is surely outside your area of expertise. "Oh, no," chuckles the historian, stroking his snowy white beard. "I need a Computer Scientist! Someone who can efficiently compare any two documents in terms of their word frequency."

Authors tend to use some words more often than others. For example, Shakespeare used "thou" more often than Bacon. The professor believes a "signature" can be found for each author, based on frequencies of words found in the author's works, and that this signature should be consistent across the works of a particular author but vary greatly between authors.

The professor gave you copies of Shakespeare's writing (Hamlet) and Bacon's writing (The New Atlantis), which he painstakingly typed by hand from his antique, leather-bound first editions. Being good scientists, however, you quickly realize that it is impossible to draw strong conclusions based on so little data, and asking him to type more books is out of the question! Thus, for Phase B you should download and analyze several more works, as many as you feel are necessary to support your conclusion. Project Gutenberg is a good place to look.

Learning Objectives


In this project, you will:

The Assignment


This assignment consists of two phases. Each phase includes implementation of data structures, JUnit tests, and programs manipulating those structures. Phase B also involves collecting experimental data and completing write-up questions. You will implement different data counters and sorting routines to see different ways to solve the problem. A large part of your work will involve using the WordCount program, which should output the frequency of each word in the document, starting with the most frequent words and resolving ties in alphabetical order, as shown below:

     
     970     the 
     708     and 
     666     of 
     632     to 
     521     at 
     521     i 
     521     into 
     466     a 
     444     my 
     391     in 
     383     buffalo 
     ...

Note that the actual printing code, along with many other parts, is provided. The printing format should exactly match what is shown above (use the provided printing code). Do not make WordCount print any extra output, such as the number of words or the time it takes to run. Your WordCount should work as specified when given correct input parameters, as shown below. Print an appropriate error message and terminate the program (System.exit) if an incorrect or incomplete set of parameters is given. As provided, WordCount.java only processes a single parameter for the filename; you will need to modify it to process all parameters. WordCount should take parameters as follows:

java WordCount [ -b | -a | -m | -h ] [ -is | -hs | -os | -k <number> ] <filename>

For example,
Binary Search Tree & Insertion Sort: java WordCount -b -is hamlet.txt
AVL Tree & Heap Sort: java WordCount -a -hs hamlet.txt
Hash Table & Top-k Sort (k=5): java WordCount -h -k 5 hamlet.txt
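To make the parameter handling concrete, here is a minimal sketch of how WordCount.main might dispatch on these flags once all phases are complete. The helper names (usage, countWords) are placeholders rather than provided code, and validation of the flag values themselves is omitted:

  public static void main(String[] args) {
    if (args.length < 3) {
      usage();                            // incomplete set of parameters
    }
    String counterFlag = args[0];         // expected: -b, -a, -m, or -h
    String sortFlag = args[1];            // expected: -is, -hs, -os, or -k
    int k = -1;
    String filename;
    if (sortFlag.equals("-k")) {
      if (args.length != 4) {
        usage();
      }
      k = Integer.parseInt(args[2]);      // the <number> following -k
      filename = args[3];
    } else {
      if (args.length != 3) {
        usage();
      }
      filename = args[2];
    }
    countWords(counterFlag, sortFlag, k, filename);  // hypothetical helper for the rest of the program
  }

  private static void usage() {
    System.err.println(
        "Usage: java WordCount [ -b | -a | -m | -h ] [ -is | -hs | -os | -k <number> ] <filename>");
    System.exit(1);
  }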

Provided Code


Download the project to get started:

CSE332_Project2.zip

Unzip and place the CSE332_Project2 folder in your workspace folder. From Eclipse, import it by right-clicking in the Package Explorer -> Import -> General -> Existing Projects into Workspace -> select CSE332_Project2 as the root directory. We believe the provided code is correct, but let us know if you see any problems. Note that the provided code will not compile or run correctly until you complete "1. Getting Started" of Phase A. Here are brief descriptions of the main provided files, in roughly the order we suggest reading them.

DataCount.java
A simple container for an object and an associated count.
DataCounter.java
Like a dictionary, except it maintains a count for each data item. We do not require the element type E to be "comparable". Instead, constructors will take function objects of type Comparator and use them to do comparisons. Also notice that a DataCounter provides an iterator SimpleIterator<DataCount<E>>.
SimpleIterator.java
The iterator that a DataCounter must provide. We do not use Java's iterator type because it obligates us to certain rules that are difficult to implement (if curious, read about concurrent modification exceptions).
Comparator.java
The type for function objects that do comparisons between pairs of elements. Constructors of DataCounter implementations should take it as an argument, as illustrated in BinarySearchTree.java and in the sketch following this list.
DataCountStringComparator.java
An implementation of Comparator that orders two DataCount<String> objects. It requires that you correctly implement StringComparator.
BinarySearchTree.java
An implementation of DataCounter using a binary search tree. It is provided as an example of how to use function objects and iterators. The iterators you write will not be as difficult.
TestBinarySearchTree.java
A class containing JUnit tests for BinarySearchTree. It is provided as a simple example JUnit test.
WordCount.java
Processes a text file and prints out the words from the file in frequency order.
FileWordReader.java
Handles the input file, converting each word into lower case and removing punctuation, so that every string you process will contain just the 26 lowercase English letters.
GStack.java
Same generic stack interface you saw in Project 1.
Heap.java
Interface for a Heap (priority queue) that your FourHeap should implement. FourHeap should use a function object for comparisons, just like the DataCounter implementations.
Hasher.java
Interface for a function object your hashtable implementation will want to take in as a parameter.
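As a concrete illustration of the function-object pattern these files use, below is a sketch of a string comparator. It assumes the provided Comparator interface declares a single compare(e1, e2) method returning a negative, zero, or positive int (as in java.util.Comparator); check Comparator.java and BinarySearchTree.java for the actual signatures:

  // A character-by-character string comparator: negative if s1 < s2,
  // zero if equal, positive if s1 > s2. (Check the comments in the
  // provided files before relying on String methods like compareTo.)
  public class StringComparator implements Comparator<String> {
    public int compare(String s1, String s2) {
      int limit = Math.min(s1.length(), s2.length());
      for (int i = 0; i < limit; i++) {
        int diff = s1.charAt(i) - s2.charAt(i);
        if (diff != 0) {
          return diff;
        }
      }
      return s1.length() - s2.length();   // a prefix sorts before the longer word
    }
  }

WordCount would then pass new StringComparator() into a DataCounter constructor, such as BinarySearchTree's.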

Note that you have been provided several other files in addition to the ones listed above. These other files are mostly stubs that you must fill in (there are instructions in the comments inside those files). You will need to implement classes in the phaseA, phaseB, testA, and testB packages and make several changes to WordCount.java and Sorter.java. You should *NOT* modify the providedCode package at all. You should *NOT* modify the provided structures, such as package names, file names, or the interfaces of public methods and constructors. You also should *NOT* move the provided files to other packages. However, you may add new files to packages other than providedCode, and you may add new packages to the project as you need them. You can add or modify private methods, and you can add public methods and constructors as long as doing so does not violate the style guidelines (e.g., too many unnecessary methods/classes, or a new method that exposes sensitive internal data of the class).

Phase A Programming


In Phase A, you will work on the simpler but helpful subproblem of computing the frequency of words in a text. HashTable, other sort, and top-k sort are part of Phase B, so your Phase A submission may simply ignore the parameters -h, -os, and -k <number>, or print an error message. Note that the write-up questions are all submitted with Phase B, but some of them refer to work you did in Phase A, so you may wish to start on them early.

1. Getting Started

After this step, running
java WordCount <filename>
should produce the correct output (implicitly using -b -is, since you do not yet process them). For the code to compile and generate the correct output, you need to first implement the following:

2. Adding Data Counter Implementations

Provide two additional implementations of DataCounter as described below. You should provide the methods defined in DataCounter. Follow the instructions & hints found inside of each file listed below. Then, adjust WordCount to use your new DataCounter implementations with the appropriate command-line arguments.
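For a feel of what one of these involves, the core of a move-to-front list's counting operation might look roughly like the sketch below. The class, field, and method names here (Node, front, value, incCount) are illustrative assumptions; the instructions in the stub files take precedence:

  // Sketch of the move-to-front idea only: each time an element is counted,
  // its node is unlinked and re-inserted at the front, so frequently counted
  // words stay cheap to find near the front of the list.
  public void incCount(E data) {
    if (front != null && comparator.compare(front.value.data, data) == 0) {
      front.value.count++;                // already at the front
      return;
    }
    Node prev = front;
    while (prev != null && prev.next != null) {
      if (comparator.compare(prev.next.value.data, data) == 0) {
        Node found = prev.next;
        prev.next = found.next;           // unlink the node...
        found.next = front;               // ...and splice it in at the front
        front = found;
        front.value.count++;
        return;
      }
      prev = prev.next;
    }
    Node added = new Node(new DataCount<E>(data, 1));
    added.next = front;                   // not found: new element with count 1
    front = added;
  }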

3. Adding a Sorting Algorithm

Implement Sorter.heapSort and FourHeap as explained below. For full credit, heapSort should be generic. You do not need to implement an in-place heapsort as was discussed in lecture. Then, adjust WordCount to use your heapSort method when the second command-line argument is -hs.
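For reference, a non-in-place heapsort can be quite short: insert every element, then repeatedly delete the minimum. The sketch below assumes the Heap interface exposes insert and deleteMin; check Heap.java for the actual method names:

  // Sketch of a generic, non-in-place heapsort: n inserts followed by
  // n deleteMins, O(n log n) overall.
  public static <E> void heapSort(E[] array, Comparator<E> comparator) {
    Heap<E> heap = new FourHeap<E>(comparator);
    for (int i = 0; i < array.length; i++) {
      heap.insert(array[i]);
    }
    for (int i = 0; i < array.length; i++) {
      array[i] = heap.deleteMin();        // elements come out in sorted order
    }
  }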

4. JUnit Tests

Implement the test files in packages test and testA: TestDataCounter, TestSorter, TestAVLTree, TestMoveToFrontList, and TestFourHeap. TestBinarySearchTree is already implemented as a simple example; note that it is a subclass of TestDataCounter, which should be used to factor out the common testing code of the different data counter implementations (since all data counters should have the same external behavior). We expect your JUnit tests to be more thorough than TestBinarySearchTree, testing both internal and external functionality. To test internal functionality, consider adding a public method for testing to the class being tested, calling that method from your JUnit test, and comparing against the expected output. You do not need to test functionality you did not implement.
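For instance, a black-box test in the TestDataCounter style might look like the sketch below (with the usual org.junit imports and a static import of org.junit.Assert.assertEquals). Here makeCounter, an abstract factory method each subclass would override, and the DataCounter method names are assumptions to adapt to the actual files:

  // Sketch of an external (black-box) test shared by all DataCounter
  // implementations; makeCounter() is a hypothetical factory method that
  // each subclass overrides to construct its own implementation.
  @Test(timeout = 3000)
  public void testRepeatedIncCount() {
    DataCounter<String> counter = makeCounter();
    counter.incCount("hamlet");
    counter.incCount("hamlet");
    counter.incCount("bacon");
    assertEquals(2, counter.getCount("hamlet"));
    assertEquals(1, counter.getCount("bacon"));
  }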

Phase B Programming


In Phase B, you will provide an additional DataCounter implementation, an additional sorting implementation, and compute a numeric value that quantifies the "similarity" of two texts. You will also perform some simple experiments to determine which implementations are fastest. In this phase, we purposely give somewhat less guidance on how to organize your code and implement your algorithms; it is up to you to make good design decisions. You should also fix any bugs and design weaknesses in your Phase A code at this time, since it will be graded more thoroughly after Phase B is turned in.

1. Adding more Data Counter and Sorting Algorithms

Implement HashTable, StringHasher, Sorter.otherSort, and Sorter.topKSort. Follow the instructions & hints at the top of each provided file. Adjust WordCount.java to process the appropriate parameters. For the top-k option, if the second argument is -k, then the third argument should be a number (and the fourth is the filename). When the -k <number> option is used, WordCount should print out only the <number> most frequent words, resolving ties alphabetically. If the text has fewer than k distinct words, print all of them.
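One standard way to implement top-k is to keep a heap of at most k candidates while scanning the input. Because the output order is ascending under the comparator (most frequent first, given DataCountStringComparator), the top k are the k smallest elements, so the candidate heap uses the reversed comparator. This is only a sketch; size() is an assumed Heap operation, so check Heap.java:

  // Sketch of one standard top-k strategy: keep a heap of at most k
  // candidates; deleteMin on the REVERSED comparator evicts the worst one.
  public static <E> void topKSort(E[] array, final Comparator<E> comparator, int k) {
    Comparator<E> reversed = new Comparator<E>() {
      public int compare(E e1, E e2) {
        return comparator.compare(e2, e1);
      }
    };
    Heap<E> heap = new FourHeap<E>(reversed);
    for (int i = 0; i < array.length; i++) {
      heap.insert(array[i]);
      if (heap.size() > k) {
        heap.deleteMin();                  // drop the weakest of k+1 candidates
      }
    }
    int kept = Math.min(k, array.length);
    for (int i = kept - 1; i >= 0; i--) {
      array[i] = heap.deleteMin();         // worst survivor fills the back first
    }
  }

WordCount would then print only the first Math.min(k, n) entries of the array.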

2. JUnit Tests

Implement TestHashTable and add appropriate testing code to TestSorter for other sort and topK sort.

3. Document Correlation

How to quantify the similarity between two texts is a large area of study. We will use a simple approach, leaving more advanced models such as Latent Semantic Indexing to Above and Beyond. Implement Correlator.java using this simple approach:

Correlator.java should accept command line parameters to find out which DataCounter to use:
java Correlator [ -b | -a | -m | -h ] <filename1> <filename2>

For example,
Using BinarySearchTree: java Correlator -b hamlet.txt the-new-atlantis.txt
Using HashTable: java Correlator -h hamlet.txt the-new-atlantis.txt
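One simple metric, consistent with the frequency cutoffs described in the write-up section below, is the sum of squared differences of normalized word frequencies over words appearing in both documents (smaller values mean more similar texts). The sketch below illustrates that variant; the metric itself and the method names (getIterator, getCount, hasNext, next) are assumptions to check against Correlator.java's comments:

  // Sketch of the similarity computation: iterate over every word in the
  // first document and look its frequency up in the second. A word absent
  // from the second document is assumed to have count 0 and is therefore
  // excluded by the frequency cutoffs.
  static double correlation(DataCounter<String> counter1, double totalWords1,
                            DataCounter<String> counter2, double totalWords2) {
    double diff = 0;
    SimpleIterator<DataCount<String>> iter = counter1.getIterator();
    while (iter.hasNext()) {
      DataCount<String> dc = iter.next();
      double f1 = dc.count / totalWords1;                   // normalized frequency
      double f2 = counter2.getCount(dc.data) / totalWords2;
      if (f1 > 0.0001 && f1 < 0.01 && f2 > 0.0001 && f2 < 0.01) {
        diff += (f1 - f2) * (f1 - f2);                      // squared difference
      }
    }
    return diff;
  }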

4. Experiment & Write up (Due with Phase B)

Submit a README.pdf file, answering the questions in this template README file: (README.docx, README.pdf)

The last three write-up questions ask you to design and run some experiments to determine which implementations are faster for various inputs. Answering these questions will require writing additional code to run the experiments, collecting timing information, and producing result tables and graphs, together with relatively long answers, so do not wait until the last minute. Insert tables and graphs in README.pdf as appropriate, and be sure to give each graph a title and label its axes. Place all your timing code in the package writeupExperiment. Be careful not to leave any write-up-related code in the Phase A & B files (such as printing the runtime in WordCount). To avoid losing points due to modifications made for the write-up experiments, we recommend copying all files that need to be modified for the experiments into the package writeupExperiment and working from there. Files in different packages can have the same name, so be sure to check that you are using the correct file when another file with the same name exists in a different package.

You need to write a second hash function for question 9.d. To exaggerate the difference between the two hash functions, you will want to compare a very simple hash function with a decent one (the one used in Phase B); a sketch of such a pair appears after the timing code below. In Phase B, you ignored words with normalized frequencies above 0.01 (1%) or below 0.0001 (0.01%) in both documents. When determining who wrote Shakespeare's plays for question 11, feel free to play with these percentages. All graphs and tables should be inserted into the README file. For all experimental results, we would like to see detailed interpretation, especially when the results do not match your expectations. For more stable results, we recommend averaging the runtime over multiple runs, as shown in the example timing code below. Java optimizes repeated operations, so code runs faster toward the end of the loop; throw away several runs at the beginning of the loop to counter this effect (JVM warmup).

  private static double getAverageRuntime(String[] args) {
    double totalTime = 0;
    for (int i = 0; i < NUM_TEST; i++) {
      long startTime = System.currentTimeMillis();
      WordCount.main(args);
      long endTime = System.currentTimeMillis();
      if (NUM_WARMUP <= i) {                     // Discard the first NUM_WARMUP runs to account for JVM warmup.
        totalTime += (endTime - startTime);
      }
    }
    return totalTime / (NUM_TEST - NUM_WARMUP);  // Average runtime over the counted runs.
  }
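Returning to question 9.d, a pair of contrasting hash functions might look like the sketch below, written as plain static methods to adapt to the Hasher interface (whose exact shape you should take from Hasher.java):

  // Weak: ignores character order, so anagrams ("dog", "god") collide,
  // and short words cluster into a small range of hash values.
  public static int simpleHash(String s) {
    int h = 0;
    for (int i = 0; i < s.length(); i++) {
      h += s.charAt(i);
    }
    return h;
  }

  // Decent: polynomial hash in the style of String.hashCode, which mixes
  // character positions so far fewer distinct words collide.
  public static int polynomialHash(String s) {
    int h = 0;
    for (int i = 0; i < s.length(); i++) {
      h = 31 * h + s.charAt(i);
    }
    return h;
  }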

Turn in Information


Important reminders:

Phase A:

Phase B:

Above and Beyond (Due with Phase B)


You may do any or all of the following; pick the ones you find interesting. Please be sure that you have finished your project completely before attempting any of these. We advise you to start on these only if you have time left after thoroughly testing all your code and carefully checking all your write-up answers. Recall that any extra credit you complete is noted and kept separate in the gradebook and may or may not be used to adjust your grade at the end of the quarter, as detailed in the course grading policy. Submit your extra credit files separately in extra.zip, and provide a very detailed explanation of each of your implementations in question 4 of README.pdf.

  1. Alternate Hashing Strategies: Implement both closed and open addressing and perform experimentation to compare performance. Also design additional hashing functions and determine which affects performance more: the cost of hashing, the collision-avoidance of hashing, or your addressing strategy.
  2. Excluded Words: Adjust WordCount and Correlator to take an additional file that contains words that should be ignored. For example, the file might contain very common words (like the, a, and she) or it might be another text (so WordCount would then report words not appearing in the other text). There are multiple ways to implement this feature: you could skip words-to-be-ignored as you process the input file, or when sorting the elements, or when printing them. Which is fastest and why do you think this is? What data structure did you use to determine efficiently if a word should be ignored and why is this efficient? Note that for top-k, you should still print k words (assuming there are k words that should not be ignored). How does this affect the ways you can implement this skip-some-words feature?
  3. Introspective Sort: Introspective sort is an unstable quicksort variant which switches to heapsort for inputs that would result in an O(n²) running time for normal quicksort. Thus, it has an average-case and worst-case runtime of O(n log n), but generally runs faster than heapsort, even in the worst case. Implement IntrospectiveSort, and give a sample input which would result in a quadratic runtime for normal quicksort (using a median-of-3 partitioning scheme).
  4. Word Stemming: When working with document correlations, it is often desirable to work only with the roots of words. In this way, "sleeps", "sleeping", and "sleep" are all considered to be the same word. This process is called word stemming, and is used in search engines and many other contexts. However, proper stemming is difficult. Dropping an 's' from the end of a word seems natural but turns "bus" into "bu". Dropping punctuation seems like a good idea but it equates "it's" and "its". One common algorithm is Porter Stemming. Implement this algorithm. Try to do so without looking at the source code posted at the link. If you do look at their source code, cite it properly.
  5. Word Co-Occurrence: A phenomenon even more interesting than word frequency is word co-occurrence. Create a new WordCount that counts the frequency of pairs of words. Your code should insert as a pair any two words which appear within c words of each other, where c is a command-line argument (try 10 for test runs).
  6. Better Correlation Models: Research better ways to quantify the correlation between two text documents and what their advantages/disadvantages are. Describe what you found and implement an algorithm more sophisticated than the simple one in Correlator. Explain how your algorithm affects your conclusions.

Interesting Tidbits


Acknowledgments


A variant of this project was the third and final project in CSE326 for many years. The first word counter appeared during Spring 2000 when 326 was taught by the famous Steve Wolfman. The snowy-bearded historian appeared in Autumn 2001, courtesy of Adrien Treuille. The assignment was subsequently tweaked over the years by Matt Cary, Raj Rao, Ashish Sabharwal, Ruth Anderson, David Bacon, Hal Perkins, and Laura Effinger-Dean. Dan Grossman adapted it to be the second project in CSE332. James Fogarty stayed up late during the snow-pocalypse of 2012 to clarify confusions experienced in previous offerings (which had been carefully detailed by Ruth Anderson).