# CSE143 Notes for Friday, 2/19/16

I began by reviewing the 8 queens problem from Wednesday's lecture. We went through a detailed trace of execution for the problem of placing 4 queens on a board and I showed a handout with the trace shown in two ways: as a sequential list of choices and as a decision tree. It is important to study the 8 queens backtracking solution before you attempt to work on the programming assignment because you want to be sure that you understand the basic idea of backtracking.

There are two solutions to the 4-queens problem and we followed the trace far enough to see the first one:

```
. . Q .
Q . . .
. . . Q
. Q . .
```
In lecture, we produced this highly detailed version of the trace (this is the complete trace, including the second solution):

```
explore col 1
    row 1
    place
    explore col 2
        row 3
        place
        explore col 3
        remove
        row 4
        place
        explore col 3
            row 2
            place
            explore col 4
            remove
        remove
    remove
    row 2
    place
    explore col 2
        row 4
        place
        explore col 3
            row 1
            place
            explore col 4
                row 3
                place
                explore col 5
                    print (this is a solution)
                    . . Q .
                    Q . . .
                    . . . Q
                    . Q . .
                remove
            remove
        remove
    remove
    row 3
    place
    explore col 2
        row 1
        place
        explore col 3
            row 4
            place
            explore col 4
                row 2
                place
                explore col 5
                    print (this is a solution)
                    . Q . .
                    . . . Q
                    Q . . .
                    . . Q .
                remove
            remove
        remove
    remove
    row 4
    place
    explore col 2
        row 1
        place
        explore col 3
            row 3
            place
            explore col 4
            remove
        remove
        row 2
        place
        explore col 3
        remove
    remove
```
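The choose/explore/un-choose pattern in the trace can be sketched in Java. This is a rough self-contained version, not the Board class used in lecture: the `queenRow` array representation and the `safe` check below are my own stand-ins, and it counts solutions instead of printing boards.

```java
// Minimal 4-queens backtracking sketch. queenRow[c] holds the row of the
// queen placed in column c+1, or 0 if that column is still empty.
public class Queens {
    public static final int SIZE = 4;
    private static int[] queenRow = new int[SIZE];
    public static int solutions = 0;

    public static void explore(int col) {
        if (col > SIZE) {               // reached "column 5": every column filled
            solutions++;                // found a solution (lecture prints it here)
        } else {
            for (int row = 1; row <= SIZE; row++) {
                if (safe(row, col)) {
                    queenRow[col - 1] = row;    // place
                    explore(col + 1);           // explore the next column
                    queenRow[col - 1] = 0;      // remove (backtrack)
                }
            }
        }
    }

    // a queen at (row, col) is safe if no queen in an earlier column
    // shares its row or a diagonal with it
    private static boolean safe(int row, int col) {
        for (int c = 1; c < col; c++) {
            int r = queenRow[c - 1];
            if (r == row || Math.abs(r - row) == Math.abs(c - col)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        explore(1);
        System.out.println(solutions);  // the 4x4 board has 2 solutions
    }
}
```

Notice that "remove" always runs after the recursive call returns, which is why the removes pile up in the trace as the recursion unwinds from a solution.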
Then I discussed the next programming assignment. The program searches for all ways to form anagrams of a particular phrase. For example, if the phrase is "george bush", then some combinations of words that constitute anagrams are [bee, go, shrug] or [bugs, go, here] or [go, he, bus, erg]. An anagram is a rearrangement of the letters. Each of these word combinations has the same set of letters as "george bush". The program uses a dictionary of words to find all combinations that are anagrams of the given phrase.

In solving this problem, one of the questions your program will have to consider is whether a particular word is relevant to the problem you're trying to solve. For example, if you were trying to find anagrams of "george bush" and you were wondering whether to consider the word "abash", the answer would be no. That word is not relevant to the problem of finding anagrams of "george bush" because there are two a's in "abash" but no a's in "george bush". So that's not a word that matters to us. But the word "bee" is relevant. All of the letters of "bee" appear in "george bush". So that means that we need to explore this possibility. We do that by taking the letters from "bee" away from the letters for "george bush". That leads to a new set of letters that we could use to continue the process.

So how does this relate to backtracking? The potential solution space is the set of all combinations of words from the dictionary. We can think of this as a decision tree by thinking in terms of picking a first word, then picking a second word, then picking a third word, and so on.

To understand this better, I went through a small example in detail. We made a very short dictionary file that had just four words:

```
bee
go
gush
shrug
```
When we ran the program and asked for anagrams of "George Bush", the program reported these answers:

```
[bee, go, shrug]
[bee, shrug, go]
[go, bee, shrug]
[go, shrug, bee]
[shrug, bee, go]
[shrug, go, bee]
```
So at the top of our decision tree, the choice we are making is what word should come first and the various possibilities are the dictionary of words we've been given:

```
         first word?
              |
  +-------+---+---+-------+
  |       |       |       |
"bee"   "go"   "gush"  "shrug"
```
We will only follow paths that make sense, but with this short dictionary, we're going to try each of these words as a possible first word. So we first explore choosing "bee". Once we've chosen "bee" as the first word, at the next level of the tree, we consider all possible choices for a second word:

```
                     first word?
                          |
              +-------+---+---+-------+
              |       |       |       |
            "bee"   "go"   "gush"  "shrug"
              |
        second word?
              |
  +-------+---+---+-------+
  |       |       |       |
"bee"   "go"   "gush"  "shrug"
```
At each level, we go through all the words. Remember that backtracking involves an exhaustive search, so you just try all possibilities that make sense. Of course, at this second level, we wouldn't be dealing with the same set of letters as before. At the first level we were looking for words that could be used to form the phrase "george bush". At the second level, we've already accounted for the letters in the first word ("bee"), so now we are searching for something that matches what's left ("gorg ush").

I mentioned that the low-level details for this problem involve keeping track of how many of each letter we have, subtracting the letters for a word from the letters for the phrase, figuring out whether that subtraction is going to work, and so on. All of these details are handled by the LetterInventory class that we wrote for assignment #1. The inventory objects keep track of how many of each letter you have and the "subtract" method sees whether you can subtract one set of letters from another set of letters.

So let's revisit some of this in terms of LetterInventory objects. Imagine making a LetterInventory object for "george bush". If we printed it, we'd get as output [beegghorsu]. When we subtract the inventory for the word "bee", we get [gghorsu]. As we explore various words we could subtract from this, we find that we can successfully subtract the inventory for "go" to get an inventory that prints as [ghrsu]. And from that we can subtract the inventory for "shrug" to get an inventory that prints as [] (in other words, an empty inventory).
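To make that subtraction behavior concrete, here is a stripped-down stand-in for LetterInventory (letter counts only, no error checking). This is not the assignment #1 class itself, just a sketch of the parts of its behavior described above:

```java
import java.util.Arrays;

// Stripped-down stand-in for the LetterInventory class from assignment #1,
// showing the constructor, subtract, and toString behavior described above.
public class MiniInventory {
    private int[] counts = new int[26];   // counts[0] = 'a', ..., counts[25] = 'z'

    public MiniInventory(String s) {
        for (char ch : s.toLowerCase().toCharArray()) {
            if (ch >= 'a' && ch <= 'z') {
                counts[ch - 'a']++;       // ignore spaces and punctuation
            }
        }
    }

    // returns the difference, or null if other needs a letter we don't have
    public MiniInventory subtract(MiniInventory other) {
        MiniInventory result = new MiniInventory("");
        for (int i = 0; i < 26; i++) {
            result.counts[i] = counts[i] - other.counts[i];
            if (result.counts[i] < 0) {
                return null;              // dead end: subtraction fails
            }
        }
        return result;
    }

    public boolean isEmpty() {
        return Arrays.stream(counts).allMatch(c -> c == 0);
    }

    public String toString() {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < 26; i++) {
            for (int j = 0; j < counts[i]; j++) {
                sb.append((char) ('a' + i));
            }
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        MiniInventory phrase = new MiniInventory("george bush");
        System.out.println(phrase);                          // [beegghorsu]
        MiniInventory rest = phrase.subtract(new MiniInventory("bee"));
        System.out.println(rest);                            // [gghorsu]
        System.out.println(phrase.subtract(new MiniInventory("abash")));  // null
    }
}
```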

For the anagram problem, finding an empty LetterInventory is like getting to column 9 in the 8 queens problem. It means we've found a solution. It means that in our various explorations, we've found a sequence of words that we can successfully subtract from the original to get down to an empty inventory. That means we've accounted for every letter of the original with the current combination of words that we're exploring, which means this is a solution that we'd want to report.

But we also encounter dead ends along the way. For example, when we start with [beegghorsu] and subtract "bee" to get [gghorsu], we end up with something that can't have a second occurrence of "bee". We'd find when we try to subtract the inventory for "bee" a second time that it fails (subtract returns null). That would represent a dead end, just as in the 8 queens problem when we would find that we couldn't place a queen at a particular spot. Dead ends aren't a problem in backtracking. We just skip them and move on.

That's how the anagram problem can be solved with backtracking. Each level of the decision tree (which means each invocation of the recursive method) involves choosing one word. Which choices do we pursue? Only those where we can successfully subtract the word's inventory from the current inventory. And as we proceed to lower levels in the tree, we use the new inventories that the subtract method gives us as the problem to solve at that level. In other words, the overall problem involves a phrase like "george bush", but as we make specific choices, that leads to new smaller problems that involve a smaller inventory of letters because as we choose words, we account for more and more of the letters of the original phrase. And whenever we get to an empty inventory in our recursive backtracking, we know we've found a solution that should be printed.
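Putting these pieces together, the recursion might be sketched as follows. This is only an outline under my own assumptions, not the required design for the assignment: it uses plain int[26] count arrays in place of LetterInventory objects and collects solutions into a list rather than printing them.

```java
import java.util.*;

// Rough sketch of the anagram backtracking described above. Each call to
// explore is one level of the decision tree: it tries every word whose
// inventory can be subtracted from what remains of the phrase.
public class AnagramSketch {
    public static List<String> results = new ArrayList<>();

    public static int[] inventory(String s) {
        int[] counts = new int[26];
        for (char ch : s.toLowerCase().toCharArray()) {
            if (ch >= 'a' && ch <= 'z') counts[ch - 'a']++;
        }
        return counts;
    }

    // returns remaining minus word, or null if the word doesn't fit (dead end)
    public static int[] subtract(int[] remaining, int[] word) {
        int[] diff = new int[26];
        for (int i = 0; i < 26; i++) {
            diff[i] = remaining[i] - word[i];
            if (diff[i] < 0) return null;
        }
        return diff;
    }

    public static void explore(List<String> dictionary, int[] remaining,
                               List<String> chosen) {
        if (Arrays.stream(remaining).allMatch(c -> c == 0)) {
            results.add(chosen.toString());     // empty inventory: a solution
        } else {
            for (String word : dictionary) {
                int[] rest = subtract(remaining, inventory(word));
                if (rest != null) {                     // only pursue words that fit
                    chosen.add(word);                   // choose
                    explore(dictionary, rest, chosen);  // explore smaller problem
                    chosen.remove(chosen.size() - 1);   // un-choose (backtrack)
                }
            }
        }
    }

    public static void main(String[] args) {
        List<String> dict = List.of("bee", "go", "gush", "shrug");
        explore(dict, inventory("george bush"), new ArrayList<>());
        results.forEach(System.out::println);   // the six orderings from lecture
    }
}
```

Run on the 4-word dictionary, this produces the same six word combinations shown earlier (note that "gush" is tried at several levels but every "gush" branch turns out to be a dead end).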

I mentioned that in some sense this can be the easiest assignment of the quarter. The solution is short (mine was 45 lines long) and we're being very specific about how to approach it. Recursive backtracking is a very specific technique and we're telling you exactly how to apply it to this problem. So you could, theoretically, be done in an hour. Most likely it will take a lot longer because this is your first exposure to backtracking and it will take you a while to figure out how all of this works.

The short example we explored in lecture using a 4-word dictionary is helpful to study both to understand what the program is supposed to do and to test your own solution. It is small enough that you can debug your program step by step to make sure that it is making the right choices at each level of the recursion. But even this short example leads to a large solution space to consider. For those who want to explore this, I have posted a detailed trace of execution that shows step by step what it explores. In this trace, instead of using strings like "george bush" and "gorg ush", I have instead listed what the corresponding LetterInventory objects would return when you call their toString method ([beegghorsu] and [gghorsu]). So if you include println statements in your program, you can compare what your program is doing against this detailed trace to make sure it is working properly.

I have also posted a detailed diagram of the solution space that is explored. Solutions are listed in red inside a blue oval. The diagram is so large that when it is initially displayed, you won't be able to read it. Click on it and it will expand to a size that is readable. The trace is more likely to be useful than the diagram, but the diagram might help you to understand what the backtracking is doing.

The writeup points out two optimizations that I am requiring you to include. First, it's clear that we will often need the LetterInventory for a particular word. There is no reason to compute it more than once for any given word, so I'm asking you to "preprocess" the dictionary to compute a LetterInventory for each word. How would we store these results? For each word we end up with a LetterInventory and we want to be able to quickly access the LetterInventory for any particular word. This is a perfect application of a Map. Remember that a Map is composed of key/value pairs, associating each key with a specific value. Here the keys are the words and the values are the LetterInventory objects.
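A sketch of that preprocessing step, again with int[26] count arrays standing in for LetterInventory objects (the method names here are my own, not the assignment's):

```java
import java.util.*;

// Precompute one inventory per dictionary word so it is never rebuilt
// during the recursion. Keys are words; values are their inventories.
public class Preprocess {
    public static int[] inventory(String s) {
        int[] counts = new int[26];
        for (char ch : s.toLowerCase().toCharArray()) {
            if (ch >= 'a' && ch <= 'z') counts[ch - 'a']++;
        }
        return counts;
    }

    public static Map<String, int[]> preprocess(List<String> dictionary) {
        Map<String, int[]> inventories = new HashMap<>();
        for (String word : dictionary) {
            inventories.put(word, inventory(word));  // compute once per word
        }
        return inventories;
    }

    public static void main(String[] args) {
        Map<String, int[]> m = preprocess(List.of("bee", "go", "gush", "shrug"));
        System.out.println(m.get("bee")[1]);   // count of 'b' in "bee"
        System.out.println(m.get("bee")[4]);   // count of 'e' in "bee"
    }
}
```

Afterward, every level of the recursion can look up a word's inventory with a fast map `get` instead of rescanning the word's characters.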

Another optimization I've asked you to perform is to prune the dictionary before you begin the recursive backtracking. In the examples above I was working with the entire dictionary for the backtracking, but you can cut this down considerably by going through the dictionary before you begin the recursion and picking out just the words that are relevant. For example, if we're working with the phrase "george bush", we'd throw out words like "abash" and "aura" right away because they could never be part of a solution.
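That pruning step might look like this sketch (once more with int[26] count arrays standing in for LetterInventory; the names are my own):

```java
import java.util.*;

// Prune the dictionary down to words whose letters all appear in the phrase,
// so the recursion never reconsiders irrelevant words like "abash".
public class Prune {
    public static int[] inventory(String s) {
        int[] counts = new int[26];
        for (char ch : s.toLowerCase().toCharArray()) {
            if (ch >= 'a' && ch <= 'z') counts[ch - 'a']++;
        }
        return counts;
    }

    // true if the phrase has at least as many of every letter as the word
    public static boolean fits(int[] phrase, int[] word) {
        for (int i = 0; i < 26; i++) {
            if (word[i] > phrase[i]) return false;  // e.g. "abash" needs a's
        }
        return true;
    }

    public static List<String> prune(List<String> dictionary, String phrase) {
        int[] target = inventory(phrase);
        List<String> relevant = new ArrayList<>();
        for (String word : dictionary) {
            if (fits(target, inventory(word))) {
                relevant.add(word);
            }
        }
        return relevant;
    }

    public static void main(String[] args) {
        List<String> dict = List.of("abash", "aura", "bee", "go", "gush", "shrug");
        System.out.println(prune(dict, "george bush"));  // [bee, go, gush, shrug]
    }
}
```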

Stuart Reges