These solutions are pretty rough.

1. a) Random pivot is theoretically optimal: its expected running time is O(n log n), and it is O(n log n) with high probability. However, random number generation is expensive, so it is not the best choice in practice.
   b) Median-of-3 is fast and effective in practice. However, pathological data can require O(n^2) computation (though such inputs are unlikely to be encountered except from an adversary). (A sketch of median-of-3 pivot selection follows these solutions.)

2. A is the source
   D has parent A
   E has parent D
   G has parent E
   H has parent G
   B has parent E **
   F has parent E **   (** these two are interchangeable)
   C has parent F
   (A generic sketch of recording parents during the search follows these solutions.)

3. a. A D B E G H F C
   b. AD BE AB DG EH FH CF

4. I'm not going to redraw trees here because ASCII is a pain, but you should redraw them entirely on the exam.
   a. The tree is the same, except the right child of 9 is 15.
   b. 29 becomes the right child of 27 at first, but that creates an imbalance at 35, so we do an inner (double) rotation. In the new tree, 29 is the right child of 25, with 27 as the left child of 29 and 35 as the right child of 29. (A sketch of the double rotation follows these solutions.)

5. Construct a graph G' such that for every edge (v,w) in G, G' has the edge (w,v); in other words, reverse all the edges. Then do a DFS on G' and check whether you reach every node. (A sketch follows these solutions.)

6. a. It makes a decision based just on local/current information, rather than making a decision based on global information.
   b. See hw5 soln.

7. a. O(n), because it visits every node in the tree.
   b.    2
        / \
       1   4
          / \
         0   3

8. a. E is smaller than V^2, so E log E <= E log(V^2) = 2 E log V = O(E log V).
   b. 2E

9. a. O(1)
   b. O(degree(V))
   c. O(V)
   d. Keep both an adjacency list and an adjacency matrix, using the list for getAdjacentVertices and the matrix for isEdgeBetween. When you add a vertex or an edge to the graph, add it to both the list and the matrix. (A sketch follows these solutions.)

10. // BFS from an arbitrary start vertex to check connectivity,
    // then count odd-degree vertices.
    LinkedList<Vertex> l = new LinkedList<Vertex>();
    HashSet<Vertex> h = new HashSet<Vertex>();
    Vertex start = g.allVertices().get(0);
    l.add(start);
    h.add(start);
    while (!l.isEmpty()) {
        Vertex v = l.removeFirst();
        for (Vertex w : g.adjacentVertices(v)) {
            if (h.add(w))        // only enqueue vertices we haven't seen before
                l.add(w);
        }
    }
    if (h.size() != g.allVertices().size())
        return false; // not connected
    int k = 0;
    for (Vertex v : g.allVertices()) {
        if (g.adjacentVertices(v).size() % 2 == 1)
            k++;
    }
    return k <= 2; // at most two vertices of odd degree

11. a. If the data has been predetermined and will not change, a sorted list is faster and smaller in practice. Both the sorted list and the BST, if balanced, have O(log n) finds, but the balanced BST offers O(log n) inserts and removes, whereas the sorted list has O(n) inserts and removes. So if you are going to be dynamically inserting and removing, a BST is the better choice.
    b. Queries can be slow to compute, and furthermore, a lot of the time 90% of the queries access just 10% of the data. If the underlying data changes, a result returned by the cache may be incorrect, so the cache would have to get notified, and this notification should not be costly.
    c. Building a problem constitutes creating the encryption, e.g. finding two large primes and computing their product. Solving the problem amounts to trying to crack the encryption, e.g. trying to factor a large number into its two primes; it's very important that this is very hard to do. You decrypt by providing the answer, e.g. providing the original factors of the number. Verifying that the product of the factors equals the number needs to be fast. (This question wasn't the most relevant to class; I just needed a filler question for the practice exam.)
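
For question 1b, a minimal sketch of median-of-3 pivot selection for quicksort, assuming the data is an int[] and a Lomuto-style partition that expects the pivot at the high index (those details are my assumptions, not from the exam):

// Picks the median of a[lo], a[mid], a[hi] and moves it to a[hi]
// so an ordinary partition step can use it as the pivot.
static int medianOfThreePivot(int[] a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    // Sort the three sampled elements in place.
    if (a[mid] < a[lo]) swap(a, lo, mid);
    if (a[hi]  < a[lo]) swap(a, lo, hi);
    if (a[hi]  < a[mid]) swap(a, mid, hi);
    // a[mid] now holds the median of the three samples.
    swap(a, mid, hi);          // stash the pivot at the end
    return a[hi];
}

static void swap(int[] a, int i, int j) {
    int t = a[i]; a[i] = a[j]; a[j] = t;
}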
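
For question 2, the parent column comes from recording, for each vertex, the vertex from which it was first discovered. A minimal generic sketch, assuming the Graph/Vertex interface used in questions 9 and 10 (allVertices, adjacentVertices); the exam's specific graph is not reproduced here:

import java.util.*;

// Breadth-first search from a source, recording each vertex's parent.
static Map<Vertex, Vertex> bfsParents(Graph g, Vertex source) {
    Map<Vertex, Vertex> parent = new HashMap<>();
    Queue<Vertex> queue = new LinkedList<>();
    parent.put(source, null);              // the source has no parent
    queue.add(source);
    while (!queue.isEmpty()) {
        Vertex v = queue.remove();
        for (Vertex w : g.adjacentVertices(v)) {
            if (!parent.containsKey(w)) {  // first time we discover w
                parent.put(w, v);
                queue.add(w);
            }
        }
    }
    return parent;
}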
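
For question 4b, the inner case is handled with a double rotation. A minimal sketch, assuming a bare Node class (hypothetical, not from the exam); if, as the answer describes, 27 is the left child of 35 and 29 is the right child of 27, then calling rotateLeftRight at 35 returns 29 with 27 and 35 as its children, and the caller (25) takes the returned node as its right child:

// Hypothetical BST/AVL node.
static class Node {
    int key;
    Node left, right;
    Node(int key) { this.key = key; }
}

// Single rotations used as building blocks.
static Node rotateLeft(Node p) {
    Node r = p.right;
    p.right = r.left;
    r.left = p;
    return r;              // r is the new subtree root
}

static Node rotateRight(Node p) {
    Node l = p.left;
    p.left = l.right;
    l.right = p;
    return l;              // l is the new subtree root
}

// Left-right ("inner") case: the left child's right subtree is too tall.
static Node rotateLeftRight(Node p) {
    p.left = rotateLeft(p.left);
    return rotateRight(p);
}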
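
For question 5, a minimal sketch of reversing the edges and then checking reachability with a DFS, assuming a directed Graph class with allVertices(), adjacentVertices(v), a no-argument constructor, addVertex(v), and addEdge(v,w); only the first two appear elsewhere in the exam, the rest are assumptions:

import java.util.*;

// Build G' with every edge reversed, then DFS on G' from start
// and check that every vertex is reached.
static boolean reachesAllInReverse(Graph g, Vertex start) {
    Graph reversed = new Graph();                 // assumed empty-graph constructor
    for (Vertex v : g.allVertices())
        reversed.addVertex(v);
    for (Vertex v : g.allVertices())
        for (Vertex w : g.adjacentVertices(v))
            reversed.addEdge(w, v);               // (v,w) in G becomes (w,v) in G'

    // Iterative DFS on the reversed graph.
    Set<Vertex> visited = new HashSet<>();
    Deque<Vertex> stack = new ArrayDeque<>();
    visited.add(start);
    stack.push(start);
    while (!stack.isEmpty()) {
        Vertex v = stack.pop();
        for (Vertex w : reversed.adjacentVertices(v)) {
            if (visited.add(w))                   // add returns true only for new vertices
                stack.push(w);
        }
    }
    return visited.size() == g.allVertices().size();
}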
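
For question 9d, a minimal sketch of keeping both representations in sync; the class name, fields, and 0..maxVertices-1 vertex numbering are my assumptions, and a real implementation would also need to grow the matrix and handle undirected edges:

import java.util.*;

// Keep both an adjacency list and an adjacency matrix, updating both on every mutation.
class HybridGraph {
    private final Map<Integer, List<Integer>> adjList = new HashMap<>();
    private final boolean[][] adjMatrix;

    HybridGraph(int maxVertices) {
        adjMatrix = new boolean[maxVertices][maxVertices];
    }

    void addVertex(int v) {
        adjList.putIfAbsent(v, new ArrayList<>());
    }

    void addEdge(int v, int w) {                 // update both structures
        adjList.get(v).add(w);
        adjMatrix[v][w] = true;
    }

    List<Integer> getAdjacentVertices(int v) {   // hand back the list directly
        return adjList.get(v);
    }

    boolean isEdgeBetween(int v, int w) {        // O(1) lookup via the matrix
        return adjMatrix[v][w];
    }
}

The matrix answers isEdgeBetween in O(1) while the list keeps getAdjacentVertices cheap; the price is O(V^2) space and doing every update twice.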