Lecture 15 Summary

There is a lot of review from lecture 14 at the beginning of this lecture. Since on Saturday you will be going directly from the second half of lecture 14 to the beginning of lecture 15, I suggest that during lecture 14 you stop frequently during the discussion of the FFT. Then, if your students seem comfortable with the topic, watch the announcements at the beginning of lecture 15 and skip ahead to slide 6 of lecture 15, at 9:00. I would include slide 6 even though it partly repeats lecture 14, because the slightly different way the instructor explains the diagram the second time is useful.

The only other interesting point that arises during slides 1-5 that is not discussed much in lecture 14 occurs at 6:20, where the instructor says, "We use exponential notation because of the convenience of manipulating exponents." So it might be worthwhile to discuss this during lecture 14 on the e^(2πik/n) slide.


This is just announcements, but there is some discussion of the exam, so it might be worthwhile to show it.






The point here is that all of the work happens in the translation step. Doing the translation work allows us to make the actual multiplication quite easy. In this way the FFT provides a bridge between the coefficient representation and the point-value representation of a polynomial.
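If students want to see the whole pipeline concretely, here is a minimal sketch of FFT-based polynomial multiplication. It is not the instructor's code; the function name and the use of NumPy's FFT routines are my own choices, made so the example stays short.

```python
# Sketch: multiply two polynomials by translating to point-value form,
# multiplying pointwise, and translating back. Assumes NumPy is installed.
import numpy as np

def poly_multiply(a, b):
    """Multiply polynomials given as coefficient lists (low order first)."""
    n = len(a) + len(b) - 1           # number of coefficients in the product
    size = 1
    while size < n:                   # pad up to a power of 2
        size *= 2
    fa = np.fft.fft(a, size)          # coefficients -> point values
    fb = np.fft.fft(b, size)
    fc = fa * fb                      # the "easy" step: pointwise multiply
    c = np.fft.ifft(fc).real          # point values -> coefficients
    return [round(x) for x in c[:n]]

# (1 + x)(2 + x) = 2 + 3x + x^2
print(poly_multiply([1, 1], [2, 1]))  # -> [2, 3, 1]
```

All of the O(n log n) work is in the two forward transforms and the inverse transform; the multiplication itself is a single O(n) pointwise product.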


At 12:40, the instructor asks, "Why is it safe to assume that n is a power of 2?" Stop here to let students answer.

At 13:40, the instructor asks, "How big a hit will we take in performance if we pad with zeros?" Stop again to let students answer. The phrase "take a hit" means "suffer a negative impact," so the question is how much padding with zeros hurts performance. Padding with zeros just means adding higher-order terms with coefficient zero.
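If it helps to make the padding concrete for students, a sketch might look like the following (the function name is mine, not from the lecture). The key observation is that padding at most doubles the length, so the asymptotic running time is unchanged.

```python
def pad_to_power_of_two(coeffs):
    """Pad a coefficient list with zero coefficients up to the next
    power of 2. The list at most doubles in length."""
    n = 1
    while n < len(coeffs):
        n *= 2
    return coeffs + [0] * (n - len(coeffs))

print(pad_to_power_of_two([1, 2, 3]))  # -> [1, 2, 3, 0]
```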


It would probably be useful to show the discussion up to about 16:15 before stopping for the activity.


This is an activity slide. You can show the discussion up to about 18:25 before stopping.


At 21:48, the instructor asks a question that begins a several-minute discussion with the students. It would be good if you tried to do a similar discussion. Stop at 21:48, and try to ask the same sequence of questions that the instructor did.

At 21:48, the instructor asks, "What am I forgetting here?" A student answers, "A base case." Then the instructor asks, "What is an appropriate base case?" A student answers, "A 1-degree polynomial." The instructor asks, "If I stop with a 1-degree polynomial, what will I be returning?" A student answers, "a0 + a1 at 1, a0 - a1 at -1" (i.e., the polynomial evaluated at x = 1 and x = -1).
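If students want to see where that base case lands in the recursion, here is a minimal recursive FFT sketch. It is my own illustration, not code from the lecture, and it assumes the input length is a power of 2 (at least 2).

```python
import cmath

def fft(a):
    """Recursive FFT of coefficient list a; len(a) must be a power of 2."""
    n = len(a)
    if n == 2:
        # Base case from the discussion: a 1-degree polynomial,
        # evaluated at 1 and -1 (the square roots of unity).
        return [a[0] + a[1], a[0] - a[1]]
    even = fft(a[0::2])               # even-index coefficients
    odd = fft(a[1::2])                # odd-index coefficients
    result = [0] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)   # nth root of unity
        result[k] = even[k] + w * odd[k]
        result[k + n // 2] = even[k] - w * odd[k]
    return result

print(fft([1, 1]))  # base case alone: [2, 0]
```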



You are likely to have students ask for more explanation of the details here. It might be worthwhile to look over the proof in the book before class. However, if you are running short on time you can tell them to look in the textbook outside of class, since the instructor makes the point that the math provides no insight into why the algorithm works.

At 29:45, the instructor asks the students if there are any questions about the FFT. One student asks, "When you apply the FFT as an inverse FFT, the results you get aren't exactly the coefficients?" The instructor answers, "They are off by a factor of 2, but this is easy to fix."


The instructor briefly goes to a whiteboard slide in between slides 13 and 14, and introduces Dynamic Programming. The instructor explains that the idea behind introducing all of these different kinds of algorithms is that when a student sees a new problem, they will be able to decide whether to use a greedy algorithm, a dynamic programming algorithm, etc. In this way the students are learning by experience.

The instructor mentions a colloquium talk at the University of Washington. This isn't important; he's just making the point that it is hard to rigorously define these different categories of algorithms.


This is an activity. You can stop the discussion at about 34:30. This activity should be fairly quick.


This is an activity. You can stop the discussion at about 35:45. The students should just write p[i] by each line. They are already sorted by finish time.
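If students get stuck on computing p[i], it may help to show one way to do it. This is my own sketch (the function and variable names are not from the lecture): with the intervals already sorted by finish time, p[i] is the rightmost interval that finishes no later than interval i starts.

```python
import bisect

def compute_p(intervals):
    """intervals: list of (start, finish) pairs, sorted by finish time.
    Returns p, where p[i] is the index of the rightmost interval that
    finishes by the time interval i starts, or -1 if there is none."""
    finishes = [f for (_, f) in intervals]
    p = []
    for (s, _) in intervals:
        # rightmost j with finishes[j] <= s
        p.append(bisect.bisect_right(finishes, s) - 1)
    return p

print(compute_p([(0, 3), (1, 4), (3, 5), (4, 7)]))  # -> [-1, -1, 0, 1]
```

The students are writing these values by hand, so the binary search is overkill for the activity itself; a linear scan works just as well for small inputs.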


At 38:25, the instructor asks, "How do I express Ij in terms of I1, ..., Ij-1?" You should stop here for discussion. No one in the UW class answered, so at 39:54, the instructor asks again, "If I tell you j is in the optimal solution, how would you express Opt[j]?" If no one got it right the first time, stop again for student answers.

At 41:51, a student asks a question that is hard to hear. The instructor answers, "Opt[j] = optimal subsolution from 1...j" This is just a clarification of the notation.

At 42:30, the instructor asks, "How would you express Opt[j] if j is not in the optimal solution?" Stop here for student responses.

At 43:20, the instructor makes the point that often the key step in dynamic programming is determining the optimization equation.
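Once the class has the recurrence Opt[j] = max(w_j + Opt[p(j)], Opt[j-1]), it may help some students to see it as a short bottom-up sketch. This is my own illustration, not code from the lecture:

```python
import bisect

def max_weight(jobs):
    """jobs: list of (start, finish, weight) tuples, sorted by finish time.
    Implements the recurrence Opt[j] = max(w_j + Opt[p(j)], Opt[j-1])."""
    finishes = [f for (_, f, _) in jobs]
    n = len(jobs)
    opt = [0] * (n + 1)                       # opt[0] = 0: empty schedule
    for j in range(1, n + 1):
        s, f, w = jobs[j - 1]
        p = bisect.bisect_right(finishes, s)  # jobs 1..p are compatible with j
        opt[j] = max(w + opt[p],              # take job j plus its subsolution
                     opt[j - 1])              # or skip job j entirely
    return opt[n]

print(max_weight([(0, 3, 2), (1, 4, 4), (3, 5, 4), (4, 7, 7)]))  # -> 11
```

This reinforces the point at 43:20: once the optimization equation is written down, the code is only a few lines.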



This is an activity slide. You can stop the video at 44:45. There is a lot of student discussion of the answer, but it's pretty easy to understand.

At 47:30, the instructor makes the point that the pathological case is the easiest case. The pathological case is the case where the algorithm has the worst performance.

At 48:05, a student asks if there is a class of algorithms where the easiest case is the pathological case, since this is also true for quicksort. The instructor responds, "this is just a coincidence."