O(1) - constant, e.g. adding two numbers, cracking bad passwords
O(log n) - binary search, Euclid's algorithm
O(n) - linear search, computing an average, dot product
O(n log n) - good sorts (e.g. merge sort), Fast Fourier Transform
O(n^2) - insertion sort, matrix addition
O(n^3) - matrix multiplication (one dot product per element)
  higher-order polynomial algorithms suffer from the curse of dimensionality
O(2^n) - factoring n-bit numbers, cracking good passwords, game searches
  exponential algorithms suffer from combinatorial explosion

We have an unrelenting need for better data structures and algorithms -- we have more data than we know what to do with. E.g. Earth's surface is about 5*10^8 sq km, which is 5*10^14 samples at a resolution of one square meter. Toss in 6 different dimensions (say, wavelengths) and 365 days of the year, and we're up to about 10^18 samples. What could this be used for? How about weather prediction.

Processor speeds and hard disk capacities grow at roughly equal rates, but if the data grows with the disks, runtimes of superlinear algorithms (slower than O(n)) grow faster than the hardware can keep up.

Runtimes of recursive algorithms depend on:
  cost of each step, e.g. O(1), O(n)
  number of substeps spawned (1 vs 2 vs n)
  size of each substep (n-1 vs n/2)

Factorial:
  cost of each step: O(1)
  number of substeps spawned: 1
  size of substep: n-1
  => O(n)

Binary search (sketched in code after these notes):
  cost of each step: O(1)
  number of substeps spawned: 1
  size of substep: n/2
  => O(log n)

Merge sort (sketched in code after these notes):
  cost of each step: O(n)
  number of substeps spawned: 2
  size of substep: n/2
  => O(n log n)

Game searches:
  cost of each step: polynomial
  number of substeps spawned: polynomial
  size of each substep: n-1
  => exponential hugeness

Rules on optimization:
  1) Don't do it.
  2) Don't do it yet (experts only).
  3) Measure performance before and after each optimization.
Optimized C code is about 2x as fast as Java, and about 2000x more likely to be buggy.

Insertion sort (sketched in code after these notes):
  invariant - everything to the left of the current element is already sorted
  cool trick - shift elements over while walking backward to find the place to insert
  worst case - reverse-sorted input: O(n^2)
  best case - already-sorted input: O(n)
  average case - O(n^2)
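
A minimal sketch of recursive binary search in Java (the language the notes compare against C), illustrating the recurrence above: O(1) work per step, one substep of half the size, hence O(log n). The class and method names here are our own, not from the notes.

  public class BinarySearch {
      // Returns the index of key in the sorted array a[lo..hi] (inclusive),
      // or -1 if key is absent.
      static int search(int[] a, int key, int lo, int hi) {
          if (lo > hi) return -1;                   // empty range: not found
          int mid = lo + (hi - lo) / 2;             // avoids overflow of lo + hi
          if (a[mid] == key) return mid;            // O(1) work at this step
          if (a[mid] < key)
              return search(a, key, mid + 1, hi);   // one substep, half the size
          else
              return search(a, key, lo, mid - 1);
      }

      public static void main(String[] args) {
          int[] a = {2, 3, 5, 7, 11, 13, 17};
          System.out.println(search(a, 11, 0, a.length - 1));  // 4
          System.out.println(search(a, 4, 0, a.length - 1));   // -1
      }
  }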
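
A minimal merge sort sketch, also in Java, matching the recurrence above: two substeps of size n/2 plus O(n) merging work per step, hence O(n log n). The names and the half-open index convention are our own choices.

  import java.util.Arrays;

  public class MergeSort {
      // Sorts a[lo..hi) in place.
      static void sort(int[] a, int lo, int hi) {
          if (hi - lo <= 1) return;          // 0 or 1 elements: already sorted
          int mid = lo + (hi - lo) / 2;
          sort(a, lo, mid);                  // substep 1: size n/2
          sort(a, mid, hi);                  // substep 2: size n/2
          merge(a, lo, mid, hi);             // O(n) work at this step
      }

      // Merges the sorted halves a[lo..mid) and a[mid..hi).
      static void merge(int[] a, int lo, int mid, int hi) {
          int[] tmp = new int[hi - lo];
          int i = lo, j = mid, k = 0;
          while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
          while (i < mid) tmp[k++] = a[i++];
          while (j < hi)  tmp[k++] = a[j++];
          System.arraycopy(tmp, 0, a, lo, tmp.length);
      }

      public static void main(String[] args) {
          int[] a = {5, 2, 9, 1, 5, 6};
          sort(a, 0, a.length);
          System.out.println(Arrays.toString(a));  // [1, 2, 5, 5, 6, 9]
      }
  }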
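
A minimal insertion sort sketch showing the invariant (everything to the left of position i is already sorted) and the shift-while-walking-backward trick. Names are our own.

  import java.util.Arrays;

  public class InsertionSort {
      static void sort(int[] a) {
          for (int i = 1; i < a.length; i++) {
              // Invariant: a[0..i) is already sorted.
              int key = a[i];
              int j = i - 1;
              // Walk backward, shifting larger elements right to open a slot for key.
              while (j >= 0 && a[j] > key) {
                  a[j + 1] = a[j];
                  j--;
              }
              a[j + 1] = key;   // a[0..i] is now sorted
          }
          // Already-sorted input: the inner loop never shifts -> O(n).
          // Reverse-sorted input: about n^2/2 shifts -> O(n^2).
      }

      public static void main(String[] args) {
          int[] a = {5, 2, 9, 1, 5, 6};
          sort(a);
          System.out.println(Arrays.toString(a));  // [1, 2, 5, 5, 6, 9]
      }
  }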