Zasha Weinberg, in lieu of Steve Wolfman*

Winter 2000

Idea: small amount of fast memory

Keep frequently used data in the fast memory

LRU replacement policy (sketched below)

Keep recently used data in cache

To free space, remove the Least Recently Used data
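A minimal sketch of the LRU idea, assuming a tiny cache of integer keys (the class name, key type, and capacity below are made up for illustration, not from the lecture):

#include <iostream>
#include <list>
#include <unordered_map>

// Illustrative LRU cache: a key moves to the front of the list on access;
// when the cache is full, the key at the back (least recently used) is evicted.
class LRUCache {
public:
    explicit LRUCache(size_t capacity) : capacity_(capacity) {}

    // Returns true if key was in the cache (a "hit") and marks it most recent.
    bool access(int key) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            order_.splice(order_.begin(), order_, it->second);  // move to front
            return true;
        }
        if (order_.size() == capacity_) {                       // cache full
            index_.erase(order_.back());                        // evict LRU key
            order_.pop_back();
        }
        order_.push_front(key);
        index_[key] = order_.begin();
        return false;                                           // a "miss"
    }

private:
    size_t capacity_;
    std::list<int> order_;                                      // front = most recent
    std::unordered_map<int, std::list<int>::iterator> index_;
};

int main() {
    LRUCache cache(2);
    std::cout << cache.access(1) << cache.access(2)   // misses: 00
              << cache.access(1)                      // hit:    1
              << cache.access(3)                      // miss, evicts key 2
              << cache.access(2) << '\n';             // miss:   0
}

The std::list keeps the recency order and the map gives O(1) lookup, so both hits and evictions take constant time.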

Optimizing use of the cache can make programs way faster

One TA made RadixSort 2x faster by rewriting it to use the cache better

Not just for sorting

Cache miss → read a line → get hits for the rest of the cache line

Then another miss

# misses = (N²/2) / (cache line size)
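A rough worked example of the formula above (the numbers are illustrative, not from the lecture): with N = 100,000 ints and a 32-byte cache line holding 8 ints, the sequential scans touch about N²/2 = 5 billion elements, so # misses ≈ 5 billion / 8 ≈ 625 million, i.e. roughly one miss per 8 accesses instead of one miss per element.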

Partition is kind of like Selection Sort (sketch below)

BUT the subproblems fit in cache much sooner

Selection Sort only fits in cache right at the end
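A sketch of a Hoare-style partition plus the recursion around it (assumed for illustration; this exact code is not from the lecture). The two partition scans are sequential sweeps, much like Selection Sort's inner scan, but the recursion quickly produces subarrays small enough to fit entirely in the cache:

#include <utility>
#include <vector>

// Illustrative Hoare-style partition. Both indices sweep sequentially through
// the array, so each cache line is loaded once and then reused for several
// elements, just like the scan pattern described above.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { ++i; } while (a[i] < pivot);   // left-to-right sweep
        do { --j; } while (a[j] > pivot);   // right-to-left sweep
        if (i >= j) return j;
        std::swap(a[i], a[j]);
    }
}

// Once a subarray fits in the cache, the rest of its sorting causes almost no
// further misses; Selection Sort's working set only shrinks to cache size near
// the very end of the sort.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quicksort(a, lo, p);
    quicksort(a, p + 1, hi);
}

// Usage: std::vector<int> v = {5, 3, 8, 1}; quicksort(v, 0, (int)v.size() - 1);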

On each BinSort pass (see the sketch below):

Sweep through the input list – cache misses along the way (sucky!)

Append to an output list indexed by the current digit, which is effectively pseudorandom (ouch!)

Truly evil for a large radix (e.g. 2^16), even though a large radix is what reduces the # of passes
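A sketch of one BinSort pass with a 16-bit radix (the function and constant names are assumptions, not from the lecture). The read side is a sequential sweep; the write side appends to whichever of the 65,536 buckets the current digit selects, so consecutive writes usually land on different cache lines:

#include <cstdint>
#include <vector>

// Illustrative single BinSort pass of LSD radix sort. RADIX = 2^16 keeps the
// number of passes down (two passes for 32-bit keys), but the bucket chosen on
// each iteration is effectively pseudorandom, which is the "ouch!" above.
constexpr std::uint32_t RADIX_BITS = 16;
constexpr std::uint32_t RADIX = 1u << RADIX_BITS;        // 65,536 buckets

void binSortPass(const std::vector<std::uint32_t>& input,
                 std::vector<std::vector<std::uint32_t>>& buckets,
                 std::uint32_t pass) {
    std::uint32_t shift = pass * RADIX_BITS;
    for (std::uint32_t x : input) {                      // sequential sweep of the input
        std::uint32_t digit = (x >> shift) & (RADIX - 1);
        buckets[digit].push_back(x);                     // scattered write: poor cache behavior
    }
}

// Usage: std::vector<std::vector<std::uint32_t>> buckets(RADIX); binSortPass(keys, buckets, 0);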

e.g. Sort 10 billion numbers with 1 MB of RAM (sketch below)

Databases need to be very good at this

Winter 2000 326’ers won’t need to be
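A sketch of the usual external merge sort approach such a database might use (the file names, one-number-per-line format, and memory limit below are assumptions, not from the lecture): sort memory-sized chunks into sorted runs on disk, then k-way merge the runs while keeping only one buffered value per run in RAM.

#include <algorithm>
#include <fstream>
#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Phase 1: read chunks that fit in memory, sort each, write it out as a "run".
std::vector<std::string> makeRuns(const std::string& inPath, std::size_t memLimit) {
    std::ifstream in(inPath);
    std::vector<std::string> runPaths;
    std::vector<long long> chunk;
    long long x;
    auto flush = [&]() {
        if (chunk.empty()) return;
        std::sort(chunk.begin(), chunk.end());            // in-memory sort of one chunk
        std::string path = "run" + std::to_string(runPaths.size()) + ".txt";
        std::ofstream out(path);
        for (long long v : chunk) out << v << '\n';
        runPaths.push_back(path);
        chunk.clear();
    };
    while (in >> x) {
        chunk.push_back(x);
        if (chunk.size() == memLimit) flush();
    }
    flush();
    return runPaths;
}

// Phase 2: k-way merge of the runs, holding only one value per run in RAM.
void mergeRuns(const std::vector<std::string>& runPaths, const std::string& outPath) {
    std::vector<std::ifstream> runs;
    for (const auto& p : runPaths) runs.emplace_back(p);
    using Item = std::pair<long long, std::size_t>;       // (value, run index)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
    long long v;
    for (std::size_t i = 0; i < runs.size(); ++i)
        if (runs[i] >> v) heap.push({v, i});
    std::ofstream out(outPath);
    while (!heap.empty()) {
        auto [val, i] = heap.top();                        // smallest remaining value
        heap.pop();
        out << val << '\n';
        if (runs[i] >> v) heap.push({v, i});               // refill from the same run
    }
}

int main() {
    // 125,000 eight-byte numbers per run is roughly the 1 MB budget from the slide.
    auto runs = makeRuns("input.txt", 125000);             // "input.txt" is an assumed file
    mergeRuns(runs, "sorted.txt");
}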