- Q. How detailed should my algorithm be?
E.g., if I want the vertex that has the lowest number, may I write:
    Vertex := Min (VertexA, VertexB);
or should I be more detailed:
    temp := vertexA;
    if (vertexB < vertexA) temp := vertexB;
A:
I don't want code, I want a clear, understandable description of
the method. A program is a communication between you and a
machine; an algorithm is a communication between you and another
human about a program.
E.g., "Min (VertexA, VertexB)" is fine, but
I'd be just as happy with English: "between vertices A and B,
choose the one with the smaller vertex number", or "keep an
array with an entry for each vertex holding the count of the
number of times that vertex is reached during process X and find
the vertex with the min count." If you need a balanced binary
search tree, say "using a WhizBlat tree from chapter 4 or my data
structures class, we can ..."; definitely don't give me WhizBlat
code.
The basic issue is that you should break the problem down into steps
that are clear enough that everyone else in the class should be able
to:
- understand what you mean (English is notoriously slippery, so be careful here),
- fill in the details necessary to code it, and
- know how to analyze its run time --
    - "min(A,B)" is O(1), whether described at a high level or via the detailed code above,
    - "find the min in that array in time O(n)", or
    - "find & remove the min of a heap in time O(log n)".
And it should be clearly organized so that the "big
picture" is evident and so that you can argue that your
algorithm is correct. In general, I want at least a sketch of
correctness with all your algorithms, unless I say otherwise.
- Q. How do I time my program?
You've asked us to measure the time taken by our algorithms.
How do I do this? Should I buy that $3.99 Rolex I keep getting
emails about? --- T. Treasure
A: Dear Timeless,
No, save the Rolex to impress that Certain Someone. Computers
generally have built-in clocks. Accessing them depends on the
language. For example, in C/C++, you do:
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start_tick, end_tick;
    double elapsed;

    start_tick = clock();
    my_really_fast_algorithm(42);    /* the code you want to time */
    end_tick = clock();

    elapsed = (end_tick - start_tick) / (double)CLOCKS_PER_SEC;
    printf("My Really Fast Algorithm took %f seconds!\n", elapsed);
    return 0;
}
CLOCKS_PER_SEC, defined in time.h, tells how to scale the
system-dependent clock() results to seconds. (It is often 1000,
but that does not tell you the resolution of the timer, which is
often 10 or 16.7 milliseconds.)
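If you are curious what resolution you are actually getting, here is
one way to estimate it -- a minimal sketch, not guaranteed for every
platform: it spins until clock() reports a new value and prints the
size of the jump.

#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t t0 = clock(), t1;
    while ((t1 = clock()) == t0)
        ;                            /* busy-wait for the next tick */
    printf("clock() advanced by %f seconds\n",
           (t1 - t0) / (double)CLOCKS_PER_SEC);
    return 0;
}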
Similarly, in Java, System.currentTimeMillis() (a public static
long method) returns the current time in millisecond units (but
not necessarily with millisecond accuracy).
- Q. How do I time my really, really fast program?
I did what you outlined above, but I always get 0 seconds?!?
---S. LaCode
A: Dear Swifty,
Congratulations! Your algorithm is a blazing
marvel of speed. But just to be on the safe side, I always do
this, too: Since the system clock ticks only once every 10
milliseconds or so, and my computer executes about a billion
operations per second, a fair bit of work gets done between
clock ticks. To time an algorithm more accurately for small
n, repeat it r times, for some number r ≥
1 judiciously chosen so that the elapsed time for all
repetitions is, say, 500 milliseconds or greater, then report
the elapsed time divided by r. It's OK to use a
different r for different input sizes, just so each chunk
you time is bigger than 500 ms or so. A big-O estimate of
running time might suggest how r should be scaled with
n.
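For concreteness, here is a minimal sketch of that idea in C; the
function my_really_fast_algorithm is just a stand-in for whatever you
are actually timing, and the 500 ms threshold is the one suggested
above.

#include <stdio.h>
#include <time.h>

/* Stand-in for the code you actually want to time. */
static void my_really_fast_algorithm(int n) {
    volatile long sum = 0;
    for (int i = 0; i < n; i++)
        sum += i;
}

int main(void) {
    long r = 1;
    double total;

    /* Double r until the whole timed chunk takes at least ~500 ms,
       so that many clock ticks fall inside it. */
    do {
        r *= 2;
        clock_t start = clock();
        for (long i = 0; i < r; i++)
            my_really_fast_algorithm(42);
        total = (clock() - start) / (double)CLOCKS_PER_SEC;
    } while (total < 0.5);

    printf("about %g seconds per run (r = %ld repetitions)\n",
           total / r, r);
    return 0;
}

The reported time per run then stabilizes even when a single run
finishes well inside one clock tick.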