____________________________________________________________

operator overloading

    operator overloading (slides, Set T)

        already seen operator= as a method; weird prototype

            Vector & Vector::operator=(const Vector &rightHandSide);

    operator overloading, design and implementation example

        consider + for list concat., e.g. for usage like

            v3 = v1 + v2;

        method prototype

            Vector Vector::operator+(const Vector &rightOperand) const;

        In the above example expr., the left operand v1 is *this, v2 is
        rightOperand, and the resulting list is returned.

        observations about the decl.

            const method -- shouldn't modify left operand
            const param. -- shouldn't modify right operand
            ref. param. -- want to avoid copy constr. invocation
            returns Vector, not a ref. -- result is a new list, not a
                modified version of one of the operands

        problem: concatenation is not commutative, though we expect + to be

        (see implementation outline below)

        What happens when you execute this line, based on the outline
        below?

            v3 = v1 + v2;

            * in method operator+ ...
                * copy constr.
                * copying of v2
                * copy constr. to return
                * destr. for result when method ends
            * in operator= ...
                * deallocate dyn. mem. for v3
                * copying of result

        A lot of work; very inefficient, but it's not obvious from
        looking at the apparently innocent and simple line
        "v3 = v1 + v2;".

        So what's the right way?  Define an append method to use w/ the
        copy constr., i.e.

            Vector v3 = v1;
            v3.append(v2);

        Method append adds the given list to the end of the list on
        which the method is called.

            void Vector::append(const Vector &listToAppend);

        implementation outline for operator+ (a fleshed-out sketch
        appears at the end of these notes):

            // returns a Vector which is the result of concatenating this
            // vector with the given one, e.g. after
            //     v3 = v1 + v2;
            // is executed, v3 is the result of adding v2 to the end of v1
            Vector Vector::operator+(const Vector &rightOperand) const {

                // start by copying this vector into result;
                // copy constr. invoked
                Vector result = *this;

                // add elements of rightOperand to end of result
                ...

                return result;   // note: invokes copy constructor
            }

____________________________________________________________

algorithm efficiency

    worst-case running time analysis

        review of key points

            "How fast?" is not quite precise enough.
            "How fast does running time grow as problem size increases?"

            counting steps, operations

            *asymptotic* running time analysis

                dropping constants; why this is OK
                dropping lower order terms; why this is OK

                consequence: asymptotic running time depends on the
                algorithm, not the programming language

            usu. consider worst case

            notation: f(n) = O(g(n))

            O(f(n) + g(n)) = O(f(n)) + O(g(n))

        analyzing loops (wasn't covered thoroughly in lecture)

            nested loops

        running time analysis exercises (sketches of several of these
        appear at the end of these notes)

            various operations of List, Stack, Queue ADTs, depending on
            implementation

            binary search

            deallocating a linked list

            sequence of n insertions (w/ shifts) into an array which is
            initially empty

            insertion sort

    side notes and other materials

        (see graphs from Carrano et al., p. 396-7)

        review of the meaning of "log_b n"

            the # of times you can repeatedly divide n by b (until you
                get down to 1)
            the power you must raise b to in order to get n

        Why does the base not matter for asymptotic running time
        analysis?

        arithmetic sum is O(n^2)

            Gauss' pairing technique, e.g. w/ the sum of 1..100:

                  1 +  2 +  3 +  4 + ...
                100 + 99 + 98 + 97 + ...

                = 50 * 101 = 1/2 * 100 * (1 + 100)

            a nice visual "proof" I like:

                . . . . . o
                . . . . o o
                . . . o o o
                . . o o o o
                . o o o o o

                The total number of o's is about half the area of an
                n x n square full of o's.  Actually, draw an (n+1) x n
                rectangle of o's, and you get the exact result.
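
____________________________________________________________

code sketches (not from lecture)

    The sketches below flesh out some of the items above.  They are
    illustrations only; the element type (int), member names, and
    function names are my assumptions, not the actual classes from
    lecture or the textbook.

    append and operator+ for a hypothetical int Vector, as referenced
    in the operator overloading notes above

        #include <algorithm>   // std::copy
        #include <cstddef>     // std::size_t

        // Hypothetical dynamically allocated int Vector; the members
        // items and numItems are assumptions.
        class Vector {
        public:
            Vector() : items(nullptr), numItems(0) {}
            Vector(const Vector &other)            // copy constr.: deep copy
                : items(new int[other.numItems]), numItems(other.numItems) {
                std::copy(other.items, other.items + numItems, items);
            }
            Vector &operator=(const Vector &rightHandSide) {
                if (this != &rightHandSide) {
                    delete [] items;               // deallocate old dyn. mem.
                    numItems = rightHandSide.numItems;
                    items = new int[numItems];     // then deep copy
                    std::copy(rightHandSide.items,
                              rightHandSide.items + numItems, items);
                }
                return *this;
            }
            ~Vector() { delete [] items; }

            // append adds the given list to the end of the list on which
            // the method is called; no temporary Vector is created.
            void append(const Vector &listToAppend) {
                int *bigger = new int[numItems + listToAppend.numItems];
                std::copy(items, items + numItems, bigger);
                std::copy(listToAppend.items,
                          listToAppend.items + listToAppend.numItems,
                          bigger + numItems);
                delete [] items;
                items = bigger;
                numItems += listToAppend.numItems;
            }

            // operator+ builds on append, but also pays for copying *this
            // into result and for the copy made when result is returned.
            Vector operator+(const Vector &rightOperand) const {
                Vector result = *this;        // copy constr. invoked
                result.append(rightOperand);  // copy rightOperand's elements
                return result;                // another copy (unless elided)
            }

        private:
            int *items;
            std::size_t numItems;
        };

    Ignoring compiler copy elision, the two-line version

        Vector v3 = v1;
        v3.append(v2);

    skips both the copy made when operator+ returns result by value and
    the deallocate-and-copy work done inside operator= for

        v3 = v1 + v2;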
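
    counting the steps of nested loops, for the loop analysis item
    above (the function is illustrative only)

        // Dependent nested loops: the inner statement runs
        // 0 + 1 + 2 + ... + (n-1) = n(n-1)/2 times -- the arithmetic
        // sum -- which is O(n^2) after dropping the constant factor
        // and the lower order term.
        long countSteps(int n) {
            long steps = 0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < i; j++)
                    steps++;        // the operation being counted
            return steps;
        }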
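
    binary search over a sorted int array, one of the exercises above
    (the function name and parameters are assumptions)

        // Returns the index of target in the sorted array a[0..n-1], or
        // -1 if it is absent.  Each pass discards half of the remaining
        // range, so the worst case is O(log n) comparisons.
        int binarySearch(const int a[], int n, int target) {
            int low = 0;
            int high = n - 1;
            while (low <= high) {
                int mid = low + (high - low) / 2;   // avoids overflow
                if (a[mid] == target)
                    return mid;
                else if (a[mid] < target)
                    low = mid + 1;    // target can only be to the right
                else
                    high = mid - 1;   // target can only be to the left
            }
            return -1;
        }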
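
    deallocating a singly linked list, another exercise above (the Node
    struct is an assumption)

        // Hypothetical singly linked list node.
        struct Node {
            int data;
            Node *next;
        };

        // Frees every node reachable from head: one constant-time
        // delete per node, so deallocating an n-node list is O(n).
        void deallocate(Node *head) {
            while (head != nullptr) {
                Node *victim = head;
                head = head->next;   // advance before the node disappears
                delete victim;
            }
        }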
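
    insertion sort on an int array, the last exercise above; in the
    worst case (reverse-sorted input) pass i shifts i elements, so the
    total work is the arithmetic sum 1 + 2 + ... + (n-1), i.e. O(n^2)

        // Sorts a[0..n-1] in place.  Pass i inserts a[i] into the
        // already sorted prefix a[0..i-1], shifting larger elements to
        // the right.
        void insertionSort(int a[], int n) {
            for (int i = 1; i < n; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > key) {
                    a[j + 1] = a[j];   // shift right to make room for key
                    j--;
                }
                a[j + 1] = key;        // drop key into its place
            }
        }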