CSE 143 Quiz 4: August 6th, 1998
Time limit: 20 minutes
Closed book, closed notes

Name:

Section:
 
 
Question 1:
Consider the following function mentioned in lecture:

    int mystery( int x )
    {
       assert( x > 0 );
       if( x == 1 ) {
           return 0;
       } else {
           return 1 + mystery( x / 2 );
       }
    }
     
  1. What is the value of mystery( 16 )?

       4

  2. What is the value of mystery( 64 )?

       6

  3. In general, what is the value of mystery( 2^n )?

       n

  4. Give a tight big-O bound on the running time of this function on some input n.

       O(log n), since each recursive call halves x.  (A quick sanity check appears below.)

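As that sanity check (not part of the quiz), the program below wraps the mystery function from above in a small main and prints its value for a few powers of two; it simply confirms that mystery( x ) counts how many times x can be halved before reaching 1.

    // Sanity check only: mystery( x ) counts the halvings needed to get x down to 1.
    #include <iostream>
    #include <cassert>
    using namespace std;

    int mystery( int x )
    {
        assert( x > 0 );
        if( x == 1 ) {
            return 0;
        } else {
            return 1 + mystery( x / 2 );
        }
    }

    int main()
    {
        cout << mystery( 16 ) << endl;    // prints 4
        cout << mystery( 64 ) << endl;    // prints 6
        cout << mystery( 1024 ) << endl;  // 1024 == 2^10, so prints 10
        return 0;
    }
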
Question 2:
I am attempting to implement a class for a new ADT called a Quack (kind of like a queue, kind of like a stack).  Of course, I want the Quack class to use canonical form so that I avoid potential memory leaks and dangling pointers.  On the left is the Quack specification file so far.  On the right is some client code that I'd like to have work.
 
 
#ifndef __QUACK_H__ 
#define __QUACK_H__ 

class Quack 
{ 
public: 
        // default constructor: goes with "Quack q1, q2"
    Quack(); 
        // Copy constructor: goes with "bool steue( Quack param )"
    Quack( const Quack& other ); 
  
        // Assignment operator: goes with "q1 = q2"
    Quack& operator =( const Quack& other ); 

    ...  
}; 

#endif // __QUACK_H__

Quack q1, q2;

q1 = q2;

// Note that this line doesn't use any of the canonical form methods.
Quack *duck = NULL;

bool steue( Quack param )
{
    ...
}

  1. Draw lines connecting the member function declarations on the left with the lines of code on the right where they might get used.
  2. Canonical form consists of four methods.  Which one is missing here?

       The Destructor, or Quack::~Quack()
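The answer above only names the missing method.  Purely as an illustration (nothing below is given in the quiz), here is a minimal sketch of all four canonical form methods for Quack, assuming -- and this is an assumption, since the private section is never shown -- that a Quack stores its items in a dynamically allocated int array.

    // Sketch only: the internal representation (a dynamic int array) is assumed
    // just to show why the fourth method, the destructor, is needed.
    class Quack
    {
    public:
        Quack() : items( new int[10] ), size( 0 ), capacity( 10 ) { }

        Quack( const Quack& other )
            : items( new int[other.capacity] ), size( other.size ), capacity( other.capacity )
        {
            for( int i = 0; i < size; i++ ) {
                items[i] = other.items[i];
            }
        }

        Quack& operator =( const Quack& other )
        {
            if( this != &other ) {       // guard against q1 = q1
                delete [] items;         // release the old storage
                capacity = other.capacity;
                size = other.size;
                items = new int[capacity];
                for( int i = 0; i < size; i++ ) {
                    items[i] = other.items[i];
                }
            }
            return *this;
        }

        // The missing fourth method: without it, every Quack leaks its array when it dies.
        ~Quack()
        {
            delete [] items;
        }

    private:
        int *items;
        int size;
        int capacity;
    };

Without the destructor, each Quack that goes out of scope would leak whatever memory it owns, which is exactly the problem canonical form is meant to prevent.
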
Question 3:
I need an algorithm that does some computation on integer arrays.  I come up with two possible implementations:
 
void algorithm1( int data[], int n ) 
{ 
    ...  
}
void algorithm2( int data[], int n ) 
{ 
    ... 
}
After some careful analysis, I discover that algorithm1 completes in precisely 100n + 50 time steps on arrays of n elements.  algorithm2, on the other hand, completes in 3n^2 - 1 steps.
  • Give tight big-O bounds for algorithm1 and algorithm2, both in terms of n.

     algorithm1: O(n), algorithm2: O(n^2)

  • Which algorithm takes fewer steps on a problem of size 1?

     algorithm1 takes 150 steps; algorithm2 takes only 2 steps, so algorithm2.

  • Which algorithm takes fewer steps on a problem of size 100?

     algorithm1 takes 10050 steps; algorithm2 takes 29999 steps, so algorithm1.

  • There exists some crossover point between 1 and 100 where one algorithm becomes worse than the other.  Suppose that crossover point is stored in a const int called crossover.  Write a function algorithm3 that takes advantage of algorithm1, algorithm2 and crossover to get the best possible performance for any value of n.  algorithm3 should have the same interface as algorithm1 and algorithm2.  (A quick way to compute the crossover value itself is sketched after the answer.)

        void algorithm3( int data[], int n )
        {
            if( n < crossover ) {
                algorithm2( data, n );     // algorithm2 is cheaper below the crossover
            } else {
                algorithm1( data, n );     // algorithm1 wins for larger inputs
            }
        }
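
As promised above, here is a small program (not part of the required answer) that locates the crossover value by comparing the two step-count formulas given in the problem, 100n + 50 and 3n^2 - 1.

    // Sketch: find the first n where algorithm2 stops being cheaper than algorithm1.
    #include <iostream>
    using namespace std;

    int main()
    {
        for( int n = 1; n <= 100; n++ ) {
            long steps1 = 100L * n + 50;    // algorithm1's cost
            long steps2 = 3L * n * n - 1;   // algorithm2's cost
            if( steps2 >= steps1 ) {
                cout << "crossover = " << n << endl;   // prints 34
                return 0;
            }
        }
        return 0;
    }

It reports 34: for n up to 33, algorithm2 uses fewer steps, and from 34 on algorithm1 does, which matches the n < crossover test in algorithm3.
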
Question 4:
A certain mathematical sequence is defined as follows, for any initial positive number A:

    if A is even, the next number in the sequence is A / 2
    if A is odd,  the next number in the sequence is 3A + 1

For example, when A = 7, we get the following sequence:

    7 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1 4 2 1 ...

Note that once we get down to 1, the sequence repeats forever (1 is odd, so it is followed by 4, then 2, then 1 again).  So we can stop computing steps in the sequence once we get to 1.

Write a recursive function printSeq that takes a positive int parameter and prints the sequence from that int down to 1.  For example, printSeq( 3 ) should print out

    3 10 5 16 8 4 2 1
        void printSeq( int n )
        {
            cout << n << " ";
            if( n > 1 ) {
                if( n % 2 == 0 ) {
                    printSeq( n / 2 );
                } else {
                    printSeq( 3 * n + 1 );
                }
            }
        }

BONUS QUESTION:
Give a big-O bound for printSeq.  Your bound doesn't have to be tight -- any upper bound on the running time will do.

For any given starting value of A, let L(A) represent the number of steps it takes to get from A down to 1.  Nobody knows what L(A) is!  In fact, nobody even knows if L(A) is always finite -- perhaps there is some A for which you never get down to 1!  If you can find a big-O bound for L(A), then you will have effectively proven that L(A) is always finite, a result that has evaded some of the greatest mathematicians of this century!  Good luck.
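
Although no bound on L(A) is known, L(A) itself is easy to measure for any particular starting value.  The helper below (a sketch, not part of the quiz) just counts steps iteratively, mirroring the even/odd cases in printSeq; countSteps is a made-up name for this illustration.

    // Count how many steps the sequence takes to reach 1 from a given start.
    #include <iostream>
    using namespace std;

    int countSteps( int n )          // L(A) for a given starting value A = n
    {
        int steps = 0;
        while( n > 1 ) {
            if( n % 2 == 0 ) {
                n = n / 2;
            } else {
                n = 3 * n + 1;
            }
            steps++;
        }
        return steps;
    }

    int main()
    {
        cout << countSteps( 7 ) << endl;    // prints 16
        cout << countSteps( 27 ) << endl;   // prints 111 -- 27 takes a famously long walk
        return 0;
    }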