Asymptotic Analysis¶

In the past, we've run into questions about efficiency when choosing what data structures to use (preferring sets or dictionaries instead of lists), as well as in designing our software to precompute and save results in the initializer rather than compute them just-in-time. By the end of this lesson, students will be able to:

  • Cache or memoize a function by defining a dictionary or using the @cache decorator.
  • Analyze the runtime efficiency of a function involving loops.
  • Describe the runtime efficiency of a function using asymptotic notation.

There are many different ways software efficiency can be quantified and studied.

  • Running Time, or how long it takes a function to run (e.g. microseconds).
  • Space or Memory, or how much memory is consumed while running a function (e.g. megabytes of memory).
  • Programmer Time, or how long it took to implement a function correctly (e.g. hours or days).
  • Energy Consumption, or how much electricity or natural resources a function takes to run (e.g. kilowatt-hours from a certain energy source).
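This lesson focuses on running time. Outside of notebook magics like %time, a rough sketch of measuring it in plain Python uses the standard-library timeit module (sum_to here is just a hypothetical function to time):

```python
import timeit

def sum_to(n: int) -> int:
    """Add up the numbers from 0 to n - 1 with a simple loop."""
    total = 0
    for i in range(n):
        total += i
    return total

# timeit runs the callable `number` times and returns total elapsed seconds.
elapsed = timeit.timeit(lambda: sum_to(10_000), number=100)
print(f"100 calls of sum_to(10_000) took {elapsed:.4f} seconds")
```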
In [1]:
!pip install -q nb_mypy
%reload_ext nb_mypy
%nb_mypy mypy-options --strict
Version 1.0.5

Data classes¶

Earlier, we wrote a University class that needed to loop over all the students enrolled in the university in order to compute the list of students enrolled in a particular class. Let's take this opportunity to review this by also introducing one neat feature of Python that allows you to define simple data types called data classes. With the @dataclass decorator, we can define a Student data class that takes a str name and a dict[str, int] mapping course names to credits. Decorators will turn out to be a useful feature for the remainder of this lesson, and it's helpful to see them in a more familiar context first.

In [6]:
from dataclasses import dataclass, field

# This decorator adds methods like __init__, __repr__, __eq__
@dataclass
class Student:
    """Represents a student with a given name and dictionary mapping course names to credits."""
    # Only in a dataclass are these class attributes turned into instance fields!
    # In a dataclass, our instance fields appear inside the class definition, not in an __init__ method!
    # How do we make these fields private?
    # Make sure to use private fields in your Search homework!
    _name: str
    _courses: dict[str, int] = field(default_factory=dict) # specifies that the default value for courses is an empty dictionary

    def get_courses(self) -> list[str]:
        """Return a list of all the enrolled courses."""
        from time import sleep
        sleep(0.0001) # The time it takes to retrieve a single student's course list from MyUW
        return list(self._courses)

    def get_name(self) -> str:
        """Returns the student's name."""
        return self._name

The @dataclass decorator "wraps" the Student class in this example by providing many useful dunder methods like __init__, __repr__, and __eq__ (used to compare whether two instances are == to each other), among many others. This allows you to write code that actually describes your class, rather than the "boilerplate" or template code that we always need to write when defining a class. Default values for each field can be set with the = assignment operator, and dataclasses also have a built-in way of handling mutable default values, like fields that are assigned to lists or dictionaries, using the field function as shown above.
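To see why _courses uses field(default_factory=dict) rather than a plain = {} default, here is a minimal sketch with a hypothetical Box dataclass: Python evaluates default values only once, so a shared mutable default would leak state between instances, and dataclasses refuse plain mutable defaults outright.

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    # A mutable default like `items: list[str] = []` raises a ValueError when
    # the class is defined; default_factory builds a fresh list per instance.
    items: list[str] = field(default_factory=list)

a = Box()
b = Box()
a.items.append("apple")
print(a.items)  # ['apple']
print(b.items)  # [] -- b got its own independent list
```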

In [7]:
Student("Nicole")
Out[7]:
Student(_name='Nicole', _courses={})
In [9]:
Student(_name='Nicole', _courses={"CSE163": 4})
Out[9]:
Student(_name='Nicole', _courses={'CSE163': 4})

Let's see how we can use this Student data class to implement our University class again. Note that the fields of this University class have been prepended with an underscore, which indicates to other programmers that they are private fields. Unlike Java, private fields in Python are not actually enforced by the programming language, but they help indicate that certain fields should not be accessed (or worse, modified) by other classes.
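A quick sketch of what "not enforced" means, using a trimmed-down copy of the Student data class so the cell stands on its own: outside code can still read, and even overwrite, an underscore-prefixed field; the underscore is purely a convention.

```python
from dataclasses import dataclass, field

@dataclass
class Student:
    _name: str
    _courses: dict[str, int] = field(default_factory=dict)

    def get_name(self) -> str:
        return self._name

s = Student("Ada", {"CSE163": 4})
# Nothing stops us from reaching into the "private" field directly...
print(s._name)  # Ada
# ...or even reassigning it, which the leading underscore asks us not to do.
s._name = "Grace"
print(s.get_name())  # Grace
```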

In [20]:
from functools import cache

# Not a dataclass!
class University:
    """
    Represents one or more students enrolled in courses at a university.
    """

    def __init__(self, name: str, students: list[Student] | None = None) -> None:
        """Takes the name of the university and optionally a list of students enrolled."""
        if students is None:
            students = []
        # _name is a private field for the variable "name"
        self._name: str = name
        self._students: list[Student] = students
        self._cache: dict[str, list[Student]] = {}

    def enrollments(self) -> list[Student]:
        """Returns all the enrolled students sorted by their name in alphabetical order."""
        return sorted(self._students, key=lambda student: student.get_name())

    def enroll(self, student: Student) -> None:
        """Enrolls the student to the university."""
        self._students.append(student)

    # Instead of defining the roster_cached method below, we can use the @cache decorator to
    # do the same thing. The main drawback of the decorator is that it's not as customizable.
    @cache
    def roster(self, course: str) -> list[Student]:
        """Given a course name, return the list of students who are enrolled in that course."""
        result = []
        for student in self._students:
            if course in student.get_courses():
                result.append(student)
        return result

    def roster_cached(self, course: str) -> list[Student]:
        """Given a course name, return the list of students who are enrolled in that course."""
        if course in self._cache:
            return self._cache[course]
        result = []
        for student in self._students:
            if course in student.get_courses():
                result.append(student)
        # Save the result somewhere, so that the next time someone asks for the roster, we just
        # return that saved result instead of recomputing everything again.
        self._cache[course] = result
        return result


uw = University("Udub", [Student("Nicole", {"CSE163": 4})])
uw.enrollments()
Out[20]:
[Student(_name='Nicole', _courses={'CSE163': 4})]

Caching¶

Precomputing and saving results in the initializer is one way to make a function more efficient than simply computing the result each time we call the function. But these aren't the only two ways to approach this problem. In fact, there's a third approach that sits in between the two. We can cache or memoize the result by computing it once and then saving the result so that we don't have to compute it again when asked for the same result in the future.
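As a standalone sketch of the dictionary approach (independent of the University class, with a hypothetical slow_square function standing in for expensive work), memoization just checks a dict before recomputing:

```python
from time import sleep

results: dict[int, int] = {}

def slow_square(n: int) -> int:
    """Pretend-expensive computation: square n after a short delay."""
    sleep(0.01)  # simulate slow work
    return n * n

def slow_square_cached(n: int) -> int:
    """Memoized wrapper: compute each answer once, then reuse it."""
    if n not in results:            # cache miss: do the slow work once
        results[n] = slow_square(n)
    return results[n]               # cache hit: constant-time dict lookup

print(slow_square_cached(12))  # computed (slow)
print(slow_square_cached(12))  # looked up (fast)
```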

Above, we went back to the University class to add a cached version of the roster method and named it roster_cached.

In [21]:
%time uw.roster("CSE163")
CPU times: user 166 µs, sys: 0 ns, total: 166 µs
Wall time: 211 µs
Out[21]:
[Student(_name='Nicole', _courses={'CSE163': 4})]

Okay, so this doesn't seem that much faster since we're only working with a single student. There were 60,692 enrolled students at UW in 2023. Let's try generating that many students for our courses.

In [16]:
names = ["Thrisha", "Sheamin", "Nicole"]
students = [Student(names[i % len(names)], {"CSE163": 4}) for i in range(60692)]
students[:10]
Out[16]:
[Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4})]
In [ ]:
uw = University("Udub", students)
uw.enrollments()
In [23]:
%time uw.roster("CSE163")[:10]
CPU times: user 1.08 s, sys: 293 ms, total: 1.37 s
Wall time: 11.3 s
Out[23]:
[Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4})]
In [27]:
# The cache will reset each time you redefine the University object.
# Also, what if someone drops the class? The cache would keep returning the stale roster.
%time uw.roster_cached("CSE163")[:10]
CPU times: user 9 µs, sys: 1 µs, total: 10 µs
Wall time: 12.6 µs
Out[27]:
[Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4}),
 Student(_name='Sheamin', _courses={'CSE163': 4}),
 Student(_name='Nicole', _courses={'CSE163': 4}),
 Student(_name='Thrisha', _courses={'CSE163': 4})]

Python provides a way to implement this caching feature using the @cache decorator. Like the @dataclass decorator, the @cache decorator helps us focus on expressing just the program logic that we want to communicate.

In [ ]:
from functools import cache
...
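As a sketch of the decorator on a standalone function (slow_add is hypothetical, not part of the University example), @cache remembers results keyed by the call's arguments:

```python
from functools import cache
from time import sleep

@cache
def slow_add(a: int, b: int) -> int:
    """Pretend-expensive addition."""
    sleep(0.01)  # simulate slow work
    return a + b

print(slow_add(2, 3))  # first call: computed
print(slow_add(2, 3))  # second call: answered from the cache
# cache_info() reports how many calls hit or missed the cache.
print(slow_add.cache_info())
```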

Counting steps¶

A "simple" line of code takes one step to run. Some common simple lines of code are:

  • print statements
  • Simple arithmetic (e.g. 1 + 2 * 3)
  • Variable assignment or access (e.g. x = 4, 1 + x)
  • List or dictionary accesses (e.g. l[0], l[1] = 4, 'key' in d).

The number of steps for a sequence of lines is the sum of the steps for each line.

In [ ]:
x = 1
print(x)

The number of steps to execute a loop will be the number of times that loop runs multiplied by the number of steps inside its body.

The number of steps in a function is the sum of all the steps of the code in its body, counted each time the function is called (e.g. if a function has 100 steps in it and is called twice, that is 200 steps total).

This notion of a "step" is intentionally vague. We will see later that it's not important to get the exact number of steps right, as long as you capture an approximation of their relative scale. Below, we show some example code blocks and annotate the number of steps in the first comment of each block.

In [28]:
# Total: 4 steps

print(1 + 2)    # 1 step
print("hello")  # 1 step
x = 14 - 2      # 1 step
y = x ** 2      # 1 step
3
hello
In [29]:
# Total: 51 steps

print("Starting loop")  # 1 step

# Loop body has a total of 1 step. It runs 30 times for a total of 30
for i in range(30):     # Runs 30 times
    print(i)            #   - 1 step

# Loop body has a total of 2 steps. It runs 10 times for a total of 20
for i in range(10):     # Runs 10 times
    x = i               #   - 1 step
    y = i**2            #   - 1 step
Starting loop
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
In [ ]:
# Total: 521 steps

print("Starting loop")   # 1 step

# Loop body has a total of 26 steps. It runs 20 times for a total of 520
for i in range(20):      # Runs 20 times
    print(i)             #   - 1 step

                         #   Loop body has a total of 1 step.
                         #   It runs 25 times for total of 25.
    for j in range(25):  #   - Runs 25 times
        result = i * j   #       - 1 step

Big-O notation¶

Now that we feel a little bit more comfortable counting steps, let's consider two functions, sum1 and sum2, that compute the sum of numbers from 1 to some input n.

In [ ]:
def sum1(n: int) -> int:
    total = 0
    for i in range(n + 1):
        total += i
    return total


def sum2(n: int) -> int:
    return n * (n + 1) // 2

The second one uses a clever closed-form formula, discovered by Gauss, that computes the sum of the numbers from 1 to n without any looping.
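A quick sanity check (redefining both functions so the cell stands on its own) that the loop and the closed-form formula agree:

```python
def sum1(n: int) -> int:
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum2(n: int) -> int:
    return n * (n + 1) // 2

# The classic result: 1 + 2 + ... + 100 = 5050.
print(sum1(100), sum2(100))  # 5050 5050
assert all(sum1(n) == sum2(n) for n in range(200))
```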

To answer the question of how many steps it takes to run these functions, we now need to talk about the number of steps in terms of the input n (since the number of steps might depend on n).

In [ ]:
def sum1(n: int) -> int:     # Total: n + 3 steps
    total = 0                # 1 step
    for i in range(n + 1):   # Runs n + 1 times
        total += i           #   - 1 step
    return total             # 1 step


def sum2(n: int) -> int:     # Total: 1 step
    return n * (n + 1) // 2  # 1 step

Since everything we've done with counting steps is a super loose approximation, we don't really want to promise that the relationship for the runtime is something exactly like n + 3. Instead, we want to report that there is some dependence on n for the runtime so that we get the idea of how the function scales, without spending too much time quibbling over whether it's truly n + 3 or n + 2.

Instead, computer scientists generally drop any constants or low-order terms from the expression n + 3 and just report that it "grows like n." The notation we use is called Big-O notation. Instead of reporting n + 3 as the growth of steps, we instead report O(n). This means that the growth is proportional to n but we aren't saying the exact number of steps. We instead only report the terms that have the biggest impact on a function's runtime.

Poll question: count the number of steps run in the loop for the given method.

In [ ]:
# More like 9n steps ==> O(n)
def method(n: int) -> int:
    result = 0
    for i in range(9): # Runs 9 times
        for j in range(n): # Runs n times
            result += j
    return result

method(10)
In [ ]:
# How do we get 9n^2? ==> O(n^2)
def method(n: int) -> int:
    result = 0
    for i in range(9): # Run 9 times
        for j in range(n): # Run n times
            for k in range(n): # Run n times
                result += j + k
    return result

# How about 9 * n(n - 1) / 2 steps? ==> O(n^2)
def method(n: int) -> int:
    result = 0
    for i in range(9): # Runs 9 times
        for j in range(n): # Runs n times
            # 0 + 1 + 2 + ... + (n - 1) = n(n - 1) / 2
            for k in range(j): # Runs j times
                result += j + k
    return result

Big-O notation is helpful because it lets us communicate how the runtime of a function scales with respect to its input size. By scale, we mean quantifying approximately how much longer a function will run if you were to increase its input size. For example, if I told you a function had a runtime in O(n^2), you would know that if I were to double the input size n, the function would take about 4 times as long to run.
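We can check this scaling claim empirically. Here is a rough sketch (pairs is a hypothetical quadratic-time function, and the exact ratio will wobble with machine noise) timing the same function at n and 2n:

```python
from time import perf_counter

def pairs(n: int) -> int:
    """O(n^2): count all ordered pairs (i, j) via a double loop."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

def time_call(n: int) -> float:
    """Wall-clock seconds for one call of pairs(n)."""
    start = perf_counter()
    pairs(n)
    return perf_counter() - start

t1 = time_call(1_000)
t2 = time_call(2_000)
print(f"doubling n made it {t2 / t1:.1f}x slower")  # roughly 4x on most machines
```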

In addition to O(n) and O(n^2), there are many other orders of growth in the Big-O Cheat Sheet.

Data structures¶

Most operations on a set or a dictionary, such as adding values, removing values, and checking containment, are (on average) constant-time or O(1) operations. No matter how much data is stored in a set or dictionary, we can always answer questions like, "Is this value a key in this dictionary?" in constant time.
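A sketch contrasting membership checks on synthetic data: the same `in` test against a list must scan element by element, while against a set it is a hash lookup, and the gap widens as the data grows.

```python
import timeit

n = 100_000
data_list = list(range(n))
data_set = set(data_list)

# Look for a value near the end: the worst case for the list scan.
target = n - 1
list_time = timeit.timeit(lambda: target in data_list, number=100)
set_time = timeit.timeit(lambda: target in data_set, number=100)
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```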

In the data-structures.ipynb notebook, we wanted to count the number of unique words in the file, moby-dick.txt. Here are two different implementations that are nearly identical, except one uses a list and the other uses a set.

def count_unique_list(file_name: str) -> int:
    words = list()
    with open(file_name) as file:
        for line in file.readlines():
            for word in line.split():
                if word not in words:  # O(n)
                    words.append(word) # O(1)
    return len(words)

def count_unique_set(file_name: str) -> int:
    words = set()
    with open(file_name) as file:
        for line in file.readlines():
            for word in line.split():
                if word not in words:  # O(1)
                    words.add(word)    # O(1)
    return len(words)

The not in operator in the list version was responsible for the difference in runtime because it's an O(n) operation. For a set, on the other hand, not in is an O(1) operation!

The analysis ends up being a little more complex since the size of the list changes throughout the function (as you add unique words), but with a little work you can show that the list version of this function takes O(n^2) time, where n is the number of words in the file, while the set version only takes O(n) time.
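To see where the O(n^2) comes from, here is a sketch that counts the element comparisons made by `not in` on synthetic data where every word is unique (the worst case for the list version): the k-th word is compared against the k items already in the list, so the total is 0 + 1 + ... + (n - 1) = n(n - 1) / 2, which grows like n^2.

```python
def comparisons_for_unique_words(n: int) -> int:
    """Count the element comparisons `word not in words` makes on a list
    when all n incoming words are distinct."""
    words: list[str] = []
    comparisons = 0
    for k in range(n):
        word = f"word{k}"
        comparisons += len(words)  # `not in` scans the whole list (no match)
        if word not in words:
            words.append(word)
    return comparisons

# Matches the closed form n(n - 1) / 2.
print(comparisons_for_unique_words(100))  # 4950
```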