Asymptotic Analysis
In the past, we've run into questions about efficiency when choosing which data structures to use (preferring sets or dictionaries over lists), as well as when designing our software to precompute and save results in the initializer rather than compute them just-in-time. By the end of this lesson, students will be able to:
- Cache or memoize a function by defining a dictionary or using the @cache decorator.
- Analyze the runtime efficiency of a function involving loops.
- Describe the runtime efficiency of a function using asymptotic notation.
There are many different ways software efficiency can be quantified and studied.
- Running Time, or how long it takes a function to run (e.g. microseconds).
- Space or Memory, or how much memory is consumed while running a function (e.g. megabytes of memory).
- Programmer Time, or how long it took to implement a function correctly (e.g. hours or days).
- Energy Consumption, or how much electricity or natural resources a function takes to run (e.g. kilowatt-hours from a certain energy source).
!pip install -q nb_mypy
%reload_ext nb_mypy
%nb_mypy mypy-options --strict
Version 1.0.5
Data classes
Earlier, we wrote a University class that needed to loop over all the students enrolled in the university in order to compute the list of students enrolled in a particular class. Let's take this opportunity to review this by also introducing one neat feature of Python that allows you to define simple data types, called data classes. We can define a Student data class that takes a str name and a dict[str, int] mapping course names to the credits for each course with the @dataclass decorator. Decorators will turn out to be a useful feature for the remainder of this lesson, and it's helpful to see them in a more familiar context first.
from dataclasses import dataclass, field

@dataclass
class Student:
    """Represents a student with a given name and dictionary mapping course names to credits."""
    # These are instance fields, not class fields!
    name: str
    # The field function specifies that the default value for courses is an empty dictionary
    courses: dict[str, int] = field(default_factory=dict)

    def get_courses(self) -> list[str]:
        """Return a list of all the enrolled courses."""
        from time import sleep
        sleep(0.0001)  # The time it takes to get your course list from MyUW (jk)
        return list(self.courses)

    # def get_name(self) -> str:
    #     return self.name
The @dataclass decorator "wraps" the Student class in this example by providing many useful dunder methods like __init__, __repr__, and __eq__ (used to compare whether two instances are == to each other), among many others. This allows you to write code that actually describes your class, rather than the "boilerplate" or template code that we always need to write when defining a class.
Default values for each field can be set with the = assignment operator, and data classes also have a built-in way of handling mutable default values, like fields that are assigned lists or dictionaries, using the field function as shown above.
Student("Nicole", {"CSE163": 4})
Student(name='Nicole', courses={'CSE163': 4})
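Because @dataclass generates __eq__ for us, two Student instances with the same field values compare equal even though they are distinct objects. A small sketch (restating a minimal Student so the example is self-contained):

```python
from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    courses: dict[str, int] = field(default_factory=dict)

a = Student("Nicole", {"CSE163": 4})
b = Student("Nicole", {"CSE163": 4})
print(a == b)   # True: the generated __eq__ compares field values
print(a is b)   # False: they are still two separate objects
```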
Let's see how we can use this Student data class to implement our University class again.
class University:
    """
    Represents one or more students enrolled in courses at a university.
    """
    def __init__(self, univ_name: str, students: list[Student] | None = None) -> None:
        """Takes the name of the university and optionally a list of students enrolled."""
        self._name: str = univ_name
        if students is None:
            self._students: list[Student] = []
        else:
            self._students = students
        self._courses: dict[str, list[Student]] = {}
        for student in self._students:
            for course in student.get_courses():
                if course in self._courses:
                    self._courses[course].append(student)
                else:
                    self._courses[course] = [student]

    def students(self) -> list[Student]:
        """Getter method for the students list."""
        return self._students

    def enrollments(self) -> list[Student]:
        """Returns all the enrolled students sorted by their name in alphabetical order."""
        sorted_students = sorted(self._students, key=lambda student: student.name)
        return sorted_students

    def enroll(self, student: Student) -> None:
        """Enrolls the student in the university."""
        self._students.append(student)
        for course in student.get_courses():
            if course in self._courses:
                self._courses[course].append(student)
            else:
                self._courses[course] = [student]

    def roster(self, course_name: str) -> list[Student] | None:
        """
        Returns the list of students enrolled in the given course. If no students are
        enrolled, returns an empty list. If the course does not exist, returns None.
        """
        if course_name in self._courses:
            return self._courses[course_name]
        return None

    def roster_slow(self, course_name: str) -> list[Student] | None:
        """
        A version of roster() that does not use precomputation.
        """
        enrolled_students = [student for student in self._students if course_name in student.get_courses()]
        return enrolled_students
uw = University("Udub", [Student("student", {}), Student("Nicole", {"CSE163":4})])
uw.students()
[Student(name='student', courses={}), Student(name='Nicole', courses={'CSE163': 4})]
Caching
The approach of precomputing and saving results in the constructor is one way to make a function more efficient than simply computing the result each time we call the function. But these aren't the only two ways to approach this problem. In fact, there's another approach that sits in between these two: we can cache or memoize the result by computing it once and then saving it, so that we don't have to compute it again when asked for the same result in the future.
Recall that the University class above includes a version of roster() that does not use precomputation; we called it roster_slow().
%time uw.roster("CSE163")
CPU times: user 5 µs, sys: 0 ns, total: 5 µs Wall time: 8.58 µs
[Student(name='Nicole', courses={'CSE163': 4})]
%time uw.roster_slow("CSE163")
CPU times: user 51 µs, sys: 7 µs, total: 58 µs Wall time: 387 µs
[Student(name='Nicole', courses={'CSE163': 4})]
Okay, so this doesn't seem that much faster since we're only working with 2 students. There were 60,692 enrolled students at UW in 2023. Let's try generating that many students for our courses.
names = ["Thrisha", "Sheamin", "Nicole"]
students = [Student(names[i % len(names)], {"CSE163": 4}) for i in range(60692)]
students[:10]
[Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4})]
uw = University("Udub", students)
%time uw.roster("CSE163")[:10]
CPU times: user 6 µs, sys: 1 µs, total: 7 µs Wall time: 8.82 µs
[Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4})]
%time uw.roster_slow("CSE163")[:10]
CPU times: user 536 ms, sys: 405 ms, total: 940 ms Wall time: 10.5 s
[Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4}), Student(name='Sheamin', courses={'CSE163': 4}), Student(name='Nicole', courses={'CSE163': 4}), Student(name='Thrisha', courses={'CSE163': 4})]
Python provides a way to implement this caching feature using the @cache decorator. Like the @dataclass decorator, the @cache decorator helps us focus on expressing just the program logic that we want to communicate.
from functools import cache

class CachedUniversity:
    """
    Represents one or more students enrolled in courses at a university.
    """
    def __init__(self, univ_name: str, students: list[Student] | None = None) -> None:
        """Takes the name of the university and optionally a list of students enrolled."""
        self._name: str = univ_name
        if students is None:
            self._students: list[Student] = []
        else:
            self._students = students

    def students(self) -> list[Student]:
        """Getter method for the students list."""
        return self._students

    def enrollments(self) -> list[Student]:
        """Returns all the enrolled students sorted by their name in alphabetical order."""
        sorted_students = sorted(self._students, key=lambda student: student.name)
        return sorted_students

    def enroll(self, student: Student) -> None:
        """Enrolls the student in the university."""
        self._students.append(student)

    @cache
    def roster(self, course_name: str) -> list[Student] | None:
        """
        Returns the list of students enrolled in the given course. If no students are
        enrolled, returns an empty list. If the course does not exist, returns None.
        """
        return [student for student in self._students if course_name in student.get_courses()]

    def roster_slow(self, course_name: str) -> list[Student] | None:
        """
        A version of roster() that does not use precomputation.
        """
        enrolled_students = [student for student in self._students if course_name in student.get_courses()]
        return enrolled_students
uw = CachedUniversity("Udub", [Student("student", {}), Student("Nicole", {"CSE163":4})])
%time uw.roster("CSE163")
CPU times: user 74 µs, sys: 0 ns, total: 74 µs Wall time: 424 µs
[Student(name='Nicole', courses={'CSE163': 4})]
%time uw.roster("CSE163")
CPU times: user 4 µs, sys: 1e+03 ns, total: 5 µs Wall time: 7.63 µs
[Student(name='Nicole', courses={'CSE163': 4})]
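The same decorator works on plain functions too. A small sketch with functools.cache, using a hypothetical slow_double function; cache_info() reports how many calls were answered from the cache:

```python
from functools import cache

@cache
def slow_double(n: int) -> int:
    # Pretend this is expensive; the decorator saves each result by argument.
    return n * 2

slow_double(5)                   # miss: computed and stored
slow_double(5)                   # hit: returned from the cache
print(slow_double.cache_info())  # hits=1, misses=1
```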
Counting steps
A "simple" line of code takes one step to run. Some common simple lines of code are:
- print statements
- Simple arithmetic (e.g. 1 + 2 * 3)
- Variable assignment or access (e.g. x = 4, 1 + x)
- List or dictionary accesses (e.g. l[0], l[1] = 4, 'key' in d)
The number of steps for a sequence of lines is the sum of the steps for each line.
x = 1
print(x)
The number of steps to execute a loop will be the number of times that loop runs multiplied by the number of steps inside its body.
The number of steps in a function is just the sum of all the steps of the code in its body. The number of steps for a function is the number of steps made whenever that function is called (e.g if a function has 100 steps in it and is called twice, that is 200 steps total).
This notion of a "step" is intentionally vague. We will see later that it's not important to get the exact number of steps correct, as long as you get an approximation of their relative scale (more on this later). Below we show some example code blocks and annotate their number of steps in the first comment of each code block.
# Total: 4 steps
print(1 + 2) # 1 step
print("hello") # 1 step
x = 14 - 2 # 1 step
y = x ** 2 # 1 step
# Total: 51 steps
print("Starting loop")   # 1 step

# Loop body has a total of 1 step. It runs 30 times for a total of 30
for i in range(30):      # Runs 30 times
    print(i)             # - 1 step

# Loop body has a total of 2 steps. It runs 10 times for a total of 20
for i in range(10):      # Runs 10 times
    x = i                # - 1 step
    y = i**2             # - 1 step
# Total: 521 steps
print("Starting loop")   # 1 step

# Loop body has a total of 26 steps. It runs 20 times for a total of 520
for i in range(20):      # Runs 20 times
    print(i)             # - 1 step
    # Loop body has a total of 1 step.
    # It runs 25 times for a total of 25.
    for j in range(25):  # - Runs 25 times
        result = i * j   # - 1 step
Big-O notation
Now that we feel a little more comfortable counting steps, let's consider two functions, sum1 and sum2, that compute the sum of the numbers from 1 to some input n.
def sum1(n: int) -> int:
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum2(n: int) -> int:
    return n * (n + 1) // 2
The second one uses a clever closed-form formula, attributed to Gauss, for the sum of the numbers from 1 to n.
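As a quick sanity check (restating both functions so the sketch is self-contained), the loop and the formula agree:

```python
def sum1(n: int) -> int:
    # Sum 1..n with an explicit loop
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum2(n: int) -> int:
    # Gauss's closed-form formula for the same sum
    return n * (n + 1) // 2

for n in [0, 1, 10, 100]:
    assert sum1(n) == sum2(n)
print(sum1(100), sum2(100))  # 5050 5050
```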
To answer the question of how many steps it takes to run these functions, we now need to talk about the number of steps in terms of the input n (since the number of steps might depend on n).
def sum1(n: int) -> int:      # Total: n+3 steps
    total = 0                 # 1 step
    for i in range(n + 1):    # n+1 times
        total += i            # 1 step
    return total              # 1 step

def sum2(n: int) -> int:      # Total: 1 step
    return n * (n + 1) // 2   # 1 step
Since everything we've done with counting steps is a super loose approximation, we don't really want to promise that the relationship for the runtime is something exactly like n + 3. Instead, we want to report that there is some dependence on n for the runtime so that we get the idea of how the function scales, without spending too much time quibbling over whether it's truly n + 3 or n + 2.
Instead, computer scientists generally drop any constants or low-order terms from the expression n + 3 and just report that it "grows like n." The notation we use is called Big-O notation. Instead of reporting n + 3 as the growth of steps, we instead report O(n). This means that the growth is proportional to n but we aren't saying the exact number of steps. We instead only report the terms that have the biggest impact on a function's runtime.
Poll question: count the number of steps run in the loop for the given method.
def method(n: int) -> int:
    result = 0             # 1 step
    for i in range(9):
        for j in range(n):
            result += j    # 1 step
    return result          # 1 step

method(10)
Big-O notation is helpful because it lets us communicate how the runtime of a function scales with respect to its input size. By scale, we mean quantifying approximately how much longer a function will run if you were to increase its input size. For example, if I told you a function had a runtime in O(n²), you would know that if I were to double the input size n, the function would take about 4 times as long to run.
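We can see the "doubling the input quadruples the work" behavior directly by counting loop iterations rather than timing anything. A sketch with a doubly nested loop:

```python
def count_steps(n: int) -> int:
    """Count how many times the innermost statement of an O(n**2) loop runs."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

print(count_steps(100))                      # 10000
print(count_steps(200))                      # 40000
print(count_steps(200) / count_steps(100))   # 4.0: doubling n quadruples the steps
```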
In addition to O(n) and O(n²), there are many other orders of growth in the Big-O Cheat Sheet.
Data structures
Most operations on a set or a dictionary, such as adding values, removing values, and checking containment, are actually constant-time or O(1) operations. No matter how much data is stored in a set or dictionary, we can always answer questions like "Is this value a key in this dictionary?" in constant time.
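One way to feel this difference is to time a membership check on a large list versus the same data in a set; a sketch using time.perf_counter (the exact numbers vary by machine, so only the relative scale matters):

```python
import time

n = 100_000
data_list = list(range(n))
data_set = set(data_list)
target = n - 1  # worst case for the list: the last element

start = time.perf_counter()
found_in_list = target in data_list   # O(n): scans the whole list
list_time = time.perf_counter() - start

start = time.perf_counter()
found_in_set = target in data_set     # O(1): hash lookup
set_time = time.perf_counter() - start

print(found_in_list, found_in_set)    # True True
# On most machines list_time is orders of magnitude larger than set_time.
```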
In the data-structures.ipynb notebook, we wanted to count the number of unique words in the file moby-dick.txt. Here are two different implementations that are nearly identical, except one uses a list and the other uses a set.
def count_unique_list(file_name: str) -> int:
    words = list()
    with open(file_name) as file:
        for line in file.readlines():
            for word in line.split():
                if word not in words:    # O(n)
                    words.append(word)   # O(1)
    return len(words)

def count_unique_set(file_name: str) -> int:
    words = set()
    with open(file_name) as file:
        for line in file.readlines():
            for word in line.split():
                if word not in words:    # O(1)
                    words.add(word)      # O(1)
    return len(words)
The not in operator in the list version was responsible for the difference in runtime because it's an O(n) operation, whereas for a set, the not in operator is an O(1) operation!
The analysis ends up being a little more complex since the size of the list changes throughout the function (as you add unique words), but with a little work you can show that the list version of this function takes O(n²) time, where n is the number of words in the file, while the set version only takes O(n) time.
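To check that the two approaches agree without needing moby-dick.txt, here is a sketch that counts unique words in a small in-memory list instead of a file (the sample sentence is made up for illustration):

```python
def count_unique_list(words_seq: list[str]) -> int:
    words = []
    for word in words_seq:
        if word not in words:   # O(n) list scan
            words.append(word)
    return len(words)

def count_unique_set(words_seq: list[str]) -> int:
    words = set()
    for word in words_seq:
        if word not in words:   # O(1) hash lookup
            words.add(word)
    return len(words)

sample = "the whale the sea the whale ahab".split()
print(count_unique_list(sample), count_unique_set(sample))  # 4 4
```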