This notebook is meant to be a quick refresher on linear algebra and a brief introduction to NumPy (a Python package for scientific computing); it is by no means a thorough review. I assume you have some prior experience with linear algebra, such as an introductory course taken a while ago. The goal is to go over some of the important properties of matrices and to showcase some NumPy methods through practical examples. We consider linear regression and three different solutions: algebraic, analytic, and geometric. I heavily cite and highly recommend Kolter's review notes on linear algebra [2].
Appendix
A. Linear algebra visualized
B. Transpose and 1-dimensional arrays in NumPy
C. Outer products in NumPy
References
Why is linear algebra important in machine learning? Machine learning methods often involve large amounts of data, and linear algebra provides a clever way to analyze and manipulate them. To make the argument concrete, let's take a look at a sample dataset.
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from IPython.display import Image, IFrame
np.set_printoptions(suppress=True, linewidth=120, precision=2)
We can load the Boston dataset from the sklearn package, a very popular and easy-to-use machine learning package for Python. It implements many kinds of machine learning algorithms and utility functions. The loaded dataset has the following attributes.
boston = load_boston()
print(boston.__dir__())
print(boston.DESCR)
The data and target values are stored in arrays of type numpy.ndarray. In the data array, each row corresponds to a sample, a Boston suburb or town in this example, and each column corresponds to a feature described above. Note that numpy.ndarray is not just a multi-dimensional array (like list in Python). It implements many useful numeric methods and indexing features. Refer to the ndarray document and indexing document for the details. Here, I show the first 10 samples, each of which consists of 13 feature values, and some of their statistics, by slicing the data array.
print(boston.feature_names)
print(boston.data[:10])
print('\nmean')
print(boston.data[:10].mean(axis=0))
print('\nvariance')
print(boston.data[:10].var(axis=0))
The target values are the following. Our task here is to predict the target value, or the "median value of owner-occupied homes in $1000's" in a Boston town, given its feature values such as "per capita crime rate by town" and "average number of rooms per dwelling."
print('MEDV')
print(boston.target[:10])
Linear regression is one of the simplest statistical models. It assumes that the target variable $y$ is explained by a weighted sum of feature values $x_1, x_2, \ldots, x_n$. In an equation,

$$y = w_1 x_1 + w_2 x_2 + \cdots + w_n x_n + b$$

where $b$ is a bias term. Intuitively, the $w_i x_i$ terms define the relative up/down from the standard target value. This standard value is what the bias term accounts for. You may wonder if the relationship is really that simple.
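To make this concrete, here is a minimal sketch in NumPy of a prediction as a weighted sum, assuming hypothetical weights w and bias b (placeholders, not learned values):

# Hypothetical weights and bias, for illustration only (not learned values).
w = np.random.randn(13)   # one weight per feature
b = 22.0                  # a made-up bias term

x = boston.data[0]        # feature values of the first town
y_hat = np.dot(w, x) + b  # weighted sum of feature values plus bias
print(y_hat)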
"Essentially, all models are wrong, but some are useful."
George Box, 1987
Assuming that the linear regression model is valid and we know all the weights and the bias, we can estimate the median house price of a Boston town from its feature values. The bad news is that we don't know the weights... The good news is that we have training samples (pairs of feature values and target values)! We want to find a set of weights such that the equation holds for the training samples. To this end, we can solve systems of equations.
Great, we can solve it, ...or can we (more on this later)? Let's rewrite the equations with a better notation. For the $m$ training samples,

$$\begin{aligned}
y^{(1)} &= w_1 x_1^{(1)} + w_2 x_2^{(1)} + \cdots + w_n x_n^{(1)} + b \\
y^{(2)} &= w_1 x_1^{(2)} + w_2 x_2^{(2)} + \cdots + w_n x_n^{(2)} + b \\
&\vdots \\
y^{(m)} &= w_1 x_1^{(m)} + w_2 x_2^{(m)} + \cdots + w_n x_n^{(m)} + b
\end{aligned}$$

More simply,

$$\begin{bmatrix} x_1^{(1)} & \cdots & x_n^{(1)} & 1 \\ x_1^{(2)} & \cdots & x_n^{(2)} & 1 \\ \vdots & & \vdots & \vdots \\ x_1^{(m)} & \cdots & x_n^{(m)} & 1 \end{bmatrix}
\begin{bmatrix} w_1 \\ \vdots \\ w_n \\ b \end{bmatrix}
= \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{bmatrix}$$

or even...

$$Xw = y$$

Yes, this is beautiful. This notation is used in linear algebra, and it is a very powerful tool given to us to tackle machine learning problems. The objective here is to find a set of weights $w$ that solves this equation. We call this process learning from data.
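As a sketch of what $X$ and $y$ look like for the Boston data, one common convention (an assumption here, not necessarily the choice made later in this notebook) is to append a column of ones to the features so the bias is absorbed into the weight vector:

# Append a ones column so the bias b becomes just another weight (one common convention).
X = np.hstack([boston.data, np.ones((boston.data.shape[0], 1))])
y = boston.target
print(X.shape, y.shape)  # one row per sample; one unknown weight per column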
A matrix is a rectangular array of numbers. The dimension of a matrix is the number of rows by the number of columns: an $m \times n$ matrix $A$ has $m$ rows and $n$ columns. $A_{ij}$ is the $(i, j)$ entry of $A$, which is in the $i$th row and the $j$th column.
A = np.array(np.arange(0, 6)).reshape((2, 3))
print(A)
print(A.shape)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        print("{},{} entry: {}".format(i, j, A[i, j]))
A vector is an $n \times 1$ matrix. Here $y$ is said to be a 4-dimensional vector because it has 4 elements in it. $y_i$ denotes the $i$th element of $y$.
y = np.array(2*np.arange(0, 4))
print(y)
print(y.shape)
for i in range(y.shape[0]):
    print("{} element: {}".format(i, y[i]))
A = np.array([[1, 0],
[2, 5],
[3, 1]])
B = np.array([[4, 0.5],
[2, 5],
[0, 1]])
assert A.shape == B.shape
print(A + B)
A = np.array([[1, 0],
[2, 5],
[3, 1]])
print(3*A)
$A$: $m \times n$ matrix ($m$ rows, $n$ columns)

$x$: $n \times 1$ matrix ($n$-dimensional vector)

$y = Ax$: $m \times 1$ matrix ($m$-dimensional vector)

To get $y_i$, multiply $A$'s $i$th row with the vector $x$ element-wise, then add them up.

Hint: $y_i = \sum_{j=1}^{n} A_{ij} x_j$
A = np.array([[1, 2, 1, 5],
[0, 3, 0, 4],
[-1, -2, 0, 0]])
x = np.array([[1],
[3],
[2],
[1]])
assert x.shape[0] == A.shape[1]
y = np.dot(A, x)
y = A.dot(x) # another way to calculate the dot product
print(y)
$A$: $l \times m$ matrix ($l$ rows, $m$ columns)

$B$: $m \times n$ matrix ($m$ rows, $n$ columns)

$C = AB$: $l \times n$ matrix ($l$ rows, $n$ columns)

Hint: $C_{ij} = \sum_{k=1}^{m} A_{ik} B_{kj}$

Note that $AB$ and $BA$ are not the same, i.e., matrix multiplication is NOT commutative. Actually, the latter is not necessarily even defined. Check the dimensions.
A = np.array([[1, 4],
[5, 3],
[2, 6]])
B = np.array([[1, 8, 7, 4],
[5, 6, 2, 3]])
print(A)
print(B)
print(A.dot(B))
try:
    print(B.dot(A))
except ValueError as e:
    print(e)
Is linear algebra all about saving paper? Definitely NO! Do you remember terms such as linear independence, rank, span, etc. that you learned in the linear algebra course? Did you get the ideas? Being able to calculate them is important, but understanding the concepts is at least as important for the purpose of this course (we can use computers for calculation, after all). Let's review the properties of matrices while solving the linear regression of Boston house prices. The goal is to solve the following equation:

$$Xw = y$$

where $X \in \mathbb{R}^{m \times n}$ and $y \in \mathbb{R}^{m}$. $m$ is greater than $n$ because there are more samples than features (remember, rows are samples). In other words, $X$ is a vertically long matrix.

Here, let's assume that all the features (columns of $X$) are linearly independent.
A set of vectors $\{x_1, x_2, \ldots, x_n\} \subset \mathbb{R}^m$ is said to be (linearly) independent if no vector can be represented as a linear combination of the remaining vectors. [2]
Otherwise, it is linearly dependent. For example, if we have temperature in Fahrenheit and in Celsius as two different features, the latter is represented in terms of the former as

$$x_{\text{Celsius}} = \frac{5}{9}\left(x_{\text{Fahrenheit}} - 32\right)$$

Such features are linearly dependent. For another example, if we have a categorical feature like gender, we could have two columns, one for male and the other for female. For male samples we have ones in the male column and zeros in the female column, and the opposite for female samples. Did you notice that we have a linear dependence here, because these features can be represented in the form

$$x_{\text{male}} = 1 - x_{\text{female}}$$
For a matrix $A \in \mathbb{R}^{m \times n}$ where $m \ge n$, if its columns are linearly independent, it is said to be full rank. Formally,
The column rank of a matrix $A \in \mathbb{R}^{m \times n}$ is the size of the largest subset of columns of $A$ that constitute a linearly independent set. With some abuse of terminology, this is often referred to simply as the number of linearly independent columns of $A$. In the same way, the row rank is the largest number of rows of $A$ that constitute a linearly independent set.

For any matrix $A \in \mathbb{R}^{m \times n}$, it turns out that the column rank of $A$ is equal to the row rank of $A$ (though we will not prove this), and so both quantities are referred to collectively as the rank of $A$, denoted as $\operatorname{rank}(A)$.

For $A \in \mathbb{R}^{m \times n}$, $\operatorname{rank}(A) \le \min(m, n)$. If $\operatorname{rank}(A) = \min(m, n)$, then $A$ is said to be full rank. [2]
Therefore, the first statement holds.
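A quick sketch of checking rank with NumPy's np.linalg.matrix_rank (the Fahrenheit/Celsius values below are made up for illustration):

F = np.array([32.0, 50.0, 68.0, 86.0])       # temperatures in Fahrenheit
C = 5.0 / 9.0 * (F - 32.0)                   # the same temperatures in Celsius
M = np.column_stack([np.ones(4), F, C])      # a constant column, F, and C
print(np.linalg.matrix_rank(M))              # 2, not 3: the columns are linearly dependent
print(np.linalg.matrix_rank(boston.data), boston.data.shape[1])  # full column rank?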
The inverse of a square matrix $A \in \mathbb{R}^{n \times n}$ is denoted $A^{-1}$, and is the unique matrix such that

$$A^{-1}A = I = AA^{-1}$$

Note that not all matrices have inverses. Non-square matrices, for example, do not have inverses by definition. However, for some square matrices $A$, it may still be the case that $A^{-1}$ does not exist. In particular, we say that $A$ is invertible or non-singular if $A^{-1}$ exists and non-invertible or singular otherwise. In order for a square matrix $A$ to have an inverse $A^{-1}$, $A$ must be full rank. [2]
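As a small sketch, np.linalg.inv computes the inverse of a full-rank square matrix and raises an error for a singular one:

S = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.inv(S).dot(S))      # approximately the identity matrix

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])   # second row is twice the first, so not full rank
try:
    print(np.linalg.inv(singular))
except np.linalg.LinAlgError as e:
    print(e)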
So, again, let's assume that the columns of $X$ are linearly independent, i.e., $X$ is full rank. Here's our first attempt to solve the equation for $w$:

$$Xw = y \;\Rightarrow\; w = X^{-1}y$$

Can we do this?

Remember that our $X$ is a vertically long matrix, i.e., non-square, and cannot be inverted by definition.

Therefore, we can't do $w = X^{-1}y$.
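NumPy agrees, as this small sketch shows; np.linalg.inv refuses a non-square input:

try:
    np.linalg.inv(boston.data)      # 506 x 13: not square, so no inverse
except np.linalg.LinAlgError as e:
    print(e)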
By convention, an $n$-dimensional vector is often thought of as a matrix with $n$ rows and 1 column, known as a column vector. If we want to explicitly represent a row vector (a matrix with 1 row and $n$ columns), we typically write $x^T$. [2]
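A small sketch of this convention in NumPy (see Appendix B for more): a 1-dimensional array has no row/column orientation, so transposing it does nothing, while an explicit column vector transposes to a row vector.

x = np.array([1.0, 2.0, 3.0])
print(x.shape, x.T.shape)            # (3,) (3,): .T is a no-op on 1-D arrays
x_col = x.reshape(-1, 1)             # an explicit column vector
print(x_col.shape, x_col.T.shape)    # (3, 1) (1, 3)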
The transpose can be generalized to matrices.
The transpose of a matrix results from "flipping" the rows and columns. Given a matrix $A \in \mathbb{R}^{m \times n}$, its transpose, written $A^T \in \mathbb{R}^{n \times m}$, is the $n \times m$ matrix whose entries are given by

$$(A^T)_{ij} = A_{ji}$$

[2]
What is the dimension of $A^TA$?
A = np.array([[1, 2],
[3, 4],
[5, 6]])
print(A)
print(np.dot(A.T, A))
$X^TX$ is always a square matrix ($n \times n$). If $X$ is full rank, $X^TX$ is also invertible.

$$\begin{aligned}
Xw &= y \\
X^TXw &= X^Ty \\
w &= (X^TX)^{-1}X^Ty
\end{aligned}$$

Note that the second line multiplies both sides by the transpose of $X$ from the left. Then, we can solve for $w$ because $X^TX$ is invertible.
This is a valid algebraic approach.
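Here is a sketch of this algebraic solution on the Boston data, assuming (as above) that the bias is absorbed by appending a column of ones to the features; np.linalg.lstsq serves as a cross-check.

X = np.hstack([boston.data, np.ones((boston.data.shape[0], 1))])
y = boston.target

w = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)    # w = (X^T X)^{-1} X^T y
print(w)

w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # NumPy's least-squares solver
print(np.abs(w - w_lstsq).max())                 # the two solutions should differ only by tiny numerical error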
Why don't we get some intuition behind the solution? Consider the linear system

$$Xw = y$$

where $X \in \mathbb{R}^{3 \times 2}$ and $y \in \mathbb{R}^{3}$. This equation is saying that $y$ is a linear combination of the column vectors of $X$:

$$y = w_1 x_1 + w_2 x_2$$

However, there are no such weights.
In linear algebra terminology, $y$ doesn't lie in the column space of $X$, that is, the space that the column vectors of $X$ span. Formally,
The span of a set of vectors $\{x_1, x_2, \ldots, x_n\}$ is the set of all vectors that can be expressed as a linear combination of $\{x_1, x_2, \ldots, x_n\}$. That is,

$$\operatorname{span}(\{x_1, \ldots, x_n\}) = \left\{ v : v = \sum_{i=1}^{n} \alpha_i x_i, \; \alpha_i \in \mathbb{R} \right\}$$

[2]

In particular, when the $x_i$'s are the columns of a matrix $X$, their span is said to be the range or the column space of $X$, denoted $\mathcal{R}(X)$.
Back to the equation: although the target vector $y$ is 3-dimensional, there are only two column vectors to span the space, i.e., the range of $X$ is just a 2-dimensional plane. Therefore, there certainly exist 3-dimensional vectors that don't lie on this plane, like the $y$ above. Visually, it looks something like below.
Image('../images/linear-algebra/4.5.png', height=300, width=400)
But we want to represent $y$ in terms of the $x_i$'s. The best we can do is to find a vector that lies in the range of $X$ but is also as close to $y$ as possible. This objective can be formulated using a norm: find $w$ that minimizes $\|Xw - y\|$.
A norm of a vector $\|x\|$ is informally a measure of the "length" of the vector. For example, we have the commonly-used Euclidean or $\ell_2$ norm,

$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$$

Note that $\|x\|_2^2 = x^Tx$. [2]
If you take the norm of the difference of two vectors, it is a measure of the distance between them. There are several types of norms; another popular one is the $\ell_1$ norm. Given a vector $x \in \mathbb{R}^n$,

$$\|x\|_1 = \sum_{i=1}^{n} |x_i|$$
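A quick sketch of computing these norms with np.linalg.norm:

v = np.array([3.0, -4.0])
print(np.linalg.norm(v))             # l2 norm: sqrt(3^2 + 4^2) = 5.0
print(np.sqrt(v.dot(v)))             # the same, via x^T x
print(np.linalg.norm(v, ord=1))      # l1 norm: |3| + |-4| = 7.0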
Let's use the $\ell_2$ norm as a measure of distance for now. For convenience, we can minimize the square of the $\ell_2$ norm without loss of generality. To find the weights $w$ that minimize $\|Xw - y\|_2^2$, we can take its derivative with respect to $w$ and set it to zero. Easy, right?
To this end, the notion of gradient, which is a natural extension of partial derivatives to a vector setting, comes in handy.
Suppose that $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ is a function that takes as input a matrix $A$ of size $m \times n$ and returns a real value. Then the gradient of $f$ (with respect to $A \in \mathbb{R}^{m \times n}$) is the matrix of partial derivatives, defined as:

$$\nabla_A f(A) \in \mathbb{R}^{m \times n} = \begin{bmatrix}
\frac{\partial f(A)}{\partial A_{11}} & \frac{\partial f(A)}{\partial A_{12}} & \cdots & \frac{\partial f(A)}{\partial A_{1n}} \\
\frac{\partial f(A)}{\partial A_{21}} & \frac{\partial f(A)}{\partial A_{22}} & \cdots & \frac{\partial f(A)}{\partial A_{2n}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial f(A)}{\partial A_{m1}} & \frac{\partial f(A)}{\partial A_{m2}} & \cdots & \frac{\partial f(A)}{\partial A_{mn}}
\end{bmatrix}$$

i.e., an $m \times n$ matrix with

$$(\nabla_A f(A))_{ij} = \frac{\partial f(A)}{\partial A_{ij}}$$
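As a sketch of this definition in action, the gradient of $f(w) = \|Xw - y\|_2^2$ with respect to $w$ is $2X^T(Xw - y)$ (a standard result); the code below checks it numerically on a small random problem of my own construction.

rng = np.random.RandomState(0)
X_s, y_s, w_s = rng.randn(5, 3), rng.randn(5), rng.randn(3)

f = lambda w: np.sum((X_s.dot(w) - y_s) ** 2)        # f(w) = ||Xw - y||_2^2
grad_analytic = 2 * X_s.T.dot(X_s.dot(w_s) - y_s)    # 2 X^T (Xw - y)

eps = 1e-6
grad_numeric = np.array([(f(w_s + eps * e) - f(w_s - eps * e)) / (2 * eps)
                         for e in np.eye(3)])        # central finite differences
print(grad_analytic)
print(grad_numeric)                                  # should agree to several decimals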