This notebook is meant to be a quick refresher on linear algebra and a brief introduction to `NumPy` (a Python package for scientific computing); it is by no means a thorough review. I assume that you have prior experience with linear algebra, such as an introductory course taken a while ago. The goal is to go over some of the important properties of matrices and showcase some of the `NumPy` methods through practical examples. We consider linear regression and three different solutions: algebraic, analytic, and geometric. I heavily cite and highly recommend Kolter's review notes on linear algebra [2].

- 1 Introduction
  - 1.1 Boston house prices dataset
  - 1.2 Linear regression model
- 2 Matrices and vectors
  - 2.1 Matrix
  - 2.2 Vector
- 3 Basic operations on matrices and vectors
  - 3.1 Matrix addition
  - 3.2 Scalar matrix multiplication
  - 3.3 Matrix vector multiplication
  - 3.4 Matrix matrix multiplication
- 4 Properties of matrices
  - 4.1 Linear independence
  - 4.2 Rank
  - 4.3 Inverse
  - 4.4 Transpose
  - 4.5 Span and range
  - 4.6 Norm
  - 4.7 Gradient
  - 4.8 Matrix calculus
  - 4.9 Dot products
  - 4.10 Projections
  - 4.11 Orthogonality
- 5 Discussion and conclusions
- Appendix
  - A. Linear algebra visualized
  - B. Transpose and 1-dimensional arrays in NumPy
  - C. Outer products in NumPy
- Reference

Why is linear algebra important in machine learning? Machine learning methods often involve large amounts of data, and linear algebra provides a clever way to analyze and manipulate them. To make the argument concrete, let's take a look at a sample dataset.

In [1]:

```
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from IPython.display import Image, IFrame
np.set_printoptions(suppress=True, linewidth=120, precision=2)
```

We can load the Boston dataset from the `sklearn` package, which is a very popular and easy-to-use machine learning package for Python. It implements many kinds of machine learning algorithms and utility functions. The loaded dataset has the following attributes.

In [2]:

```
boston = load_boston()
print(boston.__dir__())
```

In [3]:

```
print(boston.DESCR)
```

The data and target values are stored in arrays of type `numpy.ndarray`. In the data array, each row corresponds to a sample, a Boston suburb or town in this example, and each column corresponds to a feature described above. Note that `numpy.ndarray` is not just a multi-dimensional array (or `list` in Python); it implements many useful numeric methods and indexing features. Refer to the ndarray document and indexing document for the details. Here, I show the first 10 samples, each of which consists of 13 feature values, and some of their statistics by slicing the data array.

In [4]:

```
print(boston.feature_names)
print(boston.data[:10])
print('\nmean')
print(boston.data[:10].mean(axis=0))
print('\nvariance')
print(boston.data[:10].var(axis=0))
```

The target values are the following. Our task here is to predict the target value, or the "median value of owner-occupied homes in $1000's" in a Boston town, given its feature values such as "per capita crime rate by town" and "average number of rooms per dwelling."

In [5]:

```
print('MEDV')
print(boston.target[:10])
```

Linear regression is one of the simplest statistical models. It assumes that the target variable $y$ is explained by a weighted sum of feature values ${x}_{1},{x}_{2},\dots ,{x}_{n}$. In an equation,

$${y}_{MEDV}={w}_{CRIM}{x}_{CRIM}+{w}_{ZN}{x}_{ZN}+\cdots +{w}_{LSTAT}{x}_{LSTAT}+b$$

where $b$ is a bias term. Intuitively, the ${w}_{\ast}{x}_{\ast}$ terms define the relative up/down from a standard target value, and that standard value is what the bias term accounts for. You may wonder if the relationship is really that simple.

"Essentially, all models are wrong, but some are useful."

George Box, 1987
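To make the weighted sum concrete, here is a toy computation with entirely hypothetical weights, bias, and feature values (made up for illustration, not fitted to the Boston data):

```python
import numpy as np

# Hypothetical weights, bias, and one sample's feature values --
# for illustration only, not fitted to any real data.
w = np.array([-0.5, 0.02, 1.8])   # one weight per feature
x = np.array([0.1, 20.0, 6.0])    # e.g. crime rate, zoned land %, rooms
b = 22.0                          # bias: a "standard" target value

y_hat = w.dot(x) + b              # weighted sum of features plus bias
print(y_hat)                      # 33.15
```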

Assuming that the linear regression model is valid and we know all the weights and the bias, we can estimate the median house price of a Boston town from its feature values. The bad news is that we don't know the weights... The good news is that we have training samples (pairs of feature values and targets)! We want to find a set of weights such that the equation holds for the training samples. To this end, we can solve systems of equations.

$$\{\begin{array}{ll}{y}_{MEDV}^{(1)}& ={w}_{CRIM}{x}_{CRIM}^{(1)}+{w}_{ZN}{x}_{ZN}^{(1)}+\cdots +{w}_{LSTAT}{x}_{LSTAT}^{(1)}+b\\ {y}_{MEDV}^{(2)}& ={w}_{CRIM}{x}_{CRIM}^{(2)}+{w}_{ZN}{x}_{ZN}^{(2)}+\cdots +{w}_{LSTAT}{x}_{LSTAT}^{(2)}+b\\ & \phantom{\rule{thinmathspace}{0ex}}\vdots \\ {y}_{MEDV}^{(n)}& ={w}_{CRIM}{x}_{CRIM}^{(n)}+{w}_{ZN}{x}_{ZN}^{(n)}+\cdots +{w}_{LSTAT}{x}_{LSTAT}^{(n)}+b\end{array}$$

Great, we can solve it, ...or can we (more on this later)? Let's rewrite the equations with a better notation.

$$\left[\begin{array}{c}{y}_{MEDV}^{(1)}\\ {y}_{MEDV}^{(2)}\\ \vdots \\ {y}_{MEDV}^{(n)}\end{array}\right]=\left[\begin{array}{ccccc}{x}_{CRIM}^{(1)}& {x}_{ZN}^{(1)}& \dots & {x}_{LSTAT}^{(1)}& 1\\ {x}_{CRIM}^{(2)}& {x}_{ZN}^{(2)}& \dots & {x}_{LSTAT}^{(2)}& 1\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ {x}_{CRIM}^{(n)}& {x}_{ZN}^{(n)}& \dots & {x}_{LSTAT}^{(n)}& 1\end{array}\right]\left[\begin{array}{c}{w}_{CRIM}\\ {w}_{ZN}\\ \vdots \\ {w}_{LSTAT}\\ b\end{array}\right]$$

More simply,

$$\left[\begin{array}{c}{y}^{(1)}\\ {y}^{(2)}\\ \vdots \\ {y}^{(n)}\end{array}\right]=\left[\begin{array}{c}{\mathit{x}}^{(1)}\\ {\mathit{x}}^{(2)}\\ \vdots \\ {\mathit{x}}^{(n)}\end{array}\right]\mathit{w}$$

or even...

$$\mathit{y}=X\mathit{w}$$

Yes, this is beautiful. This notation is used in linear algebra, and it is a very powerful tool for tackling machine learning problems. The objective here is to find a set of weights $\mathit{w}$ that solves this equation. We call this process learning from data.
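In code, absorbing the bias $b$ into $\mathit{w}$ amounts to appending a column of ones to the data matrix, exactly as in the block matrix above. A minimal sketch with random stand-in data (not the Boston dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X_raw = rng.random((5, 3))                  # 5 samples, 3 features (stand-in data)
X = np.hstack([X_raw, np.ones((5, 1))])     # append a ones column so b folds into w
print(X.shape)                              # one extra column for the bias
```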

$$A=\left[\begin{array}{ccc}0& 1& 2\\ 3& 4& 5\end{array}\right]\phantom{\rule{1em}{0ex}}A\in {\mathbb{R}}^{2\times 3}$$

A matrix is a rectangular array of numbers. The dimension of a matrix is the number of rows by the number of columns. ${A}_{ij}$ is the $i,j$ entry of $A$, which is in the $i$th row and the $j$th column.

In [6]:

```
A = np.array(np.arange(0, 6)).reshape((2, 3))
print(A)
print(A.shape)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        print("{},{} entry: {}".format(i, j, A[i, j]))
```

$$\mathit{y}=\left[\begin{array}{c}0\\ 2\\ 4\\ 6\end{array}\right]\phantom{\rule{1em}{0ex}}\mathit{y}\in {\mathbb{R}}^{4}$$

A vector is an $n\times 1$ matrix. Here $\mathit{y}$ is said to be a 4-dimensional vector because it has 4 elements in it. ${\mathit{y}}_{i}$ denotes the $i$th element of $\mathit{y}$.

In [7]:

```
y = np.array(2*np.arange(0, 4))
print(y)
print(y.shape)
for i in range(y.shape[0]):
    print("{} element: {}".format(i, y[i]))
```

$$\left[\begin{array}{cc}1& 0\\ 2& 5\\ 3& 1\end{array}\right]+\left[\begin{array}{cc}4& 0.5\\ 2& 5\\ 0& 1\end{array}\right]=\left[\begin{array}{cc}5& 0.5\\ 4& 10\\ 3& 2\end{array}\right]$$

The shapes have to be the same.

In [8]:

```
A = np.array([[1, 0],
              [2, 5],
              [3, 1]])
B = np.array([[4, 0.5],
              [2, 5],
              [0, 1]])
assert A.shape == B.shape
print(A + B)
```

$$3\left[\begin{array}{cc}1& 0\\ 2& 5\\ 3& 1\end{array}\right]=\left[\begin{array}{cc}3& 0\\ 6& 15\\ 9& 3\end{array}\right]$$

In [9]:

```
A = np.array([[1, 0],
              [2, 5],
              [3, 1]])
print(3*A)
```

$$A\mathit{x}=\mathit{y}$$

$A:m\times n$ matrix (m rows, n columns)

$\mathit{x}:n\times 1$ matrix (n-dimensional vector)

$\mathit{y}:m\times 1$ matrix (m-dimensional vector)

To get ${y}_{i}$, multiply $A$'s $i$th row with vector $\mathit{x}$ element-wise, then add the products up.

$$\left[\begin{array}{cccc}1& 2& 1& 5\\ 0& 3& 0& 4\\ -1& -2& 0& 0\end{array}\right]\left[\begin{array}{c}1\\ 3\\ 2\\ 1\end{array}\right]=\phantom{\rule{thinmathspace}{0ex}}?$$

Hint: ${\mathbb{R}}^{3\times 4}\times {\mathbb{R}}^{4\times 1}={\mathbb{R}}^{3\times 1}$

In [10]:

```
A = np.array([[ 1,  2, 1, 5],
              [ 0,  3, 0, 4],
              [-1, -2, 0, 0]])
x = np.array([[1],
              [3],
              [2],
              [1]])
assert x.shape[0] == A.shape[1]
y = np.dot(A, x)
y = A.dot(x)  # another way to calculate the dot product
print(y)
```

$$AB=C$$

$A:l\times m$ matrix (l rows, m columns)

$B:m\times n$ matrix (m rows, n columns)

$C:l\times n$ matrix (l rows, n columns)

$$\left[\begin{array}{cc}1& 4\\ 5& 3\\ 2& 6\end{array}\right]\left[\begin{array}{cccc}1& 8& 7& 4\\ 5& 6& 2& 3\end{array}\right]=\phantom{\rule{thinmathspace}{0ex}}?$$

Hint: ${\mathbb{R}}^{3\times 2}\times {\mathbb{R}}^{2\times 4}={\mathbb{R}}^{3\times 4}$

Note that $AB$ and $BA$ are not the same, i.e. matrix multiplication is NOT commutative. In fact, the latter is not necessarily even defined. Check the dimensions.

In [11]:

```
A = np.array([[1, 4],
              [5, 3],
              [2, 6]])
B = np.array([[1, 8, 7, 4],
              [5, 6, 2, 3]])
print(A)
print(B)
```

In [12]:

```
print(A.dot(B))
```

In [13]:

```
try:
    print(B.dot(A))
except ValueError as e:
    print(e)
```

Is linear algebra all about saving paper? Definitely NO! Do you remember terminology such as *linear independence*, *rank*, and *span* from your linear algebra course? Did you get the ideas? Being able to calculate these quantities is important, but understanding the concepts is at least as important for the purpose of this course (we can use computers for calculation, after all). Let's review the properties of matrices while solving the linear regression of Boston house prices. The goal is to solve the following equation.

$$\mathit{y}=X\mathit{w}$$

where $X\in {\mathbb{R}}^{m\times n}$ and $m>n$. $m$ is greater than $n$ because there are more samples than the number of features (remember rows are samples). In other words, $X$ is a vertically long matrix.

Here, let's assume that all the features (columns of $X$) are linearly independent.

A set of vectors $\{{\mathit{x}}_{1},{\mathit{x}}_{2},\dots ,{\mathit{x}}_{n}\}\in {\mathbb{R}}^{m}$ is said to be (linearly) independent if no vector can be represented as a linear combination of the remaining vectors. [2]

Otherwise, it is linearly dependent. For example, if we have temperature in Fahrenheit and in Celsius as two different features, the former is represented in terms of the latter as

$$F=\frac{9}{5}C+32.$$

Such features are linearly dependent. For another example, if we have a categorical feature like gender, we could have two columns, one for male and the other for female. Male samples have ones in the male column and zeros in the female column, and the opposite holds for female samples. Notice that we have a linear dependence here, because one feature can be represented in terms of the other in the form

$$F=-M+1.$$
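We can check this dependence numerically. A small sketch with hypothetical one-hot gender columns:

```python
import numpy as np

# Hypothetical one-hot encoded gender columns for four samples.
male   = np.array([1., 0., 0., 1.])
female = np.array([0., 1., 1., 0.])

# F = -M + 1 holds element-wise: the two columns are linearly dependent.
print(np.array_equal(female, -male + 1))   # True

# Together with a constant (bias) column, the matrix is rank-deficient.
X = np.column_stack([np.ones(4), male, female])
print(np.linalg.matrix_rank(X))            # 2, not 3
```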

For a matrix $A\in {\mathbb{R}}^{m\times n}$ where $m>n$, if its columns are linearly independent, it is said to be full rank. Formally,

The column rank of a matrix $A\in {\mathbb{R}}^{m\times n}$ is the size of the largest subset of columns of $A$ that constitute a linearly independent set. With some abuse of terminology, this is often referred to simply as the number of linearly independent columns of $A$. In the same way, the row rank is the largest number of rows of $A$ that constitute a linearly independent set.

For any matrix $A\in {\mathbb{R}}^{m\times n}$, it turns out that the column rank of $A$ is equal to the row rank of $A$ (though we will not prove this), and so both quantities are referred to collectively as the rank of $A$, denoted as $rank(A)$.

For $A\in {\mathbb{R}}^{m\times n}$, $rank(A)\le min(m,n)$. If $rank(A)=min(m,n)$, then $A$ is said to be full rank. [2]

Therefore, the first statement holds.
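NumPy can compute the rank directly with `np.linalg.matrix_rank`. A quick sketch: a random tall matrix is (almost surely) full rank, and forcing a column dependence drops the rank:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 3))              # random 5x3: almost surely full rank
print(np.linalg.matrix_rank(A))     # min(5, 3) = 3

B = A.copy()
B[:, 2] = 2 * B[:, 0]               # make the third column depend on the first
print(np.linalg.matrix_rank(B))     # drops to 2
```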

The inverse of a square matrix $A\in {\mathbb{R}}^{n\times n}$ is denoted ${A}^{-1}$, and is the unique matrix such that

$${A}^{-1}A=I=A{A}^{-1}.$$

Note that not all matrices have inverses. Non-square matrices, for example, do not have inverses by definition. However, for some square matrices $A$, it may still be the case that ${A}^{-1}$ does not exist. In particular, we say that $A$ is invertible or non-singular if ${A}^{-1}$ exists, and non-invertible or singular otherwise. In order for a square matrix $A$ to have an inverse ${A}^{-1}$, $A$ must be full rank. [2]

So, again, let's assume that the columns of $X$ are linearly independent, i.e. $X$ is full rank. Here's our first attempt to solve the equation for $\mathit{w}$.

$$\begin{array}{rl}\mathit{y}& =X\mathit{w}\\ \mathit{w}& ={X}^{-1}\mathit{y}\end{array}$$

Can we do this?

Remember that our $X$ is a vertically long matrix i.e. non-square, and cannot be inverted by definition.

Therefore, we can't do $\mathit{w}={X}^{-1}\mathit{y}$.
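NumPy agrees: `np.linalg.inv` refuses non-square input. A quick sketch:

```python
import numpy as np

X = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])       # 3x2: vertically long, non-square

try:
    np.linalg.inv(X)           # only square matrices can be inverted
except np.linalg.LinAlgError as e:
    print(e)
```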

By convention, an n-dimensional vector is often thought of as a matrix with n rows and 1 column, known as a column vector. If we want to explicitly represent a row vector, a matrix with 1 row and n columns, we typically write ${\mathit{x}}^{T}$. [2]

$$\mathit{y}=\left[\begin{array}{c}0\\ 1\\ 2\\ 3\end{array}\right]\phantom{\rule{1em}{0ex}}{\mathit{y}}^{T}=\left[\begin{array}{cccc}0& 1& 2& 3\end{array}\right]$$
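One NumPy subtlety worth noting here (Appendix B goes deeper): `.T` does nothing on a 1-dimensional array, so an explicit column vector needs a reshape. A quick sketch:

```python
import numpy as np

y1d = np.arange(4)               # 1-D array: shape (4,)
print(y1d.T.shape)               # (4,) -- .T is a no-op on 1-D arrays

y_col = y1d.reshape(-1, 1)       # explicit column vector: shape (4, 1)
print(y_col.T.shape)             # (1, 4) -- transpose now behaves as expected
```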

The transpose can be generalized to matrices.

The transpose of a matrix results from "flipping" the rows and columns. Given a matrix $A\in {\mathbb{R}}^{m\times n}$, its transpose, written ${A}^{T}\in {\mathbb{R}}^{n\times m}$, is the $n\times m$ matrix whose entries are given by

$$({A}^{T})_{ij}={A}_{ji}.$$ [2]

$$A=\left[\begin{array}{cc}1& 2\\ 3& 4\\ 5& 6\end{array}\right]\phantom{\rule{1em}{0ex}}{A}^{T}=\left[\begin{array}{ccc}1& 3& 5\\ 2& 4& 6\end{array}\right]$$

What is the dimension of ${A}^{T}A$?

In [14]:

```
A = np.array([[1, 2],
[3, 4],
[5, 6]])
print(A)
print(np.dot(A.T, A))
```

${A}^{T}A$ is always a square matrix (${\mathbb{R}}^{n\times m}\times {\mathbb{R}}^{m\times n}={\mathbb{R}}^{n\times n}$). If $A$ is full rank, ${A}^{T}A$ is also invertible.
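We can verify this claim numerically for the small matrix above (a sanity check, not a proof):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])                  # full-rank 3x2 matrix

G = A.T.dot(A)                            # 2x2, square
print(np.linalg.matrix_rank(G))           # 2: full rank, hence invertible
print(np.allclose(np.linalg.inv(G).dot(G), np.eye(2)))   # True
```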

$$\begin{array}{rl}\mathit{y}& =X\mathit{w}\\ {X}^{T}\mathit{y}& ={X}^{T}X\mathit{w}\\ \mathit{w}& ={({X}^{T}X)}^{-1}{X}^{T}\mathit{y}\end{array}$$

Note that the second line multiplies both sides by the transpose of $X$ from the left. Then, we can solve for $\mathit{w}$ because ${X}^{T}X$ is invertible.

This is a valid algebraic approach.
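A minimal sketch of this solution on synthetic, noise-free data (a random stand-in for the Boston matrix; `np.linalg.solve` is used instead of forming the inverse explicitly, which is numerically preferable):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.hstack([rng.random((20, 3)), np.ones((20, 1))])  # synthetic design matrix with bias column
w_true = np.array([1.0, -2.0, 0.5, 3.0])                # hypothetical "true" weights
y = X.dot(w_true)                                       # noise-free targets

# Normal equations: solve (X^T X) w = X^T y for w.
w = np.linalg.solve(X.T.dot(X), X.T.dot(y))
print(np.allclose(w, w_true))                           # recovers the true weights
```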

Why don't we build some intuition behind the solution? Consider the linear system

$$\mathit{y}=X\mathit{w}$$

where

$$\mathit{y}=\left[\begin{array}{c}1\\ 2\\ 2\end{array}\right]\phantom{\rule{1em}{0ex}}X=\left[\begin{array}{cc}1& 1\\ 1& 2\\ 1& 3\end{array}\right]\phantom{\rule{1em}{0ex}}\mathit{w}=\left[\begin{array}{c}{w}_{1}\\ {w}_{2}\end{array}\right].$$

This equation is saying that $\mathit{y}$ is a linear combination of column vectors of $X$.

$$\left[\begin{array}{c}1\\ 2\\ 2\end{array}\right]={w}_{1}\left[\begin{array}{c}1\\ 1\\ 1\end{array}\right]+{w}_{2}\left[\begin{array}{c}1\\ 2\\ 3\end{array}\right]$$

However, no such weights exist: the first two equations force ${w}_{1}=0$ and ${w}_{2}=1$, but then the third equation gives ${w}_{1}+3{w}_{2}=3\ne 2$.
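We can confirm the inconsistency numerically by solving the first two equations and checking the third:

```python
import numpy as np

y = np.array([1., 2., 2.])
X = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])

w = np.linalg.solve(X[:2], y[:2])   # first two equations give w1 = 0, w2 = 1
print(w)

print(X[2].dot(w))                  # third equation yields 3, but y[2] is 2
```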

In linear algebra's terminology, $\mathit{y}$ doesn't lie in the column space of $X$, the space that the column vectors of $X$ span. Formally,

The span of a set of vectors $\{{\mathit{x}}_{1},{\mathit{x}}_{2},\dots ,{\mathit{x}}_{n}\}$ is the set of all vectors that can be expressed as a linear combination of $\{{\mathit{x}}_{1},{\mathit{x}}_{2},\dots ,{\mathit{x}}_{n}\}$. That is,

$$\text{span}(\{{\mathit{x}}_{1},\dots ,{\mathit{x}}_{n}\})=\{\mathit{v}:\mathit{v}=\sum _{i=1}^{n}{\alpha}_{i}{\mathit{x}}_{i},{\alpha}_{i}\in \mathbb{R}\}.$$ [2]

In particular, when the $\mathit{x}$'s are the columns of a matrix $X$, their span is said to be the range or the column space of $X$, denoted $\mathcal{R}(X)$.

Back to the equation: although the target vector $\mathit{y}$ is 3-dimensional, there are only two column vectors that span the space, i.e. the range of $X$ is just a 2-dimensional plane. Therefore, there certainly exist 3-dimensional vectors that don't lie on this plane, like the $\mathit{y}$ above. Visually, it looks something like this.

In [15]:

```
Image('../images/linear-algebra/4.5.png', height=300, width=400)
```

Out[15]:

But we want to represent $\mathit{y}$ in terms of ${\mathit{x}}_{i}$'s. The best we can do is to find a vector that lies in the range of $X$, but is also as close to $\mathit{y}$ as possible.

This objective can be formulated using a norm: find the $\mathit{w}$ that minimizes $\Vert \mathit{y}-X\mathit{w}{\Vert}_{2}$.

A norm of a vector $\Vert x\Vert $ is informally a measure of the “length” of the vector. For example, we have the commonly-used Euclidean or ${\ell}_{2}$ norm,

$$\Vert \mathit{x}{\Vert}_{2}=\sqrt{\sum _{i=1}^{n}{x}_{i}^{2}}.$$

Note that $\Vert \mathit{x}{\Vert}_{2}^{2}={\mathit{x}}^{T}\mathit{x}$. [2]

If you take the norm of difference of vectors, it is a measure of distance between them. There are several types of norms, but another popular one is ${\ell}_{1}$ norm. Given a vector $\mathit{x}\in {\mathbb{R}}^{n}$,

$${\Vert \mathit{x}\Vert}_{1}=\sum _{i=1}^{n}|{x}_{i}|$$
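Both norms are available via `np.linalg.norm`; a quick check against the formulas:

```python
import numpy as np

x = np.array([3., -4.])
print(np.linalg.norm(x, 2))     # l2 norm: sqrt(9 + 16) = 5.0
print(np.sqrt(x.dot(x)))        # same value, since ||x||_2^2 = x^T x
print(np.linalg.norm(x, 1))     # l1 norm: |3| + |-4| = 7.0
```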

Let's use the ${\ell}_{2}$ norm as a measure of distance for now. For convenience, we can minimize the square of the ${\ell}_{2}$ norm without loss of generality. To find the weights that minimize $\Vert \mathit{y}-X\mathit{w}{\Vert}_{2}^{2}$, we can take its derivative with respect to $\mathit{w}$ and set it to zero. Easy, right?

To this end, the notion of gradient, which is a natural extension of partial derivatives to a vector setting, comes in handy.

Suppose that $f:{\mathbb{R}}^{m\times n}\to \mathbb{R}$ is a function that takes as input a matrix $A$ of size $m\times n$ and returns a real value. Then the gradient of $f$ (with respect to $A\in {\mathbb{R}}^{m\times n}$) is the matrix of partial derivatives, defined as:

$${\mathrm{\nabla}}_{A}f(A)\in {\mathbb{R}}^{m\times n}=\left[\begin{array}{cccc}\frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{11}}& \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{12}}& \dots & \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{1n}}\\ \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{21}}& \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{22}}& \dots & \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{2n}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{m1}}& \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{m2}}& \dots & \frac{\mathrm{\partial}f(A)}{\mathrm{\partial}{A}_{mn}}\end{array}\right]$$i.e., an $m\times n$ matrix with