Basic Linear Algebra Facts for Bio/Math 4230

Matrix Operations

A matrix is a grid of numbers in rows and columns: $\displaystyle
{\bf M}=\left(
\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right)$.

When you multiply a matrix by a scalar, that is, by a single number or parameter, the multiplication is done component by component:

\begin{displaymath}
5 \cdot {\bf M} = 5 \cdot \left(
\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right) =
\left( \begin{array}{cc}
5 & 10 \\
15 & 20
\end{array}\right).
\end{displaymath}
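
The same rule holds for any scalar; writing $s$ for the scalar (a symbol introduced here only for the general statement),

\begin{displaymath}
s \cdot \left(
\begin{array}{cc}
a & b \\
c & d
\end{array}\right) =
\left(
\begin{array}{cc}
s \cdot a & s \cdot b \\
s \cdot c & s \cdot d
\end{array}\right).
\end{displaymath}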

When you add two matrices, you also do it component by component:

\begin{displaymath}
\left(\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right) +
\left(\begin{array}{cc}
5 & 6 \\
7 & 8
\end{array}\right) =
\left(\begin{array}{cc}
6 & 8 \\
10 & 12
\end{array}\right).
\end{displaymath}
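
In general, two matrices can be added only when they have the same numbers of rows and columns, and each entry of the sum is the sum of the corresponding entries (the letters below are generic entries):

\begin{displaymath}
\left(\begin{array}{cc}
a & b \\
c & d
\end{array}\right) +
\left(\begin{array}{cc}
e & f \\
g & h
\end{array}\right) =
\left(\begin{array}{cc}
a + e & b + f \\
c + g & d + h
\end{array}\right).
\end{displaymath}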

When you multiply a matrix by a vector, however, each row of the matrix on the left is multiplied entry by entry against the column on the right, and the products are summed:

\begin{displaymath}
\left(\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right)
\left(\begin{array}{c} 5 \\ 6 \end{array}\right) =
\left(\begin{array}{c} 17 \\ 39 \end{array}\right) .
\end{displaymath}
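
Written out row by row, each entry of the result above is a row of the matrix multiplied against the column and summed:

\begin{displaymath}
\left(\begin{array}{c}
1 \cdot 5 + 2 \cdot 6 \\
3 \cdot 5 + 4 \cdot 6
\end{array}\right) =
\left(\begin{array}{c} 17 \\ 39 \end{array}\right) .
\end{displaymath}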

When matrices are multiplied by matrices, the same row-times-column rule is followed, applied to each column of the matrix on the right:

\begin{displaymath}
\left(\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right)
\left(\begin{array}{cc}
5 & 6 \\
7 & 8
\end{array}\right) =
\left(\begin{array}{cc}
19 & 22 \\
43 & 50
\end{array}\right) .
\end{displaymath}
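
Entry by entry, the product above is

\begin{displaymath}
\left(\begin{array}{cc}
1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\
3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8
\end{array}\right) =
\left(\begin{array}{cc}
19 & 22 \\
43 & 50
\end{array}\right) ,
\end{displaymath}

so each entry of the product comes from one row of the left matrix and one column of the right matrix.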

The Identity Matrix

The identity matrix, ${\bf I}$, is the matrix that, when multiplied by either a matrix or a vector, leaves it unchanged. That is, if ${\bf M}$ is a matrix and ${\bf v}$ is a vector, then ${\bf M} \cdot {\bf I} = {\bf M}$ and ${\bf I} \cdot {\bf v} = {\bf v}$. In two dimensions, $\displaystyle {\bf I} =
\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)$, and in three dimensions, $\displaystyle
{\bf I} =
\left(\begin{array}{ccc} 1 & 0 &0\\ 0 & 1& 0 \\ 0 & 0 & 1 \end{array}\right).
$
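
For example, with the matrix and vector from the previous section,

\begin{displaymath}
\left(\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right)
\left(\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}\right) =
\left(\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right),
\qquad
\left(\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}\right)
\left(\begin{array}{c} 5 \\ 6 \end{array}\right) =
\left(\begin{array}{c} 5 \\ 6 \end{array}\right).
\end{displaymath}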

Equivalence Between Matrices and Linear Equations

Matrix notation is a shorthand way to write down systems of linear equations, as in

\begin{displaymath}
\left(
\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right)
\left(
\begin{array}{c}
x \\ y
\end{array}\right) =
\left(
\begin{array}{c}
5 \\ 6
\end{array}\right),
\quad \mbox{which means} \quad
\begin{array}{c}
1 \cdot x + 2 \cdot y = 5,\\
3\cdot x + 4 \cdot y = 6.
\end{array}\end{displaymath}

Similarly,

\begin{displaymath}
\left(
\begin{array}{ccc}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{array}\right)
\left(
\begin{array}{c}
x \\ y \\ z
\end{array}\right) =
\left(
\begin{array}{c}
10 \\ 11 \\ 12
\end{array}\right),
\quad \mbox{which means} \quad
\begin{array}{c}
1 \cdot x + 2 \cdot y + 3 \cdot z = 10, \\
4 \cdot x + 5 \cdot y + 6 \cdot z = 11, \\
7 \cdot x + 8 \cdot y + 9 \cdot z = 12 .
\end{array}\end{displaymath}
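
In both cases the system has the compact form

\begin{displaymath}
{\bf M} \cdot {\bf v} = {\bf b},
\end{displaymath}

where ${\bf v}$ is the column vector of unknowns and ${\bf b}$ is the column vector of right-hand-side values (the symbol ${\bf b}$ is introduced here just for this shorthand).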

Matrix Multiplication as Mapping in the Plane

Think of $(x,y)$ as the coordinates of a point in the plane. If M $\displaystyle =\left(
\begin{array}{cc}
a & b \\
c & d
\end{array}\right)$ is a matrix, then

\begin{displaymath}
{\bf M}
\left(
\begin{array}{c}
x\\ y
\end{array}\right) =
\left(
\begin{array}{c}
a \cdot x + b \cdot y \\ c \cdot x + d \cdot y
\end{array}\right)
=
\left(
\begin{array}{c}
x'\\ y'
\end{array}\right)
\end{displaymath}

gives new coordinates, $(x',y')$, in the plane, which are related to the original coordinates. This is what is meant by a map from the original points, $(x,y)$, to the new points, $(x',y')$. There are only certain kinds of things a matrix map can do to the plane: it can stretch, compress, rotate, reflect, or shear points about the origin.
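
For example, the matrix used in the earlier examples maps the point $(1,1)$ to $(3,7)$:

\begin{displaymath}
\left(\begin{array}{cc}
1 & 2 \\
3 & 4
\end{array}\right)
\left(\begin{array}{c} 1 \\ 1 \end{array}\right) =
\left(\begin{array}{c} 3 \\ 7 \end{array}\right).
\end{displaymath}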

Eigenvalues and Eigenvectors

The easiest way to think of eigenvalues and eigenvectors is in terms of the stretching and compressing done by matrix maps. A matrix, ${\bf M}$, can only stretch or compress points in as many independent directions as the dimension allows: two directions in two dimensions, three in three dimensions. An eigenvector, ${\bf v}$, is a direction in which the matrix acts as a pure stretch, that is, ${\bf M} \cdot {\bf v} = \lambda {\bf v}$, where $\lambda$, the eigenvalue, is the amount of stretching in that direction. Since we do not know, a priori, what the eigenvalue and eigenvector are, let ${\bf v}$ = $\displaystyle
\left(
\begin{array}{c}
x\\ y
\end{array}\right) $. Then

\begin{displaymath}
{\bf M } \cdot {\bf v} =
\left(
\begin{array}{cc}
a & b \\
c & d
\end{array}\right)
\left(
\begin{array}{c}
x\\ y
\end{array}\right) = \lambda {\bf v}.
\end{displaymath}

Since this results (in two dimensions) in only two equations for the three unknowns $x$, $y$, and $\lambda$, the solution cannot be completely determined. Solving for $\lambda$, the eigenvalue, gives the equation

\begin{displaymath}
\lambda^2 - (a+d) \lambda + (a d - bc) = 0, \quad \mbox{or} \quad
\lambda = \frac12 \left[ (a+d) \pm \sqrt{(a+d)^2 - 4 (ad-bc)}\right].
\end{displaymath}
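
For the matrix used in the earlier examples, $a=1$, $b=2$, $c=3$, and $d=4$, so $a+d = 5$ and $ad - bc = -2$, and the two eigenvalues are

\begin{displaymath}
\lambda = \frac12 \left[ 5 \pm \sqrt{25 + 8} \right] = \frac12 \left[ 5 \pm \sqrt{33} \right],
\quad \mbox{that is,} \quad \lambda_1 \approx 5.37 \quad \mbox{and} \quad \lambda_2 \approx -0.37 .
\end{displaymath}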

Two (parallel) eigenvectors corresponding to each eigenvalue $\lambda$ are given by

\begin{displaymath}
{\bf v} = \left(
\begin{array}{c}
1\\ \frac{\lambda - a}{b}
\end{array}\right)
\quad \mbox{and} \quad \left(
\begin{array}{c}
b\\ \lambda - a
\end{array}\right) .
\end{displaymath}
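
For instance, with $a=1$, $b=2$, and the larger eigenvalue $\lambda_1 = \frac12\left[5 + \sqrt{33}\right]$ of the example matrix, the first formula gives the eigenvector

\begin{displaymath}
{\bf v} = \left(
\begin{array}{c}
1 \\ \frac{\lambda_1 - 1}{2}
\end{array}\right) =
\left(
\begin{array}{c}
1 \\ \frac{3 + \sqrt{33}}{4}
\end{array}\right)
\approx
\left(
\begin{array}{c}
1 \\ 2.19
\end{array}\right),
\end{displaymath}

and multiplying this vector by the matrix stretches it by the factor $\lambda_1 \approx 5.37$.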

All eigenvectors for the same eigenvalue are parallel; they cannot be solved for any more specifically because the eigenvalue equations are two equations for three unknowns. The eigenvalues describe what the matrix map does along each eigenvector: an eigenvalue larger than one in magnitude stretches points in that direction, an eigenvalue smaller than one in magnitude compresses them, and a negative eigenvalue additionally flips them through the origin.

In higher dimensions eigenvalue/eigenvector pairs satisfy the matrix equation

\begin{displaymath}
{\bf M} {\bf v} = \lambda {\bf v} \quad \mbox{or} \quad ({\bf M} - \lambda{\bf I} ) \cdot {\bf v} = {\bf0}.
\end{displaymath}
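
In any dimension, a nonzero eigenvector ${\bf v}$ can exist only when the matrix ${\bf M} - \lambda {\bf I}$ is singular, that is, when its determinant is zero, so the eigenvalues are the roots of

\begin{displaymath}
\det \left( {\bf M} - \lambda {\bf I} \right) = 0 ,
\end{displaymath}

which in two dimensions is exactly the quadratic equation for $\lambda$ given above.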



James Powell
2002-01-14