Matrix Mathematics
Contents:
Matrix Definitions:
Matrix Algebra
- Compare two matrices for equality. All of the entries must match.
Example:
    A  =  [ 1  2 ]  =  [ 1  2 ]  =  B
          [ 0  1 ]     [ 0  1 ]
- Add two matrices.
The added matrices must have the same number of columns and the same number of rows.
The matrices must be the same size. Add the corresponding entries.
Matrix addition is commutative and associative.
Example: B + C:
    [ 1  2 ]  +  [ 3  4 ]  =  [ 4  6 ]
    [ 0  1 ]     [ 6 -1 ]     [ 6  0 ]
- Multiply a scalar by a matrix.
Multiply each entry by the scalar.
Multiplication by multiple scalars is associative.
Example: a B, with a = 7:
    7 [ 1  2 ]  =  [ 7  14 ]
      [ 0 -1 ]     [ 0  -7 ]
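These entrywise rules translate directly into code. A minimal Python sketch, using nested lists for matrices (the function names mat_equal, mat_add, and scalar_mult are illustrative, not from the original notes):

```python
def mat_equal(A, B):
    """Two matrices are equal when they are the same size and every entry matches."""
    return len(A) == len(B) and all(
        len(ra) == len(rb) and all(a == b for a, b in zip(ra, rb))
        for ra, rb in zip(A, B)
    )

def mat_add(A, B):
    """Add the corresponding entries; the matrices must be the same size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mult(c, A):
    """Multiply every entry by the scalar c."""
    return [[c * a for a in row] for row in A]

B = [[1, 2], [0, 1]]
C = [[3, 4], [6, -1]]
print(mat_add(B, C))                       # [[4, 6], [6, 0]]
print(scalar_mult(7, [[1, 2], [0, -1]]))   # [[7, 14], [0, -7]]
```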
- Distribute scalar multiplication over addition.
Example: a (B + C) =
a B + a C
- Subtraction of a given matrix from another matrix is defined as scalar multiplying that
given matrix by -1 and then adding the result.
The matrices must be the same size. Subtract the corresponding entries.
Example: B - C =
B + (-1) C
- Add or subtract the same matrix from both sides of an equation.
If A = B then
C + A = C + B
- Multiply both sides of an equation by the same scalar.
If A = B then
c A = c B
- Multiply two matrices (see Multiplication of Two Matrices below).
The number of columns in the left hand matrix must equal the number of rows in the right
hand matrix.
Notation: The product of matrices A and B is written
A B
- The existence of B A does not depend on the existence of A B.
- Matrix multiplication is not commutative (except in special cases).
B A is usually not equal to A B.
- Reduction of a matrix (see Gauss-Jordan Elimination below).
- Inversion of a matrix (see Using Gauss-Jordan to Invert a Matrix below):
Only a square matrix can be inverted.
Some square matrices can not be inverted.
The inverse of matrix A is written A⁻¹.
- Multiplying a matrix by the identity matrix produces the same matrix.
This is one of the special cases where matrix multiplication is commutative.
A I = A
I A = A
- Multiplying a matrix by its inverse produces an identity matrix.
This is one of the special cases where matrix multiplication is commutative.
A A⁻¹ = I
A⁻¹ A = I
- Division: multiplication by an inverse.
Multiplying by the inverse of a matrix is said to be division by a matrix.
Multiplying by the inverse of a matrix has the effect of undoing multiplication by the
original matrix.
Since multiplication is not commutative, the inverse must be placed on the same side of
the multiplication the original matrix was on.
If A B = X , then
A⁻¹ X = B
If B A = X , then
X A⁻¹ = B
- Multiplication property of equality.
Both sides of an equation can be multiplied by the same given matrix.
The given matrix must be on the same side of each multiplication.
If the given matrix is an inverse, this becomes the division property of equality.
If A = B , then
C A = C B
If A = B , then
A C = B C
- Raising a square matrix to a power.
Repeatedly multiplying a square matrix by itself raises the matrix to a power.
Repeatedly multiplying the inverse of a matrix raises the original matrix to a negative
power.
A A = A²
A A A = A³
A⁻¹ A⁻¹ = A⁻²
A⁻¹ A⁻¹ A⁻¹ = A⁻³ etc.
- Transpose a matrix.
Any matrix can be transposed; the transpose of an m × n matrix is an n × m matrix, so a square matrix stays square.
The entry in row i, column j of the transpose is the entry in row j, column i of the original matrix.
Example: Transpose of A = TA
    A  =  [ 1  2 ]      TA  =  [ 1  0 ]
          [ 0  1 ]             [ 2  1 ]
The transpose of a matrix is the matrix reflected along its major diagonal.
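A one-line Python sketch of this reflection along the major diagonal (mat_transpose is an illustrative name):

```python
def mat_transpose(A):
    """Entry (i, j) of the result is entry (j, i) of A: rows become columns."""
    return [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

A = [[1, 2], [0, 1]]
print(mat_transpose(A))   # [[1, 0], [2, 1]]
```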
Allowed Transformations Within a Matrix while Reducing (eliminating) or Pivoting:
- Exchange any two rows (used to move a 0 away from a position needing a 1)
Example: Exchange Row 1 with Row 2
    A  =  [ 0  1 | 6 ]      A'  =  [ 1  2 | 4 ]
          [ 1  2 | 4 ]             [ 0  1 | 6 ]
- Multiply a row by a nonzero scalar (used to make a 1 where it is needed).
Example: Multiply Row 1 by 1/2
    A  =  [ 2  4 | 8 ]      A'  =  [ 1  2 | 4 ]
          [ 0  1 | 6 ]             [ 0  1 | 6 ]
- Add to one row some other row multiplied by a nonzero scalar (used to make a 0 where it
is needed).
Example: Add Row 2 × -2 to Row 1
    A  =  [ 1  2 |  4 ]
          [ 0  1 |  6 ]

    Get Row 2:             0    1    6
    Multiply by -2:      × -2
    Product:               0   -2  -12
    Add to Row 1:        + 1    2    4
    New Row 1:             1    0   -8

    A' =  [ 1  0 | -8 ]
          [ 0  1 |  6 ]
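A minimal Python sketch of the three allowed row operations, with the matrix stored as a list of rows and the augmentation column kept as the last entry of each row (the function names are illustrative):

```python
def swap_rows(M, i, j):
    """Exchange row i with row j."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, c):
    """Multiply row i by the nonzero scalar c."""
    M[i] = [c * x for x in M[i]]

def add_multiple(M, target, source, c):
    """Add (row source × c) to row target."""
    M[target] = [t + c * s for t, s in zip(M[target], M[source])]

A = [[1, 2, 4],
     [0, 1, 6]]            # augmented column kept as the last entry of each row
add_multiple(A, 0, 1, -2)  # Row 1 gets Row 1 + Row 2 × -2
print(A)                   # [[1, 0, -8], [0, 1, 6]]
```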
Multiplication of Two Matrices
- Only matrices with certain properties can be multiplied
- The number of columns in the left hand matrix must equal the number of rows in the right
hand matrix. Otherwise, the product does not exist.
- The number of rows in the product is the number of rows in the left hand matrix.
- The number of columns in the product is the number of columns in the right hand matrix.
- Matrix multiplication is normally NOT commutative. Exchanging the two matrices will
produce a different result, or even a case where the product doesn't exist.
- Use the following process for multiplying two matrices:
- It helps to place the matrices in the following arrangement:
                     right matrix
                      [ 3   4 ]
                      [ 6  -1 ]

    left matrix       product
    [ 1   2 ]         [ _   _ ]
    [ 0   1 ]         [ _   _ ]
- Procedure:
- Select a row and a column to calculate in the product.
- Start a running total and set it to 0.
- Start at the left end of the row in the left matrix, and at the top of the column in the
right matrix.
- Do one entry in each matrix at a time, until no entries are left in the row and column:
- Multiply the selected entry in the left matrix by the selected entry in the right
matrix.
- Add the result to the running total.
- Move to the next entry rightward in the left matrix, and the next entry downward in the
right matrix.
- When done with all of the selected row and column, enter the running total in the
selected entry in the product.
- Do the first entry in the first row.
    Row 1 of the left matrix, Column 1 of the right matrix:
        1 × 3 + 2 × 6 = 15
    product so far:   [ 15   _ ]
                      [  _   _ ]
- Do the next column.
    Row 1 of the left matrix, Column 2 of the right matrix:
        1 × 4 + 2 × (-1) = 2
    product so far:   [ 15   2 ]
                      [  _   _ ]
- Do the first entry of the next row.
    Row 2 of the left matrix, Column 1 of the right matrix:
        0 × 3 + 1 × 6 = 6
    product so far:   [ 15   2 ]
                      [  6   _ ]
- Do the next entry in that row.
    Row 2 of the left matrix, Column 2 of the right matrix:
        0 × 4 + 1 × (-1) = -1
    product so far:   [ 15   2 ]
                      [  6  -1 ]
- When all positions are filled, the multiplication is done.
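The row-by-column procedure above can be sketched in Python as follows (mat_mult is an illustrative name; matrices are nested lists):

```python
def mat_mult(A, B):
    """Product of A (m×n) and B (n×p), using a running total for each entry."""
    if len(A[0]) != len(B):
        raise ValueError("columns of the left matrix must equal rows of the right matrix")
    rows, cols, inner = len(A), len(B[0]), len(B)
    product = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            total = 0                    # start a running total at 0
            for k in range(inner):       # walk right along the row, down the column
                total += A[i][k] * B[k][j]
            product[i][j] = total
    return product

left  = [[1, 2], [0, 1]]
right = [[3, 4], [6, -1]]
print(mat_mult(left, right))   # [[15, 2], [6, -1]]
```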
A use for matrix multiplication:
A Quadraphonic/Surround Matrix System
Using Matrices to Represent Equations
- The equations are placed into an augmented matrix:
Example:
2x + 3y = 2
x - 3y = 4
- Each column of the matrix is for one variable.
      x    y    const
    [ 2    3  |  2 ]
    [ 1   -3  |  4 ]
Gauss-Jordan Elimination - Solving Systems of equations
- The procedure is as follows:
- Put the equations into the matrix.
- Keep track of the location being worked on, called the pivot position.
- The starting pivot position is in the upper left corner. Work row by row and
column by column.
- Stop when either of these conditions is true:
- The next column for the pivot is beyond the augmentation bar.
- There are no rows left below to move the pivot into.
- For each pivot position, do the following, in order:
- Start: If there is a 0 in the pivot position, exchange that row with a lower row that does not have a 0 in that column.
- If every lower row has a 0 in that column, move the pivot position one column right and
start again.
- Get a 1 at the pivot: Multiply the entire row by 1/v, where v is the value in the
pivot position.
- Get 0s above and below the pivot: For each nonzero elimination location above and
below the pivot, eliminate it with this procedure:
- Let the value in the elimination location be u.
- Add to the row containing the elimination location a row created by multiplying the
pivot row by -u.
- Move the pivot down one row and one column to the right.
- Translate the finished matrix back into equations.
- Determine the type of result:
- If a row exists with all 0s left of the bar, but nonzeros right of the bar, there is
no solution. The system is inconsistent.
- If every row and column left of the bar has a 1 surrounded by 0s, there is one
solution.
- If a column left of the bar contains no pivot 1 (there are nonzero entries to the right of
some pivot 1 but left of the bar), there are many solutions. The variable for that column is unconstrained (free).
- The equations are placed into an augmented matrix:
Example:
2x + 3y = 2
x - 3y = 4
      x    y    const
    [ 2    3  |  2 ]
    [ 1   -3  |  4 ]
- The pivot position is in the upper left corner. Make a 1 by multiplying that row by 1/2:
Row 1 gets Row 1 × 1/2
      x    y    const
    [ 1   3/2 |  1 ]
    [ 1   -3  |  4 ]
- Make 0 above and below the pivot:
Row 2 gets Row 2 + Row 1 × -1
    Row 1:               1    3/2  |  1
    Multiply by -1:     -1   -3/2  | -1
    Add Row 2:           1   -3    |  4
    New Row 2:           0   -9/2  |  3

      x     y     const
    [ 1    3/2  |  1 ]
    [ 0   -9/2  |  3 ]
- Move the pivot position down one row and to the right one column. It is now in the second
column and second row.
- Make a 1 by multiplying that row by -2/9:
Row 2 gets Row 2 × -2/9
      x     y     const
    [ 1    3/2 |   1  ]
    [ 0     1  | -2/3 ]
- Make 0 above and below the pivot:
Row 1 gets Row 1 + Row 2 × -3/2
    Row 2:               0     1    | -2/3
    Multiply by -3/2:    0   -3/2   |   1
    Add Row 1:           1    3/2   |   1
    New Row 1:           1     0    |   2

      x    y    const
    [ 1    0  |   2  ]
    [ 0    1  | -2/3 ]
- Since the next row does not exist, stop.
- The matrix is converted back into equations:
1x + 0y = 2
0x + 1y = -2/3
x = 2
y = -2/3
- Since every row and column left of the bar has a 1 surrounded by 0s, there is one solution.
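A compact Python sketch of this Gauss-Jordan procedure for an augmented matrix stored as a list of rows (gauss_jordan is an illustrative name; fractions.Fraction keeps values such as -2/3 exact):

```python
from fractions import Fraction

def gauss_jordan(M):
    """Reduce an augmented matrix, following the pivot procedure above."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols - 1):            # stop before the augmented column
        if pivot_row >= rows:
            break
        # Find a row at or below the pivot with a nonzero entry in this column.
        swap = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if swap is None:
            continue                       # no usable row: move one column right
        M[pivot_row], M[swap] = M[swap], M[pivot_row]
        # Get a 1 at the pivot: multiply the row by 1/v.
        v = M[pivot_row][col]
        M[pivot_row] = [x / v for x in M[pivot_row]]
        # Get 0s above and below the pivot.
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                u = M[r][col]
                M[r] = [x - u * p for x, p in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return M

# 2x + 3y = 2,  x - 3y = 4
print(gauss_jordan([[2, 3, 2], [1, -3, 4]]))
# rows reduce to [1, 0, 2] and [0, 1, -2/3], i.e. x = 2, y = -2/3
```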
Using Gauss-Jordan Elimination to Invert a Matrix
- Only a square matrix can be inverted. A square matrix has the same number of rows and columns.
- The inverse of matrix A is A-1
- The procedure is as follows:
- Put the matrix on the left of the augmentation bar, and put an identity matrix the same size
on the right side of the bar.
- Keep track of the location being worked on, called the pivot position.
- The starting pivot position is in the upper left corner. Work row by row and column by column.
- Stop when either of these conditions is true:
- The next column for the pivot is beyond the augmentation bar.
- There are no rows left below to move the pivot into.
- For each pivot position, do the following, in order:
- Start: If there is a 0 in the pivot position, exchange that row with a lower row that does not have a 0 in that column.
- If every lower row has a 0 in that column, move the pivot position one column right and
start again.
- Get a 1 at the pivot: Multiply the entire row by 1/v, where v is the value in the pivot
position.
- Get 0s above and below the pivot: For each nonzero elimination location above and below
the pivot, eliminate it with this procedure:
- Let the value in the elimination location be u.
- Add to the row containing the elimination location a row created by multiplying the
pivot row by -u.
- Move the pivot down one row and one column to the right.
- Determine the type of result:
- If an identity matrix now exists to the left of the augmentation bar, the inverse
exists.
- If there is no identity matrix on the left side, the inverse does not exist.
- If the inverse exists, take it from the right side of the augmentation bar.
- The original matrix and the identity matrix are placed into an augmented matrix:
Example:
    [ 2    3 |  1   0 ]
    [ 1   -3 |  0   1 ]
- The pivot position is in the upper left corner. Make a 1 by multiplying that row by 1/2:
Row 1 gets Row 1 × 1/2
    [ 1   3/2 |  1/2   0 ]
    [ 1   -3  |   0    1 ]
- Make 0 above and below the pivot:
Row 2 gets Row 2 + Row 1 × -1
    Row 1:               1    3/2  |  1/2   0
    Multiply by -1:     -1   -3/2  | -1/2   0
    Add Row 2:           1   -3    |   0    1
    New Row 2:           0   -9/2  | -1/2   1

    [ 1   3/2  |  1/2   0 ]
    [ 0  -9/2  | -1/2   1 ]
- Move the pivot position down one row and to the right one column. It is now in the
second column and second row.
- Make a 1 by multiplying that row by -2/9:
Row 2 gets Row 2 × -2/9
    [ 1   3/2 |  1/2    0   ]
    [ 0    1  |  1/9  -2/9 ]
- Make 0 above and below the pivot:
Row 1 gets Row 1 + Row 2 × -3/2
    Row 2:               0     1    |  1/9   -2/9
    Multiply by -3/2:    0   -3/2   | -1/6    1/3
    Add Row 1:           1    3/2   |  1/2     0
    New Row 1:           1     0    |  1/3    1/3

    [ 1   0 |  1/3    1/3 ]
    [ 0   1 |  1/9   -2/9 ]
- Since the next row does not exist, stop.
- Test the matrix:
- If an identity matrix exists left of the augmentation bar, the right side contains the
inverse of the matrix.
- If no identity matrix can be obtained on the left side, the inverse does not exist.
In this case, the identity matrix exists to the left of the bar, so the inverse exists
to the right of the bar.
- Remove the identity matrix, producing the inverse.
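Reusing the gauss_jordan sketch from the previous section, inversion can be illustrated by augmenting with an identity matrix and reading off the right half; invert is an illustrative name, and the snippet assumes that earlier gauss_jordan function is available:

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by reducing [A | I], per the procedure above."""
    n = len(A)
    augmented = [list(map(Fraction, row)) + [Fraction(int(i == j)) for j in range(n)]
                 for i, row in enumerate(A)]
    reduced = gauss_jordan(augmented)          # defined in the earlier sketch
    left  = [row[:n] for row in reduced]
    right = [row[n:] for row in reduced]
    identity = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    if left != identity:
        return None                            # no identity on the left: no inverse
    return right

print(invert([[2, 3], [1, -3]]))
# entries: 1/3  1/3  (top row) and 1/9  -2/9  (bottom row)
```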
Solving systems of equations with Inverse Matrices
LINEAR PROGRAMMING
About Linear Programming
- Define the problem. Here is a sample problem:
- A brickmaker makes two kinds of special blocks: red and yellow.
- Each red block needs 2 pounds of clay and 4 pounds of vitrified binder.
- Each yellow block needs 4 pounds of clay, but the process used also produces 2 pounds
of vitrified binder.
- 100 pounds of clay and 50 pounds of binder are available each day.
- The net profit after taxes is $2 for each red block and $3 for each yellow block.
- Assuming that all of the blocks will be sold, how many blocks of each type should be made
each day to maximize profit?
- Identify your variables.
- The variables are the values you are trying to find using linear programming. Assign a
different variable to each unknown quantity.
- In the example, the variables are r for the number of red blocks and y for the number
of yellow blocks.
- Identify constraining factors.
- Constraining factors are limits on the process you are investigating. Examples include:
- Supplies you can run out of
- Contracts that need to be heeded
- The amount of labor available
- Government regulations
- Waste products produced
- Create one inequality for each constraint.
- In the example:
2r + 4y ≤ 100 ... The clay constraint
4r - 2y ≤ 50 ... The binder constraint
- Identify null constraints.
Null constraints are common-sense constraints imposed by the real world. Examples:
- You can not ungrow corn.
- What's a negative block?
- You can't have negative amounts of supplies or labor.
- A sheet of metal, once cut, can't be made whole again.
- Work can't be done if you run out of supplies.
- Create one inequality for each constraint.
- In the example:
r ≥ 0 ... The red block null constraint
y ≥ 0 ... The yellow block null constraint
- Write the production function.
- The production function (or profit function or objective function) is a linear function p
determining the amount of gain or loss of the desired variable.
- It is the sum of the products of the variables and their gains.
- Also determine whether p is to be maximized or minimized.
- The example:
p = 2r + 3y. Maximize p.
- The procedure without the matrix is:
- Graph the inequalities.
- Find the feasible set of values that satisfy all of the inequalities.
- If no area on the graph satisfies all of the constraints, then no feasible set exists,
and there is no solution.
- Test the production function on the corner points of the feasible area on the graph.
The corner points are the intersections of the constraint lines.
- Choose the optimal point. If an optimal point exists, it will be at a corner point.
- If two corner points produce the same optimum value, then there are many solutions along
the constraint line connecting the two optimum corner points.
- The solution of this system by graphing is left as an exercise. The solution should be the
same as the Simplex solution to the same problem in the next section.
- To use the Simplex Method with a matrix, continue to the next section.
- Note that the simplex method works for more than two variables. The graphing solution is
restricted to two variables.
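As a cross-check on the graphical method, here is a hedged Python sketch that finds the corner points by intersecting pairs of constraint boundary lines, keeps the feasible ones, and evaluates the production function p = 2r + 3y at each:

```python
from itertools import combinations

# Constraints written as  a·r + b·y <= c  (null constraints included)
constraints = [
    (2, 4, 100),    # clay:   2r + 4y <= 100
    (4, -2, 50),    # binder: 4r - 2y <= 50
    (-1, 0, 0),     # r >= 0
    (0, -1, 0),     # y >= 0
]

def intersection(c1, c2):
    """Intersect the boundary lines a1·r + b1·y = k1 and a2·r + b2·y = k2."""
    a1, b1, k1 = c1
    a2, b2, k2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel lines: no corner here
    return (k1 * b2 - k2 * b1) / det, (a1 * k2 - a2 * k1) / det

def feasible(pt):
    r, y = pt
    return all(a * r + b * y <= c + 1e-9 for a, b, c in constraints)

corners = [pt for c1, c2 in combinations(constraints, 2)
           if (pt := intersection(c1, c2)) is not None and feasible(pt)]
best = max(corners, key=lambda pt: 2 * pt[0] + 3 * pt[1])     # p = 2r + 3y
print(best, 2 * best[0] + 3 * best[1])    # (20.0, 15.0) 85.0 -> matches the Simplex answer
```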
The Simplex Method of Linear Programming
Using Matrices to Represent Systems of Inequalities
- The Simplex Method requires that the problem must be a standard maximum problem.
This requires that the following must be true:
- All variables are constrained to be greater or equal to zero.
- All other constraints must be of the form that a linear function of the variables is
less than or equal to a constant.
- The production or objective function is a linear function to be maximized.
- With the matrix, introduce a different slack variable for each inequality.
- In the example change the inequality into an equation with a slack variable:
- 2r + 4y ≤ 100 ... The clay constraint becomes:
2r + 4y + u = 100
- 4r - 2y ≤ 50 ... The binder constraint becomes:
4r - 2y + v = 50
- Load the matrix, called the tableau:
- Make one row for each equation. Label it on the left with that equation's slack variable
(the variable the row is currently solved for).
- Make one column for each variable, including the slack variables.
- Add the BS (Basic Solution) column as an augmentation. It contains the constant to the
right of the equal sign.
- Add a row below the tableau for the production function: enter the negative of each of
its coefficients, with all other locations being zero.
- From the example above:
- 2r + 4y + u = 100
- 4r - 2y + v = 50
- The tableau becomes:
            r     y     u     v   |  BS
    u  [    2     4     1     0   | 100 ]
    v  [    4    -2     0     1   |  50 ]
       ----------------------------------
    p  [   -2    -3     0     0   |   0 ]
The Simplex Method of Linear Programming
Using the Pivot Operation to Solve Systems of Inequalities
- Repeatedly perform the Pivot Operation using the following steps:
- Find the Entering Variable.
- Use the column with the most negative entry in the p row. This becomes the Entering Column.
- If there are no negative entries in the p row, STOP. The linear programming operation is
complete. Move on to reading the tableau.
- Find the Departing Variable.
- Divide each entry above the bar in the Basic Solution (BS) column by the entry in the
same row in the Entering Column
- Use the smallest positive quotient found above to select the Departing Row.
- If there are no positive entries in the Entering Column, STOP. There is no solution.
- The pivot point is the entry in the Entering Column and the Departing Row.
- Do the following in order at the pivot position:
- Get a 1 at the pivot: Multiply the entire row by 1/v, where v is the value in the pivot
position.
- Get 0s above and below the pivot: For each nonzero elimination location above and below
the pivot, eliminate it:
- Let the value in the elimination location be u.
- Add to the row containing the elimination location a row created by multiplying the
pivot row by -u.
- Rename the Departing Row with the name of the Entering Variable, since the row is now
solved for that variable.
Example:
- The Entering Column is the y column, because -3 is the most negative entry in the p row.
- Divide 100 by 4, giving 25
- Divide 50 by -2, giving -25
- Select the smallest positive quotient: In this case, it is 25
- This sets the Departing Row to be the u row.
- The pivot element is 4, in the y column and the u row:
            r     y     u     v   |  BS
    u  [    2     4     1     0   | 100 ]
    v  [    4    -2     0     1   |  50 ]
       ----------------------------------
    p  [   -2    -3     0     0   |   0 ]
- Get a 1 in the pivot element by multiplying the u row by 1/4:
            r     y     u     v   |  BS
    u  [  1/2     1    1/4    0   |  25 ]
    v  [    4    -2     0     1   |  50 ]
       ----------------------------------
    p  [   -2    -3     0     0   |   0 ]
- Make a 0 in row v in the Entering column by adding row u multiplied by 2 to row v:
            r     y     u     v   |  BS
    u  [  1/2     1    1/4    0   |  25 ]
    v  [    5     0    1/2    1   | 100 ]
       ----------------------------------
    p  [   -2    -3     0     0   |   0 ]
- Make a 0 in row p in the Entering column by adding row u multiplied by 3 to row p:
Also change the name of row u to y.
            r     y     u     v   |  BS
    y  [  1/2     1    1/4    0   |  25 ]
    v  [    5     0    1/2    1   | 100 ]
       ----------------------------------
    p  [ -1/2     0    3/4    0   |  75 ]
- The Entering Column is the r column, because -1/2 is the most negative entry in the p row.
- Divide 25 by 1/2, giving 50
- Divide 100 by 5, giving 20
- Select the smallest positive quotient: In this case, it is 20
- The Departing Row is the v row.
- The pivot element is 5, in the r column and the v row:
            r     y     u     v   |  BS
    y  [  1/2     1    1/4    0   |  25 ]
    v  [    5     0    1/2    1   | 100 ]
       ----------------------------------
    p  [ -1/2     0    3/4    0   |  75 ]
- Get a 1 in the pivot element by multiplying the v row by 1/5:
            r     y     u      v    |  BS
    y  [  1/2     1    1/4     0    |  25 ]
    v  [    1     0    1/10   1/5   |  20 ]
       ------------------------------------
    p  [ -1/2     0    3/4     0    |  75 ]
- Make a 0 in row y in the Entering column by adding row v multiplied by -1/2 to row y:
            r     y     u      v    |  BS
    y  [    0     1    1/5   -1/10  |  15 ]
    v  [    1     0    1/10   1/5   |  20 ]
       ------------------------------------
    p  [ -1/2     0    3/4     0    |  75 ]
- Make a 0 in row p in the Entering column by adding row v multiplied by 1/2 to row p:
Also change the name of row v to r.
            r     y     u      v    |  BS
    y  [    0     1    1/5   -1/10  |  15 ]
    r  [    1     0    1/10   1/5   |  20 ]
       ------------------------------------
    p  [    0     0    4/5    1/10  |  85 ]
- Read the tableau to get the answer:
- Slack variables that no longer label a row (non-basic variables) have the value 0. Here neither u nor v labels a row, so we can ignore columns u and v:
- u = 0
- v = 0
- Read rows y, r, and p, with the answers for each in the Basic Solution column.
- Translate the numbers to the real world problem:
- Make 20 red blocks.
- Make 15 yellow blocks.
- The maximum profit is $85.
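The whole pivoting loop can be sketched in Python for a standard maximum problem (simplex_max is an illustrative name; the tableau layout matches the one above, with the BS column last and the p row last, and fractions.Fraction keeps the entries exact):

```python
from fractions import Fraction

def simplex_max(tableau, row_labels, col_labels):
    """Pivot a standard-maximum tableau (BS column last, p row last) until the
    p row has no negative entries, following the procedure above."""
    T = [[Fraction(x) for x in row] for row in tableau]
    p = len(T) - 1                                     # index of the p row
    while True:
        # Entering column: most negative entry in the p row (ignore the BS column).
        col = min(range(len(T[0]) - 1), key=lambda j: T[p][j])
        if T[p][col] >= 0:
            return T, row_labels                       # done: read the tableau
        # Departing row: smallest positive quotient  BS entry / entering-column entry.
        quotients = [(T[i][-1] / T[i][col], i) for i in range(p) if T[i][col] > 0]
        if not quotients:
            raise ValueError("no positive entry in the entering column: no solution")
        _, row = min(quotients)
        # Get a 1 at the pivot, then 0s above and below it.
        v = T[row][col]
        T[row] = [x / v for x in T[row]]
        for i in range(len(T)):
            if i != row and T[i][col] != 0:
                u = T[i][col]
                T[i] = [x - u * pv for x, pv in zip(T[i], T[row])]
        row_labels[row] = col_labels[col]              # rename the departing row

# Tableau from the example:   r   y  u  v   BS
T = [[ 2,  4, 1, 0, 100],     # u row
     [ 4, -2, 0, 1,  50],     # v row
     [-2, -3, 0, 0,   0]]     # p row
final, labels = simplex_max(T, ["u", "v"], ["r", "y", "u", "v"])
print(labels)                             # ['y', 'r']
print([float(row[-1]) for row in final])  # [15.0, 20.0, 85.0] -> y = 15, r = 20, p = 85
```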
MARKOV CHAINS
Using Matrices to Represent Markov Probability Chains
- What is a Markov Chain?
- A Markov chain is a probability system for calculating the probabilities of repeating processes.
- Examples:
- The game of craps
- Moving a piece on the Monopoly board
- Chutes and ladders
- Random walk
- Simple weather models
- The Markov Property
- In a Markov system, the probability of the next state being j depends only on the current
state i.
- Notice that, in the tree and in the state diagram:
- The probability of going from state H to state H is always .8.
- The probability of going from state H to state L is always .2.
- The probability of going from state L to state H is always .6.
- The probability of going from state L to state L is always .4.
- What is a Markov Chain Matrix?
- A matrix can be used to represent the probabilities in a Markov Chain. The following
matrix represents the probabilities in the above diagrams.
                  (out)
                 H     L
    (in)  H  [  .8    .2 ]  =  P
          L  [  .6    .4 ]
- Note that each row-i column-j element Pij of the Markov
matrix P is the probability of moving from the i-th state to the j-th state
in one transition.
- The row i is selected by the present state, and the column j is selected by the next state.
- Note that the sum of the elements in each row must equal 1.
- Conditional Probability in Markov Chains:
- The probability of starting in state i and ending up in state j after k transitions take
place is denoted by Pij(k).
- Matrix P(k) is the k-step transition matrix, where P is the 1-step transition matrix. So:
- P(1) = P
- P(2) = PP = P²
- P(3) = P(2)P(1) = PPP = P³
- P(4) = P(2)P(2) = PPPP = P⁴
- P(5) = P(4)P(1) = PPPPP = P⁵
- Matrix P(k) gives the probability of moving from any state i to any
state j after k transitions in the same way that matrix P gives the
probabilities for one transition.
- A trick for finding a given power of a matrix (see the sketch below):
- Find the powers of the matrix (less than the desired power) that are powers of two,
by repeatedly squaring the result.
- P(1) = P
- P(2) = P(1)²
- P(4) = P(2)²
- P(8) = P(4)²
- Find which powers of 2 make the desired power.
- Multiply the matrices that are the powers of 2 listed above.
- P(13) = P(8)P(4)P(1) = P¹³
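A short Python sketch of this repeated-squaring trick (matrix_power is an illustrative name; it assumes a mat_mult(A, B) helper like the one sketched in the multiplication section):

```python
def matrix_power(P, k):
    """Compute P^k by repeated squaring: build P^2, P^4, P^8, ... and multiply
    together the squares whose exponents (powers of 2) add up to k."""
    result = None
    square = P                     # current power-of-two power of P
    while k > 0:
        if k % 2 == 1:             # this power of two is part of k
            result = square if result is None else mat_mult(result, square)
        square = mat_mult(square, square)
        k //= 2
    return result

P = [[0.8, 0.2], [0.6, 0.4]]
P13 = matrix_power(P, 13)          # same as P(8) P(4) P(1)
print(P13)
```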
- Probability Vector:
- A probability vector is a row vector with the following properties:
- Each element is in the range 0 ≤ p ≤ 1
- The sum of the elements is 1.
- Each row of a transition matrix is a probability vector.
- State Vector:
- A state vector is a probability vector indicating what the probabilities are that the
Markov chain is in each state.
- State Vector Theorem:
- Xk denotes the state vector describing the Markov chain after k transitions from the
initial state X0. Thus:
- X1 = X0 P
- X2 = X1 P = X0 P(2)
- Xk = X0 P(k)
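A tiny Python sketch of the state vector theorem (propagate_state is an illustrative name; the starting vector X0 below is an assumed example meaning the chain starts in state H):

```python
def propagate_state(X0, P, k):
    """Return X_k = X_0 P(k) by applying the transition matrix k times."""
    X = X0[:]
    for _ in range(k):
        X = [sum(X[i] * P[i][j] for i in range(len(P))) for j in range(len(P[0]))]
    return X

P  = [[0.8, 0.2], [0.6, 0.4]]
X0 = [1.0, 0.0]                     # assume the chain starts in state H
print(propagate_state(X0, P, 1))    # [0.8, 0.2]
print(propagate_state(X0, P, 2))    # X_2 = X_0 P(2)
```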
Finding the Stable Vector of a Regular Markov Chain
- What is a Regular Matrix?
- A regular Markov chain is one whose transition matrix P has the following property:
- Some power k exists such that P(k) has all positive entries.
The Monopoly board is a regular Markov chain with 43 states.
- Here is a sample regular matrix:
                  (out)
                 H     L
    (in)  H  [  .8    .2 ]  =  P
          L  [  .6    .4 ]
- Stable Probabilities Theorem:
- Given a Markov chain with a regular matrix, there exists a unique probability vector
W such that, for each state j, the difference
|pij(k) - wj|
can be made as small as you like by choosing a large enough k.
- This vector W is the stable vector.
- The stable vector W is found by solving the equation W P
= W as follows.
- Subtract W from both sides of the equation, obtaining
W P - W = 0
(where 0 is an appropriately sized matrix of all zeros).
- Factor out the W, giving
W(P-I) = 0.
- Find the P-I matrix.
- Using the example above:
    [ .8   .2 ]  −  [ 1   0 ]  =  [ -.2    .2 ]
    [ .6   .4 ]     [ 0   1 ]     [  .6   -.6 ]
- The algebra says to multiply out the
W(P-I), using wj for the
elements of W. Then set up a system of equations by setting each
resulting element equal to 0, adding the equation
w1 + w2 + ... + wn = 1
to it, and solving the system. The practical way to do this follows:
- Transpose the matrix P-I:
    T(P-I)  =  [ -.2    .6 ]
               [  .2   -.6 ]
- Augment the matrix with the 0 column vector. Then add an augmented row
of all 1s on top.
    [   1     1  |  1 ]
    [ -.2    .6  |  0 ]
    [  .2   -.6  |  0 ]
- Reduce the matrix to solve the system. If done correctly, the last row becomes all zeros.
    [  1    0  |  .75 ]
    [  0    1  |  .25 ]
    [  0    0  |   0  ]
- Delete the last row (all zeros) and the portion to the left of the augmentation bar.
Then convert the column vector to a row vector.
This is the stable vector W = [ .75  .25 ].
- This Markov chain is expected to end up in the first state in 3/4 of the transitions,
and the second state in 1/4 of the transitions.
- The stable vector can be checked by multiplying it by the original Markov chain matrix.
The result should be the stable vector:
    [ .75   .25 ]  [ .8   .2 ]  =  [ .75   .25 ]
                   [ .6   .4 ]
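A hedged Python sketch that approximates the stable vector by repeatedly applying P, which is what the Stable Probabilities Theorem says will happen for a regular chain (stable_vector is an illustrative name; the exact vector can instead be found with the W(P-I) = 0 system above):

```python
def stable_vector(P, steps=100):
    """Approximate the stable vector W of a regular Markov chain by
    repeatedly multiplying a starting probability vector by P."""
    n = len(P)
    W = [1.0 / n] * n                       # any starting probability vector works
    for _ in range(steps):
        W = [sum(W[i] * P[i][j] for i in range(n)) for j in range(n)]
    return W

P = [[0.8, 0.2], [0.6, 0.4]]
print(stable_vector(P))    # approximately [0.75, 0.25]
```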
Finding the Probabilities of the Outcomes of an Absorbing Markov Chain
- What is an Absorbing State?
- An absorbing state is a state that always transitions to itself. It is impossible to
leave an absorbing state.
- State i is an absorbing state if pii = 1 and all other pij = 0.
- What is an Absorbing Markov Chain?
- An absorbing Markov chain has the following properties:
- At least one absorbing state.
- From every nonabsorbing state it is possible to reach an absorbing state (possibly after
several transitions).
- The game of craps is an absorbing Markov chain.
- What is Canonical Form?
- Canonical Form places the absorbing states first.
- Here is a sample absorbing system NOT in canonical form. The state numbers shown are
those after changing to canonical form:
    State #     3     1     4     2
       3    [  .4    .2    .2    .2 ]
       1    [   0     1     0     0 ]
       4    [  .2    .2    .6     0 ]
       2    [   0     0     0     1 ]
- This is the same Markov chain in canonical form. The absorbing states are first. Number
the states now.
Be sure to trade the columns at the same time you trade the rows.
    State #     1     2     3     4
       1    [   1     0     0     0 ]
       2    [   0     1     0     0 ]
       3    [  .2    .2    .4    .2 ]
       4    [  .2     0    .2    .6 ]
- Finding the Fundamental Matrix of an absorbing Markov Chain:
- Get the matrix into canonical form. The example will use the above matrix.
- Number the states.
- Separate the matrix into four regions:
- A square identity matrix I in the upper left corner.
If there is only one absorbing state, it will be a single 1.
- A matrix consisting of all zeros 0 in the upper right corner.
It is the same height as the identity matrix.
- A matrix consisting of all absorbing transitions R in the lower
left corner. It is UNDER the identity matrix.
- A square matrix of all nonabsorbing transitions Q in the lower
right corner. It is a square matrix under the zero matrix.
Like this:
    [ I | 0 ]
    [ R | Q ]
Dividing up our sample matrix:
    State #     1     2        3     4
       1    [   1     0    |   0     0 ]
       2    [   0     1    |   0     0 ]
            ----------------------------
       3    [  .2    .2    |  .4    .2 ]
       4    [  .2     0    |  .2    .6 ]

    So  R  =  [ .2   .2 ]      Q  =  [ .4   .2 ]
              [ .2    0 ]            [ .2   .6 ]
- Find the Fundamental Matrix:
The Fundamental matrix N is found from the canonical form with the
formula:
N = (I − Q)⁻¹
To do this, continue following the instructions:
- Find the difference I−Q:
Using the example above:
    [ 1   0 ]  −  [ .4   .2 ]  =  [  .6   -.2 ]
    [ 0   1 ]     [ .2   .6 ]     [ -.2    .4 ]
- Invert the square matrix I-Q:
The square matrix and the identity matrix are placed into an augmented matrix:
    [  .6   -.2  |  1   0 ]
    [ -.2    .4  |  0   1 ]
Finding the inverse (using the process covered above) produces:
The inverse will always exist if the Markov chain is absorbing.
- Obtain the fundamental matrix. It is the inverse just calculated:
    State #        3    4
       N  =   3 [  2    1 ]
              4 [  1    3 ]
- Finding the Expected number of times the Markov chain is in a particular state:
- Read the fundamental matrix N to get the expected number of times the
Markov chain is in a particular state before absorbing.
- The nij element of N is the expected number of times the
chain starting in state i will be in state j before absorption occurs.
- In the example, if the Markov chain starts in state 3, it will be in state 3 an average
of 2 times before absorbing.
- Finding the expected number of transitions before absorbing:
- Read the fundamental matrix N to get the expected number of times the
Markov chain is in a particular state before absorbing.
- The sum of the entries in row i of N is the expected number of
transitions the chain starting in state i will undergo before absorption occurs.
- In the example, if the Markov chain starts in state 3, it will absorb after an average of
2 + 1 = 3 transitions.
- Finding the probability of entering a particular absorbing state:
- The product N R gives these probabilities (see the sketch below): its row-i, column-j
entry is the probability that a chain starting in nonabsorbing state i ends up in
absorbing state j.
All of these tools make the matrix extremely useful, making it worth studying.
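A Python sketch pulling the absorbing-chain pieces together for the example; it assumes the mat_mult and invert helpers sketched earlier, and the N R product for the absorption probabilities is the standard result referred to under the last heading:

```python
from fractions import Fraction

F = Fraction
Q = [[F(4, 10), F(2, 10)],      # nonabsorbing -> nonabsorbing block (states 3, 4)
     [F(2, 10), F(6, 10)]]
R = [[F(2, 10), F(2, 10)],      # nonabsorbing -> absorbing block (to states 1, 2)
     [F(2, 10), F(0)]]

n = len(Q)
I = [[F(int(i == j)) for j in range(n)] for i in range(n)]

# I - Q, entry by entry
I_minus_Q = [[I[i][j] - Q[i][j] for j in range(n)] for i in range(n)]

N = invert(I_minus_Q)            # fundamental matrix, using the earlier invert sketch
print(N)                         # entries: 2  1 / 1  3
print([sum(row) for row in N])   # expected transitions before absorbing: 3 and 4
B = mat_mult(N, R)               # absorption probabilities N R
print(B)                         # entries: 3/5  2/5 / 4/5  1/5
```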