Chapter 2
If A is an m x n matrix - that is, a matrix with m rows and n columns - then the scalar entry in the ith row and the jth column of A is denoted by aij and is called the (i, j)-entry of A.
Each column of A is a list of m real numbers, which identifies a vector in Rm. Often, these columns are denoted by a1, ..., an, and matrix A is written as A = [ a1 a2 ... an ]. Observe that the number aij is the ith entry (from the top) of the jth column vector aj.
The diagonal entries in an m x n matrix A = [ aij ] are a11, a22, a33, ..., and they form the main diagonal of A.
A diagonal matrix is a square n x n matrix whose nondiagonal entries are zero.
An m x n matrix whose entries are all zero is a zero matrix and is written as 0.
We say that two matrices are equal if they have the same size (i.e., the same number of rows and the same number of columns) and if their corresponding columns are equal, which amounts to saying that their corresponding entries are equal.
If A and B are m x n matrices, then the sum A + B is the m x n matrix whose columns are the sums of the corresponding columns in A and B. Since vector addition of the columns is done entrywise, each entry in A + B is the sum of the corresponding entries in A and B.
Note: The sum A + B is defined only when A and B are the same size.
If r is a scalar and A is a matrix, then the scalar multiple rA is the matrix whose columns are r times the corresponding columns in A.
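The two operations just defined are entrywise. A minimal sketch in plain Python (nested lists as matrices, no libraries; the helper names `mat_add` and `scalar_mul` are my own):

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

def mat_add(A, B):
    # Defined only when A and B have the same size; add entrywise.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(r, A):
    # Multiply every entry (equivalently, every column) by r.
    return [[r * a for a in row] for row in A]

print(mat_add(A, B))     # [[6, 8], [10, 12]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
```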
Let A, B, and C be matrices of the same size, and let r and s be scalars.
a. A + B = B + A.
b. (A + B) + C = A + (B + C).
c. A + 0 = A.
d. r(A + B) = rA + rB.
e. (r + s)A = rA + sA.
f. r(sA) = (rs)A.
If A is an m x n matrix, and if B is an n x p matrix with columns b1, ..., bp, then the product AB is the m x p matrix whose columns are Ab1, ..., Abp. That is, AB = A[ b1 b2 ... bp ] = [ Ab1 Ab2 ... Abp ].
Note: AB has the same number of rows as A and the same number of columns as B.
Each column of AB is a linear combination of the columns of A using weights from the corresponding column of B.
Row-Column Rule for Computing AB
If the product AB is defined, then the entry in row i and column j of AB is the sum of the products of corresponding entries from row i of A and column j of B. If (AB)ij denotes the (i, j)-entry in AB, and if A is an m x n matrix, then (AB)ij = ai1b1j + ai2b2j + ... + ainbnj.
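The row-column rule translates directly into a double loop. A sketch in plain Python (the helper name `mat_mul` is my own):

```python
def mat_mul(A, B):
    # (AB)_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]         # 2 x 2
B = [[0, 1, 2], [1, 0, 3]]   # 2 x 3
print(mat_mul(A, B))         # [[2, 1, 8], [4, 3, 18]]  (a 2 x 3 matrix)
```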
Let rowi(A) denote the ith row of a matrix A. Then rowi(AB) = rowi(A) · B.
Let A be an m x n matrix, and let B and C have sizes for which the indicated sums and products are defined.
a. A(BC) = (AB)C. (associative law of multiplication)
b. A(B + C) = AB + AC. (left distributive law)
c. (B + C)A = BA + CA. (right distributive law)
d. r(AB) = (rA)B = A(rB). (for any scalar r)
e. ImA = A = AIn. (identity for matrix multiplication)
Warnings
- In general, AB is not equal to BA.
- The cancellation laws do not hold for matrix multiplication. That is, if AB = AC, then it is not true in general that B = C.
- If a product AB is the zero matrix, you cannot conclude in general that either A = 0 or B = 0.
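Both warnings can be seen on 2 x 2 examples. A quick check in plain Python (`mat_mul` is the row-column rule from above, restated here so the snippet is self-contained):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# AB != BA in general:
A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
print(mat_mul(A, B))  # [[1, 0], [0, 0]]
print(mat_mul(B, A))  # [[0, 0], [0, 1]]

# AB can be the zero matrix although neither factor is zero:
C = [[1, 1], [1, 1]]
D = [[1, -1], [-1, 1]]
print(mat_mul(C, D))  # [[0, 0], [0, 0]]
```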
If A is an n x n matrix and if k is a positive integer, then Ak denotes the product of k copies of A.
If A is nonzero and x is in Rn, then Akx is the result of left-multiplying x by A repeatedly k times. If k = 0, then A0x should be x itself, so A0 is interpreted as the identity matrix.
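Matrix powers can be computed by repeated multiplication, starting from the identity so that the k = 0 case falls out naturally. A sketch (the helper names are my own):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, k):
    # Start from the identity matrix, so mat_pow(A, 0) returns I.
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        result = mat_mul(result, A)
    return result

A = [[1, 1], [0, 1]]
print(mat_pow(A, 3))  # [[1, 3], [0, 1]]
print(mat_pow(A, 0))  # [[1, 0], [0, 1]]
```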
Given an m x n matrix A, the transpose of A is the n x m matrix, denoted by AT, whose columns are formed from the corresponding rows of A.
Let A and B denote matrices whose sizes are appropriate for the following sums and products.
a. (AT)T = A.
b. (A + B)T = AT + BT.
c. For any scalar r, (rA)T = rAT.
d. (AB)T = BTAT.
Note: The transpose of a product of matrices equals the product of their transposes in the reverse order.
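Property (d) and the note can be spot-checked numerically. A sketch in plain Python (helper names are my own; `zip(*A)` pairs up the entries of each row, which is exactly forming columns from rows):

```python
def transpose(A):
    # Columns of A^T are the rows of A.
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]      # 3 x 2
lhs = transpose(mat_mul(A, B))    # (AB)^T
rhs = mat_mul(transpose(B), transpose(A))  # B^T A^T -- note the reversed order
print(lhs == rhs)  # True
```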
An n x n matrix A is said to be invertible if there is an n x n matrix C such that CA = I and AC = I where I = In, the n x n identity matrix. This unique inverse is denoted by A-1, so that A-1A = I and AA-1 = I.
A matrix that is not invertible is sometimes called a singular matrix, and an invertible matrix is called a nonsingular matrix.
Let A be a 2 x 2 matrix with entries a, b in the first row and c, d in the second. The determinant of A, denoted by det A, is the quantity ad - bc. Matrix A is invertible if and only if det A is not equal to 0.
If A is an invertible n x n matrix, then for each b in Rn, the equation Ax = b has the unique solution x = A-1b.
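For the 2 x 2 case these two facts give a complete recipe: test det A = ad - bc, form the inverse, and read off x = A-1 b. A sketch in plain Python, using the standard explicit 2 x 2 inverse formula (1/det)[[d, -b], [-c, a]] (helper names are my own):

```python
def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    # Explicit 2 x 2 inverse: (1/det) [[d, -b], [-c, a]].
    return [[d / det, -b / det], [-c / det, a / det]]

def solve2(A, b):
    # Unique solution of Ax = b when A is invertible: x = A^{-1} b.
    inv = inv2(A)
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

A = [[3, 4], [5, 6]]   # det A = 18 - 20 = -2, so A is invertible
b = [7, 8]
print(solve2(A, b))    # [-5.0, 5.5]
```

Checking the answer: 3(-5) + 4(5.5) = 7 and 5(-5) + 6(5.5) = 8, as required.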
If A is an invertible matrix, then A-1 is invertible and (A-1)-1 = A.
If A and B are n x n invertible matrices, then so is AB, and the inverse of AB is the product of the inverses of A and B in the reverse order. That is, (AB)-1 = B-1A-1.
If A is an invertible matrix, then so is AT, and the inverse of AT is the transpose of A-1. That is, (AT)-1 = (A-1)T.
An elementary matrix is one that is obtained by performing a single elementary row operation on an identity matrix.
If an elementary row operation is performed on an m x n matrix A, the resulting matrix can be written as EA, where the m x n matrix E is created by performing the same row operation on Im.
Each elementary matrix E is invertible. The inverse of E is the elementary matrix of the same type that transforms E back into I.
An n x n matrix A is invertible if and only if A is row equivalent to In, and in this case, any sequence of elementary row operations that reduces A to In also transforms In into A-1.
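This theorem suggests an algorithm: row reduce the augmented matrix [A I]; if A reduces to In, the right half of the result is A-1. A minimal Gauss-Jordan sketch in plain Python (no pivoting strategy beyond finding a nonzero entry; the helper name `invert` is my own):

```python
def invert(A):
    n = len(A)
    # Augment A with the identity: M = [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal and swap it up.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]  # scale the pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]             # eliminate the other entries in col
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # A has been reduced to I; the right half is A^{-1}.
    return [row[n:] for row in M]

A = [[2.0, 0.0], [1.0, 1.0]]
print(invert(A))  # [[0.5, 0.0], [-0.5, 1.0]]
```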
Let A and B be square matrices. If AB = I, then A and B are both invertible, with B = A-1 and A = B-1.
Let T : Rn -> Rn be a linear transformation and let A be the standard matrix for T. Then T is invertible if and only if A is an invertible matrix. In that case, the linear transformation S given by S(x) = A-1x is the unique function satisfying the equations S(T(x)) = x for all x in Rn and T(S(x)) = x for all x in Rn.
A subspace of Rn is any set H in Rn that has three properties:
a. The zero vector is in H.
b. For each u and v in H, the sum u+v is in H.
c. For each u in H and scalar c, the vector cu is in H.
The column space of a matrix A is the set Col A of all linear combinations of the columns of A.
The null space of a matrix A is the set Nul A of all solutions of the homogeneous equation Ax = 0.
The null space of an m x n matrix A is a subspace of Rn. Equivalently, the set of all solutions of a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of Rn.
A basis for a subspace H of Rn is a linearly independent set in H that spans H.
The pivot columns of a matrix A form a basis for the column space of A.
Suppose the set B = {b1, ..., bp} is a basis for a subspace H. For each x in H, the coordinates of x relative to the basis B are the weights c1, ..., cp such that x = c1b1 + ... + cpbp, and the vector [x]B = [c1, ..., cp] in Rp is called the coordinate vector of x (relative to B), or the B-coordinate vector of x.
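Finding [x]B means solving c1b1 + ... + cpbp = x, i.e. the system [b1 ... bp]c = x. For a basis of R2 this is a 2 x 2 solve; a sketch in plain Python using the explicit 2 x 2 inverse (the basis and x here are illustrative):

```python
# Basis B = {b1, b2} of R^2 and a vector x whose B-coordinates we want.
b1, b2 = [1.0, 0.0], [1.0, 1.0]
x = [3.0, 2.0]

# The matrix [b1 b2] has columns b1, b2; solve [b1 b2] c = x with the
# explicit 2 x 2 inverse (1/det)[[d, -b], [-c, a]].
a, b = b1[0], b2[0]   # first row of [b1 b2]
c, d = b1[1], b2[1]   # second row of [b1 b2]
det = a * d - b * c
c1 = ( d * x[0] - b * x[1]) / det
c2 = (-c * x[0] + a * x[1]) / det
print([c1, c2])  # [1.0, 2.0] -- so x = 1*b1 + 2*b2
```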
The dimension of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis for H. The dimension of the zero subspace {0} is defined to be zero.
The rank of a matrix A, denoted by rank A, is the dimension of the column space of A.
The Rank Theorem
If a matrix A has n columns, then rank A + dim Nul A = n.
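The Rank Theorem can be checked on a small example: reduce A to echelon form, count the pivots (that count is rank A), and the remaining n - rank A free variables give dim Nul A. A row-reduction sketch in plain Python (the helper name `rank` and the tolerance are my own choices):

```python
def rank(A):
    # Count pivots in a row-reduced copy of A.
    M = [row[:] for row in A]
    m, n = len(M), len(M[0])
    pivots = 0
    for col in range(n):
        # Look for a usable pivot at or below the current pivot row.
        r = next((i for i in range(pivots, m) if abs(M[i][col]) > 1e-12), None)
        if r is None:
            continue  # no pivot in this column: a free variable
        M[pivots], M[r] = M[r], M[pivots]
        p = M[pivots][col]
        M[pivots] = [x / p for x in M[pivots]]
        for i in range(m):
            if i != pivots and abs(M[i][col]) > 1e-12:
                f = M[i][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[pivots])]
        pivots += 1
    return pivots

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0]]   # second row is twice the first, so only one pivot
n = 3
r = rank(A)
print(r, n - r)  # rank A = 1, so dim Nul A = n - rank A = 2
```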
The Basis Theorem
Let H be a p-dimensional subspace of Rn. Any linearly independent set of exactly p elements in H is automatically a basis for H. Also, any set of p elements of H that spans H is automatically a basis for H.