Chapter 4
A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars, subject to 10 axioms (or rules) listed below. The axioms must hold for all vectors u, v, and w in V and for all scalars c and d.
- The sum of u and v, denoted by u+v, is in V.
- u + v = v + u.
- (u + v) + w = u + (v + w).
- There is a zero vector 0 in V such that u + 0 = u.
- For each u in V, there is a vector -u in V such that u + (-u) = 0.
- The scalar multiple of u by c, denoted by cu, is in V.
- c(u + v) = cu + cv.
- (c + d)u = cu + du.
- c(du) = (cd)u.
- 1u = u.
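The ten axioms can be spot-checked numerically for a familiar vector space such as R^3. The sketch below (with made-up vectors u, v, w and scalars c, d, mirroring the names in the axiom statement) verifies the eight algebraic identities; the two closure axioms hold automatically because NumPy addition and scalar multiplication of 3-vectors return 3-vectors.

```python
import numpy as np

# Illustrative vectors and scalars in R^3 (arbitrary choices).
u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.0, 5.0])
w = np.array([4.0, 4.0, -2.0])
c, d = 2.0, -3.0

assert np.allclose(u + v, v + u)                # commutativity
assert np.allclose((u + v) + w, u + (v + w))    # associativity
assert np.allclose(u + np.zeros(3), u)          # zero vector
assert np.allclose(u + (-u), np.zeros(3))       # additive inverse
assert np.allclose(c * (u + v), c * u + c * v)  # c(u + v) = cu + cv
assert np.allclose((c + d) * u, c * u + d * u)  # (c + d)u = cu + du
assert np.allclose(c * (d * u), (c * d) * u)    # c(du) = (cd)u
assert np.allclose(1 * u, u)                    # 1u = u
```

A numerical check like this does not prove the axioms (that requires algebra over all vectors and scalars), but it is a useful sanity check when defining nonstandard operations.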
A subspace of a vector space V is a subset H of V that has three properties:
a. The zero vector of V is in H.
b. H is closed under vector addition.
c. H is closed under scalar multiplication.
If v1, ..., vp are in a vector space V, then span{v1, ..., vp} is a subspace of V.
The null space of an m x n matrix A, written as Nul A, is the set of all solutions of the homogeneous equation Ax = 0. In set notation, Nul A = {x: x in Rn and Ax = 0}. The null space is a subspace of Rn. Equivalently, the set of all solutions of a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of Rn.
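One common numerical way to find a basis for Nul A is the singular value decomposition: the right singular vectors corresponding to (numerically) zero singular values span the null space. The matrix below is a made-up 3 x 4 example of rank 2, so dim Nul A = 4 - 2 = 2.

```python
import numpy as np

# Made-up example: row 3 = row 1 + row 2, so rank A = 2.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 2.0],
              [1.0, 2.0, 1.0, 3.0]])

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T   # columns form a basis for Nul A

# Every column of null_basis solves the homogeneous equation Ax = 0.
assert np.allclose(A @ null_basis, 0)
assert null_basis.shape == (4, 2)   # dim Nul A = n - rank = 2
```

(SciPy users can get the same thing from `scipy.linalg.null_space(A)`.)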
The column space of an m x n matrix A, written as Col A, is the set of all linear combinations of the columns of A. If A = [a1 ... an], then Col A = span{a1, ..., an}. The column space is a subspace of Rm. The column space is all of Rm if and only if the equation Ax = b has a solution for each b in Rm.
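A vector b lies in Col A exactly when Ax = b is consistent, which happens exactly when appending b as an extra column does not increase the rank. A small sketch of that test (the matrix and vectors are illustrative, not from the text):

```python
import numpy as np

# Col A is the plane {(x, y, z) : z = x + y} in R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

def in_col_space(A, b):
    # b is in Col A iff rank([A | b]) == rank(A).
    return (np.linalg.matrix_rank(np.column_stack([A, b]))
            == np.linalg.matrix_rank(A))

b_in = A @ np.array([2.0, 3.0])     # a combination of the columns
b_out = np.array([1.0, 0.0, 0.0])   # z != x + y, so not in Col A

assert in_col_space(A, b_in)
assert not in_col_space(A, b_out)
```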
A linear transformation T from a vector space V to a vector space W is a rule that assigns to each vector x in V a unique vector T(x) in W, such that (i) T(u + v) = T(u) + T(v) for all u, v, in V, and (ii) T(cu) = cT(u) for all u in V and all scalars c.
The kernel (or null space) of such a T is the set of all u in V such that T(u) = 0.
The range of T is the set of all vectors in W of the form T(x) for some x in V.
An indexed set {v1, ..., vp} of two or more vectors, with v1 not equal to 0, is linearly dependent if and only if some vj (with j > 1) is a linear combination of the preceding vectors, v1, ..., vj-1.
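The condition in this characterization can be tested one vector at a time: vj is a combination of v1, ..., vj-1 exactly when appending vj does not increase the rank. A sketch with a made-up set where v3 = 2v1 - 3v2:

```python
import numpy as np

vectors = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([2.0, -3.0, 0.0])]   # v3 = 2*v1 - 3*v2

def first_dependent_index(vectors):
    # Return the 0-based index of the first vj lying in
    # span{v1, ..., v(j-1)}, or None if the set is independent.
    for j in range(1, len(vectors)):
        prev = np.column_stack(vectors[:j])
        aug = np.column_stack(vectors[:j + 1])
        if np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(prev):
            return j
    return None

assert first_dependent_index(vectors) == 2   # v3 is the culprit
```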
Let H be a subspace of vector space V. An indexed set of vectors B = {b1, ..., bp} in V is a basis for H if (i) B is a linearly independent set, and (ii) the subspace spanned by B coincides with H; that is, H = span{b1, ..., bp}.
The Spanning Set Theorem
Let S = {v1, ..., vp} be a set in V, and let H = span{v1, ..., vp}.
a. If one of the vectors in S, say vk, is a linear combination of the remaining vectors in S, then the set formed from S by removing vk still spans H.
b. If H is not equal to {0}, some subset of S is a basis for H.
The pivot columns of a matrix A form a basis for Col A.
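Row reduction locates the pivot columns, and the corresponding columns of the original matrix A (not of the reduced matrix, whose columns generally do not lie in Col A) form the basis. A sketch using SymPy's `Matrix.rref()`, which returns the reduced matrix together with the pivot column indices; the matrix is a made-up example in which col2 = 2*col1 and col4 = 4*col1 + col3:

```python
import numpy as np
import sympy as sp

A = np.array([[1, 2, 0, 4],
              [2, 4, 1, 9],
              [3, 6, 1, 13]])

_, pivot_cols = sp.Matrix(A).rref()   # pivot columns are 0 and 2
basis = A[:, list(pivot_cols)]        # columns of the ORIGINAL A

# The basis has rank(A) columns and spans the same column space.
assert np.linalg.matrix_rank(basis) == np.linalg.matrix_rank(A)
```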
The Unique Representation Theorem
Let B = {b1, ..., bn} be a basis for a vector space V. Then for each x in V, there exists a unique set of scalars c1, ..., cn such that x = c1b1 + ... + cnbn. The coordinates of x relative to the basis B (or the B-coordinates of x) are the weights c1, ..., cn.
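In R^n, finding the B-coordinates of x amounts to solving the linear system P_B [x]_B = x, where P_B is the matrix whose columns are the basis vectors (the change-of-coordinates matrix). A sketch with an illustrative basis of R^3:

```python
import numpy as np

# Illustrative basis of R^3 (P_B is invertible, so B really is a basis).
b1 = np.array([1.0, 0.0, 1.0])
b2 = np.array([0.0, 1.0, 1.0])
b3 = np.array([0.0, 0.0, 2.0])
P_B = np.column_stack([b1, b2, b3])   # change-of-coordinates matrix

x = 2 * b1 - 1 * b2 + 3 * b3          # so [x]_B should be (2, -1, 3)
coords = np.linalg.solve(P_B, x)      # solve P_B [x]_B = x

assert np.allclose(coords, [2.0, -1.0, 3.0])
```

Uniqueness of the coordinates corresponds to P_B being invertible, which holds exactly because the basis vectors are linearly independent.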
Let B = {b1, ..., bn} be a basis for a vector space V. Then the coordinate mapping x -> [x]B is a one-to-one linear transformation from V onto Rn.
In general, a one-to-one linear transformation from a vector space V to a vector space W is called an isomorphism from V to W.