Let V and W be vector spaces over a field F. A function T:V→W is called a linear transformation from V to W if, for all x,y∈V and c∈F, the following conditions hold:
(a) T(x + y) = T(x) + T(y);
(b) T(cx) = cT(x).
A transformation T:V→W is linear if and only if T(cx + y) = cT(x) + T(y)
for all x,y∈V and all scalars c∈F.
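The single condition above is easy to spot-check numerically. The following sketch (not from the text; the matrix A is invented for illustration) verifies T(cx + y) = cT(x) + T(y) on random inputs for the map T(x) = Ax:

```python
import numpy as np

# Invented example: T : R^3 -> R^2 given by T(x) = A x.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    return A @ x

# Check T(cx + y) = cT(x) + T(y) on random vectors and scalars.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    c = rng.normal()
    assert np.allclose(T(c * x + y), c * T(x) + T(y))
```

Of course, a finite sample can only falsify linearity, never prove it; here T is linear by construction.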
Let V and W be vector spaces, and let T:V→W be linear. We define the null space (or kernel) of T, denoted N(T), to be the set of all vectors x∈V such that T(x)=0. That is,
N(T) = {x∈V : T(x) = 0}.
We define the range (or image) of T, denoted R(T), to be the subset of W consisting of all images T(x) of vectors x∈V. That is,
R(T)={T(x):x∈V}.
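For a matrix map, bases for both subspaces can be read off a singular value decomposition. A sketch with an invented matrix A (so T = LA, left-multiplication by A, defined later in this section):

```python
import numpy as np

# Invented example: the second row of A is twice the first, so rank is 1.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))

null_basis = Vt[rank:]       # rows form a basis for N(L_A) in R^3
range_basis = U[:, :rank]    # columns form a basis for R(L_A) in R^2

assert np.allclose(A @ null_basis.T, 0)     # each basis vector maps to 0
assert rank == 1 and null_basis.shape[0] == 2
```

The rows of Vt past the rank are orthogonal to the row space of A, which is exactly the condition Ax = 0.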
Theorem 2.1. Let V and W be vector spaces and T:V→W be linear. Then N(T) and R(T) are subspaces of V and W, respectively.
Definition. Let V and W be vector spaces, and let T:V→W be linear. If N(T) and R(T) are finite-dimensional, then we define the nullity of T, denoted nullity(T), and the rank of T, denoted rank(T), to be the dimensions of N(T) and R(T), respectively.
Theorem 2.2. Let V and W be vector spaces, and let T:V→W be linear. If β={v1,v2,…,vn} is a basis for V, then
R(T)=span(T(β))=span({T(v1),T(v2),…,T(vn)}).
Theorem 2.3 (Dimension Theorem). Let V and W be vector spaces, and let T:V→W be linear. If V is finite-dimensional, then
nullity(T)+rank(T)=dim(V).
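The theorem can be illustrated on the differentiation operator D on P3(R), represented in the standard ordered basis {1, x, x², x³} (a standard example; the code itself is a sketch, not from the text):

```python
import numpy as np

# Column j holds the coordinates of D(x^j) in {1, x, x^2, x^3}:
# D(1)=0, D(x)=1, D(x^2)=2x, D(x^3)=3x^2.
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

rank = np.linalg.matrix_rank(D)          # R(D) = P_2(R), so rank = 3
s = np.linalg.svd(D, compute_uv=False)
nullity = 4 - int(np.sum(s > 1e-10))     # N(D) = constants, so nullity = 1
assert rank + nullity == 4               # = dim P_3(R)
```

Here N(D) is the constants (nullity 1) and R(D) is P2(R) (rank 3), matching dim P3(R) = 4.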
Theorem 2.4. Let V and W be vector spaces, and let T:V→W be linear. Then T is one-to-one if and only if N(T) = {0}.
Theorem 2.5. Let V and W be vector spaces of equal finite dimension, and let T:V→W be linear. Then the following are equivalent:
(a) T is one-to-one.
(b) T is onto.
(c) rank(T) = dim(V).
Theorem 2.6. Let V and W be vector spaces over F, and suppose that {v1,v2,…,vn} is a basis for V. For w1,w2,…,wn in W, there exists exactly one linear transformation T:V→W such that T(vi)=wi for i=1,2,…,n.
Corollary 1. Let V and W be vector spaces, and suppose that V has a finite basis {v1,v2,…,vn}. If U,T:V→W are linear and U(vi)=T(vi) for i=1,2,…,n, then U=T.
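For V = W = R², Theorem 2.6 is constructive: if the columns of an invertible matrix hold the basis vectors vi and another matrix holds the targets wi, the unique T has standard matrix A = [w1 w2][v1 v2]⁻¹. A sketch with invented vectors:

```python
import numpy as np

# Invented basis {v1, v2} of R^2 and invented targets {w1, w2}.
V = np.column_stack([[1., 1.], [1., -1.]])   # columns v1, v2 (a basis)
W = np.column_stack([[2., 0.], [3., 5.]])    # columns w1, w2 (arbitrary)

# The unique linear T with T(v_i) = w_i, as a standard matrix:
A = np.linalg.inv(V.T @ V) if False else W @ np.linalg.inv(V)

assert np.allclose(A @ V, W)   # A v_i = w_i for each i
```

Uniqueness (Corollary 1) is visible here too: A is pinned down by the two equations A vi = wi, since {v1, v2} spans R².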
2.2 The Matrix Representation of a Linear Transformation
Let V be a finite-dimensional vector space. An ordered basis for V is a basis for V endowed with a specific order; that is, an ordered basis for V is a finite sequence of linearly independent vectors in V that generates V.
For the vector space Fn, we call {e1,e2,…,en} the standard ordered basis for Fn. Similarly, for the vector space Pn(F), we call {1,x,…,xn} the standard ordered basis for Pn(F).
Suppose that V and W are finite-dimensional vector spaces with ordered bases β={v1,v2,…,vn} and γ={w1,w2,…,wm}, respectively. Let T:V→W be linear. Then for each j, 1≤j≤n, there exist unique scalars aij∈F, 1≤i≤m, such that
T(vj) = ∑_{i=1}^{m} aij wi for 1 ≤ j ≤ n.
We call the m×n matrix A defined by Aij=aij the matrix representation of T in the ordered bases β and γ and write
A=[T]βγ.
If V=W and β=γ, then we write simply
A=[T]β.
Notice that the jth column of A is simply [T(vj)]γ. Also observe that if U:V→W is a linear transformation such that [U]βγ=[T]βγ, then U=T.
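The column-by-column recipe can be carried out numerically: to get [T(vj)]γ, solve a linear system whose coefficient matrix has the vectors of γ as columns. A sketch (the map and both bases are invented):

```python
import numpy as np

# Invented T : R^2 -> R^3, T(x) = M x, with nonstandard bases β and γ.
M = np.array([[1., 0.],
              [1., 1.],
              [0., 2.]])
beta = [np.array([1., 1.]), np.array([1., -1.])]                      # basis of R^2
gamma = np.column_stack([[1., 0., 0.], [1., 1., 0.], [1., 1., 1.]])   # basis of R^3

# Column j of [T]_β^γ is [T(v_j)]_γ: solve  gamma @ c = T(v_j).
cols = [np.linalg.solve(gamma, M @ v) for v in beta]
T_bg = np.column_stack(cols)

# Sanity check: reassembling T(v_j) from its γ-coordinates recovers M v_j.
assert all(np.allclose(gamma @ T_bg[:, j], M @ beta[j]) for j in range(2))
```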
Let T,U:V→W be arbitrary functions, where V and W are vector spaces over F, and let a∈F. We define T+U:V→W by
(T+U)(x) = T(x) + U(x) for all x∈V,
and aT:V→W by
(aT)(x) = aT(x) for all x∈V.
Let V and W be vector spaces over F. We denote the vector space of all linear transformations from V into W by L(V,W). In the case that V=W, we write L(V) instead of L(V,W).
Theorem 2.7. Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively, and let T,U:V→W be linear transformations. Then
(a) [T+U]βγ = [T]βγ + [U]βγ;
(b) [aT]βγ = a[T]βγ for all scalars a.
2.3 Composition of Linear Transformations and Matrix Multiplication
Theorem 2.8. Let V, W, and Z be vector spaces over the same field F, and let T:V→W and U:W→Z be linear. Then UT:V→Z is linear.
Let T:V→W and U:W→Z be linear transformations, and let A=[U]βγ and B=[T]αβ, where α={v1,v2,…,vn}, β={w1,w2,…,wm}, and γ={z1,z2,…,zp} are ordered bases for V, W, and Z, respectively.
We would like to define the product AB of two matrices so that AB = [UT]αγ. Computing UT(vj) = U(∑_k Bkj wk) = ∑_i (∑_{k=1}^{m} Aik Bkj) zi leads to the following definition: AB is the p×n matrix with
(AB)ij = ∑_{k=1}^{m} Aik Bkj for 1 ≤ i ≤ p, 1 ≤ j ≤ n.
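The entrywise formula (AB)ij = ∑_k Aik Bkj can be implemented directly and checked against NumPy's built-in product; a sketch (matrices invented for illustration):

```python
import numpy as np

def matmul_entrywise(A, B):
    """Matrix product via (AB)_{ij} = sum_k A_{ik} B_{kj}."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1., 2.], [3., 4.], [5., 6.]])   # 3 x 2
B = np.array([[1., 0., -1.], [2., 1., 0.]])    # 2 x 3
assert np.allclose(matmul_entrywise(A, B), A @ B)
```

Note the dimension bookkeeping: a 3×2 times a 2×3 gives a 3×3 result, matching the p×n shape of [UT]αγ.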
We define the Kronecker delta δij by δij = 1 if i = j and δij = 0 if i ≠ j. The n×n identity matrix In is defined by (In)ij = δij.
Thus, for example,
I1 = (1),  I2 = (1 0; 0 1),  and  I3 = (1 0 0; 0 1 0; 0 0 1),
where semicolons separate the rows of each matrix.
Theorem 2.9. Let A be an m×n matrix, B and C be n×p matrices, and D and E be q×m matrices. Then
(a) A(B+C) = AB + AC and (D+E)A = DA + EA.
(b) a(AB) = (aA)B = A(aB) for any scalar a.
(c) ImA = A = AIn.
(d) If V is an n-dimensional vector space with an ordered basis β, then [IV]β = In.
Theorem 2.10. Let V and W be finite-dimensional vector spaces having ordered bases β and γ, respectively, and let T:V→W be linear. Then, for each u∈V, we have
[T(u)]γ = [T]βγ [u]β.
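The coordinate relation [T(u)]γ = [T]βγ [u]β is straightforward to verify numerically; a sketch (map and bases invented for illustration):

```python
import numpy as np

# Invented T : R^2 -> R^3, T(x) = M x, with nonstandard bases β and γ.
M = np.array([[1., 0.], [1., 1.], [0., 2.]])
beta = np.column_stack([[1., 1.], [1., -1.]])                         # basis of R^2
gamma = np.column_stack([[1., 0., 0.], [1., 1., 0.], [1., 1., 1.]])   # basis of R^3

# [T]_β^γ: column j solves  gamma @ c = T(v_j); matrix RHS does all columns.
T_bg = np.linalg.solve(gamma, M @ beta)

u = np.array([3., -2.])
u_beta = np.linalg.solve(beta, u)           # [u]_β
Tu_gamma = np.linalg.solve(gamma, M @ u)    # [T(u)]_γ
assert np.allclose(Tu_gamma, T_bg @ u_beta)
```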
Let A be an m×n matrix with entries from a field F. We denote by LA the mapping LA:Fn→Fm defined by LA(x)=Ax for each column vector x∈Fn. We call LA a left-multiplication transformation.
Theorem 2.11. Let A be an m×n matrix with entries from F. Then the left-multiplication transformation LA:Fn→Fm is linear. Furthermore, if B is any other m×n matrix (with entries from F) and β and γ are the standard ordered bases for Fn and Fm, respectively, then we have the following properties.
(a) [LA]βγ = A.
(b) LA = LB if and only if A = B.
(c) LA+B = LA + LB and LaA = aLA for all a∈F.
(d) If T:Fn→Fm is linear, then there exists a unique m×n matrix C such that T = LC. In fact, C = [T]βγ.
Let V and W be vector spaces, and let T:V→W be linear. A function U:W→V is said to be an inverse of T if TU=IW and UT=IV. If T has an inverse, then T is said to be invertible.
We often use the fact that a function is invertible if and only if it is both one-to-one and onto.
Theorem 2.12. Let V and W be vector spaces, and let T:V→W be linear and invertible. Then T−1:W→V is linear.
Theorem 2.13. Let T be an invertible linear transformation from V to W. Then V is finite-dimensional if and only if W is finite-dimensional. In this case, dim(V)=dim(W).
Definition. Let A be an n×n matrix. Then A is invertible if there exists an n×n matrix B such that AB=BA=I. If A is invertible, then the matrix B such that AB=BA=I is unique.
Theorem 2.14. Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively. Let T:V→W be linear. Then T is invertible if and only if [T]βγ is invertible. Furthermore, [T−1]γβ = ([T]βγ)−1.
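With standard bases, Theorem 2.14 says T = LA is invertible exactly when A is, with [T⁻¹] = A⁻¹. A sketch (A invented):

```python
import numpy as np

# Invented invertible matrix (det = 1).
A = np.array([[2., 1.],
              [1., 1.]])
A_inv = np.linalg.inv(A)

# T^{-1}(T(x)) = x and T(T^{-1}(y)) = y, in matrix form:
x = np.array([3., -4.])
assert np.allclose(A_inv @ (A @ x), x)
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```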
Let V and W be vector spaces. We say that V is isomorphic to W if there exists a linear transformation T:V→W that is invertible. Such a linear transformation is called an isomorphism from V onto W.
Theorem 2.15. Let V and W be finite-dimensional vector spaces (over the same field). Then V is isomorphic to W if and only if dim(V)=dim(W).
Theorem 2.16. Let V and W be finite-dimensional vector spaces of dimensions n and m, respectively. Then L(V,W) is finite-dimensional of dimension mn.
Let β and β′ be two ordered bases for a finite-dimensional vector space V, and let Q=[IV]β′β. Then
(a) Q is invertible.
(b) For any v∈V, [v]β = Q[v]β′.
The matrix Q=[IV]β′β is called a change of coordinate matrix.
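The jth column of Q = [IV]β′β is the β-coordinate vector of the jth vector of β′. When β is the standard basis of Fn, Q is simply the matrix whose columns are the vectors of β′. A sketch (β′ invented):

```python
import numpy as np

# Invented basis β' of R^2; β is the standard basis, so Q's columns
# are just the vectors of β' themselves.
beta_prime = np.column_stack([[1., 1.], [1., -1.]])
Q = beta_prime                                     # Q = [I]_{β'}^{β}

v_bp = np.array([2., 3.])                          # [v]_{β'}
v = 2 * beta_prime[:, 0] + 3 * beta_prime[:, 1]    # v itself (= [v]_β here)
assert np.allclose(Q @ v_bp, v)                    # [v]_β = Q [v]_{β'}
```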
Theorem 2.17. Let T be a linear operator on a finite-dimensional vector space V, and let β and β′ be ordered bases for V. Suppose that Q is the change of coordinate matrix that changes β′-coordinates into β-coordinates. Then
[T]β′ = Q−1 [T]β Q.
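The similarity relation [T]β′ = Q⁻¹[T]βQ can be verified on a sample vector: applying [T]β′ directly in β′-coordinates should match the round trip through β-coordinates. A sketch ([T]β and β′ invented; β is the standard basis):

```python
import numpy as np

T_b = np.array([[3., 1.],
                [0., 2.]])                    # [T]_β (invented)
Q = np.column_stack([[1., 1.], [1., -1.]])    # columns: β' in β-coordinates
T_bp = np.linalg.inv(Q) @ T_b @ Q             # [T]_{β'}

v_bp = np.array([1., 2.])                     # [v]_{β'}
lhs = T_bp @ v_bp                             # [T(v)]_{β'} directly
rhs = np.linalg.inv(Q) @ (T_b @ (Q @ v_bp))   # via β-coordinates
assert np.allclose(lhs, rhs)
```

Similar matrices share similarity invariants, e.g. np.trace(T_bp) equals np.trace(T_b) here.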
Suppose that V is a finite-dimensional vector space with the ordered basis β={x1,x2,…,xn}. We call the ordered basis β∗={f1,f2,…,fn} of the dual space V∗ that satisfies fi(xj)=δij (1≤i,j≤n) the dual basis of β.
Let V and W be finite-dimensional vector spaces over F with ordered bases β and γ, respectively. For any linear transformation T:V→W, the mapping Tt:W∗→V∗ defined by Tt(g)=gT for all g∈W∗ is a linear transformation with the property that
[Tt]γ∗β∗=([T]βγ)t.
The linear transformation Tt is called the transpose of T.
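For T = LA with standard bases (and dual bases), a functional g on Fm can be identified with its coordinate vector in γ∗, and Tt(g) = gT then has coordinates At g, matching [Tt]γ∗β∗ = ([T]βγ)t. A sketch (A invented):

```python
import numpy as np

# Invented [T]_β^γ for T : R^3 -> R^2.
A = np.array([[1., 2., 0.],
              [0., 1., 3.]])

g = np.array([4., -1.])        # coordinates of g in the dual basis γ*
Ttg = A.T @ g                  # [T^t(g)]_{β*} = A^t [g]_{γ*}

# Defining property of the transpose: (T^t g)(x) = g(T(x)) for all x.
x = np.array([1., 2., 3.])
assert np.isclose(Ttg @ x, g @ (A @ x))
```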