5.4 Invariant Subspaces and the Cayley-Hamilton Theorem – Linear Algebra, 5th Edition

5.4 Invariant Subspaces and the Cayley-Hamilton Theorem

In Section 5.1, we observed that if v is an eigenvector of a linear operator T, then T maps the span of {v} into itself. Subspaces that are mapped into themselves are of great importance in the study of linear operators (see, e.g., Exercises 29-33 of Section 2.1).

Definition.

Let T be a linear operator on a vector space V. A subspace W of V is called a T-invariant subspace of V if T(W) ⊆ W, that is, if T(v) ∈ W for all v ∈ W.

Example 1

Suppose that T is a linear operator on a vector space V. Then the following subspaces of V are T-invariant:

  (1) {0}

  (2) V

  (3) R(T)

  (4) N(T)

  (5) Eλ, for any eigenvalue λ of T.

The proofs that these subspaces are T-invariant are left as exercises (see Exercise 3).

Example 2

Let T be the linear operator on R3 defined by

T(a, b, c)=(a+b, b+c, 0).

Then the xy-plane = {(x, y, 0): x, y ∈ R} and the x-axis = {(x, 0, 0): x ∈ R} are T-invariant subspaces of R3.

Let T be a linear operator on a vector space V, and let x be a nonzero vector in V. The subspace

W = span({x, T(x), T^2(x), …})

is called the T-cyclic subspace of V generated by x. It is a simple matter to show that W is T-invariant. In fact, W is the “smallest” T-invariant subspace of V containing x. That is, any T-invariant subspace of V containing x must also contain W (see Exercise 11). Cyclic subspaces have various uses. We apply them in this section to establish the Cayley-Hamilton theorem. In Exercise 31, we outline a method for using cyclic subspaces to compute the characteristic polynomial of a linear operator without resorting to determinants. Cyclic subspaces also play an important role in Chapter 7, where we study matrix representations of nondiagonalizable linear operators.

Example 3

Let T be the linear operator on R3 defined by

T(a, b, c) = (−b + c, a + c, 3c).

We determine the T-cyclic subspace generated by e1=(1, 0, 0). Since

T(e1)=T(1, 0, 0)=(0, 1, 0)=e2

and

T^2(e1) = T(T(e1)) = T(e2) = (−1, 0, 0) = −e1,

it follows that

span({e1, T(e1), T^2(e1), …}) = span({e1, e2}) = {(s, t, 0): s, t ∈ R}.
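The computation in Example 3 can be mirrored numerically. Below is a minimal sketch (assuming NumPy is available; `cyclic_basis` is our own helper, not from the text) that builds a basis for a T-cyclic subspace by iterating the operator until the next image becomes linearly dependent on the earlier ones:

```python
import numpy as np

# Matrix of the operator of Example 3 with respect to the standard basis;
# the columns are T(e1), T(e2), T(e3). The sign in the first row is the
# one consistent with Example 6, where T(e2) = -e1.
T = np.array([[0.0, -1.0, 1.0],
              [1.0,  0.0, 1.0],
              [0.0,  0.0, 3.0]])

def cyclic_basis(T, v):
    """Basis {v, T(v), ..., T^(j-1)(v)} of the T-cyclic subspace
    generated by v: keep iterating T while the rank keeps growing."""
    basis = [v]
    w = T @ v
    while np.linalg.matrix_rank(np.column_stack(basis + [w])) > len(basis):
        basis.append(w)
        w = T @ w
    return basis

basis = cyclic_basis(T, np.array([1.0, 0.0, 0.0]))
print(len(basis))   # 2: the cyclic subspace generated by e1 is span({e1, e2})
```

The same helper reproduces Example 4 if T is replaced by the matrix of differentiation with respect to a basis of polynomials.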

Example 4

Let T be the linear operator on P(R) defined by T(f(x)) = f′(x). Then the T-cyclic subspace generated by x^2 is span({x^2, 2x, 2}) = P2(R).

The existence of a T-invariant subspace provides the opportunity to define a new linear operator whose domain is this subspace. If T is a linear operator on V and W is a T-invariant subspace of V, then the restriction T_W of T to W (see Appendix B) is a mapping from W to W, and it follows that T_W is a linear operator on W (see Exercise 7). As a linear operator, T_W inherits certain properties from its parent operator T. The following result illustrates one way in which the two operators are linked.

Theorem 5.20.

Let T be a linear operator on a finite-dimensional vector space V, and let W be a T-invariant subspace of V. Then the characteristic polynomial of T_W divides the characteristic polynomial of T.

Proof.

Choose an ordered basis γ = {v1, v2, …, vk} for W, and extend it to an ordered basis β = {v1, v2, …, vk, vk+1, …, vn} for V. Let A = [T]β and B1 = [T_W]γ. Then, by Exercise 12, A can be written in the form

A = [ B1  B2 ]
    [ O   B3 ].

Let f(t) be the characteristic polynomial of T and g(t) the characteristic polynomial of T_W. Then

f(t) = det(A − tI_n) = det [ B1 − tI_k      B2           ]
                           [ O              B3 − tI_(n−k) ]

     = g(t) · det(B3 − tI_(n−k))

by Exercise 21 of Section 4.3. Thus g(t) divides f(t).

Example 5

Let T be the linear operator on R4 defined by

T(a, b, c, d) = (a + b + 2c − d, b + d, 2c − d, c + d),

and let W = {(t, s, 0, 0): t, s ∈ R}. Observe that W is a T-invariant subspace of R4 because, for any vector (a, b, 0, 0) ∈ W,

T(a, b, 0, 0) = (a + b, b, 0, 0) ∈ W.

Let γ={e1, e2}, which is an ordered basis for W. Extend γ to the standard ordered basis β for R4. Then

B1 = [T_W]γ = [ 1  1 ]
              [ 0  1 ]

and

A = [T]β = [ 1  1  2  −1 ]
           [ 0  1  0   1 ]
           [ 0  0  2  −1 ]
           [ 0  0  1   1 ]

in the notation of Theorem 5.20. Let f(t) be the characteristic polynomial of T and g(t) be the characteristic polynomial of T_W. Then

f(t) = det(A − tI4) = det [ 1−t   1     2    −1  ]
                          [ 0    1−t    0     1  ]
                          [ 0     0    2−t   −1  ]
                          [ 0     0     1    1−t ]

     = det [ 1−t   1  ] · det [ 2−t   −1  ]
           [ 0    1−t ]       [ 1    1−t ]

     = g(t) · det [ 2−t   −1  ]
                  [ 1    1−t ].
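The divisibility asserted by Theorem 5.20 can be checked numerically for Example 5. A small sketch, assuming NumPy: `np.poly` returns the coefficients of the monic polynomial det(tI − A), which differs from the text's det(A − tI) only by the sign (−1)^n, so divisibility is unaffected.

```python
import numpy as np

# A = [T]_beta from Example 5; the upper-left 2x2 block is B1 = [T_W]_gamma
A = np.array([[1.0, 1.0, 2.0, -1.0],
              [0.0, 1.0, 0.0,  1.0],
              [0.0, 0.0, 2.0, -1.0],
              [0.0, 0.0, 1.0,  1.0]])
B1 = A[:2, :2]

f = np.poly(A)    # characteristic polynomial of T (degree 4)
g = np.poly(B1)   # characteristic polynomial of T_W (degree 2)

q, r = np.polydiv(f, g)     # f(t) = q(t) g(t) + r(t)
print(np.allclose(r, 0.0))  # True: g(t) divides f(t)
```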

In view of Theorem 5.20, we may use the characteristic polynomial of T_W to gain information about the characteristic polynomial of T itself. In this regard, cyclic subspaces are useful because the characteristic polynomial of the restriction of a linear operator T to a cyclic subspace is readily computable.

Theorem 5.21.

Let T be a linear operator on a finite-dimensional vector space V, and let W denote the T-cyclic subspace of V generated by a nonzero vector v ∈ V. Let k = dim(W). Then

  (a) {v, T(v), T^2(v), …, T^(k−1)(v)} is a basis for W.

  (b) If a0v + a1T(v) + ⋯ + a_(k−1)T^(k−1)(v) + T^k(v) = 0, then the characteristic polynomial of T_W is f(t) = (−1)^k(a0 + a1t + ⋯ + a_(k−1)t^(k−1) + t^k).

Proof.

(a) Since v ≠ 0, the set {v} is linearly independent. Let j be the largest positive integer for which

β = {v, T(v), …, T^(j−1)(v)}

is linearly independent. Such a j must exist because V is finite-dimensional. Let Z = span(β). Then β is a basis for Z. Furthermore, T^j(v) ∈ Z by Theorem 1.7 (p. 40). We use this information to show that Z is a T-invariant subspace of V. Let w ∈ Z. Since w is a linear combination of the vectors of β, there exist scalars b0, b1, …, b_(j−1) such that

w = b0v + b1T(v) + ⋯ + b_(j−1)T^(j−1)(v),

and hence

T(w) = b0T(v) + b1T^2(v) + ⋯ + b_(j−1)T^j(v).

Thus T(w) is a linear combination of vectors in Z, and hence belongs to Z. So Z is T-invariant. Furthermore, v ∈ Z. By Exercise 11, W is the smallest T-invariant subspace of V that contains v, so W ⊆ Z. Clearly Z ⊆ W, and so we conclude that Z = W. It follows that β is a basis for W, and therefore dim(W) = j. Thus j = k. This proves (a).

(b) Now view β (from (a)) as an ordered basis for W. Let a0, a1, …, a_(k−1) be the scalars such that

a0v + a1T(v) + ⋯ + a_(k−1)T^(k−1)(v) + T^k(v) = 0.

Observe that

[T_W]β = [ 0  0  ⋯  0  −a0      ]
         [ 1  0  ⋯  0  −a1      ]
         [ 0  1  ⋯  0  −a2      ]
         [ ⋮  ⋮      ⋮   ⋮      ]
         [ 0  0  ⋯  1  −a_(k−1) ],

which has the characteristic polynomial

f(t) = (−1)^k(a0 + a1t + ⋯ + a_(k−1)t^(k−1) + t^k)

by Exercise 19. Thus f(t) is the characteristic polynomial of T_W, proving (b).

Example 6

Let T be the linear operator of Example 3, and let W = span({e1, e2}), the T-cyclic subspace generated by e1. We compute the characteristic polynomial f(t) of T_W in two ways: by means of Theorem 5.21 and by means of determinants.

(a) By means of Theorem 5.21. From Example 3, we have that {e1, e2} is a basis for W, and that T^2(e1) = −e1. Hence

1·e1 + 0·T(e1) + T^2(e1) = 0.

Therefore, by Theorem 5.21(b),

f(t) = (−1)^2(1 + 0·t + t^2) = t^2 + 1.

(b) By means of determinants. Let β = {e1, e2}, which is an ordered basis for W. Since T(e1) = e2 and T(e2) = −e1, we have

[T_W]β = [ 0  −1 ]
         [ 1   0 ]

and therefore,

f(t) = det [ −t  −1 ] = t^2 + 1.
           [  1  −t ]
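Theorem 5.21(b) underlies the determinant-free method of Exercise 31: find the first linear dependence among v, T(v), T^2(v), …, read off the coefficients a_i, and write down the characteristic polynomial of T_W directly. A sketch for Example 6, assuming NumPy (the variable names are ours):

```python
import numpy as np

# Operator of Examples 3 and 6 (columns are the images of e1, e2, e3)
T = np.array([[0.0, -1.0, 1.0],
              [1.0,  0.0, 1.0],
              [0.0,  0.0, 3.0]])
v = np.array([1.0, 0.0, 0.0])   # generator e1

# Collect v, T(v), ..., T^(k-1)(v) until T^k(v) becomes dependent
iterates = [v]
while np.linalg.matrix_rank(np.column_stack(iterates + [T @ iterates[-1]])) > len(iterates):
    iterates.append(T @ iterates[-1])
k = len(iterates)

# Solve a_0 v + a_1 T(v) + ... + a_(k-1) T^(k-1)(v) = -T^k(v)
a, *_ = np.linalg.lstsq(np.column_stack(iterates), -(T @ iterates[-1]), rcond=None)

# Theorem 5.21(b): characteristic polynomial, constant term first
coeffs = (-1) ** k * np.append(a, 1.0)
print(coeffs)   # approximately [1, 0, 1], i.e. f(t) = 1 + 0*t + t^2 = t^2 + 1
```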

The Cayley-Hamilton Theorem

As an illustration of the importance of Theorem 5.21, we prove a well-known result that is used in Chapter 7. The reader should refer to Appendix E for the definition of f(T), where T is a linear operator and f(x) is a polynomial.

Theorem 5.22. (Cayley-Hamilton)

Let T be a linear operator on a finite-dimensional vector space V, and let f(t) be the characteristic polynomial of T. Then f(T) = T_0, the zero transformation. That is, T “satisfies” its characteristic equation.

Proof.

We show that f(T)(v) = 0 for all v ∈ V. This is obvious if v = 0 because f(T) is linear; so suppose that v ≠ 0. Let W be the T-cyclic subspace generated by v, and suppose that dim(W) = k. By Theorem 5.21(a), there exist scalars a0, a1, …, a_(k−1) such that

a0v + a1T(v) + ⋯ + a_(k−1)T^(k−1)(v) + T^k(v) = 0.

Hence Theorem 5.21(b) implies that

g(t) = (−1)^k(a0 + a1t + ⋯ + a_(k−1)t^(k−1) + t^k)

is the characteristic polynomial of T_W. Combining these two equations yields

g(T)(v) = (−1)^k(a0I + a1T + ⋯ + a_(k−1)T^(k−1) + T^k)(v) = 0.

By Theorem 5.20, g(t) divides f(t); hence there exists a polynomial q(t) such that f(t)=q(t)g(t). So

f(T)(v)=q(T)g(T)(v)=q(T)(g(T)(v))=q(T)(0)=0.

Example 7

Let T be the linear operator on R2 defined by T(a, b) = (a + 2b, −2a + b), and let β = {e1, e2}. Then

A = [  1  2 ]
    [ −2  1 ],

where A=[T]β. The characteristic polynomial of T is, therefore,

f(t) = det(A − tI) = det [ 1−t    2  ] = t^2 − 2t + 5.
                         [ −2   1−t  ]

It is easily verified that T_0 = f(T) = T^2 − 2T + 5I. Similarly,

f(A) = A^2 − 2A + 5I = [ −3   4 ] + [ −2  −4 ] + [ 5  0 ] = [ 0  0 ]
                       [ −4  −3 ]   [  4  −2 ]   [ 0  5 ]   [ 0  0 ].
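The verification in Example 7 is easy to automate. The sketch below (assuming NumPy) evaluates f(A) by Horner's rule applied to the matrix; `np.poly` returns the coefficients of det(tI − A), which for this 2 × 2 matrix coincides with the text's f(t) = t^2 − 2t + 5:

```python
import numpy as np

A = np.array([[ 1.0, 2.0],
              [-2.0, 1.0]])   # A = [T]_beta from Example 7

coeffs = np.poly(A)           # approximately [1, -2, 5]

# Horner's rule on the matrix: ((0*A + 1*I)A - 2I)A + 5I = A^2 - 2A + 5I
fA = np.zeros_like(A)
for c in coeffs:
    fA = fA @ A + c * np.eye(2)

print(np.allclose(fA, 0.0))  # True: A satisfies its characteristic equation
```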

Example 7 suggests the following result.

Corollary (Cayley-Hamilton Theorem for Matrices).

Let A be an n×n matrix, and let f(t) be the characteristic polynomial of A. Then f(A)=O, the n×n zero matrix.

Proof.

See Exercise 15.

Invariant Subspaces and Direct Sums

It is useful to decompose a finite-dimensional vector space V into a direct sum of as many T-invariant subspaces as possible because the behavior of T on V can be inferred from its behavior on the direct summands. For example, T is diagonalizable if and only if V can be decomposed into a direct sum of one-dimensional T-invariant subspaces (see Exercise 35). In Chapter 7, we consider alternate ways of decomposing V into direct sums of T-invariant subspaces if T is not diagonalizable. We proceed to gather a few facts about direct sums of T-invariant subspaces that are used in Section 7.4. The first of these facts is about characteristic polynomials.

Theorem 5.23.

Let T be a linear operator on a finite-dimensional vector space V, and suppose that V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk, where Wi is a T-invariant subspace of V for each i (1 ≤ i ≤ k). Suppose that fi(t) is the characteristic polynomial of T_Wi (1 ≤ i ≤ k). Then f1(t)·f2(t)· ⋯ ·fk(t) is the characteristic polynomial of T.

Proof.

The proof is by mathematical induction on k. In what follows, f(t) denotes the characteristic polynomial of T. Suppose first that k = 2. Let β1 be an ordered basis for W1, β2 an ordered basis for W2, and β = β1 ∪ β2. Then β is an ordered basis for V by Theorem 5.9(d) (p. 275). Let A = [T]β, B1 = [T_W1]β1, and B2 = [T_W2]β2. By Exercise 33, it follows that

A = [ B1  O  ]
    [ O′  B2 ],

where O and O′ are zero matrices of the appropriate sizes. Then

f(t) = det(A − tI) = det(B1 − tI) · det(B2 − tI) = f1(t) · f2(t)

as in the proof of Theorem 5.20, proving the result for k=2.

Now assume that the theorem is valid for k − 1 summands, where k − 1 ≥ 2, and suppose that V is a direct sum of k subspaces, say,

V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk.

Let W = W1 + W2 + ⋯ + W_(k−1). It is easily verified that W is T-invariant and that V = W ⊕ Wk. So by the case for k = 2, f(t) = g(t)·fk(t), where g(t) is the characteristic polynomial of T_W. Clearly W = W1 ⊕ W2 ⊕ ⋯ ⊕ W_(k−1), and therefore g(t) = f1(t)·f2(t)· ⋯ ·f_(k−1)(t) by the induction hypothesis. We conclude that f(t) = g(t)·fk(t) = f1(t)·f2(t)· ⋯ ·fk(t).

As an illustration of this result, suppose that T is a diagonalizable linear operator on a finite-dimensional vector space V with distinct eigenvalues λ1, λ2, …, λk. By Theorem 5.10 (p. 277), V is a direct sum of the eigenspaces of T. Since each eigenspace is T-invariant, we may view this situation in the context of Theorem 5.23. For each eigenvalue λi, the restriction of T to E_λi has characteristic polynomial (λi − t)^mi, where mi is the dimension of E_λi. By Theorem 5.23, the characteristic polynomial f(t) of T is the product

f(t) = (λ1 − t)^m1 (λ2 − t)^m2 ⋯ (λk − t)^mk.

It follows that the multiplicity of each eigenvalue is equal to the dimension of the corresponding eigenspace, as expected.

Example 8

Let T be the linear operator on R4 defined by

T(a, b, c, d) = (2a − b, a + b, c − d, c + d),

and let W1 = {(s, t, 0, 0): s, t ∈ R} and W2 = {(0, 0, s, t): s, t ∈ R}. Notice that W1 and W2 are each T-invariant and that R4 = W1 ⊕ W2. Let β1 = {e1, e2}, β2 = {e3, e4}, and β = β1 ∪ β2 = {e1, e2, e3, e4}. Then β1 is an ordered basis for W1, β2 is an ordered basis for W2, and β is an ordered basis for R4. Let A = [T]β, B1 = [T_W1]β1, and B2 = [T_W2]β2. Then

B1 = [ 2  −1 ],    B2 = [ 1  −1 ],
     [ 1   1 ]          [ 1   1 ]

and

A = [ B1  O  ] = [ 2  −1  0   0 ]
    [ O   B2 ]   [ 1   1  0   0 ]
                 [ 0   0  1  −1 ]
                 [ 0   0  1   1 ].

Let f(t), f1(t), and f2(t) denote the characteristic polynomials of T, T_W1, and T_W2, respectively. Then

f(t) = det(A − tI) = det(B1 − tI) · det(B2 − tI) = f1(t) · f2(t).
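Theorem 5.23 (and Exercise 39, its matrix version) can be checked numerically for Example 8. A sketch assuming NumPy; `np.polymul` multiplies polynomials given as coefficient arrays:

```python
import numpy as np

# Blocks from Example 8: B1 = [T_W1]_beta1 and B2 = [T_W2]_beta2
B1 = np.array([[2.0, -1.0], [1.0, 1.0]])
B2 = np.array([[1.0, -1.0], [1.0, 1.0]])

# Assemble A = [T]_beta as the direct sum of the two blocks
A = np.block([[B1, np.zeros((2, 2))],
              [np.zeros((2, 2)), B2]])

f, f1, f2 = np.poly(A), np.poly(B1), np.poly(B2)
print(np.allclose(f, np.polymul(f1, f2)))  # True: f(t) = f1(t) f2(t)
```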

The matrix A in Example 8 can be obtained by joining the matrices B1 and B2 in the manner explained in the next definition.

Definition.

Let B1 ∈ Mm×m(F), and let B2 ∈ Mn×n(F). We define the direct sum of B1 and B2, denoted B1 ⊕ B2, as the (m + n) × (m + n) matrix A such that

Aij = { (B1)ij              for 1 ≤ i, j ≤ m,
      { (B2)(i−m),(j−m)     for m + 1 ≤ i, j ≤ n + m,
      { 0                   otherwise.

If B1, B2, …, Bk are square matrices with entries from F, then we define the direct sum of B1, B2, …, Bk recursively by

B1 ⊕ B2 ⊕ ⋯ ⊕ Bk = (B1 ⊕ B2 ⊕ ⋯ ⊕ B_(k−1)) ⊕ Bk.

If A = B1 ⊕ B2 ⊕ ⋯ ⊕ Bk, then we often write

A = [ B1  O   ⋯  O  ]
    [ O   B2  ⋯  O  ]
    [ ⋮   ⋮       ⋮  ]
    [ O   O   ⋯  Bk ].
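The recursive definition above is straightforward to implement (it is what `scipy.linalg.block_diag` computes). A self-contained sketch assuming NumPy; `direct_sum` is our own name:

```python
import numpy as np

def direct_sum(*blocks):
    """Direct sum B1 (+) B2 (+) ... (+) Bk: square blocks placed
    down the diagonal, zero matrices everywhere else."""
    n = sum(B.shape[0] for B in blocks)
    A = np.zeros((n, n))
    i = 0
    for B in blocks:
        m = B.shape[0]
        A[i:i+m, i:i+m] = B   # copy the block onto the diagonal
        i += m
    return A

# Illustrative blocks (not the ones from Example 9)
B1 = np.array([[1.0, 2.0], [-1.0, 1.0]])
B2 = np.array([[3.0]])
print(direct_sum(B1, B2))
```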

Example 9

Let

B1 = [ 1  2 ],    B2 = [ 3 ],    and    B3 = [ 1  2  1 ]
     [ 1  1 ]                                [ 1  2  3 ]
                                             [ 1  1  1 ].

Then

B1 ⊕ B2 ⊕ B3 = [ 1  2  0  0  0  0 ]
               [ 1  1  0  0  0  0 ]
               [ 0  0  3  0  0  0 ]
               [ 0  0  0  1  2  1 ]
               [ 0  0  0  1  2  3 ]
               [ 0  0  0  1  1  1 ].

The final result of this section relates direct sums of matrices to direct sums of invariant subspaces. It is an extension of Exercise 33 to the case k ≥ 2.

Theorem 5.24.

Let T be a linear operator on a finite-dimensional vector space V, and let W1, W2, …, Wk be T-invariant subspaces of V such that V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk. For each i, let βi be an ordered basis for Wi, and let β = β1 ∪ β2 ∪ ⋯ ∪ βk. Let A = [T]β and Bi = [T_Wi]βi for i = 1, 2, …, k. Then A = B1 ⊕ B2 ⊕ ⋯ ⊕ Bk.

Proof.

See Exercise 34.

Exercises

  1. Label the following statements as true or false.

    (a) There exists a linear operator T with no T-invariant subspace.

    (b) If T is a linear operator on a finite-dimensional vector space V and W is a T-invariant subspace of V, then the characteristic polynomial of T_W divides the characteristic polynomial of T.

    (c) Let T be a linear operator on a finite-dimensional vector space V, and let v and w be in V. If W is the T-cyclic subspace generated by v, W′ is the T-cyclic subspace generated by w, and W = W′, then v = w.

    (d) If T is a linear operator on a finite-dimensional vector space V, then for any v ∈ V the T-cyclic subspace generated by v is the same as the T-cyclic subspace generated by T(v).

    (e) Let T be a linear operator on an n-dimensional vector space. Then there exists a polynomial g(t) of degree n such that g(T) = T_0.

    (f) Any polynomial of degree n with leading coefficient (−1)^n is the characteristic polynomial of some linear operator.

    (g) If T is a linear operator on a finite-dimensional vector space V, and if V is the direct sum of k T-invariant subspaces, then there is an ordered basis β for V such that [T]β is a direct sum of k matrices.

  2. For each of the following linear operators T on the vector space V, determine whether the given subspace W is a T-invariant subspace of V.

    (a) V = P3(R), T(f(x)) = f′(x), and W = P2(R)

    (b) V = P(R), T(f(x)) = xf(x), and W = P2(R)

    (c) V = R3, T(a, b, c) = (a + b + c, a + b + c, a + b + c), and W = {(t, t, t): t ∈ R}

    (d) V = C([0, 1]), T(f(t)) = [∫₀¹ f(x) dx]t, and W = {f ∈ V: f(t) = at + b for some a and b}

    (f) V = M2×2(R), T(A) = (0 1; 1 0)A, and W = {A ∈ V: A^t = A}

  3. Let T be a linear operator on a finite-dimensional vector space V. Prove that the following subspaces are T-invariant.

    (a) {0} and V

    (b) N(T) and R(T)

    (c) Eλ, for any eigenvalue λ of T

  4. Let T be a linear operator on a vector space V, and let W be a T-invariant subspace of V. Prove that W is g(T)-invariant for any polynomial g(t).

  5. Let T be a linear operator on a vector space V. Prove that the intersection of any collection of T-invariant subspaces of V is a T-invariant subspace of V.

  6. For each linear operator T on the vector space V, find an ordered basis for the T-cyclic subspace generated by the vector z.

    (a) V = R4, T(a, b, c, d) = (a + b, b − c, a + c, a + d), and z = e1.

    (b) V = P3(R), T(f(x)) = f′(x), and z = x^3.

    (c) V = M2×2(R), T(A) = A^t, and z = (0 1; 1 0).

    (d) V = M2×2(R), T(A) = (1 1; 2 2)A, and z = (0 1; 1 0).

  7. Prove that the restriction of a linear operator T to a T-invariant subspace is a linear operator on that subspace.

  8. Let T be a linear operator on a vector space with a T-invariant subspace W. Prove that if v is an eigenvector of T_W with corresponding eigenvalue λ, then v is also an eigenvector of T with corresponding eigenvalue λ.

  9. For each linear operator T and cyclic subspace W in Exercise 6, compute the characteristic polynomial of T_W in two ways, as in Example 6.

  10. For each linear operator in Exercise 6, find the characteristic polynomial f(t) of T, and verify that the characteristic polynomial of T_W (computed in Exercise 9) divides f(t).

  11. Let T be a linear operator on a vector space V, let v be a nonzero vector in V, and let W be the T-cyclic subspace of V generated by v. Prove that

    1. (a) W is T-invariant.

    2. (b) Any T-invariant subspace of V containing v also contains W.

  12. Prove that, in the proof of Theorem 5.20,

      A = [ B1  B2 ]
          [ O   B3 ].

  13. Let T be a linear operator on a vector space V, let v be a nonzero vector in V, and let W be the T-cyclic subspace of V generated by v. For any w ∈ V, prove that w ∈ W if and only if there exists a polynomial g(t) such that w = g(T)(v).

  14. Prove that the polynomial g(t) of Exercise 13 can always be chosen so that its degree is less than dim(W).

  15. Use the Cayley-Hamilton theorem (Theorem 5.22) to prove its corollary for matrices. Warning: If f(t)=det(AtI) is the characteristic polynomial of A, it is tempting to “prove” that f(A)=O by saying “f(A)=det(AAI)=det(O)=0.” Why is this argument incorrect? Visit goo.gl/ZMVn9i for a solution.

  16. Let T be a linear operator on a finite-dimensional vector space V.

    (a) Prove that if the characteristic polynomial of T splits, then so does the characteristic polynomial of the restriction of T to any T-invariant subspace of V.

    (b) Deduce that if the characteristic polynomial of T splits, then any nontrivial T-invariant subspace of V contains an eigenvector of T.

  17. Let A be an n × n matrix. Prove that

      dim(span({I_n, A, A^2, …})) ≤ n.
  18. Let A be an n × n matrix with characteristic polynomial

      f(t) = (−1)^n t^n + a_(n−1)t^(n−1) + ⋯ + a1t + a0.

      (a) Prove that A is invertible if and only if a0 ≠ 0.

      (b) Prove that if A is invertible, then

      A^(−1) = (−1/a0)[(−1)^n A^(n−1) + a_(n−1)A^(n−2) + ⋯ + a1·I_n].

      (c) Use (b) to compute A^(−1) for

      A = [ 1  2  1 ]
          [ 0  2  3 ]
          [ 0  0  1 ].
  19. Let A denote the k × k matrix

      [ 0  0  ⋯  0  −a0      ]
      [ 1  0  ⋯  0  −a1      ]
      [ 0  1  ⋯  0  −a2      ]
      [ ⋮  ⋮      ⋮   ⋮      ]
      [ 0  0  ⋯  1  −a_(k−1) ],

      where a0, a1, …, a_(k−1) are arbitrary scalars. Prove that the characteristic polynomial of A is

      (−1)^k(a0 + a1t + ⋯ + a_(k−1)t^(k−1) + t^k).

      Hint: Use mathematical induction on k, computing the determinant by cofactor expansion along the first row.

  20. Let T be a linear operator on a vector space V, and suppose that V is a T-cyclic subspace of itself. Prove that if U is a linear operator on V, then UT=TU if and only if U=g(T) for some polynomial g(t). Hint: Suppose that V is generated by v. Choose g(t) according to Exercise 13 so that g(T)(v)=U(v).

  21. Let T be a linear operator on a two-dimensional vector space V. Prove that either V is a T-cyclic subspace of itself or T=cI for some scalar c.

  22. Let T be a linear operator on a two-dimensional vector space V and suppose that T ≠ cI for any scalar c. Show that if U is any linear operator on V such that UT = TU, then U = g(T) for some polynomial g(t).

  23. Let T be a linear operator on a finite-dimensional vector space V, and let W be a T-invariant subspace of V. Suppose that v1, v2, …, vk are eigenvectors of T corresponding to distinct eigenvalues. Prove that if v1 + v2 + ⋯ + vk is in W, then vi ∈ W for all i. Hint: Use mathematical induction on k.

  24. Prove that the restriction of a diagonalizable linear operator T to any nontrivial T-invariant subspace is also diagonalizable. Hint: Use the result of Exercise 23.

  25. (a) Prove the converse to Exercise 19(a) of Section 5.2: If T and U are diagonalizable linear operators on a finite-dimensional vector space V such that UT = TU, then T and U are simultaneously diagonalizable. (See the definitions in the exercises of Section 5.2.) Hint: For any eigenvalue λ of T, show that Eλ is U-invariant, and apply Exercise 24 to obtain a basis for Eλ of eigenvectors of U.

      (b) State and prove a matrix version of (a).

  26. Let T be a linear operator on an n-dimensional vector space V such that T has n distinct eigenvalues. Prove that V is a T-cyclic subspace of itself. Hint: Use Exercise 23 to find a vector v such that {v, T(v), …, T^(n−1)(v)} is linearly independent.

Exercises 27 through 31 require familiarity with quotient spaces as defined in Exercise 31 of Section 1.3. Before attempting these exercises, the reader should first review the other exercises treating quotient spaces: Exercise 35 of Section 1.6, Exercise 42 of Section 2.1, and Exercise 24 of Section 2.4.

For the purposes of Exercises 27 through 31, T is a fixed linear operator on a finite-dimensional vector space V, and W is a nonzero T-invariant subspace of V. We require the following definition.

Definition.

Let T be a linear operator on a vector space V, and let W be a T-invariant subspace of V. Define T̄: V/W → V/W by

T̄(v + W) = T(v) + W   for any v + W ∈ V/W.

  27. (a) Prove that T̄ is well defined. That is, show that T̄(v + W) = T̄(v′ + W) whenever v + W = v′ + W.

      (b) Prove that T̄ is a linear operator on V/W.

      (c) Let η: V → V/W be the linear transformation defined in Exercise 42 of Section 2.1 by η(v) = v + W. Show that the diagram of Figure 5.6 commutes; that is, prove that ηT = T̄η. (This exercise does not require the assumption that V is finite-dimensional.)

    Figure 5.6

  28. Let f(t), g(t), and h(t) be the characteristic polynomials of T, T_W, and T̄, respectively. Prove that f(t) = g(t)h(t). Hint: Extend an ordered basis γ = {v1, v2, …, vk} for W to an ordered basis β = {v1, v2, …, vk, vk+1, …, vn} for V. Then show that the collection of cosets α = {vk+1 + W, vk+2 + W, …, vn + W} is an ordered basis for V/W, and prove that

      [T]β = [ B1  B2 ]
             [ O   B3 ],

      where B1 = [T_W]γ and B3 = [T̄]α.

  29. Use the hint in Exercise 28 to prove that if T is diagonalizable, then so is T̄.

  30. Prove that if both T_W and T̄ are diagonalizable and have no common eigenvalues, then T is diagonalizable.

The results of Theorem 5.21 and Exercise 28 are useful in devising methods for computing characteristic polynomials without the use of determinants. This is illustrated in the next exercise.

  31. Let

      A = [ 1  1  3 ]
          [ 2  3  4 ]
          [ 1  2  1 ],

      let T = L_A, and let W be the cyclic subspace of R3 generated by e1.

      (a) Use Theorem 5.21 to compute the characteristic polynomial of T_W.

      (b) Show that {e2 + W} is a basis for R3/W, and use this fact to compute the characteristic polynomial of T̄.

      (c) Use the results of (a) and (b) to find the characteristic polynomial of A.

Exercises 32 through 39 are concerned with direct sums.

  32. Let T be a linear operator on a vector space V, and let W1, W2, …, Wk be T-invariant subspaces of V. Prove that W1 + W2 + ⋯ + Wk is also a T-invariant subspace of V.

  33. Give a direct proof of Theorem 5.24 for the case k = 2. (This result is used in the proof of Theorem 5.23.)

  34. Prove Theorem 5.24. Hint: Begin with Exercise 33 and extend it using mathematical induction on k, the number of subspaces.

  35. Let T be a linear operator on a finite-dimensional vector space V. Prove that T is diagonalizable if and only if V is the direct sum of one-dimensional T-invariant subspaces.

  36. Let T be a linear operator on a finite-dimensional vector space V, and let W1, W2, …, Wk be T-invariant subspaces of V such that V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk. Prove that

      det(T) = det(T_W1) · det(T_W2) · ⋯ · det(T_Wk).

  37. Let T be a linear operator on a finite-dimensional vector space V, and let W1, W2, …, Wk be T-invariant subspaces of V such that V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk. Prove that T is diagonalizable if and only if T_Wi is diagonalizable for all i.

  38. Let C be a collection of diagonalizable linear operators on a finite-dimensional vector space V. Prove that there is an ordered basis β such that [T]β is a diagonal matrix for all T ∈ C if and only if the operators of C commute under composition. (This is an extension of Exercise 25.) Hints for the case that the operators commute: The result is trivial if each operator has only one eigenvalue. Otherwise, establish the general result by mathematical induction on dim(V), using the fact that V is the direct sum of the eigenspaces of some operator in C that has more than one eigenvalue.

  39. Let B1, B2, …, Bk be square matrices with entries in the same field, and let A = B1 ⊕ B2 ⊕ ⋯ ⊕ Bk. Prove that the characteristic polynomial of A is the product of the characteristic polynomials of the Bi's.

  40. Let

      A = [ 1          2          ⋯   n  ]
          [ n+1        n+2        ⋯   2n ]
          [ ⋮          ⋮               ⋮  ]
          [ n²−n+1     n²−n+2     ⋯   n² ].

      Find the characteristic polynomial of A. Hint: First prove that A has rank 2 and that span({(1, 1, …, 1), (1, 2, …, n)}) is L_A-invariant.

  41. Let A ∈ Mn×n(R) be the matrix defined by Aij = 1 for all i and j. Find the characteristic polynomial of A.