6.4 Normal and Self-Adjoint Operators
We have seen the importance of diagonalizable operators in Chapter 5. For an operator on a vector space V to be diagonalizable, it is necessary and sufficient for V to contain a basis of eigenvectors for this operator. As V is an inner product space in this chapter, it is reasonable to seek conditions that guarantee that V has an orthonormal basis of eigenvectors. A very important result that helps achieve our goal is Schur’s theorem (Theorem 6.14). The formulation that follows is in terms of linear operators. The next section contains the more familiar matrix form. We begin with a lemma.
Lemma. Let T be a linear operator on a finite-dimensional inner product space V. If T has an eigenvector, then so does T*.
Proof.
Suppose that v is an eigenvector of T with corresponding eigenvalue λ. Then for any $x\in \text{V}$,
$$0=\langle 0,\,x\rangle =\langle (\text{T}-\lambda \text{I})(v),\,x\rangle =\langle v,\,{(\text{T}-\lambda \text{I})}^{*}(x)\rangle =\langle v,\,(\text{T*}-\overline{\lambda}\text{I})(x)\rangle,$$
and hence v is orthogonal to the range of $\text{T*}-\overline{\lambda}\text{I}$. So $\text{T*}-\overline{\lambda}\text{I}$ is not onto and hence is not one-to-one. Thus $\text{T*}-\overline{\lambda}\text{I}$ has a nonzero null space, and any nonzero vector in this null space is an eigenvector of T* with corresponding eigenvalue $\overline{\lambda}$.
Recall (see the exercises of Section 2.1 and see Section 5.4) that a subspace W of V is said to be T-invariant if T(W) is contained in W. If W is T-invariant, we may define the restriction ${\text{T}}_{\text{W}}:\text{W}\to \text{W}$ by ${\text{T}}_{\text{W}}(x)=\text{T}(x)$ for all $x\in \text{W}$. It is clear that ${\text{T}}_{\text{W}}$ is a linear operator on W. Recall from Section 5.2 that a polynomial is said to split if it factors into linear polynomials.
Theorem 6.14 (Schur).
Let T be a linear operator on a finite-dimensional inner product space V. Suppose that the characteristic polynomial of T splits. Then there exists an orthonormal basis $\gamma $ for V such that the matrix ${[\text{T}]}_{\gamma}$ is upper triangular.
Proof.
By Exercise 12(a) of Section 5.2, there exists an ordered basis $\beta =\{{w}_{1},\text{}{w}_{2},\text{}\dots ,\text{}{w}_{n}\}$ for V such that ${[\text{T}]}_{\beta}$ is upper triangular. Now apply the Gram–Schmidt process to $\beta $ to obtain an orthogonal basis ${\beta}^{\prime}=\{{v}_{1},\text{}{v}_{2},\text{}\dots ,\text{}{v}_{n}\}$ for V. For each k, $1\le k\le n$, let
$${S}_{k}=\{{w}_{1},\text{}{w}_{2},\text{}\dots ,\text{}{w}_{k}\}\quad \text{and}\quad {S}_{k}^{\prime}=\{{v}_{1},\text{}{v}_{2},\text{}\dots ,\text{}{v}_{k}\}.$$
As in the proof of Theorem 6.4, $\text{span}({S}_{k})=\text{span}({S}_{k}^{\prime})$ for all k. By Exercise 12 of Section 2.2, $\text{T}({w}_{k})\in \text{span}({S}_{k})$ for all k. Hence $\text{T}({v}_{k})\in \text{span}({S}_{k}^{\prime})$ for all k, and so ${[\text{T}]}_{{\beta}^{\prime}}$ is upper triangular by the same exercise. Finally, let ${z}_{i}={\displaystyle \frac{1}{\|{v}_{i}\|}}{v}_{i}$ for all $1\le i\le n$ and $\gamma =\{{z}_{1},\text{}{z}_{2},\text{}\dots ,\text{}{z}_{n}\}$. Then $\gamma $ is an orthonormal basis for V, and ${[\text{T}]}_{\gamma}$ is upper triangular.
We now return to our original goal of finding an orthonormal basis of eigenvectors of a linear operator T on a finite-dimensional inner product space V. Note that if such an orthonormal basis $\beta $ exists, then ${[\text{T}]}_{\beta}$ is a diagonal matrix, and hence ${[\text{T*}]}_{\beta}={[\text{T}]}_{\beta}^{*}$ is also a diagonal matrix. Because diagonal matrices commute, we conclude that T and T* commute. Thus if V possesses an orthonormal basis of eigenvectors of T, then $\text{TT}\text{*}=\text{T}\text{*}\text{T}$.
Definitions.
Let V be an inner product space, and let T be a linear operator on V. We say that T is normal if $\text{TT*}=\text{T*T}$. An $n\times n$ real or complex matrix A is normal if $AA\text{*}=A\text{*}A$.
It follows immediately from Theorem 6.10 (p. 356) that T is normal if and only if ${[\text{T}]}_{\beta}$ is normal, where $\beta $ is an orthonormal basis.
Example 1
Let $\text{T}:\text{}{\text{R}}^{2}\to {\text{R}}^{2}$ be rotation by $\theta $, where $0<\theta <\pi $. The matrix representation of T in the standard ordered basis is given by
$$A=\left(\begin{array}{rr}\mathrm{cos}\,\theta & -\mathrm{sin}\,\theta \\ \mathrm{sin}\,\theta & \mathrm{cos}\,\theta \end{array}\right).$$
Note that $AA*=I=A*A$; so A, and hence T, is normal.
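Example 1’s claim is easy to verify numerically. The following minimal sketch (the helper functions `matmul` and `transpose` are my own, not part of the text) multiplies the rotation matrix by its transpose for a sample angle; since A is real, $A\text{*}={A}^{t}$, so this is exactly the normality condition.

```python
import math

def matmul(A, B):
    # naive product of small square matrices
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

theta = 0.7  # any sample angle with 0 < theta < pi
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# For a real matrix, A* = A^t, so normality reads A A^t = A^t A (here both equal I).
left = matmul(A, transpose(A))
right = matmul(transpose(A), A)
I = [[1.0, 0.0], [0.0, 1.0]]

assert all(abs(left[i][j] - I[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert all(abs(left[i][j] - right[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Any other value of θ works equally well, since the columns of a rotation matrix are always orthonormal.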
Example 2
Suppose that A is a real skew-symmetric matrix; that is, ${A}^{t}=-A$. Then A is normal because both $A{A}^{t}$ and ${A}^{t}A$ are equal to $-{A}^{2}$.
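Example 2 can likewise be checked by direct computation; in this sketch the particular skew-symmetric matrix and the helper names are my own choices.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

# a sample real skew-symmetric matrix: A^t = -A
A = [[0, 2, -1],
     [-2, 0, 3],
     [1, -3, 0]]
assert transpose(A) == scale(-1, A)

# both products equal -A^2, so A is normal
A2 = matmul(A, A)
assert matmul(A, transpose(A)) == scale(-1, A2)
assert matmul(transpose(A), A) == scale(-1, A2)
```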
Clearly, the operator T in Example 1 does not even possess one eigenvector. So in the case of a real inner product space, we see that normality is not sufficient to guarantee an orthonormal basis of eigenvectors. All is not lost, however. We show that normality suffices if V is a complex inner product space.
Before we prove the promised result for normal operators, we need some general properties of normal operators.
Theorem 6.15.
Let V be an inner product space, and let T be a normal operator on V. Then the following statements are true.

(a) $\|\text{T}(x)\|=\|\text{T*}(x)\|$ for all $x\in \text{V}$.

(b) $\text{T}-c\text{I}$ is normal for every $c\in F$.

(c) If x is an eigenvector of T corresponding to eigenvalue $\lambda $, then x is also an eigenvector of T* corresponding to eigenvalue $\overline{\lambda}$. That is, if $\text{T}(x)=\lambda x$, then $\text{T*}(x)=\overline{\lambda}x$.

(d) If ${\lambda}_{1}$ and ${\lambda}_{2}$ are distinct eigenvalues of T with corresponding eigenvectors ${x}_{1}$ and ${x}_{2}$, then ${x}_{1}$ and ${x}_{2}$ are orthogonal.
Proof.
(a) For any $x\in \text{V}$, we have
$$\|\text{T}(x)\|^{2}=\langle \text{T}(x),\,\text{T}(x)\rangle =\langle \text{T*T}(x),\,x\rangle =\langle \text{TT*}(x),\,x\rangle =\langle \text{T*}(x),\,\text{T*}(x)\rangle =\|\text{T*}(x)\|^{2}.$$
The proof of (b) is left as an exercise.
(c) Suppose that $\text{T}(x)=\lambda x$ for some $x\in \text{V}$. Let $\text{U}=\text{T}-\lambda \text{I}$. Then $\text{U}(x)=0$, and U is normal by (b). Thus (a) implies that
$$0=\|\text{U}(x)\|=\|\text{U*}(x)\|=\|(\text{T*}-\overline{\lambda}\text{I})(x)\|.$$
Hence $\text{T*}(x)=\overline{\lambda}x$. So x is an eigenvector of T*.
(d) Let ${\lambda}_{1}$ and ${\lambda}_{2}$ be distinct eigenvalues of T with corresponding eigenvectors ${x}_{1}$ and ${x}_{2}$. Then, using (c), we have
$${\lambda}_{1}\langle {x}_{1},\,{x}_{2}\rangle =\langle {\lambda}_{1}{x}_{1},\,{x}_{2}\rangle =\langle \text{T}({x}_{1}),\,{x}_{2}\rangle =\langle {x}_{1},\,\text{T*}({x}_{2})\rangle =\langle {x}_{1},\,{\overline{\lambda}}_{2}{x}_{2}\rangle ={\lambda}_{2}\langle {x}_{1},\,{x}_{2}\rangle.$$
Since ${\lambda}_{1}\ne {\lambda}_{2}$, we conclude that $\langle {x}_{1},\text{}{x}_{2}\rangle =0$.
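Parts (c) and (d) of Theorem 6.15 can be illustrated concretely. The sketch below (the matrix, its eigenvectors, and the helper names are my own choices) uses rotation by π/2, a normal real matrix whose eigenvalues over C are ±i.

```python
# rotation by pi/2, viewed as a complex matrix: normal, eigenvalues i and -i
A = [[0, -1],
     [1,  0]]
lam1, x1 = 1j, [1, -1j]    # A x1 = i * x1
lam2, x2 = -1j, [1, 1j]    # A x2 = -i * x2

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def inner(x, y):
    # standard inner product on C^2: <x, y> = sum_i x_i * conj(y_i)
    return sum(a * b.conjugate() for a, b in zip(x, y))

# x1 and x2 really are eigenvectors of A
assert apply(A, x1) == [lam1 * c for c in x1]
assert apply(A, x2) == [lam2 * c for c in x2]

# (c): x1 is also an eigenvector of A* (the conjugate transpose), with eigenvalue conj(lam1)
Astar = [[0, 1], [-1, 0]]
assert apply(Astar, x1) == [lam1.conjugate() * c for c in x1]

# (d): eigenvectors for distinct eigenvalues are orthogonal
assert inner(x1, x2) == 0
```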
Theorem 6.16.
Let T be a linear operator on a finite-dimensional complex inner product space V. Then T is normal if and only if there exists an orthonormal basis for V consisting of eigenvectors of T.
Proof.
Suppose that T is normal. By the fundamental theorem of algebra (Theorem D.4), the characteristic polynomial of T splits. So we may apply Schur’s theorem to obtain an orthonormal basis $\beta =\{{v}_{1},\text{}{v}_{2},\text{}\dots ,\text{}{v}_{n}\}$ for V such that ${[\text{T}]}_{\beta}=A$ is upper triangular. We know that ${v}_{1}$ is an eigenvector of T because A is upper triangular. Assume that ${v}_{1},\text{}{v}_{2},\text{}\dots ,\text{}{v}_{k-1}$ are eigenvectors of T. We claim that ${v}_{k}$ is also an eigenvector of T. It then follows by mathematical induction on k that all of the ${v}_{i}$’s are eigenvectors of T. Consider any $j<k$, and let ${\lambda}_{j}$ denote the eigenvalue of T corresponding to ${v}_{j}$. By Theorem 6.15, $\text{T*}({v}_{j})={\overline{\lambda}}_{j}{v}_{j}$. Since A is upper triangular,
$$\text{T}({v}_{k})={A}_{1k}{v}_{1}+{A}_{2k}{v}_{2}+\cdots +{A}_{kk}{v}_{k}.$$
Furthermore, by the corollary to Theorem 6.5 (p. 345),
$${A}_{jk}=\langle \text{T}({v}_{k}),\,{v}_{j}\rangle =\langle {v}_{k},\,\text{T*}({v}_{j})\rangle =\langle {v}_{k},\,{\overline{\lambda}}_{j}{v}_{j}\rangle ={\lambda}_{j}\langle {v}_{k},\,{v}_{j}\rangle =0.$$
It follows that $\text{T}({v}_{k})={A}_{kk}{v}_{k}$, and hence ${v}_{k}$ is an eigenvector of T. So by induction, all the vectors in $\beta $ are eigenvectors of T.
The converse was already proved on page 367.
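To see Theorem 6.16 at work on a small example, the following sketch (the matrix and its hand-computed eigenvectors are my own, not from the text) assembles an orthonormal eigenbasis of a self-adjoint, hence normal, 2×2 complex matrix into a matrix Q and checks that Q*AQ is diagonal.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def ctranspose(A):
    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A))]

# a self-adjoint (hence normal) matrix with eigenvalues 1 and 3
A = [[2, 1j],
     [-1j, 2]]
s = 1 / math.sqrt(2)
# columns of Q: orthonormal eigenvectors (-i, 1)/sqrt(2) and (i, 1)/sqrt(2)
Q = [[-1j * s, 1j * s],
     [s,       s]]

# Q* A Q should be the diagonal matrix diag(1, 3)
D = matmul(ctranspose(Q), matmul(A, Q))
assert abs(D[0][0] - 1) < 1e-12 and abs(D[1][1] - 3) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```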
Interestingly, as the next example shows, Theorem 6.16 does not extend to infinite-dimensional complex inner product spaces.
Example 3
Consider the inner product space H with the orthonormal set S from Example 9 in Section 6.1. Let $\text{V}=\text{span}(S)$, and let T and U be the linear operators on V defined by $\text{T}(f)={f}_{1}f$ and $\text{U}(f)={f}_{-1}f$. Then
$$\text{T}({f}_{n})={f}_{n+1}\quad \text{and}\quad \text{U}({f}_{n})={f}_{n-1}$$
for all integers n. Thus
$$\langle \text{T}({f}_{m}),\,{f}_{n}\rangle =\langle {f}_{m+1},\,{f}_{n}\rangle =\langle {f}_{m},\,{f}_{n-1}\rangle =\langle {f}_{m},\,\text{U}({f}_{n})\rangle$$
for all integers m and n, since both middle inner products equal 1 when $n=m+1$ and 0 otherwise. It follows that $\text{U}=\text{T*}$. Furthermore, $\text{TT*}=\text{I}=\text{T*T}$; so T is normal.
We show that T has no eigenvectors. Suppose that f is an eigenvector of T, say, $\text{T}(f)=\lambda f$ for some $\lambda $. Since V equals the span of S, we may write
$$f=\sum _{i=n}^{m}{a}_{i}{f}_{i},\quad \text{where}\ {a}_{m}\ne 0.$$
Hence
$$\sum _{i=n}^{m}{a}_{i}{f}_{i+1}=\text{T}(f)=\lambda f=\sum _{i=n}^{m}\lambda {a}_{i}{f}_{i}.$$
Since ${a}_{m}\ne 0$, we can write ${f}_{m+1}$ as a linear combination of ${f}_{n},\text{}{f}_{n+1},\text{}\dots ,\text{}{f}_{m}$. But this is a contradiction because S is linearly independent.
Example 1 illustrates that normality is not sufficient to guarantee the existence of an orthonormal basis of eigenvectors for real inner product spaces. For real inner product spaces, we must replace normality by the stronger condition that $\text{T}=\text{T*}$ in order to guarantee such a basis.
Definitions.
Let T be a linear operator on an inner product space V. We say that T is self-adjoint (or Hermitian) if $\text{T}=\text{T*}$. An $n\times n$ real or complex matrix A is self-adjoint (or Hermitian) if $A=A\text{*}$.
It follows immediately that if $\beta $ is an orthonormal basis, then T is self-adjoint if and only if ${[\text{T}]}_{\beta}$ is self-adjoint. For real matrices, this condition reduces to the requirement that A be symmetric.
Before we state our main result for self-adjoint operators, we need some preliminary work.
By definition, a linear operator on a real inner product space has only real eigenvalues. The lemma that follows shows that the same can be said for self-adjoint operators on a complex inner product space. Similarly, the characteristic polynomial of every linear operator on a complex inner product space splits, and the same is true for self-adjoint operators on a real inner product space.
Lemma. Let T be a self-adjoint operator on a finite-dimensional inner product space V. Then

(a) Every eigenvalue of T is real.

(b) Suppose that V is a real inner product space. Then the characteristic polynomial of T splits.
Proof.
(a) Suppose that $\text{T}(x)=\lambda x$ for $x\ne 0$. Because a self-adjoint operator is also normal, we can apply Theorem 6.15(c) to obtain
$$\lambda x=\text{T}(x)=\text{T*}(x)=\overline{\lambda}x.$$
So $\lambda =\overline{\lambda}$; that is, $\lambda $ is real.
(b) Let $n=\mathrm{dim}(\text{V})$, let $\beta $ be an orthonormal basis for V, and let $A={[\text{T}]}_{\beta}$. Then A is self-adjoint. Let ${\text{T}}_{A}$ be the linear operator on ${\text{C}}^{n}$ defined by ${\text{T}}_{A}(x)=Ax$ for all $x\in {\text{C}}^{n}$. Note that ${\text{T}}_{A}$ is self-adjoint because ${[{\text{T}}_{A}]}_{\gamma}=A$, where $\gamma $ is the standard ordered (orthonormal) basis for ${\text{C}}^{n}$. So, by (a), the eigenvalues of ${\text{T}}_{A}$ are real. By the fundamental theorem of algebra, the characteristic polynomial of ${\text{T}}_{A}$ splits into factors of the form $t-\lambda $. Since each $\lambda $ is real, the characteristic polynomial splits over R. But ${\text{T}}_{A}$ has the same characteristic polynomial as A, which has the same characteristic polynomial as T. Therefore the characteristic polynomial of T splits.
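For a real symmetric 2×2 matrix with entries a, b, b, d, the lemma can be seen directly: the characteristic polynomial ${t}^{2}-(a+d)t+(ad-{b}^{2})$ has discriminant ${(a+d)}^{2}-4(ad-{b}^{2})={(a-d)}^{2}+4{b}^{2}\ge 0$, so both roots are real and the polynomial splits over R. A small numerical sketch of this computation (the helper name is mine):

```python
import math
import random

def real_eigenvalues_symmetric_2x2(a, b, d):
    # characteristic polynomial of [[a, b], [b, d]]: t^2 - (a + d) t + (a d - b^2);
    # its discriminant simplifies to (a - d)^2 + 4 b^2 >= 0, so both roots are real
    disc = (a - d) ** 2 + 4 * b * b
    r = math.sqrt(disc)
    return ((a + d - r) / 2, (a + d + r) / 2)

random.seed(0)
for _ in range(100):
    a, b, d = (random.uniform(-5, 5) for _ in range(3))
    for t in real_eigenvalues_symmetric_2x2(a, b, d):
        # each root satisfies the characteristic equation
        assert abs(t * t - (a + d) * t + (a * d - b * b)) < 1e-9
```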
We are now able to establish one of the major results of this chapter.
Theorem 6.17.
Let T be a linear operator on a finite-dimensional real inner product space V. Then T is self-adjoint if and only if there exists an orthonormal basis $\beta $ for V consisting of eigenvectors of T.
Proof.
Suppose that T is self-adjoint. By the lemma, we may apply Schur’s theorem to obtain an orthonormal basis $\beta $ for V such that the matrix $A={[\text{T}]}_{\beta}$ is upper triangular. But
$$A\text{*}={[\text{T}]}_{\beta}^{*}={[\text{T*}]}_{\beta}={[\text{T}]}_{\beta}=A.$$
So A and A* are both upper triangular. But A* is the conjugate transpose of the upper triangular matrix A and hence is also lower triangular; since $A=A\text{*}$, the matrix A is both upper and lower triangular, and therefore A is a diagonal matrix. Thus $\beta $ must consist of eigenvectors of T.
The converse is left as an exercise.
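As a concrete instance of Theorem 6.17 (the matrix is my own example, not from the text), the symmetric matrix below is diagonalized by the orthonormal eigenvectors $(2,1)/\sqrt{5}$ and $(-1,2)/\sqrt{5}$, with eigenvalues 1 and 6.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(r) for r in zip(*A)]

# a real symmetric matrix with eigenvalues 1 and 6
A = [[2, -2],
     [-2, 5]]
s = 1 / math.sqrt(5)
# columns of P: orthonormal eigenvectors (2, 1)/sqrt(5) and (-1, 2)/sqrt(5)
P = [[2 * s, -1 * s],
     [1 * s,  2 * s]]

# P^t A P should be diag(1, 6)
D = matmul(transpose(P), matmul(A, P))
assert abs(D[0][0] - 1) < 1e-12 and abs(D[1][1] - 6) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```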
We restate this theorem in matrix form in the next section (as Theorem 6.20 on p. 381).
Example 4
As we noted earlier, real symmetric matrices are self-adjoint, and self-adjoint matrices are normal. The following matrix A is complex and symmetric:
$$A=\left(\begin{array}{rr}i& 1\\ 1& 1\end{array}\right).$$
But A is not normal, because ${(AA\text{*})}_{12}=1+i$ and ${(A\text{*}A)}_{12}=1-i$. Therefore complex symmetric matrices need not be normal.
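This failure can be checked directly. In the sketch below, A is a complex symmetric matrix with first row $(i,\,1)$ and second row $(1,\,1)$ (my choice of instance, yielding exactly the $(1,2)$ entries $1+i$ and $1-i$ mentioned above); the helper names are mine.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def ctranspose(A):
    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A))]

# symmetric (A^t = A) but complex
A = [[1j, 1],
     [1,  1]]
assert A[0][1] == A[1][0]  # symmetric

AAs = matmul(A, ctranspose(A))
AsA = matmul(ctranspose(A), A)
assert AAs[0][1] == 1 + 1j
assert AsA[0][1] == 1 - 1j
assert AAs != AsA  # so A is not normal
```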
Exercises

1. Label the following statements as true or false. Assume that the underlying inner product spaces are finite-dimensional.

(a) Every self-adjoint operator is normal.

(b) Operators and their adjoints have the same eigenvectors.

(c) If T is an operator on an inner product space V, then T is normal if and only if ${[\text{T}]}_{\beta}$ is normal, where $\beta $ is any ordered basis for V.

(d) A real or complex matrix A is normal if and only if ${\text{L}}_{A}$ is normal.

(e) The eigenvalues of a self-adjoint operator must all be real.

(f) The identity and zero operators are self-adjoint.

(g) Every normal operator is diagonalizable.

(h) Every self-adjoint operator is diagonalizable.


2. For each linear operator T on an inner product space V, determine whether T is normal, self-adjoint, or neither. If possible, produce an orthonormal basis of eigenvectors of T for V and list the corresponding eigenvalues.

(a) $\text{V}={\text{R}}^{2}$ and T is defined by $\text{T}(a,\text{}b)=(2a-2b,\text{}-2a+5b)$.

(b) $\text{V}={\text{R}}^{3}$ and T is defined by $\text{T}(a,\text{}b,\text{}c)=(-a+b,\text{}5b,\text{}4a-2b+5c)$.

(c) $\text{V}={\text{C}}^{2}$ and T is defined by $\text{T}(a,\text{}b)=(2a+ib,\text{}a+2b)$.

(d) $\text{V}={\text{P}}_{2}(R)$ and T is defined by $\text{T}(f)={f}^{\prime}$, where
$$\langle f(x),\text{}g(x)\rangle ={\displaystyle {\int}_{0}^{1}f(t)g(t)\text{}dt.}$$ 
(e) $\text{V}={\text{M}}_{2\times 2}(R)$ and T is defined by $\text{T}(A)={A}^{t}$.

(f) $\text{V}={\text{M}}_{2\times 2}(R)$ and T is defined by $\text{T}\left(\begin{array}{rr}a& b\\ c& d\end{array}\right)=\left(\begin{array}{rr}c& d\\ a& b\end{array}\right)$.


3. Give an example of a linear operator T on ${\text{R}}^{2}$ and an ordered basis for ${\text{R}}^{2}$ that provides a counterexample to the statement in Exercise 1(c).

4. Let T and U be self-adjoint operators on an inner product space V. Prove that TU is self-adjoint if and only if $\text{TU}=\text{UT}$.

5. Prove (b) of Theorem 6.15.

6. Let V be a complex inner product space, and let T be a linear operator on V. Define
$${\text{T}}_{1}={\displaystyle \frac{1}{2}}(\text{T}+\text{T}\text{*})\quad \text{and}\quad {\text{T}}_{2}={\displaystyle \frac{1}{2i}}(\text{T}-\text{T}\text{*}).$$
(a) Prove that ${\text{T}}_{1}$ and ${\text{T}}_{2}$ are self-adjoint and that $\text{T}={\text{T}}_{1}+i{\text{T}}_{2}$.

(b) Suppose also that $\text{T}={\text{U}}_{1}+i{\text{U}}_{2}$, where ${\text{U}}_{1}$ and ${\text{U}}_{2}$ are self-adjoint. Prove that ${\text{U}}_{1}={\text{T}}_{1}$ and ${\text{U}}_{2}={\text{T}}_{2}$.

(c) Prove that T is normal if and only if ${\text{T}}_{1}{\text{T}}_{2}={\text{T}}_{2}{\text{T}}_{1}$.


7. Let T be a linear operator on an inner product space V, and let W be a T-invariant subspace of V. Prove the following results.

(a) If T is self-adjoint, then ${\text{T}}_{\text{W}}$ is self-adjoint.

(b) ${\text{W}}^{\perp}$ is T*-invariant.

(c) If W is both T- and T*-invariant, then $({\text{T}}_{\text{W}})\text{*}={(\text{T*})}_{\text{W}}$.

(d) If W is both T- and T*-invariant and T is normal, then ${\text{T}}_{\text{W}}$ is normal.


8. Let T be a normal operator on a finite-dimensional complex inner product space V, and let W be a subspace of V. Prove that if W is T-invariant, then W is also T*-invariant. Hint: Use Exercise 24 of Section 5.4.

9. Let T be a normal operator on a finite-dimensional inner product space V. Prove that $\text{N}(\text{T})=\text{N}(\text{T*})$ and $\text{R}(\text{T})=\text{R}(\text{T*})$. Hint: Use Theorem 6.15 and Exercise 12 of Section 6.3.

10. Let T be a self-adjoint operator on a finite-dimensional inner product space V. Prove that for all $x\in \text{V}$
$$\|\text{T}(x)\pm ix\|^{2}=\|\text{T}(x)\|^{2}+\|x\|^{2}.$$
Deduce that $\text{T}-i\text{I}$ is invertible and that the adjoint of ${(\text{T}-i\text{I})}^{-1}$ is ${(\text{T}+i\text{I})}^{-1}$.

11. Assume that T is a linear operator on a complex (not necessarily finite-dimensional) inner product space V with an adjoint T*. Prove the following results.

(a) If T is self-adjoint, then $\langle \text{T}(x),\text{}x\rangle $ is real for all $x\in \text{V}$.

(b) If T satisfies $\langle \text{T}(x),\text{}x\rangle =0$ for all $x\in \text{V}$, then $\text{T}={\text{T}}_{0}$. Hint: Replace x by $x+y$ and then by $x+iy$, and expand the resulting inner products.

(c) If $\langle \text{T}(x),\text{}x\rangle $ is real for all $x\in \text{V}$, then T is self-adjoint.


12. Let T be a normal operator on a finite-dimensional real inner product space V whose characteristic polynomial splits. Prove that V has an orthonormal basis of eigenvectors of T. Hence prove that T is self-adjoint.

13. An $n\times n$ real matrix A is said to be a Gramian matrix if there exists a real (square) matrix B such that $A={B}^{t}B$. Prove that A is a Gramian matrix if and only if A is symmetric and all of its eigenvalues are nonnegative. Hint: Apply Theorem 6.17 to $\text{T}={\text{L}}_{A}$ to obtain an orthonormal basis $\{{v}_{1},\text{}{v}_{2},\text{}\dots ,\text{}{v}_{n}\}$ of eigenvectors with the associated eigenvalues ${\lambda}_{1},\text{}{\lambda}_{2},\text{}\dots ,\text{}{\lambda}_{n}$. Define the linear operator U by $\text{U}({v}_{i})=\sqrt{{\lambda}_{i}}{v}_{i}$.

14. Simultaneous Diagonalization. Let V be a finite-dimensional real inner product space, and let U and T be self-adjoint linear operators on V such that $\text{UT}=\text{TU}$. Prove that there exists an orthonormal basis for V consisting of vectors that are eigenvectors of both U and T. (The complex version of this result appears as Exercise 10 of Section 6.6.) Hint: For any eigenspace $\text{W}={\text{E}}_{\lambda}$ of T, we have that W is both T- and U-invariant. By Exercise 7, we have that ${\text{W}}^{\perp}$ is both T- and U-invariant. Apply Theorem 6.17 and Theorem 6.6 (p. 347).

15. Let A and B be symmetric $n\times n$ matrices such that $AB=BA$. Use Exercise 14 to prove that there exists an orthogonal matrix P such that ${P}^{t}AP$ and ${P}^{t}BP$ are both diagonal matrices.

16. Prove the Cayley–Hamilton theorem for a complex $n\times n$ matrix A. That is, if f(t) is the characteristic polynomial of A, prove that $f(A)=O$. Hint: Use Schur’s theorem to show that A may be assumed to be upper triangular, in which case
$$f(t)={\displaystyle \prod _{i=1}^{n}({A}_{ii}-t).}$$
Now if $\text{T}={\text{L}}_{A}$, we have $({A}_{jj}\text{I}-\text{T})({e}_{j})\in \text{span}(\{{e}_{1},\text{}{e}_{2},\text{}\dots ,\text{}{e}_{j-1}\})$ for $j\ge 2$, where $\{{e}_{1},\text{}{e}_{2},\text{}\dots ,\text{}{e}_{n}\}$ is the standard ordered basis for ${\text{C}}^{n}$. (The general case is proved in Section 5.4.)
The following definitions are used in Exercises 17 through 23.
Definitions.
A linear operator T on a finite-dimensional inner product space is called positive definite [positive semidefinite] if T is self-adjoint and $\langle \text{T}(x),\text{}x\rangle >0\text{}[\langle \text{T}(x),\text{}x\rangle \ge 0]$ for all $x\ne 0$.
An $n\times n$ matrix A with entries from R or C is called positive definite [positive semidefinite] if ${\text{L}}_{A}$ is positive definite [positive semidefinite].
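As a sketch of the positive definite condition (the sample matrix and the helper name are my own), the quadratic form $\langle {\text{L}}_{A}(x),\,x\rangle $ can be evaluated directly; for the matrix below it equals ${x}_{1}^{2}+{x}_{2}^{2}+{({x}_{1}-{x}_{2})}^{2}$, which is positive for every nonzero x.

```python
import random

def quad_form(A, x):
    # <A x, x> for real vectors: sum_i (A x)_i * x_i
    Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
    return sum(a * b for a, b in zip(Ax, x))

# a sample symmetric matrix with positive eigenvalues (1 and 3)
A = [[2, -1],
     [-1, 2]]

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if x == [0.0, 0.0]:
        continue
    # the quadratic form is positive on every nonzero sample vector
    assert quad_form(A, x) > 0
```

Sampling does not prove positivity, of course; here the algebraic identity above (or the positive eigenvalues, via Exercise 17(a)) does.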

17. Let T and U be self-adjoint linear operators on an n-dimensional inner product space V, and let $A={[\text{T}]}_{\beta}$, where $\beta $ is an orthonormal basis for V. Prove the following results.

(a) T is positive definite [semidefinite] if and only if all of its eigenvalues are positive [nonnegative].

(b) T is positive definite if and only if
$$\sum _{i,\text{}j}{A}_{ij}{a}_{j}{\overline{a}}_{i}>0\quad \text{for all nonzero }n\text{-tuples}\ ({a}_{1},\text{}{a}_{2},\text{}\dots ,\text{}{a}_{n}).$$ 
(c) T is positive semidefinite if and only if $A=B\text{*}B$ for some square matrix B.

(d) If T and U are positive semidefinite operators such that ${\text{T}}^{2}={\text{U}}^{2}$, then $\text{T}=\text{U}$.

(e) If T and U are positive definite operators such that $\text{TU}=\text{UT}$, then TU is positive definite.

(f) T is positive definite [semidefinite] if and only if A is positive definite [semidefinite].
Because of (f), results analogous to items (a) through (d) hold for matrices as well as operators.


18. Let $\text{T}:\text{V}\to \text{W}$ be a linear transformation, where V and W are finite-dimensional inner product spaces. Prove the following results.

(a) T*T and TT* are positive semidefinite. (See Exercise 15 of Section 6.3.)

(b) $\text{rank}(\text{T*T})=\text{rank}(\text{TT*})=\text{rank}(\text{T}).$


19. Let T and U be positive definite operators on an inner product space V. Prove the following results.

(a) $\text{T}+\text{U}$ is positive definite.

(b) If $c>0$, then cT is positive definite.

(c) ${\text{T}}^{-1}$ is positive definite.

20. Let V be an inner product space with inner product $\langle \cdot ,\text{}\cdot \rangle $, and let T be a positive definite linear operator on V. Prove that ${\langle x,\text{}y\rangle}^{\prime}=\langle \text{T}(x),\text{}y\rangle $ defines another inner product on V.

21. Let V be a finite-dimensional inner product space, and let T and U be self-adjoint operators on V such that T is positive definite. Prove that both TU and UT are diagonalizable linear operators that have only real eigenvalues. Hint: Show that UT is self-adjoint with respect to the inner product ${\langle x,\text{}y\rangle}^{\prime}=\langle \text{T}(x),\text{}y\rangle $. To show that TU is self-adjoint, repeat the argument with ${\text{T}}^{-1}$ in place of T.

22. This exercise provides a converse to Exercise 20. Let V be a finite-dimensional inner product space with inner product $\langle \cdot ,\text{}\cdot \rangle $, and let ${\langle \cdot ,\text{}\cdot \rangle}^{\prime}$ be any other inner product on V.

(a) Prove that there exists a unique linear operator T on V such that ${\langle x,\text{}y\rangle}^{\prime}=\langle \text{T}(x),\text{}y\rangle $ for all x and y in V. Hint: Let $\beta =\{{v}_{1},\text{}{v}_{2},\text{}\dots ,\text{}{v}_{n}\}$ be an orthonormal basis for V with respect to $\langle \cdot ,\text{}\cdot \rangle $, and define a matrix A by ${A}_{ij}={\langle {v}_{j},\text{}{v}_{i}\rangle}^{\prime}$ for all i and j. Let T be the unique linear operator on V such that ${[\text{T}]}_{\beta}=A$.

(b) Prove that the operator T of (a) is positive definite with respect to both inner products.


23. Let U be a diagonalizable linear operator on a finite-dimensional inner product space V such that all of the eigenvalues of U are real. Prove that there exist positive definite linear operators ${\text{T}}_{1}$ and ${\text{T}}_{1}^{\prime}$ and self-adjoint linear operators ${\text{T}}_{2}$ and ${\text{T}}_{2}^{\prime}$ such that $\text{U}={\text{T}}_{2}{\text{T}}_{1}={\text{T}}_{1}^{\prime}{\text{T}}_{2}^{\prime}$. Hint: Let $\langle \cdot ,\text{}\cdot \rangle $ be the inner product associated with V, $\beta $ a basis of eigenvectors for U, ${\langle \cdot ,\text{}\cdot \rangle}^{\prime}$ the inner product on V with respect to which $\beta $ is orthonormal (see Exercise 22(a) of Section 6.1), and ${\text{T}}_{1}$ the positive definite operator according to Exercise 22. Show that U is self-adjoint with respect to ${\langle \cdot ,\text{}\cdot \rangle}^{\prime}$ and $\text{U}={\text{T}}_{1}^{-1}\text{U*}{\text{T}}_{1}$ (the adjoint is with respect to $\langle \cdot ,\text{}\cdot \rangle $). Let ${\text{T}}_{2}={\text{T}}_{1}^{-1}\text{U*}$.