Worksheet 7

Problem 1

Let $V$ be a finite-dimensional vector space over $\mathbb{R}$. Show that if $\dim V$ is odd, then every $T \in \mathcal{L}(V)$ has an eigenvalue.

Solution.

$T$ is not assumed to be a matrix, so we can't jump straight into characteristic polynomials. Instead, we need to start with a matrix representation of $T$.

Let $\beta$ be a basis of $V$ and consider the matrix $A = [T]_\beta$. Since the underlying field is $\mathbb{R}$, we can use calculus: the characteristic polynomial $p_A \colon \mathbb{R} \to \mathbb{R}$ is a continuous function, so we may invoke the intermediate value theorem. Since $\dim V$ is odd, $p_A$ is a polynomial of odd degree, so there are two cases:

$$\begin{cases} \displaystyle\lim_{\lambda\to\infty} p_A(\lambda) = \infty, \\[2ex] \displaystyle\lim_{\lambda\to-\infty} p_A(\lambda) = -\infty \end{cases} \quad\text{or}\quad \begin{cases} \displaystyle\lim_{\lambda\to\infty} p_A(\lambda) = -\infty, \\[2ex] \displaystyle\lim_{\lambda\to-\infty} p_A(\lambda) = \infty. \end{cases}$$

In either case, the intermediate value theorem tells us that $p_A$ is surjective. In particular, there exists $\lambda \in \mathbb{R}$ such that $p_A(\lambda) = 0$, i.e., $\lambda$ is an eigenvalue of $A$, so there exists a non-zero vector $[v]_\beta$ such that $A [v]_\beta = \lambda [v]_\beta$. But this means

$$[Tv]_\beta = A [v]_\beta = \lambda [v]_\beta = [\lambda v]_\beta \iff Tv = \lambda v.$$

Thus, because $v \neq 0$, it follows that $\lambda$ is an eigenvalue of $T$.
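The argument above can be sanity-checked numerically; here is a minimal NumPy sketch, where the random $3 \times 3$ matrix and the seed are arbitrary assumptions chosen for illustration, not part of the proof:

```python
# Sanity check: an odd-dimensional real matrix has an odd-degree real
# characteristic polynomial, hence (by the IVT) a real root, i.e., a real
# eigenvalue. The random 3x3 matrix here is an arbitrary example.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # odd-dimensional real matrix

# Characteristic polynomial coefficients (degree 3, leading coefficient 1).
p = np.poly(A)

# Odd degree: opposite signs far out on each side, as in the two cases above.
assert np.polyval(p, 1e6) * np.polyval(p, -1e6) < 0

# Hence at least one eigenvalue is real (complex ones come in conjugate pairs).
real_eigs = [ev for ev in np.linalg.eigvals(A) if abs(ev.imag) < 1e-8]
print(len(real_eigs) >= 1)
```

The sign check at $\pm 10^6$ mirrors the two limit cases: an odd-degree polynomial takes both signs, so the IVT forces a real root.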

Problem 6

Let $V$ be a vector space over $\mathbb{F}$ of dimension $n$, let $T \in \mathcal{L}(V)$, and let $\beta$ be an ordered basis of $V$. The determinant of $T$, denoted $\det T$, is defined as $\det T = \det [T]_\beta$.

  1. Prove that the determinant of $T$ is independent of the choice of $\beta$. Namely, prove that if $\beta$ and $\gamma$ are two ordered bases of $V$, then $\det [T]_\beta = \det [T]_\gamma$.
  2. Prove that $T$ is invertible if and only if $\det T \neq 0$.
  3. Prove that if $T$ is invertible, then $\det(T^{-1}) = (\det T)^{-1}$.
  4. Prove that if $S \in \mathcal{L}(V)$, then $\det(TS) = \det(T) \det(S)$.
  5. Prove that if $\lambda \in \mathbb{F}$, then $\det(T - \lambda \operatorname{id}_V) = \det([T]_\beta - \lambda I_n)$.

Solution.

As before, all the facts we know about determinants apply only to (square) matrices. $T$ does not have to be a matrix, but because $\det T$ is defined using a matrix representation of $T$, these properties will follow by carefully applying the corresponding properties to the matrix $[T]_\beta$.

  1. Recall the change-of-basis formula

    $$[T]_\beta = [I]_\gamma^\beta [T]_\gamma [I]_\beta^\gamma = \left( [I]_\beta^\gamma \right)^{-1} [T]_\gamma [I]_\beta^\gamma.$$

    Thus, because the matrix determinant is multiplicative,

    $$\begin{aligned} \det [T]_\beta &= \det\left( \left( [I]_\beta^\gamma \right)^{-1} [T]_\gamma [I]_\beta^\gamma \right) \\ &= \det\left( \left( [I]_\beta^\gamma \right)^{-1} \right) \det\left( [T]_\gamma \right) \det\left( [I]_\beta^\gamma \right) \\ &= \det\left( \left( [I]_\beta^\gamma \right)^{-1} \right) \det\left( [I]_\beta^\gamma \right) \det\left( [T]_\gamma \right) \\ &= \det\left( I_n \right) \det\left( [T]_\gamma \right) \\ &= \det [T]_\gamma. \end{aligned}$$
  2. This follows from the fact that a matrix $A$ is invertible if and only if $\det A \neq 0$, applied to $A = [T]_\beta$:

    $$\begin{aligned} T \text{ is invertible} &\iff [T]_\beta \text{ is invertible} \\ &\iff \det [T]_\beta \neq 0 \\ &\iff \det T \neq 0. \end{aligned}$$

    The last equivalence holds since $\det T = \det [T]_\beta$ by definition.

  3. From Part 2, we know $\det T \neq 0$, so $(\det T)^{-1}$ is well-defined. Thus,

    $$\begin{aligned} 1 &= \det I_n \\ &= \det [TT^{-1}]_\beta \\ &= \det\left( [T]_\beta [T^{-1}]_\beta \right) \\ &= \det\left( [T]_\beta \right) \det\left( [T^{-1}]_\beta \right) \\ &= \det(T) \det(T^{-1}), \end{aligned}$$

    so $\det(T^{-1}) = (\det T)^{-1}$.

    (Note that we can't say $\det(TT^{-1}) = \det(T) \det(T^{-1})$ right away, since we only know that the determinant is multiplicative for matrices. We prove this for linear maps in the next part.)

  4. Recall that $[TS]_\beta = [T]_\beta [S]_\beta$. Thus,

    $$\begin{aligned} \det(TS) &= \det [TS]_\beta \\ &= \det\left( [T]_\beta [S]_\beta \right) \\ &= \det [T]_\beta \det [S]_\beta \\ &= \det(T) \det(S). \end{aligned}$$
  5. This follows from the fact that $[\operatorname{id}_V]_\beta = I_n$, where $n = \dim V$, and linearity of taking matrix representations:

    $$[T - \lambda \operatorname{id}_V]_\beta = [T]_\beta - \lambda [\operatorname{id}_V]_\beta = [T]_\beta - \lambda I_n.$$

    Thus, by definition,

    $$\det(T - \lambda \operatorname{id}_V) = \det [T - \lambda \operatorname{id}_V]_\beta = \det([T]_\beta - \lambda I_n).$$
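The identities in Parts 1–5 can be spot-checked numerically; a small NumPy sketch, where the matrices, the seed, and the dimension $n = 4$ are arbitrary assumed examples (an illustration, not a proof):

```python
# Numerical spot-check of the determinant identities from Parts 1-5.
# A, S, P and n = 4 are arbitrary example choices, not from the text.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))   # plays the role of [T]_gamma
S = rng.standard_normal((n, n))   # plays the role of [S]_beta
P = rng.standard_normal((n, n))   # change-of-basis matrix [I]_beta^gamma

# Part 1: the determinant is invariant under change of basis.
B = np.linalg.inv(P) @ A @ P      # plays the role of [T]_beta
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

# Parts 2-3: A is invertible (det != 0) and det(A^{-1}) = det(A)^{-1}.
assert not np.isclose(np.linalg.det(A), 0)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))

# Part 4: the determinant is multiplicative.
assert np.isclose(np.linalg.det(A @ S), np.linalg.det(A) * np.linalg.det(S))

# Part 5: det(A - lam*I) matches the characteristic polynomial at lam.
# np.poly gives det(lam*I - A); the signs agree here because n is even.
lam = 2.5
assert np.isclose(np.linalg.det(A - lam * np.eye(n)),
                  np.polyval(np.poly(A), lam))

print("all determinant identities check out")
```

Each assertion mirrors one part of the problem, stated for concrete matrices rather than abstract linear maps.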