
For Problem EE-1:

The statement to be proved is that for any square matrix $ A$, any $ k$ eigenvectors corresponding to distinct eigenvalues are linearly independent.

For $ k=1$ the statement is trivially true, since an eigenvector is nonzero.

Now consider the case of any $ k$, assuming the statement holds for the case $ k-1$. Suppose the eigenvectors are $ v_1,\dots, v_k$, corresponding to eigenvalues $ \lambda_1,\dots, \lambda_k$. Suppose

$\displaystyle r_1 v_1 + \dots + r_k v_k = \mathbf{0}.$

We must show that $ r_1 = \dots = r_k = 0$. Apply $ \tau_A$ to both sides of the linear relation. We get

$\displaystyle r_1 \lambda_1 v_1 + \dots + r_k \lambda_k v_k = \mathbf{0}.$

Also multiply through the original linear relation by $ \lambda _ k$, getting

$\displaystyle r_1 \lambda_k v_1 + \dots + r_k \lambda_k v_k = \mathbf{0}.$

Now subtract the last two linear relations mentioned. We get

$\displaystyle r_1 (\lambda_1 - \lambda_k) v_1 + \dots + r_{k-1} (\lambda_{k-1} - \lambda_k) v_{k-1} + \mathbf{0} = \mathbf{0}.$

By the inductive hypothesis (the case $ k-1$), $ v_1,\dots, v_{k-1}$ are linearly independent. Therefore $ r_1 (\lambda_1 - \lambda_k) = \dots = r_{k-1} (\lambda_{k-1} - \lambda_k) = 0$. Since the scalars $ \lambda_i$ are distinct, we must have $ r_1 = \dots = r_{k-1} = 0$. Going back to the original linear relation, this leaves $ r_k v_k = \mathbf{0}$. Since $ v_k \neq \mathbf{0}$, we must have $ r_k = 0$. In other words, the original linear relation was trivial, so $ v_1,\dots, v_k$ are linearly independent.

Therefore the statement is true for all $ k$ (really all $ k \leq n$, where $ A$ is $ n \times n$), by induction.
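As a small sanity check on the statement (not the proof), here is a Python sketch; the triangular matrix and its hand-computed eigenvectors are my own illustration, not from the problem set:

```python
# EE-1 sanity check: eigenvectors for distinct eigenvalues are independent.
# Illustrative matrix: upper triangular with eigenvalues 2, 3, 5.
A = [[2, 1, 0],
     [0, 3, 1],
     [0, 0, 5]]

eigenpairs = [(2, (1, 0, 0)), (3, (1, 1, 0)), (5, (1, 3, 6))]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

# Each pair really is an eigenpair: A v = lambda v.
for lam, v in eigenpairs:
    assert matvec(A, v) == tuple(lam * x for x in v)

# Independence: the matrix whose columns are the eigenvectors has
# nonzero determinant (3x3 cofactor expansion).
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

V = [[1, 1, 1], [0, 1, 3], [0, 0, 6]]  # columns are the eigenvectors
assert det3(V) != 0
```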



For Problem EE-2:

In the case $ k=2$, the first $ k-1$ elements and the last $ k-1$ elements don't overlap in the middle, so the inductive step breaks down. (It's clear that this is the first case to examine, since we know the statement is actually false for $ n=2$.)



For Problem EE-3:

(a) We know that multiplying $ M$ on the left by $ D$ scales the rows of $ M$, and multiplying on the right by $ D$ scales the columns of $ M$. An off-diagonal entry therefore gets incompatible scalings. In particular, $ (DM)_{ij} = D_{ii} M_{ij}$ and $ (MD)_{ij} = M_{ij} D_{jj}$, so if $ DM = MD$ then $ M_{ij} (D_{ii} - D_{jj}) = 0$. If $ i \neq j$, then since the diagonal entries of $ D$ are distinct, we get $ M_{ij} = 0$.

(b) As suggested, the map $ M \mapsto P^{-1} M P$ preserves multiplication, so if we choose $ P$ to diagonalize $ A$ to $ D$ and we let $ C = P^{-1} B P$, then $ CD = DC$. The diagonal entries of $ D$ are the eigenvalues of $ A$ and so are distinct. Then by (a), $ C$ is diagonal and we have diagonalized $ B$. Since the same $ P$ was used for both $ A$ and $ B$, they are simultaneously diagonalizable.
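Here is a quick numeric sketch of (b) in Python; the specific $ P$, $ D$, and the commuting $ B$ are made up for illustration:

```python
# 2x2 integer matrix multiplication.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P    = [[3, 1], [2, 1]]            # det P = 1
Pinv = [[1, -1], [-2, 3]]          # inverse of P
D    = [[2, 0], [0, 5]]            # distinct eigenvalues
A    = matmul(P, matmul(D, Pinv))               # A = P D P^{-1}
B    = matmul(P, matmul([[7, 0], [0, 4]], Pinv))  # built to commute with A

assert matmul(A, B) == matmul(B, A)   # A and B commute

# The same P diagonalizes B: C = P^{-1} B P is diagonal, as in (a).
C = matmul(Pinv, matmul(B, P))
assert C[0][1] == 0 and C[1][0] == 0
```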



For Problem EE-4:

Following the suggestion, if $ B^2 = A$ then applying the map $ M \mapsto P^{-1} M P$ we get $ E^2 = D$, where $ E = P^{-1} B P$. Now $ E$ commutes with $ E^2$, which is $ D$, so by Problem EE-3(a), $ E$ is diagonal. Since $ E^2 = \left[\begin{array}{rr}9&0\\  0&1\end{array}\right]$, we must have $ E = \left[\begin{array}{rr}\pm 3& 0\\  0& \pm 1\end{array}\right]$. Then there are only four possibilities for $ B = P E P^{-1}$.
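A numeric sketch in Python; the particular $ P$ (and hence $ A$) is made up for illustration, with $ D = \mathrm{diag}(9,1)$ as in the problem:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P    = [[3, 1], [2, 1]]    # det P = 1, so P^{-1} has integer entries
Pinv = [[1, -1], [-2, 3]]
A    = matmul(P, matmul([[9, 0], [0, 1]], Pinv))  # A = P diag(9,1) P^{-1}

# The four square roots B = P E P^{-1} with E = diag(+-3, +-1).
roots = []
for s in (3, -3):
    for t in (1, -1):
        E = [[s, 0], [0, t]]
        B = matmul(P, matmul(E, Pinv))
        assert matmul(B, B) == A   # each one really satisfies B^2 = A
        roots.append(B)

assert len(roots) == 4
```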



For Problem EE-5:

(Many of the problems I made up for class examples and homework are like this. I usually start with an easy $ P$ of determinant 1 such as $ P = \left[\begin{array}{rr}3&1\\  2&1\end{array}\right]$).



For Problem EE-6:

For convenience let $ R = R_{90^\circ} = \left[\begin{array}{rr}0&-1\\  1&0\end{array}\right]$. We have $ p_R(\lambda) = \lambda^2 + 1$. The roots are $ \lambda = \pm i \in \mathbb{C}$. For $ \lambda = i$ we get $ R - iI = \left[\begin{array}{rr}-i&-1\\  1&-i\end{array}\right]$. Although this might not instantly look singular, it has to be! An eigenvector is obtained by solving $ -ix - y = 0$; choosing $ x = 1$ we get $ y = -i$, so the eigenvector is $ \left[\begin{array}{r}1\\  -i\end{array}\right]$. For $ \lambda = -i$, everything works the same with $ -i$ in place of $ i$, so an eigenvector is $ \left[\begin{array}{r}1\\  i\end{array}\right]$. Therefore $ P^{-1} R P = D$ with $ D = \left[\begin{array}{rr}i&0\\  0&-i\end{array}\right]$ and $ P = \left[\begin{array}{rr}1&1\\  -i&i\end{array}\right]$.
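This diagonalization can be checked numerically with Python's built-in complex numbers; a sketch:

```python
# Verify P^{-1} R P = diag(i, -i) for the 90-degree rotation matrix.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = [[0, -1], [1, 0]]
P = [[1, 1], [-1j, 1j]]

# det P = 2i; inverse by the 2x2 formula (1/det) [[d, -b], [-c, a]].
det  = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / det, -P[0][1] / det],
        [-P[1][0] / det, P[0][0] / det]]

D = matmul(Pinv, matmul(R, P))
expected = [[1j, 0], [0, -1j]]
assert all(abs(D[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```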



For Problem EE-7:

$ p_A(\lambda) = \lambda^2 + \lambda + 1$. (Remember, $ -1 = 1$ here!) Neither 0 nor 1 is a root, so $ A$ is not diagonalizable over GF(2). Now look in GF(4). Not surprisingly, $ \alpha$ and $ \beta$ turn out to be roots (using the operation tables), so these are the eigenvalues. They are distinct, so $ A$ is diagonalizable.

If we were asked actually to diagonalize $ A$, we could do it: For $ \lambda = \alpha$, $ A - \alpha I = \left[\begin{array}{cc}1-\alpha&1\\  1&-\alpha\end{array}\right] = \left[\begin{array}{rr}\beta&1\\  1&\alpha\end{array}\right]$. This matrix has to be singular, and in fact the second row is $ \alpha$ times the first row. Solve $ \beta x + y = 0$ by taking $ x = 1$, $ y = \beta$. Similarly, if we take $ \lambda = \beta$ the roles of $ \alpha$ and $ \beta$ are reversed. Thus we get $ P^{-1} A P = D$ with $ P = \left[\begin{array}{rr}1&1\\  \beta&\alpha\end{array}\right]$ and $ D = \left[\begin{array}{rr}\alpha&0\\  0&\beta\end{array}\right]$.



For Problem EE-8:

If the eigenvalues are distinct, the blocks are all $ 1 \times 1$, which means the Jordan form is diagonal!



For Problem EE-9:

Put $ A$ in Jordan form by $ P^{-1} A P = J$. Since $ A^2 = A$, applying the similarity map we get $ J^2 = J$. In squaring $ J$, each Jordan block gets squared. For a block $ rI + N$, we have $ (rI+N)^2 = r^2 I + 2rN + N^2$. (Example: $ \left[\begin{array}{rrr}r&0&0\\  1&r&0\\  0&1&r\end{array}\right]^2 = \left[\begin{array}{rrr}r^2&0&0\\  2r&r^2&0\\  1&2r&r^2\end{array}\right]$.) If a block is $ 3 \times 3$ or larger, we can see that $ J^2$ has an entry of 1 (from $ N^2$) where $ J$ has an entry of 0, so $ J^2 \neq J$. But even for a $ 2 \times 2$ block, matching diagonal entries, we get $ r^2 = r$, so $ r = 0$ or $ r = 1$. On the other hand, unless the block is $ 1 \times 1$, matching the off-diagonal entries gives $ 2rN = N$, so $ 2r = 1$, an impossibility. Therefore all the Jordan blocks are $ 1 \times 1$ and $ A$ is diagonalizable.
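As a concrete instance, here is a Python check (my example matrix, not from the problem) that an idempotent matrix is diagonalizable with eigenvalues 1 and 0:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 0]]           # a projection: A^2 = A
assert matmul(A, A) == A

# Eigenvectors: (1,0) for eigenvalue 1 and (1,-1) for eigenvalue 0.
P = [[1, 1], [0, -1]]
Pinv = [[1, 1], [0, -1]]       # this P happens to be its own inverse
assert matmul(P, Pinv) == [[1, 0], [0, 1]]

D = matmul(Pinv, matmul(A, P))
assert D == [[1, 0], [0, 0]]   # diagonal, with entries 1 and 0
```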



For Problem EE-10:

By the Cayley-Hamilton theorem we just need a matrix with $ p_A(\lambda) = \lambda^2 - \lambda - 1$. One such matrix is $ A = \left[\begin{array}{rr}1&1\\  1&0\end{array}\right]$.
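This is easy to confirm in Python: the trace and determinant give the coefficients of $ p_A$, and $ A$ satisfies $ A^2 - A - I = \mathcal{O}$, as Cayley-Hamilton promises:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A  = [[1, 1], [1, 0]]
A2 = matmul(A, A)

# A^2 - A - I should be the zero matrix.
diff = [[A2[i][j] - A[i][j] - (1 if i == j else 0) for j in range(2)]
        for i in range(2)]
assert diff == [[0, 0], [0, 0]]

# p_A(lambda) = lambda^2 - (tr A) lambda + det A = lambda^2 - lambda - 1.
assert A[0][0] + A[1][1] == 1                       # trace
assert A[0][0] * A[1][1] - A[0][1] * A[1][0] == -1  # determinant
```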



For Problem EE-11:

Apply $ \tau _ D$ to the sphere, where $ D = \left[\begin{array}{rrr}a&0&0\\  0&b&0\\  0&0&c\end{array}\right]$. The sphere is turned into the desired ellipsoid and the volume is changed by a factor of $ \det D = abc$. Therefore the formula is $ V = \frac 43 \pi abc$.



For Problem EE-12:

In the definition of the determinant using permutations, there is only one nonzero term, which corresponds to the permutation used to make the matrix from $ I$. Therefore the determinant equals the sign of the permutation.
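A Python sketch of this fact for $ S_3$, computing the sign by counting inversions (the helper functions are my own encoding):

```python
from itertools import permutations

def sign(perm):
    # Sign via inversion count: (-1)^(number of inversions).
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def perm_matrix(perm):
    # Column j of the matrix is e_{perm[j]}.
    n = len(perm)
    return [[1 if perm[j] == i else 0 for j in range(n)] for i in range(n)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# For every permutation in S_3, det of its permutation matrix = its sign.
for perm in permutations(range(3)):
    assert det3(perm_matrix(perm)) == sign(perm)
```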



For Problem EE-13:

The permutation definition has $ 10!$ terms, each requiring 9 multiplications, so we get $ 9 \cdot 10! = 32{,}659{,}200$ multiplications. On the other hand, by elementary row operations we need only $ 9 \cdot 10 + 8 \cdot 9 + \dots + 1 \cdot 2 = 330$, if we count division as multiplication by an inverse.
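Both counts are easy to confirm in Python:

```python
from math import factorial

# Permutation definition: 10! terms, 9 multiplications each.
assert 9 * factorial(10) == 32659200

# Gaussian elimination: clearing the column below pivot j in a 10 x 10
# matrix takes (10 - j) rows times (11 - j) multiplications each,
# giving 9*10 + 8*9 + ... + 1*2.
assert sum(k * (k + 1) for k in range(1, 10)) == 330
```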



For Problem EE-14:

The answer is that you need to look inside just one container. Now that you know a lot about $ S_3$, you should think this way: Each arrangement of the tops is a permutation. For example $ \left(\begin{array}{rrr}R&B&G\\  G&R&B\end{array}\right)$ means that the container with the red marble has a top marked G, etc. There are six permutations in $ S_3$: the identity, two 3-cycles, and three transpositions (2-cycles). The identity and the transpositions have at least one top correct, which is excluded by the problem. Therefore you need to distinguish between just two possibilities. The 3-cycles (R B G) and (R G B) can be recognized by knowing where just one letter goes, so you need to look in only one container.

Example: If the container with a red marble has the top marked G, the permutation must be (R G B), so the container with the green marble is marked B and the container with the blue marble is marked R.
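The counting argument can be replayed by brute force in Python (the encoding of tops as a permutation is as in the solution above):

```python
from itertools import permutations

# perm[i] is the top on the container holding marble i.
# "No top is correct" means the permutation has no fixed point.
marbles = ('R', 'B', 'G')
no_fixed_point = [p for p in permutations(marbles)
                  if all(p[i] != marbles[i] for i in range(3))]

# Exactly the two 3-cycles survive.
assert len(no_fixed_point) == 2

# Looking in ONE container (say, seeing the top on the red marble's
# container) already distinguishes the two survivors.
assert no_fixed_point[0][0] != no_fixed_point[1][0]
```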



For Problem EE-15:

The only possibilities are the identity map, transpositions, 3-cycles, 4-cycles, and the product of two disjoint transpositions: $ \left(\begin{array}{rr}a&b\end{array}\right) \left(\begin{array}{rr}c&d\end{array}\right)$. We saw in an earlier problem that an $ n$-cycle is the product of $ n-1$ transpositions, so transpositions are odd, 3-cycles are even, and 4-cycles are odd, and of course $ \left(\begin{array}{rr}a&b\end{array}\right) \left(\begin{array}{rr}c&d\end{array}\right)$ is even.

Incidentally, a useful vocabulary word is ``parity'', meaning ``evenness or oddness''. So we can say an $ n$-cycle has the opposite parity from $ n$ itself.



For Problem EE-16:

(a) The new basis vector $ v_1$ is written in terms of the old basis as $ v_1 = (1,1,1) = 1 \cdot e_1 + 1 \cdot e_2 + 1 \cdot e_3$, so its coordinate vector is $ \left[\begin{array}{r}1\\  1\\  1\end{array}\right]$, and similarly for $ v_2$ and $ v_3$. So the change-of-basis matrix is $ P = \left[\begin{array}{rrr}1&1&1\\  1&1&2\\  1&-2&1\end{array}\right]$.

(b) We need to solve for $ e_1$, $ e_2$, $ e_3$ in terms of $ v_1, v_2, v_3$. The answer is the inverse of the matrix from (a): row-reducing $ \left[\begin{array}{rrr\vert rrr}1&1&1&1&0&0\\  1&1&2&0&1&0\\  1&-2&1&0&0&1\end{array}\right]$ gives $ \left[\begin{array}{rrr\vert rrr}1&0&0&\frac 53&-1&\frac 13\\  0&1&0&\frac 13&0&-\frac 13\\  0&0&1&-1&1&0\end{array}\right]$, so the change-of-basis matrix is $ \left[\begin{array}{rrr}\frac 53&-1&\frac 13\\  \frac 13&0&-\frac 13\\  -1&1&0\end{array}\right]$.
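The inverse found in (b) can be double-checked with exact rational arithmetic in Python:

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P    = [[1, 1, 1], [1, 1, 2], [1, -2, 1]]
Pinv = [[F(5, 3), F(-1), F(1, 3)],
        [F(1, 3), F(0), F(-1, 3)],
        [F(-1), F(1), F(0)]]

# P Pinv = Pinv P = I, exactly.
I = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(P, Pinv) == I and matmul(Pinv, P) == I
```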

(c) Solve for new basis elements in terms of old:

\begin{displaymath}
\left\{
\begin{array}{rcrcr}
(1,-1,0) &=& r(1,-1,0) &+& s(1,0,-1) \\
(1,1,-2) &=& t(1,-1,0) &+& u(1,0,-1) \\
\end{array}
\right.
\end{displaymath}

By inspection the first equation gives $ r = 1, s = 0$. The second equation gives $ 1 = t + u, 1 = -t$, so $ t = -1, u = 2$. The change-of-basis matrix is $ P = \left[\begin{array}{rr}r&t\\  s&u\end{array}\right] = \left[\begin{array}{rr}1&-1\\  0&2\end{array}\right]$.

(d) As usual, the coordinate vector of $ v$ with respect to the standard basis is $ v$ itself. Therefore the change-of-basis matrix is $ P = \left[\begin{array}{r\vert r\vert r\vert r}v_1 & v_2 & \dots & v_n\end{array}\right]$.

(e) Write the new basis with respect to the old:

\begin{displaymath}
\begin{array}{lcrcrcr}
1 &=& 1 \cdot 1 &+& 0 \cdot x &+& 0 \cdot x^2 \\
1 + x &=& 1 \cdot 1 &+& 1 \cdot x &+& 0 \cdot x^2 \\
1 + x + x^2 &=& 1 \cdot 1 &+& 1 \cdot x &+& 1 \cdot x^2 \\
\end{array}
\end{displaymath}

So the change-of-basis matrix is $ P = \left[\begin{array}{rrr}1&1&1\\  0&1&1\\  0&0&1\end{array}\right]$.



For Problem EE-17:

(a) The determinant is $ (101-100)(102 - 100) (102 - 101) = 2$.

(b) The matrix is invertible because its determinant is a product of nonzero terms and so is nonzero.

(c) We get equations

$ 8 = c _ 0 + 2 c _ 1 + 4 c _ 2$
$ 7 = c _ 0 + 3 c _ 1 + 9 c _ 2$
$ 1 = c _ 0 + 5 c _ 1 + 25 c _ 2$

which is the same as $ A\mathbf{c} = \mathbf{b}$ for $ A = \left[\begin{array}{rrr}1&2&4\\  1&3&9\\  1&5&25\end{array}\right]$, $ \mathbf{c} = \left[\begin{array}{r}c_0\\  c_1\\  c_2\end{array}\right]$, $ \mathbf{b} = \left[\begin{array}{r}8\\  7\\  1\end{array}\right]$.

Since $ A$ is a van der Monde matrix with distinct $ x _ 1,\dots, x _ n$, $ A$ is nonsingular. Therefore there is a unique solution.
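The unique solution can be computed and verified with exact arithmetic; a Python sketch using straightforward Gaussian elimination:

```python
from fractions import Fraction as F

# Solve the van der Monde system A c = b with exact rationals.
A = [[F(1), F(2), F(4)], [F(1), F(3), F(9)], [F(1), F(5), F(25)]]
b = [F(8), F(7), F(1)]

# Forward elimination on the augmented matrix (no pivoting needed here),
# then back substitution.
M = [row[:] + [rhs] for row, rhs in zip(A, b)]
for col in range(3):
    for row in range(col + 1, 3):
        factor = M[row][col] / M[col][col]
        M[row] = [x - factor * y for x, y in zip(M[row], M[col])]
c = [F(0)] * 3
for i in (2, 1, 0):
    c[i] = (M[i][3] - sum(M[i][j] * c[j] for j in range(i + 1, 3))) / M[i][i]

def p(x):  # the interpolating quadratic p(x) = c0 + c1 x + c2 x^2
    return c[0] + c[1] * x + c[2] * x * x

assert (p(2), p(3), p(5)) == (8, 7, 1)  # passes through the data points
```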

(d) As suggested, taking a linear relation and applying powers of $ \tau _ A$, we get equations

\begin{displaymath}
\begin{array}{ccccccc}
r_1 v_1 &+& \dots &+& r_k v_k &=& \mathbf{0} \\
r_1 \lambda_1 v_1 &+& \dots &+& r_k \lambda_k v_k &=& \mathbf{0} \\
\vdots & & & & \vdots & & \vdots \\
r_1 \lambda_1^{k-1} v_1 &+& \dots &+& r_k \lambda_k^{k-1} v_k &=& \mathbf{0} \\
\end{array}
\end{displaymath}

Let $ P = \left[\begin{array}{r\vert r\vert r\vert r}v_1 & v_2 & \dots & v_k\end{array}\right]$. Then these equations are

$ P \left[\begin{array}{c}r_1\\  \vdots\\  r_k\end{array}\right] = \mathbf{0}$, $ P \left[\begin{array}{c}r_1 \lambda_1\\  \vdots\\  r_k \lambda_k\end{array}\right] = \mathbf{0}$, $ P \left[\begin{array}{c}r_1 \lambda_1^2\\  \vdots\\  r_k \lambda_k^2\end{array}\right] = \mathbf{0}$, \dots, $ P \left[\begin{array}{c}r_1 \lambda_1^{k-1}\\  \vdots\\  r_k \lambda_k^{k-1}\end{array}\right] = \mathbf{0}$. Putting these together,

$ P \left[\begin{array}{ccccc}r_1 & r_1 \lambda_1 & r_1 \lambda_1^2 & \dots & r_1 \lambda_1^{k-1}\\  \vdots & \vdots & \vdots & & \vdots\\  r_k & r_k \lambda_k & r_k \lambda_k^2 & \dots & r_k \lambda_k^{k-1}\end{array}\right] = \mathcal{O}$, or

$ P \left[\begin{array}{ccccc}r_1&0&0&\dots&0\\  0&r_2&0&\dots&0\\  \vdots& & & &\vdots\\  0&0&0&\dots&r_k\end{array}\right] \left[\begin{array}{ccccc}1&\lambda_1&\lambda_1^2&\dots&\lambda_1^{k-1}\\  \vdots&\vdots&\vdots& &\vdots\\  1&\lambda_k&\lambda_k^2&\dots&\lambda_k^{k-1}\end{array}\right] = \mathcal{O}$.

Since $ \lambda_1,\dots, \lambda_k$ are distinct, the van der Monde matrix has nonzero determinant and so is invertible. We can cancel it by multiplying both sides on the right by its inverse. Then we get $ P \left[\begin{array}{ccccc}r_1&0&0&\dots&0\\  0&r_2&0&\dots&0\\  \vdots& & & &\vdots\\  0&0&0&\dots&r_k\end{array}\right] = \mathcal{O}$, or

$ \left[\begin{array}{r\vert r\vert r\vert r}r_1 v_1 & r_2 v_2 & \dots & r_k v_k\end{array}\right] = \mathcal{O}$. Then $ r_i v_i = \mathbf{0}$ for each $ i$. Since each $ v_i \neq \mathbf{0}$, we must have $ r_i = 0$, as desired.


Kirby A. Baker 2001-12-05