


For Problem W-1:

Remember, for $ F =$   GF$ (q)$, any $ k$-dimensional vector space over $ F$ has $ q^k$ elements, even if that vector space happens to be a subspace of a larger space. Here we are working inside $ F ^3$.

Following the suggestion, for the first row we just need to choose a nonzero vector, so there are $ q^3 - 1$ possible choices. The first row spans a 1-dimensional subspace of $ F ^3$ with $ q$ elements.

For the second row, we can choose any triple that avoids this 1-dimensional subspace, so there are $ q^3 - q$ possibilities for the second row. The first two rows together are linearly independent, so together they span a 2-dimensional subspace of $ F ^3$ with $ q ^ 2$ elements.

For the third row, we can choose any triple that avoids this 2-dimensional subspace, so there are $ q^3 - q^2$ possibilities.

Putting these together, we have

$ (q^3 -1 )(q^3-q)(q^3 - q^2) = q^3 (q^3 - 1)(q^2 - 1)(q - 1)$

possible matrices. (It is OK to multiply here even though each choice depends on the preceding choices, because the number of possibilities for each choice does not depend on the preceding choices.)

For $ q = 2$ this gives $ 2^3 \cdot 7 \cdot 3 \cdot 1 = 168$ invertible matrices.
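For $ q = 2$ the count is small enough to verify by brute force. Here is a minimal Python sketch (not part of the original solution): it enumerates all $ 2^9 = 512$ binary $ 3 \times 3$ matrices and counts those whose determinant is odd, i.e., nonzero in GF$ (2)$.

from itertools import product

def det3(m):
    # exact integer determinant of a 3 x 3 matrix by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

count = sum(
    1
    for entries in product((0, 1), repeat=9)
    if det3((entries[0:3], entries[3:6], entries[6:9])) % 2 == 1
)
print(count)  # prints 168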



For Problem W-2:

(a) One way: $ e ^ D = I + \frac{1}{1!} D + \frac{1}{2!} D^2 + \frac{1}{3!} D^3 + \dots = \left[\begin{array}{cc}1 + \frac{1}{1!} + \frac{1}{2!} + \dots & 0\\  0 & 1 - \frac{1}{1!} + \frac{1}{2!} - \dots\end{array}\right] =
\left[\begin{array}{rr}e&0\\  0&e^{-1}\end{array}\right]$.

Although this is just one example, you can see that the general principle is this: If $ D$ is a diagonal matrix with diagonal entries $ d _ 1,\dots, d _ n$ then $ e ^ D$ is a diagonal matrix with diagonal entries $ e ^ {d _ 1},\dots,
e ^ {d _ n}$.

Of course, this idea works for other functions in place of $ e ^ x$. This is another illustration of how the diagonal entries of a diagonal matrix work independently of one another.
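Here is a small numerical illustration in Python (a sketch, assuming NumPy and SciPy are available): scipy.linalg.expm computes the matrix exponential, and for the diagonal matrix of part (a) it agrees with exponentiating the diagonal entries one at a time.

import numpy as np
from scipy.linalg import expm

D = np.diag([1.0, -1.0])
print(expm(D))                        # [[e, 0], [0, 1/e]]
print(np.diag(np.exp(np.diag(D))))    # the same matrix, computed entrywise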

(b) Notice that the powers of $ J$ are $ I, J, -I, -J, I, J, -I,\dots $. Then $ e ^ {tJ} = I + \frac{1}{1!} tJ + \frac{1}{2!} t^2 (-I) + \frac{1}{3!} t^3 (-J) + \frac{1}{4!} t^4 I + \dots $. Collecting terms we get $ e ^ {tJ} = \left(1 - \frac{1}{2!} t^2 + \frac{1}{4!} t^4 - \dots\right) I + \left(t - \frac{1}{3!} t^3 + \frac{1}{5!} t^5 - \dots\right) J = (\cos t)\, I + (\sin t)\, J = \left[\begin{array}{cc}\cos t& -\sin t\\  \sin t& \cos t\end{array}\right]$, which we recognize as the rotation matrix $ R _ \theta$ with $ \theta = t$.

Note. This is a matrix version of Euler's formula about complex numbers: $ e ^ {i \theta} = \cos \theta + i \sin \theta$, the complex number representing a rotation by $ \theta$. Putting $ \theta = \pi$ gives the famous formula $ e ^ {\pi i} = -1$. You might have seen this in Math 33B. Complex numbers are the subject of Math 132.
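A quick numerical sanity check of part (b), again as a sketch assuming NumPy and SciPy: for a sample value of $ t$, expm($ tJ$) should agree with the rotation matrix above.

import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])           # J^2 = -I
t = 0.7                               # an arbitrary test value
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(expm(t * J), R))    # True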

(c) This is easier. By the Golden Rule, the powers of $ N _ 4$ are obtained by successively ``sliding the rows down,'' and its fourth power is the zero matrix. Therefore $ e ^ {tN _ 4} = I + \frac{1}{1!} t N _ 4 + \frac{1}{2!} t^2 N _ 4 ^ 2 + \frac{1}{3!} t^3 N _ 4 ^ 3 + $ zero matrices, so we get $ e ^ {tN _ 4} = \left[\begin{array}{cccc}1&0&0&0\\  t&1&0&0\\  \frac 12 t^2&t&1&0\\  \frac 16 t^3&\frac 12 t^2&t&1\end{array}\right]$.
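The same kind of check works for part (c) (a sketch, assuming NumPy and SciPy): for a sample $ t$, expm($ tN _ 4$) should match the displayed lower-triangular matrix.

import numpy as np
from scipy.linalg import expm

t = 0.5
N4 = np.diag([1.0, 1.0, 1.0], k=-1)   # 4 x 4 matrix with 1's just below the diagonal
expected = np.array([
    [1,        0,        0, 0],
    [t,        1,        0, 0],
    [t**2 / 2, t,        1, 0],
    [t**3 / 6, t**2 / 2, t, 1],
])
print(np.allclose(expm(t * N4), expected))  # True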



For Problem W-3: In each case we want to express $ T(v _ 1)$ and $ T(v _ 2)$ as linear combinations of $ v _ 1$ and $ v _ 2$. Although it is possible to extend the basis and work in $ \mathbb{R}^3$, it seems simplest just to stay in $ V$ and use unknown coefficients. By the Golden Rule, $ Pv$ rotates the entries of $ v$ downward one position (the last entry wraps around to the top).

For (a):

$ v _ 1 = \left[\begin{array}{r}1\\  -1\\  0\end{array}\right]$, $ T(v _ 1) = \left[\begin{array}{r}0\\  1\\  -1\end{array}\right] = r \left[\begin{array}{r}1\\  -1\\  0\end{array}\right] + s \left[\begin{array}{r}1\\  0\\  -1\end{array}\right]$

$ v _ 2 = \left[\begin{array}{r}1\\  0\\  -1\end{array}\right]$, $ T(v _ 2) = \left[\begin{array}{r}-1\\  1\\  0\end{array}\right] = t \left[\begin{array}{r}1\\  -1\\  0\end{array}\right] + u \left[\begin{array}{r}1\\  0\\  -1\end{array}\right]$

This gives three equations in $ r,s$ and the same for $ t,u$, so use the two easiest equations in each case. We get $ r = -1$, $ s = 1$, $ t = -1$, $ u = 0$. So $ M = \left[\begin{array}{rr}r&t\\  s&u\end{array}\right] = \left[\begin{array}{rr}-1&-1\\  1&0\end{array}\right]$.

As a check, $ T$ is a rotation by $ 120 ^ \circ$, so $ T^3 = \mathbf{1}$. Therefore we should expect $ M^3 = I$, which does turn out to be true. (Although we haven't discussed it, $ M$ imitates $ T$ insofar as such algebraic properties are concerned.)
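Here is a short Python sketch of the same computation (not part of the solution, assuming NumPy): it builds the permutation matrix $ P$, solves the two small systems for $ (r,s)$ and $ (t,u)$ with a least-squares call (the systems are consistent, so the coefficients come out exactly), and confirms $ M^3 = I$.

import numpy as np

v1 = np.array([1.0, -1.0,  0.0])
v2 = np.array([1.0,  0.0, -1.0])
B = np.column_stack([v1, v2])          # basis vectors of V as columns

P = np.roll(np.eye(3), 1, axis=0)      # rotates the entries of a vector downward one position

rs, *_ = np.linalg.lstsq(B, P @ v1, rcond=None)   # coefficients (r, s)
tu, *_ = np.linalg.lstsq(B, P @ v2, rcond=None)   # coefficients (t, u)
M = np.column_stack([rs, tu])
print(M)                                           # [[-1, -1], [1, 0]]
print(np.allclose(np.linalg.matrix_power(M, 3), np.eye(2)))  # True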

For (b):

$ v _ 1 = \left[\begin{array}{r}1\\  -1\\  0\end{array}\right]$, $ T(v _ 1) = \left[\begin{array}{r}0\\  1\\  -1\end{array}\right] = r \left[\begin{array}{r}1\\  -1\\  0\end{array}\right] + s \left[\begin{array}{r}1\\  1\\  -2\end{array}\right]$

$ v _ 2 = \left[\begin{array}{r}1\\  1\\  -2\end{array}\right]$, $ T(v _ 2) = \left[\begin{array}{r}-2\\  1\\  1\end{array}\right] = t \left[\begin{array}{r}1\\  -1\\  0\end{array}\right] + u \left[\begin{array}{r}1\\  1\\  -2\end{array}\right]$

This gives three equations in $ r,s$ and the same for $ t,u$, so again use the two easiest equations in each case. We get $ r = -\frac 12$, $ s = \frac 12$, $ t = -\frac 32$, $ u = - \frac 12$. So $ M = \left[\begin{array}{rr}r&t\\  s&u\end{array}\right] = \left[\begin{array}{rr}-\frac 12&-\frac 32\\  \frac 12& -\frac 12\end{array}\right]$.

Again, it can be checked that $ M^3 = I$.
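For the record, here is a tiny exact check of $ M^3 = I$ for part (b), using Python fractions (a sketch, not part of the solution):

from fractions import Fraction as F

M = [[F(-1, 2), F(-3, 2)],
     [F( 1, 2), F(-1, 2)]]

def mul(A, B):
    # 2 x 2 matrix product with exact rational arithmetic
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(mul(mul(M, M), M))   # [[1, 0], [0, 1]] as fractions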

For Problem W-4:

(a) In the opposite direction, the matrix that takes the standard basis to the given nonstandard basis is $ A = \left[\begin{array}{rr}1&1\\  1&-1\end{array}\right]$. Then the answer is $ M = A^{-1} = \frac 1{-2} \left[\begin{array}{rr}-1&-1\\  -1&1\end{array}\right] = \frac 12 \left[\begin{array}{rr}1&1\\  1&-1\end{array}\right]$.

As a check, $ \frac 12 \left[\begin{array}{rr}1&1\\  1&-1\end{array}\right]\left[\begin{array}{r}1\\  1\end{array}\right] = \left[\begin{array}{r}1\\  0\end{array}\right]$ and $ \frac 12 \left[\begin{array}{rr}1&1\\  1&-1\end{array}\right]\left[\begin{array}{r}1\\  -1\end{array}\right] = \left[\begin{array}{r}0\\  1\end{array}\right]$.
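The same check takes only a few lines in Python (a sketch, assuming NumPy):

import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
M = np.linalg.inv(A)                  # (1/2) [[1, 1], [1, -1]]
print(M @ np.array([1.0,  1.0]))      # [1, 0]
print(M @ np.array([1.0, -1.0]))      # [0, 1]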

(b) Following the suggestion, $ A$ is as in part (a) and $ B =
\left[\begin{array}{rr}1&3\\  2&4\end{array}\right]$, giving $ M = B A^{-1} = \left[\begin{array}{rr}2&-1\\  3&-1\end{array}\right]$. Check: $ \left[\begin{array}{rr}2&-1\\  3&-1\end{array}\right] \left[\begin{array}{r}1\\  1\end{array}\right] = \left[\begin{array}{r}1\\  2\end{array}\right]$ and $ \left[\begin{array}{rr}2&-1\\  3&-1\end{array}\right] \left[\begin{array}{r}1\\  -1\end{array}\right] = \left[\begin{array}{r}3\\  4\end{array}\right]$.

An alternate method: We want $ M$ with $ M \left[\begin{array}{r}1\\  1\end{array}\right] =
\left[\begin{array}{r}1\\  2\end{array}\right]$ and $ M \left[\begin{array}{r}1\\  -1\end{array}\right] = \left[\begin{array}{r}3\\  4\end{array}\right]$. We can put these together using the idea that ``columns of the matrix on the right work independently'' in a matrix product:

$ M \left[\begin{array}{rr}1&1\\  1&-1\end{array}\right] = \left[\begin{array}{rr}1&3\\  2&4\end{array}\right]$. This says $ MA =
B$ with $ A, B$ as in the first solution. Now we solve by multiplying both sides on the right by $ A^{-1}$, again getting the solution $ M = B A^{-1}$.
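Here is a short Python sketch of the computation $ M = B A^{-1}$ (assuming NumPy), with the same check as above:

import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
B = np.array([[1.0, 3.0],
              [2.0, 4.0]])
M = B @ np.linalg.inv(A)
print(M)                              # [[2, -1], [3, -1]]
print(M @ np.array([1.0,  1.0]))      # [1, 2]
print(M @ np.array([1.0, -1.0]))      # [3, 4]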


Kirby A. Baker 2001-11-19