
p. 21, Ex. 3.

This problem can be solved by experimenting, but it's better to use an organized approach. I'll list several options.

(i) Just try matrices that have three zeros. You will find that good solutions are $ N = \left[\begin{array}{cc}0&0\\  1&0\end{array}\right]$ and its transpose.

(ii) Notice that $ A$ should not be invertible, since a product of invertible matrices is invertible (rule $ (AB)^{-1} = B^{-1} A^{-1}$) and the zero matrix is not invertible. Then $ A$ doesn't have rank 2, and we also know it doesn't have rank 0 (rank 0 would make $ A$ the zero matrix). So the rank of $ A$ is 1. This should help guide experiments.

(ii$ '$) Continuing with the rank 1 idea and recalling a previous problem, $ A = u v^t$ for column vectors $ u, v$. Then $ A^2 = u v^t u v^t$, which is column$ \cdot$row$ \cdot$column$ \cdot$row. The middle row and column multiply to give a $ 1 \times 1$ matrix, which is just a scalar, and it had better be 0 or else $ A^2$ won't be the zero matrix. So just pick any two column vectors $ u, v$ with $ v^t u = 0$ (which really says the dot product of the two is 0) and let $ A = u v^t$. Example: $ v = \left[\begin{array}{c}1\\  0\end{array}\right]$, $ u = \left[\begin{array}{c}0\\  1\end{array}\right]$; then $ A = N$ above.
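
For another illustration of this method (a worked example beyond the one above): take $ u = \left[\begin{array}{c}1\\  1\end{array}\right]$ and $ v = \left[\begin{array}{c}1\\  -1\end{array}\right]$, so that $ v^t u = 1 - 1 = 0$. Then $ A = u v^t = \left[\begin{array}{cc}1&-1\\  1&-1\end{array}\right]$, and indeed $ A^2 = \left[\begin{array}{cc}1-1&-1+1\\  1-1&-1+1\end{array}\right] = 0$.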

(iii) Plan what $ \tau _ A$ should do to standard basis vectors. If $ \tau _ A$ takes $ e _ 1$ to $ e _ 2$ and $ e _ 2$ to 0, then doing $ \tau _ A$ twice will take both standard basis vectors to 0, which says $ \tau _ {A^2}$ is the zero transformation, as hoped. What is $ A$? Its first column is $ e _ 2$ and its second column is 0, so $ A = N$.

(iv) [for later, after you have had the ingredients] If $ A^2$ is the zero matrix, then the eigenvalues of $ A$ also must obey $ \lambda^2 = 0$, so the determinant and trace of $ A$ will be 0. (The trace is the sum of the diagonal entries.) So $ A$ should look like $ \left[\begin{array}{cc}a&\cdot\\  \cdot&-a\end{array}\right]$. If $ a \neq 0$ then to make the determinant come out 0 the missing entries should be $ b$ and $ -a^2/b$ for some $ b \neq 0$. If $ a = 0$ then to make the determinant come out 0 the missing entries should be $ b$ and 0, either way around. To summarize, $ A = \left[\begin{array}{cc}a&b\\  -a^2/b&-a\end{array}\right]$ (with $ a,b \neq 0$) or $ A = \left[\begin{array}{cc}0&b\\  0&0\end{array}\right]$ or $ A = \left[\begin{array}{cc}0&0\\  b&0\end{array}\right]$ (with $ b \neq 0$).
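
As a check on the first form: if $ A = \left[\begin{array}{cc}a&b\\  -a^2/b&-a\end{array}\right]$, then entry by entry $ A^2 = \left[\begin{array}{cc}a^2 - a^2&ab - ab\\  -a^3/b + a^3/b&-a^2 + a^2\end{array}\right] = 0$, as desired.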





p. 21, Ex. 4.

I get

$ E _ 1 = \left[\begin{array}{rrr}1&0&0\\  -2&1&0\\  0&0&1\end{array}\right]$, $ E _ 2 = \left[\begin{array}{rrr}1&0&0\\  0&1&0\\  -3&0&1\end{array}\right]$, $ E _ 3 = \left[\begin{array}{rrr}1&0&0\\  0&\frac 12&0\\  0&0&1\end{array}\right]$, $ E _ 4 = \left[\begin{array}{rrr}1&1&0\\  0&1&0\\  0&0&1\end{array}\right]$, $ E _ 5 = \left[\begin{array}{rrr}1&0&0\\  0&1&0\\  0&-3&1\end{array}\right]$, $ E _ 6 = \left[\begin{array}{rrr}1&0&0\\  0&1&0\\  0&0&-2\end{array}\right]$, $ E _ 7 = \left[\begin{array}{rrr}1&0&-\frac 12\\  0&1&0\\  0&0&1\end{array}\right]$, $ E _ 8 = \left[\begin{array}{rrr}1&0&0\\  0&1&\frac 12\\  0&0&1\end{array}\right]$.

Of course, someone else might use some other order of row-reduction and get other matrices.

Notice that the text uses $ E$ for elementary matrices, while I have been using $ E$ for a matrix in row-reduced echelon form.



This is really what Theorem 9 on p. 20 is saying for the case of elementary matrices.
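
For instance, for any $ 3 \times n$ matrix $ M$ with rows $ R_1, R_2, R_3$, the product $ E_1 M$ has rows $ R_1$, $ R_2 - 2R_1$, $ R_3$; that is, multiplying by $ E_1$ on the left performs exactly the row operation that produced $ E_1$ from the identity matrix.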



p. 21, Ex. 7.

Here we are evidently supposed to do more than just quote a theorem like the one on Handout S. The more we know, the easier it is to find a good solution. Some methods:

(i) The straightforward method is to write $ A = \left[\begin{array}{cc}a&b\\  c&d\end{array}\right]$ and $ B = \left[\begin{array}{cc}r&s\\  t&u\end{array}\right]$ and solve $ AB = I$ for $ B$ in terms of $ A$. Then we can check directly that $ BA = I$.

Details: As a warmup, think how we would do the problem if $ A$ had numbers instead of letters, e.g., $ \left[\begin{array}{cc}3&7\\  4&2\end{array}\right] \left[\begin{array}{cc}r&s\\  t&u\end{array}\right] = \left[\begin{array}{cc}1&0\\  0&1\end{array}\right]$. This is equivalent to two separate linear systems: $ \left[\begin{array}{cc}3&7\\  4&2\end{array}\right] \left[\begin{array}{c}r\\  t\end{array}\right] = \left[\begin{array}{c}1\\  0\end{array}\right]$ and $ \left[\begin{array}{cc}3&7\\  4&2\end{array}\right] \left[\begin{array}{c}s\\  u\end{array}\right] = \left[\begin{array}{c}0\\  1\end{array}\right]$. They can be made into a single problem with an augmented matrix to row-reduce: $ \left[\begin{array}{cccc}3&7&1&0\\  4&2&0&1\end{array}\right]$. The answer for $ B$ will be the right-hand part after row reduction. This is the same as a common method of computing an inverse, which is of course what we're doing. Starting with $ \left[\begin{array}{cccc}a&b&1&0\\  c&d&0&1\end{array}\right]$ and row-reducing, we eventually will get $ \left[\begin{array}{cccc}1&0&\frac d \Delta&-\frac b \Delta\\  0&1&-\frac c \Delta&\frac a \Delta\end{array}\right]$, where $ \Delta = ad - bc$ (Greek capital Delta). (If we divide by $ a$, say, during the calculation then we have to split into the cases $ a \neq 0$ and $ a = 0$, but the answer will come out the same either way.) $ \Delta$ itself can't be 0, since there is a solution. Then we know $ B = \left[\begin{array}{cc}\frac d \Delta&-\frac b \Delta\\  -\frac c \Delta&\frac a \Delta\end{array}\right]$ or, more simply, $ B = \frac 1 \Delta \left[\begin{array}{cc}d&-b\\  -c&a\end{array}\right]$. We can check directly that $ BA = I$.
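
Carrying out that check: $ BA = \frac 1 \Delta \left[\begin{array}{cc}d&-b\\  -c&a\end{array}\right] \left[\begin{array}{cc}a&b\\  c&d\end{array}\right] = \frac 1 \Delta \left[\begin{array}{cc}da - bc&db - bd\\  -ca + ac&-cb + ad\end{array}\right] = \frac 1 \Delta \left[\begin{array}{cc}\Delta&0\\  0&\Delta\end{array}\right] = I$.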

(ii) $ A$ has to be of rank 2, since $ I$ has rank 2 and rank can't increase under matrix multiplication. So row-reducing $ A$ we get $ I$; in other words, $ E _ k \dots E _ 1 A = I$. Multiplying by $ B$ on the right gives $ E _ k \dots E _ 1 (AB) = IB$, i.e., $ E _ k \dots E _ 1 I = B$. So $ B = E _ k \dots E _ 1$, and therefore $ BA = E _ k \dots E _ 1 A = I$.

(iii) If $ AB = I$ then $ \tau _ A \tau _ B = \tau _ {AB} = \mathbf{1}$, so $ \tau _ A$ is onto, which means the rank of $ A$ is 2. Then the nullity of $ A$ is 0, so $ \tau _ A$ is also one-to-one. Since $ \tau _ A$ is an isomorphism, it has a two-sided inverse, which must be $ \tau _ B$. Then $ B$ is a two-sided inverse for $ A$.

Note. This problem is related to a simple observation: If $ f:X\rightarrow Y$ has both a left inverse $ g$ and a right inverse $ h$, then $ g = h$. The reason: $ g \circ f \circ h$ can be thought of both as $ (g \circ f) \circ h = \mathbf{1}_ X \circ h = h$ and as $ g \circ (f \circ h) = g \circ \mathbf{1}_ Y = g$, so $ h = g$.

In the special case of linear transformations from a finite-dimensional vector space to itself, a corollary of the dimension theorem is that if $ T$ is one-to-one then $ T$ is onto; so if $ T$ has a left inverse then it has a right inverse also, and hence a two-sided inverse.



p. 26, Ex. 1. It's simplest to augment $ A$ with an identity matrix to keep track of the cumulative effect of the elementary row operations. The idea is that by doing row operations we are computing $ PA$, and by doing the same row operations on $ [A\vert I]$ we get $ [PA\vert P]$ so we can see $ P$ as well as the reduced version of $ A$.

$ [A\vert I] = \left[\begin{array}{ccccccc}1&2&1&0&1&0&0\\  -1&0&3&5&0&1&0\\  1&-2&1&1&0&0&1\end{array}\right] \rightarrow \dots \rightarrow \left[\begin{array}{ccccccc}1&0&0&-\frac 78&\frac 38&-\frac 14&\frac 38\\  0&1&0&-\frac 14&\frac 14&0&-\frac 14\\  0&0&1&\frac {11}8&\frac 18&\frac 14&\frac 18\end{array}\right]$.

This says that $ PA = \left[\begin{array}{cccc}1&0&0&-\frac 78\\  0&1&0&-\frac 14\\  0&0&1&\frac {11}8\end{array}\right]$ and $ P = \left[\begin{array}{ccc}\frac 38&-\frac 14&\frac 38\\  \frac 14&0&-\frac 14\\  \frac 18&\frac 14&\frac 18\end{array}\right]$.
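
As a spot check: the first row of $ P$ times the first column of $ A$ is $ \frac 38 \cdot 1 + (-\frac 14)(-1) + \frac 38 \cdot 1 = \frac 38 + \frac 28 + \frac 38 = 1$, matching the $ (1,1)$ entry of $ PA$.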



p. 27, Ex. 5.

Yes: scaling the last three rows by $ \frac 12, \frac 13, \frac 14$ respectively and sweeping out columns using the leading 1's, we see that $ A$ is elementarily equivalent to the identity matrix and so is invertible.

Other ways to see this, for the future:

The determinant of a triangular matrix is the product of the diagonal entries, so $ \det A \neq 0$, which means $ A$ is invertible.
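
Here the diagonal entries are evidently $ 1, 2, 3, 4$ (that is what the scalings by $ \frac 12, \frac 13, \frac 14$ above suggest), so $ \det A = 1 \cdot 2 \cdot 3 \cdot 4 = 24 \neq 0$.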

The eigenvalues of a triangular matrix are the diagonal entries, and since none are 0 the matrix is nonsingular.



p. 27, Ex. 6.

The rank of $ A$ is no more than 1, and the rank of $ C = AB$ can't be larger than the rank of $ A$ (rank can never increase under matrix multiplication), so rank $ C \leq 1$. But $ C$ is $ 2 \times 2$, so if $ C$ were invertible its rank would have to be 2.
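
For instance, if $ A$ is $ 2 \times 1$ and $ B$ is $ 1 \times 2$, say $ A = \left[\begin{array}{c}a_1\\  a_2\end{array}\right]$ and $ B = \left[\begin{array}{cc}b_1&b_2\end{array}\right]$, then $ C = AB = \left[\begin{array}{cc}a_1b_1&a_1b_2\\  a_2b_1&a_2b_2\end{array}\right]$, whose rows are proportional; directly, $ \det C = a_1b_1a_2b_2 - a_1b_2a_2b_1 = 0$.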



p. 74, Ex. 11.

This problem says to show that $ \tau _ A$ is the zero transformation $ \Leftrightarrow $ $ A$ is the zero matrix. Solution:

For ``$ \Rightarrow $'': if $ \tau _ A(v) = \mathbf{0}$ for all $ v$, then in particular $ \tau _ A(e _ j) = \mathbf{0}$ for all $ j$, which is the same as saying that each column of $ A$ is the zero vector, i.e., all entries of $ A$ are 0.

For ``$ \Leftarrow $'': if $ A$ is the zero matrix then $ Av = \mathbf{0}$ for all $ v$, which says $ \tau _ A$ is the zero transformation.



p. 74, Ex. 12.

Since the range and null space of $ T$ are identical, the rank and nullity are the same, say $ k$, and by the dimension theorem $ \dim V = k + k = 2k$, so $ \dim V$ is even.

Technically, the simplest example would be to have $ V = \{\mathbf{0}\}$ and let $ T$ be the zero transformation. A more meaningful example would be to take $ V = F^2$ (since we know the dimension is even!), and let $ \mathrm{Range}(T) = \mathrm{Nullspace}(T) = \mathrm{span}(e _ 1)$. This can be accomplished by having $ T(e _ 2) = e _ 1$ and $ T(e _ 1) = \mathbf{0}$. In other words, $ T = \tau _ A$ for $ A = \left[\begin{array}{cc}0&1\\  0&0\end{array}\right]$.



p. 84, Ex. 7.

A ``linear operator'' is a linear transformation from a vector space to itself. We know the linear transformations $ \mathbb{R}^2 \rightarrow \mathbb{R}^2$ are all of the form $ \tau _ A$ for a suitable $ 2 \times 2$ matrix $ A$, so we are looking for $ 2 \times 2$ matrices $ A, B$ with $ AB = 0$, $ BA \neq 0$ (where 0 means the $ 2 \times 2$ zero matrix).

One way is to experiment with matrices that have lots of 0 entries. For example, $ A = \left[\begin{array}{cc}0&1\\  0&0\end{array}\right]$, $ B = \left[\begin{array}{cc}1&0\\  0&0\end{array}\right]$ provide an example.
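
Multiplying out, as a check: $ AB = \left[\begin{array}{cc}0&1\\  0&0\end{array}\right] \left[\begin{array}{cc}1&0\\  0&0\end{array}\right] = \left[\begin{array}{cc}0&0\\  0&0\end{array}\right]$, while $ BA = \left[\begin{array}{cc}1&0\\  0&0\end{array}\right] \left[\begin{array}{cc}0&1\\  0&0\end{array}\right] = \left[\begin{array}{cc}0&1\\  0&0\end{array}\right] \neq 0$.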

Another way is to realize that the nilpotent matrix $ N = \left[\begin{array}{cc}0&0\\  1&0\end{array}\right]$ and its transpose are often good examples of bad behavior, so try $ A = N$ and $ B = \left[\begin{array}{cc}a&b\\  c&d\end{array}\right]$, and solve for the desired condition. You get an example similar to the one in the preceding paragraph.
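
Carrying that out with $ A = N$: $ AB = \left[\begin{array}{cc}0&0\\  a&b\end{array}\right]$, so $ AB = 0$ forces $ a = b = 0$, leaving $ B = \left[\begin{array}{cc}0&0\\  c&d\end{array}\right]$. Then $ BA = \left[\begin{array}{cc}0&0\\  d&0\end{array}\right]$, so any $ d \neq 0$ (with $ c$ arbitrary) works; e.g., $ B = \left[\begin{array}{cc}0&0\\  0&1\end{array}\right]$.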



p. 85, Ex. 1. $ \mathbb{C}\cong \mathbb{R}^2$ by $ a + bi \mapsto (a,b)$.
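
In more detail: this correspondence is one-to-one and onto, and it respects the vector operations: $ (a + bi) + (c + di) \mapsto (a + c,\ b + d)$ and $ r(a + bi) \mapsto (ra, rb)$ for real scalars $ r$, so it is an isomorphism of vector spaces over $ \mathbb{R}$.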



p. 86, Ex. 4.

We need to set up a correspondence between pairs of indices $ i,j$ ( $ i= 1,\dots, m$, $ j = 1,\dots, n$) and indices running from 1 to $ mn$. In other words, we need to specify an order for the pairs of indices and say how to list them in that order. Where does $ (i,j)$ go?

One way is to list the matrix by row 1, then row 2, etc.:
$ (1,1),\dots, (1,n),(2,1),\dots, (2,n)$, etc.
Each new row adds $ n$ to the position, but row 1 adds nothing, so the contribution of the row index is off by 1. We get that the position of $ (i,j)$ in the list is $ n(i-1) + j$. Then $ F ^ {m \times n} \cong F ^ {mn}$ by having each matrix $ A \mapsto (r _ 1,\dots, r _ {mn})$ given by $ r _ {n(i-1) + j} = A _ {i,j}$ for $ i= 1,\dots, m$, $ j = 1,\dots, n$.
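
For example, with $ m = 2$ and $ n = 3$ the list is $ (1,1), (1,2), (1,3), (2,1), (2,2), (2,3)$, and the formula places $ (2,2)$ at position $ 3(2-1) + 2 = 5$, as it should.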



p. 86, Ex. 6.

For `` $ \Rightarrow $'': If $ V$ and $ W$ are isomorphic we know that a basis of one maps to a basis of the other, so they have the same dimension.

For `` $ \Leftarrow $'': If they have the same dimension, say $ n$, we know they are both isomorphic to $ F^n$ and so are isomorphic to each other.
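
In more detail for the last step: if $ S: V \rightarrow F^n$ and $ T: W \rightarrow F^n$ are isomorphisms, then $ T^{-1} \circ S: V \rightarrow W$ is a composition of isomorphisms and hence an isomorphism.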



For Problem Q-1:

(a) $ (\forall y)(\exists x)(y = x^3 - x)$.

(b) $ (\forall x)(\forall y)(x + y = y + x)$.

(c) $ (\forall x)(x > 0 \Rightarrow x^2 > 0)$.

(d) $ (\exists x)(1 - 8x + x^2 = 5)$.

(e) $ (\exists x)(\exists y)(17x + 25y = 6$ and $ 101x - 37y = 13)$.

(f) $ (\exists a)(\not \exists x)(ax = x + 1)$.

(g) $ (\exists a)(\forall x)(ax \neq x + 1)$ (or in other words, $ (\exists a)(\forall x)(\mathrm{not}\ ax = x + 1)$).

Note: In logic, $ \forall$ and $ \exists$ are called quantifiers. $ \forall$ is the universal quantifier (since it says something is true universally) and $ \exists$ is the existential quantifier.

Normally we don't write mathematics in such a condensed form, but it is important always to notice what the underlying quantification is.
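
For example, the order of quantifiers matters: over $ \mathbb{R}$, $ (\forall x)(\exists y)(y > x)$ is true (given $ x$, take $ y = x + 1$), while $ (\exists y)(\forall x)(y > x)$ is false, since no single $ y$ exceeds every real number.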


Kirby A. Baker 2001-11-09