
For Problem E-1:

Method #1: If $ru + s(u+v) + t(u+v+w) = 0$, expanding and then regrouping we get $(r+s+t)u + (s+t)v + tw = 0$. Since $u,v,w$ are linearly independent, these coefficients must all be $0$, so $r+s+t = 0$, $s+t = 0$, $t = 0$. Working backwards from $t = 0$ we get $s = 0$ and then $r = 0$. Therefore $ru + s(u+v) + t(u+v+w) = 0$ implies $r = s = t = 0$, and so $u$, $u+v$, $u+v+w$ are linearly independent.

Method #2: The subspace $W$ of $V$ generated by $u,v,w$ is isomorphic to $F^3$ with $u,v,w$ corresponding to $e_1,e_2,e_3$, since $u,v,w$ span it and are linearly independent. Under this isomorphism, $u$, $u+v$, $u+v+w$ correspond to $(1,0,0),(1,1,0),(1,1,1)$. Making these the rows of a matrix and row-reducing, we get the identity matrix, which has rank 3, so they must be linearly independent. Using the isomorphism, $u$, $u+v$, $u+v+w$ must be linearly independent.
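Method #2 can be checked by machine. Here is a minimal sketch in plain Python (the 3&times;3 determinant helper `det3` is ours, written for this example): a nonzero determinant for the matrix with rows $(1,0,0),(1,1,0),(1,1,1)$ means rank 3, hence linear independence.

```python
# Sketch: check that (1,0,0), (1,1,0), (1,1,1) are linearly independent
# by computing the determinant of the 3x3 matrix having them as rows.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows (cofactor expansion)."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

rows = [[1, 0, 0],
        [1, 1, 0],
        [1, 1, 1]]
print(det3(rows))  # 1: nonzero, so the three vectors are linearly independent
```

Since the matrix is triangular with 1's on the diagonal, the determinant is visibly 1.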



For Problem E-2:

Using the augmented matrix and row reducing (the arithmetic here is mod 2),

$ \left[\begin{array}{rrrrr}1&0&1&0&0\\ 0&0&1&1&0\\ 1&1&0&1&0\\ 0&1&1&0&1\end{array}\right] \rightsquigarrow \left[\begin{array}{rrrrr}1&0&1&0&0\\ 0&1&1&1&0\\ 0&0&1&1&0\\ 0&0&0&1&1\end{array}\right] \rightsquigarrow $

$ \left[\begin{array}{rrrrr}1&0&0&1&0\\ 0&1&0&0&0\\ 0&0&1&1&0\\ 0&0&0&1&1\end{array}\right] \rightsquigarrow \left[\begin{array}{rrrrr}1&0&0&0&1\\ 0&1&0&0&0\\ 0&0&1&0&1\\ 0&0&0&1&1\end{array}\right],$ from which we see there is the unique solution $x = 1$, $y = 0$, $z = 1$, $w = 1$. This checks.
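With only four unknowns over the two-element field, the answer can also be confirmed by brute force. A sketch in plain Python (the equations are the rows of the augmented matrix above, with arithmetic mod 2):

```python
from itertools import product

# Each equation is (coefficient list, right-hand side), read off the
# augmented matrix; all arithmetic is mod 2.
eqs = [([1, 0, 1, 0], 0),   # x     + z       = 0
       ([0, 0, 1, 1], 0),   #         z + w   = 0
       ([1, 1, 0, 1], 0),   # x + y     + w   = 0
       ([0, 1, 1, 0], 1)]   #     y + z       = 1

# Try all 16 vectors (x, y, z, w) with entries in {0, 1}.
solutions = [v for v in product([0, 1], repeat=4)
             if all(sum(c*x for c, x in zip(coeffs, v)) % 2 == rhs
                    for coeffs, rhs in eqs)]
print(solutions)  # [(1, 0, 1, 1)]: the unique solution x=1, y=0, z=1, w=1
```

The single survivor confirms both existence and uniqueness of the solution.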



For Problem E-3:

$ M = \left[\begin{array}{rrrrr} 1& 2& 1& 3& -2\\ 1& 2& 2& 3& 0\\ 1& 2& 3& 3& 2 \end{array}\right] \rightsquigarrow E = \left[\begin{array}{rrrrr}1&2&0&3&-4\\ 0&0&1&0&2\\ 0&0&0&0&0\end{array}\right]$.

The two nonzero rows of $ E$ are a basis for the row space.

For the null space, we write the general solution in variables, say, $x_1,\ldots, x_5$, getting \begin{displaymath}
\left \{
\begin{array}{ccrrr}
x_1 &=& -2x_2 & -3x_4 & +4x_5 \\
x_2 &=& x_2 & & \\
x_3 &=& & & -2x_5 \\
x_4 &=& & x_4 & \\
x_5 &=& & & x_5 \\
\end{array}\right .
\end{displaymath}

so $ \left[\begin{array}{c}x_1\\ x_2\\ x_3\\ x_4\\ x_5\end{array}\right] = x_2 \left[\begin{array}{r}-2\\ 1\\ 0\\ 0\\ 0\end{array}\right] + x_4 \left[\begin{array}{r}-3\\ 0\\ 0\\ 1\\ 0\end{array}\right] + x_5 \left[\begin{array}{r}4\\ 0\\ -2\\ 0\\ 1\end{array}\right]$. These three column vectors are then a basis.

For the column space, we look at $ E$ and see that the pivot columns are columns 1 and 3, so a basis for the column space is columns 1 and 3 of $ M$. (Alternatively, we could transpose $ M$, row reduce, and take the resulting nonzero rows.)
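The reduction can be verified mechanically. Below is a sketch in plain Python: a small row-reduction routine over the rationals (the helper `rref` is ours, not from a library), applied to $M$, taking the last entry of the third row of $M$ to be $2$, consistent with $E$.

```python
from fractions import Fraction

def rref(rows):
    """Row-reduce a matrix (list of rows) over Q; return (rref, pivot column indices)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        i = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if i is None:
            continue                        # no pivot in this column
        m[r], m[i] = m[i], m[r]             # swap the pivot row up
        m[r] = [x / m[r][c] for x in m[r]]  # scale the pivot to 1
        for k in range(len(m)):             # clear the rest of the column
            if k != r and m[k][c] != 0:
                m[k] = [a - m[k][c] * b for a, b in zip(m[k], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

M = [[1, 2, 1, 3, -2],
     [1, 2, 2, 3, 0],
     [1, 2, 3, 3, 2]]
E, pivots = rref(M)
print([[int(x) for x in row] for row in E])  # [[1, 2, 0, 3, -4], [0, 0, 1, 0, 2], [0, 0, 0, 0, 0]]
print([p + 1 for p in pivots])               # [1, 3]: pivot columns are columns 1 and 3
```

The pivot-column indices confirm that columns 1 and 3 of $M$ form a basis for the column space.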



For Problem E-4:

(a) one-to-one and onto; (b) neither; (c) onto; (d) neither; (e) one-to-one.



For Problem E-5:

If $t_1 \neq t_2$ in $T$, is $g(t_1) \neq g(t_2)$? Since $f$ is onto, we know $t_1 = f(s_1)$ and $t_2 = f(s_2)$ for some $s_1, s_2 \in S$; we must have $s_1 \neq s_2$, or else we would have $t_1 = t_2$. Since $g(t_1),g(t_2)$ are defined to be $s_1,s_2$, the answer is yes. Therefore $g$ is one-to-one.

If $ s \in S$ is there some $ t \in T$ with $ g(t) = s$? Yes, $ t = f(s)$ works. So $ g$ is onto.
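A toy model of this argument, sketched in plain Python with a hypothetical one-to-one correspondence $f:S\rightarrow T$ given as a dict; $g$ sends each $t$ back to the unique $s$ with $f(s) = t$.

```python
# f: S -> T, a hypothetical one-to-one and onto function for illustration.
f = {'a': 1, 'b': 2, 'c': 3}

# g: T -> S, defined by g(t) = the s with f(s) = t.
g = {t: s for s, t in f.items()}

# g is one-to-one: distinct t's map to distinct s's.
assert len(set(g.values())) == len(g)
# g is onto: every s in S is hit, via t = f(s).
assert set(g.values()) == set(f.keys())
print("g is a one-to-one correspondence")
```

The two assertions mirror the two paragraphs of the proof: injectivity and surjectivity of $g$.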



For Problem E-6:

First, some notation: If we have a function like $ f:\mathbb{R}\rightarrow \mathbb{R}$ given by $ f(x) = x^2$ and we want to describe it without naming it, we can just write $ x \mapsto x^2$, which we say as ``$ x$ maps to $ x^2$''. The different-looking arrow is a reminder that we're talking about elements rather than whole spaces. But sometimes it's handy to use this kind of notation with the name of a function; for example, instead of ``Let $ t = f(s)$'' we can say ``Let $ s
\stackrel{f}{\mapsto} t$'', read as ``$ s$ maps to $ t$ under $ f$'' or ``via $ f$''.

In this notation, if $ f:S\rightarrow T$ is a one-to-one correspondence then $ s
\stackrel{f}{\mapsto} t$ is the same as $ t \stackrel{f^{-1}}{\mapsto} s$.

The same notation can be used to express the idea that a linear transformation $T:V\rightarrow W$ preserves addition of vectors:

Instead of $ T(v_1 + v_2) = T(v_1) + T(v_2)$ we can say
``If $ v_1 \stackrel{T}{\mapsto} w_1$ and $ v_2 \stackrel{T}{\mapsto} w_2$ then $ v_1 + v_2 \stackrel{T}{\mapsto} w_1 + w_2$.''

To say that $ T$ preserves multiplication by scalars, we can say
``If $ v \stackrel{T}{\mapsto} w$ then $ rv \stackrel{T}{\mapsto} rw$, for any $ r \in F$''.

Now to do the problem: To check that $ T^{-1}$ preserves addition, we must show that for any $ w_1, w_2 \in W$ if $ w_1
\stackrel{T^{-1}}{\mapsto} v_1$ and $ w_2 \stackrel{T^{-1}}{\mapsto} v_2$ then $ w_1 + w_2 \stackrel{T^{-1}}{\mapsto} v_1 + v_2$. But this is exactly the same as saying that if $ v_1 \stackrel{T}{\mapsto} w_1$ and $ v_2 \stackrel{T}{\mapsto} w_2$ then $ v_1 + v_2 \stackrel{T}{\mapsto} w_1 + w_2$, which is true since $ T$ preserves addition. Therefore $ T^{-1}$ preserves addition.

Similarly, to check multiplication by scalars, if $ w
\stackrel{T^{-1}}{\mapsto} v$ then we want $ rw \stackrel{T^{-1}}{\mapsto} rv$. But this is the same as $ rv \stackrel{T}{\mapsto} rw$, which is true since $ T$ preserves multiplication by scalars.

Therefore $ T^{-1}$ is an isomorphism.
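A numerical sanity check of this fact, sketched in plain Python with a hypothetical invertible $T$ on $\mathbb{R}^2$ (chosen for this example, not taken from the problem): we verify on sample vectors that $T^{-1}$ preserves addition and scalar multiplication.

```python
# Apply a 2x2 matrix M to a vector v.
def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

T     = [[2, 1], [1, 1]]    # det = 1, so T is invertible
T_inv = [[1, -1], [-1, 2]]  # the inverse of T

w1, w2, r = [3, -1], [0, 5], 4

# T_inv preserves addition: T_inv(w1 + w2) = T_inv(w1) + T_inv(w2)
assert apply(T_inv, [w1[0] + w2[0], w1[1] + w2[1]]) == \
       [a + b for a, b in zip(apply(T_inv, w1), apply(T_inv, w2))]
# T_inv preserves scalar multiplication: T_inv(r*w1) = r*T_inv(w1)
assert apply(T_inv, [r*w1[0], r*w1[1]]) == [r*x for x in apply(T_inv, w1)]
print("T_inv is linear on these samples")
```

Of course, a finite check on samples is only an illustration; the proof above covers all vectors and scalars at once.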



For Problem E-7:

In (b), with the basis $1, x, x^2$ for Pols$(\mathbb{R},2)$ there is an isomorphism $\mathbb{R}^3 \equiv$ Pols$(\mathbb{R},2)$ taking $e_1,e_2,e_3$ to $1, x, x^2$. Under this isomorphism, part (b) becomes part (a), so the span is the whole space.

In (c), using the basis $x^2, x, 1$ for Pols$(\mathbb{R},2)$, the resulting isomorphism with $\mathbb{R}^3$ turns part (c) into part (a), so the span is the whole space.



For Problem E-8:

For (a):

(ii) says the pivot columns of $ A$ are columns 1, 3, and 5.

(iii) expresses the other columns as linear combinations of the pivot columns. Therefore columns 2, 4, and 6 are $ \left[\begin{array}{r}3\  0\  0\end{array}\right]$, $ \left[\begin{array}{r}2\  5\  0\end{array}\right]$, and $ \left[\begin{array}{r}-1\  7\  -4\end{array}\right]$.

Putting these together says that the mystery matrix is

$ M = \left[\begin{array}{rrrrrr}1&3&0&2&0&-1\  0&0&1&5&0&7\  0&0&0&0&1&-4\end{array}\right]$.
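A quick consistency check of this reconstruction, sketched in plain Python: the pivot columns of $M$ should be $e_1, e_2, e_3$, and each remaining column should be the stated combination of the preceding pivot columns.

```python
# The reconstructed mystery matrix.
M = [[1, 3, 0, 2, 0, -1],
     [0, 0, 1, 5, 0, 7],
     [0, 0, 0, 0, 1, -4]]

# Extract column j (0-indexed) as a list.
col = lambda j: [M[i][j] for i in range(3)]

# (ii): the pivot columns (1, 3, 5 in the problem's 1-indexing) are e1, e2, e3.
assert col(0) == [1, 0, 0] and col(2) == [0, 1, 0] and col(4) == [0, 0, 1]
# (iii): the remaining columns are the given combinations of preceding pivot columns.
assert col(1) == [3*a for a in col(0)]                         # col 2 = 3(col 1)
assert col(3) == [2*a + 5*b for a, b in zip(col(0), col(2))]   # col 4 = 2(col 1) + 5(col 3)
assert col(5) == [-a + 7*b - 4*c
                  for a, b, c in zip(col(0), col(2), col(4))]  # col 6 = -(col 1) + 7(col 3) - 4(col 5)
print("M is consistent with (ii) and (iii)")
```

Because the pivot columns are standard basis vectors, the coefficients in each relation appear directly as the entries of the non-pivot columns, which is the key observation for parts (b) and (c).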

For (b): Knowing the linear relations means we know which columns are linear combinations of preceding columns, and what the coefficients are. So just as for the mystery matrix, if we are given any matrix in row-reduced echelon form, the pivot columns are those which are not in the span of the preceding columns; these columns look like some columns of the identity matrix, in order. Each remaining column is a linear combination of the preceding pivot columns, and the coefficients give its entries.

For (c): Suppose $ A$ is row-reduced by two people to matrices $ E_1$ and $ E_2$ in row-reduced echelon form. Both $ E_1$ and $ E_2$ have the same linear relations between columns as $ A$ does and so have the same linear relations between columns as each other. But by (b) we know these linear relations uniquely determine the entries of $ E_1$ and $ E_2$, so that $ E_1 = E_2$. Therefore the row-reduced echelon form is unique.


Kirby A. Baker 2001-10-24