
p. 5

Ex. 5. Comment: This describes addition and multiplication in GF(2) = $ \mathbb{Z}_2$. Unary minus (negation) is $ -x = x$. The intended solution to the problem is to check each law. After checking commutativity of addition and multiplication, you can save some effort in checking associativity.
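Since GF(2) has only two elements, every law can be verified by brute force. A minimal sketch in Python, encoding addition as XOR and multiplication as AND (the encoding is standard, not part of the exercise):

```python
from itertools import product

# GF(2) = Z_2: addition is XOR, multiplication is AND, and -x = x.
add = lambda x, y: x ^ y
mul = lambda x, y: x & y
neg = lambda x: x

F = (0, 1)
for x, y in product(F, repeat=2):
    assert add(x, y) == add(y, x)                      # + is commutative
    assert mul(x, y) == mul(y, x)                      # * is commutative
for x, y, z in product(F, repeat=3):
    assert add(add(x, y), z) == add(x, add(y, z))      # + is associative
    assert mul(mul(x, y), z) == mul(x, mul(y, z))      # * is associative
    assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))  # distributivity
for x in F:
    assert add(x, 0) == x and mul(x, 1) == x           # identities
    assert add(x, neg(x)) == 0                         # additive inverses
assert mul(1, 1) == 1        # the only nonzero element is its own inverse
print("all field laws hold in GF(2)")
```

As the comment above notes, a by-hand check can use commutativity, once established, to skip the mirror-image cases of associativity.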



Ex. 7: Suppose $ F$ is a subfield of $ \mathbb{R}$. To be a field, $ F$ must contain 0 and 1 (the same 0 and 1 as in $ \mathbb{R}$, since those are the only neutral elements for addition and multiplication). Then $ F$ also contains $ 1 + 1, 1+1+1, \ldots$ and their negatives, so $ F$ contains at least $ \mathbb{Z}$. But to be a subfield $ F$ must also contain the multiplicative inverse of each of its nonzero elements, so $ F$ contains $ \frac 12, \frac 13$, etc. Multiplying these in $ F$ by integers, we get all of $ \mathbb{Q}$. In other words, $ \mathbb{Q}\subseteq F$.



Ex. 8: Let $ F$ be any field of characteristic 0.

Informal version: In $ F$ there is an element 1. $ 1+1$ is like 2, so call it $ \bar 2$. Similarly, we can make $ \bar 3 = 1+1+1$, etc. Their negations are $ -\bar 2$, etc. If we fill out by writing $ \bar 0 = 0$ and $ \bar 1 = 1$, we have a copy of $ \mathbb{Z}$, a sort of pseudo-integers. Their ratios make a copy of $ \mathbb{Q}$.

There are a couple of issues here, though: First, where did we use the fact that the characteristic is 0? The answer is that to be sure about the copy of $ \mathbb{Z}$ we need to know that the sums of 1's and their negations are all distinct (i.e., different from each other). For example, could $ \bar 5 = \bar 2$? Since we can cancel additively in a field, $ 1+1+1+1+1 = 1+1$ would imply $ 1+1+1 = 0$, which can't happen since the characteristic is 0. In other words, by additive cancellation, if $ \bar m = \bar n$, then $ \bar 0 = \bar {m-n}$, which for characteristic 0 implies $ m=n$.

Another issue is whether the sums of 1's have operations like those of integers. For example, do we have $ \bar 2 + \bar 3 = \bar 5$ and $ \bar 2 \bar 3 = \bar 6$? Yes: the first left-hand side is the sum of five 1's, and $ \bar 2 \bar 3 = (1+1)(1+1+1) = 1+1+1+1+1+1 = \bar 6$, by the distributive law in $ F$.

Since the problem is early in the book, most likely the authors expected only an informal solution, but it's possible to be more formal, as follows. A ``copy'' of $ \mathbb{Q}$ means a subfield isomorphic to $ \mathbb{Q}$ as a field, so we try to construct a one-to-one function $ \phi: \mathbb{Q}\rightarrow F$, whose ``image'' will be a subfield of $ F$. Let $ \phi(0) = 0$. For an integer $ n > 0$ we define $ \phi(n) = \bar n$. (Here $ \phi(-2) = \bar {-2}$ means $ -\bar 2$, etc.) As in the informal version we can check that $ \phi$ so far is one-to-one and preserves addition and multiplication of integers.

Now for any fraction $ \frac ab \in \mathbb{Q}$ with $ a,b \in \mathbb{Z}$, $ b \neq 0$, we define $ \phi(\frac ab)$ to be $ \frac{\phi(a)}{\phi(b)} = \frac{\bar a}{\bar b}$, meaning $ \bar a \bar b^{-1}$. But there is also an issue here: Since different fractions can represent the same rational number, e.g., $ \frac 64 = \frac{15}{10}$, could we conceivably get inconsistent values from our definition? Is $ \frac{\bar 6}{\bar 4} = \frac{\bar{15}}{\bar{10}}$, necessarily? In other words, we must check that if $ \frac ab = \frac cd$ then $ \frac{\bar a}{\bar b} = \frac{\bar c}{\bar d}$ in $ F$. But it's OK: $ \frac ab = \frac cd$ means $ ad = bc$ in $ \mathbb{Z}$, so $ \bar a \bar d = \bar b \bar c$ in $ F$, and dividing both sides by $ \bar b \bar d$ gives $ \frac{\bar a}{\bar b} = \frac{\bar c}{\bar d}$, as hoped. We say that ``$ \phi$ is well defined''.

To finish the proof we need to check that $ \phi$ preserves addition and multiplication for rational numbers and is one-to-one. So far we know this on $ \mathbb{Z}$ only. For $ \frac ab, \frac cb \in \mathbb{Q}$ (any two fractions can be written with a common denominator $ b$) we have $ \phi(\frac ab + \frac cb) = \phi(\frac{a+c}b) = \frac{\overline{a+c}}{\bar b} = \frac{\bar a + \bar c}{\bar b} = \frac{\bar a}{\bar b} + \frac{\bar c}{\bar b} = \phi(\frac ab) + \phi(\frac cb)$, which is the correct result. Multiplication is checked similarly.

For being one-to-one, let's see if $ \phi(\frac ab) = \phi(\frac cd)$ implies $ \frac ab = \frac cd$: $ \phi(\frac ab) = \phi(\frac cd)$ says $ \frac{\bar a}{\bar b} = \frac{\bar c}{\bar d}$, so $ \bar a \bar d = \bar b \bar c$, or $ \phi(ad) = \phi(bc)$. Since we know $ \phi$ is one-to-one for integers, $ ad = bc$, so $ \frac ab = \frac cd$ as hoped.



p. 33

Ex. 1 (outline). The various properties come from similar properties of $ F$ (which you can quote) and from the fact that operations in $ F^n$ are defined coordinatewise.



Ex. 5. $ \oplus$ is not commutative or associative. We do have $ x \oplus 0 = x$, though, and for each $ x$, $ x \oplus x = 0$, so $ x$ is its own additive inverse. Also, $ 1 \cdot v = -v$ rather than $ v$, so (4a) fails. But (4b), (4c), (4d) hold, since in each case both sides have the same number of minus signs when rewritten using ordinary multiplication by scalars.
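A quick numerical sanity check, assuming (as in the exercise) that $ x \oplus y$ means $ x - y$; the specific test values are arbitrary:

```python
# Modified "addition" from the exercise: x (+) y = x - y.
def oplus(x, y):
    return x - y

# Not commutative: 1 (+) 2 = -1 but 2 (+) 1 = 1.
assert oplus(1, 2) != oplus(2, 1)
# Not associative: (1 (+) 2) (+) 3 = -4 but 1 (+) (2 (+) 3) = 2.
assert oplus(oplus(1, 2), 3) != oplus(1, oplus(2, 3))
# Still, x (+) 0 = x and x (+) x = 0 for every x.
for x in range(-5, 6):
    assert oplus(x, 0) == x
    assert oplus(x, x) == 0
```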



p. 39

Ex. 1. Just (b) is a subspace. (a) is not closed under multiplication by negative scalars; (c) is closed under neither addition nor multiplication by scalars; (d) is not closed under addition; (e) is not closed under multiplication by scalars, since multiplying by an irrational scalar can take a vector out of the set.



Ex. 2. (b), (d), and (e) are subspaces. This may seem surprising, but they work because operations on functions are defined pointwise. Saying $ f(-1) = 0$, for example, is somewhat like saying $ a_2 = 0$ for $ (a_1,a_2,a_3) \in \mathbb{R}^3$.

For (e), we need to quote the theorems that the sum of two continuous functions is continuous and that a continuous function times a constant is continuous.

(a) is not a subspace because the constant function 1 is in the set but twice it (the constant function 2) is not. (c) is not a subspace because the 0 function (constant 0) is not in it.



Ex. 3. Method #1 (outline): Write the first vector as a linear combination of the others with unknown coefficients, set up the resulting linear equations, and solve.

Method #2: Make the vectors the columns of a matrix $ M$, with the ``target'' vector as the last column, and row reduce to a matrix $ E$ in row-reduced echelon form. Since the linear relations between the columns don't change, you can check in $ E$ to see whether the last column is in the span of the preceding ones (and if so, with what coefficients), and the same will hold for columns of $ M$. Notice, though, that $ M$ and $ E$ are exactly the same as what you get in Method #1.
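Method #2 can be sketched as follows. The row-reduction helper uses exact arithmetic with `Fraction`; the three vectors are hypothetical, chosen for illustration rather than taken from the book:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form with exact arithmetic.
    Returns (E, pivot_columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n, pivots, r = len(M), len(M[0]), [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pr is None:
            continue                       # no pivot in this column
        M[r], M[pr] = M[pr], M[r]          # swap pivot row into place
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):                 # clear the rest of the column
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# Is (1,2,3) in the span of (1,0,1) and (0,1,1)?  Spanning vectors as
# columns, "target" vector as the last column:
M = [[1, 0, 1],
     [0, 1, 2],
     [1, 1, 3]]
E, pivots = rref(M)
last = len(M[0]) - 1
# The last column is not a pivot column, so the target IS in the span,
# with coefficients read off from that column of E.
assert last not in pivots
coeffs = [E[i][last] for i in range(len(pivots))]
assert coeffs == [1, 2]     # (1,2,3) = 1*(1,0,1) + 2*(0,1,1)
```

If the last column had been a pivot column, the target would not lie in the span of the preceding columns.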



Ex. 4. This is asking for a basis of the solution space of the homogeneous equations, which is the same thing as a basis of the null space of the matrix of coefficients. Row reduce, etc.



Ex. 5. Just (c) is a subspace. The set of matrices in (c) contains the zero matrix and is closed under multiplication by scalars. It is also closed under addition, since if $ A_1, A_2$ are in the set, we have $ (A_1 + A_2)B = A_1 B + A_2B = B A_1 + B A_2 = B(A_1 + A_2)$.

Notice that matrix multiplication is used in defining the subspace but is not used for operations on the vector space of matrices involved.

For (a), the zero matrix is not in the set. For (b), the sum of two noninvertible matrices could be invertible, e.g., $ \left[\begin{array}{rr}1&0\\ 0&0\end{array}\right] + \left[\begin{array}{rr}0&0\\ 0&1\end{array}\right] = \left[\begin{array}{rr}1&0\\ 0&1\end{array}\right]$. (Here it's important that $ n \geq 2$.) For (d), the set is not closed under multiplication by 2.
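The $ 2 \times 2$ counterexample in (b) is easy to confirm with determinants:

```python
# Determinant of a 2x2 matrix given as [[a, b], [c, d]].
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 0], [0, 0]]
B = [[0, 0], [0, 1]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

assert det(A) == 0 and det(B) == 0   # both summands are noninvertible
assert S == [[1, 0], [0, 1]]         # the sum is the identity matrix
assert det(S) == 1                   # which is certainly invertible
```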



p. 48

Ex. 1. If $ v$ and $ w$ are linearly dependent, there is a nontrivial linear relation $ rv + sw = 0$. Nontrivial means that $ r \neq 0$ or $ s \neq 0$ or both. If $ r \neq 0$ we can solve to get $ v = (-\frac sr) w$, so that $ v$ is a scalar multiple of $ w$. If $ s \neq 0$, a similar proof applies the other way around.



Ex. 2. Method: Make a matrix with these as either the rows or the columns and row reduce. If you get the identity matrix, so the rank is 4, then they are linearly independent; otherwise not.



Ex. 3. Method #1: Make a matrix with the four vectors as rows and row-reduce. The nonzero rows of the row-reduced echelon form are a basis for the row space.

Method #2: Make a matrix with the four vectors as columns and row-reduce. See which columns are pivot columns. The same-numbered columns of the original matrix are a basis for the column space.
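A sketch of Method #1 with hypothetical vectors (three instead of four, to keep the example small); `rref` is a straightforward exact row-reduction helper:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form with exact arithmetic.
    Returns (E, pivot_columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n, pivots, r = len(M), len(M[0]), [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# Put the vectors as rows and row-reduce; the second row is twice the
# first, so the row space should have dimension 2.
V = [[1, 2, 3],
     [2, 4, 6],
     [1, 1, 1]]
E, pivots = rref(V)
basis = [row for row in E if any(row)]   # nonzero rows of the rref
assert len(basis) == 2
assert basis == [[1, 0, -1], [0, 1, 2]]
```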



Ex. 4. If you were asked only to show that the vectors form a basis, you could do this like Ex. 2 above. Since you are asked also to express the standard basis vectors relative to this basis, you might as well do that first and then you will have shown that these vectors form a basis.

Naive method: Try $ e_1 = r \alpha_1 + s \alpha_2 + t \alpha_3$, which results in three equations in the three unknowns, which you can solve by row reduction. Then do the same for $ e_2$ and $ e_3$, each time getting new values for the coefficients.

Organized method: Make a matrix with these three vectors as the first three columns and $ e_1,e_2,e_3$ as the last three columns, so the matrix looks like $ [A\vert I]$. Row reduce. You should get a matrix that looks like $ [I\vert B]$. Since linear relations between columns are preserved, looking at each column of the $ B$ part as a linear combination of the first three columns tells you how to represent each of $ e_1,e_2,e_3$ as a linear combination of columns of $ A$. (Note. Actually $ B = A^{-1}$ in this situation.)
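A sketch of the organized method with a hypothetical $ 2 \times 2$ matrix (the exercise's $ 3 \times 3$ case works identically); `rref` is an exact row-reduction helper:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form with exact arithmetic.
    Returns (E, pivot_columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n, pivots, r = len(M), len(M[0]), [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# Form [A | I], row reduce to [I | B]; then B = A^{-1}, and column j of B
# expresses e_j in terms of the columns of A.
A = [[2, 1],
     [1, 1]]
n = len(A)
aug = [row + [1 if i == j else 0 for j in range(n)]
       for i, row in enumerate(A)]
E, pivots = rref(aug)
B = [row[n:] for row in E]
assert B == [[1, -1], [-1, 2]]
# Check A B = I, i.e., each column of B really does represent e_j.
for i in range(n):
    for j in range(n):
        assert sum(A[i][k] * B[k][j] for k in range(n)) == (i == j)
```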



Ex. 5. Take three vectors in a plane, such as $ (1,0,0)$, $ (0,1,0)$, $ (1,1,0)$.



Ex. 6. Thinking of the $ 2 \times 2$ matrices as $ \left[\begin{array}{rr}r&s\\ t&u\end{array}\right]$, $ W_1$ can be described by $ s = -r$ and $ W_2$ can be described by $ t = -r$.

(a) Both of these equations are preserved under addition of matrices and multiplying matrices by scalars. Also both contain the zero matrix. So $ W _ 1, W _ 2$ are subspaces of $ V$.

(b) Intuitively, each of $ W_1$ and $ W_2$ involves choosing three parameters freely, so their dimensions ought to be 3. More officially, a basis for $ W_1$ consists of the matrices you get by choosing one parameter to be 1 and the others 0: $ \left[\begin{array}{rr}1&-1\\ 0&0\end{array}\right]$, $ \left[\begin{array}{rr}0&0\\ 1&0\end{array}\right]$, and $ \left[\begin{array}{rr}0&0\\ 0&1\end{array}\right]$. Similarly, a basis for $ W_2$ consists of $ \left[\begin{array}{rr}1&0\\ -1&0\end{array}\right]$, $ \left[\begin{array}{rr}0&1\\ 0&0\end{array}\right]$, and $ \left[\begin{array}{rr}0&0\\ 0&1\end{array}\right]$.

Since each of $ W_1, W_2$ consists of linear combinations of the respective basis matrices, $ W_1 + W_2$ consists of linear combinations of all six, or really five since one is repeated. But the five are dependent--obviously, since we are in the space $ V$ of dimension 4. One way to determine the actual dimension is to use an isomorphism of $ V$ with $ \mathbb{R}^4$, let's say $ (r,s,t,u) \mapsto \left[\begin{array}{rr}r&s\\ t&u\end{array}\right]$. We are then asking about five 4-tuples. Make a matrix with the five as its rows or columns, row-reduce, and see what the rank is. You should get 4. Another way: since $ W_1$ has dimension 3 in a space $ V$ of dimension 4, if you just notice that the first basis vector of $ W_2$ is not in $ W_1$, then the sum must be strictly larger than $ W_1$, hence of dimension 4, so all of $ V$.

For $ W_1 \cap W_2$, go back to the equations in $ r,s,t,u$. We have $ s = -r = t$, while $ u$ varies freely, so the general form of a matrix in this subspace is $ \left[\begin{array}{rr}r&-r\\ -r&u\end{array}\right]$. A basis is $ \left[\begin{array}{rr}1&-1\\ -1&0\end{array}\right]$ and $ \left[\begin{array}{rr}0&0\\ 0&1\end{array}\right]$, so the space has dimension 2.

Another way would be to use the equation $ \dim W_1 + \dim W_2 = \dim(W_1 + W_2) + \dim (W_1 \cap W_2)$ and solve $ 3 + 3 = 4 + x$, getting $ x = 2$.
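The counts in this exercise can be double-checked under the isomorphism with $ \mathbb{R}^4$ described above, comparing ranks; `rref` is an exact row-reduction helper:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form with exact arithmetic.
    Returns (E, pivot_columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n, pivots, r = len(M), len(M[0]), [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# Bases of W1 (s = -r) and W2 (t = -r) as 4-tuples (r, s, t, u):
B1 = [[1, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
B2 = [[1, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
rank = lambda rows: len(rref(rows)[1])

d1, d2, dsum = rank(B1), rank(B2), rank(B1 + B2)
assert (d1, d2, dsum) == (3, 3, 4)   # dim(W1 + W2) = 4, so W1 + W2 = V
assert d1 + d2 - dsum == 2           # dim(W1 /\ W2) = 3 + 3 - 4 = 2
```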



Ex. 9. See the solution of D-1 below. With the isomorphism method you get vectors $ (1,1,0)$, $ (1,0,1)$, $ (0,1,1)$ in $ \mathbb{R}^ 3$. Put them as the rows of a matrix and row reduce; you get the identity matrix, so the rank is 3 and the original rows were linearly independent.



Ex. 12. A basis consists of the $ mn$ matrices each with one entry of 1 and the rest 0's. One way to see this is that if you use an isomorphism with $ \mathbb{R}^ {mn}$, these matrices correspond to the standard basis.



For Problem B-1: All are like $ P \Rightarrow Q$. For example, (g) says $ Q$ is a necessary condition for $ P$. This means that if $ P$ is true, then $ Q$, being necessary for $ P$, also has to be true. So $ P \Rightarrow Q$.



For Problem B-2: Each of (1), (4), (5) implies the others (and itself, for that matter). Each of (2), (3) implies the other (and itself). Each of (1), (4), (5) implies each of (2), (3).



For Problem B-3: Neither of (2), (3) implies any of (1), (4), (5); a counterexample in each case is $ x = -1$.


Kirby A. Baker 2001-10-24