
p. 15

Ex. 2: This is practice for complex numbers. Remember, $ i^2 = -1$, and the conjugate of $ a+bi$ means $ a-bi$. For $ 1/(1+i)$, multiply top and bottom by the conjugate of the bottom, in this case $ 1-i$. So $ 1/(1+i) = (1-i)/((1+i)(1-i)) = (1-i)/2 = {\frac 1 2}- {\frac 1 2}i$.
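As a quick check (not part of the original solution), Python's built-in complex type, where `1j` denotes $ i$, carries out the same conjugate trick:

```python
# Verify 1/(1+i) = 1/2 - (1/2)i two ways: directly, and by the
# multiply-by-the-conjugate method from the solution above.
z = 1 / (1 + 1j)
by_hand = (1 - 1j) / ((1 + 1j) * (1 - 1j))   # (1-i)/2
assert z == by_hand == 0.5 - 0.5j
```

Here the arithmetic happens to be exact in floating point, so the equality check is safe.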

$ A \rightsquigarrow \left[\begin{array}{cc}1&-i\\  0&2+2i\\  0&i\end{array}\right] \rightsquigarrow \left[\begin{array}{cc}1&0\\  0&1\\  0&0\end{array}\right]$. There are no nonpivot columns, so the solution space of $ A$ is just $ \{$0$ \}$.

Ex. 3: The zero matrix; $ \left[\begin{array}{cc}1&r\\  0&0\end{array}\right]$ for any $ r \in F$, $ \left[\begin{array}{cc}0&1\\  0&0\end{array}\right]$, and the identity matrix. The number of pivot columns in the respective cases is 0, 1, 1, 2.



p. 49

Ex. 10. Proof #1: If $ \alpha _ 1,\ldots, \alpha _ r$ are linearly dependent, one is a linear combination of the others, so we can delete it without changing the span. If they are still dependent, we can delete another one, and so on. When we get down to a spanning set that is not linearly dependent (a minimal spanning set), that's a basis!

Proof #2: Examine $ \alpha _ 1,\ldots, \alpha _ r$ in order. Save each one that is not a linear combination of the preceding ones you have saved, and discard the others. Since the discarded ones are linear combinations of others, omitting them doesn't change the span. Since each saved vector is not in the span of the previously saved ones, the saved vectors are linearly independent, by a lemma we have discussed. Since the saved vectors span and are linearly independent, they are a basis.
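The save-or-discard procedure of Proof #2 can be sketched in Python. This is an illustration over GF(2), not part of the original solution: vectors are encoded as bitmasks, and "is a combination of the saved ones" is tested by cancelling leading bits.

```python
# Greedy basis extraction from a spanning list, over GF(2).
# Each vector is an int whose bits are its coordinates.
def extract_basis(vectors):
    pivots = {}              # leading-bit position -> saved reduced vector
    basis = []
    for v in vectors:
        w = v
        for bit in sorted(pivots, reverse=True):
            if w >> bit & 1:
                w ^= pivots[bit]          # cancel that leading bit
        if w:                # not a combination of the saved ones: save it
            pivots[w.bit_length() - 1] = w
            basis.append(v)  # keep the original (unreduced) vector
    return basis

# 0b110 = 0b100 + 0b010 over GF(2), so it gets discarded:
assert extract_basis([0b100, 0b010, 0b110, 0b001]) == [0b100, 0b010, 0b001]
```

Processing pivot bits in decreasing order ensures a cancellation never re-disturbs a bit already handled.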

Ex. 13. Suppose when you did Ex. 9 before, you used the method that $ r(\alpha + \beta) + s (\beta + \gamma) + t (\gamma + \alpha) = 0 \Rightarrow r = s = t = 0$. This depended on solving simultaneous equations $ r+t=0, r+s=0, s+t=0$, and you found they have only the zero solution. However, when you do it over GF(2) you'll find that there is a nonzero solution, $ r=s=t=1$, so the three sums of vectors are linearly dependent.
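A brute-force check of both claims (a side illustration, not part of the solution): over GF(2) the all-ones triple works, while over the integers only the zero triple does.

```python
# Over GF(2): r = s = t = 1 satisfies r+t = 0, r+s = 0, s+t = 0 (mod 2).
r = s = t = 1
assert (r + t) % 2 == 0 and (r + s) % 2 == 0 and (s + t) % 2 == 0

# Over the integers (sampling a small range), the only solution is zero:
sols = [(r, s, t)
        for r in range(-2, 3) for s in range(-2, 3) for t in range(-2, 3)
        if r + t == 0 and r + s == 0 and s + t == 0]
assert sols == [(0, 0, 0)]
```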



p. 66, Ex. 3. This is the same as §7 on Handout H. To repeat the same method applied to the present problem: A system of homogeneous equations is described by a matrix of coefficients. What 4-tuples are eligible to be rows of this matrix? Such a 4-tuple $ (a,b,c,d)$ would have the property that it times each of the three given vectors is 0: $ \left[\begin{array}{rrrr}a&b&c&d\end{array}\right] \left[\begin{array}{r}-1\\  0\\  1\\  2\end{array}\right] = 0$, $ \left[\begin{array}{rrrr}a&b&c&d\end{array}\right] \left[\begin{array}{r}3\\  4\\  -2\\  5\end{array}\right] = 0$, $ \left[\begin{array}{rrrr}a&b&c&d\end{array}\right] \left[\begin{array}{r}1\\  4\\  0\\  9\end{array}\right] = 0$. This is the same as $ \left[\begin{array}{rrrr}a&b&c&d\end{array}\right]\left[\begin{array}{rrr}-1&3&1\\  0&4&4\\  1&-2&0\\  2&5&9\end{array}\right] = \left[\begin{array}{rrr}0&0&0\end{array}\right]$. Transposing, this is the same as $ \left[\begin{array}{rrrr}-1&0&1&2\\  3&4&-2&5\\  1&4&0&9\end{array}\right] \left[\begin{array}{r}a\\  b\\  c\\  d\end{array}\right] = \left[\begin{array}{r}0\\  0\\  0\end{array}\right]$. You know how to find a basis for the solution space; use that basis for the rows of your answer.
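The last step, finding a basis for the solution space of the transposed system, can be carried out with exact rational arithmetic. This is a sketch (my own helper functions, not from the handout) using Python's `fractions` module:

```python
from fractions import Fraction

def rref(M):
    """Return (row-reduced echelon form, list of pivot columns)."""
    M = [row[:] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # scale pivot row to 1
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def null_space(M):
    """One basis vector of {x : Mx = 0} per nonpivot (free) column."""
    R, pivots = rref(M)
    n = len(M[0])
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -R[r][free]
        basis.append(v)
    return basis

A = [[Fraction(x) for x in row]
     for row in [[-1, 0, 1, 2], [3, 4, -2, 5], [1, 4, 0, 9]]]
for v in null_space(A):
    # each basis vector really is orthogonal to all three given vectors
    assert all(sum(a * b for a, b in zip(row, v)) == 0 for row in A)
```

The two basis vectors it produces (the rank here is 2) can be scaled to clear denominators and used as the rows of the answer.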



For Problem G-1:

(a) For $ (4) \Rightarrow (1)$, suppose $ v _ 1,\ldots, v _ n$ are a minimal spanning set for $ V$. Since they do span $ V$, the only thing left to show is that they are linearly independent. We prove this by contradiction: If they were linearly dependent, we know that one of them, say $ v_i$, would be in the span of the others. But then $ v_i$ could be omitted from the list without changing the span of the list, so the original list would not have been minimal. This is a contradiction, so $ v _ 1,\ldots, v _ n$ must be linearly independent.

(b) For $ (5) \Rightarrow (1)$, suppose $ v _ 1,\ldots, v _ n$ is a maximal linearly independent list of vectors in $ V$. The only thing left to show is that they span $ V$. Given any $ w \in V$, consider the list $ v _ 1,\ldots, v _ n, w$. Since the original list was a maximal linearly independent set, the new list, being larger, must be linearly dependent. In the nontrivial linear relation, the coefficient of $ w$ must be nonzero, since a linear relation between $ v _ 1,\ldots, v _ n$ not involving $ w$ must be trivial. Then we can solve for $ w$ as a linear combination of $ v _ 1,\ldots, v _ n$, so $ w$ is in their span.



For Problem G-2: Let $ f:S \rightarrow T$. It has been mentioned that $ f(s)$ is called the ``image'' of $ s$ via $ f$. We can also use the terminology that the image of $ f$ itself means the set of all images of elements. For example, $ f$ is onto if its image is all of $ T$.

For the statement about left inverses, direction $ \Rightarrow $: Suppose $ f$ has a left inverse $ g$, so that $ g(f(s)) = s$ for all $ s \in S$. If $ f(s _ 1) = f(s _ 2)$, then $ g(f(s _ 1)) = g(f(s _ 2))$, which is the same as $ s _ 1 = s _ 2$. Therefore $ f$ is one-to-one.

Direction $ \Leftarrow $: If $ f$ is one-to-one, define $ g:T\rightarrow S$ as follows. Given any $ t \in T$, if $ t = f(s)$ for some $ s$ (which would be uniquely determined), let $ g(t) = s$; if $ t$ is not in the image of $ f$, then let $ g(t)$ be any element of $ S$. Then for all $ s \in S$, $ g(f(s)) = s$. (The values of $ g$ on the part of $ T$ outside the image of $ f$ never make a difference.)
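The construction of $ g$ can be sketched with small finite sets (the particular sets and function here are made-up examples, not from the problem):

```python
# f: S -> T is one-to-one but not onto (it misses 'c').
S = [1, 2, 3]
T = ['a', 'b', 'c', 'd']
f = {1: 'b', 2: 'd', 3: 'a'}

# Invert f on its image; off the image, g may take any value at all.
g = {t: s for s, t in f.items()}
for t in T:
    g.setdefault(t, S[0])        # arbitrary choice where t is not f(s)

assert all(g[f[s]] == s for s in S)    # g is a left inverse of f
```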

For the statement about right inverses, direction $ \Rightarrow $: Suppose $ f$ has a right inverse $ g$ so that $ f(g(t)) = t$ for all $ t \in T$. Then each $ t \in T$ is the image of some $ s \in S$, specifically $ s = g(t)$. Therefore $ f$ is onto.

Direction $ \Leftarrow $: If $ f$ is onto, define $ g:T\rightarrow S$ as follows. Given any $ t \in T$, choose any $ s \in S$ with $ t = f(s)$ (since there's at least one choice), and let $ g(t)$ be that $ s$. Then for any $ t \in T$ we have $ f(g(t)) = t$.
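Again a small made-up example (not from the problem) showing the choice of one preimage per element of $ T$:

```python
# f: S -> T is onto but not one-to-one.
S = [1, 2, 3, 4]
T = ['a', 'b']
f = {1: 'a', 2: 'b', 3: 'a', 4: 'b'}

# For each t, keep one s with f(s) = t; any single choice per t would do.
g = {}
for s in S:
    g[f[s]] = s                  # later s simply overwrite earlier choices

assert all(f[g[t]] == t for t in T)    # g is a right inverse of f
```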

Note. As in calculus, if $ h(s) = f(g(s))$ for all $ s$, we write $ h = f \circ g$ and say $ h$ is the composition of $ f$ and $ g$. We also can write $ \mathbf{1}_S$ or just $ \mathbf{1}$ for the ``identity function'' on $ S$, which means the function taking $ s \mapsto s$ for all $ s \in S$. Then $ g$ is a left inverse of $ f$ when $ g \circ f = \mathbf{1}$ (meaning $ \mathbf{1}_S$), and $ g$ is a right inverse of $ f$ when $ f \circ g = \mathbf{1}$ (this time meaning $ \mathbf{1}_T$). This notation makes function inverses look more like multiplicative inverses.



For Problem G-3:

(a) As with other problems involving arithmetic progressions, the row-reduced echelon form also has arithmetic progressions: $ \left[\begin{array}{ccccc}1&0&-1&\cdots&-8\\  0&1&2&\cdots&9\\  0&0&0&\cdots&0\\  \cdots &\cdots &\cdots &\cdots &\cdots \\  0&0&0&\cdots&0\end{array}\right]$. So the rank is 2.

(b) Rows are proportional and the matrix is not the zero matrix, so the rank is 1.



For Problem G-4:

(a) $ F^2 = \{(0,0),(0,1),(1,0),(1,1)\}$, with 0$ = (0,0)$. The only subspace of dimension 0 is $ \{$0$ \}$. The only subspace of dimension 2 is $ F^2$ itself. To get a 1-dimensional subspace, we can take any nonzero vector $ v$ and find the subspace it spans, specifically, each scalar times $ v$. But the only scalars are 0 and 1, so we get $ \{$0$ ,v\}$. There are three nonzero vectors to use, as listed above, so we get three 1-dimensional subspaces.
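As a side check (not part of the solution), a few lines of Python enumerate the spans of single vectors in $ F^2$:

```python
from itertools import product

# GF(2)^2: the span of v is {0, v}, since the only scalars are 0 and 1.
F2 = (0, 1)
vectors = list(product(F2, repeat=2))
spans = {frozenset(tuple(r * x % 2 for x in v) for r in F2)
         for v in vectors if v != (0, 0)}
assert len(spans) == 3           # three 1-dimensional subspaces
```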

Note: If we were using GF(3) as the field, the 1-dimensional subspace generated by a vector $ v$ would be $ \{$0$ ,v,2v\}$, so that two nonzero vectors would participate in each 1-dimensional subspace.

(b) $ F^3 = \{(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1) \}$, with 0$ = (0,0,0)$. As in (a), we get one 0-dimensional subspace and one 3-dimensional subspace, and just as in (a) the 1-dimensional subspaces are of the form $ \{$0$ ,v\}$ where $ v$ is one of the seven nonzero vectors. Each 2-dimensional subspace $ W$ is the span of two nonzero vectors $ v,w$ and has four elements: $ W = \{rv + sw: r,s \in \{0,1\}\}$. There are seven possibilities:

\begin{displaymath}
\begin{array}{rrrrrr}
\{ & (0,0,0){,} & (0,1,0){,} & (1,0,0){,} & (1,1,0) & \} \\
\{ & (0,0,0){,} & (0,0,1){,} & (1,0,0){,} & (1,0,1) & \} \\
\{ & (0,0,0){,} & (0,0,1){,} & (0,1,0){,} & (0,1,1) & \} \\
\{ & (0,0,0){,} & (1,0,0){,} & (0,1,1){,} & (1,1,1) & \} \\
\{ & (0,0,0){,} & (0,1,0){,} & (1,0,1){,} & (1,1,1) & \} \\
\{ & (0,0,0){,} & (0,0,1){,} & (1,1,0){,} & (1,1,1) & \} \\
\{ & (0,0,0){,} & (0,1,1){,} & (1,1,0){,} & (1,0,1) & \} \\
\end{array} \end{displaymath}

In each case, any two of the three nonzero vectors will do for $ v$ and $ w$.
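The count of seven can be verified by enumeration (again a side check, not part of the solution). Over GF(2), any two *distinct* nonzero vectors are automatically independent, since the only scalar multiples of $ v$ are 0 and $ v$:

```python
from itertools import combinations, product

F2 = (0, 1)
nonzero = [v for v in product(F2, repeat=3) if v != (0, 0, 0)]

def span(v, w):
    # all rv + sw with r, s in {0, 1}, coordinatewise mod 2
    return frozenset(tuple((r * a + s * b) % 2 for a, b in zip(v, w))
                     for r in F2 for s in F2)

planes = {span(v, w) for v, w in combinations(nonzero, 2)}
assert len(planes) == 7
assert all(len(W) == 4 for W in planes)
```

Each plane arises from three different pairs $ \{v,w\}$, which is why $ \binom{7}{2} = 21$ pairs yield only 7 subspaces.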



For Problem G-5: The addition and negation properties are the same as for fields. Multiplication is associative and there is a neutral element $ 1 = I$, but multiplication is not commutative (since $ AB \neq BA$ if $ A = \left[\begin{array}{rr}0&1\\  0&0\end{array}\right]$ and $ B = A^t$, for example), and nonzero elements might not have multiplicative inverses (same matrices as examples). The distributive law still holds.
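The two failures can be checked directly with the matrices named above (a quick sketch; `matmul` is my own helper, not course notation):

```python
# A = [[0,1],[0,0]] and B = A^t witness both failures at once.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]                     # B = A transpose

assert matmul(A, B) != matmul(B, A)      # multiplication not commutative
assert matmul(A, A) == [[0, 0], [0, 0]]  # A is nonzero yet A*A = 0, so A
                                         # can have no multiplicative inverse
```

(If $ A^{-1}$ existed, then $ A = A^{-1}A^2 = A^{-1}0 = 0$, a contradiction.)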

Exception: For $ n \times n$ matrices with $ n = 1$, we do get a field, isomorphic to $ F$.

Note. The laws that matrices do satisfy define what is called a ``ring with 1''. So a field is really a commutative ring with 1 in which each nonzero element has a multiplicative inverse. Other examples of commutative rings with 1 are $ \mathbb{Z}$ and the set of all polynomials over $ \mathbb{R}$, of all degrees.



For Problem G-6: Following the suggestion, let $ \phi(a + bi) = \left[\begin{array}{rr}a&-b\\  b&a\end{array}\right]$, for $ a,b \in \mathbb{R}$. Then $ \phi((a + bi) + (c + di)) = \phi((a+c) + (b + d)i) = \left[\begin{array}{rr}a+c&-(b+d)\\  b+d&a+c\end{array}\right] = \left[\begin{array}{rr}a&-b\\  b&a\end{array}\right] + \left[\begin{array}{rr}c&-d\\  d&c\end{array}\right] = \phi(a+bi) + \phi(c + di)$, so addition is preserved. Also $ \phi((a+bi)(c+di)) = \phi((ac-bd) + (bc+ad)i) = \left[\begin{array}{rr}ac-bd&-bc-ad\\  bc+ad&ac-bd\end{array}\right]$, while $ \phi(a+bi)\phi(c+di) = \left[\begin{array}{rr}a&-b\\  b&a\end{array}\right] \left[\begin{array}{rr}c&-d\\  d&c\end{array}\right] = \left[\begin{array}{rr}ac-bd&-bc-ad\\  bc+ad&ac-bd\end{array}\right]$ (the same), so multiplication is preserved. $ \phi$ is one-to-one since $ \phi(a+bi) = \phi(c + di)$ says $ \left[\begin{array}{rr}a&-b\\  b&a\end{array}\right] = \left[\begin{array}{rr}c&-d\\  d&c\end{array}\right]$, so $ a = c$ and $ b = d$ and so $ a+bi = c+di$. (That was almost too obvious to be written out.) $ \phi$ is onto from the definition of $ S$.
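A numerical spot-check of the two homomorphism identities (an illustration only; `madd`/`mmul` are my own helpers):

```python
# phi(a+bi) = [[a, -b], [b, a]]; check it preserves + and * on a sample.
def phi(z):
    return [[z.real, -z.imag], [z.imag, z.real]]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, -1 + 4j
assert phi(z + w) == madd(phi(z), phi(w))    # addition preserved
assert phi(z * w) == mmul(phi(z), phi(w))    # multiplication preserved
```

The values chosen are integer-valued, so the floating-point comparisons are exact here.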



For Problem G-7: See the notes for 1-M, on the web and handed out. In $ \mathbb{Z}_ {13}$, $ 1 \cdot 1 = 1$, $ 2 \cdot 7 = 1$, $ 3 \cdot 9 = 1$, $ 4 \cdot 10 = 1$, $ 5 \cdot 8 = 1$, $ 6 \cdot 11 = 1$, $ 12 \cdot 12 = 1$, so 1 is its own inverse, 2 and 7 are inverses of each other, etc.
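The whole inverse table can be recovered by brute force, and cross-checked against Python's built-in modular inverse (a side check; `pow` with exponent $-1$ and a modulus needs Python 3.8+):

```python
# Multiplicative inverses in Z_13: for each x, find y with x*y = 1 (mod 13).
inverses = {x: next(y for y in range(1, 13) if x * y % 13 == 1)
            for x in range(1, 13)}

assert inverses[2] == 7 and inverses[7] == 2   # 2 and 7 invert each other
assert inverses[12] == 12                      # 12 = -1 is its own inverse
assert all(inverses[x] == pow(x, -1, 13) for x in inverses)
```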



For Problem G-8: The intention of this problem was to do this in an ad-hoc way, just to get a sense of how tricky it is to do. An organized solution is in Problem J-8.



For Problem G-9: Again, the intention of this problem was just to get a sense of the difficulty. See Problem J-5.


Kirby A. Baker 2001-10-24