


For Problem J-1: In §2, row-reduce $ M$ to get $ E = \left[\begin{array}{ccccc}1&2&0&-1&0\\  0&0&1&3&0\\  0&0&0&0&1\end{array}\right]$, as shown in §3. A basis for the row space consists of the three nonzero rows of $ E$.

In §5, make a matrix $ M$ whose columns are the given vectors. The linear relations between the columns are given by the null space, so the possible coefficients $ (r,s,t,u,v)$ are linear combinations of the basis vectors from §3, namely $ (-2,1,0,0,0)$ and $ (1,0,-3,1,0)$ (there they were columns, but here we may write them as rows).
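As a quick sanity check (a Python sketch using the matrix $ E$ given above), we can verify that both coefficient vectors are killed by $ E$, hence by $ M$:

```python
# Row-reduced matrix E from above (same null space as M).
E = [[1, 2, 0, -1, 0],
     [0, 0, 1, 3, 0],
     [0, 0, 0, 0, 1]]

# The two null-space basis vectors read off from the free columns of E.
null_basis = [(-2, 1, 0, 0, 0), (1, 0, -3, 1, 0)]

for v in null_basis:
    image = [sum(e * x for e, x in zip(row, v)) for row in E]
    print(v, "->", image)  # each image is the zero vector [0, 0, 0]
```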

In §7, following the explanation, we make $ A$ with the given vectors as columns and find a basis for the null space of $ A^t$, which is the same as the matrix in §4 (called $ M$ there, but that's a different $ M$ from this problem). Then we transpose back. The basis is the same as shown in §3, so our answer is $ M = \left[\begin{array}{rrrrr}-2&1&0&0&0\\  1&0&-3&1&0\end{array}\right]$.



For Problem J-2:

For §8: Following the suggestion, let's put the two lists together as columns of a matrix. Then the problem is like §6, so one basis consists of the first, third, and fifth of these vectors.

If you find the basis by instead making a matrix with these five vectors as rows, when you row reduce you get the identity matrix. This means that the rows generate all of $ \mathbb{R}^ 3$, and the basis is the standard basis.

For §9: As mentioned, the idea is that when you throw two sets of equations together, the set of solutions in common is the intersection of their respective solution sets.

For the vectors $ (1,2,1), (2,4,2), (1,3,3)$, putting them as the rows of a matrix and looking for a basis for the null space we get a single basis vector. Any scaled version of it will do; the simplest seems to be $ (3,-2,1)$. We can check that this vector is indeed in the null space.

For the vectors $ (2,7,8), (3,8,8)$, putting them as the rows of a matrix we get a basis vector proportional to $ (-8,8,-5)$, so let's use that one. We take our two null-space bases together as the rows of a matrix, whose null space should be the intersection we desire. A basis for the null space is a single vector proportional to $ (2,7,8)$.
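These null-space claims are easy to verify with dot products; here is a quick Python check using the vectors above:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w1_rows = [(1, 2, 1), (2, 4, 2), (1, 3, 3)]   # spanning set for W_1
w2_rows = [(2, 7, 8), (3, 8, 8)]              # spanning set for W_2

n1 = (3, -2, 1)    # null-space basis vector for the W_1 rows
n2 = (-8, 8, -5)   # null-space basis vector for the W_2 rows

assert all(dot(r, n1) == 0 for r in w1_rows)
assert all(dot(r, n2) == 0 for r in w2_rows)

# The intersection vector is orthogonal to both normal vectors:
v = (2, 7, 8)
assert dot(v, n1) == 0 and dot(v, n2) == 0
```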

Although the problem doesn't ask us to, we could check that our answer is in the span of both of the original spanning sets. For $ W _ 1$ we get $ (2,7,8) = - (1,2,1) + 3(1,3,3)$. For $ W _ 2$, $ (2,7,8)$ is already one of the spanning vectors in the second subspace.



For Problem J-3:

(a) For uniformity, let's write all vectors as column vectors and then for a row vector write the transpose of a column vector. Direction `` $ \Leftarrow $'': If $ M = v w ^ t$ then row $ i$ is the $ i$-th entry of $ v$ times the vector $ w ^ t$. Since all rows are proportional, the row space is spanned by $ w ^ t$ alone. Therefore the rank of $ M$ is 1.

Direction `` $ \Rightarrow $'': Since $ M$ is nonzero it has some row that is a nonzero row vector; call it $ v ^ t$. Since $ v ^ t$ is a nonzero vector in a 1-dimensional space (the row space), it spans the row space and every vector in the space is a scalar times $ v ^ t$, in fact a unique scalar since $ v ^ t$ is nonzero. Let $ r _ i$ be the scalar for row $ i$ and let $ w = (r _ 1,\ldots, r _ m) ^ t$, where $ m$ is the number of rows of $ M$. Then $ M = w v ^ t$.
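The `` $ \Leftarrow $'' direction can be made concrete with a small sketch: form the outer product of two vectors and observe that every row of the result is a scalar multiple of $ w^t$ (the example vectors are made up for illustration):

```python
# Outer product M = v w^t: row i of M is v[i] times the row vector w^t.
v = [2, -1, 3]        # column vector, m = 3 entries
w = [1, 4, 0, 2]      # w^t is a row vector

M = [[vi * wj for wj in w] for vi in v]   # M = v w^t

# Every row is proportional to w^t, so the row space is 1-dimensional:
for i, row in enumerate(M):
    assert row == [v[i] * wj for wj in w]
```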

(b) The key observation is that elementary row operations preserve the property that the rows are arithmetic progressions. You can check this directly, but a nicer way is to point out that the differences of consecutive columns are all equal; this is a linear relation between the columns and so is preserved by row operations.

What matrices in row-reduced echelon form have the property that their rows are in arithmetic progression? One possible row is $ 1,0,-1,-2,\ldots $. Another is $ 0,1,2,3,\ldots $. Also possible is the zero row $ 0,0,\ldots $. But that's all: if the rank were 3 or more, the first row would have zeros in the second and third pivot columns, and an arithmetic progression that vanishes in two different positions is zero everywhere, contradicting the leading 1. So the rank can't be more than 2.
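The column-difference observation from (b) is easy to check numerically; here is a small sketch (the example matrix, with each row an arithmetic progression, is made up):

```python
# Each row is an arithmetic progression with its own common difference.
M = [[1, 3, 5, 7],
     [4, 4, 4, 4],
     [0, 5, 10, 15]]

cols = list(zip(*M))  # columns of M as tuples

# Differences of consecutive columns are all equal -- a linear relation
# among the columns, which row operations preserve.
diffs = [tuple(a - b for a, b in zip(cols[j + 1], cols[j]))
         for j in range(len(cols) - 1)]
assert all(d == diffs[0] for d in diffs)
```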



For Problem J-4: (a) In this problem there are two numberings--the numbers used to name the fingers and the numbers of the terms in the sequence. The sequence repeats every 8, so we can work modulo 8. Since 8 divides 1,000,000 evenly, it's tempting to say the 0-th term is the same as the millionth, but since there was no 0-th term in this problem we look at the 8th term, which is finger number 2 (index finger).

(b) This problem really asks what number from 0 to 9 is congruent to $ 7 ^ {1,000,000}$ modulo 10. Working in base 10, we simply ignore all digits but the last. $ 7^2$ ends in 9, $ 7^4$ ends the same as $ 9^2$, which ends in 1, and since $ 7^{1,000,000} = (7^4)^{250,000} \equiv 1^{250,000} \pmod {10}$, the answer is 1.

(c) See answers to (a) and (b).
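Both computations are one-liners in Python (three-argument `pow` does fast modular exponentiation):

```python
# (a) The finger sequence repeats every 8, and 1,000,000 is divisible by 8,
# so the millionth term matches the 8th term of the cycle.
print(1_000_000 % 8)          # 0

# (b) The last digit of 7^1,000,000 is its residue mod 10.
print(pow(7, 1_000_000, 10))  # 1
```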



For Problem J-5:

(a) We know $ V \cong F^2$, so $ V$ has the same number of elements as $ F^2$, namely 4. In general, if $ F$ is the 2-element field then $ F^n$ will have $ 2^n$ elements.

(b) For any field $ F$, if $ W_1, W_2$ are distinct 2-dimensional subspaces of $ F^3$, then $ W _ 1 + W _ 2$ is larger than either, so must be the whole space, so is of dimension 3. Then the equation $ \dim(W _ 1 + W _ 2) + \dim(W _ 1 \cap W _ 2) = \dim W _ 1 + \dim W _ 2$ becomes $ 3 + ? = 2 + 2$, so the intersection must have dimension 1.

(c) If $ W _ 1$ and $ W _ 2$ are distinct 1-dimensional subspaces, then their intersection is $ \{0\}$, so the equation $ \dim(W _ 1 + W _ 2) + \dim(W _ 1 \cap W _ 2) = \dim W _ 1 + \dim W _ 2$ shows that their sum has dimension 2. Therefore $ W _ 1$ and $ W _ 2$ are contained in just one 2-dimensional subspace.

(d) As suggested, let the 1-dimensional subspaces be the plants and the 2-dimensional subspaces determine blocks (so the block for $ W$ of dimension 2 consists of the 1-dimensional subspaces contained in $ W$). Then by (c), any two plants have exactly one block in common, and by (b), any two blocks have exactly one plant in common. This much would work over any field. For GF(2), the solution to Problem G-4 shows precisely how to make the actual design. For each block we get three plants in it, since there are three 1-dimensional subspaces in each 2-dimensional vector space over GF(2).
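The GF(2) design can be enumerated directly; here is a hedged sketch in Python (over GF(2), each nonzero vector spans its own 1-dimensional subspace, so the 7 points are just the nonzero vectors of GF(2)$^3$, and the 7 blocks are the 2-dimensional subspaces, i.e. kernels of nonzero linear functionals):

```python
from itertools import product

# Points: the 7 nonzero vectors of GF(2)^3.
points = [p for p in product((0, 1), repeat=3) if any(p)]

def dot2(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

# Blocks: for each nonzero w, the nonzero vectors of {v : v . w = 0 (mod 2)}.
blocks = [frozenset(p for p in points if dot2(p, w) == 0) for w in points]

assert all(len(b) == 3 for b in blocks)  # three plants per block
# Any two distinct blocks meet in exactly one plant (part (b)):
assert all(len(b1 & b2) == 1 for b1 in blocks for b2 in blocks if b1 != b2)
# Any two distinct plants share exactly one block (part (c)):
assert all(sum(1 for b in blocks if p in b and q in b) == 1
           for p in points for q in points if p != q)
```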



For Problem J-6:

(a) $ -x = x$ for all $ x \in F$. For addition and multiplication we get

\begin{displaymath}
\begin{array}{r\vert rrrr}
+ & 0 & 1 & \alpha & \beta \\  \hline
0 & 0 & 1 & \alpha & \beta \\
1 & 1 & 0 & \beta & \alpha \\
\alpha & \alpha & \beta & 0 & 1 \\
\beta & \beta & \alpha & 1 & 0 \\
\end{array}\end{displaymath} and $ \begin{array}{r\vert rrrr}
\cdot & 0 & 1 & \alpha & \beta\\  \hline
0 & 0 & 0 & 0 & 0\\
1 & 0 & 1 & \alpha & \beta\\
\alpha & 0 & \alpha & \beta & 1\\
\beta & 0 & \beta & 1 & \alpha\\
\end{array}$

Therefore $ F$ is closed under addition and multiplication. It does turn out that multiplication is commutative, even though for other $ 2 \times 2$ matrices it might not be.
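We can double-check the tables by modeling $ F$ in Python, writing $ a + b\alpha$ as a pair of bits $ (a,b)$ with $ \alpha^2 = \alpha + 1$ (an equivalent description of the same 4-element field; the $ 2 \times 2$ matrix model would work just as well):

```python
# Elements a + b*alpha as bit pairs (a, b); alpha^2 = alpha + 1 over GF(2).
ZERO, ONE, ALPHA, BETA = (0, 0), (1, 0), (0, 1), (1, 1)
elems = [ZERO, ONE, ALPHA, BETA]

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*(alpha + 1)
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

# Spot-check entries of the tables above, and commutativity:
assert add(ALPHA, BETA) == ONE and mul(ALPHA, ALPHA) == BETA
assert mul(ALPHA, BETA) == ONE and mul(BETA, BETA) == ALPHA
assert all(mul(x, y) == mul(y, x) for x in elems for y in elems)
```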

(b) To see that $ F$ is a field: Additively, $ F$ is a vector subspace of the vector space of $ 2 \times 2$ matrices over GF(2), so the laws involving addition, negation, and 0 hold. Multiplicatively, the associative law is true for matrices in general, $ F$ is commutative as we saw, and the multiplication table shows that each nonzero element of $ F$ has a multiplicative inverse. The distributive law is true for matrices in general.

(c) The characteristic is 2, since $ 1 + 1 = 0$.

(d) Yes, $ \{0,1\} \subseteq F$ has exactly the same operation tables as $ Z_2$.



For Problem J-7: In the addition table, consider the column for any element $ c$ and rows for elements $ r _ 1$ and $ r _ 2$. If the corresponding two entries in the column are the same, that says $ r _ 1 + c = r _ 2 + c$. We are tempted to cancel the $ c$'s, and in fact that is ok since fields have additive inverses. Then $ r _ 1 = r _ 2$. This shows that two different rows cannot have equal entries in a column. Since the operation is commutative, each row also has distinct entries. Thus the table is a Latin square.

For the multiplication table (taken over the nonzero elements, since the row and column of $ 0$ are constant), again we are asking whether $ r _ 1 c = r _ 2 c \Rightarrow r _ 1 = r _ 2$. The answer is yes, since in a field nonzero elements have multiplicative inverses and so we can cancel the $ c$. Again the operation is commutative, so the rows also have distinct entries. Thus the table is a Latin square.
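For a concrete instance, here is a check over GF(5) $ = Z_5$ (any finite field would do):

```python
n = 5  # GF(5): integers mod 5

def is_latin(square):
    # A Latin square has no repeats in any row or any column.
    return (all(len(set(row)) == len(row) for row in square) and
            all(len(set(col)) == len(col) for col in zip(*square)))

add_table = [[(i + j) % n for j in range(n)] for i in range(n)]
# Multiplication table over the nonzero elements only:
mul_table = [[(i * j) % n for j in range(1, n)] for i in range(1, n)]

assert is_latin(add_table)
assert is_latin(mul_table)
```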



For Problem J-8: (a) There are two things to show: That each table $ T(r)$ is a Latin square and that any two tables are orthogonal.

To see that each column of $ T(r)$ has distinct entries, look at an arbitrary column $ j$ and rows $ i$, $ i'$. Suppose that $ T(r) _ {ij} = T(r) _ {i'j}$. This says $ x _ i + r x _ j = x _ {i'} + r x _ j$. Canceling the common term we get $ x _ i = x _ {i'}$, so there is no repeated entry in the column.

We can't just say ``similarly,...'' for the rows, since the definition of $ T(r)$ is not symmetrical for rows and columns! So look at an arbitrary row $ i$ and columns $ j$, $ j'$. Suppose that $ T(r) _ {ij} = T(r) _ {ij'}$. This says $ x _ i + r x _ j = x _ i + r x _ {j'}$. Canceling the common term we get $ r x _ j = r x _ {j'}$. Since $ r \neq 0$ we can also cancel $ r$ and we get $ x _ j = x _ {j'}$. Therefore there is no repeated entry in the row. This finishes the proof that $ T(r)$ is a Latin square, for any $ r$.

Now consider two squares $ T(r)$ and $ T(r')$ with $ r \neq r'$. Are they orthogonal? This means that there are no two positions with the same pair of entries. In other words, if $ (T(r)_ {ij},T(r') _ {ij}) = (T(r)_ {i'j'},T(r') _ {i'j'})$, must $ i=i'$ and $ j = j'$? Let's see: This says \begin{displaymath}\left \{
\begin{array}{ccccccc}
x _ i &+& r x _ j &=& x _ {i'} &+& r x _ {j'}\\
x _ i &+& r' x _ j &=& x _ {i'} &+& r' x _ {j'}\\
\end{array}\right .\end{displaymath}

Putting everything on the left in each equation we get \begin{displaymath}\left \{
\begin{array}{ccc}
(x _ i - x _ {i'}) &+& r(x _ j - x _ {j'}) = 0\\
(x _ i - x _ {i'}) &+& r'(x _ j - x _ {j'}) = 0\\
\end{array}\right .\end{displaymath}

Subtracting and simplifying, we get $ (r - r')(x _ j - x _ {j'}) = 0$. Since $ r - r' \neq 0$, we can cancel that factor and get $ x _ j - x _ {j'} = 0$, so $ x _ j = x _ {j'}$. Then in the first equation of the last pair we get $ x _ i - x _ {i'} = 0$, so that $ x _ i = x _ {i'}$, and we are done.
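As a concrete sketch over GF(5) $ = Z_5$ (taking $ x_i = i$), we can build the four squares $ T(r)$ and verify that they are Latin and pairwise orthogonal:

```python
n = 5  # work over GF(5) = integers mod 5, with x_i = i

def T(r):
    # T(r)_{ij} = x_i + r * x_j
    return [[(i + r * j) % n for j in range(n)] for i in range(n)]

squares = [T(r) for r in range(1, n)]  # r = 1, 2, 3, 4

def orthogonal(A, B):
    # Orthogonal: superimposing A and B gives all n^2 distinct pairs.
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

assert all(orthogonal(A, B)
           for a, A in enumerate(squares) for B in squares[a + 1:])
```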

(b) Using the order GF$ (4) = \{0,1,\alpha,\beta\}$, the tables are

$ \left[\begin{array}{cccc}0&1&\alpha&\beta\\  1&0&\beta&\alpha\\  \alpha&\beta&0&1\\  \beta&\alpha&1&0\end{array}\right]$ $ \left[\begin{array}{cccc}0&\alpha&\beta&1\\  1&\beta&\alpha&0\\  \alpha&0&1&\beta\\  \beta&1&0&\alpha\end{array}\right]$ $ \left[\begin{array}{cccc}0&\beta&1&\alpha\\  1&\alpha&0&\beta\\  \alpha&1&\beta&0\\  \beta&0&\alpha&1\end{array}\right]$

Using the equivalences suggested in the problem and superimposing the tables, we get the cards

ASr 2Ds 3Cg 4Hb
2Hg ACb 4Dr 3Ss
3Db 4Sg AHs 2Cr
4Cs 3Hr 2Sb ADg

Here colors are lower case; S is spades and s is silver.


Kirby A. Baker 2001-10-24