p. 15
Ex. 2: This is practice for complex numbers. Remember that $i^2 = -1$, and the conjugate of $a+bi$ means $a-bi$. To simplify a quotient of complex numbers, multiply top and bottom by the conjugate of the bottom; the bottom then becomes a real number, and the result can be written in the standard form $a+bi$.
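A worked example of this conjugate trick, with numbers chosen here purely for illustration (not necessarily those of the exercise):
$$\frac{1}{2+3i} = \frac{1}{2+3i}\cdot\frac{2-3i}{2-3i} = \frac{2-3i}{4+9} = \frac{2}{13} - \frac{3}{13}\,i.$$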
There are no nonpivot columns, so the solution space of the corresponding homogeneous system is just $\{\mathbf{0}\}$.
Ex. 3: The zero matrix; $\begin{pmatrix} 1 & a \\ 0 & 0 \end{pmatrix}$ for any $a$; $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$; and the identity matrix. The number of
pivot columns in the respective cases is 0, 1, 1, 2.
p. 49
Ex. 10. Proof #1: If $v_1, \dots, v_n$ are linearly dependent, one of them is a linear combination of the others, so we can delete it without changing the span. If the remaining vectors are still dependent, we can delete another one, and so on. When we get down to a spanning set that is not linearly dependent (a minimal spanning set), that's a basis!
Proof #2: Examine $v_1, \dots, v_n$ in order. Save each one that is not a linear combination of the preceding ones you have saved, and discard the others. Since the discarded ones are linear combinations of the others, omitting them doesn't change the span. Since, of the ones you have saved, each is not in the span of the preceding ones, the saved vectors are linearly independent, by a lemma we have discussed. Since the saved vectors span and are linearly independent, they are a basis.
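Proof #2 is effectively an algorithm. Here is a small computational sketch (my own, not from the handout), using numpy's rank function as a stand-in for the "is it a linear combination of the saved vectors?" test:

    import numpy as np

    def greedy_basis(vectors, tol=1e-10):
        # Scan the vectors in order; keep each one that is not a linear
        # combination of the vectors already kept (the procedure of Proof #2).
        kept = []
        for v in vectors:
            trial = np.array(list(kept) + [v], dtype=float)
            # v is genuinely new exactly when it raises the rank of the kept list.
            if np.linalg.matrix_rank(trial, tol=tol) > len(kept):
                kept.append(v)
        return kept

    # Four vectors in R^3 spanning a 2-dimensional subspace:
    print(greedy_basis([(1, 0, 1), (2, 0, 2), (0, 1, 1), (1, 1, 2)]))
    # -> [(1, 0, 1), (0, 1, 1)]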
Ex. 13. Suppose that when you did Ex. 9 before, you used the method of checking whether some linear combination of the vectors equals zero with coefficients not all zero. This depended on solving a system of simultaneous equations in the coefficients, and you found they have only the zero solution. However, when you do it over GF(2) you'll find that there is a nonzero solution, so the three sums of vectors are linearly dependent.
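The specific vectors of Ex. 9 aren't reproduced here, but the phenomenon is easy to demonstrate with a made-up example: take the standard basis of $\mathbb{R}^3$ and form the three sums $e_1+e_2$, $e_2+e_3$, $e_3+e_1$.

    import numpy as np

    # Hypothetical example: the three sums e1+e2, e2+e3, e3+e1 of the standard basis.
    sums = np.array([[1, 1, 0],
                     [0, 1, 1],
                     [1, 0, 1]])

    print(np.linalg.matrix_rank(sums))   # 3: independent over the rationals
    # Over GF(2) the rows add up to zero, so (1, 1, 1) is a nonzero solution
    # of the coefficient equations and the rows are linearly dependent.
    print(sums.sum(axis=0) % 2)          # [0 0 0]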
p. 66, Ex. 3. This is the same as §7 on Handout H. To repeat the same method applied to the present problem: A system of homogeneous equations is described by a matrix of coefficients. What 4-tuples are eligible to be rows of this matrix? Such a 4-tuple $(a, b, c, d)$ would have the property that its dot product with each of the three given vectors is 0. Collecting the three given vectors as the rows of a matrix $V$, this says $(a, b, c, d)\,V^{T} = (0, 0, 0)$. Transposing, this is the same as $V\,(a, b, c, d)^{T} = \mathbf{0}$. You know how to find a basis for the solution space; use that basis for the rows of your answer.
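As a computational check of this method (with made-up stand-ins for the three given vectors, since they aren't reproduced here), a computer algebra system can find the basis of the solution space:

    from sympy import Matrix

    # Rows of V are hypothetical stand-ins for the three given 4-component vectors.
    V = Matrix([[1, 0, 2, 1],
                [0, 1, 1, 1],
                [1, 1, 3, 2]])

    # The eligible rows (a, b, c, d) are exactly the solutions of V*(a,b,c,d)^T = 0.
    for b in V.nullspace():
        print(b.T)   # each of these can be used as a row of the answer matrix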
For Problem G-1:
(a) For this part, suppose $v_1, \dots, v_n$ are a minimal spanning set for $V$. Since they do span $V$, the only thing left to show is that they are linearly independent. We prove this by contradiction: If they were linearly dependent, we know that one of them, say $v_i$, would be in the span of the others. But then $v_i$ could be omitted from the list without changing the span of the list, so the original list would not have been minimal. This is a contradiction, so $v_1, \dots, v_n$ must be linearly independent.
(b) For this part, suppose $v_1, \dots, v_n$ is a maximal linearly independent list of vectors in $V$. The only thing left to show is that they span $V$. Given any $w \in V$, consider the list $v_1, \dots, v_n, w$. Since the original list was a maximal linearly independent set, the new list, being larger, must be linearly dependent. In the nontrivial linear relation, the coefficient of $w$ must be nonzero, since a linear relation among $v_1, \dots, v_n$ not involving $w$ must be trivial. Then we can solve for $w$ as a linear combination of $v_1, \dots, v_n$, so $w$ is in their span.
For Problem G-2:
Let $f: X \to Y$. It has been mentioned that $f(x)$ is called the ``image'' of $x$ via $f$. We can also use the terminology that the image of $f$ itself means the set of all images of elements. For example, $f$ is onto if its image is all of $Y$.
For the statement about left inverses, direction $\Rightarrow$: Suppose $f$ has a left inverse $g$, so that $g(f(x)) = x$ for all $x \in X$. If $f(x_1) = f(x_2)$, then $g(f(x_1)) = g(f(x_2))$, which is the same as $x_1 = x_2$. Therefore $f$ is one-to-one.
Direction $\Leftarrow$: If $f$ is one-to-one, define $g: Y \to X$ as follows. Given any $y \in Y$, if $y = f(x)$ for some $x$ (which would be uniquely determined), let $g(y) = x$; if $y$ is not in the image of $f$, then let $g(y)$ be any element of $X$. Then for all $x \in X$, $g(f(x)) = x$. (The values of $g$ on the part of $Y$ outside the image of $f$ never make a difference.)
For the statement about right inverses, direction $\Rightarrow$: Suppose $f$ has a right inverse $g$, so that $f(g(y)) = y$ for all $y \in Y$. Then each $y \in Y$ is the image of some $x \in X$, specifically $x = g(y)$. Therefore $f$ is onto.
Direction $\Leftarrow$: If $f$ is onto, define $g: Y \to X$ as follows. Given any $y \in Y$, choose any $x$ with $f(x) = y$ (since there's at least one choice), and let $g(y)$ be that $x$. Then for any $y \in Y$ we have $f(g(y)) = y$.
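A tiny finite-set illustration of these constructions (my own example, not from the handout):

    X = [0, 1, 2]
    Y = ['a', 'b', 'c', 'd']
    f = {0: 'a', 1: 'b', 2: 'c'}   # one-to-one but not onto ('d' is missed)

    # Left inverse g: send each image back to its (unique) preimage; on the
    # part of Y outside the image of f, any value in X will do.
    g = {y: x for x, y in f.items()}
    g['d'] = 0
    print(all(g[f[x]] == x for x in X))   # True, i.e. g o f = id on X

    # f has no right inverse: no choice of h can make f(h('d')) equal 'd',
    # matching the fact that f is not onto.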
Note. As in calculus, if $h(x) = g(f(x))$ for all $x$, we write $h = g \circ f$ and say $h$ is the composition of $g$ and $f$. We can also write $\mathrm{id}_X$, or just $\mathrm{id}$, for the ``identity function'' on $X$, which means the function taking $x$ to $x$ for all $x$. Then $g$ is a left inverse of $f$ when $g \circ f = \mathrm{id}$, and $g$ is a right inverse of $f$ when $f \circ g = \mathrm{id}$ (this time meaning $\mathrm{id}_Y$). This notation makes function inverses look more like multiplicative inverses.
For Problem G-3:
(a) As with other problems involving arithmetic progressions, the rows of the row-reduced echelon form are also arithmetic progressions, and there are two nonzero rows. So the rank is 2.
(b) Rows are proportional and the matrix is not the zero matrix, so the rank is 1.
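The actual matrices of G-3 aren't reproduced here, but the pattern is easy to check on stand-ins (a matrix whose entries run through an arithmetic progression row by row, and a nonzero matrix with proportional rows):

    import numpy as np

    A = np.array([[1, 2, 3, 4],      # entries in arithmetic progression
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]])
    print(np.linalg.matrix_rank(A))  # 2

    B = np.array([[1, 2, 3],         # proportional rows, not the zero matrix
                  [2, 4, 6]])
    print(np.linalg.matrix_rank(B))  # 1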
For Problem G-4:
(a) Here $V = GF(2)^2$, with elements $(0,0)$, $(0,1)$, $(1,0)$, $(1,1)$. The only subspace of dimension 0 is $\{\mathbf{0}\}$. The only subspace of dimension 2 is $V$ itself. To get a 1-dimensional subspace, we can take any nonzero vector $v$ and find the subspace it spans, specifically, each scalar times $v$. But the only scalars are 0 and 1, so we get $\{\mathbf{0}, v\}$. There are three nonzero vectors to use, as listed above, so we get three 1-dimensional subspaces.
Note: If we were using GF(3) as the field, the 1-dimensional subspace generated by a vector $v$ would be $\{\mathbf{0}, v, 2v\}$, so that two nonzero vectors would participate in each 1-dimensional subspace.
(b) Here $V = GF(2)^3$, with $2^3 = 8$ elements. As in (a), we get one 0-dimensional subspace and one 3-dimensional subspace, and just as in (a) the 1-dimensional subspaces are of the form $\{\mathbf{0}, v\}$ where $v$ is one of the seven nonzero vectors. Each 2-dimensional subspace is the span of two nonzero vectors $v$ and $w$ and has four elements: $\{\mathbf{0}, v, w, v+w\}$. There are seven possibilities; in each case, for $v$ and $w$, any two of the three nonzero vectors in the subspace will do.
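A brute-force check of these counts (not part of the handout): over GF(2) the only scalars are 0 and 1, so a subset is a subspace exactly when it contains the zero vector and is closed under addition.

    from itertools import product, combinations
    from collections import Counter

    vectors = list(product((0, 1), repeat=3))   # the 8 elements of GF(2)^3

    def add(u, v):
        return tuple((a + b) % 2 for a, b in zip(u, v))

    def is_subspace(S):
        return (0, 0, 0) in S and all(add(u, v) in S for u in S for v in S)

    subspaces = [set(S) for k in (1, 2, 4, 8)   # possible sizes are powers of 2
                 for S in combinations(vectors, k) if is_subspace(set(S))]
    counts = Counter(len(S) for S in subspaces)
    print(counts[1], counts[2], counts[4], counts[8])   # 1 7 7 1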
For Problem G-5:
The addition and negation properties are the same as for fields. Multiplication is associative and there is a neutral element, the identity matrix $I$, but multiplication is not commutative (there are matrices $A$ and $B$ with $AB \neq BA$), and nonzero elements might not have multiplicative inverses (the same matrices serve as examples). The distributive law still holds.
Exception: For $n \times n$ matrices with $n = 1$, we do get a field, isomorphic to the field of scalars.
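Since the handout's specific matrices aren't reproduced above, here is one standard pair of $2 \times 2$ examples (my choice) exhibiting both failures at once:

    import numpy as np

    A = np.array([[0, 1],
                  [0, 0]])
    B = np.array([[0, 0],
                  [1, 0]])

    print(A @ B)    # [[1 0] [0 0]]
    print(B @ A)    # [[0 0] [0 1]]  -- so AB != BA
    # A and B are nonzero yet singular (determinant 0), so neither has an inverse.
    print(np.linalg.det(A), np.linalg.det(B))   # 0.0 0.0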
Note. The laws that matrices do satisfy define what is called a ``ring with 1''. So a field is really a commutative ring with 1 in which each nonzero element has a multiplicative inverse. Other examples of commutative rings with 1 are the integers $\mathbb{Z}$ and the set of all polynomials over a field, of all degrees.
For Problem G-6:
Following the suggestion, define the map $T$ as indicated there. Computing $T$ of a sum and comparing it with the sum of the images shows they agree, so addition is preserved. Likewise, $T$ of a product and the product of the images come out the same, so multiplication is preserved. $T$ is one-to-one, since equality of two images immediately forces equality of the original elements. (That was almost too obvious to be written out.) $T$ is onto from its definition.
For Problem G-7:
See the notes for 1-M, on the web and handed out. In GF(13), $1 \cdot 1 = 1$, $2 \cdot 7 = 14 \equiv 1$, $3 \cdot 9 = 27 \equiv 1$, $4 \cdot 10 = 40 \equiv 1$, $5 \cdot 8 = 40 \equiv 1$, $6 \cdot 11 = 66 \equiv 1$, $12 \cdot 12 = 144 \equiv 1$, so 1 is its own inverse, 2 and 7 are inverses of each other, etc.
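A quick brute-force table of these inverses (the modulus 13 is inferred from ``2 and 7 are inverses of each other''; adjust p if the problem used a different field):

    p = 13
    for a in range(1, p):
        inv = next(b for b in range(1, p) if (a * b) % p == 1)
        print(a, inv)
    # 1 1, 2 7, 3 9, 4 10, 5 8, 6 11, 7 2, 8 5, 9 3, 10 4, 11 6, 12 12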
For Problem G-8: The intention of this problem was to do this in an ad-hoc way, just to get a sense of how tricky it is to do. An organized solution is in Problem J-8.
For Problem G-9: Again, the intention of this problem was just to get a sense of the difficulty. See Problem J-5.