p. 21, Ex. 3.
This problem can be solved by experimenting, but it's better to use an organized approach. I'll list several options.
(i) Just try matrices that have three zeros. You will find that good solutions are $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and its transpose.
(ii) Notice that $A$ should not be invertible, since a product of invertible matrices is invertible and the zero matrix is not invertible. Then $A$ doesn't have rank 2, and we also know it doesn't have rank 0 (which would be the zero matrix). So the rank of $A$ is 1. This should help experiments.
(ii$'$) Continuing with the rank 1 idea and recalling a previous problem, $A = x y^t$ for column vectors $x$ and $y$. Then $A^2 = x y^t x y^t$, which is column $\cdot$ row $\cdot$ column $\cdot$ row. The middle row and column multiply to make a $1 \times 1$ matrix, which is just a scalar, and it had better be 0 or else $A^2$ won't be the zero matrix. So just pick any two column vectors with $y^t x = 0$ (which really says the dot product of the two is 0) and let $A = x y^t$. Example: $x = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $y = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$; then $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ as above.
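In symbols, the computation behind this is just associativity:
\[
A^2 = (x y^t)(x y^t) = x\,(y^t x)\,y^t = (y^t x)\, x y^t ,
\]
so $A^2$ is the zero matrix exactly when the scalar $y^t x$ is 0 (given that $x$ and $y$ themselves are nonzero).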
(iii) Plan what $A$ should do to standard basis vectors. If $A$ takes $e_1$ to $e_2$ and $e_2$ to 0, then doing $A$ twice will take both standard basis vectors to 0, which says $A^2$ is the zero transformation, as hoped. What is $A$? Its first column is $Ae_1 = e_2$ and its second column is $Ae_2 = 0$, so $A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$.
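As a quick check that this matrix works:
\[
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
\]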
(iv) [for later, after you have had the ingredients] If $A^2$ is the zero matrix, then the eigenvalues $\lambda$ of $A$ also must obey $\lambda^2 = 0$, so the determinant and trace of $A$ will be 0. (The trace is the sum of the diagonal entries.) So $A$ should look like $\begin{pmatrix} a & * \\ * & -a \end{pmatrix}$. If $a \neq 0$ then to make the determinant come out 0 the missing entries should be $b$ and $-a^2/b$ for some $b \neq 0$. If $a = 0$ then to make the determinant come out 0 the missing entries should be $b$ and 0, either way around. To summarize, $A = \begin{pmatrix} a & b \\ -a^2/b & -a \end{pmatrix}$ (with $b \neq 0$) or $\begin{pmatrix} 0 & b \\ 0 & 0 \end{pmatrix}$ or $\begin{pmatrix} 0 & 0 \\ c & 0 \end{pmatrix}$ (with $b$ or $c$ arbitrary).
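As a check that the matrices in this summary really do square to zero, compute directly for a trace-0 matrix:
\[
\begin{pmatrix} a & b \\ c & -a \end{pmatrix}^2
= \begin{pmatrix} a^2 + bc & ab - ba \\ ca - ac & cb + a^2 \end{pmatrix}
= (a^2 + bc)\,I ,
\]
which is the zero matrix exactly when $bc = -a^2$, i.e., exactly when the determinant $-a^2 - bc$ is 0.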
p. 21, Ex. 4.
I get $R = E_8 E_7 \cdots E_1 A$ for a suitable sequence of eight elementary matrices $E_1, E_2, \ldots, E_8$, one for each row operation used.
Of course, someone else might use some other order of row-reduction and get other matrices.
Notice that the text uses $E$ for elementary matrices, while I have been using $R$ for a matrix in row-reduced echelon form.
This is really what Theorem 9 on p. 20 is saying for the case of elementary matrices.
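For illustration (this is a generic example, not one of the matrices from the exercise): the row operation ``add $-3$ times row 1 to row 2'' on a matrix with two rows corresponds to left multiplication by the elementary matrix
\[
E = \begin{pmatrix} 1 & 0 \\ -3 & 1 \end{pmatrix},
\]
obtained by applying that same row operation to the identity matrix.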
p. 21, Ex. 7.
Here we are evidently supposed to do more than just to quote a theorem like the one on Handout S. The more we know, the easier it is to find a good solution. Some methods:
(i) The straightforward method is to write $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and $B = \begin{pmatrix} x & y \\ z & w \end{pmatrix}$ and solve $AB = I$ for $B$ in terms of $A$. Then we can check directly that $BA = I$.
Details: As a warmup, think how we would do the problem if $A$ had specific numbers instead of letters. Solving $AB = I$ is equivalent to two separate linear systems: $A\begin{pmatrix} x \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $A\begin{pmatrix} y \\ w \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. They can be made into a single problem with an augmented matrix to row-reduce: $\bigl(\, A \mid I \,\bigr)$. The answer for $B$ will be the right-hand part after row reduction. This is the same as a common method of computing an inverse, which is of course what we're doing. Starting with $\left(\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right)$ and row-reducing, we eventually will get $\left(\begin{array}{cc|cc} 1 & 0 & d/\Delta & -b/\Delta \\ 0 & 1 & -c/\Delta & a/\Delta \end{array}\right)$, where $\Delta = ad - bc$ (Greek capital Delta). (If we divide by $a$, say, during the calculation then we have to split cases into $a \neq 0$ and $a = 0$, but the answer will come out the same either way.) $\Delta$ itself can't be 0 since there is a solution. Then we know $B = \begin{pmatrix} d/\Delta & -b/\Delta \\ -c/\Delta & a/\Delta \end{pmatrix}$, or more simply, $B = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$. We can check directly that $BA = I$.
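And the direct check does work:
\[
BA = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \frac{1}{ad - bc}\begin{pmatrix} da - bc & db - bd \\ -ca + ac & -cb + ad \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]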
(ii) $B$ has to be of rank 2 since $AB = I$ has rank 2 and rank can't increase under matrix multiplication. So row-reducing $B$ we get $I$. In other words, $B$ is invertible. Multiplying $AB = I$ by $B^{-1}$ on the right we get $A(BB^{-1}) = B^{-1}$, so $A = B^{-1}$ and also we have $BA = BB^{-1} = I$.
(iii) If $AB = I$ then $Bx = 0$ forces $x = A(Bx) = 0$, so $B$ is one-to-one, so the nullity of $B$ is 0. Then the rank of $B$ is 2, so $B$ is onto. Since $B$ is an isomorphism, it has a two-sided inverse, which is $B^{-1}$. Then $A = A(BB^{-1}) = (AB)B^{-1} = B^{-1}$ is a two-sided inverse for $B$, so in particular $BA = I$.
Note. This problem is related to a simple observation: If $T$ has both a left inverse $L$ (so $LT = I$) and a right inverse $R$ (so $TR = I$) then $L = R$. The reason: $LTR$ can be thought of as $(LT)R = IR = R$ and as $L(TR) = L\,I = L$, so $L = R$.
In the special case of linear transformations on a finite-dimensional vector space to itself, a corollary of the dimension theorem was that if $T$ is one-to-one then $T$ is onto, so if $T$ has a left inverse then it has a right inverse also, and so a two-sided inverse.
p. 26, Ex. 1. It's simplest to augment $A$ with an identity matrix $I$ to keep track of the cumulative effect of the elementary row operations.
The idea is that by doing row operations on $A$ we are computing $PA$ for some product $P$ of elementary matrices, and by doing the same row operations on $I$ we get $PI = P$, so we can see $P$ as well as the reduced version $R$ of $A$.
Row-reducing the augmented matrix $\bigl(\, A \mid I \,\bigr)$ produces $\bigl(\, R \mid P \,\bigr)$.
This says that $R = PA$ and that $P$ is invertible, being a product of elementary matrices.
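To illustrate the bookkeeping with a small matrix of my own choosing (not the matrix from the exercise), take $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$:
\[
\left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right)
\longrightarrow
\left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & -2 & -3 & 1 \end{array}\right)
\longrightarrow
\left(\begin{array}{cc|cc} 1 & 0 & -2 & 1 \\ 0 & 1 & 3/2 & -1/2 \end{array}\right),
\]
so here $R = I$, $P = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}$, and indeed $PA = R$.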
p. 27, Ex. 5.
Yes, scaling the last three rows by the reciprocals of their diagonal entries respectively and sweeping out columns using the leading 1's, we see that $A$ is row-equivalent to the identity matrix and so is invertible.
Other ways to see this, for the future:
The determinant of a triangular matrix is the product of the diagonal entries, so $\det A \neq 0$, which means $A$ is invertible.
The eigenvalues of a triangular matrix are the diagonal entries, and since none are 0 the matrix is nonsingular.
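In symbols, assuming (as the references to the last three rows and to the diagonal suggest) that the matrix here is $4 \times 4$ and upper triangular with diagonal entries $d_1, d_2, d_3, d_4$:
\[
\det \begin{pmatrix} d_1 & * & * & * \\ 0 & d_2 & * & * \\ 0 & 0 & d_3 & * \\ 0 & 0 & 0 & d_4 \end{pmatrix}
= d_1 d_2 d_3 d_4 ,
\]
which is nonzero exactly when every $d_i$ is nonzero.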
p. 27, Ex. 6.
The rank of $A$ is no more than 1, and the rank of the product $AB$ can't be larger than the rank of $A$ (or the rank of $B$), so rank $AB \leq 1$. But $AB$ is $2 \times 2$, so if $AB$ were invertible its rank would have to be 2.
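Concretely, writing $A = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix}$ and $B = \begin{pmatrix} b_1 & b_2 \end{pmatrix}$ (generic entries, just for illustration):
\[
AB = \begin{pmatrix} a_1 b_1 & a_1 b_2 \\ a_2 b_1 & a_2 b_2 \end{pmatrix},
\]
so both rows are multiples of $\begin{pmatrix} b_1 & b_2 \end{pmatrix}$ and the rank is at most 1; equivalently, the determinant $a_1 b_1 a_2 b_2 - a_1 b_2 a_2 b_1 = 0$.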
p. 74, Ex. 11.
This problem says to show that the transformation $x \mapsto Ax$ is the zero transformation $\iff$ $A$ is the zero matrix. Solution:
For ``$\Longrightarrow$'': if $Ax = 0$ for all $x$ then in particular $Ae_j = 0$ for all $j$, which is the same as saying that each column of $A$ is the zero vector, i.e., all entries of $A$ are 0.
For ``$\Longleftarrow$'': If $A$ is the zero matrix then $Ax = 0$ for all $x$, which says $x \mapsto Ax$ is the zero transformation.
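The fact being used here is that multiplying $A$ by the $j$-th standard basis vector picks out the $j$-th column of $A$:
\[
A e_j = \text{the $j$-th column of } A ,
\]
so $Ae_j = 0$ for every $j$ says precisely that every column of $A$ is zero.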
p. 74, Ex. 12.
Since the range and null space of $T$ are identical, the rank and nullity are the same, say $k$, and we know rank $+$ nullity $= \dim V$, so $\dim V = 2k$ is even.
Technically, the simplest example would be to have $\dim V = 0$ and let $T$ be the zero transformation. A more meaningful example would be to take $V = \mathbb{R}^2$ (since we know the dimension is even!), and let Range($T$) $=$ Nullspace($T$) $=$ span$\{e_1\}$. This can be accomplished by having $Te_2 = e_1$ and $Te_1 = 0$. In other words, $T(x_1, x_2) = (x_2, 0)$ for all $(x_1, x_2)$ in $\mathbb{R}^2$.
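Checking this example: $T(x_1, x_2) = (x_2, 0)$ has
\[
\mathrm{Nullspace}(T) = \{(x_1, x_2) : x_2 = 0\} = \mathrm{span}\{e_1\}
\qquad\text{and}\qquad
\mathrm{Range}(T) = \{(x_2, 0) : x_2 \in \mathbb{R}\} = \mathrm{span}\{e_1\},
\]
so the two subspaces coincide, as desired.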
p. 84, Ex. 7.
A ``linear operator'' is a transformation on a vector space
to itself. We know linear transformations on $\mathbb{R}^2$ are all of the form $x \mapsto Ax$ for a suitable $2 \times 2$ matrix $A$, so we are looking for $2 \times 2$ matrices satisfying the required equation (where 0 means the $2 \times 2$ zero matrix).
One way is to experiment with matrices that have lots of 0 entries. For example, matrices that each have a single, suitably placed nonzero entry provide an example.
Another way is to realize that the nilpotent matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and its transpose are often good examples of bad behavior, so try those two shapes and try to
solve for the desired condition. You get an example similar to the
one in the preceding paragraph.
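For instance, if the required condition is that the product in one order is the zero matrix while the product in the other order is not (a common version of this exercise; adjust to the actual statement), then the matrices
\[
A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
\]
work, since $BA = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ while $AB = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \neq 0$.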
p. 85, Ex. 1.
by
.
p. 86, Ex. 4.
We need to set up a correspondence between pairs of indices
($i$, $j$) and indices running from 1 to $mn$. In other words, we need to specify an order for the pairs of indices and say how to list them in that order. Where does $A_{ij}$ go?
One way is to list the matrix by row 1, then row 2, etc.: $A_{11}, A_{12}, \ldots, A_{1n}, A_{21}, A_{22}, \ldots, A_{2n}, A_{31}$, etc.
Each new row adds $n$ to the position, but row 1 adds nothing, so the contribution of the row index is off by 1. We get that the position of $A_{ij}$ in the list is $(i-1)n + j$. Then we get an isomorphism from the space of $m \times n$ matrices to $F^{mn}$ by having each matrix $A$ go to the vector $x$ given by $x_{(i-1)n + j} = A_{ij}$ for $i = 1, \ldots, m$, $j = 1, \ldots, n$.
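For example, with $m = 2$ and $n = 3$ the listing is $A_{11}, A_{12}, A_{13}, A_{21}, A_{22}, A_{23}$, and the formula puts $A_{23}$ in position $(2-1)\cdot 3 + 3 = 6$, the last of the $mn = 6$ slots.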
p. 86, Ex. 6.
For ``$\Longrightarrow$'': If $V$ and $W$ are isomorphic we know that a basis of one maps to a basis of the other, so they have the same dimension.
For ``$\Longleftarrow$'': If they have the same dimension, say $n$, we know they are both isomorphic to $F^n$ and so are isomorphic to each other.
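The last step uses the fact that isomorphisms compose: if $\varphi \colon V \to F^n$ and $\psi \colon W \to F^n$ are isomorphisms, then $\psi^{-1} \circ \varphi \colon V \to W$ is an isomorphism.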
For Problem Q-1:
(a) ….
(b) ….
(c) ….
(d) ….
(e) … and ….
(f) ….
(g) … (or in other words, not …).
Note: In logic, $\forall$ and $\exists$ are called quantifiers. $\forall$ is the universal quantifier (since it says something is true universally) and $\exists$ is the existential quantifier.
Normally we don't write mathematics in such a condensed form, but it is important always to notice what the underlying quantification is.