For Problem CC-1:
The transformation is the same as [...]. The range has basis [...], and the null space has basis [...]. The inverse image of [...] is the graph of [...]. Graphs are these: [...]
For Problem CC-2:
so the matrix is [...]. Notice that this is a scaled rotation: a matrix of the form
$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix} = \sqrt{a^2+b^2}\,\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
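As a quick numerical sanity check of the scaled-rotation form, here is a short sketch with made-up entries $a = 3$, $b = 4$ (the actual matrix from CC-2 is not reproduced above):

```python
import numpy as np

# Hypothetical entries; any matrix [[a, -b], [b, a]] factors as r * R(theta).
a, b = 3.0, 4.0
M = np.array([[a, -b],
              [b, a]])

r = np.hypot(a, b)               # scale factor sqrt(a^2 + b^2)
theta = np.arctan2(b, a)         # rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(M, r * R)     # M really is a rotation scaled by r
```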
For Problem CC-3:
(a) Setting $\det(A - \lambda I) = 0$ gives the eigenvalues $\lambda_1$ and $\lambda_2$. For $A - \lambda_1 I$ we could write
out equations, but to save effort, notice that we're just
looking for a nonzero vector in the null space, or equivalently, a vector
perpendicular to both rows, so we can use such a vector $\mathbf{v}_1$ as the
eigenvector. (Since $A - \lambda_1 I$ is singular its rows are proportional, and a vector perpendicular to a row $(a\ \ b)$ is $(-b\ \ a)$.) Similarly for $A - \lambda_2 I$, a vector
perpendicular to both rows is $\mathbf{v}_2$. Therefore $A = SDS^{-1}$
with $S = (\mathbf{v}_1\ \ \mathbf{v}_2)$
and $D = \operatorname{diag}(\lambda_1, \lambda_2)$.
(b) Using the same $S$ as in (a) we get $A^k = SD^kS^{-1}$ for the required power $k$. It is not even necessary to
calculate the entries of $S^{-1}$.
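A sketch of this diagonalization shortcut in Python, using an invented symmetric matrix in place of the problem's $A$:

```python
import numpy as np

# Illustrative matrix only; the actual A from CC-3 is not reproduced above.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

evals, S = np.linalg.eig(A)      # columns of S are eigenvectors
k = 10

# A = S D S^{-1}, so A^k = S D^k S^{-1}; powers of the diagonal are cheap.
Ak = S @ np.diag(evals**k) @ np.linalg.inv(S)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```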
For Problem CC-4:
(a) Setting $\det(A - \lambda I) = 0$ gives the eigenvalues $\lambda_1$ and $\lambda_2$. We get $A - \lambda_1 I$, and (as in
the solution to CC-3) a vector perpendicular
to both rows serves as the eigenvector $\mathbf{v}_1$. We get $A - \lambda_2 I$, and a vector perpendicular to both rows
is $\mathbf{v}_2$. Therefore $A = SDS^{-1}$
with $S = (\mathbf{v}_1\ \ \mathbf{v}_2)$
and $D = \operatorname{diag}(\lambda_1, \lambda_2)$. Later we'll
need $S^{-1}$ as well.
(b) We want $B^3 = A$, so if we let $B = SCS^{-1}$
then we'll have $B^3 = SC^3S^{-1}$. We don't know $B$ or $C$ yet, but we can get them by letting $C$
be the cube root of $D$, specifically,
$C = \operatorname{diag}(\lambda_1^{1/3}, \lambda_2^{1/3})$,
and then setting $B = SCS^{-1}$. This
answer can be checked by cubing it.
Incidentally, do you see connections between this problem and Problem CC-[...]?
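A numerical sketch of the cube-root construction, again with an invented stand-in for $A$; the final assertion is the "check by cubing":

```python
import numpy as np

# Illustrative matrix only; the actual A from CC-4 is not reproduced above.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

evals, S = np.linalg.eig(A)
C = np.diag(np.cbrt(evals))       # cube roots of the eigenvalues
B = S @ C @ np.linalg.inv(S)      # B = S C S^{-1}

assert np.allclose(B @ B @ B, A)  # check by cubing
```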
For Problem CC-5:
Since $(AB)_{ii} = \sum_j a_{ij}b_{ji}$, we have
$\operatorname{trace}(AB) = \sum_i \sum_j a_{ij}b_{ji}$, while $(BA)_{jj} = \sum_i b_{ji}a_{ij}$, so
$\operatorname{trace}(BA) = \sum_j \sum_i b_{ji}a_{ij}$. If we switch the order
of summation in this second equation and then switch the letters $i$ and $j$, neither of which changes the value, we get the
first equation.
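A quick numerical spot-check of $\operatorname{trace}(AB) = \operatorname{trace}(BA)$ on random matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# trace(AB) == trace(BA) even though AB != BA in general
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```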
For Problem CC-6:
For (a): Say $A$ is invertible. Then $BA = A^{-1}(AB)A$, so $BA$ is similar to $AB$.
The case where $B$ is invertible is the same with the letters switched.
(b) It is possible to have $AB = \mathcal{O}$ and $BA \neq \mathcal{O}$ ($\mathcal{O}$ meaning zero matrices here): Take, for example,
$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and
$B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. The only matrix similar to a zero matrix
is the zero matrix itself, so $AB$ and $BA$ can't be
similar.
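Checking one such example numerically (the matrices here are a standard choice, not necessarily the ones intended in the original):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

assert np.all(A @ B == 0)    # AB is the zero matrix
assert np.any(B @ A != 0)    # BA is not
```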
For Problem CC-7:
(a) The characteristic polynomial of a $2\times2$ matrix $A$ is $\lambda^2 - t\lambda + d$, where $t = \operatorname{trace}(A)$ and $d = \det(A)$. But $t$ is also the sum of the eigenvalues and $d$ is also the
product of the eigenvalues, so the given data determine $t$ and $d$, and hence the characteristic polynomial. From here there
are two approaches:
(i) a diagonal matrix as the solution: Solving with the quadratic
equation, we find that the eigenvalues are $\lambda = \frac{t \pm \sqrt{t^2 - 4d}}{2}$,
so an answer is the diagonal matrix with these two values as its entries.
(ii) (easier) just invent a matrix with the proper trace and determinant.
For diagonal entries take $t$ and $0$. For off-diagonal entries to make
the determinant come out right, take $b$ and $c$ with $-bc = d$. So an answer is
$\begin{pmatrix} t & b \\ c & 0 \end{pmatrix}$.
Note: In a problem of this kind, you can always take one diagonal entry to be 0 and still be sure of finding other entries that will work.
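A sketch of method (a)-(ii) with hypothetical target values $t = 5$, $d = 6$ (the problem's actual numbers are not reproduced above):

```python
import numpy as np

t, d = 5.0, 6.0          # desired trace and determinant (made-up targets)

# One diagonal entry 0, the other t; pick off-diagonal b, c with -b*c = d.
b, c = 1.0, -d
M = np.array([[t, b],
              [c, 0.0]])

assert np.isclose(np.trace(M), t) and np.isclose(np.linalg.det(M), d)
# Equivalently, the eigenvalues of M sum to t and multiply to d:
evals = np.linalg.eigvals(M)
assert np.isclose(evals.sum(), t) and np.isclose(evals.prod(), d)
```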
(b) One way: A rotation matrix has the form $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. We know this is diagonal for $\theta = 0$ and $\theta = \pi$. The first of these is excluded, and
the other gives $\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$.
A better way: A rotation matrix is characterized by having
orthonormal columns and determinant 1. In a diagonal matrix
the columns are already perpendicular; to be orthonormal the
diagonal entries should be $\pm 1$. To get determinant 1,
they should both be 1 or both be $-1$. The first is excluded,
so the only possible answer is $\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$.
(c) This problem mentions the Cayley-Hamilton Theorem, which we'll
discuss in class: Theorem: For any square matrix $A$, $p(A)$ is the
zero matrix, where $p$ is the characteristic polynomial of $A$. Working as in (a)-(ii), we need trace $-1$ and determinant $1$, so that the characteristic polynomial is $\lambda^2 + \lambda + 1$: let the
diagonal entries be $-1$ and $0$ and the off-diagonal entries be, say, $1$ and $-1$, so the matrix is $A = \begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix}$. It can be
checked that this $A$ works. (The method (a)-(i) is also OK, but
this time the eigenvalues are complex.)
Incidentally, recalling the polynomial factorization
$\lambda^3 - 1 = (\lambda - 1)(\lambda^2 + \lambda + 1)$, we see that for this $A$
we have $A^2 + A + I = \mathcal{O}$ (the zero matrix),
so $A^3 = I$. For the same reason, the eigenvalues of $A$
must be complex cube roots of 1, which are studied in Math 132.
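Verifying both identities for this $A$ (the off-diagonal entries $1$ and $-1$ are just one convenient choice):

```python
import numpy as np

# A matrix with trace -1 and determinant 1 (characteristic poly l^2 + l + 1).
A = np.array([[-1, 1],
              [-1, 0]])
I = np.eye(2, dtype=int)

assert np.all(A @ A + A + I == 0)                  # Cayley-Hamilton: p(A) = 0
assert np.all(np.linalg.matrix_power(A, 3) == I)   # hence A^3 = I
```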
(d) The simplest examples would be to take $A$ to be the
$2\times2$ identity matrix and $\lambda = 1$, or the
$2\times2$ zero matrix and $\lambda = 0$; the eigenspace
is the whole space either way. If the problem had asked for a
$3\times3$ example, it's still simplest to use a diagonal
matrix: $\operatorname{diag}(1,1,0)$ and $\lambda = 1$,
or $\operatorname{diag}(0,0,1)$ and $\lambda = 0$;
the eigenspace is the $xy$-plane in either case.
For Problem CC-8:
(a) We might as well try for the $n$-cycle $(1\,2\,3\,\cdots\,n)$, so that is what our product of transpositions should equal.
Let's start with $(1\,2)$. This takes 1 to 2, which is
good, but 2 to 1, which is bad. To get 2 to go to 3, follow with
$(1\,3)$, on the left since we are composing functions.
So far we have $(1\,3)(1\,2)$. This takes 1 to 2 and 2 to
3 but 3 to 1, so follow with $(1\,4)$, and so on. Therefore
one answer is $(1\,2\,3\,\cdots\,n) = (1\ n)\cdots(1\,4)(1\,3)(1\,2)$.
More generally, if the symbols are $a_1, a_2, \dots, a_n$, we
have $(a_1\,a_2\,\cdots\,a_n) = (a_1\,a_n)\cdots(a_1\,a_3)(a_1\,a_2)$.
Incidentally, when we say ``and so on'', we are really using informal
induction. That's appropriate for small proofs like this one, as
long as we know how to make it formal if required. The
induction step here would involve checking that
$(1\ k{+}1)\,(1\,2\,\cdots\,k) = (1\,2\,\cdots\,k\ k{+}1)$. In this course there wasn't much
practice with induction, so you will not be expected to use it
in unfamiliar contexts like this one.
For another answer, first start with a naive attempt:
$(n{-}1\ n)\cdots(2\,3)(1\,2)$. This has two drawbacks:
First, it involves thinking from left to right, instead of right
to left, and second, it gives the undesired answer
$(1\ n\ n{-}1\ \cdots\ 2)$. But that is an $n$-cycle, so
in a sense it has solved the problem, even if awkwardly. We can
fix up this idea by looking at it and reversing the appearances
of $2, 3, \dots, n$, while leaving the appearance of 1 alone. We
get $(1\,2)(2\,3)\cdots(n{-}1\ n) = (1\,2\,3\,\cdots\,n)$ as desired. More generally,
$(a_1\,a_2\,\cdots\,a_n) = (a_1\,a_2)(a_2\,a_3)\cdots(a_{n-1}\,a_n)$.
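A short script that builds the first answer $(1\ n)\cdots(1\,3)(1\,2)$ by right-to-left composition and confirms the product is the $n$-cycle (shown here for $n = 5$):

```python
def compose(p, q):
    """Right-to-left composition: (p after q)(i) = p[q[i]]."""
    return {i: p[q[i]] for i in q}

def transposition(a, b, n):
    t = {i: i for i in range(1, n + 1)}
    t[a], t[b] = b, a
    return t

n = 5
# Build (1 n)...(1 3)(1 2), composing each new factor on the left.
sigma = transposition(1, 2, n)
for k in range(3, n + 1):
    sigma = compose(transposition(1, k, n), sigma)

# The result is the n-cycle sending 1 -> 2 -> ... -> n -> 1.
assert sigma == {i: i % n + 1 for i in range(1, n + 1)}
```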
(b) As stated, every permutation is a product of disjoint cycles, and by (a) every cycle is a product of transpositions, so every permutation is a product of transpositions.
It is also possible to prove (b) directly. Take a permutation
$\sigma$. Imagine doing $\sigma$ to a deck of cards numbered
$1, \dots, n$, so they're all out of order. Can you put them
back in order using transpositions? Yes: Wherever card 1 is,
switch it to the top. Then switch card 2 to the second position
from the top, etc. When you're done the deck will be in order.
In other words, you have accomplished $\sigma^{-1}$ by doing
a product of transpositions, say $\sigma^{-1} = \tau_k \cdots \tau_2 \tau_1$, where the $\tau_i$ are
transpositions. Then $\sigma = \tau_1^{-1}\tau_2^{-1}\cdots\tau_k^{-1}$ (reversing
just like inverting a product of matrices), and since each
transposition is its own inverse this shows $\sigma = \tau_1\tau_2\cdots\tau_k$.
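A sketch of the card-sorting argument for a small made-up permutation $\sigma$ (the deck convention, card $\sigma(i)$ at position $i$, is an illustrative choice):

```python
def sorting_transpositions(sigma):
    """Return the 1-based position swaps that put the shuffled deck
    (card sigma[i] at position i) back in order."""
    n = len(sigma)
    deck = [sigma[i] for i in range(1, n + 1)]
    swaps = []
    for target in range(1, n + 1):
        pos = deck.index(target)
        if pos != target - 1:
            deck[pos], deck[target - 1] = deck[target - 1], deck[pos]
            swaps.append((pos + 1, target))
    assert deck == sorted(deck)        # deck is back in order
    return swaps

# These swaps accomplish sigma^{-1}; reading them in reverse order
# (each transposition is its own inverse) expresses sigma itself.
sigma = {1: 3, 2: 1, 3: 4, 4: 2}
print(sorting_transpositions(sigma))   # [(2, 1), (4, 2), (4, 3)]
```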
(c) In $S_3$, the identity and the two 3-cycles are even;
the three transpositions are odd. As stated it can be checked
that even times even or odd times odd is even, while
even times odd or odd times even is odd.
For Problem CC-9:
(a) Since these are separate DE's, $y_1 = y_1(0)e^{\lambda_1 t}$ and $y_2 = y_2(0)e^{\lambda_2 t}$.
(b) This is $\mathbf{x}' = A\mathbf{x}$ with $A = \operatorname{diag}(\lambda_1, \lambda_2)$ and $\mathbf{x}(0)$ the vector of initial values.
(c) Writing $\mathbf{x}' = A\mathbf{x}$ out componentwise says $y_1' = \lambda_1 y_1$ and $y_2' = \lambda_2 y_2$, the same as (a).
(d) If $\mathbf{x} = e^{At}\mathbf{x}(0)$, differentiating and assuming
that ordinary rules apply, we get $\mathbf{x}' = Ae^{At}\mathbf{x}(0) = A\mathbf{x}$, and clearly setting $t = 0$ we do get $\mathbf{x}(0)$ as
the value.
Note: Since $e^{At}$ involves terms that are all powers of $A$ (scaled by powers of $t$), it commutes with $A$, so it doesn't matter whether we
write $Ae^{At}$ or $e^{At}A$, but the first of these
fits better here.
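A numerical illustration with invented coefficients (the problem's actual system is not reproduced above), using SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative diagonal system: y1' = 2*y1, y2' = -1*y2.
A = np.diag([2.0, -1.0])
x0 = np.array([3.0, 5.0])

t = 0.7
x = expm(A * t) @ x0

# For diagonal A the solution is componentwise exponentials:
assert np.allclose(x, x0 * np.exp(np.diag(A) * t))
# And e^{At} commutes with A:
assert np.allclose(A @ expm(A * t), expm(A * t) @ A)
```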
For Problem CC-10:
(a) To summarize: $\mathbf{x}' = A\mathbf{x}$ with $A = SDS^{-1}$ and $\mathbf{x}(0)$ given.
Substituting $\mathbf{x} = S\mathbf{z}$ we get $(S\mathbf{z})' = AS\mathbf{z}$, which is
the same as $S\mathbf{z}' = AS\mathbf{z}$ (since the derivative of a
vector means the vector of derivatives, and since we are multiplying
by the constant matrix $S$). Putting the $S$ on the right we
get $\mathbf{z}' = S^{-1}AS\,\mathbf{z}$, or $\mathbf{z}' = D\mathbf{z}$, which has the
solution $z_1 = z_1(0)e^{\lambda_1 t}$, $z_2 = z_2(0)e^{\lambda_2 t}$. But
what are these initial values? We had $\mathbf{x} = S\mathbf{z}$, so
$\mathbf{z} = S^{-1}\mathbf{x}$ and $\mathbf{z}(0) = S^{-1}\mathbf{x}(0)$. The solution, then,
from $\mathbf{x} = S\mathbf{z}$, is $\mathbf{x}(t) = S\mathbf{z}(t)$ written out componentwise. This checks in the original DE.
In matrix form the answer is $\mathbf{x}(t) = Se^{Dt}S^{-1}\mathbf{x}(0)$.
(b) The matrix power answer ought to be $\mathbf{x}(t) = e^{At}\mathbf{x}(0)$.
Inventing reasonable rules, the derivative of the right-hand side
is $Ae^{At}\mathbf{x}(0) = A\mathbf{x}$, so that checks.
Note: This answer and the previous answer are consistent: Because
$e^{At}$ is an infinite sum of scaled powers of $A$, and because
similarity by a fixed $S$ preserves matrix multiplication and addition,
we get $e^{At} = Se^{Dt}S^{-1}$ when $A = SDS^{-1}$.