Mathematics 244 -- Linear Algebra
Review True-False Questions -- Exam 3
April 29, 2004
-
A) If A is a 3 x 3 matrix whose characteristic polynomial
has a double root, then A is not diagonalizable -- FALSE. A diagonal
matrix with diagonal entries 1, 1, 2 is a counterexample: it is already
diagonal, hence diagonalizable, yet its characteristic polynomial
is (1 - lambda)^2 (2 - lambda), which has lambda = 1 as a double root.
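A quick numerical check of this counterexample (a sketch using Python/numpy, not part of the original solution):

    import numpy as np

    # A = diag(1, 1, 2) is already diagonal, hence diagonalizable, yet its
    # characteristic polynomial (1 - lambda)^2 (2 - lambda) has a double root.
    A = np.diag([1.0, 1.0, 2.0])
    eigvals, eigvecs = np.linalg.eig(A)
    print(np.sort(eigvals))                  # [1. 1. 2.] -- lambda = 1 is a double root
    print(np.linalg.matrix_rank(eigvecs))    # 3 independent eigenvectors, so A is diagonalizable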
-
B) If T: R^2 -> R^2 and there
exists a non-zero vector x that satisfies T^2(x) =
4x, then T(x) = 2x or T(x) = -2x -- FALSE. For example, let
T(1,0) = (0,1) and T(0,1) = (4,0). Then
T^2(1,0) = 4(1,0), but T(1,0) = (0,1)
is neither (2,0) nor (-2,0).
(But it is true that T has
some eigenvector for lambda = 2 or lambda = -2. Say T(x) = y, so
T(y) = T^2(x) = 4x. Consider the vector z = 2x + y. Then T(z) =
2T(x) + T(y) = 2y + 4x = 2(2x + y) = 2z, so if z is nonzero it is an
eigenvector for lambda = 2. If z = 0, then y = -2x, and a similar
computation shows the nonzero vector w = -2x + y = -4x is an eigenvector
for lambda = -2.)
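Both the counterexample and the parenthetical argument can be verified numerically; here is a sketch using numpy, with T written as its matrix in the standard basis:

    import numpy as np

    # Columns are the images of the standard basis: T(1,0) = (0,1), T(0,1) = (4,0).
    T = np.array([[0.0, 4.0],
                  [1.0, 0.0]])
    x = np.array([1.0, 0.0])
    print(T @ T @ x)             # [4. 0.] = 4x, so T^2(x) = 4x
    print(T @ x)                 # [0. 1.], neither 2x nor -2x

    y = T @ x                    # as in the parenthetical argument
    z = 2 * x + y
    print(T @ z, 2 * z)          # [4. 2.] [4. 2.] -- z is an eigenvector for lambda = 2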
-
C) Every 3 x 3 matrix has at least one eigenvalue lambda in R
-- TRUE. The characteristic polynomial p(lambda) is a polynomial of degree three
with real coefficients. Recall the shape of the graph of
a cubic polynomial. With the convention p(lambda) = det(A - lambda I),
the coefficient of lambda^3 is -1, so the limit as lambda -> -infinity
is +infinity and the limit as lambda -> +infinity
is -infinity. The
polynomial is a continuous function, so the Intermediate Value Theorem
implies that there is some lambda where the polynomial
takes the value zero. Hence it has at least one (and no more than three)
real root(s).
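A numerical spot-check of this fact (a sketch using numpy; the random test matrices are my own choice, not from the original):

    import numpy as np

    # Every real 3 x 3 matrix should have at least one real eigenvalue, since its
    # characteristic polynomial is a real cubic and so must cross zero somewhere.
    rng = np.random.default_rng(0)
    for _ in range(1000):
        A = rng.standard_normal((3, 3))
        eigvals = np.linalg.eigvals(A)
        assert np.any(np.abs(eigvals.imag) < 1e-9)   # at least one real root
    print("all sampled 3 x 3 matrices had a real eigenvalue")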
-
D) If lambda is an eigenvalue of (A-sI)^(-1), where
s is in R, then s + 1/lambda is an eigenvalue of A
-- TRUE. Let x be an eigenvector for (A-sI)^(-1) with
eigenvalue lambda. Then (A-sI)^(-1) x = lambda x. Multiplying
both sides by (A-sI), we get
x = lambda Ax - lambda sx. Note that lambda is nonzero, since 0 is never
an eigenvalue of an invertible matrix. Dividing by lambda and rearranging,
we get Ax = (s + 1/lambda)x. Thus
x is an eigenvector
for A with eigenvalue
s + 1/lambda.
(This fact is used in Numerical Linear
Algebra to derive a good method for computing eigenvalues of large
matrices: the shifted Inverse Power Method, a variant of the Power Method.)
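A numerical illustration of the relation (a sketch using numpy; the matrix A and shift s below are arbitrary choices, not from the original):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 5.0]])        # eigenvalues 2 and 5
    s = 3.0

    B = np.linalg.inv(A - s * np.eye(2))
    for lam in np.linalg.eigvals(B):
        print(s + 1 / lam)            # prints 2.0 and 5.0, the eigenvalues of A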
-
E) The eigenvalues of aE_11 + bE_12 + bE_21 + aE_22
(the 2 x 2 matrix with a's on the diagonal and b's off the diagonal)
are lambda = a+b and lambda = a-b -- TRUE. The characteristic
polynomial is lambda^2 - 2a lambda +
(a^2 - b^2), which factors as
(lambda - (a+b))(lambda - (a-b)). Corresponding eigenvectors
are (1,1) and (1,-1): T(1,1) = (a+b,a+b) = (a+b)(1,1) and
T(1,-1) = (a-b,b-a) = (a-b)(1,-1).
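A numerical check with sample values a = 3, b = 2 (a sketch using numpy; the values are arbitrary, not from the original):

    import numpy as np

    a, b = 3.0, 2.0
    M = a * np.eye(2) + b * np.array([[0.0, 1.0],
                                      [1.0, 0.0]])   # a*E11 + b*E12 + b*E21 + a*E22
    print(np.linalg.eigvals(M))            # a+b = 5 and a-b = 1 (in some order)
    print(M @ np.array([1.0,  1.0]))       # [5. 5.] = (a+b)*(1, 1)
    print(M @ np.array([1.0, -1.0]))       # [1. -1.] = (a-b)*(1, -1)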
-
F) If the linear mapping T : R^3 -> R^3
has eigenvalues lambda = 1, 2, and x_1, x_2
are linearly independent eigenvectors for lambda = 1, while y
is an eigenvector for lambda = 2, then
E = {x_1, x_2, y}
is linearly independent -- TRUE.
Consider a linear combination a x_1
+ b x_2 + c y = 0. Apply T to both sides. Then
a x_1
+ b x_2 + 2c y = 0. Subtracting 2 times the previous equation
from this one gives -a x_1 - b x_2 = 0, i.e.,
a x_1 + b x_2 = 0. Since
{x_1, x_2}
is linearly independent, a = b = 0. The original equation then becomes
c y = 0, and y is nonzero (being an eigenvector), so c = 0 also.
(This is a special case of the third ``Proof to Know'' on the review sheet.)
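The statement can also be illustrated with a concrete T (a sketch using numpy; the particular T and vectors are arbitrary choices, not from the original):

    import numpy as np

    T = np.diag([1.0, 1.0, 2.0])       # eigenvalue 1 on the xy-plane, 2 on the z-axis
    x1 = np.array([1.0, 0.0, 0.0])
    x2 = np.array([1.0, 1.0, 0.0])     # independent of x1, also an eigenvector for 1
    y  = np.array([0.0, 0.0, 1.0])     # eigenvector for 2

    assert np.allclose(T @ x1, x1) and np.allclose(T @ x2, x2)
    assert np.allclose(T @ y, 2 * y)
    print(np.linalg.matrix_rank(np.column_stack([x1, x2, y])))   # 3, so E is linearly independent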
-
G) If A is an n x n matrix with linearly independent
rows, then det(A) = 0 -- FALSE. In fact the determinant must
be nonzero, because A is invertible in this case. To see
this, recall that we know det(A) = det(A^t).
The columns of A^t are the rows of A, so they are n linearly independent
vectors in R^n, which says that they form a basis of R^n.
So the image of A^t has dimension n, and the Dimension Theorem says
the dimension of the kernel of A^t is zero.
Hence A^t is invertible, so
det(A^t) = det(A) is nonzero.
(Note: It is a general fact for all matrices (not necessarily square)
that the dimension of the span of the rows is the same as the dimension
of the span of the columns. This can be seen by reducing the
matrix to echelon form.)
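A numerical illustration (a sketch using numpy; the matrix is an arbitrary example with linearly independent rows, not from the original):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0],
                  [4.0, 0.0, 1.0]])
    print(np.linalg.matrix_rank(A))    # 3, so the rows are linearly independent
    print(np.linalg.det(A))            # approximately 25, nonzero as claimed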
-
H) Let V be a vector space of dimension n and
T: V -> V be a linear mapping. If [T]_a^a
is the identity matrix for some basis a, then T
is the identity mapping -- TRUE. Let a = {x_1, ... , x_n}.
Since the matrix of T with respect to a is
the identity matrix, T(x_i) = x_i
for all i = 1, ... , n. Since T is linear,
this implies T(x) = x for all x in V.
So T is the identity mapping.
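A small numerical illustration of the change-of-basis computation behind this (a sketch using numpy; the basis below is an arbitrary choice, not from the original):

    import numpy as np

    # Columns of P form a basis a of R^3.  If [T]_a^a = I, the standard-basis
    # matrix of T is P @ I @ P^(-1) = I, so T(x) = x for every x.
    P = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
    T_standard = P @ np.eye(3) @ np.linalg.inv(P)
    print(np.allclose(T_standard, np.eye(3)))   # True: T is the identity mapping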