Mathematics 44, section 1 -- Linear Algebra

Notes on Proof of Invariance of Dimension

We will present an alternate proof of the important result given in Proposition 6.6.2 of Smith. The basis for this alternate approach is the understanding we have begun to develop of the process of solving systems of linear equations by elimination of variables. To begin, here is a slightly more precise statement of a general process that can be applied to put a system in "triangular" form, with leading terms containing successively larger-numbered variables as you "go down" the system:

To put a general system of m equations in n unknowns

a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + ... + a_{2n}x_n = b_2
...
a_{m1}x_1 + a_{m2}x_2 + ... + a_{mn}x_n = b_m

into triangular form:

  1. Start with the variable x_1. If there is an equation containing x_1 with a non-zero coefficient, then (if necessary) interchange equations to put that one first, and subtract suitable multiples of it from the equations below to eliminate x_1 from them. Otherwise proceed to step 2 below.
  2. Continue with the variable x_2, looking only at the equations below the first. If one of them contains x_2 with a non-zero coefficient, then (if necessary) interchange equations to put that one immediately after the equation containing x_1 as leading term (if there is one), and use it to eliminate x_2 from the equations below it. Otherwise continue with x_3.
  3. Continue in the same way with x_3, x_4, etc. until you run out of equations or variables. (A code sketch of the whole process appears after the note below.)

Note: If an entire equation cancels out (reduces to 0 = 0), that's OK. It just reduces the actual "effective number" of equations in your system.
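
Here is a short Python sketch of the process just described. The code is our own illustration (the name triangular_form and the choice of exact rational arithmetic via Fraction are ours, not anything from Smith); it returns the reduced system together with the list of columns that ended up holding leading terms.

    from fractions import Fraction

    def triangular_form(A, b):
        """Put the system A x = b into triangular form by the process above.
        A is an m x n list of rows, b a list of length m; exact rational
        arithmetic avoids any rounding issues."""
        A = [[Fraction(x) for x in row] for row in A]
        b = [Fraction(x) for x in b]
        m, n = len(A), len(A[0])
        lead_cols = []        # columns that end up holding a leading term
        row = 0               # position where the next leading term goes
        for col in range(n):  # steps 1, 2, 3: work through x_1, x_2, ...
            # look for an equation at or below `row` containing x_{col+1}
            pivot = next((r for r in range(row, m) if A[r][col] != 0), None)
            if pivot is None:
                continue      # no such equation: go on to the next variable
            # interchange equations to put it in position
            A[row], A[pivot] = A[pivot], A[row]
            b[row], b[pivot] = b[pivot], b[row]
            # eliminate x_{col+1} from the equations below it
            for r in range(row + 1, m):
                factor = A[r][col] / A[row][col]
                A[r] = [a - factor * p for a, p in zip(A[r], A[row])]
                b[r] = b[r] - factor * b[row]
            lead_cols.append(col)
            row += 1
            if row == m:
                break
        return A, b, lead_cols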

Key Observation

If you apply this process to a system of equations with m < n -- that is, a system with more variables than equations -- and all the right-hand sides b_i are zero, then at the end of the process you will have a system in triangular form in which some variables do not appear in any leading term: each equation contributes at most one leading term, so there are at most m of them, while there are n > m variables. These "free" variables can be assigned values arbitrarily, and the triangular form equations can then be used, from the bottom up, to solve for the variables that do appear in leading terms. Since the free variables can be given non-zero values:

Proposition 1.

Any system of m linear equations in n unknowns with m < n, in which all the right-hand sides b_i are zero, has a solution in which some variable is non-zero.
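
Continuing the sketch above (and reusing triangular_form and Fraction from it), here is a hypothetical helper that carries out exactly this assign-and-back-substitute step; the example system at the end is ours.

    def nonzero_solution(A):
        """Return a solution of the homogeneous system A x = 0 with some
        variable non-zero, assuming A has fewer rows than columns."""
        m, n = len(A), len(A[0])
        T, _, lead_cols = triangular_form(A, [0] * m)
        free = [j for j in range(n) if j not in lead_cols]
        x = [Fraction(0)] * n
        x[free[0]] = Fraction(1)   # m < n guarantees a free variable exists
        # back-substitute for the leading variables, bottom equation first
        for i in reversed(range(len(lead_cols))):
            col = lead_cols[i]
            s = sum(T[i][j] * x[j] for j in range(col + 1, n))
            x[col] = -s / T[i][col]
        return x

    # e.g. x_1 + 2x_2 + x_3 = 0 and 2x_1 + 4x_2 + x_3 = 0 (m = 2 < n = 3):
    print(nonzero_solution([[1, 2, 1], [2, 4, 1]]))   # -> [-2, 1, 0]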

Proposition 2.

Let E = {A_1, ..., A_k} be a finite set of vectors in a vector space V. If F is a subset of L(E) that is linearly independent, then |F| <= k; in particular, F is a finite set.

Proof. (By contradiction.) Suppose F is a linearly independent subset of L(E) containing at least k + 1 vectors B_1, ..., B_{k+1}. Since each B_j is in the linear span of E, we can write

(1)

B_j = c_{j1}A_1 + ... + c_{jk}A_k

for some c_{ji} in R. Consider a linear combination of the B_j adding up to the zero vector:

0 = d_1B_1 + ... + d_{k+1}B_{k+1}

Substituting for the B_j from (1) and collecting terms, we get:

0 = (c_{11}d_1 + ... + c_{k+1,1}d_{k+1})A_1 + ... + (c_{1k}d_1 + ... + c_{k+1,k}d_{k+1})A_k

This equation will be true if the coefficients of the A_i are all zero:

c_{11}d_1 + ... + c_{k+1,1}d_{k+1} = 0
...
c_{1k}d_1 + ... + c_{k+1,k}d_{k+1} = 0

But this is a system of k linear equations in k + 1 unknowns (the d_j), with all right-hand sides zero. Hence, by Proposition 1, there is a solution in which not all the d_j are zero. But this contradicts the hypothesis that F was linearly independent. //
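
To see the proof's counting argument in action, here is a small illustration reusing nonzero_solution from above. The data is ours: three vectors B_1, B_2, B_3 written in terms of a spanning set of k = 2 vectors, with B_3 = B_1 + B_2 built in as the hidden dependence.

    # c[j] holds the coefficients expressing B_{j+1} in terms of A_1, A_2:
    # B_1 = 1*A_1 + 0*A_2, B_2 = 0*A_1 + 1*A_2, B_3 = 1*A_1 + 1*A_2
    c = [[1, 0], [0, 1], [1, 1]]
    k = len(c[0])
    # The proof's system has one equation per A_i and one unknown d_j per
    # B_j -- i.e. k equations in k + 1 unknowns -- so we transpose c:
    M = [[c[j][i] for j in range(len(c))] for i in range(k)]
    print(nonzero_solution(M))   # -> [-1, -1, 1], i.e. -B_1 - B_2 + B_3 = 0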

From Proposition 2, we deduce our main result:

Theorem (Invariance of Dimension).

Let V be a vector space that has some finite spanning set. Assume that E and F are two bases of V. Then E and F are both finite and contain the same number of vectors.

Proof. The finite spanning set spans V, and the basis E is linearly independent, so applying Proposition 2 (with the spanning set in the role of E there and the basis E in the role of F) we see that E is finite. Similarly, F is finite. Say |E| = m and |F| = n. By Proposition 2 again, since E spans V and F is linearly independent, we have |F| = n <= |E| = m. By the same reasoning, since F spans V and E is linearly independent, we have |E| = m <= |F| = n. Hence m = n. //

Neat proof, no?