# 1.7* Maximal Linearly Independent Subsets

In this section, several significant results from Section 1.6 are extended to infinite-dimensional vector spaces. Our principal goal here is to prove that every vector space has a basis. This result is important in the study of infinite-dimensional vector spaces because it is often difficult to construct an explicit basis for such a space. Consider, for example, the vector space of real numbers over the field of rational numbers. There is no obvious way to construct a basis for this space, and yet it follows from the results of this section that such a basis does exist.

The difficulty that arises in extending the theorems of the preceding section to infinite-dimensional vector spaces is that the principle of mathematical induction, which played a crucial role in many of the proofs of Section 1.6, is no longer adequate. Instead, an alternate result called the *Hausdorff maximal principle* is needed. Before stating this principle, we need to introduce some terminology.

# Definition.

*Let $\mathcal{F}$ be a family of sets. A member `M` of $\mathcal{F}$ is called **maximal** (with respect to set inclusion) if `M` is contained in no member of $\mathcal{F}$ other than `M` itself.*

# Example 1

Let $\mathcal{F}$ be the family of all subsets of a nonempty set `S`. (This family $\mathcal{F}$ is called the **power set** of `S`.) The set `S` is easily seen to be a maximal element of $\mathcal{F}$.

# Example 2

Let `S` and `T` be disjoint nonempty sets, and let $\mathcal{F}$ be the union of their power sets. Then `S` and `T` are both maximal elements of $\mathcal{F}$.
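The claims of Examples 1 and 2 can be checked mechanically for small sets. The following Python sketch is purely illustrative (the names `power_set` and `is_maximal` are not from the text); it tests maximality by checking that no member of the family properly contains the candidate.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, returned as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_maximal(family, m):
    """m is maximal in family iff m is a proper subset of no member."""
    return all(not (m < a) for a in family)  # '<' is proper subset

# Example 1: S itself is a maximal element of its power set.
S = {1, 2, 3}
F = power_set(S)
assert is_maximal(F, frozenset(S))

# Example 2: S and T disjoint; both are maximal in the union of their power sets.
T = {4, 5}
F2 = power_set(S) + power_set(T)
assert is_maximal(F2, frozenset(S)) and is_maximal(F2, frozenset(T))
```

Note that in Example 2 neither `S` nor `T` contains the other, so a family may have several distinct maximal members.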

# Example 3

Let $\mathcal{F}$ be the family of all finite subsets of an infinite set `S`. Then $\mathcal{F}$ has no maximal element. For if `M` is any member of $\mathcal{F}$ and `s` is any element of `S` that is not in `M`, then $M\cup \left\{s\right\}$ is a member of $\mathcal{F}$ that contains `M` as a proper subset.
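The argument of Example 3 is constructive: from any finite subset `M` of an infinite set one can always produce a strictly larger finite subset. A minimal Python sketch, taking `S` to be the (infinite) set of nonnegative integers; the name `extend` is illustrative only.

```python
from itertools import count

def extend(M):
    """Given a finite set M of nonnegative integers, return M | {s}
    for the smallest nonnegative integer s not in M."""
    s = next(n for n in count() if n not in M)
    return M | {s}

M = {0, 1, 2, 5}
M2 = extend(M)
assert M < M2 and len(M2) == len(M) + 1  # M is not maximal
```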

# Definition.

*A collection of sets `C` is called a **chain** (or **nest** or **tower**) if for each pair of sets `A` and `B` in `C`, either $A\subseteq B$ or $B\subseteq A$.*

# Example 4

For each positive integer `n`, let ${A}_{n}=\{1, 2, \dots, n\}$. Then the collection of sets $C=\{A_n : n=1, 2, 3, \dots\}$ is a chain. In fact, $A_m\subseteq A_n$ if and only if $m\le n$.
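The chain condition of Example 4 is easy to verify directly for finitely many of the sets $A_n$. A short Python sketch (the name `is_chain` is not from the text) checks that every pair of members is comparable under inclusion:

```python
def is_chain(family):
    """True iff every two members are comparable under inclusion."""
    family = list(family)
    return all(a <= b or b <= a
               for i, a in enumerate(family)
               for b in family[i + 1:])

# Example 4: A_n = {1, ..., n} for n = 1, ..., 5
A = [frozenset(range(1, n + 1)) for n in range(1, 6)]
assert is_chain(A)

# For contrast: {1} and {2} are incomparable, so this is not a chain.
assert not is_chain([frozenset({1}), frozenset({2})])
```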

With this terminology we can now state the Hausdorff maximal principle.

**Hausdorff Maximal Principle.** *Let $\mathcal{F}$ be a family of sets. If, for each chain $C\subseteq \mathcal{F}$, there exists a member of $\mathcal{F}$ that contains all the members of $C$, then $\mathcal{F}$ contains a maximal member.*

Because the Hausdorff maximal principle guarantees the existence of maximal elements in a family of sets satisfying the hypothesis above, it is useful to reformulate the definition of a basis in terms of a maximal property. In Theorem 1.12, we show that this is possible; in fact, the concept defined next is equivalent to a basis.

# Definition.

*Let `S` be a subset of a vector space V. A **maximal linearly independent subset** of `S` is a subset `B` of `S` satisfying both of the following conditions.*

(a) *`B` is linearly independent.*

(b) *The only linearly independent subset of `S` that contains `B` is `B` itself.*

# Example 5

Example 2 of Section 1.4 exhibits a maximal linearly independent subset of the set `S` of polynomials considered there in ${\text{P}}_{3}(R)$. In this case, however, any subset of `S` consisting of two polynomials is easily shown to be a maximal linearly independent subset of `S`. Thus maximal linearly independent subsets of a set need not be unique.

A basis $\beta $ for a vector space V is a maximal linearly independent subset of V, because

1. $\beta $ is linearly independent by definition, and
2. if $v\in \text{V}$ and $v\notin \beta $, then $\beta \cup \left\{v\right\}$ is linearly dependent by Theorem 1.7 (p. 40) because $\text{span}(\beta )=\text{V}$.
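For a concrete finite-dimensional illustration (not part of the text's argument), one can verify computationally that the standard basis of $R^3$ is maximal: it is linearly independent, and adjoining any further vector produces a dependent set, since the basis already spans $R^3$. The sketch below uses exact rational arithmetic, with an illustrative `rank` helper based on Gaussian elimination.

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors, via Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

beta = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # standard basis of R^3
assert independent(beta)

# Adjoining any extra vector v yields a dependent set, since span(beta) = R^3.
for v in [(1, 2, 3), (-1, 0, 5), (7, 7, 7)]:
    assert not independent(beta + [v])
```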

Our next result shows that the converse of this statement is also true.

# Theorem 1.12.

*Let* V *be a vector space and* `S` *a subset that generates* V. *If $\beta $ is a maximal linearly independent subset of* `S`, *then $\beta $ is a basis for* V.

Proof. Let $\beta $ be a maximal linearly independent subset of `S`. Because $\beta $ is linearly independent, it suffices to prove that $\beta $ generates V. We claim that $S\subseteq \text{span}(\beta )$, for otherwise there exists $v\in S$ such that $v\notin \text{span}(\beta )$. Since Theorem 1.7 (p. 40) implies that $\beta \cup \left\{v\right\}$ is linearly independent, we have contradicted the maximality of $\beta $. Therefore $S\subseteq \text{span}(\beta )$. Because $\text{span}(S)=\text{V}$, it follows from Theorem 1.5 (p. 31) that $\text{span}(\beta )=\text{V}$.
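In a finite-dimensional setting, Theorem 1.12 suggests a simple procedure: scan a generating set and keep each vector that is independent of those already kept; the result is a maximal linearly independent subset of the generating set, hence a basis. The Python sketch below is an illustration only (the generating set and helper names are made up, not from the text), again using exact rational arithmetic.

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors, via Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def maximal_independent_subset(S):
    """Greedily build beta, a subset of S with no linearly independent
    proper superset inside S."""
    beta = []
    for v in S:
        if rank(beta + [v]) == len(beta) + 1:  # v is not in span(beta)
            beta.append(v)
    return beta

# S generates R^3: it contains a basis along with redundant vectors.
S = [(1, 1, 0), (2, 2, 0), (0, 1, 1), (1, 2, 1), (1, 0, 0)]
beta = maximal_independent_subset(S)
assert len(beta) == 3 and rank(beta) == 3  # beta is a basis for R^3
```

The greedy scan works here because each kept vector strictly increases the rank; in the infinite-dimensional setting this induction is exactly what fails, which is why the Hausdorff maximal principle is needed.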

Thus a subset of a vector space is a basis if and only if it is a maximal linearly independent subset of the vector space. Therefore we can accomplish our goal of proving that every vector space has a basis by showing that every vector space contains a maximal linearly independent subset. This result follows immediately from the next theorem.

# Theorem 1.13.

*Let* `S` *be a linearly independent subset of a vector space* V. *There exists a maximal linearly independent subset of* V *that contains* `S`.

Proof. Let $\mathcal{F}$ denote the family of all linearly independent subsets of V containing `S`. To show that $\mathcal{F}$ contains a maximal element, we show that if `C` is a chain in $\mathcal{F}$, then there exists a member `U` of $\mathcal{F}$ containing each member of `C`. If `C` is empty, take $U=S$. Otherwise take U equal to the union of the members of `C`. Clearly `U` contains each member of `C`, and so it suffices to prove that $U\in \mathcal{F}$ (i.e., that `U` is a linearly independent subset of V that contains `S`). Because each member of `C` is a subset of V containing `S`, we have $S\subseteq U\subseteq \text{V}$. Thus we need only prove that `U` is linearly independent. Let ${u}_{1},\text{}{u}_{2},\text{}\dots ,\text{}{u}_{n}$ be in `U` and ${a}_{1},\text{}{a}_{2},\text{}\dots ,\text{}{a}_{n}$ be scalars such that ${a}_{1}{u}_{1}+{a}_{2}{u}_{2}+\cdots +{a}_{n}{u}_{n}=0$. Because ${u}_{i}\in U$ for $i=1,\text{}2,\text{}\dots ,\text{}n$, there exists a set ${A}_{i}$ in `C` such that ${u}_{i}\in {A}_{i}$. But since `C` is a chain, one of these sets, say ${A}_{k}$, contains all the others. Thus ${u}_{i}\in {A}_{k}$ for $i=1,\text{}2,\text{}\dots ,\text{}n$. However, ${A}_{k}$ is a linearly independent set; so ${a}_{1}{u}_{1}+{a}_{2}{u}_{2}+\cdots +{a}_{n}{u}_{n}=0$ implies that ${a}_{1}={a}_{2}=\cdots ={a}_{n}=0$. It follows that `U` is linearly independent.

The Hausdorff maximal principle implies that $\mathcal{F}$ has a maximal element. This element is easily seen to be a maximal linearly independent subset of V that contains `S`.

# Corollary.

*Every vector space has a basis.*

It can be shown, analogously to Corollary 1 of the replacement theorem (p. 47), that every basis for an infinite-dimensional vector space has the same *cardinality.* (Sets have the same cardinality if there is a one-to-one and onto mapping between them.) (See, for example, N. Jacobson, *Lectures in Abstract Algebra,* vol. 2, Linear Algebra, D. Van Nostrand Company, New York, 1953, p. 240.)

Exercises 4–7 extend other results from Section 1.6 to infinite-dimensional vector spaces.

# Exercises

1. Label the following statements as true or false.

    (a) Every family of sets contains a maximal element.

    (b) Every chain contains a maximal element.

    (c) If a family of sets has a maximal element, then that maximal element is unique.

    (d) If a chain of sets has a maximal element, then that maximal element is unique.

    (e) A basis for a vector space is a maximal linearly independent subset of that vector space.

    (f) A maximal linearly independent subset of a vector space is a basis for that vector space.

2. Show that the set of convergent sequences is an infinite-dimensional subspace of the vector space of all sequences of real numbers. (See Exercise 21 in Section 1.3.)

3. Let V be the set of real numbers regarded as a vector space over the field of rational numbers. Prove that V is infinite-dimensional. *Hint:* Use the fact that $\pi $ is transcendental, that is, $\pi $ is not a zero of any polynomial with rational coefficients.

4. Let W be a subspace of a (not necessarily finite-dimensional) vector space V. Prove that any basis for W is a subset of a basis for V.

5. Prove the following infinite-dimensional version of Theorem 1.8 (p. 44): Let $\beta $ be a subset of an infinite-dimensional vector space V. Then $\beta $ is a basis for V if and only if for each nonzero vector `v` in V, there exist unique vectors ${u}_{1}, {u}_{2}, \dots, {u}_{n}$ in $\beta $ and unique nonzero scalars ${c}_{1}, {c}_{2}, \dots, {c}_{n}$ such that $v={c}_{1}{u}_{1}+{c}_{2}{u}_{2}+\cdots +{c}_{n}{u}_{n}$. Visit goo.gl/fNWSDM for a solution.

6. Prove the following generalization of Theorem 1.9 (p. 45): Let ${S}_{1}$ and ${S}_{2}$ be subsets of a vector space V such that ${S}_{1}\subseteq {S}_{2}$. If ${S}_{1}$ is linearly independent and ${S}_{2}$ generates V, then there exists a basis $\beta $ for V such that ${S}_{1}\subseteq \beta \subseteq {S}_{2}$. *Hint:* Apply the Hausdorff maximal principle to the family of all linearly independent subsets of ${S}_{2}$ that contain ${S}_{1}$, and proceed as in the proof of Theorem 1.13.

7. Prove the following generalization of the replacement theorem: Let $\beta $ be a basis for a vector space V, and let `S` be a linearly independent subset of V. There exists a subset ${S}_{1}$ of $\beta $ such that $S\cup {S}_{1}$ is a basis for V.