# 3.4 Systems of Linear Equations—Computational Aspects

In Section 3.3, we obtained a necessary and sufficient condition for a system of linear equations to have solutions (Theorem 3.11 p. 174) and learned how to express the solutions to a nonhomogeneous system in terms of solutions to the corresponding homogeneous system (Theorem 3.9 p. 172). The latter result enables us to determine all the solutions to a given system if we can find one solution to the given system and a basis for the solution set of the corresponding homogeneous system. In this section, we use elementary row operations to accomplish these two objectives simultaneously. The essence of this technique is to transform a given system of linear equations into a system having the same solutions, but which is easier to solve (as in Section 1.4).

# Definition.

*Two systems of linear equations are called equivalent if they have the same solution set.*

The following theorem and corollary give a useful method for obtaining equivalent systems.

# Theorem 3.13.

*Let* $Ax=b$ *be a system of* `m` *linear equations in* `n` *unknowns, and let* `C` *be an invertible* $m\times m$ *matrix. Then the system* $(CA)x=Cb$ *is equivalent to* $Ax=b$.

# Proof.

Let `K` be the solution set for $Ax=b$ and ${K}^{\prime}$ the solution set for $(CA)x=Cb$. If $w\in K$, then $Aw=b$. So $(CA)w=Cb$, and hence $w\in {K}^{\prime}$. Thus $K\subseteq {K}^{\prime}$.

Conversely, if $w\in {K}^{\prime}$, then $(CA)w=Cb$. Hence

$$Aw={C}^{-1}(CA)w={C}^{-1}(Cb)=b,$$

so $w\in K$. Thus ${K}^{\prime}\subseteq K$, and therefore, $K={K}^{\prime}$.

# Corollary.

*Let* $Ax=b$ *be a system of* `m` *linear equations in* `n` *unknowns. If* $({A}^{\prime}|{b}^{\prime})$ *is obtained from* $(A|b)$ *by a finite number of elementary row operations, then the system* ${A}^{\prime}x={b}^{\prime}$ *is equivalent to the original system.*

# Proof.

Suppose that $({A}^{\prime}|{b}^{\prime})$ is obtained from $(A|b)$ by elementary row operations. These may be executed by multiplying $(A|b)$ by elementary $m\times m$ matrices ${E}_{1},{E}_{2},\dots ,{E}_{p}$. Let $C={E}_{p}\cdots {E}_{2}{E}_{1}$; then

$$({A}^{\prime}|{b}^{\prime})={E}_{p}\cdots {E}_{2}{E}_{1}(A|b)=C(A|b).$$

Since each ${E}_{i}$ is invertible, so is `C`. Now ${A}^{\prime}=CA$ and ${b}^{\prime}=Cb$. Thus by Theorem 3.13, the system ${A}^{\prime}x={b}^{\prime}$ is equivalent to the system $Ax=b$.
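The corollary's mechanism is easy to check numerically: each elementary row operation equals left multiplication by an elementary matrix, and a product of elementary matrices is invertible. A minimal sketch (the matrix `A` and the operations chosen here are illustrative, not from the text):

```python
import numpy as np

# A hypothetical 3x4 augmented matrix (A|b); any values work.
A = np.array([[3., 2., 3., -2.],
              [1., 1., 1., 0.],
              [1., 2., 1., -1.]])

# Type 1 elementary matrix: interchange rows 1 and 3 of the identity.
E1 = np.eye(3)[[2, 1, 0]]
# Type 2 elementary matrix: multiply row 2 of the identity by -1.
E2 = np.diag([1., -1., 1.])

# Left multiplication by E1 performs the interchange on A itself.
assert np.array_equal(E1 @ A, A[[2, 1, 0]])

# Performing both operations amounts to multiplying by C = E2 E1,
# which is invertible (its determinant is nonzero).
C = E2 @ E1
assert np.isclose(np.linalg.det(C), 1.0)
```

Here `np.eye(3)[[2, 1, 0]]` builds the elementary matrix by applying the row interchange to the identity matrix, which is exactly how elementary matrices are defined.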

We now describe a method for solving any system of linear equations. Consider, for example, the system of linear equations

$$\begin{array}{rcrcrcrcr}3{x}_{1}& +& 2{x}_{2}& +& 3{x}_{3}& -& 2{x}_{4}& =& 1\\ {x}_{1}& +& {x}_{2}& +& {x}_{3}& & & =& 3\\ {x}_{1}& +& 2{x}_{2}& +& {x}_{3}& -& {x}_{4}& =& 2.\end{array}$$

First, we form the augmented matrix

$$\left(\begin{array}{rrrrr}3& 2& 3& -2& 1\\ 1& 1& 1& 0& 3\\ 1& 2& 1& -1& 2\end{array}\right).$$

By using elementary row operations, we transform the augmented matrix into an upper triangular matrix in which the first nonzero entry of each row is 1, and it occurs in a column to the right of the first nonzero entry of each preceding row. (Recall that matrix `A` is upper triangular if ${A}_{ij}=0$ whenever $i>j$.)

**Step 1.** *In the leftmost nonzero column, create a 1 in the first row.* In our example, we can accomplish this step by interchanging the first and third rows. The resulting matrix is

$$\left(\begin{array}{rrrrr}1& 2& 1& -1& 2\\ 1& 1& 1& 0& 3\\ 3& 2& 3& -2& 1\end{array}\right).$$

**Step 2.** *By means of type 3 row operations, use the first row to obtain zeros in the remaining positions of the leftmost nonzero column.* In our example, we must add $-1$ times the first row to the second row and then add $-3$ times the first row to the third row to obtain

$$\left(\begin{array}{rrrrr}1& 2& 1& -1& 2\\ 0& -1& 0& 1& 1\\ 0& -4& 0& 1& -5\end{array}\right).$$

**Step 3.** *Create a 1 in the next row in the leftmost possible column, without using previous row(s).* In our example, the second column is the leftmost possible column, and we can obtain a 1 in the second row, second column by multiplying the second row by $-1$. This operation produces

$$\left(\begin{array}{rrrrr}1& 2& 1& -1& 2\\ 0& 1& 0& -1& -1\\ 0& -4& 0& 1& -5\end{array}\right).$$

**Step 4.** *Now use type 3 elementary row operations to obtain zeros below the 1 created in the preceding step.* In our example, we must add four times the second row to the third row. The resulting matrix is

$$\left(\begin{array}{rrrrr}1& 2& 1& -1& 2\\ 0& 1& 0& -1& -1\\ 0& 0& 0& -3& -9\end{array}\right).$$

**Step 5.** *Repeat steps 3 and 4 on each succeeding row until no nonzero rows remain.* (This creates zeros below the first nonzero entry in each row.) In our example, this can be accomplished by multiplying the third row by $-{\displaystyle \frac{1}{3}}$. This operation produces

$$\left(\begin{array}{rrrrr}1& 2& 1& -1& 2\\ 0& 1& 0& -1& -1\\ 0& 0& 0& 1& 3\end{array}\right).$$

We have now obtained the desired matrix. To complete the simplification of the augmented matrix, we must make the first nonzero entry in each row the only nonzero entry in its column. (This corresponds to eliminating certain unknowns from all but one of the equations.)

**Step 6.** *Work upward, beginning with the last nonzero row, and add multiples of each row to the rows above.* (This creates zeros above the first nonzero entry in each row.) In our example, the third row is the last nonzero row, and the first nonzero entry of this row lies in column 4. Hence we add the third row to the first and second rows to obtain zeros in row 1, column 4 and row 2, column 4. The resulting matrix is

$$\left(\begin{array}{ccccc}1& 2& 1& 0& 5\\ 0& 1& 0& 0& 2\\ 0& 0& 0& 1& 3\end{array}\right).$$

**Step 7.** *Repeat the process described in step 6 for each preceding row until it is performed with the second row, at which time the reduction process is complete.* In our example, we must add $-2$ times the second row to the first row in order to make the first row, second column entry become zero. This operation produces

$$\left(\begin{array}{ccccc}1& 0& 1& 0& 1\\ 0& 1& 0& 0& 2\\ 0& 0& 0& 1& 3\end{array}\right).$$

We have now obtained the desired reduction of the augmented matrix. This matrix corresponds to the system of linear equations

$$\begin{array}{rrrrrcl}{x}_{1}& +& & {x}_{3}& & =& 1\\ & & {x}_{2}& & & =& 2\\ & & & & {x}_{4}& =& 3.\end{array}$$

Recall that, by the corollary to Theorem 3.13, this system is equivalent to the original system. But this system is easily solved. Obviously ${x}_{2}=2$ and ${x}_{4}=3$. Moreover, ${x}_{1}$ and ${x}_{3}$ can have any values provided their sum is 1. Letting ${x}_{3}=t$, we then have ${x}_{1}=1-t$. Thus an arbitrary solution to the original system has the form

$$\left(\begin{array}{c}{x}_{1}\\ {x}_{2}\\ {x}_{3}\\ {x}_{4}\end{array}\right)=\left(\begin{array}{c}1-t\\ 2\\ t\\ 3\end{array}\right)=\left(\begin{array}{r}1\\ 2\\ 0\\ 3\end{array}\right)+t\left(\begin{array}{r}-1\\ 0\\ 1\\ 0\end{array}\right).$$

Observe that

$$\left\{\left(\begin{array}{r}-1\\ 0\\ 1\\ 0\end{array}\right)\right\}$$

is a basis for the solution set of the homogeneous system of equations corresponding to the given system.
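The seven steps above can be collected into a short program. The following is a minimal sketch of Gaussian elimination (forward pass, then backward pass), not the roundoff-aware variant a numerical library would use; exact `Fraction` arithmetic avoids floating-point error. Applied to the augmented matrix of the example (the matrix shown in step 1, with the first and third rows in their original order), it reproduces the reduced matrix obtained above.

```python
from fractions import Fraction

def rref(matrix):
    """Reduce a matrix to reduced row echelon form by Gaussian
    elimination: a forward pass, then a backward pass."""
    A = [[Fraction(x) for x in row] for row in matrix]
    m, n = len(A), len(A[0])
    pivot_row = 0
    pivot_cols = []
    # Forward pass (steps 1-5): create a leading 1 in each row and
    # zeros below it, moving left to right.
    for col in range(n):
        pivot = next((r for r in range(pivot_row, m) if A[r][col] != 0), None)
        if pivot is None:
            continue                                     # no pivot in this column
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]  # type 1 operation
        lead = A[pivot_row][col]
        A[pivot_row] = [x / lead for x in A[pivot_row]]  # type 2 operation
        for r in range(pivot_row + 1, m):                # type 3 operations
            factor = A[r][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_cols.append(col)
        pivot_row += 1
        if pivot_row == m:
            break
    # Backward pass (steps 6-7): zero out entries above each leading 1.
    for i in reversed(range(len(pivot_cols))):
        col = pivot_cols[i]
        for r in range(i):
            factor = A[r][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[i])]
    return A

# Augmented matrix of the example system.
aug = [[3, 2, 3, -2, 1],
       [1, 1, 1, 0, 3],
       [1, 2, 1, -1, 2]]
assert rref(aug) == [[1, 0, 1, 0, 1],
                     [0, 1, 0, 0, 2],
                     [0, 0, 0, 1, 3]]
```

The sketch divides by the leading entry instead of interchanging rows to create the leading 1, but since the reduced row echelon form is unique, the result agrees with the hand computation.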

In the preceding example we performed elementary row operations on the augmented matrix of the system until we obtained the augmented matrix of a system having properties 1, 2, and 3 on page 28. Such a matrix has a special name.

# Definition.

*A matrix is said to be in reduced row echelon form if the following three conditions are satisfied.*

(a) Any row containing a nonzero entry precedes any row in which all the entries are zero (if any).

(b) The first nonzero entry in each row is the only nonzero entry in its column.

(c) The first nonzero entry in each row is 1 and it occurs in a column to the right of the first nonzero entry in the preceding row.

# Example 1

(a) The matrix on page 184 is in reduced row echelon form. Note that the first nonzero entry of each row is 1 and that the column containing each such entry has all zeros otherwise. Also note that each time we move downward to a new row, we must move to the right one or more columns to find the first nonzero entry of the new row.

(b) The matrix

$$\left(\begin{array}{ccc}1& 1& 0\\ 0& 1& 0\\ 1& 0& 1\end{array}\right)$$

is *not* in reduced row echelon form, because the first column, which contains the first nonzero entry in row 1, contains another nonzero entry. Similarly, the matrix

$$\left(\begin{array}{cccc}0& 1& 0& 2\\ 1& 0& 0& 1\\ 0& 0& 1& 1\end{array}\right)$$

is *not* in reduced row echelon form, because the first nonzero entry of the second row is not to the right of the first nonzero entry of the first row. Finally, the matrix

$$\left(\begin{array}{ccc}2& 0& 0\\ 0& 1& 0\end{array}\right)$$

is not in reduced row echelon form, because the first nonzero entry of the first row is not 1.

It can be shown (see the corollary to Theorem 3.16) that the reduced row echelon form of a matrix is unique; that is, if different sequences of elementary row operations are used to transform a matrix into matrices `Q` and ${Q}^{\prime}$ in reduced row echelon form, then $Q={Q}^{\prime}$. Thus, although there are many different sequences of elementary row operations that can be used to transform a given matrix into reduced row echelon form, they all produce the same result.

The procedure described on pages 182-184 for reducing an augmented matrix to reduced row echelon form is called Gaussian elimination. It consists of two separate parts.

In the *forward pass* (steps 1-5), the augmented matrix is transformed into an upper triangular matrix in which the first nonzero entry of each row is 1, and it occurs in a column to the right of the first nonzero entry of each preceding row.

In the *backward pass* or *back-substitution* (steps 6-7), the upper triangular matrix is transformed into reduced row echelon form by making the first nonzero entry of each row the only nonzero entry of its column.

Of all the methods for transforming a matrix into its reduced row echelon form, Gaussian elimination requires the fewest arithmetic operations. (For large matrices, it requires approximately 50% fewer operations than the Gauss-Jordan method, in which the matrix is transformed into reduced row echelon form by using the first nonzero entry in each row to make zero all other entries in its column.) Because of this efficiency, Gaussian elimination is the preferred method when solving systems of linear equations on a computer. In this context, the Gaussian elimination procedure is usually modified in order to minimize roundoff errors. Since discussion of these techniques is inappropriate here, readers who are interested in such matters are referred to books on numerical analysis.
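As a sketch of the library route: `numpy.linalg.solve` dispatches to a LAPACK routine that performs Gaussian elimination with partial pivoting, one of the roundoff-control modifications alluded to above. The small system here is illustrative.

```python
import numpy as np

# An invertible 2x2 system chosen for illustration.
A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])

# Solved by LAPACK via Gaussian elimination with partial pivoting.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)   # x is approximately (0.8, 1.4)
```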

When a matrix is in reduced row echelon form, the corresponding system of linear equations is easy to solve. We present below a procedure for solving any system of linear equations for which the augmented matrix is in reduced row echelon form. First, however, we note that every matrix can be transformed into reduced row echelon form by Gaussian elimination. In the forward pass, we satisfy conditions (a) and (c) in the definition of reduced row echelon form and thereby make zero all entries below the first nonzero entry in each row. Then in the backward pass, we make zero all entries above the first nonzero entry in each row, thereby satisfying condition (b) in the definition of reduced row echelon form.

# Theorem 3.14.

*Gaussian elimination transforms any matrix into its reduced row echelon form.*

We now describe a method for solving a system in which the augmented matrix is in reduced row echelon form. To illustrate this procedure, we consider the system

$$\begin{array}{rcrcrcrcrcr}2{x}_{1}& +& 3{x}_{2}& +& {x}_{3}& +& 4{x}_{4}& -& 9{x}_{5}& =& 17\\ {x}_{1}& +& {x}_{2}& +& {x}_{3}& +& {x}_{4}& -& 3{x}_{5}& =& 6\\ {x}_{1}& +& {x}_{2}& +& {x}_{3}& +& 2{x}_{4}& -& 5{x}_{5}& =& 8\\ 2{x}_{1}& +& 2{x}_{2}& +& 2{x}_{3}& +& 3{x}_{4}& -& 8{x}_{5}& =& 14\end{array}$$

for which the augmented matrix is

$$\left(\begin{array}{rrrrrr}2& 3& 1& 4& -9& 17\\ 1& 1& 1& 1& -3& 6\\ 1& 1& 1& 2& -5& 8\\ 2& 2& 2& 3& -8& 14\end{array}\right).$$

Applying Gaussian elimination to the augmented matrix of the system produces the following sequence of matrices.

$$\left(\begin{array}{rrrrrr}1& 1& 1& 1& -3& 6\\ 0& 1& -1& 2& -3& 5\\ 0& 0& 0& 1& -2& 2\\ 0& 0& 0& 0& 0& 0\end{array}\right),\phantom{\rule{1em}{0ex}}\left(\begin{array}{rrrrrr}1& 0& 2& 0& -2& 3\\ 0& 1& -1& 0& 1& 1\\ 0& 0& 0& 1& -2& 2\\ 0& 0& 0& 0& 0& 0\end{array}\right)$$

The system of linear equations corresponding to this last matrix is

$$\begin{array}{rl}{x}_{1}+2{x}_{3}-2{x}_{5}& =3\\ {x}_{2}-{x}_{3}+{x}_{5}& =1\\ {x}_{4}-2{x}_{5}& =2.\end{array}$$

Notice that we have ignored the last row since it consists entirely of zeros.

To solve a system for which the augmented matrix is in reduced row echelon form, divide the variables into two sets. The first set consists of those variables that appear as leftmost variables in one of the equations of the system (in this case the set is $\{{x}_{1},{x}_{2},{x}_{4}\}$). The second set consists of all the remaining variables (in this case, $\{{x}_{3},{x}_{5}\}$). To each variable in the second set, assign a parametric value ${t}_{1},{t}_{2},\dots$ $({x}_{3}={t}_{1},{x}_{5}={t}_{2})$, and then solve for the variables of the first set in terms of those in the second set:

$$\begin{array}{rl}{x}_{1}& =3-2{t}_{1}+2{t}_{2}\\ {x}_{2}& =1+{t}_{1}-{t}_{2}\\ {x}_{4}& =2+2{t}_{2}.\end{array}$$

Thus an arbitrary solution is of the form

$$\left(\begin{array}{c}{x}_{1}\\ {x}_{2}\\ {x}_{3}\\ {x}_{4}\\ {x}_{5}\end{array}\right)=\left(\begin{array}{r}3\\ 1\\ 0\\ 2\\ 0\end{array}\right)+{t}_{1}\left(\begin{array}{r}-2\\ 1\\ 1\\ 0\\ 0\end{array}\right)+{t}_{2}\left(\begin{array}{r}2\\ -1\\ 0\\ 2\\ 1\end{array}\right),$$

where ${t}_{1},{t}_{2}\in R$. Notice that

$$\left\{\left(\begin{array}{r}-2\\ 1\\ 1\\ 0\\ 0\end{array}\right),\left(\begin{array}{r}2\\ -1\\ 0\\ 2\\ 1\end{array}\right)\right\}$$

is a basis for the solution set of the corresponding homogeneous system of equations and

$$\left(\begin{array}{r}3\\ 1\\ 0\\ 2\\ 0\end{array}\right)$$

is a particular solution to the original system.

Therefore, in simplifying the augmented matrix of the system to reduced row echelon form, we are in effect simultaneously finding a particular solution to the original system and a basis for the solution set of the associated homogeneous system. Moreover, this procedure detects when a system is inconsistent, for by Exercise 3, solutions exist if and only if, in the reduction of the augmented matrix to reduced row echelon form, we do not obtain a row in which the only nonzero entry lies in the last column.
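The procedure just described can be written as a short routine. The sketch below assumes its input is an augmented matrix in reduced row echelon form with zero rows already removed; `general_solution` is a name introduced here for illustration. Applied to the reduced matrix of this section's first example, it recovers the particular solution and the homogeneous-system basis found there.

```python
def general_solution(R, n):
    """Given an augmented matrix (A'|b') in reduced row echelon form
    with its zero rows removed, and n unknowns, return a particular
    solution s0 and a basis for the solution set of A'x = 0."""
    # Leftmost (pivot) variable of each equation, and the free variables.
    pivot_cols = [next(j for j, entry in enumerate(row) if entry != 0)
                  for row in R]
    free_cols = [j for j in range(n) if j not in pivot_cols]
    # Particular solution: set every free variable to 0.
    s0 = [0] * n
    for i, j in enumerate(pivot_cols):
        s0[j] = R[i][n]                  # column n holds b'
    # One basis vector per free variable (set it to 1, the rest to 0).
    basis = []
    for f in free_cols:
        u = [0] * n
        u[f] = 1
        for i, j in enumerate(pivot_cols):
            u[j] = -R[i][f]
        basis.append(u)
    return s0, basis

# Reduced augmented matrix of this section's first example.
R = [[1, 0, 1, 0, 1],
     [0, 1, 0, 0, 2],
     [0, 0, 0, 1, 3]]
s0, basis = general_solution(R, 4)
assert s0 == [1, 2, 0, 3] and basis == [[-1, 0, 1, 0]]
```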

Thus to use this procedure for solving a system $Ax=b$ of `m` linear equations in `n` unknowns, we need only begin to transform the augmented matrix $(A|b)$ into its reduced row echelon form $({A}^{\prime}|{b}^{\prime})$ by means of Gaussian elimination. If a row is obtained in which the only nonzero entry lies in the last column, then the original system is inconsistent. Otherwise, discard any zero rows from $({A}^{\prime}|{b}^{\prime})$, and write the corresponding system of equations. Solve this system as described above to obtain an arbitrary solution of the form

$$s={s}_{0}+{t}_{1}{u}_{1}+{t}_{2}{u}_{2}+\cdots +{t}_{n-r}{u}_{n-r},$$

where `r` is the number of nonzero rows in ${A}^{\prime}$ $(r\le m)$. The preceding equation is called a **general solution** of the system $Ax=b$. It expresses an arbitrary solution `s` of $Ax=b$ in terms of $n-r$ parameters. The following theorem states that `s` cannot be expressed in fewer than $n-r$ parameters.
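Equivalently, by Exercise 3, the system is consistent exactly when $\text{rank}(A)=\text{rank}(A|b)$, which is easy to test numerically. A sketch with `numpy.linalg.matrix_rank` (the matrices below are illustrative):

```python
import numpy as np

def is_consistent(A, b):
    """Ax = b has a solution iff rank(A) = rank(A|b): no row of the
    reduced augmented matrix has its only nonzero entry in the last
    column."""
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

# The coefficient matrix of this section's first example, paired with
# the right-hand side (1, 2, 3), gives a consistent system.
A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 1.]])
assert is_consistent(A, np.array([1., 2., 3.]))

# x1 + x3 = 0 and x1 + x3 = 1 cannot both hold.
B = np.array([[1., 0., 1.],
              [1., 0., 1.]])
assert not is_consistent(B, np.array([0., 1.]))
```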

# Theorem 3.15.

*Let* $Ax=b$ *be a system of* `r` *nonzero equations in* `n` *unknowns. Suppose that* $\text{rank}(A)=\text{rank}(A|b)$ *and that* $(A|b)$ *is in reduced row echelon form. Then*

(a) $\text{rank}(A)=r.$

(b) *If the general solution obtained by the procedure above is of the form*

$$s={s}_{0}+{t}_{1}{u}_{1}+{t}_{2}{u}_{2}+\cdots +{t}_{n-r}{u}_{n-r},$$

*then* $\{{u}_{1},{u}_{2},\dots ,{u}_{n-r}\}$ *is a basis for the solution set of the corresponding homogeneous system, and* ${s}_{0}$ *is a solution to the original system.*

# Proof.

Since $(A|b)$ is in reduced row echelon form, $(A|b)$ must have `r` nonzero rows. Clearly these rows are linearly independent by the definition of the reduced row echelon form, and so $\text{rank}(A|b)=r$. Thus $\text{rank}(A)=r$.

Let `K` be the solution set for $Ax=b$, and let ${\text{K}}_{\text{H}}$ be the solution set for $Ax=0$. Setting ${t}_{1}={t}_{2}=\cdots ={t}_{n-r}=0$, we see that $s={s}_{0}\in K$. But by Theorem 3.9 (p. 172), $K=\left\{{s}_{0}\right\}+{\text{K}}_{\text{H}}$. Hence

$${\text{K}}_{\text{H}}=\{-{s}_{0}\}+K=\{{t}_{1}{u}_{1}+{t}_{2}{u}_{2}+\cdots +{t}_{n-r}{u}_{n-r}:{t}_{1},{t}_{2},\dots ,{t}_{n-r}\in R\}=\text{span}(\{{u}_{1},{u}_{2},\dots ,{u}_{n-r}\}).$$

Because $\text{rank}(A)=r$, we have $\mathrm{dim}({\text{K}}_{\text{H}})=n-r$. Thus since $\mathrm{dim}({\text{K}}_{\text{H}})=n-r$ and ${\text{K}}_{\text{H}}$ is generated by a set $\{{u}_{1},\text{}{u}_{2},\text{}\dots ,\text{}{u}_{n-r}\}$ containing at most $n-r$ vectors, we conclude that this set is a basis for ${\text{K}}_{\text{H}}$.

# An Interpretation of the Reduced Row Echelon Form

Let `A` be an $m\times n$ matrix with columns ${a}_{1},{a}_{2},\dots ,{a}_{n}$, and let `B` be the reduced row echelon form of `A`. Denote the columns of `B` by ${b}_{1},{b}_{2},\dots ,{b}_{n}$. If the rank of `A` is `r`, then the rank of `B` is also `r` by the corollary to Theorem 3.4 (p. 152). Because `B` is in reduced row echelon form, no nonzero row of `B` can be a linear combination of the other rows of `B`. Hence `B` must have exactly `r` nonzero rows, and if $r\ge 1$, the vectors ${e}_{1},{e}_{2},\dots ,{e}_{r}$ must occur among the columns of `B`. For $i=1,2,\dots ,r$, let ${j}_{i}$ denote a column number of `B` such that ${b}_{{j}_{i}}={e}_{i}$. We claim that ${a}_{{j}_{1}},{a}_{{j}_{2}},\dots ,{a}_{{j}_{r}}$, the columns of `A` corresponding to these columns of `B`, are linearly independent. For suppose that there are scalars ${c}_{1},{c}_{2},\dots ,{c}_{r}$ such that

$${c}_{1}{a}_{{j}_{1}}+{c}_{2}{a}_{{j}_{2}}+\cdots +{c}_{r}{a}_{{j}_{r}}=0.$$

Because `B` can be obtained from `A` by a sequence of elementary row operations, there exists (as in the proof of the corollary to Theorem 3.13) an invertible $m\times m$ matrix `M` such that $MA=B$. Multiplying the preceding equation by `M` yields

$${c}_{1}M{a}_{{j}_{1}}+{c}_{2}M{a}_{{j}_{2}}+\cdots +{c}_{r}M{a}_{{j}_{r}}=0.$$

Since $M{a}_{{j}_{i}}={b}_{{j}_{i}}={e}_{i}$, it follows that

$${c}_{1}{e}_{1}+{c}_{2}{e}_{2}+\cdots +{c}_{r}{e}_{r}=0.$$

Hence ${c}_{1}={c}_{2}=\cdots ={c}_{r}=0$, proving that the vectors ${a}_{{j}_{1}},{a}_{{j}_{2}},\dots ,{a}_{{j}_{r}}$ are linearly independent.

Because `B` has only `r` nonzero rows, every column of `B` has the form

$${d}_{1}{e}_{1}+{d}_{2}{e}_{2}+\cdots +{d}_{r}{e}_{r}$$

for scalars ${d}_{1},{d}_{2},\dots ,{d}_{r}$. Since $A={M}^{-1}B$ and ${M}^{-1}{e}_{i}={a}_{{j}_{i}}$, the corresponding column of `A` must be

$${M}^{-1}({d}_{1}{e}_{1}+{d}_{2}{e}_{2}+\cdots +{d}_{r}{e}_{r})={d}_{1}{a}_{{j}_{1}}+{d}_{2}{a}_{{j}_{2}}+\cdots +{d}_{r}{a}_{{j}_{r}}.$$

The next theorem summarizes these results.

# Theorem 3.16.

*Let* `A` *be an* $m\times n$ *matrix of rank* `r`, *where* $r>0$*, and let* `B` *be the reduced row echelon form of* `A`*. Then*

(a) *The number of nonzero rows in* `B` *is* `r`.

(b) *For each* $i=1,2,\dots ,r$*, there is a column* ${b}_{{j}_{i}}$ *of* `B` *such that* ${b}_{{j}_{i}}={e}_{i}$.

(c) *The columns of* `A` *numbered* ${j}_{1},{j}_{2},\dots ,{j}_{r}$ *are linearly independent.*

(d) *For each* $k=1,2,\dots ,n$*, if column* `k` *of* `B` *is* ${d}_{1}{e}_{1}+{d}_{2}{e}_{2}+\cdots +{d}_{r}{e}_{r}$*, then column* `k` *of* `A` *is* ${d}_{1}{a}_{{j}_{1}}+{d}_{2}{a}_{{j}_{2}}+\cdots +{d}_{r}{a}_{{j}_{r}}$.

# Corollary.

*The reduced row echelon form of a matrix is unique.*

# Proof.

Exercise. (See Exercise 15.)

# Example 2

Let

The reduced row echelon form of `A` is

$$B=\left(\begin{array}{rrrrr}1& 2& 0& 4& 0\\ 0& 0& 1& -1& 0\\ 0& 0& 0& 0& 1\\ 0& 0& 0& 0& 0\end{array}\right).$$

Since `B` has three nonzero rows, the rank of `A` is 3. The first, third, and fifth columns of `B` are ${e}_{1},\text{}{e}_{2}$, and ${e}_{3}$; so Theorem 3.16(c) asserts that the first, third, and fifth columns of `A` are linearly independent.
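The assertions of Theorem 3.16 are easy to verify by machine. The sketch below uses a small hypothetical matrix (not the `A` of this example) constructed so that its second column is twice its first and its fourth column is $4{a}_{1}-{a}_{3}$; sympy's `Matrix.rref` returns the reduced row echelon form together with the pivot column indices.

```python
from sympy import Matrix

# Hypothetical matrix with a2 = 2*a1 and a4 = 4*a1 - a3 by construction.
A = Matrix([[1, 2, 0, 4],
            [2, 4, 1, 7],
            [3, 6, 1, 11]])
B, pivots = A.rref()

# Theorem 3.16(c): the pivot columns of A are linearly independent.
assert pivots == (0, 2)
# Theorem 3.16(d): each column of A is the combination of pivot columns
# that the corresponding column of B spells out in terms of e1, ..., er.
assert A[:, 1] == 2 * A[:, 0]
assert A[:, 3] == 4 * A[:, 0] - A[:, 2]
```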

Let the columns of `A` be denoted ${a}_{1},{a}_{2},{a}_{3},{a}_{4}$, and ${a}_{5}$. Because the second column of `B` is $2{e}_{1}$, it follows from Theorem 3.16(d) that ${a}_{2}=2{a}_{1}$, as is easily checked. Moreover, since the fourth column of `B` is $4{e}_{1}+(-1){e}_{2}$, the same result shows that

$${a}_{4}=4{a}_{1}+(-1){a}_{3}=4{a}_{1}-{a}_{3}.$$

In Example 6 of Section 1.6, we extracted a basis for ${\text{R}}^{3}$ from the generating set

The procedure described there can be streamlined by using Theorem 3.16. We begin by noting that if `S` were linearly independent, then `S` would be a basis for ${\text{R}}^{3}$. In this case, it is clear that `S` is linearly dependent because `S` contains more than $\mathrm{dim}({\text{R}}^{3})=3$ vectors. Nevertheless, it is instructive to consider the calculation that is needed to determine whether `S` is linearly dependent or linearly independent. Recall that S is linearly dependent if there are scalars ${c}_{1},\text{}{c}_{2},\text{}{c}_{3},\text{}{c}_{4}$, and ${c}_{5}$, not all zero, such that

Thus `S` is linearly dependent if and only if the system of linear equations

has a nonzero solution. The augmented matrix of this system of equations is

and its reduced row echelon form is

Using the technique described earlier in this section, we can find nonzero solutions of the preceding system, confirming that `S` is linearly dependent. However, Theorem 3.16(c) gives us additional information. Since the first, third, and fourth columns of `B` are ${e}_{1},\text{}{e}_{2}$, and ${e}_{3}$, we conclude that the first, third, and fourth columns of `A` are linearly independent. But the columns of `A` other than the last column (which is the zero vector) are vectors in `S`. Hence

is a linearly independent subset of `S`. It follows from (b) of Corollary 2 to the replacement theorem (p. 48) that $\beta $ is a basis for ${\text{R}}^{3}$.

Because every finite-dimensional vector space over `F` is isomorphic to ${\text{F}}^{n}$ for some `n`, a similar approach can be used to reduce any finite generating set to a basis. This technique is illustrated in the next example.
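The streamlined procedure (place the generators as columns, reduce, and keep the columns of the original matrix at the pivot positions) can be sketched as follows. The helper name `basis_from_generators` and the generating set `S` below are illustrative, not from the text.

```python
from sympy import Matrix

def basis_from_generators(vectors):
    """Return the sublist of `vectors` sitting at the pivot columns of
    the matrix whose columns are the vectors: a linearly independent
    subset with the same span (Theorem 3.16(c))."""
    A = Matrix([list(v) for v in vectors]).T   # generators as columns
    _, pivots = A.rref()
    return [vectors[j] for j in pivots]

# A hypothetical generating set for a 2-dimensional subspace of R^3:
# the second vector doubles the first, and the fourth is the sum of
# the first and third.
S = [(1, 0, 1), (2, 0, 2), (0, 1, 1), (1, 1, 2)]
assert basis_from_generators(S) == [(1, 0, 1), (0, 1, 1)]
```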

# Example 3

The set

generates a subspace V of ${\text{P}}_{3}(R)$. To find a subset of S that is a basis for V, we consider the subset

consisting of the images of the polynomials in `S` under the standard representation of ${\text{P}}_{3}(R)$ with respect to the standard ordered basis. Note that the $4\times 5$ matrix in which the columns are the vectors in ${S}^{\prime}$ is the matrix `A` in Example 2. From the reduced row echelon form of `A`, which is the matrix `B` in Example 2, we see that the first, third, and fifth columns of `A` are linearly independent and the second and fourth columns of `A` are linear combinations of the first, third, and fifth columns. Hence

is a basis for the subspace of ${\text{R}}^{4}$ that is generated by ${S}^{\prime}$. It follows that

is a basis for the subspace V of ${\text{P}}_{3}(R)$.

We conclude this section by describing a method for extending a linearly independent subset `S` of a finite-dimensional vector space V to a basis for V. Recall that this is always possible by (c) of Corollary 2 to the replacement theorem (p. 48). Our approach is based on the replacement theorem and assumes that we can find an explicit basis $\beta $ for V. Let ${S}^{\prime}$ be the ordered set consisting of the vectors in `S` followed by those in $\beta $. Since $\beta \subseteq {S}^{\prime}$, the set ${S}^{\prime}$ generates V. We can then apply the technique described above to reduce this generating set to a basis for V containing `S`.
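The extension method just described admits the same sketch: list the vectors of `S` first, then those of $\beta$, and keep the pivot columns. Because the reduction selects pivot columns from left to right and the columns coming from the linearly independent set `S` appear first, the result always contains `S`. The helper name and the data below ($S$ and $\beta$ in ${\text{R}}^{3}$) are illustrative.

```python
from sympy import Matrix

def extend_to_basis(S, beta):
    """Append a known basis `beta` for V after the linearly independent
    set S; the pivot columns of the resulting matrix give a basis for V
    that contains S."""
    vectors = list(S) + list(beta)
    A = Matrix([list(v) for v in vectors]).T   # vectors as columns
    _, pivots = A.rref()
    return [vectors[j] for j in pivots]

# Extend S = {(1, 1, 0)} to a basis for R^3, taking beta to be the
# standard ordered basis.
S = [(1, 1, 0)]
beta = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
assert extend_to_basis(S, beta) == [(1, 1, 0), (1, 0, 0), (0, 0, 1)]
```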

# Example 4

Let

It is easily verified that V is a subspace of ${\text{R}}^{5}$ and that

is a linearly independent subset of V.

To extend `S` to a basis for V, we first obtain a basis $\beta $ for V. To do so, we solve the system of linear equations that defines V. Since in this case V is defined by a single equation, we need only write the equation as

and assign parametric values to ${x}_{2},\text{}{x}_{3},\text{}{x}_{4}$, and ${x}_{5}$. If ${x}_{2}={t}_{1},\text{}{x}_{3}={t}_{2},\text{}{x}_{4}={t}_{3}$, and ${x}_{5}={t}_{4}$, then the vectors in V have the form

Hence

is a basis for V by Theorem 3.15.

The matrix whose columns consist of the vectors in `S` followed by those in $\beta $ is

and its reduced row echelon form is

Thus

is a basis for V containing `S`.

# Exercises

1. Label the following statements as true or false.

(a) If $({A}^{\prime}|{b}^{\prime})$ is obtained from $(A|b)$ by a finite sequence of elementary column operations, then the systems $Ax=b$ and ${A}^{\prime}x={b}^{\prime}$ are equivalent.

(b) If $({A}^{\prime}|{b}^{\prime})$ is obtained from $(A|b)$ by a finite sequence of elementary row operations, then the systems $Ax=b$ and ${A}^{\prime}x={b}^{\prime}$ are equivalent.

(c) If `A` is an $n\times n$ matrix with rank `n`, then the reduced row echelon form of `A` is ${I}_{n}$.

(d) Any matrix can be put in reduced row echelon form by means of a finite sequence of elementary row operations.

(e) If $(A|b)$ is in reduced row echelon form, then the system $Ax=b$ is consistent.

(f) Let $Ax=b$ be a system of `m` linear equations in `n` unknowns for which the augmented matrix is in reduced row echelon form. If this system is consistent, then the dimension of the solution set of $Ax=0$ is $n-r$, where `r` equals the number of nonzero rows in `A`.

(g) If a matrix `A` is transformed by elementary row operations into a matrix ${A}^{\prime}$ in reduced row echelon form, then the number of nonzero rows in ${A}^{\prime}$ equals the rank of `A`.

2. Use Gaussian elimination to solve the following systems of linear equations.

(a) $\begin{array}{rcrcrcr}{x}_{1}& +& 2{x}_{2}& -& {x}_{3}& =& -1\\ 2{x}_{1}& +& 2{x}_{2}& +& {x}_{3}& =& 1\\ 3{x}_{1}& +& 5{x}_{2}& -& 2{x}_{3}& =& -1\end{array}$

(b) $\begin{array}{rcrcrcl}{x}_{1}& -& 2{x}_{2}& -& {x}_{3}& =& 1\\ 2{x}_{1}& -& 3{x}_{2}& +& {x}_{3}& =& 6\\ 3{x}_{1}& -& 5{x}_{2}& & & =& 7\\ {x}_{1}& & & +& 5{x}_{3}& =& 9\end{array}$

(c) $\begin{array}{rcrcrcrcr}{x}_{1}& +& 2{x}_{2}& & & +& 2{x}_{4}& =& 6\\ 3{x}_{1}& +& 5{x}_{2}& -& {x}_{3}& +& 6{x}_{4}& =& 17\\ 2{x}_{1}& +& 4{x}_{2}& +& {x}_{3}& +& 2{x}_{4}& =& 12\\ 2{x}_{1}& & & -& 7{x}_{3}& +& 11{x}_{4}& =& 7\end{array}$

(d) $\begin{array}{rcrcrcrcr}{x}_{1}& -& {x}_{2}& -& 2{x}_{3}& +& 3{x}_{4}& =& -63\\ 2{x}_{1}& -& {x}_{2}& +& 6{x}_{3}& +& 6{x}_{4}& =& -2\\ -2{x}_{1}& +& {x}_{2}& -& 4{x}_{3}& -& 3{x}_{4}& =& 0\\ 3{x}_{1}& -& 2{x}_{2}& +& 9{x}_{3}& +& 10{x}_{4}& =& -5\end{array}$

(e) $\begin{array}{rcrcrcrcr}{x}_{1}& -& 4{x}_{2}& -& {x}_{3}& +& {x}_{4}& =& 3\\ 2{x}_{1}& -& 8{x}_{2}& +& {x}_{3}& -& 4{x}_{4}& =& 9\\ -{x}_{1}& +& 4{x}_{2}& -& 2{x}_{3}& +& 5{x}_{4}& =& -6\end{array}$

(f) $\begin{array}{rcrcrcrcl}{x}_{1}& +& 2{x}_{2}& -& {x}_{3}& +& 3{x}_{4}& =& 2\\ 2{x}_{1}& +& 4{x}_{2}& -& {x}_{3}& +& 6{x}_{4}& =& 5\\ & & {x}_{2}& & & +& 2{x}_{4}& =& 3\end{array}$

(g) $\begin{array}{rcrcrcrcrcl}2{x}_{1}& -& 2{x}_{2}& -& {x}_{3}& +& 6{x}_{4}& -& 2{x}_{5}& =& 1\\ {x}_{1}& -& {x}_{2}& +& {x}_{3}& +& 2{x}_{4}& -& {x}_{5}& =& 2\\ 4{x}_{1}& -& 4{x}_{2}& +& 5{x}_{3}& +& 7{x}_{4}& -& {x}_{5}& =& 6\end{array}$

(h)$\begin{array}{rcrcrcrcrcr}3{x}_{1}& -& {x}_{2}& +& {x}_{3}& -& {x}_{4}& +& 2{x}_{5}& =& 5\\ {x}_{1}& -& {x}_{2}& -& {x}_{3}& -& 2{x}_{4}& -& {x}_{5}& =& 2\\ 5{x}_{1}& -& 2{x}_{2}& +& {x}_{3}& -& 3{x}_{4}& +& 3{x}_{5}& =& 10\\ 2{x}_{1}& -& {x}_{2}& & & -& 2{x}_{4}& +& {x}_{5}& =& 5\end{array}$

(i) $\begin{array}{rcrcrcrcrcr}3{x}_{1}& -& {x}_{2}& +& 2{x}_{3}& +& 4{x}_{4}& +& {x}_{5}& =& 2\\ {x}_{1}& -& {x}_{2}& +& 2{x}_{3}& +& 3{x}_{4}& +& {x}_{5}& =& -1\\ 2{x}_{1}& -& 3{x}_{2}& +& 6{x}_{3}& +& 9{x}_{4}& +& 4{x}_{5}& =& -5\\ 7{x}_{1}& -& 2{x}_{2}& +& 4{x}_{3}& +& 8{x}_{4}& +& {x}_{5}& =& 6\end{array}$

(j) $\begin{array}{rcrcrcrcrcr}2{x}_{1}& & & +& 3{x}_{3}& & & -& 4{x}_{5}& =& 5\\ 3{x}_{1}& -& 4{x}_{2}& +& 8{x}_{3}& +& 3{x}_{4}& & & =& 8\\ {x}_{1}& -& {x}_{2}& +& 2{x}_{3}& +& {x}_{4}& -& {x}_{5}& =& 2\\ -2{x}_{1}& +& 5{x}_{2}& -& 9{x}_{3}& -& 3{x}_{4}& -& 5{x}_{5}& =& -8\end{array}$

3. Suppose that the augmented matrix of a system $Ax=b$ is transformed into a matrix $({A}^{\prime}|{b}^{\prime})$ in reduced row echelon form by a finite sequence of elementary row operations.

(a) Prove that $\text{rank}({A}^{\prime})\ne \text{rank}({A}^{\prime}|{b}^{\prime})$ if and only if $({A}^{\prime}|{b}^{\prime})$ contains a row in which the only nonzero entry lies in the last column.

(b) Deduce that $Ax=b$ is consistent if and only if $({A}^{\prime}|{b}^{\prime})$ contains no row in which the only nonzero entry lies in the last column.

4. For each of the systems that follow, apply Exercise 3 to determine whether the system is consistent. If the system is consistent, find all solutions. Finally, find a basis for the solution set of the corresponding homogeneous system.

(a) $\begin{array}{rcrcrcrcl}{x}_{1}& +& 2{x}_{2}& -& {x}_{3}& +& {x}_{4}& =& 2\\ 2{x}_{1}& +& {x}_{2}& +& {x}_{3}& -& {x}_{4}& =& 3\\ {x}_{1}& +& 2{x}_{2}& -& 3{x}_{3}& +& 2{x}_{4}& =& 2\end{array}$

(b) $\begin{array}{rcrcrcrcr}{x}_{1}& +& {x}_{2}& -& 3{x}_{3}& +& {x}_{4}& =& -2\\ {x}_{1}& +& {x}_{2}& +& {x}_{3}& -& {x}_{4}& =& 2\\ {x}_{1}& +& {x}_{2}& -& {x}_{3}& & & =& 0\end{array}$

(c) $\begin{array}{rcrcrcrcl}{x}_{1}& +& {x}_{2}& -& 3{x}_{3}& +& {x}_{4}& =& 1\\ {x}_{1}& +& {x}_{2}& +& {x}_{3}& -& {x}_{4}& =& 2\\ {x}_{1}& +& {x}_{2}& -& {x}_{3}& & & =& 0\end{array}$

5. Let the reduced row echelon form of `A` be

$$\left(\begin{array}{rrrrr}1& 0& 2& 0& -2\\ 0& 1& -5& 0& -3\\ 0& 0& 0& 1& 6\end{array}\right).$$

Determine `A` if the first, second, and fourth columns of `A` are

$$\left(\begin{array}{r}1\\ -1\\ 3\end{array}\right),\phantom{\rule{1em}{0ex}}\left(\begin{array}{r}0\\ -1\\ 1\end{array}\right),\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\left(\begin{array}{r}1\\ -2\\ 0\end{array}\right),$$

respectively.

6. Let the reduced row echelon form of `A` be

$$\left(\begin{array}{rrrrrr}1& -3& 0& 4& 0& 5\\ 0& 0& 1& 3& 0& 2\\ 0& 0& 0& 0& 1& -1\\ 0& 0& 0& 0& 0& 0\end{array}\right).$$

Determine `A` if the first, third, and sixth columns of `A` are

$$\left(\begin{array}{r}1\\ -2\\ -1\\ 3\end{array}\right),\phantom{\rule{1em}{0ex}}\left(\begin{array}{r}-1\\ 1\\ 2\\ -4\end{array}\right),\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\left(\begin{array}{r}3\\ -9\\ 2\\ 5\end{array}\right),$$

respectively.

7. It can be shown that the vectors ${u}_{1}=(2,-3,1)$, ${u}_{2}=(1,4,-2)$, ${u}_{3}=(-8,12,-4)$, ${u}_{4}=(1,37,-17)$, and ${u}_{5}=(-3,-5,8)$ generate ${\text{R}}^{3}$. Find a subset of $\{{u}_{1},{u}_{2},{u}_{3},{u}_{4},{u}_{5}\}$ that is a basis for ${\text{R}}^{3}$.

8. Let W denote the subspace of ${\text{R}}^{5}$ consisting of all vectors having coordinates that sum to zero. The vectors

$$\begin{array}{rcl}{u}_{1}=(2,\text{}-3,\text{}4,\text{}-5,\text{}2),& & {u}_{2}=(-6,\text{}9,\text{}-12,\text{}15,\text{}-6),\\ {u}_{3}=(3,\text{}-2,\text{}7,\text{}-9,\text{}1),& & {u}_{4}=(2,\text{}-8,\text{}2,\text{}-2,\text{}6),\\ {u}_{5}=(-1,\text{}1,\text{}2,\text{}1,\text{}-3),& & {u}_{6}=(0,\text{}-3,\text{}-18,\text{}9,\text{}12),\\ {u}_{7}=(1,\text{}0,\text{}-2,\text{}3,\text{}-2),& \phantom{\rule{2em}{0ex}}\text{and}\phantom{\rule{2em}{0ex}}& {u}_{8}=(2,\text{}-1,\text{}1,\text{}-9,\text{}7)\end{array}$$generate W. Find a subset of $\{{u}_{1},\text{}{u}_{2},\text{}\dots ,\text{}{u}_{8}\}$ that is a basis for W.

9. Let W be the subspace of ${\text{M}}_{2\times 2}(R)$ consisting of the symmetric $2\times 2$ matrices. The set

$$S=\left\{\left(\begin{array}{rr}0& -1\\ -1& 1\end{array}\right),\left(\begin{array}{rr}1& 2\\ 2& 3\end{array}\right),\left(\begin{array}{rr}2& 1\\ 1& 9\end{array}\right),\left(\begin{array}{rr}1& -2\\ -2& 4\end{array}\right),\left(\begin{array}{rr}-1& 2\\ 2& -1\end{array}\right)\right\}$$

generates W. Find a subset of `S` that is a basis for W.

10. Let

$$\text{V}=\{({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5})\in {\text{R}}^{5}:{x}_{1}-2{x}_{2}+3{x}_{3}-{x}_{4}+2{x}_{5}=0\}.$$

(a) Show that $S=\{(0,1,1,1,0)\}$ is a linearly independent subset of V.

(b) Extend `S` to a basis for V.

11. Let V be as in Exercise 10.

(a) Show that $S=\{(1,2,1,0,0)\}$ is a linearly independent subset of V.

(b) Extend `S` to a basis for V.

12. Let V denote the set of all solutions to the system of linear equations

$$\begin{array}{rcrcrcrcrcrcl}{x}_{1}& -& {x}_{2}& & & +& 2{x}_{4}& -& 3{x}_{5}& +& {x}_{6}& =& 0\\ 2{x}_{1}& -& {x}_{2}& -& {x}_{3}& +& 3{x}_{4}& -& 4{x}_{5}& +& 4{x}_{6}& =& 0.\end{array}$$

(a) Show that $S=\{(0,-1,0,1,1,0),(1,0,1,1,1,0)\}$ is a linearly independent subset of V.

(b) Extend `S` to a basis for V.

13. Let V be as in Exercise 12.

(a) Show that $S=\{(1,0,1,1,1,0),(0,2,1,1,0,0)\}$ is a linearly independent subset of V.

(b) Extend `S` to a basis for V.

14. If $(A|b)$ is in reduced row echelon form, prove that `A` is also in reduced row echelon form.

15. Prove the corollary to Theorem 3.16: The reduced row echelon form of a matrix is unique. Visit goo.gl/cZVzxM for a solution.