
Row & Column Space

$\mathbf{Definition.}$ Row & Column Space
Let $A$ be an $m \times n$ matrix over $F$.
1. The row space of $\mathbf{A}$, $\text{row}(A)$, is the subspace of $F^n$ spanned by the rows of $A$.
2. The column space of $\mathbf{A}$, $\text{col}(A)$, is the subspace of $F^m$ spanned by the columns of $A$.
3. $\text{dim}(\text{row}(A))$ is the rank of $\mathbf{A}$, $\text{rank}(A)$.
4. An $m \times n$ matrix $B$ is row-equivalent to $A$ if $B$ can be obtained from $A$ by applying a finite number of elementary row operations (EROs):

  • Interchange 2 rows
  • Multiply a row by a nonzero constant
  • Add a constant multiple of a row to another row

5. A matrix is in row echelon form (REF) if

  • Any rows consisting entirely of zeros are at the bottom
  • The first nonzero entry (from the left) of each nonzero row is 1 (called a leading 1)
  • The leading 1 of each row appears to the right of the leading 1 in the row above

6. A matrix is in reduced row echelon form (RREF) if, in addition,

  • Each leading 1 is the only nonzero entry in its column

A schematic diagram of a row echelon form $U$ and the corresponding reduced row echelon form $R$ is as follows.

\[U=\left[\begin{array}{cccccccc} \bullet & \star & \star & \star & \star & \star & \star & \star \\ 0 & \bullet & \star & \star & \star & \star & \star & \star \\ 0 & 0 & 0 & \bullet & \star & \star & \star & \star \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \bullet \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] \quad \underset{\begin{gathered} \text { scaling and reduction } \end{gathered}}{\xrightarrow{\hspace{3cm}}} \quad R=\left[\begin{array}{cccccccc} 1 & 0 & \star^{\prime} & 0 & \star^{\prime} & \star^{\prime} & \star^{\prime} & 0 \\ 0 & 1 & \star^{\prime} & 0 & \star^{\prime} & \star^{\prime} & \star^{\prime} & 0 \\ 0 & 0 & 0 & 1 & \star^{\prime} & \star^{\prime} & \star^{\prime} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]\]




$\mathbf{Remark.}$
1. One can obtain a REF or RREF of any matrix by applying EROs; this procedure is called Gaussian elimination (a short computational sketch follows this remark).
2. An ERO does not change the row space. Hence, if $A$ is row-equivalent to $B$, then $\text{row}(A) = \text{row}(B)$.
In particular, $\text{row}(A) = \text{row}(R)$ where $R$ is a REF or RREF of $A$.
3. $\text{rank}(A)$ is the number of leading 1's in a REF or RREF of $A$, or equivalently the number of nonzero rows in a REF or RREF of $A$.
4. Recall that applying an ERO to $A$ is equivalent to multiplying $A$ by $E$, where $E$ is the $m \times m$ matrix obtained by applying the ERO to $I$.
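
As the computational sketch promised above, the RREF and its leading-1 (pivot) columns can be computed directly; here is a minimal example using SymPy's `Matrix.rref()`, with an arbitrary example matrix chosen only for illustration:

```python
from sympy import Matrix

# An arbitrary 3x4 example matrix (not taken from the text).
A = Matrix([
    [1, 2, 1, 3],
    [2, 4, 0, 2],
    [3, 6, 1, 5],
])

# rref() returns the reduced row echelon form of A together with
# the indices of the columns that contain leading 1's.
R, pivot_cols = A.rref()

print(R)           # Matrix([[1, 2, 0, 1], [0, 0, 1, 2], [0, 0, 0, 0]])
print(pivot_cols)  # (0, 2), so rank(A) = 2
```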

Row & Column spaces have the same dimension

Here is an important theorem which tells us the relationship between row and column space:

$\mathbf{Thm\ 1.}$ $\text{dim}(\text{col}(A)) = \text{dim}(\text{row}(A))$
Let $A$ be an $m \times n$ matrix and let $R$ be a REF of $A$.
Then the columns of $A$ that correspond to the columns of $R$ containing leading 1's form a basis of $\text{col}(A)$. In particular, $\text{dim}(\text{col}(A)) = \text{rank}(A)._\blacksquare$

$\mathbf{Proof.}$

Let $\mathbf{c_1}, \cdots, \mathbf{c_n}$ be column vectors of $A$. Assume that $R$ contains $k$ leading 1’s, i.e., $\text{rank}(A) = k$.

Write $R = EA$, where $E$ is a product of elementary matrices. Then, $E \mathbf{c_1}, \cdots, E \mathbf{c_n}$ are column vectors of $R$.

Let $E \mathbf{c_{i_1}}, \cdots, E \mathbf{c_{i_k}}$ be columns of $R$ that contain leading 1’s. Then

\[\begin{align*} \{ E \mathbf{c_{i_1}}, \cdots, E \mathbf{c_{i_k}} \} \end{align*}\]

is a basis of $\text{col}(R)$.

For any $C \in \text{col}(A)$, it is clear that $EC \in \text{col}(R)$, so $EC = a_1 E \mathbf{c_{i_1}} + \cdots + a_k E \mathbf{c_{i_k}}$ for some scalars $a_1, \cdots, a_k$. Since $E$ is invertible (it is a product of elementary matrices), multiplying by $E^{-1}$ gives $C = a_1 \mathbf{c_{i_1}} + \cdots + a_k \mathbf{c_{i_k}}$. So,

\[\begin{align*} \{ \mathbf{c_{i_1}}, \cdots, \mathbf{c_{i_k}} \} \end{align*}\]

spans $\text{col}(A)$. Now, it suffices to show linear independence.

If $a_1 \mathbf{c_{i_1}} + \cdots + a_k \mathbf{c_{i_k}} = \mathbf{0}$, then $a_1 E \mathbf{c_{i_1}} + \cdots + a_k E \mathbf{c_{i_k}} = \mathbf{0}$, which forces $a_1 = \cdots = a_k = 0$, since $\{ E \mathbf{c_{i_1}}, \cdots, E \mathbf{c_{i_k}} \}$ is a basis of $\text{col}(R)._\blacksquare$
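
To see Thm 1 in action, the following sketch (reusing the hypothetical example matrix from the earlier snippet) picks out the columns of $A$ indexed by the pivot columns of its RREF; by the theorem, these columns form a basis of $\text{col}(A)$:

```python
from sympy import Matrix

A = Matrix([
    [1, 2, 1, 3],
    [2, 4, 0, 2],
    [3, 6, 1, 5],
])

# Indices of the columns of the RREF that contain leading 1's.
_, pivot_cols = A.rref()

# The corresponding columns of A itself (not of R!) form a basis of col(A).
basis = [A.col(j) for j in pivot_cols]

print(len(basis))  # dim(col(A)) = 2
print(A.rank())    # 2, agreeing with the number of leading 1's
```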



$\mathbf{Corollary.}$ $\text{rank}(A^\top) = \text{rank}(A)$ $( \because \text{col}(A) = \text{row}(A^\top))$

From these facts, a helpful inequality for rank is obtained:

$\mathbf{Thm\ 2.}$ For $l \times m$ matrix $A$ and $m \times n$ matrix $B$, $\text{rank}(AB) \leq \text{min}(\text{rank}(A), \text{rank}(B))$

$\mathbf{Proof.}$

First, we will see that $\text{rank}(AB) \leq \text{rank}(A)$.

Note that $\text{col}(AB) \subseteq \text{col}(A)$, since each column of $AB$ is $A$ times the corresponding column of $B$ and hence a linear combination of the columns of $A$. Thus,

\[\begin{align*} \text{dim}(\text{col}(AB)) = \text{rank}(AB) \leq \text{rank}(A) \end{align*}\]

And, for $\text{rank}(AB) \leq \text{rank}(B)$,

\[\begin{align*} \text{rank}(AB) &= \text{rank}(B^\top A^\top) \\ &\leq \text{rank}(B^\top) \\ &= \text{rank}(B)._\blacksquare \end{align*}\]
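
A quick numerical sanity check of Thm 2, using NumPy with randomly generated matrices (this only illustrates the inequality on one example, it does not prove it):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 5x4 and 4x6 matrices; A is built as a product of thin
# factors so that it is rank-deficient (rank(A) <= 2).
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
B = rng.standard_normal((4, 6))

rank = np.linalg.matrix_rank
assert rank(A @ B) <= min(rank(A), rank(B))
print(rank(A @ B), rank(A), rank(B))  # typically 2 2 4
```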


Furthermore, we will prove that $\text{rank}(A^\top A) = \text{rank}(A)$ for real matrices, but this requires an additional result, the rank-nullity theorem, which we will discuss in the next post.

$\mathbf{Thm\ 3.}$ For any $m \times n$ matrix $A$ over $\mathbb{R}$,

\[\begin{aligned} \text{rank}(A^\top A) = \text{rank}(A) \end{aligned}\]
$\mathbf{Proof.}$

We first check that $A$ and $A^\top A$ have the same null space. If $A\mathbf{x} = \mathbf{0}$, then clearly $A^\top A \mathbf{x} = \mathbf{0}$. Conversely, if $A^\top A \mathbf{x} = \mathbf{0}$, then $\mathbf{x}^\top A^\top A \mathbf{x} = \lVert A\mathbf{x} \rVert^2 = 0$ (here we use that the entries are real), so $A\mathbf{x} = \mathbf{0}$. Hence $\text{nullity}(A^\top A) = \text{nullity}(A)$.

Since $A^\top A$ and $A$ both have $n$ columns, the rank-nullity theorem gives

\[\begin{aligned} \text{rank}(A^\top A) = n - \text{nullity}(A^\top A) = n - \text{nullity}(A) = \text{rank}(A)._\blacksquare \end{aligned}\]
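
For real matrices, Thm 3 is easy to check numerically; a minimal NumPy sketch with a randomly generated rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 6x4 real matrix of rank 2 (product of 6x2 and 2x4 factors).
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

rank = np.linalg.matrix_rank
assert rank(A.T @ A) == rank(A)  # Thm 3
print(rank(A), rank(A.T @ A))    # 2 2
```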


$\mathbf{Corollary.}$ For an $m \times n$ real matrix $A$,

\[\begin{aligned} \text{col}(A^\top A) = \text{col}(A^\top) \end{aligned}\]

Equivalently,

\[\begin{aligned} \text{row}(A^\top A) = \text{row}(A) \end{aligned}\]
$\mathbf{Proof.}$

It is clear that $\text{col}(A^\top A) \subseteq \text{col}(A^\top)$, since each column of $A^\top A$ is $A^\top$ times a column of $A$.

Since $\text{dim}(\text{col}(A^\top A)) = \text{rank}(A^\top A) = \text{rank}(A) = \text{rank}(A^\top) = \text{dim}(\text{col}(A^\top))$, the subspace $\text{col}(A^\top A)$ has the same dimension as $\text{col}(A^\top)$, so the inclusion is an equality.$_\blacksquare$
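
One way to check the corollary numerically: since $\text{col}(A^\top A) \subseteq \text{col}(A^\top)$ always holds, the two spaces are equal exactly when appending the columns of $A^\top$ to those of $A^\top A$ does not increase the rank. A sketch with a randomly generated test matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# A 6x4 real matrix of rank 2.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

rank = np.linalg.matrix_rank
AtA = A.T @ A

# col(AtA) is contained in col(A.T), so the spaces coincide iff
# stacking the columns of A.T next to AtA leaves the rank unchanged.
assert rank(np.hstack([AtA, A.T])) == rank(AtA)
```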





Uniqueness of RREF

