[Linear Algebra] Determinants - Part I
Determinants
$\mathbf{Def.}$ Determinant of square matrix
Let any square matrix $A \in \mathbb{R}^{n \times n}$ be given. Then the determinant of $A$ is defined as

\(\begin{align*}
\text{det}(A) = \sum_{(j_1, \cdots, j_n)} \text{sign}(j_1, \cdots, j_n) \, a_{1 j_1} a_{2 j_2} \cdots a_{n j_n}
\end{align*}\)

where the sum runs over all permutations $(j_1, \cdots, j_n)$ of $\{ 1, 2, \cdots, n \}$, and $\text{sign}(j_1, \cdots, j_n)$ is $1$ when the minimum number of interchanges needed to put the permutation in natural order is even; otherwise $-1$.
$\mathbf{Example.}$ Calculation of determinant
For $2 \times 2$ matrix,

\(\begin{align*}
\text{det}(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}
\end{align*}\)
For $3 \times 3$ matrix,
\(\begin{align*}
\text{det}(A) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
\end{align*}\)
Permutation of column indices | Minimum number of interchanges | Signed elementary product |
---|---|---|
$(1, 2, 3)$ | $0$ | $+a_{11} a_{22} a_{33}$ |
$(1, 3, 2)$ | $1$ | $-a_{11} a_{23} a_{32}$ |
$(2, 1, 3)$ | $1$ | $-a_{12} a_{21} a_{33}$ |
$(2, 3, 1)$ | $2$ | $+a_{12} a_{23} a_{31}$ |
$(3, 1, 2)$ | $2$ | $+a_{13} a_{21} a_{32}$ |
$(3, 2, 1)$ | $1$ | $-a_{13} a_{22} a_{31}$ |
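The signed-elementary-product definition can be checked directly in code. Below is a minimal Python sketch (the function names `sign` and `det_by_definition` are mine) that counts inversions to determine the sign of each permutation and then sums over all $n!$ of them:

```python
from itertools import permutations
from math import prod

def sign(perm):
    """+1 if the permutation needs an even number of interchanges, else -1."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def det_by_definition(A):
    """Sum of signed elementary products over all permutations of column indices."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

For a $3 \times 3$ input this reproduces exactly the six signed elementary products tabulated above; for large $n$ the $n!$ terms make this approach impractical, which motivates the cofactor expansion below.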
Then, by definition, we can easily prove:
$\mathbf{Thm\ 1.1.}$ Let $A \in \mathbb{R}^{n \times n}$ with a row or a column of zeros. Then, $\text{det}(A) = 0$.
$\mathbf{Thm\ 1.2.}$ Let $A \in \mathbb{R}^{n \times n}$ be a triangular matrix. Then, $\text{det}(A)$ is the product of the diagonal entries.
Cofactor Expansion
Calculating the determinant directly from the definition above is often cumbersome: for a large matrix, determining the sign of each permutation one by one is very inefficient. There is a useful alternative, cofactor expansion, a method used far more often than the definition above.
$\mathbf{Def.}$ Minor, Cofactor
The minor $M_{ij}$ of entry $a_{ij}$ of a square matrix $A$ can be defined as the determinant of the submatrix that remains when the $i$th row and $j$th column of $A$ are deleted.
The cofactor $C_{ij}$ of entry $a_{ij}$ of a square matrix $A$ is $C_{ij} = (-1)^{i+j} M_{ij}$.
$\mathbf{Example.}$ Calculation of minors and cofactors
Consider
$\mathbf{Thm\ 1.3.}$ Cofactor Expansion
Let any $A \in \mathbb{R}^{n \times n}$ be given. Then,

$\text{det}(A) = a_{1j} C_{1j} + \cdots + a_{nj} C_{nj} \ (1 \leq j \leq n)$: cofactor expansion along $j$th column

and

$\text{det}(A) = a_{i1} C_{i1} + \cdots + a_{in} C_{in} \ (1 \leq i \leq n)$: cofactor expansion along $i$th row
So, thanks to cofactor expansion, by choosing a row or column with many zeros we can obtain the determinant easily.
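As a sketch of Thm 1.3 (a naive Python implementation; the names `minor_matrix` and `det_cofactor` are mine), recursive cofactor expansion along the first row looks like:

```python
def minor_matrix(A, i, j):
    """Submatrix left after deleting row i and column j (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det_cofactor(A):
    """Cofactor expansion along the first row: det(A) = sum_j a_{1j} C_{1j}."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # With 0-based indices, the cofactor sign (-1)^{i+j} becomes (-1)^j for row 0.
    return sum((-1) ** j * A[0][j] * det_cofactor(minor_matrix(A, 0, j))
               for j in range(n))
```

In practice one expands along the sparsest row or column, since zero entries eliminate their cofactor terms outright.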
$\mathbf{Example.}$ Calculation of cofactor expansion
Consider
Let’s choose the second row. Then, $\text{det}(A) = 0 \cdot C_{21} + 5 \cdot C_{22} + 0 \cdot C_{23} = 5 \cdot C_{22}$
Properties
The following theorems are important properties of the determinant.
$\mathbf{Thm\ 1.4.}$ If $A$ is a square matrix, $\text{det}(A) = \text{det}(A^\top)$
$\mathbf{Proof}$
Proof by induction on $n$.
It is clear for $n = 1$ as $A = A^\top$.
Assume that it is true for $n = k - 1$.
When $n = k$, expanding $\text{det}(A^\top)$ along its first column gives

\(\begin{align*}
\text{det}(A^\top) = \sum_{i=1}^{k} (A^\top)_{i1} C_{i1}' = \sum_{i=1}^{k} a_{1i} C_{i1}'
\end{align*}\)

where $C_{i1}'$ denotes the cofactor of $A^\top$. And $C_{i1}' = C_{1i}$ clearly, as the corresponding minors are transposes of each other and the induction hypothesis applies to these $(k - 1) \times (k - 1)$ matrices. Hence $\text{det}(A^\top) = \sum_{i=1}^{k} a_{1i} C_{1i} = \text{det}(A)._\blacksquare$
$\mathbf{Thm\ 1.5.}$ Effect of elementary row operations on a determinant
(a) If $B$ is the matrix that results when a single row/column of $A$ is multiplied by a scalar $k$, then $\text{det}(B) = k \cdot \text{det}(A)$
(b) If $B$ is the matrix that results when two rows/columns of $A$ are interchanged, then $\text{det}(B) = - \text{det}(A)$
(c) If $B$ is the matrix that results when a multiple of one row/column of $A$ is added to another row/column, then $\text{det}(B) = \text{det}(A)$
$\mathbf{Proof}$
(a) Suppose that $B$ is obtained by multiplying the $i$th row of $A$ by $k$, without loss of generality. By cofactor expansion along that row,

\[\begin{align*} \text{det}(B) = ka_{i1} C_{i1} + ka_{i2} C_{i2} + \cdots + ka_{in} C_{in} = k \cdot \text{det}(A). \end{align*}\]

(b) trivial

(c) Suppose that $B$ is obtained by adding $k$ times the $j$th row of $A$ to its $i$th row. By cofactor expansion along the $i$th row,

\[\begin{align*} \text{det}(B) &= (a_{i1} + ka_{j1}) C_{i1} + \cdots + (a_{in} + ka_{jn}) C_{in} \\ &= \text{det}(A) + k (a_{j1} C_{i1} + \cdots + a_{jn} C_{in}) = \text{det}(A) + k \cdot 0 = \text{det}(A) \end{align*}\]

since $a_{j1} C_{i1} + \cdots + a_{jn} C_{in}$ is the cofactor expansion of a matrix whose $i$th and $j$th rows are identical, which is $0$ by $\text{Thm } 1.6.$ (a)$._\blacksquare$
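A quick numerical sanity check of Thm 1.5 with NumPy (a sketch; the random matrix and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
d = np.linalg.det(A)

# (a) multiplying one row by k multiplies the determinant by k
Ba = A.copy(); Ba[2] *= 3.0
# (b) interchanging two rows flips the sign
Bb = A.copy(); Bb[[0, 2]] = Bb[[2, 0]]
# (c) adding a multiple of one row to another changes nothing
Bc = A.copy(); Bc[1] += 5.0 * Bc[3]

print(np.isclose(np.linalg.det(Ba), 3.0 * d),
      np.isclose(np.linalg.det(Bb), -d),
      np.isclose(np.linalg.det(Bc), d))   # True True True
```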
$\mathbf{Thm\ 1.6.}$ Let $A$ be an $n \times n$ matrix.
(a) If $A$ has two identical rows or columns, $\text{det}(A) = 0$.
(b) If $A$ has two proportional rows or columns, $\text{det}(A) = 0$.
(c) $\text{det}(kA) = k^n \text{det}(A)$
$\mathbf{Proof}$
(a) Let $B$ be the matrix obtained by interchanging the two identical rows or columns of $A$. Then, clearly $B = A$, so $\text{det}(B) = \text{det}(A)$. But, by $\text{Thm } 1.5.$ (b), $\text{det}(B) = -\text{det}(A)$. Hence $\text{det}(A) = -\text{det}(A)$, i.e., $\text{det}(A) = 0$.
(b) By $\text{Thm } 1.5.$ (a), factor the constant of proportionality out of one row or column; the result then follows as in (a).
(c) By definition of the determinant,

\(\begin{align*}
\text{det}(kA) = \sum_{(j_1, \cdots, j_n)} \text{sign}(j_1, \cdots, j_n) (k a_{1 j_1})(k a_{2 j_2}) \cdots (k a_{n j_n}) = k^n \, \text{det}(A)._\blacksquare
\end{align*}\)
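These three facts are easy to confirm numerically (a NumPy sketch; the example matrix and scalar are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

A_ident = A.copy(); A_ident[1] = A_ident[0]      # (a) two identical rows
A_prop = A.copy(); A_prop[2] = 4.0 * A_prop[0]   # (b) two proportional rows
k, n = 2.0, 3                                    # (c) det(kA) = k^n det(A)

print(np.linalg.det(A_ident), np.linalg.det(A_prop))                # both ~0
print(np.isclose(np.linalg.det(k * A), k ** n * np.linalg.det(A)))  # True
```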
By applying these theorems, several important facts about matrices related to the determinant can be proved.
$\mathbf{Thm\ 1.7.}$ Determinant Test for Invertibility
A square matrix $A$ is invertible $\Leftrightarrow$ $\text{det}(A) \neq 0$.
$\mathbf{Proof}$
( $\Rightarrow$ ) Since $A$ is invertible, the RREF of $A$ is the identity matrix $I_n$.
Clearly $\text{det}(I_n) = 1$, and this implies $\text{det}(A) \neq 0$, because $\text{det}(I_n)$ differs from $\text{det}(A)$ only by the nonzero factors accumulated through elementary row operations:
- row interchanging $\to \times (-1)$
- row addition $\to \times 1$
- multiplication $\to \times k$
( $\Leftarrow$ ) $\text{det}(A) \neq 0 \to \text{det}(R) \neq 0$ where $R$ is the RREF of $A$.
Then, the RREF $R$ must be $I_n$: otherwise, $R$ would have at least one zero diagonal entry, and since $R$ is triangular this would make its determinant $0$. Hence $A$ is invertible$._\blacksquare$
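Thm 1.7 is why the determinant can serve as a quick invertibility probe in code. A NumPy sketch (the example matrices are mine; note that floating-point round-off makes this test unreliable for nearly singular matrices):

```python
import numpy as np

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])   # second row = 2 * first row
invertible = np.array([[1.0, 2.0],
                       [3.0, 4.0]])

det_s = np.linalg.det(singular)     # ~0: no inverse exists
det_i = np.linalg.det(invertible)   # ~-2: inverse exists
inv = np.linalg.inv(invertible)     # succeeds since det != 0
```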
$\mathbf{Lemma.}$ Determinant of a product of elementary matrices
Let $E$ be an $n \times n$ elementary matrix, and $B$ be an arbitrary $n \times n$ matrix. Then, $\text{det}(EB) = \text{det}(E) \cdot \text{det}(B)$.
$\mathbf{Proof}$
(i) Multiply a row by nonzero $k$
$\text{det}(E) = k \cdot \text{det}(I_n) = k$
$\text{det}(EB) = k \cdot \text{det}(B) = \text{det}(E) \cdot \text{det}(B)$
(ii) Interchanging 2 rows
$\text{det}(E) = -1$
$\text{det}(EB) = - \text{det}(B) = \text{det}(E) \cdot \text{det}(B)$
(iii) Add a multiple of one row to another
$\text{det}(E) = 1$
$\text{det}(EB) = \text{det}(B) = \text{det}(E) \cdot \text{det}(B)$
$\mathbf{Thm\ 1.8.}$ Determinant of a product of matrices
Let $A$ and $B$ be arbitrary $n \times n$ matrices. Then, $\text{det}(AB) = \text{det}(A) \cdot \text{det}(B)$.
$\mathbf{Proof}$
(i) Suppose $A$ is singular. Then, $AB$ is also singular (if $AB$ were invertible, $A \cdot (B (AB)^{-1}) = I_n$ would make $A$ invertible). Thus,
$\text{det}(AB) = 0 = \text{det}(A) \cdot \text{det}(B)$
(ii) Suppose $A$ is non-singular (invertible). Then, we can represent $A$ as
$A = E_1 \cdot E_2 \cdots E_k \cdot I_n$ where each $E_i$ is an $n \times n$ elementary matrix. Then, by repeated application of the lemma,
$\text{det}(AB) = \text{det}(E_1 E_2 \cdots E_k B) = \text{det}(E_1) \text{det}(E_2) \cdots \text{det}(E_k) \text{det}(B) = \text{det}(A) \cdot \text{det}(B)._\blacksquare$
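The multiplicative property is easy to verify numerically (a NumPy sketch; the random matrices and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.det(A @ B)                 # determinant of the product
rhs = np.linalg.det(A) * np.linalg.det(B)  # product of the determinants
print(np.isclose(lhs, rhs))                # True
```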
$\mathbf{Corollary.}$ $\text{det}(A^\top A) = \text{det}(A A^\top)$
$\mathbf{Proof}$
(a) Suppose $A$ is singular. Then, $A^\top A$ and $A A^\top$ are also singular, so the determinants of both are $0$.
(b) Suppose $A$ is non-singular (invertible). Then, $\text{det}(A) = \text{det}(A^\top) \neq 0$.
$\to \text{det}(A^\top A) = \text{det}(A^\top) \, \text{det}(A) = \text{det}(A) \, \text{det}(A^\top) = \text{det}(AA^\top)._\blacksquare$
$\mathbf{Thm\ 1.9.}$ Determinant of an inverse matrix
Let $A$ be an invertible $n \times n$ matrix. Then, $\text{det}(A^{-1}) = \dfrac{1}{\text{det}(A)}$.
$\mathbf{Proof}$
Since $AA^{-1} = I_n$, $\text{Thm } 1.8.$ gives $\text{det}(A) \cdot \text{det}(A^{-1}) = \text{det}(I_n) = 1$, so $\text{det}(A^{-1}) = \dfrac{1}{\text{det}(A)}._\blacksquare$
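A final numerical check of Thm 1.9 (a NumPy sketch; the invertible example matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det(A) = 2*1 - 1*1 = 1
det_A = np.linalg.det(A)
det_A_inv = np.linalg.det(np.linalg.inv(A))

print(np.isclose(det_A_inv, 1.0 / det_A))   # True
```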