
Determinants

$\mathbf{Def.}$ Determinant of square matrix

Let any square matrix $A \in \mathbb{R}^{n \times n}$ be given.

$\text{det}(A) := \sum \text{sign}(j_1, \cdots, j_n) a_{1 j_1} \cdots a_{n j_n}$


where the sum runs over all permutations $(j_1, \cdots, j_n)$ of $\{1, 2, \cdots, n\}$, and $\text{sign}(j_1, \cdots, j_n)$ is $+1$ when the minimum number of interchanges needed to put the permutation into natural order is even; otherwise $-1$.
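As an illustration (Python is an assumption here, not part of the original notes), the definition can be sketched directly: enumerate every permutation of column indices, compute its sign from the inversion count (which has the same parity as the minimum number of interchanges), and sum the signed elementary products. This costs $O(n \cdot n!)$, so it is only practical for tiny matrices.

```python
from itertools import permutations

def perm_sign(p):
    # Parity of the inversion count equals the parity of the minimum
    # number of interchanges needed to sort the permutation.
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p))
                       if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

def det(A):
    # det(A) = sum over permutations (j_1, ..., j_n) of
    #          sign(j_1, ..., j_n) * a_{1 j_1} * ... * a_{n j_n}
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total
```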

$\mathbf{Example.}$ Calculation of determinant

For $2 \times 2$ matrix,

\[\begin{align*} \text{det}(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12} a_{21} \end{align*}\]

For $3 \times 3$ matrix,

\[\begin{align*} \text{det}(A) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \end{align*}\]

| Permutation of column indices | Minimum number of interchanges | Signed elementary product |
| --- | --- | --- |
| $(1, 2, 3)$ | $0$ | $+a_{11} a_{22} a_{33}$ |
| $(1, 3, 2)$ | $1$ | $-a_{11} a_{23} a_{32}$ |
| $(2, 1, 3)$ | $1$ | $-a_{12} a_{21} a_{33}$ |
| $(2, 3, 1)$ | $2$ | $+a_{12} a_{23} a_{31}$ |
| $(3, 1, 2)$ | $2$ | $+a_{13} a_{21} a_{32}$ |
| $(3, 2, 1)$ | $1$ | $-a_{13} a_{22} a_{31}$ |
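The six signed elementary products for the $3 \times 3$ case can be written out mechanically; a small Python sketch (an illustration, not from the original notes):

```python
def det3(a):
    # The six signed elementary products of a 3x3 matrix
    # (0-indexed entries: a[i][j] corresponds to a_{(i+1)(j+1)}).
    return (  a[0][0] * a[1][1] * a[2][2]   # (1, 2, 3): +
            - a[0][0] * a[1][2] * a[2][1]   # (1, 3, 2): -
            - a[0][1] * a[1][0] * a[2][2]   # (2, 1, 3): -
            + a[0][1] * a[1][2] * a[2][0]   # (2, 3, 1): +
            + a[0][2] * a[1][0] * a[2][1]   # (3, 1, 2): +
            - a[0][2] * a[1][1] * a[2][0])  # (3, 2, 1): -
```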


Then, by definition, we can easily prove:

$\mathbf{Thm\ 1.1.}$ Let $A \in \mathbb{R}^{n \times n}$ with a row or a column of zeros. Then, $\text{det}(A) = 0$.
$\mathbf{Thm\ 1.2.}$ Let $A \in \mathbb{R}^{n \times n}$ be a triangular matrix. Then, $\text{det}(A)$ is the product of the diagonal entries.

Cofactor Expansion

Calculating the determinant directly from the definition above is often cumbersome. When the matrix is large, determining the sign of each permutation one by one is very inefficient. There is a useful alternative: cofactor expansion, a method used far more often than the definition itself.

$\mathbf{Def.}$ Minor, Cofactor

The minor $M_{ij}$ of entry $a_{ij}$ of a square matrix $A$ can be defined as the determinant of the submatrix that remains when the $i$th row and $j$th column of $A$ are deleted.

The cofactor $C_{ij}$ of entry $a_{ij}$ of a square matrix $A$ is

$C_{ij} = (-1)^{i + j} M_{ij}$


$\mathbf{Example.}$ Calculation of minors and cofactors

Consider

\[\begin{align*} &A = \begin{bmatrix} 3 & 1 & 4 \\ 2 & 5 & 6 \\ 1 & 4 & 8 \end{bmatrix} \\ \\ &M_{11} = \begin{vmatrix} 5 & 6 \\ 4 & 8 \end{vmatrix} = 16 \\ &M_{12} = \begin{vmatrix} 2 & 6 \\ 1 & 8 \end{vmatrix} = 10 \\ &M_{23} = \begin{vmatrix} 3 & 1 \\ 1 & 4 \end{vmatrix} = 11 \\ &M_{32} = \begin{vmatrix} 3 & 4 \\ 2 & 6 \end{vmatrix} = 10 \\\\ &C_{11} = 16, \; C_{12} = -10, \; C_{23} = -11, \; C_{32} = -10 \end{align*}\]
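The minors above can be computed by deleting a row and a column and taking the determinant of what remains. A minimal Python sketch (note it is 0-indexed, while the text is 1-indexed):

```python
A = [[3, 1, 4],
     [2, 5, 6],
     [1, 4, 8]]

def minor(M, i, j):
    # Determinant of the 2x2 submatrix left after deleting row i and
    # column j (0-indexed, so minor(A, 0, 0) is M_11 in the text).
    sub = [[M[r][c] for c in range(3) if c != j]
           for r in range(3) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(M, i, j):
    # C_ij = (-1)^(i+j) * M_ij
    return (-1) ** (i + j) * minor(M, i, j)
```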


$\mathbf{Thm\ 1.3.}$ Cofactor Expansion

Let any $A \in \mathbb{R}^{n \times n}$ be given.

$\text{det}(A) = a_{1j} C_{1j} + \cdots + a_{nj} C_{nj} \ (1 \leq j \leq n)$: cofactor expansion along the $j$th column

and

$\text{det}(A) = a_{i1} C_{i1} + \cdots + a_{in} C_{in} \ (1 \leq i \leq n)$: cofactor expansion along the $i$th row


So, thanks to cofactor expansion, by choosing a sparse row or column we can obtain the determinant easily.

$\mathbf{Example.}$ Calculation of cofactor expansion

Consider

\[\begin{align*} &A = \begin{bmatrix} 3 \ 1 \ 4 \\ 0 \ 5 \ 0 \\ 1 \ 4 \ 8 \end{bmatrix} \\ \\ \end{align*}\]

Let's choose the second row, which has two zero entries. Then $\text{det}(A) = 0 \cdot C_{21} + 5 \cdot C_{22} + 0 \cdot C_{23} = 5 \begin{vmatrix} 3 & 4 \\ 1 & 8 \end{vmatrix} = 5 \cdot 20 = 100$
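Cofactor expansion also yields a natural recursive algorithm. A minimal Python sketch (an illustration only; it always expands along the first row for simplicity, rather than hunting for the sparsest one):

```python
def det(A):
    # Cofactor expansion along the first row; recursion bottoms out at 1x1.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entries contribute nothing, as in the example
        sub = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0, column j
        total += (-1) ** j * A[0][j] * det(sub)
    return total
```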


Properties

The following theorems are important properties of the determinant.

$\mathbf{Thm\ 1.4.}$ If $A$ is a square matrix, $\text{det}(A) = \text{det}(A^\top)$

$\mathbf{Proof}$

Proof by induction on $n$.

It is clear for $n = 1$ as $A = A^\top$.
Assume that it is true for $n = k - 1$.
When $n = k$,

\[\begin{align*} &\text{det}(A) = a_{11} C_{11} + a_{12} C_{12} + \cdots + a_{1k} C_{1k} \\ &\text{det}(A^\top) = a_{11} C_{11}' + a_{12} C_{21}' + \cdots + a_{1k} C_{k1}' \end{align*}\]

And $C_{i1}' = C_{1i}$ for each $i$: the submatrix defining $C_{i1}'$ is the transpose of the one defining $C_{1i}$, and both are $(k - 1) \times (k - 1)$, so the induction hypothesis applies. Hence the two sums are equal$._\blacksquare$



$\mathbf{Thm\ 1.5.}$ Effect of elementary row operations on a determinant
(a) If $B$ is the matrix that results when a single row/column of $A$ is multiplied by a scalar $k$, then $\text{det}(B) = k \cdot \text{det}(A)$

(b) If $B$ is the matrix that results when two rows/columns of $A$ are interchanged, then $\text{det}(B) = - \text{det}(A)$

(c) If $B$ is the matrix that results when a multiple of one row/column of $A$ is added to another row/column, then $\text{det}(B) = \text{det}(A)$

$\mathbf{Proof}$

(a) Suppose, without loss of generality, that $B$ is obtained by multiplying the $i$th row of $A$ by $k$. Expanding along the $i$th row, whose cofactors are the same as those of $A$,

\[\begin{align*} \text{det}(B) = ka_{i1} C_{i1} + ka_{i2} C_{i2} + \cdots + ka_{in} C_{in} = k \cdot \text{det}(A). \end{align*}\]

(b) Follows from the definition: interchanging two rows composes every permutation with a transposition, which flips the sign of each elementary product.

(c) Suppose that $B$ is obtained by adding $k$ times the $j$th row of $A$ to the $i$th row. Expanding along the $i$th row,

\[\begin{align*} \text{det}(B) = (a_{i1} + ka_{j1}) C_{i1} + \cdots + (a_{in} + ka_{jn}) C_{in} = \text{det}(A) + k \left( a_{j1} C_{i1} + \cdots + a_{jn} C_{in} \right) = \text{det}(A) \end{align*}\]

since $a_{j1} C_{i1} + \cdots + a_{jn} C_{in}$ is the cofactor expansion of the matrix whose $i$th row is replaced by the $j$th row; that matrix has two identical rows, so this sum is $0$ by $\text{Thm } 1.6.$ (a)$._\blacksquare$
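The three effects can also be checked numerically. A self-contained Python sketch (illustrative only; `det` here recomputes the permutation-sum determinant from the definition so the snippet stands alone):

```python
from itertools import permutations

def det(A):
    # Permutation-sum determinant; fine for small matrices.
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

A = [[3, 1, 4], [2, 5, 6], [1, 4, 8]]

# (a) scaling a row by k scales the determinant by k
scaled = [[2 * x for x in A[0]], A[1], A[2]]
# (b) interchanging two rows flips the sign
swapped = [A[1], A[0], A[2]]
# (c) adding a multiple of one row to another changes nothing
sheared = [[x + 3 * y for x, y in zip(A[0], A[1])], A[1], A[2]]
```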



$\mathbf{Thm\ 1.6.}$ Let $A$ be an $n \times n$ matrix.
(a) If $A$ has two identical rows or columns, $\text{det}(A) = 0$.

(b) If $A$ has two proportional rows or columns, $\text{det}(A) = 0$.

(c) $\text{det}(kA) = k^n \text{det}(A)$

$\mathbf{Proof}$

(a) Let $B$ be the matrix obtained by interchanging the two identical rows or columns of $A$. Then clearly $B = A$, so $\text{det}(B) = \text{det}(A)$. But by $\text{Thm } 1.5.$ (b), $\text{det}(B) = -\text{det}(A)$. Hence $\text{det}(A) = -\text{det}(A)$, so $\text{det}(A) = 0$.

(b) By $\text{Thm } 1.5.$ (a), factor out the constant of proportionality to obtain a matrix with two identical rows or columns; the result then follows as in (a).

(c) By definition of determinant,

\[\begin{align*} \text{det}(kA) = \sum \text{sign}(j_1, \cdots, j_n) \, ka_{1 j_1} \cdot ka_{2 j_2} \cdots ka_{n j_n} = k^n \cdot \text{det}(A)._\blacksquare \end{align*}\]



By applying these theorems, several important facts about matrices related to the determinant can be proved.

$\mathbf{Thm\ 1.7.}$ Determinant Test for Invertibility
A square matrix $A$ is invertible $\Leftrightarrow$ $\text{det}(A) \neq 0$.

$\mathbf{Proof}$

( $\Rightarrow$ ) The RREF of $A$ is the identity matrix $I_n$.
Clearly $\text{det}(I_n) = 1$, and this implies $\text{det}(A) \neq 0$: $\text{det}(I_n)$ differs from $\text{det}(A)$ only by the nonzero factors that elementary row operations introduce.

  • row interchange $\to \times (-1)$
  • row addition $\to \times 1$
  • row multiplication by nonzero $k$ $\to \times k$

( $\Leftarrow$ ) $\text{det}(A) \neq 0 \to \text{det}(R) \neq 0$ where $R$ is the RREF of $A$.
Then $R$ must be $I_n$: otherwise $R$ would have at least one zero diagonal entry, and since $R$ is triangular, its determinant would be $0._\blacksquare$
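For $2 \times 2$ matrices the test is immediate to check by hand or in code. A small Python sketch (illustrative; the singular example has proportional rows, so its determinant vanishes by $\text{Thm } 1.6.$ (b)):

```python
def det2(M):
    # 2x2 determinant: ad - bc
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

invertible = [[2, 1], [1, 1]]   # det = 1, so an inverse exists
singular   = [[2, 4], [1, 2]]   # second row is half the first: det = 0
```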



$\mathbf{Lemma.}$ Determinant of a product of elementary matrices
Let $E$ be an $n \times n$ elementary matrix, and $B$ be an arbitrary $n \times n$ matrix.

$\text{det}(EB) = \text{det}(E) \text{det}(B)$
$\mathbf{Proof}$

(i) Multiply a row by nonzero $k$
$\text{det}(E) = k \cdot \text{det}(I_n) = k$
$\text{det}(EB) = k \cdot \text{det}(B) = \text{det}(E) \cdot \text{det}(B)$

(ii) Interchange two rows
$\text{det}(E) = -1$
$\text{det}(EB) = - \text{det}(B) = \text{det}(E) \cdot \text{det}(B)$

(iii) Add a multiple of one row to another
$\text{det}(E) = 1$
$\text{det}(EB) = \text{det}(B) = \text{det}(E) \cdot \text{det}(B)$



$\mathbf{Thm\ 1.8.}$ Determinant of a product of matrices
Let $A$ and $B$ be arbitrary $n \times n$ matrices.

$\text{det}(AB) = \text{det}(A) \text{det}(B)$
$\mathbf{Proof}$

(i) Suppose $A$ is singular. Then $AB$ is also singular, since $\text{rank}(AB) \leq \text{rank}(A) < n$. Thus
$\text{det}(AB) = 0 = \text{det}(A) \cdot \text{det}(B)$

(ii) Suppose $A$ is non-singular (invertible). Then we can write $A$ as a product
$A = E_1 E_2 \cdots E_k$ where each $E_i$ is an $n \times n$ elementary matrix. Then, by repeated application of the Lemma,

$\text{det}(AB) = \text{det}(E_1 E_2 \cdots E_k B) = \text{det}(E_1) \text{det}(E_2) \cdots \text{det}(E_k) \text{det}(B) = \text{det}(A) \text{det}(B)._\blacksquare$
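The product rule is easy to verify on a concrete pair of matrices. A Python sketch (illustrative only, restricted to the $2 \times 2$ case):

```python
def det2(M):
    # 2x2 determinant: ad - bc
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    # Plain 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # det = -2
B = [[0, 1], [5, 6]]   # det = -5
AB = matmul2(A, B)     # det(AB) should equal det(A) * det(B) = 10
```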



$\mathbf{Corollary.}$ $\text{det}(A^\top A) = \text{det}(A A^\top)$

$\mathbf{Proof}$

(a) Suppose $A$ is singular. Then $\text{det}(A) = \text{det}(A^\top) = 0$, so by $\text{Thm } 1.8.$ both $\text{det}(A^\top A)$ and $\text{det}(AA^\top)$ are $0$.

(b) Suppose $A$ is non-singular (invertible). Then $\text{det}(A) = \text{det}(A^\top) \neq 0$ by $\text{Thm } 1.4.$, so by $\text{Thm } 1.8.$,
$\text{det}(A^\top A) = \text{det}(A^\top) \text{det}(A) = \text{det}(A) \text{det}(A^\top) = \text{det}(AA^\top)._\blacksquare$



$\mathbf{Thm\ 1.9.}$ Determinant of an inverse matrix
Let $A$ be an invertible $n \times n$ matrix.

$\text{det}(A^{-1}) = \frac{1}{\text{det}(A)}$
$\mathbf{Proof}$

$AA^{-1} = I_n$, so by $\text{Thm } 1.8.$, $\text{det}(A) \, \text{det}(A^{-1}) = \text{det}(I_n) = 1._\blacksquare$
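For the $2 \times 2$ case this can be checked with the explicit inverse formula. A Python sketch (illustrative only):

```python
A = [[3, 1], [4, 2]]                      # det = 3*2 - 1*4 = 2
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Explicit 2x2 inverse: (1/det) * [[d, -b], [-c, a]]
A_inv = [[ A[1][1] / d, -A[0][1] / d],
         [-A[1][0] / d,  A[0][0] / d]]

# det(A^{-1}) should equal 1 / det(A)
det_inv = A_inv[0][0] * A_inv[1][1] - A_inv[0][1] * A_inv[1][0]
```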


