
Cramer’s Rule: Formula for inverse

In the previous post, we covered the definition of the determinant and its important properties. This post introduces some applications of the determinant.

Adjoint Matrix, Formula for Inverse

In this section, we will see that the inverse of a matrix can be expressed in terms of its determinant and its matrix of cofactors.

$\mathbf{Thm\ 2.1.}$ If the entries in any row/column of a square matrix are multiplied by the cofactors of the corresponding entries in a different row (column), then the sum of the products is zero. In other words, for $i \neq j$,

$a_{i1} C_{j1} + a_{i2} C_{j2} + \cdots + a_{in} C_{jn} = 0$
$a_{1i} C_{1j} + a_{2i} C_{2j} + \cdots + a_{ni} C_{nj} = 0._\blacksquare$
$\mathbf{Proof.}$

Let $A'$ be the matrix obtained from $A$ by replacing the $j$th row (column) of $A$ with its $i$th row (column), keeping all other entries the same, i.e.,

\[\begin{align*} A' = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ a_{21} & \cdots & a_{2n} \\ \vdots & \ddots & \vdots \\ a_{j-1, 1} & \cdots & a_{j-1, n} \\ a_{i1} & \cdots & a_{in} \\ a_{j+1, 1} & \cdots & a_{j+1, n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix} \end{align*}\]

Then, by cofactor expansion along the $j$th row of $A'$ (whose cofactors coincide with those of the $j$th row of $A$, since the two matrices differ only in that row),

$a_{i1} C_{j1} + a_{i2} C_{j2} + \cdots + a_{in} C_{jn} = \text{det}(A') = 0$


as the $i$th row and the $j$th row of $A'$ are identical. The proof of the column case is analogous. $_\blacksquare$
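As a quick sanity check, consider the $2 \times 2$ matrix below, whose cofactors are $C_{11} = 4$, $C_{12} = -3$, $C_{21} = -2$, $C_{22} = 1$. Multiplying the entries of the first row by the cofactors of the second row indeed gives zero:

\[\begin{align*} A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad a_{11} C_{21} + a_{12} C_{22} = 1 \cdot (-2) + 2 \cdot 1 = 0. \end{align*}\]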




$\mathbf{Def.}$ Adjoint matrix
Let $A$ be an $n \times n$ square matrix and let $C_{ij}$ be the cofactor of $a_{ij}$. The adjoint of $A$, denoted $\text{adj}(A)$, is defined as the transpose of the matrix of cofactors $C$ of $A$, i.e., $\text{adj}(A) = C^\top._\blacksquare$

\(C = \begin{bmatrix} C_{11} & \cdots & C_{1n} \\ \vdots & \ddots & \vdots \\ C_{n1} & \cdots & C_{nn} \end{bmatrix}\)
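For example, for a general $2 \times 2$ matrix the cofactors are $C_{11} = d$, $C_{12} = -c$, $C_{21} = -b$, $C_{22} = a$, so that

\[\begin{align*} A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \quad C = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}, \quad \text{adj}(A) = C^\top = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. \end{align*}\]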

$\mathbf{Thm\ 2.2.}$ Formula for the inverse of a matrix

If $A$ is an invertible square matrix, then $A^{-1} = \frac{1}{\text{det}(A)} \cdot \text{adj}(A)._\blacksquare$
$\mathbf{Proof.}$
\[\begin{align*} A \cdot \text{adj}(A) &= \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix} \cdot \begin{bmatrix} C_{11} & \cdots & C_{n1} \\ \vdots & \ddots & \vdots \\ C_{1n} & \cdots & C_{nn} \end{bmatrix} \\ &= \text{det}(A) \cdot I_n, \end{align*}\]

since each diagonal entry $a_{i1} C_{i1} + \cdots + a_{in} C_{in}$ is the cofactor expansion of $\text{det}(A)$ along the $i$th row, while each off-diagonal entry vanishes by $\mathbf{Thm\ 2.1}$. Dividing both sides by $\text{det}(A)$ gives $A \cdot \frac{1}{\text{det}(A)} \text{adj}(A) = I_n$, hence $A^{-1} = \frac{1}{\text{det}(A)} \cdot \text{adj}(A)._\blacksquare$
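Applied to the $2 \times 2$ example above, this recovers the familiar closed-form inverse: since $\text{det}(A) = ad - bc$,

\[\begin{align*} A \cdot \text{adj}(A) = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = (ad - bc) \cdot I_2, \quad \text{so} \quad A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \quad (ad - bc \neq 0). \end{align*}\]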




Cramer’s Rule

The most well-known application of the determinant is Cramer's rule, which expresses the solution of a linear system in terms of determinants when the coefficient matrix is invertible.

$\mathbf{Thm\ 2.3.}$ Cramer's rule

If $Ax = b$ is a linear system of $n$ equations in $n$ unknowns, then the system has a unique solution if and only if $\text{det}(A) \neq 0$, in which case the solution is

$x_1 = \frac{\text{det}(A_1)}{\text{det}(A)}, x_2 = \frac{\text{det}(A_2)}{\text{det}(A)}, \cdots, x_n = \frac{\text{det}(A_n)}{\text{det}(A)}$


where $A_j$ is the matrix that results when the $j$th column of $A$ is replaced by $b._\blacksquare$

$\mathbf{Proof.}$

Suppose $Ax = b$, where $A$ is invertible.
Then,
\(\begin{align*} x &= A^{-1} b \\ &= \frac{1}{\text{det}(A)} \cdot \text{adj}(A) \cdot b. \end{align*}\)

Thus, the $j$th entry of $x$, $x_j$, is
\(\begin{align*} x_j &= \frac{1}{\text{det}(A)} (b_1 C_{1j} + \cdots + b_n C_{nj}) \\ &= \frac{1}{\text{det}(A)} \text{det}(A_j), \end{align*}\)

where the second equality holds because $b_1 C_{1j} + \cdots + b_n C_{nj}$ is the cofactor expansion of $\text{det}(A_j)$ along its $j$th column: replacing the $j$th column of $A$ by $b$ does not change the cofactors of that column.$_\blacksquare$
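As a small worked example, consider the system $x_1 + 2 x_2 = 5$, $3 x_1 + 4 x_2 = 6$, i.e. $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $b = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$. Then $\text{det}(A) = -2$, and

\[\begin{align*} x_1 = \frac{\text{det}(A_1)}{\text{det}(A)} = \frac{\begin{vmatrix} 5 & 2 \\ 6 & 4 \end{vmatrix}}{-2} = \frac{8}{-2} = -4, \quad x_2 = \frac{\text{det}(A_2)}{\text{det}(A)} = \frac{\begin{vmatrix} 1 & 5 \\ 3 & 6 \end{vmatrix}}{-2} = \frac{-9}{-2} = \frac{9}{2}, \end{align*}\]

which indeed satisfies both equations: $-4 + 2 \cdot \frac{9}{2} = 5$ and $3 \cdot (-4) + 4 \cdot \frac{9}{2} = 6$.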
