How to Construct an Invertible Matrix? Just Choose Large Diagonals

Suppose that I asked you to give me an example of an n\times n complex matrix

\displaystyle A=\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{bmatrix}

which is invertible. You would probably give me the n\times n identity matrix I, which is its own inverse (i.e. an involution). Now suppose I ask for an invertible matrix which is not the identity. You might change one of the diagonal entries to a different nonzero complex number. Perhaps you know a bit of linear algebra, and you give me an upper-triangular matrix with no zeros on the diagonal.

So far, you have only given me matrices with some zero entries. Not surprisingly, matrices are easier to deal with when many of the entries are zero. To up the difficulty level, I now ask you for an invertible matrix for which all the entries are nonzero. You scratch your head, mumble “um…”, and ask for a minute.

Before we dive into necessary and sufficient criteria for the invertibility of a matrix, I want to share an anecdote about Dennis Gaitsgory, who was my professor for Math 123 when I was an undergraduate. Since the source for the anecdote is the Harvard College Math Review endpaper “How to Compute Determinants,” written by Professor Gaitsgory himself, I have no reason to consider it apocryphal.

During his graduate student years, Professor Gaitsgory was a teaching fellow for an undergraduate linear algebra class. One early morning, he was explaining determinants to his students, and he made the claim that the determinant of a generic matrix is never zero. As you may know, having nonzero determinant is equivalent to a matrix being invertible. To back up his claim, he gave the pretty “generic-looking” 3\times 3 matrix

\displaystyle A=\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}

as an example. The logic behind giving a single example to illustrate a broad claim is a bit off, but in his defense, the students were non-math majors. Computing the determinant by Laplace expansion along the first row,

\begin{array}{lcl}\displaystyle\det A&=&\displaystyle1\cdot\begin{vmatrix}5&6\\8&9\end{vmatrix}-2\cdot\begin{vmatrix}4&6\\7&9\end{vmatrix}+3\cdot\begin{vmatrix}4&5\\7&8\end{vmatrix}\\[1em]&=&\displaystyle1\cdot\left(5\cdot9-6\cdot8\right)-2\cdot\left(4\cdot9-6\cdot7\right)+3\cdot\left(4\cdot8-5\cdot7\right)\\[1em]&=&\displaystyle-3+12-9\\[1em]&=&\displaystyle0\end{array}

Professor (or rather, graduate student) Gaitsgory thought he had made a mistake in his computation; otherwise, he had just given a nonexample! Well, he had. Even brilliant mathematicians struggle to give examples of invertible matrices early in the morning.

Gaitsgory’s example is not invertible because the columns are linearly dependent, and therefore the matrix is rank deficient. To see this, observe that

\displaystyle\begin{bmatrix}2\\5\\8\end{bmatrix}-\begin{bmatrix}1\\4\\7\end{bmatrix}=\begin{bmatrix}1\\1\\1\end{bmatrix}\Longrightarrow\begin{bmatrix}3\\6\\9\end{bmatrix}=\begin{bmatrix}2\\5\\8\end{bmatrix}+\begin{bmatrix}1\\1\\1\end{bmatrix}=2\begin{bmatrix}2\\5\\8\end{bmatrix}-\begin{bmatrix}1\\4\\7\end{bmatrix}

which shows that the third column is a linear combination of the first two columns.
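
If you would rather outsource the arithmetic, here is a quick sanity check of everything above. This is just a sketch of mine using NumPy; any linear algebra package would do.

```python
import numpy as np

# Gaitsgory's "generic looking" matrix.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

print(np.linalg.det(A))          # ~0 (exactly 0, up to floating-point noise)
print(np.linalg.matrix_rank(A))  # 2, i.e. rank deficient

# The third column is a linear combination of the first two.
print(np.allclose(A[:, 2], 2 * A[:, 1] - A[:, 0]))  # True
```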

Returning to my request for an invertible matrix with no zero entries, how would you, the reader, go about producing such a matrix? Choosing numbers more or less at random for the entries and then checking the determinant is slow and tedious, particularly if the size n of the matrix is large. A better approach is to start with a matrix whose entries are identical and nonzero, and then change the entries so that the columns become linearly independent. For example, start with

A_{0}=\begin{bmatrix}1&1&\cdots&1\\1&1&\cdots&1\\ \vdots&\vdots&\ddots&\vdots\\1&1&\cdots&1\end{bmatrix}

Clearly, A_{0} is not invertible; it has rank 1. But say we change the second coordinate of the second column to 2, the third coordinate of the third column to 3, … , and the n^{th} coordinate of the n^{th} column to n to obtain a matrix A=(a_{ij}). Symbolically,

a_{ij}=\displaystyle\begin{cases} {1}&{i\neq j} \\ {j}&{i=j}\end{cases}

I claim that A is invertible. Suppose \beta_{1},\ldots,\beta_{n} are scalars such that

\displaystyle\beta_{1}\begin{bmatrix}1\\1\\ \vdots\\1\end{bmatrix}+\beta_{2}\begin{bmatrix}1\\2\\ \vdots\\1\end{bmatrix}+\cdots+\beta_{n}\begin{bmatrix}1\\1\\ \vdots\\n\end{bmatrix}=\begin{bmatrix}\beta_{1}+\beta_{2}+\cdots+\beta_{n}\\ \beta_{1}+2\beta_{2}+\cdots+\beta_{n}\\ \vdots\\ \beta_{1}+\beta_{2}+\cdots+n\beta_{n}\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ \vdots\\ 0\end{bmatrix}

Subtracting the first row from the second, we see that \beta_{2}=0. Subtracting the second row from the third row, we see that 2\beta_{3}=0\Leftrightarrow\beta_{3}=0. Suppose that we have shown that \beta_{2}=\cdots=\beta_{k}=0. Then subtracting the k^{th} row from the (k+1)^{th} row, we see that k\beta_{k+1}=0\Leftrightarrow\beta_{k+1}=0. By induction, we conclude that \beta_{2}=\cdots=\beta_{n}=0. Substituting this into the first row, we conclude that \beta_{1}=0 as well, so all the \beta_{i} are zero and the columns of A are linearly independent.
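
For the skeptical (or tired) reader, here is a short numerical check of this construction. The helper name is mine, and NumPy is again an assumed dependency.

```python
import numpy as np

def all_ones_plus_diagonal(n):
    """Matrix with a_ij = 1 for i != j and a_ii = i (1-indexed), as above."""
    A = np.ones((n, n))
    A[np.diag_indices(n)] = np.arange(1, n + 1)
    return A

for n in (2, 3, 5, 8):
    # The determinant is nonzero for every n; in fact it works out to (n-1)!,
    # though we do not need that.
    print(n, np.linalg.det(all_ones_plus_diagonal(n)))
```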

We have satisfied my request for an invertible matrix with all nonzero entries, but it took some effort. Now, I want to describe a method which focuses only on the magnitude of the diagonal entries of the matrix relative to the magnitude of the other entries of a given row. A matrix A is diagonally dominant if

\displaystyle\left|a_{ii}\right|\geq\sum_{{j=1}\atop{j\neq i}}^{n}\left|a_{ij}\right|,\indent\forall i=1,\ldots,n

A is said to be strictly diagonally dominant if the inequality is strict for every i. Define nonnegative quantities R_{1},\ldots,R_{n} by

\displaystyle R_{i}:=\sum_{{j=1}\atop{j\neq i}}^{n}\left|a_{ij}\right|,\indent\forall i=1,\ldots,n
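
In code, these radii are a one-liner. A minimal sketch, assuming NumPy; the example matrix is my own.

```python
import numpy as np

def gershgorin_radii(A):
    """R_i: sum of the absolute values of the off-diagonal entries of row i."""
    return np.abs(A).sum(axis=1) - np.abs(np.diag(A))

A = np.array([[ 4.0,  1.0, 1.0],
              [ 2.0, -5.0, 1.0],
              [ 1.0,  1.0, 3.0]])

print(gershgorin_radii(A))                       # [2. 3. 2.]
print(np.abs(np.diag(A)) > gershgorin_radii(A))  # [ True  True  True]
```

Since each diagonal entry beats its radius, this particular A is strictly diagonally dominant.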

The closed disk \overline{D}(a_{ii},R_{i}) in the complex plane centered at a_{ii} and of radius R_{i} is an example of a Gershgorin disk. We can use Gershgorin’s circle theorem to construct invertible matrices with very little effort.

Theorem. Every eigenvalue of A lies in at least one of the Gershgorin disks \overline{D}(a_{ii},R_{i}).

Proof. Let \lambda be an eigenvalue of A, and let x=(x_{1},\ldots,x_{n})\in\mathbb{C}^{n} be an associated eigenvector. Since x is a nonzero vector by definition, there exists an index i such that \left|x_{i}\right|=\max_{j}\left|x_{j}\right|>0. The i^{th} component of the eigenvector equation Ax=\lambda x reads

\displaystyle\sum_{j=1}^{n}a_{ij}x_{j}=\lambda x_{i}

which we rewrite as

\displaystyle\sum_{{j=1}\atop{j\neq i}}^{n}a_{ij}x_{j}=\lambda x_{i}-a_{ii}x_{i}=(\lambda-a_{ii})x_{i}

Taking the magnitude of both sides and applying the triangle inequality to the sum, we obtain the estimate

\displaystyle\left|\lambda-a_{ii}\right|\cdot\left|x_{i}\right|\leq\sum_{{j=1}\atop{j\neq i}}^{n}\left|a_{ij}\right|\cdot\left|x_{j}\right|\Longleftrightarrow\left|\lambda-a_{ii}\right|\leq\sum_{{j=1}\atop{j\neq i}}^{n}\left|a_{ij}\right|\cdot\left|\frac{x_{j}}{x_{i}}\right|,

where division is possible since \left|x_{i}\right|>0 by construction. Moreover, since x_{i} is taken to be the coordinate with the largest magnitude, we have that \left|\frac{x_{j}}{x_{i}}\right|\leq1, for all j=1,\ldots,n. We conclude that

\displaystyle\left|\lambda-a_{ii}\right|\leq\sum_{{j=1}\atop{j\neq i}}^{n}\left|a_{ij}\right|=R_{i}

\Box
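
The theorem is easy to stress-test numerically. The following sketch (NumPy, with a random complex matrix of my choosing) asserts that every eigenvalue lands in at least one disk; the small tolerance guards against floating-point round-off.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

# Every eigenvalue of A must lie in at least one Gershgorin disk.
for lam in np.linalg.eigvals(A):
    assert np.any(np.abs(lam - centers) <= radii + 1e-9)
print("every eigenvalue lies in some disk")
```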

Recall that a matrix is invertible if and only if it has trivial kernel; equivalently, if and only if \lambda=0 is not an eigenvalue. If we choose the diagonal entries of A sufficiently large so that \left|a_{ii}\right|>R_{i} for i=1,\ldots,n, then any eigenvalue \lambda lies in some disk \overline{D}(a_{ii},R_{i}), and the reverse triangle inequality gives \left|\lambda\right|\geq\left|a_{ii}\right|-\left|\lambda-a_{ii}\right|\geq\left|a_{ii}\right|-R_{i}>0. Hence, every eigenvalue of A has positive magnitude, and A is invertible.
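
Putting the recipe into practice: start from any matrix with all nonzero entries, then inflate each diagonal entry past its off-diagonal row sum. A minimal sketch, again assuming NumPy, with parameters of my own choosing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Any starting matrix with all nonzero entries will do.
B = rng.uniform(1.0, 2.0, size=(n, n))

# Enlarge each diagonal entry past its off-diagonal row sum R_i.
R = np.abs(B).sum(axis=1) - np.abs(np.diag(B))
A = B.copy()
A[np.diag_indices(n)] += R + 1.0   # now |a_ii| > R_i in every row

print(np.linalg.det(A))  # nonzero, so A is invertible, and no entry of A is zero
```

No determinants, no row reduction: strict diagonal dominance alone guarantees invertibility.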
