Suppose that I asked you to give me an example of an $n \times n$ complex matrix which is invertible. You would probably give me the identity matrix $I$, which is its own inverse (i.e. an involution). Now suppose I ask for an invertible matrix which is not the identity. You may change one of the diagonal entries to a different nonzero complex number. Perhaps you know a bit of linear algebra, and you give me an upper-triangular matrix with no zeros on the diagonal.

So far, you have only given me matrices with some zero entries. Not surprisingly, matrices are easier to deal with when many of the entries are zero. To up the difficulty level, I now ask you for an invertible matrix all of whose entries are nonzero. You scratch your head, mumble “um…”, and ask for a minute.

Before we dive into necessary and sufficient criteria for the invertibility of a matrix, I want to share an anecdote about Dennis Gaitsgory, who was my professor for Math 123 when I was an undergraduate. Since the source for the anecdote is the Harvard College Math Review endpaper “How to Compute Determinants,” written by Professor Gaitsgory himself, I have no reason to consider it apocryphal.

During his graduate student years, Professor Gaitsgory was a teaching fellow for an undergraduate linear algebra class. One early morning, he was explaining determinants to his students, and he made the claim that the determinant of a generic matrix is never zero. As you may know, having nonzero determinant is equivalent to a matrix being invertible. To back up his claim, he gave the pretty “generic looking” matrix

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$

as an example. The logic behind giving a single example to illustrate a broad claim is a bit off, but in his defense, the students were non-math majors. Computing the determinant by Laplace expansion,

$$\det A = 1(5 \cdot 9 - 6 \cdot 8) - 2(4 \cdot 9 - 6 \cdot 7) + 3(4 \cdot 8 - 5 \cdot 7) = -3 + 12 - 9 = 0.$$

Professor (or rather, graduate student) Gaitsgory thought he had made a mistake in his computation; otherwise, he had just given a nonexample! Well, he had. Even brilliant mathematicians struggle to give examples of invertible matrices early in the morning.

Gaitsgory’s example is not invertible because the columns are linearly dependent, and therefore the matrix is rank deficient. To see this, observe that

$$\begin{pmatrix} 3 \\ 6 \\ 9 \end{pmatrix} = 2 \begin{pmatrix} 2 \\ 5 \\ 8 \end{pmatrix} - \begin{pmatrix} 1 \\ 4 \\ 7 \end{pmatrix},$$

which shows that the third column is a linear combination of the first two columns.
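If you would rather let a machine check this, here is a quick Python sketch (the helper `det3` is my own, and I am assuming the matrix in the anecdote is the classic 1-through-9 example described above) that carries out the same Laplace expansion and verifies the column relation:

```python
def det3(m):
    """Determinant of a 3 x 3 matrix by Laplace expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det3(A))  # 0 -> not invertible

# The dependence, row by row: entry in column 3 = 2 * (column 2) - (column 1).
assert all(2 * row[1] - row[0] == row[2] for row in A)
```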

Returning to my request for an invertible matrix with no zero entries, how would you, the reader, go about giving such a matrix? Choosing numbers more or less at random for the entries and then checking the determinant is slow and tedious, particularly if the size of the matrix is large. A better approach would be to start with a matrix whose entries are identical and nonzero and then start changing the entries so that the columns are linearly independent. For example, start with the all-ones matrix

$$J = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}.$$

Clearly, $J$ is not invertible; it has rank $1$. But say we change the second coordinate of the second column to $2$, the third coordinate of the third column to $2$, … , and the $n$th coordinate of the $n$th column to $2$ to obtain a matrix $A$. Symbolically,

$$A = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 1 & \cdots & 1 \\ 1 & 1 & 2 & \cdots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \cdots & 2 \end{pmatrix}.$$
I claim that $A$ is invertible. Suppose $c_1, \ldots, c_n$ are scalars such that

$$c_1 a_1 + c_2 a_2 + \cdots + c_n a_n = 0,$$

where $a_j$ denotes the $j$th column of $A$. Subtracting the first row of this vector equation from the second, we see that $c_2 = 0$. Subtracting the second row from the third row, we see that $c_3 = c_2 = 0$. Suppose that we have shown that $c_k = 0$. Then subtracting the $k$th row from the $(k+1)$st row, we see that $c_{k+1} = 0$. By induction, we conclude that $c_2 = c_3 = \cdots = c_n = 0$. Applying this result to the first row, which reads $c_1 + c_2 + \cdots + c_n = 0$, we conclude that all the $c_j$ are zero.
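To sanity-check the construction for a few sizes, here is a short Python sketch (the helper names `det` and `modified_ones` are my own; exact arithmetic via `fractions.Fraction` is just one way to avoid floating-point doubt):

```python
from fractions import Fraction

def det(M):
    """Exact determinant via fraction-based Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for col in range(n):
        # Find a row with a nonzero pivot in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    p = Fraction(sign)
    for i in range(n):
        p *= M[i][i]
    return p

def modified_ones(n):
    """All-ones n x n matrix with the k-th diagonal entry set to 2 for k >= 2."""
    return [[2 if (i == j and i > 0) else 1 for j in range(n)] for i in range(n)]

for n in range(2, 7):
    # The determinant is 1 for every n, so the matrix is invertible.
    print(n, det(modified_ones(n)))
```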

We have satisfied my request for an invertible matrix with all nonzero entries, but it took some effort. Now, I want to describe a method which focuses only on the magnitude of the diagonal entries of the matrix relative to the magnitudes of the other entries in a given row. A matrix $A = (a_{ij})$ is *diagonally dominant* if

$$|a_{ii}| \geq \sum_{j \neq i} |a_{ij}| \quad \text{for all } i,$$

and $A$ is said to be *strictly diagonally dominant* if the inequality is strict for every $i$. Define nonnegative quantities $R_i$ by

$$R_i = \sum_{j \neq i} |a_{ij}|.$$
The closed disk $D(a_{ii}, R_i)$ in the complex plane centered at $a_{ii}$ and of radius $R_i$ is an example of a *Gershgorin disk*. We can use Gershgorin’s circle theorem to construct invertible matrices with very little effort.

*Theorem.* Every eigenvalue of $A$ lies in at least one of the Gershgorin disks $D(a_{ii}, R_i)$.

*Proof.* Let $\lambda$ be an eigenvalue of $A$, and let $x$ be an associated eigenvector. Since $x$ is a nonzero vector by definition, there exists an index $i$ such that $|x_i| \geq |x_j|$ for all $j$; in particular, $x_i \neq 0$. Applying the definition of the eigenvector,

$$\sum_{j} a_{ij} x_j = \lambda x_i,$$

which we rewrite as

$$\lambda - a_{ii} = \sum_{j \neq i} a_{ij} \frac{x_j}{x_i}.$$
Taking the magnitude of both sides and applying the triangle inequality to the sum, we obtain the estimate

$$|\lambda - a_{ii}| \leq \sum_{j \neq i} |a_{ij}| \frac{|x_j|}{|x_i|},$$

where division is possible since $x_i \neq 0$ by construction. Moreover, since $x_i$ is taken to be the coordinate with the largest magnitude, we have that $|x_j| / |x_i| \leq 1$ for all $j$. We conclude that

$$|\lambda - a_{ii}| \leq \sum_{j \neq i} |a_{ij}| = R_i,$$

that is, $\lambda$ lies in the Gershgorin disk $D(a_{ii}, R_i)$. $\blacksquare$
Recall that a matrix is invertible if and only if it has trivial kernel; or equivalently, it does not have $0$ as an eigenvalue. If we choose the diagonal entries of $A$ sufficiently large so that $|a_{ii}| > R_i$ for $i = 1, \ldots, n$, then every eigenvalue $\lambda$ lies in some disk $D(a_{ii}, R_i)$, and the reverse triangle inequality gives $|\lambda| \geq |a_{ii}| - |\lambda - a_{ii}| \geq |a_{ii}| - R_i > 0$. Hence all the eigenvalues of $A$ have positive magnitude, and $A$ is invertible.
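This recipe is easy to automate. Below is a hedged Python sketch (the helpers `gershgorin_invertible` and `det` are my own names, not anything from the post): fill a matrix with random nonzero off-diagonal entries, then overwrite each diagonal entry with $R_i + 1$ so the matrix is strictly diagonally dominant, hence invertible by the argument above.

```python
from fractions import Fraction
import random

def gershgorin_invertible(n, seed=0):
    """Random n x n matrix with all entries nonzero, made strictly diagonally
    dominant (|a_ii| > R_i) so Gershgorin's theorem guarantees invertibility."""
    rng = random.Random(seed)
    A = [[rng.choice([-3, -2, -1, 1, 2, 3]) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        R = sum(abs(A[i][j]) for j in range(n) if j != i)
        A[i][i] = R + 1  # strictly dominates its row, and is nonzero
    return A

def det(M):
    """Exact determinant via fraction-based Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    p = Fraction(sign)
    for i in range(n):
        p *= M[i][i]
    return p

# Every matrix produced this way should have nonzero determinant.
for seed in range(5):
    assert det(gershgorin_invertible(6, seed)) != 0
print("all strictly diagonally dominant examples are invertible")
```

Note that the determinant check is redundant given the theorem; it is only there to confirm the construction numerically.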
