A few weeks ago, we proved that a Hermitian matrix is positive-definite if and only if its leading principal minors are positive. This result is known as Sylvester’s criterion, after the nineteenth-century English mathematician J. J. Sylvester. Disappointingly, no one asked whether we can relax the hypothesis to positive-semidefinite. The answer is no, not if we only expect to check the leading principal minors. Instead, we have to check that all of the principal minors are nonnegative. This added burden is probably an indication that Sylvester’s criterion is not computationally useful when $n$ is large. Perhaps someone more knowledgeable about numerical analysis will weigh in on this matter.
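To make the failure of the leading-minor test concrete, here is a small numerical check (my own illustration, not part of the original discussion) using the standard $2 \times 2$ counterexample:

```python
import numpy as np

# A classic counterexample: both leading principal minors of this matrix
# vanish (so they are nonnegative), yet the matrix is not
# positive-semidefinite, since it has eigenvalue -1.
A = np.array([[0.0, 0.0],
              [0.0, -1.0]])

leading_minors = [np.linalg.det(A[:k, :k]) for k in (1, 2)]
min_eigenvalue = np.linalg.eigvalsh(A)[0]
print(leading_minors, min_eigenvalue)
```

The missing ingredient is the non-leading principal minor $\det [a_{22}] = -1$, which the generalized criterion below does inspect.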
Here’s the statement of the generalized Sylvester’s criterion:
Theorem 1. A Hermitian matrix $A$ is positive-semidefinite if and only if all of its principal minors are nonnegative.
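Before proving the theorem, we can sanity-check it numerically. The sketch below (the helper name is mine, and it brute-forces all $2^n - 1$ principal minors, so it is only viable for tiny $n$) compares the minor condition against the eigenvalue definition of positive-semidefiniteness on random symmetric matrices:

```python
import itertools
import numpy as np

def all_principal_minors_nonneg(A, tol=1e-9):
    """Theorem 1's condition: every principal minor of A is nonnegative.
    Brute force over all index subsets, hence exponential in n."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for alpha in itertools.combinations(range(n), k):
            if np.linalg.det(A[np.ix_(alpha, alpha)]) < -tol:
                return False
    return True

rng = np.random.default_rng(0)
for _ in range(200):
    X = rng.standard_normal((3, 3))
    H = (X + X.T) / 2                         # random symmetric matrix
    psd = np.linalg.eigvalsh(H)[0] >= -1e-9   # definition of PSD
    assert all_principal_minors_nonneg(H) == psd
    G = X @ X.T                               # Gram matrix: always PSD
    assert all_principal_minors_nonneg(G)
print("minor test agrees with the eigenvalue test")
```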
Unfortunately, the proof is not as elegant as the one for Sylvester’s criterion for positive-definiteness. We’ll break the proof up into a few lemmas to simplify the exposition, beginning with the matrix product representation of a principal submatrix.
In case you’re human and you forget things, a submatrix $B$ of an $m \times n$ matrix $A$ is constructed as follows: take a subset $\beta = \{\beta_1 < \cdots < \beta_k\}$ of column indices and a subset $\alpha = \{\alpha_1 < \cdots < \alpha_k\}$ of row indices and define $B$ by

$b_{ij} = a_{\alpha_i \beta_j}, \qquad 1 \le i, j \le k.$

If $\alpha = \beta$, then we say that the submatrix $B$ is principal. The associated minor is the determinant of the matrix $B$.
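In NumPy terms (an illustration of the definition, not part of the original argument), the submatrix is exactly what fancy indexing with `np.ix_` produces:

```python
import numpy as np

A = np.arange(16).reshape(4, 4)       # a concrete 4 x 4 matrix
alpha = [0, 2]                        # row indices
beta = [1, 3]                         # column indices
B = A[np.ix_(alpha, beta)]            # B[i, j] = A[alpha[i], beta[j]]
print(B)                              # [[ 1  3]
                                      #  [ 9 11]]

# Taking alpha == beta yields a principal submatrix, whose determinant
# is a principal minor of A:
P = A[np.ix_(alpha, alpha)]
print(np.linalg.det(P))
```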
A useful way to think about minors is from the perspective of linear transformations. Let $\alpha$ and $\beta$ be as above, let $e_1, \dots, e_n$ denote the standard basis for $\mathbb{C}^n$, let $f_1, \dots, f_k$ denote the standard basis for $\mathbb{C}^k$, and let $\langle \cdot, \cdot \rangle$ denote the Euclidean inner product. Define operators $E_\alpha, E_\beta : \mathbb{C}^k \to \mathbb{C}^n$ by

$E_\alpha f_j = e_{\alpha_j} \quad \text{and} \quad E_\beta f_j = e_{\beta_j}, \qquad 1 \le j \le k.$

We’re abusing notation slightly above by identifying $\mathbb{C}^k$ with an isomorphic subspace of $\mathbb{C}^n$. For $1 \le i \le k$, the adjoint $E_\alpha^*$ maps $e_{\alpha_i}$ to $f_i$ (and maps the remaining basis vectors to zero). So,

$\langle E_\alpha^* A E_\beta f_j, f_i \rangle = \langle A E_\beta f_j, E_\alpha f_i \rangle = \langle A e_{\beta_j}, e_{\alpha_i} \rangle = a_{\alpha_i \beta_j} = b_{ij}.$

We can therefore write $B = E_\alpha^* A E_\beta$.
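This product identity can be checked numerically. In the sketch below (my notation), each operator is realized as the $n \times k$ matrix whose columns are the chosen standard basis vectors, so the product is an ordinary matrix product:

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
alpha, beta = [0, 2, 4], [1, 2, 3]

E_alpha = np.eye(n)[:, alpha]        # columns e_{alpha_1}, ..., e_{alpha_k}
E_beta = np.eye(n)[:, beta]
B = E_alpha.conj().T @ A @ E_beta    # the adjoint is the conjugate transpose

assert np.allclose(B, A[np.ix_(alpha, beta)])
print("the operator product reproduces the submatrix")
```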
The necessity of Sylvester’s criterion is the easy part of the proof. If $A$ is positive-semidefinite, then

$\langle B x, x \rangle = \langle E_\alpha^* A E_\alpha x, x \rangle = \langle A E_\alpha x, E_\alpha x \rangle \ge 0,$

where $x \in \mathbb{C}^k$ is arbitrary and $B = E_\alpha^* A E_\alpha$ is a principal submatrix. Hence, the principal submatrix $B$ is positive-semidefinite and therefore has nonnegative determinant (all of its eigenvalues are nonnegative, and the determinant is their product).
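The necessity direction is easy to check by experiment. The snippet below (illustrative only) builds a positive-semidefinite matrix as a Gram matrix and confirms that each sampled principal submatrix is positive-semidefinite with nonnegative determinant:

```python
import numpy as np

# Illustrative check of the necessity direction: every principal submatrix
# of a positive-semidefinite matrix is itself positive-semidefinite, so its
# determinant (a principal minor) is nonnegative.
rng = np.random.default_rng(4)
X = rng.standard_normal((5, 3))
A = X @ X.T                                   # PSD by construction, rank <= 3

for alpha in [(0,), (1, 3), (0, 2, 4), (0, 1, 2, 3, 4)]:
    P = A[np.ix_(alpha, alpha)]
    assert np.linalg.eigvalsh(P)[0] >= -1e-9  # P is PSD
    assert np.linalg.det(P) >= -1e-9          # so the minor is nonnegative
print("all sampled principal minors are nonnegative")
```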
For sufficiency, we’ll need the existence of a nonsingular principal submatrix of a positive-semidefinite matrix.
Lemma 2. Let $A$ be an $m \times n$ complex matrix with rank $k$. If rows $i_1 < \cdots < i_k$ of $A$ are linearly independent and columns $j_1 < \cdots < j_k$ of $A$ are linearly independent, then the $k \times k$ matrix $B$ formed by striking the remaining rows and columns of $A$ is nonsingular.
By hypothesis, the $n$-tuples

$r_1, \dots, r_k,$

where $r_p$ denotes row $i_p$ of $A$, and the $m$-tuples

$c_1, \dots, c_k,$

where $c_q$ denotes column $j_q$ of $A$, are linearly independent in $\mathbb{C}^n$ and $\mathbb{C}^m$, respectively. We need to show that the $k \times k$ matrix $B = [a_{i_p j_q}]_{p,q=1}^{k}$ is nonsingular.
Consider the $m \times k$ matrix $C = \begin{bmatrix} c_1 & \cdots & c_k \end{bmatrix}$ consisting of the column vectors $c_1, \dots, c_k$, which has rank $k$ since the column vectors were hypothesized to be linearly independent.
Since the row rank equals the column rank, we see that $r_1, \dots, r_k$ form a basis for the row space of $A$. Hence, for every row $a_s$ of $A$, where $1 \le s \le m$, there exist scalars $t_{s1}, \dots, t_{sk}$ such that

$a_s = \sum_{p=1}^{k} t_{sp} \, r_p.$
The scalars $t_{sp}$, where $1 \le s \le m$ and $1 \le p \le k$, form an $m \times k$ matrix $T$. Restricting the identity above to the columns $j_1, \dots, j_k$ shows that row $s$ of $C$ is the same linear combination of the rows of $B$; in other words, we can write $C$ as the product $TB$. Observe that we have the inequalities

$k = \operatorname{rank} C \le \min\{\operatorname{rank} T, \operatorname{rank} B\} \le \operatorname{rank} B \le k,$

which imply that $\operatorname{rank} B = k$. Equivalently, $B$ is nonsingular.
Moreover, any $l \times l$ submatrix with $l > k$ is singular. To see this, we appeal to the product decomposition $E_\alpha^* A E_\beta$, where $\alpha$ and $\beta$ are the chosen subsets of rows and columns, respectively. The matrix $E_\alpha^*$ has rank $l$, $A$ has rank $k$, and $E_\beta$ has rank $l$. Hence,

$\operatorname{rank}(E_\alpha^* A E_\beta) \le \min\{l, k\} = k < l.$
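Here is a quick numerical illustration of Lemma 2 and the remark above (my own construction): for a random rank-$2$ matrix, the first two rows and first two columns are generically independent, the $2 \times 2$ submatrix at their intersection is nonsingular, and any $3 \times 3$ submatrix is singular.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))   # rank 2
assert np.linalg.matrix_rank(A) == 2

rows, cols = [0, 1], [0, 1]
assert np.linalg.matrix_rank(A[rows, :]) == 2    # chosen rows independent
assert np.linalg.matrix_rank(A[:, cols]) == 2    # chosen columns independent

B = A[np.ix_(rows, cols)]
assert np.linalg.matrix_rank(B) == 2             # nonsingular, as Lemma 2 says

B3 = A[np.ix_([0, 1, 2], [0, 1, 2])]             # 3 > rank(A), so singular
assert np.linalg.matrix_rank(B3) < 3
print("crossing submatrix nonsingular; oversized submatrix singular")
```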
Corollary 3. Suppose an $n \times n$ Hermitian matrix $A$ has rank $k$. Then there exists a nonsingular $k \times k$ principal submatrix of $A$.
Proof. Suppose $v_1, \dots, v_k$ are linearly independent column vectors of $A$. I claim that the adjoint vectors $v_1^*, \dots, v_k^*$ are also linearly independent. Indeed, suppose there are scalars $c_1, \dots, c_k$ such that

$\sum_{i=1}^{k} c_i v_i^* = 0.$

Since $v_1, \dots, v_k$ are linearly independent and, for any scalars $c_1, \dots, c_k$,

$\left( \sum_{i=1}^{k} c_i v_i^* \right)^* = \sum_{i=1}^{k} \overline{c_i} \, v_i,$

taking adjoints of the displayed identity yields $\sum_{i=1}^{k} \overline{c_i} \, v_i = 0$, whence $c_1 = \cdots = c_k = 0$. We conclude that $v_1^*, \dots, v_k^*$ are linearly independent.
By definition of the rank of a matrix, there exist $k$ columns of $A$ with indices $j_1 < \cdots < j_k$ that are linearly independent. Since $A$ is Hermitian, the rows of $A$ with the same indices $j_1, \dots, j_k$ are the adjoints of those columns and hence are also linearly independent. By the preceding lemma, the resulting $k \times k$ principal submatrix is nonsingular.
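Corollary 3 is also easy to test numerically. The brute-force search below (the helper name is mine) finds a nonsingular $k \times k$ principal submatrix of a rank-$k$ Hermitian matrix, as the corollary guarantees one must exist:

```python
import itertools
import numpy as np

def nonsingular_principal_submatrix(H, tol=1e-9):
    """Return (alpha, P): an index set alpha of size k = rank(H) whose
    principal submatrix P = H[alpha, alpha] is nonsingular."""
    n = H.shape[0]
    k = np.linalg.matrix_rank(H, tol=tol)
    for alpha in itertools.combinations(range(n), k):
        P = H[np.ix_(alpha, alpha)]
        if np.linalg.matrix_rank(P, tol=tol) == k:
            return alpha, P
    raise AssertionError("would contradict Corollary 3")

X = np.random.default_rng(3).standard_normal((4, 2))
H = X @ X.T                          # rank-2 real symmetric (hence Hermitian)
alpha, P = nonsingular_principal_submatrix(H)
print("indices:", alpha, "det:", np.linalg.det(P))
```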
Roger A. Horn and Charles R. Johnson, Matrix Analysis, 2nd ed., Cambridge University Press, Cambridge, 2012.