## Linear Algebra and Convexity: Positive-Definite Matrices

My apologies for not posting in a few days; I’ve been feeling a bit under the weather. Nevertheless, I thought I would wrap up the workweek with a brief post on the connection between convexity and a very nice class of $n\times n$ matrices: the positive-definite matrices. Recall that a real symmetric $n\times n$ matrix $A$ is positive-definite if $\langle{Ax,x}\rangle>0$ for all nonzero $x\in\mathbb{R}^{n}$ (here, $\langle{\cdot,\cdot}\rangle$ denotes the Euclidean inner product).

Let $\text{Sym}^{++}(n,\mathbb{R})$ denote the set of all symmetric positive-definite matrices in $M_{n}(\mathbb{R})$. I claim that $\text{Sym}^{++}(n,\mathbb{R})$ is a convex open subset of the space $\text{Sym}(n,\mathbb{R})$ of symmetric matrices. To see openness, fix $A\in\text{Sym}^{++}(n,\mathbb{R})$ and $\epsilon>0$, and suppose $B\in\text{Sym}(n,\mathbb{R})$ satisfies $\left\|A-B\right\|<\epsilon$, where $\left\|\cdot\right\|$ denotes the operator norm. Then, for every unit vector $x$, the Cauchy–Schwarz inequality gives

$\displaystyle\langle{Bx,x}\rangle=\langle{(B-A)x,x}\rangle+\langle{Ax,x}\rangle\geq-\left\|A-B\right\|+\langle{Ax,x}\rangle>-\epsilon+\inf_{\left\|x\right\|=1}\langle{Ax,x}\rangle.$

I claim that this last expression is strictly positive for sufficiently small $\epsilon$. Indeed, the map $x\mapsto\langle{Ax,x}\rangle$ defines a continuous function on the unit sphere, which is compact since we are in finite dimensions. Hence, the extreme value theorem implies that the infimum is attained at some unit vector, and it is therefore strictly positive by our hypothesis that $A$ is positive-definite. Choosing $\epsilon<\inf_{\left\|x\right\|=1}\langle{Ax,x}\rangle$, we conclude that $\langle{Bx,x}\rangle>0$ for every unit vector $x$, and hence, by homogeneity, for every nonzero $x$. Therefore $\left\{B\in \text{Sym}(n,\mathbb{R}) : \left\|A-B\right\|<\epsilon\right\}\subset\text{Sym}^{++}(n,\mathbb{R})$, which proves openness.
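As a quick numerical sanity check (a NumPy sketch; the matrix, seed, and perturbation size are illustrative, not from the argument above), note that $\inf_{\left\|x\right\|=1}\langle{Ax,x}\rangle$ is exactly the smallest eigenvalue of $A$, so any symmetric perturbation with operator norm below it should leave the matrix positive-definite:

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative symmetric positive-definite matrix A (3x3).
M = rng.standard_normal((3, 3))
A = M @ M.T + np.eye(3)  # positive-definite by construction

# inf over the unit sphere of <Ax, x> equals the smallest eigenvalue of A.
lam_min = np.linalg.eigvalsh(A).min()

# Perturb A by a symmetric E with operator norm strictly below lam_min:
# the perturbed matrix should stay positive-definite.
E = rng.standard_normal((3, 3))
E = (E + E.T) / 2
E *= 0.9 * lam_min / np.linalg.norm(E, 2)  # scale so ||E||_2 = 0.9 * lam_min

B = A + E
assert np.linalg.eigvalsh(B).min() > 0  # B remains positive-definite
```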

To see that $\text{Sym}^{++}(n,\mathbb{R})$ is convex, fix $A,B\in\text{Sym}^{++}(n,\mathbb{R})$ and $\lambda\in[0,1]$. Then $\lambda A+(1-\lambda)B$ is symmetric, and for all nonzero $x\in\mathbb{R}^{n}$,

$\displaystyle\langle{\left(\lambda A+(1-\lambda)B\right)x,x}\rangle=\lambda\langle{Ax,x}\rangle+(1-\lambda)\langle{Bx,x}\rangle>0$,

since both $\langle{Ax,x}\rangle$ and $\langle{Bx,x}\rangle$ are positive and at least one of the coefficients $\lambda$, $1-\lambda$ is nonzero. Hence $\lambda A+(1-\lambda)B\in\text{Sym}^{++}(n,\mathbb{R})$.
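The convexity claim is easy to probe numerically as well (again a NumPy sketch with illustrative random matrices): every point on the segment between two symmetric positive-definite matrices should have strictly positive eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """An illustrative random symmetric positive-definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

A, B = random_spd(4), random_spd(4)

# Every convex combination lam*A + (1-lam)*B should again be positive-definite.
for lam in np.linspace(0.0, 1.0, 11):
    C = lam * A + (1 - lam) * B
    assert np.linalg.eigvalsh(C).min() > 0
```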

We now show that the function

$\displaystyle f:\text{Sym}^{++}(n,\mathbb{R})\rightarrow\mathbb{R},\qquad f(A):=\log(\det A)$

is concave.

Proof. Since $A$ is symmetric and positive-definite, the spectral theorem tells us that $A$ is orthogonally similar to a diagonal matrix with strictly positive diagonal entries; in particular, $A$ has an invertible symmetric square root, which we denote by $A^{\frac{1}{2}}$. Hence,

$\displaystyle\int_{\mathbb{R}^{n}}e^{-\langle{Ax,x}\rangle}dx=\int_{\mathbb{R}^{n}}e^{-\langle{A^{\frac{1}{2}}x,A^{\frac{1}{2}}x}\rangle}dx=\int_{\mathbb{R}^{n}}e^{-\left\|A^{\frac{1}{2}}x\right\|^{2}}dx$

If we make the change of variable $y=A^{\frac{1}{2}}x$, whose Jacobian determinant is $\det\left(A^{\frac{1}{2}}\right)=\sqrt{\det A}$, then

$\displaystyle\int_{\mathbb{R}^{n}}e^{-\left\|A^{\frac{1}{2}}x\right\|^{2}}dx=\frac{1}{\sqrt{\det A}}\int_{\mathbb{R}^{n}}e^{-\left\|y\right\|^{2}}dy=\dfrac{\pi^{\frac{n}{2}}}{\sqrt{\det A}}$
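Before moving on, we can sanity-check both the symmetric square root and the Gaussian integral formula numerically (a NumPy sketch; the $2\times2$ matrix and grid are illustrative choices). For $n=2$ the formula predicts $\int_{\mathbb{R}^{2}}e^{-\langle{Ax,x}\rangle}\,dx=\pi/\sqrt{\det A}$:

```python
import numpy as np

# Illustrative 2x2 symmetric positive-definite matrix.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Symmetric square root via the spectral theorem: A = Q diag(w) Q^T.
w, Q = np.linalg.eigh(A)
A_half = Q @ np.diag(np.sqrt(w)) @ Q.T
assert np.allclose(A_half @ A_half, A)  # (A^{1/2})^2 = A
assert np.allclose(A_half, A_half.T)    # A^{1/2} is symmetric

# Evaluate the Gaussian integral on a grid; the integrand is negligible
# outside [-8, 8]^2, so a plain Riemann sum is very accurate here.
t = np.linspace(-8, 8, 1601)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
quad = A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2  # <Ax, x>
integral = np.exp(-quad).sum() * h * h

closed_form = np.pi / np.sqrt(np.linalg.det(A))  # pi^{n/2}/sqrt(det A), n = 2
assert abs(integral - closed_form) / closed_form < 1e-4
```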

Now let $\alpha\in (0,1)$. Applying Hölder’s inequality with conjugate exponents $p=\frac{1}{\alpha}$ and $q=\frac{1}{1-\alpha}$, we obtain

$\begin{array}{lcl}\displaystyle\int_{\mathbb{R}^{n}}e^{-\langle{\left(\alpha A+(1-\alpha)B\right)x,x}\rangle}dx&=&\displaystyle\int_{\mathbb{R}^{n}}\left(e^{-\langle{Ax,x}\rangle}\right)^{\alpha}\left(e^{-\langle{Bx,x}\rangle}\right)^{1-\alpha}dx\\&\leq&\displaystyle\left(\int_{\mathbb{R}^{n}}\left(e^{-\langle{Ax,x}\rangle}\right)^{\alpha \cdot\frac{1}{\alpha}}dx\right)^{\alpha}\left(\int_{\mathbb{R}^{n}}\left(e^{-\langle{Bx,x}\rangle}\right)^{(1-\alpha)\cdot\frac{1}{1-\alpha}}dx\right)^{1-\alpha}\\&=&\displaystyle\left(\int_{\mathbb{R}^{n}}e^{-\langle{Ax,x}\rangle}dx\right)^{\alpha}\left(\int_{\mathbb{R}^{n}}e^{-\langle{Bx,x}\rangle}dx\right)^{1-\alpha}\end{array}$

Using our computation above, we obtain the inequality

$\displaystyle\dfrac{\pi^{\frac{n}{2}}}{\sqrt{\det\left[\alpha A+(1-\alpha)B\right]}}\leq\left(\dfrac{\pi^{\frac{n}{2}}}{\sqrt{\det A}}\right)^{\alpha}\left(\dfrac{\pi^{\frac{n}{2}}}{\sqrt{\det B}}\right)^{1-\alpha}=\dfrac{\pi^{\frac{n}{2}}}{(\det A)^{\frac{\alpha}{2}}(\det B)^{\frac{1-\alpha}{2}}}$,

which gives, upon rearranging, the inequality

$\displaystyle\det\left[\alpha A+(1-\alpha)B\right]\geq\left(\det A\right)^{\alpha}\left(\det B\right)^{1-\alpha}$
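This determinant inequality is also easy to test numerically (a NumPy sketch with illustrative random symmetric positive-definite matrices; the small tolerance guards against floating-point rounding at the endpoints, where equality holds):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_spd(n):
    """An illustrative random symmetric positive-definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

A, B = random_spd(5), random_spd(5)

# Check det(aA + (1-a)B) >= (det A)^a (det B)^(1-a) along the segment.
ok = True
for a in np.linspace(0.0, 1.0, 21):
    lhs = np.linalg.det(a * A + (1 - a) * B)
    rhs = np.linalg.det(A) ** a * np.linalg.det(B) ** (1 - a)
    ok = ok and (lhs >= rhs * (1 - 1e-12))
assert ok
```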

Taking the logarithm of both sides yields $\log\det\left[\alpha A+(1-\alpha)B\right]\geq\alpha\log\det A+(1-\alpha)\log\det B$, which is precisely the concavity of $f$. $\Box$
