Lebesgue Measure and Linear Transformations

Lemma 1. Let U\subset\mathbb{R}^{n} be open and let F: U\rightarrow\mathbb{R}^{n} be locally Lipschitz on U. Then for every Lebesgue measurable set A\subset U, the set F(A) is measurable.

Proof. Let A\subset U be measurable. Replacing A by A\cap[-N,N]^{n}, N\in\mathbb{N}, we may assume that A is bounded. By the inner regularity of the Lebesgue measure, we may write

\displaystyle A=B\cup\bigcup_{j=1}^{\infty}K_{j}

where the sets K_{j} are compact and B is a set of measure zero. Since the set F(\bigcup_{j=1}^{\infty}K_{j})=\bigcup_{j=1}^{\infty}F(K_{j}) is a Borel set, being a countable union of compact sets, it suffices to show that F(B) is measurable; we will show that it has measure zero. Fix \varepsilon>0. Since B has measure zero, we can cover B by a sequence of cubes Q_{j}\subset U with edge lengths r_{j} and with \sum_{j}r_{j}^{n}<\varepsilon. Since F is locally Lipschitz, it is Lipschitz with some constant L on a compact subset of U containing the cubes Q_{j} (shrinking the cubes if necessary). Hence, F(Q_{j}) is contained in a ball of radius L\sqrt{n}r_{j} for every j, and therefore in a cube with edge length 2L\sqrt{n}r_{j}. Thus,

\displaystyle \lambda_{n}(F(B))\leq\sum_{j=1}^{\infty}\lambda_{n}(F(Q_{j}))\leq\sum_{j=1}^{\infty}\left(2L\sqrt{n}r_{j}\right)^{n}=\left(2L\sqrt{n}\right)^{n}\varepsilon

Since \varepsilon>0 was arbitrary, we conclude that F(B) has measure zero and is therefore measurable. \Box
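The covering estimate at the heart of this proof can be sanity-checked numerically. The following sketch (the map F, the cube, and the constants are illustrative assumptions, not taken from the text) verifies that a map with Lipschitz constant L sends a cube of edge length r into a ball of radius L\sqrt{n}r:

```python
import numpy as np

rng = np.random.default_rng(0)

# F(x) = (sin x_1, sin x_2) is Lipschitz with constant L = 1 for the
# Euclidean norm, since |sin a - sin b| <= |a - b| in each coordinate.
def F(x):
    return np.sin(x)

n, L, r = 2, 1.0, 0.1
corner = np.array([0.3, 0.7])                # cube Q = corner + [0, r]^n
pts = corner + r * rng.random((10_000, n))   # samples from Q
center = F(corner + r / 2)                   # image of the center of Q

# No point of Q is farther than sqrt(n) * r / 2 from the center of Q,
# so every image point lies within L * sqrt(n) * r / 2 <= L * sqrt(n) * r
# of F(center), placing F(Q) inside a ball of radius L * sqrt(n) * r.
dists = np.linalg.norm(F(pts) - center, axis=1)
assert dists.max() <= L * np.sqrt(n) * r
```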

Recall that we say that an n\times n real matrix A is orthogonal if A is invertible and A^{-1}=A^{T}. There are a number of properties that equivalently characterize an orthogonal matrix. We summarize them in the following proposition.

Proposition 2. The following are equivalent:

  1. The matrix A:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n} is orthogonal.
  2. The columns of A form an orthonormal basis.
  3. \langle{Ax,Ay}\rangle=\langle{x,y}\rangle, for all vectors x,y\in\mathbb{R}^{n}.
  4. The linear transformation defined by A is an isometry.

Proof. In what follows, \left\{e_{1},\ldots,e_{n}\right\} denotes the standard basis of \mathbb{R}^{n}, and \left\{Ae_{1},\ldots,Ae_{n}\right\} denotes the column vectors of the matrix A. Suppose (1) holds. Then, for 1\leq i,j\leq n,

\displaystyle\langle{Ae_{i},Ae_{j}}\rangle=\langle{A^{T}Ae_{i},e_{j}}\rangle=\langle{A^{-1}Ae_{i},e_{j}}\rangle=\langle{e_{i},e_{j}}\rangle=\begin{cases}{1}&{i=j}\\ {0}&{i\neq j}\end{cases}

Now suppose that (2) holds. For any vectors x,y\in\mathbb{R}^{n}, we can write

\displaystyle x=\alpha_{1}e_{1}+\cdots+\alpha_{n}e_{n},\indent y=\beta_{1}e_{1}+\cdots+\beta_{n}e_{n}

Using the bilinearity of the inner product and the orthonormality of the columns Ae_{i}, we obtain that

\displaystyle\langle{Ax,Ay}\rangle=\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\beta_{j}\langle{Ae_{i},Ae_{j}}\rangle=\sum_{i=1}^{n}\alpha_{i}\beta_{i}=\langle{x,y}\rangle

(3)\Longrightarrow(4) is obvious. To see that (4)\Longrightarrow(1), we will prove the more general result that any isometry of \mathbb{R}^{n} is an affine transformation. Recall that an isometry of Euclidean space is a function \phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n} which preserves Euclidean distance: \left\|\phi(x)-\phi(y)\right\|=\left\|x-y\right\|, where \left\|\cdot\right\| denotes the Euclidean norm. We claim that every isometry of \mathbb{R}^{n} is an affine map of the form

\displaystyle\phi(x)=Ax+b,\indent\forall x\in\mathbb{R}^{n}

and A is orthogonal. To see this, note that by the polarization identity, the map A:=\phi-\phi(0) preserves the Euclidean inner product. Hence, for any \alpha,\beta\in\mathbb{R} and x,y\in\mathbb{R}^{n}, we see that

\begin{array}{lcl}\displaystyle\left\|A(\alpha x+\beta y)-\alpha A(x)-\beta A(y)\right\|^{2}&=&\displaystyle\left\|A(\alpha x+\beta y)\right\|^{2}\\&-&\displaystyle2\langle{A(\alpha x+\beta y),\alpha A(x)+\beta A(y)}\rangle+\left\|\alpha A(x)+\beta A(y)\right\|^{2}\\&=&\displaystyle2\left\|\alpha x+\beta y\right\|^{2}-2\langle{\alpha x+\beta y,\alpha x}\rangle-2\langle{\alpha x+\beta y,\beta y}\rangle\\&=&\displaystyle0\end{array}

So A is linear; equivalently, \phi is affine. That A is orthogonal is an immediate consequence of the preservation of the inner product under A. \Box
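Proposition 2's equivalences can be checked numerically for a concrete orthogonal matrix; the rotation below is an assumed example, not one from the text:

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation matrix

# (1) A is orthogonal: A^T A = I
assert np.allclose(A.T @ A, np.eye(2))

# (2) the columns of A are orthonormal
assert np.allclose(np.linalg.norm(A, axis=0), 1.0)
assert np.isclose(A[:, 0] @ A[:, 1], 0.0)

# (3) A preserves the inner product, and (4) A is an isometry
rng = np.random.default_rng(1)
x, y = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose((A @ x) @ (A @ y), x @ y)
assert np.isclose(np.linalg.norm(A @ x - A @ y), np.linalg.norm(x - y))
```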

Although every orthogonal matrix necessarily has determinant satisfying (\det A)^{2}=1, this condition is not sufficient for a matrix to be orthogonal. Indeed, consider the 2\times 2 matrix

\displaystyle A=\begin{bmatrix}{2}&{0}\\{0}&{\frac{1}{2}}\end{bmatrix},

which has determinant 1 and orthogonal columns, yet A^{T}A=\operatorname{diag}(4,\tfrac{1}{4})\neq I, so A is not orthogonal (its columns are orthogonal but not orthonormal).
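A quick numerical check confirms that this A meets the determinant and orthogonal-column conditions while failing orthogonality:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])

assert np.isclose(np.linalg.det(A), 1.0)     # det A = 1, so (det A)^2 = 1
assert np.isclose(A[:, 0] @ A[:, 1], 0.0)    # the columns are orthogonal...
assert not np.allclose(A.T @ A, np.eye(2))   # ...but A^T A = diag(4, 1/4) != I

# A is not an isometry either: it doubles lengths along e_1.
e1 = np.array([1.0, 0.0])
assert np.isclose(np.linalg.norm(A @ e1), 2.0)
```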

If T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n} is a linear transformation given by an orthogonal matrix, then T is invertible and, moreover, its inverse is given by an orthogonal matrix. Consider the pushforward of the Lebesgue measure, which we denote by \mu:=T_{*}\lambda_{n}. If we can show that \mu is a translation-invariant measure on the measure space (\mathbb{R}^{n},\mathcal{B}(\mathbb{R}^{n})) and \mu([0,1]^{n})=1, then the uniqueness of the Lebesgue measure (see Proposition 5 here) proves that \mu=\lambda_{n}. Replacing T by S=T^{-1}, we obtain the desired invariance property of Lebesgue measure.

For any Borel set B\subset\mathbb{R}^{n} and x\in\mathbb{R}^{n}, we have that

\displaystyle\mu(B+x)=\lambda_{n}(T^{-1}(B+x))=\lambda_{n}(T^{-1}(B)+T^{-1}x)=\lambda_{n}(T^{-1}(B))=\mu(B)

where we use the translation-invariance of the Lebesgue measure in the penultimate equality. By Theorem 6 here, we conclude that \mu=\kappa\lambda_{n}, for some nonzero real constant \kappa. To see that \kappa=1, observe that since T is orthogonal, T^{-1} is an isometry:

\displaystyle\left\|T^{-1}x\right\|=\left\|x\right\|,\indent\forall x\in\mathbb{R}^{n}

If B_{1}(0) is the open unit ball in \mathbb{R}^{n}, then T^{-1}(B_{1}(0))=B_{1}(0). We conclude that

\displaystyle\kappa\lambda_{n}(B_{1}(0))=\mu(B_{1}(0))=\lambda_{n}(T^{-1}(B_{1}(0)))=\lambda_{n}(B_{1}(0))

and therefore \kappa=1, since 0<\lambda_{n}(B_{1}(0))<\infty.
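The invariance argument lends itself to a numerical sketch: for an orthogonal T in the plane (a rotation, chosen here as an assumed example), T^{-1} preserves norms, and a Monte Carlo estimate of \lambda_{2}(T^{-1}(B)) for B=[0,1]^{2} recovers \lambda_{2}(B)=1:

```python
import numpy as np

theta = 1.1
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal (rotation)
T_inv = T.T                                        # T^{-1} = T^T

rng = np.random.default_rng(2)

# T^{-1} preserves the Euclidean norm, so T^{-1}(B_1(0)) = B_1(0).
x = rng.standard_normal((1000, 2))
assert np.allclose(np.linalg.norm(x @ T_inv.T, axis=1),
                   np.linalg.norm(x, axis=1))

# Monte Carlo estimate of mu(B) = lambda(T^{-1}(B)) for B = [0,1]^2:
# a point u lies in T^{-1}(B) iff T u lies in B.
N = 200_000
u = rng.uniform(-2, 2, size=(N, 2))   # uniform samples on [-2,2]^2, area 16
Tu = u @ T.T                           # apply T to each sample (row)
inside = np.all((Tu >= 0) & (Tu <= 1), axis=1)
est = 16 * inside.mean()               # estimated lambda(T^{-1}([0,1]^2))
assert abs(est - 1.0) < 0.05           # agrees with lambda([0,1]^2) = 1
```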

