In today’s post, I want to show how we can prove inequalities for sums of i.i.d. random variables (i.e. random walks) using martingale arguments. The inequality I want to (ultimately) consider was first proven by Khintchine and Kolmogorov in 1925. I don’t know the argument of their original proof, but given that it predates Doob’s monumental work on martingales, I suspect it is different from the one presented here. Here is their result:

If $X_1, X_2, \ldots$ are mean-zero i.i.d. random variables, essentially bounded by a constant $C > 0$ and with $\sigma^2 = \mathbb{E}[X_1^2] > 0$, then for every $\lambda > 0$ the first exit time $\tau = \inf\{n \ge 1 : |S_n| > \lambda\}$ of the random walk $S_n = X_1 + \cdots + X_n$ satisfies
$$\mathbb{E}[\tau] \le \frac{(\lambda + C)^2}{\sigma^2} < \infty.$$

We will first review a few general results about martingales and stopping times.

I assume that the reader is familiar with Doob’s optional stopping theorem for martingales:

Theorem 1. Suppose $(M_n)_{n \ge 0}$ is a martingale adapted to a filtration $(\mathcal{F}_n)_{n \ge 0}$, and $\sigma \le \tau$ are stopping times adapted to this filtration. If $\tau \le N$ a.s. for some constant $N$, then
$$\mathbb{E}[M_\tau \mid \mathcal{F}_\sigma] = M_\sigma,$$
where $\mathcal{F}_\sigma$ is the $\sigma$-algebra defined by
$$\mathcal{F}_\sigma = \{A \in \mathcal{F} : A \cap \{\sigma \le n\} \in \mathcal{F}_n \text{ for all } n \ge 0\}.$$

I will include a proof of this result in a subsequent post on martingales, but for now, we will use it without proof. In what follows, all stopping times are assumed to be adapted to the filtration generated by the given martingale.
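
Since Theorem 1 is used repeatedly below, it may help to see it in action numerically. The following sketch (my own illustration, not from the post: the walk, the level, and the function name are all illustrative choices) computes $\mathbb{E}[S_{\tau \wedge N}]$ exactly for a Rademacher ($\pm 1$) walk $S_n = X_1 + \cdots + X_n$, which is a martingale, and the stopping time $\tau = \inf\{n : S_n \ge 3\}$. Optional stopping with the bounded stopping time $\tau \wedge N$ predicts the answer $\mathbb{E}[S_0] = 0$.

```python
from itertools import product
from fractions import Fraction

def expected_stopped_value(N, level):
    """Exact E[S_{tau ^ N}] for a Rademacher (+1/-1) walk, where
    tau = first n with S_n >= level (an illustrative stopping time).
    Enumerates all 2^N equally likely sign paths."""
    total = Fraction(0)
    for signs in product((-1, 1), repeat=N):
        s, stopped = 0, None
        for x in signs:
            s += x
            if stopped is None and s >= level:
                stopped = s  # value frozen at the stopping time
        # S_{tau ^ N} is the stopped value if tau <= N, else S_N
        total += Fraction(stopped if stopped is not None else s, 2 ** N)
    return total

print(expected_stopped_value(8, 3))  # 0 = E[S_0], as optional stopping predicts
```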

Recall the definition of a **random walk**: if $(X_i)_{i \ge 1}$ is a collection of i.i.d. random variables with values in $\mathbb{R}^d$, then $(S_n)_{n \ge 0}$ is a random walk, where $S_0 = 0$ and $S_n = X_1 + \cdots + X_n$. We can use random walks to define martingales which allow computation of the moments of stopped random walks, as the next lemma shows.
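
As a concrete illustration of the definition (a minimal sketch; the Rademacher step distribution and all names are my own choices, not the post’s):

```python
import random

def random_walk_path(n_steps, rng):
    """Sample one path S_0, S_1, ..., S_n of a random walk with i.i.d.
    Rademacher (+1/-1) increments; S_0 = 0 by convention."""
    path = [0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.choice((-1, 1)))
    return path

path = random_walk_path(10, random.Random(0))
```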

Lemma 2. Suppose $(X_i)_{i \ge 1}$ is a collection of i.i.d. real-valued random variables, so that $(S_n)_{n \ge 0}$ is a random walk. If $\mathbb{E}|X_1| < \infty$ and $\mu = \mathbb{E}[X_1]$, then $(S_n - n\mu)_{n \ge 0}$ is a martingale. If $\mu = 0$ and $\sigma^2 = \mathbb{E}[X_1^2] < \infty$, then $(S_n^2 - n\sigma^2)_{n \ge 0}$ is a martingale.

*Proof.* For $n \ge 1$, we have by the independence of the $X_i$ that
$$\mathbb{E}[S_n - n\mu \mid \mathcal{F}_{n-1}] = S_{n-1} - n\mu + \mathbb{E}[X_n] = S_{n-1} - (n-1)\mu,$$

since $\mathbb{E}[X_n] = \mu$. Analogously,
$$\mathbb{E}[S_n^2 - n\sigma^2 \mid \mathcal{F}_{n-1}] = S_{n-1}^2 + 2 S_{n-1}\,\mathbb{E}[X_n] + \mathbb{E}[X_n^2] - n\sigma^2 = S_{n-1}^2 - (n-1)\sigma^2,$$

since $\mathbb{E}[X_n] = 0$ and $\mathbb{E}[X_n^2] = \sigma^2$.
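
As a quick sanity check on Lemma 2 (illustrative, not part of the proof): since a martingale has constant expectation, we should have $\mathbb{E}[S_n^2 - n\sigma^2] = \mathbb{E}[S_0^2] = 0$ for every $n$. The sketch below verifies this exactly for Rademacher steps ($\mu = 0$, $\sigma^2 = 1$) by enumerating all $2^n$ equally likely sign paths; the function name is a hypothetical choice of mine.

```python
from itertools import product
from fractions import Fraction

def expectation_Sn2_minus_n(n):
    """Exact E[S_n^2 - n*sigma^2] for Rademacher (+1/-1) steps,
    where sigma^2 = 1; should be 0 by the constant-mean property."""
    total = Fraction(0)
    for signs in product((-1, 1), repeat=n):
        s = sum(signs)
        total += Fraction(s * s - n, 2 ** n)
    return total

print(expectation_Sn2_minus_n(6))  # 0
```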

We now use Doob’s optional stopping theorem to prove Wald’s first and second identities.

Lemma 3 (Wald’s identities). Suppose $(S_n)_{n \ge 0}$ is a random walk whose steps satisfy $\mathbb{E}|X_1| < \infty$, and write $\mu = \mathbb{E}[X_1]$. If $\tau$ is a stopping time with $\mathbb{E}[\tau] < \infty$, then
$$\mathbb{E}[S_\tau] = \mu\,\mathbb{E}[\tau].$$
If moreover $\mu = 0$ and $\sigma^2 = \mathbb{E}[X_1^2] < \infty$, then
$$\mathbb{E}[S_\tau^2] = \sigma^2\,\mathbb{E}[\tau].$$

*Proof.* It is an easy exercise for the reader to verify that if $(M_n)_{n \ge 0}$ is a martingale and $\tau$ is a stopping time, then $\tau \wedge n$ is also a stopping time and $(M_{\tau \wedge n})_{n \ge 0}$ is also a martingale. By Doob’s optional stopping theorem applied to the martingale $(S_n - n\mu)_{n \ge 0}$ with the bounded stopping times $0 \le \tau \wedge n$,
$$\mathbb{E}[S_{\tau \wedge n}] = \mu\,\mathbb{E}[\tau \wedge n].$$

By the dominated convergence theorem, $\mathbb{E}[\tau \wedge n] \to \mathbb{E}[\tau]$ as $n \to \infty$. For all $n$, observe that
$$|S_{\tau \wedge n}| \le \sum_{i=1}^{\tau} |X_i| = \sum_{i=1}^{\infty} |X_i| \mathbf{1}_{\{\tau \ge i\}}.$$

Interchanging summation and expectation (justified since the terms are nonnegative), we obtain
$$\mathbb{E}\left[\sum_{i=1}^{\infty} |X_i| \mathbf{1}_{\{\tau \ge i\}}\right] = \sum_{i=1}^{\infty} \mathbb{E}|X_i|\,\mathbb{P}(\tau \ge i) = \mathbb{E}|X_1| \sum_{i=1}^{\infty} \mathbb{P}(\tau \ge i) = \mathbb{E}|X_1|\,\mathbb{E}[\tau] < \infty,$$

where we use the fact that $\{\tau \ge i\} = \{\tau \le i-1\}^c$ is $\mathcal{F}_{i-1}$-measurable, hence independent of $X_i$. Thus $S_{\tau \wedge n} \to S_\tau$ a.s. (using the hypothesis that $\tau$ is finite a.s.) with an integrable dominating function, so applying the dominated convergence theorem to the left-hand side completes the proof of the first identity.
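
Wald’s first identity is easy to test by simulation. In this hedged sketch, the step law (uniform on $\{1, 2\}$, so $\mu = 3/2$) and the stopping time $\tau = \inf\{n : S_n \ge 10\}$ are hypothetical choices of mine; the identity predicts $\mathbb{E}[S_\tau] = \mu\,\mathbb{E}[\tau]$, so the two Monte Carlo estimates below should nearly agree.

```python
import random

rng = random.Random(1)
TRIALS = 20_000
MU = 1.5  # mean of a step uniform on {1, 2}

sum_S_tau, sum_tau = 0.0, 0.0
for _ in range(TRIALS):
    s, n = 0, 0
    while s < 10:              # tau = first n with S_n >= 10
        s += rng.choice((1, 2))
        n += 1
    sum_S_tau += s             # S_tau is either 10 or 11
    sum_tau += n

lhs = sum_S_tau / TRIALS       # estimate of E[S_tau]
rhs = MU * (sum_tau / TRIALS)  # mu * (estimate of E[tau])
```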

To prove Wald’s second identity, we first note that by Lemma 2 and Doob’s optional stopping theorem (applied to the martingale $(S_n^2 - n\sigma^2)_{n \ge 0}$ and the bounded stopping time $\tau \wedge n$),
$$\mathbb{E}[S_{\tau \wedge n}^2] = \sigma^2\,\mathbb{E}[\tau \wedge n].$$

Since $\mathbb{E}[\tau \wedge n] \to \mathbb{E}[\tau]$ as before, to complete the proof it suffices to show that $S_{\tau \wedge n} \to S_\tau$ in $L^2$, for then $\mathbb{E}[S_{\tau \wedge n}^2] \to \mathbb{E}[S_\tau^2]$. For any $m < n$,
$$S_{\tau \wedge n} - S_{\tau \wedge m} = \sum_{i=m+1}^{n} X_i \mathbf{1}_{\{\tau \ge i\}}.$$

Taking expectations after squaring both sides, we have by the independence of the $X_i$ that the cross terms vanish (for $i < j$, the variable $X_i \mathbf{1}_{\{\tau \ge j\}}$ is $\mathcal{F}_{j-1}$-measurable, hence independent of $X_j$, and $\mathbb{E}[X_j] = 0$), so
$$\mathbb{E}\big[(S_{\tau \wedge n} - S_{\tau \wedge m})^2\big] = \sigma^2 \sum_{i=m+1}^{n} \mathbb{P}(\tau \ge i).$$

Since $\sum_{i \ge 1} \mathbb{P}(\tau \ge i) = \mathbb{E}[\tau] < \infty$, the sequence $(S_{\tau \wedge n})_{n \ge 0}$ is Cauchy in $L^2$; its $L^2$-limit must be $S_\tau$, because $S_{\tau \wedge n} \to S_\tau$ a.s. on the event $\{\tau < \infty\}$, which has full probability. This completes the proof.
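
Similarly, Wald’s second identity can be checked by simulation; here the mean-zero Rademacher walk ($\sigma^2 = 1$) and the exit time $\tau = \inf\{n : |S_n| = 5\}$ are illustrative choices of mine. Since $S_\tau^2 = 25$ on every path, the identity forces $\mathbb{E}[\tau] = 25$ as well, which the estimate below should reproduce.

```python
import random

rng = random.Random(2)
A = 5             # exit level for the mean-zero Rademacher walk
TRIALS = 10_000

sum_tau, sum_S_tau_sq = 0, 0
for _ in range(TRIALS):
    s, n = 0, 0
    while abs(s) < A:          # tau = first n with |S_n| = A
        s += rng.choice((-1, 1))
        n += 1
    sum_tau += n
    sum_S_tau_sq += s * s      # always A**2 = 25 here

mean_tau = sum_tau / TRIALS
mean_S_tau_sq = sum_S_tau_sq / TRIALS
```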

We can now use Wald’s identities to prove the Khintchine–Kolmogorov inequality stated at the beginning of the post.

*Proof.* Fix $\lambda > 0$ and define a random time $\tau$ by
$$\tau = \inf\{n \ge 1 : |S_n| > \lambda\},$$

where $\inf \emptyset = \infty$. It is clear that $\{\tau \le n\} \in \mathcal{F}_n$ for every $n$, so $\tau$ is a stopping time. Note that, by hypothesis, $|X_i| \le C$ a.s. for every $i$.

Since $\tau \wedge n$ is a bounded stopping time, we have by Wald’s second identity that
$$\mathbb{E}[S_{\tau \wedge n}^2] = \sigma^2\,\mathbb{E}[\tau \wedge n].$$

For the LHS, we can write
$$S_{\tau \wedge n} = S_{(\tau \wedge n) - 1} + X_{\tau \wedge n}.$$

Observe that $(\tau \wedge n) - 1 < \tau$ implies that $|S_{(\tau \wedge n) - 1}| \le \lambda$, so that
$$|S_{\tau \wedge n}| \le |S_{(\tau \wedge n) - 1}| + |X_{\tau \wedge n}| \le \lambda + C.$$

Squaring both sides, we have that $S_{\tau \wedge n}^2 \le (\lambda + C)^2$. Substituting this upper bound into the equation above, we conclude that
$$\sigma^2\,\mathbb{E}[\tau \wedge n] = \mathbb{E}[S_{\tau \wedge n}^2] \le (\lambda + C)^2,$$

and letting $n \to \infty$, the monotone convergence theorem gives $\mathbb{E}[\tau] \le (\lambda + C)^2 / \sigma^2$, as claimed.
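
To close, here is a quick numerical illustration of the bound just proven (a sketch with illustrative parameters, not from the original post). For a Rademacher walk we have $C = 1$ and $\sigma^2 = 1$, and with $\lambda = 4.5$ the theorem gives $\mathbb{E}[\tau] \le (4.5 + 1)^2 = 30.25$; the true value here is $\mathbb{E}[\tau] = 25$, since $\tau$ is the exit time of $\{-4, \ldots, 4\}$.

```python
import random

rng = random.Random(3)
LAM, C, SIGMA2 = 4.5, 1.0, 1.0   # lambda, a.s. step bound, step variance
TRIALS = 5_000

total = 0
for _ in range(TRIALS):
    s, n = 0, 0
    while abs(s) <= LAM:          # tau = first n with |S_n| > lambda
        s += rng.choice((-1, 1))
        n += 1
    total += n

mean_tau = total / TRIALS
bound = (LAM + C) ** 2 / SIGMA2   # (lambda + C)^2 / sigma^2 = 30.25
```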