Tverberg’s Theorem (and a Bit of Tensor Products)

For those of you who have been waiting for me to get from Radon’s theorem to the Colorful Carathéodory theorem and then to Tverberg’s theorem, your patience will be rewarded with today’s post. The proof I will be presenting is due to Karanbir Sarkaria, but the presentation mostly comes from Gil Kalai’s blog post on the result, which in turn is based on the article “Carathéodory’s Theorem, Colorful and Applicable” by Imre Bárány and Shmuel Onn.

To prove Tverberg’s theorem, we will need some nontrivial algebra in the form of tensor products. Recall that if V and W are respectively n- and m-dimensional vector spaces over some field \mathbb{F}, then the tensor product is defined (up to isomorphism) to be the vector space V\otimes W together with a bilinear map V\times W\rightarrow V\otimes W, (v,w)\mapsto v\otimes w having the following universal property: for any bilinear map \phi: V\times W\rightarrow U, there exists a unique linear map \Phi: V\otimes W\rightarrow U such that

\displaystyle\Phi(v\otimes w)=\phi(v,w),\indent\forall (v,w)\in V\times W

Concretely, fix a basis \left\{e_{j}\right\}_{j=1}^{n} of V and a basis \left\{f_{i}\right\}_{i=1}^{m} of W, and let V\otimes W denote the \mathbb{F}-vector space of all functions F: \left\{e_{j}\right\}_{j=1}^{n}\times\left\{f_{i}\right\}_{i=1}^{m}\rightarrow\mathbb{F}. We use the notation e_{j}\otimes f_{i} to denote the function

\displaystyle(e_{j}\otimes f_{i})(e_{j'},f_{i'})=\begin{cases}1&{j'=j,i'=i}\\ 0&{\text{otherwise}}\end{cases}

We define a bilinear function V\times W\rightarrow V\otimes W by

\displaystyle(v,w)\mapsto v\otimes w:=\sum_{i=1}^{m}\sum_{j=1}^{n}v^{j}w^{i}(e_{j}\otimes f_{i}),

where v=\sum_{j=1}^{n}v^{j}e_{j} and w=\sum_{i=1}^{m}w^{i}f_{i}. I claim that \left\{e_{j}\otimes f_{i}\right\}_{1\leq j\leq n,1\leq i\leq m} is a basis for V\otimes W. Indeed, for any F\in V\otimes W and any pair (e_{j_{0}},f_{i_{0}}),

\displaystyle F(e_{j_{0}},f_{i_{0}})=F(e_{j_{0}},f_{i_{0}})(e_{j_{0}}\otimes f_{i_{0}})(e_{j_{0}},f_{i_{0}})=\left(\sum_{i=1}^{m}\sum_{j=1}^{n}F(e_{j},f_{i})e_{j}\otimes f_{i}\right)(e_{j_{0}},f_{i_{0}}),

so F=\sum_{i=1}^{m}\sum_{j=1}^{n}F(e_{j},f_{i})\,e_{j}\otimes f_{i}, which shows spanning; evaluating a vanishing linear combination at each pair (e_{j_{0}},f_{i_{0}}) shows linear independence.

Let U be another \mathbb{F}-vector space and \phi:V\times W\rightarrow U be a bilinear function. Define \Phi: V \otimes W\rightarrow U to be the linear map satisfying

\displaystyle\Phi(e_{j}\otimes f_{i})=\phi(e_{j},f_{i}),\indent\forall1\leq j\leq n,1\leq i\leq m

For any (v,w)\in V\times W, we have that

\begin{array}{lcl}\displaystyle\Phi\left(v\otimes w\right)=\Phi\left(\sum_{i=1}^{m}\sum_{j=1}^{n}v^{j}w^{i}e_{j}\otimes f_{i}\right)&=&\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}v^{j}w^{i}\Phi(e_{j}\otimes f_{i})\\&=&\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}v^{j}w^{i}\phi(e_{j},f_{i})\\&=&\displaystyle\phi\left(v,w\right)\end{array}

where the last equality follows from the bilinearity of \phi. Uniqueness of \Phi is automatic, since the e_{j}\otimes f_{i} span V\otimes W.
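To see the universal property in coordinates, here is a small Python/NumPy sketch (my own illustration, not part of the original post): identify a tensor with the matrix of its values F(e_{j},f_{i}), take a hypothetical bilinear map \phi(v,w)=v^{T}Bw, and check that the induced linear map \Phi factors \phi through the outer product.

```python
import numpy as np

# A hypothetical bilinear map phi(v, w) = v^T B w for a fixed matrix B.
rng = np.random.default_rng(0)
n, m = 3, 4
B = rng.standard_normal((n, m))

def phi(v, w):
    return v @ B @ w

# Phi is defined on basis tensors by Phi(e_j ⊗ f_i) = phi(e_j, f_i) = B[j, i]
# and extended linearly: for M = sum_{j,i} M[j, i] e_j ⊗ f_i,
# Phi(M) = sum_{j,i} M[j, i] * phi(e_j, f_i).
def Phi(M):
    return np.sum(M * B)

v = rng.standard_normal(n)
w = rng.standard_normal(m)

# The factorization Phi(v ⊗ w) = phi(v, w), with v ⊗ w the outer product.
assert np.isclose(Phi(np.outer(v, w)), phi(v, w))
```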

In fact, a pure tensor v\otimes w can be identified with a rank-one n\times m matrix over \mathbb{F}. Indeed, define the map from V\times W into the space of n\times m matrices by

\displaystyle(v,w)\mapsto \begin{bmatrix}v^{1}w^{1}&\cdots& v^{1}w^{m}\\ \vdots&\cdots&\vdots\\v^{n}w^{1}&\cdots&v^{n}w^{m}\end{bmatrix},\indent v=\sum_{j=1}^{n}v^{j}e_{j}, w=\sum_{i=1}^{m}w^{i}f_{i}
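In coordinates this identification is just the outer product. A short Python/NumPy sketch (my own illustration) verifying the rank-one claim and bilinearity in the first argument:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 5
v, v2 = rng.standard_normal(n), rng.standard_normal(n)
w = rng.standard_normal(m)

# The pure tensor v ⊗ w corresponds to the n x m matrix with entries v[j] * w[i].
M = np.outer(v, w)

# A nonzero pure tensor gives a rank-one matrix.
assert np.linalg.matrix_rank(M) == 1

# Bilinearity in the first argument: (v + 2 v') ⊗ w = v ⊗ w + 2 (v' ⊗ w).
assert np.allclose(np.outer(v + 2 * v2, w), M + 2 * np.outer(v2, w))
```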

Theorem. (Tverberg) Let k\geq 2. For any set S\subset\mathbb{R}^{d} containing at least (k-1)(d+1)+1 points, there exist k pairwise-disjoint subsets A_{1},\cdots,A_{k}\subset S such that

\displaystyle\text{co}(A_{1})\cap\cdots\cap\text{co}(A_{k})\neq\emptyset

Proof. Let v_{1},\cdots,v_{m}\in S be distinct points in \mathbb{R}^{d}. Without loss of generality, we may assume that m=(k-1)(d+1)+1. Set V:=\mathbb{R}^{d+1}, define H to be the d-dimensional affine space

\displaystyle\left\{(x_{1},\cdots,x_{d+1})\in\mathbb{R}^{d+1}: x_{d+1}=1\right\},

and embed v_{1},\cdots,v_{m} in H by \tilde{v}_{j}:=(v_{j},1)\in\mathbb{R}^{d+1}. Define the (k-1)-dimensional vector space W:=\mathbb{R}^{k-1}, take w_{1},\cdots,w_{k-1} to be its standard basis elements, and set w_{k}:=-\sum_{j=1}^{k-1}w_{j}. Consider the tensor product V\otimes W. We define m subsets of V\otimes W by

\displaystyle S_{j}:=\left\{\tilde{v}_{j}\otimes w_{i}:i=1,\cdots,k\right\},\indent\forall j=1,\cdots,m
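These sets are built precisely so that the k tensors in each S_{j} average to the zero tensor, which can be checked numerically. A small Python/NumPy sketch (my own illustration; the values of d, k, and the sample point are arbitrary):

```python
import numpy as np

d, k = 2, 3   # arbitrary small dimensions for illustration
# w_1, ..., w_{k-1}: standard basis of R^{k-1};  w_k := -(w_1 + ... + w_{k-1}).
W = np.vstack([np.eye(k - 1), -np.ones(k - 1)])   # row i is w_{i+1}
v_tilde = np.array([1.7, -0.3, 1.0])              # a lifted point (v, 1)

# The uniform convex combination of the k tensors v~ ⊗ w_i is the zero
# tensor, because the w_i themselves sum to zero.
avg = sum(np.outer(v_tilde, W[i]) for i in range(k)) / k
assert np.allclose(avg, 0)
```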

I claim that 0\in\text{co}(S_{1})\cap\cdots\cap\text{co}(S_{m}). Indeed, since w_{k}=-\sum_{j=1}^{k-1}w_{j}, we have that

\displaystyle\frac{1}{k}\sum_{i=1}^{k}\tilde{v}_{j}\otimes w_{i}=\frac{1}{k}\sum_{i=1}^{k-1}\tilde{v}_{j}\otimes w_{i}-\frac{1}{k}\sum_{i=1}^{k-1}\tilde{v}_{j}\otimes w_{i}=0

for every j=1,\cdots,m. The Colorful Carathéodory Theorem tells us that there exists a point s_{j}\in S_{j}, for j=1,\cdots,m, such that 0\in\text{co}(s_{1},\cdots,s_{m}), so we can write 0=\sum_{i=1}^{m}\lambda_{i}s_{i}, where \lambda_{i}\in[0,1], \sum_{i=1}^{m}\lambda_{i}=1, and each s_{i} is of the form s_{i}=\tilde{v}_{i}\otimes w_{j_{i}}. We can use the indices j_{1},\cdots,j_{m} to obtain a partition of \left\{1,\cdots,m\right\} defined by

\displaystyle\Omega_{r}:=\left\{i: j_{i}=r\right\},\indent\forall r=1,\cdots,k

where we allow the \Omega_{r} to be empty. We can rewrite the convex combination for 0 as

\displaystyle0=\sum_{r=1}^{k}\sum_{i\in\Omega_{r}}\lambda_{i}\tilde{v}_{i}\otimes w_{r}=\sum_{r=1}^{k}\sum_{i\in\Omega_{r}}\lambda_{i}\sum_{l=1}^{d+1}\tilde{v}_{i}^{l}e_{l}\otimes w_{r}

Since \left\{e_{l}\otimes w_{r}\right\}_{1\leq l\leq d+1,1\leq r\leq k-1} is a basis for V\otimes W, we have that, for any fixed l=1,\cdots,d+1,

\displaystyle\sum_{r=1}^{k}\sum_{i\in\Omega_{r}}\lambda_{i}\tilde{v}_{i}^{l}e_{l}\otimes w_{r}=0

Since w_{k}=-(w_{1}+\cdots+w_{k-1}), we see that

\displaystyle\sum_{r=1}^{k-1}\left(\sum_{i\in\Omega_{r}}\lambda_{i}\tilde{v}_{i}^{l}-\sum_{i\in\Omega_{k}}\lambda_{i}\tilde{v}_{i}^{l}\right)e_{l}\otimes w_{r}=0,

which implies that \sum_{i\in\Omega_{r}}\lambda_{i}\tilde{v}_{i}^{l}=\sum_{i\in\Omega_{k}}\lambda_{i}\tilde{v}_{i}^{l}, for all r=1,\cdots,k-1, since \left\{e_{l}\otimes w_{r}\right\}_{r=1}^{k-1} is linearly independent in V\otimes W. Since this equality holds for l=1,\cdots,d+1, we conclude that the k vectors in V defined by

\displaystyle\tilde{y}_{r}=\sum_{i\in\Omega_{r}}\lambda_{i}\tilde{v}_{i},\indent\forall r=1,\cdots,k

are all equal. Since the last coordinate of \tilde{v}_{i}, for i=1,\cdots,m, is 1, we see that \sum_{i\in\Omega_{1}}\lambda_{i}=\cdots=\sum_{i\in\Omega_{k}}\lambda_{i}. Since \sum_{i=1}^{m}\lambda_{i}=1, we see that \sum_{i\in\Omega_{r}}\lambda_{i}=\frac{1}{k}, for every r=1,\cdots,k. Scaling by k, we obtain that the vector k\tilde{y}_{r}\in\text{co}\left\{\tilde{v}_{i}:i\in\Omega_{r}\right\}. Defining y_{r}\in\mathbb{R}^{d} by deleting the last coordinate of \tilde{y}_{r}, we see that

\displaystyle ky_{1}=\cdots=ky_{k}\in\bigcap_{r=1}^{k}\text{co}\left\{v_{i}:i\in\Omega_{r}\right\},

since all the y_{r} are equal. Moreover, each \Omega_{r} is nonempty because \sum_{i\in\Omega_{r}}\lambda_{i}=\frac{1}{k}>0, so the sets A_{r}:=\left\{v_{i}:i\in\Omega_{r}\right\} are the desired pairwise-disjoint subsets. \Box
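The k=2 case of the theorem is Radon’s theorem, and it can be computed directly: for d+2 points in \mathbb{R}^{d}, the affine dependence among the lifted points (v,1) yields the partition. A minimal Python/NumPy sketch (my own illustration; the function name and example points are arbitrary):

```python
import numpy as np

# Sketch for the k = 2 case (Radon's theorem): d + 2 points in R^d admit a
# partition into two parts whose convex hulls intersect.
def radon_partition(points):
    pts = np.asarray(points, dtype=float)               # shape (d + 2, d)
    lifted = np.hstack([pts, np.ones((len(pts), 1))])   # rows (v_i, 1)
    # Nontrivial lam with  sum_i lam_i * (v_i, 1) = 0  (affine dependence).
    _, _, vt = np.linalg.svd(lifted.T)
    lam = vt[-1]
    pos, neg = lam > 0, lam < 0
    # Common point of the two hulls: a convex combination of the
    # positive-coefficient points (equal, after rescaling, to the one
    # formed from the negative-coefficient points).
    p = (lam[pos] @ pts[pos]) / lam[pos].sum()
    return np.where(pos)[0], np.where(neg)[0], p

I1, I2, p = radon_partition([[0, 0], [4, 0], [0, 4], [1, 1]])
# Here [1, 1] lies inside the triangle formed by the other three points,
# so the common point p is [1, 1].
```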

This entry was posted in Combinatorics and Convexity.
