# Optimal Transport in One Dimension


In this article, we briefly explore the optimal transport problem on the real line along with some examples.

## Linear Cost Example

For this example, consider the cost function $c(x,y)=L(x-y)$, where $L:\mathbb {R} ^{d}\rightarrow \mathbb {R}$ is a given linear map. Let $\gamma$ be any transport plan; then by direct computation we see that

$$\int L(x-y)\,d\gamma =\int L(x)\,d\gamma -\int L(y)\,d\gamma =\int L(x)\,d\mu -\int L(y)\,d\nu ,$$

which shows that the cost depends only on the marginals of $\gamma$ (here $\mu$ and $\nu$ are compactly supported probability measures). In fact, in such cases, every transport plan/map is optimal.
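The marginal computation above can be checked numerically. The sketch below (with an arbitrarily chosen linear map and empirical measures; the sample size and seed are illustrative assumptions) compares two different couplings of the same samples and confirms that the linear cost is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical (compactly supported) measures mu and nu, via fixed samples.
x = rng.uniform(0.0, 1.0, size=5)   # samples of mu
y = rng.uniform(2.0, 3.0, size=5)   # samples of nu

L = lambda z: 3.0 * z               # an arbitrary linear map L: R -> R

# Two different transport plans between the empirical measures:
# the identity coupling and a randomly permuted coupling.
perm = rng.permutation(len(y))
cost_identity = np.mean(L(x - y))
cost_permuted = np.mean(L(x - y[perm]))

# Both plans share the same marginals, so linearity forces equal cost.
print(np.isclose(cost_identity, cost_permuted))  # True
```

Any permutation of `y` gives another coupling with the same marginals, so by the identity above every such plan has the same linear cost.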

## Distance Cost Example

Consider the cost function $c(x,y)=|x-y|$ along with probability measures $\mu$ and $\nu$ on $\mathbb {R}$ whose supports satisfy $x\leq y$ for every $(x,y)\in spt(\mu )\times spt(\nu )$. Then for any such $(x,y)$ we see that $c(x,y)=y-x$, which immediately puts us back in the linear cost setting, so any transport map/plan is also optimal for such costs.

## Book Shifting Example

Consider the cost function $c(x,y)=|x-y|$ along with $\mu ={\frac {1}{2}}\lambda _{[0,2]}$ and $\nu ={\frac {1}{2}}\lambda _{[1,3]}$ (where $\lambda$ is the one-dimensional Lebesgue measure). A (monotone) transport map that rearranges $\mu$ into $\nu$ is given by $T_{0}(x)=x+1$, and its corresponding cost is

$$M(T_{0})=\int |T_{0}(x)-x|\,d\mu =1.$$


Furthermore, notice that the piecewise map $T_{1}$ given by $T_{1}(x)=x+2$ (for $x\leq 1$ ) and $T_{1}(x)=x$ (for $x>1$ ) satisfies $T_{1}\#\mu =\nu$ , i.e. $T_{1}$ is a transport map from $\mu$ to $\nu$ ; moreover, the corresponding cost is

$$M(T_{1})=\int |T_{1}(x)-x|\,d\mu ={\frac {1}{2}}\int _{0}^{1}2\,dx=1,$$

and so we conclude that $T_{1}$ is indeed optimal as well.

Theorem: Let $\mu ,\nu$ be probability measures on $\mathbb {R}$ with cumulative distribution functions (CDFs) $F$ and $G$ , respectively. Also, let $\pi$ be the probability measure on $\mathbb {R} ^{2}$ with the CDF $H(x,y)=\min(F(x),G(y))$ . Then, $\pi \in \Gamma (\mu ,\nu )$ and is optimal (in the Kantorovich problem setting) between $\mu$ and $\nu$ for the (quadratic) cost function $c(x,y)=|x-y|^{2}$ , and the corresponding cost is

$$T_{2}(\mu ,\nu )=\int _{0}^{1}|F^{-1}(t)-G^{-1}(t)|^{2}\,dt,$$

where $F^{-1}$ and $G^{-1}$ are the pseudo-inverses of the respective CDFs, i.e. $F^{-1}(t)=\inf\{x\in \mathbb {R} :F(x)>t\}$.
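The quantile formula is easy to evaluate in concrete cases. The sketch below takes $\mu ={\frac {1}{2}}\lambda _{[0,2]}$ and $\nu ={\frac {1}{2}}\lambda _{[1,3]}$ from the book-shifting example, whose quantile functions are $F^{-1}(t)=2t$ and $G^{-1}(t)=1+2t$, and approximates the integral on a grid (the grid size is an illustrative choice):

```python
import numpy as np

# Quantile functions (pseudo-inverse CDFs) of mu = (1/2)*lambda_[0,2]
# and nu = (1/2)*lambda_[1,3].
F_inv = lambda t: 2.0 * t
G_inv = lambda t: 1.0 + 2.0 * t

# T_2(mu, nu) = int_0^1 |F^{-1}(t) - G^{-1}(t)|^2 dt, approximated on a grid.
t = np.linspace(0.0, 1.0, 100_001)
cost = np.mean((F_inv(t) - G_inv(t)) ** 2)

print(np.isclose(cost, 1.0))  # True: a shift by 1 gives quadratic cost 1
```

Here $F^{-1}(t)-G^{-1}(t)\equiv -1$, so the integrand is constant and the quadratic cost is exactly 1.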

### Ideas and Remarks for the Proof

Note that Cédric Villani's proof gives this result in arbitrary dimensions. Below is a rough outline of the proof; the full details can be found in "Topics in Optimal Transportation" (Villani). Moreover, the measure $\pi$ constructed in the theorem is indeed optimal whenever the cost function is of the form $c(x,y)=c(x-y)$, where $c$ is a nonnegative, symmetric, convex function on $\mathbb {R}$.

One of the first major steps in proving this theorem is showing that $supp(\pi )\subset \{(x,y)\in \mathbb {R} ^{2}:F(x^{-})\leq G(y){\text{ and }}G(y^{-})\leq F(x)\}$ by considering specific cases. Upon showing this, we may conclude that $\pi$ is supported in a monotone subset of $\mathbb {R} ^{2}$, and hence also in the subdifferential of some lower semi-continuous convex function. From here, we make use of the Knott–Smith optimality criterion (Villani, p. 66), which establishes that $\pi$ is an optimal transference plan. Then, upon showing that $\pi =(F^{-1}\times G^{-1})\#\lambda _{[0,1]}$, we see that for any nonnegative, measurable function $u$ on $\mathbb {R} ^{2}$,

$$\int _{\mathbb {R} ^{2}}u(x,y)\,d\pi (x,y)=\int _{0}^{1}u(F^{-1}(t),G^{-1}(t))\,dt.$$

Taking $u(x,y)=|x-y|^{2}$ then immediately yields the cost $T_{2}(\mu ,\nu )$ and completes the proof.
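The identity $\pi =(F^{-1}\times G^{-1})\#\lambda _{[0,1]}$ can also be illustrated by sampling. The sketch below (with the uniform measures from the book-shifting example; the sample size and seed are illustrative assumptions) pushes uniform samples through the pair of quantile functions and checks that the two marginals have the expected means:

```python
import numpy as np

# Push lambda_[0,1] through (F^{-1} x G^{-1}) and inspect the marginals,
# using mu = (1/2)*lambda_[0,2] and nu = (1/2)*lambda_[1,3].
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 1.0, size=100_000)

F_inv = lambda s: 2.0 * s        # quantile function of mu
G_inv = lambda s: 1.0 + 2.0 * s  # quantile function of nu

x, y = F_inv(t), G_inv(t)        # samples of the coupling pi

# First marginal should match mu (mean 1); second should match nu (mean 2).
print(np.isclose(x.mean(), 1.0, atol=0.01), np.isclose(y.mean(), 2.0, atol=0.01))
```

Matching means do not by themselves prove the marginals agree, but the same sampling scheme can be used to compare full empirical CDFs against $F$ and $G$.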