Stokes’ theorem on a simplicial complex

This is a note on a version of Stokes’ theorem on a simplicial complex. Originally I wanted to establish some formulas on a graph, but it turns out that it’s better to work on a simplicial complex. After discussing with Raymond, we arrived at some interesting results. Our goal is to prove the following

Theorem 1 (Raymond-Gauss-Green-Stokes theorem)

\displaystyle \boxed{ \int _\Omega d\alpha=\int_{\partial \Omega}\alpha} \ \ \ \ \ (1)

for a {k}-form {\alpha} and a {(k+1)}-chain {\Omega} on a simplicial complex {X}.

Let us now explain the setup.

Let {X} be a simplicial complex. A directed edge (or simply, an edge) is an ordered pair of adjacent vertices {(x_1, x_2)}, also written as {\overrightarrow{x_1 x_2}}. We will denote by {-\overrightarrow{x_1x_2}} the opposite edge of {\overrightarrow{x_1x_2}}, i.e. {-\overrightarrow{x_1 x_2}=\overrightarrow{x_2 x_1}}. We define the set of {k}-simplices {X_k} as follows.

Define {X_0} to be the set of vertices of {X}. Define {X_1} to be the set of all the edges of {X}. (Edges always mean directed edges.)

An ordered tuple of {k+1} distinct vertices {(x_0, \cdots ,x_k)} is said to be a {k}-simplex if the {x_i} are mutually adjacent. We identify two {k}-simplices, written {\Delta_1=\Delta_2}, if {\Delta_2} can be obtained by applying an even permutation to {\Delta_1}. If {\Delta_2} is obtained by applying an odd permutation to {\Delta_1}, we will write {\Delta_2=-\Delta_1}. E.g. {(x, y, z)=(y, z, x)=(z, x, y)=-(y,x,z)}. We denote by {X_k} the set of all {k}-simplices of {X}.

A {k}-chain is a (formal!) finite linear combination of {k}-simplices with integer coefficients. For example, if {e_1, e_2} are edges (i.e. 1-simplices), then {1997e_1-8964e_2} is a {1}-chain. We will denote by {C_k} the set of all {k}-chains. In other words, {C_k} is the free abelian group generated by {X_k}. We use the convention that {n(-\Delta)=(-n) \Delta}, which we will simply write as {-n\Delta}.
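
Since everything here is finite, the setup is easy to play with on a computer. Here is a minimal Python sketch (my own encoding, not part of the note) that normalizes an oriented simplex by sorting its vertex tuple while tracking the parity of the sort, so that an even rearrangement gives the same simplex and an odd one gives its negative; a chain is stored as a dictionary of integer coefficients.

```python
def canonical(simplex):
    """Return (sorted_tuple, sign), where sign is the parity of the sort."""
    verts, sign = list(simplex), 1
    for i in range(len(verts)):                # bubble sort, counting swaps
        for j in range(len(verts) - 1 - i):
            if verts[j] > verts[j + 1]:
                verts[j], verts[j + 1] = verts[j + 1], verts[j]
                sign = -sign
    return tuple(verts), sign

def add_to_chain(chain, simplex, coeff=1):
    """Add coeff * simplex to a chain, stored as {canonical simplex: integer}."""
    key, sign = canonical(simplex)
    chain[key] = chain.get(key, 0) + sign * coeff
    if chain[key] == 0:
        del chain[key]                         # drop cancelled simplices
    return chain

# (x, y, z) = (y, z, x) = -(y, x, z), as above:
assert canonical((1, 2, 3)) == canonical((2, 3, 1)) == ((1, 2, 3), 1)
assert canonical((2, 1, 3)) == ((1, 2, 3), -1)
```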

For {x\in X_0}, by a “tangent vector” at {x} we mean an edge {\overrightarrow{xy}\in X_1} (if one exists). The set of all tangent vectors at {x} will be denoted by {T_xX} (in this setting this is not a vector space, though it can certainly be redefined to be one; we do not do this here). A {k}-form {\alpha} is an assignment to each vertex {x\in X_0} of an alternating {k}-linear map {\alpha_x: \overbrace{T_xX \times \cdots \times T_xX}^k\rightarrow \mathbb{R}}. (“Form” here means an alternating/antisymmetric multilinear form, as opposed to a symmetric multilinear form etc.) If {k=0}, a 0-form is just a function on the set of vertices, i.e. {X_0}. By alternating we mean {\alpha_x(v_{i_1}, \cdots, v_{i_k})=\mathrm{sgn}(i_1, \cdots , i_k)\cdot\alpha_x(v_1, \cdots, v_k)}, where the {v_i\in T_xX} are not necessarily distinct and {(i_1, \cdots, i_k)} is a permutation of {(1, \cdots, k)}. Here, {\alpha_x} is “{k}-linear” if {\alpha_x(\cdots, -v_i, \cdots)=-\alpha_x(\cdots, v_i, \cdots)} for all {i}, holding the other variables constant. (Note that {-v_i} is the reversed edge, based at the other endpoint; for a 1-form this condition reads {\alpha_y(\overrightarrow{yx})=-\alpha_x(\overrightarrow{xy})}, which will be used repeatedly below.) It is easy to see that if any two of the {v_i}‘s are equal, then {\alpha_x(v_1, \cdots, v_k)=0}, and thus {\alpha_x} must be trivial if {k>|T_xX|=\deg x}. We will denote by {\Omega^k} the set of all {k}-forms on {X}.

We now define the integral {\displaystyle \int_\Delta \alpha} of a {k}-form {\alpha} over a {k}-simplex {\Delta =(x_0, \cdots, x_k)\in X_k} as follows.

Definition 2

\displaystyle  \begin{array}{rcl}  \displaystyle\int_\Delta\alpha :=\frac{1}{k+1}\left(\displaystyle\sum_{i=0}^k(-1)^i \alpha _{x_i}(\overrightarrow {x_ix_0}, \cdots , \overbrace{\overrightarrow {x_ix_i}}^{omitted}, \cdots, \overrightarrow {x_ix_k})\right). \end{array}

For example, if {\Delta=\overrightarrow{x_0x_1}} is an edge, then for a 1-form {\alpha}, {\displaystyle\int_{\Delta}\alpha=\frac{1}{2}(\alpha_{x_0}(\overrightarrow{x_0x_1})-\alpha_{x_1}(\overrightarrow{x_1x_0}))=\alpha_{x_0}(\overrightarrow{x_0x_1})}, since {\alpha_{x_1}(\overrightarrow{x_1x_0})=-\alpha_{x_0}(\overrightarrow{x_0x_1})} by linearity. Thus we see that integrating a 1-form over an edge is just applying it to this edge, naturally. (In general, for a {k}-form {\alpha}, we don’t require {\alpha_{x_0}(\overrightarrow{x_0x_1}, \cdots, \overrightarrow {x_0 x_k})=\alpha_{x_1}(\overrightarrow{x_1x_2}, \cdots, \overrightarrow {x_1 x_k}, \overrightarrow {x_1 x_0})}, i.e. a cyclic permutation doesn’t necessarily give the same result, so when integrating, we apply {\alpha} at all {k+1} vertices and take the average.) Of course, we can also integrate over a {k}-chain {\Gamma=\sum_i \Delta_i} by extending this linearly, i.e. {\displaystyle \int_\Gamma \alpha:=\sum_{i}\int_{\Delta_i} \alpha}.
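
Definition 2 translates into code directly. In the sketch below (again my own encoding, not the note’s: a {k}-form is a function alpha(x, heads) taking the base vertex and the tuple of head vertices of the {k} edges emanating from it), Fraction keeps the averages exact, and the final assertion reproduces the edge computation just made.

```python
from fractions import Fraction

def integrate(alpha, simplex):
    """Definition 2: the average over base points x_i of
    (-1)^i * alpha_{x_i}(x_i->x_0, ..., x_i->x_i omitted, ..., x_i->x_k)."""
    k = len(simplex) - 1
    total = Fraction(0)
    for i, xi in enumerate(simplex):
        heads = tuple(xj for j, xj in enumerate(simplex) if j != i)
        total += (-1) ** i * alpha(xi, heads)
    return total / (k + 1)

def integrate_chain(alpha, chain):
    """Extend the integral linearly to a chain {simplex: coefficient}."""
    return sum(coeff * integrate(alpha, s) for s, coeff in chain.items())

# Sanity check on an edge, with the reversal rule a[(x,y)] = -a[(y,x)]:
a = {('p', 'q'): Fraction(5), ('q', 'p'): Fraction(-5)}
alpha1 = lambda x, heads: a[(x, heads[0])]   # a 1-form: one head vertex
assert integrate(alpha1, ('p', 'q')) == 5    # just the value on that edge
```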

We would like to derive Stokes’ theorem as in the classical case: for a smooth {k}-form {\alpha} and a {(k+1)}-dimensional (oriented) domain {\Omega} with smooth enough {k}-dimensional boundary {\partial \Omega},

\displaystyle \int_{\Omega} d\alpha=\int_{\partial \Omega} \alpha.

In our case, there is a natural candidate for the “boundary operator” {\partial =\partial _k: C_k\rightarrow C_{k-1}} defined by

Definition 3

\displaystyle \Delta=(x_0, \cdots, x_k) \stackrel{ \partial }{\longmapsto}\sum_{i=0}^k(-1)^i(x_0,x_1, \cdots, \overbrace { x_i}^{omitted}, \cdots, x_k)

for {\Delta \in X_k} and extend it linearly on {C_k}.

For example, {\partial (x, y)=y-x\in C_0} and {\partial (x, y, z, w)=(y,z,w)-(x, z, w)+(x, y, w)-(x, y, z)\in C_2}.
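
Definition 3 is equally short in code. A sketch reusing add_to_chain from the earlier snippet, together with a check that the boundary of a boundary vanishes on a tetrahedron:

```python
def boundary(simplex):
    """Definition 3: the (k-1)-chain sum_i (-1)^i (x_0, ..., x_i omitted, ..., x_k)."""
    chain = {}
    for i in range(len(simplex)):
        add_to_chain(chain, simplex[:i] + simplex[i + 1:], (-1) ** i)
    return chain

# partial(0,1,2,3) = (1,2,3) - (0,2,3) + (0,1,3) - (0,1,2):
assert boundary((0, 1, 2, 3)) == {(1, 2, 3): 1, (0, 2, 3): -1,
                                  (0, 1, 3): 1, (0, 1, 2): -1}

def boundary_chain(chain):
    """Extend the boundary linearly to chains."""
    out = {}
    for s, c in chain.items():
        for face, fc in boundary(s).items():
            add_to_chain(out, face, c * fc)
    return out

# The boundary of a boundary is empty (all coefficients cancel):
assert boundary_chain(boundary((0, 1, 2, 3))) == {}
```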

Now, we define the exterior derivative of a {k}-form {\alpha}, denoted by {d\alpha}, as follows. For a 0-form {f}, we define a 1-form {df} by

\displaystyle (df)_x(\overrightarrow {xy})=f(y)-f(x).

For a 1-form {\alpha}, we define the 2-form {d\alpha} by

\displaystyle (d\alpha)_x(\overrightarrow{xy_0}, \overrightarrow{xy_1})=\alpha_x(\overrightarrow{xy_0})-\alpha_x(\overrightarrow{xy_1})-\alpha_{y_1}(\overrightarrow{y_1y_0}).

(Note that with this definition, {d\alpha_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2})=d\alpha_{x_1}(\overrightarrow{x_1x_2}, \overrightarrow{x_1x_0})=d\alpha_{x_2}(\overrightarrow{x_2x_0}, \overrightarrow{x_2x_1})}. This is not a coincidence; see Proposition 6 below.)

We know that for the ordinary exterior derivative we have {d^2=0}; let us check this for a 0-form {f}:

\displaystyle  \begin{array}{rcl}  (ddf)_x(\overrightarrow{xy_0}, \overrightarrow{xy_1}) &=&df_x(\overrightarrow{xy_0})-df_x(\overrightarrow{xy_1})-df_{y_1}(\overrightarrow{y_1 y_0})\\ &=&\left(f(y_0)-f(x)\right)-\left(f(y_1)-f(x)\right)-\left(f(y_0)-f(y_1)\right)\\ &=&0. \end{array}
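
A quick numeric sanity check of this computation, with arbitrary made-up values for {f} (the labels x, y0, y1 are just hypothetical vertex names):

```python
# (ddf)_x(x->y0, x->y1) = df_x(x->y0) - df_x(x->y1) - df_{y1}(y1->y0) = 0
f = {'x': 3, 'y0': -7, 'y1': 42}
df = lambda p, q: f[q] - f[p]          # (df)_p(p->q) = f(q) - f(p)
assert df('x', 'y0') - df('x', 'y1') - df('y1', 'y0') == 0
```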

So the definition is not unreasonable, at least in this case. For a 2-form {\omega}, we define

\displaystyle  d\omega_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2}, \overrightarrow{x_0x_3}):=(\omega_{[123]}-\omega_{[023]}+\omega_{[013]}-\omega_{[012]}) =\sum_{i=0}^3(-1)^i\omega_{[0,\cdots \widehat{i}, \cdots ,3]}

where {\widehat {i}} means the index {i} is omitted and

\displaystyle \omega_{[i j k]}:=\frac{1}{3}(\omega_{x_i}(\overrightarrow{x_ix_j}, \overrightarrow{x_ix_k})-\omega_{x_j}(\overrightarrow{x_jx_i},\overrightarrow{x_jx_k})+\omega_{x_k}(\overrightarrow{x_kx_i}, \overrightarrow{x_kx_j})).

Remark 1 Things are getting quite complicated at this stage, so let’s see what’s going on here by looking at {\omega_{[012]}}: up to a factor of 1/3, it is equal to

\displaystyle \omega_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2})+\omega_{x_1}(\overrightarrow{x_1x_2}, \overrightarrow{x_1x_0}) +\omega_{x_2}(\overrightarrow{x_2x_0}, \overrightarrow{x_2x_1}).

i.e. the sum of {\omega} applied at the 3 vertices of the triangle {\Delta x_0x_1 x_2}, so we can unambiguously define {\omega(\Delta x_0x_1 x_2)} to be {\omega_{[012]}}. Therefore {d\omega_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2}, \overrightarrow{x_0x_3})} is just the signed sum of {\omega} over the 4 faces (taking care of the orientation) of the tetrahedron {(x_0, x_1, x_2, x_3)}; in particular the base point ({x_0}) is not important, as long as the orientation is maintained.

It is possible to check directly that {dd\alpha =0} for {\alpha\in \Omega^1}; however, this is quite tedious, and it is actually possible to see in a more general way that {d^2=0} (Proposition 7 below). Nevertheless, I have done this exercise myself; you may check my computation directly if you have doubts :-).

Example 1 Let’s directly compute {(dd\alpha)_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2}, \overrightarrow{x_0x_3})}! This is just

\displaystyle \omega_{[123]}-\omega_{[023]}+\omega_{[013]}-\omega_{[012]}

where {\omega=d\alpha}. To simplify the notation further, let’s denote {\alpha_{x_1}(\overrightarrow{x_1x_2})} simply by {12}, and so on. (Since only the 4 vertices {x_0, \cdots, x_3} are involved, I hope this is not too confusing.) Then (using {12=-21} etc.)

\displaystyle  \begin{array}{rcl}  3\omega_{[123]}&=&\left((12-13-32)-(21-23-31)+(31-32-21)\right)\\ &=&3(12-13-32). \end{array}

Similarly

\displaystyle \omega_{[023]}=(02-03-32), \quad \omega_{[013]}=(01-03-31), \quad \omega_{[012]}=(01-02-21).

So the sum is

\displaystyle (12-13-32)-(02-03-32)+(01-03-31)-(01-02-21)=0!
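
The same cancellation can be checked numerically: take random integer edge values {a[i][j]=\alpha_{x_i}(\overrightarrow{x_ix_j})} subject to the reversal rule {a[j][i]=-a[i][j]}, build {\omega=d\alpha} on triangles by the formulas above, and verify that the alternating sum over the four faces vanishes. (The Python encoding is mine; the formulas are the ones just used.)

```python
import random
from fractions import Fraction

# a[i][j] plays the role of the shorthand "ij", with the rule ij = -ji.
a = [[0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(i + 1, 4):
        a[i][j] = random.randint(-100, 100)
        a[j][i] = -a[i][j]

def omega(i, j, k):
    """omega_{x_i}(x_i->x_j, x_i->x_k) for omega = d(alpha): ij - ik - kj."""
    return a[i][j] - a[i][k] - a[k][j]

def omega_bracket(i, j, k):
    """omega_{[ijk]}: the signed average of omega over the triangle's vertices."""
    return Fraction(omega(i, j, k) - omega(j, i, k) + omega(k, i, j), 3)

# dd(alpha) applied to the tetrahedron (x_0, x_1, x_2, x_3) vanishes:
assert (omega_bracket(1, 2, 3) - omega_bracket(0, 2, 3)
        + omega_bracket(0, 1, 3) - omega_bracket(0, 1, 2)) == 0
```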

Definition 4 In general, for a {k}-form {\alpha}, we define {d\alpha} by

\displaystyle  \begin{array}{rcl}  &(d\alpha)_{x_0}(\overrightarrow{x_0x_1}, \cdots, \overrightarrow{x_0x_{k+1}}) :=\displaystyle\sum_{i=0}^{k+1} (-1)^i\alpha_{[0, 1, \cdots, \widehat{i}, \cdots, k+1]}\quad \text{ where }\\ &\beta_{[i_0,\cdots, i_l]}:=\frac{1}{l+1}\displaystyle\sum_{m=0}^l (-1)^m\beta_{x_{i_m}}(\overrightarrow{x_{i_m}x_{i_0}}, \overrightarrow{x_{i_m}x_{i_1}}, \cdots, \overbrace{\overrightarrow{x_{i_m}x_{i_m}}}^{omitted}, \cdots, \overrightarrow{x_{i_m}x_{i_l}}) \text{ for }\beta\in \Omega^l. \end{array}

Remark 2 {\alpha_{[0,1,\cdots, \widehat i, \cdots, k+1]}} is exactly the integral {\displaystyle\int_{(x_0, x_1, \cdots, \widehat{x_i}, \cdots, x_{k+1})}\alpha}. This is the crucial observation for proving Stokes’ theorem; or, the other way round, we define {d\alpha} this way precisely so that Stokes’ theorem works!
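
In code, Remark 2 says the bracket is exactly the integrate function from the earlier sketch, so Definition 4 takes only a few lines on top of it (same hypothetical encoding as before: a form is a function of a base vertex and a tuple of head vertices):

```python
def d(alpha):
    """Definition 4, via Remark 2: (d alpha)_{x_0}(x_0->x_1, ..., x_0->x_{k+1})
    is the alternating sum over i of the integral of alpha over the face
    obtained by omitting the i-th vertex of (x_0, ..., x_{k+1})."""
    def dalpha(base, heads):
        verts = (base,) + tuple(heads)         # (x_0, x_1, ..., x_{k+1})
        return sum((-1) ** i * integrate(alpha, verts[:i] + verts[i + 1:])
                   for i in range(len(verts)))
    return dalpha
```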

Proposition 5 For {\alpha\in \Omega^k}, {d\alpha} is a {(k+1)}-form.

Proof: We check that

\displaystyle d\alpha_{x_0}(\overrightarrow{x_0x_2}, \overrightarrow{x_0x_1}, \cdots)=-d\alpha_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2}, \cdots).

Note that

\displaystyle  \begin{array}{rcl}  &&(k+1)\alpha_{[0, 2, 1, \cdots,\widehat i,\cdots, k+1]}\\ &=&\alpha(\overrightarrow{x_0x_2}, \overrightarrow{x_0x_1},\cdots )-\alpha(\overrightarrow {x_2 x_0}, \overrightarrow{x_2 x_1}, \cdots )+\alpha(\overrightarrow{x_1 x_0}, \overrightarrow{x_1 x_2}, \cdots)-\alpha(\overrightarrow{x_3 x_0}, \overrightarrow{x_3 x_2},\overrightarrow{x_3 x_1}, \cdots)+\cdots\\ &=&-\left[\alpha( \overrightarrow{x_0x_1},\overrightarrow{x_0x_2},\cdots ) -\alpha(\overrightarrow{x_1 x_0}, \overrightarrow{x_1 x_2}, \cdots) +\alpha(\overrightarrow {x_2 x_0}, \overrightarrow{x_2 x_1}, \cdots ) -\alpha(\overrightarrow{x_3 x_0}, \overrightarrow{x_3 x_1}, \overrightarrow{x_3 x_2}, \cdots)+\cdots\right]\\ &=&-(k+1)\alpha_{[0,1, 2, \cdots,\widehat i,\cdots, k+1]}. \end{array}

Thus {d\alpha_{x_0}(\overrightarrow{x_0x_2}, \overrightarrow{x_0x_1}, \cdots)=-d\alpha_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2}, \cdots)}. The remaining cases are similar. \Box

Therefore we have the map

\displaystyle d: \Omega^k \rightarrow \Omega^{k+1}.

The above proof also shows that we have the following property, which is the other crucial observation for proving Stokes’ theorem:

Proposition 6 {\alpha_{[0,1,2, \cdots ]}=\alpha_{[i_0, i_1,\cdots]}} where {(i_0, i_1, \cdots)} is an even permutation of {(0, 1, 2, \cdots)}. Therefore we also have

\displaystyle d\alpha_{x_{i_0}}(\overrightarrow{x_{i_0}x_{i_1}}, \overrightarrow{x_{i_0}x_{i_2}}, \cdots, \overrightarrow{x_{i_0}x_{i_k}})=d\alpha_{x_{ 0}}(\overrightarrow{x_{ 0}x_{ 1}}, \overrightarrow{x_{ 0}x_{ 2}}, \cdots, \overrightarrow{x_{ 0}x_{ k}})

where {(i_0, i_1, \cdots)} is an even permutation of {(0, 1, 2, \cdots)}.

We also have

Proposition 7

\displaystyle d^2=0: \Omega^{k}\rightarrow \Omega ^{k+2}.

Proof: This actually comes from the fact that the boundary of the boundary of an {m}-simplex is empty. More precisely, let us take {k=1} for concreteness. Note that for {\Delta=(x_0, x_1,x_2, x_3)\in X_3}, {d^2\alpha_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0x_2}, \overrightarrow{x_0x_3})} is just the sum of the four numbers obtained by applying {d\alpha} to the four (oriented) faces of {\Delta}. (Note: {d\alpha_{x_0}(\overrightarrow{x_0x_1}, \overrightarrow{x_0 x_2})=d\alpha_{x_1}(\overrightarrow{x_1x_2}, \overrightarrow{x_1 x_0})}, so this is well-defined, independent of the vertex we base at, as long as the orientation is consistent.) In turn, when we apply the exterior derivative {d} again, for each of the four faces this amounts to summing the numbers obtained when {\alpha} is applied to the (oriented) boundary (consisting of 3 directed edges) of this face (a triangle). Since each pair of adjacent faces shares a common edge with opposite orientations, the contributions cancel in pairs and the result follows. \Box

You will not miss the similarity between {\partial } and {d}. Indeed they can be regarded as dual versions of each other; roughly speaking, {d\alpha(\Delta)=\alpha(\partial \Delta)}. But you will immediately object: a form does not act on {k}-simplices, it acts on vectors! So in order to let forms act on simplices, we integrate them over simplices, and the above relation should read:

Theorem 8 (Raymond-Gauss-Green-Stokes theorem)

\displaystyle \boxed{ \int _\Omega d\alpha=\int_{\partial \Omega}\alpha} \ \ \ \ \ (2)

for a {k}-form {\alpha} and a {(k+1)}-chain {\Omega}.

It turns out that this version of Stokes’ theorem is extremely easy to prove once we set it up properly, partly because we are only doing combinatorics on finite sets (and all forms are finite-valued), thus avoiding a lot of the troubles concerning all sorts of infinities (both in the domain and in the values, as in analysis). Also, the “differentiability” condition is “trivial”: every form can be exterior differentiated, so we do not have to take care of any differentiability issues. (Compare: on {\mathbb{R}}, {f(x)=|x|} is differentiable except at 0; do we still have the fundamental theorem of calculus, the simplest form of Stokes’ theorem, {\displaystyle\int_{a}^b f'(x)dx =f(b)-f(a)}?)

Proof of the Raymond-Gauss-Green-Stokes theorem:
By linearity of the integral over the domain, we clearly only have to prove it for the case where {\Omega=\Delta=(x_0, \cdots, x_{k+1})} is a {(k+1)}-simplex. We use {\widehat{x}} to denote the omission of {x}.

Note that the R.H.S. of (2) is

\displaystyle  \int_{\partial \Delta}\alpha=\sum_{i=0}^{k+1}(-1)^i\int_{(x_0, \cdots, \widehat {x_i}, \cdots, x_{k+1})} \alpha= \sum_{i=0}^{k+1}(-1)^i \alpha_{[0, \cdots, \widehat i, \cdots, k+1]}. \ \ \ \ \ (3)

The last equality holds because {\int_{(x_0, \cdots,\widehat{x_i}, \cdots, x_{k+1})}\alpha} is by definition just the signed average of {\alpha} over the {i}-th face {(x_0, \cdots,\widehat{x_i}, \cdots, x_{k+1})}, which in turn is exactly {\alpha_{[0, \cdots, \widehat i, \cdots, k+1]}} (Remark 2). On the other hand,

\displaystyle  \int_{(x_0, \cdots, x_{k+1})}d\alpha=\frac{1}{k+2}\sum_{i=0}^{k+1}(-1)^i (d\alpha)_{x_i}(\overrightarrow{x_ix_0}, \cdots, \overbrace{\overrightarrow{x_ix_i}}^{omitted}, \cdots, \overrightarrow{x_ix_{k+1}}).

By Propositions 5 and 6, the terms {\displaystyle(-1)^i (d\alpha)_{x_i}(\overrightarrow{x_ix_0}, \cdots, \overbrace{\overrightarrow{x_ix_i}}^{omitted}, \cdots, \overrightarrow{x_ix_{k+1}})\equiv (d\alpha)_{x_0}(\overrightarrow{x_0x_1}, \cdots, \overrightarrow{x_0x_{k+1}})=\sum_{j=0}^{k+1} (-1)^j\alpha_{[0, 1, \cdots, \widehat{j}, \cdots, k+1]}} are all equal, independent of {i}, and since there are {k+2} such terms,

\displaystyle \int_{(x_0, \cdots, x_{k+1})}d\alpha= \sum_{j=0}^{k+1} (-1)^j\alpha_{[0, 1, \cdots, \widehat{j}, \cdots, k+1]}.

Comparing with (3), we get the result. \Box
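
Putting the sketches above together (canonical/add_to_chain, integrate, boundary, and d), the theorem can also be confirmed numerically, e.g. for a random 1-form on a triangle, with the edge-reversal sign rule built in:

```python
import random
from fractions import Fraction

# Random values alpha_{x_i}(x_i -> x_j) on the triangle (0, 1, 2),
# subject to the reversal rule.
vals = {}
for i in range(3):
    for j in range(i + 1, 3):
        vals[(i, j)] = Fraction(random.randint(-10, 10))
        vals[(j, i)] = -vals[(i, j)]
alpha = lambda base, heads: vals[(base, heads[0])]

triangle = (0, 1, 2)
lhs = integrate(d(alpha), triangle)               # LHS of (2)
rhs = integrate_chain(alpha, boundary(triangle))  # RHS of (2)
assert lhs == rhs
```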
