## A problem on coordinate vector fields

[Updated on April 14: a more detailed explanation of a Frobenius type lemma (Lemma 3) and one more example]
Today John Ma asked an interesting question about vector fields on a surface. I think it is quite a good exercise in differential geometry so I am recording it here.

Question: Let ${M}$ be a two-dimensional smooth manifold (i.e. a surface) and ${V_1, V_2}$ be two linearly independent vector fields on ${M}$. Can we find local coordinates ${x^1, x^2}$ (around any point) and a function ${f}$ on ${M}$ such that

$\displaystyle \begin{cases}V_1=f \frac{\partial }{\partial x^1} \\V_2=f \frac{\partial }{\partial x^2}\end{cases}?$

As expected, the answer is “no” in general, as there must be some “compatibility conditions” on the vector fields. So the question is to find the necessary and sufficient condition for this. Of course, the analogous question can be asked in higher dimensions, but I would like to focus on the surface case first, partly because it is notationally simpler, and partly because there is an expression whose geometric meaning is not clear to me even in this case.
After discussing with John Ma and Raymond Chow, we have come up with the following result:

Theorem 1 (John-Raymond’s Theorem, 2011) Let ${M}$ be a two-dimensional smooth manifold (i.e. a surface) and ${V_1, V_2}$ be two linearly independent vector fields on ${M}$. Then there exist local coordinates ${x^1, x^2}$ (around any point) and a function ${f}$ on ${M}$ such that

$\displaystyle \begin{cases}V_1=f \frac{\partial }{\partial x^1}, \\V_2=f \frac{\partial }{\partial x^2}\end{cases}$

if and only if the 1-form

$\displaystyle \Omega:=i_{[V_1, V_2]}(\omega^1\wedge \omega^2)$

is closed, where the ${\omega^i}$’s are the dual coframe (i.e. 1-forms) of the ${V_i}$’s. (Here ${i_X \alpha}$ is the contraction ${\alpha(X, \cdot)}$ for a 2-form ${\alpha}$, and $[\,,\,]$ is the Lie bracket.)

We begin with a basic result in geometric P.D.E.

Lemma 2 With the same assumptions as in Theorem 1, if ${g_1, g_2}$ are given functions on ${M}$, then we can locally solve for a function ${k}$ on ${M}$ satisfying

$\displaystyle \begin{cases}V_1(k)=g_1, \\V_2(k)=g_2\end{cases}$

if and only if the 1-form

$\displaystyle g_1\omega^1+ g_2\omega^2$

is closed.

Proof: Let us transform the system of equations into the following equivalent form ($dk$ is the differential of $k$):

$\displaystyle \begin{cases}dk(V_1)=g_1, \\dk(V_2)=g_2.\end{cases}$

Then it is clear that

$\displaystyle dk=g_1 \omega^1+g_2 \omega^2.$

Thus to solve for ${k}$, the R.H.S. of the above must be locally exact, which (by the Poincaré lemma) is equivalent to ${g_1 \omega^1+g_2 \omega^2}$ being closed. $\Box$
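In the flat frame ${V_1=\frac{\partial}{\partial x}}$, ${V_2=\frac{\partial}{\partial y}}$ (so ${\omega^i=dx^i}$), Lemma 2 reduces to the classical criterion that ${g_1\,dx+g_2\,dy}$ is locally exact iff ${\partial g_1/\partial y=\partial g_2/\partial x}$. Here is a minimal symbolic sketch in sympy; the data ${g_1=2xy}$, ${g_2=x^2+\cos y}$ are hypothetical, chosen just for illustration:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Illustrative data in the flat frame V1 = d/dx, V2 = d/dy (so omega^i = dx^i):
g1 = 2*x*y              # coefficient of dx
g2 = x**2 + sp.cos(y)   # coefficient of dy

# Closedness of g1 dx + g2 dy amounts to dg1/dy = dg2/dx:
assert sp.simplify(sp.diff(g1, y) - sp.diff(g2, x)) == 0

# Recover k with dk = g1 dx + g2 dy, following the Poincare lemma:
k = sp.integrate(g1, x)                              # fixes dk/dx = g1, up to h(y)
k += sp.integrate(sp.simplify(g2 - sp.diff(k, y)), y)
print(k)  # x**2*y + sin(y), so V1(k) = g1 and V2(k) = g2
```

If the test on closedness fails, no ${k}$ exists, matching the “only if” direction of the lemma.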

The following lemma is in the same spirit as the Frobenius integrability theorem; it can be regarded as the special case of Theorem 1 where ${f\equiv 1}$.

Lemma 3 Suppose ${V_1, V_2}$ are two linearly independent vector fields on the surface ${M}$. Then ${[V_1, V_2]=0}$ if and only if we can find local coordinates ${x^1, x^2}$ around any point such that

$\displaystyle \begin{cases} \frac{\partial}{\partial {x^1}}=V_1\\ \frac{\partial}{\partial x^2}=V_2 \end{cases}$

Proof: One direction is trivial: coordinate vector fields commute by the equality of mixed partials (Clairaut’s theorem), so we only have to prove the converse. We give two proofs, one using “vector calculus” and one using “exterior differential calculus”; they can be regarded as dual versions of each other.
Proof 1
First of all, since the ${V_i}$’s are independent, by standard ODE theory and an application of the inverse mapping theorem, there exist local coordinates ${u^1, u^2}$ on a small neighborhood (of a given point) such that (see M. do Carmo, Differential Geometry of Curves and Surfaces, Section 3.4, p.182, “Main Theorem”)

$\displaystyle \frac{\partial }{\partial u^i}=f^i(u^1, u^2)V_i\text{ for }i=1,2$

for some smooth functions ${f^1, f^2}$; clearly the ${f^i}$ are nowhere zero. (The geometric meaning is that the coordinate vectors are parallel to the ${V_i}$.) Then, using ${[V_1, V_2]=0}$,

$\displaystyle 0=[\frac{\partial }{\partial u^1}, \frac{\partial }{\partial u^2}]=[f^1 V_1, f^2 V_2]=f^1 V_1(f^2)V_2-f^2 V_2(f^1) V_1.$

As the ${V_i}$ are independent and the ${f^i}$ nowhere zero, ${V_1(f^2)=V_2(f^1)=0}$. Thus ${f^1=f^1(u^1)}$ and ${f^2=f^2(u^2)}$ (on a sufficiently small connected neighborhood). Choose two functions ${x^1(u^1), x^2(u^2)}$ such that

$\displaystyle \frac{dx^1}{d u^1}=f^1, \frac{dx^2}{d u^2}=f^2.$

We claim that the ${x^i}$’s are the desired coordinate functions. Indeed, the map ${(u^1, u^2)\mapsto (x^1(u^1), x^2(u^2))}$ has invertible Jacobian (as ${\frac{dx^i}{du^i}=f^i\neq 0}$) and thus defines coordinates by the inverse mapping theorem. Also,

$\displaystyle \frac{\partial }{\partial x^i}=\frac{1}{\frac{dx^i}{du^i}}\frac{\partial }{\partial u^i}=\frac{1}{f^i(u^i)}\frac{\partial }{\partial u^i}=V_i.$

Proof 2
This is even simpler. By the formula

$\displaystyle d\omega^i(V_j, V_k)=V_j(\omega^i(V_k))-V_k(\omega^i(V_j))-\omega^i([V_j, V_k]),$

we have ${d\omega^i=0}$ for ${i=1,2}$. So by the Poincaré lemma, locally there are functions ${x^i}$, ${i=1,2}$, such that

$\displaystyle \omega^i=dx^i.$

As ${dx^1\wedge dx^2=\omega^1\wedge \omega^2\neq0}$, by the inverse mapping theorem, the ${x^i}$ give local coordinates on ${M}$. Also ${\omega^i=dx^i}$ is equivalent to ${V_i=\frac{\partial }{\partial x^i}}$. $\Box$
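As a sanity check on Lemma 3, here is a symbolic sketch in sympy with a hypothetical commuting pair (chosen for illustration, not taken from the discussion above): the Euler field ${r\frac{\partial}{\partial r}}$ and the rotation field ${\frac{\partial}{\partial \theta}}$ on the punctured plane, for which ${x^1=\log r}$ and ${x^2=\theta}$ are the coordinates the lemma promises:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# A hypothetical commuting pair:
# V1 = x d/dx + y d/dy (the Euler field r d/dr), V2 = -y d/dx + x d/dy (= d/dtheta)
V1 = sp.Matrix([x, y])
V2 = sp.Matrix([-y, x])

def lie_bracket(X, Y):
    # [X, Y]^i = X(Y^i) - Y(X^i) in Cartesian coordinates
    JX, JY = X.jacobian((x, y)), Y.jacobian((x, y))
    return sp.simplify(JY * X - JX * Y)

assert lie_bracket(V1, V2) == sp.zeros(2, 1)   # the pair commutes

# Lemma 3 then guarantees coordinates with V_i = d/dx^i; here x^1 = log r
# and x^2 = theta work, which we verify via dx^i(V_j) = delta_ij:
x1 = sp.log(x**2 + y**2) / 2   # log r
x2 = sp.atan2(y, x)            # polar angle
for i, f in enumerate((x1, x2)):
    for j, V in enumerate((V1, V2)):
        df_V = sp.diff(f, x) * V[0] + sp.diff(f, y) * V[1]
        assert sp.simplify(df_V) == (1 if i == j else 0)
```

The last loop checks ${dx^i(V_j)=\delta_{ij}}$, which is exactly the statement that ${V_1, V_2}$ are the coordinate vector fields of ${(x^1, x^2)}$.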

We are now ready to prove John-Raymond’s Theorem.
Proof: Let us first assume the existence of such ${f}$ and ${x^i}$’s. Let ${g=1/f}$ (clearly ${f}$ is nowhere zero). Then

$\displaystyle 0=[\frac{\partial}{\partial x^1}, \frac{\partial}{\partial x^2}]=[gV_1, gV_2]=g^2[V_1, V_2]+g V_1 (g)V_2-g V_2(g)V_1.$

So we have

$\displaystyle a^1V_1+a^2V_2=[V_1, V_2]=V_2(\log g)V_1-V_1(\log g)V_2$

where ${a^i:=\omega^i([V_1, V_2])}$ are the components of ${[V_1, V_2]}$ in the frame ${V_1, V_2}$. Therefore, comparing coefficients,

$\displaystyle \begin{cases} V_1(\log g)=-a^2, \\V_2(\log g)=a^1. \end{cases}$

The 1-form ${\Omega:=-a^2\omega^1+a^1 \omega^2}$ is closed, since it is ${d(\log g)}$. It is also easy to see that

$\displaystyle \Omega=-\omega^2([V_1, V_2])\omega^1+\omega^1([V_1, V_2])\omega^2=i_{[V_1, V_2]}(\omega ^1 \wedge \omega^2).$

For the converse, if ${\Omega}$ is closed, then we can apply Lemma 2 (with ${k=\log g}$, ${g_1=-a^2}$ and ${g_2=a^1}$) to solve for ${g}$ (and hence ${f=1/g}$). Then ${gV_1, gV_2}$ are linearly independent vector fields with vanishing Lie bracket, by reversing the above argument:

$\displaystyle [gV_1, gV_2]=0.$

Thus by Lemma 3, there exists (local) coordinates ${x^1, x^2}$ such that

$\displaystyle gV_i=\frac{\partial }{\partial x^i} \text{ for }i=1,2.$

Thus ${V_i=f\frac{\partial }{\partial x^i}}$ with ${f=1/g}$, and we have completed the proof. $\Box$

Remark 1

1. In the two-dimensional case, this result is also equivalent to the vanishing of the Lie derivative

$\displaystyle L_{[V_1, V_2]}(\omega^1\wedge \omega^2)=0,$

by Cartan’s formula ${L_X=i_X d+d\, i_X}$: since ${d(\omega^1\wedge \omega^2)=0}$ (${\omega^1\wedge \omega^2}$ is a top form), we get ${L_{[V_1, V_2]}(\omega^1\wedge \omega^2)=d\Omega}$. This is not true in higher dimensions.

2. It is not clear to me what the geometric meaning of ${i_{[V_1, V_2]}(\omega ^1 \wedge \omega^2)}$ being closed is. Perhaps there is a better way of expressing this 1-form ${\Omega}$. The generalization to higher dimensions is not difficult, by treating the pairs of vector fields ${(V_i, V_j)}$ separately; e.g. in the three-dimensional case the condition consists of three relations corresponding to the three pairs ${(V_1, V_2), (V_2, V_3), (V_1, V_3)}$.

Example 1 To get some feeling for what’s really going on, I have worked out the simplest non-trivial case, namely, the ${V_i}$ are orthonormal vector fields on ${\mathbb{R}^2}$. Suppose ${V_1=(\cos \theta, \sin \theta)}$ and ${V_2=(-\sin \theta, \cos \theta)}$, where ${\theta=\theta(x, y)}$ is a local smooth function on the plane (this is NOT the angle in polar coordinates!). Then ${\omega^1\wedge \omega^2=dx\wedge dy}$ is the standard volume form on ${\mathbb{R}^2}$, where ${x, y}$ are the standard coordinates.

For a vector field ${X=(a,b)}$, ${i_X(dx \wedge dy)=-b\,dx+a\,dy}$, and so ${d(i_X(dx\wedge dy))=(\frac{\partial a}{\partial x}+\frac{\partial b}{\partial y})dx\wedge dy=\mathrm{div}(X)\,dx\wedge dy.}$ (This is not a coincidence: the intrinsic definition of divergence is exactly ${(\mathrm{div}\, X)\text{Vol}=d(i_X \text{Vol})}$.)
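This identity is easy to confirm symbolically; a minimal sympy sketch with arbitrary component functions ${a, b}$:

```python
import sympy as sp

x, y = sp.symbols('x y')
a = sp.Function('a')(x, y)
b = sp.Function('b')(x, y)

# i_X(dx^dy) = -b dx + a dy for X = (a, b); the exterior derivative of
# P dx + Q dy has coefficient (dQ/dx - dP/dy) on dx^dy.
P, Q = -b, a
d_coeff = sp.diff(Q, x) - sp.diff(P, y)   # coefficient of dx^dy in d(i_X(dx^dy))
div_X = sp.diff(a, x) + sp.diff(b, y)
assert sp.simplify(d_coeff - div_X) == 0  # d(i_X(dx^dy)) = (div X) dx^dy
```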

So the condition in Theorem 1 is exactly ${\mathrm{div}\,[V_1, V_2]=0}$. We will calculate below that ${[V_1, V_2]}$ is actually the negative gradient ${-\nabla \theta}$, so we conclude that

$\displaystyle \boxed{ \text{The condition in Theorem 1 holds if and only if }\Delta \theta=0. } \ \ \ \ \ (1)$

Let us compute ${[V_1, V_2]}$. There are some little tricks to aid our computations. Note that ${V_i}$ being orthonormal implies

$\displaystyle D_{V_1}V_2\parallel V_1,\; D_{V_2}V_1\parallel V_2\text{ and }\langle D_{X}V_2, V_1\rangle =-\langle D_{X}V_1, V_2\rangle .$

Since

$\displaystyle \begin{array}{rcl} [V_1, V_2]=D_{V_1}V_2-D_{V_2}V_1 &=&\langle D_{V_1}V_2, V_1\rangle V_1-\langle D_{V_2}V_1, V_2\rangle V_2\\ &=&-\langle D_{V_1}V_1, V_2\rangle V_1-\langle D_{V_2}V_1, V_2\rangle V_2, \end{array}$

we just have to find ${\langle D_{V_i}V_1, V_2\rangle}$ for ${i=1,2}$. We compute

$\displaystyle \begin{array}{rcl} D_{V_1 }V_1&=&\cos \theta \frac{\partial }{\partial x}(\cos \theta, \sin \theta)+\sin \theta \frac{ \partial }{\partial y}(\cos \theta , \sin \theta)\\ &=&\cos \theta \theta _x(-\sin \theta, \cos \theta)+\sin \theta \theta_y (-\sin \theta , \cos \theta) \end{array}$

So

$\displaystyle \langle D_{V_1}V_1, V_2\rangle=\cos \theta \theta_x+ \sin \theta \theta_y.$

Similarly we have

$\displaystyle \langle D_{V_2}V_1, V_2\rangle=-\sin \theta \theta_x+\cos \theta \theta _y.$

Therefore

$\displaystyle \begin{array}{rcl} [V_1, V_2]&=&-(\cos \theta \theta_x+ \sin \theta \theta_y)V_1 -(-\sin \theta \theta_x+\cos \theta \theta _y)V_2\\ &=&-(\cos \theta \theta_x+ \sin \theta \theta_y)(\cos \theta , \sin \theta) -(-\sin \theta \theta_x+\cos \theta \theta _y)(-\sin \theta, \cos \theta)\\ &=&-(\theta_x, \theta_y)\\ &=& -\nabla \theta. \end{array}$

We have deduced (1). I think this result is quite interesting in itself; I would expect there is a more intrinsic viewpoint explaining it.
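The computation above is easy to confirm symbolically. A sketch in sympy, with ${\theta}$ an arbitrary smooth function, checking both ${[V_1, V_2]=-\nabla \theta}$ and the resulting criterion ${\mathrm{div}\,[V_1, V_2]=-\Delta \theta}$:

```python
import sympy as sp

x, y = sp.symbols('x y')
theta = sp.Function('theta')(x, y)   # arbitrary smooth angle function

V1 = sp.Matrix([sp.cos(theta), sp.sin(theta)])
V2 = sp.Matrix([-sp.sin(theta), sp.cos(theta)])

def lie_bracket(X, Y):
    # [X, Y]^i = X(Y^i) - Y(X^i) in Cartesian coordinates
    JX, JY = X.jacobian((x, y)), Y.jacobian((x, y))
    return sp.simplify(JY * X - JX * Y)

grad_theta = sp.Matrix([sp.diff(theta, x), sp.diff(theta, y)])
br = lie_bracket(V1, V2)

# [V1, V2] = -grad(theta):
assert sp.simplify(br + grad_theta) == sp.zeros(2, 1)

# hence div [V1, V2] = -Laplacian(theta), so the condition of Theorem 1
# holds exactly when theta is harmonic:
laplacian = sp.diff(theta, x, 2) + sp.diff(theta, y, 2)
assert sp.simplify(sp.diff(br[0], x) + sp.diff(br[1], y) + laplacian) == 0
```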