[Updated on April 14: a more detailed explanation of a Frobenius type lemma (Lemma 3) and one more example]
Today John Ma asked an interesting question about vector fields on a surface. I think it is quite a good exercise in differential geometry so I am recording it here.
Question: Let $M$ be a two-dimensional smooth manifold (i.e. a surface) and let $X, Y$ be two linearly independent vector fields on $M$. Can we find local coordinates $(x, y)$ (around any point) and a function $f$ on $M$ such that

$$X = f\frac{\partial}{\partial x}, \quad Y = f\frac{\partial}{\partial y}?$$
As expected, the answer is “no” in general, as there must be some “compatibility conditions” on the vector fields. So the question is to find the necessary and sufficient condition for this. Of course, the analogous question can be asked in higher dimensions, but I would like to focus on the surface case first, partly because the notation is simpler, and partly because there is an expression whose geometric meaning is not clear to me even in this case.
After discussing with John Ma and Raymond Chow, we have come up with the following result:
Theorem 1 (John-Raymond’s Theorem, 2011) Let $M$ be a two-dimensional smooth manifold (i.e. a surface) and let $X, Y$ be two linearly independent vector fields on $M$. Then there exist local coordinates $(x, y)$ (around any point) and a function $f$ on $M$ such that

$$X = f\frac{\partial}{\partial x}, \quad Y = f\frac{\partial}{\partial y}$$

if and only if the 1-form

$$\eta := b\,\omega^1 - a\,\omega^2$$

is closed. Here $(\omega^1, \omega^2)$ is the coframe dual to $(X, Y)$, and $a, b$ are the functions determined by $[X, Y] = aX + bY$.
We begin with a basic result in geometric P.D.E.
Lemma 2 With the same assumptions as in Theorem 1, if $g, h$ are given functions on $M$, then we can locally solve for a function $u$ on $M$ satisfying

$$Xu = g, \quad Yu = h$$

if and only if the 1-form

$$g\,\omega^1 + h\,\omega^2$$

is closed, where $(\omega^1, \omega^2)$ is the coframe dual to $(X, Y)$.
Proof: Let us transform the system of equations into the following equivalent form ($du$ is the differential of $u$):

$$du = g\,\omega^1 + h\,\omega^2.$$

Then it is clear that the two formulations agree, since pairing both sides with $X$ and $Y$ gives exactly $du(X) = Xu = g$ and $du(Y) = Yu = h$. Thus to solve for $u$, the R.H.S. of the above must be locally exact, but this is equivalent (by Poincaré’s lemma) to $g\,\omega^1 + h\,\omega^2$ being closed. $\Box$
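As a sanity check on Lemma 2, here is a small sympy computation (the frame and test function below are my own illustrative choices, not from the discussion above). Take $X = \partial_x$ and $Y = x\,\partial_x + \partial_y$ on $\mathbb{R}^2$; the dual coframe is $\omega^1 = dx - x\,dy$, $\omega^2 = dy$. For data of the form $g = Xu$, $h = Yu$, the 1-form $g\,\omega^1 + h\,\omega^2$ should equal $du$ and hence be closed:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical frame on R^2 (my own choice, not from the post):
# X = d/dx, Y = x d/dx + d/dy.  Its dual coframe is
# w1 = dx - x dy, w2 = dy  (check: w1(X)=1, w1(Y)=0, w2(X)=0, w2(Y)=1).

u = sp.exp(x*y) + sp.sin(x)          # an arbitrary test function
g = sp.diff(u, x)                    # g = Xu
h = x*sp.diff(u, x) + sp.diff(u, y)  # h = Yu

# alpha = g*w1 + h*w2 = P dx + Q dy with P = g, Q = h - x*g.
P, Q = g, h - x*g
# d(alpha) = (dQ/dx - dP/dy) dx ^ dy; closedness means this coefficient vanishes.
d_alpha = sp.simplify(sp.diff(Q, x) - sp.diff(P, y))
print(d_alpha)  # 0, i.e. alpha = du is closed, as Lemma 2 predicts
```

Conversely, for a generic pair $(g, h)$ not of this form, the same coefficient is nonzero and the system has no solution.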
Lemma 3 (Frobenius-type lemma) With the same assumptions as in Theorem 1, there exist local coordinates $(x, y)$ (around any point) such that

$$X = \frac{\partial}{\partial x}, \quad Y = \frac{\partial}{\partial y}$$

if and only if $[X, Y] = 0$.

Proof: One direction is trivial: if such coordinates exist, then $[X, Y] = [\partial_x, \partial_y] = 0$ by Clairaut’s theorem on the symmetry of mixed partial derivatives. We only have to prove the converse. We give two proofs, one using “vector calculus” and one using “exterior differential calculus”; they can be treated as dual versions of each other.
First of all, since $X, Y$ are independent, by standard ODE theory and an application of the inverse mapping theorem, there exist local coordinates $(s, t)$ on a small neighborhood (of a given point) such that (see M. doCarmo: Differential Geometry of Curves and Surfaces, Section 3.4, p.182, “Main Theorem”)

$$X = p\,\frac{\partial}{\partial s}, \quad Y = q\,\frac{\partial}{\partial t}$$

for some smooth functions $p, q$, which are clearly nowhere zero. (The geometric meaning is that the coordinate vectors are parallel to $X, Y$.) Then

$$0 = [X, Y] = \left[p\,\frac{\partial}{\partial s},\; q\,\frac{\partial}{\partial t}\right] = p\,\frac{\partial q}{\partial s}\,\frac{\partial}{\partial t} - q\,\frac{\partial p}{\partial t}\,\frac{\partial}{\partial s}.$$

As $\frac{\partial}{\partial s}, \frac{\partial}{\partial t}$ are independent, $p\,\frac{\partial q}{\partial s} = 0 = q\,\frac{\partial p}{\partial t}$. Thus $p = p(s)$ and $q = q(t)$ (we can assume this in a sufficiently small connected neighborhood). Choose two functions $x = x(s)$, $y = y(t)$ such that

$$x'(s) = \frac{1}{p(s)}, \quad y'(t) = \frac{1}{q(t)}.$$

We claim that $(x, y)$ are the desired coordinate functions. Indeed, $(s, t)\mapsto (x(s), y(t))$ has invertible Jacobian and thus defines coordinates by the inverse mapping theorem. Also,

$$X = p\,\frac{\partial}{\partial s} = p\,x'(s)\,\frac{\partial}{\partial x} = \frac{\partial}{\partial x}, \quad \text{and similarly} \quad Y = \frac{\partial}{\partial y}.$$
This is even simpler. By the formula

$$d\omega(X, Y) = X\,\omega(Y) - Y\,\omega(X) - \omega([X, Y]),$$

we have $d\omega^i(X, Y) = 0$ for $i = 1, 2$ (the first two terms vanish because $\omega^i(X), \omega^i(Y)$ are constants, and the last because $[X, Y] = 0$); as a 2-form on a surface is determined by its value on the frame $(X, Y)$, this gives $d\omega^i = 0$. So by the Poincaré lemma, locally there are functions $x, y$ such that

$$\omega^1 = dx, \quad \omega^2 = dy.$$

As $dx\wedge dy = \omega^1\wedge\omega^2 \neq 0$, by the inverse mapping theorem $(x, y)$ give local coordinates on $M$. Also, $\omega^1 = dx$, $\omega^2 = dy$ is equivalent to $X = \frac{\partial}{\partial x}$, $Y = \frac{\partial}{\partial y}$. $\Box$
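To illustrate Lemma 3, here is a sympy sketch on a hypothetical commuting pair of my own choosing (not from the post): $X = \partial_x$ and $Y = (1+y^2)\,\partial_y$. Following the second proof, the dual coframe $(dx,\; dy/(1+y^2))$ is closed, with potentials $u = x$, $v = \arctan y$, and in the coordinates $(u, v)$ the fields straighten out:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A hypothetical commuting pair (my own example, not from the post):
# X = d/dx and Y = (1 + y^2) d/dy, written as component tuples.
X = (sp.Integer(1), sp.Integer(0))
Y = (sp.Integer(0), 1 + y**2)

def bracket(V, W):
    """Lie bracket [V, W] of planar vector fields given by components."""
    act_V = lambda F: V[0]*sp.diff(F, x) + V[1]*sp.diff(F, y)
    act_W = lambda F: W[0]*sp.diff(F, x) + W[1]*sp.diff(F, y)
    return tuple(sp.simplify(act_V(W[i]) - act_W(V[i])) for i in range(2))

print(bracket(X, Y))  # (0, 0), so Lemma 3 applies

# Second proof in action: potentials of the closed dual coframe.
u, v = x, sp.atan(y)
jac = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
                 [sp.diff(v, x), sp.diff(v, y)]])
print(sp.simplify(jac.det()))  # 1/(1 + y^2), nonzero, so (u, v) are coordinates
for V in (X, Y):
    print([sp.simplify(V[0]*sp.diff(c, x) + V[1]*sp.diff(c, y)) for c in (u, v)])
# [1, 0] and [0, 1]: X(u)=1, X(v)=0, Y(u)=0, Y(v)=1, i.e. X = d/du, Y = d/dv
```

Note the bracket vanishes precisely because the coefficient of $Y$ depends only on $y$; replacing $1+y^2$ by a function of $x$ would destroy commutativity.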
We are now ready to prove John-Raymond’s Theorem.
Proof: Let us assume the existence of such $f$ and $(x, y)$ first. Let $g = \ln|f|$ (clearly $f$ is nowhere zero). Then

$$[X, Y] = \left[f\,\frac{\partial}{\partial x},\; f\,\frac{\partial}{\partial y}\right] = f\,\frac{\partial f}{\partial x}\,\frac{\partial}{\partial y} - f\,\frac{\partial f}{\partial y}\,\frac{\partial}{\partial x}.$$

So we have

$$[X, Y] = (Xg)\,Y - (Yg)\,X,$$

where we used $Xg = f\,\partial_x(\ln|f|) = \frac{\partial f}{\partial x}$ and $Yg = \frac{\partial f}{\partial y}$, i.e. $a = -Yg$ and $b = Xg$. Therefore

$$\eta = b\,\omega^1 - a\,\omega^2 = (Xg)\,\omega^1 + (Yg)\,\omega^2 = dg.$$

The 1-form $\eta$ is closed, since it is exact ($\eta = dg$). It is also easy to see that, by Lemma 2, the closedness of $\eta$ is exactly the condition for the local solvability of the system $Xg = b$, $Yg = -a$.
For the converse, if the 1-form $\eta$ is closed, then we can apply Lemma 2 to solve $Xg = b$, $Yg = -a$ for $g$ (and hence obtain $f = e^g$). Then $\widetilde X := e^{-g}X$ and $\widetilde Y := e^{-g}Y$ are linearly independent vector fields with vanishing Lie bracket, by reversing the above argument:

$$[\widetilde X, \widetilde Y] = e^{-2g}\big([X, Y] - (Xg)\,Y + (Yg)\,X\big) = e^{-2g}\big(aX + bY - bY - aX\big) = 0.$$

Thus by Lemma 3, there exist (local) coordinates $(x, y)$ such that

$$\widetilde X = \frac{\partial}{\partial x}, \quad \widetilde Y = \frac{\partial}{\partial y},$$

i.e. $X = f\,\frac{\partial}{\partial x}$ and $Y = f\,\frac{\partial}{\partial y}$ with $f = e^g$. We have completed the proof. $\Box$
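The “only if” direction can be checked symbolically (a sympy sketch with notation of my own choosing): take $X = f\,\partial_x$, $Y = f\,\partial_y$ for an arbitrary nowhere-zero $f$, read off $a, b$ from $[X, Y] = aX + bY$, and verify that $b\,\omega^1 - a\,\omega^2$ is closed, where $\omega^1 = dx/f$, $\omega^2 = dy/f$ is the dual coframe:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)  # an arbitrary (assumed nowhere-zero) function

# X = f d/dx, Y = f d/dy, as component tuples
X = (f, sp.Integer(0))
Y = (sp.Integer(0), f)

def act(V, F):
    """Directional derivative V(F) for a planar field V = (V1, V2)."""
    return V[0]*sp.diff(F, x) + V[1]*sp.diff(F, y)

B = tuple(sp.simplify(act(X, Y[i]) - act(Y, X[i])) for i in range(2))  # [X, Y]
a = sp.simplify(B[0]/f)  # [X,Y] = aX + bY gives a = -f_y ...
b = sp.simplify(B[1]/f)  # ... and b = f_x

# Dual coframe: w1 = dx/f, w2 = dy/f, so eta = b w1 - a w2 = P dx + Q dy.
P, Q = b/f, -a/f
d_eta = sp.simplify(sp.diff(Q, x) - sp.diff(P, y))
print(d_eta)  # 0: eta = d(log f) is exact, hence closed
```

Here the cancellation happens for the reason the proof gives: $P\,dx + Q\,dy = \frac{f_x}{f}dx + \frac{f_y}{f}dy = d(\ln f)$.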
- In the two-dimensional case, this result is also equivalent to the condition that the Lie derivative

$$\mathcal{L}_{[X,Y]}\,\Omega = 0, \quad \text{where } \Omega := \omega^1\wedge\omega^2,$$

by applying Cartan’s formula $\mathcal{L}_Z = d\circ\iota_Z + \iota_Z\circ d$, noting that $\iota_{[X,Y]}\,d\Omega = 0$ as $\Omega$ is a top form. (Indeed $\iota_{[X,Y]}\Omega = a\,\omega^2 - b\,\omega^1 = -\eta$, so $\mathcal{L}_{[X,Y]}\Omega = -d\eta$.) This is not true in higher dimensions.
- It is not clear to me what the geometric meaning of $\eta$ being closed is. Perhaps there’s a better way of expressing this 1-form $\eta$.
The generalization to higher dimensions is not difficult, by treating the pairs of vector fields $X_i, X_j$ separately. E.g. in the 3-dimensional case the condition consists of three relations corresponding to the three pairs $(X_1, X_2)$, $(X_2, X_3)$, $(X_3, X_1)$.
Example 1 To get some feeling of what’s really going on, I have done an example in the simplest non-trivial case, namely, $X_1, X_2$ are orthonormal vector fields on $\mathbb{R}^2$. Suppose

$$X_1 = (\cos\theta)\,\frac{\partial}{\partial x} + (\sin\theta)\,\frac{\partial}{\partial y} \quad \text{and} \quad X_2 = -(\sin\theta)\,\frac{\partial}{\partial x} + (\cos\theta)\,\frac{\partial}{\partial y},$$

where $\theta$ is a local smooth function on the plane (this is NOT the angle in polar coordinates!). Then $\Omega = \omega^1\wedge\omega^2 = dx\wedge dy$ is the standard volume form on $\mathbb{R}^2$, where $(x, y)$ are the standard coordinates.

For a vector field $Z = Z^1\frac{\partial}{\partial x} + Z^2\frac{\partial}{\partial y}$, $\iota_Z\,\Omega = Z^1\,dy - Z^2\,dx$ and so

$$\mathcal{L}_Z\,\Omega = d(\iota_Z\,\Omega) = \left(\frac{\partial Z^1}{\partial x} + \frac{\partial Z^2}{\partial y}\right)dx\wedge dy = (\operatorname{div} Z)\,\Omega.$$

(This is not a coincidence; actually the intrinsic definition of the divergence is exactly $\mathcal{L}_Z\,\Omega = (\operatorname{div} Z)\,\Omega$.)

So the condition in Theorem 1 is exactly $\operatorname{div}\,[X_1, X_2] = 0$. We will calculate that $[X_1, X_2]$ is actually the (negative) gradient of $\theta$:

$$[X_1, X_2] = -\nabla\theta, \ \ \ \ \ (1)$$

so we conclude that the condition in Theorem 1 holds if and only if $\Delta\theta = 0$, i.e. $\theta$ is harmonic.
Let us compute $[X_1, X_2]$. There are some little tricks to aid our computations. Note that $X_1, X_2$ being orthonormal implies

$$[X_1, X_2] = \langle [X_1, X_2], X_1\rangle\,X_1 + \langle [X_1, X_2], X_2\rangle\,X_2 \quad \text{and} \quad \nabla\theta = (X_1\theta)\,X_1 + (X_2\theta)\,X_2,$$

so we just have to find $\langle [X_1, X_2], X_i\rangle$ for $i = 1, 2$. Writing $D$ for the flat connection on $\mathbb{R}^2$, the explicit form of the frame gives $D_Z X_1 = (Z\theta)\,X_2$ and $D_Z X_2 = -(Z\theta)\,X_1$ for any vector field $Z$. We compute

$$\langle [X_1, X_2], X_1\rangle = \langle D_{X_1}X_2 - D_{X_2}X_1,\; X_1\rangle = \langle -(X_1\theta)\,X_1 - (X_2\theta)\,X_2,\; X_1\rangle = -X_1\theta.$$

Similarly we have

$$\langle [X_1, X_2], X_2\rangle = -X_2\theta.$$
We have deduced (1). I think this result is quite interesting in itself; I would expect there to be a more intrinsic viewpoint explaining it.
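The computation in Example 1 can be confirmed symbolically. The sympy sketch below (notation mine) takes an arbitrary smooth angle function $\theta(x, y)$, builds the orthonormal frame, and checks both that $[X_1, X_2] = -\nabla\theta$ and that $\operatorname{div}[X_1, X_2] = -\Delta\theta$, so the condition of Theorem 1 is harmonicity of $\theta$:

```python
import sympy as sp

x, y = sp.symbols('x y')
th = sp.Function('theta')(x, y)  # the rotation angle of the orthonormal frame

X1 = (sp.cos(th), sp.sin(th))
X2 = (-sp.sin(th), sp.cos(th))

def act(V, F):
    """Directional derivative V(F) for a planar field V = (V1, V2)."""
    return V[0]*sp.diff(F, x) + V[1]*sp.diff(F, y)

# [X1, X2], computed component-wise with the flat connection
B = tuple(sp.simplify(act(X1, X2[i]) - act(X2, X1[i])) for i in range(2))
grad = (sp.diff(th, x), sp.diff(th, y))

# (1): [X1, X2] = -grad(theta)
print(sp.simplify(B[0] + grad[0]), sp.simplify(B[1] + grad[1]))  # 0 0

# div [X1, X2] = -(theta_xx + theta_yy), so the condition is harmonicity
div_B = sp.simplify(sp.diff(B[0], x) + sp.diff(B[1], y))
print(sp.simplify(div_B + sp.diff(th, x, 2) + sp.diff(th, y, 2)))  # 0
```

The trigonometric cancellation ($\cos^2\theta + \sin^2\theta = 1$) is exactly what the orthonormality trick in the computation above exploits.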