A generalization of the Hsiung-Minkowski formulas for hypersurfaces in Euclidean space

In this short note, we will prove the following generalization of the Hsiung-Minkowski formulas for hypersurfaces in Euclidean space.

Theorem 1 Let {\Sigma\subset \mathbb R^n} be a closed hypersurface and write {m=n-1}. Then for any {p\ge0} and {0 \le k\le n-2}, we have

\displaystyle \int_\Sigma u^p \sigma_k=\int_\Sigma u^{p+1}\sigma_{k+1}- \frac{p}{m{{m-1}\choose k}}\int_\Sigma u^{p-1}AT_k(X^T, X^T).

Here {\sigma_k} is the normalized {k}-th mean curvature, {u=X\cdot \nu}, {T_k} is the {k}-th Newton transformation of the shape operator {A}, and {X^T} is the tangential projection of the position vector {X} onto {T\Sigma}; see the preliminaries below for the precise definitions. (If {p=0}, we use the convention that {u^p\equiv 1} and the second term on the RHS of this equation is understood to be zero.)

The classical Hsiung-Minkowski formulas [Hsiung] can be recovered by putting {p=0} in the above equation. This also generalizes the results in one of my previous posts.
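
Indeed, for {p=0} the last term vanishes and {u^p\equiv 1}, so the formula reduces to the classical identities

\displaystyle \int_\Sigma \sigma_k=\int_\Sigma u\,\sigma_{k+1}, \qquad 0\le k\le n-2,

which for {k=0} is the familiar Minkowski formula {\mathrm{Area}(\Sigma)=\int_\Sigma u\sigma_1}.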

0.1. Preliminaries

Let us fix some notations. Let {\Sigma\subset \mathbb R^n} be a hypersurface and {m=n-1}. Let {\lbrace e_i\rbrace_{i=1}^{m}} be a local orthonormal frame on {\Sigma} and let {\nu=e_n} be the unit outward normal of {\Sigma}. Let {\overline \nabla } and {\nabla } be the connections on {\mathbb R^n} and {\Sigma} respectively. We define the shape operator by {A=\overline \nabla \nu: T\Sigma \rightarrow T\Sigma}, where {A_i^j} is defined by {A(e_i)=\sum_{j=1}^m A_i^j e_j}. By abuse of notation, we will also denote the second fundamental form {\langle A(u), v\rangle} by {A(u,v)}.

We define the {k}-th mean curvature {H_k} and the normalized {k}-th mean curvature {\sigma_k} of {\Sigma} by

\displaystyle  H _k = \frac 1{k!}\sum_{\substack{i_1,\cdots, i_k\\ j_1, \cdots, j_k}} \epsilon_{j_1\cdots j_k}^{i_1\cdots i_k}A_{i_1}^{j_1}\cdots A_{i_k}^{j_k}= \frac 1{k!}\sum_{i_1,\cdots, i_k} A_{i_1}^{[i_1}\cdots A_{i_k}^{i_k]} \textrm{\; and\;} \sigma_k= \frac{H_k}{{{n-1}\choose k}}

respectively. We use the convention that {H_0=\sigma_0=1}.
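
For example, when {n=3} (so {m=2}) and {\lambda_1, \lambda_2} are the principal curvatures (the eigenvalues of {A}), the definitions give

\displaystyle H_1=\lambda_1+\lambda_2,\quad H_2=\frac12\left(A_i^iA_j^j-A_i^jA_j^i\right)=\lambda_1\lambda_2,\quad \sigma_1=\frac{\lambda_1+\lambda_2}{2},\quad \sigma_2=\lambda_1\lambda_2,

so {\sigma_1} is the usual (normalized) mean curvature and {\sigma_2} is the Gauss curvature.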

Following [Reilly], we define the {k}-th Newton transformation {T_k: T\Sigma\rightarrow T\Sigma} of {A} as

\displaystyle  {(T_k)}_j^{\,i}= \frac 1 {k!} \sum_{\substack{i_1,\cdots, i_k\\ j_1, \cdots, j_k}} \epsilon^{i i_1 \ldots i_k}_{j j_1 \ldots j_k} A_{i_1}^{j_1}\cdots A_{i_k}^{j_k}.

Alternatively, {T_k} can be defined recursively by (see e.g. [Reilly])

\displaystyle  T_0=\mathrm{Id}, \quad T_{k}=H_k I - AT_{k-1}\textrm{ for }k\ge 1. \ \ \ \ \ (1)

We use the antisymmetrization convention that, for example, {A_i^{[i}A_j^{j]}=A_i^i A_j^j-A_i^jA_j^i}; the square bracket denotes the alternating sum over the enclosed indices, without a {1/k!} factor. Unless otherwise stated, repeated indices will be summed over {1, \cdots, m}.
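
For example, the recursion (1) gives {T_1=H_1 I-A}; in a frame of principal directions (with principal curvatures {\lambda_1,\dots,\lambda_m}), {T_1} is diagonal with

\displaystyle (T_1)^i_i=H_1-\lambda_i=\sum_{j\neq i}\lambda_j \quad (\textrm{no summation over } i), \qquad \mathrm{tr}\, T_1=(m-1)H_1,

and more generally {\mathrm{tr}\, T_k=(m-k)H_k}.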

We will need the following two lemmas.

Lemma 2 For all {1\le k\le m}, we have the following identities:

  1. {A_{i_1}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{i}^{l]}=(k-1)! (T_{k-1})^l_q A^q_i}.
  2. {\mathrm{div}(T_k)=0}, i.e. {(T_{k})^i_{j,i}=0}.
  3. {(T_{k-1})^l_q A^q_{i,l}=(H_k)_i}.

 

Proof: Parts (2) and (3) require the Codazzi equation {A^i_{j,k}=A^i_{k,j}}, which holds for any hypersurface in a space form. Part (2) is well-known and can be found e.g. in [Reilly]. To prove (1), we may assume that {\lbrace e_i\rbrace_{i=1}^m} are the principal directions at the point under consideration, with principal curvatures {\lbrace \lambda_i\rbrace_{i=1}^m}. Then

\displaystyle  \begin{array}{rcl}  (k-1)! (T_{k-1})^l_q A^q_{i} &=&\sum_{\substack{q, i_1,\cdots, i_{k-1}\\j_1, \cdots, j_{k-1}}} \epsilon^{li_1 \cdots i_{k-1}}_{qj_1\cdots j_{k-1}}A_{i_1}^{j_1}\cdots A_{i_{k-1}}^{j_{k-1}}A^q_i\\ &=&\sum_{\substack{q, i_1,\cdots, i_{k-1}\\j_1, \cdots, j_{k-1}}} \epsilon^{li_1 \cdots i_{k-1}}_{qj_1\cdots j_{k-1}}\lambda_{i_1}\cdots \lambda_{i_{k-1}}\lambda_{i}\delta_{i_1}^{j_1}\cdots \delta_{i_{k-1}}^{j_{k-1}}\delta^q_i\\ &=&\sum_{\substack{ i_1,\cdots, i_{k-1}\\j_1, \cdots, j_{k-1}}} \epsilon^{li_1 \cdots i_{k-1}}_{i j_1\cdots j_{k-1}}\lambda_{i_1}\cdots \lambda_{i_{k-1}}\lambda_{i}\delta_{i_1}^{j_1}\cdots \delta_{i_{k-1}}^{j_{k-1}}\\ &=&\sum_{i_1,\cdots, i_{k-1}} \lambda_{i_1}\cdots \lambda_{i_{k-1}}\lambda_{i}\,\delta_{i_1}^{[i_1}\cdots \delta_{i_{k-1}}^{i_{k-1}}\delta^{l]}_i\\ &=&A_{i_1}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{i}^{l]}. \end{array}

We now prove (3). Note that by parts (1) and (2) of the lemma, we have

\displaystyle  \begin{array}{rcl}  (T_{k-1})^l_q A^q_{i,l}&=&((T_{k-1})^l_q A^q_{i})_l =\frac{1}{(k-1)!}(A_{i_1}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{i}^{l]})_l\\ &=&\frac{1}{(k-1)!}\left((k-1)A_{i_1,l}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{i}^{l]}+A_{i_1}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{i,l}^{l]}\right)\\ &=&\frac{1}{(k-1)!}A_{i_1}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{i,l}^{l]}\\ &=&\frac{1}{(k-1)!}A_{i_1}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{l,i}^{l]}\\ &=&\frac{1}{k!}(A_{i_1}^{[i_1}\cdots A_{i_{k-1}}^{i_{k-1}}A_{l}^{l]})_i =(H_k)_i. \end{array}

The third line follows from the Codazzi equation: {A_{i_1,l}^{j}} is symmetric in the lower indices {i_1} and {l}, which are antisymmetrized in the bracket, so the first term vanishes. The fourth line uses the Codazzi equation again to swap the lower indices {i} and {l}. \Box
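
As a quick sanity check of part (1), take {k=2}: by the recursion, {T_1=H_1 I-A}, so {(T_1)^l_q A^q_i=H_1A^l_i-(A^2)^l_i}, which agrees with

\displaystyle A_{i_1}^{[i_1}A_{i}^{l]}=A_{i_1}^{i_1}A_i^l-A_{i_1}^{l}A_i^{i_1}=H_1A_i^l-(A^2)_i^l.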

Lemma 3 For {k\ge 1}, we have

\displaystyle  (k+1)H_{k+1}= H_k H_1-(T_{k-1})^l_q A^q_{i}A^{i}_l .

 

Proof: See e.g. [LWX] Lemma 2. \Box
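
For instance, when {k=1} (so that {T_0=\mathrm{Id}}), Lemma 3 reduces to the familiar identity

\displaystyle 2H_2=H_1^2-|A|^2=\Big(\sum_i\lambda_i\Big)^2-\sum_i\lambda_i^2=2\sum_{i<j}\lambda_i\lambda_j.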

0.2. Main result

We can now state and prove our main result.

Theorem 4 Let {\Sigma\subset \mathbb R^n} be a closed hypersurface. Then for any {p\ge0} and {0\le k\le n-2}, we have

\displaystyle \int_\Sigma u^p \sigma_k=\int_\Sigma u^{p+1}\sigma_{k+1}- \frac{p}{m{{m-1}\choose k}}\int_\Sigma u^{p-1}AT_k(X^T, X^T).

Here {X^T} is the tangential projection of the position vector {X} onto {T\Sigma}.

Proof: We first assume {k\ge 1}. In the following computation, we will omit {\Sigma} in the integral symbol. Using Lemma 2 and integration by parts, we compute

\displaystyle  \begin{array}{rcl}  -k \int u ^p H_k &=&-\frac{1}{(k-1)!}\int u^p A_{[i_1}^{i_1}\cdots A_{i_k]}^{i_k}\\ &=& \frac{1}{(k-1)!}\int u^p A_{[i_1}^{i_1}\cdots A_{i_{k-1}}^{i_{k-1}} X_{i_k]}^{i_k}\cdot \nu\\ &=& -\frac{1}{(k-1)!}\int u^p A_{[i_1}^{i_1}\cdots A_{i_{k-1}}^{i_{k-1}} \nu_{i_k]}\cdot X^{i_k}\\ &=& -\frac{1}{(k-1)!}\int u^p A_{[i_1}^{i_1}\cdots A_{i_{k-1}}^{i_{k-1}} A_{i_k]}^lX_l\cdot X^{i_k}\\ &=& -\int u^p (T_{k-1})^l_q A^q_{i}X_l\cdot X^{i}\\ &=& \int [(u^p )_l(T_{k-1})^l_q A^q_{i}X\cdot X^{i}+ u^p (T_{k-1})^l_q A^q_{i,l}X\cdot X^{i} + u^p (T_{k-1})^l_q A^q_{i}X\cdot X^{i}_l]\\ &=& \int [(u^p )_l(T_{k-1})^l_q A^q_{i}X\cdot X^{i}+ u^p (H_k)_{l}X\cdot X^{l} + u^p (T_{k-1})^l_q A^q_{i} X\cdot X^{i}_l]\\ &=& \int [(u^p )_l(T_{k-1})^l_q A^q_{i} X\cdot X^{i}-(u^p)_l H_k X\cdot X^{l}-u^pH_k X_l\cdot X^{l}\\ &&-u^p H_k X\cdot X^{l}_l-u^p (T_{k-1})^l_q A^q_{i}A^{i}_l X\cdot \nu]\\ &=& \int [(u^p )_l(T_{k-1})^l_q A^q_{i} X\cdot X^{i}-(u^p)_l H_k X\cdot X^{l}-mu^pH_k \\ &&+u^{p+1} H_k H_1-u^{p+1} (T_{k-1})^l_q A^q_{i}A^{i}_l ]. \end{array}

So we have

\displaystyle  \begin{array}{rcl} (m-k)\int_\Sigma u^pH_k &=& \int_\Sigma [(u^p )_l(T_{k-1})^l_q A^q_{i} X\cdot X^{i}-(u^p)_l H_k X\cdot X^{l}\\ &&+u^{p+1} H_k H_1-u^{p+1} (T_{k-1})^l_q A^q_{i}A^{i}_l ]\\ &=& \int_\Sigma \left[(u^p )_l(T_{k-1})^l_q A^q_{i} X\cdot X^{i}-(u^p)_l H_k X\cdot X^{l}\right]\\ & &+(k+1)\int_\Sigma u^{p+1}H_{k+1} \end{array} \ \ \ \ \ (2)

where we have used Lemma 3. Since {X_l\cdot \nu=0} and {\nu_l=A_l^j X_j}, we have {u_l=(X\cdot \nu)_l=A_l^j(X\cdot X_j)}, and therefore

\displaystyle  \begin{array}{rcl}  &&(u^p )_l((T_{k-1})^l_q A^q_{i} X\cdot X^{i}-H_k X\cdot X^{l})\\ &=&pu^{p-1}A_l^j A_i^q(T_{k-1})_{q}^{l}(X\cdot X_j)(X\cdot X^{i})-pu^{p-1}H_kA_l^j (X\cdot X^{l})(X\cdot X_j)\\ &=& -pu ^{p-1} A(H_k I - AT_{k-1})(X^T, X^T)\\ &=& -pu ^{p-1} AT_{k}(X^T, X^T). \end{array}

Here we have used the fact that {A} commutes with {T_{k-1}} (being a polynomial in {A}) and the recursion (1) in the last two lines. Substituting this into (2), we obtain

\displaystyle (m-k)\int_\Sigma u^p H_k = (k+1)\int_\Sigma u^{p+1}H_{k+1}- p \int_\Sigma u^{p-1}AT_k (X^T, X^T).

Since {H_k={m\choose k}\sigma_k} and {(m-k){m\choose k}=(k+1){m\choose {k+1}}=m{{m-1}\choose k}}, this is equivalent to

\displaystyle \int_\Sigma u^p \sigma_k=\int_\Sigma u^{p+1}\sigma_{k+1}- \frac{p}{m{{m-1}\choose k}}\int_\Sigma u^{p-1}AT_k(X^T, X^T).

We now consider the case where {k=0}. Define the function {f=\frac{r^2}{2}=\frac{|X|^2}{2}} on {\mathbb R^n}. Then it is easy to check that {\overline \nabla ^2 f=g_0}, where {g_0} is the Euclidean metric. Let {\lbrace e_i\rbrace_{i=1}^m} be a local orthonormal frame on {\Sigma}, let {e_n=\nu}, and let {z} be the restriction of {f} to {\Sigma}. Then we have {\nabla z=X^T} and {u=\frac{\partial f}{\partial \nu}=X\cdot \nu}. From this we see that

\displaystyle \delta_{ij}= \overline \nabla ^2_{ij} f= \nabla ^2_{ij}z+ u A_{ij}\textrm{\quad and \quad}0=\delta_{in}=\overline \nabla ^2 _{in}f= u_i - A_{i}^jz_j.
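
These follow from the Gauss formula {\overline\nabla_{e_i}e_j=\nabla_{e_i}e_j-A_{ij}\nu} and the Weingarten equation {\overline\nabla_{e_i}\nu=A_i^je_j}: since {\overline\nabla f=X},

\displaystyle \overline\nabla^2_{ij}f=e_i\langle X, e_j\rangle-\langle X, \overline\nabla_{e_i}e_j\rangle=\nabla^2_{ij}z+uA_{ij} \textrm{\quad and \quad} \overline\nabla^2_{in}f=e_i\langle X, \nu\rangle-\langle X, \overline\nabla_{e_i}\nu\rangle=u_i-A_i^jz_j.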

So we have

\displaystyle  \begin{array}{rcl}  \int_\Sigma pu^{p-1}A(X^T, X^T) =\int_\Sigma pu^{p-1}A_{j}^iz_iz_j &=&\int_\Sigma pu^{p-1}u_j z_j\\ &=&-\int_\Sigma u^pz_{jj}\\ &=&-\int_\Sigma u^p(\delta_{jj}-u A_{jj})\\ &=&\int_\Sigma (mu^{p+1} \sigma_1-mu^p ). \end{array}

Therefore

\displaystyle \int_\Sigma u^p= \int_\Sigma u^{p+1}\sigma_1-\frac p m \int_\Sigma u^{p-1}A(X^T, X^T).

This completes the proof. \Box

Corollary 5 Suppose {\Sigma} is convex (i.e. {A\ge 0}) and that the origin lies in the region bounded by {\Sigma}, so that {u\ge 0}. Then for {0\le k \le n-2}, we have

\displaystyle \int_\Sigma \sigma_k = \int_\Sigma \sigma_{k+1}u \le \int_\Sigma \sigma_{k+2}u^2 \le \int_\Sigma \sigma_{k+3}u^3 \le \cdots

In particular,

\displaystyle  n\mathrm{Vol}(\Omega)\le \int_\Sigma \sigma_1 u^2 \le \int_\Sigma \sigma_2 u^3\le \cdots

and

\displaystyle  \mathrm{Area}(\Sigma)= \int_\Sigma \sigma_1 u \le \int_\Sigma \sigma_2 u^2\le \cdots

Here {\Omega} is the region bounded by {\Sigma}; note that {n\,\mathrm{Vol}(\Omega)=\int_\Sigma u} by the divergence theorem, since {\mathrm{div}(X)=n} on {\mathbb R^n}.
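
For example, if {\Sigma} is the round sphere of radius {R} centered at the origin, then {u\equiv R}, {X^T\equiv 0} and {\sigma_j=R^{-j}}, so the correction term in Theorem 4 vanishes and

\displaystyle \int_\Sigma \sigma_{k+j}\, u^{j}=R^{-(k+j)}\cdot R^{j}\,\mathrm{Area}(\Sigma)=R^{-k}\,\mathrm{Area}(\Sigma)\textrm{\quad for all }j\ge 0\textrm{ with }k+j\le m,

so all of the inequalities above become equalities in this case.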
