## On an inequality of De Lellis and Topping

This is a sequel to my previous post.

It is a classical result of Schur that if the Ricci curvature of a Riemannian manifold ${(M^n,g)}$ satisfies ${Ric=\frac Rn g}$, then the scalar curvature ${R}$ must be constant for ${n\ge 3}$. This is a simple consequence of the twice contracted Bianchi identity ${\mathrm{div}(Ric-\frac R2g)=0}$. It is interesting to ask how far ${R}$ can be from a constant if ${Ric-\frac Rng}$ is close to zero. In this direction De Lellis and Topping proved in [Lellis-Topping] that

 Theorem 1 If ${(M^n,g)}$ is a closed Riemannian manifold ${(n\ge 3)}$ with nonnegative Ricci curvature, then $\displaystyle \int_M (R-\overline R)^2 \le \frac{4n(n-1)}{(n-2)^2}\int_M |Ric - \frac Rng|^2.$ Here ${\overline R}$ is the average of ${R}$ on ${M}$.

In this note, we will show an analogous result for the higher ${r}$-th mean curvatures of closed submanifolds in space forms (Theorem 4). In particular, we will show that our result implies Theorem 1 and some other results in [Cheng-Zhou] and [Cheng1]. We will also establish another version of this type of result which involves the Riemann curvature tensor (Theorem 11), giving a quantitative version of another result of Schur: if ${(M^n,g)}$ ${(n\ge 3)}$ has sectional curvature which depends only on the base point, then its curvature is constant.

1. Submanifolds in space forms

Let ${\Sigma^n}$ be an immersed submanifold in a space form ${(N^m,h)}$, ${n<m}$. The second fundamental form of ${\Sigma }$ in ${N }$ is defined by ${A(X,Y)=-({\overline \nabla _XY})^\perp}$ and is normal-valued. Here ${\overline \nabla }$ is the connection on ${N}$. We denote ${A(e_i, e_j)}$ by ${A_{ij}}$, where ${\lbrace e_i\rbrace_{i=1}^n}$ is a local orthonormal frame on ${\Sigma}$.

We define the ${r}$-th mean curvature as follows. If ${r}$ is even,

$\displaystyle H _r = \frac 1{r!}\sum_{\substack{i_1,\cdots, i_r\\ j_1, \cdots, j_r}} \delta _{j_1\cdots j_r}^{i_1\cdots i_r}h(A_{i_1j_1},A_{i_2j_2})\cdots h(A_{i_{r-1}j_{r-1}},A_{i_rj_r}).$

If ${r}$ is odd, the ${r}$-th mean curvature is a normal vector field defined by

$\displaystyle H _r =\frac 1{r!}\sum_{\substack{i_1,\cdots, i_r\\ j_1, \cdots, j_r}} \delta _{j_1\cdots j_r}^{i_1\cdots i_r}h(A_{i_1j_1},A_{i_2j_2})\cdots h(A_{i_{r-2}j_{r-2}},A_{i_{r-1}j_{r-1}})A_{i_rj_r}.$

Here ${\delta_{i_1 \cdots i_r}^{j_1\cdots j_r}}$ is the generalized Kronecker delta symbol. In the codimension one case, i.e. ${\Sigma}$ is a hypersurface, by taking the inner product with a unit normal if necessary, we can assume ${H_r}$ is scalar valued. In this case the value of ${H_r}$ is given by

$\displaystyle H_r =\sum_{i_1<\cdots< i_r}k_{i_1}\cdots k_{i_r} \ \ \ \ \ (1)$

where ${k_i}$ are the principal curvatures. This definition of ${H_r}$ will be used whenever ${\Sigma }$ is a hypersurface.
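For readers who like to verify such combinatorial identities by computer, here is a small numerical sanity check (a sketch in Python/NumPy; the curvature values are made up) that the generalized Kronecker delta definition specializes, for a hypersurface, to the elementary symmetric polynomial in (1):

```python
import itertools
import math
import numpy as np

def gen_delta(upper, lower):
    """Generalized Kronecker delta, computed as det of the matrix of ordinary deltas."""
    M = np.array([[1.0 if u == l else 0.0 for l in lower] for u in upper])
    return round(np.linalg.det(M))

n, r = 4, 3
k = [1.0, 2.0, -1.0, 3.0]   # hypothetical principal curvatures

# Delta definition, specialized to a hypersurface where A_{ij} = k_i delta_{ij} nu:
# each h(., .) or A factor contributes k_{i_l} delta_{i_l j_l}, forcing j_l = i_l.
H_r_delta = sum(gen_delta(I, I) * math.prod(k[a] for a in I)
                for I in itertools.product(range(n), repeat=r)) / math.factorial(r)

# Equation (1): the elementary symmetric polynomial of the principal curvatures
H_r_sym = sum(math.prod(k[a] for a in c) for c in itertools.combinations(range(n), r))

assert abs(H_r_delta - H_r_sym) < 1e-9
```

Here `gen_delta` vanishes automatically when an index repeats (two equal rows in the determinant), which is what reduces the sum to distinct index tuples.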

Following [Grosjean] and [Reilly], we define the (generalized) ${r}$-th Newton transformation ${T^r}$ of ${A}$ (as a ${(1,1)}$ tensor, possibly vector-valued) as follows.
If ${r}$ is even,

$\displaystyle {(T^r)}_j^{\,i}= \frac 1 {r!} \sum_{\substack{i_1,\cdots, i_r\\ j_1, \cdots, j_r}} \delta^{i i_1 \ldots i_r}_{j j_1 \ldots j_r} h(A_{i_1j_1},A_{i_2j_2})\cdots h(A_{i_{r-1}j_{r-1}},A_{i_rj_r}).$

If ${r}$ is odd,

$\displaystyle {(T^r)}_j^{\,i}= \frac 1 {r!} \sum_{\substack{i_1,\cdots, i_r\\ j_1, \cdots, j_r}} \delta^{i i_1 \ldots i_r}_{j j_1 \ldots j_r} h(A_{i_1j_1},A_{i_2j_2})\cdots h(A_{i_{r-2}j_{r-2}},A_{i_{r-1}j_{r-1}})A_{i_rj_r}.$

Again in the codimension one case, we can assume ${T^r}$ is an ordinary ${(1,1)}$ tensor and if ${\lbrace e_i\rbrace_{i=1}^n}$ are the eigenvectors of ${A}$, then

$\displaystyle T^r(e_i)=\Big(\sum_{\substack{i_1<\cdots<i_r\\ i_l\ne i \text{ for all }l}}k_{i_1}\cdots k_{i_r}\Big)\, e_i.$

This definition of ${T^r}$ will be used whenever ${\Sigma}$ is a hypersurface.
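The diagonal formula above and the trace identity of Lemma 2 below can both be checked numerically from the delta definition (a sketch in Python/NumPy; the curvatures are made up):

```python
import itertools
import math
import numpy as np

def gen_delta(upper, lower):
    """Generalized Kronecker delta, computed as det of the matrix of ordinary deltas."""
    M = np.array([[1.0 if u == l else 0.0 for l in lower] for u in upper])
    return round(np.linalg.det(M))

n, r = 4, 2
k = [1.0, 2.0, -1.0, 3.0]   # hypothetical principal curvatures

def T_entry(i, j):
    # (T^r)_j^i from the delta definition; on a hypersurface A_{ab} = k_a delta_{ab} nu,
    # so every h(., .) or A factor contributes k_{i_l} delta_{i_l j_l}, forcing j_l = i_l
    s = 0.0
    for I in itertools.product(range(n), repeat=r):
        s += gen_delta((i,) + I, (j,) + I) * math.prod(k[a] for a in I)
    return s / math.factorial(r)

Tmat = np.array([[T_entry(i, j) for j in range(n)] for i in range(n)])
H_r = sum(math.prod(k[a] for a in c) for c in itertools.combinations(range(n), r))

# T^r is diagonal in the principal frame; (T^r)_i^i = sigma_r of the k's omitting k_i
for i in range(n):
    sigma_excl = sum(math.prod(k[a] for a in c)
                     for c in itertools.combinations([b for b in range(n) if b != i], r))
    assert abs(Tmat[i, i] - sigma_excl) < 1e-9

# the trace identity tr(T^r) = (n - r) H_r of Lemma 2
assert abs(np.trace(Tmat) - (n - r) * H_r) < 1e-9
```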

 Lemma 2 If ${\Sigma^n}$ is an immersed submanifold in a space form ${N^m}$, and ${\mathrm{div}}$ and ${\mathrm{tr}}$ denote the divergence and the trace on ${\Sigma}$ respectively, then $\displaystyle \mathrm{div} (T^r)=0 \textrm{\quad and \quad}\mathrm{tr}(T^{r})= (n-r)H_r.$

Proof: The first assertion follows from the proof of [Grosjean] Lemma 2.1. Since only the case where ${r}$ is even is treated in that paper, let us handle the odd case; for the sake of demonstration take ${r=3}$. Using a local orthonormal frame, the first assertion follows from

$\displaystyle \begin{array}{rcl} &&r! \nabla _i{ (T^{r})}_{i}^{\,j}\\ &=& \delta_{i \, i_1 i_2 i_3}^{j \,j_1 j_2 j_3} (h(\nabla_iA_{i_1j_1 }, A_{i_2j_2 }) A_{i_3j_3 }+h(A_{i_1j_1 }, \nabla_iA_{i_2j_2 }) A_{i_3j_3 }+h(A_{i_1j_1 }, A_{i_2j_2 }) \nabla_i A_{i_3j_3 })\\ &=& \delta_{i \, i_1 i_2 i_3}^{j \,j_1 j_2 j_3} (h(\nabla_{i_1}A_{ij_1 }, A_{i_2j_2 }) A_{i_3j_3 }+h(A_{i_1j_1 }, \nabla_{i_2}A_{ij_2 }) A_{i_3j_3 }+h(A_{i_1j_1 }, A_{i_2j_2 }) \nabla_{i_3} A_{ij_3 })\\ &=& -\delta_{i \, i_1 i_2 i_3}^{j \,j_1 j_2 j_3} (h(\nabla_iA_{i_1j_1 }, A_{i_2j_2 }) A_{i_3j_3 }+h(A_{i_1j_1 }, \nabla_iA_{i_2j_2 }) A_{i_3j_3 }+h(A_{i_1j_1 }, A_{i_2j_2 }) \nabla_i A_{i_3j_3 })\\ &=&-r! \nabla _i (T^{r})_{i}^{\,j} \end{array}$

where the second equality uses the Codazzi equation ${\nabla _i A_{jk}=\nabla _j A_{ik}}$ (valid as ${N}$ has constant curvature), and the third follows by relabeling the dummy indices and using the antisymmetry of the generalized Kronecker delta. The second assertion is straightforward. $\Box$

By Lemma 2 we immediately have the following Schur-type theorem which is perhaps well-known to experts (we will write ${V s}$ instead of ${V\otimes s}$ for a vector valued function ${V}$ and a ${(1,1) }$ tensor ${s}$):

 Theorem 3 Let ${\Sigma^n}$ be a closed (compact without boundary) immersed submanifold in a space form ${N^m}$ and ${r\in\lbrace1, \cdots, n-1\rbrace}$. If the traceless part ${\stackrel{\circ}{T^{r}}}$ of ${T^{r}}$ vanishes, i.e. ${T^r= \frac {n-r}nH_r I}$, then ${H_r}$ is parallel. (In particular, it is constant when it is scalar valued.)

Proof:

By Lemma 2, as ${\stackrel \circ {T^r} = T^r- \frac{(n-r)}n H_r I}$, we have ${0=\mathrm{div}(\stackrel\circ {T^r})=-\frac {n-r}n\nabla H_r }$. i.e. ${H_r}$ is parallel. $\Box$

 Theorem 4 Let ${\Sigma^n}$ (${n\ge 2}$) be a closed immersed orientable submanifold in a space form ${N^m}$, ${m>n}$. Let ${r\in \lbrace 1,\cdots, n-1\rbrace}$. Assume that either ${r}$ is even, or ${r}$ is odd and ${N=\mathbb R^m}$, or ${\Sigma}$ is of codimension one, i.e. a hypersurface. Let ${\overline H_r=\frac 1{ \mathrm{Area}(\Sigma)}\int_\Sigma H_r}$ be the average of ${H_r}$ (which is a vector when ${r}$ is odd and is defined by (1) in the codimension one case) and ${\stackrel{\circ}{T^{r}}= T^{r}- \frac{(n-r)}n H_r I}$ be the traceless part of ${T^{r}}$. Let ${\lambda}$ be the first nonzero eigenvalue of the Laplacian on ${\Sigma}$ and suppose the Ricci curvature of ${\Sigma}$ is bounded from below by ${-(n-1)K}$, ${K\ge 0}$. Then we have $\displaystyle \int_\Sigma |H_r-\overline H_r|^2 \le \frac{ n(n-1)}{(n-r)^2} (1+\frac{nK}\lambda)\int_\Sigma |\stackrel \circ {T^{r}}|^2, \ \ \ \ \ (2)$ or equivalently, $\displaystyle \int_\Sigma |T^{r} - \frac{n-r}n \overline H_r I|^2 \le n (1+\frac{(n-1)K}\lambda) \int_\Sigma |T^{r} - \frac{n-r}n H_r I|^2.$

Proof: We follow the ideas in [Lellis-Topping] and [Cheng-Zhou]. We first do the case where ${r}$ is odd (and thus ${N=\mathbb R^m}$). We can assume ${H_r-\overline H_r}$ does not vanish identically, otherwise there is nothing to prove. Let ${F=(F^1,\cdots, F^m)}$ be the solution to

$\displaystyle \begin{cases} \Delta F = H_r- \overline H_r\\ \int_\Sigma F =0. \end{cases} \ \ \ \ \ (3)$

The solution exists because ${\int_\Sigma (H_r- \overline H_r)=0}$. As ${\stackrel \circ {T^r} = T^r- \frac{(n-r)}n H_r I}$ and ${\textrm{div}(T^r)=0}$ by Lemma 2, (as a vector-valued ${1}$-form) we have

$\displaystyle \mathrm{div}(\stackrel\circ {T^r})=-\frac {n-r}n\nabla H_r.$

Let us denote the dot product in ${\mathbb R^m}$ by ${\cdot}$ and the intrinsic inner product on ${\Sigma }$ by ${\langle \cdot, \cdot\rangle}$. Then

$\displaystyle \begin{array}{rcl} \int_\Sigma |\Delta F|^2 = \int_\Sigma (H_r- \overline H_r)\cdot\Delta F &=& -\int_\Sigma \langle \nabla H_r,\nabla F\rangle\\ &=&\frac n{n-r} \int_\Sigma \langle \mathrm{div}(\stackrel \circ {T^r}),\nabla F\rangle\\ &=& \frac n{n-r}\int_\Sigma \langle -\stackrel \circ {T^r} , \nabla ^2F\rangle\\ &= &\frac n{n-r}\int_\Sigma \langle -\stackrel \circ {T^r} , \nabla ^2F- \frac {\Delta F }nI\rangle\\ &\le&\frac n{n-r} \|\stackrel \circ {T^r} \|_{L^2}\|\nabla ^2 F- \frac{\Delta F}n I\|_{L^2}. \end{array} \ \ \ \ \ (4)$

We have

$\displaystyle \begin{array}{rcl} \|\nabla ^2 F -\frac {\Delta F}n I\|_{L^2}^2 &= &\int_\Sigma |\nabla ^2 F|^2 +\frac 1n \int_\Sigma |\Delta F|^2 -\frac 2n \int_\Sigma |\Delta F|^2\\ &= &\int_\Sigma |\nabla ^2 F|^2 -\frac 1n \int_\Sigma |\Delta F|^2. \end{array} \ \ \ \ \ (5)$

By the Bochner formula, we have

$\displaystyle \int_\Sigma |\nabla ^2F|^2 = \int_\Sigma |\Delta F|^2 -\int_\Sigma Ric(\nabla F, \nabla F) \le \int_\Sigma |\Delta F|^2 +(n-1)K\int_\Sigma |\nabla F|^2.$

Here by ${Ric(\nabla F,\nabla F)}$ we mean the quantity ${\displaystyle \sum_{\substack{1\le i,j\le n\\ 1\le l\le m}} R_{ij}\nabla _iF^l \nabla _j F^l}$. Consider

$\displaystyle \begin{array}{rcl} \int_\Sigma | \nabla F|^2 =-\int_\Sigma F \cdot\Delta F &\le (\int_\Sigma |F|^2)^\frac 12\left(\int_\Sigma |\Delta F|^2\right)^\frac12\\ &\le \left(\frac {\int_\Sigma | \nabla F|^2}{\lambda}\right)^\frac 12\left(\int_\Sigma |\Delta F|^2\right)^\frac12. \end{array}$

Thus

$\displaystyle \int_\Sigma |\nabla F|^2 \le \frac 1{\lambda} \int_\Sigma |\Delta F|^2. \ \ \ \ \ (6)$

Here we have used the variational characterization of the first nonzero eigenvalue: ${\lambda=\min \{ \frac {\int_\Sigma | \nabla \phi|^2}{\int_\Sigma \phi^2}: \int_\Sigma \phi=0, \phi\ne0\}}$. So (5) becomes

$\displaystyle \begin{array}{rcl} \|\nabla ^2 F -\frac {\Delta F}n I\|_{L^2}^2 &=\frac{n-1}n \int_\Sigma |\Delta F|^2 - \int_\Sigma Ric(\nabla F,\nabla F)\\ &\le (\frac{n-1}n) (1+\frac{nK}\lambda)\int_\Sigma |\Delta F|^2 . \end{array} \ \ \ \ \ (7)$

Substituting this into (4), we obtain (2):

$\displaystyle \int_\Sigma |H_r-\overline H_r|^2 \le \frac{ n(n-1)}{(n-r)^2} (1+\frac{nK}\lambda)\int_\Sigma |\stackrel \circ {T^{r}}|^2.$

As ${T^{r}- \frac{n-r}n \overline H_r I= \stackrel\circ {T^{r}}+\frac{n-r}n (H_r-\overline H_r)I}$, we have

$\displaystyle |T^{r}- \frac{n-r}n \overline H_r I |^2= |\stackrel \circ {T^{r}}|^2 +\frac {(n-r)^2}n|H_r-\overline H_r|^2.$

Therefore (2) can be rephrased as

$\displaystyle \int_\Sigma |T^{r} - \frac{n-r}n \overline H_r I|^2 \le n (1+\frac{(n-1)K}\lambda) \int_\Sigma |T^{r} - \frac{n-r}n H_r I|^2.$

For the remaining cases, where ${r}$ is even or ${\Sigma}$ is a hypersurface, ${H_r}$ is scalar valued; replacing ${F}$ in (3) by a scalar-valued function ${f}$ and running through the same proof gives the result. $\Box$

 Remark 1 When ${r=1}$ and ${\Sigma}$ is a hypersurface, as ${T^1= H_1 I-A}$, it is easy to see that ${\stackrel \circ {T^{1}}= -\stackrel \circ A}$ where ${\stackrel \circ A}$ is the traceless part of the second fundamental form. This generalizes [Perez] Theorem 3.1 (see also [Cheng-Zhou]). In the codimension one case, this recovers [Cheng1] Theorem 1.10.

 Remark 2 For an embedded hypersurface in Euclidean space, having nonnegative Ricci curvature is equivalent to ${A\ge 0}$ (i.e. being convex), see [Perez] p. 48. So when ${K=0}$, the curvature assumption in Theorem 4 can be replaced by convexity when ${\Sigma}$ is an embedded hypersurface.

2. Relations with Lovelock curvatures

In this section, we investigate the relation between Theorem 4 and an analogous result of Ge-Wang-Xia [Ge-Wang-Xia] when ${r}$ is even. Following [Ge-Wang-Xia], we define the Lovelock curvatures (the so-called ${2k}$-dimensional Euler densities in physics) of a Riemannian manifold ${(M^n,g)}$, ${k<\frac n2}$, by

$\displaystyle R^{(k)}= \frac 1{2^k} \delta_{ i_1 \ldots i_{2k}}^{j_1 \ldots j_{2k}}{R_{j_1 j_2}}^{i_1i_2}\cdots {R_{j_{2k-1} j_{2k}}}^{i_{2k-1} i_{2k}}. \ \ \ \ \ (8)$

It can easily be seen that ${R^{(1)}}$ is the scalar curvature. We can also define the generalized Einstein tensor ${E^{(k)}}$ by

$\displaystyle {E^{(k)}}_{i}^j=-\frac 1{2^{k+1}} \delta_{ ii_1 \ldots i_{2k}}^{jj_1 \ldots j_{2k}}{R_{j_1 j_2}}^{i_1i_2}\cdots {R_{j_{2k-1} j_{2k}}}^{i_{2k-1} i_{2k}}.$

We have the following analogue of Lemma 2 for ${E^{(k)}}$:

 Lemma 5 We have $\displaystyle \mathrm{tr}(E^{(k)})= -\frac{n-2k}2R^{(k)}\textrm{\quad and \quad }\nabla ^i {E^{(k)}}_i^j=0.$

Proof: The first assertion is a straightforward calculation. For the second assertion, for the sake of demonstration let ${k=1}$. Then this follows from

$\displaystyle \begin{array}{rcl} -2^{k+1}\nabla ^i {E^{(k)}}_i^j &=&\delta_{i i_1 i_2}^{j j_1j_2}\nabla^i{R_{j_1 j_2}}^{i_1i_2}\\ &=&-\delta_{i i_1 i_2}^{j j_1j_2}(\nabla^{i_1}{R_{j_1 j_2}}^{i_2i}+\nabla^{i_2}{R_{j_1 j_2}}^{ii_1})\\ &=&-\delta_{i_2 i i_1 }^{j j_1j_2}\nabla^i{R_{j_1 j_2}}^{i_1i_2}-\delta_{i_1i_2i }^{j j_1j_2}\nabla^i{R_{j_1 j_2}}^{i_1i_2}\\ &=&-2\delta_{i i_1 i_2}^{j j_1j_2}\nabla^i{R_{j_1 j_2}}^{i_1i_2}. \end{array}$

Here we have used the Bianchi identity in the second line. $\Box$
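Both assertions of Lemma 5 are pointwise algebraic identities, so for ${k=1}$ they can be checked numerically on a random algebraic curvature tensor (a sketch in Python/NumPy; ${S\odot S}$ below, for a symmetric ${S}$, is just a convenient random tensor with all the Riemann symmetries, and the frame is orthonormal so indices are raised freely):

```python
import itertools
import numpy as np

def gen_delta(upper, lower):
    """Generalized Kronecker delta, computed as det of the matrix of ordinary deltas."""
    M = np.array([[1.0 if u == l else 0.0 for l in lower] for u in upper])
    return round(np.linalg.det(M))

rng = np.random.default_rng(1)
n = 4
S = rng.standard_normal((n, n))
S = (S + S.T) / 2
# S "odot" S has all the symmetries of a curvature tensor
Rm = np.einsum('ik,jl->ijkl', S, S) - np.einsum('il,jk->ijkl', S, S)
Ric = np.einsum('ijil->jl', Rm)    # Ric_{jl} = R_{ijil}
scal = np.einsum('ijij->', Rm)     # scalar curvature

# R^(1) from (8) and E^(1) from its delta formula (coefficient -1/2^{k+1} = -1/4)
R1 = 0.5 * sum(gen_delta((j1, j2), (i1, i2)) * Rm[j1, j2, i1, i2]
               for j1, j2, i1, i2 in itertools.product(range(n), repeat=4))
E1 = np.array([[-0.25 * sum(gen_delta((j, j1, j2), (i, i1, i2)) * Rm[j1, j2, i1, i2]
                            for j1, j2, i1, i2 in itertools.product(range(n), repeat=4))
                for j in range(n)] for i in range(n)])

assert np.isclose(R1, scal)                          # R^(1) is the scalar curvature
assert np.allclose(E1, Ric - scal / 2 * np.eye(n))   # E^(1) is the Einstein tensor
assert np.isclose(np.trace(E1), -(n - 2) / 2 * R1)   # trace identity of Lemma 5, k = 1
```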

We see that ${E^{(k)}}$ is divergence free and indeed ${E^{(1)}}$ is the Einstein tensor. By Lemma 5, it is clear that we can prove the analogue of Theorem 4 in this setting. Indeed, by using the same method, Ge, Wang and Xia [Ge-Wang-Xia] proved that (they have assumed ${Ric\ge 0}$, but their result can be easily extended to the version below):

 Theorem 6 Let ${(M,g)}$ be a closed Riemannian manifold with ${Ric\ge -(n-1)K}$, ${K\ge0}$ and ${1\le k<\frac n2}$, then $\displaystyle \int_M |R^{(k)}-\overline R^{(k)}|^2 \le \frac{4n(n-1)}{(n-2k)^2}(1+\frac{nK}\lambda)\int_M |\stackrel{\circ}{E^{(k)}}|^2.$ Here ${\overline R^{(k)}}$ is the average of ${R^{(k)}}$.

We will show that Theorem 4 is equivalent to Theorem 6 in the case where ${r}$ is even.

 Proposition 7 For an immersed submanifold ${\Sigma^n\subset \mathbb R^{m}}$, for ${k=1, \cdots, \lfloor \frac n2\rfloor}$, we have $\displaystyle E^{(k)}=-\frac { (2k)!}2T^{2k}.$

Proof: We use a local orthonormal frame for computations. We also denote the dot product in ${\mathbb R^m}$ by ${\cdot}$. By the Gauss equation, we have

$\displaystyle {R_{ij}}^{kl}= A_{ik} \cdot A_{jl}-A_{il} \cdot A_{jk}.$

Consider

$\displaystyle \begin{array}{rcl} & &\delta_{i i_1 \ldots i_{2k}}^{jj_1 \ldots j_{2k}}{R_{j_1 j_2}}^{i_1i_2}\cdots {R_{j_{2k-1} j_{2k}}}^{i_{2k-1} i_{2k}}\\ & =&\delta_{i i_1 \ldots i_{2k}}^{jj_1 \ldots j_{2k}}(A_{i_1j_1} \cdot A_{i_2j_2}-A_{i_2j_1} \cdot A_{i_1j_2} )\cdots (A_{i_{2k-1}j_{2k-1}}\cdot A_{i_{2k}j_{2k}}-A_{i_{2k}j_{2k-1}}\cdot A_{i_{2k-1}j_{2k}})\\ & =&2^k\delta_{i i_1 \ldots i_{2k}}^{jj_1 \ldots j_{2k}}(A_{i_1j_1}\cdot A_{i_2j_2})\cdots (A_{i_{2k-1}j_{2k-1}}\cdot A_{i_{2k}j_{2k}})\\ & =&2^k(2k)!{(T^{2k})}_i^{\,j}, \end{array}$

where the third line follows since each factor is antisymmetric under ${i_{2l-1}\leftrightarrow i_{2l}}$, as is the Kronecker delta.

This implies the result. $\Box$

 Proposition 8 Theorem 4 is equivalent to Theorem 6 when ${r=2k}$ and ${N=\mathbb R^m}$.

Proof: By Proposition 7, Theorem 6 clearly implies our result in the case where ${r}$ is even and ${N}$ is Euclidean. On the other hand, since any manifold can be isometrically embedded into some ${\mathbb R^m}$ for ${m}$ large enough (by the Nash embedding theorem), our result also implies Theorem 6. $\Box$

3. Equality case of Theorem 4

It is easy to see from the proof that if the Ricci curvature assumption in Theorem 4 is strengthened to ${Ric>-(n-1)Kg}$, then equality forces ${H_r=\overline H_r}$ (as ${F}$ is then constant). On the other hand, the situation is more subtle if we drop this assumption, and so far we have obtained only partial results. The equality case for ${r=1, K=0}$ when ${\Sigma}$ is an embedded hypersurface has been considered in [Cheng-Zhou], where it is proved that ${\Sigma}$ is a distance sphere. It seems that their proof cannot be adapted to our setting, because it relies essentially on ${\Sigma}$ containing a point at which the Ricci curvature is positive, which fails for submanifolds of higher codimension in general.
The case for ${r=2}$.
Suppose ${\Sigma^n}$ is immersed in a space form ${N^m}$ of curvature ${c}$. Using the same computations as in the proof of Proposition 7, we get

$\displaystyle E^{(1)}= -T^2- {{n-1}\choose 2 }cI. \ \ \ \ \ (9)$

On the other hand, it is not hard to deduce that ${E^{(1)}= Ric-\frac R2 g}$ is the Einstein tensor (one way to see this without computing is to observe that ${E^{(1)}}$ is a divergence-free symmetric ${2}$-tensor which is linear in the curvature, so up to a constant multiple it must be the Einstein tensor). Thus

$\displaystyle -\stackrel{\circ }{T^2}=\stackrel\circ{Ric}=Ric -\frac Rn g.$

Note also that ${R^{(1)}= R}$ and, by the Gauss equation, ${R=2H_2+n(n-1)c}$, so ${R}$ equals ${H_2}$ up to constants depending only on ${c}$ and ${n}$. Thus our result in this case reduces to Theorem 1.2 of [Cheng2] or the ${k=1}$ case of Theorem 6. By the rigidity case of [Cheng2] Theorem 1.2, equality holds in Theorem 4 in the ${r=2}$ case if and only if ${\Sigma}$ is Einstein. In that case ${R}$ is constant and so is ${H_2}$, by the Gauss equation. In particular, if ${\Sigma}$ is an embedded hypersurface in ${\mathbb R^{n+1}}$, then by [Ros] it is a round sphere.
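As a sanity check of the traceless identity ${-\stackrel{\circ }{T^2}=\stackrel\circ{Ric}}$ in the simplest setting (a hypersurface in ${\mathbb R^{n+1}}$, so ${c=0}$), one can verify it numerically in the principal frame (a sketch in Python/NumPy; the curvatures are made up):

```python
import itertools
import numpy as np

n = 4
k = np.array([1.0, 2.0, -1.0, 3.0])   # hypothetical principal curvatures
H = k.sum()

# intrinsic curvature of a hypersurface in R^{n+1} via the Gauss equation:
# Ric_{ii} = k_i (H - k_i) in the principal frame, R = trace
Ric = np.diag(k * (H - k))
R = np.trace(Ric)

# T^2 is diagonal in the principal frame: (T^2)_ii = sigma_2 of the k's omitting k_i
T2 = np.diag([sum(k[a] * k[b]
                  for a, b in itertools.combinations([c for c in range(n) if c != i], 2))
              for i in range(n)])

I = np.eye(n)
Ric_0 = Ric - R / n * I              # traceless Ricci tensor
T2_0 = T2 - np.trace(T2) / n * I     # traceless part of T^2

assert np.allclose(-T2_0, Ric_0)     # the identity -traceless(T^2) = traceless(Ric)
```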

4. Another form of almost-Schur type theorem

In this section, we derive another form of Schur-type theorem which gives a quantitative version of the following classical result of Schur: if ${(M^n,g)}$ ${(n\ge 3)}$ has sectional curvature which depends only on the base point, then its curvature is constant.

We first set up some notations. Let ${\mathcal T^r(M)}$ denote the space of covariant ${r}$-tensors on ${M}$ (e.g. ${g\in \mathcal T^2(M)}$). We define a product

$\displaystyle \odot: \mathcal T^2(M)\times \mathcal T^2(M)\rightarrow \mathcal T^4(M)$

by

$\displaystyle (\alpha\odot \beta)(X,Y,Z,W)= \alpha(X,Z)\beta(Y,W)-\alpha(X,W)\beta(Y,Z).$

Note that our product is different from the Kulkarni-Nomizu product (ours has only half of their terms). We define ${B= g\odot g}$. It is easy to see that ${B}$ is the Riemann curvature tensor of a space form of curvature ${1}$ (we use the convention that ${R_{ijij}}$ is the sectional curvature). In local coordinates, it is given by

$\displaystyle B_{ijkl}=g_{ik}g_{jl}-g_{il}g_{jk}.$

 Lemma 9 Let ${h}$ be a symmetric ${2}$-tensor, then $\displaystyle 2\langle Ric , h\rangle=\langle Rm, g\odot h\rangle \textrm{\quad and \quad} 2\langle \frac Rng, h\rangle=\langle \frac R{n(n-1)}g\odot g, g\odot h\rangle.$ In particular, $\displaystyle \langle Ric-\frac Rn g, h\rangle= \frac 12\langle Rm-\frac R{n(n-1)}g\odot g, g\odot h\rangle.$

Proof: The first assertion follows from

$\displaystyle \langle Rm, g\odot h\rangle= R_{ijkl} (g_{ik}h_{jl}-g_{il}h_{jk})=2R_{ij}h_{ij}= 2\langle Ric, h\rangle.$

Define the contraction map ${C: \mathcal T^4(M)\rightarrow \mathcal T^2(M)}$ by ${C(A)_{ij}=g^{kl}A_{kilj}}$. Then ${C(B)= (n-1)g}$. By similar calculations, ${ \langle B, g\odot h\rangle= 2(n-1)g_{ij}h_{ij}= 2\langle C(B),h\rangle}$. Thus

$\displaystyle \langle \frac R{n(n-1)}B, g\odot h\rangle= 2\langle \frac Rng, h\rangle.$

$\Box$
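Both identities in the proof are frame-level algebraic statements, so they can be checked numerically for a random algebraic curvature tensor (a sketch in Python/NumPy; for a symmetric ${S}$, the tensor ${S\odot S}$ is just a convenient random tensor with the Riemann symmetries):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
g = np.eye(n)                                         # orthonormal frame
S = rng.standard_normal((n, n)); S = (S + S.T) / 2
h = rng.standard_normal((n, n)); h = (h + h.T) / 2    # random symmetric 2-tensor

# a random algebraic curvature tensor and its Ricci contraction
Rm = np.einsum('ik,jl->ijkl', S, S) - np.einsum('il,jk->ijkl', S, S)
Ric = np.einsum('ijil->jl', Rm)

# (g odot h)_{ijkl} = g_{ik} h_{jl} - g_{il} h_{jk}, and B = g odot g
gh = np.einsum('ik,jl->ijkl', g, h) - np.einsum('il,jk->ijkl', g, h)
B = np.einsum('ik,jl->ijkl', g, g) - np.einsum('il,jk->ijkl', g, g)

# first assertion of Lemma 9: <Rm, g odot h> = 2 <Ric, h>
assert np.isclose(np.sum(Rm * gh), 2 * np.sum(Ric * h))
# the step used for the second assertion: <B, g odot h> = 2(n-1) <g, h>
assert np.isclose(np.sum(B * gh), 2 * (n - 1) * np.trace(h))
```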

 Lemma 10 Let ${h}$ be a symmetric ${2}$-tensor, then $\displaystyle |g\odot h|^2 =2(n-1)|h|^2.$

Proof: Let ${e_i}$ be an orthonormal basis such that ${h_{ij}= \lambda_j \delta_{ij}}$ (no summation). Then it is easy to see that

$\displaystyle (g\odot h)_{ijkl}=g_{ik}h_{jl}-g_{il}h_{jk}= \begin{cases} \lambda_j\textrm{\quad if }(i,j)=(k,l)\textrm{ and }i\ne j\\ -\lambda_j\textrm{\quad if }(i,j)=(l,k)\textrm{ and }i\ne j\\ 0\textrm{\quad otherwise.} \end{cases}$

Thus ${ |g\odot h|^2 = \sum (g\odot h)_{ijkl} ^2 = 2\sum_{i\ne j}\lambda_j^2=2(n-1)\sum_j \lambda_j^2= 2(n-1)|h|^2}$. $\Box$
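Since the identity of Lemma 10 is basis-independent, it is also easy to confirm numerically for a random symmetric tensor (a sketch in Python/NumPy, working in an orthonormal frame so that ${g}$ is the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
g = np.eye(n)                                         # orthonormal frame
h = rng.standard_normal((n, n)); h = (h + h.T) / 2    # random symmetric 2-tensor

# (g odot h)_{ijkl} = g_{ik} h_{jl} - g_{il} h_{jk}
gh = np.einsum('ik,jl->ijkl', g, h) - np.einsum('il,jk->ijkl', g, h)

lhs = np.sum(gh ** 2)                 # |g odot h|^2
rhs = 2 * (n - 1) * np.sum(h ** 2)    # 2(n-1) |h|^2
assert np.isclose(lhs, rhs)
```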

 Theorem 11 Suppose ${(M^n,g)}$ (${n\ge 3}$) is a closed Riemannian manifold whose Ricci curvature is bounded from below by ${-(n-1)K}$, ${K\ge 0}$, and let ${\lambda}$ be the first nonzero eigenvalue of the Laplacian on ${M}$. Then we have $\displaystyle \begin{array}{rcl} \int _M (R-\overline R) ^2&\stackrel{\mathrm{(i)}}\le &\frac {4n(n-1) } {(n-2)^2}(1+\frac{nK}\lambda) \int_M |Ric-\frac Rn g|^2 \\ &\stackrel{\mathrm{(ii)}}\le &\frac {2n(n-1)^2 } {(n-2)^2} (1+\frac{nK}\lambda)\int_M |Rm-\frac {R }{n(n-1)} g\odot g|^2. \end{array} \ \ \ \ \ (10)$ where ${\overline R}$ is the average of its scalar curvature ${R}$. Equality holds in (i) if and only if ${(M,g)}$ is Einstein. Equality holds in (ii) if and only if ${(M,g)}$ has constant curvature ${\frac{\overline R}{n(n-1)}}$.

Proof: From [Cheng2], we have

$\displaystyle \int _M (R-\overline R) ^2 \le \frac {4n(n-1) } {(n-2)^2}(1+\frac{nK}\lambda) \int_M |Ric-\frac Rn g|^2. \ \ \ \ \ (11)$

By Lemmas 9, 10, as ${\stackrel\circ {Ric}=Ric-\frac Rn g}$,

$\displaystyle \begin{array}{rcl} |\stackrel\circ {Ric}|^2 &=&\frac 12 \langle Rm-\frac R{n(n-1)}g\odot g, g\odot \stackrel\circ{Ric}\rangle\\ &\le& \frac 12 |Rm-\frac R{n(n-1)}g\odot g||g\odot \stackrel\circ{Ric}|\\ &= &\frac {\sqrt{2(n-1)}}2 |Rm-\frac R{n(n-1)}g\odot g ||\stackrel\circ{Ric}|. \end{array}$

Thus

$\displaystyle |\stackrel\circ {Ric}|^2\le \frac{n-1}2 |Rm-\frac R{n(n-1)}g\odot g|^2.$

Plugging this into (11), we can get the result.

If the inequality in (i) becomes an equality, then by [Cheng2], ${(M,g)}$ is Einstein. If the equality in (ii) holds, then from the proof we have

$\displaystyle Rm - \frac {R}{n(n-1)}g\odot g= \alpha (g\odot (Ric -\frac Rn g))$

for some function ${\alpha}$. By taking the trace we deduce that ${\alpha=\frac 1{n-1}}$ and

$\displaystyle Rm= \frac 1{n-1}g\odot Ric.$

Let ${\lbrace e_i\rbrace}$ be an orthonormal basis such that ${Ric(e_i,e_j)=(n-1)\lambda_i\delta _{ij}}$. (In this proof repeated indices will not be summed over. ) Then in this frame we have

$\displaystyle R_{ijkl}= \lambda_j \delta_{ik}\delta_{jl}- \lambda_j \delta_{il}\delta_{jk} .$

From this we see that ${R_{ijij}=\lambda_j}$ for ${i\ne j}$. Since ${R_{ijij}=R_{jiji}}$, this also equals ${\lambda_i}$, so all the ${\lambda_i}$ coincide and the sectional curvature depends only on the base point. By the classical Schur theorem the curvature is constant; since then ${R=\overline R}$, the constant is ${\frac{\overline R}{n(n-1)}}$. $\Box$
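The computation of ${R_{ijkl}}$ from ${Rm=\frac 1{n-1}g\odot Ric}$ in the eigenframe of ${Ric}$ can be checked directly (a sketch in Python/NumPy; the eigenvalue data below are made up, and ${g\odot Ric}$ is assembled straight from the definition of ${\odot}$ — for a genuine curvature tensor, the pair symmetry ${R_{ijij}=R_{jiji}}$ is what then forces the eigenvalues to coincide):

```python
import numpy as np

n = 4
lam = np.array([2.0, -1.0, 3.0, 0.5])   # hypothetical eigenvalue data
g = np.eye(n)                           # eigenframe of Ric, orthonormal
Ric = (n - 1) * np.diag(lam)            # Ric(e_i, e_j) = (n-1) lam_i delta_ij

# Rm = (1/(n-1)) g odot Ric, with (a odot b)_{ijkl} = a_{ik} b_{jl} - a_{il} b_{jk}
Rm = (np.einsum('ik,jl->ijkl', g, Ric) - np.einsum('il,jk->ijkl', g, Ric)) / (n - 1)

# R_{ijij} = lam_j for i != j, as in the displayed formula
for i in range(n):
    for j in range(n):
        if i != j:
            assert np.isclose(Rm[i, j, i, j], lam[j])
```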

 Corollary 12 With the same assumptions as in Theorem 11, we have $\displaystyle \int_M |Rm-\frac{\overline R}{n(n-1)}g\odot g|^2 \le \frac{n^2+4(n-1)Kn\lambda^{-1}}{(n-2)^2}\int_M |Rm-\frac{R}{n(n-1)}g\odot g|^2.$

Proof: It is easy to see that

$\displaystyle \langle Rm, g\odot g\rangle=2R \textrm{\quad and \quad }|g\odot g|^2= 2n(n-1)$

which implies

$\displaystyle \langle Rm- \frac {R}{n(n-1)}g\odot g, g\odot g\rangle=0.$

As

$\displaystyle Rm- \frac {\overline R}{n(n-1)}g\odot g=(Rm- \frac { R}{n(n-1)}g\odot g)+\frac {1}{n(n-1)}(R-\overline R)g\odot g,$

we have

$\displaystyle |Rm- \frac {\overline R}{n(n-1)}g\odot g|^2=|Rm- \frac { R}{n(n-1)}g\odot g|^2+\frac 2{n(n-1)}(R-\overline R)^2.$

Combining this with (10), we can get the result. $\Box$
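The two identities at the start of the proof, and the resulting orthogonality, can also be verified numerically for a random algebraic curvature tensor (a sketch in Python/NumPy; as before, ${S\odot S}$ is just a convenient random tensor with the Riemann symmetries):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
g = np.eye(n)                                       # orthonormal frame
S = rng.standard_normal((n, n)); S = (S + S.T) / 2

# a random algebraic curvature tensor and its scalar curvature
Rm = np.einsum('ik,jl->ijkl', S, S) - np.einsum('il,jk->ijkl', S, S)
R = np.einsum('ijij->', Rm)

# B = g odot g
B = np.einsum('ik,jl->ijkl', g, g) - np.einsum('il,jk->ijkl', g, g)

assert np.isclose(np.sum(Rm * B), 2 * R)            # <Rm, g odot g> = 2R
assert np.isclose(np.sum(B * B), 2 * n * (n - 1))   # |g odot g|^2 = 2n(n-1)
# hence Rm - R/(n(n-1)) B is orthogonal to B
assert np.isclose(np.sum((Rm - R / (n * (n - 1)) * B) * B), 0)
```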