A matrix inequality and its geometric application

In this post I will show a matrix inequality by Huang and Wu, and give one geometric application of it. See their paper for other applications.

Proposition 1 (Huang-Wu) For an {n\times n} matrix {A=(a_{ij})},

\displaystyle  \sigma_1 \widetilde \sigma_1 = \sigma_2 +\frac{n}{2(n-1)} \widetilde \sigma_1 ^2+\sum_{1\leq i<j\leq n} a_{ij}a_{ji} +\frac{1}{2(n-1)} \sum_{2\leq i<j\leq n}(a_{ii}-a_{jj})^2 ,

where {\displaystyle\sigma_1 = \sum_{i=1}^n a_{ii}}, {\displaystyle\widetilde \sigma_1 = \sum_{i=2}^n a_{ii}} and {\displaystyle\sigma_2= \sum_{1\leq i<j\leq n}(a_{ii}a_{jj}-a_{ij}a_{ji})}.

In particular, if {A} is real and {\displaystyle\sum_{1\leq i<j\leq n }a_{ij}a_{ji}\geq 0}, then

\displaystyle  \sigma_1 \widetilde \sigma_1 \geq \sigma_2 +\frac{n}{2(n-1)} \widetilde \sigma_1 ^2 .
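Before proving the identity, it is easy to sanity-check numerically. The following sketch (mine, not from Huang and Wu's paper) tests it on a random real matrix with NumPy; Python's 0-based indexing shifts every index in the proposition down by one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

sigma1 = np.trace(A)                      # sigma_1: sum of all diagonal entries
sigma1_t = np.trace(A) - A[0, 0]          # \widetilde\sigma_1: diagonal sum from the 2nd entry on
sigma2 = sum(A[i, i] * A[j, j] - A[i, j] * A[j, i]
             for i in range(n) for j in range(i + 1, n))
off = sum(A[i, j] * A[j, i] for i in range(n) for j in range(i + 1, n))
diag_sq = sum((A[i, i] - A[j, j]) ** 2    # runs over 2 <= i < j <= n in the proposition's indexing
              for i in range(1, n) for j in range(i + 1, n))

lhs = sigma1 * sigma1_t
rhs = sigma2 + n / (2 * (n - 1)) * sigma1_t ** 2 + off + diag_sq / (2 * (n - 1))
assert np.isclose(lhs, rhs)
```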


Proof: Denote {a_{ii}} by {\lambda_i}. Consider

\displaystyle  \begin{array}{rcl}  \sigma_1\widetilde \sigma_1&=& (\lambda_1+\cdots +\lambda_n)(\lambda_2+\cdots+ \lambda_n)\\&=& \lambda_1(\lambda_2+\cdots +\lambda_n)+\widetilde \sigma_1^2\\ &=& \displaystyle\sigma_2 -\sum_{2\leq i<j\leq n}\lambda_i\lambda_j+ \sum_{1\leq i<j\leq n}a_{ij}a_{ji} + \widetilde \sigma _1^2 .\end{array}

We want to eliminate {\displaystyle\sum_{2\leq i<j\leq n}\lambda_i\lambda_j} from the above. This is done by observing that (here all summations run over {2\leq i<j\leq n} unless otherwise specified)

\displaystyle  \begin{array}{rcl}  \sum(\lambda_i-\lambda_j)^2&=&\displaystyle\sum (\lambda _i^2 + \lambda _j^2) -2\sum\lambda_i\lambda_j\\ &=&\displaystyle(n-2)\sum_{j\geq 2}\lambda_j^2-2\sum\lambda_i\lambda_j\\ &=&\displaystyle (n-2)(\sum_{j\geq 2}\lambda_j)^2-2(n-1)\sum \lambda_i\lambda_j. \end{array}

Solving this for {\sum \lambda_i\lambda_j} and substituting the result into the first equation, we are done. \Box
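The elimination step can be tested the same way. In this sketch (my own check, with hypothetical variable names), `lam` plays the role of {\lambda_2,\cdots,\lambda_n}:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = rng.standard_normal(6)              # stands in for lambda_2, ..., lambda_n
m = len(lam)                              # m = n - 1 entries, so n = m + 1
n = m + 1

lhs = sum((lam[i] - lam[j]) ** 2 for i in range(m) for j in range(i + 1, m))
pair = sum(lam[i] * lam[j] for i in range(m) for j in range(i + 1, m))
rhs = (n - 2) * lam.sum() ** 2 - 2 * (n - 1) * pair
assert np.isclose(lhs, rhs)
```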

Let {(M^n,g)} be a Riemannian manifold and let {\Sigma} be an {(n-1)}-dimensional submanifold of {M}. Let {e_1,\cdots, e_n} be a local orthonormal frame on {M} such that {e_2,\cdots ,e_n} are tangential to {\Sigma}. Denote by {R} and {Rc=(R_{ij})} the scalar curvature and the Ricci curvature of {M} respectively, and by {H_\Sigma} and {A_\Sigma} the mean curvature and the shape operator of {\Sigma} in {M} respectively.

Proposition 2 With the notations and assumptions above,

\displaystyle  \begin{array}{rcl} \displaystyle R (R_\Sigma -H_\Sigma ^2 +|A_\Sigma|^2)&=& \displaystyle\frac{1}{2} (R^2-|Rc|^2 )+ \frac{n}{2(n-1)}(\sum_{i=2}^nR_{ii})^2+ \sum_{1\leq i<j\leq n}R_{ij}^2\\ &&\displaystyle+\frac{1}{2(n-1)} \sum_{2\leq i<j\leq n}(R_{ii}-R_{jj})^2. \end{array}


Proof: The result follows by applying Proposition 1 to {(R_{ij})} directly. Note that by the Gauss equation {R_\Sigma= \sum_{i=2}^n R_{ii}+H_\Sigma^2-|A_\Sigma|^2},

\displaystyle  \widetilde \sigma_1(R_{ij})= R_\Sigma - H_\Sigma ^2+|A_\Sigma|^2.


On the other hand,

\displaystyle \sigma _2(R_{ij})= \sum_{1\leq i<j\leq n}(R_{ii}R_{jj}-R_{ij}^2)=\frac{1}{2}\sum_{i,j=1}^n(R_{ii}R_{jj}-R_{ij}^2)=\frac{1}{2}(R^2-|Rc|^2). \Box
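As a quick plausibility check (not part of the proof), the identity for {\sigma_2} can be verified on a random symmetric matrix standing in for {(R_{ij})}:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n))
Rc = (B + B.T) / 2                        # random symmetric matrix playing the role of (R_ij)

R = np.trace(Rc)                          # "scalar curvature": the full trace
sigma2 = sum(Rc[i, i] * Rc[j, j] - Rc[i, j] ** 2
             for i in range(n) for j in range(i + 1, n))
# sigma_2 = (R^2 - |Rc|^2)/2, with |Rc| the Frobenius norm
assert np.isclose(sigma2, 0.5 * (R ** 2 - np.linalg.norm(Rc) ** 2))
```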


Corollary 3 With the notations and assumptions above, if {R\geq |Rc|} on {M}, then

\displaystyle R_\Sigma \geq H_\Sigma ^2-|A_\Sigma |^2.

Equality holds at a point if and only if, at that point, {R=|Rc(e_1,e_1)|} and {R_{ij}=0} whenever {(i,j)\neq (1,1)}.

Proof: When {R\geq |Rc|}, the right-hand side of the identity in Proposition 2 is nonnegative, so the inequality holds at any point {p\in \Sigma} where {R(p)>0}. If {R(p)=0}, then {|Rc|=0} at {p}, so {\sum_{i=2}^n R_{ii}=0} at {p}. But the Gauss equation shows that {\sum_{i=2}^n R_{ii}= R_\Sigma - H_\Sigma ^2+|A_\Sigma |^2}. Hence the inequality holds.

The necessary and sufficient condition for the equality can be easily seen from the RHS of the equation in Proposition 2. \Box

When {M^n} is a hypersurface in {\mathbb{R}^{n+1}}, the condition in Corollary 3 can be replaced by extrinsic curvature conditions on {M}. Let {(\lambda_1, \cdots, \lambda_n)} be the principal curvatures of {M} in {\mathbb{R}^{n+1}}. We define the {k}-th mean curvature of {M} to be

\displaystyle \sigma _k(M)=\sum_{1\leq i_1<\cdots<i_k\leq n}\lambda _{i_1}\cdots \lambda_{i_k}.
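Numerically, {\sigma_k(M)} can be read off from the coefficients of {\prod_i(t+\lambda_i)}. The helper below is an illustration of mine, not from the paper:

```python
import numpy as np

def sigma_k(lam, k):
    """k-th elementary symmetric polynomial of the principal curvatures lam,
    read off from the coefficients of prod_i (t + lam_i)."""
    # polyfromroots builds prod (t - r_i) in ascending degree order, so feed it -lam_i
    coeffs = np.polynomial.polynomial.polyfromroots(-np.asarray(lam))
    m = len(lam)
    return coeffs[m - k] if 0 <= k <= m else 0.0

lam = [1.0, 2.0, 3.0]
assert np.isclose(sigma_k(lam, 1), 6.0)    # 1 + 2 + 3
assert np.isclose(sigma_k(lam, 2), 11.0)   # 1*2 + 1*3 + 2*3
assert np.isclose(sigma_k(lam, 3), 6.0)    # 1*2*3
```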

For even {k}, {\sigma_k(M)} is an intrinsic quantity (by a result of Reilly) depending only on the induced metric of {M}. For odd {k}, {\sigma_k(M)} is well-defined only up to a sign (determined by the choice of the unit normal {\nu}); however, for even {k} the product {\sigma_{k-1}(M)\sigma _{k+1}(M)} of two odd-order curvatures is independent of the choice of {\nu}.

Proposition 4 With the same notations as before, if {R} is nonnegative and

\displaystyle  \sigma_2(M)^2+\sigma _1(M)\sigma _3(M)+ 2\sigma _4(M)\geq 0, \ \ \ \ \ (1)

then {R\geq |Rc|} on {M}. In particular, in this case, {R_\Sigma \geq H_\Sigma ^2-|A_\Sigma |^2}.  

Proof: To simplify notation, denote {\sigma _k(M)} by {\sigma_k}. There is an orthonormal frame {\{e_i\}_{i=1}^n} which diagonalizes the shape operator: {Ae_i =\lambda _i e_i}. Then {Rc} is diagonalized by {\{e_i\}} (using the Gauss equation), with

\displaystyle  R_{ii} = \lambda _i \sum_{j\neq i}\lambda _j = \sigma _1 \lambda _i -\lambda _i^2.


Hence

\displaystyle  |Rc|^2 = \sum _i (\sigma_1 \lambda _i - \lambda _i ^2)^2= \sigma_1^2 \sum _i \lambda _i^2-2\sigma _1 \sum_i \lambda _i^3 + \sum _i \lambda _i^4. \ \ \ \ \ (2)

Let {p_k = \sum _{i=1}^n \lambda_i^k}. Then by Newton’s identities, we have

\displaystyle  \begin{array}{rcl}  \begin{cases} p_1=\sigma _1,\\ p_2=\sigma_1 p_1-2\sigma_2=\sigma _1^2-2\sigma_2,\\ p_3= \sigma _1 p_2 -\sigma_2 p_1 + 3\sigma_3=\sigma_1 ^3 -3\sigma_1\sigma_2 +3\sigma_3,\\ p_4 =\sigma _1 p_3 -\sigma_2 p_2 + \sigma _3 p_1 -4 \sigma _4= \sigma_1^4 - 4 \sigma_1 ^2\sigma _2 +4\sigma_1\sigma_3 +2\sigma_2^2 -4\sigma_4.\end{cases} \end{array}
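These four identities can be confirmed on random data; the sketch below (my own check, not from the post) compares the power sums against the elementary symmetric polynomials directly:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
lam = rng.standard_normal(6)

def sig(k):
    # k-th elementary symmetric polynomial of lam
    return sum(np.prod(c) for c in combinations(lam, k))

p = {k: (lam ** k).sum() for k in range(1, 5)}   # power sums p_k
s1, s2, s3, s4 = sig(1), sig(2), sig(3), sig(4)

assert np.isclose(p[1], s1)
assert np.isclose(p[2], s1 ** 2 - 2 * s2)
assert np.isclose(p[3], s1 ** 3 - 3 * s1 * s2 + 3 * s3)
assert np.isclose(p[4], s1 ** 4 - 4 * s1 ** 2 * s2 + 4 * s1 * s3 + 2 * s2 ** 2 - 4 * s4)
```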

Plugging these into (2), and using {R=2\sigma_2}, we have

\displaystyle R^2-|Rc|^2= 2(\sigma_2^2 +\sigma_1\sigma_3 +2 \sigma_4).
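The final identity can likewise be verified numerically from random principal curvatures, using the diagonal form of {Rc} derived above (the code and names are my sketch, not Huang-Wu's):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
lam = rng.standard_normal(5)              # principal curvatures of a hypothetical M^5

def sig(k):
    # k-th elementary symmetric polynomial, zero when k exceeds len(lam)
    return sum(np.prod(c) for c in combinations(lam, k)) if k <= len(lam) else 0.0

s1 = lam.sum()
Ricci_diag = s1 * lam - lam ** 2          # R_ii = sigma_1 lam_i - lam_i^2, from the Gauss equation
R = 2 * sig(2)                            # scalar curvature R = 2 sigma_2
assert np.isclose(Ricci_diag.sum(), R)    # consistency: trace of Rc equals R
norm_sq = (Ricci_diag ** 2).sum()         # |Rc|^2, since Rc is diagonal in this frame
assert np.isclose(R ** 2 - norm_sq, 2 * (sig(2) ** 2 + s1 * sig(3) + 2 * sig(4)))
```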

The result follows. \Box

Remark 1 The condition (1) in Proposition 4 is automatically satisfied when {n=2} (here {\sigma_k } is defined to be zero if {k>n}). However, the inequality is trivial in this case: as {\Sigma} is a curve, {R_\Sigma=0}, and {|H_\Sigma |=|A_\Sigma|} is just the geodesic curvature of {\Sigma}. In general, (1) can also be replaced by other, stronger conditions. For example, by a Newton-Maclaurin type inequality (see Hardy, Littlewood and Polya's Inequalities, p.52, Theorem 53, or here for a slightly stronger statement)

\displaystyle  \sigma_{k-1}\sigma_{k+1}\leq \sigma_k ^2,

so the condition (1) can be replaced by {\sigma_1(M)\sigma _3 (M)+\sigma _4(M)\geq 0}: indeed, {\sigma_2^2\geq \sigma_1\sigma_3} then gives {\sigma_2^2+\sigma_1\sigma_3+2\sigma_4\geq 2(\sigma_1\sigma_3+\sigma_4)\geq 0}.
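The inequality {\sigma_{k-1}\sigma_{k+1}\leq \sigma_k^2} holds for arbitrary real numbers, which the following sketch (mine, not from the post) spot-checks on random data:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
lam = rng.standard_normal(6)              # arbitrary real numbers

def sig(k):
    # k-th elementary symmetric polynomial; sig(0) = 1 (empty product)
    return sum(np.prod(c) for c in combinations(lam, k))

for k in range(1, len(lam)):
    assert sig(k - 1) * sig(k + 1) <= sig(k) ** 2 + 1e-12
```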
