Consistency of Computations Involving Geometric Quantities


In the following, we give some explanations of computations in Riemannian geometry and geometric measure theory. For our purposes, we only consider $n$-dimensional hypersurfaces in $\mathbb{R}^{n+1}$, i.e., the codimension-one case. Throughout, we use the Einstein summation convention.

Firstly, we consider the covariant derivative. In Euclidean space, the gradient is given by
\begin{align}
Df=\langle Df,e_i\rangle e_i.
\end{align}
For a function on $M$, we define the gradient on $M$ by the tangential projection
\begin{align}
\nabla f=(Df)^{\mathrm{tan}}.
\end{align}

Let $\nu$ denote the normal direction of $M$ at $p\in M\subset \mathbb{R}^{n+1}$. Then
\begin{align}
\nabla f=Df-\langle Df,\nu \rangle\nu.
\end{align}

Choose a local orthonormal basis $\{\tau_i\}_{i=1}^n$ of $TM$ at $p$ such that $\{\tau_1,\dots,\tau_n,\nu\}$ gives the orientation of $M$; then we get
\begin{align}
\nabla f=\langle Df,\tau_i\rangle \tau_i=(D_{\tau_i}f)\tau_i.
\end{align}
Obviously, the above expression is independent of the choice of $\{\tau_i\}_{i=1}^n$.
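As a quick numerical sanity check (my own illustration in Python/NumPy, not part of the original notes), one can verify on the unit sphere $S^2\subset\mathbb{R}^3$ that the projection formula and the frame expansion produce the same tangential gradient:

```python
import numpy as np

# Check (on the unit sphere S^2, with f(x) = x_3, so Df = e_3):
#   Df - <Df, nu> nu  ==  sum_i <Df, tau_i> tau_i
p = np.array([0.6, 0.0, 0.8])        # a point on the unit sphere
nu = p                               # outward unit normal at p
Df = np.array([0.0, 0.0, 1.0])       # ambient gradient of f(x) = x_3

grad_proj = Df - Df.dot(nu) * nu     # projection definition

# build an orthonormal tangent basis at p by Gram-Schmidt against {nu}
basis = []
for e in np.eye(3):
    v = e - e.dot(nu) * nu - sum(b * e.dot(b) for b in basis)
    if np.linalg.norm(v) > 1e-8:
        basis.append(v / np.linalg.norm(v))
tau1, tau2 = basis

grad_frame = Df.dot(tau1) * tau1 + Df.dot(tau2) * tau2
assert np.allclose(grad_proj, grad_frame)
```

The specific point $p$ and the function $f(x)=x_3$ are arbitrary choices for the test; any point and any smooth $f$ would do.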

In Riemannian Geometry, the gradient is defined as
\begin{align}
\nabla f=g^{ij}\frac{\partial f}{\partial x_i}\frac{\partial }{\partial x_j}=g^{ij}\frac{\partial f}{\partial x_i}\frac{\partial F }{\partial x_j}.
\end{align}
Here $F:D\rightarrow \mathbb{R}^{n+1}$ denotes the embedding of $M$. As in classical differential geometry, we write
\begin{align}
g_{ij}=\langle \frac{\partial F }{\partial x_i},\frac{\partial F }{\partial x_j}\rangle.
\end{align}

Now we check the consistency of these two definitions, i.e., we verify that the two formulas yield the same result. For simplicity, we write both $\frac{\partial F }{\partial x_i}$ and $\frac{\partial }{\partial x_i}$ as $\partial_i$.

Let $\tau_i=a_{ij}\partial _j$. By the orthonormality of $\{\tau_i\}$, we have
\begin{align}
\delta_{ij}=a_{ik}g_{kl}a_{jl}.
\end{align}
Let $A=(a_{ij})$. Then
\begin{align}
I_n=AgA^T.
\end{align}
This means that $gA^T=A^{-1}$; it follows that
\begin{align}
gA^TA=I_n=A^TAg,\ \ A^TA=g^{-1}.
\end{align}

Hence
\begin{align}
\langle Df,\tau_i \rangle\tau_i=\langle Df,a_{ij}\partial_j\rangle a_{ik}\partial _k=a_{ij}a_{ik}\langle Df,\partial_j\rangle \partial _k=g^{jk}(D_{\partial_j}f)\partial_k.
\end{align}
This completes the proof.
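The matrix identities used in the proof can also be checked numerically. The following NumPy sketch (my own illustration) takes a random symmetric positive-definite "metric" $g$ and one admissible frame-change matrix, $A=g^{-1/2}$, and confirms that $AgA^T=I_n$ forces $A^TA=g^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
g = M @ M.T + 3.0 * np.eye(3)              # a random SPD "metric" matrix

# one admissible frame change: A = g^{-1/2} (any orthogonal Q times this works)
w, V = np.linalg.eigh(g)
A = V @ np.diag(w ** -0.5) @ V.T

assert np.allclose(A @ g @ A.T, np.eye(3))     # I_n = A g A^T
assert np.allclose(A.T @ A, np.linalg.inv(g))  # hence A^T A = g^{-1}
```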

Secondly, we define the divergence operator and generalize it to vector fields that are not necessarily tangent, in such a way that it agrees with the usual definition when the field is tangent.

In Riemannian geometry, we define
\begin{align}
div(X)=\partial_iX^i+\Gamma_{ik}^iX^k=g^{il}(\partial_lX_i-\Gamma_{il}^kX_k),
\end{align}
where $X=X^i\partial_i=g^{ij}X_j\partial_i$ is a tangent vector field, and we have used that the metric is parallel: $\nabla_kg^{ij}\equiv0$.

By using
\begin{align}
\Gamma_{ik}^i=\frac{1}{\sqrt{G}}\partial_k\sqrt{G},
\end{align}
where $G=\det(g_{ij})$, we have
\begin{align}
div(X)=\partial_iX^i+\frac{1}{\sqrt{G}}(\partial_k\sqrt{G})X^k=\frac{1}{\sqrt{G}}\partial_i\left(\sqrt{G}g^{ij}X_j\right)=g^{il}(\partial_lX_i-\Gamma_{il}^kX_k).
\end{align}
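The contraction identity $\Gamma_{ik}^i=\partial_k\sqrt{G}/\sqrt{G}$ can be verified symbolically. Here is a SymPy sketch (my own check, not from the notes) for the flat metric in polar coordinates, where $G=\det(g_{ij})=r^2$:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])         # flat metric in polar coordinates
ginv = g.inv()
sqrtG = sp.sqrt(g.det())                   # = r, since r > 0

def Gamma(k, i, j):                        # Christoffel symbols of g
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[i, l], x[j]) + sp.diff(g[j, l], x[i])
                      - sp.diff(g[i, j], x[l]))
        for l in range(2))

for k in range(2):
    lhs = sum(Gamma(i, i, k) for i in range(2))   # Gamma^i_{ik}
    rhs = sp.diff(sqrtG, x[k]) / sqrtG
    assert sp.simplify(lhs - rhs) == 0
```

Any other metric would serve equally well; polar coordinates keep the Christoffel symbols small enough to read off by hand ($\Gamma^\theta_{\theta r}=1/r$).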


We can also define the Laplace-Beltrami operator of $f$ on $M$ by
\begin{align}
\Delta f=div(\nabla f)=\frac{1}{\sqrt{G}}\partial_i\left(\sqrt{G}g^{ij}\partial_jf\right)=g^{il}(\partial_l\partial_if-\Gamma_{il}^k\partial_kf).
\end{align}
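As a concrete check of this coordinate formula (again my own illustration), the following SymPy sketch computes the Laplace-Beltrami operator on the round sphere $S^2$ and confirms the eigenfunction relation $\Delta(\cos\theta)=-2\cos\theta$, where $\cos\theta$ is the restriction of the coordinate function $x_3$:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round metric on S^2
ginv = g.inv()
sqrtG = sp.sin(th)                            # sqrt(det g) for 0 < theta < pi

def laplace_beltrami(f):
    # Delta f = (1/sqrt(G)) d_i ( sqrt(G) g^{ij} d_j f )
    return sp.simplify(sum(
        sp.diff(sqrtG * ginv[i, j] * sp.diff(f, x[j]), x[i])
        for i in range(2) for j in range(2)) / sqrtG)

f = sp.cos(th)        # the coordinate function x_3 restricted to the sphere
assert sp.simplify(laplace_beltrami(f) + 2 * f) == 0   # Delta f = -2 f
```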


In Jiaqiang Mei's book, the divergence is defined as
\begin{align}
div(X)=\langle \nabla _{\tau_i}X,\tau_i\rangle,
\end{align}
where $\{\tau_i\}_{i=1}^n$ is the orthonormal basis of $T_pM$.
Now we check the consistency with the usual definition.
In fact,
\begin{align}
\langle \nabla _{\tau_i}X,\tau_i\rangle&=\langle\nabla _{a_{ij}\partial_j}X,a_{ik}\partial_k\rangle\nonumber\\
&=a_{ij}a_{ik}\langle\nabla _{\partial_j}X,\partial_k\rangle\nonumber\\
&=g^{jk}\langle\nabla _{\partial_j}X,\partial_k\rangle\nonumber\\
&=g^{jk}\langle\nabla _{\partial_j}(X^i\partial_i),\partial_k\rangle\nonumber\\
&=g^{jk}\partial_jX^ig_{ik}+g^{jk}X^i\Gamma_{ji}^lg_{lk}\nonumber\\
&=\partial_iX^i+\Gamma_{ji}^jX^i\\
&=\partial_iX^i+\Gamma_{ik}^iX^k.\nonumber
\end{align}
This completes the proof.

As the proof demonstrates, if $X$ is a tangent vector field, then we have
\begin{align}
div(X)=\langle \nabla _{\tau_i}X,\tau_i\rangle=g^{jk}\langle\nabla _{\partial_j}X,\partial_k\rangle\,
\end{align}
then we can generalize to an arbitrary (not necessarily tangent) vector field $X$ along $M$ by
\begin{align}
div(X)=\langle D _{\tau_i}X,\tau_i\rangle=g^{jk}\langle D _{\partial_j}X,\partial_k\rangle=g^{jk}\langle \frac{\partial X}{\partial x_j},\frac{\partial F}{\partial x_k}\rangle,
\end{align}
this is just the natural replacement of $\nabla$ by $D$.

But the second equality is not so obvious. Now we give a more precise computation. We also have to define the covariant derivative of $X$.

As in the previous computation, we have
\begin{align}
(\nabla X)(dx^j,\partial_i)=\partial_iX^j+\Gamma_{ik}^jX^k=g^{jl}(\partial_iX_l-\Gamma_{il}^kX_k).
\end{align}

We should check that $D_{\partial_i}X-\nabla_{\partial_i}X$ has only a normal component; then the definition is consistent. Although this just follows from the geometric idea of the covariant derivative, we need a clarification to convince ourselves.

Here is the calculation:
\begin{align*}
D_{\partial_i}X-\nabla_{\partial_i}X
&=D_{\partial_i}(X^j\partial_j)-\partial_iX^j\partial_j-\Gamma_{ik}^jX^k\partial_j\\
&=\partial_i(X^j)\partial_j+X^jD_{\partial_i}\partial_j-\partial_iX^j\partial_j-\Gamma_{ik}^jX^k\partial_j\\
&=X^j(\Gamma_{ij}^k\partial_k+h_{ij}\nu)-\Gamma_{ik}^jX^k\partial_j\\
&=X^jh_{ij}\nu.
\end{align*}
This means that replacing $\nabla$ by $D$ in the definition of the divergence leaves it unchanged for tangent vector fields; thus the new definition generalizes to nontangent vectors.

Now for the orthonormal basis $\{\tau_i\}_{i=1}^n$, we have
\begin{align}
div(X)=D_{\tau_i}X\cdot\tau_i=trace((I-\nu\otimes\nu)DX).
\end{align}
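The equality of the frame sum and the trace formula is pure linear algebra: for any matrix $J$ (standing in for $DX$) and any orthonormal frame $\{\tau_i\}$ completing $\nu$, one has $\sum_i\tau_i^TJ\tau_i=trace((I-\nu\otimes\nu)J)$, since $\sum_i\tau_i\tau_i^T=I-\nu\otimes\nu$. A NumPy check (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
nu = rng.standard_normal(3)
nu /= np.linalg.norm(nu)                      # unit normal
J = rng.standard_normal((3, 3))               # stands in for DX

# orthonormal tangent frame {tau_1, tau_2} completing nu, via QR
Q, _ = np.linalg.qr(np.column_stack([nu, rng.standard_normal((3, 2))]))
taus = Q[:, 1:]                               # columns orthogonal to nu

lhs = sum(taus[:, i] @ J @ taus[:, i] for i in range(2))
rhs = np.trace((np.eye(3) - np.outer(nu, nu)) @ J)
assert np.isclose(lhs, rhs)
```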

From the above computation, a pattern has appeared. It suggests that the covariant derivative of a function, a tangent vector, or indeed any tensor can be computed in the intrinsic sense by taking the usual derivative and then projecting orthogonally, i.e., taking the usual derivative and multiplying by $I-\nu\otimes\nu$.

We still have a question on the consistency of $\Delta$ acting on functions, vectors, etc.

Firstly, let $\nabla_jf=\nabla f\cdot e_j$, where $\{e_j\}_{j=1}^{n+1}$ is the standard orthonormal basis of $\mathbb{R}^{n+1}$. Then
\begin{align}
\nabla f&=(\nabla_1f,\nabla_2f,...,\nabla_{n+1}f)^T,\\
\nabla f&=\langle Df,\tau_i\rangle \tau_i=\nabla_ife_i.
\end{align}

Let $X=(X^1,X^2,\dots,X^{n+1})\in C^1(M)$; then
\begin{align*}
div(X)=\sum_{i=1}^n\langle D_{\tau_i}X,\tau_i\rangle&=\sum_{i=1}^n\Big\langle \sum_{j=1}^{n+1}(D_{\tau_i}X^j)e_j,\tau_i\Big\rangle\\
&=\sum_{j=1}^{n+1}\Big\langle e_j,\sum_{i=1}^n(D_{\tau_i}X^j)\tau_i\Big\rangle\\
&=\sum_{j=1}^{n+1}\langle e_j,\nabla X^j\rangle\\
&=\sum_{j=1}^{n+1}\nabla_j X^j\\
&=\sum_{j=1}^{n+1}\delta_j X^j.
\end{align*}
In the above computation, we abuse notation: $\delta_j$ represents the $j$-th component of $\nabla$, $j=1,2,\dots,n+1$. As usual, $\nabla f=\delta f=(\delta_1f,\dots,\delta_{n+1}f)^T$.

Thus, the Laplace-Beltrami operator can also be expressed by
\begin{align}
\Delta_M f=div_M(\nabla_M f)=\sum_{i=1}^{n+1}\delta_i(\delta_if).
\end{align}
This proves the consistency with the usual Laplacian.

It is not hard to check
\begin{align*}
\Delta (uv)&=(\Delta u)v+2\langle\nabla u,\nabla v\rangle+u\Delta v,\\
div(fX)&=\langle\nabla f,X\rangle+fdiv(X),\\
\nabla (fg)&=(\nabla f)g+f\nabla g.
\end{align*}
A question arises: does the second identity still hold when $X$ is not a tangent vector? In fact, it is easy to check that it remains true for nontangential vector fields. Now let's check it!
\begin{align*}
div_M(fX)&=\langle D_{\tau_i}(fX),\tau_i\rangle\\
&=\langle (D_{\tau_i}f)X+fD_{\tau_i}X,\tau_i\rangle\\
&=\langle (D_{\tau_i}f)X,\tau_i\rangle+\langle fD_{\tau_i}X,\tau_i\rangle\\
&=\langle X,(D_{\tau_i}f)\tau_i\rangle+f\langle D_{\tau_i}X,\tau_i\rangle\\
&=\langle X,\nabla f\rangle+fdiv(X).
\end{align*}

In the books of Simon and Giusti, one introduces the signed distance to $M$, with $d(x)=d(x,M)$ on one side and $d(x)=-d(x,M)$ on the other side, for $x\in\mathbb{R}^{n+1}$; then
\begin{align}
Dd=\nu\ \ \mbox{on}\ \ M.
\end{align}

It is easy to check that
\begin{align}
\delta_i&=D_i-\nu_i\nu_hD_h,\\
\nu_i\delta_i&=0,\\
\nu_i\delta_j\nu_i&=0,\\
\delta_i\nu_j\delta_j&=-\nu_j\delta_i\delta_j,\\
\delta_i\nu_j&=d_{ij}=\delta_j\nu_i,\\
-div_M(\nu)&=-\delta_i\nu_i=H,\\
\Delta \vec{x}&=H\nu,\\
|B|^2&=\delta_i\nu_j\delta_j\nu_i,\\
-\Delta\nu &=|B|^2\nu+\nabla H
\end{align}
and
\begin{align}
\int_{M}\delta_i\phi\, dV_M=-\int_{M}H\phi \nu_i\,dV_M,
\end{align}
where $\phi\in C_0^1(M)$. See more geometric explanation in Qing Han's book on nonlinear elliptic equations.
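The integration-by-parts identity can be tested numerically. The following sketch (my own illustration, using this text's convention $H=-\delta_i\nu_i$, so $H=-1$ on the unit circle with outward normal) checks $\int_M\delta_1\phi\,dV_M=-\int_MH\phi\nu_1\,dV_M$ on $S^1\subset\mathbb{R}^2$ with $\phi(x)=x_1$:

```python
import numpy as np

# On the unit circle, nu(x) = x and D(x_1) = e_1, so
# delta(phi) = e_1 - <e_1, nu> nu is the tangential gradient of phi = x_1.
theta = np.linspace(0.0, 2 * np.pi, 20001)[:-1]       # uniform grid on S^1
dtheta = theta[1] - theta[0]
nu = np.stack([np.cos(theta), np.sin(theta)])          # outward normal
H = -1.0                                               # H = -delta_i nu_i

Dphi = np.stack([np.ones_like(theta), np.zeros_like(theta)])  # D(x_1) = e_1
delta_phi = Dphi - (Dphi * nu).sum(axis=0) * nu        # tangential gradient

phi = np.cos(theta)                                    # phi = x_1 on S^1
lhs = (delta_phi[0] * dtheta).sum()                    # int delta_1 phi
rhs = -(H * phi * nu[0] * dtheta).sum()
assert np.isclose(lhs, rhs, atol=1e-6)                 # both equal pi
```

Both sides evaluate to $\int_0^{2\pi}\sin^2\theta\,d\theta=\int_0^{2\pi}\cos^2\theta\,d\theta=\pi$.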

Note that
\begin{align*}
\delta_i\nu_i&=div(\nu)=g^{ij}\langle D_{\partial_i}\nu,\partial_j\rangle\\
&=g^{ij}(D_{\partial_i}\langle \nu,\partial_j\rangle-\langle \nu,D_{\partial_i}\partial_j\rangle)\\
&=-g^{ij}\langle \nu,h_{ij}\nu\rangle\\
&=-H.
\end{align*}
This checks the consistency of the expression for the mean curvature.

Next, we want to check the consistence for $|B|^2$. Note that originally,
\begin{align}
|B|^2=g^{ik}g^{jl}h_{ij}h_{kl}.
\end{align}
One should note that
\begin{align}
\sum_{i=1}^{n+1}\sum_{j=1}^{n+1}\delta_i\nu_j\delta_j\nu_i=trace((D^2d)^2)=\sum_{i=1}^n\lambda_i^2.
\end{align}
This is a geometric quantity, invariant under coordinate transformations.
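For the unit sphere this trace can be checked directly (my own numerical sketch, not from the notes): with $d(x)=|x|-1$, a finite-difference Hessian of $d$ at a point of $M$ gives $trace((D^2d)^2)=\sum\lambda_i^2=2$ for $S^2\subset\mathbb{R}^3$, since both principal curvatures have magnitude $1$:

```python
import numpy as np

def d(y):                                  # signed distance to the unit sphere
    return np.linalg.norm(y) - 1.0

def hess(f, x, h=1e-4):                    # central finite-difference Hessian
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

p = np.array([0.0, 0.6, 0.8])              # a point on the unit sphere
D2d = hess(d, p)
B2 = np.trace(D2d @ D2d)                   # = sum of lambda_i^2
assert abs(B2 - 2.0) < 1e-4
```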

Locally, we have
\begin{align}
\delta_i\nu=D_i\nu-\nu_i\nu_kD_k\nu.
\end{align}


Recall that
\begin{align*}
\Delta_M f&=\sum_{i=1}^n\delta_i(\delta_if)\\
&=div_M(Df)-div_M(\langle Df,\nu\rangle\nu)\\
&=trace((I-\nu\otimes\nu)D^2f)
-\langle Df,\nu\rangle div_M(\nu)\\
&=\Delta_{\mathbb{R}^{n+1}}f-D^2f(\nu,\nu)+H\langle Df,\nu\rangle.
\end{align*}
Here, we give a geometric computation to give another interpretation.
Note that
\begin{align}
\Delta_{\mathbb{R}^{n+1}}f&=D_{\tau_i}D_{\tau_i}f-D_{D_{\tau_i}\tau_i}f+D_{\nu}D_{\nu}f-D_{D_{\nu}\nu}f\nonumber\\
&=D_{\tau_i}D_{\tau_i}f-D_{(D_{\tau_i}\tau_i)^T}f-D_{(D_{\tau_i}\tau_i)^{N}}f+D_{\nu}D_{\nu}f-D_{D_{\nu}\nu}f\nonumber\\
&=\Delta_Mf-D_{H\nu}(f)+D^2f(\nu,\nu)\nonumber\\
&=\Delta_Mf-H\langle Df,\nu\rangle+D^2f(\nu,\nu)\nonumber,
\end{align}
then we get
\begin{align}
\Delta_Mf=\Delta_{\mathbb{R}^{n+1}}f+H\langle Df,\nu\rangle-D^2f(\nu,\nu).\nonumber
\end{align}
This is a very useful formula in the calculation of $\Delta_M |x|^2$; for instance,
\begin{align}
\Delta_M \left(\frac{|x|^2}{2}\right)&=n+H\langle x,\nu\rangle,\\
\nabla_M |x|&=(I-\nu\otimes\nu)D|x|=\left(\frac{x}{|x|}\right)^{\mathrm{tan}}.
\end{align}

To calculate the tangential Laplacian along a smooth boundary, PDE people just choose coordinates adapted to $\partial\Omega$. Suppose that
$\partial\Omega$ is locally represented by a graph $x_{n+1}=\varphi(x_1,x_2,\dots,x_n)$, and $f$ is a function defined on $\overline{\Omega}$. We assume $(0,0)\in\partial\Omega$, $\Omega\cap B_r^{n+1}=\{(x',x_{n+1}):x_{n+1}>\varphi(x')\}\cap B_r^{n+1}$, $\nabla \varphi(0)=0$, and $D^2\varphi(0)$ is a diagonal matrix. It is easy to check that $\Gamma_{ij}^k(0,0)=0$ and $g_{ij}(0)=\delta_{ij}$; then
\begin{align}
\Delta_{\partial\Omega} f\Big|_{(0,0)}=\sum_{i=1}^n\frac{\partial^2f(x',\varphi(x'))}{\partial x_i^2}\Big|_{x'=0}.
\end{align}
Continuing the calculation, we get
\begin{align*}
\Delta_{\partial\Omega}f\Big|_{(0,0)}&=\sum_{i=1}^nf_{ii}(0,0)+f_{n+1}(0,0)\sum_{i=1}^n\varphi_{ii}(0)\\
&=\sum_{i=1}^nf_{ii}(0,0)+H\frac{\partial f}{\partial \vec{n}}(0,0),
\end{align*}
where $\vec{n}$ is the inward unit normal vector at $(0,0)\in\partial\Omega$. In fact, we have only reproved the formula above in a form convenient for PDE people.

As usual, by the Gauss equation and the Codazzi equation we have

\begin{align}
R_{ijkl}=h_{il}h_{jk}-h_{ik}h_{jl},
\end{align}
and
\begin{align}
\nabla_kh_{ij}=\nabla_jh_{ik}.
\end{align}
In fact, we have $h_{ij}=h_{ji}$, so
$\nabla h$ (or $\nabla B$) is a totally symmetric covariant $3$-tensor.

The mean curvature, Ricci curvature, sectional curvature, and scalar curvature are defined as
\begin{align}
H=g^{ij}h_{ij},\ \ Rc_{jk}=R_{ijki},\ \ K=\frac{R(X,Y,Y,X)}{|X|^2|Y|^2-\langle X,Y \rangle^2},\ \ R=g^{ij}Rc_{ij}.
\end{align}
In above definition, $H$ may be defined with the opposite sign.

By direct calculation, we have
\begin{align}
\Delta \textbf{F}=H\nu,\qquad -div_M(\nu)=H,\qquad -\Delta_M \nu=|B|^2\nu+\nabla H,
\end{align}
where $|B|^2=g^{ik}g^{jl}h_{ij}h_{kl}$.

The divergence theorem for a smooth, compact, properly embedded hypersurface with smooth boundary states that for any $C^1$ vector field $X$,
\begin{align}
\int_{M}div(X)dV_g=-\int_{M}H\langle \nu, X\rangle dV_g+\int_{\partial M}X\cdot \eta\, dS_{\partial M},
\end{align}
where $\eta$ denotes the outward unit conormal field to $\partial M$, which is tangent to $M$ at all boundary points.

Thus if $\partial M=\emptyset$, $u\in C_0^2(M)$ and $v\in C^2(M)$ (there is no need to assume $H\equiv0$, since $v\nabla u$ is tangent to $M$ and the curvature term vanishes; this coincides with the usual Riemannian geometry), we get
\begin{align}
\int_{M}\Delta u vdV_g+\int_{M}\langle \nabla u,\nabla v\rangle dV_g=0.
\end{align}

Let us recall Simons' identity when $H\equiv0$. That is,
\begin{equation}
\Delta_M \frac{1}{2}|B|^2=|\nabla_M B|^2-|B|^4.
\end{equation}

In 1975, Schoen, Simon, and Yau proved the following inequality for minimal hypersurfaces in $\mathbb{R}^{n+1}$:
\begin{align}
\Delta_M \frac{1}{2}|B|^2\geq\left(1+\frac{2}{n}\right)|\nabla_M |B||^2-|B|^4.
\end{align}
They used this inequality to prove the Bernstein theorem for $2\leq n\leq 5$.

 

 

Suppose that $x_{n+1}=u(x_1,x_2,...,x_n)$ with $x\in\Omega$ and $u\in C^2(\Omega)\cap C(\overline{\Omega})$. Then the induced metric is given by
\begin{equation}
g_{ij}=\delta_{ij}+D_iuD_ju.
\end{equation}

Taking the unit normal of the surface to be
\begin{equation}
\vec{n}=\frac{(-Du,1)}{\sqrt{1+|Du|^2}},
\end{equation}
the second fundamental form of the surface is
\begin{equation}
A_{ij}=\langle\vec{n},D_{ij}X\rangle=\frac{D_{ij}u}{\sqrt{1+|Du|^2}},
\end{equation}
that is
\begin{equation}
A=A_{ij}dx^idx^j.
\end{equation}

As in the 3-dimensional case, the principal curvatures are defined as the eigenvalues of the matrix
\[Ag^{-1}.\]

Note that \[g^{-1}=I-\frac{Du(Du)^T}{1+|Du|^2}\]
and
\[A=\frac{D^2u}{\sqrt{1+|Du|^2}},\]
then
\begin{equation}
\det(Ag^{-1})=\det(g^{-1})\det(A)=\frac{1}{1+|Du|^2}\cdot \frac{\det(D^2u)}{(1+|Du|^2)^{\frac{n}{2}}},
\end{equation}
and
\begin{equation}
\frac{1}{n}tr(g^{-1}A)=\frac{1}{n}\frac{(1+|Du|^2)\Delta u-DuD^2u(Du)^T}{(1+|Du|^2)^{\frac{3}{2}}}=\frac{1}{n}div\left(\frac{Du}{\sqrt{1+|Du|^2}}\right),
\end{equation}
where the first is the Gauss curvature and the second is the mean curvature.
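The equality of the trace form and the divergence form of the mean curvature can be verified symbolically for a concrete graph. A SymPy sketch (my own check, with an arbitrary polynomial $u$):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x**3 + x*y + y**2                 # an arbitrary test graph function
Du = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
W = sp.sqrt(1 + Du.dot(Du))           # W = sqrt(1 + |Du|^2)

g = sp.eye(2) + Du * Du.T             # induced metric g_ij = delta_ij + u_i u_j
A = sp.hessian(u, (x, y)) / W         # second fundamental form A = D^2u / W
trace_form = (g.inv() * A).trace()    # tr(g^{-1} A)

div_form = sp.diff(Du[0] / W, x) + sp.diff(Du[1] / W, y)   # div(Du / W)
assert sp.simplify(trace_form - div_form) == 0
```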

The Weingarten transformation is given by
\[-\partial_i\nu=A_{i}^j\mathbf{r}_j,\]
where $A_i^j=A_{ik}g^{kj}$ and $\mathbf{r}_j=\partial_jF$ are the coordinate tangent vectors.

In the above computation, we used the following fact from linear algebra:
\[(I+xx^T)^{-1}=I-\frac{xx^T}{1+|x|^2}.\]
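This rank-one inversion formula (a special case of the Sherman-Morrison formula) is easy to confirm numerically (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(4)
lhs = np.linalg.inv(np.eye(4) + np.outer(v, v))
rhs = np.eye(4) - np.outer(v, v) / (1 + v.dot(v))
assert np.allclose(lhs, rhs)
```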

Note that
\[\Gamma_{ij}^k=\frac{1}{2}g^{kl}\left\{\frac{\partial g_{il}}{\partial x_j}+\frac{\partial g_{jl}}{\partial x_i}-\frac{\partial g_{ij}}{\partial x_l}\right\},\]
and
\[g_{ij}=\delta_{ij}+D_iuD_ju,\]
then
\[\Gamma_{ij}^k=g^{kl}D_{ij}uD_lu=\Big(\delta_{kl}-\frac{D_{k}uD_{l}u}{1+|Du|^2}\Big)D_{ij}uD_lu=\frac{D_{ij}u\,D_ku}{1+|Du|^2}.\]
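The formula $\Gamma_{ij}^k=g^{kl}D_{ij}uD_lu$ for the graph metric can be verified symbolically against the definition of the Christoffel symbols. A SymPy sketch (my own check, with a sample polynomial $u$):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]
u = x1**2 * x2 + x2**3                # an arbitrary test graph function
Du = [sp.diff(u, v) for v in X]
g = sp.Matrix(2, 2, lambda i, j: sp.KroneckerDelta(i, j) + Du[i] * Du[j])
ginv = g.inv()

def Gamma(k, i, j):                   # Christoffel symbols from the definition
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[i, l], X[j]) + sp.diff(g[j, l], X[i])
                      - sp.diff(g[i, j], X[l]))
        for l in range(2))

for k in range(2):
    for i in range(2):
        for j in range(2):
            claimed = sum(ginv[k, l] * sp.diff(u, X[i], X[j]) * Du[l]
                          for l in range(2))
            assert sp.simplify(Gamma(k, i, j) - claimed) == 0
```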

posted @ 2021-04-09 16:36 Minimal_Cone