$$ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Self-defined math definitions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Math symbol commands
\newcommand{\intd}{\,{\rm d}} % Symbol 'd' used in integration, such as 'dx'
\newcommand{\diff}{{\rm d}} % Symbol 'd' used in differentiation
\newcommand{\Diff}{{\rm D}} % Symbol 'D' used in differentiation
\newcommand{\pdiff}{\partial} % Partial derivative
\newcommand{\DD}[2]{\frac{\diff}{\diff #2}\left( #1 \right)}
\newcommand{\Dd}[2]{\frac{\diff #1}{\diff #2}}
\newcommand{\PD}[2]{\frac{\pdiff}{\pdiff #2}\left( #1 \right)}
\newcommand{\Pd}[2]{\frac{\pdiff #1}{\pdiff #2}}
\newcommand{\rme}{{\rm e}} % Exponential e
\newcommand{\rmi}{{\rm i}} % Imaginary unit i
\newcommand{\rmj}{{\rm j}} % Imaginary unit j
\newcommand{\vect}[1]{\boldsymbol{#1}} % Vector typeset in bold italic
\newcommand{\phs}[1]{\dot{#1}} % Scalar phasor
\newcommand{\phsvect}[1]{\boldsymbol{\dot{#1}}} % Vector phasor
\newcommand{\normvect}{\vect{n}} % Normal vector: n
\newcommand{\dform}[1]{\overset{\rightharpoonup}{\boldsymbol{#1}}} % Vector for differential form
\newcommand{\cochain}[1]{\overset{\rightharpoonup}{#1}} % Vector for cochain
\newcommand{\bigabs}[1]{\bigg\lvert#1\bigg\rvert} % Absolute value (big vertical bars)
\newcommand{\Abs}[1]{\big\lvert#1\big\rvert} % Absolute value (medium vertical bars)
\newcommand{\abs}[1]{\lvert#1\rvert} % Absolute value (single vertical bars)
\newcommand{\bignorm}[1]{\bigg\lVert#1\bigg\rVert} % Norm (big double vertical bars)
\newcommand{\Norm}[1]{\big\lVert#1\big\rVert} % Norm (medium double vertical bars)
\newcommand{\norm}[1]{\lVert#1\rVert} % Norm (double vertical bars)
\newcommand{\ouset}[3]{\overset{#3}{\underset{#2}{#1}}} % Over- and underset
% Superscript/subscript for the column index of a matrix, used in tensor analysis
\newcommand{\cscript}[1]{\;\; #1}
% Star symbol used as prefix in front of a paragraph with no indent
\newcommand{\prefstar}{\noindent$\ast$ }
% Big vertical line restricting a function, e.g. $u(x)\restrict_{\Omega_0}$
\newcommand{\restrict}{\big\vert}
% Math operators typeset in Roman font
\DeclareMathOperator{\sgn}{sgn} % Sign function
\DeclareMathOperator{\erf}{erf} % Error function
\DeclareMathOperator{\Bd}{Bd} % Boundary of a set, used in topology
\DeclareMathOperator{\Int}{Int} % Interior of a set, used in topology
\DeclareMathOperator{\rank}{rank} % Rank of a matrix
\DeclareMathOperator{\divergence}{div} % Divergence
\DeclareMathOperator{\curl}{curl} % Curl
\DeclareMathOperator{\grad}{grad} % Gradient
\DeclareMathOperator{\tr}{tr} % Trace
\DeclareMathOperator{\Span}{span} % Span
$$


Summary of multi-index convention for partial derivatives

Let \(d\) be the spatial dimension. A multi-index is a tuple \(\alpha = (\alpha_1, \cdots, \alpha_d) \in \mathbb{N}_0^d\). Its order is

\(\displaystyle \lvert \alpha \rvert = \sum_{i = 1}^d \alpha_i\) (1)

The factorial of a multi-index is the product of the factorials of its components:

\(\displaystyle \alpha ! = \prod_{i = 1}^d \alpha_i !\) (2)

A power of \(x \in \mathbb{R}^d\) with multi-index exponent is taken componentwise:

\(\displaystyle x^{\alpha} = \prod_{i = 1}^d x_i^{\alpha_i}\) (3)

The \(\lvert \alpha \rvert\)-fold mixed derivative with respect to \(x\):

\(\displaystyle \begin{array}{rll} \partial_x^{\alpha} & = \prod_{i = 1}^d \partial_{x_i}^{\alpha_i} & \text{(4)}\\ \partial_x^{\alpha} x^{\beta} & = \dfrac{\beta !}{(\beta - \alpha) !} x^{\beta - \alpha} \quad (\alpha \leq \beta) & \text{(5)} \end{array}\)

where \(\beta \in \mathbb{N}_0^d\); \(\alpha \leq \beta\) means \(\alpha_i \leq \beta_i\) for \(i = 1, \cdots, d\); and \(\beta - \alpha\) is the componentwise subtraction of the two multi-indices. When \(\beta = \alpha\),

\(\displaystyle \partial_x^{\alpha} x^{\alpha} = \alpha !\) (6)

The binomial coefficient for multi-indices is the product of the componentwise binomial coefficients:

\(\displaystyle C_{\alpha}^{\beta} = \prod_{i = 1}^d C_{\alpha_i}^{\beta_i} \quad (\beta \leq \alpha)\) (7)
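As an illustration (not part of the original note), conventions (1)–(3), (5), and (7) can be sketched in Python; all helper names here are ad hoc:

```python
from math import factorial, comb
from functools import reduce

def mi_abs(alpha):
    """|alpha| = sum of components, Eq. (1)."""
    return sum(alpha)

def mi_factorial(alpha):
    """alpha! = product of component factorials, Eq. (2)."""
    return reduce(lambda acc, a: acc * factorial(a), alpha, 1)

def mi_power(x, alpha):
    """x^alpha = product of x_i^{alpha_i}, Eq. (3)."""
    return reduce(lambda acc, pair: acc * pair[0] ** pair[1], zip(x, alpha), 1.0)

def falling(beta, alpha):
    """beta!/(beta - alpha)!, the coefficient in Eq. (5); requires alpha <= beta."""
    return mi_factorial(beta) // mi_factorial(tuple(b - a for b, a in zip(beta, alpha)))

def mi_comb(alpha, beta):
    """C_alpha^beta = product of component binomials, Eq. (7); requires beta <= alpha."""
    return reduce(lambda acc, pair: acc * comb(pair[0], pair[1]), zip(alpha, beta), 1)
```

For example, `falling((3, 2), (1, 1))` returns \(3! \cdot 2! / (2! \cdot 1!) = 6\), the coefficient of \(\partial_x^{(1,1)} x^{(3,2)}\).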

Proposition 1. (Leibniz formula for \(\lvert \alpha \rvert\)-fold partial derivatives)

\(\displaystyle \partial_x^{\alpha} (uv) = \sum_{\beta \leq \alpha} C_{\alpha}^{\beta} (\partial_x^{\beta} u) (\partial_x^{\alpha - \beta} v),\) (8)

where \(\alpha\) and \(\beta\) are multi-indices and \(x \in \mathbb{R}^d\).

Proof. 1. When $d=1$, this is the classical Leibniz formula from calculus.

2. Assume it holds for $d=n$. Then for $d=n+1$, we have

\begin{equation*}
      \begin{split}
        \pdiff_x^{\alpha}(uv) &= \left( \prod_{i=1}^{n+1}\pdiff_{x_i}^{\alpha_i} \right)(uv) =
        \pdiff_{x_1}^{\alpha_1} \left( \prod_{i=2}^{n+1}\pdiff_{x_i}^{\alpha_i} \right)(uv) \\
        &\phantom{=}\ \text{Let $\tilde{\alpha}=(\alpha_2,\cdots,\alpha_{n+1})$ and
          $\tilde{x}=(x_2,\cdots,x_{n+1})$:} \\
        &= \pdiff_{x_1}^{\alpha_1}
        \pdiff_{\tilde{x}}^{\tilde{\alpha}}(uv) = \pdiff_{x_1}^{\alpha_1}\left(
          \sum_{\tilde{\beta}\leq\tilde{\alpha}}C_{\tilde{\alpha}}^{\tilde{\beta}}(\pdiff_{\tilde{x}}^{\tilde{\beta}}u)(\pdiff_{\tilde{x}}^{\tilde{\alpha}-\tilde{\beta}}v)
        \right) \\
        &= \sum_{\tilde{\beta}\leq\tilde{\alpha}}C_{\tilde{\alpha}}^{\tilde{\beta}}
        \underbrace{\pdiff_{x_1}^{\alpha_1}\left[(\pdiff_{\tilde{x}}^{\tilde{\beta}}u)(\pdiff_{\tilde{x}}^{\tilde{\alpha}-\tilde{\beta}}v)\right]}_{\text{Apply the Leibniz formula for $d=1$}} \\
        &= \sum_{\tilde{\beta}\leq\tilde{\alpha}}C_{\tilde{\alpha}}^{\tilde{\beta}}\left[
          \sum_{\beta_1\leq\alpha_1}C_{\alpha_1}^{\beta_1}(\pdiff_{x_1}^{\beta_1}\pdiff_{\tilde{x}}^{\tilde{\beta}}u)(\pdiff_{x_1}^{\alpha_1-\beta_1}\pdiff_{\tilde{x}}^{\tilde{\alpha}-\tilde{\beta}}v)
        \right] \\
        &= \sum_{\tilde{\beta}\leq\tilde{\alpha}}\sum_{\beta_1\leq\alpha_1}C_{\tilde{\alpha}}^{\tilde{\beta}}C_{\alpha_1}^{\beta_1}(\pdiff_x^{\beta}u)(\pdiff_x^{\alpha-\beta}v) \\
        &= \sum_{\beta\leq\alpha}C_{\alpha}^{\beta}(\pdiff_x^{\beta}u)(\pdiff_x^{\alpha-\beta}v),
      \end{split}
    \end{equation*}
where $\beta = (\beta_1, \tilde{\beta})$.

3. By the principle of mathematical induction, the proposition holds for all $d \geq 1$.
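Formula (8) can also be spot-checked symbolically. This sketch assumes SymPy is available; the functions $u$, $v$ and the multi-index $\alpha$ are arbitrary choices for illustration:

```python
import sympy as sp
from itertools import product
from math import comb

x1, x2 = sp.symbols('x1 x2')
u = sp.sin(x1) * sp.exp(x2)
v = x1**2 * x2**3

alpha = (2, 1)  # take the mixed derivative d^2/dx1^2 d/dx2
lhs = sp.diff(u * v, x1, alpha[0], x2, alpha[1])

# Right-hand side of Eq. (8): sum over all beta <= alpha componentwise
rhs = 0
for beta in product(range(alpha[0] + 1), range(alpha[1] + 1)):
    c = comb(alpha[0], beta[0]) * comb(alpha[1], beta[1])  # C_alpha^beta, Eq. (7)
    du = sp.diff(u, x1, beta[0], x2, beta[1])
    dv = sp.diff(v, x1, alpha[0] - beta[0], x2, alpha[1] - beta[1])
    rhs += c * du * dv

assert sp.simplify(lhs - rhs) == 0  # both sides agree identically
```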

Proposition 2. (Taylor expansion for multi-dimensional functions) Let \(f : X \rightarrow \mathbb{R}\) be a function from \(C^m (X)\) with \(X \subset \mathbb{R}^d\). Let \(x_0 \in X\) be an expansion center. The Taylor expansion around \(x_0\) is

\(\displaystyle f (x) = \sum_{\substack{\alpha \in \mathbb{N}_0^d \\ \lvert \alpha \rvert \leq m}} (x - x_0)^{\alpha} \frac{1}{\alpha !} \partial_x^{\alpha} f (x_0) + R_r,\) (9)

where \(R_r\) is the higher-order remainder.
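One way to sanity-check (9): for a polynomial of total degree at most \(m\), the remainder \(R_r\) vanishes, so the expansion reproduces the function exactly. A SymPy sketch (the polynomial and expansion center are arbitrary choices, not from the original note):

```python
import sympy as sp
from itertools import product
from math import factorial

x, y = sp.symbols('x y')
f = 1 + 2*x + 3*y + x*y + x**2 - y**2   # polynomial of total degree 2
x0, y0 = sp.Rational(1), sp.Rational(-2)  # expansion center
m = 2

# Build the sum in Eq. (9) over all alpha with |alpha| <= m
taylor = 0
for a1, a2 in product(range(m + 1), repeat=2):
    if a1 + a2 > m:
        continue
    coeff = sp.diff(f, x, a1, y, a2).subs({x: x0, y: y0}) / (factorial(a1) * factorial(a2))
    taylor += coeff * (x - x0)**a1 * (y - y0)**a2

assert sp.expand(taylor - f) == 0  # remainder vanishes for a degree-m polynomial
```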

Proposition 3. The total number of terms in the \(m\)-th order Taylor expansion for a function defined on \(X \subset \mathbb{R}^d\) is \(C_{m + d}^d\).

Proof  The number of terms in the Taylor expansion with $\abs{\alpha}=k$ equals the number of ways of distributing the $k$ partial derivatives among the $d$ coordinate components, where a component may receive none. By the stars-and-bars argument, this is $C_{k+d-1}^{d-1}$. The total number of terms in the Taylor expansion is
  \begin{equation*}
    \begin{split}
      \sum_{k=0}^{m}C_{k+d-1}^{d-1} &= C_{d-1}^{d-1} + C_d^{d-1} + \cdots + C_{m+d-1}^{d-1} \\
      \text{Replace $C_{d-1}^{d-1}$ with $C_d^d$,} \\
      &= C_d^d + C_d^{d-1} + C_{d+1}^{d-1} + \cdots + C_{m+d-1}^{d-1} \\
      \text{Apply $C_n^{m-1} + C_n^m = C_{n+1}^m$,} \\
      &= C_{d+1}^d + C_{d+1}^{d-1} + \cdots + C_{m+d-1}^{d-1} \\
      &= C_{m+d}^d.
    \end{split}
  \end{equation*}
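The count \(C_{m+d}^d\) can be verified by brute-force enumeration of multi-indices; a small Python sketch (illustrative, not part of the original proof):

```python
from itertools import product
from math import comb

def n_taylor_terms(d, m):
    """Count multi-indices alpha in N_0^d with |alpha| <= m by brute force."""
    return sum(1 for alpha in product(range(m + 1), repeat=d) if sum(alpha) <= m)

# Check Proposition 3 for a range of dimensions and expansion orders
for d in range(1, 5):
    for m in range(6):
        assert n_taylor_terms(d, m) == comb(m + d, d)
```

For instance, a second-order expansion in \(d = 3\) dimensions has \(C_5^3 = 10\) terms.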

Comment For the Sobolev norms in [Ada75], partial derivatives (in the weak sense) of all orders up to \(m\) also enter:

\(\displaystyle \lVert u \rVert_{m, p} = \left( \sum_{\lvert \alpha \rvert \leq m} \lVert \partial^{\alpha} u \rVert_p^p \right)^{1 / p} \quad (1 \leq p < \infty) .\)
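As a concrete instance of this norm (an illustration, not from [Ada75]): for \(u = \sin x\) on \((0, \pi)\) with \(m = 1\), \(p = 2\), we get \(\lVert u \rVert_{1,2}^2 = \int_0^\pi \sin^2 x \intd x + \int_0^\pi \cos^2 x \intd x = \pi\). A midpoint-rule check in Python:

```python
import math

def sobolev_norm_m1_p2(u, du, a, b, n=100_000):
    """||u||_{1,2} on (a, b) via the midpoint rule: (int |u|^2 + int |u'|^2)^{1/2}."""
    h = (b - a) / n
    s = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        s += (u(x)**2 + du(x)**2) * h
    return math.sqrt(s)

# For u = sin on (0, pi) the exact value is sqrt(pi)
val = sobolev_norm_m1_p2(math.sin, math.cos, 0.0, math.pi)
```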

References

[Ada75]

Robert Alexander Adams. Sobolev Spaces. Pure and Applied Mathematics: A Series of Monographs and Textbooks. Academic Press, June 1975. ISBN 9780120441501.

posted @ 2022-02-04 19:25 皮波迪博士