Summary of multi-index convention for partial derivatives
Let \(d\) be the spatial dimension. A multi-index is a tuple \(\alpha = (\alpha_1, \cdots, \alpha_d) \in \mathbb{N}_0^d\) of non-negative integers. Its order is the sum of its components:
\(\displaystyle \lvert \alpha \rvert = \sum_{i = 1}^d \alpha_i\) | (1) |
The factorial of a multi-index is the product of the factorials of its component indices:
\(\displaystyle \alpha ! = \prod_{i = 1}^d \alpha_i !\) | (2) |
A power of \(x \in \mathbb{R}^d\) with a multi-index exponent distributes over the coordinate components:
\(\displaystyle x^{\alpha} = \prod_{i = 1}^d x_i^{\alpha_i}\) | (3) |
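As a concrete example of (1)–(3) (a small case added for illustration), take \(d = 3\) and \(\alpha = (1, 0, 2)\); then
\(\displaystyle \lvert \alpha \rvert = 3, \qquad \alpha ! = 1! \cdot 0! \cdot 2! = 2, \qquad x^{\alpha} = x_1^1 x_2^0 x_3^2 = x_1 x_3^2 .\)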
The \(\lvert \alpha \rvert\)-fold mixed partial derivative with respect to \(x\):
\(\displaystyle \partial_x^{\alpha} = \prod_{i = 1}^d \partial_{x_i}^{\alpha_i} = \frac{\partial^{\lvert \alpha \rvert}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}\) | (4) |
Applied to a monomial \(x^{\beta}\), it gives
\(\displaystyle \partial_x^{\alpha} x^{\beta} = \frac{\beta !}{(\beta - \alpha) !} x^{\beta - \alpha} \quad (\alpha \leq \beta),\) | (5) |
where \(\beta \in \mathbb{N}_0^d\), \(\alpha \leq \beta\) means \(\alpha_i \leq \beta_i\) for \(i = 1, \cdots, d\), and \(\beta - \alpha\) is the usual componentwise subtraction of two vectors. When \(\beta \equiv \alpha\),
\(\displaystyle \partial_x^{\alpha} x^{\alpha} = \alpha !\) | (6) |
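As a concrete illustration of (5) and (6) (a worked case added for clarity), take \(d = 2\), \(\beta = (2, 1)\), and \(\alpha = (1, 0)\) or \(\alpha = (2, 1)\); then
\(\displaystyle \partial_x^{(1, 0)} x_1^2 x_2 = \frac{(2, 1) !}{(1, 1) !} x^{(1, 1)} = 2 x_1 x_2, \qquad \partial_x^{(2, 1)} x_1^2 x_2 = (2, 1) ! = 2,\)
which agrees with differentiating the monomial directly.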
The binomial coefficient (combination number) for multi-indices:
\(\displaystyle C_{\alpha}^{\beta} = \prod_{i = 1}^d C_{\alpha_i}^{\beta_i} \quad (\beta \leq \alpha)\) | (7) |
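For example (a worked case added for illustration), with \(\alpha = (2, 1)\) and \(\beta = (1, 1) \leq \alpha\),
\(\displaystyle C_{\alpha}^{\beta} = C_2^1 \, C_1^1 = 2 \cdot 1 = 2 .\)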
Proposition 1. (Leibniz formula for \(\lvert \alpha \rvert\)-fold partial derivatives)
\(\displaystyle \partial_x^{\alpha} (uv) = \sum_{\beta \leq \alpha} C_{\alpha}^{\beta} (\partial_x^{\beta} u) (\partial_x^{\alpha - \beta} v),\) | (8) |
where \(\alpha\) and \(\beta\) are multi-indices and \(x \in \mathbb{R}^d\).
Proof 1. When $d=1$, it is the classical Leibniz formula in calculus.
2. Assume it holds for $d=n$. Then for $d=n+1$, we have
\begin{equation*}
\begin{split}
\pdiff_x^{\alpha}(uv) &=\left( \prod_{i=1}^{n+1}\pdiff_{x_i}^{\alpha_i} \right)(uv) =
\pdiff_{x_1}^{\alpha_1} \left( \prod_{i=2}^{n+1}\pdiff_{x_i}^{\alpha_i} \right)(uv) \\
\text{Let $\tilde{\alpha}=(\alpha_2,\cdots,\alpha_{n+1})$ and
$\tilde{x}=(x_2,\cdots,x_{n+1})$} \\ &= \pdiff_{x_1}^{\alpha_1}
\pdiff_{\tilde{x}}^{\tilde{\alpha}}(uv) = \pdiff_{x_1}^{\alpha_1}\left(
\sum_{\tilde{\beta}\leq\tilde{\alpha}}C_{\tilde{\alpha}}^{\tilde{\beta}}(\pdiff_{\tilde{x}}^{\tilde{\beta}}u)(\pdiff_{\tilde{x}}^{\tilde{\alpha}-\tilde{\beta}}v)
\right) \\
&= \sum_{\tilde{\beta}\leq\tilde{\alpha}}C_{\tilde{\alpha}}^{\tilde{\beta}}
\underbrace{\pdiff_{x_1}^{\alpha_1}\left[(\pdiff_{\tilde{x}}^{\tilde{\beta}}u)(\pdiff_{\tilde{x}}^{\tilde{\alpha}-\tilde{\beta}}v)\right]}_{\text{Apply the Leibniz formula for $d=1$}} \\
&= \sum_{\tilde{\beta}\leq\tilde{\alpha}}C_{\tilde{\alpha}}^{\tilde{\beta}}\left[
\sum_{\beta_1\leq\alpha_1}C_{\alpha_1}^{\beta_1}(\pdiff_{x_1}^{\beta_1}\pdiff_{\tilde{x}}^{\tilde{\beta}}u)(\pdiff_{x_1}^{\alpha_1-\beta_1}\pdiff_{\tilde{x}}^{\tilde{\alpha}-\tilde{\beta}}v)
\right] \\
\text{Let $\beta=(\beta_1,\tilde{\beta})$,} \\
&=
\sum_{\tilde{\beta}\leq\tilde{\alpha}}\sum_{\beta_1\leq\alpha_1}C_{\tilde{\alpha}}^{\tilde{\beta}}C_{\alpha_1}^{\beta_1}(\pdiff_x^{\beta}u)(\pdiff_x^{\alpha-\beta}v)
\\
\text{Apply definition (7), $C_{\alpha_1}^{\beta_1}C_{\tilde{\alpha}}^{\tilde{\beta}}=C_{\alpha}^{\beta}$,} \\
&= \sum_{\beta\leq\alpha}C_{\alpha}^{\beta}(\pdiff_x^{\beta}u)(\pdiff_x^{\alpha-\beta}v).
\end{split}
\end{equation*}
3. By the principle of mathematical induction, the proposition is proved.
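As a quick sanity check of (8) (a worked case, not part of the proof above), take \(d = 2\) and \(\alpha = (1, 1)\). The sum runs over \(\beta \in \{ (0, 0), (1, 0), (0, 1), (1, 1) \}\) with all \(C_{\alpha}^{\beta} = 1\), so
\(\displaystyle \partial_{x_1} \partial_{x_2} (u v) = u \, \partial_{x_1} \partial_{x_2} v + (\partial_{x_1} u) (\partial_{x_2} v) + (\partial_{x_2} u) (\partial_{x_1} v) + (\partial_{x_1} \partial_{x_2} u) \, v,\)
which agrees with applying the one-dimensional product rule in \(x_2\) and then in \(x_1\).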
Proposition 2. (Taylor expansion for multi-dimensional functions) Let \(f : X \rightarrow \mathbb{R}\) be a function in \(C^m (X)\), where \(X \subset \mathbb{R}^d\), and let \(x_0 \in X\) be the expansion center. The Taylor expansion around \(x_0\) is
\(\displaystyle f (x) = \sum_{\substack{\alpha \in \mathbb{N}_0^d \\ \lvert \alpha \rvert \leq m}} (x - x_0)^{\alpha} \frac{1}{\alpha !} \partial_x^{\alpha} f (x_0) + R_m,\) | (9) |
where \(R_m\) is the higher-order remainder.
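For instance (an illustrative case added here), with \(d = 2\), \(m = 2\) and \(x_0 = 0\), expansion (9) reads
\(\displaystyle f (x) = f (0) + x_1 \partial_{x_1} f (0) + x_2 \partial_{x_2} f (0) + \frac{x_1^2}{2} \partial_{x_1}^2 f (0) + x_1 x_2 \, \partial_{x_1} \partial_{x_2} f (0) + \frac{x_2^2}{2} \partial_{x_2}^2 f (0) + R_m .\)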
Proposition 3. The total number of terms in the \(m\)-th order Taylor expansion for a function defined on \(X \subset \mathbb{R}^d\) is \(C_{m + d}^d\).
Proof The number of terms in the Taylor expansion with $\abs{\alpha}=k$ equals the number of ways to distribute $k$ partial derivatives among the $d$ coordinate components, where a component may receive none. By the standard stars-and-bars argument, this number is $C_{k+d-1}^{d-1}$. The total number of terms in the Taylor expansion is therefore
\begin{equation*}
\begin{split}
\sum_{k=0}^{m}C_{k+d-1}^{d-1} &= C_{d-1}^{d-1} + C_d^{d-1} + \cdots + C_{m+d-1}^{d-1} \\
\text{Replace $C_{d-1}^{d-1}$ with $C_d^d$,} \\
&= C_d^d + C_d^{d-1} + C_{d+1}^{d-1} + \cdots + C_{m+d-1}^{d-1} \\
\text{Apply $C_n^{m-1} + C_n^m = C_{n+1}^m$,} \\
&= C_{d+1}^d + C_{d+1}^{d-1} + \cdots + C_{m+d-1}^{d-1} \\
&= C_{m+d}^d.
\end{split}
\end{equation*}
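For example (a worked check of Proposition 3), with \(d = 2\) and \(m = 2\) there are \(C_{k + 1}^1 = k + 1\) multi-indices of order \(k\), hence \(1 + 2 + 3 = 6 = C_4^2\) terms in total, corresponding to the monomials
\(\displaystyle 1, \quad x_1, \quad x_2, \quad x_1^2, \quad x_1 x_2, \quad x_2^2 .\)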
Comment For the Sobolev norms in [Ada75], there are also partial derivatives (in the weak sense) of different orders:
\(\displaystyle \lVert u \rVert_{W^{m, p} (X)} = \left( \sum_{\lvert \alpha \rvert \leq m} \lVert \partial_x^{\alpha} u \rVert_{L^p (X)}^p \right)^{1 / p} \quad (1 \leq p < \infty) .\) | (10) |
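For instance (an illustration under the \(W^{m, p}\) norm just stated), taking \(m = 1\), \(p = 2\) and \(d = 2\) collects all weak derivatives of order \(\lvert \alpha \rvert \leq 1\):
\(\displaystyle \lVert u \rVert_{W^{1, 2} (X)} = \left( \lVert u \rVert_{L^2 (X)}^2 + \lVert \partial_{x_1} u \rVert_{L^2 (X)}^2 + \lVert \partial_{x_2} u \rVert_{L^2 (X)}^2 \right)^{1 / 2} .\)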
References
[Ada75]
Robert Alexander Adams. Sobolev Spaces. Pure and Applied Mathematics: A Series of Monographs and Textbooks. Academic Press, June 1975. ISBN: 9780120441501.