
Speex is based on CELP, which stands for Code Excited Linear Prediction. This section attempts to introduce the principles behind CELP, so if you are already familiar with CELP, you can safely skip to section 7. The CELP technique is based on three ideas:

 

  1. The use of a linear prediction (LP) model to model the vocal tract
  2. The use of (adaptive and fixed) codebook entries as input (excitation) of the LP model
  3. The search performed in closed-loop in a ``perceptually weighted domain''

This section describes the basic ideas behind CELP. Note that it's still incomplete.

 


Linear Prediction (LPC)

Linear prediction is at the base of many speech coding techniques, including CELP. The idea behind it is to predict the signal $x[n]$ using a linear combination of its past samples:

 

 

\begin{displaymath}
y[n]=\sum_{i=1}^{N}a_{i}x[n-i]\end{displaymath}

 

 

where $y[n]$ is the linear prediction of $x[n]$. The prediction error is thus given by: 

 

\begin{displaymath}
e[n]=x[n]-y[n]=x[n]-\sum_{i=1}^{N}a_{i}x[n-i]\end{displaymath}

 

 

The goal of the LPC analysis is to find the best prediction coefficients $a_{i}$ which minimize the quadratic error function: 

 

\begin{displaymath}
E=\sum_{n=0}^{L-1}\left[e[n]\right]^{2}=\sum_{n=0}^{L-1}\left[x[n]-\sum_{i=1}^{N}a_{i}x[n-i]\right]^{2}\end{displaymath}
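As a quick numerical illustration (plain Python, not taken from the Speex sources; the helper name `prediction_error` is hypothetical), the error function above can be evaluated directly for a given set of coefficients:

```python
def prediction_error(x, a):
    """Return the residual e[n] and total squared error E for
    coefficients a = [a_1, ..., a_N]; samples before n = 0 are zero."""
    N = len(a)
    e = []
    for n in range(len(x)):
        # y[n] = sum_i a_i x[n-i], skipping indices before the signal start
        pred = sum(a[i] * x[n - 1 - i] for i in range(N) if n - 1 - i >= 0)
        e.append(x[n] - pred)
    E = sum(v * v for v in e)
    return e, E

# A signal obeying x[n] = 0.9 x[n-1] is predicted perfectly by a_1 = 0.9,
# so all the error energy comes from the first (unpredictable) sample.
x = [1.0]
for _ in range(9):
    x.append(0.9 * x[-1])
e, E = prediction_error(x, [0.9])
```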

 

 

That can be done by making all derivatives $\frac{\partial E}{\partial a_{i}}$ equal to zero: 

 

\begin{displaymath}
\frac{\partial E}{\partial a_{i}}=\frac{\partial}{\partial a_{i}}\sum_{n=0}^{L-1}\left[x[n]-\sum_{i=1}^{N}a_{i}x[n-i]\right]^{2}=0\end{displaymath}

 

 

The $a_{i}$ filter coefficients are computed using the Levinson-Durbin algorithm, which starts from the auto-correlation $R(m)$ of the signal $x[n]$.

 

 

\begin{displaymath}
R(m)=\sum_{i=0}^{N-1}x[i]x[i-m]\end{displaymath}

 

 

For an order $N$ filter, we have: 

 

\begin{displaymath}
\mathbf{R}=\left[\begin{array}{cccc}
R(0) & R(1) & \cdots & R(N-1)\\
R(1) & R(0) & \cdots & R(N-2)\\
\vdots & \vdots & \ddots & \vdots\\
R(N-1) & R(N-2) & \cdots & R(0)\end{array}\right]\end{displaymath}

 

 

 

 

\begin{displaymath}
\mathbf{r}=\left[\begin{array}{c}
R(1)\\
R(2)\\
\vdots\\
R(N)\end{array}\right]\end{displaymath}

 

 

The filter coefficients $a_{i}$ are found by solving the system $\mathbf{Ra}=\mathbf{r}$. What the Levinson-Durbin algorithm does here is make the solution $\mathcal{O}\left(N^{2}\right)$ instead of $\mathcal{O}\left(N^{3}\right)$ by exploiting the fact that the matrix $\mathbf{R}$ is Toeplitz and Hermitian. Also, it can be proven that all the roots of $A(z)$ lie within the unit circle, which means that $1/A(z)$ is always stable. That is true in theory; in practice, because of finite precision, two techniques are commonly used to make sure the filter stays stable. First, we multiply $R(0)$ by a number slightly above one (such as 1.0001), which is equivalent to adding a small amount of noise to the signal. Second, we can apply a window to the auto-correlation, which is equivalent to filtering in the frequency domain, reducing sharp resonances.
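A minimal sketch of the recursion, in plain Python rather than Speex's C code (function names are illustrative, and none of the stabilization tricks above are applied):

```python
def autocorr(x, N):
    """R(m) for m = 0..N, summing only over valid sample indices."""
    return [sum(x[i] * x[i - m] for i in range(m, len(x)))
            for m in range(N + 1)]

def levinson_durbin(R, N):
    """Solve R a = r for a_1..a_N in O(N^2).

    Returns (coefficients, residual prediction-error energy)."""
    a = [0.0] * (N + 1)          # a[0] is a placeholder; a[1..N] are used
    err = R[0]
    for i in range(1, N + 1):
        # Reflection coefficient for order i
        k = (R[i] - sum(a[j] * R[i - j] for j in range(1, i))) / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= 1.0 - k * k       # error energy shrinks at each order
    return a[1:], err

# An AR(1) signal x[n] = 0.9 x[n-1] should yield a_1 very close to 0.9.
x = [0.9 ** n for n in range(200)]
coeffs, err = levinson_durbin(autocorr(x, 1), 1)
```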

The linear prediction model represents each speech sample as a linear combination of past samples, plus an error signal called the excitation (or residual). 

 

\begin{displaymath}
x[n]=\sum_{i=1}^{N}a_{i}x[n-i]+e[n]\end{displaymath}

 

 

In the z-domain, this can be expressed as

 

 

\begin{displaymath}
x(z)=\frac{1}{A(z)}\: e(z)\end{displaymath}

 

 

where $A(z)$ is defined as

 

 

\begin{displaymath}
A(z)=1-\sum_{i=1}^{N}a_{i}z^{-i}\end{displaymath}

 

 

We usually refer to $A(z)$ as the analysis filter and $1/A(z)$ as the synthesis filter. The whole process is called short-term prediction as it predicts the signal $x[n]$ using only its $N$ past samples, where $N$ is usually around 10.
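The analysis/synthesis relationship can be sketched as follows (illustrative Python, assuming zero initial filter state; applying $A(z)$ and then $1/A(z)$ recovers the signal):

```python
def analysis(x, a):
    """A(z): e[n] = x[n] - sum_i a_i x[n-i]."""
    return [x[n] - sum(a[i] * x[n - 1 - i]
                       for i in range(len(a)) if n - 1 - i >= 0)
            for n in range(len(x))]

def synthesis(e, a):
    """1/A(z): x[n] = sum_i a_i x[n-i] + e[n]."""
    x = []
    for n in range(len(e)):
        pred = sum(a[i] * x[n - 1 - i]
                   for i in range(len(a)) if n - 1 - i >= 0)
        x.append(pred + e[n])
    return x

# Round-tripping any signal through A(z) then 1/A(z) gives it back.
x = [0.3, -1.0, 0.5, 0.2, -0.4, 0.8]
a = [1.2, -0.5]                  # arbitrary illustrative coefficients
rec = synthesis(analysis(x, a), a)
```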

Because LPC coefficients have very little robustness to quantization, they are converted to Line Spectral Pair (LSP) coefficients, which behave much better under quantization; one of their advantages is that it is easy to keep the filter stable.

 


Pitch Prediction

During voiced segments, the speech signal is periodic, so it is possible to take advantage of that property by approximating the excitation signal $e[n]$ by a gain times the past of the excitation:

 

 

\begin{displaymath}
e[n]\simeq p[n]=\beta e[n-T]\end{displaymath}

 

 

where $T$ is the pitch period and $\beta$ is the pitch gain. We call this long-term prediction since the excitation is predicted from $e[n-T]$ with $T\gg N$.
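A least-squares pitch search can be sketched as follows (illustrative Python, not the Speex search; for each candidate $T$ the optimal gain is $\beta=\sum e[n]e[n-T]/\sum e[n-T]^{2}$):

```python
def pitch_search(e, n0, n1, t_min, t_max):
    """Try every period T in [t_min, t_max] over the window [n0, n1);
    keep the (T, beta) pair with the lowest squared prediction error."""
    best = None
    for T in range(t_min, t_max + 1):
        num = sum(e[n] * e[n - T] for n in range(n0, n1))
        den = sum(e[n - T] ** 2 for n in range(n0, n1))
        if den == 0:
            continue
        beta = num / den                     # least-squares pitch gain
        err = sum((e[n] - beta * e[n - T]) ** 2 for n in range(n0, n1))
        if best is None or err < best[2]:
            best = (T, beta, err)
    return best

# A pulse train with period 40 should be predicted perfectly at T = 40.
e = [1.0 if n % 40 == 0 else 0.0 for n in range(200)]
T, beta, err = pitch_search(e, 80, 200, 20, 80)
```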

 

Innovation Codebook

The final excitation $e[n]$ will be the sum of the pitch prediction and an innovation signal $c[n]$ taken from a fixed codebook, hence the name Code Excited Linear Prediction. The final excitation is given by:

 

 

\begin{displaymath}
e[n]=p[n]+c[n]=\beta e[n-T]+c[n]\end{displaymath}

 

 

The quantization of $c[n]$ is where most of the bits in a CELP codec are allocated. It represents the information that couldn't be obtained either from linear prediction or pitch prediction. In the z-domain we can represent the final signal $X(z)$ as 

 

\begin{displaymath}
X(z)=\frac{C(z)}{A(z)\left(1-\beta z^{-T}\right)}\end{displaymath}
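Putting the pieces together, a toy decoder (hypothetical names, zero initial state, far from a full CELP decoder) builds the excitation from the codebook contribution and the pitch predictor, then runs it through $1/A(z)$:

```python
def celp_synthesis(c, a, beta, T):
    """Sketch of e[n] = beta * e[n-T] + c[n], followed by 1/A(z)."""
    # Long-term (pitch) contribution plus innovation
    e = []
    for n in range(len(c)):
        past = e[n - T] if n - T >= 0 else 0.0
        e.append(beta * past + c[n])
    # Short-term synthesis filter 1/A(z)
    x = []
    for n in range(len(e)):
        pred = sum(a[i] * x[n - 1 - i]
                   for i in range(len(a)) if n - 1 - i >= 0)
        x.append(pred + e[n])
    return x

# Tiny hand-checkable example: a single codebook pulse.
out = celp_synthesis(c=[1.0, 0.0, 0.0], a=[0.5], beta=0.5, T=2)
```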

 

 

 


Analysis-by-Synthesis and Error Weighting

Most (if not all) modern audio codecs attempt to ``shape'' the noise so that it appears mostly in the frequency regions where the ear cannot detect it. For example, the ear is more tolerant to noise in parts of the spectrum that are louder and vice versa. That's why instead of minimizing the simple quadratic error 

 

\begin{displaymath}
E=\sum_{n}\left(x[n]-\overline{x}[n]\right)^{2}\end{displaymath}

 

 

where $\overline{x}[n]$ is the signal reconstructed by the encoder, we minimize the error for the perceptually weighted signal 

 

\begin{displaymath}
X_{w}(z)=W(z)X(z)\end{displaymath}

 

 

where $W(z)$ is the weighting filter, usually of the form

 

\begin{displaymath}
W(z)=\frac{A\left(\frac{z}{\gamma_{1}}\right)}{A\left(\frac{z}{\gamma_{2}}\right)}
\end{displaymath}

 

with control parameters $\gamma_{1}>\gamma_{2}$. If the noise is white in the perceptually weighted domain, then in the signal domain its spectral shape will be of the form 

 

\begin{displaymath}
A_{noise}(z)=\frac{1}{W(z)}=\frac{A\left(\frac{z}{\gamma_{2}}\right)}{A\left(\frac{z}{\gamma_{1}}\right)}\end{displaymath}

 

 

If the filter $1/A(z)$ has (complex) poles at $p_{i}$ in the $z$-plane, the filter $1/A(z/\gamma)$ will have its poles at $p'_{i}=\gamma p_{i}$; moving the poles toward the origin makes its response a flatter version of that of $1/A(z)$.
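Since replacing $z$ by $z/\gamma$ simply scales the $i$-th coefficient by $\gamma^{i}$, both the numerator $A(z/\gamma_{1})$ and denominator $A(z/\gamma_{2})$ of the weighting filter are cheap to derive from $A(z)$. A sketch (illustrative Python; the function name is hypothetical):

```python
def bandwidth_expand(a, gamma):
    """Coefficients of A(z/gamma): each a_i becomes gamma**i * a_i."""
    return [coeff * gamma ** (i + 1) for i, coeff in enumerate(a)]

# For A(z) = 1 - 0.9 z^{-1}, the pole of 1/A(z) sits at z = 0.9;
# with gamma = 0.5 the pole of 1/A(z/gamma) moves to 0.45.
expanded = bandwidth_expand([0.9], 0.5)
```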

Analysis-by-synthesis refers to the fact that when trying to find the best pitch parameters ($T$, $\beta$) and innovation signal $c[n]$, we do not work by making the excitation $e[n]$ as close as possible to the original one (which would be simpler); instead, we apply the synthesis (and weighting) filter and try to make $X_{w}(z)$ as close to the original as possible.

 


posted on 2017-10-24 10:08 by 虚生