Speex is based on CELP, which stands for Code Excited Linear Prediction. This section attempts to introduce the principles behind CELP, so if you are already familiar with CELP, you can safely skip to section 7. The CELP technique is based on three ideas:
- The use of a linear prediction (LP) model to model the vocal tract
- The use of (adaptive and fixed) codebook entries as input (excitation) of the LP model
- The search performed in closed loop in a "perceptually weighted domain"
Note that this introduction is still incomplete.
Linear Prediction (LPC)
Linear prediction is at the base of many speech coding techniques, including CELP. The idea behind it is to predict the signal using a linear combination of its past samples:
$$y[n]=\sum_{i=1}^{N}a_{i}x[n-i]$$

where $y[n]$ is the linear prediction of $x[n]$. The prediction error is thus given by:
$$e[n]=x[n]-y[n]=x[n]-\sum_{i=1}^{N}a_{i}x[n-i]$$
The goal of the LPC analysis is to find the prediction coefficients $a_{i}$ that minimize the quadratic error function:

$$E=\sum_{n=0}^{L-1}\left[e[n]\right]^{2}=\sum_{n=0}^{L-1}\left[x[n]-\sum_{i=1}^{N}a_{i}x[n-i]\right]^{2}$$
That can be done by setting all the partial derivatives to zero:

$$\frac{\partial E}{\partial a_{i}}=\frac{\partial}{\partial a_{i}}\sum_{n=0}^{L-1}\left[x[n]-\sum_{i=1}^{N}a_{i}x[n-i]\right]^{2}=0$$
The filter coefficients are computed using the Levinson-Durbin algorithm, which starts from the auto-correlation $R(m)$ of the signal $x[n]$:

$$R(m)=\sum_{i=0}^{N-1}x[i]x[i-m]$$
For an order-$N$ filter, we have:

$$\mathbf{R}=\left[\begin{array}{cccc}
R(0) & R(1) & \cdots & R(N-1)\\
R(1) & R(0) & \cdots & R(N-2)\\
\vdots & \vdots & \ddots & \vdots\\
R(N-1) & R(N-2) & \cdots & R(0)\end{array}\right]$$

$$\mathbf{r}=\left[\begin{array}{c}
R(1)\\
R(2)\\
\vdots\\
R(N)\end{array}\right]$$
The filter coefficients $a_{i}$ are found by solving the system $\mathbf{R}\mathbf{a}=\mathbf{r}$. What the Levinson-Durbin algorithm does here is make the solution to the problem $O(N^{2})$ instead of $O(N^{3})$ by exploiting the fact that the matrix $\mathbf{R}$ is Toeplitz Hermitian. Also, it can be proven that all the roots of $A(z)$ are within the unit circle, which means that $1/A(z)$ is always stable. That is true in theory; in practice, because of finite precision, two techniques are commonly used to make sure the filter stays stable. First, we multiply $R(0)$ by a number slightly above one (such as 1.0001), which is equivalent to adding noise to the signal. Second, we can apply a window to the auto-correlation, which is equivalent to filtering in the frequency domain, reducing sharp resonances.
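To make the recursion concrete, here is a minimal pure-Python sketch of the auto-correlation and Levinson-Durbin steps. The helper names are illustrative, not the Speex implementation (which works in fixed point and applies the windowing tricks described above):

```python
def autocorrelation(x, order):
    """Compute R(m) = sum_i x[i] * x[i-m] for m = 0..order."""
    return [sum(x[i] * x[i - m] for i in range(m, len(x)))
            for m in range(order + 1)]

def levinson_durbin(R, order):
    """Solve the Toeplitz system R a = r in O(N^2) operations.

    Returns (a, err): the prediction coefficients a_1..a_N such that
    y[n] = sum_i a_i * x[n-i], and the final prediction error energy.
    """
    a = [0.0] * (order + 1)              # a[0] slot unused (implicitly 1)
    err = R[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this step of the recursion.
        acc = R[i] - sum(a[j] * R[i - j] for j in range(1, i))
        k = acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):            # symmetric coefficient update
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)             # error energy shrinks each step
    return a[1:], err
```

Solving the same system by Gaussian elimination would cost $O(N^{3})$; the recursion above only ever touches the first row of $\mathbf{R}$, which is what the Toeplitz structure buys.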
The linear prediction model represents each speech sample as a linear combination of past samples, plus an error signal called the excitation (or residual).
$$x[n]=\sum_{i=1}^{N}a_{i}x[n-i]+e[n]$$
In the z-domain, this can be expressed as

$$X(z)=\frac{E(z)}{A(z)}$$

where $A(z)$ is defined as

$$A(z)=1-\sum_{i=1}^{N}a_{i}z^{-i}$$

We usually refer to $A(z)$ as the analysis filter and to $1/A(z)$ as the synthesis filter. The whole process is called short-term prediction, as it predicts the signal $x[n]$ using only the $N$ past samples, where $N$ is usually around 10.
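The analysis/synthesis pair follows directly from these equations. A sketch in pure Python (illustrative names, zero initial filter state assumed):

```python
def analysis_filter(x, a):
    """A(z): e[n] = x[n] - sum_i a_i * x[n-i], whitening x into a residual."""
    e = []
    for n in range(len(x)):
        pred = sum(a[i] * x[n - 1 - i]
                   for i in range(len(a)) if n - 1 - i >= 0)
        e.append(x[n] - pred)
    return e

def synthesis_filter(e, a):
    """1/A(z): x[n] = sum_i a_i * x[n-i] + e[n], rebuilding x from e."""
    x = []
    for n in range(len(e)):
        pred = sum(a[i] * x[n - 1 - i]
                   for i in range(len(a)) if n - 1 - i >= 0)
        x.append(pred + e[n])
    return x
```

Feeding the residual back through the synthesis filter reconstructs the signal exactly, which is why a CELP encoder only needs to transmit (a quantized description of) the excitation plus the filter coefficients.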
Because LPC coefficients have very little robustness to quantization, they are converted to Line Spectral Pair (LSP) coefficients, which behave much better under quantization; one advantage is that it is easy to keep the filter stable.
Pitch Prediction
During voiced segments, the speech signal is periodic, so it is possible to take advantage of that property by approximating the excitation signal by a gain times the past of the excitation:
$$e[n]\simeq p[n]=\beta e[n-T]$$

where $T$ is the pitch period and $\beta$ is the pitch gain. We call that long-term prediction since the excitation is predicted from $e[n-T]$ with $T\gg N$.
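A brute-force open-loop version of this pitch search can be sketched as follows (illustrative helper; real codecs search in closed loop, on subframes, and often with fractional lags):

```python
def pitch_search(exc, t_min, t_max):
    """Find (T, beta) such that exc[n] ~ beta * exc[n-T] for n >= t_max.

    For each candidate lag T, the optimal gain is the normalized
    correlation beta = <e, e_T> / <e_T, e_T>; keep the lag that
    minimizes the remaining error energy.
    """
    best_t, best_beta, best_err = t_min, 0.0, float("inf")
    for T in range(t_min, t_max + 1):
        num = sum(exc[n] * exc[n - T] for n in range(t_max, len(exc)))
        den = sum(exc[n - T] ** 2 for n in range(t_max, len(exc)))
        if den == 0.0:
            continue
        beta = num / den
        err = sum((exc[n] - beta * exc[n - T]) ** 2
                  for n in range(t_max, len(exc)))
        if err < best_err:
            best_t, best_beta, best_err = T, beta, err
    return best_t, best_beta
```

On a perfectly periodic excitation the search recovers the period exactly with a gain of 1; on real voiced speech the gain is typically slightly below 1 and the match is only approximate.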
Innovation Codebook
The final excitation will be the sum of the pitch prediction and an innovation signal $c[n]$ taken from a fixed codebook, hence the name Code Excited Linear Prediction. The final excitation is given by:
$$e[n]=p[n]+c[n]=\beta e[n-T]+c[n]$$

The quantization of $c[n]$ is where most of the bits in a CELP codec are allocated. It represents the information that could not be obtained from either linear prediction or pitch prediction. In the z-domain we can represent the final signal $X(z)$ as

$$X(z)=\frac{C(z)}{A(z)\left(1-\beta z^{-T}\right)}$$
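Putting the adaptive (pitch) and fixed (innovation) contributions together, a decoder-side sketch might look like this (illustrative names, not Speex internals):

```python
def celp_excitation(c, beta, T, past):
    """Build e[n] = beta * e[n-T] + c[n] for one frame.

    `past` must hold at least T previous excitation samples so the
    pitch term can look back across the frame boundary.
    """
    e = list(past)                        # excitation history
    for n in range(len(c)):
        e.append(beta * e[len(past) + n - T] + c[n])
    return e[len(past):]                  # current frame's excitation only
```

The resulting excitation is then run through the synthesis filter $1/A(z)$ to produce the decoded speech, matching the z-domain expression above.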
Analysis-by-Synthesis and Error Weighting
Most (if not all) modern audio codecs attempt to "shape" the noise so that it appears mostly in the frequency regions where the ear cannot detect it. For example, the ear is more tolerant to noise in parts of the spectrum that are louder, and vice versa. That is why, instead of minimizing the simple quadratic error

$$E=\sum_{n}\left(x[n]-\overline{x}[n]\right)^{2}$$
where $\overline{x}[n]$ is the encoder's reconstructed signal, we minimize the error of the perceptually weighted signal:

$$E_{w}=\sum_{n}\left(w[n]*\left(x[n]-\overline{x}[n]\right)\right)^{2}$$

where $*$ denotes convolution and $w[n]$ is the impulse response of the weighting filter $W(z)$, usually of the form

$$W(z)=\frac{A(z/\gamma_{1})}{A(z/\gamma_{2})}$$

with control parameters $\gamma_{1}>\gamma_{2}$. If the noise is white in the perceptually weighted domain, then in the signal domain its spectral shape will be of the form

$$A_{noise}(z)=\frac{1}{W(z)}=\frac{A(z/\gamma_{2})}{A(z/\gamma_{1})}$$
If a filter $A(z)$ has (complex) poles at $p_{i}$ in the $z$-plane, the filter $A(z/\gamma)$ will have its poles at $p'_{i}=\gamma p_{i}$, making it a flatter version of $A(z)$.
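The coefficient form of this "flattening" is very simple: evaluating $A(z/\gamma)$ just scales the $i$-th coefficient by $\gamma^{i}$. A sketch (illustrative helper name):

```python
def bandwidth_expand(a, g):
    """Coefficients of A(z/g): a_i -> a_i * g**i.

    Scaling the i-th coefficient by g**i moves each root p_i of A(z)
    to g * p_i, i.e. toward the origin, which flattens the response.
    """
    return [a_i * g ** (i + 1) for i, a_i in enumerate(a)]
```

Applying $W(z)=A(z/\gamma_{1})/A(z/\gamma_{2})$ then amounts to FIR filtering with `bandwidth_expand(a, g1)` followed by IIR filtering with `bandwidth_expand(a, g2)`.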
Analysis-by-synthesis refers to the fact that when trying to find the best pitch parameters ($T$, $\beta$) and innovation signal $c[n]$, we do not work by making the excitation $e[n]$ as close as possible to the original one (which would be simpler), but instead apply the synthesis (and weighting) filter and try to make the reconstructed signal $\overline{x}[n]$ as close as possible to the original.
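A toy closed-loop codebook search illustrates the idea (weighting omitted for brevity; names are hypothetical): candidates are compared after synthesis, not in the excitation domain.

```python
def synthesize(e, a):
    """1/A(z) with zero initial state: x[n] = sum_i a_i*x[n-i] + e[n]."""
    x = []
    for n in range(len(e)):
        pred = sum(a[i] * x[n - 1 - i]
                   for i in range(len(a)) if n - 1 - i >= 0)
        x.append(pred + e[n])
    return x

def codebook_search(target, codebook, a):
    """Return the index of the codevector whose *synthesized* output is
    closest to the target signal (the analysis-by-synthesis criterion)."""
    def err(c):
        return sum((t - s) ** 2 for t, s in zip(target, synthesize(c, a)))
    return min(range(len(codebook)), key=lambda k: err(codebook[k]))
```

A real encoder would first filter both target and candidates through $W(z)$ and would exploit algebraic codebook structure to avoid the brute-force loop, but the selection criterion is the same.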
References:
1. Wikipedia summary: https://zh.wikipedia.org/wiki/%E7%A0%81%E6%BF%80%E5%8A%B1%E7%BA%BF%E6%80%A7%E9%A2%84%E6%B5%8B
2. Detailed introduction: http://ntools.net/arc/Documents/speex/manual/node8.html