Three implementations of signal framing and a comparison of their time efficiency
1. Background
When a time-domain signal is very long, we usually need to cut it into many short segments before processing it, for example for the short-time Fourier transform (STFT) or a wavelet transform.
Usually, to make the transition between segments smooth, each of the N short segments shares a common portion with the next one; this shared part is called the overlap.
We ran into the same kind of framing in an earlier post (how to invert an acoustic spectrogram back into a time-domain speech signal).
2. Implementation (mainly Python code)
Whichever method we use, we first set up the overall quantities:
Suppose we have a signal sigData with total length sigLen, each frame contains blkSize samples, and the overlap fraction is Overlap.
stepSize: the number of samples we advance between consecutive frames, stepSize = int( blkSize*(1-Overlap) ), which must be at least 1.
frameNumSize: the total number of frames, frameNumSize = 1 + floor( (sigLen - blkSize) / stepSize ).
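For example (a quick check, using the same test values as the code below): with sigLen = 20, blkSize = 7 and Overlap = 0.3, these two formulas give stepSize = 4 and frameNumSize = 4:

# Worked example of the two formulas above (same values as the tests below)
import numpy as np

sigLen, blkSize, Overlap = 20, 7, 0.3
stepSize = int(blkSize * (1 - Overlap))             # int(7 * 0.7) = 4
frameNumSize = 1 + (sigLen - blkSize) // stepSize   # 1 + 13 // 4 = 4
print(stepSize, frameNumSize)                       # -> 4 4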
2.1 Loop-based method
#%% method 1
import numpy as np

def cut_to_sigBlks_test1(sigData, blkSize, Overlap):
    if Overlap > 1:
        Overlap = Overlap / 100          # allow the overlap to be given as a percentage
    # 1. Step between the start indices of consecutive frames.
    #    Because of the overlap, stepSize <= blkSize.
    sigLen = np.size(sigData)
    stepSize = int(np.floor(blkSize * (1 - Overlap)))
    if stepSize < 1:
        stepSize = int(1)
    frameNumSize = int((sigLen - blkSize) // stepSize + 1)   # total number of frames
    # 2. Fill the output array frame by frame in a loop
    sigBlks = np.zeros((frameNumSize, blkSize), dtype=sigData.dtype)
    for i in np.arange(frameNumSize):
        sigBlks[i, :] = sigData[i * stepSize : i * stepSize + blkSize]
    return sigBlks

#%% Test
sigData = np.arange(20)
blkSize = 7
Overlap = 0.3
sigBlks = cut_to_sigBlks_test1(sigData, blkSize, Overlap)
print('sigData: \n', sigData)
print('sigBlks: \n', sigBlks)
The output is:
sigData:
 [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19]
sigBlks:
 [[ 0  1  2  3  4  5  6]
 [ 4  5  6  7  8  9 10]
 [ 8  9 10 11 12 13 14]
 [12 13 14 15 16 17 18]]
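Note that the tail of the signal is simply dropped when it does not fill a whole frame: here the last frame ends at sample 18, so sample 19 never appears in sigBlks. If that matters, a minimal sketch of a zero-padding helper (pad_for_framing is my own hypothetical name, not part of the functions in this post) could look like this:

# Hypothetical helper: zero-pad sigData so the final partial frame is kept
import numpy as np

def pad_for_framing(sigData, blkSize, stepSize):
    sigLen = np.size(sigData)
    if sigLen <= blkSize:
        padLen = blkSize - sigLen                     # pad up to one full frame
    else:
        remainder = (sigLen - blkSize) % stepSize     # samples left over after the last full frame
        padLen = (stepSize - remainder) % stepSize
    return np.concatenate([sigData, np.zeros(padLen, dtype=sigData.dtype)])

padded = pad_for_framing(np.arange(20), blkSize=7, stepSize=4)
print(padded.size)   # 23 -> one extra frame [16 17 18 19 0 0 0] is now possible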
2.2 Index-array method
#%% method 2
import numpy as np

def cut_to_sigBlks_test2(sigData, blkSize, Overlap):
    if Overlap > 1:
        print('overlap needs to be less than 1')
        return None
    # 1. Step between the start indices of consecutive frames.
    #    Because of the overlap, stepSize <= blkSize.
    sigLen = np.size(sigData)
    stepSize = int(np.floor(blkSize * (1 - Overlap)))
    if stepSize < 1:
        stepSize = int(1)
    frameNumSize = int((sigLen - blkSize) // stepSize + 1)   # total number of frames
    # 2. method 2: build the index array idxArray (vectorized).
    #    idxArray has frameNumSize rows and blkSize columns.
    # Start index of each frame, spaced by stepSize (overlap taken into account)
    startIdxArry = np.arange(0, stepSize * frameNumSize, stepSize)
    # Index array of the frames, one frame per row
    idxArray = np.tile(np.r_[0:blkSize], (frameNumSize, 1)) + startIdxArry[:, np.newaxis]
    sigBlks = sigData[idxArray]
    return sigBlks

#%% Test
sigData = np.arange(20)
blkSize = 7
Overlap = 0.3
# sigBlks = cut_to_sigBlks_test1(sigData, blkSize, Overlap)
sigBlks = cut_to_sigBlks_test2(sigData, blkSize, Overlap)
print('sigData: \n', sigData)
print('sigBlks: \n', sigBlks)
The output is:
sigData:
 [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19]
sigBlks:
 [[ 0  1  2  3  4  5  6]
 [ 4  5  6  7  8  9 10]
 [ 8  9 10 11 12 13 14]
 [12 13 14 15 16 17 18]]
2.3 Using as_strided from NumPy's stride_tricks module
This is still essentially indexing, but it uses NumPy's built-in strided-view machinery instead of an explicit index array, and it requires the data to be stored contiguously in memory. The stride here plays the role of the step described above.
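Before the full implementation, here is a small illustration of the idea (my own toy example, not from the original functions) on a tiny float64 array: one sample occupies itemsize = 8 bytes, so stepping to the next sample inside a frame means moving 8 bytes, and stepping to the next frame means moving stepSize * 8 bytes.

# Minimal as_strided illustration on a tiny float64 array
import numpy as np

x = np.arange(10, dtype=np.float64)        # itemsize = 8, strides = (8,)
blkSize, stepSize = 4, 2
frameNum = (x.size - blkSize) // stepSize + 1
frames = np.lib.stride_tricks.as_strided(
    x,
    shape=(frameNum, blkSize),                       # 4 frames of 4 samples
    strides=(stepSize * x.itemsize, x.itemsize),     # (16, 8) bytes
)
print(frames)
# [[0. 1. 2. 3.]
#  [2. 3. 4. 5.]
#  [4. 5. 6. 7.]
#  [6. 7. 8. 9.]]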
#%% method 3
import numpy as np

def cut_to_sigBlks_test3(sigData, blkSize, Overlap, axis=0):
    if Overlap > 1:
        print('overlap needs to be less than 1')
        return None
    # 1. Step between the start indices of consecutive frames.
    #    Because of the overlap, stepSize <= blkSize.
    sigLen = np.size(sigData)
    stepSize = int(np.floor(blkSize * (1 - Overlap)))
    if stepSize < 1:
        stepSize = int(1)
    frameNumSize = int((sigLen - blkSize) // stepSize + 1)   # total number of frames
    # 2. method 3: build a strided view with as_strided (vectorized, no copy)
    sigData = np.ascontiguousarray(sigData)   # make sure the data is contiguous in memory
    strides = np.asarray(sigData.strides)
    new_stride = np.prod(strides[strides > 0] // sigData.itemsize) * sigData.itemsize
    if axis == -1:
        # frames along the last axis, one frame per column
        shape = list(sigData.shape)[:-1] + [blkSize, frameNumSize]
        strides = list(strides) + [stepSize * new_stride]
    elif axis == 0:
        # frames along the rows (default)
        shape = [frameNumSize, blkSize] + list(sigData.shape)[1:]
        strides = [stepSize * new_stride] + list(strides)
    else:
        raise ValueError('axis must be 0 or -1')
    sigBlks = np.lib.stride_tricks.as_strided(sigData, shape=shape, strides=strides)
    return sigBlks

#%% Test
sigData = np.arange(20)
blkSize = 7
Overlap = 0.3
# sigBlks = cut_to_sigBlks_test1(sigData, blkSize, Overlap)
# sigBlks = cut_to_sigBlks_test2(sigData, blkSize, Overlap)
sigBlks = cut_to_sigBlks_test3(sigData, blkSize, Overlap)
print('sigData: \n', sigData)
print('sigBlks: \n', sigBlks)
The output is:
sigData:
 [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19]
sigBlks:
 [[ 0  1  2  3  4  5  6]
 [ 4  5  6  7  8  9 10]
 [ 8  9 10 11 12 13 14]
 [12 13 14 15 16 17 18]]
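One caveat worth adding here (my own note): unlike methods 1 and 2, method 3 returns a view of sigData rather than a copy, so writing into the frames modifies the original signal, and because the frames overlap it can touch neighbouring frames too. A quick check:

# The frames produced by method 3 share memory with the original signal
sigData = np.arange(20)
sigBlks = cut_to_sigBlks_test3(sigData, blkSize=7, Overlap=0.3)
sigBlks[0, 0] = -1
print(sigData[0])                           # -1: the original array was modified
print(np.shares_memory(sigData, sigBlks))   # True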
3. Comparing the time efficiency of the three methods
3.1 Timing comparison of the three methods
Among the three methods, the first uses an easy-to-follow loop, the second is vectorized, and the third is also vectorized but additionally exploits NumPy's as_strided.
The conclusion first: for large arrays, the elapsed time ranks as method 3 < method 1 < method 2.
Create a signal of 1,000,000 samples, frame it with blkSize = 1024 and Overlap = 0.3, and run each method 100 times. The elapsed times are:
#%% Test cost time
import time

sigData = np.arange(1000000)
sigData = np.array(sigData, dtype='float64')
blkSize = 1024
Overlap = 0.3

st = time.time()
for i in np.arange(100):
    sigBlks1 = cut_to_sigBlks_test1(sigData, blkSize, Overlap)
et = time.time()
print('cut_to_sigBlks_test1:', et - st)

st = time.time()
for i in np.arange(100):
    sigBlks2 = cut_to_sigBlks_test2(sigData, blkSize, Overlap)
et = time.time()
print('cut_to_sigBlks_test2:', et - st)

st = time.time()
for i in np.arange(100):
    sigBlks3 = cut_to_sigBlks_test3(sigData, blkSize, Overlap)
et = time.time()
print('cut_to_sigBlks_test3:', et - st)
cut_to_sigBlks_test1: 1.0691425800323486
cut_to_sigBlks_test2: 1.8650140762329102
cut_to_sigBlks_test3: 0.003989458084106445
So the elapsed time indeed ranks as method 3 < method 1 < method 2.
I had expected the first method to take longer than the second, so the experiment came as a surprise. The second version does read more elegantly, though!
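The surprise is easier to accept once we look at what each method allocates: methods 1 and 2 both create a new (frameNumSize, blkSize) array and copy every sample into it (method 2 additionally builds a large index matrix), while method 3 only creates a strided view and copies nothing. A quick check of this claim (my own addition):

# Methods 1 and 2 return copies, method 3 returns a no-copy view
sigData = np.arange(1000000, dtype='float64')
sigBlks1 = cut_to_sigBlks_test1(sigData, 1024, 0.3)
sigBlks3 = cut_to_sigBlks_test3(sigData, 1024, 0.3)
print(np.shares_memory(sigData, sigBlks1))   # False: the loop fills a fresh array
print(np.shares_memory(sigData, sigBlks3))   # True : as_strided only makes a view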
3.2 Investigating where method 2 spends its time
#%% Test method 2 time cost
import time

sigData = np.arange(100000000)
sigData = np.array(sigData, dtype='float64')
blkSize = 1024
Overlap = 0.3

if Overlap > 1:
    Overlap = Overlap / 100
# 1. Step between the start indices of consecutive frames.
#    Because of the overlap, stepSize <= blkSize.
sigLen = np.size(sigData)
stepSize = int(np.floor(blkSize * (1 - Overlap)))
t1 = time.time()
if stepSize < 1:
    stepSize = int(1)
t2 = time.time()
frameNumSize = int((sigLen - blkSize) // stepSize + 1)   # total number of frames
t3 = time.time()
# 2. method 2: build the index array idxArray (vectorized),
#    with frameNumSize rows and blkSize columns.
t4 = time.time()
# Start index of each frame, spaced by stepSize (overlap taken into account)
startIdxArry = np.arange(0, sigLen - blkSize, stepSize)
t5 = time.time()
# Index array of the frames, one frame per row
idxArray = np.tile(np.r_[0:blkSize], (frameNumSize, 1)) + startIdxArry[:, np.newaxis]
t6 = time.time()
sigBlks = sigData[idxArray]
t7 = time.time()

t_all = np.array([t1, t2, t3, t4, t5, t6, t7])
dt = t_all[1:] - t_all[:-1]
print('total cost time:', t7 - t1)
print('delta time:', dt)
The result is:
total cost time: 1.699458122253418
delta time:  [  dt1  |  dt2  |  dt3  |   dt4    |   dt5    |   dt6   ]
             [   0   |   0   |   0   | 0.000997 | 0.675195 | 1.02327 ]
So the time is mainly spent in these two lines:
idxArray = np.tile(np.r_[0:blkSize],(frameNumSize,1)) + startIdxArry[:,np.newaxis]    0.675195 s
sigBlks = sigData[idxArray]    1.02327 s
So when the array is large, the time is dominated by building the index matrix and then gathering a new array through it (fancy indexing always copies every selected element). Whether this can be optimized further is something to think about next.
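One candidate optimization (my suggestion, not part of the original experiment, and assuming NumPy >= 1.20 is available): np.lib.stride_tricks.sliding_window_view builds the same kind of no-copy view as method 3, and the step can then be applied by slicing, so the large index matrix is never materialized:

# Possible optimization: frame the signal without an index matrix
# (sliding_window_view requires NumPy >= 1.20; the [::stepSize] slice keeps it a view)
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

sigData = np.arange(20)
blkSize, stepSize = 7, 4
sigBlks = sliding_window_view(sigData, blkSize)[::stepSize]
print(sigBlks)
# [[ 0  1  2  3  4  5  6]
#  [ 4  5  6  7  8  9 10]
#  [ 8  9 10 11 12 13 14]
#  [12 13 14 15 16 17 18]]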