16. Frequency-Domain Filtering

 1. The Relationship Between the Frequency Domain and the Spatial Domain

  In the earlier post on the fundamentals of frequency-domain filtering (computing and applying the Fourier transform), we learned how to transform an image into the frequency domain and how to bring a frequency-domain image back with the inverse Fourier transform. Using this correspondence between a spatial-domain image and its spectrum, we can replace spatial convolution filtering with filtering in the frequency domain and then transform the filtered result back to the spatial domain, thereby achieving image enhancement. The main attraction of this approach is that frequency-domain filtering is intuitive.

  By the well-known convolution theorem, the convolution of two two-dimensional continuous functions in the spatial domain can be obtained as the inverse transform of the product of their Fourier transforms; conversely, convolution in the frequency domain corresponds to the Fourier transform of the product in the spatial domain. That is,

$$
\begin{array}{l}{f_{1}(t) \leftrightarrow F_{1}(\omega), f_{2}(t) \leftrightarrow F_{2}(\omega)} \\ {f_{1}(t) * f_{2}(t) \leftrightarrow F_{1}(\omega) \cdot F_{2}(\omega)}\end{array}
$$
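
For the discrete images used in the rest of this post, the theorem can be checked numerically: the inverse DFT of the element-wise product of two spectra equals the circular convolution of the two signals. Below is a minimal sketch of that check in OpenCV; the two 1-row signals are arbitrary values chosen purely for illustration.

#include <iostream>
#include <opencv2\core\core.hpp>

using namespace cv;
using namespace std;

int main()
{
	//two short real signals, stored as 1-row float matrices (arbitrary example values)
	Mat a = (Mat_<float>(1, 8) << 1, 2, 3, 4, 0, 0, 0, 0);
	Mat b = (Mat_<float>(1, 8) << 1, 1, 1, 0, 0, 0, 0, 0);

	Mat A, B, C, conv;
	dft(a, A);                                  //forward DFTs (packed real spectra)
	dft(b, B);
	mulSpectrums(A, B, C, 0);                   //element-wise product of the spectra
	idft(C, conv, DFT_SCALE | DFT_REAL_OUTPUT); //back to the spatial domain

	//conv now holds the circular convolution of a and b, illustrating
	//f1(t) * f2(t) <-> F1(w) . F2(w)
	cout << conv << endl;
	return 0;
}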

2. Basic Steps of Frequency-Domain Filtering

  This post focuses on how to smooth and sharpen images by filtering in the frequency domain. First, the steps of frequency-domain filtering are summarized as follows:

  • Given an input image f(x,y) of size M×N, choose the padding parameters P = 2M and Q = 2N.
  • Append the necessary number of zeros to f(x,y) to form a padded image $f_{p}(x, y)$ of size P×Q.
  • Multiply $f_{p}(x, y)$ by $(-1)^{x+y}$ to center its transform.
  • Compute the DFT of the resulting image to obtain F(u,v).
  • Generate a real, symmetric filter function $H(u, v)$ of size P×Q with its center at $(P / 2, Q / 2)$. Form the product $G(u, v)=H(u, v) F(u, v)$ by array (element-wise) multiplication; that is, $G(i, k)=H(i, k) F(i, k)$.
  • Obtain the processed image:

    $$
    g_{p}(x, y)=\left\{\operatorname{real}\left[\mathfrak{J}^{-1}[G(u, v)]\right]\right\}(-1)^{x+y}
    $$

where the real part is selected to discard parasitic complex components caused by computational inaccuracies, and the subscript p indicates that we are working with the padded image.

  • Extract the M×N region from the top-left quadrant of $g_{p}(x, y)$ to obtain the final result $g(x, y)$.

  From the steps above, the key to filtering is the filter function $H(u, v)$, often called the filter or the filter transfer function: it suppresses or removes certain components of the spectrum while leaving other frequencies unaffected, which is exactly what filtering means. The rest of this section introduces several commonly used filters.
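
As a concrete reference for the seven steps above, here is a minimal sketch of my own (not code from the original post) that wraps the whole pipeline in a helper taking a grayscale image and an arbitrary radially symmetric transfer function H(D). The file name "2222.jpg" and the example cutoff are placeholders; the specific filters below all fit this template.

#include <math.h>
#include <functional>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;

//Apply a radially symmetric frequency-domain filter H(D) following steps 1-7 above.
Mat filterInFrequencyDomain(const Mat& f, const std::function<float(float)>& H)
{
	int M = f.rows, N = f.cols;
	int P = 2 * M, Q = 2 * N;                       //step 1: padding sizes

	Mat fp;
	copyMakeBorder(f, fp, 0, P - M, 0, Q - N,
		BORDER_CONSTANT, Scalar::all(0));           //step 2: zero padding
	fp.convertTo(fp, CV_32F);

	for (int x = 0; x < P; x++)                     //step 3: center the transform
		for (int y = 0; y < Q; y++)
			fp.at<float>(x, y) *= ((x + y) % 2 == 0) ? 1.0f : -1.0f;

	Mat planes[] = { fp, Mat::zeros(fp.size(), CV_32F) };
	Mat F;
	merge(planes, 2, F);
	dft(F, F);                                      //step 4: DFT

	Mat Hmat(P, Q, CV_32FC2);                       //step 5: real, symmetric H(u,v)
	for (int u = 0; u < P; u++)
		for (int v = 0; v < Q; v++) {
			float D = sqrt(pow(u - P / 2.0f, 2.0f) + pow(v - Q / 2.0f, 2.0f));
			Hmat.at<Vec2f>(u, v) = Vec2f(H(D), H(D));
		}
	multiply(F, Hmat, F);                           //G(u,v) = H(u,v)F(u,v)

	idft(F, F, DFT_SCALE);                          //step 6: inverse DFT
	split(F, planes);
	for (int x = 0; x < P; x++)                     //real part times (-1)^(x+y)
		for (int y = 0; y < Q; y++)
			planes[0].at<float>(x, y) *= ((x + y) % 2 == 0) ? 1.0f : -1.0f;

	return planes[0](Rect(0, 0, N, M)).clone();     //step 7: crop the M x N result
}

int main()
{
	Mat f = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE); //placeholder file name
	float D0 = 60.0f;                                    //placeholder cutoff
	Mat g = filterInFrequencyDomain(f,
		[D0](float D) { return D <= D0 ? 1.0f : 0.0f; });//e.g. an ideal low-pass
	normalize(g, g, 0, 1, CV_MINMAX);
	imshow("g", g);
	waitKey();
	return 0;
}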

3. Smoothing Images with Frequency-Domain Filtering

  The purpose of image smoothing and spatial-domain smoothing filters have already been discussed. In the frequency domain we first look at three smoothing filters: the ideal filter, the Butterworth filter, and the Gaussian filter. Together they cover the range from very sharp (ideal) filtering to very smooth filtering.

1. Ideal Low-Pass Filter (ILPF)

   The most obvious way to attenuate high-frequency components is to cut them all off at a position called the cutoff frequency: every spectral component above the cutoff frequency is set to 0, while components below it are left unchanged, as shown in the figure (left: perspective plot; right: image representation):

 

  

  A filter that achieves this effect is called an ideal filter and is expressed as:

$$
H(u, v)=\left\{\begin{array}{ll}{1} & {\text { if } D(u, v) \leq D_{0}} \\ {0} & {\text { if } D(u, v)>D_{0}}\end{array}\right.
$$

   where D0 is the radius of the passband (the cutoff frequency) and D(u,v) is simply the distance from the point (u,v) to the center of the frequency rectangle:

$$
D(u, v)=\sqrt{\left(u-\frac{P}{2}\right)^{2}+\left(v-\frac{Q}{2}\right)^{2}}
$$
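
If the pipeline helper sketched in Section 2 is used, the ILPF reduces to a one-line transfer function, as in the hypothetical fragment below (D0 is whatever cutoff you choose); the standalone program that follows builds the same mask inline instead.

//hypothetical reuse of the filterInFrequencyDomain() helper sketched in Section 2
float D0 = 60.0f;  //placeholder cutoff
Mat g = filterInFrequencyDomain(f, [D0](float D) { return D <= D0 ? 1.0f : 0.0f; });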

 

A full standalone implementation in OpenCV is shown below:

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;


int main()
{
	//Mat input = imread(p[1], CV_LOAD_IMAGE_GRAYSCALE);
	//This version runs on Ubuntu; p[1] is the image path passed on the command line.
	//If you do not want to pass arguments, hard-code the path instead:
	Mat input = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE);
	//2222.jpg must be in the current directory; it is the input image.
	imshow("input", input);
	int w = getOptimalDFTSize(input.cols);
	int h = getOptimalDFTSize(input.rows);
	Mat padded;
	copyMakeBorder(input, padded, 0, h - input.rows, 0, w - input.cols, BORDER_CONSTANT, Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	imshow("padded", padded);
	for (int i = 0; i < padded.rows; i++)//centering: multiply by (-1)^(x+y); the rest is the same as in the previous post
	{
		float *ptr = padded.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)	ptr[j] *= pow(-1, i + j);
	}
	Mat plane[] = { padded,Mat::zeros(padded.size(),CV_32F) };
	Mat complexImg;
	merge(plane, 2, complexImg);
	dft(complexImg, complexImg);
	//****************************************************
	Mat mag = complexImg;
	mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));//AND with -2 forces the number of rows and columns to be even
	//(-2 in binary is 111...10, so the lowest bit is cleared); see the OpenCV documentation for details
	//coordinates of the spectrum center
	int cx = mag.cols / 2;
	int cy = mag.rows / 2;

	float D0 = 20;
	for (int y = 0; y < mag.rows; y++)
	{
		float* data = mag.ptr<float>(y);//mag is CV_32FC2: interleaved (real, imaginary) floats
		for (int x = 0; x < mag.cols; x++)
		{
			double d = sqrt(pow((y - cy), 2) + pow((x - cx), 2));
			if (d <= D0) {
				//inside the cutoff radius: leave the component unchanged
			}
			else {
				data[2 * x] = 0;    //zero the real part
				data[2 * x + 1] = 0;//zero the imaginary part
			}
		}
	}

	//******************************************************************
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("mag", plane[0]);

	//******************************************************************
	//*************************idft*************************************
	idft(mag, mag);
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("idft-mag", plane[0]);
	waitKey();
	return 0;
}

From left to right: the original image, the padded (optimal-size) image, the Fourier spectrum, the spectrum after ideal low-pass filtering, and the resulting filtered image:

 

 

 

 

 

 

  

 2. Butterworth Low-Pass Filter (BLPF)

  The transfer function of an n-th order Butterworth low-pass filter with cutoff frequency at distance $D_{0}$ from the origin is defined as:

$$
H(u, v)=\frac{1}{1+\left[D(u, v) / D_{0}\right]^{2 n}}
$$

  where

$$
D(u, v)=\sqrt{\left(u-\frac{P}{2}\right)^{2}+\left(v-\frac{Q}{2}\right)^{2}}
$$

  The figure below shows the perspective plot of the Butterworth filter and its image representation:

  

 

   Unlike the ideal low-pass filter, the Butterworth low-pass filter has no sharp discontinuity giving a clear cutoff between passed and filtered frequencies. For filters with smooth transfer functions, the cutoff frequency is usually defined as the point at which H(u,v) falls to a given fraction of its maximum value; for the transfer function above, H(u,v) = 0.5 when D(u,v) = D0.

Likewise, the algorithm can be implemented in OpenCV:

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;


int main()
{
	//Mat input = imread(p[1], CV_LOAD_IMAGE_GRAYSCALE);
	//This version runs on Ubuntu; p[1] is the image path passed on the command line.
	//If you do not want to pass arguments, hard-code the path instead:
	Mat input = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE);
	//2222.jpg must be in the current directory; it is the input image.
	imshow("input", input);
	//Butterworth_Low_Paass_Filter(input);
	int w = getOptimalDFTSize(input.cols);
	int h = getOptimalDFTSize(input.rows);
	Mat padded;
	copyMakeBorder(input, padded, 0, h - input.rows, 0, w - input.cols, BORDER_CONSTANT, Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	imshow("padded", padded);
	for (int i = 0; i < padded.rows; i++)//centering: multiply by (-1)^(x+y); the rest is the same as in the previous post
	{
		float *ptr = padded.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)	ptr[j] *= pow(-1, i + j);
	}
	Mat plane[] = { padded,Mat::zeros(padded.size(),CV_32F) };
	Mat complexImg;
	merge(plane, 2, complexImg);
	//****************************************************
	dft(complexImg, complexImg);
	split(complexImg, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("complexImg", plane[0]);
	//****************************************************
	Mat mag = complexImg;
	mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));//AND with -2 forces the number of rows and columns to be even
	//(-2 in binary is 111...10, so the lowest bit is cleared); see the OpenCV documentation for details
	//coordinates of the spectrum center
	int cx = mag.cols / 2;
	int cy = mag.rows / 2;


	int n = 10;//order of the Butterworth filter
	//H = 1 / (1 + (D/D0)^(2n))

	double D0 = 10;

	for (int y = 0; y < mag.rows; y++) {
		float* data = mag.ptr<float>(y);//mag is CV_32FC2: interleaved (real, imaginary) floats
		for (int x = 0; x < mag.cols; x++) {
			double d = sqrt(pow((y - cy), 2) + pow((x - cx), 2));
			double h = 1.0 / (1 + pow(d / D0, 2 * n));
			//Strictly, the Butterworth filter would multiply the component by h;
			//here the response is simply thresholded at 0.5. With n = 10 the
			//transition is so steep that the result is almost identical.
			if (h <= 0.5) {
				data[2 * x] = 0;
				data[2 * x + 1] = 0;
			}
		}
	}


	//******************************************************************
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("mag", plane[0]);

	//******************************************************************
	//*************************idft*************************************
	idft(mag, mag);
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("idft-mag", plane[0]);
	waitKey();
	return 0;
}

  From left to right: the original image, the padded image, the Fourier spectrum, the spectrum after Butterworth low-pass filtering, and the smoothed result:

  

 

3. Gaussian Low-Pass Filter (GLPF)

     The two-dimensional frequency-domain form of the Gaussian low-pass filter is given by:

$$
H(u, v)=e^{-D^{2}(u, v) / 2 D_{0}^{2}}
$$

  

  where

$$
D(u, v)=\sqrt{\left(u-\frac{P}{2}\right)^{2}+\left(v-\frac{Q}{2}\right)^{2}}
$$

  is the distance from the center of the frequency rectangle. $D_{0}$ is the cutoff frequency; when $D(u, v)=D_{0}$, the GLPF falls to $e^{-1/2} \approx 0.607$ of its maximum value.

  The perspective plot of the Gaussian filter and its image representation are shown below:

  

The Gaussian low-pass filter can be implemented in OpenCV as follows:

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;




int main()
{
	//Mat input = imread(p[1], CV_LOAD_IMAGE_GRAYSCALE);
	//This version runs on Ubuntu; p[1] is the image path passed on the command line.
	//If you do not want to pass arguments, hard-code the path instead:
	Mat input = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE);
	//2222.jpg must be in the current directory; it is the input image.
	imshow("input", input);
	int w = getOptimalDFTSize(input.cols);
	int h = getOptimalDFTSize(input.rows);
	Mat padded;
	copyMakeBorder(input, padded, 0, h - input.rows, 0, w - input.cols, BORDER_CONSTANT, Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	imshow("padded", padded);
	for (int i = 0; i < padded.rows; i++)//centering: multiply by (-1)^(x+y); the rest is the same as in the previous post
	{
		float *ptr = padded.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)	ptr[j] *= pow(-1, i + j);
	}
	Mat plane[] = { padded,Mat::zeros(padded.size(),CV_32F) };
	Mat complexImg;
	merge(plane, 2, complexImg);
	dft(complexImg, complexImg);
	//************************gaussian****************************
	Mat gaussianBlur(padded.size(), CV_32FC2);
	float D0 = 2 * 10 * 10;//actually stores 2 * D0 * D0 with cutoff D0 = 10
	for (int i = 0; i < padded.rows; i++)
	{
		float*p = gaussianBlur.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)
		{
			float d = pow(i - padded.rows / 2, 2) + pow(j - padded.cols / 2, 2);
			p[2 * j] = expf(-d / D0);
			p[2 * j + 1] = expf(-d / D0);
		   
		}
	}
	multiply(complexImg, gaussianBlur, gaussianBlur);//element-wise (per-channel) multiplication, not matrix multiplication
	//*****************************************************************	
	split(complexImg, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("dft", plane[0]);
	//******************************************************************
	split(gaussianBlur, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("gaussianBlur", plane[0]);

	//******************************************************************
	//*************************idft*************************************
	idft(gaussianBlur, gaussianBlur);
	split(gaussianBlur, plane);
	magnitude(plane[0], plane[1], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("idft-gaussianBlur", plane[0]);
	waitKey();
	return 0;
}

From left to right: the original image, the padded image, the Fourier spectrum, the spectrum after Gaussian low-pass filtering, and the smoothed result:

    

 

4. Sharpening Images with Frequency-Domain Filtering

  We have just seen that attenuating the high-frequency components of an image's Fourier transform smooths the image. Because edges and other abrupt changes in intensity are associated with high-frequency components, image sharpening can be achieved in the frequency domain by high-pass filtering, which attenuates the low-frequency components of the Fourier transform without disturbing the high-frequency information. A high-pass filter is obtained from a given low-pass filter by:

$$
H_{\mathrm{HP}}(u, v)=1-H_{\mathrm{LP}}(u, v)
$$
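
In OpenCV terms this relationship is a one-liner: given any low-pass mask stored as a CV_32FC1 matrix with values in [0, 1], the matching high-pass mask is its complement. A minimal sketch:

#include <opencv2\core\core.hpp>
using namespace cv;

//H_HP(u, v) = 1 - H_LP(u, v); lpf is assumed to be a CV_32FC1 mask with values in [0, 1]
Mat lowPassToHighPass(const Mat& lpf)
{
	return 1.0 - lpf;
}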

  Corresponding to the low-pass filters, three high-pass filters are introduced here: the ideal high-pass filter, the Butterworth high-pass filter, and the Gaussian high-pass filter.

1. Ideal High-Pass Filter (IHPF)

  Corresponding to the low-pass case, the ideal high-pass filter is defined as:

$$
H(u, v)=\left\{\begin{array}{ll}{0} & {\text { if } D(u, v) \leq D_{0}} \\ {1} & {\text { if } D(u, v)>D_{0}}\end{array}\right.
$$

   Its perspective plot and image representation are shown below:

  

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;


int main()
{
	//Mat input = imread(p[1], CV_LOAD_IMAGE_GRAYSCALE);
	//This version runs on Ubuntu; p[1] is the image path passed on the command line.
	//If you do not want to pass arguments, hard-code the path instead:
	Mat input = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE);
	//2222.jpg must be in the current directory; it is the input image.
	imshow("input", input);
	int w = getOptimalDFTSize(input.cols);
	int h = getOptimalDFTSize(input.rows);
	Mat padded;
	copyMakeBorder(input, padded, 0, h - input.rows, 0, w - input.cols, BORDER_CONSTANT, Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	imshow("padded", padded);
	for (int i = 0; i < padded.rows; i++)//centering: multiply by (-1)^(x+y); the rest is the same as in the previous post
	{
		float *ptr = padded.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)    ptr[j] *= pow(-1, i + j);
	}
	Mat plane[] = { padded,Mat::zeros(padded.size(),CV_32F) };
	Mat complexImg;
	merge(plane, 2, complexImg);
	dft(complexImg, complexImg);
	//****************************************************
	Mat mag = complexImg;
	mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));//AND with -2 forces the number of rows and columns to be even
	//(-2 in binary is 111...10, so the lowest bit is cleared); see the OpenCV documentation for details
	//coordinates of the spectrum center
	int cx = mag.cols / 2;
	int cy = mag.rows / 2;

	float D0 = 20;
	for (int y = 0; y < mag.rows; y++)
	{
		float* data = mag.ptr<float>(y);//mag is CV_32FC2: interleaved (real, imaginary) floats
		for (int x = 0; x < mag.cols; x++)
		{
			double d = sqrt(pow((y - cy), 2) + pow((x - cx), 2));
			if (d > D0) {
				//above the cutoff radius: leave the component unchanged
			}
			else {
				data[2 * x] = 0;
				data[2 * x + 1] = 0;
			}
		}
	}

	//******************************************************************
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("mag", plane[0]);

	//******************************************************************
	//*************************idft*************************************
	idft(mag, mag);
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("idft-mag", plane[0]);
	waitKey();
	return 0;
}

  The results are as follows:

  

2. Butterworth High-Pass Filter (BHPF)

   Similarly, the n-th order Butterworth high-pass filter is defined as

$$
H(u, v)=\frac{1}{1+\left[D_{0} / D(u, v)\right]^{2 n}}
$$

 Its perspective plot and image representation are shown below:

 

  

 

 

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;


int main()
{
	//Mat input = imread(p[1], CV_LOAD_IMAGE_GRAYSCALE);
	//This version runs on Ubuntu; p[1] is the image path passed on the command line.
	//If you do not want to pass arguments, hard-code the path instead:
	Mat input = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE);
	//2222.jpg must be in the current directory; it is the input image.
	imshow("input", input);
	//Butterworth_Low_Paass_Filter(input);
	int w = getOptimalDFTSize(input.cols);
	int h = getOptimalDFTSize(input.rows);
	Mat padded;
	copyMakeBorder(input, padded, 0, h - input.rows, 0, w - input.cols, BORDER_CONSTANT, Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	imshow("padded", padded);
	for (int i = 0; i < padded.rows; i++)//centering: multiply by (-1)^(x+y); the rest is the same as in the previous post
	{
		float *ptr = padded.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)    ptr[j] *= pow(-1, i + j);
	}
	Mat plane[] = { padded,Mat::zeros(padded.size(),CV_32F) };
	Mat complexImg;
	merge(plane, 2, complexImg);
	//****************************************************
	dft(complexImg, complexImg);
	split(complexImg, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("complexImg", plane[0]);
	//****************************************************
	Mat mag = complexImg;
	mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));//AND with -2 forces the number of rows and columns to be even
	//(-2 in binary is 111...10, so the lowest bit is cleared); see the OpenCV documentation for details
	//coordinates of the spectrum center
	int cx = mag.cols / 2;
	int cy = mag.rows / 2;


	int n = 1;//order of the Butterworth filter
	//H = 1 / (1 + (D/D0)^(2n))

	double D0 = 30;

	for (int y = 0; y < mag.rows; y++) {
		float* data = mag.ptr<float>(y);//mag is CV_32FC2: interleaved (real, imaginary) floats
		for (int x = 0; x < mag.cols; x++) {
			double d = sqrt(pow((y - cy), 2) + pow((x - cx), 2));
			double h = 1.0 / (1 + pow(d / D0, 2 * n));//low-pass response
			//Zeroing where the low-pass response exceeds 0.5 turns the low-pass
			//threshold into a high-pass one (note 1 - h is exactly the BHPF response).
			if (h > 0.5) {
				data[2 * x] = 0;
				data[2 * x + 1] = 0;
			}
		}
	}


	//******************************************************************
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("mag", plane[0]);

	//******************************************************************
	//*************************idft*************************************
	idft(mag, mag);
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("idft-mag", plane[0]);
	waitKey();
	return 0;
}

  The results are as follows:

    

 

3. Gaussian High-Pass Filter (GHPF)

   The Gaussian high-pass filter is defined as:

$$
H(u, v)=1-\mathrm{e}^{-D^{2}(u, v) / 2 D_{0}^{2}}
$$

 Its perspective plot and image representation are shown below:

     

 

The OpenCV implementation is:

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;

int main()
{
	//Mat input = imread(p[1], CV_LOAD_IMAGE_GRAYSCALE);
	//This version runs on Ubuntu; p[1] is the image path passed on the command line.
	//If you do not want to pass arguments, hard-code the path instead:
	Mat input = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE);
	//2222.jpg must be in the current directory; it is the input image.
	imshow("input", input);
	int w = getOptimalDFTSize(input.cols);
	int h = getOptimalDFTSize(input.rows);
	Mat padded;
	copyMakeBorder(input, padded, 0, h - input.rows, 0, w - input.cols, BORDER_CONSTANT, Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	imshow("padded", padded);
	for (int i = 0; i < padded.rows; i++)//centering: multiply by (-1)^(x+y); the rest is the same as in the previous post
	{
		float *ptr = padded.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)	ptr[j] *= pow(-1, i + j);
	}
	Mat plane[] = { padded,Mat::zeros(padded.size(),CV_32F) };
	Mat complexImg;
	merge(plane, 2, complexImg);
	dft(complexImg, complexImg);
	//************************gaussian****************************
	Mat gaussianSharpen(padded.size(), CV_32FC2);
	float D0 = 2 * 10 * 10;//actually stores 2 * D0 * D0 with cutoff D0 = 10
	for (int i = 0; i < padded.rows; i++)
	{
		float*q = gaussianSharpen.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)
		{
			float d = pow(i - padded.rows / 2, 2) + pow(j - padded.cols / 2, 2);

			q[2 * j] = 1 - expf(-d / D0);
			q[2 * j + 1] = 1 - expf(-d / D0);
		}
	}
	multiply(complexImg, gaussianSharpen, gaussianSharpen);
	//*****************************************************************	
	split(complexImg, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("dft", plane[0]);
	//******************************************************************

	split(gaussianSharpen, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("gaussianSharpen", plane[0]);
	//******************************************************************
	//*************************idft*************************************
	idft(gaussianSharpen, gaussianSharpen);

	split(gaussianSharpen, plane);
	magnitude(plane[0], plane[1], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("idft-gaussianSharpen", plane[0]);
	waitKey();
	return 0;
}

  From left to right: the original image, the padded image, the Fourier spectrum, the spectrum after Gaussian high-pass filtering, and the sharpened result:

 

6. Selective Filters

   In many applications the goal is to process a specified band of frequencies or small regions of the frequency rectangle. This calls for selective filters, which include band-pass filters, band-reject filters, and notch filters.

6.1 Band-Reject and Band-Pass Filters

  A band-pass filter combines low-pass and high-pass filtering, and its complement, the band-reject filter, can be used to remove periodically occurring noise. From the preceding discussion,

the ideal band-reject filter is given by

$$
H(u, v)=\left\{\begin{array}{ll}{0} & {\text { if } D_{0}-\frac{W}{2} \leq D(u, v) \leq D_{0}+\frac{W}{2}} \\ {1} & {\text { otherwise }}\end{array}\right.
$$

 the Butterworth band-reject filter by

$$
H(u, v)=\frac{1}{1+\left[\frac{D W}{D^{2}-D_{0}^{2}}\right]^{2 n}}
$$

and the Gaussian band-reject filter by:

$$
H(u, v)=1-\mathrm{e}^{-\left[\frac{D^{2}-D_{0}^{2}}{D W}\right]^{2}}
$$

In the same way that a high-pass filter is obtained from a low-pass filter, a band-pass filter is obtained from the corresponding band-reject filter:

$$
H_{\mathrm{BP}}(u, v)=1-H_{\mathrm{BR}}(u, v)
$$
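
Only the ideal band-pass filter is implemented in the example that follows. As a hedged sketch, a Butterworth band-reject mask matching the formula above could be generated like this and then multiplied into a centered two-channel spectrum; D0, W and n are placeholders, and a small epsilon guards the singularity at D = D0.

#include <math.h>
#include <opencv2\core\core.hpp>
using namespace cv;

//Sketch: H(u, v) = 1 / (1 + [ D*W / (D^2 - D0^2) ]^(2n)), stored as CV_32FC2 so that
//it can be multiply()'d with a centered two-channel spectrum.
Mat butterworthBandReject(Size sz, double D0, double W, int n)
{
	Mat H(sz, CV_32FC2);
	int cy = sz.height / 2, cx = sz.width / 2;
	for (int y = 0; y < sz.height; y++) {
		float* p = H.ptr<float>(y);
		for (int x = 0; x < sz.width; x++) {
			double D = sqrt((double)(y - cy) * (y - cy) + (double)(x - cx) * (x - cx));
			double denom = D * D - D0 * D0;
			if (fabs(denom) < 1e-6) denom = 1e-6;   //avoid division by zero at D = D0
			double h = 1.0 / (1.0 + pow(D * W / denom, 2.0 * n));
			p[2 * x] = (float)h;                    //real channel
			p[2 * x + 1] = (float)h;                //imaginary channel
		}
	}
	return H;
}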

 Example

The following is the OpenCV implementation of the ideal band-pass filter, together with its results:

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;


int main()
{
	//Mat input = imread(p[1], CV_LOAD_IMAGE_GRAYSCALE);
	//This version runs on Ubuntu; p[1] is the image path passed on the command line.
	//If you do not want to pass arguments, hard-code the path instead:
	Mat input = imread("2222.jpg", CV_LOAD_IMAGE_GRAYSCALE);
	//2222.jpg must be in the current directory; it is the input image.
	imshow("input", input);
	int w = getOptimalDFTSize(input.cols);
	int h = getOptimalDFTSize(input.rows);
	Mat padded;
	copyMakeBorder(input, padded, 0, h - input.rows, 0, w - input.cols, BORDER_CONSTANT, Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	imshow("padded", padded);
	for (int i = 0; i < padded.rows; i++)//centering: multiply by (-1)^(x+y); the rest is the same as in the previous post
	{
		float *ptr = padded.ptr<float>(i);
		for (int j = 0; j < padded.cols; j++)    ptr[j] *= pow(-1, i + j);
	}
	Mat plane[] = { padded,Mat::zeros(padded.size(),CV_32F) };
	Mat complexImg;
	merge(plane, 2, complexImg);
	dft(complexImg, complexImg);
	//****************************************************
	Mat mag = complexImg;
	mag = mag(Rect(0, 0, mag.cols & -2, mag.rows & -2));//AND with -2 forces the number of rows and columns to be even
	//(-2 in binary is 111...10, so the lowest bit is cleared); see the OpenCV documentation for details
	//coordinates of the spectrum center
	int cx = mag.cols / 2;
	int cy = mag.rows / 2;

	float D0 = 20;
	for (int y = 0; y < mag.rows; y++)
	{
		float* data = mag.ptr<float>(y);//mag is CV_32FC2: interleaved (real, imaginary) floats
		for (int x = 0; x < mag.cols; x++)
		{
			double d = sqrt(pow((y - cy), 2) + pow((x - cx), 2));
			if (D0 <= d && d <= D0 + 30) {
				//inside the pass band [D0, D0 + W] with W = 30: keep the component
			}
			else {
				data[2 * x] = 0;
				data[2 * x + 1] = 0;
			}
		}
	}

	//******************************************************************
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	plane[0] += Scalar::all(1);
	log(plane[0], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("mag", plane[0]);

	//******************************************************************
	//*************************idft*************************************
	idft(mag, mag);
	split(mag, plane);
	magnitude(plane[0], plane[1], plane[0]);
	normalize(plane[0], plane[0], 1, 0, CV_MINMAX);
	imshow("idft-mag", plane[0]);
	waitKey();
	return 0;
}

    

 

6.2 Notch Filters

   A notch filter is a more useful selective filter than a plain band-pass filter: it rejects frequencies in a predefined neighborhood about a point of the frequency rectangle. Zero-phase-shift filters must be symmetric about the origin, so a notch centered at $\left(u_{0}, v_{0}\right)$ must have a corresponding notch at $\left(-u_{0},-v_{0}\right)$. Notch-reject filters are constructed as products of high-pass filters whose centers have been translated to the centers of the notches. The general form is:

$$
H_{N R}(u, v)=\prod_{k=1}^{Q} H_{k}(u, v) H_{-k}(u, v)
$$

 where $H_{k}(u, v)$ and $H_{-k}(u, v)$ are high-pass filters whose centers are at $\left(u_{k}, v_{k}\right)$ and $\left(-u_{k},-v_{k}\right)$, respectively; these centers are specified with respect to the center of the frequency rectangle, (M/2, N/2). For each filter, the distances are computed by:

$$
D_{k}(u, v)=\left[\left(u-M / 2-u_{k}\right)^{2}+\left(v-N / 2-v_{k}\right)^{2}\right]^{1 / 2}
$$

$$
D_{-k}(u, v)=\left[\left(u-M / 2+u_{k}\right)^{2}+\left(v-N / 2+v_{k}\right)^{2}\right]^{1 / 2}
$$
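
As a compact sketch of these formulas (my own illustration), the function below evaluates the Butterworth notch-reject response at a single (u, v) for notch centers given as offsets (u_k, v_k) from the spectrum center, stored OpenCV-style as Point(x = v_k, y = u_k). The interactive program later in this section does essentially the same thing over the whole matrix inside notch_kernel().

#include <math.h>
#include <vector>
#include <opencv2\core\core.hpp>

//Butterworth notch-reject H_NR(u, v): a product of high-pass responses centered at
//(u_k, v_k) and (-u_k, -v_k), both measured from the center (M/2, N/2).
float notchRejectAt(int u, int v, int M, int N,
	const std::vector<cv::Point>& centers, double D0, int n)
{
	float h = 1.0f;
	for (size_t k = 0; k < centers.size(); k++) {
		double uk = centers[k].y, vk = centers[k].x;
		double Dk = sqrt(pow(u - M / 2.0 - uk, 2) + pow(v - N / 2.0 - vk, 2));
		double Dmk = sqrt(pow(u - M / 2.0 + uk, 2) + pow(v - N / 2.0 + vk, 2));
		//at a notch center Dk is 0, D0/Dk is infinite and the factor evaluates to 0,
		//which is the intended limiting value of the high-pass response
		h *= (float)(1.0 / (1.0 + pow(D0 / Dk, 2.0 * n)));
		h *= (float)(1.0 / (1.0 + pow(D0 / Dmk, 2.0 * n)));
	}
	return h;
}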

 

 

 Moiré patterns are, briefly, a manifestation of the beat phenomenon: mathematically, when two equal-amplitude sinusoids of nearby frequencies are superimposed, the amplitude of the combined signal varies at the difference of the two frequencies. When the spatial frequency of the sensor's (CCD/CMOS) pixel grid is close to the spatial frequency of stripes in the scene, moiré fringes appear. The ripples seen when photographing a computer screen with a phone are moiré; since modern phones remove them automatically, a moiré image from the textbook's site is used here instead. It can be downloaded from http://www.imageprocessingplace.com/DIP-3E/dip3e_book_images_downloads.htm (Chapter 4, Fig0464(a)(car_75DPI_Moire).tif).

  Below, the moiré-removal experiment is carried out with OpenCV.

Example:

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;

typedef cv::Mat Mat;

Mat image_add_border(Mat &src)
{
	int w = 2 * src.cols;
	int h = 2 * src.rows;
	std::cout << "src: " << src.cols << "*" << src.rows << std::endl;

	cv::Mat padded;
	copyMakeBorder(src, padded, 0, h - src.rows, 0, w - src.cols,
		cv::BORDER_CONSTANT, cv::Scalar::all(0));
	padded.convertTo(padded, CV_32FC1);
	std::cout << "opt: " << padded.cols << "*" << padded.rows << std::endl;
	return padded;
}

//transform to center (multiply by (-1)^(x+y))
void center_transform(cv::Mat &src)
{
	for (int i = 0; i < src.rows; i++) {
		float *p = src.ptr<float>(i);
		for (int j = 0; j < src.cols; j++) {
			p[j] = p[j] * pow(-1, i + j);
		}
	}
}

//swap content across the diagonals (quadrant swap)
void zero_to_center(cv::Mat &freq_plane)
{
	//    freq_plane = freq_plane(Rect(0, 0, freq_plane.cols & -2, freq_plane.rows & -2));
		//AND with -2 forces the number of rows and columns to be even
		//(-2 in binary is 111...10, so the lowest bit is cleared); see the OpenCV documentation
	int cx = freq_plane.cols / 2; int cy = freq_plane.rows / 2;//the following moves the quadrants so the zero frequency sits at the center
	cv::Mat part1_r(freq_plane, cv::Rect(0, 0, cx, cy));//top-left quadrant of size (cx, cy)
	cv::Mat part2_r(freq_plane, cv::Rect(cx, 0, cx, cy));
	cv::Mat part3_r(freq_plane, cv::Rect(0, cy, cx, cy));
	cv::Mat part4_r(freq_plane, cv::Rect(cx, cy, cx, cy));

	cv::Mat tmp;
	part1_r.copyTo(tmp);//swap top-left and bottom-right quadrants
	part4_r.copyTo(part1_r);
	tmp.copyTo(part4_r);
	part2_r.copyTo(tmp);//swap top-right and bottom-left quadrants
	part3_r.copyTo(part2_r);
	tmp.copyTo(part3_r);
}


void show_spectrum(cv::Mat &complexI)
{
	cv::Mat temp[] = { cv::Mat::zeros(complexI.size(),CV_32FC1),
					  cv::Mat::zeros(complexI.size(),CV_32FC1) };
	//display the spectrum
	cv::split(complexI, temp);
	cv::Mat aa;
	cv::magnitude(temp[0], temp[1], aa);
	//    zero_to_center(aa);
	cv::divide(aa, aa.cols*aa.rows, aa);
	double min, max;
	cv::Point min_loc, max_loc;
	cv::minMaxLoc(aa, &min, &max,&min_loc, &max_loc);
	std::cout << "min: " << min << " max: " << max << std::endl;
	std::cout << "min_loc: " << min_loc << " max_loc: " << max_loc << std::endl;
	cv::circle(aa, min_loc, 20, cv::Scalar::all(1), 3);
	cv::circle(aa, max_loc, 20, cv::Scalar::all(1), 3);

	cv::imshow("src_img_spectrum", aa);
}

//frequency-domain filtering
cv::Mat frequency_filter(cv::Mat &padded, cv::Mat &blur)
{
	cv::Mat plane[] = { padded, cv::Mat::zeros(padded.size(), CV_32FC1) };
	cv::Mat complexIm;

	cv::merge(plane, 2, complexIm);
	cv::dft(complexIm, complexIm);//fourier transform
	show_spectrum(complexIm);

	cv::multiply(complexIm, blur, complexIm);
	cv::idft(complexIm, complexIm, CV_DXT_INVERSE);//idft
	cv::Mat dst_plane[2];
	cv::split(complexIm,dst_plane);
	center_transform(dst_plane[0]);
	//    center_transform(dst_plane[1]);

	cv::magnitude(dst_plane[0], dst_plane[1], dst_plane[0]);//magnitude (modulus)
//    center_transform(dst_plane[0]);        //center transform

	return dst_plane[0];
}

//notch filter (Butterworth notch-reject mask)
cv::Mat notch_kernel(cv::Mat &scr, std::vector<cv::Point> &notch_pot, float D0)
{
	cv::Mat notch_pass(scr.size(), CV_32FC2);
	int row_num = scr.rows;
	int col_num = scr.cols;
	int n = 4;
	for (int i = 0; i < row_num; i++) {
		float *p = notch_pass.ptr<float>(i);
		for (int j = 0; j < col_num; j++) {
			float h_nr = 1.0;
			for (unsigned int k = 0; k < notch_pot.size(); k++) {
				int u_k = notch_pot.at(k).y;
				int v_k = notch_pot.at(k).x;
				float dk = sqrt(pow((i - u_k), 2) +
					pow((j - v_k), 2));
				float d_k = sqrt(pow((i + u_k), 2) +
					pow((j + v_k), 2));
				float dst_dk = 1.0 / (1.0 + pow(D0 / dk, 2 * n));
				float dst_d_k = 1.0 / (1.0 + pow(D0 / d_k, 2 * n));
				h_nr = dst_dk * dst_d_k * h_nr;
				//                std::cout <<  "notch_pot: " << notch_pot.at(k) << std::endl;
			}
			p[2 * j] = h_nr;
			p[2 * j + 1] = h_nr;

		}
	}

	cv::Mat temp[] = { cv::Mat::zeros(scr.size(), CV_32FC1),
					   cv::Mat::zeros(scr.size(), CV_32FC1) };
	cv::split(notch_pass, temp);

	std::string name = "notch滤波器d0=" + std::to_string(D0);
	cv::Mat show;
	cv::normalize(temp[0], show, 1, 0, CV_MINMAX);
	cv::imshow(name, show);
	return notch_pass;
}

std::string name_win("Notch_filter");
cv::Rect g_rectangle;
bool g_bDrawingBox = false;//whether a box is currently being drawn
cv::RNG g_rng(12345);

std::vector<cv::Point>notch_point;

void on_MouseHandle(int event, int x, int y, int flags, void*param);
void DrawRectangle(cv::Mat& img, cv::Rect box);

int main()
{
	

	Mat srcImage = cv::imread("Fig0464(a)(car_75DPI_Moire).tif", cv::IMREAD_GRAYSCALE);
	if (srcImage.empty())
		return -1;
	imshow("src_img", srcImage);
	Mat padded = image_add_border(srcImage);
	center_transform(padded);
	Mat plane[] = { padded, cv::Mat::zeros(padded.size(), CV_32FC1) };
	Mat complexIm;

	merge(plane, 2, complexIm);
	cv::dft(complexIm, complexIm);//fourier transform
	Mat temp[] = { cv::Mat::zeros(complexIm.size(),CV_32FC1),
					  cv::Mat::zeros(complexIm.size(),CV_32FC1) };
	//display the spectrum
	split(complexIm, temp);
	Mat aa;
	magnitude(temp[0], temp[1], aa);
	divide(aa, aa.cols*aa.rows, aa);


	cv::namedWindow(name_win);
	cv::imshow(name_win, aa);

	cv::setMouseCallback(name_win, on_MouseHandle, (void*)&aa);
	Mat tempImage = aa.clone();
	int key_value = -1;
	while (1) {
		key_value = cv::waitKey(10);
		if (key_value == 27) {//press ESC to apply the notch filter and view the result
			break;
		}

	}
	if (!notch_point.empty()) {
		for (unsigned int i = 0; i < notch_point.size(); i++) {
			cv::circle(tempImage, notch_point.at(i), 3, cv::Scalar(1), 2);
			std::cout << notch_point.at(i) << std::endl;
		}
	}
	cv::imshow(name_win, tempImage);

	Mat ker = notch_kernel(complexIm, notch_point, 30);
	cv::multiply(complexIm, ker, complexIm);

	split(complexIm, temp);
	magnitude(temp[0], temp[1], aa);
	divide(aa, aa.cols*aa.rows, aa);
	imshow("aa", aa);
	cv::idft(complexIm, complexIm, CV_DXT_INVERSE);//idft
	cv::Mat dst_plane[2];
	cv::split(complexIm, dst_plane);
	center_transform(dst_plane[0]);
 //center_transform(dst_plane[1]);

  //   cv::magnitude(dst_plane[0],dst_plane[1],dst_plane[0]);  //magnitude (modulus)

	cv::normalize(dst_plane[0], dst_plane[0], 1, 0, CV_MINMAX);
	imshow("dest", dst_plane[0]);
	cv::waitKey(0);

	return 1;
}


void on_MouseHandle(int event, int x, int y, int falgs, void* param)
{
	Mat& image = *(cv::Mat*)param;

	switch (event) {//dispatch on the mouse event
	case cv::EVENT_MOUSEMOVE: {
		if (g_bDrawingBox) {//while drawing, keep updating the rectangle's width and height

			g_rectangle.width = x - g_rectangle.x;
			g_rectangle.height = y - g_rectangle.y;
		}
	}
							  break;

	case cv::EVENT_LBUTTONDOWN: {
		g_bDrawingBox = true;
		g_rectangle = cv::Rect(x, y, 0, 0);//record the starting point
		std::cout << "start point( " << g_rectangle.x << "," << g_rectangle.y << ")" << std::endl;
	}
								break;

	case cv::EVENT_LBUTTONUP: {
		g_bDrawingBox = false;
		bool w_less_0 = false, h_less_0 = false;

		if (g_rectangle.width < 0) {//handle negative width/height (dragging up or to the left)
			g_rectangle.x += g_rectangle.width;
			g_rectangle.width *= -1;
			w_less_0 = true;

		}
		if (g_rectangle.height < 0) {
			g_rectangle.y += g_rectangle.height;
			g_rectangle.height *= -1;
			h_less_0 = true;
		}
		std::cout << "end Rect[ " << g_rectangle.x << "," << g_rectangle.y << "," << g_rectangle.width << ","
			<< g_rectangle.height << "]" << std::endl;

		if (g_rectangle.height > 0 && g_rectangle.width > 0) {
			Mat imageROI = image(g_rectangle).clone();
			double min, max;
			cv::Point min_loc, max_loc;
			cv::minMaxLoc(imageROI, &min, &max, &min_loc, &max_loc);
			cv::circle(imageROI, max_loc, 10, 1);
			max_loc.x += g_rectangle.x;
			max_loc.y += g_rectangle.y;

			notch_point.push_back(max_loc);

			cv::circle(image, max_loc, 10, 1);
			//            cv::imshow( "ROI", imageROI );
			cv::imshow("src", image);
		}

	}
							  break;
	}
}

After running the program, drag boxes on the spectrum with the mouse to mark the four notch points, then press ESC in the console window to see the notch-filtered result:

 

 

7. Homomorphic Filtering

   An image is formed when light from a source illuminates a scene and is reflected into the imaging system, producing the appearance of the objects. If the image is represented by a two-dimensional function f(x,y), then f(x,y) can be characterized by two components: (1) the amount of source illumination incident on the scene and (2) the amount of light reflected by the objects in the scene. These are called the illumination and reflectance components and are denoted i(x,y) and r(x,y). Their product gives f(x,y):

$$
f(x, y)=i(x, y) r(x, y)
$$

where $0<i(x, y)<\infty$ and $0<r(x, y)<1$. Because the Fourier transform of a product is not separable, we cannot use this equation to operate directly on the frequency components of illumination and reflectance; that is,

$$
\mathcal{F}(f(x, y)) \neq \mathcal{F}(i(x, y)) \mathcal{F}(r(x, y))
$$

However, suppose we define

$$
\begin{aligned} z(x, y) &=\ln f(x, y) \\ &=\ln i(x, y)+\ln r(x, y) \end{aligned}
$$

Then

$$
\begin{aligned} \mathcal{F}(z(x, y)) &=\mathcal{F}(\ln f(x, y)) \\ &=\mathcal{F}(\ln i(x, y))+\mathcal{F}(\ln r(x, y)) \end{aligned}
$$

$$
Z(\omega, \nu)=I(\omega, \nu)+R(\omega, \nu)
$$

where Z, I, and R are the Fourier transforms of z, $\ln i(x, y)$, and $\ln r(x, y)$, respectively. Z is therefore the Fourier transform of the sum of a predominantly low-frequency illumination image and a predominantly high-frequency reflectance image.

If we now apply a filter whose transfer function suppresses low-frequency components and enhances high-frequency components, we can suppress the illumination component and enhance the reflectance component. Thus

$$
\begin{aligned} S(\omega, \nu) &=H(\omega, \nu) Z(\omega, \nu) \\ &=H(\omega, \nu) I(\omega, \nu)+H(\omega, \nu) R(\omega, \nu) \end{aligned}
$$

where S is the Fourier transform of the result. In the spatial domain,

$$
\begin{aligned} s(x, y) &=\mathcal{F}^{-1}(S(\omega, \nu)) \\ &=\mathcal{F}^{-1}(H(\omega, \nu) I(\omega, \nu))+\mathcal{F}^{-1}(H(\omega, \nu) R(\omega, \nu)) \end{aligned}
$$

By defining

$$
i^{\prime}(x, y)=\mathcal{F}^{-1}(H(\omega, \nu) I(\omega, \nu))
$$

$$
r^{\prime}(x, y)=\mathcal{F}^{-1}(H(\omega, \nu) R(\omega, \nu))
$$

 we have

$$
s(x, y)=i^{\prime}(x, y)+r^{\prime}(x, y)
$$

Finally, because z(x,y) was formed by taking the logarithm of the input image, the output image is obtained by the inverse operation, taking the exponential of the filtered result:

$$
\begin{aligned} \hat{f}(x, y) &=\exp [s(x, y)] \\ &=\exp \left[i^{\prime}(x, y)\right] \exp \left[r^{\prime}(x, y)\right] \\ &=i_{0}(x, y) r_{0}(x, y) \end{aligned}
$$

This is the complete homomorphic filtering pipeline, which can be summarized by the following block diagram:

f(x, y) → ln → DFT → H(u, v) → IDFT → exp → g(x, y)

  The illumination component of an image is generally characterized by slow spatial variations, while the reflectance component tends to vary abruptly, particularly at the junctions of dissimilar objects. These characteristics lead us to associate the low frequencies of the Fourier transform of the logarithm of an image with illumination and the high frequencies with reflectance. Although these associations are only rough approximations, they are useful in image filtering.

  A homomorphic filter offers better control over the illumination and reflectance components. This control requires specifying a filter function H(u,v) that affects the low- and high-frequency components of the Fourier transform in different, controllable ways. For example, a slightly modified form of the Gaussian high-pass filter yields the function:

$$
H(u, v)=\left(\gamma_{H}-\gamma_{L}\right)\left[1-e^{-c\left[D^{2}(u, v) / D_{0}^{2}\right]}\right]+\gamma_{L}
$$

  where $D(u, v)=\sqrt{\left(u-\frac{P}{2}\right)^{2}+\left(v-\frac{Q}{2}\right)^{2}}$ and the constant c controls the sharpness of the slope of the transition. Choosing $\gamma_{L}<1$ and $\gamma_{H}>1$ attenuates the low (illumination) frequencies and amplifies the high (reflectance) frequencies.
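
Before the fuller program below, here is a compact, hedged sketch of the whole homomorphic pipeline using exactly this transfer function (log, DFT, multiply by H, inverse DFT, exp). The file name and the values of γH, γL, c and D0 are placeholders; the spectrum is left uncentered, so D is measured to the nearest corner of the frequency rectangle.

#include <math.h>
#include <algorithm>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;

int main()
{
	Mat f = imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);//placeholder file name
	Mat z;
	f.convertTo(z, CV_32F);
	z += 1.0f;
	log(z, z);                                    //z(x, y) = ln(1 + f(x, y))

	Mat planes[] = { z.clone(), Mat::zeros(z.size(), CV_32F) };
	Mat Z;
	merge(planes, 2, Z);
	dft(Z, Z);                                    //Z(u, v), origin at the corner

	//H(u, v) = (gammaH - gammaL)[1 - exp(-c D^2 / D0^2)] + gammaL  (placeholder parameters)
	float gammaH = 2.0f, gammaL = 0.5f, c = 1.0f, D0 = 30.0f;
	Mat H(Z.size(), CV_32FC2);
	for (int u = 0; u < H.rows; u++) {
		float* p = H.ptr<float>(u);
		for (int v = 0; v < H.cols; v++) {
			//distance to the nearest corner, because the spectrum was not centered
			float du = (float)std::min(u, H.rows - u);
			float dv = (float)std::min(v, H.cols - v);
			float D2 = du * du + dv * dv;
			float h = (gammaH - gammaL) * (1.0f - expf(-c * D2 / (D0 * D0))) + gammaL;
			p[2 * v] = h;
			p[2 * v + 1] = h;
		}
	}
	multiply(Z, H, Z);                            //S(u, v) = H(u, v) Z(u, v)

	idft(Z, Z, DFT_SCALE);                        //back to the spatial (log) domain
	split(Z, planes);
	exp(planes[0], planes[0]);                    //undo the logarithm
	planes[0] -= 1.0f;
	normalize(planes[0], planes[0], 0, 1, CV_MINMAX);
	imshow("homomorphic sketch", planes[0]);
	waitKey();
	return 0;
}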

A fuller OpenCV implementation, which also handles color images by filtering the value channel in HSV space, is shown below:

Example:

 

#include "stdafx.h"
#include <math.h>
#include <random>
#include <iostream>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace cv;
using namespace std;
Mat Butterworth_Homomorphic_Filter(Size sz, int D, int n, float high_h_v_TB, float low_h_v_TB, Mat& realIm)
{
	Mat single(sz.height, sz.width, CV_32F);
	cout << sz.width << " " << sz.height << endl;
	Point centre = Point(sz.height / 2, sz.width / 2);
	double radius;
	float upper = (high_h_v_TB * 0.1);
	float lower = (low_h_v_TB * 0.1);
	long dpow = D * D;
	float W = (upper - lower);

	for (int i = 0; i < sz.height; i++)
	{
		for (int j = 0; j < sz.width; j++)
		{
			radius = pow((float)(i - centre.x), 2) + pow((float)(j - centre.y), 2);
			float r = exp(-n * radius / dpow);
			if (radius < 0)
				single.at<float>(i, j) = upper;
			else
				single.at<float>(i, j) = W * (1 - r) + lower;
		}
	}

	single.copyTo(realIm);
	Mat butterworth_complex;
	//make two channels to match complex
	Mat butterworth_channels[] = { Mat_<float>(single), Mat::zeros(sz, CV_32F) };
	merge(butterworth_channels, 2, butterworth_complex);

	return butterworth_complex;
}
//DFT; returns the power spectrum
Mat Fourier_Transform(Mat frame_bw, Mat& image_complex, Mat &image_phase, Mat &image_mag)
{
	Mat frame_log;
	frame_bw.convertTo(frame_log, CV_32F);
	frame_log = frame_log / 255;
	/*Take the natural log of the input (compute log(1 + Mag)*/
	frame_log += 1;
	log(frame_log, frame_log); // log(1 + Mag)

	/*2. Expand the image to an optimal size
	The performance of the DFT depends of the image size. It tends to be the fastest for image sizes that are multiple of 2, 3 or 5.
	We can use the copyMakeBorder() function to expand the borders of an image.*/
	Mat padded;
	int M = getOptimalDFTSize(frame_log.rows);
	int N = getOptimalDFTSize(frame_log.cols);
	copyMakeBorder(frame_log, padded, 0, M - frame_log.rows, 0, N - frame_log.cols, BORDER_CONSTANT, Scalar::all(0));

	/*Make place for both the complex and real values
	The result of the DFT is a complex. Then the result is 2 images (Imaginary + Real), and the frequency domains range is much larger than the spatial one. Therefore we need to store in float !
	That's why we will convert our input image "padded" to float and expand it to another channel to hold the complex values.
	Planes is an arrow of 2 matrix (planes[0] = Real part, planes[1] = Imaginary part)*/
	Mat image_planes[] = { Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F) };
	/*Creates one multichannel array out of several single-channel ones.*/
	merge(image_planes, 2, image_complex);

	/*Make the DFT
	The result of the DFT is a complex image: "image_complex"*/
	dft(image_complex, image_complex);

	/***************************/
	//Create spectrum magnitude//
	/***************************/
	/*Transform the real and complex values to magnitude
	NB: We separate the real part from the imaginary part*/
	split(image_complex, image_planes);
	//Starting with this part we have the real part of the image in planes[0] and the imaginary in planes[1]
	phase(image_planes[0], image_planes[1], image_phase);
	magnitude(image_planes[0], image_planes[1], image_mag);

	//Power
	pow(image_planes[0], 2, image_planes[0]);
	pow(image_planes[1], 2, image_planes[1]);

	Mat Power = image_planes[0] + image_planes[1];

	return Power;
}
void Inv_Fourier_Transform(Mat input, Mat& inverseTransform)
{
	/*Make the IDFT*/
	Mat result;
	idft(input, result, DFT_SCALE);
	/*Take the exponential*/
	exp(result, result);

	vector<Mat> planes;
	split(result, planes);
	magnitude(planes[0], planes[1], planes[0]);
	planes[0] = planes[0] - 1.0;
	normalize(planes[0], planes[0], 0, 255, CV_MINMAX);

	planes[0].convertTo(inverseTransform, CV_8U);
}
//SHIFT
void Shifting_DFT(Mat &fImage)
{
	//For visualization purposes we may also rearrange the quadrants of the result, so that the origin (0,0), corresponds to the image center.
	Mat tmp, q0, q1, q2, q3;

	/*First crop the image, if it has an odd number of rows or columns.
	Operator & bit to bit by -2 (two's complement : -2 = 111111111....10) to eliminate the first bit 2^0 (In case of odd number on row or col, we take the even number in below)*/
	fImage = fImage(Rect(0, 0, fImage.cols & -2, fImage.rows & -2));
	int cx = fImage.cols / 2;
	int cy = fImage.rows / 2;

	/*Rearrange the quadrants of Fourier image so that the origin is at the image center*/
	q0 = fImage(Rect(0, 0, cx, cy));
	q1 = fImage(Rect(cx, 0, cx, cy));
	q2 = fImage(Rect(0, cy, cx, cy));
	q3 = fImage(Rect(cx, cy, cx, cy));

	/*We reverse each quadrant of the frame with its other quadrant diagonally opposite*/
	/*We reverse q0 and q3*/
	q0.copyTo(tmp);
	q3.copyTo(q0);
	tmp.copyTo(q3);

	/*We reverse q1 and q2*/
	q1.copyTo(tmp);
	q2.copyTo(q1);
	tmp.copyTo(q2);
}
void homomorphicFiltering(Mat src, Mat& dst)
{
	Mat img;
	Mat imgHls;
	vector<Mat> vHls;

	if (src.channels() == 3)
	{
		cvtColor(src, imgHls, CV_BGR2HSV);
		split(imgHls, vHls);
		vHls[2].copyTo(img);
	}
	else
		src.copyTo(img);

	//DFT
	//cout<<"DFT "<<endl;
	Mat img_complex, img_mag, img_phase;
	Mat fpower = Fourier_Transform(img, img_complex, img_phase, img_mag);
	Shifting_DFT(img_complex);
	Shifting_DFT(fpower);
	//int D0 = getRadius(fpower,0.15);
	int D0 = 10;
	int n = 2;
	int w = img_complex.cols;
	int h = img_complex.rows;
	//BHPF
//  Mat filter,filter_complex;
//  filter = BHPF(h,w,D0,n);
//  Mat planes[] = {Mat_<float>(filter), Mat::zeros(filter.size(), CV_32F)};
//  merge(planes,2,filter_complex);

	int rH = 150;
	int rL = 50;
	Mat filter, filter_complex;
	filter_complex = Butterworth_Homomorphic_Filter(Size(w, h), D0, n, rH, rL, filter);

	//do mulSpectrums on the full dft
	mulSpectrums(img_complex, filter_complex, filter_complex, 0);

	Shifting_DFT(filter_complex);
	//IDFT
	Mat result;
	Inv_Fourier_Transform(filter_complex, result);

	if (src.channels() == 3)
	{
		vHls.at(2) = result(Rect(0, 0, src.cols, src.rows));
		merge(vHls, imgHls);
		cvtColor(imgHls, dst, CV_HSV2BGR);
	}
	else
		result.copyTo(dst);

}
int main()
{
	Mat img = imread("test.jpg");
	imshow("rogin img", img);
	Mat dst;
	homomorphicFiltering(img, dst);
	imshow("homoMorphic filter", (img+dst)/2);
	waitKey();
	return 0;
}

  The results are as follows:

 

 

 

 

 

 

