Getting Started with iOS + OpenCV Development in Xcode (repost)

Before reading this article, take a look at: OpenCV iOS Development (Part 1): Installation

 

I spent all of yesterday wrestling with this, but finally got the OpenCV + iOS environment working in Xcode and built a circle-detection program based on the Hough transform. Without further ado, here is the whole messy process:

------------------------------------------------------Installing OpenCV-------------------------------------------------------------------

The official site has a tutorial: http://docs.opencv.org/doc/tutorials/introduction/ios_install/ios_install.html#ios-installation

If everything were as simple as the official site makes it sound, I wouldn't need to write this post~ (replace <my_working_directory> with the path where you want to install OpenCV):

cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer

Everything works fine up to this point (if you don't have git, download and install it from http://sourceforge.net/projects/git-osx-installer/).

 

cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios

The last step got stuck on the final command, most likely because CMake wasn't installed. The dmg from the CMake site didn't seem to help =.=, so I went another way and installed it through Homebrew. First, install Homebrew itself (Ruby ships with the Mac, so no worries there):

 ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Then install CMake:

 brew install cmake

Once that succeeds, go back to the final command above and build the OpenCV library. Then comes a long wait, about half a shichen (one shichen = two hours, so roughly an hour). When the build finishes, you will find an ios folder under the install path containing the hard-won OpenCV iOS framework. Catch your breath, and let's move on to configuring the Xcode project environment.

---------------------------------------------------------Configuring the Xcode OpenCV Environment------------------------------------------------------------------

The installation isn't even the most painful part. For a newbie, using Xcode at all is a big challenge (two days ago I couldn't develop for iOS at all...). Worse, the official tutorial http://docs.opencv.org/doc/tutorials/ios/hello/hello.html#opencvioshelloworld targets Xcode 5.0, while on OS X 10.10 Xcode has already reached 6.3, and the two UIs differ in places, so I could only trust my luck... fortunately, my luck held~

In fact, most of that tutorial can be followed as-is:

1. Create a new Xcode project.
2. Now we need to link opencv2.framework with Xcode. Select the project Navigator in the left hand panel and click on project name.
3. Under the TARGETS click on Build Phases. Expand Link Binary With Libraries option.
4. Click on Add others and go to directory where opencv2.framework is located and click open.
5. Now you can start writing your application.

This just means: create a new project, select it, find Build Phases, and add the OpenCV framework we just built. But look carefully at the screenshot below that step: it shows three more frameworks, which should be added along with it.

Next comes setting up the precompiled header:

Link your project with OpenCV as shown in previous section.
Open the file named NameOfProject-Prefix.pch ( replace NameOfProject with name of your project) and add the following lines of code.
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

This says the OpenCV precompile directive must be declared in a .pch file. However, since Xcode 5.0, new projects no longer generate this file automatically, so you have to create it by hand: choose File -> New, pick Other under iOS in the dialog, find the PCH file template, name the file after your project, and add the code above. Again, study the screenshot in the tutorial carefully and add the other two imports as well. Once the file is written, it has to be wired into the project. Select the project, find Build Settings next to Build Phases, select All in the row below, then search for "prefix". Locate the entry under Apple LLVM 6.1 - Language and fill in $(SRCROOT)/<project folder>/<name>.pch, then find Precompile Prefix Header just above it and set it to Yes. Now the file is part of the project's precompilation. But we're not done yet:
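As a sketch of the end result, the full prefix header might look like the following (the __OBJC__ block with the UIKit and Foundation imports is what the tutorial's screenshot shows; putting the opencv.hpp import first avoids macro clashes with Apple's headers):

```objc
#ifdef __cplusplus
    #import <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
    #import <UIKit/UIKit.h>
    #import <Foundation/Foundation.h>
#endif
```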

With the newer XCode and iOS versions you need to watch out for some specific details

The *.m file in your project should be renamed to *.mm.
You have to manually include AssetsLibrary.framework into your project, which is not done anymore by default.

This means every .m file that uses OpenCV must be renamed to .mm, and AssetsLibrary.framework must be added to the project manually (see the earlier steps for adding the OpenCV framework).
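Renaming is best done inside Xcode so the project references update automatically, but for bulk renames a small shell sketch works (assumes no filenames contain spaces; you would still fix up the Xcode project references afterwards):

```shell
# Rename every .m file under the current directory to .mm
for f in $(find . -name '*.m'); do
    mv "$f" "${f%.m}.mm"
done
```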

The environment is basically set up; next we use it to build a HelloWorld app~

 -----------------------------------------------HelloOpenCV----------------------------------------------------------------------

Step 1: open this page: http://docs.opencv.org/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation

- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
  CGFloat cols = image.size.width;
  CGFloat rows = image.size.height;

  cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

  CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to  data
                                                 cols,                       // Width of bitmap
                                                 rows,                       // Height of bitmap
                                                 8,                          // Bits per component
                                                 cvMat.step[0],              // Bytes per row
                                                 colorSpace,                 // Colorspace
                                                 kCGImageAlphaNoneSkipLast |
                                                 kCGBitmapByteOrderDefault); // Bitmap info flags

  CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
  CGContextRelease(contextRef);

  return cvMat;
}
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
  CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
  CGFloat cols = image.size.width;
  CGFloat rows = image.size.height;

  cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel (grayscale)

  CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                 cols,                       // Width of bitmap
                                                 rows,                       // Height of bitmap
                                                 8,                          // Bits per component
                                                 cvMat.step[0],              // Bytes per row
                                                 colorSpace,                 // Colorspace
                                                 kCGImageAlphaNoneSkipLast |
                                                 kCGBitmapByteOrderDefault); // Bitmap info flags

  CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
  CGContextRelease(contextRef);

  return cvMat;
 }
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
  NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
  CGColorSpaceRef colorSpace;

  if (cvMat.elemSize() == 1) {
      colorSpace = CGColorSpaceCreateDeviceGray();
  } else {
      colorSpace = CGColorSpaceCreateDeviceRGB();
  }

  CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

  // Creating CGImage from cv::Mat
  CGImageRef imageRef = CGImageCreate(cvMat.cols,                                 //width
                                     cvMat.rows,                                 //height
                                     8,                                          //bits per component
                                     8 * cvMat.elemSize(),                       //bits per pixel
                                     cvMat.step[0],                            //bytesPerRow
                                     colorSpace,                                 //colorspace
                                     kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
                                     provider,                                   //CGDataProviderRef
                                     NULL,                                       //decode
                                     false,                                      //should interpolate
                                     kCGRenderingIntentDefault                   //intent
                                     );


  // Getting UIImage from CGImage
  UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
  CGImageRelease(imageRef);
  CGDataProviderRelease(provider);
  CGColorSpaceRelease(colorSpace);

  return finalImage;
 }

Before anything else, create a pair of files (.h + .mm) to house these three functions. Note that at the top you must import:

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

These three functions convert a UIImage to cv::Mat and back. Once you have a cv::Mat you can do whatever you like with it, for example detect circles with the Hough transform:

// Assumes the three conversion methods above are declared as class methods
// (+) in the same class, so the [self ...] calls resolve inside a class method.
+ (UIImage *)hough:(UIImage *)image
{
    cv::Mat img = [self cvMatFromUIImage:image];

    cv::Mat gray;
    cv::Mat background(img.size(), CV_8UC4, cv::Scalar(255, 255, 255));
    cv::cvtColor(img, gray, CV_RGBA2GRAY); // the mat from UIImage is RGBA

    std::vector<cv::Vec3f> circles;        // each circle: (x, y, radius)
    cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                     2, image.size.width / 8, 200, 100);
    for (size_t i = 0; i < circles.size(); i++)
    {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle(background, center, 3, cv::Scalar(0, 0, 0), -1, 8, 0);     // center dot
        cv::circle(background, center, radius, cv::Scalar(0, 0, 0), 3, 8, 0); // outline
    }

    UIImage *res = [self UIImageFromCVMat:background];
    return res;
}

------------------------------------------------------I'm the final divider------------------------------------------------------

With this scaffolding in place, development becomes much easier. The Objective-C + Cocoa syntax is bizarre beyond belief, I just can't get used to it... but Storyboard-based UI building is quite pleasant. Hopefully in the near future I can make something like a Meitu-style photo app for fun~

 

Reposted from: http://www.cnblogs.com/tonyspotlight/p/4568305.html

posted @ 2017-10-10 18:48 追寻1024的程序猿