[Repost] Taking photos with live image preview

I’m writing this because – as of April 2011 – Apple’s official documentation is badly wrong. Some of their source code won’t even compile (typos that would have been obvious if they’d checked it), and some of their instructions are hugely over-complicated and yet simply don’t work.

This is a step-by-step guide to taking photos with live image preview. It’s also a good starting point for doing much more advanced video and image capture on iOS 4.

 

It’s easy enough to write an app that takes photos: it takes quite a lot of code, but the functionality has been built into iOS/iPhone OS for a few years now – and it still works.
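(For reference, that built-in route is – I believe – UIImagePickerController; here’s a minimal sketch of presenting it, just for comparison with the AV Foundation approach below:)

if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
{
	// Present the stock, full-screen camera UI – the "old" way.
	UIImagePickerController *picker = [[UIImagePickerController alloc] init];
	picker.sourceType = UIImagePickerControllerSourceTypeCamera;
	picker.delegate = self; // must adopt UIImagePickerControllerDelegate and UINavigationControllerDelegate
	[self presentModalViewController:picker animated:YES];
	[picker release];
}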

But … with iOS 4, the new “AV Foundation” library offers a much more powerful way of taking photos, which lets you put the camera view inside your own app. So, for instance, you can make an app that looks like this:

 

 

AV Foundation is not available at all on the oldest iPhone and iPod Touch devices. I believe this is because Apple is doing a lot of the work in hardware, making use of features that didn’t exist in the chips of the original iPhone or the iPhone 3G.

Interestingly, the AV Foundation library *is* available on the Simulator – which suggests that Apple certainly *could* have implemented AV F for older phones, but they decided not to. It’s very useful that you can test most of your AV F app on the Simulator (so long as you copy/paste some videos into the Simulator to work with).

 

You need *all* the following frameworks (all come with Xcode, but you have to manually add them to your project):

  1. CoreVideo
  2. CoreMedia
  3. AVFoundation (of course…)
  4. ImageIO
  5. QuartzCore (maybe)
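In code, the matching imports might look something like this (exactly which ones you need depends on which of the snippets below you use):

#import <AVFoundation/AVFoundation.h> // AVCaptureSession, AVCaptureStillImageOutput, ...
#import <CoreMedia/CoreMedia.h>       // CMSampleBufferRef, CMGetAttachment
#import <ImageIO/CGImageProperties.h> // kCGImagePropertyExifDictionary
#import <QuartzCore/QuartzCore.h>     // CALayer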

 

Create a new UIViewController, add its view to the screen (either in IB or through code – if you don’t know how to add a ViewController’s view, you need to do some much more basic iPhone tutorials first).

Add a UIView object to the NIB (or as a subview), and create a @property in your controller:

@property(nonatomic, retain) IBOutlet UIView *vImagePreview;

Connect the UIView to the outlet above in IB, or assign it directly if you’re using code instead of a NIB.
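If you’re going the code-only route, a minimal sketch of that setup might look like this (the frame values are placeholders, and it assumes manual reference counting, as the rest of this post does):

- (void)viewDidLoad
{
	[super viewDidLoad];

	// Container view that will hold the live camera preview (placeholder frame).
	UIView *preview = [[UIView alloc] initWithFrame:CGRectMake(20, 20, 200, 280)];
	preview.backgroundColor = [UIColor blackColor];
	[self.view addSubview:preview];

	self.vImagePreview = preview; // retained by the property
	[preview release];
}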

Then edit your UIViewController, and give it the following viewDidAppear method:

-(void) viewDidAppear:(BOOL)animated
{
	[super viewDidAppear:animated];
 
	AVCaptureSession *session = [[AVCaptureSession alloc] init];
	session.sessionPreset = AVCaptureSessionPresetMedium;
 
	CALayer *viewLayer = self.vImagePreview.layer;
	NSLog(@"viewLayer = %@", viewLayer);
 
	AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
 
	captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
	[self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];
 
	AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
 
	NSError *error = nil;
	AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
	if (!input) {
		// Handle the error appropriately.
		NSLog(@"ERROR: trying to open camera: %@", error);
	}
	[session addInput:input];
 
	[session startRunning];
}

Run your app on a device (NB: this will NOT run on the Simulator – Apple doesn’t support cameras on the Simulator (yet)), and … you should see the live camera view appearing in your subview.
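One small optional addition (my suggestion, not part of the listing above): on the Simulator, defaultDeviceWithMediaType: returns nil, so you could guard against that immediately after the line that fetches the device:

if (device == nil)
{
	// No camera available (e.g. running on the Simulator) – bail out rather
	// than adding a nil input to the session.
	NSLog(@"ERROR: no video capture device available");
	return;
}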

 

 

In the AVFoundation docs, Apple has a whole section on trying to do what we did above. Here’s a link: AV Foundation Programming Guide – Video Preview. But it doesn’t work.

Here’s Apple’s code from the AV Foundation docs, the link above:

// Code that DOES NOT work -- don't do this!
[viewLayer addSublayer:captureVideoPreviewLayer];

If you look in the docs for AVCaptureVideoPreviewLayer, you’ll find a *different* source code example, which does work:

// Code that DOES work:
captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
[self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];
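One optional extra, not part of Apple’s snippet: if the preview appears letterboxed or stretched inside your view, you can also set the layer’s video gravity before adding it as a sublayer:

captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;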

 

In the AV Foundation docs, there’s also a section on how to get Images from the camera. This is mostly correct, and then at the last minute it goes horribly wrong.

Apple provides a link to another part of the docs, with the following source code:

{
    ...
    UIImage* image = imageFromSampleBuffer(imageSampleBuffer);
    ...
}
 
UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer,0);
 
    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
 
    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
		if (colorSpace == NULL) {
            // Handle the error appropriately.
            return nil;
        }
    }
 
    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
 
    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                       kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                       dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
 
    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
 
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
 
    return image;
}

This code has never worked for me – it always returns an empty 0x0 image, which is useless. That’s 45 lines of useless code that everyone is required to re-implement in every app they write.

Or maybe not.

Instead, if you look at the WWDC videos, you find an alternative approach that takes just two lines of source code:

NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];

Even better … this actually works!

 

There are two halves to this. Obviously, we’ll need a button to capture a photo, and a UIImageView to display it. Less obviously, we’ll have to alter our existing camera-setup routine.

To make this work, we have to create an “output source” for the camera when we start it, and then later on when we want to take a photo we ask that “output” object to give us a single image.

 

So, create a new @property to hold a reference to our output object:

@property(nonatomic, retain) AVCaptureStillImageOutput *stillImageOutput;

Then make a UIImageView where we’ll display the captured photo. Add this to your NIB, or programmatically.

Hook it up to another @property, or assign it manually, e.g.:

@property(nonatomic, retain) IBOutlet UIImageView *vImage;

Finally, create a UIButton, so that you can take the photo.

Again, add it to your NIB (or programmatically to your screen), and hook it up to the following method:

-(IBAction) captureNow
{
	AVCaptureConnection *videoConnection = nil;
	for (AVCaptureConnection *connection in stillImageOutput.connections)
	{
		for (AVCaptureInputPort *port in [connection inputPorts])
		{
			if ([[port mediaType] isEqual:AVMediaTypeVideo] )
			{
				videoConnection = connection;
				break;
			}
		}
		if (videoConnection) { break; }
	}
 
	NSLog(@"about to request a capture from: %@", stillImageOutput);
	[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
	{
		 CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
		 if (exifAttachments)
		 {
			// Do something with the attachments.
			NSLog(@"attachements: %@", exifAttachments);
		 }
		else
			NSLog(@"no attachments");
 
		NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
		UIImage *image = [[UIImage alloc] initWithData:imageData];
 
		self.vImage.image = image;
		[image release]; // balance the alloc/init above (this post predates ARC)
	 }];
}

 

Go back to the viewDidAppear method you created at the start of this post. The very last line ([session startRunning]) must REMAIN the last line, so we’ll insert the new code immediately above it. Here’s the new code to insert:

stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[outputSettings release]; // the output copies its settings, so we can release ours (pre-ARC)
 
[session addOutput:stillImageOutput];

Run the app, and you should get something like the image I showed at the start, where the part on the left is a live preview from the camera, and the part on the right updates each time you tap the “take photo” button:

 

 

 

 

  1. thanks – very helpful!

     
     
  2. Fantastic post!

    Is the block under your heading “Part 2: modify…” starting with the line

    AVCaptureConnection *videoConnection = nil;

    really needed?

    That block declares a local variable that is never used when finalizing the AVCaptureStillImageOutput part of the capture session.

    The AVCaptureConnection again appears to be recomputed each time a capture is initiated (in your captureNow IBAction callback further above).

    Can you use the first declaration, as an attribute, so that it doesn’t have to be computed each time the photo is taken (reduce shutter lag)?

     
     
  3. @jpap

    You’re right – that “videoConnection”, and the whole block with it, isn’t needed – it’s a copy/paste error from the “captureNow” method (elsewhere in the post), which *does* need it.

    Also, yes – you might be able to keep it in an attribute and re-use it; although IIRC there are some situations where the array of available outputs changes dynamically – e.g. if you plug/unplug external monitors, projectors, etc.

    In practice, I’ve seen no noticeable shutter lag, except maybe on the 3GS, so I haven’t tried optimizing that method. The iPhone 4 and the iPod Touch 4 seem to be instantaneous with this code in the projects I’ve used it in.
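    If you did want to cache it, something like the following (untested) sketch might work – “videoConnection” here is a new, hypothetical retained property, and the loop would run once, right after the session is configured:

    // Hypothetical: @property(nonatomic, retain) AVCaptureConnection *videoConnection;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                self.videoConnection = connection;
                break;
            }
        }
        if (self.videoConnection) { break; }
    }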

     
     
  4.  
    Johny cash

    Hello,

    I am trying to port your code to MonoTouch – would you please provide the whole sample project, so I can see if I am missing anything?

    Thanks

     
     
  5. Hi,
    Would you be so kind to provide source code for download?

    Thanks

     
     
  6. Thanks for this – consider it bookmarked !

    I assume you’ve submitted a bug radar or a document feedback to get Apple to correct the docs?

     
     
  7. 10,000 people already have.

    …so far as anyone can tell.

    Until Apple makes their bug-tracking public, I’m not going to waste my time reporting the hundreds of bugs I come across – all of which have ALMOST CERTAINLY been reported already. Apple’s super-secrecy here hurts everyone – including Apple.

     
     
  8. @Johny, @Amer

    All the code is there in the post, I believe. This was simply written directly into a blank Apple xcode default template app.

    (in the screenshots, you can see the two tabs named “First” and “Second” – these are the defaults if you create a new Tabbed Application; I didn’t even rename them)

    If you’re having difficulty starting or running a new Xcode project, I’m afraid I can’t help you. I suggest googling “tabbed iphone tutorial” or similar.

     
     
  9. Thanks buddy, very nice post. Can you please tell me how I can display live video on the iPhone with various effects like grayscale, sepia, etc.?

     
     
  10. Hi!

    Thanks for this very good tutorial. It is very helpful. Keep on!

    Is there some way to make the images smaller? In my project I have to process the image and it takes a lot of time :-( . Maybe there is some setting to do that?
    Also, I have my CaptureManager in its own class, and I start running the preview from another class. The image will not be saved before I press the “takeImage” button again. I have debugged this, and the part where the completionHandler is waits for the calling methods to do what they have to do, and then continues with saving the stillImage. I am new to programming Objective-C, and maybe I don’t understand how the completionHandler works. Can you please give me a tip on what to do? Is it possible to use something other than the completionHandler to achieve a better result?

    Thanks!

     
     
  11. Managing the camera (and using AV Foundation) is a pretty advanced iPhone topic – if you’re new to objective-C, I strongly advise you do *not* use this code, and instead focus on simpler projects until you’re comfortable with the language and platform. AV Foundation is missing a lot of documentation, it deals with low-level hardware, and brings in subtleties from Core Animation. That’s way too much distraction if you’re new.

     
     
  12. Hi!

    Millions of thanks for this tutorial. I was trying to capture the video input of the back camera for an AR app. I used the AVCam sample code from Apple, but it works terribly slowly and uses a lot of CPU time. I also tried UIImagePicker, but it has too many limitations and a lot of bugs.

    With your code it works smoothly now.

    Again, MILLIONS of thanks!!!

     
     
  13. Thank you very much!

    This example has clarified a lot for me. I was using the broken code for a day and wondering why nothing was showing up.

    Now that I am able to take a snapshot, I want to receive a constant video stream from the camera, or constant snaps on an interval into a buffer.

    Right now I am taking a snapshot every second and then sending it out on a UDP socket. Though it is working, I can tell it is not efficient; plus the iPhone makes a picture-taking sound every second, which is undesirable.

    Any feedback will be appreciated.

     
     
  14.  
    Old McStopher

    Bravo. Not often does one find a tutorial of this quality. I have four apps in the works that will take advantage of the camera. AVFoundation definitely wins over UIImagePicker and this tutorial gets the basics down quite effectively and efficiently!

     
     
  15. Hey,
    can someone tell me what to do, step by step?

     
     
  16. Thanks for your code. But after using it, I found that “session.sessionPreset = AVCaptureSessionPresetMedium” doesn’t seem to work: when I change it to High or Low, the captured image is always 640*480! Can you tell me what the problem is? Thank you very much!

     
     
  17. thanks for this post :D

    I would like to know how I can change the size of the camera view.

    Thanks

     
     
  18. How would you combine the video input with the UI elements from the window in an app? I have an app which has several non-opaque views which rotate and translate with device movement using the gyro. I’ve managed to create a video preview view behind this UI and I can capture the video from the camera to disk, but I haven’t found examples of saving both the video and the UI together. Are you aware of a way to do this?

     
     
  19. @Randy

    The easy way would be to add a Composition Layer / Composition Instructions into your AV Foundation output stack that inserts the UIView contents manually. IIRC there’s an Apple-provided composition-source that takes a UIView as input – I believe that’s how Apple does it in their own apps.
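    (The API I’m thinking of is, I believe, AVVideoCompositionCoreAnimationTool – it works with CALayers rather than UIViews directly, and applies at export time rather than during live capture. A rough, untested sketch, with placeholder sizes and layers:)

    // Overlay a CALayer on an already-recorded movie at export time.
    AVMutableVideoComposition *composition = [AVMutableVideoComposition videoComposition];
    composition.renderSize = CGSizeMake(640, 480);
    composition.frameDuration = CMTimeMake(1, 30);

    CALayer *parentLayer = [CALayer layer];
    CALayer *videoLayer = [CALayer layer];
    parentLayer.frame = CGRectMake(0, 0, 640, 480);
    videoLayer.frame = parentLayer.frame;

    CATextLayer *overlayLayer = [CATextLayer layer]; // stand-in for your UI content
    overlayLayer.string = @"Overlay";
    overlayLayer.frame = CGRectMake(10, 10, 200, 40);

    [parentLayer addSublayer:videoLayer];
    [parentLayer addSublayer:overlayLayer];

    composition.animationTool = [AVVideoCompositionCoreAnimationTool
        videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                inLayer:parentLayer];

    // ...then set this composition as the videoComposition on an AVAssetExportSession.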

     
     
  20. Thanks for the tutorial. I get an error in Xcode saying “Use of undeclared identifier ‘kCGImagePropertyExifDictionary’” on the line
    CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);

    Do you know why?

     
     
  21. Sorry, found it. Zut. #import

     
     
     
 
posted @ 2011-11-21 22:54 by Loretta^_^