AVFoundation development notes: Chapter 6 Capturing media

6.1 overview of the capture functionality

The photo and video capture capabilities of AV Foundation have been one of its strengths since the framework's beginning. Starting with iOS 4, developers can directly access the camera of iOS devices and the data it produces, and build that data into their own photo and video applications. The framework's capture features remain a major focus for Apple's media engineers, and each new release brings powerful new features and improvements. Although the core capture classes are consistent across iOS and OS X, you will find some differences between the platforms. These differences are generally platform-specific customizations; for example, OS X defines the AVCaptureScreenInput class for screen capture, while iOS omits this class because of sandbox restrictions. Our discussion of AVFoundation's capture features uses the iOS version as the example, but most of what is covered applies to OS X as well.

Many classes are involved when developing an application with capture features. The first step in learning this part of the framework is to understand these classes and the roles and responsibilities of each. Figure 6-1 shows some of the classes that may be used when developing capture applications.

 

6.1.1 capture session

The core class of the AV Foundation capture stack is AVCaptureSession. A capture session is the equivalent of a virtual "patch panel" connecting input and output resources. It manages the flow of data from physical devices, such as cameras and microphones, to one or more destinations. The inputs and outputs can be configured dynamically, letting developers reconfigure the capture environment as needed while a session is running.

A session preset can additionally be configured on the capture session to control the format and quality of the captured data. The default session preset is AVCaptureSessionPresetHigh, which is suitable for most cases, but the framework provides a number of other presets to customize the output to meet an application's special needs.
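As a minimal sketch of this pattern (assuming the session's inputs and outputs are configured elsewhere), a session can be created and a more specific preset chosen like this:

// Minimal sketch: create a session and pick a preset.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// AVCaptureSessionPresetHigh is the default; only switch presets if the
// application has more specific output requirements.
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    session.sessionPreset = AVCaptureSessionPreset640x480;
}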

6.1.2 capture devices

AVCaptureDevice defines an interface for physical devices such as cameras and microphones. In most cases, these devices are built into the Mac, iPhone, or iPad, but they may also be external digital cameras or camcorders. AVCaptureDevice defines a large number of methods for controlling the physical hardware, such as the camera's focus, exposure, white balance, and flash.

AVCaptureDevice defines a large number of class methods for accessing the system's capture devices. The most commonly used is defaultDeviceWithMediaType:, which returns the system's default device for the given media type, as shown in the following example:

 

AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

In the example, we request the default video device. On an iOS device with both a front and a rear camera, this method returns the rear camera because it is the system default. On Macs with a camera, the built-in FaceTime camera is returned.

6.1.3 capture device inputs

Before you can capture with a capture device, you first need to add it as an input to the capture session. However, a capture device cannot be added directly to an AVCaptureSession; instead it must be wrapped in an AVCaptureDeviceInput instance. This object acts as the patch cord between the device's output data and the capture session. Create an AVCaptureDeviceInput with the deviceInputWithDevice:error: method, as shown below:

 

NSError *error;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

You need to pass a valid NSError pointer to this method, because any error encountered while creating the input will be described there.

6.1.4 capture outputs

AV Foundation defines many concrete subclasses of AVCaptureOutput. AVCaptureOutput is an abstract base class used to define output destinations for the data produced by a capture session. The framework defines high-level subclasses of this abstract class, such as AVCaptureStillImageOutput and AVCaptureMovieFileOutput, which make it easy to capture still photos and video. You will also find lower-level subclasses here, such as AVCaptureAudioDataOutput and AVCaptureVideoDataOutput, which give direct access to the digital samples captured by the hardware. Using these low-level output classes requires a better understanding of how capture devices render their data, but they provide more powerful capabilities, such as real-time processing of audio and video streams.
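For example, a hedged sketch of wiring up AVCaptureVideoDataOutput for per-frame access might look like the following; the frameProcessor object, the queue label, and the session variable are assumptions, with frameProcessor standing in for a class that conforms to AVCaptureVideoDataOutputSampleBufferDelegate:

// Sketch: deliver raw video frames to a delegate on a background queue.
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t frameQueue = dispatch_queue_create("com.example.frameQueue", NULL);
[videoDataOutput setSampleBufferDelegate:frameProcessor queue:frameQueue];

if ([session canAddOutput:videoDataOutput]) {
    [session addOutput:videoDataOutput];
}

// The delegate then receives each frame in:
// captureOutput:didOutputSampleBuffer:fromConnection: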

6.1.5 capture connections

In Figure 6-1, one class is not explicitly labeled; it is represented by the black arrows connecting the different components. This is the AVCaptureConnection class. The capture session first determines the media type rendered by a given capture device input and automatically establishes connections to the capture outputs that can accept that media type. For example, AVCaptureMovieFileOutput accepts both audio and video data, so the session determines which inputs produce video and which produce audio, and establishes the connections correctly. Access to these connections gives developers low-level control over the signal flow, such as disabling particular connections or accessing the individual audio channels in an audio connection.
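As an illustrative sketch (assuming a movieOutput has already been added to a session), a connection can be looked up and controlled like this:

// Sketch: fetch the video connection feeding the movie file output and
// disable it, leaving only the audio connection active.
AVCaptureConnection *videoConnection =
    [movieOutput connectionWithMediaType:AVMediaTypeVideo];
if (videoConnection.isActive) {
    videoConnection.enabled = NO;
}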

6.1.6 capture preview

An image capture application would not be easy to use if users couldn't see the scene being captured. Fortunately, the framework defines the AVCaptureVideoPreviewLayer class to meet this requirement. The preview layer is a Core Animation CALayer subclass that can render the captured video data in real time. The role of this class is similar to AVPlayerLayer, but it is tailored to the needs of camera capture. Like AVPlayerLayer, AVCaptureVideoPreviewLayer also supports the concept of video gravity, controlling how the rendered video content is scaled and stretched, as shown in Figures 6-2, 6-3, and 6-4.
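A minimal sketch of attaching a preview layer to an existing view and session (the view and session variables are assumptions) looks like this:

// Sketch: build a preview layer for the session and size it to a host view.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = view.bounds;
// AVLayerVideoGravityResizeAspectFill fills the layer and crops as needed;
// AVLayerVideoGravityResizeAspect letterboxes the content instead.
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[view.layer addSublayer:previewLayer];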

 

6.2 a simple recipe

Now that we understand the capture classes at a high level, let's look at how to create a capture session for a simple camera application.

 

// 1. Create a capture session.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

//2. Get a reference to the default camera.
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

//3. Create a device input for the camera .
NSError *error;
AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&error];

//4. Connect the input to the session.
if ([session canAddInput:cameraInput]) {
    [session addInput:cameraInput] ;
}

//5. Create an AVCaptureOutput to capture still images .
AVCaptureStillImageOutput *imageOutput = [[AVCaptureStillImageOutput alloc] init] ;

// 6. Add the output to the session.
if ([session canAddOutput:imageOutput]) {
    [session addOutput:imageOutput] ;
}
// 7. Start the session and begin the flow of data.
[session startRunning];

In the above example, we established the infrastructure for capturing still pictures from the default camera: create a capture session, add the default camera to the session through a capture device input, add a capture output for still images, and then start the session running so video data can begin flowing through the system. A typical session setup is more involved than this code, but the example brings the core components together well.

Now that we know the core capture classes, let's deepen our understanding of AVFoundation's capture features by building a real application.

6.3 creating a camera application

In this section, you will learn the AVFoundation capture APIs in detail by building a camera application named Kamera (shown in Figure 6-5). The example project mimics the Camera app built into the iPhone, letting users take high-quality still photos and videos and writing them to the iOS Camera Roll through the Assets Library framework. In the course of implementing this application, you will write many of the practical methods needed for this core functionality and gain a deeper understanding of AVFoundation's capture classes. You can find the starter project, Kamera_Starter, in the Chapter 6 directory.

 

Note:
Developing the Kamera application requires building and testing on a physical device. Most AV Foundation features can be tested in the iOS Simulator, but the capture APIs can only be tested on an actual device.

6.3.1 create preview view

Figure 6-6 shows the composition of the Kamera application's user interface. We will focus on implementing the middle layer, THPreviewView, because it is the part of the user interface that deals directly with AV Foundation.

 

The THPreviewView class shown in Figure 6-6 provides users with a real-time preview of what the camera is currently capturing. We will implement this behavior using the AVCaptureVideoPreviewLayer class. First, let's look at the THPreviewView interface, shown in code listing 6-1.

Code listing 6-1 THPreviewView interface

 

#import <AVFoundation/AVFoundation.h>

@protocol THPreviewViewDelegate <NSObject>
- (void)tappedToFocusAtPoint:(CGPoint)point;
- (void)tappedToExposeAtPoint:(CGPoint)point;
- (void)tappedToResetFocusAndExposure;
@end

@interface THPreviewView : UIView

@property (strong, nonatomic) AVCaptureSession *session;
@property (weak, nonatomic) id<THPreviewViewDelegate> delegate;

@property (nonatomic) BOOL tapToFocusEnabled;
@property (nonatomic) BOOL tapToExposeEnabled;

@end

Most of the properties and methods defined here are used in conjunction with the view's tap gestures; the Kamera application supports tap-to-focus and tap-to-expose. However, the key property defined by this class is session, which is used to associate the AVCaptureVideoPreviewLayer with the active AVCaptureSession. Let's continue with the concrete implementation of this class.

Listing 6-2 shows an abbreviated implementation of this class. Most of the code in the project's version of the class deals with touch handling, which we won't discuss here. Let's focus on the AVFoundation-related parts of the code.

Code listing 6-2 implementation of THPreviewView

 

#import "THPreviewView.h"

@implementation THPreviewView

+ (Class)layerClass {                                                       // 1
    return [AVCaptureVideoPreviewLayer class];
}

- (void)setSession:(AVCaptureSession *)session {                            // 2
    [(AVCaptureVideoPreviewLayer*)self.layer setSession:session];
}

- (AVCaptureSession*)session {
    return [(AVCaptureVideoPreviewLayer*)self.layer session];
}

- (CGPoint)captureDevicePointForPoint:(CGPoint)point {                      // 3
    AVCaptureVideoPreviewLayer *layer =
        (AVCaptureVideoPreviewLayer *)self.layer;
    return [layer captureDevicePointOfInterestForPoint:point];
}

(1) A UIView instance is normally backed by a generic CALayer instance, but overriding the layerClass class method lets developers customize the layer type used when the view is created. Here we override layerClass on UIView and return the AVCaptureVideoPreviewLayer class object.

(2) Override the accessors for the session property. In setSession:, access the view's layer property, which is now an AVCaptureVideoPreviewLayer instance, and set its AVCaptureSession. This routes the captured data directly to the layer and keeps it synchronized with the session state. The session getter is also overridden to return the preview layer's session.

(3) captureDevicePointForPoint: is a private method used to support the various touch handling methods defined by this class. It converts a touch point in the screen coordinate space into a point in the camera device's coordinate space.

Coordinate space conversion

Let's look more closely at the captureDevicePointForPoint: method in listing 6-2 to draw your attention to several important methods defined by the AVCaptureVideoPreviewLayer class. When using AV Foundation's capture APIs, you must understand the difference between the screen coordinate space and the capture device's coordinate space.

The screen coordinate space of an iPhone 5 or iPhone 5s has its upper-left corner at (0, 0); the lower-right corner is (320, 568) in portrait mode and (568, 320) in landscape mode. As an iOS developer you are certainly familiar with this coordinate space. The capture device's coordinate space is defined differently: it is based on the camera sensor's native orientation, it does not rotate with the device, and its upper-left corner is (0, 0) and lower-right corner is (1, 1), as shown in Figure 6-7.

 

Prior to iOS 6, converting between these two coordinate spaces was quite difficult. To accurately convert a screen point into a camera coordinate point (or the reverse), developers had to account for factors such as video gravity, mirroring, layer transforms, and orientation. Fortunately, AVCaptureVideoPreviewLayer now defines conversion methods that make this process much easier.

AVCaptureVideoPreviewLayer defines two methods for converting between two coordinate systems:

● captureDevicePointOfInterestForPoint: takes a CGPoint in the screen coordinate space and returns the converted CGPoint in the device coordinate space.
● pointForCaptureDevicePointOfInterest: takes a CGPoint in the camera coordinate space and returns the converted CGPoint in the screen coordinate space.

THPreviewView uses captureDevicePointOfInterestForPoint: to convert the user's touch point into a point in the camera device's coordinate space. This converted point will be used when implementing the tap-to-focus and tap-to-expose features of the Kamera application.
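As a sketch of how this conversion might be used inside THPreviewView, a hypothetical single-tap handler (the gesture recognizer wiring is an assumption and is not shown in listing 6-2) could look like this:

// Sketch: convert a tap in the preview view into a device point of interest.
- (void)handleSingleTap:(UIGestureRecognizer *)recognizer {
    CGPoint layerPoint = [recognizer locationInView:self];
    AVCaptureVideoPreviewLayer *previewLayer =
        (AVCaptureVideoPreviewLayer *)self.layer;
    // Device space runs from (0, 0) at the top left to (1, 1) at the bottom right.
    CGPoint devicePoint =
        [previewLayer captureDevicePointOfInterestForPoint:layerPoint];
    [self.delegate tappedToFocusAtPoint:devicePoint];
}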

After implementing THPreviewView, let's continue to discuss the core capture code.

6.3.2 creating the capture controller

The capture session code will be contained in a class named THCameraController. This class configures and manages the different capture devices, and also controls and interacts with the capture outputs. First, let's look at the interface of the THCameraController class, shown in code listing 6-3.

Code listing 6-3 interface of THCameraController

 

#import <AVFoundation/AVFoundation.h>

extern NSString *const THThumbnailCreatedNotification;

@protocol THCameraControllerDelegate <NSObject>                             // 1
- (void)deviceConfigurationFailedWithError:(NSError *)error;
- (void)mediaCaptureFailedWithError:(NSError *)error;
- (void)assetLibraryWriteFailedWithError:(NSError *)error;
@end

@interface THCameraController : NSObject

@property (weak, nonatomic) id<THCameraControllerDelegate> delegate;
@property (nonatomic, strong, readonly) AVCaptureSession *captureSession;

// Session Configuration                                                    // 2
- (BOOL)setupSession:(NSError **)error;
- (void)startSession;
- (void)stopSession;

// Camera Device Support                                                    // 3
- (BOOL)switchCameras;
- (BOOL)canSwitchCameras;
@property (nonatomic, readonly) NSUInteger cameraCount;
@property (nonatomic, readonly) BOOL cameraHasTorch;
@property (nonatomic, readonly) BOOL cameraHasFlash;
@property (nonatomic, readonly) BOOL cameraSupportsTapToFocus;
@property (nonatomic, readonly) BOOL cameraSupportsTapToExpose;
@property (nonatomic) AVCaptureTorchMode torchMode;
@property (nonatomic) AVCaptureFlashMode flashMode;

// Tap to * Methods                                                         // 4
- (void)focusAtPoint:(CGPoint)point;
- (void)exposeAtPoint:(CGPoint)point;
- (void)resetFocusAndExposureModes;

/** Media Capture Methods **/                                               // 5

// Still Image Capture
- (void)captureStillImage;

// Video Recording
- (void)startRecording;
- (void)stopRecording;
- (BOOL)isRecording;

@end

This interface contains the most code we've seen so far, so let's break it down into smaller parts.
(1) THCameraControllerDelegate defines the methods to call on the delegate object when an error event occurs.
(2) These methods configure and control the capture session.
(3) These methods and properties switch between cameras and query their capabilities, so the user interface can present the correct choices.
(4) These methods implement tap-to-focus and tap-to-expose, letting users set the focus and exposure points through touch.
(5) These methods provide the still image and video capture functionality.

Let's go to the implementation file and start learning the specific implementation process.

6.3.3 setting up capture sessions

Let's begin implementing the THCameraController class, starting with the setupSession: method shown in listing 6-4.

Code listing 6-4 setupSession: method

 

#import "THCameraController.h"
#import <AVFoundation/AVFoundation.h>
#import <AssetsLibrary/AssetsLibrary.h>
#import "NSFileManager+THAdditions.h"

@interface THCameraController ()

@property (strong, nonatomic) dispatch_queue_t videoQueue;
@property (strong, nonatomic) AVCaptureSession *captureSession;
@property (weak, nonatomic) AVCaptureDeviceInput *activeVideoInput;

@property (strong, nonatomic) AVCaptureStillImageOutput *imageOutput;
@property (strong, nonatomic) AVCaptureMovieFileOutput *movieOutput;
@property (strong, nonatomic) NSURL *outputURL;

@end

@implementation THCameraController

- (BOOL)setupSession:(NSError **)error {

    self.captureSession = [[AVCaptureSession alloc] init];                  // 1
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;

    // Set up default camera device
    AVCaptureDevice *videoDevice =                                          // 2
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    AVCaptureDeviceInput *videoInput =                                      // 3
        [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:error];
    if (videoInput) {
        if ([self.captureSession canAddInput:videoInput]) {                 // 4
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        }
    } else {
        return NO;
    }

    // Setup default microphone
    AVCaptureDevice *audioDevice =                                          // 5
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

    AVCaptureDeviceInput *audioInput =                                      // 6
        [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:error];
    if (audioInput) {
        if ([self.captureSession canAddInput:audioInput]) {                 // 7
            [self.captureSession addInput:audioInput];
        }
    } else {
        return NO;
    }

    // Setup the still image output
    self.imageOutput = [[AVCaptureStillImageOutput alloc] init];            // 8
    self.imageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};

    if ([self.captureSession canAddOutput:self.imageOutput]) {
        [self.captureSession addOutput:self.imageOutput];
    }

    // Setup movie file output
    self.movieOutput = [[AVCaptureMovieFileOutput alloc] init];             // 9

    if ([self.captureSession canAddOutput:self.movieOutput]) {
        [self.captureSession addOutput:self.movieOutput];
    }
    
    self.videoQueue = dispatch_queue_create("com.tapharmonic.VideoQueue", NULL);

    return YES;
}

@end

(1) The first object created is the session itself. AVCaptureSession is the central hub of the capture activity and is the object to which the inputs and outputs are added. A capture session can be configured with a session preset; in this example we set AVCaptureSessionPresetHigh, because it is the default and meets Kamera's needs. Other session presets can be found in the AVCaptureSession documentation.
(2) Get a pointer to the default video capture device. On nearly all iOS devices, this returns the phone's rear camera.
(3) Before a capture device can be added to an AVCaptureSession, it must first be wrapped in an AVCaptureDeviceInput object. Note that a valid NSError pointer is passed to the method so any error raised while creating the input can be captured.
(4) If a valid AVCaptureDeviceInput is returned, first call the session's canAddInput: method to test whether it can be added. If it can, call addInput: and pass it the capture device input.
(5) Just as we found the default video capture device, select the default audio capture device the same way; this returns a pointer to the built-in microphone.
(6) Create a capture input for this device, again capturing any errors encountered by the call.
(7) Test whether the input can be added to the session and, if so, add it to the capture session.
(8) Create an AVCaptureStillImageOutput instance. This AVCaptureOutput subclass is used to capture still pictures from the camera. Configure its outputSettings dictionary to indicate that you want to capture images in JPEG format. Once created and configured, add it to the capture session by calling addOutput:. As with the device inputs, test whether the output can be added before adding it; blindly adding an output without testing can throw an exception and crash the application.
(9) Create a new AVCaptureMovieFileOutput instance. This AVCaptureOutput subclass records QuickTime movies to the file system. Again, test before adding this output to the session. Finally, return YES to let the caller know the session configuration succeeded.

6.3.4 starting and stopping sessions

Calling setupSession: sets up the capture session's object graph properly, but the session must be started before it can be used. Starting the session begins the flow of data and makes it ready to capture photos and videos. Let's look at the implementation of the startSession and stopSession methods, shown in code listing 6-5.

Listing 6-5 starts and stops the capture session

 

- (void)startSession {
    if (![self.captureSession isRunning]) {                                 // 1
        dispatch_async(self.videoQueue, ^{
            [self.captureSession startRunning];
        });
    }
}

- (void)stopSession {
    if ([self.captureSession isRunning]) {                                  // 2
        dispatch_async(self.videoQueue, ^{
            [self.captureSession stopRunning];
        });
    }
}

(1) First check whether the capture session is already running. If it is not, call the session's startRunning method. This is a synchronous call that takes some time, so dispatch it asynchronously onto videoQueue so it doesn't block the main thread.
(2) The implementation of stopSession is essentially the same. Calling stopRunning on the capture session stops the flow of data through the system. This is also a synchronous call, so it too should be dispatched asynchronously.

When running the application, one or two dialog boxes will appear immediately upon startup, as shown in Figure 6-8. We will continue to discuss why these dialog boxes are displayed and how to deal with them.

 

6.3.5 handling privacy requirements

iOS 7 and iOS 8 further strengthened privacy protection, making it more transparent when an application tries to use the device's hardware. When the application attempts to access the microphone or camera, a dialog pops up asking the user for authorization. This privacy improvement is very welcome on the iOS platform, but it creates a little more work for us when building capture applications.

Note:
In iOS 7, only users in regions with specific legal requirements were asked for permission to access the device's camera. Starting with iOS 8, users in all regions must grant authorization within the application before it can access the camera.

This prompt is triggered by creating the AVCaptureDeviceInput. When these dialogs appear, the system does not block waiting for the user's response; it immediately returns a device. For an audio device it returns silence, and for the camera it returns black frames. Only after the user answers and agrees to the dialog does capture of real audio or video content begin. If the user answers "no", nothing is recorded during this session. If the user denies access, creating the AVCaptureDeviceInput at the next application launch returns nil, and the creation method produces an NSError with the error code AVErrorApplicationIsNotAuthorizedToUseDevice and a failure reason such as the following:

 

NSLocalizedFailureReason = This app is not authorized to use iPhone Microphone.

If you receive this error, about the only thing you can do is present an error message, as shown in Figure 6-9, telling the user to grant access to the required hardware in the Settings app.

 

Note:
Users can change their privacy settings in the Settings app at any time, so be sure to check for any returned errors whenever you create an AVCaptureDeviceInput object.
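On iOS 7 and later you can also query or request authorization up front through AVCaptureDevice; a hedged sketch:

// Sketch: check camera authorization before configuring the session.
AVAuthorizationStatus status =
    [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];

if (status == AVAuthorizationStatusNotDetermined) {
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                             completionHandler:^(BOOL granted) {
        // The handler may run on an arbitrary queue; hop to the main
        // queue before touching the user interface.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (!granted) {
                // Direct the user to the Settings app.
            }
        });
    }];
} else if (status == AVAuthorizationStatusDenied ||
           status == AVAuthorizationStatusRestricted) {
    // Present the alert described above.
}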

With the session configuration complete, let's continue with how to implement the specific features of the Kamera application.

6.3.6 switching cameras

Essentially all current iOS devices have both front and rear cameras. The Kamera application uses both, so the first feature to develop is letting users switch between cameras. Let's start with several supporting methods that simplify the implementation, shown in code listing 6-6.

Code listing 6-6 camera support methods

 

- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position { // 1
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}

- (AVCaptureDevice *)activeCamera {                                         // 2
    return self.activeVideoInput.device;
}

- (AVCaptureDevice *)inactiveCamera {                                       // 3
    AVCaptureDevice *device = nil;
    if (self.cameraCount > 1) {
        if ([self activeCamera].position == AVCaptureDevicePositionBack) {
            device = [self cameraWithPosition:AVCaptureDevicePositionFront];
        } else {
            device = [self cameraWithPosition:AVCaptureDevicePositionBack];
        }
    }
    return device;
}

- (BOOL)canSwitchCameras {                                                  // 4
    return self.cameraCount > 1;
}

- (NSUInteger)cameraCount {                                                 // 5
    return [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
}

(1) cameraWithPosition: returns the AVCaptureDevice at the specified position. Valid positions are AVCaptureDevicePositionFront and AVCaptureDevicePositionBack. The method iterates over the available video devices and returns the one matching the position parameter.
(2) The activeCamera method returns the camera currently used by the capture session, via the device property of the active capture device input.
(3) The inactiveCamera method returns the camera that is not currently active by finding the opposite of the active camera. It returns nil if the device the application runs on has only one camera.
(4) The canSwitchCameras method returns a BOOL indicating whether more than one camera is available.
(5) Finally, cameraCount returns the number of available video capture devices.

With these methods in place, let's complete the camera switching feature. Switching between the front and rear cameras requires reconfiguring the capture session. Fortunately, an AVCaptureSession can be reconfigured on the fly, so you don't have to worry about the overhead of stopping and restarting the session. However, any changes you make to the session must be performed as a single atomic update, bracketed by the beginConfiguration and commitConfiguration methods, as shown in code listing 6-7.

Code listing 6-7 switching cameras

 

- (BOOL)switchCameras {

    if (![self canSwitchCameras]) {                                         // 1
        return NO;
    }

    NSError *error;
    AVCaptureDevice *videoDevice = [self inactiveCamera];                   // 2

    AVCaptureDeviceInput *videoInput =
    [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

    if (videoInput) {
        [self.captureSession beginConfiguration];                           // 3

        [self.captureSession removeInput:self.activeVideoInput];            // 4

        if ([self.captureSession canAddInput:videoInput]) {                 // 5
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        } else {
            [self.captureSession addInput:self.activeVideoInput];
        }

        [self.captureSession commitConfiguration];                          // 6

    } else {
        [self.delegate deviceConfigurationFailedWithError:error];           // 7
        return NO;
    }

    return YES;
}

(1) First, confirm that switching cameras is possible. If not, return NO and exit the method.
(2) Next, get a pointer to the inactive camera and create a new AVCaptureDeviceInput for it.
(3) Call beginConfiguration on the session to mark the start of an atomic configuration change.
(4) Remove the currently active AVCaptureDeviceInput. The current video capture device input must be removed before the new one can be added.
(5) Perform the standard test to check whether the new AVCaptureDeviceInput can be added; if so, add it to the session and set it as the activeVideoInput. As a safety measure, if the new input cannot be added, add the previous input back.
(6) After configuration is done, call commitConfiguration on the AVCaptureSession, which batches all the changes into a single atomic modification of the session.
(7) If an error occurs while creating the new AVCaptureDeviceInput, notify the delegate so it can handle the error.

Run the application again. Assuming that the iOS device has two cameras, click the camera icon in the upper right corner of the screen to switch between the front and rear cameras.

6.3.7 configuring capture devices

AVCaptureDevice gives developers a lot of control over the cameras on iOS devices. In particular, the camera's focus, exposure, and white balance can be adjusted and locked independently. Focus and exposure can also be set to a specific point of interest, enabling the application's tap-to-focus and tap-to-expose features. AVCaptureDevice also lets you control the device's LED as a photo flash or as a torch.

Whenever you modify a camera device, you must first test whether the device supports the modification. Not all cameras support all features, even across iOS devices with a single camera. For example, the front-facing camera does not support focusing, because its distance from the subject rarely exceeds an arm's length, but most rear cameras support full-range autofocus. Attempting to apply an unsupported change can throw an exception and crash the application, so test before modifying. For example, before setting the focus mode to auto focus, first check whether this mode is supported, as shown in the following code:

 

AVCaptureDevice *device = // Active video capture device
if ([device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
    // Perform configuration
}

Once you have verified that the configuration change is supported, you can perform the actual device configuration. The basic pattern for modifying a capture device is: lock the device for configuration, perform the required modifications, and finally unlock the device. For example, after determining that the camera supports auto focus mode, configure the focusMode property as follows:

 

AVCaptureDevice *device = // Active video capture device
if ([device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
    NSError *error;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeAutoFocus;
        [device unlockForConfiguration];
    } else {
        // handle error
    }
}

The devices on a Mac, iPhone, or iPad are shared system resources, so AVCaptureDevice requires developers to acquire an exclusive lock on the device before modifying it. Failing to do so causes the application to throw an exception. Although you aren't required to release the lock immediately after configuration is complete, holding it has side effects on other applications using the same resource. So, most of the time, be a good citizen of the platform and release the lock whenever a configuration change is finished.

After learning the above knowledge, let's continue to study how to realize various device configuration functions of Kamera.

6.3.8 adjusting focus and exposure

Most rear cameras on iOS devices support setting focus and exposure based on a given point of interest. Surfacing this feature in the interface lets the user tap a spot in the camera view and have the camera focus or expose on that point. The focus and exposure at these points of interest can also be locked so they stay steady while the user presses the photo button. We'll implement tap-to-focus first, as shown in code listing 6-8.

Code listing 6-8 tap-to-focus methods

 

- (BOOL)cameraSupportsTapToFocus {                                          // 1
    return [[self activeCamera] isFocusPointOfInterestSupported];
}

- (void)focusAtPoint:(CGPoint)point {                                       // 2

    AVCaptureDevice *device = [self activeCamera];

    if (device.isFocusPointOfInterestSupported &&                           // 3
        [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {

        NSError *error;
        if ([device lockForConfiguration:&error]) {                         // 4
            device.focusPointOfInterest = point;
            device.focusMode = AVCaptureFocusModeAutoFocus;
            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

(1) First, implement the cameraSupportsTapToFocus method by asking the active camera whether it supports focusing on a point of interest. The client uses this method to update the user interface as needed.
(2) The focusAtPoint: method is passed a CGPoint. The point has already been converted from screen coordinates to capture device coordinates by the previously implemented THPreviewView class.
(3) After getting a pointer to the active camera device, test whether it supports point-of-interest focus and confirm that it supports auto focus mode. AVCaptureFocusModeAutoFocus performs a single autofocus scan and then reverts the focusMode to AVCaptureFocusModeLocked.
(4) Lock the device to prepare for configuration. If the lock is acquired, set the focusPointOfInterest property to the passed-in CGPoint value and set the focus mode to AVCaptureFocusModeAutoFocus. Finally, call unlockForConfiguration on the AVCaptureDevice to release the lock.

Run the application, find a specific object you want to focus on, and tap it. The application draws a blue rectangle on the screen around the camera's focus point. With the focus now locked, move the camera, find a new point of interest, and tap to refocus on the new object. With tap-to-focus complete, let's look at implementing tap-to-expose.

The default exposure mode for most devices is AVCaptureExposureModeContinuousAutoExposure, which automatically adjusts the exposure as the scene changes. We could implement a "tap to expose and lock" feature in a way similar to tap-to-focus, but this is a bit trickier. Although the AVCaptureExposureMode enumeration defines an AVCaptureExposureModeAutoExpose value, that value is not supported by current iOS devices. This means developers need a little creativity to implement the feature in the same way as tap-to-focus. Let's look at the implementation, shown in code listing 6-9.

Code listing 6-9 tap-to-expose methods

 

- (BOOL)cameraSupportsTapToExpose {                                         // 1
    return [[self activeCamera] isExposurePointOfInterestSupported];
}

// Define KVO context pointer for observing the 'adjustingExposure' device property.
static const NSString *THCameraAdjustingExposureContext;

- (void)exposeAtPoint:(CGPoint)point {

    AVCaptureDevice *device = [self activeCamera];

    AVCaptureExposureMode exposureMode =
    AVCaptureExposureModeContinuousAutoExposure;

    if (device.isExposurePointOfInterestSupported &&                        // 2
        [device isExposureModeSupported:exposureMode]) {

        NSError *error;
        if ([device lockForConfiguration:&error]) {                         // 3

            device.exposurePointOfInterest = point;
            device.exposureMode = exposureMode;

            if ([device isExposureModeSupported:AVCaptureExposureModeLocked]) {
                [device addObserver:self                                    // 4
                         forKeyPath:@"adjustingExposure"
                            options:NSKeyValueObservingOptionNew
                            context:&THCameraAdjustingExposureContext];
            }

            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {

    if (context == &THCameraAdjustingExposureContext) {                     // 5

        AVCaptureDevice *device = (AVCaptureDevice *)object;

        if (!device.isAdjustingExposure &&                                  // 6
            [device isExposureModeSupported:AVCaptureExposureModeLocked]) {

            [object removeObserver:self                                     // 7
                        forKeyPath:@"adjustingExposure"
                           context:&THCameraAdjustingExposureContext];

            dispatch_async(dispatch_get_main_queue(), ^{                    // 8
                NSError *error;
                if ([device lockForConfiguration:&error]) {
                    device.exposureMode = AVCaptureExposureModeLocked;
                    [device unlockForConfiguration];
                } else {
                    [self.delegate deviceConfigurationFailedWithError:error];
                }
            });
        }

    } else {
        [super observeValueForKeyPath:keyPath
                             ofObject:object
                               change:change
                              context:context];
    }
}

(1) The implementation of cameraSupportsTapToExpose is almost identical to cameraSupportsTapToFocus: ask whether the active device supports exposing on a point of interest.
(2) Perform the device configuration tests to ensure the required settings are supported. In this case, verify that AVCaptureExposureModeContinuousAutoExposure is supported.
(3) Lock the device to prepare for configuration, and set the exposurePointOfInterest and exposureMode properties to the desired values.
(4) Determine whether the device supports the locked exposure mode. If it does, use KVO to observe the state of the device's adjustingExposure property. Observing this property tells us when the exposure adjustment has completed, which gives us the opportunity to lock the exposure at that point.
(5) Determine whether the observation callback corresponds to the change we care about by testing whether the context parameter is the &THCameraAdjustingExposureContext pointer.
(6) Check that the device is no longer adjusting its exposure level and confirm that its exposure mode can be set to AVCaptureExposureModeLocked. If so, continue with the following steps.
(7) Remove self as an observer of the adjustingExposure property so we aren't notified of subsequent changes.
(8) Finally, dispatch asynchronously back to the main queue and define a block that sets the exposureMode property to AVCaptureExposureModeLocked. It is important to defer the exposureMode change to the next pass of the event loop so the removeObserver: call in the previous step can complete.

Indeed, this is more involved than the tap-to-focus implementation, but you'll find the end result is the same. Run the application again, find a darker area in the preview view, and double-tap it. You'll see an orange rectangle on the screen, and the exposure will lock on that area. Point the camera at a brighter area (a computer screen works well), double-tap again, and the exposure adjusts to a reasonable level.

The tap gestures for locking the focus and exposure areas add a nice touch to the Kamera application. However, we also need to define a method that lets users switch back to continuous focus and exposure modes. Let's do that now. Listing 6-10 shows the implementation of the resetFocusAndExposureModes method.

Listing 6-10 resets focus and exposure

 

- (void)resetFocusAndExposureModes {

    AVCaptureDevice *device = [self activeCamera];

    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;

    AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;

    BOOL canResetFocus = [device isFocusPointOfInterestSupported] &&        // 1
    [device isFocusModeSupported:focusMode];

    BOOL canResetExposure = [device isExposurePointOfInterestSupported] &&  // 2
    [device isExposureModeSupported:exposureMode];

    CGPoint centerPoint = CGPointMake(0.5f, 0.5f);                          // 3

    NSError *error;
    if ([device lockForConfiguration:&error]) {

        if (canResetFocus) {                                                // 4
            device.focusMode = focusMode;
            device.focusPointOfInterest = centerPoint;
        }

        if (canResetExposure) {                                             // 5
            device.exposureMode = exposureMode;
            device.exposurePointOfInterest = centerPoint;
        }
        
        [device unlockForConfiguration];
        
    } else {
        [self.delegate deviceConfigurationFailedWithError:error];
    }
}

(1) First, test whether focus point of interest and continuous autofocus mode are supported on the device.
(2) Similarly, test whether exposure point of interest and continuous auto exposure mode are supported.
(3) Create a CGPoint with x and y values of 0.5f. Recall that the capture device coordinate space has (0, 0) at the upper-left corner and (1, 1) at the lower-right corner, so (0.5, 0.5) refers to the center of the frame.
(4) Lock the device to prepare for configuration. If the focus can be reset, make the desired changes.
(5) Similarly, if the exposure can be reset, set the desired exposure mode and point of interest.

Run the application and make some focus and exposure adjustments. Then double-tap the preview view with two fingers to reset the focus and exposure.

6.3.9 adjusting flash and torch modes

AVCaptureDevice lets developers modify the camera's flash and torch modes. The LED on the back of the device is used as a flash when capturing still photos and as a continuous light (torch) when capturing video. The flashMode and torchMode properties of the capture device can be set to one of three values:
AVCapture(Flash|Torch)ModeOn: always on.
AVCapture(Flash|Torch)ModeOff: always off.
AVCapture(Flash|Torch)ModeAuto: the system turns the LED on or off automatically based on the ambient light.

Code listing 6-11 shows the implementation of the flash and torch mode methods.

Code listing 6-11 flash and torch methods

 

- (BOOL)cameraHasFlash {
    return [[self activeCamera] hasFlash];
}

- (AVCaptureFlashMode)flashMode {
    return [[self activeCamera] flashMode];
}

- (void)setFlashMode:(AVCaptureFlashMode)flashMode {

    AVCaptureDevice *device = [self activeCamera];

    if (device.flashMode != flashMode) {

        NSError *error;
        if ([device lockForConfiguration:&error]) {
            device.flashMode = flashMode;
            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

- (BOOL)cameraHasTorch {
    return [[self activeCamera] hasTorch];
}

- (AVCaptureTorchMode)torchMode {
    return [[self activeCamera] torchMode];
}

- (void)setTorchMode:(AVCaptureTorchMode)torchMode {

    AVCaptureDevice *device = [self activeCamera];

    if ([device isTorchModeSupported:torchMode]) {

        NSError *error;
        if ([device lockForConfiguration:&error]) {
            device.torchMode = torchMode;
            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

We won't walk through this code step by step, because it should look very familiar by now. As before, confirm that the feature you are about to modify is supported before changing the configuration.

The Kamera application's main view controller calls either setFlashMode: or setTorchMode:, depending on the position of the Video/Photo mode selector in the user interface. Run the application and tap the lightning-bolt icon in the upper-left corner of the screen to cycle through the different modes.

6.3.10 taking still pictures

When implementing the setupSession: method, we added an AVCaptureStillImageOutput instance to the capture session. This AVCaptureOutput subclass is used to capture still images. The AVCaptureStillImageOutput class defines the captureStillImageAsynchronouslyFromConnection:completionHandler: method to perform the actual capture. Let's look at a simple example:

 

AVCaptureConnection *connection = // Active video capture connection
id completionHandler = ^(CMSampleBufferRef buffer, NSError *error) {
    // Handle image capture
};
[imageOutput captureStillImageAsynchronouslyFromConnection:connection
                                         completionHandler:completionHandler];

This method involves some new object types worth discussing. The first is AVCaptureConnection. When you create a session and add capture device inputs and capture outputs, the session automatically establishes the connections between inputs and outputs and routes the signal flow as needed. Having access to these connections is useful in some cases because it lets developers exert finer control over the data being sent to an output. The other new type is CMSampleBuffer, a Core Foundation-style object defined by the Core Media framework. It is covered in detail in the next chapter; for now we only need to know that it holds the captured image data. Because we specified AVVideoCodecJPEG for the codec key when creating the still image output, the bytes it contains are compressed in JPEG format.

Let's look at how the Kamera application uses these objects when taking still pictures, as shown in listing 6-12.

Listing 6-12 captures a still image

 

- (void)captureStillImage {

    AVCaptureConnection *connection =                                       // 1
    [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];

    if (connection.isVideoOrientationSupported) {                           // 2
        connection.videoOrientation = [self currentVideoOrientation];
    }

    id handler = ^(CMSampleBufferRef sampleBuffer, NSError *error) {
        if (sampleBuffer != NULL) {

            NSData *imageData =                                             // 4
                [AVCaptureStillImageOutput
                    jpegStillImageNSDataRepresentation:sampleBuffer];

            UIImage *image = [[UIImage alloc] initWithData:imageData];      // 5
        } else {
            NSLog(@"NULL sampleBuffer: %@", [error localizedDescription]);
        }
    };
    // Capture still image                                                  // 6
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                  completionHandler:handler];
}

- (AVCaptureVideoOrientation)currentVideoOrientation {

    AVCaptureVideoOrientation orientation;

    switch ([UIDevice currentDevice].orientation) {                         // 3
        case UIDeviceOrientationPortrait:
            orientation = AVCaptureVideoOrientationPortrait;
            break;
        case UIDeviceOrientationLandscapeRight:
            orientation = AVCaptureVideoOrientationLandscapeLeft;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            orientation = AVCaptureVideoOrientationPortraitUpsideDown;
            break;
        default:
            orientation = AVCaptureVideoOrientationLandscapeRight;
            break;
    }

    return orientation;
}

(1) First, get a pointer to the current AVCaptureConnection used by the AVCaptureStillImageOutput object by calling its connectionWithMediaType: method. You generally pass the AVMediaTypeVideo media type when looking up an AVCaptureStillImageOutput's connection.
(2) The Kamera application itself only supports portrait orientation, so the user interface doesn't change when the device is rotated. However, we want to accommodate users who hold the phone in landscape, so we need to adjust the orientation of the resulting image accordingly. Determine whether the connection supports setting the video orientation; if it does, set it to the AVCaptureVideoOrientation returned by the currentVideoOrientation method.
(3) Take the UIDevice orientation and switch on its value to determine the corresponding AVCaptureVideoOrientation. It is important to note that the left and right AVCaptureVideoOrientation values are the opposite of their UIDeviceOrientation counterparts.
(4) In the completion handler block, if a valid CMSampleBuffer is received, call the jpegStillImageNSDataRepresentation: class method of AVCaptureStillImageOutput, which returns an NSData containing the image bytes.
(5) Create a new UIImage instance from the NSData.

We have now successfully captured an image and created the corresponding UIImage. It could be rendered somewhere in the user interface or written to a location inside the application sandbox. However, most users expect photos taken with the Kamera app to be written to the Camera Roll in the Photos app. To add this feature, you need to use the Assets Library framework.

6.3.11 using the Assets Library framework

The Assets Library framework allows developers to programmatically access user albums and video libraries managed by iOS Photos applications. This framework is added by default in many AV Foundation applications, so it is very important to master its usage.

The core class of this framework is ALAssetsLibrary. An instance of ALAssetsLibrary defines the interface for interacting with the user's library. The object has several "write" methods that allow developers to write photos or videos to the library. The first time the application tries to interact with the library, a dialog like the one shown in Figure 6-10 pops up.

 


Before actual use, the user must explicitly allow the application to access the library. If the user denies access, any write to the library will fail and return an error indicating that access was refused. If access to the user's library is core functionality of the application, the developer should first determine the application's authorization status, as shown in the following example.

 

ALAuthorizationStatus status = [ALAssetsLibrary authorizationStatus];
if (status == ALAuthorizationStatusDenied) {
    // Show prompt indicating the application won't function
    // correctly without access to the library
} else {
    // Perform authorized access to the library
}

Let's see how to write the captured image to the Assets Library, as shown in listing 6-13.

Listing 6-13 writing the still image to the Assets Library

 

- (void)captureStillImage {

    AVCaptureConnection *connection =
        [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];

    if (connection.isVideoOrientationSupported) {
        connection.videoOrientation = [self currentVideoOrientation];
    }

    id handler = ^(CMSampleBufferRef sampleBuffer, NSError *error) {
        if (sampleBuffer != NULL) {

            NSData *imageData =
                [AVCaptureStillImageOutput
                    jpegStillImageNSDataRepresentation:sampleBuffer];

            UIImage *image = [[UIImage alloc] initWithData:imageData];
            [self writeImageToAssetsLibrary:image];                         // 1

        } else {
            NSLog(@"NULL sampleBuffer: %@", [error localizedDescription]);
        }
    };
    // Capture still image
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                  completionHandler:handler];
}

- (void)writeImageToAssetsLibrary:(UIImage *)image {

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];              // 2

    [library writeImageToSavedPhotosAlbum:image.CGImage                     // 3
                              orientation:(NSInteger)image.imageOrientation // 4
                          completionBlock:^(NSURL *assetURL, NSError *error) {
                              if (!error) {
                                  [self postThumbnailNotifification:image]; // 5
                              } else {
                                  id message = [error localizedDescription];
                                  NSLog(@"Error: %@", message);
                              }
                          }];
}

- (void)postThumbnailNotifification:(UIImage *)image {
    dispatch_async(dispatch_get_main_queue(), ^{
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        [nc postNotificationName:THThumbnailCreatedNotification object:image];
    });
}

(1) In the capture completion handler, call the new writeImageToAssetsLibrary: method, passing it the UIImage object created from the image data.
(2) Create a new ALAssetsLibrary instance to write to the user's Camera Roll programmatically.
(3) Call the library's writeImageToSavedPhotosAlbum:orientation:completionBlock: method. The image parameter must be a CGImageRef, so we obtain the CGImage representation of the UIImage.
(4) The orientation parameter is an ALAssetOrientation enumeration value. These values correspond directly to the UIImageOrientation value returned by the image's imageOrientation property, so the value is simply cast to NSInteger.
(5) If the write succeeds, a notification carrying the captured image is posted. The Kamera application uses it to draw thumbnails in the user interface; a sketch of a possible observer follows this list.
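
As a hedged illustration of how a view controller might consume that notification, the sketch below registers an observer for THThumbnailCreatedNotification and reads the UIImage from the notification's object. The updateThumbnail: selector and the thumbnailView property are hypothetical names used only for this example; they are not part of the book's code.

- (void)viewDidLoad {
    [super viewDidLoad];
    // Listen for the thumbnail notification posted by the camera controller.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(updateThumbnail:)
                                                 name:THThumbnailCreatedNotification
                                               object:nil];
}

- (void)updateThumbnail:(NSNotification *)notification {
    // The controller posts the notification with the UIImage as its object.
    UIImage *image = notification.object;
    self.thumbnailView.image = image;   // thumbnailView is an assumed UIImageView outlet
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}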

Run the application, switch to photo mode, and take some pictures. The first time you take a picture, a prompt appears asking whether to allow the application access to the photo library; you must grant access for the example to work. Tapping the image icon to the left of the Photo button gives a simple view of the user's Camera Roll; to see the picture details, however, you need to open the Photos application.

The function of taking still pictures has been completed. Let's continue to learn about the function of video capture.

6.3.12 video capture

One last thing to discuss before concluding this chapter is the capture of video content. When we set up the capture session, we added an output called AVCaptureMovieFileOutput. This class defines a simple, practical way to capture QuickTime movies to disk. Most of its core functionality is inherited from its superclass, AVCaptureFileOutput. This abstract superclass defines many useful features, such as recording up to a maximum time limit or up to a specific file size. It can also be configured to reserve a minimum amount of free disk space, which is important when recording video on mobile devices with limited storage; a short configuration sketch follows.
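
A minimal configuration sketch of those inherited AVCaptureFileOutput properties is shown below. The specific limit values are illustrative assumptions, not values used by the Kamera application.

AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];

// Stop recording automatically after 10 minutes of footage.
movieOutput.maxRecordedDuration = CMTimeMake(600, 1);

// Stop recording before the file grows beyond roughly 1 GB.
movieOutput.maxRecordedFileSize = 1024 * 1024 * 1024;

// Stop recording if less than about 50 MB of free disk space remains.
movieOutput.minFreeDiskSpaceLimit = 50 * 1024 * 1024;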

Usually, when a QuickTime movie is ready for distribution, the movie header metadata sits at the beginning of the file. This lets a video player quickly read the header to determine the content, structure, and location of the samples contained in the file. When recording a QuickTime movie, however, the header cannot be created until all the samples have been captured, so when recording ends the header data is created and appended to the end of the file, as shown in Figure 6-11.

Deferring header creation until all movie samples have been captured is problematic, especially on mobile devices. If the recording is interrupted by a crash or other event, such as an incoming phone call, the movie header is never written correctly and an unreadable movie file is left on disk. One of the core features AVCaptureMovieFileOutput provides is the ability to capture QuickTime movies in fragments, as shown in Figure 6-12.

When recording starts, a minimal header is written at the front of the file. As recording progresses, fragments are written at regular intervals, gradually building up a complete header. By default a fragment is written every 10 seconds, but this interval can be changed through the movieFragmentInterval property of the capture output. Writing fragments incrementally builds a complete QuickTime movie header, which ensures that if the application crashes or is interrupted, the movie is still saved up to the last fragment written. The default fragment interval is sufficient for the Kamera application, but you can modify this value in your own applications, as sketched below.
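
For example, a hedged sketch of shortening the interval, assuming the same movieOutput property used by the controller in this chapter (the 5-second value is only illustrative):

// Write a movie fragment every 5 seconds instead of the default 10.
self.movieOutput.movieFragmentInterval = CMTimeMake(5, 1);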

Let's implement the video recording feature, beginning with the methods used to start and stop recording, as shown in code listing 6-14.

Code listing 6-14 Methods for starting and stopping video recording

 

- (BOOL)isRecording {                                                       // 1
    return self.movieOutput.isRecording;
}

- (void)startRecording {

    if (![self isRecording]) {

        AVCaptureConnection *videoConnection =                              // 2
            [self.movieOutput connectionWithMediaType:AVMediaTypeVideo];

        if ([videoConnection isVideoOrientationSupported]) {                // 3
            videoConnection.videoOrientation = self.currentVideoOrientation;
        }

        if ([videoConnection isVideoStabilizationSupported]) {              // 4
            
            videoConnection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
            
            // Deprecated approach below
            // videoConnection.enablesVideoStabilizationWhenAvailable = YES;
        }

        AVCaptureDevice *device = [self activeCamera];

        if (device.isSmoothAutoFocusSupported) {                            // 5
            NSError *error;
            if ([device lockForConfiguration:&error]) {
                device.smoothAutoFocusEnabled = YES; // smooth focus gives more natural-looking video
                [device unlockForConfiguration];
            } else {
                [self.delegate deviceConfigurationFailedWithError:error];
            }
        }

        self.outputURL = [self uniqueURL];                                  // 6
        [self.movieOutput startRecordingToOutputFileURL:self.outputURL      // 8
                                      recordingDelegate:self];

    }
}

- (NSURL *)uniqueURL {                                                      // 7

    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *dirPath =
        [fileManager temporaryDirectoryWithTemplateString:@"kamera.XXXXXX"];

    if (dirPath) {
        NSString *filePath =
            [dirPath stringByAppendingPathComponent:@"kamera_movie.mov"];
        return [NSURL fileURLWithPath:filePath];
    }

    return nil;
}

- (void)stopRecording {                                                     // 9
    if ([self isRecording]) {
        [self.movieOutput stopRecording];
    }
}

(1) An isRecording method is provided to report the state of the AVCaptureMovieFileOutput instance. This is a supporting method that exposes the controller's current recording state to external clients.
(2) In the startRecording method, obtain the current video capture connection, which is used to configure some core attributes of the captured video data.
(3) Determine whether setting the videoOrientation property is supported; if so, set it to the current video orientation. Setting the video orientation does not physically rotate the pixel buffers; instead, the corresponding transform matrix is applied to the QuickTime movie file.
(4) If video stabilization is supported, set preferredVideoStabilizationMode to AVCaptureVideoStabilizationModeAuto (the older enablesVideoStabilizationWhenAvailable property, shown commented out, is deprecated). Not all cameras and devices support this feature, so it must be tested. Enabling video stabilization can significantly improve the quality of the captured video, especially on devices such as the iPhone. Note that stabilization applies only to the recorded movie file; the effect is not visible in the video preview.
(5) The camera can operate in smooth autofocus mode, which slows the rate at which the lens moves when focusing. Normally, as the user moves while shooting, the camera tries to refocus quickly, which produces a pulsing effect in the captured video. With smooth autofocus enabled, the rate of the focusing operation is reduced, giving a more natural-looking recording.
(6) Find a unique file system URL to which the captured video will be written. Keep a strong reference to this URL, because it is needed later when processing the video.
(7) The unique-URL code should look familiar, because we have used this technique before. It relies on a category method we added to NSFileManager, temporaryDirectoryWithTemplateString:, which creates a uniquely named directory as the destination for the file; a possible sketch of that category follows this list.
(8) Finally, call startRecordingToOutputFileURL:recordingDelegate: on the capture output, passing the outputURL and setting self as the delegate. This actually starts the recording.
(9) Add a method to stop recording by calling stopRecording on the capture output.
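
The book supplies its own implementation of that NSFileManager category; the following is only a plausible sketch, assuming an implementation built on NSTemporaryDirectory() and mkdtemp().

#import <Foundation/Foundation.h>
#include <stdlib.h>   // malloc, free
#include <string.h>   // strlen, strcpy
#include <unistd.h>   // mkdtemp

@interface NSFileManager (THAdditions)
- (NSString *)temporaryDirectoryWithTemplateString:(NSString *)templateString;
@end

@implementation NSFileManager (THAdditions)

- (NSString *)temporaryDirectoryWithTemplateString:(NSString *)templateString {

    // Build a template path such as .../tmp/kamera.XXXXXX
    NSString *mkdTemplate =
        [NSTemporaryDirectory() stringByAppendingPathComponent:templateString];

    const char *templateCString = [mkdTemplate fileSystemRepresentation];
    char *buffer = (char *)malloc(strlen(templateCString) + 1);
    strcpy(buffer, templateCString);

    NSString *directoryPath = nil;

    // mkdtemp() replaces the trailing XXXXXX characters and creates
    // a uniquely named directory on disk.
    char *result = mkdtemp(buffer);
    if (result) {
        directoryPath = [self stringWithFileSystemRepresentation:buffer
                                                           length:strlen(result)];
    }
    free(buffer);
    return directoryPath;
}

@end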

If you compile the project now, you will notice a compiler warning stating that the recording delegate we specified, self, does not conform to the AVCaptureFileOutputRecordingDelegate protocol. We need to modify the controller's class extension, as shown in code listing 6-15.

Code listing 6-15 Conforming to the AVCaptureFileOutputRecordingDelegate protocol

 

@interface THCameraController () <AVCaptureFileOutputRecordingDelegate>

@property (strong, nonatomic) AVCaptureSession *captureSession;
@property (weak, nonatomic) AVCaptureDeviceInput *activeVideoInput;

@property (strong, nonatomic) AVCaptureStillImageOutput *imageOutput;
@property (strong, nonatomic) AVCaptureMovieFileOutput *movieOutput;
@property (strong, nonatomic) NSURL *outputURL;

@end

 

Conforming to this protocol means implementing its one required method, which is used to obtain the final movie file and write it to the Camera Roll. Code listing 6-16 gives the implementation of these methods.

Listing 6-16 Writing the captured video

 

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    if (error) {                                                            // 1
        [self.delegate mediaCaptureFailedWithError:error];
    } else {
        [self writeVideoToAssetsLibrary:[self.outputURL copy]];
    }
    self.outputURL = nil;
}

- (void)writeVideoToAssetsLibrary:(NSURL *)videoURL {

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];              // 2

    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:videoURL]) {   // 3

        ALAssetsLibraryWriteVideoCompletionBlock completionBlock;

        completionBlock = ^(NSURL *assetURL, NSError *error){               // 4
            if (error) {
                [self.delegate assetLibraryWriteFailedWithError:error];
            } else {
                [self generateThumbnailForVideoAtURL:videoURL];
            }
        };

        [library writeVideoAtPathToSavedPhotosAlbum:videoURL                // 8
                                    completionBlock:completionBlock];
    }
}

- (void)generateThumbnailForVideoAtURL:(NSURL *)videoURL {

    dispatch_async([self globalQueue], ^{

        AVAsset *asset = [AVAsset assetWithURL:videoURL];

        AVAssetImageGenerator *imageGenerator =                             // 5
            [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
        imageGenerator.maximumSize = CGSizeMake(100.0f, 0.0f);
        imageGenerator.appliesPreferredTrackTransform = YES;

        CGImageRef imageRef = [imageGenerator copyCGImageAtTime:kCMTimeZero // 6
                                                     actualTime:NULL
                                                          error:nil];
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);

        dispatch_async(dispatch_get_main_queue(), ^{                        // 7
            [self postThumbnailNotifification:image];
        });
    });
}

(1) In the delegate callback, if an error is returned, simply notify the delegate so it can handle the error. If no error occurred, attempt to write the video to the user's Camera Roll by calling the writeVideoToAssetsLibrary: method.
(2) Create an ALAssetsLibrary instance, which provides the interface for writing the video.
(3) Before writing to the library, call videoAtPathIsCompatibleWithSavedPhotosAlbum: to check whether the video can be written. In this case the method returns YES, but developers should get into the habit of calling it before writing to the library.
(4) Create a completion handler block that is invoked when the write to the library finishes. If the operation produced an error, notify the delegate with the error information; otherwise the write succeeded, so next generate the video thumbnail for display in the interface.
(5) On a background queue (the controller's globalQueue), create a new AVAsset and AVAssetImageGenerator for the new video. Set the maximumSize property to 100 points wide and 0 high, so the image height is derived from the video's aspect ratio. Also set appliesPreferredTrackTransform to YES so that transforms applied to the video (such as its orientation) are taken into account when generating the thumbnail; without it, thumbnails may come out with the wrong orientation.
(6) Because only one image is needed, the synchronous copyCGImageAtTime:actualTime:error: method can be used, which is why this work is moved off the main thread. A UIImage is created from the returned CGImageRef for display in the user interface. Because copyCGImageAtTime:actualTime:error: follows the Create/Copy rule, the caller is responsible for releasing the image, so CGImageRelease(imageRef) must be called to avoid a memory leak. (An asynchronous alternative is sketched after this list.)
(7) Back on the main thread, post the notification carrying the new UIImage.
(8) Perform the actual write to the assets library, passing the videoURL and the completionBlock.
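
As a hedged alternative to the synchronous call in callout 6, AVAssetImageGenerator also offers generateCGImagesAsynchronouslyForTimes:completionHandler:. The sketch below mirrors the setup in listing 6-16 (videoURL, self, and postThumbnailNotifification: come from that listing); the error handling is simplified for illustration.

// Minimal sketch: generate a single thumbnail asynchronously instead of
// calling copyCGImageAtTime:actualTime:error: on a background queue.
AVAsset *asset = [AVAsset assetWithURL:videoURL];
AVAssetImageGenerator *imageGenerator =
    [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
imageGenerator.maximumSize = CGSizeMake(100.0f, 0.0f);
imageGenerator.appliesPreferredTrackTransform = YES;

NSValue *time = [NSValue valueWithCMTime:kCMTimeZero];

[imageGenerator generateCGImagesAsynchronouslyForTimes:@[time]
                                      completionHandler:
    ^(CMTime requestedTime, CGImageRef imageRef, CMTime actualTime,
      AVAssetImageGeneratorResult result, NSError *error) {
        if (result == AVAssetImageGeneratorSucceeded) {
            // The generator owns imageRef in this callback, so no CGImageRelease is needed.
            UIImage *image = [UIImage imageWithCGImage:imageRef];
            dispatch_async(dispatch_get_main_queue(), ^{
                [self postThumbnailNotifification:image];
            });
        } else {
            NSLog(@"Thumbnail generation failed: %@", [error localizedDescription]);
        }
    }];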

Run the application and record a few seconds of video. Tapping Stop generates a new thumbnail for the scene you just shot. You can tap this thumbnail to open a browser and view the newly created video.

6.4 summary

You should now have a good working knowledge of the core AV Foundation capture APIs. We learned how to configure and control an AVCaptureSession, how to directly control and manipulate capture devices through several examples, and how to use AVCaptureOutput subclasses to capture still images and video. Using these core capabilities, we built an application similar to Apple's built-in Camera app. Although the explanations in this chapter target the iOS platform, the methods and techniques also apply to building camera and video applications on the Mac. This is a good start on the AVFoundation capture APIs; in the next chapter, we will continue with more advanced capture features that take camera and video applications to the next level.
