
iOS Multithreading Study Notes (2)

Operation Objects:


An operation object is an instance of the NSOperation class (in the Foundation framework) that you use to encapsulate work you want your application to perform. The NSOperation class itself is an abstract base class that must be subclassed in order to do any useful work. Despite being abstract, this class does provide a significant amount of infrastructure to minimize the amount of work you have to do in your own subclasses. In addition, the Foundation framework provides two concrete subclasses that you can use as-is with your existing code: NSBlockOperation and NSInvocationOperation. 

Concurrent Versus Nonconcurrent Operations:

Although you typically execute operations by adding them to an operation queue, doing so is not required. It is also possible to execute an operation object manually by calling its start method, but doing so does not guarantee that the operation runs concurrently with the rest of your code. The isConcurrent method of the NSOperation class tells you whether an operation runs synchronously or asynchronously with respect to the thread in which its start method was called. By default, this method returns NO, which means the operation runs synchronously in the calling thread.

If you want to implement a concurrent operation—that is, one that runs asynchronously with respect to the calling thread—you must write additional code to start the operation asynchronously. For example, you might spawn a separate thread, call an asynchronous system function, or do anything else to ensure that the start method starts the task and returns immediately and, in all likelihood, before the task is finished.

Most developers should never need to implement concurrent operation objects. If you always add your operations to an operation queue, you do not need to implement concurrent operations. When you submit a nonconcurrent operation to an operation queue, the queue itself creates a thread on which to run your operation. Thus, adding a nonconcurrent operation to an operation queue still results in the asynchronous execution of your operation object code. The ability to define concurrent operations is only necessary in cases where you need to execute the operation asynchronously without adding it to an operation queue.

 

Creating an NSInvocationOperation:

The NSInvocationOperation class is a concrete subclass of NSOperation that, when run, invokes the selector you specify on the object you specify. This class implements a nonconcurrent operation.
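A minimal sketch, in the pre-ARC style of this note; myTaskMethod: and data are hypothetical names standing in for your own selector and argument:

    // Somewhere in one of your own classes:
    NSInvocationOperation *theOp = [[NSInvocationOperation alloc]
            initWithTarget:self
                  selector:@selector(myTaskMethod:)
                    object:data];

    // The method that the operation invokes when it runs.
    - (void)myTaskMethod:(id)data {
        // Perform the task using data.
    }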

Creating an NSBlockOperation:

The NSBlockOperation class is a concrete subclass of NSOperation that acts as a wrapper for one or more block objects. An NSBlockOperation is considered finished only when all of its blocks have finished executing. Moreover, if the operation has not been added to an operation queue, it is not concurrent: calling start blocks the calling thread until all of its blocks have run.
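A minimal sketch:

    // Wrap an initial block; more blocks can be attached with
    // addExecutionBlock: before the operation begins executing.
    NSBlockOperation *theOp = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Beginning operation.");
        // Do some work.
    }];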



Defining a Custom Operation Object:

If the block operation and invocation operation objects do not quite meet the needs of your application, you can subclass NSOperation directly and add whatever behavior you need.

 

Customizing the Executing Behavior of an Operation Object:

The configuration of operation objects occurs after you have created them but before you add them to a queue. The types of configurations described in this section can be applied to all operation objects, regardless of whether you subclassed NSOperation yourself or used an existing subclass.

(1) Configuring Interoperation Dependencies

 

Dependencies are a way for you to serialize the execution of distinct operation objects. An operation that is dependent on other operations cannot begin executing until all of the operations on which it depends have finished executing. Thus, you can use dependencies to create simple one-to-one dependencies between two operation objects or to build complex object dependency graphs.

To establish dependencies between two operation objects, you use the addDependency: method of NSOperation. This method creates a one-way dependency from the current operation object to the target operation you specify as a parameter. This dependency means that the current object cannot begin executing until the target object finishes executing. Dependencies are also not limited to operations in the same queue. Operation objects manage their own dependencies and so it is perfectly acceptable to create dependencies between operations and add them all to different queues. One thing that is not acceptable, however, is to create circular dependencies between operations. Doing so is a programmer error that will prevent the affected operations from ever running.

When all of an operation’s dependencies have themselves finished executing, an operation object normally becomes ready to execute. If the operation object is in a queue, the queue may start executing that operation at any time. If you plan to execute the operation manually, it is up to you to call the operation’s start method. You should always configure dependencies before running your operations or adding them to an operation queue. Dependencies added afterward may not prevent a given operation object from running.
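A minimal sketch of a one-way dependency; the two block operations and the queue are hypothetical:

    NSOperationQueue *queue = [[NSOperationQueue alloc] init];

    NSBlockOperation *downloadOp = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Downloading data...");
    }];
    NSBlockOperation *parseOp = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"Parsing the downloaded data.");
    }];

    // parseOp cannot begin executing until downloadOp has finished.
    [parseOp addDependency:downloadOp];

    // Always configure dependencies before queueing the operations.
    [queue addOperation:downloadOp];
    [queue addOperation:parseOp];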

(2) Changing an Operation's Execution Priority

 

For operations added to a queue, execution order is determined first by the readiness of the queued operations and then by their relative priority. Readiness is determined by an operation’s dependencies on other operations, but the priority level is an attribute of the operation object itself. By default, all new operation objects have a “normal” priority, but you can increase or decrease that priority as needed by calling the object’s setQueuePriority: method.

Priority levels apply only to operations in the same operation queue. If your application has multiple operation queues, each prioritizes its own operations independently of any other queues. Thus, it is still possible for low-priority operations to execute before high-priority operations in a different queue.

Priority levels are not a substitute for dependencies. Priorities determine the order in which an operation queue starts executing only those operations that are currently ready. For example, if a queue contains both a high-priority and a low-priority operation and both operations are ready, the queue executes the high-priority operation first. However, if the high-priority operation is not ready but the low-priority operation is, the queue executes the low-priority operation first. If you want to prevent one operation from starting until another operation has finished, you must use dependencies. (An operation becomes ready as soon as it is added to a queue, unless it depends on another operation that has not yet finished.)
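For example (a sketch; urgentOp and cleanupOp are hypothetical operations created elsewhere):

    [urgentOp setQueuePriority:NSOperationQueuePriorityHigh];
    [cleanupOp setQueuePriority:NSOperationQueuePriorityLow];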

(3) Setting up a Completion Block

 

An operation can execute a completion block when its main task finishes executing. You can use a completion block to perform any work that you do not consider part of the main task. For example, you might use this block to notify interested clients that the operation itself has completed. A concurrent operation object might use this block to generate its final KVO notifications.

To set a completion block, use the setCompletionBlock: method of NSOperation. The block you pass to this method should have no arguments and no return value.
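A minimal sketch, assuming theOp is an operation created earlier:

    [theOp setCompletionBlock:^{
        // Runs after the operation's main task has finished executing;
        // notify interested clients here.
        NSLog(@"Operation finished.");
    }];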

Executing Operations:

(1) Adding Operations to an Operation Queue

 

By far, the easiest way to execute operations is to use an operation queue, which is an instance of the NSOperationQueue class. Your application is responsible for creating and maintaining any operation queues it intends to use. An application can have any number of queues, but there are practical limits to how many operations may be executing at a given point in time. Operation queues work with the system to restrict the number of concurrent operations to a value that is appropriate for the available cores and system load. Therefore, creating additional queues does not mean that you can execute additional operations.

To create a queue, you allocate it in your application as you would any other object:
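    // aQueue is an illustrative name for the queue.
    NSOperationQueue *aQueue = [[NSOperationQueue alloc] init];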

To add operations to a queue, you use the addOperation: method. You can add groups of operations using the addOperations:waitUntilFinished: method, or you can add block objects directly to a queue (without a corresponding operation object) using the addOperationWithBlock: method. Each of these methods queues up an operation (or operations) and notifies the queue that it should begin processing them. In most cases, operations are executed shortly after being added to a queue, but the operation queue may delay execution of queued operations for any of several reasons. Specifically, execution may be delayed if queued operations are dependent on other operations that have not yet completed. Execution may also be delayed if the operation queue itself is suspended or is already executing its maximum number of concurrent operations. The following examples show the basic syntax for adding operations to a queue.
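A sketch, assuming the aQueue created above plus hypothetical anOp and anArrayOfOps objects:

    // Add a single operation object.
    [aQueue addOperation:anOp];

    // Add an array of operations; pass NO so the call returns immediately.
    [aQueue addOperations:anArrayOfOps waitUntilFinished:NO];

    // Add a block directly, with no operation object at all.
    [aQueue addOperationWithBlock:^{
        NSLog(@"Doing something from a block.");
    }];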

Although the NSOperationQueue class is designed for the concurrent execution of operations, it is possible to force a single queue to run only one operation at a time. The setMaxConcurrentOperationCount: method lets you configure the maximum number of concurrent operations for an operation queue object. Passing a value of 1 to this method causes the queue to execute only one operation at a time. Although only one operation at a time may execute, the order of execution is still based on other factors, such as the readiness of each operation and its assigned priority. Thus, a serialized operation queue does not offer quite the same behavior as a serial dispatch queue in Grand Central Dispatch. If the execution order of your operation objects is important to you, you should use dependencies to establish that order before adding your operations to a queue.

(2) Executing Operations Manually

 

Although operation queues are the most convenient way to run operation objects, it is also possible to execute operations without a queue. If you choose to execute operations manually, however, there are some precautions you should take in your code. In particular, the operation must be ready to run and you must always start it using its start method.

An operation is not considered able to run until its isReady method returns YES. The isReady method is integrated into the dependency management system of the NSOperation class to provide the status of the operation’s dependencies. Only when its dependencies are cleared is an operation free to begin executing.
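A sketch of a helper method, loosely following the pattern in Apple's guide, that starts an operation manually only when it is safe to do so:

    - (BOOL)performOperation:(NSOperation *)anOp
    {
        BOOL ranIt = NO;
        if ([anOp isReady] && ![anOp isCancelled]) {
            if (![anOp isConcurrent])
                [anOp start];   // runs synchronously in the current thread
            else
                [NSThread detachNewThreadSelector:@selector(start)
                                         toTarget:anOp
                                       withObject:nil];
            ranIt = YES;
        }
        else if ([anOp isCancelled]) {
            // Canceled before it ever started; treat it as handled.
            ranIt = YES;
        }
        return ranIt;
    }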

(3) Canceling Operations

 

Once added to an operation queue, an operation object is effectively owned by the queue and cannot be removed. The only way to dequeue an operation is to cancel it. You can cancel a single individual operation object by calling its cancel method or you can cancel all of the operation objects in a queue by calling the cancelAllOperations method of the queue object.

You should cancel operations only when you are sure you no longer need them. Issuing a cancel command puts the operation object into the “canceled” state, which prevents it from ever being run. Because a canceled operation is still considered to be “finished”, objects that are dependent on it receive the appropriate KVO notifications to clear that dependency. Thus, it is more common to cancel all queued operations in response to some significant event, like the application quitting or the user specifically requesting the cancellation, rather than cancel operations selectively.

(4) Suspending and Resuming Queues

If you want to issue a temporary halt to the execution of operations, you can suspend the corresponding operation queue using the setSuspended: method. Suspending a queue does not cause already executing operations to pause in the middle of their tasks. It simply prevents new operations from being scheduled for execution. You might suspend a queue in response to a user request to pause any ongoing work, because the expectation is that the user might eventually want to resume that work.

Dispatch Queues:

All dispatch queues are first-in, first-out data structures. Thus, the tasks you add to a queue are always started in the same order that they were added. Dispatch queues execute their tasks concurrently with respect to other dispatch queues. The serialization of tasks is limited to the tasks in a single dispatch queue. There are three types of dispatch queues:

(1) Serial Queues

Serial queues (also known as private dispatch queues) execute one task at a time in the order in which they are added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue. You can create as many serial queues as you need, and each queue operates concurrently with respect to all other queues. In other words, if you create four serial queues, each queue executes only one task at a time but up to four tasks could still execute concurrently, one from each queue.

(2) Concurrent Queues

Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions.You cannot create concurrent dispatch queues yourself. Instead, there are three global concurrent queues for your application to use.

(3) Main Dispatch Queue

The main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. This queue works with the application’s run loop (if one is present) to interleave the execution of queued tasks with the execution of other event sources attached to the run loop. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application.

Creating and Managing Dispatch Queues:

(1) Getting the Global Concurrent Dispatch Queues

 

A concurrent dispatch queue is useful when you have multiple tasks that can run in parallel. A concurrent queue is still a queue in that it dequeues tasks in a first-in, first-out order; however, a concurrent queue may dequeue additional tasks before any previous tasks finish. The actual number of tasks executed by a concurrent queue at any given moment is variable and can change dynamically as conditions in your application change. Many factors affect the number of tasks executed by the concurrent queues, including the number of available cores, the amount of work being done by other processes, and the number and priority of tasks in other serial dispatch queues.

The system provides each application with three concurrent dispatch queues. These queues are global to the application and are differentiated only by their priority level. Because they are global, you do not create them explicitly. Instead, you ask for one of the queues using the dispatch_get_global_queue function, as shown in the following example:
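    // Get the default-priority global concurrent queue.
    dispatch_queue_t aQueue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);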

In addition to getting the default concurrent queue, you can also get queues with high- and low-priority levels by passing in the DISPATCH_QUEUE_PRIORITY_HIGH and DISPATCH_QUEUE_PRIORITY_LOW constants to the function instead. As you might expect, tasks in the high-priority concurrent queue execute before those in the default and low-priority queues. Similarly, tasks in the default queue execute before those in the low-priority queue. The second argument to the dispatch_get_global_queue function is reserved for future expansion. For now, you should always pass 0 for this argument.

(2) Creating Serial Dispatch Queues

 

Unlike concurrent queues, which are created for you, you must explicitly create and manage any serial queues you want to use. You can create any number of serial queues for your application but should avoid creating large numbers of serial queues solely as a means to execute as many tasks simultaneously as you can. If you want to execute large numbers of tasks concurrently, submit them to one of the global concurrent queues. When creating serial queues, try to identify a purpose for each queue, such as protecting a resource or synchronizing some key behavior of your application.

The example below shows the steps required to create a custom serial queue. The dispatch_queue_create function takes two parameters: the queue name and a set of queue attributes. The debugger and performance tools display the queue name to help you track how your tasks are being executed. The queue attributes are reserved for future use and should be NULL.
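    // "com.example.MyQueue" is an illustrative reverse-DNS label; it is what
    // the debugger and performance tools display for this queue.
    dispatch_queue_t queue;
    queue = dispatch_queue_create("com.example.MyQueue", NULL);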



(3) Getting Common Queues at Runtime

Grand Central Dispatch provides functions to let you access several common dispatch queues from your application:

  • Use the dispatch_get_current_queue function for debugging purposes or to test the identity of the current queue. Calling this function from inside a block object returns the queue to which the block was submitted (and on which it is now presumably running). Calling this function from outside of a block returns the default concurrent queue for your application.

  • Use the dispatch_get_main_queue function to get the serial dispatch queue associated with your application’s main thread. This queue is created automatically for Cocoa applications and for applications that either call the dispatch_main function or configure a run loop (using either the CFRunLoopRef type or an NSRunLoop object) on the main thread.

  • Use the dispatch_get_global_queue function to get any of the shared concurrent queues. For more information, see “Getting the Global Concurrent Dispatch Queues.”

(4) Memory Management for Dispatch Queues

Dispatch queues and other dispatch objects are reference-counted data types. When you create a serial dispatch queue, it has an initial reference count of 1. You can use the dispatch_retain and dispatch_release functions to increment and decrement that reference count as needed. When the reference count of a queue reaches zero, the system asynchronously deallocates the queue. You do not need to retain or release any of the global dispatch queues, including the concurrent dispatch queues or the main dispatch queue. Any attempts to retain or release the queues are ignored.

(5) Storing Custom Context Information with a Queue

 

All dispatch objects (including dispatch queues) allow you to associate custom context data with the object. To set and get this data on a given object, you use the dispatch_set_context and dispatch_get_context functions. The system does not use your custom data in any way, and it is up to you to both allocate and deallocate the data at the appropriate times.

For queues, you can use context data to store a pointer to an Objective-C object or other data structure that helps identify the queue or its intended usage to your code. You can use the queue’s finalizer function to deallocate (or disassociate) your context data from the queue before it is deallocated (see (6) below).

(6) Providing a Clean Up Function for a Queue

After you create a serial dispatch queue, you can attach a finalizer function to perform any custom clean up when the queue is deallocated. Dispatch queues are reference counted objects and you can use the dispatch_set_finalizer_f function to specify a function to be executed when the reference count of your queue reaches zero. You use this function to clean up the context data associated with a queue and the function is called only if the context pointer is not NULL.

The example below shows a custom finalizer function and a function that creates a queue and installs that finalizer. The queue uses the finalizer function to release the data stored in the queue’s context pointer. (The myInitializeDataContextFunction and myCleanUpDataContextFunction functions referenced from the code are custom functions that you would provide to initialize and clean up the contents of the data structure itself.) The context pointer passed to the finalizer function contains the data object associated with the queue.
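A sketch of such a finalizer, assuming a hypothetical MyDataContext structure along with the custom initialization and clean-up functions described above:

    void myFinalizerFunction(void *context)
    {
        MyDataContext *theData = (MyDataContext *)context;

        // Clean up the contents of the structure.
        myCleanUpDataContextFunction(theData);

        // Now release the structure itself.
        free(theData);
    }

    dispatch_queue_t createMyQueue(void)
    {
        MyDataContext *data = (MyDataContext *)malloc(sizeof(MyDataContext));
        myInitializeDataContextFunction(data);

        // Create the queue, then attach the context data and finalizer.
        dispatch_queue_t serialQueue =
                dispatch_queue_create("com.example.CriticalTaskQueue", NULL);
        if (serialQueue)
        {
            dispatch_set_context(serialQueue, data);
            dispatch_set_finalizer_f(serialQueue, &myFinalizerFunction);
        }
        return serialQueue;
    }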

 

Adding Tasks to a Queue:

(1) Adding a Single Task to a Queue

There are two ways to add a task to a queue: asynchronously or synchronously. When possible, asynchronous execution using the dispatch_async and dispatch_async_f functions is preferred over the synchronous alternative. When you add a block object or function to a queue, there is no way to know when that code will execute. As a result, adding blocks or functions asynchronously lets you schedule the execution of the code and continue to do other work from the calling thread. This is especially important if you are scheduling the task from your application’s main thread—perhaps in response to some user event.

Although you should add tasks asynchronously whenever possible, there may still be times when you need to add a task synchronously to prevent race conditions or other synchronization errors. In these instances, you can use the dispatch_sync and dispatch_sync_f functions to add the task to the queue. These functions block the current thread of execution until the specified task finishes executing.

You should never call the dispatch_sync or dispatch_sync_f function from a task that is executing in the same queue that you are planning to pass to the function. This is particularly important for serial queues, which are guaranteed to deadlock, but should also be avoided for concurrent queues.
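A minimal sketch contrasting the two calls on a custom serial queue:

    dispatch_queue_t myCustomQueue =
            dispatch_queue_create("com.example.MyCustomQueue", NULL);

    // Asynchronous: returns immediately, likely before the block runs.
    dispatch_async(myCustomQueue, ^{
        printf("Do some work here.\n");
    });
    printf("The first block may or may not have run yet.\n");

    // Synchronous: does not return until the block has finished.
    dispatch_sync(myCustomQueue, ^{
        printf("Do some more work here.\n");
    });
    printf("Both blocks have completed.\n");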


(2) Performing a Completion Block When a Task is Done

The example below shows an averaging function implemented using blocks. The last two parameters to the averaging function allow the caller to specify a queue and block to use when reporting the results. After the averaging function computes its value, it passes the results to the specified block and dispatches it to the queue. To prevent the queue from being released prematurely, it is critical to retain that queue initially and release it once the completion block has been dispatched.
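A sketch, assuming a hypothetical average() helper that computes the mean of the array:

    void average_async(int *data, size_t len,
                       dispatch_queue_t queue, void (^block)(int))
    {
        // Retain the caller's queue so it cannot disappear before
        // the completion block is dispatched to it.
        dispatch_retain(queue);

        // Compute on the default concurrent queue, then report the result.
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            int avg = average(data, len);   // hypothetical helper
            dispatch_async(queue, ^{ block(avg); });

            // Balance the earlier retain once the block has been dispatched.
            dispatch_release(queue);
        });
    }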

(3) Performing Loop Iterations Concurrently

One place where concurrent dispatch queues might improve performance is in places where you have a loop that performs a fixed number of iterations. For example, suppose you have a for loop that does some work through each loop iteration:
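    // count is a hypothetical iteration total; each pass is independent.
    for (size_t i = 0; i < count; i++) {
        printf("%zu\n", i);
    }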

If the work performed during each iteration is distinct from the work performed during all other iterations, and the order in which each successive loop finishes is unimportant, you can replace the loop with a call to the dispatch_apply or dispatch_apply_f function. These functions submit the specified block or function to a queue once for each loop iteration. When dispatched to a concurrent queue, it is therefore possible to perform multiple loop iterations at the same time.

You can specify either a serial queue or a concurrent queue when calling dispatch_apply or dispatch_apply_f. Passing in a concurrent queue allows you to perform multiple loop iterations simultaneously and is the most common way to use these functions. Although using a serial queue is permissible and does the right thing for your code, using such a queue has no real performance advantages over leaving the loop in place.

The example below shows how to replace the preceding for loop with the dispatch_apply syntax. The block you pass in to the dispatch_apply function must contain a single parameter that identifies the current loop iteration. When the block is executed, the value of this parameter is 0 for the first iteration, 1 for the second, and so on. The value of the parameter for the last iteration is count - 1, where count is the total number of iterations.
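    dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    // Each loop iteration becomes one invocation of the block; iterations may
    // run concurrently, and dispatch_apply returns only after all have finished.
    dispatch_apply(count, queue, ^(size_t i) {
        printf("%zu\n", i);
    });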


You should make sure that your task code does a reasonable amount of work through each iteration. As with any block or function you dispatch to a queue, there is overhead to scheduling that code for execution. If each iteration of your loop performs only a small amount of work, the overhead of scheduling the code may outweigh the performance benefits you might achieve from dispatching it to a queue.

(4) Using Objective-C Objects in Tasks

GCD provides built-in support for Cocoa memory management techniques so you may freely use Objective-C objects in the blocks you submit to dispatch queues. Each dispatch queue maintains its own autorelease pool to ensure that autoreleased objects are released at some point; queues make no guarantee about when they actually release those objects, though. If your application is memory constrained and your block creates more than a few autoreleased objects, creating your own autorelease pool is the only way to ensure that your objects are released in a timely manner. You typically create an NSAutoreleasePool object at the beginning of your block code and drain it (or release the pool) at the end of your block code.
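A sketch in the pre-ARC style this note was written in; the loop body is a placeholder for work that autoreleases many objects:

    dispatch_async(queue, ^{
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        for (int i = 0; i < 10000; i++) {
            // stringWithFormat: returns an autoreleased object.
            NSString *s = [NSString stringWithFormat:@"%d", i];
            (void)s;
        }

        [pool drain];   // reclaim the autoreleased objects promptly
    });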

Suspending and Resuming Dispatch Queues:

You can prevent a queue from executing block objects temporarily by suspending it. You suspend a dispatch queue using the dispatch_suspend function and resume it using the dispatch_resume function. Calling dispatch_suspend increments the queue’s suspension reference count, and calling dispatch_resume decrements the reference count. While the reference count is greater than zero, the queue remains suspended. Therefore, you must balance all suspend calls with a matching resume call in order to resume processing blocks. Suspend and resume calls are asynchronous and take effect only between the execution of blocks. Suspending a queue does not cause an already executing block to stop.

Using Dispatch Semaphores to Regulate the Use of Finite Resources:

 

If the tasks you are submitting to dispatch queues access some finite resource, you may want to use a dispatch semaphore to regulate the number of tasks simultaneously accessing that resource. A dispatch semaphore works like a regular semaphore with one exception. When resources are available, it takes less time to acquire a dispatch semaphore than it does to acquire a traditional system semaphore. This is because Grand Central Dispatch does not call down into the kernel for this particular case. The only time it calls down into the kernel is when the resource is not available and the system needs to park your thread until the semaphore is signaled.

The semantics for using a dispatch semaphore are as follows:

  1. When you create the semaphore (using the dispatch_semaphore_create function), you can specify a positive integer indicating the number of resources available.

  2. In each task, call dispatch_semaphore_wait to wait on the semaphore.

  3. When the wait call returns, acquire the resource and do your work.

  4. When you are done with the resource, release it and signal the semaphore by calling the dispatch_semaphore_signal function.

For an example of how these steps work, consider the use of file descriptors on the system. Each application is given a limited number of file descriptors to use. If you have a task that processes large numbers of files, you do not want to open so many files at one time that you run out of file descriptors. Instead, you can use a semaphore to limit the number of file descriptors in use at any one time by your file-processing code. The basic pieces of code you would incorporate into your tasks are as follows:
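A sketch, assuming <fcntl.h> and <unistd.h> are included for open() and getdtablesize():

    // Create the semaphore, specifying the initial pool size.
    dispatch_semaphore_t fd_sema =
            dispatch_semaphore_create(getdtablesize() / 2);

    // Wait for a free file descriptor slot before opening the file.
    dispatch_semaphore_wait(fd_sema, DISPATCH_TIME_FOREVER);
    int fd = open("/etc/services", O_RDONLY);

    // ... read from and process the file ...

    // Release the descriptor and signal that a slot is free again.
    close(fd);
    dispatch_semaphore_signal(fd_sema);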



When you create the semaphore, you specify the number of available resources. This value becomes the initial count variable for the semaphore. Each time you wait on the semaphore, the dispatch_semaphore_wait function decrements that count variable by 1. If the resulting value is negative, the function tells the kernel to block your thread. On the other end, the dispatch_semaphore_signal function increments the count variable by 1 to indicate that a resource has been freed up. If there are tasks blocked and waiting for a resource, one of them is subsequently unblocked and allowed to do its work.

Waiting on Groups of Queued Tasks:

 

Dispatch groups are a way to block a thread until one or more tasks finish executing. You can use this behavior in places where you cannot make progress until all of the specified tasks are complete. For example, after dispatching several tasks to compute some data, you might use a group to wait on those tasks and then process the results when they are done. Another way to use dispatch groups is as an alternative to thread joins. Instead of starting several child threads and then joining with each of them, you could add the corresponding tasks to a dispatch group and wait on the entire group.

The example below shows the basic process for setting up a group, dispatching tasks to it, and waiting on the results. Instead of dispatching tasks to a queue using the dispatch_async function, you use the dispatch_group_async function. This function associates the task with the group and queues it for execution. To wait for a group of tasks to finish, you then use the dispatch_group_wait function, passing in the appropriate group.
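A minimal sketch:

    dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    // Add a task to the group.
    dispatch_group_async(group, queue, ^{
        // Some asynchronous work.
    });

    // Do some other work while the task executes...

    // When you cannot make any more forward progress, wait on the group;
    // this blocks the current thread until every task in the group finishes.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

    // Release the group when it is no longer needed.
    dispatch_release(group);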



posted on 2012-02-06 00:50 by pengyingh
