iOS: A Few Notes on Concurrent Programming

iOS multithreading topics


Most of the material I consulted was in English. After organizing it, I intended to translate it into Chinese, but found that many of the terms are hard to express clearly in Chinese.

So I am publishing the organized notes as they are; consider it a chance to brush up on your English along the way.


  

1. Thread Safe Vs Main Thread Safe

Main thread safe means it is only safe to use on the main thread.

Thread safe means it can safely be accessed and modified from any thread, even from several threads simultaneously.

2. ConditionLock Vs Condition

NSCondition

A condition variable whose semantics follow those used for POSIX-style conditions.

A condition is another type of semaphore that allows threads to signal each other when a certain condition is true. Conditions are typically used to indicate the availability of a resource or to ensure that tasks are performed in a specific order. When a thread tests a condition, it blocks unless that condition is already true. It remains blocked until some other thread explicitly changes and signals the condition. The difference between a condition and a mutex lock is that multiple threads may be permitted access to the condition at the same time. The condition is more of a gatekeeper that lets different threads through the gate depending on some specified criteria.

Due to the subtleties involved in implementing operating systems, condition locks are permitted to return with spurious success even if they were not actually signaled by your code. To avoid problems caused by these spurious signals, you should always use a predicate in conjunction with your condition lock. 

When a thread waits on a condition, the condition object unlocks its lock and blocks the thread. When the condition is signaled, the system wakes up the thread. The condition object then reacquires its lock before returning from the wait or  waitUntilDate: method. Thus, from the point of view of the thread, it is as if it always held the lock.

A boolean predicate is an important part of the semantics of using conditions because of the way signaling works. Signaling a condition does not guarantee that the condition itself is true. Using a predicate ensures that these spurious signals do not cause you to perform work before it is safe to do so. The predicate itself is simply a flag or other variable in your code that you test in order to acquire a Boolean result.

The semantics for using an NSCondition object are as follows:

  1. Lock the condition object.
  2. Test a boolean predicate. (This predicate is a boolean flag or other variable in your code that indicates whether it is safe to perform the task protected by the condition.)
  3. If the boolean predicate is false, call the condition object’s wait
     or waitUntilDate: method to block the thread. Upon returning from these methods, go to step 2 to retest your boolean predicate. (Continue waiting and retesting the predicate until it is true.)
  4. If the boolean predicate is true, perform the task.
  5. Optionally update any predicates (or signal any conditions) affected by your task.
  6. When your task is done, unlock the condition object.
lock the condition
while (!(boolean_predicate)) {
    wait on condition
}
do protected work
(optionally, signal or broadcast the condition again or change a predicate value)
unlock the condition

  

Under the hood, NSCondition is implemented with a pthread_mutex_t plus a pthread_cond_t.

NSConditionLock

A lock that can be associated with specific, user-defined conditions.

Using an NSConditionLock object, you can ensure that a thread can acquire a lock only if a certain condition is met.

An NSConditionLock object defines a mutex lock that can be locked and unlocked with specific values. 

NSConditionLock only supports integer conditions. If you need a custom condition value, use NSCondition instead.

Can a producer-consumer model be implemented with nothing but a mutex?
The answer is: yes.

References:

https://web.stanford.edu/class/cs140/cgi-bin/lecture.php?topic=locks

http://blog.ibireme.com/2016/01/16/spinlock_is_unsafe_in_ios/

https://bestswifter.com/ios-lock/

3. @synchronized Directive

The object passed to the @synchronized directive is a unique identifier used to distinguish the protected block.

If you execute the preceding method in two different threads, passing a different object for the anObj parameter on each thread, each would take its lock and continue processing without being blocked by the other. If you pass the same object in both cases, however, one of the threads would acquire the lock first and the other would block until the first thread completed the critical section.

Common ways to misuse @synchronized

  • @synchronized(nil)
  • @synchronized([[NSObject alloc] init])

Exceptions With @synchronized

As a precautionary measure, the @synchronized block implicitly adds an exception handler to the protected code. This handler automatically releases the mutex in the event that an exception is thrown. This means that in order to use the @synchronized directive, you must also enable Objective-C exception handling in your code.

If you do not want the additional overhead caused by the implicit exception handler, you should consider using the lock classes.

How it works

OBJC_EXPORT  int objc_sync_enter(id obj)
    OBJC_AVAILABLE(10.3, 2.0, 9.0, 1.0);

OBJC_EXPORT  int objc_sync_exit(id obj)
    OBJC_AVAILABLE(10.3, 2.0, 9.0, 1.0);

@synchronized(obj) {
    // do work
}

  

is converted by the compiler into:

@try {
    objc_sync_enter(obj);
    // do work
} @finally {
    objc_sync_exit(obj);    
}

  

Example

Conclusions:

  • For every object you pass to @synchronized, the Objective-C runtime allocates a recursive lock and stores it in a hash table.
  • If the object is released or set to nil inside the @synchronized block, everything appears to keep working.
  • Be careful never to pass nil to a @synchronized block! That silently removes the thread safety from your code.

References:

http://rykap.com/objective-c/2015/05/09/synchronized/

http://yulingtianxia.com/blog/2015/11/01/More-than-you-want-to-know-about-synchronized/

https://opensource.apple.com/source/objc4/objc4-646/runtime/objc-sync.mm

4. Runloop 

Perform selector on a thread

If the target thread's run loop is not running, the call has no effect.

Starting a Runloop

If no input sources or timers are attached to the run loop, this method exits immediately;

Manually removing all known input sources and timers from the run loop is not a guarantee that the run loop will exit. macOS can install and remove additional input sources as needed to process requests targeted at the receiver's thread. Those sources could therefore prevent the run loop from exiting.

The Run Loop Sequence of Events

Each time you run it, your thread’s run loop processes pending events and generates notifications for any attached observers. The order in which it does this is very specific and is as follows:

  1. Notify observers that the run loop has been entered.
  2. Notify observers that any ready timers are about to fire.
  3. Notify observers that any input sources that are not port based are about to fire.
  4. Fire any non-port-based input sources that are ready to fire.
  5. If a port-based input source is ready and waiting to fire, process the event immediately. Go to step 9.
  6. Notify observers that the thread is about to sleep.
  7. Put the thread to sleep until one of the following events occurs:
    • An event arrives for a port-based input source.
    • A timer fires.
    • The timeout value set for the run loop expires.
    • The run loop is explicitly woken up.
  8. Notify observers that the thread just woke up.
  9. Process the pending event.
    • If a user-defined timer fired, process the timer event and restart the loop. Go to step 2.
    • If an input source fired, deliver the event.
    • If the run loop was explicitly woken up but has not yet timed out, restart the loop. Go to step 2.
  10. Notify observers that the run loop has exited.

Example: Detect Main Runloop lag with RunloopObserver

References:

https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/Multithreading/RunLoopManagement/RunLoopManagement.html

http://www.tanhao.me/code/151113.html/

6. Queue Vs Thread 

Thread != Queue

A queue doesn't own a thread and a thread is not bound to a queue. There are threads and there are queues. Whenever a queue wants to run a block, it needs a thread but that won't always be the same thread. It just needs any thread for it (this may be a different one each time) and when it's done running blocks (for the moment), the same thread can now be used by a different queue.

There's also no guarantee that a given serial queue will always use the same thread.

The only exception is the main queue:

Blocks submitted to dispatch_get_main_queue() always run on the main thread.
The main thread, however, may run tasks from more than one queue.

7. Dispatch Sync Vs Dispatch Async

dispatch_sync

dispatch_sync  
└──dispatch_sync_f
    └──_dispatch_sync_f2
        └──_dispatch_sync_f_slow
static void _dispatch_sync_f_slow(dispatch_queue_t dq, void *ctxt, dispatch_function_t func) {  
    _dispatch_thread_semaphore_t sema = _dispatch_get_thread_semaphore();
    struct dispatch_sync_slow_s {
        DISPATCH_CONTINUATION_HEADER(sync_slow);
    } dss = {
        .do_vtable = (void*)DISPATCH_OBJ_SYNC_SLOW_BIT,
        .dc_ctxt = (void*)sema,
    };
    _dispatch_queue_push(dq, (void *)&dss);

    _dispatch_thread_semaphore_wait(sema);
    _dispatch_put_thread_semaphore(sema);
    // ...
}

 

Submits a block to a dispatch queue for synchronous execution. Unlike dispatch_async, this function does not return until the block has finished. Calling this function and targeting the current queue results in deadlock.

Unlike with dispatch_async, no retain is performed on the target queue. Because calls to this function are synchronous, it "borrows" the reference of the caller. Moreover, no Block_copy is performed on the block.

As an optimization, this function invokes the block on the current thread when possible.

dispatch_sync does two things:

  1. queue a block
  2. blocks the current thread until the block has finished running

dispatch_async

 

void dispatch_async(dispatch_queue_t dq, dispatch_block_t work) {  
    dispatch_async_f(dq, _dispatch_Block_copy(work), _dispatch_call_block_and_release);    
}
dispatch_async_f(dispatch_queue_t queue, void *context, dispatch_function_t work);

Dead Locks

dispatch_sync(queueA, ^{
    dispatch_sync(queueB, ^{
        dispatch_sync(queueA, ^{         // DEAD LOCK
            // some task
        });
    });
});

Example:

dispatch_async(QueueA, ^{
    someFunctionA(...);
    dispatch_sync(QueueB, ^{
        someFunctionB(...);
    });
});

When QueueA runs the block, it will temporarily own a thread, any thread. someFunctionA(...) will execute on that thread. Now while doing the synchronous dispatch, QueueA cannot do anything else, it has to wait for the dispatch to finish. QueueB on the other hand, will also need a thread to run its block and execute someFunctionB(...). So either QueueA temporarily suspends its thread and QueueB uses some other thread to run the block or QueueA hands its thread over to QueueB (after all it won't need it anyway until the synchronous dispatch has finished) and QueueB directly uses the current thread of QueueA.

Needless to say that the last option is much faster as no thread switch is required. And this is the optimization the sentence talks about. So a dispatch_sync() to a different queue may not always cause a thread switch (different queue, maybe same thread).

But a dispatch_sync() still cannot happen to the same queue (same thread, yes, same queue, no). That's because a queue will execute block after block and when it currently executes a block, it won't execute another one until this one is done. So it executes BlockA and BlockA does a dispatch_sync() of BlockB on the same queue. The queue won't run BlockB as long as it still runs BlockA, but running BlockA won't continue until BlockB has run. 

Important: You should never call the dispatch_sync or dispatch_sync_f function from a task that is executing in the same queue that you are planning to pass to the function. This is particularly important for serial queues, which are guaranteed to deadlock, but should also be avoided for concurrent queues.

8. Dispatch set target

The misunderstanding here is that dispatch_get_specific doesn't traverse the stack of nested queues, it traverses the queue targeting lineage. 

Modifying the target queue of some objects changes their behavior:

  • Dispatch queues:

    A dispatch queue's priority is inherited from its target queue. 

    If you submit a block to a serial queue, and the serial queue’s target queue is a different serial queue, that block is not invoked concurrently with blocks submitted to the target queue or to any other queue with that same target queue.

  • Dispatch sources:

    A dispatch source's target queue specifies where its event handler and cancellation handler blocks are submitted.

  • Dispatch I/O channels:

    A dispatch I/O channel's target queue specifies where its I/O operations are executed.

By default, a newly created queue targets the default-priority global concurrent queue.

References:

https://bestswifter.com/deep-gcd/?spm=5176.100239.0.0.vCv2rL

https://stackoverflow.com/questions/20860997/dispatch-queue-set-specific-vs-getting-the-current-queue

https://stackoverflow.com/questions/23955948/why-did-apple-deprecate-dispatch-get-current-queue

https://stackoverflow.com/questions/7346929/why-do-we-use-builtin-expect-when-a-straightforward-way-is-to-use-if-else

https://www.objc.io/issues/2-concurrency/concurrency-apis-and-pitfalls/?spm=5176.100239.blogcont17709.5.71pknM

libdispatch source: https://opensource.apple.com/tarballs/libdispatch/

9. Read-write Lock in GCD

Use dispatch_barrier_async().

When the barrier block reaches the front of a private concurrent queue, it is not executed immediately. Instead, the queue waits until its currently executing blocks finish executing. At that point, the barrier block executes by itself. Any blocks submitted after the barrier block are not executed until the barrier block completes.

The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.

 

Appendix:

Test demo: https://files.cnblogs.com/files/smileEvday/iOSMultiThreadSample.zip

Posted on 2017-07-24 20:49 by 一片-枫叶