Multithreading --- threading

1. Several concepts:

  • GIL: Global Interpreter Lock. To keep data integrity and state synchronization simple across threads, CPython is designed so that only one thread executes bytecode in the interpreter at any given moment.
  • Thread: the smallest unit of program execution.
  • Process: the smallest unit of system resource allocation.
  • Thread safety: in a multithreaded environment, shared data may only be operated on by one thread at a time.
  • Atomic operation: an operation that cannot be interrupted by concurrent processes or threads while it runs (a short sketch right after this list shows why count += 1 is not atomic).
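
To illustrate the last point, here is a small sketch of my own (not from the original post, and assuming CPython): disassembling count += 1 shows that it compiles to several bytecode instructions, and the GIL may be released between any two of them, which is exactly the window the thread-unsafe example below exploits.

import dis

count = 0

def incr():
    global count
    count += 1   # looks like one step, but is several bytecode operations

dis.dis(incr)
# Typical CPython output (abridged, exact opcodes vary by version):
# LOAD_GLOBAL count, LOAD_CONST 1, INPLACE_ADD / BINARY_OP, STORE_GLOBAL count
# -- another thread can be scheduled between these instructions, so the
# read-modify-write is not atomic.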

2. threading:

  The thread module has been deprecated; use the threading module instead. Consequently, the "thread" module can no longer be imported under that name in Python 3 -- for backward compatibility it was renamed to "_thread".

threading.Thread prototype:

class threading.Thread(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)
    # group should be None; reserved for future extension when a ThreadGroup class is implemented.
    # target is the callable object to be invoked by the run() method. Defaults to None, meaning nothing is called.
    # name is the thread name. By default, a unique name is constructed of the form "Thread-N" where N is a small decimal number.
    # args is the argument tuple for the target invocation. Defaults to ().
    # kwargs is a dictionary of keyword arguments for the target invocation. Defaults to {}.
    # If not None, daemon explicitly sets whether the thread is daemonic. If None (the default), the daemonic property is inherited from the current thread.
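
A minimal usage sketch of this constructor (my own illustration; the worker function and its arguments are hypothetical):

import threading

def worker(msg, repeat=2):
    for _ in range(repeat):
        print("%s from %s" % (msg, threading.current_thread().name))

# name, args and kwargs map directly onto the constructor parameters above
t = threading.Thread(target=worker, name="demo-thread",
                     args=("hello",), kwargs={"repeat": 3}, daemon=False)
t.start()
t.join()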

Thread methods:

start()
    # Starts the thread, causing run() to be invoked in a separate thread of control.
    # start() may be called at most once per Thread object; calling it again raises RuntimeError.
run()
    # Invokes the target callable with the args and kwargs passed to the constructor.
    # While the thread is running, is_alive() reports whether it is still alive.
join(timeout=None)
    # Waits for the thread to terminate; join() blocks the calling thread until the joined thread finishes.
    # With the default timeout=None, join() blocks until the thread terminates, normally or via an unhandled exception.
    # With a numeric timeout, join() returns after at most that many seconds even if the thread is still running.
    # join() must be called after start(), otherwise RuntimeError is raised.
is_alive()
    # Returns whether the thread is alive.
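
A small sketch showing these methods together (my own example, with a hypothetical slow_task function): it starts a thread, joins it with a timeout, and uses is_alive() to check whether the timeout expired before the thread finished.

import threading
import time

def slow_task():
    time.sleep(3)

t = threading.Thread(target=slow_task)
t.start()
t.join(timeout=1)          # give up waiting after 1 second
if t.is_alive():
    print("still running after the timeout")  # expected here, since the task sleeps 3s
t.join()                   # now wait for it to really finish
print("finished, is_alive() =", t.is_alive())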

class threading.Lock

acquire(blocking=True, timeout=-1)
    # Acquires the lock. Blocking by default; timeout=-1 means wait indefinitely,
    # otherwise wait at most timeout seconds and return False if the lock could not be acquired.
release()
    # Releases the lock.
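
A minimal sketch of acquire() with a timeout (my own illustration): a second acquire times out instead of blocking forever, because the lock is already held.

import threading

lock = threading.Lock()

lock.acquire()                         # first acquisition succeeds immediately
got_it = lock.acquire(timeout=0.5)     # already held, so this waits 0.5s and gives up
print(got_it)                          # False
lock.release()
print(lock.acquire(blocking=False))    # True: non-blocking acquire now succeeds
lock.release()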

class threading.RLock

  R stands for reentrant: the thread that already holds the lock may acquire it again without blocking.

acquire(blocking=True, timeout=-1)
    # If the calling thread already owns the lock, increment the recursion level by one and return immediately.
release()
    # Releases the lock by decrementing the recursion level; the lock is unlocked only when the level reaches zero.
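
A quick sketch of the difference (mine, not from the original post): the same thread can take an RLock twice as long as it releases it the same number of times, whereas doing that with a plain Lock blocks -- which is exactly the deadlock shown in example 4 below.

import threading

rlock = threading.RLock()

with rlock:            # recursion level 1
    with rlock:        # same thread: level 2, no deadlock
        print("nested acquire on an RLock works")
# both releases have happened here; the lock is free again

lock = threading.Lock()
print(lock.acquire())               # True
print(lock.acquire(timeout=0.5))    # False: a plain Lock is not reentrant
lock.release()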

 

Example 1: a thread-unsafe example

import threading
import time

def sub1():
    global count
    tmp = count               # read the shared counter
    time.sleep(0.001)         # force a thread switch between the read and the write
    print("tmp is %0d" % tmp)
    count = tmp + 1           # write back the (possibly stale) value
    print("count is %0d @" % count, time.time())
    time.sleep(2)

count = 0
def verify(sub):
    global count
    thread_list = []
    for i in range(3):
        t = threading.Thread(target=sub, args=())
        t.start()
        thread_list.append(t)
    for j in thread_list:
        j.join()
    print(count)

verify(sub1)

The output looks like this:

tmp is 0
count is 1 @ 1585317643.214646
tmp is 0
count is 1 @ 1585317643.214646
tmp is 0
count is 1 @ 1585317643.2156456
1

Analysis:

In this example we replaced
count += 1
with
tmp = count
time.sleep(0.001)
count = tmp + 1
because, although count += 1 is not atomic, the CPU runs it so fast that the thread-unsafety caused by this non-atomic read-modify-write is hard to reproduce in practice.
After the substitution, even a 0.001-second sleep is a very long time from the CPU's point of view, so the GIL is released halfway through the block:
tmp has already read the value of count, but tmp + 1 has not yet been written back. If another thread finishes its own count = tmp + 1 in the meantime,
then when the original thread resumes, its assignment count = tmp + 1 stores the very same value the other thread just wrote, even though count had already been updated.
The net effect is that most of our increments are lost.

Example 2: thread-safe --- using threading.Lock

import threading
import time

count = 0
lock = threading.Lock()

def sub2():
    global count
    if lock.acquire():
        # acquire() takes the lock; it returns True when the lock was acquired
        # and False when it was not (only possible with blocking=False or a timeout)
        tmp = count
        print("tmp is %0d" % tmp)
        time.sleep(0.001)
        count = tmp + 1
        print("count is %0d @" % count, time.time())
        time.sleep(2)
        lock.release()  # release the lock once the critical section is done

def verify(sub):
    global count
    thread_list = []
    for i in range(3):
        t = threading.Thread(target=sub, args=())
        t.start()
        thread_list.append(t)
    for j in thread_list:
        j.join()
    print(count)

verify(sub2)

The output is as follows:

tmp is 0
count is 1 @ 1585317856.0688052
tmp is 1
count is 2 @ 1585317858.071597
tmp is 2
count is 3 @ 1585317860.0745535
3

With the lock in place, each thread must wait for the previous one to release the lock before entering the critical section, so the final result is the full accumulated sum.

Example 3: taking the lock with a with statement

def sub3():
    global count
    with lock:
        tmp = count
        print("tmp is %0d" % tmp)
        time.sleep(0.001)
        count = tmp + 1
        print("count is %0d @" % count, time.time())
        time.sleep(2)

The rest of the code is unchanged; just run verify(sub3), and the result is identical to example 2. With `with lock:` there is no need to write lock.acquire() and lock.release() explicitly. Why? Because Lock implements the context manager protocol: entering the with block acquires the lock, and leaving it (even via an exception) releases it; a rough equivalent is sketched below.
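
A minimal sketch of what `with lock:` roughly expands to (my own illustration for intuition, not the CPython implementation):

import threading

lock = threading.Lock()

# with lock:
#     ...critical section...
# behaves roughly like:
lock.acquire()              # __enter__ acquires the lock
try:
    pass                    # ...critical section...
finally:
    lock.release()          # __exit__ releases it, even if an exception was raised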

Example 4: deadlock caused by Lock --- nested (iterative) deadlock and the recursive lock

import threading
import time

count_list = [0, 0]
lock = threading.Lock()

def change_0():
    global count_list
    print("before change_0 lock")
    with lock:
        print("after change_0 lock")
        tmp = count_list[0]
        time.sleep(0.001)
        count_list[0] = tmp + 1
        time.sleep(2)
        print("Done. count_list[0]:%s" % count_list[0])

def change_1():
    global count_list
    with lock:
        tmp = count_list[1]
        time.sleep(0.001)
        count_list[1] = tmp + 1
        time.sleep(2)
        print("Done. count_list[1]:%s" % count_list[1])

def change():
    with lock:                    # outer acquisition
        print("before change_0")
        change_0()                # tries to acquire the same (non-reentrant) lock again
        time.sleep(0.001)
        print("before change_1")
        change_1()

def verify(sub):
    global count_list
    thread_list = []
    for i in range(100):
        t = threading.Thread(target=sub, args=())
        t.start()
        thread_list.append(t)
    for j in thread_list:
        j.join()
    print(count_list)

if __name__ == "__main__":
    verify(change)

The output:

before change_0
before change_0 lock

Process finished with exit code -1

The run above deadlocks, so the process was killed manually.

Why it deadlocks: threading.Lock is not reentrant -- once a thread holds it, any further acquire (even by the same thread) blocks. So when lock acquisitions are nested, as above, the thread ends up waiting for a lock it is itself holding, and the program deadlocks.

In the example we have a shared resource, count_list, and two functions that each access one part of it (count_list[0] and count_list[1]). Both accessor functions take the lock so that no other thread can modify the shared data while they work on it.

Now think about how to add a third function that reads both parts. A simple approach is to call the two functions one after the other and return the combined result.

The problem is that if some other thread modifies the shared resource between the two calls, we end up with inconsistent data.

The obvious fix is to take the lock in this third function as well. That, however, does not work: the two inner accessor functions block, because the outer statement already holds the lock.

The result is no output at all -- a deadlock.

To solve this problem, we can replace threading.Lock with threading.RLock.

Example 5: using RLock

import threading
import time

count_list = [0, 0]
#lock = threading.Lock()  # would deadlock
lock = threading.RLock()  # does not deadlock

def change_0():
    global count_list
    print("before change_0 lock")
    with lock:
        print("after change_0 lock")
        tmp = count_list[0]
        time.sleep(0.001)
        count_list[0] = tmp + 1
        time.sleep(2)
        print("Done. count_list[0]:%s @" % count_list[0], time.time())

def change_1():
    global count_list
    with lock:
        tmp = count_list[1]
        time.sleep(0.001)
        count_list[1] = tmp + 1
        time.sleep(2)
        print("Done. count_list[1]:%s @" % count_list[1], time.time())

def change():
    with lock:
        print("before change_0")
        change_0()
        time.sleep(0.001)
        print("before change_1")
        change_1()

def verify(sub):
    global count_list
    thread_list = []
    for i in range(3):
        t = threading.Thread(target=sub, args=())
        t.start()
        thread_list.append(t)
    for j in thread_list:
        j.join()
    print(count_list)

if __name__ == "__main__":
    verify(change)

The output is as follows:

before change_0
before change_0 lock
after change_0 lock
Done. count_list[0]:1 @ 1585359946.4287996
before change_1
Done. count_list[1]:1 @ 1585359948.4335155
before change_0
before change_0 lock
after change_0 lock
Done. count_list[0]:2 @ 1585359950.4367368
before change_1
Done. count_list[1]:2 @ 1585359952.439945
before change_0
before change_0 lock
after change_0 lock
Done. count_list[0]:3 @ 1585359954.4420621
before change_1
Done. count_list[1]:3 @ 1585359956.4463983
[3, 3]

 

Example 6: deadlock caused by Lock --- two threads waiting on each other

The other cause of deadlock is that each of two threads wants a lock the other one already holds; each waits for the other while refusing to release what it owns. Suppose that in a banking system user a tries to transfer 100 to user b while, at the same time, user b tries to transfer 500 to user a -- a deadlock can result:
the two threads wait for each other's lock while holding on to their own resources.

import threading
import time

class Account(object):
    def __init__(self, name, balance, lock):
        self.name = name
        self.balance = balance
        self.lock = lock

    def withdraw(self, amount):
        self.balance -= amount

    def deposit(self, amount):
        self.balance += amount

def transfer(from_account, to_account, amount):
    with from_account.lock:                 # hold our own account's lock...
        from_account.withdraw(amount)
        time.sleep(1)
        print("trying to get account %s's lock..." % to_account.name)
        with to_account.lock:               # ...while waiting for the other account's lock
            to_account.deposit(amount)
    print("transfer finish")

if __name__ == "__main__":
    a = Account('a', 1000, threading.Lock())
    b = Account('b', 1000, threading.Lock())
    thread_list = []
    thread_list.append(threading.Thread(target=transfer, args=(a, b, 100)))
    thread_list.append(threading.Thread(target=transfer, args=(b, a, 500)))
    for i in thread_list:
        i.start()
    for j in thread_list:
        j.join()

The result is a deadlock:

trying to get account a's lock...
trying to get account b's lock...

Suppose you are writing a multithreaded program in which a thread needs to acquire several locks at once; how do you avoid deadlock?
Solution:
In multithreaded programs, deadlock is very often caused by threads trying to hold several locks at the same time. For example, if a thread acquires a first lock and then blocks while acquiring a second one, it can block other threads and bring the whole program to a standstill.

The core idea behind the fix is simple. The problem above is that each of the two threads wants a lock the other has already taken, so it suffices to make every thread acquire the locks in the same order. Say we have threads thread_a and thread_b and locks lock_1 and lock_2, and we agree to always take lock_1 before lock_2. Once thread_a holds lock_1, thread_b cannot get lock_1 and therefore never moves on to grab lock_2, so the two threads can never end up waiting on each other.

In short, one way to solve the deadlock problem is to give every lock in the program a unique id and only allow multiple locks to be acquired in ascending id order. This rule is easy to enforce with a context manager, as the following example shows:

Example 7: acquiring locks in ascending id order

import threading
import time
from contextlib import contextmanager

thread_local = threading.local()

@contextmanager
def acquire(*locks):
    # sort locks by object identifier
    locks = sorted(locks, key=lambda x: id(x))

    # make sure lock order of previously acquired locks is not violated
    acquired = getattr(thread_local, 'acquired', [])
    if acquired and (max(id(lock) for lock in acquired) >= id(locks[0])):
        raise RuntimeError('Lock Order Violation')

    # Acquire all the locks
    acquired.extend(locks)
    thread_local.acquired = acquired

    try:
        for lock in locks:
            lock.acquire()
        yield
    finally:
        for lock in reversed(locks):
            lock.release()
        del acquired[-len(locks):]

class Account(object):
    def __init__(self, name, balance, lock):
        self.name = name
        self.balance = balance
        self.lock = lock

    def withdraw(self, amount):
        self.balance -= amount

    def deposit(self, amount):
        self.balance += amount

def transfer(from_account, to_account, amount):
    print("%s transfer..." % amount)
    with acquire(from_account.lock, to_account.lock):
        from_account.withdraw(amount)
        time.sleep(1)
        to_account.deposit(amount)
    print("%s transfer... %s:%s ,%s: %s" % (amount, from_account.name, from_account.balance,
                                            to_account.name, to_account.balance))
    print("transfer finish")

if __name__ == "__main__":
    a = Account('a', 1000, threading.Lock())
    b = Account('b', 1000, threading.Lock())
    thread_list = []
    thread_list.append(threading.Thread(target=transfer, args=(a, b, 100)))
    thread_list.append(threading.Thread(target=transfer, args=(b, a, 500)))
    for i in thread_list:
        i.start()
    for j in thread_list:
        j.join()

The output:

100 transfer...
500 transfer...
100 transfer... a:900 ,b:1100
transfer finish
500 transfer... b:600, a:1400
transfer finish

The mutual-wait deadlock is successfully avoided.

A few points of syntax in the code above deserve an explanation:

  • 1. The @contextmanager decorator is what lets us take the locks with a with statement, simplifying their acquisition and release. On the with statement itself, see 浅谈 Python 的 with 语句 (https://www.ibm.com/developerworks/cn/opensource/os-cn-pythonwith/). In short, a with statement first calls __enter__(), then runs the body of the with block, and finally calls __exit__(). With @contextmanager, the statements before the yield in the generator function run as the __enter__() step, the statements after the yield run as the __exit__() step, and the value produced by yield is bound to the variable in the as clause (a tiny sketch of this split follows this list).
  • 2. The try/finally block is where the locks are actually acquired and released.
  • 3. The code before the try sorts the locks and checks that the ordering of previously acquired locks is not violated.
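
To make point 1 concrete, here is a minimal sketch of my own (a hypothetical timer example, not from the original post) showing which part of a @contextmanager generator plays the role of __enter__() and which plays __exit__():

import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    start = time.time()        # runs on entry, like __enter__()
    try:
        yield start            # the yielded value is what "as" binds
    finally:
        # runs on exit, like __exit__(), even if the body raised
        print("%s took %.3fs" % (label, time.time() - start))

with timed("sleep") as started:
    print("started at", started)
    time.sleep(0.2)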

In this post we looked at how to keep Python threads safe with mutex locks, and examined two ways a deadlock can arise -- nested acquisition and mutual waiting -- together with their respective fixes: the reentrant lock (RLock) and acquiring locks in ascending order.

References:

https://www.cnblogs.com/chengd/articles/7770898.html

 
