GIL and Coroutines

1. The GIL (Global Interpreter Lock)

  • Demo
'''
Python interpreters:
    - CPython  (implemented in C)
    - Jython   (implemented in Java)

1. GIL: Global Interpreter Lock
    - In plain terms: within one process, only one of its threads can execute Python bytecode at any given moment, because CPython's memory management is not thread-safe.

    - The GIL is essentially a mutex; it exists to keep the interpreter's data safe.

Definition:
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple
native threads from executing Python bytecodes at once. This lock is necessary mainly
because CPython’s memory management is not thread-safe. (However, since the GIL
exists, other features have grown to depend on the guarantees that it enforces.)

Conclusion: under CPython, the threads started within a single process can only execute one at a time, so multithreading cannot exploit multiple cores.


Pros and cons of the GIL:

    Pros:
        keeps data safe
    Cons:
        threads within a single process cannot run in parallel, only concurrently, which sacrifices execution efficiency

        - IO-bound workloads: use multithreading
        - CPU-bound workloads: use multiprocessing
'''

import time
from threading import Thread, Lock
lock = Lock()


n = 100


def task():
    # Read-modify-write on the shared n; the explicit Lock keeps the whole
    # sequence atomic even though the sleep releases the GIL
    global n
    lock.acquire()
    m = n
    time.sleep(1)  # simulate IO; the GIL is released while sleeping
    n = m - 1
    lock.release()


if __name__ == '__main__':
    list1 = []

    # start 10 threads, each decrementing n once
    for line in range(10):
        t = Thread(target=task)
        t.start()
        list1.append(t)

    # wait for all of them to finish
    for t in list1:
        t.join()

    print(n)  # 90: the lock serializes the threads, so this takes about 10 seconds
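
For contrast, here is a minimal sketch of the same task with the explicit Lock removed (same n = 100 and 10 threads as above): the GIL still lets only one thread run bytecode at a time, but it is released during the sleep between the read and the write, so every thread reads 100 and the final value is 99 instead of 90.

import time
from threading import Thread

n = 100


def task_unlocked():
    global n
    m = n          # every thread reads 100 before any of them writes
    time.sleep(1)  # the GIL is released here, so the other threads get to read too
    n = m - 1      # each thread writes back 99


if __name__ == '__main__':
    threads = [Thread(target=task_unlocked) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(n)  # 99, not 90: the GIL alone does not protect this compound operation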
  • Finding resources
# Check the documentation to see whether memory can be cleaned up manually
# import gc

# - Where to look things up outside of class:
# - Chinese sites: 开源中国 (OSChina), CSDN, cnblogs, https://www.v2ex.com/
# - International sites: Stack Overflow, GitHub, Google
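
Following up on the gc note above, the standard library's gc module does expose manual collection; a small sketch (the Node class is just an illustrative example of a reference cycle, not from the notes):

import gc


class Node:
    def __init__(self):
        self.ref = None


# Build a reference cycle that reference counting alone cannot reclaim
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

# Trigger a collection by hand; returns the number of unreachable objects found
print(gc.collect())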

2. Using Multithreading to Improve Efficiency

  • Example
from threading import Thread
from multiprocessing import Process
import time

'''
Use multithreading for IO-bound work
Use multiprocessing for CPU-bound work

IO-bound tasks, 4s each:

    - Single core:
        - threads are cheaper to create, so multithreading saves resources

    - Multiple cores:
        - Multithreading:
            - open 4 child threads: roughly 4s, because the GIL is released while each thread waits on IO

        - Multiprocessing:
            - open 4 child processes: roughly 4s, plus the extra time spent creating the processes

CPU-bound tasks, 4s each:
    - Single core:
        - opening threads is cheaper than opening processes

    - Multiple cores:
        Multithreading:
            - open 4 child threads: 16s, because the GIL lets only one thread compute at a time

        Multiprocessing:
            - open 4 child processes: about 4s, each process computing on its own core


'''


# CPU-bound task
def task1():
    # perform 10,000,000 increments of a local counter
    i = 10
    for line in range(10000000):
        i += 1


# IO-bound task
def task2():
    time.sleep(3)


if __name__ == '__main__':
    # 1. Test multiprocessing:
    # CPU-bound test
    start_time = time.time()
    list1 = []
    for line in range(6):
        p = Process(target=task1)
        p.start()
        list1.append(p)

    for p in list1:
        p.join()
    end_time = time.time()
    # elapsed: 5.33872389793396
    print(f'Multiprocessing, CPU-bound elapsed: {end_time - start_time}')

    # IO-bound test
    start_time = time.time()
    list1 = []
    for line in range(6):
        p = Process(target=task2)
        p.start()
        list1.append(p)

    for p in list1:
        p.join()
    end_time = time.time()
    # elapsed: 4.517091751098633
    print(f'Multiprocessing, IO-bound elapsed: {end_time - start_time}')


    # 2. Test multithreading:
    # CPU-bound test
    start_time = time.time()
    list1 = []
    for line in range(6):
        p = Thread(target=task1)
        p.start()
        list1.append(p)

    for p in list1:
        p.join()
    end_time = time.time()
    # elapsed: 5.988943815231323
    print(f'Multithreading, CPU-bound elapsed: {end_time - start_time}')

    # IO-bound test
    start_time = time.time()
    list1 = []
    for line in range(6):
        p = Thread(target=task2)
        p.start()
        list1.append(p)

    for p in list1:
        p.join()
    end_time = time.time()
    # elapsed: 3.00256085395813
    print(f'Multithreading, IO-bound elapsed: {end_time - start_time}')
   
Conclusion:
   # Comparing tests 1 and 3: with multiple cores (multiple CPUs), use multiprocessing for CPU-bound work
   # Comparing tests 2 and 4: with multiple cores (multiple CPUs), use multithreading for IO-bound work

   # On a single core (one CPU), use multithreading for both
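
One idiomatic way to apply this rule of thumb is the standard library's concurrent.futures pools, which let you switch between processes and threads without changing the task code. A minimal sketch (the task bodies and the pool size of 4 are illustrative choices, not taken from the benchmarks above):

import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor


def cpu_task():
    # CPU-bound: pure computation, so a process pool can use several cores
    return sum(i * i for i in range(1_000_000))


def io_task():
    # IO-bound: mostly waiting; a thread pool is enough because the GIL is released while sleeping
    time.sleep(1)
    return 'done'


if __name__ == '__main__':
    # CPU-bound work -> process pool
    with ProcessPoolExecutor(max_workers=4) as pool:
        print([f.result() for f in [pool.submit(cpu_task) for _ in range(4)]])

    # IO-bound work -> thread pool
    with ThreadPoolExecutor(max_workers=4) as pool:
        print([f.result() for f in [pool.submit(io_task) for _ in range(4)]])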

3. Coroutines

  • Demo

'''
1. What is a coroutine?
- Process: the unit of resource allocation
- Thread: the unit of execution
- Coroutine: concurrency achieved within a single thread

    - For IO-bound workloads, coroutines give the biggest efficiency gain

    Note: a coroutine is not an OS-level unit of anything; it is purely a concept that programmers came up with

    Summary: multiprocessing ---> multithreading ---> have every thread run coroutines (concurrency within a single thread)

    The goal of a coroutine:
        - implement "switch on IO + save state" by hand, fooling the operating system into believing there is no IO going on, so that it keeps handing the CPU to your thread

'''

import time

# Baseline: run one after another, these five tasks would take 1 + 3 + 5 + 7 + 9 = 25 seconds
def task1():
    time.sleep(1)

def task2():
    time.sleep(3)

def task3():
    time.sleep(5)

def task4():
    time.sleep(7)

def task5():
    time.sleep(9)

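Before switching to gevent, here is a minimal hand-rolled sketch of the "save state and switch" idea using plain generators; it only illustrates the concept of yielding control and resuming where you left off, and is not how gevent itself works:

def worker(name, steps):
    # Each `yield` saves the function's state and hands control back,
    # mimicking "switch away when blocked, resume later"
    for step in range(steps):
        print(f'{name} step {step}')
        yield


if __name__ == '__main__':
    tasks = [worker('task1', 2), worker('task2', 2)]
    # A tiny round-robin scheduler: resume each task in turn until all are finished
    while tasks:
        for task in tasks[:]:
            try:
                next(task)
            except StopIteration:
                tasks.remove(task)
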
Switch on IO (with gevent) + save state

from gevent import monkey  # monkey patch

monkey.patch_all()  # patch blocking calls so IO in any task can be detected
from gevent import spawn  # spawn(task)

from gevent import joinall
import time

def task1():
    print('start from task1....')
    time.sleep(1)
    print('end from task1....')

def task2():
    print('start from task2....')
    time.sleep(1)
    print('end from task2....')

def task3():
    print('start from task3....')
    time.sleep(1)
    print('end from task3....')

if __name__ == '__main__':

    start_time = time.time()
    sp1 = spawn(task1)
    sp2 = spawn(task2)
    sp3 = spawn(task3)

    # sp1.start()
    # sp2.start()
    # sp3.start()
    # sp1.join()
    # sp2.join()
    # sp3.join()
    joinall([sp1, sp2, sp3])  # equivalent to the six calls above

    end_time = time.time()

    print(f'Elapsed: {end_time - start_time}')

Output:
start from task1....
start from task2....
start from task3....
end from task1....
end from task2....
end from task3....
Elapsed: 1.0085582733154297

4. Implementing a Concurrent TCP Server

  • Code
- client file
import socket

client = socket.socket()

client.connect(
    ('127.0.0.1', 9000)
)

print('Client is running....')
while True:
    msg = input('Client>>:').encode('utf-8')
    client.send(msg)

    data = client.recv(1024)
    print(data)


- server file
import socket
from concurrent.futures import ThreadPoolExecutor

server = socket.socket()

server.bind(
    ('127.0.0.1', 9000)
)

server.listen(5)


# 1. Wrap the per-connection receive/send loop in a function
def run(conn):
    while True:
        try:
            data = conn.recv(1024)
            if len(data) == 0:
                break
            print(data.decode('utf-8'))
            conn.send('111'.encode('utf-8'))

        except Exception as e:
            break

    conn.close()


if __name__ == '__main__':
    print('Server is running....')
    pool = ThreadPoolExecutor(50)  # up to 50 worker threads, one per connected client
    while True:
        conn, addr = server.accept()
        print(addr)
        pool.submit(run, conn)
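
Tying this back to section 3, the same server can also be made concurrent within a single thread by using gevent coroutines instead of a thread pool. A rough sketch under that assumption (same 127.0.0.1:9000 protocol and the same run() loop as above; this variant is not part of the original code):

from gevent import monkey

monkey.patch_all()  # patch blocking socket calls so gevent can switch on IO

import socket
from gevent import spawn


def run(conn):
    while True:
        try:
            data = conn.recv(1024)
            if len(data) == 0:
                break
            print(data.decode('utf-8'))
            conn.send('111'.encode('utf-8'))
        except Exception:
            break
    conn.close()


if __name__ == '__main__':
    server = socket.socket()
    server.bind(('127.0.0.1', 9000))
    server.listen(5)
    print('Server is running....')
    while True:
        conn, addr = server.accept()
        print(addr)
        spawn(run, conn)  # each connection is handled by its own coroutine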






Thanks: 亚峰, awesome!