Python Processes, Threads and Coroutines 03 -- Coroutines

  • Process > thread > coroutine.
  • Coroutines are suited to time-consuming operations and make efficient use of the CPU, for example network requests, network downloads (crawlers), and IO operations (reading and writing files).

1. Implementing coroutines with generators

### Python 3.8
import time

def task1():
    for i in range(3):
        print('A' + str(i))
        yield                      # pause here and hand control back to the caller
        time.sleep(1)

def task2():
    for i in range(3):
        print('B' + str(i))
        yield
        time.sleep(2)

if __name__ == '__main__':
    g1 = task1()                   # calling a generator function returns a generator object
    g2 = task2()
    while True:
        try:
            next(g1)               # resume task1 until its next yield
            next(g2)               # then resume task2
        except StopIteration:      # one of the generators is exhausted
            break
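
The loop above stops as soon as the first generator is exhausted. As a small generalization (my own sketch, not from the original post; the scheduler and make_task names are made up for illustration), a round-robin scheduler can drive any number of generators and drop each one only when it raises StopIteration:

from collections import deque

def make_task(name, n):
    def task():
        for i in range(n):
            print(name + str(i))
            yield                    # pause and give the other tasks a turn
    return task()

def scheduler(*gens):
    queue = deque(gens)              # generators that still have work to do
    while queue:
        gen = queue.popleft()
        try:
            next(gen)                # run the generator up to its next yield
            queue.append(gen)        # still running: put it back in the queue
        except StopIteration:
            pass                     # exhausted: drop it from the queue

if __name__ == '__main__':
    scheduler(make_task('A', 3), make_task('B', 2), make_task('C', 4))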

2. Implementing coroutines with greenlet

import time
from greenlet import greenlet

def task1():
    for i in range(3):
        print('A' + str(i))
        g2.switch()            # hand control to task2
        time.sleep(0.1)

def task2():
    for i in range(3):
        print('B' + str(i))
        g3.switch()            # hand control to task3
        time.sleep(2)

def task3():
    for i in range(3):
        print('C' + str(i))
        g1.switch()            # hand control back to task1
        time.sleep(2)

if __name__ == '__main__':
    g1 = greenlet(task1)
    g2 = greenlet(task2)
    g3 = greenlet(task3)
    # g1.switch()
    # g2.switch()
    g3.switch()    # only one of g1/g2/g3 needs to be switched to; it starts the whole cycle
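
A minimal two-greenlet sketch (my own addition, not from the original post) showing where control ends up: when a greenlet's function returns, execution switches back to its parent greenlet, which by default is the greenlet that created it (here, the main one). That is also why the three-task example above ends once task3's loop finishes, even though task1 and task2 are still suspended.

from greenlet import greenlet

def ping():
    print('ping')
    pong_gl.switch()            # hand control to pong
    print('ping again')         # resumed when pong switches back

def pong():
    print('pong')
    ping_gl.switch()            # hand control back to ping

if __name__ == '__main__':
    ping_gl = greenlet(ping)
    pong_gl = greenlet(pong)
    ping_gl.switch()            # ping -> pong -> ping; when ping() returns,
                                # control goes back to the main greenlet
    print('back in main')       # pong is simply left suspended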

3. gevent and monkey patching

  • greenlet already implements coroutines, but the switching between them is manual. Python has a module that is more powerful than greenlet and can switch tasks automatically: gevent.
  • The idea behind gevent: when a greenlet hits an IO operation (input/output, e.g. network or file operations), it automatically switches to another greenlet; once the IO completes, it switches back at a suitable moment and continues execution (a minimal sketch follows this list).
  • Since IO operations are very time-consuming and often leave the program waiting, gevent's automatic switching guarantees that some greenlet is always running instead of waiting on IO.
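
A minimal sketch of the same idea without the monkey patch (my own illustration; the worker name is made up): gevent can only switch on operations it knows about, so here the blocking time.sleep is replaced explicitly with gevent.sleep. Example 1 below gets the same effect by letting monkey.patch_all() replace time.sleep and friends for you.

import gevent

def worker(name, seconds):
    for i in range(3):
        print(name + str(i))
        gevent.sleep(seconds)    # cooperative sleep: lets the other greenlets run

if __name__ == '__main__':
    g1 = gevent.spawn(worker, 'A', 0.1)
    g2 = gevent.spawn(worker, 'B', 0.2)
    gevent.joinall([g1, g2])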

Example 1:

import time
import gevent
from gevent import monkey      # import the monkey module
monkey.patch_all()             # apply the monkey patch: blocking standard-library calls
                               # (time.sleep, socket, ...) are replaced with gevent-aware
                               # versions, so time-consuming operations can be detected
                               # and used as switch points

def task1():
    for i in range(3):
        print('A' + str(i))
        time.sleep(0.1)

def task2():
    for i in range(3):
        print('B' + str(i))
        time.sleep(2)

def task3():
    for i in range(3):
        print('C' + str(i))
        time.sleep(2)

if __name__ == '__main__':
    g1 = gevent.spawn(task1)
    g2 = gevent.spawn(task2)
    g3 = gevent.spawn(task3)
    g1.join()
    g2.join()
    g3.join()
    print('******END******')

Example 2:

### Python 3.8
import gevent
from gevent import monkey
monkey.patch_all()
import requests                    # must be imported after the monkey patch is applied

def download(url):
    response = requests.get(url)
    content = response.text
    print('Downloaded {}, length: {}'.format(url, len(content)))

if __name__ == '__main__':
    urls = ['http://www.163.com', 'http://www.qq.com', 'http://www.baidu.com']
    g1 = gevent.spawn(download, urls[0])
    g2 = gevent.spawn(download, urls[1])
    g3 = gevent.spawn(download, urls[2])
    gevent.joinall([g1, g2, g3])
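
A small follow-up sketch (my own variation; the fetch helper is made up): one greenlet per URL spawned in a list comprehension, with each download's return value read from the greenlet's value attribute after joinall.

import gevent
from gevent import monkey
monkey.patch_all()
import requests                    # still imported after the patch

def fetch(url):
    return len(requests.get(url).text)     # return the content length

if __name__ == '__main__':
    urls = ['http://www.163.com', 'http://www.qq.com', 'http://www.baidu.com']
    jobs = [gevent.spawn(fetch, url) for url in urls]
    gevent.joinall(jobs)
    for url, job in zip(urls, jobs):
        print(url, job.value)              # .value holds fetch()'s return value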

 
