The Scrapy framework
Introduction
Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, such as data mining, information processing, or archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (for example Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy's uses are broad: data mining, monitoring, and automated testing.
Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture looks roughly like this.
Scrapy mainly consists of the following components:
- Engine (Scrapy): handles the data flow of the whole system and triggers events (the core of the framework).
- Scheduler: accepts requests sent by the engine, pushes them onto a queue, and returns them when the engine asks again. Think of it as a priority queue of URLs (the addresses to crawl); it decides which URL to crawl next and removes duplicate URLs.
- Downloader: downloads page content and returns it to the spiders (the downloader is built on Twisted, an efficient asynchronous model).
- Spiders: do the actual work, extracting the information you need, i.e. the items, from specific pages. You can also extract links from a page so that Scrapy goes on to crawl the next one.
- Item Pipeline: processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and removing unwanted data. After a page is parsed by a spider, its items are sent to the pipeline and processed in a defined order.
- Downloader middlewares: sit between the Scrapy engine and the downloader and process the requests and responses passing between them.
- Spider middlewares: sit between the Scrapy engine and the spiders and process the spiders' response input and request output.
- Scheduler middlewares: sit between the Scrapy engine and the scheduler, handling the requests and responses sent from the engine to the scheduler.
The Scrapy workflow is roughly as follows:
- The engine takes a URL from the scheduler for the next crawl
- The engine wraps the URL in a Request and passes it to the downloader
- The downloader fetches the resource and wraps it in a Response
- The spider parses the Response
- If items are parsed out, they are handed to the item pipeline for further processing
- If URLs are parsed out, they are handed to the scheduler to wait to be crawled
Installation

```
Linux / macOS:
    pip3 install scrapy

Windows:
    # install Twisted
    a. pip3 install wheel
    b. download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    c. cd into the download directory and run: pip3 install Twisted-xxxxx.whl
    # install Scrapy
    d. pip3 install scrapy -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
    # install pywin32
    e. pip3 install pywin32 -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
```
Quick start

Compare with Django:

```
Django:
    django-admin startproject mysite
    cd mysite
    python manage.py startapp app01
    # write code
    python manage.py runserver

Scrapy:
    scrapy startproject xianglong       # create the project
    cd xianglong                        # enter the project directory
    scrapy genspider chouti chouti.com  # similar to creating an app in Django
    # write code
    scrapy crawl chouti --nolog         # start the spider
```
When starting, if the printed output hits an encoding error on Windows, add the following to chouti.py:

```python
# -*- coding: utf-8 -*-
import scrapy

# import sys, io
# sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')
```
The project directory after creation:
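The directory listing (originally shown as a screenshot) is not reproduced here; a typical layout created by the commands above looks roughly like this (the exact files depend on your Scrapy version and on the project and spider names you chose):

```
xianglong/
├── scrapy.cfg
└── xianglong/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── chouti.py
```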
File descriptions:
- scrapy.cfg: the project's configuration, mainly a base configuration for the Scrapy command-line tool. (The configuration that actually matters to the crawler lives in settings.py.)
- items.py: defines the data templates used for structured data, similar to Django's models.
- pipelines.py: data-processing behaviour, e.g. persisting the structured data.
- settings.py: the settings file, e.g. recursion depth, concurrency, download delay, and so on.
- spiders/: the spiders directory; create files here and write your crawling rules.
Note: spider files are usually named after the website's domain.
In chouti.py we can now write our spider:

```python
# -*- coding: utf-8 -*-
import scrapy
from bs4 import BeautifulSoup
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from ..items import XianglongItem


class ChoutiSpider(scrapy.Spider):
    # Name of the spider; used to start it from the command line
    name = 'chouti'

    # Allowed domains
    allowed_domains = ['chouti.com']

    # Start URLs, i.e. the URLs to crawl; there can be more than one
    start_urls = ['http://dig.chouti.com/', ]

    def parse(self, response):
        """
        Default callback: runs automatically once a start URL has been downloaded.
        `response` wraps everything related to the HTTP response.
        """
        hxs = HtmlXPathSelector(response=response)

        # In the downloaded page, find the news entries
        # // selects descendants at any depth, / selects direct children
        items = hxs.xpath("//div[@id='content-list']/div[@class='item']")
        for item in items:
            # .// selects descendants relative to the current node, /@href selects the href attribute
            href = item.xpath('.//div[@class="part1"]//a[1]/@href').extract_first()
            # /text() selects the text content
            text = item.xpath('.//div[@class="part1"]//a[1]/text()').extract_first()
            item = XianglongItem(title=text, href=href)
            yield item
```
Running scrapy crawl chouti --nolog automatically crawls the URLs in start_urls. When a page has been downloaded, the parse callback runs with a response object that wraps the result. Instead of the bs4 module, we use the HtmlXPathSelector that Scrapy provides; this selector can be used in many ways:
#!/usr/bin/env python # -*- coding:utf-8 -*- from scrapy.selector import Selector, HtmlXPathSelector from scrapy.http import HtmlResponse html = """<!DOCTYPE html> <html> <head lang="en"> <meta charset="UTF-8"> <title></title> </head> <body> <ul> <li class="item-"><a id='i1' href="link.html">first item</a></li> <li class="item-0"><a id='i2' href="llink.html">first item</a></li> <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li> </ul> <div><a href="llink2.html">second item</a></div> </body> </html> """ response = HtmlResponse(url='http://example.com', body=html,encoding='utf-8') # hxs = HtmlXPathSelector(response) # print(hxs) # hxs = Selector(response=response).xpath('//a') # print(hxs) # hxs = Selector(response=response).xpath('//a[2]') # print(hxs) # hxs = Selector(response=response).xpath('//a[@id]') # print(hxs) # hxs = Selector(response=response).xpath('//a[@id="i1"]') # print(hxs) # hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]') # print(hxs) # hxs = Selector(response=response).xpath('//a[contains(@href, "link")]') # print(hxs) # hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]') # print(hxs) # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]') # print(hxs) # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract() # print(hxs) # hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract() # print(hxs) # hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract() # print(hxs) # hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first() # print(hxs) # ul_list = Selector(response=response).xpath('//body/ul/li') # for item in ul_list: # v = item.xpath('./a/span') # # 或 # # v = item.xpath('a/span') # # 或 # # v = item.xpath('*/a/span') # print(v)
Once you have the content you want, you can persist it by yielding a special object: an instance of the class defined in items.py, shown below.
The file's contents:
```python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class XianglongItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    href = scrapy.Field()
```
Whenever you yield this special object, the framework automatically passes it to pipelines.py, shown below.
In that file we can perform the persistence:

```python
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class XianglongPipeline(object):

    def process_item(self, item, spider):
        # Called for every yielded item; `spider` is the ChoutiSpider instance
        # from chouti.py and `item` is the special object we yielded.
        self.f.write(item['href'] + '\n')
        self.f.flush()
        return item

    def open_spider(self, spider):
        """
        Called when the spider starts.
        """
        self.f = open('url.log', 'w')

    def close_spider(self, spider):
        """
        Called when the spider closes.
        """
        self.f.close()
```
To enable this, you also need to configure it in the settings file. The block is already there, just commented out; uncomment it. The 300 is the weight (execution order):

```python
ITEM_PIPELINES = {
   'xianglong.pipelines.XianglongPipeline': 300,
}
```
What if, after crawling a page, we extract more URLs from it and want to crawl those too? We cannot simply append them to start_urls: start_urls is only read once, at startup, and later additions are ignored. Instead, we yield a Request object and the framework crawls it for us:
# -*- coding: utf-8 -*- import scrapy from bs4 import BeautifulSoup from scrapy.selector import HtmlXPathSelector from scrapy.http import Request from ..items import XianglongItem class ChoutiSpider(scrapy.Spider): name = 'chouti' allowed_domains = ['chouti.com'] start_urls = ['http://dig.chouti.com/', ] def parse(self, response): """ 当起始URL下载完毕后,自动执行parse函数:response封装了响应相关的所有内容。 :param response: :return: """ hxs = HtmlXPathSelector(response=response) # 去下载的页面中:找新闻 items = hxs.xpath("//div[@id='content-list']/div[@class='item']") for item in items: href = item.xpath('.//div[@class="part1"]//a[1]/@href').extract_first() text = item.xpath('.//div[@class="part1"]//a[1]/text()').extract_first() item = XianglongItem(title=text, href=href) yield item pages = hxs.xpath('//div[@id="page-area"]//a[@class="ct_pagepa"]/@href').extract() for page_url in pages: page_url = "https://dig.chouti.com" + page_url yield Request(url=page_url, callback=self.parse)
Above, we extracted some a-tag hrefs from the crawled content and yielded Request objects with Request(url=page_url, callback=self.parse); the framework then crawls each of those URLs and, when it finishes, runs the callback we passed in.
Related setting, in settings.py:
DEPTH_LIMIT = 2
This means only 2 levels of URLs are crawled: URLs yielded while crawling the first level form the second level, URLs yielded from the second level form the third level, and the third level is not crawled further (see the sketch below).
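As a quick way to observe the depth mechanism, each response carries its crawl depth in response.meta (populated by Scrapy's DepthMiddleware). A minimal sketch inside any callback:

```python
def parse(self, response):
    # 'depth' is filled in by DepthMiddleware; responses for start URLs count as depth 0
    print(response.url, 'depth =', response.meta.get('depth', 0))
```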
start_requests
If, after crawling the addresses in start_urls, we do not want to run the default parse callback but some other function, we can use start_requests:
# -*- coding: utf-8 -*- import scrapy from bs4 import BeautifulSoup from scrapy.selector import HtmlXPathSelector from scrapy.http import Request from ..items import XianglongItem from scrapy.http import HtmlResponse from scrapy.http.response.html import HtmlResponse """ obj = ChoutiSpider() obj.start_requests() """ class ChoutiSpider(scrapy.Spider): name = 'chouti' allowed_domains = ['chouti.com'] start_urls = ['https://dig.chouti.com/',] def start_requests(self): for url in self.start_urls: yield Request( url=url, callback=self.parse, headers={'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'} ) def parse(self, response): """ 当起始URL下载完毕后,自动执行parse函数:response封装了响应相关的所有内容。 :param response: :return: """ pages = response.xpath('//div[@id="page-area"]//a[@class="ct_pagepa"]/@href').extract() for page_url in pages: page_url = "https://dig.chouti.com" + page_url yield Request(url=page_url,callback=self.parse,headers={'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'})
When start_urls is crawled, start_requests actually runs first and yields Request objects; by changing the parameters of those Request objects we can change the callback (and, as above, add request headers).
Selectors
We used HtmlXPathSelector above to parse the crawled content; the response object itself also has an xpath method that can parse data:

```
Turning the string into an object:
    Option 1:
        response.xpath('//div[@id="content-list"]/div[@class="item"]')
    Option 2:
        hxs = HtmlXPathSelector(response=response)
        items = hxs.xpath("//div[@id='content-list']/div[@class='item']")

Query rules:
    //a
    //div/a
    //a[re:test(@id, "i\d+")]

    items = hxs.xpath("//div[@id='content-list']/div[@class='item']")
    for item in items:
        item.xpath('.//div')

Extracting results:
    selector object: xpath('/html/body/ul/li/a/@href')
    list:            xpath('/html/body/ul/li/a/@href').extract()
    single value:    xpath('//body/ul/li/a/@href').extract_first()
```
This selector can also be used on its own:
```python
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse

html = """<!DOCTYPE html>
<html>
    <head lang="en">
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <ul>
            <li class="item-"><a id='i1' href="link.html">first item</a></li>
            <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
            <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
        </ul>
        <div><a href="llink2.html">second item</a></div>
    </body>
</html>
"""

response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
obj = response.xpath('//a[@id="i1"]/text()').extract_first()
print(obj)
```
pipelines
We said above that pipelines are used to persist data, and we only configured one class. As the weight in the settings file suggests, you can configure several pipelines; they run in order of weight, from smallest to largest.
Settings file:

```python
ITEM_PIPELINES = {
   'xianglong.pipelines.FilePipeline': 300,
   'xianglong.pipelines.DBPipeline': 301,
}
```
pipelines.py:
# -*- coding: utf-8 -*- # Define your item pipelines here # # Don't forget to add your pipeline to the ITEM_PIPELINES setting # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html """ 当根据配置文件: ITEM_PIPELINES = { 'xianglong.pipelines.FilePipeline': 300, 'xianglong.pipelines.DBPipeline': 301, } """ from scrapy.exceptions import DropItem class FilePipeline(object): def process_item(self, item, spider): print('写入文件',item.href) return item def open_spider(self, spider): """ 爬虫开始执行时,调用 :param spider: :return: """ print('打开文件') def close_spider(self, spider): """ 爬虫关闭时,被调用 :param spider: :return: """ print('关闭文件') class DBPipeline(object): def process_item(self, item, spider): print('数据库',item.href) return item def open_spider(self, spider): """ 爬虫开始执行时,调用 :param spider: :return: """ print('打开数据') def close_spider(self, spider): """ 爬虫关闭时,被调用 :param spider: :return: """ print('关闭数据库')
FilePipeline's weight of 300 is smaller than DBPipeline's 301, so FilePipeline runs first.
After FilePipeline's process_item finishes, DBPipeline's process_item runs. Note that FilePipeline's process_item returns item: that return value is the item the next pipeline class receives, i.e. what DBPipeline's process_item gets. If it returned None, DBPipeline's process_item would receive None. A pipeline can also raise a DropItem() exception, in which case the process_item of the pipelines after it is not executed at all:
```python
from scrapy.exceptions import DropItem


class FilePipeline(object):

    def process_item(self, item, spider):
        print('file', item['href'])
        raise DropItem()
```
All pipeline methods
We used three pipeline methods above; in total a pipeline has five:
# -*- coding: utf-8 -*- # Define your item pipelines here # # Don't forget to add your pipeline to the ITEM_PIPELINES setting # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html """ 当根据配置文件: ITEM_PIPELINES = { 'xianglong.pipelines.FilePipeline': 300, 'xianglong.pipelines.DBPipeline': 301, } 找到相关的类:FilePipeline之后,会优先判断类中是否含有 from_crawler 如果有: obj = FilePipeline.from_crawler() 没有则: obj = FilePipeline() obj.open_spider(..) ob.process_item(..) obj.close_spider(..) """ from scrapy.exceptions import DropItem class FilePipeline(object): def __init__(self,path): self.path = path self.f = None @classmethod def from_crawler(cls, crawler): """ 初始化时候,用于创建pipeline对象 :param crawler: :return: """ # return cls() path = crawler.settings.get('XL_FILE_PATH') return cls(path) def process_item(self, item, spider): self.f.write(item['href']+'\n') return item def open_spider(self, spider): """ 爬虫开始执行时,调用 :param spider: :return: """ self.f = open(self.path,'w') def close_spider(self, spider): """ 爬虫关闭时,被调用 :param spider: :return: """ self.f.close()
The from_crawler method simply returns an instance of the class; if you do not need anything special you can leave it out. If you want to read something from the settings file (such as the path of the file to write to), use its crawler parameter (setting names must be upper-case), pass the value to the class when instantiating it, and together with the __init__ method
you can then use it on the instance. In the example above, self.path is the file path read from the settings file (the corresponding setting is shown below).
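For the code above to find the path, the setting has to exist in settings.py; the value below is only an illustrative file name:

```python
# settings.py
XL_FILE_PATH = 'url.log'
```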
POST / request headers / cookies
So far we have only sent GET requests. How do we send a POST request and carry request headers and cookies? All of this is set on the Request object.
We can fetch the cookie manually; here is an example of logging in to Chouti and upvoting a post:
# -*- coding: utf-8 -*- import scrapy from bs4 import BeautifulSoup from scrapy.selector import HtmlXPathSelector from scrapy.http import Request from ..items import XianglongItem from scrapy.http import HtmlResponse from scrapy.http.response.html import HtmlResponse """ obj = ChoutiSpider() obj.start_requests() """ class ChoutiSpider(scrapy.Spider): name = 'chouti' allowed_domains = ['chouti.com'] start_urls = ['http://dig.chouti.com/',] cookie_dict = {} def start_requests(self): for url in self.start_urls: yield Request(url=url,callback=self.parse_index) def parse_index(self,response): # 原始cookie # print(response.headers.getlist('Set-Cookie')) # 解析后的cookie from scrapy.http.cookies import CookieJar cookie_jar = CookieJar() cookie_jar.extract_cookies(response, response.request) for k, v in cookie_jar._cookies.items(): for i, j in v.items(): for m, n in j.items(): self.cookie_dict[m] = n.value req = Request( url='http://dig.chouti.com/login', method='POST', headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}, body='phone=8613121758648&password=woshiniba&oneMonth=1', cookies=self.cookie_dict, callback=self.parse_check_login ) yield req def parse_check_login(self,response): print(response.text) yield Request( url='https://dig.chouti.com/link/vote?linksId=19440976', method='POST', cookies=self.cookie_dict, callback=self.parse_show_result ) def parse_show_result(self,response): print(response.text)
Above we used CookieJar and a loop to build the cookie dict ourselves, and added cookies, request headers, and other information to every yielded Request; the POST request also carries a body. The body sent above is in form-data format; for a JSON body, just serialize it and set the matching Content-Type (a sketch follows).
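A hedged sketch of the JSON variant, assuming the endpoint accepted a JSON body (the real Chouti login endpoint may not; the field names are copied from the form example above):

```python
# inside parse_index, instead of the urlencoded body
import json

req = Request(
    url='http://dig.chouti.com/login',
    method='POST',
    headers={'Content-Type': 'application/json; charset=UTF-8'},
    body=json.dumps({'phone': '8613121758648', 'password': 'woshiniba', 'oneMonth': 1}),
    cookies=self.cookie_dict,        # the same manually collected cookie dict as above
    callback=self.parse_check_login,
)
yield req
```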
Fetching the cookie manually like this is tedious; we can let Scrapy carry the cookie for us automatically:
# -*- coding: utf-8 -*- import scrapy from bs4 import BeautifulSoup from scrapy.selector import HtmlXPathSelector from scrapy.http import Request from ..items import XianglongItem from scrapy.http import HtmlResponse from scrapy.http.response.html import HtmlResponse """ obj = ChoutiSpider() obj.start_requests() """ class ChoutiSpider(scrapy.Spider): name = 'chouti' allowed_domains = ['chouti.com'] start_urls = ['http://dig.chouti.com/',] def start_requests(self): for url in self.start_urls: yield Request(url=url,callback=self.parse_index,meta={'cookiejar':True}) def parse_index(self,response): req = Request( url='http://dig.chouti.com/login', method='POST', headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}, body='phone=8613121758648&password=woshiniba&oneMonth=1', callback=self.parse_check_login, meta={'cookiejar': True} ) yield req def parse_check_login(self,response): # print(response.text) yield Request( url='https://dig.chouti.com/link/vote?linksId=19440976', method='POST', callback=self.parse_show_result, meta={'cookiejar': True} ) def parse_show_result(self,response): print(response.text)
As long as every Request object is given meta={'cookiejar': True}, Scrapy fetches the cookie for us and carries it on subsequent requests.
The settings file controls whether cookies may be used at all:

```python
# Disable cookies (enabled by default)
# COOKIES_ENABLED = False
```
Deduplication rules
When crawling a site, we need to record the URLs we have already crawled so that we do not crawl them again the next time we meet them.
The Request class takes a dont_filter=False parameter: False means the request goes through deduplication; if it is True the request bypasses deduplication entirely. For example:
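A minimal sketch inside a spider callback (the URL is illustrative):

```python
# this request is scheduled even if the same URL has been seen before
yield Request(url='https://dig.chouti.com/', callback=self.parse, dont_filter=True)
```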
To customize deduplication, first define it in the settings file:
DUPEFILTER_CLASS = 'xianglong.dupe.MyDupeFilter'
Once this is defined, the class below is consulted before each URL is crawled:

```python
from scrapy.dupefilter import BaseDupeFilter
from scrapy.utils.request import request_fingerprint

"""
1. The class is located from the settings: DUPEFILTER_CLASS = 'xianglong.dupe.MyDupeFilter'
2. Scrapy then checks whether from_settings exists:
       if it does:  obj = MyDupeFilter.from_settings()
       otherwise:   obj = MyDupeFilter()
"""


class MyDupeFilter(BaseDupeFilter):

    def __init__(self):
        self.record = set()

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        ident = request_fingerprint(request)
        if ident in self.record:
            print('already visited', request.url)
            return True
        self.record.add(ident)

    def open(self):
        # can return a deferred; e.g. open a redis connection here
        pass

    def close(self, reason):
        # can return a deferred; e.g. close a redis connection here
        pass
```
In this class we define an empty set. Each time a URL is about to be crawled we check whether it is in the set: if it is, it has already been visited and we return True; if not, we add it and let the request through. Note that we do not store the raw URL but the result of request_fingerprint(request), a hash of the request.
This gives every stored entry a fixed length, and it also normalizes the URL: two URLs that differ only in parameter order are really the same, which raw URL strings cannot tell you.
Creating a unique identifier for a request

```python
# http://www.oldboyedu.com?id=1&age=2
# http://www.oldboyedu.com?age=2&id=1
from scrapy.utils.request import request_fingerprint
from scrapy.http import Request

u1 = Request(url='http://www.oldboyedu.com?id=1&age=2')
u2 = Request(url='http://www.oldboyedu.com?age=2&id=1')

result1 = request_fingerprint(u1)
result2 = request_fingerprint(u2)
print(result1, result2)
# c49a0582ee359d61d0fe5f28084b2ea04106050d c49a0582ee359d61d0fe5f28084b2ea04106050d
```
As you can see, values processed with request_fingerprint have a fixed length, and URLs that differ only in parameter order produce the same value.
Should the visit records go in a database?
The visit records can be kept in Redis, using a Redis set as the storage; see the sketch below.
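A hedged sketch of such a Redis-backed dupefilter, assuming a local Redis instance, the third-party redis package, and an illustrative key name:

```python
import redis
from scrapy.dupefilter import BaseDupeFilter   # same import as the example above
from scrapy.utils.request import request_fingerprint


class RedisDupeFilter(BaseDupeFilter):

    def __init__(self):
        # assumes Redis is reachable on localhost:6379
        self.conn = redis.Redis(host='127.0.0.1', port=6379)
        self.key = 'xianglong:visited'

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        ident = request_fingerprint(request)
        # SADD returns 0 when the member was already in the set, i.e. the URL was visited
        return self.conn.sadd(self.key, ident) == 0
```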
Where exactly does dont_filter take effect?

```python
from scrapy.core.scheduler import Scheduler

def enqueue_request(self, request):
    # request.dont_filter == False:
    #     self.df.request_seen(request):
    #         True  -> already visited, drop the request
    #         False -> not visited yet, schedule it
    # request.dont_filter == True: always added to the scheduler
    if not request.dont_filter and self.df.request_seen(request):
        self.df.log(request, self.spider)
        return False
    # otherwise, push the request into the scheduler's queue
    dqok = self._dqpush(request)
```
Downloader middleware
Goal: attach a request header to every request the spider sends.
Option 1: add a headers argument to every Request object, for example:
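A minimal sketch of option 1, inside start_requests (the User-Agent string is just an example):

```python
def start_requests(self):
    for url in self.start_urls:
        yield Request(
            url=url,
            callback=self.parse,
            headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) ...'},
        )
```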
Option 2: use a downloader middleware.
Settings file:

```python
DOWNLOADER_MIDDLEWARES = {
   'xianglong.middlewares.UserAgentDownloaderMiddleware': 543,
}
```
# -*- coding: utf-8 -*- # Define here the models for your spider middleware # # See documentation in: # https://doc.scrapy.org/en/latest/topics/spider-middleware.html from scrapy import signals class UserAgentDownloaderMiddleware(object): @classmethod def from_crawler(cls, crawler): # This method is used by Scrapy to create your spiders. s = cls() return s def process_request(self, request, spider): # Called for each request that goes through the downloader # middleware. # Must either: # - return None: continue processing this request # - or return a Response object # - or return a Request object # - or raise IgnoreRequest: process_exception() methods of # installed downloader middleware will be called request.headers['User-Agent'] = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36" # return None # 继续执行后续的中间件的process_request # from scrapy.http import Request # return Request(url='www.baidu.com') # 重新放入调度器中,当前请求不再继续处理 # from scrapy.http import HtmlResponse # 执行从最后一个开始执行所有的process_response # return HtmlResponse(url='www.baidu.com',body=b'asdfuowjelrjaspdoifualskdjf;lajsdf') def process_response(self, request, response, spider): # Called with the response returned from the downloader. # Must either; # - return a Response object # - return a Request object # - or raise IgnoreRequest return response def process_exception(self, request, exception, spider): # Called when a download handler or a process_request() # (from other downloader middleware) raises an exception. # Must either: # - return None: continue processing this exception # - return a Response object: stops process_exception() chain # - return a Request object: stops process_exception() chain pass
Different return values in a middleware have different effects. In process_request, returning a Request object sends it back to the scheduler to be downloaded (so always returning a new request creates an infinite loop); returning None lets processing continue; returning a Response object skips the download and runs every middleware's process_response, starting from the last one.
Raising an exception runs process_exception. In process_response, returning a Request object sends it back to the scheduler to be downloaded again; returning a Response object lets processing continue; raising an exception runs process_exception.
Option 3: the built-in downloader middleware.
Settings file:

```python
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
```
How to add a proxy in Scrapy
Option 1: the built-in proxy support
```python
# -*- coding: utf-8 -*-
import os
import scrapy
from scrapy.http import Request


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['chouti.com']
    start_urls = ['https://dig.chouti.com/']

    def start_requests(self):
        os.environ['HTTP_PROXY'] = "http://192.168.11.11"
        for url in self.start_urls:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        print(response)
```
Just set os.environ['HTTP_PROXY'] = "http://192.168.11.11" before the requests are made and the proxy is used; the built-in HttpProxyMiddleware reads these per-scheme environment variables, so for https pages set HTTPS_PROXY instead.
However, this approach uses the same proxy IP for every request, which makes it easy to get banned.
Option 2: a custom downloader middleware
The downloader middleware:
import random import base64 import six def to_bytes(text, encoding=None, errors='strict'): """Return the binary representation of `text`. If `text` is already a bytes object, return it as-is.""" if isinstance(text, bytes): return text if not isinstance(text, six.string_types): raise TypeError('to_bytes must receive a unicode, str or bytes ' 'object, got %s' % type(text).__name__) if encoding is None: encoding = 'utf-8' return text.encode(encoding, errors) class MyProxyDownloaderMiddleware(object): def process_request(self, request, spider): proxy_list = [ {'ip_port': '111.11.228.75:80', 'user_pass': 'xxx:123'}, {'ip_port': '120.198.243.22:80', 'user_pass': ''}, {'ip_port': '111.8.60.9:8123', 'user_pass': ''}, {'ip_port': '101.71.27.120:80', 'user_pass': ''}, {'ip_port': '122.96.59.104:80', 'user_pass': ''}, {'ip_port': '122.224.249.122:8088', 'user_pass': ''}, ] proxy = random.choice(proxy_list) if proxy['user_pass'] is not None: request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port']) encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass'])) request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass) else: request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
With this approach, each request randomly picks one of the proxy IPs.
Settings file:

```python
DOWNLOADER_MIDDLEWARES = {
    'xiaohan.middlewares.MyProxyDownloaderMiddleware': 543,
}
```
How Scrapy handles HTTPS
If the site uses a purchased, trusted certificate, nothing special is needed and you can access it directly. If the site uses a self-signed HTTPS certificate, do the following.
In the middleware file (middlewares.py):

```python
from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)


class MySSLFactory(ScrapyClientContextFactory):
    def getCertificateOptions(self):
        from OpenSSL import crypto
        v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
        v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
        return CertificateOptions(
            privateKey=v1,   # pKey object
            certificate=v2,  # X509 object
            verify=False,
            method=getattr(self, 'method', getattr(self, '_ssl_method', None))
        )
```
In the settings file:

```python
DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
DOWNLOADER_CLIENTCONTEXTFACTORY = "xiaohan.middlewares.MySSLFactory"
```
What are downloader middlewares for?
They let you customize behaviour for requests and responses before and after each download, for example setting the user-agent, a proxy, or cookies.
Spider middleware
Writing one
middlewares.py class XiaohanSpiderMiddleware(object): # Not all methods need to be defined. If a method is not defined, # scrapy acts as if the spider middleware does not modify the # passed objects. def __init__(self): pass @classmethod def from_crawler(cls, crawler): # This method is used by Scrapy to create your spiders. s = cls() return s # 每次下载完成之后,未执行parse函数之前。 def process_spider_input(self, response, spider): # Called for each response that goes through the spider # middleware and into the spider. # Should return None or raise an exception. print('process_spider_input',response) return None # 执行完回调函数执行 def process_spider_output(self, response, result, spider): # Called with the results returned from the Spider, after # it has processed the response. # Must return an iterable of Request, dict or Item objects. print('process_spider_output',response) for i in result: yield i def process_spider_exception(self, response, exception, spider): # Called when a spider or process_spider_input() method # (from other spider middleware) raises an exception. # Should return either None or an iterable of Response, dict # or Item objects. pass # 爬虫启动时,第一次执行start_requests时,触发。(只执行一次) def process_start_requests(self, start_requests, spider): # Called with the start requests of the spider, and works # similarly to the process_spider_output() method, except # that it doesn’t have a response associated. # Must return only requests (not items). print('process_start_requests') for r in start_requests: yield r
Enabling it

```python
SPIDER_MIDDLEWARES = {
   'xiaohan.middlewares.XiaohanSpiderMiddleware': 543,
}
```
Signals
First create a new file:

```python
# extends.py
from scrapy import signals


class MyExtension(object):
    def __init__(self):
        pass

    @classmethod
    def from_crawler(cls, crawler):
        obj = cls()
        # When the spider opens, trigger all functions connected to the spider_opened signal: xxxxxxxxxxx1
        crawler.signals.connect(obj.xxxxxxxxxxx1, signal=signals.spider_opened)
        # When the spider closes, trigger all functions connected to the spider_closed signal: uuuuuuuuuu
        crawler.signals.connect(obj.uuuuuuuuuu, signal=signals.spider_closed)
        return obj

    def xxxxxxxxxxx1(self, spider):
        print('open')

    def uuuuuuuuuu(self, spider):
        print('close')
```
Here we define a class; when it is instantiated, handler functions are connected to the signals we want to react to.
Settings file:

```python
EXTENSIONS = {
    'xiaohan.extends.MyExtension': 500,
}
```
This setting causes the class to be instantiated, i.e. the from_crawler method above is called, and that method connects the signal handlers.
Settings file
# -*- coding: utf-8 -*- # Scrapy settings for step8_king project # # For simplicity, this file contains only settings considered important or # commonly used. You can find more settings consulting the documentation: # # http://doc.scrapy.org/en/latest/topics/settings.html # http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html # http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html # 1. 爬虫名称 BOT_NAME = 'step8_king' # 2. 爬虫应用路径 SPIDER_MODULES = ['step8_king.spiders'] NEWSPIDER_MODULE = 'step8_king.spiders' # Crawl responsibly by identifying yourself (and your website) on the user-agent # 3. 客户端 user-agent请求头 # USER_AGENT = 'step8_king (+http://www.yourdomain.com)' # Obey robots.txt rules # 4. 禁止爬虫配置 # ROBOTSTXT_OBEY = False # Configure maximum concurrent requests performed by Scrapy (default: 16) # 5. 并发请求数 # CONCURRENT_REQUESTS = 4 # Configure a delay for requests for the same website (default: 0) # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay # See also autothrottle settings and docs # 6. 延迟下载秒数 # DOWNLOAD_DELAY = 2 # The download delay setting will honor only one of: # 7. 单域名访问并发数,并且延迟下次秒数也应用在每个域名 # CONCURRENT_REQUESTS_PER_DOMAIN = 2 # 单IP访问并发数,如果有值则忽略:CONCURRENT_REQUESTS_PER_DOMAIN,并且延迟下次秒数也应用在每个IP # CONCURRENT_REQUESTS_PER_IP = 3 # Disable cookies (enabled by default) # 8. 是否支持cookie,cookiejar进行操作cookie # COOKIES_ENABLED = True # COOKIES_DEBUG = True # Disable Telnet Console (enabled by default) # 9. Telnet用于查看当前爬虫的信息,操作爬虫等... # 使用telnet ip port ,然后通过命令操作 # TELNETCONSOLE_ENABLED = True # TELNETCONSOLE_HOST = '127.0.0.1' # TELNETCONSOLE_PORT = [6023,] # 10. 默认请求头 # Override the default request headers: # DEFAULT_REQUEST_HEADERS = { # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', # 'Accept-Language': 'en', # } # Configure item pipelines # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html # 11. 定义pipeline处理请求 # ITEM_PIPELINES = { # 'step8_king.pipelines.JsonPipeline': 700, # 'step8_king.pipelines.FilePipeline': 500, # } # 12. 自定义扩展,基于信号进行调用 # Enable or disable extensions # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html # EXTENSIONS = { # # 'step8_king.extensions.MyExtension': 500, # } # 13. 爬虫允许的最大深度,可以通过meta查看当前深度;0表示无深度 # DEPTH_LIMIT = 3 # 14. 爬取时,0表示深度优先Lifo(默认);1表示广度优先FiFo # 后进先出,深度优先 # DEPTH_PRIORITY = 0 # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue' # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue' # 先进先出,广度优先 # DEPTH_PRIORITY = 1 # SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue' # SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue' # 15. 调度器队列 # SCHEDULER = 'scrapy.core.scheduler.Scheduler' # from scrapy.core.scheduler import Scheduler # 16. 访问URL去重 # DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl' # Enable and configure the AutoThrottle extension (disabled by default) # See http://doc.scrapy.org/en/latest/topics/autothrottle.html """ 17. 自动限速算法 from scrapy.contrib.throttle import AutoThrottle 自动限速设置 1. 获取最小延迟 DOWNLOAD_DELAY 2. 获取最大延迟 AUTOTHROTTLE_MAX_DELAY 3. 设置初始下载延迟 AUTOTHROTTLE_START_DELAY 4. 当请求下载完成后,获取其"连接"时间 latency,即:请求连接到接受到响应头之间的时间 5. 用于计算的... 
AUTOTHROTTLE_TARGET_CONCURRENCY target_delay = latency / self.target_concurrency new_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延迟时间 new_delay = max(target_delay, new_delay) new_delay = min(max(self.mindelay, new_delay), self.maxdelay) slot.delay = new_delay """ # 开始自动限速 # AUTOTHROTTLE_ENABLED = True # The initial download delay # 初始下载延迟 # AUTOTHROTTLE_START_DELAY = 5 # The maximum download delay to be set in case of high latencies # 最大下载延迟 # AUTOTHROTTLE_MAX_DELAY = 10 # The average number of requests Scrapy should be sending in parallel to each remote server # 平均每秒并发数 # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0 # Enable showing throttling stats for every response received: # 是否显示 # AUTOTHROTTLE_DEBUG = True # Enable and configure HTTP caching (disabled by default) # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings """ 18. 启用缓存 目的用于将已经发送的请求或相应缓存下来,以便以后使用 from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware from scrapy.extensions.httpcache import DummyPolicy from scrapy.extensions.httpcache import FilesystemCacheStorage """ # 是否启用缓存策略 # HTTPCACHE_ENABLED = True # 缓存策略:所有请求均缓存,下次在请求直接访问原来的缓存即可 # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy" # 缓存策略:根据Http响应头:Cache-Control、Last-Modified 等进行缓存的策略 # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy" # 缓存超时时间 # HTTPCACHE_EXPIRATION_SECS = 0 # 缓存保存路径 # HTTPCACHE_DIR = 'httpcache' # 缓存忽略的Http状态码 # HTTPCACHE_IGNORE_HTTP_CODES = [] # 缓存存储的插件 # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage' """ 19. 代理,需要在环境变量中设置 from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware 方式一:使用默认 os.environ { http_proxy:http://root:woshiniba@192.168.11.11:9999/ https_proxy:http://192.168.11.11:9999/ } 方式二:使用自定义下载中间件 def to_bytes(text, encoding=None, errors='strict'): if isinstance(text, bytes): return text if not isinstance(text, six.string_types): raise TypeError('to_bytes must receive a unicode, str or bytes ' 'object, got %s' % type(text).__name__) if encoding is None: encoding = 'utf-8' return text.encode(encoding, errors) class ProxyMiddleware(object): def process_request(self, request, spider): PROXIES = [ {'ip_port': '111.11.228.75:80', 'user_pass': ''}, {'ip_port': '120.198.243.22:80', 'user_pass': ''}, {'ip_port': '111.8.60.9:8123', 'user_pass': ''}, {'ip_port': '101.71.27.120:80', 'user_pass': ''}, {'ip_port': '122.96.59.104:80', 'user_pass': ''}, {'ip_port': '122.224.249.122:8088', 'user_pass': ''}, ] proxy = random.choice(PROXIES) if proxy['user_pass'] is not None: request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port']) encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass'])) request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass) print "**************ProxyMiddleware have pass************" + proxy['ip_port'] else: print "**************ProxyMiddleware no pass************" + proxy['ip_port'] request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port']) DOWNLOADER_MIDDLEWARES = { 'step8_king.middlewares.ProxyMiddleware': 500, } """ """ 20. Https访问 Https访问时有两种情况: 1. 要爬取网站使用的可信任证书(默认支持) DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory" DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory" 2. 
要爬取网站使用的自定义证书 DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory" DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory" # https.py from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate) class MySSLFactory(ScrapyClientContextFactory): def getCertificateOptions(self): from OpenSSL import crypto v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read()) v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read()) return CertificateOptions( privateKey=v1, # pKey对象 certificate=v2, # X509对象 verify=False, method=getattr(self, 'method', getattr(self, '_ssl_method', None)) ) 其他: 相关类 scrapy.core.downloader.handlers.http.HttpDownloadHandler scrapy.core.downloader.webclient.ScrapyHTTPClientFactory scrapy.core.downloader.contextfactory.ScrapyClientContextFactory 相关配置 DOWNLOADER_HTTPCLIENTFACTORY DOWNLOADER_CLIENTCONTEXTFACTORY """ """ 21. 爬虫中间件 class SpiderMiddleware(object): def process_spider_input(self,response, spider): ''' 下载完成,执行,然后交给parse处理 :param response: :param spider: :return: ''' pass def process_spider_output(self,response, result, spider): ''' spider处理完成,返回时调用 :param response: :param result: :param spider: :return: 必须返回包含 Request 或 Item 对象的可迭代对象(iterable) ''' return result def process_spider_exception(self,response, exception, spider): ''' 异常调用 :param response: :param exception: :param spider: :return: None,继续交给后续中间件处理异常;含 Response 或 Item 的可迭代对象(iterable),交给调度器或pipeline ''' return None def process_start_requests(self,start_requests, spider): ''' 爬虫启动时调用 :param start_requests: :param spider: :return: 包含 Request 对象的可迭代对象 ''' return start_requests 内置爬虫中间件: 'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50, 'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500, 'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700, 'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800, 'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900, """ # from scrapy.contrib.spidermiddleware.referer import RefererMiddleware # Enable or disable spider middlewares # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html SPIDER_MIDDLEWARES = { # 'step8_king.middlewares.SpiderMiddleware': 543, } """ 22. 
下载中间件 class DownMiddleware1(object): def process_request(self, request, spider): ''' 请求需要被下载时,经过所有下载器中间件的process_request调用 :param request: :param spider: :return: None,继续后续中间件去下载; Response对象,停止process_request的执行,开始执行process_response Request对象,停止中间件的执行,将Request重新调度器 raise IgnoreRequest异常,停止process_request的执行,开始执行process_exception ''' pass def process_response(self, request, response, spider): ''' spider处理完成,返回时调用 :param response: :param result: :param spider: :return: Response 对象:转交给其他中间件process_response Request 对象:停止中间件,request会被重新调度下载 raise IgnoreRequest 异常:调用Request.errback ''' print('response1') return response def process_exception(self, request, exception, spider): ''' 当下载处理器(download handler)或 process_request() (下载中间件)抛出异常 :param response: :param exception: :param spider: :return: None:继续交给后续中间件处理异常; Response对象:停止后续process_exception方法 Request对象:停止中间件,request将会被重新调用下载 ''' return None 默认下载中间件 { 'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100, 'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300, 'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350, 'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400, 'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500, 'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550, 'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580, 'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590, 'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600, 'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700, 'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750, 'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830, 'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850, 'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900, } """ # from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware # Enable or disable downloader middlewares # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html # DOWNLOADER_MIDDLEWARES = { # 'step8_king.middlewares.DownMiddleware1': 100, # 'step8_king.middlewares.DownMiddleware2': 500, # }
Custom commands
First create a commands directory and create a file in it; the file name is the command name.
crawlall.py
```python
from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings


class Command(ScrapyCommand):

    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()
```
Now running scrapy crawlall starts every spider file in the project. The calls self.crawler_process.crawl(name, **opts.__dict__) and self.crawler_process.start() are the entry point of a crawl.
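One detail the steps above rely on: Scrapy only discovers a custom commands package if it is registered in settings.py via COMMANDS_MODULE; adjust the module path to your own project name:

```python
# settings.py
COMMANDS_MODULE = 'xianglong.commands'
```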
The text returned by short_desc is the description shown for the command by scrapy --help.
Source:

```python
def run(self, args, opts):
    from scrapy.crawler import CrawlerProcess
    CrawlerProcess.crawl
    CrawlerProcess.start
    """
    the self.crawler_process object contains: _active = {d,}
    """
    self.crawler_process.crawl('chouti', **opts.__dict__)
    self.crawler_process.crawl('cnblogs', **opts.__dict__)
    # self.crawler_process.start()
```
TinyScrapy
from twisted.web.client import getPage from twisted.internet import reactor from twisted.internet import defer url_list = ['http://www.bing.com', 'http://www.baidu.com', ] def callback(arg): print('回来一个', arg) defer_list = [] for url in url_list: ret = getPage(bytes(url, encoding='utf8')) ret.addCallback(callback) defer_list.append(ret) def stop(arg): print('已经全部现在完毕', arg) reactor.stop() d = defer.DeferredList(defer_list) d.addBoth(stop) reactor.run()
#!/usr/bin/env python # -*- coding:utf-8 -*- from twisted.web.client import getPage from twisted.internet import reactor from twisted.internet import defer @defer.inlineCallbacks def task(url): ret = getPage(bytes(url, encoding='utf8')) ret.addCallback(callback) yield ret def callback(arg): print('回来一个', arg) url_list = ['http://www.bing.com', 'http://www.baidu.com', ] defer_list = [] for url in url_list: ret = task(url) defer_list.append(ret) def stop(arg): print('已经全部现在完毕', arg) reactor.stop() d = defer.DeferredList(defer_list) d.addBoth(stop) reactor.run()
#!/usr/bin/env python # -*- coding:utf-8 -*- from twisted.internet import defer from twisted.web.client import getPage from twisted.internet import reactor import threading def _next_request(): _next_request_from_scheduler() def _next_request_from_scheduler(): ret = getPage(bytes('http://www.chouti.com', encoding='utf8')) ret.addCallback(callback) ret.addCallback(lambda _: reactor.callLater(0, _next_request)) _closewait = None @defer.inlineCallbacks def engine_start(): global _closewait _closewait = defer.Deferred() yield _closewait @defer.inlineCallbacks def task(url): reactor.callLater(0, _next_request) yield engine_start() counter = 0 def callback(arg): global counter counter +=1 if counter == 10: _closewait.callback(None) print('one', len(arg)) def stop(arg): print('all done', arg) reactor.stop() if __name__ == '__main__': url = 'http://www.cnblogs.com' defer_list = [] deferObj = task(url) defer_list.append(deferObj) v = defer.DeferredList(defer_list) v.addBoth(stop) reactor.run()
#!/usr/bin/env python # -*- coding:utf-8 -*- from twisted.web.client import getPage, defer from twisted.internet import reactor import queue class Response(object): def __init__(self, body, request): self.body = body self.request = request self.url = request.url @property def text(self): return self.body.decode('utf-8') class Request(object): def __init__(self, url, callback=None): self.url = url self.callback = callback class Scheduler(object): def __init__(self, engine): self.q = queue.Queue() self.engine = engine def enqueue_request(self, request): self.q.put(request) def next_request(self): try: req = self.q.get(block=False) except Exception as e: req = None return req def size(self): return self.q.qsize() class ExecutionEngine(object): def __init__(self): self._closewait = None self.running = True self.start_requests = None self.scheduler = Scheduler(self) self.inprogress = set() def check_empty(self, response): if not self.running: self._closewait.callback('......') def _next_request(self): while self.start_requests: try: request = next(self.start_requests) except StopIteration: self.start_requests = None else: self.scheduler.enqueue_request(request) while len(self.inprogress) < 5 and self.scheduler.size() > 0: # 最大并发数为5 request = self.scheduler.next_request() if not request: break self.inprogress.add(request) d = getPage(bytes(request.url, encoding='utf-8')) d.addBoth(self._handle_downloader_output, request) d.addBoth(lambda x, req: self.inprogress.remove(req), request) d.addBoth(lambda x: self._next_request()) if len(self.inprogress) == 0 and self.scheduler.size() == 0: self._closewait.callback(None) def _handle_downloader_output(self, body, request): """ 获取内容,执行回调函数,并且把回调函数中的返回值获取,并添加到队列中 :param response: :param request: :return: """ import types response = Response(body, request) func = request.callback or self.spider.parse gen = func(response) if isinstance(gen, types.GeneratorType): for req in gen: self.scheduler.enqueue_request(req) @defer.inlineCallbacks def start(self): self._closewait = defer.Deferred() yield self._closewait def open_spider(self, spider, start_requests): self.start_requests = start_requests self.spider = spider reactor.callLater(0, self._next_request) class Crawler(object): def __init__(self, spidercls): self.spidercls = spidercls self.spider = None self.engine = None @defer.inlineCallbacks def crawl(self): self.engine = ExecutionEngine() self.spider = self.spidercls() start_requests = iter(self.spider.start_requests()) start_requests = iter(start_requests) self.engine.open_spider(self.spider, start_requests) yield self.engine.start() class CrawlerProcess(object): def __init__(self): self._active = set() self.crawlers = set() def crawl(self, spidercls, *args, **kwargs): crawler = Crawler(spidercls) self.crawlers.add(crawler) d = crawler.crawl(*args, **kwargs) self._active.add(d) return d def start(self): dl = defer.DeferredList(self._active) dl.addBoth(self._stop_reactor) reactor.run() def _stop_reactor(self, _=None): reactor.stop() class Spider(object): def start_requests(self): for url in self.start_urls: yield Request(url) class ChoutiSpider(Spider): name = "chouti" start_urls = [ 'http://dig.chouti.com/', ] def parse(self, response): print(response.text) class CnblogsSpider(Spider): name = "cnblogs" start_urls = [ 'http://www.cnblogs.com/', ] def parse(self, response): print(response.text) if __name__ == '__main__': spider_cls_list = [ChoutiSpider, CnblogsSpider] crawler_process = CrawlerProcess() for spider_cls in spider_cls_list: 
crawler_process.crawl(spider_cls) crawler_process.start()
#!/usr/bin/env python # -*- coding:utf-8 -*- import types from twisted.internet import defer from twisted.web.client import getPage from twisted.internet import reactor class Request(object): def __init__(self, url, callback): self.url = url self.callback = callback self.priority = 0 class HttpResponse(object): def __init__(self, content, request): self.content = content self.request = request class ChouTiSpider(object): def start_requests(self): url_list = ['http://www.cnblogs.com/', 'http://www.bing.com'] for url in url_list: yield Request(url=url, callback=self.parse) def parse(self, response): print(response.request.url) # yield Request(url="http://www.baidu.com", callback=self.parse) from queue import Queue Q = Queue() class CallLaterOnce(object): def __init__(self, func, *a, **kw): self._func = func self._a = a self._kw = kw self._call = None def schedule(self, delay=0): if self._call is None: self._call = reactor.callLater(delay, self) def cancel(self): if self._call: self._call.cancel() def __call__(self): self._call = None return self._func(*self._a, **self._kw) class Engine(object): def __init__(self): self.nextcall = None self.crawlling = [] self.max = 5 self._closewait = None def get_response(self,content, request): response = HttpResponse(content, request) gen = request.callback(response) if isinstance(gen, types.GeneratorType): for req in gen: req.priority = request.priority + 1 Q.put(req) def rm_crawlling(self,response,d): self.crawlling.remove(d) def _next_request(self,spider): if Q.qsize() == 0 and len(self.crawlling) == 0: self._closewait.callback(None) if len(self.crawlling) >= 5: return while len(self.crawlling) < 5: try: req = Q.get(block=False) except Exception as e: req = None if not req: return d = getPage(req.url.encode('utf-8')) self.crawlling.append(d) d.addCallback(self.get_response, req) d.addCallback(self.rm_crawlling,d) d.addCallback(lambda _: self.nextcall.schedule()) @defer.inlineCallbacks def crawl(self): spider = ChouTiSpider() start_requests = iter(spider.start_requests()) flag = True while flag: try: req = next(start_requests) Q.put(req) except StopIteration as e: flag = False self.nextcall = CallLaterOnce(self._next_request,spider) self.nextcall.schedule() self._closewait = defer.Deferred() yield self._closewait @defer.inlineCallbacks def pp(self): yield self.crawl() _active = set() obj = Engine() d = obj.crawl() _active.add(d) li = defer.DeferredList(_active) li.addBoth(lambda _,*a,**kw: reactor.stop()) reactor.run()