Scrapy crawlers: resuming interrupted crawls and running multiple spiders at once
# crawlall.py -- a custom Scrapy command that runs every spider in the project
# Resumable crawl: scrapy crawl spider_name -s JOBDIR=crawls/spider_name
# Run all spiders with: scrapy crawlall
from scrapy.commands import ScrapyCommand


class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        # crawler_process.spiders is the spider loader (exposed as
        # crawler_process.spider_loader in newer Scrapy versions).
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            # Schedule a crawl for every spider, then start them all at once.
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()
Running multiple spiders at the same time
Create a commands directory for custom commands (it must be a Python package, so include an empty __init__.py) and add crawlall.py with the code shown at the top of this post.
For scrapy crawlall to be recognized, configure COMMANDS_MODULE = 'project.commands' in settings.py, where project is your project's package name.
Then run the command: scrapy crawlall
How it works: the command loads the user-initialized crawler_process.spiders to get the name of every spider in the project, then iterates over that list, schedules a crawl for each one, and starts the crawler process.
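For reference, a minimal sketch of the assumed layout and configuration (myproject is a hypothetical project package name; only the COMMANDS_MODULE line is strictly required in settings.py):

# Assumed project layout (hypothetical names):
#
#   myproject/
#       scrapy.cfg
#       myproject/
#           settings.py
#           commands/
#               __init__.py        # empty file, makes commands a package
#               crawlall.py        # the Command class shown above
#           spiders/
#               ...
#
# settings.py -- register the custom command package so Scrapy finds crawlall
COMMANDS_MODULE = 'myproject.commands'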
Resuming an interrupted crawl
Resumable crawl: run this command in the terminal
scrapy crawl spider_name -s JOBDIR=crawls/spider_name
This records the crawl state (a checkpoint) under the crawls directory; running the same command again later resumes the crawl from that checkpoint.
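JOBDIR is also a regular Scrapy setting, so instead of passing -s on every run it can be pinned inside a spider. A minimal sketch, assuming a hypothetical spider called example:

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['https://example.com']

    # JOBDIR persists the scheduler queue and seen-request fingerprints to disk,
    # so a run that is stopped (e.g. with a single Ctrl-C) can be resumed by
    # starting the spider again with the same JOBDIR.
    custom_settings = {'JOBDIR': 'crawls/example'}

    def parse(self, response):
        yield {'title': response.css('title::text').get()}

Note that a JOBDIR directory must not be shared by different spiders, or by different concurrent runs of the same spider, otherwise the stored state gets mixed up.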
For details, see the developer documentation:
https://doc.scrapy.org/en/latest/topics/jobs.html?highlight=jobdir