Scrapy: code snippet for starting multiple spiders in sequence (Python 3)
Problem: when running Scrapy, how do you start several spiders in a fixed order?
Background: spider A scrapes dynamic proxy IPs, and spider B uses the proxies A collected to disguise itself while scraping the real target. A must therefore finish before B starts. How do you enforce that?
IDE: PyCharm
Version: Python 3
Framework: Scrapy
OS: Windows 10
Code (adapt the project-specific imports to your own project):
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings
from torrentSpider.spiders.proxy_ip_spider import ProxyIpSpider
from torrentSpider.spiders.douban_spider import DoubanSpider

'''
Running multiple spiders in sequence
'''
configure_logging()
# Pass in the project settings, otherwise the configuration will not take effect;
# get_project_settings() reads settings.py
runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(ProxyIpSpider)  # spider A: collect proxy IPs
    yield runner.crawl(DoubanSpider)   # spider B: starts only after A has finished
    reactor.stop()

crawl()
reactor.run()  # the script will block here until the last crawl call is finished

'''
Running a single spider
'''
# from scrapy import cmdline
#
# def execute():
#     cmdline.execute(['scrapy', 'crawl', 'proxy_ip_spider'])
#
# execute()
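For readers more familiar with asyncio than Twisted, the @defer.inlineCallbacks pattern above is essentially sequential awaiting: each yield suspends until that crawl's Deferred fires, so the second spider cannot start early. This is only an analogy using stdlib asyncio, with crawl() standing in for runner.crawl(SpiderCls), not the Scrapy API itself:

```python
import asyncio

order = []

async def crawl(name):
    # Stand-in for runner.crawl(SpiderCls): the await guarantees this
    # "crawl" completes before control returns to the caller.
    await asyncio.sleep(0)
    order.append(name)

async def main():
    # Mirrors the @defer.inlineCallbacks body: each line waits for completion.
    await crawl("proxy_ip_spider")
    await crawl("douban_spider")

asyncio.run(main())
print(order)  # → ['proxy_ip_spider', 'douban_spider']
```

The key point in both versions is the same: without the yield/await, the two crawls would be scheduled concurrently and the ordering guarantee would be lost.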
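Running A before B only solves the ordering; Scrapy does not pass data between spiders automatically, so A's proxies still need to reach B somehow. One common approach (an assumption here, not shown in the original code) is for A's item pipeline to persist the proxies to a file that B's downloader middleware reads at startup. A minimal stdlib sketch of that handoff, with hypothetical function names and file path:

```python
import json
import os
import random
import tempfile

# Hypothetical handoff file; in a real project this path would live in settings.py.
PROXY_FILE = os.path.join(tempfile.gettempdir(), "proxies.json")

def save_proxies(proxies):
    """What spider A's item pipeline would do: persist the scraped proxy list."""
    with open(PROXY_FILE, "w", encoding="utf-8") as f:
        json.dump(proxies, f)

def pick_proxy():
    """What spider B's downloader middleware would do: load the list and
    pick a random proxy to set as request.meta['proxy']."""
    with open(PROXY_FILE, encoding="utf-8") as f:
        proxies = json.load(f)
    return random.choice(proxies)

save_proxies(["http://1.2.3.4:8080", "http://5.6.7.8:3128"])
print(pick_proxy())
```

Because the runner awaits A before starting B, the file is guaranteed to be fully written by the time B's middleware reads it.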