How to start Scrapy spiders from a script
As everyone knows, running scrapy crawl yourspidername on the command line starts the spider named yourspidername in the project. In a Python script, you can call the cmdline module to run that same command line:
$ cat yourspider1start.py
import sys
import os
import subprocess
from scrapy import cmdline

# Method 1: pass the whole command line to cmdline.execute
cmdline.execute('scrapy crawl yourspidername'.split())

# Method 2: cmdline.execute() reads sys.argv when called with no arguments
sys.argv = ['scrapy', 'crawl', 'yourspidername']
cmdline.execute()

# Method 3: run the command in a subshell; os.system blocks until the external
# program exits and only returns its exit status (0 means success)
os.system('scrapy crawl yourspidername')

# Method 4: spawn a child process without blocking the script
subprocess.Popen('scrapy crawl yourspidername'.split())
Of methods 3 and 4, subprocess is the recommended choice. The subprocess module is intended to replace several older modules and functions, such as:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
The poll() method of the Popen object it returns lets you check whether the child process has finished.
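For example, here is a minimal sketch of method 4 combined with poll(), assuming the project contains a spider named yourspidername:
import time
import subprocess

# launch the crawl as a child process without blocking the script
proc = subprocess.Popen('scrapy crawl yourspidername'.split())
while proc.poll() is None:  # poll() returns None while the child is still running
    time.sleep(1)
print('spider finished, exit code:', proc.returncode)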
We can also use a shell script to launch all the spiders directly, starting a new round every 2 seconds:
$ cat startspiders.sh
#!/usr/bin/env bash
# $1 is the number of rounds of spiders to launch
count=0
while [ $count -lt $1 ];
do
    sleep 2
    # start each spider script in the background and discard its output
    nohup python yourspider1start.py >/dev/null 2>&1 &
    nohup python yourspider2start.py >/dev/null 2>&1 &
    let count+=1
done
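For example, assuming both start scripts are in the current directory, the following launches three rounds of both spiders, one round every 2 seconds:
$ bash startspiders.sh 3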
All of the methods above essentially just invoke the Scrapy command line. How can we start spiders programmatically, by calling Scrapy's internal APIs?
The official documentation provides two Scrapy utilities for this:
- scrapy.crawler.CrawlerRunner, runs crawlers inside an already set up Twisted reactor
- scrapy.crawler.CrawlerProcess, a subclass of CrawlerRunner that runs the Twisted reactor for you
Scrapy is built on the Twisted asynchronous networking library, and CrawlerRunner and CrawlerProcess let us start Scrapy from inside a Twisted reactor. Using CrawlerRunner directly gives finer-grained control over the crawler, but you have to register the callback that stops the Twisted reactor yourself after the crawl finishes. If you do not intend to run another Twisted reactor in your application, the subclass CrawlerProcess is the better fit.
Below are simple usage examples based on the documentation:
# encoding: utf-8
__author__ = 'fengshenjie'
from twisted.internet import reactor
from scrapy.utils.project import get_project_settings

def run1_single_spider():
    '''Run a spider outside a project:
    only the spider itself runs, the project pipelines are not used.'''
    from scrapy.crawler import CrawlerProcess
    from scrapy_test1.spiders import myspider1  # myspider1 is a Spider subclass
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })
    process.crawl(myspider1)
    process.start()  # the script will block here until the crawling is finished

def run2_inside_scrapy():
    '''Run with the project settings, so the pipelines are enabled.'''
    from scrapy.crawler import CrawlerProcess
    process = CrawlerProcess(get_project_settings())
    process.crawl('spidername')  # the name attribute of a spider in the project
    process.start()

def spider_closing(arg):
    print('spider closed')
    reactor.stop()

def run3_crawlerRunner():
    '''If your application already uses Twisted, use CrawlerRunner instead of CrawlerProcess.
    Note that you will also have to shut down the Twisted reactor yourself after the
    spider is finished. This can be achieved by adding callbacks to the deferred
    returned by the CrawlerRunner.crawl method.
    '''
    from scrapy.crawler import CrawlerRunner
    runner = CrawlerRunner(get_project_settings())
    # 'spidername' is the name of one of the spiders of the project.
    d = runner.crawl('spidername')
    # stop the reactor when the spider closes
    # d.addBoth(lambda _: reactor.stop())
    d.addBoth(spider_closing)  # equivalent to the lambda above
    reactor.run()  # the script will block here until the crawling is finished

def run4_multiple_spider():
    from scrapy.crawler import CrawlerProcess
    process = CrawlerProcess()
    from scrapy_test1.spiders import myspider1, myspider2
    for s in [myspider1, myspider2]:
        process.crawl(s)
    process.start()

def run5_multiplespider():
    '''using CrawlerRunner'''
    from twisted.internet import reactor
    from scrapy.crawler import CrawlerRunner
    from scrapy.utils.log import configure_logging
    configure_logging()
    runner = CrawlerRunner()
    from scrapy_test1.spiders import myspider1, myspider2
    for s in [myspider1, myspider2]:
        runner.crawl(s)
    d = runner.join()  # d fires when all crawls have finished
    d.addBoth(lambda _: reactor.stop())
    reactor.run()  # the script will block here until all crawling jobs are finished

def run6_multiplespider():
    '''Run the spiders sequentially by chaining the deferreds.'''
    from twisted.internet import reactor, defer
    from scrapy.crawler import CrawlerRunner
    from scrapy.utils.log import configure_logging
    configure_logging()
    runner = CrawlerRunner()

    @defer.inlineCallbacks
    def crawl():
        from scrapy_test1.spiders import myspider1, myspider2
        for s in [myspider1, myspider2]:
            yield runner.crawl(s)
        reactor.stop()

    crawl()
    reactor.run()  # the script will block here until the last crawl call is finished

if __name__ == '__main__':
    # run4_multiple_spider()
    # run5_multiplespider()
    run6_multiplespider()
References
- Running Scrapy spiders programmatically, based on Scrapy 1.0
Author: kakashis
Contact: fengshenjiev[AT]gmail.com
Copyright belongs to the author. Reposting, adaptation, and commercial use are welcome, provided the source (including a link) is credited.