Distributed crawling with scrapy_redis

Source: https://github.com/rmax/scrapy-redis

Scrapy-Redis

Requirements

  • Python 2.7, 3.4 or 3.5
  • Redis >= 2.8
  • Scrapy >= 1.1
  • redis-py >= 2.10

Usage

Use the following settings in your project:

# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Ensure all spiders share same duplicates filter through redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Default requests serializer is pickle, but it can be changed to any module
# with loads and dumps functions. Note that pickle is not compatible between
# python versions.
# Caveat: In python 3.x, the serializer must return strings keys and support
# bytes as values. Because of this reason the json or msgpack module will not
# work by default. In python 2.x there is no such issue and you can use
# 'json' or 'msgpack' as serializers.
#SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"

# Don't clean up redis queues; this allows pausing/resuming crawls.
#SCHEDULER_PERSIST = True

# Schedule requests using a priority queue. (default)
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'

# Alternative queues.
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'

# Max idle time to prevent the spider from being closed when distributed crawling.
# This only works if the queue class is SpiderQueue or SpiderStack,
# and it may also block for the same amount of time when your spider starts for
# the first time (because the queue is empty).
#SCHEDULER_IDLE_BEFORE_CLOSE = 10

# Store scraped item in redis for post-processing.
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300
}

# The item pipeline serializes and stores the items in this redis key.
#REDIS_ITEMS_KEY = '%(spider)s:items'

# The items serializer is by default ScrapyJSONEncoder. You can use any
# importable path to a callable object.
#REDIS_ITEMS_SERIALIZER = 'json.dumps'

# Specify the host and port to use when connecting to Redis (optional).
#REDIS_HOST = 'localhost'
#REDIS_PORT = 6379

# Specify the full Redis URL for connecting (optional).
# If set, this takes precedence over the REDIS_HOST and REDIS_PORT settings.
#REDIS_URL = 'redis://user:pass@hostname:9001'

# Custom redis client parameters (i.e.: socket timeout, etc.)
#REDIS_PARAMS  = {}
# Use custom redis client class.
#REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'

# If True, it uses redis' ``SPOP`` operation. You have to use the ``SADD``
# command to add URLs to the redis queue. This could be useful if you
# want to avoid duplicates in your start urls list and the order of
# processing does not matter.
#REDIS_START_URLS_AS_SET = False

# Default start urls key for RedisSpider and RedisCrawlSpider.
#REDIS_START_URLS_KEY = '%(name)s:start_urls'

# Use other encoding than utf-8 for redis.
#REDIS_ENCODING = 'latin1'

Feeding a Spider from Redis

The class scrapy_redis.spiders.RedisSpider enables a spider to read the urls from redis. The urls in the redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another url from redis.

For example, create a file myspider.py with the code below:

from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'

    def parse(self, response):
        # do stuff
        pass

Then:

  1. run the spider:

scrapy runspider myspider.py

  2. push urls to redis:

redis-cli lpush myspider:start_urls http://google.com

Note

These spiders rely on the spider idle signal to fetch start urls, hence there may be a few seconds of delay between the time you push a new url and the time the spider starts crawling it.

The principle of distributed crawling is to share the scheduler (the request queue); scrapy_redis is used to replace Scrapy's default scheduler with a redis-backed one.

The name defined in the spider file under the spiders folder determines the key you lpush to in redis (by default %(name)s:start_urls). If that key is not empty, the spider fetches the corresponding urls from redis by key and runs the crawl.
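
A quick way to check this from redis-cli, reusing the myspider name from the README example above (the url is just a placeholder):

redis-cli lpush myspider:start_urls http://example.com
redis-cli llen myspider:start_urls    # how many urls are still waiting in the queue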


Implementing a distributed crawler

1. A running redis service
2. Make sure the scrapy-redis environment is installed (see the install command after this list)

3. Confirm that the spider runs correctly as a normal single-machine scrapy project

4. Configure the settings.py file
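
For step 2, scrapy-redis is normally installed with pip (a common approach; the original post does not specify the install method):

pip install scrapy-redis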

The settings.py configuration is as follows:

# url fingerprint duplicate filter
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# scheduler
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# keep the redis queues, so the crawl can be paused and resumed
SCHEDULER_PERSIST = True
SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderQueue"  # FIFO queue mode (alias of FifoQueue)
# redis connection settings
REDIS_HOST = '10.15.112.29'
REDIS_PORT = 6379
# password and database number
REDIS_PARAMS = {
    'password': '12324345',
    'db': 0,
}
# redis item pipeline; give it a relatively large priority number so it runs last
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,  # automatically pushes scraped items into redis
}
SCHEDULER handles task distribution and scheduling: all start requests are placed in redis, and every crawler node reads its requests from redis. DUPEFILTER_CLASS is the deduplication filter responsible for deduplicating all requests. REDIS_START_URLS_AS_SET stores the start urls in a redis set (which gives simple deduplication); if you do not set it, a list is used by default.
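
A minimal sketch of the set-based variant (the key and url below are placeholders): enable the option in settings.py

REDIS_START_URLS_AS_SET = True

and then seed start urls with SADD instead of LPUSH:

redis-cli sadd myspider:start_urls http://example.com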

5. Next, set up your spider file.
Find the spider you wrote under the spiders folder,

then import:

from scrapy_redis.spiders import RedisCrawlSpider

Your spider class must inherit from RedisCrawlSpider:

class BosszpSpider(RedisCrawlSpider):

Then set the spider's attributes.
Set the spider's redis_key (you choose this name yourself; it must match the key name used in redis):

redis_key = 'boss_url'
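
Putting these pieces together, a minimal sketch of such a spider file could look like the following (the allowed domain, crawl rule and parsing logic are placeholders, not the original author's code):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from scrapy_redis.spiders import RedisCrawlSpider


class BosszpSpider(RedisCrawlSpider):
    name = 'bosszp'                    # spider name (placeholder)
    allowed_domains = ['example.com']  # placeholder domain
    redis_key = 'boss_url'             # the key that will be lpush'ed in redis

    rules = (
        # follow every extracted link and hand the response to parse_item (placeholder rule)
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # placeholder parsing logic
        yield {'url': response.url, 'title': response.css('title::text').extract_first()}

Note that there is no start_urls attribute: the start requests come from the redis key instead.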

Next, connect to redis.

The command to start the redis server is redis-server ./redis.conf

The key name is the one you set in the spider's redis_key attribute, and its value is the url of your first request.
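
For the redis_key = 'boss_url' set above, seeding the first request from redis-cli could look like this (the url is a placeholder for whatever site you are crawling):

redis-cli lpush boss_url https://www.example.com/jobs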

Once everything is configured, you can start your spider.

Command: scrapy runspider <spider filename>
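
If, for instance, the spider above is saved as bosszp.py (a hypothetical filename), that would be:

scrapy runspider bosszp.py

Running the same command on several machines that all point at the same REDIS_HOST is what makes the crawl distributed: every node shares one request queue and one duplicate filter in redis.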
