Python Crawler: Setting a Random Request Interval in Scrapy


Original post: https://blog.csdn.net/mouday/article/details/81512748

Scrapy has a setting, DOWNLOAD_DELAY (or the spider attribute download_delay), for adding a download delay, but its value is fixed when the Spider is initialized and cannot be changed while the crawl is running.
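For reference, that fixed delay is normally configured in settings.py; a minimal illustrative example (the value 2 is arbitrary):

# settings.py -- the built-in fixed delay (illustrative)
DOWNLOAD_DELAY = 2   # Scrapy waits about 2 seconds between requests to the same site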

A random delay lowers the risk of getting your IP banned.

Code example:

random_delay_middleware.py

# -*- coding:utf-8 -*-

import logging
import random
import time


class RandomDelayMiddleware(object):
    """Downloader middleware that sleeps a random number of seconds before each request."""

    def __init__(self, delay):
        self.delay = delay

    @classmethod
    def from_crawler(cls, crawler):
        # Read the upper bound of the random delay from the settings (default: 10 seconds).
        delay = crawler.settings.get("RANDOM_DELAY", 10)
        if not isinstance(delay, int):
            raise ValueError("RANDOM_DELAY must be an int")
        return cls(delay)

    def process_request(self, request, spider):
        # Sleep for a random 0..RANDOM_DELAY seconds, then let the request continue.
        delay = random.randint(0, self.delay)
        logging.debug("### random delay: %s s ###", delay)
        time.sleep(delay)

Usage:

custom_settings = {
    "RANDOM_DELAY": 3,
    "DOWNLOADER_MIDDLEWARES": {
        "middlewares.random_delay_middleware.RandomDelayMiddleware": 999,
    },
}
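For context, custom_settings is a class attribute of the spider, so the dict above sits inside a scrapy.Spider subclass. A minimal sketch, where the spider name, start URL, and parse logic are placeholders for illustration:

# example_spider.py -- illustrative only; spider name and URL are placeholders
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    # Per-spider settings: enable the random-delay middleware and set its range.
    custom_settings = {
        "RANDOM_DELAY": 3,
        "DOWNLOADER_MIDDLEWARES": {
            "middlewares.random_delay_middleware.RandomDelayMiddleware": 999,
        },
    }

    def parse(self, response):
        # Each request scheduled from here gets an extra 0-3 second delay.
        self.logger.info("Got %s", response.url)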

Notes:
RANDOM_DELAY: the range of the extra random delay, [0, RANDOM_DELAY] seconds.
For example, with the value 3 set above, the random delay falls in [0, 3].
If DOWNLOAD_DELAY is also set, the total delay is the sum of the two:

total_delay = DOWNLOAD_DELAY + RANDOM_DELAY

More precisely, since the random part can land anywhere from 0 to RANDOM_DELAY (inclusive):

DOWNLOAD_DELAY <= total_delay <= DOWNLOAD_DELAY + RANDOM_DELAY
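As an illustration, a project-wide configuration in settings.py might look like the sketch below (assuming the same middleware module path as above; the numbers are arbitrary). Following the formula, each request then waits roughly 2 to 5 seconds in total:

# settings.py -- illustrative sketch; adjust the module path to your project
DOWNLOAD_DELAY = 2        # fixed delay applied by Scrapy's downloader

RANDOM_DELAY = 3          # extra random delay added by the middleware, 0-3 s

DOWNLOADER_MIDDLEWARES = {
    "middlewares.random_delay_middleware.RandomDelayMiddleware": 999,
}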