Useful Scrapy techniques (proxies, user-agent, random delay, etc.)

Proxies

Method 1 (untested)

See scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware.

import os

from scrapy.spiders import CrawlSpider

# Set the proxy username, password, host and port via the environment
os.environ["http_proxy"] = "http://user:password@proxy.internal.server.com:8080"


class YourCrawlSpider(CrawlSpider):
    """"""
The simplest and most direct approach: set the environment variable at the top of the spider module.
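Under the hood, HttpProxyMiddleware discovers proxies through urllib's getproxies(), which reads these environment variables, so they must be set before the crawl starts. A quick sanity check outside Scrapy (a minimal sketch; the proxy URL is a placeholder):

import os
import urllib.request

os.environ["http_proxy"] = "http://user:password@proxy.internal.server.com:8080"
# HttpProxyMiddleware builds its proxy table from urllib.request.getproxies(),
# so this should print {'http': 'http://user:password@proxy.internal.server.com:8080'}
print(urllib.request.getproxies())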

Method 2: a middleware

import base64


class ProxyMiddleware(object):
    # Overwrite process_request
    def process_request(self, request, spider):
        # Set the proxy host and port
        request.meta['proxy'] = "http://proxy.internal.server.com:8080"

        # Set the proxy auth username and password
        proxy_user_pass = "user:password"
        # base64.encodestring() is gone in Python 3; b64encode() takes and returns bytes
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()

        # Set the proxy authorization header
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
middlewares.py
DOWNLOADER_MIDDLEWARES = { 
    'middlewares.ProxyMiddleware': 90,
}
settings.py


Supplement

import base64
import random

from douban.settings import PROXIES  # defined in settings.py, shown below


class RandomProxy(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)

        if not proxy['user_passwd']:
            # Free proxy, no authentication required
            request.meta['proxy'] = "http://" + proxy['ip_port']
        else:
            request.meta['proxy'] = "http://" + proxy['ip_port']
            # Base64-encode the username:password pair
            encoded_user_passwd = base64.b64encode(proxy['user_passwd'].encode()).decode()
            # Pass it in the proxy server's authorization header
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_passwd
middlewares.py (to be refined later)
PROXIES = [
    {'ip_port': '111.8.60.9:8123', 'user_passwd': ''},
    {'ip_port': '101.71.27.120:80', 'user_passwd': 'user2:pass2'},
    {'ip_port': '122.96.59.104:80', 'user_passwd': 'user3:pass3'},
    {'ip_port': '122.224.249.122:8088', 'user_passwd': 'user4:pass4'},
]

DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.RandomUserAgent': 100,
    'douban.middlewares.RandomProxy': 200,
}
settings.py (goes with the middleware above; to be refined later)


Tested and working

Reference: https://www.cnblogs.com/cnkai/p/7401526.html

import scrapy


class TaiyingshiSpider(scrapy.Spider):
    custom_settings = {
        "DOWNLOAD_DELAY": 0,
        "DOWNLOADER_MIDDLEWARES": {
            # "imfcrawl.middlewares.ProxyMiddleware": 543,
        }
    }

    """rest omitted"""
spiders.py
PROXIES = [
    "http://123.127.217.170:80",
    "http://223.202.204.195:80",
    "http://223.202.204.194:80",
    "http://140.143.105.246:80",
]
settings.py
import random


class ProxyMiddleware(object):
    '''
    Set a random proxy from the PROXIES setting.
    '''

    def __init__(self, ip):
        self.ip = ip

    @classmethod
    def from_crawler(cls, crawler):
        return cls(ip=crawler.settings.get('PROXIES'))

    def process_request(self, request, spider):
        try:
            proxy_ip = random.choice(self.ip)
        except (TypeError, IndexError):
            # PROXIES is missing or empty
            proxy_ip = None

        if proxy_ip:
            # print(proxy_ip)
            request.meta['proxy'] = proxy_ip
middlewares.py (random proxy)

Usage scenario

Here is the problem: in the setup above, every request goes through a proxy from the pool, but some requests do not need a proxy at all. How can we make the spider fall back to a proxy IP from the pool only when a request times out?

Link: https://www.cnblogs.com/lei0213/p/7904994.html

1. Scrapy's basic request flow: first the start_requests method of the parent class (scrapy.Spider) runs;

2. start_requests then takes each URL from the start_urls list;

3. finally it calls make_requests_from_url, passing in only the url.

So we can override make_requests_from_url and call scrapy.Request() directly. A quick look at its parameters:

1. url=url: the URL that start_requests() hands over.

2. meta: here we set a single key, download_timeout: 10. If the first request does not complete within 10 seconds, the downloader middleware's exception handler (introduced below) takes over.

3. callback: the function that runs once this request succeeds; e.g. when a URL is fetched successfully, this method is called on the response.

4. dont_filter: this one matters. Scrapy deduplicates requests by default; dont_filter=True bypasses that deduplication, while dont_filter=False (the default) keeps it. In the author's test, only the first page was crawled until this argument was passed explicitly.

import scrapy


class HttpbinTestSpider(scrapy.Spider):
    name = "httpbin_test"
    allowed_domains = ["httpbin.org"]
    start_urls = ['http://httpbin.org/get']

    def make_requests_from_url(self, url):
        self.logger.debug('Try first time')
        return scrapy.Request(url=url, meta={'download_timeout': 10},
                              callback=self.parse, dont_filter=False)

    def parse(self, response):
        print(response.text)
spider.py
Below is the process_exception method that runs when the request above times out after 10 seconds. Attentive readers will notice that in the spider file we could log directly with self.logger; that works because Scrapy defines the logger in the parent class. In a middleware you have to define a class-level logger yourself before you can use it.
import logging

import requests


class HttpbinProxyMiddleware(object):

    logger = logging.getLogger(__name__)

    # def process_request(self, request, spider):
    #     # pro_addr = requests.get('http://127.0.0.1:5000/get').text
    #     # request.meta['proxy'] = 'http://' + pro_addr
    #     pass
    #
    # def process_response(self, request, response, spider):
    #     # Here you get the downloaded response and can modify it
    #     # (e.g. fix the text encoding) before it goes on.
    #     pass

    def process_exception(self, request, exception, spider):
        self.logger.debug('Try Exception time')
        self.logger.debug('Try second time')
        # Fetch a proxy address from the local proxy-pool service
        proxy_addr = requests.get('http://127.0.0.1:5000/get').text
        self.logger.debug(proxy_addr)
        request.meta['proxy'] = 'http://{0}'.format(proxy_addr)
        # Return the request so Scrapy re-schedules it, now routed through the proxy
        return request
middleware.py
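The middleware above assumes a proxy-pool service running locally on port 5000 that returns a bare "ip:port" string from /get. That service is not part of this post; a minimal Flask stand-in for testing could look like this (hypothetical sketch, the pool entries are placeholders):

import random

from flask import Flask

app = Flask(__name__)

# Placeholder pool; in practice this would be backed by live, validated proxies
PROXY_POOL = [
    "123.127.217.170:80",
    "140.143.105.246:80",
]

@app.route('/get')
def get_proxy():
    # Return one proxy as plain text, e.g. "123.127.217.170:80"
    return random.choice(PROXY_POOL)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)

Any HTTP server with the same response shape works; the middleware only cares that requests.get('http://127.0.0.1:5000/get').text yields an ip:port string.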

This is the key part: we need Scrapy to run the HttpbinProxyMiddleware defined in middlewares. Note that the built-in retry middleware is disabled here: Scrapy retries failed requests automatically by default, and to demonstrate the effect we turn that default off.

DOWNLOADER_MIDDLEWARES = {
    'httpbin.middlewares.HttpbinProxyMiddleware': 543,
    # Opt out of Scrapy's automatic retries
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
}
settings.py


User-agent

The user-agent is one of the more important parameters when imitating a browser, mainly to keep the crawler from being banned. In earlier chapters we saw that a user-agent can be set in settings.py, e.g.:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'
But a single user-agent can still get banned. Adding a crawl delay also reduces the risk of a ban. These are fairly simple disguise techniques, but they are enough for getting started.
So we configure more user-agents to imitate different browsers, picking one at random for each download; that makes a ban even less likely.

import random
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            # print(ua)
            request.headers.setdefault('User-Agent', ua)

    # The default user_agent_list covers Chrome, IE, Firefox, Mozilla, Opera and Netscape.
    # For more user-agent strings, see http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
middlewares.py (user_agent_list can also be placed in settings.py)
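As the caption notes, the list can also live in settings.py. A minimal sketch of that variant (assuming a USER_AGENT_LIST key in settings.py, which is not part of the original post):

import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):
    """Variant that reads the list from the (assumed) USER_AGENT_LIST setting."""

    def __init__(self, user_agent_list):
        super().__init__()
        self.user_agent_list = user_agent_list

    @classmethod
    def from_crawler(cls, crawler):
        return cls(user_agent_list=crawler.settings.getlist('USER_AGENT_LIST'))

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            request.headers.setdefault('User-Agent', ua)

Pulling the list in through from_crawler keeps the middleware free of hard-coded data.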
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,  # Scrapy's built-in middleware; disable it (its source is given below)
    'projectname.middlewares.RotateUserAgentMiddleware': 543,
}
settings.py
from scrapy import signals


class UserAgentMiddleware(object):
    """This middleware allows spiders to override the user_agent"""

    def __init__(self, user_agent='Scrapy'):
        self.user_agent = user_agent

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler.settings['USER_AGENT'])
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        return o

    def spider_opened(self, spider):
        self.user_agent = getattr(spider, 'user_agent', self.user_agent)  # when the spider opens, use the spider's own user_agent if it defines one

    def process_request(self, request, spider):
        if self.user_agent:
            request.headers.setdefault(b'User-Agent', self.user_agent)
Reference: Scrapy's built-in UserAgentMiddleware

Random delay

import logging
import random
import time


class RandomDelayMiddleware(object):
    def __init__(self, crawler):
        self.api_url = crawler.settings.get("API_URL")  # only requests to this URL are delayed
        self.delay = crawler.settings.getint("RANDOM_DELAY")  # maximum delay, set in settings

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        # print(request.url)
        if request.url == self.api_url:
            delay = random.randint(0, self.delay)
            logging.debug("### random delay: %s s ###" % delay)
            time.sleep(delay)  # random delay
            # time.sleep(0.5)  # fixed delay
RandomDelayMiddleware.py
class TaiyingshiSpider(scrapy.Spider):
    name = 'taiyingshi'
    allowed_domains = ['taiyingshi.com', "127.0.0.1"]
    start_urls = ['http://www.taiyingshi.com/last/']
    # start_urls = ['http://www.taiyingshi.com/people/xj25.html']

    custom_settings = {
        "DOWNLOAD_DELAY": 0,  # Scrapy's own delay; the effective delay is this plus the random delay below
        "RANDOM_DELAY": 3,  # maximum random delay; the effective delay adds the value above

        "DOWNLOADER_MIDDLEWARES": {
            # "imfcrawl.middlewares.ProxyMiddleware": 543,
            'imfcrawl.middlewares.RandomDelayMiddleware': 550,  # enable the middleware
        },
    }
Set these in settings.py or in the spider's custom_settings.

Tested and working.
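Note that Scrapy also has delay randomization built in: with RANDOMIZE_DOWNLOAD_DELAY enabled (the default), each wait is drawn from 0.5 × DOWNLOAD_DELAY to 1.5 × DOWNLOAD_DELAY. If you don't need the per-URL control of the middleware above, a settings-only alternative is:

# settings.py — built-in alternative to the custom middleware above
DOWNLOAD_DELAY = 2  # base delay in seconds
RANDOMIZE_DOWNLOAD_DELAY = True  # Scrapy's default; each wait becomes 1..3 seconds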


References / reposted from

https://www.cnblogs.com/cnkai/p/7401526.html

https://www.cnblogs.com/lei0213/p/7904994.html

https://blog.csdn.net/yancey_blog/article/details/53896092
