Setting proxy IPs, headers, and cookies in the Scrapy framework

【Setting a proxy IP】

According to the current Scrapy documentation, there are two ways to configure a proxy for a Scrapy crawler:

I. Configure it through a DownloaderMiddleware
After creating the project with the standard scrapy startproject command you get the usual generated layout; crawler under spiders/ is the spider that has already been written.

The DOWNLOADER_MIDDLEWARES setting in settings.py is where Scrapy's middlewares are configured, and where we register our own crawler middleware. After registering it, it looks like this:

DOWNLOADER_MIDDLEWARES = {
    'WandoujiaCrawler.middlewares.ProxyMiddleware': 100,
}

Here WandoujiaCrawler is our project name, and the number after the class path is the middleware's priority. In the official documentation the built-in proxy middleware has priority 750; ours must take precedence over it, hence the smaller number (lower values run earlier in process_request). The middleware itself goes in middlewares.py (Scrapy generates a middleware template in that file by default; leave it alone and write ours after it):

# -*- coding: utf-8 -*-
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = "http://proxy.yourproxy:8001"

Two gotchas here:
First, the proxy value must include the http:// scheme prefix, otherwise you get the error to_bytes must receive a unicode, str or bytes object, got NoneType.
Second, the official documentation says process_request must return None, a Request object, or a Response object (or raise IgnoreRequest). In practice you don't need an explicit return: a method with no return statement returns None, which is exactly the "continue processing" case, whereas returning anything else can cause errors.
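
To tie both points together, here is a minimal sketch of such a middleware; the RandomProxyMiddleware name and the proxy URLs are made up for illustration:

import random

# Hypothetical proxy pool -- replace with proxies you actually have
PROXIES = [
    "http://proxy1.example.com:8001",
    "http://proxy2.example.com:8001",
]

class RandomProxyMiddleware(object):
    def process_request(self, request, spider):
        # The http:// scheme prefix is mandatory, otherwise Scrapy fails with
        # "to_bytes must receive a unicode, str or bytes object, got NoneType"
        request.meta['proxy'] = random.choice(PROXIES)
        # Falling off the end returns None, which tells Scrapy to keep
        # running the remaining downloader middlewares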
If the proxy requires a username and password, append the following inside process_request (and add import base64 at the top of the file):

# Use the following lines if your proxy requires authentication
proxy_user_pass = "USERNAME:PASSWORD"
# Set up basic authentication for the proxy; b64encode takes and returns
# bytes, so encode the credentials first and decode the result back to str
encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass


II. Set the proxy field directly in the spider
We can also set the proxy directly in the spider code by adding a meta dict when constructing each Request:

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse, meta={'proxy': 'http://proxy.yourproxy:8001'})
 
    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('span small::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

----------------------------------------------------------------------------------------------------------------------

A second write-up of the middleware approach

1. Create middlewares.py in the Scrapy project

# Import the base64 library; it is needed only if the proxy requires authentication
import base64

# Start your middleware class
class ProxyMiddleware(object):
    # overwrite process_request
    def process_request(self, request, spider):
        # Set the location of the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # Set up basic authentication for the proxy (b64encode works on bytes)
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

2. Add the following to the project settings file (./project_name/settings.py):

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'project_name.middlewares.ProxyMiddleware': 100,
}

Just two steps, and requests now go through the proxy. Let's test it ^_^

import scrapy

class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["whatismyip.com"]
    # The following url is subject to change, you can get the last updated one from here:
    # http://www.whatismyip.com/faq/automation.asp
    start_urls = ["http://xujian.info"]

    def parse(self, response):
        # Save the page so you can inspect which IP the server saw
        with open('test.html', 'wb') as f:
            f.write(response.body)
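
A quicker sanity check is to fetch a service that echoes the caller's IP; httpbin.org/ip is a common choice. This spider is a sketch written for this post, not part of the original project:

import scrapy

class IPCheckSpider(scrapy.Spider):
    name = "ipcheck"
    start_urls = ["https://httpbin.org/ip"]

    def parse(self, response):
        # With the proxy middleware enabled, the echoed origin IP should be
        # the proxy's address rather than your own
        self.logger.info("Exit IP response: %s", response.text)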

 

Add a middlewares.py file in the directory alongside settings.py:

import base64

class ProxyMiddleware(object):
    # overwrite process_request
    def process_request(self, request, spider):
        # Set the location of the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # Set up basic authentication for the proxy; in Python 3, b64encode
        # takes and returns bytes, hence the encode()/decode() pair
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

Many answers online use base64.encodestring to encode proxy_user_pass. encodestring inserts a newline every 76 output characters, so once the username and password get long enough the Proxy-Authorization header is corrupted and requests fail; the function was also removed entirely in Python 3.9. b64encode, as used above, is the recommended encoding; note that in Python 3 it takes and returns bytes, hence the encode()/decode() calls.

Then in settings.py, enable it in DOWNLOADER_MIDDLEWARES with an entry like 'projectname.middlewares.ProxyMiddleware': 1 and you are done.
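
For concreteness, the entry looks like this (projectname stands for your actual project package):

DOWNLOADER_MIDDLEWARES = {
    'projectname.middlewares.ProxyMiddleware': 1,
}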

【Setting headers and cookies】

There are three ways to set headers and cookies in Scrapy:

set the cookies in settings
set the cookies in a middleware
override the start_requests method in the spider file

This post records the third way, overriding start_requests, with douban.com as the example; for reference, a sketch of the settings-based way appears right below.
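
A minimal sketch of the settings-based way (the first option); the header values are illustrative:

# settings.py -- approach 1: project-wide default headers
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...',
    # Note: with COOKIES_ENABLED = True the cookies middleware may override
    # a hard-coded Cookie header; the per-request cookies= argument used
    # below is the reliable route
    'Cookie': 'key1=value1; key2=value2',
}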

I. Setting request headers
Add the following inside start_requests:

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'
}
II. Setting cookies
1. Log in to douban.com and grab the cookies

Log in to douban.com in Chrome
Press F12 to open the developer tools
Inspect the cookies


2. Add the following in start_requests

cookies = {
    'key1': 'value1',
    'key2': 'value2',
    'key3': 'value3'
}

One line of code converts a cookie string into a dict:

# Convert the raw cookie string into dict form so Scrapy can use it
cookie_str = "_ga=GA1.2.1937936278.1538889470; __gads=ID=1ba11c2610acf504:T=1539160131:S=ALNI_MZwhotFaAA6KsIVzHG-ev0RnU4OIQ; .CNBlogsCookie=7F3B19F5204038FAE6287F33828591011A60086D0F4349BEDA5F568571875F43E1EED5EDE24E458FAB8972604B4ECD19FC058F5562321A6D87ABF8AAC19F32EC6C004B2EBA69A29B8532E5464ECD145896AA49F1; .Cnblogs.AspNetCore.Cookies=CfDJ8J0rgDI0eRtJkfTEZKR_e81dD8ABr7voOOlhOqLJ7tzHG0h7wCeF8EYzLUZbtYueLnkUIzSWDE9LuJ-53nA6Lem4htKEIqdoOszI5xWb4PUZHJtM1qgEjI1E1Q8YLz8cU3jts5xoHMzq7qq7AmtrlCYYqvBMgEX8GACn8j61WrxZfKe9Hmh4akC9AxcODmAPP--axDI0w6LTSQYKl4GnKihmxM6DQ3RDCXXzWukG-3xiPfKv5vdSNFBTvj7b2qOeTmy45RWkQT9dqf_bXjniWnhPHRnGq8uNHqN2bpzUlCOxsrjwuZlhbAPPLCnX90XJaA; _gid=GA1.2.201165281.1540104585"
# strip() drops the space after each semicolon; split('=', 1) keeps any '=' inside values intact
cookies = dict(i.strip().split('=', 1) for i in cookie_str.split(';'))
print(cookies)
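
Scrapy's Request also accepts cookies as a list of dicts when you need to pin the domain or path explicitly; the values here are illustrative:

cookies = [
    {'name': '_gid',
     'value': 'GA1.2.201165281.1540104585',
     'domain': '.douban.com',
     'path': '/'},
]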



3. Modify the request the method yields

yield scrapy.Request(url=url, headers=headers, cookies=cookies, callback=self.parse)
4. Set COOKIES_ENABLED

When COOKIES_ENABLED is left commented out it defaults to True, so the cookies middleware is on.
When COOKIES_ENABLED is explicitly set to False, the cookies middleware is disabled: the cookies argument passed to Request is ignored, and only a Cookie header hard-coded in the settings headers would still be sent.
When COOKIES_ENABLED is set to True, the cookies middleware is active and sends the custom cookies attached to each request.
So make sure settings.py has COOKIES_ENABLED = True.

5. Set ROBOTSTXT_OBEY

Set ROBOTSTXT_OBEY = False in settings.py, otherwise Scrapy honors douban's robots.txt and may refuse to fetch the page.
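
Both switches together in settings.py:

# settings.py
COOKIES_ENABLED = True    # let the cookies middleware send the per-request cookies
ROBOTSTXT_OBEY = False    # don't let robots.txt rules block the test crawl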

III. Testing
1. Create a new Scrapy project

scrapy startproject douban

2. Create a short_spider.py file under ./douban/spiders/

# -*- coding: utf-8 -*-
import scrapy
 
class ShortSpider(scrapy.Spider):
    name = 'short'
    allowed_domains = ['movie.douban.com']
 
    # override the start_requests method
    def start_requests(self):
 
        # browser User-Agent
        headers = {
            'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36'
        }
        # specify the cookies
        cookies = {
            'key1': 'value1',
            'key2': 'value2',
            'key3': 'value3'
        }
        urls = [
            'https://movie.douban.com/subject/26266893/comments?start=250&limit=20&sort=new_score&status=P'
        ]
        for url in urls:
            yield scrapy.Request(url=url, headers=headers, cookies=cookies, callback=self.parse)
 
    def parse(self, response):
        file_name = 'data.html'
        with open(file_name, 'wb') as f:
            f.write(response.body)
Because douban restricts movie short comments for users who are not logged in, the test uses a page deep in the short-comment list, beyond what anonymous users can see.

3. Enter the douban directory and run scrapy crawl short

Status code 200 means the crawl succeeded (403 is returned when you lack permission).





 
