scrapy CrawlSpider link extractors, scrapy-redis distributed crawler

CrawlSpider commands

1. Create a scrapy project: scrapy startproject projectName
2. Create the spider file: scrapy genspider -t crawl spiderName www.xxx.com
   The extra "-t crawl" flag makes the generated spider inherit from the CrawlSpider class instead of the Spider base class.
3. Run it: scrapy crawl spiderName --nolog

 

spider.py

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class Spider2Spider(CrawlSpider):
    name = 'spider2'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://dig.chouti.com/r/scoff/hot/1']

    rules = (
        # Follow pagination links like /r/scoff/hot/2 and hand each page to parse_item
        Rule(LinkExtractor(allow=r'/r/scoff/hot/\d+'), callback='parse_item', follow=True),
        Rule(LinkExtractor(allow=r'/scoff/$'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print(response)
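
A LinkExtractor can also be exercised on its own, which makes an allow pattern easy to test before running the whole crawl. A minimal sketch (the HTML body below is invented for illustration):

from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

# Fake page with two links; only the first matches the allow pattern
body = b'<a href="/r/scoff/hot/2">next</a> <a href="/about">about</a>'
response = HtmlResponse(url='https://dig.chouti.com/r/scoff/hot/1', body=body, encoding='utf-8')

le = LinkExtractor(allow=r'/r/scoff/hot/\d+')
for link in le.extract_links(response):
    print(link.url)  # https://dig.chouti.com/r/scoff/hot/2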

 

scrapy-redis commands

Commands to run:

cd scrapy2
cd spiders
scrapy runspider spider2.py

 

Workflow

1. Create a scrapy project: scrapy startproject projectName
2. Create the spider file: scrapy genspider -t crawl spiderName www.xxx.com
3. Modify the relevant attributes in the spider file:
    - Import: from scrapy_redis.spiders import RedisCrawlSpider
    - Change the spider's parent class to RedisCrawlSpider
    - Replace the start URL list with redis_key = 'xxx' (the name of the scheduler queue)
    - Comment out start_urls = []
4. Configure the settings file:
    - Use the shared pipeline class packaged with the component (you won't find this class in your project files):
        ITEM_PIPELINES = {
            'scrapy_redis.pipelines.RedisPipeline': 400
        }
    - Configure the scheduler (use the shared scheduler packaged with the component):
        # Dedup container class: uses a Redis set to store request fingerprints, making request dedup persistent
        DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
        # Use scrapy-redis's own scheduler
        SCHEDULER = "scrapy_redis.scheduler.Scheduler"
        # Whether the scheduler persists: i.e., whether to keep the Redis request queue and the fingerprint set when the crawl ends. True means persist (do not clear the data); otherwise the data is cleared
        SCHEDULER_PERSIST = True
    - Point at the Redis instance that will store the data:
        REDIS_HOST = 'IP address of the Redis server'
        REDIS_PORT = 6379
    - Edit the Redis config file:
        - Disable protected mode: protected-mode no
        - Comment out the bind line: #bind 127.0.0.1
        - Start redis
5. Run the distributed program: scrapy runspider xxx.py
6. Push a start URL into the scheduler queue. In redis-cli: lpush chouti https://dig.chouti.com/r/scoff/hot/1 (the key must match the spider's redis_key; the queue can also be seeded from Python, as in the sketch below)
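
A minimal sketch of step 6 using the redis-py package instead of redis-cli (assumes Redis is reachable on localhost at the default port):

import redis

# Connect to the Redis instance configured in settings.py
r = redis.Redis(host='127.0.0.1', port=6379)

# scrapy-redis pops start URLs from the list named by the spider's redis_key
r.lpush('chouti', 'https://dig.chouti.com/r/scoff/hot/1')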

 

Settings for the Redis config file (D:\program files\redis):

- Comment out this line: bind 127.0.0.1, so that other IPs can access Redis
- Change yes to no: protected-mode no, so that other IPs can run commands against Redis
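
After editing the config and restarting Redis, connectivity from another machine can be verified with a quick ping. A sketch assuming the redis-py package (the host IP is hypothetical; substitute your Redis server's address):

import redis

r = redis.Redis(host='192.168.1.100', port=6379)  # hypothetical server IP
print(r.ping())  # True if the connection succeeds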

 

spider2.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy_redis.spiders import RedisCrawlSpider
from scrapy2.items import Scrapy2Item


class Spider2Spider(RedisCrawlSpider):
    name = 'spider2'
    # allowed_domains = ['www.xxx.com']
    # start_urls = ['https://dig.chouti.com/r/scoff/hot/1']
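    # Name of the Redis list the scheduler pops start URLs from; seed it with: lpush chouti <url>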
    redis_key = 'chouti'
    rules = (
        Rule(LinkExtractor(allow=r'/all/hot/recent/\d+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        div_list = response.xpath('//div[@class="item"]')
        for div in div_list:
            title = div.xpath('./div[4]/div[1]/a/text()').extract_first()
            author = div.xpath('./div[4]/div[2]/a[4]/b/text()').extract_first()
            item = Scrapy2Item()
            item['title'] = title
            item['author'] = author

            yield item

 

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for scrapy2 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'scrapy2'

SPIDER_MODULES = ['scrapy2.spiders']
NEWSPIDER_MODULE = 'scrapy2.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapy2 (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 32

# Dedup container class: uses a Redis set to store request fingerprints, making request dedup persistent
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Use scrapy-redis's own scheduler
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Whether the scheduler persists: i.e., whether to keep the Redis request queue and the dedup fingerprint set when the crawl ends. True means persist (do not clear the data); otherwise the data is cleared
SCHEDULER_PERSIST = True  # data fingerprints

REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400
}
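
Once workers are running, RedisPipeline pushes each item into a Redis list as JSON; by default the list is named '<spider name>:items', so 'spider2:items' here. A minimal sketch that reads the results back (assumes the redis-py package):

import json
import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# RedisPipeline's default key pattern is '%(spider)s:items'
for raw in r.lrange('spider2:items', 0, -1):
    item = json.loads(raw)
    print(item['title'], item['author'])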

 

items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class Scrapy2Item(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    author = scrapy.Field()

 
