04 Scrapy: log levels and passing data between requests

1. Scrapy log levels

  -When you run a crawl with scrapy crawl spiderName (the spider's name attribute, not the file name), everything printed to the terminal is Scrapy's log output.

  -The log levels:

    ERROR: errors

    WARNING: warnings

    INFO: informational messages

    DEBUG: debugging messages

  -To control which log messages are emitted:

  In the settings.py configuration file, add

 LOG_LEVEL = '<level>' to keep only messages at or above that level.

 LOG_FILE = 'log.txt' to additionally write the log output into the given file.
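For reference, messages below the configured level are simply dropped. A minimal, hypothetical spider to illustrate this (the name and URL are placeholders; every Scrapy spider exposes a self.logger):

import scrapy

class LogDemoSpider(scrapy.Spider):
    # placeholder spider used only to show which messages survive LOG_LEVEL
    name = 'logdemo'
    start_urls = ['http://example.com/']

    def parse(self, response):
        self.logger.debug('hidden when LOG_LEVEL = "INFO"')  # DEBUG is below INFO
        self.logger.info('shown at "INFO" and "DEBUG"')
        self.logger.error('shown at all four levels')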

2. Passing data between requests

  -In some cases the data we want is not all on the same page. For example, on a movie site the title and rating sit on the first-level (list) page, while the remaining details sit on a second-level detail page. In that case we have to pass data along with the request.

  -Example: crawl the movie site www.id97.com, collecting the title, genre, and rating from the list page, and the release date, director, and runtime from the second-level detail page.

Spider file:

# -*- coding: utf-8 -*-
import scrapy
from moviePro.items import MovieproItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['www.id97.com']
    start_urls = ['http://www.id97.com/']

    def parse(self, response):
        div_list = response.xpath('//div[@class="col-xs-1-5 movie-item"]')

        for div in div_list:
            item = MovieproItem()
            item['name'] = div.xpath('.//h1/a/text()').extract_first()
            item['score'] = div.xpath('.//h1/em/text()').extract_first()
            # xpath('string(.)') extracts the concatenated text of the current node and all its descendants; '.' is the current node
            item['kind'] = div.xpath('.//div[@class="otherinfo"]').xpath('string(.)').extract_first()
            item['detail_url'] = div.xpath('./div/a/@href').extract_first()
            # request the second-level detail page; the meta parameter passes the item along with the Request
            yield scrapy.Request(url=item['detail_url'],callback=self.parse_detail,meta={'item':item})

    def parse_detail(self,response):
        # retrieve the item that was passed along via meta
        item = response.meta['item']
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        item['time'] = response.xpath('//div[@class="row"]//table/tr[7]/td[2]/text()').extract_first()
        item['long'] = response.xpath('//div[@class="row"]//table/tr[8]/td[2]/text()').extract_first()
        # hand the finished item to the pipeline
        yield item
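meta is the classic hand-off mechanism; Scrapy 1.7+ also accepts a cb_kwargs dict on Request, whose entries arrive as plain keyword arguments of the callback. A sketch of the same hand-off rewritten that way (not the code used above):

    def parse(self, response):
        for div in response.xpath('//div[@class="col-xs-1-5 movie-item"]'):
            item = MovieproItem()
            item['detail_url'] = div.xpath('./div/a/@href').extract_first()
            # each cb_kwargs entry becomes a keyword argument of parse_detail
            yield scrapy.Request(url=item['detail_url'], callback=self.parse_detail, cb_kwargs={'item': item})

    def parse_detail(self, response, item):
        # no response.meta lookup needed
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        yield item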

Items file:


# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MovieproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    score = scrapy.Field()
    time = scrapy.Field()
    long = scrapy.Field()
    actor = scrapy.Field()
    kind = scrapy.Field()
    detail_url = scrapy.Field()
 

Pipeline file:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
class MovieproPipeline(object):
    def __init__(self):
        # open the output file once; utf-8 so the Chinese fields are written as-is
        self.fp = open('data.txt', 'w', encoding='utf-8')
    def process_item(self, item, spider):
        dic = dict(item)
        print(dic)
        # one JSON object per line keeps the file parseable item by item
        self.fp.write(json.dumps(dic, ensure_ascii=False) + '\n')
        return item
    def close_spider(self, spider):
        self.fp.close()
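For the pipeline to actually receive items it must be enabled in settings.py (the dotted path below assumes the default layout of a project created with scrapy startproject moviePro):

ITEM_PIPELINES = {
    'moviePro.pipelines.MovieproPipeline': 300,
}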

3. How to make Scrapy crawl faster

  Increase concurrency:

    By default Scrapy runs 16 concurrent requests; this can be raised. In the settings configuration file set

CONCURRENT_REQUESTS = 100 to allow up to 100 concurrent requests.

Lower the log level:

  A running crawl produces a large amount of log output; to cut the CPU time spent on logging, set the level to INFO or ERROR. In the configuration file: LOG_LEVEL = 'INFO'

Disable cookies:

  Unless the site really requires cookies, disabling them reduces CPU usage and speeds up the crawl. In the settings file: COOKIES_ENABLED = False

Disable retries:

  Re-requesting failed responses (retrying) slows the crawl down, so retries can be switched off. In the configuration file:

  RETRY_ENABLED = False

Reduce the download timeout:

  When a crawl hits very slow links, a lower timeout lets stuck requests be abandoned quickly, improving throughput. In the configuration file: DOWNLOAD_TIMEOUT = 10 sets the timeout to 10 s. (All of these settings are applied together in the settings.py of the test case below.)

Test case: scrape the campus-beauty photos from www.521609.com

Spider file:

# -*- coding: utf-8 -*-
import scrapy
from xiaohua.items import XiaohuaItem

class XiaohuaSpider(scrapy.Spider):
    name = 'xiaohua'
    allowed_domains = ['www.521609.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    # page counter and URL template for the remaining list pages
    pageNum = 1
    url = 'http://www.521609.com/daxuemeinv/list8%d.html'

    def parse(self, response):
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            school = li.xpath('./a/img/@alt').extract_first()
            img_url = li.xpath('./a/img/@src').extract_first()

            item = XiaohuaItem()
            item['school'] = school
            item['img_url'] = 'http://www.521609.com' + img_url

            yield item

        # follow the next list page until page 10 has been fetched
        if self.pageNum < 10:
            self.pageNum += 1
            url = self.url % self.pageNum
            yield scrapy.Request(url=url, callback=self.parse)

Items file:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class XiaohuaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    school = scrapy.Field()
    img_url = scrapy.Field()

Pipeline file (pipelines.py):

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
import os
import urllib.request
class XiaohuaPipeline(object):
    def __init__(self):
        self.fp = None

    def open_spider(self, spider):
        print('spider started')
        self.fp = open('./xiaohua.txt', 'w', encoding='utf-8')

    def download_img(self, item):
        # save each image under ./xiaohualib, creating the folder on first use
        url = item['img_url']
        fileName = item['school'] + '.jpg'
        if not os.path.exists('./xiaohualib'):
            os.mkdir('./xiaohualib')
        filepath = os.path.join('./xiaohualib', fileName)
        urllib.request.urlretrieve(url, filepath)
        print(fileName + ' downloaded')

    def process_item(self, item, spider):
        obj = dict(item)
        json_str = json.dumps(obj,ensure_ascii=False)
        self.fp.write(json_str+'\n')

        # download the image
        self.download_img(item)
        return item

    def close_spider(self, spider):
        print('spider finished')
        self.fp.close()
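One caveat: urllib.request.urlretrieve is a blocking call, so every image download stalls the crawl, working against the efficiency settings above. For larger jobs, Scrapy's built-in ImagesPipeline (it requires the Pillow package) downloads through the scheduler instead. A minimal sketch of the settings it needs, assuming the item were given an image_urls list field:

ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}
IMAGES_STORE = './xiaohualib'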

settings.py configuration file:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'ERROR'
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 3
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
#DOWNLOAD_DELAY = 3  # left commented out: a fixed per-request delay would undo the concurrency gains above

 
