Scrapy Learning - 22 - Extension Development

Developing Scrapy extensions
Definition
  The extensions framework provides a mechanism for plugging your own custom functionality into Scrapy.
  Extensions are just regular classes that are instantiated and initialized when Scrapy starts.
 
Note
  In fact, custom extensions, spider middlewares and downloader middlewares are all extensions.
  spider middlewares, downloader middlewares and pipelines each have their own manager, and these managers share the same MiddlewareManager base as the extension manager.
 
Extension settings
  Extensions use the Scrapy settings to manage their configuration, just like any other Scrapy code.
  It is customary for extensions to prefix their settings with their own name, to avoid collisions with existing (and future) extensions.
    For example, a hypothetical extension that handles Google Sitemaps would use settings such as GOOGLESITEMAP_ENABLED and GOOGLESITEMAP_DEPTH, as sketched below.
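
A minimal sketch of how such an extension might read its prefixed settings (GOOGLESITEMAP_ENABLED and GOOGLESITEMAP_DEPTH are the illustrative names from the example above, not real Scrapy settings):

from scrapy.exceptions import NotConfigured

class GoogleSitemapExtension(object):

    def __init__(self, depth):
        self.depth = depth

    @classmethod
    def from_crawler(cls, crawler):
        # disable the extension unless its prefixed switch is turned on
        if not crawler.settings.getbool('GOOGLESITEMAP_ENABLED'):
            raise NotConfigured
        # read the prefixed option, falling back to a default
        return cls(crawler.settings.getint('GOOGLESITEMAP_DEPTH', 3))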
 
Loading and activating extensions
  Extensions are loaded and activated when the extension class is instantiated, so all extension initialization code must run in the class constructor (__init__).
  To make an extension available, add it to the EXTENSIONS setting. In EXTENSIONS, each extension is represented by a string: the full Python path to the extension class.
   For example
     EXTENSIONS = {
         'scrapy.contrib.corestats.CoreStats': 500,
         'scrapy.telnet.TelnetConsole': 500,
     }
  As you can see, the EXTENSIONS setting is a dict whose keys are the extension class paths and whose values are the orders, which define the order in which extensions are loaded.
     Extension order is not as important as middleware order, and extensions generally do not depend on or interact with each other, so the loading order does not matter.
 
Disabling an extension
To disable an extension (including one enabled by default), set its order to None in the EXTENSIONS setting:
EXTENSIONS = {
    'scrapy.contrib.corestats.CoreStats': None,
}

 

How to implement your extension
  Implementing an extension is simple. Each extension is a single Python class which does not need to implement any special methods.
  The main entry point for a Scrapy extension (this also applies to middlewares and pipelines) is the from_crawler class method, which receives a Crawler instance, the main object controlling the Scrapy crawler.
     Through that object you can access settings, signals and stats, and control the crawler's behaviour, if your extension needs to.
  Typically, extensions connect to signals and perform the tasks those signals trigger.
  Finally, if the from_crawler method raises the NotConfigured exception, the extension will be disabled. Otherwise, the extension will be enabled.
 
Extension example
from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenCloseLogging(object):

    def __init__(self, item_count):
        self.item_count = item_count

        self.items_scraped = 0

    @classmethod
    def from_crawler(cls, crawler):
        # first check if the extension should be enabled and raise
        # NotConfigured otherwise
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured

        # get the number of items from settings
        item_count = crawler.settings.getint('MYEXT_ITEMCOUNT', 1000)

        # instantiate the extension object
        ext = cls(item_count)

        # connect the extension object to signals
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)

        # return the extension object
        return ext

    def spider_opened(self, spider):
        spider.log("opened spider %s" % spider.name)

    def spider_closed(self, spider):
        spider.log("closed spider %s" % spider.name)

    def item_scraped(self, item, spider):
        self.items_scraped += 1
        if self.items_scraped % self.item_count == 0:
            spider.log("scraped %d items" % self.items_scraped)

 

Overview of the built-in extensions
Log Stats extension          Logs basic statistics, such as crawled pages and scraped items.

Core Stats extension         Enables collection of core statistics, provided the stats collection is enabled.

Telnet console extension     Provides a telnet console for getting into a Python interpreter inside the currently running Scrapy process.

Memory usage extension       Monitors the memory used by the Scrapy process and: sends a notification e-mail when it exceeds a certain value; closes the spider when it exceeds a certain value.

Memory debugger extension    An extension for debugging memory usage. It collects information about: objects uncollected by the Python garbage collector; objects that should have been destroyed but are still alive.

Close spider extension       Closes a spider automatically when certain conditions are met, using a specific closing reason for each condition.

StatsMailer extension        This simple extension can be used to send a notification e-mail every time a domain has finished scraping, including the Scrapy stats collected.

Debugging extensions         When a SIGQUIT or SIGUSR2 signal is received, information about the running spider process is dumped.

Debugger extension           When a SIGUSR2 signal is received, a Python debugger is invoked inside the running Scrapy process. After the debugger exits, the Scrapy process continues running normally.
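
A short sketch of configuring a couple of these built-in extensions through the settings; LOGSTATS_INTERVAL and the CLOSESPIDER_* options are standard Scrapy settings, and the values here are purely illustrative:

# settings.py -- illustrative values
LOGSTATS_INTERVAL = 60.0        # Log Stats extension: report crawl stats every 60 seconds

CLOSESPIDER_TIMEOUT = 3600      # Close spider extension: close after one hour...
CLOSESPIDER_ITEMCOUNT = 5000    # ...or after 5000 items, whichever happens first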

 

Source code of the built-in Core Stats extension
"""
Extension for collecting core stats like items scraped and start/finish times
"""
import datetime

from scrapy import signals

class CoreStats(object):

    def __init__(self, stats):
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler.stats)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(o.item_dropped, signal=signals.item_dropped)
        crawler.signals.connect(o.response_received, signal=signals.response_received)
        return o

    def spider_opened(self, spider):
        self.stats.set_value('start_time', datetime.datetime.utcnow(), spider=spider)

    def spider_closed(self, spider, reason):
        self.stats.set_value('finish_time', datetime.datetime.utcnow(), spider=spider)
        self.stats.set_value('finish_reason', reason, spider=spider)

    def item_scraped(self, item, spider):
        self.stats.inc_value('item_scraped_count', spider=spider)

    def response_received(self, spider):
        self.stats.inc_value('response_received_count', spider=spider)

    def item_dropped(self, item, spider, exception):
        reason = exception.__class__.__name__
        self.stats.inc_value('item_dropped_count', spider=spider)
        self.stats.inc_value('item_dropped_reasons_count/%s' % reason, spider=spider)
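
The values recorded by CoreStats live in the crawler's stats collector and can be read back by any other component. A minimal sketch of doing so from a separate, made-up StatsReport extension (stats.get_value and the 'item_scraped_count' key come from the source above; everything else is illustrative):

from scrapy import signals

class StatsReport(object):

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls(crawler)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider, reason):
        # keys written by CoreStats above
        scraped = self.crawler.stats.get_value('item_scraped_count', 0)
        spider.logger.info("scraped %s items, finish reason: %s", scraped, reason)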

 

Source code of the built-in Memory Usage extension
"""
MemoryUsage extension

See documentation in docs/topics/extensions.rst
"""
import sys
import socket
import logging
from pprint import pformat
from importlib import import_module

from twisted.internet import task

from scrapy import signals
from scrapy.exceptions import NotConfigured
from scrapy.mail import MailSender
from scrapy.utils.engine import get_engine_status

logger = logging.getLogger(__name__)


class MemoryUsage(object):

    def __init__(self, crawler):
        if not crawler.settings.getbool('MEMUSAGE_ENABLED'):
            raise NotConfigured
        try:
            # stdlib's resource module is only available on unix platforms.
            self.resource = import_module('resource')
        except ImportError:
            raise NotConfigured

        self.crawler = crawler
        self.warned = False
        self.notify_mails = crawler.settings.getlist('MEMUSAGE_NOTIFY_MAIL')
        self.limit = crawler.settings.getint('MEMUSAGE_LIMIT_MB')*1024*1024
        self.warning = crawler.settings.getint('MEMUSAGE_WARNING_MB')*1024*1024
        self.check_interval = crawler.settings.getfloat('MEMUSAGE_CHECK_INTERVAL_SECONDS')
        self.mail = MailSender.from_settings(crawler.settings)
        crawler.signals.connect(self.engine_started, signal=signals.engine_started)
        crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def get_virtual_size(self):
        size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss
        if sys.platform != 'darwin':
            # on Mac OS X ru_maxrss is in bytes, on Linux it is in KB
            size *= 1024
        return size

    def engine_started(self):
        self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())
        self.tasks = []
        tsk = task.LoopingCall(self.update)
        self.tasks.append(tsk)
        tsk.start(self.check_interval, now=True)
        if self.limit:
            tsk = task.LoopingCall(self._check_limit)
            self.tasks.append(tsk)
            tsk.start(self.check_interval, now=True)
        if self.warning:
            tsk = task.LoopingCall(self._check_warning)
            self.tasks.append(tsk)
            tsk.start(self.check_interval, now=True)

    def engine_stopped(self):
        for tsk in self.tasks:
            if tsk.running:
                tsk.stop()

    def update(self):
        self.crawler.stats.max_value('memusage/max', self.get_virtual_size())

    def _check_limit(self):
        if self.get_virtual_size() > self.limit:
            self.crawler.stats.set_value('memusage/limit_reached', 1)
            mem = self.limit/1024/1024
            logger.error("Memory usage exceeded %(memusage)dM. Shutting down Scrapy...",
                        {'memusage': mem}, extra={'crawler': self.crawler})
            if self.notify_mails:
                subj = "%s terminated: memory usage exceeded %dM at %s" % \
                        (self.crawler.settings['BOT_NAME'], mem, socket.gethostname())
                self._send_report(self.notify_mails, subj)
                self.crawler.stats.set_value('memusage/limit_notified', 1)

            open_spiders = self.crawler.engine.open_spiders
            if open_spiders:
                for spider in open_spiders:
                    self.crawler.engine.close_spider(spider, 'memusage_exceeded')
            else:
                self.crawler.stop()

    def _check_warning(self):
        if self.warned: # warn only once
            return
        if self.get_virtual_size() > self.warning:
            self.crawler.stats.set_value('memusage/warning_reached', 1)
            mem = self.warning/1024/1024
            logger.warning("Memory usage reached %(memusage)dM",
                        {'memusage': mem}, extra={'crawler': self.crawler})
            if self.notify_mails:
                subj = "%s warning: memory usage reached %dM at %s" % \
                        (self.crawler.settings['BOT_NAME'], mem, socket.gethostname())
                self._send_report(self.notify_mails, subj)
                self.crawler.stats.set_value('memusage/warning_notified', 1)
            self.warned = True

    def _send_report(self, rcpts, subject):
        """send notification mail with some additional useful info"""
        stats = self.crawler.stats
        s = "Memory usage at engine startup : %dM\r\n" % (stats.get_value('memusage/startup')/1024/1024)
        s += "Maximum memory usage           : %dM\r\n" % (stats.get_value('memusage/max')/1024/1024)
        s += "Current memory usage           : %dM\r\n" % (self.get_virtual_size()/1024/1024)

        s += "ENGINE STATUS ------------------------------------------------------- \r\n"
        s += "\r\n"
        s += pformat(get_engine_status(self.crawler.engine))
        s += "\r\n"
        self.mail.send(rcpts, subject, s)
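
The settings that drive this extension are exactly the ones read in __init__ above. A sketch of a typical configuration (the values are illustrative):

# settings.py -- values are illustrative
MEMUSAGE_ENABLED = True
MEMUSAGE_LIMIT_MB = 2048                    # close all spiders / stop the crawler above 2 GB
MEMUSAGE_WARNING_MB = 1536                  # warn (once) above 1.5 GB
MEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0      # how often the LoopingCall checks memory usage
MEMUSAGE_NOTIFY_MAIL = ['dev@example.com']  # recipients of the warning / shutdown mails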

 

 