Scrapy mishandles the refresh directive in a page's meta tag

郑昀 2010-11-24

When Scrapy (an open-source crawling framework) fetches the page http://www.cjis.cn/info/zjzx.jsp, the page's HTML contains <meta http-equiv="refresh" content="30;   url=http://www.cjis.cn/info/zjzx.jsp">, so Scrapy keeps re-requesting the same page in a loop until it hits the maximum redirect limit, then gives up and logs:

DEBUG: Discarding <GET http://www.cjis.cn/info/zjzx.jsp>: max redirections reached
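For context, the refresh directive packs a delay and an optional target URL into a single content attribute. A minimal sketch of parsing such a value (a hypothetical helper, not Scrapy's own implementation):

```python
import re

def parse_meta_refresh(content):
    """Split a meta-refresh content attribute such as
    "30;   url=http://www.cjis.cn/info/zjzx.jsp" into (seconds, url).
    Returns None if the value does not start with a delay in seconds."""
    match = re.match(r'\s*(\d+)\s*(?:;\s*url\s*=\s*(\S+))?', content, re.I)
    if not match:
        return None
    seconds = int(match.group(1))
    url = match.group(2)  # None when no url= part is given
    return seconds, url

print(parse_meta_refresh("30;   url=http://www.cjis.cn/info/zjzx.jsp"))
```

When the url part points back at the page itself, as it does here, a crawler that follows it will loop forever unless a redirect cap kicks in.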
To stop this loop, we disable RedirectMiddleware as follows:

In the Scrapy project's settings.py, add the following block:

DOWNLOADER_MIDDLEWARES_BASE = {
    'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
    'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
    #'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
    'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 800,
    'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
    'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
}

Note that the RedirectMiddleware entry is commented out: redefining DOWNLOADER_MIDDLEWARES_BASE in settings.py overrides Scrapy's built-in defaults, so the middleware is never loaded.
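Copying and editing the whole DOWNLOADER_MIDDLEWARES_BASE dict works, but it also freezes every other default in place. A lighter-weight sketch, assuming a Scrapy version that lets DOWNLOADER_MIDDLEWARES map a middleware class to None to disable it:

```python
# settings.py -- disable only RedirectMiddleware, keep all other defaults.
# The class path matches the Scrapy version used in this post; newer
# releases moved it to scrapy.downloadermiddlewares.redirect.
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': None,
}
```

Scrapy's RedirectMiddleware also honors a per-request switch: passing meta={'dont_redirect': True} on a Request skips redirect handling for just that URL, which is handy when only some pages carry a self-refreshing meta tag.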

posted @ 2010-11-25 17:41  老兵笔记