While writing a crawler with Scrapy I hit a problem that took quite a while to solve. The tricky part was that the logs gave no hint of what was wrong; I only found the cause by reading the source code. Here is a record of the symptom and the fix.
Symptom:
A page contains a large number of links whose URLs match the regular expression r"en/descriptions/[\d]+/[-:\.\w]+$". Most of them are crawled fine, but some, such as en/descriptions/32725456/not-a-virus:Client-SMTP.Win32.Blat.ai and en/descriptions/33444568/not-a-virus:Client-SMTP.Win32.Blat.au, are never fetched, and the log contains no hint at all.
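A quick standalone check, independent of Scrapy, confirms that the pattern itself matches the two problem URLs, so the regex is not at fault:

import re

pattern = re.compile(r"en/descriptions/[\d]+/[-:\.\w]+$")
for url in ("en/descriptions/32725456/not-a-virus:Client-SMTP.Win32.Blat.ai",
            "en/descriptions/33444568/not-a-virus:Client-SMTP.Win32.Blat.au"):
    print url, bool(pattern.search(url))   # both print True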
Analysis:
My first suspicion was the CrawlSpider Rule definitions, but testing showed the Rules were fine. That left the SgmlLinkExtractor configuration. Tracing with a process_value callback showed that SgmlLinkExtractor parsed the links without any problem, yet some of them were discarded before being returned.
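For reference, this is roughly how process_value can serve as a trace hook (trace_value is my own name for it; process_value is called with each extracted attribute value before any filtering, so printing there proves the links really are parsed):

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

def trace_value(value):
    # runs once per extracted link value, before the filtering step
    print "extracted:", value
    return value

SgmlLinkExtractor(allow=(r"en/descriptions/[\d]+/[-:\.\w]+$", ),
                  process_value=trace_value)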
Reading the source revealed the cause: by default, scrapy.contrib.linkextractors.sgml.SgmlLinkExtractor sets deny_extensions to scrapy.linkextractor.IGNORED_EXTENSIONS. During extract_links, the extractor calls _process_links, which calls _link_allowed; _link_allowed filters every link against a series of rules, one of which is the deny_extensions check. The default IGNORED_EXTENSIONS list includes both ai and au, so links ending in .ai or .au are silently dropped. That is the real source of the problem.
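The drop can be reproduced outside Scrapy. Assuming url_has_any_extension does the usual thing of splitting the trailing extension off the URL path (the host below is a placeholder, not the real site), a short Python 2 sketch shows why these links die:

import posixpath
from urlparse import urlparse                       # Python 2, matching this Scrapy version
from scrapy.linkextractor import IGNORED_EXTENSIONS

url = "http://www.example.com/en/descriptions/32725456/not-a-virus:Client-SMTP.Win32.Blat.ai"
deny = set('.' + e for e in IGNORED_EXTENSIONS)     # same construction as the extractor
ext = posixpath.splitext(urlparse(url).path)[1]     # -> '.ai'
print ext in deny                                   # True, so the link is silently dropped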
Solution:
Based on the source analysis, the fix is simply to override deny_extensions when defining the SgmlLinkExtractor. For example:
rules = (
    Rule(SgmlLinkExtractor(allow=(r"en/descriptions\?", )),
         follow=True),
    Rule(SgmlLinkExtractor(allow=(r"en/descriptions/[\d]+/[-:\.\w]+$", ),
                           deny_extensions=""),
         callback="parse_item", follow=True),
)
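Passing deny_extensions="" works because the constructor builds set(['.' + e for e in deny_extensions]) from it, which yields an empty set and disables extension filtering altogether. If you would rather keep the default filtering and rescue only the two offending extensions, a narrower variant (my own sketch, not the fix used above) looks like this:

from scrapy.contrib.spiders import Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.linkextractor import IGNORED_EXTENSIONS

# keep the default filter, but stop treating 'ai' and 'au' as file extensions
safe_extensions = [e for e in IGNORED_EXTENSIONS if e not in ('ai', 'au')]
Rule(SgmlLinkExtractor(allow=(r"en/descriptions/[\d]+/[-:\.\w]+$", ),
                       deny_extensions=safe_extensions),
     callback="parse_item", follow=True)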
The relevant parts of the Scrapy source:
scrapy.contrib.linkextractors.sgml.SgmlLinkExtractor:
class SgmlLinkExtractor(BaseSgmlLinkExtractor):

    def __init__(self, allow=(), deny=(), allow_domains=(), deny_domains=(),
                 restrict_xpaths=(), tags=('a', 'area'), attrs=('href'),
                 canonicalize=True, unique=True, process_value=None,
                 deny_extensions=None):
        self.allow_res = [x if isinstance(x, _re_type) else re.compile(x)
                          for x in arg_to_iter(allow)]
        self.deny_res = [x if isinstance(x, _re_type) else re.compile(x)
                         for x in arg_to_iter(deny)]
        self.allow_domains = set(arg_to_iter(allow_domains))
        self.deny_domains = set(arg_to_iter(deny_domains))
        self.restrict_xpaths = tuple(arg_to_iter(restrict_xpaths))
        self.canonicalize = canonicalize
        # the default in question: when deny_extensions is not given, every
        # extension in IGNORED_EXTENSIONS (including 'ai' and 'au') is denied
        if deny_extensions is None:
            deny_extensions = IGNORED_EXTENSIONS
        self.deny_extensions = set(['.' + e for e in deny_extensions])
        tag_func = lambda x: x in tags
        attr_func = lambda x: x in attrs
        BaseSgmlLinkExtractor.__init__(self, tag=tag_func, attr=attr_func,
                                       unique=unique, process_value=process_value)

    def extract_links(self, response):
        base_url = None
        if self.restrict_xpaths:
            hxs = HtmlXPathSelector(response)
            html = ''.join(''.join(html_fragm for html_fragm in
                                   hxs.select(xpath_expr).extract())
                           for xpath_expr in self.restrict_xpaths)
            base_url = get_base_url(response)
        else:
            html = response.body
        links = self._extract_links(html, response.url, response.encoding, base_url)
        links = self._process_links(links)
        return links

    def _process_links(self, links):
        links = [x for x in links if self._link_allowed(x)]
        links = BaseSgmlLinkExtractor._process_links(self, links)
        return links

    def _link_allowed(self, link):
        parsed_url = urlparse(link.url)
        allowed = _is_valid_url(link.url)
        if self.allow_res:
            allowed &= _matches(link.url, self.allow_res)
        if self.deny_res:
            allowed &= not _matches(link.url, self.deny_res)
        if self.allow_domains:
            allowed &= url_is_from_any_domain(parsed_url, self.allow_domains)
        if self.deny_domains:
            allowed &= not url_is_from_any_domain(parsed_url, self.deny_domains)
        # the check that silently drops the .ai/.au links
        if self.deny_extensions:
            allowed &= not url_has_any_extension(parsed_url, self.deny_extensions)
        if allowed and self.canonicalize:
            link.url = canonicalize_url(parsed_url)
        return allowed

    def matches(self, url):
        if self.allow_domains and not url_is_from_any_domain(url, self.allow_domains):
            return False
        if self.deny_domains and url_is_from_any_domain(url, self.deny_domains):
            return False
        allowed = [regex.search(url) for regex in self.allow_res] if self.allow_res else [True]
        denied = [regex.search(url) for regex in self.deny_res] if self.deny_res else []
        return any(allowed) and not any(denied)
scrapy.linkextractor.IGNORED_EXTENSIONS:
IGNORED_EXTENSIONS = [
    # images
    'mng', 'pct', 'bmp', 'gif', 'jpg', 'jpeg', 'png', 'pst', 'psp', 'tif',
    'tiff', 'ai', 'drw', 'dxf', 'eps', 'ps', 'svg',

    # audio
    'mp3', 'wma', 'ogg', 'wav', 'ra', 'aac', 'mid', 'au', 'aiff',

    # video
    '3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf',
    'wmv', 'm4a',

    # office suites
    'xls', 'ppt', 'doc', 'docx', 'odt', 'ods', 'odg', 'odp',

    # other
    'css', 'pdf', 'doc', 'exe', 'bin', 'rss', 'zip', 'rar',
]