A CrawlSpider Example with the Scrapy Framework
In a CrawlSpider-based spider:
rules defines the URL-extraction rules; it is a tuple, so the rules are applied in order
#LinkExtractor  link extractor: pulls URL addresses out of responses
#callback       the response for each extracted URL is handed to this callback
#follow         whether responses for the current URLs are run back through rules to extract further URLs
The concrete implementation in cf.py is as follows (simplified):
# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CfSpider(CrawlSpider):
    name = 'cf'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/web/site0/tab5240/Default.htm']

    rules = (
        # Detail pages: each matching response is handed to parse_item
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/info\d+\.htm'), callback='parse_item'),
        # Pagination pages: no callback, but follow them so rules keeps extracting links
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/module14430/page\d+\.htm'), follow=True),
    )

    def parse_item(self, response):
        body = response.body.decode()  # decode once, reuse for both patterns
        item = {}
        item['title'] = re.findall(r"<!--TitleStart-->(.*?)<!--TitleEnd-->", body)[0]
        item['publish_date'] = re.findall(r"发布时间:(20\d{2}-\d{2}-\d{2})", body)[0]
        print(item)
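The two re.findall patterns in parse_item can be sanity-checked against a hypothetical page fragment (the HTML below is made up for illustration; real detail pages on the site wrap the title in TitleStart/TitleEnd comment markers):

```python
import re

# Hypothetical snippet imitating the detail-page markup (not real page content)
body = (
    "<p><!--TitleStart-->Some Notice Title<!--TitleEnd--></p>"
    "<span>发布时间:2019-05-20</span>"
)

title = re.findall(r"<!--TitleStart-->(.*?)<!--TitleEnd-->", body)[0]
publish_date = re.findall(r"发布时间:(20\d{2}-\d{2}-\d{2})", body)[0]
print(title)         # Some Notice Title
print(publish_date)  # 2019-05-20
```

The non-greedy `(.*?)` matters here: a greedy `(.*)` would overrun to the last TitleEnd marker if the page contained more than one.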