Python — the CrawlSpider class (crawling deeper)

A Crawler's Self-Cultivation, Part 5

1. Introduction to the CrawlSpider class

The following command quickly generates a spider from the CrawlSpider template:

scrapy genspider -t crawl tencent tencent.com
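
Running the command produces a spider skeleton roughly like the one below (the exact template text depends on the Scrapy version, and the allow pattern is only a placeholder to be replaced with your own rule):

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class TencentSpider(CrawlSpider):
    name = 'tencent'
    allowed_domains = ['tencent.com']
    start_urls = ['http://tencent.com/']

    rules = (
        # Placeholder rule from the template: follow links matching the regex
        # and hand the responses to parse_item().
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        # Fill in the item fields here, e.g. item['name'] = response.xpath('...').extract_first()
        return item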

In the previous example we built new URLs with regular expressions and fed them back in as Request parameters; this time we can take a different approach...

class scrapy.spiders.CrawlSpider

It is a subclass of Spider. Spider is designed to crawl only the pages listed in start_urls, whereas CrawlSpider defines a set of rules that provide a convenient mechanism for following links, so it is better suited to extracting links from the crawled pages and continuing the crawl from them.

Source code reference (from an older, Python 2-era Scrapy release)

class CrawlSpider(Spider):
    rules = ()
    def __init__(self, *a, **kw):
        super(CrawlSpider, self).__init__(*a, **kw)
        self._compile_rules()

    # parse() is called first, to handle the responses returned for start_urls.
    # It passes each response to _parse_response(), with parse_start_url() as the callback
    # and the follow flag set to True.
    # parse() therefore yields both items and the follow-up Request objects.
    def parse(self, response):
        return self._parse_response(response, self.parse_start_url, cb_kwargs={}, follow=True)

    # Handles the responses returned for start_urls; override this if you need it.
    def parse_start_url(self, response):
        return []

    def process_results(self, response, results):
        return results

    # Extracts from the response every link that matches any user-defined Rule
    # and wraps each one in a Request object.
    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        # Extract all links in the response; a link is kept as soon as it matches any one of the rules.
        for n, rule in enumerate(self._rules):
            links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
            # Run the user-supplied process_links over the extracted links.
            if links and rule.process_links:
                links = rule.process_links(links)
            # Add each link to the seen set, build a Request for it,
            # and set _response_downloaded() as its callback.
            for link in links:
                seen.add(link)
                # Build the Request; the Rule's own callback is applied later,
                # via the rule index stored in meta.
                r = Request(url=link.url, callback=self._response_downloaded)
                r.meta.update(rule=n, link_text=link.text)
                # Pass each Request through process_request(). By default this is the
                # identity function, i.e. the Request is returned unchanged.
                yield rule.process_request(r)

    # Handles responses downloaded for links extracted by a Rule; yields items and requests.
    def _response_downloaded(self, response):
        rule = self._rules[response.meta['rule']]
        return self._parse_response(response, rule.callback, rule.cb_kwargs, rule.follow)

    # Parses the response with the given callback and yields Request or Item objects.
    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        # First check whether a callback was set (either a Rule callback or parse_start_url()).
        # If so, run it on the response and pass its output through process_results(),
        # which returns an iterable of callback results (cb_res).
        if callback:
            # When called from parse(), the callback typically yields Requests;
            # when it is a Rule callback, it typically yields Items.
            cb_res = callback(response, **cb_kwargs) or ()
            cb_res = self.process_results(response, cb_res)
            for requests_or_item in iterate_spider_output(cb_res):
                yield requests_or_item

        # If following is enabled, extract further Requests using the defined Rules.
        if follow and self._follow_links:
            # Yield each follow-up Request.
            for request_or_item in self._requests_to_follow(response):
                yield request_or_item

    def _compile_rules(self):
        def get_method(method):
            if callable(method):
                return method
            elif isinstance(method, basestring):
                return getattr(self, method, None)

        self._rules = [copy.copy(r) for r in self.rules]
        for rule in self._rules:
            rule.callback = get_method(rule.callback)
            rule.process_links = get_method(rule.process_links)
            rule.process_request = get_method(rule.process_request)

    def set_crawler(self, crawler):
        super(CrawlSpider, self).set_crawler(crawler)
        self._follow_links = crawler.settings.getbool('CRAWLSPIDER_FOLLOW_LINKS', True)

2. LinkExtractors

The purpose of a Link Extractor is simple: extract links.

Every LinkExtractor has a single public method, extract_links(), which takes a Response object and returns a list of scrapy.link.Link objects.

A Link Extractor is instantiated once, and its extract_links() method is then called multiple times, on different responses, to extract links.

Main parameters

class scrapy.linkextractors.LinkExtractor(
    allow = (),             # URLs matching these regular expressions are extracted; empty means match everything
    deny = (),              # URLs matching this regular expression (or list of them) are never extracted
    allow_domains = (),     # only links to these domains are extracted
    deny_domains = (),      # links to these domains are never extracted
    deny_extensions = None,
    restrict_xpaths = (),   # XPath expressions restricting where links are extracted from, combined with allow (usually allow alone is enough)
    tags = ('a', 'area'),
    attrs = ('href',),
    canonicalize = True,
    unique = True,
    process_value = None
)
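
To get a quick feel for how these parameters behave, a LinkExtractor can be tried out by hand, for instance inside scrapy shell. This is just a sketch, assuming response holds an already fetched listing page:

from scrapy.linkextractors import LinkExtractor

# Build the extractor once; extract_links() is then called on each response.
page_lx = LinkExtractor(allow=(r'position\.php\?&start=\d+',))

# extract_links() returns a list of scrapy.link.Link objects,
# each carrying the absolute url and the anchor text.
for link in page_lx.extract_links(response):
    print(link.url, link.text)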

3. Rules

rules contains one or more Rule objects; each Rule defines a particular behaviour for crawling the site. If several rules match the same link, the first one, in the order they are defined in this tuple, is used.

Main parameters

class scrapy.spiders.Rule(
        link_extractor, 
        callback = None, 
        cb_kwargs = None, 
        follow = None, 
        process_links = None, 
        process_request = None
)
  • link_extractor: a Link Extractor object that defines which links to extract.

  • callback: for each link extracted by link_extractor, the value given here is used as the callback; the callback receives a response as its first argument.

    Note: when writing crawl rules, avoid using parse as the callback. CrawlSpider uses the parse method to implement its own logic, so overriding parse will break the crawl spider.

  • follow: a boolean specifying whether links should be followed from the responses extracted with this rule. If callback is None, follow defaults to True; otherwise it defaults to False.

  • process_links: the spider method to call whenever a list of links is obtained from link_extractor; mainly used for filtering.

  • process_request: the spider method to call for every request extracted by this rule (used to filter or tweak requests); see the sketch below.
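
For illustration only, hypothetical process_links / process_request hooks might look like the following sketch (filter_links and tag_request are made-up names; in the Scrapy versions this article is based on, process_request receives just the request object):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ExampleSpider(CrawlSpider):
    name = 'example'
    allowed_domains = ['tencent.com']
    start_urls = ['http://hr.tencent.com/position.php?&start=0']

    rules = (
        Rule(LinkExtractor(allow=r'position\.php\?&start=\d+'),
             callback='parse_item',
             process_links='filter_links',    # called with the list of extracted links
             process_request='tag_request',   # called with every request built from this rule
             follow=True),
    )

    def filter_links(self, links):
        # Drop links we are not interested in and return the rest.
        return [link for link in links if 'start=0' not in link.url]

    def tag_request(self, request):
        # Attach extra meta data (or headers, priority, ...) before the request is scheduled.
        request.meta['from_rule'] = True
        return request

    def parse_item(self, response):
        pass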

Tips

Because CrawlSpider uses the parse method to implement its own logic, overriding parse will cause the crawl spider to fail.

4. Logging

Scrapy provides logging, which is available through the standard logging module.

To turn it on, edit the settings.py configuration file and add the following two lines anywhere:

LOG_FILE = "TencentSpider.log"
LOG_LEVEL = "INFO"

Log levels

Scrapy provides five logging levels:

  • CRITICAL - critical errors
  • ERROR - regular errors
  • WARNING - warning messages
  • INFO - informational messages
  • DEBUG - debugging messages

Logging settings

Logging can be configured with the following settings in settings.py:

  1. LOG_ENABLED - default: True; enables logging.
  2. LOG_ENCODING - default: 'utf-8'; the encoding used for logging.
  3. LOG_FILE - default: None; the file name used for the logging output (created in the current directory).
  4. LOG_LEVEL - default: 'DEBUG'; the minimum level to log.
  5. LOG_STDOUT - default: False; if True, all standard output (and error) of the process is redirected to the log, so for example print("hello") will show up in the Scrapy log.
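
Besides these settings, messages can also be emitted from spider code. A minimal sketch (the spider name here is made up; self.logger is the spider's built-in logger, and the plain logging module works as well):

import logging

import scrapy


class LogDemoSpider(scrapy.Spider):
    name = 'log_demo'
    start_urls = ['http://hr.tencent.com/position.php?&start=0']

    def parse(self, response):
        # Routed through Scrapy's logging system, so LOG_LEVEL / LOG_FILE apply.
        self.logger.info('parsed %s with status %s', response.url, response.status)
        logging.warning('a message from the plain logging module also ends up in the log')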

Example 1: using CrawlSpider to crawl the Tencent recruitment site

Spider module

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor # link-matching class, used to extract the links that satisfy the rules
from scrapy.spiders import CrawlSpider, Rule    # import the CrawlSpider class and Rule
from TencentCrawlSpider.items import TencentcrawlspiderItem


class TencentSpider(CrawlSpider):
    name = 'tencent'
    allowed_domains = ['tencent.com']
    start_urls = ['http://hr.tencent.com/position.php?&start=0']

    rules = (
        Rule(LinkExtractor(allow=r'position\.php\?&start=\d+#a'), callback='parse_item', follow=True),
        # Link extraction rule for the response: returns the list of link objects that match the pattern.
        # Each extracted link is requested in turn, followed further, and handled by the given callback.
        # The r prefix marks a raw string, so backslashes in the regular expression are not escaped by Python.
    )

    # The callback specified in the rule
    def parse_item(self, response):
        for i in response.xpath('//tr[@class="even"] | //tr[@class="odd"]'):
            item = TencentcrawlspiderItem()
            item['name'] = i.xpath(".//a/text()").extract()[0]
            item['link'] = i.xpath(".//a/@href").extract()[0]
            item['type'] = i.xpath("./td[2]/text()").extract()[0]
            item['number'] = i.xpath(".//td[3]/text()").extract()[0]
            item['place'] = i.xpath(".//td[4]/text()").extract()[0]
            item['rtime'] = i.xpath(".//td[5]/text()").extract()[0]
            yield item

Pipeline module

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json

class TencentcrawlspiderPipeline(object):
    def __init__(self):
        self.file = open('tencent-job.json','wb')

    def process_item(self, item, spider):
        text = json.dumps(dict(item),ensure_ascii=False)+'\n'
        self.file.write(text.encode('utf-8'))
        return item

    def close_spider(self, spider):
        self.file.close()
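
A small design note on this pipeline: opening the file in __init__ works, but Scrapy also calls open_spider()/close_spider() hooks on pipelines, so an equivalent sketch that keeps the file handling inside those hooks would be:

import json

class TencentcrawlspiderPipeline(object):
    def open_spider(self, spider):
        # Called once when the spider starts; open the output file here.
        self.file = open('tencent-job.json', 'wb')

    def process_item(self, item, spider):
        text = json.dumps(dict(item), ensure_ascii=False) + '\n'
        self.file.write(text.encode('utf-8'))
        return item

    def close_spider(self, spider):
        # Called once when the spider closes; release the file handle.
        self.file.close()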

items.py

import scrapy

class TencentcrawlspiderItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    link = scrapy.Field()
    type = scrapy.Field()
    number = scrapy.Field()
    place = scrapy.Field()
    rtime = scrapy.Field()

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for TencentCrawlSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'TencentCrawlSpider'

SPIDER_MODULES = ['TencentCrawlSpider.spiders']
NEWSPIDER_MODULE = 'TencentCrawlSpider.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'TencentCrawlSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
# ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'User-Agent':'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  # 'Accept-Language': 'en',
}

LOG_FILE = 'tencentlog.txt'
LOG_LEVEL = 'DEBUG'

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'TencentCrawlSpider.middlewares.TencentcrawlspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'TencentCrawlSpider.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'TencentCrawlSpider.pipelines.TencentcrawlspiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
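
With the spider, item, pipeline and settings in place, the crawl is started from the project directory in the usual way; the pipeline then writes the items to tencent-job.json and the log goes to tencentlog.txt:

scrapy crawl tencent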

Tips

1. What \xa0 and \u3000 mean in scraped text

\xa0 is the non-breaking space.

The space we normally use is \x20, which falls inside the printable ASCII range 0x20~0x7e.
\xa0 belongs to the extended character set of latin1 (ISO/IEC 8859-1) and represents the non-breaking space, nbsp.
The latin1 character set is backward compatible with ASCII (0x20~0x7e); most characters we come across are latin1, for example in MySQL databases.

\u3000 is the full-width space.

According to the Unicode standard and its Basic Multilingual Plane, \u3000 sits in the CJK Symbols and Punctuation block and is one of the whitespace characters. Its name is Ideographic Space, sometimes translated as "ideographic space" or "full-width space". As the name suggests, it is the full-width CJK space. Unlike nbsp, it can be broken across lines, and it is commonly used for indentation (Wikipedia also mentions its use in letter headings, though I have not seen that).

2. response.url    # the URL of the current page

3. Special characters ('.', '?') in an allow regular expression must be escaped with '\'
page_lx = LinkExtractor(allow=('position\.php\?&start=\d+'))

4. Strip whitespace from a string with str.strip() (see the sketch below)
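
Putting tips 1 and 4 together: inside the parse_item loop of example 1, a field could be cleaned before it is stored (a sketch; i is the current table-row selector):

raw = i.xpath("./td[2]/text()").extract()[0]
# Replace non-breaking and full-width spaces, then trim ordinary whitespace.
clean = raw.replace("\xa0", " ").replace("\u3000", " ").strip()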

Example 2: crawling the information inside each post (Dongguan)

Spider module

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from DongguanCrawlSpider.items import DongguancrawlspiderItem


class DongdongSpider(CrawlSpider):
    name = 'dongdong'
    allowed_domains = ['wz.sun0769.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

    # Matching rule for each listing page
    pagelink = LinkExtractor(allow=("type=4"))
    # Matching rule for each post on a listing page
    contentlink = LinkExtractor(allow=(r"/html/question/\d+/\d+\.shtml"))

    rules = (
        Rule(pagelink),
        Rule(contentlink, callback = "parse_item",follow=False)
    )

    def parse_item(self, response):
        item = DongguancrawlspiderItem()
        # Title
        item['title'] = response.xpath('//div[contains(@class, "pagecenter p3")]//strong/text()').extract()[0]
        # Post number
        item['number'] = item['title'].split(' ')[-1].split(":")[-1]
        # Content: first try the rule for posts that contain images;
        # if it matches, a list with all the content is returned
        content = response.xpath('//div[@class="contentext"]/text()').extract()
        # If the list is empty, fall back to the rule for posts without images
        if len(content) == 0:
            content = response.xpath('//div[@class="c1 text14_2"]/text()').extract()
            item['content'] = "".join(content).strip()
        else:
            item['content'] = "".join(content).strip()
        # Link
        item['url'] = response.url

        yield item

Pipeline module

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json

class DongguancrawlspiderPipeline(object):
    def __init__(self):
        self.file = open('dongguan.json','wb')

    def process_item(self, item, spider):
        text = json.dumps(dict(item),ensure_ascii=False)+'\n'
        self.file.write(text.encode('utf-8'))
        return item

    def close_spider(self,spider):
        self.file.close()

items.py

import scrapy

class DongguancrawlspiderItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    content = scrapy.Field()
    url = scrapy.Field()
    number = scrapy.Field()

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for DongguanCrawlSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'DongguanCrawlSpider'

SPIDER_MODULES = ['DongguanCrawlSpider.spiders']
NEWSPIDER_MODULE = 'DongguanCrawlSpider.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'DongguanCrawlSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
# ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'User-Agent':'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  # 'Accept-Language': 'en',
}

LOG_FILE = 'dongguan.log'
LOG_LEVEL = 'DEBUG'

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'DongguanCrawlSpider.middlewares.DongguancrawlspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'DongguanCrawlSpider.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'DongguanCrawlSpider.pipelines.DongguancrawlspiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

1. The extracted links may come back mangled (by the web server), so we can correct the URLs with process_links (you will rarely run into this).

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from DongguanCrawlSpider.items import DongguancrawlspiderItem


class DongdongSpider(CrawlSpider):
    name = 'dongdong'
    allowed_domains = ['wz.sun0769.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

    # Matching rule for each listing page
    pagelink = LinkExtractor(allow=("type=4"))
    # Matching rule for each post on a listing page
    contentlink = LinkExtractor(allow=(r"/html/question/\d+/\d+\.shtml"))

    rules = (
        # The URLs on this site are mangled by the web server, so process_links is used to repair the extracted links
        Rule(pagelink, process_links = "deal_links"),
        Rule(contentlink, callback = "parse_item")
    )

    # links is the list of links extracted from the current response
    def deal_links(self, links):
        for each in links:
            each.url = each.url.replace("?","&").replace("Type&","Type?")
        return links

    def parse_item(self, response):
        ...

2. Rewriting it as a plain Spider class

# -*- coding: utf-8 -*-
import scrapy
from DongguanCrawlSpider.items import DongguancrawlspiderItem


class DongdongSpider(scrapy.Spider):
    name = 'xixi'
    allowed_domains = ['wz.sun0769.com']
    url = 'http://wz.sun0769.com/index.php/question/questionType?type=4&page='
    offset = 0
    start_urls = [url + str(offset)]


    def parse(self, response):
        # All post links on the current listing page
        links = response.xpath('//div[@class="greyframe"]/table//td/a[@class="news14"]/@href').extract()
        # Iterate over the extracted links
        for link in links:
            # For each post link, enqueue a request and handle its response with self.parse_item
            yield scrapy.Request(link, callback = self.parse_item)

        # Until the last page is reached, keep incrementing offset and requesting the next listing page, handled again by parse
        if self.offset <= 71160:
            self.offset += 30
            # Enqueue the request for the next page; self.parse handles the response
            yield scrapy.Request(self.url + str(self.offset), callback = self.parse)

    # Handle the response of each individual post
    def parse_item(self, response):
        item = DongguancrawlspiderItem()
        # Title
        item['title'] = response.xpath('//div[contains(@class, "pagecenter p3")]//strong/text()').extract()[0]
        # Post number
        item['number'] = item['title'].split(' ')[-1].split(":")[-1]
        # Content: first try the rule for posts that contain images;
        # if it matches, a list with all the content is returned
        content = response.xpath('//div[@class="contentext"]/text()').extract()
        # If the list is empty, fall back to the rule for posts without images
        if len(content) == 0:
            content = response.xpath('//div[@class="c1 text14_2"]/text()').extract()
            item['content'] = "".join(content).strip()
        else:
            item['content'] = "".join(content).strip()
        # Link
        item['url'] = response.url

        # Hand the item over to the pipeline
        yield item

Tips:

lst = ["a", "b", "c"]
string = "123".join(lst)
print(string)
>> a123b123c

string.replace("\xa0", "")	# replace the non-breaking space with an empty string

string.strip()		# strip whitespace from both ends
string.lstrip()		# strip whitespace from the left (leading)
string.rstrip()		# strip whitespace from the right (trailing)

 
