
Python Learning --- Day 96

Reposted from: http://www.cnblogs.com/wupeiqi/articles/6229292.html

Scrapy

Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, such as data mining, information processing, and archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy is versatile and can be used for data mining, monitoring, and automated testing.

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows:

[Architecture diagram: the engine connected to the scheduler, downloader, spiders, and item pipeline, with middlewares in between]

Scrapy mainly consists of the following components:

  • Engine (Scrapy)
    Handles the data flow of the entire system and triggers events (the core of the framework).
  • Scheduler
    Accepts requests sent over by the engine, pushes them into a queue, and returns them when the engine asks again. You can picture it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to crawl next and weeds out duplicate URLs.
  • Downloader
    Downloads page content and returns it to the spiders (the Scrapy downloader is built on Twisted, an efficient asynchronous model).
  • Spiders
    Spiders do the main work: they extract the information you need from specific pages, the so-called items (Item). You can also extract links from the pages so that Scrapy goes on to crawl the next page.
  • Item Pipeline
    Processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and stripping out unneeded data. After a page has been parsed by a spider, its items are sent to the pipeline and processed through several specific steps in order.
  • Downloader Middlewares
    A framework that sits between the Scrapy engine and the downloader and mainly processes the requests and responses passing between them (a minimal sketch follows this list).
  • Spider Middlewares
    A framework between the Scrapy engine and the spiders whose main job is to process the spiders' response input and request output.
  • Scheduler Middlewares
    Middleware between the Scrapy engine and the scheduler that processes the requests and responses sent from the engine to the scheduler.
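
As an illustration of where downloader middlewares hook in, here is a minimal sketch; the class name, header value, and log message are invented for this example, and the middleware would still need to be enabled via DOWNLOADER_MIDDLEWARES in settings.py:

class ExampleUserAgentMiddleware(object):
    # Hypothetical middleware: the name and header value are illustrative only
    def process_request(self, request, spider):
        # Called for every request on its way to the downloader
        request.headers.setdefault('User-Agent', 'day96-example-agent')
        return None  # None means: keep processing this request normally

    def process_response(self, request, response, spider):
        # Called for every response on its way back to the spider
        spider.logger.debug('Downloaded %s (%s)', response.url, response.status)
        return response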

Scrapy's run flow is roughly as follows:

    1. The engine takes a URL from the scheduler for the next page to crawl.
    2. The engine wraps the URL in a request (Request) and passes it to the downloader.
    3. The downloader fetches the resource and wraps it in a response (Response).
    4. The spider parses the Response.
    5. If items (Item) are parsed out, they are handed to the item pipeline for further processing.
    6. If URLs are parsed out, they are handed to the scheduler to wait for crawling.

A first spider that prints the titles on the Chouti front page:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector

class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['chouti.com']
    start_urls = ['https://dig.chouti.com/']

    def parse(self, response):
        # hxs = Selector(response=response).xpath('//a').extract()
        # for i in hxs:
        #     print(i)  # tag objects
        # extract() converts the selected objects to strings
        hxs = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')
        for obj in hxs:
            # tag object: grab the text of each item
            a = obj.xpath('.//a[@class="show-content color-chag"]/text()').extract_first()
            if a:  # extract_first() returns None when nothing matches
                print(a.strip())
'''
Selectors:
//                   among all descendants
.//                  among the descendants of the current node
/                    among direct children
/div                 find div tags among the children
/div[@id="i1"]       find div tags among the children whose id is "i1"
obj.extract()        convert every object in the list to a string
obj.extract_first()  convert the first object in the list to a string
//div/text()         get the text of a tag
'''
chouti.py
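
For context, a project like this is normally generated with Scrapy's command-line tools before the spider is edited. The project name day96 matches the settings file later in this post; the commands below are the standard Scrapy CLI, shown here as a reminder:

cmd>>scrapy startproject day96
cmd>>cd day96
cmd>>scrapy genspider chouti chouti.com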

 ##################################################################

Crawling all the news on the Chouti site:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector
from scrapy.http import Request
from ..items import ChoutiItem

class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['chouti.com']
    start_urls = ['https://dig.chouti.com/']
    vis_urls = set()  # md5 digests of the pagination URLs already queued

    def parse(self, response):
        hxs1 = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')
        for obj in hxs1:
            title = obj.xpath('.//a[@class="show-content color-chag"]/text()').extract_first().strip()
            href = obj.xpath('.//a[@class="show-content color-chag"]/@href').extract_first().strip()
            item_obj = ChoutiItem(title=title, href=href)
            # persist by handing the item to the pipeline
            yield item_obj

        # Get all the page-number links on the current page
        # hxs = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        # hxs = Selector(response=response).xpath('//a[starts-with(@href,"/all/hot/recent")]/@href').extract()
        hxs2 = Selector(response=response).xpath(r'//a[re:test(@href,"/all/hot/recent/\d+")]/@href').extract()
        for url in hxs2:
            md5_url = self.md5(url)  # use md5 to normalize the string length
            if md5_url in self.vis_urls:
                pass
            else:
                print(url)
                self.vis_urls.add(md5_url)
                url = 'https://dig.chouti.com%s' % url
                # By convention, yielded Requests go back to the scheduler;
                # the engine picks up the new URL and queues it for crawling
                yield Request(url=url, callback=self.parse)

    def md5(self, url):
        # Hash the URL so that every entry in vis_urls has a fixed length
        import hashlib
        obj = hashlib.md5()
        obj.update(bytes(url, encoding='utf-8'))
        return obj.hexdigest()
chouti.py
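
Worth noting: Scrapy's scheduler already deduplicates requests on its own (controlled by the DUPEFILTER_CLASS setting, scrapy.dupefilters.RFPDupeFilter by default), so the manual vis_urls set above mirrors behavior the framework provides. The built-in filter can also be bypassed per request:

# dont_filter=True asks the scheduler to skip its duplicate filter
# for this one request (useful for deliberate re-visits)
yield Request(url=url, callback=self.parse, dont_filter=True)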
# -*- coding: utf-8 -*-

# Scrapy settings for day96 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'day96'

SPIDER_MODULES = ['day96.spiders']
NEWSPIDER_MODULE = 'day96.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'day96 (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'day96.middlewares.Day96SpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'day96.middlewares.Day96DownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'day96.pipelines.Day96Pipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
settings.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class Day96Pipeline(object):
    def process_item(self, item, spider):
        print(spider, item)
        # Append "title\nhref\n\n" for each item; despite the .json name,
        # the output is plain text
        tpl = "%s\n%s\n\n" % (item['title'], item['href'])
        f = open('news.json', 'a')
        f.write(tpl)
        f.close()
        return item  # pass the item on to any later pipelines
pipelines.py
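
Opening and closing the file for every single item is wasteful. A sketch of the more idiomatic pattern, using the open_spider/close_spider hooks that Scrapy calls on every pipeline, could look like this (same file name as above):

class Day96Pipeline(object):
    def open_spider(self, spider):
        # Called once when the spider starts: open the file a single time
        self.f = open('news.json', 'a')

    def process_item(self, item, spider):
        self.f.write("%s\n%s\n\n" % (item['title'], item['href']))
        return item

    def close_spider(self, spider):
        # Called once when the spider finishes
        self.f.close()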
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class ChoutiItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    href = scrapy.Field()
items.py
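
Item objects behave like dicts, which is why the pipeline indexes them with item['title']; a quick illustration (the values here are made up):

item = ChoutiItem(title='example title', href='https://example.com')
print(item['title'])  # fields are read with dict-style indexing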

cmd>>scrapy crawl chouti --nolog
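
Alternatively, Scrapy's built-in feed exports can write well-formed JSON without a custom pipeline; -o is a standard option of the crawl command (the output file name here is just an example):

cmd>>scrapy crawl chouti -o news_feed.json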

 
