Scrapy pipeline persistent storage
items.py: the data structure template file; it defines the fields of the data to be stored.
pipelines.py: the pipeline file; it receives the data (items) and performs the persistence operations.
Persistence workflow
1. After the spider file crawls the data, encapsulate the data into an Item object.
2. Use the yield keyword to submit the Item object to the pipeline for persistence.
3. In the pipeline file's process_item method, receive the Item object submitted by the spider and write the persistence code that stores the data carried by the item.
4. Enable the pipeline in the settings.py configuration file.
Example: crawl the jokes and their authors from the Qiushibaike front page and persist the data.
Spider file: qiushibaike.py
import scrapy
from secondblood.items import SecondbloodItem


class QiubaidemoSpider(scrapy.Spider):
    name = 'qiubaiDemo'
    allowed_domains = ['www.qiushibaike.com']
    start_urls = ['http://www.qiushibaike.com/']

    def parse(self, response):
        odiv = response.xpath('//div[@id="content-left"]/div')
        for div in odiv:
            # xpath() returns a list of Selector objects; the parsed content is
            # wrapped inside a Selector, so call extract_first()/extract() to pull it out.
            author = div.xpath('.//div[@class="author clearfix"]//h2/text()').extract_first()
            author = (author or '').strip('\n')  # filter blank lines (guard against a missing author)
            content = div.xpath('.//div[@class="content"]/span/text()').extract_first()
            content = (content or '').strip('\n')  # filter blank lines

            # Encapsulate the parsed data into an Item object
            item = SecondbloodItem()
            item['author'] = author
            item['content'] = content

            yield item  # submit the item to the pipeline file (pipelines.py)
Items file: items.py
import scrapy


class SecondbloodItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    author = scrapy.Field()   # stores the author
    content = scrapy.Field()  # stores the joke text
Pipeline file: pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class SecondbloodPipeline(object):
    # Constructor
    def __init__(self):
        self.fp = None  # file-handle attribute

    # The methods below all override the parent class:
    # Called once when the spider starts
    def open_spider(self, spider):
        print('spider started')
        self.fp = open('./data.txt', 'w', encoding='utf-8')

    # This method is called once for every item, which is why opening and
    # closing the file happen in the two methods that each run only once.
    def process_item(self, item, spider):
        # Persist the item submitted by the spider
        self.fp.write(item['author'] + ':' + item['content'] + '\n')
        return item

    # Called once when the spider finishes
    def close_spider(self, spider):
        self.fp.close()
        print('spider finished')
Configuration file: settings.py
# Enable the pipeline
ITEM_PIPELINES = {
    'secondblood.pipelines.SecondbloodPipeline': 300,  # 300 is the priority; the lower the value, the higher the priority
}
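With the pipeline enabled, the example can be run from the project root with the command scrapy crawl qiubaiDemo; each item yielded by parse() then flows through SecondbloodPipeline and ends up in data.txt.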
Pipeline storage based on MySQL
Write the item data into a MySQL database.
Pipeline file: pipelines.py
# Import the database driver
import pymysql


class QiubaiproPipelineByMysql(object):

    conn = None    # declares the MySQL connection object
    cursor = None  # declares the MySQL cursor object

    def open_spider(self, spider):
        print('spider started')
        # Connect to the database
        self.conn = pymysql.Connect(host='127.0.0.1', port=3306, user='root', password='123456', db='qiubai')

    # Write the code that stores the data into the database
    def process_item(self, item, spider):
        # Execute the SQL statement; parameterized placeholders avoid quoting
        # problems and SQL injection
        sql = 'insert into qiubai values (%s, %s)'
        self.cursor = self.conn.cursor()
        # Run it as a transaction
        try:
            self.cursor.execute(sql, (item['author'], item['content']))
            self.conn.commit()
        except Exception as e:
            print(e)
            self.conn.rollback()
        return item

    def close_spider(self, spider):
        print('spider finished')
        self.cursor.close()
        self.conn.close()
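This pipeline assumes a qiubai database that already contains a two-column qiubai table. A minimal one-off setup sketch (the column names and types here are assumptions, not taken from the original project):

import pymysql

# One-off setup for the table the pipeline writes to; the connection
# parameters mirror the pipeline above, the schema itself is an assumption.
conn = pymysql.Connect(host='127.0.0.1', port=3306, user='root', password='123456')
cursor = conn.cursor()
cursor.execute('create database if not exists qiubai character set utf8mb4')
cursor.execute('create table if not exists qiubai.qiubai (author varchar(100), content text)')
conn.commit()
cursor.close()
conn.close()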
Configuration file: settings.py
ITEM_PIPELINES = {
    'qiubaiPro.pipelines.QiubaiproPipelineByMysql': 300,
}
Pipeline storage based on Redis
Pipeline file: pipelines.py
import json

import redis


class QiubaiproPipelineByRedis(object):

    conn = None

    def open_spider(self, spider):
        print('spider started')
        # Create the connection object
        self.conn = redis.Redis(host='127.0.0.1', port=6379)

    def process_item(self, item, spider):
        data = {
            'author': item['author'],
            'content': item['content'],
        }
        # redis-py cannot store a Python dict directly, so serialize it to
        # JSON before pushing it onto the 'data' list in Redis
        self.conn.lpush('data', json.dumps(data))
        return item
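To check what was written, the entries can be read back from the 'data' list; a quick verification sketch (the key name matches the pipeline above):

import json

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
# lrange(key, 0, -1) returns every element of the list (as bytes)
for raw in conn.lrange('data', 0, -1):
    entry = json.loads(raw)
    print(entry['author'] + ':' + entry['content'])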
Configuration file: settings.py
ITEM_PIPELINES = {
    'qiubaiPro.pipelines.QiubaiproPipelineByRedis': 300,
}
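Because the priority values determine the order of execution, the MySQL and Redis pipelines of the qiubaiPro project can also be enabled together; each enabled pipeline receives the item in ascending priority order, which is why process_item must return the item. A sketch (the specific priority values are arbitrary choices):

ITEM_PIPELINES = {
    # lower value = higher priority: the MySQL pipeline runs first
    'qiubaiPro.pipelines.QiubaiproPipelineByMysql': 300,
    'qiubaiPro.pipelines.QiubaiproPipelineByRedis': 301,
}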