Summary: Storing Scrapy data in MySQL. #spider.py from scrapy.linkextractors import LinkExtractor from scrapy.spiders import CrawlSpider, Rule from Cwpjt.items import CwpjtI… Read more
posted @ 2019-05-07 18:35 三冬三夏 Views(124) Comments(0) Digg(0)
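The excerpt above is truncated, but the topic is writing Scrapy items into MySQL from a CrawlSpider project. Below is a minimal pipeline sketch, assuming a pymysql connection and a hypothetical news table with title and content columns; the connection parameters, table name, and item fields are illustrative, not taken from the original post.

# pipelines.py -- illustrative MySQL storage pipeline (not the original post's code):
# open a pymysql connection when the spider starts, insert each item, close on finish.
import pymysql


class MysqlPipeline(object):
    def open_spider(self, spider):
        # Hypothetical connection parameters; adjust host/user/password/db to your setup.
        self.conn = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                                    password='root', db='scrapy_db', charset='utf8mb4')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # Assumes the item exposes 'title' and 'content' fields and a matching 'news' table exists.
        sql = 'INSERT INTO news (title, content) VALUES (%s, %s)'
        self.cursor.execute(sql, (item.get('title'), item.get('content')))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()

To activate a pipeline like this, register it in settings.py, e.g. ITEM_PIPELINES = {'Cwpjt.pipelines.MysqlPipeline': 300} (the class name here is hypothetical; Cwpjt is the project name that appears in the excerpt's import).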
Summary: DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'  SCHEDULER = 'scrapy_redis.scheduler.Scheduler'  SCHEDULER_PERSIST = True  REDIS_URL = 'redis://127… Read more
posted @ 2019-05-07 17:09 三冬三夏 Views(436) Comments(1) Digg(0)
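The settings quoted in that excerpt are the core scrapy-redis options for a distributed crawl. A fuller settings.py sketch follows: the first four lines mirror the excerpt, while the Redis URL value, the item pipeline entry, and the spider example are assumptions (a local Redis on the default port), not values taken from the original post.

# settings.py -- scrapy-redis configuration sketch.
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'   # share the request-fingerprint set via Redis
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'               # share the request queue via Redis
SCHEDULER_PERSIST = True                                     # keep queue and fingerprints after the crawl stops
REDIS_URL = 'redis://127.0.0.1:6379'                         # hypothetical local Redis; the excerpt truncates here

# Optional companion setting (an assumption, not shown in the excerpt):
# push scraped items into a Redis list as well.
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400,
}

With these settings, every spider process pointed at the same Redis instance shares one request queue and one duplicate filter, which is what makes the crawl distributed. A spider typically subclasses RedisSpider and reads its start URLs from a Redis key, as in this sketch (spider name and key are hypothetical):

# spiders/example.py
from scrapy_redis.spiders import RedisSpider


class ExampleSpider(RedisSpider):
    name = 'example'
    redis_key = 'example:start_urls'   # seed URLs with: LPUSH example:start_urls <url>

    def parse(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}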