python-crawler-scrapy
Getting started:
Install: pip install scrapy
Create a project: scrapy startproject <project_name>
Spider: scrapy genspider <spider_name> <url> (--nolog // optional, suppresses log output; the generated spider skeleton is sketched below)
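For reference, scrapy genspider fills in a spider module roughly like the following; the example name and domain here are placeholders:
import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'                      # spider name used with "scrapy crawl example"
    allowed_domains = ['example.com']     # requests outside these domains are filtered out
    start_urls = ['http://example.com/']  # initial URLs the engine schedules first

    def parse(self, response):
        # default callback for responses to start_urls
        pass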
Overview:
Persistent storage:
1: Command-line export: scrapy crawl <spider_name> -o aaa.json (the extension must be a feed format Scrapy recognizes, e.g. .json, .csv, .xml)
2: Pipeline storage: the item object is the dict-like {} passed over from the spider, which the pipeline then stores
3: open_spider() ----> open the database connection, close_spider() --> close the connection, process_item() ---> store each item (see the sketch below)
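A minimal pipeline sketch following that three-method pattern, assuming a hypothetical MongoDB backend (pymongo; the database and collection names are illustrative). Remember to register it under ITEM_PIPELINES in settings.py:
import pymongo

class MongoPipeline(object):
    def open_spider(self, spider):
        # connect to the database once, when the spider starts
        self.client = pymongo.MongoClient('localhost', 27017)
        self.collection = self.client['scrapy_db']['items']

    def close_spider(self, spider):
        # close the connection when the spider finishes
        self.client.close()

    def process_item(self, item, spider):
        # store one item per call; return it so later pipelines can run
        self.collection.insert_one(dict(item))
        return item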
Proxy IP:
1. Define a custom downloader middleware
middlewares.py --->
class MyProxy(object):
    def process_request(self, request, spider):
        # swap in a proxy IP for this request
        request.meta['proxy'] = "http://202.112.51.51:8082"
2. Enable the downloader middleware in settings.py
DOWNLOADER_MIDDLEWARES = {
'firstBlood.middlewares.MyProxy': 543,
}
Log levels:
1
ERROR: errors
WARNING: warnings
INFO: general information
DEBUG: debugging information (default)
Set the log level:
settings: LOG_LEVEL = 'ERROR'
Write log output to a specified file:
settings: LOG_FILE = 'log.txt'
2 Passing data to a second-level request
yield scrapy.Request(url=url, callback=self.secondParse, meta={'item': item})
Retrieve it in the second callback: item = response.meta['item']
(a two-level sketch follows below)
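A minimal two-callback sketch of this meta hand-off; the spider name, URLs, XPath expressions, and the 'title'/'detail' fields are illustrative assumptions:
import scrapy

class DemoSpider(scrapy.Spider):
    name = 'demo'
    start_urls = ['http://example.com/list']

    def parse(self, response):
        for li in response.xpath('//li'):
            item = {'title': li.xpath('./a/text()').extract_first()}
            detail_url = response.urljoin(li.xpath('./a/@href').extract_first())
            # carry the half-filled item to the second-level callback via meta
            yield scrapy.Request(url=detail_url, callback=self.secondParse, meta={'item': item})

    def secondParse(self, response):
        item = response.meta['item']  # take the item back out of meta
        item['detail'] = response.xpath('//div[@class="detail"]//text()').extract_first()
        yield item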
Passing request parameters (POST):
Method 1: use scrapy.Request(method='POST') (a sketch follows after the Method 2 code below)
Method 2: override the start_requests(self) method (recommended)
class FanyiSpider(scrapy.Spider):
    def start_requests(self):
        data = {
            'kw': 'dog'
        }
        for url in self.start_urls:
            # FormRequest sends a POST request with form data
            yield scrapy.FormRequest(url=url, formdata=data, callback=self.parse)
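For Method 1, a hedged sketch of a raw POST built with scrapy.Request; the URL, JSON body, and headers are illustrative assumptions rather than part of the original notes:
import json
import scrapy

class PostSpider(scrapy.Spider):
    name = 'post_demo'
    start_urls = ['http://example.com/api']

    def start_requests(self):
        payload = {'kw': 'dog'}
        for url in self.start_urls:
            # scrapy.Request sends the body as-is, so serialize it and
            # set the Content-Type header ourselves
            yield scrapy.Request(
                url=url,
                method='POST',
                body=json.dumps(payload),
                headers={'Content-Type': 'application/json'},
                callback=self.parse,
            )

    def parse(self, response):
        self.logger.info(response.text)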
CrawlSpider:
Normally, multi-level crawling means one callback method per level, or a recursive callback ---> yield scrapy.Request(url, callback, meta)
There are several special request patterns:
1: The initial request seeds a request queue; the spider extracts a URL list, keeps requesting those pages, and extracts a new URL list from each new page
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
# adjust this import path to your own project's items.py
from phpmaster.items import PhpmasterItem

class CrawlspiderSpider(CrawlSpider):
    name = 'crawlSpider'
    start_urls = ['https://www.qiushibaike.com/text']
    rules = (
        Rule(LinkExtractor(allow=r'/text/page/\d+'), callback='parse_item', follow=True),
    )
    '''
    LinkExtractor : rules (regular expressions) for extracting links
        allow=(),            : URLs that may be extracted
        restrict_xpaths=(),  : only extract links inside the tags matched by this XPath
        restrict_css=(),     : only extract links inside the tags matched by this CSS selector
        deny=(),             : URLs that must not be extracted (higher priority than allow)
        allow_domains=(),    : domains from which URLs may be extracted
        deny_domains=(),     : domains from which URLs must not be extracted (higher priority than allow_domains)
        unique=True,         : keep only one copy of duplicate URLs
        strip=True           : default True, strip leading/trailing whitespace from URLs
    '''
    '''
    Rule
        link_extractor,           : a LinkExtractor object
        callback=None,            : callback function
        follow=None,              : whether to keep following links from matched pages
        process_links=None,       : optional callback that intercepts all extracted links
        process_request=identity  : optional callback that intercepts the request objects
    '''
    def parse_item(self, response):
        div_list = response.xpath('//div[@id="content-left"]/div')
        for div in div_list:
            item = PhpmasterItem()
            author = div.xpath('./div/a[2]/h2/text()').extract_first()
            item['author'] = str(author).strip()
            # print(author)
            content = div.xpath('./a[1]/div/span/text()').extract()
            content = ''.join(content)
            item['content'] = str(content).strip()
            yield item
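The PhpmasterItem used above lives in the project's items.py; a minimal sketch assuming only the two fields the spider fills in:
import scrapy

class PhpmasterItem(scrapy.Item):
    # the declared fields must match the keys assigned in parse_item
    author = scrapy.Field()
    content = scrapy.Field()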
2: When downloading images, pass img_url through the item to the pipeline and issue the download request inside the pipeline (the request is pushed down to the next stage)
Spider:: yield the item carrying item['img_url']
Settings:: IMAGES_STORE = './images/'
Pipeline::
from qiubaipic.settings import IMAGES_STORE as images_store
from scrapy.pipelines.images import ImagesPipeline
import scrapy

class QiubaipicPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # prepend the scheme and hand the image URL to the downloader
        img_link = "http:" + item['img_link']
        yield scrapy.Request(img_link)
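As with any pipeline, this one only runs once it is registered in settings.py; the priority 300 is just a conventional value:
ITEM_PIPELINES = {
    'qiubaipic.pipelines.QiubaipicPipeline': 300,
}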
Grouping images into sub-folders:
# inside QiubaipicPipeline; needs "import os" and IMAGES_STORE imported from settings
def file_path(self, request, response=None, info=None):
    '''Build the storage path for one image'''
    img_name = request.url.split('/')[-1]      # image file name
    file_name = request.meta['file_name']      # sub-folder for this image
    image_guid = file_name + '/' + img_name    # e.g. 天价世界名画/2560580770.jpg
    img_path = IMAGES_STORE + file_name + '/'  # e.g. ./images/天价世界名画/ ; must exist
    if not os.path.exists(img_path):
        os.makedirs(img_path)
    print(request.url)
    return '%s' % (image_guid)
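file_path reads request.meta['file_name'], so get_media_requests has to put it there; a hedged variant of the earlier method, assuming the item also carries a 'file_name' field:
def get_media_requests(self, item, info):
    img_link = "http:" + item['img_link']
    # pass the folder name along so file_path can group the image
    yield scrapy.Request(img_link, meta={'file_name': item['file_name']})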
Distributed crawling:
Proxy IP pool and User-Agent pool
Proxy IP middleware:
import random

http_list = []   # pool of plain-HTTP proxy addresses, e.g. '1.2.3.4:8080'
https_list = []  # pool of HTTPS proxy addresses

# process_request sits inside a custom downloader middleware class (e.g. a ProxyPoolMiddleware)
def process_request(self, request, spider):
    # choose a proxy whose scheme matches the outgoing request
    h = request.url.split(':')[0]
    if h == 'http':
        http = 'http://' + random.choice(http_list)
    if h == 'https':
        http = 'https://' + random.choice(https_list)
    request.meta['proxy'] = http
User-Agent middleware:
user_agent_list = []  # pool of User-Agent strings

# process_request sits inside a custom downloader middleware class (e.g. a UserAgentPoolMiddleware)
def process_request(self, request, spider):
    # pick a random UA value from the list
    ua = random.choice(user_agent_list)
    # write the chosen UA into the intercepted request's headers
    request.headers.setdefault('User-Agent', ua)
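Both pools only take effect once the middlewares are enabled in settings.py, following the same pattern as the MyProxy example above; the module path and class names below are placeholders for your own:
DOWNLOADER_MIDDLEWARES = {
    'firstBlood.middlewares.ProxyPoolMiddleware': 543,
    'firstBlood.middlewares.UserAgentPoolMiddleware': 544,
}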
Running a Scrapy project from a script:
Create an xxx.py in the project directory;
from scrapy import cmdline  # lets us run scrapy commands from Python code
cmdline.execute('scrapy crawl logrule --nolog'.split())