Notes on Scrapy crawlers
1. Image download settings
```python
class ClawernameSpider(scrapy.Spider):
    # Per-spider settings
    custom_settings = {
        'LOG_LEVEL': 'DEBUG',        # log level; DEBUG (the lowest) is the default
        'ROBOTSTXT_OBEY': False,     # whether to obey robots.txt rules
        'DOWNLOAD_DELAY': 0,         # download delay; default is 0
        'COOKIES_ENABLED': False,    # enabled by default; needed when crawling data behind a login
        'DOWNLOAD_TIMEOUT': 25,      # download timeout; can be set globally here, or per request via Request.meta['download_timeout']
        'RETRY_TIMES': 8,
        # ………………
        'IMAGES_STORE': r'E:\scrapyFashionbeansPic\images',  # where downloaded images are stored; created if missing; already-downloaded images are not fetched again
        'IMAGES_EXPIRES': 90,        # image expiry time, in days
        'IMAGES_MIN_HEIGHT': 100,    # minimum image height; shorter images are not downloaded
        'IMAGES_MIN_WIDTH': 100,     # minimum image width; narrower images are not downloaded
        'DOWNLOADER_MIDDLEWARES': {
            # downloader middleware; the number (0–999) is the priority, lower runs first
            'scrapyFashionbeans.middlewares.HeadersMiddleware': 100,
            'scrapyFashionbeans.middlewares.ProxiesMiddleware': 200,
            'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        },
        # ………………
    }
```
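Note that the IMAGES_* settings only take effect when Scrapy's built-in ImagesPipeline is enabled, which the snippet above does not show. A minimal sketch of the missing pieces (the item dict and URL below are illustrative, not from the original project):

```python
# Sketch: enable the built-in ImagesPipeline that consumes the IMAGES_* settings.
custom_settings = {
    'ITEM_PIPELINES': {'scrapy.pipelines.images.ImagesPipeline': 1},
    'IMAGES_STORE': r'E:\scrapyFashionbeansPic\images',
}

# An item only needs an 'image_urls' list (the pipeline's default input field);
# after download the pipeline fills 'images' with path/checksum/url records.
item = {
    'image_urls': ['https://example.com/a.jpg'],  # illustrative URL
    'images': [],
}
```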
2. Set a flag on a Request to handle individual requests specially
When constructing the Request, add the flags parameter:
```python
def start_requests(self):
    for each in keyLst:
        yield scrapy.Request(
            url = f'https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords={quote(each)}',
            meta = {'key': each, 'dont_redirect': True},
            callback = self.parse,
            errback = self.error,
            # Embed a flag in the request; middlewares can use it as a
            # criterion for handling each Request individually
            flags = [1]
        )
```
Example use in a downloader middleware:
```python
# The proxy-handling part of a downloader middleware.
# When a Request's flags[0] is set to 1, no proxy IP is added.
class ProxiesMiddleware(object):
    def __init__(self):
        runTimer = datetime.datetime.now()
        print(f"instance ProxiesMiddleware, startProxyTimer, runTimer:{runTimer}.")
        timerUpdateProxies()
        print(f"instance ProxiesMiddleware, startProxyTimer, runTimer:{runTimer}.")

    def process_request(self, request, spider):
        print('Using ProxiesMiddleware!')
        # Either detect here whether request.url is a listing page and skip the
        # proxy, or set a marker when sending the listing-page request (flags
        # works well; its type is a list), then inspect that marker here to
        # decide whether to enable the proxy.
        if request.flags:
            if request.flags[0] == 1:
                # flags[0] == 1 means this request needs no proxy
                return None  # no proxy; return and let other middlewares keep processing the request
        if not request.meta.get('proxyFlag'):
            request.meta['proxy'] = 'http://xxxxxxxxxxxx:xxxxxxxxxxxx@proxy.abuyun.com:9020'
```
3. Following hyperlinks found in a page
```python
for page in range(1, pageNum + 1):
    # Follow pages from within the current page;
    # href is a hyperlink address scraped from the page
    yield response.follow(
        url = re.sub(r'page=\d+', f'page={page}', href, count = 1),
        meta = {'dont_redirect': True, 'key': response.meta['key']},
        callback = self.galance,
        errback = self.error
    )
```
4. Page deduplication with Redis, the foundation of a distributed crawler
For how scrapy-redis works, see: https://www.biaodianfu.com/scrapy-redis.html
Reference article: https://www.cnblogs.com/zjl6/p/6742673.html
```python
class ClawernameSpider(scrapy.Spider):
    # Per-spider settings
    custom_settings = {
        'LOG_LEVEL': 'DEBUG',  # log level; DEBUG (the lowest) is the default
        # ………………
        # Settings for Redis-based page deduplication
        'DUPEFILTER_CLASS': "scrapy_redis.dupefilter.RFPDupeFilter",
        'SCHEDULER': "scrapy_redis.scheduler.Scheduler",
        'SCHEDULER_PERSIST': False,  # don't clean up Redis queues; allows pausing/resuming crawls
        # ………………
    }
```
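scrapy-redis also needs to know where Redis is running; per its documentation this is configured with REDIS_URL (or REDIS_HOST/REDIS_PORT). A sketch assuming a local Redis on the default port:

```python
# Sketch: the dedup settings from above plus a connection setting.
# The URL assumes Redis on localhost:6379; adjust for your deployment.
custom_settings = {
    'DUPEFILTER_CLASS': 'scrapy_redis.dupefilter.RFPDupeFilter',
    'SCHEDULER': 'scrapy_redis.scheduler.Scheduler',
    'SCHEDULER_PERSIST': False,
    'REDIS_URL': 'redis://127.0.0.1:6379',
}
```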
Run redis-cli.exe and execute flushdb to clear the records held in Redis; otherwise pages that were not crawled successfully last time may be skipped on the next run.

```
keys *
flushdb
OK
```

Additional note: in redis-cli, running keys * lists the names of all keys in the current database.
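The same cleanup can be done from Python instead of redis-cli. A sketch using the redis-py client (an assumed dependency; the helper also accepts any object with a flushdb() method, so the Redis location is up to you):

```python
def clear_scrapy_redis_state(client=None, url='redis://127.0.0.1:6379'):
    """FLUSHDB equivalent: wipe all keys in the current Redis database so
    pages that failed on the previous run are re-crawled next time."""
    if client is None:
        import redis  # deferred import; requires the redis-py package
        client = redis.Redis.from_url(url)
    client.flushdb()
    return client
```

Call this before starting the crawl; note that with SCHEDULER_PERSIST set to False, scrapy-redis also clears its own queues when the spider closes.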
5. Shutting Scrapy down on a timer
Suppose you have the following requirement: a given spider runs once a day, but each run must be guaranteed not to exceed 24 hours.
With the Scrapy framework this is very simple; you only need to configure one extension. Open settings.py and add one setting:
```python
CLOSESPIDER_TIMEOUT = 86400  # 24 hours * 3600 seconds = 86400
```
About CLOSESPIDER_TIMEOUT:
Default value of CLOSESPIDER_TIMEOUT: 0
An integer, in seconds. If the spider is still running after that many seconds, it is closed automatically with the reason closespider_timeout. If the value is 0 (or unset), the spider will never be closed because of a timeout.
There are many related extensions, for example one that stops the crawl after a given number of items; see the Extensions documentation: http://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/extensions.html
CLOSESPIDER_TIMEOUT (seconds): stop the spider after the given time has elapsed.
CLOSESPIDER_ITEMCOUNT: stop the spider after the given number of items has been scraped.
CLOSESPIDER_PAGECOUNT: stop the spider after the given number of responses has been received.
CLOSESPIDER_ERRORCOUNT: stop the spider after the given number of errors has occurred.
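These settings can also be combined per spider via custom_settings; whichever threshold is hit first closes the spider. A sketch with illustrative numbers:

```python
# Sketch: stop after 24 hours or after 5000 items, whichever comes first.
custom_settings = {
    'CLOSESPIDER_TIMEOUT': 86400,   # seconds: 24 h * 3600 s/h
    'CLOSESPIDER_ITEMCOUNT': 5000,  # items scraped
}
```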
6. Maximum crawl depth: DEPTH_LIMIT
Sometimes strange pages keep redirecting in a loop so the crawl never finishes; specify a maximum depth to guard against this.
There is one very important point to note, though: the relationship between RETRY_TIMES and DEPTH_LIMIT.
Retries accumulate depth, so once a request exceeds DEPTH_LIMIT the page is discarded. (Note: this does not mean the spider stops.)
```python
# settings
'RETRY_TIMES': 8,
'DEPTH_LIMIT': 2,
```

Log (trimmed to the relevant lines):

```
E:\Miniconda\python.exe E:/PyCharmCode/allCategoryGet_2/main.py
2018-02-05 18:07:22 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: allCategoryGet_2)
2018-02-05 18:07:23 [scrapy.core.engine] INFO: Spider opened
2018-02-05 18:07:44 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.amazon.com/gp/site-directory> (failed 1 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2018-02-05 18:07:46 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/gp/site-directory> (referer: https://www.amazon.com)
parseCategoryIndexPage: url = https://www.amazon.com/gp/site-directory, status = 200, meta = {'dont_redirect': True, 'download_timeout': 30.0, 'proxy': 'http://proxy.abuyun.com:9020', 'download_slot': 'www.amazon.com', 'retry_times': 1, 'download_latency': 1.7430000305175781, 'depth': 0}
2018-02-05 18:07:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/gp/site-directory> (referer: https://www.amazon.com)
parseCategoryIndexPage: url = https://www.amazon.com/gp/site-directory, status = 200, meta = {'dont_redirect': True, 'download_timeout': 30.0, 'proxy': 'http://proxy.abuyun.com:9020', 'download_slot': 'www.amazon.com', 'retry_times': 1, 'download_latency': 1.1499998569488525, 'depth': 1}
………………………………………………………………
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_4/132-5073023-0203563?node=289675&ie=UTF8&qid=1517825504
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_5/132-5073023-0203563?node=289679&ie=UTF8&qid=1517825504
2018-02-05 18:07:51 [scrapy.spidermiddlewares.depth] DEBUG: Ignoring link (depth > 2): https://www.amazon.com/b/ref=lp_284507_ln_0_13/132-5073023-0203563?node=289701&ie=UTF8&qid=1517825504
………………………………………………………………
2018-02-05 18:07:51 [scrapy.core.engine] INFO: Closing spider (finished)
2018-02-05 18:07:51 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1, 'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 1, 'downloader/request_bytes': 1414, 'downloader/request_count': 4, 'downloader/request_method_count/GET': 4, 'downloader/response_bytes': 199923, 'downloader/response_count': 3, 'downloader/response_status_count/200': 3, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2018, 2, 5, 10, 7, 51, 461602), 'log_count/DEBUG': 186, 'log_count/INFO': 8, 'request_depth_max': 2, 'response_received_count': 3, 'retry/count': 1, 'retry/reason_count/twisted.web._newclient.ResponseNeverReceived': 1, 'scheduler/dequeued': 4, 'scheduler/dequeued/memory': 4, 'scheduler/enqueued': 4, 'scheduler/enqueued/memory': 4, 'start_time': datetime.datetime(2018, 2, 5, 10, 7, 23, 282602)}
2018-02-05 18:07:51 [scrapy.core.engine] INFO: Spider closed (finished)
```
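Given this behavior (a retried request re-enters the scheduler with its depth incremented), one hedged workaround is to budget DEPTH_LIMIT with headroom for retries rather than setting it to the bare depth you want:

```python
# Sketch: pad DEPTH_LIMIT so a page at the deepest wanted level is not
# discarded merely because it was retried a few times. The split between
# "real" depth and retry headroom is an assumption based on the log above.
WANTED_DEPTH = 2
RETRY_TIMES = 8
custom_settings = {
    'RETRY_TIMES': RETRY_TIMES,
    'DEPTH_LIMIT': WANTED_DEPTH + RETRY_TIMES,
}
```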
7. The dont_redirect parameter
dont_redirect: controls whether this request may be redirected. Default: False, i.e. redirects are followed.
Pages redirect for a couple of common reasons. First, the page itself is nothing but a jump page.
Second, the page has desktop and mobile versions, and the site decides which one to return based on the visitor's request data (the User-Agent).
```python
# Usage example:
# https://www.1688.com/
# user-agent = "MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"
yield Request(
    url = "https://www.1688.com/",
    # dont_redirect defaults to False
    meta = {},
    # Setting this parameter to True forbids redirects; if the page does
    # redirect, it cannot be crawled
    # meta = {'dont_redirect': True},
    callback = self.parseCategoryIndex,
    errback = self.error
)
```
With the default value False (redirects followed):

```
2018-02-06 16:57:46 [scrapy.core.engine] INFO: Spider opened
2018-02-06 16:57:46 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-02-06 16:57:46 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-02-06 16:57:47 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://m.1688.com/touch/?src=desktop> from <GET https://www.1688.com/>
2018-02-06 16:57:50 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://m.1688.com?src=desktop> from <GET http://m.1688.com/touch/?src=desktop>
2018-02-06 16:57:52 [scrapy.core.engine] DEBUG: Crawled (400) <GET http://m.1688.com?src=desktop> (referer: https://www.1688.com)
error = [Failure instance: Traceback: <class 'scrapy.spidermiddlewares.httperror.HttpError'>: Ignoring non-200 response
```
Changed to True (redirects forbidden):

```
2018-02-06 17:08:08 [scrapy.core.engine] DEBUG: Crawled (301) <GET https://www.1688.com/> (referer: https://www.1688.com)
2018-02-06 17:08:08 [scrapy.core.engine] INFO: Closing spider (finished)
```
Special note: in most cases you should set this parameter to True and forbid redirects. You have to know up front the structure of the page you intend to parse, and the parse callbacks that follow are written for that target page; if requests may be redirected freely, the page you actually receive becomes unpredictable and the later parsing can no longer be relied on.
8. The dont_filter parameter
dont_filter: controls whether this request is exempt from duplicate filtering. Default: False, i.e. the request is filtered.
Scrapy ships with built-in duplicate filtering: when dont_filter == False, a given Request (the filter considers more than just the URL) will only ever be used once. Retries, of course, do not count against this.
The one thing to watch for: if some Request genuinely needs to be issued multiple times, set this parameter to True on it.
Original post: https://blog.csdn.net/Ren_ger/article/details/85067419