- While scraping long reviews from Douban, the crawler suddenly stopped working: response.status_code came back as 418 (the "I'm a teapot" code Douban returns when it detects a crawler), and get_html(url) showed the IP could no longer fetch pages.
- Fix: when the status_code is not 200, wait before retrying:
```python
count = 0
while status_code != 200:
    count += 1
    time.sleep(count * 0.1)                  # wait a little longer on each retry
    headers = {'User-Agent': UserAgent().random}
    response = requests.get(url, headers=headers)
    status_code = response.status_code       # re-check, otherwise the loop never exits
response.encoding = response.apparent_encoding
html = response.text
return html
```
- Right from the start of a crawl it is worth adding a random time.sleep between requests. It guards against blocks before they happen and also avoids putting too much load on the target server; a small sketch is shown below.
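A minimal sketch of that kind of random delay, assuming a 1–3 second range and using one Douban review listing URL purely as an illustration:

```python
import random
import time

import requests
from fake_useragent import UserAgent

# Pause a random 1-3 seconds between pages so requests don't arrive in a burst.
# The sleep range and the URL below are only assumptions for illustration.
for start in range(0, 100, 20):
    url = f'https://movie.douban.com/subject/1292052/reviews?start={start}'
    headers = {'User-Agent': UserAgent().random}
    response = requests.get(url, headers=headers)
    print(start, response.status_code)
    time.sleep(random.uniform(1, 3))
```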
- Part of the full code is attached below:
```python
import requests
import csv
import time
from lxml import etree
from fake_useragent import UserAgent
import random


# Fetch a page and return its HTML, retrying while the status code is not 200
def get_html(url):
    try:
        headers = {'User-Agent': UserAgent().random}
        response = requests.get(url, headers=headers)
        status_code = response.status_code
        count = 0
        while status_code != 200:
            count += 1
            time.sleep(count * 0.1)              # back off a little longer each time
            headers = {'User-Agent': UserAgent().random}   # try a new random User-Agent
            response = requests.get(url, headers=headers)
            status_code = response.status_code   # update, or the loop never exits
        response.encoding = response.apparent_encoding
        html = response.text
        return html
    except Exception:
        print('Request failed')
```
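Since csv and etree are imported but not used in the snippet above, the rest of the script presumably parses each page and writes the results to a file. Below is only a rough usage sketch continuing from the code above; the URL pattern, XPath expressions, and file name are assumptions, not the original script.

```python
# Rough usage sketch (assumed URL pattern, XPath and file name), reusing the imports
# and get_html() defined above.
def crawl_reviews(movie_id, pages=5):
    with open('reviews.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['title', 'link'])
        for page in range(pages):
            url = f'https://movie.douban.com/subject/{movie_id}/reviews?start={page * 20}'
            html = get_html(url)
            if html is None:                     # get_html returns None if the request failed
                continue
            tree = etree.HTML(html)
            # XPath guesses for the review list; the real selectors may differ
            titles = tree.xpath('//div[@class="main-bd"]/h2/a/text()')
            links = tree.xpath('//div[@class="main-bd"]/h2/a/@href')
            for title, link in zip(titles, links):
                writer.writerow([title, link])
            time.sleep(random.uniform(1, 3))     # stay polite between pages


if __name__ == '__main__':
    crawl_reviews('1292052')                     # example movie id
```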