Downloading the Cityscapes Dataset with a Crawler
Cityscapes is a classic road-scene dataset, but some of its packages, such as rightImg8bit_sequence_trainvaltest at 322 GB, are large enough that downloading them directly on a server is far more convenient.
Use Case
Since the server has no GUI browser, and this part of the Cityscapes data can only be downloaded after registering an account, there is no URL to point wget at directly. So I fell back on my old trade and wrote a Python crawler.
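As an aside, the session cookie can also be fetched programmatically instead of copied out of a browser's developer tools. Below is a minimal sketch using requests.Session; the login endpoint and the form field names (username, password, submit) are assumptions about the site's login form rather than anything verified here, so check them against the actual page:

import requests

# Sketch: log in once and read the PHPSESSID cookie off the session.
# The form field names below are assumed, not verified against the live site.
session = requests.Session()
session.post(
    'https://www.cityscapes-dataset.com/login/',
    data={'username': 'your_username', 'password': 'your_password', 'submit': 'Login'}
)
print(session.cookies.get('PHPSESSID'))  # feed this to the download script below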
The difference this time is that the dataset comes to 322 GB, so it obviously cannot be pulled into memory in one shot; it has to be downloaded in chunks.
Code
import requests
import contextlib
import sys


def download(url, session_id, save_path):
    # The PHPSESSID cookie carries the logged-in session; passing it via
    # cookies= is enough, so the redundant Cookie header is dropped.
    cookies = {
        'PHPSESSID': session_id
    }
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'en-US,en;q=0.5',
        'Connection': 'keep-alive',
        'DNT': '1',
        'Host': 'www.cityscapes-dataset.com',
        'Referer': 'https://www.cityscapes-dataset.com/downloads/',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36'
    }
    # stream=True keeps the 322 GB body out of memory; we read it 4 KB at a time.
    res = requests.get(url, headers=headers, cookies=cookies, stream=True)
    with contextlib.closing(res) as r:
        total = int(r.headers['Content-Length'])  # assumes the server sends a length
        received = 0
        with open(save_path, 'wb') as f:
            for chunk in r.iter_content(chunk_size=4096):
                if chunk:  # filter out keep-alive chunks
                    f.write(chunk)
                    received += len(chunk)
                    # '\r' rewrites one status line instead of printing thousands
                    sys.stdout.write('\r%.3f' % (received / total))
                    sys.stdout.flush()


download(
    url='https://www.cityscapes-dataset.com/file-handling/?packageID=10',  # the package to download
    session_id='h0ukmht9lecft5lqsim3mov9l2',  # note: the session id may expire; replace it with your own
    save_path='test.zip'
)
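One thing the script above does not handle is interruptions, which are almost inevitable over 322 GB. Here is a hedged sketch of resuming a partial file with an HTTP Range header; resume_download is a hypothetical helper, not part of the original script, and it assumes the server honors Range requests (replying 206 Partial Content):

import os
import requests

def resume_download(url, session_id, save_path):
    # Hypothetical helper: resume from however many bytes are already on disk.
    # Assumes the server supports Range requests; a 200 reply means it does
    # not, and the whole file is re-sent from the start.
    done = os.path.getsize(save_path) if os.path.exists(save_path) else 0
    headers = {'Range': 'bytes=%d-' % done}
    cookies = {'PHPSESSID': session_id}
    with requests.get(url, headers=headers, cookies=cookies, stream=True) as r:
        mode = 'ab' if r.status_code == 206 else 'wb'  # 206 = Partial Content
        with open(save_path, mode) as f:
            for chunk in r.iter_content(chunk_size=4096):
                if chunk:
                    f.write(chunk)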
Wrap-up
This script was actually written to help a senior labmate grab the dataset; between stints of lab grunt work, it was a nice refresher on automated downloading~