Scraping images from 彼岸图网 (pic.netbian.com)
Copy the code below into a .py file and run it; a folder named 4k图片 will be created next to the .py file to hold the downloaded 4K images.
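The site paginates its listing as index.html for page 1 and index_n.html for every page n ≥ 2. A minimal sketch of that mapping (the helper name page_url is illustrative, not part of the script below):

```python
def page_url(n: int) -> str:
    """Return the listing-page URL for page n (1-indexed).

    Page 1 is index.html; page n >= 2 is index_n.html.
    """
    index = 'index' if n == 1 else f'index_{n}'
    return f'https://pic.netbian.com/4kmeinv/{index}.html'

print(page_url(1))  # https://pic.netbian.com/4kmeinv/index.html
print(page_url(3))  # https://pic.netbian.com/4kmeinv/index_3.html
```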
import os
import time

import requests
from bs4 import BeautifulSoup

if not os.path.exists('./4k图片/'):
    os.mkdir('./4k图片/')

'''
彼岸图库 (pic.netbian.com) 4K image listing pages:
page 1: https://pic.netbian.com/4kmeinv/index.html
page 2: https://pic.netbian.com/4kmeinv/index_2.html
page 3: https://pic.netbian.com/4kmeinv/index_3.html
'''
headers = {
    'cookie': '__yjs_duid=1_609256ccf97c86f63356e4e9f3fa5eb21654735480955; Hm_lvt_c59f2e992a863c2744e1ba985abaea6c=1654735481; zkhanecookieclassrecord=%2C65%2C59%2C66%2C54%2C53%2C55%2C; PHPSESSID=25p1pnl1nog1nn56lic0j2fga6; zkhanmlusername=qq803835154342; zkhanmluserid=826128; zkhanmlgroupid=3; zkhanmlrnd=VQOfLNvHK33WGXiln7nY; zkhanmlauth=264643c01db497a277bbf935b54aa3f3; Hm_lpvt_c59f2e992a863c2744e1ba985abaea6c=1654741154'
}

# Scrape pages page[0] through page[1] inclusive, e.g. input: 9 100
page = list(map(int, input('Enter the page range to scrape: ').strip().split()))
for i in range(page[0], page[1] + 1):
    # Page 1 is index.html; page n (n >= 2) is index_n.html
    if i == 1:
        index = 'index'
    else:
        index = f'index_{i}'
    theme_url = f'https://pic.netbian.com/4kmeinv/{index}.html'  # listing page to scrape
    response = requests.get(theme_url)
    response.encoding = 'gbk'  # the site serves GBK-encoded pages
    main_page = BeautifulSoup(response.text, features='lxml')
    li_all_a = main_page.find('div', class_='slist').find_all('a')  # all <a> tags under the <li> list
    if not os.path.exists(f'./4k图片/第{i}页/'):
        os.mkdir(f'./4k图片/第{i}页/')
    for a in li_all_a:
        href = a.get('href')      # e.g. '/tupian/29849.html'
        picture_num = href[8:13]  # slice out the numeric picture id
        picture_name = a.find('b').string
        # download endpoint used by the detail page
        down_url = f'https://pic.netbian.com/downpic.php?id={picture_num}&classid=54'
        down_response = requests.get(down_url, headers=headers)
        with open(f'./4k图片/第{i}页/{picture_name}.jpg', mode='wb') as f:
            f.write(down_response.content)  # write the image bytes to file
        print('Saving', picture_name + '.jpg')
        time.sleep(1)  # be polite: pause between downloads
    response.close()
print('Done.')
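The slice href[8:13] assumes every detail link has the shape /tupian/NNNNN.html with a five-digit id. A more defensive variant (a sketch, not from the original post; the helper name picture_id is illustrative) extracts the id with a regex, so ids of any length still work and malformed links fail loudly:

```python
import re

def picture_id(href: str) -> str:
    """Extract the numeric picture id from a detail-page href
    such as '/tupian/29849.html'."""
    m = re.search(r'/tupian/(\d+)\.html', href)
    if m is None:
        raise ValueError(f'unexpected href format: {href!r}')
    return m.group(1)

print(picture_id('/tupian/29849.html'))   # 29849
print(picture_id('/tupian/123456.html'))  # 123456
```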
Author: 楚千羽
Source: https://www.cnblogs.com/chuqianyu/
This post is from 博客园 (cnblogs). Original link: https://www.cnblogs.com/chuqianyu/p/16488955.html
Copyright is shared by the author and 博客园. Reposting is welcome, but unless the author agrees otherwise, the reposted page must link to the original; the author reserves the right to pursue legal action.