A Python crawler that uses urllib to scrape product images from JD.com
This example is built on urllib and Python 2.7 and uses BeautifulSoup for page parsing; if you are missing the third-party libraries (beautifulsoup4, lxml), install them before running. My IDE is PyCharm. Enough talk, here is the code:
```python
# -*- coding: utf-8 -*-
import os
import urllib
import urllib2
from bs4 import BeautifulSoup

def craw(url, path):
    html1 = urllib2.urlopen(url).read()
    soup = BeautifulSoup(str(html1), 'lxml')
    imagelist = soup.select('#J_goodsList > ul > li > div > div.p-img > a > img')
    namelist = soup.select('#J_goodsList > ul > li > div > div.p-name > a > em')
    #pricelist = soup.select('#plist > ul > li > div > div.p-price > strong')
    if not os.path.exists(path):
        os.mkdir(path)
    for (imageurl, name) in zip(imagelist, namelist):
        imagename = path + name.get_text().strip() + ".jpg"
        # JD lazy-loads images: the real URL usually lives in data-lazy-img,
        # with src as a fallback for images that are already loaded
        imgurl = "http:" + str(imageurl.get('data-lazy-img'))
        if imgurl == 'http:None':
            imgurl = "http:" + str(imageurl.get('src'))
        try:
            urllib.urlretrieve(imgurl, filename=imagename)
        except:
            # skip products whose name or URL makes the download fail
            continue

'''
Selectors copied from the browser's developer tools:
#J_goodsList > ul > li:nth-child(1) > div > div.p-img > a > img
#plist > ul > li:nth-child(1) > div > div.p-name.p-name-type3 > a > em
#plist > ul > li:nth-child(1) > div > div.p-price > strong:nth-child(1) > i
'''

if __name__ == "__main__":
    goods = raw_input('please input the goods you want:')
    pages = int(input('please input the pages you want:'))
    path = "E:/{}/".format(goods)
    # JD search pages use odd page numbers (1, 3, 5, ...), hence the step of 2
    page_numbers = range(1, pages + 1, 2)
    count = 0.0
    for i in page_numbers:
        url = "https://search.jd.com/Search?keyword={}&enc=utf-8&qrst=1&rt=1&stop=1&vt=2&suggest=1.def.0.T06&wq=diann&page={}".format(goods, i)
        craw(url, path)
        count += 1
        print 'work completed {:.2f}%'.format(count / len(page_numbers) * 100)
```
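The code above is Python 2 only (`urllib2`, `raw_input`, the `print` statement). On Python 3 the networking calls move into `urllib.request` and `urllib.parse`; a minimal sketch of the equivalent plumbing, where `build_search_url` is a hypothetical helper of mine and only the essential query parameters are kept:

```python
# Python 3 sketch: the parsing with BeautifulSoup is unchanged, only the
# urllib calls move. urlopen/urlretrieve replace urllib2.urlopen and
# urllib.urlretrieve from the Python 2 version.
from urllib.parse import quote
from urllib.request import urlopen, urlretrieve

def build_search_url(keyword, page):
    """Build a JD search URL; the keyword is percent-encoded so that
    non-ASCII input (e.g. Chinese product names) survives the query string."""
    return ("https://search.jd.com/Search?keyword={}&enc=utf-8&page={}"
            .format(quote(keyword), page))
```

`input()` in Python 3 always returns a string, so the `raw_input`/`input` distinction in the original disappears as well.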
Each image file is named after its product. The attribute that holds JD's image URL is likely to change over time, so adapt the selectors and attribute names to the live page rather than copying them verbatim.
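Because the attribute name can change, it is more robust to probe several candidates in order instead of hard-coding the `data-lazy-img`/`src` pair. A small sketch; any candidate beyond those two is an assumption, so check the live page source before relying on it:

```python
def first_image_url(attrs, candidates=("data-lazy-img", "source-data-lazy-img", "src")):
    """Return the first usable image URL found among candidate attributes.

    `attrs` is any dict-like object, e.g. a BeautifulSoup Tag's .attrs.
    JD serves protocol-relative URLs ("//img14.360buyimg.com/..."),
    so a scheme is prepended when one is missing.
    """
    for key in candidates:
        value = attrs.get(key)
        if value:
            return "http:" + value if value.startswith("//") else value
    return None
```

With this helper the download loop degrades gracefully when JD renames an attribute: only the `candidates` tuple needs updating.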
Here is a screenshot of the phone images I downloaded:
On my machine the crawl is fast: in under a minute it fetches 100 pages, over a thousand product images.