A Simple Python Crawler for Scraping All of My cnblogs Posts

While learning Python, I wrote a simple crawler to scrape all of the articles on my cnblogs blog. (The script targets Python 2, which is why it uses urllib.urlopen and urllib.urlretrieve.)

#coding=utf-8
import re
import urllib

def getHtml(url):
    # Fetch a page and return its raw HTML.
    page = urllib.urlopen(url)
    html = page.read()
    return html

def getPage(html):
    # The blog sidebar shows the total post count as "随笔-NNN"; grab that number.
    reg = r'随笔-([0-9]+)'
    pageCount = re.findall(reg, html)
    return pageCount[0]

def getArticleUrl(html):
    # Collect every article URL on the current listing page.
    # Dots are escaped so they match literally rather than any character.
    reg = r'(http://www\.cnblogs\.com/sunniest/p/[0-9]+\.html)'
    articleUrl = re.findall(reg, html)
    return articleUrl

def downloadPage(urlList):
    # Save each article as 0.html, 1.html, ... in the working directory.
    for x, article in enumerate(urlList):
        urllib.urlretrieve(article, '%s.html' % x)

article = []
htmlStr = getHtml("http://www.cnblogs.com/sunniest/default.html")
pageCount = getPage(htmlStr)
page = int(pageCount) / 40 + 1  # the listing shows 40 posts per page
for i in range(1, page + 1):
    html = getHtml("http://www.cnblogs.com/sunniest/default.html?page=" + str(i))
    articleUrl = getArticleUrl(html)
    article += articleUrl  # cleaner spelling of article.__add__(articleUrl)

article = list(set(article))  # de-duplicate URLs before downloading
downloadPage(article)
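
The script above runs only under Python 2, since urllib.urlopen and urllib.urlretrieve moved to urllib.request in Python 3. Below is a minimal sketch of the same logic for Python 3; it assumes the same URL patterns and the same 40-posts-per-page layout as the original, and the snake_case names are just my renaming.

# Minimal Python 3 port of the crawler above (a sketch, not the original code).
import re
import urllib.request

def get_html(url):
    # cnblogs pages are UTF-8 encoded, so decode the response bytes.
    with urllib.request.urlopen(url) as page:
        return page.read().decode('utf-8')

def get_post_count(html):
    # The sidebar shows the total post count as "随笔-NNN".
    return int(re.findall(r'随笔-([0-9]+)', html)[0])

def get_article_urls(html):
    return re.findall(r'(http://www\.cnblogs\.com/sunniest/p/[0-9]+\.html)', html)

articles = set()
index = get_html("http://www.cnblogs.com/sunniest/default.html")
pages = get_post_count(index) // 40 + 1  # same 40-posts-per-page assumption
for i in range(1, pages + 1):
    html = get_html("http://www.cnblogs.com/sunniest/default.html?page=" + str(i))
    articles.update(get_article_urls(html))

for x, url in enumerate(articles):
    urllib.request.urlretrieve(url, '%s.html' % x)

Accumulating URLs in a set de-duplicates as we go, which replaces the list(set(...)) step at the end of the original.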

The scraped pages are saved under the project's root directory. Crawling of JS, CSS, and other asset files is not supported yet, so the saved pages render rather poorly; a rough sketch of how assets could be fetched as well follows below.
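
As an illustrative extension (not part of the original script), each page's CSS and JS could be pulled down with a regex over the fetched HTML. The downloadAssets helper below is my own sketch, written in the same Python 2 style as the script above, and it assumes assets are referenced via href/src attributes ending in .css or .js:

import os
import re
import urllib
import urlparse

def downloadAssets(article_url, html, save_dir='.'):
    # Find stylesheet and script references in the page source.
    asset_reg = r'(?:href|src)="([^"]+\.(?:css|js))"'
    for asset in re.findall(asset_reg, html):
        # Resolve relative paths against the article's own URL.
        full_url = urlparse.urljoin(article_url, asset)
        filename = os.path.basename(full_url)
        try:
            urllib.urlretrieve(full_url, os.path.join(save_dir, filename))
        except IOError:
            pass  # skip assets that fail to download

Even with the assets saved, the downloaded HTML still references the original asset paths, so the links would need rewriting for a faithful offline view.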

posted @ 2016-11-15 14:10  Sunnier