Scraping all the campus news
0. Get the click count from a news URL, and wrap the logic in a function
- newsUrl
- newsId (re.search())
- clickUrl (str.format())
- requests.get(clickUrl)
- re.search() / .split()
- str.lstrip(), str.rstrip()
- int
- wrap the above into a function
- also wrap fetching the news publish time, plus its type conversion, into a function (both functions are shown below)
```python
import re
import requests
from datetime import datetime

# Get the click count of one news article
def click(url):
    newsId = re.search(r'/(\d+).html', url).group(1)
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    resText = requests.get(clickUrl).text
    clickNum = int(re.search(r"hits'\).html\('(\d+)'\);", resText).group(1))
    return clickNum

# Get the publish time from the show-info text and convert it to a datetime
def newsdt(showinfo):
    newsDate = showinfo.split()[0].split(':')[1]
    newsTime = showinfo.split()[1]
    newsDT = newsDate + ' ' + newsTime
    dt = datetime.strptime(newsDT, '%Y-%m-%d %H:%M:%S')
    return dt
```
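An offline sanity check of both parsers; the sample strings below are made up, but shaped like the live responses:

```python
import re

# the click API returns JavaScript like "$('#hits').html('813');"
sample = "$('#hits').html('813');"
assert re.search(r"hits'\).html\('(\d+)'\);", sample).group(1) == '813'

# show-info text starts with the publish time, e.g. "发布时间:2019-04-01 11:57:00 作者:..."
print(newsdt('发布时间:2019-04-01 11:57:00 作者:张三'))   # 2019-04-01 11:57:00
```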
1. Get the news details from a news URL: a dict, anews
```python
from bs4 import BeautifulSoup

# Gather the details of one news article into a dict
def anews(url):
    newsDetail = {}
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsDetail['newsTitle'] = soup.select('.show-title')[0].text
    showinfo = soup.select('.show-info')[0].text
    newsDetail['newsDT'] = newsdt(showinfo)
    newsDetail['newsClick'] = click(url)
    return newsDetail
```
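A single-article check; the URL below is hypothetical and only illustrates the shape of the result:

```python
detail = anews('http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0404/11155.html')
print(detail)   # {'newsTitle': ..., 'newsDT': datetime(...), 'newsClick': ...}
```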
2. Get the news URLs from a list-page URL: list.append(dict), alist
```python
# Collect the details of every news article on one list page
def alist(listUrl):
    res = requests.get(listUrl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsList = []
    for news in soup.select('li'):
        # only <li> items that carry a news title are real articles
        if len(news.select('.news-list-title')) > 0:
            newsUrl = news.select('a')[0]['href']
            newsDesc = news.select('.news-list-description')[0].text
            newsDict = anews(newsUrl)
            newsDict['newsUrl'] = newsUrl
            newsDict['description'] = newsDesc
            newsList.append(newsDict)
    return newsList
```
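A quick spot check on a single list page (page 2 here, following the URL pattern used in the next step):

```python
pageNews = alist('http://news.gzcc.cn/html/xiaoyuanxinwen/2.html')
print(len(pageNews))    # number of articles on the page
print(pageNews[0])      # title, datetime, clicks, URL and description of the first one
```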
3. Generate the URLs of all the list pages and fetch all the news: list.extend(list), allnews
*Each student scrapes the 10 list pages starting from the last digit of their student ID (a sketch that derives the range from the ID follows the loop below).
```python
import time
import random

allList = []
url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'
# scrape list pages 6 to 16
for i in range(6, 17):
    pageNews = alist(url.format(i))
    allList.extend(pageNews)
    print(pageNews)
    time.sleep(random.random() * 3)   # pause so requests aren't back-to-back
```
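If the starting page should come from the student ID rather than being hard-coded, a minimal sketch; the ID value and the treat-0-as-page-1 rule are my assumptions:

```python
studentId = '201806120066'        # hypothetical student ID, for illustration only
start = int(studentId[-1]) or 1   # last digit of the ID; assume 0 means page 1
allList = []
for i in range(start, start + 10):
    allList.extend(alist(url.format(i)))
    time.sleep(random.random() * 3)
```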
4. Set a reasonable crawl interval
```python
import time
import random

time.sleep(random.random() * 3)   # wait a random 0-3 seconds between requests
```
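To keep the pacing in one place, one option (my own refactor, not part of the original code) is a small wrapper that sleeps before every request; anews() and alist() could then call it instead of requests.get(url):

```python
import time
import random
import requests

# Hypothetical helper, not in the original code: rate-limit every fetch
def politeGet(url):
    time.sleep(random.random() * 3)   # wait 0-3 seconds before each request
    return requests.get(url)
```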
5. Do simple data processing with pandas and save the results
Save to a CSV or Excel file
```python
import pandas

newsdf = pandas.DataFrame(allList)
newsdf.to_csv(r'E:\news.csv', encoding='utf-8')   # or e.g. r'F:\duym\爬虫\gzccnews.csv'
```
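Beyond saving, a couple of simple manipulations pandas makes easy; the column names follow the dicts built above, and the Excel line assumes openpyxl is installed:

```python
# ten most-clicked articles
print(newsdf.sort_values(by='newsClick', ascending=False).head(10))
# articles published after a given date (newsDT was stored as a datetime)
print(newsdf[newsdf['newsDT'] > '2019-04-01'])
newsdf.to_excel(r'E:\news.xlsx')   # Excel output instead of CSV
```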