Scraping All the Campus News
1. Get the news details from a news URL: a dictionary, anews
import requests
from bs4 import BeautifulSoup

def anews(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    newSoup = BeautifulSoup(res.text, 'html.parser')
    # Title
    title = newSoup.select('.show-title')[0].text
    # Publication info line (time, author, auditor, source, clicks)
    newInfo = newSoup.select('.show-info')[0].text
    # Publication time (newsDateTime is a helper; a sketch follows this block)
    newDT = newsDateTime(newInfo)
    # Author
    author = newInfo.split()[2].lstrip('作者:')
    # Auditor
    examine = newInfo.split()[3].lstrip('审核:')
    # Source
    source = newInfo.split()[4].lstrip('来源:')
    # Click count, fetched from a separate URL (newsClick is a helper; sketch below)
    newClick = newsClick(url)
    # Collect the extracted fields into a dictionary
    newsDetail = {}
    newsDetail['newsTitle'] = title
    newsDetail['newsDaTe'] = newDT
    newsDetail['newsAuthor'] = author
    newsDetail['newsExamine'] = examine
    newsDetail['newsSource'] = source
    newsDetail['newsClick'] = newClick
    return newsDetail
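The two helpers called above, newsDateTime and newsClick, are presumably defined in an earlier exercise and are not shown here. A minimal sketch of what they might look like, assuming the info line starts with '发布时间:YYYY-MM-DD HH:MM:SS' and that the click count comes from the site's api.php count endpoint (both the endpoint URL and the regular expression are assumptions to verify against the real pages):

import re
from datetime import datetime
import requests

def newsDateTime(newInfo):
    # assumes the info line starts with '发布时间:YYYY-MM-DD HH:MM:SS'
    newsDate = newInfo.split()[0].split(':')[1]
    newsTime = newInfo.split()[1]
    return datetime.strptime(newsDate + ' ' + newsTime, '%Y-%m-%d %H:%M:%S')

def newsClick(url):
    # the news id is the digits before '.html' in the article URL
    newsId = re.findall(r'(\d+)\.html', url)[-1]
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    resClick = requests.get(clickUrl)
    # assumes the response embeds the count in a $('#hits').html('N') snippet
    return int(re.search(r"hits'\)\.html\('(\d+)'\)", resClick.text).group(1))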
2. Get the news URLs from a list-page URL: list.append(dict), alist
# Fetch and parse one list page
listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(listUrl)
res.encoding = 'utf-8'
newSoup = BeautifulSoup(res.text, 'html.parser')

newList = []
li = newSoup.select('li')
for new in li:
    if len(new.select('.news-list-text')) > 0:
        newUrl = new.select('a')[0]['href']
        # Summary text of the news item
        newDescription = new.select('.news-list-description')[0].text
        # anews (step 1) returns the detail dictionary for this news URL
        newsDict = anews(newUrl)
        # Add the summary to that dictionary
        newsDict['newsDescription'] = newDescription
        newList.append(newsDict)
3. Generate the URLs of all the list pages and fetch all the news: list.extend(list), allnews
*Each student crawls the 10 list pages starting from the last digits of their student ID.
def alist(url):
    res = requests.get(url)   # was requests.get(listUrl); use the function argument
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsList = []
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:
            newsUrl = news.select('a')[0]['href']
            newsDesc = news.select('.news-list-description')[0].text
            newsDict = anews(newsUrl)
            newsDict['description'] = newsDesc
            newsList.append(newsDict)
    return newsList

listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
alist(listUrl)

allnews = []
# pages 95-99 here; per the note above, use the 10 pages starting at your ID tail
for i in range(95, 100):
    listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(listUrl))
len(allnews)
4. Set a reasonable crawl interval
import time
import random

# pause for a random 0-3 seconds so requests are not fired back-to-back
time.sleep(random.random() * 3)
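For the interval to matter, the sleep has to run between requests, e.g. inside the page loop from step 3 (the same pause could also go inside alist, between the per-article requests):

import time
import random

allnews = []
for i in range(95, 100):
    listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(listUrl))
    # pause 0-3 seconds before requesting the next list page
    time.sleep(random.random() * 3)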
5. Use pandas for simple data processing and save the results
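The saving code below assumes a DataFrame named newsdf. A minimal sketch building it from the allnews list of dictionaries, with one example of simple processing (sorting by click count, which assumes newsClick holds numbers):

import pandas as pd

# each dictionary in allnews becomes one row of the DataFrame
newsdf = pd.DataFrame(allnews)
# simple processing example: most-clicked news first
newsdf = newsdf.sort_values('newsClick', ascending=False)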
Save to a CSV or Excel file:
newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')
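For the Excel option, DataFrame.to_excel works the same way (it needs an engine such as openpyxl installed); the path here just mirrors the CSV example:

newsdf.to_excel(r'F:\duym\爬虫\gzccnews.xlsx')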
Save to a database:
import sqlite3
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnewsdb', db)
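As a quick check that the write succeeded, the table can be read back, for example:

import sqlite3
import pandas as pd

with sqlite3.connect('gzccnewsdb.sqlite') as db:
    df2 = pd.read_sql_query('SELECT * FROM gzccnewsdb', db)
print(df2.head())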
Opening the generated gzccnews.csv file, we can see the scraped news content inside.