Crawling all campus news

Assignment source: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE1/homework/3002

Assignment requirements:

Get the click count from a news url and wrap the steps into a function (a sketch of the helpers follows the list below):

  • newsUrl
  • newsId (re.search())
  • clickUrl (str.format())
  • requests.get(clickUrl)
  • re.search() / .split()
  • str.lstrip(), str.rstrip()
  • int
  • wrap the steps above into a function
  • also wrap getting the news publish time, and its type conversion, into a function
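A minimal sketch of the two helpers described above. The click-count endpoint on oa.gzcc.cn and the jQuery-style snippet it is assumed to return are guesses about the target site, so both may need adjusting:

import re
import requests
from datetime import datetime

def get_click(news_url):
    """Return the click count of one news detail page, wrapped into a function."""
    # newsId: pull the numeric id out of the detail-page url with re.search()
    news_id = re.search(r'(\d+)\.html', news_url).group(1)
    # clickUrl: build the count-API url with str.format() -- assumed endpoint
    click_url = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(news_id)
    res = requests.get(click_url)
    # assumed response shape: ...$('#hits').html('205');  -> split, strip, convert to int
    click = res.text.split('.html')[-1].lstrip("('").rstrip("');")
    return int(click)

def newsdt(dt_str):
    """Convert a publish-time string such as '2019-04-01 11:32:07' into a datetime object."""
    return datetime.strptime(dt_str, '%Y-%m-%d %H:%M:%S')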

1. Get the news details from a news url: a dict, anews (a sketch follows)
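A sketch of step 1, reusing get_click and newsdt from the helpers above; the '.show-title' and '.show-info' selectors are assumptions about the detail-page layout:

import re
import requests
from bs4 import BeautifulSoup

def anews(news_url):
    """Step 1: return the details of one news page as a dict."""
    res = requests.get(news_url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    news = {'newsUrl': news_url}
    news['title'] = soup.select_one('.show-title').text          # assumed selector
    info = soup.select_one('.show-info').text                    # assumed selector
    m = re.search(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}', info)  # publish time inside the info line
    news['time'] = newsdt(m.group()) if m else None              # helper sketched above
    news['click'] = get_click(news_url)                          # helper sketched above
    return news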

2. Get the news urls from a list-page url: list append(dict), alist (a sketch follows)
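A matching sketch of step 2: fetch one list page and append an anews() dict for every link that looks like a news detail page. The '/<digits>.html' pattern used to spot detail links is an assumption:

def alist(list_url):
    """Step 2: collect the news dicts of every item on one list page."""
    res = requests.get(list_url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newslist = []
    for a in soup.select('a'):
        href = a.get('href', '')
        if re.search(r'/\d+\.html$', href):   # assumed detail-page url shape
            newslist.append(anews(href))      # list append(dict), as required
    return newslist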

3. Generate the urls of all the list pages and fetch all the news: list extend(list), allnews (a sketch follows the note below)

* Each student crawls the 10 list pages starting from the last digit of their student ID
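A sketch of step 3, which also covers the crawl interval of step 4. The list-page url template is an assumption, and start/end stand for the 10 page numbers derived from the student ID:

import time
import random

def allnews(start, end):
    """Step 3: crawl list pages start..end-1 and merge their news into one list."""
    news = []
    for i in range(start, end):
        # assumed list-page url template -- replace with the real one
        list_url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
        news.extend(alist(list_url))      # list extend(list), as required
        time.sleep(random.random() * 3)   # reasonable crawl interval (step 4)
    return news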

4. Set a reasonable crawl interval

import time

import random

time.sleep(random.random()*3)

5. Do simple data processing with pandas and save the result

Save to a csv or excel file (a sketch follows the example below):

newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')
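A sketch of step 5, turning the list of dicts from allnews into a DataFrame and writing it out; the path follows the example above, and to_excel would work the same way:

import pandas as pd

newsdf = pd.DataFrame(allnews(1, 11))   # e.g. the 10 list pages from step 3
newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv', encoding='utf-8-sig')  # utf-8-sig keeps Chinese text readable in Excel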

Assignment details:

 

import os
import random
import re
import time
from urllib import request
from urllib.error import HTTPError, URLError
from bs4 import BeautifulSoup

### News item definition
class News(object):
    def __init__(self):
        self.url = None      # url of this news item
        self.topic = None    # news title
        self.date = None     # publish date
        self.content = None  # body text
        self.author = None   # author

### If the url matches the parsing rule, extract the information from the page
def getNews(url):
    # fetch the whole page
    html = request.urlopen(url).read().decode('utf-8', 'ignore')
    # parse it
    soup = BeautifulSoup(html, 'html.parser')

    # extract the information
    if not soup.find('div', {'id': 'artical'}):
        return

    news = News()  # build a news object

    page = soup.find('div', {'id': 'artical'})

    if not page.find('h1', {'id': 'artical_topic'}):
        return
    topic = page.find('h1', {'id': 'artical_topic'}).get_text()  # news title
    news.topic = topic

    if not page.find('div', {'id': 'main_content'}):
        return
    main_content = page.find('div', {'id': 'main_content'})  # news body

    content = ''
    for p in main_content.select('p'):
        content = content + p.get_text()
    news.content = content

    news.url = url  # url of the news page
    f.write(news.topic + '\t' + news.content + '\n')

### Depth-first traversal of the whole site
def dfs(url):
    pattern1 = r'http://news\.ifeng\.com/[a-z0-9_/.]*$'                  # urls that may be followed further
    pattern2 = r'http://news\.ifeng\.com/a/[0-9]{8}/[0-9]{8}_0\.shtml$'  # urls whose news content gets parsed

    # if the url has been visited already, return immediately
    if url in visited:
        return
    print(url)

    # mark the url as visited
    visited.add(url)

    try:
        # not visited yet, so download and parse it
        time.sleep(random.random() * 3)  # reasonable crawl interval (requirement 4)
        html = request.urlopen(url).read().decode('utf-8', 'ignore')
        soup = BeautifulSoup(html, 'html.parser')

        # parse the news content if the url is a detail page
        if re.match(pattern2, url):
            getNews(url)

        #### follow every link on this page that matches the crawl rule ####
        links = soup.find_all('a', href=re.compile(pattern1))
        for link in links:
            print(link['href'])
            if link['href'] not in visited:
                dfs(link['href'])
    except (HTTPError, URLError) as e:
        print(e)
        return

visited = set()  # urls that have already been visited

os.makedirs('ifeng', exist_ok=True)  # make sure the output directory exists
f = open('ifeng/news.txt', 'a+', encoding='utf-8')

# the entry point has to match pattern1 above, so start from the ifeng news portal
dfs('http://news.ifeng.com/')

f.close()
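One caveat about the recursive traversal above: every followed link adds a stack frame, so a large site can exceed Python's default recursion limit (roughly 1000 frames). A minimal iterative variant with an explicit stack avoids that; it reuses visited, getNews and the imports from the code above, and redefines the two url patterns locally:

def crawl(start_url):
    """Iterative depth-first traversal: same behaviour as dfs(), without recursion."""
    pattern1 = r'http://news\.ifeng\.com/[a-z0-9_/.]*$'
    pattern2 = r'http://news\.ifeng\.com/a/[0-9]{8}/[0-9]{8}_0\.shtml$'
    stack = [start_url]
    while stack:
        url = stack.pop()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = request.urlopen(url).read().decode('utf-8', 'ignore')
        except (HTTPError, URLError) as e:
            print(e)
            continue
        soup = BeautifulSoup(html, 'html.parser')
        if re.match(pattern2, url):
            getNews(url)
        for link in soup.find_all('a', href=re.compile(pattern1)):
            if link['href'] not in visited:
                stack.append(link['href'])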


posted @ 2019-04-15 15:14 钟金晖