Getting All the Information of a News Article

Assignment source: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE2/homework/2894

Requirements:

Given the link newsUrl of a news article, fetch all of the article's information: title, author, publishing unit, reviewer, and source; convert the publish time into the datetime type; and wrap the whole process into one simple, clear function.
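
The time-conversion part comes down to one datetime.strptime call with a format string matching the site's "2019-03-29 11:32:01" style timestamps. A minimal sketch (the sample value here is illustrative, not taken from the page):

from datetime import datetime

time_str = '2019-03-29 11:32:01'  # publish-time string in the site's format (sample value)
publish_time = datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S')
print(type(publish_time), publish_time)  # <class 'datetime.datetime'> 2019-03-29 11:32:01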

Source code:

import requests
import re
from datetime import datetime
from bs4 import BeautifulSoup
url = 'http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0329/11095.html'  # news article URL
click_url = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'     # click-count API template
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

def news_time(soup):  # publish time of the news article
    info = soup.select('.show-info')[0].text
    time_str = info[5:24]  # drop the leading '发布时间:' label, keeping 'YYYY-MM-DD HH:MM:SS'
    print(datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S'))  # convert the string to datetime
    
def new_info(soup):  # author, reviewer and source of the news article
    info = soup.select('.show-info')[0].text.split()
    author = info[2]  # 作者 (author)
    check = info[3]   # 审核 (reviewer)
    source = info[4]  # 来源 (source)
    print(author)
    print(check)
    print(source)


def click_count(url, click_url):  # click count, fetched from the count API
    news_id = re.findall(r'\d{1,7}', url)[-1]  # article id taken from the news URL
    click_api = click_url.format(news_id)      # fill the id into the API template
    click_content = requests.get(click_api)
    count = int(click_content.text.split('.html')[-1].lstrip("('").rstrip("');"))
    print('Clicks: {}'.format(count))

def news_content(soup):  # print the body text of the news article
    detail = soup.select('.show-content')[0].text
    print(detail)

new_info(soup)
news_time(soup)
click_count(url, click_url)
news_content(soup)
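
The assignment also asks for the whole process to be wrapped into one simple, clear function. Below is a minimal sketch of such a wrapper that reuses the parsing logic above but returns a dict instead of printing; the function name get_news_detail and the .show-title selector for the headline are assumptions, not taken from the original code:

import re
import requests
from datetime import datetime
from bs4 import BeautifulSoup

def get_news_detail(news_url):  # hypothetical wrapper; returns every field in one dict
    res = requests.get(news_url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    info = soup.select('.show-info')[0].text
    fields = info.split()
    news_id = re.findall(r'\d{1,7}', news_url)[-1]  # article id from the news URL
    click_api = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(news_id)
    click_text = requests.get(click_api).text
    return {
        'title': soup.select('.show-title')[0].text,  # assumes the headline uses class .show-title
        'time': datetime.strptime(info[5:24], '%Y-%m-%d %H:%M:%S'),
        'author': fields[2],
        'check': fields[3],
        'source': fields[4],
        'content': soup.select('.show-content')[0].text,
        'clicks': int(click_text.split('.html')[-1].lstrip("('").rstrip("');")),
    }

print(get_news_detail('http://news.gzcc.cn/html/2019/xiaoyuanxinwen_0329/11095.html'))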

Output:
