Structuring and saving data with Python

1. Save the body text of each news article to a text file.

# soup and content_info come from the detail-page parser shown in step 2
content_info['content'] = soup.select('#content')[0].text
with open('test.txt', 'a', encoding='UTF-8') as story:
    story.write(content_info['content'])
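
A minimal self-contained sketch of the same step (the detail URL below is only a placeholder, not taken from the post):

import requests
from bs4 import BeautifulSoup

detail_url = 'http://news.gzcc.cn/html/...'  # placeholder: any article detail URL from the news list
resp = requests.get(detail_url)
resp.encoding = 'utf-8'
soup = BeautifulSoup(resp.text, 'html.parser')
body_text = soup.select('#content')[0].text  # the article body lives in the #content element
with open('test.txt', 'a', encoding='UTF-8') as story:
    story.write(body_text)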

2. Structure the news data into a list of dictionaries:

  • Details of a single news item --> dictionary news
    # Imports used by the code in this post.
    import re
    import requests
    from bs4 import BeautifulSoup
    from datetime import datetime

    def gzcc_content_info(content_url):
        """Parse one news detail page into a dictionary."""
        content_info = {}
        resp = requests.get(content_url)
        resp.encoding = 'utf-8'
        soup = BeautifulSoup(resp.text, 'html.parser')
        # Regex patterns for the fields on the info line of the page.
        match_str = {'author': r'作者:(.*)\s+[审核]?',
                     'examine': r'审核:(.*)\s+[来源]?',
                     'source': r'来源:(.*)\s+[摄影]?',
                     'photography': r'摄影:(.*)\s+[点击]'}
        remarks = soup.select('.show-info')[0].text
        for i in match_str:
            if re.match('.*' + match_str[i], remarks):
                content_info[i] = re.search(match_str[i], remarks).group(1).split("\xa0")[0]
            else:
                content_info[i] = "  "
        time_str = re.search(r'\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}', remarks).group()
        content_info['time'] = datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S')
        content_info['title'] = soup.select('.show-title')[0].text
        content_info['url'] = content_url
        # Click-count helper defined separately (a sketch follows this list).
        content_info['clicks'] = gzcc_content_clicks(content_url)
        return content_info
  • All news items on one list page gathered into a list --> newsls.append(news) (page_news in the code below)
    def gzcc_list_page(page_url):
        """Collect the detail dictionary of every news item on one list page."""
        page_news = []
        res = requests.get(page_url)
        res.encoding = 'utf-8'
        soup = BeautifulSoup(res.text, 'html.parser')
        news_list = soup.select('.news-list')[0]
        news_point = news_list.select('li')
        for i in news_point:
            a = i.select('a')[0]['href']  # link to the detail page
            page_news.append(gzcc_content_info(a))
        return page_news
  • All news from every list page gathered into one list --> newstotal.extend(newsls) (all_news in the code below)
    all_news = []
    url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    # Total number of list pages, read from the second-to-last link in the pager.
    n = int(soup.select('#pages')[0].select("a")[-2].text)
    all_news.extend(gzcc_list_page(url))  # first page
    for i in range(2, n + 1):             # pages 2..n (range(2, n) would skip the last page)
        all_news.extend(gzcc_list_page('http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)))
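
The detail parser above calls gzcc_content_clicks, whose definition is not included in this excerpt. A minimal sketch of what such a helper could look like, assuming the click count comes from the site's counter API (the oa.gzcc.cn endpoint, the id extraction and the response format below are assumptions, not taken from this post):

    def gzcc_content_clicks(content_url):
        """Hypothetical sketch: fetch the click count of one article."""
        # Assumption: the article id is the last numeric part of the detail URL.
        news_id = re.search(r'_(.*).html', content_url).group(1).split('/')[-1]
        # Assumption: counter endpoint and modelid; adjust to what the site actually uses.
        click_url = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(news_id)
        resp = requests.get(click_url)
        # Assumption: the response sets $('#hits').html('<count>') via JavaScript.
        return int(re.search(r"hits'\)\.html\('(\d+)'\)", resp.text).group(1))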

3. Install pandas and use pandas.DataFrame(newstotal) (all_news here) to create a DataFrame object df.

import pandas
df = pandas.DataFrame(all_news)
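
A quick sanity check on the frame with standard pandas calls:

print(df.shape)   # (number of news items, number of columns)
print(df.head())  # first few rows
print(df.dtypes)  # column types; 'time' should come out as datetime64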

4. Use df to save the extracted data to a CSV or Excel file.

df.to_excel('news.xlsx')
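
to_excel needs an Excel engine such as openpyxl installed. The CSV variant looks like this; encoding='utf_8_sig' keeps the Chinese text readable when the file is opened in Excel:

df.to_csv('news.csv', encoding='utf_8_sig')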

5. Use the functions and methods provided by pandas for data analysis:

  • Extract the first 6 rows with the click count, title and source columns
    df[['clicks', 'title', 'source']].head(6)
  • Extract news published by '学校综合办' whose click count exceeds 3000.
    df[(df['clicks'] > 3000) & (df['source'] == '学校综合办')]
  • Extract news published by '国际学院' and '学生工作处'.
    news_info = ['国际学院', '学生工作处']
    df[df['source'].isin(news_info)]
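
Any of these filtered frames can be written out the same way as in step 4, for example (the file name is only an illustration):

    df[df['source'].isin(news_info)].to_excel('selected_news.xlsx')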
posted @ 2018-04-11 20:55  162--麦振澎