Assignment: Chinese Word Frequency Statistics

The requirements for this assignment come from: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE2/homework/2773


1. Download a full-length Chinese novel.

I downloaded 《东宫》, a mid-length novel by 匪我思存.

2. Read the text to be analyzed from a file.

article = open('test.txt',encoding='UTF-8').read()
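
An equivalent that closes the file handle automatically is a `with` block; a minimal sketch using the same test.txt:

# same read, but the file is closed when the block exits
with open('test.txt', encoding='UTF-8') as f:
    article = f.read()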

3. Install and use jieba for Chinese word segmentation.
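
jieba installs from PyPI with pip install jieba. A quick segmentation check to confirm the install works (the sample sentence here is arbitrary, not from the novel):

import jieba

# cut() yields tokens lazily; join them to eyeball the segmentation
print(" / ".join(jieba.cut("小说里的人物名字经常被切错")))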

4. Update the dictionary, adding vocabulary specific to the text being analyzed.

jieba.add_word('李承鄞')
words = list(jieba.cut(article))
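
When a text has many character names, jieba.load_userdict can register a whole file of entries instead of one add_word call per name; a sketch, where userdict.txt is a hypothetical one-entry-per-line file:

# userdict.txt is hypothetical: one custom word per line, e.g. 李承鄞
jieba.load_userdict('userdict.txt')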

5. Generate the word frequency statistics. (`articleSet` and `articleDict` are initialized in step 7 below; the full listing at the end shows the actual execution order.)

for w in articleSet:
    if len(w)>1:
        articleDict[w] = words.count(w)
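
As an aside, `words.count(w)` rescans the whole token list once per distinct word, so this loop is quadratic; `collections.Counter` produces the same tallies in a single pass. A minimal equivalent sketch, not the code used in this assignment:

from collections import Counter

# one pass over the token list instead of one words.count() scan per word
counts = Counter(words)
articleDict = {w: c for w, c in counts.items() if len(w) > 1}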

6. Sort by frequency

articlelist = sorted(articleDict.items(),key = lambda x:x[1], reverse = True)

7. Exclude grammatical tokens such as pronouns, articles, and conjunctions (the filter set below actually holds punctuation plus the particle 的)

dele = {'。','!','?','的','“','”','(',')',' ','》','《',','}
articleDict = {}
articleSet = set(words)-dele
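
The `dele` set above only covers punctuation and 的. For a stricter filter, a standard Chinese stopword list could be loaded from a file; a sketch, where stopwords.txt is a hypothetical one-entry-per-line file:

# stopwords.txt is hypothetical: one stopword per line
with open('stopwords.txt', encoding='UTF-8') as f:
    stops = set(f.read().split()) | dele
articleSet = set(words) - stops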

8. Output the TOP 20 words by frequency and save the result to a file

for i in range(20):
    print(articlelist[i])

import pandas as pd
pd.DataFrame(data=articlelist).to_csv('test.csv',encoding='UTF-8')

9. Generate a word cloud.

from wordcloud import WordCloud
import matplotlib.pyplot as plt
import jieba

cut_text = " ".join(words)
# print(cut_text)

mywc = WordCloud().generate(cut_text)
plt.imshow(mywc)
plt.axis("off")
plt.show()
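
One caveat: the font bundled with WordCloud has no CJK glyphs, so a cloud built from Chinese tokens often renders as empty boxes. Passing font_path fixes this; the path below is an assumption (any installed font with Chinese glyphs works):

# font path is an assumption -- point it at any font that has CJK glyphs
mywc = WordCloud(font_path='C:/Windows/Fonts/simhei.ttf').generate(cut_text)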




The full project code is as follows:

# -*- coding: utf-8 -*-
"""
Created on Mon Mar 18 11:47:24 2019
@author: Administrator
"""
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import jieba


article = open('test.txt',encoding='UTF-8').read()
dele = {'。','!','?','的','“','”','(',')',' ','》','《',','}
jieba.add_word('李承鄞')
words = list(jieba.cut(article))
articleDict = {}
articleSet = set(words)-dele
for w in articleSet:
    if len(w)>1:
        articleDict[w] = words.count(w)

articlelist = sorted(articleDict.items(),key = lambda x:x[1], reverse = True)

cut_text = " ".join(words)
# print(cut_text)

mywc = WordCloud().generate(cut_text)
plt.imshow(mywc)
plt.axis("off")
plt.show()
'''
for i in range(20):
    print(articlelist[i])
import pandas as pd
pd.DataFrame(data=articlelist).to_csv('test.csv',encoding='UTF-8')
'''

The run output is shown in the figure below:

The generated word cloud is shown in the figure below:

posted @ 2019-03-18 14:31  林溢漫