1. English word frequency statistics
Download the lyrics of an English song or an English article.
Replace all separators such as , . ? ! ' : with spaces.
Convert all uppercase letters to lowercase.
Generate the word list.
Generate the word frequency counts.
Sort by frequency.
Exclude function words: pronouns, articles, conjunctions.
Output the TOP 20 most frequent words.
Save the text to be analysed as a UTF-8 encoded file and obtain the content for frequency analysis by reading the file (a short sketch of this step follows).
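This step only requires that the source text live in a UTF-8 encoded file. A minimal sketch of the round trip, assuming the file name news.txt and a placeholder sample string (both are illustrations, not part of the assignment):

sample = 'A special variant of the Code Completion feature ...'
# Write the text out as UTF-8, then read it back for analysis
with open('news.txt', 'w', encoding='utf-8') as f:
    f.write(sample)
with open('news.txt', 'r', encoding='utf-8') as f:
    news = f.read()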
# news = '''A special variant of the Code Completion feature invoked by pressing Ctrl twice
# allows you to complete the name of any class no matter if it was imported in the current
# file or not. If the class is not imported yet, the import statement is generated automatically.'''

# Read the text to be analysed from a UTF-8 encoded file
f = open('news.txt', 'r', encoding='utf-8')
news = f.read()
f.close()

sep = ''',.?'":!'''                      # separators to strip
exclude = {'the', 'and', 'a', 'not'}     # function words to ignore

# Replace every separator with a space
for c in sep:
    news = news.replace(c, ' ')

# Lowercase the text and split it into a word list
wordList = news.lower().split()

wordDict = {}
# Alternative counting approach (commented out):
# for w in wordList:
#     wordDict[w] = wordDict.get(w, 0) + 1
# for w in exclude:
#     del(wordDict[w])

# Count each distinct word, skipping the excluded function words
wordSet = set(wordList) - exclude
for w in wordSet:
    wordDict[w] = wordList.count(w)

# Sort the (word, count) pairs by count, descending
dictList = list(wordDict.items())
dictList.sort(key=lambda x: x[1], reverse=True)

# for w in wordDict:
#     print(w, wordDict[w])
# print(dictList)

# Output the TOP 20 most frequent words
for i in range(20):
    print(dictList[i])
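For comparison only (not part of the original program), the counting, exclusion and sorting steps can also be expressed with collections.Counter from the standard library. This sketch assumes wordList and exclude have already been built as above:

from collections import Counter

# Count every word except the excluded function words, then take the 20 most common
counts = Counter(w for w in wordList if w not in exclude)
for word, freq in counts.most_common(20):
    print(word, freq)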
2. Chinese word frequency statistics
Download a long Chinese article.
Read the text to be analysed from the file.
news = open('gzccnews.txt', 'r', encoding='utf-8').read()
Install and use jieba for Chinese word segmentation (a short segmentation example follows).
pip install jieba
import jieba
jieba.lcut(news)   # lcut already returns a list of words
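A minimal segmentation sketch; the sample sentence is only an illustration, not part of the assignment:

import jieba

# jieba.lcut returns the segmentation result as a plain list of words
words = jieba.lcut('我来到北京清华大学')
print(words)   # roughly: ['我', '来到', '北京', '清华大学']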
Generate the word frequency counts.
Sort by frequency.
Exclude function words: pronouns, articles, conjunctions.
Output the TOP 20 most frequent words (or write the results to a file; see the sketch after the full program below).
Publish the code and a screenshot of the output on your blog.
import jieba

# Read the novel text from a UTF-8 encoded file
f = open('hongloumeng.txt', 'r', encoding='utf-8')
text = f.read()
f.close()

# Replace punctuation and line breaks with spaces
symbol = '''一!“”,。?;’"',.、:\n'''
for s in symbol:
    text = text.replace(s, ' ')

# Segment the text with jieba
wordlist = list(jieba.cut(text))

# Function words (pronouns, particles, conjunctions, etc.) to exclude
exclude = {'说', '有', '得', '没', '的', '他', '了', '她', '是', '在', '—', '你', '走', '对',
           '他们', '着', '把', '不', '也', '我', '人', '而', '与', '就', '可是', '那', '要',
           '又', '想', '和', '一个', ' ', '呢', '很', '一点', '都', '去', '没有', '个', '上',
           '给', '点', '小', '看', '之', '‘', '道', '便', '听', '只'}

# Count each remaining word
wordset = set(wordlist) - exclude
worddict = {}
for key in wordset:
    worddict[key] = wordlist.count(key)

# Sort the (word, count) pairs by count, descending, and print the TOP 20
dictlist = list(worddict.items())
dictlist.sort(key=lambda x: x[1], reverse=True)
for i in range(20):
    print(dictlist[i])
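If you take the "write the results to a file" option from the steps above, a minimal sketch could look like the following. It assumes dictlist has been built as in the program above; the output file name top20.txt is an assumption for illustration:

# Write the TOP 20 (word, count) pairs to a UTF-8 text file
with open('top20.txt', 'w', encoding='utf-8') as out:
    for word, count in dictlist[:20]:
        out.write('{} {}\n'.format(word, count))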