2. Installing Spark and a Python Exercise

I. Installing Spark

Check the base environment: Hadoop and the JDK must already be installed.
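Both can be verified from the command line (a quick sanity check, assuming java and hadoop are already on the PATH):

java -version
hadoop version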

Edit the Spark configuration file:
vim /usr/local/spark/conf/spark-env.sh
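For a local setup that reuses an existing Hadoop installation, a minimal spark-env.sh needs only one line (a sketch, assuming Hadoop lives under /usr/local/hadoop):

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)

This puts Hadoop's classes on Spark's classpath so Spark can read and write HDFS.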


Set the environment variables:
vim ~/.bashrc
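Typical entries look like the following (a sketch; the install path and the py4j zip version must match the actual installation under /usr/local/spark):

export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.9-src.zip:$PYTHONPATH
export PYSPARK_PYTHON=python3

Run source ~/.bashrc afterwards so the changes take effect in the current shell.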


Start Spark:
pyspark


Try running some Python code:
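The pyspark shell creates a SparkContext automatically and binds it to sc, so a tiny job is enough to verify the installation (a minimal check):

>>> sc.version
>>> sc.parallelize(range(1, 101)).sum()
5050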


II. Python Programming Exercise: Word Frequency Count for an English Text

Prepare a text file (.txt).

Prepare an English article and save it as bumi.txt (any file name will do).


Read the file:

with open("bumi.txt", "r", encoding="UTF-8") as f:
    txt = f.read()
Preprocessing: case conversion, punctuation removal, stop-word filtering.

Convert all uppercase letters to lowercase:

txt = txt.lower()

Remove punctuation and stop words:

# Replace punctuation marks with spaces so that split() separates the words
for ch in '!"#$%^&*()+,-./:;<=>?@[\\]_`~{|}':
    txt = txt.replace(ch, " ")
words = txt.split()
# A small hand-picked stop-word list
stop_words = ['so','out','all','for','of','to','on','in','if','by','under','it','at','into','with','about']
# Keep only the words that are not stop words
afterwords = [word for word in words if word not in stop_words]
Count the occurrences of each word:

counts = {}
for word in afterwords:
    counts[word] = counts.get(word, 0) + 1

Sort by frequency in descending order and print:

items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for word, count in items:
    print("{0:<20}{1}".format(word, count))
Write the results to a file:

with open("bumi001.txt", "w", encoding="UTF-8") as f:
    f.write(str(items))
print("File written successfully")
The results are as shown in the figure.
