Word count with Hive

Contents of the input file:

$ /opt/cdh-5.3.6/hadoop-2.5.0/bin/hdfs dfs -text /user/hadoop/wordcount/input/wc.input
hadoop spark
spark hadoop
oracle mysql postgresql
postgresql oracle mysql
mysql mongodb
hdfs yarn mapreduce
yarn hdfs
zookeeper
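
If the file only exists locally, it can be uploaded to that HDFS path first (a minimal sketch; the target directory is taken from the path above, and the local filename wc.input is an assumption):

$ /opt/cdh-5.3.6/hadoop-2.5.0/bin/hdfs dfs -mkdir -p /user/hadoop/wordcount/input
$ /opt/cdh-5.3.6/hadoop-2.5.0/bin/hdfs dfs -put wc.input /user/hadoop/wordcount/input/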

Use Hive to run a word count on the file above:

create table docs (line string);

load data inpath '/user/hadoop/wordcount/input/wc.input' into table docs;

create table word_counts as
select word,count(1) as count from
(select explode(split(line,' ')) as word from docs) word
group by word
order by word;
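
Note that load data inpath (without the LOCAL keyword) moves the file within HDFS into the table's warehouse directory rather than copying it. Once the statements above finish, the stored result can be checked with a simple query (assuming the table was created exactly as above):

select * from word_counts;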

Step-by-step explanation:

--Use the split function to split each line of the table on spaces:

select split(line,' ') from docs;
["hadoop","spark",""]
["spark","hadoop"]
["oracle","mysql","postgresql"]
["postgresql","oracle","mysql"]
["mysql","mongodb"]
["hdfs","yarn","mapreduce"]
["yarn","hdfs"]
["zookeeper"]

--Use the explode function to expand each array produced by split into separate rows:

select explode(split(line,' ')) as word from docs;
word
hadoop
spark

spark
hadoop
oracle
mysql
postgresql
postgresql
oracle
mysql
mysql
mongodb
hdfs
yarn
mapreduce
yarn
hdfs
zookeeper
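
A bare explode in the SELECT list cannot be combined with other columns from the table. When each word needs to be kept alongside its original line, Hive's LATERAL VIEW syntax can be used instead (a sketch that is not part of the original statements; the view alias t is arbitrary):

select line, word from docs lateral view explode(split(line,' ')) t as word;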

--The output above is now in a shape that can be aggregated, so a standard SQL group-by produces the counts:

select word,count(1) as count from
(select explode(split(line,' ')) as word from docs) word
group by word
order by word;

word    count
     1
hadoop    2
hdfs    2
mapreduce    1
mongodb    1
mysql    3
oracle    2
postgresql    2
spark    2
yarn    2
zookeeper    1
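
The first row of the result is an empty word with count 1. It comes from the trailing space on the first input line, which split turns into an empty array element (visible as "" in the split output above). If that token should not be counted, it can be filtered out, for example with a small variation on the query above:

select word,count(1) as count from
(select explode(split(line,' ')) as word from docs) word
where word != ''
group by word
order by word;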
