Hive: some commonly used commands

1. You can usually log in to Hive through beeline (on CDH the default is to use hive, which takes you straight into a Hive session), logging in as a proxy user;

2. Switch to the database abc_hive_db: use abc_hive_db;

3. List the tables in the current database: show tables; or list tables matching a pattern: show tables like '*tb_site*';

4. View a table's structure: desc tablename;

Method 2: view the column info together with extended metadata (including the storage path):
desc extended table_name;
Method 3: view the column info together with formatted metadata (including the storage path):
desc formatted table_name;

Note: when you need a table's storage path, method 3 is recommended; its output is laid out much more clearly.

 

Checking table size
Method 1: to get the total size of a Hive table's files (in bytes), a one-line script is enough:
--# size of a non-partitioned table
$ hadoop fs -ls  /user/hive/warehouse/table_name|awk -F ' ' '{print $5}'|awk '{a+=$1}END{print a}'
48
This saves summing the sizes by hand; the command below lists the table's files in full:
$ hadoop fs -ls  /user/hive/warehouse/table_name
--# size of a partitioned table
$ hadoop fs -ls  /user/hive/warehouse/table_name/yyyymm=201601|awk -F ' ' '{print $5}'|awk '{a+=$1}END{print a/(1024*1024*1024)}'
39.709
Again this saves summing by hand; to list the partition's files in full:
$ hadoop fs -ls  /user/hive/warehouse/table_name/yyyymm=201601
Method 2: get the table's total size directly, in GB:
$ hadoop fs -du /user/hive/warehouse/table_name|awk '{ SUM += $1 } END { print SUM/(1024*1024*1024)}'
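The awk pipelines above simply sum column 5 (the size in bytes) of the hadoop fs -ls listing. The summing step can be sketched locally on a mock listing (the file names and sizes below are invented for illustration):

```shell
# Sum column 5 (size in bytes) of an "hadoop fs -ls"-style listing.
# The two mock lines stand in for real listing output.
printf '%s\n' \
  '-rw-r--r--  3 hive hive 1024 2016-11-09 10:00 /user/hive/warehouse/t/f1' \
  '-rw-r--r--  3 hive hive 2048 2016-11-09 10:01 /user/hive/warehouse/t/f2' |
  awk '{a+=$5} END {print a}'
# prints 3072; divide by (1024*1024*1024) for GB, as in the commands above
```

Note that hadoop fs -du also accepts -s to print one summary total per path, which avoids the awk step entirely.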

5. Create a table:

--OID,MSISDN,StartTime,EndTime,AP_MAC,ApAliasName,HotSpotName,Longitude,Latitude,Floor
0: jdbc:hive2://xxxxx/> create table tmp_wifi1109(OID string,MSISDN string,StartTime timestamp,
EndTime timestamp,AP_MAC string,ApAliasName string,HotSpotName string,Longitude string,Latitude
string,Floor string) row format delimited fields terminated by ',' stored as textfile;

Add, rename, change, or replace table columns:
ALTER TABLE name RENAME TO new_name
ALTER TABLE name ADD COLUMNS (col_spec[, col_spec ...])
ALTER TABLE name CHANGE column_name new_name new_type
ALTER TABLE name REPLACE COLUMNS (col_spec[, col_spec ...])
(Note: Hive has no direct DROP COLUMN statement; to drop a column, use REPLACE COLUMNS and list only the columns you want to keep.)
0: jdbc:hive2://10.178.152.162:21066/> alter table res_site_hangzhou add columns (cell_id_x16 string);
No rows affected (1.985 seconds)
0: jdbc:hive2://10.178.152.162:21066/> desc res_site_hangzhou;
+----------------------------+------------+----------+--+
|          col_name          | data_type  | comment  |
+----------------------------+------------+----------+--+
| oid                        | int        |          |
| objectid                   | int        |          |
....
| cell_id_x16                | string     |          |
+----------------------------+------------+----------+--+

0: jdbc:hive2://10.178.152.162:21066/> alter table res_site_hangzhou change cell_id_x16 objectidx16 string;
No rows affected (2.085 seconds)
0: jdbc:hive2://10.178.152.162:21066/> desc res_site_hangzhou;
+----------------------------+------------+----------+--+
|          col_name          | data_type  | comment  |
+----------------------------+------------+----------+--+
| oid                        | int        |          |
| objectid                   | int        |          |
....
| objectidx16                | string     |          |
+----------------------------+------------+----------+--+

 

6. Load data into a table from a file on HDFS:

Note: tmp_wifi1109 must have been created with the matching row format:
create table if not exists tmp_wifi1109(id int,name string) row format delimited fields terminated by ',' stored as textfile;
Load commands:
0: jdbc:hive2://xxxx/> load data inpath 'hdfs:/user/xx_xx/dt/wifi_user_list_1109.csv' into table tmp_wifi1109;
0: jdbc:hive2://xxxx/> load data [local] inpath '/wifi_user_list_1109.csv' [overwrite] into table tmp_wifi1109;
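For reference, the load above expects a plain text file whose fields match the table's comma delimiter. A tiny mock input (file name and values invented here) for an (id int, name string) schema like tmp_wifi1109's could be produced like this:

```shell
# Build a two-row CSV matching "fields terminated by ','"
# for a table like tmp_wifi1109(id int, name string).
printf '%s\n' '1,zhangsan' '2,lisi' > /tmp/wifi_user_list_sample.csv
cat /tmp/wifi_user_list_sample.csv
# 1,zhangsan
# 2,lisi
```

With the local keyword, load data copies the file from the client machine; without it, the path is resolved on HDFS and the source file is moved (not copied) into the table's warehouse directory.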

7. Store the result of a join between tables into a newly created table:

create table tmp_mr_s1_mme1109 as 
select distinct b.OID,b.MSISDN,b.StartTime,b.EndTime,b.AP_MAC,b.ApAliasName,b.HotSpotName,b.Longitude,b.Latitude,b.Floor,
a.ues1ap_id,a.cellid
from default.s1mme a join abc_hive_db.tmp_wifi1109 b on a.msisdn=b.MSISDN and a.hour>='20161109' and a.hour<'20161110' where (
  (a.start_time<=b.StartTime and a.end_time>=b.StartTime)
  or (a.start_time<=b.EndTime and a.end_time>=b.EndTime)
  or (a.start_time>=b.StartTime and a.end_time<=b.EndTime)
)

8. Export records from a table to HDFS:


set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.size.per.task=1000000000;
set hive.merge.smallfiles.avgsize=1000000000;
use abc_hive_db;
insert overwrite directory '/user/dt/dat/1109/' row format delimited fields terminated by '|' select * from tmp_mr_s1_mme1109;

-- If the exported files are still split into multiple parts, getmerge can combine them into one local file:
hdfs dfs -getmerge /user/dt/dat/1109/* mergefile.csv
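getmerge just downloads every file under the given directory and concatenates them, in name order, into a single local file. Its effect can be sketched with plain local files (all paths below are mocked for illustration):

```shell
# Simulate what getmerge does: concatenate part files in name order.
mkdir -p /tmp/export_1109
printf 'a|1\n' > /tmp/export_1109/000000_0
printf 'b|2\n' > /tmp/export_1109/000001_0
cat /tmp/export_1109/* > /tmp/mergefile.csv
cat /tmp/mergefile.csv
# a|1
# b|2
```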

Specifying the field, collection, and map delimiters when exporting:

insert overwrite directory '/user/jr/dt/my_table' 
row format delimited fields terminated by '|'
collection items terminated by ','
map keys terminated by ':'
select * from my_table;

 

9. View a table's partitions:

0: jdbc:hive2://xxx/> show partitions default.s1_mme;
+------------------------------------+--+
|             partition              |
+------------------------------------+--+
| hour=2016110214                    |
| hour=2016110215                    |
| hour=2016110216                    |
...
+------------------------------------+--+
If a table has multiple partition columns (e.g. default.s1_mme is partitioned by (p_city, p_day)), then to see which days a given city has data partitions for, run:
0: jdbc:hive2://xxx/> show partitions default.s1_mme partition(p_city='wuhan');
 
Load data into a given partition:
load data local inpath '/sd/dataext' into table testPar partition(dt='20180117');
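Under the hood, each partition is just a subdirectory named key=value beneath the table's warehouse directory, and the load above drops the file into that subdirectory. A local mock of the resulting layout (directory names invented for illustration):

```shell
# Mock of how Hive lays out a partitioned table on the filesystem:
# <warehouse>/testPar/dt=20180117/<data files>
mkdir -p /tmp/warehouse/testPar/dt=20180117
touch /tmp/warehouse/testPar/dt=20180117/dataext
ls /tmp/warehouse/testPar
# dt=20180117
```

This is also why show partitions can answer purely from directory names, without scanning any data.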

References:

Handling Hive partitions: http://blog.sina.com.cn/s/blog_9f48885501016hn5.html

http://blog.sina.com.cn/s/blog_9f48885501016k5m.html

posted @ 2016-11-18 22:03 cctext