Hive (11): Importing and Exporting Hive Data

Importing Data into Hive

1. Inserting values directly (strongly discouraged)

Each insert ... values statement launches its own job and writes a tiny file to HDFS, so this is only suitable for quick tests:

insert into table score3 partition(month='201807') values ('001','002','100');

2. Loading data with load (must master)

With local, the source file is copied from the local filesystem; without it, the file is moved from its HDFS location into the table's directory.

load data [local] inpath 'dataPath' [overwrite] into table student [partition (partcol1=val1,…)];
load data local inpath '/kkb/install/hivedatas/score.csv' overwrite into table score3 partition(month='201806');

3. Loading data from a query (must master)

create table score5 like score;
insert overwrite table score5 partition(month = '201806') select s_id,c_id,s_score from score;
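Note that insert overwrite replaces whatever the target partition already contains. A minimal sketch of the appending variant, reusing the tables above:

```sql
-- insert into appends to the partition instead of replacing it;
-- running the same statement twice leaves duplicate rows.
insert into table score5 partition(month = '201806')
select s_id, c_id, s_score from score;
```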

4. Creating a table from a query (create table ... as select)

create table score6 as select * from score;

5. Specifying a location when creating the table

When importing data this way, if the table is partitioned you must finish by repairing (re-mapping) the partition metadata with msck repair table ...

  • Create the table and specify its location on HDFS
create external table score7 (s_id string,c_id string,s_score int) row format delimited fields terminated by '\t' location '/myscore7';
  • Upload the data to HDFS; you can operate on HDFS directly from the Hive client with the dfs command
0: jdbc:hive2://node03:10000> dfs -mkdir -p /myscore7;
0: jdbc:hive2://node03:10000> dfs -put /kkb/install/hivedatas/score.csv /myscore7;
  • Query the data
select * from score7;
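Since score7 is not partitioned, no repair is needed in this example. For the partitioned case mentioned above, a sketch (the table name score8 and its paths are hypothetical):

```sql
-- Data uploaded straight into a partition directory, e.g.
-- /myscore8/month=201806/score.csv, is invisible to the metastore
-- until the partitions are re-mapped:
msck repair table score8;
-- The uploaded partition should now be listed:
show partitions score8;
```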

6. Exporting and importing Hive table data with export/import (managed tables)

hive (myhive)> create table teacher2 like teacher;
hive (myhive)> export table teacher to '/kkb/teacher';
hive (myhive)> import table teacher2 from '/kkb/teacher';

Exporting Data from Hive

1. Exporting with insert

  • Export query results to the local filesystem
insert overwrite local directory '/kkb/install/hivedatas/stu' select * from stu;
  • Export formatted query results to the local filesystem
insert overwrite local directory '/kkb/install/hivedatas/stu2' row format delimited fields terminated by ',' select * from stu;
  • Export query results to HDFS (no local keyword)
insert overwrite directory '/kkb/hivedatas/stu' row format delimited fields terminated by ',' select * from stu;

2. Exporting from the Hive shell

  • Basic syntax:

    • hive -e "SQL statement" > file
    • hive -f sqlFile > file
# Redirect the output:
hive -e 'select * from myhive.stu;' > /kkb/install/hivedatas/student1.txt
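The hive -f form is handy for longer queries. A sketch, assuming the query file is created first (the file and output paths here are illustrative):

```shell
# Save the query to a file:
echo 'select * from myhive.stu;' > /kkb/install/hivedatas/stu.sql
# Execute the file and redirect the result set to a text file:
hive -f /kkb/install/hivedatas/stu.sql > /kkb/install/hivedatas/student2.txt
```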

3. Exporting to HDFS with export

export table myhive.stu to '/kkb/install/hivedatas/stuexport';
posted @ 2020-08-24 00:15 Whatever_It_Takes