Hive File Storage Formats
Hive file storage formats fall into the following categories:
TEXTFILE
SEQUENCEFILE
RCFILE
ORC
Custom formats
TEXTFILE is the default format: if no format is specified when the table is created, this is what you get. When data is loaded, the data file is simply copied to HDFS without any processing.
Tables stored as SequenceFile or RCFile cannot be loaded directly from local files. The data must first be loaded into a TEXTFILE table and then copied into the SequenceFile or RCFile table with INSERT, as the examples below show.
TEXTFILE
The default format. Data is stored uncompressed, so disk usage is high and parsing overhead is high.
It can be combined with Gzip or Bzip2 compression (Hive detects the compression and decompresses automatically at query time), but with this approach Hive cannot split the data, so it cannot be processed in parallel.
Example:
> create table test1(str STRING)
> STORED AS TEXTFILE;
OK
Time taken: 0.786 seconds

# Write a script that generates a file of random strings, then load it:
> LOAD DATA LOCAL INPATH '/home/work/data/test.txt' INTO TABLE test1;
Copying data from file:/home/work/data/test.txt
Copying file: file:/home/work/data/test.txt
Loading data to table default.test1
OK
Time taken: 0.243 seconds
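To illustrate the Gzip combination mentioned above, here is a minimal sketch. The gzip step and the resulting file name test.txt.gz are assumptions made for illustration only; the point is that a gzip-compressed text file can be loaded into a TEXTFILE table as-is and Hive decompresses it transparently at query time.

# Illustration only: compress the sample file first (produces test.txt.gz)
$ gzip /home/work/data/test.txt
hive> LOAD DATA LOCAL INPATH '/home/work/data/test.txt.gz' INTO TABLE test1;
hive> SELECT COUNT(*) FROM test1;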
SEQUENCEFILE
SequenceFile is a binary file format provided by the Hadoop API. It is easy to use, splittable, and compressible.
SequenceFile supports three compression options: NONE, RECORD, and BLOCK. RECORD compression yields a low compression ratio, so BLOCK compression is generally recommended.
Example:
> create table test2(str STRING)
> STORED AS SEQUENCEFILE;
OK
Time taken: 5.526 seconds
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
hive> INSERT OVERWRITE TABLE test2 SELECT * FROM test1;
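Besides the compression type, the compression codec can also be chosen explicitly. The sketch below uses the standard Hadoop GzipCodec purely as an illustration; any codec installed on the cluster works, and the rest of the statements repeat the example above.

hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
hive> INSERT OVERWRITE TABLE test2 SELECT * FROM test1;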
RCFILE
RCFile is a storage format that combines row and column layouts. First, the data is split into row groups, which guarantees that a whole record lives in a single block, so reading one record never requires reading multiple blocks. Second, within each block the data is stored column by column, which benefits compression and fast column access. RCFile example:
> create table test3(str STRING)
> STORED AS RCFILE;
OK
Time taken: 0.184 seconds
> INSERT OVERWRITE TABLE test3 SELECT * FROM test1;
In practice, RCFile has shown no real performance advantage so far; it only saves about 10% of storage space, as its authors themselves acknowledge, and Facebook uses it essentially just for storage. RCFile does not currently apply any special compression techniques such as arithmetic coding or suffix trees, and it cannot skip large amounts of I/O the way InfoBright does.
ORC Format
ORC is an upgraded version of RCFile with substantially better performance. Data can be stored compressed, with a compression ratio comparable to LZO, saving up to 70% of space relative to plain text files. Read performance is also very high, which makes efficient queries possible.
For details, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC
The CREATE TABLE statements are shown below; they also change the NULL representation in the ORC table from the default \N to the empty string ''.
Method 1:
hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'last_modified_by'='pmp_bi',
  'last_modified_time'='1465992624',
  'transient_lastDdlTime'='1465992624')
Method 2:
drop table test_orc;
create table if not exists test_orc(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
partitioned by (day string, type TINYINT COMMENT '0 as bid, 1 as win, 2 as ck', hour TINYINT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
with serdeproperties('serialization.null.format' = '')
STORED AS ORC;

Check the result:

hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'transient_lastDdlTime'='1465992726')
Method 3:
drop table test_orc;
create table if not exists test_orc(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
partitioned by (day string, type TINYINT COMMENT '0 as bid, 1 as win, 2 as ck', hour TINYINT)
ROW FORMAT DELIMITED NULL DEFINED AS ''
STORED AS ORC;

Check the result:

hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'transient_lastDdlTime'='1465992916')
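The compression behaviour mentioned earlier can also be set per table through the orc.compress table property (supported values are NONE, ZLIB, and SNAPPY). The sketch below is only meant to show where the property goes; the table name test_orc_zlib is hypothetical and ZLIB is the ORC default anyway.

-- Hypothetical table used only to illustrate the orc.compress property
create table if not exists test_orc_zlib(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
partitioned by (day string, type TINYINT COMMENT '0 as bid, 1 as win, 2 as ck', hour TINYINT)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'ZLIB');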
Custom Formats
When a data file format cannot be recognized by the current Hive installation, a custom file format can be defined.
A custom input/output format is implemented by providing an InputFormat and an OutputFormat. For reference code, see .\hive-0.8.1\src\contrib\src\java\org\apache\hadoop\hive\contrib\fileformat\base64
Example:
> create table test4(str STRING)
> stored as
> inputformat 'org.apache.hadoop.hive.contrib.fileformat.base64.Base64TextInputFormat'
> outputformat 'org.apache.hadoop.hive.contrib.fileformat.base64.Base64TextOutputFormat';
$ cat test1.txt
aGVsbG8saGl2ZQ==
aGVsbG8sd29ybGQ=
aGVsbG8saGFkb29w
The test1.txt file contains base64-encoded content; decoded, the data is:
hello,hive
hello,world
hello,hadoop
Load the data and query it:
hive> LOAD DATA LOCAL INPATH '/home/work/test1.txt' INTO TABLE test4;
Copying data from file:/home/work/test1.txt
Copying file: file:/home/work/test1.txt
Loading data to table default.test4
OK
Time taken: 4.742 seconds
hive> select * from test4;
OK
hello,hive
hello,world
hello,hadoop
Time taken: 1.953 seconds
Summary
Compared with TEXTFILE and SEQUENCEFILE, RCFILE's columnar layout makes data loading more expensive, but it delivers a better compression ratio and faster query response. Since data warehouses are typically write-once, read-many, RCFILE has a clear overall advantage over the other two formats.
References: http://blog.csdn.net/yfkiss/article/details/7787742
http://blog.csdn.net/longshenlmj/article/details/51702343
http://www.cnblogs.com/ggjucheng/archive/2013/01/03/2843318.html