Hive study notes

Hive typically processes very large input datasets, so writing Hive queries takes some MapReduce knowledge and solid HQL skills; a single mistake can waste hours of run time.

Running an HQL statement without entering the Hive CLI:
bin/hive -e 'select * from t1'

Running an HQL script in non-interactive mode:
bin/hive -f hive.sql

Running an HQL script as an initialization file before entering interactive mode:
bin/hive -i hive.sql

HQL data types
data_type
: primitive_type
| array_type
| map_type
| struct_type

primitive_type
: TINYINT
| SMALLINT
| INT
| BIGINT
| BOOLEAN
| FLOAT
| DOUBLE
| STRING

array_type
: ARRAY < data_type >

map_type
: MAP < primitive_type, data_type >

struct_type
: STRUCT < col_name : data_type [COMMENT col_comment], ...>
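As an illustrative sketch (the table and column names below are made up), a table definition combining these primitive and complex types:

```sql
-- Hypothetical table mixing primitive and complex column types
CREATE TABLE employee (
  name    STRING,
  salary  FLOAT,
  skills  ARRAY<STRING>,                   -- list of skill names
  scores  MAP<STRING, INT>,                -- skill name -> score
  address STRUCT<city:STRING, zip:STRING>  -- nested record
);
```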

HQL syntax:

Creating tables
From scratch:
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
[(col_name data_type [COMMENT col_comment], ...)]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[ROW FORMAT row_format]
[STORED AS file_format]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)]
[AS select_statement]

[EXTERNAL] declares an external table: nothing is stored in Hive's own warehouse; the data stays at the path given by LOCATION (or the default location if none is specified), and dropping the table removes only the metadata, not the data files.
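For example (a hypothetical table; the HDFS path is made up), an external table whose data lives outside Hive's warehouse:

```sql
-- Hypothetical external table: DROP TABLE would leave these files in place
CREATE EXTERNAL TABLE page_view (
  viewTime INT,
  userid   BIGINT,
  page_url STRING
)
LOCATION '/user/hive/external/page_view';
```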

PARTITIONED BY declares partition columns, which are automatically added to the table's schema; once a table is partitioned, every load must specify the partition values, in the form PARTITION(p1=1, p2=2).
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS] defines bucketing: it splits each partition's data into multiple files, which can serve as inputs for multiple mappers.
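A hedged example (table name, columns, and partition values are all made up): a table partitioned by date and country, with each partition hashed into 32 bucket files by userid:

```sql
CREATE TABLE page_view_bucketed (
  viewTime INT,
  userid   BIGINT,
  page_url STRING
)
PARTITIONED BY (dt STRING, country STRING)
CLUSTERED BY (userid) SORTED BY (viewTime ASC) INTO 32 BUCKETS;

-- Every load must name the partition explicitly, e.g.:
-- INSERT OVERWRITE TABLE page_view_bucketed PARTITION (dt='2013-10-24', country='CN')
-- SELECT viewTime, userid, page_url FROM page_view_staging;
```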

ROW FORMAT specifies the field and line delimiters, or a SerDe:
: DELIMITED [FIELDS TERMINATED BY char] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
| SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
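For instance (a hypothetical log table), the DELIMITED row format for plain tab-separated text files:

```sql
-- Hypothetical table reading tab-separated, newline-terminated text
CREATE TABLE logs (
  ts  STRING,
  msg STRING
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
```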

STORED AS
: SEQUENCEFILE
| TEXTFILE
| INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

AS select_statement
select_statement is a SELECT query; the new table is populated from its results (create-table-as-select). For example:
CREATE TABLE new_key_value_store
ROW FORMAT SERDE "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"
STORED AS RCFile AS
SELECT (key % 1024) new_key, concat(key, value) key_value_pair
FROM key_value_store
SORT BY new_key, key_value_pair;

Creating from an existing table (schema only):
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
LIKE existing_table_name
[LOCATION hdfs_path]
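For example, copying only the schema of the key_value_store table used above (the new table name and path are made up); LIKE copies no data:

```sql
CREATE EXTERNAL TABLE key_value_store_copy
LIKE key_value_store
LOCATION '/user/hive/copies/key_value_store';
```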

Dropping a table
DROP TABLE table_name

Hive UDFs
Hive can call public static Java methods directly through reflection, using the built-in reflect UDF. For example:
SELECT reflect("java.lang.String", "valueOf", 1)
FROM src LIMIT 1;

"java.lang.String" is the fully qualified class name,
"valueOf" is a public static method of that class,
1 is the argument; multiple arguments are passed as (param1, param2, ...).
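Another reflect call, assuming the same src table, passing two arguments to a static method:

```sql
SELECT reflect("java.lang.Math", "max", 2, 3)
FROM src LIMIT 1;
-- Math.max(2, 3) returns 3
```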

Number of MapReduce jobs in a join:

SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key1)  -- one job
When every join condition uses the same column of one table (b.key1 here), the whole join is compiled into a single MapReduce job.
SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2)  -- two jobs
Here b.key1 and b.key2 differ, so each additional join condition adds another MapReduce job.

posted @ 2013-10-24 10:52  nosqlcn