Hive Installation

===============MySQL Installation========================================
The install is simple (it is an online install); just run the following commands in order:
(1)sudo apt-get install mysql-server
(2)sudo apt-get install mysql-client
(3)sudo apt-get install libmysqlclient-dev

During installation you will be prompted to set a password for the database root user; don't skip it. Then check whether the install succeeded with:

sudo netstat -tap | grep mysql

Log in to verify:

mysql -uroot -p123456

This logs you in to MySQL; run show databases; to list the databases.

===============Hive Installation==========================================
1. Extract Hive into /opt:
tar -zxvf apache-hive-3.1.1-bin.tar.gz
2. Rename the directory to hive:
mv apache-hive-3.1.1-bin hive
3. Add the environment variables to /etc/profile:
$ export HIVE_HOME=/opt/hive
$ export PATH=$HIVE_HOME/bin:$PATH
4. Run hive --version to test.

5. Edit hive-site.xml:
Reference: https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration
cd /opt/hive/conf/;cp hive-default.xml.template hive-site.xml;

<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master:3306/my_hive?createDatabaseIfNotExist=true</value>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password to use against metastore database</description>
</property>

</configuration>

6. Copy the MySQL JDBC driver jar into hive/lib (already copied here).


7. Initialize Hive's schema in MySQL (the my_hive database must exist in MySQL first; note that with createDatabaseIfNotExist=true in the JDBC URL above it is created automatically):

cd /opt/hive/bin;schematool -dbType mysql -initSchema

8. Check that the metastore DB was created:
mysql -uroot -p123456

9. Log in to the Hive CLI client:
$ hive
hive> set hive.cli.print.current.db=true;  -- show the current database name in the prompt

==============Database DDL============================

10. Create a new database:
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
[COMMENT database_comment]
[LOCATION hdfs_path]
[WITH DBPROPERTIES (property_name=property_value, ...)];
Example:
CREATE DATABASE IF NOT EXISTS jacksun COMMENT 'my_hive database at /use/jacksun' LOCATION '/use/jacksun'
with dbproperties ('own'='jacksun', 'day'='20181020');

show create database jacksun;
describe database extended jacksun;

11. Drop a database:
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
Example:
DROP DATABASE IF EXISTS jacksun CASCADE;
-- CASCADE performs a cascading delete: the database is dropped even if it still contains tables.

12. Modify a database's properties:
ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...);
ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role;
ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path;
Example:
alter database jacksun set dbproperties ('k1'='v1', 'k2'='v2');

==============Table DDL============================
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
[(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[SKEWED BY (col_name, col_name, ...)
ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
[STORED AS DIRECTORIES]
[
[ROW FORMAT row_format]
[STORED AS file_format]
| STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)]
[AS select_statement];

======Create Table LIKE========================================================
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
LIKE existing_table_or_view_name
[LOCATION hdfs_path];

======Table DDL data types===========================================
data_type
: primitive_type
| array_type
| map_type
| struct_type
| union_type

-------Primitive types--------------------------
primitive_type
: TINYINT
| SMALLINT
| INT
| BIGINT
| BOOLEAN
| FLOAT
| DOUBLE
| DOUBLE PRECISION
| STRING
| BINARY
| TIMESTAMP
| DECIMAL
| DECIMAL(precision, scale)
| DATE
| VARCHAR
| CHAR

array_type
: ARRAY < data_type >

map_type
: MAP < primitive_type, data_type >

struct_type
: STRUCT < col_name : data_type [COMMENT col_comment], ...>

union_type
: UNIONTYPE < data_type, data_type, ... >

======Row format (row_format)===========================================

: DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]]
[COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char]
[LINES TERMINATED BY char]
[NULL DEFINED AS char] -- (Note: Available in Hive 0.13 and later)

| SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
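To make the DELIMITED options concrete, here is a small Python sketch (illustrative only; the field names and sample row are made up) of how Hive's default text layout nests the three delimiters: \001 between fields, \002 between collection items, \003 between map keys and values:

```python
# Illustrative sketch of Hive's default DELIMITED text layout (assumption:
# default delimiters \001 for fields, \002 for collection items, \003 for
# map keys). This is not Hive code, just a model of the on-disk format.
row = {
    "name": "alice",            # STRING
    "tags": ["a", "b"],         # ARRAY<STRING>
    "props": {"k1": "v1"},      # MAP<STRING, STRING>
}

def serialize(row):
    fields = [
        row["name"],
        "\x02".join(row["tags"]),                                     # array items
        "\x02".join(f"{k}\x03{v}" for k, v in row["props"].items()),  # map entries
    ]
    return "\x01".join(fields)                                        # top-level fields

print(repr(serialize(row)))  # 'alice\x01a\x02b\x01k1\x03v1'
```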

Examples:
------Regex SerDe (with column delimiters)--------------------
CREATE TABLE apachelog (
host STRING,
identity STRING,
user STRING,
time STRING,
request STRING,
status STRING,
size STRING,
referer STRING,
agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([^]*) ([^]*) ([^]*) (-|\\[^\\]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"
)
STORED AS TEXTFILE;
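The idea is that each regex capture group becomes one table column. A simplified sketch in Python (note: Hive's RegexSerDe uses Java regex syntax, and the exact pattern above differs from this; the pattern and sample line here are illustrative only):

```python
import re

# Simplified Apache access-log pattern; each capture group maps to one
# column of the Hive table (host, identity, user, time, request, status, size).
pattern = re.compile(
    r'(\S+) (\S+) (\S+) \[(.*?)\] "(.*?)" (\d{3}) (\d+|-)'
)

line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'
m = pattern.match(line)
print(m.groups())
```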

---------JSON file parsing-----------------------------------------------
CREATE TABLE my_table(a string, b bigint, ...)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE;

---------CSV/TSV file parsing-----------------------------------------------
CREATE TABLE my_table(a string, b string, ...)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = "\t",
"quoteChar" = "'",
"escapeChar" = "\\"
)
STORED AS TEXTFILE;
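The separatorChar/quoteChar properties behave like any CSV dialect: the quote character lets a field contain the separator. A sketch of that behavior in Python (the csv module stands in for Hive's OpenCSVSerde here purely for illustration; the input line is made up):

```python
import csv
import io

# Same dialect as the SERDEPROPERTIES above: tab-separated, single-quote quoting.
data = "a\t'hello\tworld'\t3\n"
reader = csv.reader(io.StringIO(data), delimiter="\t", quotechar="'")
row = next(reader)
print(row)  # ['a', 'hello\tworld', '3'] -- the quoted tab survives as data
```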


======File format (file_format)===========================================
: SEQUENCEFILE
| TEXTFILE -- (Default, depending on hive.default.fileformat configuration)
| RCFILE -- (Note: Available in Hive 0.6.0 and later)
| ORC -- (Note: Available in Hive 0.11.0 and later)
| PARQUET -- (Note: Available in Hive 0.13.0 and later)
| AVRO -- (Note: Available in Hive 0.14.0 and later)
| JSONFILE -- (Note: Available in Hive 4.0.0 and later)
| INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

======Table DDL constraints (DISABLE NOVALIDATE: the constraint is informational, neither enforced nor validated)=====================
constraint_specification:
: [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
[, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE ]

---Partition by day------------------------------------
create table table_name (
id int,
dtDontQuery string,
name string
)
partitioned by (date string)

---Partition by date and country------------------------------------
CREATE TABLE page_view(
viewTime INT,
userid BIGINT,
page_url STRING,
referrer_url STRING,
ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
STORED AS SEQUENCEFILE;

---External table------------------------------------
CREATE EXTERNAL TABLE page_view(
viewTime INT,
userid BIGINT,
page_url STRING,
referrer_url STRING,
ip STRING COMMENT 'IP Address of the User',
country STRING COMMENT 'country of origination')
COMMENT 'This is the staging page view table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
STORED AS TEXTFILE
LOCATION '<hdfs_location>';

-----Create Table As Select (CTAS)------------------
CREATE TABLE new_key_value_store
ROW FORMAT SERDE "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"
STORED AS RCFile
AS
SELECT (key % 1024) new_key, concat(key, value) key_value_pair
FROM key_value_store
SORT BY new_key, key_value_pair;

-----Create Table Like------------------------------
CREATE TABLE empty_key_value_store
LIKE key_value_store [TBLPROPERTIES (property_name=property_value, ...)];

-----Bucketed Sorted Tables-----------------------------

CREATE TABLE page_view(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
COLLECTION ITEMS TERMINATED BY '\002'
MAP KEYS TERMINATED BY '\003'
STORED AS SEQUENCEFILE;
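Conceptually, CLUSTERED BY routes each row to one of the 32 bucket files by hashing the clustering column modulo the bucket count. A sketch of the idea (Hive's actual hash function is its own; for Hive integer columns the hash is the value itself, so this reduces to a simple modulo):

```python
NUM_BUCKETS = 32

def bucket_for(userid: int) -> int:
    # Hive assigns a row to bucket hash(clustering_col) mod num_buckets.
    # For integer columns Hive's hash is the value itself.
    return userid % NUM_BUCKETS

# Rows with the same userid always land in the same bucket file, which is
# what makes bucketed map-side joins and efficient sampling possible.
print(bucket_for(33), bucket_for(65), bucket_for(10))  # 1 1 10
```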

--Skewed tables (mitigating data skew)---------------------------
CREATE TABLE list_bucket_single (key STRING, value STRING)
SKEWED BY (key) ON (1,5,6) [STORED AS DIRECTORIES];


CREATE TABLE list_bucket_multiple (col1 STRING, col2 int, col3 STRING)
SKEWED BY (col1, col2) ON (('s1',1), ('s3',3), ('s13',13), ('s78',78)) [STORED AS DIRECTORIES];

-----Create a temporary table--------------------------------------
Temporary tables have these limitations: (a) partition columns are not supported; (b) indexes cannot be created on them.
CREATE TEMPORARY TABLE list_bucket_multiple (col1 STRING, col2 int, col3 STRING);


----Constraints-------------
create table pk(id1 integer, id2 integer,
primary key(id1, id2) disable novalidate);

create table fk(id1 integer, id2 integer,
constraint c1 foreign key(id1, id2) references pk(id2, id1) disable novalidate);

----Drop Table----------------
DROP TABLE [IF EXISTS] table_name [PURGE];

-------Truncate Table----------
-------If the table property "auto.purge" (see TBLPROPERTIES above) is set to "true", the table's data is not moved to Trash when a TRUNCATE TABLE command is issued against it---


TRUNCATE TABLE table_name [PARTITION partition_spec];

partition_spec:
: (partition_column = partition_col_value, partition_column = partition_col_value, ...)


--------Alter: Rename Table---------------------------------


ALTER TABLE table_name RENAME TO new_table_name;

--------Alter: Set Table Properties---------------------------------
ALTER TABLE table_name SET TBLPROPERTIES table_properties;

table_properties:
: (property_name = property_value, property_name = property_value, ... )

TBLPROPERTIES ("comment"="table_comment")
TBLPROPERTIES ("hbase.table.name"="table_name") -- see HBase integration.
TBLPROPERTIES ("immutable"="true") or ("immutable"="false")
TBLPROPERTIES ("orc.compress"="ZLIB") or ("orc.compress"="SNAPPY") or ("orc.compress"="NONE")
TBLPROPERTIES ("transactional"="true") or ("transactional"="false")
TBLPROPERTIES ("NO_AUTO_COMPACTION"="true") or ("NO_AUTO_COMPACTION"="false")
TBLPROPERTIES ("compactor.mapreduce.map.memory.mb"="mapper_memory")
TBLPROPERTIES ("compactorthreshold.hive.compactor.delta.num.threshold"="threshold_num")
TBLPROPERTIES ("compactorthreshold.hive.compactor.delta.pct.threshold"="threshold_pct")
TBLPROPERTIES ("auto.purge"="true") or ("auto.purge"="false")
TBLPROPERTIES ("EXTERNAL"="TRUE")

1) comment: a free-form description of the table.
2) hbase.table.name: Hive connects to external systems through storage handlers; this property is set when mapping a Hive table onto HBase.
3) immutable: as the name suggests, when this property is true, data can be inserted while the table is empty, but once the table contains data further inserts fail.
Immutable tables guard against accidental updates, such as a buggy script loading the same data multiple times without raising any error. (I haven't needed this property in practice.)
4) orc.compress: a property of ORC-format tables that selects the ORC compression codec.
5) These five properties relate to Hive's transaction (ACID) support:
transactional,
NO_AUTO_COMPACTION,
compactor.mapreduce.map.memory.mb,
compactorthreshold.hive.compactor.delta.num.threshold,
compactorthreshold.hive.compactor.delta.pct.threshold.
6) auto.purge: when set to true, deleted or overwritten data bypasses the Trash and is removed immediately. This affects Drop Table, Drop Partitions, Truncate Table, and Insert Overwrite.
7) EXTERNAL: changing this property converts a table between managed (internal) and external.

ALTER TABLE table_name SET TBLPROPERTIES ('comment' = new_comment);

--------Alter: Add SerDe Properties---------------------------------

ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES serde_properties];

ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES serde_properties;

serde_properties:
: (property_name = property_value, property_name = property_value, ... )

ALTER TABLE table_name SET SERDEPROPERTIES ('field.delim' = ',');

---------Alter: Table Storage Properties-------------------------------------

ALTER TABLE table_name CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name, ...)]
INTO num_buckets BUCKETS;


---------Alter Table Skewed--------------------------------

ALTER TABLE table_name SKEWED BY (col_name1, col_name2, ...)
ON ((col_name1_value, col_name2_value, ...) [, (col_name1_value, col_name2_value, ...), ...])
[STORED AS DIRECTORIES];

------Turn off skew handling:
ALTER TABLE table_name NOT SKEWED;

---This turns off the list bucketing feature, although the table remains skewed.-----------
ALTER TABLE table_name NOT STORED AS DIRECTORIES;

--Alter Table Set Skewed Location---
ALTER TABLE table_name SET SKEWED LOCATION (col_name1="location1" [, col_name2="location2", ...] );


=======Alter Table Constraints=============================================

ALTER TABLE table_name ADD CONSTRAINT constraint_name PRIMARY KEY (column, ...) DISABLE NOVALIDATE;
ALTER TABLE table_name ADD CONSTRAINT constraint_name FOREIGN KEY (column, ...) REFERENCES table_name(column, ...) DISABLE NOVALIDATE RELY;
ALTER TABLE table_name DROP CONSTRAINT constraint_name;


---------Add Partitions-------------------------


ALTER TABLE table_name ADD [IF NOT EXISTS] PARTITION partition_spec [LOCATION 'location'][, PARTITION partition_spec [LOCATION 'location'], ...];

partition_spec:
: (partition_column = partition_col_value, partition_column = partition_col_value, ...)


ALTER TABLE page_view ADD PARTITION (dt='2008-08-08', country='us') location '/path/to/us/part080808'
PARTITION (dt='2008-08-09', country='us') location '/path/to/us/part080809';

---------Rename a partition-------------------------

ALTER TABLE table_name PARTITION partition_spec RENAME TO PARTITION partition_spec;

----------Exchange partitions----------------------------------------
-- Move partition from table_name_1 to table_name_2
ALTER TABLE table_name_2 EXCHANGE PARTITION (partition_spec) WITH TABLE table_name_1;

-- multiple partitions
ALTER TABLE table_name_2 EXCHANGE PARTITION (partition_spec, partition_spec2, ...) WITH TABLE table_name_1;

--------Recover partitions in bulk (MSCK)-------------------
MSCK [REPAIR] TABLE table_name [ADD/DROP/SYNC PARTITIONS];

MSCK REPAIR TABLE table_name;
---the default option for the MSCK command is ADD;
---The SYNC PARTITIONS option is equivalent to calling both ADD and DROP PARTITIONS.
For the MSCK command to work, partition directory names must follow the /partition_name=partition_value/ structure; otherwise the partitions cannot be added this way and you must use the ADD PARTITION command instead.
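That naming rule can be illustrated with a small Python sketch that parses a partition directory path into the partition spec MSCK would register (the path is a made-up example):

```python
# Sketch: turn a Hive-style partition path into a partition spec.
# MSCK can only discover directories that follow key=value naming;
# path segments without "=" (the table root here) are ignored.
def partition_spec(path: str) -> dict:
    spec = {}
    for part in path.strip("/").split("/"):
        if "=" in part:
            key, _, value = part.partition("=")
            spec[key] = value
    return spec

print(partition_spec("/warehouse/page_view/dt=2008-08-08/country=us"))
# {'dt': '2008-08-08', 'country': 'us'}
```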


-----Drop partitions-----------------------------
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...]
[IGNORE PROTECTION] [PURGE];
-- (Note: PURGE available in Hive 1.2.0 and later, IGNORE PROTECTION not available 2.0.0 and later)

ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec PURGE;
ALTER TABLE page_view DROP PARTITION (dt='2008-08-08', country='us');


-----Archive/Unarchive Partition----------------
ALTER TABLE table_name ARCHIVE PARTITION partition_spec;
ALTER TABLE table_name UNARCHIVE PARTITION partition_spec;

==========Alter Table/Partition File Format===============
ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;

==========Alter Table/Partition Location===============
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION "new location";

=========Alter Table/Partition Touch================

ALTER TABLE table_name TOUCH [PARTITION partition_spec];

========Alter Table/Partition Protections=========
ALTER TABLE table_name [PARTITION partition_spec] ENABLE|DISABLE NO_DROP;
ALTER TABLE table_name [PARTITION partition_spec] ENABLE|DISABLE OFFLINE;

>>>>>>>>>>Alter Table/Partition Compact>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
ALTER TABLE table_name [PARTITION (partition_key = 'partition_value' [, ...])]
COMPACT 'compaction_type'[AND WAIT]
[WITH OVERWRITE TBLPROPERTIES ("property"="value" [, ...])];

>>>>>>>>Alter Table/Partition Concatenate>>>>>>>>>>>>>>>>>>>>>>>>
ALTER TABLE table_name [PARTITION (partition_key = 'partition_value' [, ...])] CONCATENATE;

==============Change column name/type/position/comment=============

ALTER TABLE table_name CHANGE [COLUMN] col_old_name col_new_name column_type [COMMENT col_comment] [FIRST|(AFTER column_name)];

CREATE TABLE test_change (a int, b int, c int);

// First change column a's name to a1.
ALTER TABLE test_change CHANGE a a1 INT;

// Next change column a1's name to a2, its data type to string, and put it after column b.
ALTER TABLE test_change CHANGE a1 a2 STRING AFTER b;
// The new table's structure is: b int, a2 string, c int.

// Then change column c's name to c1, and put it as the first column.
ALTER TABLE test_change CHANGE c c1 INT FIRST;
// The new table's structure is: c1 int, b int, a2 string.

// Add a comment to column a1
ALTER TABLE test_change CHANGE a1 a1 INT COMMENT 'this is column a1';


=============Add/Replace Columns============
ALTER TABLE table_name
[PARTITION partition_spec]
ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
[CASCADE|RESTRICT]


ALTER TABLE foo PARTITION (ds='2008-04-08', hr=11) CHANGE COLUMN dec_column_name dec_column_name DECIMAL(38,18);
ALTER TABLE foo PARTITION (ds='2008-04-08', hr=12) CHANGE COLUMN dec_column_name dec_column_name DECIMAL(38,18);

// hive.exec.dynamic.partition needs to be set to true to enable dynamic partitioning with ALTER PARTITION
SET hive.exec.dynamic.partition = true;

// This will alter all existing partitions in the table with ds='2008-04-08' -- be sure you know what you are doing!
ALTER TABLE foo PARTITION (ds='2008-04-08', hr) CHANGE COLUMN dec_column_name dec_column_name DECIMAL(38,18);

// This will alter all existing partitions in the table -- be sure you know what you are doing!
ALTER TABLE foo PARTITION (ds, hr) CHANGE COLUMN dec_column_name dec_column_name DECIMAL(38,18);

 


++++++++++++Create View++++++++++++++++++++++

CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ]
[COMMENT view_comment]
[TBLPROPERTIES (property_name = property_value, ...)]
AS SELECT ...;

Example:
CREATE VIEW onion_referrers(url COMMENT 'URL of Referring page')
COMMENT 'Referrers to The Onion website'
AS
SELECT DISTINCT referrer_url
FROM page_view
WHERE page_url='http://www.theonion.com';

 

---------Drop View------------------------------
DROP VIEW [IF EXISTS] [db_name.]view_name;

Example:
DROP VIEW onion_referrers;

-----------Alter View Properties------------
ALTER VIEW [db_name.]view_name SET TBLPROPERTIES table_properties;

table_properties:
: (property_name = property_value, property_name = property_value, ...)

------Alter View As Select----------------
ALTER VIEW [db_name.]view_name AS select_statement;

--CREATE VIEW
--CREATE OR REPLACE VIEW


-------Create/Drop/Alter Index (note: indexing was removed in Hive 3.0; these commands apply to Hive 2.x and earlier)--------
>>>>>>>Create Index
CREATE INDEX index_name
ON TABLE base_table_name (col_name, ...)
AS index_type
[WITH DEFERRED REBUILD]
[IDXPROPERTIES (property_name=property_value, ...)]
[IN TABLE index_table_name]
[
[ ROW FORMAT ...] STORED AS ...
| STORED BY ...
]
[LOCATION hdfs_path]
[TBLPROPERTIES (...)]
[COMMENT "index comment"];

>>>>>>>>>>>Drop Index
DROP INDEX [IF EXISTS] index_name ON table_name;

>>>>>>>>>>>>Alter Index
ALTER INDEX index_name ON table_name [PARTITION partition_spec] REBUILD;

-----------Create/Drop Macro----------
Create Temporary Macro
CREATE TEMPORARY MACRO macro_name([col_name col_type, ...]) expression;

CREATE TEMPORARY MACRO fixed_number() 42;
CREATE TEMPORARY MACRO string_len_plus_two(x string) length(x) + 2;
CREATE TEMPORARY MACRO simple_add (x int, y int) x + y;

Drop Temporary Macro
DROP TEMPORARY MACRO [IF EXISTS] macro_name;

>>>>>>>>>>>>>>>>>>Create/Drop/Reload Function

CREATE TEMPORARY FUNCTION function_name AS class_name;


DROP TEMPORARY FUNCTION [IF EXISTS] function_name;

>>>>>>>>>>>>Permanent Functions>>>>>>>>>>>>>

CREATE FUNCTION [db_name.]function_name AS class_name
[USING JAR|FILE|ARCHIVE 'file_uri' [, JAR|FILE|ARCHIVE 'file_uri'] ];

DROP FUNCTION [IF EXISTS] function_name;

>>>>>>>>Role and privilege management>>>>>>>>>>>>>>>>>>>>

CREATE ROLE role_name;
DROP ROLE role_name;
SHOW CURRENT ROLES;
SET ROLE (role_name|ALL|NONE);
SHOW ROLES;

=====Granting roles=================
GRANT role_name [, role_name] ...
TO principal_specification [, principal_specification] ...
[ WITH ADMIN OPTION ];

principal_specification
: USER user
| ROLE role

=====Revoking roles=================
REVOKE [ADMIN OPTION FOR] role_name [, role_name] ...
FROM principal_specification [, principal_specification] ... ;

principal_specification
: USER user
| ROLE role

---Show role grants
SHOW ROLE GRANT (USER|ROLE) principal_name;

---Show which principals hold a role
SHOW PRINCIPALS role_name;


-------Object Privilege Commands-------
Grant
GRANT
priv_type [, priv_type ] ...
ON table_or_view_name
TO principal_specification [, principal_specification] ...
[WITH GRANT OPTION];
Revoke
REVOKE [GRANT OPTION FOR]
priv_type [, priv_type ] ...
ON table_or_view_name
FROM principal_specification [, principal_specification] ... ;

principal_specification
: USER user
| ROLE role

priv_type
: INSERT | SELECT | UPDATE | DELETE | ALL

grant select on table secured_table to role my_role;
revoke update, select on table secured_table from role my_role;

Show Grant
SHOW GRANT [principal_specification] ON (ALL | [TABLE] table_or_view_name);

principal_specification
: USER user
| ROLE role

show grant user ashutosh on table hivejiratable;
show grant user ashutosh on all;
show grant on table hivejiratable;

>>>>Show database objects>>>>>>>>>>>>>>>>>>>>>>

Show Databases
SHOW (DATABASES|SCHEMAS) [LIKE 'identifier_with_wildcards'];

Show Tables
SHOW TABLES [IN database_name] ['identifier_with_wildcards'];

Show Views
SHOW VIEWS [IN/FROM database_name] [LIKE 'pattern_with_wildcards'];

SHOW VIEWS; -- show all views in the current database
SHOW VIEWS 'test_*'; -- show all views that start with "test_"
SHOW VIEWS '*view2'; -- show all views that end in "view2"
SHOW VIEWS LIKE 'test_view1|test_view2'; -- show views named either "test_view1" or "test_view2"
SHOW VIEWS FROM test1; -- show views from database test1
SHOW VIEWS IN test1; -- show views from database test1 (FROM and IN are same)
SHOW VIEWS IN test1 "test_*"; -- show views from database test1 that start with "test_"

Show Partitions
SHOW PARTITIONS table_name;

SHOW PARTITIONS table_name PARTITION(ds='2010-03-03'); -- (Note: Hive 0.6 and later)
SHOW PARTITIONS table_name PARTITION(hr='12'); -- (Note: Hive 0.6 and later)
SHOW PARTITIONS table_name PARTITION(ds='2010-03-03', hr='12'); -- (Note: Hive 0.6 and later)
SHOW PARTITIONS databaseFoo.tableBar PARTITION(ds='2010-03-03', hr='12'); -- (Note: Hive 0.13.0 and later)

Show Table/Partition Extended
SHOW TABLE EXTENDED [IN|FROM database_name] LIKE 'identifier_with_wildcards' [PARTITION(partition_spec)];


Show Table Properties
SHOW TBLPROPERTIES tblname;
SHOW TBLPROPERTIES tblname("foo");

Show Create Table
SHOW CREATE TABLE ([db_name.]table_name|view_name);

Show Indexes
SHOW [FORMATTED] (INDEX|INDEXES) ON table_with_index [(FROM|IN) db_name];

Show Columns
SHOW COLUMNS (FROM|IN) table_name [(FROM|IN) db_name] [ LIKE 'pattern_with_wildcards'];
-- SHOW COLUMNS
CREATE DATABASE test_db;
USE test_db;
CREATE TABLE foo(col1 INT, col2 INT, col3 INT, cola INT, colb INT, colc INT, a INT, b INT, c INT);

-- SHOW COLUMNS basic syntax
SHOW COLUMNS FROM foo; -- show all column in foo
SHOW COLUMNS FROM foo "*"; -- show all column in foo
SHOW COLUMNS IN foo "col*"; -- show columns in foo starting with "col" OUTPUT col1,col2,col3,cola,colb,colc
SHOW COLUMNS FROM foo '*c'; -- show columns in foo ending with "c" OUTPUT c,colc
SHOW COLUMNS FROM foo LIKE "col1|cola"; -- show columns in foo either col1 or cola OUTPUT col1,cola
SHOW COLUMNS FROM foo FROM test_db LIKE 'col*'; -- show columns in foo starting with "col" OUTPUT col1,col2,col3,cola,colb,colc
SHOW COLUMNS IN foo IN test_db LIKE 'col*'; -- show columns in foo starting with "col" (FROM/IN same) OUTPUT col1,col2,col3,cola,colb,colc

-- Non existing column pattern resulting in no match
SHOW COLUMNS IN foo "nomatch*";
SHOW COLUMNS IN foo "col+"; -- + wildcard not supported
SHOW COLUMNS IN foo "nomatch";

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Show Functions
SHOW FUNCTIONS "a.*";

>>>>>>>>Privilege management>>>>>>>>>>>>>>>>>>>>>>
SHOW ROLE GRANT
SHOW GRANT
SHOW CURRENT ROLES
SHOW ROLES
SHOW PRINCIPALS

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Show Locks
SHOW LOCKS <table_name>;
SHOW LOCKS <table_name> EXTENDED;
SHOW LOCKS <table_name> PARTITION (<partition_spec>);
SHOW LOCKS <table_name> PARTITION (<partition_spec>) EXTENDED;
SHOW LOCKS (DATABASE|SCHEMA) database_name; -- (Note: Hive 0.13.0 and later; SCHEMA added in Hive 0.14.0)

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Show Conf
SHOW CONF <configuration_name>;

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Show Transactions
SHOW TRANSACTIONS;

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Show Compactions
SHOW COMPACTIONS;

 

Describe Database
DESCRIBE DATABASE [EXTENDED] db_name;
DESCRIBE SCHEMA [EXTENDED] db_name; -- (Note: Hive 1.1.0 and later)

 

Describe Table/Partition/Column
DESCRIBE [EXTENDED | FORMATTED]
[db_name.]table_name [PARTITION partition_spec] [col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];

show partitions part_table;
DESCRIBE extended part_table partition (d='abc');
DESCRIBE formatted part_table partition (d='abc');


DESCRIBE FORMATTED default.src_table PARTITION (part_col = 100) columnA;
DESCRIBE default.src_thrift lintString.$elem$.myint;

>>>>>Abort Transactions>>>>>>>>>>>>

ABORT TRANSACTIONS transactionID [ transactionID ...];
ABORT TRANSACTIONS 0000007 0000008 0000010 0000015;

 


posted @ 2018-12-18 18:07 JackSun924