In Hive, we can create tables like:
create table test (
    name STRING,
    age STRING
)
stored as textfile;
This is the simplest table we can create in Hive, much like in MySQL.
However, Hive offers many more options when creating tables:
1. internal/external table
An internal table is like a MySQL table: the data is stored in a location specified by Hive and managed by Hive.
In other words, if we create an internal table `test` and then run:
hive>drop table test;
both the table schema and the data will be deleted at the same time.
Not just that: if we want to load data into that table, we have to do things like:
hive>load data local inpath '/test/test.txt' overwrite into table test;
or an equivalent query to load data into the table.
However, Hive offers the external table type, which lets us simply point at the location of the data file and have Hive read from it in place.
We call that an external table, and it can be created like:
hive>create external table test (...);
Usually we will also specify the data format of the table, like:
hive> CREATE TABLE user(id INT, name STRING)
      ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
      LINES TERMINATED BY '\n'
      STORED AS TEXTFILE;
so that Hive will parse the data based on the format you specified.
E.g. you can create the external table schema and then use the LOCATION clause to point at the data.
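A minimal sketch of that pattern (the column names are reused from the first example; the HDFS path is hypothetical):
hive> CREATE EXTERNAL TABLE test (name STRING, age STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      STORED AS TEXTFILE
      LOCATION '/data/test';
Hive will then read whatever files sit under /data/test in place, without copying them into its own warehouse directory.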
Usually, we want to add more data into Hive every day without using 'load data local', or sometimes we want to copy the file to a certain location first and load it into Hive only when we need it.
Then we can use:
hive>alter table test add partition (dt = '2013-08-31') location '/test/dt=2013-08-31';
(the location should be a directory containing the data files, not a single file)
In order to do that, we have to create the table schema with the PARTITIONED BY clause to let Hive know about the partitions:
hive>create table test (....) partitioned by (dt STRING);
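Putting the two together, a sketch of the whole workflow might look like this (the paths and the external/partitioned combination are assumptions for illustration):
hive> CREATE EXTERNAL TABLE test (name STRING, age STRING)
      PARTITIONED BY (dt STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      STORED AS TEXTFILE;
hive> ALTER TABLE test ADD PARTITION (dt = '2013-08-31')
      LOCATION '/test/dt=2013-08-31';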
2. Hive partition:
As mentioned above, Hive can handle partitions like MySQL.
However, partitions are even more important in Hive, for a couple of reasons:
Hive does not support updating tables, because all the data is stored as text (or at least in a sequential format).
Unlike a real database, Hive is just an abstraction layer: we issue queries to it and it translates those queries into Hadoop map/reduce jobs.
So without table updates, how can we manage a table partially, like deleting one day's data or adding just one day's data?
This is where Hive partitions come in. We can do:
hive>load data local inpath 'test/test.txt' overwrite into table test partition (dt = '2013-08-31');
By doing this, we will only overwrite the dt = '2013-08-31' partition without affecting the other partitions' data.
This is the most common way to manage data by date.
Note: dropping a partition will not affect the other partitions or the table schema either.
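For example, dropping one day's partition is a single ALTER TABLE statement:
hive>alter table test drop partition (dt = '2013-08-31');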
Note: dropping an external table will not delete the real data on disk or on the Hadoop cluster, so we have to do that manually with 'hadoop fs -rmr' on the real location of the data.
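For example (the path is a placeholder for wherever the table's data actually lives):
hadoop fs -rmr /test/dt=2013-08-31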
3. dynamic partition:
Hive offers dynamic partitions: you can insert into a partition chosen based on the result of another SELECT query, like:
hive>insert overwrite table test partition (dt = '2013-08-31', siteid)
select name, age, siteid from dummy_table where ...;
Here the siteid value from the SELECT decides which partition each row is written into.
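Note that dynamic partitioning has to be enabled first; a minimal sketch using Hive's standard configuration properties:
hive>set hive.exec.dynamic.partition=true;
hive>set hive.exec.dynamic.partition.mode=nonstrict;
The default mode is strict, which requires at least one static partition (like dt above); nonstrict allows all partition columns to be dynamic.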
NOTE: Hive only accepts dynamic partitions after static ones, so this won't work:
hive>insert overwrite table test partition (siteid, dt = '2013-08-31')
select name, age, siteid from dummy_table where ...;
So when we create the Hive table schema, we have to consider this as well; otherwise, once the table is created, the only way to change the partition order is to drop the table and redo everything!
This is because Hive uses directories to implement the partition hierarchy (see the Hive documentation).
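To illustrate, a row with dt = '2013-08-31' and siteid = 100 would land in a nested directory like this (assuming the default warehouse location):
/user/hive/warehouse/test/dt=2013-08-31/siteid=100/
Swapping the partition order would flip the directory nesting, which is why it cannot be changed after the table is created.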
Note: don't use a column with thousands of distinct values as the dynamic partition; it will create thousands of directories in your filesystem!
Imagine you have 10000 distinct siteids and the table above has to be populated every day based on dt.
Then you will get 10000 more directories every day! (which is probably not what you want...)
Another advantage of partitions is that once you do:
hive>select * from test where dt = '2013-08-31';
if dt is the partition column, Hive does not have to launch a Hadoop job at all, because no map/reduce is needed. Since Hive uses directories to manage the partitions, it just needs to read the files in that partition's directory and print them out, which is much faster than running a Hadoop map/reduce job.