sqoop: transferring data between MySQL and HBase/Hive/HDFS
1. Installing Sqoop
See http://www.cnblogs.com/Richardzhu/p/3322635.html
Add the SQOOP_HOME environment variables, then reload the profile: source ~/.bashrc or source /etc/profile
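A minimal sketch of those variables, assuming Sqoop is unpacked under /usr/local/sqoop (a hypothetical path; adjust to your installation):
export SQOOP_HOME=/usr/local/sqoop   # hypothetical install path
export PATH=$PATH:$SQOOP_HOME/bin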
Run sqoop help to check whether Sqoop is installed correctly; no errors means the installation is good.
2. Transferring data
MySQL to HBase
sqoop import --connect jdbc:mysql://54.0.88.53:3306/chen --username root --password password --table hivetest --hbase-create-table --hbase-table test --column-family tbl_name --hbase-row-key tbl_type
--hbase-row-key selects which column of the source table becomes the row key of the new HBase table; --column-family names the column family that holds every column other than the row key.
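To spot-check the imported rows, a quick scan in the HBase shell (the table name test comes from --hbase-table above):
hbase shell
scan 'test', {LIMIT => 5}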
MySQL to Hive
Copy the table structure
sqoop create-hive-table --connect jdbc:mysql://54.0.88.53:3306/chen --table hivetest --username root --password password --hive-table hivetest
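To confirm the structure was copied, a quick verification sketch from the command line:
hive -e "describe hivetest;"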
Import the data (no conflict if the table already exists; it is created if it does not)
Note: running the import repeatedly loads the data into Hive incrementally, i.e. rows are appended each time.
sqoop import --connect jdbc:mysql://54.0.88.53:3306/chen --username root --password password --table hivetest --hive-import
A SQL Server variant that also loads into a specific Hive partition:
sqoop import --connect 'jdbc:sqlserver://192.168.1.80;username=test;password=test;database=ba' --table=monthly_list_cdr_ac --hive-import -m 14 --hive-table monthly_list_cdr_ac --split-by day_date --hive-partition-key dt --hive-partition-value 20130531
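To verify the partitioned load, a sketch of two checks from the Hive CLI (table and partition names taken from the command above):
hive -e "show partitions monthly_list_cdr_ac;"
hive -e "select * from monthly_list_cdr_ac where dt='20130531' limit 10;"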
Hive to MySQL (exported the same way as from HDFS)
Note: if the MySQL table has no primary key, repeated runs load the data into MySQL incrementally, i.e. rows are appended each time.
sqoop export --connect jdbc:mysql://54.0.88.53:3306/chen --username root --password password --table detail3 --export-dir /apps/hive/warehouse/detail3 --input-fields-terminated-by '\|'
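If appending duplicates is not acceptable, Sqoop's update mode can be used instead; a sketch, assuming detail3 has a unique column named id (hypothetical, substitute the real key column):
sqoop export --connect jdbc:mysql://54.0.88.53:3306/chen --username root --password password --table detail3 --export-dir /apps/hive/warehouse/detail3 --input-fields-terminated-by '\|' --update-key id --update-mode allowinsert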
Connect to MySQL and list the tables in a database
sqoop list-tables --connect jdbc:mysql://localhost:3306/chen --username root --password password
sqoop import --connect jdbc:mysql://mysqlserver_IP/databaseName --table testtable -m 1
sqoop import --connect jdbc:mysql://10.233.45.104:3306/test --username root --password root --table testa --hive-import -m 1
Here mysqlserver_IP is the MySQL server address, databaseName is the database name, and testtable is the table name; -m 1 runs the job with a single map task (the default is 4 map tasks). This form imports the table as plain files on HDFS.
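Sqoop can also list the databases themselves; a sketch using the same credentials as the list-tables example:
sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username root --password password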
Problem 1:
INFO mapred.JobClient: Task Id : attempt_201108051007_0010_m_000000_0, Status : FAILED
java.util.NoSuchElementException
The cause of this error is that the fields Sqoop parses from the file do not line up with the columns of the MySQL table. You therefore need to tell Sqoop the file's field delimiter so it can split the fields correctly; note that Hive's default field delimiter is '\001'.
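For instance, a sketch of exporting a Hive table that kept the default delimiter (connection details and paths follow the examples above):
sqoop export --connect jdbc:mysql://54.0.88.53:3306/chen --username root --password password --table hivetest --export-dir /apps/hive/warehouse/hivetest --input-fields-terminated-by '\001'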
Other imports and exports
Importing a result set into MySQL
Load from a local file into Hive first:
load data local inpath '/home/labs/kang/award.txt' overwrite into table award;
Export with Sqoop: set the matching character encoding in the JDBC URL, and remember to delete the .java files generated in the current directory.
sqoop export --connect "jdbc:mysql://54.0.88.53:3306/mydb?useUnicode=true&characterEncoding=UTF-8" --username root --password password --table china --export-dir /apps/hive/warehouse/china --input-fields-terminated-by '|'
To load a Hive table into HBase, first concatenate the row key and the value:
insert overwrite table detail3 select concat(cust_no, sa_tx_dt, tx_log_no), concat(cust_no,"\|", sa_tx_dt,"\|", tx_log_no,"\|", sa_tx_tm,"\|", temp,"\|", cust_acct_no,"\|", sa_tx_crd_no,"\|", cr_tx_amt,"\|", acct_bal,"\|", f_fare,"\|", dr_cr_cod,"\|", tran_cd,"\|", tx_type,"\|", xt_op_trl,"\|", xt_op_trl2,"\|", bus_inst_no,"\|", canal,"\|", sa_op_acct_no_32,"\|", sa_op_cust_name,"\|", sa_op_bank_no,"\|", cr_cust_docag_stno,"\|", sa_otx_flg,"\|", sa_rmrk,"\|", other,"\|", tlr_no,"\|") from detail2;
drop table hbase_detail3;
-- create the external table backed by HBase
CREATE EXTERNAL TABLE hbase_detail3(key string, values string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = "values:val") TBLPROPERTIES("hbase.table.name" = "detail3");
insert overwrite table hbase_detail3 select * from detail3;
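To confirm the rows reached HBase, a quick verification sketch (detail3 is the HBase table named in TBLPROPERTIES above):
hbase shell
scan 'detail3', {LIMIT => 2}
The same data can also be queried back through the external table: hive -e "select * from hbase_detail3 limit 2;"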
Local file to HBase
hive -e "select * from hivetest" >> hive.csv hive.tsv hadoop fs -put hive.tsv /user/hdfs/chen hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,info:tbl_type hbase_hive /user/hdfs/chen/hive.csv hbase org.apache.hadoop.hbase.mapreduce.Driver import hbase_hive ./hive.csv