1: The two modes of Sqoop incremental import
Incremental import arguments:
Argument | Description
---|---
`--check-column (col)` | Specifies the column to be examined when determining which rows to import. (The column should not be of type CHAR/NCHAR/VARCHAR/VARNCHAR/LONGVARCHAR/LONGNVARCHAR.)
`--incremental (mode)` | Specifies how Sqoop determines which rows are new. Legal values for `mode` include `append` and `lastmodified`.
`--last-value (value)` | Specifies the maximum value of the check column from the previous import.
Here the `--incremental append` mode is used. Note that this mode requires the primary key or split column to be monotonically increasing; otherwise, it is better to add a `createTime` field to the relational table and use the `lastmodified` mode instead.
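The difference between the two modes shows up directly in the command line each one produces. Below is a sketch that only assembles and prints the two invocations; the host, database, table, and column names (`docid`, `createTime`) and the sample last values are hypothetical placeholders:

```shell
# Mode 1: append -- for tables whose check column only ever grows.
append_cmd="sqoop import --connect jdbc:mysql://host/db \
  --table tags --incremental append \
  --check-column docid --last-value 100"

# Mode 2: lastmodified -- for tables whose rows are updated in place;
# requires a timestamp column such as createTime.
lastmod_cmd="sqoop import --connect jdbc:mysql://host/db \
  --table tags --incremental lastmodified \
  --check-column createTime --last-value '2014-01-01 00:00:00'"

echo "$append_cmd"
echo "$lastmod_cmd"
```

With `append`, Sqoop imports rows whose check column is strictly greater than the last value; with `lastmodified`, it imports rows whose timestamp is newer than the last value, which also catches updates to existing rows.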
2: Shell script
```sh
#!/bin/bash
export SQOOP_HOME=/usr/share/sqoop-1.4.4
hostname="192.168.1.199"
user="root"
password="root"
database="test"
table="tags"
curr_max=0

function db_to_hive(){
    # Import rows whose docid is greater than the last recorded maximum.
    ${SQOOP_HOME}/bin/sqoop import \
        --connect jdbc:mysql://${hostname}/${database} \
        --username ${user} --password ${password} \
        --table ${table} --split-by docid \
        --hive-import --hive-table lan.ding \
        --fields-terminated-by '\t' \
        --incremental append --check-column docid --last-value ${curr_max}
    # Record the new maximum docid for the next round.
    result=`mysql -h${hostname} -u${user} -p${password} ${database}<<EOF
select max(docid) from ${table};
EOF`
    curr_max=`echo $result | awk '{print $2}'`
}

if [ $# -eq 0 ]; then
    while true
    do
        db_to_hive
        sleep 120
    done
    exit
fi
```
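To see what the script's `--last-value` bookkeeping does in isolation, here is a sketch that replaces MySQL and Sqoop with a plain text file of docids (one per line; the file name and helper function are made up for illustration). Each call "imports" only the rows above the recorded maximum, then refreshes that maximum the same way the script's `select max(docid)` query does:

```shell
# A text file stands in for the MySQL table: one docid per line.
table_file=$(mktemp)
printf '1\n2\n3\n' > "$table_file"
curr_max=0

import_increment() {
    # "Import" only rows with docid > curr_max (what --last-value does).
    new_rows=$(awk -v last="$curr_max" '$1 > last' "$table_file")
    echo "imported: $new_rows"
    # Refresh curr_max, mirroring the script's select max(docid) query.
    curr_max=$(sort -n "$table_file" | tail -1)
}

import_increment               # first round: picks up docids 1..3
printf '4\n5\n' >> "$table_file"
import_increment               # second round: picks up only 4 and 5
echo "final last-value: $curr_max"
rm -f "$table_file"
```

Because `import_increment` runs in the current shell (not a subshell), the updated `curr_max` survives between rounds, exactly as `db_to_hive` relies on in the `while true` loop.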
The script incrementally imports data into Hive every 2 minutes.