- Importing data into HDFS with Sqoop
- Start the sqoop container, attach to it, and run the import directly:
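- The argument lines of the command below were lost when this page was captured. Judging from the log (table CISS4.CISS_BASE_AREAS, the password-on-command-line warning), the invocation had roughly this shape; the JDBC URL and credentials are placeholders, not the original values, and the target directory is assumed to follow the /test/full_imp/<schema>.<table> pattern used later in this post:

| # Sketch only: <oracle-host>, <sid>, <user>, <password> are placeholders |
| sqoop import \ |
| --connect jdbc:oracle:thin:@<oracle-host>:1521:<sid> \ |
| --username <user> \ |
| --password <password> \ |
| --table CISS4.CISS_BASE_AREAS \ |
| --delete-target-dir \ |
| --target-dir /test/full_imp/ciss4.ciss_base_areas \ |
| -m 1 |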
| [root@node1 ~]# docker start sqoop |
| |
| [root@node1 ~]# docker exec -it sqoop bash |
| |
| [root@15b0369d3f2a /]# sqoop import \ |
| > |
| > |
| > |
| > |
| > |
| > |
| > -m 1 |
| Warning: /opt/sqoop/../hbase does not exist! HBase imports will fail. |
| Please set $HBASE_HOME to the root of your HBase installation. |
| Warning: /opt/sqoop/../hcatalog does not exist! HCatalog jobs will fail. |
| Please set $HCAT_HOME to the root of your HCatalog installation. |
| Warning: /opt/sqoop/../accumulo does not exist! Accumulo imports will fail. |
| Please set $ACCUMULO_HOME to the root of your Accumulo installation. |
| Warning: /opt/sqoop/../zookeeper does not exist! Accumulo imports will fail. |
| Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. |
| 24/01/16 02:38:06 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7 |
| 24/01/16 02:38:06 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. |
| 24/01/16 02:38:06 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled. |
| 24/01/16 02:38:06 INFO manager.SqlManager: Using default fetchSize of 1000 |
| 24/01/16 02:38:06 INFO tool.CodeGenTool: Beginning code generation |
| 24/01/16 02:38:06 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 02:38:06 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CISS4.CISS_BASE_AREAS t WHERE 1=0 |
| 24/01/16 02:38:06 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop-2.7.0 |
| Note: /tmp/sqoop-root/compile/15f7e1e1fefe0351ed95710380d65b4d/CISS4_CISS_BASE_AREAS.java uses or overrides a deprecated API. |
| Note: Recompile with -Xlint:deprecation for details. |
| 24/01/16 02:38:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/15f7e1e1fefe0351ed95710380d65b4d/CISS4.CISS_BASE_AREAS.jar |
| 24/01/16 02:38:07 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 02:38:07 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 02:38:07 INFO mapreduce.ImportJobBase: Beginning import of CISS4.CISS_BASE_AREAS |
| 24/01/16 02:38:07 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar |
| 24/01/16 02:38:07 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 02:38:08 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps |
| 24/01/16 02:38:08 INFO client.RMProxy: Connecting to ResourceManager at hadoop.bigdata.cn/172.33.0.121:8032 |
| 24/01/16 02:38:12 INFO db.DBInputFormat: Using read commited transaction isolation |
| 24/01/16 02:38:12 INFO mapreduce.JobSubmitter: number of splits:1 |
| 24/01/16 02:38:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1705372474880_0002 |
| 24/01/16 02:38:13 INFO impl.YarnClientImpl: Submitted application application_1705372474880_0002 |
| 24/01/16 02:38:13 INFO mapreduce.Job: The url to track the job: http://hadoop.bigdata.cn:8088/proxy/application_1705372474880_0002/ |
| 24/01/16 02:38:13 INFO mapreduce.Job: Running job: job_1705372474880_0002 |
| 24/01/16 02:38:20 INFO mapreduce.Job: Job job_1705372474880_0002 running in uber mode : true |
| 24/01/16 02:38:20 INFO mapreduce.Job: map 0% reduce 0% |
| 24/01/16 02:38:22 INFO mapreduce.Job: map 100% reduce 0% |
| 24/01/16 02:38:22 INFO mapreduce.Job: Job job_1705372474880_0002 completed successfully |
| 24/01/16 02:38:22 INFO mapreduce.Job: Counters: 32 |
| File System Counters |
| FILE: Number of bytes read=0 |
| FILE: Number of bytes written=0 |
| FILE: Number of read operations=0 |
| FILE: Number of large read operations=0 |
| FILE: Number of write operations=0 |
| HDFS: Number of bytes read=100 |
| HDFS: Number of bytes written=3109447 |
| HDFS: Number of read operations=140 |
| HDFS: Number of large read operations=0 |
| HDFS: Number of write operations=5 |
| Job Counters |
| Launched map tasks=1 |
| Other local map tasks=1 |
| Total time spent by all maps in occupied slots (ms)=3492 |
| Total time spent by all reduces in occupied slots (ms)=0 |
| TOTAL_LAUNCHED_UBERTASKS=1 |
| NUM_UBER_SUBMAPS=1 |
| Total time spent by all map tasks (ms)=1746 |
| Total vcore-seconds taken by all map tasks=1746 |
| Total megabyte-seconds taken by all map tasks=1787904 |
| Map-Reduce Framework |
| Map input records=47562 |
| Map output records=47562 |
| Input split bytes=87 |
| Spilled Records=0 |
| Failed Shuffles=0 |
| Merged Map outputs=0 |
| GC time elapsed (ms)=74 |
| CPU time spent (ms)=1410 |
| Physical memory (bytes) snapshot=363761664 |
| Virtual memory (bytes) snapshot=2938097664 |
| Total committed heap usage (bytes)=303038464 |
| File Input Format Counters |
| Bytes Read=0 |
| File Output Format Counters |
| Bytes Written=2966276 |
| 24/01/16 02:38:22 INFO mapreduce.ImportJobBase: Transferred 2.9654 MB in 14.0301 seconds (216.4317 KB/sec) |
| 24/01/16 02:38:22 INFO mapreduce.ImportJobBase: Retrieved 47562 records. |
- Two errors came up along the way. If Sqoop cannot reach the NameNode, the import fails with a NoRouteToHostException:

| 24/01/16 02:32:29 ERROR tool.ImportTool: Import failed: java.net.NoRouteToHostException: No Route to Host from sqoop.bigdata.cn/172.33.0.110 to hadoop.bigdata.cn:9000 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost |

- The fix applied here was adding the missing host mapping:

| vim /etc/hosts |
| 192.168.128.100 node1 |

- If the job is submitted while the NameNode is still in safe mode (for example right after the hadoop container starts), Sqoop cannot clean up its staging directory:

| 24/01/16 02:34:36 ERROR tool.ImportTool: Import failed: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/root/.staging/job_1705372474880_0001. Name node is in safe mode. |
| The reported blocks 140 has reached the threshold 0.9990 of total blocks 140. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 19 seconds. |

- Make sure the hadoop container is running, wait for safe mode to end (the message gives the countdown), then rerun the import:

| docker start hadoop |
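- If you would rather not wait out the safe-mode extension, the state can be checked and cleared by hand with the standard HDFS admin commands (a sketch, run inside the hadoop container; not part of the original transcript):

| # report whether the NameNode is currently in safe mode |
| hdfs dfsadmin -safemode get |
| # force safe mode off instead of waiting for the extension period |
| hdfs dfsadmin -safemode leave |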
- Start the hive container, attach to it, and start the metastore and hiveserver2 services:
| [root@node1 /]# docker exec -it hive bash |
| [root@7f6f4591b59d /]# jps |
| 341 Jps |
| [root@7f6f4591b59d /]# hive --service metastore & |
| [1] 356 |
| [root@7f6f4591b59d /] |
| Starting Hive Metastore Server |
| SLF4J: Class path contains multiple SLF4J bindings. |
| SLF4J: Found binding in [jar:file:/opt/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. |
| SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] |
| |
| [root@7f6f4591b59d /]# hive --service hiveserver2 & |
| [2] 442 |
| [root@7f6f4591b59d /] |
| SLF4J: Class path contains multiple SLF4J bindings. |
| SLF4J: Found binding in [jar:file:/opt/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. |
| SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] |
| |
| [root@7f6f4591b59d /]# jps |
| 356 RunJar |
| 535 Jps |
| 442 RunJar |
| |
| [root@7f6f4591b59d /]# beeline -u jdbc:hive2://hive.bigdata.cn:10000 |
| which: no hbase in (/opt/apache-hive-2.1.0-bin/bin:/opt/hadoop-2.7.0/sbin:/opt/hadoop-2.7.0/bin:/opt/jdk1.8.0_141/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin) |
| ls: cannot access /opt/apache-hive-2.1.0-bin/lib/hive-jdbc-*-standalone.jar: No such file or directory |
| Connecting to jdbc:hive2://hive.bigdata.cn:10000 |
| SLF4J: Class path contains multiple SLF4J bindings. |
| SLF4J: Found binding in [jar:file:/opt/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. |
| SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] |
| Connected to: Apache Hive (version 2.1.0) |
| Driver: Hive JDBC (version 2.1.0) |
| 24/01/16 03:28:43 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false. |
| Transaction isolation: TRANSACTION_REPEATABLE_READ |
| Beeline version 2.1.0 by Apache Hive |
| 0: jdbc:hive2://hive.bigdata.cn:10000> |
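- Back in the sqoop container, run a full import of the work-order table the same way. As above, the argument lines were stripped from the transcript; the sketch below is inferred from the log and from the HDFS listing that follows, with placeholders for the connection details:

| # Sketch only: connection details are placeholders |
| sqoop import \ |
| --connect jdbc:oracle:thin:@<oracle-host>:1521:<sid> \ |
| --username <user> \ |
| --password <password> \ |
| --table CISS4.CISS_SERVICE_WORKORDER \ |
| --delete-target-dir \ |
| --target-dir /test/full_imp/ciss4.ciss_service_workorder \ |
| -m 1 |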
| [root@15b0369d3f2a /]# sqoop import \ |
| > |
| > |
| > |
| > |
| > |
| > |
| > -m 1 |
| Warning: /opt/sqoop/../hbase does not exist! HBase imports will fail. |
| Please set $HBASE_HOME to the root of your HBase installation. |
| Warning: /opt/sqoop/../hcatalog does not exist! HCatalog jobs will fail. |
| Please set $HCAT_HOME to the root of your HCatalog installation. |
| Warning: /opt/sqoop/../accumulo does not exist! Accumulo imports will fail. |
| Please set $ACCUMULO_HOME to the root of your Accumulo installation. |
| Warning: /opt/sqoop/../zookeeper does not exist! Accumulo imports will fail. |
| Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. |
| 24/01/16 03:43:33 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7 |
| 24/01/16 03:43:33 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. |
| 24/01/16 03:43:34 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled. |
| 24/01/16 03:43:34 INFO manager.SqlManager: Using default fetchSize of 1000 |
| 24/01/16 03:43:34 INFO tool.CodeGenTool: Beginning code generation |
| 24/01/16 03:43:34 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 03:43:34 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CISS4.CISS_SERVICE_WORKORDER t WHERE 1=0 |
| 24/01/16 03:43:34 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop-2.7.0 |
| Note: /tmp/sqoop-root/compile/73bacca31f243527e909e4616be7a1cc/CISS4_CISS_SERVICE_WORKORDER.java uses or overrides a deprecated API. |
| Note: Recompile with -Xlint:deprecation for details. |
| 24/01/16 03:43:36 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/73bacca31f243527e909e4616be7a1cc/CISS4.CISS_SERVICE_WORKORDER.jar |
| 24/01/16 03:43:36 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 03:43:36 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 03:43:37 INFO mapreduce.ImportJobBase: Beginning import of CISS4.CISS_SERVICE_WORKORDER |
| 24/01/16 03:43:37 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar |
| 24/01/16 03:43:37 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 03:43:38 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps |
| 24/01/16 03:43:38 INFO client.RMProxy: Connecting to ResourceManager at hadoop.bigdata.cn/172.33.0.121:8032 |
| 24/01/16 03:43:40 INFO db.DBInputFormat: Using read commited transaction isolation |
| 24/01/16 03:43:40 INFO mapreduce.JobSubmitter: number of splits:1 |
| 24/01/16 03:43:41 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1705372474880_0003 |
| 24/01/16 03:43:41 INFO impl.YarnClientImpl: Submitted application application_1705372474880_0003 |
| 24/01/16 03:43:41 INFO mapreduce.Job: The url to track the job: http://hadoop.bigdata.cn:8088/proxy/application_1705372474880_0003/ |
| 24/01/16 03:43:41 INFO mapreduce.Job: Running job: job_1705372474880_0003 |
| 24/01/16 03:43:49 INFO mapreduce.Job: Job job_1705372474880_0003 running in uber mode : true |
| 24/01/16 03:43:49 INFO mapreduce.Job: map 0% reduce 0% |
| 24/01/16 03:43:56 INFO mapreduce.Job: map 100% reduce 0% |
| 24/01/16 03:43:56 INFO mapreduce.Job: Job job_1705372474880_0003 completed successfully |
| 24/01/16 03:43:56 INFO mapreduce.Job: Counters: 32 |
| File System Counters |
| FILE: Number of bytes read=0 |
| FILE: Number of bytes written=0 |
| FILE: Number of read operations=0 |
| FILE: Number of large read operations=0 |
| FILE: Number of write operations=0 |
| HDFS: Number of bytes read=100 |
| HDFS: Number of bytes written=132588080 |
| HDFS: Number of read operations=140 |
| HDFS: Number of large read operations=0 |
| HDFS: Number of write operations=5 |
| Job Counters |
| Launched map tasks=1 |
| Other local map tasks=1 |
| Total time spent by all maps in occupied slots (ms)=14558 |
| Total time spent by all reduces in occupied slots (ms)=0 |
| TOTAL_LAUNCHED_UBERTASKS=1 |
| NUM_UBER_SUBMAPS=1 |
| Total time spent by all map tasks (ms)=7279 |
| Total vcore-seconds taken by all map tasks=7279 |
| Total megabyte-seconds taken by all map tasks=7453696 |
| Map-Reduce Framework |
| Map input records=178609 |
| Map output records=178609 |
| Input split bytes=87 |
| Spilled Records=0 |
| Failed Shuffles=0 |
| Merged Map outputs=0 |
| GC time elapsed (ms)=1307 |
| CPU time spent (ms)=13140 |
| Physical memory (bytes) snapshot=672899072 |
| Virtual memory (bytes) snapshot=2936315904 |
| Total committed heap usage (bytes)=558366720 |
| File Input Format Counters |
| Bytes Read=0 |
| File Output Format Counters |
| Bytes Written=132443966 |
| 24/01/16 03:43:56 INFO mapreduce.ImportJobBase: Transferred 126.4458 MB in 18.4013 seconds (6.8716 MB/sec) |
| 24/01/16 03:43:56 INFO mapreduce.ImportJobBase: Retrieved 178609 records. |
| [root@15b0369d3f2a /]# hdfs dfs -ls /test/full_imp/ciss4.ciss_service_workorder |
| -rw-r--r-- 1 root supergroup 0 2024-01-16 06:40 /test/full_imp/ciss4.ciss_service_workorder/_SUCCESS |
| -rw-r--r-- 1 root supergroup 132443966 2024-01-16 06:40 /test/full_imp/ciss4.ciss_service_workorder/part-m-00000 |
| 0: jdbc:hive2://hive.bigdata.cn:10000> DROP TABLE IF EXISTS test_text; |
| OK |
| No rows affected (0.099 seconds) |
| 0: jdbc:hive2://hive.bigdata.cn:10000> create external table test_text( |
| . . . . . . . . . . . . . . . . . . .> line string |
| . . . . . . . . . . . . . . . . . . .> ) |
| . . . . . . . . . . . . . . . . . . .> location '/test/full_imp/ciss4.ciss_service_workorder'; |
| OK |
| No rows affected (0.038 seconds) |
| |
| 0: jdbc:hive2://hive.bigdata.cn:10000> select count(*) from test_text; |
| WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. |
| Query ID = root_20240116064205_1ce62fe3-8c46-4175-b942-2ef33ab4d9da |
| Total jobs = 1 |
| Launching Job 1 out of 1 |
| Number of reduce tasks determined at compile time: 1 |
| In order to change the average load for a reducer (in bytes): |
| set hive.exec.reducers.bytes.per.reducer=<number> |
| In order to limit the maximum number of reducers: |
| set hive.exec.reducers.max=<number> |
| In order to set a constant number of reducers: |
| set mapreduce.job.reduces=<number> |
| Starting Job = job_1705372474880_0016, Tracking URL = http://hadoop.bigdata.cn:8088/proxy/application_1705372474880_0016/ |
| Kill Command = /opt/hadoop-2.7.0/bin/hadoop job -kill job_1705372474880_0016 |
| Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1 |
| 2024-01-16 06:42:08,879 Stage-1 map = 0%, reduce = 0% |
| Ended Job = job_1705372474880_0016 with errors |
| Error during job, obtaining debugging information... |
| FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask |
| MapReduce Jobs Launched: |
| Stage-Stage-1: Map: 1 Reduce: 1 FAIL |
| Total MapReduce CPU Time Spent: -1 msec |
| WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. |
| Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2) |
- Running the count through beeline fails once the job is submitted to the cluster (return code 2 from MapRedTask). As a workaround, open the hive CLI in the container and let small queries run in local mode instead:

| [root@7f6f4591b59d /]# hive |
| which: no hbase in (/opt/apache-hive-2.1.0-bin/bin:/opt/hadoop-2.7.0/sbin:/opt/hadoop-2.7.0/bin:/opt/jdk1.8.0_141/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin) |
| SLF4J: Class path contains multiple SLF4J bindings. |
| SLF4J: Found binding in [jar:file:/opt/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] |
| SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. |
| SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] |
| |
| Logging initialized using configuration in jar:file:/opt/apache-hive-2.1.0-bin/lib/hive-common-2.1.0.jar!/hive-log4j2.properties Async: true |
| Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. |
| hive> set hive.exec.mode.local.auto=true; |
| |
| hive> DROP TABLE IF EXISTS test_text; |
| OK |
| Time taken: 0.632 seconds |
| hive> create external table test_text( |
| > line string |
| > ) |
| > location '/test/full_imp/ciss4.ciss_service_workorder'; |
| OK |
| Time taken: 0.173 seconds |
| |
| hive> select count(*) from test_text; |
| Automatically selecting local only mode for query |
| WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. |
| Query ID = root_20240116064840_29a0446e-deb2-4da9-a5e9-65f6a9c87898 |
| Total jobs = 1 |
| Launching Job 1 out of 1 |
| Number of reduce tasks determined at compile time: 1 |
| In order to change the average load for a reducer (in bytes): |
| set hive.exec.reducers.bytes.per.reducer=<number> |
| In order to limit the maximum number of reducers: |
| set hive.exec.reducers.max=<number> |
| In order to set a constant number of reducers: |
| set mapreduce.job.reduces=<number> |
| Job running in-process (local Hadoop) |
| 2024-01-16 06:48:42,842 Stage-1 map = 100%, reduce = 100% |
| Ended Job = job_local1988012907_0001 |
| MapReduce Jobs Launched: |
| Stage-Stage-1: HDFS Read: 264887932 HDFS Write: 106 SUCCESS |
| Total MapReduce CPU Time Spent: 0 msec |
| OK |
| 194673 |
| Time taken: 2.446 seconds, Fetched: 1 row(s) |
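- Note the mismatch: Sqoop reported 178609 retrieved records, but the text table counts 194673 rows. In a plain-text import, a record whose column values contain embedded newlines is split across several lines, which inflates the count; this is the motivation for re-importing in Avro format below.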
- Using Sqoop to import Oracle data into HDFS in Avro format
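- As before, most argument lines were lost from the transcript. From the log below (an Avro schema file is written, the destination directory is deleted first, and the output file is part-m-00000.avro), the command was roughly the following, with placeholder connection details:

| # Sketch only: connection details are placeholders |
| sqoop import \ |
| -Dmapreduce.job.user.classpath.first=true \ |
| --connect jdbc:oracle:thin:@<oracle-host>:1521:<sid> \ |
| --username <user> \ |
| --password <password> \ |
| --table CISS4.CISS_SERVICE_WORKORDER \ |
| --delete-target-dir \ |
| --target-dir /test/full_imp/ciss4.ciss_service_workorder \ |
| --as-avrodatafile \ |
| -m 1 |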
| [root@15b0369d3f2a /]# sqoop import \ |
| > -Dmapreduce.job.user.classpath.first=true \ |
| > |
| > |
| > |
| > |
| > |
| > |
| > |
| > |
| > -m 1 |
| Warning: /opt/sqoop/../hbase does not exist! HBase imports will fail. |
| Please set $HBASE_HOME to the root of your HBase installation. |
| Warning: /opt/sqoop/../hcatalog does not exist! HCatalog jobs will fail. |
| Please set $HCAT_HOME to the root of your HCatalog installation. |
| Warning: /opt/sqoop/../accumulo does not exist! Accumulo imports will fail. |
| Please set $ACCUMULO_HOME to the root of your Accumulo installation. |
| Warning: /opt/sqoop/../zookeeper does not exist! Accumulo imports will fail. |
| Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation. |
| 24/01/16 06:19:12 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7 |
| 24/01/16 06:19:12 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead. |
| 24/01/16 06:19:12 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled. |
| 24/01/16 06:19:12 INFO manager.SqlManager: Using default fetchSize of 1000 |
| 24/01/16 06:19:12 INFO tool.CodeGenTool: Beginning code generation |
| 24/01/16 06:19:12 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 06:19:12 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CISS4.CISS_SERVICE_WORKORDER t WHERE 1=0 |
| 24/01/16 06:19:12 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop-2.7.0 |
| Note: /tmp/sqoop-root/compile/6722a12c7d57684746fe2e7fde521c66/CISS4_CISS_SERVICE_WORKORDER.java uses or overrides a deprecated API. |
| Note: Recompile with -Xlint:deprecation for details. |
| 24/01/16 06:19:13 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/6722a12c7d57684746fe2e7fde521c66/CISS4.CISS_SERVICE_WORKORDER.jar |
| 24/01/16 06:19:14 INFO tool.ImportTool: Destination directory /test/full_imp/ciss4.ciss_service_workorder deleted. |
| 24/01/16 06:19:14 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 06:19:14 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 06:19:14 INFO mapreduce.ImportJobBase: Beginning import of CISS4.CISS_SERVICE_WORKORDER |
| 24/01/16 06:19:14 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar |
| 24/01/16 06:19:14 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 06:19:14 INFO manager.OracleManager: Time zone has been set to GMT |
| 24/01/16 06:19:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CISS4.CISS_SERVICE_WORKORDER t WHERE 1=0 |
| 24/01/16 06:19:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CISS4.CISS_SERVICE_WORKORDER t WHERE 1=0 |
| 24/01/16 06:19:14 INFO mapreduce.DataDrivenImportJob: Writing Avro schema file: /tmp/sqoop-root/compile/6722a12c7d57684746fe2e7fde521c66/CISS4_CISS_SERVICE_WORKORDER.avsc |
| 24/01/16 06:19:14 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps |
| 24/01/16 06:19:14 INFO client.RMProxy: Connecting to ResourceManager at hadoop.bigdata.cn/172.33.0.121:8032 |
| 24/01/16 06:19:17 INFO db.DBInputFormat: Using read commited transaction isolation |
| 24/01/16 06:19:17 INFO mapreduce.JobSubmitter: number of splits:1 |
| 24/01/16 06:19:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1705372474880_0012 |
| 24/01/16 06:19:17 INFO impl.YarnClientImpl: Submitted application application_1705372474880_0012 |
| 24/01/16 06:19:17 INFO mapreduce.Job: The url to track the job: http://hadoop.bigdata.cn:8088/proxy/application_1705372474880_0012/ |
| 24/01/16 06:19:17 INFO mapreduce.Job: Running job: job_1705372474880_0012 |
| 24/01/16 06:19:23 INFO mapreduce.Job: Job job_1705372474880_0012 running in uber mode : true |
| 24/01/16 06:19:23 INFO mapreduce.Job: map 0% reduce 0% |
| 24/01/16 06:19:32 INFO mapreduce.Job: map 100% reduce 0% |
| 24/01/16 06:19:32 INFO mapreduce.Job: Job job_1705372474880_0012 completed successfully |
| 24/01/16 06:19:32 INFO mapreduce.Job: Counters: 32 |
| File System Counters |
| FILE: Number of bytes read=0 |
| FILE: Number of bytes written=0 |
| FILE: Number of read operations=0 |
| FILE: Number of large read operations=0 |
| FILE: Number of write operations=0 |
| HDFS: Number of bytes read=100 |
| HDFS: Number of bytes written=99422448 |
| HDFS: Number of read operations=140 |
| HDFS: Number of large read operations=0 |
| HDFS: Number of write operations=5 |
| Job Counters |
| Launched map tasks=1 |
| Other local map tasks=1 |
| Total time spent by all maps in occupied slots (ms)=18486 |
| Total time spent by all reduces in occupied slots (ms)=0 |
| TOTAL_LAUNCHED_UBERTASKS=1 |
| NUM_UBER_SUBMAPS=1 |
| Total time spent by all map tasks (ms)=9243 |
| Total vcore-seconds taken by all map tasks=9243 |
| Total megabyte-seconds taken by all map tasks=9464832 |
| Map-Reduce Framework |
| Map input records=178609 |
| Map output records=178609 |
| Input split bytes=87 |
| Spilled Records=0 |
| Failed Shuffles=0 |
| Merged Map outputs=0 |
| GC time elapsed (ms)=1165 |
| CPU time spent (ms)=13410 |
| Physical memory (bytes) snapshot=683810816 |
| Virtual memory (bytes) snapshot=2947174400 |
| Total committed heap usage (bytes)=555745280 |
| File Input Format Counters |
| Bytes Read=0 |
| File Output Format Counters |
| Bytes Written=99270492 |
| 24/01/16 06:19:32 INFO mapreduce.ImportJobBase: Transferred 94.8166 MB in 17.9358 seconds (5.2864 MB/sec) |
| 24/01/16 06:19:32 INFO mapreduce.ImportJobBase: Retrieved 178609 records. |
| [root@15b0369d3f2a /]# hdfs dfs -ls /test/full_imp/ciss4.ciss_service_workorder |
| -rw-r--r-- 1 root supergroup 0 2024-01-16 06:19 /test/full_imp/ciss4.ciss_service_workorder/_SUCCESS |
| -rw-r--r-- 1 root supergroup 99270492 2024-01-16 06:19 /test/full_imp/ciss4.ciss_service_workorder/part-m-00000.avro |
| hive> DROP TABLE IF EXISTS test_avro; |
| OK |
| Time taken: 0.093 seconds |
| hive> create external table test_avro( |
| > line string |
| > ) |
| > stored as avro |
| > location '/test/full_imp/ciss4.ciss_service_workorder'; |
| OK |
| Time taken: 0.123 seconds |
| |
| hive> select count(*) from test_avro; |
| Automatically selecting local only mode for query |
| WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. |
| Query ID = root_20240116065642_b024f3bf-040d-4eef-b371-563568447dc1 |
| Total jobs = 1 |
| Launching Job 1 out of 1 |
| Number of reduce tasks determined at compile time: 1 |
| In order to change the average load for a reducer (in bytes): |
| set hive.exec.reducers.bytes.per.reducer=<number> |
| In order to limit the maximum number of reducers: |
| set hive.exec.reducers.max=<number> |
| In order to set a constant number of reducers: |
| set mapreduce.job.reduces=<number> |
| Job running in-process (local Hadoop) |
| 2024-01-16 06:56:43,542 Stage-1 map = 0%, reduce = 0% |
| 2024-01-16 06:56:45,551 Stage-1 map = 100%, reduce = 100% |
| Ended Job = job_local595040272_0002 |
| MapReduce Jobs Launched: |
| Stage-Stage-1: HDFS Read: 463445520 HDFS Write: 318 SUCCESS |
| Total MapReduce CPU Time Spent: 0 msec |
| OK |
| 178609 |
| Time taken: 3.297 seconds, Fetched: 1 row(s) |
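- The Avro-backed table returns exactly the 178609 records Sqoop reported, confirming that the earlier discrepancy came from newlines inside field values breaking the text format, not from the import itself.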