Installing DataX and Maxwell on Linux

  • Upload datax.tar.gz to the server
# Extract
tar -zxvf datax.tar.gz -C /opt/software/
# Verify the installation with the bundled self-test job
python /opt/software/datax/bin/datax.py /opt/software/datax/job/job.json
Full output of a successful run:
[root@slave1 reader]# python /opt/software/datax/bin/datax.py /opt/software/datax/job/job.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.


2023-12-18 19:20:26.057 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2023-12-18 19:20:26.067 [main] INFO  Engine - the machine info  => 

	osInfo:	Oracle Corporation 1.8 25.181-b13
	jvmInfo:	Linux amd64 3.10.0-957.el7.x86_64
	cpu num:	1

	totalPhysicalMemory:	-0.00G
	freePhysicalMemory:	-0.00G
	maxFileDescriptorCount:	-1
	currentOpenFileDescriptorCount:	-1

	GC Names	[Copy, MarkSweepCompact]

	MEMORY_NAME                    | allocation_size                | init_size                      
	Eden Space                     | 273.06MB                       | 273.06MB                       
	Code Cache                     | 240.00MB                       | 2.44MB                         
	Survivor Space                 | 34.13MB                        | 34.13MB                        
	Compressed Class Space         | 1,024.00MB                     | 0.00MB                         
	Metaspace                      | -0.00MB                        | 0.00MB                         
	Tenured Gen                    | 682.69MB                       | 682.69MB                       


2023-12-18 19:20:26.085 [main] INFO  Engine - 
{
	"content":[
		{
			"reader":{
				"name":"streamreader",
				"parameter":{
					"column":[
						{
							"type":"string",
							"value":"DataX"
						},
						{
							"type":"long",
							"value":19890604
						},
						{
							"type":"date",
							"value":"1989-06-04 00:00:00"
						},
						{
							"type":"bool",
							"value":true
						},
						{
							"type":"bytes",
							"value":"test"
						}
					],
					"sliceRecordCount":100000
				}
			},
			"writer":{
				"name":"streamwriter",
				"parameter":{
					"encoding":"UTF-8",
					"print":false
				}
			}
		}
	],
	"setting":{
		"errorLimit":{
			"percentage":0.02,
			"record":0
		},
		"speed":{
			"byte":10485760
		}
	}
}

2023-12-18 19:20:26.106 [main] WARN  Engine - prioriy set to 0, because NumberFormatException, the value is: null
2023-12-18 19:20:26.108 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2023-12-18 19:20:26.108 [main] INFO  JobContainer - DataX jobContainer starts job.
2023-12-18 19:20:26.109 [main] INFO  JobContainer - Set jobId = 0
2023-12-18 19:20:26.146 [job-0] INFO  JobContainer - jobContainer starts to do prepare ...
2023-12-18 19:20:26.147 [job-0] INFO  JobContainer - DataX Reader.Job [streamreader] do prepare work .
2023-12-18 19:20:26.147 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2023-12-18 19:20:26.147 [job-0] INFO  JobContainer - jobContainer starts to do split ...
2023-12-18 19:20:26.148 [job-0] INFO  JobContainer - Job set Max-Byte-Speed to 10485760 bytes.
2023-12-18 19:20:26.148 [job-0] INFO  JobContainer - DataX Reader.Job [streamreader] splits to [1] tasks.
2023-12-18 19:20:26.148 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] splits to [1] tasks.
2023-12-18 19:20:26.165 [job-0] INFO  JobContainer - jobContainer starts to do schedule ...
2023-12-18 19:20:26.172 [job-0] INFO  JobContainer - Scheduler starts [1] taskGroups.
2023-12-18 19:20:26.174 [job-0] INFO  JobContainer - Running by standalone Mode.
2023-12-18 19:20:26.193 [taskGroup-0] INFO  TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2023-12-18 19:20:26.201 [taskGroup-0] INFO  Channel - Channel set byte_speed_limit to -1, No bps activated.
2023-12-18 19:20:26.201 [taskGroup-0] INFO  Channel - Channel set record_speed_limit to -1, No tps activated.
2023-12-18 19:20:26.219 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2023-12-18 19:20:26.320 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[112]ms
2023-12-18 19:20:26.320 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] completed it's tasks.
2023-12-18 19:20:36.201 [job-0] INFO  StandAloneJobContainerCommunicator - Total 100000 records, 2600000 bytes | Speed 253.91KB/s, 10000 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.050s |  All Task WaitReaderTime 0.065s | Percentage 100.00%
2023-12-18 19:20:36.201 [job-0] INFO  AbstractScheduler - Scheduler accomplished all tasks.
2023-12-18 19:20:36.201 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] do post work.
2023-12-18 19:20:36.201 [job-0] INFO  JobContainer - DataX Reader.Job [streamreader] do post work.
2023-12-18 19:20:36.201 [job-0] INFO  JobContainer - DataX jobId [0] completed successfully.
2023-12-18 19:20:36.202 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /opt/software/datax/hook
2023-12-18 19:20:36.203 [job-0] INFO  JobContainer - 
	 [total cpu info] => 
		averageCpu                     | maxDeltaCpu                    | minDeltaCpu                    
		-1.00%                         | -1.00%                         | -1.00%
                        

	 [total gc info] => 
		 NAME                 | totalGCCount       | maxDeltaGCCount    | minDeltaGCCount    | totalGCTime        | maxDeltaGCTime     | minDeltaGCTime     
		 Copy                 | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             
		 MarkSweepCompact     | 0                  | 0                  | 0                  | 0.000s             | 0.000s             | 0.000s             

2023-12-18 19:20:36.203 [job-0] INFO  JobContainer - PerfTrace not enable!
2023-12-18 19:20:36.204 [job-0] INFO  StandAloneJobContainerCommunicator - Total 100000 records, 2600000 bytes | Speed 253.91KB/s, 10000 records/s | Error 0 records, 0 bytes |  All Task WaitWriterTime 0.050s |  All Task WaitReaderTime 0.065s | Percentage 100.00%
2023-12-18 19:20:36.204 [job-0] INFO  JobContainer - 
Job start time                 : 2023-12-18 19:20:26
Job end time                   : 2023-12-18 19:20:36
Total elapsed time             :                 10s
Average throughput             :          253.91KB/s
Record write speed             :          10000rec/s
Total records read             :              100000
Total read/write failures      :                   0
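The bundled job.json exercised above is a stream-to-stream self-test: streamreader fabricates 100,000 in-memory records and streamwriter discards them (print=false). A minimal job of the same shape can be sketched and sanity-checked before handing it to DataX; the column values and record count below are illustrative, not taken from the DataX distribution:

```shell
# Write a minimal stream-to-stream DataX job to a temp file (illustrative values).
job=$(mktemp)
cat > "$job" <<'EOF'
{
  "job": {
    "content": [{
      "reader": {
        "name": "streamreader",
        "parameter": {
          "column": [
            {"type": "string", "value": "hello"},
            {"type": "long", "value": 42}
          ],
          "sliceRecordCount": 10
        }
      },
      "writer": {
        "name": "streamwriter",
        "parameter": {"encoding": "UTF-8", "print": true}
      }
    }],
    "setting": {"speed": {"channel": 1}}
  }
}
EOF
# Catch JSON mistakes before DataX does:
python3 -m json.tool "$job" > /dev/null && echo "job file is valid JSON"
# Then run it against the install above:
# python /opt/software/datax/bin/datax.py "$job"
```

With print=true every record is echoed to stdout, which makes this kind of tiny job a convenient smoke test when trying out a new reader or writer.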
  • Error encountered
DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.
2023-12-18 19:12:36.620 [main] WARN  ConfigParser - Plugin [streamreader,streamwriter] failed to load, retrying in 1s... Exception:Code:[Common-00], Describe:[The configuration file you provided contains errors; please check your job configuration.] - Configuration error: the configuration file [/opt/software/datax/plugin/reader/._drdsreader/plugin.json] does not exist. Please check your configuration file.
2023-12-18 19:12:37.640 [main] ERROR Engine - 
According to DataX's analysis, the most likely cause of this job's failure is:
com.alibaba.datax.common.exception.DataXException: Code:[Common-00], Describe:[The configuration file you provided contains errors; please check your job configuration.] - Configuration error: the configuration file [/opt/software/datax/plugin/reader/._drdsreader/plugin.json] does not exist. Please check your configuration file.
	at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:26)
	at com.alibaba.datax.common.util.Configuration.from(Configuration.java:95)
	at com.alibaba.datax.core.util.ConfigParser.parseOnePluginConfig(ConfigParser.java:153)
	at com.alibaba.datax.core.util.ConfigParser.parsePluginConfig(ConfigParser.java:125)
	at com.alibaba.datax.core.util.ConfigParser.parse(ConfigParser.java:63)
	at com.alibaba.datax.core.Engine.entry(Engine.java:137)
	at com.alibaba.datax.core.Engine.main(Engine.java:204)
  • Solution: delete the stray `._*` temporary files
find /opt/software/datax/plugin/reader/ -type f -name "._*er" | xargs rm -rf
find /opt/software/datax/plugin/writer/ -type f -name "._*er" | xargs rm -rf
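The `._*` entries are AppleDouble sidecar files that macOS leaves behind when an archive is packed on a Mac; DataX scans everything under plugin/reader and plugin/writer as plugin directories, so a stray `._drdsreader` breaks config parsing. The same find/xargs pattern can be rehearsed safely in a scratch directory first (paths below are illustrative):

```shell
# Rehearse the cleanup in a throwaway directory before touching the real install.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/reader/drdsreader"
echo '{"name": "drdsreader"}' > "$sandbox/reader/drdsreader/plugin.json"
touch "$sandbox/reader/._drdsreader"   # fake macOS sidecar file
# Same pattern as the fix above, pointed at the sandbox:
find "$sandbox/reader/" -type f -name "._*er" | xargs rm -rf
# The sidecar is gone and the real plugin survives:
[ ! -e "$sandbox/reader/._drdsreader" ] && [ -f "$sandbox/reader/drdsreader/plugin.json" ] \
  && echo "sidecar removed, plugin intact"
```

`-type f` is what keeps the real `drdsreader` directory out of harm's way: only the sidecar files match.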
  • Maxwell 1.30.0 and later no longer support JDK 1.8, so this install uses 1.29.2
  • Upload maxwell-1.29.2.tar.gz to the server
# Extract
tar -zxvf maxwell-1.29.2.tar.gz -C /opt/software/
# Rename
mv /opt/software/maxwell-1.29.2/ /opt/software/maxwell

# Configure MySQL
vim /etc/my.cnf
# Add the following
[mysqld]
# Server ID
server-id = 1
# Enable binlog; this value becomes the binlog file name prefix
log-bin=mysql-bin
# Binlog format; Maxwell requires row
binlog_format=row
# Database(s) to log; adjust to your environment
binlog-do-db=edu2077

# Restart MySQL
systemctl restart mysqld.service
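Before moving on, it is worth confirming that the binlog settings actually took effect after the restart. A quick check from the mysql client (exact output varies by environment):

```sql
mysql> SHOW VARIABLES LIKE 'log_bin';        -- expect: ON
mysql> SHOW VARIABLES LIKE 'binlog_format';  -- expect: ROW
mysql> SHOW MASTER STATUS;                   -- current binlog file name and position
```

If `log_bin` is still OFF, the `[mysqld]` block above was likely added to the wrong config file for this installation.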
# Log in to MySQL and create the database and user Maxwell needs
mysql> CREATE DATABASE maxwell;
# MySQL 5.7: lower the password policy
mysql> set global validate_password_policy=0;
mysql> set global validate_password_length=4;
# MySQL 8.0: lower the password policy
# Install the validate_password component first
mysql> INSTALL COMPONENT 'file://component_validate_password';
mysql> show variables like 'validate_password%';
# Then adjust
mysql> set global validate_password.policy=0;
mysql> set global validate_password.length=1;

# Create the user and grant privileges
mysql> CREATE USER 'maxwell'@'%' IDENTIFIED BY 'maxwell';
mysql> GRANT ALL ON maxwell.* TO 'maxwell'@'%';
mysql> GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'maxwell'@'%';
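To sanity-check the new account before starting Maxwell, log in as it and inspect its grants; `slave1` here is this cluster's MySQL host, matching the config used elsewhere in this post:

```shell
# Connect as the maxwell user and confirm the grants took effect.
mysql -h slave1 -umaxwell -pmaxwell -e "SHOW GRANTS FOR CURRENT_USER();"
```

The output should list the `maxwell.*` grant plus SELECT, REPLICATION CLIENT, and REPLICATION SLAVE on `*.*`.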

# Configure Maxwell
cd /opt/software/maxwell
# Make a working copy of the sample config
cp config.properties.example config.properties
vim config.properties
# Configure as follows
# Where Maxwell sends data; options: stdout|file|kafka|kinesis|pubsub|sqs|rabbitmq|redis
producer=kafka
# Target Kafka cluster addresses
kafka.bootstrap.servers=slave1:9092,slave2:9092,slave3:9092
# Target Kafka topic; can be static (e.g. maxwell) or dynamic (e.g. %{database}_%{table})
kafka_topic=topic_db
# MySQL connection settings
host=slave1
user=maxwell
password=maxwell
jdbc_options=useSSL=false&serverTimezone=Asia/Shanghai
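config.properties only tells Maxwell where to read and write; it still has to be started. A reasonable first run, assuming the MySQL host and Kafka brokers from the config above are up (a stdout dry run is a handy way to see change events before involving Kafka):

```shell
cd /opt/software/maxwell
# Dry run: print change events to the console instead of Kafka (Ctrl-C to stop).
bin/maxwell --user maxwell --password maxwell --host slave1 --producer stdout
# Real run, using config.properties; --daemon sends the process to the background.
bin/maxwell --config config.properties --daemon
# Watch events arrive on the topic (run on a Kafka host):
# kafka-console-consumer.sh --bootstrap-server slave1:9092 --topic topic_db
```

Once it is running, any INSERT/UPDATE/DELETE in the binlog-enabled database should show up as a JSON event on topic_db.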
posted @ 2023-12-18 09:05  DogLeftover