Flume configuration not taking effect on CDH

While running Flume on CDH 6.3.1, I configured an agent to consume from Kafka and write to HDFS, using a FileChannel.
I chose FileChannel over MemoryChannel for data reliability: events are persisted to disk and survive an agent restart.

But today the configuration simply would not take effect.

The cause: when configuring the FileChannel, I had created the checkpoint and data directories myself on the server where the agent runs.

The owner of those directories must be set to the hdfs user, otherwise the channel silently fails to work, and the agent's background logs show no error message at all.

chown -R hdfs:hdfs /home/hadoop/bigdata/flume_job
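Since the failure is silent, it is worth confirming the ownership actually took. A minimal sketch of such a check, assuming GNU `stat` is available (the `check_owner` helper name and the temp-dir demo are mine; on the real agent host you would point it at /home/hadoop/bigdata/flume_job):

```shell
# check_owner prints the owning user of a path; compare the result
# against "hdfs" for the FileChannel directories.
check_owner() {
  stat -c '%U' "$1"
}

# Demonstrated on a throwaway directory; on the agent host run e.g.:
#   check_owner /home/hadoop/bigdata/flume_job
dir=$(mktemp -d)
check_owner "$dir"
rm -rf "$dir"
```

A fresh temp dir will print the current user; after the `chown -R` above, the flume_job tree should print `hdfs`.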

My agent configuration is as follows:

tier1.sources=bsc_traces uni_v3
tier1.sinks=bsc_traces uni_v3
tier1.channels=bsc_traces uni_v3

tier1.sources.bsc_traces.type=org.apache.flume.source.kafka.KafkaSource
tier1.sources.bsc_traces.batchSize=100
tier1.sources.bsc_traces.batchDurationMillis=3000
tier1.sources.bsc_traces.kafka.bootstrap.servers=192.168.1.17:9092
tier1.sources.bsc_traces.kafka.topics=testTopic2
tier1.sources.bsc_traces.kafka.consumer.group.id=bigdata_ods_bsc_traces
tier1.sources.bsc_traces.kafka.consumer.auto.offset.reset=earliest
tier1.sources.bsc_traces.kafka.consumer.auto.commit.enable=false
tier1.sources.bsc_traces.kafka.consumer.timeout.ms=15000
tier1.sources.bsc_traces.kafka.consumer.fetch.max.wait.ms=5000
tier1.sources.bsc_traces.kafka.consumer.max.poll.records=100
tier1.sources.bsc_traces.kafka.consumer.max.poll.interval.ms=3000000
tier1.sources.bsc_traces.interceptors=i1
tier1.sources.bsc_traces.interceptors.i1.type=com.xxx.flume.interceptor.AutoPartitionInterceptor$Builder
tier1.sources.bsc_traces.interceptors.i1.timestampField=block_timestamp:GMT+0
tier1.sources.bsc_traces.channels=bsc_traces
tier1.sinks.bsc_traces.type=hdfs
tier1.sinks.bsc_traces.hdfs.path=hdfs://ns1/user/hive/warehouse/ods.db/ods_bsc_traces/pk_year=%{pk_year}/pk_month=%{pk_month}/pk_day=%{pk_day}
tier1.sinks.bsc_traces.hdfs.filePrefix=ods_bsc_traces
tier1.sinks.bsc_traces.hdfs.fileSuffix=.log
tier1.sinks.bsc_traces.hdfs.useLocalTimeStamp=true
tier1.sinks.bsc_traces.hdfs.batchSize=500
tier1.sinks.bsc_traces.hdfs.fileType=DataStream
tier1.sinks.bsc_traces.hdfs.writeFormat=Text
tier1.sinks.bsc_traces.hdfs.rollSize=2147483648
tier1.sinks.bsc_traces.hdfs.rollInterval=0
tier1.sinks.bsc_traces.hdfs.rollCount=0
tier1.sinks.bsc_traces.hdfs.idleTimeout=120
tier1.sinks.bsc_traces.hdfs.minBlockReplicas=1
tier1.sinks.bsc_traces.channel=bsc_traces
tier1.channels.bsc_traces.type=file
tier1.channels.bsc_traces.checkpointDir=/home/hadoop/bigdata/flume_job/chkDir/ods_bsc_traces
tier1.channels.bsc_traces.dataDirs=/home/hadoop/bigdata/flume_job/dataDir/ods_bsc_traces

tier1.sources.uni_v3.type=org.apache.flume.source.kafka.KafkaSource
tier1.sources.uni_v3.batchSize=100
tier1.sources.uni_v3.batchDurationMillis=3000
tier1.sources.uni_v3.kafka.bootstrap.servers=192.168.1.17:9092
tier1.sources.uni_v3.kafka.topics=TestTopic
tier1.sources.uni_v3.kafka.consumer.group.id=bigdata2_uni_v3
tier1.sources.uni_v3.kafka.consumer.auto.offset.reset=earliest
tier1.sources.uni_v3.kafka.consumer.auto.commit.enable=false
tier1.sources.uni_v3.kafka.consumer.timeout.ms=15000
tier1.sources.uni_v3.kafka.consumer.fetch.max.wait.ms=5000
tier1.sources.uni_v3.kafka.consumer.max.poll.records=100
tier1.sources.uni_v3.kafka.consumer.max.poll.interval.ms=3000000
tier1.sources.uni_v3.interceptors=i1
tier1.sources.uni_v3.interceptors.i1.type=com.xxx.flume.interceptor.BlockExtractorInterceptor$Builder
tier1.sources.uni_v3.channels=uni_v3
tier1.sinks.uni_v3.type=hdfs
tier1.sinks.uni_v3.hdfs.path=hdfs://ns1/user/hive/warehouse/ods.db/ods_%{project}_%{name}/pk_year=%{pk_year}/pk_month=%{pk_month}/pk_day=%{pk_day}
tier1.sinks.uni_v3.hdfs.filePrefix=%{project}_%{name}
tier1.sinks.uni_v3.hdfs.fileSuffix=.log
tier1.sinks.uni_v3.hdfs.useLocalTimeStamp=true
tier1.sinks.uni_v3.hdfs.batchSize=500
tier1.sinks.uni_v3.hdfs.fileType=DataStream
tier1.sinks.uni_v3.hdfs.writeFormat=Text
tier1.sinks.uni_v3.hdfs.rollSize=2147483648
tier1.sinks.uni_v3.hdfs.rollInterval=3600
tier1.sinks.uni_v3.hdfs.rollCount=0
tier1.sinks.uni_v3.hdfs.idleTimeout=120
tier1.sinks.uni_v3.hdfs.minBlockReplicas=1
tier1.sinks.uni_v3.channel=uni_v3
tier1.channels.uni_v3.type=file
tier1.channels.uni_v3.checkpointDir=/home/hadoop/bigdata/flume_job/chkDir/uni_v3
tier1.channels.uni_v3.dataDirs=/home/hadoop/bigdata/flume_job/dataDir/uni_v3
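Because the agent fails silently, a cheap sanity check before deploying is to verify that every channel a source or sink references is actually declared. This is my own helper sketch, not part of Flume; the properties file name is an assumption, so point it at wherever your agent config lives:

```shell
# check_channels lists any channel referenced by a source/sink that is
# missing from the agent's "<agent>.channels=" declaration.
# $1: flume properties file, $2: agent name (e.g. tier1)
check_channels() {
  declared=$(grep "^$2.channels=" "$1" | cut -d= -f2)
  grep -E '\.channels?=' "$1" | grep -v "^$2.channels=" | cut -d= -f2 |
  while read -r ref; do
    case " $declared " in
      *" $ref "*) ;;                          # referenced channel is declared
      *) echo "undeclared channel: $ref" ;;   # typo or missing declaration
    esac
  done
}

# Usage (file path assumed): prints nothing when the config is consistent.
#   check_channels tier1.properties tier1
```

For the config above this prints nothing; a typo such as `tier1.sinks.uni_v3.channel=univ3` would be flagged immediately.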
posted @ 2023-01-03 21:56 by 硅谷工具人