Flume to HDFS and Kafka
Note: the relevant Hadoop JARs must be copied into Flume's lib directory.
1. flume-conf.properties configuration
a1.sources = r1
a1.sinks = k1 sink-hdfs
a1.channels = c1 chn-hdfs

# tail -F: note that the exec source is only suitable for testing; use TAILDIR instead
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/abc/robotResume/jupiter/jupiter_http_log/logback.log
a1.sources.r1.inputCharset = UTF-8
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = nginx-resume
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.flumeBatchSize = 100
a1.sinks.k1.kafka.producer.acks = 1

a1.sinks.sink-hdfs.type = hdfs
a1.sinks.sink-hdfs.hdfs.fileType = DataStream
a1.sinks.sink-hdfs.hdfs.writeFormat = Text
a1.sinks.sink-hdfs.hdfs.path = hdfs://localhost:7000/flumeResume/data/%Y-%m-%d
a1.sinks.sink-hdfs.hdfs.rollInterval = 0
a1.sinks.sink-hdfs.hdfs.rollSize = 10240000
a1.sinks.sink-hdfs.hdfs.rollCount = 0
# idleTimeout: close the file and finalize it if nothing is written within this many seconds; 0 disables this feature
a1.sinks.sink-hdfs.hdfs.idleTimeout = 0
a1.sinks.sink-hdfs.hdfs.minBlockReplicas = 1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.chn-hdfs.type = memory
a1.channels.chn-hdfs.capacity = 1000
a1.channels.chn-hdfs.transactionCapacity = 100

a1.sources.r1.channels = c1 chn-hdfs
a1.sinks.k1.channel = c1
a1.sinks.sink-hdfs.channel = chn-hdfs
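As the comment above notes, the exec source is only suitable for testing: if the agent restarts, tail -F starts over and events can be lost or duplicated. The TAILDIR source (Flume 1.7+) records its read offset in a position file and resumes where it left off. A minimal sketch replacing the r1 definition; the positionFile path is an assumption, any writable local path works:

```properties
# TAILDIR source: tracks per-file offsets in a JSON position file, so it resumes after restarts
a1.sources.r1.type = TAILDIR
# assumed path for the position file
a1.sources.r1.positionFile = /home/abc/flume/taildir_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /home/abc/robotResume/jupiter/jupiter_http_log/logback.log
```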
2. HDFS sink file-roll configuration
hdfs.minBlockReplicas = 1 tells Flume not to take HDFS DataNode block replication into account. If this is not set to 1, the other roll settings may not take effect: Flume will decide on its own when to roll based on replication events.
3. HDFS sink time-rounding configuration
hdfs.round — default false; rounds down the timestamp used in the path escape sequences
hdfs.roundValue — the rounding amount
hdfs.roundUnit — second, minute or hour
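For example, to bucket output into 10-minute directories (the path pattern here is illustrative, extending the sink defined in section 1):

```properties
# round the event timestamp down to the nearest 10 minutes before expanding %H%M
a1.sinks.sink-hdfs.hdfs.round = true
a1.sinks.sink-hdfs.hdfs.roundValue = 10
a1.sinks.sink-hdfs.hdfs.roundUnit = minute
# with rounding enabled, the %H%M part of the path advances in 10-minute steps
a1.sinks.sink-hdfs.hdfs.path = hdfs://localhost:7000/flumeResume/data/%Y-%m-%d/%H%M
```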
4. Create one directory per month, and roll to a new file every 24 hours

a1.sinks.sink-hdfs.type = hdfs
a1.sinks.sink-hdfs.hdfs.fileType = DataStream
a1.sinks.sink-hdfs.hdfs.writeFormat = Text
a1.sinks.sink-hdfs.hdfs.path = hdfs://localhost:7000/resumeLog/%Y-%m
a1.sinks.sink-hdfs.hdfs.rollInterval = 86400
a1.sinks.sink-hdfs.hdfs.rollSize = 0
# roll by event count; 0 disables it
a1.sinks.sink-hdfs.hdfs.rollCount = 0
# close and finalize the file if nothing is written within this many seconds; 0 disables this feature
a1.sinks.sink-hdfs.hdfs.idleTimeout = 0
a1.sinks.sink-hdfs.hdfs.minBlockReplicas = 1
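To run either configuration, start the agent with flume-ng, naming the agent a1 to match the properties above. The paths assume the standard Flume layout with the properties file in the conf directory:

```shell
# start agent a1 with the config above; console logging is useful while testing
bin/flume-ng agent \
  --conf conf \
  --conf-file conf/flume-conf.properties \
  --name a1 \
  -Dflume.root.logger=INFO,console
```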