Logback KafkaAppender: sending logs to Kafka
Project homepage: https://github.com/danielwegener/logback-kafka-appender
This article uses a Spring Boot project as its base; for more details, see the project homepage above.
Pull in the required jars with Maven:
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC1</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <!--<version>1.2.3</version>-->
    <!--<scope>runtime</scope>-->
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <!--<version>1.2.3</version>-->
</dependency>
Configure logback-spring.xml and add an appender node:
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender"> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <pattern>%message %n</pattern> <charset>utf8</charset> </encoder> <topic>rmcloud-gateway-audit-log</topic> <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/> <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
<!--注意此处应该是spring boot中的kafka配置属性--> <producerConfig>bootstrap.servers=127.0.0.1:9092</producerConfig>
<producerConfig>retries=1</producerConfig>
<producerConfig>batch-size=16384</producerConfig>
<producerConfig>buffer-memory=33554432</producerConfig>
<producerConfig>properties.max.request.size==2097152</producerConfig>
</appender>
<root level="INFO"> <appender-ref ref="kafkaAppender"/>
</root>
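With this wiring, anything logged through SLF4J at INFO or above is rendered by the PatternLayoutEncoder and published to the topic. A minimal smoke test (the class name and message payload here are made-up examples, not part of the project):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaAppenderSmokeTest {
    private static final Logger log = LoggerFactory.getLogger(KafkaAppenderSmokeTest.class);

    public static void main(String[] args) {
        // Routed via the root logger to kafkaAppender; the encoder renders
        // "%message %n" and AsynchronousDeliveryStrategy publishes it to the
        // rmcloud-gateway-audit-log topic without blocking the caller.
        log.info("{\"serviceId\":\"rmcloud-gateway\",\"responseBody\":\"{}\"}");
    }
}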
Next, define a custom regular Filter to control which messages reach Kafka:
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.vcredit.rmcloud.gateway.bean.RmcloudConstant;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;

/**
 * Extends logback's Filter to pass only rmcloud log messages through to Kafka.
 *
 * @author lee
 * @date 2018/9/11
 */
@Slf4j
public class LogKafkaFilter extends Filter<ILoggingEvent> {

    @Override
    public FilterReply decide(ILoggingEvent iLoggingEvent) {
        String message = iLoggingEvent.getMessage();
        // Business-specific logic: adapt the checks below to your own message format.
        if (StringUtils.isNotBlank(message)) {
            JSONObject auditLog = JSON.parseObject(message);
            log.info("responseBody:" + auditLog.get("responseBody").toString());
            // Accept only audit messages whose serviceId carries the rmcloud prefix.
            if (auditLog.get("serviceId").toString().startsWith(RmcloudConstant.SERVICE_ID_RMCLOUD_START)) {
                return FilterReply.ACCEPT;
            }
        }
        return FilterReply.DENY;
    }
}
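The filter can be exercised on its own before wiring it into the appender. A rough sketch using logback's concrete LoggingEvent; the message bodies are invented, and whether each one is accepted depends on the actual value of RmcloudConstant.SERVICE_ID_RMCLOUD_START:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.LoggingEvent;

public class LogKafkaFilterDemo {
    public static void main(String[] args) {
        LoggerContext context = new LoggerContext();
        Logger logger = context.getLogger("audit");
        LogKafkaFilter filter = new LogKafkaFilter();

        // serviceId chosen to match the rmcloud prefix (assumption) -> expect ACCEPT
        LoggingEvent rmcloudEvent = new LoggingEvent("audit", logger, Level.INFO,
                "{\"serviceId\":\"rmcloud-gateway\",\"responseBody\":\"{}\"}", null, null);
        System.out.println(filter.decide(rmcloudEvent));

        // Any other serviceId -> expect DENY
        LoggingEvent otherEvent = new LoggingEvent("audit", logger, Level.INFO,
                "{\"serviceId\":\"other-service\",\"responseBody\":\"{}\"}", null, null);
        System.out.println(filter.decide(otherEvent));
    }
}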
Register the custom Filter in the kafkaAppender:
<appender name="kafkaRmcloudAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender"> <filter class="com.xx.xx.xx.filter.LogKafkaFilter"/> <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"> <pattern>%message %n</pattern> <charset>utf8</charset> </encoder> <topic>rmcloud-gateway-audit-log</topic> <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/> <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/> <producerConfig>bootstrap.servers=${kafkaServers}</producerConfig> </appender>
With the filter in place, log events we don't need are dropped before they are ever sent to Kafka.
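Note that the filter runs for every event that reaches the appender. If the audit messages all come from one known logger, an alternative is to skip the filter and bind the appender to a dedicated logger with additivity off (a sketch; the logger name is an assumption):

<logger name="com.vcredit.rmcloud.gateway.audit" level="INFO" additivity="false">
    <appender-ref ref="kafkaRmcloudAppender"/>
</logger>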
Incidentally, there is another KafkaAppender project on GitHub worth a look:
https://github.com/johnmpage/logback-kafka
Record the small moments, let them settle, let them gather into a sea, and set out once more.