Building a Log Collection System on Kubernetes - Deploying Fluentd (Part 3)
Introduction
In this part, we cover how to use Fluentd to collect container logs, mask sensitive information, and write the data to the Elasticsearch cluster.
Main Content
Step 1: Create a ConfigMap manifest named fluentd-config-map.yaml with the following content:
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.0
  namespace: kube-logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    <filter **>
      @type record_modifier
      <replace>
        key token
        expression /^\w{10}/
        replace *******
      </replace>
    </filter>
  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      include_tag_key true
      host elasticsearch-client.kube-logging.svc.cluster.local
      port 9200
      user elastic
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
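The record_modifier filter above masks the value of the token field in every record: if the first 10 word characters of the value match the expression /^\w{10}/, they are replaced with *******. As a purely hypothetical illustration (the field name token comes from the config, the value is made up), a record such as

{"token":"abcdef1234rest"}

would be written to Elasticsearch as

{"token":"*******rest"}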
Step 2: Deploy Fluentd as a DaemonSet. Create a manifest named fluentd-daemonset.yaml. During startup, the fluent-plugin-record-modifier plugin is installed; it provides the replace directive used in the filter above to anonymize sensitive fields. The manifest content is as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: kube-logging
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: kube-logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.7.0
  namespace: kube-logging
  labels:
    k8s-app: fluentd-es
    version: v2.7.0
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.7.0
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v2.7.0
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: "docker/default"
    spec:
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: quay.io/fluentd_elasticsearch/fluentd:v2.7.0
        command: ["/bin/sh"]
        args: ["-c", "fluent-gem install fluent-plugin-record-modifier;fluentd;while true; do sleep 30; done;"]
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0
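The env section above reads the Elasticsearch password from a Secret named elasticsearch-pw-elastic with the key password. That Secret should already exist from the Elasticsearch deployment article; if not, a minimal sketch for creating it (the placeholder must be replaced with the actual elastic user password):

kubectl create secret generic elasticsearch-pw-elastic \
  -n kube-logging \
  --from-literal=password=<your-elastic-password>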
Step 3: Apply both manifests with the following command:
kubectl apply -f fluentd-config-map.yaml -f fluentd-daemonset.yaml
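Optionally, confirm that both objects were created (the names come from the manifests above):

kubectl get configmap fluentd-es-config-v0.2.0 -n kube-logging
kubectl get daemonset fluentd-es-v2.7.0 -n kube-logging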
Step 4: Verify the deployment with the following command. If all fluentd pods are in the Running state, the deployment succeeded.
kubectl get pod -n kube-logging
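If a fluentd pod is not Running, its logs usually reveal the cause (for example, a wrong Elasticsearch password or an unreachable host). One way to inspect them, using the k8s-app label defined in the DaemonSet:

kubectl logs -n kube-logging -l k8s-app=fluentd-es --tail=50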
For the Elasticsearch cluster deployment and the Kibana deployment, see the previous two articles in this series. Likes and comments are welcome.
Sharing is a virtue.