31. ELK Log Collection
References:
Log analysis system - deploying an Elasticsearch cluster on k8s - 帝都攻城狮 - 博客园 (cnblogs.com)
https://blog.csdn.net/miss1181248983/article/details/113773943

31.1 Log Collection Methods
1. Node-level collection: a DaemonSet runs a log-collection agent on every node and collects json-file logs (standard output /dev/stdout and standard error /dev/stderr).
2. Sidecar collection: a sidecar container (multiple containers in one pod) collects the logs of one or more business containers in the same pod, usually sharing the logs between the business container and the sidecar through an emptyDir volume.
3. In-container collection: a log-collection process (here Filebeat, see 31.4) runs inside the business container itself.
31.2 DaemonSet Log Collection
Data flow: Logstash (in-cluster collection, DaemonSet) --> Kafka/ZooKeeper --> Logstash (filter and write) --> Elasticsearch cluster
The log-collection service runs as a DaemonSet and mainly collects the following type of logs:
1. Node-level collection: the DaemonSet agent collects json-file logs (standard output /dev/stdout and standard error /dev/stderr), i.e. the stdout/stderr logs produced by the applications.
Because containers write their logs to stdout/stderr, the container runtime's logging driver has to be set to the json-file type in advance.
Implementation: with the json-file driver the container logs land on the host; the host log directories are then mounted into the Logstash DaemonSet pod, where Logstash filters and ships them - that is the whole collection path. The runtime comparison and an example driver configuration follow below.
| Comparison | containerd | docker |
| --- | --- | --- |
| Log storage path | Real path: /var/log/pods/CONTAINER_NAMES | Real path: /var/lib/docker/containers/CONTAINERID |
| Log configuration | Config file: /etc/systemd/system/kubelet.service; parameters: --container-log-max-files=5 --container-log-max-size="100Mi" --logging-format="json" | Config file: /etc/docker/daemon.json; parameters: "log-driver": "json-file", "log-opts": { "max-file": "5", "max-size": "100m" } |
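For the docker runtime, the parameters in the table live in /etc/docker/daemon.json; a minimal sketch using exactly the values above (docker has to be restarted for the change to take effect):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-file": "5",
    "max-size": "100m"
  }
}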
Dockerfile for the Logstash collector image (k8s-master1, directory 1.logstash-image-Dockerfile):

FROM logstash:7.12.1

USER root
WORKDIR /usr/share/logstash

ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
logstash.conf (file input on the node --> Kafka output), same directory:

input {
  file {
    # json-file logs of every pod on the node (containerd path)
    path => "/var/log/pods/*/*/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-applog"
  }

  file {
    # node system logs
    path => "/var/log/*.log"
    start_position => "beginning"
    type => "jsonfile-daemonset-syslog"
  }
}

output {
  if [type] == "jsonfile-daemonset-applog" {
    kafka {
      # KAFKA_SERVER, TOPIC_ID and CODEC are injected via env in the DaemonSet below
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
    }
  }

  if [type] == "jsonfile-daemonset-syslog" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
    }
  }
}
logstash.yml, same directory:

http.host: "0.0.0.0"
Build and push the image (same directory):

nerdctl build -t harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v1 .
nerdctl push harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v1
- k8s YAML: DaemonSet for the Logstash collector
DaemonSet manifest (k8s-master1, ~/20220821/ELK/1.daemonset-logstash):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      tolerations:
      # also run on master nodes
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v1
        env:
        # consumed by logstash.conf through ${...}
        - name: "KAFKA_SERVER"
          value: "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/log/pods
          readOnly: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          # containerd: /var/log/pods; for docker use /var/lib/docker/containers instead
          path: /var/log/pods
Logstash consumer configuration (Kafka --> Elasticsearch), k8s-master1, directory 1.daemonset-logstash:

input {
  kafka {
    bootstrap_servers => "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}

output {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
    }
  }
}
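A quick way to sanity-check the pipeline end to end, assuming a Kafka installation under /opt/kafka on one of the brokers (that path is an assumption; the topic, broker and Elasticsearch addresses are the ones configured above):

# consume a few records from the topic the DaemonSet Logstash produces to
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 172.31.4.101:9092 \
  --topic jsonfile-log-topic --from-beginning --max-messages 5

# confirm the daily indices are being created in Elasticsearch
curl -s 'http://172.31.2.101:9200/_cat/indices?v' | grep jsonfile-daemonset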
31.3 Sidecar Container Log Collection
A sidecar container (multiple containers in one pod) collects the logs of one or more business containers inside the same pod, usually sharing the log files between the business container and the sidecar through an emptyDir volume.
The containers' filesystems are isolated from each other, so an emptyDir volume is used to share the logs: the business container mounts its log directory on the emptyDir, and the sidecar collects from that same emptyDir path (see the sketch below).
Advantage: log collection can be tailored per service.
Disadvantages: it consumes extra resources, and existing workloads have to be modified to add the sidecar container to every pod.
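The core of the pattern is that both containers mount the same emptyDir volume: the business container writes its log files into it and the sidecar reads them from its own mount point. A minimal sketch of just that part (container names here are placeholders; the full Deployment later in this section uses the same mounts with the real images and probes):

# pod spec fragment: business container and sidecar share one emptyDir
spec:
  containers:
  - name: app                       # business container, writes Tomcat logs
    volumeMounts:
    - name: applogs
      mountPath: /apps/tomcat/logs
  - name: sidecar-logstash          # sidecar, reads the same files
    volumeMounts:
    - name: applogs
      mountPath: /var/log/applog
  volumes:
  - name: applogs
    emptyDir: {}                    # pod-lifetime shared volume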
Dockerfile for the sidecar Logstash image (k8s-master1, directory 2.sidecar-logstash/1.logstash-image-Dockerfile):

FROM logstash:7.12.1

USER root
WORKDIR /usr/share/logstash

ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
logstash.yml, same directory:

http.host: "0.0.0.0"
logstash.conf for the sidecar (file input --> Kafka output), same directory:

input {
  file {
    path => "/var/log/applog/catalina.out"
    start_position => "beginning"
    type => "app1-sidecar-catalina-log"
  }
  file {
    path => "/var/log/applog/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "app1-sidecar-access-log"
  }
}

output {
  if [type] == "app1-sidecar-catalina-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
    }
  }

  if [type] == "app1-sidecar-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
    }
  }
}
Deployment with the sidecar container (k8s-master1, ~/20220821/ELK/2.sidecar-logstash):

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment
  namespace: magedu
spec:
  replicas: 3
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
      - name: sidecar-container                    # sidecar: Logstash ships logs to Kafka
        image: harbor.magedu.net/baseimages/logstash:v7.12.1-sidecar
        imagePullPolicy: IfNotPresent
        env:
        - name: "KAFKA_SERVER"
          value: "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
        - name: "TOPIC_ID"
          value: "tomcat-app1-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: applogs                            # shared emptyDir, read side
          mountPath: /var/log/applog
      - name: magedu-tomcat-app1-container         # business container
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: applogs                            # shared emptyDir, write side
          mountPath: /apps/tomcat/logs
        startupProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          failureThreshold: 3
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      volumes:
      - name: applogs
        emptyDir: {}
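The files above only cover the collection side (sidecar --> Kafka). As in 31.2, a separate Logstash instance still has to consume tomcat-app1-topic and write to Elasticsearch; a sketch following the same pattern (the index names are illustrative, the addresses and type values are the ones from the configs above):

input {
  kafka {
    bootstrap_servers => "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
    topics => ["tomcat-app1-topic"]
    codec => "json"
  }
}

output {
  if [type] == "app1-sidecar-catalina-log" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "app1-sidecar-catalina-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "app1-sidecar-access-log" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "app1-sidecar-access-%{+YYYY.MM.dd}"
    }
  }
}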
31.4 Filebeat In-Container Collection
Dockerfile for the Tomcat image with a built-in Filebeat process (k8s-master1, ~/20220821/ELK/3.container-filebeat-process/1.webapp-filebeat-image-Dockerfile):

FROM harbor.magedu.net/pub-images/tomcat-base:v8.5.43

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml

ADD myapp.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown -R tomcat.tomcat /data/ /apps/

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
filebeat.yml baked into the image, same directory:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt
  fields:
    type: filebeat-tomcat-accesslog

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.kafka:
  hosts: ["172.31.4.101:9092"]
  required_acks: 1
  topic: "filebeat-magedu-app1"
  compression: gzip
  max_message_bytes: 1000000
run_tomcat.sh, same directory (starts Filebeat in the background, starts Tomcat, then keeps the container in the foreground):

#!/bin/bash
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
ClusterRole and ClusterRoleBinding for the Filebeat workload's ServiceAccount (k8s-master1, directory 3.container-filebeat-process):

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat-serviceaccount-clusterrole
  labels:
    k8s-app: filebeat-serviceaccount-clusterrole
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat-serviceaccount-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: default
  namespace: magedu
roleRef:
  kind: ClusterRole
  name: filebeat-serviceaccount-clusterrole
  apiGroup: rbac.authorization.k8s.io
Deployment (k8s-master1, directory 3.container-filebeat-process):

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-deployment-label
  name: magedu-tomcat-app1-filebeat-deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-tomcat-app1-filebeat-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-filebeat-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-filebeat-container
        image: harbor.magedu.net/magedu/tomcat-app1:v1-filebeat
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
Service (k8s-master1, directory 3.container-filebeat-process):

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-service-label
  name: magedu-tomcat-app1-filebeat-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30092
  selector:
    app: magedu-tomcat-app1-filebeat-selector
Logstash consumer configuration (Kafka --> Elasticsearch), k8s-master1, directory 3.container-filebeat-process:

input {
  kafka {
    bootstrap_servers => "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
    topics => ["filebeat-magedu-app1"]
    codec => "json"
  }
}

output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
    }
  }

  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
    }
  }
}
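Once the Deployment and Service are running, the application is reachable through the NodePort and the two indices defined above should show up in Elasticsearch; a quick check (replace <node-ip> with any node's address, the Elasticsearch address is the one from the config above):

# hit the app through the NodePort to generate some access-log entries
curl -I http://<node-ip>:30092/myapp/index.html

# verify the Filebeat-driven indices exist
curl -s 'http://172.31.2.101:9200/_cat/indices?v' | grep filebeat-tomcat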