Deploying a Fluentd + Elasticsearch + Kibana Log Collection System on Kubernetes

I. Introduction

1. Fluentd is an open-source event and log collection system, used here to collect and process the log data on each node. For details, see the official site: http://fluentd.org/

2. Elasticsearch is an open-source search server based on Lucene. It provides a distributed, multi-user full-text search engine with a RESTful web interface. For details, see the official site: http://www.elasticsearch.org/overview/

3. Kibana is an open-source web UI for data visualization; it supports efficient searching, visualization, and analysis of logs. For details, see the official site: http://www.elasticsearch.org/overview/kibana/

II. Workflow

The Fluentd agent on each node watches and collects that node's system logs, processes them, and ships the results to Elasticsearch. Elasticsearch aggregates the log data from all nodes, and Kibana presents the aggregated data in a web UI.

III. Installation

1. Make sure the k8s cluster is working properly (this is, of course, a prerequisite...).
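
For example, a quick sanity check (nothing cluster-specific is assumed here):

kubectl cluster-info   # the API server should respond
kubectl get nodes      # every node should report STATUS Ready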

2. Write fluentd.yaml. To get a Fluentd instance running on every node, it is enough to set kind to DaemonSet:

apiVersion: extensions/v1beta1
kind: DaemonSet          # DaemonSet: one fluentd pod is scheduled on every node
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        # mount the host's log directories so fluentd can tail them
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
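
Once this DaemonSet is applied (step 5 below), one Fluentd pod should land on every node. A quick way to verify, using the name label from the pod template above:

kubectl --namespace=kube-system get daemonset fluentd-elasticsearch
kubectl --namespace=kube-system get pods -l name=fluentd-elasticsearch -o wide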


3. elasticsearch-rc.yaml & elasticsearch-svc.yaml. First, elasticsearch-rc.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - image: gcr.io/google-containers/elasticsearch:v2.4.1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
      volumes:
      - name: es-persistent-storage
        emptyDir: {}

Next, elasticsearch-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
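
Once both manifests are applied, Elasticsearch can be sanity-checked through the API server proxy. A minimal sketch, assuming kubectl proxy's default port 8001 and the 2016-era /api/v1/proxy path (the same path style used for KIBANA_BASE_URL below):

kubectl proxy &
# query the cluster health endpoint through the proxy:
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty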


4. kibana-rc.yaml & kibana-svc.yaml (note: the Kibana controller here is actually a Deployment, not a ReplicationController). First, kibana-rc.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: gcr.io/google-containers/kibana:v4.6.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:9200"
          - name: "KIBANA_BASE_URL"
            value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP

Next, kibana-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
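
With the Service in place, Kibana is reachable through the API server proxy at the same base path configured in KIBANA_BASE_URL above:

kubectl proxy &
# then open in a browser:
# http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging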


5. kubectl create -f ****** : apply each of the manifests above; the exact invocation is up to you.
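
For reference, a minimal sequence, assuming the file names used in the steps above:

kubectl create -f fluentd.yaml
kubectl create -f elasticsearch-rc.yaml
kubectl create -f elasticsearch-svc.yaml
kubectl create -f kibana-rc.yaml
kubectl create -f kibana-svc.yaml
# confirm that everything came up:
kubectl --namespace=kube-system get pods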

It is recommended to use the latest images; take a look around github/kubernetes, which has detailed documentation.
