32. EFK Log Collection
32.1 EFK Architecture and Workflow
Elasticsearch is an open-source, distributed, RESTful search and analytics engine whose core is the open-source library Apache Lucene. It can be accurately described as:
a distributed, real-time document store in which every field is indexed and searchable;
a distributed, real-time search and analytics engine;
able to scale out to hundreds of service nodes and handle petabytes of structured or unstructured data.
Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. With Kibana you can search, view, and interact with the data stored in Elasticsearch indices, perform advanced data analysis, and visualize the data in a variety of charts, tables, and maps.
Fluentd is a system for collecting, processing, and forwarding logs. Through its rich plugin ecosystem it can gather logs from all kinds of systems and applications, convert them into a user-specified format, and forward them to the log store of the user's choice.
Fluentd scrapes log data from a given set of sources, processes it (converting it into a structured data format), and forwards it to other services such as Elasticsearch, object storage, Kafka, and so on. Fluentd supports more than 300 log storage and analysis services, so it is very flexible in this respect. The main steps, combined in the sketch after this list, are:
1. Fluentd collects data from multiple log sources
2. It structures and tags that data
3. It then routes the data to one or more destination services according to the matching tags
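Putting these three steps together, a minimal end-to-end configuration might look like the sketch below; the tag, file paths, and the Elasticsearch host are placeholders for illustration, and the individual directives are explained in section 32.2.
<source>
  # 1. collect: tail an application log file
  @type tail
  path /var/log/myapp/app.log
  pos_file /var/log/td-agent/app.log.pos
  tag myapp.access
  format json
</source>
<filter myapp.access>
  # 2. structure and enrich the record
  @type record_transformer
  <record>
    service myapp
  </record>
</filter>
<match myapp.access>
  # 3. route by tag to the destination service
  @type elasticsearch
  host elasticsearch.example.local
  port 9200
</match>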
Why is Fluentd the recommended log collection tool for Kubernetes?
1. It turns log records into JSON
2. Pluggable architecture
3. Very small resource footprint
Written in C and Ruby; roughly 30-40 MB of memory and about 13,000 events/second/core
4. Very high reliability
Memory- and file-based buffering
Robust failover
32.2 Fluentd Configuration File
Data sources correspond to inputs. The source directive selects and configures the desired input plugin and enables a Fluentd input source; the source submits events to Fluentd's routing engine, and @type identifies the kind of data source. The following configuration tails a given file and picks up lines appended to it:
<source >
@type tail
path /var/log/httpd-access.log
pos_file /var/log/td-agent/httpd-access.log.pos
tag myapp.access
format apache2
</source>
filter: the event processing pipeline
Filters can be chained into a pipeline that processes events one after another before finally handing them to a match for output. The following example modifies the content of incoming events:
<source >
@type http
port 9880
</source>
<filter myapp.access>
@type record_transformer
<record>
host_param "#{Socket.gethostname}"
</record>
</filter>
You can specify @label inside a source; events emitted by that source are then dispatched only to the tasks contained in the matching label section and are not picked up by the other top-level tasks that follow:
<source >
@type forward
</source>
<source >
@type tail
@label @SYSTEM
path /var/log/httpd-access.log
pos_file /var/log/td-agent/httpd-access.log.pos
tag myapp.access
format apache2
</source>
<filter access.**>
@type record_transformer
<record>
</record>
</filter>
<match **>
@type elasticsearch
</match>
<label @SYSTEM>
<filter var.log.middleware.**>
@type grep
</filter>
<match **>
@type s3
</match>
</label>
match looks for events whose tags match the given pattern and processes them. The most common use of match is to output events to other systems, which is why the plugins corresponding to match are called output plugins:
<source >
@type http
port 9880
</source>
<filter myapp.access>
@type record_transformer
<record>
host_param "#{Socket.gethostname}"
</record>
</filter>
<match myapp.access>
@type file
path /var/log/fluent/access
</match>
32.3 Fluentd Event Structure
Structure of an event:
time: the time at which the event was processed
tag: the source of the event, configured in fluentd.conf
record: the actual log content, a JSON object
For example, the raw log line
192.168.0.1 - - [28/Feb/2013:12:00:00 +0900] "GET / HTTP/1.1" 200 777
becomes the following Fluentd event after processing:
2020-07-16 08:40:35 +0000 apache.access: {"user":"-","method":"GET","code":200,"size":777,"host":"192.168.0.1","path":"/"}
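As a sketch of how these fields are used downstream, a record_transformer filter can read existing record keys and derive new ones; the tag and the derived field below are only illustrative, and enable_ruby is needed for the record[...] syntax.
<filter apache.access>
  @type record_transformer
  enable_ruby true
  <record>
    # build a new field from the existing method and path keys of the record
    request ${record["method"]} ${record["path"]}
  </record>
</filter>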
32.4 Fluentd Buffering Model
Each individual event is usually tiny, so for reasons of transfer efficiency and stability Fluentd does not write to the output as soon as each event has been processed. Instead it uses a buffering model built around two concepts: the buffer chunk, which events are appended to, and the buffer queue, which holds chunks waiting to be flushed to the output.
The main tunable parameters are:
buffer_type: the buffer type, file or memory
buffer_chunk_limit: the size of each chunk, 8 MB by default
buffer_queue_limit: the maximum length of the chunk queue, 256 by default
flush_interval: how often a chunk is flushed
retry_limit: how many times sending a chunk is retried before its data is discarded, 17 by default
retry_wait: how long to wait before retrying a chunk, 1 s by default; the second retry waits 2 s, the next 4 s, and so on
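A sketch of these parameters on an output, using the older v0.12-style syntax that matches the parameter names above; the values are illustrative and the buffer_path is hypothetical. Newer v1-style configurations express the same ideas inside a <buffer> section, as the ConfigMap in 32.5.6 does.
<match myapp.**>
  @type elasticsearch
  host elasticsearch.example.local
  port 9200
  # buffering parameters described above (illustrative values)
  buffer_type file
  buffer_path /var/log/td-agent/buffer/myapp
  buffer_chunk_limit 8m
  buffer_queue_limit 256
  flush_interval 10s
  retry_limit 17
  retry_wait 1s
</match>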
As Fluentd keeps generating events and writing them into a chunk, the chunk grows. When the chunk reaches buffer_chunk_limit, or flush_interval has elapsed since the chunk was created, it is pushed onto the tail of the buffer queue, whose size is bounded by buffer_queue_limit.
Ideally, every chunk that enters the queue is written to the backend immediately; new chunks keep arriving, but never faster than they leave, so the queue stays essentially empty and holds at most one chunk at a time.
In practice, because of the network and other factors, writing a chunk to the backend can be delayed or fail. When a write fails, the chunk stays in the queue and is retried after retry_wait; once the retry count reaches retry_limit, the chunk is destroyed and its data is dropped.
Meanwhile new chunks keep entering the queue. If many chunks pile up without being written to the backend and the queue length reaches buffer_queue_limit, new events are rejected and Fluentd reports error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data".
Another failure mode is a slow network: if a new chunk is produced every 3 seconds but writing one to the backend takes 30 seconds, then with a queue length of 100 roughly 10 new chunks arrive for every chunk that leaves, so the queue fills up quickly and the same error occurs.
32.5 EFK Deployment
32.5.1 Deploy the NFS Provisioner
apiVersion: v1
kind: Namespace
metadata:
name: elasticsearch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
namespace: elasticsearch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: ["" ]
resources: ["nodes" ]
verbs: ["get" , "list" , "watch" ]
- apiGroups: ["" ]
resources: ["persistentvolumes" ]
verbs: ["get" , "list" , "watch" , "create" , "delete" ]
- apiGroups: ["" ]
resources: ["persistentvolumeclaims" ]
verbs: ["get" , "list" , "watch" , "update" ]
- apiGroups: ["storage.k8s.io" ]
resources: ["storageclasses" ]
verbs: ["get" , "list" , "watch" ]
- apiGroups: ["" ]
resources: ["events" ]
verbs: ["create" , "update" , "patch" ]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: elasticsearch
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: elasticsearch
rules:
- apiGroups: ["" ]
resources: ["endpoints" ]
verbs: ["get" , "list" , "watch" , "create" , "update" , "patch" ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: elasticsearch
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: elasticsearch
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
namespace: elasticsearch
spec:
replicas: 1
strategy:
type : Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env :
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 10.0.0.109
- name: NFS_PATH
value: /data/k8sdata/elasticsearch
volumes:
- name: nfs-client-root
nfs:
server: 10.0.0.109
path: /data/k8sdata/elasticsearch
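Assuming the manifests above are saved as nfs-provisioner.yaml and the NFS export 10.0.0.109:/data/k8sdata/elasticsearch already exists, applying and checking the provisioner might look like:
kubectl apply -f nfs-provisioner.yaml
kubectl -n elasticsearch get pods -l app=nfs-client-provisioner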
32.5.2 Deploy the Dynamic StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
namespace: elasticsearch
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain
mountOptions:
- noatime
parameters:
archiveOnDelete: "true"
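To confirm that dynamic provisioning works, a throwaway PVC (the name below is hypothetical) can be created against the class and should quickly become Bound:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim          # hypothetical, delete after testing
  namespace: elasticsearch
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Mi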
32.5.3 Deploy the Elasticsearch Services
1. With a StatefulSet, a headless Service lets each member be reached by a stable DNS name of the form {PodName}.{ServiceName}.{Namespace}.svc.{ClusterDomain}
2. Difference between a regular Service and a headless Service
● Service: an access policy for a group of Pods; it provides a cluster IP for communication inside the cluster, plus load balancing and service discovery
● Headless Service: a Service that needs no cluster IP (clusterIP is None) and resolves directly to the IPs of the individual Pods; headless Services are commonly used for stateful deployments with StatefulSets
3. When to use a headless Service
1. For projects or middleware that do their own service discovery, such as Kafka or ZooKeeper performing leader election, where the instances talk to each other via their individual Pod IPs
2. When no load balancing is needed, no cluster IP is needed either; without a cluster IP, kube-proxy does not handle the Service and Kubernetes does not set up load balancing for it
4. How resources inside Kubernetes call each other
A Deployment creates a Pod controller, which brings up one or more Pods. In the same namespace you create a Service whose selector matches the labels of those Pods, which binds the Service to the Pods of that Deployment; by default the Service also gets a cluster IP and port that reverse-proxy the Deployment behind it. A client that accesses the cluster IP and port is load-balanced across the bound Pods, with kube-proxy doing the forwarding. When another workload inside the cluster wants to reach the Deployment's Pods, it cannot do so by Pod IP, because the Deployment may run several replicas and Pod IPs change; the labels, however, stay the same, and the Service captures exactly that relationship. Once the Service is bound, other workloads simply resolve its DNS name (for example nginx-ds.default.svc.cluster.local.) to discover and reach the containers, rather than calling a cluster IP and port.
5. Using a headless Service
Middleware scenario:
One important use of a headless Service is discovering all Pods, including those that are not yet ready. Normally only ready Pods can serve as backends of a Service, but sometimes you want service discovery to return every Pod that matches the label selector even before it becomes ready.
When neither load balancing nor a Service IP is needed:
Take ZooKeeper as an example: the nodes talk to each other on ports 2888 and 3888, which need neither load balancing nor a Service IP; only the client-facing port 2181 does. Combining the two scenarios above, a headless Service is used for the communication and leader election between the zk Pods.
apiVersion: v1
kind: Service
metadata:
name: es-cluster-svc
namespace: elasticsearch
spec:
selector:
app: es
clusterIP: None
ports:
- name: restful
port: 9200
targetPort: 9200
- name: inter-node
port: 9300
---
apiVersion: v1
kind: Service
metadata:
name: es-cluster-svc-nodeport
namespace: elasticsearch
spec:
selector:
app: es
type : NodePort
ports:
- name: restful
port: 9200
targetPort: 9200
nodePort: 39200
- name: inter-node
port: 9300
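With the headless Service applied and the StatefulSet from 32.5.4 running, each member gets a stable DNS entry, which can be checked from any pod in the cluster that has DNS tooling (assuming the default cluster.local domain):
nslookup es-cluster-0.es-cluster-svc.elasticsearch.svc.cluster.local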
32.5.4 Deploy the Elasticsearch StatefulSet
Test commands for reference, run from inside a container:
curl -u elastic:xxx http://es-cluster-0.es-cluster-svc.elasticsearch:9200/
curl http://es-cluster-0.es-cluster-svc:9200
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: elasticsearch
spec:
serviceName: es-cluster-svc
replicas: 3
selector:
matchLabels:
app: es
template:
metadata:
labels:
app: es
spec:
initContainers:
- name: increase-vm-max-map
image: busybox:1.32
command : ["sysctl" , "-w" , "vm.max_map_count=262144" ]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox:1.32
command : ["sh" , "-c" , "ulimit -n 65536" ]
securityContext:
privileged: true
- name: fix-permissions
image: busybox:1.32
imagePullPolicy: IfNotPresent
command : ["sh" , "-c" , "chown -R 1000:1000 /usr/share/elasticsearch/data" ]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
containers:
- name: es-container
image: elasticsearch:7.8.0
ports:
- name: restful
containerPort: 9200
protocol: TCP
- name: internal
containerPort: 9300
protocol: TCP
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: cert
mountPath: /usr/share/elasticsearch/config/elastic-stack-ca.p12
subPath: elastic-stack-ca.p12
readOnly: true
env :
- name: cluster.name
value: es-cluster
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: discovery.zen.minimum_master_nodes
value: "2"
- name: discovery.seed_hosts
value: "es-cluster-0.es-cluster-svc,es-cluster-1.es-cluster-svc,es-cluster-2.es-cluster-svc"
- name: ES_JAVA_OPTS
value: "-Xms1g -Xmx1g"
- name: network.host
value: "0.0.0.0"
- name: http.cors.enabled
value: "true"
- name: http.cors.allow-origin
value: "*"
- name: http.cors.allow-headers
value: "Authorization,X-Requested-With,Content-Length,Content-Type"
- name: xpack.security.enabled
value: "true"
- name: xpack.security.transport.ssl.enabled
value: "true"
- name: xpack.security.transport.ssl.verification_mode
value: "certificate"
- name: xpack.security.transport.ssl.keystore.path
value: "elastic-stack-ca.p12"
- name: xpack.security.transport.ssl.truststore.path
value: "elastic-stack-ca.p12"
volumes:
- name: cert
configMap:
name: elastic-certificates
items:
- key: 9-elastic-stack-ca.p12
path: elastic-stack-ca.p12
volumeClaimTemplates:
- metadata:
name: data
labels:
app: es-volume
namespace: elasticsearch
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: managed-nfs-storage
resources:
requests:
storage: 5Gi
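Once all three pods are Running (and the passwords from 32.5.10 have been set), the cluster can be checked through the NodePort service; the node IP below is this environment's, and xxx stands for the elastic password:
kubectl -n elasticsearch get pods -l app=es
curl -u elastic:xxx "http://10.0.0.102:39200/_cluster/health?pretty"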
32.5.5 Deploy Kibana
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elasticsearch
labels:
app: kibana
spec:
type : NodePort
ports:
- port: 5601
nodePort: 35601
targetPort: 5601
selector:
app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elasticsearch
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: kibana:7.8.0
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
env :
- name: "ELASTICSEARCH_HOSTS"
value: http://es-cluster-svc:9200
- name: "ELASTICSEARCH_USERNAME"
valueFrom:
secretKeyRef:
name: elasticsearch-auth
key: user
- name: "ELASTICSEARCH_PASSWORD"
valueFrom:
secretKeyRef:
name: elasticsearch-auth
key: password
ports:
- containerPort: 5601
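After the Deployment is up, Kibana is reachable on any node at the NodePort, for example:
kubectl -n elasticsearch get pods -l app=kibana
# then open http://<node-ip>:35601 in a browser and log in as elastic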
32.5.6 Deploy the Fluentd ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
name: fluentd-config
namespace: elasticsearch
labels:
addonmanager.kubernetes.io/mode: Reconcile
data:
system.conf: |-
<system>
root_dir /tmp/fluentd-buffers/
</system>
containers.input.conf: |-
<source >
@id fluentd-containers.log
@type tail
path /var/log/containers/*.log
pos_file /var/log/es-containers.log.pos
time_format %Y-%m-%dT%H:%M:%S.%NZ
localtime
tag raw.kubernetes.*
format json
read_from_head true
</source>
<match raw.kubernetes.**>
@id raw.kubernetes
@type detect_exceptions
remove_tag_prefix raw
message log
stream stream
multiline_flush_interval 5
max_bytes 500000
max_lines 1000
</match>
system.input.conf: |-
<source >
@id journald-docker
@type systemd
filters [{ "_SYSTEMD_UNIT" : "docker.service" }]
<storage>
@type local
persistent true
</storage>
read_from_head true
tag docker
</source>
<source >
@id journald-kubelet
@type systemd
filters [{ "_SYSTEMD_UNIT" : "kubelet.service" }]
<storage>
@type local
persistent true
</storage>
read_from_head true
tag kubelet
</source>
forward.input.conf: |-
<source >
@type forward
port 24224
bind 0.0.0.0
</source>
output.conf: |-
<filter kubernetes.**>
@type kubernetes_metadata
</filter>
<match **>
@id elasticsearch
@type elasticsearch
@log_level info
include_tag_key true
host es-cluster-svc
port 9200
user "#{ENV['FLUENT_ES_USERNAME']}"
password "#{ENV['FLUENT_ES_PASSWORD']}"
logstash_prefix ${tag}
logstash_format true
request_timeout 30s
<buffer>
@type file
path /var/log/fluentd-buffers/kubernetes.system.buffer
flush_mode interval
retry_type exponential_backoff
flush_thread_count 2
flush_interval 5s
retry_forever
retry_max_interval 30
chunk_limit_size 2M
queue_limit_length 8
overflow_action block
</buffer>
</match>
32.5.7 Deploy the Fluentd DaemonSet
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd-es
namespace: elasticsearch
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
- ""
resources:
- "namespaces"
- "pods"
verbs:
- "get"
- "watch"
- "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
name: fluentd-es
namespace: elasticsearch
apiGroup: ""
roleRef:
kind: ClusterRole
name: fluentd-es
apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd-es
namespace: elasticsearch
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
matchLabels:
k8s-app: fluentd-es
template:
metadata:
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
serviceAccountName: fluentd-es
containers:
- name: fluentd-es
image: quay.io/fluentd_elasticsearch/fluentd:v3.0.1
env :
- name: FLUENTD_ARGS
value: --no-supervisor -q
- name: "FLUENT_ES_HOST"
value: es-cluster-0.elasticsearch
- name: "FLUENT_ES_PORT"
value: "9200"
- name: "FLUENT_ES_USERNAME"
valueFrom:
secretKeyRef:
name: elasticsearch-auth
key: user
- name: "FLUENT_ES_PASSWORD"
valueFrom:
secretKeyRef:
name: elasticsearch-auth
key: password
resources:
limits:
memory: 1Gi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: config-volume
mountPath: /etc/fluent/config.d
nodeSelector:
beta.kubernetes.io/fluentd-ds-ready: "true"
tolerations:
- operator: "Exists"
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: config-volume
configMap:
name: fluentd-config
Note that logs are only collected from nodes that carry the label used in the nodeSelector above; label every node that should run Fluentd:
$ kubectl label node xxxxx beta.kubernetes.io/fluentd-ds-ready=true
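A quick check that the DaemonSet has scheduled a Fluentd pod on every labelled node:
kubectl -n elasticsearch get daemonset fluentd-es
kubectl -n elasticsearch get pods -l k8s-app=fluentd-es -o wide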
32.5.8 Deploy the Secret
apiVersion: v1
kind: Secret
metadata:
name: elasticsearch-auth
namespace: elasticsearch
type : Opaque
stringData:
user: elastic
password: admin123456
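The password must match whatever is set for the elastic user in 32.5.10, since Kibana and Fluentd read these credentials to authenticate to Elasticsearch. An equivalent imperative command would be:
kubectl -n elasticsearch create secret generic elasticsearch-auth \
  --from-literal=user=elastic --from-literal=password=admin123456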
32.5.9 Deploy the ES Certificate ConfigMap
Generate a CA bundle with elasticsearch-certutil inside a temporary container, copy it out of the container, and load it into a ConfigMap:
docker run -it -d --name elastic-cret docker.elastic.co/elasticsearch/elasticsearch:7.8.0 /bin/bash
docker exec -it elastic-cret /bin/bash
./bin/elasticsearch-certutil ca
docker cp elastic-cret:/usr/share/elasticsearch/elastic-stack-ca.p12 ./
kubectl create configmap -n elasticsearch elastic-certificates --from-file=elastic-stack-ca.p12
elastic-auth-certificates.yaml, the resulting ConfigMap (the binaryData is this environment's generated CA; yours will differ):
kind: ConfigMap
metadata:
name: elastic-certificates
namespace: elasticsearch
apiVersion: v1
binaryData:
9-elastic-stack-ca.p12: MIIJ2wIBAzCCCZQGCSqGSIb3DQEHAaCCCYUEggmBMIIJfTCCBWEGCSqGSIb3DQEHAaCCBVIEggVOMIIFSjCCBUYGCyqGSIb3DQEMCgECoIIE+zCCBPcwKQYKKoZIhvcNAQwBAzAbBBTP4MOpdFA3qGzmyWxfCEkL2hgFkAIDAMNQBIIEyEEu9hhnk5BZ5p6KEXXSIoSEXt0nSZOts4humMlIHdKlgtGRdMd0tDm0gm2Y/EvbzmwCq5DO8oiGlic3TI1IVK4KGLQ0uRkKqtzmM2A82VJSC9w9uVJ0otST5R0zvXM8I6eWQv4zSVfdwpxfBjvq1kN/0GU9l70EOtKTd6vdMoqYMRHJr3b+76D/XtEcOOrZOmTJ8Oe9tXtT1WNbKJvDMrdN8ieXVfKvTcbhePHNhwQjzbIcMWS7K4H9JDwZZjUzuKe0RkxqxluK4tr/9If+rsTEjP6JgBw8eFWfrlOxKmq2C01JmdLsCKtAwkmFAGkbShIrh7GQrochINYqgZIQajUTwrsR8cQLCNVzp48IOhCHZhdRGqBWoc4Tafu5lcIbIqnD5LPSB4mayMti88qU7rLdyODw8C6kEynERz0QJnLcRV6ejcaWr8p40eN0/oPX+QPG7vGR9wBRIUSO24ob0D4Lb0WhkaqAx2ElT3f71ZuahgEoPJd9nAKBa0sSCWRwv62GO9iTepBUI199a4yKP8icGALjJl2kIkJi2PfOn0B+fGo+g7V0jVbBEkbXUkNukNJ0fOhVFQA1PbJAMXWFmucFmbO12TqG0OgLfy/niT0gGTSnEMaF4pwA2FPL4VHfwkJjd58FS8xcdi4A8vZP5P2NI8h+xCIMn9rDF4SgmXcQy0wytlwPf+qQWGU+sxj/Hlb8RW5UQ7RmHXfz72YvIJSzYTzKWHXmDfqra29RKsgdiSBKk+WzvQwVBi2a049+w8QGgHpIFc3a5vORfjwLXN1edQhHT8cJjc0gaa961lNe7bdDTmzBny+wzyfBRyaI7N0CeuXziAZMsp7tNG1Ybcq9CTC3Xg+UEPbUZV5eV//i6gBtg704uFIfGe5DCFDI07LWADtENAXVkhVWoFv7P5lGnii3v1FpT9SVo4iW+gUsiPsnSd9alPucpMXjdblw4FQwPr37CCcJbeISgVbQUthNnLo6bPXLwXy82A9Sh1rdMY3lyouUy/poVmM9K54/r49Na6GFtFbXqzplx9fjBowrEr8F4qQkCkQzhUdeiv1DpWyXNyfUGFbothwRLrY/aIUAXr7ndnX6fvqAaNnh023Lc2upLv9iRwqZCe2gJ44GFJwq9FM+JlQvqRcyl/LxRyrexJj3VQHh1hm5Knng0+eQNY2MV6qDsnT77nvhlObEw//Xz6NRqd/EIQJIt9q/EW+6/AMbgNkn5FCvbLrLhdVx12GW5E1/UANQweTJ1ZKMnTyREPNgFk+g/VAowp+UTJaabhe+vcEHMjJQCFZjedfnVEKhYKl5Hc1BN3QkGUBU8AsWgJ2zjOgh3iuPzN7r+BZ2QCoGZlWZ9aAt4jLZ7nuHNDQBtSByeC3AJw9bQp3J7TDQXMHruZoPbpRToeLAZIxNs/eMFhQ8zG7lKhrxl3sL22zmCZbgmH1Qe1xHsaAg/C2azUaotn9Koq1eLKars8kM5gAn6QlLpdfB1dyAJhfIJR3vQiWi0rXWDsP65A1Yr5REAQ61X95PyOo0N+PFrKxr4ST+TlRDZdH8aoOn01FpoByNHF7s//qcduNnfaxQ+LY9MQiY7H/vHMUwTVzXIjLQq3GPqS9AXVUMSKMgmoEcxt8XH0KtBTE4MBMGCSqGSIb3DQEJFDEGHgQAYwBhMCEGCSqGSIb3DQEJFTEUBBJUaW1lIDE2NzE3MTQyMTY4NjkwggQUBgkqhkiG9w0BBwagggQFMIIEAQIBADCCA/oGCSqGSIb3DQEHATApBgoqhkiG9w0BDAEGMBsEFFs7AeZuAV+I9cX5fcL3Fp2mnjOWAgMAw1CAggPAylrpDhRbLIutXu8iwV8tLvkhNodr3qhV4AzxDXe8cNwG1tNIoXzMgJFDCOjRV8xG8/TAKuTnSaLMLwDXqefwU6JSv3/K7sNBpQx7m2LQe7q9Lzlhjrr0IPirhOlbDVeUpjCOmHKBx5tShROI7U5ldsiAYrf7IX6nKA5mgTfgpk4F0jL2e21B45GR01gLXgmhFj+xwymjFJPkPeAX9Y4EWLh3jxZGbVnJMfrCggx3b+viS8OavhVKO9arLA1JZl9AF3+55bOvPdcAoxmgOJkm3P9KQXg8hrUkZr3aR/t5hriIdeYcYI4XGuM7/E35hhwt8YGb3Ky8HVwUTp5CJ2QBoYjXDt3XtbCUeF7REzhR4cpvoWLLemN/xG9+3PB4iy8dW1MrKCjYjHX24l6d2wIhhkddmc2PZcW8gW+HFzLV3gTHYFvX0IbKTDmZwXL/pQjd7t1G1ZpVo9WvLtoKcpRnZWVPhh+2CdM/OEZnbQwcOQYRenRP0rZLvnbOP6/F5qCghOTe9Ou+o0eaa81pD4KLvAXOuoshRDcBbbN0YcgWbCnDXpn0mw7mvmcWo5diqqnP2SGfu90kKgjDUinkRWHlPmR9te9vrCz9IyOIQlAuYKnpETjofu2Rs8+dId8xU9ztPq6/UCPwl5/kQPGy58MuC3R72QA6XhdiJRb6HQgt7w5DALgQwPpL4iJeZIOY9aftVWOk40tAd8Ajg6DC9VWFNWayQ2Utw8D+OCrhkETrf0NYJySEyfsUV0o9MZ/FymFtvEKUL8wJym5BWnDUyGaAHAYnLkCKrlOBX+a/neM9B2JxZbRXjg66uqydEhocKBxfqHwhyxQ0j3Z253Qp548JO7ASetlEsR8xSZPzV9LwzN1uF0kMkpHB3sL+J6VP+8ZMCerVS9Jt9T+5cAssq9sDTI6VtYar+3wT+zNO3O99E/2PWlz7IX/7KIU5J9NKBB3gLVbI74v7D9rqT/LWJjkk5d63mieFy/2RRkriRxdKvLB/GZQL1UhO+M309Wy4cl7S7cHSah42RVOl140KfGQqV3GLUGvGH2RR/v+dZ/VZoStn/YeeTLr8GusH/n0znLdXP6tbZHwc40n51CwVC2FGQ3MgVinJMS/b9aMWGEb//Qf9u2jHYgW7rG6Wj4s6lb2mDAGfm8og7rGSTjSMITi+G38OlbZDlFwHs1qukINjOHt/SXmzSemsOpJDJnFbfVxLigFMo230pbNbTv2lD3n0JTQ5j9/SPAN3ESx9ljF6wGBCLbALvh3sfDmsHa7gwbfNMD4wITAJBgUrDgMCGgUABBQiywoTO1oV7n2PT6io0pzAK0J0bQQUtzqY2urT1hw5pizQdqLZnMRq0foCAwGGoA==
32.5.10 After the Cluster Is Deployed
kubectl exec -it -n elasticsearch es-cluster-0 -- bash
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Passwords only need to be set the first time the cluster is created. As long as the backend storage behind the ES PVs still exists, there is no need to set them again even if the ES cluster is rebuilt, unless of course you want to change them.
32.5.11 What It Looks Like After Deployment
All pods in the elasticsearch namespace should be Running (screenshot omitted; ignore the unrelated namespaces).
Kibana prompts for a login; sign in with the user elastic and the password that was set for it (screenshot omitted).
Then continue with the next section.
32.5.12 Extra: Deploying the elasticsearch-head Plugin
This part is still being investigated and does not work yet; for some reason head cannot connect to the ES cluster.
Test command for reference:
http://10.0.0.102:9200/?auth_user=elastic&auth_password=
apiVersion: v1
kind: Service
metadata:
name: head
namespace: elasticsearch
labels:
app: head
spec:
selector:
app: head
type : NodePort
ports:
- port: 9100
protocol: TCP
targetPort: 9100
nodePort: 39100
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: head
namespace: elasticsearch
labels:
app: head
spec:
replicas: 1
selector:
matchLabels:
app: head
template:
metadata:
labels:
app: head
spec:
containers:
- name: head
image: mobz/elasticsearch-head:5
resources:
limits:
cpu: 200m
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
env :
- name: ES_SVC
value: "es-cluster-svc"
- name: ES_PORT
value: "9200"
- name: "ELASTICSEARCH_HOSTS"
value: http://es-cluster-svc:9200
ports:
- containerPort: 9100