Installing an ELK 8.6.2 Cluster on CentOS 7

Server configuration

OS                                    IP           Spec        Services
CentOS Linux release 7.9.2009 (Core)  192.168.0.2  6C12G 300G  es01 / logstash01
CentOS Linux release 7.9.2009 (Core)  192.168.0.3  6C12G 300G  es02 / logstash02
CentOS Linux release 7.9.2009 (Core)  192.168.0.4  6C12G 300G  es03 / logstash03
CentOS Linux release 7.9.2009 (Core)  192.168.0.5  2C4G 30G    kibana
k8s 1.22                              -            -           filebeat (DaemonSet)

Installation architecture

(Figure: ELK architecture diagram)

Installation steps

System configuration

Run all of the following on each of the three ES hosts.

Tune the kernel parameters. Elasticsearch makes heavy use of memory-mapped areas (for example, the file cache), which consumes significant kernel resources, so a few kernel parameters need to be raised.

# Raise the system-wide maximum number of open file handles
sed -i '$a fs.file-max=65536' /etc/sysctl.conf
# Raise the maximum number of memory-map areas a single process may own
sed -i '$a vm.max_map_count=262144' /etc/sysctl.conf
# Make the open-file limit permanent by adding the following to /etc/security/limits.conf
cat /etc/security/limits.conf | grep -v '#' | grep -v '^$'
#####################
* soft nofile 65536
* hard nofile 65536
#####################
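
Apply the sysctl changes and verify them (the limits.conf change only takes effect on a new login session):

sysctl -p
# confirm the new values
sysctl fs.file-max vm.max_map_count
# after logging in again, check the open-file limit
ulimit -n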

Set the hostname on each of the three hosts

# First host
hostnamectl set-hostname es01
# Second host
hostnamectl set-hostname es02
# Third host
hostnamectl set-hostname es03
# Start a new shell to pick up the new hostname
bash

Edit the hosts file

cat /etc/hosts
########################
192.168.0.2 es01
192.168.0.3 es02
192.168.0.4 es03
########################

Download the packages

It is best to download all the packages locally and then upload them to the servers; downloading directly on the servers can be very slow.

Elasticsearch: https://www.elastic.co/cn/downloads/elasticsearch

Logstash: https://www.elastic.co/cn/downloads/logstash

Kibana: https://www.elastic.co/cn/downloads/kibana

Filebeat: https://www.elastic.co/cn/downloads/beats/filebeat

Pick whichever version you need; this guide uses 8.6.2.

After downloading, the packages look like this:

(Screenshot: downloaded ELK installation packages)

With the downloads complete, start the installation.

Install Elasticsearch

Run the following on each of the three ES hosts.

Create the installation directory

mkdir /opt/elk
# After creating it, upload the es and logstash packages into this directory on each of the three hosts

After uploading, install the package

rpm -ivh elasticsearch-8.6.2-x86_64.rpm

Create the ES data directory and grant ownership to the elasticsearch user

mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch/

The following steps are performed on es01.

Edit the configuration file

vim /etc/elasticsearch/elasticsearch.yml

The settings to change are as follows

# Cluster name
cluster.name: elk-test
# Node name
node.name: es01
# Data and log directories
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
# Bind address of this host
network.host: 192.168.0.2
# Cluster hosts
discovery.seed_hosts: ["192.168.0.2", "192.168.0.3", "192.168.0.4"]
# Security and TLS settings generated during installation
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Initial master-eligible nodes
cluster.initial_master_nodes: ["es01"]
http.host: 0.0.0.0

Start ES after saving the changes

systemctl start elasticsearch.service
# Check the status
systemctl status elasticsearch.service
# If all is well, enable it at boot
systemctl enable elasticsearch.service

Reset the password for the elastic user

cd /usr/share/elasticsearch/bin/
# Run the reset command and enter the new password interactively
./elasticsearch-reset-password -u elastic -i

After the reset, generate the enrollment token that es02 and es03 will use to join the cluster

cd /usr/share/elasticsearch/bin/
# Generate the token
./elasticsearch-create-enrollment-token -s node

Run the next steps on es02 and es03.

cd /usr/share/elasticsearch/bin/
# Replace TOKEN with the token generated on es01
./elasticsearch-reconfigure-node --enrollment-token TOKEN

After the command has run on each node, edit its configuration file.

Configuration for es02:

cluster.name: elk-test
node.name: es02
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.0.3
discovery.seed_hosts: ["192.168.0.2", "192.168.0.3", "192.168.0.4"]
cluster.initial_master_nodes: ["es01"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
transport.host: 0.0.0.0

Configuration for es03:

cluster.name: elk-test
node.name: es03
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.0.4
discovery.seed_hosts: ["192.168.0.2", "192.168.0.3", "192.168.0.4"]
cluster.initial_master_nodes: ["es01"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0

After editing, start ES on each node and enable it at boot.

# Start
systemctl start elasticsearch.service
# Check the status
systemctl status elasticsearch.service
# Enable at boot
systemctl enable elasticsearch.service

If a node fails to start, review its configuration file carefully; elasticsearch-reconfigure-node rewrites the security settings, so look for duplicated or leftover entries.
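
If it still fails, the service logs usually show the exact cause; the log file name below assumes the cluster name elk-test configured earlier:

journalctl -u elasticsearch.service --no-pager | tail -n 50
tail -n 50 /var/log/elasticsearch/elk-test.log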

Check the cluster status by visiting https://192.168.0.2:9200 (and the same port on 192.168.0.3 and 192.168.0.4) in a browser. You can also install an ES plugin in Chrome to inspect the cluster; it looks like this:

(Screenshot: ES cluster shown by the browser plugin)
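
The cluster health can also be checked from the command line; -k skips verification of the self-signed certificate, and the password is the elastic password set earlier:

curl -k -u elastic "https://192.168.0.2:9200/_cluster/health?pretty"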

This completes the ES installation.

Install Logstash

Logstash installation is simple: install the package directly on each of the three hosts.

# Run on each of the three hosts
rpm -ivh logstash-8.6.2-x86_64.rpm

Copy the certificates: ES is served over HTTPS, so Logstash needs them to connect.

cp -r /etc/elasticsearch/certs/ /etc/logstash/
# the copied files are owned by root and unreadable by the logstash user; grant access
chown -R logstash:logstash /etc/logstash/certs/
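
The rpm does not ship any pipeline. The snippet below is a minimal sketch of a Beats-input pipeline writing to the secured cluster; the file name beats-es.conf, port 5044, and the index pattern are illustrative assumptions, and CHANGE_ME must be replaced with the elastic password:

# /etc/logstash/conf.d/beats-es.conf (example file name)
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["https://192.168.0.2:9200", "https://192.168.0.3:9200", "https://192.168.0.4:9200"]
    user => "elastic"
    password => "CHANGE_ME"
    # CA file copied from /etc/elasticsearch/certs in the previous step
    cacert => "/etc/logstash/certs/http_ca.crt"
    index => "k8s-logs-%{+YYYY.MM.dd}"
  }
}

The syntax can be verified before starting the service with /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats-es.conf --config.test_and_exit.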

Install Kibana

Run the following on the kibana host.

Set the hostname

hostnamectl set-hostname kibana
# Start a new shell to pick up the new hostname
bash

Create the installation directory and upload the package to it

mkdir /opt/kibana && cd /opt/kibana
# Upload with rz
rz

Then install the package

rpm -ivh kibana-8.6.2-x86_64.rpm

After installation, edit the configuration file

vim /etc/kibana/kibana.yml

The settings to change are as follows

server.port: 5601
server.host: "192.168.0.5"
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
pid.file: /run/kibana/kibana.pid
# Set Kibana's default language to Chinese
i18n.locale: "zh-CN"

Start Kibana after saving

systemctl start kibana
systemctl status kibana
systemctl enable kibana

Once Kibana is running, open 192.168.0.5:5601 in a browser; Kibana will ask for an enrollment token, which must be generated on es01.

Connect to es01 and run the following:

# Generate the Kibana enrollment token on es01
cd /usr/share/elasticsearch/bin/
./elasticsearch-create-enrollment-token -s kibana
# Open the Kibana page in a browser and paste the token
# Kibana then asks for a 6-digit verification code; generate it on the kibana host
cd /usr/share/kibana/bin/
./kibana-verification-code
# Enter the code to finish the setup, then log in with the elastic account and the password set during the ES installation

Install Filebeat

Only the Kubernetes deployment of Filebeat is covered here. The official YAML manifest below can be applied as-is; later, edit the ConfigMap to define your own log pipelines (an example filebeat.yml is sketched inside the ConfigMap).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources:
    - jobs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  # Put your Filebeat log-collection configuration here, adjusted to your environment
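  # NOTE: the following filebeat.yml is a minimal example (a container input plus
  # an Elasticsearch output driven by the env vars defined in the DaemonSet below),
  # modelled on the official manifest; adjust it to your environment.
  # ssl.verification_mode "none" matches the self-signed certificates used above
  # and should be replaced with a proper CA in production.
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.verification_mode: "none"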
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.6.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value:
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value:
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
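Assuming the manifest above is saved as filebeat-kubernetes.yaml (the file name is arbitrary), deploy and verify it with:

kubectl apply -f filebeat-kubernetes.yaml
kubectl -n kube-system get pods -l k8s-app=filebeat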