ELK 8.8 Deployment and Installation with xpack Authentication

  • Introduction

  This post records how to use filebeat + logstash + elasticsearch + kibana to collect, filter, store, and display application log files. It is based on version 8.8 with xpack security authentication enabled. Since Elasticsearch has shipped with a bundled JDK since 7.x, JDK/environment setup steps are not covered here.


  • Download the packages
elasticsearch:https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.8.1-linux-x86_64.tar.gz
kibana:https://artifacts.elastic.co/downloads/kibana/kibana-8.8.1-linux-x86_64.tar.gz
filebeat:https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.8.1-linux-x86_64.tar.gz
logstash:https://artifacts.elastic.co/downloads/logstash/logstash-8.8.1-linux-x86_64.tar.gz
  • Environment

| IP         | OS                                   | Services                      | Version |
|------------|--------------------------------------|-------------------------------|---------|
| 172.16.0.1 | CentOS Linux release 7.6.1810 (Core) | logstash+elasticsearch+kibana | 8.8.1   |
| 172.16.0.2 | CentOS Linux release 7.6.1810 (Core) | filebeat                      | 8.8.1   |
| 172.16.0.3 | CentOS Linux release 7.6.1810 (Core) | filebeat                      | 8.8.1   |
  • Deploy the services
  1. Elasticsearch cannot be started as the root user, so create a regular user elk; all subsequent steps are performed as this user.
useradd -d /home/elk -m elk
echo '123@qwe'|passwd elk --stdin
  2. Install Elasticsearch
- Create the Elasticsearch data and log directories;
mkdir -p /data/elk/elasticsearch/{data,logs}
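
If these directories were created as root (likely, since /data is usually root-owned), hand them over to the elk user so Elasticsearch can write to them; a small sketch, run as root:
chown -R elk:elk /data/elk   # give the elk user ownership of the data and log directories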

- Extract the installation package
tar -zxvf elasticsearch-8.8.1-linux-x86_64.tar.gz
- Go into the config directory and edit elasticsearch.yml;
cd elasticsearch-8.8.1/config

- Uncomment and set the following options in the configuration file;
vim elasticsearch.yml
cluster.name: my-application
node.name: node-1
path.data: /data/elk/elasticsearch/data
path.logs: /data/elk/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
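
Because network.host is set to a non-loopback address, Elasticsearch enforces its bootstrap checks at startup. A minimal sketch of the usual kernel and open-file prerequisites, run as root (values are the commonly documented minimums; adjust to your environment):
sysctl -w vm.max_map_count=262144                     # mmap count required by Elasticsearch
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf    # persist across reboots
cat >> /etc/security/limits.conf <<'EOF'
elk soft nofile 65535
elk hard nofile 65535
EOF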

- Do not start in the background the first time;
pwd
/home/elk/elasticsearch-8.8.1/config
cd /home/elk/elasticsearch-8.8.1/bin
./elasticsearch
**The final lines of the foreground log output (record these)**:
 Elasticsearch security features have been automatically configured!
 Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  2j6qweqeRqnAnPGU61

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  09189c0bb24353451b32f603d509272d591sad123815b1233d7ae

ℹ️  Configure Kibana to use this cluster:
 Run Kibana and click the configuration link in the terminal when Kibana starts.
 Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjguMSIsImFkciI6WyIxMC4yNTMuMTc3LjkyOjkyMDAiXSwiZmdyIjoiMDkxODljMGJiMjc4NDE4YTIyNjE4YjBlN2M5OGIzMmY2MDNkNTA5MjcyZDU5MWZiNzkwMDQzODE1YjY3ZDdhZSIsImtleSI6Im02ckE5WWdCUEJtZ2J3czVUWU14OjRUYVliMi1SUWFHSlVlRWJaYk5NUVEifQ==

ℹ️ Configure other nodes to join this cluster:
 Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjguMSIsImFkciI6WyIxMC4yNTMuMTc3LjkyOjkyMDAiXSwiZmdyIjoiMDkxODljMGJiMjc4NDE4YTIyNjE4YjBlN2M5OGIzMmY2MDNkNTA5MjcyZDU5MWZiNzkwMDQzODE1YjY3ZDdhZSIsImtleSI6Im1xckE5WWdCUEJtZ2J3czVUWU12Omt1aEdkVXAzUTA2LUpqOVNmMWkweEEifQ==

  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.8.1`
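
The enrollment tokens above are only valid for 30 minutes. If they expire before you need them, new ones can be generated at any time with the bundled tool; a quick sketch, run from /home/elk/elasticsearch-8.8.1:
./bin/elasticsearch-create-enrollment-token -s kibana   # new enrollment token for Kibana
./bin/elasticsearch-create-enrollment-token -s node     # new enrollment token for additional nodes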


- Open a new session and look at elasticsearch.yml again; you will find that xpack security settings have been added automatically;

    ```
    # Enable security features
    xpack.security.enabled: true
    
    xpack.security.enrollment.enabled: true
    
    xpack.monitoring.collection.enabled: true
    
    # Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
    xpack.security.http.ssl:
      enabled: true
      keystore.path: certs/http.p12
    
    # Enable encryption and mutual authentication between cluster nodes
    xpack.security.transport.ssl:
      enabled: true
      verification_mode: certificate
      keystore.path: certs/transport.p12
      truststore.path: certs/transport.p12
    # Create a new cluster with the current node only
    # Additional nodes can still join the cluster later
    cluster.initial_master_nodes: ["node-1"]
    ```
    At the same time, a certs directory has been created under /home/elk/elasticsearch-8.8.1/config with the following contents;
    ll certs/
    total 24
    -rw-rw---- 1 elk elk 1915 Jun 26 11:29 http_ca.crt
    -rw-rw---- 1 elk elk 9997 Jun 26 11:29 http.p12
    -rw-rw---- 1 elk elk 5822 Jun 26 11:29 transport.p12
    
- Stop the Elasticsearch service and restart it in the background;
[elk@host-172-16-0-1 config]$ ps -ef|grep elasticsearch|grep -v grep|awk '{print $2}'|xargs kill
[elk@host-172-16-0-1 config]$ cd  ../bin/
[elk@host-172-16-0-1 bin]$ ./elasticsearch -d 

- Reset the password of the Elasticsearch built-in kibana user;
./elasticsearch-reset-password -u kibana
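The tool prints an auto-generated password; record it for kibana.yml below. If you prefer to choose the password yourself, the same tool also has an interactive mode, e.g.:
./elasticsearch-reset-password -u kibana -i   # prompts for the new password instead of generating one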

- Open https://172.16.0.1:9200 in a browser and log in as elastic with the password 2j6qweqeRqnAnPGU61.
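
The same check can be done from the command line with the generated CA certificate; a minimal sketch (paths and password as recorded above):
curl --cacert /home/elk/elasticsearch-8.8.1/config/certs/http_ca.crt -u elastic:2j6qweqeRqnAnPGU61 https://172.16.0.1:9200
curl --cacert /home/elk/elasticsearch-8.8.1/config/certs/http_ca.crt -u elastic:2j6qweqeRqnAnPGU61 "https://172.16.0.1:9200/_cluster/health?pretty"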

  3. Install Kibana
- Extract the installation package;
tar -zxvf kibana-8.8.1-linux-x86_64.tar.gz
cd  kibana-8.8.1/config

- Copy the certs folder from the Elasticsearch config directory into Kibana's config directory;
\cp  -rf /home/elk/elasticsearch-8.8.1/config/certs ./

- Edit kibana.yml, uncommenting and setting the following options;
vim kibana.yml
server.port: 5601
server.host: "172.16.0.1"
elasticsearch.hosts: ["https://172.16.0.1:9200"]
elasticsearch.username: "kibana"    # built-in Elasticsearch user;
elasticsearch.password: "pkRqnAnPGU61123"  # the kibana password generated by elasticsearch-reset-password above;
elasticsearch.ssl.certificateAuthorities: [ "/home/elk/kibana-8.8.1/config/certs/http_ca.crt" ]
i18n.locale: "zh-CN"

- Start Kibana
cd  /home/elk/kibana-8.8.1/
nohup ./bin/kibana &
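
Kibana takes a little while to come up. A quick way to confirm it is running (a sketch; use netstat if ss is not installed):
ss -lntp | grep 5601   # confirm Kibana is listening on its port
tail -f nohup.out      # follow the startup log until Kibana reports it is available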

- Open http://172.16.0.1:5601/login and log in as elastic with the password 2j6qweqeRqnAnPGU61.

  4. Install Logstash
- Extract the installation package;
tar -zxvf logstash-8.8.1-linux-x86_64.tar.gz
cd logstash-8.8.1/config/

- Copy the certs folder from the Elasticsearch config directory into Logstash's config directory;
\cp  -rf /home/elk/elasticsearch-8.8.1/config/certs ./

- Edit the configuration file logstash.yml;
vim logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: 2j6qweqeRqnAnPGU61
xpack.monitoring.elasticsearch.hosts: ["https://172.16.0.1:9200"]
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/home/elk/logstash-8.8.1/config/certs/http_ca.crt"
xpack.monitoring.elasticsearch.ssl.ca_trusted_fingerprint: 09189c0bb278418a22618b0e7c98b32f603d509272d591fb790043815b67d7ae 

- Edit the logstash-sample.conf pipeline configuration;
vim logstash-sample.conf
input {
  beats {
    port => 5041
  }
}
output {
  elasticsearch {
    hosts => ["https://172.16.0.1:9200"]
    #index => "%{[fields][service_name]}-%{+YYYY.MM.dd}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/home/elk/logstash-8.8.1/config/certs/http_ca.crt"
    ca_trusted_fingerprint => "09189c0bb278418a22618b0e7c98b32f603d509272d591fb790043815b67d7ae"
    user => "elastic"
    password => "2j6qweqeRqnAnPGU61"
  }
  stdout {codec => rubydebug}
}

- Start Logstash;
nohup ./bin/logstash -f /home/elk/logstash-8.8.1/config/logstash-sample.conf &
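
Before relying on the nohup output, the pipeline file can be validated and the beats listener checked; a minimal sketch:
./bin/logstash -f /home/elk/logstash-8.8.1/config/logstash-sample.conf --config.test_and_exit   # validate pipeline syntax only
ss -lntp | grep 5041                                                                            # confirm the beats input is listening
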
  5. Install filebeat
- Log in to the two application servers 172.16.0.2/3 and work under the /data directory;
mkdir /data/filebeat
tar -zxvf filebeat-8.8.1-linux-x86_64.tar.gz -C /data/filebeat
cd /data/filebeat/filebeat-8.8.1-linux-x86_64

- Edit the filebeat.yml configuration file
vim filebeat.yml
filebeat.inputs:
- type: log
  id: 1
  enabled: true
  paths:
    - /data/app/ap/logs/*.log  # log files or path to collect
# output.elasticsearch:  # in this architecture filebeat outputs to logstash, so the default output.elasticsearch is left disabled;
output.logstash:
  hosts: ["172.16.0.1:5041"]  # this port must match the one configured in logstash-sample.conf;
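
filebeat has built-in self-tests that are handy here; a quick sketch to validate the config and the connection to Logstash before starting:
./filebeat test config -c filebeat.yml   # check the configuration file
./filebeat test output -c filebeat.yml   # check connectivity to 172.16.0.1:5041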

- With the configuration done, start filebeat temporarily;
nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 &

- When started via nohup like this, filebeat may exit on its own after running for a while: by default filebeat periodically checks whether the monitored files have new content, and if no new data is written for a certain period it shuts itself down. The fix is to run filebeat as a long-lived system service, as set up below;
    - Add a systemd unit for the filebeat service
    vim  /etc/systemd/system/filebeat.service
    
    [Unit]
    Description=Filebeat is a lightweight shipper for metrics.
    Documentation=https://www.elastic.co/products/beats/filebeat
    Wants=network-online.target
    After=network-online.target
    
    [Service]
    Environment="LOG_OPTS=-e"
    Environment="CONFIG_OPTS=-c /data/filebeat/filebeat-8.8.1-linux-x86_64/filebeat.yml"
    Environment="PATH_OPTS=-path.home /data/filebeat/filebeat-8.8.1-linux-x86_64 -path.config /data/filebeat/filebeat-8.8.1-linux-x86_64 -path.data /data/filebeat/filebeat-8.8.1-linux-x86_64/data -path.logs /data/filebeat/filebeat-8.8.1-linux-x86_64/logs"
    ExecStart=/data/filebeat/filebeat-8.8.1-linux-x86_64/filebeat $LOG_OPTS $CONFIG_OPTS $PATH_OPTS
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    
    - Grant execute permission on the unit file
    chmod +x /etc/systemd/system/filebeat.service
    
    - Reload systemd, enable the service at boot, and start it
    systemctl daemon-reload
    systemctl enable filebeat
    systemctl start filebeat
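
    - Check that the service is up (a quick sketch; output format depends on your systemd version)
    systemctl status filebeat
    journalctl -u filebeat -f   # follow filebeat's logs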
  • Log in to Kibana to view the logs
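
You can also confirm from the command line that the pipeline is writing indices into Elasticsearch; a minimal sketch run on the Elasticsearch host (password as recorded earlier):
curl --cacert /home/elk/elasticsearch-8.8.1/config/certs/http_ca.crt -u elastic:2j6qweqeRqnAnPGU61 "https://172.16.0.1:9200/_cat/indices?v"

In Kibana, create a data view under Stack Management that matches the new indices, then browse the log documents in Discover.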