ELK 8.4.3 Docker Step-by-Step Installation Guide (Revised)

1. Create a Docker network

docker network create -d bridge elastic

2. Pull the elasticsearch 8.4.3 image

docker pull elasticsearch:8.4.3

3. Run the container for the first time

docker run -it \
    -p 9200:9200 \
    -p 9300:9300 \
    --name elasticsearch \
    --net elastic \
    -e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
    -e "discovery.type=single-node" \
    -e LANG=C.UTF-8 \
    -e LC_ALL=C.UTF-8 \
    elasticsearch:8.4.3

Note: do not add the -d flag on this first run, or you will not see the random password and random enrollment token generated when the service starts for the first time

✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.


ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  UkNx8px1yrMYIht30QUc


ℹ️  HTTP CA certificate SHA-256 fingerprint:
  e924551c1453c893114a05656882eea81cb11dd87c1258f83e6f676d2428f8f2


ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiZTkyNDU1MWMxNDUzYzg5MzExNGEwNTY1Njg4MmVlYTgxY2IxMWRkODdjMTI1OGY4M2U2ZjY3NmQyNDI4ZjhmMiIsImtleSI6Inptd3g3NDBCQkNXVExGNGFNTzhEOl8yb25udm52VGp1TElfQWJTaTdmaEEifQ==


ℹ️ Configure other nodes to join this cluster:
• Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiZTkyNDU1MWMxNDUzYzg5MzExNGEwNTY1Njg4MmVlYTgxY2IxMWRkODdjMTI1OGY4M2U2ZjY3NmQyNDI4ZjhmMiIsImtleSI6InpXd3g3NDBCQkNXVExGNGFNTzhEOjB5aUVodjR6VEdhUlNkazNxb1dpb0EifQ==


  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.4.3`
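The enrollment token is not opaque: it is just base64-encoded JSON carrying the node address, the CA fingerprint, and an API key. As an illustration, decoding the Kibana token shown above:

```shell
# Decode the Kibana enrollment token printed at first startup (value copied
# from the first-run output above); the result is plain JSON.
TOKEN='eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiZTkyNDU1MWMxNDUzYzg5MzExNGEwNTY1Njg4MmVlYTgxY2IxMWRkODdjMTI1OGY4M2U2ZjY3NmQyNDI4ZjhmMiIsImtleSI6Inptd3g3NDBCQkNXVExGNGFNTzhEOl8yb25udm52VGp1TElfQWJTaTdmaEEifQ=='
DECODED=$(printf '%s' "$TOKEN" | base64 -d)
echo "$DECODED"
```

The `adr` field matches the elasticsearch container IP (172.18.0.2:9200) and `fgr` matches the HTTP CA fingerprint printed above, which is why the token alone is enough for Kibana to find and trust the cluster.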

4. Create host directories and copy the configuration out of the container

mkdir -p /data/apps/elk8.4.3/elasticsearch
docker cp elasticsearch:/usr/share/elasticsearch/config /data/apps/elk8.4.3/elasticsearch/        
docker cp elasticsearch:/usr/share/elasticsearch/data /data/apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/plugins /data/apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/logs /data/apps/elk8.4.3/elasticsearch/

5. Remove the container

docker rm -f elasticsearch

6. Edit /data/apps/elk8.4.3/elasticsearch/config/elasticsearch.yml

Add: xpack.monitoring.collection.enabled: true
Note: only with this setting will Kibana show the cluster as Online; without it the cluster shows as Offline

cluster.name: "docker-cluster"
network.host: 0.0.0.0


#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 28-02-2024 10:09:05
#
# --------------------------------------------------------------------------------


# Enable security features
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
xpack.security.enrollment.enabled: true


# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12


# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

7. Start elasticsearch

docker run -it \
   -d \
   -p 9200:9200 \
   -p 9300:9300 \
   --name elasticsearch \
   --net elastic \
   -e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
   -e "discovery.type=single-node" \
   -e LANG=C.UTF-8 \
   -e LC_ALL=C.UTF-8 \
   -v /data/apps/elk8.4.3/elasticsearch/config:/usr/share/elasticsearch/config \
   -v /data/apps/elk8.4.3/elasticsearch/data:/usr/share/elasticsearch/data \
   -v /data/apps/elk8.4.3/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
   -v /data/apps/elk8.4.3/elasticsearch/logs:/usr/share/elasticsearch/logs \
   elasticsearch:8.4.3

8. After startup, visit https://10.200.146.31:9200 to verify it is working

Username: elastic
Password: look it up in the output saved from the first startup
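If you redirected the first-run output to a file, the password can be pulled back out with grep. A minimal sketch, assuming a hypothetical file /tmp/es-first-run.log (the heredoc below reproduces the relevant lines from the output above):

```shell
# Reproduce the relevant lines of the first-run output (in practice this file
# would be your saved log, e.g. from `docker run ... | tee es-first-run.log`)
cat > /tmp/es-first-run.log <<'EOF'
Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  UkNx8px1yrMYIht30QUc
EOF

# Take the line after the "Password for the elastic user" marker and strip whitespace
ES_PASS=$(grep -A1 'Password for the elastic user' /tmp/es-first-run.log | tail -n 1 | tr -d '[:space:]')
echo "$ES_PASS"

# The password can then be used to verify the cluster, e.g.:
#   curl -k -u "elastic:$ES_PASS" https://10.200.146.31:9200
```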

9. Install Kibana

docker pull kibana:8.4.3

10. Start it

Do not pass the -d flag, or you will not see the initialization link

docker run -it \
    --restart=always \
    --log-driver json-file \
    --log-opt max-size=100m \
    --log-opt max-file=2 \
    --name kibana \
    -p 5601:5601 \
    --net elastic \
    kibana:8.4.3

11. Initialize the Kibana enrollment credentials

Open the URL returned in the previous step's log, e.g. http://10.200.146.31:5601/?code=224897, and the following screen appears

Paste the token generated by elasticsearch earlier into the textarea. Note that the token is only valid for 30 minutes; if it has expired, you must enter the container and generate a new one by running /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url "https://127.0.0.1:9200"

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiZTkyNDU1MWMxNDUzYzg5MzExNGEwNTY1Njg4MmVlYTgxY2IxMWRkODdjMTI1OGY4M2U2ZjY3NmQyNDI4ZjhmMiIsImtleSI6Inptd3g3NDBCQkNXVExGNGFNTzhEOl8yb25udm52VGp1TElfQWJTaTdmaEEifQ==

12. Create the kibana directory and copy the configuration

mkdir /data/apps/elk8.4.3/kibana
docker cp kibana:/usr/share/kibana/config /data/apps/elk8.4.3/kibana/        
docker cp kibana:/usr/share/kibana/data /data/apps/elk8.4.3/kibana/        
docker cp kibana:/usr/share/kibana/plugins /data/apps/elk8.4.3/kibana/        
docker cp kibana:/usr/share/kibana/logs /data/apps/elk8.4.3/kibana/       
sudo chown -R 1000:1000 /data/apps/elk8.4.3/kibana

13. Edit /data/apps/elk8.4.3/kibana/config/kibana.yml

### >>>>>>> BACKUP START: Kibana interactive setup (2024-02-28T10:19:30.247Z)


#
# ** THIS IS AN AUTO-GENERATED FILE **
#


# Default Kibana configuration for docker target
#server.host: "0.0.0.0"
#server.shutdownTimeout: "5s"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
#monitoring.ui.container.elasticsearch.enabled: true
### >>>>>>> BACKUP END: Kibana interactive setup (2024-02-28T10:19:30.247Z)
i18n.locale: "zh-CN"
# This section was automatically generated during setup.
server.host: 0.0.0.0
server.shutdownTimeout: 5s

#This IP must be the elasticsearch container's IP; find it with: docker inspect elasticsearch | grep -i ipaddress
elasticsearch.hosts: ['https://172.18.0.2:9200']
monitoring.ui.container.elasticsearch.enabled: true

#The settings below were generated automatically during the Kibana enrollment step; if they are missing, the enrollment failed
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MDkxMTU1Njg3MDE6bVZYQlE0c3FTVXFTc2VCMmYwQXU5dw
elasticsearch.ssl.certificateAuthorities: [/usr/share/kibana/data/ca_1709115570238.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://172.18.0.2:9200'], ca_trusted_fingerprint: e924551c1453c893114a05656882eea81cb11dd87c1258f83e6f676d2428f8f2}]

14. Remove the container and restart it

docker rm -f kibana


docker run -it \
    -d \
    --restart=always \
    --log-driver json-file \
    --log-opt max-size=100m \
    --log-opt max-file=2 \
    --name kibana \
    -p 5601:5601 \
    --net elastic \
    -v /data/apps/elk8.4.3/kibana/config:/usr/share/kibana/config \
    -v /data/apps/elk8.4.3/kibana/data:/usr/share/kibana/data \
    -v /data/apps/elk8.4.3/kibana/plugins:/usr/share/kibana/plugins \
    -v /data/apps/elk8.4.3/kibana/logs:/usr/share/kibana/logs \
    kibana:8.4.3

15. Pull the Logstash image

docker pull logstash:8.4.3

16. Run it once

docker run -it \
    -d \
    --name logstash \
    -p 9600:9600 \
    -p 5044:5044 \
    --net elastic \
    logstash:8.4.3

17. Create directories and copy the configuration

mkdir /data/apps/elk8.4.3/logstash
docker cp logstash:/usr/share/logstash/config /data/apps/elk8.4.3/logstash/ 
docker cp logstash:/usr/share/logstash/pipeline /data/apps/elk8.4.3/logstash/ 
sudo cp -rf /data/apps/elk8.4.3/elasticsearch/config/certs /data/apps/elk8.4.3/logstash/config/certs
sudo chown -R 1000:1000 /data/apps/elk8.4.3/logstash

18. Edit /data/apps/elk8.4.3/logstash/config/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "https://172.18.0.2:9200" ]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "UkNx8px1yrMYIht30QUc" # the elastic password from the first-run output
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/config/certs/http_ca.crt"
xpack.monitoring.elasticsearch.ssl.ca_trusted_fingerprint: "e924551c1453c893114a05656882eea81cb11dd87c1258f83e6f676d2428f8f2" # the CA fingerprint from the first-run output
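To check that a copied CA certificate really matches the ca_trusted_fingerprint from the first-run output, openssl can print a certificate's SHA-256 fingerprint. A sketch using a throwaway self-signed certificate as a stand-in for the real certs/http_ca.crt (demo.key/demo_ca.crt are made-up names; output formatting may vary slightly between openssl versions):

```shell
# Generate a throwaway self-signed cert purely for illustration; in practice
# point openssl at /data/apps/elk8.4.3/logstash/config/certs/http_ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo_ca.crt -days 1 -subj "/CN=demo" 2>/dev/null

# Elasticsearch prints the fingerprint as lowercase hex without colons,
# so strip the colons and lowercase openssl's output to compare:
FP=$(openssl x509 -in /tmp/demo_ca.crt -noout -fingerprint -sha256 \
    | cut -d= -f2 | tr -d ':' | tr '[:upper:]' '[:lower:]')
echo "$FP"
```

Run against the real http_ca.crt, the printed value should equal e924551c1453c893114a05656882eea81cb11dd87c1258f83e6f676d2428f8f2.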

19. Edit /data/apps/elk8.4.3/logstash/pipeline/logstash.conf

input {
  beats {
    port => 5044
  }
}


filter {
  json {
    source => "message"
  }
  date {
        # My logs' time field looks like 2024-03-14T15:34:03+08:00, so these
        # two lines parse it into @timestamp (the json filter runs first so
        # that the time field exists by the time date sees the event)
        match => [ "time", "ISO8601" ]
        target => "@timestamp"
  }
  mutate {
    remove_field => ["message", "path", "version", "@version", "agent", "cloud", "host", "input", "log", "tags", "_index", "_source", "ecs", "event"]
  }
}


output {
  elasticsearch {
    hosts => ["https://172.18.0.2:9200"]
    index => "douyin-%{+YYYY.MM.dd}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/usr/share/logstash/config/certs/http_ca.crt"
    ca_trusted_fingerprint => "e924551c1453c893114a05656882eea81cb11dd87c1258f83e6f676d2428f8f2" # the CA fingerprint from the first-run output
    user => "elastic"
    password => "UkNx8px1yrMYIht30QUc" # the elastic password from the first-run output
  }
}
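The date filter above converts the +08:00 timestamp to UTC before storing it in @timestamp. As an aside, GNU date performs the same conversion, which is handy for sanity-checking what will land in the index (assumes GNU coreutils):

```shell
# Same conversion the Logstash date filter performs: parse an ISO8601
# timestamp with a +08:00 offset and render it in UTC
UTC_TS=$(date -u -d '2024-03-14T15:34:03+08:00' '+%Y-%m-%dT%H:%M:%SZ')
echo "$UTC_TS"   # 2024-03-14T07:34:03Z
```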

20. Remove the container and restart it

docker rm -f logstash


docker run -it \
    -d \
    --name logstash \
    -p 9600:9600 \
    -p 5044:5044 \
    --net elastic \
    -v /data/apps/elk8.4.3/logstash/config:/usr/share/logstash/config \
    -v /data/apps/elk8.4.3/logstash/pipeline:/usr/share/logstash/pipeline \
    logstash:8.4.3

21. Pull the Filebeat image

sudo docker pull elastic/filebeat:8.4.3

22. Run it once

docker run -it \
    -d \
    --name filebeat \
    --network host \
    -e TZ=Asia/Shanghai \
    elastic/filebeat:8.4.3 \
    filebeat -e  -c /usr/share/filebeat/filebeat.yml

23. Create directories and copy the configuration

mkdir /data/apps/elk8.4.3/filebeat
docker cp filebeat:/usr/share/filebeat/filebeat.yml /data/apps/elk8.4.3/filebeat/ 
docker cp filebeat:/usr/share/filebeat/data /data/apps/elk8.4.3/filebeat/ 
docker cp filebeat:/usr/share/filebeat/logs /data/apps/elk8.4.3/filebeat/ 
sudo chown -R 1000:1000 /data/apps/elk8.4.3/filebeat

24. Edit /data/apps/elk8.4.3/filebeat/filebeat.yml

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false


processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~


output.logstash:
  enabled: true
  # Filebeat was started with --network host, so localhost reaches the logstash service
  hosts: ["localhost:5044"]


filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/target/test.log # path of the logs to collect, as seen inside the docker container
  scan_frequency: 10s
  exclude_lines: ['HEAD', 'HTTP/1.1'] # duplicate YAML keys are invalid, so both patterns go in one list
  multiline.pattern: '^[[:space:]]+(at|\.{3})\b|Exception|捕获异常'
  multiline.negate: false
  multiline.match: after
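The multiline.pattern above matches stack-trace continuation lines; with negate: false and match: after, matching lines are appended to the preceding event. grep -E accepts the same expression (note `\b` is a GNU grep extension), so the pattern can be tried out before deploying:

```shell
# The same ERE used for multiline.pattern, exercised with grep -E
pattern='^[[:space:]]+(at|\.{3})\b|Exception|捕获异常'

# A stack-trace frame matches (leading whitespace + "at"):
printf '%s\n' '    at com.example.Demo.run(Demo.java:42)' | grep -E "$pattern"

# An exception header matches via the "Exception" alternative:
printf '%s\n' 'java.lang.NullPointerException: boom' | grep -E "$pattern"

# A normal log line does not match, so it starts a new event:
printf '%s\n' '2024-03-14 normal log line' | grep -E "$pattern" || echo 'no match: starts a new event'
```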

25. Remove the container and restart it

docker rm -f filebeat


docker run -it \
    -d \
    --name filebeat \
    --network host \
    -e TZ=Asia/Shanghai \
    -v /data/apps/douyin/logs:/usr/share/filebeat/target \
    -v /data/apps/elk8.4.3/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
    -v /data/apps/elk8.4.3/filebeat/data:/usr/share/filebeat/data \
  -v /data/apps/elk8.4.3/filebeat/logs:/usr/share/filebeat/logs \
    elastic/filebeat:8.4.3 \
    filebeat -e  -c /usr/share/filebeat/filebeat.yml

A note on running Filebeat under Docker on Windows

On Windows, starting Filebeat the same way fails, and the error log shows:
Exiting: error loading config file: config file ("/usr/share/filebeat/filebeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/filebeat/filebeat.yml')
Windows has no chmod to change file permissions, and setting Everyone to read-and-execute in the built-in security dialog did not work either, so I had no choice but to start the container from Git Bash. If anyone knows an easier way, please let me know, thanks!

docker run -it -d --name filebeat --network host -e TZ=Asia/Shanghai -v /e/data/apps/douyin/logs:/usr/share/filebeat/target -v /e/data/apps/elk8.4.3/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /e/data/apps/elk8.4.3/filebeat/data:/usr/share/filebeat/data -v /e/data/apps/elk8.4.3/filebeat/logs:/usr/share/filebeat/logs elastic/filebeat:8.4.3 sh -c "filebeat -e -c /usr/share/filebeat/filebeat.yml"

26. Once it starts, go back to Kibana and you can see the data

27. Under Analytics -> Discover you can see the log entries; if not, create a new data view using the douyin index

28. The ELK-F deployment is now complete

This article was adapted, with some modifications, from https://www.jianshu.com/p/5c441ce929b1
