Building an ELKF Logging System (Part 1): Basics of the ELKF Architecture
I. ELKF Overview
Elasticsearch: searches, analyzes, and stores data
Logstash: collects logs, formats and filters the data (the data-cleaning step), and finally pushes the data to Elasticsearch for storage
Kibana: data visualization
Beats: a family of single-purpose data shippers that send data from edge machines to Logstash and Elasticsearch. The most widely used is Filebeat, a lightweight log shipper.
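To make the data path concrete, here is a minimal sketch of a Logstash pipeline that accepts events from Beats and forwards them to Elasticsearch. The port, host address, and the bare output (no TLS or credentials) are illustrative assumptions only; this is not the configuration used later in this article.

input {
  beats {
    port => 5044                               # Filebeat ships events to this port
  }
}
filter {
  # data cleaning / formatting would go here (grok, mutate, date, ...)
}
output {
  elasticsearch {
    hosts => ["https://192.168.200.21:9200"]   # assumed address of one ES node
  }
}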
II. Environment Preparation (perform the same steps on all three machines)
Environment description:
Prepare three machines:

OS          IP              Hostname  Node role            Installed applications
CentOS 7.9  192.168.200.21  elk01     data + master node   elasticsearch, logstash, kibana, filebeat
CentOS 7.9  192.168.200.22  elk02     data + master node   elasticsearch, kibana
CentOS 7.9  192.168.200.23  elk03     data + master node   elasticsearch, kibana
1. Disable SELinux
setenforce 0     # disable SELinux temporarily
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config     # disable SELinux permanently (takes effect after a reboot)
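As a quick sanity check (my addition, not part of the original steps), the SELinux state can be confirmed with the standard tools:

getenforce      # expect "Permissive" right after setenforce 0, "Disabled" after a reboot
sestatus        # prints the full SELinux status, including the mode configured in /etc/selinux/config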
2. Raise the maximum number of open files
cat /etc/security/limits.conf | grep -v "^#" | grep -v "^$"
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536

cat /etc/sysctl.conf | grep -v "^#"
vm.max_map_count = 655360
# apply the kernel setting
sysctl -p

cat /etc/systemd/system.conf | grep -v "^#"
[Manager]
DefaultLimitNOFILE=655360
DefaultLimitNPROC=655360
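These limits only apply to new login sessions (and the systemd defaults only after a reboot or daemon re-exec). A hedged way to verify them after logging in again:

ulimit -n                     # maximum open files for the current shell, expect 65536
ulimit -u                     # maximum user processes, expect 65536
sysctl vm.max_map_count       # expect vm.max_map_count = 655360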
3. Configure the hosts file
cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.21 elk01
192.168.200.22 elk02
192.168.200.23 elk03
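To confirm that the hostnames resolve on every machine, a quick check (my addition, not in the original steps) could be:

for h in elk01 elk02 elk03; do getent hosts "$h"; done    # each line should print the IP defined in /etc/hosts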
4. Install the Java environment
# List the JDK RPM packages available from the yum repositories
yum list | grep jdk
# Install JDK 11
yum install -y java-11-openjdk*
# Check the Java version
[root@elk01 ~]# java -version
openjdk version "11.0.19" 2023-04-18 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.19.0.7-1.el7_9) (build 11.0.19+7-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.19.0.7-1.el7_9) (build 11.0.19+7-LTS, mixed mode, sharing)
III. Install and configure Elasticsearch (install on all three machines; note: after installation, first start only the master node specified in the configuration file, and do not start the other nodes until they have been enrolled into the cluster)
1. Install Elasticsearch
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

[root@elk01 ~]# cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md

yum install --enablerepo=elasticsearch elasticsearch -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package elasticsearch.x86_64 0:8.9.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
===============================================================================================
 Package                Arch           Version          Repository              Size
===============================================================================================
Installing:
 elasticsearch          x86_64         8.9.0-1          elasticsearch           578 M

Transaction Summary
===============================================================================================
Install  1 Package

Total download size: 578 M
Installed size: 1.2 G
Downloading packages:
elasticsearch-8.9.0-x86_64.rpm                                      | 578 MB  01:09:19
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Creating elasticsearch group... OK
Creating elasticsearch user... OK
  Installing : elasticsearch-8.9.0-1.x86_64                                            1/1
--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : u9COgfA0okh4LD+fRIRh

If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.

You can complete the following actions at any time:

Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.

Generate an enrollment token for Kibana instances with
 '/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.

Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.

-------------------------------------------------------------------------------------------------
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
  Verifying  : elasticsearch-8.9.0-1.x86_64                                            1/1

Installed:
  elasticsearch.x86_64 0:8.9.0-1

Complete!
2. Configure Elasticsearch
[root@elk01 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v "^#"
# Cluster name
cluster.name: my-elk
# Name of this node within the cluster; it must be unique in the cluster
node.name: elk01
# Data directory
path.data: /var/lib/elasticsearch
# Log directory
path.logs: /var/log/elasticsearch
# Listen address
network.host: 0.0.0.0
# Listen port
http.port: 9200
# List of cluster nodes
discovery.seed_hosts: ["elk01", "elk02", "elk03"]
# Master node used for the first startup; comment this out on the other nodes
cluster.initial_master_nodes: ["elk01"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0

# Open the required ports in the firewall
firewall-cmd --add-port=9200/tcp --permanent
firewall-cmd --add-port=9300/tcp --permanent
firewall-cmd --reload

# Enable the elasticsearch service at boot and start it (note: once all three machines are configured, start only the master node specified in the configuration file; leave the other nodes stopped for now)
systemctl enable elasticsearch
systemctl start elasticsearch

When Elasticsearch is installed, the installation process configures a single-node cluster by default. If you want other nodes to join the existing cluster, run the following command on the configured master node to generate an enrollment token on the existing node:

[root@server01 ~]# /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
eyJ2ZXIiOiI4LjkuMSIsImFkciI6WyIxOTIuMTY4LjEwMC4yMTo5MjAwIl0sImZnciI6IjhlZDhmMmY2MjExYWM1YzM5NzdjNzllZDdmODY1MDFiY2E5YjlkNDA4ZTgwNDg0NjRiZDViZWFkNzZiZjRmMjIiLCJrZXkiOiJKX0RERG9vQkRPSVRJM0NlWnNIODpINUFMeHo1eFQ2bU5vcUF1cDNxRk1RIn0=

Then, on the new Elasticsearch node, pass the enrollment token as an argument to the elasticsearch-reconfigure-node tool:

[root@server02 ~]# /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token eyJ2ZXIiOiI4LjkuMSIsImFkciI6WyIxOTIuMTY4LjEwMC4yMTo5MjAwIl0sImZnciI6IjhlZDhmMmY2MjExYWM1YzM5NzdjNzllZDdmODY1MDFiY2E5YjlkNDA4ZTgwNDg0NjRiZDViZWFkNzZiZjRmMjIiLCJrZXkiOiJKX0RERG9vQkRPSVRJM0NlWnNIODpINUFMeHo1eFQ2bU5vcUF1cDNxRk1RIn0=
This node will be reconfigured to join an existing cluster, using the enrollment token that you provided.
This operation will overwrite the existing configuration. Specifically:
  - Security auto configuration will be removed from elasticsearch.yml
  - The [certs] config directory will be removed
  - Security auto configuration related secure settings will be removed from the elasticsearch.keystore
Do you want to continue with the reconfiguration process [y/N]y

If there are additional nodes, repeat the same steps on each of them, then start the elasticsearch service. After starting it, you can follow the logs with journalctl -f:

[root@server02 ~]# journalctl
-- Logs begin at Sun 2023-08-20 00:17:50 CST, end at Sun 2023-08-20 03:05:01 CST. --
Aug 20 00:17:50 centos7 systemd-journal[107]: Runtime journal is using 8.0M (max allowed 188.5M
Aug 20 00:17:50 centos7 kernel: Initializing cgroup subsys cpuset
Aug 20 00:17:50 centos7 kernel: Initializing cgroup subsys cpu
Aug 20 00:17:50 centos7 kernel: Initializing cgroup subsys cpuacct
......
......

You can also list only the elasticsearch log entries with journalctl --unit elasticsearch:

[root@server02 ~]# journalctl --unit elasticsearch
-- Logs begin at Sun 2023-08-20 00:17:50 CST, end at Sun 2023-08-20 03:05:01 CST. --
Aug 20 01:14:44 elk02 systemd[1]: Starting Elasticsearch...
Aug 20 01:15:30 elk02 systemd[1]: Started Elasticsearch.
Aug 20 01:24:29 elk02 systemd[1]: Stopping Elasticsearch...
Aug 20 01:24:31 elk02 systemd[1]: Stopped Elasticsearch.
Check the cluster status and information from any node (elastic is the username; 086530 is the password):

# Check the cluster health
[root@elk01 ~]# curl -kXGET "https://localhost:9200/_cluster/health?pretty=true" -u elastic:086530
# Check that Elasticsearch is running
[root@elk01 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:086530 https://localhost:9200
# Check the cluster health and node information
[root@elk01 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:086530 https://localhost:9200/_cluster/health?pretty=true
[root@server02 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:086530 https://localhost:9200/_cat/nodes

# Reset the default elasticsearch password (resetting it on any one node changes it for all nodes)
[root@elk01 ~]# /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
This tool will reset the password of the [elastic] user.
You will be prompted to enter the password.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Re-enter password for [elastic]:
Password for the [elastic] user successfully reset.
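For a healthy three-node cluster, the health output should report "status" : "green" and "number_of_nodes" : 3. A minimal sketch of a quick check (the grep filter is my addition):

curl -sk -u elastic:086530 "https://localhost:9200/_cluster/health?pretty" | grep -E '"status"|"number_of_nodes"'
#   "status" : "green",
#   "number_of_nodes" : 3,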
IV. Install and configure Kibana (install and configure on all three machines)
1. Install Kibana
[root@elk8]# cat /etc/yum.repos.d/kibana.repo
[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

yum install -y kibana
2. Configure Kibana
# vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.name: "elk01"
# server.publicBaseUrl is missing and should be configured when running in production; some features may not work correctly without it.
# Set it to the address you use to reach Kibana; it must not end with a /
server.publicBaseUrl: "http://192.168.200.21:5601"
# Switch the Kibana UI to Chinese by adding this line to kibana.yml
i18n.locale: "zh-CN"

# Generate the Kibana encryption keys on each of the three machines and add the generated keys to the Kibana configuration file
# xpack.encryptedSavedObjects.encryptionKey: Used to encrypt stored objects such as dashboards and visualizations
# xpack.reporting.encryptionKey: Used to encrypt saved reports
# xpack.security.encryptionKey: Used to encrypt session information
[root@aclab ~]# /usr/share/kibana/bin/kibana-encryption-keys generate
xpack.encryptedSavedObjects.encryptionKey: 5bb5e37c09fd6b05958be5a3edc82cf9
xpack.reporting.encryptionKey: b2b873b52ab8ec55171bd8141095302c
xpack.security.encryptionKey: 30670e386fab78f50b012e25cb284e88

# Open port 5601 in the firewall
firewall-cmd --add-port=5601/tcp --permanent
firewall-cmd --reload   # reload the firewall

# Enable kibana at boot
systemctl enable kibana
# Start kibana
systemctl start kibana

# Generate a Kibana enrollment token
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
eyJ2ZXIiOiI4LjAuMSIsImFkciI6WyIxOTIuMTY4LjIxMC4xOTo5MjAwIl0sImZnciI6IjMzYjUwYTkxN2VmYjIwZjhjYzFjMmM0ZjFhMDdlY2Q2MTliZGUxOTU4MzMyOGY2MTJjMzMyODFjNzI0ODQ5NDYiLCJrZXkiOiJBemgtXzRBQnBtQ3lIN2p4MG1VdDpNN0tiNTFMNlM5NnhwU1lTdGpIOUVRIn0=

# Test Kibana: open http://192.168.200.21:5601/ in a browser
# Paste the token generated above into the enrollment token field
eyJ2ZXIiOiI4LjAuMSIsImFkciI6WyIxOTIuMTY4LjIxMC4xOTo5MjAwIl0sImZnciI6IjMzYjUwYTkxN2VmYjIwZjhjYzFjMmM0ZjFhMDdlY2Q2MTliZGUxOTU4MzMyOGY2MTJjMzMyODFjNzI0ODQ5NDYiLCJrZXkiOiJBemgtXzRBQnBtQ3lIN2p4MG1VdDpNN0tiNTFMNlM5NnhwU1lTdGpIOUVRIn0=

# Retrieve the verification code on the server
sh /usr/share/kibana/bin/kibana-verification-code
# Enter the Elasticsearch username and password to log in to the system
# After reaching the Kibana console you can change the elastic password if needed (the login password for both Elasticsearch and Kibana will change)
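Once Kibana is enrolled and running, its health can be checked from the shell before moving on; a hedged sketch (the /api/status path is Kibana's standard status API, the address is this lab's):

systemctl status kibana --no-pager                             # the service should be active (running)
curl -s http://192.168.200.21:5601/api/status | head -c 200    # returns JSON including the overall status once Kibana is ready
journalctl -u kibana -f                                        # follow the Kibana logs if something looks wrong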
Note: on elk02 and elk03, change server.name and server.publicBaseUrl to that machine's own hostname and IP-based URL.
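As an illustration only (not from the original steps), the per-host values could be adjusted on elk02 like this, with elk03 handled the same way:

sed -i 's/server.name: "elk01"/server.name: "elk02"/' /etc/kibana/kibana.yml
sed -i 's#server.publicBaseUrl: "http://192.168.200.21:5601"#server.publicBaseUrl: "http://192.168.200.22:5601"#' /etc/kibana/kibana.yml
systemctl restart kibana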
3. Configure ES monitoring and management in Kibana
(1) After logging in to Kibana for the first time, choose to explore on your own.
(2) Open Management and find "Stack Monitoring" to enable monitoring of the ES cluster.
(3) Click to turn monitoring on.
(4) Wait a moment.
(5) If the page keeps loading and never shows anything, adjust the time range in the upper-right corner to cover today; the data will then appear, and a create-rule window will pop up; accepting the recommended defaults is fine.
(6) You can now see the status of the cluster.
V. Install and configure Logstash (install Logstash on elk01 only)
1. Install Logstash
# Add the yum repository
[root@elk01 ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# Install
yum install -y logstash
2. Configure Logstash
# Copy the ES certificates to the Logstash directory. Because ES is accessed over HTTPS,
# Logstash has to authenticate with the certificate when it sends logs to ES.
cp -r /etc/elasticsearch/certs /etc/logstash/

# Create symlinks
ln -s /usr/share/logstash/bin/logstash /bin/
ln -s /usr/share/logstash/bin/logstash-plugin /bin/
ln -s /usr/share/logstash/bin/logstash.lib.sh /usr/bin/

# Collect logs with Logstash. Go to the Logstash directory and create a log-collection configuration file.
# Since this server has no applications installed, collecting the operating system log serves as the example.
[root@elk01 ~]# cat /etc/logstash/conf.d/systemlog.conf
input {
  file {
    path => ["/var/log/messages"]
    type => "system"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["https://192.168.200.21:9200", "https://192.168.200.22:9200", "https://192.168.200.23:9200"]
    index => "192.168.200.21-syslog-%{+YYYY.MM}"
    user => "elastic"
    password => "086530qwe"
    ssl => "true"
    cacert => "/etc/logstash/certs/http_ca.crt"
  }
}

# Validate the configuration file; logstash can only start normally if the output ends with OK
[root@elk01 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/systemlog.conf --config.test_and_exit

# Enable and start the service
systemctl enable logstash
systemctl start logstash

# Open the related port in the firewall
firewall-cmd --add-port=9600/tcp --permanent
firewall-cmd --reload

# Test logstash
[root@elk01 ~]# logstash -e 'input { stdin { } } output { stdout {} }'
Using bundled JDK: /usr/share/logstash/jdk
......
......
ipelines=>[:main], :non_running_pipelines=>[]}
# Type something and press Enter
hello word
{
      "@version" => "1",
       "message" => "hello word",
          "host" => {
        "hostname" => "elk01"
    },
    "@timestamp" => 2023-08-20T16:54:10.531323818Z,
         "event" => {
        "original" => "hello word"
    }
}

Note: by default the stdout output plugin uses the rubydebug codec, so the output includes the version, timestamp, and other metadata; the message field contains exactly what was typed on the command line.
@timestamp: the time at which the event occurred
host: the host on which the event occurred
@version: the version of the event

# Collect logs: run the log-collection pipeline in the background, then view the logs.
logstash -f /etc/logstash/conf.d/systemlog.conf &
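The pipeline above forwards raw /var/log/messages lines without any of the cleaning mentioned in the overview. As a hedged illustration (not part of the original setup), a filter block could be added to systemlog.conf to parse the syslog fields before they reach ES:

filter {
  if [type] == "system" {
    grok {
      # split a classic syslog line into timestamp, host, program, pid, and message
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      # use the timestamp from the log line as the event time
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}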
View the logs with Kibana. Open the Kibana page, click Discover in the left-hand menu, and create a data view.
Copy and paste the index name into the data view form, then save it; the name of the data view is up to you. (I had already created it here, so it reports that the data view already exists.)
You can then browse the logs: the left-hand side offers filters, and the right-hand side lets you pick the time range.
That completes a simple end-to-end example of the ELK log pipeline: Logstash collects logs and sends them to ES, and Kibana reads the data from ES to display and query the logs.
VI. Install and configure Filebeat to collect logs
1. Install Filebeat
# Configure the yum repository
[root@elk01 ~]# cat /etc/yum.repos.d/elastic.repo
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# Install
yum install -y filebeat
2. Configure Filebeat to collect logs
# As with Logstash, certificate authentication is required, so copy the ES certificates to the Filebeat directory.
cp -r /etc/elasticsearch/certs /etc/filebeat/

Edit the filebeat.yml configuration file. It is worth backing it up first; to keep the configuration easy to write, clear out the original contents of filebeat.yml and write our own log-collection configuration. The example here collects the Kibana log, which is in JSON format.

[root@elk01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/kibana/kibana.log
  json.keys_under_root: true
  json.overwrite_keys: true

output.elasticsearch:
  hosts: ["192.168.200.21:9200", "192.168.200.22:9200", "192.168.200.23:9200"]
  index: "filebeat-kibanalog-%{+yyyy.MM}"
  protocol: "https"
  username: "elastic"
  password: "086530qwe"
  ssl.certificate_authorities:
    - /etc/filebeat/certs/http_ca.crt

setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
setup.template.enabled: false
setup.template.overwrite: true

# Start Filebeat to collect logs. Because the Filebeat configuration is YAML, whose syntax is fairly strict,
# check the configuration file for syntax errors before starting Filebeat.
[root@elk01 ~]# filebeat test config -c /etc/filebeat/filebeat.yml
Config OK

# Enable and start the service
systemctl enable filebeat.service
systemctl start filebeat.service
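Beyond `filebeat test config`, Filebeat can also test the connection to the configured output before you rely on the service; a small hedged check:

[root@elk01 ~]# filebeat test output -c /etc/filebeat/filebeat.yml
# the report should end with the server check marked OK; TLS or credential problems are shown here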
As with Logstash, open Kibana, create a data view for the index, and view the logs.
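Before creating the data view, it can be reassuring to confirm that the index actually exists in ES; a minimal sketch using the password configured for Filebeat above:

curl -sk -u elastic:086530qwe "https://localhost:9200/_cat/indices/filebeat-*?v"
# lists the filebeat-kibanalog-YYYY.MM index with its document count once Filebeat has shipped data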