Installing ELK 7.5.0 from RPM on CentOS 7.x
I. Environment
Our team needs to analyze Tomcat logs and business logs. After comparing the options, we settled on ELK for log analysis and visualization.
System environment
Server:
1. AWS EC2, 2 vCPU / 8 GB RAM
[root@ip-10-0-10-229 ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
[root@ip-10-0-10-229 ~]# uname -a
Linux elk-server 3.10.0-1062.9.1.el7.x86_64 #1 SMP Fri Dec 6 15:49:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
2. JDK version
A JDK of version 9 or later is required.
[root@ip-10-0-10-229 ~]# java -version
openjdk version "13.0.1" 2019-10-15
OpenJDK Runtime Environment AdoptOpenJDK (build 13.0.1+9)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 13.0.1+9, mixed mode, sharing)
3. ELK version
elasticsearch 7.5.0
kibana 7.5.0
logstash 7.5.0
4. JDK package
# A Java download link for reference; you can skip it, since Elasticsearch 7.5 bundles its own JDK and you only need to set the environment variables
wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

# Append the following at the end of /etc/profile
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
export JAVA_HOME=/usr/share/elasticsearch/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# Reload the variables
source /etc/profile
5. ELK packages
Downloads from the official site are painfully slow; these copies are hosted in an AWS S3 bucket.
wget https://rgc-solution-server-validation.s3.cn-north-1.amazonaws.com.cn/xuewenlong/elasticsearch-7.5.0-x86_64.rpm
wget https://rgc-solution-server-validation.s3.cn-north-1.amazonaws.com.cn/xuewenlong/kibana-7.5.0-x86_64.rpm
wget https://rgc-solution-server-validation.s3.cn-north-1.amazonaws.com.cn/xuewenlong/logstash-7.5.0.rpm
II. Installing Elasticsearch
1. Install Elasticsearch
rpm -ivh elasticsearch-7.5.0-x86_64.rpm
2. Edit the Elasticsearch configuration file
[root@ip-10-0-10-229 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v "^#"
# Data path
path.data: /var/lib/elasticsearch
# Log path
path.logs: /var/log/elasticsearch
# HTTP port
http.port: 9200
# Cluster name
cluster.name: elk-cluster
# Node name
node.name: elk-1
# The initial master list must match the node name
cluster.initial_master_nodes: ["node-1"]
network.host: 10.0.10.229
# X-Pack password authentication
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
# elasticsearch-head plugin (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
# Optional tuning
# Memory circuit breaker for a single request; the default is 60% of the JVM heap (ES 7.0 introduced a smarter real-memory circuit breaker to avoid OOM).
indices.breaker.request.limit: 10%
# JVM memory limit for the query cache; the default is 10%.
indices.queries.cache.size: 20%
# Shard request cache for search requests; cached requests are not re-parsed on the next hit, which improves search performance. The default is 1%.
indices.requests.cache.size: 2%
# Maximum size of the fielddata cache; unbounded by default.
indices.fielddata.cache.size: 30%
# Used for hot/warm separation of index data; note that the matching index setting is also required:
# "index.routing.allocation.require.box_type": "hot"
node.attr.box_type: hot
Important
3. Set the Java directory for Elasticsearch
This only needs to be set when Java was installed manually.
With the JDK bundled in Elasticsearch nothing needs to be set. (The version problem I hit earlier may have been because JDK 8 was in use and the path was never set; worth testing.)
4. Edit the configuration file
[root@ip-10-0-10-229 ~]# cat /etc/sysconfig/elasticsearch |grep JAVA
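If you did install the JDK manually, the path can be set in /etc/sysconfig/elasticsearch. A minimal sketch, assuming the JDK was unpacked under /usr/local (the path is an example, not from the original setup):

# /etc/sysconfig/elasticsearch
# Point Elasticsearch at the manually installed JDK (example path)
JAVA_HOME=/usr/local/jdk-11.0.1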
5. Set up password access with X-Pack
[root@ip-10-0-10-229 elasticsearch]# cat /etc/elasticsearch/elasticsearch.yml |grep -v "^#"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
cluster.name: elk-cluster
node.name: elk-1
cluster.initial_master_nodes: ["node-1"]
network.host: 10.0.10.229
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true

[root@ip-10-0-10-229 elasticsearch]# systemctl restart elasticsearch
[root@ip-10-0-10-229 elasticsearch]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-06-08 01:51:52 UTC; 7s ago
     Docs: http://www.elastic.co
 Main PID: 5453 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─5453 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.enco...
           └─5548 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Jun 08 01:51:37 ip-10-0-10-229.cn-north-1.compute.internal systemd[1]: Starting Elasticsearch...
Jun 08 01:51:37 ip-10-0-10-229.cn-north-1.compute.internal elasticsearch[5453]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely... release.
Jun 08 01:51:52 ip-10-0-10-229.cn-north-1.compute.internal systemd[1]: Started Elasticsearch.
Hint: Some lines were ellipsized, use -l to show in full.

[root@ip-10-0-10-229 elasticsearch]# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
6. Start Elasticsearch
systemctl start elasticsearch
systemctl enable elasticsearch
7. Verify it is running
Elasticsearch sometimes exits shortly after starting; if that happens, check whether there is enough memory, or scale the instance up appropriately.
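One knob worth checking is the JVM heap. As a sketch (the values below are examples, not from the original setup), it can be adjusted in /etc/elasticsearch/jvm.options to fit the instance, usually to roughly half of the available RAM:

# /etc/elasticsearch/jvm.options (example values for an 8 GB instance)
-Xms4g
-Xmx4g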
[root@ip-10-0-10-229 ~]# netstat -pntl |grep java
tcp6       0      0 10.0.10.229:9200        :::*                    LISTEN      13898/java
tcp6       0      0 10.0.10.229:9300        :::*                    LISTEN      13898/java
[root@ip-10-0-10-229 ~]# curl 10.0.10.229:9200
{
  "name" : "node-1",
  "cluster_name" : "my-es",
  "cluster_uuid" : "FhHOQO2MQbWRX0MiTRFF6g",
  "version" : {
    "number" : "7.5.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "e9ccaed468e2fac2275a3761849cbee64b39519f",
    "build_date" : "2019-11-26T01:06:52.518245Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
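Once X-Pack security is enabled, the same kind of check needs credentials; for example (the password is whatever you set with elasticsearch-setup-passwords):

curl -u elastic:your_password "http://10.0.10.229:9200/_cluster/health?pretty"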
8. Install the elasticsearch-head plugin
Append the following to the end of elasticsearch.yml to allow cross-origin access, then install and run elasticsearch-head:
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"

git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
open http://localhost:9100/
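Note that with X-Pack security enabled, elasticsearch-head usually needs the credentials passed as URL parameters; this is an assumption based on the plugin's typical usage, not something covered in the original steps:

http://localhost:9100/?auth_user=elastic&auth_password=your_password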
III. Installing Kibana
1. Install Kibana
rpm -ivh kibana-7.5.0-x86_64.rpm
2. Edit the configuration file
[root@ip-10-0-10-229 ~]# cat /etc/kibana/kibana.yml |grep -v "^#"
server.port: 5601
server.host: "10.0.10.229"
logging.dest: /var/log/kibana/kibana.log
elasticsearch.hosts: ["http://10.0.10.229:9200/"]
kibana.index: ".kibana"
elasticsearch.username: "kibana"
elasticsearch.password: "bsh@123"
i18n.locale: "zh-CN"
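One extra step that the RPM does not do for you (an assumption on my part, not from the original): with logging.dest pointing at /var/log/kibana/kibana.log, that directory has to exist and be writable by the kibana user, for example:

mkdir -p /var/log/kibana
touch /var/log/kibana/kibana.log
chown -R kibana:kibana /var/log/kibana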
Important
In version 7 this setting is elasticsearch.hosts; in version 6 it was elasticsearch.url. It must be configured correctly, otherwise /var/log/messages will show an error like:
FATAL Error: [elasticsearch.url]: definition for this key is missing
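For reference, the same setting written for both major versions (the values mirror this setup):

# Kibana 6.x
elasticsearch.url: "http://10.0.10.229:9200"
# Kibana 7.x
elasticsearch.hosts: ["http://10.0.10.229:9200"]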
3. Start Kibana
systemctl start kibana
systemctl enable kibana
Check the Kibana web UI; there are no indices yet.
IV. Installing and Configuring Logstash
1. Install Logstash
rpm -vih logstash-7.5.0.rpm
2. Write a configuration file to collect system logs
[root@ip-10-0-10-229 ~]# cat /etc/logstash/conf.d/file.conf
input {
  file {
    path => ["/var/log/messages"]
    type => "system-log"
    start_position => "beginning"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["10.0.10.229:9200"]
    index => "system-log-%{+YYYY.MM}"
    user => "elastic"
    password => "xuewenlong@123"
  }
}
3. Start it in the background
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf &
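To confirm that data is actually flowing in, the new index can be listed (a quick check, using whatever password was set for the elastic user):

curl -u elastic:your_password "http://10.0.10.229:9200/_cat/indices?v" | grep system-log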
4. Add the logs to Kibana for display
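In the Kibana UI this is done under Management → Index Patterns by creating a pattern such as system-log-*. It can also be scripted through Kibana's saved objects API; this is a sketch rather than part of the original steps:

curl -u elastic:your_password -X POST "http://10.0.10.229:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes":{"title":"system-log-*","timeFieldName":"@timestamp"}}'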
The ELK 7 stack is now up and running.
V. Collecting Tomcat Access Logs
Log in to the Tomcat server and download the Logstash package.
1. Install Logstash
wget https://rgc-solution-server-validation.s3.cn-north-1.amazonaws.com.cn/xuewenlong/logstash-7.5.0.rpm
rpm -i logstash-7.5.0.rpm
2. Add a configuration for the log files
[root@ip-tomcat ~]# cat /etc/logstash/conf.d/miniprogram-prod-access-bz.conf
input {
  file {
    path => ["/home/bsh/tools/apache-tomcat-8.5.23/logs/localhost_access_log*.log"]
    type => "access"
    start_position => "beginning"
    codec => "json"
  }
  file {
    path => ["/home/ec2-user/homeconnect/logs/AspectLog/aspect.log"]
    type => "aspect"
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  mutate {
    convert => ["Request time", "float"]
  }
  if [ip] != "-" {
    geoip {
      source => "ip"
      target => "geoip"
      # database => "/usr/share/GeoIP/GeoIPCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float"]
    }
  }
}
output {
  if [type] == "aspect" {
    elasticsearch {
      hosts => ["10.0.10.229:9200"]
      index => "logstash-miniprogram-uat-aspect-bz.log.%{+YYYY.MM}"
      user => "elastic"
      password => "xuewenlong@123"
    }
  }
  if [type] == "access" {
    elasticsearch {
      hosts => ["10.0.10.229:9200"]
      index => "logstash-miniprogram-uat-access-bz.log.%{+YYYY.MM}"
      user => "elastic"
      password => "xuewenlong@123"
    }
  }
}
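The json codec above assumes that each access-log line is already a JSON object with at least the ip and Request time fields used by the filter. One way to produce that (my assumption, not part of the original setup) is to give the AccessLogValve in Tomcat's conf/server.xml a JSON-shaped pattern, roughly:

<!-- conf/server.xml, inside the Host element (sketch; field names chosen to match the Logstash filter) -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="localhost_access_log" suffix=".log"
       pattern="{&quot;ip&quot;:&quot;%h&quot;,&quot;timestamp&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;Request time&quot;:%T}" />

After that, Logstash can be started on this host the same way as on the ELK server: /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/miniprogram-prod-access-bz.conf &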