ELK Stack Notes

Introduction to ELK

The importance of logs goes without saying. But faced with such large volumes of log data, scattered across different machines, how do you find what you need quickly and accurately? The traditional approach of logging in to each machine one by one is clumsy and inefficient. So the obvious idea emerged: build a centralized system that aggregates data from different sources into one place.

A complete centralized logging system needs the following key capabilities:

  • Collection - gather log data from multiple sources
  • Transport - reliably ship log data to the central system
  • Storage - store the log data
  • Analysis - support analysis through a UI
  • Alerting - provide error reporting and monitoring

Based on this idea, many products and solutions have appeared: simple tools such as Rsyslog and Syslog-ng; the commercial Splunk; and open-source options including Facebook's Scribe, Apache Chukwa, LinkedIn's Kafka, Fluentd, and ELK.
Among these, Splunk is an excellent product, but it is commercial and expensive enough to put many people off.
The arrival of ELK gave everyone another choice. Of the open-source alternatives, this article focuses on ELK.

ELK is not a single piece of software but an acronym for three products: Elasticsearch, Logstash, and Kibana. All three are open source, are usually deployed together, and have successively come under the Elastic.co umbrella, hence the name ELK Stack.

  • Elasticsearch: a distributed, RESTful search and analytics engine that is highly scalable, reliable, and easy to manage. Built on Apache Lucene, it can store, search, and analyze large volumes of data in near real time. It is often used as the underlying search engine for applications that need sophisticated search; many sites, such as GitHub and Stack Overflow, use Elasticsearch for full-text search.
  • Logstash: a data collection engine. It can dynamically gather data from a variety of sources, filter, parse, enrich, and normalize it, and then ship it to a destination of your choice.
  • Kibana: a data analysis and visualization platform. It is usually paired with Elasticsearch to search and analyze the data stored there and present it as charts and dashboards.
  • Filebeat: a newer member of the ELK Stack, a lightweight open-source log file shipper built from the Logstash-Forwarder code base as its replacement. Install Filebeat on each server whose logs you want to collect, point it at log directories or files, and it reads the data and ships it to Logstash for parsing, or directly to Elasticsearch for centralized storage and analysis.

Architecture

Logstash reads the logs and sends them to Elasticsearch; Kibana then queries the logs through Elasticsearch's RESTful API.

(Figure: ELK architecture diagram)

You can think of it as an MVC model: Logstash is the Controller layer, Elasticsearch the Model layer, and Kibana the View layer.
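To make that concrete, the snippet below builds the kind of JSON search body a client such as Kibana sends to Elasticsearch's RESTful API. This is an illustrative sketch, not a capture of Kibana's actual request; the `message` and `@timestamp` field names are assumptions that happen to match common Logstash defaults.

```python
import json

# Sketch of a search request body that could be POSTed to
# http://<es-host>:9200/<index>/_search over the REST API.
# The field names ("message", "@timestamp") are assumed, not taken from Kibana.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
    "size": 20,
}
body = json.dumps(query)
print(body)
```

Kibana assembles requests like this on your behalf; the point is that everything it displays is reachable through the same HTTP API you can call yourself.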

Elasticsearch

Installation

# Elasticsearch will not start as root, so create a dedicated user
[root@WEB-PM0121 ~]# groupadd elk # create the elk group
[root@WEB-PM0121 ~]# useradd -g elk elk # add the elk user to that group
[root@WEB-PM0121 ~]# passwd elk # set the user's password
[root@WEB-PM0121 bin]# su elk # switch to the new user
 
[elk@WEB-PM0121 ~]# java -version # check the Java version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
 
sudo yum install java-1.8.0-openjdk # install Java 8 if it is missing
 
# Download Elasticsearch
[elk@WEB-PM0121 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
--2018-05-16 14:45:50-- https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
Resolving artifacts.elastic.co... 54.235.171.120, 107.21.237.95, 107.21.253.15, ...
Connecting to artifacts.elastic.co|54.235.171.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 29056810 (28M) [binary/octet-stream]
Saving to: "elasticsearch-6.2.4.tar.gz.2"
72% [=========================================================> ] 21,151,222 1.22M/s eta 9s
 
tar xzvf elasticsearch-6.2.4.tar.gz # extract
 
# Directory layout
[elk@WEB-PM0121 ~]# cd elasticsearch-6.2.4
[elk@WEB-PM0121 elasticsearch-6.2.4]# pwd
/home/chenxu/elasticsearch-6.2.4
[elk@WEB-PM0121 elasticsearch-6.2.4]# ls
bin config data lib LICENSE.txt logs modules NOTICE.txt plugins README.textile vi
[elk@WEB-PM0121 elasticsearch-6.2.4]#
 
# Edit the config file
[elk@WEB-PM0121 elasticsearch-6.2.4]# cd config
[elk@WEB-PM0121 config]# vi elasticsearch.yml
cluster.name: cxelk # a friendly cluster name
network.host: 0.0.0.0 # otherwise it is only reachable from localhost
 
# Start
[elk@WEB-PM0121 config]# cd ../bin
[elk@WEB-PM0121 bin]# ./elasticsearch
# Runs in the foreground by default; use ./elasticsearch & or ./elasticsearch -d to run it in the background
 
# Verify access: a JSON response means the node started successfully
[root@WEB-PM0121 bin]# curl 'http://10.12.54.127:9200'
{
"name" : "SvJ09aS",
"cluster_name" : "cxelk",
"cluster_uuid" : "WbsI8yKWTsKUwhU8Os8vJQ",
"version" : {
"number" : "6.2.4",
"build_hash" : "ccec39f",
"build_date" : "2018-04-12T20:37:28.497551Z",
"build_snapshot" : false,
"lucene_version" : "7.2.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
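To script this verification step instead of eyeballing it, note that the endpoint returns plain JSON. A minimal parsing sketch, using an abbreviated copy of the response above (in practice you would read the body from the HTTP response, e.g. via urllib):

```python
import json

# Abbreviated copy of the GET / response shown above.
body = '''
{
  "name": "SvJ09aS",
  "cluster_name": "cxelk",
  "version": { "number": "6.2.4", "lucene_version": "7.2.1" },
  "tagline": "You Know, for Search"
}
'''
info = json.loads(body)
print(info["cluster_name"], info["version"]["number"])  # cxelk 6.2.4
```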

Common Problems

  • ERROR: bootstrap checks failed: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
    Cause: the user's maximum number of open files is too low.
    Fix:
    As root, edit the limits.conf file and add entries like the following:
    vi /etc/security/limits.conf
    Add these lines:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
  • max number of threads [1024] for user [es] likely too low, increase to at least [2048]
    Cause: the user's maximum number of threads is too low.
    Fix: as root, go to the limits.d directory and edit the 90-nproc.conf file:
    vi /etc/security/limits.d/90-nproc.conf
    Change `* soft nproc 1024` to `* soft nproc 2048`

  • max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
    Cause: the maximum number of virtual memory areas is too low.
    Fix: as root, edit sysctl.conf:
    vi /etc/sysctl.conf
    Add the line: vm.max_map_count=655360
    Then apply it with: sysctl -p

  • Exception in thread "main" 2017-11-10 06:29:49,106 main ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging. ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]]
    Cause: a formatting error in elasticsearch.yml.
    Fix: make sure each setting has no space before the colon and exactly one space after it, and never use tab characters, e.g.:
    bootstrap.memory_lock: false
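Before starting Elasticsearch, you can inspect the limits these bootstrap checks look at. A quick sketch using Python's `resource` module (Unix only; the values correspond to the nofile/nproc settings above):

```python
import resource

# Soft/hard limits on open file descriptors (the "nofile" entries above);
# the bootstrap check wants the soft limit to be at least 65536.
soft_files, hard_files = resource.getrlimit(resource.RLIMIT_NOFILE)
print("nofile:", soft_files, hard_files)

# Soft/hard limits on processes/threads (the "nproc" entries above).
soft_proc, hard_proc = resource.getrlimit(resource.RLIMIT_NPROC)
print("nproc:", soft_proc, hard_proc)
```

From the shell, `ulimit -n`, `ulimit -u`, and `sysctl vm.max_map_count` report the same values.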

Stopping Elasticsearch

[root@WEB-PM0121 bin]# ps -ef | grep elastic # find the process
[root@WEB-PM0121 bin]# kill -9 2782 # 2782 is the process ID; a plain kill (without -9) allows a cleaner shutdown

Elasticsearch-head

[elk@WEB-PM0121]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip # download the head plugin
[elk@WEB-PM0121]# unzip master.zip # extract
[elk@WEB-PM0121]# cd elasticsearch-head-master # enter the head directory
[elk@WEB-PM0121 elasticsearch-head]# npm install # install dependencies (requires Node.js/npm)
[elk@WEB-PM0121 elasticsearch-head]# npm run start # run
[root@WEB-PM0121 elasticsearch-head]# curl 'http://127.0.0.1:9100' # test access; the response should be HTML

(Figure: elasticsearch-head UI)

Kibana

[root@WEB-PM0121 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-linux-x86_64.tar.gz # download Kibana
[root@WEB-PM0121 ~]# tar xzvf kibana-6.2.4-linux-x86_64.tar.gz # extract
 
# Directory layout
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ cd ..
[elk@WEB-PM0121 chenxu]$ cd kibana-6.2.4-linux-x86_64
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ ll
total 1196
drwxr-xr-x 2 1000 1000 4096 Apr 13 04:57 bin
drwxrwxr-x 2 1000 1000 4096 May 14 15:18 config
drwxrwxr-x 2 1000 1000 4096 May 14 15:07 data
-rw-rw-r-- 1 1000 1000 562 Apr 13 04:57 LICENSE.txt
drwxrwxr-x 6 1000 1000 4096 Apr 13 04:57 node
drwxrwxr-x 909 1000 1000 36864 Apr 13 04:57 node_modules
-rw-rw-r-- 1 1000 1000 1134238 Apr 13 04:57 NOTICE.txt
drwxrwxr-x 3 1000 1000 4096 Apr 13 04:57 optimize
-rw-rw-r-- 1 1000 1000 721 Apr 13 04:57 package.json
drwxrwxr-x 2 1000 1000 4096 Apr 13 04:57 plugins
-rw-rw-r-- 1 1000 1000 4772 Apr 13 04:57 README.txt
drwxr-xr-x 15 1000 1000 4096 Apr 13 04:57 src
drwxrwxr-x 5 1000 1000 4096 Apr 13 04:57 ui_framework
drwxr-xr-x 2 1000 1000 4096 Apr 13 04:57 webpackShims
 
# Edit the config file
[elk@WEB-PM0121 kibana-6.2.4-linux-x86_64]$ cd config
[elk@WEB-PM0121 config]$ vi kibana.yml
 
# Change the following settings
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200" # the Elasticsearch address
kibana.index: ".kibana"
 
# Start Kibana
[elk@WEB-PM0121 config]$ cd ../bin
[elk@WEB-PM0121 bin]$ ./kibana
 
# Verify Kibana
 
[elk@WEB-PM0121 bin]$ curl '127.0.0.1:5601'
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
 
var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;

Logstash

Since the production system is .NET-based, Logstash is deployed on Windows. Download the matching archive from the Logstash download page.

Configuration File Format

Logstash needs a configuration file that defines its inputs, filters, and outputs. The format looks like this:

# input
input {
...
}
 
# filter
filter {
...
}
 
# output
output {
...
}
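The filter block is where parsing happens; this article leaves it empty, but as an illustrative fragment (not part of this setup), a grok filter could split a simple access-log line into named fields before it reaches the output. The pattern below is a made-up example:

```
filter {
  grok {
    # Parse a line like "127.0.0.1 GET /index.html 200" into
    # client, method, request, and status fields (example pattern only).
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:status}" }
  }
}
```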

Testing Input and Output

To try out input and output, create logstash_test.conf in Logstash's config folder with this minimal config:

input { stdin { } } output { stdout {} }
E:\Dev\ELK\logstash-6.2.3\bin>logstash -f ../config/logstash_test.conf # start with this config file
Sending Logstash's logs to E:/Dev/ELK/logstash-6.2.3/logs which is now configure
d via log4j2.properties
[2018-05-17T14:04:26,229][INFO ][logstash.modules.scaffold] Initializing module
{:module_name=>"fb_apache", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/fb_ap
ache/configuration"}
[2018-05-17T14:04:26,249][INFO ][logstash.modules.scaffold] Initializing module
{:module_name=>"netflow", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/netflow
/configuration"}
[2018-05-17T14:04:26,451][WARN ][logstash.config.source.multilocal] Ignoring the
'pipelines.yml' file because modules or command line options are specified
[2018-05-17T14:04:27,193][INFO ][logstash.runner ] Starting Logstash {"
logstash.version"=>"6.2.3"}
[2018-05-17T14:04:28,016][INFO ][logstash.agent ] Successfully started
Logstash API endpoint {:port=>9600}
[2018-05-17T14:04:29,038][INFO ][logstash.pipeline ] Starting pipeline {:
pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipelin
e.batch.delay"=>50}
[2018-05-17T14:04:29,164][INFO ][logstash.pipeline ] Pipeline started suc
cesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x47319180 run>"}
The stdin plugin is now waiting for input:
[2018-05-17T14:04:29,378][INFO ][logstash.agent ] Pipelines running {:
count=>1, :pipelines=>["main"]}
123 # typed test input
2018-05-17T06:05:00.467Z PC201801151216 123 # echoed output
456 # typed test input
2018-05-17T06:05:04.877Z PC201801151216 456 # echoed output

Sending to Elasticsearch

Next, read log lines from a file and send them to Elasticsearch.
Create logstash.conf in Logstash's config folder:

input {
  file { # file input
    path => "E:/WebSystemLog/*" # the test log files
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://10.12.54.127:9200"]
    index => "chenxu-%{+YYYY.MM.dd}"
  }
  stdout {} # also print to the console
}
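The `%{+YYYY.MM.dd}` suffix in the index name is expanded by Logstash from each event's `@timestamp` (in UTC), so you get one index per day. A small sketch mimicking that expansion for the `chenxu-` prefix used above:

```python
from datetime import datetime, timezone

# Mimic Logstash's %{+YYYY.MM.dd} expansion for one event timestamp.
ts = datetime(2018, 5, 17, 7, 19, 13, tzinfo=timezone.utc)
index = "chenxu-" + ts.strftime("%Y.%m.%d")
print(index)  # chenxu-2018.05.17
```

Per-day indices make retention simple: deleting old logs is just deleting old indices.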

In the Logstash root directory, create a run.bat to make starting Logstash easier:

./bin/logstash.bat -f ./config/logstash.conf
E:\Dev\ELK\logstash-6.2.3\bin>cd ..
 
E:\Dev\ELK\logstash-6.2.3>run # start
 
E:\Dev\ELK\logstash-6.2.3>./bin/logstash.bat -f ./config/logstash.conf
Sending Logstash's logs to E:/Dev/ELK/logstash-6.2.3/logs which is now configure
d via log4j2.properties
[2018-05-17T15:17:36,317][INFO ][logstash.modules.scaffold] Initializing module
{:module_name=>"fb_apache", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/fb_ap
ache/configuration"}
[2018-05-17T15:17:36,334][INFO ][logstash.modules.scaffold] Initializing module
{:module_name=>"netflow", :directory=>"E:/Dev/ELK/logstash-6.2.3/modules/netflow
/configuration"}
[2018-05-17T15:17:36,533][WARN ][logstash.config.source.multilocal] Ignoring the
'pipelines.yml' file because modules or command line options are specified
[2018-05-17T15:17:37,127][INFO ][logstash.runner ] Starting Logstash {"
logstash.version"=>"6.2.3"}
[2018-05-17T15:17:37,682][INFO ][logstash.agent ] Successfully started
Logstash API endpoint {:port=>9600}
[2018-05-17T15:17:39,774][INFO ][logstash.pipeline ] Starting pipeline {:
pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipelin
e.batch.delay"=>50}
[2018-05-17T15:17:40,170][INFO ][logstash.outputs.elasticsearch] Elasticsearch p
ool URLs updated {:changes=>{:removed=>[], :added=>[http://10.12.54.127:9200/]}}
 
[2018-05-17T15:17:40,179][INFO ][logstash.outputs.elasticsearch] Running health
check to see if an Elasticsearch connection is working {:healthcheck_url=>http:/
/10.12.54.127:9200/, :path=>"/"}
[2018-05-17T15:17:40,366][WARN ][logstash.outputs.elasticsearch] Restored connec
tion to ES instance {:url=>"http://10.12.54.127:9200/"}
[2018-05-17T15:17:40,425][INFO ][logstash.outputs.elasticsearch] ES Output versi
on determined {:es_version=>6}
[2018-05-17T15:17:40,430][WARN ][logstash.outputs.elasticsearch] Detected a 6.x
and above cluster: the `type` event field won't be used to determine the documen
t _type {:es_version=>6}
[2018-05-17T15:17:40,445][INFO ][logstash.outputs.elasticsearch] Using mapping t
emplate from {:path=>nil}
[2018-05-17T15:17:40,462][INFO ][logstash.outputs.elasticsearch] Attempting to i
nstall template {:manage_template=>{"template"=>"logstash-*", "version"=>60001,
"settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynami
c_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=
>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"ma
tch"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>
false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "pro
perties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geo
ip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=
>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_f
loat"}}}}}}}}
[2018-05-17T15:17:40,502][INFO ][logstash.outputs.elasticsearch] New Elasticsear
ch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.12.54
.127:9200"]}
[2018-05-17T15:17:41,094][INFO ][logstash.pipeline ] Pipeline started suc
cesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x31bffa29 run>"}
[2018-05-17T15:17:41,199][INFO ][logstash.agent ] Pipelines running {:
count=>1, :pipelines=>["main"]}
 
# Since the config file points at `E:/WebSystemLog/*`, edit a file in that directory and add a few test lines
# The logstash stdout output then appears in the console:
2018-05-17T07:19:13.779Z PC201801151216 SDFSDFSD
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.781Z PC201801151216 SDFSD
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF
2018-05-17T07:19:13.745Z PC201801151216 TEST123
2018-05-17T07:19:13.781Z PC201801151216 SDFSDF

Viewing the Data in Kibana

Management > Index Patterns > Create Index Pattern > Next step

(Figure: creating an index pattern in Kibana)

Select @timestamp > Create index pattern > Discover.
Our test data now shows up in Kibana.

(Figure: the log data in Kibana Discover)

Posted 2018-05-24 09:56 by 陈大欠