ELK

Log Platform: the ELK Stack


1. Life Without a Log Analysis System

Operations pain points

1. Operations staff have to keep checking all kinds of logs.
2. Logs only get examined after a failure has already happened (a timing problem).
3. With many nodes, logs are scattered, and collecting them becomes a problem in itself.
4. Runtime logs, error logs, and so on have no standard directory layout, which makes collection difficult.

Environment pain points

1. Developers cannot log in to production servers to inspect detailed logs.
2. Every system has its own logs; the log data is scattered and hard to search.
3. The log volume is large, queries are slow, and the data is not real-time enough.

Solving the pain points

1. Collection (Logstash)
2. Storage (Elasticsearch, Redis, Kafka)
3. Search + statistics + display (Kibana)
4. Alerting and data analysis (Zabbix)

2. Introducing the ELK Stack

The most common needs around logs are collection, storage, search, and display, and the open-source community has a project for each: Logstash (collection), Elasticsearch (storage + search), and Kibana (display). The combination of the three is known as the ELK Stack, so "ELK Stack" refers to Elasticsearch, Logstash, and Kibana used together as one technology stack.

 

3. ELK Stack Environment

[root@10 ~]# cat /etc/issue
CentOS release 6.7 (Final)

[root@10 ~]# uname -a
Linux 10.0.0.10 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

4. ELK Stack Deployment

1. Install Java (Elasticsearch requires at least Java 8; JDK 1.8.0_131 is recommended)

Check whether the JDK bundled with CentOS is already installed

[root@10 ~]# yum list installed |grep java
java-1.6.0-openjdk.x86_64
java-1.6.0-openjdk-devel.x86_64
java-1.7.0-openjdk.x86_64
java-1.7.0-openjdk-devel.x86_64
tzdata-java.noarch 2015e-1.el6 @anaconda-CentOS-201508042137.x86_64

If a bundled JDK is installed, remove the CentOS default Java environment

[root@node1 ~]# yum -y remove java-1.7.0-openjdk* 
[root@node1 ~]# yum -y remove java-1.6.0-openjdk*

Remove tzdata-java

[root@10 ~]# yum -y remove tzdata-java.noarch

Upload the Java RPM package; download it from

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Install the RPM package (rpm unpacks and installs it in one step)

[root@10 ~]# rpm -ivh jdk-8u131-linux-x64.rpm
Preparing... ########################################### [100%]
1:jdk1.8.0_131 ########################################### [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...

Edit the profile

[root@10 ~]# vim /etc/profile

Append the following lines to the end of the file

JAVA_HOME=/usr/java/jdk1.8.0_131
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export PATH JAVA_HOME CLASSPATH

Reload the profile

[root@10 ~]# source /etc/profile
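A quick sanity check that the variables from /etc/profile took effect; this just simulates the same exports and confirms the JDK bin directory ends up on the PATH (paths match the jdk1.8.0_131 RPM layout above):

```shell
# Simulate what /etc/profile now sets and verify the JDK bin directory
# is on the PATH; values match the jdk1.8.0_131 install above.
export JAVA_HOME=/usr/java/jdk1.8.0_131
export PATH=$JAVA_HOME/bin:$PATH
echo "$PATH" | grep -q '/usr/java/jdk1.8.0_131/bin' && echo OK
```

If this does not print OK, re-check the lines added to /etc/profile and source it again.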

Check the Java version

[root@10 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

2. Install Elasticsearch 2.4.5

Download the Elasticsearch RPM package


[root@10 ~]# wget https://mirror.tuna.tsinghua.edu.cn/ELK/yum/elasticsearch-2.x/elasticsearch-2.4.5.rpm

--2017-04-08 00:35:08-- https://mirror.tuna.tsinghua.edu.cn/ELK/yum/elasticsearch-2.x/elasticsearch-2.4.5.rpm
Resolving mirror.tuna.tsinghua.edu.cn... 101.6.6.178, 2402:f000:1:416:101:6:6:178
Connecting to mirror.tuna.tsinghua.edu.cn|101.6.6.178|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27238255 (26M) [application/x-redhat-package-manager]
Saving to: “elasticsearch-2.4.5.rpm”

100%[==================================================================================>] 27,238,255 185K/s in 2m 22s

2017-04-08 00:37:31 (187 KB/s) - “elasticsearch-2.4.5.rpm” saved [27238255/27238255]


 

[root@10 ~]# rpm -ivh elasticsearch-2.4.5.rpm
warning: elasticsearch-2.4.5.rpm: Header V4 RSA/SHA1 Signature, key ID d88e42b4: NOKEY
Preparing... ########################################### [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
1:elasticsearch ########################################### [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using chkconfig
sudo chkconfig --add elasticsearch
### You can start elasticsearch service by executing
sudo service elasticsearch start

Enter the Elasticsearch configuration directory

[root@10 ~]# cd /etc/elasticsearch/

Edit elasticsearch.yml

[root@10 elasticsearch]# vim elasticsearch.yml
cluster.name: oldboy                                 #cluster name, important
node.name: linux-node1                               #node name
path.data: /data/es-data                             #create this directory and grant it to the elasticsearch user
path.logs: /var/log/elasticsearch/                   #log directory
bootstrap.memory_lock: true                          #lock process memory to prevent swapping
network.host: 0.0.0.0                                #listen address
http.port: 9200                                      #HTTP port
discovery.zen.ping.unicast.hosts: ["other host IPs"] #unicast discovery list, used for clustering

Check whether the /data/es-data directory exists

[root@10 ~]# find /data/es-data/

If not, create it

[root@10 ~]# mkdir -p /data/es-data

Fix ownership

[root@10 ~]# chown -R elasticsearch:elasticsearch /data/es-data

Start the service

[root@10 ~]# /etc/init.d/elasticsearch start
Starting elasticsearch:                                             [OK]

Enable start on boot

[root@10 ~]# chkconfig elasticsearch on

If it fails to start, check the log (named after the cluster)

[root@10 ~]# tail -f /var/log/elasticsearch/oldboy.log 

Check that port 9200 is listening

[root@10 ~]# netstat -lntp

Access it in a browser via IP and port

http://10.0.0.10:9200

Install the head plugin

Download and install the head plugin

[root@10 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

Access URL

http://10.0.0.10:9200/_plugin/head/

 

bigdesk and kopf serve a similar purpose

Install the bigdesk plugin

Download URL

Link: http://pan.baidu.com/s/1nvzCBOH  Password: 9dag

Extract

Extract it into the /usr/share/elasticsearch/plugins directory

Access URL

http://10.0.0.10:9200/_plugin/bigdesk-master/

You can also rename the bigdesk-master directory to bigdesk to shorten the URL

Install the kopf plugin

Download and install

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

Then open http://10.0.0.10:9200/_plugin/kopf in a local browser

3. Install Logstash 2.4.1

Download and install the package

[root@10 ~]# wget https://mirror.tuna.tsinghua.edu.cn/ELK/yum/logstash-2.4/logstash-2.4.1.noarch.rpm
[root@10 ~]# rpm -ivh logstash-2.4.1.noarch.rpm

Start

[root@10 ~]# /etc/init.d/logstash start

Enable start on boot

[root@10 ~]# chkconfig logstash on

Basic standard output

[root@10 ~]# /opt/logstash/bin/logstash -e 'input  { stdin{} } output { stdout{} }'

Verbose output (rubydebug)

[root@10 ~]# /opt/logstash/bin/logstash -e 'input  { stdin{} } output { stdout{ codec => rubydebug} }'

Send output to Elasticsearch

[root@10 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.0.0.10:9200"] } }'

Send output to Elasticsearch and also print the detailed events

[root@10 ~]# /opt/logstash/bin/logstash -e 'input  { stdin{} } output { elasticsearch { hosts => ["10.0.0.10:9200"] } stdout{ codec => rubydebug} }'

Ship /var/log/messages to Elasticsearch

[root@10 ~]# vim all.conf

Add the following content

input {
      file {
           path => "/var/log/messages"
           type => "system"
           start_position => "beginning"
      }
}
output {
      elasticsearch {
             hosts => ["10.0.0.10:9200"]
             index => "system-%{+YYYY.MM.dd}"
       }
}
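The index name system-%{+YYYY.MM.dd} is a Logstash sprintf date pattern: a new index is created per day, named from each event's @timestamp, which keeps old days easy to delete. The naming can be reproduced with GNU date (using a fixed example day rather than today):

```shell
# Rolling daily indices: system-%{+YYYY.MM.dd} expands to names like this.
# GNU date syntax, with a fixed example timestamp instead of "now":
date -u -d '2017-04-08' +'system-%Y.%m.%d'   # prints system-2017.04.08
```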

Run Logstash with the config file

[root@10 ~]# /opt/logstash/bin/logstash -f all.conf 

Collect Java logs, combining each multi-line error into a single event


input {
    file {
        path => ["/var/log/messages", "/var/log/secure"]
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/elasticsearch.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => "10.0.0.10"
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => "10.0.0.10"
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}
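The multiline codec above appends every line that does not start with [ to the previous line, so an Elasticsearch log entry plus its Java stack trace becomes one event. A rough awk sketch of that grouping logic (the sample log lines are made up for illustration):

```shell
# Group continuation lines (not starting with "[") with the line before
# them, mimicking pattern => "^\[", negate => true, what => "previous".
cat > /tmp/es-sample.log <<'EOF'
[2017-04-08 00:00:01] starting node
java.lang.RuntimeException: boom
    at Foo.bar(Foo.java:42)
[2017-04-08 00:00:02] recovered
EOF
awk '/^\[/ { if (buf != "") print buf; buf = $0; next }
     { buf = buf " | " $0 }
     END { if (buf != "") print buf }' /tmp/es-sample.log
```

The four input lines come out as two events: the first timestamped line carries the whole stack trace with it.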

 

Collect Nginx logs

Configure nginx.conf

[root@10 ~]# vim /etc/nginx/nginx.conf

Insert the following into the http block

log_format json '{"@timestamp":"$time_iso8601",'
               '"@version":"1",'
               '"client":"$remote_addr",'
               '"url":"$uri",'
               '"status":"$status",'
               '"domain":"$host",'
               '"host":"$server_addr",'
               '"size":$body_bytes_sent,'
               '"responsetime":"$request_time",'
               '"referer":"$http_referer",'
               '"forwarded":"$http_x_forwarded_for",'
               '"ua":"$http_user_agent"'
               '}';
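With this log_format, every access log line should be one JSON object, which is what lets the json codec below parse it without grok. A quick way to confirm the format is valid JSON (the sample line is hypothetical, with the nginx variables filled in by hand; python3 is used here for the check):

```shell
# Hypothetical line in the json log_format above; if the format is right,
# json.tool parses it without error.
line='{"@timestamp":"2017-04-08T00:35:08+08:00","@version":"1","client":"10.0.0.1","url":"/index.html","status":"200","domain":"example.com","host":"10.0.0.10","size":612,"responsetime":"0.005","referer":"-","forwarded":"-","ua":"curl/7.29.0"}'
echo "$line" | python3 -m json.tool >/dev/null && echo "valid JSON"
```

Note that $body_bytes_sent is deliberately left unquoted in the format so that "size" is a JSON number, not a string.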

Add the following inside the server block, commenting out the original access_log line

#access_log  logs/host.access.log  main;
 access_log  /var/log/nginx/access_json.log  json;

Create a json.conf file and add the following

input {
    file {
        path => "/var/log/nginx/access_json.log"
        codec => "json"
    }

}

output {
    stdout {
        codec => "rubydebug"
    }
}

Run with the json config

[root@10 ~]# /opt/logstash/bin/logstash -f json.conf 

Merge the json.conf input into all.conf

input {
    file {
        path => ["/var/log/messages", "/var/log/secure"]
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/nginx/access_json.log"
        codec => "json"
        start_position => "beginning"
        type => "nginx-log"
    }
    file {
        path => "/var/log/elasticsearch/elasticsearch.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => "10.0.0.10"
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => "10.0.0.10"
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => "10.0.0.10"
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
}

Run all.conf

[root@10 ~]# /opt/logstash/bin/logstash -f all.conf 

When collecting Nginx logs, events may take a little while to appear; give it a moment

Collect syslog

Create syslog.conf and add the following

[root@10 ~]# vim syslog.conf

 

input {
    syslog {
        type => "system-syslog"
        host => "10.0.0.10"
        port => "514"
    }
}

output {
    stdout {
        codec => "rubydebug"
    }
}

Run syslog.conf

[root@10 ~]# /opt/logstash/bin/logstash -f syslog.conf 

Edit rsyslog.conf (line 79)

 

[root@10 ~]# vim /etc/rsyslog.conf 
Change  #*.* @@remote-host:514  to  *.* @@10.0.0.11:514  (the Logstash host; @@ forwards over TCP, a single @ uses UDP)

Restart the rsyslog service

[root@10 ~]# /etc/init.d/rsyslog restart

Check that events show up in the Logstash output

Merge the syslog input into all.conf


input {
    syslog {
        type => "system-syslog"
        host => "10.0.0.12"
        port => "514"
    }
    file {
        path => ["/var/log/messages", "/var/log/secure"]
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/nginx/access_json.log"
        codec => "json"
        start_position => "beginning"
        type => "nginx-log"
    }
    file {
        path => "/var/log/elasticsearch/elasticsearch.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => "10.0.0.12"
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => "10.0.0.12"
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => "10.0.0.12"
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => "10.0.0.12"
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}

Collect the MySQL slow log

input {
  #stdin {
  file {
    type => "mysql-slowlog"
    path => "/root/master-slow.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^# User@Host:"
      negate => true
      what => previous
    }
  }
}
filter {
  # drop sleep events
  grok {
    match => { "message" => "SELECT SLEEP" }
    add_tag => [ "sleep_drop" ]
    tag_on_failure => [] # prevent default _grokparsefailure tag on real records
  }
  if "sleep_drop" in [tags] {
    drop {}
  }
  grok {
    match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n# Time:.*$" ]
    #match => [ "message", "# User@Host:\s+%{WORD:user1}\[%{WORD:user2}\]\s+@\s+\[(?:%{IP:clientip})?\]\s+#\s+Thread_id:\s+%{NUMBER:thread_id:int}\s+Schema:\s+%{WORD:schema}\s+QC_hit:\s+%{WORD:qc_hit}\s+#\s+Query_time:\s+%{NUMBER:query_time:float}\s+Lock_time:\s+%{NUMBER:lock_time:float}\s+Rows_sent:\s+%{NUMBER:rows_sent:int}\s+Rows_examined:\s+%{NUMBER:rows_examined:int}\s+#\s+Rows_affected:\s+%{NUMBER:rows_affected:int}\s+SET\s+timestamp=%{NUMBER:timestamp};\s+(?<query>(?<action>\w+)\s+.*);"]
  }
  date {
    match => [ "timestamp", "UNIX" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.10:9200"]
    index => "mysql-slow-log-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
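The date filter above reads the numeric value captured from SET timestamp=... (seconds since the epoch) with the UNIX pattern, uses it as the event's @timestamp, and then removes the field. The conversion it performs is the same as this (the epoch value is an arbitrary example):

```shell
# A slow-log entry containing "SET timestamp=1491584108;" would get
# this UTC @timestamp (GNU date, arbitrary example epoch value):
date -u -d @1491584108 +'%Y-%m-%dT%H:%M:%SZ'   # prints 2017-04-07T16:55:08Z
```

This matters for slow logs in particular: the query ran at the logged timestamp, not at the moment Logstash read the file.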

Decoupling with Redis

Write events into Redis

input {
    stdin{}
}
output {
    redis {
        host => "10.0.0.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}

 

Read from Redis and write to Elasticsearch

input {
    redis {
        host => "10.0.0.12"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
        batch_count => 1
    }
}

output {
    elasticsearch {
        hosts => ["10.0.0.12:9200"]
        index => "redis-demo-%{+YYYY.MM.dd}"
    }
}

4. Install Kibana 4.6.3

An open-source visualization platform

Install

[root@10 ~]# wget https://mirror.tuna.tsinghua.edu.cn/ELK/yum/kibana-4.6/kibana-4.6.3-x86_64.rpm
[root@10 ~]# rpm -ivh kibana-4.6.3-x86_64.rpm

Edit the configuration file

[root@10 ~]# vim /opt/kibana/config/kibana.yml
server.port: 5601                            #port
server.host: "0.0.0.0"                       #listen address
elasticsearch.url: "http://10.0.0.10:9200"   #Elasticsearch address
kibana.index: ".kibana"

Start and enable on boot

[root@10 ~]# /etc/init.d/kibana start
[root@10 ~]# chkconfig kibana on

Access URL

http://10.0.0.10:5601

5. Taking ELK to Production

1. Classify the logs
System logs:  rsyslog  -> logstash syslog plugin
Access logs:  nginx    -> logstash codec json
Error logs:   file     -> logstash file + multiline
Runtime logs: file     -> logstash codec json
Device logs:  syslog   -> logstash syslog plugin
Debug logs:   file     -> logstash json or multiline

2. Standardize the logs
Paths: fixed
Format: JSON wherever possible

Roll out in stages: start with system logs -> error logs -> runtime logs -> access logs

 

posted @ 2017-07-21 17:29  答&案