Elasticsearch + Logstash + Kibana + SearchGuard Installation and Deployment (provided by 烨哥)

Environment

OS and Java version

OS version       Java version
CentOS 7.4.1708  1.8

Elasticsearch node configuration

Add the following host entries on all three servers:
IP           Elasticsearch node
10.3.245.25 node-25
10.3.245.40 node-40
10.3.245.65 node-65
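The three entries above can be appended in one step. A minimal sketch (it writes to a temp file here so it is safe to try anywhere; point it at /etc/hosts on the real servers):

```shell
# Append the cluster's node entries (demo target is a temp file; use /etc/hosts on real servers)
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
10.3.245.25 node-25
10.3.245.40 node-40
10.3.245.65 node-65
EOF
grep -c '^10\.3\.245\.' "$HOSTS_FILE"   # → 3, one line per node
```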

ELK version information

Elasticsearch version  Kibana version  Logstash version  SearchGuard version
6.6.1                  6.6.1           6.6.1             25.5

SearchGuard plugin versions

Search Guard plugin for ES  Search Guard plugin for Kibana
25.5                        18.5

Java Installation

Java must be installed on all three servers.

The Java version is 1.8.

Extract and configure environment variables

# Extract the archive
tar zxf jdk-8u181-linux-x64.tar.gz

# Move it to /usr/local
mv jdk1.8.0_181 /usr/local/

# Configure environment variables
# Append the following two lines to the end of /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH

# Reload the profile so the changes take effect
source /etc/profile

Verify

# Check that the new Java is picked up
java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

Elasticsearch Installation

Java must be installed before setting up the ELK stack; install Elasticsearch on all three servers.

The Elasticsearch version is 6.6.1.

Basic configuration

# This deployment installs from the RPM package
yum localinstall elasticsearch-6.6.1.rpm -y

# Adjust the JVM heap to 4 GB. With more RAM, 8 GB or 16 GB is fine; this host has limited memory, so 4 GB is used.
vim /etc/elasticsearch/jvm.options

-Xms4g
-Xmx4g

# Edit the Elasticsearch configuration; the example below is for 10.3.245.25

grep -Ev "^$|^#" /etc/elasticsearch/elasticsearch.yml 
cluster.name: elk
node.name: node-25
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 10.3.245.25
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["node-25", "node-65","node-40"]
discovery.zen.minimum_master_nodes: 2
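The value of discovery.zen.minimum_master_nodes follows the standard quorum formula for pre-7.x Elasticsearch, floor(N/2) + 1 for N master-eligible nodes, which prevents split-brain. For this three-node cluster:

```shell
# Quorum for N master-eligible nodes: floor(N/2) + 1
N=3
echo $(( N / 2 + 1 ))   # → 2
```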

# Restart Elasticsearch
/etc/init.d/elasticsearch restart

Search Guard Plugin Installation

The Search Guard plugin version must match the Elasticsearch version; look it up on the Search Guard website and download the matching release.

Install the plugin

# Change to the Elasticsearch bin directory
cd /usr/share/elasticsearch/bin/
# There are two ways to install:
# 1) online installation
./elasticsearch-plugin install com.floragunn:search-guard-6:6.6.1-25.5
# 2) offline installation
./elasticsearch-plugin install -b file:///root/search-guard-6-6.6.1-25.5.zip
# This deployment uses the offline method

Configure Elasticsearch

# Run the demo configuration installer
cd /usr/share/elasticsearch/plugins/search-guard-6/tools
ls
hash.bat  hash.sh  install_demo_configuration.sh  sgadmin.bat  sgadmin_demo.sh  sgadmin.sh
chmod +x install_demo_configuration.sh
./install_demo_configuration.sh # answer y to all three prompts

# After it completes, several parameters are appended to /etc/elasticsearch/elasticsearch.yml.
# To serve plain HTTP instead of HTTPS, set searchguard.ssl.http.enabled to false.
# The resulting configuration:
grep -Ev "^$|^#" /etc/elasticsearch/elasticsearch.yml 
cluster.name: elk
node.name: node-25
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 10.3.245.25
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["node-25", "node-65","node-40"]
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"
searchguard.ssl.transport.pemcert_filepath: esnode.pem
searchguard.ssl.transport.pemkey_filepath: esnode-key.pem
searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: false
searchguard.ssl.http.pemcert_filepath: esnode.pem
searchguard.ssl.http.pemkey_filepath: esnode-key.pem
searchguard.ssl.http.pemtrustedcas_filepath: root-ca.pem
searchguard.allow_unsafe_democertificates: true
searchguard.allow_default_init_sgindex: true
searchguard.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
searchguard.audit.type: internal_elasticsearch
searchguard.enable_snapshot_restore_privilege: true
searchguard.check_snapshot_restore_write_privileges: true
searchguard.restapi.roles_enabled: ["sg_all_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
xpack.security.enabled: false

# One more step on the Elasticsearch hosts:
# add the following lines to /etc/security/limits.conf

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nproc 10240
elasticsearch hard nproc 10240
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
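A quick way to sanity-check the appended entries (sketched against a temp copy so it is safe to run anywhere; inspect /etc/security/limits.conf on the real hosts):

```shell
# Verify the limits entries for the elasticsearch user (demo uses a temp copy)
LIMITS=$(mktemp)
cat >> "$LIMITS" <<'EOF'
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nproc 10240
elasticsearch hard nproc 10240
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
EOF
awk '$1=="elasticsearch" && $3=="nofile" {print $4}' "$LIMITS" | sort -u   # → 65535
```

Note that limits.conf changes take effect on the next login session of the service user, not immediately.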

# Restart the elasticsearch service
/etc/init.d/elasticsearch restart

SearchGuard Multi-user Setup

Directory location

/usr/share/elasticsearch/plugins/search-guard-6/sgconfig

SearchGuard configuration files

  • sg_internal_users.yml: stores users and password hashes; hashes can be generated with the plugin's tools/hash.sh
  • sg_roles.yml: permission definitions (what each role is allowed to do)
  • sg_roles_mapping.yml: maps roles to users or to backend user groups
  • sg_action_groups.yml: defines named groups of actions that are granted on ES indices
  • sg_config.yml: global settings

Example (creating a user)

Note: in a cluster, perform the steps below on every server.

  1. Create the user's password hash

    # cd /usr/share/elasticsearch/plugins/search-guard-6/tools
    # ls
    hash.bat  hash.sh  install_demo_configuration.sh  sgadmin.bat  sgadmin_demo.sh  sgadmin.sh
    # sh hash.sh -p oapassword
    $2y$12$d5adFlpwkVFfyL7awgSbPekVsi7v7vfrNFQWCH98/7Oh4dtCHH5Iy
    
    # Edit the sg_internal_users.yml file
    # Append the following lines at the end
    
    # password is: oapassword
    oauser:
      hash: $2y$12$d5adFlpwkVFfyL7awgSbPekVsi7v7vfrNFQWCH98/7Oh4dtCHH5Iy
      roles:
        - sg_oauser
    
    
  2. Create the role sg_oauser

    # vim sg_roles.yml
    # Append the following at the end
    
    sg_oauser:
      cluster:
        - UNLIMITED
      indices:
        'kibana*':
          '*':
            - READ
        '?nkibana*':
          '*':
            - READ
        'logstash-oa*':
          '*':
            - CRUD
        'oa-*':
          '*':
            - CRUD
    
  3. Assign the role to the user

    # vim sg_roles_mapping.yml
    # Append the following at the end
    
    sg_oauser:
      readonly: true
      backendroles:
        - sg_oauser
    
  4. Upload the updated configuration

    # cd /usr/share/elasticsearch/plugins/search-guard-6/tools
    
    # ls
    hash.bat  hash.sh  install_demo_configuration.sh  sgadmin.bat  sgadmin_demo.sh  sgadmin.sh
    
    # ./sgadmin.sh -cd ../sgconfig/ -icl -nhnv -cacert /etc/elasticsearch/root-ca.pem -cert /etc/elasticsearch/kirk.pem -key /etc/elasticsearch/kirk-key.pem -h <node IP>
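Once sgadmin.sh succeeds, the new account can be checked over the REST API. A sketch (the node IP is from this deployment; _searchguard/authinfo is the Search Guard endpoint that reports the authenticated user's roles; the curl call is left commented so the sketch is safe to run without a live cluster):

```shell
# Compose the verification request; run the curl command on a host that can reach the cluster
ES_NODE=10.3.245.25    # any cluster node
CHECK="curl -s -u oauser:oapassword http://${ES_NODE}:9200/_searchguard/authinfo"
echo "$CHECK"
# eval "$CHECK"        # uncomment on a live node; the JSON response lists the user's backend roles
```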
    

Permissions

Permission definitions

Permission configuration

Permission mapping

Verifying Elasticsearch

Open http://10.3.245.25:9200 in a browser. You should be prompted for credentials; the Search Guard demo default username and password are both admin.

{
name: "node-25",
cluster_name: "elk",
cluster_uuid: "bzeCw1L9RGi1dlCqOXDC4A",
version: {
number: "6.6.1",
build_flavor: "default",
build_type: "rpm",
build_hash: "1fd8f69",
build_date: "2019-02-13T17:10:04.160291Z",
build_snapshot: false,
lucene_version: "7.6.0",
minimum_wire_compatibility_version: "5.6.0",
minimum_index_compatibility_version: "5.0.0"
},
tagline: "You Know, for Search"
}

Troubleshooting

If Elasticsearch fails to restart, check the log at /var/log/elasticsearch/elk.log (elk is the cluster name).

  • Check the configuration file for mistyped or missing parameters
  • Check whether the log path failed to be created because of permission problems
  • Check whether the Java environment is not being detected; set JAVA_HOME in /etc/sysconfig/elasticsearch

Kibana Installation

Kibana also depends on Java.

The Kibana version is 6.6.1.

Install

yum localinstall kibana-6.6.1-x86_64.rpm -y

Install the Search Guard plugin

# Change to the Kibana bin directory
cd /usr/share/kibana/bin/
ls
kibana  kibana-keystore  kibana-plugin
# Install the plugin
./kibana-plugin install  file:///root/search-guard-kibana-plugin-6.6.1-18.5.zip

Configuration

# Kibana configuration file
grep -Ev "^$|^#" /etc/kibana/kibana.yml 
server.port: 5601
server.host: "10.3.245.25"
server.name: "kibana-server"
elasticsearch.hosts: ["http://10.3.245.25:9200","http://10.3.245.40:9200","http://10.3.245.65:9200"]
kibana.index: ".kibana"
elasticsearch.username: "admin"
elasticsearch.password: "admin"
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.requestHeadersWhitelist: [ "authorization","sgtenant" ]
elasticsearch.shardTimeout: 30000
elasticsearch.startupTimeout: 5000
xpack.graph.enabled: false
xpack.ml.enabled: false
xpack.watcher.enabled: false
xpack.security.enabled: false

# Restart Kibana
/etc/init.d/kibana restart

Accessing Kibana

Browse to http://10.3.245.25:5601 (server.host and server.port above) and log in with username and password admin/admin.

Logstash Installation

Download and install

The Logstash version is 6.6.1.

# Install via yum
yum localinstall logstash-6.6.1.rpm -y

# After installation, Logstash is located at /usr/share/logstash

Download the Logstash pattern directory

Configuration

In /etc/logstash/jvm.options, set the heap:
-Xms4g
-Xmx4g

Use 4 GB or more, depending on available memory.

# Edit the configuration files
cd /etc/logstash

ls /etc/logstash/
conf.d  jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options

# Startup command (by default Logstash does not ship with a start/stop script)
/usr/share/logstash/bin/logstash   --path.settings /etc/logstash  -f /etc/logstash/conf.d -r -l /tmp

# --path.settings  path to the settings directory
# -f /etc/logstash/conf.d  path to the pipeline (data processing) configs
# -r  reload automatically when the configuration changes
# -l  log directory
# Logstash usage reference:

Usage:
    bin/logstash [OPTIONS]

Options:
    -n, --node.name NAME          Specify the name of this logstash instance, if no value is given
                                  it will default to the current hostname.
                                   (default: "m8-ops-elk-02.test.xesv5.com")
    -f, --path.config CONFIG_PATH Load the logstash config from a specific file
                                  or directory.  If a directory is given, all
                                  files in that directory will be concatenated
                                  in lexicographical order and then parsed as a
                                  single config file. You can also specify
                                  wildcards (globs) and any matched files will
                                  be loaded in the order described above.
    -e, --config.string CONFIG_STRING Use the given string as the configuration
                                  data. Same syntax as the config file. If no
                                  input is specified, then the following is
                                  used as the default input:
                                  "input { stdin { type => stdin } }"
                                  and if no output is specified, then the
                                  following is used as the default output:
                                  "output { stdout { codec => rubydebug } }"
                                  If you wish to use both defaults, please use
                                  the empty string for the '-e' flag.
                                   (default: nil)
    --field-reference-parser MODE Use the given MODE when parsing field
                                  references.
                                  
                                  The field reference parser is used to expand
                                  field references in your pipeline configs,
                                  and will be becoming more strict to better
                                  handle illegal and ambiguous inputs in a
                                  future release of Logstash.
                                  
                                  Available MODEs are:
                                   - `LEGACY`: parse with the legacy parser,
                                     which is known to handle ambiguous- and
                                     illegal-syntax in surprising ways;
                                     warnings will not be emitted.
                                   - `COMPAT`: warn once for each distinct
                                     ambiguous- or illegal-syntax input, but
                                     continue to expand field references with
                                     the legacy parser.
                                   - `STRICT`: parse in a strict manner; when
                                     given ambiguous- or illegal-syntax input,
                                     raises a runtime exception that should
                                     be handled by the calling plugin.
                                  
                                   The MODE can also be set with
                                   `config.field_reference.parser`
                                  
                                   (default: "COMPAT")
    --modules MODULES             Load Logstash modules.
                                  Modules can be defined using multiple instances
                                  '--modules module1 --modules module2',
                                     or comma-separated syntax
                                  '--modules=module1,module2'
                                  Cannot be used in conjunction with '-e' or '-f'
                                  Use of '--modules' will override modules declared
                                  in the 'logstash.yml' file.
    -M, --modules.variable MODULES_VARIABLE Load variables for module template.
                                  Multiple instances of '-M' or
                                  '--modules.variable' are supported.
                                  Ignored if '--modules' flag is not used.
                                  Should be in the format of
                                  '-M "MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE"'
                                  as in
                                  '-M "example.var.filter.mutate.fieldname=fieldvalue"'
    --setup                       Load index template into Elasticsearch, and saved searches, 
                                  index-pattern, visualizations, and dashboards into Kibana when
                                  running modules.
                                   (default: false)
    --cloud.id CLOUD_ID           Sets the elasticsearch and kibana host settings for
                                  module connections in Elastic Cloud.
                                  Your Elastic Cloud User interface or the Cloud support
                                  team should provide this.
                                  Add an optional label prefix '<label>:' to help you
                                  identify multiple cloud.ids.
                                  e.g. 'staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy'
    --cloud.auth CLOUD_AUTH       Sets the elasticsearch and kibana username and password
                                  for module connections in Elastic Cloud
                                  e.g. 'username:<password>'
    --pipeline.id ID              Sets the ID of the pipeline.
                                   (default: "main")
    -w, --pipeline.workers COUNT  Sets the number of pipeline workers to run.
                                   (default: 4)
    --java-execution              Use Java execution engine.
                                   (default: false)
    -b, --pipeline.batch.size SIZE Size of batches the pipeline is to work in.
                                   (default: 125)
    -u, --pipeline.batch.delay DELAY_IN_MS When creating pipeline batches, how long to wait while polling
                                  for the next event.
                                   (default: 50)
    --pipeline.unsafe_shutdown    Force logstash to exit during shutdown even
                                  if there are still inflight events in memory.
                                  By default, logstash will refuse to quit until all
                                  received events have been pushed to the outputs.
                                   (default: false)
    --path.data PATH              This should point to a writable directory. Logstash
                                  will use this directory whenever it needs to store
                                  data. Plugins will also have access to this path.
                                   (default: "/usr/share/logstash/data")
    -p, --path.plugins PATH       A path of where to find plugins. This flag
                                  can be given multiple times to include
                                  multiple paths. Plugins are expected to be
                                  in a specific directory hierarchy:
                                  'PATH/logstash/TYPE/NAME.rb' where TYPE is
                                  'inputs' 'filters', 'outputs' or 'codecs'
                                  and NAME is the name of the plugin.
                                   (default: [])
    -l, --path.logs PATH          Write logstash internal logs to the given
                                  file. Without this flag, logstash will emit
                                  logs to standard output.
                                   (default: "/usr/share/logstash/logs")
    --log.level LEVEL             Set the log level for logstash. Possible values are:
                                    - fatal
                                    - error
                                    - warn
                                    - info
                                    - debug
                                    - trace
                                   (default: "info")
    --config.debug                Print the compiled config ruby code out as a debug log (you must also have --log.level=debug enabled).
                                  WARNING: This will include any 'password' options passed to plugin configs as plaintext, and may result
                                  in plaintext passwords appearing in your logs!
                                   (default: false)
    -i, --interactive SHELL       Drop to shell instead of running as normal.
                                  Valid shells are "irb" and "pry"
    -V, --version                 Emit the version of logstash and its friends,
                                  then exit.
    -t, --config.test_and_exit    Check configuration for valid syntax and then exit.
                                   (default: false)
    -r, --config.reload.automatic Monitor configuration changes and reload
                                  whenever it is changed.
                                  NOTE: use SIGHUP to manually reload the config
                                   (default: false)
    --config.reload.interval RELOAD_INTERVAL How frequently to poll the configuration location
                                  for changes, in seconds.
                                   (default: 3000000000)
    --http.host HTTP_HOST         Web API binding host (default: "127.0.0.1")
    --http.port HTTP_PORT         Web API http port (default: 9600..9700)
    --log.format FORMAT           Specify if Logstash should write its own logs in JSON form (one
                                  event per line) or in plain text (using Ruby's Object#inspect)
                                   (default: "plain")
    --path.settings SETTINGS_DIR  Directory containing logstash.yml file. This can also be
                                  set through the LS_SETTINGS_DIR environment variable.
                                   (default: "/usr/share/logstash/config")
    --verbose                     Set the log level to info.
                                  DEPRECATED: use --log.level=info instead.
    --debug                       Set the log level to debug.
                                  DEPRECATED: use --log.level=debug instead.
    --quiet                       Set the log level to info.
                                  DEPRECATED: use --log.level=info instead.
    -h, --help                    print help

Testing Logstash output to Elasticsearch

# cat /etc/logstash/conf.d/message.conf 
input {
  http {
    port => 7474
  }
}

filter {
  grok {
    patterns_dir => ['/etc/logstash/pattern']
    pattern_definitions => {
        "DATETIME" => "%{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{TIME}"
    }
    match => {
      "message" => '%{DATETIME:datetime} %{HOSTNAME:hostname} (%{HOSTNAME:system}:|%{USERNAME:system}\[%{USERNAME}\]:) %{GREEDYDATA:message}'
    }
    overwrite => ["message"]
    remove_field => ["headers"]
  }
}

output {
    elasticsearch {
        hosts => ["10.3.245.65:9200","10.3.245.25:9200"]
        index => "test"
        user => admin
        password => admin
    }
}
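The custom DATETIME pattern expands to a syslog-style timestamp (%{MONTH} %{MONTHDAY} %{TIME}). It can be smoke-tested with a rough grep equivalent before starting Logstash (the regex below approximates the grok patterns; it is not their exact definitions):

```shell
# Rough regex approximation of %{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{TIME}
SAMPLE='Mar 16 18:20:01 node-25 sshd[1234]: Accepted password for root'
echo "$SAMPLE" | grep -E '^[A-Z][a-z]{2} +[0-9]{1,2} +[0-9]{2}:[0-9]{2}:[0-9]{2}' \
  && echo "timestamp matches"
```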

Starting Logstash

Add -t when starting Logstash to run a syntax check of the configuration only.
# Start Logstash

/usr/share/logstash/bin/logstash   --path.settings /etc/logstash  -f /etc/logstash/conf.d -r -l /tmp/
Sending Logstash logs to /tmp/ which is now configured via log4j2.properties
[2020-01-04T23:51:26,208][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-01-04T23:51:26,221][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.6.1"}
[2020-01-04T23:51:31,358][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2020-01-04T23:51:31,719][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://admin:xxxxxx@10.3.245.65:9200/, http://admin:xxxxxx@10.3.245.25:9200/]}}
[2020-01-04T23:51:31,903][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://admin:xxxxxx@10.3.245.65:9200/"}
[2020-01-04T23:51:31,943][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2020-01-04T23:51:31,946][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2020-01-04T23:51:31,951][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://admin:xxxxxx@10.3.245.25:9200/"}
[2020-01-04T23:51:31,979][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.3.245.65:9200", "//10.3.245.25:9200"]}
[2020-01-04T23:51:31,988][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2020-01-04T23:51:32,007][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-01-04T23:51:32,320][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7d814629 run>"}
[2020-01-04T23:51:32,332][INFO ][logstash.inputs.http     ] Starting http input listener {:address=>"0.0.0.0:7474", :ssl=>"false"}
[2020-01-04T23:51:32,366][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-01-04T23:51:32,564][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Sending data from Postman to Logstash
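An equivalent of the Postman request can be sent with curl. A sketch (host and port come from message.conf above; the sample line follows the grok pattern so it parses into the datetime, hostname, system, and message fields; the curl call is commented out since it needs the pipeline above to be running):

```shell
# POST a sample syslog line to the Logstash http input
MSG='Mar 16 18:20:01 node-25 sshd[1234]: Accepted password for root'
echo "$MSG"
# curl -s -XPOST 'http://10.3.245.25:7474' -H 'Content-Type: text/plain' -d "$MSG"
```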

posted @ 2022-03-16 18:20 大川哥