Deploying the ELK Log Collection System

https://blog.csdn.net/yuemancanyang/article/details/122769308
https://wiki.eryajf.net/pages/2351.html
http://doc.ruoyi.vip/ruoyi-cloud/cloud/elk.html#%E5%AE%89%E8%A3%85

Install Elasticsearch
https://blog.csdn.net/u011665991/article/details/109494752
https://blog.csdn.net/yuemancanyang/article/details/122769308
docker-compose
https://juejin.cn/post/7143974532766760990
Troubleshooting:
https://www.cnblogs.com/hellxz/p/11057234.html

Create a custom Docker network

docker network create eggcode
Use the custom network when starting containers:
docker run -it --name <container-name> --network eggcode --network-alias <network-alias> <image-name>
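To confirm the network exists and see which containers are attached to it (with their aliases), Docker's built-in inspect command can be used:

docker network inspect eggcode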

https://www.cnblogs.com/shenh/p/9714547.html
https://blog.csdn.net/gelald/article/details/126914228

mkdir -p /usr/local/elasticsearch/{config,data,logs}

chown -R 1000:1000 /usr/local/elasticsearch

cd /usr/local/elasticsearch/config
touch elasticsearch.yml
----------------------- configuration ----------------------------------
# Cluster name
cluster.name: eggcode-es
# Initial master nodes; remove this setting after the first successful startup
cluster.initial_master_nodes: ["node-1"]
# Name of this node
node.name: node-1
# Bind address
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1"]
# CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
# Stack monitoring collection
xpack.monitoring.collection.enabled: true

References:
https://juejin.cn/post/7088314722432319524
http://doc.ruoyi.vip/ruoyi-cloud/cloud/elk.html#%E5%AE%89%E8%A3%85
https://blog.csdn.net/qq_42483521/article/details/122843731
https://blog.csdn.net/u010953706/article/details/118720027
https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#initial_master_nodes
https://opster.com/analysis/elasticsearch-using-discovery-type-and-host-providers/
Configuration file details:
https://zhuanlan.zhihu.com/p/41850863
https://www.jianshu.com/p/92e31cbae07f
https://www.elastic.co/guide/en/elasticsearch/reference/7.0/discovery-settings.html
https://blog.csdn.net/w1346561235/article/details/101426889

docker run -d \
--name=es \
--restart=always \
--network eggcode \
-p 9200:9200 \
-p 9300:9300 \
-e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
-v /usr/local/elasticsearch/data:/usr/share/elasticsearch/data \
-v /usr/local/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /usr/local/elasticsearch/logs:/usr/share/elasticsearch/logs \
docker.elastic.co/elasticsearch/elasticsearch:7.17.10
[root@prop-host config]# curl 127.0.0.1:9200
{
  "name" : "8358f9dd00f8",
  "cluster_name" : "eggcode-es",
  "cluster_uuid" : "ZAgwKHhwRx6GKQBN2Rb3Lw",
  "version" : {
    "number" : "7.17.10",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "fecd68e3150eda0c307ab9a9d7557f5d5fd71349",
    "build_date" : "2023-04-23T05:33:18.138275597Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
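As a further optional sanity check, query cluster health (on a fresh single-node cluster the status is normally green, turning yellow once indices with replicas are created):

curl '127.0.0.1:9200/_cluster/health?pretty'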

Kibana
Check the ES container IP:

[root@elasticsearch home]# docker inspect --format '{{ .NetworkSettings.IPAddress }}' es
172.17.0.3
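Note: .NetworkSettings.IPAddress is only populated for the default bridge network. For a container attached to the user-defined eggcode network from this guide, the address is reported under Networks, for example:

docker inspect --format '{{ .NetworkSettings.Networks.eggcode.IPAddress }}' es

Either way the IP is only informational here, because the kibana.yml below reaches Elasticsearch through the network alias es.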

mkdir -p /usr/local/kibana/config
touch /usr/local/kibana/config/kibana.yml
Contents:

server.name: kibana
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://es:9200" ]
i18n.locale: "zh-CN"

Start:

docker run -d --restart=always \
--network eggcode \
--name kibana -p 5601:5601 \
-v /usr/local/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
kibana:7.17.10

Access Kibana
Open http://ip:5601 in a browser. Kibana is slow to start, so refresh a few times if the page does not come up right away.
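If the page still does not load, the Kibana status API can be queried from the host as a quick check; it returns JSON once Kibana has finished starting:

curl -s http://127.0.0.1:5601/api/status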

Logstash
https://blog.csdn.net/u014568072/article/details/115638260
mkdir -p /usr/local/logstash/config
touch /usr/local/logstash/config/logstash.yml
vi /usr/local/logstash/config/logstash.yml

node.name: common-logstash
http.host: "0.0.0.0"
path.config: /usr/share/logstash/config
path.logs: /usr/share/logstash/logs
# Keep the log files from growing too large
log.level: warn

https://segmentfault.com/a/1190000016591476?utm_source=sf-similar-article
Start:

docker run -d --restart=always \
--network eggcode \
-p 5044:5044 --name logstash \
-v /usr/local/logstash/pipeline:/usr/share/logstash/pipeline \
-v /usr/local/logstash/config:/usr/share/logstash/config \
-v /usr/local/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml \
docker.elastic.co/logstash/logstash:7.17.10
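After starting the container, tail its logs to confirm the pipeline loaded; look for messages along the lines of "Successfully started Logstash API endpoint" and "Pipeline started":

docker logs -f logstash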

The pipelines.yml mount can be removed if it is not used.

logstash.conf collection pipeline:

# Input: receive events from Filebeat over the beats protocol
input {
  beats {
    port => 5044
    ssl => false
  }
}

# Output: route each service to its own Elasticsearch index
output {
    if "yyzx-gateway" in [tags] {
        elasticsearch {
            action => "index"
            hosts => ["es:9200"]
            # Elasticsearch stores data per index, so each service's logs get a dedicated index
            index => "yyzx-gateway"
        }
    } else if "yyzx-upms" in [tags] {
        elasticsearch {
            action => "index"
            hosts => ["es:9200"]
            index => "yyzx-upms"
        }
    } else if "yyzx-auth" in [tags] {
        elasticsearch {
            action => "index"
            hosts => ["es:9200"]
            index => "yyzx-auth"
        }
    } else if "yyzx-task" in [tags] {
        elasticsearch {
            action => "index"
            hosts => ["es:9200"]
            index => "yyzx-task"
        }
    } else {
        elasticsearch {
            action => "index"
            hosts => ["es:9200"]
            # Fallback: build a daily index from the log-name field set by Filebeat
            index => "%{[fields][log-name]}-%{+YYYY.MM.dd}"
        }
    }
}
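Once Filebeat (configured in the sections below) starts shipping data, the indices created by this output configuration can be listed on the Elasticsearch host:

curl '127.0.0.1:9200/_cat/indices?v'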

pipelines.yml

- pipeline.id: main
  path.config: /usr/share/logstash/pipeline/logstash.conf

Verify that the configuration file is correct (run inside the container; the path after -f should point at your logstash.conf, which in this setup is mounted at /usr/share/logstash/pipeline/logstash.conf):

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/logstash.conf

Configuration OK
[2023-05-15T04:39:27,463][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
logstash@612aeb2ba236:~$ 

Install Filebeat
https://www.elastic.co/guide/en/beats/filebeat/current/running-with-systemd.html
Add the yum repository GPG key:

sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Add the yum repository:

vi /etc/yum.repos.d/elastic.repo

[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install:
https://blog.csdn.net/cy309173854/article/details/78668237
sudo yum install filebeat-7.17.10 -y
Enable and start:
sudo systemctl enable filebeat
sudo systemctl start filebeat
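Check that the service came up cleanly:

sudo systemctl status filebeat
sudo journalctl -u filebeat --no-pager -n 50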
Directory layout after installing via yum:
https://www.elastic.co/guide/en/beats/filebeat/current/running-with-systemd.html

/etc/filebeat/filebeat.yml
Configuration file:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# The log input collects log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: yyzx-service

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/yyzx/logs/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['ERROR', 'INFO']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  tags: ["yyzx-server"]


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.


# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["test.com:5044"]

# --------------running logging--------------------------
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0640
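After editing the configuration, the syntax and the connection to the configured Logstash output can be verified with Filebeat's built-in checks before restarting the service:

sudo filebeat test config
sudo filebeat test output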

Filebeat configuration for the microservices:

filebeat.inputs:
- type: log
  id: njzx-gateway
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-gateway-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-gateway
  tags: ["njzx-gateway"]
- type: log
  id: njzx-upms
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-upms-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-upms
  tags: ["njzx-upms"]
- type: log
  id: njzx-auth
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/auth.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-auth
  tags: ["njzx-auth"]
- type: log
  id: njzx-task
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-task-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-task
  tags: ["njzx-task"]
- type: log
  id: njzx-project
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-project-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-project
  tags: ["njzx-project"]
- type: log
  id: njzx-point
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-point-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-point
  tags: ["njzx-point"]
- type: log
  id: njzx-pay
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-pay-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-pay
  tags: ["njzx-pay"]
- type: log
  id: njzx-trian
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-trian-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-trian
  tags: ["njzx-trian"]
- type: log
  id: njzx-miniapp
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-miniapp-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-miniapp
  tags: ["njzx-miniapp"]
- type: log
  id: njzx-marketactivity
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-marketactivity-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-marketactivity
  tags: ["njzx-marketactivity"]
- type: log
  id: njzx-rights
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-rights-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-rights
  tags: ["njzx-rights"]
- type: log
  id: njzx-course
  enabled: true
  paths:
    - /znxx/znxx4.5/logs/phsh-course-biz.log
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  fields:
    log-name: njzx-course
  tags: ["njzx-course"]
- type: log
  id: njzx-linux
  enabled: true
  paths:
    - /var/log/messages
  fields:
    log-name: njzx-linux
  tags: ["njzx-linux"]

processors:
- drop_fields:
    fields: ["log","host","input","agent","ecs"]

name: 39.98.215.147

output.logstash:
  hosts:
    - 'test.com:5044'
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0640
# ============================= X-Pack Monitoring ==============================
xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: ["http://test.com:9200"]
  

Filebeat must have read permission on the log files it collects.
For example, for the Linux system log:
chmod -R 644 /var/log/messages
Default configuration file:
https://github.com/elastic/beats/blob/main/filebeat/filebeat.yml
Built-in input types:
https://juejin.cn/post/7003201862086164511

Creating a log view in Kibana:
https://blog.csdn.net/qq_34807429/article/details/107015741

https://blog.csdn.net/yhj_911/article/details/119416804

Logstash configuration parameters:
https://blog.csdn.net/qq330983778/article/details/106343363

Kibana documentation:
https://www.elastic.co/guide/en/kibana/7.17/i18n-settings-kb.html
Setting the UI language to Chinese:
https://www.elastic.co/guide/en/kibana/7.17/i18n-settings-kb.html
Viewing ES indices:
https://blog.csdn.net/JineD/article/details/108134763

Filebeat documentation:
https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
Filebeat installation directory:
https://www.elastic.co/guide/en/beats/filebeat/current/running-with-systemd.html

Collecting logs with Filebeat:
https://blog.csdn.net/xiaoyu_BD/article/details/83826128

Note: Logstash writes its own logs to /usr/share/logstash/logs, which is configured via log4j2.properties.

Elasticsearch getting started:
https://www.cnblogs.com/woshimrf/p/es7-start.html
https://www.cnblogs.com/woshimrf/p/es-start.html

Installing ELK with Docker and setting passwords:
https://www.cnblogs.com/woshimrf/p/docker-es7.html

Multi-line logs:
https://developer.aliyun.com/article/764544

Filebeat configuration parameters explained:
https://blog.csdn.net/Smookey/article/details/83934834

Logstash configuration parameters explained:
https://blog.csdn.net/weixin_42073629/article/details/110154037

Filebeat custom fields and tags, multi-log collection:
https://blog.csdn.net/shm19990131/article/details/107323005

Detailed Filebeat tutorial:
https://www.cnblogs.com/cjsblog/p/9495024.html
X-Pack monitoring:
https://www.elastic.co/guide/en/beats/filebeat/7.9/configuration-monitor.html
https://segmentfault.com/a/1190000015107702

bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
https://blog.csdn.net/sujins5288/article/details/103300280
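This bootstrap check fails because of a kernel setting on the Docker host, not inside the container; the usual fix is to raise vm.max_map_count on the host and persist it:

sudo sysctl -w vm.max_map_count=262144
# persist across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf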
