Elasticsearch + Kibana + Filebeat Log Collection
1. Installation Preparation
1.1 Install Docker
# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
yum install docker-ce
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
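A quick sanity check that the package installed and the daemon is running:

docker --version
systemctl is-active docker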
Switch to a China-based registry mirror

vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
systemctl restart docker.service
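To confirm the mirror is in effect, docker info lists the configured registry mirrors (the exact output layout varies by Docker version):

docker info | grep -A 1 'Registry Mirrors'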
1.2 Disable swap
swapoff -a
Edit /etc/fstab and comment out the swap entry.
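If you prefer to script it, a sed one-liner along these lines comments out any fstab line whose fields mention swap (it assumes whitespace-separated fields; double-check the file afterwards):

# Comment out swap entries in /etc/fstab (review the result before rebooting)
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# Confirm swap is off (the Swap row should show 0)
free -m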
1.3 Adjust system parameters
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p

vim /etc/security/limits.conf
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
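To verify the settings took effect (the nofile limits apply to new login sessions):

sysctl vm.max_map_count
ulimit -n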
1.4 Install docker-compose
curl -L https://get.daocloud.io/docker/compose/releases/download/1.26.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
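Confirm the binary is executable and reports the expected version:

docker-compose --version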
2. Deploy ES and Kibana with Docker
2.1 Prepare the installation directories
# es
mkdir -p /opt/es/{conf,data,logs}
chown -R 1000.1000 /opt/es
# kibana
mkdir -p /opt/kibana/{conf,data}
chown -R 1000.1000 /opt/kibana
# docker-compose
mkdir -p /opt/elk
2.2 Prepare the configuration files
/opt/es/conf/elasticsearch.yml
cluster.name: "es-cluster"
network.host: 0.0.0.0
node.name: node1
discovery.zen.minimum_master_nodes: 1
http.port: 9200
transport.tcp.port: 9300
# For a multi-node cluster, health-check the other nodes via ping
# discovery.zen.ping.unicast.hosts: []
discovery.zen.fd.ping_timeout: 120s
discovery.zen.fd.ping_retries: 3
discovery.zen.fd.ping_interval: 30s
cluster.info.update.interval: 1m
xpack.security.enabled: false
indices.fielddata.cache.size: 20%
indices.breaker.total.limit: 60%
indices.recovery.max_bytes_per_sec: 100mb
indices.memory.index_buffer_size: 20%
script.painless.regex.enabled: true
# Explicitly list the names or IPs of all master-eligible nodes;
# these are used in the first master election
cluster.initial_master_nodes: ["node1"]
# Allow cross-origin access (needed by es-head)
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
#xpack.security.enabled: false
bootstrap.system_call_filter: false
/opt/kibana/conf/kibana.yml
# Service port
server.port: 5601
# Bind address
server.host: "0.0.0.0"
# ES
elasticsearch.hosts: ["http://elasticsearch:9200"]
# Use the Chinese (zh-CN) UI locale
i18n.locale: "zh-CN"
/opt/elk/docker-compose.yml
version: "3"
services:
  elasticsearch:
    container_name: elasticsearch
    hostname: node1
    image: elasticsearch:7.1.1
    restart: always
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - /opt/es/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /opt/es/data:/usr/share/elasticsearch/data
      - /opt/es/logs:/usr/share/elasticsearch/logs
      - /etc/localtime:/etc/localtime:ro
    environment:
      - "cluster.name=es-cluster"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "TZ=Asia/Shanghai"
  es-head:
    container_name: es-head
    image: mobz/elasticsearch-head:5
    restart: always
    ports:
      - 9100:9100
    depends_on:
      - elasticsearch
  kibana:
    container_name: kibana
    hostname: kibana
    image: kibana:7.1.1
    restart: always
    ports:
      - 5601:5601
    volumes:
      - /opt/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml
      - /opt/kibana/data:/usr/share/kibana/data
      - /etc/localtime:/etc/localtime:ro
    environment:
      - "TZ=Asia/Shanghai"
      - "elasticsearch.hosts=http://elasticsearch:9200"
    depends_on:
      - elasticsearch
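Before starting the stack, the compose file can be validated and the rendered configuration inspected:

cd /opt/elk
docker-compose config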
Start:
cd /opt/elk
docker-compose up -d
Check:
# docker ps
CONTAINER ID   IMAGE                       COMMAND                  CREATED        STATUS        PORTS                                            NAMES
e9f460804b5e   elasticsearch:7.1.1         "/usr/local/bin/dock…"   22 hours ago   Up 22 hours   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch
80e7132c3589   kibana:7.1.1                "/usr/local/bin/kiba…"   23 hours ago   Up 22 hours   0.0.0.0:5601->5601/tcp                           kibana
bfebe88bea8e   mobz/elasticsearch-head:5   "/bin/sh -c 'grunt s…"   23 hours ago   Up 23 hours   0.0.0.0:9100->9100/tcp                           es-head
Access:
Open yourip:9100 to reach the es-head UI; in its connection field, change localhost to the server's IP.
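Kibana itself is served on port 5601 (per the port mappings above). A quick command-line check that Elasticsearch is up and healthy, with yourip replaced by the server's address:

curl http://yourip:9200
curl "http://yourip:9200/_cluster/health?pretty"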
3. Install Filebeat
3.1 Download Filebeat
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.1-x86_64.rpm
rpm -ivh filebeat-7.1.1-x86_64.rpm
3.2 Modify the configuration file
cd /etc/filebeat
mv filebeat.yml filebeat.yml.bak
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/logs/pb-dubbo-user/err_*.log
  fields:
    source: dubbo-user
- type: log
  enabled: true
  paths:
    - /data/logs/pb-server-admin/err_*.log
  fields:
    source: server-admin
- type: log
  enabled: true
  paths:
    - /data/logs/pb-dubbo-product/err_*.log
  fields:
    source: dubbo-product
- type: log
  enabled: true
  paths:
    - /data/logs/pb-server-api/err_*.log
  fields:
    source: server-api

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

# Index template settings
setup.template.name: "pb_log"
setup.template.pattern: "pb-*"
setup.template.overwrite: true
setup.template.enabled: true
# ILM must be disabled when using a custom ES index name
setup.ilm.enabled: false

setup.kibana:
  host: "192.168.100.163:5601"

output.elasticsearch:
  hosts: ["192.168.100.163:9200"]
  index: "pb-%{[fields.source]}-*"
  indices:
    - index: "pb-dubbo-user"
      when.equals:
        fields.source: "dubbo-user"
    - index: "pb-server-admin"
      when.equals:
        fields.source: "server-admin"
    - index: "pb-server-api"
      when.equals:
        fields.source: "server-api"
    - index: "pb-dubbo-product"
      when.equals:
        fields.source: "dubbo-product"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
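Since the rpm package puts the filebeat binary on the PATH, the configuration syntax and the Elasticsearch output can be checked before starting the service:

filebeat test config
filebeat test output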
To make it easy to inspect each application's error logs, a separate index is created for each application's log stream.
After that, just create the matching index patterns for these Elasticsearch indices in Kibana.
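One step left implicit above is actually starting the Filebeat service; a minimal sketch, assuming the rpm install from 3.1 and the ES address used in the config (192.168.100.163):

# Start Filebeat and enable it at boot
systemctl enable filebeat
systemctl start filebeat

# Once matching error logs have been written, the per-application pb-* indices should appear
curl "http://192.168.100.163:9200/_cat/indices/pb-*?v"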