ES slow query log collection - a grok parsing example

I'm fairly familiar with the ELK stack.

I'm also relatively familiar with Logstash, with solid hands-on experience using its plugins and some experience developing plugins.

But Logstash runs on the JVM, which makes it fairly heavyweight.

On top of that, the data DSL operations Logstash performs add a non-trivial load of their own.

For ordinary services this doesn't matter,

but on lightweight client hosts, Logstash is too heavy.

To collect the slow query logs from every node of an ES cluster, there are several options:

1 syslog: each ES node sends its logs over the network directly to a central log server

2 A network file system such as NFS/CIFS/Gluster: the ES nodes mount it and write logs, while a separate server mounts it and reads them

3 Deploy Logstash directly on each ES node to read the log files, running ETL locally or shipping first and running ETL afterwards; since Logstash competes with the ES process for resources, this option is not great

4 Deploy Filebeat directly on each ES node to read the log files, running ETL locally or shipping first and running ETL afterwards; like Logstash it competes with the ES process for resources, so the trade-off is the same, but its overall resource footprint is much lower than Logstash's

Option 4 is chosen here. Both Logstash and Filebeat can run lightweight ETL on the ES node itself, but to keep resource usage down, the node only ships the logs and does no local ETL.

The pipeline looks like this:

node[elasticsearch (serves ES), filebeat (ships the log files)] -> node[logstash (receives the logs, parses and runs the ETL, then forwards)] -> writes to Kafka/ES etc.
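A minimal filebeat.yml sketch for the ES-node side of this topology (the log path and the Logstash host are assumptions; adjust them to your deployment):

filebeat.inputs:
- type: log
  paths:
    # default file name pattern of the search slowlog under the ES log directory
    - /var/log/elasticsearch/*_index_search_slowlog.log
output.logstash:
  # the central Logstash node that performs the ETL; host and port are placeholders
  hosts: ["logstash-host:5044"]

No processors are configured on purpose: the node only ships raw lines, and all parsing happens downstream.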

Documentation

Slow Log | Elasticsearch Guide [7.13] | Elastic
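Per that guide, the search slow log is enabled per index by setting thresholds; a minimal sketch (index name and threshold values are placeholders chosen for illustration):

PUT /my-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.level": "info"
}

Entries then appear in files like <cluster_name>_index_search_slowlog.log on each data node, which is what the Filebeat glob above picks up.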

Sample log

[2030-08-30T11:59:37,786][WARN ][i.s.s.query ] [node-0] [index6][0] took[78.4micros], took_millis[0], total_hits[0 hits], stats[], search_type[QUERY_THEN_FETCH], total_shards[1], source[{"query":{"match_all":{"boost":1.0}}}], id[MY_USER_ID],

The logs are parsed with grok.

Below, a real log line in the format of the official sample is used for grok debugging.

Raw log

[2021-06-01T02:40:07,070][INFO ][index.search.slowlog.query] [datanode-10.11.110.22] [ik_sl_v2_201907_news][0] took[32.4s], took_millis[32443], total_hits[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[6626], source[{"size":2000,"query":{"bool":{"filter":[{"bool":{"should":[{"range":{"date_udate":{"from":"2021-05-30T23:00:00","to":"2021-05-31T23:59:59","include_lower":true,"include_upper":true,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"sort":[{"_doc":{"order":"asc"}}],"slice":{"field":"_id","id":21,"max":30}}], id[], 

Grok pattern

\[%{DATA:datetime}\]\[INFO \]\[index.search.slowlog.query\] \[%{USERNAME:node}\] \[%{USERNAME:index}\]\[%{NUMBER:shard}\] took\[%{DATA:took}\], took_millis\[%{NUMBER:took_millis}\], total_hits\[%{NUMBER:hits}\], types\[%{DATA:types}\], stats\[%{DATA:stats}\], search_type\[QUERY_THEN_FETCH\], total_shards\[%{NUMBER:total_shards}\], source\[%{DATA:source}\], id\[%{DATA:id}\], 

Grok parse result

{
  "datetime": [
    [
      "2021-06-01T02:40:07,070"
    ]
  ],
  "node": [
    [
      "datanode-10.11.110.22"
    ]
  ],
  "index": [
    [
      "ik_sl_v2_201907_news"
    ]
  ],
  "shard": [
    [
      "0"
    ]
  ],
  "BASE10NUM": [
    [
      "0",
      "32443",
      "0",
      "6626"
    ]
  ],
  "took": [
    [
      "32.4s"
    ]
  ],
  "took_millis": [
    [
      "32443"
    ]
  ],
  "hits": [
    [
      "0"
    ]
  ],
  "types": [
    [
      ""
    ]
  ],
  "stats": [
    [
      ""
    ]
  ],
  "total_shards": [
    [
      "6626"
    ]
  ],
  "source": [
    [
      "{"size":2000,"query":{"bool":{"filter":[{"bool":{"should":[{"range":{"date_udate":{"from":"2021-05-30T23:00:00","to":"2021-05-31T23:59:59","include_lower":true,"include_upper":true,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"sort":[{"_doc":{"order":"asc"}}],"slice":{"field":"_id","id":21,"max":30}}"
    ]
  ],
  "id": [
    [
      ""
    ]
  ]
}
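Wired into a Logstash pipeline, a minimal sketch (the Beats port, output address, and index name are assumptions; the grok pattern is the one above):

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "\[%{DATA:datetime}\]\[INFO \]\[index.search.slowlog.query\] \[%{USERNAME:node}\] \[%{USERNAME:index}\]\[%{NUMBER:shard}\] took\[%{DATA:took}\], took_millis\[%{NUMBER:took_millis}\], total_hits\[%{NUMBER:hits}\], types\[%{DATA:types}\], stats\[%{DATA:stats}\], search_type\[QUERY_THEN_FETCH\], total_shards\[%{NUMBER:total_shards}\], source\[%{DATA:source}\], id\[%{DATA:id}\], " }
  }
  # the captured query DSL is itself JSON; parse it into a structured field
  json {
    source => "source"
    target => "query_source"
  }
}

output {
  elasticsearch {
    hosts => ["http://log-es:9200"]
    index => "es-slowlog-%{+YYYY.MM.dd}"
  }
}

Note that the pattern hardcodes the INFO level and the QUERY_THEN_FETCH search type; replacing them with, e.g., %{LOGLEVEL:level}%{SPACE} and %{WORD:search_type} would cover the other levels and search types as well.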

Validate with grokdebug: grokdebug.herokuapp.com

Grok patterns supported by Logstash: logstash-patterns-core/grok-patterns at master · logstash-plugins/logstash-patterns-core (github.com)

END
