ELK Log Analysis System (4): Storing Data in Elasticsearch
1. Overview
Once Logstash sends the formatted data to Elasticsearch, Elasticsearch is responsible for storing the log data and making it searchable.
Elasticsearch's search API is powerful, but it is not covered in detail here, because Kibana calls the Elasticsearch API on our behalf.
This article covers the Elasticsearch configuration used in this setup and the problems encountered along the way; how to use the search API will be written up separately later.
2. Configuration
Configuration path: docker-elk/elasticsearch/config/elasticsearch.yml
- Disable security, otherwise Kibana cannot connect: xpack.security.enabled: false
- Enable cross-origin (CORS) requests, otherwise Kibana reports that it cannot connect: http.cors.enabled: true
Also, an unprotected Elasticsearch instance is an easy target for attacks, so do not expose its ports to the public internet.
```yaml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## Use single node discovery in order to disable production mode and avoid bootstrap checks
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
# discovery.type: single-node

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
# xpack.license.self_generated.type: trial
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true

http.cors.enabled: true
http.cors.allow-origin: "*"
```
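Both Kibana-facing settings above must be present for the stack to work, so a quick programmatic double-check can save a debugging round trip. A sketch (the naive parser below only understands the flat `key: value` lines used in this file, not full YAML):

```python
# Settings that Kibana connectivity depends on, per the notes above.
REQUIRED = {
    "xpack.security.enabled": "false",
    "http.cors.enabled": "true",
}

def flat_settings(text: str) -> dict:
    """Parse flat 'key: value' lines, skipping blanks and comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        settings[key.strip()] = value.strip().strip('"')
    return settings

# Abbreviated copy of the elasticsearch.yml shown above.
config = """
cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.security.enabled: false
http.cors.enabled: true
http.cors.allow-origin: "*"
"""

parsed = flat_settings(config)
missing = {k: v for k, v in REQUIRED.items() if parsed.get(k) != v}
print(missing)  # {} when both Kibana-critical settings are present
```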
Elasticsearch stores its data under /usr/share/elasticsearch/data.
To verify the setup:
open http://192.168.1.165:9200 ; if the cluster information JSON is returned, Elasticsearch is up.
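The root endpoint returns a small JSON document with the cluster name and version. A sketch of checking that response programmatically (the payload below is illustrative, not captured from a live cluster; a real response has the same top-level fields):

```python
import json

# Illustrative body from GET http://<host>:9200/ (abbreviated).
sample_response = """
{
  "name": "es-node-1",
  "cluster_name": "docker-cluster",
  "version": {"number": "7.10.0"},
  "tagline": "You Know, for Search"
}
"""

def looks_healthy(body: str) -> bool:
    """Return True if the body looks like an Elasticsearch root response."""
    data = json.loads(body)
    return "cluster_name" in data and "number" in data.get("version", {})

print(looks_healthy(sample_response))  # True when the expected fields are present
```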
3. Troubleshooting
3.1. index has exceeded [1000000] - maximum allowed to be analyzed for highlighting
The full error message:
{"type":"illegal_argument_exception","reason":"The length of [message] field of [l60ZgW0Bv9XMTlnX27A_] doc of [syslog] index has exceeded [1000000] - maximum allowed to be analyzed for highlighting. This maximum can be set by changing the [index.highlight.max_analyzed_offset] index level setting. For large texts, indexing with offsets or term vectors is recommended!"}}
Cause: the highlighted field exceeded index.highlight.max_analyzed_offset, which defaults to 1,000,000 characters.
This is an index-level setting that cannot be placed in the configuration file; it has to be changed through the REST API.
```shell
# Raise the maximum analyzed offset for highlighting
curl -XPUT "http://192.168.1.165:9200/_settings" -H 'Content-Type: application/json' -d'
{
  "index" : {
    "highlight.max_analyzed_offset" : 100000000
  }
}'
```
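The same call can be made from Python with only the standard library. A sketch (the host/port and the 100000000 value are taken from the curl example above; a real call needs a reachable cluster, so only the request construction is shown):

```python
import json
from urllib import request

def build_settings_request(base_url: str, max_offset: int) -> request.Request:
    """Build the PUT /_settings request that raises index.highlight.max_analyzed_offset.
    Note: PUT /_settings without an index name applies to every existing index."""
    body = json.dumps({"index": {"highlight.max_analyzed_offset": max_offset}})
    return request.Request(
        url=f"{base_url}/_settings",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = build_settings_request("http://192.168.1.165:9200", 100_000_000)
print(req.get_method(), req.full_url)
# To actually send it: request.urlopen(req)
```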
3.2. circuit_breaking_exception', '[parent] Data too large, data for [<http_request>] would be [246901928/235.4mb], which is larger than the limit of [246546432/235.1mb]
The full error message:
elasticsearch.exceptions.TransportError: TransportError(429, 'circuit_breaking_exception', '[parent] Data too large, data for [<http_request>] would be [246901928/235.4mb], which is larger than the limit of [246546432/235.1mb], real usage: [246901768/235.4mb], new bytes reserved: [160/160b], usages [request=0/0b, fielddata=11733/11.4kb, in_flight_requests=160/160b, accounting=6120593/5.8mb]')
Cause:
The JVM heap is too small for the data the current query needs to load. See https://github.com/docker-library/elasticsearch/issues/98
Solutions:
- Increase the heap size
First, on the host, run: sudo sysctl -w vm.max_map_count=262144 (this satisfies Elasticsearch's mmap-count bootstrap check; it is a prerequisite for running the container rather than a heap setting)
Then pass JVM options to the Docker container to set the initial heap size to 1 GB and the maximum heap size to 3 GB.
docker-compose path: docker-elk/docker-compose.yml
```yaml
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx3g"
      ELASTIC_PASSWORD: changeme
      LOGSPOUT: ignore
    networks:
      - elk
```
- Raise the circuit-breaker limit on heap usage (indices.breaker.total.limit, 70% by default)
```shell
curl -X PUT "http://192.168.1.165:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient" : {
    "indices.breaker.total.limit" : "90%"
  }
}'
```
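The numbers in the exception illustrate how the parent breaker trips: current real heap usage plus the newly reserved bytes exceeds the configured limit. A sketch of that arithmetic, with the byte values copied from the error message above:

```python
def would_trip(real_usage: int, new_bytes: int, limit: int) -> bool:
    """Parent circuit breaker check: trip when estimated usage exceeds the limit."""
    return real_usage + new_bytes > limit

# Values from the TransportError above (in bytes).
real_usage = 246_901_768   # 235.4 MB currently on the heap
new_bytes = 160            # reserved for the incoming <http_request>
limit = 246_546_432        # 235.1 MB breaker limit

print(would_trip(real_usage, new_bytes, limit))  # True: 235.4 MB > 235.1 MB
# Raising indices.breaker.total.limit (or the heap itself, via ES_JAVA_OPTS)
# lifts the limit above the usage so the request is allowed through.
```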
4. Installing a Visualization Plugin
Start it with Docker:
docker run -d --name elasticsearch-head -p 9100:9100 mobz/elasticsearch-head:5
Elasticsearch must be configured to allow cross-origin requests (the http.cors.* settings above), otherwise the plugin reports that it cannot connect.
elasticsearch-head entry point: http://192.168.1.165:9100
The plugin looks like this:
This plugin probably does not support recent Elasticsearch versions well; it may be worth switching to a plugin that does at some point.