ElastAlert Configuration and Alert Rule Usage
config.yaml configuration explained
# Directory from which rules are loaded; defaults to example_rules
rules_folder: example_rules
# How often ElastAlert queries Elasticsearch
run_every:
  minutes: 1
# Size of the query window for the time field (how far back each query looks)
buffer_time:
  minutes: 15
# Elasticsearch host
es_host: 192.168.232.191
# Elasticsearch port
es_port: 9200
# Optional: URL prefix for the Elasticsearch endpoint
#es_url_prefix: elasticsearch
# Optional: HTTP method used to send query bodies to Elasticsearch, GET by default
#es_send_get_body_as: GET
# Optional: whether to connect to Elasticsearch over SSL, true or false
#use_ssl: True
# Optional: whether to verify TLS certificates, true or false, defaults to true
#verify_certs: True
# Username and password for Elasticsearch authentication
#es_username: someusername
#es_password: somepassword
# Index in Elasticsearch where ElastAlert writes its own status and log documents
writeback_index: elastalert_status
# Time limit for retrying failed alerts
alert_time_limit:
  days: 2
Creating the ElastAlert index
After installation, the following four commands can be found under /usr/bin/:
$ ll /usr/bin/elastalert*
-rwxr-xr-x 1 root root 399 Nov 20 16:39 /usr/bin/elastalert
-rwxr-xr-x 1 root root 425 Nov 20 16:39 /usr/bin/elastalert-create-index
-rwxr-xr-x 1 root root 433 Nov 20 16:39 /usr/bin/elastalert-rule-from-kibana
-rwxr-xr-x 1 root root 419 Nov 20 16:39 /usr/bin/elastalert-test-rule
- elastalert-create-index creates the index in which ElastAlert stores its execution records; by default it is named elastalert_status. It contains four _types, each with its own @timestamp field, so you can also browse these records with Kibana.
- elastalert-rule-from-kibana reads the filtering settings from a saved Kibana dashboard and helps generate the corresponding filter configuration. Note that it only reads the filtering section, not the queries.
- elastalert-test-rule tests the rule settings in a custom configuration.
Run elastalert-create-index to create the index in Elasticsearch. This step is not strictly required, but it is strongly recommended: the index is useful for auditing and testing, and a restart will not affect alert counting or delivery.
$ elastalert-create-index
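To confirm the writeback index exists and to inspect the status documents ElastAlert writes into it, a quick check against Elasticsearch works; a minimal sketch, reusing the es_host and es_port from config.yaml above:
$ curl -s 'http://192.168.232.191:9200/_cat/indices/elastalert_status?v'
$ curl -s 'http://192.168.232.191:9200/elastalert_status/_search?pretty&size=3'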
1. Email alert configuration
# Alert when the rate of events exceeds a threshold
# (Optional)
# Elasticsearch host
es_host: 192.168.232.191
# (Optional)
# Elasticsearch port
es_port: 9200
# (Optional) Connect with SSL to Elasticsearch
#use_ssl: True
# (Optional) basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# (Required)
# Rule name, must be unique
name: Example frequency rule
# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur within timeframe time
type: frequency
# (Required)
# Index to search, wildcard supported
index: metricbeat-*
# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 5
# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  hours: 4
# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
filter:
- query_string:
    query: "system.process.cpu.total.pct: >0.1"  # pct is a 0-1 ratio, so >0.1 means above 10%; nested field names are supported
smtp_host: smtp.163.com
smtp_port: 25
smtp_auth_file: /opt/elastalert/smtp_auth.yaml
# Reply-to address for the alert emails
email_reply_to: xxx@163.com
# Address the alert emails are sent from
from_addr: xxx@163.com
# (Required)
# The alert is used when a match is found
alert:
- "email"
# (required, email specific)
# a list of email addresses to send alerts to
email:
- "yyy@qq.com"
The configuration above uses the metricbeat-* indices as the alert source: within a 4-hour timeframe, once at least 5 documents match the filter (CPU usage above 10%), the alert condition is met and an email is sent.
There is also an smtp_auth.yaml file that stores the account and password of the sending mailbox. 163 Mail uses an authorization-code mechanism, so the password field should contain the authorization code (enable it first if you have not already).
# Mailbox the alerts are sent from
user: xxx@163.com
# Not the mailbox login password, but the POP3/SMTP authorization code you configured
password: xxx
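Since this file holds credentials in plain text, it is worth tightening its permissions; the path simply follows the smtp_auth_file setting used above:
$ chmod 600 /opt/elastalert/smtp_auth.yaml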
Advanced configuration
Avoiding duplicate alerts
# Identical alerts within 5 minutes are not sent again
realert:
  minutes: 5
# Exponentially increase the realert interval when alerts keep firing:
# the interval grows 5 > 10 > 20 > 40 > 60 minutes up to the specified maximum;
# once alerts subside, it gradually falls back to the original realert value
exponential_realert:
  hours: 1
Aggregating identical alerts
# Aggregate alerts that share the same value of the name field
aggregation_key: name
# Fields shown in the aggregated alert summary: only name and message
summary_table_fields:
- name
- message
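One note, based on my reading of the ElastAlert documentation: aggregation_key groups matches inside an aggregation window, so it is normally paired with an aggregation setting; a minimal sketch:
# collect matches for 10 minutes, then send one alert per distinct name value
aggregation:
  minutes: 10
aggregation_key: name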
Alert content formatting
The alert content can be customized; internally this is implemented with Python's format(), so the {} placeholders are filled in order from the corresponding *_args lists.
alert_subject: "Error {} @{}"
alert_subject_args:
- name
- "@timestamp"
alert_text_type: alert_text_only
alert_text: |
  ### Error frequency exceeds
  > Name: {}
  > Message: {}
  > Host: {} ({})
alert_text_args:
- name
- message
- hostname
- host
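For illustration only, with hypothetical values for the name, message, hostname and host fields of a matching document, the rendered alert would look roughly like this:
Subject: Error Example frequency rule @2018-11-20T16:39:00
### Error frequency exceeds
> Name: Example frequency rule
> Message: cpu usage high
> Host: web-01 (192.168.232.200)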
Testing a rule
Before running a rule, you can test it first with the elastalert-test-rule command:
$ elastalert-test-rule ~/elastalert/example_rules/example_frequency.yaml
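elastalert-test-rule also takes a few useful options (present in the ElastAlert versions I have worked with); for example, validating the rule file against the schema only, or counting matches over the past day without rendering any alerts:
$ elastalert-test-rule --schema-only ~/elastalert/example_rules/example_frequency.yaml
$ elastalert-test-rule --count-only --days 1 ~/elastalert/example_rules/example_frequency.yaml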
Running a rule
Start the ElastAlert service to watch Elasticsearch. The --rule example_frequency.yaml option is added here so that only the example_frequency.yaml rule file runs; without it, every rule file under rules_folder is run (in the configuration above rules_folder is the default example_rules).
$ python -m elastalert.elastalert --verbose --rule example_frequency.yaml
To keep the service running in the background with daemon-like behaviour, I recommend managing it with supervisor in production.
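A minimal supervisord program section as a sketch; the install directory, log paths and config file location are assumptions to adapt to your environment:
; /etc/supervisord.d/elastalert.ini (location is an assumption)
[program:elastalert]
command=python -m elastalert.elastalert --verbose --rule example_frequency.yaml
directory=/opt/elastalert
autostart=true
autorestart=true
stdout_logfile=/var/log/elastalert/stdout.log
stderr_logfile=/var/log/elastalert/stderr.log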