Monitoring: scraping Locust with Prometheus and showing a real-time report in Grafana (load generator implemented with boomer)

Date: 2020/12/31

This post supplements https://www.cnblogs.com/jinziguang/p/13610209.html: the statistics produced by Locust are collected by Prometheus and then visualized in Grafana.

 

Environment:

Windows: runs the Locust master and the slave (load generator); the Grafana page is also opened in a browser here to view the statistics

Linux: runs Prometheus and Grafana

Software versions:

grafana-7.3.6-1.x86_64.rpm

prometheus-2.8.0.linux-amd64.tar.gz (https://github.com/prometheus/prometheus/releases/download/v2.8.0/prometheus-2.8.0.linux-amd64.tar.gz)

locust 1.4.1 (verified to work with boomer: github.com/myzhan/boomer); for the 1.x series the PyPI package is locust, so install it with pip3.exe install locust==1.4.1

 

Approach:

1. Start the master with prometheus_exporter.py from the boomer project, which exposes Locust's statistics for Prometheus to collect

2. The slave is still written in Go and acts as the load generator (a Python slave would also work here); see the sketch after this list

3. Install Prometheus on the Linux server and add the exporter endpoint (served by the Locust master's web UI on port 8089) as a scrape target in its configuration; install Grafana on the server and add Prometheus as a data source for visualization
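The Go slave (the test.go started in the "Running the load test" section below) is not listed in this post. The following is a minimal sketch of what such a boomer-based load generator might look like, built on boomer's Task / RecordSuccess / RecordFailure API; the target URL, task name, and weight are placeholders rather than the author's actual script:

package main

// Minimal sketch of a boomer-based slave (the test.go referenced later).
// The target URL, task name and weight are placeholders.

import (
	"net/http"
	"time"

	"github.com/myzhan/boomer"
)

func hello() {
	start := time.Now()
	resp, err := http.Get("http://example.com/") // placeholder system under test
	elapsed := time.Since(start).Nanoseconds() / int64(time.Millisecond)

	if err != nil {
		// Failed requests feed the locust_errors / locust_fail_ratio metrics.
		boomer.RecordFailure("http", "hello", elapsed, err.Error())
		return
	}
	resp.Body.Close()

	// Successful requests feed the locust_stats_* metrics (response time in ms).
	boomer.RecordSuccess("http", "hello", elapsed, resp.ContentLength)
}

func main() {
	task := &boomer.Task{
		Name:   "hello",
		Weight: 20,
		Fn:     hello,
	}
	// boomer.Run parses --master-host and --master-port from the command line
	// and connects this slave to the locust master.
	boomer.Run(task)
}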

 

Setting up the environment:

1. Modify prometheus_exporter.py; the modified version is as follows:

# coding: utf8

import six
from itertools import chain

from flask import request, Response
from locust import stats as locust_stats, runners as locust_runners
from locust import User, task, events
from prometheus_client import Metric, REGISTRY, exposition

# This locustfile adds an external web endpoint to the locust master, and makes it serve as a prometheus exporter.
# Run it as a normal locustfile, then point prometheus to it.
# locust -f prometheus_exporter.py --master

# Lots of code taken from [mbolek's locust_exporter](https://github.com/mbolek/locust_exporter), thx mbolek!


class LocustCollector(object):
    registry = REGISTRY

    def __init__(self, environment, runner):
        self.environment = environment
        self.runner = runner

    def collect(self):
        # collect metrics only when locust runner is spawning or running.
        runner = self.runner

        if runner and runner.state in (locust_runners.STATE_SPAWNING, locust_runners.STATE_RUNNING):
            stats = []
            for s in chain(locust_stats.sort_stats(runner.stats.entries), [runner.stats.total]):
                stats.append({
                    "method": s.method,
                    "name": s.name,
                    "num_requests": s.num_requests,
                    "num_failures": s.num_failures,
                    "avg_response_time": s.avg_response_time,
                    "min_response_time": s.min_response_time or 0,
                    "max_response_time": s.max_response_time,
                    "current_rps": s.current_rps,
                    "median_response_time": s.median_response_time,
                    "ninetieth_response_time": s.get_response_time_percentile(0.9),
                    # only total stats can use current_response_time, so sad.
                    #"current_response_time_percentile_95": s.get_current_response_time_percentile(0.95),
                    "avg_content_length": s.avg_content_length,
                    "current_fail_per_sec": s.current_fail_per_sec
                })

            # Note: StatsError.parse_error in e.to_dict may only work with a Python slave, take notice!
            errors = [e.to_dict() for e in six.itervalues(runner.stats.errors)]

            metric = Metric('locust_user_count', 'Swarmed users', 'gauge')
            metric.add_sample('locust_user_count', value=runner.user_count, labels={})
            yield metric
            
            metric = Metric('locust_errors', 'Locust requests errors', 'gauge')
            for err in errors:
                metric.add_sample('locust_errors', value=err['occurrences'],
                                  labels={'path': err['name'], 'method': err['method'],
                                          'error': err['error']})
            yield metric

            is_distributed = isinstance(runner, locust_runners.MasterRunner)
            if is_distributed:
                metric = Metric('locust_slave_count', 'Locust number of slaves', 'gauge')
                metric.add_sample('locust_slave_count', value=len(runner.clients.values()), labels={})
                yield metric

            metric = Metric('locust_fail_ratio', 'Locust failure ratio', 'gauge')
            metric.add_sample('locust_fail_ratio', value=runner.stats.total.fail_ratio, labels={})
            yield metric

            metric = Metric('locust_state', 'State of the locust swarm', 'gauge')
            metric.add_sample('locust_state', value=1, labels={'state': runner.state})
            yield metric

            stats_metrics = ['avg_content_length', 'avg_response_time', 'current_rps', 'current_fail_per_sec',
                             'max_response_time', 'ninetieth_response_time', 'median_response_time', 'min_response_time',
                             'num_failures', 'num_requests']

            for mtr in stats_metrics:
                mtype = 'gauge'
                if mtr in ['num_requests', 'num_failures']:
                    mtype = 'counter'
                metric = Metric('locust_stats_' + mtr, 'Locust stats ' + mtr, mtype)
                for stat in stats:
                    # Aggregated stat's method label is None, so name it as Aggregated
                    # locust has changed name Total to Aggregated since 0.12.1
                    if 'Aggregated' != stat['name']:
                        metric.add_sample('locust_stats_' + mtr, value=stat[mtr],
                                          labels={'path': stat['name'], 'method': stat['method']})
                    else:
                        metric.add_sample('locust_stats_' + mtr, value=stat[mtr],
                                          labels={'path': stat['name'], 'method': 'Aggregated'})
                yield metric


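# On locust init, attach the collector to the prometheus_client REGISTRY and add
# a /export/prometheus route to the master's web UI (a Flask app) for Prometheus to scrape.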
@events.init.add_listener
def locust_init(environment, runner, **kwargs):
    print("locust init event received")
    if environment.web_ui and runner:
        @environment.web_ui.app.route("/export/prometheus")
        def prometheus_exporter():
            registry = REGISTRY
            encoder, content_type = exposition.choose_encoder(request.headers.get('Accept'))
            if 'name[]' in request.args:
                # 'name[]' may appear multiple times, so collect all values as a list
                registry = REGISTRY.restricted_registry(request.args.getlist('name[]'))
            body = encoder(registry)
            return Response(body, content_type=content_type)
        REGISTRY.register(LocustCollector(environment, runner))


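# Locust refuses to start without at least one User class in the locustfile.
# This dummy user performs no work; the real load comes from the boomer slaves.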
class Dummy(User):
    @task(20)
    def hello(self):
        pass

2. Install Grafana on the server:

Install it with sudo yum localinstall grafana-7.3.6-1.x86_64.rpm, then restart the service with systemctl restart grafana-server.

Open <server IP>:3000 in a browser to reach the page; the default username/password is admin/admin

3. Install Prometheus on the server:

Extract prometheus-2.8.0.linux-amd64.tar.gz and start it with ./prometheus --web.enable-lifecycle --web.enable-admin-api &. After that, whenever prometheus.yml is modified, reload the configuration with curl -X POST http://<server IP>:9090/-/reload

Open <server IP>:9090 in a browser to reach the Prometheus page

4. Edit the Prometheus configuration file (prometheus.yml) and append the following job at the bottom, under scrape_configs (the scrape target is the Locust master's web port 8089, which serves /export/prometheus; in this setup the master and slave run on the same Windows machine):

  - job_name: locust
    metrics_path: '/export/prometheus'
    static_configs:
      - targets: ['<master machine IP>:8089']
        labels:
          instance: locust

5. Grafana visualization setup

a. Add a data source: choose Prometheus and set the URL to http://<server IP>:9090

b. Import a dashboard; https://grafana.com/grafana/dashboards/12081 is recommended (it requires Grafana 7.3.6; the author originally used 6.4.1 and the dashboard import failed, which was fixed by upgrading with rpm -Uvh)

After these two steps the Grafana dashboard is visible, just with no data yet.

 

Running the load test:

1. Start the master: locust --master --web-host=<local IP> -f prometheus_exporter.py

2. Check that it is listening:

Run netstat -ano | findstr 8089 in cmd; the connections between the server IP and the master machine IP on port 8089 should appear as ESTABLISHED

Opening <master machine IP>:8089/export/prometheus in a browser shows the exported Prometheus metrics (an alternative programmatic check is sketched after this list)

3. Start the load generator: go run test.go --master-host=<master machine IP> --master-port=5557

4. Open <master machine IP>:8089 in a browser, enter the total number of users and the spawn (ramp-up) rate, and start the test

5. Open <server IP>:3000 in a browser and check the dashboard; the current Locust run data should now be displayed
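For step 2 above, instead of checking in a browser (or using curl), a small throwaway Go program can fetch the exporter output; the IP below is a placeholder for the master machine:

package main

// Quick check of the exporter endpoint exposed by prometheus_exporter.py.
// Replace the placeholder IP with the locust master's address.

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.0.10:8089/export/prometheus") // placeholder master IP
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The response is plain-text Prometheus exposition format, containing
	// metrics such as locust_user_count and locust_stats_current_rps.
	fmt.Println(string(body))
}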

 
