A Locust-Based Full-Link Load Testing System
I planned to build a load testing system in mid-2021; it was up and running by about September and has been stable ever since. Here is the design approach and the build process.
Why Locust? Because Locust needs nothing more than a command line to run a load test, and distributed (cluster) testing is just as simple: install Locust on each load-generator machine and push the test scripts to it.
Here is a rough sketch of the overall idea:
The term "full-link" carries several meanings here:
1. Multiple APIs and multiple scenarios, not a single API or URL.
2. Real user access paths and frequencies: users visit pages in a particular order, and different APIs are called at different rates. For example, take a list API (get_list) and a content API (get_content). Almost every page a user opens calls get_list, but many users never click through to the detail page, so get_list is called far more often than get_content.
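Locust expresses exactly this difference in call frequency through task weights. The effect of a weighted scheduler can be sketched in plain Python (the task names and the 4:1 weights below are illustrative, not measured values):

```python
import random
from collections import Counter

# Hypothetical weights: get_list is hit on every page view,
# get_content only when a user opens a detail page.
TASK_WEIGHTS = {"get_list": 4, "get_content": 1}

def simulate_task_picks(n, seed=42):
    """Pick n tasks the way a weighted scheduler (like Locust's
    @task(weight) mechanism) would, and count the picks."""
    rng = random.Random(seed)
    names = list(TASK_WEIGHTS)
    weights = [TASK_WEIGHTS[t] for t in names]
    return Counter(rng.choices(names, weights=weights, k=n))

picks = simulate_task_picks(10_000)
# get_list is chosen roughly 4x as often as get_content
print(picks["get_list"] / picks["get_content"])
```

In a real Locust script the same ratio is declared with `@task(4)` and `@task(1)` decorators, as in the generated script shown later.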
How do we capture real user access paths?
1. Analyze user behavior from the access logs, then write the load test scenarios by hand.
2. Simulate user journeys and export the recorded traffic:
A. Export the browser session directly to a .har file.
B. For apps, capture the traffic with a packet-capture tool and export it to a .har file.
You might ask: will APIs generated from a .har file keep working on later runs, or will checks such as token validation fail? The fix is simple: agree with the developers to accept a special request parameter, or to whitelist specific devices or identifiers; after that, every run goes through unobstructed.
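On the server side, the whitelist idea can be sketched like this (the parameter name `press_test` and the device ID are made up; the real flag is whatever you agree on with the developers):

```python
def should_skip_token_check(params, allowed_devices=("PRESS-DEVICE-01",)):
    """Let load-test traffic bypass token validation when it carries
    an agreed flag or comes from a whitelisted device."""
    if params.get("press_test") == "1":
        return True
    return params.get("device_id") in allowed_devices

print(should_skip_token_check({"press_test": "1"}))  # → True
```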
With the source of the test scripts settled, the second step is parsing the .har file. There are libraries that parse HAR, but none fit my needs, so I wrote my own. The project structure below is for reference only:
Parsing the HAR file:
```python
# -*- coding = utf-8 -*-
# ------------------------------
# @time: 2021/3/22 14:53
# @Author: drew_gg
# @File: disassemble_har.py
# @Software: cover_app_platform
# ------------------------------

import json
from app.locust.anasiysis_har import judgment_exist as jud
from app.locust.anasiysis_har import deal_headers as dh
from app.locust.anasiysis_har import deal_request_data as dr
from app.config.har_to_api import api_filter as af


key_words = af.key_words


def disassemble_har(har_file, api_only=0):
    """
    Parse and decompose a .har file.
    :param har_file: the .har file
    :param api_only: 1: deduplicate; anything else: keep duplicates
    :return:
    """
    req_l = []
    rdl = []
    rdl_set = []
    host = ''
    # filter out non-API requests by URL keyword
    with open(har_file, "r", encoding='utf-8') as f:
        har = json.loads(f.read())
    for i in har['log']['entries']:
        if jud.judgment_exist(i['request']['url'], key_words) is False:
            req_l.append(i)
    for index, i in enumerate(req_l):
        rd = {}
        # parse the host
        host = i['request']['url'].split('//')[0] + '//' + i['request']['url'].split('//')[1].split('/')[0]
        # parse the sub-URL (path + query)
        son_url = i['request']['url'].split(host)[1]
        deal_url = son_url.split('?')[0]
        if deal_url == '/':
            if len(son_url.split('?')) > 1:
                deal_url = son_url.split('?')[1]
            else:
                deal_url = '/'
        deal_url = deal_url.replace('/', '_').replace('-', '_').replace('.', '_').strip('_')
        if api_only == 1:
            method_name = 'api_' + deal_url.lower()
        else:
            method_name = 'api_' + deal_url.lower() + '_' + str(index)
        # parse and normalize the headers
        headers = dh.deal_headers(i['request']['headers'])
        method = i['request']['method']
        # parse and normalize the request parameters
        request_data = "''"
        if method.upper() == "POST":
            request_data = dr.deal_request_data(method, i['request']['postData'])
        if method.upper() == "GET":
            request_data = '\'' + i['request']['url'].split(son_url)[1] + '\''
        host = '"' + host + '"'
        son_url = '"' + son_url + '"'
        rd['host'] = host
        rd['url'] = son_url
        rd['headers'] = headers
        rd['method'] = method
        rd['method_name'] = method_name
        rd['request_data'] = request_data
        if api_only == 1:
            # deduplicate by URL and count occurrences
            matched = False
            for x in rdl_set:
                if son_url == x['url']:
                    x['count'] += 1
                    matched = True
                    break
            if not matched:
                rd['count'] = 1
                rdl_set.append(rd)
        else:
            rd['count'] = 1
            rdl.append(rd)
    if api_only != 1:
        rdl_set = rdl
    return rdl_set, host


if __name__ == '__main__':
    har_path = r'D:\thecover_project\cover_app_platform\app\file_upload\首页普通\20210803-113719\syptxq.har'
    disassemble_har(har_path)
```
The parser walks the har file, processes the headers, extracts the required request parameters, and then analyzes each request. With deduplication on, identical requests are counted and the counts later become the task weights for the load test; with deduplication off, the method names must instead be made unique when the test script is generated.
Once the har file is parsed, the next step is to generate a debug script and a load test script:
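The dedup-and-count idea can be sketched independently of the parser (the field names here are illustrative, not the exact ones `disassemble_har` uses):

```python
from collections import Counter

def dedupe_with_counts(requests):
    """Collapse entries with identical URLs and record how many times
    each URL occurred; the count later becomes the @task weight."""
    counts = Counter(r["url"] for r in requests)
    seen = set()
    result = []
    for r in requests:
        if r["url"] not in seen:
            seen.add(r["url"])
            entry = dict(r)
            entry["count"] = counts[r["url"]]
            result.append(entry)
    return result

reqs = [{"url": "/getList"}, {"url": "/getContent"}, {"url": "/getList"}]
print(dedupe_with_counts(reqs))
# "/getList" is kept once, with count 2
```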
My approach is to generate the .py files directly from templates prepared in advance, e.g.:
Generating the debug script is straightforward and needs only one template. Generating the Locust load test script is slightly more involved: I split it into several templates and then merge them into one.
The generated scripts are kept in a fixed directory layout:
Generated script directory structure:
Example of a generated load test script:
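The template approach can be sketched with the standard library's `string.Template` (the template body and field names below are simplified placeholders, not my actual templates):

```python
from string import Template

# one per-task template; a full script is assembled by concatenating
# a header template, one rendered task per API, and a footer template
TASK_TEMPLATE = Template('''
    @task($weight)
    @tag('$method_name')
    def $method_name(self):
        r_url = "$url"
        with self.client.post(r_url, data=$request_data,
                              catch_response=True, name=r_url) as r:
            if r.status_code != 200:
                r.failure("request error --" + str(r.status_code))
''')

def render_task(api):
    """Fill one parsed HAR entry into the per-task template."""
    return TASK_TEMPLATE.substitute(
        weight=api["count"],
        method_name=api["method_name"],
        url=api["url"],
        request_data=api["request_data"],
    )

snippet = render_task({
    "count": 4,
    "method_name": "api_getlist",
    "url": "/getList?vno=6.4.0",
    "request_data": "{'client': 'iOS'}",
})
print(snippet)
```

The rendered snippets are written out as a .py file, which is exactly the shape of the generated script shown below.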
```python
# -*- coding = utf-8 -*-
# ------------------------------
# @time: 2021-04-19 13:43:10.380837
# @Author: drew_gg
# @File: liao_bao.py
# @Software: api_locust
# ------------------------------


from locust import SequentialTaskSet, task, constant, tag, TaskSet
from locust.contrib.fasthttp import FastHttpUser


class LiaoBao20210419(TaskSet):

    @task(1)
    @tag('api_getlist')
    def api_getlist(self):
        headers = {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'tenantId': '7'}
        # assemble request parameters ## r_url: fixed parameters
        r_url = "/getList?vno=6.4.0"
        requests_data = {'account': 'E2247B94-51E2-4952-BC06-24752911C060', 'client': 'iOS', 'data': '{"operation_type":0,"news_id":0,xxxxxxxxxxxxxxxxxxx'}
        # send the request
        with self.client.post(r_url, data=requests_data, headers=headers, catch_response=True, name=r_url) as r:
            if r.content == b"":
                r.failure("No data")
            if r.status_code != 200:
                em = "request error --" + str(r.status_code)
                r.failure(em)

    @task(4)
    @tag('api_getsysnotice')
    def api_getsysnotice(self):
        headers = {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'tenantId': '7'}
        # assemble request parameters ## r_url: fixed parameters
        r_url = "/getSysnotice?vno=6.4.0"
        requests_data = {'account': 'E251179A-6309-4326-9827-73C892131605', 'client': 'iOS', 'data': '{"page_size":15,"page":1}', xxxxxxxxxxxxxxxxxxxxxxxx}
        # send the request
        with self.client.post(r_url, data=requests_data, headers=headers, catch_response=True, name=r_url) as r:
            if r.content == b"":
                r.failure("No data")
            if r.status_code != 200:
                em = "request error --" + str(r.status_code)
                r.failure(em)

    @task(4)
    @tag('api_user_preparecancelaccount')
    def api_user_preparecancelaccount(self):
        headers = {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'tenantId': '7'}
        # assemble request parameters ## r_url: fixed parameters
        r_url = "/user/prepareCancelAccount?vno=6.4.0"
        requests_data = {'account': '2FF3D47C-995B-4D7E-93CD-58B4F1E94B74', 'client': 'iOS', 'data': '{}', xxxxxxxxxxxxxxxxxxxxxxx}
        # send the request
        with self.client.post(r_url, data=requests_data, headers=headers, catch_response=True, name=r_url) as r:
            if r.content == b"":
                r.failure("No data")
            if r.status_code != 200:
                em = "request error --" + str(r.status_code)
                r.failure(em)


class liao_bao_locust(FastHttpUser):
    host = "https://xxxxxx.xxxxx.com"
    wait_time = constant(0)
    tasks = {LiaoBao20210419: 1}
```
Once the scripts are generated, the run commands need to be generated too:
```python
# -*- coding = utf-8 -*-
# ------------------------------
# @time: 2021/3/3 11:08
# @Author: drew_gg
# @File: locust_create_cmd.py
# @Software: cover_app_platform
# ------------------------------


def create_master_cmd(locust_pra):
    """
    Build the master command.
    :param locust_pra:
    :return:
    """
    # sample locust master command:
    """
    locust -f /work/locust/api_locust/locust_view/fm_api/locust_api/locust_fm_640.py
    --master
    --master-bind-port 9800
    --headless
    -u 600
    -r 200
    --expect-workers 16
    -t 10m
    -s 10
    --csv /work/locust/locust_report/fm/locust_get_dynamic.py0223145309
    --html /work/locust/api_locust/resource/html/new_html/locust_get_operation_parm.html
    """
    run_port = '9800'
    master_cmd = "locust -f %s --master --master-bind-port %s --headless " % (locust_pra['to_file'], run_port)
    master_pra = "-u %s -r %s --expect-workers %s -t %ss -s 10 --csv %s --html %s > %s" % \
        (locust_pra['user'], locust_pra['rate'], locust_pra['thread'], locust_pra['time'], locust_pra['csv'],
         locust_pra['html'], locust_pra['master_log'])
    master_cmd = master_cmd + master_pra
    return master_cmd


def create_slave_cmd(locust_pra):
    """
    Build the worker (slave) command.
    :return:
    """
    run_port = '9800'
    if len(locust_pra['api']) == 1 and locust_pra['api'][0] == '':
        slave_cmd = "locust -f %s --master-host %s --master-port %s --headless --worker > %s" % \
            (locust_pra['to_file'], locust_pra['master'].split('-')[0], run_port, locust_pra['slave_log'])
    else:
        tags = ''
        for i in locust_pra['api']:
            tags += i.split(".py")[0] + ' '
        slave_cmd = "locust -f %s --master-host %s --master-port %s --headless --worker -T %s > %s" % \
            (locust_pra['to_file'], locust_pra['master'].split('-')[0], run_port, tags, locust_pra['slave_log'])
    return slave_cmd
```
The files are then pushed to the servers, which also need a fixed directory layout:
Create three directories on each load-generator machine:
The master stores the generated reports and CSV files; a scheduled job pulls the reports over to the project server, so they can be viewed as soon as a test finishes.
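The scheduled pull can be sketched as building one scp command per test run (the host name and paths below are placeholders, not the real layout):

```python
import os

# placeholders for the master machine and the two report directories
MASTER_HOST = "press-master.example.com"
REMOTE_REPORT_DIR = "/work/locust/locust_report"
LOCAL_REPORT_DIR = "/data/platform/reports"

def build_pull_cmd(run_id):
    """Build the shell command a cron job would run to fetch one run's
    report files (html + csv) from the master to the project server."""
    remote = "%s:%s/%s/*" % (MASTER_HOST, REMOTE_REPORT_DIR, run_id)
    local = os.path.join(LOCAL_REPORT_DIR, run_id)
    return "mkdir -p %s && scp -r root@%s %s" % (local, remote, local)

print(build_pull_cmd("fm.py0223145309"))
```

In production the command would be executed via cron (or a small scheduler) right after each run completes, keyed by the run's timestamped directory name.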
Main pages of the platform:
1. Home page
2. Upload and parse .har file page
3. Online editing and execution of load test scripts
4. API debugging page
5. Debug results page
6. Load test configuration page
7. Load test execution and history page
8. Load test report page
9. Server management page
Those are the main features. Of course, I ran into all kinds of pitfalls while building it; you only discover them by trying. Next I plan to clean up the code and ship a few more versions, and then it will be truly done.
Anyone interested is welcome to study and discuss it together.
This post is from cnblogs (博客园), by drewgg. When reposting, please credit the original link: https://www.cnblogs.com/drewgg/p/15724714.html