
Stress Test Data

 

 

Test environment

Targets under test: S3 and DocumentDB

AWS server

2 CPUs

4 GB of memory

 

Test plan

8 worker processes

16 threads per worker

APIs to be load-tested

For S3 and DocumentDB separately, run:

500 requests at 100 concurrency

1000 requests at 300 concurrency

5000 requests at 1000 concurrency
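The 3 load levels x 4 endpoints above make 12 ab runs, which can be generated by a small script instead of typed by hand. A minimal sketch; the host, port, and endpoint spellings match the commands used later in this post:

```shell
# Print the full ab invocation matrix: 3 load levels x 4 endpoints.
# Endpoint spellings (reade_/wreite_) match the actual routes of the service.
gen_ab_commands() {
    local base_url="http://127.0.0.1:5001"
    for spec in "500 100" "1000 300" "5000 1000"; do
        set -- $spec   # $1 = total requests (-n), $2 = concurrency (-c)
        for endpoint in write_to_s3 reade_to_s3 wreite_to_documentdb reade_to_documentdb; do
            echo "ab -n $1 -c $2 ${base_url}/${endpoint}"
        done
    done
}

gen_ab_commands
```

Piping this through `sh` (or adding `eval` inside the loop) runs the whole matrix in sequence.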

 

S3 read/write APIs

write_to_s3
reade_to_s3

DocumentDB read/write APIs

wreite_to_documentdb
reade_to_documentdb

(The endpoint names are reproduced with their original spelling; the ab commands below target these exact routes.)

  

Project startup command

gunicorn main:app -k gevent -w 8 --threads 16 --bind :5001 --timeout 3600 --keep-alive 65 --log-level info --access-logfile - --error-logfile - --capture-output

(Note: with `-k gevent`, the `--threads` option is effectively ignored; per the Gunicorn docs it only applies to the gthread worker class. Each gevent worker handles concurrency with greenlets instead.)

 

Test code

from botocore.exceptions import ClientError
import boto3


class S3Processor(object):
    def __init__(self, bucket_name):
        self.bucket_name = bucket_name
        self.client = boto3.client(
            "s3",
            region_name="us-east-1"
        )

    def put_object(self, body, key):
        """
        :param body: b'bytes' | file | string,  (bytes or seekable file-like object) -- Object data
        :param key:  'string | folder/string' Object key for which the PUT action was initiated
        :return: {
            'ResponseMetadata': {
                'HTTPStatusCode': 200,
                '...': '...'
            }
        }
        """
        return self.client.put_object(Body=body, Key=key, Bucket=self.bucket_name)

    def get_object(self, key):
        """
        :param key:
        :return: {
            'ResponseMetadata': {
                'HTTPStatusCode' : 200,
                '...': '...'
            },
            'Body' : botocore.response.StreamingBody object
        }
        """
        try:
            return self.client.get_object(Key=key, Bucket=self.bucket_name)
        except ClientError:
            # Missing key or access error: return an empty dict rather than raising
            return {}

    def delete_object(self, key):
        """
        :param key: Key name of the object to delete.
        :return: {
            'ResponseMetadata': {
                'HTTPStatusCode': 204,
                '...': '...'
            }
        }
        """
        return self.client.delete_object(Key=key, Bucket=self.bucket_name)


if __name__ == '__main__':
    # s3_client = S3Processor(bucket_name="webtool-backend-legislation-dev")
    # res = s3_client.put_object(body='123444', key='123')
    # res1 = s3_client.put_object(body='1234', key='david/123')
    # res = s3_client.get_object(key='david/123')
    # res = s3_client.delete_object(key='123')

    s3_client = S3Processor(bucket_name="webtool-backend-legislation-qa")
    with open("s2022_85s_aos.xml", 'rb') as f:
        print(s3_client.put_object(body=f.read().decode(), key='c0de6c8cda6794e00c86764af713edd7'))
operate_s3.py

 

from pymongo import MongoClient


class DocumentDBTrigger(object):

    def __init__(self, database, collection):
        self.client = MongoClient(
            "noahark-docudb-cluster.ce4ki3t0skfm.us-east-1.docdb.amazonaws.com",
            username="root",
            password="p5Vae0kgur2BpgZK",
            retryWrites=False, maxConnecting=20
        )[database][collection]

    def insert_one(self, document_dict,  bypass_document_validation=False, session=None):
        """Insert a single document
        >> db.test.insert_one({'x': 1})
        :Parameters:
          - `document`: The document to insert. Must be a mutable mapping
            type. If the document does not have an _id field one will be
            added automatically.
          - `bypass_document_validation`: (optional) If ``True``, allows the
            write to opt-out of document level validation. Default is
            ``False``.
          - `session` (optional): a
            :class:`~pymongo.client_session.ClientSession`.

        :Returns:
          - An instance of :class:`~pymongo.results.InsertOneResult`.
        """
        return self.client.insert_one(
            document_dict,  bypass_document_validation=bypass_document_validation, session=session
        )

    def find_one(self, filter_condition=None, *args, **kwargs):
        """Get a single document from the database.

        All arguments to :meth:`find` are also valid arguments for
        :meth:`find_one`, although any `limit` argument will be
        ignored. Returns a single document, or ``None`` if no matching
        document is found.

        The :meth:`find_one` method obeys the :attr:`read_preference` of
        this :class:`Collection`.

        :Parameters:

          - `filter` (optional): a dictionary specifying
            the query to be performed OR any other type to be used as
            the value for a query for ``"_id"``.

          - `*args` (optional): any additional positional arguments
            are the same as the arguments to :meth:`find`.

          - `**kwargs` (optional): any additional keyword arguments
            are the same as the arguments to :meth:`find`.

              >> collection.find_one(max_time_ms=100)
        """
        # Pass the filter positionally: find_one(filter=..., *args) would raise a
        # TypeError whenever extra positional arguments are supplied.
        return self.client.find_one(filter_condition, *args, **kwargs)


if __name__ == '__main__':
    document = DocumentDBTrigger('pressure_measurement', 'object')
    res = document.find_one({"object_id": "c0de6c8cda6794e00c86764af713edd7"})
    print(res)
    # with open("s2022_85s_aos.xml", 'rb') as f:
    #     res = document.insert_one({"object_id": "c0de6c8cda6794e00c86764af713edd7", "text": f.read().decode()})
    #     print(res)
operate_documentdb.py

 

 

S3 stress test

 

ab load-testing parameter reference

  • -c  the number of concurrent requests issued at a time

  • -n  the total number of requests performed in the test session

Throughput (Requests per second)

A quantitative measure of the server's concurrent processing capacity, in reqs/s: the number of requests handled per unit of time at a given concurrency level. The maximum number of requests that can be handled per unit of time at some concurrency level is called the maximum throughput.

  • Throughput depends on the concurrency level
  • Different concurrency levels generally yield different throughput

Formula: total requests divided by the time taken to complete them, i.e.

Requests per second = Complete requests / Time taken for tests

Concurrent connections (the number of concurrent connections)

The number of requests the server is holding open at a given moment; put simply, each connection is one session.

 

Concurrent users (Concurrency Level)

Note the distinction between this and the number of concurrent connections: one user may hold several sessions, i.e. connections, at once. Under HTTP/1.1, IE7 supports 2 concurrent connections, IE8 supports 6, and Firefox 3 supports 4, so the number of concurrent users is the connection count divided by that per-browser figure.

Average request latency per user (Time per request)

Formula: time taken to complete all requests divided by (total requests / concurrency level), i.e.

Time per request = Time taken for tests / (Complete requests / Concurrency Level)

Average request latency on the server (Time per request: across all concurrent requests)

Formula: time taken to complete all requests divided by the total number of requests, i.e.

Time per request (across all concurrent requests) = Time taken for tests / Complete requests

Note that this is the reciprocal of the throughput.

It also equals the per-user average request latency divided by the concurrency level, i.e.

Time per request / Concurrency Level
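These relationships can be sanity-checked with a quick calculation. The numbers below (500 completed requests, 100 concurrency, 5 seconds total) are made up purely for illustration, not measured results:

```python
# Illustrative numbers only; not measured results from this test.
complete_requests = 500
concurrency_level = 100
time_taken_s = 5.0

# Requests per second = Complete requests / Time taken for tests
rps = complete_requests / time_taken_s

# Time per request (per user, mean) =
#   Time taken for tests / (Complete requests / Concurrency Level)
time_per_request = time_taken_s / (complete_requests / concurrency_level)

# Time per request across all concurrent requests =
#   Time taken for tests / Complete requests
time_across_all = time_taken_s / complete_requests

# The cross-checks stated above:
assert abs(time_across_all - 1 / rps) < 1e-12                        # reciprocal of throughput
assert abs(time_across_all - time_per_request / concurrency_level) < 1e-12

print(rps, time_per_request, time_across_all)  # -> 100.0 1.0 0.01
```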

 

 

 

Write stress test

ab -n 500 -c 100  http://127.0.0.1:5001/write_to_s3
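ab prints its summary as plain text, so the headline metrics can be pulled out with a small parser when comparing runs. A sketch, exercised against a fabricated summary in ab's output format (the numbers in the sample are illustrative, not this run's actual results):

```python
import re

# Fabricated sample in ab's summary format -- for demonstrating the parser only.
SAMPLE = """\
Concurrency Level:      100
Complete requests:      500
Failed requests:        0
Requests per second:    123.45 [#/sec] (mean)
Time per request:       810.049 [ms] (mean)
Time per request:       8.100 [ms] (mean, across all concurrent requests)
"""

def parse_ab_summary(text):
    """Extract the headline metrics from an `ab` run's stdout."""
    patterns = {
        "complete": r"Complete requests:\s+(\d+)",
        "failed": r"Failed requests:\s+(\d+)",
        "rps": r"Requests per second:\s+([\d.]+)",
        # Anchor on the line end so we grab the per-user mean,
        # not the "across all concurrent requests" line.
        "ms_per_request_mean": r"Time per request:\s+([\d.]+) \[ms\] \(mean\)$",
    }
    out = {}
    for name, pat in patterns.items():
        m = re.search(pat, text, re.MULTILINE)
        out[name] = float(m.group(1)) if m else None
    return out

metrics = parse_ab_summary(SAMPLE)
print(metrics)
```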

 

[2022-03-09 08:29:46,483] ERROR in app: Exception on /write_to_s3 [GET]
Traceback (most recent call last):
  File "/mnt/package/repo/lib64/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/mnt/package/repo/lib64/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/mnt/package/repo/lib64/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/mnt/package/repo/lib64/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/mnt/package/repo/lib64/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/mnt/package/repo/lib64/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/root/project/pressure_measurement/main.py", line 12, in write_to_s3
    res = s3_client.put_object(body=f.read().decode(), key='c0de6c8cda6794e00c86764af713edd7')
  File "/root/project/pressure_measurement/operate_s3.py", line 26, in put_object
    return self.client.put_object(Body=body, Key=key, Bucket=self.bucket_name)
  File "/mnt/package/repo/lib64/python3.7/site-packages/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/mnt/package/repo/lib64/python3.7/site-packages/botocore/client.py", line 705, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (SlowDown) when calling the PutObject operation (reached max retries: 4): Please reduce your request rate.
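The SlowDown above is S3 throttling the writers (every request PUTs the same key, so the whole load lands on one key/partition), and botocore already retried 4 times before surfacing it. A common client-side mitigation is retrying with exponential backoff and jitter. A generic sketch in plain Python; the decorator and its parameters are illustrative, not part of boto3 (boto3 itself exposes similar behaviour via `botocore.config.Config(retries={...})`):

```python
import random
import time

def retry_with_backoff(max_attempts=5, base_delay=0.1, max_delay=5.0,
                       retryable=(Exception,)):
    """Retry a callable on retryable errors, sleeping base_delay * 2**attempt
    with full jitter between attempts (capped at max_delay)."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except retryable:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: surface the error
                    delay = min(max_delay, base_delay * (2 ** attempt))
                    time.sleep(random.uniform(0, delay))  # full jitter
        return wrapper
    return decorator

# Demo with a stand-in for a throttled call: fails twice, then succeeds.
calls = {"n": 0}

@retry_with_backoff(max_attempts=5, base_delay=0.01)
def flaky_put():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("SlowDown")  # stand-in for the throttling error
    return "ok"

print(flaky_put(), "after", calls["n"], "attempts")  # -> ok after 3 attempts
```

Under sustained 1000-way write concurrency to a single key this only smooths the spikes; spreading writes across distinct key prefixes is the usual structural fix.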

 

Read stress test

ab -n 500 -c 100  http://127.0.0.1:5001/reade_to_s3

ab -n 1000 -c 300  http://127.0.0.1:5001/reade_to_s3

ab -n 5000 -c 1000  http://127.0.0.1:5001/reade_to_s3

 

 

DocumentDB write stress test

ab -n 500 -c 100  http://127.0.0.1:5001/wreite_to_documentdb

ab -n 1000 -c 300  http://127.0.0.1:5001/wreite_to_documentdb

ab -n 5000 -c 1000  http://127.0.0.1:5001/wreite_to_documentdb

 

 

DocumentDB read stress test

ab -n 500 -c 100  http://127.0.0.1:5001/reade_to_documentdb

ab -n 1000 -c 300  http://127.0.0.1:5001/reade_to_documentdb

ab -n 5000 -c 1000  http://127.0.0.1:5001/reade_to_documentdb

 


posted @   Crazymagic  Views (332)  Comments (0)
