Benchmarking with sysbench
1. sysbench basics
sysbench's CPU test repeatedly performs prime-number computation for a specified amount of time.
A prime number is a natural number greater than 1 that is divisible only by 1 and itself, e.g. 2, 3, 5, 7, 11, 13, 17. Programming check: for a positive integer n > 1, if none of the integers from 2 up to the square root of n divides n evenly, then n is prime.
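As an illustration of that check (sysbench's actual CPU test is a C loop inside the tool, so this is only a sketch), a trial-division version written in Lua, sysbench's scripting language, looks like this:

-- Trial-division primality check, as described above (illustrative only).
local function is_prime(n)
  if n < 2 then return false end
  for i = 2, math.floor(math.sqrt(n)) do
    if n % i == 0 then return false end
  end
  return true
end

-- Count all primes up to a limit; one sysbench CPU "event" does the
-- equivalent of this loop with limit = --cpu-max-prime.
local function count_primes(limit)
  local count = 0
  for n = 2, limit do
    if is_prime(n) then count = count + 1 end
  end
  return count
end

print(count_primes(10))  -- prints 4 (the primes 2, 3, 5, 7)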
sysbench mainly supports the following test types (each test's own options can be printed with the help command shown after this list):
1. cpu - processor performance
2. threads - thread scheduler performance
3. mutex - mutex performance
4. memory - memory allocation and transfer speed
5. fileio - file I/O performance
6. oltp - database performance (OLTP benchmark)
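For example, the options accepted by a given test can be listed with the built-in help command (the same works for threads, mutex, memory, fileio and the oltp_* scripts):
# print the cpu test's options and their defaults
sysbench cpu help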
2. Installing sysbench
# On CentOS 7, sysbench can be installed with yum
yum install sysbench
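On Debian/Ubuntu systems the package is usually available through apt instead (package availability and version depend on the distribution release):
# Debian/Ubuntu
apt-get install sysbench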
3. CPU benchmark
1) Test command
2) Common parameters
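With the sysbench 1.0 syntax, a basic run only needs the test name and the run command; all of the parameters below have defaults:
# run the CPU test with the defaults (prime limit 10000, 1 thread, 10 seconds)
sysbench cpu run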
--cpu-max-prime: upper limit for the prime search
- Each event checks every number from 2 up to this limit by trial division and counts the primes found
- e.g. with --cpu-max-prime=10, one event checks the numbers 2 through 10 and finds the primes 2, 3, 5, 7
- Default: 10000
--threads: number of threads
- With 1, sysbench runs the prime computation in a single thread
- With 2, sysbench starts two threads that each run the prime computation at the same time
- Default: 1
--time: run duration in seconds
- With 5, sysbench repeats the prime computation over and over for 5 seconds; the output shows how many rounds completed. For example, together with --cpu-max-prime=10, the first round computes all primes up to 10, and if time remains another round starts, until the time runs out. Each completed round is called an event.
- Default: 10
- For the same duration, compare which machine completes more events
--events: limit on the total number of events
- With 100, the run stops after 100 events even if time remains
- Default: 0, meaning the number of events is not limited
- For the same number of events, compare which machine needs less time (see the example after this list)
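The two comparison modes correspond to invocations like these (values are illustrative):
# fixed time: run for 10 seconds and compare how many events each server completes
sysbench cpu --cpu-max-prime=20000 --threads=2 --time=10 run
# fixed work: run exactly 1000 events with no time limit and compare the total time
sysbench cpu --cpu-max-prime=20000 --threads=2 --events=1000 --time=0 run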
3) Example and result analysis
Command executed:
# prime limit 20000, default 10 seconds, 2 threads
sysbench cpu --cpu-max-prime=20000 --threads=2 run
Annotated output:
sysbench 1.0.9 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 2                      // 2 threads, as specified

Initializing random number generator from current time

Prime numbers limit: 20000                // each thread searches for primes up to 20000

Initializing worker threads...

Threads started!

CPU speed:
    events per second:   650.74           // all threads together completed 650.74 events per second

General statistics:
    total time:          10.0017s         // total run time: 10 seconds
    total number of events: 6510          // all threads completed 6510 events within the 10 seconds

Latency (ms):
    min:             3.03                 // the fastest event took 3.03 ms
    avg:             3.07                 // average event latency: 3.07 ms
    max:             3.27                 // the slowest event took 3.27 ms
    95th percentile: 3.13                 // 95% of events finished within 3.13 ms
    sum:             19999.91             // latency summed over all events: about 20000 ms (2 threads x 10 s each)

Threads fairness:
    events (avg/stddev):         3255.0000/44.00   // each thread completed 3255 events on average, standard deviation 44
    execution time (avg/stddev): 10.0000/0.00      // each thread ran for 10 seconds on average, standard deviation 0
event: one completed round of the prime computation
stddev (standard deviation): how consistent the per-thread event counts are over the same run time; the lower the value, the closer (i.e. the more stable) the threads' results are. This metric is meaningless for a single thread.
4) Comparing results
When two servers are compared for CPU performance with the same prime limit and thread count:
- For the same duration, compare the number of events
- For the same number of events, compare the time taken
- If both time and events are the same, compare stddev (standard deviation)
Reference: linux sysbench (一): CPU性能测试详解 - seasonzone - 博客园 (cnblogs.com)
4. fileio: file I/O benchmark
1) Test commands
# threads = 4, report results every 4 s, duration = 60 s
# 2 files, 4 GB total file size, random read/write mode
# block size = 16384 (the default)
sysbench --threads=4 --report-interval=4 --time=60 --test=fileio --file-num=2 --file-total-size=4G --file-test-mode=rndrw prepare
sysbench --threads=4 --report-interval=4 --time=60 --test=fileio --file-num=2 --file-total-size=4G --file-test-mode=rndrw run
sysbench --threads=4 --report-interval=4 --time=60 --test=fileio --file-num=2 --file-total-size=4G --file-test-mode=rndrw cleanup
2) fileio test parameters
# fileio options ([] = default):
--file-num=N                  # number of files to create [128]
--file-block-size=N           # block size used for all I/O operations [16384]
--file-total-size=SIZE        # total size of the files to create [2G]
--file-test-mode=STRING       # test mode {seqwr (sequential write), seqrewr (sequential rewrite), seqrd (sequential read), rndrd (random read), rndwr (random write), rndrw (random read/write)}
--file-io-mode=STRING         # file operation mode {sync, async, mmap} [sync]
--file-extra-flags=[LIST,...] # list of additional flags to use when opening files {sync, dsync, direct} []
--file-fsync-freq=N           # call fsync() after this number of requests (0 - don't use fsync()) [100]
--file-fsync-all[=on|off]     # call fsync() after each write operation [off]
--file-fsync-end[=on|off]     # call fsync() at the end of the test [on]
--file-fsync-mode=STRING      # synchronization method {fsync, fdatasync} [fsync]
--file-merged-requests=N      # merge at most this number of I/O requests if possible (0 - don't merge) [0]
--file-rw-ratio=N             # read/write ratio for the combined test [1.5]
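As an example of --file-extra-flags, a random-read test that bypasses the OS page cache with O_DIRECT can be run against the file set prepared above (values are illustrative):
# random read test with direct I/O, so the disk rather than the page cache is measured
sysbench fileio --file-num=2 --file-total-size=4G --file-test-mode=rndrd --file-extra-flags=direct --threads=4 --time=60 run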
3) Example output and analysis
Extra file open flags: (none)
2 files, 2GiB each
4GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!

[ 4s ] reads: 155.03 MiB/s writes: 103.35 MiB/s fsyncs: 329.73/s latency (ms,95%): 0.099   # read speed / write speed / fsync() calls per second
[ 8s ] reads: 141.36 MiB/s writes: 94.24 MiB/s fsyncs: 302.25/s latency (ms,95%): 0.095
[ 12s ] reads: 187.62 MiB/s writes: 125.08 MiB/s fsyncs: 400.25/s latency (ms,95%): 0.097
...   # intermediate output omitted
[ 56s ] reads: 139.68 MiB/s writes: 93.11 MiB/s fsyncs: 297.98/s latency (ms,95%): 0.097
[ 60s ] reads: 178.84 MiB/s writes: 119.23 MiB/s fsyncs: 381.53/s latency (ms,95%): 0.099

File operations:
    reads/s:   9536.35    # read operations per second
    writes/s:  6357.60    # write operations per second
    fsyncs/s:  318.01     # fsync operations per second

Throughput:
    read, MiB/s:    149.01   # read throughput
    written, MiB/s: 99.34    # write throughput

General statistics:
    total time:          60.0075s   # test duration
    total number of events: 972876  # total number of events

Latency (ms):
    min:             0.00       # minimum latency (too small for the output precision)
    avg:             0.24       # average latency
    max:             1569.96    # maximum latency
    95th percentile: 0.10       # 95% of events finished within this time
    sum:             237799.11  # total latency summed over all threads

Threads fairness:
    events (avg/stddev):         243219.0000/683.34   # average events per thread / standard deviation
    execution time (avg/stddev): 59.4498/0.00          # average execution time per thread / standard deviation
Reference: Sysbench性能测试(四): 文件I/O性能测试 - Obliviate - CSDN博客
5. OLTP database benchmark
To benchmark a database with sysbench, first create a test database on the MySQL server:
create database db1;
1. Initialize the test data (it must be deleted after the benchmark; see the cleanup command at the end of this section)
sysbench --db-driver=mysql --time=90 --threads=4 --report-interval=10 --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root --mysql-password='11111111111' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_read_write --db-ps-mode=disable prepare
Parameters:
--mysql-db=test                           # test database
--mysql-user=root                         # user name
--mysql-password=123456                   # password
--tables / --oltp-tables-count=8          # number of tables
--table_size / --oltp-table-size=100000   # rows per table (100,000 here)
--mysql-socket=/var/lib/mysql/mysql.sock  # path to mysql.sock; for a local connection you can either specify a host IP or connect directly through the socket
Read/write performance
sysbench --db-driver=mysql --time=90 --threads=8 --report-interval=10 --mysql-host=10.33xxxxxx --mysql-port=3306 --mysql-user=xxxx --mysql-password='xxxx' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_read_write --db-ps-mode=disable run
Read-only performance
sysbench --db-driver=mysql --time=90 --threads=8 --report-interval=10 --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=xxxx --mysql-password='xxxxx' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_read_only --db-ps-mode=disable run
Delete performance
sysbench --db-driver=mysql --time=90 --threads=8 --report-interval=10 --mysql-host=10.33.xxxxx --mysql-port=3306 --mysql-user=xxxx --mysql-password='xxxx' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_delete --db-ps-mode=disable run
Update performance (indexed column)
sysbench --db-driver=mysql --time=90 --threads=8 --report-interval=10 --mysql-host=10.33.xxxxx --mysql-port=3306 --mysql-user=xxxxx --mysql-password='xxxxxx!!' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_update_index --db-ps-mode=disable run
Update performance (non-indexed column)
sysbench --db-driver=mysql --time=90 --threads=8 --report-interval=10 --mysql-host=10.33.xxxx --mysql-port=3306 --mysql-user=xxxx --mysql-password='xxxxx!!' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_update_non_index --db-ps-mode=disable run
Insert performance
sysbench --db-driver=mysql --time=90 --threads=8 --report-interval=10 --mysql-host=10.33.xxxx --mysql-port=3306 --mysql-user=xxx --mysql-password='xxxx' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_insert --db-ps-mode=disable run
Write-only performance
sysbench --db-driver=mysql --time=90 --threads=8 --report-interval=10 --mysql-host=10.33.xxxx --mysql-port=3306 --mysql-user=xxxxx --mysql-password='xxxxx!!' --mysql-db=db1 --tables=20 --table_size=1000000 oltp_write_only --db-ps-mode=disable run
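When all tests are finished, drop the generated sbtest tables with the matching cleanup command, as noted in step 1 above (shown here with the local example connection values):
sysbench --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root --mysql-password='11111111111' --mysql-db=db1 --tables=20 oltp_read_write --db-ps-mode=disable cleanup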
6. Custom Lua script
The script below is a customized copy of the common OLTP code that ships with sysbench (oltp_common.lua): it creates sbtestN tables shaped like a wallet transaction log (txn_type, txn_amount, ewallet_ref, merchant_ref, payment_info and so on), bulk-inserts random rows, and builds a set of secondary indexes, so the benchmark runs against a schema closer to the real application.
-- Copyright (C) 2006-2018 Alexey Kopytov <akopytov@gmail.com> -- This program is free software; you can redistribute it and/or modify -- it under the terms of the GNU General Public License as published by -- the Free Software Foundation; either version 2 of the License, or -- (at your option) any later version. -- This program is distributed in the hope that it will be useful, -- but WITHOUT ANY WARRANTY; without even the implied warranty of -- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -- GNU General Public License for more details. -- You should have received a copy of the GNU General Public License -- along with this program; if not, write to the Free Software -- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA -- ----------------------------------------------------------------------------- -- Common code for OLTP benchmarks. -- ----------------------------------------------------------------------------- function init() assert(event ~= nil, "this script is meant to be included by other OLTP scripts and " .. "should not be called directly.") end if sysbench.cmdline.command == nil then error("Command is required. Supported commands: prepare, prewarm, run, " .. "cleanup, help") end -- Command line options sysbench.cmdline.options = { table_size = {"Number of rows per table", 10000}, range_size = {"Range size for range SELECT queries", 100}, tables = {"Number of tables", 1}, point_selects = {"Number of point SELECT queries per transaction", 10}, simple_ranges = {"Number of simple range SELECT queries per transaction", 1}, sum_ranges = {"Number of SELECT SUM() queries per transaction", 1}, order_ranges = {"Number of SELECT ORDER BY queries per transaction", 1}, distinct_ranges = {"Number of SELECT DISTINCT queries per transaction", 1}, index_updates = {"Number of UPDATE index queries per transaction", 1}, non_index_updates = {"Number of UPDATE non-index queries per transaction", 1}, delete_inserts = {"Number of DELETE/INSERT combinations per transaction", 1}, range_selects = {"Enable/disable all range SELECT queries", true}, auto_inc = {"Use AUTO_INCREMENT column as Primary Key (for MySQL), " .. "or its alternatives in other DBMS. When disabled, use " .. "client-generated IDs", true}, skip_trx = {"Don't start explicit transactions and execute all queries " .. "in the AUTOCOMMIT mode", false}, secondary = {"Use a secondary index in place of the PRIMARY KEY", false}, create_secondary = {"Create a secondary index in addition to the PRIMARY KEY", true}, mysql_storage_engine = {"Storage engine, if MySQL is used", "innodb"}, pgsql_variant = {"Use this PostgreSQL variant when running with the " .. "PostgreSQL driver. The only currently supported " .. "variant is 'redshift'. When enabled, " .. "create_secondary is automatically disabled, and " .. "delete_inserts is set to 0"} } -- Prepare the dataset. This command supports parallel execution, i.e. will -- benefit from executing with --threads > 1 as long as --tables > 1 function cmd_prepare() local drv = sysbench.sql.driver() local con = drv:connect() for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables, sysbench.opt.threads do create_table(drv, con, i) end end -- Preload the dataset into the server cache. This command supports parallel -- execution, i.e. will benefit from executing with --threads > 1 as long as -- --tables > 1 -- -- PS. 
Currently, this command is only meaningful for MySQL/InnoDB benchmarks function cmd_prewarm() local drv = sysbench.sql.driver() local con = drv:connect() assert(drv:name() == "mysql", "prewarm is currently MySQL only") -- Do not create on disk tables for subsequent queries con:query("SET tmp_table_size=2*1024*1024*1024") con:query("SET max_heap_table_size=2*1024*1024*1024") for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables, sysbench.opt.threads do local t = "sbtest" .. i print("Prewarming table " .. t) con:query("ANALYZE TABLE sbtest" .. i) con:query(string.format( "SELECT AVG(id) FROM " .. "(SELECT * FROM %s FORCE KEY (PRIMARY) " .. "LIMIT %u) t", t, sysbench.opt.table_size)) con:query(string.format( "SELECT COUNT(*) FROM " .. "(SELECT * FROM %s WHERE k LIKE '%%0%%' LIMIT %u) t", t, sysbench.opt.table_size)) end end -- Implement parallel prepare and prewarm commands sysbench.cmdline.commands = { prepare = {cmd_prepare, sysbench.cmdline.PARALLEL_COMMAND}, prewarm = {cmd_prewarm, sysbench.cmdline.PARALLEL_COMMAND} } -- Template strings of random digits with 11-digit groups separated by dashes -- 10 groups, 119 characters -- 5 groups, 59 characters local txn_type_value_template = "###########-###########-###########-" .. "###########-###########" local txn_currency_code_value_template = "###########-###########-###########-" .. "###########-###########" local txn_direction_value_template = "##" local txn_status_value_template = "###########-###########" local txn_status_remarks_value_template = "###########-###########-" .. "-###########-###########-###########-###########" local ewallet_ref_value_template = "###########-###########-###########-" .. "###########-###########" local merchant_ref_value_template = "###########-###########-###########-" .. "###########-###########" local third_party_ref_value_template = "###########-###########-###########-" .. "###########-###########" -- local created_date_time_value_template = "###########-###########-###########-" .. -- "###########-###########" -- local updated_date_time_value_template = "###########-###########-###########-" .. -- "###########-###########" -- local txn_date_time_value_template = "###########-###########-###########-" .. -- "###########-###########" local original_txn_ref_value_template = "###########-###########-###########-" .. "###########-###########" local source_of_fund_value_template = "###########-###########-###########-" .. "###########-###########" local external_txn_type_value_template = "###########-###########-###########-" .. "###########-###########" local user_id_value_template = "###########-###########-###########-" .. "###########-###########" local merchant_id_value_template = "###########-###########-###########-" .. "###########-###########" local merchant_id_ext_value_template = "###########-###########-###########-" .. "###########-###########" local mfg_no_value_template = "###########-###########-###########-" .. "###########-###########" local rfid_tag_no_value_template = "###########-###########-###########-" .. "###########-###########" local payment_info_value_template = "###########-###########-###########-" .. "###########-###########" local extend_info_value_template = "###########-###########-###########-" .. "###########-###########" local pending_data_value_template = "###########-###########-###########-" .. 
"###########-###########" function get_txn_type_value() return sysbench.rand.string(txn_type_value_template) end function get_txn_currency_code_value() return sysbench.rand.string(txn_currency_code_value_template) end function get_txn_direction_value() return sysbench.rand.string(txn_direction_value_template) end function get_txn_status_value() return sysbench.rand.string(txn_status_value_template) end function get_txn_status_remarks_value() return sysbench.rand.string(txn_status_remarks_value_template) end function get_ewallet_ref_value() return sysbench.rand.string(ewallet_ref_value_template) end function get_merchant_ref_value() return sysbench.rand.string(merchant_ref_value_template) end function get_third_party_ref_value() return sysbench.rand.string(third_party_ref_value_template) end -- function get_created_date_time_value() -- return sysbench.rand.string(created_date_time_value_template) -- end -- function get_updated_date_time_value() -- return sysbench.rand.string(updated_date_time_value_template) -- end -- function get_txn_date_time_value() -- return sysbench.rand.string(txn_date_time_value_template) -- end function get_original_txn_ref_value() return sysbench.rand.string(original_txn_ref_value_template) end function get_source_of_fund_value() return sysbench.rand.string(source_of_fund_value_template) end function get_external_txn_type_value() return sysbench.rand.string(external_txn_type_value_template) end function get_user_id_value() return sysbench.rand.string(user_id_value_template) end function get_merchant_id_value() return sysbench.rand.string(merchant_id_value_template) end function get_merchant_id_ext_value() return sysbench.rand.string(merchant_id_ext_value_template) end function get_mfg_no_value() return sysbench.rand.string(mfg_no_value_template) end function get_rfid_tag_no_value() return sysbench.rand.string(rfid_tag_no_value_template) end function get_payment_info_value() return sysbench.rand.string(payment_info_value_template) end function get_extend_info_value() return sysbench.rand.string(extend_info_value_template) end function get_pending_data_value() return sysbench.rand.string(pending_data_value_template) end function create_table(drv, con, table_num) local id_index_def, id_def local engine_def = "" local extra_table_options = "" local query if sysbench.opt.secondary then id_index_def = "KEY xid" else id_index_def = "PRIMARY KEY" end if drv:name() == "mysql" or drv:name() == "attachsql" or drv:name() == "drizzle" then if sysbench.opt.auto_inc then id_def = "INTEGER NOT NULL AUTO_INCREMENT" else id_def = "INTEGER NOT NULL" end engine_def = "/*! ENGINE = " .. sysbench.opt.mysql_storage_engine .. " */" extra_table_options = mysql_table_options or "" elseif drv:name() == "pgsql" then if not sysbench.opt.auto_inc then id_def = "INTEGER NOT NULL" elseif pgsql_variant == 'redshift' then id_def = "INTEGER IDENTITY(1,1)" else id_def = "SERIAL" end else error("Unsupported database driver:" .. 
drv:name()) end print(string.format("Creating table 'sbtest%d'...", table_num)) query = string.format([[ CREATE TABLE if not exists sbtest%d( id %s, txn_type varchar(64) DEFAULT NULL, txn_currency_code varchar(64) DEFAULT NULL, txn_amount bigint(20) DEFAULT '0', txn_direction varchar(2) DEFAULT 'NA', txn_status varchar(64) DEFAULT NULL, txn_status_remarks varchar(200) DEFAULT NULL, ewallet_ref varchar(64) DEFAULT NULL, merchant_ref varchar(64) DEFAULT NULL, third_party_ref varchar(64) DEFAULT NULL, created_date_time datetime DEFAULT NULL, updated_date_time datetime DEFAULT NULL, version int(11) DEFAULT NULL, txn_date_time datetime DEFAULT NULL, original_txn_ref varchar(64) DEFAULT NULL, source_of_fund varchar(64) DEFAULT NULL, external_txn_type varchar(64) DEFAULT NULL, user_id varchar(64) DEFAULT NULL, merchant_id varchar(64) DEFAULT NULL, merchant_id_ext varchar(64) DEFAULT NULL, mfg_no varchar(64) DEFAULT NULL, rfid_tag_no varchar(64) DEFAULT NULL, payment_info text, extend_info text, pending_data longblob, %s (id) ) %s %s]], table_num, id_def, id_index_def, engine_def, extra_table_options) con:query(query) if (sysbench.opt.table_size > 0) then print(string.format("Inserting %d records into 'sbtest%d'", sysbench.opt.table_size, table_num)) end if sysbench.opt.auto_inc then query = "INSERT INTO sbtest" .. table_num .. "(txn_type,txn_currency_code,txn_amount,txn_direction,txn_status,txn_status_remarks,ewallet_ref,merchant_ref,third_party_ref,created_date_time,updated_date_time,version,txn_date_time,original_txn_ref,source_of_fund,external_txn_type,user_id,merchant_id,merchant_id_ext,mfg_no,rfid_tag_no,payment_info,extend_info,pending_data ) VALUES" else query = "INSERT INTO sbtest" .. table_num .. "(id,txn_type,txn_currency_code,txn_amount,txn_direction,txn_status,txn_status_remarks,ewallet_ref,merchant_ref,third_party_ref,created_date_time,updated_date_time,version,txn_date_time,original_txn_ref,source_of_fund,external_txn_type,user_id,merchant_id,merchant_id_ext,mfg_no,rfid_tag_no,payment_info,extend_info,pending_data) VALUES" end con:bulk_insert_init(query) local txn_type_val local txn_currency_code_val local txn_amount_val local txn_direction_val local txn_status_val local txn_status_remarks_val local ewallet_ref_val local merchant_ref_val local third_party_ref_val local created_date_time_val local updated_date_time_val local version_val local txn_date_time_val local original_txn_ref_val local source_of_fund_val local external_txn_type_val local user_id_val local merchant_id_val local merchant_id_ext_val local mfg_no_val local rfid_tag_no_val local payment_info_val local extend_info_val local pending_data_val for i = 1, sysbench.opt.table_size do txn_type_val = get_txn_type_value() txn_currency_code_val = get_txn_currency_code_value() txn_direction_val = get_txn_direction_value() txn_status_val = get_txn_status_value() txn_status_remarks_val = get_txn_status_remarks_value() ewallet_ref_val = get_ewallet_ref_value() merchant_ref_val = get_merchant_ref_value() third_party_ref_val = get_third_party_ref_value() -- created_date_time_val = get_created_date_time_value() -- updated_date_time_val = get_updated_date_time_value() -- txn_date_time_val = get_txn_date_time_value() original_txn_ref_val = get_original_txn_ref_value() source_of_fund_val = get_source_of_fund_value() external_txn_type_val = get_external_txn_type_value() user_id_val = get_user_id_value() merchant_id_val = get_merchant_id_value() merchant_id_ext_val = get_merchant_id_ext_value() mfg_no_val = get_mfg_no_value() 
rfid_tag_no_val = get_rfid_tag_no_value() payment_info_val = get_payment_info_value() extend_info_val = get_extend_info_value() pending_data_val = get_pending_data_value() if (sysbench.opt.auto_inc) then query = string.format("('%s', '%s',%d, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s',%d, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s')", txn_type_val,txn_currency_code_val,sb_rand(20, sysbench.opt.table_size),txn_direction_val,txn_status_val,txn_status_remarks_val,ewallet_ref_val,merchant_ref_val,third_party_ref_val,os.date("%Y-%m-%d %H:%M:%S", os.time()),os.date("%Y-%m-%d %H:%M:%S", os.time()),sb_rand(11, sysbench.opt.table_size),os.date("%Y-%m-%d %H:%M:%S", os.time()),original_txn_ref_val,source_of_fund_val,external_txn_type_val,user_id_val,merchant_id_val,merchant_id_ext_val,mfg_no_val,rfid_tag_no_val,payment_info_val,extend_info_val,pending_data_val) else query = string.format("(%d, '%s', '%s',%d, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s',%d, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s')", i, txn_type_val,txn_currency_code_val,sb_rand(20, sysbench.opt.table_size),txn_direction_val,txn_status_val,txn_status_remarks_val,ewallet_ref_val,merchant_ref_val,third_party_ref_val,os.date("%Y-%m-%d %H:%M:%S", os.time()),os.date("%Y-%m-%d %H:%M:%S", os.time()),b_rand(11, sysbench.opt.table_size),s.date("%Y-%m-%d %H:%M:%S", os.time()),original_txn_ref_val,source_of_fund_val,external_txn_type_val,user_id_val,merchant_id_val,merchant_id_ext_val,mfg_no_val,rfid_tag_no_val,payment_info_val,extend_info_val,pending_data_val) end con:bulk_insert_next(query) end con:bulk_insert_done() if sysbench.opt.create_secondary then print(string.format("Creating a secondary index on 'sbtest%d'...", table_num)) con:query(string.format("CREATE unique INDEX wallet_txn_log_ux01 ON sbtest%d(ewallet_ref)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx01 ON sbtest%d(user_id)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx02 ON sbtest%d(txn_type)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx03 ON sbtest%d(txn_status)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx04 ON sbtest%d(merchant_ref)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx05 ON sbtest%d(third_party_ref)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx06 ON sbtest%d(txn_date_time)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx07 ON sbtest%d(created_date_time)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx08 ON sbtest%d(mfg_no)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx09 ON sbtest%d(rfid_tag_no)", table_num, table_num)) con:query(string.format("CREATE INDEX merchant_third_party_ref_idx ON sbtest%d(merchant_ref,third_party_ref)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx10 ON sbtest%d(merchant_id)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx11 ON sbtest%d(txn_date_time,txn_type)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_user_id_status_txndateTime_01 ON sbtest%d(user_id,txn_status,txn_date_time)", table_num, table_num)) con:query(string.format("CREATE INDEX wallet_txn_log_idx12 ON sbtest%d(updated_date_time)", table_num, table_num)) con:query(string.format("CREATE INDEX 
wallet_txn_log_idx14 ON sbtest%d(user_id,rfid_tag_no)", table_num, table_num)) end end local t = sysbench.sql.type local stmt_defs = { point_selects = { "SELECT c FROM sbtest%u WHERE id=?", t.INT}, simple_ranges = { "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ?", t.INT, t.INT}, sum_ranges = { "SELECT SUM(k) FROM sbtest%u WHERE id BETWEEN ? AND ?", t.INT, t.INT}, order_ranges = { "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ? ORDER BY c", t.INT, t.INT}, distinct_ranges = { "SELECT DISTINCT c FROM sbtest%u WHERE id BETWEEN ? AND ? ORDER BY c", t.INT, t.INT}, index_updates = { "UPDATE sbtest%u SET k=k+1 WHERE id=?", t.INT}, non_index_updates = { "UPDATE sbtest%u SET c=? WHERE id=?", {t.CHAR, 120}, t.INT}, deletes = { "DELETE FROM sbtest%u WHERE id=?", t.INT}, inserts = { "INSERT INTO sbtest%u (id, k, c, pad,txn_type,txn_currency_code) VALUES (?, ?, ?, ?, ?, ?)", t.INT, t.INT, {t.CHAR, 120}, {t.CHAR, 60}, {t.CHAR, 60}, {t.CHAR, 60}}, } function prepare_begin() stmt.begin = con:prepare("BEGIN") end function prepare_commit() stmt.commit = con:prepare("COMMIT") end function prepare_for_each_table(key) for t = 1, sysbench.opt.tables do stmt[t][key] = con:prepare(string.format(stmt_defs[key][1], t)) local nparam = #stmt_defs[key] - 1 if nparam > 0 then param[t][key] = {} end for p = 1, nparam do local btype = stmt_defs[key][p+1] local len if type(btype) == "table" then len = btype[2] btype = btype[1] end if btype == sysbench.sql.type.VARCHAR or btype == sysbench.sql.type.CHAR then param[t][key][p] = stmt[t][key]:bind_create(btype, len) else param[t][key][p] = stmt[t][key]:bind_create(btype) end end if nparam > 0 then stmt[t][key]:bind_param(unpack(param[t][key])) end end end function prepare_point_selects() prepare_for_each_table("point_selects") end function prepare_simple_ranges() prepare_for_each_table("simple_ranges") end function prepare_sum_ranges() prepare_for_each_table("sum_ranges") end function prepare_order_ranges() prepare_for_each_table("order_ranges") end function prepare_distinct_ranges() prepare_for_each_table("distinct_ranges") end function prepare_index_updates() prepare_for_each_table("index_updates") end function prepare_non_index_updates() prepare_for_each_table("non_index_updates") end function prepare_delete_inserts() prepare_for_each_table("deletes") prepare_for_each_table("inserts") end function thread_init() drv = sysbench.sql.driver() con = drv:connect() -- Create global nested tables for prepared statements and their -- parameters. We need a statement and a parameter set for each combination -- of connection/table/query stmt = {} param = {} for t = 1, sysbench.opt.tables do stmt[t] = {} param[t] = {} end -- This function is a 'callback' defined by individual benchmark scripts prepare_statements() end -- Close prepared statements function close_statements() for t = 1, sysbench.opt.tables do for k, s in pairs(stmt[t]) do stmt[t][k]:close() end end if (stmt.begin ~= nil) then stmt.begin:close() end if (stmt.commit ~= nil) then stmt.commit:close() end end function thread_done() close_statements() con:disconnect() end function cleanup() local drv = sysbench.sql.driver() local con = drv:connect() for i = 1, sysbench.opt.tables do print(string.format("Dropping table 'sbtest%d'...", i)) con:query("DROP TABLE IF EXISTS sbtest" .. 
i ) end end local function get_table_num() return sysbench.rand.uniform(1, sysbench.opt.tables) end local function get_id() return sysbench.rand.default(1, sysbench.opt.table_size) end function begin() stmt.begin:execute() end function commit() stmt.commit:execute() end function execute_point_selects() local tnum = get_table_num() local i for i = 1, sysbench.opt.point_selects do param[tnum].point_selects[1]:set(get_id()) stmt[tnum].point_selects:execute() end end local function execute_range(key) local tnum = get_table_num() for i = 1, sysbench.opt[key] do local id = get_id() param[tnum][key][1]:set(id) param[tnum][key][2]:set(id + sysbench.opt.range_size - 1) stmt[tnum][key]:execute() end end function execute_simple_ranges() execute_range("simple_ranges") end function execute_sum_ranges() execute_range("sum_ranges") end function execute_order_ranges() execute_range("order_ranges") end function execute_distinct_ranges() execute_range("distinct_ranges") end function execute_index_updates() local tnum = get_table_num() for i = 1, sysbench.opt.index_updates do param[tnum].index_updates[1]:set(get_id()) stmt[tnum].index_updates:execute() end end function execute_non_index_updates() local tnum = get_table_num() for i = 1, sysbench.opt.non_index_updates do param[tnum].non_index_updates[1]:set_rand_str(c_value_template) param[tnum].non_index_updates[2]:set(get_id()) stmt[tnum].non_index_updates:execute() end end function execute_delete_inserts() local tnum = get_table_num() for i = 1, sysbench.opt.delete_inserts do local id = get_id() local k = get_id() param[tnum].deletes[1]:set(id) param[tnum].inserts[1]:set(id) param[tnum].inserts[2]:set(k) param[tnum].inserts[3]:set_rand_str(c_value_template) param[tnum].inserts[4]:set_rand_str(pad_value_template) param[tnum].inserts[5]:set_rand_str(txn_type_value_template) param[tnum].inserts[6]:set_rand_str(txn_currency_code_value_template) stmt[tnum].deletes:execute() stmt[tnum].inserts:execute() end end -- Re-prepare statements if we have reconnected, which is possible when some of -- the listed error codes are in the --mysql-ignore-errors list function sysbench.hooks.before_restart_event(errdesc) if errdesc.sql_errno == 2013 or -- CR_SERVER_LOST errdesc.sql_errno == 2055 or -- CR_SERVER_LOST_EXTENDED errdesc.sql_errno == 2006 or -- CR_SERVER_GONE_ERROR errdesc.sql_errno == 2011 -- CR_TCP_CONNECTION then close_statements() prepare_statements() end end
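This file is the shared common code (its init() asserts that event() is defined elsewhere), so it is meant to be included by a small wrapper script, in the same way the bundled oltp_read_write.lua defines prepare_statements() and event() on top of oltp_common.lua. For illustration only, a complete custom script saved as wallet_oltp.lua (a placeholder name) would be passed to sysbench in place of a built-in test name:
# prepare / run / cleanup with a custom Lua script instead of a built-in test
sysbench ./wallet_oltp.lua --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root --mysql-password='xxxx' --mysql-db=db1 --tables=20 --table_size=1000000 prepare
sysbench ./wallet_oltp.lua --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root --mysql-password='xxxx' --mysql-db=db1 --tables=20 --table_size=1000000 --threads=8 --time=90 run
sysbench ./wallet_oltp.lua --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root --mysql-password='xxxx' --mysql-db=db1 --tables=20 cleanup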