Testing disk I/O with fio
Small Computer System Interface (SCSI)
SAS (Serial Attached SCSI)
SCSI disk device names: sd[a-p]
sda1 and sda2 denote different partitions of the same disk
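To see how these names map to the disks and partitions on a given machine, lsblk gives a quick overview; a small sketch, using the same example disk sda (adjust to your system):
# List the disk and its partitions; ROTA=1 marks a rotational (spinning) disk
lsblk -o NAME,SIZE,TYPE,ROTA /dev/sda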
1. Go to /sys/block/sda/queue/ and inspect the parameters of disk sda:
ls -1 /sys/block/sda/queue/ | awk '{cmd="cat "i$0; print i$0; system(cmd) }' i=`pwd`'/'
/sys/block/sda/queue/add_random
1
/sys/block/sda/queue/discard_granularity
0
/sys/block/sda/queue/discard_max_bytes
0
/sys/block/sda/queue/discard_zeroes_data
0
/sys/block/sda/queue/hw_sector_size
512
/sys/block/sda/queue/iosched
cat: /sys/block/sda/queue/iosched: Is a directory
/sys/block/sda/queue/iostats
1
/sys/block/sda/queue/logical_block_size
512
/sys/block/sda/queue/max_hw_sectors_kb
32767
/sys/block/sda/queue/max_integrity_segments
0
/sys/block/sda/queue/max_sectors_kb
512
/sys/block/sda/queue/max_segments
128
/sys/block/sda/queue/max_segment_size
65536
/sys/block/sda/queue/minimum_io_size
4096
/sys/block/sda/queue/nomerges
0
/sys/block/sda/queue/nr_requests
128
/sys/block/sda/queue/optimal_io_size
0
/sys/block/sda/queue/physical_block_size
4096
/sys/block/sda/queue/read_ahead_kb
128
/sys/block/sda/queue/rotational
1
/sys/block/sda/queue/rq_affinity
1
/sys/block/sda/queue/scheduler
noop deadline [cfq]
/sys/block/sda/queue/unpriv_sgio
0
/sys/block/sda/queue/write_same_max_bytes
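Most of the parameters above can be changed at runtime by writing to the corresponding sysfs file. A minimal sketch, assuming the disk is sda and the kernel offers the schedulers listed above (root is required, and the change does not survive a reboot):
# The scheduler shown in brackets is the active one
cat /sys/block/sda/queue/scheduler
# Switch the I/O scheduler from cfq to deadline
echo deadline > /sys/block/sda/queue/scheduler
# Raise the read-ahead window from 128 KB to 1024 KB
echo 1024 > /sys/block/sda/queue/read_ahead_kb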
List the SCSI devices:
lsscsi -l
[0:0:0:0] disk ATA ST1000DM003-1ER1 CC46 /dev/sda
state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
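The queue_depth=1 reported by lsscsi is also exposed through sysfs and can be adjusted there. A small sketch, assuming device sda (the driver may reject or cap the value):
# Current SCSI queue depth for sda
cat /sys/block/sda/device/queue_depth
# Try to raise it (requires root)
echo 32 > /sys/block/sda/device/queue_depth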
Before running the fio command below, make sure the disk contains no data you need, because writing to the raw device will overwrite it. The options used are explained below, and an equivalent job-file form is sketched after the option list.
fio --name=randwrite --rw=randwrite --bs=4k --size=20G --runtime=1200 --ioengine=libaio --iodepth=16 --numjobs=1 --filename=/dev/sda --direct=1 --group_reporting
filename=/dev/sda   Test target; usually a file or directory on the disk under test (here the raw device itself).
direct=1            Bypass the OS page cache (O_DIRECT) so the results reflect the device rather than the cache.
rw=randwrite        Random write workload.
rw=randrw           Mixed random read and write workload.
bs=4k               Each I/O is a 4 KB block.
bsrange=512-2048    As above, but specifies a range of block sizes.
size=5g             Total I/O for this test is 5 GB, issued as 4 KB requests.
numjobs=30          Run the test with 30 jobs (threads/processes).
runtime=1200        Run the test for 1200 seconds.
ioengine=psync      Use the psync I/O engine.
rwmixwrite=30       In mixed read/write mode, writes account for 30%.
group_reporting     Aggregate the statistics of all jobs in the report.
lockmem=1g          Use only 1 GB of memory for the test.
zero_buffers        Initialize the I/O buffers with zeros.
nrfiles=8           Number of files generated by each job.
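The same randwrite run can also be described in a job file, which is easier to keep under version control and reuse. A minimal sketch mirroring the command line above, assuming the file is called randwrite.fio:
cat > randwrite.fio <<'EOF'
[randwrite]
rw=randwrite
bs=4k
size=20g
runtime=1200
ioengine=libaio
iodepth=16
numjobs=1
filename=/dev/sda
direct=1
group_reporting
EOF
fio randwrite.fio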
runtime=int
Terminate processing after the specified number of seconds.
filename=str
fio normally makes up a file name based on the job name, thread number, and file number. If you want to share files between threads in a job or several jobs, specify a filename for each of them to override the default. If the I/O engine is file-based, you can specify a number of files by separating the names with a ':' character. '-' is a reserved name, meaning stdin or stdout, depending on the read/write direction set.
iodepth=int
Number of I/O units to keep in flight against the file. Note that increasing iodepth beyond 1 will not affect synchronous ioengines (except for small degrees when verify_async is in use). Even async engines may impose OS restrictions causing the desired depth not to be achieved. This may happen on Linux when using libaio and not setting direct=1, since buffered IO is not async on that OS. Keep an eye on the IO depth distribution in the fio output to verify that the achieved depth is as expected. Default: 1.
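To observe the achieved I/O depth without risking any data, the workload can be run as a read-only test; a sketch assuming the same disk, using fio's --readonly safety flag:
fio --name=randread --rw=randread --bs=4k --size=20G --runtime=60 \
    --ioengine=libaio --iodepth=16 --numjobs=1 --filename=/dev/sda \
    --direct=1 --readonly --group_reporting
The "IO depths" section of the fio output then shows how the 16 outstanding requests were actually distributed.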
Then watch the iostat output while fio is running:
iostat -mtx 2
2018-01-22 16:54:33
avg-cpu: %user %nice %system %iowait %steal %idle
1.01 0.00 0.51 25.25 0.00 73.23
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 1.50 0.00 3.00 183.50 0.12 0.71 9.09 18.11 101.74 177.67 100.50 5.36 100.00
rrqm/s: Number of read requests merged per second for this device (when the VFS passes read requests down to the filesystem, requests that target the same block can be merged).
wrqm/s: Number of write requests merged per second for this device.
rsec/s:The number of sectors read from the device per second.
wsec/s:The number of sectors written to the device per second.
rKB/s:The number of kilobytes read from the device per second.
wKB/s:The number of kilobytes written to the device per second.
avgrq-sz: The average size (in sectors) of the requests that were issued to the device.
avgqu-sz: The average queue length of the requests that were issued to the device.
await: Average response time of each I/O request (queue time plus service time); as a rule of thumb, I/O response time should stay below 5 ms (a simple threshold check is sketched after this list).
svctm: Average service time per device I/O operation. If await is much higher than svctm, requests are spending a long time waiting in the queue.
%util: The fraction of the sampling interval during which the device was busy handling I/O.
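As a quick way to keep an eye on latency during a run, the iostat output can be filtered with awk. A minimal sketch, assuming the column layout shown above (await is the 10th field; positions differ between sysstat versions) and the 5 ms rule of thumb:
# Print a warning whenever sda's await exceeds 5 ms
iostat -mtx 2 sda | awk '$1 == "sda" && $10+0 > 5 {print "high await:", $10, "ms"}'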