infobin (binlog analysis tool)
Reference (GitHub):
https://github.com/gaopengcarl/infobin
Before running the tool I generated some test data, including large transactions, long-uncommitted transactions, and so on.
1. Installation
[root@root ~]# git clone https://github.com/gaopengcarl/infobin
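The project targets little-endian Linux and, as far as I can tell, the repository ships a ready-to-run binary, so installation may be as simple as marking it executable (an assumption; if your copy has no binary, build it from the bundled source first):

[root@root ~]# cd infobin
[root@root infobin]# chmod +x infobin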
2. Usage
[root@root infobin]# ./infobin mysql-bin.000092 10 2000 3 -t > 1.log
Notes:
The tool extracts information from a binlog such as:
1. whether the binlog contains transactions left uncommitted for a long time;
2. whether the binlog contains large transactions;
3. how much binlog each table generated;
4. the rate at which the binlog was generated.
Taking another invocation as an example:
./infobin mysql-bin.001793 20 2000000 10 -t > log1793.log
After the binlog file name, the first number (20) is the piece count: the binlog is split into pieces of equal size. The shorter the time span a piece covers, the more binlog was generated during that period, i.e. the more transaction activity there was.
The second number (2000000) is the large-transaction threshold: any transaction larger than roughly 2 MB is reported as a large transaction.
The third number (10) is the long-transaction threshold: any transaction left uncommitted for more than 10 seconds is reported as a long-uncommitted transaction.
The final flag (-t) skips detailed per-event parsing and prints only the summary results.
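As a quick sanity check of the piece math against the report below (the analyzed binlog is 414853 bytes, split into 10 pieces in the run from step 2):

# equal-size slicing: each piece covers total_size / piece_count bytes
[root@root infobin]# echo $((414853 / 10))
41485
# which matches every "piece:41485(bytes)" line in the Piece view below.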
3. Reviewing the analysis results
[root@root infobin]# cat 1.log
Check is Little_endian
[Author]: gaopeng [QQ]:22389860 [blog]:http://blog.itpub.net/7728585/
Warning: This tool only Little_endian platform!
Little_endian check ok!!!
-------------Now begin--------------
Check Mysql Version is:5.7.34-log
Check Mysql binlog format ver is:V4
Warning:Check This binlog is not closed!
Check This binlog total size:414853(bytes)
Note:load data infile not check!
-------------Total now-------------- #the summary statistics start here
Trx total[counts]:166 --transaction count; this figure did not seem accurate
Event total[counts]:870 --event count
Max trx event size:8063(bytes) Pos:1281[0X501] --largest event size
Avg binlog size(/sec):751.545(bytes)[0.734(kb)]
Avg binlog size(/min):45092.719(bytes)[44.036(kb)]
--Piece view: --binlog volume generated in each time slice
(1)Time:1665566267-1665566274(7(s)) piece:41485(bytes)[40.513(kb)]
(2)Time:1665566274-1665566274(0(s)) piece:41485(bytes)[40.513(kb)]
(3)Time:1665566274-1665566274(0(s)) piece:41485(bytes)[40.513(kb)]
(4)Time:1665566274-1665566274(0(s)) piece:41485(bytes)[40.513(kb)]
(5)Time:1665566274-1665566278(4(s)) piece:41485(bytes)[40.513(kb)]
(6)Time:1665566278-1665566483(205(s)) piece:41485(bytes)[40.513(kb)]
(7)Time:1665566483-1665566598(115(s)) piece:41485(bytes)[40.513(kb)]
(8)Time:1665566598-1665566598(0(s)) piece:41485(bytes)[40.513(kb)]
(9)Time:1665566598-1665566601(3(s)) piece:41485(bytes)[40.513(kb)]
(10)Time:1665566601-1665566819(218(s)) piece:41485(bytes)[40.513(kb)]
--Large than 2000(bytes) trx: --large-transaction statistics
(1)Trx_size:201728(bytes)[197.000(kb)] trx_begin_p:1159[0X487] trx_end_p:202887[0X31887]
(2)Trx_size:20328(bytes)[19.852(kb)] trx_begin_p:204752[0X31FD0] trx_end_p:225080[0X36F38]
(3)Trx_size:100958(bytes)[98.592(kb)] trx_begin_p:263845[0X406A5] trx_end_p:364803[0X59103]
(4)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:365318[0X59306] trx_end_p:367513[0X59B99]
(5)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:367578[0X59BDA] trx_end_p:369773[0X5A46D]
(6)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:369838[0X5A4AE] trx_end_p:372033[0X5AD41]
(7)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:372098[0X5AD82] trx_end_p:374293[0X5B615]
(8)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:374358[0X5B656] trx_end_p:376553[0X5BEE9]
(9)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:376618[0X5BF2A] trx_end_p:378813[0X5C7BD]
(10)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:378878[0X5C7FE] trx_end_p:381073[0X5D091]
(11)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:381138[0X5D0D2] trx_end_p:383333[0X5D965]
(12)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:383398[0X5D9A6] trx_end_p:385593[0X5E239]
(13)Trx_size:2195(bytes)[2.144(kb)] trx_begin_p:386108[0X5E43C] trx_end_p:388303[0X5ECCF]
Total large trx count size(kb):#336.879(kb)
--Large than 3(secs) trx: --long-uncommitted-transaction distribution
(1)Trx_sec:13(sec) trx_begin_time:[20221012 17:17:58(CST)] trx_end_time:[20221012 17:18:11(CST)] trx_begin_pos:204752 trx_end_pos:225080 query_exe_time:0
--Every Table binlog size(bytes) and times:
Note:size unit is bytes
---(1)Current Table:tidb_binlog.checkpoint::
Insert:binlog size(0(Bytes)) times(0)
Update:binlog size(34272(Bytes)) times(153)
Delete:binlog size(0(Bytes)) times(0)
Total:binlog size(34272(Bytes)) times(153)
---(2)Current Table:ceshi.t1::
Insert:binlog size(342975(Bytes)) times(51)
Update:binlog size(0(Bytes)) times(0)
Delete:binlog size(0(Bytes)) times(0)
Total:binlog size(342975(Bytes)) times(51)
---Total binlog dml event size:377247(Bytes) times(204)
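Once a transaction is flagged, its reported begin/end positions can be fed straight to mysqlbinlog for closer inspection (a sketch using the positions of the long-uncommitted transaction above):

# cross-check the transaction count: in ROW format each transaction
# begins with a Query event that mysqlbinlog prints as a literal BEGIN line
[root@root infobin]# mysqlbinlog mysql-bin.000092 | grep -c '^BEGIN'
# decode the row events of the flagged transaction (positions 204752-225080)
[root@root infobin]# mysqlbinlog --start-position=204752 --stop-position=225080 \
    --base64-output=decode-rows -v mysql-bin.000092 | less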
Test SQL:
MySQL [ceshi]> flush logs;
Query OK, 0 rows affected (0.008 sec)

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 100;
Query OK, 100 rows affected (0.004 sec)
Records: 100  Duplicates: 0  Warnings: 0

MySQL [ceshi]> begin;
Query OK, 0 rows affected (0.000 sec)

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 10;
Query OK, 10 rows affected (0.001 sec)
Records: 10  Duplicates: 0  Warnings: 0

MySQL [ceshi]> commit;
Query OK, 0 rows affected (0.002 sec)

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 50;
Query OK, 50 rows affected (0.005 sec)
Records: 50  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.002 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.002 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.002 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.001 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.002 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.002 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.001 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.002 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.002 sec)
Records: 1  Duplicates: 0  Warnings: 0

MySQL [ceshi]> insert into t1 select null,repeat('a',2000) from t1 limit 1;
Query OK, 1 row affected (0.001 sec)
Records: 1  Duplicates: 0  Warnings: 0
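Mapping this transcript back to the report: the 100-row insert is large transaction (1) (201728 bytes ≈ 197 KB), and the 10-row insert wrapped in begin ... commit shows up twice, as large transaction (2) (20328 bytes ≈ 19.85 KB) and as the 13-second long-uncommitted transaction; their begin/end positions (204752/225080) are identical. Its query_exe_time of 0 presumably reflects that the 13 seconds were just the interactive pause before commit was typed, not actual statement execution time.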