Performance Comparison of Three MyBatis Batch Insert Approaches

The database is MySQL, the JDK is 1.8, and everything runs in a Spring Boot environment.

Source code for this article: https://github.com/runbeyondmove/mybatis-batch-demo

Three available approaches are compared:

1. Repeatedly executing a single-row insert statement
2. Concatenating one big SQL statement in XML
3. Batch execution

The conclusion up front: for small numbers of rows, just repeat the single-row insert; it is the most convenient. For larger volumes, use batch execution. (A reasonable cutoff is around 20 rows per insert request; at that size, in my tests and database environment, every approach finishes in hundreds of milliseconds, so convenience matters most.) Never use the XML-concatenated SQL approach.

1. Code in the XML mapper file

<insert id="insert" parameterType="top.spanrun.bootssm.model.UserInf" useGeneratedKeys="true" keyProperty="id">
        <!--
        @mbggenerated  generated by MyBatis Generator; note selectKey's order attribute (BEFORE vs AFTER)
        -->
        <!--<selectKey keyProperty="id" order="AFTER" resultType="java.lang.Integer">
            SELECT LAST_INSERT_ID()
        </selectKey>-->
        insert into user_inf (id, uname, passwd, gentle, email, city)
        values (#{id,jdbcType=INTEGER}, #{uname,jdbcType=VARCHAR}, #{passwd,jdbcType=VARCHAR}, 
            #{gentle,jdbcType=VARCHAR}, #{email,jdbcType=VARCHAR}, #{city,jdbcType=VARCHAR}
            )
    </insert>
    <insert id="insertWithXML" parameterType="java.util.List" useGeneratedKeys="true" keyProperty="id">
        insert into user_inf (id, uname, passwd, gentle, email, city)
        values
        <foreach collection="list" item="user" index="index" separator=",">
            (#{user.id,jdbcType=INTEGER}, #{user.uname,jdbcType=VARCHAR}, #{user.passwd,jdbcType=VARCHAR},
            #{user.gentle,jdbcType=VARCHAR}, #{user.email,jdbcType=VARCHAR}, #{user.city,jdbcType=VARCHAR})
        </foreach>
    </insert>
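For illustration (this SQL is not part of the original project), with a two-element list the foreach above expands into a single multi-row statement, with every value bound as a JDBC placeholder:

```sql
-- sketch of the statement insertWithXML produces for a 2-element list
insert into user_inf (id, uname, passwd, gentle, email, city)
values (?, ?, ?, ?, ?, ?),
       (?, ?, ?, ?, ?, ?)
```

This is why the statement's packet size grows linearly with the list size, which matters later when the max_allowed_packet limit comes up.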

 

2. Mapper interface

@Mapper
public interface UserInfMapper {

    int insert(UserInf record);

    int insertWithXML(@Param("list") List<UserInf> list);
}

 

3. Service implementation (the interface declaration is omitted)

@Service
public class UserInfServiceImpl implements UserInfService{
    private static final Logger LOGGER = LoggerFactory.getLogger(UserInfServiceImpl.class);

    @Autowired
    SqlSessionFactory sqlSessionFactory;

    @Autowired
    UserInfMapper userInfMapper;

    @Transactional
    @Override
    public boolean testInsertWithBatch(List<UserInf> list) {
        LOGGER.info(">>>>>>>>>>>testInsertWithBatch start<<<<<<<<<<<<<<");
        SqlSession sqlSession = sqlSessionFactory.openSession(ExecutorType.BATCH,false);
        UserInfMapper mapper = sqlSession.getMapper(UserInfMapper.class);

        long startTime = System.nanoTime();

        try {
            for (int i = 0; i < list.size(); i++) {
                mapper.insert(list.get(i));
                // flush and commit every 1000 rows
                if ((i + 1) % 1000 == 0) {
                    sqlSession.commit();
                    sqlSession.clearCache();
                }
            }
            sqlSession.commit(); // commit the remaining rows
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            sqlSession.close();
        }
        LOGGER.info("testInsertWithBatch spend time:{}", System.nanoTime() - startTime);
        LOGGER.info(">>>>>>>>>>>testInsertWithBatch end<<<<<<<<<<<<<<");
        return true;
    }

    @Transactional
    @Override
    public boolean testInsertWithXml(List<UserInf> list) {
        LOGGER.info(">>>>>>>>>>>testInsertWithXml start<<<<<<<<<<<<<<");
        long startTime = System.nanoTime();
        userInfMapper.insertWithXML(list);
        LOGGER.info("testInsertWithXml spend time:{}", System.nanoTime() - startTime);
        LOGGER.info(">>>>>>>>>>>testInsertWithXml end<<<<<<<<<<<<<<");
        return true;
    }

    @Transactional
    @Override
    public boolean testInsertWithForeach(List<UserInf> list) {
        LOGGER.info(">>>>>>>>>>>testInsertWithForeach start<<<<<<<<<<<<<<");
        long startTime = System.nanoTime();
        for (int i = 0; i < list.size(); i++) {
            userInfMapper.insert(list.get(i));
        }
        LOGGER.info("testInsertWithForeach spend time:{}", System.nanoTime() - startTime);
        LOGGER.info(">>>>>>>>>>>testInsertWithForeach end<<<<<<<<<<<<<<");
        return true;
    }

    @Transactional
    @Override
    public boolean testInsert(UserInf userInf) {
        LOGGER.info(">>>>>>>>>>>testInsert start<<<<<<<<<<<<<<");
        long startTime = System.nanoTime();
        LOGGER.info("insert before,id=" + userInf.getId());
        userInfMapper.insert(userInf);
        LOGGER.info("insert after,id=" + userInf.getId());
        LOGGER.info("testInsert spend time:{}", System.nanoTime() - startTime);
        LOGGER.info(">>>>>>>>>>>testInsert end<<<<<<<<<<<<<<");
        return true;
    }
}
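The "commit every 1000 rows" pattern in the batch method can also be factored out into a small generic helper. This is a hypothetical sketch (BatchChunker is not part of the original project): it splits a list into fixed-size chunks so that each chunk can be flushed and committed as one batch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: split a list into fixed-size chunks, mirroring the
// (i + 1) % 1000 check in testInsertWithBatch. Each chunk would then be
// inserted and committed as one batch.
public class BatchChunker {
    public static <T> List<List<T>> chunk(List<T> items, int batchSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int from = 0; from < items.size(); from += batchSize) {
            int to = Math.min(from + batchSize, items.size());
            // copy the view so the chunk stays valid independently of items
            chunks.add(new ArrayList<>(items.subList(from, to)));
        }
        return chunks;
    }
}
```

Splitting 2500 rows with a batch size of 1000 yields chunks of 1000, 1000, and 500, which matches what the loop above commits.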

 

4. Controller

@RestController
public class UserInfController {

    @Autowired
    UserInfService userInfService;

    @RequestMapping(value = "test/{size}/{type}")
    public void testInsert(@PathVariable(value = "size") Integer size,@PathVariable(value = "type") Integer type){
        System.out.println(">>>>>>>>>>>>type = " + type + "<<<<<<<<<<<<<");
        switch (type){
            case 1:
                userInfService.testInsertWithForeach(generateList(size));
                break;
            case 2:
                userInfService.testInsertWithXml(generateList(size));
                break;
            case 3:
                userInfService.testInsertWithBatch(generateList(size));
                break;
            default:
                UserInf userInf = new UserInf();
                userInf.setUname("user_single");
                userInf.setGentle("1");
                userInf.setEmail("123@123.com");
                userInf.setCity("广州市");
                userInf.setPasswd("123456");
                userInfService.testInsert(userInf);
        }

    }

    private List<UserInf> generateList(int listSize){
        List<UserInf> list = Lists.newArrayList();

        UserInf userInf = null;
        for (int i = 0; i < listSize; i++) {
            userInf = new UserInf();
            userInf.setUname("user_" + i);
            userInf.setGentle("1");
            userInf.setEmail("123@123.com");
            userInf.setCity("广州市");
            userInf.setPasswd("123456");
            list.add(userInf);
        }

        return list;
    }
}

  

Test results (unit: nanoseconds):

1000
testInsertWithForeach spend time:431526521
testInsertWithXml     spend time:118772867
testInsertWithBatch   spend time:175602346

10000
testInsertWithForeach spend time:2072525050
testInsertWithXml     spend time:685605121
testInsertWithBatch   spend time:894647254

100000
testInsertWithForeach spend time:18950160161
testInsertWithBatch   spend time:8469312537


testInsertWithXml failed with an error:
### Cause: com.mysql.jdbc.PacketTooBigException: Packet for query is too large (9388970 > 4194304). You can change this value on the server by setting the max_allowed_packet' variable.

From the exception thrown by the XML-concatenated SQL, the query packet is capped at 4194304 bytes (4 MB), which is another reason this approach is not recommended.
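Purely for reference, the server-side limit can be inspected and raised; note that this requires administrative privileges, and a SET GLOBAL change only lasts until the server restarts unless it is also written to my.cnf:

```sql
-- check the current limit, in bytes
SHOW VARIABLES LIKE 'max_allowed_packet';

-- raise it to 64 MB for new connections
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;
```

Even with a higher limit, the XML-concatenated approach still fails at some data volume, so raising the limit is a stopgap, not a fix.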

Conclusion

Looping over single-row inserts is very inefficient, but the amount of code is minimal. With the tk.Mapper plugin, all it takes is:

@Transactional
public void add1(List<Item> itemList) {
    itemList.forEach(itemMapper::insertSelective);
}

So when only a small number of rows needs to be inserted, this is clearly the way to go.

XML-concatenated SQL is the least recommended approach: it means writing large chunks of XML and SQL, which is error-prone and bad for productivity. Worse, although its performance looks decent, it falls over precisely when you actually need performance; what good is that?

Batch execution is the recommended approach for inserting large volumes of data, and it is fairly convenient to use as well.

 

Additional notes from real-world use:

1. Pitfalls in code generated by MyBatis Generator

Background: the database is MySQL with an auto-increment primary key; in the mapper.xml produced by MyBatis Generator, the auto-increment ID is obtained through selectKey.

Symptom: the first insert succeeds, but the next one fails because the ID is still the previous row's value, so the duplicate primary key aborts the insert. The exception:

Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '4' for key 'PRIMARY'

Cause: the order attribute was set to BEFORE instead of AFTER:

<selectKey keyProperty="id" order="BEFORE" resultType="java.lang.Integer">
  SELECT LAST_INSERT_ID()
</selectKey>

Note that Oracle (sequence-based keys) uses BEFORE, while MySQL's LAST_INSERT_ID() must run AFTER the insert.
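For MySQL, the corrected mapping therefore looks like the commented-out snippet in the mapper at the top of this article:

```xml
<selectKey keyProperty="id" order="AFTER" resultType="java.lang.Integer">
    SELECT LAST_INSERT_ID()
</selectKey>
```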

When generating code with MyBatis Generator, you can control whether the generated selectKey is BEFORE or AFTER via identity="true":

<generatedKey column="id" sqlStatement="Mysql" identity="true" />

Note: using useGeneratedKeys="true" keyProperty="id" on the insert element (as in the mapper above) avoids this problem entirely.

 

2. MyBatis version

Upgrade MyBatis to 3.3.1: retrieving generated keys under batch execution requires at least this version.

 

3. When concatenating batch-insert SQL in XML, note that foreach does not use open and close; those are needed for batch query/update/delete (e.g. to build an IN (...) list):

<foreach collection="list" item="user" index="index" separator=",">
            (#{user.id,jdbcType=INTEGER}, #{user.uname,jdbcType=VARCHAR}, #{user.passwd,jdbcType=VARCHAR},
            #{user.gentle,jdbcType=VARCHAR}, #{user.email,jdbcType=VARCHAR}, #{user.city,jdbcType=VARCHAR})
        </foreach>
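By contrast, a batch delete needs open/close to wrap the id list in parentheses. A hypothetical example (deleteByIds is not part of the original project):

```xml
<delete id="deleteByIds" parameterType="java.util.List">
    delete from user_inf where id in
    <foreach collection="list" item="id" index="index" open="(" separator="," close=")">
        #{id}
    </foreach>
</delete>
```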

  

 4. Things to watch when using batch commits

  a. Transactions

    Under Spring integration, the transactional connection is managed by Spring (SpringManagedTransaction), so there is no need to close the sqlSession manually, and calling commit or rollback on it by hand has no effect either.

   b. Batch execution

    Batch execution applies only to insert, update, and delete statements.

    Also, if another database operation is interleaved while the same SQL statement is being batched, the batch degrades to ordinary statement-by-statement execution, so keep the SQL execution order under control when batching.

 

posted on 2018-10-18 17:14 by runmove