1. The official description of keepAlive

 

keepAlive (default: false; added in 1.0.28)
For connections within the pool's minIdle count, when the idle time exceeds minEvictableIdleTimeMillis, a keepAlive operation is performed.
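keepAlive can be enabled either through the setter or through the druid.keepAlive property (the test further down uses the property). A minimal configuration sketch; the values are illustrative, not recommendations:

// Sketch only: illustrative values, not recommendations.
DruidDataSource dataSource = new DruidDataSource();
dataSource.setKeepAlive(true);                      // same effect as druid.keepAlive=true
dataSource.setMinIdle(10);                          // keepAlive protects up to minIdle connections
dataSource.setMinEvictableIdleTimeMillis(300000);   // idle threshold that triggers the check
dataSource.setValidationQuery("select 1");          // used by validateConnection during keepAlive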

2. This parameter strictly keeps the pool's connections alive, and it takes effect in more places than the single one the official description covers

In the DruidDataSource.init method, before the keepAlive check: if async init is not enabled and initialSize is configured, the pool creates the initial connections synchronously, then starts the connection-creator thread, which keeps creating connections until the pool reaches the configured minimum. After all that, it still performs the keepAlive check below; my reading is that this check exists to verify once more that the connections created earlier are actually alive.
if (createScheduler != null && asyncInit) {
    // async init: hand the initialSize creations to the scheduler
    for (int i = 0; i < initialSize; ++i) {
        submitCreateTask(true);
    }
} else if (!asyncInit) {
    // sync init: create connections inline until initialSize is reached
    while (poolingCount < initialSize) {
        try {
            PhysicalConnectionInfo pyConnectInfo = createPhysicalConnection();
            DruidConnectionHolder holder = new DruidConnectionHolder(this, pyConnectInfo);
            connections[poolingCount++] = holder;
        } catch (SQLException ex) {
            LOG.error("init datasource error, url: " + this.getUrl(), ex);
            if (initExceptionThrow) {
                connectError = ex;
                break;
            } else {
                Thread.sleep(3000);
            }
        }
    }

    if (poolingCount > 0) {
        poolingPeak = poolingCount;
        poolingPeakTime = System.currentTimeMillis();
    }
}

createAndLogThread();
createAndStartCreatorThread();
createAndStartDestroyThread();

initedLatch.await();
init = true;

initedTime = new Date();
registerMbean();

if (connectError != null && poolingCount == 0) {
    throw connectError;
}

if (keepAlive) {
    // async fill to minIdle
    if (createScheduler != null) {
        for (int i = 0; i < minIdle; ++i) {
            submitCreateTask(true);
        }
    } else {
        // wake the creator thread so it tops the pool up to minIdle
        this.emptySignal();
    }
}
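In the non-scheduler branch, emptySignal wakes the CreateConnectionThread blocked on the empty condition, and that thread (its loop is shown at the end of this post) keeps creating connections until activeCount + poolingCount reaches minIdle.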

Two additional points:

1. Why does the pool need an initialSize parameter to initialize connections, when it then starts a creator thread anyway? If initialSize were 0, that thread would create connections until the minimum count is reached. I ran an experiment and drew a preliminary conclusion: the creator thread gives no timing guarantee for bringing the pool up to the minimum. And suppose the pool has just reached the minimum but all of those connections are checked out; the remaining threads can only wait. initialSize is meant to be sized to the caller's traffic, so most of the time initialSize connections are enough. In other words, only the very first initialization is slow; after that, getting a connection is consistently fast, without jitter, as sketched below.
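A sketch of that takeaway, with illustrative values (initialSize is typically set equal to minIdle):

// Sketch only: pre-warm the pool so the first requests skip connection-creation latency.
DruidDataSource dataSource = new DruidDataSource();
dataSource.setInitialSize(10);   // created synchronously during init (when asyncInit is off)
dataSource.setMinIdle(10);
dataSource.setMaxActive(20);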

With initialSize left unset, the KeepAliveTest output below shows that the creator thread cannot guarantee when the pool reaches the minimum connection count:

        dataSource.setPoolPreparedStatements(true);
        dataSource.setMinIdle(10);
        dataSource.setMaxActive(20);
//        dataSource.setMinEvictableIdleTimeMillis(30000);
//        dataSource.setMaxEvictableIdleTimeMillis(30000);
        dataSource.setTimeBetweenEvictionRunsMillis(10);
        dataSource.setFilters("log4j");
        dataSource.setValidationQuery("select 1");

        Properties properties = new Properties();
        properties.put("druid.keepAlive", "true");
        dataSource.configFromPropety(properties);

 

2022-05-19 00:20:33,198 [INFO ] ClickHouseDriver:42 - Driver registered
2022-05-19 00:20:33,704 [INFO ] DruidDataSource:995 - {dataSource-1} inited
poolingCount : 0
... ("poolingCount : 0" is printed 36 times in a row here; the pool stays empty for about half a second after init returns)
2022-05-19 00:20:34,254 [DEBUG] Connection:132 - {conn-10001,procId-128} connected
poolingCount : 0
2022-05-19 00:20:34,274 [DEBUG] Statement:137 - {conn-10001, stmt-20000} created
poolingCount : 0
poolingCount : 0
poolingCount : 0
2022-05-19 00:20:34,321 [DEBUG] Statement:137 - {conn-10001, stmt-20000, rs-50000} query executed. 26.8684 millis. select 1
2022-05-19 00:20:34,321 [DEBUG] ResultSet:142 - {conn-10001, stmt-20000, rs-50000} open
poolingCount : 0
poolingCount : 0
poolingCount : 0
2022-05-19 00:20:34,392 [DEBUG] ResultSet:142 - {conn-10001, stmt-20000, rs-50000} Header: [1]
2022-05-19 00:20:34,393 [DEBUG] ResultSet:142 - {conn-10001, stmt-20000, rs-50000} closed
2022-05-19 00:20:34,393 [DEBUG] Statement:137 - {conn-10001, stmt-20000} closed
poolingCount : 0
poolingCount : 1
2022-05-19 00:20:34,423 [DEBUG] Connection:132 - {conn-10002,procId-129} connected
2022-05-19 00:20:34,424 [DEBUG] Statement:137 - {conn-10002, stmt-20001} created
2022-05-19 00:20:34,424 [DEBUG] Statement:137 - {conn-10002, stmt-20001, rs-50001} query executed. 0.5225 millis. select 1
2022-05-19 00:20:34,424 [DEBUG] ResultSet:142 - {conn-10002, stmt-20001, rs-50001} open
2022-05-19 00:20:34,424 [DEBUG] ResultSet:142 - {conn-10002, stmt-20001, rs-50001} Header: [1]
2022-05-19 00:20:34,425 [DEBUG] ResultSet:142 - {conn-10002, stmt-20001, rs-50001} closed
2022-05-19 00:20:34,425 [DEBUG] Statement:137 - {conn-10002, stmt-20001} closed
2022-05-19 00:20:34,430 [DEBUG] Connection:132 - {conn-10003,procId-130} connected
2022-05-19 00:20:34,431 [DEBUG] Statement:137 - {conn-10003, stmt-20002} created
2022-05-19 00:20:34,431 [DEBUG] Statement:137 - {conn-10003, stmt-20002, rs-50002} query executed. 0.4103 millis. select 1
2022-05-19 00:20:34,431 [DEBUG] ResultSet:142 - {conn-10003, stmt-20002, rs-50002} open
2022-05-19 00:20:34,431 [DEBUG] ResultSet:142 - {conn-10003, stmt-20002, rs-50002} Header: [1]
2022-05-19 00:20:34,431 [DEBUG] ResultSet:142 - {conn-10003, stmt-20002, rs-50002} closed
2022-05-19 00:20:34,431 [DEBUG] Statement:137 - {conn-10003, stmt-20002} closed
2022-05-19 00:20:34,437 [DEBUG] Connection:132 - {conn-10004,procId-131} connected
2022-05-19 00:20:34,438 [DEBUG] Statement:137 - {conn-10004, stmt-20003} created
2022-05-19 00:20:34,438 [DEBUG] Statement:137 - {conn-10004, stmt-20003, rs-50003} query executed. 0.3958 millis. select 1
2022-05-19 00:20:34,438 [DEBUG] ResultSet:142 - {conn-10004, stmt-20003, rs-50003} open
2022-05-19 00:20:34,438 [DEBUG] ResultSet:142 - {conn-10004, stmt-20003, rs-50003} Header: [1]
2022-05-19 00:20:34,438 [DEBUG] ResultSet:142 - {conn-10004, stmt-20003, rs-50003} closed
poolingCount : 3
2022-05-19 00:20:34,439 [DEBUG] Statement:137 - {conn-10004, stmt-20003} closed
2022-05-19 00:20:34,444 [DEBUG] Connection:132 - {conn-10005,procId-132} connected
2022-05-19 00:20:34,444 [DEBUG] Statement:137 - {conn-10005, stmt-20004} created
2022-05-19 00:20:34,445 [DEBUG] Statement:137 - {conn-10005, stmt-20004, rs-50004} query executed. 0.5292 millis. select 1
2022-05-19 00:20:34,445 [DEBUG] ResultSet:142 - {conn-10005, stmt-20004, rs-50004} open
2022-05-19 00:20:34,445 [DEBUG] ResultSet:142 - {conn-10005, stmt-20004, rs-50004} Header: [1]
2022-05-19 00:20:34,446 [DEBUG] ResultSet:142 - {conn-10005, stmt-20004, rs-50004} closed
2022-05-19 00:20:34,446 [DEBUG] Statement:137 - {conn-10005, stmt-20004} closed
2022-05-19 00:20:34,451 [DEBUG] Connection:132 - {conn-10006,procId-133} connected
2022-05-19 00:20:34,451 [DEBUG] Statement:137 - {conn-10006, stmt-20005} created
2022-05-19 00:20:34,452 [DEBUG] Statement:137 - {conn-10006, stmt-20005, rs-50005} query executed. 0.4187 millis. select 1
2022-05-19 00:20:34,452 [DEBUG] ResultSet:142 - {conn-10006, stmt-20005, rs-50005} open
2022-05-19 00:20:34,452 [DEBUG] ResultSet:142 - {conn-10006, stmt-20005, rs-50005} Header: [1]
2022-05-19 00:20:34,452 [DEBUG] ResultSet:142 - {conn-10006, stmt-20005, rs-50005} closed
2022-05-19 00:20:34,452 [DEBUG] Statement:137 - {conn-10006, stmt-20005} closed
poolingCount : 6
2022-05-19 00:20:34,456 [DEBUG] Connection:132 - {conn-10007,procId-134} connected
2022-05-19 00:20:34,456 [DEBUG] Statement:137 - {conn-10007, stmt-20006} created
2022-05-19 00:20:34,457 [DEBUG] Statement:137 - {conn-10007, stmt-20006, rs-50006} query executed. 0.7669 millis. select 1
2022-05-19 00:20:34,457 [DEBUG] ResultSet:142 - {conn-10007, stmt-20006, rs-50006} open
2022-05-19 00:20:34,457 [DEBUG] ResultSet:142 - {conn-10007, stmt-20006, rs-50006} Header: [1]
2022-05-19 00:20:34,457 [DEBUG] ResultSet:142 - {conn-10007, stmt-20006, rs-50006} closed
2022-05-19 00:20:34,457 [DEBUG] Statement:137 - {conn-10007, stmt-20006} closed
2022-05-19 00:20:34,462 [DEBUG] Connection:132 - {conn-10008,procId-135} connected
2022-05-19 00:20:34,463 [DEBUG] Statement:137 - {conn-10008, stmt-20007} created
2022-05-19 00:20:34,463 [DEBUG] Statement:137 - {conn-10008, stmt-20007, rs-50007} query executed. 0.4346 millis. select 1
2022-05-19 00:20:34,463 [DEBUG] ResultSet:142 - {conn-10008, stmt-20007, rs-50007} open
2022-05-19 00:20:34,464 [DEBUG] ResultSet:142 - {conn-10008, stmt-20007, rs-50007} Header: [1]
2022-05-19 00:20:34,464 [DEBUG] ResultSet:142 - {conn-10008, stmt-20007, rs-50007} closed
2022-05-19 00:20:34,464 [DEBUG] Statement:137 - {conn-10008, stmt-20007} closed
2022-05-19 00:20:34,468 [DEBUG] Connection:132 - {conn-10009,procId-136} connected
2022-05-19 00:20:34,469 [DEBUG] Statement:137 - {conn-10009, stmt-20008} created
2022-05-19 00:20:34,469 [DEBUG] Statement:137 - {conn-10009, stmt-20008, rs-50008} query executed. 0.503 millis. select 1
2022-05-19 00:20:34,469 [DEBUG] ResultSet:142 - {conn-10009, stmt-20008, rs-50008} open
poolingCount : 8
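Reading the output: init returned at 00:20:33,704, yet the first physical connection was only established at 00:20:34,254, about 550 ms later, and by the last line shown the pool had reached only 8 of the 10 minIdle connections. Any caller arriving in that window would have had to wait for a connection.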

 

2. The pool's backing arrays are allocated at maxActive size from the very beginning, so creating connections later never requires growing them. This may be one reason Druid performs well: on today's CPUs, the cost of walking a maxActive-sized array during pool operations is negligible, whereas growing arrays costs extra memory and risks the larger price of the GC activity that follows.

            connections = new DruidConnectionHolder[maxActive];
            evictConnections = new DruidConnectionHolder[maxActive];
            keepAliveConnections = new DruidConnectionHolder[maxActive];
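For scale: with maxActive = 20 as in the test above, each of the three arrays holds 20 references, on the order of 160 bytes apiece on a 64-bit JVM, so the up-front cost of pre-sizing is trivial next to the copying and garbage a growable design would generate.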

3. keepAlive really is strict: shrink checks it as well. If the number of live connections in the pool falls below the minimum, it notifies the creator thread to create more:

public void shrink(boolean checkTime, boolean keepAlive) {
        try {
            lock.lockInterruptibly();
        } catch (InterruptedException e) {
            return;
        }

        boolean needFill = false;
        int evictCount = 0;
        int keepAliveCount = 0;
        int fatalErrorIncrement = fatalErrorCount - fatalErrorCountLastShrink;
        fatalErrorCountLastShrink = fatalErrorCount;
        
        try {
            if (!inited) {
                return;
            }

            // the first (poolingCount - minIdle) entries are the longest-idle connections
            // and are the eviction candidates
            final int checkCount = poolingCount - minIdle;
            final long currentTimeMillis = System.currentTimeMillis();
            for (int i = 0; i < poolingCount; ++i) {
                DruidConnectionHolder connection = connections[i];

                if ((onFatalError || fatalErrorIncrement > 0) && (lastFatalErrorTimeMillis > connection.connectTimeMillis))  {
                    keepAliveConnections[keepAliveCount++] = connection;
                    continue;
                }

                if (checkTime) {
                    if (phyTimeoutMillis > 0) {
                        long phyConnectTimeMillis = currentTimeMillis - connection.connectTimeMillis;
                        if (phyConnectTimeMillis > phyTimeoutMillis) {
                            evictConnections[evictCount++] = connection;
                            continue;
                        }
                    }

                    long idleMillis = currentTimeMillis - connection.lastActiveTimeMillis;

                    // the array is ordered by last-active time, so once one connection is
                    // fresh enough to need neither eviction nor a keepAlive check, the rest are too
                    if (idleMillis < minEvictableIdleTimeMillis
                            && idleMillis < keepAliveBetweenTimeMillis
                    ) {
                        break;
                    }

                    if (idleMillis >= minEvictableIdleTimeMillis) {
                        if (checkTime && i < checkCount) {
                            evictConnections[evictCount++] = connection;
                            continue;
                        } else if (idleMillis > maxEvictableIdleTimeMillis) {
                            evictConnections[evictCount++] = connection;
                            continue;
                        }
                    }

                    if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis) {
                        keepAliveConnections[keepAliveCount++] = connection;
                    }
                } else {
                    if (i < checkCount) {
                        evictConnections[evictCount++] = connection;
                    } else {
                        break;
                    }
                }
            }

            int removeCount = evictCount + keepAliveCount;
            if (removeCount > 0) {
                System.arraycopy(connections, removeCount, connections, 0, poolingCount - removeCount);
                Arrays.fill(connections, poolingCount - removeCount, poolingCount, null);
                poolingCount -= removeCount;
            }
            keepAliveCheckCount += keepAliveCount;

            // the strict part: if live connections (in use + idle) dropped below minIdle
            // during this sweep, schedule a refill
            if (keepAlive && poolingCount + activeCount < minIdle) {
                needFill = true;
            }
        } finally {
            lock.unlock();
        }

        if (evictCount > 0) {
            for (int i = 0; i < evictCount; ++i) {
                DruidConnectionHolder item = evictConnections[i];
                Connection connection = item.getConnection();
                JdbcUtils.close(connection);
                destroyCountUpdater.incrementAndGet(this);
            }
            Arrays.fill(evictConnections, null);
        }

        if (keepAliveCount > 0) {
            // keep order
            for (int i = keepAliveCount - 1; i >= 0; --i) {
                DruidConnectionHolder holer = keepAliveConnections[i];
                Connection connection = holer.getConnection();
                holer.incrementKeepAliveCheckCount();

                boolean validate = false;
                try {
                    this.validateConnection(connection);
                    validate = true;
                } catch (Throwable error) {
                    if (LOG.isDebugEnabled()) {
                        LOG.debug("keepAliveErr", error);
                    }
                    // skip
                }

                boolean discard = !validate;
                if (validate) {
                    holer.lastKeepTimeMillis = System.currentTimeMillis();
                    boolean putOk = put(holer, 0L, true);
                    if (!putOk) {
                        discard = true;
                    }
                }

                if (discard) {
                    try {
                        connection.close();
                    } catch (Exception e) {
                        // skip
                    }

                    lock.lock();
                    try {
                        discardCount++;

                        if (activeCount + poolingCount <= minIdle) {
                            emptySignal();
                        }
                    } finally {
                        lock.unlock();
                    }
                }
            }
            this.getDataSourceStat().addKeepAliveCheckCount(keepAliveCount);
            Arrays.fill(keepAliveConnections, null);
        }

        if (needFill) {
            lock.lock();
            try {
                int fillCount = minIdle - (activeCount + poolingCount + createTaskCount);
                for (int i = 0; i < fillCount; ++i) {
                    emptySignal();
                }
            } finally {
                lock.unlock();
            }
        } else if (onFatalError || fatalErrorIncrement > 0) {
            lock.lock();
            try {
                emptySignal();
            } finally {
                lock.unlock();
            }
        }
    }
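For context, shrink is driven by the destroy thread started in init (createAndStartDestroyThread above), which wakes up every timeBetweenEvictionRunsMillis. The following is paraphrased from my reading of the source, a sketch rather than verbatim Druid code:

// Paraphrased sketch of the periodic driver of shrink; not verbatim Druid source.
public class DestroyTask implements Runnable {
    @Override
    public void run() {
        shrink(true, keepAlive);   // checkTime = true on the timer-driven path

        if (isRemoveAbandoned()) {
            removeAbandoned();     // separate feature: reclaims leaked connections
        }
    }
}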
CreateConnectionThread.run also checks liveness against the minimum:
                    if (emptyWait) {
                        // only create a connection when some thread is waiting for one,
                        // unless keepAlive requires refilling the pool below minIdle
                        if (poolingCount >= notEmptyWaitThreadCount //
                                && (!(keepAlive && activeCount + poolingCount < minIdle))
                                && !isFailContinuous()
                        ) {
                            empty.await();
                        }

                        // never create more than maxActive connections
                        if (activeCount + poolingCount >= maxActive) {
                            empty.await();
                            continue;
                        }
                    }
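Taken together: keepAlive acts at init time (fill to minIdle), in every shrink pass (validate idle connections and refill when the live count drops), and in the creator thread's wait condition, so with it enabled the pool actively holds itself at minIdle live connections.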

  



 
