Spring's JdbcTemplate closes connections automatically

Source: https://blog.csdn.net/partner4java/article/details/7012196

 

The JdbcTemplate class uses the DataSource to obtain a database connection. It then asks the StatementCreator instance to create the statement to execute, and next invokes the StatementCallback to do the actual work.
Once the StatementCallback returns its result, JdbcTemplate performs all the necessary cleanup and releases the connection. If the StatementCreator or the StatementCallback throws an exception, JdbcTemplate catches it and translates it into a Spring DataAccessException.
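To make that division of labour concrete, here is a minimal usage sketch (my own illustration, not from the original article; the table name is hypothetical and dataSource is assumed to be configured elsewhere):

// dataSource is assumed to be a configured (typically pooled) DataSource.
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

// queryForObject obtains a Connection from the DataSource, creates and closes
// the Statement/ResultSet, and releases the Connection in its finally block,
// so the caller never touches the Connection at all.
int count = jdbcTemplate.queryForObject("select count(*) from t_user", Integer.class);  // hypothetical table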

 

 

Let's look at one of the core methods inside JdbcTemplate:

//-------------------------------------------------------------------------
// Methods dealing with prepared statements
//-------------------------------------------------------------------------

public Object execute(PreparedStatementCreator psc, PreparedStatementCallback action)
        throws DataAccessException {

    Assert.notNull(psc, "PreparedStatementCreator must not be null");
    Assert.notNull(action, "Callback object must not be null");
    if (logger.isDebugEnabled()) {
        String sql = getSql(psc);
        logger.debug("Executing prepared SQL statement" + (sql != null ? " [" + sql + "]" : ""));
    }

    Connection con = DataSourceUtils.getConnection(getDataSource());
    PreparedStatement ps = null;
    try {
        Connection conToUse = con;
        if (this.nativeJdbcExtractor != null &&
                this.nativeJdbcExtractor.isNativeConnectionNecessaryForNativePreparedStatements()) {
            conToUse = this.nativeJdbcExtractor.getNativeConnection(con);
        }
        ps = psc.createPreparedStatement(conToUse);
        applyStatementSettings(ps);
        PreparedStatement psToUse = ps;
        if (this.nativeJdbcExtractor != null) {
            psToUse = this.nativeJdbcExtractor.getNativePreparedStatement(ps);
        }
        Object result = action.doInPreparedStatement(psToUse);
        handleWarnings(ps);
        return result;
    }
    catch (SQLException ex) {
        // Release Connection early, to avoid potential connection pool deadlock
        // in the case when the exception translator hasn't been initialized yet.
        if (psc instanceof ParameterDisposer) {
            ((ParameterDisposer) psc).cleanupParameters();
        }
        String sql = getSql(psc);
        psc = null;
        JdbcUtils.closeStatement(ps);
        ps = null;
        DataSourceUtils.releaseConnection(con, getDataSource());
        con = null;
        throw getExceptionTranslator().translate("PreparedStatementCallback", sql, ex);
    }
    finally {
        if (psc instanceof ParameterDisposer) {
            ((ParameterDisposer) psc).cleanupParameters();
        }
        JdbcUtils.closeStatement(ps);
        DataSourceUtils.releaseConnection(con, getDataSource());
    }
}


Clearly, the finally block contains the release call. Now let's look inside that release method:

/**
 * Close the given Connection, obtained from the given DataSource,
 * if it is not managed externally (that is, not bound to the thread).
 * @param con the Connection to close if necessary
 * (if this is <code>null</code>, the call will be ignored)
 * @param dataSource the DataSource that the Connection was obtained from
 * (may be <code>null</code>)
 * @see #getConnection
 */
public static void releaseConnection(Connection con, DataSource dataSource) {
    try {
        doReleaseConnection(con, dataSource);
    }
    catch (SQLException ex) {
        logger.debug("Could not close JDBC Connection", ex);
    }
    catch (Throwable ex) {
        logger.debug("Unexpected exception on closing JDBC Connection", ex);
    }
}

/**
 * Actually close the given Connection, obtained from the given DataSource.
 * Same as {@link #releaseConnection}, but throwing the original SQLException.
 * <p>Directly accessed by {@link TransactionAwareDataSourceProxy}.
 * @param con the Connection to close if necessary
 * (if this is <code>null</code>, the call will be ignored)
 * @param dataSource the DataSource that the Connection was obtained from
 * (may be <code>null</code>)
 * @throws SQLException if thrown by JDBC methods
 * @see #doGetConnection
 */
public static void doReleaseConnection(Connection con, DataSource dataSource) throws SQLException {
    if (con == null) {
        return;
    }

    if (dataSource != null) {
        ConnectionHolder conHolder = (ConnectionHolder) TransactionSynchronizationManager.getResource(dataSource);
        if (conHolder != null && connectionEquals(conHolder, con)) {
            // It's the transactional Connection: Don't close it.
            conHolder.released();
            return;
        }
    }

    // Leave the Connection open only if the DataSource is our
    // special SmartDataSource and it wants the Connection left open.
    if (!(dataSource instanceof SmartDataSource) || ((SmartDataSource) dataSource).shouldClose(con)) {
        logger.debug("Returning JDBC Connection to DataSource");
        con.close();
    }
}


The key part is these few lines:

// Leave the Connection open only if the DataSource is our
// special SmartDataSource and it wants the Connection left open.
if (!(dataSource instanceof SmartDataSource) || ((SmartDataSource) dataSource).shouldClose(con)) {
    logger.debug("Returning JDBC Connection to DataSource");
    con.close();
}


So: in the vast majority of cases the connection is released automatically. The only exception is when the DataSource is a SmartDataSource whose shouldClose(con) returns false, i.e. it asks for the connection to be kept open; otherwise con.close() is called (for a pooled DataSource that simply returns the connection to the pool).
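To illustrate the exception case, here is a sketch of my own (not from the original article): a hypothetical SmartDataSource modelled loosely on Spring's SingleConnectionDataSource, which is itself a SmartDataSource.

// Hypothetical SmartDataSource that hands out one shared Connection
// and tells DataSourceUtils.releaseConnection() not to close it.
public class KeepOpenDataSource extends DelegatingDataSource implements SmartDataSource {

    private final Connection sharedConnection;   // assumed to be managed elsewhere

    public KeepOpenDataSource(DataSource target, Connection sharedConnection) {
        super(target);
        this.sharedConnection = sharedConnection;
    }

    @Override
    public Connection getConnection() {
        return this.sharedConnection;
    }

    @Override
    public boolean shouldClose(Connection con) {
        // Returning false makes doReleaseConnection() skip con.close().
        return con != this.sharedConnection;
    }
}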


Sometimes, even though you are using JdbcTemplate or one of the *DaoSupport classes, you still need to get hold of a Connection yourself, for example:
jdbcTemplate.getDataSource().getConnection()
In that case, closing the connection is your own responsibility.
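A minimal sketch of what that could look like (my own illustration; the method name and SQL are placeholders, and which variant to use depends on whether the code should join Spring-managed transactions):

// Sketch: a helper that uses the same DataSource as the JdbcTemplate.
void doLowLevelWork(JdbcTemplate jdbcTemplate) throws SQLException {
    DataSource ds = jdbcTemplate.getDataSource();

    // Variant 1: plain JDBC. JdbcTemplate knows nothing about this connection,
    // so closing it is our job (for a pooled DataSource, close() returns it to the pool).
    try (Connection con = ds.getConnection()) {
        // ... use con directly, e.g. for vendor-specific calls ...
    }

    // Variant 2: transaction-aware. DataSourceUtils reuses the Connection bound
    // to the current Spring transaction, and releaseConnection() only closes it
    // when it is not the transactional one.
    Connection springCon = DataSourceUtils.getConnection(ds);
    try {
        // ... use springCon ...
    } finally {
        DataSourceUtils.releaseConnection(springCon, ds);
    }
}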

druid-1.1.23 source code analysis: getConnection (source: xiaozaq's blog on CSDN)

Motivation:

Some pages in our project are very slow to load and I wanted to optimize them. They are rarely used and only do simple queries and display, yet the first visit is often extremely slow (>10 s), while the very next visit is fast (<100 ms). It is a multi-datasource project; only a handful of pages behave this way, and they all hit the same database. My suspicion was the network access-control system our organisation runs: connections that stay idle for a long time may be cut, so when the connection pool is asked for a connection on that first visit it has to create a brand-new one before it can hand out anything usable.

I tried many druid settings: testWhileIdle, timeBetweenEvictionRunsMillis, removeAbandoned, removeAbandonedTimeoutMillis, initialSize, minIdle, maxActive, and so on, but the problem persisted.
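For context, a typical combination of those settings looks roughly like this (my own sketch with made-up values, not the author's actual configuration):

DruidDataSource ds = new DruidDataSource();
ds.setUrl(jdbcUrl);                           // assumed to be defined elsewhere
ds.setUsername(user);
ds.setPassword(password);

ds.setInitialSize(5);
ds.setMinIdle(10);
ds.setMaxActive(20);

// Validate idle connections when they are borrowed after sitting idle
// longer than timeBetweenEvictionRunsMillis.
ds.setValidationQuery("select 1 from dual");  // adjust to your database
ds.setTestWhileIdle(true);
ds.setTimeBetweenEvictionRunsMillis(60_000);

// Reclaim connections the application forgot to close.
ds.setRemoveAbandoned(true);
ds.setRemoveAbandonedTimeoutMillis(300_000);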

What the druid monitoring console showed:

After the data source had been unused for a long time, the stats were: physical open count 1, physical close count 0, logical open count 1, logical close count 1, currently open connections 0, pool idle count 1. By that point the connection had been idle so long that it was no longer usable. A side note: my minIdle is clearly set to 10, so why is the pool idle count less than 10?

Then, when business code calls the data source's getConnection(), it first receives that idle connection (which is in fact dead); because testWhileIdle is configured, the validation detects that it is unusable and a new connection has to be created. That is what makes the first page access slow.

Cause found; next, find a fix.

I searched for druid configuration advice and tried everything I found, with no luck. I then suspected a druid version issue and upgraded to the latest release at the time, 1.1.23, still with no luck. So I finally decided to read the druid source.

Stepping through the source in a debugger:

1. Call getConnection to obtain a connection:

dataSource.getConnection()

2. Step into com.alibaba.druid.pool.DruidDataSource.getConnection.

3. init() has already run at this point, so it can be ignored. Because druid monitoring (stat) and the SQL firewall (wall) are configured, filters.size() is 2.

4. Step into com.alibaba.druid.filter.FilterChainImpl.dataSource_connect. At this point this.pos = 0 and filterSize = 2.

5. Look at nextFilter(): it returns the filter at the current position (StatFilter first) and then increments pos by 1.

6. Step into StatFilter.dataSource_getConnection.

7. chain.dataSource_connect goes back to step 4, but now pos is 1, so this time we end up in WallFilter.dataSource_getConnection. WallFilter extends com.alibaba.druid.filter.FilterAdapter and does not override dataSource_getConnection, so what actually runs is FilterAdapter.dataSource_getConnection.

8. That in turn calls chain.dataSource_connect(dataSource, maxWaitMillis), which comes back to step 4 once more; now pos is 2, so this.pos < filterSize is false and the filter branch is skipped. (A simplified sketch of this dispatch pattern follows below.)
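The dispatch pattern in steps 4–8 boils down to the following (a simplified paraphrase for illustration only, not the actual druid classes; names and signatures are condensed):

import java.sql.SQLException;
import java.util.List;

// Stand-ins for druid's DruidPooledConnection / DruidDataSource / Filter.
interface PooledConn {}

interface Pool {
    PooledConn getConnectionDirect(long maxWaitMillis) throws SQLException;
}

interface PoolFilter {
    PooledConn dataSource_getConnection(SimpleFilterChain chain, long maxWaitMillis) throws SQLException;
}

class SimpleFilterChain {
    private final List<PoolFilter> filters;   // here: [StatFilter, WallFilter]
    private final Pool dataSource;
    private int pos = 0;                       // advances on every re-entry: 0 -> 1 -> 2

    SimpleFilterChain(List<PoolFilter> filters, Pool dataSource) {
        this.filters = filters;
        this.dataSource = dataSource;
    }

    PooledConn dataSource_connect(long maxWaitMillis) throws SQLException {
        if (pos < filters.size()) {
            // nextFilter(): hand off to the filter at the current position, then
            // advance the cursor. The filter calls back into this chain, so the
            // next re-entry reaches the next filter.
            return filters.get(pos++).dataSource_getConnection(this, maxWaitMillis);
        }
        // All filters consumed (pos == filterSize): finally call the pool itself.
        return dataSource.getConnectionDirect(maxWaitMillis);
    }
}

// A FilterAdapter-style filter that does nothing but continue the chain,
// which is effectively what happens for WallFilter in step 7.
class PassThroughFilter implements PoolFilter {
    @Override
    public PooledConn dataSource_getConnection(SimpleFilterChain chain, long maxWaitMillis) throws SQLException {
        return chain.dataSource_connect(maxWaitMillis);
    }
}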

9. dataSource.getConnectionDirect(maxWaitMillis) is executed. It is rather long, so instead of a screenshot here is the source:

 
public DruidPooledConnection getConnectionDirect(long maxWaitMillis) throws SQLException {
    int notFullTimeoutRetryCnt = 0;
    for (;;) {
        // handle notFullTimeoutRetry
        DruidPooledConnection poolableConnection;
        try {
            poolableConnection = getConnectionInternal(maxWaitMillis);
        } catch (GetConnectionTimeoutException ex) {
            if (notFullTimeoutRetryCnt <= this.notFullTimeoutRetryCount && !isFull()) {
                notFullTimeoutRetryCnt++;
                if (LOG.isWarnEnabled()) {
                    LOG.warn("get connection timeout retry : " + notFullTimeoutRetryCnt);
                }
                continue;
            }
            throw ex;
        }

        if (testOnBorrow) {
            boolean validate = testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
            if (!validate) {
                if (LOG.isDebugEnabled()) {
                    LOG.debug("skip not validate connection.");
                }

                discardConnection(poolableConnection.holder);
                continue;
            }
        } else {
            if (poolableConnection.conn.isClosed()) {
                discardConnection(poolableConnection.holder); // pass in null to avoid a duplicate close
                continue;
            }

            if (testWhileIdle) {
                final DruidConnectionHolder holder = poolableConnection.holder;
                long currentTimeMillis = System.currentTimeMillis();
                long lastActiveTimeMillis = holder.lastActiveTimeMillis;
                long lastExecTimeMillis = holder.lastExecTimeMillis;
                long lastKeepTimeMillis = holder.lastKeepTimeMillis;

                if (checkExecuteTime
                        && lastExecTimeMillis != lastActiveTimeMillis) {
                    lastActiveTimeMillis = lastExecTimeMillis;
                }

                if (lastKeepTimeMillis > lastActiveTimeMillis) {
                    lastActiveTimeMillis = lastKeepTimeMillis;
                }

                long idleMillis = currentTimeMillis - lastActiveTimeMillis;

                long timeBetweenEvictionRunsMillis = this.timeBetweenEvictionRunsMillis;

                if (timeBetweenEvictionRunsMillis <= 0) {
                    timeBetweenEvictionRunsMillis = DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS;
                }

                if (idleMillis >= timeBetweenEvictionRunsMillis
                        || idleMillis < 0 // unexcepted branch
                ) {
                    boolean validate = testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
                    if (!validate) {
                        if (LOG.isDebugEnabled()) {
                            LOG.debug("skip not validate connection.");
                        }

                        discardConnection(poolableConnection.holder);
                        continue;
                    }
                }
            }
        }

        if (removeAbandoned) {
            StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
            poolableConnection.connectStackTrace = stackTrace;
            poolableConnection.setConnectedTimeNano();
            poolableConnection.traceEnable = true;

            activeConnectionLock.lock();
            try {
                activeConnections.put(poolableConnection, PRESENT);
            } finally {
                activeConnectionLock.unlock();
            }
        }

        if (!this.defaultAutoCommit) {
            poolableConnection.setAutoCommit(false);
        }

        return poolableConnection;
    }
}
 

10. getConnectionInternal is called to obtain a connection:

 
private DruidPooledConnection getConnectionInternal(long maxWait) throws SQLException {
    if (closed) {
        connectErrorCountUpdater.incrementAndGet(this);
        throw new DataSourceClosedException("dataSource already closed at " + new Date(closeTimeMillis));
    }

    if (!enable) {
        connectErrorCountUpdater.incrementAndGet(this);

        if (disableException != null) {
            throw disableException;
        }

        throw new DataSourceDisableException();
    }

    final long nanos = TimeUnit.MILLISECONDS.toNanos(maxWait);
    final int maxWaitThreadCount = this.maxWaitThreadCount;

    DruidConnectionHolder holder;

    for (boolean createDirect = false;;) {
        if (createDirect) {
            createStartNanosUpdater.set(this, System.nanoTime());
            if (creatingCountUpdater.compareAndSet(this, 0, 1)) {
                PhysicalConnectionInfo pyConnInfo = DruidDataSource.this.createPhysicalConnection();
                holder = new DruidConnectionHolder(this, pyConnInfo);
                holder.lastActiveTimeMillis = System.currentTimeMillis();

                creatingCountUpdater.decrementAndGet(this);
                directCreateCountUpdater.incrementAndGet(this);

                if (LOG.isDebugEnabled()) {
                    LOG.debug("conn-direct_create ");
                }

                boolean discard = false;
                lock.lock();
                try {
                    if (activeCount < maxActive) {
                        activeCount++;
                        holder.active = true;
                        if (activeCount > activePeak) {
                            activePeak = activeCount;
                            activePeakTime = System.currentTimeMillis();
                        }
                        break;
                    } else {
                        discard = true;
                    }
                } finally {
                    lock.unlock();
                }

                if (discard) {
                    JdbcUtils.close(pyConnInfo.getPhysicalConnection());
                }
            }
        }

        try {
            lock.lockInterruptibly();
        } catch (InterruptedException e) {
            connectErrorCountUpdater.incrementAndGet(this);
            throw new SQLException("interrupt", e);
        }

        try {
            if (maxWaitThreadCount > 0
                    && notEmptyWaitThreadCount >= maxWaitThreadCount) {
                connectErrorCountUpdater.incrementAndGet(this);
                throw new SQLException("maxWaitThreadCount " + maxWaitThreadCount + ", current wait Thread count "
                        + lock.getQueueLength());
            }

            if (onFatalError
                    && onFatalErrorMaxActive > 0
                    && activeCount >= onFatalErrorMaxActive) {
                connectErrorCountUpdater.incrementAndGet(this);

                StringBuilder errorMsg = new StringBuilder();
                errorMsg.append("onFatalError, activeCount ")
                        .append(activeCount)
                        .append(", onFatalErrorMaxActive ")
                        .append(onFatalErrorMaxActive);

                if (lastFatalErrorTimeMillis > 0) {
                    errorMsg.append(", time '")
                            .append(StringUtils.formatDateTime19(
                                    lastFatalErrorTimeMillis, TimeZone.getDefault()))
                            .append("'");
                }

                if (lastFatalErrorSql != null) {
                    errorMsg.append(", sql \n")
                            .append(lastFatalErrorSql);
                }

                throw new SQLException(
                        errorMsg.toString(), lastFatalError);
            }

            connectCount++;

            if (createScheduler != null
                    && poolingCount == 0
                    && activeCount < maxActive
                    && creatingCountUpdater.get(this) == 0
                    && createScheduler instanceof ScheduledThreadPoolExecutor) {
                ScheduledThreadPoolExecutor executor = (ScheduledThreadPoolExecutor) createScheduler;
                if (executor.getQueue().size() > 0) {
                    createDirect = true;
                    continue;
                }
            }

            if (maxWait > 0) {
                holder = pollLast(nanos);
            } else {
                holder = takeLast();
            }

            if (holder != null) {
                if (holder.discard) {
                    continue;
                }

                activeCount++;
                holder.active = true;
                if (activeCount > activePeak) {
                    activePeak = activeCount;
                    activePeakTime = System.currentTimeMillis();
                }
            }
        } catch (InterruptedException e) {
            connectErrorCountUpdater.incrementAndGet(this);
            throw new SQLException(e.getMessage(), e);
        } catch (SQLException e) {
            connectErrorCountUpdater.incrementAndGet(this);
            throw e;
        } finally {
            lock.unlock();
        }

        break;
    }

    if (holder == null) {
        long waitNanos = waitNanosLocal.get();

        StringBuilder buf = new StringBuilder(128);
        buf.append("wait millis ")//
           .append(waitNanos / (1000 * 1000))//
           .append(", active ").append(activeCount)//
           .append(", maxActive ").append(maxActive)//
           .append(", creating ").append(creatingCount)//
        ;
        if (creatingCount > 0 && createStartNanos > 0) {
            long createElapseMillis = (System.nanoTime() - createStartNanos) / (1000 * 1000);
            if (createElapseMillis > 0) {
                buf.append(", createElapseMillis ").append(createElapseMillis);
            }
        }

        if (createErrorCount > 0) {
            buf.append(", createErrorCount ").append(createErrorCount);
        }

        List<JdbcSqlStatValue> sqlList = this.getDataSourceStat().getRuningSqlList();
        for (int i = 0; i < sqlList.size(); ++i) {
            if (i != 0) {
                buf.append('\n');
            } else {
                buf.append(", ");
            }
            JdbcSqlStatValue sql = sqlList.get(i);
            buf.append("runningSqlCount ").append(sql.getRunningCount());
            buf.append(" : ");
            buf.append(sql.getSql());
        }

        String errorMessage = buf.toString();

        if (this.createError != null) {
            throw new GetConnectionTimeoutException(errorMessage, createError);
        } else {
            throw new GetConnectionTimeoutException(errorMessage);
        }
    }

    holder.incrementUseCount();

    DruidPooledConnection poolalbeConnection = new DruidPooledConnection(holder);
    return poolalbeConnection;
}
 

11. Continuing to step through, I found where the logical open count is tracked.

12. Further on is where the count of currently open (active) connections is tracked. The holder wraps druid's connection object conn.

13. Next, execution reaches the break at line 1692 of the source file, which exits the for loop.

14. poolableConnection is returned back to poolableConnection in step 10, i.e. back inside getConnectionDirect. Continue stepping. testOnBorrow is normally left at its default of false, since checking on every borrow hurts performance.

15. Next come the check for whether the connection has been closed, and the idle check controlled by testWhileIdle.

16. This takes us into DruidAbstractDataSource.testConnectionInternal:

 
protected boolean testConnectionInternal(DruidConnectionHolder holder, Connection conn) {
    String sqlFile = JdbcSqlStat.getContextSqlFile();
    String sqlName = JdbcSqlStat.getContextSqlName();

    if (sqlFile != null) {
        JdbcSqlStat.setContextSqlFile(null);
    }
    if (sqlName != null) {
        JdbcSqlStat.setContextSqlName(null);
    }
    try {
        if (validConnectionChecker != null) {
            boolean valid = validConnectionChecker.isValidConnection(conn, validationQuery, validationQueryTimeout);
            long currentTimeMillis = System.currentTimeMillis();
            if (holder != null) {
                holder.lastValidTimeMillis = currentTimeMillis;
                holder.lastExecTimeMillis = currentTimeMillis;
            }

            if (valid && isMySql) { // unexcepted branch
                long lastPacketReceivedTimeMs = MySqlUtils.getLastPacketReceivedTimeMs(conn);
                if (lastPacketReceivedTimeMs > 0) {
                    long mysqlIdleMillis = currentTimeMillis - lastPacketReceivedTimeMs;
                    if (lastPacketReceivedTimeMs > 0 //
                            && mysqlIdleMillis >= timeBetweenEvictionRunsMillis) {
                        discardConnection(holder);
                        String errorMsg = "discard long time none received connection. "
                                + ", jdbcUrl : " + jdbcUrl
                                + ", jdbcUrl : " + jdbcUrl
                                + ", lastPacketReceivedIdleMillis : " + mysqlIdleMillis;
                        LOG.error(errorMsg);
                        return false;
                    }
                }
            }

            if (valid && onFatalError) {
                lock.lock();
                try {
                    if (onFatalError) {
                        onFatalError = false;
                    }
                } finally {
                    lock.unlock();
                }
            }

            return valid;
        }

        if (conn.isClosed()) {
            return false;
        }

        if (null == validationQuery) {
            return true;
        }

        Statement stmt = null;
        ResultSet rset = null;
        try {
            stmt = conn.createStatement();
            if (getValidationQueryTimeout() > 0) {
                stmt.setQueryTimeout(validationQueryTimeout);
            }
            rset = stmt.executeQuery(validationQuery);
            if (!rset.next()) {
                return false;
            }
        } finally {
            JdbcUtils.close(rset);
            JdbcUtils.close(stmt);
        }

        if (onFatalError) {
            lock.lock();
            try {
                if (onFatalError) {
                    onFatalError = false;
                }
            } finally {
                lock.unlock();
            }
        }

        return true;
    } catch (Throwable ex) {
        // skip
        return false;
    } finally {
        if (sqlFile != null) {
            JdbcSqlStat.setContextSqlFile(sqlFile);
        }
        if (sqlName != null) {
            JdbcSqlStat.setContextSqlName(sqlName);
        }
    }
}
 

17. A test SQL is sent to validate the connection; I won't go into how the test itself works. If the connection is healthy this returns true, otherwise it returns false or throws an exception.

18. Below I focus on the case where the validation fails, which in our environment happens because the network access-control system cuts connections that have sat idle past its timeout.

(Note: if the database server itself had actively dropped the connection, the isClosed() check that runs before the testWhileIdle block would probably already have caught it, and execution would never reach this point.)

So false is returned from the method we entered in step 16, i.e. validate = false back in getConnectionDirect. Continuing:

19. discardConnection discards the dead connection:

 
public void discardConnection(DruidConnectionHolder holder) {
    if (holder == null) {
        return;
    }

    // Get the raw JDBC conn here (i.e. not the druid-wrapped conn).
    Connection conn = holder.getConnection();
    if (conn != null) {
        JdbcUtils.close(conn); // physically close the connection
    }

    lock.lock();
    try {
        if (holder.discard) {
            return;
        }

        if (holder.active) {
            activeCount--; // currently open connection count - 1
            holder.active = false;
        }
        discardCount++; // discarded connection count + 1

        holder.discard = true;

        if (activeCount <= minIdle) { // currently open count is at or below the minimum idle count
            emptySignal(); // wake up the CreateConnectionThread
        }
    } finally {
        lock.unlock();
    }
}
 

 

20. The CreateConnectionThread is woken up and creates a new connection for the pool. What exactly CreateConnectionThread does will have to wait for another post.

21. After the unusable connection has been discarded, continue jumps back to the top of the for loop, i.e. back to step 10.

22. Inside getConnectionInternal the connection is fetched with takeLast(). My reading of this logic: if the pool has an idle connection it is returned right away; otherwise the caller has to wait until CreateConnectionThread has created a new connection before a usable one can be handed out.

 
DruidConnectionHolder takeLast() throws InterruptedException, SQLException {
    try {
        while (poolingCount == 0) {
            emptySignal(); // send signal to CreateThread create connection

            if (failFast && isFailContinuous()) {
                throw new DataSourceNotAvailableException(createError);
            }

            notEmptyWaitThreadCount++;
            if (notEmptyWaitThreadCount > notEmptyWaitThreadPeak) {
                notEmptyWaitThreadPeak = notEmptyWaitThreadCount;
            }
            try {
                notEmpty.await(); // signal by recycle or creator
            } finally {
                notEmptyWaitThreadCount--;
            }
            notEmptyWaitCount++;

            if (!enable) {
                connectErrorCountUpdater.incrementAndGet(this);
                if (disableException != null) {
                    throw disableException;
                }

                throw new DataSourceDisableException();
            }
        }
    } catch (InterruptedException ie) {
        notEmpty.signal(); // propagate to non-interrupted thread
        notEmptySignalCount++;
        throw ie;
    }

    decrementPoolingCount();
    DruidConnectionHolder last = connections[poolingCount];
    connections[poolingCount] = null;

    return last;
}
 

23. Finally, getConnectionDirect obtains a usable connection and returns it to dataSource.getConnection().
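Tying this back to the first half of the post: application code (or JdbcTemplate) only ever holds the DruidPooledConnection; closing it hands it back to the pool, and only discardConnection (or pool shrinking) physically closes the underlying connection. A minimal sketch (my own illustration, assuming a configured DruidDataSource named dataSource and a placeholder query; exception handling omitted):

try (Connection con = dataSource.getConnection();                  // a DruidPooledConnection
     PreparedStatement ps = con.prepareStatement("select 1 from dual");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        // ... read results ...
    }
}   // con.close() here recycles the connection into the pool rather than closing it physically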

Summary

Based on this analysis I have three options:

1. Keep digging into druid: have the pool periodically test its idle connections, discard the ones that have gone dead and create replacements. It looks as if removeAbandoned should provide something like this, but configuring it had no visible effect, so it needs more study; why the number of idle connections in the pool falls below minIdle also still needs investigating.

Pros: faster page loads, better user experience.

Cons: 1. some performance cost, because connections keep being created and then destroyed in an endless cycle even while the pages are unused; 2. it takes real effort to study druid.

2. Reconfigure the network access-control ("admission") system: extend the idle timeout for database connections on the server side, for example to 3 or 7 days.

Pros: 1. faster page loads, better user experience; 2. no changes to the project itself.

Cons: 1. some performance impact, because keeping connections alive that long puts load on the admission system; 2. the admission-system administrator has to be asked to make the change.

3. Leave the problem as it is and deal with it later.

Pros: my brain gets a rest; researching this has already cost me some hair.

Cons: the first visit to these pages may still be a bit slow, which is not a great experience.

In the end I went with option 3: these pages are rarely used, and after the first visit they are fast again, so the impact is small. If more functionality is built on this data source later, I'll revisit whether to optimize.


 

posted @ 2022-10-24 14:40 CharyGao