Recently I started experiencing issues with our connection pool (Apache DBCP2) in several production applications. Every so often I was forced to restart different applications because they had run out of connections in the database pool.
After some investigation I discovered that the problem happened after a number of long-running queries had triggered socket timeouts.
By inspecting our runtime environments with Java Mission Control, we could see in the JMX bean of our connection pool that all the connections in the pool were active and were never being released.
Upon application restart everything would go back to normal operation.
After further investigation I managed to reproduce the scenario. All we needed to do was force a socket timeout during a Hibernate transaction, and a connection in our DBCP2 pool would be affected by the issue.
It seems Hibernate does not close the affected connection, and the pool keeps it marked as active forever, gradually consuming all connections in the pool until it starts rejecting new requests because the pool is full.
I also discovered that forcing hibernate.connection.release_mode to after_statement solves the problem; however, my intention is that we use after_transaction mode.
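For reference, the workaround is a single Hibernate property (this is the documented legacy release-mode setting; the other values it accepts are auto, on_close, and after_transaction):

```properties
# Workaround: hand the JDBC connection back to the pool after each
# statement instead of holding it until the transaction ends.
hibernate.connection.release_mode=after_statement
```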
I use Spring to configure my DataSource. I defined a single connection in the pool to make the test easier.
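A minimal sketch of such a configuration (bean name, credentials, and the JDBC URL are placeholders, not my actual setup; the socketTimeout URL parameter is what triggers the timeout in this test):

```java
import org.apache.commons.dbcp2.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public BasicDataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        // socketTimeout (seconds) is a PostgreSQL JDBC driver parameter.
        ds.setUrl("jdbc:postgresql://localhost:5432/testdb?socketTimeout=5");
        ds.setUsername("test");
        ds.setPassword("test");
        // A single connection makes the leak immediately visible:
        // once it is stuck as "active", the next borrow fails.
        ds.setMaxTotal(1);
        return ds;
    }
}
```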
I use Spring ORM to bootstrap my EntityManagerFactory.
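Roughly like this (a sketch with a placeholder entity package; the dialect matches the PostgreSQL version listed below):

```java
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class JpaConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource); // the DBCP2 pool defined above
        emf.setPackagesToScan("com.example.entities"); // placeholder package
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        Properties props = new Properties();
        props.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQL94Dialect");
        emf.setJpaProperties(props);
        return emf;
    }
}
```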
And this is how I tested the issue:
First I open a database console, acquire a lock on a record, and keep that transaction open.
Then I debug my code below step by step. I make a first invocation of my processRequest method, where I run a native query that attempts to lock that exact same record. Since the lock is already held by the transaction I started in the previous step, the query will eventually cause the socketTimeout.
The processRequest method rolls back the transaction, prints the stack trace, and closes the entity manager.
At this point I should have my connection back in the DBCP2 pool, right?
So, from the command-line console I release the lock on the record I had obtained in the first step above.
Then I continue debugging in my IDE by stepping into my second invocation of processRequest. It should succeed this time, since I was supposed to have 1 connection in the pool and the record is no longer locked. However, I get an error reporting that my pool is full.
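The shape of the flow described above can be sketched as follows (a hypothetical reconstruction, not the attached test code; the table name and entity manager wiring are placeholders):

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;

public class TimeoutReproducer {

    private final EntityManagerFactory emf;

    public TimeoutReproducer(EntityManagerFactory emf) {
        this.emf = emf;
    }

    public void processRequest() {
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        try {
            tx.begin();
            // Tries to lock the row already locked from the database
            // console (BEGIN; SELECT ... FOR UPDATE;). The statement
            // blocks until the JDBC socketTimeout fires.
            em.createNativeQuery("SELECT * FROM my_table WHERE id = 1 FOR UPDATE")
              .getResultList();
            tx.commit();
        } catch (RuntimeException e) {
            if (tx.isActive()) {
                tx.rollback(); // roll back after the socket timeout
            }
            e.printStackTrace();
        } finally {
            // Closing the EntityManager should return the connection to
            // the DBCP2 pool -- but it stays marked "active" instead.
            em.close();
        }
    }
}
```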
You can see the code I used for testing here:
When the socket timeout kicks in, this is what I see in the stack trace:
But in the second method invocation, when the connection should be available and the record should be free, I get the following:
I tested the exact same scenario using EclipseLink 2.5.0 and it worked just fine.
The following are the libraries I used to reproduce the problem:
Hibernate Entity Manager v5.2.12
Apache Commons DBCP2 v2.1.1
PostgreSQL JDBC Driver v9.2-1004-jdbc41
Spring/Spring ORM 5.0.2.RELEASE
The following was my testing environment:
MacOS Sierra 10.12.6
PostgreSQL Server PostgreSQL 9.6.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
Oracle JDK build 1.8.0_141-b15
What version of Hibernate ORM were you using previously that did not have this issue?
I'm afraid I don't know whether there was a version of Hibernate that didn't suffer from this. We have had this issue for a long time; it is just that this past Black Friday, due to an overall database slowness, we could easily spot it. Before, this used to happen from time to time and gradually consume the connections in the pool. It happened with a certain frequency in applications that had shorter socket timeouts. We try to keep our Hibernate version up-to-date, and so we've been using 5.2.12 almost since it was officially released.
I am not able to run your test due to:
Could not resolve dependencies for project com.dalorzo.bugs:timeout-bug:jar:1.0-SNAPSHOT: Failed to collect dependencies at com.backcountry.fulfillment.dto:dto-events:jar:15.5.1-SNAPSHOT: Failed to read artifact descriptor for com.backcountry.fulfillment.dto:dto-events:jar:15.5.1-SNAPSHOT: Could not transfer artifact com.backcountry.fulfillment.dto:dto-eventsom:15.5.1-SNAPSHOT from/to artifacts (http://artifactory01.bcinfra.net:8081/artifactory/libs-snapshot-local/): Connect to artifactory01.bcinfra.net:8081 [artifactory01.bcinfra.net/10.42.31.26] failed: Connection timed out -> [Help 1]
Please attach a runnable test case that reproduces your issue without using Spring. You can find test templates at https://github.com/hibernate/hibernate-test-case-templates.
You can configure a connection pool with 1 connection using hibernate.connection.pool_size=1.
I will work on the test case as requested and upload it to this issue ASAP.
We have a Spring Boot application with the Spring Boot JPA starter dependency. After a while, we started encountering exceptions such as LockAcquisitionException or GenericJDBCException related to the connection or to result-set extraction. After restarting the application everything works fine.
Until this bug is resolved, can someone please suggest a workaround?