Improper Handling of Connections with Socket Timeouts

Description

The Problem

Recently I started experiencing issues with our connection pool (Apache DBCP2) in several production applications. Every so often I was forced to restart different applications because they ran out of connections in the database pool.

After some investigation I discovered that the problem happened after a number of long-running queries had triggered socket timeouts.

By inspecting our runtime environments using Java Mission Control, we could see in the JMX bean of our connection pool that all the connections in the pool were active and were never being released.

Upon application restart everything would go back to normal operation.

Investigation Discoveries

After further investigation I managed to reproduce the scenario. All we needed to do was force a socket timeout during a Hibernate transaction to cause a connection in our DBCP2 pool to be affected by the issue.

It seems Hibernate does not close the affected connection, and the pool keeps it marked as active forever, eventually (and gradually) consuming all connections in the pool until it starts rejecting new requests because the pool is full.

I also discovered that forcing hibernate.connection.release_mode to after_statement solved the problem; however, my intention is to use the after_transaction mode.
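For reference, the workaround amounts to setting this single property in the persistence configuration (shown here as a bare JPA property entry; where exactly it goes depends on how the EntityManagerFactory is bootstrapped):

```
hibernate.connection.release_mode=after_statement
```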

Testing Scenario

I use Spring to configure my DataSource. I defined a single connection in the pool to make the test easier.
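A minimal sketch of such a configuration, assuming DBCP2's BasicDataSource and a socketTimeout passed through the PostgreSQL JDBC URL (the URL, credentials, and timeout value below are illustrative placeholders, not the exact values from my application):

```java
import org.apache.commons.dbcp2.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public BasicDataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        // socketTimeout (in seconds) is a PostgreSQL JDBC driver parameter
        ds.setUrl("jdbc:postgresql://localhost:5432/testdb?socketTimeout=5");
        ds.setUsername("test");
        ds.setPassword("test");
        // a single connection makes pool exhaustion immediately visible
        ds.setMaxTotal(1);
        ds.setMaxWaitMillis(5000);
        return ds;
    }
}
```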

I use Spring ORM to bootstrap my EntityManagerFactory.
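The bootstrap looks roughly like this (a sketch using Spring ORM's LocalContainerEntityManagerFactoryBean with the Hibernate JPA vendor adapter; the package name is a placeholder, not my actual code):

```java
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
    emf.setDataSource(dataSource);
    emf.setPackagesToScan("com.example.model"); // illustrative package
    emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
    Properties props = new Properties();
    // after_transaction is the mode I intend to use;
    // switching to after_statement works around the leak
    props.setProperty("hibernate.connection.release_mode", "after_transaction");
    emf.setJpaProperties(props);
    return emf;
}
```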

And this is how I tested the issue:

  • First I open a database console and get a lock of a record and keep that transaction open.

  • Then I debug my code below step by step. I make a first invocation of my processRequest method, where I try to run a native query that attempts to get a lock on that exact same record. Since the lock is already held by the transaction I started in the previous step, the query eventually causes the socketTimeout.

  • The processRequest method rolls back the transaction, prints the stack trace, and closes the entity manager.

  • At this point I should have my connection back in the DBCP2 pool, right?

  • So, from the command-line console I release the lock on the record I had obtained in the first step above.

  • And then I continue debugging in my IDE by moving into the second invocation of processRequest. It should succeed this time, since I am supposed to have one connection in the pool and the record is no longer locked. However, I get an error reporting that the pool is exhausted.
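The steps above exercise a method that, in essence, looks like this (a sketch of the shape of my test code, not the literal source; the table and column names are placeholders):

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;

public void processRequest(EntityManagerFactory factory) {
    EntityManager em = factory.createEntityManager();
    EntityTransaction tx = em.getTransaction();
    try {
        tx.begin();
        // attempts to lock the row already locked from the database console;
        // the blocked query eventually hits the JDBC socketTimeout
        em.createNativeQuery("SELECT * FROM records WHERE id = 1 FOR UPDATE")
          .getResultList();
        tx.commit();
    } catch (Exception e) {
        if (tx.isActive()) {
            tx.rollback();
        }
        e.printStackTrace();
    } finally {
        // closing the EntityManager should return the connection to the pool,
        // but after the timeout DBCP2 still reports it as active
        em.close();
    }
}
```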

You can see the code I used for testing here:

The Exceptions

When the socket timeout kicks in, this is what I see in the stack trace:

But on the second method invocation, when the connection should be available and the record should be free, I get the following:

I tested the exact same scenario using EclipseLink 2.5.0 and it worked just fine.

Environment

The following are the libraries I used to reproduce the problem:

  • Hibernate Entity Manager v5.2.12

  • Apache Commons DBCP2 v2.1.1

  • PostgreSQL JDBC Driver v9.2-1004-jdbc41

  • Spring/Spring ORM 5.0.2.RELEASE

The following was my testing environment:

  • MacOS Sierra 10.12.6

  • PostgreSQL Server PostgreSQL 9.6.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit

  • Oracle JDK build 1.8.0_141-b15

Assignee

Unassigned

Reporter

Edwin Dalorzo

Fix versions

None

backPortable

None

Suitable for new contributors

None

Requires Release Note

None

Pull Request

None

backportDecision

None

Components

Affects versions

Priority

Major