Recently, I tried to build a simple application with a somewhat curious architecture: a web application forwards each user request to a message queue, and another application reads from the queue, executing an INSERT SQL statement for each message. NetBeans 6.5 beta comes with the GlassFish v2 application server, which is essentially Sun Java System Application Server 9.1-something, complete with its own JMS message queue implementation and database connection pooling. Using the default configuration, I created a JDBC connection pool and connection to an Oracle Express 10g database. The first application is a JSF application that sends a JMS message to the queue. The second is an EJB application consisting of a single message-driven bean; naturally, the app server pools this bean too.
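To make the consumer side concrete, here is a sketch of what such a message-driven bean might look like. The bean name, queue name, pool JNDI name, and table are all my assumptions, not the original code; the point is that the JDBC insert happens inside onMessage(), so a failure to obtain a connection there is a failure to write the row.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.sql.DataSource;

// Hypothetical consumer: one JMS message in, one row out.
@MessageDriven(mappedName = "jms/InsertQueue")
public class InsertBean implements MessageListener {

    @Resource(mappedName = "jdbc/OracleXEPool") // assumed pool name
    private DataSource ds;

    public void onMessage(Message msg) {
        Connection con = null;
        try {
            con = ds.getConnection(); // throws when the pool is exhausted
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO requests (payload) VALUES (?)");
            ps.setString(1, ((TextMessage) msg).getText());
            ps.executeUpdate();
            ps.close();
        } catch (Exception e) {
            // Swallowing this exception would silently drop the row;
            // rethrowing lets the container roll back and redeliver.
            throw new RuntimeException(e);
        } finally {
            try { if (con != null) con.close(); } catch (SQLException ignored) {}
        }
    }
}
```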
OK, during manual application testing, all worked well. During automated testing using ab (ApacheBench), things started to get strange above 10 concurrent connections. With 50 concurrent connections, about 400 rows out of 2,000 were missing.
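For reference, the load test was roughly of this shape (the URL is hypothetical; a real JSF form submission would also need a POST body via ab's -p and -T options):

```shell
# 2000 requests total, 50 at a time, against the JSF front end
ab -n 2000 -c 50 http://localhost:8080/myapp/submit.jsf
```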
The JDBC connection pool was set to 32 connections, while the Oracle database allows only 20 or so connections. OK, I set the connection pool to 20 connections.
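The same change can be made from the command line; something like the following, though the pool name here is made up and the exact dotted attribute name varies between GlassFish versions (the Admin Console works just as well):

```shell
asadmin set domain.resources.jdbc-connection-pool.OracleXEPool.max-pool-size=20
```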
Now, there were still missing rows, and the server log showed message-driven beans that could not obtain a connection from the pool. I reduced the message bean pool too, but rows were still going missing.
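In GlassFish v2 the MDB pool size is capped per bean in the sun-ejb-jar.xml deployment descriptor; a fragment like this (bean name assumed) keeps the number of concurrent onMessage() calls at or below the number of JDBC connections:

```xml
<sun-ejb-jar>
  <enterprise-beans>
    <ejb>
      <ejb-name>InsertBean</ejb-name>
      <bean-pool>
        <max-pool-size>20</max-pool-size>
      </bean-pool>
    </ejb>
  </enterprise-beans>
</sun-ejb-jar>
```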
I always thought that a wrong pool size could affect performance, but wouldn't affect a system's correct operation -- this case clearly demonstrated the opposite.
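The arithmetic of the failure can be shown with a toy simulation in plain Java (all numbers hypothetical, no app server involved): 50 concurrent consumers contend for a pool of 20 connections, every consumer tries to grab a connection at the same moment, and a failed grab whose exception is swallowed counts as a lost insert.

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolExhaustionDemo {
    public static void main(String[] args) throws Exception {
        final int consumers = 50;  // concurrent MDB instances (hypothetical)
        final int poolSize = 20;   // JDBC connections available
        final Semaphore pool = new Semaphore(poolSize);
        // Barrier: every consumer holds (or has failed to get) a connection
        // until all 50 have tried, so the contention is worst-case.
        final CyclicBarrier allTried = new CyclicBarrier(consumers);
        final AtomicInteger inserted = new AtomicInteger();
        final AtomicInteger lost = new AtomicInteger();

        ExecutorService ex = Executors.newFixedThreadPool(consumers);
        for (int i = 0; i < consumers; i++) {
            ex.submit(() -> {
                boolean got = pool.tryAcquire(); // fail fast, like an exhausted pool
                try {
                    allTried.await();
                } catch (Exception ignored) {}
                if (got) {
                    inserted.incrementAndGet(); // row written
                    pool.release();
                } else {
                    lost.incrementAndGet();     // message consumed, row never written
                }
            });
        }
        ex.shutdown();
        ex.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("inserted=" + inserted.get() + " lost=" + lost.get());
    }
}
```

Exactly 20 of the 50 attempts succeed and 30 are lost; the same shape of loss, smeared across retries and timing, is what showed up as missing rows in the real test.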