Sizing JDBC Connection Pools for Real Production Load

Transcript:

A badly implemented database connection pool can turn a traffic spike into a connection storm. Let’s see how to implement a JDBC connection pool the right way and use a database proxy such as OpenJProxy so that services scale efficiently without killing the database.

A JDBC connection is a stateful session on a database server. It may include authentication state, current transaction context, session settings, and temporary objects. Creating a connection requires a network handshake, authentication, and session initialization, which are expensive operations that should be kept to a minimum. A connection pool helps to achieve this by keeping a set of pre-established database connections and handing them out to application threads on demand. Instead of creating a new database connection for every request, the application borrows one from the pool, uses it, and then returns it. This reduces connection overhead through reuse and enforces back pressure so the database is not overloaded. In other words, a connection pool acts as a concurrency gate in front of a scarce resource. However, a connection pool is not a performance cheat code: it does not make slow queries fast or bad SQL good, but it makes contention visible.
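The borrow/use/return cycle and the back-pressure gate can be sketched with a toy pool built on a bounded queue. This is an illustration only (the class name and structure are mine); a real pool such as HikariCP also handles connection validation, leak detection, and lifecycle management:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Toy sketch of the borrow/use/return cycle. Not production code.
public class ToyPool<T> {
    private final BlockingQueue<T> idle;

    public ToyPool(int size, Supplier<T> factory) {
        this.idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pre-establish the resources up front
        }
    }

    // Borrow with a bounded wait: this is the back-pressure gate.
    // When all resources are in use, callers queue instead of
    // hammering the underlying database with new connections.
    public T borrow(long timeoutMs) {
        try {
            T resource = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
            if (resource == null) {
                throw new IllegalStateException("acquisition timed out: pool exhausted");
            }
            return resource;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting", e);
        }
    }

    public void release(T resource) {
        idle.offer(resource); // return for reuse instead of closing
    }
}
```

Note how exhaustion surfaces as a bounded wait followed by a clear failure, which is exactly the "fail predictably instead of collapsing" behavior a pool is meant to provide.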

You can implement your own connection pool or use established frameworks such as HikariCP, Apache DBCP, or c3p0. HikariCP is a lightweight, very fast JDBC connection pool and is the default in Spring Boot. Creating a pool with HikariCP is straightforward: you create a HikariConfig object, set the database URL, user, and password at a minimum, and pass the config to the data source. HikariCP provides many configuration options, such as maximum pool size and connection timeout, which can be set programmatically or via application properties. Whether a pool helps or hurts your application depends largely on how well it is configured.
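A minimal programmatic setup might look like the sketch below. The URL, credentials, and sizing values are placeholders to adapt to your environment; in Spring Boot the same settings are usually supplied via `spring.datasource.*` properties instead:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;

public class PoolFactory {

    // Placeholder URL and credentials; HikariCP connects eagerly,
    // so a reachable database is required at startup.
    public static DataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb");
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(10);     // cap concurrent DB connections
        config.setConnectionTimeout(3000); // ms to wait when the pool is exhausted
        return new HikariDataSource(config);
    }
}
```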

Modern applications are often composed of many microservices that scale and deploy independently, frequently running in environments where instances appear and disappear dynamically. Databases, however, remain stateful and resource-constrained systems with hard limits on concurrency, memory, and connection handling. Problems arise when highly dynamic applications interact with a bounded database. In many Java applications, each service instance runs its own connection pool. Platforms like Kubernetes can start instances rapidly, and each instance attempts to fill its pool. As a result, the database is hit with a surge of connection requests. Since databases can handle only a limited number of concurrent connections efficiently, this leads to connection storms, unpredictable latency, and instability under load. Increased latency can trigger timeouts, which then trigger retries, further increasing concurrency and making the problem worse.

To avoid this vicious cycle, the first step is to limit pool size and align it with request concurrency. One way to calculate pool size is by measuring real database time per request, including query and transaction duration. Using Little's Law, you can estimate required concurrency by multiplying target throughput by database time. For example, with a target throughput of 200 operations per second and a database time of 200 milliseconds, you need about 40 concurrent database connections (200/s × 0.2 s = 40). Pool size should only be increased until throughput stops improving or latency begins to rise, as oversized pools typically reduce performance by increasing contention. HikariCP documentation explicitly warns against pool oversizing.
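The calculation itself is one line of arithmetic; a sketch like this (the helper name is mine, the numbers are from the example) keeps the units straight:

```java
// Little's Law: concurrency L = arrival rate (λ) × time in system (W).
// Here: connections needed ≈ throughput (req/s) × DB time per request.
public class PoolSizing {

    // throughputPerSec: target database operations per second
    // dbTimeMillis: measured time each request holds a connection
    public static int requiredConnections(double throughputPerSec, long dbTimeMillis) {
        return (int) Math.ceil(throughputPerSec * dbTimeMillis / 1000.0);
    }
}
```

Rounding up errs on the safe side; the result is a starting point to validate under load, not a final answer.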

The second important step is aligning timeouts. Misaligned timeouts lead to wasted resources, cascading failures, and retry storms. Each downstream timeout should be shorter than the upstream one. User-facing request timeouts should be larger than query or statement timeouts; otherwise, clients may give up while the database continues processing. Pool acquisition timeout should be the shortest, preventing applications from waiting indefinitely for a connection.
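One way to keep such a timeout budget honest is to define the values in a single place and check the ordering. The specific durations below are illustrative assumptions, not recommendations:

```java
import java.time.Duration;

// Illustrative timeout budget: acquisition < query < user-facing request.
public class TimeoutBudget {

    // HikariCP connectionTimeout: how long to wait for a pooled connection
    public static final Duration POOL_ACQUISITION = Duration.ofSeconds(3);

    // Statement.setQueryTimeout: how long a single query may run
    public static final Duration QUERY = Duration.ofSeconds(10);

    // Server-side or client-side timeout for the whole request
    public static final Duration REQUEST = Duration.ofSeconds(30);

    // Verify the alignment described above holds.
    public static boolean isAligned() {
        return POOL_ACQUISITION.compareTo(QUERY) < 0
                && QUERY.compareTo(REQUEST) < 0;
    }
}
```

A check like `isAligned()` can run in a unit test, so a config change that breaks the ordering fails the build instead of surfacing as a retry storm in production.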

The third step is considering a database proxy. A database proxy sits between applications and the database, centralizing connection management. Applications can open many logical connections while the proxy limits the number of physical ones, reducing connection storms and improving reuse. Many proxies, such as PgBouncer or ProxySQL, are tied to specific database protocols and often require external load balancers for high availability, which increases operational complexity. In this context, OpenJProxy offers a different approach by integrating directly with JVM applications and reducing external dependencies.

OpenJProxy does not replace proper schema design or capacity planning. Its primary role is controlling connection pressure, concurrency, and workload behavior at the system level. OpenJProxy consists of two main components. The OpenJProxy server runs as a standalone process and acts as a layer-7 database proxy, understanding SQL-level operations and managing real database connections internally using technologies like HikariCP and native JDBC drivers. The OpenJProxy JDBC driver is a type-3 JDBC driver that does not connect directly to the database but communicates with the proxy server over gRPC and protocol buffers. This design protects applications from connection storms and enables features like circuit breakers and back pressure without requiring changes to application code, as it behaves like a standard JDBC driver.

OpenJProxy also provides several advanced features. Deferred connection acquisition allows the proxy to return a virtual connection handle immediately, acquiring a real database connection only when a statement is executed. This reduces idle connections, improves resource utilization, and increases throughput per physical connection. Client-side load balancing and failover are built into the JDBC driver, allowing traffic to be automatically rerouted if a proxy instance goes down, without requiring an external load balancer. Removing this layer reduces network hops, avoids routing issues with stateful sessions, and improves latency and reliability.

Another feature is slow query segregation. Instead of allowing long-running queries to block fast transactional workloads, OpenJProxy detects slow operations at runtime and routes them to separate execution lanes. Workloads are labeled dynamically based on observed behavior, and slow queries are isolated automatically. The proxy also supports work stealing, allowing idle lanes to be temporarily used by other workloads, ensuring that capacity is not wasted while keeping fast queries fast.

The fastest way to start with OpenJProxy is to run the server using Docker or as a standalone JAR. After adding the OpenJProxy JDBC driver, you update the database driver class and prefix the database URL with OpenJProxy information. The application can then be run as usual, using the proxy transparently. OpenJProxy integrates with major Java frameworks and can be configured in a familiar way, for example in Spring.
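Based on that description, the Spring wiring would look roughly like the fragment below. The driver class name and URL prefix here are hypothetical placeholders; consult the OpenJProxy documentation for the exact values:

```properties
# Hypothetical values for illustration only; check the OpenJProxy docs
# for the actual driver class name and URL format.
spring.datasource.driver-class-name=org.openjproxy.jdbc.Driver
# The original database URL, prefixed with the proxy's address:
spring.datasource.url=jdbc:ojp[proxy-host:1059]_postgresql://db:5432/appdb
```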

Connection pools reduce database overhead but do not make the database faster. They cap concurrency so applications fail predictably instead of collapsing under load. To prevent connection storms and related issues, configure pool size carefully, align timeouts, and consider using a database proxy to improve overall stability.

Summary

In this video, JDBC connection pooling is explained as a way to reduce database connection overhead and control concurrency under load. A poorly configured pool can cause connection storms, timeouts, and cascading failures, especially in dynamic microservice environments. Proper pool sizing based on database response time and aligned timeouts are essential for stability. The video also introduces OpenJProxy as a database proxy that centralizes connection management and prevents connection storms. Combining well-tuned connection pools with a proxy helps applications scale predictably without overloading the database.

About Catherine

Java developer passionate about Spring Boot. Writer. Developer Advocate at BellSoft

Videos
Dec 30, 2025
Java in 2025: LTS Release, AI on JVM, Framework Modernization

Java in 2025 isn't about headline features, it's about how production systems changed under the hood. While release notes focus on individual JEPs, the real story is how the platform, frameworks, and tooling evolved to improve stability, performance, and long-term maintainability. In this video, we look at Java from a production perspective. What does Java 25 LTS mean for teams planning to upgrade? How are memory efficiency, startup time, and observability getting better? Why do changes like Scoped Values and AOT optimizations matter beyond benchmarks? We also cover the broader ecosystem: Spring Boot 4 and Framework 7, AI on the JVM with Spring AI and LangChain4j, Kotlin's growing role in backend systems, and tooling updates that make upgrades easier. Finally, we touch on container hardening and why runtime and supply-chain decisions matter just as much as language features.

Dec 24, 2025
I Solved Advent of Code 2025 in Kotlin: Here's How It Went

Every year, Advent of Code spawns thousands of solutions — but few engineers step back to see the bigger picture. This is a complete walkthrough of all 12 days from 2025, focused on engineering patterns rather than puzzle statements. We cover scalable techniques: interval math without brute force, dynamic programming, graph algorithms (JGraphT), geometry with Java AWT Polygon, and optimization problems that need constraint solvers like ojAlgo. You'll see how Java and Kotlin handle real constraints, how visualizations validate assumptions, and when to reach for libraries instead of writing everything from scratch. If you love puzzles, programming—or both—and maybe want to learn how to solve them on the JVM, this is for you.

Further watching

Jan 29, 2026
JDBC Connection Pools in Microservices. Why They Break Down (and What to Do Instead)

In this livestream, Catherine is joined by Rogerio Robetti, the founder of OpenJProxy, to discuss why traditional JDBC connection pools break down when teams migrate to microservices, and what a more efficient and reliable approach to organizing database access looks like in a microservice architecture.

Jan 20, 2026
JDBC vs ORM vs jOOQ: Choose the Right Java Database Tool

Still unsure what is the difference between JPA, Hibernate, JDBC, or jOOQ and when to use which? This video clarifies the entire Java database access stack with real, production-oriented examples. We start at the foundation, which is JDBC, a low-level API every other tool eventually relies on for database communication. Then, we go through the ORM concept, JPA as a specification of ORM, Hibernate as the implementation and extension of JPA, and Blaze Persistence as a powerful upgrade to JPA Criteria API. From there, we take a different path with jOOQ: a database-first, SQL-centric approach that provides type-safe queries and catches many SQL errors at compile time instead of runtime. You’ll see when raw JDBC makes sense for small, focused services, when Hibernate fits CRUD-heavy domains, and when jOOQ excels at complex reporting and analytics. We discuss real performance pitfalls such as N+1 queries and lazy loading, and show practical combination strategies like “JPA for CRUD, jOOQ for reports.” The goal is to equip you with clarity so that you can make informed architectural decisions based on domain complexity, query patterns, and long-term maintainability.

Jan 13, 2026
Hibernate: Ditch or Double Down? When ORM Isn't Enough

Every Java team debates Hibernate at some point: productivity champion or performance liability? Both are right. This video shows you when to rely on Hibernate's ORM magic and when to drop down to SQL. We walk through production scenarios: domain models with many-to-many relations where Hibernate excels, analytical reports with window functions where JDBC dominates, and hybrid architectures that use both in the same Spring Boot codebase. You'll see real code examples: the N+1 query trap that kills performance, complex window functions and anti-joins that Hibernate can't handle, equals/hashCode pitfalls with lazy loading, and practical two-level caching strategies. We also explore how Hibernate works under the hood—translating HQL to database-specific SQL dialects, managing sessions and transactions through JDBC, implementing JPA specifications. The strategic insight: modern applications need both ORM convenience for transactional business logic and SQL precision for data-intensive analytics. Use Hibernate for CRUD and relationship management. Use SQL where ORM abstractions leak or performance demands direct control.