Sizing JDBC Connection Pools for Real Production Load

 

Transcript:

A badly implemented database connection pool can turn a traffic spike into a connection storm. Let’s see how to implement a JDBC connection pool the right way and use a database proxy such as OpenJProxy so that services scale efficiently without killing the database.

A JDBC connection is a stateful session on a database server. It may include authentication state, current transaction context, session settings, and temporary objects. Creating a connection requires a network handshake, authentication, and session initialization, which are expensive operations that should be kept to a minimum. A connection pool helps to achieve this by keeping a set of pre-established database connections and handing them out to application threads on demand. Instead of creating a new database connection for every request, the application borrows one from the pool, uses it, and then returns it. This reduces connection overhead through reuse and enforces back pressure so the database is not overloaded. In other words, a connection pool acts as a concurrency gate in front of a scarce resource. However, a connection pool is not a performance cheat code: it does not make slow queries fast or bad SQL good, but it makes contention visible.

You can implement your own connection pool or use an established library such as HikariCP, Apache DBCP, or c3p0. HikariCP is a lightweight, very fast JDBC connection pool and the default in Spring Boot. Creating a pool with HikariCP is straightforward: you create a HikariConfig object, set at minimum the database URL, user, and password, and pass it to the data source. HikariCP exposes many configuration options, such as maximum pool size and connection timeout, which can be set programmatically or via application properties. Whether a pool helps or hurts your application depends largely on how well it is configured.
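A minimal sketch of this setup, assuming a PostgreSQL database — the URL, credentials, and pool settings below are placeholders, not recommended values:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb"); // placeholder URL
        config.setUsername("app");                                   // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);      // cap concurrency hitting the database
        config.setConnectionTimeout(2_000); // ms to wait for a free connection

        try (HikariDataSource ds = new HikariDataSource(config);
             // Borrow a connection; close() returns it to the pool,
             // it does not tear down the database session.
             Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```

Note that closing the connection in the try-with-resources block is what returns it to the pool — forgetting to close borrowed connections is the classic way to exhaust a pool.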

Modern applications are often composed of many microservices that scale and deploy independently, frequently running in environments where instances appear and disappear dynamically. Databases, however, remain stateful and resource-constrained systems with hard limits on concurrency, memory, and connection handling. Problems arise when highly dynamic applications interact with a bounded database. In many Java applications, each service instance runs its own connection pool. Platforms like Kubernetes can start instances rapidly, and each instance attempts to fill its pool. As a result, the database is hit with a surge of connection requests. Since databases can handle only a limited number efficiently, this leads to connection storms, unpredictable latency, and instability under load. Increased latency can trigger timeouts, which then trigger retries, further increasing concurrency and making the problem worse.

To avoid this vicious cycle, the first step is to limit pool size and align it with request concurrency. One way to calculate pool size is to measure real database time per request, including query and transaction duration. Using Little’s Law, you can estimate the required concurrency by multiplying target throughput by database time. For example, with a target throughput of 200 operations per second and a database time of 200 milliseconds per request, you need about 40 concurrent database connections (200/s × 0.2 s). Increase pool size only until throughput stops improving or latency begins to rise, as oversized pools typically reduce performance by increasing contention. The HikariCP documentation explicitly warns against pool oversizing.
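The Little’s Law estimate (required concurrency = arrival rate × time in system) is simple enough to sketch directly; treat the result as a starting point to be validated under load, not a final answer:

```java
// Pool-size estimate from Little's Law: L = lambda * W,
// where lambda is target throughput and W is database time per request.
public class PoolSizing {
    static int estimatePoolSize(double opsPerSecond, double dbTimeMillis) {
        // Multiply first, then convert milliseconds to seconds,
        // and round up to a whole connection.
        return (int) Math.ceil(opsPerSecond * dbTimeMillis / 1000.0);
    }

    public static void main(String[] args) {
        // 200 ops/s, each holding a connection for ~200 ms
        System.out.println(estimatePoolSize(200, 200)); // prints 40
    }
}
```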

The second important step is aligning timeouts. Misaligned timeouts lead to wasted resources, cascading failures, and retry storms. Each downstream timeout should be shorter than the upstream one. User-facing request timeouts should be larger than query or statement timeouts; otherwise, clients may give up while the database continues processing. Pool acquisition timeout should be the shortest, preventing applications from waiting indefinitely for a connection.
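The ordering rule can be captured as a simple invariant. The millisecond values below are hypothetical, chosen only to illustrate a budget where each downstream timeout is shorter than the one above it:

```java
// Hypothetical timeout budget illustrating the alignment rule:
// pool acquisition < statement/query < user-facing request.
public class TimeoutBudget {
    static final long POOL_ACQUISITION_MS = 2_000;  // fail fast when no connection is free
    static final long QUERY_TIMEOUT_MS    = 5_000;  // cancel work the client won't wait for
    static final long REQUEST_TIMEOUT_MS  = 10_000; // overall budget for the user-facing call

    static boolean isAligned(long poolMs, long queryMs, long requestMs) {
        return poolMs < queryMs && queryMs < requestMs;
    }

    public static void main(String[] args) {
        System.out.println(
            isAligned(POOL_ACQUISITION_MS, QUERY_TIMEOUT_MS, REQUEST_TIMEOUT_MS)); // true
    }
}
```

A check like this is worth encoding as a startup assertion, since misaligned timeouts tend to surface only under load.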

The third step is considering a database proxy. A database proxy sits between applications and the database, centralizing connection management. Applications can open many logical connections while the proxy limits the number of physical ones, reducing connection storms and improving reuse. Many proxies, such as PgBouncer or ProxySQL, are tied to specific database protocols and often require external load balancers for high availability, which increases operational complexity. In this context, OpenJProxy offers a different approach by integrating directly with JVM applications and reducing external dependencies.

OpenJProxy does not replace proper schema design or capacity planning. Its primary role is controlling connection pressure, concurrency, and workload behavior at the system level. OpenJProxy consists of two main components. The OpenJProxy server runs as a standalone process and acts as a layer-7 database proxy, understanding SQL-level operations and managing real database connections internally using technologies like HikariCP and native JDBC drivers. The OpenJProxy JDBC driver is a type-3 JDBC driver that does not connect directly to the database but communicates with the proxy server over gRPC and protocol buffers. This design protects applications from connection storms and enables features like circuit breakers and back pressure without requiring changes to application code, as it behaves like a standard JDBC driver.

OpenJProxy also provides several advanced features. Deferred connection acquisition allows the proxy to return a virtual connection handle immediately, acquiring a real database connection only when a statement is executed. This reduces idle connections, improves resource utilization, and increases throughput per physical connection. Client-side load balancing and failover are built into the JDBC driver, allowing traffic to be automatically rerouted if a proxy instance goes down, without requiring an external load balancer. Removing this layer reduces network hops, avoids routing issues with stateful sessions, and improves latency and reliability.

Another feature is slow query segregation. Instead of allowing long-running queries to block fast transactional workloads, OpenJProxy detects slow operations at runtime and routes them to separate execution lanes. Workloads are labeled dynamically based on observed behavior, and slow queries are isolated automatically. The proxy also supports work stealing, allowing idle lanes to be temporarily used by other workloads, ensuring that capacity is not wasted while keeping fast queries fast.

The fastest way to start with OpenJProxy is to run the server using Docker or as a standalone JAR. After adding the OpenJProxy JDBC driver, you update the database driver class and prefix the database URL with OpenJProxy information. The application can then be run as usual, using the proxy transparently. OpenJProxy integrates with major Java frameworks and can be configured in a familiar way, for example in Spring.
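As a rough sketch of the Spring configuration change this implies — the driver class name and URL prefix below are illustrative placeholders, not the exact values; consult the OpenJProxy documentation for the real scheme:

```properties
# Illustrative only: <openjproxy-driver-class> and <openjproxy-prefix>
# are placeholders for the values documented by OpenJProxy.
spring.datasource.driver-class-name=<openjproxy-driver-class>
spring.datasource.url=jdbc:<openjproxy-prefix>://proxy-host:port/...
spring.datasource.username=app
spring.datasource.password=secret
```

The rest of the application, including Spring Data repositories and transaction management, is unchanged, since the driver behaves like any other JDBC driver.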

Connection pools reduce database overhead but do not make the database faster. They cap concurrency so applications fail predictably instead of collapsing under load. To prevent connection storms and related issues, configure pool size carefully, align timeouts, and consider using a database proxy to improve overall stability.

Summary

In this video, JDBC connection pooling is explained as a way to reduce database connection overhead and control concurrency under load. A poorly configured pool can cause connection storms, timeouts, and cascading failures, especially in dynamic microservice environments. Proper pool sizing based on database response time and aligned timeouts are essential for stability. The video also introduces OpenJProxy as a database proxy that centralizes connection management and prevents connection storms. Combining well-tuned connection pools with a proxy helps applications scale predictably without overloading the database.

About Catherine

Java developer passionate about Spring Boot. Writer. Developer Advocate at BellSoft


Videos
Mar 9, 2026
jOOQ Deep Dive: CTE, MULTISET, and SQL Pipelines

Some backend developers reach the point where the ORM stops being helpful. Complex joins, nested result graphs, or CTE pipelines quickly push frameworks like Hibernate to their limits. And when that happens, teams often end up writing fragile raw SQL strings or fighting performance issues like the classic N+1 query problem. In this video, we build a healthcare scheduling application NeonCare using jOOQ, Spring Boot 4, and PostgreSQL, and show how to write production-grade SQL directly in Java while keeping full compile-time type safety.

Feb 27, 2026
Spring Developer Roadmap 2026: What You Need to Know

Spring Boot is powerful. But knowing the framework isn’t the same as understanding backend engineering. In this video, I walk through the roadmap I believe matters for a Spring developer in 2026. We start with data. That means real SQL — CTEs, window functions, normalization trade-offs — and understanding what ACID and BASE actually imply for system guarantees. Spring Data JPA is useful, but you still need to know what happens underneath. Then architecture: microservices vs modular monolith, serverless, CQRS, and when HTTP, gRPC, Kafka, or WebSockets make sense. Not as buzzwords — but as design choices with trade-offs. Security and infrastructure follow: OWASP Top 10, AuthN vs AuthZ, encryption in transit and at rest, Docker, Kubernetes, Infrastructure as Code, and observability with Micrometer, OpenTelemetry, and Grafana. This roadmap isn’t about mastering every tool. It’s about knowing what affects reliability in production.

Further watching

Apr 2, 2026
Java Memory Options You Need in Production

JVM memory tuning can be tricky. Teams increase -Xmx and assume the problem is solved. Then the app still hits OOM. Because maximum heap size is not the only thing that affects memory footprint. The JVM uses RAM for much more than heap: metaspace, thread stacks, JIT/code cache, direct buffers, and native allocations. That’s why your process can run out of memory while heap still looks “fine”. In this video, we break down how JVM memory actually works and how to control it with a minimal, production-safe set of flags. We cover heap sizing (-Xms, -Xmx), dynamic resizing, direct memory (-XX:MaxDirectMemorySize), and total RAM limits (-XX:MaxRAMPercentage) — especially in containerized environments like Docker and Kubernetes. We also explain GC choices such as G1, ZGC, and Shenandoah, when defaults are enough, and why GC logging (-Xlog:gc*) is mandatory before tuning. Finally, we show how to diagnose failures with heap dumps and OOM hooks. This is not about adding more flags. It’s about understanding what actually consumes memory — and making decisions you can justify in production.

Mar 26, 2026
Java Developer Roadmap 2026: From Basics to Production

Most Java roadmaps teach tools. This one teaches order — the only thing that actually gets you to production. You don’t need to learn everything. You need to learn the right things, in the right sequence. In this video, we break down a practical Java developer roadmap for 2026 — from syntax and OOP to Spring, databases, testing, and deployment. Structured into 8 levels, it shows how real engineers grow from fundamentals to production-ready systems. We cover what to learn and what to ignore: core Java, collections, streams, build tools, Git, SQL and JDBC before Hibernate, the Spring ecosystem, testing with JUnit, and deployment with Docker and CI/CD. You’ll also understand why most developers get stuck — jumping into frameworks too early, skipping SQL, or treating tools as knowledge. This roadmap gives you a clear path into real-world Java development — with priorities, trade-offs, and production context.

Mar 19, 2026
TOP-5 Lightweight Linux Distributions for Containers

In this video, we compare five lightweight Linux distributions commonly used as base images: Alpine, Alpaquita, Chiseled Ubuntu, RHEL UBI Micro, and Wolfi. There are no rankings or recommendations — just a structured look at how these distros differ so you can evaluate them in your own context.