JEP 483: Ahead-of-Time Class Loading & Linking. Project Leyden in JDK 24

Transcript 


In this video, we'll look at JEP 483: Ahead-of-Time Class Loading and Linking, which is targeted for JDK 24 and marks the first integration of Project Leyden into mainline OpenJDK. When a Java application starts, the necessary application classes and core JVM classes must be loaded, linked, and initialized. There can be thousands of them, so this process affects the startup time of Java applications. The worst part is that it repeats every time the application starts.

One way to deal with this issue is to use Application Class Data Sharing (AppCDS), a JVM feature that reads and parses a set of application and JVM classes, storing this data in a read-only archive. When the application starts, the JVM reads the data from the archive, speeding up the startup process. Ahead-of-Time (AOT) class loading and linking goes even further. It not only reads and parses classes but also loads and links them, storing this data in an AOT cache. As a result, the JVM has even less work to do at startup. This feature aligns with the overall goal of Project Leyden, which aims to shift some computations from the production run to earlier stages, such as training runs. AOT class loading and linking is the first step towards introducing Project Leyden to OpenJDK.
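For comparison, AppCDS is typically used in two steps with dynamic archiving (available since JDK 13). This is a minimal sketch; app.jar and the archive name app.jsa are placeholders for your own application:

```shell
# First run: record the loaded classes and dump them
# into a read-only archive when the application exits
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar

# Subsequent runs: map the archive so the JVM skips
# reading and parsing those class files at startup
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```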

You create the AOT cache once, and it can be reused every time your application starts. This feature is compatible with any Java application and doesn’t require changes to the application code. The only requirement is to perform a training run for your application to record its AOT configuration. The more representative the training run, the better the resulting AOT cache. The training run should mimic the production run, allowing the application to fully configure itself and execute production code paths.

Now, let’s try this feature with the early access builds of JDK 24. For this experiment, I’ve downloaded early access builds of JDK 24, which you can also get for your platform. Note that the results we’ll get are preliminary and may change when stable builds are released. But we’re just testing the waters, right? Take any Spring app you like. I’m using a Spring Boot-based CRUD application that manages personal tasks. It’s a simple app with basic functionality. You can follow along or experiment with your own application.

Let’s first create a JAR file using mvn clean package. Running the executable JAR in production is not recommended by the Spring team, so we’ll create an exploded JAR instead. For this, we’ll use the -Djarmode=tools option to extract the JAR. Now we have our extracted JAR with the lib subdirectory and the JAR file itself. Let’s first run the application with the JAR file to see how fast it starts without any optimizations. As you can see, the application started in about two seconds.
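The commands for these steps look roughly like this. The JAR name app.jar and the destination directory are assumptions; adjust them to your project (the tools jarmode requires Spring Boot 3.3 or newer):

```shell
# Build the executable Spring Boot JAR
mvn clean package

# Extract it into an exploded layout with the JAR
# and its dependencies in a lib/ subdirectory
java -Djarmode=tools -jar target/app.jar extract --destination app-extracted

# Baseline run, without any AOT optimizations
java -jar app-extracted/app.jar
```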

All right, let’s get down to business. To create an AOT cache, we need to perform two steps (although it’s planned to merge them into one step in the future). First, conduct a training run of your application with two flags: -XX:AOTMode=record and -XX:AOTConfiguration=app.aotconf to record its AOT configuration into the file app.aotconf. Now, push some buttons in the application so it performs useful operations. Next, use the configuration file to create the cache. For this, use three flags: -XX:AOTMode=create, -XX:AOTConfiguration=app.aotconf, and -XX:AOTCache=app.aot to create the cache named app.aot. This step doesn’t run the application; it only creates the cache.
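Put together, the two steps look like this (the exploded-JAR path app-extracted/app.jar is an assumption from the earlier extraction; any file names work for the configuration and cache):

```shell
# Step 1: training run — record the AOT configuration.
# Exercise the application, then shut it down normally.
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -jar app-extracted/app.jar

# Step 2: create the AOT cache from the recorded configuration.
# This does not run the application; it only writes app.aot.
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -jar app-extracted/app.jar
```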

Now we can run the application. This time, we only need one flag: -XX:AOTCache pointing to the cache file we created earlier. As you can see, the application started in just one second, which is pretty cool.
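The production run then needs only the cache flag (same assumed paths as above):

```shell
# Production run: the JVM maps the AOT cache and skips
# loading and linking the classes recorded in it
java -XX:AOTCache=app.aot -jar app-extracted/app.jar
```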

In this video, we explored a new JDK feature, Ahead-of-Time Class Loading and Linking, aimed at reducing the startup times of Java applications. If you like this video, subscribe to our channel. And until next time!

Summary

JEP 483 introduces Ahead-of-Time (AOT) Class Loading and Linking in JDK 24, which enhances Java application startup times by loading and linking classes ahead of time and storing them in a reusable AOT cache. This feature, part of Project Leyden, reduces the JVM's workload during startup without requiring changes to application code, though a training run mimicking production is needed to create an efficient cache. Early tests with a Spring Boot app showed significant improvements, cutting startup time from two seconds to just one second.

About Catherine

Java developer passionate about Spring Boot. Writer. Developer Advocate at BellSoft


Videos
Jan 20, 2026
JDBC vs ORM vs jOOQ: Choose the Right Java Database Tool

Still unsure about the difference between JPA, Hibernate, JDBC, and jOOQ, or when to use which? This video clarifies the entire Java database access stack with real, production-oriented examples. We start at the foundation, which is JDBC, a low-level API every other tool eventually relies on for database communication. Then, we go through the ORM concept, JPA as a specification of ORM, Hibernate as the implementation and extension of JPA, and Blaze Persistence as a powerful upgrade to the JPA Criteria API. From there, we take a different path with jOOQ: a database-first, SQL-centric approach that provides type-safe queries and catches many SQL errors at compile time instead of runtime. You’ll see when raw JDBC makes sense for small, focused services, when Hibernate fits CRUD-heavy domains, and when jOOQ excels at complex reporting and analytics. We discuss real performance pitfalls such as N+1 queries and lazy loading, and show practical combination strategies like “JPA for CRUD, jOOQ for reports.” The goal is to equip you with clarity so that you can make informed architectural decisions based on domain complexity, query patterns, and long-term maintainability.

Jan 13, 2026
Hibernate: Ditch or Double Down? When ORM Isn't Enough

Every Java team debates Hibernate at some point: productivity champion or performance liability? Both are right. This video shows you when to rely on Hibernate's ORM magic and when to drop down to SQL. We walk through production scenarios: domain models with many-to-many relations where Hibernate excels, analytical reports with window functions where JDBC dominates, and hybrid architectures that use both in the same Spring Boot codebase. You'll see real code examples: the N+1 query trap that kills performance, complex window functions and anti-joins that Hibernate can't handle, equals/hashCode pitfalls with lazy loading, and practical two-level caching strategies. We also explore how Hibernate works under the hood—translating HQL to database-specific SQL dialects, managing sessions and transactions through JDBC, implementing JPA specifications. The strategic insight: modern applications need both ORM convenience for transactional business logic and SQL precision for data-intensive analytics. Use Hibernate for CRUD and relationship management. Use SQL where ORM abstractions leak or performance demands direct control.

Further watching

Feb 6, 2026
Backend Developer Roadmap 2026: What You Need to Know

Backend complexity keeps growing, and frameworks can't keep up. In 2026, knowing React or Django isn't enough. You need fundamentals that hold up when systems break, traffic spikes, or your architecture gets rewritten for the third time. I've been building production systems for 15 years. This roadmap covers three areas that separate people who know frameworks from people who can actually architect backend systems: data, architecture, and infrastructure. This is about how to think, not what tools to install.

Jan 29, 2026
JDBC Connection Pools in Microservices: Why They Break Down (and What to Do Instead)

In this livestream, Catherine is joined by Rogerio Robetti, the founder of Open J Proxy, to discuss why traditional JDBC connection pools break down when teams migrate to microservices, and what a more efficient and reliable approach to organizing database access in a microservice architecture looks like.

Jan 27, 2026
Sizing JDBC Connection Pools for Real Production Load

Many production outages start with connection pool exhaustion. Your app waits seconds for connections while queries take milliseconds, yet most teams run default settings that collapse under load. This video shows how to configure connection pools that survive real production traffic: sizing based on database limits and thread counts, setting timeouts that prevent cascading failures, and implementing Open J Proxy, an open-source database proxy, for centralized connection management with virtual connection handles, client-side load balancing, and slow query segregation. For senior Java developers, DevOps engineers, and architects who need database performance that holds under pressure.