Compilation in Java: JIT vs AOT

May 13, 2024
Dmitry Chuyko

Java applications are compiled just-in-time by default. However, GraalVM Native Image made it possible to compile Java programs ahead-of-time.

What’s the difference between these two approaches? Is it better to stick to tried-and-true JIT compilation or plunge into the brave new world of AOT? Read on to find out!

Regardless of what you choose, BellSoft has developed two solutions that help you overcome some known drawbacks of both approaches, so you'll be ahead of the game in any case.

If you build Spring Boot applications, check out Alpaquita Containers based on lightweight Alpaquita Linux and Liberica JDK recommended by Spring: these containers will help you save up to 30% of RAM!

How compilation in Java works

Program compilation is the process of translating a program written in a high-level programming language (Java in our case) into low-level machine code understandable to a computer. The resulting machine code contains binary machine instructions, which can be decoded by the CPU. Compilation is performed by a compiler: a program that converts the source code into machine code as efficiently as it can. A compiler can perform various optimizations to speed up program execution or reduce resource consumption.

As Java is a platform-independent language, the source code is first compiled into JVM bytecode, which is then interpreted or compiled by the JVM installed on the computer into platform-dependent machine code. This additional stage enables Java's WORA (write once, run anywhere) principle.

The source code is translated into machine code before or during the program execution, and this is where JIT and AOT come into play.

What is a JIT compiler

In the case of Just-in-Time compilation, also known as dynamic compilation, JVM bytecode is converted into machine code by the JVM after the program has started, i.e., at run time.

The goal of a JIT compiler is to produce highly performant code. Firstly, it analyzes the code to decide which parts are used most often (so-called "hotspots") and deserve to be optimized, which operating system the code runs on, and so on. Secondly, it applies the optimizations it deems necessary: global, local, control flow, etc. As a rule, the more optimizations are performed, the better the resulting code, but the longer the "warmup time," which delays the actual program execution.

The intensity of optimizations may vary depending on the program. HotSpot, the most popular JVM, provides two compilers, C1 and C2:

  • C1 is a client compiler that starts optimizing the code immediately and performs simple optimizations. As a result, it provides faster startup, but less optimized code.
  • C2 is a server compiler that observes the running code for a while to collect the data for profiling, and only after that starts optimizing it. The warmup time increases, but the code is more performant than the one produced by C1.
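
A toy hot loop is enough to watch this tiering happen (an illustrative sketch, not a rigorous benchmark). Run the class below with `-XX:+PrintCompilation` and HotSpot will log `sum` being compiled, first by C1 and later, once enough profile data is collected, by C2; with `-XX:TieredStopAtLevel=1` compilation stops at C1.

```java
// Illustrative sketch of a JIT "hotspot": a small method called many
// times, so HotSpot profiles it and compiles it at increasing levels.
public class WarmupDemo {
    // Sums 0..n-1; hot enough to get promoted from the interpreter
    // through C1 to C2 during the run.
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int round = 0; round < 10_000; round++) {
            total += sum(1_000); // the hot call
        }
        System.out.println(total);
    }
}
```

The printed total (4995000000) is the same whether the method runs interpreted or compiled; only the time it takes changes as the JIT kicks in.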

In addition, a JIT compiler can perform adaptive optimization and dynamically recompile code portions during the app's execution when conditions change, for instance, when methods not used before are suddenly required.
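
A sketch of the kind of change that forces such recompilation: while a hot virtual call site only ever sees one receiver type, HotSpot can speculatively inline the target; when a second type shows up, that assumption is invalidated, the compiled code is deoptimized, and the method is recompiled for the new type profile. The class names here are purely illustrative.

```java
// Sketch: a monomorphic call site that later becomes polymorphic,
// which is a typical trigger for deoptimization + recompilation.
interface Shape { double area(); }

class Circle implements Shape {
    public double area() { return Math.PI; } // unit circle
}

class Square implements Shape {
    public double area() { return 1.0; }     // unit square
}

public class DeoptDemo {
    static double total(Shape s, int n) {
        double t = 0;
        for (int i = 0; i < n; i++) t += s.area(); // hot virtual call site
        return t;
    }

    public static void main(String[] args) {
        // Phase 1: only Circle is ever seen, so the JIT may inline area().
        double circles = total(new Circle(), 1_000_000);
        // Phase 2: a new receiver type appears; speculative code is discarded
        // and the method is recompiled with the richer profile.
        double squares = total(new Square(), 1_000_000);
        System.out.println(circles > squares);
    }
}
```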

JIT compilation provides performant code tailored to the specific use case and environment it runs in. In addition, you can use a wide variety of garbage collectors with JIT, including low-latency collectors such as ZGC present in newer Java versions, and thus reduce latency or increase throughput depending on your needs.

On the other hand, JIT compilation has several drawbacks:

  • The warmup phase may take several minutes, during which the application processes fewer requests than at a stable state, leading to higher latency at the beginning. The worst part is that this process repeats every time you restart the application.
  • The application uses more resources during the warmup. As a result, you allocate more memory to your instances than necessary.
  • The resulting container images are “bulky” because they include the compiled code plus the JVM.

The advantages and drawbacks of JIT compilation can be summarized as follows.

JIT advantages

  • Highly performant code thanks to various performance optimization techniques
  • Dynamic performance optimization when the application is running
  • Familiar debugging and monitoring tools

JIT disadvantages

  • Extended warmup period
  • Higher overhead at the start
  • Bigger container images that include the JVM

Alpaquita Containers with CRaC for faster startup and optimized footprint

What if you could warm up the application, save its state to a file, and then restore it from the moment it was paused, much like in a computer game?

The great news is that you can do that with Alpaquita Containers, which support the Coordinated Restore at Checkpoint (CRaC) API. CRaC enables developers to take a snapshot of a running Java application, replicate this snapshot among cloud instances, and restore the application at peak performance. This way, the startup and warmup times are reduced from minutes to milliseconds. In addition, there's no memory overhead because the application state has already stabilized. Therefore, two major drawbacks of JIT compilation are essentially eliminated. And a lightweight Alpaquita Container with the minimalistic, Alpine-inspired Alpaquita Linux contributes to the overall container size reduction.
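
In code, an application participates in checkpoint/restore through the CRaC API by registering a Resource whose callbacks release state (sockets, file handles, pools) before the checkpoint and re-acquire it after restore. The sketch below mirrors the shape of the real org.crac API (Resource, Context, registration) with simplified local stand-ins so it compiles without the dependency; in a real project you would implement org.crac.Resource and register it via org.crac.Core.getGlobalContext(), with the JVM driving the callbacks around the checkpoint.

```java
// Self-contained sketch of the CRaC programming model. The real types
// live in the org.crac package; minimal stand-ins are defined here so
// the example runs without the dependency.
import java.util.ArrayList;
import java.util.List;

interface Resource {
    void beforeCheckpoint() throws Exception; // close OS-level state
    void afterRestore() throws Exception;     // reopen it on the new instance
}

class Context {
    private final List<Resource> resources = new ArrayList<>();
    void register(Resource r) { resources.add(r); }
    void checkpoint() throws Exception {      // in reality: driven by the JVM
        for (Resource r : resources) r.beforeCheckpoint();
    }
    void restore() throws Exception {
        for (Resource r : resources) r.afterRestore();
    }
}

public class CracSketch implements Resource {
    @Override public void beforeCheckpoint() {
        System.out.println("checkpoint: connection closed");
    }

    @Override public void afterRestore() {
        System.out.println("restore: connection reopened");
    }

    public static void main(String[] args) throws Exception {
        Context ctx = new Context();   // real code: Core.getGlobalContext()
        ctx.register(new CracSketch());
        ctx.checkpoint();              // real code: triggered externally
        ctx.restore();                 // real code: java -XX:CRaCRestoreFrom=<dir>
    }
}
```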

Spring Boot Petclinic and Alpaquita Containers with CRaC: startup study results

The important thing is that the JIT compiler is there, so dynamic performance optimization is still possible.

Alpaquita Containers with Alpaquita Linux and Liberica JDK provide out-of-the-box support for the CRaC API, so you don't have to tune the JDK or OS to work with the feature. To learn more about the feature, refer to the article What is CRaC. And if you are ready for experiments, follow our guide on how to use CRaC with Spring Boot in a Docker container.

HotSpot vs GraalVM

It is important to mention that HotSpot's is not the only JIT compiler in the Java ecosystem. There's also OpenJ9, originally developed by IBM: you can study the comparison of HotSpot vs. OpenJ9 performance if you are curious. But unlike HotSpot and OpenJ9, which are written in C/C++, there's also GraalVM, a JDK whose compiler is written in Java.

Compared to HotSpot's JIT compiler, GraalVM's JIT compiler offers additional optimizations, so Graal outperforms the HotSpot JVM in some cases. In addition, GraalVM offers the Truffle framework for running applications written in non-JVM languages, allowing for efficient interoperability in multilingual projects.

But the most interesting part is that GraalVM also includes the Native Image technology, which enables ahead-of-time compilation of Java applications and returns us to the main topic of our article.

What is an AOT compiler

Ahead-of-time, or static, compilation happens before program execution, i.e., at build time. The GraalVM AOT compiler performs static analysis of the code under the closed-world assumption: it assumes that all classes required at run time are reachable at build time (the pitfalls of this approach are discussed below). The compiler translates bytecode into machine code specific to the operating system and architecture. All necessary classes are initialized, and their code is loaded into a single native executable together with the required library classes and statically linked code from the JDK.

The resulting native image starts up almost instantly because there is no need to search for hotspots and perform bytecode interpretation at runtime.

An AOT compiler eliminates unused code and dependencies, and coupled with the fact that the executable file doesn’t need a JVM to run, it helps to

  • drastically reduce memory consumption,
  • accelerate startup to as little as 1/10 s,
  • reach peak performance immediately without warm-up,
  • increase the security thanks to a smaller attack surface. 

Another substantial advantage of AOT compilation is that the code of the resulting native executable is very hard to reverse engineer, especially when run through an obfuscator before compiling a native image. It adds an extra level of IP protection.

As tempting as it may sound to migrate the project to Native Image and forget about long warmup and “bloated” containers, the technology has some drawbacks, and the migration presents a hard-to-ignore caveat.

As far as the drawbacks are concerned:

  • Building a native image is extremely resource-demanding, as you need to allocate several gigabytes of memory to the process.
  • As the resulting executable doesn’t contain a JVM, further performance optimization is impossible.
  • GraalVM offers a limited number of garbage collectors: SerialGC in GraalVM Community Edition and G1GC in GraalVM Enterprise Edition.

Suppose you don't expect a sudden surge in requests and are satisfied with Native Image performance as it is. Even so, AOT compilation of an existing project may be challenging because Native Image doesn't support the dynamic features of Java: Reflection, JNI, Dynamic Proxy, etc. So you have to provide a workaround: either rewrite the code or supply the required metadata for the libraries in JSON files generated with the Tracing Agent so that the AOT compiler can "take them in."
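
A minimal illustration of the problem, using nothing beyond the JDK: the class name below is just a run-time string, so a closed-world analysis cannot prove that java.util.ArrayList is needed. On a regular JVM this runs as-is; under Native Image the class must appear in the reflection metadata (e.g., in a reflect-config.json produced by the Tracing Agent), or the lookup fails at run time.

```java
// Sketch: reflection that works on a JVM but is invisible to
// Native Image's static analysis without reachability metadata.
import java.lang.reflect.Method;
import java.util.List;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // The target class is named by a string, not a static reference,
        // so an AOT compiler cannot know ahead of time what it refers to.
        Class<?> cls = Class.forName("java.util.ArrayList");
        Object list = cls.getDeclaredConstructor().newInstance();
        Method add = cls.getMethod("add", Object.class);
        add.invoke(list, "hello");
        System.out.println(((List<?>) list).size()); // prints 1
    }
}
```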

If you are curious, here’s an example of dealing with Reflection in a Spring Boot application when migrating to a Native Image.

To sum up, the pluses and minuses of AOT compilation are provided below.

AOT advantages

  • Almost instant startup
  • Smaller container images
  • Smaller attack surface
  • Hard to reverse engineer

AOT disadvantages

  • No dynamic performance optimization
  • Incompatible with dynamic features of Java
  • Limited range of garbage collectors
  • Challenging diagnostics

Liberica Native Image Kit with add-ons for performance and debugging

To elevate the overall performance of native images, you can use Liberica Native Image Kit (NIK), a GraalVM CE-based native-image compiler developed and supported by BellSoft.

  • We added a ParallelGC implementation to Liberica NIK that can significantly improve latency with native images (by 10–40% according to our studies).
  • macOS users can conveniently debug native images thanks to the addition of this feature to Liberica NIK.
  • Liberica NIK is used by default with Cloud Native Buildpacks and recommended by Spring as a Native Build tool, which promises smooth integration of the technology with your Spring Boot project. 

JIT vs AOT: Summary

To sum up, JIT and AOT compilers offer different approaches to optimizing Java application performance.

JIT compilation enables higher overall performance and dynamic performance optimization. However, it may take several minutes for an application to warm up, which is not optimal for certain cloud deployment scenarios. JIT compilation is also associated with memory overhead and higher memory consumption overall. Mitigating these drawbacks is possible with Alpaquita Containers supporting CRaC API.

AOT compilation allows for the creation of native images with almost instant startup. A single native executable can be smaller due to the absence of the JVM, contributing to reduced memory consumption. On the other hand, the Native Image technology doesn't provide dynamic performance optimization and offers a limited set of GC implementations. To smooth out this drawback, you can use Liberica Native Image Kit, which includes ParallelGC. In addition, AOT compilation happens under the closed-world assumption, which makes migration difficult.

There's never a one-size-fits-all solution. AOT and JIT work their magic in different scenarios, so to understand which approach is best for your project, you should match your SLOs with the advantages of each and evaluate the risks related to their drawbacks.

