Docker, Kubernetes, and other technologies for containerized applications are a must in modern development, especially for microservices. But containerization is an intricate science, and even seasoned developers can make mistakes when working with containers. In this article, you will find out how to avoid these pitfalls and solve the issues you might have already encountered.
- Overview
- Be cautious about automatic Docker image generation
- Set the -Xmx parameter correctly
- Avoid automatic SerialGC switching
- Use newer OpenJDK distributions
- Set memory limits for the container
- Conclusion
- Further Reading
Overview
The process of containerization seems easy: pack the application in a container, deploy it to the cloud, done. But later you run into decreased performance and increased cloud bills. What could have gone wrong?
There are typical mistakes in Java development that lead to deteriorated performance, including:
- Faulty application logic
- Incorrect usage of databases
- Concurrency issues
- RAM under- or overutilization
- Improper server infrastructure
In the case of containers, developers should focus on memory usage and infrastructure. Below you will find recommendations on how to configure containers and tune JVM settings to reach optimal performance and footprint. But before we go any further, it is worth noting that no JVM fine-tuning will help eliminate strategic errors. Just like local optimization in algorithms cannot beat asymptotics, container tuning won’t fix code with memory leaks or unnecessary calculations. So don’t neglect application profiling to prevent or remedy such errors in time.
Be cautious about automatic Docker image generation
Imagine you already have a well-written Java application and want to spend minimum time and effort to containerize it and deploy it to production. The Java world already offers solutions that help with automatic container generation, for example, Paketo Buildpacks. Simply install the pack utility and then follow the official guidelines on developing applications with this tool.
However, if you run the application built this way, you may notice that it uses less memory and fewer cores than allocated to the container. For instance, the container may have 4 GB of memory and 16 processors at its disposal, but the JVM gets access to only 1 GB and 2 cores.
This particular issue was solved in the latest buildpack versions. It can also be resolved by setting the container parameters manually and turning off the automatic core count and memory calculation with
docker run --rm --tty --publish 8080:8080 --env 'JAVA_TOOL_OPTIONS=-Xmx3072m -XX:ActiveProcessorCount=4' --memory=4G paketo-demo-app
But we recommend always checking and setting these parameters correctly. Automatic image generation saves time, but as a result you get a black box that may not fit your purposes.
Set the -Xmx parameter correctly
If you want to create the image of your application manually, you need to set the -Xmx parameter correctly. It defines the maximum heap size, but to choose the right value, you should first determine how much RAM the application actually uses.
The idea is to set the RAM limit for the container higher than the JVM heap size. In addition, the server should have enough RAM to start all your containers.
If the -Xmx value is lower than what the application needs, it will crash with an “OutOfMemoryError: Java heap space”.
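As a rough sketch (the image name and values here are illustrative, not a recommendation), give the container more memory than the heap so that metaspace, thread stacks, and other native allocations still fit:
# The container gets 1 GB, the heap is capped at 768 MB,
# leaving headroom for non-heap memory
docker run --rm --memory=1g --env 'JAVA_TOOL_OPTIONS=-Xmx768m' my-java-app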
How do you determine the amount of memory your application requires? The easiest method is to activate GC logging in the JVM parameters by adding
-verbose:gc -XX:+PrintGCDetails
This way, the total and used memory values will be printed to the console before the application exits.
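For instance, assuming the application ships as app.jar (a hypothetical name), logging can be enabled as follows; note that on JDK 9 and newer, the unified logging option -Xlog:gc* replaces -XX:+PrintGCDetails:
# JDK 8 and earlier: classic GC logging flags
java -verbose:gc -XX:+PrintGCDetails -jar app.jar
# JDK 9 and newer: unified GC logging
java -Xlog:gc* -jar app.jar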
The amount of memory allocated through -Xmx should be bigger than the actually used memory found in the GC logs. How much bigger depends on the peak values. To find them, perform load testing: run the application with a profiler, for instance, Java Flight Recorder, and analyze how RAM consumption depends on the load.
Avoid automatic SerialGC switching
Java has a remarkable feature: if you limit your application to less than 2 GB of RAM or fewer than 2 processors, the JVM automatically switches to SerialGC.
SerialGC is the oldest garbage collector. It is relatively efficient and well suited to single-threaded applications, but at the same time, it is the slowest of the Java collectors.
The conditions of SerialGC activation are detailed in the select_gc_ergonomically() and is_server_class_machine() functions in the OpenJDK source code. Note that flags such as -XX:+UseG1GC won’t help if you allocate too little memory.
To sum up, even if you want to save memory, do not set less than 2 GB in the -Xmx parameter.
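One way to check which collector the ergonomics actually picked is to print the final flag values inside a resource-limited container; here is a sketch using the Liberica image mentioned below (the CPU and memory values are arbitrary):
# With a single CPU and 1 GB of memory, ergonomics typically falls back to SerialGC
docker run --rm --cpus 1 --memory 1g bellsoft/liberica-openjdk-alpine \
  java -XX:+PrintFlagsFinal -version | grep -E "Use(Serial|G1)GC"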
Use newer OpenJDK distributions
The OpenJDK community has been working actively on container support. A lot of issues have been eliminated.
For example, there is a known issue: the top and free tools inside a container show the total host memory, not the amount you assigned with the --memory parameter. You can check it by running the container in interactive mode:
docker run \
--interactive \
--tty \
--memory 10m \
bellsoft/liberica-openjdk-alpine
After that, you can run commands such as free -m or top -bn1 in the console. No matter how you experiment with the startup flags, you won’t see the desired 10 MB: the console will print only the total amount of memory on your computer.
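If you need to see the real limit from inside the container, reading the cgroup filesystem is more reliable than free or top; which file to read depends on whether the host uses cgroup v1 or v2:
# cgroup v1: the limit in bytes
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# cgroup v2: the limit in bytes, or "max" if unlimited
cat /sys/fs/cgroup/memory.max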
Earlier, the JVM demonstrated the same behavior. Run this command in the console:
docker run -m 1gb openjdk:8u131 java -XshowSettings:vm -version
Now compare it to the latest OpenJDK version:
docker run -m 1gb bellsoft/liberica-openjdk-alpine:latest java -XshowSettings:vm -version
When using the older version, the console output will show the total host memory, just like free does. In the case of the newer version, the value will equal about a quarter of the declared memory limit. The same goes for the processor count reported by Runtime.getRuntime().availableProcessors(). This is the result of the -XX:+UseContainerSupport flag introduced in OpenJDK 10 and activated by default.
You can experiment with this flag. Switch it off, and you will get the previous behavior even with the newer OpenJDK versions:
docker run -m 1gb bellsoft/liberica-openjdk-alpine:latest java -XX:-UseContainerSupport -XshowSettings:vm -version
This means that the latest versions are container-aware. Therefore, you should always update your distribution to the latest version to prevent similar issues.
You don’t have to migrate straight to JDK 17, the latest LTS release, although it is desirable because this version contains a lot of new features. If you work with Java 8, this problem was fixed in 8u212. But the best practice is to use the newest official OpenJDK release for your Java version. Liberica JDK updates always come out on time, guaranteeing that your runtime will be free of known bugs and vulnerabilities.
Set memory limits for the container
Containers also have memory limitations. For example, you can set the memory limit in Docker with the --memory flag. Kubernetes has a more complex system: you have to specify request (used by the scheduler to find a suitable node) and limit (a strict memory limit enforced through cgroups) separately. In any case, it makes sense to allocate more memory to the container than the app requires, but there are certain intricacies we will discuss below.
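As a minimal sketch, both values can be set on an existing deployment with kubectl (the deployment name and amounts are placeholders):
# Request 1 GiB for scheduling, enforce a 1.5 GiB hard limit via cgroups
kubectl set resources deployment my-app --requests=memory=1Gi --limits=memory=1536Mi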
How to enable Native Memory Tracking
GC logs don’t show all the memory used by your application. For example, extra resources may be spent on
- Off-heap memory (ByteBuffer.allocateDirect)
- External native libraries loaded through System.loadLibrary
- Significant heap fragmentation due to the fact that malloc allocates memory in blocks
and so on.
When determining container memory limits, you should consider not only -Xmx, but also the total memory consumption. You can check it by using Native Memory Tracking (NMT).
First, start your Java application with the following flag:
-XX:NativeMemoryTracking=detail
Then, to see the total memory usage with its internal breakdown, run
jcmd $pid VM.native_memory
where $pid is the Java process identifier. It can be found by running jcmd without parameters or by using ps aux | grep java.
You can track memory changes by using
jcmd $pid VM.native_memory detail.diff
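Note that the diff is computed against a baseline, so record one first; a minimal sketch of the workflow (assuming $pid is set as above):
# Record the current native memory usage as the baseline
jcmd $pid VM.native_memory baseline
# ...let the application run under load for a while...
# Show what has changed relative to the baseline
jcmd $pid VM.native_memory detail.diff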
How to disable Swap Memory
Suppose you already know the precise amount of memory required by your application. It is possible to set the -Xmx value higher than available memory on the server or in the container, but the performance will deteriorate.
The reason is that heaps in Java differ from those in C/C++. Scripts and native programs in C/C++ can be swapped out relatively painlessly. In Java, we have to work with a single heap, where objects are evenly distributed over the whole address space of the process. If most of them end up in the swap file, the application will slow down significantly.
What should we do to avoid that? Set the correct -Xmx value, or even disable swap both in the host machine’s OS and in the container. Use the swapoff -a command on the host and comment out the respective entries in the /etc/fstab file (sed -i '/ swap / s/^/#/' /etc/fstab). In Docker, set the --memory-swap parameter equal to --memory.
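For example (the values and image name are illustrative), setting both flags to the same value leaves the container no swap to fall back on:
# --memory-swap equal to --memory means "memory limit only, no swap allowed"
docker run --rm --memory=1g --memory-swap=1g my-java-app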
Fortunately, the kubelet will not start with swap memory enabled by default, unless you explicitly allow it with KUBELET_EXTRA_ARGS=--fail-swap-on=false and the memorySwap parameter in KubeletConfiguration.
Conclusion
To sum up, successful containerization rests upon the following principles:
- Determine how much memory your application requires
- Set the -Xmx parameter and RAM limits for the container accordingly
- Use the latest OpenJDK builds for your Java version
- Tune the system, for example, by disabling swap
This article explains the fundamentals of the processes mentioned above. Later on, we will dive even deeper into the intricacies of containerized applications: choosing the garbage collector, using Java for embedded systems, etc. Stay tuned!