
Quarkus, or Spring Native, or JVM in Containers: Choose Your Cloud-Native Fighter



Published May 20, 2021


Do you have a ready-to-go Java app and want to take advantage of everything the modern cloud offers? Deployment is the next step, and you’ve come to the right place for guidance.

Here we’ll focus on three cloud-native development methods. All of them can significantly improve your software’s performance and resource efficiency. The article follows our DZone Refcard but takes a closer look at the approaches themselves: running a JVM in plain containers or using a Java framework (Quarkus from Red Hat or Spring from VMware) to deploy native Java applications.

Contents

  1. Preparation
  2. JVM in Linux containers
  3. MicroProfile & Quarkus
  4. Liberica NIK & Spring Native

Our engineers are passionate about bringing enterprise projects to the cloud. We believe that’s the future. If you have a complex system full of moving parts and no time to learn the ropes, our expert is ready to help. Let’s meet and find out which cloud-native solution best suits your project. Start your digital transformation now!

Preparation

Before we start, let’s assume you’ve got pre-built software based on microservice architecture. If not, we suggest drawing some inspiration from our recent guide on how to build an e-commerce application for an online store.

First, no matter which cloud service provider you choose, you need a tool, or rather a platform, that manages containerized workloads and services automatically: a so-called “container orchestrator.” We would recommend Kubernetes as it is open source, portable, and allows for scaling across multiple clouds. Not to mention it is a market leader: 59% of Java developers in large organizations reported using Kubernetes in production in 2020.1
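As an illustration, a minimal Kubernetes Deployment for one containerized Java microservice might look like the sketch below. The service name, image, and resource values are all placeholders, not part of any real setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service              # hypothetical microservice name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: "512Mi"       # enforced via cgroups
              cpu: "500m"
```

Applying this manifest with `kubectl apply -f deployment.yaml` would give the orchestrator two replicas to schedule, restart, and scale on your behalf.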

Then comes an ingress controller, which exposes your containerized application to users. It also configures an HTTP load balancer and integrates proxies. Among the many options to choose from,2 the Kubernetes project maintains an open source NGINX implementation, which is a solid default pick.
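For instance, with the NGINX Ingress Controller installed, a minimal Ingress resource routing external traffic to a backend service could look like this sketch (the host and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  ingressClassName: nginx           # handled by the NGINX Ingress Controller
  rules:
    - host: shop.example.com        # hypothetical external host
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service   # hypothetical backend service
                port:
                  number: 8080
```

The controller watches for such resources and reconfigures its NGINX load balancer accordingly, so no manual proxy configuration is needed.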

Voila, you’re all set for building microservice containers. There are multiple ways to create and deploy them in the cloud. We will focus on the development and DevOps processes here and bring to light three approaches: a rather straightforward one, a Quarkus-based one, and one centered around native images.

JVM in Linux containers

Cloud-native software development rests upon four pillars: microservices, containers, continuous integration/continuous delivery, and DevOps.3 Docker containers running in Linux virtual machines are the most straightforward approach to cloud-native Java. Having evolved from deploying WAR files on web servers to OS-level virtualization, this method is quite a potent optimization tool.

You can have this setup for a Linux container as an example: a hypervisor host OS in the cloud, a guest OS on a VM serving as a host OS for Docker, Docker in the guest OS providing a container runtime, and JVM inside the container that runs Java bytecode.

JVM containerization

A suggestion from BellSoft: use Liberica JDK as your runtime component inside the container. It combines excellently with the majority of Linux distributions, including lightweight Alpine. The resulting build will help you decrease static and dynamic footprint, thus reducing expenses and increasing overall gains.
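A minimal sketch of such a container is shown below. The JAR name is a placeholder, and the exact `bellsoft/liberica-openjdk-alpine` image tag may differ in your setup:

```dockerfile
# Lightweight Alpine-based image bundling Liberica JDK
FROM bellsoft/liberica-openjdk-alpine:17
WORKDIR /app
# Copy the pre-built application JAR (placeholder name)
COPY target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Pairing a trimmed JDK with Alpine’s musl-based userland is what keeps the static footprint of the resulting image small.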

One problem you might run across is container resource management. For instance, when the heap size is raised above the memory allowance of the container (enforced via cgroups), the application can be killed by the OS.

Yet, this is not a thing to worry about: the mentioned issues are known to the OpenJDK community, addressed, and mostly solved. You will rarely need to think about how memory is allocated when deploying your Java application in a Linux container. And if you do, check out our previous article that goes deep into the subject.
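If you do want explicit control, a simple safeguard is to size the heap as a percentage of the container’s memory limit rather than as an absolute value. The limits and image name below are illustrative only:

```shell
# Cap the container at 512 MiB; let the JVM size its heap as 75% of that limit
docker run --memory=512m \
  -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=75.0" \
  my-java-app
```

Because modern OpenJDK builds are container-aware, `-XX:MaxRAMPercentage` is computed against the cgroup limit, not the host’s physical memory, which keeps the heap safely inside the container’s allowance.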

MicroProfile & Quarkus

Another popular approach to cloud-nativity is using frameworks with MicroProfile support. These are aimed to optimize Jakarta EE for microservices and vary a lot. Some have their own APIs like Micronaut, some gather libraries for lightweight packaging like Dropwizard, and some originate from major JDK vendors like Helidon (Oracle) or Quarkus (Red Hat). We will elaborate a bit more on the latter, Quarkus specifically.

The Quarkus project is a Kubernetes-native Java framework. It is essentially a full-stack set of technologies adapted to OpenJDK HotSpot for Java virtual machines and GraalVM for native compilation. For microservice creation, Quarkus works with Eclipse MicroProfile libraries along with such tools as Apache Kafka, Apache Camel, dependency injection, Hibernate ORM (JPA and JTA annotations), RESTEasy (JAX-RS), Vert.x, and more.

Other bonuses include support for Maven and Gradle plugins (e.g., to run the Quarkus application in the development mode on a local or a remote machine) and relatively fast boot time. The Quarkus extensions built within the framework—or custom ones that you can develop on your own—make developing and deploying microservices easier.
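As a sketch of that workflow with the Maven plugin (the group and artifact IDs are placeholders, and plugin invocation details vary between Quarkus versions):

```shell
# Generate a new Quarkus project with a REST extension (placeholder coordinates)
mvn io.quarkus:quarkus-maven-plugin:create \
    -DprojectGroupId=com.example \
    -DprojectArtifactId=orders-service \
    -Dextensions="resteasy"

# Run in development mode with live reload
cd orders-service
mvn quarkus:dev

# Build a native executable (requires a GraalVM-based toolchain such as Liberica NIK)
mvn package -Pnative
```

Development mode recompiles and reloads changed classes on each request, which is what makes the inner development loop so fast.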

However, there are certain disadvantages to the MicroProfile approach:

  • For cloud tasks, Quarkus relies heavily on Kubernetes features, such as for traffic management. It may not be convenient for companies that haven’t used Kubernetes since migrating to this tool requires time and commitment.
  • Working with this technology at first might be full of trial and error, e.g., when enabling native compilation without the introduction of a Quarkus extension results in your Gradle task just failing without any useful messages.
  • A simpler ecosystem: Quarkus and other MicroProfile frameworks have fewer libraries available compared to our next contender, Spring Boot.

Liberica NIK & Spring Native

What if we take the features already embedded in the JDK and give them a modern twist? Introducing Native Image: a truly progressive and cloud-native approach to Java. Here the optimal tool will be Liberica Native Image Kit (Liberica NIK), based on GraalVM CE with BellSoft’s contributions. It compiles Java bytecode into platform-dependent binary code to form fast and lightweight native executables.

With full-fledged support for Alpine Linux musl, Liberica NIK is your optimal choice to minimize resource consumption, achieve record startup times (up to 1/10 of a second), and save on memory. You may read more about its advantages in our Liberica NIK announcement.
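Compiling a JAR into a native executable with Liberica NIK follows the standard GraalVM `native-image` workflow. A minimal sketch, with the JAR and executable names as placeholders:

```shell
# Ahead-of-time compile the application into a standalone native executable
native-image -jar app.jar app

# Run it: startup typically takes a fraction of a second
./app
```

The output is a self-contained binary with the runtime baked in, so the container image no longer needs a JDK at all.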

Native Image

Every rose has its thorn, and this method is no different. There are certain things we cannot do with Native Image, which we described at length in a previous article about building microservices. In short, a different optimization model and a closed-world assumption might make Java apps behave unusually. So, be warned: not all software tolerates this kind of optimization.

To overcome these drawbacks, you should opt for the Spring Framework (which needs no introduction) and Spring Native. This experimental utility is used for transforming Spring apps into native executable files. Besides overcoming some of the native image limitations described above, it also provides a native deployment option for tiny containers. Take a look at a part of the presentation by Josh Long and Andy Clement, Spring Native co-developers and experts, at BellSoft’s JRush event in March 2021:
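With Spring Native (experimental at the time of writing), a Spring Boot app can be packaged as a native executable inside a container image via Cloud Native Buildpacks. A sketch of that flow, with the image name and tag as placeholders:

```shell
# Build an OCI image containing a native executable using Cloud Native Buildpacks
mvn spring-boot:build-image

# Run the resulting tiny container
docker run --rm -p 8080:8080 orders-service:0.0.1-SNAPSHOT
```

The buildpack performs the native compilation inside the build, so the developer workflow stays the familiar Spring Boot one.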

Together, these tools constitute an advanced yet simple and rewarding method to building cloud-native Java applications.


Now, which approach is the optimal one? This is for you to decide. Your business may go even further and build a customized microservices app based on Java SE or GraalVM. And if you need help, you can always rely on BellSoft professionals with 15+ years of Java experience.

This post is only part of our DZone Refcard Introduction to Cloud-Native Java. This extensive document contains basically everything you need to set up your first cloud-native environment. It also offers expertise on a specialized topic that is hard to find elsewhere.

Click the button below to get access. Note that you should have a DZone account in order to download the Refcard.

References

  1. Why Large Organizations Trust Kubernetes
  2. Ingress Controllers for Kubernetes
  3. What are Cloud Native Applications?

Aleksei Voitylov

BellSoft CTO
