

Microservices Inside and Out: When Theory Meets Practice


Published September 21, 2020 | Updated October 14, 2021


There is hardly a developer left who hasn’t heard of microservices. So much has been written about microservice architecture, and yet you might still stumble over the basics. What’s the difference between SOA and these tiny services? Do they promise new horizons, or are they detrimental to your carefully crafted applications? And how do you build microservices? If you want to make things as clear as a bell, you’ve come to the right place.

In this article, we give the definition of microservice architecture, discuss frameworks, containers, a native image vs. runtime approach, and the intricacies of creating and supporting microservices from a Java™ language perspective. You will find out how to turn your application into a working microservice. Intrigued? Dive right in!

Prefer speaking to reading? Let us know, and our expert engineers will guide you through the convoluted world of microservices and help with migration if your company needs it.

What exactly are microservices?

The microservice architecture is an alternative lightweight method of organizing software as opposed to monoliths.

Previously, applications were mostly developed on a single codebase as centralized entities. Think of an application with dozens of functionalities that processes tons of data and is still a single monolithic system. Now think of the time spent on updating or troubleshooting it: one tiny requirement forces the team to change the entire monolith.

This style of software development was great for its purposes until applications became more prevalent on mobile devices and in the cloud. When your back end serves numerous and diverse devices, monolithic architecture won’t cut it. New features may introduce bugs anywhere in the system, and a push to another OS forces the team to redesign the whole application.

This is where service-oriented architecture (SOA) comes in. SOA was an attempt to solve the downtime issue by dividing the structure into discrete units of similar functionality, i.e., services. SOA helped developers: services can be accessed remotely, updated and redeployed independently, and transmit information via a communication protocol. However, the required level of abstraction was too high, making the code SOA-oriented instead of domain-oriented. Complex message formats and Web services standards did little to facilitate communication either. As a result, a SOA-based application can be even slower and require more processing power.

Microservices are a variant of SOA. Such services were designed to avoid the risk of software bloat. Microservices are smaller in size and communicate over a network through language-agnostic protocols, such as HTTP-based APIs. This gives developers more freedom in choosing production tools, since they don’t have to rely on enterprise service buses, other services, and the way they couple. Then, thanks to advances in containerization, and especially tiny Alpine Linux containers, parts of an application built on microservices have become even more autonomous. Now the business components of your application can be controlled individually while running simultaneously on the same hardware. Microservices in containers pave the way for cloud-native computing along with efficient and scalable applications.

How Microservice Architecture Works

If your team already has a monolithic application that has outgrown itself, you can split it into a system of microservices. Over time you can add new services that take some load off the existing ones.

Below you will find the tools that will help you create a microservice architecture using Java™.

Contexts and Dependency Injection

In Java™ EE, Contexts and Dependency Injection is a feature that helps to control the lifecycle of stateful objects using domain-specific contexts and integrate various components into client objects in a type-safe way.

Annotations or various means of external configuration make it possible to bring plain class fields to life. Containers or frameworks that control a class’s (bean’s) lifecycle can construct the necessary dependencies and inject them into those fields. The original code never sees that complexity and won’t notice when a dependency is replaced for testing or when the way components communicate changes. Therefore, the team can write and test smaller components and thus build more flexible and reliable services. The underlying principle of such architecture is called Inversion of Control (IoC).
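To make the idea concrete, here is a minimal, self-contained sketch of IoC: a toy container fills in fields marked with a homemade `@Inject` annotation after constructing the object. The annotation, `Container` class, and `Greeter` interface are illustrative inventions for this example, not part of CDI or Spring, but real containers rely on the same reflective mechanism.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class IocDemo {
    // A homemade injection marker, standing in for CDI's @Inject.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Inject {}

    interface Greeter { String greet(String name); }

    static class PlainGreeter implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    static class GreetingService {
        @Inject Greeter greeter;               // populated by the container, never by "new"
        String welcome(String name) { return greeter.greet(name) + "!"; }
    }

    // Minimal "container": one binding map plus reflective field injection.
    static class Container {
        private final Map<Class<?>, Object> bindings = new HashMap<>();
        <T> void bind(Class<T> type, T impl) { bindings.put(type, impl); }
        <T> T create(Class<T> type) throws Exception {
            T instance = type.getDeclaredConstructor().newInstance();
            for (Field f : type.getDeclaredFields()) {
                if (f.isAnnotationPresent(Inject.class)) {
                    f.setAccessible(true);
                    f.set(instance, bindings.get(f.getType()));  // inject the bound dependency
                }
            }
            return instance;
        }
    }

    public static void main(String[] args) throws Exception {
        Container c = new Container();
        c.bind(Greeter.class, new PlainGreeter());   // a test could bind a stub here instead
        GreetingService svc = c.create(GreetingService.class);
        System.out.println(svc.welcome("microservice"));
    }
}
```

Because `GreetingService` never instantiates its dependency, a test can bind a stub `Greeter` without touching the service code.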

Dynamic proxies

When we create a service architecture, the role of reflection is critical. It powers discovery mechanisms: components/beans find each other at run time, taking only the required actions. One alternative is to be aware of all the requirements in advance and generate code that covers all planned interactions. Another is to generate and reload some classes on the fly. In any case, it’s essential to have type-safe dependency injection.

In Java™, we can create a facade instance that intercepts interface method calls. This design pattern is called a Dynamic Proxy and is implemented with the help of java.lang.reflect.Proxy and java.lang.reflect.InvocationHandler. This is a built-in reflection API mechanism, widely used in CDI containers, e.g., in Spring.
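Here is a small working example of that mechanism. The `OrderService` interface and its implementation are made up for illustration; the proxy machinery itself is the standard JDK API:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface OrderService { String placeOrder(String item); }

    static class RealOrderService implements OrderService {
        public String placeOrder(String item) { return "ordered " + item; }
    }

    public static void main(String[] args) {
        OrderService real = new RealOrderService();

        // The handler intercepts every interface call; a container could add
        // logging, transactions, or a remote call here without changing callers.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("intercepted: " + method.getName());
            return method.invoke(real, methodArgs);   // delegate to the real bean
        };

        OrderService proxied = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class },
                handler);

        System.out.println(proxied.placeOrder("coffee"));
    }
}
```

The caller holds only the `OrderService` interface, so it stays type-safe and never learns that a proxy sits in front of the real implementation.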

Web Server

Frameworks take care of plugging in tunable components at a high level for the developer. And yet there is a world of lower-level communication protocols such as HTTP, where we need an intermediary between network connections and business logic. There is also a need to manage state (contexts), resources, and security. That’s what web servers do. Each microservice is coupled with its own server instance, configured centrally with the framework’s help, like an embedded Tomcat.
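The JDK ships a minimal web server of its own, which is enough to show the intermediary role described above: raw connections come in, a handler runs the business logic. The `/health` endpoint below is an invented example; an embedded Tomcat inside a framework plays the same part at production scale.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MiniServer {
    public static void main(String[] args) throws Exception {
        // Bind to port 0 so the OS picks a free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "UP".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        int port = server.getAddress().getPort();

        // Call our own endpoint to demonstrate the full round trip.
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(
                        URI.create("http://localhost:" + port + "/health")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());

        server.stop(0);
    }
}
```

The server owns the connection handling and threading; the handler sees only a request/response exchange, which is exactly the separation a framework builds upon.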

The Role of JVM

JVM and the core class library serve as a foundation for the web server, libraries, and business logic. The developer’s bytecode is verified and executed by the runtime and may be easily examined by standard diagnostic tools like Mission Control.

JVM tuning is bound to the tuning of the web server, framework, libraries, and the application as well. As they typically allow customization, it’s natural to configure them all at once.

In addition, JVM may be updated independently of business logic. So security updates, new features, and optimized defaults go into production with the main code intact.

Frameworks

Frameworks are critical to implementing many routine tasks within microservices. They are very abundant in our industry: Spring, MicroProfile, Quarkus… The list goes on and on.

Nevertheless, frameworks don’t exist in a vacuum: they often share common APIs. It is not uncommon for a framework to support multiple APIs, because doing so allows existing code to be ported as is or with minor changes.

Containerization

Now we have all the parts of a microservice defined. The target OS, system packages, and hardware are also determined. What remains is to run the entire service system.

Containerization is a modern technology that abstracts away the OS and external communications without the high price of hardware virtualization. Container management systems like Docker or Podman let us define, publish, and run container images with various software. Operating systems provide the necessary levels of isolation and constrain resources for each container instance as requested by a management tool.

Software parts in a container image form a stack, which makes the physical deployment of a microservice more efficient. When only the top layers are updated, the cached lower ones are reused: binaries from cached layers aren’t transferred again, and other preliminary actions are skipped. A layered image looks like this:

Microservice container layers
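As a rough sketch, the layering can be expressed in a Dockerfile. The image tag and paths below are illustrative assumptions (a Liberica JDK base image is one real option), not a prescribed setup:

```dockerfile
# Bottom layer: OS + JDK -- cached, rarely changes
FROM bellsoft/liberica-openjdk-alpine:17

# Middle layer: third-party dependencies change less often than app code
COPY target/dependency/ /app/lib/

# Top layer: application classes -- the only layer rebuilt on a typical commit
COPY target/classes/ /app/classes/

ENTRYPOINT ["java", "-cp", "/app/classes:/app/lib/*", "com.example.Main"]
```

Ordering the layers from least to most frequently changed is what lets a rebuild after a code change transfer only the thin top layer.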

The job of developers is to configure all these parts in such a way that they interact with one another, work correctly in a host OS, and communicate with external systems.

Finally, there are orchestration systems that make it possible to distribute containers over a cluster of machines, like Kubernetes and Marathon. Here, traces of Java™ code and JVM disappear, building blocks become the configured containers deployed over the server fleet, and they may coexist with SaaS components available in a cloud.

Pros and Cons of Microservices

There are multiple instances where developers might find microservices quite useful.

  1. Scalability. Microservices are a perfect technology if you plan to scale your application up or down (vertically, with CPUs/memory/storage) or out and in (horizontally, with nodes/desktops/servers). Such flexibility is granted by each service being independent. Plus, it is possible to alter the application dynamically, turning components on and off to balance computational loads. Scaling won’t come as easily for monoliths. Sure, these structures can scale up by replicating the application and running multiple copies. However, each new instance will access all data and resources, increasing I/O traffic and memory consumption and making caching inefficient. Some nodes utilize CPU and data storage in different proportions, which slows down the entire system. Monolithic software products thus lose their acclaimed stability.
  2. Cloud-native apps. Microservices go well together with Docker containers, where they are encapsulated along with runtimes, frameworks and libraries, even an operating system. This way, every business function is separated, maintained, and deployed in the cloud as one unit.
  3. Undemanding maintenance. Smaller services are easier to test and debug. Assume an enterprise supports a monolithic application spread onto thousands of copies. Consequences of updating only one segment are unpredictable. The microservice architecture is more straightforward: here’s an interface, its implementation, and a definite number of services. Replace some of them and don’t touch the rest. Combined with continuous delivery, this design model drives you closer to delivering fail-proof products to your customers.
  4. High resiliency. Since the application is decoupled into independent services, it’s more resistant to errors. A bug in one part of the code will not cause the entire system to shut down. Instead, the team will only have to fix that one service.
  5. Overall economy. A benefit in three parts:
    • Building an application as individual services means short release cycles. An enterprise doesn’t need a complete redesign to modernize one service; it uses continuous delivery with cross-functional teams and deploys services independently.
    • Containerized images with microservices use file systems such as UnionFS, AuFS, OverlayFS. They create layers that describe how to recreate actions done to a base image. As a result, only layer updates multiply, leaving containers lightweight, saving valuable resources and reducing company expenses.
    • When functionalities are autonomous, you can assign separate teams to work on each. They don’t even have to be at the same place. Operating on powerful and pricey machines is an unnecessary luxury, too. Your engineers will be able to deploy microservices with the simplest devices (read “x86 32 bit”).
  6. A wide selection of tools. The microservice architecture helps to avoid vendor lock-in. If components don’t depend on one another, they may run on different frameworks or use a number of libraries. Developers might even choose several languages to write code for a single application.
  7. Independence from the database. When a service is separated from the system, we can forget about direct database links thanks to a unified interface. It is possible to make changes to the data or even replace the database entirely.
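The last point deserves a concrete illustration. Here is a minimal sketch of hiding storage behind an interface; `UserRepository` and its in-memory implementation are invented names for this example, but the pattern is the standard repository abstraction:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class RepositoryDemo {
    // The business code depends only on this interface, never on a driver.
    interface UserRepository {
        void save(String id, String name);
        Optional<String> findName(String id);
    }

    // In-memory implementation; a JDBC- or NoSQL-backed one could be
    // dropped in behind the same interface without touching callers.
    static class InMemoryUserRepository implements UserRepository {
        private final Map<String, String> store = new HashMap<>();
        public void save(String id, String name) { store.put(id, name); }
        public Optional<String> findName(String id) {
            return Optional.ofNullable(store.get(id));
        }
    }

    public static void main(String[] args) {
        UserRepository repo = new InMemoryUserRepository();
        repo.save("42", "Ada");
        System.out.println(repo.findName("42").orElse("not found"));
    }
}
```

Swapping the database then means providing another implementation of the same interface, which is exactly what makes the service independent of its storage.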

At this point, you may be eager to give new impetus to your application by turning it into a refined web of microservices. However, even if the benefits of microservice architecture are clear, there’s always a fly in the ointment. Before switching to microservices, the CTO and engineers in your organization should understand whether this concept aligns with the company’s business goals. Remember that before everything runs like clockwork, the team might spend countless hours and effort planning, drafting documentation, testing, redesigning… So sometimes it is wiser to leave your monolith be. If you are uncertain, you can take advice from professionals. Click the button below, leave your details, and discuss your situation with our experts.

Let’s talk about the drawbacks of the microservice architecture.

In the case of traditional monoliths, applications are contained within the same virtual memory space, and requests are sent to a physical server. Microservices communicate using protocols (HTTP/HTTPS, AMQP, REST, TCP). It implies a number of consequences:

  • constantly moving traffic across networks,
  • request failures and other web-related errors,
  • a possible need to encrypt and repackage data into other forms.

When an application is oversaturated with functions, microservice management may become a massive problem. Imagine a network of 10 data silos where each addresses another, sometimes concurrently. Spaghetti code is a well-known programming anti-pattern, and this is what a microservice architecture might turn into if poorly organized.

Speaking of organization. A service system may be built in such a way that each business module is under the supervision of one small team. However, if your company has been working with a monolith for a long time, it could be challenging to set up and control communication and collaboration between teams.

Nevertheless, the advantages of microservice architecture outweigh the disadvantages. And even the latter can be tackled, which is what we are going to discuss now.

How to Make Microservices Effective and Efficient

If you’d like to put theory into practice, you can build your own application based on our microservices example (Building a Java microservices E-commerce app). Meanwhile, we are moving on.

A microservice system can quickly spin out of control due to its complexity and turn into an intricate web that even senior developers will find hard to untangle. How do you solve this problem? Decouple services with event streaming platforms (Kafka, Kinesis, Spark Streaming), simplify service-to-service communication with a service mesh (Istio, OSM on Kubernetes), or redesign the system. Optimally, your goal is to avoid this:

[Image: a tangled web of services, message queues, and databases with point-to-point connections]

…by turning it into this:

[Image: the same system decoupled, with services communicating through message queues]

Note that the first scheme doesn’t feature any connection between message queues (represented by “MQ” cylinders) and databases.

Also, if we look at the upper level, right below the user, we can see that it retains substantial complexity. Don’t forget that it’s rare to have only one application copy within a service. When they are numerous, it is necessary to understand where to address requests in case of failure.

[Image: multiple instances of each service behind the user-facing layer]

A service mesh network is meant to facilitate request delivery. Inside it, requests do not leave their own infrastructure level; they are routed between microservices via proxies. The individual proxies here are called “sidecars” because they run alongside each service or application.

[Image: a service mesh with a sidecar proxy next to each service]

The sidecar proxy design pattern is also used outside of microservices. It eases the tracking and maintenance of applications by abstracting particular features away from the central architecture: interservice communication, security, and monitoring.

However, even when we have our microservices running, configured, and distributed, there is another issue that calls for our attention. How to minimize memory and time consumption?

When working on a project, the developer might encounter various points of potential bloating:

  1. Docker image size,
  2. memory footprint,
  3. startup time.

A containerized project requires access to a Java™ runtime. But here’s the deal: you’re not allowed to put Oracle JDK into your Docker container without a license due to Oracle’s policy. Instead, your company may freely use base images provided by BellSoft of about 107 MB (plus a 41.5 MB option for CLI-like applications, the smallest container on the market!)

Running even a relatively small application consumes a lot of memory. We ran an experiment by launching an example project in different configurations.

JDK 15

  • 135 MB memory usage, started up in 4.395 sec without optimizations.
  • 70 MB memory usage, started up in 1.787 sec with thin jar and class sharing.

JDK 11

  • 165 MB memory usage, started up in 4.95 sec without optimizations.
  • 70 MB memory usage, started up in 2.034 sec with thin jar and class sharing.

Class loading is a significant part of the startup. With thin jar and AppCDS, the speed has grown almost 2.5-fold.
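For reference, enabling application class-data sharing is a two-run affair on modern JDKs (13+). The commands below are an illustrative sketch; `app.jar` and the archive name are placeholders:

```shell
# First run: record the classes the app loads and archive them on exit.
java -XX:ArchiveClassesAtExit=app.jsa -jar app.jar

# Subsequent runs: map pre-parsed class data from the shared archive,
# skipping a large part of class loading at startup.
java -XX:SharedArchiveFile=app.jsa -jar app.jar
```

The archive is created once per build and reused by every container instance, which is where the startup savings above come from.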

Bottom line: we have a lightweight container image consuming 128 MB of memory without swap and starting up in 8 seconds. Compared to the original results, the startup has accelerated 2.5-fold, from 20 sec to 8 sec, and the memory footprint is reduced to one-sixth.

We’ve sacrificed peak performance for six times smaller memory requirements and a startup under 10 seconds. Yet after optimizing, compressing images by hundreds of megabytes, and speeding up the launch, it’s still not enough. Even with all Java’s might, the JVM cannot go beyond its capabilities. And here’s where we turn to the Native Image technology.

Enhance your Microservice Architecture with Native Image

Assembled into a native image, the application used 35 MB of RAM and started up in 0.111 sec! Meanwhile, the native image file itself is 89 MB.

How does Native Image work? Developers convert a service’s binary form into a native executable with the help of GraalVM Native Image technology. A mix of the Graal compiler applied in AOT mode and the SubstrateVM virtual machine promises instant startup, low memory consumption, small disk space requirements, and excellent performance. To use it, we need to add a few dependencies and one additional step to the build phase, extending the build scripts with something like:

gradlew nativeImage

A native image runs Java™ microservices under a closed-world assumption. The base container level must provide only minimal functionality (and there’s no JDK), which means it may be a “scratch” base image. This works if the deployed application brings its own dependencies: e.g., a native image may be statically linked with a standard C library.

As a rule, some more basic artifacts are required: certificates, SSL libraries, and a dynamically loaded C library. In this case, the binary may be linked differently, and the underlying base image will be a kind of “distroless” image. The distroless base image without libc from gcr.io is about 2 MB, slightly less than Alpine Linux, plus there is an option to install glibc, which gives a 17 MB base.

Running your Java™ project as a native image is obviously a more attractive choice in performance and speed. However, your team cannot always guarantee it will work correctly because of certain limitations. GraalVM with its optimizations and other beneficial features is not distributed for free. And the so-called “thin jar layer” container image deployment is only possible when we have jars, i.e., only with a regular runtime.

All in all, picking either runtime or native image for building the microservice architecture depends on the tasks at hand, business environment, the state of your enterprise, and many other aspects.

Conclusion

In this article, we covered a wide variety of topics related to microservices. We touched upon the history of microservices, discussed their benefits compared to monoliths, and proved that their advantages outweigh the disadvantages. We also examined the tools necessary to build microservices, compared two useful technologies, JVM and Native Image, and shared some tips on making the microservice architecture as beneficial as possible.

Microservices have the status of a young but promising technology. If you feel that your company is ready to disrupt the status quo and break up its monolithic golem, give microservices a try. Keep in mind, however, that the process of setting up microservice architecture is a complex one. If your team is not ready for the transition, you run the risk of making things even more complicated.


Dmitry Chuyko

Senior Performance Architect at BellSoft
