Building cloud-native microservices with Kubernetes

Jan 12, 2024
Dmitry Chuyko

The modern cloud offers businesses immense opportunities for growth, faster innovation, and cost optimization. But a resilient, flexible, easily scalable cloud-native application doesn’t fall out of the sky: we have to build it ourselves. This article will guide you through the best practices of cloud-native development.

This article was inspired by Episode 4 of JRush, a series of free web conferences for Java experts. You can watch the full presentation on the topic on YouTube or register for JRush to gain access to previous episodes on software trends and cutting-edge technologies.

Explore JRush

Cloud-native microservices: advantages and challenges

Cloud native is an approach to building highly scalable applications that can be seamlessly ported to any platform, be it a private or public cloud environment or a hybrid-cloud setup that combines on-premises and cloud infrastructure. Although it is possible to port a monolithic application (developed on a single codebase) to the cloud, microservices are a better choice for a long-term business strategy in most cases. They enable developers to get the most out of their cloud deployment, providing

  • Scalability: small services can be rapidly scaled out in response to a traffic surge and just as easily scaled back in.
  • Agility: developers can roll out new features quickly without interrupting overall application operation, which enables businesses to stay on top of the game and swiftly respond to customer needs.
  • Resilience: as microservices are independent of each other, a bug or an error in one service won’t cause the whole system to crash.
  • Tech stack flexibility: you can choose different technologies for microservices to better fit their purposes or even write the services in various programming languages.

But before our microservice-based cloud infrastructure starts working to our advantage, we must build it properly. Until then, the benefits of microservices may look more like challenges. How do you make services scalable, resilient, and flexible? How do you reduce time-to-market and optimize the total cost of ownership (TCO) instead of falling into the trap of hidden cloud costs?

We will provide a few tried-and-true approaches for creating and managing cloud-native applications. If you want to get a deeper understanding of cloud cost optimization techniques and why they should be implemented as soon as possible after cloud migration, we prepared a comprehensive white paper on the topic.

How to make cloud-native infrastructure serve your purposes

Follow the best practices of building microservices

Breaking a monolith down into several services is only the first step of your journey to the cloud. Given the complexity of microservice architecture, you have to be careful to avoid a tangled, uncontrollable web of poorly communicating components that devours resources and developers’ time. Luckily, there are several proven methods of building lightweight, resilient services with clear connections between components and the outside world.

  • Keep microservices small, with a minimum number of responsibilities. This will increase processing speed and minimize the consequences of a microservice failure.
  • Separate the data storage so that microservices don’t depend on one database. As a result, you will minimize the risks related to data loss or database outage, simplify data management, and keep microservices independent from each other.
  • Organize interservice communication with a service mesh, a decentralized infrastructure layer that handles request delivery. A service mesh typically attaches a “sidecar” (an individual proxy) to each microservice instance, which routes the traffic between services (see the Pod sketch after this list).
  • Utilize a distributed event messaging and streaming platform for reliable asynchronous communication and processing of huge volumes of data.
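To make the sidecar pattern concrete, here is a minimal, hand-written Pod manifest with an application container and a proxy container running side by side. The image names and ports are illustrative, and in a real deployment a service mesh such as Istio or Linkerd injects and configures the sidecar automatically:

    apiVersion: v1
    kind: Pod
    metadata:
      name: orders-with-proxy
      labels:
        app: orders
    spec:
      containers:
        # The microservice itself; the image name is illustrative
        - name: app
          image: my-registry/orders-service:1.0
          ports:
            - containerPort: 8080
        # Sidecar proxy that routes traffic between services;
        # a mesh would normally inject and configure it for you
        - name: proxy
          image: envoyproxy/envoy:v1.28-latest
          ports:
            - containerPort: 15001

Both containers share the pod’s network namespace, which is what allows the proxy to intercept the service’s inbound and outbound traffic.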

As a result, your microservices should look similar to this:

[Figure: Optimal microservices structure]

The key is to make microservices as independent of each other as possible, so that local failures or updates don’t affect the functioning of the whole application, and to establish reliable communication between components.

But there’s one more essential point to consider: resource consumption. A containerized monolith indeed takes up several times more memory than a microservice, but even a microservice container image can easily bloat, increasing its footprint and slowing down operations.

Make your containers lightweight by choosing a small base Linux distribution (for instance, the compressed container image of Alpaquita Linux is roughly one-seventh the size of Ubuntu’s) and keeping unnecessary files out of the image. This way, you will accelerate development and deployment and reduce cloud storage expenses.
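As a sketch of what this looks like in practice, here is a minimal two-stage Dockerfile for a Java microservice built on an Alpaquita-based Liberica runtime image. The image tags, the Maven wrapper, and the app.jar path are assumptions to adapt to your project:

    # Build stage: full JDK on a small musl-based Alpaquita image
    # (image names and tags are illustrative; adjust to your build)
    FROM bellsoft/liberica-runtime-container:jdk-21-slim-musl AS build
    WORKDIR /app
    COPY . .
    RUN ./mvnw --no-transfer-progress package

    # Runtime stage: JRE only, so build tools and sources never reach production
    FROM bellsoft/liberica-runtime-container:jre-21-slim-musl
    COPY --from=build /app/target/app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]

The multi-stage build keeps the JDK, build tools, and caches out of the final image, so only the small runtime layer and the application itself reach the cloud.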

Refer to the dedicated article for more useful tips and hands-on examples of reducing container size.

Adopt a multi-cloud strategy

Multi-cloud deployment means utilizing various public cloud services from several cloud providers. Cloud platforms are not created equal: cloud providers offer different services, features, rates, and billing models, so spreading workloads across multiple clouds will help you achieve

  • Business flexibility. Clouds vary on many levels: some offer unique features, some have longer provisioning time or downtime, and some may be unavailable in specific countries. The key is to match the right workload to the right cloud environment so that the company can unleash the full potential of its application and enhance customer experience. 
  • Data sovereignty. The legislation of certain countries may prohibit transferring data out of the country of origin or disclosing it to third parties, so it may be impossible to store or process data with a particular third-party cloud provider. Working with different cloud providers enables businesses to meet these legislative requirements.
  • Minimized vendor lock-in. If you rely entirely on one cloud provider, it might be difficult to migrate to another cloud platform if your provider changes terms of use, increases costs, etc. Spreading the workloads across platforms will enable you to make your application highly portable, so there won’t be any need to introduce massive changes to the infrastructure in case of migration.
  • Cost-efficiency. Cloud providers implement different billing models (pay-as-you-go, subscription-based, etc.) and payment periods. Service prices differ, so picking services at the most affordable rates helps companies make the most of their IT budgets.
  • Reliability. Several cloud platforms can be used for backup purposes, so if one cloud experiences a severe outage, the consequences can be mitigated by switching to another platform.

Orchestrate containers with Kubernetes

Let’s say you are ready to deploy your microservices to the cloud. Even if you only have a few services, managing them may be challenging. What if you scale them to thousands of containers? What if you add ten more microservices, or already have fifty? How do you manage these workloads without eating up your engineers’ time?

Enter Kubernetes, an open-source platform that orchestrates containerized applications. Kubernetes provides functionality for automatically deploying, scaling, and managing containers.

What will Kubernetes do for you?

  • Deploy the application on multiple hosts, so in case one server or even the whole cloud platform goes down, the workloads can be rapidly shifted to another environment.
  • Create multiple instances of containers for high availability and fault tolerance.
  • Schedule deployments and create new container instances with zero downtime.
  • Place containers on nodes according to the underlying resources available.
  • Continuously check the health of all containers (comparing the actual state to the desired state) and create new instances in case of failure (illustrated in the manifest below).
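As an illustration, here is a minimal sketch of a Deployment manifest demonstrating these capabilities: several replicas for fault tolerance, a rolling update strategy for zero-downtime releases, and a liveness probe for continuous health checking. The image name and the /health endpoint are assumptions:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders
    spec:
      replicas: 3                  # several instances for high availability
      selector:
        matchLabels:
          app: orders
      strategy:
        type: RollingUpdate        # roll out new versions with zero downtime
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
            - name: app
              image: my-registry/orders-service:1.0   # illustrative image name
              ports:
                - containerPort: 8080
              livenessProbe:       # restart the container if the probe fails
                httpGet:
                  path: /health    # assumed health-check endpoint
                  port: 8080
                initialDelaySeconds: 10
                periodSeconds: 5

Apply the manifest with kubectl apply -f deployment.yaml, and Kubernetes will continuously reconcile the actual state of the cluster with the desired state it describes.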

Containers deployed to Kubernetes run in a cluster, which we interact with through kubectl, a command-line tool. The cluster consists of the following components:

  • Nodes — worker machines (virtual or physical) that run containerized applications. A master node (the control plane) manages the worker nodes and includes
    • the API server, which exposes the Kubernetes API and handles all external communication with the cluster;
    • the scheduler, which watches for newly created pods and assigns them to suitable nodes;
    • the controller manager, which runs controller processes that watch the current state of the cluster through the API server and make changes to reach the desired state;
  • Pods — groups of one or more containers with shared resources;
  • kubelet — an agent that makes sure that containers are running and healthy;
  • kube-proxy — a proxy that maintains network rules on nodes for network communication to pods;
  • Service (load balancer) — exposes the application to the external network and directs traffic to pods.

We split the application into several microservices (you can develop services in different programming languages, as they communicate with each other through language-agnostic APIs) and containerize them. All containers are then placed into pods in the Kubernetes cluster. After that, each pod can be scaled by creating the desired number of replicas.

Kubernetes pods are ephemeral: they are created and destroyed dynamically to match the desired state, so the set of pods running at one moment can differ completely from the set running some time later. To hide this complexity from clients, Kubernetes provides the Service API, an abstraction that exposes a group of pods over the network. A Service object defines a set of pods (endpoints) and a policy that makes these pods accessible.
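A minimal Service manifest might look like the following sketch. It selects pods by the app: orders label (matching the hypothetical Deployment above) rather than by name, so it keeps routing traffic correctly no matter how often individual pods come and go:

    apiVersion: v1
    kind: Service
    metadata:
      name: orders-service
    spec:
      type: LoadBalancer    # ask the cloud platform for an external load balancer
      selector:
        app: orders         # matches the pod labels from the Deployment above
      ports:
        - port: 80          # port clients connect to
          targetPort: 8080  # port the container listens on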

All these components (and many more underlying processes and tools) aim to simplify workload management through a control loop: developers define a desired state, and Kubernetes performs all the heavy lifting to monitor the actual state and make changes to reach the desired state.
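For example, declaring a new desired state takes a single kubectl command (assuming the hypothetical orders Deployment from the sketch above), and the control loop does the rest:

    kubectl scale deployment/orders --replicas=5   # declare a new desired state
    kubectl get pods -l app=orders                 # watch the cluster converge to it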

Use open-source tools for the cloud

Docker and Kubernetes are not the only technologies you should use with your cloud-native infrastructure. There are multiple open-source solutions aimed at facilitating microservice communication, performance monitoring, database management, and so on. For instance:

  • Databases
    • Apache Cassandra, a column-oriented NoSQL database with outstanding fault tolerance characteristics;
    • Redis, an in-memory NoSQL database;
    • MongoDB, a document store NoSQL database;
    • MySQL, an easy-to-use relational database;
    • PostgreSQL, an object-relational database with a wide range of features for complex queries and various data types support.
  • Messaging/streaming platforms
    • Apache Pulsar, a multi-tenant distributed messaging and streaming platform for cloud-native workloads;
    • Apache Kafka, a high-throughput distributed event streaming platform;
    • RabbitMQ, a traditional messaging platform that supports 22 programming languages.
  • Observability platforms
    • Prometheus, a monitoring solution with a custom query language (PromQL) for retrieving and aggregating collected metrics;
    • Grafana, an analytics and monitoring platform with great visualization capabilities.

The open-source nature of these products means you can use them for free or subscribe to commercial support, choosing the optimal solution based on the services and prices offered.

Further references: from theory to practice

These recommendations will help you build and manage a scalable, performant, and resilient cloud-native infrastructure. If you are ready to move from theory to practice, feel free to browse through the collection of materials we prepared for developers who want to migrate to a microservice architecture and master the best practices of cloud development.
