JRush Episode 4: Build Your Cloud Native Application with Kubernetes

Transcript:

Just a little bit of a disclaimer: this presentation is for educational purposes only, and all the contents and points of view expressed in this presentation represent my own views and not those of my employer.

Before I start my talk, I’d like to tell you a little bit about myself and my journey through the world of technology. I started my journey as a developer and developer advocate at Sun Microsystems back in 2007. I worked with an amazing team of developers and open-source advocates all around the globe, including Alex. It was a great time, and I got to learn about so many great things and innovations back then.

After that, I moved on to pursue a Ph.D. and several post-doc positions, mainly researching IoT and using machine learning in a distributed way. Later, I joined Intel as a research scientist, where I worked on amazing products that were deployed in various locations worldwide using Intel technology. Following this, I joined IBM as the head of developer ecosystems for the UK, Europe, the Middle East, and Africa. We had an incredible team of developer advocates promoting open-source technologies and ensuring developers had access to the right technology at the right time using the right tools.

Now, I’m at Discover as a distinguished engineer, continuing the same work but now within the financial services industry.

The first point I’d like to highlight is about multi-cloud environments and what cloud-native actually means. A common question from many developers is, “What does cloud-native mean, and why is it important for multi-cloud applications?”

Cloud-native essentially allows you to build applications that can be ported seamlessly across different environments. Whether it’s a private or public cloud, cloud-native development enables you to write and build your application without being constrained by the underlying infrastructure. This means you can focus on how the application is built rather than where it resides.

The primary benefit of cloud-native development is increased developer productivity, especially when deploying to multiple cloud environments.

In my last talk, I covered advanced topics, but based on feedback, I’ve decided to include some foundational concepts of Kubernetes, orchestration platforms, and containers. That’s why, in this talk, I’ll give you a background on cloud-native development, multi-cloud applications, and why they matter. As Alex pointed out, it’s especially important for financial services companies and others to have a multi-cloud development and deployment model.

Today, I’ll cover the following topics: cloud-native and microservices, Kubernetes, how to deploy your application on Kubernetes, and how Kubernetes operates, including the automation that supports it. If you’re interested in detailed instructions and commands to get started with containerization, microservices, and Kubernetes, I’ve created a full tutorial on my GitHub.

The tutorial features a Java application I built in 2020 during the pandemic. It was a use case for deriving information on COVID-19 from the Johns Hopkins University repository. This application consists of multiple microservices, containerized and deployed into a Kubernetes environment. Later, I moved the application to OpenShift Container Platform and Quarkus to take it even further. All the instructions you need, from start to finish, are on my GitHub.

Now, what motivates clients to adopt multi-cloud applications? This is a subjective question with several common answers.

For businesses, cost reduction is a major driver. Many C-level executives focus on providing flexibility to their clients and delighting them with new and competitive features. Data sovereignty is another critical factor—companies often need to deploy applications in regions where customer data resides to comply with privacy legislation and regulations.

From the developer’s perspective, multi-cloud applications expedite the journey from development to testing, staging, and production. For operations engineers, motivations include simplifying operations, increasing security, and reducing costs.

The role of DevOps teams also comes into play. Over the past decade, DevOps teams, which sit between developers and operations engineers, have become increasingly important. In smaller companies, they often replace traditional development and operations teams entirely. For DevOps, portability is the number one factor.

Portability refers to the ability to migrate applications or workloads across private and public clouds. There are three types of portability to consider:

  1. Application portability: Moving entire applications from one cloud to another.
  2. Workload portability: Spreading workloads across clouds to handle demand spikes.
  3. Function portability: Running services as functions across multiple cloud environments.

A common misconception is that all public cloud environments are the same. This is not true. Public cloud environments differ in availability zones, provisioning times, downtime durations, and available features. For example, some clouds offer specific databases or billing options (hourly, monthly, annually) that others do not.

These differences highlight the importance of choosing the right cloud environment for the right workload. Seamlessly migrating applications between clouds requires careful planning and the right tools to ensure a smooth transition.

Ideally, you want to build your application once and be able to deploy it anywhere, on any cloud. This is why multi-cloud becomes so important. If you want multiple teams owning and managing different parts of your application on different clouds, you need a diverse set of skills to make that happen.

So let’s go back to the main question: how can we actually build an application that can be updated continually without downtime, with different teams owning and evolving different services, and allowing applications to migrate seamlessly from one platform to another? The answer is cloud-native.

Cloud-native refers to how an application is built and deployed, rather than what the application does. It essentially means that the application must be built, delivered, and operated in a way that is not hardwired to any specific infrastructure. When you build your application, you design it so that it can easily run on any cloud environment.

How does that happen? By relying on microservices architecture. Microservices architecture is the building block and the most essential ingredient of a cloud-native application. A cloud-native application consists of discrete, reusable components called microservices, which are designed to integrate into any cloud environment.

Microservices architecture addresses the limitations inherent in monolithic applications. Monolithic applications are built in a way where everything is bundled together. For example, in a store application, catalog services, billing, inventory, and recommendations are all bundled together as one application. If one part fails, the entire application may become unavailable.

In contrast, microservices architecture partitions the application into multiple independent components. Each service is independently deployable, highly maintainable, and testable. Teams can work on different parts of the application independently and deploy them on their own timeline. Additionally, each microservice is organized around business functionality, such as billing or inventory, ensuring separation of concerns.

Monolithic applications are prone to failure because they are designed as a single entity. Developers often face challenges identifying issues, and failures in one part can cascade throughout the system, much like how water leaking into one part of the Titanic could sink the entire ship. In many cases, monolithic architectures are destined to fail.

Microservices provide several advantages. Different parts of the application can evolve on different timelines and be deployed separately. Teams can choose the technology stack that best suits each microservice. Furthermore, services can be scaled dynamically at runtime, reducing costs by only scaling the services in high demand rather than the entire application.

Another significant advantage is resilience. If one part of the application fails, it does not render the entire application unavailable. For instance, in an online shop during Christmas, an issue with the inventory service won’t necessarily affect billing or catalog services.

After partitioning monolithic applications into microservices, the next step is containerizing each microservice with its dependencies and libraries. Containerization allows these microservices to be deployed independently without relying on each other. Tools like Docker or Podman are commonly used for this purpose.
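To make this concrete, here is a minimal sketch of a Dockerfile for a hypothetical Java microservice; the service name (inventory-service) and jar path are illustrative, not taken from the tutorial itself:

```dockerfile
# Dockerfile -- containerize a hypothetical Java microservice.
# Assumes the service is already built into target/inventory-service.jar.
FROM eclipse-temurin:17-jre

# Copy the application jar into the image.
COPY target/inventory-service.jar /app/inventory-service.jar

# Document the port the service is assumed to listen on.
EXPOSE 8080

# Start the service when the container runs.
ENTRYPOINT ["java", "-jar", "/app/inventory-service.jar"]
```

The same commands work with Docker or Podman:

```shell
docker build -t inventory-service:v1 .
docker run -p 8080:8080 inventory-service:v1
```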

Once applications are containerized, managing a large number of microservices becomes complex. Orchestration tools like Kubernetes are essential for automating deployment, scaling, and management of containerized applications. Kubernetes provides capabilities to run applications in multi-cloud environments, scale services based on demand, and ensure high availability.

Kubernetes enables seamless updates to microservices without causing downtime. It continuously reconciles the desired and observed state of the application, ensuring resources are adjusted as needed. If a service becomes unhealthy, Kubernetes takes action to restore it, such as spinning up new replicas.
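For example, a rolling update can be triggered and observed with a few commands; the deployment and image names here are illustrative, matching the sketch further below:

```shell
# Swap in a new image version; Kubernetes replaces pods gradually,
# so the service keeps serving traffic throughout the rollout.
kubectl set image deployment/inventory-service inventory-service=inventory-service:v2

# Watch the rollout until every replica runs the new version.
kubectl rollout status deployment/inventory-service

# If the new version misbehaves, revert to the previous revision.
kubectl rollout undo deployment/inventory-service
```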

To deploy an application onto Kubernetes, you first create a Kubernetes cluster and break down the application into microservices. Each microservice is containerized and deployed as a pod in Kubernetes. Pods are replicated as needed based on load and deployment scenarios. Services within the cluster communicate with each other and are exposed to external networks.
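As a sketch of those steps (assuming you already have a cluster, for example a local one from minikube or Rancher Desktop), the manifest below defines a Deployment running three replicas of the hypothetical inventory-service container and a Service exposing them:

```yaml
# deployment.yaml -- a hypothetical microservice as three replicated pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-service
spec:
  replicas: 3                     # desired number of pods
  selector:
    matchLabels:
      app: inventory-service
  template:
    metadata:
      labels:
        app: inventory-service
    spec:
      containers:
        - name: inventory-service
          image: inventory-service:v1
          ports:
            - containerPort: 8080
---
# The Service gives the pods one stable address inside the cluster;
# type LoadBalancer also exposes them to external traffic.
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  type: LoadBalancer
  selector:
    app: inventory-service
  ports:
    - port: 80
      targetPort: 8080
```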

Kubernetes also provides tools to manage deployments, roll out updates, and scale services. The kubectl command-line tool is used to interact with the cluster, manage deployments, and monitor the application’s state. YAML manifests can also be used for batch operations, defining deployment details, scaling parameters, and resource requirements.
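A typical session against the hypothetical manifest above might look like this:

```shell
# Create or update the resources defined in the manifest.
kubectl apply -f deployment.yaml

# Inspect the deployment and its pods.
kubectl get deployments
kubectl get pods

# Scale the service on demand.
kubectl scale deployment/inventory-service --replicas=5

# Stream application logs from the deployment's pods.
kubectl logs deployment/inventory-service
```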

For open-source alternatives, Podman can replace Docker, while Rancher Desktop and OKD provide community distributions of Kubernetes. Additionally, tools like PostgreSQL, Redis, Prometheus, and Grafana cover storage, in-memory caching, metrics, and monitoring.

Kubernetes operates using a control loop that continuously reconciles the observed and desired states of the application. For instance, if one replica fails, Kubernetes automatically spins up a replacement to maintain the desired number of replicas.
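You can watch this self-healing behaviour yourself: delete one pod of the hypothetical deployment above, and Kubernetes immediately creates a replacement to restore the desired replica count:

```shell
# List the deployment's pods.
kubectl get pods -l app=inventory-service

# Simulate a failure by deleting one pod (the pod name here is illustrative).
kubectl delete pod inventory-service-7d4b9c6f8d-x2k4p

# Watch the list: a replacement appears almost immediately, because the
# observed state no longer matches the desired three replicas.
kubectl get pods -l app=inventory-service --watch
```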

Summary

If you are starting to migrate your workloads to cloud-native, this talk will guide you through:

  1. The specifics of cloud-native development.
  2. The differences between cloud platforms.
  3. The basics of Kubernetes: why you should use it, how it functions, and how to orchestrate your containers with this platform.
  4. Open-source tools you can use for efficient cloud-native development.

About Mo

Mo Haghighi, Director/Distinguished Engineer for Cloud Platform and Infrastructure at Discover Financial Services
