Jrush episode 4: Build your Cloud Native Application with Kubernetes

 

Transcript:

Just a little bit of a disclaimer: this presentation is for educational purposes only, and all the contents and points of view expressed in this presentation represent my own views and not those of my employer.

Before I start my talk, I’d like to tell you a little bit about myself and my journey through the world of technology. I started my journey as a developer and developer advocate at Sun Microsystems back in 2007. I worked with an amazing team of developers and open-source advocates all around the globe, including Alex. It was a great time, and I got to learn about so many great things and innovations back then.

After that, I moved on to pursue a Ph.D. and several post-doc positions, mainly researching IoT and using machine learning in a distributed way. Later, I joined Intel as a research scientist, where I worked on amazing products that were deployed in various locations worldwide using Intel technology. Following this, I joined IBM as the head of developer ecosystems for the UK, Europe, the Middle East, and Africa. We had an incredible team of developer advocates promoting open-source technologies and ensuring developers had access to the right technology at the right time using the right tools.

Now, I’m at Discover as a distinguished engineer, continuing the same work but now within the financial services industry.

The first point I’d like to highlight is about multi-cloud environments and what cloud-native actually means. A common question from many developers is, “What does cloud-native mean, and why is it important for multi-cloud applications?”

Cloud-native essentially allows you to build applications that can be ported seamlessly across different environments. Whether it’s a private or public cloud, cloud-native development enables you to write and build your application without being constrained by the underlying infrastructure. This means you can focus on how the application is built rather than where it resides.

The primary benefit of cloud-native development is increased developer productivity, especially when deploying to multiple cloud environments.

In my last talk, I covered advanced topics, but based on feedback, I’ve decided to include some foundational concepts of Kubernetes, orchestration platforms, and containers. That’s why, in this talk, I’ll give you a background on cloud-native development, multi-cloud applications, and why they matter. As Alex pointed out, it’s especially important for financial services companies and others to have a multi-cloud development and deployment model.

Today, I’ll cover the following topics: cloud-native and microservices, Kubernetes, how to deploy your application on Kubernetes, and how Kubernetes operates, including the automation that supports it. If you’re interested in detailed instructions and commands to get started with containerization, microservices, and Kubernetes, I’ve created a full tutorial on my GitHub.

The tutorial features a Java application I built in 2020 during the pandemic. It was a use case for deriving information on COVID-19 from the Johns Hopkins University repository. This application consists of multiple microservices, containerized and deployed into a Kubernetes environment. Later, I moved on to using OpenShift Container Platform and Quarkus containers to take the application even further. All the instructions you need, from start to finish, are on my GitHub.

Now, what motivates clients to adopt multi-cloud applications? This is a subjective question with several common answers.

For businesses, cost reduction is a major driver. Many C-level executives focus on providing flexibility to their clients and delighting them with new and competitive features. Data sovereignty is another critical factor—companies often need to deploy applications in regions where customer data resides to comply with privacy legislation and regulations.

From the developer’s perspective, multi-cloud applications expedite the journey from development to testing, staging, and production. For operations engineers, motivations include simplifying operations, increasing security, and reducing costs.

The role of DevOps teams also comes into play. Over the past decade, DevOps teams, which sit between developers and operations engineers, have become increasingly important. In smaller companies, they often replace traditional development and operations teams entirely. For DevOps, portability is the number one factor.

Portability refers to the ability to migrate applications or workloads across private and public clouds. There are three types of portability to consider:

  1. Application portability: Moving entire applications from one cloud to another.
  2. Workload portability: Spreading workloads across clouds to handle demand spikes.
  3. Function portability: Running services as functions across multiple cloud environments.

A common misconception is that all public cloud environments are the same. This is not true. Public cloud environments differ in availability zones, provisioning times, downtime durations, and available features. For example, some clouds offer specific databases or billing options (hourly, monthly, annually) that others do not.

These differences highlight the importance of choosing the right cloud environment for the right workload. Seamlessly migrating applications between clouds requires careful planning and the right tools to ensure a smooth transition.

Ideally, you want to build your application once and be able to deploy it anywhere, on any cloud. This is why multi-cloud becomes so important. If you want multiple teams owning and managing different parts of your application on different clouds, you need a diverse set of skills to make that happen.

So let’s go back to the main question: how can we actually build an application that can be updated continually without downtime, with different teams owning and evolving different services, and allowing applications to migrate seamlessly from one platform to another? The answer is cloud-native.

Cloud-native refers to how an application is built and deployed rather than to the application itself. It means the application must be built, delivered, and operated in a way that is not hardwired to any specific infrastructure. When you build your application, you design it so that it can easily run on any cloud environment.

How does that happen? By relying on microservices architecture. Microservices architecture is the building block and the most essential ingredient of a cloud-native application. A cloud-native application consists of discrete, reusable components called microservices, which are designed to integrate into any cloud environment.

Microservices architecture addresses the limitations inherent in monolithic applications. Monolithic applications are built in a way where everything is bundled together. For example, in a store application, catalog services, billing, inventory, and recommendations are all bundled together as one application. If one part fails, the entire application may become unavailable.

In contrast, microservices architecture partitions the application into multiple independent components. Each service is independently deployable, highly maintainable, and testable. Teams can work on different parts of the application independently and deploy them on their own timeline. Additionally, each microservice is organized around business functionality, such as billing or inventory, ensuring separation of concerns.

Monolithic applications are prone to failure because they are designed as a single entity. Developers often face challenges identifying issues, and failures in one part can cascade throughout the system, much like how water leaking into one part of the Titanic could sink the entire ship. In many cases, monolithic architectures are destined to fail.

Microservices provide several advantages. Different parts of the application can evolve on different timelines and be deployed separately. Teams can choose the technology stack that best suits each microservice. Furthermore, services can be scaled dynamically at runtime, reducing costs by only scaling the services in high demand rather than the entire application.

Another significant advantage is resilience. If one part of the application fails, it does not render the entire application unavailable. For instance, in an online shop during Christmas, an issue with the inventory service won’t necessarily affect billing or catalog services.

After partitioning monolithic applications into microservices, the next step is containerizing each microservice with its dependencies and libraries. Containerization allows these microservices to be deployed independently without relying on each other. Tools like Docker or Podman are commonly used for this purpose.
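As a rough sketch, containerizing a single microservice might look like the Dockerfile below. This is an illustrative example, not taken from the talk; the base image, jar path, and port are assumptions:

```dockerfile
# Minimal sketch of containerizing one Java microservice
# (image, paths, and port are hypothetical)
FROM eclipse-temurin:17-jre
WORKDIR /app
# Copy the built microservice jar into the image
COPY target/inventory-service.jar app.jar
# Port the service is assumed to listen on
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Building and running it is then `docker build -t inventory-service .` followed by `docker run -p 8080:8080 inventory-service`; Podman accepts the same commands.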

Once applications are containerized, managing a large number of microservices becomes complex. Orchestration tools like Kubernetes are essential for automating deployment, scaling, and management of containerized applications. Kubernetes provides capabilities to run applications in multi-cloud environments, scale services based on demand, and ensure high availability.

Kubernetes enables seamless updates to microservices without causing downtime. It continuously reconciles the desired and observed state of the application, ensuring resources are adjusted as needed. If a service becomes unhealthy, Kubernetes takes action to restore it, such as spinning up new replicas.

To deploy an application onto Kubernetes, you first create a Kubernetes cluster and break down the application into microservices. Each microservice is containerized and deployed as a pod in Kubernetes. Pods are replicated as needed based on load and deployment scenarios. Services within the cluster communicate with each other and are exposed to external networks.
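The deployment flow above is usually expressed as a Kubernetes manifest. The following is a minimal sketch; the names, image reference, and replica count are hypothetical, not from the talk:

```yaml
# Hypothetical Deployment for one containerized microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-service
spec:
  replicas: 3              # desired number of pod replicas
  selector:
    matchLabels:
      app: inventory-service
  template:
    metadata:
      labels:
        app: inventory-service
    spec:
      containers:
      - name: inventory-service
        image: registry.example.com/inventory-service:1.0
        ports:
        - containerPort: 8080
---
# Service exposing the pods to the rest of the cluster
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  selector:
    app: inventory-service
  ports:
  - port: 80
    targetPort: 8080
```

The Deployment keeps three replicas of the pod running, and the Service gives the other microservices in the cluster a stable address for reaching them.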

Kubernetes also provides tools to manage deployments, roll out updates, and scale services. The kubectl command-line tool is used to interact with the cluster, manage deployments, and monitor the application’s state. YAML manifests can also be used to define everything declaratively, specifying deployment details, scaling parameters, and resource requirements in one place.
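A few of the everyday kubectl interactions mentioned above look like this. These are illustrative only: the deployment name is hypothetical, and the commands assume a cluster is already configured:

```shell
# Apply a YAML manifest declaratively
kubectl apply -f deployment.yaml

# Inspect the state of the application
kubectl get pods
kubectl describe deployment inventory-service

# Scale a deployment to five replicas
kubectl scale deployment inventory-service --replicas=5

# Roll out a new image version without downtime
kubectl set image deployment/inventory-service \
  inventory-service=registry.example.com/inventory-service:1.1
kubectl rollout status deployment/inventory-service
```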

For open-source alternatives, Podman and Rancher Desktop can replace Docker, and OKD (the community distribution that underpins OpenShift) can stand in for a commercial Kubernetes platform. Additionally, PostgreSQL can be used for storage, Redis for in-memory operations, and Prometheus with Grafana for monitoring.

Kubernetes operates using a control loop that continuously reconciles the observed and desired states of the application. For instance, if one replica fails, Kubernetes automatically spins up a replacement to maintain the desired number of replicas.
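The control loop can be illustrated with a toy sketch in Python. This is not Kubernetes code, and the names are invented for illustration; it only shows the reconciliation idea: compare the observed replicas to the desired count and close the gap:

```python
def reconcile(desired: int, observed: list[str]) -> list[str]:
    """Toy reconciliation step: return the corrected set of replicas.

    Mirrors the Kubernetes control-loop idea: if replicas have
    failed, spin up replacements; if there are too many, remove
    the extras until observed state matches desired state.
    """
    replicas = list(observed)
    # Spin up replacements until the desired count is reached
    while len(replicas) < desired:
        replicas.append(f"replica-{len(replicas)}")
    # Scale down if there are more replicas than desired
    while len(replicas) > desired:
        replicas.pop()
    return replicas

# One replica has failed: observed state has 2, desired is 3
state = reconcile(3, ["replica-0", "replica-1"])
print(len(state))  # prints 3
```

A real controller runs this comparison continuously rather than once, which is why Kubernetes recovers on its own when a pod dies.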

Summary

If you are starting to migrate your workloads to cloud-native, this talk will guide you through:

  1. The specifics of cloud-native development.
  2. The differences between cloud platforms.
  3. The basics of Kubernetes: why you should use it, how it functions, and how to orchestrate your containers with this platform.
  4. Open-source tools you can use for efficient cloud-native development.

About Mo

Mo Haghighi, Director/Distinguished Engineer for Cloud Platform and Infrastructure at Discover Financial Services
