JRush Episode 4: Build Your Cloud Native Application with Kubernetes

Transcript:

(00:01) Good morning, good afternoon, and good evening. Again, thanks for joining my talk today. My name is Mo Haghighi; I'm a Distinguished Engineer at Discover Financial Services. Today I'm going to talk about cloud native development and how to get started with Kubernetes as the orchestration platform for containers. There's also going to be a panel discussion at the end, where Mary, Dimitri, and I will join and talk about some of the questions you may have. Just a little bit of safe
(00:39) harbor, or let's say a disclaimer: this presentation is for educational purposes only, and all the contents and points of view expressed in this presentation represent my own views and not those of my employer. Before I start my talk, I'd like to tell you a little bit about myself and my journey through the world of technology. I started my journey as a developer advocate at Sun Microsystems back in 2007, where I worked with an amazing team of developers and open
(01:12) source advocates all around the globe, including Alex. It was a great time, and I got to know so many great things and innovations back then. I then moved on to do a PhD and a couple of postdoc positions, mainly researching IoT and using machine learning in a distributed way, and then I joined Intel as a research scientist, where I worked on amazing products that we deployed to various places around the globe using Intel technology. Later I joined IBM as head of developer ecosystems for UK,
(01:46) Europe, Middle East, and Africa, where we had an amazing team of developer advocates all around the globe promoting open source technologies and making sure that developers actually get their hands on the right technology at the right time, using the right tools. And now here I am at Discover as a Distinguished Engineer, doing the same job but now for a financial services company, from within. The first point I'd like to highlight here is about multi-cloud environments and what cloud native actually means. So the main
(02:20) question for many developers is: what does cloud native actually mean, and why is it important when it comes to multi-cloud applications? The main point here is that cloud native allows you to build applications that can be ported seamlessly across different cloud environments, whether private cloud or public cloud. It lets you write and build your application regardless of the infrastructure, so your
(02:51) focus is on how the application is built rather than where it resides. That is the main point about cloud native development: it ultimately helps developers become more productive when it comes to deploying to multiple cloud environments. In my last talk I jumped into very advanced topics, and I got some feedback that I should have covered some of the basic elements of Kubernetes, orchestration platforms, and containers. That's why, for this talk, I'm going to give you a
(03:24) little bit of background about cloud native development and multi-cloud applications, and why it matters — as Alex pointed out — for financial services companies and so many others to have multi-cloud development and deployment models. Today I'll be covering cloud native and microservices, Kubernetes, how to deploy your application on Kubernetes, how Kubernetes operates, and all the automation that goes around it. In case you're interested in learning about
(03:57) all the commands and instructions to get you started with containerization, microservices, and Kubernetes, I've got a full tutorial on my GitHub that you can take a look at. It's a Java application that I built back in 2020 during the pandemic: a use case for deriving information on COVID-19 from the Johns Hopkins University repo. The application is built as microservices — it consists of multiple microservices that are containerized and
(04:34) deployed into a Kubernetes environment — and then I move on to using the OpenShift Container Platform and CodeReady Containers and workspaces to take it even further. All the instructions are on my GitHub, from the very beginning to the end — everything you need to get started — so in case you're interested, take a look at my GitHub repo. So what are the motivations for having multi-cloud applications? For many clients that's a very subjective question. Cost reduction is always the common
(05:09) reason in a majority of conversations with business executives: they all want to reduce costs when it comes to multi-cloud development. At the same time, when we talk to C-level execs, they're mostly interested in providing business flexibility to their clients and delighting them with new and competitive features. Data sovereignty is another reason: they want to be able to deploy their applications and services onto a cloud environment where customers' data may reside, because for
(05:41) data sovereignty purposes, and under the privacy legislation and regulations in so many countries, you are required to run your application where your customers' data resides. For that reason, many C-level execs would like the flexibility to move and port the application as soon as a new customer brings a data sovereignty requirement forward. And then, when it comes to developers and operations engineers, they're also interested in a number of
(06:16) factors. Looking at it from a different angle, developers are interested in ways to expedite their journey from development to testing, staging, and production — and if that works, then it's fine for them to move their applications to a multi-cloud model. Operations engineers are motivated to simplify operations, increase security, and obviously reduce cost. If those two things can happen, then 50% of the way is paved, and then
(06:49) there comes the next bit, which is DevOps engineers. For the last decade or so there has been this new team which sits somewhere between developers and operations engineers, known as the DevOps team, and in smaller companies they almost replace the developer and operations teams. For DevOps teams, portability is the number one factor. Portability mainly refers to porting your application or your workload between private and public clouds, or between different public cloud
(07:22) environments — that's what is meant by porting. There are three types of portability: application portability, workload portability, and function portability. Application portability, as the name suggests, is moving your application from one cloud to another. Workload portability is when demand hits your servers or your services and you want to be able to spread the workload across different clouds. And function portability is when you want to be able to provide your services as functions on multiple cloud
(07:54) environments. With that, I'd like to answer a very important question I usually get in conference talks. For many people, all public cloud environments are the same: they think all public clouds provide the same set of features and specifications. That is not true. Public cloud environments differ in many, many ways: they have different availability zones, provisioning time differs from one cloud to another, and downtime for some clouds is longer while for some
(08:27) it is shorter. For time-critical and mission-critical applications, you need to pay attention to what sort of application — or what part of your application — you're running on a public cloud. Maybe you're looking for a specific set of features, or some sort of database that may not be available on one public cloud but is available on another. And then obviously there's the cost and the type of billing you have: some clouds provide
(08:57) hourly billing, some bill per call, some annually — again, it depends. So you get a different set of features and specifications from different public clouds, and for that reason you need to be able to move your application and your workload across multiple clouds as it suits your application. The most important aspect is matching the right workload to the right cloud environment, based on all the specifications that come from different public clouds. But again, how can an application be
(09:30) migrated from one cloud to another seamlessly? We need a way to actually do that — to match the right workload to the right cloud environment. Ideally, you want to build your application once and be able to deploy it anywhere, on any cloud. And this is where multi-cloud becomes quite important: if you want multiple teams owning and taking care of different parts of your application on different clouds, then you need a different set of skills in order to make that
(10:00) happen. So let's go back to that main question: how can we build an application that can be updated continually without downtime, where different teams can own and evolve different services, and which can be migrated from one platform to another seamlessly? The answer is cloud native. Cloud native refers to how an application is built and deployed, rather than where the application resides — as I said at the beginning of the talk — and it defines that the application
(10:35) must be built, delivered, and operated in a way that is not hardwired to any infrastructure. When you build your application, you're building it to operate in a way that it can easily run in any cloud environment — in any environment at all. That's the concept behind cloud native. Well, how does that happen? You make it happen simply by relying on microservices architecture. Microservices architecture is the building block and the most
(11:06) essential ingredient of a cloud native application: a cloud native application consists of discrete, reusable components known as microservices that are designed to integrate into any cloud environment. So let's dig deeper and see what that means. Microservices architecture addresses all the liabilities that are inherent in monolithic applications. Monolithic applications are built in a way that everything is bundled together — as you can see on the left-hand side, we've got this store
(11:41) application, an online store where the catalog services, the billing, the inventory — everything — is bundled together as one application. If one part of this application, one of these services, fails, then the entire application might become useless, and your service will go down. Whereas in the microservices environment, as you can see on the right-hand side, we partition the application into multiple pieces. When that
(12:13) happens — and this is the building block of microservices architecture — we are partitioning a large monolithic application into smaller, independent services that communicate with each other by using messages. This way, each one of these services becomes highly maintainable and testable; they're loosely coupled and independently deployable, so your teams can work on different microservices independently and deploy them on their own timelines. And obviously,
(12:44) each one of these microservices is organized around business functionalities or capabilities. As you can see here, we've got billing services, we've got inventory services, we've got recommendation services, and each one of them is completely separate. With the monolithic applications we had in the past, there were a number of issues that made them prone to failure, because they were designed and operated as a single entity. There was always
(13:13) this sort of finger-pointing to find where the problem resided, and developers who worked on different parts of these monolithic applications used to get quite confused about where an error actually happened. That's why monolithic applications are like what happened to the Titanic, for instance — that's the sort of example I use: all those sections are so well connected together that if water is leaking into one part, it
(13:44) keeps spreading around until it brings down the whole ship. So in my opinion, monolithic applications — in many cases, not all — are destined to fail. Let's take a look at the advantages of microservices, but before that, one example: let's say we've got this monolithic application where the catalog, filtering, inventory, recommendations, and billing are all bundled together as one single entity, and what we need to do is separate
(14:14) them. When we separate them, we get multiple microservices that can be developed and built with different programming languages and different technologies, whichever suits each part best, and obviously different teams can own each one of them completely independently. This gives you a number of advantages. First, different parts of the application can evolve on different timelines — that's very important — and they can be deployed separately, so teams don't need to
(14:47) wait for each other to deploy a part of the application. Then you can choose your technology stack for each microservice as best fits the purpose. And finally, you can scale your services dynamically at runtime: you don't have to scale your entire application, you can scale just those services that are under higher demand. This way it saves you some money on your billing, because you don't have to scale your entire
(15:18) application. But then you've got the most obvious advantage, based on what I said earlier: if any part of the application fails, the whole application does not necessarily become unavailable or unresponsive to the customer. Take an online shop during Christmas: if something happens to the inventory, it doesn't mean that the billing and the catalog services are going to be completely out of service — you can still serve your customers until you've fixed the problem in that one micro
(15:48) service. And obviously, that's because they are not designed and operated as a single entity like a monolithic architecture. Now, going back to our monolithic application that we separated: we partitioned the services into microservices, and the next step is to containerize each one of them with their dependencies and libraries. This way, each of these containers can be deployed independently and separately, without relying on each other, and by having messages in place they can communicate and provide the
(16:21) same sort of service that you had with the monolithic application. So the first part is: containerize your application. Docker, Podman — there are so many tools out there that can help you do that, and it's a very quick and simple operation. All you need to do is build your image, and for that you need to craft your Dockerfile; in that Dockerfile you provide the details of your application, the dependencies, and the environment parameters. It's not very
(16:54) complex — here's a quick overview of what a Dockerfile contains: you specify your base image, some arguments and environment parameters, and how to run the container. But when you have a large number of microservices — I was just showing you five; imagine 50 or 70, which is actually the case for many financial services companies — then you need a way to orchestrate them.
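To make that concrete, a minimal Dockerfile along the lines just described might look like this. This is my own sketch, not taken from the talk: the base image, JAR path, and port are assumptions for a Java microservice like the one in the repo.

```dockerfile
# Base image: a Java runtime (assumption: Java 17 JRE)
FROM eclipse-temurin:17-jre

# Environment parameter the application can read at startup (hypothetical)
ENV SERVER_PORT=8080

# Copy the packaged application into the image (path is hypothetical)
COPY target/app.jar /opt/app.jar

# Document the port the service listens on
EXPOSE 8080

# How to run the container
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]
```

You would then build and run it with `docker build -t myservice .` and `docker run -p 8080:8080 myservice`, or the equivalent `podman` commands.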
(17:25) You need to scale them based on demand, recover from failure, and make updates to the microservices without causing interruption — and that is a very hard task and a very complex process. You can't really use the tools I mentioned earlier to do that; for that, you need Kubernetes. Kubernetes is an open source system for automating the deployment, scaling, and management of containerized applications. So what does Kubernetes offer a multi-container application?
(17:57) When we have multiple containers, the application must run on a multi-host environment — as we talked about with multi-cloud environments — in order to eliminate a single point of failure. If one cloud environment goes down, you can easily move everything onto another cloud environment, and Kubernetes can make that happen for you: it can easily switch the load to another host. Then we need to be able to create new instances of individual microservices — that's the scaling I talked about earlier, and that's exactly what
(18:28) needs to happen; Kubernetes takes care of that for individual microservices, or for multiple microservices at the same time, and can easily scale them. Then, if one or more of your services needs to be updated — let's say your team is working on a new feature for one of your microservices — you need to be able to add those to the mix without causing any interruption or downtime, and that's exactly what Kubernetes does: it schedules new deployments
(18:57) and creates new instances of your containers with zero downtime. Kubernetes also scales and manages your containers according to the available underlying resources, based on what you've got available in the platform or the hardware that Kubernetes is running on. And last but not least, Kubernetes checks every container continually to make sure they are healthy, and in case of a health failure it takes action to reinstate your deployments, create new instances, and restore the services.
(19:28) That's what Kubernetes does — it's amazing; it takes care of everything when you're dealing with a very large number of microservices. With Kubernetes, we state what we desire for our application, such as the number of instances and the individually allocated resources, and Kubernetes observes the actual state of our application and keeps adjusting and readjusting our resources to make that happen. So it's always a continuous reconciliation of the observed and desired state.
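The reconciliation idea can be illustrated with a toy sketch. This is my own illustration in Python, not Kubernetes code — a real controller watches the API server rather than comparing plain integers — but the shape of the control loop is the same: compare desired with observed, act on the difference.

```python
def reconcile(desired_replicas: int, observed_replicas: int) -> list:
    """Return the actions a controller would take to converge
    the observed state toward the desired state."""
    actions = []
    if observed_replicas < desired_replicas:
        # A pod died or we scaled up: schedule replacements.
        actions += ["create-pod"] * (desired_replicas - observed_replicas)
    elif observed_replicas > desired_replicas:
        # We scaled down: remove the surplus pods.
        actions += ["delete-pod"] * (observed_replicas - desired_replicas)
    return actions  # an empty list means the states already match

# One replica of three has failed; the loop restores it.
print(reconcile(3, 2))  # ['create-pod']
print(reconcile(3, 3))  # []
```

Kubernetes runs this kind of loop continuously, so the cluster keeps converging on the declared state without any manual intervention.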
(19:58) Now let's take a look at our deployment scenario at a high level: how are we going to deploy our application onto Kubernetes? We've got our Kubernetes cluster; we broke our application down into microservices; then we containerized them — with Docker, just as a reminder — and deploying each Docker container with Kubernetes spins up a pod with that Docker container in it. Then, based on our deployment scenario and the load on each of those services, each pod gets replicated — that's
(20:27) where scaling comes in, adding new pods for each of those services. So first we create a deployment, then we scale out the deployment accordingly by using replicas, and the next step is to create services that allow our microservices to communicate with each other within the cluster, and also to expose them to the internet and the external network. That's what happens within Kubernetes. So these are about 12
(20:58) instructions that you need, and I'm going to show them to you. Using kubectl, you create your deployment, then you expose your deployment, then you scale your deployment, and you can even roll out new updates — and undo those updates — with a single command. So it's very simple, not that complicated, and you can do it all from your command-line interface. If you've got a batch operation, you can instead use YAML files and describe all the
(21:27) operations you want to happen in your YAML file, and then apply those YAML files for batch operations. And if you need information about your deployment — the number of pods, the deployments, how many replicas you have — you can use the get commands to retrieve it. What happens in the background is that we use kubectl — "cube control" or "cube C-T-L"; people pronounce it differently — the CLI tool, to interact with our Kubernetes cluster; it lets us control the cluster and its resources.
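Collected in one place, the commands just described might look like this. This is a sketch only: the deployment name `catalog`, the image, and `deployment.yaml` are hypothetical placeholders, not names from the talk.

```shell
# Create a deployment from a container image
kubectl create deployment catalog --image=registry.example.com/catalog:1.0

# Expose the deployment as a service reachable from outside the cluster
kubectl expose deployment catalog --type=LoadBalancer --port=8080

# Scale the deployment out to three replicas
kubectl scale deployment catalog --replicas=3

# Roll out a new image version, then undo it if something goes wrong
kubectl set image deployment/catalog catalog=registry.example.com/catalog:1.1
kubectl rollout undo deployment/catalog

# Batch style: describe everything in a YAML file and apply it
kubectl apply -f deployment.yaml

# Inspect what's running: deployments, replicas, pods
kubectl get deployments
kubectl get pods
```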
(21:57) Going back to our deployment scenario: we use kubectl to instruct our master node to create a deployment based on the given containers. The master node processes our request through the API server, as you can see, and then runs a scheduling service that automates when and where those containers are deployed, based on our declaration. Each worker node includes a container runtime and a software agent called the kubelet —
(22:26) previously that runtime was Docker, but it has basically been replaced now, so Docker is not there anymore. The kubelet receives and executes orders from the master node: it creates the replicas and spins up new pods for each instance of our containers. Then, to expose our application to the external network, our master node creates a service to direct traffic to our pods, as you can see happening here.
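As a sketch, the service that the master node creates to route traffic could be declared like this. This is my own hypothetical example — the name, labels, and ports are placeholders, not from the talk.

```yaml
# service.yaml — routes external traffic to the pods labeled app: catalog
apiVersion: v1
kind: Service
metadata:
  name: catalog
spec:
  type: LoadBalancer      # expose the service outside the cluster
  selector:
    app: catalog          # matches the pods created by the deployment
  ports:
    - port: 8080          # port the service listens on
      targetPort: 8080    # port the container listens on
```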
(22:58) Some of the other tools you can use here are completely open source: instead of Docker you can use Podman, and you can use Rancher Desktop, which is like Docker Desktop — that way you can containerize your application entirely with open source tools. When it comes to Kubernetes, you can even use OKD, which is basically the open source version of OpenShift. You can use Prometheus for monitoring, Redis for in-memory storage, and Postgres for persistence —
(23:31) they're all open source tools and completely free. And you can use Grafana to visualize what's happening, pulling the logs and metrics from your system through, for instance, Prometheus. So this is a completely open source ecosystem that you can use to build and deploy your cloud native application. Now, when I talked about the control loop — this is actually quite interesting — we define our desired state, Kubernetes observes, and then finds the
(24:05) difference and reconciles it against what's actually happening; if the states are not the same, it takes action to make them match. Let's say we desire to have three replicas and one of our replicas goes out of service, as you can see in the red circle: Kubernetes will find out that we don't have as many replicas as desired, and a replacement is instantiated with no issues — it immediately spins up another pod to
(24:34) make sure we've got the right number of replicas ready for the application; Kubernetes will make that happen. So, thank you again for joining my talk — I tried to keep it short — and I hope you've enjoyed the talk and all the information I've given about using Kubernetes, and all the instructions you need to get started. Make sure you don't miss the last part of the conference, where we are going to get together with all the speakers to answer your
(25:06) questions. Thank you.

Summary

If you are starting to migrate your workloads to cloud-native, this talk will guide you through: The specifics of Cloud Native development; The difference between Cloud platforms; The basics of Kubernetes — why you should use it, how it functions, and how to orchestrate your containers with this platform; Open-source tools you can use for efficient Cloud Native development.
