
How to deploy application replicas in local Kubernetes clusters

Nov 24, 2022
Dmitry Chuyko

We continue our series on optimizing Java applications on Kubernetes. In the first part, we learned how to create a local single-node Kubernetes cluster. This time, we will look into deploying multiple application replicas in it. This deployment strategy is helpful if you need to study pod sizing and application scalability, taking JVM flags and resource limits into account.

Set up the environment

We assume you already have a running local K8s cluster built with minikube, an easy-to-install Kubernetes distribution for local development and testing. Otherwise, refer to the tutorial.

We use an x86_64 machine running Linux, with 96 CPUs and a lot of RAM. We will also require the following technologies:

  • Docker, a container runtime, which we will use as a driver because there’s no need for virtualization
  • NGINX Ingress controller for traffic distribution between application replicas
  • Metrics Server, a reference implementation of the Metrics API for resource metrics collection and aggregation. We will use it instead of Heapster, which is marked as deprecated in newer K8s versions

First, let’s set up a driver and assign 88 CPUs and 32 GB of RAM to the cluster. Install Docker 18.09 or higher and ensure the Docker daemon is running with

$ sudo systemctl start docker
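
To confirm that the daemon is up and the version meets the requirement, you can print the server version:

$ docker version --format '{{.Server.Version}}'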

Start a minikube cluster with the following command:

$ minikube start --driver=docker --extra-config=kubelet.housekeeping-interval=10s --cpus 88 --memory 32768

The command above solves two issues. The first one may appear when checking the pod metrics with metrics-server and is avoided by setting the kubelet housekeeping interval. The second issue is caused by minikube becoming unresponsive and is eliminated by increasing the number of CPUs and the amount of memory allocated to it (2 CPUs and 2048 MB by default).
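
To double-check that the cluster received the requested resources, you can inspect the node's capacity (this assumes the default node name, minikube):

$ kubectl get node minikube -o jsonpath='{.status.capacity.cpu} CPUs, {.status.capacity.memory}'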

To make docker the default driver, run

$ minikube config set driver docker

Now, we need to enable Ingress and Metrics Server, which are provided as minikube addons (K8s extensions). The commands are as follows:

$ minikube addons enable ingress

If you are using a Kubernetes version older than 1.11, you must disable Heapster first.

$ minikube addons disable heapster
$ minikube addons enable metrics-server
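
Once Metrics Server is running (it may take a minute or so to start reporting), you can verify it with the resource usage commands:

$ kubectl top nodes
$ kubectl top pods -A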

Now that you are all set, you can check the cluster information by running

$ kubectl cluster-info

Deploy an application 

We will use a containerized Spring Boot Petclinic application as a demo. Refer to the tutorial on dockerizing a Spring Boot app to build a container.

The following steps cover the deployment, as well as the Service and Ingress configuration.

We want to use a local Docker image for local testing instead of pulling one from a Docker registry. However, minikube doesn’t see local images: by default, it pulls images from private or public repositories, because it runs its own Docker environment, separate from the one on our machine. To solve the issue, we can export the app container image from the local repository, switch to minikube’s shell, and load it there.

Load the image from your local Docker environment into Minikube’s cluster environment:

$ minikube image load petclinic-jpa-jvm:0.0.1-SNAPSHOT

Other options are to use minikube image build to build the image in the cluster environment, or to export the image as a file in the local environment and import it in Docker in minikube’s environment.
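
For example, the export/import route could look like this (the tarball name is arbitrary):

$ docker save petclinic-jpa-jvm:0.0.1-SNAPSHOT -o petclinic.tar
$ eval $(minikube docker-env)
$ docker load -i petclinic.tar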

Switch to minikube’s Docker in the current shell session. It makes sense to keep a regular session and a minikube session open in parallel.

$ eval $(minikube docker-env)
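
After switching, you can confirm that the image is visible in minikube’s Docker environment:

$ docker images | grep petclinic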

Now we can configure the deployment, service, and Ingress. We need three configuration files for that purpose:

  • 01-deployment-jvm1.yaml
  • 02-service.yaml
  • 03-ingress.yaml

In 01-deployment-jvm1.yaml, we describe:

  • Which container to deploy
  • Minimal pod requirements to run a replica (CPU and memory requests)
  • Maximum pod resources available to a replica (CPU and memory limits)
  • Application configuration (environment). For simplicity’s sake, we make each replica use its own in-memory database
  • How many replicas to have. We’ll start with a single replica, but you can easily vary the number (see the scaling example after the manifest)
  • How we refer to the deployed application (the selector)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
spec:
  selector:
    matchLabels:
      app: petclinic
  replicas: 1
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
        - name: petclinic
          image: petclinic-jpa-jvm:0.0.1-SNAPSHOT
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "hsqldb"
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:ActiveProcessorCount=8"
          resources:
            limits:
              cpu: "8"
              memory: 16Gi
            requests:
              cpu: "8"
              memory: 16Gi

In 02-service.yaml, we define a Service that makes the Petclinic replicas reachable on port 8080:

kind: Service
apiVersion: v1
metadata:
  name: petclinic
spec:
  selector:
    app: petclinic
  ports:
  - name: http
    port: 8080
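
With the Service applied, a quick way to test it without going through Ingress is port forwarding (this assumes the Service name petclinic from the manifest above):

$ kubectl port-forward service/petclinic 8080:8080

Then, in another shell session:

$ curl http://localhost:8080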

In 03-ingress.yaml, we expose a single external endpoint for connections to the cluster, backed by all Petclinic replicas via the service’s port 8080.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: petclinic
spec:
  defaultBackend:
    service:
      name: petclinic
      port:
        number: 8080
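
If the Ingress doesn’t receive an address, make sure the NGINX Ingress controller pod is up. With the minikube addon, it typically runs in the ingress-nginx namespace, although this may vary between minikube versions:

$ kubectl get pods -n ingress-nginx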

Apply all configuration files with the following commands:

$ kubectl apply -f 01-deployment-jvm1.yaml
$ kubectl apply -f 02-service.yaml
$ kubectl apply -f 03-ingress.yaml
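
Before checking the objects, you can wait for the rollout to complete:

$ kubectl rollout status deployment/petclinic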

To make external connections, first retrieve the IP address of the node:

$ minikube ip

You can now observe your running cluster:

$ kubectl get deployments

NAME        READY   UP-TO-DATE   AVAILABLE   AGE
petclinic   1/1     1            1           6s

$ kubectl get services

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    14h
petclinic    ClusterIP   10.98.212.162   <none>        8080/TCP   11s

$ kubectl get ingresses

NAME        CLASS   HOSTS   ADDRESS        PORTS   AGE
petclinic   nginx   *       192.168.49.2   80      10s

Or

$ kubectl get pods

NAME                       READY   STATUS    RESTARTS      AGE
petclinic-54d8b6b5-5fzrh   1/1     Running   0             15s

You can find more details with commands such as

$ kubectl describe nodes
$ kubectl describe pods

etc.

After you find your pod’s name, you can look into its logs and find your application’s output with the following command:

$ kubectl logs petclinic-54d8b6b5-5fzrh

2022-11-23 17:17:34.822  INFO 1 --- [           main] o.s.s.petclinic.PetClinicApplication     : Started PetClinicApplication in 3.972 seconds (JVM running for 4.248)
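
To stream the logs continuously, or to avoid looking up the pod name, you can also address the deployment directly:

$ kubectl logs -f deployment/petclinic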

You should be able to access the application at the node IP returned by minikube ip with

$ curl http://192.168.49.2
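
To sanity-check the Ingress path end to end, you can send several requests and print the HTTP status codes (substitute the IP returned by minikube ip):

$ for i in $(seq 5); do curl -s -o /dev/null -w "%{http_code}\n" http://192.168.49.2; done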

Our test single-node, single-pod deployment is shown in the diagram below:

[Figure: Local Kubernetes cluster: single node, single pod]

Conclusion

This tutorial taught us how to deploy Spring Boot container images to minikube using a locally built image. We also worked through the deployment configuration, which we can adjust as required. In the following post, we will look closely at JVM tuning options and load testing that imitates real-world traffic in a Kubernetes environment.

Further reading

Your containers don’t perform the way you expect them to? Find out about the best practices of containerization 

Would you like to know how to tune your JVM for better performance? Read our article on HotSpot configuration with performance numbers

Want to tune the garbage collection in your Java app? Learn how to choose the appropriate GC first
