
Spring Boot monitoring in Kubernetes with Prometheus and Grafana

Nov 16, 2023
Catherine Edelveis

The application is successfully deployed to Kubernetes, and the instances are running normally. The job is done, right? Yes and no. A lot is happening in your clusters, and monitoring their health is essential: it lets you resolve potential issues as soon as they arise, before they affect the user experience, and keep the KPIs aligned with business needs. In addition, we need to understand how an application behaves and how much memory it actually needs in order to select the appropriate instance size.

By the way, to optimize instance size and reduce the startup and warmup times of your services from minutes to milliseconds, which is vital for high availability, you can use Java with CRaC. You should definitely give it a try!

Coming back to monitoring. Fortunately, Spring Boot provides the Actuator tool capable of exporting numerous out-of-the-box and custom application metrics. The metrics can be collected with Prometheus and visualized with Grafana, two outstanding open-source monitoring solutions.

If you deploy Spring Boot services to the cloud, check out Alpaquita Containers tailor-made for Spring Boot!

Why Prometheus and Grafana

Prometheus is an open-source solution for collecting and monitoring time series data (i.e., data points recorded with timestamps). Being a graduated Cloud Native Computing Foundation (CNCF) project with 50K+ stars on GitHub, it is one of the leading monitoring systems in the niche. A distinctive feature of Prometheus is its standalone server nodes that don’t depend on network storage, providing enhanced reliability and access to collected data even under failure conditions.

While Prometheus offers a robust time series metrics database, Grafana helps to visualize the collected data most conveniently with the help of a beautiful user interface. The Grafana UI includes a dashboard with colorful charts, graphs, and alerts, facilitating the analysis of complex data sets.

Prerequisites

The code for the project below is available on GitHub.

Enable Spring Boot Actuator

First things first, let’s enable Spring Boot Actuator to expose application metrics for Prometheus to gather. Add the following dependency to the pom.xml file:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

To expose the metrics, we need to explicitly specify them in our application.properties file:

management.endpoints.web.exposure.include=health,metrics

That’s it! Run the application, and at http://localhost:8080/actuator/health you should see

{
  "status": "UP"
}

Now, go to http://localhost:8080/actuator/metrics, and you will see a long list of available metrics (some of them are shown below): 

{
    "names": [
        "application.ready.time",
        "application.started.time",
        "disk.free",
        "disk.total",
        "executor.active",
        "executor.completed",
        "executor.pool.core",
        "executor.pool.max",
        "executor.pool.size",
        "executor.queue.remaining",
        "executor.queued",
        "http.server.requests",
        "http.server.requests.active",
        "jvm.buffer.count",
        "jvm.buffer.memory.used",
        "jvm.buffer.total.capacity",
        "jvm.classes.loaded",
        "jvm.classes.unloaded",
        "jvm.compilation.time",
        "jvm.gc.live.data.size",
        "jvm.gc.max.data.size",
        "jvm.gc.memory.allocated",
        "jvm.gc.memory.promoted",
        "jvm.gc.overhead",
        "jvm.info",
        "jvm.memory.committed",
        "jvm.memory.max",
        "jvm.memory.usage.after.gc",
        "jvm.memory.used",
        ...

Each metric can be studied separately by appending its name to the URL. For instance, let’s look more closely at jvm.memory.max at http://localhost:8080/actuator/metrics/jvm.memory.max:

{
    "name": "jvm.memory.max",
    "description": "The maximum amount of memory in bytes that can be used for memory management",
    "baseUnit": "bytes",
    "measurements": [
        {
            "statistic": "VALUE",
            "value": 5620367357
        }
    ],
    "availableTags": [
        {
            "tag": "area",
            "values": [
                "heap",
                "nonheap"
            ]
        },
        {
            "tag": "id",
            "values": [
                "CodeHeap 'profiled nmethods'",
                "G1 Old Gen",
                "CodeHeap 'non-profiled nmethods'",
                "G1 Survivor Space",
                "Compressed Class Space",
                "Metaspace",
                "G1 Eden Space",
                "CodeHeap 'non-nmethods'"
            ]
        }
    ]
}
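
The availableTags section hints at further drill-downs: Actuator lets you filter a metric by tag with the tag query parameter. For example, to look at the heap portion only (the value will differ on your machine):

$ curl "http://localhost:8080/actuator/metrics/jvm.memory.max?tag=area:heap"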

Make Actuator metrics visible to Prometheus

The next step is to make application metrics visible to Prometheus. For that purpose, add the Micrometer dependency to pom.xml:

<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

Update the application.properties file:

management.endpoints.web.exposure.include=health,metrics,prometheus
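
Optionally, you can add a common tag with the application name to every exported metric, which makes filtering in Prometheus and Grafana much easier once several services are scraped. This is not required for the demo; a minimal sketch for application.properties, assuming spring.application.name is set and a reasonably recent Spring Boot version:

spring.application.name=spring-boot-app
management.metrics.tags.application=${spring.application.name}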

The new endpoint will be available at http://localhost:8080/actuator/prometheus. Below is only a small part of what you will see there when you run your application: 

# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
# TYPE system_load_average_1m gauge
system_load_average_1m 1.41015625
# HELP system_cpu_usage The "recent cpu usage" of the system the application is running in
# TYPE system_cpu_usage gauge
system_cpu_usage 0.11948988078735792
# HELP jvm_info JVM version info
# TYPE jvm_info gauge
jvm_info{runtime="OpenJDK Runtime Environment",vendor="BellSoft",version="17.0.7+7-LTS",} 1.0
# HELP process_files_open_files The open file descriptor count
# TYPE process_files_open_files gauge
process_files_open_files 62.0
# HELP jvm_gc_pause_seconds Time spent in GC pause
# TYPE jvm_gc_pause_seconds summary
jvm_gc_pause_seconds_count{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 1.0
jvm_gc_pause_seconds_sum{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 0.003

Add custom metrics

You can use custom metrics to monitor parameters important for your workloads. These metrics are automatically picked up by Prometheus. For instance, you can use a Counter to measure the number of requests made to the application or a Timer to measure latency.
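
For example, a Counter can be registered through the same MeterRegistry and incremented on every call to an endpoint. Below is a minimal sketch; the controller, endpoint, and metric names are illustrative and not part of the demo project:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private final Counter greetingCounter;

    public GreetingController(MeterRegistry registry) {
        // Registers a counter that is exported as greetings_total on /actuator/prometheus
        this.greetingCounter = Counter.builder("greetings")
                .description("Number of greetings served")
                .register(registry);
    }

    @GetMapping("/greeting")
    public String greeting() {
        // Each request increments the counter by one
        greetingCounter.increment();
        return "Hello!";
    }
}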

As a demonstration, let’s use a Timer to measure how long it takes to sum all numbers up to 1,000 with a 10 ms pause between additions:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import java.util.concurrent.TimeUnit;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TimerController {

    public TimerController(MeterRegistry registry) {
        // Registers a timer and records how long the whole loop takes
        Timer timer = registry.timer("Time for operation");
        timer.record(() -> {
            int sum = 0;
            for (int i = 0; i <= 1000; i++) {
                sum += i;
                try {
                    TimeUnit.MILLISECONDS.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}

Run the application. At /actuator/prometheus, you will find our custom metric showing that it took the app about 12.2 seconds to complete the task (roughly 1,000 sleeps of 10 ms each plus loop overhead):

# HELP Time_for_operation_seconds  
# TYPE Time_for_operation_seconds summary
Time_for_operation_seconds_count 1.0
Time_for_operation_seconds_sum 12.209412708

Set up Prometheus and Grafana on Kubernetes

Up until now, we have been monitoring the application running locally. However, there is more to do before we can monitor our containerized workloads on Kubernetes. First, we must deploy Prometheus and Grafana to our Kubernetes cluster.

We will use Helm charts to install the tools on Kubernetes. Helm is another open-source graduated CNCF project aimed at facilitating the management of Kubernetes applications. It provides a solution for defining, deploying, and upgrading any Kubernetes application with the help of Kubernetes packages called charts. There are numerous ready-made charts available, including those for Prometheus and Grafana.

Install Prometheus

First of all, you need to install the Helm CLI if you don’t have it yet. There are several ways of doing that. You can

  • download a binary release for your operating system, unpack it, and move it to the required location;
  • use the installer script that will install the latest version of Helm to your machine:
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
  • take advantage of package managers (Homebrew, Chocolatey, apt, dnf/yum, etc.). For instance, to install Helm with Homebrew, run
$ brew install helm
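
Whichever option you choose, you can verify that the CLI is available by running

$ helm version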

After installing the Helm CLI, add the Bitnami Helm repository, which hosts the Prometheus chart:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update

The next step is to install the Prometheus Helm chart. Developers can take advantage of various configuration options to tailor Prometheus to their deployment environment. For our simple demo app, though, the default settings will suffice. Run

$ helm install prometheus bitnami/kube-prometheus

NAME: prometheus
LAST DEPLOYED: Wed Nov  1 11:54:58 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kube-prometheus
CHART VERSION: 8.22.0
APP VERSION: 0.68.0
** Please be patient while the chart is being deployed **
Watch the Prometheus Operator Deployment status using the command:
    kubectl get deploy -w --namespace default -l app.kubernetes.io/name=kube-prometheus-operator,app.kubernetes.io/instance=prometheus
Watch the Prometheus StatefulSet status using the command:
    kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-prometheus,app.kubernetes.io/instance=prometheus
Prometheus can be accessed via port "9090" on the following DNS name from within your cluster:
    prometheus-kube-prometheus-prometheus.default.svc.cluster.local
To access Prometheus from outside the cluster execute the following commands:
    echo "Prometheus URL: http://127.0.0.1:9090/"
    kubectl port-forward --namespace default svc/prometheus-kube-prometheus-prometheus 9090:9090
Watch the Alertmanager StatefulSet status using the command:
    kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-alertmanager,app.kubernetes.io/instance=prometheus
Alertmanager can be accessed via port "9093" on the following DNS name from within your cluster:
    prometheus-kube-prometheus-alertmanager.default.svc.cluster.local
To access Alertmanager from outside the cluster execute the following commands:
    echo "Alertmanager URL: http://127.0.0.1:9093/"
    kubectl port-forward --namespace default svc/prometheus-kube-prometheus-alertmanager 9093:9093

The default Prometheus pod is accessible only from within the cluster, which is better practice than exposing Prometheus metrics to the outside world. As you can see above, Helm advises you on how to access Prometheus, including via port-forward, so let’s make use of it. Open a new Terminal window and run

$ kubectl port-forward --namespace default svc/prometheus-kube-prometheus-prometheus 9090:9090

Voilà! Prometheus is now accessible via http://localhost:9090.

Prometheus GUI
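
As mentioned above, the chart exposes plenty of configuration options. If you later need to deviate from the defaults (data retention, resource requests, alerting rules, and so on), one possible workflow, sketched below, is to dump the chart’s default values, edit them, and pass the file back to Helm. The exact keys depend on the chart version, so treat this only as an outline:

$ helm show values bitnami/kube-prometheus > values.yaml
# edit values.yaml to your liking, then apply the overrides
$ helm upgrade --install prometheus bitnami/kube-prometheus -f values.yaml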

Install Grafana

We are going to install Grafana in a similar fashion, with the help of a dedicated Helm chart. Again, you can use the default settings for now and take a deep dive into the available configuration options for your enterprise project later.

To install Grafana, run

$ helm install grafana bitnami/grafana

NAME: grafana
LAST DEPLOYED: Wed Nov  1 12:25:23 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: grafana
CHART VERSION: 9.5.0
APP VERSION: 10.2.0
** Please be patient while the chart is being deployed **
1. Get the application URL by running these commands:
    echo "Browse to http://127.0.0.1:8080"
    kubectl port-forward svc/grafana 8080:3000 &
2. Get the admin credentials:
    echo "User: admin"
    echo "Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"

Next, retrieve the password generated by the Helm chart to access Grafana (the command is conveniently provided by Helm as you can see above):

$ echo "Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"

Now we can access Grafana with the port-forward command. In a new Terminal window, execute:

$ kubectl port-forward svc/grafana 8080:3000

Visit http://localhost:8080. Enter the username (admin by default) and the password you retrieved previously. Upon successful login, you will see the main Grafana page.

Grafana GUI

Couple Prometheus with Grafana

The last thing we need to do is make the Prometheus metrics visible in Grafana. Click “Add your first data source” and choose Prometheus from the list. Enter the URL where Prometheus is running (http://prometheus-kube-prometheus-prometheus.default.svc.cluster.local:9090).

Click “Save & Test.” You can now import various dashboards and panels for data visualization. Grafana offers numerous ready-made dashboards: all you need to do is import the ID of a selected dashboard into your Grafana installation.

All set, it’s time to poke into our containerized app and see how it is doing!

Deploy the application to Kubernetes

I’m assuming that you are familiar with the process of deploying an application to Kubernetes, but if you aren’t, please refer to my previous guide with step-by-step instructions.

Containerize your application and push it to a container registry — this is where Kubernetes pulls the images. As I said in the beginning, I’m using an image from the previous guide, built with a lightweight Alpaquita Container tailor-made for Spring Boot.

What we need to do now is create a deployment.yaml file with the following content (substitute <docker-id> with your Docker ID if you published the image to the Docker Hub registry):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-app
  labels:
    app: spring-boot-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: spring-boot-app
          image: <docker-id>/spring-boot-app
          imagePullPolicy: Always
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-app-service
  labels:
    app: spring-boot-app
spec:
  selector:
    app: spring-boot-app
  ports:
    - protocol: TCP
      name: http-traffic
      port: 8080
      targetPort: 8080

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: spring-boot-app-service-monitor
spec:
  selector:
    matchLabels:
      app: spring-boot-app
  endpoints:
    - port: http-traffic
      path: "/actuator/prometheus"

Apart from the usual Deployment and Service definitions, we specify a ServiceMonitor that tells Prometheus which endpoints to scrape for metrics.

Make sure that minikube is running. Then, go to the application directory and apply the manifest:

$ kubectl apply -f deployment.yaml

You can check that the pods are running with

$ kubectl get all                      
NAME                                                                READY   STATUS    RESTARTS   AGE
pod/alertmanager-prometheus-kube-prometheus-alertmanager-0          2/2     Running   0          19m
pod/grafana-84776fc997-srqvk                                        1/1     Running   0          17m
pod/prometheus-kube-prometheus-blackbox-exporter-77b4db9fd8-wbflx   1/1     Running   0          20m
pod/prometheus-kube-prometheus-operator-7ff7699948-mfkwb            1/1     Running   0          20m
pod/prometheus-kube-state-metrics-5dcf55d96d-92nj6                  1/1     Running   0          20m
pod/prometheus-node-exporter-m97wr                                  1/1     Running   0          20m
pod/prometheus-prometheus-kube-prometheus-prometheus-0              2/2     Running   0          19m
pod/spring-boot-app-dbf645dbf-hrb9c                                 1/1     Running   0          13m
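
You can also confirm that the ServiceMonitor resource was created (the Prometheus Operator installed by the chart watches for these objects); the name comes from the manifest above:

$ kubectl get servicemonitor spring-boot-app-service-monitor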

Prometheus should pick up the new endpoint, which can be seen in the “Status -> Targets” section at http://localhost:9090.

Application endpoint picked up by Prometheus

In the Graph section, you can check application metrics, including our custom one.

Displaying custom metrics 
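
For example, you can enter the following queries (one at a time) in the expression field; the metric names follow the Micrometer naming shown earlier on the /actuator/prometheus page:

# total time recorded by our custom timer, in seconds
Time_for_operation_seconds_sum

# per-second rate of HTTP requests to the application over the last 5 minutes
rate(http_server_requests_seconds_count[5m])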

Visualize the metrics with Grafana

The final touch — let’s head to Grafana and visualize our metrics there.

The most convenient method of visualizing application metrics is to import a ready-made dashboard. For instance, let’s import a dashboard for Kubernetes Cluster Monitoring. You will see memory, CPU, and filesystem usage stats for your cluster, as well as network and pod I/O data.

Visualizing metrics with Grafana

Conclusion

As you can see, setting up a monitoring solution for a Spring Boot application is not complicated. With it in place, you gain visibility into everything happening in your Kubernetes cluster and can react quickly to undesirable changes or tune performance based on real-life data.

Want to know more valuable tips on running Spring Boot in the cloud? Subscribe to our newsletter to stay up to date with the newest guides!

 
