
Building Cloud-Native Java Microservices with OpenJDK

On the Highway to Production. Guide to Building a Java Microservices E-Commerce App with Liberica JDK. Part 3


Published November 12, 2021


Contents

  1. Custom Domain for our Applications
  2. Enable Security (HTTPS)
  3. Logging with Fluent Bit
  4. Monitoring with CloudWatch Container Insights
  5. Summary

This is the third part of our series on cloud-native microservice development in Java™. In the first part of the series, we designed the microservice architecture and developed two microservices for our simple Java e-commerce application.

In the second part, we containerized our applications using the cloud-native buildpack implementation paketo.io, which is natively supported by Spring Boot. We created a container image of our microservices using Liberica JDK. The container image was first published in the Amazon Elastic Container Registry (ECR) and later deployed on the managed Kubernetes service of AWS (EKS).

At the end of the second part, we demonstrated our microservices in action using the Postman collection.

However, there are still many things to do to make our microservices ready for enterprise use. If you’d like to discuss the process of building a Java e-commerce application with a reliable partner, click the button below. Our engineers will be happy to help you.

Custom Domain for our Applications

Amazon Route 53 is a highly available and scalable cloud DNS service. In addition to classic DNS routing, it also offers domain registration and health checking.

Even though we could use another DNS provider, we will be registering our custom domain in Amazon Route 53, which is a straightforward procedure.

First, open “Route 53” and select “Register domain” as shown below:

[Screenshot: Route 53 console with the “Register domain” option]

I have chosen the domain name “microservicesdemo”. After that, a number of available domain names containing “microservicesdemo” are displayed. I went for microservicesdemo.net, which costs 11 USD per year.

[Screenshot: available domain names for “microservicesdemo”]

On the next page, contact details for the domain registration are shown. Once the contact details are filled in, the domain is created.

[Screenshot: contact details for the domain registration]

After the registration of the domain name, Route 53 is automatically set as the DNS service for the registered domain.

Route 53 also creates a “Hosted zone” with the same name as the registered domain.

We want to use two subdomains for our two microservices and route traffic from the subdomains to the load balancers. For the latter purpose, we need to create a “Record” in the “Hosted zone” of Route 53.

[Screenshot: creating a record in the hosted zone]

In the “Routing policy”, we choose “Simple routing”.

In “Configure record”, we define a “simple record” to configure a subdomain for our Customer microservice so that the traffic from our subdomain will be routed to the customer load balancer.

[Screenshot: configuring a simple record for the Customer subdomain]

Now, we can reach our Customer microservice with the subdomain name:

[Screenshot: Customer microservice reachable at the subdomain]

The Route 53 DNS Service routes the traffic from the subdomain to the customer load balancer.

We can repeat the above steps to configure the Order microservice subdomain as well.
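These routing steps can also be scripted with the AWS CLI. Below is a minimal sketch that builds the change-batch JSON for a simple-routing CNAME record pointing the customer subdomain at the load balancer; the load-balancer hostname and hosted-zone ID are hypothetical placeholders:

```shell
#!/bin/sh
# Build a change-batch for a simple-routing CNAME record that points the
# customer subdomain at the load balancer's DNS name.
# NOTE: the load-balancer hostname below is a hypothetical placeholder.
SUBDOMAIN="customer.microservicesdemo.net"
LB_DNS_NAME="example-lb-1234567890.eu-central-1.elb.amazonaws.com"

cat > change-batch.json <<EOF
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "${SUBDOMAIN}",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "${LB_DNS_NAME}" }]
    }
  }]
}
EOF

# The record would then be created with (hosted-zone ID is a placeholder):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id Z0123456789EXAMPLE \
#   --change-batch file://change-batch.json
cat change-batch.json
```

The same approach works for the Order subdomain by swapping in its name and load-balancer hostname.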

Enable Security (HTTPS)

For any kind of enterprise application, securing the web traffic via TLS 1.2+ is a must-have criterion. All traffic between the client browser and the load balancer is TLS terminated (HTTPS encrypted/decrypted). Please note that the connection between the load balancer and the pods is not TLS encrypted, as shown below:

[Diagram: TLS termination at the load balancer]

We will use AWS Certificate Manager (ACM) to provision, manage, and deploy SSL/TLS certificates, which we will use in our load balancers.

Let’s open ACM to provision a public certificate:

[Screenshot: AWS Certificate Manager console]

Now, we have to request a certificate from ACM:

[Screenshot: requesting a certificate in ACM]

On the next page, we provide the domain name we configured previously.

[Screenshot: entering the domain name]

AWS needs to make sure that we have control over our configured domain. We will use the “DNS validation” method as we have full control of our domain name:

[Screenshot: selecting DNS validation]

Once we submit the certificate request, ACM shows a CNAME record that needs to be added to the DNS configuration of the domain.

We can choose the option “Create record in Route 53”, which automatically adds a “Record” with the CNAME to the “Hosted zone” configuration of Route 53:

[Screenshot: creating the CNAME validation record in Route 53]
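The validation record itself is an ordinary CNAME entry; with hypothetical values, it looks roughly like this in the hosted zone:

```
_3f9a1b2c4d5e6f.microservicesdemo.net.  CNAME  _0a1b2c3d4e5f.acm-validations.aws.
```

ACM polls DNS until it sees this record, and then issues the certificate.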

Once the certificate is issued, ACM shows the following:

[Screenshot: issued certificate in ACM]

Please note that the status is “Issued”, indicating that our certificate was correctly issued by AWS. “In use” shows “No” because we have not configured the certificate anywhere yet.

Now, we need to terminate the SSL/TLS in the load balancer. Please note that we will not encrypt the connection between the load balancer and the pods.

We will create a new Load Balancer Service “eks-service-tls.yaml” in the directory “src/main/k8s”:

apiVersion: v1
kind: Service
metadata:
  name: microservice-customer-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:877546708265:certificate/113c4264-84bd-46e0-906f-a7b1bf1e0626
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  # Creating a service of type LoadBalancer. The load balancer gets created but takes time to reflect.
  type: LoadBalancer
  selector:
    app: microservice-customer
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8080

There are several differences compared to the Load Balancer Service definition we used last time.

In the annotations, we configured the certificates which we previously created in ACM:

annotations:
  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:877546708265:certificate/113c4264-84bd-46e0-906f-a7b1bf1e0626
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

In addition, we set the backend protocol to http because we want the “X-Forwarded-Proto” header to be available in the backend service; this is what the “service.beta.kubernetes.io/aws-load-balancer-backend-protocol” annotation controls.

Also, the port is changed to “443” for HTTPS.
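One related note on the application side: receiving the “X-Forwarded-Proto” header does not automatically make Spring Boot honor it. As a sketch of an assumed configuration (not shown in the original setup), forwarded-header handling can be enabled in application.properties:

```properties
# Assumed setting: let Spring MVC honor the X-Forwarded-* headers set by
# the load balancer, so generated links and redirects use https.
server.forward-headers-strategy=framework
```

Without something like this, redirects generated by the application may point to http URLs even though clients connect over HTTPS.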

Now, we can reach our Customer microservice at the address: https://customer.microservicesdemo.net/customer

[Screenshot: Customer microservice over HTTPS]

We can see the lock icon near the URL indicating that our connection is secure. Clicking the lock icon provides detailed information about the SSL/TLS certificate (e.g., issued by Amazon, issue date, etc.).

We can repeat the whole process to secure the connection with the Order microservice as well.
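For reference, the Order microservice Service manifest would differ only in names and selector. This is a sketch, assuming the Order deployment uses the label app: microservice-order and that the ACM certificate covers the Order subdomain as well:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: microservice-order-service
  annotations:
    # Reuses the ACM certificate from above, assuming it also covers the Order subdomain
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:877546708265:certificate/113c4264-84bd-46e0-906f-a7b1bf1e0626
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: microservice-order   # assumed label of the Order deployment
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8080
```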

Logging with Fluent Bit

Logging is an integral part of software development and must be supported by all production applications. We have to log all important events during the application lifecycle for data analysis. Here are the main use cases of logging:

  • If there is an error in the application, we can analyse the log data to find the root cause.
  • If there is a bug in the application, we can reproduce it by looking at the log data.
  • Log data helps us understand the whole workflow of the application.

In our open-source e-commerce application, we generate log data. The challenge is to collect and process it.

There are many ways to export log data from applications. We can write to files, to a log collector API, or to STDOUT (the console) and use a log collector for data collection.

According to the twelve-factor app methodology, which is a gold standard of modern application development, an application should not write to log files or manage log routing itself. Instead, it writes its logs to unbuffered STDOUT as an event stream. During local development, developers should be able to view the log events in the foreground of their console.

In staging or production, the logs are captured by the execution environment. The log routers/collectors collect the logs and route them to the final destination (file or database) for storage and analysis.

In our e-commerce app, we will follow the approach suggested by twelve-factor logging: logs are generated as an event stream and captured by the Kubernetes pods; a log collector and analyser then gathers them, and we use Amazon CloudWatch as the final destination of our log data for analysis.

There are many log collectors on the market. Among them, Fluentd is the most widely used thanks to its vendor-neutral design. In recent years, Fluent Bit has gained popularity due to its lightweight yet powerful feature set. Moreover, Fluent Bit has a particular focus on collecting logs in containerized environments.

In our application, we will use Fluent Bit as a log collector to collect log data from our pods. We will use Amazon CloudWatch Container Insights for log analysis. All the components for logging are shown below:

[Diagram: logging components — pods, Fluent Bit, CloudWatch]

Fluent Bit is an open-source log processor and forwarder that allows us to collect various data, such as metrics and logs, from different sources, enrich them with filters, and send them to multiple destinations, including CloudWatch Logs. It is the preferred choice for containerized environments such as Kubernetes. Fluent Bit is designed with performance in mind: it is written in C to solve the narrow log collection problem effectively at scale, and it offers high throughput with low CPU and memory usage.
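To give a feel for how Fluent Bit is configured (the actual configuration is shipped in a ConfigMap by the AWS manifests applied below), a minimal pipeline that tails container logs and forwards them to CloudWatch Logs could look roughly like this; the tag and log group names are illustrative:

```ini
# Illustrative Fluent Bit pipeline, not the AWS-provided configuration
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Tag               application.*

[OUTPUT]
    Name              cloudwatch_logs
    Match             application.*
    region            eu-central-1
    log_group_name    /aws/containerinsights/microservices/application
    log_stream_prefix fluent-bit-
    auto_create_group On
```

Inputs collect records, optional filters enrich them, and outputs route them to their destination; the Match directive wires outputs to input tags.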

In our EKS cluster, we have the following kinds of logs:

  • Application logs are produced by our applications and stored in /var/log/containers/*.log.
  • Host logs are system logs generated by the EKS host nodes and stored in /var/log/messages, /var/log/dmesg, and /var/log/secure.
  • Data plane logs are generated by the EKS data plane components.

Kubernetes has the concept of a DaemonSet to make sure that all nodes run a copy of a pod. It is useful for cluster-wide operations such as log collection and node monitoring, where pods are automatically added to every new node. A DaemonSet is also a good argument for slim Docker images: DaemonSet pods are in always-restart mode (each restart triggers an image pull, which should normally hit the cache), and scale-in, scale-out, and platform updates are much easier when pulls are real. Smaller images help in this regard, and both Alpine Linux and Liberica JDK Lite (combined in images like bellsoft/liberica-openjdk-alpine-musl) reduce image size dramatically (base image under 100 MB on disk). We will set up Fluent Bit as a DaemonSet to send logs to CloudWatch Logs.
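As a generic illustration of the DaemonSet concept (the actual Fluent Bit DaemonSet is deployed from the AWS manifests in the steps below), a minimal DaemonSet that runs one pod per node and mounts the host's log directory could look like this; names and image are examples only:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector-demo          # example name
spec:
  selector:
    matchLabels:
      app: log-collector-demo
  template:
    metadata:
      labels:
        app: log-collector-demo
    spec:
      containers:
        - name: collector
          # slim Alpine/Liberica base image mentioned above, used as an example
          image: bellsoft/liberica-openjdk-alpine-musl
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

The hostPath mount is what lets a per-node collector read the container log files listed above.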

  • We need to grant IAM permissions to the Kubernetes worker nodes so that metrics and logs can be sent to CloudWatch. This is accomplished by attaching a policy to the IAM roles of the worker nodes. Please note that this is a cluster administration task done once during cluster setup, not a daily task. First, the IAM roles of the worker nodes (EC2 instances) are selected:

    [Screenshot: selecting the IAM roles of the worker nodes]

    Then the policy “CloudWatchAgentServerPolicy” is attached to the IAM roles:

    [Screenshot: attaching the CloudWatchAgentServerPolicy]

  • Create a namespace “amazon-cloudwatch” with the following command:

      kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cloudwatch-namespace.yaml --kubeconfig ~/.kube/config
    
    
  • Create a ConfigMap named “fluent-bit-cluster-info” with the following command:

      ClusterName=microservices
      RegionName=eu-central-1
      FluentBitHttpPort='2020'
      FluentBitReadFromHead='Off'
      [[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'
      [[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
      kubectl create configmap fluent-bit-cluster-info \
      --from-literal=cluster.name=${ClusterName} \
      --from-literal=http.server=${FluentBitHttpServer} \
      --from-literal=http.port=${FluentBitHttpPort} \
      --from-literal=read.head=${FluentBitReadFromHead} \
      --from-literal=read.tail=${FluentBitReadFromTail} \
      --from-literal=logs.region=${RegionName} -n amazon-cloudwatch \
      --kubeconfig ~/.kube/config
    

    Here, the Fluent Bit HTTP server is enabled because FluentBitHttpPort is set. In addition, Fluent Bit reads log files from the tail, so it will capture only the logs generated after its deployment.

  • Download and deploy the Fluent Bit DaemonSet using the following command:

      kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml --kubeconfig ~/.kube/config
    

    The above steps create the following resources in the cluster:

    1. A service account named Fluent-Bit in the amazon-cloudwatch namespace. It is used to run the Fluent Bit DaemonSet.
    2. A cluster role named Fluent-Bit-role, which grants get, list, and watch permissions on pod logs to the Fluent-Bit service account.
    3. A ConfigMap named Fluent-Bit-config in the amazon-cloudwatch namespace. It contains the configuration to be used by Fluent Bit.
  • Validate the deployment using the following command:

      kubectl get pods -n amazon-cloudwatch --kubeconfig ~/.kube/config
    

    It should show a pod starting with “fluent-bit-*” for each node. For our EKS cluster, the following response is returned:

      NAME               READY   STATUS    RESTARTS   AGE
      fluent-bit-8xdlg   1/1     Running   0          11m
      fluent-bit-rmbw6   1/1     Running   0          11m
    

In the AWS console, the fluent-bit DaemonSet is shown in the EKS cluster:

[Screenshot: fluent-bit DaemonSet in the EKS cluster]

Now, we can check if Fluent Bit is correctly configured by going to the CloudWatch console. Please make sure that the region in the CloudWatch console matches the region of the EKS cluster (eu-central-1 in our case). Three log groups are available, as shown below:

[Screenshot: CloudWatch log groups]

We can inspect the /aws/containerinsights/microservices/application log group, which contains all the log events of the microservices.

We can filter the log events and have a look at the aggregated log. For our e-commerce app, creating an order with an invalid customer ID returns “500 Internal Server Error”.

We can easily see the error logs for this event as shown below:

[Screenshot: error logs in CloudWatch]
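Such errors can also be queried with CloudWatch Logs Insights against the application log group; here is a sketch of a query (the log field name depends on the Fluent Bit record format):

```
fields @timestamp, log
| filter log like /500 Internal Server Error/
| sort @timestamp desc
| limit 20
```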

Monitoring with CloudWatch Container Insights

Monitoring cloud-native microservices is a must for production deployments. There are many tools to monitor a Kubernetes deployment. Amazon CloudWatch is a monitoring service for AWS resources and applications, and it offers Container Insights to monitor, troubleshoot, and set alarms for Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS).

The CloudWatch Container Insights dashboard gives access to the following information:

  • CPU and memory utilization
  • Task and service counts
  • Read/write storage
  • Network Rx/Tx
  • Container instance counts for clusters, services, and tasks

To enable CloudWatch Container Insights, we need to deploy the CloudWatch agent together with Fluent Bit in our EKS cluster using the following command:

ClusterName=microservices
RegionName=eu-central-1
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' | kubectl apply -f - --kubeconfig ~/.kube/config

We can now visit the CloudWatch Container Insights dashboard for performance monitoring of our EKS cluster:

[Screenshot: CloudWatch Container Insights dashboard]

Summary

As can be seen, Java is the perfect fintech programming language. In this article, our e-commerce microservices became a real production application with a public domain, monitoring, and log collection. We utilized AWS Route 53 for public domain configuration, AWS Certificate Manager for managing the SSL/TLS certificate, Fluent Bit for log collection, and CloudWatch Container Insights as our Java monitoring tool. Furthermore, there is an administrative step to assign the IAM roles for the CloudWatch agent configuration, which is a one-time task. With CloudWatch Container Insights monitoring, we can monitor and set alarms for application events, server events, and so on.

However, distributed tracing and fine-grained JVM monitoring are not covered by logging and monitoring alone. For distributed tracing, we need to integrate tracing tools such as Zipkin or Jaeger. For JVM monitoring, we need a tool such as Prometheus, which supports JMX. And as Java is the perfect fintech programming language, Liberica JDK is the perfect Java Development Kit. Try it for free and see for yourself!


Md Kamaruzzaman

Software Architect, Special for BellSoft

 Twitter

 LinkedIn

 Blog