Take Deployment by the Horns! Guide to Building a Java Microservices E-Commerce App with Liberica JDK. Part 2


Published June 16, 2021


Welcome back to developing cloud-native applications based on the microservice architecture. In the first part, we introduced the goal: to build a sample e-commerce Java app. There we discussed domain-driven (as opposed to event-driven) design, the structure, and various open source tools. Everything was prepared for deployment in the cloud. Now it’s time to containerize, publish, and test the program. As always, you can copy and paste the snippets below to follow along.

Contents

  1. Set Up Cloud-Native Infrastructure
  2. Containerize Microservice
  3. Publish Container
  4. Deploy Container in EKS
  5. Test Application
    1. Create a Customer
    2. Create an Order
    3. Delete an Order
  6. Cleanup
  7. Conclusion

Set Up Cloud-Native Infrastructure

As mentioned in my previous article, our little case study features microservices tightly coupled with Java, so we cannot afford the luxury of non-Java libraries that address common issues such as load balancing and fault tolerance. This coupling might become a problem for our future selves when migrating the application to another programming language or platform. Luckily, the issue can be solved at the infrastructure level, namely with Docker and Kubernetes. This part will focus on making our e-commerce software cloud-native.

Spring Boot offers Cloud Native Buildpacks support that generates a Docker image from a Spring Boot project. The buildpacks use BellSoft Liberica JDK as the default JVM for Java applications.

I will also deploy the application in the public cloud. Amazon Web Services is the number one public cloud provider with a 32% market share. One way to deploy a Spring Boot application or a single microservice in the AWS cloud is to use Docker to create containers in the OCI image format, Amazon ECR as a managed container registry (Docker repository), and Amazon EKS as a managed Kubernetes service.

Another option is AWS Elastic Beanstalk, which offers a Platform as a Service (PaaS) for deploying web applications in the AWS cloud in a managed, auto-scalable way; this post won’t touch on it, though.

Unfortunately, for the purposes of a quick demo, we cannot elaborate on other aspects of cloud computing, such as continuous integration/continuous delivery, the classical Java EE way of thinking realized in MicroProfile, building a complex distributed system, and others.

If you would like to learn more about this topic, we recommend Introduction to Cloud-Native Java, a DZone Refcard by BellSoft. For more advanced approaches, such as using the native image technology and Spring Native, watch Spring Boot: production-ready, efficient, fast: pick three, a presentation by Andrew Clement and Josh Long.

Containerize Microservice

Containerization is the first and mandatory step in deploying a microservice application in the cloud. A container is essentially a virtualization mechanism at the operating system level. Many containerization technologies exist on the market, but Docker is the most widely used.

You may build a Docker image by defining a Dockerfile that describes the layers of the image. The downside of this approach is that writing a Dockerfile requires a good understanding of Docker itself. To simplify containerization, Cloud Native Buildpacks by Pivotal transform your source code into an OCI (Open Container Initiative) compliant Docker image without the need for a Dockerfile. The container image can then be deployed in any modern cloud.

Paketo.io is a Cloud Foundry project and one of the most popular implementations of Cloud Native Buildpacks. It can transform the source code of major programming languages into a container image.

Spring Boot natively supports these buildpacks and creates an image with BellSoft Liberica JDK. It reads the build.gradle file and the Spring configuration file to build the Docker image.
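If you want to control the resulting image coordinates or the Java version, the bootBuildImage Gradle task can be configured directly. Here is a minimal sketch; the values are assumptions matching our demo, not taken verbatim from the project:

bootBuildImage {
    // Name of the resulting OCI image; defaults to the project name and version
    imageName = "docker.io/library/microservice-customer:1.0.0"
    // Ask the buildpack for a Java 11 runtime (BP_JVM_VERSION also appears in the build log below)
    environment = ["BP_JVM_VERSION": "11.*"]
}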

Let’s first tackle the Customer microservice. Here is the application.yml file for it:

spring:
  application:
    name: microservice-customer

  data:
    mongodb:
      uri: mongodb+srv://mkmongouser:<Password>@clustermicroservice.fzatn.mongodb.net
      database: customer

server:
  port: 8080
  servlet:
    context-path: /customer
Now, build the container image with the Spring Boot Gradle plugin:

> sudo ./gradlew bootBuildImage

> Task :bootBuildImage
Building image 'docker.io/library/microservice-customer:1.0.0'

 > Pulling builder image 'docker.io/paketobuildpacks/builder:base' ..................................................
 > Pulled builder image 'paketobuildpacks/builder@sha256:35e29183d1aec1b4d79ebec4fb47ef309dc4e803e2706b5a7336d8ebe68053e8'
 > Pulling run image 'docker.io/paketobuildpacks/run:base-cnb' ..................................................
 > Pulled run image 'paketobuildpacks/run@sha256:d968d1e9827704283bdfd678d9cb2b85d6e0bd826b0cb1f14bbceb5bb6e0f571'
 > Executing lifecycle version v0.10.2
 > Using build cache volume 'pack-cache-9d89fc6213b6.build'

 > Running creator
	[creator] 	===> DETECTING
	[creator] 	5 of 18 buildpacks participating
	[creator] 	paketo-buildpacks/ca-certificates   2.0.0
	[creator] 	paketo-buildpacks/bellsoft-liberica 7.0.0
	[creator] 	paketo-buildpacks/executable-jar	4.0.0
	[creator] 	paketo-buildpacks/dist-zip      	3.0.0
	[creator] 	paketo-buildpacks/spring-boot   	4.0.0
	[creator] 	===> ANALYZING
	[creator] 	Restoring metadata for "paketo-buildpacks/ca-certificates:helper" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/bellsoft-liberica:jvmkill" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/bellsoft-liberica:helper" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/bellsoft-liberica:java-security-properties" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/bellsoft-liberica:jre" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/executable-jar:class-path" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/spring-boot:spring-cloud-bindings" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/spring-boot:web-application-type" from app image
	[creator] 	Restoring metadata for "paketo-buildpacks/spring-boot:helper" from app image
	[creator] 	===> RESTORING
	[creator] 	===> BUILDING
	[creator]
	[creator] 	Paketo CA Certificates Buildpack 2.0.0
	[creator]   	https://github.com/paketo-buildpacks/ca-certificates
	[creator]   	Launch Helper: Reusing cached layer
	[creator]
	[creator] 	Paketo BellSoft Liberica Buildpack 7.0.0
	[creator]   	https://github.com/paketo-buildpacks/bellsoft-liberica
	[creator]   	Build Configuration:
	[creator]     	$BP_JVM_VERSION          	11.*        	the Java version
	[creator]   	Launch Configuration:
	[creator]     	$BPL_JVM_HEAD_ROOM       	0           	the headroom in memory calculation
	[creator]     	$BPL_JVM_LOADED_CLASS_COUNT  35% of classes  the number of loaded classes in memory calculation
	[creator]     	$BPL_JVM_THREAD_COUNT    	250         	the number of threads in memory calculation
	[creator]     	$JAVA_TOOL_OPTIONS                       	the JVM launch flags
	[creator]   	BellSoft Liberica JRE 11.0.10: Reusing cached layer
	[creator]   	Launch Helper: Reusing cached layer
	[creator]   	JVMKill Agent 1.16.0: Reusing cached layer
	[creator]   	Java Security Properties: Reusing cached layer
	[creator]
	[creator] 	Paketo Executable JAR Buildpack 4.0.0
	[creator]   	https://github.com/paketo-buildpacks/executable-jar
	[creator]   	Process types:
	[creator]     	executable-jar: java org.springframework.boot.loader.JarLauncher (direct)
	[creator]     	task:       	java org.springframework.boot.loader.JarLauncher (direct)
	[creator]     	web:        	java org.springframework.boot.loader.JarLauncher (direct)
	[creator]
	[creator] 	Paketo Spring Boot Buildpack 4.0.0
	[creator]   	https://github.com/paketo-buildpacks/spring-boot
	[creator]   	Creating slices from layers index
	[creator]     	dependencies
	[creator]     	spring-boot-loader
	[creator]     	snapshot-dependencies
	[creator]     	application
	[creator]   	Launch Helper: Reusing cached layer
	[creator]   	Web Application Type: Reusing cached layer
	[creator]   	Spring Cloud Bindings 1.7.0: Reusing cached layer
	[creator]   	4 application slices
	[creator]   	Image labels:
	[creator]     	org.springframework.boot.spring-configuration-metadata.json
	[creator]     	org.springframework.boot.version
	[creator] 	===> EXPORTING
	[creator] 	Reusing layer 'paketo-buildpacks/ca-certificates:helper'
	[creator] 	Reusing layer 'paketo-buildpacks/bellsoft-liberica:helper'
	[creator] 	Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
	[creator] 	Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
	[creator] 	Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
	[creator] 	Reusing layer 'paketo-buildpacks/executable-jar:class-path'
	[creator] 	Reusing layer 'paketo-buildpacks/spring-boot:helper'
	[creator] 	Reusing layer 'paketo-buildpacks/spring-boot:spring-cloud-bindings'
	[creator] 	Reusing layer 'paketo-buildpacks/spring-boot:web-application-type'
	[creator] 	Reusing 5/5 app layer(s)
	[creator] 	Reusing layer 'launcher'
	[creator] 	Reusing layer 'config'
	[creator] 	Reusing layer 'process-types'
	[creator] 	Adding label 'io.buildpacks.lifecycle.metadata'
	[creator] 	Adding label 'io.buildpacks.build.metadata'
	[creator] 	Adding label 'io.buildpacks.project.metadata'
	[creator] 	Adding label 'org.springframework.boot.spring-configuration-metadata.json'
	[creator] 	Adding label 'org.springframework.boot.version'
	[creator] 	Setting default process type 'web'
	[creator] 	*** Images (2fbe378af6cb):
	[creator]       	docker.io/library/microservice-customer:1.0.0

Successfully built image 'docker.io/library/microservice-customer:1.0.0'

If you look at the output, it is evident that in the first step the build runs the Paketo BellSoft Liberica Buildpack 7.0.0, which downloads the BellSoft Liberica JDK and JRE (version 11, as defined in the build.gradle file) from GitHub.

It thus provides BellSoft Liberica JRE 11.0.10 as the JVM runtime layer of the Docker image.

Next, the command runs the Paketo Spring Boot Buildpack 4.0.0. It creates the layers for the application, the dependencies, and the spring-boot-loader module.

Finally, it creates the Docker image. For our demo, the Customer microservice container image is docker.io/library/microservice-customer:1.0.0

Then we create one for the Order microservice by going through the same steps: docker.io/library/microservice-order:1.0.0

We can run the microservices with a Docker command — here’s one for the Customer microservice:

> docker run -d -p 8080:8080 docker.io/library/microservice-customer:1.0.0

If the container started successfully, you’ll see the container ID as follows:

c1c3925595047ff5744507dd0abc05b1063f7dc884c3e2a4c9864633a2780d2b

Now, the following lines will appear in the container log file:

> docker logs c1c3925595047ff5744507dd0abc05b1063f7dc884c3e2a4c9864633a2780d2b

2021-03-03 22:01:21.116  INFO 1 --- [ngodb.net:27017] org.mongodb.driver.cluster           	: Setting max set version to 2 from replica set primary clustermicroservice-shard-00-01.fzatn.mongodb.net:27017
2021-03-03 22:01:21.116  INFO 1 --- [ngodb.net:27017] org.mongodb.driver.cluster           	: Discovered replica set primary clustermicroservice-shard-00-01.fzatn.mongodb.net:27017
2021-03-03 22:01:21.184  INFO 1 --- [       	main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2021-03-03 22:01:21.240  INFO 1 --- [       	main] o.s.b.a.w.s.WelcomePageHandlerMapping	: Adding welcome page: class path resource [static/index.html]
2021-03-03 22:01:21.383  INFO 1 --- [       	main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path '/customer'

Please note that we are publishing the container port (-p 8080:8080) so that we can connect to the containerized microservice from outside (e.g., from the local machine).
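You can also confirm that the container is up with docker ps; here it is filtered by the container ID from above:

> docker ps --filter "id=c1c3925595047ff5744507dd0abc05b1063f7dc884c3e2a4c9864633a2780d2b"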

It is possible to check whether the container is running by sending a GET request to its liveness or “health” endpoint as described in detail in Part 1:

curl --location --request GET 'http://localhost:8080/customer/api/v1/health'

It returns the following response:

< {"status":"UP"}

Publish Container

Next, the container image should be published to a registry. To this end, we’ll use Amazon ECR, a managed container registry for storing, sharing, and deploying containers in the AWS Cloud.

First, we should install and configure the AWS Command Line Interface on our local machine using the steps defined in the AWS CLI v2 installation guide. I have also configured the CLI with an access key ID and secret access key as described in the Configuration and credential file settings from the same source.

Now, for each microservice container image, we need to create an ECR repository. Please note the repository name should exactly match the container image repository name.

Here’s the command to create a repository in ECR for the Customer container image:

> aws ecr create-repository --repository-name microservice-customer

It will create a repository for our Customer microservice and return the following output:

{
	"repository": {
    	"repositoryArn": "arn:aws:ecr:eu-central-1:877546708265:repository/microservice-customer",
    	"registryId": "877546708265",
    	"repositoryName": "microservice-customer",
    	"repositoryUri": "877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer",
    	"createdAt": "2021-03-04T00:18:33+01:00",
    	"imageTagMutability": "MUTABLE",
    	"imageScanningConfiguration": {
        	"scanOnPush": false
    	},
    	"encryptionConfiguration": {
        	"encryptionType": "AES256"
    	}
	}
}

In the next step, we need to tag our local Docker image with the ECR registry, repository, and (optionally) a tag. For this purpose, we need the image ID of our local Customer microservice container. The following command will give detailed info about the Docker image:

> docker image ls microservice-customer:1.0.0

It will return the following output:

REPOSITORY              TAG     IMAGE ID       CREATED        SIZE
microservice-customer   1.0.0   652da8e2130b   41 years ago   274MB

(The "41 years ago" creation date is not an error: buildpacks pin the image creation time to a fixed date so that builds are reproducible.)

Tag our Docker image with the AWS ECR registry and repository:

> docker tag 652da8e2130b 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0

Before publishing the Docker image to ECR, we need to authenticate our Docker client there. The authentication token is valid for 12 hours.

> aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 877546708265.dkr.ecr.eu-central-1.amazonaws.com

You will get a similar message:

WARNING! Your password will be stored unencrypted in /home/$USER_NAME/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Now, you can push the Docker image to AWS ECR with the following command:

> docker push 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0

Depending on your network speed, it can take up to several minutes.

You can now check your pushed image in the ECR repository:

[Screenshot: the pushed image in the ECR repository]
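Alternatively, the AWS CLI can list what landed in the repository; a quick check:

> aws ecr describe-images --repository-name microservice-customer --region eu-central-1

The output should contain the image digest and the 1.0.0 tag.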

We can also pull the image from ECR with the following command to test whether the image was correctly uploaded to the repository:

> docker pull 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0

If everything is fine, it will generate the following output:

1.0.0: Pulling from microservice-customer
Digest: sha256:555b582b3353f9657ee6f28b35923c8d43b5b5d4ab486db896539da51b4f971a
Status: Image is up to date for 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0

Deploy Container in EKS

Kubernetes is the de facto container orchestration infrastructure. It is an open source system initially developed by Google and now backed by the whole industry. It facilitates the deployment, scaling, and management of containerized applications.

Still, Kubernetes requires real operational effort, and managed Kubernetes is the better approach if you want to focus entirely on code. For our cloud-native development use case, I will turn to Amazon Elastic Kubernetes Service (EKS), which lets you start, run, and scale Kubernetes applications in the AWS cloud or on-premises.

We need to install the eksctl command-line utility to manage the EKS cluster, as well as the Kubernetes command-line tool kubectl.
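Both tools install with a couple of commands on Linux; here is a sketch following the official installation guides (download URLs may change over time, so check the eksctl and kubectl documentation for your platform):

> curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
> sudo mv /tmp/eksctl /usr/local/bin
> curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
> chmod +x kubectl && sudo mv kubectl /usr/local/bin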

Now, we can create an EKS cluster using the eksctl command:

> eksctl create cluster \
--name microservices \
--region eu-central-1 \
--node-type t2.small \
--nodes 2

It will make a cluster named “microservices” with two worker nodes of type “t2.small” in the “eu-central-1” region.

In the background, eksctl uses CloudFormation to create the cluster, which usually takes 10–15 minutes. You can view this process:

[Screenshot: CloudFormation stack creation in progress]

After the cluster creation is complete, you’ll get the following output:

[✔]  saved kubeconfig as "/home/<user>/.kube/config"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "microservices" have been created
[ℹ]  adding identity "arn:aws:iam::877546708265:role/eksctl-microservices-nodegroup-ng-NodeInstanceRole-9PQCLZR7NSYS" to auth ConfigMap
[ℹ]  nodegroup "ng-3e8fb16c" has 0 node(s)
[ℹ]  waiting for at least 2 node(s) to become ready in "ng-3e8fb16c"
[ℹ]  nodegroup "ng-3e8fb16c" has 2 node(s)
[ℹ]  node "ip-192-168-77-195.eu-central-1.compute.internal" is ready
[ℹ]  node "ip-192-168-9-13.eu-central-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/home/<user>/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "microservices" in "eu-central-1" region is ready

From the output, it is evident that eksctl has created two nodes and one node group. It has also saved the kubeconfig file to ~/.kube/config. If you already use minikube or microk8s, make sure to pass this file explicitly as the --kubeconfig parameter in the kubectl commands below.

Once the cluster is ready, you can check it with the following command:

> kubectl get nodes --kubeconfig ~/.kube/config

It will return as follows:

NAME                                          	STATUS	ROLES	AGE	VERSION
ip-192-168-77-195.eu-central-1.compute.internal	Ready	<none>	10m	v1.18.9-eks-d1db3c
ip-192-168-9-13.eu-central-1.compute.internal 	Ready	<none>	10m	v1.18.9-eks-d1db3c

You can also see its details in AWS Console’s EKS Clusters:

[Screenshot: EKS cluster details in the AWS Console]

Let’s move on. Define the Kubernetes deployment file, eks-deployment.yaml, to deploy the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: microservice-customer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microservice-customer
  template:
    metadata:
      labels:
        app: microservice-customer
    spec:
      containers:
        - name: microservice-customer-container
          image: 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
          ports:
            - containerPort: 8080

Here we have defined the Kubernetes deployment; the load balancer will be added as a separate Kubernetes service below.

We are now ready to deploy our application in Kubernetes with the following command:

> kubectl apply -f eks-deployment.yaml --kubeconfig ~/.kube/config

You will have the response:

deployment.apps/microservice-deployment created

Check the status of the pods:

> kubectl get pods --kubeconfig ~/.kube/config

This command will show the following output with the pod status as running:

NAME                                   	   READY  STATUS          RESTARTS      AGE
microservice-deployment-597bd7749b-wcfsz   1/1	  Running	  0	      	13m

We can also check the log file of the pod with the following:

> kubectl logs microservice-deployment-597bd7749b-wcfsz --kubeconfig ~/.kube/config

It should show a log message like this one:

2021-03-04 00:09:50.848  INFO 1 --- [ngodb.net:27017] org.mongodb.driver.cluster           	: Discovered replica set primary clustermicroservice-shard-00-01.fzatn.mongodb.net:27017
2021-03-04 00:09:52.778  INFO 1 --- [       	main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2021-03-04 00:09:53.088  INFO 1 --- [       	main] o.s.b.a.w.s.WelcomePageHandlerMapping	: Adding welcome page: class path resource [static/index.html]
2021-03-04 00:09:53.547  INFO 1 --- [       	main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path '/customer'
2021-03-04 00:09:54.602  INFO 1 --- [       	main] o.m.m.customer.CustomerApplication   	: Started CustomerApplication in 12.229 seconds (JVM running for 13.465)

From this, we can see that the Customer microservice successfully connected to MongoDB Atlas and started on port 8080.

Although our Customer microservice is deployed correctly in the EKS cluster, it is still not reachable from outside. We need to create a Kubernetes Service of type LoadBalancer, which will expose an external IP address and make our deployed pods available from outside. Here is the definition of the service, eks-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: microservice-customer-service
spec:
  # Creating a service of type LoadBalancer. The load balancer gets created but takes time to reflect
  type: LoadBalancer
  selector:
    app: microservice-customer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Please note that the targetPort should be the same as the containerPort defined in the deployment description (in our case 8080).
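As a side note: if you only need quick access to a pod without creating a load balancer, kubectl can forward a local port to the deployment directly. This is a convenience sketch, not part of the deployment flow:

> kubectl port-forward deployment/microservice-deployment 8080:8080 --kubeconfig ~/.kube/config

With the forward active, the same health check as before works against http://localhost:8080/customer/api/v1/health.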

We can deploy our Service into AWS with the following command:

> kubectl apply -f eks-service.yaml --kubeconfig ~/.kube/config

It will return as follows:

service/microservice-customer-service created

The Kubernetes service will be mapped to an Elastic Load Balancer (ELB) in AWS. Now, we can check the external IP address of the service with the following command:

> kubectl get svc  --kubeconfig ~/.kube/config

It should return the following:

NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
kubernetes                      ClusterIP      10.100.0.1     <none>                                                                        443/TCP        39m
microservice-customer-service   LoadBalancer   10.100.94.75   aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com   80:32248/TCP   21s

From the above response, we can see that the load balancer is available with the external IP address: aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com

Moreover, we can check the ELB of AWS:

[Screenshot: the load balancer in the AWS EC2 console]

Please note that the DNS name in the ELB is the same as the external IP address of the service previously received.

Opening a browser at the external IP address of the service at this point will give you the following:

[Screenshot: the Customer microservice welcome page in the browser]
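You can also hit the health endpoint through the load balancer from the command line; the path is the same one we used against the local container:

curl --location --request GET 'http://aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com/customer/api/v1/health'

If everything is wired correctly, it returns {"status":"UP"} again.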

Similarly, we can deploy our Order microservice in the EKS cluster by repeating the steps above: create the Docker image, publish it to ECR, and deploy it in EKS.

For that, first, we need to put the ELB endpoint of the Customer microservice in the application.yml file of the Order microservice:

spring:
  application:
    name: microservice-order

    microservice-customer:
      url: http://aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com/customer/api/v1/

  data:
    mongodb:
      uri: mongodb+srv://mkmongouser:<Password>@clustermicroservice.fzatn.mongodb.net
      database: order

server:
  port: 8080
  servlet:
    context-path: /order

As the steps are identical to the Customer microservice, I will discuss them in brief (only commands):

  1. Build a Docker image of the Order microservice:

     > sudo ./gradlew bootBuildImage
    
  2. Create a repository in ECR for the Order Microservice:

     > aws ecr create-repository --repository-name microservice-order
    
  3. Tag the Docker image of the Order Microservice:

     > docker tag 03343db51934 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-order:1.0.0
    
  4. Publish the Docker image to AWS ECR:

     > docker push 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-order:1.0.0
    
  5. Check whether the image was uploaded to ECR successfully:

     > docker pull 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-order:1.0.0
    
  6. Deploy the microservice-order:1.0.0 Docker image in Kubernetes:

    a. eks-deployment.yaml:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: microservice-order-deployment
       labels:
         app: microservice-order
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: microservice-order
       template:
         metadata:
           labels:
             app: microservice-order
         spec:
           containers:
             - name: microservice-order-container
               image: 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-order:1.0.0
               imagePullPolicy: Always
               ports:
                 - containerPort: 8080
    

    b. Deployment command:

     > kubectl apply -f eks-deployment.yaml --kubeconfig ~/.kube/config
    
  7. Deploy in Elastic Kubernetes Service:

    a. eks-service.yaml:

     apiVersion: v1
     kind: Service
     metadata:
       name: microservice-order-service
     spec:
       # Creating a service of type LoadBalancer. The load balancer gets created but takes time to reflect
       type: LoadBalancer
       selector:
         app: microservice-order
       ports:
         - protocol: TCP
           port: 80
           targetPort: 8080
    

    b. kubectl command:

     > kubectl apply -f eks-service.yaml --kubeconfig ~/.kube/config
    
  8. Get the load balancer (ELB) External IP (DNS Address):

     > kubectl get svc --kubeconfig ~/.kube/config
    
  9. Check the application in your browser of choice:

[Screenshot: the Order microservice welcome page in the browser]

Test Application

Seeing that both our microservices are up and running, we should run an end-to-end test to verify that everything works seamlessly.

Here I will use the most popular API client, Postman.

Create a Customer

This is how you create a Customer with Postman:

[Screenshot: POST request creating a Customer in Postman]

The procedure will create a customer with the ID 60403e1a01d12756ba730f0d in the MongoDB database and return the following response:

201: Created.

Notice the newly created Customer ID; we’ll need it to create an order.
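If you prefer the command line to Postman, the same call can be made with curl. This is a sketch, assuming the Customer API accepts a POST at its base path; the JSON field names here are illustrative, not the exact schema from Part 1:

curl --location --request POST 'http://aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com/customer/api/v1/' \
--header 'Content-Type: application/json' \
--data-raw '{"firstName": "John", "lastName": "Doe", "email": "john.doe@example.com"}'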

Create an Order

Here is the Postman window for creating an Order:

[Screenshot: POST request creating an Order in Postman]

This will create an Order in the Order microservice as shown by the response in Postman. Please take a look at the Order ID:

[Screenshot: the Order ID in the Postman response]

As I’ve already described, when we create an Order, the Order microservice notifies the Customer microservice via REST calls. The Customer microservice will, in turn, update its Customer entity in the database.

As a result, fetching the Customer from the Customer microservice shows the created Order:

[Screenshot: GET request fetching the Customer, which now contains the created Order]

Delete an Order

You can delete the created Order as follows:

[Screenshot: DELETE request removing the Order in Postman]

It will return a 204 response.
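In curl terms, the delete is a single request. A sketch, assuming the Order API mirrors the Customer API conventions; <order-elb> stands for the Order service’s own load balancer address, and <orderId> is the ID returned at creation:

curl --location --request DELETE 'http://<order-elb>/order/api/v1/<orderId>'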

Now, if we fetch the Customer again, we’ll see an empty order collection:

[Screenshot: the Customer with an empty order collection]

Cleanup

Running the AWS EKS cluster incurs costs: the EKS control plane (master nodes), the worker nodes, the load balancers, and the node group are all billed. In a production environment, we would let them run 24/7, but for our testing purposes it is better to clean up the resources created for the EKS cluster.

We can clean up the resources with the following command:

> eksctl delete cluster --name microservices
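Deletion also runs through CloudFormation and takes a few minutes. A quick way to verify that everything is gone (the cluster should no longer be listed):

> eksctl get cluster --region eu-central-1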

Conclusion

The application is complete! We have deployed the code compiled in Part 1 to Kubernetes with EKS. This demo is tested, cleaned up, and ready for the final installment of our three-part series on building cloud-native Java microservices. If you’d like to see the complete project, head over to my GitHub: here are repos for both the Customer and Order microservices.

Next time we’ll deal with everything related to monitoring. Stay tuned for valuable advice about JFR streaming in the cloud. We’ll look at this simple app’s performance and learn how to handle failure incidents.


Md Kamaruzzaman

Software Architect, Special for BellSoft
