A guide to building Java microservices. Part 3
Welcome back to our series on building Java microservices! In part one, we created two working microservices, Customer and Order. In part two, we containerized our application using Docker. Now it’s time to send our Docker containers into the cloud!
Publish containers
To publish our Docker containers to a registry, we’ll use Amazon ECR, a managed container registry to store, share, and deploy containers in the AWS Cloud.
First, install and configure the AWS Command Line Interface on your local machine, following the steps in the AWS CLI v2 installation guide. I have configured the CLI with an access key ID and secret access key as described in the Configuration and credential file settings section of the same documentation.
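If you haven't set up credentials yet, the quickest way is the interactive aws configure command. The values below are placeholders for your own keys, with the region this series uses:
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: eu-central-1
# Default output format [None]: json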
Now we need to create an ECR repository for each microservice container image. Note that the repository name must exactly match the container image's repository name.
Here’s the command to create a repository in ECR for the Customer container image:
aws ecr create-repository --repository-name microservice-customer
It will create a repository for our Customer microservice and return the following output:
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:eu-central-1:877546708265:repository/microservice-customer",
        "registryId": "877546708265",
        "repositoryName": "microservice-customer",
        "repositoryUri": "877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer",
        "createdAt": "2021-03-04T00:18:33+01:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}
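Since each image needs its own repository, run the same command once more for the Order microservice:
aws ecr create-repository --repository-name microservice-order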
In the next step, we need to tag our local Docker image with the ECR registry, repository, and (optionally) an image tag. For this, we need the Docker image ID of our local Customer microservice container. The next command will give detailed info about the Docker image:
docker image ls microservice-customer:1.0.0
It will return the following output:
REPOSITORY              TAG      IMAGE ID       CREATED        SIZE
microservice-customer   1.0.0    652da8e2130b   41 years ago   274MB
Tag our Docker image with the AWS ECR registry and repository:
docker tag 652da8e2130b 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
Before publishing the Docker image to ECR, we need to authenticate our Docker client with the registry. The authentication token is valid for 12 hours.
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 877546708265.dkr.ecr.eu-central-1.amazonaws.com
You will get a message similar to this:
WARNING! Your password will be stored unencrypted in /home/$USER_NAME/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Now, you can push the Docker image to AWS ECR with
docker push 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
Depending on your network speed, this can take several minutes.
You can now see the pushed image in the ECR console.
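If you prefer the terminal to the console, the AWS CLI can list the images in the repository as well:
aws ecr list-images --repository-name microservice-customer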
To verify that the image was uploaded correctly to the repository, we can also pull it back from ECR with the following command:
docker pull 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
If everything is fine, it will produce the following output:
1.0.0: Pulling from microservice-customer
Digest: sha256:555b582b3353f9657ee6f28b35923c8d43b5b5d4ab486db896539da51b4f971a
Status: Image is up to date for 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
Deploy containers in EKS
Set up a database connection
For your microservices to connect to a database, you need to create a MongoDB cluster. Sign up at MongoDB Atlas and, after registration, create a free Shared cluster: choose the cloud provider (AWS) and a region, then press the “Create Cluster” button. After that, create a database user with a username and password. Once the cluster is provisioned, it will show up on your Atlas dashboard.
Now, you need to generate the connection string for the Atlas cluster so that our Spring Boot microservices can connect to it. Press “Connect” and select “Connect your application”. Note that the connection string depends on the programming language and your MongoDB version; follow the on-screen instructions to establish your connection.
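For reference, here is roughly what the connection string ends up looking like in the Customer microservice's application.yml. This is a sketch mirroring the Order configuration shown later in this article; the database name customer is my assumption, and <Password> stands for the password of the database user you just created:
spring:
  application:
    name: microservice-customer
  data:
    mongodb:
      uri: mongodb+srv://mkmongouser:<Password>@clustermicroservice.fzatn.mongodb.net
      database: customer
server:
  port: 8080
  servlet:
    context-path: /customer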
Deploy your Docker container
Kubernetes is the de facto standard for container orchestration. It is an open-source system, initially developed by Google and now backed by the whole industry, that automates the deployment, scaling, and management of containerized applications. Kubernetes works perfectly with Docker Desktop, where it is included as a standalone server and client.
Running Kubernetes yourself still requires considerable operational effort, so managed Kubernetes is a better approach if you want to focus entirely on code. For our cloud-native development use case, I will turn to Amazon Elastic Kubernetes Service (EKS), which enables developers to start, run, and scale Kubernetes applications in the AWS Cloud or on-premises.
We need to install the eksctl command-line utility to manage the EKS cluster, as well as the Kubernetes command-line tool kubectl.
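On Linux, for example, the official installation guides boil down to commands like these; check the eksctl and kubectl installation pages for your platform and the current download URLs:
# eksctl: download the latest release and put it on the PATH
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
# kubectl: download the latest stable client binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client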
Now, we can create an EKS cluster using the eksctl command:
eksctl create cluster \
--name microservices \
--region eu-central-1 \
--node-type t2.small \
--nodes 2
It will create a cluster named “microservices” with two worker nodes of type “t2.small” in the region “eu-central-1”.
In the background, eksctl uses CloudFormation to create the cluster, which usually takes 10–15 minutes. After the cluster creation is complete, you’ll get the following output:
[✔] saved kubeconfig as "/home/<user>/.kube/config"
[ℹ] no tasks
[✔] all EKS cluster resources for "microservices" have been created
[ℹ] adding identity "arn:aws:iam::877546708265:role/eksctl-microservices-nodegroup-ng-NodeInstanceRole-9PQCLZR7NSYS" to auth ConfigMap
[ℹ] nodegroup "ng-3e8fb16c" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng-3e8fb16c"
[ℹ] nodegroup "ng-3e8fb16c" has 2 node(s)
[ℹ] node "ip-192-168-77-195.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-9-13.eu-central-1.compute.internal" is ready
[ℹ] kubectl command should work with "/home/<user>/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "microservices" in "eu-central-1" region is ready
From the output, it is evident that eksctl has created two nodes in one node group. It has also saved the kubectl config file to ~/.kube/config. If you already have minikube or microk8s installed, you have to pass this file explicitly as the --kubeconfig parameter in your kubectl commands.
Once the cluster is ready, you can check it by running
kubectl get nodes --kubeconfig ~/.kube/config
It will return as follows:
NAME                                              STATUS   ROLES    AGE   VERSION
ip-192-168-77-195.eu-central-1.compute.internal   Ready    <none>   10m   v1.18.9-eks-d1db3c
ip-192-168-9-13.eu-central-1.compute.internal     Ready    <none>   10m   v1.18.9-eks-d1db3c
Let’s move on. Define the Kubernetes deployment file, eks-deployment.yaml, to deploy the application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: microservice-customer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microservice-customer
  template:
    metadata:
      labels:
        app: microservice-customer
    spec:
      containers:
        - name: microservice-customer-container
          image: 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-customer:1.0.0
          ports:
            - containerPort: 8080
Here we have defined a Kubernetes Deployment that runs a single replica of the Customer container image from ECR; the load balancer that exposes it will be defined separately as a Service below.
We are now ready to deploy our application in Kubernetes. Run
kubectl apply -f eks-deployment.yaml --kubeconfig ~/.kube/config
You will get the response:
deployment.apps/microservice-deployment created
Check the status of the pods:
kubectl get pods --kubeconfig ~/.kube/config
This command will show the following output with the pod status as running:
NAME                                       READY   STATUS    RESTARTS   AGE
microservice-deployment-597bd7749b-wcfsz   1/1     Running   0          13m
We can also check the log file of the pod with
kubectl logs microservice-deployment-597bd7749b-wcfsz --kubeconfig ~/.kube/config
It should show a log message like this one:
2021-03-04 00:09:50.848 INFO 1 --- [ngodb.net:27017] org.mongodb.driver.cluster : Discovered replica set primary clustermicroservice-shard-00-01.fzatn.mongodb.net:27017
2021-03-04 00:09:52.778 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-03-04 00:09:53.088 INFO 1 --- [ main] o.s.b.a.w.s.WelcomePageHandlerMapping : Adding welcome page: class path resource [static/index.html]
2021-03-04 00:09:53.547 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path '/customer'
2021-03-04 00:09:54.602 INFO 1 --- [ main] o.m.m.customer.CustomerApplication
From this, we can see that the Customer microservice successfully connects to MongoDB Atlas and that its embedded Tomcat has started on port 8080.
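As a quick sanity check before exposing anything publicly, you can forward the pod's HTTP port to your machine and call the service locally. The /customer/api/v1/ path below matches the base URL the Order microservice will use later; your exact endpoints depend on the REST controllers from part one:
kubectl port-forward deployment/microservice-deployment 8080:8080 --kubeconfig ~/.kube/config
# in a second terminal:
curl http://localhost:8080/customer/api/v1/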
Although our Customer microservice is deployed correctly in the EKS cluster, it is still not reachable from outside. We need to create a Kubernetes Service of type LoadBalancer, which will expose an external address and make our deployed pods reachable from outside the cluster. Here is the definition of the Kubernetes service, eks-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: microservice-customer-service
spec:
  # Creating a service of type LoadBalancer. The load balancer gets created but takes time to reflect
  type: LoadBalancer
  selector:
    app: microservice-customer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Please note that the targetPort should be the same as the containerPort defined in the deployment description (in our case, 8080).
We can deploy our Service into AWS with the following command:
kubectl apply -f eks-service.yaml --kubeconfig ~/.kube/config
It will return as follows:
service/microservice-customer-service created
The Kubernetes service is backed by an AWS Elastic Load Balancer (ELB). Now, we can check the external address of the service. Run
kubectl get svc --kubeconfig ~/.kube/config
It should return the following:
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
kubernetes                      ClusterIP      10.100.0.1     <none>                                                                        443/TCP        39m
microservice-customer-service   LoadBalancer   10.100.94.75   aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com   80:32248/TCP   21s
From the above response, we can see that the load balancer is available at the external address aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com.
We can also check the load balancer in the AWS EC2 console. Note that the DNS name of the ELB is the same as the external address of the service we just received.
Opening the external address of the service in a browser (under the /customer context path) at this point will show the welcome page of the Customer microservice.
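You can also verify it from the terminal. The Customer API is served under the /customer context path; this is the same base URL the Order microservice will be configured with below:
curl http://aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com/customer/api/v1/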
Similarly, we can deploy our Order microservice container in the EKS cluster by repeating the steps described above: create the Docker image, publish it to ECR, and deploy it in EKS.
For that, we first need to put the ELB endpoint of the Customer microservice into the application.yml file of the Order microservice:
spring:
  application:
    name: microservice-order
  microservice-customer:
    url: http://aa62f80b9596a4fa6835d80a506227d6-1183908486.eu-central-1.elb.amazonaws.com/customer/api/v1/
  data:
    mongodb:
      uri: mongodb+srv://mkmongouser:<Password>@clustermicroservice.fzatn.mongodb.net
      database: order
server:
  port: 8080
  servlet:
    context-path: /order
Otherwise, the steps are identical to the Customer microservice.
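Condensed into commands, the Order rollout looks like this. The manifest file names are my own; the deployment and service YAML files simply mirror the Customer ones with order substituted for customer:
# the ECR repository was created earlier with: aws ecr create-repository --repository-name microservice-order
docker tag microservice-order:1.0.0 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-order:1.0.0
docker push 877546708265.dkr.ecr.eu-central-1.amazonaws.com/microservice-order:1.0.0
kubectl apply -f eks-deployment-order.yaml --kubeconfig ~/.kube/config
kubectl apply -f eks-service-order.yaml --kubeconfig ~/.kube/config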
Cleanup
Running the EKS cluster will incur costs: you pay for the EKS control plane, the worker nodes in the node group, and the load balancer for as long as they run. In a production environment, we would let them run 24/7, but for our test case, it is better to clean up the resources created for the EKS cluster.
To clean up the resources, run
eksctl delete cluster --name microservices
Conclusion
The application is complete! We have deployed our Docker containers to Kubernetes on EKS. Again, if you’d like to see the complete project, head over to my GitHub: there, you will find the repos for both the Customer and Order microservices.
Next time we’ll deal with everything related to monitoring. Stay tuned for valuable advice about JFR streaming in the cloud. We’ll look at this simple app’s performance and learn how to handle failure incidents.