How to create a single-node Kubernetes cluster


Sep 14, 2022
Dmitry Chuyko

Great things have small beginnings — this couldn’t be more true when speaking of Kubernetes. Developers who have never worked with Kubernetes perceive it as a mighty beast doing wonders with container orchestration but too difficult to tame. Sooner or later, they will have to master this technology as it is becoming the standard of enterprise container management.

Where to begin? The answer is a local cluster with one node. Local clusters imitate their big K8s brother faithfully and enable developers to gain insight into the system without the risk of ruining anything.

Besides, local clusters have another useful function — they can be used for JVM tuning in a sandbox mode. So read on to find out what local K8s clusters are and how to set them up!

Kubernetes development with local clusters

Kubernetes manages all workloads in clusters. The system doesn't work with individual containers; instead, it places them into pods that run on nodes, which are physical or virtual machines. Pods usually communicate with the outside world through an ingress controller. Nodes make up a cluster in which they pool their resources, and the system redistributes those resources accordingly when nodes are added or removed.
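In manifest form, the smallest such workload is a single pod. Below is a minimal sketch; the pod name, label, and nginx image are illustrative choices, not anything this article requires:

```yaml
# hello-pod.yaml -- a single-container pod
# (name, label, and image are illustrative examples)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.23   # any container image works here
      ports:
        - containerPort: 80
```

Once a cluster is running, kubectl apply -f hello-pod.yaml schedules the pod onto a node. In practice, you rarely create bare pods; you would normally define a Deployment that creates and manages them for you.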

A cluster may contain only one node for educational or experimental purposes. In addition, it can be placed locally on a developer’s machine or remotely in a cloud.

The incredible Kubernetes journey starts locally, but even when the workloads shift to the cloud, local clusters can be of great help as they create a safe testing and development environment. But still, why do we need local clusters if we can start in a more realistic cloud environment right away?

  • They provide a great learning tool for those who want to integrate K8s into their enterprise. K8s cluster architecture is anything but simple, and mastering the system's intricacies requires time and effort with quite a few knocks along the way. Local clusters imitate the work of a real Kubernetes perfectly, so the developers can get insights into the system and test its functionalities without the risk of ruining anything.
  • Local clusters accelerate the feedback loop (when developers have to verify changes before pushing them into production) because they drastically reduce the time necessary for building images and pushing/pulling them to and from the cloud with all the security layers in place.
  • They decrease the cost of development as application optimization can be performed locally without the need to control and monitor cloud resources usage.
  • Local clusters provide a perfect sandbox environment great for testing, tuning, and proof of concept (PoC) experiments without the risk of blowing up the whole system.

Overview of local Kubernetes solutions

There are several open-source platforms for setting up a local Kubernetes environment. They are lightweight K8s distributions with a similar purpose of imitating a large-scale Kubernetes environment without using plenty of resources. The difference lies in the range of additional functions, so the choice depends on your business needs.

minikube

minikube is the most popular local Kubernetes distribution, developed by the Kubernetes project itself. It is effortless to install and use as a single-node K8s cluster on various operating systems, although it requires virtualization or a container runtime when run outside Linux. In addition, minikube supports addons, extensions that add Kubernetes functionality. You can also add extra nodes to your cluster without any complications. minikube is perfect for testing purposes but not fit for production deployments.

K3s

Created by Rancher, K3s is a lightweight K8s distribution (a CNCF Sandbox project) tailored to resource-constrained systems such as edge and IoT devices. It runs on Linux and is optimized for both ARM64 and ARMv7. K3s is easy to set up as a single-node cluster, but to add nodes, you have to install K3s on each machine and then join it to the cluster.

MicroK8s

MicroK8s is developed by Canonical, the company behind Ubuntu. It is a modular, enterprise-grade Kubernetes distro that supports single- and multi-node clusters. MicroK8s can be used to run self-healing highly available clusters with multiple OSs. Another advantage is that it is compatible with edge and IoT devices and comes with optional commercial support. At the same time, installing and using the distribution is more complicated due to its modularity and multiple features to configure.

kind

kind (Kubernetes in Docker) was developed by the Kubernetes project primarily for testing K8s itself but can be used for CI purposes and local development. It builds Docker containers and uses them as nodes in a cluster. It enables the developers to create highly available multi-node clusters for Windows, macOS, and Linux.

Comparative table

|                          | minikube              | K3s                      | MicroK8s                     | kind                  |
|--------------------------|-----------------------|--------------------------|------------------------------|-----------------------|
| Developer                | Kubernetes project    | Rancher                  | Canonical                    | Kubernetes project    |
| Operating systems        | Windows, Linux, macOS | Linux                    | Windows, Linux, macOS        | Windows, Linux, macOS |
| Commercial support       | No                    | No                       | Yes                          | No                    |
| Installation             | Very easy             | Easy                     | Easy with Linux snap support | Easy                  |
| Multi-node clusters      | Yes                   | Yes, but requires effort | Yes                          | Yes                   |
| Edge/IoT devices support | No                    | Yes                      | Yes                          | No                    |

Setting up minikube

Our articles dedicated to cloud expense reduction and Java Garbage Collection stressed the importance of JVM tuning for better app performance and efficient resource consumption. We can use local clusters to test various JVM configurations and pod limits; one node is often enough for that purpose. In our following experiments, we use a powerful x86_64 machine with 96 CPUs, so we need an easy-to-install local cluster that has an Ingress and uses a local repository.

We chose minikube based on these requirements. We will also need an Ingress to distribute traffic across the replicas: from the outside, it will look like a single entry point leading to multiple pods.

We will use the NGINX Ingress controller as the Ingress and Docker as a driver to avoid overhead and virtualization consequences. Follow the steps below to install all necessary utilities for Linux. Windows and macOS users may refer to the official documentation.

Install kubectl

First of all, you need to install kubectl, the Kubernetes command-line tool. The kubectl version should be within one minor version of your cluster's version: for instance, kubectl v1.25 can talk to clusters running v1.24, v1.25, and v1.26.
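As an illustration of this skew rule (a sketch for clarity, not part of the official tooling), a small shell function can check whether two release strings are within one minor version of each other:

```shell
#!/bin/sh
# skew_ok CLIENT_VERSION CLUSTER_VERSION
# Succeeds when the minor versions differ by at most 1,
# mirroring the version-skew rule described above.
skew_ok() {
  c_minor=$(printf '%s' "$1" | cut -d. -f2)
  s_minor=$(printf '%s' "$2" | cut -d. -f2)
  diff=$((c_minor - s_minor))
  [ "${diff#-}" -le 1 ]    # ${diff#-} strips a leading minus sign
}

skew_ok v1.25.0 v1.24.3 && echo "compatible"
skew_ok v1.25.0 v1.22.1 || echo "too far apart"
```

Running the sketch prints "compatible" for v1.25 against v1.24 and "too far apart" for v1.25 against v1.22.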

There are several methods of installing kubectl. The first one is to use curl. Run the following command to download the latest kubectl version:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Install kubectl with:

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

If you are using the Linux distribution with the snap package manager, you can use snap to install kubectl:

snap install kubectl --classic

As an alternative, you can use apt or yum package managers. kubectl is also available with Homebrew:

brew install kubectl

Regardless of the installation method, test the utility by checking the version:

kubectl version --client

Install a hypervisor or Docker

minikube enables you to use a hypervisor to create and manage virtual machines on a local host. You can use KVM or VirtualBox for Linux.

For our JVM tuning purposes, we will use Docker. Note that you can launch minikube directly on bare metal without virtualization by passing --vm-driver=none when starting minikube. However, running clusters without a VM or container layer reduces security and reliability and risks data loss, so be cautious with the none driver.

Install minikube

Now you are all set for installing minikube. You can perform the installation via a package manager, Homebrew, or by downloading the binary file. Let’s use curl to get the latest stable version for Linux x86_64:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

sudo install minikube-linux-amd64 /usr/local/bin/minikube

If /usr/local/bin is missing on your system, create it first and then rerun the install command:

sudo mkdir -p /usr/local/bin/

sudo install minikube-linux-amd64 /usr/local/bin/minikube

Now you can start your cluster. Run

minikube start --vm-driver=docker

to use minikube with Docker. To make Docker the default driver, use

minikube config set driver docker

If you want to change the driver, you should stop minikube first (see below).

Check that minikube is running correctly by requesting

minikube status

You should get a similar output:

host: Running

kubelet: Running

apiserver: Running

kubeconfig: Configured

When you are done working with minikube, stop the cluster with

minikube stop

Install NGINX

Ingress is an API object that manages access to Kubernetes services from outside the cluster. It allows developers to set up rules for routing traffic without creating numerous load balancers. An Ingress Controller is an implementation of the Ingress API. The NGINX Ingress Controller is a production-grade choice with the following benefits: it is lightweight, easy to use, secure, and event-based.
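For reference, a minimal Ingress object for the NGINX controller might look like the sketch below; the host name and the backing Service (demo-service) are hypothetical placeholders for whatever you deploy later:

```yaml
# demo-ingress.yaml -- routes traffic for demo.local to a Service
# (host and service name are hypothetical examples)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx            # handled by the NGINX Ingress Controller
  rules:
    - host: demo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # an existing Service in the same namespace
                port:
                  number: 80
```

Applying it with kubectl apply -f demo-ingress.yaml tells the controller to route HTTP requests carrying the Host header demo.local to demo-service.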

To enable the NGINX Ingress Controller (minikube must be started), run

minikube addons enable ingress

Check that NGINX is running:

kubectl get pods -n ingress-nginx

You should see something like this:

NAME READY STATUS RESTARTS AGE

ingress-nginx-admission-create-p8fsk 0/1 Completed 0 53s

ingress-nginx-admission-patch-z6x9c 0/1 Completed 1 53s

ingress-nginx-controller-755dfbfc65-229wz 1/1 Running 0 53s

Conclusion

Now that you have prepared your local Kubernetes cluster, you can start adjusting JVM settings on a local machine without compromising the production environment. In the next article in the series, we will learn how to deploy multiple application replicas to our cluster for subsequent load testing and JVM adjustments.
