Installing a Kubernetes Cluster with 2 minions on Ubuntu to manage pods and services


Kubernetes is a system designed to manage containerized applications built with Docker containers in a clustered environment. It provides basic mechanisms for the deployment, maintenance, and scaling of applications on public, private, or hybrid setups; in other words, it handles the entire life cycle of a containerized application. It also has self-healing features: containers can be automatically provisioned, restarted, or even replicated.



By the end of this blog, you will:

  • Understand the utility of Kubernetes
  • Learn how to set up a Kubernetes cluster


For this blog, you will need:

  • An understanding of Kubernetes components.
  • Docker version 1.2+ and bridge-utils (to manipulate the Linux bridge) installed on all nodes.
  • All machines able to communicate with each other, and the master node connected to the Internet, as it downloads the necessary files.
  • Ubuntu 14.04 LTS 64-bit server on every node.
  • Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.8; it may work with higher versions.
  • Passwordless SSH access to all the remote servers via key authentication:
eval "$(ssh-agent)"
ssh-add admin-key.pem

Before starting the installation and configuration, let’s understand the Kubernetes components. You can also read my other blog, Introduction to Kubernetes, for more details.

Kubernetes Components

Kubernetes follows a server-client model, where a master provides centralized control for all minions (agents). We will be deploying one Kubernetes master with two minions, and we will also have a workspace machine from which we will run all installation scripts.

Kubernetes has several components:

etcd – A highly available key-value store for shared configuration and service discovery.

flannel – An etcd backed network fabric for containers.

kube-apiserver – Provides the API for Kubernetes orchestration.

kube-controller-manager – Enforces Kubernetes services.

kube-scheduler – Schedules containers on hosts.

kubelet – Processes a container manifest so the containers are launched according to how they are described.

kube-proxy – Provides network proxy services.

Set up working directory

Login into the workspace machine and create the workspace directory.

$ mkdir workspace
$ cd workspace/

From the workspace directory, clone the Kubernetes GitHub repo locally.

$ git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Counting objects: 258974, done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 258974 (delta 3), reused 0 (delta 0), pack-reused 258965
Receiving objects: 100% (258974/258974), 185.65 MiB | 10.11 MiB/s, done.
Resolving deltas: 100% (169989/169989), done.

When we start the setup process, the required binaries are automatically picked up from the latest release. But we can pin versions as per our requirements by setting the corresponding environment variables ETCD_VERSION, FLANNEL_VERSION, and KUBE_VERSION, like the following.

$ export KUBE_VERSION=1.1.8
$ export FLANNEL_VERSION=0.5.5
$ export ETCD_VERSION=2.2.1

Cluster IP details:

Master :


We can also configure the master as a node (minion), but that would overload the master, so we are keeping the master as a completely standalone setup.

Configure the cluster information

We can configure the cluster information in the cluster/ubuntu/ config file, or we can export the values as environment variables. The following is a simple sample:

export nodes="ubuntu@ ubuntu@ ubuntu@"
export role="a i i"
export NUM_NODES=${NUM_NODES:-3}
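In the sample above, the role string pairs one-to-one with the nodes list: per the Kubernetes Ubuntu guide, "a" marks the master and "i" marks a minion (an "ai" entry would mark a machine acting as both). A quick sanity check, sketched here with hypothetical example IPs (substitute your own), is to confirm that the two lists have the same length before deploying:

```shell
# Hypothetical example values -- replace the IPs with your own machines.
export nodes="ubuntu@10.0.0.1 ubuntu@10.0.0.2 ubuntu@10.0.0.3"
export role="a i i"   # a = master, i = minion

# The nodes and role lists must line up one-to-one.
node_count=$(echo "$nodes" | wc -w | tr -d ' ')
role_count=$(echo "$role" | wc -w | tr -d ' ')
if [ "$node_count" -ne "$role_count" ]; then
  echo "Mismatch: $node_count nodes but $role_count roles" >&2
else
  export NUM_NODES=$node_count
  echo "NUM_NODES=$NUM_NODES"
fi
```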

The FLANNEL_NET variable defines the IP range used for the flannel overlay network; it should not conflict with the SERVICE_CLUSTER_IP_RANGE above. You can optionally provide additional flannel network configuration through FLANNEL_OTHER_NET_CONFIG, as explained in cluster/ubuntu/
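For reference, here is a hypothetical pair of ranges; the exact values depend on your environment, the only hard rule being that the two ranges must not overlap with each other or with your physical network:

```shell
# Example values only -- pick ranges that do not collide with your network.
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24   # virtual IPs for Kubernetes services
export FLANNEL_NET=172.16.0.0/16                 # overlay network for pod traffic
```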

Note: When deploying, the master needs to be connected to the Internet to download the necessary files. If your machines are located in a private network that needs a proxy setting to connect to the Internet, you can set the config PROXY_SETTING in cluster/ubuntu/ such as:

PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"

After all the above variables are set correctly, we can use the following commands from the cluster/ directory to bring up the whole cluster.

$ cd kubernetes/cluster/
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

The scripts automatically copy binaries and config files to all the machines via scp and start the Kubernetes services on them. The only thing you need to do is type the sudo password when prompted.

Deploying node on machine

[sudo] password to start node:

If everything works correctly, you will see the following message on the console, indicating the k8s cluster is up.

Cluster validation succeeded

Test it out

You can use the kubectl command to check whether the newly created cluster is working correctly. The kubectl binary is under the cluster/ubuntu/binaries directory. Make it available via PATH, and then you can use the commands below smoothly.
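Assuming the repo was cloned into ~/workspace as above (adjust the path if you cloned elsewhere), putting kubectl on PATH might look like this:

```shell
# Illustrative path -- adjust if you cloned the repo somewhere else.
export PATH="$HOME/workspace/kubernetes/cluster/ubuntu/binaries:$PATH"

# Confirm the directory is now on PATH.
echo "$PATH" | grep -q "kubernetes/cluster/ubuntu/binaries" && echo "kubectl directory is on PATH"
```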

For example, use $ kubectl get nodes to see if all of your nodes are ready.

$ kubectl get nodes

Congratulations! You have configured your Kubernetes cluster. In case you need any help while playing with Kubernetes, please write to me, Pravin Mishra. We will be happy to assist.

Container Clustering tools and techniques

With all the noise around containers, companies are moving their server applications from virtual machines (VMs) to containers.

So why does everyone love containers?

Containers use a shared operating system. That means they are much more efficient than hypervisors in system resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. Among other benefits, containers offer a software architecture that provides portability and managed distribution of software assets. This has led enterprises and software vendors to embrace container technology as they prepare for the cloud.

But despite their success, containers still present challenges. Container scalability, for example, remains somewhat of a mystery. Some organizations struggle when trying to scale Docker, one of the leading container technologies.

There are a couple of open-source container cluster management tools. Each cluster management technology has something unique and different to offer.

  • Apache Mesos is mature and can also run non-containerized workloads (such as Hadoop).
  • Kubernetes tends to be opinionated. It offers little customization (for now) and networking can be a challenge. But it is moving fast and the defaults can be a quick way to get started.
  • Docker Swarm is from Docker, Inc and offers the familiar Docker (single-host) API. It is the easiest to get started with, but also the least mature (as of this writing).

Container orchestration, scheduling, and clustering tools vary in their features and implementation, but some of the core principles are the same.

  • Pool resources from a group of hosts into their components (CPU/RAM) and make them available for consumption. Also make sure the resources don’t become exhausted via over-provisioning or host failure.
  • Service supervision provides a load balancer/entry point for the service and makes sure the service remains running.
  • Scaling functionality scales a service (automatic or manually) by allowing an operator to create more or fewer instances.
  • System metadata provides stats about running instances, scheduling, and container health.

Going forward, I am going to do PoCs on the above tools. I have already written a couple of blogs on Mesos and am going to add more on the remaining tools.

Setup Standalone Mesos on Ubuntu

Install DC/OS on Vagrant

Using Jenkins on DC/OS backed by NFS