Installing a Kubernetes Cluster with Two Minions on Ubuntu to Manage Pods and Services


Introduction

Kubernetes is a system designed to manage applications built within Docker containers in clustered environments. It provides basic mechanisms for the deployment, maintenance, and scaling of applications on public, private, or hybrid setups; in other words, it handles the entire life cycle of a containerized application. It also has self-healing features: containers can be auto-provisioned, restarted, or even replicated.


Goals

By the end of this blog, you will:

  • Understand the utility of Kubernetes
  • Learn how to set up a Kubernetes cluster

Prerequisites

For this blog, you will need:

  • An understanding of Kubernetes components.
  • All nodes must have Docker version 1.2+ and bridge-utils (to manipulate the Linux bridge) installed.
  • All machines must be able to communicate with each other, and the master node needs to be connected to the Internet, as it downloads the necessary files.
  • All nodes must run Ubuntu 14.04 LTS 64-bit server.
  • Dependencies of this guide: etcd 2.2.1, flannel 0.5.5, and Kubernetes 1.1.8; higher versions may also work.
  • All remote servers must allow SSH login without a password, using key-based authentication:
eval "$(ssh-agent)"
ssh-add admin-key.pem
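
With the key loaded, each node should be reachable without a password prompt. A quick check (using one of the node IPs from the cluster details below); it should print the hostname without asking for a password:

$ ssh ubuntu@10.2.0.34 hostname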

Before starting the installation and configuration, let's review the Kubernetes components. You can also have a look at my other blog, An Introduction to Kubernetes, for more details.

Kubernetes Components

Kubernetes follows a client-server model, in which a master provides centralized control over all minions (agents). We will be deploying one Kubernetes master with two minions, and we will also have a workspace machine from which we will run all the installation scripts.

Kubernetes has several components:

etcd – A highly available key-value store for shared configuration and service discovery.

flannel – An etcd backed network fabric for containers.

kube-apiserver – Provides the API for Kubernetes orchestration.

kube-controller-manager – Runs the controllers (such as the replication controller) that enforce the cluster's desired state.

kube-scheduler – Schedules containers on hosts.

kubelet – Processes a container manifest so the containers are launched according to how they are described.

kube-proxy – Provides network proxy services.
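
Once the cluster is up later in this guide, a rough way to see which of these components landed on a given machine is to grep the process list; this is just a sanity check, not an official tool:

$ ps aux | egrep 'etcd|flanneld|kube'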

Set up working directory

Log in to the workspace machine and create the workspace directory.

$ mkdir workspace
$ cd workspace/

Change to the workspace directory and clone the Kubernetes GitHub repo locally.

$ git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Counting objects: 258974, done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 258974 (delta 3), reused 0 (delta 0), pack-reused 258965
Receiving objects: 100% (258974/258974), 185.65 MiB | 10.11 MiB/s, done.
Resolving deltas: 100% (169989/169989), done.

When we start the setup process, the required binaries are automatically picked from the latest release. We can, however, pin the versions we need by setting the corresponding environment variables ETCD_VERSION, FLANNEL_VERSION, and KUBE_VERSION, as follows.

$ export KUBE_VERSION=1.1.8
$ export FLANNEL_VERSION=0.5.5
$ export ETCD_VERSION=2.2.1

Cluster IP details:

Master: 10.2.0.33

Nodes: 10.2.0.34, 10.2.0.35

We could also configure our master to act as a node (minion), but that would overload the master, so we are keeping the master as a completely standalone machine.

Configure the cluster information

We can configure the cluster information in the cluster/ubuntu/config-default.sh file, or we can export it as environment variables. The following is a simple sample:

export nodes="ubuntu@10.2.0.33 ubuntu@10.2.0.34 ubuntu@10.2.0.35"
export roles="a i i"
export NUM_NODES=${NUM_NODES:-2}
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16

In the nodes variable, each entry is the SSH login for one machine, and roles assigns each machine a role: "a" for master, "i" for node (minion), and "ai" for a machine that is both. The FLANNEL_NET variable defines the IP range used for the flannel overlay network; it should not conflict with the SERVICE_CLUSTER_IP_RANGE above. You can optionally provide additional flannel network configuration through FLANNEL_OTHER_NET_CONFIG, as explained in cluster/ubuntu/config-default.sh.

Note: When deploying, the master needs to be connected to the Internet to download the necessary files. If your machines are located in a private network that needs a proxy to reach the Internet, you can set PROXY_SETTING in cluster/ubuntu/config-default.sh like so:

PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"

After all the above variables are set correctly, we can use the following commands in the cluster/ directory to bring up the whole cluster.

$ cd kubernetes/cluster/
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

The scripts automatically copy binaries and config files to all the machines via scp and start the Kubernetes services on them. The only thing you need to do is type the sudo password when prompted.


Deploying node on machine 10.2.0.35

[sudo] password to start node:
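
If you would rather not type the password for every node, one option (a sketch; adjust it to your own security policy) is to grant the ubuntu user passwordless sudo on each machine before running kube-up.sh:

$ echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ubuntu
$ sudo chmod 0440 /etc/sudoers.d/ubuntu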

If everything works correctly, you will see the following message from the console, indicating that the k8s cluster is up.

Cluster validation succeeded

Test it out

You can use the kubectl command to check whether the newly created cluster is working correctly. The kubectl binary is under the cluster/ubuntu/binaries directory. Make it available via PATH, and you can use the commands below directly.
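
One way, assuming you are in the kubernetes root directory:

$ export PATH="$PWD/cluster/ubuntu/binaries:$PATH"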

For example, use $ kubectl get nodes to see if all of your nodes are ready.

$ kubectl get nodes
NAME        LABELS                             STATUS
10.2.0.34   kubernetes.io/hostname=10.2.0.34   Ready
10.2.0.35   kubernetes.io/hostname=10.2.0.35   Ready
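
As a quick smoke test of pods and services (a minimal sketch, assuming the nodes can pull the public nginx image from Docker Hub), you can start a replicated nginx and expose it inside the cluster:

$ kubectl run nginx --image=nginx --replicas=2
$ kubectl expose rc nginx --port=80
$ kubectl get pods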

Congratulations! You have configured your Kubernetes cluster. In case you need any help while playing with Kubernetes, please write to me, Pravin Mishra. I will be happy to assist.

Setting up Jenkins with Mesos


In this blog, I am going to explain the steps to configure Mesos with Jenkins. Jenkins connects with Mesos using the mesos-jenkins plugin, which allows Jenkins to dynamically launch Jenkins slaves on a Mesos cluster depending on the workload! Put simply, whenever the Jenkins build queue starts getting bigger, this plugin automatically spins up additional Jenkins slave(s) on Mesos so that jobs can be scheduled immediately! Similarly, when a Jenkins slave is idle for a long time, it is automatically shut down.

1. Prerequisites

I assume you have:

1. A Jenkins instance with administrator privilege.
2. A working Mesos cluster with at least one master and one slave. For instructions on setting up a Mesos cluster, please refer to my blog Setup Standalone Mesos on Ubuntu.

I have installed Jenkins on my OpenStack environment at IP address 10.1.0.8, and the Mesos cluster is working at IP address 10.1.0.17 with one master and one slave.

[Screenshot: Mesos master console]

2. Installing the plugin

Go to Manage Jenkins > Manage Plugins and open the "Available" tab, then select the "mesos" plugin for installation. Scroll all the way down, and you'll see the "Install without restart" button as well as the "Download now and install after restart" button. The former lets you start using the new plugin right away; the latter is the traditional behaviour, where new plugins take effect after the next restart.

[Screenshot: selecting the Mesos plugin from the Available tab]

Click the button Install without restart on the left, and the plugin gets downloaded, installed, and activated:

[Screenshot: installing the Mesos plugin]
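
If you prefer the command line, the same plugin can also be installed with the Jenkins CLI. This is an alternative sketch; it assumes the CLI jar is available, and you should adjust the Jenkins URL and authentication to your setup:

$ java -jar jenkins-cli.jar -s http://10.1.0.8:8080/ install-plugin mesos
$ java -jar jenkins-cli.jar -s http://10.1.0.8:8080/ safe-restart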

3. Configuring the plugin

Now go to the 'Configure System' page in Jenkins. If the plugin is successfully installed, you should see an option to 'Add a new cloud' at the bottom of the page. Add the 'Mesos Cloud' and give the path to the Mesos native library (e.g., libmesos.so on Linux or libmesos.dylib on OS X) and the address (HOST:PORT) of a running Mesos master.

[Screenshot: configuring the Mesos Cloud]
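
If you are not sure where the native library lives on your machine, a quick way to locate it (on a source-built Mesos it typically ends up under /usr/local/lib):

$ find / -name 'libmesos*' 2>/dev/null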

If you want to test connectivity to Mesos immediately, you can set 'On-demand framework registration' to 'no', and the framework will appear in Mesos as soon as you save. Otherwise it will register and unregister automatically when a build is scheduled on Mesos.

Note: Ensure the Mesos slaves have a jenkins user (or whichever user the Jenkins master runs as), and that this user has the JAVA_HOME environment variable set up.
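
A sketch of preparing a Mesos slave accordingly; the JDK package name and JAVA_HOME path below are assumptions for Ubuntu 14.04 and should be adjusted to your distribution:

$ sudo useradd -m -s /bin/bash jenkins
$ sudo apt-get install -y openjdk-7-jdk
$ echo 'export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64' | sudo tee -a /home/jenkins/.profile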

4. Set up and try out a build job

Now set up a new job in Jenkins. On the configure screen, check the box that says "Restrict where this project can be run".

[Screenshot: job configuration]

Put in "mesos" (the label we set in the plugin configuration).

At this point you are good to go. If you check the Mesos console, you should see that the Jenkins Scheduler is now set up as a framework, which means it is able to accept jobs:

[Screenshot: Jenkins Scheduler registered as a framework in Mesos]

You are done with all the configuration; now you can run the job, and it will run on Mesos. You will see an executor magically appear, pause for a little bit (while the slave.jar is set up, etc.), and then run the job. The slave will become idle once the job completes, and after a few seconds it will shut down.

Configure Jenkins container slaves


In this blog, I am going to walk you through the configuration of the Docker plugin for Jenkins container slaves. The Docker plugin makes it possible to use a Docker host to dynamically provision a slave, run a single build, and then tear down that slave. Optionally, the container can be committed, so that (for example) manual QA could be performed by importing the container into a local Docker provider and running it from there.

1. Prerequisites

I assume you have:

1. A Jenkins instance with administrator privilege.
2. The Docker plugin installed in Jenkins.
3. A working Docker host.

I have installed Jenkins on my OpenStack VM at IP address 10.1.0.8, and the Docker host is working on another VM at IP address 10.1.0.9.

2. Preparing Environment

A. Prepare docker host

Your Docker host needs to allow connections from a Jenkins instance hosted on a different machine, so you will need to open up TCP port 2375. This can be achieved by editing the Docker config file /etc/default/docker. Open this file in your favorite editor and make the change below:

$ sudo vi /etc/default/docker

DOCKER_OPTS="-H tcp://10.1.0.9:2375 -H unix:///var/run/docker.sock"
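
After saving the change, restart the Docker service and verify that the TCP endpoint responds; the -H flag points the docker client at the remote host:

$ sudo service docker restart
$ docker -H tcp://10.1.0.9:2375 info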

B. Creating a docker image

You can pull a ready-made Jenkins slave image using the docker pull command:

$ docker pull evarga/jenkins-slave

You need a Docker image that has, at a minimum, an SSH server installed. You probably want a JDK, and you will also want a 'jenkins' user that can log in. To do all of this, follow the steps below:

$ docker run -i -t evarga/jenkins-slave /bin/bash

root@044d879cbf8c:/# apt-get update
root@044d879cbf8c:/# apt-get install git
root@044d879cbf8c:/# exit

Once the container has been prepared, you need to commit it with a name to be used later, e.g. docker-slave-image:

$ docker ps -a
CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS                     PORTS               NAMES
672a8e7ec179        evarga/jenkins-slave   "/bin/bash"         5 weeks ago         Exited (0) 5 weeks ago                         agitated_albattani

$ docker commit 672a8e7ec179 docker-slave-image

$ docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
docker-slave-image     latest              d8e5c82b7ce1        1 minute ago        655.3 MB
evarga/jenkins-slave   latest              4df728e7f65f        16 months ago       610.8 MB
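
Before wiring the image into Jenkins, you can sanity-check that it accepts SSH logins. This is a sketch: it explicitly starts sshd (since the commit above captured /bin/bash as the command), maps container port 22 to 2222 on the host, and assumes the jenkins user baked into evarga/jenkins-slave:

$ docker run -d -p 2222:22 docker-slave-image /usr/sbin/sshd -D
$ ssh -p 2222 jenkins@10.1.0.9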

3. Jenkins Configuration

Docker appears in the 'Cloud' section of the Jenkins configuration; select "Docker" from the "Add a new cloud" drop-down menu.

[Screenshot: Docker cloud configuration]

The cloud is now ready to use. If everything is set up correctly, Jenkins will start up a new Docker container for each build, run the build, and then shut down the container.

Job Configuration

Now, configure a job to use that “docker-slave-image” label as shown below:

[Screenshot: job configuration]

Running that job, we will see that it successfully spins up a container from "docker-slave-image" and builds the job. Once the build completes, the container is destroyed automatically.

An Introduction to Kubernetes


In this blog, I am going to give a brief introduction to Kubernetes and its services. In a later blog, we will deploy and configure a Kubernetes cluster on OpenStack VMs.

Kubernetes is basically a cluster management tool for Docker containers. It helps us schedule and deploy any number of container replicas onto a cluster of nodes. Kubernetes is intelligent enough to make decisions like which containers go on which servers/nodes.

Kubernetes is key to assessing and dealing with many containers at once, instead of simply working with Docker on a manually configured host.

Why Kubernetes?

Kubernetes enables you to respond quickly to customer demand by scaling out or rolling out new features, and to make maximal use of your hardware.

Kubernetes is:

  • lean: lightweight, simple, accessible
  • portable: public, private, hybrid, multi cloud
  • extensible: modular, pluggable, hookable, composable, toolable
  • self-healing: auto-placement, auto-restart, auto-replication

Parts of Kubernetes

A running Kubernetes cluster contains node agents (kubelet) and master components (APIs, scheduler, etc), on top of a distributed storage solution. The Kubernetes node has the services necessary to run application containers and be managed from the master systems.

  • Master: the managing machine, which oversees one or more minions.
  • Minion: a slave that runs tasks as delegated by the user and Kubernetes master.
  • Pod: an application (or part of an application) that runs on a minion. This is the basic unit of manipulation in Kubernetes.
  • Replication Controller: ensures that the requested number of pods are running on minions at all times.
  • Label: an arbitrary key/value pair that the Replication Controller uses for service discovery.
  • kubecfg: the command-line config tool.
  • Service: an endpoint that provides load balancing across a replicated group of pods.

To manage resources within Kubernetes, you interact with the Kubernetes API, after pulling down the Kubernetes binaries that provide all the services necessary to get a Kubernetes configuration up and running. Like most other cluster management solutions, Kubernetes works by creating a master, which exposes the Kubernetes API and allows you to request certain tasks to be completed. The master then spawns containers to handle the workload. Aside from running Docker, each node runs the kubelet service (an agent that works with the container manifest) and a proxy service. The Kubernetes control plane is comprised of many components, but they all run on the single Kubernetes master node.
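
As a concrete illustration (assuming the API server's insecure port 8080 is reachable on the master, as in the Ubuntu setup earlier in this blog), you can talk to the API directly to list the registered nodes:

$ curl http://10.2.0.33:8080/api/v1/nodes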