Installing a Kubernetes Cluster with 2 minions on Ubuntu to manage pods and services


Introduction

Kubernetes is a system designed to manage applications built within Docker containers in clustered environments. It provides basic mechanisms for the deployment, maintenance and scaling of applications on public, private or hybrid setups; in other words, it handles the entire life cycle of a containerized application. It also has self-healing features: containers can be auto-provisioned, restarted or even replicated.


Goals

By the end of this blog, you will:

  • Understand the utility of Kubernetes
  • Learn how to set up a Kubernetes cluster

Prerequisites

For this blog, you will need:

  • Understanding of Kubernetes components.
  • All nodes must have Docker version 1.2+ and bridge-utils (to manipulate the Linux bridge) installed.
  • All machines should be able to communicate with each other, and the master node needs to be connected to the Internet, as it downloads the necessary files.
  • All nodes must run Ubuntu 14.04 LTS 64-bit server.
  • Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.8; it may work with higher versions.
  • All the remote servers can be logged into via SSH without a password, using key authentication:
eval "$(ssh-agent)"
ssh-add admin-key.pem

Before starting the installation and configuration, let's understand the Kubernetes components. You can also look at my other blog, Introduction to Kubernetes, for more details.

Kubernetes Components

Kubernetes works on a server-client model: it has a master that provides centralized control over all minions (agents). We will be deploying one Kubernetes master with two minions, and we will also have a workspace machine from which we will run all installation scripts.

Kubernetes has several components:

etcd – A highly available key-value store for shared configuration and service discovery.

flannel – An etcd backed network fabric for containers.

kube-apiserver – Provides the API for Kubernetes orchestration.

kube-controller-manager – Runs the controller loops that regulate the cluster state (e.g., replication).

kube-scheduler – Schedules containers on hosts.

kubelet – Processes a container manifest so the containers are launched according to how they are described.

kube-proxy – Provides network proxy services.
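
Once the cluster is up (see the steps below), a quick way to see this split in practice is to look for the processes on each machine; a hedged sketch, assuming the usual layout where etcd and the control plane run on the master while kubelet, kube-proxy and flannel run on the minions:

$ ps -ef | egrep "etcd|kube-apiserver|kube-controller-manager|kube-scheduler"   # on the master
$ ps -ef | egrep "kubelet|kube-proxy|flanneld"                                  # on a minion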

Set up working directory

Log in to the workspace machine and create the workspace directory.

$ mkdir workspace
$ cd workspace/

Change to the workspace directory and clone the kubernetes GitHub repo locally.

$ git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Counting objects: 258974, done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 258974 (delta 3), reused 0 (delta 0), pack-reused 258965
Receiving objects: 100% (258974/258974), 185.65 MiB | 10.11 MiB/s, done.
Resolving deltas: 100% (169989/169989), done.

When we start the setup process, the required binaries are automatically picked up from the latest release. But we can pin versions as per our requirements by setting the corresponding environment variables ETCD_VERSION, FLANNEL_VERSION and KUBE_VERSION, like the following:

$ export KUBE_VERSION=1.1.8
$ export FLANNEL_VERSION=0.5.5
$ export ETCD_VERSION=2.2.1

Cluster IP details:

Master : 10.2.0.33

Nodes: 10.2.0.34, 10.2.0.35

We could also configure the master as a node (minion), but that would overload the master, so we are keeping the master as a completely standalone setup.

Configure the cluster information

We can configure the cluster information in the cluster/ubuntu/config-default.sh file, or we can export it as environment variables. The following is a simple sample:

export nodes="ubuntu@10.2.0.33 ubuntu@10.2.0.34 ubuntu@10.2.0.35"
export roles="a i i"
export NUM_NODES=${NUM_NODES:-3}
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16

The FLANNEL_NET variable defines the IP range used for the flannel overlay network; it should not conflict with the SERVICE_CLUSTER_IP_RANGE above. You can optionally provide additional flannel network configuration through FLANNEL_OTHER_NET_CONFIG, as explained in cluster/ubuntu/config-default.sh.

Note: When deploying, the master needs to be connected to the Internet to download the necessary files. If your machines are located in a private network that needs proxy settings to connect to the Internet, you can set the config PROXY_SETTING in cluster/ubuntu/config-default.sh, such as:

PROXY_SETTING="http_proxy=http://server:port https_proxy=https://server:port"

After all the above variables have been set correctly, we can use the following commands in the cluster/ directory to bring up the whole cluster.

$ cd kubernetes/cluster/
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

The scripts automatically copy binaries and config files to all the machines via scp and start the kubernetes services on them. The only thing you need to do is type the sudo password when prompted.


Deploying node on machine 10.2.0.35

[sudo] password to start node:

If everything works correctly, you will see the following message on the console, indicating that the k8s cluster is up.

Cluster validation succeeded

Test it out

You can use the kubectl command to check whether the newly created cluster is working correctly. The kubectl binary is under the cluster/ubuntu/binaries directory. You can make it available via PATH, then use the commands below smoothly.
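
For instance, assuming you are still inside the kubernetes/cluster directory from the previous step:

$ export PATH=$PATH:$(pwd)/ubuntu/binaries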

For example, use $ kubectl get nodes to see if all of your nodes are ready.

$ kubectl get nodes
NAME        LABELS                             STATUS
10.2.0.34   kubernetes.io/hostname=10.2.0.34   Ready
10.2.0.35   kubernetes.io/hostname=10.2.0.35   Ready
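
To exercise pods and services as well, you can launch something small; a hedged sketch using the public nginx image (exact flags can vary slightly between kubectl releases):

$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
$ kubectl get pods
$ kubectl expose rc my-nginx --port=80

With k8s 1.1, kubectl run creates a replication controller, and kubectl expose then puts a service in front of the two pod replicas.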

Congratulations! You have configured your Kubernetes cluster. In case you need any help while playing with Kubernetes, please write to me at Pravin Mishra. We will be happy to assist.

Setting up Jenkins with Mesos


In this blog, I am going to explain the steps to configure Mesos with Jenkins. Jenkins connects with Mesos using the mesos-jenkins plugin, which allows Jenkins to dynamically launch Jenkins slaves on a Mesos cluster depending on the workload! Put simply, whenever the Jenkins build queue starts getting bigger, this plugin automatically spins up additional Jenkins slave(s) on Mesos so that jobs can be scheduled immediately! Similarly, when a Jenkins slave is idle for a long time, it is automatically shut down.

1. Prerequisites

I assume you have:

1. A Jenkins instance with administrator privilege.
2. A working Mesos cluster with at least one master and one slave. For instructions on setting up a Mesos cluster, please refer to my blog Setup Standalone Mesos on Ubuntu.

I have installed Jenkins on my OpenStack environment with IP address 10.1.0.8, and a Mesos cluster working at IP address 10.1.0.17 with one master and one slave.

[Screenshot: Mesos master web UI]

2. Installing the plugin

Go to Manage Jenkins > Manage Plugins and open the 'Available' tab, then choose the 'mesos' plugin to install. Scroll all the way down and you'll see the 'Install without restart' button as well as the 'Download now and install after restart' button. The former allows you to start using the new plugin right away; the latter is the traditional behaviour, where new plugins take effect after the next restart.

[Screenshot: selecting the mesos plugin]

Click the button Install without restart on the left, and the plugin gets downloaded, installed, and activated:

[Screenshot: the plugin downloaded, installed and activated]

3. Configuring the plugin

Now go to the 'Configure System' page in Jenkins. If the plugin was installed successfully, you should see an option to 'Add a new cloud' at the bottom of the page. Add the 'Mesos Cloud' and give the path to the Mesos native library (e.g., libmesos.so on Linux or libmesos.dylib on OS X) and the address (HOST:PORT) of a running Mesos master.

[Screenshot: configuring the Mesos cloud]

If you want to test connectivity to Mesos immediately, you can set 'On-demand framework registration' to 'no', and the framework will appear in Mesos as soon as you save. Otherwise, it will register and unregister automatically when a build is scheduled on Mesos.

Note: Ensure the Mesos slaves have a jenkins user, or whichever user the Jenkins master runs as. The jenkins user should have the JAVA_HOME environment variable set up.
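
A hedged sketch of preparing a slave this way, assuming Ubuntu slaves with OpenJDK 7 (the JAVA_HOME path is an assumption; adjust it to your machines):

$ sudo useradd -m -s /bin/bash jenkins
$ echo 'JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64"' | sudo tee -a /etc/environment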

4. Set up and try out a build job

Now set up a new job in Jenkins. On the configure screen, check the box that says 'Restrict where this project can be run'.

[Screenshot: job configuration]

Put in "mesos" (that was the label set in the plugin configuration).

At this point you are good to go. If you check the Mesos console, you should see that Jenkins Scheduler is now setup as a framework – which means it is able to accept jobs:

[Screenshot: Jenkins Scheduler registered as a Mesos framework]

You are done with all the configuration; now you can run the job, and it will run on Mesos. You will see an executor magically appear, pause for a little bit (while the slave.jar is set up, etc.), and then run the job. The slave will become idle once the job completes, and after a few seconds the slave will shut down.

Configure Jenkins container slaves


In this blog, I am going to walk you through the proper configuration of the Docker plugin for Jenkins container slaves. The Docker plugin lets you use a docker host to dynamically provision a slave, run a single build, then tear down that slave. Optionally, the container can be committed, so that (for example) manual QA could be performed by importing the container into a local docker provider and running it from there.

1. Prerequisites

I assume you have:

1. A Jenkins instance with administrator privilege.
2. The docker plugin installed in Jenkins
3. A working docker host

I have installed Jenkins on my OpenStack VM with IP address 10.1.0.8, and a Docker host working on another VM with IP address 10.1.0.9.

2. Preparing Environment

A. Prepare docker host

Your docker host needs to allow connections from a Jenkins instance hosted on a different machine, so you will need to open up TCP port 2375. This can be achieved by editing the docker config file /etc/default/docker. Open this file in your favorite editor and make the following change:

$ sudo vi /etc/default/docker

DOCKER_OPTS="-H tcp://10.1.0.9:2375 -H unix:///var/run/docker.sock"
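
After saving the change, restart docker and confirm that the TCP endpoint answers; a quick check using the docker host IP from this setup:

$ sudo service docker restart
$ docker -H tcp://10.1.0.9:2375 info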

B. Creating a docker image

You can pull a ready-made jenkins slave using the docker pull command!

docker pull evarga/jenkins-slave

You need a docker image that has, as a minimum, an ssh server installed. You probably want a JDK, and you will also want a 'jenkins' user that can log in. To do all this, follow the activities below:

$ docker run -i -t evarga/jenkins-slave /bin/bash

root@044d879cbf8c:/# apt-get update
root@044d879cbf8c:/# apt-get install git
root@044d879cbf8c:/# exit

Once the container has been created, you need to commit it with a name to be used later, e.g., docker-slave-image:

$ docker ps -a
CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS                     PORTS               NAMES
672a8e7ec179        evarga/jenkins-slave   "/bin/bash"         5 weeks ago         Exited (0) 5 weeks ago                         agitated_albattani

$ docker commit 672a8e7ec179 docker-slave-image

$ docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
docker-slave-image     latest              d8e5c82b7ce1        About a minute ago    655.3 MB
evarga/jenkins-slave   latest              4df728e7f65f        16 months ago       610.8 MB

3. Jenkins Configuration

Docker appears in the 'Cloud' section of the Jenkins configuration; select "Docker" from the "Add a new cloud" drop-down menu.

[Screenshot: Docker cloud configuration]

The project is now ready to run. If everything is set up correctly, Jenkins should start up a new Docker container, run the build, and then shut down the container.

Job Configuration

Now, configure a job to use that “docker-slave-image” label as shown below:

[Screenshot: job configuration with the docker-slave-image label]

Running the job, we will see that it successfully spins up a container of "docker-slave-image" and builds the job. Once the build completes, the container will be destroyed automatically.

An Introduction to Kubernetes


In this blog, I am going to give a brief introduction to Kubernetes and its services. In a later blog, we will deploy and configure a Kubernetes cluster on an OpenStack VM.

Kubernetes is basically a cluster management tool for Docker containers. It helps us schedule and deploy any number of container replicas onto a cluster of nodes. Kubernetes is intelligent enough to make decisions like which containers go on which servers/nodes.

Kubernetes is key to assessing and dealing with many containers, instead of simply working with Docker on a manually configured host.

Why Kubernetes?

Kubernetes enables you to respond quickly to customer demand by scaling or rolling out new features and to make maximal use of your hardware.

Kubernetes is:

  • lean: lightweight, simple, accessible
  • portable: public, private, hybrid, multi cloud
  • extensible: modular, pluggable, hookable, composable, toolable
  • self-healing: auto-placement, auto-restart, auto-replication

Parts of Kubernetes

A running Kubernetes cluster contains node agents (kubelet) and master components (APIs, scheduler, etc), on top of a distributed storage solution. The Kubernetes node has the services necessary to run application containers and be managed from the master systems.

  • Master: the managing machine, which oversees one or more minions.
  • Minion: a slave that runs tasks as delegated by the user and Kubernetes master.
  • Pod: an application (or part of an application) that runs on a minion. This is the basic unit of manipulation in Kubernetes.
  • Replication Controller: ensures that the requested number of pods are running on minions at all times.
  • Label: an arbitrary key/value pair that the Replication Controller uses for service discovery
  • kubecfg: the command line config tool
  • Service: an endpoint that provides load balancing across a replicated group of pods

To manage resources within Kubernetes, you interact with the Kubernetes API, pulling down the Kubernetes binaries that provide all the services necessary to get a Kubernetes configuration up and running. Like most other cluster management solutions, Kubernetes works by creating a master, which exposes the Kubernetes API and allows you to request certain tasks to be completed. The master then spawns containers to handle the workload. Aside from running Docker, each node runs the kubelet service — an agent that works with the container manifest — and a proxy service. The Kubernetes control plane comprises many components, but they all run on the single Kubernetes master node.

Container Clustering tools and techniques


With all the noise around, companies are moving their server applications from virtual machines (VMs) to containers.

So why does everyone love containers?

Containers use shared operating systems. That means they are much more efficient than hypervisors in system resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance.  Among other benefits, containers offer a software architecture that provides portability and managed distribution of software assets. This has caused enterprises and software vendors to embrace container technology as they prepare for cloud.

But despite their success, containers still present challenges. Container scalability, for example, remains somewhat of a mystery. Some organizations struggle when trying to scale Docker, one of the leading container technologies.

There are a couple of open source container cluster management tools. Each cluster management technology has something unique and different to offer.

  • Apache Mesos is mature and can also run non-containerized workloads (such as Hadoop).
  • Kubernetes tends to be opinionated. It offers little customization (for now) and networking can be a challenge. But it is moving fast and the defaults can be a quick way to get started.
  • Docker Swarm is from Docker, Inc and offers the familiar Docker (single-host) API. It is the easiest to get started with, but also the least mature (as of this writing).

Container orchestration, scheduling and clustering tools vary in their features and implementation, but some of the core principles are the same.

  • Pool resources from a group of hosts into their components (CPU/RAM) and make them available for consumption. Also make sure the resources don’t become exhausted via over-provisioning or host failure.
  • Service supervision provides a service load balancer/entry point and make sure the service remains running.
  • Scaling functionality scales a service (automatic or manually) by allowing an operator to create more or fewer instances.
  • System metadata provides stats about running instances, scheduling, and container health.

Going forward, I am going to do POCs on the above tools. I have already written a couple of blogs on Mesos and am going to add more on the remaining tools.

Setup Standalone Mesos on Ubuntu

Install DC/OS on Vagrant

Using Jenkins on DC/OS backed by NFS

Setup Standalone Mesos on Ubuntu


In this blog, I will walk you through setting up standalone Apache Mesos and Marathon on Ubuntu. A standalone installation means running the Mesos master, Mesos slave and Marathon on one machine. In this blog, we are going to install this Mesos cluster on an OpenStack VM. You can even try it on your local machine if your system runs the Ubuntu operating system.

Introduction

Mesos is a scalable and distributed resource manager designed to manage resources for data centers.

Mesos can be thought of as a "distributed kernel" that achieves resource sharing via APIs in various languages (C++, Python, Go, etc.). Mesos relies on cgroups to do process isolation on top of distributed file systems (e.g., HDFS). Using Mesos you can create and run clusters running heterogeneous tasks. Let us see what it is all about and cover some fundamentals of getting Mesos up and running.

Note: You can also try DC/OS, an entirely open source software project based on Apache Mesos, Marathon and a whole lot more.

1. Add the Mesosphere Repositories to your Hosts

First, add the Mesosphere repository to your sources list. This process involves downloading the Mesosphere project’s key from the Ubuntu keyserver and then crafting the correct URL for our Ubuntu release. The project provides a convenient way of doing this:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
$ DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
$ CODENAME=$(lsb_release -cs)
$ echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list

2. Download package lists and information on the latest versions:

$ sudo apt-get -y update

3. Install Mesos and Marathon

A Mesos cluster needs at least one Mesos master and one Mesos slave. The Mesos master coordinates and dispatches tasks onto the Mesos slaves, which run the jobs. In production clusters you typically run Mesos in High Availability (HA) mode with three or more Mesos masters, three or more ZooKeepers, and many Mesos slaves.

We will install Marathon and the mesos meta package, which also pulls in zookeeper as a dependency.

$ sudo apt-get -y install mesos marathon

4. Reboot the system:

$ sudo reboot

After the packages are downloaded and installed, all of the Mesos and Marathon dependencies and startup scripts are ready for use on a single node cluster, including Apache ZooKeeper.
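
Before moving on, you can confirm that everything came back up after the reboot; a hedged check using the service names installed by the Mesosphere packages:

$ sudo service zookeeper status
$ sudo service mesos-master status
$ sudo service mesos-slave status
$ sudo service marathon status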

5. Mesos Web interface:

You should be able to access the Mesos web interface on port 5050 of your VM. For example, if the IP address of the VM is 192.168.0.131, then access the Mesos web UI at http://192.168.0.131:5050/. In case you have installed it on your local system, you can access the Mesos web UI at 127.0.0.1:5050. The master web console will show the number of active slaves as 1 and the registered Marathon framework. Take a few seconds to navigate around the interface to see what information is available.

[Screenshot: Mesos web UI]

6. Marathon Web interface:

The Marathon web UI will be accessible at http://192.168.0.131:8080.

[Screenshot: Marathon web UI]

From here, you can click on the “Create” button in the upper-left corner. This will pop up an overlay where you can add information about your new application:

[Screenshot: Marathon new application overlay]

Fill in the fields with the requirements for your app. The only fields that are mandatory are:

  • ID: A unique ID selected by the user to identify a process. This can be whatever you’d like, but must be unique.
  • Command: This is the actual command that will be run by Marathon. This is the process that will be monitored and restarted if it fails.

Using this information, you can set up a simple service that just prints “hello” and sleeps for 10 seconds. We will call this “hello”:

[Screenshot: the simple "hello" app]
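
If you are wondering what to put in the Command field for this, a minimal sketch would be:

while true; do echo hello; sleep 10; done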

When you return to the interface, the service will go from “Deploying” to “Running”:

[Screenshot: the service in the Running state]


Using Jenkins on DC/OS backed by NFS


In the following blog, I am going to explain how to use Jenkins on DC/OS and how to configure NFS for persistent storage. Jenkins on DC/OS works by running the Jenkins master as a one-instance Marathon application. Once the Jenkins master comes online, it registers itself as a Mesos framework (using the mesos-jenkins plugin).

Prerequisites

I assume you have a running DC/OS cluster on your machine. If not, then follow my blog Install DC/OS on Vagrant to install and configure open source DC/OS on Vagrant.

About NFS (Network File System) Mounts

NFS, or Network File System, is a distributed filesystem protocol that allows you to mount remote directories on your server. This lets you leverage storage space in a different location and write to the same space from multiple servers easily. NFS works well for directories that will be accessed regularly.

In my DC/OS cluster, I am running one master and two agents. You can check with the vagrant status command:

$ vagrant status


Current machine states:

m1     running (virtualbox)  ===> 192.168.65.90
a1     running (virtualbox)  ===> 192.168.65.111
a2     running (virtualbox)  ===> 192.168.65.121
p1     running (virtualbox)  ===> 192.168.65.60
boot   running (virtualbox)

We are going to configure the NFS server on the p1 VM, which will work as the NFS server for our a1 and a2 clients.

p1 ==> NFS Server ==> 192.168.65.60
a1 ==> Client ==> 192.168.65.111
a2 ==> Client ==> 192.168.65.121

Setup

The setup should be done as root. You can become the root user by typing:

$ sudo su

1. Setting Up the NFS Server(192.168.65.60)

a) Download the Required Software

Start off by using yum to install the nfs programs.

$ yum install nfs-utils nfs-utils-lib 

Subsequently, run several startup scripts for the NFS server:

$ chkconfig nfs on 
$ service rpcbind start
$ service nfs start

b) Export the Shared Directory

The next step is to decide which directory we want to share with the client servers. The chosen directory should then be added to the /etc/exports file, which specifies both the directory to be shared and the details of how it is shared. We are going to share the directory /jenkins_data.
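
If the directory does not exist on the server yet, create it first:

$ mkdir /jenkins_data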

We need to export the directory:

$ vi /etc/exports 

Add the following lines to the bottom of the file, sharing the directory with the clients:


/jenkins_data           192.168.65.111(rw,sync,no_root_squash,no_subtree_check)
/jenkins_data           192.168.65.121(rw,sync,no_root_squash,no_subtree_check)

Once you have entered in the settings for each directory, run the following command to export them:

 $ exportfs -a 
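
To double-check what is being exported, list the active exports on the server, or query them from a client (assuming showmount is available there):

$ exportfs -v
$ showmount -e 192.168.65.60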

2. Setting Up the NFS Clients on Agents a1 (192.168.65.111) and a2 (192.168.65.121)

a) Download the Required Software

Start off by using yum to install the nfs programs.

$ yum install nfs-utils nfs-utils-lib 

b) Mount the Directories

Once the programs have been downloaded to the client servers, create the directory that will contain the NFS shared files:

$ mkdir -p /mnt/jenkins 

Then go ahead and mount it

$ mount 192.168.65.60:/jenkins_data /mnt/jenkins 

You can use the df -h command to check that the directory has been mounted. You will see it last on the list.

$ df -h

Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root      8.8G  3.1G  5.7G  35% /
devtmpfs                     739M     0  739M   0% /dev
tmpfs                        749M     0  749M   0% /dev/shm
tmpfs                        749M  8.3M  741M   2% /run
tmpfs                        749M     0  749M   0% /sys/fs/cgroup
/dev/sda1                    497M  164M  334M  33% /boot
none                         455G  164G  292G  36% /vagrant
tmpfs                        150M     0  150M   0% /run/user/501
192.168.65.60:/jenkins_data  8.8G  2.9G  6.0G  33% /mnt/jenkins

Additionally, use the mount command to see the entire list of mounted file systems.

$ mount

systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
none on /vagrant type vboxsf (rw,nodev,relatime)
tmpfs on /run/user/501 type tmpfs (rw,nosuid,nodev,relatime,size=153292k,mode=700,uid=501,gid=501)
192.168.65.60:/jenkins_data on /mnt/jenkins type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.65.121,local_lock=none,addr=192.168.65.60)

You can ensure that the mount is always active by adding the directory to the fstab file on the client. This will ensure that the mount starts up after the server reboots.

$ vi /etc/fstab 

And add the below line:

192.168.65.60:/jenkins_data  /mnt/jenkins   nfs      auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

3. Testing the NFS Mount

Once you have successfully mounted your NFS directory, you can test that it works by creating a file on the client and checking its availability on the server.

Create a file in the directory to try it out:

$ touch /mnt/jenkins/example 

You should then be able to find the file on the server in the /jenkins_data directory.

$ ls /jenkins_data 

4. Installing Jenkins backed by NFS

If you already have a mount point, great! Create an options.json file that resembles the following example:

$ cat options.json
{
    "jenkins": {
        "framework-name": "jenkins-prod",
        "host-volume": "/mnt/jenkins",
        "cpus": 2.0,
        "mem": 4096
    }
}

Then, install Jenkins by running the following command:

$ dcos package install jenkins --options=options.json 

Jenkins will now be available with persistence storage.