Container Clustering tools and techniques


There is a lot of noise around containers these days: companies are moving their server applications from virtual machines (VMs) to containers.

So why does everyone love containers?

Containers use a shared operating system. That means they are much more efficient than hypervisors in terms of system resources. Instead of virtualizing hardware, containers rest on top of a single Linux instance. Among other benefits, containers offer a software architecture that provides portability and managed distribution of software assets. This has caused enterprises and software vendors to embrace container technology as they prepare for the cloud.

But despite their success, containers still present challenges. Container scalability, for example, remains somewhat of a mystery. Some organizations struggle when trying to scale Docker, one of the leading container technologies.

There are a couple of open source container cluster management tools. Each cluster management technology has something unique and different to offer.

  • Apache Mesos is mature and can also run non-containerized workloads (such as Hadoop).
  • Kubernetes tends to be opinionated. It offers little customization (for now) and networking can be a challenge. But it is moving fast and the defaults can be a quick way to get started.
  • Docker Swarm is from Docker, Inc and offers the familiar Docker (single-host) API. It is the easiest to get started with, but also the least mature (as of this writing).

Container orchestration, scheduling, and clustering tools vary in their features and implementation, but some of the core principles are the same.

  • Resource pooling breaks a group of hosts down into their components (CPU/RAM) and makes them available for consumption, while ensuring the resources don’t become exhausted through over-provisioning or host failure.
  • Service supervision provides a load balancer/entry point for the service and makes sure the service remains running.
  • Scaling functionality scales a service (automatically or manually) by allowing an operator to create more or fewer instances (see the sketch after this list).
  • System metadata provides stats about running instances, scheduling, and container health.
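
To make the scaling point concrete, here is a hypothetical sketch of how an operator might scale a service with Marathon (covered later in this post) through its REST API; the host, port, and app ID are assumptions for illustration only:

$ curl -X PUT http://192.168.0.131:8080/v2/apps/hello \
       -H "Content-Type: application/json" \
       -d '{"instances": 3}'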

Going forward, I am going to do a POC on the above tools. I have already written a couple of blogs on Mesos and am going to add more on the remaining tools.

Setup Standalone Mesos on Ubuntu

Install DC/OS on Vagrant

Using Jenkins on DC/OS backed by NFS

Setup Standalone Mesos on Ubuntu


In this blog, I will walk you through setting up a standalone Apache Mesos and Marathon on Ubuntu. A standalone installation means running the Mesos master, Mesos slave, and Marathon on one machine. We are going to install this Mesos cluster on an OpenStack VM, but you can also try it on your local machine if your system is running the Ubuntu operating system.

Introduction

Mesos is a scalable and distributed resource manager designed to manage resources for data centers.

Mesos can be thought of as a “distributed kernel” that achieves resource sharing via APIs in various languages (C++, Python, Go, etc.). Mesos relies on cgroups for process isolation and works on top of distributed file systems (e.g., HDFS). Using Mesos you can create and run clusters running heterogeneous tasks. Let us see what it is all about and cover some fundamentals of getting Mesos up and running.

Note: You can also try DC/OS, which is an entirely open source software project based on Apache Mesos, Marathon, and a whole lot more.

1. Add the Mesosphere Repositories to your Hosts

First, add the Mesosphere repository to your sources list. This process involves downloading the Mesosphere project’s key from the Ubuntu keyserver and then crafting the correct URL for our Ubuntu release. The project provides a convenient way of doing this:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
$ DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
$ CODENAME=$(lsb_release -cs)
$ echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list

2. Download package lists and information on the latest versions:

$ sudo apt-get -y update
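
Optionally, you can check that the Mesos and Marathon packages are now visible from the new repository (a quick sanity check, not part of the original steps):

$ apt-cache policy mesos marathon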

3. Install Mesos and Marathon

A Mesos cluster needs at least one Mesos Master and one Mesos Slave. The Mesos Master coordinates and dispatches tasks onto the Mesos Slaves, which run the jobs. In production clusters you typically run Mesos in High Availability (HA) mode with three or more Mesos Masters, three or more ZooKeepers, and many Mesos Slaves.

We will install Marathon and the mesos meta package, which also pulls in ZooKeeper as a dependency.

$ sudo apt-get -y install mesos marathon

4. Reboot the system:

$ sudo reboot

After the packages are downloaded and installed, all of the Mesos and Marathon dependencies and startup scripts, including Apache ZooKeeper, are ready for use on a single-node cluster.
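
After the reboot you can confirm that the relevant services came up. On Ubuntu, the Mesosphere packages typically register services with the names below, though the exact names and init system may vary by release:

$ sudo service zookeeper status
$ sudo service mesos-master status
$ sudo service mesos-slave status
$ sudo service marathon status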

5. Mesos Web interface:

You should be able to access the Mesos web interface on port 5050 of your VM. For example, if the IP address of the VM is 192.168.0.131, access the Mesos web UI at http://192.168.0.131:5050/. If you installed it on your local system, you can access the Mesos web UI at http://127.0.0.1:5050. The master web console will show the number of active slaves as 1 and the registered Marathon framework. Take a few seconds to navigate around the interface to see what information is available.

[Screenshot: Mesos web UI]
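
You can also query the master from the command line. Older Mesos releases expose the cluster state at /master/state.json (newer ones use /state), so one of the following should work; adjust the IP for your setup:

$ curl -s http://192.168.0.131:5050/master/state.json
$ curl -s http://192.168.0.131:5050/state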

6. Marathon Web interface:

The Marathon web UI will be accessible at http://192.168.0.131:8080.

[Screenshot: Marathon web UI]

From here, you can click on the “Create” button in the upper-left corner. This will pop up an overlay where you can add information about your new application:

[Screenshot: Marathon “New Application” dialog]

Fill in the fields with the requirements for your app. The only fields that are mandatory are:

  • ID: A unique ID selected by the user to identify a process. This can be whatever you’d like, but must be unique.
  • Command: This is the actual command that will be run by Marathon. This is the process that will be monitored and restarted if it fails.

Using this information, you can set up a simple service that just prints “hello” and sleeps for 10 seconds. We will call this “hello”:

[Screenshot: “hello” application definition]
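
Equivalently, instead of using the form above, you can create the same application through Marathon’s REST API; this is a sketch assuming Marathon is listening on 192.168.0.131:8080:

$ curl -X POST http://192.168.0.131:8080/v2/apps \
       -H "Content-Type: application/json" \
       -d '{"id": "hello", "cmd": "echo hello; sleep 10", "cpus": 0.1, "mem": 32, "instances": 1}'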

When you return to the interface, the service will go from “Deploying” to “Running”:

[Screenshot: “hello” application in the Running state]

Using Jenkins on DC/OS backed by NFS


In the following blog, I am going to explain how to use Jenkins on DC/OS and how to configure NFS for persistent storage. Jenkins on DC/OS works by running the Jenkins master as a single-instance Marathon application. Once the Jenkins master comes online, it registers itself as a Mesos framework (using the jenkins-mesos plugin).

Prerequisites

I assume you have a running DC/OS cluster on your machine. If not, follow my blog Install DC/OS on Vagrant to install and configure open source DC/OS on Vagrant.

About NFS (Network File System) Mounts

NFS, or Network File System, is a distributed filesystem protocol that allows you to mount remote directories on your server. This allows you to leverage storage space in a different location and to write to the same space from multiple servers easily. NFS works well for directories that will have to be accessed regularly.

In my DC/OS cluster, I am running one master and two agents. You can check with the vagrant status command:

$ vagrant status


Current machine states:

m1       running (virtualbox)   ===> 192.168.65.90
a1       running (virtualbox)   ===> 192.168.65.111
a2       running (virtualbox)   ===> 192.168.65.121
p1       running (virtualbox)   ===> 192.168.65.60
boot     running (virtualbox)

We are going to configure the p1 VM as the NFS server for our a1 and a2 clients.

p1 ==> NFS Server ==> 192.168.65.60
a1 ==> Client ==> 192.168.65.111
a2 ==> Client ==> 192.168.65.121

Setup

The setup should be done as root. You can switch to the root user by typing:

$ sudo su

1. Setting Up the NFS Server (192.168.65.60)

a) Download the Required Software

Start off by using yum to install the nfs programs.

$ yum install nfs-utils nfs-utils-lib 

Then enable and start the NFS services:

$ chkconfig nfs on 
$ service rpcbind start
$ service nfs start

b) Export the Shared Directory

The next step is to decide which directory we want to share with the client servers. The chosen directory should then be added to the /etc/exports file, which specifies both the directory to be shared and the details of how it is shared. We are going to share the directory /jenkins_data.
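
If the /jenkins_data directory does not already exist on the server, create it first (an assumed step, not shown in the original post):

$ mkdir -p /jenkins_data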

We need to export the directory:

$ vi /etc/exports 

Add the following lines to the bottom of the file, sharing the directory with the clients:


/jenkins_data           192.168.65.111(rw,sync,no_root_squash,no_subtree_check)
/jenkins_data           192.168.65.121(rw,sync,no_root_squash,no_subtree_check)

Once you have entered the settings for each directory, run the following command to export them:

 $ exportfs -a 
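
You can verify that the directory is exported as expected with either of the following optional checks:

$ exportfs -v
$ showmount -e localhost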

2. Setting Up the NFS Clients on Agents a1 (192.168.65.111) and a2 (192.168.65.121)

a) Download the Required Software

Start off by using yum to install the nfs programs.

$ yum install nfs-utils nfs-utils-lib 

b) Mount the Directories
Once the packages have been installed on the client servers, create the directory that will contain the NFS shared files:

$ mkdir -p /mnt/jenkins 

Then go ahead and mount it:

$ mount 192.168.65.60:/jenkins_data /mnt/jenkins 

You can use the df -h command to check that the directory has been mounted. You will see it last on the list.

$ df -h

Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root      8.8G  3.1G  5.7G  35% /
devtmpfs                     739M     0  739M   0% /dev
tmpfs                        749M     0  749M   0% /dev/shm
tmpfs                        749M  8.3M  741M   2% /run
tmpfs                        749M     0  749M   0% /sys/fs/cgroup
/dev/sda1                    497M  164M  334M  33% /boot
none                         455G  164G  292G  36% /vagrant
tmpfs                        150M     0  150M   0% /run/user/501
192.168.65.60:/jenkins_data  8.8G  2.9G  6.0G  33% /mnt/jenkins

Additionally, use the mount command to see the entire list of mounted file systems.

$ mount

systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
none on /vagrant type vboxsf (rw,nodev,relatime)
tmpfs on /run/user/501 type tmpfs (rw,nosuid,nodev,relatime,size=153292k,mode=700,uid=501,gid=501)
192.168.65.60:/jenkins_data on /mnt/jenkins type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.65.121,local_lock=none,addr=192.168.65.60)

You can ensure that the mount is always active by adding the directory to the fstab file on the client. This will ensure that the mount comes back up after the client reboots.

$ vi /etc/fstab 

And add the below line:

192.168.65.60:/jenkins_data  /mnt/jenkins   nfs      auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0
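
To confirm the fstab entry works without rebooting, you can unmount the share and remount everything listed in fstab (assuming nothing is currently using the mount):

$ umount /mnt/jenkins
$ mount -a
$ df -h | grep jenkins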

3. Testing the NFS Mount
Once you have successfully mounted your NFS directory, you can test that it works by creating a file on the Client and checking its availability on the Server.

Create a file in the directory to try it out:

$ touch /mnt/jenkins/example 

You should then be able to find the file on the Server in the /jenkins_data directory.

$ ls /jenkins_data 

4. Installing Jenkins backed by NFS
If you already have a mount point, great! Create an options.json file that resembles the following example:

$ cat options.json
{
    "jenkins": {
        "framework-name": "jenkins-prod",
        "host-volume": "/mnt/jenkins",
        "cpus": 2.0,
        "mem": 4096
    }
}

Then, install Jenkins by running the following command:

$ dcos package install jenkins --options=options.json 

Jenkins will now be available with persistent storage.
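
You can also verify the deployment from the DC/OS CLI; these subcommands existed in the DC/OS 1.7-era CLI, though the output format may differ in other versions:

$ dcos package list
$ dcos marathon app list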

Install DC/OS on Vagrant


This installation method uses Vagrant to create a cluster of virtual machines on your local machine that can be used for demos, development, and testing with DC/OS.

Prerequisites

Your machine should have at least 16GB of RAM.

Note: DC/OS on Vagrant is intended only for demos, development, and testing. Do not use it for production.

A) Download DC/OS Installer

First, it’s necessary to download the DC/OS 1.7.0 Installer. Save this somewhere safe – you’ll need this when setting up DC/OS Vagrant:

$ curl -O https://downloads.dcos.io/dcos/EarlyAccess/dcos_generate_config.sh

B) Install DC/OS Vagrant

1. Install & Configure Vagrant & VirtualBox

I assume you have working Vagrant and VirtualBox installations on your machine. If not, install Vagrant and VirtualBox first.

Installing VirtualBox:

 $ sudo apt-get install virtualbox

Installing Vagrant:

$ sudo apt-get install vagrant

Install the dkms package to ensure that the VirtualBox host kernel modules (vboxdrv, vboxnetflt and vboxnetadp) are properly updated if the Linux kernel version changes during the next apt-get upgrade.

$ sudo apt-get install virtualbox-dkms

2. Clone DC/OS Vagrant Repo

Change to the directory where you want to run DC/OS, then clone the dcos-vagrant repo and cd into it.

$ git clone https://github.com/dcos/dcos-vagrant
$ cd dcos-vagrant

You can also upgrade to the latest version of dcos-vagrant:

  • Change into the repo directory (e.g. cd ~/workspace/dcos-vagrant)
  • Checkout the new desired version (e.g. git checkout v0.6.0)
  • Pull the new code (e.g. git pull)

3. Configure VirtualBox Networking

Configure the host-only vboxnet0 network to use the 192.168.65.0/24 subnet.

Create the vboxnet0 network if it does not exist:

$ VBoxManage list hostonlyifs | grep vboxnet0 -q || VBoxManage hostonlyif create

Set the vboxnet0 subnet:

$ VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.65.1
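
You can confirm the subnet assignment afterwards with an optional check:

$ VBoxManage list hostonlyifs | grep -A 3 vboxnet0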

4. Install Vagrant Host Manager Plugin

The Host Manager plugin manages the /etc/hosts file on the VMs and the host to allow access by hostname.

$ vagrant plugin install vagrant-hostmanager

This will update /etc/hosts every time VMs are created or destroyed.

5. Download the DC/OS Installer

Since we already downloaded the installer, we need to move dcos_generate_config.sh to the root of the repo (the repo will be mounted into the Vagrant machines as /vagrant).

$ mv dcos_generate_config.sh dcos-vagrant/

6. Configure the DC/OS Installer

Since we downloaded the DC/OS 1.7.0 installer, we will point the DCOS_CONFIG_PATH environment variable at etc/config-1.7.yaml.

$ export DCOS_CONFIG_PATH=etc/config-1.7.yaml

The path to the config file is relative to the repo dir, because the repo dir will be mounted as /vagrant within each VM. Alternate configurations may be added to the <repo>/etc/ dir and configured in a similar manner.

7. Configure the DC/OS Machine Types

There is a sample VagrantConfig.yaml.example in the dcos-vagrant repo that defines the DC/OS machine types. At this point, we do not need to make any changes.

$ cd <repo>
$ cp VagrantConfig.yaml.example VagrantConfig.yaml

8. Download the VM Base Image

By default, Vagrant will automatically download the latest VM Base Image (virtualbox box) when you run vagrant up <machines>, but since downloading the image takes a while the first time, you may want to trigger the download manually.

$ vagrant box add https://downloads.dcos.io/dcos-vagrant/metadata.json

If you already have the latest version downloaded, the above command will fail.

9. Deploy DC/OS

Specify which machines to deploy in your cluster. Below are a few options to try.

a) Minimal Cluster

A minimal cluster supports launching small Marathon apps. Most other services will fail to install, because they require more than one agent node.

Requires > 4.5GB free memory (using the example VagrantConfig).

$ vagrant up m1 a1 boot

b) Small Cluster

A small cluster supports running tasks on multiple nodes.

Requires > 7.25GB free memory (using the example VagrantConfig).

$ vagrant up m1 a1 a2 p1 boot

c) Medium Cluster

A medium cluster supports the installation of a minimally configured Cassandra.

Requires > 10GB free memory (using the example VagrantConfig).

$ vagrant up m1 a1 a2 a3 a4 p1 boot

d) Large Cluster

Requires > 17GB free memory (using the example VagrantConfig).

A large cluster supports master node fail over, multiple framework installs, and multiple public load balancers.

$ vagrant up m1 m2 m3 a1 a2 a3 a4 a5 a6 p1 p2 p3 boot

10. Access DC/OS from the browser

Once the machines are created and provisioned, DC/OS will be installed. Once complete, the web interface will be available at http://m1.dcos/.

11. Additionally, you can authenticate to DC/OS

If the DC/OS CLI or the web dashboard prompts for a username and password, the default superuser credentials are admin/admin.

Install Jenkins on Ubuntu


Jenkins is a top-notch application. It originally started as Hudson in 2004, but after a conflict in 2011 the project split and continued under the name Jenkins. It enables you to build and deploy software or websites to various endpoints and to run unit/behaviour-driven tests. In this blog, I will demonstrate how to install and configure Jenkins and create your first job.

Step 1. Verify Java Installation

$ java -version

If Java is installed, you will see the Java version, like:

java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

If Java is not installed, follow step 2 to install it and set the JAVA_HOME path.

Step 2. Install Java and Set JAVA_HOME

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
$ sudo apt-get install oracle-java8-set-default

$ cd /usr/lib/jvm/

$ ls -l

drwxr-xr-x 4 root root 4096 Apr 16 11:07 ./
drwxr-xr-x 163 root root 20480 Apr 11 06:57 ../
lrwxrwxrwx 1 root root 24 Mar 23 2014 default-java -> java-1.7.0-openjdk-amd64/
lrwxrwxrwx 1 root root 20 Mar 24 16:08 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64/
-rw-r--r-- 1 root root 2439 Mar 24 16:07 .java-1.7.0-openjdk-amd64.jinfo
drwxr-xr-x 5 root root 4096 Apr 16 11:07 java-7-openjdk-amd64/
drwxr-xr-x 8 root root 4096 Apr 1 08:35 java-8-oracle/
-rw-r--r-- 1 root root 2643 Apr 1 08:35 .java-8-oracle.jinfo

Then change directory to the latest Java:

$ cd java-8-oracle

$ pwd

/usr/lib/jvm/java-8-oracle

$ sudo vi /etc/environment

Add the below path, as per your Java installation, at the end of the file:

JAVA_HOME="/usr/lib/jvm/java-8-oracle"

$ source /etc/environment
$ echo $JAVA_HOME

/usr/lib/jvm/java-8-oracle

Step 3. Install Jenkins.

Before you can install Jenkins, add the key and source list to apt. First add the key:

$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -

Then, create a sources list for Jenkins:

$ echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list

Update apt’s cache before installing Jenkins:

$ apt-get update

After the cache has been updated, proceed with installing Jenkins. Please note that it has a large number of dependencies, so it might take a few moments to install them all.

$ apt-get install jenkins

Jenkins will be launched as a daemon on startup. Check /etc/init.d/jenkins for more details. To run this service, a ‘jenkins’ user is created. The log file will be placed in /var/log/jenkins/jenkins.log. Check this file when troubleshooting.

Step 4. Start the Jenkins service.

$ /etc/init.d/jenkins start

Jenkins will write log files to /var/log/jenkins/jenkins.log. You can also fine-tune the configuration.

Step 5. Access Jenkins.

Finally, after the installation is complete, you can visit the following address in your browser: http://your-ip-address:8080
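
If you are running a newer Jenkins release (2.x and later), the first visit will ask for an initial admin password; on Debian/Ubuntu package installs it can usually be read from the secrets file (the exact path may vary by version):

$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword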

Congratulations! You have successfully installed Jenkins.

Introduction to Jenkins


Jenkins is an adaptable open source software tool built with the Java language. It helps developers build and test their software continuously. Basically, CI/CD is the practice of automatically running project tests on a non-developer machine every time new code is pushed to the source repository.


With Jenkins, organizations can speed up the software development workflow through automation. Jenkins handles development processes of all kinds, including building, documenting, testing, packaging, staging, deployment, static analysis, and many more.

Jenkins provides a platform where installation, development, deployment, and production can happen simultaneously across a large number of machines.

[Diagram: Git, Gerrit, and Jenkins workflow]

Jenkins includes some great features, such as:

  1. Easy installation: just run java -jar jenkins.war or deploy it in a servlet container (see the example after this list). No additional install, no database. Installers and native packages are available as well.
  2. Easy configuration: Jenkins can be configured entirely from its friendly web GUI with extensive on-the-fly error checks and inline help.
  3. Rich plugin ecosystem: Jenkins integrates with virtually every SCM or build tool that exists. View plugins.
  4. Extensibility: Most parts of Jenkins can be extended and modified, and it’s easy to create new Jenkins plugins. This allows you to customize Jenkins to your needs.
  5. Distributed builds: Jenkins can distribute build/test loads to multiple computers with different operating systems. Building software for OS X, Linux, and Windows? No problem.
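
As referenced in the first item above, the quickest way to try Jenkins without a package manager is to run the WAR file directly; this is a sketch in which the download URL and port are assumptions based on the standard Jenkins distribution:

$ wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war
$ java -jar jenkins.war --httpPort=8080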