Setting up Jenkins with Mesos


In this blog, I am going to explain the steps to configure Mesos with Jenkins. Jenkins connects with Mesos using the mesos-jenkins plugin, which allows Jenkins to dynamically launch Jenkins slaves on a Mesos cluster depending on the workload. Put simply, whenever the Jenkins build queue starts getting bigger, this plugin automatically spins up additional Jenkins slave(s) on Mesos so that jobs can be scheduled immediately. Similarly, when a Jenkins slave is idle for a long time, it is automatically shut down.

1. Prerequisites

I assume you have:

1. A Jenkins instance with administrator privileges.
2. A working Mesos cluster with at least one master and one slave. For instructions on setting up a Mesos cluster, please refer to my blog Setup Standalone Mesos on Ubuntu.

I have installed Jenkins on my OpenStack environment at IP address 10.1.0.8, and the Mesos cluster is running at IP address 10.1.0.17 with one master and one slave.
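If you want to confirm that the Mesos master is reachable from the Jenkins machine before going any further, a quick request against the master's default web UI port (5050) is enough; the IP address below assumes my setup:

$ curl -s -o /dev/null -w "%{http_code}\n" http://10.1.0.17:5050/

A 200 (or a redirect code) means the master is up and reachable.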

Mesos-master

2. Installing the plugin

Go to Manage Jenkins > Manage Plugins and open the ‘Available’ tab, then select the ‘mesos’ plugin to install. Scroll all the way down and you’ll see the “Install without restart” button as well as the “Download now and install after restart” button. The former lets you start using the new plugin right away; the latter is the traditional behaviour, where new plugins take effect after the next restart.

select-jenkins-plugin

Click the button Install without restart on the left, and the plugin gets downloaded, installed, and activated:

Installing-jenkins-plugin-pravin
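If you prefer the command line over the UI, the Jenkins CLI can install the same plugin. The sketch below assumes Jenkins is reachable at 10.1.0.8 on the default port 8080 and that your user is allowed to install plugins (you may need to pass credentials to the CLI):

$ wget http://10.1.0.8:8080/jnlpJars/jenkins-cli.jar
$ java -jar jenkins-cli.jar -s http://10.1.0.8:8080/ install-plugin mesos -restart

The -restart flag restarts Jenkins once the plugin has been downloaded, which is equivalent to the “Download now and install after restart” button.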

3. Configuring the plugin

Now go to the ‘Configure System’ page in Jenkins. If the plugin is successfully installed, you should see an option to ‘Add a new cloud’ at the bottom of the page. Add the ‘Mesos Cloud’ and provide the path to the Mesos native library (e.g., libmesos.so on Linux or libmesos.dylib on OS X) and the address (HOST:PORT) of a running Mesos master.

configure-mesos-cloud
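The exact path of the native library depends on how Mesos was installed, and it must be present on the Jenkins master machine, since that is where the plugin's framework scheduler runs. If you are not sure where the library ended up, a quick search like the one below (run on the Jenkins machine) will locate it:

$ sudo find / -name "libmesos*" 2>/dev/null

Use the full path it prints (for example /usr/lib/libmesos.so) in the Mesos Cloud configuration.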

If you want to test connectivity to Mesos immediately, you can set ‘On-demand framework registration’ to ‘no’, and the framework will appear in Mesos as soon as you save. Otherwise, it will register and unregister automatically when a build is scheduled on Mesos.
Note: Ensure the Mesos slaves have a ‘jenkins’ user (or whichever user the Jenkins master is running as), and that this user has the JAVA_HOME environment variable set up.
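A minimal sketch of what that looks like on a Mesos slave is shown below; the JDK path is only an example and should match wherever Java is actually installed on your slaves:

$ sudo useradd -m -s /bin/bash jenkins
$ echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' | sudo tee -a /home/jenkins/.profile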

4. Set up and try out a build job

Now set up a new job in Jenkins. On the configure screen, check the box that says “Restrict where this project can be run”.

configure-job

Enter “mesos” (the label that was set in the plugin configuration).

At this point you are good to go. If you check the Mesos console, you should see that the Jenkins Scheduler is now registered as a framework, which means it is able to accept jobs:

mesos-slave
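If you prefer the command line to the Mesos web UI, the framework registration can also be checked from the master's state endpoint (the endpoint name can differ slightly between Mesos versions; this is only a quick sketch):

$ curl -s http://10.1.0.17:5050/master/state.json | grep -i jenkins

Seeing the Jenkins Scheduler framework name in the output confirms that the plugin has registered with the master.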

You are done with all the configuration; now you can run the job, and it will run on Mesos. You will see an executor magically appear, pause for a little bit (while the slave.jar is set up, etc.), and then run the job. The slave will become idle once the job completes, and after a few seconds it will shut down.


Configure Jenkins container slaves


In this blog, I am going to walk you through the configuration of the Docker plugin for Jenkins container slaves. The Docker plugin lets Jenkins use a Docker host to dynamically provision a slave, run a single build, and then tear down that slave. Optionally, the container can be committed, so that (for example) manual QA could be performed by importing the container into a local Docker provider and running it from there.

1. Prerequisites

I assume you have:

1. A Jenkins instance with administrator privileges.
2. The Docker plugin installed in Jenkins.
3. A working Docker host.

I have installed Jenkins on my OpenStack VM with IP address 10.1.0.8, and the Docker host is running on another VM with IP address 10.1.0.9.

2. Preparing Environment

A. Prepare docker host

Your Docker host needs to allow connections from a Jenkins instance hosted on a different machine, so you will need to open up TCP port 2375. This can be achieved by editing the Docker config file /etc/default/docker. Open this file in your favorite editor and make the following change:

$ sudo vi /etc/default/docker

DOCKER_OPTS="-H tcp://10.1.0.9:2375 -H unix:///var/run/docker.sock"
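Restart the Docker service on the Docker host so the new DOCKER_OPTS are picked up, and then, from the Jenkins machine (assuming a Docker client is installed there), verify that the daemon answers on TCP port 2375; the service command below assumes an Ubuntu-style init:

$ sudo service docker restart
$ docker -H tcp://10.1.0.9:2375 info

If the second command prints the daemon details instead of a connection error, the Jenkins machine can reach the Docker host.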

B. Creating a docker image

You can pull a ready-made Jenkins slave image using the docker pull command:

docker pull evarga/jenkins-slave

You need a docker image that has, as a minimum, an ssh server installed. You probably want a JDK, and you will also want a ‘jenkins’ user that can log in. To do all this, follow the activities below:

$ docker run -i -t evarga/jenkins-slave /bin/bash

root@044d879cbf8c:/# apt-get update
root@044d879cbf8c:/# apt-get install git
root@044d879cbf8c:/# exit

Once the container has been created, you need to commit it with a name to be used later, e.g. docker-slave-image:

$ docker ps -a
CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS                     PORTS               NAMES
672a8e7ec179        evarga/jenkins-slave   "/bin/bash"         5 weeks ago         Exited (0) 5 weeks ago                         agitated_albattani

$ docker commit 672a8e7ec179 docker-slave-image

$ docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
docker-slave-image     latest              d8e5c82b7ce1        1 minutes ago         655.3 MB
evarga/jenkins-slave   latest              4df728e7f65f        16 months ago       610.8 MB
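Before wiring the image into Jenkins, it is worth checking that the ‘jenkins’ user can actually log in over SSH. The sketch below assumes the committed image still carries the sshd binary and the jenkins/jenkins credentials from evarga/jenkins-slave; adjust if you have changed them:

$ docker run -d -p 2222:22 docker-slave-image /usr/sbin/sshd -D
$ ssh -p 2222 jenkins@10.1.0.9

If the login succeeds, the image is ready to be used by the Docker plugin.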

3. Jenkins Configuration

Docker appears in the ‘Cloud’ section of the Jenkins configuration; select “Docker” from the “Add a new cloud” drop-down menu and point it at the Docker host we prepared earlier (tcp://10.1.0.9:2375).

docker-configuration

If everything is set up correctly, Jenkins will start up a new Docker container for each build, run the build, and then shut down the container.

Job Configuration

Now, configure a job to use that “docker-slave-image” label as shown below:

job-configuration

Running that job, we will see that it spins up a container from “docker-slave-image” and builds the job inside it. Once the build completes, the container is destroyed automatically.

Container Clustering tools and techniques


There is a lot of noise around companies moving their server applications from virtual machines (VMs) to containers.

So why does everyone love containers?

Containers use a shared operating system, which means they are much more efficient than hypervisors in terms of system resources. Instead of virtualizing hardware, containers rest on top of a single Linux instance. Among other benefits, containers offer a software architecture that provides portability and managed distribution of software assets. This has caused enterprises and software vendors to embrace container technology as they prepare for the cloud.

But despite their success, containers still present challenges. Container scalability, for example, remains somewhat of a mystery. Some organizations struggle when trying to scale Docker, one of the leading container technologies.

There are a couple of open source container cluster management tools, and each cluster management technology has something unique and different to offer.

  • Apache Mesos is mature and can also run non-containerized workloads (such as Hadoop).
  • Kubernetes tends to be opinionated. It offers little customization (for now) and networking can be a challenge. But it is moving fast and the defaults can be a quick way to get started.
  • Docker Swarm is from Docker, Inc and offers the familiar Docker (single-host) API. It is the easiest to get started with, but also the least mature (as of this writing).

Container orchestration, scheduling, and clustering tools vary in their features and implementation, but some of the core principles are the same.

  • Resource pooling breaks a group of hosts down into their components (CPU/RAM) and makes them available for consumption, while making sure the resources don’t become exhausted through over-provisioning or host failure.
  • Service supervision provides a load balancer/entry point for the service and makes sure the service remains running.
  • Scaling functionality scales a service (automatically or manually) by allowing an operator to create more or fewer instances.
  • System metadata provides stats about running instances, scheduling, and container health.

Going forward, I am going to do POCs on the above tools. I have already written a couple of blogs on Mesos and will add more on the remaining tools.

Setup Standalone Mesos on Ubuntu

Install DC/OS on Vagrant

Using Jenkins on DC/OS backed by NFS