Install DC/OS on Vagrant

This installation method uses Vagrant to create a cluster of virtual machines on your local machine that can be used for demos, development, and testing with DC/OS.


Your machine should have at least 16GB of RAM.

Note: DC/OS on Vagrant is intended only for demos, development, and testing. Do not use it for production.

A) Download DC/OS Installer

First, it’s necessary to download the DC/OS 1.7.0 Installer. Save this somewhere safe – you’ll need this when setting up DC/OS Vagrant:

$ curl -O

B) Install DC/OS Vagrant

1. Install & Configure Vagrant & VirtualBox

I assume you have working Vagrant and VirtualBox installations on your machine. If not, first install Vagrant and VirtualBox on your system.

Installing Virtualbox:

$ sudo apt-get install virtualbox

Installing Vagrant:

$ sudo apt-get install vagrant

Install the dkms package to ensure that the VirtualBox host kernel modules (vboxdrv, vboxnetflt and vboxnetadp) are properly updated if the Linux kernel version changes during the next apt-get upgrade.

$ sudo apt-get install virtualbox-dkms
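Before moving on, it can help to confirm the tools are actually on the PATH. A small sketch that simply reports each command; VBoxManage, vagrant, and git are the commands the following steps rely on:

```shell
# Report whether each required tool is installed and on the PATH.
for cmd in VBoxManage vagrant git; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: NOT found - install it before continuing"
  fi
done
```

If any line reports NOT found, revisit the install commands above before continuing.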

2. Clone DC/OS Vagrant Repo

Change to the directory where you want to run DC/OS, clone the dcos-vagrant repo, and cd into it.

$ git clone
$ cd dcos-vagrant

You can also upgrade to the latest version of dcos-vagrant:

  • Change into the repo directory (e.g. cd ~/workspace/dcos-vagrant)
  • Pull the new code (e.g. git pull)
  • Check out the desired version (e.g. git checkout v0.6.0)
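The upgrade steps above can be sketched as a short shell session (the path and tag are the examples from the list; adjust them to your checkout):

```shell
# Upgrade an existing dcos-vagrant checkout to a tagged release.
repo=~/workspace/dcos-vagrant      # adjust to your checkout location
if cd "$repo" 2>/dev/null; then
  git pull                         # fetch and merge the new code
  git checkout v0.6.0              # switch to the desired release tag
else
  echo "repo not found at $repo"
fi
```

Pulling before checking out the tag avoids trying to pull while on a detached HEAD.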

3. Configure VirtualBox Networking

Configure the host-only vboxnet0 network to use the subnet.

Create the vboxnet0 network if it does not exist:

$ VBoxManage list hostonlyifs | grep vboxnet0 -q || VBoxManage hostonlyif create

Set the vboxnet0 subnet:

$ VBoxManage hostonlyif ipconfig vboxnet0 --ip

4. Install Vagrant Host Manager Plugin

The Host Manager plugin manages /etc/hosts on the VMs and the host to allow access by hostname.

$ vagrant plugin install vagrant-hostmanager

This will update /etc/hosts every time VMs are created or destroyed.

5. Download the DC/OS Installer

Move the installer you downloaded earlier to the root of the repo (the repo will be mounted into the Vagrant machines as /vagrant).

$ cd dcos-vagrant

6. Configure the DC/OS Installer

Since we downloaded the DC/OS 1.7.0 installer, export etc/config-1.7.yaml in the DCOS_CONFIG_PATH environment variable:

$ export DCOS_CONFIG_PATH=etc/config-1.7.yaml

The path to the config file is relative to the repo dir, because the repo dir will be mounted as /vagrant within each VM. Alternate configurations may be added to the <repo>/etc/ dir and configured in a similar manner.
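Because the path is relative, a quick guard can catch running the export from the wrong directory. A sketch; the file name matches the export above:

```shell
# Fail fast if the config path does not resolve from the current directory.
export DCOS_CONFIG_PATH=etc/config-1.7.yaml
if [ -f "$DCOS_CONFIG_PATH" ]; then
  echo "config found: $DCOS_CONFIG_PATH"
else
  echo "config not found - run this from the repo root" >&2
fi
```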

7. Configure the DC/OS Machine Types

The dcos-vagrant repo includes a sample VagrantConfig.yaml.example that defines the DC/OS machine types. At this point we do not need to change anything, just copy it:

$ cd <repo>
$ cp VagrantConfig.yaml.example VagrantConfig.yaml

8. Download the VM Base Image

By default, Vagrant will automatically download the latest VM Base Image (virtualbox box) when you run vagrant up <machines>, but since downloading the image takes a while the first time, you may want to trigger the download manually.

$ vagrant box add

If you already have the latest version downloaded, the above command will fail.

9. Deploy DC/OS

Specify which machines to deploy in your cluster. Below are a few options to try.

a) Minimal Cluster

A minimal cluster supports launching small Marathon apps. Most other services will fail to install, because they require more than one agent node.

Requires > 4.5GB free memory (using the example VagrantConfig).

$ vagrant up m1 a1 boot

b) Small Cluster

A small cluster supports running tasks on multiple nodes.

Requires > 7.25GB free memory (using the example VagrantConfig).

$ vagrant up m1 a1 a2 p1 boot

c) Medium Cluster

A medium cluster supports the installation of a minimally configured Cassandra.

Requires > 10GB free memory (using the example VagrantConfig).

$ vagrant up m1 a1 a2 a3 a4 p1 boot

d) Large Cluster

A large cluster supports master node fail over, multiple framework installs, and multiple public load balancers.

Requires > 17GB free memory (using the example VagrantConfig).

$ vagrant up m1 m2 m3 a1 a2 a3 a4 a5 a6 p1 p2 p3 boot
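Whichever size you pick, a quick pre-flight check against the "Requires" numbers above can save a failed deploy. A Linux-only sketch; need_gb is a value you set from the cluster size you chose:

```shell
# Pre-flight memory check before `vagrant up` (Linux only: reads /proc/meminfo).
need_gb=7   # e.g. the small cluster needs > 7.25GB; round as you see fit
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
avail_gb=$((avail_kb / 1024 / 1024))
if [ "$avail_gb" -gt "$need_gb" ]; then
  echo "OK: ${avail_gb}GB available"
else
  echo "warning: only ${avail_gb}GB available, need > ${need_gb}GB" >&2
fi
```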

10. Access DC/OS from the browser

Once the machines are created and provisioned, DC/OS will be installed. When the install completes, the web interface will be available at http://m1.dcos/.

11. Authenticate to DC/OS

If the DC/OS CLI or the web dashboard prompts for a username and password, the default superuser credentials are admin/admin.

Install Jenkins on Ubuntu

Jenkins is a top-notch application. It originally started as Hudson in 2004, but after a dispute in 2011 the project forked and continued under the name Jenkins. It enables you to build software, deploy software or websites to various endpoints, and run unit or behaviour-driven software tests. In this blog, I will demonstrate how to install and configure Jenkins and create your first job.

Step 1. Verify Java Installation

$ java -version

If Java is installed, you will see the version, for example:

java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

If Java is not installed, follow step 2 to install it and set the JAVA_HOME path.

Step 2: Install Java and Set JAVA_HOME

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
$ sudo apt-get install oracle-java8-set-default

$ cd /usr/lib/jvm/

$ ls -l

drwxr-xr-x 4 root root 4096 Apr 16 11:07 ./
drwxr-xr-x 163 root root 20480 Apr 11 06:57 ../
lrwxrwxrwx 1 root root 24 Mar 23 2014 default-java -> java-1.7.0-openjdk-amd64/
lrwxrwxrwx 1 root root 20 Mar 24 16:08 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64/
-rw-r--r-- 1 root root 2439 Mar 24 16:07 .java-1.7.0-openjdk-amd64.jinfo
drwxr-xr-x 5 root root 4096 Apr 16 11:07 java-7-openjdk-amd64/
drwxr-xr-x 8 root root 4096 Apr 1 08:35 java-8-oracle/
-rw-r--r-- 1 root root 2643 Apr 1 08:35 .java-8-oracle.jinfo

Then change into the latest Java directory:

$ cd java-8-oracle

$ pwd


$ sudo vi /etc/environment

Add the JAVA_HOME path at the end of the file, according to your Java installation:
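Based on the pwd output from the java-8-oracle directory above, the appended line would look like this (adjust it if your JDK lives elsewhere):

```shell
# Appended to /etc/environment (path from the java-8-oracle install above)
JAVA_HOME="/usr/lib/jvm/java-8-oracle"
```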


$ source /etc/environment
$ echo $JAVA_HOME


Step 3. Install Jenkins.

Before you can install Jenkins, add the key and source list to apt. First add the key:

$ wget -q -O - | apt-key add -

Then, create a sources list for Jenkins:

$ echo deb binary/ > /etc/apt/sources.list.d/jenkins.list

Update apt’s cache before installing Jenkins:

$ apt-get update

After the cache has been updated, proceed with installing Jenkins. Note that it has a large number of dependencies, so it may take a few moments to install them all.

$ apt-get install jenkins

Jenkins launches as a daemon on startup; check /etc/init.d/jenkins for more details. A 'jenkins' user is created to run the service. The log file is placed in /var/log/jenkins/jenkins.log; check this file when troubleshooting.

Step 4. Start the Jenkins service.

$ /etc/init.d/jenkins start

Jenkins will write log files to /var/log/jenkins/jenkins.log. You can also fine-tune the configuration.
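A quick way to inspect that log for a successful start or for errors (a sketch; "fully up and running" is the line Jenkins prints once initialization finishes):

```shell
# Scan the Jenkins log (path from the text) for startup or error lines.
log=/var/log/jenkins/jenkins.log
if [ -f "$log" ]; then
  grep -iE 'fully up and running|severe|error' "$log" | tail -n 5
else
  echo "log not found at $log" >&2
fi
```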

Step 5. Access Jenkins.

Finally, after the installation is complete, visit the following address in your browser: http://your-ip-address:8080

Congratulations! You have successfully installed Jenkins.

Introduction to Jenkins

Jenkins is an adaptable open-source tool built in Java. It helps developers build and test their software continuously. CI/CD is the practice of automatically running project tests on a non-developer machine every time new code is pushed to the source repository.


With Jenkins, organizations can speed up the software development workflow through automation. Jenkins handles development processes of all kinds, including building, documenting, testing, packaging, staging, deployment, static analysis, and more.

Jenkins provides a platform where installation, development, deployment, and production can proceed simultaneously across a large number of machines.


Jenkins includes some great features:

  1. Easy installation: Just run java -jar jenkins.war, or deploy it in a servlet container. No additional install, no database. Prefer an installer or native package? Those are available as well.
  2. Easy configuration: Jenkins can be configured entirely from its friendly web GUI with extensive on-the-fly error checks and inline help.
  3. Rich plugin ecosystem: Jenkins integrates with virtually every SCM or build tool that exists.
  4. Extensibility: Most parts of Jenkins can be extended and modified, and it’s easy to create new Jenkins plugins. This allows you to customize Jenkins to your needs.
  5. Distributed builds: Jenkins can distribute build/test loads to multiple computers with different operating systems. Building software for OS X, Linux, and Windows? No problem.


Dealing With Persistent Storage And Fault Tolerance In Apache Mesos

Why does storage matter?
  • Mesos offers great support for stateless services
  • But what about data persistence?
      • Distributed databases
      • Distributed filesystems
      • Docker volumes on distributed storage
  • Two perspectives:
      • Support for distributed storage frameworks
      • Support for frameworks using the distributed storage frameworks

Cloud Architect Musings


In part 1 of this series on Apache Mesos, I provided a high level overview of the technology and in part 2, I went into a bit more of a deep dive on the Mesos architecture.  I ended the last post stating I would do a follow-up post on how resource allocation is handled in Mesos.  However, I received some feedback from readers and decided I would first do this post on persistent storage and on fault tolerance before moving on to talk about resource allocation.

Persistent Storage Question

[Image: Screen Shot 2015-03-30 at 10.22.52 AM]

As my previous posts discussed, and as Mesos co-creator Ben Hindman’s diagram above indicates, a key benefit of using Mesos is the ability to run multiple types of applications (scheduled and initiated via frameworks as tasks) on the same set of compute nodes.  These tasks are abstracted from the actual nodes using isolation modules (currently some type of container technology)…


Apache Mesos: Open Source Community Done Right

Leveraging Mesos as the Ultimate Distributed Data Science

Cloud Architect Musings


I’ve been writing recently about Apache Mesos and its importance as an operating system kernel for the next-generation data center. You can read those posts here:

Part 1: True OS For The SDDC

Part 2: Digging Deeper Into Mesos

Part 3: Dealing with Persistent Storage And Fault Tolerance

Part 4: Resource Allocation

Besides the technology though, I am also excited about the progression of the Mesos project itself.  So I want to take a detour from my technology-focused posts to make some general observations about the project.  As I said on Twitter previously, I’ve been particularly impressed with three characteristics:

[Image: Screen Shot 2015-03-22 at 11.26.31 PM]

I’ve had the opportunity to speak to a number of people recently about Mesos and found it’s been extremely easy for them to grasp the concept and to understand the value of the technology.  This is very important for a project that is growing and looking to expand its…


Docker Tutorial Series : Part 6 : Docker Private Registry


This is part 6 of the Docker Tutorial Series.

In this part we shall take a look at how you can host a local Docker registry. In an earlier part, we looked at the Docker Hub, which is a public registry hosted by Docker. While the Docker Hub plays an important role in giving public visibility to your Docker images and letting you utilize quality Docker images put up by others, there is a clear need to set up your own private registry for your team or organization.


CI/CD the Docker Way

Tutum Blog


Tutum has recently announced build & testing capabilities and we are very happy not only about what we think is a huge step forward in providing the best CI/CD tools for developers, but also because of the amazing planning and team collaboration that was required from us to deliver such a big feature.

Tutum CI/CD design goals were to provide a flexible, agnostic, automatable and locally replicable solution for build & tests, which mirror the goals for the Docker project. This is why we based our solution for CI/CD on the open source image tutum/builder.

In a nutshell, tutum/builder performs the following steps:

  • Configure Docker credentials using $DOCKERCFG.
  • Clone the repository using $GIT_REPO and check out the commit in $GIT_TAG.
  • Build the Docker image specified by $DOCKERFILE_PATH.
  • Push the image to the image tag specified in $IMAGE_NAME.

How do we get the right values for…
