Marathon Authentication with Mesos as Framework


Mesos 0.15 added support for framework authentication. If you plan to use the local volumes feature of Marathon 1.0.0 RC1, then your Marathon framework must authenticate with Mesos. In this blog, I am going to explain how to create credentials and how Marathon authenticates with Mesos while registering with the Mesos master.


If you followed my last blog, then you know I am going to configure this on the same node, 10.1.0.17. I will change to my home directory and create two files.

1. Create a file defining framework principals and their secrets with the following content.

$ cd
$ cat > credentials <<EOF
principal1 secret1
principal2 secret2
EOF
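Since this file contains plaintext secrets, it is worth restricting its permissions. This hardening step is my own suggestion rather than part of the original walkthrough:

$ chmod 600 ~/credentials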

2. Start the master using the credentials file (assuming the file is ~/credentials):

$ sudo ./bin/mesos-master.sh --ip=10.1.0.17 --work_dir=/var/lib/mesos --zk=zk://10.1.0.17:2181/mesos --quorum=1 --authenticate --authenticate_slaves --credentials=/home/ubuntu/credentials

3. Create another file with a single credential in it (~/slave_credential):

principal1 secret1

4. Start the slave:

$ sudo ./bin/mesos-slave.sh --master=10.1.0.17:5050 --credential=/home/ubuntu/slave_credential

Your new slave should now have successfully authenticated with the master.

5. Start Marathon using the following command line arguments:

--mesos_authentication_principal principal2
--mesos_authentication_secret_file /home/ubuntu/marathon.secret
--mesos_role foo

Note: the framework needs to be registered with a specific role only if you want to use Mesos features that require specifying a role for a request.
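Marathon reads the secret from the file given to --mesos_authentication_secret_file. Assuming we reuse principal2, whose secret is secret2 in the credentials file created earlier, a sketch for creating that file looks like this (echo -n avoids a trailing newline, which would otherwise become part of the secret):

$ echo -n 'secret2' > /home/ubuntu/marathon.secret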

$ MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so ./bin/start -h 10.1.0.17 --master zk://10.1.0.17:2181/mesos --zk zk://10.1.0.17:2181/marathon --mesos_authentication_principal principal2 --mesos_authentication_secret_file /home/ubuntu/marathon.secret --mesos_role foo

Deploy Jenkins inside Docker using persistent volumes on Mesos and Marathon


In this blog, I am going to walk you through the steps to configure the Docker containerizer on Mesos slaves and deploy Jenkins on it using a persistent volume. After the release of Marathon 1.0.0 RC1, it is possible to deploy stateful applications using local persistent volumes.

Prerequisites
1. A working Mesos cluster, version 0.20.0 or later
2. A working Marathon 1.0.0 RC1

1. Install Docker version 1.0.0 or later on each slave node.

2. Update the slave configuration to specify the use of the Docker containerizer:

$ echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers

3. Increase the executor timeout to account for the potential delay in pulling a Docker image to the slave:

$ echo '5mins' | sudo tee /etc/mesos-slave/executor_registration_timeout

4. Restart the mesos-slave process to load the new configuration. Alternatively, pass containerizers and executor_registration_timeout as command line parameters:

$ sudo /usr/sbin/mesos-slave --master=127.0.0.1:5050 --credential=/home/pravinmishra/slave_credential --containerizers=docker,mesos --executor_registration_timeout=5mins
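Before moving on, you may want to confirm that Docker itself works on the slave. This smoke test is an extra check of my own, not one of the original steps:

$ sudo docker run --rm hello-world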

5. Restart Marathon and increase the Marathon command line option --task_launch_timeout to at least the executor timeout you set on your slaves in the previous step, in milliseconds (5mins = 300000 ms):

$ MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so ./bin/start -h 127.0.0.1  --master zk://127.0.0.1:2181/mesos --zk zk://127.0.0.1:2181/marathon --mesos_authentication_principal principal2 --mesos_authentication_secret_file /home/pravinmishra/marathon.secret --mesos_role foo --task_launch_timeout 300000

6. Deploy Jenkins with a persistent volume using the JSON app definition (jenkins-docker-json) shown in the "Create an application with local persistent volumes" section below.
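Assuming you have saved the JSON app definition as jenkins.json and Marathon is listening on 127.0.0.1:8080, a deployment sketch using Marathon's REST API looks like this (apps are created by POSTing to the /v2/apps endpoint):

$ curl -X POST http://127.0.0.1:8080/v2/apps -H 'Content-Type: application/json' -d @jenkins.json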

Set up standalone Mesos and Marathon 1.0.0 RC1 on Ubuntu



In this blog, I will walk you through setting up standalone Apache Mesos and Marathon 1.0.0 RC1 on Ubuntu.

Why Marathon 1.0.0 RC1

a) Support for Local Persistent Storage

Benefits of using local persistent volumes

  • All resources needed to run tasks of your stateful service are dynamically reserved, thus ensuring the ability to relaunch the task on the same node using the same volume when needed.
  • You don’t need constraints to pin a task to the particular agent where its data resides.
  • You can still use constraints to specify distribution logic.
  • Marathon lets you locate and destroy an unused persistent volume if you don’t need it anymore.

You can now launch tasks that use persistent volumes by specifying volumes either via the UI or the REST API. Marathon will reserve all required resources on a matching agent, and subsequently launch a task on that same agent if needed. Data within the volume will be retained even after relaunching the associated task.

Introduction

Mesos is a scalable, distributed resource manager designed to manage compute resources across entire data centers.

Mesos can be thought of as a “distributed kernel” that achieves resource sharing via APIs in various languages (C++, Python, Go, etc.). Mesos relies on cgroups for process isolation and works on top of distributed file systems (e.g., HDFS). Using Mesos you can create and run clusters running heterogeneous tasks. Let us see what it is all about and cover some fundamentals of getting Mesos up and running.

Note: You can also try DC/OS, an entirely open source software project based on Apache Mesos, Marathon, and a whole lot more.

1. Add the Mesosphere Repositories to your Hosts

First, add the Mesosphere repository to your sources list. This process involves downloading the Mesosphere project’s key from the Ubuntu keyserver and then crafting the correct URL for our Ubuntu release. The project provides a convenient way of doing this:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
$ DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
$ CODENAME=$(lsb_release -cs)
$ echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list

2. Download package lists and information on the latest versions:

$ sudo apt-get -y update
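As an optional sanity check, you can confirm that the Mesosphere repository is now being consulted by querying the package candidate:

$ apt-cache policy mesos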

3. Install Mesos

A Mesos cluster needs at least one Mesos Master and one Mesos Slave. The Mesos Master coordinates and delivers tasks onto the Mesos Slaves, which run the jobs. In production clusters you typically run Mesos in High Availability (HA) mode with three or more Mesos Masters, three or more ZooKeepers, and many Mesos Slaves.

We will install the mesos meta package, which also pulls in ZooKeeper as a dependency; Marathon is downloaded separately in the next step.

$ sudo apt-get -y install mesos
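The mesos package also lays down a ZooKeeper configuration file at /etc/mesos/zk, which tools can read to locate the cluster (the Jenkins deployment later in this post uses $(cat /etc/mesos/zk)). By default it should contain something like zk://localhost:2181/mesos:

$ cat /etc/mesos/zk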

4. Download and unpack Marathon 1.0.0 RC1:

$ curl -O http://downloads.mesosphere.com/marathon/v1.0.0-RC1/marathon-1.0.0-RC1.tgz
$ tar xzf marathon-1.0.0-RC1.tgz

5. Stop the packaged Mesos services, then start the Mesos master and slave manually with authentication enabled:

$ sudo stop mesos-master
$ sudo stop mesos-slave
$ sudo ./bin/mesos-master.sh --ip=10.1.0.17 --work_dir=/var/lib/mesos --zk=zk://10.1.0.17:2181/mesos --quorum=1 --authenticate --authenticate_slaves --credentials=/home/ubuntu/credentials
$ sudo ./bin/mesos-slave.sh --master=10.1.0.17:5050 --credential=/home/ubuntu/slave_credential
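To verify that the slave authenticated and registered, you can query the master's state endpoint; the activated_slaves count and the slaves array should reflect your new slave. This check is my own addition:

$ curl -s http://10.1.0.17:5050/master/state.json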

6. Start Marathon:

$ cd marathon-1.0.0-RC1/
$ MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so ./bin/start -h 10.1.0.17 --master zk://10.1.0.17:2181/mesos --zk zk://10.1.0.17:2181/marathon --mesos_authentication_principal principal2 --mesos_authentication_secret_file /home/ubuntu/marathon.secret --mesos_role foo

7. Mesos Web interface:

You should be able to access the Mesos web interface on port 5050 of your server. For example, if the IP address of the server is 192.168.0.102, access the Mesos web UI at http://192.168.0.102:5050/. If you have installed it on your local system, you can access the Mesos web UI at http://127.0.0.1:5050. The master web console will show the number of active slaves as 1 and the registered Marathon framework. Take a few seconds to navigate around the interface to see what information is available.

Mesos-master

8. Marathon Web interface:

The Marathon web UI will be accessible at http://192.168.0.102:8080.

Marathon

Create an application with local persistent volumes

Prerequisites

In order to create stateful applications using local persistent volumes in Marathon, you need to set two command line flags that Marathon will use to reserve/unreserve resources and create/destroy volumes.

--mesos_authentication_principal: You can choose any principal that suits your needs. However, if you have set up ACLs on your Mesos master, this must be an authenticated and authorized principal.
--mesos_role: This should be a unique role and will be used implicitly, that is, you don’t need to configure the Mesos master via --roles.

Configuration options

Deploy Jenkins with a persistent volume using the JSON below:

{
  "id": "jenkins",
  "cmd": "cd jenkins-mesos-deployment-master && ./jenkins-standalone.sh -z $(cat /etc/mesos/zk) -r 10.1.0.18:6379",
  "cpus": 1,
  "mem": 1024,
  "disk": 1024,
  "instances": 1,
  "ports": [
    0
  ],
  "container": {
    "volumes": [
      {
        "containerPath": "jenkinsdata",
        "persistent": {
          "size": 1024
        },
        "mode": "RW"
      }
    ],
    "type": "MESOS"
  },
  "uris": [
    "https://github.com/diatmpravin/jenkins-mesos-deployment/archive/master.tar.gz"
  ]
}

Where:

  • containerPath: The path where your application will read and write data. This can currently only be a relative, non-nested path (“data”, but not “/data”, “/var/data” or “var/data”).
  • mode: The access mode of the volume. Currently, “RW” is the only possible value and will let your application read from and write to the volume.
  • persistent.size: The size of the persistent volume in MiB.

Once Jenkins is deployed, click the Volumes tab of the application detail view to get detailed information about your app instances and associated volumes.
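Alongside the UI, you can also inspect the deployed app over Marathon's REST API; a GET on /v2/apps/jenkins returns the app definition, including its volumes, together with task status. Here, 10.1.0.17:8080 is assumed from the Marathon start command above:

$ curl http://10.1.0.17:8080/v2/apps/jenkins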

Install DC/OS on CentOS in OpenStack Virtual Machine


In this blog, we are going to install DC/OS on CentOS in an OpenStack virtual machine. There are three ways to install DC/OS in a virtual machine:

1. An automated GUI installer

2. An automated, configurable CLI installer

3. An advanced installer with semi-manual steps.

In this blog, we will go with the first option: GUI installation. The automated GUI installer provides a simple graphical interface with clickable links. It gives us a basic installation that is suitable for demonstrations and POCs only.

For this installation, we are going to create one master and three agents (slaves). We also need one bootstrap node to administer the DC/OS installation across our cluster. The bootstrap node uses an SSH key to connect to each node in our cluster to automate the DC/OS installation.

The DC/OS installation creates these folders:

/opt/mesosphere : Contains all the DC/OS binaries, libraries, and cluster configuration. Do not modify.
/etc/systemd/system/dcos.target.wants : Contains the systemd service units which start the things that make up DC/OS. They must live outside of /opt/mesosphere because of systemd constraints.
Various units prefixed with dcos in /etc/systemd/system : Copies of the units in /etc/systemd/system/dcos.target.wants. They must be at the top folder as well as inside dcos.target.wants.

Install DC/OS
1. Download the DC/OS installer:

$ curl -O https://downloads.dcos.io/dcos/EarlyAccess/dcos_generate_config.sh

2. From your terminal, start the DC/OS GUI installer with this command.

$ sudo bash dcos_generate_config.sh --web

Here is an example of the output.

Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/centos/genconf
16:36:09 dcos_installer.action_lib.prettyprint:: ====> Starting DC/OS installer in web mode
16:36:09 root:: Starting server ('0.0.0.0', 9000)

Tip: You can add the verbose (-v) flag to see the debug output: $ sudo bash dcos_generate_config.sh --web -v

3. Launch the DC/OS web installer in your browser at http://<bootstrap-node-public-ip>:9000.

4. Click Begin Installation.

5. Specify your Deployment and DC/OS Environment settings:

Deployment Settings

Master Private IP List : Specify a comma-separated list of your internal static master IP addresses.

Agent Private IP List : Specify a comma-separated list of your internal static agent IP addresses.

Master Public IP : Specify a publicly accessible proxy IP address to one of your master nodes. If you don’t have a proxy or already have access to the network where you are deploying this cluster, you can use one of the master IPs that you specified in the master list. This proxy IP address is used to access the DC/OS web interface on the master node after DC/OS is installed.

SSH Username : Specify the SSH username, for example centos.

SSH Listening Port : Specify the port to SSH to, for example 22.

SSH Key : Specify the private SSH key with access to your master IPs.

DC/OS Environment Settings

Upstream DNS Servers : Specify a comma-separated list of DNS resolvers for your DC/OS cluster nodes. Set this parameter to the most authoritative nameservers that you have. If you want to resolve internal hostnames, set it to a nameserver that can resolve them. If you have no internal hostnames to resolve, you can set this to a public nameserver like Google or AWS, for example the Google Public DNS IPv4 addresses 8.8.8.8 and 8.8.4.4.

Caution: If you set this parameter incorrectly you will have to reinstall DC/OS. For more information about service discovery, see this documentation.

IP Detect Script : Choose an IP detect script from the dropdown to broadcast the IP address of each node across the cluster. Each node in a DC/OS cluster has a unique IP address that is used to communicate between nodes in the cluster. The IP detect script prints the unique IPv4 address of a node to STDOUT each time DC/OS is started on the node.

Important: The IP address of a node must not change after DC/OS is installed on the node. For example, the IP address must not change when a node is rebooted or if the DHCP lease is renewed. If the IP address of a node does change, the node must be wiped and reinstalled.

6. Click Run Pre-Flight. The preflight script installs the cluster prerequisites and validates that your cluster is installable. For a list of cluster prerequisites, see the scripted installer prerequisites. This step can take up to 15 minutes to complete. If any errors are found, fix them and then click Retry.


Important: If you exit your GUI installation before launching DC/OS, you must do this before reinstalling:

  • SSH to each node in your cluster and run rm -rf /opt/mesosphere.
  • SSH to your bootstrap master node and run rm -rf /var/lib/zookeeper.

7. Click Deploy to install DC/OS on your cluster. If any errors are found, fix them and then click Retry.


Tip: This step might take a few minutes, depending on the size of your cluster.

8. Click Run Post-Flight. If any errors are found, fix them and then click Retry.


Tip: You can click Download Logs to view your logs locally. If this takes longer than about 10 minutes, you’ve probably misconfigured your cluster; go check out the troubleshooting documentation.

9. Click Log In To DC/OS. If this doesn’t work, take a look at the troubleshooting docs.


You are done!


Configure ManageIQ on OpenStack


Introduction to ManageIQ

ManageIQ is a cloud management platform which can be deployed on OpenStack, and can manage instances running on OpenStack clouds.

Installing ManageIQ

There are detailed instructions for deploying ManageIQ on OpenStack. The basic process is uploading an appliance image to Glance and launching it as an appropriately provisioned instance (30GB disk minimum required).

Once the appliance is installed, you can manage your OpenStack cloud by configuring it as a cloud provider for ManageIQ.


Step 1: Download & deploy your appliance

  1. Download ManageIQ directly to OpenStack by running this command:
    curl -O -L http://manageiq.org/download/manageiq-openstack-capablanca-2.qc2 && \
    glance image-create --name "manageiq-openstack-capablanca-2.qc2" \
    --is-public True --disk-format qcow2 \
    --container-format=bare --file manageiq-openstack-capablanca-2.qc2
    
  2. Launch a new instance from the ManageIQ image. ManageIQ needs a minimum of 6GB RAM and a 45GB persistent disk, so choose or create an instance flavor accordingly.
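Before launching the instance in step 2, you can confirm the upload from step 1 finished and the image is active. This optional check uses the same glance client as above:

$ glance image-list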

Step 2: First connection and configuration

  1. Log into the ManageIQ dashboard by connecting to the new running VM with a web browser. The initial username and password are admin/smartvm.
  2. There are a number of basic settings, located under “Configure → Configuration” in the web interface, or under “Advanced Settings” in the VM’s console, that you may wish to change when starting ManageIQ for the first time. Among the most common are:
    • Time and date settings
    • DHCP configuration
    • Hostname
    • Admin password

Step 3: Add an infrastructure or cloud provider

Now that your ManageIQ appliance is up and running, it’s time to connect it to your providers (cloud or infrastructure) and gather data about them.

https://www.rdoproject.org/cloud-management/using-manageiq-on-openstack/

Dealing With Persistent Storage And Fault Tolerance In Apache Mesos


Why does storage matter?
● MESOS offers great support for stateless services
● But what about data persistence?
● Distributed Databases
● Distributed Filesystems
● Docker Volumes on distributed storage
● Two perspectives:
● Support for Distributed Storage Frameworks
● Support for Frameworks using the Distributed Storage Frameworks

Cloud Architect Musings


In part 1 of this series on Apache Mesos, I provided a high level overview of the technology and in part 2, I went into a bit more of a deep dive on the Mesos architecture.  I ended the last post stating I would do a follow-up post on how resource allocation is handled in Mesos.  However, I received some feedback from readers and decided I would first do this post on persistent storage and on fault tolerance before moving on to talk about resource allocation.

Persistent Storage Question

(Diagram by Mesos co-creator Ben Hindman, showing multiple frameworks running tasks on the same Mesos cluster)

As my previous posts discussed and as Mesos co-creator Ben Hindman’s diagram above indicates, a key benefit of using Mesos is the ability to run multiple types of applications (scheduled and initiated via frameworks as tasks) on the same set of compute nodes.  These tasks are abstracted from the actual nodes using isolation modules (currently some type of container technology)…


Apache Mesos: Open Source Community Done Right


Leveraging Mesos as the Ultimate Distributed Data Science Platform

Cloud Architect Musings


I’ve been writing recently about Apache Mesos and its importance as an operating system kernel for the next generation data center. You can read those posts here:

Part 1: True OS For The SDDC

Part 2: Digging Deeper Into Mesos

Part 3: Dealing with Persistent Storage And Fault Tolerance

Part 4: Resource Allocation

Besides the technology though, I am also excited about the progression of the Mesos project itself.  So I want to take a detour from my technology-focused posts to make some general observations about the project.  As I said on Twitter previously, I’ve been particularly impressed with three characteristics.


I’ve had the opportunity to speak to a number of people recently about Mesos and found it’s been extremely easy for them to grasp the concept and to understand the value of the technology.  This is very important for a project that is growing and looking to expand its…
