How do I find all files containing specific text on Linux?


The grep command, short for "global regular expression print", remains one of the most versatile commands in a Linux terminal environment. It is an immensely powerful program that lets users filter input based on complex rules, which makes it a popular link in many command chains. grep is primarily used to search text, or any given file, for lines containing a match to the supplied words or strings. It can match against one or many regular expressions, and by default it prints only the matching lines.

The basic grep command syntax

grep 'word' filename
grep 'word' file1 file2 file3
grep 'string1 string2'  filename
cat otherfile | grep 'something'
command | grep 'something'
command option1 | grep 'data'
grep --color 'data' fileName


Do the following:

grep -rnw '/path/to/somewhere/' -e 'pattern'
  • -r (or -R) searches recursively,
  • -n prints the line number of each match, and
  • -w matches only whole words.
  • -l (lower-case L) can be added to print only the names of files with matches (see the example below).
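
For example, a hypothetical recursive search (the path and pattern here are illustrative):

grep -rnw ~/projects -e 'timeout'
grep -rlw ~/projects -e 'timeout'

The first command prints the file name and line number of every whole-word match; the second lists only the names of the matching files.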

Along with these, the --exclude, --include and --exclude-dir flags can be used for efficient searching:

  • This will only search through those files which have .c or .h extensions:
    grep --include=\*.{c,h} -rnw '/path/to/somewhere/' -e "pattern"
    
  • This will exclude all files ending with the .o extension:
    grep --exclude=*.o -rnw '/path/to/somewhere/' -e "pattern"
    
  • For directories, it’s possible to exclude one or more of them through the --exclude-dir parameter. For example, this will exclude the dirs dir1/, dir2/ and all of those matching *.dst/:
    grep --exclude-dir={dir1,dir2,*.dst} -rnw '/path/to/somewhere/' -e "pattern"
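
These flags can also be combined. For instance, a hypothetical search of C sources only, skipping a build directory and version-control metadata:

grep --include=\*.{c,h} --exclude-dir={build,.git} -rnw '/path/to/somewhere/' -e "pattern"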
    

For more options, check man grep.


Passed AWS Certified Solutions Architect – Associate


I am very excited and happy to announce that I cleared the AWS Certified Solutions Architect – Associate certification today with 92%, after almost 2 months of preparation.


In summary, it was a tough exam that squeezed out every drop of my AWS knowledge and demanded in-depth understanding of the concepts. On top of that, 80 minutes for 60 questions felt tight, as many questions were long and their requirements complex to parse. I have a couple of years of experience working with AWS services, primarily IAM, VPC, EC2, S3 and RDS, which helped a lot.

Overall Score: 92%

Topic Level Scoring:

1.0 Designing highly available, cost efficient, fault tolerant, scalable systems : 93%
2.0 Implementation/Deployment: 83%
3.0 Security: 90%
4.0 Troubleshooting: 100%

There were questions related to S3, security, ELB, Route 53, IAM, EBS, the whitepapers, RDS, placement groups and DynamoDB. I had thoroughly read the Security whitepaper and the important FAQ topics; hands-on experience with EC2, Auto Scaling, ELB, S3, Route 53 and CloudFront is also very important.

Best of luck to those who are yet to take the exam.

Overall it was a good experience. Targeting the Professional certifications next…

Why do people use Heroku when AWS is present?


First things first: AWS and Heroku are different things. AWS offers Infrastructure as a Service (IaaS), whereas Heroku offers a Platform as a Service (PaaS).


What’s the difference? Very approximately, IaaS gives you components you need in order to build things on top of it; PaaS gives you an environment where you just push code and some basic configuration and get a running application. IaaS can give you more power and flexibility, at the cost of having to build and maintain more yourself.

To get your code running on AWS and looking a bit like a Heroku deployment, you’ll want some EC2 instances – you’ll want a load balancer / caching layer installed on them (e.g. Varnish), you’ll want instances running something like Passenger and nginx to serve your code, you’ll want to deploy and configure a clustered database instance of something like PostgreSQL. You’ll want a deployment system with something like Capistrano, and something doing log aggregation.

That’s not an insignificant amount of work to set up and maintain. With Heroku, the effort required to get to that sort of stage is maybe a few lines of application code and a git push.

So you’re this far, and you want to scale up. Great. You’re using Puppet for your EC2 deployment, right? So now you configure your Capistrano files to spin up/down instances as needed; you re-jig your Puppet config so Varnish is aware of web-worker instances and will automatically pool between them. Or you run heroku scale web:+5.

Hopefully that gives you an idea of the comparison between the two. Now to address your specific points:

Speed

Currently Heroku only runs on AWS instances in us-east and eu-west. For you, this sounds like what you want anyway. For others, it’s potentially more of a consideration.

Security

I’ve seen a lot of internally-maintained production servers that are way behind on security updates, or just generally poorly put together. With Heroku, you have someone else managing that sort of thing, which is either a blessing or a curse depending on how you look at it!

When you deploy, you’re effectively handing your code straight over to Heroku. This may be an issue for you. Their article on Dyno Isolation details their isolation technologies (it seems as though multiple dynos are run on individual EC2 instances). Several colleagues have expressed issues with these technologies and the strength of their isolation; I am alas not in a position of enough knowledge / experience to really comment, but my current Heroku deployments consider that “good enough”. It may be an issue for you, I don’t know.

Scaling

I touched on how one might implement this in my IaaS vs PaaS comparison above. Approximately, your application has a Procfile, which has lines of the form dyno_type: command_to_run, so for example (cribbed from http://devcenter.heroku.com/articles/process-model):

web:    bundle exec rails server
worker: bundle exec rake jobs:work

This, with a:

heroku scale web:2 worker:10

will result in you having 2 web dynos and 10 worker dynos running. Nice, simple, easy. Note that web is a special dyno type, which has access to the outside world, and is behind their nice web traffic multiplexer (probably some sort of Varnish / nginx combination) that will route traffic accordingly. Your workers probably interact with a message queue for similar routing, from which they’ll get the location via a URL in the environment.

Cost Efficiency

Lots of people have lots of different opinions about this. Currently it’s $0.05/hr for a dyno hour, compared to $0.025/hr for an AWS micro instance or $0.09/hr for an AWS small instance.

Heroku’s dyno documentation says you have about 512MB of RAM, so it’s probably not too unreasonable to consider a dyno as a bit like an EC2 micro instance. Is it worth double the price? How much do you value your time? The amount of time and effort required to build on top of an IaaS offering to get it to this standard is definitely not cheap. I can’t really answer this question for you, but don’t underestimate the ‘hidden costs’ of setup and maintenance.
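
As a rough back-of-the-envelope comparison, at full utilization (about 720 hours in a 30-day month) those rates work out to:

1 dyno:       $0.05  × 720 = $36/month
1 EC2 micro:  $0.025 × 720 = $18/month
1 EC2 small:  $0.09  × 720 = $64.80/month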

(A bit of an aside, but if I connect to a dyno from here (heroku run bash), a cursory look shows 4 cores in /proc/cpuinfo and 36GB of RAM – this leads me to believe that I’m on a “High-Memory Double Extra Large Instance”. The Heroku dyno documentation says each dyno receives 512MB of RAM, so I’m potentially sharing with up to 71 other dynos. (I don’t have enough data about the homogeneity of Heroku’s AWS instances, so your mileage may vary.))

How do they fare against their competitors?

This, I’m afraid I can’t really help you with. The only competitor I’ve ever really looked at was Google App Engine – at the time I was looking to deploy Java applications, and the amount of restrictions on usable frameworks and technologies was incredibly off-putting. This is more than “just a Java thing” – the amount of general restrictions and necessary considerations (the FAQ hints at several) seemed less than convenient. In contrast, deploying to Heroku has been a dream.

Conclusion

I hope this answers your questions (please comment if there are gaps / other areas you’d like addressed). I feel I should offer my personal position. I love Heroku for “quick deployments”. When I’m starting an application, and I want some cheap hosting (the Heroku free tier is awesome – essentially if you only need one web dyno and 5MB of PostgreSQL, it’s free to host an application), Heroku is my go-to position. For “Serious Production Deployment” with several paying customers, with a service-level-agreement, with dedicated time to spend on ops, et cetera, I can’t quite bring myself to offload that much control to Heroku, and then either AWS or our own servers have been the hosting platform of choice.

Ultimately, it’s about what works best for you. You say you’re “a beginner programmer” – it might just be that using Heroku will let you focus on writing Ruby, and not have to spend time getting all the other infrastructure around your code built up. I’d definitely give it a try.


Note: AWS does actually have a PaaS offering, Elastic Beanstalk, that supports Ruby, Node.js, PHP, Python, .NET and Java. I think most people, when they see “AWS”, jump to things like EC2, S3 and EBS, which are definitely IaaS offerings.

How can I install a package with go get?


First, we need GOPATH


$GOPATH is a folder (or set of folders) specified by the environment variable of the same name. Note that this is not the $GOROOT directory, which is where Go itself is installed.

export GOPATH=$HOME/gocode
export PATH=$PATH:$GOPATH/bin

Here we use the ~/gocode path on our computer to store the source of our application and its dependencies. The GOPATH directory will also store the compiled binaries of these packages.
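
By convention the workspace ends up with three subdirectories; an illustrative layout, assuming the export above:

$ ls $GOPATH
bin  pkg  src

Here bin holds compiled binaries, pkg holds compiled package objects, and src holds the source code, one subdirectory per import path.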

Then check the Go environment

Your system must have $GOPATH and $GOROOT set; below is my environment:

GOARCH="amd64"
GOBIN=""
GOCHAR="6"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/elpsstu/gocode"
GORACE=""
GOROOT="/home/pravin/go"
GOTOOLDIR="/home/pravin/go/pkg/tool/linux_amd64"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"

Now you can download a Go package:

go get [-d] [-f] [-fix] [-t] [-u] [build flags] [packages]

Get downloads and installs the packages named by the import paths, along with their dependencies. For more details, see go help get.
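
For example, a hypothetical fetch (the repository path here is illustrative, not a real package):

$ go get github.com/user/project
$ go get -u github.com/user/project

The first command downloads, builds and installs the package and its dependencies under $GOPATH; the -u flag additionally updates the named package and its dependencies to their latest versions.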


Marathon Authentication with Mesos as Framework


Mesos 0.15 added support for framework authentication. If you plan to use the local volumes feature of Marathon 1.0.0 RC1, your Marathon framework must authenticate with Mesos. In this blog, I am going to explain how to create credentials and authenticate with Mesos while registering with the Mesos master.


If you followed my last blog, I am going to configure this on the same node, 10.1.0.17. I will change to my home directory and create two files.

1. Create a file defining framework principals and their secrets with the following content.

$ cd
$ touch credentials
$ cat credentials

principal1 secret1
principal2 secret2
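
Since this file contains secrets, it is a sensible precaution (though not required by Mesos) to restrict its permissions:

$ chmod 600 ~/credentials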

2. Start the master using the credentials file (assuming the file is ~/credentials):

$ sudo ./bin/mesos-master.sh --ip=10.1.0.17 --work_dir=/var/lib/mesos --zk=zk://10.1.0.17:2181/mesos --quorum=1 --authenticate --authenticate_slaves --credentials=/home/ubuntu/credentials

3. Create another file with a single credential in it (~/slave_credential):

principal1 secret1

4. Start the slaves

$ sudo ./bin/mesos-slave.sh --master=10.1.0.17:5050 --credential=/home/ubuntu/slave_credential

Your new slave should have now successfully authenticated with the master.

5. Start Marathon using the following command line arguments

--mesos_authentication_principal principal2
--mesos_authentication_secret_file /home/ubuntu/marathon.secret
--mesos_role foo

Note: the framework needs to be registered with a specific role only if you want to use Mesos features that require a role to be specified for a request.
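
Also note that the secret file referenced above must exist before Marathon starts. A minimal sketch, assuming the credentials created in step 1 (the file should contain just the secret, with no principal name; printf avoids a trailing newline):

$ printf 'secret2' > /home/ubuntu/marathon.secret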

$ MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so ./bin/start -h 10.1.0.17 --master zk://10.1.0.17:2181/mesos --zk zk://10.1.0.17:2181/marathon --mesos_authentication_principal principal2 --mesos_authentication_secret_file /home/ubuntu/marathon.secret --mesos_role foo

Deploy Jenkins inside Docker using persistent volumes on Mesos and Marathon


In this blog, I am going to walk you through the steps to configure the Docker containerizer on Mesos slaves and deploy Jenkins using a persistent volume. Since the release of Marathon 1.0.0 RC1, it is possible to deploy stateful applications using local persistent volumes.


Prerequisites
1. A working Mesos cluster, running Mesos version 0.20.0 or later
2. A working Marathon 1.0.0 RC1

1. Install Docker version 1.0.0 or later on each slave node.

2. Update slave configuration to specify the use of the Docker containerizer

$ echo 'docker,mesos' > /etc/mesos-slave/containerizers

3. Increase the executor timeout to account for the potential delay in pulling a Docker image to the slave.

$ echo '5mins' > /etc/mesos-slave/executor_registration_timeout

4. Restart the mesos-slave process to load the new configuration; alternatively, pass containerizers and executor_registration_timeout directly as parameters:

$ sudo /usr/sbin/mesos-slave --master=127.0.0.1:5050 --credential=/home/pravinmishra/slave_credential --containerizers=docker,mesos --executor_registration_timeout=5mins

5. Restart Marathon, increasing the Marathon command line option --task_launch_timeout to at least the executor timeout (in milliseconds) that you set on your slaves in the previous step.

$ MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so ./bin/start -h 127.0.0.1  --master zk://127.0.0.1:2181/mesos --zk zk://127.0.0.1:2181/marathon --mesos_authentication_principal principal2 --mesos_authentication_secret_file /home/pravinmishra/marathon.secret --mesos_role foo --task_launch_timeout 300000

6. Deploy Jenkins using a Marathon application definition (the jenkins-docker JSON); a complete example appears in the next post below.

Set up standalone Mesos and Marathon 1.0.0 RC1 on Ubuntu



In this blog, I will walk you through setting up standalone Apache Mesos and Marathon 1.0.0 RC1 on Ubuntu.

Why Marathon 1.0.0 RC1

a) Support for Local Persistent Storage

Benefits of using local persistent volumes

  • All resources needed to run tasks of your stateful service are dynamically reserved, thus ensuring the ability to relaunch the task on the same node using the same volume when needed.
  • You don’t need constraints to pin a task to a particular agent where its data resides
  • You can still use constraints to specify distribution logic
  • Marathon lets you locate and destroy an unused persistent volume if you don’t need it anymore

You can now launch tasks that use persistent volumes by specifying volumes either via the UI or the REST API. Marathon will reserve all required resources on a matching agent, and subsequently launch a task on that same agent if needed. Data within the volume will be retained even after relaunching the associated task.

Introduction

Mesos is a scalable and distributed resource manager designed to manage resources for data centers.

Mesos can be thought of as a “distributed kernel” that achieves resource sharing via APIs in various languages (C++, Python, Go, etc.). Mesos relies on cgroups for process isolation on top of distributed file systems (e.g., HDFS). Using Mesos you can create and run clusters running heterogeneous tasks. Let us see what it is all about, and cover some fundamentals of getting Mesos up and running.

Note: You can also try DC/OS, an entirely open source software project based on Apache Mesos, Marathon and a whole lot more.

1. Add the Mesosphere Repositories to your Hosts

First, add the Mesosphere repository to your sources list. This process involves downloading the Mesosphere project’s key from the Ubuntu keyserver and then crafting the correct URL for our Ubuntu release. The project provides a convenient way of doing this:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
$ DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
$ CODENAME=$(lsb_release -cs)
$ echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list

2. Download package lists and information of latest versions:

$ sudo apt-get -y update

3. Install Mesos

A Mesos cluster needs at least one Mesos Master and one Mesos Slave. The Mesos Master coordinates and delivers tasks onto the Mesos Slaves which runs the job. In production clusters you typically run Mesos in High Availability (HA) Mode with three or more Mesos Masters, three or more Zookeepers, and many Mesos Slaves.

We will install Marathon and the mesos meta package, which also pulls in ZooKeeper as a dependency.

$ sudo apt-get -y install mesos

4. Download and unpack the latest Marathon 1.0.0 RC1

$ curl -O http://downloads.mesosphere.com/marathon/v1.0.0-RC1/marathon-1.0.0-RC1.tgz
$ tar xzf marathon-1.0.0-RC1.tgz

5. Start the Mesos master and slave

$ sudo stop mesos-master
$ sudo stop mesos-slave
$ sudo ./bin/mesos-master.sh --ip=10.1.0.17 --work_dir=/var/lib/mesos --zk=zk://10.1.0.17:2181/mesos --quorum=1 --authenticate --authenticate_slaves --credentials=/home/ubuntu/credentials
$ sudo ./bin/mesos-slave.sh --master=10.1.0.17:5050 --credential=/home/ubuntu/slave_credential

6. Start Marathon

$ cd marathon-1.0.0-RC1/
$ MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so ./bin/start -h 10.1.0.17 --master zk://10.1.0.17:2181/mesos --zk zk://10.1.0.17:2181/marathon --mesos_authentication_principal principal2 --mesos_authentication_secret_file /home/ubuntu/marathon.secret --mesos_role foo

7. Mesos Web interface:

You should be able to access the Mesos web interface on port 5050 of your server. For example, if the IP address of the server is 192.168.0.102, access the Mesos web UI at http://192.168.0.102:5050/. If you have installed it on your local system, you can access the web UI at 127.0.0.1:5050. The master web console will show the number of active slaves as 1 and the registered Marathon framework. Take a few seconds to navigate around the interface to see what information is available.
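
You can also verify from the command line by querying the master’s state endpoint (the IP is illustrative):

$ curl -s http://192.168.0.102:5050/master/state.json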


8. Marathon Web interface:

The Marathon web UI will be accessible at http://192.168.0.102:8080.


Create an application with local persistent volumes

Prerequisites

In order to create stateful applications using local persistent volumes in Marathon, you need to set two command line flags that Marathon will use to reserve/unreserve resources and create/destroy volumes.

--mesos_authentication_principal: You can choose any principal that suits your needs. However, if you have set up ACLs on your Mesos master, this must be an authenticated and authorized principal.
--mesos_role: This should be a unique role and will be used implicitly, that is, you don’t need to configure the Mesos master via --roles.

Configuration options
Deploy Jenkins with a persistent volume using the JSON below (a sample curl call for submitting it follows the field descriptions):

{
  "id": "jenkins",
  "cmd": "cd jenkins-mesos-deployment-master && ./jenkins-standalone.sh -z $(cat /etc/mesos/zk) -r 10.1.0.18:6379",
  "cpus": 1,
  "mem": 1024,
  "disk": 1024,
  "instances": 1,
  "ports": [
    0
  ],
  "container": {
    "volumes": [
      {
        "containerPath": "jenkinsdata",
        "persistent": {
          "size": 1024
        },
        "mode": "RW"
      }
    ],
    "type": "MESOS"
  },
  "uris": [
    "https://github.com/diatmpravin/jenkins-mesos-deployment/archive/master.tar.gz"
  ]
}

Where:

  • containerPath: The path where your application will read and write data. This can currently be only relative and non-nested (“data”, but not “/data”, “/var/data” or “var/data”).
  • mode: The access mode of the volume. Currently, “RW” is the only possible value and will let your application read from and write to the volume.
  • persistent.size: The size of the persistent volume in MiBs.
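
You can submit this definition through Marathon’s REST API; a sketch, assuming the JSON above is saved as jenkins.json and Marathon is reachable on port 8080:

$ curl -X POST -H "Content-Type: application/json" http://127.0.0.1:8080/v2/apps -d @jenkins.json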

Once Jenkins is deployed, click the Volumes tab of the application detail view to get detailed information about your app instances and associated volumes.

Install DC/OS on CentOS in OpenStack Virtual Machine


In this blog, we are going to install DC/OS on CentOS in an OpenStack virtual machine. There are three ways to install DC/OS in a virtual machine:


1. An automated GUI installer

2. An automated, configurable CLI installer

3. An advanced installer with semi-manual steps.

In this blog, we will go for the first option, the GUI installation. The automated GUI installer provides a simple graphical interface with clickable links, and gives us a basic installation that is suitable for demonstrations and POCs only.

For this installation, we are going to create one master and three agents (slaves). We also need one bootstrap node to administer the DC/OS installation across our cluster. The bootstrap node uses an SSH key to connect to each node in our cluster to automate the DC/OS installation.

The DC/OS installation creates these folders:

/opt/mesosphere : Contains all the DC/OS binaries, libraries, cluster configuration. Do not modify.
/etc/systemd/system/dcos.target.wants : Contains the systemd service units which start the things that make up DC/OS. They must live outside of /opt/mesosphere because of systemd constraints.
Various units prefixed with dcos in /etc/systemd/system : Copies of the units in /etc/systemd/system/dcos.target.wants. They must be at the top folder as well as inside dcos.target.wants.

Install DC/OS
1. Download the DC/OS installer script:

$ curl -O https://downloads.dcos.io/dcos/EarlyAccess/dcos_generate_config.sh

2. From your terminal, start the DC/OS GUI installer with this command.

$ sudo bash dcos_generate_config.sh --web

Here is an example of the output.

Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/centos/genconf
16:36:09 dcos_installer.action_lib.prettyprint:: ====> Starting DC/OS installer in web mode
16:36:09 root:: Starting server ('0.0.0.0', 9000)

Tip: You can add the verbose (-v) flag to see the debug output: $ sudo bash dcos_generate_config.sh --web -v

3. Launch the DC/OS web installer in your browser at: http://<bootstrap-node-ip>:9000.

4. Click Begin Installation.

5. Specify your Deployment and DC/OS Environment settings:

Deployment Settings

Master Private IP List : Specify a comma-separated list of your internal static master IP addresses.

Agent Private IP List : Specify a comma-separated list of your internal static agent IP addresses.

Master Public IP : Specify a publicly accessible proxy IP address to one of your master nodes. If you don’t have a proxy or already have access to the network where you are deploying this cluster, you can use one of the master IP’s that you specified in the master list. This proxy IP address is used to access the DC/OS web interface on the master node after DC/OS is installed.

SSH Username : Specify the SSH username, for example centos.

SSH Listening Port : Specify the port to SSH to, for example 22.

SSH Key : Specify the private SSH key with access to your master IPs.

DC/OS Environment Settings

Upstream DNS Servers : Specify a comma-separated list of DNS resolvers for your DC/OS cluster nodes. Set this parameter to the most authoritative nameservers that you have. If you want to resolve internal hostnames, set it to a nameserver that can resolve them. If you have no internal hostnames to resolve, you can set this to a public nameserver like Google or AWS; for example, the Google Public DNS IPv4 addresses are 8.8.8.8 and 8.8.4.4.

Caution: If you set this parameter incorrectly you will have to reinstall DC/OS. For more information about service discovery, see this documentation.

IP Detect Script : Choose an IP detect script from the dropdown to broadcast the IP address of each node across the cluster. Each node in a DC/OS cluster has a unique IP address that is used to communicate between nodes in the cluster. The IP detect script prints the unique IPv4 address of a node to STDOUT each time DC/OS is started on the node.
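
To give a sense of what such a script does, here is a minimal ip-detect sketch (it assumes the cluster interface is eth0; adjust for your environment):

#!/usr/bin/env bash
# print this node's IPv4 address on the eth0 interface
set -o errexit
ip -4 addr show dev eth0 | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'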

Important: The IP address of a node must not change after DC/OS is installed on the node. For example, the IP address must not change when a node is rebooted or if the DHCP lease is renewed. If the IP address of a node does change, the node must be wiped and reinstalled.

6. Click Run Pre-Flight. The preflight script installs the cluster prerequisites and validates that your cluster is installable. For a list of cluster prerequisites, see the scripted installer prerequisites. This step can take up to 15 minutes to complete. If any errors are found, fix them and then click Retry.


Important: If you exit your GUI installation before launching DC/OS, you must do this before reinstalling:

SSH to each node in your cluster and run rm -rf /opt/mesosphere.
SSH to your bootstrap master node and run rm -rf /var/lib/zookeeper.

7. Click Deploy to install DC/OS on your cluster. If any errors are found, fix them and then click Retry.


Tip: This step might take a few minutes, depending on the size of your cluster.

8. Click Run Post-Flight. If any errors are found, fix them and then click Retry.


Tip: You can click Download Logs to view your logs locally. Tip: If this takes longer than about 10 minutes, you’ve probably misconfigured your cluster; check out the troubleshooting documentation.

9. Click Log In To DC/OS. If this doesn’t work, take a look at the troubleshooting docs.


You are done!


Configure ManageIQ on OpenStack


Introduction to ManageIQ


ManageIQ is a cloud management platform which can be deployed on OpenStack, and can manage instances running on OpenStack clouds.

Installing ManageIQ

There are detailed instructions for deploying ManageIQ on OpenStack – the basic process is uploading an appliance to Glance, and launching it with an appropriately provisioned instance (30GB disk minimum required).

Once the application is installed, you can manage your OpenStack cloud by configuring it as a cloud provider for ManageIQ.


Step 1: Download & deploy your appliance

  1. Download ManageIQ directly to OpenStack by running this command:
    curl -O -L http://manageiq.org/download/manageiq-openstack-capablanca-2.qc2 && \
    glance image-create --name "manageiq-openstack-capablanca-2.qc2" \
    --is-public True --disk-format qcow2 \
    --container-format=bare --file manageiq-openstack-capablanca-2.qc2
    
  2. Launch a new instance from the ManageIQ image. ManageIQ needs a minimum of 6GB RAM and a 45GB persistent disk, so choose or create an instance flavor accordingly (a sketch using the nova CLI follows below).
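
As a sketch, the flavor and instance could be created with the nova CLI (the flavor name, ID and vCPU count here are illustrative):

# flavor meeting the minimums above: 6144MB RAM, 45GB disk, 2 vCPUs
$ nova flavor-create manageiq auto 6144 45 2

# launch the appliance from the uploaded image
$ nova boot --flavor manageiq --image "manageiq-openstack-capablanca-2.qc2" manageiq-appliance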

Step 2: First connection and configuration

  1. Log into the ManageIQ dashboard by connecting to the new running VM with a web browser. The initial username and password is admin/smartvm.
  2. There are a number of basic settings, located under “Configure → Configuration” in the web interface, or under “Advanced Settings” in the VM’s console, that you may wish to change when starting ManageIQ for the first time. Among the most common are:
    • Time and date settings
    • DHCP configuration
    • Hostname
    • Admin password

Step 3: Add an infrastructure or cloud provider

Now that your ManageIQ Appliance is up and running, it’s time to connect up with your Providers (Cloud or Infrastructure) and gather data about them.

https://www.rdoproject.org/cloud-management/using-manageiq-on-openstack/