Passed AWS Certified Solutions Architect – Associate


I am very excited and happy to announce that I cleared the AWS Certified Solutions Architect – Associate certification today with 92%, after almost two months of preparation.


In summary, it was a tough exam that squeezes out every drop of your AWS knowledge and tests in-depth understanding of concepts. On top of that, 80 minutes for 60 questions felt tight, as many questions were long and descriptive, making the actual requirement hard to pin down. I have a couple of years' experience working with AWS services, primarily IAM, VPC, EC2, S3 and RDS, which helped a lot.

Overall Score: 92%

Topic Level Scoring:

1.0 Designing highly available, cost-efficient, fault-tolerant, scalable systems: 93%
2.0 Implementation/Deployment: 83%
3.0 Security: 90%
4.0 Troubleshooting: 100%

There were questions on S3, security, ELB, Route 53, IAM, EBS, the whitepapers, RDS, placement groups and DynamoDB. I thoroughly read and understood the Security whitepaper and the important FAQ topics; hands-on experience with EC2, Auto Scaling, ELB, S3, Route 53 and CloudFront is also very important.

Best of luck to those who are yet to take the exam.

Overall it was a good experience. Targeting the Professional Certifications next ……

Why do people use Heroku when AWS is present?


First things first: AWS and Heroku are different things. AWS offers Infrastructure as a Service (IaaS), whereas Heroku offers a Platform as a Service (PaaS).

What’s the difference? Very approximately, IaaS gives you components you need in order to build things on top of it; PaaS gives you an environment where you just push code and some basic configuration and get a running application. IaaS can give you more power and flexibility, at the cost of having to build and maintain more yourself.

To get your code running on AWS and looking a bit like a Heroku deployment, you’ll want some EC2 instances – you’ll want a load balancer / caching layer installed on them (e.g. Varnish), you’ll want instances running something like Passenger and nginx to serve your code, you’ll want to deploy and configure a clustered database instance of something like PostgreSQL. You’ll want a deployment system with something like Capistrano, and something doing log aggregation.

That’s not an insignificant amount of work to set up and maintain. With Heroku, the effort required to get to that sort of stage is maybe a few lines of application code and a git push.
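To make the IaaS side of that comparison concrete, a deploy recipe in Capistrano (version 2 style) for the stack described above might look roughly like this. The application name, repository URL, hostnames and paths are all hypothetical; this is a sketch, not a drop-in config:

```ruby
# config/deploy.rb -- hypothetical Capistrano (v2-style) recipe for the
# EC2 setup sketched above; all names, hosts and paths are made up.
set :application, "myapp"
set :repository,  "git@example.com:myapp.git"
set :deploy_to,   "/var/www/myapp"
set :user,        "deploy"
set :use_sudo,    false

# EC2 instances running nginx + Passenger, sitting behind Varnish
role :web, "web1.example.com", "web2.example.com"
role :app, "web1.example.com", "web2.example.com"
role :db,  "db1.example.com", :primary => true  # PostgreSQL primary

# Passenger notices a touched restart.txt and reloads the app
namespace :deploy do
  task :restart, :roles => :app do
    run "touch #{current_path}/tmp/restart.txt"
  end
end
```

And that file is only the deployment half; the Puppet manifests, Varnish config and log aggregation still live elsewhere.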

So you’re this far, and you want to scale up. Great. You’re using Puppet for your EC2 deployment, right? So now you configure your Capistrano files to spin up/down instances as needed; you re-jig your Puppet config so Varnish is aware of web-worker instances and will automatically pool between them. Or you heroku scale web:+5.

Hopefully that gives you an idea of the comparison between the two. Now to address your specific points:

Speed

Currently Heroku only runs on AWS instances in us-east and eu-west. For you, this sounds like what you want anyway. For others, it’s potentially more of a consideration.

Security

I’ve seen a lot of internally-maintained production servers that are way behind on security updates, or just generally poorly put together. With Heroku, you have someone else managing that sort of thing, which is either a blessing or a curse depending on how you look at it!

When you deploy, you’re effectively handing your code straight over to Heroku. This may be an issue for you. Their article on Dyno Isolation details their isolation technologies (it seems as though multiple dynos are run on individual EC2 instances). Several colleagues have expressed issues with these technologies and the strength of their isolation; I am alas not in a position of enough knowledge / experience to really comment, but my current Heroku deployments consider that “good enough”. It may be an issue for you, I don’t know.

Scaling

I touched on how one might implement this in my IaaS vs PaaS comparison above. Approximately, your application has a Procfile, which has lines of the form dyno_type: command_to_run, so for example (cribbed from http://devcenter.heroku.com/articles/process-model):

web:    bundle exec rails server
worker: bundle exec rake jobs:work

This, with a:

heroku scale web:2 worker:10

will result in you having 2 web dynos and 10 worker dynos running. Nice, simple, easy. Note that web is a special dyno type, which has access to the outside world, and is behind their nice web traffic multiplexer (probably some sort of Varnish / nginx combination) that will route traffic accordingly. Your workers probably interact with a message queue for similar routing, from which they’ll get the location via a URL in the environment.
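The Procfile format itself is trivially simple: one dyno type per line, followed by the command to run. As a quick illustration of the format (a sketch, not Heroku's actual implementation), here is a minimal parser in Ruby:

```ruby
# Minimal sketch of parsing a Procfile: each non-empty line has the form
# "dyno_type: command_to_run".
def parse_procfile(text)
  text.each_line.with_object({}) do |line, procs|
    line = line.strip
    next if line.empty? || line.start_with?("#")
    type, command = line.split(":", 2)
    procs[type.strip] = command.strip
  end
end

procfile = <<~PROCFILE
  web:    bundle exec rails server
  worker: bundle exec rake jobs:work
PROCFILE

procs = parse_procfile(procfile)
# procs["web"]    => "bundle exec rails server"
# procs["worker"] => "bundle exec rake jobs:work"
```

`heroku scale web:2 worker:10` then just decides how many copies of each of those commands to run.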

Cost Efficiency

Lots of people have lots of different opinions about this. Currently it’s $0.05 per dyno-hour, compared to $0.025/hr for an AWS micro instance or $0.09/hr for an AWS small instance.

Heroku’s dyno documentation says you have about 512MB of RAM, so it’s probably not too unreasonable to consider a dyno as a bit like an EC2 micro instance. Is it worth double the price? How much do you value your time? The amount of time and effort required to build on top of an IaaS offering to get it to this standard is definitely not cheap. I can’t really answer this question for you, but don’t underestimate the ‘hidden costs’ of setup and maintenance.
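To put those per-hour numbers in monthly terms (using the prices quoted above and a ~730-hour month; current prices may well differ):

```ruby
# Back-of-the-envelope monthly cost from the per-hour prices quoted above.
HOURS_PER_MONTH = 730  # ~24 * 365 / 12

def monthly_cost(rate_per_hour)
  (rate_per_hour * HOURS_PER_MONTH).round(2)
end

monthly_cost(0.05)   # Heroku dyno        => $36.5/month
monthly_cost(0.025)  # AWS micro instance => $18.25/month
monthly_cost(0.09)   # AWS small instance => $65.7/month
```

So a dyno runs about twice the price of a micro instance; whether the ops time saved justifies that gap is the real question.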

(A bit of an aside, but if I connect to a dyno from here (heroku run bash), a cursory look shows 4 cores in /proc/cpuinfo and 36GB of RAM – this leads me to believe that I’m on a “High-Memory Double Extra Large Instance”. The Heroku dyno documentation says each dyno receives 512MB of RAM, so I’m potentially sharing with up to 71 other dynos. I don’t have enough data about the homogeneity of Heroku’s AWS instances, so your mileage may vary.)

How do they fare against their competitors?

This, I’m afraid I can’t really help you with. The only competitor I’ve ever really looked at was Google App Engine – at the time I was looking to deploy Java applications, and the amount of restrictions on usable frameworks and technologies was incredibly off-putting. This is more than “just a Java thing” – the amount of general restrictions and necessary considerations (the FAQ hints at several) seemed less than convenient. In contrast, deploying to Heroku has been a dream.

Conclusion

I hope this answers your questions (please comment if there are gaps / other areas you’d like addressed). I feel I should offer my personal position. I love Heroku for “quick deployments”. When I’m starting an application, and I want some cheap hosting (the Heroku free tier is awesome – essentially if you only need one web dyno and 5MB of PostgreSQL, it’s free to host an application), Heroku is my go-to position. For “Serious Production Deployment” with several paying customers, with a service-level-agreement, with dedicated time to spend on ops, et cetera, I can’t quite bring myself to offload that much control to Heroku, and then either AWS or our own servers have been the hosting platform of choice.

Ultimately, it’s about what works best for you. You say you’re “a beginner programmer” – it might just be that using Heroku will let you focus on writing Ruby, and not have to spend time getting all the other infrastructure around your code built up. I’d definitely give it a try.


Note that AWS does actually have a PaaS offering, Elastic Beanstalk, which supports Ruby, Node.js, PHP, Python, .NET and Java. I think generally most people, when they see “AWS”, jump to things like EC2, S3 and EBS, which are definitely IaaS offerings.

Why do people use App42PaaS when AWS is present?


Recently I came across the question: why do people use App42PaaS when AWS is present?

It’s a really interesting one 🙂

Firstly, let me tell you that AWS and App42PaaS are different kinds of cloud service providers. There shouldn’t really be a direct comparison between them, as AWS provides IaaS (Infrastructure as a Service) whereas App42PaaS provides PaaS (Platform as a Service).

Now you must be wondering what the differences between them are. Right?

In very short: IaaS gives you virtual machine instances on which you have to deploy your application yourself; PaaS gives you an environment where you just push code along with some basic configuration of your choosing.

Let’s talk in detail:

AWS:

Nowadays people also refer to IaaS as HaaS (Hardware as a Service): as the name suggests, IaaS provides you with hardware, servers and networking components, including storage. Once you own an instance, housing, running and maintaining it becomes your responsibility, and you typically pay on a per-use basis.


App42PaaS:

App42PaaS basically helps developers speed up app development, save money and, most importantly, innovate on their applications and business instead of setting up configuration and managing things like servers and databases. Other benefits of using App42PaaS relate to the application deployment process: agility, high availability, monitoring, scaling up and down, a limited need for expertise, easy deployment, and reduced cost and development time.


Hopefully that answers the question; if you still have doubts, let me know in the comments.

Amazon S3 Library for Ruby


1. Install the aws-s3 gem:

$ gem install aws-s3

2. Connect with S3.

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'key goes here',
  :secret_access_key => 'secret goes here'
)

3. Various operations on S3 objects (assuming include AWS::S3, so classes can be referenced without the AWS::S3:: prefix):

a) Store an object on S3:

S3Object.store('me.jpg', open('headshot.jpg'), 'photos')

b) Store data more explicitly, with a content type:

S3Object.store(
  'name of object',
  File.open('large-picture.jpg'),
  'name of bucket',
  :content_type => 'image/jpeg'
)

c) Fetch an object from S3:

picture = S3Object.find 'headshot.jpg', 'photos'

Cheers!

EC2 Starters Guide


1. Make sure you have the multiverse repository enabled, then run this command to install ec2-api-tools:

sudo apt-get install ec2-api-tools

2. We need some basic setup to access AWS from our local Ubuntu system:

export EC2_KEYPAIR=<your keypair name>  # name only, not the file name
export EC2_URL=https://ec2.<your ec2 region>.amazonaws.com
export EC2_PRIVATE_KEY=$HOME/<where your private key is>/pk-XXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem
export EC2_CERT=$HOME/<where your certificate is>/cert-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk/

3. Change the permissions of the .pem key file:

chmod 600 <your keypair file>.pem

4. To connect to your instance:

ssh -i <your keypair file>.pem ubuntu@<your instance's public DNS>

That's it!

Links
http://www.robertsosinski.com/2008/01/26/starting-amazon-ec2-with-mac-os-x/

New to EC2: Unable to connect to host error


It might be that your corporate firewall is not letting SSL-based traffic out. If that is the case, you have two options:

1) Change the environment variable EC2_URL to http://ec2.amazonaws.com/
2) Ask your administrator to allow external access on port 443

If you use the non-SSL http://ec2.amazonaws.com/ URL, be aware that although your requests are authenticated (i.e. no one else can perform EC2 calls as you), they will not be confidential (i.e. someone could view your requests).