Passing the AWS Solution Architect Professional certification exam

 


If someone were to ask me how they should prepare for the AWS Solution Architect Professional exam, I would advise them not to prepare like I did, in the sense that I went to the exam quite under-prepared and had to spend considerable time on each question in the initial stages before I got the hang of the questions. As the test progressed I was able to speed up my responses.

I had set a target of March end to complete this certification. My earlier Associate certification was expiring by March end and instead of getting re-certified I thought I would attempt this certification. Unfortunately I got involved in getting my online courses ready (you should see them in a couple of months’ time) and didn’t have much time to prepare. Most of my preparation was done in the last week and I don’t think that is enough.

My friend Kalyan had sent me links to videos that needed to be watched and also links to important white papers. Kalyan is a certified professional himself and these were helpful, though I did not watch all the videos and did not read all the white papers. What I did was to read the developer documentation of most of the services and then depend on my logical ability to deduce the answer. This will backfire if you do not have a good grip on the services of AWS.

A few points from what I could gather from the exam:

1. Quite a few questions involve Big Data services: Kinesis, Redshift, ElastiCache and EMR. So understand these services well. You must know when to use which service.

2. I got a few questions on SWF and Data Pipeline. Again you need to understand which is used for which situation.

3. Lots of questions on hybrid cloud. So be very thorough with Direct Connect, VPN and Route 53.

4. Lots of questions about costs, which involved CloudFront, S3 and Glacier.

5. Understand when you must use RDS and when you must use DynamoDB. Quite a few questions have both these services among the answer options.

6. Understand the difference between Layer 4 and Layer 7 in Networking

7. If you know your theory well, you can easily discard some of the options. This is the approach I used in most of the questions. To paraphrase Sherlock Holmes, “Remove all the impossible answers. Whatever remains, however improbable, must be true”

The major problem with this exam is that you may not have used many of the services. Many of us will not have had a chance to use Direct Connect or VPN or Redshift or ElastiCache and so on. So we must rely on theory and an understanding of these services to answer the questions. Therefore it is imperative that you read the documentation in detail and watch the 300 and 400 series videos to understand the theory thoroughly. A good understanding of the theory coupled with good analytical reasoning skills will let us cross the line.

All the best if you are trying for this certification.

Human Errors and the burden on the SysOps engineer


Recently I read about two outages, the AWS S3 outage being the bigger one. The other outage was at GitLab.com. In both cases the root cause of the problem boiled down to human error. Even with tons and tons of automation around, we need to depend on system operators to perform certain tasks and this is where human error gets induced. Also remember, not every automation tool is foolproof. You never know which corner condition it was not designed for, and that could also induce problems. For now let us concentrate on human error.

I am sure every system administrator has his/her own horror stories to relate regarding human errors. I have known too many. I will tell you a few of them here.

When I worked for my company, in the late 80s, getting the root password was not a difficult thing. Lots of people had the root password for the systems. Once a sysadmin went to a lab of another department as he wanted to copy some files from there. He had root access on the system. After copying the files, he saw some unnecessary files in the system and gave rm -rf *.*  Unfortunately he was not in the directory where those unwanted files existed but in a directory at a higher level. Before he could realize his mistake the system went down. It was later said that whenever the department people saw him coming that side, they would shut down all systems till he left the place.

This was a minor one as it impacted only one system. The major one I heard of was in the private cloud segment, where a company was hosting database as a service. It seems that one of the DB administrators had to manually connect a database to a client system. Unfortunately he connected the DB of another client instead of the correct one. So the first client was able to see the database of another company! All hell broke loose and the client had to be pacified by people at the very top.

If you look at the GitLab.com case, you will see another standard horror story. People take backups but never test whether the backups are good. A friend of mine related a story wherein some major design drawings were being backed up regularly. One day their servers crashed and became non-recoverable. So they tried to restore from the backups, only to find that though backup jobs were run daily, there were failures which the sysadmin had not noticed. So there was nothing on the tapes. To add to their horror, the sysadmin had quit only a few weeks before. So almost 6 months of effort had to be repeated!

The more complex the system, the more impact any such error has. Additionally, a complex system like AWS runs its own error checking and consistency checks, so recovering from such errors will not be an easy task.

The job of the System Administrator will grow more and more tense with the evolving complexity of systems. The fact is that some of the best SysAdmins are chosen for such jobs, and yet there could always be an instance wherein, due to tiredness, temporary lack of focus, oversight or sheer bad luck, an error could be made. Unfortunately, in this cloud era, if you are a service provider, the repercussions are bound to be heavy. System Administrators must be more vigilant than ever and organizations need to put in lots of checks and balances and of course automate wherever they can.

You can read about the AWS S3 outage and what was impacted here: https://www.theregister.co.uk/2017/03/01/aws_s3_outage/

Here is an explanation of how the AWS outage happened: https://techcrunch.com/2017/03/02/aws-cloudsplains-what-happend-to-s3-storage-on-monday/

Here is a writeup on the GitLab.com outage:  https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/

 

Serverless Computing: The way ahead


To those who have not heard of this, the phrase ‘Serverless Computing’ may sound like science fiction. How can compute happen without a computer? While the phrase does give rise to such an interpretation, what is meant here is that we will compute without owning or setting up the servers.

When we embraced Cloud computing we gave up ownership of the servers but not control. The servers ‘belonged’ to you in the sense that you could log in to the system and customize the OS and the application any way you wanted. With Serverless Computing, you give up both ownership and control. The service provider is responsible for setting up the servers and running your application seamlessly.

I will take up AWS Lambda, which is what everyone thinks of when talking about serverless computing, and explain what it means to give up ownership and control. One way to use Lambda is as follows: a) Write your code and upload it to Lambda b) Tell Lambda when you want your code to be run (scheduled or in response to an event) c) Lambda will set up the required environment and run your code d) You will be charged only for the duration that your code ran.

Let’s take an application example. Assume you want to provide a service wherein the user uploads a Word file and wants it converted to a pdf file. The application logic will work this way:

– The user file is uploaded to Amazon S3

– S3 generates an event

– The event is sent to Lambda

– Lambda then invokes your code, which downloads the file, converts it and uploads the resultant pdf file to another S3 bucket

– The user is then informed that the pdf file is now available for download

Assume the number of conversion requests per day is not very large. In that case, running a backend server to perform this logic will be costly, as we have to keep the server running 24×7. If you use Lambda, you will only be charged for the duration for which your program ran. Setting up the servers and running your program is the responsibility of Lambda. So you have less headache and it costs you less.
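To make this flow concrete, here is a minimal sketch of what such a Lambda handler could look like in Python. The output bucket name and the convert_to_pdf helper are hypothetical (the conversion itself is outside AWS), and the S3 trigger is assumed to be configured on the upload bucket separately.

```python
import os
import boto3

from converter import convert_to_pdf  # hypothetical helper module, not an AWS library

s3 = boto3.client("s3")
OUTPUT_BUCKET = "converted-pdfs-example"  # assumed name of the output bucket

def handler(event, context):
    """Triggered by an S3 'ObjectCreated' event on the upload bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Lambda only provides writable scratch space under /tmp
        local_doc = os.path.join("/tmp", os.path.basename(key))
        s3.download_file(bucket, key, local_doc)

        # Convert the document and upload the result to the output bucket
        local_pdf = convert_to_pdf(local_doc)  # hypothetical conversion step
        out_key = os.path.splitext(key)[0] + ".pdf"
        s3.upload_file(local_pdf, OUTPUT_BUCKET, out_key)
```

The point to note is that there is no server anywhere in this sketch: Lambda brings up the environment when the event arrives and tears it down afterwards.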

There are limitations of course. Lambda doesn’t allow a job to run for a very long time; jobs are terminated after 5 minutes. Similarly, starting your app may take some time, especially if you have written the app in Java. You don’t get any persistent local storage. The amount of RAM you get is also limited. The languages currently supported by Lambda are Java, Python, JavaScript (Node.js) and C#.

All these limitations mean that developers must start thinking of serverless computing from the architecture stage itself. They must learn to think in serverless terms rather than assuming the availability of a server in the backend. Of course, not every application can use a serverless architecture, but in the microservices area this will definitely be beneficial. This excellent and quite exhaustive article by Mike Roberts on Martin Fowler’s site gives all sides of the picture and is definitely worth your time.

Here is another article describing how the company CloudSploit went completely serverless. This article has a lot of technical details, including some code snippets. Developers will get some nice insights into how they can implement a serverless architecture.

It is not only AWS that has serverless computing. Google has Cloud Functions, Microsoft Azure has Azure Functions and IBM Bluemix has OpenWhisk. So you can see that every cloud provider is interested in having a serverless computing solution. There are also frameworks like the Serverless Framework which make serverless development easier.

It will take another two or three years to know how this succeeds, but going by the idea, my take is that serverless computing has a bright future and we will see a lot more applications adopting it.

Reaching the first milestone & New Initiatives


Last week we achieved what I consider to be our first milestone. We have now trained more than 1000 engineers. The number stands at 1030 and may go up by another 20 by the end of the year. This has been achieved over a period of 2 years and 2 months. It is always a good feeling when you can train people, and training more than 1000 of them does give you a sense of satisfaction. This is just the beginning. There is a lot more to be done. When it comes to training, there is no end goal. You go on educating people as long as you are in business.

We have started some new initiatives. I have already blogged about our video initiative. We are now launching ‘Startup Siksha’, an initiative to help startups. I have run a startup myself and have also been a part of a startup. So I understand the need of startups to get their people up to speed in new technologies in a cost effective way. Generally startups rely on people educating themselves and coming up to speed soon. This works in some cases and in other cases may not be very effective. In the complex world of cloud, an initial formal education can yield good dividends. Keeping in mind the cash constraints of startups, we offer ‘Startup Siksha’. The main features of this initiative are:

– Offered at a very special price

– Online training for two days (Weekdays or Weekends. Most startups prefer weekends)

– Will accommodate up to 5 engineers

– Study material and Lab material will be provided

– A session on how to take up the certification exam

– Sample certification questions will be provided

We have already trained startups like RazorThink, EZDC, Aptus, Techwave etc. on AWS and we have seen the people we trained go on to get certified as AWS Solution Architect Associate and AWS Developer Associate.

If you are a startup, you can write to me at suresh@cloudsiksha.com to know more details.

CloudSiksha completes two years


It gives me great pleasure to share with you that CloudSiksha completes two years as of today. It has been a fascinating journey so far. We have trained close to 900 people in these two years for various companies ranging from small startups all the way to huge IT giants. We have trained people in AWS, SoftLayer, DevOps, Chef, Puppet, Storage and MongoDB. I take this opportunity to thank all the participants, friends and partners who have made this journey fruitful.

As you know, any company can survive only if it keeps running harder and takes calculated risks. Going forward, we wish to offer more training via the video training mode, which will enable you to study at your own pace, and we will also do blended learning. We also plan to be a platform for various SMEs to deliver their training. We will remain focused on Cloud, Big Data and DevOps and want to make CloudSiksha’s online training courses the best in class. I know the plan is ambitious, but then we can’t go far without ambition, can we?

Today, as a gift to those who are very new to Cloud, we have uploaded three videos on YouTube. These show you how to start an EC2 instance, how to log in to a Linux EC2 instance and how to log in to a Windows EC2 instance. Here are the links:

Starting EC2 Instances: https://youtu.be/RpQob8M8Rqc

Login into Windows Instance: https://youtu.be/ahALycfEX0o

Login into Linux Instance: https://youtu.be/MBb2oJnTqRY

I must confess that I have been a bit lax in updating the blog due to professional commitments but that is no excuse. I will ensure that this blog gets updated once every fortnight from now on. Do follow this blog in your favorite reader to get updated regularly.

We will also be starting a newsletter soon. We used to send out a newsletter but we stopped it because we wanted a ‘No Spam’ policy in place and we realized we were spamming people. We will now take people’s approval before we send the newsletter. This will be a monthly newsletter. If you want to subscribe to it, kindly send a mail to enquiry@cloudsiksha.com with ‘Subscribe’ as the subject.

Looking forward to interacting with all of you on a regular basis.

Watching your AWS Bill


I have heard more than one story about how people were using AWS and suddenly one day they got a hefty bill. I too had this experience. The bill was not hefty but for a startup like mine even 20 or 30 dollars is an unwanted expenditure.

Why does this happen? Is it due to lack of knowledge of how AWS works and its billing schemes? No, a lot of it happens due to lethargy. So the first battle to be fought is with lethargy. Being active is not enough; you must also know how you may lose money unintentionally. I am listing some of my experiences here.

I am assuming you are a small or medium company and you do not want to purchase any additional software to manage your AWS infrastructure. You are managing it from the console. Here are the things you must do / look out for in order not to overspend.

1. Check the projected bill regularly: The best and simplest way to avoid unwanted charges is to check the projected bill on a regular basis. In your billing section you will find a projection for the month. Check whether it is within the limit that you expect. If not, dig deeper to see where the problem lies.

2. Set an alarm: You can set a billing alarm to alert you when the bill crosses a certain value. I think Amazon allows you to set this limit only once, so use it wisely, and you will be alerted in case the bill crosses that amount.

3. Stopping EC2 instances isn’t enough: Remember that when you stop your instance, only the billing for the instance stops. Your EBS volumes are still billed. That means if you have EBS volumes whose total size is more than what your free tier permits, you will be billed for the EBS volumes even though the instances to which they are attached are stopped.

4. Check the regions: A couple of times I had terminated my instances and, seeing no running instances, I was satisfied. Yet I got a bill. This was because instances were running in other regions and I hadn’t shut them off. So remember that what you see on the dashboard pertains to one single region. Religiously check all regions regularly (see the sketch after this list), else you will end up paying a decent sum to Amazon.

5. Release Elastic IPs: Elastic IPs cost nothing when they are attached to a running instance. In case you terminate an instance which has an Elastic IP attached, ensure you also release the Elastic IP. Else you will be charged for an Elastic IP which is not in use.

6. Delete Autoscaling groups: This happened to me once. I had terminated all instances and then logged off. What I had not realized was that I had an autoscaling group running with the minimum number of instances set to 2. So after I logged off, autoscaling did its job. It started two instances and I ended up paying for them. So always check whether you have any autoscaling group and whether the instances you are terminating belong to one.

7. Delete Elastic Load Balancers: Ensure you delete the ELB as well when you delete the instances attached to it. Else you will be charged for the ELB.

8. Understand what is free and what is not: I should probably have put this up as the first point. It is very important that we understand which services are free and which are paid. You must also understand the limits of the free tier. This will go a long way in ensuring that you do not pay anything in excess.
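To make points 4 and 5 easier to act on, here is a small sketch using Python and boto3 (assuming boto3 is installed and your credentials are configured) that walks through every region and reports running instances and unattached Elastic IPs. Treat it as an illustration rather than a complete audit tool.

```python
import boto3

def audit_all_regions():
    """Report running instances and unattached Elastic IPs in every region."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

    for region in regions:
        client = boto3.client("ec2", region_name=region)

        # Point 4: instances may be running in regions you never look at
        reservations = client.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]
        running = [i["InstanceId"] for r in reservations for i in r["Instances"]]

        # Point 5: Elastic IPs not associated with anything still cost money
        addresses = client.describe_addresses()["Addresses"]
        idle_eips = [a["PublicIp"] for a in addresses if "AssociationId" not in a]

        if running or idle_eips:
            print(f"{region}: running instances {running}, idle Elastic IPs {idle_eips}")

if __name__ == "__main__":
    audit_all_regions()
```

Running something like this whenever you log in takes only a few seconds and would have caught both of the surprises I described above.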

Large corporates will have IT teams which monitor for wastage. People like me, who run small companies, may not have the bandwidth to keep a continuous eye on the status of the infrastructure. So ensure that you scan for all the things I mentioned above whenever you log in to your AWS console. Only by discarding your lethargy will you be able to ensure you don’t waste money.

If you have any such experience, let me know. I will add the learning to this post.

Dedicated Hosts from Amazon


Amazon AWS recently announced the availability of Dedicated Hosts for users. This means that you can order a dedicated host for yourself and run your VMs on it. Amazon says, “Dedicated Hosts provide you with visibility into the number of sockets and physical cores that are available so that you can obtain and use software licenses that are a good match for the actual hardware.” You can read all the technical details of how to order a dedicated host and how to place your instances on it at this blog: https://aws.amazon.com/blogs/aws/now-available-ec2-dedicated-hosts/

In the case of a Dedicated Host, billing starts as soon as a Dedicated Host is provisioned for you. The billing doesn’t depend on how many instances you are running on the Dedicated Host. You can check the pricing of dedicated hosts here: http://aws.amazon.com/ec2/dedicated-hosts/pricing/
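As an illustration of the flow described in that blog post, a minimal boto3 sketch could look like the following; the instance type, Availability Zone and AMI ID are placeholders, not recommendations.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate one Dedicated Host. Billing for the host starts here,
# regardless of how many instances are later placed on it.
host_id = ec2.allocate_hosts(
    InstanceType="m4.large",        # placeholder instance type
    AvailabilityZone="us-east-1a",  # placeholder Availability Zone
    Quantity=1,
    AutoPlacement="off",            # we will pin instances to this host explicitly
)["HostIds"][0]

# Launch an instance pinned to that specific host.
ec2.run_instances(
    ImageId="ami-12345678",         # placeholder AMI ID
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```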

This is a good move by Amazon and I am sure this will slowly lead towards bare metal provisioning being available as well in due course. The reason I say this is that IBM SoftLayer has bare metal and it is their USP. Bare metal offers enterprises a lot of control and also ensures that compliance requirements and performance requirements are taken care of. So if you want to build your own data center within public cloud infrastructure, using bare metal could be preferable for certain use cases. IBM SoftLayer gives that flexibility to its users. IBM SoftLayer has both dedicated hosts and bare metal. Amazon has caught up on the dedicated hosts front. I think in future they may get to the bare metal part as well.

In the case of bare metal you ‘own’ the server, in the sense that you can control the server fully by using, say, IPMI. You get a remote console (KVM) for your use and then you decide whether you want to use it as a standalone machine or load any hypervisor that you want. Since you completely own the server, many of your compliance headaches are solved.

In due course of time I think every cloud provider will be pushed to offer bare metal servers. Of course, the main value proposition of the cloud is that it takes the management headache away from you and leaves you free to concentrate on your product or service. Slowly everyone is realizing that this dream of No-IT is not a possibility, since there are multiple reasons why an Enterprise, especially a large one, needs to have control over the infrastructure. For such large Enterprises, it is not No-IT but rather the flexibility and elasticity of the cloud which will be the main value proposition. The Dedicated Hosts offering from AWS tells us that this is indeed true.

CloudSiksha’s First Anniversary


I am delighted to share with you that CloudSiksha completed one year of its existence on Oct 28th 2015. It has been a great year so far, with a lot of initiatives taken which will yield results in the coming years.

We started our first course in the month of December 2014. The first two courses we did were related to Storage. N R Ramesh, Saravanakumar, Ratnasagar, as well as my former colleague Sarath, were the initial participants. My sincere thanks to them. We started the Cloud related courses in Jan 2015. Since then it has mostly been Cloud related work that we have been doing.

We did our first online course with participants from London. This was the MongoDB course, which was conducted by Maniappan. Later I did an AWS Solution Architect course for participants from the US. From then on we have steadily been doing online courses for participants from the US, Australia, Mumbai, Delhi, Hyderabad etc.

We also expanded into the corporate world. We did courses for Accenture, TechMahindra, HCL, Sonata, RazorThink and Adobe. We also had participants from companies like HP, IBM etc. for our courses.

From Storage and AWS, we expanded to Puppet, Chef, Python and IBM’s SoftLayer. Within AWS itself, we have been conducting courses on Solution Architect, SysOps and Development. We will be venturing into DevOps soon.

I got myself certified as an AWS Solution Architect – Associate. I am also glad to report that quite a few of the participants of our classes passed this exam.

Additionally we have now partnered with other companies to develop online content. Python based content is under development. Similarly AWS based content is also under development. We will publish all details of this once the development is complete. In the coming year we will be focusing a lot on high quality online content.

At this time I would like to thank Maniappan and Sarath, who took the initial classes. I would also like to thank Ramesh Murthy and his team at StridesIT, who have supported us both in terms of customer acquisition and in terms of fulfilling the infrastructure requirements. Kavirajan designed my website and I have to thank him for the great job he did. I also wish to thank all my other partners on this occasion.

My sincere thanks to all participants of our courses. We would not have grown this much were it not for your active participation and support.

Hoping that we will scale greater heights in the coming years.

 

You succeed if your Eco-System succeeds


In my former company I prepared a deck for my CEO for a talk he was to give, which went ‘you succeed only if your client succeeds’. The basic idea was that the days of customer satisfaction, customer delight, customer ecstasy and similar synonyms were no longer applicable. It was clear that as a service provider we would win only if our client wins. If clients are satisfied with your work but that work doesn’t lead to the clients succeeding in their business, eventually you will go out of business. Achieving this sort of synergy requires a high level of trust and a deep understanding of the client’s business.

That was from a service provider angle, where the service we were providing was more about offshoring the client’s operations and ensuring cost reductions. Providers like Amazon face a different type of challenge. The first step is to ensure that those who are direct customers succeed by using Amazon’s cloud services. The second step is to accept that Amazon alone cannot provide all the services that a client needs, and to ensure an eco-system is built around Amazon so that clients have a lot more choice. And all those choices have Amazon as their underlying layer.

Very often when I am taking an Amazon AWS class I get asked about features that participants would love to see in Amazon. While some of them will eventually be developed by Amazon and provided to users, we must understand that even a company like Amazon will have a limit on resources when it comes to development. This is where building an eco-system helps, as it allows other startups with agility to build tools and software which can be of use to the Enterprise. Amazon does a good job with their documentation (probably the most extensive and the best documentation I have seen on the web) and it becomes easy for startups to develop applications / software which are based on Amazon APIs. Once such tools and software proliferate, more and more customers will jump onto the Amazon bandwagon.

Recently I was talking to the CTO of Kumolus, Michael Salleo, about their product. I had seen their product during the AWS Conference. It was an impressive product. You can have a look at it here: https://kumolus.com/ It does quite a few things which many Amazon users want. You get a very good view of how your money is being spent, and a lot more management and deployment capabilities are present in the software.

That was just one example. Here is a link to a set of slides which talks about various tools / products which use AWS in one way or the other.

Hot Products from Amazon re:Invent 2014

They range from management software, security software, big data analytics, backup & DR and more. This gives an idea of the sort of products that are being built using cloud technologies. This is what will ensure that Amazon, Azure and other cloud providers win in the longer run. In other words, you have to put in effort and money to help those who depend on you and ensure they succeed, so that in the long run you succeed.

 

AWS Enterprise Summit 2015 Bangalore


I attended the AWS Enterprise Summit held at Ritz Carlton in Bangalore. Due to prior engagements I was not able to attend the second half of the summit. Here are some observations about the summit.

1. Amazon will be coming to India soon. They are building infrastructure in India and they will have multiple Availability Zones in India by 2016. I think there was a news article on this a few days back. This news was confirmed in this summit.

2. Hybrid Cloud, or shall I say On-Premise Data Center + Cloud, will be the way Enterprises go in the future. This is what many in the panel discussion said. It is difficult, if not impossible, for large enterprises to ignore their legacy systems and hardware. These are very difficult to move to the cloud. Hence the legacy hardware and software will remain on-premise and newer applications will run from the cloud. This co-existence will remain a reality for quite some time.

3. A Gartner quadrant diagram was displayed. It was stated that the compute power of Amazon was 10X more than that of all the other companies in the same quadrant put together. That is quite impressive and tells us about the scale of Amazon.

4. The partner showcase was impressive, with some innovative products on display. The partner exhibition gave an idea of how the eco-system around cloud is developing and how Amazon’s growth is spawning a lot of innovation from other companies, thus ensuring more companies grow in the Cloud services area.

5. There were companies which are into consulting, companies which are into disaster recovery, companies which are into products and more. All of them are partnering with Amazon. I was particularly impressed with the product from Kumolus. The interface was good and very easy to operate. You can check out their product here: https://kumolus.com/

6. I was talking to UmaShankar, VP, Delivery for Cloud Kinetics http://www.cloud-kinetics.com/home/ We were discussing how Cloud has now ushered in a need for multi-faceted personalities in the Admin area. In order to be a good Solution Architect for Cloud based services, it is not enough if the person is only a Server Admin or Network Admin or Storage Admin. She/he has to be all of these. So to all the admins out there: if you want to move to cloud, expand your skills.

7. Some general observations:

a) The crowd was impressive. It gives an idea of how much interest exists in Cloud, and its march in India is inevitable.

b) The hotel was small for this crowd size

c) The percentage of women overall was very low. No idea why that is so.

d) It was good to see my friend and former colleague KKV present a case study on behalf of the Azim Premji foundation. Couldn’t get to meet him though.

e) Met another colleague, Varada, who is now with ‘ifruid labs’.