Will I lose my job to the Cloud?

(Image: Hindu BusinessLine)

One of the questions I get asked constantly nowadays is, “Will I lose my job to the Cloud?” The people asking me range from system administrators to senior managers. They could be involved in infrastructure projects or development projects, but their concern about the cloud taking away their jobs is real.

I had written earlier that the Cloud now demands a broader skill set from administrators. Earlier you were a server admin, AD admin, storage admin, network admin and so on. Some of these tasks are so simplified on the cloud that if you are specialized in only one of these, you may not be the right fit for the cloud. Let us take the case of storage. We have excellent admins who specialize in administering complex storage products from Dell-EMC, NetApp, Hitachi and so on. Cloud storage takes away most of that complexity. If you take the case of storage in AWS, you have EBS for block storage, EFS for file storage and S3 for object storage. All three of them are set up for you and there is not much left for a storage administrator to do. Similarly, when it comes to networking, the complexity in the cloud is much less than what it is when you have to set up networking in your data center. Setting up a VPC is much less complicated than setting up routers and switches (sometimes from different vendors) in your data center. Similarly, starting an EC2 instance is a very easy job and you don’t really require a server administrator to do it.
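To see how little “server administration” is left in that last step, here is a minimal sketch of starting an EC2 instance with boto3, the AWS SDK for Python. The AMI ID, key pair and security group names are hypothetical placeholders, not values from any real account.

```python
import boto3

# Create an EC2 client in the region you work in
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one t2.micro instance from a (hypothetical) AMI.
# No racking, no cabling, no OS installation; just one API call.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder key pair name
    SecurityGroups=["default"],
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```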

In other words, the Cloud values technical knowledge over product knowledge. It also values breadth of knowledge. Of course, some areas may not be impacted much, say a Microsoft AD administrator or a DBA, unless someone is using a PaaS, in which case some of these roles will also be impacted.

So what should you do if you are an administrator? How scared should you be of losing your job? To be honest, I cannot answer you with a hundred percent certainty about what the future holds, but these are a few steps you can take:

  • Expand your knowledge base. If you are a storage admin, start checking what networking is all about and vice versa
  • Understand what the roadmap is for the product you are supporting. Let us say you are supporting a NetApp product; you need to understand what the company’s roadmap is for that particular product. This will give you an idea of whether you are supporting a soon-to-be-obsolete product or an evergreen product.
  • Find out the roadmap of your company and whether it has a Cloud strategy. In many cases, once people land a job, they rarely ever try to find out the roadmap of their own company. You must get rid of this lethargy and find out if and when your company will move to the cloud.
  • Also try and understand how the external market is growing. Is everyone going to the cloud? Are the sales of Dell-EMC, NetApp, Hitachi etc. going up or down? Your job depends on how the market is growing and in which direction it is growing
  • If you are serious about moving to the Cloud, then check if there are any cloud projects within the company. To show your seriousness, try and get yourself certified in one of the major Cloud vendors’ certifications, based on what is required in your company. Certification will cost money but it may be worthwhile if you are serious about moving to the cloud

As I see it, there is no need to panic: though cloud migration is happening, it is not happening at a pace where major companies are dismantling their data centers. That will not happen soon, or may never happen. Yet the demands of the future will be different: wider knowledge on diverse topics, a good grip on the fundamentals and so on, and you must be prepared for it.

I also get questions from mid level managers on the impact of cloud on their jobs. I will write a separate post on that soon.

CloudSploit and Security in the Cloud : An Interview

cloudsploit

Security in the cloud is beyond doubt the most important criterion for enterprises migrating to the cloud. Security in the cloud is a shared responsibility. While cloud providers like Amazon have certain responsibilities towards securing the infrastructure, users need to be vigilant and secure their data.

There are companies which help users ensure that their cloud environment is secure. One such company is CloudSploit. The founder of CloudSploit, Matthew Fuller, was kind enough to answer my questions regarding cloud security over email.


 Matthew Fuller, Inventor and Co-Founder of CloudSploit

Matt is a DevOps Security Engineer with a wide array of security experience, ranging from web application pentesting to securing complex networks in the cloud. He began his security career, and love for open source, while working as a Web Application Security Engineer for Mozilla. He enjoys sharing his passion for technology with others and is an author of the best selling eBook on AWS’s new service – Lambda. He lives in Brooklyn, NY where he enjoys the fast paced, and growing, tech scene and abundant food options.

Here is our conversation

CloudSiksha: In your experience, what are the major security concerns of enterprises wanting to migrate to Cloud?

Matt: The biggest concern Enterprises should have with moving to the cloud is simply not understanding or having the in-house expertise to manage the available configuration options. Cloud providers like AWS do a tremendous job of securing their infrastructure and providing their users with the tools to secure their environments. However, without the proper knowledge and configuration of those tools, the settings can be mis-applied, or disabled entirely. Oftentimes, the experience that the various engineering teams may have with traditional infrastructure does not translate to the cloud equivalent, resulting in mismanaged environments. Multiply this across the hundreds of accounts and engineers a large organization may have, and the security risk becomes very concerning.

CloudSiksha: You are a security company which helps people who migrate to AWS stay secure. What do you bring over and above what Amazon provides to users?

Matt: AWS does an excellent job of allowing users to tune their environments. However, while they provide comprehensive security options for every product they offer, they do not enforce best practice usage of those options. CloudSploit helps teams quickly detect which options have not been configured properly, and provides meaningful steps to resolve the potential security risk. We do not compete with any of AWS’s tools; instead, we help ensure that AWS users are using them correctly with the most secure settings.

CloudSiksha: AWS itself has services like Inspector, CloudTrail and so on. So can users not use these services for their needs? How does CloudSploit differ from these? Or do you supplement/complement these services?

Matt: AWS currently provides several security-related services including CloudTrail, Config, Inspector, and Trusted Advisor. The CloudTrail service is essentially an audit log of every API call made within the AWS account, along with metadata of those calls. From a security perspective, CloudTrail is a must-have, especially in accounts with multiple users. If there is ever a security incident, CloudTrail provides a historical log that can be analyzed to determine exactly what led to the intrusion, what actions the malicious user took, and what resources were affected.

AWS Config is slightly different in that it records historical states of every enabled resource within the account, allowing AWS users to see how a specific piece of the infrastructure changed over time and how future updates or changes might affect that piece.

Finally, Inspector is an agent that runs on EC2 instances, tracking potential compliance violations and security risks at the server level. These are aggregated to show whether a project as a whole is compliant or not.

While these services certainly aid in auditing the infrastructure, they only scratch the surface of potential risks. Like many of AWS’s services, they cover the basics, while leaving a large opening for third party providers. CloudSploit is one such service that aims to make security and compliance incredibly simple with as little configuration as possible. It uses the AWS APIs (so it is agentless, unlike Inspector) to check the configuration of the account and its resources for potential security risks. CloudSploit is most similar to AWS Config, but provides many advantages over it. For example, it does not require any manual configuration, continually updates with new rule sets, does not charge on a per-resource-managed basis, and covers every AWS region.

CloudSploit is designed to operate alongside these AWS services as part of a complete security toolset, and helps ensure that when you do enable services like CloudTrail, that you do so in a secure fashion (by enabling log encryption and file validation, for example).

See more at https://cloudsploit.com/compare
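As an illustration of enabling CloudTrail “in a secure fashion”, here is a minimal sketch (my own, not CloudSploit’s code) using boto3, with log file validation and KMS log encryption turned on. The trail name, bucket name and KMS key alias are hypothetical placeholders.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Create a multi-region trail with log file validation and
# KMS log encryption enabled, the settings Matt mentions above.
cloudtrail.create_trail(
    Name="org-audit-trail",                    # hypothetical trail name
    S3BucketName="my-cloudtrail-logs-bucket",  # bucket must already exist with the right policy
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
    KmsKeyId="alias/cloudtrail-logs",          # hypothetical KMS key alias
)

# Start recording API activity on the new trail
cloudtrail.start_logging(Name="org-audit-trail")
```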

CloudSiksha: How does CloudSploit work in securing infrastructure?

Matt: CloudSploit has two main components. First, it connects to your account via a cross-account IAM role and queries the AWS APIs to obtain metadata about the configuration of resources in your account. It uses that data to detect potential security risks based on best practices, industry standards, and in-house and community-provided standards. For example, CloudSploit can tell you if your account lacks a secure password policy, if your RDS databases are not encrypted, or your ELBs are using insecure cipher suites (plus over 80 other checks). These results are compiled into scan reports at predefined intervals and sent to your email or any of our third-party integrations.
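To make the idea of an agentless, API-based check concrete, here is a minimal sketch (my illustration, not CloudSploit’s actual plugin code) of two such checks with boto3: whether the account has a password policy at all, and whether any RDS instances are unencrypted.

```python
import boto3

iam = boto3.client("iam")
rds = boto3.client("rds", region_name="us-east-1")

# Check 1: does the account have a password policy?
try:
    policy = iam.get_account_password_policy()["PasswordPolicy"]
    print("Password policy found, minimum length:", policy.get("MinimumPasswordLength"))
except iam.exceptions.NoSuchEntityException:
    print("RISK: no account password policy is configured")

# Check 2: are any RDS instances storing data unencrypted?
for db in rds.describe_db_instances()["DBInstances"]:
    if not db.get("StorageEncrypted", False):
        print("RISK: unencrypted RDS instance:", db["DBInstanceIdentifier"])
```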

The second component of CloudSploit is called Events. Events is a relatively new service that we introduced to continually monitor all administrative API calls made in your AWS account for potentially malicious activity. Within 5 seconds of an event occurring, CloudSploit can make a security threat prediction and trigger an alert. The Events service is monitoring for unknown IP addresses accessing your account, activity in unused regions, high-risk API calls, modifications to security settings and over 100 other data points.
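CloudSploit’s Events service is its own product, but the AWS building block it takes advantage of, CloudWatch Events, can be sketched as follows: a rule that matches a high-risk API call (here, opening up a security group) and forwards it to an SNS topic. The rule name, topic ARN and the choice of API call are assumptions for illustration only.

```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match CloudTrail-delivered EC2 API calls that open up a security group
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["AuthorizeSecurityGroupIngress"]},
}

events.put_rule(
    Name="alert-on-sg-ingress-change",   # hypothetical rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Send matching events to an SNS topic that notifies the security team
events.put_targets(
    Rule="alert-on-sg-ingress-change",
    Targets=[{"Id": "1", "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)
```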

All of this information is delivered to your account to help you take action and improve the security of your AWS environment.

CloudSiksha: What are the dangers of providing you with a user account in AWS?

Matt: There is very little danger. CloudSploit uses a secure, third-party, cross-account IAM role to obtain temporary, read-only access to your AWS account. Even if this role information were compromised, an attacker would still not be able to gain access without also compromising CloudSploit’s AWS account resources. The information we obtain and store is also very limited in nature – metadata about the resources but never the contents of those resources.

 CloudSiksha: Can you tell me something about how your software has been used by companies and what value they are seeing?

Matt: Companies using our product have integrated it in a number of unique ways. For example, using our APIs, a number of our users have built integrations into their Jenkins-based pipelines, allowing them to scan for security risks when making changes to their accounts, shortening the feedback loop between changes being made and security issues being detected. Other companies have made CloudSploit the central dashboard for all of their engineering teams across every business unit to ensure that security practices are being implemented across the entire company.

Individual developers and pre-revenue projects tend to use our Free option, and are happy with the value it provides. 20% of these users move on to a paid plan in order to have the scans and remediation advice occur automatically.

Medium-sized teams prefer the Plus account in order to connect CloudSploit with third-party plug-ins such as email, SNS, Slack, and OpsGenie.

Advanced users, those who like to automate everything in their CI/CD workflow, as well as larger enterprises prefer the Premium plan for its access to APIs and all of our various features and maximum retention limits.

CloudSiksha: I see you have multiple options with varying payments. Has any of your client shifted from one tier to another? What was the reason for them upgrading to a higher tier?

Matt: Absolutely. Individual developers give the Free account a try and love the results. For many, it’s a “no brainer” to pay $8/month for automated scanning and alerts containing remediation advice. The biggest drivers of clients moving to higher-tier plans are a need for custom plugins, increased scan intervals, and longer data retention times.

CloudSiksha: What more can we expect to see from CloudSploit?

Matt: Expect to see a stronger focus on compliance. Besides the 80+ plugins and tests that we currently have, we are working to expand our footprint for more compliance-based best practices. In addition, we are launching a new strategy to get information sooner and react to it faster than any competing AWS security and compliance monitoring tool. Amazon released CloudWatch Events in January and a month later we had already taken advantage of those features. We plan to continue to enhance this Events integration, delivering ever more useful results to our users.

You can check out CloudSploit here

Disclosure: The links given here are affiliate links.

Passing the AWS Solution Architect Professional certification exam

 

Professional Certificate

If someone were to ask me how they should prepare for the AWS Solution Architect Professional exam, I would advise them not to prepare like I did, in the sense that I went to the exam quite under-prepared and had to spend considerable time on each question in the initial stages before I got the hang of the questions. As the test progressed I was able to speed up my responses.

I had set a target of March end to complete this certification. My earlier Associate certification was expiring by March end and instead of getting re-certified I thought I would attempt this certification. Unfortunately I got involved in getting my online courses ready (you should see them in a couple of months’ time) and didn’t have much time to prepare. Most of the preparation I did was in the last week, and I don’t think that is enough.

My friend Kalyan had sent me links to videos which needed to be watched and also links to important white papers. Kalyan is a certified professional himself and these were helpful, though I did not watch all the videos and did not read all the white papers. What I did was read the developer documentation of most of the services and then depend on my logical ability to deduce the answers. This will backfire if you do not have a good grip on the services of AWS.

A few points from what I could gather from the exam:

1. Quite a few questions involve Big Data services: Kinesis, Redshift, ElastiCache and EMR. So understand these services well. You must know when to use which service

2. I got a few questions on SWF and Data Pipeline. Again you need to understand which is used in which situation

3. A lot of questions on hybrid cloud. So be very thorough with Direct Connect, VPN and Route 53

4. A lot of questions about costs, involving CloudFront, S3 and Glacier

5. Understand when you must use RDS and when you must use DynamoDB. Quite a few questions have both these services as answers

6. Understand the difference between Layer 4 and Layer 7 in Networking

7. If you know your theory well, you can easily discard some of the options. This is the approach I used in most of the questions. To paraphrase Sherlock Holmes, “Remove all the impossible answers. Whatever remains, however improbable, must be true”

The major problem with this exam is that you may not have used many of the services. Many of us will not have had a chance to use Direct Connect or VPN or Redshift or ElastiCache and so on. So we must rely on theory and an understanding of these services to answer the questions. Therefore it is imperative that you read the documentation in detail and watch the 300 and 400 series videos to understand the theory thoroughly. A good understanding of the theory coupled with good analytical reasoning skills will let us cross the line.

All the best if you are trying for this certification.

Human Errors and the burden on SysOps engineer

(Image: Facepalm statue, Tuileries Garden, Paris)

Recently I read about two outages, the AWS S3 outage being the bigger one; the other outage was at GitLab.com. In both cases the root cause of the problem boiled down to human error. Even with tons and tons of automation around, we need to depend on system operators to perform certain tasks and this is where human error gets induced. Also remember, not every automation tool is foolproof. You never know which corner condition it was not designed for, and that could also induce problems. For now let us concentrate on human error.

I am sure every system administrator has his/her own horror story to relate regarding human errors. I have known too many. I will tell you a few of them here.

When I worked for my company, in the late 80s, getting the root password was not a difficult thing. Lots of people had the root password for the systems. Once a sysadmin went to a lab of another department as he wanted to copy some files from there. He had root access on the system. After copying the files, he saw some unnecessary files on the system and gave rm -rf *.*  Unfortunately he was not in the directory where those unwanted files existed but in a directory at a higher level. Before he could realize his mistake the system went down. It was later said that whenever the department people saw him coming that side, they would shut down all systems till he left the place.

This was a minor one as it impacted only one system. The major one I heard of was in the private cloud segment, where they were hosting database as a service. It seems that one of the DB administrators had to manually connect a database to a client system. Unfortunately he connected the DB of another client instead of the correct one. So the first client was able to see the database of another company!! All hell broke loose and the client had to be pacified by people at the very top.

If you look at the GitLab.com case, you will see another standard horror story. People take backups but never test if the backups are good. A friend of mine related a story wherein some major design drawings were being backed up regularly. One day their servers crashed and became non-recoverable. So they tried to restore from the backups, only to find that though backup jobs were run daily, there were failures which the sysadmin had not noticed. So there was nothing on the tapes. To add to their horror, the sysadmin had quit only a few weeks before. So almost 6 months of effort had to be repeated!!

The more complex the system, the more impact any such error has. Additionally, a complex system such as AWS brings its own error checking and consistency checks, so that recovering from errors will not be an easy task.

The job of the System Administrator will grow more and more tense with the evolving complexity of systems. The fact is that some of the best SysAdmins are chosen for such jobs, and yet there could always be an instance wherein, due to tiredness, temporary lack of focus, oversight or sheer bad luck, an error is made. Unfortunately, in this cloud era, if you are a service provider, the repercussions are bound to be heavy. System Administrators must be more vigilant than ever, and organizations need to put in lots of checks and balances and of course automate wherever they can.

You can read about the AWS S3 outage and what was impacted, here:  https://www.theregister.co.uk/2017/03/01/aws_s3_outage/ 

Here is an explanation of how the AWS outage happened:  https://techcrunch.com/2017/03/02/aws-cloudsplains-what-happend-to-s3-storage-on-monday/

Here is a writeup on the GitLab.com outage:  https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/

 

Serverless Computing: The way ahead

AWS Lambda

To those who have not heard of this, the phrase ‘Serverless Computing’ may sound like science fiction. How can compute happen without a computer? While the phrase does give rise to such an interpretation, what is meant here is that we will compute without owning or setting up the servers.

When we embraced Cloud computing we gave up on ownership of the servers but not the control. The servers ‘belonged’ to you in the sense that you could login into the system and customize the OS and the application any way you wanted. With Serverless Computing, you give up both ownership and control. The service provider is responsible for setting up the servers and running your application seamlessly.

I will take up AWS Lambda, which is what everyone thinks of when talking about serverless computing, and explain what it means to give up ownership and control. One way to use Lambda is as follows: a) Write your code and upload it to Lambda. b) Tell Lambda when you want your code to be run (scheduled or in response to an event). c) Lambda will set up the required environment and run your code. d) You will be charged only for the duration that your code ran.

Let’s take an application example. Assume you want to provide a service wherein the user loads a Word file and wants it converted to a pdf file. The application logic will work this way:

– The user file is loaded into Amazon S3,

– S3 generates an event

– The event is sent to Lambda,

– Lambda then invokes your code which downloads the file, converts it and uploads the resultant pdf file to another S3 bucket.

– The user is then informed that the pdf file is now available for download.

Assume the number of requests for conversion per day is not very large. In that case running a backend server to perform this logic will be costly as we have to keep the server running 24×7. In case you use Lambda you will only be charged for the duration for which your program ran. Setting up the servers and running your program is the responsibility of Lambda. So you have less headache and it costs you less.
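Here is a minimal sketch of what the Lambda handler for this flow might look like, in Python. The output bucket name and the convert_to_pdf helper are hypothetical; a real converter would have to be packaged along with the function.

```python
import os
import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "converted-pdfs-bucket"   # hypothetical output bucket


def convert_to_pdf(input_path, output_path):
    """Placeholder for the actual Word-to-PDF conversion step."""
    raise NotImplementedError("bundle a converter with the function")


def handler(event, context):
    # Lambda receives the S3 event that fired when the Word file was uploaded
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Download the uploaded file to Lambda's temporary storage
    local_in = os.path.join("/tmp", os.path.basename(key))
    local_out = local_in.rsplit(".", 1)[0] + ".pdf"
    s3.download_file(bucket, key, local_in)

    # Convert and upload the resulting PDF to the output bucket
    convert_to_pdf(local_in, local_out)
    s3.upload_file(local_out, OUTPUT_BUCKET, os.path.basename(local_out))

    return {"converted": os.path.basename(local_out)}
```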

There are limitations of course. Lambda doesn’t allow a job to run for a very long time; jobs terminate after 5 minutes. Similarly, starting your app may take some time, especially if you have written the app in Java. You don’t get any persistent local storage. The amount of RAM you get is also limited. The languages currently supported by Lambda are: Java, Python, JavaScript (Node.js) and C#.

All these limitations mean that developers must start thinking of serverless computing from the architecture stage itself. They must learn to think in serverless terms rather than assuming the availability of a server in the backend. Of course, not every application can use a serverless architecture, but in the microservices area this will definitely be beneficial. This excellent and quite exhaustive article by Mike Roberts on Martin Fowler’s site gives all sides of the picture and is definitely worth your time.

Here is another article wherein we find how the company CloudSploit went entirely serverless. This article has a lot of technical details, including some code snippets. Developers will get some nice insights into how they can implement a serverless architecture.

It is not only AWS that has serverless computing. Google has Cloud Functions, Microsoft Azure has Azure Functions and IBM Bluemix has OpenWhisk. So you can see that every cloud provider is interested in having a serverless computing solution. There are also frameworks like the Serverless Framework which make serverless development easy.

It will take another two or three years to know how this succeeds, but going by the idea, my take is that serverless computing has a bright future and we will see a lot more applications adopting it.

Reaching the first milestone & New Initiatives


Last week we achieved what I consider to be our first milestone. We have now trained more than 1000 engineers. The number stands at 1030 and may go up by another 20 by the end of the year. This has been achieved over a period of 2 years and 2 months. It is always a good feeling when you can train people, and training more than 1000 of them does give you a sense of satisfaction. This is just the beginning. There is a lot more to be done. When it comes to training, there is no end goal. You go on educating people as long as you are in business.

We have started some new initiatives. I have already blogged about our video initiative. We are now launching ‘Startup Siksha’, an initiative to help startups. I have run a startup myself and have also been a part of a startup. So I understand the need of startups to get their people up to speed in new technologies in a cost-effective way. Generally startups rely on people educating themselves and coming up to speed quickly. This works in some cases and in other cases may not be very effective. In the complex world of cloud, an initial formal education can yield good dividends. Keeping in mind the cash constraints of startups, we offer ‘Startup Siksha’. The main features of this initiative are:

– Offered at a very special price

– Online training for two days (Weekdays or Weekends. Most startups prefer weekends)

– Will accommodate up to 5 engineers

– Study material and Lab material will be provided

– A session on how to take up the certification exam

– Sample certification questions will be provided

We have already trained startups like RazorThink, EZDC, Aptus, Techwave etc. on AWS, and we have seen the people we trained get certified as AWS Solution Architect Associate and AWS Developer Associate.

If you are a startup, you can write to me at suresh@cloudsiksha.com to know more details.

CloudSiksha completes two years

second anniversary

It gives me great pleasure to share with you that CloudSiksha completes two years as of today. It has been a fascinating journey so far. We have trained close to 900 people in these two years for various companies ranging from small startups all the way to huge IT giants. We have trained people in AWS, SoftLayer, DevOps, Chef, Puppet, Storage and MongoDB. I take this opportunity to thank all the participants, friends and partners who have made this journey fruitful.

As you know, any company can survive only if it keeps running harder and takes calculated risks. Going forward, we wish to offer more training via the video training mode, which will enable you to study at your own pace, and we will also do blended learning. We also plan to be a platform for various SMEs to deliver their training. We will remain focused on Cloud, Big Data and DevOps and want to make CloudSiksha’s online training courses best in class. I know the plan is ambitious, but then we can’t go far without ambition, can we?

Today, as a gift to those who are very new to Cloud, we have uploaded three videos on YouTube. These tell you how to start an EC2 instance, how to log in to a Linux EC2 instance and how to log in to a Windows EC2 instance. Here are the links:

Starting EC2 Instances: https://youtu.be/RpQob8M8Rqc

Login into Windows Instance: https://youtu.be/ahALycfEX0o

Login into Linux Instance: https://youtu.be/MBb2oJnTqRY

I must confess that I have been a bit lax in updating the blog due to professional commitments but that is no excuse. I will ensure that this blog gets updated once every fortnight from now on. Do follow this blog in your favorite reader to get updated regularly.

We will also be starting a newsletter soon. We used to send out a newsletter but we stopped it because we wanted a ‘No Spam’ policy in place and we realized we were spamming people. We will now take people’s approval before we send the newsletter. This will be a monthly newsletter. If you want to subscribe to it, kindly send a mail to enquiry@cloudsiksha.com with ‘Subscribe’ as the subject.

Looking forward to interacting with all of you on a regular basis.

Watching your AWS Bill

billing

I have heard more than one story about how people were using AWS and suddenly one day they got a hefty bill. I too had this experience. The bill was not hefty but for a startup like mine even 20 or 30 dollars is an unwanted expenditure.

Why does this happen? Is this due to lack of knowledge of how AWS works and its billing schemes? No, a lot of it happens due to lethargy. So the first battle to be fought is with lethargy. Being active is not enough; you must know how you may lose money unintentionally. I am trying to list some of my experiences here.

I am assuming you are a small or medium company and you do not want to purchase any additional software to manage your AWS infrastructure. You are managing it from the console. Here are the things you must do / look out for in order not to overspend.

1. Check the projected bill regularly: The best and simplest way to avoid unwanted charges is to check the projected bill on a regular basis. In your billing section you will find a projection for the month. Check if it is within the limit you expect. If not, dig deeper to see where the problem lies.

2. Set an alarm: You can set a billing alarm to alert you when the bill crosses a certain value (see the sketch after this list). I think Amazon allows you to set this limit only once, so use it wisely, and you will be alerted in case the bill crosses that amount

3. Stopping EC2 instances isn’t enough: Remember that when you stop your instance, only the billing for the instance stops. Your EBS is still billed. That means if you have EBS volumes whose total size is more than what your free tier permits, you will be billed for those EBS volumes even though the instances to which they are connected are stopped

4. Check the regions: A couple of times I had terminated my instances and, seeing no running instance, I was satisfied. Yet I got a bill. This was because instances were running in other regions and I hadn’t shut them off. So remember, what you see on the dashboard pertains to one single region. So religiously check all regions regularly, else you will end up paying a decent sum to Amazon

5. Release Elastic IPs: Elastic IPs cost nothing when they are attached to a running instance. In case you terminate an instance which has Elastic IP attached, ensure you also release the Elastic IP. Else you will be charged for the Elastic IP which is not in use

6. Delete Autoscaling groups: This happened to me once. I had terminated all instances and then logged off. What I had not realized is that I had an autoscaling group running with the minimum number of instances set to 2. So after I logged off, autoscaling had done its job. It started two instances and I ended up paying for them. So always check if you have any autoscaling group and whether the instances you are terminating belong to an autoscaling group

7. Delete Elastic LoadBalancer: Ensure you delete the ELB as well when you delete the instance attached to it. Else you will be charged for the ELB.

8. Understand which is free and which is not: I should probably have put this up as the first point. It is very important that we understand which services are free and which are paid. You must also understand the limits of the free tier. This will go a long way in ensuring that you do not pay anything in excess.
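As a concrete example of point 2 above, here is a minimal sketch of creating a billing alarm with boto3. It assumes billing alerts have already been enabled in the account preferences; the alarm name, threshold and SNS topic ARN are hypothetical placeholders.

```python
import boto3

# Billing metrics live in us-east-1 regardless of where your resources run
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-50-usd",      # hypothetical alarm name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                              # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,                            # alert when estimated charges cross $50
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```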

Large corporates will have their IT teams which will monitor for wastage. People like me, who run small companies, may not have the bandwidth to keep a continuous eye on the status of the infrastructure. So ensure that you scan for all the things I mentioned above whenever you log in to your AWS console. Only by discarding your lethargy will you be able to ensure you don’t waste money.

If you have any such experience, let me know. I will add the learning to this post.

Dedicated Hosts from Amazon

AWS Dedicated Host

Amazon AWS recently announced the availability of Dedicated Hosts for users. This means that you can order a dedicated host for yourself and run your VMs on this host. Amazon says, “Dedicated Hosts provide you with visibility into the number of sockets and physical cores that are available so that you can obtain and use software licenses that are a good match for the actual hardware.” You can read all the technical details of how to order a dedicated host and how to place your instance on this host at this blog: https://aws.amazon.com/blogs/aws/now-available-ec2-dedicated-hosts/ 
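For those who prefer the API to the console, here is a minimal sketch of allocating a Dedicated Host and placing an instance on it with boto3. The Availability Zone, instance type and AMI ID are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate one Dedicated Host sized for m4-family instances
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="m4.large",
    Quantity=1,
    AutoPlacement="off",   # only instances explicitly targeted at this host land on it
)
host_id = host["HostIds"][0]

# Launch an instance placed on that specific host
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```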

In the case of a Dedicated Host, the billing starts as soon as you are provisioned a Dedicated Host. The billing doesn’t depend on how many instances you are running on the Dedicated Host. You can check the pricing of dedicated hosts here: http://aws.amazon.com/ec2/dedicated-hosts/pricing/ 

This is a good move by Amazon and I am sure this will slowly lead towards bare metal provisioning being available as well in due course. The reason I say this is because IBM SoftLayer has bare metal and it is their USP. Bare metal offers enterprises a lot of control and also ensures that compliance requirements and performance requirements are taken care of. So if you want to build your own data center within the public cloud infrastructure, using bare metal could be preferable for certain use cases. IBM SoftLayer gives that flexibility to its users. IBM SoftLayer has both dedicated hosts and bare metal. Amazon has caught up on the dedicated hosts path. I think in future they may get to the bare metal part.

In the case of bare metal you ‘own’ the server, in the sense that you can control the server fully by using, say, IPMI. You get a KVM for your use and then you decide if you want to use it as a standalone machine or load any hypervisor that you want. Since you completely own the server, many of your compliance headaches are solved.

In due course of time I think every cloud provider will be pushed to offer bare metal servers. Of course, the main value proposition of the cloud is that it takes the management headache away from you and leaves you free to concentrate on your product or service. Slowly everyone is realizing that this dream of No-IT is not a possibility, since there are multiple reasons why an enterprise, especially a large one, needs to have control over the infrastructure. For such large enterprises, it is not No-IT but rather the flexibility and elasticity of the cloud which will be the main value proposition. The Dedicated Hosts offering from AWS tells us that it is indeed true.

CloudSiksha’s First Anniversary


I am delighted to share with you that CloudSiksha completed one year of its existence on Oct 28th 2015. It has been a great year so far with a lot of initiatives taken which will yield results in the coming years.

We started our first course in the month of December 2014. The first two courses we did were related to Storage. N R Ramesh, Saravanakumar, Ratnasagar, as well as my former colleague Sarath were the initial participants. My sincere thanks to them. We started the Cloud related course in Jan 2015. After that it has been majorly Cloud related work that we have been doing.

We did our first online course with participants from London. This was the MongoDB course which was conducted by Maniappan. Later I did an AWS Solution Architect course for participants from the US. From then on we have steadily been doing online courses for participants from the US, Australia, Mumbai, Delhi, Hyderabad etc.

We also expanded into the corporate world. We did courses for Accenture, TechMahindra, HCL, Sonata, RazorThink and Adobe. We also had participants from companies like HP, IBM etc. for our courses

From Storage and AWS, we expanded to Puppet, Chef, Python and IBM’s SoftLayer. Within AWS itself, we have been conducting courses on Solution Architect, SysOps and Development. We will be venturing into DevOps soon.

I got myself certified as an AWS Solution Architect – Associate. I am also glad to report that quite a few of the participants of our classes passed this exam.

Additionally we have now partnered with other companies to develop online content. Python based content is under development. Similarly AWS based content is also under development. We will publish all details of this once the development is complete. In the coming year we will be focusing a lot on high quality online content.

At this time I would like to thank Maniappan and Sarath, who took the initial classes. I would also like to thank Ramesh Murthy and his team at StridesIT, who have supported us both in terms of customer acquisition and in terms of fulfilling the infrastructure requirement. Kavirajan designed my website and I have to thank him for the great job he did. I also wish to thank all my other partners on this occasion.

My sincere thanks to all participants of our courses. We would not have grown this much were it not for your active participation and support.

Hoping that we will scale greater heights in the coming years.