SNIA Storage Developer Conference 2015

This will be a small note.

I know I am a bit late here, but I thought I would let you know anyway, in case you have not seen this.

SNIA is holding its annual Storage Developer Conference India at the Royal Orchid, HAL Airport Road, Bangalore on May 29th. Unfortunately I have a prior commitment on that day and hence may have to miss attending the conference.

The home page of the conference: http://www.snia.org/sdcindia/venue

The Agenda of the conference: http://www.snia.org/sdcindia/agenda

The Agenda is interesting, and if I were attending I would be in two minds about which tracks to choose, since all of them look interesting. Scale-out filesystems, OpenStack and the Cloud are areas of interest to me. As you would expect, the talks promise to be a mix of theory and practice. I am happy to note that Sundara Nagarajan (whom we fondly call SN), who is like a mentor to many of us, is also giving a talk at this conference.

The Storage-Cloud relationship seems to be covered quite extensively here. I would have loved to see more of the Storage-Hypervisor relationship covered as well. A lot of innovation is happening in this area, with many APIs coming out which allow for tighter integration between the Hypervisor and the Storage Array. (You would know this if you are following the developments at VMware and the Storage APIs on offer there.) A company like Nutanix or VMware (with its VSAN) presenting would have been great. But then, like the Top 10 Movies of All Time list, each of us will have our own preferences.

If you are working on Storage, this is definitely a conference you should attend. I hope it is not too late to register.

AWS Partner Summit : Customer comes first

AWS Partner Summit

I attended the AWS Partner Summit at the Leela Palace, Bangalore on 29th April 2015. It was a well-organized meet and the hall was full. There were stalls by sponsors outside the hall, where we could interact with Amazon's Solution Architects, which was a nice touch. I was able to talk to them and get some of my doubts clarified.

There were quite a few talks, but the most impressive ones for me were the panel discussion in the morning and the keynote address by Terry Wise (did I get the name right?). The panel had AWS customers and consulting partners as the panelists. It gave the audience a good idea of how AWS is being used in India and what the opportunities for consultants in this field are. Each panelist spoke about how they came to AWS, what benefits they are deriving, and also addressed important issues like security.

The keynote by Terry Wise was very well done. It was concise and at the same time covered a lot. The two important takeaways for me personally were about customer success and about not being afraid to fail. When I joined the industry quite some time back, the buzzword was customer satisfaction. It later changed to customer delight. Customer delight was just not enough, and the buzzword transformed to Customer Success. I remember the time in my previous company when this buzzword hit us; it led to me preparing a deck for my CEO to deliver on this topic to Project Managers and other mid-level managers. In the case of Amazon, the statement which made a great impact was not ‘We succeed when the customer succeeds’ but ‘We succeed only when the customer succeeds’. The ‘only’ was underlined. That does change everything, doesn’t it? To be fair to Amazon, they have been following this diktat: improving their services, introducing new services and cutting costs. (From my personal experience I can say that their team in India is also very hungry; they helped us land our first consulting deal.) I believe that in the space we are in (Competency Development & Consulting) we cannot afford not to help our customers succeed.

The other message, about not being afraid to fail, is also an important one. The importance here lies in the fact that this is a culture which comes top-down. When the leadership is scared to fail, the people below become risk-averse, or, if they are willing to take a risk, they get punished for failing. I have seen the sense of insecurity and the unwillingness to take initiative amongst mid-level managers in more than one company, and you can trace it directly to the leadership of the company.

I do hope, for the sake of other companies, that Amazon succeeds in its endeavor. Going by their recent $5B announcement, they are on their way. More power to them.

I will conclude with a trivial request: next time we have such a summit, can we have coffee without sugar, please!


Proof of the pudding: AWS Solution Architect – Associate certification

Solutions Architect-Associate

As the saying goes, ‘The proof of the pudding is in the eating’. So I decided to check how much of the training material we have at CloudSiksha would help people in their quest to get certified as an AWS Solution Architect – Associate. I am glad to say that we cover quite a lot of ground, but there are a few areas we still need to touch upon before we cover almost everything you can expect in the exam.

Amazon states that this exam is split up as follows:

– Design: 60%

– Deployment / Implementation: 10%

– Data Security: 20%

– Troubleshooting: 10%
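For a rough sense of what those percentages mean in practice, here is a small sketch that converts the stated weights into approximate question counts, assuming a 60-question paper (the figure from my own sitting; the split and the paper length may change over time):

```python
# Domain weights as stated by Amazon (at the time of writing).
weights = {
    "Design": 0.60,
    "Deployment / Implementation": 0.10,
    "Data Security": 0.20,
    "Troubleshooting": 0.10,
}

TOTAL_QUESTIONS = 60  # assumed paper length, from my own sitting

# Sanity check: the stated weights should cover the whole paper.
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Approximate number of questions per domain.
counts = {domain: round(TOTAL_QUESTIONS * w) for domain, w in weights.items()}
for domain, n in counts.items():
    print(f"{domain}: ~{n} questions")
```

In other words, design questions alone are worth more than the other three domains put together, which tells you where to spend your preparation time.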

The major issue many will face with this exam is its wide scope. It covers a lot of the services that AWS provides. You may not be using all of these services yourself, yet you must know about them, as questions on them appear in the exam.

For example, I had questions on Simple Notification Service (SNS), Simple Queue Service (SQS), DynamoDB and Route 53. Not all of us would use these services (I had not), and yet you need to know about them, so you need to read up on them. The best option is to read the FAQ for each service. The questions on these services, though, were not very complex; many of them you can work out logically.

Security is a key issue for the cloud, and not surprisingly AWS draws 20% of the questions from this area. I got security-related questions in the troubleshooting section as well, so the overall emphasis on security is much more than 20%.

The questions are not segregated into different areas; you just get a stream of 60 questions, which you need to answer in 80 minutes. Honestly, the time is more than enough. The questions are all multiple choice. While most have one correct answer, there are questions with multiple right answers. Amazon does you a favor by stating how many right answers exist, and you cannot submit until you have selected that many options. These questions are generally the tricky ones.

Who should take the exam? I personally feel that you should have a decent understanding of what a data center is, what a 3-tier application means and what networking is all about: not in depth, but at least a bit more than mere awareness. If you have no real hands-on experience in managing systems or networks, you will find it tough to clear the exam. Additionally, this certification adds a lot of value for experienced people but may not add much if you are new to data center and server/network management.

Let me now come to the training part. I feel that it is not enough to read the documents; you need to work on the AWS infrastructure. As I mentioned earlier, you may not be able to work on some of the services, but you can definitely work on the most important ones using the free tier. It will cost you nothing and it will give you a good grip on managing the infrastructure. I personally feel that you must not attempt this exam without having done some hands-on work. I saw quite a few questions which I could immediately answer because I had gone through that experience when setting up the infrastructure. That is one of the reasons I was able to score 100% in troubleshooting.

Finally, the workshop we conduct at CloudSiksha is a complete hands-on workshop. It will probably cover around 70 to 75% of the questions asked, and it will give you a good grip on managing the infrastructure. Since it is entirely hands-on, we do not currently teach services like Route 53, SQS, SNS and DynamoDB. We plan to add a one-day module which will deal with these theoretical subjects and also prepare you for the exam with tips on how to take it. Please await the announcement, which will happen soon.

Cloud Pricing and Lock-in

cloud-pricing

Price (that is, low price) is one of the most highlighted aspects of the Cloud and is seen as a major selling point. The fact that the infrastructure is managed by someone else and costs less are the main reasons why many go to the Cloud. Some do realize that as they grow, and as their resources are being used all the time, things aren’t as cheap as they initially thought they would be. My friend Ramesh, who runs StridesIT, once told me, “One startup started working on the cloud, and initially they were quite happy with a small bill. As more people joined the startup and the usage of resources increased, they started feeling the pinch. You must do a thorough cost analysis initially, else you may end up paying a lot.”

Each Cloud provider has its own pricing policy, and they provide cost calculators which can be used to get a fair idea of how much you may shell out every month. They also offer different prices if you are willing to commit upfront for one year or more. There was an interesting article published recently by Google which, through some calculations, showed that Google was cheaper compared to Amazon. You can read the paper here: Understanding Cloud Pricing. If you click on the ‘Estimate’ link in the article you will be taken to the cost calculators of Amazon and Google. You will find that you need to take multiple aspects into consideration: disk usage, network usage and usage of any of their services.
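To illustrate why these multiple aspects matter, here is a minimal sketch of such an estimate. The unit rates below are invented for illustration only; the real numbers come from each provider's calculator:

```python
# Hypothetical unit rates, for illustration only.
RATES = {
    "instance_hour": 0.10,      # $ per instance-hour
    "storage_gb_month": 0.05,   # $ per GB stored per month
    "egress_gb": 0.09,          # $ per GB transferred out
}

def monthly_estimate(instance_hours, storage_gb, egress_gb):
    """Rough monthly bill: compute plus storage plus network egress."""
    return (instance_hours * RATES["instance_hour"]
            + storage_gb * RATES["storage_gb_month"]
            + egress_gb * RATES["egress_gb"])

# e.g. one instance up all month (~730 hours), 100 GB stored, 50 GB egress
print(f"${monthly_estimate(730, 100, 50):.2f}")
```

Even with these made-up rates, the point is visible: the compute charge alone is not the bill, and the storage and network lines grow with usage whether or not you add instances.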

You also need to understand the granularity of pricing. For example, Amazon bills you per hour for some resources. This means that even if you use a resource for 5 minutes, you will be charged for an hour. The time is measured from when you start the system to when you stop it. Assume an instance is up for ten minutes and you shut it down; you come back after some time and run the instance for ten minutes again. You have used the instance for 20 minutes, but you will be charged for 2 hours, since each start-and-shutdown cycle is billed a minimum of one hour!
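The arithmetic behind this can be sketched as follows, assuming the one-hour minimum per start-and-shutdown cycle described above (billing granularity varies by provider and by resource, so treat this as an illustration):

```python
import math

def billed_hours(session_minutes):
    """Each start-and-shutdown cycle is rounded up to a whole hour."""
    return sum(math.ceil(minutes / 60) for minutes in session_minutes)

print(billed_hours([20]))      # one 20-minute run: billed 1 hour
print(billed_hours([10, 10]))  # two 10-minute runs: billed 2 hours
```

The same twenty minutes of actual use costs twice as much when it is split across two sessions, which is why batching your work into fewer, longer sessions can matter.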

An important point this article makes is about the potential lock-in that occurs if you sign up for a long-term deal. This is always something customers worry about: will the supplier not exploit us once he has locked us in? I believe there are two sides to this coin, and lock-in is not as bad as it seems. There are so many companies which are, say, an EMC shop, a NetApp shop or an HP shop. It is not as if these companies have burnt their fingers because they decided to go with a single vendor.

From my experience in my earlier company, I can say with a fair degree of confidence that clients who are big and committed to the company do get very good treatment. There are times when even a small escalation from such companies reaches the ears of the topmost person, and the pressure to solve their problem is enormous. Additionally, their input is sought for newer releases. Since a committed customer is something every company wants, you can be sure they will do their best to keep that customer happy. So lock-in is not so bad, provided you have done your background check on the vendor and the future direction they are likely to take. If they are dependable, lock-in need not be a major factor in your choice.

Cloud Computing: What should I do?

single white cloud on blue sky

“I know Cloud Computing is the ‘in thing’, so what should I be doing in Cloud Computing?” This is a question people often ask me when I tell them that I started a company aimed at Cloud education. The answer to this question is again a question: “Who are you?” Lest you think I am being rude, let me say that what I want to know from the person is the role he or she is currently playing. The answer I give depends on it.

Let me try and answer this for various roles in an Enterprise, starting from the very top. This is of course a 10,000-foot view, and there will be a lot of finer detail that I have not covered. Yet I think it will give you a good idea of the direction you must take if you want to work in the Cloud.

1. CIO: It should be quite obvious that a lot of CIOs are under pressure to either move to the Cloud or at least explore the possibility of moving to it. The CIO of course has to look at the long-term plan of the company and work out the economics. So if you are a CIO, you should know about the various deployment models of the Cloud (Public, Private, Hybrid) and the various service models (IaaS, PaaS and SaaS). You should understand the economics thoroughly (calculating the spend on Cloud is not as easy as you think). The SLAs offered by each provider, their reputation for high uptime and the security of your data also need to be taken into account. The call which CIOs will generally need to make is whether they want a Cloud model or, if they don’t want to manage infrastructure, whether they should go for Managed Services. To make this decision, they need to understand the difference between a Private Cloud and a Managed Data Center. From what I hear and read, both have their own advantages and disadvantages. A lot will depend on the applications being used by the Enterprise. Needless to say, a long-term vision for the infrastructure and getting the best value for money would be the CIO’s aim.

2. Architect: ‘Cloud Architect’ has different connotations depending on where you are working or would like to work. If you want to be a Cloud Architect with a Cloud Provider, you need to understand the nuts and bolts of how the Cloud is formed. A Cloud Architect in the Cloud Provider space sees less of the Cloud and more of the infrastructure. So while you need a clear idea of what the Cloud means to the consumer, you must be well versed in Data Center technologies. You should be an expert in either Server Virtualization, Storage or Networking, and should understand how Software-Defined-Anything (Storage, Server, Networking) works. Additionally, understand newer technologies like containers (Docker) which are being used in the Cloud context. Your job will be to architect the infrastructure to ensure optimal and efficient use. So try to be a domain expert in one of the areas I listed above: Server, Storage or Networking. Depending on the Cloud Provider, you may want to become an expert in OpenStack in the case of IaaS providers, or Cloud Foundry in the case of PaaS providers.

If you are a ‘Cloud Architect’ in a company which wants to consume the Cloud, the expectations are different. You need to understand the infrastructure services provided by the Cloud Providers and plan for migrating your applications to the Cloud. You will need a good understanding of the services provided to perform an efficient migration. You may also plan on developing some of your applications on the Cloud itself, and for this too you need a clear idea of the services offered by the provider. For example, if you want to migrate your applications to Amazon AWS, you should probably think of getting yourself Amazon certified, which will force you to read about and understand all of Amazon’s offerings.

Similarly, if you are more interested in PaaS for your development team, you must understand the offerings from the various PaaS vendors, whether it is Google App Engine, Microsoft Azure, AWS Elastic Beanstalk, IBM BlueMix or anyone else. You need to understand the IDEs they provide, the language support they offer and how easy it is to deploy your application.

3. Project Managers / Technical Leads: Understanding the deployment scenarios and service offerings of the Cloud vendor is key. Understanding the economics and keeping a good grip on the money spent will be a major task. As with virtualization, here too we can have sprawl, given how easy it is to provision a VM and consume any service. So understanding how each of the provider’s services will drain your exchequer is important, for the final control over the developers lies with the Project Managers and Technical Leads. Understanding the infrastructure and your own application well will let you realistically estimate how easy or difficult it is to migrate to the Cloud.

4. Developers: Even before you understand the Cloud, ensure you can code well in one of the languages used for web services: Ruby, PHP, Python or Java. (There are Node.js and Go as well, but one of the four should suffice for now.) Once you have mastered the language, it will become easy for you to use any API and interact with the Cloud. You will see all the services offered by the Cloud Provider from a programming perspective, and once you understand the APIs you can develop a lot of programs based on the Cloud. An overall understanding of Cloud Computing is also required, along with an inquisitive mind and a good grip on a programming language. Given that providers like Amazon AWS give you a free tier for a year and the APIs are readily available, an enterprising programmer can develop her own application on the web in a very short time. If you are going to develop Enterprise-class applications, you need to understand the three-tier application model and some of the frameworks for the language you chose.

5. Administrators: Amongst the different categories, Administrators have the most to learn. Again, we have two types of Administrators: one at the Cloud Provider’s premises and one at the consumer’s premises. If you are on the consumer side, you will need to understand how to use the Management Console and CLI of the Cloud Provider. For example, if your apps are hosted on Amazon, you should know how to use the Amazon Console and the Amazon CLI to manage them in AWS. If it is a different provider, then you must understand their UI and CLI. The System Administrator certification from Amazon AWS and the CloudStack certification may be helpful.

If you are an Administrator at a Cloud Provider, then, as with the Architect, you need to be a specialist in Storage, Network or Server Administration. You must thoroughly understand the concept of virtualization and how it is applied to Storage, Servers and Networking.

If you are a Server Admin, read about the various server virtualization technologies like VMware, Xen, KVM, OVM and VirtualBox. Also understand provisioning tools like Chef, Puppet and Ansible, container technology like Docker, and tools like Vagrant which allow you to launch VMs easily.

If you are a Storage Administrator, understand how storage virtualization helps, and understand replication and backup technologies. Storage is a very important part of the Cloud, and maintaining the SLAs with respect to Storage is a major challenge. Understand what scale-out filesystems are and why they are useful in the Cloud Data Center.

My knowledge of networking is not great, so I will refrain from giving a lot of advice, but do try to understand the concept of Software Defined Networking.

Hope this gives you an idea of what you must be concentrating on. Hopefully your journey to the Cloud will be smooth.


Amazon AWS: Seductiveness of ease of use

One of the important factors which affects people’s use of a new technology is ease of use. Think iPhone, think Google. Think Amazon AWS.

I started using Amazon AWS again recently, and I am amazed at how easy it is to use. It is almost as if I had never stopped using it. The way you start an EC2 instance, store objects in S3, or host a static website on S3: everything is fairly easy. If there are any issues, the documentation ensures your doubts are cleared soon. Of course, you can understand all this easily if you have an idea of Amazon’s infrastructure and are conversant with the difference between Block Storage and Object Storage.

I had used EC2 and S3 earlier, but this was the first time I was trying Elastic Beanstalk, and it too was easy to use. With Elastic Beanstalk, Amazon deploys a large infrastructure for your application. Your application can run in a load-balanced way, with Amazon taking care of the load balancing, and the infrastructure is scaled automatically whenever your application needs it. Additionally, your application’s health is monitored constantly. It supports Node.js, PHP, Python, Ruby, Java and .NET applications.

I chose PHP for my application and started Elastic Beanstalk. Setting up the infrastructure takes some time, a few minutes. Initially I let Elastic Beanstalk deploy a sample PHP application. The application was started on the highly available infrastructure, and I could see it running by pasting the link provided by Amazon into a browser. Once I had checked this out, I wrote my own simple PHP application and asked Elastic Beanstalk to deploy it in place of the sample. It took a few minutes, and then the new application was deployed and running in my browser. The whole experience was very smooth.

Ease of use leads to more usage, which leads to familiarity, which leads us to explore more features of a system, which in turn makes us at ease with the product. Which means we are locked in. Consider this: when I started CloudSiksha, I wanted to check if I could use open-source Office products. I did give them a try for a month or more, but the features of MS Office and my familiarity with it were such that I finally had no choice but to buy a one-year license for Office 365. I do not regret it. I understand and appreciate that not every product can be easy to use, but having that as a design criterion would definitely help in the long run. It may sound as if I am stating a self-evident truth, but when you use some software (which I shall not name), you wonder how the designers missed this simple, self-evident truth.

Other than the low cost, this ease of use is probably what makes people go to Amazon. In the coming weeks I will be doing more with Amazon, and I will let you know how things go.


Ready to take off

Last week was a busy one. First, we got the website up: my friend Kavirajan designed it, and last Friday we went live. Do check out the CloudSiksha website. (You can also ‘Like’ us on Facebook and LinkedIn; the social media links are on the website.)

We also announced our first course, ‘Storage for Cloud’. It will be held in Bangalore on the 20th and 21st of December. If you are interested in attending the course, do drop a mail to enquiry@cloudsiksha.com

What I have observed is that there are few programs dedicated to senior engineers who want to grow into Architects. In many companies, engineers learn by trial and error; there is no structured teaching which enables them to think about the big picture. You can become a good Storage Architect only if you understand what problems Storage needs to solve in the Enterprise, and Storage faces even stiffer challenges in the Cloud. I hope to address both the Enterprise Data Center challenge and the Cloud challenge in this program.

We sent out this flyer yesterday:

Storage for Cloud Flyer

The countdown has begun. Wish me luck as I embark on this journey.

In case you are looking for a web designer, you can always contact my friend Kavirajan. His mail id is kavirajanr@gmail.com

Starting from square one again

After a brief stint with Oracle as a Cloud Architect, I have decided to start on my own again. This will be my third start. I initially started Yagnavalky Center of Competency, which catered to corporate competency development requirements in the areas of Storage and Linux. Later, with my friend and colleague Sarath Kodali, I founded Avanysis Data Storage Solutions. Our aim was to develop a data storage product which would be cost-effective for the SMB market but have the features of an Enterprise storage product. We were able to develop a prototype but had to give up, since we were not able to obtain the funding that such a product needed.

I am now starting a new company called CloudSiksha. The company will provide competency development services in the areas of Cloud Computing, Big Data and Data Storage. Programming languages like Python and Core Java, which are used extensively in the Cloud and Big Data areas, will also be taught. We will be staffed with industry veterans who have extensive hands-on experience with these technologies.

Work on the website is in progress; you can visit http://www.cloudsiksha.com regularly to check for updates. I hope to work hard to ensure CloudSiksha succeeds, and will need all your support and best wishes for that to happen.

You can reach me at: suresh@cloudsiksha.com


Flash Point

All Flash Arrays (AFAs) have been the flavor of the month with Storage bloggers for some time now, especially after EMC announced the GA of its XtremIO-based All Flash Array.

The blogging activity started even before the announcement, with Robin Harris blogging about it. Before EMC made its announcement, Harris had this interestingly titled post: “XtremeLY late XtremIO launch next week”. It is an interesting post, with Harris discussing in detail the challenges EMC faces in this area and the delay in EMC getting the product to market.

EMC’s response came in the form of a long and informative post by Chad Sakac, ‘Virtual Geek’. In this detailed post, “XtremIO: Taking the time to do it right”, Chad explains some of the details of XtremIO and why it took EMC time to release the product.

On the end-user side, the well-respected Martin Glassborow, ‘Storagebod’, seemed underwhelmed and said that he would ‘Xpect More..’. The post asks some very pertinent questions, and given that it comes from an end user, I am sure all the vendors are listening keenly.

With All Flash Arrays coming in, the question everyone now asks is, “What type of workloads require such performance?” The FUD against AFAs by those who don’t have one is based on this question. The question is a genuine and pertinent one, but it can always be twisted around to say that an AFA is not needed in any case. Robin Harris takes on this question in his “Ideal workload for enterprise arrays?” post. It drew a good discussion in the comments section, with Chad Sakac of EMC and NetApp employees weighing in. This led Robin to do a follow-up post, “Best workload for enterprise arrays”, in which he responded to the comments received on the earlier post.

Is an AFA only about performance, or should we also look at the storage efficiency side of things? Vaughn Stewart, who had earlier moved from NetApp to Pure Storage, had a chart which covered both the performance and the storage efficiency of AFAs. He compared products from Pure Storage, Violin, EMC and IBM. Here is the chart.

Chris Evans felt that while Vaughn’s sheet was a good starting point, it did not compare all the Flash Array vendors. So he set out to expand the list of vendors as well as the metrics used for comparison. Here is the Expanded Comparison Chart.

Now that EMC has come out with its XtremIO array, is it the logical choice for customers, given EMC’s background and size? No, says Robin Harris, who gives his take on what he calls the “Top 5 alternatives to XtremIO”.

Vaughn Stewart feels that the adoption of Flash has been exceeding everyone’s expectations and that EMC’s entry will accelerate it further. Here is his take: “All Flash Array: Market Clarity”.

It must be said that whenever EMC enters a market with a new product, there is no dearth of debate, and it is the same this time around. Whether this will be the flash point which accelerates market adoption of flash, or a temporary flare-up with the market slowly settling down between flash and spinning rust, only time will tell. I would probably bet on the latter.

Been a while

Yes, I have been off the blog scene for quite some time now.

In the meanwhile, along with my friend and former colleague Sarath, I started Avanisys, aimed at developing a storage product. We got it to a decent prototype stage but were unable to proceed further, mainly due to financial considerations.

I have also been part of a video transcoder company, where I was responsible for designing multiple things, including the background daemon, the monitoring daemon and a restart daemon, and I wrote the SNMP agent for the appliance. I also designed the Management GUI for the appliance and wrote the CLI part of the management application.

A lot of work accomplished. Time to move forward. I will provide more updates soon.