Virtualization: Lessons Learnt

If you were thinking I am going to give you some great tips about virtualization, let me make it clear that the title of this post must be taken literally and not figuratively. I attended a four-day VMware course at GT Enterprises, Bangalore, and this post is about what I really learnt in it. The course was conducted well and the trainer was good.

I did know a bit about VMware, having installed the workstation and server versions earlier and worked with them. I did not have any idea about VMware ESX/ESXi, and the course was a good place to start. Though the course is aimed more at the server / storage admin (it is called ‘Install, Configure and Manage’) and I was looking at it from an engineering perspective, it was nevertheless a good course to attend. Once you attend it, you get a much clearer understanding of what VMware is all about, how useful server virtualization is to the enterprise, and the various innovations VMware is making to improve the product.

One of the things I liked a lot was the focus on management. vCenter is a nice piece of work which allows you to manage everything from a single place, and this is definitely something a large enterprise would require. Otherwise it would be a nightmare managing so many virtual machines individually. The same goes for the Distributed vSwitch available in VMware: another nice concept. We worked only on VMware's Distributed vSwitch and not on the Cisco virtual switch. It would have been nice to get an idea of how that is configured, but I guess you need to attend a Cisco training for that. Asking for it in a VMware training would be too much.

vMotion was another feature which impressed me. The ease with which you can move a VM from one physical server to another is incredible. Of course, certain prerequisites need to be met, and vCenter is smart enough to tell you which VMs can move to which servers. Similarly, Storage vMotion was also quite easy to use. I also got a better idea about storage for VMware, High Availability, clustering, etc.

Some of the features have their own limitations, and going by the new release it is clear that VMware is fully aware of them and is working on them. vSphere 4.1, released very recently, has some nice features primarily aimed at improving storage I/O efficiency. Two important aspects related to storage were:

vStorage API for Array Integration (VAAI)

Storage I/O control (SIOC)

VAAI provides APIs for storage array makers so that some storage operations, like Full Copy, can be performed at the array level rather than consuming server resources. SIOC is a way to ensure the right job gets the right priority as far as storage is concerned. Here are links to two articles which cover these two aspects:

VAAI article by Mark Farley (aka 3parfarley)

SIOC article at technodrone (you will see this link in Mark’s article as well). In order to understand SIOC you need to know what shares are and how they are allocated in VMware. Even if you are not aware, you will still get a general idea of how the concept works by reading the article.
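If you want to see the shares idea in action before reading the article, here is a toy sketch of how proportional shares could divide the available I/O during contention. This is my own illustration, not VMware's actual algorithm, and the VM names and numbers are made up:

```python
# Toy illustration (NOT VMware's real algorithm): during datastore
# contention, each VM's slice of the available IOPS is proportional
# to its configured share value.

def allocate_iops(total_iops, shares):
    """Split total_iops among VMs in proportion to their share values."""
    total_shares = sum(shares.values())
    return {vm: total_iops * s / total_shares for vm, s in shares.items()}

# A VM with 'high' shares (2000) gets twice the I/O of one with
# 'normal' shares (1000) when the datastore is congested.
print(allocate_iops(9000, {"db-vm": 2000, "web-vm": 1000}))
# {'db-vm': 6000.0, 'web-vm': 3000.0}
```

The point is simply that shares express relative, not absolute, entitlement: doubling everyone's shares changes nothing, while doubling one VM's shares doubles its slice of a congested datastore.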

The vSphere 4.1 release will certainly help in faster and fairer storage access.

The training I took is mandatory if you are planning to take up VCP certification. If you don’t have such plans and want to know more about VMware vSphere, you can check out Scott Lowe’s book, “Mastering VMware vSphere 4”. (Scott is a well respected blogger in the virtualization world. He was an independent blogger who later joined EMC. His blog at http://blog.scottlowe.org/ still provides a lot of information which is neutral, that is, not plugging for EMC.) This is a well written book and covers a lot of ground. I would suggest this book even to those who have attended the course, since Scott provides a lot of insight into many of the features and gives a good background on each topic. I am sure it will be of immense help to those taking up the VCP exam. (I have recently seen another book in the book stores which deals with vSphere 4, but I haven’t been able to read it yet.) Scott’s book is published by Wiley India and costs Rs.599/-. (The actual cost would depend on how much discount you get. I generally get anywhere between 17 and 25% discount at Book Paradise, Jayanagar, Bangalore, depending on the number of books I buy.)

Before I end, let me also note that I recently gave a talk titled “Server and Storage Virtualization: Their relevance to the Cloud” at Mindtree Consulting. This was an invited talk as a part of their internal initiative. The talk was well attended, the participants were interactive and the feedback was positive. The arrangements were well done and I enjoyed the session and the interaction. Thanks to Rama Narayanaswamy, VP at Mindtree Consulting, who made this happen.

The shape of the cloud: Public or Private

Cloud over Kinner Kailash

“That wall has to be bigger if we have to use this pattern,” said my painter. “It will not be very effective on your wall.” This is something which happens to many of us all the time, with our painters, our carpenters, our architects and various other service providers. You want to do something, but the person providing the service has a comfort zone in which he or she wants to work. So rarely do you get exactly what you want; it will always be a compromise between your ideas and the ideas of the vendor. This happens not only to individuals like us but also to large enterprises. You cannot execute everything on your own; you need to depend on vendors. What vendors will say depends on their comfort zone, what products they have, what they want to sell and how much commission the sales guy is getting for selling a particular product. I guess anyone who has been in the industry long enough knows this. And these are exactly the factors which will influence the way the cloud eventually shapes up.

Last year there was a lot of talk about the cloud, and it is still continuing. Initially the talk seemed to be about one type of cloud, owned by service providers like Amazon, Google etc., whose services all enterprises would avail of. Slowly it emerged that there were two types of clouds. One, the public one, which you see when you step out on the road. And the private one, which you see on your own ceiling. In other words, a Public Cloud is one run by a service provider and a Private Cloud is one run by your own IT department.

People may ask what Private Cloud means. After all, has the IT department not been running the data center all along? How does the Private Cloud change things? I am not sure if the definition of Private Cloud has taken concrete shape yet, but here is my take. The Private Cloud represents a huge change in thinking on the part of the IT department. You no longer buy things for each department separately, and you don’t reserve resources for any one department, division or whatever the unit of classification is. The whole idea is to give resources when necessary, provision just the right amount, and give more when asked for. In essence the IT department owns all the resources as a huge pool, or cloud, and provisions them internally as required. No more buying or provisioning resources for a particular division, to be locked down for a long time. This would apply to all kinds of resources, the key among them being compute power and storage. The ‘give only as much as needed’ philosophy and the sharing of resources across the organization will definitely bring in optimal usage and definite cost savings.

Changing from one model to another one is not an easy task. So what are the likely challenges for such a movement? Since I haven’t handled big data centers, I cannot possibly give you all the scenarios, but these are most likely challenges that an organization would face when trying to build its Private Cloud:

– A change in the thinking pattern of the IT staff. They must forget how they procured and provisioned earlier and think in terms of how to provision using the cloud paradigm

– Training the IT staff in newer technology areas like virtualization and newer ways of provisioning, which will be important in order to build a Private Cloud. They will need to design newer chargeback mechanisms as well

– The bigger challenge I see is how to move the current infrastructure into the Private Cloud. Big companies have tons of equipment, already provisioned and in use. How will you get all of it into a common pool? No organization would want to build a Private Cloud purely by buying new equipment

– Another important aspect which will dictate whether the Private Cloud is accepted within an organization is how the power structure would change if a Private Cloud is implemented. We have seen more than once that many good ideas have been compromised over this issue.

How will this Public / Private Cloud split help the vendors? As I said earlier, what eventually comes up depends a lot on your vendor. As of now many vendors have started talking Private Clouds, and this is understandable. Look at it this way: you spend a lot of time, energy and money building up a relationship with your client. You are now in a position where the client trusts you, and you know the client well enough even to influence their buying decisions. At this juncture, if you were to propose the Public Cloud idea, you would be shooting yourself in the foot. The decision on which equipment to buy would then pass on to the service provider, with whom you may or may not have a great equation. If it is a Private Cloud, you can always show the benefit of the cloud to the customer without losing your influence or your orders!!

The way the Public / Private Cloud is being proposed is: big enterprises need a Private Cloud, smaller enterprises can use the Public Cloud. This again makes sense from the vendors’ point of view, especially the big ones, because running after small orders or small players is never something big vendors want to do. In such cases, if the buying decision shifts away from the small players to a service provider who will buy in large quantities, then it is easy for the big guys to target this service provider.

As I said at the beginning of this post, what color I paint my walls depends a lot on my painter’s aesthetics as well. In the same way, the way the cloud evolves will depend on what the vendors feel would be beneficial to them in the long run. Based on this, we will be seeing a lot more talk on Private Clouds. When the cloud becomes a reality, Public and Private Clouds will coexist.

You can read EMC’s Chuck Hollis’ take on private clouds here. Here is a different take on the cloud and what it should or should not mean. As usual, Steve Duplessie does not mince words in this article on why the cloud will vapourise.

The Nagging Bug Fix

Nothing makes you feel you have conquered the whole world like solving a problem. When you finally understand what has gone wrong, provide a fix and it works like a dream, you are in seventh heaven (wherever that is). Recently I conducted a session on troubleshooting and could see that all the participants had had this type of ‘wow’ moment in their lives. All of them were storage administrators and had many troubleshooting stories to tell. I will relate those at a later date, after obtaining the required permissions to put them on this blog. In the meanwhile, I want to relate a minor debugging exercise that I have been involved in over the last couple of days. I can’t say I solved the problem, though the problem seems to have disappeared. This is what I call the nagging bug fix: you know the problem is solved but you don’t know why!!

First, let me explain my setup. There is nothing much as far as the hardware goes. I use a Lenovo Thinkpad with Windows XP loaded on it, and VMware Player with an Ubuntu Linux virtual appliance running in it for all my Linux needs. I use three different email accounts: one for my personal mail, one for my technical subscriptions and reading technical blogs (through Google Reader) and one ‘official’ mail for all official transactions. Now, my ‘official’ mail id is on my own domain, while the personal and technical subscription mail ids are Gmail based. I could have gone to two different service providers, but I was so happy with the experience of Gmail and Google Reader that I decided to have both mail ids based on Gmail. Since browsers use cookies and know that you are logged in, it is not possible to see the mails from two ids simultaneously in the same browser. So I use two browsers, one for each mail id. I generally use Firefox for my technical stuff and Explorer for my personal stuff. Nothing logical in this selection, just my quirk.

Some time back I downloaded a trial version of VMware Workstation in order to work with the Celerra VSA (virtual appliance). I had downloaded TweetDeck as well. I also downloaded the Celerra appliance, but before I could start testing it, some assignments came my way and the last couple of weeks were spent on them. I started working on my system in earnest this week and noticed that Firefox was hanging once in a while. This happens sometimes, so I killed it, restarted it and kept going. Then I noticed that it had stalled again after some time. Once again the same process was repeated and I continued my work. When it happened again the next day, I got frustrated and decided not to use Firefox but to try Chrome instead. I thought things were working fine, but suddenly Explorer stopped. I was now feeling like a butcher, having to kill these browsers at regular intervals.

One of the first things you learn in Computer Debugging 101 is that many problems get solved if you reboot!! I am sure we would try a reboot even if we were administering a supercomputer!! This now runs in our blood and I had to do it. So the reboot happened. Again I started Chrome and Explorer. After some time Chrome hung!! The reboot had not solved the problem but was instead confusing me!! Kill-start happened, but the problem kept appearing and I could no longer pretend that this was not a problem.

The steps in Debugging go thus:

1. The problem will solve itself. Just look the other way
2. A reboot will solve the problem. Time to switch off
3. Reload the software and things will be fine.

Naturally I had to try the third step, and Internet Explorer was anyway tempting me to load the latest shiny version with Silverlight and all. How can you refuse such an offer? That too when it comes with Silverlight. (I have no clue what Silverlight is but c’mon, it sounds so sexy.) So there I was, downloading the latest browser with all the plug-ins and what not. Once done, I started the browser, and here is where competition happens nowadays. As soon as you start your browser it says something like, “Hey, are you a moron? I am not your default browser. Make me your default browser. Click OK”. Scared, you click OK, when the other browser wakes up and says, “Hey, someone is trying to make you a moron and wants to be your default browser. Don’t let that happen.” Too scared by now, you are not sure what to do. You click some button and things quieten down. It used to end here in the earlier days. Nowadays you hear a screeching sound and a screaming text saying, “How come I am not your default search engine? Why is someone else your default search engine?” Next someone crops up and says, “I want to be your phishing filter”, to which someone else pops up saying, “No way. I am your phishing filter”. You realize that your desktop / laptop is now a battleground!! After you have pacified all these guys, the browser starts up with some 10 rows of toolbars. Google, Yahoo, MSN, Ask, Don’t Ask, Copernicus, Galileo, Newton.. oops, the last couple are not toolbars, not yet at least. After all this trouble, some time later one of the browsers hangs!!

Things are getting serious now. I start checking if my DNS is the problem. It doesn’t seem to be. The next step in debugging is to do the reverse of step 3, i.e. uninstall as much software as you can; you don’t know what clashes with what. So I start this process and discover that a lot of stuff keeps getting updated without your knowledge. Norton Antivirus goes about downloading the latest patches, Firefox downloads the latest version and fixes, Windows keeps downloading the latest security patches and rebooting your system on its own. These are just a few of them. Of course each asks you something before it downloads, but these messages appear so often that you generally click OK for everything. “This web site wants to use your bank account and draw some money”. Click OK. Done.

So I uninstalled TweetDeck. The problem exists. Uninstall VMware Workstation. Problem exists. Uninstall some mp3 player. Problem exists. Reboot. Problem exists. Use Firefox instead of Chrome. Problem exists. Use Internet Explorer instead of Firefox. Problem exists. Tear your hair. Problem exists.

The major problem with problem solving is that you do not notice details in the beginning. Now that this was getting on my nerves, I started observing closely what the characteristics of the problem could be. I immediately noticed that whichever browser had opened my technical subscription Gmail was the one hanging. I opened it in a different browser and now this browser was slowing down. Finally I had got hold of the problem!! Gmail was the culprit!! The Gmail help forum asked me to check if I had some incompatible plug-ins. Nothing of that sort was on the system and there was no further help. So the next step in debugging in current times? Yes, you guessed it right!! The internet. So off I went to find if someone else had this problem. It looks like a lot of people have been having it lately; check out this link. Everyone was confused as to why this was happening. Someone suggested turning off the https option in Gmail. Someone suggested using an earlier version of Gmail. Someone asked people to turn off the phishing filter if they were using Norton. As anyone involved in debugging knows, the worst case scenario is when you do too many things at a time and the problem gets solved: you then have to debug again to find what the problem was. So I decided to do this one at a time. First I switched off the https option and reloaded Gmail manually using the http option. Sometimes you get lucky: Gmail started running normally and the browser did not hang now!! First try and you have succeeded.

I call these types of bug fixes nagging bug fixes. Firstly, a good bug fix gives you a better understanding of the system; nothing like that happened here. Secondly, the solution was quite trivial. Some minor setting change and things work, without you knowing why it was a problem, and you have no means of probing further. Added to it is the frustration that all the effort you put into debugging deserved a more complicated bug fix!! Thirdly, there is a nagging feeling that this may not be the right solution. The reason is that when I use Gmail for my personal mail, the setting has https on and it works like a dream!! So why should the setting affect one mail id and not the other? This has been consistent across all browsers. So what the heck is the problem? Sometimes, as in real life, you just need to accept the solution, move on and not probe too much. You need to get rid of that nagging feeling by drinking a cup of strong coffee or any other beverage of your preference. After all, if the problem happens again, we have the necessary tools: Reboot, Reload, Reinstall, Uninstall and the World Wide Web!!!

Setting up Sun Unified Storage 7000 Simulator

Nothing stimulates you like a simulator!! I know it sounds corny, but what the heck. The Sun Unified Storage simulator did stimulate my interest and I found the going good. So here is my story of how to set up the Sun Unified Storage simulator and work with it.

I have been thinking of installing some simulator on my system and working with it. As generally happens, you keep postponing it in small steps and before you know it, the idea has vanished from your mind. Luckily for me, I had registered at the Sun site for information, and they sent me a mail asking me to download the simulator. I had some time on my hands and it was too tempting an offer to resist.

First things first. In order to download the simulator, you need to register at the Sun site. Then you get access to download the Sun Unified Storage simulator. The simulator zip file is around 370MB and it expands to close to 2.5GB, so you had better have enough space on your hard disk. You can get the simulator at this site; scroll down to find it.

What do you need to run this simulator? I installed it on my laptop, which is a Core 2 Duo system with 2GB RAM running Windows XP SP3, so I guess if you have this or something better, it should work. For the simulator to work, you also need VMware Player on your system. If you don’t have it, you can download it free of cost from the VMware site. In essence, to make the simulator work on Windows XP, you need to download VMware Player and the Sun Unified Storage Simulator.

The Sun Unified Storage Simulator is a virtual appliance, which means you don’t need anything else with it. The steps to follow to install the Storage Simulator are simple:

  1. Download the simulator zip file from the Sun site
  2. Unzip this file. In the extracted files, there will be a uni.vmx file
  3. Now start your VMplayer and select the uni.vmx file
  4. The installation starts now. Have patience
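For the curious, steps 2 and 3 can be sketched in a few lines of Python: find the uni.vmx file in the extracted folder and build the command to hand it to VMware Player. The executable name "vmplayer" is an assumption; on Windows you would point to the Player executable in its install directory.

```python
# Sketch of steps 2-3: locate uni.vmx under the extracted folder and
# build the command line for opening it with VMware Player. The
# "vmplayer" executable name is an assumption (see lead-in above).
import os

def vmplayer_command(extract_dir):
    """Walk the extracted folder, find uni.vmx, return the launch command."""
    for root, _dirs, files in os.walk(extract_dir):
        if "uni.vmx" in files:
            return ["vmplayer", os.path.join(root, "uni.vmx")]
    raise FileNotFoundError("no uni.vmx found under " + extract_dir)
```

You would then pass the resulting list to something like subprocess.run(); in practice I simply started VMware Player and browsed to the uni.vmx file, which does the same thing.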

The next steps are from the Sun site:

“When the simulator initially boots you will be prompted for some basic network settings (this is exactly the same as if you were using an actual 7110, 7210 or 7410). Many of these should be filled in for you. Here are some tips if you’re unsure how to fill in any of the required fields:

  • Host Name: Any name you want.
  • DNS Domain: “localdomain”
  • Default Router: The same as the IP address, but put 1 as the final octet.
  • DNS Server: The same as the IP address, but put 1 as the final octet.
  • Password: Whatever you want.

After you enter this information, wait until the screen provides you with a URL to use for subsequent configuration and administration. Use the version of the URL with the IP address (for example, https://192.168.56.3:215/) rather than the host name in your web browser to complete appliance configuration”
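The "same as the IP address, but put 1 as the final octet" tip is easy to automate. Here is a small sketch (plain string manipulation, nothing official) of what that rule does to an address:

```python
# Sketch of the Sun setup tip: derive the Default Router / DNS Server
# field by replacing the last octet of the IPv4 address with 1.

def final_octet_one(ip):
    """Replace the last octet of a dotted-quad IPv4 address with 1."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(octets[:3] + ["1"])

print(final_octet_one("192.168.56.3"))  # 192.168.56.1
```

So for the example URL above (https://192.168.56.3:215/), both the Default Router and DNS Server fields would be 192.168.56.1.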

What the above steps do is set up the virtual simulator on your system. This also assigns an IP address to the storage simulator. Once that is done, you see a login prompt in your VMware Player. This would probably be the same if you were using the actual hardware. At this point you have two options:

  • Login with ‘root’ as the user name and the password you have entered during the setup time and start using the CLI  (or)
  • Use the Web and the GUI provided there to manage the simulator

Though I generally love Unix and CLIs, I decided to go ahead and try the web GUI. You can access it by typing in the link given during the setup phase. It will be something like <some ip address>:215/ (I got 192.168.22.128:215 as my address; it can be different for you). Once you type this in your browser you will get the login screen.

The Sun Unified Storage system has a lot of features and you can test them using the simulator. There are features like replication, compression, snapshots, analytics, etc. My initial idea was to do the simplest possible thing: create a LUN, create a filesystem and export it, then use the LUN or filesystem. So I have not yet checked the other features.

The Sun Unified Storage allows you to use NFS, CIFS and iSCSI. In the GUI, on the top you have a tab called ‘Shares’. This allows you to create shares of the type you want. Shares can be grouped together as projects, making it easy to administer shares of the same kind. Under ‘Shares’ you have the Filesystem and the LUN tabs. If you want to use NFS or CIFS, you need to create that filesystem using the Filesystem tab. If you want to use iSCSI, you can just create a LUN using the LUN tab.

I first created a filesystem and exported it. It was easy to see it from Windows: I just gave the path and it immediately saw the share. I then wanted to see the same share via Linux, so I started another VMware Player instance with an Ubuntu virtual machine running in it. Initially I had a few hiccups, since the portmapper package is not installed by default on my system. My friend Sagar sent me a link on the packages required on Ubuntu to make NFS work. (The Ubuntu link here.) Once I installed the required packages and configured the system, I could immediately mount the share and copy some files into it.
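For reference, here is roughly the mount command I ran from the Ubuntu guest, expressed as a small Python helper that builds the argument list. The export path and mount point below are only examples of mine; use the share path shown under the ‘Shares’ tab of the simulator and the IP address you were given:

```python
# Build the NFS mount command for an exported simulator share. The
# server IP, export path and mount point are illustrative values only.

def nfs_mount_command(server_ip, export_path, mount_point):
    """Return the argv list for mounting an NFS share on Ubuntu."""
    return ["sudo", "mount", "-t", "nfs",
            f"{server_ip}:{export_path}", mount_point]

print(" ".join(nfs_mount_command("192.168.22.128",
                                 "/export/shares/demo", "/mnt/sunfs")))
# sudo mount -t nfs 192.168.22.128:/export/shares/demo /mnt/sunfs
```

If the mount fails with an RPC error, it is most likely the missing portmapper / nfs-common packages mentioned above.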

The next step was to try accessing some LUNs via iSCSI. I don’t have an iSCSI HBA, so I had to use the software initiator. I downloaded the Software Initiator and the documentation related to it from the Microsoft site. (I downloaded the initiator which ends with -x86fre.exe.) The funny part is that the software and the document are almost the same size!! The download and installation happen fast and no reboot is required. Once installed, you can see the iSCSI software initiator under ‘Programs’. The initiator works as a GUI and a CLI is also provided; in case you are just testing, the GUI should do fine.

Once you have installed the Software Initiator, you need to go to the simulator and create a LUN. (Since you will expose this as an iSCSI target, you should not create a filesystem.) You should go into the Protocols tab in the simulator to specify that the iSCSI protocol is to be used and to allow access for all initiators. Once this is done, get back to Windows and open the iSCSI initiator GUI. In this GUI:

  • Provide the IP address of the simulator under the ‘Discovery’ tab.
  • The exposed LUNs will be automatically discovered and shown to you in the ‘Targets’ tab
  • Select each of the targets and press the ‘Login’ button. This will ensure you are now connected to the LUN
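If you prefer the CLI the initiator ships with, the three GUI steps above map roughly onto Microsoft's iscsicli tool. The sub-command names below are from my reading of the initiator documentation, so treat this sketch as a starting point rather than gospel; the portal IP and target IQN are placeholders:

```python
# Rough CLI equivalents (via Microsoft's iscsicli tool) of the three
# GUI steps: add the discovery portal, list targets, log in to one.
# Sub-command names are assumptions based on the initiator docs.

def iscsi_cli_steps(portal_ip, target_iqn):
    """Return the command lines matching the Discovery/Targets/Login steps."""
    return [
        ["iscsicli", "QAddTargetPortal", portal_ip],   # 'Discovery' tab
        ["iscsicli", "ListTargets"],                   # 'Targets' tab
        ["iscsicli", "QLoginTarget", target_iqn],      # 'Login' button
    ]
```

Run each list as a command from a Windows prompt; the target IQN to log in to comes from the output of the ListTargets step.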

Once these steps are done, the disks will be visible in ‘Disk Management’ (under ‘Computer Management’). These are raw disks, which you can initialize and partition. I created two LUNs of 0.5GB each, was able to see them via iSCSI, and could initialize and partition them.

Thus ended my two-day tryst with the Sun Unified Storage Simulator. I must say I am impressed with it: very easy to install and very easy to configure and use. I will probably try out the other features soon and will write them up if I do. I am now raring to go and try other storage simulators. I know a Celerra simulator exists, but I am not sure if it is open. NetApp has a simulator, but I think it is for NetApp clients only.

If you have the time, do try out the Sun simulator. You can get the installation and configuration documents at this site. My thanks are due to Chris M Evans, who provided me with the link to the documents when I asked him. (The document also comes as part of the simulator; you can press the ‘Help’ tab in the simulator to get the complete document.) Chris Evans (@chrismevans), who blogs as Storage Architect, has written a series of posts on Sun Unified Storage. You can check out those articles at The Storage Architect blog.

I hope this was useful and that it makes at least a few of you wake up from your slumber and try something 🙂

What's Up in 2010, Doc?

Nothing beats the thrill of trying to predict the future. This is a very enjoyable exercise provided, like the stock market analysts, you quickly forget what you said earlier. I mean, who is going to check in December what you said in January? The best analysts know this fact very well, and hence they never shy away from predicting the future. I am not exactly going to predict how things will be in the storage world this year, just mull over what the scenario might look like.

Last year saw data deduplication go fully mainstream, with every vendor having a dedupe product. There are quite a few dedupe products in the market now, but source dedupe integrated into the backup product and target dedupe, either appliance based or VTL, will be the key products. We will need to wait and watch whether primary dedupe makes much headway. There will always be products like content-aware dedupe, but whether they will be accepted as mainstream products or be used more as solutions for particular problems needs to be seen. My take is that it will be more of the latter.
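For those new to dedupe, here is a toy illustration of the core idea: split the data into chunks, hash each chunk, and store every unique chunk only once. Real products use variable-size chunking and far smarter indexing; this sketch and its tiny chunk size are purely illustrative:

```python
# Toy dedupe: fixed-size chunking plus a hash index. Each unique chunk
# is stored once; a "recipe" of hashes records how to rebuild the data.
import hashlib

def dedupe(data, chunk_size=4):
    """Return (store, recipe): unique chunks keyed by hash, plus the
    sequence of hashes needed to reconstruct the original bytes."""
    store = {}    # chunk hash -> chunk bytes (kept only once)
    recipe = []   # ordered hashes describing the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)
        recipe.append(h)
    return store, recipe

store, recipe = dedupe(b"abcdabcdabcdwxyz")
print(len(recipe), "chunks referenced,", len(store), "stored")
# 4 chunks referenced, 2 stored
```

The difference between the two approaches above is only where this runs: source dedupe does the hashing at the backup client before sending data over the wire, target dedupe does it at the appliance or VTL after the data arrives.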

2009 also saw a lot of discussion and some activity around Solid State Drives (SSDs), with EMC thumping its chest for being first off the block and other vendors slowly offering SSDs as well. By the end of the year, with the results of STEC, the company which manufactures these SSDs, not being very rosy, questions were raised about how quickly SSDs were being adopted in the industry. Companies like 3Par offered technology in their arrays as an alternative to using SSDs. FCoE was discussed quite a bit, but that is all that happened. Thin provisioning, cloud computing and storage virtualization were topics which were discussed heatedly.

The year 2009 did see a lot of industry action which kept the vendors, analysts, bloggers and tweeters happy. EMC’s acquisition of Data Domain was the biggest news by a mile. HP buying 3Com (people were predicting that HP would buy Brocade), Cisco entering the blade server market with UCS, and the teaming of EMC, VMware and Cisco to offer products aimed at the cloud were other significant happenings.

So how does 2010 look? Quite misty, I would say 🙂 People would love it if I said it would be cloudy, since a lot of vendors are betting on the industry taking to the cloud in a big way. Microsoft, Google and Amazon have shown that the cloud can work, and now the bigger boys want a bigger market for the cloud and of course a big share of that market. My feeling is that a lot of marketing dollars from the big guys will go into pushing their ‘cloud’ products. EMC, VMware and Cisco are playing the game together, which means the other big boys will start such collaborations as well in order to fight this combination. It will be interesting to see how the game develops. No one knows for sure what the future of the cloud will be, but no one wants to miss the bus, if and when it arrives!!

From a technology perspective, what does 2010 hold? I see the same technologies that were discussed in 2009 getting wider acceptance in 2010. Will there be any new breakthrough technology? Going by the current trend, I am not so sure. FCoE will slowly pick up this year, but it will take some more time before it becomes the default standard in the data center. Automatic tiering will be discussed and implemented in a lot more arrays; EMC’s FAST has already started the debate, with other vendors highlighting how automatic tiering is done in their arrays. Thin provisioning is slowly on its way to being a standard feature rather than a differentiator. Primary dedupe will get some attention, but will it become mainstream? I doubt it. The effectiveness of SSDs has been debated and beaten to death; they will get a new lease of life with automatic tiering, since technologies like FAST are supposed to ensure effective use of your high-cost SSDs. Will it be the year of I/O virtualization? Xsigo got some good press and was discussed quite a bit. Their value proposition is quite good and I hope they do well. HP bought IBRIX, and that gave some attention to scale-out NAS. Storage virtualization is a topic which will be discussed, if not by everyone, definitely by HDS.

One thing every storage vendor has been trying to do is get their products to work well with server virtualization products like VMware, Hyper-V and Xen. More management tools are required here, because it becomes a nightmare to keep track of which physical disks hold your files with so much virtualization happening!! Backup / restore software is also rising to the challenge of integrating with server virtualization products on one end and with dedupe on the other.

In all, I don’t see too many dramatic things happening in 2010. I hope, for my sake and that of all other bloggers, I am wrong. (Of course, if you play your cards correctly and change your predictions fast enough, you can never be wrong!!!)

Wish all of you a great New Year 2010. May we all see good growth in our personal and professional lives.

Walk when you talk. Learn while you teach.

“Walk when you talk” is the new Idea ad which is famous nowadays. Given that I will probably never have a marketing division to coin a cool phrase for me, let me do it myself. So what I do for a living nowadays, I will call “Learning while teaching.” Quite a trite and tired phrase, I have to admit, but hey, I am no cool marketing guy. Though jargonish in feel, as with all jargon, it hides an important truth.

A couple of months back I was given the task of conducting a one-day session on the Serial Attached SCSI protocol. The team attending this was a fairly experienced one and they were clear on what they wanted. It is always a pleasure to deal with such teams, as both the trainer and the audience are proceeding towards the same station and many a time you arrive safely. I started searching the web for details on the latest SAS protocol and found that details about SAS 2 were very scant. I was like, “What, the internet doesn’t have the details?”, only to realize that no one out there is sitting around looking for what is missing and filling in the details. The Internet is a medium of collaboration and sometimes you also need to put in something!!! Maybe I will put in some details about SAS soon. Coming back to the training per se, the best part was the preparation. I did what everyone does when nothing else works. Read the @#$%*! manual. Actually, I did something better. I read the SAS 2 specification. Luckily I was able to connect with a friend who had some idea of SAS 2. He clarified some concepts for me. Reading the specification is very instructive. I have done it earlier, when I did some content development for a SCSI Internals course. It takes time to read the specification and connect everything up. Once you do that, you do get a good idea of what is going on. I learnt a lot when I read this specification. And luckily for me, when I did the course, some very perceptive questions were asked, for which I had to again refer to the specs and clarify the doubts. The clarification was equally enlightening to the participants and to me. No one teaches you better than a perceptive and intelligent student.

Next came a standard storage course, but with more focus on the Fibre Channel protocol and FC switches. To get a better idea of how the switches get configured, I downloaded the switch manuals from various vendors and read them. Reading manuals may not equal the excitement you get when you read John Grisham or Harlan Coben, but it does teach you a lot. It gives a very good idea of how things are actually implemented, and you also get an idea of the limitations in real life compared to the theory. Here again, the participants asked questions about areas I did not have much clue about, leading me to start investigations which eventually benefited both the participants and me. I was immensely helped by my friends here and my sincere thanks to all of them. In fact, I should actually sing the Beatles song: “I get by with a little help from my friends.” They are always around to help and that is a nice feeling to have.

While perceptive students are a boon, I came across a different kind of participant in a course that I conducted recently. This person would have a soft copy of a manual open and keep asking questions regarding a storage array, all of which pertained to details present in the manual. There is nothing wrong in trying to find out if the teacher knows all the details. It keeps you on your toes when done once in a while, but it can get tedious when done almost continuously over a three-day period!!! Similarly, if people have access to the Internet when a training session is on, they try to find the answers to the questions you ask on the Internet. In this Internet era, it is very important to make people realize that there is a huge difference between learning and finding out the answers. The Internet allows you to do the latter easily, but the learning part you need to do yourself, sometimes even after you find out the answers. Or shall I say, especially after you find out the answers!!!

It has been more than a year since I ventured out on my own. I thought I would do a one-year completion post but dropped the idea, thinking it would be too self-indulgent; honestly, I haven’t achieved much except surviving for one year without being part of a larger organization. This one year has clearly taught me that learning and teaching are the two things I enjoy the most. And I would love to think that I have been able to communicate this enjoyment to those whom I teach. At least I am trying, and you can’t blame a person who tries, can you?

Mating Season

All of us in India know that the dark clouds during the monsoons bring out the best in the peacocks. They gloriously spread their beautiful feathers and call out for their mates. This wonderful sight and the mating cries let you know that someone is being wooed. Something similar has been happening in the Storage world in the recent past.

Unless you had taken a vacation and gone off to Tibet to meditate in peace, you would have heard about the mating dance performed by EMC and NetApp. The object of their affection: Data Domain. NetApp started the process by spreading its dollar feathers. And when you spread billions of them, it is bound to affect the opposite sex positively. No wonder Data Domain was impressed. But the Storage jungle is a cruel place and you cannot be assured that the initial impression created will be enough to tie the knot. EMC, which heard NetApp’s mating call, immediately responded with an impressive display of its own feathers. And it spread its feathers wider than NetApp. This confused Data Domain, which had a soft corner for NetApp!! NetApp had to respond. It spread its feathers a bit more and told Data Domain that it had something called stock options, which would enable more feathers to sprout in the future and make the display even more glorious. EMC refuted that assertion and felt that a bird in hand is worth two in the bush!! For some time, the spread of the feathers remained constant and it was time for mating calls. ‘You will fit well in my family’, ‘We have synergy’, ‘The Govt will not approve of your marriage’, ‘It is a wonderful family’ and so on and so forth. By all indications, Data Domain’s soft corner for NetApp still existed and things were at an impasse. Then EMC did what everyone was expecting it to do. It added more feathers to its already glittering display, and that ensured Data Domain swooned in its favor. NetApp had to beat a retreat.

All in all, the whole mating process enlivened the Storage industry and specifically the blogosphere. How long can you keep on discussing whether thin provisioning is important or how to save money through virtualization? You need something to stir things up, and the Data Domain drama was a godsend for many. Experts spent a lot of time analyzing what this would mean to EMC and what it would mean to NetApp, and from the sidelines were predicting the winner. Now that EMC has won, people have started wondering if EMC has overpaid for Data Domain, whether it will clash with already existing products, etc. But as we all know, once integrated, Data Domain becomes part of the EMC family, and family quarrels are never as interesting as a mating fight or a lover’s tiff. To be fair, EMC, by all reports, has done an excellent job of integrating its acquisitions, and I am sure this will also work out well. The central question has been whether it is worth paying so much for Dedupe, which, according to many, is a feature and not a product. Time, as usual, will provide us the answer.

That it was a great mating was evident when HP, without any drama, bought IBRIX, which is into scalable parallel file systems. HP has been partnering with IBRIX on various deals. LSI Logic bought ONStor, which makes Clustered NAS solutions. Both IBRIX and ONStor have scale-out capability, and the general thinking is that everyone is targeting acquisitions aimed at fortifying their positions in the Cloud Computing space. Let’s wait and see how things turn out in the future. Meanwhile, let’s hope such things happen to enliven all of us once in a while. This is one area where we wouldn’t mind some duplication, would we?!!

Time to Invest. Time to help.

If you are thinking that I am talking about investing in the stock market, forget it. I have never been an expert in that area. What I am suggesting is that you invest in yourself. There cannot be a better time.

Times are indeed tough now, especially for people in the IT industry. There is a sense of concern all around and people are very unsure about how long their jobs will last. For some, the uncertainty has ended, but unfortunately, so has the job. We hear of companies coming up with many ‘schemes’ in order to keep costs low. All of you would have read about how Satyam (Tech Mahindra now) is going to keep people on the bench by paying them their basic salary.

These tough times can also be the time to invest. In yourself. It is time to upgrade your skills. What the current situation has taught people is that only those whose skills are valued in the marketplace have a chance of survival. So if you are a project manager, maybe you should get a PMI certification. If you are technically oriented, maybe you should see how you can upgrade your skills further and keep in touch with the latest happenings in your area. What I have seen in recent times is that people who are technically well qualified have not had a problem moving out of their current job and getting a new one. Though times may be tough for people financially, I would still urge people to invest some money, and a lot of time, in upgrading their skills now. This is the best investment that can be made for the future. People who are enterprising in nature should find this a good opportunity to start off on their own. When you are on your own, any time is a tough time. Ask me 🙂

Tough times also mean people need help. I have had a lot of people, who have either lost their jobs or whose jobs are under threat, contact me. I have tried my best to put them in touch with consultants that I know and referred them to friends in other companies. Given the sort of social stigma that gets attached to the loss of a job, I think it is very important that we help out our friends in this hour of need. Do let people know about job opportunities that you come across. Put in a word for your friend or former colleague wherever you can. I can assure you that every little gesture of yours in this direction will be highly appreciated by all concerned.

Let us hope the current situation is a temporary one. But don’t bet on it. Work smart, work hard and invest in upgrading your skills in order to beat these tough times. Or be bold enough to chart your own path.

Solar Eclipse?

The title is misleading. During an eclipse, the sun is blocked for some time before it re-emerges in its full glory. No such luck for Sun, or more precisely Sun Microsystems, which will merge into Oracle. I am sure all of you have heard and read various analyses of what this deal would do to Sun, Oracle and the industry in general. (I was away on vacation, hence the long silence.) Oracle also acquired Virtual Iron, a virtualization company, after it acquired Sun. The industry dynamics are surely changing now. Cisco, with its UCS (Unified Computing System), has got into the blade server space, which is dominated by HP and IBM. Now Oracle wants to get into the virtualization space, dominated by VMware. Oracle now has three virtualization solutions: its own, Sun’s xVM and the virtualization solution of Virtual Iron. How the market for Blade Servers and Virtualization will change remains to be seen. Added to this, NetApp is buying Data Domain for a large sum. Interesting times ahead.

It was a bit sad seeing Sun set. In the early part of my career, as I had indicated in my earlier post, we were fighting against Sun in many places with our SGI workstations. We lost a lot of those fights, since the solutions were totally different and Sun had the exact solution which a lot of people wanted. The actual fight in the workstation space those days was between Apollo and Sun. Apollo was later taken over by HP. In those days Sun was sold in India by Wipro, and they were doing a good job of it. (Those were the times when we generally got systems which were at least a couple of models older, if not a generation older. Those were the times when India as a market had not evolved and there were a lot of restrictions on getting newer equipment into the country. Added to it, it was costly getting new equipment in because we had to pay heavy import duties.)

Sun was always known as a technology company and there were a couple of instances wherein I could see the great respect people had for Sun. I was part of the organizing committee of what was known as ‘Techforum’, an annual technology festival within Wipro. Though it was an internal festival, we would invite a few speakers from the industry. One such speech was given by the Sun representative. He spoke about the 10 technologies that we should look out for in the future. This was probably around 8 to 10 years back and I don’t remember which technologies he spoke about. What I do remember is that whatever was spoken made a very good impression on everyone present. It was generally accepted that this was the best presentation we had heard during our conference. There was a lot of clarity of thought in the presentation. (One remark by the speaker I still remember. He said that when Sun started saying, “The Network is the Computer”, a competitor put out a counter comment stating, “Sorry. The network is a network and a computer is a computer”, only to beat a hasty retreat later.)

The next incident relates to Scott McNealy’s visit to Wipro. Scott was supposed to deliver a lecture to our folks on a weekend. (I think the talk was scheduled for a Sunday.) As can be expected, there was a bit of apprehension regarding the number of people who would come in, given that it was a weekend. So managers like me were asked to see to it that as many team members as possible turned up for the lecture. I did my best to urge people to come in for the lecture. We had probably underestimated people’s respect and admiration for Scott. We had a large turnout that day and everyone enjoyed the talk.

While Sun did have a great reputation as a technology company, their India Development Center was more subdued and had a lower profile than those of competitors like HP and IBM. I may be talking from my limited exposure, but I have seen more engineers keen to join companies like HP and IBM than to join Sun. Maybe Sun did not recruit as aggressively as HP and IBM did in India, and hence this effect?

What we are witnessing now, with new products and all these M&As, will have a big impact on the future of the industry. Robin Harris of StorageMojo has a nice article on “Why we are getting vertical – again”. Read the comments section as well, since there are some relevant comments there.

I have not followed Sun closely enough to know why they got into this situation, but it always saddens you when a technology company goes down. Sun may disappear soon, but they do leave behind a rich legacy. Stuff like NFS and Java will be around for a long time to come. Hopefully the Sun culture of technology innovation will continue within Oracle.

Silicon bites the dust

When a certain product helps you meet the future Prime Minister of a country, it is not surprising that you remember the product fondly. That is why I turned a bit nostalgic when I read that Silicon Graphics’ assets have all been bought by Rackable for just $25 million. After all, the Silicon Graphics Iris workstation took me to many places in India, and no wonder I feel like writing about it now. So excuse my self-indulgence and read on.

It was in the late 1980s that both OMC Computers and Wipro wanted to be the guys who sold Sun workstations in India. HCL was selling Apollo workstations. I had just joined OMC Computers and people told me that we had lost Sun to Wipro. It was a good deal for Wipro because they sold a lot of Sun workstations. In order to compete in the workstation market, OMC tied up with Silicon Graphics, then one of the leading graphics workstation vendors in the market. As we found out the hard way, given its price and positioning, it was a major challenge to sell in India, and in many cases Wipro beat us with the Sun workstations, which is what many enterprises wanted. I am not complaining, since I got a lot of good experience trying to sell and support the Silicon Graphics workstations. For one, I got to meet a lot of interesting people, from academicians and film producers all the way up to the future Prime Minister. Along the way, I had a lot of interesting experiences as well.

It was clear to us, after our efforts to fight Sun, that we could not position Silicon Graphics workstations as general-purpose workstations. (Wish the guys who made the deal had known it earlier.) So our strategy became more product or solution oriented. One of the segments we attacked was molecular biology, since the Centre for Cellular and Molecular Biology (CCMB) was located in Hyderabad, our headquarters. And next to it was RR Labs (now IICT). So we met a lot of professors there and gave them a demo of a certain product, whose name I cannot recall. This was to enable some sort of 3D modelling of proteins. It was then that I bought a book on Biochemistry and learnt that there were some 20-odd amino acids and that all proteins were formed out of a certain pattern of these amino acids. It seems the scientists knew the sequence of amino acids in the proteins but did not know the actual physical structure of the protein. This software was supposed to help them solve that problem. Encouraged by the good words the profs had for this software, we decided to do an all-India roadshow and asked the company which had this product to send us an expert. They agreed and sent us the ‘expert’, who was actually a student doing a summer internship with them and was on a vacation to India!! She was now called in to face a lot of academicians who had been working in this field for ages. The encounter had the girl almost in tears. So I had to step in and control the situation. As we always do, I took many of the discussions ‘offline’, promised that we would send them more details later, and did all the things we do when we don’t know an answer and don’t want to admit it. I remember my manager commending me later on how I was able to save the situation, but it was not a situation I want to get into often. Lessons were learnt in this encounter, and when we went to the next venue, we were prepared and things went off smoothly.

Along with the molecular biology software, we were also trying to sell some 3D modelling and animation software. I think it was called Alias. My colleague Surya and I decided that we should try and sell this software to Annapoorna Studios, which was the biggest studio those days in Hyderabad. We ended up meeting Akkineni Venkat, the brother of the famous Telugu film hero, Akkineni Nagarjuna. He had visited our premises along with Ramprasad, the proprietor of Walden Book Store, and we gave them a demo of 3D modelling and showed some tricks like turning a positive into a negative, etc. What we didn’t understand at that time was that whatever we had was not enough; getting good graphics into films required a lot more than just a Silicon Graphics machine and some software. We never made any deals with Annapoorna Studios, but the encounter did provide me with some memorable moments. One such moment was when Akkineni Venkat, who was watching a demo, stepped out to make a call. He called Nagarjuna’s house and apparently it was Nagarjuna’s wife, the beautiful actress Amala, who was on the line. I still recall Surya pulling my hand excitedly and saying, “He is talking to Amala!! He is talking to Amala!!”

Then came the very brief encounter with this person. We were told by our manager that we needed to put up the Silicon Graphics workstation for a demo on a Sunday. Obviously we were pissed. First, it was on a Sunday. Second, it was not a computer exhibition. It was an exhibition of all the companies to whom the State Bank of India had provided funding. OMC Computers happened to be one of them. SBI wanted to show the good work it was doing to the then Finance Minister, who went by the name of Manmohan Singh. There we were, on a Sunday, waiting for the FM to arrive. He came in and started doing the rounds. What was impressive was that he took his time at each stall and spoke to people to get their direct feedback. He came to our stall, where I was holding the fort. Those were the times of high import duties, and Silicon Graphics was a costly system because of those duties. The FM was in our stall and we showed him a small demo. After having a look he asked, “How much does this system cost?”. I replied, “Depends on your policies, sir.” This brought a small smile to his lips. He moved on after asking if people thought the cost was too high. What was striking was his utter simplicity and a real urge to understand the problems. No wonder he is considered to be one of our best Finance Ministers ever.

Silicon Graphics took me to various corners of India. I experienced the searing heat of Jamshedpur, the freezing cold of Delhi, the sweltering humidity of Calcutta and the pleasantness of Bangalore. It is very sad to see Silicon Graphics bite the dust. I have not worked on one since I quit OMC Computers in 1995, but still, it feels like I have lost a good friend.