Consolidation continued

A while back I wrote a post about our consolidation process, describing the primary aspects of it for us and the solutions we chose. We are now a little ways down the road, and a lot of the consolidation has already happened.

We have shut down (or emptied and left running) 5 vCenter servers and consolidated 8 clusters and 7 single hosts into our new vCenter setup. The process has been pain-free for the most part. In all of these migrations (6 different maintenance windows) we have only encountered a single problem, which I will describe a little later. This has given our little virtualization team quite the track record for performing well during maintenance windows 🙂
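For the curious, moving a host between vCenters is mostly a matter of disconnecting it from the old vCenter, removing it, and adding it to a cluster in the new one. A rough PowerCLI sketch of that flow is below; the server names, cluster name and credentials are placeholders, not our actual setup:

```powershell
# Disconnect and remove the host from the old vCenter...
$old = Connect-VIServer -Server "old-vcenter.example.local"
Get-VMHost -Name "esx01.example.local" -Server $old |
    Set-VMHost -State Disconnected -Confirm:$false |
    Remove-VMHost -Confirm:$false

# ...then add it to the target cluster in the new vCenter.
$new = Connect-VIServer -Server "new-vcenter.example.local"
Add-VMHost -Name "esx01.example.local" `
    -Location (Get-Cluster -Name "Prod-Cluster" -Server $new) `
    -User "root" -Password "********" -Force -Server $new
```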

So the problem we encountered was not in the actual process of disconnecting from one vCenter and connecting to another, but in the preparation phase. While we were migrating from VDS switches to VSS switches we needed to change port groups for a lot of VMs; PowerCLI to the rescue. This was automated using a translation table in the form of a hash table, with the old VDS port group names as keys and the new VSS port group names as values. For each VM, and for each of the network adapters on that VM, look up the current port group name and find the new value in the hash table. Simple!
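In PowerCLI terms it boils down to something like the sketch below. The vCenter address and port group names are placeholders, and the script we actually ran differed in the details:

```powershell
Connect-VIServer -Server "old-vcenter.example.local"

# Translation table: old VDS port group name -> new VSS port group name
$portGroupMap = @{
    "dvPG-Prod-VLAN10" = "vss-Prod-VLAN10"
    "dvPG-Test-VLAN20" = "vss-Test-VLAN20"
}

foreach ($vm in Get-VM) {
    foreach ($nic in Get-NetworkAdapter -VM $vm) {
        $newName = $portGroupMap[$nic.NetworkName]
        if ($newName) {
            # Move the adapter to the matching VSS port group
            Set-NetworkAdapter -NetworkAdapter $nic -NetworkName $newName -Confirm:$false
        }
    }
}
```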

But. There is always a but. After changing some 60-70 port groups, the port group of a specific VM was changed and everything seemed to be working fine. About 15 minutes later I got the support call that a website was down on one of the VMs. I started looking and could not see anything wrong (I’m not that fond of IIS web servers and their way of logging!). The network of that VM had not been changed yet, so what was causing this 503 Service Unavailable error? And even more odd, it was only one out of 17-20 websites that was not working.

I googled some things, no luck, grabbed a colleague to help look, no answer. Then Google paid off: a mention of the words “application pool”. I quickly looked up the application pool of the problem website and sure enough it was in a stopped state. Started it again and the site was running again. But why had it stopped? The logs showed nothing™. So I got to thinking, and the only explanation that made sense to me was that 15 minutes before the call I had changed the port group of the aforementioned VM, which was running the database for the website, a small MSSQL machine. The only logical thing I can see having happened is that changing the port group briefly cut the connection to the database, and the application running in the pool handled it badly.
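In hindsight, a quick check from PowerShell on the web server would have found the stopped pool faster than digging through the IIS logs. Something like this, where the pool name is of course made up:

```powershell
Import-Module WebAdministration

$poolName = "ProblemSiteAppPool"
# Start the application pool if it is not already running
if ((Get-WebAppPoolState -Name $poolName).Value -ne "Started") {
    Start-WebAppPool -Name $poolName
}
```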

Now, looking back at it, this is a minor problem. One website out of 20 running on one VM out of hundreds. All in all the consolidation process is running well. We still have a few more vCenters and single hosts to gather under the new setup, and after that comes the process of phasing out old hardware. We have recently received 6 new blade servers to replace some older hardware, some of the most powerful we have had yet: 2×8-core Intel CPUs with 256 GB RAM, and if all goes well, a new storage controller dedicated to these machines and virtualization in general. But moving to new hardware, shutting down old servers, giving new IPs to production machines, all of this is not an easy process and requires coordination across the entire IT organization.
