Disabling “One or more ports are experiencing network contention” alert

From day one of deploying vRealize Operations Manager 6.0 I had a bunch of errors in our environment on distributed virtual port group ports. They were listed with the error:

One or more ports are experiencing network contention

Digging into the exact ports that were showing dropped packets resulted in nothing. The VMs connected to these ports were not registering any packet drops. Odd.

It took a while before any information came out, but it was apparently a bug in the 6.0 code. I started following this thread on the VMware community boards and found that I was not alone in seeing this error. In our environment the error was also only present when logging in as the admin user. vCenter admin users were not seeing it, which pointed towards a cosmetic bug.

A KB article was released about the bug, stating that the alert can be disabled, but it did not describe exactly how to disable it. The alert is disabled by default in the 6.0.1 release, but if you installed 6.0 and upgraded to 6.0.1 without resetting all settings (as I did not), the error is still there.

To remove the error, log in to the vROps interface and navigate to Administration, then Policies, and lastly Policy Library, as marked in the image below:

Policy

Once in the Policy Library view, select the active policy that is triggering the alert. For me it was the Default Policy. Once selected, click the pencil icon to edit it, as shown below:

Edit

On the Policy Editor, go to step 5 – Override Alert / Symptom Definitions. Under Alert Definitions, click the drop-down next to Object Type, expand vCenter Adapter and click vSphere Distributed Port Group. Two alerts will now show. Next to the “One or more ports are experiencing…” alert, click the arrow by State and select Local with the red circle with a cross (disabled), as shown below.

Local Disabled

I had a few issues with clicking Save after this. I do not know exactly what fixed it, but it worked right after I had logged in as admin. This disables the alert! Easy.

Default Host Application error

Last week I was called up by one of our Windows admins. He had some issues with a VM running Windows and IIS. As we were talking he also casually mentioned another error he was seeing that was “caused by VMware”. I was a bit skeptical, as you might imagine 🙂

He was seeing this error when he attempted to browse the IIS web page by clicking the link available in the IIS Manager:

Default Host Application Error

Notice the VMware icon at the bottom. This is an error from VMware Tools! What? As any sane person would do, I consulted Google. And got a hit here – https://communities.vmware.com/message/2349884

The third reply gave me the answer. It seems that when installing VMware Tools, it might associate itself with HTTP and HTTPS links. A click on the link in IIS Manager would then call VMware Tools, which is unable to service the request. The fix is pretty straightforward.

Go to Control Panel, then Default Programs and Set Associations. Scroll down to the Protocols section and locate HTTP and HTTPS. Make sure these are set to your browser of choice – in the image below I set them back to Internet Explorer (he was a Windows sysadmin after all 🙂 ). If the association is wrong, it will be set to Default Host Application, as shown for TELNET.

Fix
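
If you want to check the current association before changing anything, you can read the per-user URL association keys from the registry. A minimal, read-only sketch – the registry path is what I would expect on Windows 7 and later, so verify it in your own environment; the actual fix is still done through Default Programs as described above:

# Show which handler (ProgId) is currently registered for http and https.
foreach ($protocol in 'http', 'https') {
    $key = "HKCU:\Software\Microsoft\Windows\Shell\Associations\UrlAssociations\$protocol\UserChoice"
    Get-ItemProperty -Path $key -ErrorAction SilentlyContinue |
        Select-Object @{ Name = 'Protocol'; Expression = { $protocol } }, ProgId
}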

Working with Tags

The last couple of days I have been working with PowerCLI and vCenter Tags to see if I could automate my way out of some things regarding tracking which sys admins are responsible for a given VM.

Tagging and creating tags manually is not really my cup of tea (we have 1000+ VMs, 40+ sysadmins and even more people beyond that who could be tagged), so some automation would be required.

Pre-creating all tags was not something I would enjoy either, as maintaining the list would suck, in my opinion. Also, all tags are local to a vCenter, so if you, like us, have more than one vCenter, propagating tags to the other vCenters is something to keep in mind as well.

I added a bunch of small functions to my script collection to fix some things. The first thing I ran into was “How do I find which vCenter a given VM object came from?”. Luckily the “-Server” option on most cmdlets accepts the vCenter server name as a string and not just the connection object, so the following will get the vCenter of a given object by splitting its Uid attribute:

$object.Uid.Split("@")[1].Split(":")[0]

Splitting at “@” and taking the second part removes the initial part of the string, so it now starts with the FQDN followed by more information. Then splitting at the “:” just before the port number and taking the first part results in the FQDN of the vCenter. This may not work in all cases, but it works for our purpose.
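
Wrapped up as a small pipeline-friendly function it could look like the sketch below. The function name is my own invention, and it assumes the Uid follows the usual “/VIServer=user@vcenter-fqdn:port/…” format:

# Return the vCenter FQDN of any PowerCLI object that exposes a Uid.
function Get-VIObjectVCenter {
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        $InputObject
    )
    process {
        # Everything after the "@" starts with the vCenter FQDN;
        # the ":" just before the port number terminates it.
        $InputObject.Uid.Split('@')[1].Split(':')[0]
    }
}

# Usage:
# Get-VM 'SomeVM' | Get-VIObjectVCenter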

Now I needed this in my script because I was running into the problem of finding the correct Tag object to use with a given VM object in the “New-TagAssignment” cmdlet. However, it dawned on me that if I just make sure the tag is present on all vCenter servers when I call “New-TagAssignment”, I don’t need the Tag object, just the name, and PowerCLI/vCenter will do its magic. Thus the following works perfectly:

$VM | New-TagAssignment "<TAGNAME>"
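
Making sure the tag exists on every connected vCenter first could look something like this. The tag and category names are made up for the example, and it assumes the tag category already exists on each server:

# Create the tag on any connected vCenter where it is missing,
# then the assignment by name above just works.
$tagName = 'john.doe'   # hypothetical sysadmin tag
foreach ($vc in $global:DefaultVIServers) {
    if (-not (Get-Tag -Name $tagName -Server $vc -ErrorAction SilentlyContinue)) {
        New-Tag -Name $tagName -Category 'SysAdmin' -Server $vc | Out-Null
    }
}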

But in any case I now have a way of finding the vCenter name of a given vSphere object in PowerCLI 🙂


vCenter Orchestrator 5.5.2.1 and SSO behind a load balancer

If you, like my organization, are crazy about HA solutions, you have probably looked at putting SSO behind a load balancer. This may look like a daunting task, and troubleshooting may not be easy. But hey, we have an SSO server in each of our two sites maintaining the SSO service across the entire platform 🙂 Hurray!

Now, this is not something I just recently configured. Reconfiguring an existing environment to point to a new SSO server alone seems like something I would avoid – easier to just move ESXi hosts to new vCenter servers in a new setup. No, we were among the first movers on SSO in HA. We installed vSphere 5.1 and configured SSO for HA as described in KB2034157. vSphere 5.1 SSO had a host of other problems, so only 4 months after installing vSphere 5.1 and moving production to this setup we upgraded to vSphere 5.5, and VMware were spot on with new documentation, as there were major changes in SSO and some URLs changed, such as the /sts URL. For people like us the reconfiguration of the load balancer was described in KB2058838. Easy!

We have now been running this setup for about 12 months and it has been working well for us. Upgrading has been a bit tricky, but having only applied vCenter patches twice in the period, this was okay. We are running vCenter 5.5 U2b today, so we have access to the new VMRC client for when Chrome stops supporting NPAPI.

But here is where the title of the post comes in. I recently (well, it is almost two months ago now) upgraded our vCenter Orchestrator with the latest security patches, pushing us to 5.5.2.1. Following this, a problem occurred: I could no longer log in via the client! This is a pretty serious problem, so I started debugging. Tried re-registering with the SSO – no problem. Tested login in the configuration interface -> works. Login via the client still fails. What the hell?

I then started browsing the vCO server.log file and looking at what happened when logins failed. Here is what I found – 3 of these on every login:

2014-11-27 09:52:33.716+0100 [http-bio-0.0.0.0-8281-exec-5] WARN {} [RestTemplate] GET request for "https://<sso-lb-fqdn>:7444//websso/SAML2/Metadata/vsphere.local" resulted in 404 (Not Found); invoking error handler
2014-11-27 09:52:33.717+0100 [http-bio-0.0.0.0-8281-exec-5] WARN {} [RetriableOperation] Exception handled during retry operation with message: 404 Not Found
2014-11-27 09:52:33.717+0100 [http-bio-0.0.0.0-8281-exec-5] INFO {} [RetriableOperation] Retries left: [2]. Sleeping for [3] seconds before the next retry attempt.

Now, these indicate that vCO cannot talk to the SSO. But I had just re-registered it and tested login? How could this be? At this point I started a support case with VMware. After over a month of back and forth, support started looking into why there was a double slash “//” after the port number, thinking that the SSO registration was somehow wrong. At the same time I realized something: looking at the URL, the vCO server was using a different URL than the one configured in the configuration interface. What? And thinking back to the load balancer configuration, I quickly realized the problem was as simple as this – the /websso URL that vCO uses when logging in via the client was not allowed through the load balancer configuration that VMware provided in the KB above. At some point between the vSphere 5.5 release and now, some products (including vCAC/vRA) started using /websso instead of /sts.

From here I spent about two weeks asking VMware how I should configure this, without getting real answers. Finally, last week I got a paper describing how to configure an F5 load balancer for SSO when using vCAC. This would have been good if I could reverse engineer the approach the F5 load balancer was using and configure that in the Apache load balancer. But no, those two configurations are completely different. So I decided to test something very simple: copy the configuration block for /sts and rename everything to /websso. And guess what – so far it works! Here is how it looks:

###################################################################################
# Configure the websso for clustering

ProxyPass /websso/ balancer://webssocluster/ nofailover=On
ProxyPassReverse /websso/ balancer://webssocluster/

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/websso" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://webssocluster>
 BalancerMember https://<sso-node-1-fqdn>:7444/websso route=node1 loadfactor=100
 BalancerMember https://<sso-node-2-fqdn>:7444/websso route=node2 loadfactor=1
 ProxySet lbmethod=byrequests stickysession=ROUTEID
</Proxy>
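
After reloading Apache, a quick way to verify the change is to request the same metadata URL that vCO was failing on; it should no longer return a 404. A simple check from PowerShell (placeholder FQDN, and you may need to handle certificate trust, e.g. with -SkipCertificateCheck on PowerShell 6 or later):

# Should now return the SAML metadata instead of the 404 seen in server.log.
Invoke-WebRequest -Uri 'https://<sso-lb-fqdn>:7444/websso/SAML2/Metadata/vsphere.local' -UseBasicParsing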

vCAC: Playing Around

I have for a while wanted to write a bit about vCloud Automation Center (vCAC) or, in case you haven’t heard, vRealize Automation (vRA) as it will be rebranded. My organization upgraded to vCloud Suite Standard licenses 2 years ago during the promo, so we have access to the Standard edition of vCAC.


So what would it be able to do for us? We have been looking at providing some kind of private cloud solution to users, but the form and shape of this has yet to be defined. So for the moment I am just testing out what I can do, how it is done, how to configure access and the like. So far I am impressed, but still a bit confused between tenants, fabric and business groups, reservations, entitlements, catalogs, etc.


What I have learned so far from fooling around is:

  1. Native AD support is only available on the default “vsphere.local” tenant for some reason. Meaning that if you want to use a different tenant for your AD, you need to define each domain in your AD forest to be able to use users from all domains. A bit impractical.
  2. Renaming a cluster in vCenter that has been added as a compute resource in vCAC causes odd problems. Suddenly my tests were failing with “CloneVM: Object reference is not set to an instance of an object”, and I was unable to find anything else. Thinking my template had moved, I started data collection on the compute resource again. This failed with no error message to be found. Then I remembered that I had renamed the cluster. The solution was to remove the reservations and then the cluster from the Fabric Group, add it again under the new name, recreate the reservation, and then I was able to go again 🙂
  3. Adding users to a business group as users does not give access to the actual resources. You also need to entitle them, which to me seems a bit redundant. Perhaps I have not yet seen the light on why this is smart.


I will hopefully write more about this at a later time. My next goal is to get vCO running as an endpoint, but due to some odd login problems with vCO 5.5.2.1 I can’t use my vCO at the moment!

Upgrading SSO 5.5

*EDIT* After 3 hours on the phone with VMware Technical Support we are finally running again. Permissions on one vCenter were wiped and have to be recreated, and a bug with Win2k12 domain controllers has hit us, meaning that we had to create each domain as an identity source. But we are running again! *EDIT*


This is going to be a short post about my current experiences with upgrading the SSO component of vCenter. Short because it is still not working and I am waiting for VMware support to contact me.

So after VMworld I was pushing to upgrade to vSphere 5.5, at least the web client and SSO, as this would solve a lot of our AD problems. So planning began, and in November I revived my old test environment from the vSphere 5.1 upgrade. I dusted it off, got it running (more on that in a later post) and started the upgrade. The upgrade went fine – as such. After coming online again I could not enumerate users in SSO groups. I could search the AD via the new Integrated Windows Authentication identity source, but not enumerate members of SSO groups. The web client would be “loading” for a LOOONG time and the following could be seen over and over again in vmware-sts-idmd.log:

07:07:04,885 WARN   [LdapErrorChecker] Error received by LDAP client: com.vmware.identity.interop.ldap.WinLdapClientLibrary, error code: 81
07:07:04,885 WARN   [ServerUtils] cannot bind connection: [ldap://ADSERVERNAME, null]
07:07:04,885 ERROR  [ServerUtils] cannot establish connection with uri: [ldap://ADSERVERNAME]
07:07:04,885 ERROR  [ActiveDirectoryProvider] Failed to get GC connection to domain aau.dkLdap_sasl_bind failedServer Down

I purged dates and server names from the log. But in essence it looked like SSO was attempting to make a Global Catalog (GC) connection to an AD server, but it was selecting an AD server that was not running the GC service, and when the connection attempt failed it decided that the server was down yet kept trying to contact it. Our AD forest consists of 1 root domain and about 20 subdomains, with about 55 domain controllers in total, of which a bit over half run the GC service. There is no setting to tell SSO which server to talk to in the AD forest. So no change.
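
If you want to see which domain controllers actually advertise the GC service, the _gc SRV records in DNS give a quick overview. A minimal sketch with a placeholder forest root domain (requires the DnsClient module, Windows 8 / Server 2012 or later):

# List the domain controllers registered as Global Catalog servers.
Resolve-DnsName -Type SRV -Name '_gc._tcp.forest.root' |
    Where-Object { $_.Type -eq 'SRV' } |
    Select-Object NameTarget, Port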

I got the following from VMware technical support a few days later:

Procedure

1 Log in to the vSphere Web Client as administrator@vsphere.local or as another user with vCenter Single Sign-On administrator privileges.

2 Browse to Administration > Single Sign-On > Configuration.

3 On the Identity Sources tab, select an identity source and click the Set as Default Domain icon.

In the domain display, the default domain shows (default) in the Domain column.

Please restart the services and login to the web client/ vi client.

When I tried that, it seemed to work. I reverted and it still worked, leading me to think that it was not the above that fixed it but simply time, as SSO would at some point select another server. I was assured over the phone by VMware technical support that customers who had seen the above error had fixed it this way.

But now, to tell you the truth, the above log snippet is from today. I went ahead and upgraded SSO and the Web Client yesterday and got stuck with this error after spending about 2 hours reconnecting/repointing a vCenter 5.1 to the new lookup service. Repointing requires you to change the vpxd.cfg file, as the path to the STS service is hard coded in that file and points to the old /ims and not the new /sts URL. It also caused a total wipe of permissions on the one vCenter I repointed, causing some inconsistent behavior in the web client: looking under vCenter I cannot see it, but if I browse down through Administration -> Licensing and find it there, I can see all objects with the admin@System-Domain account – the only account left with permissions on the vCenter.

I’m back at the office early this morning, having slept like crap because of this. This is going to be a long day!

Since the last time

So it’s been a while since my last post; a lot has been going on. I have been through my first “employee performance interview” (medarbejderudviklingssamtale, or MUS, in Danish). It was good, and a lot of things were discussed in regard to this new organization. Some steps to increase my skills were also planned, and I will get back to that later.

Since last time I attended VMworld Europe 2013 in Barcelona! It was an awesome conference as always and I took a lot of new things home with me. One thing I did differently this year compared to the previous two years was to spend a lot more time on the Solutions Exchange. I focused primarily on storage vendors, as I have taken a fancy to the new flash-accelerated and all-flash storage systems. So I think I visited every booth with even the slightest connection to storage.

I also had the chance to discuss some of the new technologies coming out of VMware, and to discuss the upgrade procedure for vSphere 5.5 when running with SSO behind a load balancer. That was really useful and insightful and provided me with most of the information I need to perform an upgrade of SSO and the Web Client in our environment to vSphere 5.5 to relieve all the AD problems we have had. I will post a blog article on this later, as there are still some hiccups in the documentation and procedure that I need to test out and get confirmation on from VMware support.

Our consolidation process has not been moving that much. Shortly after returning from Barcelona I took part in a live migration of VMs between our data center and a remote server room across a distance of about a kilometer. Without going into details about how everything was connected, suffice it to say that we had a single 10Gbit Ethernet connection between one of our data center routers and one of the server room’s routers. We also had a single FC connection between a storage array in the data center and the blade chassis in the server room. This allowed us to evacuate a single blade in the server room and move it to an identical blade chassis in the data center. After this we used another blade in the server room as the “transport host”: we vMotioned VMs onto it, as it could see both the data center and server room storage arrays, then Storage vMotioned the VMs to the data center array, and finally vMotioned them onto the host in the data center. Then, one by one, we evacuated all the blades in the server room and moved them to the data center. The process took about 2 days, including the move of a few physical hosts as well, and was all in all very successful.

We had a single error during the move, which caused an unexplained HA restart. The largest of the VMs (1TB of storage spread over 4 different VMDKs) was set to change format to thin provisioned during the Storage vMotion. At some point during the migration we got an unexpected error (this was the actual message from the vSphere client). 30 seconds later HA spontaneously rebooted the VM, even though Virtual Machine Monitoring was disabled and the host didn’t crash. Luckily the VM handled the reboot well, and it occurred close to midnight with no users online.

Right now my colleagues are planning the consolidation of two other VMware installations, which will most likely be done with cold migrations. The number of VMs is small, and the fiber connections and licenses of these installations will not allow us to do a live migration. They are also planning a move similar to the one I worked on, which we hope to complete some time in December. I am working on a cold migration of a VMware installation as well, where most of the VMs will be reinstalled on a new cluster rather than migrated.

That was a status on what we are working on. And back to the “I will get back to this later”: during the next month I will be working on a test installation of vCloud Automation Center to experiment with it and research whether this is something we can use in our organization. The initial tests will be confined to the infrastructure department, but if it works out it might be scaled up.

The Missing Pop Up

During the work of consolidating several VMware installations into a single platform I have to manage permissions. Most of the installations were maintained by a single person or a small group of people, so permissions were not that complex; they were also mostly all the same: assign the Administrator role to the given people at the vCenter level.

When moving to the new platform this is not really possible. I, as part of the virtualization group, of course have administrative privileges on the two vCenter servers and everything below them. But the old administrators still need their permissions, as they continue to maintain legacy systems. We decided to grant this access at the cluster level. This also required us to create a folder in the VMs and Templates view, the Datastores and Datastore Clusters view and the Networking view for each old installation. On the clusters and these folders we granted Administrator rights.

Now comes the funny part. We had not noticed the problem of a missing popup until a persistent administrator was testing out his new rights. The cluster he was on was set to DRS Manual mode, which of course requires you to choose which host to start a VM on when powering it on. But he wasn’t getting the popup in the web client. At first I thought it was a web client problem, but it turned out to be something else.

He would choose the action as below, Power On:

01

In my C# client I could see the following happening:

02

But the VM wasn’t powering on:

03

I tried powering it on manually, saw the popup in my client, and wondered what was wrong since he didn’t get it. That is when I realized what might be the problem and tested it out. Powering on a VM and selecting which host to run it on when in DRS Manual mode is a Datacenter-level privilege, and as the user only had permissions on folders and clusters below the Datacenter, he was not seeing the popup. If I added his permissions on the Datacenter (without propagating), he would see the popup as below. This only happened when in DRS Manual mode:

04
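
For the record, the same non-propagating permission can also be granted from PowerCLI. All names below are hypothetical, and the role should of course match whatever the user already has on the clusters and folders:

# Grant a permission on the Datacenter object only (no propagation), so the
# user gets the Datacenter-level privilege needed for the DRS host-selection
# popup without gaining rights on everything below it.
$dc   = Get-Datacenter -Name 'DC01'
$role = Get-VIRole -Name 'Administrator'
New-VIPermission -Entity $dc -Principal 'DOMAIN\some.user' -Role $role -Propagate:$false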

Consolidation continued

A while back I wrote a post about the process of consolidation, describing the primary aspects of the process for us and the solutions we chose. We are now a little way down the road, and a lot of consolidation has already happened.

We have shut down (or emptied and left running) 5 vCenter servers and consolidated 8 clusters and 7 single hosts into our new vCenter setup. The process has been pain free for the most part. In all of these migrations (6 different maintenance windows) we have only encountered a single problem, which I will describe a little later. This has given our little virtualization team quite the track record for performing well under maintenance windows 🙂

So the problem we encountered was not in the actual process of disconnecting from one vCenter and connecting to another, but in the preparation phase. While migrating from VDS switches to VSS switches we needed to change port groups for a lot of VMs; PowerCLI to the rescue. This was automated using a translation table in the form of a hash table with the old VDS port group names as keys and the new VSS port group names as values. For each VM, and for each of the network adapters on the VM, look up the port group name and find the new value in the hash table. Simple!
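
This is roughly what it looked like. The port group names below are made up for the example, and it assumes the new VSS port groups already exist on every host involved:

# Translation table: old VDS port group name -> new VSS port group name
$portGroupMap = @{
    'dvPG-VLAN100' = 'VSS-VLAN100'
    'dvPG-VLAN200' = 'VSS-VLAN200'
}

# For every VM, move each network adapter that sits on a mapped port group.
foreach ($vm in Get-VM) {
    foreach ($nic in Get-NetworkAdapter -VM $vm) {
        $newName = $portGroupMap[$nic.NetworkName]
        if ($newName) {
            Set-NetworkAdapter -NetworkAdapter $nic -NetworkName $newName -Confirm:$false
        }
    }
}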

But. There is always a but. After changing some 60-70 port groups, the port group of a specific VM was changed, and it seemed to be working fine. About 15 minutes later I got the support call that a website was down on one of the VMs. I started looking and could not see anything wrong (I’m not that fond of IIS web servers and their way of logging!). The network of that VM had not been changed yet, so what was causing this 503 Service Unavailable error? And even more odd, it was only one out of the 17-20 websites on the server that was not working.

I googled some things, no luck; grabbed a colleague to help look, no answer. Then Google paid off with a mention of the words “application pool”. I quickly looked up the application pool of the problem web site and sure enough it was in a stopped state. Started it again and the site was running again. But why had it stopped? The logs showed nothing™. So I got to thinking, and the only explanation that made sense was that, 15 minutes before the call, I had changed the port group of the VM running the database for the website – a small MSSQL machine. The only logical thing I can see having happened is that changing the port group cut the connection to the database, and the application running in the pool handled it badly.

Looking back at it, this is a minor problem: one web site out of 20 running on one VM out of hundreds. All in all the consolidation process is running well. We still have a few more vCenters and single hosts to gather under the new setup, and after that comes the process of phasing out old hardware. We have recently received 6 new blade servers to replace some older hardware, some of the most powerful we have had yet: 2×8-core Intel CPUs with 256 GB RAM and, if all goes well, a new storage controller dedicated to these machines and virtualization in general. But moving to new hardware, shutting down old servers, giving new IPs to production machines – none of this is an easy process, and it requires coordination across the entire IT organization.

Migrating VMkernels

Today I’m going to show you a little trick I found in the vSphere C# client that I have not previously seen anyone mention – I didn’t even find it myself until last week. The reason I found it is simple: I needed to live migrate a VMkernel.

Scenario:

You have one or more Distributed Switches (VDS) with two or more uplinks. You need to move e.g. the VMkernel NIC handling Management traffic from the VDS to a normal Virtual Switch (VS) on the host. And you want to do this without losing connectivity from vCenter to the ESXi host.

Reason:

Why would you need this, you may ask. For us the reason was simple: we needed to move ESXi hosts with live VMs from one vCenter to another. If all traffic, including management and vMotion, is handled by VDSs, disconnecting from the vCenter would remove connectivity, as VDSs are a vCenter/cluster construct.

I searched a lot to find out how to do it and always ended up with the solution “Use the DCUI, restore a management switch and VMkernel from there, and reattach the uplinks”. But that would briefly disconnect the ESXi host from vCenter, and I would have to log in to each host’s remote console (DRAC, iLO etc.). Tedious work.

Method:

This little nifty trick:

vmk1

When marking a VMkernel and clicking Migrate, you get a dialog like the one below where you select the VS you want to migrate to:

vmk2

And in the next step set a name and a VLAN ID for the port group and you are done:

vmk3

Easy. Just remember – and this is the important step – to avoid downtime and losing connectivity you need to first move one of the uplinks to the VS you want to migrate to, so that there is an available physical connection. Once you have migrated, you can move the other uplink(s) to the VS and you are done! Easy, and all done from the vSphere client! If you are a real pro, most of it could probably be scripted as well, but since we only have a few hosts I felt more comfortable just doing it by hand, one host at a time.
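
For the curious, a scripted version would probably look something like the sketch below. All names are hypothetical, I have not run this in production, and you would want to test it on a non-critical host first:

# Move one uplink plus the Management VMkernel NIC from the VDS to a
# standard switch in a single operation, so a physical path stays available.
$vmhost = Get-VMHost 'esxi01.example.local'
$vss    = Get-VirtualSwitch -VMHost $vmhost -Name 'vSwitch0' -Standard
$pg     = New-VirtualPortGroup -VirtualSwitch $vss -Name 'Management' -VLanId 10
$vmnic  = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name 'vmnic1'
$vmk    = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name 'vmk0'

Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss `
    -VMHostPhysicalNic $vmnic -VMHostVirtualNic $vmk `
    -VirtualNicPortgroup $pg -Confirm:$false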