Disabling “One or more ports are experiencing network contention” alert

From day one of deploying vRealize Operations Manager 6.0 I had a bunch of errors in our environment on distributed virtual port group ports. They were listed with the error:

One or more ports are experiencing network contention

Digging into the exact ports that were showing dropped packets turned up nothing. The VMs connected to these ports were not registering any packet drops. Odd.

It took a while before any information came out, but it was apparently a bug in the 6.0 code. I started following a thread on the VMware community boards and found that I was not alone in seeing this error. In our environment the error was also only present when logged in as the admin user; vCenter admin users were not seeing it, which pointed towards a cosmetic bug.

A KB article was released confirming the bug and noting that the alert can be disabled, but it does not describe exactly how to disable it. The alert is disabled by default in the 6.0.1 release, but if you installed 6.0 and upgraded to 6.0.1 without resetting all settings (as I did not), the error is still there.

To remove the error, log in to the vROps interface and navigate to Administration, then Policy, and lastly Policy Library, as marked in the image below:

Policy

Once in the Policy Library view, select the active policy that is triggering the alert. For me it was Default Policy. Once selected, click the pencil icon to edit the policy, as shown below:

Edit

On the Policy Editor, click step 5 – Override Alert / Symptom Definitions. In Alert Definitions, click the drop-down next to Object Type, fold out vCenter Adapter, and click vSphere Distributed Port Group. Two alerts will now show. Next to the “One or more ports are experiencing…” alert, click the arrow by State and select Local with the red circle with a cross, as shown below.

Local Disabled

I had a few issues with clicking Save after this. I do not know exactly what fixed it, but I had just logged in as admin when it worked. This disables the alert! Easy.
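If you want to double-check which alert definition you are actually disabling, vROps also exposes a REST (Suite) API. The snippet below is a minimal sketch, assuming a vROps 6.x appliance at the hypothetical hostname vrops.lab.local and the Suite API’s token-based authentication; verify the exact endpoints against your version’s API documentation before relying on it.

```powershell
# Minimal sketch: find the port-contention alert definition via the vROps Suite API.
# Hostname and credentials are hypothetical; endpoints assume vROps 6.x.
# On PowerShell 7+ you may need -SkipCertificateCheck for a self-signed appliance cert.
$vrops = "https://vrops.lab.local"

# Acquire an API token.
$body  = @{ username = "admin"; password = "VMware1!" } | ConvertTo-Json
$token = (Invoke-RestMethod -Method Post -Uri "$vrops/suite-api/api/auth/token/acquire" `
    -Body $body -ContentType "application/json" -Headers @{ Accept = "application/json" }).token

# Pull all alert definitions and filter for the network contention alert.
$headers = @{ Authorization = "vRealizeOpsToken $token"; Accept = "application/json" }
$defs = Invoke-RestMethod -Uri "$vrops/suite-api/api/alertdefinitions" -Headers $headers
$defs.alertDefinitions | Where-Object { $_.name -like "*network contention*" } |
    Select-Object id, name
```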

Migrating VMkernels

Today I’m going to show you a little trick I found in the vSphere C# client that I have not previously seen anyone mention; I didn’t even find it myself until last week. The reason I found it is simple: I had a need to live migrate a VMkernel.

Scenario:

You have one or more Distributed Switches (VDS) with two or more uplinks. You need to move, for example, the VMkernel NIC handling Management Traffic from the VDS to a normal Virtual Switch (VS) on the host, and you want to do this without losing connectivity from vCenter to the ESXi host.
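Before migrating anything it helps to inventory the starting point. Below is a small PowerCLI sketch (the vCenter, host, and switch names are hypothetical) that lists the VMkernel adapters on a host and the physical uplinks the VDS currently holds:

```powershell
# Inventory sketch - names are hypothetical, adjust to your environment.
Connect-VIServer -Server vcenter.lab.local

$vmhost = Get-VMHost -Name "esx01.lab.local"

# VMkernel adapters on the host, and which port groups/services they carry.
Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
    Select-Object Name, PortGroupName, IP, ManagementTrafficEnabled, VMotionEnabled

# Physical uplinks the distributed switch currently holds on this host.
$vds = Get-VDSwitch -Name "dvSwitch01"
Get-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -Physical |
    Select-Object Name, BitRatePerSec
```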

Reason:

Why would you need this, you may ask. For us the reason was simple: we needed to move ESXi hosts with live VMs from one vCenter to another. If all traffic, including management and vMotion, is handled by VDSs, disconnecting from the vCenter would remove connectivity, as a VDS is a vCenter/cluster construct.

I searched a lot to find out how to do it and always ended up with the solution “use the DCUI, restore the management switch and VMkernel from there, and reattach the uplinks”. But that would briefly disconnect the ESXi host from vCenter, and I would have to log in to each host’s remote console software (DRAC, iLO, etc.). Tedious work.

Method:

This nifty little trick:

vmk1

When you mark a VMkernel and click Migrate, you get a dialog like the one below where you select the VS you want to migrate to:

vmk2

And in the next step you set a name and a VLAN ID for the port group, and you are done:

vmk3

Easy. Just remember – and this is the important step – to avoid downtime and losing connectivity you need to first move one of the uplinks to the VS you want to migrate to, so that there is an available physical connection. Once you have migrated, you can move the other uplink(s) to the VS and you are done! Easy, and all done from the vSphere client! If you are a real pro, most of it could probably be scripted as well – a rough sketch follows below – but because we only have a few hosts I felt more comfortable just doing it by hand, one at a time.
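For the scripting-inclined, here is a rough PowerCLI sketch of the same flow. It is a sketch under assumptions, not a drop-in script: all names (host, switch, NICs, VLAN, port group) are hypothetical, and it relies on Add-VirtualSwitchPhysicalNetworkAdapter being able to move a physical NIC and a VMkernel adapter to a standard switch in one operation, which mirrors the trick above. Test on a non-production host first.

```powershell
# Rough PowerCLI sketch of the VDS -> standard switch migration.
# All names (host, switch, NICs, port group, VLAN) are hypothetical.
$vmhost = Get-VMHost -Name "esx01.lab.local"

# 1. Create the target standard switch and the port group for management.
$vss = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1"
$pg  = New-VirtualPortGroup -VirtualSwitch $vss -Name "Management" -VLanId 10

# 2. Move one uplink off the VDS and migrate it plus the management VMkernel
#    to the standard switch in a single operation, so connectivity survives.
$pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
$vmk  = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk0"
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss `
    -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk `
    -VirtualNicPortgroup $pg -Confirm:$false

# 3. Once the VMkernel answers on the standard switch, move the remaining
#    uplink(s) over as well.
$pnic2 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss `
    -VMHostPhysicalNic $pnic2 -Confirm:$false
```

The key design point is the same as in the GUI: the uplink and the VMkernel move together, so there is never a moment where the management interface has no physical path.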