VMware vSAN on Cisco UCS Part 1 – Hardware

I have had parts of this post saved in draft for months without getting it finished, because it was turning into a monster of a post when I tried to cover it all. I finally found the drive to finish it when I realized it was probably better to split it into multiple posts instead of trying to fit hardware considerations, UCS Manager / standalone profile configurations and ESXi configurations into one single post.

So without further ado, let's dive into the hardware part of VMware vSAN on Cisco UCS.

Please do note that these are my personal opinions and may or may not align with what you need in your datacenter solution. Most designs are tailored to a specific use case and as such cannot be taken directly from here.

Base models

Cisco has a bunch of certified vSAN Ready Nodes based on the M4, M5 and M6 branches of servers. M3 isn't supported as the hardware is EOL and most of the controllers available for M3 models weren't powerful enough for running vSAN workloads. The most commonly used are Cisco's C240 M5SX 2U models, which allow for 24-26 drive bays in total. For smaller deployments the C220 M5SX is also an excellent option with up to 10 drives in 1U.

It is technically possible to run vSAN on other types of servers like the S3260 and B200 blades, but they limit your options in terms of storage-to-compute ratio (the S3260 can provide massive amounts of storage but little compute, while the B200 is the opposite due to only having 2 disk slots).

One thing to note is that if you plan on using NVMe storage options, you need to focus on M5 and M6. M5 allows for up to 4 NVMe devices in the U.2 form factor, while M6 can support up to 24 NVMe devices. M4 only supports PCIe NVMe devices.

Boot options

Cisco has traditionally been a network boot company, and as such the primary local boot option on M3 and M4 is SD cards if you don't want to waste disk slots on boot devices. On the B200 M4 with only 2 disk slots, SD card is currently the only option as the disk slots are needed for a caching and a capacity disk. On all M5 and M6 models (B200 included) there is a new dedicated slot for a UCS-M2-HWRAID controller, which can fit 2 M.2 drives (either 240 or 960 GB) and can do actual RAID that ESXi supports. Do not use the UCS-MSTOR-M2 controller, which fits the same slot and also takes 2 M.2 drives, but only supports the onboard LSI SW RAID from the Intel chipset – and that is only supported by Windows and Linux, not ESXi. It is not that expensive – just buy the HWRAID controller 🙂

Specifically on the C240 M4 if you choose a UCSC-PCI-1C-240M4 you can insert up to two drives internally in the server that are managed by the onboard controller. You won’t have RAID functionality but it beats SD card booting by miles!

NIC

My go-to here is using M5 servers with a UCSC-MLOM-C40Q-03 (VIC 1387) in combination with 6300 series Fabric Interconnects. That provides 2x40G per server, which pairs nicely if your upstream network is 40 or 100G. On M6 the equivalent is the UCSC-M-V100-04 (VIC 1477), which provides the same.

If you are using 6400 series Fabric Interconnects and a 25G infrastructure you might want to go with UCSC-MLOM-C25Q-04 (VIC 1457) on M5 and UCSC-M-V25-04 (VIC 1467) on M6 to give 4×10/25G connections instead. Depends on your infrastructure.

On M4 it is technically possible to use the UCSC-MLOM-C40Q-03 (VIC 1387), although the UCSC-MLOM-CSC-02 (VIC 1227) adapter is way more common but only provides 2x10G connections. If you run a pure 10G infrastructure and will continue to do so, I recommend adding an additional UCSC-PCIE-CSC-02 (VIC 1225) to provide another 2x10G. I see this combination primarily used with 6200 series Fabric Interconnects.

For blades the standard is UCSB-MLOM-40G-03 (VIC 1340) for M4 and UCSB-MLOM-40G-04 (VIC 1440) for M5 and M6. Both cards are 2x40G. These need to be paired with IOMs in the blade chassis, which can limit the speed of the vNICs presented. Usually you get 2x20G on IOM 2304 and 2208. Consult your Cisco vendor to confirm how to get optimal speeds for your setup.

Controllers

Now for probably the most crucial part of any vSAN deployment – the controller. Albeit less important if you go for an all-NVMe setup or even the new ESA option in vSAN 8, you need a SAS/SATA controller to handle your disks.

On the C240 M4 this is usually the UCSC-SAS12GHBA or the UCSC-MRAID12G with a UCSC-MRAID12G-1GB cache module. Both are on the HCL, but the SAS HBA is preferable over the RAID controller.

On the C220 and C240 M5 the only real options for vSAN are the UCSC-SAS-M5 and UCSC-SAS-M5HD respectively. The primary difference is how many drives the controller is capable of driving, which of course needs to be higher for the C240.

On the C240 M6 the option is the CSC-SAS-M6T (UCSC-SAS-240M6), which allows for up to 16 disks, but to be honest – if you are going for M6 nodes you should probably go for an M6N or M6SN for an all-NVMe configuration instead.

Disks

I won't touch too much on this, as various use cases and requirements need different numbers of disk groups and capacity devices. Your use case may vary. We primarily use 3.8 TB Enterprise Value SATA SSDs for capacity, simply because they are fast enough and readily available to us. We aim to use NVMe caching devices if at all possible, but if not we select a high-endurance, high-performance SAS SSD for caching.

One note to keep in mind: M4 only supports PCIe NVMe devices. On the C220 M5SX two front slots can be used for NVMe, and on the C220 M5SN all 10 slots can be NVMe. On the C240 M5SX slots 1 and 2 as well as 25 and 26 (on the rear) can be used for NVMe drives, and on the C240 M5SN bays 1-8 can be used for NVMe.

If you are retrofitting NVMe drives into existing C2x0 M5s, note that on the C220 M5 you need a CBL-NVME-220F cable (if not already present) to be able to use the front-facing NVMe slots.

On the C240 M5 I recommend going for a UCSC-RIS-2C-240M5 riser, which supports both 2 front- and 2 rear-mounted NVMe drives, provided you remember to order a CBL-NVME-240SFF and a UCSC-RNVME-240M5 to connect the front and rear slots respectively to the riser. This configuration gives you up to 4 NVMe caching devices while using SAS/SATA capacity drives, up to 5 drives per disk group, which can be a lot of disk and performance.

Conclusion

So those are the notes on hardware I have. I have not touched on CPU types and memory configurations at all, as this is something that needs to match your workload. Some workloads might need a 3.0 GHz base clock and hardly any memory, others loads of cores and memory. Pick something that matches the workload, but I would recommend sticking to Xeon Gold CPUs to get a good balance of performance and cores, and selecting a configuration of 12 DIMMs for M5s to get maximum memory bandwidth.

In the next article I’ll touch on the UCS Manager configurations that I use for vSAN.

vCloud Usage Meter 4.3 .local resolution issues

As part of our ongoing engagement with VMware we are required to operate vCloud Usage Meter to measure rental license usage for reporting back to VMware. We have been running an older build for a long time now waiting for the 4.3 release to come out because this new release could correctly measure vRealize Automation usage based on the Flex bundle Addon model rather than per OSI.

I got the appliance deployed just before the holidays but ran into several issues that I’d like to share with you.

The first issue I ran into actually prompted me to redeploy, because the migration of configuration from the old appliance ended in a bad state. It was caused by two things: 1) I was missing a Conditional Forwarder for a domain on the DNS servers the new appliance was using, and 2) systemd-resolved is a nightmare to work with!

I'd like to focus in on systemd-resolved. I really don't like this piece of software, as it is insanely frustrating to troubleshoot. What it basically does is point /etc/resolv.conf at a local address on the server (127.0.0.53), where a daemon is listening for requests. If it can answer a request it does, otherwise it passes the request onwards as normal.

But – and this is the crucial part – it handles ".local" domains a bit differently. Exactly what it does I cannot answer completely, but .local is used by services like Bonjour and mDNS. This is crucial, because if you do not explicitly state that a .local domain needs to be resolved via actual DNS, systemd-resolved won't do it.

To jump a bit – the new Usage Meter 4.3 appliance runs on Photon OS, which uses systemd. The older appliances use SLES, which doesn't and thus doesn't have these issues. I had to do a lot of tinkering to get this working, but managed by following this article: https://github.com/vmware/photon/issues/987 and making sure that both my required .local domains were present in the search path parameter and that the DNS servers were explicitly inserted into the 10-eth0.network config file.
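
To illustrate, here is a minimal sketch of what the relevant parts of /etc/systemd/network/10-eth0.network can look like after those two changes, assuming the appliance uses the standard systemd-networkd layout – the addresses and domain names below are placeholders, not my actual values:

[Match]
Name=eth0

[Network]
Address=192.168.10.20/24
Gateway=192.168.10.1
# Explicit DNS servers (placeholders)
DNS=192.168.10.53
DNS=192.168.10.54
# Search domains - listing the .local domains here is what makes
# systemd-resolved send them to the DNS servers above instead of treating them as mDNS
Domains=corp.local mgmt.local

After editing the file, restarting systemd-networkd and systemd-resolved (or simply rebooting the appliance) makes the changes take effect.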

I had to do both things, otherwise it did not work. The search path can be configured correctly at deploy time if you remember it. The DNS settings must be done after deployment but before running the migration script. Double check DNS resolution before attempting the migration – it'll save you headaches!

The appliance has now been deployed and the config migrated, which presented me with two errors – that old 5.5 vCenter that hadn't been fixed yet, and a currently unknown bug in registering a vRealize Automation 7.6 install – VMware support are investigating!

VMworld 2020 and General Announcements

Oh, it has been a while again since I last got around to writing. Being busy with maintenance work is not really something that makes for great blog articles.

But last week I got to attend VMworld 2020! This year, due to the situation worldwide, it was a virtual event, so for me it was two days in the home office watching a lot of great content on Kubernetes, NSX, vSAN and much more.

So many great things were announced. But the thing that struck me first was the acquisition of SaltStack. This is a major move to actually incorporate a configuration management system into the VMware portfolio and will certainly strengthen vRealize Automation in the future – and hopefully also other parts of the ecosystem!

Another huge announcement was Project Monterey. Although I'm still trying to wrap my head around the use cases and opportunities this presents, I do like the idea very much! Being able to offload vSAN and NFV workloads to a SmartNIC is a great idea and I hope to see it evolve in the future.

This week also saw the GA release of several new versions of the core products from VMware. These were announced previously, but I was not aware that they would be released so soon – that is just the cherry on top!

First up is the release of vSphere 7 U1! Biggest new feature has got to be the ability to run vSphere with Tanzu as well as new scalability maximums for VMs.

Along with vSphere 7 U1 there is of course also a vSAN 7 U1 release! Here HCI Mesh, allowing you to share the vsanDatastore natively between vSAN clusters, is one of my top features. Improvements to the file services of vSAN also landed, as well as the option to run only compression on vSAN instead of both compression and deduplication. Great features! For those running 2-node clusters or stretched clusters requiring a witness, a huge improvement has also landed allowing a witness appliance to be shared by up to 64 clusters! Very nice!

Another feature also seems to have crept in, as detailed by John Nicholson: the option to run the iSCSI service on stretched clusters. Again a very nice feature to have for those needing it.

The last bit of GA material that I wanted to comment on is the release of vRealize Automation 8.2. There are much needed improvements to the multi-tenancy of vRA, as well as improvements to Infrastructure-as-Code workflows and Kubernetes support.

It can be a daunting task to keep up with all the releases from VMware but their ability to push new releases and features never ceases to amaze me!

System logging not configured on host

A few weeks ago I noticed a warning on some of our hosts in our HyperFlex clusters and wondered what was going on. It was only hitting Compute Only nodes in the clusters.

The warning indicates that Syslog.global.logDir is not set, as per KB2006834. But when I looked via SSH on the host, it was logging data and the config option was set, so it was working – so why the warning?

Well, it turns out to be not that complicated to fix. The admin who set up the nodes had set the option to:

[] /vmfs/volumes/<UUID>/logs/hostname

That is giving it an absolute path on the host like you would do with the ScratchConfig.ConfiguredScratchLocation option. This works but triggers the warning as if it was not set.

The fix is simple. Simply change it to use the DatastoreName notation as this:

[DatastoreName] logs/hostname

This immediately removed the warning and everything continued as it had before.
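
If you have more than a couple of hosts to correct, the advanced option can also be set programmatically. Below is a rough Python/pyVmomi sketch of the idea – the vCenter address, credentials, host name and datastore name are placeholders, so treat it as a starting point rather than a finished script:

# Rough sketch: set Syslog.global.logDir on a host via pyVmomi (all names and credentials below are placeholders)
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only - skips certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Look up the ESXi host by its DNS name (placeholder)
    host = content.searchIndex.FindByDnsName(dnsName="esx01.example.local", vmSearch=False)
    # Use the [DatastoreName] notation instead of an absolute /vmfs/volumes path
    option = vim.option.OptionValue(key="Syslog.global.logDir",
                                    value="[DatastoreName] logs/esx01")
    host.configManager.advancedOption.UpdateOptions(changedValue=[option])
finally:
    Disconnect(si)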

Using tags for something interesting

Hello all

Thought I would try and share a bit of what I have been working on internally for a while now. As we are an organization that is still does not have all processes and data management in place I have been working on rewriting an Python based page I hacked together with a colleague to list information about initially virtual machines but it grew to include DNS, physical machines, NAT relations as well as relying on ARP data to resolve MAC to IP where needed. I finished version 1.0 of this new system written in Django 1.10 a few weeks back and am already working on a 1.1 release with a few feature requests.

Now, the system grew out of a need to identify people and relations around virtual machines in our infrastructure, and today it heavily relies on Tags in the web client. We were already using tags to include VMs in backup jobs in Veeam (something I can highly recommend doing!).

So what do we do and how? We use 5 tag categories with the names Owner, Department, SysAdm, AppAdm and Service. These 5 tag categories give a lot of information directly on the VM about who to contact and what service the VM belongs to. This information is then exported daily to the new system, at the moment in the form of a CSV file. That CSV file is imported along with a file from our NAT device, one from the datacenter routers with ARP data, a list of DNS records, a CSV file from our Racktables installation for physical machines (where the same 5 tag categories are also defined) and a list of systems with the SCOM agent installed.

The data is then imported into a data model defined in Django's object-relational mapping, which tries to correlate some of the information from the different files. The end result is a web page where, in theory, all systems and DNS records are listed and can be searched, filtered, sorted etc. – where one can find a system (physical or virtual) based on the IP, DNS record (A, CNAME or MX), name, type, OS, you name it. This has some of the characteristics of a CMDB or CMS, but instead of showing what things should be, it shows what they actually are at the time of the latest export. We have used this to help ourselves in the Infrastructure department as well as allowing some supporters to find information to help route incidents and service requests to the correct groups in our service management system.
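
To give an idea of what such a data model could look like, here is a simplified Django sketch – the model and field names are made up for the example and are not the actual implementation:

# Simplified sketch of a Django data model along the lines described above
# (model and field names are illustrative, not the actual implementation)
from django.db import models

class Tag(models.Model):
    CATEGORIES = (
        ("Owner", "Owner"),
        ("Department", "Department"),
        ("SysAdm", "SysAdm"),
        ("AppAdm", "AppAdm"),
        ("Service", "Service"),
    )
    category = models.CharField(max_length=20, choices=CATEGORIES)
    value = models.CharField(max_length=100)

    def __str__(self):
        return "%s: %s" % (self.category, self.value)

class System(models.Model):
    name = models.CharField(max_length=100)
    is_virtual = models.BooleanField(default=True)  # VM from vSphere or physical box from Racktables
    ip_address = models.GenericIPAddressField(null=True, blank=True)
    os = models.CharField(max_length=100, blank=True)
    tags = models.ManyToManyField(Tag, blank=True)  # the 5 tag categories end up here

    def __str__(self):
        return self.name

The importer then only has to parse the CSV files and update objects like these, and the web pages are plain Django views on top of the same models.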

Below I have included a blurred screenshot from the system to show one of the views defined – a cut-out from a simple system list. Note that the last 5 columns are the tag categories from vSphere/Racktables, with the Danish names we chose for them.

So that is an example of how we use tags in our day to day operations. We still in some cases miss the old Custom attributes (I know they are still in the API but not exposed in the web client) with the option of inputting variable information like expiry dates etc. Having to do something like that with tags would in my opinion be a mess (imagine a tag for every single date of expiry).

A bonus of doing this is that you can actually correlate this information with data from vROps via e.g. PowerCLI. As an example, one could send an email based on an alert in vROps to the addresses set via the SysAdm tag.

VMworld Europe 2016 – Day 1

Early morning day 2 of my VMworld 2016 trip seems like the time to do a short recap of yesterday.

Yesterday started with the General Session keynote where Pat Gelsinger and several others presented the view from VMware. Amongst his points I found the following things most interesting:

  • THE buzzword is Digital Transformation
  • Everyone is looking at Traditional vs Digital business
  • However, only about 20% of companies are actively looking at doing this. 80% are stuck in traditional IT and spend their time optimizing predictable processes.
  • Digital Business is the new Industrial Revolution

10 years ago, in 2006, AWS was launched. Back then there were about 29 million workloads running in IT. 2% of that was in the cloud, mostly due to Salesforce; 98% was in traditional IT. Skip 5 years ahead and we had 80 million workloads, with 7% in public cloud and 6% in private. The remaining 87% was still in traditional, perhaps virtualized, IT. This year we are talking 15% public and 12% private cloud and 73% traditional out of 160 million workloads. Pat's research team have set a specific time and date for when cloud will be 50% (both public and private). That date is June 29th 2021 at 15:57 CEST. We will have about 255 million workloads by then. In 2030, 50% of all workloads will be in public clouds. The hosting market is going to keep growing.

The number of devices we are connecting will also keep growing. By 2021 we will have 8.7 billion laptops, phones, tablets etc. connected. But looking at IoT, by Q1 2019 there will be more IoT devices connected than laptops, phones and the like, and by 2021 18 billion IoT devices will be online.

In 2011, at VMworld in Copenhagen (please come back soon 🙂 ), the SDDC was introduced by Raghu Raghuram. Today we have it and keep expanding on it. So today vSphere 6.5 and Virtual SAN 6.5 were announced for release, as well as VMware Cloud Foundation as a single SDDC package and VMware Cross-Cloud Services for managing your multiple clouds.

vSphere 6.5 brings a lot of interesting new additions and updates – look here at the announcement. Some of the most interesting features from my view:

  • Native VC HA feature with an Active, Passive, Witness setup
  • HTML 5 web client for most deployments.
  • Better Appliance management
  • Encryption of VM data
  • And the VCSA is moving from SLES to Photon.

Updates on vCenter and hosts can be found here and here.

I got to stop by a few vendors at the Solutions Exchange as well and talk about new products:

Cohesity:

I talked to Frank Brix at the Cohesity booth, who gave me a quick demo and a look at their backup product. A very interesting hyper-converged backup system that includes backup software for almost all needed use cases, and it scales linearly. Built-in deduplication and the possibility of presenting NFS/CIFS out of the deduplicated storage. Definitely worth a look if you are reviewing your backup infrastructure.

HDS:

Got a quick demo of vVols and how to use them on our VSP G200, including how to move from the old VMFS datastores to vVols. A very easy and smooth process. I also got an update on the UCP platform, which now allows for integration with an existing vCenter infrastructure. Very nice feature, guys!

Cisco:

I went by the Cisco booth and got a great talk with Darren Williams about the Hyperflex platform and how it can be used in practice. Again a very interesting hyper-converged product with great potential.

Open Nebula:

I stopped by OpenNebula to look at their vOneCloud product as an alternative to vRealize Automation, now that VMware has removed it from vCloud Suite Standard. It looks like a nice product – I saw OpenNebula during my education back in 2011, I think, while it was still version 1 or 2. They have a lot of great features but are not totally on par with vRealize Automation – at least not yet.

Veeam:

Got a quick walkthrough of the Veeam 9.5 features as well as some talk about Veeam Agent for Windows and Linux. Very nice to see them move to physical servers, but there is still some way to go before they can take over all backup jobs.

 

Now for Day 2’s General Session!

Disabling “One or more ports are experiencing network contention” alert

From day one of deploying vRealize Operations Manager 6.0 I had a bunch of errors in our environment on distributed virtual port group ports. They were listed with the error:

One or more ports are experiencing network contention

Digging into the exact ports that were showing dropped packets resulted in nothing. The VMs connected to these ports were not registering any packet drops. Odd.

It took a while before any info came out but it apparently was a bug in the 6.0 code. I started following this thread on the VMware community boards and found that I was not alone in seeing this error. In our environment the error was also only present when logging in as the admin user. vCenter admin users were not seeing it so this pointed towards a cosmetic bug.

A KB article was released about the bug, stating that the alert can be disabled, but it does not describe exactly how to disable it. The alert is disabled by default in the 6.0.1 release, but if you installed 6.0 and upgraded to 6.0.1 without resetting all settings (as I did not), the error is still there.

To remove the error, log in to the vROps interface and navigate to Administration, then Policy and lastly Policy Library, as marked in the image below:

Once in the Policy Library view, select the active policy that is triggering the alert. For me it was Default Policy. Once selected, click the pencil icon to edit the profile as shown below:

On the Policy Editor, click step 5 – Override Alert / Symptom Definitions. In Alert Definitions, click the drop-down next to Object Type, fold out vCenter Adapter and click vSphere Distributed Port Group. Two alerts will now show. Next to the “One or more ports are experiencing…” alert, click the drop-down by State and select Local with the red circle with a cross, as shown below.

I had a few issues with clicking Save after this. I don't know exactly what fixed it, but I had just logged in as admin when it worked. This disables the alert! Easy.


Since the last time

So it's been a while since my last post – a lot has been going on. I have been through my first "employee performance interview" (Medarbejderudviklingssamtale or MUS in Danish). It was good, and a lot of things were discussed in regard to the new organization. Some steps to increase my skills were also planned, and I will get back to that later.

Since last time I have attended VMworld Europe 2013 in Barcelona! It was an awesome conference as always and I brought a lot of new things home with me. One of the things I did differently this year compared to the previous two years was to spend a lot more time on the Solutions Exchange. I focused primarily on storage vendors as I have taken a fancy to new flash-accelerated and all-flash storage systems, so I think I visited every booth with even the slightest connection to storage.

I also had the chance to discuss some of the new technologies coming out of VMware, and to discuss the upgrade procedure for vSphere 5.5 when running with an SSO behind a load balancer. That was really useful and insightful and provided me with most of the information I need to perform an upgrade of the SSO and Web Client in our environment to vSphere 5.5 to relieve all the AD problems we have had. I will post a blog article on this later, as there are still some hiccups in the documentation and procedure that I need to test out and receive confirmation on from VMware support.

Our consolidation process has not been moving that much. Shortly after returning from Barcelona I took part in a live migration of VMs between our data center and a remote server room across a distance of about a kilometer. Without going into details about how everything was connected, suffice to say that we had a single 10Gbit Ethernet connection between one of our data center routers and one of the server room's routers. We also had a single FC connection between a storage array in the data center and the blade chassis in the server room. This allowed us to evacuate a single blade in the server room and move it to an identical blade chassis in the data center. After this we used another blade in the server room as the "transport host": we vMotioned VMs onto it as it could see both the data center and server room storage arrays, then Storage vMotioned the VMs to the data center array, and finally vMotioned them onto the host in the data center. Then, one by one, we evacuated all blades in the server room and moved them to the data center. The process took about 2 days, including the move of a few physical hosts as well, and was all in all very successful.

We had a single error during the move, which caused an unexplained HA restart. The largest of the VMs (1 TB of storage spread across 4 different VMDKs) was set to change format to thin provisioned during the Storage vMotion. At some point during the migration we got an "unexpected error" (that was the actual message from the vSphere Client). 30 seconds later HA spontaneously rebooted the VM, even though Virtual Machine Monitoring was disabled and the host didn't crash. Luckily the VM handled the reboot well, and it occurred close to midnight with no users online.

Right now my colleagues are planning the consolidation of two other VMware installations, which will most likely be done with cold migrations. The number of VMs is small, and the fiber connections and licenses of these installations will not allow us to do a live migration. They are also planning a move similar to the one I worked on, which we hope to complete some time in December. I am working on a cold migration of a VMware installation as well, where most of the VMs will be reinstalled on a new cluster rather than migrated.

That was a status on what we are working on. Also, back to the "I will get back to this later": during the next month I will be working on a test installation of vCloud Automation Center to experiment with it and research whether this is something we can use in our organization. The initial tests will be confined to the Infrastructure department, but if it works out it might be scaled up.