As mentioned in part 1, this is part of a larger post that I never got around to finishing because it grew to an unmanageable size. This is part 2, where I will go through the UCS profile configurations I use to run vSAN. I am writing this primarily because, despite Cisco having vSAN Ready Nodes, they lack a validated design for how to set them up – if you know of one, please let me know!
Preamble
Cisco UCS managed servers have the great advantage of being easy to configure consistently while still making it easy to update configurations across multiple servers. I have been working with UCS Manager for close to 8 years now, have had a lot of problems occur, and have learned a lot about how the product works along the way. These “recommendations” are based on my personal preferences and borrow from different best practices and configurations that I have encountered over the years.
As I primarily work with M5 configurations today I will focus on those, and inject points about some M4 issues I have encountered and their fixes/workarounds.
One thing that I do, although it is not necessary, is to separate my clusters into separate Sub-Organizations inside of UCS Manager. This gives a nice clean layout where I can create generic policies in the Root organization and specific policies under each sub-organization.
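If you script your UCS setup, creating the sub-organization is a quick job with Cisco's ucsmsdk Python SDK. A minimal sketch, assuming a reachable UCS Manager and a made-up sub-organization name of vsan-cluster-01:

```python
# Minimal sketch using the Cisco ucsmsdk Python SDK.
# Hostname, credentials and the org name "vsan-cluster-01" are placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.org.OrgOrg import OrgOrg

handle = UcsHandle("ucsm.example.local", "admin", "password")
handle.login()

# Create a sub-organization under root to hold the cluster-specific policies.
sub_org = OrgOrg(parent_mo_or_dn="org-root", name="vsan-cluster-01",
                 descr="Policies for vSAN cluster 01")
handle.add_mo(sub_org, modify_present=True)
handle.commit()
handle.logout()
```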
Boot Drive Configuration
On M5 (and M6) I use a Storage Profile to define the OS LUN for ESXi. As all other disks are in JBOD mode, nothing needs to be done other than confirming JBOD mode (which is the default if you select a SAS HBA for the server). The Storage Profile consists of a Disk Group Policy and a Local LUN definition.
In the Disk Group Policy I set the RAID Level to RAID 1 Mirrored and then flip Disk Group Configuration to manual. I then define disk numbers 253 and 254 as the constituents of the Disk Group, as these are always the two disks on the M.2 HW RAID controller. Everything else I leave at default.
With this Disk Group Policy in hand I create a Storage Profile and, under the Local LUNs section, create a LUN. I normally call the LUN OS and set a Size of 32 GB. Auto Deploy is set, Expand To Available is checked, and finally the Disk Group Policy is selected.
I could make the 32 GB larger, but since Expand To Available is enabled the LUN will automatically grow to fill the 240 GB RAID 1 volume (or 960 GB if choosing the large boot drives).
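Summarised as plain data (handy as a checklist or as input to an automation script), the boot drive configuration boils down to the following. The LUN name OS is the one from above; the policy and profile names are just examples:

```python
# The M.2 boot drive configuration summarised as plain data.
# Only the "OS" LUN name comes from the text above; other names are examples.
disk_group_policy = {
    "name": "M2-RAID1",              # example name
    "raid_level": "RAID 1 Mirrored",
    "disk_group_configuration": "manual",
    "disks": [253, 254],             # always the two disks on the M.2 HW RAID controller
}

storage_profile = {
    "name": "ESXi-Boot",             # example name
    "local_luns": [
        {
            "name": "OS",
            "size_gb": 32,           # grows to fill the RAID 1 volume anyway
            "auto_deploy": True,
            "expand_to_available": True,
            "disk_group_policy": disk_group_policy["name"],
        }
    ],
}
```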
For M4 I use a different method which will be mentioned in the Boot Policy section.
Network Policies
Next up to configure is networking. Here I have borrowed a bit from what Cisco HyperFlex does. HyperFlex is Cisco's answer to vSAN and it works, to some extent, in a similar manner.
The first thing to do is to set up the QoS system classes with the correct MTU settings so that I can utilize CoS Preserve on the upstream switches if need be. The table below shows the settings I use in my environments.
| Priority | CoS | Packet Drop | Weight | MTU |
|---|---|---|---|---|
| Platinum | 5 | No | 4 | 9216 |
| Gold | 4 | Yes | 4 | 1600 |
| Silver | 2 | Yes | best-effort | 1600 |
| Bronze | 1 | Yes | best-effort | 9216 |
| Best Effort | Any | Yes | best-effort | 9216 |
I use Platinum for vSAN storage traffic, Gold for VM guest traffic, Silver for the ESXi management traffic and Bronze for vMotion interfaces. Note that both Bronze and Platinum allow MTU 9000 jumbo frames to be used inside ESXi for optimum performance. Make sure the switches upstream of your Fabric Interconnects support MTU 9216.
I take these classes and create matching QoS Policies from them: simply use the same name, select the corresponding priority, and leave all other settings at default. I need these policies when configuring the vNICs.
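The class-to-traffic mapping expressed as data, which I like to keep next to the documentation for a cluster; it restates the table and traffic assignment above in a form that can drive scripts or docs:

```python
# QoS system classes, the traffic they carry, and the matching QoS Policies.
# Values restate the table above; "traffic" records which vNICs use each class.
qos_classes = {
    "Platinum":    {"cos": 5,     "packet_drop": False, "weight": 4,             "mtu": 9216, "traffic": "vSAN storage"},
    "Gold":        {"cos": 4,     "packet_drop": True,  "weight": 4,             "mtu": 1600, "traffic": "VM guest"},
    "Silver":      {"cos": 2,     "packet_drop": True,  "weight": "best-effort", "mtu": 1600, "traffic": "ESXi management"},
    "Bronze":      {"cos": 1,     "packet_drop": True,  "weight": "best-effort", "mtu": 9216, "traffic": "vMotion"},
    "Best Effort": {"cos": "any", "packet_drop": True,  "weight": "best-effort", "mtu": 9216, "traffic": "everything else"},
}

# One QoS Policy per class, named after the class, default settings otherwise.
for name, cls in qos_classes.items():
    print(f"QoS Policy '{name}': priority={name}, carries {cls['traffic']} traffic")
```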
I usually also create a Network Control Policy that enables CDP and LLDP (both receive and transmit), allows forged MACs, and sets the Action on Uplink Fail to Link Down. More on that later.
Before we start defining vNICs and LAN Connectivity Policies we need MAC addresses for the vNICs. UCS Manager allows you to define your own MAC addresses inside the 00:25:B5 prefix, defining as much of the remainder as you want. You could easily create a single pool and have UCS Manager assign MAC addresses from it, but we borrow an idea from how HyperFlex designs its MAC pools.
What HyperFlex does is use the 4th octet of the MAC as a prefix for a cluster, e.g. A1, so that each MAC starts with 00:25:B5:A1. That means you can identify a cluster in your network based on the 4th octet alone. Neat!
Next, HyperFlex uses the 5th octet to encode the vNIC number and the attached fabric, so vNIC1 gets A1 and vNIC2 gets B2. When setting it up you can therefore match the 5th octet to a function. I use A1 and B2 for ESXi management, A3 and B4 for vSAN, A5 and B6 for guest traffic, A7 and B8 for vMotion, and any additional required vNICs continue from there.
I create MAC pools to match a minimum of 8 vNICs (2 management, 2 vSAN, 2 guest and 2 vMotion), then add 2 for NFS and 2 for virtual networking if needed.
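To make the octet scheme concrete, here is a small sketch that generates the address range for each pool. The cluster prefix A1, the pool names and the pool size of 256 addresses are just examples; the resulting blocks can then be created by hand in the GUI or pushed with ucsmsdk's MacpoolPool/MacpoolBlock objects.

```python
# Generate MAC pool ranges following the HyperFlex-style scheme:
# 00:25:B5:<cluster>:<function+fabric>:<host>.
# The cluster prefix "A1", the pool names and pool_size are examples.
CLUSTER_PREFIX = "A1"

# Function -> 5th octet per fabric (A side, B side).
FUNCTIONS = {
    "esxi-mgmt": ("A1", "B2"),
    "vsan":      ("A3", "B4"),
    "guest":     ("A5", "B6"),
    "vmotion":   ("A7", "B8"),
}

def mac_pool_ranges(cluster_prefix, functions, pool_size=256):
    """Yield (pool_name, first_mac, last_mac) for each function and fabric."""
    for function, octets in functions.items():
        for fabric, octet in zip(("a", "b"), octets):
            base = f"00:25:B5:{cluster_prefix}:{octet}"
            yield (f"{function}-{fabric}", f"{base}:00", f"{base}:{pool_size - 1:02X}")

for name, first, last in mac_pool_ranges(CLUSTER_PREFIX, FUNCTIONS):
    print(f"{name:14} {first} - {last}")
```

Running it prints one line per pool, e.g. the vSAN pool on fabric A becomes 00:25:B5:A1:A3:00 through 00:25:B5:A1:A3:FF, so a packet capture instantly tells you the cluster, the function and the fabric.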
With the MAC pools in hand and the policies from above I create a set of vNICs for ESXi. I prefer to have two for each function, one on fabric A and one on fabric B, without fabric failover – ESXi can easily handle the failover itself. Set up like this, ESXi can use both links from the server if need be, and in case of a failure on one of the links I would rather see the vNIC go down and have ESXi handle the failover than have it be transparent to ESXi.
Each vNIC name is suffixed with the expected fabric, e.g. esxi-mgmt-a and esxi-mgmt-b. I set the “-a” template as the primary in a Peer Redundancy setup and the “-b” as the secondary. This allows me to update VLANs and configuration only on the “-a” vNIC and have the configuration stay in sync on the “-b” vNIC. The Template Type is set to Updating so that things like additional VLANs can be added to all servers using this vNIC without having to go through every profile. The MTU needs to match the QoS policy selected and defined above. Select the matching MAC pool, set the Network Control Policy, and done. Then repeat for each required vNIC.
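The resulting template pairs, laid out as data. Only the esxi-mgmt names come from above; the other names are examples following the same convention, and the MTUs shown are the values I would set on the vNIC (9000 for the jumbo classes, 1500 for the rest):

```python
# vNIC template pairs: one per fabric, no fabric failover, Updating templates.
# Names other than esxi-mgmt-a/-b are examples following the same convention.
vnic_templates = [
    # (name,        fabric, qos_policy, mtu,  mac_pool)
    ("esxi-mgmt-a", "A",    "Silver",   1500, "esxi-mgmt-a"),
    ("esxi-mgmt-b", "B",    "Silver",   1500, "esxi-mgmt-b"),
    ("vsan-a",      "A",    "Platinum", 9000, "vsan-a"),
    ("vsan-b",      "B",    "Platinum", 9000, "vsan-b"),
    ("guest-a",     "A",    "Gold",     1500, "guest-a"),
    ("guest-b",     "B",    "Gold",     1500, "guest-b"),
    ("vmotion-a",   "A",    "Bronze",   9000, "vmotion-a"),
    ("vmotion-b",   "B",    "Bronze",   9000, "vmotion-b"),
]
```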
I use the created vNICs to build a LAN Connectivity Policy which contains all the vNICs and sets the adapter policy to VMWare (yes, Cisco capitalizes it wrong 🙁 ). And that is it for networking for now. We will use the LAN Connectivity Policy when defining the Server Profile Template.
Server Profile Policies
I need a couple of server policies before I can create the Server Profile Template. The first one is a Scrub Policy. This policy I generally make in the Root scope, as I globally want scrubbing to be disabled for all types: Disk, BIOS, FlexFlash and Persistent Memory. I generally don’t want UCS to wipe settings unless specifically instructed to do so.
Next up is a Boot Policy. For M5 I define a policy that uses Boot Mode UEFI with Secure Boot enabled. Then I add a single boot option of type Local LUN using the LUN name OS, which we defined in the Storage Profile previously.
If attempting to boot from an internal drive in an M4 as described in Part 1, some special options need to be set. Instead of using Local LUN, select Embedded Disk and then modify the Uefi Boot Parameters to set Boot Loader Name to “BOOTX64.EFI” and Boot Loader Path to “\EFI\BOOT\”. This is the only way I have found to do UEFI Secure Boot on those drives.
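Both boot policy variants summarised as data; the policy names are examples, while the LUN name and the M4 boot loader values are the ones from above:

```python
# Boot policy settings for M5 vs. M4, summarised as plain data.
# Policy names are examples; LUN name and boot loader values are from the text.
boot_policy_m5 = {
    "name": "ESXi-M5-Boot",          # example name
    "boot_mode": "UEFI",
    "secure_boot": True,
    "boot_order": [{"type": "Local LUN", "lun_name": "OS"}],
}

boot_policy_m4 = {
    "name": "ESXi-M4-Boot",          # example name
    "boot_mode": "UEFI",
    "secure_boot": True,
    "boot_order": [{
        "type": "Embedded Disk",
        "uefi_boot_parameters": {
            "boot_loader_name": "BOOTX64.EFI",
            "boot_loader_path": "\\EFI\\BOOT\\",   # literally \EFI\BOOT\
        },
    }],
}
```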
I also set up a Maintenance Policy for the Server Profile, setting every action that might require a reboot to “User Ack”, which means that I need to manually approve any reboots of the host caused by profile changes. I also set the “On Next Boot” option to allow for easy firmware updating while updating ESXi: On Next Boot applies any pending changes when the host reboots, such as when applying ESXi updates. Convenient!
Lastly I create a Host Firmware Package policy which sets the firmware version to use in that cluster. As firmware packages can contain firmware for the SAS HBAs, I want tight control over which firmware is used. This also allows me to change the firmware level of the cluster in one step and then have pending changes ready on each host for when I’m ready to do the reboot that updates the firmware.
Server Profile Template
With all those policies and profiles ready I can now create the template that each server will be instantiated from. This will be an updating template so that changes are applied consistently to all hosts, avoiding configuration drift.
I usually just run through the wizard and select the created policies where applicable. As we don’t have any FC in our setup I usually don’t set up any vHBAs. These can be added later thanks to the Updating setting.
The only thing I do manually is select the LAN Connectivity Policy to get the required vNICs for ESXi attached. Once added, I complete the wizard and go back into the Network tab of the template to click “Modify vNIC/vHBA Placement”. I do this because the placement view is easier to manage when accessed from there instead of in the wizard. I then manually place the vNICs in the order I want to enforce.
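Once profiles have been instantiated from the template, a quick way to sanity-check the result is to list all service profiles, the template they came from and their association state. A small ucsmsdk sketch, again with placeholder credentials:

```python
# List service profiles, their source template and association state.
# Hostname and credentials are placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.local", "admin", "password")
handle.login()

for sp in handle.query_classid("LsServer"):
    # Skip the templates themselves; instantiated profiles have type "instance".
    if sp.type != "instance":
        continue
    print(f"{sp.name:30} template={sp.src_templ_name:25} assoc={sp.assoc_state}")

handle.logout()
```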
Conclusion
With all of that there is now a profile template that can be used to produce identical ESXi hosts for vSAN use. The profile even works on “compute only” nodes that don’t provide any storage to the cluster, as long as they still use the M.2 HW RAID boot module. Very nice in my opinion.
Next up, in part 3, I will go over some of the ESXi configurations I prefer in the vSAN pods I run.