Top 5 big & little enhancements in vSphere 5.5

The August/September timeframe has become like Christmas for vSphere geeks: the eagerly awaited new release of vSphere arrives and they finally get to unwrap it and play with it. VMware released vSphere 5.5 on September 22nd this year, just one year after the last major release, vSphere 5.1. Overall, vSphere 5.5 is a bit light on new features and enhancements compared to previous releases, and it is also missing the long-awaited Virtual Volumes (vVols) storage architecture that VMware has been showing off for a while now.

Despite that, there is still plenty of new stuff in vSphere 5.5 that makes it a worthy upgrade. Each new release typically has a few superstar features that get a lot of attention, along with many smaller enhancements that are easily overlooked. In this post I thought I’d highlight a few of the big new features and also a few of the smaller ones that often go unnoticed.

This post is brought to you by SolarWinds, the makers of great products for virtual environments including Virtualization Manager, Storage Manager and Server & Application Monitor.

1 – Scalability

Scalability in vSphere is important because it dictates the size and number of workloads that can run on a host. By steadily increasing scalability, VMware has made it possible to virtualize almost any size workload and to drive VM density higher. On the VM side, the Monster VM has steadily grown large enough to tackle just about any workload; its one weakness has always been virtual disk size, which was limited to 2TB in past releases. That finally changed in vSphere 5.5, as the maximum virtual disk size has jumped to a whopping 62TB.

While the VM side got more disk, on the host side the increases were focused on compute resources. The maximum number of physical and virtual CPUs per host doubled to 320 pCPUs and 4,096 vCPUs, while the maximum physical memory doubled to 4TB. This greatly increases the VM density you can achieve, as it allows you to pack more VMs onto a host. While the CPU limits are so high that most people will never come close to reaching them, the memory increase is definitely welcome, as many applications running on VMs tend to be very memory hungry.

One other nice scalability jump was with the vCenter Server Appliance (vCSA), a pre-built virtual appliance complete with OS, database and the vCenter Server software installed. The big advantage of using the vCSA is that it is simple to install and set up, which made it very convenient, especially for users who lacked database experience. The problem in vSphere 5.1 was that it was very limited in scalability and would only support the smallest of environments, up to 5 hosts and 50 VMs. That has all changed in vSphere 5.5, as it now scales to 100 hosts and 3,000 VMs, a huge jump that will make it an attractive alternative for a much wider group. If you want to find out more about the scalability changes between vSphere releases, check out this post and this post.

2 – Virtual SAN

VMware’s new Virtual SAN (VSAN), not to be confused with their existing VSA offering, is the latest product in VMware’s ongoing push to become a storage vendor. The big difference between VSA and VSAN is that VSAN is not a virtual appliance; it is baked into the hypervisor, and it also scales much higher than VSA, which was limited to 3 nodes. VSAN requires both SSDs and traditional spinning disks, as it uses the SSD tier as both a read cache and a write buffer to complement the spinning disk tier. While VSAN was released as part of vSphere 5.5, it’s not quite ready yet and is only available in beta form.

You can sign up for the public beta here; note that there is nothing to download, as it’s native to vSphere 5.5 and you just need a license key to activate it. Before you jump in and start using it, be aware that it currently has limited hardware support and some known issues. But it’s a beta, so you should expect that and shouldn’t be using it in production anyway. That shouldn’t stop you from giving it a try, though, as long as you meet the requirements, so you can get a look at what’s coming. Also note that the correct acronym is VSAN, not vSAN; you have to love VMware’s ever-changing letter case usage. If you want to know more about VSAN, I have a huge collection of links on it.

3 – UNMAP

UNMAP is a SCSI command (not a vSphere feature) used with thin-provisioned storage arrays to reclaim space from disk blocks after the data residing on those blocks has been deleted. UNMAP is the mechanism used by the Space Reclamation feature in vSphere to reclaim space from VMs that have been deleted or moved to another datastore. This process allows thin provisioning to clean up after itself and greatly increases the value and effectiveness of thin provisioning.

Support for UNMAP was first introduced in vSphere 5 and was initially intended to be an automatic (synchronous) reclamation process. However, issues with storage operations timing out on some arrays while vSphere waited for the process to complete caused VMware to change it to a manual (asynchronous) process that does not work in real time. A parameter was added to the vmkfstools CLI utility that would create a balloon file, issue UNMAP for the underlying disk blocks and then delete the file to reclaim space. The problem with this was that you had to keep running it manually, it was resource intensive, and it was not very efficient, as it tried to reclaim blocks that might not have had data written to them yet.
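As a rough sketch of that older method (the datastore name below is just an example), the reclaim was run from the ESXi shell inside the datastore itself; the numeric argument is roughly the percentage of free space the temporary balloon file was allowed to consume:

   # vSphere 5.0/5.1 method: run from inside the datastore to be reclaimed
   cd /vmfs/volumes/Datastore1
   vmkfstools -y 60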

In vSphere 5.5 it’s still not an automatic process, unfortunately, but VMware has improved the manual one. To initiate an UNMAP operation you now use the “esxcli” command with the “storage vmfs unmap” namespace, and you can pass additional parameters to specify a VMFS volume label or UUID and the number of blocks to reclaim per pass (the default is 200). In addition, UNMAP is now much more efficient: the run duration is greatly reduced and the reclaim efficiency is increased. As a result, where VMware previously recommended running it only during off-hours so it wouldn’t impact VM workloads, you can now run it anytime and it will have minimal impact.
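For example, a reclaim run against a datastore (the label below is just a placeholder) looks like this; -l takes the volume label, -u can be used with the UUID instead, and -n sets the number of blocks reclaimed per iteration:

   # vSphere 5.5 method: reclaim dead space on a VMFS volume by label
   esxcli storage vmfs unmap -l Datastore1 -n 200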

To see whether your storage device supports UNMAP, you can run the “esxcli storage core device vaai status get -d” command; the Delete Status will say supported if it does. You can also check the vSphere HCL to see if it’s supported and what firmware version may be required. To find out more about the changes, check out this post by Jason Boche, and if you went to VMworld be sure to check out session STO4907 – Capacity Jail Break: vSphere 5 reclamation nuts and bolts.
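As a quick sketch (the NAA identifier below is a made-up placeholder, substitute one of your own device IDs), the check looks like this:

   # Check VAAI primitive support for a specific device
   esxcli storage core device vaai status get -d naa.60a98000572d54724a34655733506751
   # Look for "Delete Status: supported" in the output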

4 – CPU C-states

vSphere can help reduce server power consumption during periods of low resource usage by throttling the physical CPUs. It can accomplish this in one of two ways: by throttling the frequency and voltage of a CPU core, or by completely shutting down a CPU core. These are referred to as P-states and C-states, which are defined as follows:

  • A P-state is an operational state that can alter the frequency and voltage of a CPU core from a low state (P-min) to the maximum state (P-max). This can help save power for workloads that do not require a CPU core’s full frequency.
  • A C-state is an idle state that shuts down a whole CPU core so it cannot be used. This is done during periods of low activity and saves more power than simply lowering the CPU core frequency.

Why would you want to use this feature? Because it can save you money, especially in larger environments. It’s the equivalent of staffing a restaurant: do you want your full staff standing around getting paid to do nothing during off-peak periods? Of course not, and likewise you don’t want all your CPU cores powered on when you don’t need them; it wastes money.

Support for CPU P-states and C-states was first introduced in vSphere 4, but the balanced (between power and performance) power policy only used P-states. You could use C-states as well, but you had to create a custom policy for them. In vSphere 5.5 the balanced power policy uses both P-states and C-states to achieve the best possible power savings. So now, while your VMs are all tucked in bed and resting at night, you can keep a green data center and save some cash. You can read more about power management in vSphere in this white paper.
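If you prefer the command line to the vSphere Client, one way to check or change the host power policy is through the Power.CpuPolicy advanced setting. This is a hedged sketch from the ESXi shell (verify the option path and value strings on your own host before relying on them):

   # View the current CPU power management policy
   esxcli system settings advanced list -o /Power/CpuPolicy

   # Set the policy to Balanced (other values include "High Performance" and "Low Power")
   esxcli system settings advanced set -o /Power/CpuPolicy -s "Balanced"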

5 – vSphere Flash Read Cache

vSphere has had a caching feature called Content Based Read Cache (CBRC), introduced in vSphere 5.0, which allowed you to allocate server RAM to be used as a disk read cache. Unfortunately, that feature was only intended to be used by VMware View to help smooth out some of the unique I/O patterns in VDI environments, such as boot storms. With vSphere 5.5, VMware has a new host-based caching mechanism called vSphere Flash Read Cache (formerly known as vFlash) that leverages local SSDs as the cache medium.

As the name implies, vFRC is a read-only cache that is placed directly in the data path of a VM’s virtual disk. It can be enabled on a per-VM, per-virtual-disk basis and is transparent to the guest OS and applications running in the VM. While the cache itself is configured per host, you can optionally have vFRC migrate the cache contents to another host so the cache follows a VM. Its primary benefit is for workloads that are very read intensive (i.e. VDI), but by offloading reads to the cache it can indirectly benefit write performance as well.

Another component of vFRC is the Virtual Host Flash Swap Cache, which is simply the old Swap to SSD feature introduced in vSphere 5.0 that allowed you to automatically use SSDs to host VM swap files to support memory over-commitment. To find out more about vFlash you can check out the many links I have here and also check out this VMware white paper. Duncan also has a really good FAQ on it here.
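If you want to poke at this from the ESXi shell, there is an esxcli vflash namespace in 5.5. As a hedged sketch (verify the exact sub-commands on your own build), something like the following lists local SSDs eligible for the virtual flash resource and any per-virtual-disk caches that have been created:

   # List local SSDs eligible for (or already backing) the virtual flash resource
   esxcli storage vflash device list

   # List the vFRC caches currently configured for virtual disks on this host
   esxcli storage vflash cache list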

As VMware continues to add more features, the management challenge of keeping up with the changes to the environment gets more difficult. SolarWinds’ Virtualization Manager provides a powerful and affordable solution that takes the complexity out of managing VMware. If you’d like to learn more or download a free trial, click on the banner below.

[SolarWinds Virtualization Manager banner]
