
Why VMware VVols are Simpler, Smarter and Faster than traditional storage

Recently at VMworld in Vegas, Pete Flecha from VMware did a presentation on VVols at the HPE booth highlighting the benefits of VVols being simpler, smarter and faster than traditional storage with vSphere. The link to the presentation is below, but I thought I would also summarize why VVols are simpler, smarter and faster.

VVols are Simpler

The reason VVols are simpler is storage policy based management, which automates the provisioning and reclamation of storage for VMs. VVols completely eliminate LUN management on the storage array because vSphere can natively write VMs to the array; VVols are provisioned as needed when VMs are created and automatically reclaimed when VMs are deleted or moved.

VVols are Smarter

The reason VVols are smarter is, again, storage policy based management, along with the dynamic, real-time nature of storage operations between vSphere and the storage array. With VVols a storage array stays as efficient and thin as possible: space is always reclaimed immediately for VMs and snapshots, and even data deleted within the guest OS can be reclaimed on the array. In addition, the array manages all snapshots and clones, and array capabilities can be assigned at the VM level with SPBM.

VVols are Faster

The reason VVols are faster is not what you might think: data is still written from vSphere to the storage array using the same storage queues and paths, so there is no performance difference between VVols and VMFS for normal read/write operations. Where VVols are much faster than VMFS is snapshots. Because all vSphere snapshots are array snapshots, they are much more efficient and take the burden off the host. In addition, because there is no need to merge data that has changed while a snapshot is active, deleting a snapshot is always an instant process, which can have a very positive impact on backups.


Go watch the whole video to learn more about why VVols are so great; it’s only about 15 minutes long.



Everything you need to know about UNMAP in vSphere

As I watch the VMworld session recordings, which are all publicly available, I’ve been doing a write-up and summary of those sessions. My write-up is only a small summary of sessions that are usually packed full of great information, so I encourage you to go watch the full session recording. Today’s topic is UNMAP, a feature I have been very involved with since its initial release and one that has greatly evolved and changed across vSphere releases. John and Jason from VMware’s tech marketing do a great job talking about the history of UNMAP, showing examples of UNMAP in action and making recommendations for getting the most out of UNMAP. Below is my summary of the session and some highlights from it:

Session:  Better Storage Utilization with Space Reclamation/UNMAP (HCI3331BU) (View session recording)

Speakers:  Jason Massae, Technical Marketing Architect, Core Storage, vSAN, VMware – John Nicholson, Senior Technical Marketing Architect, VMware

  • The session opens with a description of space reclamation and the different levels it can be performed at, and explains why it is an important feature. Next they go into the history of UNMAP, which was a bit troubled at first: it debuted as part of vSphere 5.0 as an automatic process, but issues with some vendors being able to support it properly quickly surfaced, and in 5.0 Update 1 it was disabled by default. From that point on it became a manual (CLI) process until it finally came back in vSphere 6.5 as an automatic process with some modifications to force the process to work at fixed rate levels. In 6.7 they further refined UNMAP with a configurable rate to provide more flexibility, and in 6.7 U1 it became a truly automatic process again (for vSAN only).
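As a reminder of what the manual era looked like, here is a sketch of the CLI commands that were used to reclaim space by hand on VMFS; the datastore label "Datastore1" and the reclaim-unit value are placeholders you would adjust for your environment:

```shell
# ESXi 5.5–6.0: manually reclaim dead space on a VMFS datastore from the ESXi shell.
# --reclaim-unit is the number of VMFS blocks unmapped per iteration (200 is a common choice).
esxcli storage vmfs unmap --volume-label=Datastore1 --reclaim-unit=200

# Older 5.0/5.1 hosts used vmkfstools instead, run from inside the datastore,
# with a percentage of free space to reclaim (60% here):
#   cd /vmfs/volumes/Datastore1
#   vmkfstools -y 60
```

Note that the esxcli unmap command runs to completion in iterations and can generate significant I/O on the array, which is why it was typically scheduled during off-peak hours.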

  • Next they go into more detail on how the process worked across different vSphere versions. I went into a lot of detail on this in this post I did on UNMAP in vSphere 6.5; in 6.7 the only change was going from fixed preset limits to more flexible configurable limits. They didn’t mention it, but take note that UNMAP with VVols has been automatic since vSphere 6.0: a host doesn’t need to tell an array which blocks to reclaim when a VM is deleted, as the array is already aware of it, and it is up to the array to do the reclamation on its own. Because of this flexibility an array can hold off on reclaiming space and potentially allow a user to undelete a VM if needed (i.e. a recycle bin).
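For the automatic VMFS process, the per-datastore reclaim settings can be viewed and changed with esxcli; a sketch, again using "Datastore1" as a placeholder label:

```shell
# View the current automatic space reclamation settings for a VMFS 6 datastore.
esxcli storage vmfs reclaim config get --volume-label=Datastore1

# vSphere 6.5: set the reclaim priority (none disables automatic UNMAP, low is the default).
esxcli storage vmfs reclaim config set --volume-label=Datastore1 --reclaim-priority=low

# vSphere 6.7: the new configurable rate, a fixed bandwidth in MB/s instead of preset levels.
esxcli storage vmfs reclaim config set --volume-label=Datastore1 \
  --reclaim-method=fixed --reclaim-bandwidth=100
```

The fixed-bandwidth method is the 6.7 refinement the speakers describe: rather than choosing from preset priority levels, you pick a reclaim rate that matches what your array can comfortably absorb.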

  • Next they cover in detail what was actually unmapped in different versions of vSphere: 6.0 was pretty much limited to thin disks only, in vSphere 6.5 it became less restrictive, and now with vSphere 6.7 it pretty much works with everything. One thing to note again with VVols is that unmapping from within the guest OS is supported, and snapshots are automatically reclaimed as well when they are deleted, as all snapshots with VVols are array-based snapshots.
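Guest OS reclamation is worth illustrating: when the virtual disk and guest support it, deleting files inside the VM can trim those blocks all the way down to the array. A minimal Linux-guest sketch (the mount point `/data` is a placeholder):

```shell
# Check whether the guest's virtual disk advertises discard/TRIM support
# (a non-zero DISC-GRAN/DISC-MAX means discards are supported).
lsblk --discard

# One-off: trim all unused blocks on the /data filesystem and report how much was trimmed.
sudo fstrim -v /data

# Ongoing: most distributions ship a periodic trim timer that can be enabled instead
# of mounting with the "discard" option.
sudo systemctl enable --now fstrim.timer
```

On Windows guests the equivalent is handled by the built-in Optimize Drives (retrim) task. Either way, the freed blocks are passed down through vSphere so the array can return them to the free pool.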

  • As far as what types of datastores are supported with UNMAP, what it really comes down to is how the UNMAP is handled. With VMFS, vSphere tells the storage array which blocks to unmap, as the array has no visibility inside the VMFS volume and doesn’t know where VM data resides. With NFS, which isn’t a block-based file system, the array is already aware of which disk blocks a VM occupies because the VM is written as a file; once the VM is deleted, the array knows which blocks to reclaim when deleting that file. The same holds true for VVols: the array knows where the VM is written and simply deletes and reclaims the VVols associated with a VM. Isn’t VM-level visibility great?
  • From there they went on to describe the mechanics of how UNMAP works and some best practices for using it effectively. They also showed how to monitor the performance impact of UNMAP, along with some demos. I just touched on a small part of this session, so I encourage you to go watch the session replay to learn a lot more about UNMAP.