Top 10 reasons to start using VVols right now

VMware’s new storage architecture, Virtual Volumes (VVols), has been out for a year and a half now as part of the vSphere 6.0 release, yet adoption continues to be very light for a variety of reasons. This kind of slow uptake is typical of any 1.0 release, as users wait for it to mature a bit and try to understand how it differs from what they are used to. VVols brings a lot of great benefits that many are unaware of, so in this post I thought I would highlight them and make a compelling case for starting to use VVols right now.

10 – You don’t have to go all in

It’s not all or nothing with VVols; you can easily run it right alongside VMFS or NFS on the same storage array. Because VVols requires no up-front space allocation for its Storage Container, you will only consume space on your existing array for the VMs that you put on VVol storage. VVols is a dynamic storage architecture: the Storage Container is a purely logical entity, so whatever free space remains on the array is available to it, unlike a VMFS volume, which requires up-front space allocation.

You can easily move running VMs from VMFS to VVols using Storage vMotion (and back again if needed), or create new VMs directly on VVols. This allows you to go at your own pace, and as you move VMs over you can remove VMFS datastores as they empty out, which frees more space for your Storage Container. Note that Storage vMotion is the only way to move existing VMs to VVols; you cannot upgrade VMFS datastores to VVol Storage Containers.
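To make the space accounting above concrete, here is a toy model in Python (purely illustrative, not any real array or vSphere API): a VMFS datastore carves fixed space out of the array up front, while the VVols Storage Container is logical and its free space is simply whatever the array has left.

```python
# Toy model of point 10: VMFS pre-allocates space, a VVols Storage
# Container only consumes what the VMs placed on it actually use.
# All names and numbers are invented for illustration.

class Array:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.vmfs_allocated = 0   # space carved into VMFS LUNs up front
        self.vvol_used = 0        # space actually consumed by VVol-based VMs

    def create_vmfs(self, size_gb):
        self.vmfs_allocated += size_gb

    def delete_vmfs(self, size_gb):
        # removing an emptied VMFS datastore returns its whole allocation
        self.vmfs_allocated -= size_gb

    def svmotion_to_vvols(self, vm_gb):
        # the container grows only by what the moved VM actually uses
        self.vvol_used += vm_gb

    def container_free(self):
        # the Storage Container's free space is just the array's free space
        return self.capacity - self.vmfs_allocated - self.vvol_used


array = Array(10000)            # 10 TB array
array.create_vmfs(4000)         # a 4 TB VMFS LUN is gone from the container
array.svmotion_to_vvols(1000)   # move a 1 TB VM onto VVols
array.delete_vmfs(4000)         # empty VMFS datastore removed afterwards
```

After those steps the container sees 9000 GB free: only the migrated VM's 1 TB is consumed, with no pre-allocated silo left behind.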

9 – Gain early experience with VVols

VMware never keeps two of anything around that do the same thing; they always eventually retire the old way of doing it, since maintaining both doubles their development work. At some point VVols will be the only storage architecture for external storage and VMFS will be retired. How long did you wait to switch from ESX to ESXi, or from the vSphere C# client to the web client? Did you wait until the last minute and then scramble to learn it, struggling with it for the first few months? Why wait? The sooner you start using VVols, the sooner you will understand it, and you can plan your migration over time instead of being forced to do it all at once. By gaining early experience you will be ahead of the pack and can focus on building deeper knowledge over time instead of being a newbie who is just learning the basics. There is no shortage of resources available today to help you on your journey to VVols.

8 – Get your disk space back and stop wasting it

With VVols, both space allocation and space reclamation are completely automatic and happen in real time. No storage is pre-allocated to the Storage Container or Protocol Endpoint, and when VMs are created on VVols storage they are thin provisioned by default. When VMs are deleted or moved, space is automatically reclaimed because the array sees VMs as objects and knows which disk blocks they reside on. No more manually running time- and resource-intensive CLI tools like vmkfstools or esxcli to blindly try to reclaim space on the array from deleted or moved VMs. VVols is designed to let the array maintain a very thin footprint, without pre-allocating space and carving the array into silos the way you do with VMFS, while reclaiming space in real time.
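The reason reclamation can be automatic is the object visibility described above: the array knows which blocks belong to which VM. A minimal sketch of that idea (invented for illustration, not a real array interface):

```python
# Simplified sketch of point 8: because the array tracks each VM as an
# object with its own set of blocks, deleting the VM frees those blocks
# immediately -- no manual vmkfstools/esxcli unmap pass is needed.
# Class and method names are hypothetical.

class VvolArray:
    def __init__(self):
        self.blocks_by_vm = {}   # VM object -> set of allocated block IDs

    def provision_vm(self, vm, blocks):
        self.blocks_by_vm[vm] = set(blocks)

    def delete_vm(self, vm):
        # the array knows exactly which blocks to free, in real time
        freed = self.blocks_by_vm.pop(vm)
        return len(freed)

    def used_blocks(self):
        return sum(len(b) for b in self.blocks_by_vm.values())
```

With a VMFS LUN the array only sees one big opaque volume, so freed blocks inside it stay allocated until something like an unmap command tells the array about them; here the per-VM bookkeeping makes the delete itself the reclamation.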

7 – It’s available in all vSphere editions

VVols isn’t a vSphere feature that is licensed only in certain editions such as Enterprise Plus; it’s part of the core vSphere architecture and available in all vSphere editions. If you have vSphere 6.0 or higher, you already have VVols capability and are ready to start using it. VVols mostly lives under the covers in vSphere, so it won’t be completely obvious that the capability is there. It is part of the same Storage Policy Based Management (SPBM) framework that VSAN uses, and it presents itself as a new datastore type when you configure storage in vSphere.

6 – Let the array do the heavy lifting

Storage operations are very resource intensive and a heavy burden on the host. While servers are commonly deployed as SANs these days, a server really wasn’t built specifically to handle storage I/O the way a storage array is. VMware recognized this early on, which is why they created vSphere storage APIs such as VAAI to offload resource-intensive storage operations from the host to the storage array. VVols takes this approach to the next level: it shifts the management of storage tasks to vSphere and, through policy-based automation, shifts the storage operations themselves to the storage array.

So things like thin provisioning and snapshots, which with VMFS can be done on either the vSphere side or the storage array side, are done only on the storage array side with VVols. How you do these things remains the same in vSphere, but when you take a snapshot in the vSphere client you are now taking an array-based snapshot. The VASA specification that defines VVols is basically just a framework that allows array vendors to implement certain storage functionality; how each vendor handles things within the array is up to them. The storage array is a purpose-built I/O engine designed specifically for storage I/O and can do these things faster and more efficiently, so why not let it do what it does best and take the burden off the host?

5 – Start using SPBM right now

VSAN users have been enjoying Storage Policy Based Management for quite a while now, and with VVols anyone with an external storage array can use it as well. While storage policies have been around even longer, first introduced with VASA 1.0, they were very basic and not all that useful. The introduction of VASA 2.0 in vSphere 6.0 was a big overhaul for SPBM and made it much richer and more powerful. The benefit of SPBM is that it makes life easier for vSphere and storage admins by automating storage provisioning and mapping storage array capabilities, including features and hardware attributes, directly to individual VMs. SPBM ensures compliance with defined policies, so you can create SLAs and align storage performance, capacity, availability and security features with application requirements.
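At its core, the SPBM matching idea can be sketched very simply (the names and capabilities below are invented for illustration and are not the real vSphere API): a policy lists required capabilities, each datastore advertises what its array supports, and both placement and compliance checking reduce to a capability comparison.

```python
# Hypothetical sketch of the SPBM idea from point 5: policies are sets of
# required capabilities, datastores advertise capabilities, and vSphere
# matches the two. Capability names here are made up for illustration.

def is_compliant(policy, capabilities):
    """True if a datastore's advertised capabilities satisfy the policy."""
    return all(capabilities.get(k) == v for k, v in policy.items())

def compatible_datastores(policy, datastores):
    """Filter datastores the way the provisioning wizard filters storage."""
    return [name for name, caps in datastores.items()
            if is_compliant(policy, caps)]

# Invented example: a "gold" policy requiring replication and flash.
gold_policy = {"replication": True, "flash": True}
datastores = {
    "vvol-array-a": {"replication": True, "flash": True, "dedupe": True},
    "vvol-array-b": {"replication": False, "flash": True},
}
```

Here only `vvol-array-a` is compatible with the gold policy; the same check run periodically against a placed VM is what compliance monitoring amounts to.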

4 – Snapshots don’t suck anymore

vSphere native VM snapshots have always been useful, but they also have a dark side. One of their big disadvantages is the commit process (deleting a snapshot), which can consume a lot of time and resources, and the longer a snapshot is active the worse it gets. The reason is that when you create a vSphere snapshot, the base disk becomes read-only and any new writes are deflected to delta vmdk files created for each snapshot. The longer a snapshot is active and the more writes that are made, the larger the delta files grow; taking multiple snapshots creates more delta files. When you delete a snapshot, all those delta files have to be merged back into the base disk, which can take a very long time and is resource intensive. Since backup agents routinely take snapshots before backing up a VM, snapshots are a pretty common occurrence.

With VVols the whole VM snapshot process changes dramatically: a snapshot taken in vSphere is not performed by vSphere but is instead created and managed on the storage array. The process is similar in that separate delta files are still created, but those files are array-based VVol snapshots, and more importantly, what happens while they are active is reversed. When a snapshot of a VM on VVol-based storage is initiated in vSphere, a delta VVol is created for each of the VM's virtual disks, but the original disk remains read-write; the delta VVols instead preserve any disk blocks that are changed while the snapshot is active. The big change comes when you delete a snapshot: because the original disk is read-write, the delta VVols can simply be discarded and there is no data to commit back into the original disk. This can take milliseconds, compared to the minutes or hours needed to commit a snapshot on VMFS datastores.
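The reversal described above is easy to see in a toy simulation (illustrative Python only, nothing here is VMware code): a redo-log snapshot must merge every delta block back into the base on delete, while an array snapshot whose base stayed read-write just throws its delta away.

```python
# Toy contrast of the two snapshot models from point 4.

class RedoLogDisk:
    """VMFS-style: base becomes read-only, new writes land in a delta file."""
    def __init__(self, blocks):
        self.base = dict(blocks)
        self.delta = None

    def snapshot(self):
        self.delta = {}

    def write(self, block, data):
        if self.delta is not None:
            self.delta[block] = data      # write deflected to the delta vmdk
        else:
            self.base[block] = data

    def delete_snapshot(self):
        merged = len(self.delta)          # every delta block must be committed
        self.base.update(self.delta)      # merge back into the base disk
        self.delta = None
        return merged                     # cost grows with snapshot age

class ArraySnapshotDisk:
    """VVols-style: base stays read-write, delta preserves the *old* blocks."""
    def __init__(self, blocks):
        self.base = dict(blocks)
        self.delta = None

    def snapshot(self):
        self.delta = {}

    def write(self, block, data):
        if self.delta is not None and block not in self.delta:
            self.delta[block] = self.base.get(block)  # keep original block
        self.base[block] = data           # base is written in place

    def delete_snapshot(self):
        self.delta = None                 # nothing to commit back
        return 0                          # effectively instant
```

Run both through the same workload and the redo-log disk's delete cost equals the number of changed blocks, while the array-snapshot disk's delete cost is always zero, which is exactly why VVol snapshot deletion is near-instant regardless of how long the snapshot ran.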

The advantages of VVol array-based snapshots are many: they run more efficiently on the storage array, they don't consume host resources, and you are not waiting hours for them to commit. Your backups will complete quicker, and there is no risk of lost data from long snapshot chains being written back into the base disk.

3 – Easier for IT generalists

Because storage array management shifts to vSphere through SPBM, IT generalists don’t really have to be storage admins as well. Once the initial array setup is complete and the necessary VVols components are created, all the common provisioning tasks you would normally do on the array to support vSphere happen through automation. No more creating LUNs, expanding LUNs, configuring thin provisioning or taking array-based snapshots by hand; it’s all done automatically. When you create a VM, storage is automatically provisioned on the array, and there is no more worrying about creating LUNs, determining LUN sizes or growing LUNs when needed.

With VVols you manage VMs and storage only in vSphere, not in two places. As a result, if you don’t have a dedicated storage admin, an IT generalist or vSphere admin can manage the array pretty easily. Everything in SPBM is designed to be dynamic and autonomic; it unifies and simplifies management and reduces the overall burden of storage administration in vSphere. Instead of the added complexity of vendor-specific storage management tools, with VVols management happens through the native vSphere interfaces.

2 – It unifies the storage protocols

VMFS and NFS storage are implemented quite differently in vSphere; what VVols does is define a standardized storage architecture and framework across all storage protocols. There were several big differences between the file and block implementations in vSphere: with block you presented storage to ESXi hosts as LUNs, and with file as a mount point. You still do this to connect your array’s Protocol Endpoint to a host, but the storage container presented to a host to store VMs is now just that, a Storage Container, not a LUN or a mount point.

When it came to file systems, with block an ESXi host would manage the file system (VMFS), while with file it was managed by the NAS array. With VVols there is no file system anymore; VVols are written natively to the array, regardless of whether it’s a file or block array. And the biggest change is that VMs are now written as files (VVols) to block storage in the same manner that file storage has been doing all along. This creates a unified and standardized storage architecture for vSphere and eliminates the many differences that previously existed between file and block. VVols creates one architecture to bring them all and in the darkness bind them (LOTR reference).

1 – The VM is now a unit of storage

This is what it’s all about: the LUN is dead, because it’s all about the VM, ’bout the VM, no LUN. With VVols the VM is now a first-class citizen and the storage array has VM-level visibility. Applications and VMs can be directly and more granularly aligned with storage resources. Where VMFS was LUN-centric and static, creating silos within an array, aligning data services to LUNs and relying on pre-allocated, over-provisioned resources, VVols is VM-centric and dynamic: nothing is pre-allocated, no silos are created, and array data services are aligned at the more granular VM level. This new operational model transforms storage in vSphere to simplify management, deliver better SLAs and provide much improved efficiency. VVols lets application-specific requirements drive provisioning decisions while leveraging the rich set of capabilities provided by storage arrays.


5 comments


  1. Thank you for this round-up of VVols advantages. I must say the real problem is storage support for VVols, as many vendors, HP LeftHand for example, do not yet offer support.

    1. Yeah it’s getting there and it’s way better now compared to Day 1 when only 4 vendors supported it. A quick check of the HCL shows 16 vendors supporting it right now including all the larger vendors: EMC, IBM, NetApp, Dell, HDS, HPE (3PAR). However still absent are Pure, Nexenta, SolidFire, Simplivity, Tegile and Nutanix. On the HPE StoreVirtual VSA front we’ve been working on it and you will see it in a future release of LeftHand OS.

    Vaibhav Tiwari on September 16, 2016 at 8:30 am

    Hi,

    Thanks for the article.

    This is not really related to the article, but do you know how I can use VVols in my home lab? I use VMware Workstation to run my lab.

    Thanks

    1. You basically need a VSA that supports it, and the options are few. I believe StarWind supports it even though they are not listed on the HCL; you might check them out. Here are some links: https://www.starwindsoftware.com/starwind-virtual-san and http://blog.mwpreston.net/2016/06/06/want-try-vvols-starwind/

      Also I believe EMC had something as well, check out this: http://www.virtualtothecore.com/en/a-free-solution-to-test-vvols-in-my-lab-at-last-thanks-to-emc/

  2. I always wanted to use VVols but I was a little concerned because I didn’t know its benefits and features. Now I fully agree with the benefits you have shared in this blog, and I want to deploy VVols in our infrastructure so we can take advantage of all of them.
