How VVols impacts storage protocol choices with vSphere

File vs. Block: why choose one over the other with vSphere? Each has its pros and cons, which has long influenced decision making when picking storage for vSphere, but VVols changes how storage protocols interact with vSphere, and that may change your decision making as well.

Let’s first look at File (NFS); here are some of its characteristics and pros/cons (a short code sketch of the mount workflow follows the list):

  • File system is managed by the NAS array, not vSphere
  • Uses the NFS client built into ESXi to connect to the NAS array via standard networking
  • Simplicity: no LUNs to deal with, easier volume resizing, and easier overall management
  • VMs are stored as files on the NAS array, so the array can see and interact directly with individual VMs
  • Historically, VMware feature development for NFS has lagged behind block
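To make the simplicity point concrete, here is a minimal sketch of the NFS mount workflow using the pyVmomi Python SDK. It assumes you already have a connected `vim.HostSystem` object in hand; the NAS address, export path, and datastore name are placeholders:

```python
from pyVmomi import vim

def mount_nfs_datastore(host, nas_host, export_path, ds_name):
    """Mount an NFS export as a datastore on one ESXi host.

    Sketch only: `host` is an already-retrieved vim.HostSystem, and the
    NAS address, export path, and datastore name are placeholders.
    """
    spec = vim.host.NasVolume.Specification(
        remoteHost=nas_host,     # e.g. "nas01.example.com"
        remotePath=export_path,  # e.g. "/exports/vmware"
        localPath=ds_name,       # datastore name as vSphere will see it
        accessMode="readWrite",
    )
    # The NFS client built into ESXi handles the rest:
    # no LUN creation and no VMFS format step.
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)
```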

Now let’s look at Block (iSCSI/FC); here are some of its characteristics and pros/cons (again, a sketch follows the list):

  • File system is managed by vSphere (VMFS), not the array
  • Uses an iSCSI software initiator built into vSphere, or a physical HBA, to connect to an array via either standard networking (iSCSI) or an FC fabric
  • More complex: many LUNs to create, manage, and resize
  • VMs are stored as files on a VMFS file system; the array has no visibility inside the LUN to see and interact directly with VMs
  • Historically, VMware development of new storage features has focused more on block
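For contrast, here is the equivalent block-side sketch with pyVmomi, under the same assumption of a connected `vim.HostSystem`: you first have to find an unused LUN presented to the host, then generate a create spec and format the LUN with VMFS before vSphere can place VMs on it:

```python
from pyVmomi import vim

def create_vmfs_datastore(host, ds_name):
    """Format the first unused LUN on a host as a VMFS datastore.

    Sketch only: assumes `host` is a vim.HostSystem with at least one
    unused iSCSI or FC LUN already presented to it.
    """
    dss = host.configManager.datastoreSystem
    disks = dss.QueryAvailableDisksForVmfs()  # LUNs usable for VMFS
    if not disks:
        raise RuntimeError("no unused LUNs presented to this host")
    options = dss.QueryVmfsDatastoreCreateOptions(
        devicePath=disks[0].devicePath)
    spec = options[0].spec                    # pre-populated create spec
    spec.vmfs.volumeName = ds_name
    # Unlike NFS, the host owns and formats the file system (VMFS) on the LUN.
    return dss.CreateVmfsDatastore(spec=spec)
```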

From this you could deduce that File is simpler and easier to manage and has VM-level visibility, while Block has more management overhead and no VM-level visibility. Why is VM-level visibility a big deal? Because if a storage array can see individual VMs, array features and capabilities can be applied at a much more granular level rather than at the LUN (VMFS datastore) level. The advantages of VM-level visibility include:

  • Ability to instantly reclaim disk space without UNMAP, as the array knows when a VM is deleted or moved
  • Being able to snapshot or replicate individual virtual machines (see the sketch after this list)
  • Leveraging array-based monitoring tools at the VM level to see performance and capacity statistics
  • Easier troubleshooting at the array level to correlate VMs to storage bottlenecks and hotspots
  • Using array-based QoS tools to apply resource controls on VMs more granularly
  • Ability to place VMs directly on specific disk tiers to meet performance and resiliency SLAs
  • Being able to apply other array features such as dedupe, compression, thin provisioning, etc. at the VM level
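As an illustration of what that granularity could look like in practice, the sketch below asks a storage array to snapshot a single VM through its REST API instead of snapshotting an entire LUN. Everything array-side here is hypothetical: the endpoint, the `/vvols/snapshots` path, and the payload fields are stand-ins for whatever per-VM operations your vendor actually exposes once its VASA provider can map a VM to its VVols:

```python
import requests

ARRAY_API = "https://array.example.com/api/v1"  # hypothetical array endpoint

def snapshot_vm_on_array(session_token, vm_name):
    """Ask the array to snapshot one VM's VVols (hypothetical API).

    Because the array sees each VM as a discrete set of VVol objects,
    a vendor can offer per-VM operations like this rather than forcing
    a snapshot of the whole LUN or datastore.
    """
    resp = requests.post(
        f"{ARRAY_API}/vvols/snapshots",             # hypothetical path
        headers={"Authorization": f"Bearer {session_token}"},
        json={"vm": vm_name, "retention_days": 7},  # hypothetical payload
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```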

So VVols levels the playing field between file and block and puts them on equal footing with vSphere: VVols (via VASA) essentially dictates a common framework and set of rules that all protocols must use for writing data to a storage array. The big winner is block storage, which gains the simplified management and VM-level visibility that file has always enjoyed with vSphere. The tables below summarize the impact that VVols has on file and block storage arrays and show how the storage protocol used becomes mostly irrelevant with VVols.

Table 1 – Comparison of File and Block protocols without VVols

| Protocol | Host Adapter | I/O Transport | Presented to Host | File System | VM Storage | Storage Visibility |
|----------|--------------|---------------|-------------------|-------------|------------|--------------------|
| File | NFS client | Network | Mount point | Array managed (NFS) | VMDK files | VM level |
| Block | iSCSI initiator (sw/hw) or HBA | Network or fabric | LUNs | vSphere managed (VMFS) | VMDK files | Datastore level |

Table 2 – Comparison of File and Block protocols with VVols

| Protocol | Host Adapter | I/O Transport | Presented to Host | File System | VM Storage | Storage Visibility |
|----------|--------------|---------------|-------------------|-------------|------------|--------------------|
| File | NFS client | Network | Storage container | vSphere native | VVols | VM level |
| Block | iSCSI initiator (sw/hw) or HBA | Network or fabric | Storage container | vSphere native | VVols | VM level |

As you can see, VVols is a big deal for block storage arrays: with VVols, the storage protocol is now essentially just the method by which data is communicated to the array. The other differences, such as how the file system is managed and VM-level visibility, no longer vary between protocols, as vSphere natively writes VVols to an array without a file system and both file and block get the same VM-level visibility.
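You can see this convergence from the vSphere API itself: a VVol storage container surfaces as a datastore of type "VVOL" regardless of whether the transport underneath is NFS, iSCSI, or FC. A quick pyVmomi sketch, assuming a connected `ServiceInstance` named `si`:

```python
from pyVmomi import vim

def list_datastore_types(si):
    """Print each datastore's reported type ("VMFS", "NFS", "VVOL", ...).

    With VVols the protocol drops out of this view: file- and
    block-backed storage containers both report type "VVOL".
    """
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        print(f"{ds.name}: {ds.summary.type}")
    view.Destroy()
```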

So now, choosing a storage protocol with vSphere becomes more about the physical and networking layers between the host and the storage array. Bandwidth and network infrastructure will mostly drive the decision: Fibre Channel has obvious advantages but can be more costly and complex to implement, while iSCSI and NFS are now essentially the same, as both use software clients built into vSphere and the exact same networking infrastructure.

With no more LUNs to deal with, iSCSI becomes a more attractive choice for VVols with vSphere, bringing some of the same benefits as NFS around simplified management and VM-level visibility, all delivered within VMware’s standardized, vendor-neutral Storage Policy Based Management (SPBM) framework.
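Here is a minimal sketch of the SPBM side of that, again with pyVmomi: attaching a storage policy to an existing VM’s home object via a reconfigure. It assumes you have already looked up the policy’s profile ID through the PBM endpoint (that lookup, via pbm.profile.ProfileManager, is omitted for brevity):

```python
from pyVmomi import vim

def apply_storage_policy(vm, profile_id):
    """Attach an SPBM storage policy to a VM's home object.

    Sketch only: `vm` is a vim.VirtualMachine and `profile_id` is the
    SPBM profile's unique ID, normally fetched via the PBM API.
    """
    spec = vim.vm.ConfigSpec(
        vmProfile=[vim.vm.DefinedProfileSpec(profileId=profile_id)])
    # vSphere and the VASA provider translate the policy's capabilities
    # into array-side behavior; no protocol-specific steps are needed.
    return vm.ReconfigVM_Task(spec=spec)
```

So what storage protocol will you use? Whatever you choose, VVols has made the decision a lot easier.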
