One year ago vSphere 6.0 was released, and along with it came the new Virtual Volumes (VVols) storage architecture for shared storage. Many years in the making, VVols revolutionized how storage arrays interact with VMs, moving from the archaic LUN-centric model to a new VM-centric model. VMware just published a blog post on this one-year milestone for VVols, and I thought I would comment and provide my perspective as well.
In a recent blog post VMware published some stats on VVol deployment that are supposedly based on real-life customer adoption, but I don't feel their numbers provide much meaningful information on how customers are using VVols. To be honest, their numbers don't make sense to me. They stated the following in the post:
“The median number of datastores deployed by a customer using VVols is two, serving on average just over 30TB of capacity with about half of that consumed by Virtual Machines.”
VVols doesn't use datastores; it uses Storage Containers, which are purely logical entities with no physical storage assigned to them. In most cases you are only going to use a single Storage Container, as there isn't much benefit to using multiple containers beyond logically separating VMs on a storage array. So when they say two datastores, that's confusing. Again, there are no datastores. Are they referring to Storage Containers? If so, I would think more people are using one than two.
Next, on capacity: with VVols there is no over-provisioning like there is with VMFS, so there is no wasted space; you are using exactly the amount of space that your VMs need. So to say that you have 30TB of capacity allocated to VVols with only half of that used by VMs doesn't add up. Your VVol capacity should exactly equal the capacity used by your VMs on VVol storage. Again, this is confusing.
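To make the accounting difference concrete, here is a minimal sketch in Python using hypothetical numbers (the 30TB/15TB figures are taken from the quote above purely for illustration): with a pre-carved VMFS LUN, provisioned and consumed capacity can diverge, while with VVols space is allocated per VM object on demand, so consumption tracks what the VMs actually use.

```python
# Hypothetical illustration of VMFS vs. VVol capacity accounting.
# All figures are in TB and are made up for the example.

# VMFS-style: a 30 TB LUN is carved up front; VMs use half of it.
vmfs_provisioned = 30
vm_consumed = 15
vmfs_idle = vmfs_provisioned - vmfs_consumed if False else vmfs_provisioned - vm_consumed  # pre-committed but unused space

# VVol-style: nothing is pre-committed beyond what VM objects need,
# so "VVol capacity in use" should simply equal VM consumption.
vvol_consumed = vm_consumed
vvol_idle = 0

print(vmfs_idle, vvol_idle)  # 15 0
```

This is why a report of "30TB of VVol capacity, half consumed" reads oddly: under the thin, per-object model there is no pre-committed half to sit idle (the reply below explains that the 30TB is actually a logical limit on the container).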
One metric that they don't provide, which I feel would be the most useful, is the average number of VVols associated with a VM. This is a key metric that would help determine how many VMs an array could support given a vendor's maximum VVol limit. It will vary by customer, largely based on snapshot usage, but I'm always curious to see what customers are averaging here.
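A rough back-of-the-envelope estimator for this metric can be sketched as follows. The per-VM object counts reflect the general VVol model (a config VVol, one data VVol per virtual disk, a swap VVol while powered on, and snapshot VVols per disk); exact counts vary by vendor and configuration, and the 10,000-VVol array limit used in the example is an assumed figure, not any specific vendor's number.

```python
# Back-of-the-envelope: VVols per VM, and how many VMs fit under an
# array's VVol limit. Counts are illustrative, not vendor-specific.

def vvols_per_vm(disks=2, powered_on=True, snapshots=1):
    count = 1                    # config VVol (VM home)
    count += disks               # one data VVol per virtual disk
    if powered_on:
        count += 1               # swap VVol exists while the VM runs
    count += snapshots * disks   # each snapshot adds a VVol per disk
    return count

def max_vms(array_vvol_limit, disks=2, snapshots=1):
    # Integer ceiling of VMs the array can hold at this VVol density
    return array_vvol_limit // vvols_per_vm(disks, True, snapshots)

print(vvols_per_vm())   # 6 for a running 2-disk VM with one snapshot
print(max_vms(10000))   # 1666 VMs under an assumed 10,000-VVol limit
```

The snapshot term is why averages differ so much between customers: a snapshot-heavy shop can easily double its per-VM VVol count and halve the effective VM capacity of the array.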
They mention VVol use cases and say they are not sure how customers are using them. I think at this point it is mostly dev/test and low-hanging-fruit apps that aren't mission-critical, as most people are still testing the VVol waters before going all in. They also mention the benefits and highlight snapshots; I agree this is a big one, along with space reclamation and no more over-allocating space. The new snapshot mechanism that VVols uses will have a big impact on backup efficiency, as I highlighted in this post.
They talk about the partner ecosystem, which is still a work in progress: they show a lot of partner logos, but only 14 of them support VVols after one year (I did an update on that a few weeks ago). It's nice to see VVols hit the one-year milestone, start to mature, and see customer interest and adoption grow. I think it will take another year and another big vSphere release, though, before it gains good momentum, much the same way VSAN did (it turned two years old this month). For more on my perspective on customer adoption of VVols, you can read this post I did a few months back.
[important]Update below from Ben Meadowcroft of VMware, author of the blog post and its product manager, which clarifies this. Thanks Ben![/important]
I appreciate the write-up and commentary on my blog post. There are a couple of items I'd like to clarify and add some context to, if I may:
> VVols doesn’t use datastores, it uses Storage Containers
Storage Containers are the array side construct but within vSphere the storage container is exposed and consumed as a datastore. When you want to expose a storage container to vSphere you are in fact adding a new datastore to the inventory.
> In most cases you are only going to use a single Storage Container as there isn't much benefit to using multiple containers beyond logically separating VMs on a storage array. …if so I would think more people are using one than two.
I agree that the primary reason you would want to do this is logical separation, perhaps taking advantage of vSphere permissions to restrict access to specific datastores for different users, for example. This is why it is important to note that a storage container is exposed as a datastore within vSphere, so you get all the permissions capabilities you previously had with the new storage model as well. Another (and probably more likely) explanation is simply that some customers are following old habits, perhaps trying VVols on a couple of clusters and deciding to carve out a container per cluster, say.
> Next, on capacity: with VVols there is no over-provisioning like there is with VMFS, so there is no wasted space; you are using exactly the amount of space that your VMs need. So to say that you have 30TB of capacity allocated to VVols with only half of that used by VMs doesn't add up. Your VVol capacity should exactly equal the capacity used by your VMs on VVol storage. Again, this is confusing.
This is a good point; there is a difference between consumption on the array with VMFS and with VVols that is an important consideration. The numbers in my blog are taken from what is reported by the datastore metrics. While the storage container is a logical entity, it does have an associated capacity value (reported via VASA and surfaced in the datastore metrics received by VMware), and it is this capacity that is picked up in the reports in the post. You are correct to highlight that a VVol storage container is not consumed at configuration time the way a LUN with VMFS would be. I consider the capacity associated with the storage container to be more like a logical limit on what can be consumed by the container (and, as it is logical, it can be adjusted). I apologize that this was confusing; I can see how it could have been communicated more clearly.
Thanks again for your write-up; your articles are a great resource and always an interesting read.