Analyzing The Register’s latest article on VVols

The Register just published another article on VMware's new Virtual Volumes (VVols) storage architecture, and while I agree with a lot of it, I thought I would provide some clarification and analysis of parts of it.

The first part hits on end user adoption of VVols, or rather the lack of it. I've addressed that before, so rather than rehash all the reasons you can go read about them here, here and here. VVols is barely a year old, and frankly I would expect low adoption of any new technology in less than a year; look at where VSAN was a year after its initial release.

Next it talks about the complexity of VVols. Well, of course it's complex; it's an entirely new storage architecture that was many years in the making and required new T-10 specs to support it. You can go read VMware's patent on VVols if you really want to know more about it, or the new T-10 specs they submitted covering bind/unbind operations and conglomerate LUNs. There are lots of new components with VVols, and I don't think you will hear any storage vendor say it was easy to implement; we've spent over 4 years developing our support at HPE.

In comparison, look at just about anything in a VMware environment and you will see design complexity; hypervisor scheduling, virtual switches and memory management certainly aren't easy to implement. The key distinction, though, is that while it may have been complex for VMware and storage vendors to implement VVols on the back end, it is by design less complex for vSphere admins. End users don't have to look at complex architecture diagrams and component relationships for VVols; all they care about is what they see and do in the vSphere client.

Next it covers the VVol architecture, which has a lot of new components; if you want to know more about that, check out my VMworld session on VVols. It also talks about the number of VVols that arrays will have to support, and I'm not sure all the math is accurate. A VM will always have a minimum of 2 VVols when powered off (config/data) and 3 when powered on (+vswp), plus additional VVols for snapshots as needed; I'll do a separate post that covers this in more detail. The theoretical max for VVols in a vSphere cluster is 19 million, and the theoretical max for a single VM is around 2,000. We've had feedback from customers that are looking to do large scale implementations of VVols, and based on their requirements they estimated about 7 VVols per VM.
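To make that arithmetic concrete, here is a minimal sketch (my own simplification, not any vendor's sizing tool) that estimates VVol counts per VM from the numbers above. It assumes one config VVol per VM, one data VVol per virtual disk, one swap VVol when powered on and one snapshot VVol per data VVol per snapshot, which glosses over things like memory snapshots and clones:

```python
# Rough VVol-count estimator based on the numbers in the post.
# Simplifying assumptions: one config VVol per VM, one data VVol per
# virtual disk, one swap VVol when powered on, and one snapshot VVol
# per data VVol per snapshot. Real counts vary by vendor and by
# features such as memory snapshots and clones.

def vvols_per_vm(num_disks=1, powered_on=True, snapshots=0):
    config = 1                      # config VVol (always present)
    data = num_disks                # one data VVol per virtual disk
    swap = 1 if powered_on else 0   # vswp VVol only while powered on
    snap = snapshots * num_disks    # snapshot VVols (simplified)
    return config + data + swap + snap

print(vvols_per_vm(num_disks=1, powered_on=False))              # 2 (config + data)
print(vvols_per_vm(num_disks=1, powered_on=True))               # 3 (+ vswp)
print(vvols_per_vm(num_disks=2, powered_on=True, snapshots=1))  # 6, near the ~7/VM estimate
```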

On array VVol limits, this is more of a challenge for block arrays that are used to dealing with LUNs in the hundreds instead of sub-LUNs in the thousands. I've seen some vendor implementations as low as 1,000; with 3PAR it's 128,000. NAS arrays are already used to dealing with large numbers of objects, so I have seen some vendors claim support in the millions.
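For a rough feel of what those limits mean in practice, here's a back-of-the-envelope check using the ~7 VVols per VM customer estimate from above (real averages depend heavily on disk and snapshot counts):

```python
# Back-of-the-envelope sizing: how many VMs fit under an array's VVol
# limit at a given average VVols-per-VM. 128,000 is the 3PAR figure
# mentioned above, 1,000 the low-end implementation I've seen, and 7
# the customer estimate of VVols per VM.

def max_vms(array_vvol_limit, avg_vvols_per_vm=7):
    return array_vvol_limit // avg_vvols_per_vm

print(max_vms(1_000))     # ~142 VMs
print(max_vms(128_000))   # ~18,285 VMs
```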

On the array controller side changes were required as well: array controllers had to understand protocol endpoints, bindings and special LUNs that have conglomerate status (the admin LUN) with bindings to secondary LUN IDs (sub-LUNs). As far as VASA Providers go, they most definitely don't need to be external and Windows-based; it's up to vendors to choose how to implement this, and while most have gone external, some have embedded the VASA Provider in the array. VVols does have its own HCL for vendors that are certified to support VVols; I just posted an update on this earlier this week.
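To illustrate the relationship just described, the sketch below models a protocol endpoint and its bindings as a simple data structure. This is purely illustrative; the class and method names are my own and don't correspond to any vendor's or VMware's actual API.

```python
# Purely illustrative model (not any vendor's or VMware's actual API):
# the protocol endpoint is the "administrative" LUN with conglomerate
# status, and each VVol a VM actively uses is bound to it through a
# secondary LUN ID (sub-LUN). Bind/unbind are the operations the array
# has to service, e.g. when a VM powers on or off.

class ProtocolEndpoint:
    def __init__(self, admin_lun_id):
        self.admin_lun_id = admin_lun_id
        self.bindings = {}              # secondary LUN ID -> VVol UUID

    def bind(self, vvol_uuid, secondary_lun_id):
        # Array-side effect of a bind request from the VASA Provider.
        self.bindings[secondary_lun_id] = vvol_uuid

    def unbind(self, secondary_lun_id):
        # Array-side effect of an unbind request.
        self.bindings.pop(secondary_lun_id, None)

pe = ProtocolEndpoint(admin_lun_id=256)
pe.bind("vvol-data-0001", secondary_lun_id=1)   # data VVol bound at power-on
pe.bind("vvol-swap-0001", secondary_lun_id=2)   # swap VVol bound at power-on
print(len(pe.bindings))                         # 2 VVols bound through this PE
pe.unbind(secondary_lun_id=2)                   # swap VVol released at power-off
```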

The bottom line is: yes, VVols is a complex architecture; yes, vendor implementations vary; yes, many vendors are way behind on support; yes, it needs to mature more. But I think overall the benefits VVols brings will overcome all this. A year from now I predict you will see much higher adoption of VVols as VASA 3.0 addresses some current shortcomings, storage vendors get caught up with VVols development and customers start to embrace it. Like any new technology, it takes time for partners and the entire VMware ecosystem to catch up, but once they do you will eventually see VVols become the de facto standard for external storage arrays.
