Oct 28 2016

vSphere 6.5 with VVols 2.0 does not yet support in-band bind

One thing I noticed was missing from the vSphere 6.5 release, which I expect to arrive at some point, is support for an in-band VVol bind process that a host must perform when a VM is powered on. The current VVol implementation is an out-of-band process involving the VASA Provider, which acts as the control path between vCenter, ESXi hosts and a storage array. In this post I’m going to go into detail about the bind process and why it would be beneficial for it to be an in-band process.

So first the basics: the concept of binding a VM to its disk files is unique to VVols. With VMFS/NFS we didn’t need this process, as you have a file system in place that can handle file look-ups and locking. With VVols we don’t have a file system written to the storage array; VMs are written natively to the array as VVol objects, which are essentially sub-LUNs (block) or files (NFS). There is still a mini VMFS file system with VVols, but it is contained within the Config VVol of a VM. So because we don’t have a file system, we need a way to connect a VM to its VVols when it is powered on.

When a VM on VVols is powered on, the bind operation is performed, which connects a host to all of the VVols associated with that VM. When a VM is powered off those VVols are then unbound from the host, as there is no need for them to be connected and a host can only bind to a limited number of VVols (4,096). To perform the bind, though, the host needs to know where the VVols of a VM reside, and for that it needs to look up the sub-LUN IDs for each VVol so they can be bound to.

Sub-LUNs are also a new concept with VVols; VMware had to request modifications to the SCSI T10 standards to support them. A sub-LUN is basically a secondary level of LUNs; a host cannot directly see sub-LUNs and they will not be visible during any SCSI scans that a host performs. To connect to a sub-LUN a host must first connect to an administrative LUN, which is visible to the host but is a special LUN with no provisioned storage and a bit set identifying it as part of a conglomerate. This administrative LUN is known as the Protocol Endpoint.

So to look up the sub-LUN IDs for a VVol, a host has to ask someone for that information. The storage array knows the sub-LUN IDs, but a host can’t ask it directly; it must instead make the request to the VASA Provider, which looks up the sub-LUN information on the storage array and passes it on to the host so it can proceed with the bind. Because this process involves an intermediary and is not direct, it is considered out-of-band. Essentially the host is asking for directions to its destination, and once the VASA Provider provides it with an address (sub-LUN ID) it can connect to those VVols directly via its in-band data path.

Now the VASA Provider component can be implemented either within a storage array or as an external component such as a VM appliance; it’s up to each vendor to choose how they want to implement it. Either way, the connection to the VASA Provider is via standard network protocols (TCP/IP) and not via a storage protocol (Fibre Channel/NFS/iSCSI). The connection to the VASA Provider is typically over your network via a management port on the storage array; if it’s external, you connect to whatever VM/server is running the VASA Provider, which then connects to the storage array. Because the VASA Provider is reached over your standard network, this is an out-of-band operation; to be in-band, a host would have to be able to perform this lookup via the data path using whatever storage protocol it uses to connect to the storage array. Below is a diagram which illustrates the components and paths that are used with VVols:


So now that you understand the difference between in-band and out-of-band binding, why does it matter which way you do it? Either way you are getting the job done. Well, performing the bind operation in-band has several advantages related to performance and availability. An in-band operation will complete more quickly and efficiently as a result of having fewer network hops and using faster in-band network connections. In addition, a storage controller will typically have more processing power available to perform the operation faster. There is also none of the overhead imposed by the SOAP-based VASA communication that is used when doing an out-of-band bind operation.

Doing an in-band bind operation also provides better overall availability and reliability, as you are not relying on a third party to perform the operation, which, if it failed, would prevent binding from occurring. This is especially true of external VASA Providers: if they crash, go down or something happens to them for whatever reason, you cannot power on VMs until they are back up. It also simplifies the bind process and makes it less likely that a misconfiguration could impact it.

So it sounds like in-band binding is definitely the way to go, so why isn’t VVols using it yet? It appears VMware needs more time to implement it; I have seen mentions of in-band binding in both the VASA 2.0 and VASA 3.0 developer documentation. It looks like much of the framework to support in-band bind is already part of the VASA 3.0 spec, so I expect the capability to become available in an upcoming release. Here’s what the developer guide says about this:

The vSphere 6.0 and 6.5 releases did not implement in-band bind. Bind is one of a few VVol management operations that are required for VM power-on. Failure to bind results in a “Data Unavailable” event that reflects poorly on storage system reliability. As a member of the T10 SCSI standard committee, VMware has been working to add in-band BIND and UNBIND commands to the SCSI standard. The intent was to define an in-band alternative to the corresponding VASA calls and enable use of the data path between host and storage array to power-on a VM when the network path between host and VASA provider is unavailable. When the T10 standard is finalized the specifications will be added to the VASA specification.

So for now we’ll just have to wait; once VMware completes what it needs to for in-band bind to be enabled in the VASA spec, it will then be up to the storage vendors to make changes on their side to switch over to in-band binding. Regardless of whether a bind operation is in-band or out-of-band, it doesn’t impact the many benefits that VVols provides right now. The benefits of in-band binding will be under the covers, providing an extra little bit of efficiency and reliability. So why wait? Get started using VVols today, and if you want to read more on VVols be sure to check out my huge VVols link collection and also read about what’s new with VVols in vSphere 6.5.

Oct 20 2016

VVols 2.0 with array based replication support announced with vSphere 6.5

VMware recently announced vSphere 6.5, and it includes the next big version of Virtual Volumes (VVols), which is based on the updated VASA 3.0 specification. The initial release of VVols was part of vSphere 6.0, which went GA in March 2015 and was based on VASA 2.0. VVols has been out now for over 18 months, and with vSphere 6.5 you can expect to see maturity, improvements and of course a big capability that was missing in the initial release: support for array based replication.

Before I go into what’s new with VVols 2.0, let’s first revisit what I’m currently seeing around customer adoption of VVols. I’ve spoken to dozens of customers and surveyed hundreds of people in my sessions at Discover, VMUGs and VMworld. When I ask who is using VVols today, I see very few hands go up. That’s really about what I expected at this point; most companies avoid 1.0 products (the same held true with VSAN), and there are additional reasons as well, such as the lack of replication support, not being on vSphere 6.x yet and not understanding what VVols is and the benefits it brings.

I expect that to start to change though; I have also spoken to plenty of customers, some very large, that are in the planning phase for migrating to VVols. The release of vSphere 6.5 should help drive adoption as VVols unofficially becomes a 2.0 product and is more complete with support for replication. As far as the understanding aspect goes, I think VMware overall hasn’t done all that great a job of helping customers with that. My feeling on that is mainly based on my own observations and interactions with VMware. I do know for a fact that the VMware VVols product management team has done a good job promoting VVols, but they are a relatively small part of a big company.

If you watched the VMworld EMEA keynotes where vSphere 6.5 was introduced, you didn’t hear a single mention of VVols. Other features like VSAN, NSX, encryption, vRealize, cloud, VM scale, vCenter, the new Web UI, etc. were all mentioned. There was a brief callout in the vSphere 6.5 press release, but not many people read those. The same was true of the early access blogger briefings that VMware does so the bloggers get the early scoop and can write about it at launch; VVols was strangely absent. With the initial release of VVols they put a lot more effort into marketing and awareness, but this time around it has been relatively quiet. I’d really like VMware to step it up and see people like Pat Gelsinger, Ray O’Farrell and Yanbing Li talking about VVols as well. I know I’m doing my best to evangelize VVols and educate the community, and I know other partners are doing so as well, but it would be good to see VMware try to do more at all levels of their organization.

So let’s now cover what’s new in VVols 2.0, and really the focus here is all on replication, as there is not much else new. Under the covers I expect there are a lot of improvements, but VASA is really a development specification, an enabler for storage vendors to develop their own implementations of array based capabilities that integrate with Storage Policy Based Management (SPBM). As a result every vendor implementation of VVols will be slightly different, as it’s up to each vendor to dictate how they scale, how they implement the VASA Provider, what capabilities they present to SPBM, etc.

With vSphere 6.5 storage array vendors can now implement replication of VVol-based VMs through the creation of Replication Components that are attached to vSphere Storage Policies. Previously, a storage policy was a single set of rules built from the list of capabilities offered by the array. With vSphere 6.5 those rules have been broken up into different capability types, known as Components, that allow you to define a capability once and re-use it in multiple storage policies. There are different Component class types, which include Replication, Performance and Encryption. The figure below shows the Component section in the VM Storage Policy management interface in the vSphere client.

Once you create a Component, you tie it to a vCenter Server, name it and then select the Provider type; in the below figure you can see the multiple types of Providers available in vSphere 6.5, including Replication.

Next you define the properties for the Replication Component; this can include the target Storage Container and disk groups, how frequently replication will occur and desired RPO levels, as shown below.

Once you have your Component defined you can add it to a VM Storage Policy as shown below; note that a Storage Policy can have multiple Components added to it.

Now that your Components are created and assigned to Storage Policies, when you create a VM on VVol storage and choose an existing Storage Policy on the storage selection screen, you will also see the option to select a Replication Group for that VM based on what is defined in the policy, as shown below. You can choose from an existing Replication Group, or a new one will be automatically created for you. Only existing groups that match the replication constraints defined in the policy will be displayed. A replication group is the minimum level of failover and will contain one or more VVol objects (multiple VMs). Any Replication Group that is automatically created will also be automatically removed once there are no more VMs assigned to it.

Some other things to note about replication: all of a VM’s VVol objects will be replicated to the target to ensure VM integrity, including any snapshots attached to the VM. Replication may be array based, but it is designed to be implemented and managed solely by vSphere via SPBM and other vSphere interfaces. You must configure Storage Containers on both the source and target arrays before creating replication profiles. While it is possible to replicate only certain disks of a VM to a target site, a VM must have all of the VVols that it is replicating in the same Replication Group.

So while it is great that we now have array based replication with VVols, how do you actually make use of it in a real world DR scenario? In this release there is no automation component like SRM to orchestrate replication, failover, testing, failback, etc. Eventually vRealize Orchestrator will be able to do much of this, but for now all operations must be scripted using the VVol APIs that exist. There are three main operations that you can do in this release: a test failover, an actual failover (planned or unplanned) and a reverse failover (recovery). All of these operations are performed strictly within the vSphere environment; your storage admin is not involved in any of this.

Under normal operations VVols at the secondary site are hidden from vSphere and cannot be bound to. When you perform a test failover it is not a true failover operation, just a simulated one. The VVols at both the primary and secondary sites remain in their current state, and copies of each replica VVol are made at the secondary site so they may be presented to the vSphere environment and bound to at the secondary site for testing. These copies are in a consistent state, and vSphere will automatically fix up the copies so they can be utilized for testing purposes. Once testing is complete, all of the test VVols at the secondary site will be cleaned up and removed.

An actual failover is initiated from the secondary site; a final sync operation is performed (planned failover) and replication is halted between the sites. If the failover is forced or unplanned, the secondary site is not allowed to contact the primary site. After the failover call is made to the VASA Provider at the secondary site, the VVols at that site become visible and are able to be bound to hosts. After a failover, the only operation allowed is to recover the group (failback). As the primary site can no longer be the replication source, vSphere will issue a reverse operation, which essentially just sets up replication back to the primary site so you can fail back at some point when needed. As part of a planned failover you initiate both the failover and reverse replication at the same time.

As far as the orchestration of all this, I’m still coming up to speed, so as I learn more I’ll provide additional detail. For a planned failover you will have to manually script (i.e. PowerCLI) the workflows, as you don’t have a tool like SRM to build them for you. At some point vRealize Orchestrator and Automation will be capable of doing all this, but until then you will have to build it on your own. I suspect VMware will have tools, docs, sample scripts, etc. to help you with this. I’m actually just kicking off a tech paper that will cover how to do this for 3PAR; however, most of the workflows/scripts should be generic, as they call the standard tasks outlined in the VASA 3.0 specification.
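To give a feel for what such scripting might look like, here is a rough PowerCLI sketch of the three operations described above. This is a sketch only: the cmdlet names come from the SPBM replication cmdlets surfaced in newer PowerCLI releases and should be verified against your PowerCLI version, and the server, policy and fault domain names are all made-up placeholders.

```powershell
# Connect to the vCenter at the recovery site (hypothetical names throughout)
Connect-VIServer -Server vcenter-dr.lab.local

# Find the target-side replication groups for a replicated storage policy
$policy = Get-SpbmStoragePolicy -Name "Gold-Replicated"
$targetGroups = Get-SpbmReplicationGroup -FaultDomain (Get-SpbmFaultDomain -Name "Array-B")

# 1. Test failover: surfaces test copies of the replica VVols at the target,
#    returning VMX paths you can register and power on for testing
$testVmxPaths = Start-SpbmReplicationTestFailover -ReplicationGroup $targetGroups[0]
# ... register and test the VMs, then clean up the test copies ...
Stop-SpbmReplicationTestFailover -ReplicationGroup $targetGroups[0]

# 2. Planned failover: final sync, fail over, then reverse replication
Sync-SpbmReplicationGroup -ReplicationGroup $targetGroups[0]
$vmxPaths = Start-SpbmReplicationFailover -ReplicationGroup $targetGroups[0]
# ... register the recovered VMs from $vmxPaths at the secondary site ...
Start-SpbmReplicationReverse -ReplicationGroup $targetGroups[0]
```

Obviously none of this runs without a vCenter and a VASA 3.0 capable array behind it, so treat it as a starting point for your own workflows rather than a finished script.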

You can also read more about the capabilities in VVols 2.0 on the VMware Virtual Blocks blog.

Oct 19 2016

Automatic space reclamation (UNMAP) is back in vSphere 6.5

A long time ago in a vSphere version far, far away, VMware introduced support for automatic space reclamation, which allowed vSphere to send UNMAP commands to a storage array so space from deleted or moved VMs could be de-allocated (reclaimed) on the array. This was a welcome feature, as block storage arrays have no visibility inside a VMFS volume, so when any data is deleted by vSphere the array is unaware of it and the space remains allocated on the array. UNMAP was supposed to fix that: when data was deleted, vSphere would send a string of UNMAP commands to the array telling it exactly which disk blocks it could have back. Doing this allowed thin provisioned storage arrays to maintain a much smaller capacity footprint.

Shortly after this feature was introduced in vSphere 5.0, which was released back in 2011, problems started surfacing. As the UNMAP operation was real-time (synchronous), vSphere had to wait for a response from the array that the operation was complete. In many scenarios this wasn’t a problem, but some arrays apparently had trouble completing the operation in a timely manner, which would cause timeouts and disk errors in vSphere. As a result VMware quickly issued a KB article recommending that UNMAP support be disabled, and in vSphere 5.0 Update 1 they disabled it completely.

What VMware did next was introduce a manual reclamation method by adding a parameter to the vmkfstools CLI command that allowed UNMAP to be run against an array as a manual operation. While this worked, it took quite a while to execute and was very resource intensive on the array. The reason is that the manual operation simply created a balloon file using unused space on a VMFS volume and then sent UNMAP commands to the array to reclaim it all. The end result was that instead of reclaiming just the blocks from deleted VMs, it tried to reclaim all the remaining free space on the VMFS volume, which was terribly inefficient. You can read all about how this worked in this post I did back then.
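For reference, the manual method looked roughly like this. This is a sketch with a placeholder datastore name; the exact command and options varied by vSphere version, so check the documentation for your release before running anything.

```shell
# vSphere 5.0 U1 era: from the ESXi shell, create a balloon file covering
# 60% of the datastore's free space and UNMAP the blocks behind it
cd /vmfs/volumes/MyDatastore
vmkfstools -y 60

# vSphere 5.5 and later: same balloon-file approach via esxcli;
# -n sets how many blocks are reclaimed per iteration
esxcli storage vmfs unmap -l MyDatastore -n 200
```

Either way it is a manual, whole-free-space operation, which is exactly the inefficiency described above.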

Since that time VMware had never figured out a way to make it work again, until now. In vSphere 6.5 they have again made it an automatic operation, but not in the same way as before. What they did was kind of a compromise: instead of trying to do it all as a synchronous operation, they now schedule it and send UNMAP commands to the storage array in bursts, asynchronously. So it is truly an automatic process now, but it operates in the background, and how fast it works is based on priority levels that can be set on individual VMFS datastores.

Now this only works in vSphere 6.5 and only on VMFS6 datastores; VMFS5 datastores must still use the manual method of reclaiming space with the esxcli command and the balloon file approach. When you create a VMFS6 datastore the default priority will be set to Low, which sends UNMAP commands to the storage array at a less frequent rate. In the vSphere Web Client you will only see the option to change this to either None or Low, with None disabling UNMAP completely. However, using the esxcli command (esxcli storage vmfs reclaim config) you can also change this setting to Medium or High, which increases the frequency at which UNMAP commands are sent by 2x (Medium) or 3x (High) over the Low setting.
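As a sketch of what that looks like from the ESXi shell (the datastore name is a placeholder; verify the exact syntax on your host with the command’s built-in help):

```shell
# Show the current space reclamation settings for a VMFS6 datastore
esxcli storage vmfs reclaim config get -l MyVMFS6Datastore

# Raise the reclaim priority to medium (2x) or high (3x) -- values
# that are not selectable in the Web Client
esxcli storage vmfs reclaim config set -l MyVMFS6Datastore -p medium
```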

Now why did VMware not allow you to choose Medium or High from the Web Client? There is a good reason for that: they hid those options for your own good. UNMAP is still a resource intensive operation; when you do an UNMAP operation you are literally telling the array to de-allocate millions or billions of disk blocks. When you get more aggressive with UNMAP commands, it puts a heavier load on the storage array, which can seriously impact your VM workloads as the array tries to handle everything at once. Having this set to Low is a good compromise, as you get your disk space back automatically but with minimal impact to your VM workloads. If you do set it to Medium or High via esxcli, the Web Client will still display that setting; you just can’t select it there.

So welcome back UNMAP, we missed you and are glad to have you back. Of course if you are using VVols you don’t have to worry about UNMAP at all, as the array has VM-level visibility, knows when VMs are deleted and can reclaim space on its own without vSphere telling it to.

Oct 18 2016

vSphere 6.5 Link-O-Rama

Your complete guide to all the essential vSphere 6.5 links from all over the VMware universe. Bookmark this page and keep checking back, as it will continue to grow as new links are added every day. Also be sure to check out the Planet vSphere-land feed for all the latest blog posts from the Top 100 vBloggers.

Introducing vSphere 6.5 (VMware vSphere Blog)
VMware Advances Cross-Cloud Architecture with New Releases of vSphere, Virtual SAN and vRealize Solutions to Drive IT and Developer Productivity (VMware News Release)
VMware vSphere and vSphere with Operations Management (VMware Datasheet)

VMware What’s New Links

What’s New in vSphere 6.5: vCenter Server (VMware vSphere Blog)
What’s New in vSphere 6.5: Security (VMware vSphere Blog)
What’s New in vSphere 6.5: Host & Resource Management and Operations (VMware vSphere Blog)
What’s New with VMware Virtual SAN 6.5 (VMware Virtual Blocks)
Whats New in Virtual Volumes 2.0 (VMware Virtual Blocks)
What’s New in SRM 6.5 (VMware Virtual Blocks)

Video Links

VMware’s Yanbing Li on Virtual SAN 6.5 (VMware)
vSphere 6.5 Technical Overview Q4 2016 (VMware Partner TV)
vSphere 6.5 Sales Overview Q4 2016 (VMware Partner TV)
Virtual SAN 6.5 Technical Overview Q4 2016 (VMware Partner TV)
Virtual SAN 6.5 Sales Overview Q4 2016 (VMware Partner TV)

Availability (HA/DRS/FT) Links

vSphere 6.5 – What’s new with vSphere 6.5 DRS (Enterprise Daddy)
VMware vSphere 6.5 – HA and DRS Improvements (ESX Virtualization)
VMware vSphere 6.5 Fault Tolerance (FT) Improvements (ESX Virtualization)
VMware vSphere 6.5 – HA and DRS Improvements (Victor Virtualization)
vSphere 6.5 -What’s New with vSphere 6.5 HA & DRS (VMware Arena)
vSphere 6.5 – What’s is in VMware vSphere 6.5 Fault Tolerance? (VMware Arena)
What’s New in vSphere 6.5: Host & Resource Management and Operations (VMware vSphere Blog)
vSphere 6.5: vSphere HA What’s New – Part 1 – UI (vTagion)
vSphere 6.5: vSphere HA What’s New – Part 2 – Admission Control (vTagion)
vSphere 6.5: vSphere HA What’s New – Part 3 – Orchestrated Restart (vTagion)
vSphere 6.5: vSphere HA What’s New – Part 4 – VM Restart Priorities (vTagion)
vSphere 6.5: DRS what’s new – Part 1 (vTagion)
vSphere 6.5: DRS what’s new – Part 2 – Predictive DRS (vTagion)
vSphere 6.5: DRS what’s new – Part 3 – Proactive HA (vTagion)
vSphere 6.5 Encrypted vMotions are Here (vTagion)
Big Improvements to vSphere HA and DRS in 6.5 Release (Wahl Network)
vSphere 6.5 what’s new – DRS (Yellow Bricks)
vSphere 6.5 what’s new – HA (Yellow Bricks)

Documentation Links


Download Links


ESXi Links

Nested ESXi Enhancements in vSphere 6.5 (Virtually Ghetto)
Virtual NVMe and Nested ESXi 6.5? (Virtually Ghetto)

General Links

Quick Summary of What’s New in vSphere 6.5 (vSphere-land)
What’s New in vSphere 6.5 (Ather Beg’s Useful Thoughts)
VMworld EMEA Announcements : vSphere 6.5 (CloudFix)
What’s new in vSphere 6.5 (Come Le Feci)
What’s new in vSphere 6.5? (Enterprise Daddy)
VMware vSphere 6.5 Announced !! (ESX Virtualization)
vSphere 6.5 announced so what is coming?  (iGICS)
What’s New in vSphere 6.5 (Ivobeerens)
VMware vSphere 6.5 announced today, here’s how to download it fast, once it becomes available in Q4 2016 (TinkerTry)
What is new in VMware vSphere 6.5 and VSAN 6.5 (UP2V)
#VMworld Europe 2016: New Products and Product updates (including vSphere 6.5 / vSAN 6.5) (vLenzker)
VMworld 2016 – What’s New in vSphere 6.5 (VMguru)
VMware unveil vSphere 6.5 to kick off VMworld 2016 (vMustard)
What’s New with VMware vSphere 6.5? (VMware Arena)
What’s New in Version 6.5? (VMware Guruz)
vSphere 6.5 an Introduction (VMware Velocity)
vSphere 6.5 – Everything You Need To Know (vTagion)

Installing & Upgrading Links


Knowledgebase Articles Links


Licensing Links


Networking Links


News/Analyst Links

VMware VSAN 6.5 supports containers and physical servers via iSCSI (Computer Weekly)
VMware Upgrades vSphere, VSAN To Prep For Improved Multi-cloud Operations (CRN)
VMware Ushers in Large Number of Product Updates (eWeek)
VMware embraces containers with latest vSphere, Virtual SAN updates (Network World)
VMware Announces VSAN 6.5 & Other Solutions (Storage Review)
VMware hyper-convergence takes small steps with VSAN 6.5 (Tech Target)
VMware Expands ‘Cross-Cloud’ Hybrid Strategy With vSphere 6.5 Update (Tech Week Europe)
VMware waves white flag: vSphere, vRealize, VSAN dock with Docker (The Register)
vSphere 6.5 First Look (Virtualization Review)
VMware Virtual SAN 6.5 Quick Look (Virtualization Review)
VMware Announces New Releases of vSphere, Virtual SAN and vRealize Solutions (VMblog)

Performance Links


Scripting/CLI/API Links

What to Expect in PowerCLI 6.5? (VMware PowerCLI blog)
Restarting vCenter Services in vSphere 6.5 (vTagion)
vSphere 6.5 PowerCLI Module for Encrypted vMotion Management (vTagion)

Security Links

vSphere 6.5 – Secure VMs using vSphere 6.5 Security Features (Enterprise Daddy)
VMware vSphere 6.5 – VM Encryption Details (ESX Virtualization)
vSphere 6.5 – How VM’s are Secured using vSphere 6.5 Security Features? (VMware Arena)
What’s New in vSphere 6.5: Security (VMware vSphere Blog)

Storage Links

Automatic space reclamation (UNMAP) is back in vSphere 6.5 (vSphere-land)
HPE 3PAR StoreServ Is Ready: VMware Announces vSphere 6.5 (Around the Storage Block)
What’s new in vSphere 6.5 Core Storage (Cormac Hogan)
vSphere 6.5: The NFS edition (Why Is the Internet Broken?)
vSphere 6.5 what’s new – VMFS 6 / Core Storage (Yellow Bricks)

vCenter Server Links

VMware vSphere 6.5 – VUM, AutoDeploy and Host Profiles (ESX Virtualization)
VMware vSphere 6.5 – Native vCenter High Availability (VCSA 6.5 only) (ESX Virtualization)
vSphere 6.5 VCSA and Clients Announcements (The Saffa Geek)
VMware vCenter 6.5 – Improvements (Victor Virtualization)
How VCSA rise the level of vCenter (vInfrastructure blog)
What’s New in vSphere 6.5: vCenter Server (VMware vSphere Blog)
VMware vCenter Server Appliance (VCSA) Now Running on PhotonOS (vTagion)
vSphere 6.5 – Deploy VCSA  (vTagion)
vSphere 6.5 – VCSA Backup (vTagion)
vSphere 6.5 – Restore VCSA from Backup (vTagion)
vSphere 6.5 – VCSA Appliance Monitoring and Management (vTagion)
A Look at VMware’s vCenter Server Appliance (VCSA) 6.5 Release (Wahl Network)

Virtual Machine Links


Virtual Volumes (VVols) Links

VVols 2.0 with array based replication support announced with vSphere 6.5 (vSphere-land)
Whats New in Virtual Volumes 2.0 (VMware Virtual Blocks)
3 Key Reasons Customers Move to Virtual Volumes (VMware Virtual Blocks)
vSphere 6.5 what’s new – VVols (Yellow Bricks)

vRealize OPs (vROPs) Links

vSphere 6.5 Operations Management Announcements (The Saffa Geek)

VSAN Links

What’s New with VSAN 6.5 (vSphere-land)
VMworld EMEA Announcements : VSAN 6.5 (CloudFix)
What’s new in Virtual SAN 6.5 (Cormac Hogan)
VMware VSAN 6.5 – What’s New? (ESX Virtualization)
What’s new in VSA 6.5 (vInfrastructure blog)
vSAN 2 Node with Direct Connect (VMware Virtual Blocks)
What’s New with VMware Virtual SAN 6.5 (VMware Virtual Blocks)
What is new for Virtual SAN 6.5? (Yellow Bricks)

vSphere Web Client Links

VMware vSphere 6.5 – HTML5 Web Client and More (ESX Virtualization)
VMware vSphere 6.5 management UI (vInfrastructure blog)
vSphere 6.5: Client Integration Plug-in (CIP) Deprecated! (vTagion)

Oct 18 2016

What’s New in VMware VSAN 6.5

VMware has just announced a new release of VSAN as part of vSphere 6.5, and this post will provide you with an overview of what is new in this release. Before we jump into that, let’s look at a brief history of VSAN so you can see how it has evolved over its fairly short life cycle.

  • August 2011 – VMware officially becomes a storage vendor with the release of vSphere Storage Appliance 1.0
  • August 2012 – VMware CTO Steve Herrod announces new Virtual SAN initiative as part of his VMworld keynote (47:00 mark of this recording)
  • September 2012 – VMware releases version 5.1 of their vSphere Storage Appliance
  • August 2013 – VMware unveils VSAN as part of VMworld announcements
  • September 2013 – VMware releases VSAN public beta
  • March 2014 – GA of VSAN 1.0 as part of vSphere 5.5 Update 1
  • April 2014 – VMware announces EOA of vSphere Storage Appliance
  • March 2015 – VMware releases version 6.0 of VSAN as part of vSphere 6, which includes the following enhancements: all-flash deployment model, increased scalability to 64 hosts, new on-disk format, JBOD support, new vsanSparse snapshot disk type, improved fault domains and improved health monitoring. Read all about it here.
  • September 2015 – VMware releases version 6.1 of VSAN which includes the following enhancements: stretched cluster support, vSMP support, enhanced replication and support for 2-node VSAN clusters. Read all about it here.
  • March 2016 – VMware releases version 6.2 of VSAN which includes the following enhancements: deduplication and compression support, erasure coding support (RAID 5/6) and new QoS controls. Read all about it here.

With this 6.5 release VSAN turns 2 1/2 years old, and it’s remarkable how far it has come in that time frame. Note that while VMware has announced VSAN 6.5, it is not yet available; if VMware operates in their traditional manner, I suspect you will see it GA within 30 days or so as part of vSphere 6.5. Unlike previous versions there isn’t a huge list of things that are new in this release of VSAN, but that doesn’t mean there are not some big things in it. Let’s now dive into what’s new in VSAN version 6.5.

Customer adoption of VSAN continues to increase

With this release VMware is claiming that it has over 5,000 VSAN customers and that they are the #1 hyper-converged vendor. That 5,000 number sounds low to back up the #1 claim, but VMware is basing this on total revenue and not customer counts. VMware had stated that they had a more than $100 million revenue run rate and over 20,000 CPU licenses sold with VSAN back when they had over 3,000 customers, which would put the average VSAN deal size around $30,000. The VSAN growth rate over the past two years, according to VMware, has been as follows:

  • Aug 2015 – VSAN 6.1 – 2,000 customers
  • Feb 2016 – VSAN 6.2 – 3,000 customers
  • Aug 2016 – VSAN 6.5 – 5,000 customers

From these numbers it appears that VMware has added a lot of VSAN customers in the last 6 months, which can be attributed to their aggressive sales/marketing and rapid product development life cycle.

VSAN is going mainstream with the #1 use case being business critical apps

Back when VSAN was first announced, VMware positioned it more for VDI, Tier 2/3, ROBO and dev/test use cases. As VSAN has evolved and acquired increased scalability and resiliency as well as enterprise features, VMware has been claiming since vSphere 6.0 that it is ready for the enterprise and business critical/Tier-1 apps. Apparently VMware has done some customer research (249 respondents) and is claiming that business critical apps are the #1 VSAN use case by quite a large margin. I don’t really doubt that number, as VSAN is a significant financial investment for customers, and when you are making that big an investment in storage you are going to maximize your usage of it. Add in all-flash support and enterprise features, and I can definitely see many customers running business critical apps on VSAN, as I suspect VSAN serves as the primary vSphere storage platform for those customers.

VMware is trying to make All-Flash affordable for everyone

Based on the licensing models in prior releases of VSAN, you had to pay an all-flash tax if you wanted the maximum performance that VSAN can offer by utilizing all-SSD storage. With VSAN 6.2 the Standard license was priced at $2,495/CPU and did not include the ability to use VSAN in an all-flash configuration. To get all-flash support you had to purchase a VSAN Advanced license, priced at $3,995/CPU, which also included de-dupe, compression and erasure coding. VSAN Enterprise was priced at $5,495/CPU and added support for QoS and stretched clustering.

With VSAN 6.5 that all-flash tax is gone, but with a caveat: you can now deploy VSAN in an all-flash configuration with the Standard license, but you do not get the space efficiency features with it; for those you still have to move up to an Advanced license. In addition VMware is offering a new VSAN Advanced ROBO license that brings all-flash and space efficiency features to ROBO at a more affordable price point. This new ROBO licensing will be sold in 25-VM packs; if you exceed 25 VMs you have to move up to the regular VSAN license tiers. All in all, with the affordability of all-flash media these days, it looks like VMware is trying not to alienate customers that want to use all-flash but can’t afford the higher licensing tiers.

New 2-node Direct Connect deployment mode

Remember back in the day when we used special cables to connect two PCs directly together and then used Laplink to transfer data back and forth between them? Because you had a direct connection between the devices, it was cheaper, easier, faster and more secure to transfer data. The same holds true with storage devices: a direct connection eliminates the need to connect two nodes together via a network switch, which was fairly common with SAS storage devices.

In vSphere 6.5 VMware has come out with a new 2-node direct connect deployment model for those use cases where it might be desirable to connect only two VSAN nodes together, such as ROBO or SMB. This can help drive down costs as it eliminates the need for 10Gbps switches; it also reduces deployment complexity and helps customers with compliance concerns, as VSAN traffic will never touch the network. Basically this solution involves connecting a NIC port from one VSAN node directly to another VSAN node using a crossover network cable. In prior versions of VSAN this wasn’t possible, as both VSAN traffic and witness traffic occurred on the same VMkernel port, so if you used a direct connection there was no way to communicate with the witness.

To make this possible, VSAN 6.5 gives you the ability to separate witness traffic onto a separate VMkernel port, which essentially de-couples it from the VSAN traffic flow. To use this solution it is recommended to use two VMkernel connections in an active/standby configuration. You then designate which VMkernel interface will carry the witness traffic by using an esxcli command. You can continue to use vMotion with this type of configuration.
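As a rough sketch of how that esxcli step might look (vmk1 is a hypothetical VMkernel interface name here, so adjust for your environment and double-check against VMware’s 2-node documentation for your exact build):

```shell
# Tag a dedicated VMkernel interface (vmk1 is an example) to carry the
# VSAN witness traffic, decoupling it from the VSAN data traffic.
esxcli vsan network ip add -i vmk1 -T=witness

# List the VSAN network configuration to verify which interface
# carries which traffic type.
esxcli vsan network list
```

These commands run directly on an ESXi host and only take effect there; they are configuration steps, not something you can try outside a VSAN environment.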

VSAN can now provide storage to more than just vSphere hosts with iSCSI support

This is a big one. In prior releases VSAN presented storage to ESXi hosts and other VSAN nodes via a proprietary communication protocol, which meant nothing but an ESXi host could use VSAN for shared storage. In VSAN 6.5 the proprietary protocol is still used as the main transport between hosts, but support for an industry standard protocol has been added in the form of iSCSI. What this means is that potentially any device in the data center, be it physical or virtual, could utilize VSAN as primary storage.

Why is this big? It opens the door for VSAN to be used for just about every use case in the data center and eliminates the barrier that previously required customers to deploy an additional primary storage array to fulfill their non-vSphere shared storage needs. In other words, VSAN is now positioned to take over your entire data center. However, VMware is currently not targeting this solution at non-VSAN ESXi clusters, but those shouldn’t exist in a perfect VSAN world.

iSCSI support is implemented natively within the VMkernel and does not use any type of virtual appliance. There are some scale limitations, with only 128 targets and 1,024 LUNs supported, but the solution is compatible with Storage Policy Based Management so you can extend SPBM to more than just VMs. To use this you must first enable the Virtual SAN iSCSI target service and select a VMkernel port, which is automatically set across all of the VSAN cluster nodes. You can then select a default SPBM policy and optionally configure iSCSI authentication (CHAP). You then configure LUN information such as the LUN number and size (up to 62TB) and optionally multiple initiators.

Enhanced PowerCLI support

PowerCLI support for VSAN has been enhanced for those who want to automate VSAN operations, and now includes things like Health Check remediation, iSCSI configuration, capacity and resync monitoring and more.

Support for Cloud Native Apps running on the Photon Platform

Finally, VSAN support has been extended to additional VMware cloud-native application platforms, now including the Photon Platform. This positions VSAN to be able to handle any VMware-centric container deployment model.


Oct 18 2016

Quick summary of What’s New in vSphere 6.5

VMware just announced vSphere 6.5, almost a year and a half after the release of vSphere 6.0, and this post will give you a quick summary of all the new features and enhancements in this release. There is actually quite a lot packed into this release, and rather than try to cover it all in detail here I will be doing separate posts that go into much more detail on vSphere 6.5, VM Encryption, VSAN, VVols and much more. As much of the new stuff is more minor in nature, I first wanted to highlight a few big things in this release.

  • Support for VVol Replication (VASA 3.0)
  • External protocol support for VSAN (iSCSI)
  • Photon Platform support for VSAN
  • VM-level native encryption via SPBM
  • HTML5 vSphere Client
  • Automatic Space Reclamation (UNMAP)
  • Encrypted vMotion
  • vCenter High Availability
  • HA Orchestrated Restart

And now for the full list, which is based on the VMware-published What’s New doc from the Beta 3 release, combined with my own additions and embellishments.

vSphere Lifecycle Management

  • Enhanced vCenter Install, Upgrade, Patch: Streamlined user experience while deploying, upgrading and patching for vCenter Server. Support for CLI template-based vCenter Server lifecycle management.
  • vCenter Appliance Migration Tool: Single-step migration process for existing Windows vCenter Server to latest release of vCenter Server Appliance. Support for both CLI and UI methods.
  • vSphere Update Manager for vCenter Server Appliance: Fully embedded and integrated vSphere Update Manager experience for vCenter Server Appliance – with no Windows dependencies!
  • Enhanced Auto Deploy: New capabilities such as UI support, improved performance and scale, backup and restore of rules for Auto Deploy.
  • Improvements in Host Profiles: Streamlined user experience and host profile management with several new capabilities including DRS integration, parallel host remediation, and improved audit quality compliance results.
  • VMware Tools Lifecycle Management: Simplified and scalable approach for install and upgrade of VMware Tools, reboot-less upgrade for Linux Tools, OSP upgrades, enhanced version and status reporting via API and UI.
  • (New) vSphere Automation API: A new REST based API, SDKs and Multi Platform CLI (DCLI) is now available to provide simplified VM management and automation of the VCSA based configuration and services.

ESXi Platform and Hardware
  • Expanded Support for New Hardware, Architectures and Guest Operating Systems: Expanded support for the latest x86 chipsets, devices and drivers. NVMe enhancements, and several new performance and scale improvements due to the introduction of native driver stack.
  • Guest OS and Customization Support: Continue to offer broad support for GOSes, including recent Windows 10 builds, the latest from RHEL 7.x, Ubuntu 16.xx, SUSE 12 SPx and CoreOS 899.x, and Tech Preview of Windows Server 2016.
  • VMware Host Client: HTML5-based UI to manage individual ESX hosts. Supported tasks include creating and updating of VM, host, networking and storage resources, VM console access, and performance graphs and logs to aid in ESX troubleshooting.
  • Virtual Hardware 13: VMs up to 6TB of memory, UEFI secure boot for guest OS.
  • Virtual NVMe: Introducing virtual device emulation of NVMexpress 1.0e specification.
  • Increased Scalability and Performance for ESXi and vCenter Server: Continued increases in scale and performance beyond vSphere 6 – stay tuned for more information. For reference, with vSphere 6, cluster maximums increased to support up to 64 nodes and 8K VMs. Virtual machines supported up to 128 vCPUs and 6TB vRAM, and hosts supported up to 480 physical CPUs, 12 TB RAM, 64 TB datastores, 1000+ VMs. Also adding support for 25G and 100G Ethernet as well as 32G Fibre Channel.
  • (New) Para-Virtualized RDMA: Introducing para-virtualized RDMA driver in Linux environment which is compliant to RDMA over Converged Ethernet (RoCE) version 1.0.
  • (New) RDMA over Converged Ethernet (RoCE): Introducing RoCE version 1.0 and version 2.0 support and associated I/O ecosystem.
  • (New) I/O Drivers and Ecosystem: Updating existing and introducing newer versions of IO device drivers. This includes various NVMe, NIC, IB, SATA and HBA device drivers. For a detailed list of drivers please refer to the VMware vSphere Download Beta Community, ESXi section.
  • (New) vSphere Fault Tolerance: Performance improvements, multi-NIC aggregation on the FT network for better performance with shared 10Gb+ NICs, interop with DRS (automated initial host placement)

Storage
  • Enhancements to Storage I/O Control: Support for I/O limits, shares and reservations is now fully integrated with Storage Policy-Based Management. Delivers comprehensive I/O prioritization for virtual machines accessing a shared storage pool.
  • Storage Policy-Based Management Components: Easily create and reuse Storage Policy Components in your policies to effectively manage a multitude of data services including encryption, caching, replication, and I/O control.
  • Enhancements in NFS 4.1 client: Support for stronger cryptographic algorithms with Kerberos (AES), support for IPV6 with Kerberos and also support for Kerberos integrity check (SEC_KRB5i). We have PowerCLI support for NFS 4.1 as well in this release.
  • Increased Datastore & Path limit: Number of LUNs supported per host increased to 1024 and number of Paths increased to 4096. (Note I heard this was scaled back to 512 LUNs & 2048 paths)
  • (New) 512e drive support: Due to the increasing demand for larger capacities, the storage industry has introduced advanced formats, such as 512-byte emulation, or 512e. 512e is the advanced format in which the physical sector size is 4,096 bytes, but the logical sector size emulates a 512-byte sector size. Storage devices that use the 512e format can support legacy applications and guest operating systems. When you set up a datastore on a 512e storage device, VMFS6 is selected by default, but 512e can also be used with VMFS5 datastores.
  • (New) VMFS6: SESparse will be the snapshot format supported on VMFS6, we will not be supporting VMFSparse snapshot format in VMFS6, though it will continue to be supported on VMFS5. Both VMFS 6 and VMFS 5 can co-exist. There is no inline upgrade from VMFS5 to VMFS6 available but customers can do data migration from VMFS5 to VMFS6 datastore using Storage vMotion.
  • (New) Virtual Volumes Replication: Support for VVol replication is included as part of the new VASA 3.0 spec. You can now use Virtual Volumes to replicate your Virtual Machines using your storage array’s native replication capabilities. This delivers a policy driven and integrated experience to deploy VM-centric replication offloaded to your array.
  • (New) Enhancements in VMware vSphere Storage APIs – Data Protection:
    • Configurable VSS parameters such as VSS_BACKUP_TYPE
    • Configurable timeout for creating quiesced snapshots
    • Transfer compressed data using NBD mode
    • Reuse vCenter Server session
  • Automatic Space Reclamation (UNMAP): VMFS6 now supports automatic UNMAP, which asynchronously tracks freed blocks and sends unmaps to storage in the background to release free space on thin-provisioned storage arrays that support unmap operations. This frees up storage space when you delete a VM, migrate a VM with vSphere Storage vMotion, consolidate a snapshot, and so on.
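As a hedged sketch, the per-datastore automatic reclamation behavior can be inspected and adjusted from the ESXi 6.5 command line (the datastore name below is hypothetical; verify the options against the esxcli reference for your build):

```shell
# Show the current automatic UNMAP (space reclamation) settings for a
# VMFS6 datastore ("Datastore01" is an example name).
esxcli storage vmfs reclaim config get --volume-label=Datastore01

# Disable automatic reclamation by setting the priority to "none"
# (VMFS6 defaults to "low"); set it back to "low" to re-enable.
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=none
```

Again, these are host configuration commands that only apply on an ESXi 6.5 host with a VMFS6 datastore present.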

Virtual SAN
  • (New) Virtual SAN iSCSI support: Adds native iSCSI support within VSAN. The main use cases are supporting physical servers and also Microsoft clustering technologies that require shared disks. One can create iSCSI targets and LUNs on VSAN and use an iSCSI initiator to access the storage.
  • (New) 2 node direct connect: VSAN now has the ability to directly connect two nodes using crossover cables. This provides higher network availability and allows you to separate VSAN data traffic from witness traffic.
  • (New) All-Flash support available in VSAN Standard edition: New licensing models allows customers to use All-Flash with VSAN Standard but without space efficiency features (de-dupe, compression, erasure coding)
  • (New) VSAN Advanced for ROBO licensing: New offering brings all-flash VSAN and space efficiency features to ROBO customers. Complements existing ROBO VSAN Standard offering and sold in 25-VM packs.
  • (New) PowerCLI Support: Enhanced PowerCLI support for Health Check and remediation, capacity and resync monitoring, proactive testing and 2 node/stretched cluster support.
  • (New) Support for Cloud Native Apps: To complement existing support for vSphere Integrated Containers, VSAN now supports the Photon Platform as well which is the non-vSphere SDDC stack for deploying containerized applications at scale.

Management and Availability
  • Content Library Improvements: Enhancements to Content Library including ISO mount to a VM directly from Content Library, VM guest OS customization, simplified library item update capabilities and optimizations in streaming content between vCenter Servers.
  • Enhanced DRS: Enhancements to DRS settings with the addition of DRS Policies that provide an easier way to set several advanced options such as even distribution of virtual machines, consumed vs. active memory, and CPU over-commitment.
  • Orchestrated VM Restart using HA: Orchestrated restart allows admins to create dependency chains on VMs or VM groups, allowing for a restart order of these dependencies chains or multi-tiered applications (should an HA restart occur). Not only will Orchestrated Restart do this in the order specified by the admin, it can also wait until the previous VM is running and ready before beginning the HA restart of a dependent VM.
  • vSphere Web Client enhancements: New Web Client UI features like Custom Attributes, Object Tabs, and Live Refresh are presented alongside other performance and usability improvements.
  • (New) vSphere Web Client Reorganization of tabs: The tabbing structure for most vSphere objects has been changed to be more familiar and easier to use.
  • (New) Client Integration Plugin (CIP) removal: Client Integration Plugin was previously necessary for a certain set of functions in the vSphere Web Client. Most of these have been redesigned to remove any dependency:
    • Datastore File Upload/Download
    • OVF Export, Deploy
    • Content Library Import/Export

The only remaining function that has dependencies is Windows Session Authentication, so any user that does not use this functionality does not need to install CIP.

  • (New) vSphere Client (vSphere HTML5 Web Client): The popular fling has been integrated within vCenter. Currently it requires manually starting the service, but a few quick steps and it will become available alongside the vSphere Web Client.
  • (New) Proactive HA: Proactive HA leverages sensor data from server vendors to add an additional layer of availability for VMs by proactively leveraging DRS to vMotion virtual machines off of a degraded host prior to the host failing. This will result in fewer potential HA restarts and data loss by not requiring restarts, rather, continuing “business as usual” because of the vMotions that take place. (This can be modeled using the Beta Proactive HA Demo Plugin)
  • (New) VMware Platform Service Controller enhancements: New PSC HA features include zero configuration high availability with automatic vCenter failover to another PSC within a site. New PSC Site Management client side tools for viewing your topology and viewing PSC HA status.
  • (New) vCenter High Availability: Protect mission critical vCenter deployments with a native high availability solution that will not only protect against host and hardware failures, but also against vCenter application failures. The new vCenter HA solution provides automated failover from active to passive vCenter with expected RTO < 5 mins, and will only be available to the vCenter server appliance.
  • (New) vCenter Server Appliance and Database Management: The new 6.5 Appliance Management Interface includes usage monitoring of the embedded vCenter Postgres database by data type and utilization trends, and sends database usage alerts directly into the vSphere web client. Monitor appliance CPU, Memory, and networking utilization trends for more targeted troubleshooting. Send syslog data to remote hosts.
  • (New) Native vCenter backup and restore: Back up the vCenter Server Appliance and Platform Services Controller in three simple steps in the Appliance Management Interface using industry-standard protocols. The file-based backup will include the embedded Postgres database, vCenter inventory, and all configuration files required to recover vCenter. Restore the appliance from the new vCenter Server 6.5 installer. (Note: VADP-based backup is still supported for vCenter 6.0 and above)
  • (New) Upgrades over IPv6: Upgrade vCenter management network over IPv6 protocol. Management network must entirely run on IPv6 or entirely on IPv4.
  • (New) Virtual Machine Console: VMRC 9.0 supports Linux as well as Windows and Mac OS, auto-detect proxy settings, and access to VM consoles without host permission. HTML console supports additional languages (Japanese, German, Spanish, Italian, Portuguese) and mouse display without VM Tools installed.
  • (New) Network-aware DRS: Network-aware DRS is used to determine if the host that DRS has chosen for workload placement of a VM is network-saturated or not. If the chosen destination host is above 80% saturated, it will attempt to place the workload on a different host. This feature does not balance the cluster based on network saturation; rather, it uses network utilization metrics to ensure the final target host will not perform negatively from a networking standpoint.

Security
  • (New) VM-level Encryption: Native VM-level encryption managed by storage policies (SPBM). VM and storage agnostic encryption that encrypts VM and VMDK files with no access to the encryption keys from the guest OS.
  • Secure Boot Support for ESXi Host and Guest VM: At boot time, we have assurance that ESXi and guest VMs are booting the right set of VIBs. If the trust is violated, ESXi and the VMs will not boot and customers can capture the outcome.
  • Enhanced vCenter Events, Alarms and vSphere Logging: Enhancements to vSphere Logging and events to provide granular visibility into current state, changes made, who made the changes and when.
  • (New) Encrypted vMotion: Data transferred over vMotion protocol will be encrypted providing confidentiality, integrity, and authenticity of data transferred during live migrations.

Oct 15 2016

How to experience VMworld EMEA 2016 without attending it

VMworld EMEA usually ends up being a re-run of VMworld US, but this year promises to actually be more exciting than the US edition was. The main reason for this is that VMware announced very little at VMworld US, and overall the announcements were a bit lackluster. This year the timing of the new versions of VMware’s core products was a bit too far out from VMworld US, so VMware will be doing a lot of big product announcements at VMworld EMEA instead. As a result you’ll want to pay attention to VMworld EMEA, and if you can’t attend I’ll tell you how to do it.

The bloggers

There are hundreds of bloggers that write about VMware technology and there is no shortage of bloggers that attend VMworld and report on what they see, hear and experience at the show. You can expect bloggers to write about anything from thoughts and opinions on products and companies to what parties they attended to live blogging about sessions they attend. Even bloggers who are not attending the show will be posting about the announcements at the show as VMware has held pre-show blogger early access briefings. So blogs are a great way to tap into all the information relevant to the show and the announcements.

One of the best ways to keep up with what the bloggers are posting is to check out my recently re-vamped and re-launched Planet vSphere-land, which serves as a post aggregator for the top 100 vBloggers as well as VMware’s official blogs. The VMworld EMEA website also has a special list of VMworld bloggers along with feeds to keep you informed of all the latest blogger posts. You can also check out the official VMworld blog as well.

Twitter
If you’re not on Twitter by now, why not? You may not be that social or the chatty type, but it’s a great way to listen in on the thousands of people on social media all talking about VMworld in real time. So if you don’t have an account, sign up now before VMworld and then use the many VMworld-focused Twitter resources to listen in and participate. The @VMworld account is the official account for VMworld, so make sure to follow it; you might also follow the most popular bloggers to see what they are saying about VMworld. You can see the top bloggers here along with their Twitter handles, and also check out my list of the Top 100 VMware/virtualization people to follow.

You’ll also want to keep an eye on hashtags that flag tweets related to a specific topic. The official hashtag for VMworld is #vmworld (not #VMworld2016 or #VMworldEMEA); there are also hashtags specific to each session (#sessionID) and fun ones such as #vmworld3word and #vmworldselfie. VMware also has a Social Stream of Twitter feeds available that is like a giant tweet billboard you can watch to see the latest Twitter action at VMworld.

Live streams

VMware doesn’t live stream breakout sessions, but they do live stream the two main general sessions, which is where all the new product announcements are made. The opening general session (Tuesday) is historically more focused on VMware’s high-level vision and strategies as heard from Pat Gelsinger, and I’m sure Michael Dell will make an appearance. The second general session (Wednesday) is more focused on the details of specific products and technologies and typically features more techie speakers such as Sanjay Poonen, Ray O’Farrell, Kit Colbert and Yanbing Li. The general sessions are at 9:00am London time, so they are pretty late for us in the states (1:00am PST), but they always post the replay for you to watch after the session is over. To sign up to view the general sessions live, head on over to VMware’s general session live streaming page and pre-register; you can put in your email address and be sent a calendar invite for them.

VMworld TV
I’m pretty sure VMware will have a camera crew roaming around VMworld EMEA recording content for VMworld TV, which will be narrated in part by the famous Mr. Sloof. In the past they have featured a nice roll-up of the day’s happenings that summarized everything using the recordings made throughout the day. Keep an eye on the VMware TV YouTube channel for posted recordings.

View recorded sessions

Almost all breakout sessions at VMworld US were recorded, as it’s impossible for attendees to see more than a small fraction of the total sessions (700+). The recordings allow attendees to watch sessions after the event is over and check out all the great sessions that they could not attend while at the event. The audio for all sessions is recorded and presented along with the slides for each session; at past VMworlds the more popular sessions have been video recorded as well.

This year VMware decided to release all of the recordings to the general public, so you can go there and watch them now. Most of the sessions at VMworld EMEA are re-runs of the US sessions and are not recorded again, except for panel-type sessions that are usually different at each event. Because of the unique announcements at VMworld EMEA this year, there will be sessions that were not at VMworld US; VMware has posted a list of them here. I suspect they will be recorded and available after the show as well.

Oct 14 2016

VMware on Amazon Web Services – if you can’t beat ’em, join ’em

Oh the irony in yesterday’s announcement by VMware that they are now partnering up with Amazon Web Services to offer vSphere as a Service running in the AWS cloud. This is quite a change of heart from their stance years ago when they saw AWS as a rival and enemy. Remember back in 2013 when they first announced their vCloud Hybrid Service running in VMware-managed data centers in conjunction with Savvis? They have steadily built out what is now called their vCloud Air infrastructure to many locations across the globe to provide vSphere as a Service to customers and better compete with their main rival AWS in the cloud market.

As part of that rivalry AWS launched a management portal intended to attract VMware customers to AWS by allowing them to easily import VMs into AWS through vCenter. VMware quickly responded warning customers of all the management and integration complexities that could be jeopardized by doing that. Pat Gelsinger also lashed out at AWS saying “We want to own corporate workload, We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever.”

Fast forward to today and it appears VMware has had a change of heart and is now partnering with Amazon Web Services to offer vSphere as a Service on AWS. What is not clear, though, is what VMware intends to do with their existing vCloud Air infrastructure, which is offered both within its own 10 data centers and across over 4,000 cloud partners around the world. It would seem they are simply trying to expand their presence to one of the biggest cloud service providers in the world; Amazon enjoys 31 percent cloud market share and is growing like crazy (63% YoY). It makes total sense that VMware would want to tap into that: AWS has a great reputation and lots of cloud muscle, and VMware opens itself to a much bigger market. With that much added capacity, VMware may eventually decide to get out of the data center business and rely on its partners, which makes sense as they are primarily a software company.

So let’s now take a look at the details of this announcement. One thing to note is that this new service has only been announced and there is a bit of a long wait for it to be available; VMware is stating that it will be available in mid-2017. If you are interested in trying it out, VMware does have a beta form that you can fill out to apply and also get updates about the service. There will be two licensing options for this service, on-demand (hourly) and subscription-based (1 year, 3 year), and customers will also be able to leverage their existing investments in VMware licenses through VMware customer loyalty programs.

Another thing to note is VMware is not referring to this as vCloud Air; it is specifically being referred to as VMware Cloud on AWS. vCloud Air is specific to all of their other public cloud offerings and services. VMware laid the groundwork for this type of solution at VMworld in August with their Cross-Cloud Architecture announcement. While VMware Cloud will be running on AWS infrastructure, VMware will still be managing it. The overall solution allows customers to run ESXi on dedicated infrastructure (not nested) in AWS data centers, with vCenter running in AWS for management, while also having management (vCenter) running within their own data centers for on-prem vSphere, and at the same time having access to value-added AWS services.

VMware is offering their full software stack on AWS, which includes VSAN for storage and NSX for networking, replication capabilities and more. The offering is 100% managed by VMware: you buy it through VMware, the configuration and upgrades to the environments are handled by VMware, and support also comes through VMware.

Because this is a 100% native vSphere environment, it will be managed by the customer using all of the native tools, scripts and familiar UIs that they use to manage their own vSphere environment; there is no AWS management UI layered on top of this. Because it is running in AWS, re-sizing is simple, as additional AWS infrastructure will automatically be allocated and added into existing clusters. Since it can be managed alongside existing customer on-premises vSphere infrastructure, you could potentially also migrate VMs via vMotion back and forth as needed.

There are a few key components in this solution. The first is what VMware refers to as the Service Console (not to be confused with the ESX Service Console); the Service Console is a VMware-provided service that runs as a web application on VMware’s website (not AWS). The Service Console provides you with all of the administrative functions for the service itself, which includes sign-up, provisioning, scaling up/down, billing and more. You are not doing any direct vSphere management through the Service Console. The next component is the Cloud Data Center itself, which is simply the combination of AWS-supplied hardware and the vSphere software stack. The AWS Global Infrastructure is the next component, which is essentially their networking, data center services and everything else that makes AWS tick; again the billing is all-inclusive and comes from VMware itself, not AWS. The whole solution is designed to look like it is coming directly from VMware, with AWS operating transparently in the background.

The Service Console UI is a simple HTML5 web-based interface that VMware developed for the initial setup and ongoing management of the VMware Cloud services on AWS. From here you can create and deploy new VMware Cloud environments, see the status of existing ones, open the vCenter web UI for each instance and perform other actions for provisioning, scaling and billing. It is all designed to support VMware’s REST APIs, so you could automate many of these actions through scripting. The authentication mechanism is the existing My VMware one that VMware uses today to manage support, licenses and billing, which allows you to have one account for both your on-prem vSphere and AWS vSphere cloud environments. Again, it’s designed to be a one-stop shop at VMware for everything.

Next up is the combination of the AWS hardware and infrastructure with the VMware software stack. It’s pretty much the same stuff that you would deploy in your own data center. vCenter Server runs as an appliance; it can be deployed standalone or in linked mode for single pane of glass management with your own on-prem vCenter environment. VSAN, NSX Manager and Platform Services Controller are installed and available, and of course as many ESXi hosts are configured on dedicated hardware as needed to support your requirements based on the capacity you specify. All of this is pre-configured and pre-provisioned as part of the service; you do not have to set any of this up yourself, which is what this type of solution is all about: insert credit card and out pops a running vSphere environment. The other nice thing about this is VMware is responsible for keeping your environment up to date with patches and new vSphere versions; you don’t have to do a thing.

Finally, the below figure illustrates how it can be deployed in any of the many worldwide AWS regions that exist today and in the future. Customers can connect to their VMware clouds using IPsec public network connections or direct connections to AWS.

VMware’s goal with this is to eliminate boundaries between public clouds and private data centers and allow customers to more easily build hybrid cloud environments. Of course, for VMware this is a win/win situation: no matter where you run vSphere, you’re still a VMware customer running their software stack. As VMware only sells software, they really don’t care where the hardware comes from. By design this solution provides greater flexibility and more choices for where customers can run their vSphere environments. You can find out more about this new offering at the below links:

Oct 10 2016

Want to win a sweet kit for your home lab?

I know I do; my home lab is getting pretty dated and takes up a lot of space, and here’s your chance to win one courtesy of Turbonomic. I’ve previously written about the TurboStack, which is based on the Intel NUC, a small form factor PC packing a lot of computing muscle. Turbonomic continues to give away one of these sweet rigs, valued at over $1,300, every month, so you have plenty of chances to win one. The TurboStack is a complete home lab solution and includes the Intel NUC with a dual-core i5 CPU and 16GB RAM; also included are a Synology DS916+ 4-bay NAS unit, spinning and SSD drives and a Cisco SG300 10-port Gigabit managed switch. All combined, this provides everything you need for a home lab that is quiet and will not take up a lot of room. For software, the TurboStack is built on the OpenStack Juno build and also includes a full NFR license for Turbonomic 5.5.

So how do you win one? It’s simple: just watch a short video and fill out an entry form. For 3 minutes of your time you have a chance to win an awesome kit and also learn what Turbonomic is all about.


Oct 05 2016

Who were the best vendors at VMworld 2016?

Every year since 2007 TechTarget has done their Best of VMworld awards, which highlight the best vendors at the show within specific categories as chosen by a panel of independent (non-vendor) judges. I’m always curious to see who receives the top honors at these events as it often highlights vendors I may not have heard of before. As a former judge myself for several years I know the process that goes into making the selections and always felt it helped me learn more about the many innovative vendors in the VMware ecosystem.

Vendors have to nominate themselves to be eligible for consideration by filling out a form on TechTarget’s website before the show. Judges are picked from an independent pool of customers and VARs and then assigned to a specific category. Judges review the vendors in their category before the event and often pre-judge to shrink down the number that they have to visit at the show. There is a list of rules and criteria for consideration when trying to determine which vendors are the best. During the show judges visit a select group of vendors to ask questions and find out more about the vendor’s product that was nominated. Judges then meet together, discuss their picks for the best vendors in each category and then also pick one of the category winners as the overall Best in Show.

Before I list the winners of each category I wanted to give my perspective on these awards. If you look historically at past winners, you typically won’t see big-name vendors like IBM, EMC and Symantec winning these awards. The reason is that these awards tend to be about uncovering those innovative smaller companies that are doing things uniquely and outside the box. I’m not saying big companies can’t innovate, but startups often bring fresh ideas and perspectives to doing things in a way that nobody has ever tried before. They are not afraid of taking risks, going against the status quo and solving a problem in a whole new way.

I judged the security category each year that I did it, and some of my picks for the winner of that category were companies like HyTrust and Reflex Systems. I knew right away when I talked to these vendors and saw their products that they were something special. Sometimes it’s not so easy, though, as there are so many vendors with great products in the VMware ecosystem and a lot of small startups, all with their own ideas, trying to capitalize on the opportunity that virtualization has created for new products. At the end of the day, though, the judges make their choices, no matter how easy or difficult the decision, and based on their opinions the best vendors at VMworld are chosen.

So here are this year’s winners, with one hiccup: Cohesity DataProtect 3.0 was originally chosen as the winner for Data Protection but was later found ineligible, as the product release it was being judged on had not yet shipped, which is a requirement for eligibility. As a result the two finalists were chosen as co-winners in that category.

Category winners:

  • Data Protection – Co-Winners: StorageCraft ShadowProtect SPX and Rubrik Firefly 3.0
  • Workload Management & Migration – Embotics vCommander 5.7.2 (Finalists: Velostrata 2.0 and ExtraHop Networks)
  • Security – Shavlik Protect (Finalists: Thycotic Secret Server and GuardiCore Centra Security Platform)
  • Virtualization & Cloud Infrastructure – NVIDIA GRID with Horizon 7 (Finalists: Nutanix Xpress and Actifio Sky)
  • Desktop & Application Delivery – Workspot VDI 2.0 Solution (Finalists: Unidesk 4 and Citrix Secure Browser)
  • Networking & Virtualization – VeloCloud Cloud-Delivered SD-WAN (Finalists: Paessler PRTG Network Monitor and VMware vRealize Network Insight)
  • Agility & Automation – Tufin Orchestration Suite (Finalist: Quali CloudShell Cloud Sandbox Software)
  • Judge’s Choice Disruptive Technology – Lakeside Software Ask SysTrack
  • Judge’s Choice Startup Spotlight – StacksWare

And chosen as Best in Show, the top honor, is the Tufin Orchestration Suite.

Congrats to all the winners this year! If you are interested in seeing past years’ winners you can view them here:

Oct 03 2016

Knock, knock – Who’s there – Vembu – Vembu who?

Vembu VMBackup for vSphere, that’s who.

I’ll be honest: when a data protection company called Vembu reached out to me last month I have to admit I had not heard of them before. Despite working neck deep in the virtualization world for the last 10 years and having attended every VMworld for the last 9 years, Vembu is a company I had never heard mentioned. A big part of the reason for that is that Vembu is based out of India and initially focused on the managed service provider (MSP) market, providing their StoreGrid software for MSPs to white-label and re-brand to offer as a service to customers. Even ESG has declared Vembu “The Biggest Little Data Protection Company You Probably Haven’t Heard Of (Yet)”.

Vembu has actually been around for over 12 years, and I’m going to tell you a little bit about them. Vembu is a privately held data protection company based in India that recently opened an office in Texas and is now trying to expand their presence into the customer segment. To that end, in late 2014 they shifted focus from the MSP market to developing their BDR Suite, a collection of products for on-premises, offsite and cloud backup and disaster recovery across diverse IT environments including physical, virtual, applications and endpoints.

The Vembu BDR Suite caters to the backup needs of the modern data center running VMware/Hyper-V (Vembu VMBackup) as well as physical Windows IT environments (Vembu ImageBackup). They continue to provide all the features of Vembu StoreGrid under the Vembu NetworkBackup product name, which is also part of the Vembu BDR Suite. They have a couple of products for VMware environments, including VMBackup for VMware, OffsiteDR for VMware and BDR360 for VMware. VMBackup for VMware has pretty much everything you would expect a backup application to have and more, such as:

  • Agentless VMware Image Backup
  • VM Replication for High Availability
  • VMware Hot-Add and SAN transport mode for LAN free data transfer
  • CBT enabled incremental data transfer using VMware VADP
  • Supports VMware vSphere v6 which includes VMware Virtual Volumes and Virtual SAN
  • Quick VM Recovery
  • Application-Aware Image Backups
  • VembuHIVE File System, a File System of File Systems for efficient backup storage
  • Flexible & Configurable Retention Policies
  • Vembu Universal Explorer for Microsoft Exchange, SQL, Active Directory and SharePoint

They also provide value-added features such as automated backup verification, quick VM recovery from backup, instant file-level recovery with Universal Explorer, building Virtual Labs from Storage Repositories and Cross Hypervisor Migration (V2V). That sounds like a whole lot of great stuff for a backup application to have; now wait until you see their pricing, which they post right on their website.

Most everything is licensed per host CPU socket: VMBackup for VMware is only $360 per CPU socket/annum. If you want to use OffsiteDR for VMware to replicate to your own data center, it’s only $90 per CPU socket/annum, or CloudDR for VMware to the Vembu Cloud is only $0.20 per GB/month. In addition to data protection they also offer BDR360 for VMware, which provides centralized monitoring and management for only $60 per CPU socket/annum.
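To put those list prices in perspective, here is a quick back-of-the-envelope sketch based only on the per-socket and per-GB figures above; the function name and scenario are made up for illustration, and actual quotes from Vembu may differ.

```python
# List prices taken from the post above (illustrative only).
VMBACKUP_PER_SOCKET = 360    # $/CPU socket/annum
OFFSITEDR_PER_SOCKET = 90    # $/CPU socket/annum
CLOUDDR_PER_GB_MONTH = 0.20  # $/GB/month

def annual_cost(sockets, cloud_gb=0):
    """Rough annual cost for VMBackup + OffsiteDR, plus optional CloudDR capacity."""
    software = sockets * (VMBACKUP_PER_SOCKET + OFFSITEDR_PER_SOCKET)
    cloud = cloud_gb * CLOUDDR_PER_GB_MONTH * 12
    return software + cloud

# A 4-host cluster of dual-socket servers plus 500 GB in the Vembu Cloud:
print(annual_cost(sockets=8, cloud_gb=500))  # 8*450 + 500*0.20*12 = 4800.0
```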

So if you’re in the market for an affordable data protection solution I’d highly recommend you give Vembu a serious look. To help you out I’ve included a few links below to get you started:

Sep 16 2016

Containers & VVols – a technical deep dive on new technologies that revolutionize storage for vSphere

Want to know more about VVols? Check out the VMworld 2016 edition of the session that I did back at VMworld 2015. You can also checkout the many other great sessions at VMworld 2016 on VVols. And if you are in the Chicago area come see it live on 9/22 (minus Containers).

Sep 16 2016

IT Pro Day is coming – don’t miss out

SolarWinds is doing their 2nd annual IT Pro Day next week, on Sept. 20th. They have some cool and fun stuff planned to celebrate IT Pros all over the world. Among the fun things on their IT Pro website are a Choose Your Own Adventure simulator that you can run through as either an End User or an IT Pro; you can also watch some hilarious videos and sign up for a cool t-shirt giveaway if you take their fun quiz. While you are there you can also check out their free management tools to help make life easier. So if you’re an IT Pro, head on over there and check it out.



Sep 15 2016

Top 10 reasons to start using VVols right now

VMware’s new storage architecture, Virtual Volumes (VVols), has been out for a year and a half now as part of the vSphere 6.0 release, yet adoption continues to be very light for a variety of reasons. This kind of adoption can be typical of any 1.0 release as users wait for it to mature a bit and to better understand how it differs from what they are used to using. VVols brings a lot of great benefits that many are unaware of, so in this post I thought I would highlight them to try and make a compelling case to start using VVols right now.

10 – You don’t have to go all in

It’s not all or nothing with VVols; you can easily run it right alongside VMFS or NFS on the same storage array. Because VVols requires no up-front space allocation for its Storage Container, you will only consume space on your existing array for VMs that are put on VVol storage. Since VVols is a dynamic storage architecture, whatever free space remains on the array is available to your VVols Storage Container, which is purely a logical entity, unlike a VMFS volume, which requires up-front space allocation.

You can easily move running VMs from VMFS to VVols using Storage vMotion, and back again if needed, or create new VMs on VVols. This allows you to go at your own pace, and as you move VMs over you can remove VMFS datastores as they are emptied out, which provides more available space to your Storage Container. Note that Storage vMotion is the only method to move existing VMs to VVols; you cannot upgrade VMFS datastores to VVol Storage Containers.

9 – Gain early experience with VVols

VMware never keeps two of anything around that do the same thing; they always eventually retire the old way of doing it, as it is double the development work for them. At some point VVols will be the only storage architecture for external storage and VMFS will be retired. How long did you wait to switch from ESX to ESXi, or from the vSphere C# client to the web client? Did you wait until the last minute and then scramble to learn it and struggle with it for the first few months? Why wait? The sooner you start using it, the sooner you will understand it, and you can plan your migration over time instead of waiting until you are forced to do it right away. By gaining early experience you will be ahead of the pack and can focus on gaining deeper knowledge over time instead of being a newbie who is just learning the basics. There is no shortage of resources available today to help you on your journey to VVols.

8 – Get your disk space back and stop wasting it

With VVols both space allocation and space reclamation are completely automatic and real-time. No storage is pre-allocated to the Storage Container or Protocol Endpoint, and when VMs are created on VVol storage they are provisioned thin by default. When VMs are deleted or moved, space is automatically reclaimed, as the array can see VMs as objects and knows which disk blocks they reside on. No more manually running time- and resource-intensive CLI tools such as vmkfstools and esxcli to blindly try to reclaim space on the array from deleted or moved VMs. VVols is designed to let the array maintain a very thin footprint without pre-allocating space and carving up your array into silos like you do with VMFS, while at the same time reclaiming space in real time.
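The behavior above can be sketched with a toy model; the class and method names here are made up for illustration and are not a real array API. Because the array tracks each VM as its own set of VVol objects, deleting a VM returns its blocks to the free pool immediately, with no manual reclaim pass.

```python
# Illustrative model only: an array that knows which space belongs to which VM.
class VVolArray:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.vms = {}  # VM name -> GB actually written (thin provisioned)

    def free_gb(self):
        # Free space is whatever the VVol objects have not consumed.
        return self.capacity_gb - sum(self.vms.values())

    def write(self, vm, gb):
        if self.free_gb() < gb:
            raise RuntimeError("array out of space")
        self.vms[vm] = self.vms.get(vm, 0) + gb

    def delete_vm(self, vm):
        # Space comes back in real time; the array sees the VM as objects
        # and knows exactly which blocks belonged to its VVols.
        self.vms.pop(vm, None)

array = VVolArray(capacity_gb=1000)
array.write("vm01", 200)
array.write("vm02", 300)
array.delete_vm("vm01")
print(array.free_gb())  # 700 -- vm01's 200 GB reclaimed automatically
```

Contrast this with VMFS, where the array only sees a pre-allocated LUN and has no idea which blocks inside it a deleted VM used until an unmap is run.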

7 – It’s available in all vSphere editions

VVols isn’t a vSphere feature that is licensed only in certain editions such as Enterprise Plus, it’s part of the vSphere core architecture and available in all vSphere editions. If you have vSphere 6.0 or higher you already have VVols capabilities and are ready to start using it. VVols is mostly under the covers in vSphere so it won’t be completely obvious that the capability is there. It is part of the same Storage Policy Based Management (SPBM) system in vSphere that VSAN uses and also presents itself as a new datastore type when you are configuring storage in vSphere.

6 – Let the array do the heavy lifting

Storage operations are very resource intensive and a heavy burden on the host. While servers are commonly being deployed as SANs these days, a server really wasn’t built specifically to handle storage I/O like a storage array is. VMware recognized this early on, which is why they created vSphere APIs for storage such as VAAI to offload resource-intensive storage operations from the host to the storage array. VVols takes this approach to the next level: it shifts the management of storage tasks to vSphere, and through policy-based automation the storage operations themselves are shifted to the storage array.

So things like thin provisioning and snapshots, which can be done on either the vSphere side or the storage array side with VMFS, are only done on the storage array side with VVols. How you do these things remains the same in vSphere, but when you take a snapshot in the vSphere client you are now taking an array-based snapshot. The VASA specification that defines VVols is basically just a framework that allows the array vendors to implement certain storage functionality; how they handle things within the array is up to each vendor. The storage array is a purpose-built I/O engine designed specifically for storage I/O and can do things faster and more efficiently, so why not let it do what it does best and take the burden off the host?

5 – Start using SPBM right now

VSAN users have been enjoying Storage Policy Based Management for quite a while now, and with VVols anyone with an external storage array can use it as well. While Storage Policies have been around even longer, first introduced in VASA 1.0, they were very basic and not all that useful. The introduction of VASA 2.0 in vSphere 6.0 was a big overhaul for SPBM and made it much richer and more powerful. The benefit of using SPBM is that it makes life easier for vSphere and storage admins by automating storage provisioning and mapping storage array capabilities, including features and hardware attributes, directly to individual VMs. SPBM ensures compliance with defined policies so you can create SLAs and align storage performance, capacity, availability and security features to meet application requirements.
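Conceptually, SPBM is a matching exercise: the capabilities a VASA provider advertises for its storage are compared against the policy attached to a VM. The sketch below is a made-up toy version of that idea (the container names and capability keys are hypothetical, not the real SPBM API), just to show the matching logic.

```python
# Hypothetical capabilities advertised for two Storage Containers.
containers = {
    "gold-container":   {"flash": True,  "replication": True},
    "bronze-container": {"flash": False, "replication": False},
}

def compliant_containers(policy, containers):
    """Return the containers whose advertised capabilities satisfy every
    requirement in the VM's storage policy."""
    return [name for name, caps in containers.items()
            if all(caps.get(key) == want for key, want in policy.items())]

# Policy for a tier-1 database VM: must land on flash with replication.
policy = {"flash": True, "replication": True}
print(compliant_containers(policy, containers))  # ['gold-container']
```

In real SPBM the same comparison also drives ongoing compliance checks, so a VM that drifts out of policy is flagged rather than silently left on the wrong storage.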

4 – Snapshots don’t suck anymore

vSphere native VM snapshots have always been useful, but they also have a dark side. One of the big disadvantages of vSphere snapshots is the commit process (deleting a snapshot), which can be very time and resource consuming, the more so the longer a snapshot is active. The reason for this is that when you create a vSphere snapshot, the base disk becomes read-only and any new writes are deflected to delta vmdk files that are created for each VM snapshot. The longer a snapshot is active and the more writes that are made, the larger the delta files grow, and if you take multiple snapshots you create more delta files. When you delete a snapshot all those delta files have to be merged back into the base disk, which can take a very long time and is resource intensive. As backup agents routinely take snapshots before backing up a VM, snapshots are a pretty common occurrence.

With VVols the whole VM snapshot process changes dramatically: a snapshot taken in vSphere is not performed by vSphere but instead created and managed on the storage array. The process is similar in that separate delta files are still created, but the files are array-based VVol snapshots, and more importantly, what happens while they are active is reversed. When a snapshot of a VM on VVol-based storage is initiated in vSphere, a delta VVol is created for each virtual disk that the VM has, but the original disk remains read-write and instead the delta VVols preserve any disk blocks that are changed while the snapshot is active. The big change occurs when we delete a snapshot: with VVols, because the original disk is read-write, we can simply discard the delta VVols and there is no data to commit back into the original disk. This can take milliseconds, compared to the minutes or hours needed to commit a snapshot on VMFS datastores.
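The contrast can be sketched as a toy model; this is a deliberate simplification with made-up function names, not real vSphere or array internals, but it shows why the delete cost differs so much.

```python
def vmfs_delete_snapshot(base, delta):
    """VMFS-style commit: the base disk was read-only while the snapshot
    existed, so every block written to the delta must be merged back now."""
    merged_blocks = 0
    for block, data in delta.items():
        base[block] = data
        merged_blocks += 1
    return merged_blocks  # work grows with every write made since the snapshot

def vvol_delete_snapshot(base, delta):
    """VVol-style delete: the base disk stayed read-write and the delta only
    preserved point-in-time blocks, so deleting is just discarding the delta."""
    delta.clear()
    return 0  # effectively no commit work, regardless of snapshot age

base = {i: "live" for i in range(8)}     # 8-block toy virtual disk
delta = {2: "changed", 5: "changed"}     # blocks touched while snapshot active

print(vmfs_delete_snapshot(dict(base), dict(delta)))  # 2 blocks to merge
print(vvol_delete_snapshot(dict(base), dict(delta)))  # 0 blocks to merge
```

Scale the delta up to days of backup-window writes and the VMFS merge becomes the hours-long commit described above, while the VVol delete stays constant-time.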

The advantages of VVol array-based snapshots are many: they run more efficiently on the storage array and you are not using up any host resources. In addition, you are not waiting hours for them to commit, your backups will complete quicker, and there is no chance of lost data from long snapshot chains being written back into the base disk.

3 – Easier for IT generalists

Because storage array management shifts to vSphere through SPBM, IT generalists don’t really have to be storage admins as well. Once the initial array setup is complete and the necessary VVols components are created, all the common provisioning tasks that you would normally do on the array to support vSphere are handled through automation. No more creating LUNs, expanding LUNs, configuring thin provisioning, taking array-based snapshots, and so on; it’s all done automatically. When you create a VM, storage is automatically provisioned on the storage array, with no more worrying about determining LUN sizes or increasing them when needed.

With VVols you are only managing VMs and storage in vSphere, not in two places. As a result, if you don’t have a dedicated storage admin to manage your storage array, an IT generalist or vSphere admin can do it pretty easily. Everything with SPBM is designed to be dynamic and autonomic, and it unifies and simplifies management in an efficient manner that reduces the overall burden of storage management in vSphere. Instead of the added complexity of vendor-specific storage management tools, with VVols management becomes more simplified through the native vSphere interfaces.

2 – It unifies the storage protocols

VMFS and NFS storage are implemented pretty differently in vSphere, and what VVols does is dictate a standardized storage architecture and framework across any storage protocol. There were several big differences between the file and block implementations in vSphere: with block you presented storage to ESXi hosts as LUNs, and with file as a mount point. You still do this to connect your array’s Protocol Endpoint to a host, but the standard storage container that is presented to a host to store VMs is just that, a Storage Container, not a LUN or mount point anymore.

When it came to file systems, with block an ESXi host would manage the file system (VMFS), and with file it was managed by the NAS array. With VVols there is no file system anymore; VVols are written natively to the array without a file system, regardless of whether it’s a file or block array. And the biggest change is that VMs are written as files (VVols) to block storage in the same manner that file storage has been doing all along. This creates a unified and standardized storage architecture for vSphere and eliminates the many differences that existed before between file and block. VVols creates one architecture to bring them all and in the darkness bind them (LOTR reference).

1 – The VM is now a unit of storage

This is what it’s all about: the LUN is dead, because it’s all about the VM, ’bout the VM, no LUN. With VVols the VM is now a first-class citizen and the storage array has VM-level visibility. Applications and VMs can be directly and more granularly aligned with storage resources. Where VMFS was very LUN-centric and static, creating silos within an array, aligning data services to LUNs and utilizing pre-allocated, over-provisioned resources, VVols is VM-centric and dynamic: nothing is pre-allocated, no silos are created and array data services are aligned at the more granular VM level. This new operational model transforms storage in vSphere to simplify management, deliver better SLAs and provide much improved efficiency. VVols enables application-specific requirements to drive provisioning decisions while leveraging the rich set of capabilities provided by storage arrays.

Sep 14 2016

A new vPlanet is born – your one source aggregator for vBlog content

Years ago I had set up a planet aggregator site for the top 25 vBlogs; it was a fairly basic plug-in and didn’t work all that well. I took it down months ago, mainly because it was the last site I had hosted on GoDaddy and I wanted to get off of them completely. This week I built a brand new planet aggregator site from scratch on a hosting server that I wasn’t using. The plug-in I’m using now is way better, and I bought some add-ons for it so I could customize and improve it even more.

The default view displays content from the Top 100 vBlogs; you can change this to the Top 50, Top 25 or Top 10 if you want to get more granular. I also have blogs sorted by categories so you can see posts from only certain categories like VDI or storage. Besides the Top 100 vBlogs I also included VMware corporate blogs like the vSphere, PowerCLI and Virtual Blocks blogs. The aggregator only displays links to the source posts; it does not contain any of the actual content. It is set up for long-term retention of blog posts, so posts will not roll off quickly; right now I have it set to capture 10 posts from each source going back up to 180 days.

So head on over there and check it out, and if you have any suggestions for improving it please let me know. I made a best guess at categorizing blogs, so if your blog isn’t in a certain category let me know and I’ll add it. You can also subscribe to it via RSS at this URL (Top 100 vBlogs only).

