Esiebert7625

Author's details

Name: Eric Siebert
Date registered: April 27, 2009

Latest posts

  1. An overview of current storage vendor support for VMware Virtual Volumes (VVols) — March 21, 2017
  2. The Top 100 VMware/virtualization people you MUST follow on Twitter – 2017 edition — March 7, 2017
  3. Top vBlog 2017 starting soon, make sure your site is included — March 6, 2017
  4. One more vendor now supports VVol replication — February 23, 2017
  5. The shrinking session length trend for sponsor sessions at VMUG UserCon events — February 22, 2017

Author's posts listings

Feb 06 2017

No VMUG For You!

I was catching up on some blog reading this week when a particular post about VMUG leaders caught my eye. Apparently a few VMUG leaders in the community got caught up in the VMUG policy that says if you work for a partner, No VMUG For You! (Seinfeld reference). One leader in particular went to work for Nutanix and was promptly shown the VMUG leader exit door.

Now I’ve long known that the VMUG leader policy heavily favors customers/users as VMUG leaders and discourages partners/vendors to avoid potential conflict of interest problems. I believe this policy stems from the fact that partners/vendors are the sponsors of VMUG events; money is paid to sponsor those events, and a VMUG leader could potentially play favorites or discriminate when choosing certain partners.

In this case I don’t necessarily see this as being purely Nutanix bias (VMware & Nutanix aren’t exactly best buds); it’s pretty much just the default VMUG leader policy. I was forced to step down as a Denver VMUG leader myself when I joined HP over 5 years ago, so I know exactly what it feels like. The very few exceptions to this policy that I have seen (well, one that I know of) tend to be based on your role at a partner/vendor. One of the current Denver VMUG leaders works for IBM, but he’s part of their consulting group, so he’s not directly involved in selling or marketing IBM products.

The same type of policy held true when I was a major contributor at TechTarget: join a partner/vendor and they no longer want you writing for them, again because of potential bias or conflict of interest. Do I agree with this policy? Hell no, I think it sucks. You would think the VMUG leadership team could trust us as professionals to do the right thing and remain neutral without showing bias or favoritism when it comes to sponsors. If you want to govern and monitor to ensure we do the right thing, that’s fine; or better yet, let our peers do it for you, as there are multiple leaders in every city.

Now I can understand that the VMUG leadership team could just be trying to cover their butts, but there are a lot of good people out there who work for partners/vendors that would (or did) make fantastic leaders, and to exclude these passionate people from that position just isn’t right. I sincerely hope that the VMUG leadership team will soften their stance on this and consider allowing more partners/vendors to be leaders. I would also like to see partners be part of the VMUG Advisory Board, or have a special Partner Advisory Board to provide an outside viewpoint on VMUG operations.

Well, one could hope that things will change, but they probably won’t; that’s how VTUGs are born. I wouldn’t mind being a VMUG leader again, and I am sure there are plenty of others out there who feel the same way.

Feb 06 2017

VMware releases new 6.5a versions of ESXi and vCenter Server

A few days ago (2/2/2017) VMware released version 6.5a of both ESXi and vCenter Server; this comes just a few months after the initial release of vSphere 6.5 on 11/15/16. Unless you’re a VMware NSX customer, though, there really isn’t much in this release for you. The release notes for both ESXi & vCenter Server show there is exactly one thing new in this release:

  • This release of ESXi 6.5.0a adds support for NSX 6.3.0.

There is also one fix in this ESXi release for a PSOD that can occur when using vMotion to move VMs to an ESXi 6.5 host from an ESXi 5.5 or 6.0 host. However, the conditions necessary for this to occur are unlikely to affect most customers, as the VM must be NUMA-aware and configured with more than 8 vCPUs.
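As a rough illustration of the trigger condition, here is a minimal Python sketch of how you might flag at-risk VMs before migrating. The inventory data here is made up for the example; in practice you would pull these attributes from your vSphere inventory rather than a hardcoded list:

```python
# Hypothetical VM inventory for illustration only; in a real environment
# this data would come from your vSphere inventory tooling.
vms = [
    {"name": "db01",  "numa_aware": True,  "vcpus": 16},
    {"name": "web01", "numa_aware": False, "vcpus": 12},
    {"name": "app01", "numa_aware": True,  "vcpus": 4},
]

def at_risk(vm):
    """A VM matches the PSOD condition only if it is NUMA-aware
    AND configured with more than 8 vCPUs."""
    return vm["numa_aware"] and vm["vcpus"] > 8

# Only VMs meeting BOTH conditions are flagged.
flagged = [vm["name"] for vm in vms if at_risk(vm)]
print(flagged)  # only db01 meets both conditions
```

Note that both conditions must hold at once, which is why most customers are unlikely to hit this.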

In addition there are 3 patches included in ESXi 6.5a and 1 patch included in vCenter Server 6.5a. So if you’re running NSX, or are really into patching, head on over to VMware’s site and download them.

Feb 06 2017

Optimize RPO & RTO While Enhancing DR Resilience with Vembu OffsiteDR Server

Moving backup data offsite is pretty much mandatory to protect against an event like a fire at your primary site that could take out both your production data and the backup data. In the old days most companies would send tape backups offsite in a rotation to provide that extra layer of protection. These days, with disk backup target solutions becoming very popular, moving data offsite can be done over the wire instead of driving it in a vehicle to another location. So having a backup application that can automatically replicate backup data to another offsite location is a requirement for a solid data protection solution.

Vembu’s OffsiteDR add-on product to Vembu VMBackup offers an additional layer of data protection by allowing users to replicate data offsite to another data center from their primary backup server. This gives users the capability to restore data directly from the OffsiteDR server, in a manner similar to restoring from a backup server, with minimal downtime. When doing this, users will have multiple restore options to recover their data, including:

  • Booting a live VM
  • Mounting the image file in disk management
  • Downloading the image in your native format such as VHD, VHDX, VMDK, VMDK-Flat or RAW
  • Live restore to your ESX(i) server

This provides users an alternate system from which to recover protected VMs and physical servers using the same procedures that administrators employ on a Vembu backup server. This is accomplished by using a Live Data Transfer method that can instantly transfer data from a Vembu backup server to another location hosting an OffsiteDR Server. Live Data Transfer enables seamless communication between the servers, so the OffsiteDR Server stays continually up to date.

Recently openBench Labs assessed the performance and functionality of the Vembu OffsiteDR Server to examine its ability to restore data in the event of a catastrophic failure in a vSphere environment. They used several real-world application workloads in their backup scenarios and demonstrated how you can achieve aggressive RPOs with minimal impact on your active workloads. You can read the full results of their testing on Vembu’s website.

Jan 27 2017

VVols by the numbers: an update on VVol adoption in the real world

VMware’s Virtual Volumes (VVols) storage architecture has been out for almost 2 years now, and customer adoption has been pretty light so far. There are a number of reasons for this, including the simple fact that many customers are slow to migrate to brand new technologies. At HPE, with 3PAR StoreServ arrays, we collect point-in-time metrics on VVols via our phone-home capabilities for customers that opt in to it (most do). As a result we can see who is using VVols, along with other info such as how many VMs they are creating on VVols and how many VVols those VMs have. One of our product managers collects and formats that information and presents it to us periodically, so I thought I would share some details from his latest report.

  • 1,887 arrays have the VASA Provider enabled in the array
  • 1,014 arrays reporting VVols data (requires 3.2.2 firmware)
  • 258 arrays with at least one VVol
  • Compared to 1 year ago 5.6x more arrays are running VVols
  • Jan 2016 metrics: 2 arrays with 100+ VVols, 3 with 51-100 VVols, 5 with 11-50 VVols and 36 with less than 10 VVols
  • Jan 2017 metrics: 43 arrays with 100+ VVols, 11 with 51-100 VVols, 51 with 11-50 VVols and 111 with less than 10 VVols
  • A large number of those VVol deployments have happened in the last 3 months
  • Largest customer has 1,457 VVols and 576 VMs, next 2 largest have 1,191 VVols/344 VMs and 1,453 VVols/304 VMs
  • #10 largest customer in VVol deployments has 477 VVols and 169 VMs
  • Average VVol to VM ratio around 4.5, highest customers at around 9 VVols per VM

As you can see we have some good-sized VVol deployments out there, and it’s encouraging to see more people giving VVols a try in their own environments. I expect we will see a good uptick this year as VVols 2.0, which brings maturity and replication to VVols, is out now as part of vSphere 6.5, and as more customers migrate to vSphere 6.x, which is required for VVol support. I have met with several large customers that are planning large-scale VVol deployments, so hopefully we’ll see these numbers continue to rise.
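As a quick sanity check on the per-customer figures listed above, the VVol-to-VM ratios can be computed directly (the numbers are taken verbatim from the report bullets; the customer labels are just placeholders):

```python
# Per-customer figures as listed in the report above: (VVols, VMs)
customers = {
    "largest": (1457, 576),
    "second":  (1191, 344),
    "third":   (1453, 304),
    "tenth":   (477, 169),
}

# Ratio of VVols per VM for each customer, rounded to one decimal place.
ratios = {name: round(vvols / vms, 1) for name, (vvols, vms) in customers.items()}
for name, ratio in ratios.items():
    print(f"{name}: {ratio} VVols per VM")
```

The per-customer ratios here range from roughly 2.5 to 4.8 VVols per VM, consistent with the ~4.5 average and ~9 maximum quoted above, since the average is computed across all reporting arrays rather than just the largest customers.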

Jan 26 2017

Top 12 posts from vSphere-land in 2016

2017 is upon us, and I thought that out of the 100+ posts I published in 2016 I would highlight some of my favorite ones. I know picking favorites among things that are near and dear to your heart is often challenging, but I managed to whittle the list down to 12. So without further ado, and in no particular order, here are my top 12 favorite posts from 2016:

Jan 24 2017

How to find out which storage vendors support VVol replication in vSphere 6.5

vSphere 6.5 introduced support for VVol replication, but on day 1 of the vSphere 6.5 GA there wasn’t a single storage vendor that supported it. VMware has a special Compatibility Guide category specifically for VVol support that shows which storage vendor arrays support VVols, with additional information on supported array models, firmware and protocols. One additional piece of information in those listings is a field labeled Feature. This field is used to indicate support for additional VVol special features; it is not intended to display array capabilities that are exposed to VVols.

Prior to vSphere 6.5 I had only seen 2 types of features displayed in some vendor listings: Multi-vCenter support and VASA Provider High Availability support. Multi-vCenter support simply means a storage array’s VASA Provider can support connections from more than one vCenter Server in your environment. VASA Provider High Availability support was mainly intended for external VASA Providers, to indicate they had some type of mechanism in place to protect the VP in case of a failure (i.e. the VM hosting it going down).

Now with vSphere 6.5 there is a new feature listing called VVols Storage Replication, which indicates that a storage array supports the new VVol replication capability in vSphere 6.5. Note that a storage array can be certified to support VVols in vSphere 6.5, but unless it has the VVols Storage Replication feature listed it does not support VVol replication.

As of today only 7 storage vendors are listed as supporting VVols in vSphere 6.5: Fujitsu, HPE, HDS, Huawei, IBM, NEC and Nimble (at the vSphere 6.5 launch there were only 4), and currently only one storage vendor, Nimble, supports VVol replication. For comparison, there are 17 storage vendors listed as supporting VVols in vSphere 6.0. So while support for VVol replication is now available, most storage vendors are not ready to support it yet. Having seen first-hand the amount of engineering effort it takes to support VVol replication, I can understand why I can count on my nose the number of vendors that support it.

Another potential speed bump for customers wanting to implement VVol replication is the very limited documentation and support for performing replication-related operations such as planned and unplanned failover, failback and failover testing. VMware introduced some new PowerCLI cmdlets for some of these operations, but customers have to write their own scripts to make them work, and it can get a bit complicated, especially when trying to recover from an unplanned failover. Currently there is no support for vRealize Orchestrator or vCenter Site Recovery Manager with VVol replication to help automate those operations. There is also currently no support for doing a test failover via PowerCLI.

I know VMware is working to get some sample scripts and documentation around this, as well as expand PowerCLI support and integrate it into SRM. The only documentation I have been able to find so far is this page from Nimble Storage, which has some good information about their implementation of VVol replication. So if you are anxious to start using VVol replication, check with your storage vendor to see where they are at with it. I suspect you will see support slowly trickle out among the storage vendors; I know of one storage vendor in particular that will be supporting it fairly soon.

It’s great to see VVols evolve to support replication, as it was a key missing feature that was holding some people back from using VVols. Now that VMware has delivered VVol replication support as part of VASA 3.0 in vSphere 6.5, the ball is in the storage vendors’ court to enable it within their arrays. As VVols continues to mature, I look forward to seeing more hands go up when I’m speaking to groups and asking who is using it, as there are a lot of great benefits to the VVols storage architecture.

Jan 23 2017

Coming soon: Top vBlog 2017 with a new scoring method

I’m back in the saddle after taking a few weeks off to recharge, and I am preparing to launch my annual Top vBlog voting. This year brings some big changes to how blogs are scored: instead of just relying on public voting, which can become more about popularity and less about blog content, I’m adding several other scoring factors into the mix. By doing this, scores will reflect the effort a blogger puts into their blog and not just how popular the blogger is. The total points that a blogger can receive through the entire process will be made up of the following factors:

  • 60% – public voting – general voting open to anyone; votes are tallied and weighted for points based on voting rankings, as done in past years
  • 20% – private judges’ scoring – chosen judges will grade a select group of blogs based on several factors; combined rankings will equal points
  • 10% – number of posts in a year – how much effort a blogger has put into writing posts over the course of a year, based on Andreas’ hard work adding this up each year (aggregators excluded)
  • 10% – Google PageSpeed score – how well a blogger has built and optimized their site as scored by Google’s PageSpeed tools; you can read more on this here, where I scored some of the top blogs.

All of the above methods will be scored and the total points added up to determine the top bloggers. The mix of public voting and private scoring is similar to how VMworld session submissions are scored. I’m still working on the point formula for the various scoring methods, but I believe this will be a very fair system to help recognize those bloggers who deserve it most. I will be looking for 8-10 private judges who will each be given a group of 20-30 blogs to score. Once again, a minimum of 10 blog posts in 2016 will be enforced to be eligible for the Top vBlog voting form (about 220 blogs make it this year).
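To make the weighting concrete, here is a minimal sketch of how such a composite score could be combined. The 60/20/10/10 split comes from the list above; the 0-100 component scale and the example numbers are my own assumptions for illustration, not the actual point formula, which is still being worked out:

```python
# Category weights from the post: 60/20/10/10.
WEIGHTS = {
    "public_voting": 0.60,
    "judges":        0.20,
    "post_count":    0.10,
    "pagespeed":     0.10,
}

def composite_score(components):
    """Weighted sum of per-category scores, each assumed normalized to 0-100."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# Hypothetical blogger: 80 in public voting, 70 from judges,
# 90 for post count and 60 on PageSpeed.
example = {"public_voting": 80, "judges": 70, "post_count": 90, "pagespeed": 60}
print(composite_score(example))  # 0.6*80 + 0.2*70 + 0.1*90 + 0.1*60 = 77.0
```

The effect of the weighting is that a strong showing in public voting still dominates, but a blogger who writes consistently and runs a fast site can close the gap on a more popular one.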

And thank you once again to Turbonomic for sponsoring Top vBlog 2017, stay tuned!

Dec 14 2016

Is it VSAN, vSAN, VVols, vVols or what?

If there is one thing VMware tends to be consistent at, it is changing the case of their product & feature acronyms. I’ve seen the acronyms for both Virtual SAN and Virtual Volumes done many different ways. One reason for that is VMware periodically changes the case of their acronyms: in the case of Virtual Volumes, in the early days VMware had it as VVOLs, then vVols, and now it’s VVols. With Virtual SAN I’ve seen both VSAN and vSAN, so let’s cover what is right and wrong right now according to VMware.

Virtual Volumes is referred to by its full name in most VMware documentation, but they do abbreviate it, and the official abbreviation is VVols. So the correct wording/case is VMware Virtual Volumes or VMware VVols.

Virtual SAN, on the other hand, is no longer being referred to as Virtual SAN; VMware officially now calls it VMware vSAN, and the longer name “Virtual SAN” has been officially end-of-lifed.

Vendors have the fun job of updating all of their product documentation and collateral that references the old names and cases. So now you know the correct way to spell and use those product names, at least until the VMware marketing machine decides to change them again.
