VMware finally announces SRM support for VVols!

VMware’s new storage architecture, Virtual Volumes (VVols), has been part of vSphere for over three years, starting with the vSphere 6.0 release. However, the initial release did not support array-based replication, which VMware eventually added in vSphere 6.5. That support came with a caveat, though: while replication was supported via Storage Policy Based Management (SPBM), orchestration of replication operations was a manual and painful process and was not supported with SRM. To do a test failover, failover or failback you had to use PowerCLI and write your own scripts to orchestrate those operations, not exactly something you want to be dealing with in a time of crisis.

VMware SRM was designed to make BC/DR a simple and easy process and to allow automated orchestration of replication at the push of a button. SRM essentially takes over ownership and control of array-based replication using a Storage Replication Adapter (SRA) provided by each vendor specific to their storage arrays. Therefore, when you click a button inside of vSphere, SRM has full control of array replication: it can bring up VMs at the recovery site and eventually fail back to the primary site when needed by reversing replication. Having no SRM support for VVols is a showstopper for most customers that don’t want to deal with complex and manual scripts to perform BC/DR operations. Note: you may have heard that SRM does support VVols today, but that is ONLY if you are using vSphere Replication.

Let’s first examine how we got here and why it’s taken so long for SRM to support VVols. Most of VMware’s products are in silos, meaning they are run by different product teams; as a result, product roadmaps and interoperability are not always in sync across products. Every product team has its own priorities, which may or may not include immediate support for other VMware products. VVols has its own product team that is focused solely on developing VVols. Support for VVols in SRM is completely outside of the VVols engineering team and sits solely within the SRM engineering team. It’s not up to the VVols product manager to decide when SRM will support VVols; it’s entirely up to the SRM product manager.

For the last year or so we’ve pleaded with the VVols team for SRM support and were essentially told: sorry, bring us customers that want it and we’ll try to push the SRM team to prioritize it. This of course frustrated everyone, as support for replication without support for SRM wasn’t a complete solution. VMware’s dead silence on the issue also didn’t sit well with customers and left many not wanting to use VVols. Several months ago we had a face-to-face meeting at VMware with Lee Caswell and the product managers for VVols & SRM and again pleaded for VMware to take action. At that time they did finally commit to supporting VVols on the SRM roadmap, but we also requested that they communicate that to customers sooner rather than later.

Well, VMware finally broke their silence and announced SRM for VVols right before VMworld in a blog post. I’m betting the timing on this was deliberate, as last year VMware got really beat up over this at VMworld and probably didn’t want a repeat of that. Note the blog post is vague on purpose; this is just an announcement, so don’t expect support for it right away. The SRM team is working on something else big right now, and if you were at VMworld you could have seen a tech preview of it. The announcement basically acknowledges that VMware finally sees SRM support for VVols as a key priority, so customers and partners can put their pitchforks away.

Note that with this announcement VMware is not just telling customers that support is coming so they can plan for it and start migrating to VVols, it’s also putting partners on notice to inspire them to prioritize finishing their support for VVols replication. To this day, almost 2 years after VVols replication was supported in vSphere 6.5, there are still only 2 partners (HPE/Pure) that support it. I think some partners may not have prioritized VVols replication support because they saw there was no SRM support for it and figured why bother.

So how will this marriage of VVols & SRM look? For one thing, there will be no more SRAs to install and maintain in SRM. Control of replication will be handled natively through the VASA Provider without any external components needed. This will be a welcome change and greatly simplify using SRM with external storage arrays. How this will impact certification (HCL) is TBD; presumably it will be handled through the VASA certification process. Beyond that we’ll have to wait and see. I know this is nowhere near a minor change and represents a big engineering effort for the SRM team. I’m fortunate to be working closely with the SRM product manager and engineering team on this and am excited to see one of the final barriers for customers wanting to use VVols disappear.

Me at VMworld

I’m looking forward to VMworld again this year, and most importantly to seeing old and new friends again. This will be #11 for me and honestly it never gets old. Every single one goes by so fast and I always regret what I didn’t have time to do there. I’ll be getting in early Sunday and leaving Thursday morning. Ping me on Twitter at @ericsiebert if you want to meet up; below is a summary of what I will be doing at VMworld so far. This year I do have more magnetic buttons to hand out: I have a limited supply of vExpert 2018 and also vExpert 10-star buttons for those OG vExperts. I also have some special Viva Las VVols! buttons to give away to help promote VVols.

Sunday

  • Welcome Reception from 5:00-7:30, will be all over, at the HPE booth now and then
  • VMUG member party for a bit at House of Blues starting at 7:30
  • VMunderground at Beerhaus at The Park from 8:00 or so on

Monday

  • HCI2810NU – VVols Deep Dive session – always a must see Patrick & Pete technical session on VVols at 2:00 in Oceanside B Level 2
  • HCI2550PU – Leveraging VVols to Simplify Storage Management – a customer panel hosted by Bryan Young on VVols at 3:30 in Mandalay Bay Level 2, come out and hear from the people that love VVols
  • Zerto party at House of Blues at 7:00, featuring a Pearl Jam cover band; last year they had a Journey cover band which was incredible, and while I’m not a huge Pearl Jam fan this should still be good
  • AWS & Rubrik Party at 9:00 at Hakkasan (MGM), they have RUN-DMC playing; I do like Rev Run (Joseph Simmons) so I might check this out

Tuesday

  • VIN3684BUS – Doing an HPE storage session that I got thrown into at 11:00, I tried to cut out a lot of the marketing content so swing by and I’ll do my best to talk about VVols and other VMware technical integrations. At Islander I, Lower Level
  • HCI1270BU – Power of Storage Policy Based Management at 12:30 in Mandalay Bay L, Level 2, come hear Cormac & Duncan talk about SPBM which is what vSAN & VVols is all about.
  • Pete Flecha @ HPE Booth preaching the VVols gospel at 1:15, it’s Pete & VVols what more do you want
  • HPE Blogger briefing @ 3:30, also one on Monday with Calvin leading it but I can’t make that one, I’ll be at this one though.
  • Parties: HyTrust at 6:00 at Fleur, Veeam at Omnia (Caesar’s) at 7:00, vExpert BBQ at Pinball Museum at 7:00; I’ll try to visit them all.

Wednesday

  • Pretty open, I tried not to commit to too much this day. I do have a meeting with Lee Caswell, Bryan Young and Velina (SRM product manager) at noon, and also a repeat of the Pete Flecha VVols show in the HPE booth at 2:45. For the first year in a while I’m actually leaving Thursday instead of Wednesday night (blame it on the crappy bands), but since the party is kind of far out and really has no big-name entertainment I may skip it and try to get into trouble elsewhere.

Hope to see you there!

Back in the saddle again

You may have noticed I’ve been fairly quiet the last few months, both here and on social media. The reason is I have been relocating from Colorado to Texas (Houston) and it has been quite a lot of work moving and settling in. I had to sell my house in Colorado, which took a ton of work getting the house ready and then sold (thanks Crystal Lowe). I did the move on my own using PODS, which worked out pretty well but was a ton of work packing/unpacking. I also spent a few weeks in hotels as we transitioned from CO to TX. The house took a lot of work to get settled into; it had flooded during Harvey and there were a lot of little things that needed fixing. My main desktop PC also didn’t survive the move so I had to start fresh and try to recover what I could. All in all it’s been a crazy busy last few months, but I’m now mostly settled in.

I’ll be kicking off Top vBlog 2018 very soon so stay tuned here for more details. This year we have a special new vTrivia quiz as part of the voting process where you can win Amazon gift cards for the best scores. I’ll be at VMworld this year again (#11 for me) and looking forward to seeing all my old virtual friends. I also have some limited edition vExpert magnetic buttons to hand out if you see me. More to come very soon!

Top vBlog 2018 starting soon, make sure your site is included

I’ll be kicking off Top vBlog 2018 very soon, and my vLaunchPad website is the source for the blogs included in the Top vBlog voting each year, so please take a moment and make sure your blog is listed. Every year I get emails from bloggers after the voting starts wanting to be added, but once it starts it’s too late as it messes up the ballot. I’ve also archived a bunch of blogs that have not published in over a year in a special section; those archived blogs still have good content so I haven’t removed them, but since they are not active they will not be on the Top vBlog ballot.

So if you’re not listed on the vLaunchpad, here’s your last chance to get listed. Please use this form and give me your name, blog name, blog URL, Twitter handle & RSS URL. I do have a number of listings from people that already filled out the form that I need to get added; the site should be updated in the next 2 weeks to reflect any additions or changes. I’ll post again once that is complete so you can verify that your site is listed. So hurry on up so the voting can begin; the nominations for voting categories will be opening up very soon.

Configuration maximum changes in vSphere 6.7

A comparison using the Configuration Maximum tool for vSphere shows the following changes between vSphere 6.5 & 6.7.

Virtual Machine Maximums (6.5 → 6.7)

  • Persistent Memory - NVDIMM controllers per VM: N/A → 1
  • Persistent Memory - Non-volatile memory per virtual machine: N/A → 1024GB
  • Storage Virtual Adapters and Devices - Virtual SCSI targets per virtual SCSI adapter: 15 → 64
  • Storage Virtual Adapters and Devices - Virtual SCSI targets per virtual machine: 60 → 256
  • Networking Virtual Devices - Virtual RDMA Adapters per Virtual Machine: N/A → 1

ESXi Host Maximums (6.5 → 6.7)

  • Fault Tolerance maximums - Virtual CPUs per virtual machine: 4 → 8
  • Fault Tolerance maximums - RAM per FT VM: 64GB → 128GB
  • Host CPU maximums - Logical CPUs per host: 576 → 768
  • ESXi Host Persistent Memory Maximums - Maximum non-volatile memory per host: N/A → 1TB
  • ESXi Host Memory Maximums - Maximum RAM per host: 12TB → 16TB
  • Fibre Channel - Number of total paths on a server: 2048 → 4096
  • Common VMFS - Volumes per host: 512 → 1024
  • iSCSI Physical - LUNs per server: 512 → 1024
  • iSCSI Physical - Number of total paths on a server: 2048 → 4096
  • Fibre Channel - LUNs per host: 512 → 1024
  • Virtual Volumes - Number of PEs per host: 256 → 512

Important information to know before upgrading to vSphere 6.7

vSphere 6.7 is here, and with support for vSphere 5.5 ending soon (Sept.) many people will be considering upgrading to it. Before you rush in, though, there is some important information about this release that you should be aware of. First let’s talk upgrade paths: you can’t just upgrade from any prior vSphere version to this release; only direct upgrades from certain versions are supported, see the below migration chart.

So duly note that upgrades to vSphere 6.7 are only possible from vSphere 6.0 or vSphere 6.5. If you are currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.
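To make the supported paths concrete, here is a minimal Python sketch (purely illustrative, not a VMware tool) that returns the upgrade hops needed to reach 6.7 from a given version:

```python
# Illustrative sketch: supported direct upgrades to vSphere 6.7,
# per the migration rules described above.
SUPPORTED_DIRECT_UPGRADES = {"6.0", "6.5"}

def upgrade_path_to_67(current_version):
    """Return the sequence of versions to pass through to reach 6.7."""
    if current_version in SUPPORTED_DIRECT_UPGRADES:
        return [current_version, "6.7"]
    if current_version == "5.5":
        # 5.5 has no direct path; hop through 6.5 first (6.0 would also work)
        return ["5.5", "6.5", "6.7"]
    raise ValueError("No supported upgrade path from %s to 6.7" % current_version)
```

So a 6.0 or 6.5 environment is a single hop, while 5.5 requires an intermediate upgrade first.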

Next, know that vSphere 6.7 is the absolute final release for the Windows version of vCenter Server and the Flex vSphere Web Client. VMware claims that the new HTML5 web client is much better in this release, but it is not yet fully functional; VMware says it is about 95% there, so there may be things you still can’t do in it yet. The 6.7 release notes state this:

[important]In vSphere 6.7, the vSphere Client (HTML5) has many new features and is close to being a fully functional client with all the capabilities of vSphere Web Client (Flex). The majority of the features required to manage vCenter Server operations are available in this version, including vSAN and initial support for vSphere Update Manager (VUM). For an up-to-date list of unsupported functionality, see Functionality Updates for the vSphere Client Guide. vSphere 6.7 continues to offer the vSphere Web Client, which you can use for all advanced vCenter Server operations missing in the vSphere Client. However, VMware plans to deprecate the vSphere Web Client in future releases. For more information, see Goodbye, vSphere Web Client.[/important]

If you leverage vSphere APIs and use plug-ins also know this:

[important]The vSphere 6.7 release is the final release for two sets of vSphere client APIs: the vSphere Web Client APIs (Flex) and the current set of vSphere Client APIs (HTML5), also known as the Bridge APIs. A new set of vSphere Client APIs are included as part of the vSphere 6.7 release. These new APIs are designed to scale and support the use cases and improved security, design, and extensibility of the vSphere Client. VMware is deprecating webplatform.js, which will be replaced with an improved way to push updates into partner plugin solutions without any lifecycle dependencies on vSphere Client SDK updates. Note: If you have an existing plugin solution to the vSphere Client, you must upgrade the Virgo server. Existing vSphere Client plugins will not be compatible with the vSphere 6.7 release unless you make this upgrade. See Upgrading Your Plug-in To Maintain Compatibility with vSphere Client SDK 6.7 for information on upgrading the Virgo server.[/important]

TLS is a transport protocol that allows components to communicate securely with each other. In vSphere 6.7 VMware made a move to force better security, so TLS 1.2 is now the default. Prior to 6.7, TLS 1.0 was the default for many VMware products; with TLS 1.2 now the default across the board, this could potentially break integration with 3rd-party tools unless the vendor supports TLS 1.2, as TLS 1.0/1.1 are now disabled. This KB article has a good VMware product matrix with default TLS support options prior to 6.7. This doesn’t mean support for TLS 1.0/1.1 is gone, it’s just not enabled by default; it can be re-enabled if needed on a product-by-product basis (not recommended though). Here’s what the 6.7 release notes say about this:

[important]In vSphere 6.7, only TLS 1.2 is enabled by default. TLS 1.0 and TLS 1.1 are disabled by default. If you upgrade vCenter Server or Platform Services Controller to vSphere 6.7, and that vCenter Server instance or Platform Services Controller instance connects to ESXi hosts, other vCenter Server instances, or other services, you might encounter communication problems. To resolve this issue, you can use the TLS Configurator utility to enable older versions of the protocol temporarily on vSphere 6.7 systems. You can then disable the older, less secure versions after all connections use TLS 1.2. For information, see Managing TLS Protocol Configuration with the TLS Configurator Utility in the vSphere 6.5 Documentation Set. In the vSphere 6.7 release, vCenter Server does not support the TLS 1.2 connection for Oracle databases.[/important]
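For third-party tool vendors, the practical takeaway is that clients must be able to negotiate TLS 1.2. As an illustrative sketch using only Python’s standard ssl module, a client can pin its minimum protocol version so it fails fast against endpoints that only offer the older, disabled versions:

```python
import ssl

# Build a client-side context that refuses TLS 1.0/1.1, matching the
# vSphere 6.7 default of TLS 1.2 only (illustrative stdlib sketch).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any connection wrapped with this context will only negotiate TLS 1.2+.
```

The same idea applies in any language: explicitly set the minimum TLS version rather than relying on library defaults.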

vSphere 6.7 introduces virtual hardware version 14 (HW compatibility level), which is necessary to take advantage of some of the new features in vSphere 6.7 like VBS, vTPM, vIOMMU, vPMEM and per-VM EVC. To use these features when upgrading from a previous version, you must upgrade the hardware compatibility level to 14. However, this could potentially cause disruption to the VM OS, as upgrading is equivalent to replacing the motherboard of a computer. So it is recommended that you only upgrade to 14 if you really need to.

Some additional FYIs:

  • The vSphere 6.7 release is the final release that supports replacing solution user certificates through the UI. Renewing these certificates with VMCA certificates through the UI will be supported in future releases.
  • The vSphere 6.7 release is the final release that requires customers to specify SSO sites. This descriptor is not required for any vSphere functionality and will be removed. Future vSphere releases will not include SSO Sites as a customer configurable item.

On the good side, upgrading a host now only requires a single reboot, so the process is less disruptive. See below for more detail on this:

If you plan on upgrading to vSphere 6.7, note this important upgrade order for VMware products and components, which must be followed to avoid issues; the hosts and VMs are at the very end of this order. The order is essentially this:

  • vRA->vRO->vRB->vROPS->vRLI->VADP backup solutions->NSX->External PSC->vCenter Server->VUM->VR->SRM->UMDS->ESXi->vSAN->Virtual Hardware->VMware Tools
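The documented ordering can be encoded as a simple list so a planned rollout can be sanity-checked in a script. A hypothetical helper (component names as listed above):

```python
# The documented vSphere 6.7 upgrade order, earliest first (from the list above).
UPGRADE_ORDER = [
    "vRA", "vRO", "vRB", "vROPS", "vRLI", "VADP backup solutions", "NSX",
    "External PSC", "vCenter Server", "VUM", "VR", "SRM", "UMDS",
    "ESXi", "vSAN", "Virtual Hardware", "VMware Tools",
]

def comes_before(a, b):
    """True if component a must be upgraded before component b."""
    return UPGRADE_ORDER.index(a) < UPGRADE_ORDER.index(b)
```

For example, this confirms that the external PSC and vCenter Server must both be upgraded before any ESXi host is touched.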

So make sure you do your homework before you upgrade to vSphere 6.7, read through the documentation, make sure all your 3rd party tools support it, check the VMware Hardware Compatibility Guide and be prepared. There are a lot of good things in this release so make sure you are ready before you dive into it.

vSphere 6.7 Link-O-Rama

Your complete guide to all the essential vSphere 6.7 links from all over the VMware universe. Bookmark this page and keep checking back, as it will continue to grow as new links are added every day. Also be sure and check out the Planet vSphere-land feed for all the latest blog posts from the Top 100 vBloggers.

Summary of What’s New in vSphere 6.7 (vSphere-land)
Important information to know before upgrading to vSphere 6.7 (vSphere-land)
Configuration maximum changes in vSphere 6.7 (vSphere-land)

VMware What’s New Links

Introducing VMware vSphere 6.7! (VMware vSphere Blog)
Introducing vCenter Server 6.7 (VMware vSphere Blog)
Introducing vSphere 6.7 Security (VMware vSphere Blog)
Introducing Faster Lifecycle Management Operations in VMware vSphere 6.7 (VMware vSphere Blog)
Introducing Developer and Automation Interfaces for vSphere 6.7 (VMware vSphere Blog)
Introducing vSphere with Operations Management 6.7 (VMware vSphere Blog)
Introducing vSphere 6.7 for Enterprise Applications (VMware vSphere Blog)

VMware Video Links

VMware vSphere 6.7 Quick Boot (VMware vSphere YouTube)
Faster Host Upgrades to vSphere 6.7 (VMware vSphere YouTube)
VMware vSphere 6.7 VM Encryption (VMware vSphere YouTube)
VMware vSphere 6.7 Optimizing GPU Usage (VMware vSphere YouTube)
VMware vSphere 6.7 TPM 2 0 (VMware vSphere YouTube)

Availability (HA/DRS/FT) Links

Documentation Links

vSphere 6.7 GA Release Notes (VMware)
vSphere 6.7 Datasheet (VMware)

Download Links

ESXi Download (VMware)
vCenter Server Download (VMware)

ESXi Links

General Links

VMware vSphere 6.7 – vSphere Update Manager (VUM) HTML5 and Quick Boot (ESX Virtualization)
VMware vSphere 6.7 and Enterprise Apps (ESX Virtualization)
VMware vSphere 6.7 what’s new (NoLabNoParty)
vSphere 6.7 – WOW – is now GA (Notes from MWhite)
New in Software Defined Compute in vSphere 6.7 (Plain Virtualization)
VMware vSphere 6.7 featuring vSAN 6.7 released! (TinkerTry)
VMware vSphere 6.7 announced today, here’s how to download it right away (TinkerTry)
Status of VMware vSphere 6.7 support by VM backup companies (TinkerTry)
VMware vSphere 6.7 Released (VCDX56)
VMware vSphere 6.7 is now GA  (VCDX133)
VMware vSphere 6.7 is GA (vInfrastructure Blog)
VMware vSphere 6.7 introduces Skylake EVC Mode (Virten.net)

Installing & Upgrading Links

Knowledgebase Articles Links

Licensing Links

Networking Links

News/Analyst Links

VMware Releases vSphere 6.7, Delivering Advances In Hybrid Cloud And Multi-Application Support (CRN)
VMware Updates vSphere & vSAN To 6.7 (Storage Review)
VMware pulls vSAN 6.7 deeper into vSphere (TechTarget)
Forking hell! VMware now has TWO current versions of vSphere (The Register)
Here’s What’s New in vSphere 6.7 (Virtualization Review)
VMware Elevates the Hybrid Cloud Experience with New Releases of vSphere and vSAN (VMware PR)
VMware updates vSphere, vSAN with more hybrid management features (ZDNet)

Performance Links

Scripting/CLI/API Links

Security Links

VMware vSphere 6.7 Security Features (ESX Virtualization)

Storage Links

What’s New in Core Storage in vSphere 6.7 Part I: In-Guest UNMAP and Snapshots (Cody Hosterman)
What’s New in Core Storage in vSphere 6.7 Part II: Sector Size and VMFS-6 (Cody Hosterman)
What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits (Cody Hosterman)
What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support (Cody Hosterman)
What’s in the vSphere and vSAN 6.7 release? (Cormac Hogan)

vCenter Server Links

VMware vSphere 6.7 Announced – vCSA 6.7 (ESX Virtualization)

Virtual Volumes (VVols) Links

vROPS Links

Operationalize Your World with vRealize Operations 6.7 (Virtual Red Dot)

VSAN Links

VMware vSAN 6.7 Announced (ESX Virtualization)
What’s New with VMware vSAN 6.7 (Virtual Blocks)
Extending Hybrid Cloud Leadership with vSAN 6.7 (Virtual Blocks)
Released: vSAN 6.7 – HTML5 Goodness, Enhanced Health Checks and More! (Virtualization Is Life!)
vSpeaking Podcast Episode 75: What’s New in vSAN 6.7 (Virtually Speaking Podcast)
What’s new vSAN 6.7 (Yellow Bricks)

vSphere Web Client Links

Update Manager Is Now in the New and Improved HTML5 vSphere 6.7 Client (vMiss)

Summary of What’s New in vSphere 6.7

Today VMware announced vSphere 6.7, coming almost a year and a half after the release of vSphere 6.5. It doesn’t look like the download is quite available yet, but it should be shortly. Below is the What’s New document from the Release Candidate that summarizes most of the big things new in this release. I’ll be doing an in-depth series following this post with my own take on what’s new with vSphere 6.7. Also be sure and check out my huge vSphere 6.7 Link-O-Rama collection.

Compute

  • Persistent Memory (PMem): In this RC, ESXi introduces support for Persistent Memory to take advantage of ultra-fast storage closer to CPU. PMem is a new paradigm in computing which fills the important gap between ultra-fast volatile memory and slower storage connected over PCIe. This RC includes support for PMem at the guest level to consume as a ‘virtualized Non-volatile dual in-line memory module (NVDIMM)’ as well as a ‘virtualized fast block storage device’ powered by PMem. It is also important to note that customers using ESXi with PMem can manage PMem resources at the cluster level and can also perform live migration of workloads using PMem.

Security and Compliance

  • Transport Layer Security protocol (TLS) 1.2: This vSphere RC has been updated (in accordance with the latest security requirements) to adopt the latest version of TLS protocol. This release includes support for TLS 1.2 out of the box. TLS 1.0 and TLS 1.1 will be disabled by default with the option to manually enable them on both ESXi hosts and vCenter servers.
  • FIPS 140-2: This vSphere RC includes FIPS 140-2 capabilities turned on by default! The UI management interfaces now use FIPS 140-2 capable cryptography libraries by default, and the VMware Certificate Authority will use FIPS 140-2 capable libraries for key generation by default. The kernel cryptography is under evaluation to be FIPS 140-2 validated and currently uses this cryptography under evaluation. Note the VAMI UI is not FIPS capable in this release. Note: To clarify, FIPS features are “turned on” in this release. FIPS certification of vSphere is a process that VMware is exploring for a later date.
  • TPM 2.0 Support and Host Attestation: This vSphere RC release introduces support for TPM 2.0 and a new capability called “Host Attestation”. A TPM, or Trusted Platform Module, is a hardware device designed to securely store information such as credentials or measurements. In the case of ESXi, a number of measurements are taken of a known good configuration. At boot time measurements are taken and compared with known good values securely stored in the TPM 2.0 chip. Using this comparison, vSphere can ensure that features such as Secure Boot have not been turned off. In 6.5, Secure Boot for ESXi was introduced and ensures that ESXi boots using only digitally signed code. Now with Host Attestation, a validation of that boot process can be reported up to a dashboard within vCenter.
  • Virtual TPM (vTPM): The vTPM capability in this release lets you add a virtualized TPM 2.0 compatible “chip” to a virtual machine running on vSphere. The guest OS can use the virtual TPM to store sensitive information, perform cryptographic operations, or attest the integrity of the guest platform. In this release, vSphere makes adding a virtual TPM to a VM as easy as adding a virtual device to the VM. In a hardware TPM, the storage of credentials is ensured by having an encrypted space on the TPM. For a virtual TPM, this storage space is encrypted using VM Encryption. VM Encryption was introduced in vSphere 6.5 and requires the use of an external Key Management System. See the documentation for more information on these requirements. As a point of clarification, the virtual TPM does not extend to the host’s hardware TPM. To support operations such as vMotion or vSphere High Availability, the host has a root of trust to the hardware and, based on that, presents trusted virtual hardware to virtual machines.
  • Support for Virtualization Based Security (VBS): This vSphere RC provides a seamless way to prepare Windows VMs for Virtualization Based Security (VBS). This is as easy as clicking a single checkbox in VM settings! vSphere will enable admins to enable and disable VBS features for Windows VMs and verify that Windows VBS features (Credential Guard and Device Guard) are functional inside the guest OS. This feature requires ESXi host to be running with Haswell CPU (or greater) and the guest OS to be Windows Server 2016 or Windows 10 (64 bit). Note: Additional configuration within Windows is necessary to enable these features. See Microsoft documentation for more information.
  • Encrypted vMotion: Encrypted vMotion was introduced in vSphere 6.5. With this release, this feature will also be supported across different vCenter instances and versions.
  • VM Encryption: VM Encryption provides VM level data-at-rest encryption solution and was introduced in vSphere 6.5 to protect the VM from both external and internal threat vectors as well as to meet the compliance objectives of an organization. In this release, VM Encryption UI in the HTML5 client gets a facelift which makes enabling encryption on virtual machines seamless using the new HTML5 based vSphere client.

Management

  • vSphere Client (HTML5): Try out the most recent release of the vSphere Client, with additional support for existing functionality, further improved performance and usability, and support for new features in this release. Specific highlights include support for basic Licensing functionality, create/edit VM storage policies, and the new vSphere HTML Client SDK, amongst many others. You can also try the version available on Fling site to experience the new features faster, available at – https://labs.vmware.com/flings/vsphere-html5-web-client
  • vCenter Server Appliance monitoring and management enhancements: vCenter Server Appliance (vCSA) management interface (VAMI UI) includes a lot of new capabilities like scheduling backup, disk monitoring, patching UI, syslog configuration and also the new Clarity themed UI. Also included are new vSphere Alarms for resource exhaustion and service failures. All these new capabilities further simplify vCenter Server management.
  • New vSphere client APIs for the vSphere HTML5 client: vSphere HTML5 client APIs introduced are extensible and scalable for HTML5 client use cases, optimized for Clarity design guidelines, have no Flex dependencies, and have security improvements. Plugins written with these new APIs will reduce developer effort on testing with the vSphere web client (Flex). Moreover, usage of these APIs is a prerequisite for plugins to be compatible with VMware Cloud on AWS.
  • Instant Clone: Instant Clone enables a user to create powered-on VMs from the running state of another powered-on VM without losing its state. This will enable the user to move into a new paradigm of just-in-time (JIT) provisioning given the speed and state-persisting nature of this operation.
  • Per-VM EVC: Per-VM EVC enables the EVC mode to become an attribute of the VM rather than the specific processor generation it happens to be booted on in the cluster. This allows for seamless migration between two datacenters that sport different processors. Further, the feature is persisted per VM and does not lose the EVC mode during migrations across clusters nor during power cycles.
  • What’s new with Nvidia GRID™ vGPU: For this release, VMware and Nvidia collaborated to significantly enhance the operational flexibility and utilization of virtual infrastructure accelerated with Nvidia GRID™ vGPU technology. Most prominent is the new ability to suspend any VM using a compatible Nvidia GRID vGPU profile and resume it on either the same or another vSphere host with compatible and available compute and vGPU resources. This reduces the dependence of VI Admins on end-users’ awareness of maintenance windows, and significantly lowers end-user disruption by removing the need for them to log off and shut down their desktops before maintenance windows. In addition, by using Horizon View’s Power Policy option, idle desktops consuming vGPU could be suspended, either freeing up resources for other clients, or reduce OpEx by lowering power usage.
  • APIs for SPBM Using vAPI: The Storage Policy Based Management APIs manage the storage policy association for a virtual machine and its associate virtual disks. These include retrieving information on the compliance status of a virtual machine and its associated entities.

Storage

  • 4K Native Hard Disk Drive support with ESXi Hosts: With this release, customers can now deploy ESXi on servers with 4K Native HDDs used for local storage (4K Native NVMe/SSD drives are not supported at this time). We are providing a software read-modify-write layer within the ESXi PSA stack which allows it to emulate these drives as 512e drives for all layers above the PSA stack. ESXi continues to expose 512-sector VMDKs to the guest OS. Servers having UEFI BIOS support can boot from 4K Native drives. This release does not support creating RDMs on these local drives.
  • Improving backup performance with network-block device modes (NBD and NBDSSL): This vSphere RC includes performance improvements to the network-block device transport modes between the third-party backup proxy and the virtual disk over a LAN. VMware VADP/VDDK code now backs up only allocated sectors and also leverages asynchronous NFC calls to improve NBD and NBDSSL performance.
  • Supporting snapshots and backups for detached first-class disks (FCD): This vSphere RC includes support for snapshots and VADP/VDDK based backups for detached first-class disks. For example, a key use-case where detached first-class disks are useful is for Horizon Appstacks and writeable volumes. With vSphere 6.5, we introduced support for snapshots and backups for attached first-class disks. This new release supports the capability for when first-class disks are detached from any VM.
  • iSCSI Extension for RDMA (iSER): Customers can now deploy ESXi with external storage systems that support iSER targets. iSER takes advantage of faster interconnects and CPU offload using Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE). We are providing the iSER initiator function, which allows the ESXi storage stack to connect with iSER-capable target storage systems.
  • Extending support for number of disks per VM: Customers can now deploy virtual machines with up to 256 disks using PVSCSI adapters. Each PVSCSI adapter supports up to 64 devices, and a VM can have up to four PVSCSI adapters (4 × 64 = 256). Devices can be virtual disks or RDMs.
  • Software Fibre Channel over Ethernet (FCoE) Initiator: In this release, ESXi introduces a software-based FCoE (SW-FCoE) initiator that can create FCoE connections over Ethernet controllers. The VMware FCoE initiator works on a lossless Ethernet fabric using Priority-based Flow Control (PFC), and can operate in Fabric and VN2VN modes. Please check the VMware Compatibility Guide (VCG) for supported NICs.
  • Support for Intel’s Volume Management Device (VMD): ESXi introduces inbox driver support for Intel’s VMD technology, introduced with Intel’s Skylake platform. VMD helps manage NVMe drives, providing hot-swap capability and reliable LED management.
  • Configurable Automatic UNMAP Rate: In this release, we have made the UNMAP rate a configurable parameter at the datastore level. With this enhancement, customers can tune the UNMAP rate to match the storage array’s capability and thereby fine-tune space reclamation. In this release the UNMAP rate can be configured using the ESXi CLI only, in increments of 100 MBps, from a minimum of 100 MBps to a maximum of 2000 MBps. To check the current configuration: esxcli storage vmfs reclaim config get --volume-label <volume-name>. To set the UNMAP rate to 100 MBps: esxcli storage vmfs reclaim config set --volume-label <volume-name> --reclaim-method fixed -b 100
  • Automatic UNMAP Support for SE Sparse: SE Sparse is a sparse virtual disk format widely used for snapshots in vSphere, and it is the default snapshot format on VMFS-6 datastores. In this release, we are providing automatic space reclamation for VMs with SE Sparse snapshots on VMFS-6 datastores. This works only while the VM is powered on, and applies to the top-most snapshot only.
  • No Support for VMFS-3 Datastores: This release no longer supports VMFS-3 datastores. VMFS-3 volumes are upgraded automatically to VMFS-5 when mounted; customers can perform this upgrade in place and online.

Networking

  • Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) v2: This beta introduces RoCE v2 support with ESXi hosts. RDMA provides low-latency, higher-throughput interconnects with CPU offload between the endpoints. If the host has RoCE-capable network adapters, this feature is enabled automatically.
  • Para-virtualized RDMA (PV-RDMA): In this release, ESXi introduces PV-RDMA for Linux guest OSes with RoCE v2 support. PV-RDMA enables customers to run RDMA-capable applications in virtualized environments, and a PV-RDMA-enabled VM can also be live migrated.
  • ESXi platform enhancements for NSX: This release includes VMXNET3 version 4, which supports Geneve/VXLAN TSO as well as checksum offload. It also supports RSS for UDP and for ESP; both features are disabled by default, and the guest or host admin can enable or disable them as needed.

vCenter Topology

  • vCenter Embedded Linked Mode: The vCenter Server appliance now supports vCenter with embedded Platform Services Controllers connected in Enhanced Linked Mode. We are calling this vCenter Embedded Linked Mode, and we will support 10 nodes connected in Linked Mode. Moreover, full support for vCenter HA and vCSA backup and restore is also included. This can reduce your management VMs and configuration items by up to 75 percent!
  • vCenter Cross-domain repointing: Have you ever wanted to move your vCenter to another domain or consolidate two domains? Cross-domain repointing gives you an interactive way to move your vCenter to a new domain. The same tool can be used to move all of your vCenters to another domain. We guide you through this process and allow you to keep, copy or delete your data along the way.
  • Cross-VC Mixed Version Provisioning: vCenter 6.0 introduced provisioning between vCenter instances. This is often called “cross-vCenter provisioning.” The use of two vCenter instances introduces the possibility that the instances are different release versions. This feature enables customers to use different vCenter versions while allowing cross-vCenter, mixed-version provisioning operations (vMotion, Full Clone and cold migrate) to continue as seamlessly as possible.

vSphere Lifecycle Enhancements

  • vCenter Migration Assistant: In this release, the Windows Migration Assistant includes a new engine that brings your vCenter online quickly during migration and then imports external database data, such as stats, events, and alarms, in the background. Your vCenter is up and running while the historical data is imported.
  • vCenter Embedded Linked Mode Multiple Deployment CLI: Along with vCenter Embedded Linked Mode, we are also releasing CLI based deployment options for installation where up to 10 vCenter Embedded Linked Mode nodes can be deployed automatically using our CLI.
  • Single Reboot during ESXi upgrades: With this release, hosts that are upgraded via Update Manager are rebooted only once instead of twice. This feature is available for all types of host hardware, but limited to an upgrade path from vSphere 6.5 to this new release. This will significantly reduce the downtime due to host upgrades and will provide a huge benefit to business continuity.
  • Quick Boot: In this release, Update Manager will trigger a “soft reboot” and skip BIOS/firmware initialization for a limited set of pre-approved hardware. This further reduces the downtime incurred during ESXi upgrades. This feature can be disabled if necessary, but is the default method of rebooting for hosts that satisfy the hardware and driver requirements.

Performance and Availability

  • Fault Tolerance scalability improvements: With this release, vSphere Fault Tolerance supports 8 vCPU / 128 GB RAM per FT-protected VM. Other FT supported scalability limits remain the same.
  • Storage failure protection with Fault Tolerance-protected VMs: In this release, vSphere Fault Tolerance interoperates with VM Component Protection (VMCP). When storage for a protected VM experiences All Paths Down (APD) or Permanent Device Loss (PDL) failures, FT will fail over the protected VM to a host whose storage is available and active.
  • Support for memory mapping for 1GB page sizes: Applications with a large memory footprint, especially greater than 32GB, can often stress the hardware memory subsystem (i.e., the Translation Lookaside Buffer, or TLB) with their access patterns. Modern processors can mitigate this performance impact by creating larger mappings to memory and increasing the memory reach of the application. In prior releases, ESXi allowed guest OS memory mappings based on 2MB page sizes. This release introduces memory mappings for 1GB page sizes.
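To make the “memory reach” point above concrete, here is a back-of-the-envelope calculation. The 1,024-entry TLB size is purely illustrative, not a claim about any particular CPU:

```python
# Back-of-the-envelope TLB reach: how much memory a TLB of a given size can
# map at each page size. The 1,024-entry TLB is a hypothetical example.
def tlb_reach_bytes(entries: int, page_size_bytes: int) -> int:
    # Each TLB entry maps exactly one page, so reach = entries * page size.
    return entries * page_size_bytes

MiB = 1024 ** 2
GiB = 1024 ** 3

reach_2mb = tlb_reach_bytes(1024, 2 * MiB)  # 2 GiB covered with 2 MB pages
reach_1gb = tlb_reach_bytes(1024, 1 * GiB)  # 1 TiB covered with 1 GB pages
```

With 1 GB mappings, the same hypothetical TLB covers 512 times more memory than with 2 MB mappings, which is why applications with footprints well beyond 32 GB see fewer TLB misses.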