Apr 18 2018

Top vBlog 2018 starting soon, make sure your site is included

I’ll be kicking off Top vBlog 2018 very soon and my vLaunchPad website is the source for the blogs included in the Top vBlog voting each year, so please take a moment and make sure your blog is listed.  Every year I get emails from bloggers wanting to be added after the voting starts, but once it starts it’s too late as it messes up the ballot. I’ve also moved a bunch of blogs that have not published in over a year to a special archive section; those archived blogs still have good content so I haven’t removed them, but since they are not active they will not be on the Top vBlog ballot.

So if you’re not listed on the vLaunchPad, here’s your last chance to get listed. Please use this form and give me your name, blog name, blog URL, twitter handle & RSS URL. I do have a number of listings from people who already filled out the form that I need to get added; the site should be updated in the next 2 weeks to reflect any additions or changes. I’ll post again once that is complete so you can verify that your site is listed. So hurry on up so the voting can begin; the nominations for voting categories will be opening up very soon.


Apr 17 2018

Configuration maximum changes in vSphere 6.7

A comparison using the Configuration Maximum tool for vSphere shows the following changes between vSphere 6.5 & 6.7.

Virtual Machine Maximums (6.5 → 6.7)

  • Persistent Memory - NVDIMM controllers per VM: N/A → 1
  • Persistent Memory - Non-volatile memory per virtual machine: N/A → 1024GB
  • Storage Virtual Adapters and Devices - Virtual SCSI targets per virtual SCSI adapter: 15 → 64
  • Storage Virtual Adapters and Devices - Virtual SCSI targets per virtual machine: 60 → 256
  • Networking Virtual Devices - Virtual RDMA Adapters per Virtual Machine: N/A → 1

ESXi Host Maximums (6.5 → 6.7)

  • Fault Tolerance maximums - Virtual CPUs per virtual machine: 4 → 8
  • Fault Tolerance maximums - RAM per FT VM: 64GB → 128GB
  • Host CPU maximums - Logical CPUs per host: 576 → 768
  • ESXi Host Persistent Memory Maximums - Maximum Non-volatile memory per host: N/A → 1TB
  • ESXi Host Memory Maximums - Maximum RAM per host: 12TB → 16TB
  • Fibre Channel - Number of total paths on a server: 2048 → 4096
  • Common VMFS - Volumes per host: 512 → 1024
  • iSCSI Physical - LUNs per server: 512 → 1024
  • iSCSI Physical - Number of total paths on a server: 2048 → 4096
  • Fibre Channel - LUNs per host: 512 → 1024
  • Virtual Volumes - Number of PEs per host: 256 → 512


Apr 17 2018

Important information to know before upgrading to vSphere 6.7

vSphere 6.7 is here, and with support for vSphere 5.5 ending soon (September) many people will be considering upgrading to it. Before you rush in, though, there is some important information about this release that you should be aware of. First let’s talk upgrade paths: you can’t just upgrade from any prior vSphere version to this release; only direct upgrades from certain versions are supported.

So duly note that upgrades to vSphere 6.7 are only possible from vSphere 6.0 or vSphere 6.5. If you are currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.
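
If you’re not sure what a host is currently running before planning your path, it’s quick to check from the ESXi Shell. A minimal sketch using a command that has been in ESXi for several releases:

  # Report the product, version, build and update level of the host
  esxcli system version get

Run this on each host (and check the vCenter Server version in the VAMI) before mapping out your upgrade order.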

Next, know that vSphere 6.7 is the absolute final release for the Windows version of vCenter Server and the Flex vSphere Web Client. VMware claims that the new HTML5 web client is much better in this release, but it is not yet fully functional; VMware says it is about 95% there, so there may be things you still can’t do in it yet. The 6.7 release notes state this:

Important!

In vSphere 6.7, the vSphere Client (HTML5) has many new features and is close to being a fully functional client with all the capabilities of vSphere Web Client (Flex). The majority of the features required to manage vCenter Server operations are available in this version, including vSAN and initial support for vSphere Update Manager (VUM). For an up-to-date list of unsupported functionality, see Functionality Updates for the vSphere Client Guide. vSphere 6.7 continues to offer the vSphere Web Client, which you can use for all advanced vCenter Server operations missing in the vSphere Client. However, VMware plans to deprecate the vSphere Web Client in future releases. For more information, see Goodbye, vSphere Web Client.

If you leverage vSphere APIs and use plug-ins also know this:

Important!

The vSphere 6.7 release is the final release for two sets of vSphere client APIs: the vSphere Web Client APIs (Flex) and the current set of vSphere Client APIs (HTML5), also known as the Bridge APIs. A new set of vSphere Client APIs are included as part of the vSphere 6.7 release. These new APIs are designed to scale and support the use cases and improved security, design, and extensibility of the vSphere Client. VMware is deprecating webplatform.js, which will be replaced with an improved way to push updates into partner plugin solutions without any lifecycle dependencies on vSphere Client SDK updates. Note: If you have an existing plugin solution to the vSphere Client, you must upgrade the Virgo server. Existing vSphere Client plugins will not be compatible with the vSphere 6.7 release unless you make this upgrade. See Upgrading Your Plug-in To Maintain Compatibility with vSphere Client SDK 6.7 for information on upgrading the Virgo server.

TLS is a transport protocol that allows components to communicate securely with each other. In vSphere 6.7 VMware made a move to force better security, so TLS 1.2 is now the default. Prior to 6.7, TLS 1.0 was the default for many VMware products; with TLS 1.2 now the default across the board, this could potentially break integration with 3rd-party tools unless the vendor supports TLS 1.2, as TLS 1.0/1.1 are now disabled. This KB article has a good matrix of VMware products and their default TLS options prior to 6.7. This doesn’t mean support for TLS 1.0/1.1 is gone; it’s just not enabled by default and can be re-enabled if needed on a product-by-product basis (not recommended though). Here’s what the 6.7 release notes say about this:

Important!

In vSphere 6.7, only TLS 1.2 is enabled by default. TLS 1.0 and TLS 1.1 are disabled by default. If you upgrade vCenter Server or Platform Services Controller to vSphere 6.7, and that vCenter Server instance or Platform Services Controller instance connects to ESXi hosts, other vCenter Server instances, or other services, you might encounter communication problems. To resolve this issue, you can use the TLS Configurator utility to enable older versions of the protocol temporarily on vSphere 6.7 systems. You can then disable the older, less secure versions after all connections use TLS 1.2. For information, see Managing TLS Protocol Configuration with the TLS Configurator Utility in the vSphere 6.5 Documentation Set. In the vSphere 6.7 release, vCenter Server does not support the TLS 1.2 connection for Oracle databases.
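
If you do hit these communication problems, the TLS Configurator utility mentioned above can temporarily re-enable the older protocols. Here’s a rough sketch of what that looks like on the vCenter Server Appliance; the path and flags are from my reading of the VMware docs, so verify them against your build before running anything:

  # From the vCSA shell: scan to report the TLS versions in use per service/port
  cd /usr/lib/vmware-TlsReconfigurator/VcTlsReconfigurator
  ./reconfigureVc scan
  # Temporarily allow TLS 1.1 alongside 1.2 (this restarts services)
  ./reconfigureVc update -p TLSv1.1 TLSv1.2
  # Once all connections use TLS 1.2, lock it back down
  ./reconfigureVc update -p TLSv1.2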

vSphere 6.7 introduces virtual hardware version 14 (HW compatibility level), which is necessary to take advantage of some of the new features in vSphere 6.7 like VBS, vTPM, vIOMMU, vPMEM and per-VM EVC. To use these features when upgrading from a previous version, you must upgrade the hardware compatibility level to 14. However, this could potentially cause disruption to the VM OS, as upgrading is equivalent to replacing the motherboard of a computer. So it is recommended that you only upgrade to 14 if you really need to.
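
If you determine a VM really does need hardware version 14, you can upgrade it from the vSphere Client, but it can also be done per VM from the ESXi Shell. A minimal sketch, assuming the VM is powered off and backed up first; the vmx-14 version string is my assumption here, so verify the syntax on your build:

  # List registered VMs to find the target VM's ID
  vim-cmd vmsvc/getallvms
  # Upgrade the VM's virtual hardware to version 14 (VM must be powered off)
  vim-cmd vmsvc/upgrade <vmid> vmx-14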

Some additional FYIs:

  • The vSphere 6.7 release is the final release that supports replacing solution user certificates through the UI. Renewing these certificates with VMCA certificates through the UI will be supported in future releases.
  • The vSphere 6.7 release is the final release that requires customers to specify SSO sites. This descriptor is not required for any vSphere functionality and will be removed. Future vSphere releases will not include SSO Sites as a customer configurable item.

On the good side, upgrading a host now requires only a single reboot, so the process is less disruptive (see the Single Reboot and Quick Boot notes in my What’s New summary for more detail).

If you plan on upgrading to vSphere 6.7, note the important upgrade order for VMware products and components that must be followed to avoid issues; the hosts and VMs are at the very end of this order. The order is essentially this:

  • vRA->vRO->vRB->vROPS->vRLI->VADP backup solutions->NSX->External PSC->vCenter Server->VUM->VR->SRM->UMDS->ESXi->vSAN->Virtual Hardware->VMware Tools

So make sure you do your homework before you upgrade to vSphere 6.7, read through the documentation, make sure all your 3rd party tools support it, check the VMware Hardware Compatibility Guide and be prepared. There are a lot of good things in this release so make sure you are ready before you dive into it.


Apr 17 2018

vSphere 6.7 Link-O-Rama

Your complete guide to all the essential vSphere 6.7 links from all over the VMware universe. Bookmark this page and keep checking back as it will continue to grow as new links are added every day. Also be sure to check out the Planet vSphere-land feed for all the latest blog posts from the Top 100 vBloggers.

Summary of What’s New in vSphere 6.7 (vSphere-land)
Important information to know before upgrading to vSphere 6.7 (vSphere-land)
Configuration maximum changes in vSphere 6.7 (vSphere-land)

VMware What’s New Links

Introducing VMware vSphere 6.7! (VMware vSphere Blog)
Introducing vCenter Server 6.7 (VMware vSphere Blog)
Introducing vSphere 6.7 Security (VMware vSphere Blog)
Introducing Faster Lifecycle Management Operations in VMware vSphere 6.7 (VMware vSphere Blog)
Introducing Developer and Automation Interfaces for vSphere 6.7 (VMware vSphere Blog)
Introducing vSphere with Operations Management 6.7 (VMware vSphere Blog)
Introducing vSphere 6.7 for Enterprise Applications (VMware vSphere Blog)

VMware Video Links

VMware vSphere 6.7 Quick Boot (VMware vSphere YouTube)
Faster Host Upgrades to vSphere 6.7 (VMware vSphere YouTube)
VMware vSphere 6.7 VM Encryption (VMware vSphere YouTube)
VMware vSphere 6.7 Optimizing GPU Usage (VMware vSphere YouTube)
VMware vSphere 6.7 TPM 2.0 (VMware vSphere YouTube)

Availability (HA/DRS/FT) Links

Documentation Links

vSphere 6.7 GA Release Notes (VMware)
vSphere 6.7 Datasheet (VMware)

Download Links

ESXi Download (VMware)
vCenter Server Download (VMware)

ESXi Links

General Links

VMware vSphere 6.7 – vSphere Update Manager (VUM) HTML5 and Quick Boot (ESX Virtualization)
VMware vSphere 6.7 and Enterprise Apps (ESX Virtualization)
VMware vSphere 6.7 what’s new (NoLabNoParty)
vSphere 6.7 – WOW – is now GA (Notes from MWhite)
New in Software Defined Compute in vSphere 6.7 (Plain Virtualization)
VMware vSphere 6.7 featuring vSAN 6.7 released! (TinkerTry)
VMware vSphere 6.7 announced today, here’s how to download it right away (TinkerTry)
Status of VMware vSphere 6.7 support by VM backup companies (TinkerTry)
VMware vSphere 6.7 Released (VCDX56)
VMware vSphere 6.7 is now GA  (VCDX133)
VMware vSphere 6.7 is GA (vInfrastructure Blog)
VMware vSphere 6.7 introduces Skylake EVC Mode (Virten.net)

Installing & Upgrading Links

Knowledgebase Articles Links

Licensing Links

Networking Links

News/Analyst Links

VMware Releases vSphere 6.7, Delivering Advances In Hybrid Cloud And Multi-Application Support (CRN)
VMware Updates vSphere & vSAN To 6.7 (Storage Review)
VMware pulls vSAN 6.7 deeper into vSphere (TechTarget)
Forking hell! VMware now has TWO current versions of vSphere (The Register)
Here’s What’s New in vSphere 6.7 (Virtualization Review)
VMware Elevates the Hybrid Cloud Experience with New Releases of vSphere and vSAN (VMware PR)
VMware updates vSphere, vSAN with more hybrid management features (ZDNet)

Performance Links

Scripting/CLI/API Links

Security Links

VMware vSphere 6.7 Security Features (ESX Virtualization)

Storage Links

What’s New in Core Storage in vSphere 6.7 Part I: In-Guest UNMAP and Snapshots (Cody Hosterman)
What’s New in Core Storage in vSphere 6.7 Part II: Sector Size and VMFS-6 (Cody Hosterman)
What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits (Cody Hosterman)
What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support (Cody Hosterman)
What’s in the vSphere and vSAN 6.7 release? (Cormac Hogan)

vCenter Server Links

VMware vSphere 6.7 Announced – vCSA 6.7 (ESX Virtualization)

Virtual Volumes (VVols) Links

vROPS Links

Operationalize Your World with vRealize Operations 6.7 (Virtual Red Dot)

VSAN Links

VMware vSAN 6.7 Announced (ESX Virtualization)
What’s New with VMware vSAN 6.7 (Virtual Blocks)
Extending Hybrid Cloud Leadership with vSAN 6.7 (Virtual Blocks)
Released: vSAN 6.7 – HTML5 Goodness, Enhanced Health Checks and More! (Virtualization Is Life!)
vSpeaking Podcast Episode 75: What’s New in vSAN 6.7 (Virtually Speaking Podcast)
What’s new vSAN 6.7 (Yellow Bricks)

vSphere Web Client Links

Update Manager Is Now in the New and Improved HTML5 vSphere 6.7 Client (vMiss)


Apr 17 2018

Summary of What’s New in vSphere 6.7

Today VMware announced vSphere 6.7, coming almost a year and a half after the release of vSphere 6.5. It doesn’t look like the download is quite available yet, but it should be shortly. Below is the What’s New document from the Release Candidate that summarizes most of the big new things in this release. I’ll be doing an in-depth series following this post with my own take on what’s new in vSphere 6.7. Also be sure to check out my huge vSphere 6.7 Link-O-Rama collection.

Compute

  • Persistent Memory (PMem): In this RC, ESXi introduces support for Persistent Memory to take advantage of ultra-fast storage closer to CPU. PMem is a new paradigm in computing which fills the important gap between ultra-fast volatile memory and slower storage connected over PCIe. This RC includes support for PMem at the guest level to consume as a ‘virtualized Non-volatile dual in-line memory module (NVDIMM)’ as well as a ‘virtualized fast block storage device’ powered by PMem. It is also important to note that customers using ESXi with PMem can manage PMem resources at the cluster level and can also perform live migration of workloads using PMem.

Security and Compliance

  • Transport Layer Security protocol (TLS) 1.2: This vSphere RC has been updated (in accordance with the latest security requirements) to adopt the latest version of TLS protocol. This release includes support for TLS 1.2 out of the box. TLS 1.0 and TLS 1.1 will be disabled by default with the option to manually enable them on both ESXi hosts and vCenter servers.
  • FIPS 140-2: This vSphere RC includes FIPS 140-2 capabilities turned on by default! The UI management interfaces now use FIPS 140-2 capable cryptography libraries by default, and the VMware Certificate Authority will use FIPS 140-2 capable libraries for key generation by default. The kernel cryptography is under evaluation to be FIPS 140-2 validated and ESXi currently uses this cryptography under evaluation. Note the VAMI UI is not FIPS capable in this release. Note: To clarify, FIPS features are “turned on” in this release. FIPS certification of vSphere is a process that VMware is exploring for a later date.
  • TPM 2.0 Support and Host Attestation: This vSphere RC release introduces support for TPM 2.0 and a new capability called “Host Attestation”. A TPM, or Trusted Platform Module, is a hardware device designed to securely store information such as credentials or measurements. In the case of ESXi, a number of measurements are taken of a known good configuration. At boot time measurements are taken and compared with known good values securely stored in the TPM 2.0 chip. Using this comparison, vSphere can ensure that features such as Secure Boot have not been turned off. In 6.5, Secure Boot for ESXi was introduced and ensures that ESXi boots using only digitally signed code. Now with Host Attestation, a validation of that boot process can be reported up to a dashboard within vCenter.
  • Virtual TPM (vTPM): The vTPM capability in this release lets you add a virtualized TPM 2.0 compatible “chip” to a virtual machine running on vSphere. The guest OS can use virtual TPM to store sensitive information, perform cryptographic operations, or attest integrity of the guest platform. In this release, vSphere makes adding virtual TPM to a VM as easy as adding a virtual device to the VM. In a hardware TPM, the storage of credentials is ensured by having an encrypted space on the TPM. For a virtual TPM, this storage space is encrypted using VM Encryption. VM Encryption was introduced in vSphere 6.5 and requires the use of an external Key Management System. See the documentation for more information on these requirements. As a point of clarification, the virtual TPM does not extend to the hosts hardware TPM. To support operations such as vMotion or vSphere High Availability, the host has a root of trust to the hardware and based on that, presents trusted virtual hardware to virtual machines.
  • Support for Virtualization Based Security (VBS): This vSphere RC provides a seamless way to prepare Windows VMs for Virtualization Based Security (VBS). This is as easy as clicking a single checkbox in VM settings! vSphere will enable admins to enable and disable VBS features for Windows VMs and verify that Windows VBS features (Credential Guard and Device Guard) are functional inside the guest OS. This feature requires ESXi host to be running with Haswell CPU (or greater) and the guest OS to be Windows Server 2016 or Windows 10 (64 bit). Note: Additional configuration within Windows is necessary to enable these features. See Microsoft documentation for more information.
  • Encrypted vMotion: Encrypted vMotion was introduced in vSphere 6.5. With this release, this feature will also be supported across different vCenter instances and versions.
  • VM Encryption: VM Encryption provides VM level data-at-rest encryption solution and was introduced in vSphere 6.5 to protect the VM from both external and internal threat vectors as well as to meet the compliance objectives of an organization. In this release, VM Encryption UI in the HTML5 client gets a facelift which makes enabling encryption on virtual machines seamless using the new HTML5 based vSphere client.

Management

  • vSphere Client (HTML5): Try out the most recent release of the vSphere Client, with additional support for existing functionality, further improved performance and usability, and support for new features in this release. Specific highlights include support for basic Licensing functionality, create/edit VM storage policies, and the new vSphere HTML Client SDK, amongst many others. You can also try the version available on Fling site to experience the new features faster, available at – https://labs.vmware.com/flings/vsphere-html5-web-client
  • vCenter Server Appliance monitoring and management enhancements: vCenter Server Appliance (vCSA) management interface (VAMI UI) includes a lot of new capabilities like scheduling backup, disk monitoring, patching UI, syslog configuration and also the new Clarity themed UI. Also included are new vSphere Alarms for resource exhaustion and service failures. All these new capabilities further simplify vCenter Server management.
  • New vSphere client APIs for the vSphere HTML5 client: vSphere HTML5 client APIs introduced are extensible and scalable for HTML5 client use cases, optimized for Clarity design guidelines, have no Flex dependencies, and have security improvements. Plugins written with these new APIs will reduce developer effort on testing with the vSphere web client (Flex). Moreover, usage of these APIs is a prerequisite for plugins to be compatible with VMware Cloud on AWS.
  • Instant Clone: Instant Clone enables a user to create powered-on VMs from the running state of another powered-on VM without losing its state. This will enable the user to move into a new paradigm of just-in-time (JIT) provisioning given the speed and state-persisting nature of this operation.
  • Per-VM EVC: Per-VM EVC enables the EVC mode to become an attribute of the VM rather than the specific processor generation it happens to be booted on in the cluster. This allows for seamless migration between two datacenters that sport different processors. Further, the feature is persisted per VM and does not lose the EVC mode during migrations across clusters nor during power cycles.
  • What’s new with Nvidia GRID™ vGPU: For this release, VMware and Nvidia collaborated to significantly enhance the operational flexibility and utilization of virtual infrastructure accelerated with Nvidia GRID™ vGPU technology. Most prominent is the new ability to suspend any VM using a compatible Nvidia GRID vGPU profile and resume it on either the same or another vSphere host with compatible and available compute and vGPU resources. This reduces the dependence of VI admins on end-users’ awareness of maintenance windows, and significantly lowers end-user disruption by removing the need for them to log off and shut down their desktops before maintenance windows. In addition, by using Horizon View’s Power Policy option, idle desktops consuming vGPU can be suspended, either freeing up resources for other clients or reducing OpEx by lowering power usage.
  • APIs for SPBM Using vAPI: The Storage Policy Based Management APIs manage the storage policy association for a virtual machine and its associate virtual disks. These include retrieving information on the compliance status of a virtual machine and its associated entities.

Storage

  • 4K Native Hard Disk Drive support with ESXi Hosts: With this release, customers can now deploy ESXi on servers with 4K Native HDDs used for local storage (4K Native NVMe/SSD drives are not supported at this time). We are providing a software read-modify-write layer within the ESXi PSA stack which allows it to emulate these drives as 512e drives for all layers above the PSA stack. ESXi continues to expose 512-sector VMDKs to the guest OS. Servers with UEFI BIOS support can boot from 4K Native drives. This release does not support creating RDMs on these local drives. (See the sector-size check sketch after this list.)
  • Improving backup performance with network-block device modes (NBD and NBDSSL): This vSphere RC includes performance improvements to the network-block device transport modes between the third-party backup proxy and the virtual disk over a LAN. The VMware VADP/VDDK code now backs up only allocated sectors and also leverages asynchronous NFC calls to improve NBD and NBDSSL performance.
  • Supporting snapshots and backups for detached first-class disks (FCD): This vSphere RC includes support for snapshots and VADP/VDDK-based backups for detached first-class disks. For example, a key use case where detached first-class disks are useful is Horizon AppStacks and writable volumes. With vSphere 6.5, we introduced support for snapshots and backups for attached first-class disks. This release extends that capability to first-class disks that are detached from any VM.
  • iSCSI Extension for RDMA (iSER): Now customers can deploy ESXi with external storage systems supporting iSER targets. iSER takes advantage of faster interconnects and CPU offload using Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE). We are providing the iSER initiator function, which allows the ESXi storage stack to connect with iSER-capable target storage systems.
  • Extending support for number of disks per VM: Now customers can deploy virtual machines with up to 256 disks using PVSCSI adapters. Each PVSCSI adapter can support up to 64 devices (four PVSCSI adapters × 64 devices = 256). Devices can be virtual disks or RDMs.
  • Software Fibre Channel over Ethernet (FCoE) Initiator: In this release, ESXi introduces a software-based FCoE (SW-FCoE) initiator that can create FCoE connections over Ethernet controllers. The VMware FCoE initiator works on lossless Ethernet fabric using Priority-based Flow Control (PFC). It can work in Fabric and VN2VN modes. Please check the VMware Compatibility Guide (VCG) for supported NICs.
  • Support for Intel’s Volume Management Device (VMD): ESXi introduces inbox driver support for Intel’s VMD technology, which was introduced recently with the launch of Intel’s Skylake platform. Intel’s VMD technology helps manage NVMe drives with hot-swap capabilities and reliable LED management.
  • Configurable Automatic UNMAP Rate: In this release, we have added a feature to make the UNMAP rate a configurable parameter at the datastore level. With this enhancement, customers can change the UNMAP rate to the best of the storage array’s capability and fine-tune space reclamation accordingly. In this release the UNMAP rate can be configured using the ESXi CLI only. We allow UNMAP rates in increments of 100, where the minimum rate is 100 MBps and the maximum rate is 2000 MBps. CLI command to check the current configuration: esxcli storage vmfs reclaim config get --volume-label <volume-name>  Setting the UNMAP rate to 100 MBps: esxcli storage vmfs reclaim config set --volume-label <volume-name> --reclaim-method fixed -b 100
  • Automatic UNMAP Support for SE Sparse: SE Sparse is a sparse virtual disk format and is widely used for snapshots in vSphere; it is the default snapshot format for VMFS-6 datastores. In this release, we are providing automatic space reclamation support for VMs with SE Sparse snapshots on VMFS-6 datastores. This will only work when the VM is powered on, and it is applicable to the top-most snapshot only.
  • No Support for VMFS-3 Datastores: In this release, we are not supporting VMFS-3 datastores. All VMFS-3 volumes will get upgraded automatically to VMFS-5 volumes during mount. Customers can do in-place or online upgrade of VMFS-3 volumes to VMFS-5.
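
Related to the 4K Native support in the first bullet above, you can check how ESXi has identified the sector format of each local device (512n, 512e or 4Kn) from the CLI. A small sketch, assuming your build includes the capacity subcommand:

  # Show logical/physical block sizes and the detected format type per device
  esxcli storage core device capacity list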

Networking

  • Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) v2: This release introduces RoCE v2 support with ESXi hosts. RDMA provides low-latency and higher-throughput interconnects with CPU offloads between the endpoints. If the host has RoCE-capable network adapter(s), this feature is automatically enabled.
  • Para-virtualized RDMA (PV-RDMA): In this release, ESXi introduces PV-RDMA for Linux guest OSes with RoCE v2 support. PV-RDMA enables customers to run RDMA-capable applications in virtualized environments. A PV-RDMA enabled VM can also be live migrated.
  • ESXi platform enhancements for NSX: This release includes Vmxnet3 version 4, which supports Geneve/VXLAN TSO as well as checksum offload. It also supports RSS for UDP as well as for ESP; while disabled by default, the guest/host admin will be able to enable/disable both features as needed.

vCenter Topology

  • vCenter Embedded Linked Mode: vCenter Server appliance now has support for vCenter with embedded Platform Services Controllers connected in Enhanced Linked Mode. We are calling this vCenter Embedded Linked Mode and we will support 10 nodes connected in Linked Mode. Moreover, full support for vCenter HA and vCSA back-up and restore is also included. This can reduce your management VMs and configuration items by up to 75 percent!
  • vCenter Cross-domain repointing: Have you ever wanted to move your vCenter to another domain or consolidate two domains? Cross-domain repointing gives you an interactive way to move your vCenter to a new domain. The same tool can be used to move all of your vCenters to another domain. We guide you through this process and allow you to keep, copy or delete your data along the way.
  • Cross-VC Mixed Version Provisioning: vCenter 6.0 introduced provisioning between vCenter instances. This is often called “cross-vCenter provisioning.” The use of two vCenter instances introduces the possibility that the instances are different release versions. This feature enables customers to use different vCenter versions while allowing cross-vCenter, mixed-version provisioning operations (vMotion, Full Clone and cold migrate) to continue as seamlessly as possible.

vSphere Lifecycle Enhancements

  • vCenter Migration Assistant: In this release, the Windows Migration Assistant now includes an engine so that when migrating your vCenter, we can have your vCenter operational very quickly and import external database data such as stats, events and alarms in the background. This means your vCenter is up and running while the remaining data is imported.
  • vCenter Embedded Linked Mode Multiple Deployment CLI: Along with vCenter Embedded Linked Mode, we are also releasing CLI based deployment options for installation where up to 10 vCenter Embedded Linked Mode nodes can be deployed automatically using our CLI.
  • Single Reboot during ESXi upgrades: With this release, hosts that are upgraded via Update Manager are rebooted only once instead of twice. This feature is available for all types of host hardware, but limited to an upgrade path from vSphere 6.5 to this new release. This will significantly reduce the downtime due to host upgrades and will provide a huge benefit to business continuity.
  • Quick Boot: In this release, Update Manager will trigger a “soft reboot” and skip BIOS/firmware initialization for a limited set of pre-approved hardware. This further reduces the downtime incurred during ESXi upgrades. This feature can be disabled if necessary, but is the default method of rebooting for hosts that satisfy the hardware and driver requirements.
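
Since Quick Boot only applies to a limited set of pre-approved hardware, it’s worth checking whether a host qualifies before counting on it. VMware ships a compatibility-check script on the ESXi host for this; the path below is from VMware’s Quick Boot KB, so confirm it on your build:

  # Run on the ESXi host; reports whether this hardware/driver combo supports Quick Boot
  /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py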

Performance and Availability

  • Fault Tolerance scalability improvements: With this release, vSphere Fault Tolerance supports 8 vCPU / 128 GB RAM per FT-protected VM. Other FT supported scalability limits remain the same.
  • Storage failure protection with Fault Tolerance-protected VMs: In this release, vSphere Fault Tolerance interoperates with VM Component Protection (VMCP). Fault Tolerance will trigger when storage for the protected VM experiences All Paths Down (APD) or Permanent Device Loss (PDL) failures. FT will fail over protected VMs to hosts whose storage is available and active.
  • Support for memory mapping for 1GB page sizes: Applications with a large memory footprint, especially greater than 32GB, can often stress the hardware memory subsystem (i.e. Translation Lookaside Buffer) with their access patterns. Modern processors can mitigate this performance impact by creating larger mappings to memory and increasing the memory reach of the application. In prior releases, ESXi allowed guest OS memory mappings based on 2MB page sizes. This release introduces memory mappings for 1GB page sizes.

Apr 16 2018

Coming Soon: Top vBlog 2018

I’m back at it and getting ready to launch Top vBlog 2018. I’ve been fairly busy the last few months getting my house ready to sell and preparing to move to a new state soon. This year the modified scoring method I put into place last year will remain the same. Instead of just relying on public voting, which can become more about popularity and less about blog content, last year I added several other scoring factors into the mix and I think that worked out well. The total points that a blogger can receive through the entire process will be made up of the following factors:

  • 80% – public voting – general voting – anyone can vote – votes are tallied and weighted for points based on voting rankings as done in past years
  • 10% – number of posts in a year – how much effort a blogger has put into writing posts over the course of a year, based on Andreas’ hard work adding this up each year (aggregators excluded)
  • 10% – Google PageSpeed score – how well a blogger has done to build and optimize their site as scored by Google’s PageSpeed tools; you can read more on this here, where I scored some of the top blogs.

Once again, the minimum of 10 blog posts in 2017 will be enforced for eligibility on the Top vBlog voting form. Some new stuff this year though:

  • An optional timed and scored test your vKnowledge quiz at the end of the voting, giving voters a chance to win Amazon gift cards. This quiz is sponsored by Nutanix and will feature questions to see how much you know about the virtualization community, various trivia and the history of virtualization.
  • A truly live reveal show: I’m looking to have a live reveal show at VMworld US that is also broadcast via the internet. We’ll have special guests, an emcee and more to make it a fun event.

And thank you once again to Turbonomic for sponsoring Top vBlog 2018, stay tuned!


Feb 19 2018

Is 2018 the year of VVols?

Next month VMware’s new Virtual Volumes (VVols) storage architecture will turn 3 years old, as it was released as part of vSphere 6.0 in March of 2015. Since its initial release adoption has been very slow; I believe VMware estimates less than 2% of customers are using VVols. So will 2018 be the year of VVols, and will we finally start to see more mainstream adoption? I’d really love to answer that question with a yes, but to be realistic I’d probably have to answer it as no, and the reason is that VVols has several things working against it, which are the root cause of the slow adoption.

Lack of VMware marketing

Now I know VMware has backed off on marketing VVols, and I understand their rationale, as they see it as core architecture, but it wouldn’t hurt to promote it now and then to help out the partner community. Except for the VMware VVols product team doing some occasional things, you really don’t find many other people at VMware talking VVols up. VMware does have a VVols product page, but it seems most of their marketing and promotion is focused on things like NSX, vSAN and AWS.

To be fair, VMware did do a ton of marketing around VVols the first year or so after it was released, but the timing wasn’t right back then: partners weren’t ready (only 4 supported it), customers weren’t ready (nobody rushed to vSphere 6), heck even VMware wasn’t completely ready until vSphere 6.5 (no replication). Now is really the time for them to be marketing VVols, as the maturity level of VVols has greatly advanced over the last 3 years for both VMware and especially the partner community.

Partners not ready

Developing a VVols solution is no easy task and represents a hugely significant development commitment for partners. Over the last 3 years partner solutions for VVols have been slowly emerging and improving; in fact the last major storage vendor without VVols support (Pure) just finally released their VVols solution recently. This isn’t really a knock against partners: when you factor in the major changes that you have to make in your storage architecture to support VVols, on top of all the other non-VVols development work that partners need to constantly do to improve their platforms, it’s a heck of a challenge to deliver a robust and scalable VVols solution.

I know in my company we have spent 6+ years working on VVols with a dedicated engineering team focused on it. Scaling VVols is one of the biggest challenges, as most block storage arrays were never equipped to handle the tens of thousands of LUNs that VVols could potentially require (1,000 VMs = 3,000 LUNs at a minimum, since each powered-on VM needs at least a config, data and swap VVol). In addition, the way storage arrays interact with Protocol Endpoints and VASA Providers requires a whole different approach from a storage array perspective compared to VMFS.

I think most partners are a lot more ready than they were 3 years ago, but I don’t think you could point to any one partner and call them completely done with VVols. I’d have to say all of them still have VVols support roadmaps stretching out for months and years. Today there are about 20 storage vendors in the VVols HCL that support it on vSphere 6.0, 13 of them supporting it on vSphere 6.5 and only 3 that support the VVols replication functionality in vSphere 6.5. However, just being on the HCL says nothing about what you actually support with VVols, which differs across every partner, or how well you scale with VVols. Hopefully partners keep working hard to improve their VVols solutions.

And as a final comment, VMware is mostly done with the development of VVols; it’s not really on them anymore to deliver missing functionality. There really isn’t any roadblock to partners engineering a complete VVols solution at this point. From what I’ve seen, the VVols roadmap going forward mainly centers around some optimizations of VVols operations and things like bringing bind operations in-band. The upcoming vSphere release brings some minor improvements to VVols, but it really is feature complete from a VMware perspective.

Bad first impressions

You only get one chance to make a first impression and that impression can have a lasting effect. For anyone who tried out VVols early on, when most partners had new and incomplete solutions, the VVols experience may not have gone all that great. Because of that bad first experience they may have already made up their minds about VVols and stuck with what has been working just fine for them for years and years. As a result it’s hard to convince those people to give VVols another try on a more mature and feature-complete VVols solution.

What is really at the root of this problem is that it typically takes years to completely engineer a VVols solution, and if you release it too early, when it isn’t complete and fully baked, you risk giving users a bad VVols experience. VVols is a lot different from other VMware integrations like VAAI and vCenter plug-ins. It requires a vendor to make big changes to their core storage arrays, and engineering a scalable solution that supports a wide variety of array capabilities advertised to SPBM takes a lot of effort. Most vendors I’ve seen have released solutions and slowly improved them over time. Pure seems to be a notable exception: while they were showing off VVols years ago, they appear to have held off on delivering their solution until recently.

So for anyone who may have had a bad first impression of VVols, I strongly encourage you to give it another try. VVols has some fantastic benefits over VMFS and partners have come a long way since the early days. Work with your partners, see where they are today with VVols and how their roadmap looks. At some point I fully expect VMFS to go away, leaving VVols as your only option, so don’t wait too long, which leads me to my next point.

No sense of urgency

Right now there is no sense of urgency from VMware to switch to VVols and I don’t think we will get that sense anytime soon. If VMware really wanted to drive people to VVols they could do what they typically do when they offer two things that are similar in nature: announce the EOL of the old one to get people moving to the new one. Remember when so many people stuck with ESX as their hypervisor of choice despite there being a new and better option in ESXi? Some people simply don’t like change; they stick with what they know and are used to, and are very resistant to switching to something new despite the benefits. Once the partner ecosystem caught up with supporting ESXi, it took VMware finally saying enough is enough, you have no choice, in the next vSphere release it’s ESXi or nothing, to get people motivated to switch.

The same holds true for other duplicate vSphere functionality: VMware made you give up your vSphere C# client, the days of vCenter running on Windows are numbered and soon HTML5 will be the only web UI. Now I know right now isn’t the time for VMware to announce any EOL of VMFS, but at some point they’ll have to create a sense of urgency with customers to get them moving to VVols. It also wouldn’t hurt if VMware dropped some subtle hints along the way. Right now I don’t think customers know VMware’s strategic plans for VMFS and VVols; the message is more “you can choose either this or that” and not “you should move from this to that.” Once VMware does set expectations accordingly and deprecates the old, I think we’ll see a big wave of people moving to VVols.

Lack of support and documentation

I’ve heard this from many people: call VMware for VVols support and many times it’s hard to find someone who knows enough about it to help resolve issues. I’m guessing this is more a matter of VMware support not getting a lot of calls on VVols, and therefore not having a lot of experience with it, than a lack of VMware training support people on VVols. I’m thinking the same might hold true for partners supporting VVols as well. There is also the potential for finger pointing back and forth between partners and VMware; in many cases customers are probably calling VMware first, and the reality is many VVols support issues are probably partner related, as most issues are tied to partner implementations and not the VASA spec. All in all this can result in frustrated customers; I expect this to improve over time as support teams get more experience with VVols.

When it comes to documentation and other VVols support material (i.e. blogs, videos, white papers, etc.), some vendors have done a fairly decent job, but from what I’ve seen others have not done well at all. I’ve looked around at some vendors when doing research and I constantly see either very dated material or no material at all to help customers understand and implement VVols. I know at my company I’ve done all I can to get as much out there as possible, including white papers, analyst reports, blogs, videos, webinars, VMUG sessions and more. Lack of VVols documentation and materials can definitely frustrate and turn away customers wanting to check out VVols. I’d encourage all partners to do a better job with this: don’t put all that engineering effort into VVols and then skimp on documentation.

Lack of partner marketing

It truly is on the partners to promote their own VVols solutions. This isn’t VAAI, where every partner integration is mostly the same thing supporting core primitives. At its core VVols is mainly VASA, the specification that VMware writes that allows storage vendors to develop VVols solutions specific to their own arrays. As a result every storage vendor is going to have their own unique VVols solution, where they can pick and choose what capabilities they want to advertise to storage policies and what features they want to implement with VVols. It’s up to each vendor to decide what they want to do with VVols; they can offer basic capabilities or they can get innovative with their solutions and do cool things that other partners aren’t doing.

Since every partner solution is vastly different with VVols, partners need to market their own unique solutions! If all partners stepped up their VVols marketing, I bet we would see a lot more adoption. Right now many people don’t know what VVols will do for them or why they should switch. It’s on the partners to get the word out and make people want to move to VVols, especially now that VMware has gone largely silent in promoting VVols themselves. How does any company sell something? They market it to raise awareness! Unless we see partners doing more promotion of VVols, the slow adoption trend will continue.

Final thoughts

So 2018 may not be a big year for VVols, but I know adoption will slowly continue to rise, and as time progresses I believe that pace will become more rapid. As I’ve said over and over, VVols has many benefits and is the future for external storage in vSphere. It’s not a matter of if people will switch to VVols, it’s a matter of when. If VMware and partners work together and address some of the things I have talked about here, it will go a long way toward getting VVols more mainstream adoption. I for one want to see that sooner rather than later, so I’ll continue to do my part and hope others do so as well. Viva Las VVols!


Feb 19 2018

VMworld 2018 Call For Papers is open – here’s how to get people to vote for and attend your session

VMware just announced that the Call for Papers for VMworld 2018 is now open until March 13th. Just like last year, VMware has opened the CFP early compared to the usual March-April period of years past. Remember, VMworld US is again back in Vegas this year at the Mandalay Bay from Aug. 26th-30th; VMworld Europe in Barcelona is much later this year, instead of being right after the US event like last year it is pushed out to November (5th-8th). Here are the key dates for the CFP process this year:

  • June 12, 2018 – Speaker Resource Center Live (US and Europe)
  • June 19, 2018 – Content Catalog Live (US and Europe)
  • July 10, 2018 – Presentation First Drafts Due (US)
  • July 17, 2018 – Schedule Builder Live (US)
  • August 2, 2018 – Final Presentations Due (US)
  • August 26-30 – VMworld 2018 US takes place (Las Vegas)
  • September 25, 2018 – Schedule Builder Live (Europe)
  • September 25, 2018 – Presentation First Drafts Due (Europe)
  • November 5-8, 2018 – VMworld takes place (Barcelona)

Last year they ended up extending the deadline a few days, but one thing I can’t stress enough is don’t wait until the last minute and rush through it; plan it out now and write your submissions up so they are well thought out. From previous experience I can tell you to have a catchy title, as it’s your session’s curb appeal. Many people won’t make it past your title, and you miss the chance to interest them with your abstract if you have a boring and uninteresting session title. As a former content committee judge I can also tell you to spend some time on your abstract and don’t rush to throw something together without thinking it through. I’ve seen lots of session proposals that lacked any real detail about what the session was about.

If you want to impress both the content committee and the public voters who will determine if your session is approved, I encourage you to follow the tips listed below for the best chance of getting your session approved. For sponsors in particular, I highly encourage you to read this post I did last year entitled: Sponsor sessions at VMware events: If you build it right they will come. In that post I detailed what works and what doesn’t to make your session attractive to attendees, based on my personal experience at VMworld last year, where I was able to get almost 1,000 people to register for my session. To summarize, the winning formula for a good session tends to be:

  • Knowledgeable, technical speaker + educational/technical content – sales/marketing pitch = great attendance

Here are some additional tips that VMware provides:

Tips for Creating Effective Titles for Submission

  • Do not use abbreviations or acronyms under any circumstances in the titles of your submissions.
  • Do not use competitor or other company names in your submission titles. If you are highlighting other companies within your session, you can adopt these names within the session description.
  • Start with the Benefit: Ex: Shorten Adoption Time by Using VMware’s XXX.
  • Use clear and concise language that attendees will immediately understand. The agenda will eventually host hundreds of sessions and attendees need to easily identify sessions of interest. Straightforward language like “Introduction to”, “Deep Dive” and “Case Study” are popular examples because they quickly tell the attendee important information about the session.

Typical Reasons for Abstract Rejection

  • The abstract is poorly written—ideas are not clear, goals are not established, there are grammatical errors, etc.
  • The content is not relevant to the indicated audience.
  • The session value is not clearly identified.
  • The session topic is not unique or overlaps with another more appropriate abstract.

Tips for Writing Winning Abstracts

  • Avoid beginning your session description with the phrase, “In this session we will…”, or “In this session you will learn…”. It does not add value and becomes tedious on an agenda of several hundred sessions. Instead try a rhetorical question or an interesting industry data point to start your session abstract.
  • Ensure that what you submit will be what you present. Nothing will upset attendees more than signing up for a session that is not what it is advertised to be.
  • Your abstract should generate enthusiasm: make sure your content is relevant, but also generates excitement. What invaluable information will be shared during the session?
  • Thoughtfully leverage the tags in the system for topics, level, and roles. Who is the target audience? What products or topics does this session cover outside of the track name? What roles would specifically benefit from this session? Do not check every check box if your session is applicable to all.
  • Be Original – Attendees want to see new presentations that cover the latest innovations in technology. Take the time to create well-written titles, abstracts, outlines, and the key takeaways for your submission. A thoughtful proposal will have a better chance of being selected and, if accepted, will be seen by thousands of attendees once published in the course catalog.
  • Be Educational – VMware requires that sessions focus on the educational value of the presentation. Be sure that your proposal doesn’t sound like a sales pitch but rather an exciting opportunity for attendees to learn something new.
  • Be Timely – Make sure your topic is relevant to the audience you’re targeting. Review the content topics before submitting a session.

Read the full submission guidelines here and the FAQ here.

