October 2014 archive

Import files to subscribe to all the Top vBlogs via RSS

I’ve long been meaning to create RSS import files that can be used to automatically add all the Top vBlogs to your RSS reader of choice, but I had never gotten around to completing them. Thanks to a reminder from a reader (Brian Olsen), I have now created them by exporting my Top vBlog WordPress table from the vLaunchpad and clearing out all the HTML code that wasn’t related to the RSS feeds. I then created 4 separate OPML files that can be imported into an RSS reader: one each for the Top 10, the Top 25, the Top 50 and the whole Top 100 vBlogs, in case you only want to import a subset. The files are available for download from the page bar on the vLaunchpad. The import process is fairly simple, as outlined below:

1) First download the file you want: go to the vLaunchpad, right-click on the file and select “Save link as”. It will default to an XML file name (i.e. Top50vBlogs.xml); save it to your computer.


2) Now that you have the import file, it’s time to go into your RSS reader and import it. How you do this will vary depending on the RSS reader you are using, but it should be pretty straightforward. In this example I’ll be using a popular free RSS reader, FeedDemon. First I created a folder to put the blogs in and called it Top 50 vBlogs. Then I selected File from the top menu, then Import/Export, and then the Import Feeds option.


3) The Import Feeds window will open. Select “Import An OPML File”, click the Folder icon to browse for the file, change the file type to “XML Files” and select the file you downloaded.


4) The next screen will list all the blogs that are contained in the import file. You can select them all or just the specific ones that you want to import; once you have selected them, click Next.



5) On the next screen, choose a folder to place them in; I selected the Top 50 vBlogs folder that I had already created. Once you have done that, click Next.


6) Click Finish and the blogs will be added to your RSS reader, which will connect to all of them and pull in their latest content. Note that the original OPML file had the RSS feed URL as the blog title, which made it difficult to identify blog names, so I manually edited the file so they display as they do on the vLaunchpad, with the blog ranking, blog name and blog author. Also note that the number of posts pulled from a blog is dictated by each blog’s WordPress settings under “Settings–>Reading–>Syndication feeds show the most recent _ items”.
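For the curious, the structure of these import files is plain OPML: each feed is an outline element carrying a display title and the RSS feed URL. Here’s a minimal sketch of how such a file is put together (this is not the actual vLaunchpad export; the blog names and URLs are made up for illustration):

```python
# Build a minimal OPML import file like the ones described above.
# Each feed becomes an <outline> with a display title and its RSS URL.
import xml.etree.ElementTree as ET

# Hypothetical entries in vLaunchpad style: rank, name and author in the title.
feeds = [
    ("#1 Example vBlog (Jane Doe)", "http://example-vblog.com/feed/"),
    ("#2 Another vBlog (John Roe)", "http://another-vblog.com/feed/"),
]

opml = ET.Element("opml", version="1.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Top vBlogs"
body = ET.SubElement(opml, "body")
for title, url in feeds:
    # type="rss" and xmlUrl are the attributes RSS readers look for
    ET.SubElement(body, "outline", type="rss", text=title,
                  title=title, xmlUrl=url)

ET.ElementTree(opml).write("Top50vBlogs.xml", encoding="utf-8",
                           xml_declaration=True)
print(f"wrote {len(feeds)} feeds to Top50vBlogs.xml")
```

Any reader that supports OPML import (FeedDemon included) should accept a file shaped like this.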



Share This:

Don’t miss out, subscribe to vSphere-land via email



I just added a new option to be notified via email of any new posts here at vSphere-land. Sign up now so you never miss a post again, especially with the Top vBlog voting coming up soon. Every year people tell me they missed out on posts related to Top vBlog, so here’s your chance to stay informed.



The vLaunchpad is finally updated!

I’ve spent a lot of time recently updating and refreshing all my sites: I migrated the vLaunchpad to a new hosting provider, a new WordPress version and a new theme. I also took some time to go through the big backlog of new blogs waiting to be added; I’ve added at least 50 new blogs as I get ready for the upcoming Top vBlog voting, which will be kicking off in December. There are some cool changes in store for Top vBlog this year, so make sure your blog is listed on the vLaunchpad so it is included in the voting and you don’t miss out. I just updated the blog submission form, so make sure you use that, as it includes all the key info I need to add your blog.


Why the VMware vSphere TPS vulnerability is a big deal


VMware recently acknowledged a vulnerability in their Transparent Page Sharing (TPS) feature that could potentially allow VMs to access memory pages of other VMs running on a host. TPS is a host process that leverages the Virtual Machine Monitor (VMM) component of the VMkernel to scan physical host memory and identify redundant VM memory pages. The benefit of TPS is that it reduces a host’s physical memory usage so you can squeeze more VMs onto it, as memory is often one of the most constrained resources on a host. TPS is basically the equivalent of storage de-duplication for RAM and works at the 4KB page level. You can read a full description of how it works in this classic VI3 memory white paper, which refers to it as “Memory Sharing”, or in this more recent vSphere 5 white paper on memory management. Here’s a brief description of how it works:

[important]When multiple virtual machines are running, some of them may have identical sets of memory content. This presents opportunities for sharing memory across virtual machines (as well as sharing within a single virtual machine). For example, several virtual machines may be running the same guest operating system, have the same applications, or contain the same user data. With page sharing, the hypervisor can reclaim the redundant copies and keep only one copy, which is shared by multiple virtual machines in the host physical memory. As a result, the total virtual machine host memory consumption is reduced and a higher level of memory overcommitment is possible.[/important]
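The mechanism described above can be sketched as a toy model (this is illustrative Python, not VMware’s implementation): pages with identical content hash to the same value, so the host keeps a single physical copy and maps every identical guest page to it.

```python
# Toy model of transparent page sharing: deduplicate identical fixed-size
# "pages" by content hash, keeping one physical copy per unique page.
import hashlib

PAGE_SIZE = 4096  # TPS works at 4 KB granularity

def share_pages(guest_pages):
    """Return (physical_store, mapping) after deduplicating identical pages."""
    physical = {}   # content hash -> the single retained physical copy
    mapping = []    # guest page index -> hash of the physical page backing it
    for page in guest_pages:
        digest = hashlib.sha256(page).hexdigest()
        physical.setdefault(digest, page)  # keep only the first copy seen
        mapping.append(digest)
    return physical, mapping

# Made-up pages: VMs booted from the same OS image share many pages
# (identical kernel code, zeroed memory), plus some unique data.
zero_page = bytes(PAGE_SIZE)
os_page = b"kernel code".ljust(PAGE_SIZE, b"\0")
pages = [os_page, zero_page, os_page, zero_page,
         b"unique data".ljust(PAGE_SIZE, b"\0")]

physical, mapping = share_pages(pages)
print(f"{len(pages)} guest pages backed by {len(physical)} physical pages")
# -> 5 guest pages backed by 3 physical pages
```

The memory saved is the difference between the guest pages and the retained physical copies, which is exactly what the “shared” and “sharedcommon” counters discussed later in this post report.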


It’s important to note that the TPS feature is nothing new to vSphere, having first been introduced with the VI3 release back in 2006, but apparently someone has now finally gotten around to successfully exploiting it. Why is this a big deal? Because a virtualized architecture demands VM isolation; it is the most important security requirement for virtualization. Each VM guest running on a host must not be allowed in any way to access another VM guest. They must be kept in separate locked rooms, with only the hypervisor possessing the keys to access all of them.

To illustrate this, let’s use a real-world scenario. Imagine you check into a hotel: you expect privacy and isolation, and that no other guest will be able to come into your room. Your room may have an adjoining door shared with another room, but neither guest can get through it; only the hotel management controls that door. If a guest somehow figured out a way to open that door, get into your room and invade your privacy, that would be a pretty big deal, wouldn’t you say?

VMware appears to be downplaying it, as it obviously exposes a chink in their virtual armor. They have issued a KB article describing the vulnerability and giving guidance on how customers can disable TPS on their hosts. VMware doesn’t name the specific source that found the vulnerability in the KB article; they simply refer to it as “an academic paper”. The paper is entitled “Wait a minute! A fast, Cross-VM attack on AES” and was written by a group of researchers from Worcester Polytechnic Institute in 2014. It was funded by a National Science Foundation grant, and it’s a pretty technical, head-spinning read with a lot of mathematical formulas. It’s worth taking a look at, though, as parts of it are more easily consumable and describe the attack scenarios and results. The overview of the paper reads:

[important]In this work, we show a novel cache-based side-channel attack on AES that—by employing the Flush+Reload technique—enables, for the first time, a practical full key recovery attack across virtual machine boundaries in a realistic cloud-like server setting. The attack takes advantage of deduplication mechanism called the Transparent Page Sharing which is employed by VMware virtualization engine and is the focus of this work. The attack works well across cores, i.e. it works well in a high-end server with multiple cores scenario that is commonly found in cloud systems. The attack is, compared to [13], minimally invasive, significantly reducing requirements on the adversary: memory accesses are minimal and the accesses do not need to interrupt the victim process’ execution. This also means that the attack is hardly detectable by the victim. Last but not least, the attack is lightning fast: we show that, when running in a realistic scenario where an encryption server is attacked, the whole key is recovered in less than 10 seconds in non-virtualized setting (i.e. using a spy process) even across cores, and in less than a minute in virtualized setting across VM boundaries.[/important]
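The Flush+Reload technique mentioned in that overview can be illustrated with a toy simulation (pure Python, no real cache timing; the “T-table” lines, the fake cache and the secret byte below are all made up for illustration): the attacker flushes memory lines that TPS has made shared with the victim, lets the victim run, then reloads each line. A fast (still-cached) reload reveals which lines the victim touched, and those accesses depend on its secret key.

```python
# Toy simulation of a Flush+Reload side channel. A real attack measures
# access latency with hardware timers; here "cached or not" is modeled
# explicitly so the inference step is easy to see.

class SimCache:
    """Stand-in for a CPU cache shared between attacker and victim."""
    def __init__(self):
        self.cached = set()

    def flush(self, line):
        self.cached.discard(line)          # evict the line (like clflush)

    def access(self, line):
        was_fast = line in self.cached     # fast access == already cached
        self.cached.add(line)
        return was_fast

def victim_encrypt(cache, secret_byte):
    # The victim's table lookup index depends on its secret key byte,
    # which is exactly what leaks through the cache.
    cache.access(f"T[{secret_byte % 16}]")

def flush_reload_attack(cache, table_lines):
    for line in table_lines:               # FLUSH: evict all table lines
        cache.flush(line)
    victim_encrypt(cache, secret_byte=0x2b)  # victim runs in its own VM
    # RELOAD: lines that come back "fast" were touched by the victim
    return [line for line in table_lines if cache.access(line)]

table = [f"T[{i}]" for i in range(16)]
print(flush_reload_attack(SimCache(), table))  # -> ['T[11]']
```

In the real attack the shared lines exist precisely because TPS collapsed the attacker’s and victim’s copies of the AES tables into one physical page, which is why disabling TPS closes the channel.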

While going through the references section at the end of the paper, one thing I noticed was that its authors had previously written a paper earlier in 2014 entitled “Fine grain Cross-VM Attacks on Xen and VMware are possible!“. So it appears they worked to turn the theoretical research from that paper into a real-life attack, which succeeded and prompted the second paper.

It’s important to note that this is not an easily exploitable vulnerability and the risk is very low, so most environments should not really be impacted by it. VMware is being overly cautious, though, and will disable the feature by default in all upcoming releases of vSphere, which include:

  • ESXi 5.5 Update release – Q1 2015
  • ESXi 5.1 Update release – Q4 2014
  • ESXi 5.0 Update release – Q1 2015
  • The next major version of ESXi

In addition, they will be issuing patches before that to address it sooner rather than later:

  • ESXi 5.5 Patch 3
  • ESXi 5.1 patch planned for Q4, 2014
  • ESXi 5.0 patch planned for Q4, 2014

All versions of vSphere back to VI3 are vulnerable to the exploit, but VMware is only patching the 5.x versions, as the 4.x versions are no longer officially supported as of May 2014. Note that these patches only disable TPS, which is currently enabled by default; they do nothing to fix the vulnerability itself, and it will most likely take VMware some time to figure out how to make TPS work in a way that cannot be exploited. So if you disable TPS on your own, you don’t really need the patch. VMware states in the KB article that “Administrators may revert to the previous behavior if they so wish”, so it sounds like they are not too worried about it.

The benefit that TPS provides will vary in each environment depending on VM workloads, so if you are really paranoid about security you will probably want to disable it. You can view the effectiveness of TPS in vCenter by looking at the “shared” and “sharedcommon” memory counters to see how much you benefit from it. You can disable TPS in your current environment by changing an advanced setting on each host as described in the KB article; updating the setting is pretty simple, but having it take effect is tedious work:

  1. Log in to ESX\ESXi or vCenter Server using the vSphere Client.
  2. If connected to vCenter Server, select the relevant ESX\ESXi host.
  3. In the Configuration tab, click Advanced Settings under the software section.
  4. In the Advanced Settings window, click Mem.
  5. Look for Mem.ShareScanGHz and set the value to 0.
  6. Click OK.
  7. Perform one of the following to make the TPS changes effective immediately:
    1. Migrate all the virtual machines to another host in the cluster and back to the original host.
    2. Shut down and power on the virtual machines.
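Before going through all that, it’s worth quantifying what disabling TPS will cost you. The savings those “shared” and “sharedcommon” counters represent comes down to a simple subtraction; here’s a minimal sketch (the counter values are hypothetical, and in vCenter both counters are reported in KB):

```python
# Estimate host memory reclaimed by TPS from the two vCenter counters:
# "shared" is the total guest memory whose pages are shared, and
# "sharedcommon" is the machine memory actually consumed by the single
# retained copies. The difference is what TPS is saving you.

def tps_savings_kb(shared_kb: int, sharedcommon_kb: int) -> int:
    if sharedcommon_kb > shared_kb:
        raise ValueError("sharedcommon cannot exceed shared")
    return shared_kb - sharedcommon_kb

# Hypothetical host: 8 GB of guest pages flagged as shared, backed by
# 2 GB of retained copies -> TPS is saving 6 GB of physical RAM.
saved = tps_savings_kb(8 * 1024 * 1024, 2 * 1024 * 1024)
print(f"TPS is saving {saved // 1024} MB")  # -> TPS is saving 6144 MB
```

If the difference is small in your environment, disabling TPS costs you little; if it is large, plan for the extra physical memory demand before pushing the change out.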

While the impact and exposure may be minimal with this one, the fact that someone has finally cracked those solid walls that have stood between VMs is a big deal; I’ve written about this previously in my post on Escaping the Cave. VMware officially describes this as “inter-process side channel leakage” and mentions this in the KB article:

[important]Side channel attacks that exploit information leakage from resources shared between processes running on a common processor is an area of research that has been explored for several years. Although largely theoretical, techniques are continuously improving as researchers build on each other’s work. Although this is not a problem unique to VMware technology, VMware does work with the research community to ensure that the issues are fully understood and to implement mitigation into our products when appropriate[/important]

Note they mention these types of vulnerabilities have been “explored” for years, meaning lots of people have been looking for a way to penetrate those walls, as this is the holy-grail hack of a virtual environment. Imagine if someone compromised a less critical VM through one of the many thousands of OS and application vulnerabilities that exist. At least the damage and exposure would be contained within that one VM, but if someone could somehow use that compromised VM as a launchpad for attacking other VMs on a host, that’s a really big deal. The fact that most of these types of vulnerabilities had been theoretical up until now really exposes VMware and their security foundation.

Overall vSphere is a very secure platform and has stood the test of time without any major issues, which is a testament to how seriously VMware takes security. They will no doubt learn from this one and work to make the security in vSphere even better. However, this particular issue resulted from them trying to be more efficient with hypervisor resources by mixing VM data together. They will definitely need to be careful going forward as they continually look for new ways to optimize vSphere, so that they do it in a secure manner and do not risk exposure between VMs.

If you want to find out more about memory management in vSphere check out some of these links:


vSphere Metro Storage Cluster Links

EMC specific

Using VMware vSphere with EMC VPLEX (EMC Tech Paper)
Implementing vSphere Metro Storage Cluster (vMSC) using EMC VPLEX (2007545) (VMware KB)

HDS specific

Deploy VMware vSphere Metro Storage Cluster on Hitachi Virtual Storage Platform (HDS Tech Paper)
Implementing vSphere Metro Storage Cluster using Hitachi Storage Cluster for VMware vSphere (2073278) (VMware KB)
Deploying a vSphere Metro Storage Cluster (vMSC) using Hitachi NAS (HNAS) Platform with Synchronous Disaster Recovery (SyncDR) Cluster software in VMware vSphere (2085108) (VMware KB)
Implementing vSphere Metro Storage Cluster using Hitachi Storage Cluster for VMware vSphere (featuring Hitachi Virtual Storage Platform) (2039406) (VMware KB)

HP specific

Implementing vSphere Metro Storage Cluster using HP 3PAR Peer Persistence (HP Tech Paper)
Implementing VMware vSphere Metro Storage Cluster with HP LeftHand Multi-Site storage (HP Tech Paper)
VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part I (vCloudNine)
VMware vSphere Metro Storage Cluster with HP 3PAR Peer Persistence – Part II (vCloudNine)
Implementing vSphere Metro Storage Cluster using HP 3PAR StoreServ Peer Persistence (2055904) (VMware KB)
Implementing vSphere Metro Storage Cluster using HP LeftHand Multi-Site (2020097) (VMware KB)

IBM specific

Stretched Cluster on IBM SVC (Part 1) (CloudFix)
Stretched Cluster on IBM SVC (Part 2) (CloudFix)
Deploying Stretched vSphere clusters with Site Recovery Manager on SAN Volume Controller (Virtual Storage Speak)
Bridging datacenters with VMware vSphere and IBM SAN Volume Controller – Part 1 (Virtual Storage Speak)
Bridging datacenters with VMware vSphere and IBM SAN Volume Controller – Part 2 (Virtual Storage Speak)
Bridging datacenters with VMware vSphere and IBM SAN Volume Controller – Part 3 (Virtual Storage Speak)
Implementing vSphere Metro Storage Cluster using IBM System Storage SAN Volume Controller (2032346) (VMware KB)
VMware HA and vMotion with stretched IBM System Storage SAN Volume Controller Cluster (2000948) (VMware KB)

NetApp specific

A Continuous-Availability Solution for VMware vSphere and NetApp (NetApp Tech Paper)
MetroCluster Version 8.2.1 Best Practices for Implementation (NetApp Tech Paper)
VMware support with NetApp MetroCluster (1001783) (VMware KB)


General

How to automate vSphere Metro Storage Clusters, so VMs are running locally to storage (bmspeak)
VM Placement on a vSphere Metro Storage Cluster with VCAC (Grant Orchard)
vSphere Metro Stretched Cluster with vSphere 5.5 and PDL AutoRemove (Long White Virtual Clouds)
vSphere Metro Storage Cluster and GAD: Rise of HDS Virtual Storage Machine (Paul Meehan)
Metro Clustering on VMware (Plain Virtualization)
New VMware HCL category: vSphere Metro Stretched Cluster (Virtual Geek)
vSphere Metro Storage Cluster – steps for non-disruptive site failover (Virtual Stace)
VMware Metro Storage Cluster Explained – Part 1: The Challenge (Virtualization Software)
VMware Metro Storage Cluster Explained – Part 2: The Potential Solution (Virtualization Software)
vSphere Metro Storage Cluster solutions and PDL’s? (VMware vSphere Blog)
VMware vSphere Metro Storage Cluster Case Study (VMware Tech Paper)
Stretched Clusters and VMware vCenter Site Recovery Manager: Understanding the Options and Goals (VMware Tech Paper)
VMworld 2012: Session BCO1159 – Architecting and Operating a VMware vSphere Metro Storage Cluster (VMworld video)
Operating and architecting a vSphere Metro Storage Cluster based infrastructure (VMworld 2013 slides)
vSphere Metro Storage Cluster – Uniform vs Non-Uniform (Yellow Bricks)
vSphere Metro Storage Cluster storage latency requirements (Yellow Bricks)


Vendor Specific Storage Links


Dell

Best Practices for Configuring DCB with VMware ESXi 5.1 and Dell EqualLogic Storage (Dell Tech Paper)
Best Practices when implementing VMware vSphere in a Dell EqualLogic PS Series SAN Environment (Dell Tech Paper)
Configuring an iDRAC vFlash partition as a datastore in VMware ESXi (Dell Tech Paper)
Configuring Dell PowerEdge VRTX shared storage for VMware vSphere Environment (Dell Tech Paper)
Configuring iDRAC vFlash as a VMware ESXi VMKernel Coredump Collector (Dell Tech Paper)
Configuring iSCSI Boot for EqualLogic SAN with Dell PowerEdge Servers and VMware ESXi 5.1 (Dell Tech Paper)
Configuring stateless boot using Dell customized VMware ESXi 5.0 (Dell Tech Paper)


EMC

Storage Protocol Choices & Storage Best Practices for vSphere (EMC Presentation)
Using EMC VNX Storage with VMware vSphere (EMC Tech Paper)
EMC Powerpath/VE for VMware vSphere Best Practices planning (EMC Tech Paper)
Using VMware vSphere Storage APIs For Array Integration With EMC Symmetrix (EMC Tech Paper)
VMware vStorage APIs For Array Integration With EMC VNX Series For NAS (EMC Tech Paper)
EMC VPLEX – vSphere 5.1 Stretched Cluster Best Practices (Virtualization Team)
VMware vSphere 5.5 vMotion on EMC VPLEX Metro (VMware Tech Paper)
Implementing EMC Symmetrix Virtual Provisioning with VMware vSphere (VMware Tech Paper)


HP

HP P4000 LeftHand SAN Solutions with VMware vSphere Best Practices (VMware Tech Paper)
VMware vSphere VAAI for HP LeftHand Storage performance benefits (HP Tech Paper)
HP StoreVirtual VSA Design and Configuration Guide  (HP Tech Paper)
Best practices for deploying VMware vSphere 5 with VMware High Availability and Fault Tolerance on HP LeftHand Multi-Site SAN cluster (HP Tech Paper)
Implementing VMware vSphere Metro Storage Cluster with HP LeftHand Multi-Site storage (HP Tech Paper)
HP 3PAR Storage and VMware vSphere 5 best practices (HP Tech Paper)
HP XP7 Storage and VMware vSphere 5 Best Practices and Configuration Guide (HP Tech Paper)
VMware vSphere VAAI for HP 3PAR Storage performance benefits (VMware Tech Paper)
3PAR Utility Storage with VMware vSphere (VMware Tech Paper)
HP Enterprise Virtual Array Storage and VMware vSphere 4.0, 4.1 and 5.x configuration best practices (HP Tech Paper)


IBM

Deploying VMware vSphere 5.5 on IBM PureFlex System (IBM Tech Paper)
VMware vSphere best practices for IBM SAN Volume Controller and IBM Storwize family (IBM Tech Paper)


NetApp

NetApp TR-3808 – VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS (NetApp Tech Paper)
NetApp Storage Best Practices for VMware vSphere (NetApp Tech Paper)
vSphere 5 on NetApp MetroCluster Solution (NetApp Tech Paper)
FlexPod Datacenter with VMware vSphere 5.5 Update 1 and All-Flash FAS (NetApp Tech Paper)
VMware vSphere 5 on NetApp Clustered Data ONTAP (NetApp Tech Paper)

Pure Storage

Pure Storage and VMware Storage APIs for Array Integration—VAAI (Pure Storage)
Pure Storage and VMware vSphere Best Practices Guide (Pure Storage)
Using the Pure Storage Content Pack for VMware vCenter Log Insight (Pure Storage)


Flash Storage Links

Understanding TLC NAND (Anandtech)
The SSD Anthology: Understanding SSDs and new drives from OCZ (Anandtech)
Threshold Voltage Distribution in MLC NAND Flash Memory: Characterization, Analysis, and Modeling (cmu.edu)
An Introduction to Flash Technology (Cormac Hogan)
Storage 101: Flash Storage Myths and Facts (Enterprise Storage Guide)
Solid State Drive Technology: Differences between SLC, MLC and TLC NAND (HP)
Understanding endurance and performance characteristics of HP solid state drives (HP)
What is the difference between MLC Flash and eMLC Flash, and is it required for Enterprise Flash? (Hu’s Blog)
All-Flash Array Performance Testing Framework (IDC)
Anatomy of SSDs (Linux Magazine)
NAND Flash Primer (Micron)
How Solid State Drives are Made (Micron)
NAND Flash 101: An Introduction to NAND Flash and How to Design It In to Your Next Product (Micron)
TLC, MLC, and SLC Devices (Micron)
NOR/NAND Flash Guide: Selecting a Flash Storage Solution (Micron)
Choosing the Right NAND (Micron)
Flash Memory Reliability – Read, Program, and Erase Latency Versus Endurance Cycling (NASA)
Introduction to Flash Memory (Pure Storage)
01: Why SSDs Are Awesome: An SSD Primer (Samsung)
02: Understanding SSD System Requirements: SATA Interface Basics (Samsung)
03: NAND Basics: Understanding the Technology Behind Your SSD (Samsung)
04: Understanding SSDs: A Peek Behind the Curtain (Samsung)
05: Maximize SSD Lifetime and Performance With Over-Provisioning (Samsung)
06: Protect Your Privacy: Security & Encryption Basics (Samsung)
07: Communicating With Your SSD: Understanding SMART Attributes (Samsung)
08: Benchmarking Utilities: What You Should Know (Samsung)
09: Why Integration Matters: What Samsung’s Vertical Integration Means to You (Samsung)
10: The Samsung Advantage: Why You Should Choose a Samsung SSD (Samsung)
11: Samsung Data Migration Software: The simplest way to get your new SSD up and running (Samsung)
12: Samsung Magician Software: OS Optimization Feature Overview (Samsung)
Floating Gate Basics (scu.ecu)
Increasing Flash SSD Reliability (Silicon Systems)
E‐MLC vs. MLC NAND Flash (Smart Storage Systems)
A detailed overview of flash management techniques (Smart Storage Systems)
Making the case for solid-state storage (Storage Magazine)
The truth about SSD performance benchmarks (Storage Magazine)
NAND vs. NOR Flash Memory Technology Overview (Toshiba)
FAQ: Using SSDs with ESXi (VMware Front Experience)
Yaffs NAND flash failure mitigation (Yaffs.net)


Storage Protocol Links


iSCSI Beats Fibre Channel at Interop 2011 (Video)
Storage Protocol Comparison – A vSphere Perspective (VMware vSphere Blog)
Storage Protocol Comparison (VMware Tech Paper)
The Debate-Why NFS vs Block Access for OS/Applications (vTexan)


iSCSI

A “Multivendor Post” on using iSCSI with VMware vSphere (Virtual Geek)
A “Multivendor Post” to help our mutual iSCSI customers using VMware (Virtual Geek)
Using iSCSI storage with vSphere (Storage Magazine)
Configuring VMware vSphere Software iSCSI with Dell Equallogic PS Series Storage (Equallogic)
How to Configure Openfiler iSCSI Storage for VMware ESX 4 (Xtravirt)
Putting your storage to the test – Part 1 iSCSI on Iomega IX4-200D (Gabe’s Virtual World)
How-to connect ESX4, vSphere to Openfiler iSCSI NAS (Vladan.fr)
EMC Virtual Infrastructure for Exchange 2007 using vSphere 4.0 and iSCSI (EMC)
How to setup basic software iSCSI for VMware vSphere (video)
iSCSI Design Considerations and Deployment Guide (VMware)
Converged Storage Infrastructure for VMware vSphere 4.1 (Broadcom)
Why can you not use NIC Teaming with iSCSI Binding? (VMware vSphere Blog)
How to configure ESXi to boot via Software iSCSI? (VMware vSphere Blog)
iSCSI Advanced Settings (VMware vSphere Blog)
Configuring Proper vSphere iSCSI Multipathing via Binding VMkernel Ports [Video] (Wahl Network)


NFS

NFS Best Practices – Part 1: Networking (Cormac Hogan)
NFS Best Practices – Part 2: Advanced Settings (Cormac Hogan)
NFS Best Practices – Part 3: Interoperability Considerations (Cormac Hogan)
NFS Best Practices – Part 4: Sizing Considerations (Cormac Hogan)
A “Multivendor Post” to help our mutual NFS customers using VMware (Virtual Geek)
Using NAS for virtual machines (Storage Magazine)
Putting your storage to the test Part 2 NFS on Iomega IX4-200D (Gabe’s Virtual World)
Best Practices for Running vSphere on NFS Storage (VMware)
NFS Block Sizes, Transfer Sizes & Locking (VMware vSphere Blog)
Load Balancing with NFS and Round-Robin DNS (VMware vSphere Blog)
Best Practices for Running vSphere on NFS Storage (VMware Tech Paper)
Republished: Dispelling Some VMware over NFS Myths (Scott Lowe)
Reasons For Using NFS With VMware Virtual Infrastructure (VM/ETC)

Fibre Channel over Ethernet (FCoE)

“FCoE vs. iSCSI – Making the Choice” from Interop Las Vegas 2011 (Stephen Foskett)
VMware ESX FCoE CNA Compatibility in Plain English (Stephen Foskett)
How FCoE and iSCSI Fit into Your Storage Strategy (NetApp Tech OnTap)
Fibre Channel over Ethernet in the Data Center: An Introduction (Cisco)
VMware’s Software FCoE (Fibre Channel over Ethernet) Adapter (VMware vSphere Blog)

Fibre Channel

SAN System Design and Deployment Guide (VMware)
Configuring and Troubleshooting N-Port ID Virtualization (VMware)
NPIV: N-Port ID Virtualization (VMware vSphere Blog)
