Tag Archive: vSphere


Sep 09 2013

Best Practices for running vSphere on iSCSI

VMware recently updated a paper that covers Best Practices for running vSphere on iSCSI storage. This paper is similar in nature to the Best Practices for running vSphere on NFS paper that was updated not too long ago. VMware has tried to involve their storage partners in these papers and reached out to NetApp & EMC to gather their best practices to include in the NFS paper. They did something similar with the iSCSI paper by reaching out to HP and Dell who have strong iSCSI storage products. As a result you’ll see my name and Jason Boche’s in the credits of the paper but the reality is I didn’t contribute much to it besides hooking VMware up with some technical experts at HP.

So if you’re using iSCSI be sure and give it a read, if you’re using NFS be sure and give that one a read and don’t forget to read VMware’s vSphere storage documentation that is specific to each product release.


Aug 08 2013

The Top 10 Things You MUST Know About Storage for vSphere

If you’re going to VMworld this year be sure and check out my session STO5545 – The Top 10 Things You MUST Know About Storage for vSphere which will be on Tuesday, Aug. 27th from 5:00-6:00 pm. The session was showing full last week but they must have moved it to a larger room as it is currently showing 89 seats available. This session is crammed full of storage tips, best practices, design considerations and lots of other information related to storage. So sign up know before it fills up again and I look forward to seeing you there!


Jul 14 2013

The new HP ProLiant MicroServer Gen8 – a great virtualization home lab server

I’ve always liked the small size of the HP MicroServer which makes it perfect for use in a virtualization home lab, but one area that I felt it was lacking was with the CPU. The original MicroServer came with a AMD Athlon II NEO N36L 1.3 Ghz dual-core processor which was pretty weak to use with virtualization and was more suited for small ultra-notebook PC’s that require small form factors and low power consumption. HP came out with enhanced N40L model a bit later but it was just a small bump up to the AMD Turion II Neo N40L/1.5GHz and later on to the N54L/2.2 Ghz dual-core processor. I have both the N36L & N40L MicroServers in my home lab and really like them except for the under-powered CPU.

Well HP just announced a new MicroServer model that they are calling Gen8, which is in line with their current ProLiant server generation 8 models. The new model not only looks way cool, it also comes in 2 different CPU configurations that give it a big boost in CPU power. Unfortunately, while the GHz of the CPU makes a big jump, it's still only available in dual-core. The two new processor options are:

  • Intel Celeron G1610T (2.3GHz/2-core/2MB/35W)
  • Intel Pentium G2020T (2.5GHz/2-core/3MB/35W)

Having a Pentium processor is a big jump over the Celeron, and it comes with more L3 cache. Unfortunately, neither processor supports hyper-threading, which would show up as more cores to a vSphere host. Despite this it's still a nice bump that makes the MicroServer even better as a virtualization home lab server. Note that they switched from AMD to Intel processors with the Gen8 models.

Let's start with the cosmetic stuff on the new MicroServer: it has a radical new look (shown on the right below) that is consistent with its big brother ProLiant servers, and I personally think it looks pretty bad-ass cool.


Note the new slim DVD drive opening on the Gen8 instead of the full-size opening on the previous model. One thing to note is that while the Gen8 models are all depicted with a DVD drive, it does not come standard and must be purchased separately and installed. The Gen8 model is also a bit shorter, less deep and slightly wider than the old model. On the old model the HP logo would light up blue when powered on to serve as the health light; on the new model it looks like there is a blue bar at the bottom that lights up instead. There are also only 2 USB ports on the front now instead of 4. The old model also had keys (which I always misplace) to open the front and gain access to the drives and components; it looks like on the new model they did away with that and added a little handle to open it. On the back side they have moved the power supply and fan up a bit, removed one of the PCIe slots (only 1 now), removed the eSATA port, added 2 USB 3.0 ports and added a second 1Gb NIC port. This is a nice change, especially the addition of the second NIC, which makes for more vSwitch options with vSphere. I have always added a 2-port NIC to my MicroServers since they only had 1 previously, so it's nice that the new model comes with 2.

Inside, the unit still has 4 non-hot-plug drive trays and supports up to 12TB of SATA disk (4 x 3TB). The storage controller is the HP Dynamic Smart Array B120i Controller, which only supports RAID levels 0, 1 and 10. Also, only 2 of the bays support 6.0Gb/s SATA drives; the other 2 support 3.0Gb/s SATA drives. There are still only 2 memory slots, which now support a maximum of 16GB (DDR3); this is another big enhancement, as the previous model only supported 8GB of memory, which limited how many VMs you could run on it. The Gen8 model also comes with a new internal microSD slot so you could boot ESXi from it if you wanted to, and both the old & new models still have an internal USB port as well. The server comes with the HP iLO Management Engine, which is standard on all ProLiant servers and is accessed through one of the NICs that does split duty, but you have to license it to use many of the advanced features like the remote console. Licensing it costs a minimum of $129 for iLO Essentials with 1 year of support, which is a bit much for a home lab server that is under $500.

Speaking of cost, which has always made the MicroServer attractive for home labs, the G1610T model starts at $449 and the G2020T starts at $529; the two models are identical besides the processor, and they both come standard with 2GB of memory and no hard drives. I wish they would make the memory optional as well and lower the price accordingly. If you want to go to 8GB or 16GB of memory (who doesn't), you have to take out the 2GB DIMM that comes with it, toss it and put in 4GB or 8GB DIMMs. Here are some of the add-on options and pricing on the HP SMB Store website:

  • HP 8GB (1x8GB) Dual Rank x8 PC3-12800E (DDR3-1600) Unbuffered CAS-11 Memory Kit  [Add $139.00]
  • HP 4GB (1x4GB) Dual Rank x8 PC3-12800E (DDR3-1600) Unbuffered CAS-11 Memory Kit  [Add $75.00]
  • HP 500GB 6G Non-Hot Plug 3.5 SATA 7200rpm MDL Hard Drive  [Add $239.00]
  • HP 1TB 6G Non-Hot Plug 3.5 SATA 7200rpm MDL Hard Drive  [Add $269.00]
  • HP 2TB 6G Non-Hot Plug 3.5 SATA 7200rpm MDL Hard Drive  [Add $459.00]
  • HP 3TB 6G Non-Hot Plug 3.5 SATA 7200rpm MDL Hard Drive  [Add $615.00]
  • HP SATA DVD-RW drive [Add $129.00]
  • HP NC112T PCI Express Gigabit Server Adapter [Add $59.00]
  • HP 4GB microSD [Add $79.00]
  • HP 32GB microSD [Add $219.00]
  • HP iLO Essentials including 1yr 24×7 support [$129.00]

With all the add-ons the server cost can quickly grow to over $1,000, which is not ideal for a home lab server. I'd recommend heading to Newegg & Micro Center and getting parts to upgrade the server instead. You can get a Kingston HyperX Blu 8GB DDR3-1600 memory kit for $69 or a Kingston HyperX Red 16GB DDR3-1600 memory kit for $119, which is half the cost.

All in all I really like the improvements they have made with the new model and it makes an ideal virtualization home lab server that you are typically building on a tight budget. HP if you want to send me one I’d love to do a full review on it. Listed below are some links for more information and a comparison of the old MicroServer G7 N54L and the new Gen8 G2020T model so you can see the differences and what has changed.

Comparison of the old MicroServer G7 N54L and the new Gen8 G2020T model:

Feature | Old MicroServer G7 (N54L) | New MicroServer Gen8 (G2020T)
Processor | AMD Turion II Model Neo N54L (2.20 GHz, 15W, 2MB) | Intel Pentium G2020T (2.5GHz/2-core/3MB/35W)
Cache | 2x 1MB Level 2 cache | 3MB (1 x 3MB) L3 cache
Chipset | AMD RS785E/SB820M | Intel C204 chipset
Memory | 2 slots - 8GB max - DDR3 1333MHz | 2 slots - 16GB max - DDR3 1600MHz/1333MHz
Network | HP Ethernet 1Gb 1-port NC107i Adapter | HP Ethernet 1Gb 2-port 332i Adapter
Expansion Slots | 2 - PCIe 2.0 x16 & x1, Low Profile | 1 - PCIe 2.0 x16, Low Profile
Storage Controller | Integrated SATA controller with embedded RAID (0, 1) | HP Dynamic Smart Array B120i Controller (RAID 0, 1, 10)
Storage Capacity (Internal) | 8.0TB (4 x 2TB) SATA | 12.0TB (4 x 3TB) SATA
Power Supply | One (1) 150 Watt Non-Hot Plug | One (1) 150 Watt Non-Hot Plug
USB Ports | 2.0 ports - 2 rear, 4 front panel, 1 internal | 2.0 ports - 2 front, 2 rear, 1 internal; 3.0 ports - 2 rear
microSD | None | One - internal
Dimensions (H x W x D) | 10.5 x 8.3 x 10.2 in | 9.15 x 9.05 x 9.65 in
Weight (Min/Max) | 13.18 lb / 21.16 lb | 15.13 lb / 21.60 lb
Acoustic Noise | 24.4 dBA (Fully Loaded/Operating) | 21.0 dBA (Fully Loaded/Operating)

Dec 01 2012

New free tool – HP Virtualization Performance Viewer

This new free tool caught my eye when it was mentioned on an internal email chain; it's called the HP Virtualization Performance Viewer (vPV). It's a lightweight tool that provides real-time performance analysis reporting to help diagnose and triage performance problems. It's a Linux-based utility and can be installed natively on any VM/PC running Linux, or it can be deployed as a virtual appliance. It supports both VMware vSphere & Microsoft Hyper-V environments and has the following features:

  • Quick time to value
  • Intuitive at-a-glance dashboards
  • Triage virtualization performance issues in real-time
  • Foresee capacity issues and identify under / over utilized systems
  • Operational and status report for performance, up-time and distribution analysis

The free version of vPV has some limitations; to get the full functionality you need to upgrade to the Enterprise version, but the free version should be good enough for smaller environments.


It’s simple and easy to download the tool, just head over to the HP website, enter some basic information and you get the download page where you can choose the files that you want to download based on your install preference.

Downloading the OVA file to install the virtual appliance is the easiest way to go; once you download it, you simply deploy it using the Deploy OVF Template option in the vSphere Client and it will install as a new VM. Once deployed and powered on, you can log in to the VM's OS using the username root and password vperf*viewer if you need to manually configure an IP address. Otherwise you can connect to the VM and start using vPV via the URL http://<servername>:8081/PV or https://<servername>:8444/PV, which will bring up the user interface so you can get started. I haven't tried it out yet as it's still downloading, but here are some screenshots from the vPV webpage:


I’ll do a more detailed review once I have it up and running but it looks like a pretty handy little tool. For more great free tools be sure and check out my huge free tool gallery that contains links to 100+ free tools for VMware environments.

Nov 27 2012

Win a complete vSphere dream lab from Unitrends

Why would you want your own vSphere lab in the first place? If you read my book, I have a whole chapter on it (actually Simon wrote that chapter). Here's why:

  • Exam Study – To provide yourself with an environment where you can easily build a mock production environment to follow examples in any study material you may have and to also confirm to yourself that what you have read in your study material actually works as described in practice.
  • Hands-On Learning – Probably the most common reason for putting together your own virtualization lab is to jump onto the kit, wrestle with it and get your hands dirty – breaking it, fixing it and then breaking it again in the process. This is the preferred method of learning for many, though for this you obviously do need the luxury of time. Very few of us in IT have the opportunity or access to the necessary non-production hardware during the working day to spend learning a product. With the financially tough times over recent years, the often hefty price tag associated with attending a training course has meant that fewer people have had the luxury of learning from a trained instructor, making a lab environment a popular choice.
  • Centralized Home Infrastructure – Perhaps you are running a home office or need a centralized IT environment from which to run your home PCs for things such as centralized monitoring, management of your kids' internet access or the family file, music and photo repository.
  • Because It's There (i.e. Why Not?) – Some of you, like myself, love to play with new enterprise IT products and technologies, even if they don't have direct application to your personal or work life. A virtualized lab environment provides an excellent platform from which to do this.

So you want your own vSphere lab but maybe are short on funds? No problem, Unitrends has you covered. Unitrends is giving away a complete vSphere dream lab that has all the hardware and software you'll need to get up and running with vSphere and start cranking out your own VMs. How do you get your chance to win this complete vSphere dream lab? It's pretty simple: just head on over to their website and check out Unitrends Enterprise Backup.


The vSphere dream lab consists of 2 HP servers, an HP network switch, a NetGear ReadyNAS and the VMware, Microsoft & Unitrends software you need to get everything up and running. I was able to find out some more detail on the hardware specifications:

  • HP ML110 G7 servers with quad-core 3.20 GHz Xeon CPUs that support hyper-threading, which will look like 8 CPU cores to vSphere. These are awesome servers; I have two of the ML110 G6 models in my own home lab, and they are quiet, powerful and have 4 PCI slots so you can put lots of NICs in them. They're also expandable to 16GB of memory. They also support all the vSphere advanced features that rely on specific hardware, such as power features, Fault Tolerance and VMDirectPath. You can read the QuickSpecs on them here.
  • HP V1410-16G Switch, a 16-port switch that supports gigabit connections; you can read the specs on it here. You'll need lots of ports if you add more NICs to your servers so you can play around with vSphere networking more. I have 6 NICs in the servers in my lab, so I quickly outgrew my 8-port switch.
  • ReadyNAS Pro 2TB storage system; you can read more about it here. I really like the NetGear NAS units, I have one in my home lab and they are very solid, high quality and have lots of features.


All in all it's a nice sweet lab setup. The contest runs from now until Feb. 14th, so you have plenty of time to head over there, download and register your copy of Unitrends Enterprise Backup and enter for your chance to win. If you want to know more about home vSphere labs, be sure to check out my massive home labs link collection.

Oct 17 2012

Understanding CPU & Memory Management in vSphere


CPU & memory are two resources that are not fully understood in vSphere, and they are two resources that are often wasted as well. In this post we'll take a look at CPU & memory resources in vSphere to help you better understand them and provide some tips for properly managing them.

Let’s first take a look at the CPU resource. Virtual machines can be assigned up to 64 virtual CPUs in vSphere 5.1. The amount you can assign to a VM, of course, depends on the total amount a host has available. This number is determined by the number of physical CPUs (sockets), the number of cores that a physical CPU has, and whether hyper-threading technology is supported on the CPU. For example, my ML110G6 server has a single physical CPU that is quad-core, but it also has hyper-threading technology. So, the total CPUs seen by vSphere is 8 (1 x 4 x 2). Unlike memory, where you can assign more virtual RAM to a VM then you have physically in a host, you can’t do this type of over-provisioning with CPU resources.

Most VMs will be fine with one vCPU. Start with one, in most cases, and add more if needed. The applications and workloads running inside the VM will dictate whether you need additional vCPUs or not. The exception here is if you have an application (i.e. Exchange, a transactional database, etc.) that you know will have a heavy workload and need more than one vCPU. One word of advice, though, on changing from a single CPU to multiple CPUs with Microsoft Windows: previous versions of Windows had separate kernel HALs that were used depending on whether the server had a single CPU or multiple CPUs (vSMP). These kernels were optimized for each configuration to improve performance. So, if you made a change in the hardware once Windows was already installed, you had to change the kernel type inside of Windows, which was a pain in the butt. Microsoft did away with that requirement some time ago with the Windows Server 2008 release, and now there is only one kernel regardless of the number of CPUs that are assigned to a server. You can read more about this change here. So, if you are running an older Windows OS, like Windows 2000 or Server 2003, you still need to change the kernel type if you go from single to multiple CPUs or vice versa.

So, why not just give VMs lots of CPUs and let them use what they need? CPU usage is not like memory usage, which often utilizes all the memory assigned to it for things like pre-fetching. The real problem with assigning too many vCPUs to a VM is scheduling. Unlike memory, which is directly allocated to VMs and is not shared (except for TPS), CPU resources are shared, and requests must wait in line to be scheduled and processed by the hypervisor, which finds a free physical CPU/core to handle each request. Handling VMs with a single vCPU is pretty easy: just find a single open CPU/core and hand it over to the VM. With multiple vCPUs it becomes more difficult, as you have to find several available CPUs/cores to handle requests. This is called co-scheduling, and throughout the years VMware has changed how they handle co-scheduling to make it a bit more flexible and relaxed. You can read more about how vSphere handles co-scheduling in this VMware white paper.

When it comes to memory, assigning too much is not a good thing, and there are several reasons for that. The first reason is that the OS and applications tend to use all available memory for things like caching that consume extra available memory. All this extra memory usage takes away physical host memory from other VMs. It also makes the hypervisor's job of managing memory conservation, via features like TPS and ballooning, more difficult. Another thing that happens with memory is that when you assign memory to a VM and power it on, you are also consuming additional disk space. The hypervisor creates a virtual swap (vswp) file in the home directory of the VM equal in size to the amount of memory assigned to the VM (minus any memory reservations). The reason this happens is to support vSphere's ability to over-commit memory to VMs and assign them more than a host is physically capable of supporting. Once a host's physical memory is exhausted, it starts using the vswp files to make up for this resource shortage, which slows down performance and puts more stress on the storage array.

So, if you assign 8GB of memory to a VM, once it is powered on, an 8GB vswp file will be created.  If you have 100 VMs with 8GB of memory each, that’s 800GB of disk space that you lose from your vSphere datastores. This can chew up a lot of disk space, so limiting the amount of memory that you assign to VMs will also limit the size of the vswp files that get created.
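
As a quick way to see this on your own hosts (a hedged example; the datastore and VM names below are just placeholders), you can list a VM's home directory from the ESXi shell and look at the size of its vswp file. Remember the file is sized at configured memory minus any reservation, so an 8GB VM with a 2GB reservation gets a 6GB vswp file:

  • ls -lh /vmfs/volumes/datastore1/MyVM/*.vswp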

Therefore, the secret to a healthy vSphere environment is to "right-size" VMs. This means only assigning them the resources they need to support their workloads, and not wasting resources. Virtual environments share resources, so you can't carry over the mindset from physical environments where having too much memory or too many CPUs is no big deal. How do you know what the right size is? In most cases you won't know, but you can get a fairly good idea of a VM's resource requirements by combining the typical amount that the guest OS needs with the resource requirements for the applications that you are running on it. You should start by estimating the amount. Then the key is to monitor performance to determine what resources a VM is using and what resources it is not using. vCenter Server isn't really helpful for this as it doesn't really do reporting, so using 3rd party tools can make this much easier. I've always been impressed by the dashboards that SolarWinds has in their VMware monitoring tool, Virtualization Manager. These dashboards can show you, at a glance, which VMs are under-sized and which are over-sized. Their VM Sprawl dashboard can make it really easy to right-size all the VMs in your environment so you can reallocate resources from VMs that don't need them to VMs that do.


Another benefit that Virtualization Manager provides is that you can spot idle and zombie VMs that also suck away resources from your environment and need to be dealt with.
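
If you don't have a third-party monitoring tool handy, you can still get a first-pass look at actual usage with the built-in esxtop utility from the ESXi shell (or resxtop from the vMA); this is just a starting point rather than a full reporting solution. Pressing c switches to the CPU view, where a consistently high %RDY value hints at too many vCPUs competing for cores, and pressing m switches to the memory view, where ballooning and swap activity suggest over-committed memory:

  • esxtop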

So, effectively managing your CPU and memory resources is really a two-step process. First, don’t go overboard with resources when creating new VMs. Try and be a bit conservative to start out.  Then, monitor your environment continually with a product like SolarWinds Virtualization Manager so you can see the actual VM resource needs. The beauty of virtualization is that it makes it really easy to add or remove resources from a virtual machine. If you want to experience the maximum benefit that virtualization provides and get the most cost savings from it, right-sizing is the key to achieving that.

Sep 23 2012

What is SAHF and LAHF and why do I need it to install vSphere 5.1?

Happened to look over the ESXi 5.1 documentation today (yeah, yeah, normally I just install and don’t RTFM) and noticed this in the Hardware Requirements section:

  • ESXi 5.1 will install and run only on servers with 64-bit x86 CPUs
  • ESXi 5.1 requires a host machine with at least two cores
  • ESXi 5.1 supports only LAHF and SAHF CPU instructions
  • ESXi 5.1 requires the NX/XD bit to be enabled for the CPU in the BIOS

Most of the requirements are fairly straightforward; the 64-bit CPU requirement has been there since vSphere 4 was introduced, but many people probably don't know what NX/XD & LAHF/SAHF are. The NX/XD bit is a CPU feature called No eXecute, hence the NX name. What the NX bit does is enable the ability to mark certain areas of memory as non-executable with a flag. When this happens the processor will refuse to execute any code that resides in those areas of memory. Any attempt to execute code from a page that is marked as no-execute will result in a memory access violation. This feature adds a layer of security to a computer by providing protection against malicious code such as viruses and buffer overflow attacks.

AMD first added the NX bit feature to their AMD64 processor line with the Opteron processor in 2003. So you may be wondering about the XD part; that is simply Intel's name for the same feature, which they refer to as eXecute Disable. Intel introduced support for the XD bit shortly after AMD with their Pentium 4 Prescott processor in 2004. Both the NX bit and the XD bit have the exact same functionality, just different names, so you will often see it referred to as NX/XD. This feature has been standard on most processors for years now, so almost every server built since 2006 should have it. Support for NX/XD is typically enabled or disabled in the server BIOS and is usually found under Processor options, labeled as something like "Execute Disable Bit", "NX Technology" or "XD Support".

Many virtualization admins know what NX/XD is, but the LAHF & SAHF CPU instructions are a processor function that you have probably never heard of. LAHF stands for Load AH from Flags and SAHF stands for Store AH into Flags. LAHF & SAHF are instructions used to load and store certain status flags via the AH register. Instructions are basic commands composed of one or more symbols that are passed to a CPU as input. The LAHF & SAHF instructions are used for virtualization and floating-point condition handling. You really don't need to understand how they work as they are related to the core CPU architecture, but if you want to understand them better you can read more about them here.

Support for the LAHF and SAHF instructions appeared shortly after NX/XD was introduced. AMD introduced support for the instructions with their Athlon 64, Opteron and Turion 64 revision D processors in March 2005, and Intel introduced support with the Pentium 4 G1 stepping in December 2005. So again, most servers built after 2006 should have CPUs that support LAHF/SAHF. Similar to NX/XD, which can be enabled or disabled in the server BIOS, support for LAHF/SAHF is typically tied into the Virtualization Technology (VT) option in a server BIOS, often referred to as Intel VT or AMD-V, which is each vendor's respective CPU virtualization technology. The option to enable this on an HP ProLiant BIOS is shown below:


So how do you know if your server’s CPUs support NX/XD & LAHF/SAHF? As I said before if you’ve purchased a server in the last 5 or so years, it most likely will support it. If it doesn’t support it the ESXi installer will warn you when you install it as shown below:


Interestingly enough, it will still let you install it despite not having the required CPU features. Prior versions of vSphere used to give you an error saying your CPU doesn't support Long Mode and wouldn't let you install it. If you do get the warning above, the first thing to check is whether you have those options enabled in the BIOS; if you don't see those options in the BIOS, then your CPU may not support them. You can check your specific CPU's specifications on Intel's or AMD's websites. You can also check VMware's Hardware Compatibility List, but be aware that there are many processor types/server models not on the HCL that will still work despite not being on the list; they just are not officially supported.
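
If you happen to have a Linux live CD or an existing Linux install on the box, another quick (and unofficial) check is to look at the CPU flags the kernel reports; this is just a convenience sketch, but the nx and lahf_lm flags in /proc/cpuinfo correspond to the NX/XD and LAHF/SAHF requirements (and the lm flag indicates 64-bit Long Mode support):

  • egrep -wo 'nx|lahf_lm' /proc/cpuinfo | sort -u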

Another way to know if your CPUs support the required features is to use VMware's CPU Identification Utility, which is a small ISO that you can boot your host from; it will check the CPU hardware to see if it will support vSphere. I've mounted it using the iLO management on a server, and I have also mounted it to a VM's CD-ROM, booted from it and run it. Since the CPU hardware is not emulated, it can see what type of physical CPU the host is using and what features it supports. The output of the CPU ID tool is shown below; this server fully supports all the required CPU features for vSphere:


So there you have it, now you know more about NX/XD & LAHF/SAHF than you probably wanted to know, but at least you have an understanding of what they are when you read about the CPU requirements in the vSphere documentation. You probably won't find any modern servers that don't support them, but oftentimes our data centers become server graveyards that contain a lot of older hardware that keeps getting re-used until it finally dies, and that older hardware may not support these features. So it's good to know what to look for when it comes to CPU features.

Sep 17 2012

VMware configuration maximums from 1.0 to 5.1

VMware has really grown in scalability from the early days of ESX 1.0 with each new release of vSphere. I put together this table on how the configuration maximums have increased with each version so you can see just how much the platform has scaled over the years. VMware has published their Configuration Maximums documentation with each release starting with VI3, which you should be familiar with especially if you are trying to get a certification. I pieced together the earlier versions from the installation documentation for each release; there isn't much info available on ESX 1.0, so if you know anything please fill in the blanks for me. Notice how the VM virtual disk size stayed stuck at 2TB release after release; this is due to file system limitations that VMware had not been able to overcome. With their new Virtual Volumes architecture that limit may finally be gone. Also note that on the earlier versions the documentation did not state a 2TB virtual disk limit, although I'm almost positive it existed; the documentation stated "9TB per virtual disk", not sure why though.

Configuration Maximums for VMware ESX/vSphere

VMware release | 1.0 | 1.5 | 2.0 | 2.5 | 3.0 | 3.5 | 4.0 | 4.1 | 5.0 | 5.1 | 5.5
vCPUs per VM | 1 | 1 | 2 | 2 | 4 | 4 | 8 | 8 | 32 | 64 | 64
RAM per VM | 2GB | 3.6GB | 3.6GB | 3.6GB | 16GB | 64GB | 255GB | 255GB | 1TB | 1TB | 1TB
NICs per VM | ? | 4 | 4 | 4 | 4 | 4 | 10 | 10 | 10 | 10 | 10
VM Virtual Disk | ? | ? | ? | ? | 2TB | 2TB | 2TB | 2TB | 2TB | 2TB | 62TB
VMFS Volume | ? | 64TB | 64TB | 64TB | 64TB | 64TB | 64TB | 64TB | 64TB | 64TB | 64TB
pCPU per host | ? | 8 | 16 | 16 | 32 | 32 | 64 | 160 | 160 | 160 | 320
vCPU per host | ? | 64 | 80 | 80 | 128 | 128 | 512 | 512 | 2048 | 2048 | 4096
RAM per host | ? | 64GB | 64GB | 64GB | 64GB | 256GB | 1TB | 1TB | 2TB | 2TB | 4TB
pNICs per host | ? | 16 | 16 | 16 | 32 | 32 | 32 | 32 | 32 | 32 | ?

In addition, here's a diagram from VMware that depicts the configuration maximums in a slightly different manner:


VMware Configuration Maximum Published Documents:

Jul 18 2012

All about the UNMAP command in vSphere

There’s been a lot of confusion lately surrounding the UNMAP command which is used to reclaim deleted space on thin provisioning that is done at the storage array level. The source of the confusion is VMware’s support for it which has changed since the original introduction of the feature in vSphere 5.0. As a result I wanted to try and sum up everything around it to clear up any misconceptions around it.

So what exactly is UNMAP?

UNMAP is a SCSI command (not a vSphere 5 feature) that is used with thin provisioned storage arrays as a way to reclaim space from disk blocks that have been written to after the data that resides on those disk blocks has been deleted by an application or operating system. UNMAP serves as the mechanism used by the Space Reclamation feature in vSphere 5 to reclaim space left by deleted data. With thin provisioning, once data has been deleted that space is still allocated by the storage array, because the array is not aware that the data has been deleted, which results in inefficient space usage. UNMAP allows an application or OS to tell the storage array that the disk blocks contain deleted data so the array can un-allocate them, which reduces the amount of space allocated/in use on the array. This allows thin provisioning to clean up after itself and greatly increases the value and effectiveness of thin provisioning.
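
If you want to check whether the device backing one of your datastores actually reports UNMAP (Delete) support, one way to do it (a hedged example; the naa identifier below is just a placeholder for your own device ID) is to look at the VAAI status ESXi reports for the device, which includes a Delete Status field:

  • esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx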

So why is UNMAP important?

In vSphere there are several storage operations that will result in data being deleted on a storage array. These operations include:

  • Storage vMotion, when a VM is moved from one datastore to another
  • VM snapshot deletion
  • Virtual machine deletion

Of these operations, Storage vMotion has the biggest impact on the efficiency of thin provisioning because entire virtual disks are moved between datastores, which results in a lot of wasted space that cannot be reclaimed without additional user intervention. The impact of Storage vMotion on thin provisioning prior to vSphere 5 was not significant because Storage vMotion was not a commonly used feature and was mainly used for planned maintenance on storage arrays. However, with the introduction of Storage DRS in vSphere 5, Storage vMotion will be a common occurrence, as VMs can now be dynamically moved around between datastores based on latency and capacity thresholds. This will have a big impact on thin provisioning because reclaiming the space consumed by VMs that have been moved becomes critical to maintaining an efficient and effective space allocation of thin provisioned storage capacity. Therefore UNMAP is important, as it allows vSphere to inform the storage array when a Storage vMotion occurs or a VM or snapshot is deleted, so those blocks can quickly be reclaimed to allow thin provisioned LUNs to stay as thin as possible.

How does UNMAP work in vSphere?

Prior to vSphere 5 and UNMAP when an operation occurred that deleted VM data from a VMFS volume, vSphere just looked up the inode on the VMFS file system and deleted the inode pointer which mapped to the blocks on disk. This shows the space as free on the file system but the disk blocks that were in use on the storage array are not touched and the array is not aware that they contain deleted data.

In vSphere 5 with UNMAP, when those operations occur the process is more elaborate from a storage integration point of view. The inodes are still looked up like before, but instead of just removing the inode pointers, a list of the logical block addresses (LBAs) must be obtained. The LBA list contains the locations of all the disk blocks on the storage array that the inode pointers map to. Once vSphere has the LBA list it can start sending SCSI UNMAP commands for each range of disk blocks to free up the space on the storage array. Once the array acknowledges the UNMAP, the process repeats as it loops through the whole range of LBAs to unmap.

So what is the issue with UNMAP in vSphere?

ESXi 5.0 issues UNMAP commands for space reclamation in critical regions during several operations with the expectation that the operation would complete quickly. Due to varied UNMAP command completion response times from the storage devices, the UNMAP operation can result in poor performance of the system which is why VMware is recommending that it be disabled on ESXi 5.0 hosts. The implementation and response times for the UNMAP operation may vary significantly among storage arrays. The delay in response time to the UNMAP operation can interfere with operations such as Storage vMotion and Virtual Machine Snapshot consolidation and can cause those operations to experience timeouts as they are forced to wait for the storage array to respond.

How can UNMAP be disabled?

In vSphere, support for UNMAP is enabled by default but can be disabled via the command line interface; there is currently no way to disable it using the vSphere Client. Esxcli is a multi-purpose command that can be run from ESXi Tech Support Mode, the vSphere Management Assistant (vMA) or the vSphere CLI. Setting the EnableBlockDelete parameter to 0 disables UNMAP functionality; setting it to 1 (the default) enables it. The syntax for disabling UNMAP functionality is shown below. Note the reference to VMFS3: this is just the parameter category and is the same regardless of whether you are using VMFS3 or VMFS5.

  • esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete

or you can use:

  • esxcfg-advcfg -s 0 /VMFS3/EnableBlockDelete

This command must be run individually on each ESXi host to disable UNMAP support. You can check the status of this parameter on a host by using the following command.

  • esxcfg-advcfg -g /VMFS3/EnableBlockDelete

It is possible to use a small shell script to automate the process of disabling UNMAP support to make it easier to update many hosts. A pre-built script to do this can be downloaded from here. Note changing this parameter takes effect immediately; there is no need to reboot the host for it to take effect. Without disabling the UNMAP feature, you may experience timeouts with operations such as Storage vMotion and VM snapshot deletion.
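
If you want to roll your own instead of downloading a pre-built script, here is a minimal sketch of the idea (the hostnames are placeholders, and it assumes SSH/Tech Support Mode is enabled on each host with key-based root login configured):

  #!/bin/sh
  # Disable UNMAP (EnableBlockDelete) on a list of ESXi 5.0 hosts, then read the setting back to confirm.
  for HOST in esx01.lab.local esx02.lab.local esx03.lab.local; do
    echo "Disabling UNMAP on $HOST"
    ssh root@"$HOST" "esxcfg-advcfg -s 0 /VMFS3/EnableBlockDelete; esxcfg-advcfg -g /VMFS3/EnableBlockDelete"
  done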

How did UNMAP change in vSphere 5.0 Update 1?

Soon after vSphere 5.0 was released, VMware recalled the UNMAP feature and issued a KB article stating it was not supported and should be disabled via the advanced setting. With vSphere 5.0 Update 1 they disabled UNMAP by default, but they went one step further: if you try to enable it via the advanced setting it will still remain disabled. So there is no way to do automatic space reclamation in vSphere at the moment. What VMware did instead was introduce a manual reclamation method by modifying the vmkfstools CLI command and adding a parameter to it. The syntax for this is as follows:

  • vmkfstools -y <percent of free space>

The free space percentage determines the size of a temporary virtual disk that the command creates, and the command is run against an individual VMFS volume. So if a datastore has 800GB of free space left and you specify 50 as the free space parameter, it will create a virtual disk 400GB in size, then delete it and UNMAP the space. This manual method of UNMAP has a few issues though. First, it's not very efficient; it creates a virtual disk without any regard for whether the blocks are already reclaimed/unused or not. Second, it's a very I/O intensive process, and it is not recommended to run it during any peak workload times. Next, it's a manual process and you can't really schedule it within vCenter Server; there are some scripting methods to get it to run, but it's a bit of a chore. Finally, since it's a CLI command there is no way to really monitor it or log what occurs.
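
As a hedged example of how this is typically run (the datastore name is just a placeholder), you change into the datastore's directory in the ESXi shell and then kick off the reclaim with the percentage of free space you are willing to temporarily consume:

  cd /vmfs/volumes/datastore1
  vmkfstools -y 60

Keep the percentage conservative on datastores with running VMs, since the temporary file it creates consumes that much of the remaining free space while the command runs.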

Another thing that VMware did in vSphere 5.0 Update 1 was invalidate all of the testing that was done with UNMAP for the Hardware Compatibility Guide. They required vendors to re-test with the manual CLI method and added a footnote to the HCG that said automatic reclamation is not supported and space can be reclaimed via command line tools as shown below.


The manual CLI method of using UNMAP is not ideal; it works, but it's a bit of a kludge, and until VMware introduces automatic support again there is not a lot of value in using it because of the limitations.

When will VMware support automatic reclamation again?

Only time will tell, hopefully the next major release will enable it again. This was quite a setback for VMware and there was no easy fix so it may take a while before VMware has perfected it. Once it is back it will be a great feature to utilize as it really makes thin provisioning at the storage array level very efficient.

Where can I find out more about UNMAP?

Below are some useful links about UNMAP:

Jun 15 2012

The why and how of building a vSphere lab for work or home

Be sure to register at the bottom of this article for your chance to win an Iomega ix4-200d 4TB Network Storage Cloud Edition provided by SolarWinds.

Having a server lab environment is a great way to experience and utilize the technology that we work with on a daily basis. This provides many benefits and can help you grow your knowledge and experience, which in turn can help your career growth. With the horsepower of today's servers and the versatility of virtualization, you can even build a home lab, allowing you the flexibility of playing around with servers, operating systems and applications in any way without worrying about impacting production systems at your work. Five years ago the idea of having a mini datacenter in your home was mostly unheard of due to the large amount of hardware that it would require, which would make it very costly.

Let’s first look at the reasons why you might want a home lab in the first place.

  • Certification Study – if you are trying to obtain one of the many VMware certifications, you really need an environment that you can use to study and prepare for an exam. Whether it’s building a mock blueprint for a design certification or gaining experience in an area you may be lacking a home lab provides you with a platform to do it all.
  • Blogging – if you want to join the ranks of the hundreds of people that are blogging about virtualization then you’ll need a place where you can experiment and play around so you can educate yourself on whatever topic you are blogging about.
  • Hands-on Experience – there really is no better way to learn virtualization than to use it and experience it firsthand. You can only learn so much from reading a book. You really need to get hands-on experience to maximize your learning potential.
  • Put it to work! – why not actually use it in your household? You can run your own infrastructure services in your house for things like centralized file management and even use VDI to provide desktop access for the kids.

There are several ways you can deploy a home lab, often times your available budget will dictate which one you choose. You can run a whole virtual environment within a virtual environment; this is called a nested configuration. To accomplish this in a home lab you typically run ESX or ESXi as virtual machines running under a hosted hypervisor like VMware Workstation. This allows you the flexibility of not having to dedicate physical hardware to your home lab and you can bring it up when needed on any PC. This also gives you the option of mobility, you can easily run it on a laptop that has sufficient resources (multi-core CPU, 4-8GB RAM) to handle it. This is also the most cost effective option as you don’t have to purchase dedicated server, storage and networking hardware for it. This option provides you with a lot of flexibility but has limited scalability and can limit some of the vSphere features that you can use.

You can also choose to dedicate hardware to a virtual lab and install ESX/ESXi directly on physical hardware. If you choose to do this you have two options for it, build your own server using hand-picked components or buy a name brand pre-built server.  The first option is referred to as a white box server as it is a generic server using many different brands of components. This involves choosing a case, motherboard, power supply, I/O cards, memory, hard disks and then assembling them into a full server.  Since you’re not paying for a name brand on the outside of the computer, the labor to assemble components, or an operating system, this option is often cheaper. You can also choose the exact components that you want and are not limited to specific configurations that a vendor may have chosen. While this option provides more flexibility in choosing a configuration at a lower cost there can be some disadvantages to this. The first is that you have to be skilled to put everything together. While not overly complicated, it can be challenging connecting all the cables and screwing everything in place. You also have to deal with compatibility issues, vSphere has a strict Hardware Compatibility List (HCL) of supported hardware and oftentimes components not listed will not work with vSphere as there is no driver for that component. This is frequently the case with I/O adapters such as network and storage adapters. However, as long as you do your homework and choose supported components or those known to work, you’ll be OK. Lastly, when it comes to hardware support you’ll have to work with more than one vendor, which can be frustrating.

The final option is using pre-assembled name-brand hardware from vendors like HP, Dell, Toshiba and IBM. While these vendors sell large rack-mount servers for data centers they also sell lower-end standalone servers that are aimed at SMB’s. While these may cost a bit more than a white box server everything is pre-configured and assembled and fully supported by one vendor. Many of these servers are also listed on the vSphere HCL so you have the peace of mind of knowing that they will work with vSphere. These servers can often be purchased for under $600 for the base models, from which you often have to add some additional memory and NICs. Some popular servers used in home labs include HP’s ML series and MicroServers as well as Dell’s PowerEdge tower servers.

No matter which server you choose you’ll need networking components to connect them. (if you choose to go with VMware Workstation instead of ESX or ESXi, one advantage is that all the networking is virtual so you don’t need any physical network components.) For standalone servers you’ll need a network hub or switch to plug your server’s NICs into. Having more NICs (4-6) in your servers provides you with more options for configuring separate vSwitches in vSphere. Using 2 or 4 port NICs from vendors like Intel can provide this more affordably. Just make sure they are listed on the vSphere HCL. While 100Mbps networking components will work okay, if you want the best performance from your hosts, use gigabit Ethernet, which has become much more affordable. Vendors like NetGear and Linksys make some good, affordable managed and unmanaged switches in configurations from 4 to 24 ports. Managed switches give you more configuration options and advanced features like VLANs, Jumbo Frames, LACP, QoS and port mirroring. Two popular models for home labs are NetGear’s ProSafe Smart Switches and LinkSys switches.
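
As a small example of why the extra NICs come in handy (a sketch, assuming ESXi 5.x and that vmnic1 is an unused physical NIC in the host), you can create an additional standard vSwitch and attach a spare NIC to it as an uplink from the command line:

  • esxcli network vswitch standard add --vswitch-name=vSwitch1
  • esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1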

Finally, having some type of shared storage is a must so you can use the vSphere advanced features that require it. There are two options for this: 1) use a Virtual Storage Appliance (VSA), which can turn local storage into shared storage, or 2) use a dedicated storage appliance. Both options typically use either the iSCSI or NFS storage protocols, which can be accessed via the software clients built into vSphere. A VSA provides a nice affordable option that allows you to use the hard disks that are already in your hosts to create shared storage. These utilize a virtual machine that has VSA software installed on it that handles the shared storage functionality. There are several free options available for this such as OpenFiler and FreeNAS. These can be installed on a VM or a dedicated physical server. While a VSA is more affordable, it can be more complicated to manage and may not provide the best performance compared to a physical storage appliance. If you choose to go with a physical storage appliance there are many low cost models available from vendors like Qnap, Iomega, Synology, NetGear and Drobo. These come in configurations as small as 2-drive units all the way up to 8+ drive units and typically support both iSCSI and NFS protocols. These units will typically come with many of the same features that you will find on expensive SANs, so they will provide you with a good platform to gain experience with vSphere storage features. As an added bonus, these units typically come with many features that can be used within your household for things like streaming media and backups, so you can utilize them for more than just your home lab.
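
If you end up with an NFS-capable NAS or VSA, mounting an export as a datastore is a quick way to get your shared storage online; here is a minimal sketch from the ESXi 5.x command line (the NAS hostname, export path and datastore name are all placeholders for your own values):

  • esxcli storage nfs add --host=nas01.lab.local --share=/export/vmstore --volume-name=nfs-lab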

One other consideration you should think about with a home lab is the environmental factor. Every component in your home lab will generate both heat and noise and will require power.  The more equipment you have, the more heat and noise you will have, and the more power you will use.  Having this equipment in an enclosed room without adequate ventilation or cooling can cause problems. Fortunately, most smaller tower servers often come with energy efficient power supplies that are no louder than a typical PC and do not generate much heat. However, if you plan on bringing home some old rack mount servers that are no longer needed at work for your lab, be prepared for the loud noise, high heat, and big power requirements that come with them.  One handy tool for your home lab is a device like the Kill-A-Watt, which can be used to see exactly how much power each device is using and what it will cost you to run. You can also utilize many of vSphere’s power saving features to help keep your costs down.

Finally, you’ll need software to run in your home lab. For the hypervisor you can utilize the free version of ESXi but this provides you with limited functionality and features. If you want to use all the vSphere features you can also utilize the evaluation version of vSphere, which is good for 60-days. However, you’ll need to re-install periodically once the license expires. If you happen to be a VMware vExpert, one of the perks is that you are provided with 1-year (Not For Resale) NFR licenses each year.  If you’re really in a pinch, oftentimes you can leverage vSphere licenses you may have at work in your home lab.

Once you have the hypervisor out of the way, you need tools to manage it; fortunately there are many free tools available to help you with this. Companies like SolarWinds offer many cool free tools that you can utilize in your home lab that are often based on the products they sell. Their VM Monitor can be run on a desktop PC and continuously monitors the health of your hosts and VMs. They also have additional cool tools like their Storage Response Monitor, which monitors the latency of your datastores, as well as a whole collection of other networking, server and virtualization tools. There is a comprehensive list of hundreds of free tools for your vSphere environment available here. A home lab is also a great place to try out software to kick the tires and gain experience. Whether it's VMware products like View, vCloud Director, and SRM, or 3rd party vendor products like SolarWinds Virtualization Manager, your home lab gives you the freedom to try out all these products. You can install them, see what they offer and how they work, and learn about features such as performance monitoring, chargeback automation and configuration management. While most home labs are on the small side, if you do find yours growing out of control, you can also look at features that provide VM sprawl control and capacity planning to grow your home lab as needed.

There are plenty of options and paths you can take in your quest to build a home lab that will meet your requirements and goals. Take some time and plan out your route; there are many others that have built home labs, and you can benefit from their experiences. No matter what your reasons are for building your home lab, it will provide a great environment for you to learn, experience and utilize virtualization technology. If you're passionate about the technology, you'll find that a home lab is invaluable to fuel that passion and help your knowledge and experience with virtualization continue to grow and evolve.


So to get you started with your own home lab, SolarWinds is providing an Iomega ix4-200d 4TB Network Storage Cloud Edition that would be great for providing the shared storage that can serve as the foundation for your vSphere home lab. All you have to do to register for your chance to win is head on over to the entry page on SolarWinds' website; the contest is open now and closes on June 23rd. On June 25th one entry will be randomly drawn, and the winner will be the lucky owner of a pretty cool NAS unit. So what are you waiting for? Head on over to the entry page and register, and don't miss your chance to win.


Jun 13 2012

Escaping the Cave – A VMware admin's worst fear

The worst security fear in any virtual environment is having a VM gain access at the host level, which can allow it to compromise any VM running on that host. If a VM were to gain access to a host it would essentially have the keys to the kingdom; because it has penetrated the virtualization layer, it has a direct back door into any other VM. This has often been referred to as "escaping the cave", the analogy being that VMs all live in caves and are not allowed to escape them by the hypervisor.


Typically this concern is most prevalent with hosted hypervisors like VMware Workstation that run a full OS under the virtualization layer. Bare metal hypervisors like ESX/ESXi have been fairly immune to this as they have direct contact with the bare metal of a server without a layer in between.

A new vulnerability was recently published that allows this exact scenario; fortunately, if you're a VMware shop it doesn't affect you. It does affect pretty much every other hypervisor, though, that does not support the specific function that this vulnerability exploits.

You can read more about it here and here, and specifically about VMware here. If you want to know more about security with VMware, here's an article I also wrote on how to steal a VM in 3 easy steps that you might find interesting. VMware also has a very good security blog that you can read here and a great overall security page with lots of links here. And if you want to follow one of VMware's security gurus (Rob Randell), who is a friend of mine and a fellow Colorado resident, you can follow him here.

VMware has traditionally done an awesome job keeping ESX/ESXi very secure which is just one of the many reasons that they are the leader in virtualization. Security is a very big concern with virtualization and any vulnerabilities can have very large impacts which is why VMware takes it very seriously.

Here’s also an excerpt from my first book that talks about the escaping the cave concept:

Dealing with Security Administrators
This is the group that tends to put up the most resistance to VMware because of the fear that if a VM is compromised it will allow access to the host server and the other VMs on that host. This is commonly known as “escaping the cave,” and is more an issue with hosted products such as VMware Workstation and Server and less an issue with ESX, which is a more secure platform.

By the Way

The term escaping the cave comes from the analogy that a VM is trapped inside a cave on the host server. Every time it tries to escape from the cave, it gets pushed back in, and no matter what it does, it cannot escape from the cave to get outside. To date, there has never been an instance of a VM escaping the cave on an ESX server.

ESX has a securely designed architecture, and the risk level of this happening is greatly reduced compared to hosted virtual products such as Server and Workstation. This doesn’t mean it can’t happen, but as long as you keep your host patched and properly secured, the chances of it happening are almost nonexistent. Historically, ESX has a good record when it comes to security and vulnerabilities, and in May 2008, ESX version 3.0.2 and VirtualCenter 2.0.21 received the Common Criteria certification at EAL4+ under the Communications Security Establishment Canada (CSEC) Common Criteria Evaluation and Certification Scheme (CCS). EAL4+ is the highest assurance level that is recognized globally by all signatories under the Common Criteria Recognition Agreement (CCRA).

Apr 14 2012

Some great vSphere technical resources from VMware

Finding good technical information from VMware can be tough; VMware scatters it all across their websites & blogs, and there is no one big technical library where you can view everything. I'm a junkie for great technical information on vSphere and have run across a number of great resources that I thought I would share with everyone:

1. You may be familiar with VMware Labs as they pump out a lot of those great Flings, which are unofficial free tools that perform specific functions in a vSphere environment. They also have a publications section that has 50+ hardcore technical papers on topics like VisorFS, which explains the ESXi architecture in detail, and vIC, which explains interrupt coalescing for VM storage device I/O. VMware has rolled a lot of these great publications up into a Technical Journal that they recently published.

2. You can find all of VMware’s patent listings at this website which helps explain exactly how a lot of their technology and features work. Here you can find out the technical details behind their DataMover component of the hypervisor which is responsible for moving and copying VM data for operations like Storage vMotion. You can also find out all the details on how their Fault Tolerance feature works as well.

3. In addition to problem-solving articles, the VMware KB has a lot of great articles in it as well, but finding them can be pretty challenging. To find them you can search on types of documents and select technical articles that are not specific to problems and issues, such as Best Practices, How-To's and Information documents. There is also a KBTV video blog that gives you video instruction on many of the popular KB topics.

4. The VMworld website has all of the recordings and materials from previous years' VMworlds. You can only access the most recent year if you attended or purchase a subscription to them. But the years before that are always made free after each VMworld and still contain a lot of good content. For example you can see all of the 2010 sessions here.

5. VMware has a video library for their Technical Publications on YouTube so if you don’t like reading about something you can see it in a visual manner instead. Most of the videos are fairly short (under 5 min), it would be nice if they could make these longer and put more information in them but they are still good to watch.

6. VMware has first-rate documentation, and it's now available in PDF, EPUB and MOBI formats; download it all and put it on your Kindle, e-reader or iPad so you always have it with you. It's broken into specific guides now for areas such as storage, availability, networking, security, resource management, etc. This documentation is not just how to install and use the products; there is a lot of great information in it as well that is very educational.

7. Of course, don't forget the great technical papers that VMware frequently publishes; some great recent ones include Storage Protocol Comparison and another comparing Stretched Clusters to SRM.

8. Then there are the VMTN forums, which are where a lot of topics get discussed, but there is also a Documents section in each forum that contains some great information that is more focused than the discussion threads. For example, here are the Documents for the vSphere forum and for the vCenter forum.

9. Finally, if there is one blog you must read besides Duncan's, VMware's vSphere Blog is it. The vSphere blog has become an aggregator for many other VMware technical blogs that focus on specific areas like storage, ESXi, DR/BC, networking and automation. Some of VMware's great technical minds, like Cormac Hogan, Frank Denneman, Duncan Epping, William Lam and Kyle Gleed, contribute to it regularly.

If you have any other good VMware technical information sources let me know and I’ll add them to the list.

Nov 28 2011

vSphere Storage I/O Control: What it does and how to configure it

Storage is the slowest and most complex host resource, and when bottlenecks occur, they can bring your virtual machines (VMs) to a crawl. In a VMware environment, Storage I/O Control provides much-needed control of storage I/O and should be used to ensure that the performance of your critical VMs is not affected by VMs on other hosts when there is contention for I/O resources.

Storage I/O Control was introduced in vSphere 4.1, taking storage resource controls built into vSphere to a much broader level. In vSphere 5, Storage I/O Control has been enhanced with support for NFS data stores and clusterwide I/O shares.

Prior to vSphere 4.1, storage resource controls could be set on each host at the VM level using shares that provided priority access to storage resources (a sketch of how these per-disk shares can be set programmatically follows the example below). While this worked OK for individual hosts, it is common for many hosts to share data stores, and since each host worked individually to control VM access to disk resources, VMs on one host could limit the disk resources available to VMs on other hosts.

The following example illustrates the problem:

  • Host A has a number of noncritical VMs on Data Store 1, with disk shares set to Normal
  • Host B runs a critical SQL Server VM that is also located on Data Store 1, with disk shares set to High
  • A noncritical VM on Host A starts generating intense disk I/O due to a job that was kicked off; since Host A has no resource contention, the VM is given all the storage I/O resources it needs
  • Data Store 1 starts experiencing a lot of demand for I/O resources from the VM on Host A
  • Storage performance for the critical SQL VM on Host B starts to suffer as a result
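
For reference, the Normal and High disk shares in the example above are configured per virtual disk on each VM. Here is a minimal pyVmomi sketch of how that might look; it assumes you already have a connected session and a vim.VirtualMachine object named vm (my assumptions, not something from the article), and the numeric shares value only matters when the level is set to custom:

```python
# Minimal sketch (not from the article): set the disk shares level on a VM's
# first virtual disk with pyVmomi. Assumes an existing connected session and
# a vim.VirtualMachine object `vm`; names and values are illustrative only.
from pyVmomi import vim

def set_disk_shares(vm, level=vim.SharesInfo.Level.high):
    # Find the first virtual disk attached to the VM
    disk = next(dev for dev in vm.config.hardware.device
                if isinstance(dev, vim.vm.device.VirtualDisk))

    # The Low/Normal/High disk shares live in the disk's storage I/O
    # allocation; the numeric value is only honored for the 'custom' level
    disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
        shares=vim.SharesInfo(level=level, shares=2000))

    # Apply the change through a VM reconfigure task
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
```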

How Storage I/O Control works

Storage I/O Control solves this problem by enforcing storage resource controls at the data store level so all hosts and VMs in a cluster accessing a data store are taken into account when prioritizing VM access to storage resources. Therefore, a VM with Low or Normal shares will be throttled if higher-priority VMs on other hosts need more storage resources. Storage I/O Control can be enabled on each data store and, once enabled, uses a congestion threshold that measures latency in the storage subsystem. Once the threshold is reached, Storage I/O Control begins enforcing storage priorities on each host accessing the data store to ensure VMs with higher priority have the resources they need.
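
If you like to script things, Storage I/O Control can also be enabled through the vSphere API rather than the client. The following is a minimal pyVmomi sketch, assuming an already-connected ServiceInstance (si), a vim.Datastore object (ds) and an illustrative 30 ms congestion threshold; double-check the ConfigureDatastoreIORM_Task call and the threshold value against the API reference and guidance for your vSphere version:

```python
# Minimal sketch (not from the article): enable Storage I/O Control on a
# datastore and set its congestion (latency) threshold with pyVmomi. `si` is
# an already-connected ServiceInstance and `ds` a vim.Datastore object; both
# are assumptions, and the 30 ms threshold is just an example value.
from pyVmomi import vim

def enable_sioc(si, ds, threshold_ms=30):
    storage_rm = si.content.storageResourceManager

    spec = vim.StorageResourceManager.IORMConfigSpec(
        enabled=True,                      # turn Storage I/O Control on
        congestionThreshold=threshold_ms)  # latency (ms) that triggers throttling

    # Returns a vCenter task; monitor it with your usual task-wait helper
    return storage_rm.ConfigureDatastoreIORM_Task(datastore=ds, spec=spec)
```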

Read the full article at searchvirtualstorage.com…

Nov 17 2011

Storage I/O Bottlenecks in a Virtual Environment

Today I wanted to highlight another white paper that I wrote for SolarWinds titled “Storage I/O Bottlenecks in a Virtual Environment”. I enjoyed writing this one the most, as it digs really deep into the technical aspects of storage I/O bottlenecks. This white paper covers topics such as the effects of storage I/O bottlenecks, common causes, how to identify them and how to solve them. Below is an excerpt from the white paper; you can register and read the full paper over at the SolarWinds website.

There are several key statistics related to bottlenecks that should be monitored on your storage subsystem, but perhaps the most important is latency. Disk latency is defined as the time it takes for the selected disk sector to be positioned under the drive head so it can be read or written to. When a VM makes a read or write to its virtual disk, that request must follow a path from the guest OS to the physical storage device. A bottleneck can occur at different points along that path, and different statistics can be used to help pinpoint where in the path it is occurring. The figure below illustrates the path that data takes to get from the VM to the storage device.

latency3

The storage I/O goes through the operating system as it normally would and makes its way to the device driver for the virtual storage adapter. From there it goes through the Virtual Machine Monitor (VMM) of the hypervisor, which emulates the virtual storage adapter that the guest sees. It then travels through the VMkernel and a series of queues before it reaches the device driver for the physical storage adapter in the host. For shared storage it continues out of the host over the storage network to its final destination, the physical storage device. Total guest latency is measured from the point where the storage I/O enters the VMkernel to the point where it is completed by the physical storage device.

The total guest latency (GAVG/cmd, as it is referred to in the esxtop utility) is measured in milliseconds and consists of the combined values of kernel latency (KAVG/cmd) plus device latency (DAVG/cmd). The kernel latency includes all the time that I/O spends in the VMkernel before it exits to the destination storage device. Queue latency (QAVG/cmd) is part of the kernel latency but is also measured independently. The device latency is the total amount of time that I/O spends in the VMkernel physical driver code and the physical storage device; once I/O leaves the VMkernel for the storage device, this is the amount of time that it takes to get there and return. A guest latency value that is too high is a pretty clear indication that you have a storage I/O bottleneck that can cause severe performance issues. Once total guest latency exceeds 20ms you will notice the performance of your VMs suffer, and as it approaches 50ms your VMs will become unresponsive.
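
To make those relationships concrete, here is a small, purely illustrative Python sketch with made-up numbers that shows how the counters add up and how you might flag the 20ms and 50ms thresholds mentioned above:

```python
# Illustrative sketch with made-up numbers showing how the esxtop latency
# counters described above relate, and how the 20 ms / 50 ms guest-latency
# rules of thumb might be applied. In practice you would pull these values
# from esxtop batch output or the vSphere performance counters.

def classify_guest_latency(kavg_ms, davg_ms):
    # Total guest latency (GAVG/cmd) = kernel latency (KAVG/cmd)
    #                                + device latency (DAVG/cmd)
    gavg_ms = kavg_ms + davg_ms

    if gavg_ms >= 50:
        status = "critical: VMs are likely to become unresponsive"
    elif gavg_ms > 20:
        status = "warning: VM performance will start to suffer"
    else:
        status = "ok"
    return gavg_ms, status

# Example with made-up values: 2 ms spent in the VMkernel, 27 ms at the device
gavg, status = classify_guest_latency(kavg_ms=2.0, davg_ms=27.0)
print(f"GAVG/cmd = {gavg:.1f} ms -> {status}")
```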

Full paper including information on the key statistics related to storage I/O bottlenecks available here

Aug 19 2011

The top iPad applications for VMware admins

The iPad is becoming more and more popular in the enterprise, and not just for mobile workers. There is also a slew of iPad applications for VMware admins.

Many IT vendors see the iPad’s potential and are developing iPad apps that can manage their traditional hardware and software products. Xsigo Systems, for instance, has a very nice app called Xsigo XMS, which manages virtual I/O through the company’s XMS servers. There is also an iPad application called SiMU Pro that manages Cisco Systems Inc.’s Unified Computing System.

In addition, there are several iPad applications that can supplement the traditional VMware admin toolkit, including the vSphere Client and Secure Shell (SSH) applications. With the right iPad applications, VMware admins will reach a new level of management flexibility that’s not possible with traditional desktops and laptops.

TABLE OF CONTENTS

I. Top iPad applications for VMware management

II. Applications for remotely connecting to hosts and workstations

III. Top iPad applications for VMware networking

IV. General purpose iPad apps for VMware admins

I. TOP IPAD APPLICATIONS FOR VMWARE MONITORING AND MAINTENANCE
With the top iPad applications for infrastructure management, VMware admins can control basic functions, such as powering virtual machines (VMs) on and off and using vMotion.

These iPad apps mimic some of the functionality of the vSphere Client and service console, but they aren’t a full-fledged replacement. Even so, these iPad applications allow VMware admins to perform key virtualization tasks without a full-scale computer.

Read the full article at searchvmware.com…
