Tidbits on the new vSphere 4.1 release

I’ve been in the vSphere 4.1 beta for quite some time and have also participated in VMware’s blogger briefings on vSphere 4.1. The major features of 4.1 will no doubt be covered by many, so this post is simply a collection of tidbits on vSphere 4.1 that I’ve picked up over the last few months. Keep checking back, as this post is a work in progress and will grow as I add more to it.

  • vCenter Server 4.1 now requires a 64-bit Windows operating system; it will not install on a 32-bit Windows OS.
  • This is the last major release to support ESX and the Service Console; ESXi will be the only choice in the next major release, due out next year. Almost all vendors now support using APIs instead of Service Console agents.
  • The Host Update utility that installed alongside the vSphere Client is gone in 4.1. As a result, the vSphere Client download is much smaller in vSphere 4.1. To patch ESXi hosts you need to use either Update Manager or the CLI utilities.
  • The vSphere Client is no longer bundled with ESX & ESXi installations; this was done to reduce the build size so the ISO file used to install ESX & ESXi is smaller. In previous versions, when you accessed the web interface of an ESX or ESXi host, you had the option to download the vSphere Client directly from the host to install it on a workstation. Having the vSphere Client available as a download from the host was mostly a convenience; it can still be downloaded from VMware’s website or from the vCenter Server’s web interface. This change reduced the size of the ESX ISO from 814MB (4.0) to 631MB (4.1) and the ESXi ISO from 353MB (4.0) to 290MB (4.1).
  • When installing vCenter Server you can now choose the JVM maximum memory size for the Tomcat application server that is installed on the vCenter Server. This can drastically reduce or increase the amount of RAM used by the JVM, which is the biggest memory consumer on the vCenter Server. For small inventories (<100 hosts) the JVM is set to 1024MB, for medium (100–400 hosts) to 2048MB, and for large (>400 hosts) to 4096MB. This setting can be changed at any time by loading the Configure Tomcat utility in the VMware start menu folder and selecting the Java tab (sketch 1 after this list restates the sizing rule).
  • Application Monitoring is a new HA feature that monitors applications which have been modified to transmit a heartbeat that vSphere can detect; HA can restart the VM if the application stops responding (crashes). This adds another layer of the stack whose uptime HA can monitor (host, operating system, application). No applications support this yet, but some probably will in the future; sketch 2 after this list shows what a heartbeat loop might look like.
  • Name/case changes:
    • VMotion -> vMotion
    • Storage VMotion -> Storage vMotion
    • ESXi free or standalone edition -> vSphere Hypervisor
    • ESX & ESXi paid editions -> Hypervisor Architectures
  • The VMFS disk format has been upgraded from version 3.33 (vSphere 4.0) to version 3.46 (vSphere 4.1); like some prior updates this one is minor and not worth re-creating your VMFS volumes for. The new VMFS driver (not the disk format) in vSphere 4.1 includes new storage offloading algorithms, introduced via VAAI (vStorage APIs for Array Integration).
  • A new feature in the Performance views shows VM/host power usage in watts. ESX 4.1 can gather host power consumption on platforms that provide that data through IPMI sensors. Newer platforms from HP, Dell, IBM, and Fujitsu are supported, and there is a way to teach ESX how to get host power consumption on other systems that have power consumption IPMI sensors. In the vSphere Client, click on “Performance” and choose “Power” from the drop-down list at the top; you should see a host power consumption chart if the host is supported.

    Per-VM power usage, however, is off by default and is considered experimental (see the clarification in the comments below). To enable it, click the Configuration tab on an ESX host and, in the Software box, click Advanced Settings. In the list of options click Power, scroll down to near the end of the list on the right-hand side to the setting called Power.ChargeVMs, change its value to 1, and click OK. There is one more step to make this work: edit the /usr/share/sensors/vmware file and add information for your server. By default the file contains an example for a Fujitsu server that shows the syntax used to describe sensors. VMware added this functionality so that OEMs and customers can add support for their systems without the need for VMware to update sensord itself. For example, an HP ProLiant DL385 G6 server has two IPMI sensors called “Power Supply 1” and “Power Supply 2”, so you can add a single line to the sensor file like this: “default:power:HP:ProLiant DL385 G6:Power Supply 1,Power Supply 2:WATTS”. You’ll need to restart sensord on the server afterwards. The sensor names in these configuration files are case-sensitive and must match the vendor’s sensor names exactly, and the manufacturer and product names must match the system’s DMI information. VMware plans on having OEMs produce their own configuration files for sensord and either send them to VMware or ship them as part of their oem.tgz custom archives. Sketch 3 after this list is a small validator for this entry format.
  • Other new advanced power settings include:
    • Power.UsePStates
    • Power.UseCStates
    • Power.UseStallCtr
    • Power.CStateMaxLatency
    • Power.CStateResidencyCoef
    • Power.CStatePredictionCoef
    • Power.PerfBias
    • Power.PerfBiasEnable
    • Power.ChargeVMs
    • Power.ChargeMemoryPct
  • These settings control what the Custom power policy does. By default Custom is the same as Balanced, but you can tweak these settings to change it. For example, in 4.1 the Balanced policy doesn’t use C-states (only P-states), while the Low policy uses both but also changes some other parameters to be more aggressive; you could make a Custom policy that is the same as Balanced except that C-states are used. Each option has a doc string that briefly explains what it does, which should be available in the UI. If the doc string starts with “In Custom policy”, that option affects only the Custom policy; otherwise it applies regardless of policy. There is not a lot of documentation on these settings, as VMware anticipated that practically no one would actually want to play with a custom policy or tweak the other options; they’d just choose one of the three predefined policies (High Performance, Balanced, or Low Power). Sketch 4 after this list shows how these options can be read and changed programmatically.
  • The difference between a processor C-state and a P-state is this:
    • A P-state alters the frequency and voltage of a CPU core, from a low state (P-min) to the max state (P-max); this can help save power for workloads that do not require a CPU core’s full frequency.
    • A C-state shuts down a whole CPU core so it cannot be used; this is done during periods of low activity and saves more power than simply lowering the CPU core frequency.
  • A new feature in vSphere 4.1 called the iSCSI Boot Firmware Table (iBFT) allows booting from an iSCSI target using software initiators; previously only hardware initiators on ESX supported this. This feature has some restrictions, though: it only works with ESXi (not ESX), and the only currently supported network card is the Broadcom 57711 10GbE NIC. When booting from software iSCSI, the boot firmware on the network adapter logs into an iSCSI target. The firmware then saves the network and iSCSI boot parameters in the iBFT, which is stored in the host’s memory. Before you can use iBFT you need to configure the boot order in your server’s BIOS so the iBFT NIC comes before all other devices, and then configure the iSCSI settings and CHAP authentication in the BIOS of the NIC. The ESXi installation media has special iSCSI initialization scripts that use the iBFT to connect to the iSCSI target and present it to the BIOS. Once you select the iSCSI target as your boot device, the installer copies the boot image to it. After the media is removed and the host rebooted, the iSCSI target is used to boot, and the initialization script runs in first-boot mode to configure the networking, which is persistent afterwards.
  • Memory Compression is a new feature in vSphere 4.1 that can offer VMs performance benefits. It provides a memory-reclamation tier that sits between physical memory and disk swapping, and it comes into play when a VM’s memory is under contention. The performance gain comes from pages being compressed in memory rather than swapped out to slower disk-based storage (sketch 5 after this list shows how to read the new per-VM compression statistic).
  • Load-based teaming, found in vSphere 4.1, can dynamically adjust the teaming algorithm to balance the network load across a team of physical adapters connected to a vNetwork Distributed Switch.
  • A new feature of vSphere 4.1 is the extra storage performance and NFS statistics that can be accessed via the performance charts and esxtop. These metrics provide useful insight into storage throughput and any host or virtual machine (VM) latency (sketch 6 after this list shows how to list the available storage counters).
  • You will receive a prompt when creating a vNetwork Distributed Switch to choose a vDS version, either 4.0 or 4.1; if your hosts are all 4.1 you can choose the 4.1 version, which enables additional features such as Network I/O Control and dynamic load balancing.
  • vSphere 4.1 added another new feature to HA that checks the operational status of the cluster. Available on the cluster’s Summary tab, this detail window, called Cluster Operational Status, displays more information about the current HA operational status, including the specific status and errors for each host in the HA cluster.
  • vStorage APIs for Data Protection (VADP) now offer VSS quiescing support for Windows 2008 and Windows 2008 R2 servers, enabling application-consistent backup and restore operations for applications on those platforms.
  • VMware multi-core virtual CPU support lets you control the number of cores per virtual CPU in a virtual machine. This capability lets operating systems with socket restrictions use more of the host CPU’s cores, which increases overall performance. You can configure how the virtual CPUs are assigned in terms of sockets and cores. For example, you can configure a virtual machine with four virtual CPUs in the following ways:
    • Four sockets with one core per socket
    • Two sockets with two cores per socket
    • One socket with four cores per socket
  • Using multi-core virtual CPUs can be useful when you run operating systems or applications that can take advantage of only a limited number of CPU sockets. Previously, each virtual CPU was, by default, assigned to a single-core socket, so the virtual machine would have as many sockets as virtual CPUs. Note that when you configure multi-core virtual CPUs for a virtual machine, CPU hot add/remove is disabled. To set this in the vSphere Client inventory: right-click the virtual machine and select Edit Settings; on the Hardware tab, select CPUs and choose the number of virtual processors; on the Options tab, click General in the Advanced options list, click Configuration Parameters, click Add Row, and type cpuid.coresPerSocket in the Name column; then type a value (2, 4, or 8) in the Value column. The number of virtual CPUs must be divisible by the number of cores per socket, and the coresPerSocket setting must be a power of two. Click OK and power on the virtual machine; you can verify the CPU settings on the virtual machine’s Resource Allocation tab. Sketch 7 after this list shows the same change made through the API.
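
Below are a few minimal code sketches referenced from the list above. They are my own illustrations, not VMware code, and Python is used throughout.

Sketch 1: vCenter JVM sizing. A tiny restatement of the installer’s sizing rule as a function; the function name and shape are mine.

    def recommended_jvm_heap_mb(num_hosts):
        # Tomcat JVM maximum memory (MB) per the vCenter 4.1 installer choices
        if num_hosts < 100:       # small inventory
            return 1024
        if num_hosts <= 400:      # medium inventory
            return 2048
        return 4096               # large inventory

    print(recommended_jvm_heap_mb(250))  # 2048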
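
Sketch 2: Application Monitoring heartbeat. Since no applications support the feature yet, this is purely a sketch of the pattern the feature implies; the appmon module and every call on it are hypothetical stand-ins, not a real VMware SDK binding.

    import time
    import appmon  # hypothetical binding to a guest application-monitoring SDK

    def application_is_healthy():
        return True  # placeholder for your own application health check

    appmon.enable()                # hypothetical: ask HA to watch this application
    try:
        while application_is_healthy():
            appmon.mark_active()   # hypothetical: send a heartbeat
            time.sleep(10)
    finally:
        appmon.disable()           # hypothetical: stop monitoring cleanly

If heartbeats simply stopped without a clean disable, HA would treat the application as crashed and restart the VM.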
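
Sketch 3: validating a sensor file entry. A small validator (my own code) for the EntryType:SensorType:Manufacturer:Product:Sensor1[,Sensor2...]:Units format used by /usr/share/sensors/vmware, per the file header quoted in the comments below.

    def parse_sensor_entry(line):
        fields = line.strip().split(":")
        if len(fields) != 6:
            raise ValueError("expected 6 colon-separated fields")
        entry, sensor_type, manufacturer, product, sensors, units = fields
        if (entry, sensor_type, units) != ("default", "power", "WATTS"):
            raise ValueError("must be default:power:...:WATTS")
        names = sensors.split(",")  # sensord sums up to 4 sensors per entry
        if not 1 <= len(names) <= 4:
            raise ValueError("expected 1 to 4 sensor names")
        return manufacturer, product, names

    print(parse_sensor_entry(
        "default:power:HP:ProLiant DL385 G6:Power Supply 1,Power Supply 2:WATTS"))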
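
Sketch 4: reading and tweaking the Power.* advanced settings with pyVmomi (my choice of tooling, not the only way; host name and credentials are placeholders, and the value semantics of Power.UseCStates are an assumption).

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
    si = SmartConnect(host="esx01.example.com", user="root", pwd="***", sslContext=ctx)
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    adv = host.configManager.advancedOption

    for opt in adv.QueryOptions(name="Power."):  # list the current Power.* values
        print(opt.key, "=", opt.value)

    # Example tweak: let the Custom policy use C-states as well as P-states.
    adv.UpdateOptions(changedValue=[vim.option.OptionValue(key="Power.UseCStates", value=1)])

    Disconnect(si)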
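
Sketch 5: reading the per-VM compressed memory counter that the 4.1 API added to the VM quick stats (connection details are placeholders; a flat VM folder is assumed for brevity).

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="***", sslContext=ctx)

    datacenter = si.content.rootFolder.childEntity[0]
    for vm in datacenter.vmFolder.childEntity:  # assumes VMs, not nested folders
        print(vm.name, vm.summary.quickStats.compressedMemory, "KB compressed")

    Disconnect(si)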
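
Sketch 6: discovering the storage performance counters a vCenter or host exposes, a quick way to see what 4.1 added (group names come from the performance API; connection details are placeholders).

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="***", sslContext=ctx)

    for c in si.content.perfManager.perfCounter:
        group = c.groupInfo.key
        if group in ("disk", "datastore", "storageAdapter", "storagePath"):
            print("%s.%s.%s (%s)" % (group, c.nameInfo.key, c.rollupType, c.unitInfo.label))

    Disconnect(si)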
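
Sketch 7: the same cpuid.coresPerSocket change made through the API instead of the Configuration Parameters dialog (VM lookup and credentials are placeholders; the VM must be powered off).

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="***", sslContext=ctx)
    vm = si.content.searchIndex.FindByDnsName(vmSearch=True, dnsName="testvm.example.com")

    spec = vim.vm.ConfigSpec(
        numCPUs=4,  # total virtual CPUs; must be divisible by cores per socket
        extraConfig=[vim.option.OptionValue(key="cpuid.coresPerSocket", value="2")],
    )
    vm.ReconfigVM_Task(spec=spec)  # result: two sockets with two cores per socket

    Disconnect(si)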

4 comments


    • ricdgr on July 17, 2010 at 10:05 am

    About power metering:

    – All the changes to /usr/share/sensord are lost when the host reboots. You either have to create an oem.tgz or modify a third-party oem.tgz.
    – The server name does matter. If the server name (which you can get on the hardware tab, for example) is not in the file, sensord will not start (product mismatch).

    I tried to add my R900 and it worked very well. I can even see a consumed power estimate for each VM.
    Let’s hope Dell adds it to the next release of their OMSA/vib.

  1. Excellent post Eric, much appreciated.

    Do you know which Dell servers support the new VM/host power usage functionality?

    Thanking you in advance,

    Jose Maria Gonzalez from El blog de Virtualizacion en Español

    • Olaf on July 30, 2010 at 4:41 am

    About VMFS: ESXi 4.1 does not recognize VMFS 3.33 on an iSCSI LUN.

  2. Let me clarify your paragraph on displaying host and VM power usage. It muddles those two features together a bit.

    (1) The feature of displaying the host power consumption is not experimental and is always on, but it will display 0 watts if the host is not supported or does not have a power meter. Hopefully most people should not have to edit /usr/share/sensors/vmware to get support for their host, but if you do, the instructions in the paragraph are OK. Here are some more detailed instructions and additional lines that are going into esx4.1u1:

    #
    # This file contains a list of power sensors that are known to VMware, Inc.
    #
    # OEMs: to add support for new machines, do not modify this file
    # directly, but place a new file in this directory instead.
    #
    # Supported format:
    #
    # EntryType:SensorType:Manufacturer:Product:Sensor1[,Sensor2…]:Units
    #
    # EntryType must be "default", SensorType must be "power", and Units
    # must be "WATTS" (all without quotation marks).
    #
    # Manufacturer and Product are compared against the system’s DMI (also
    # known as SMBIOS) information from its System Information (Type 1)
    # record. Manufacturer and Product are both case-insensitive and will
    # match even if the actual name is longer; for example, "Dell" would
    # match "DELL, INC.". Product may be "*" to match all products from
    # the specified Manufacturer.
    #
    # Sensor names are case-sensitive and must match exactly. If multiple
    # Sensors are listed on a line (up to 4), sensord reads them all, sums
    # them, and reports the total as the system power. It is acceptable
    # for not all of the sensors listed on a line to be present; sensord
    # will skip any that are missing as long as at least one is present.
    #
    default:power:FUJITSU:*:Pwr Mon:WATTS
    default:power:FUJITSU:*:Total Power:WATTS
    default:power:FUJITSU:*:SYSTEM:WATTS
    default:power:FUJITSU:*:PSU1 Power,PSU2 Power:WATTS
    default:power:Dell:*:System Level:WATTS
    default:power:HP:*:Power Supply 1,Power Supply 2:WATTS
    default:power:Hewlett-Packard:*:Power Meter:WATTS
    default:power:Hewlett-Packard:*:Power Supply 1,Power Supply 2:WATTS
    default:power:NEC:*:POWER:WATTS
    default:power:NEC:*:Power:WATTS
    default:power:NEC:*:Input_Power:WATTS
    default:power:NEC:*:System Power:WATTS
    default:power:MITSUBISHI:*:POWER:WATTS
    default:power:MITSUBISHI:*:Power:WATTS
    default:power:TOSHIBA:*:POWER:WATTS
    default:power:TOSHIBA:*:Power:WATTS
    default:power:BULL:*:POWER:WATTS

    (2) The feature of displaying per-VM power consumption is experimental and off by default. It can be turned on with an advanced config option as the paragraph describes. The per-VM power consumption feature is dependent on the host power consumption feature.
