Tag: iSCSI

Best Practices for running vSphere on iSCSI

VMware recently updated its paper covering Best Practices for running vSphere on iSCSI storage. This paper is similar in nature to the Best Practices for running vSphere on NFS paper that was updated not too long ago. VMware has tried to involve its storage partners in these papers, reaching out to NetApp & EMC to gather best practices for the NFS paper and doing the same for the iSCSI paper with HP and Dell, who both have strong iSCSI storage products. As a result you’ll see my name and Jason Boche’s in the credits of the paper, but the reality is I didn’t contribute much to it besides hooking VMware up with some technical experts at HP.

So if you’re using iSCSI, be sure to give it a read; if you’re using NFS, give that one a read as well. And don’t forget VMware’s vSphere storage documentation, which is specific to each product release.

New ESXi 5.0 build to fix Software iSCSI Initiator issue

VMware has recently released a new build of ESXi to fix a bug that causes ESXi to hang for a long period while it tries to connect to all iSCSI targets. I’ve personally seen this happen in my lab, and it can take quite a long time for ESXi to boot as it will try 9 times to connect to each iSCSI target. VMware sees this as a serious enough issue that not only have they released a patch to fix the problem, they’ve also released a special express patch release of ESXi. So when you go to download ESXi 5.0 now you will see two options for the ESXi ISO: one for systems without software iSCSI configured and one for systems with software iSCSI configured. If you are already using software iSCSI or plan on it at some point, you should choose the ISO image for systems with software iSCSI. You can read more about this issue in this VMware KB article. Here are the details on the two ESXi builds (a quick way to check which build your hosts are running is sketched after the list):

  • Original release: Version 5.0.0 – Release Date 8/24/11 – Build 469512
  • iSCSI patch release: Version 5.0.0 – Release Date 11/10/11 – Build 504890
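
If you manage more than a couple of hosts, here is a minimal sketch of how you might check each host’s build number against the patched build using pyVmomi. This is my own illustration, not anything from the KB article; the vCenter hostname and credentials are placeholders.

```python
# A minimal sketch, assuming pyVmomi is installed and vCenter is reachable.
# The hostname and credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

PATCHED_BUILD = 504890  # the iSCSI express patch release build from the list above

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        about = host.summary.config.product  # vim.AboutInfo for the host
        status = "ok" if int(about.build) >= PATCHED_BUILD else "needs patch"
        print(f"{host.name}: {about.fullName} build {about.build} ({status})")
    view.Destroy()
finally:
    Disconnect(si)
```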

Using iSCSI storage with vSphere

To tap into some of VMware vSphere’s advanced features such as VMotion, fault tolerance, high availability and the VMware Distributed Resource Scheduler, you need to have shared storage for all of your hosts. vSphere’s proprietary VMFS file system uses a special locking mechanism to allow multiple hosts to connect to the same shared storage volumes and the virtual machines (VMs) on them. Traditionally, this meant you had to implement an expensive Fibre Channel SAN infrastructure, but iSCSI and NFS network storage are now more affordable alternatives.

Focusing on iSCSI, we’ll describe how to set it up and configure it properly for vSphere hosts, as well as provide some tips and best practices for using iSCSI storage with vSphere. In addition, we’ve included the results of a performance benchmarking test for the iSCSI/vSphere pairing, with performance comparisons of the various configurations.
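
To give a taste of the configuration side, here is a minimal pyVmomi sketch of enabling the software iSCSI initiator on a host and pointing it at a discovery target. This is my own illustration rather than anything from the article itself, and the target IP is a placeholder; the full article covers the setup properly.

```python
# Minimal sketch: enable the software iSCSI initiator on a host and add a
# send (dynamic discovery) target via pyVmomi. The target IP is a placeholder.
from pyVmomi import vim

def enable_software_iscsi(host, target_ip, target_port=3260):
    ss = host.configManager.storageSystem
    ss.UpdateSoftwareInternetScsiEnabled(True)  # turn on the software initiator

    # Find the software iSCSI HBA that now appears among the storage adapters
    for hba in ss.storageDeviceInfo.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
            target = vim.host.InternetScsiHba.SendTarget(
                address=target_ip, port=target_port)
            ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device,
                                          targets=[target])
            ss.RescanHba(hba.device)  # discover LUNs on the new target
            return hba.device
    raise RuntimeError("software iSCSI HBA not found after enabling")
```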

Read the full article in the August 2010 issue of Storage Magazine at searchstorage.com…

The importance of the Hardware Compatibility List

VMware publishes a list of all server hardware that is supported with vSphere, which includes servers, I/O adapters, and storage and SAN devices. This list is continually updated and is most commonly referred to as the Hardware Compatibility List (HCL), although VMware renamed it the VMware Compatibility Guide a while back. The guide used to be published only as PDF files that you could read through to see if your hardware was listed, but it is now available as an online interactive webpage that is searchable and filterable as well. The guide lists hardware that is supported by vSphere, but if hardware is not listed it does not mean it will not work with vSphere. In many cases the hardware will still work, but because it is not listed VMware may not provide you support if you have problems with it. Servers and storage devices are two areas where hardware very commonly works with vSphere despite not being listed in the guide. I/O devices like network adapters and storage controllers, though, are less likely to work if they are not listed, because they rely on drivers loaded in the VMkernel to work properly. If the driver is not one of the limited set that loads in the VMkernel, then your device is not going to work.
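
One way to see which VMkernel drivers your I/O devices are actually using, so you can cross-check the device/driver pairs against the Compatibility Guide, is to ask the host. Here is a minimal pyVmomi sketch of my own; it assumes a connected vim.HostSystem object as in the earlier build-check example.

```python
# Minimal sketch: list each physical NIC and the VMkernel driver behind it,
# so the device/driver pair can be checked against the Compatibility Guide.
# Assumes 'host' is a connected vim.HostSystem (see the earlier sketch).
def list_nic_drivers(host):
    for pnic in host.config.network.pnic:
        print(f"{pnic.device}: driver={pnic.driver} mac={pnic.mac}")
```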

I don’t go through the guide that often, as I assume that new mainstream hardware from large vendors like HP will always work and be supported. Last week, however, that assumption bit me. We had just received some new hardware from HP, which included a DL385 G6 server and an MSA G2 iSCSI storage array. I wanted to use hardware iSCSI initiators with TCP/IP Offload Engines (TOEs) due to the high CPU and I/O demands of the applications running on that server, so I initially ordered HP’s only available TOE card, the NC380T. After we placed the order, however, we were informed that the card was no longer available and had been replaced by the newer NC382T. After receiving everything, assembling it, and loading ESX on it, I found that the NC382T was not listed under Storage Adapters in the vSphere Client, as TOE cards should be since they are not treated as network adapters in vSphere. Only my SATA adapter, P410 Smart Array controller for local storage, Fibre Channel adapters, and the iSCSI software initiator were showing up under Storage Adapters.

The NC382T was showing up under Network Adapters instead and could not be used as a hardware initiator.
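
You can see the same distinction programmatically. Here is a minimal pyVmomi sketch of my own that lists a host’s storage adapters and flags which ones are iSCSI HBAs; a TOE card functioning as a hardware initiator would show up here, while a card treated as a plain NIC would not. It again assumes a connected vim.HostSystem as in the earlier sketches.

```python
# Minimal sketch: list the host's storage adapters (what the vSphere Client
# shows under Storage Adapters) and flag the iSCSI HBAs. A TOE card working
# as a hardware initiator appears here; one treated as a plain NIC does not.
from pyVmomi import vim

def list_storage_adapters(host):
    for hba in host.config.storageDevice.hostBusAdapter:
        kind = ("iSCSI HBA" if isinstance(hba, vim.host.InternetScsiHba)
                else type(hba).__name__)
        print(f"{hba.device}: model={hba.model} driver={hba.driver} ({kind})")
```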

So this new TOE card from HP that we bought to use as a hardware initiator could not serve that purpose. After discovering this I checked VMware’s Compatibility Guide to see which HP iSCSI adapters were listed. Much to my surprise there was only one HP model listed, and after looking it up on HP’s website I found that it was for blade systems only and therefore of no use to my server.

Now, HP OEMs many of its storage adapters; a lot of them are made by QLogic and Emulex. Most of the other vendors listed in the guide for iSCSI adapters had QLogic adapters re-branded under their own names, but not HP.

After confirming with HP that the blade adapter was the only one they OEM’d, I was forced to get a QLogic-branded adapter instead. The card that seemed most popular was the QLE4062C, so we now have one of those on order and may end up using the NC382T as a regular NIC instead.

So the moral of this story is to always check the Compatibility Guide, especially when ordering I/O adapters, so you don’t end up with hardware that you can’t use with vSphere. If you want to know more about the guide, such as how hardware gets added to or removed from it and VMware’s support policy, check out this blog post that I did on it a while ago.
