Tag: Storage

The Top 10 Things You MUST Know About Storage for vSphere

If you’re going to VMworld this year be sure to check out my session STO5545 – The Top 10 Things You MUST Know About Storage for vSphere, which will be on Tuesday, Aug. 27th from 5:00-6:00 pm. The session was showing full last week, but they must have moved it to a larger room as it is currently showing 89 seats available. This session is crammed full of storage tips, best practices, design considerations and lots of other information related to storage. So sign up now before it fills up again, and I look forward to seeing you there!



vSphere Storage I/O Control: What it does and how to configure it

Storage is the slowest and most complex host resource, and when bottlenecks occur, they can bring your virtual machines (VMs) to a crawl. In a VMware environment, Storage I/O Control provides much needed control of storage I/O and should be used to ensure that the performance of your critical VMs is not affected by VMs from other hosts when there is contention for I/O resources.

Storage I/O Control was introduced in vSphere 4.1, taking storage resource controls built into vSphere to a much broader level. In vSphere 5, Storage I/O Control has been enhanced with support for NFS data stores and clusterwide I/O shares.

Prior to vSphere 4.1, storage resource controls could be set on each host at the VM level using shares that provided priority access to storage resources. While this worked OK for individual hosts, it is common for many hosts to share data stores, and since each host worked individually to control VM access to disk resources, VMs on one host could limit the disk resources available to VMs on other hosts.

The following example illustrates the problem:

  • Host A has a number of noncritical VMs on Data Store 1, with disk shares set to Normal
  • Host B runs a critical SQL Server VM that is also located on Data Store 1, with disk shares set to High
  • A noncritical VM on Host A starts generating intense disk I/O due to a job that was kicked off; since Host A has no resource contention, the VM is given all the storage I/O resources it needs
  • Data Store 1 starts experiencing a lot of demand for I/O resources from the VM on Host A
  • Storage performance for the critical SQL VM on Host B starts to suffer as a result

How Storage I/O Control works

Storage I/O Control solves this problem by enforcing storage resource controls at the data store level so all hosts and VMs in a cluster accessing a data store are taken into account when prioritizing VM access to storage resources. Therefore, a VM with Low or Normal shares will be throttled if higher-priority VMs on other hosts need more storage resources. Storage I/O Control can be enabled on each data store and, once enabled, uses a congestion threshold that measures latency in the storage subsystem. Once the threshold is reached, Storage I/O Control begins enforcing storage priorities on each host accessing the data store to ensure VMs with higher priority have the resources they need.
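To make that concrete, below is a minimal Python sketch of the general idea: a datastore-wide congestion threshold that, once crossed, divides the device queue among all VMs on the datastore in proportion to their disk shares. This is only an illustration of the concept, not VMware's actual algorithm; the share presets match vSphere's Low/Normal/High values and the 30ms figure reflects the default congestion threshold, but treat the specific numbers as assumptions for the example.

```python
# Conceptual illustration only -- NOT VMware's actual SIOC implementation.
# Shows how a datastore-wide latency threshold can trigger proportional-share
# throttling of each VM's slice of the device queue, regardless of host.

CONGESTION_THRESHOLD_MS = 30  # default SIOC congestion threshold (assumed for this example)
SHARE_PRESETS = {"Low": 500, "Normal": 1000, "High": 2000}  # vSphere disk-share presets

def allocate_queue_slots(vms, observed_latency_ms, device_queue_depth=64):
    """Divide a datastore's device queue across VMs from any host.

    Below the congestion threshold no throttling occurs; above it, queue
    slots are handed out in proportion to each VM's disk shares.
    """
    if observed_latency_ms < CONGESTION_THRESHOLD_MS:
        return {vm: device_queue_depth for vm in vms}

    total_shares = sum(SHARE_PRESETS[level] for level in vms.values())
    return {vm: max(1, round(device_queue_depth * SHARE_PRESETS[level] / total_shares))
            for vm, level in vms.items()}

# The scenario above: noncritical VMs on Host A, a critical SQL VM on Host B,
# all on Data Store 1, with datastore latency spiking to 45ms.
vms = {"HostA-batch-vm": "Normal", "HostA-file-vm": "Normal", "HostB-sql-vm": "High"}
print(allocate_queue_slots(vms, observed_latency_ms=45))
# {'HostA-batch-vm': 16, 'HostA-file-vm': 16, 'HostB-sql-vm': 32}
```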

Read the full article at searchvirtualstorage.com…


Storage I/O Bottlenecks in a Virtual Environment

Today I wanted to highlight another white paper that I wrote for SolarWinds, titled “Storage I/O Bottlenecks in a Virtual Environment”. I enjoyed writing this one the most as it digs really deep into the technical aspects of storage I/O bottlenecks. This white paper covers topics such as the effects of storage I/O bottlenecks, common causes, how to identify them and how to solve them. Below is an excerpt from the white paper; you can register and read the full paper over at the SolarWinds website.

There are several key statistics related to bottlenecks that should be monitored on your storage subsystem, but perhaps the most important is latency. Disk latency is defined as the time it takes for the selected disk sector to be positioned under the drive head so it can be read or written to. Once a VM makes a read or write to its virtual disk, that request must follow a path to make its way from the guest OS to the physical storage device. A bottleneck can occur at different points along that path, and different statistics can be used to help pinpoint where in the path the bottleneck is occurring. The figure below illustrates the path that data takes to get from the VM to the storage device.

[Figure: the path storage I/O takes from the VM to the physical storage device]

The storage I/O goes through the operating system as it normally would and makes its way to the device driver for the virtual storage adapter. From there it goes through the Virtual Machine Monitor (VMM) of the hypervisor, which emulates the virtual storage adapter that the guest sees. It travels through the VMkernel and through a series of queues before it gets to the device driver for the physical storage adapter in the host. For shared storage it continues out of the host on the storage network and makes its way to its final destination, the physical storage device. Total guest latency is measured from the point where the storage I/O enters the VMkernel to the point where it arrives at the physical storage device.

The total guest latency (GAVG/cmd, as it is referred to in the esxtop utility) is measured in milliseconds and consists of the combined values of kernel latency (KAVG/cmd) plus device latency (DAVG/cmd). The kernel latency includes all the time that I/O spends in the VMkernel before it exits to the destination storage device. Queue latency (QAVG/cmd) is part of the kernel latency but is also measured independently. The device latency is the total amount of time that I/O spends in the VMkernel physical driver code and the physical storage device; in other words, once I/O leaves the VMkernel and goes to the storage device, this is the amount of time that it takes to get there and return. A guest latency value that is too high is a pretty clear indication that you have a storage I/O bottleneck, which can cause severe performance issues. Once total guest latency exceeds 20ms you will notice the performance of your VMs suffer, and as it approaches 50ms your VMs will become unresponsive.
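As a quick illustration (not part of the white paper), here is a small Python sketch that applies these esxtop counters and thresholds; the 20ms and 50ms guest-latency levels come from the text above, while the 2ms checks on KAVG and QAVG are common rules of thumb rather than VMware-published hard limits.

```python
# Minimal sketch for interpreting esxtop storage latency counters.
# GAVG/cmd = KAVG/cmd + DAVG/cmd; QAVG/cmd is the queuing portion of KAVG.
# Thresholds: 20ms/50ms GAVG per the text above; 2ms KAVG/QAVG is a rule of thumb.

def assess_storage_latency(kavg_ms, davg_ms, qavg_ms=0.0):
    gavg_ms = kavg_ms + davg_ms  # total guest latency
    findings = []

    if gavg_ms >= 50:
        findings.append(f"GAVG {gavg_ms:.1f}ms: severe - VMs will become unresponsive")
    elif gavg_ms >= 20:
        findings.append(f"GAVG {gavg_ms:.1f}ms: VM performance will noticeably suffer")

    if davg_ms >= 20:
        findings.append(f"DAVG {davg_ms:.1f}ms: delay is mostly at the physical storage device")
    if kavg_ms >= 2:
        findings.append(f"KAVG {kavg_ms:.1f}ms: I/O is spending time inside the VMkernel")
    if qavg_ms >= 2:
        findings.append(f"QAVG {qavg_ms:.1f}ms: queuing - check adapter/LUN queue depths")

    return findings or [f"GAVG {gavg_ms:.1f}ms: latency looks healthy"]

# Example: low kernel latency but a slow device points at the storage array/SAN path.
for finding in assess_storage_latency(kavg_ms=0.5, davg_ms=28.0, qavg_ms=0.1):
    print(finding)
```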

The full paper, including information on the key statistics related to storage I/O bottlenecks, is available here


Managing storage for virtual desktops

Implementing a virtual desktop infrastructure (VDI) involves many critical considerations, but storage may be the most vital. User experience can often determine the success of a VDI implementation, and storage is perhaps the one area that has the most impact on the user experience. If you don’t design, implement and manage your VDI storage properly, you’re asking for trouble.

VDI’s impact on storage

The biggest challenge for storage in VDI environments is accommodating the periods of peak usage when storage I/O is at its highest. The most common event that can cause an I/O spike is the “boot storm” that occurs when a large group of users boots up and loads applications simultaneously. Initial startup of a desktop is a very resource-intensive activity with the operating system and applications doing a lot of reading from disk. Multiplied by hundreds of desktops, the amount of storage I/O generated can easily bring a storage array to its knees. Boot storms aren’t just momentary occurrences — they can last from 30 minutes to two hours and can have significant impact.

After users boot up, log in and load applications, storage I/O typically settles down; however, events like patching desktops, antivirus updates/scans and the end-of-day user log off can also cause high I/O. Having a data storage infrastructure that can handle these peak periods is therefore critical.

Cost is another concern. The ROI with VDI isn’t the same as with server virtualization, so getting adequate funding can be a challenge. A proper storage infrastructure for VDI can be very costly, and to get the required I/O operations per second (IOPS) you may have to purchase more data storage capacity than you’ll need.
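To see why IOPS rather than capacity often drives the purchase, here is a hedged back-of-the-envelope sizing sketch in Python; none of it comes from the article, and the per-desktop IOPS figures, RAID-5 write penalty of 4 and per-spindle numbers are illustrative assumptions you would replace with measurements from your own environment.

```python
import math

# Back-of-the-envelope VDI storage sizing (illustrative assumptions only).
# Shows how peak (boot storm) IOPS, not capacity, can dictate spindle count.

def backend_iops(desktops, iops_per_desktop, write_ratio, raid_write_penalty=4):
    """Translate front-end desktop IOPS into back-end disk IOPS (RAID-5 penalty assumed)."""
    frontend = desktops * iops_per_desktop
    reads = frontend * (1 - write_ratio)
    writes = frontend * write_ratio * raid_write_penalty
    return reads + writes

DESKTOPS = 500
BOOT_STORM_IOPS = 50            # assumed per-desktop IOPS during a boot storm
WRITE_RATIO = 0.2               # assumed mostly-read boot workload
DISK_IOPS, DISK_GB = 180, 600   # assumed 15K SAS spindle
GB_PER_DESKTOP = 30             # assumed per-desktop disk footprint

need = backend_iops(DESKTOPS, BOOT_STORM_IOPS, WRITE_RATIO)
spindles_for_iops = math.ceil(need / DISK_IOPS)
spindles_for_capacity = math.ceil(DESKTOPS * GB_PER_DESKTOP / DISK_GB)

print(f"Back-end IOPS needed during a boot storm: {need:,.0f}")
print(f"Spindles needed for IOPS: {spindles_for_iops}, for capacity only: {spindles_for_capacity}")
# With these assumptions IOPS drives the spindle count (223 vs. 25 disks).
```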

Expect to spend more time on administration, too. Hundreds or thousands of virtual disks for the virtual desktops will have to be created and maintained, which can be a difficult and time-consuming task.

Read the full article in the March 2011 issue of Storage Magazine…


10 tips for managing storage for virtual servers and virtual desktops


Server and desktop virtualization have provided relatively easy ways to consolidate and conserve, allowing a reduction in physical systems. But these technologies have also introduced problems for data storage managers who need to effectively configure their storage resources to meet the needs of a consolidated infrastructure.

Server virtualization typically concentrates the workloads of many servers onto a few shared storage devices, often creating bottlenecks as many virtual machines (VMs) compete for storage resources. With desktop virtualization this concentration becomes even denser as many more desktops are typically running on a single host. As a result, managing storage in a virtual environment is an ongoing challenge that usually requires the combined efforts of desktop, server, virtualization and storage administrators to ensure that virtualized servers and desktops perform well. Here are 10 tips to help you better manage your storage in virtual environments.

Read the full article at searchstorage.com…


Affordable shared storage options for VMware vSphere

You can use VMware vSphere without a shared storage device, but doing so limits the number of advanced features that you can use. Certain features in vSphere require that a virtual machine (VM) reside on a shared storage device that is accessible by multiple hosts concurrently. These features include high availability (HA), Distributed Resource Scheduler (DRS), Fault Tolerance (FT) and VMotion, which provide high/continuous availability as well as load balancing and live migration of virtual machines. For some storage administrators these features may only be nice to have, but they are essential for many IT environments that cannot afford to have VMs down for an extended amount of time.

A few years ago, VMware shared storage typically meant using a Fibre Channel (FC) SAN, which was expensive, required specialized equipment and was complicated to manage. In recent years, other shared storage options that utilize standard network components to connect to storage devices have become popular and make for affordable, easy-to-use shared storage solutions. The protocols used for this are iSCSI and NFS, both of which are natively supported in vSphere. The performance of NFS and iSCSI is similar, but both can vary depending on a variety of factors, including the data storage device characteristics, network speed/latency and host server resources. Since both protocols use software built into vSphere to manage the storage connections over the network, there is some minimal CPU resource usage on the host server.

Read the full article at searchsmbstorage.com…


New storage toys and new storage woes

In the last week I’ve gotten some new storage devices, both at work and at home. Unfortunately I’ve experienced problems with both, and it’s not been as fun a week as I would have liked. The new work storage device is an HP MSA-2312i, which is the iSCSI version of HP’s Modular Storage Array line.


The new home storage device is an Iomega ix4-200d 4TB, which is a relatively low-cost network storage device that supports iSCSI, NFS and much more.


The MSA problems have all been firmware related; basically, it kept getting stuck in a firmware upgrade loop. If you own one or plan on buying one, I would say don’t upgrade the firmware unless you have a reason to, and if you do, make sure you schedule downtime. I’ll be sharing some tips for upgrading the firmware on that unit later on.


The Iomega problems are presumably from a flaky hard drive; not long after I plugged the unit in and started configuring it, I received a message that drive 3 was missing. After talking to support and rebuilding the RAID group the problem briefly went away and then came right back. They graciously waived the $25 replacement fee (it’s brand new, they better!) but refused to expedite the shipping unless I paid $40 (again, it’s brand new; you would think they would want to make a new customer happy). Having a flaky drive in a brand new unit doesn’t exactly inspire confidence in storing critical data on the device, so I’ll have to see how it goes once the drive is replaced.


So look forward to some upcoming posts on using and configuring both devices. The MSA will be used as part of a Domino virtualization project and I’ll be doing performance testing on it in various configurations. The Iomega I’ll be using with VMware Workstation 7 on my home computer as both iSCSI & NFS datastores.


Storage Links

General

Storage Caching Solutions (Architecting IT)
(Alternative) VM swap file locations Q&A (Frank Denneman)
A look at the ESX I/O stack (NetApp)
vSphere Storage: Features and Enhancements (Professional VMware)
Storage Changes in VMware ESX 3.5 Update 4 (Stephen Foskett)
Storage Changes in the VMware vSphere 4 Family (Stephen Foskett)
Storage Changes in VMware vSphere 5 (Stephen Foskett)
Storage Changes in VMware vSphere 5.1 (Stephen Foskett)
Everything you need to know about vSphere and data storage (Storage Magazine)
10 tips for managing storage for virtual servers and virtual desktops (Storage Magazine)
Best Practices for Designing Highly Available Storage – Host Perspective (Stretch Cloud)
Best Practices for Designing Highly Available Storage – FC SAN Perspective (Stretch Cloud)
Best Practices for Designing Highly Available Storage – iSCSI SAN Perspective (Stretch Cloud)
Best Practices for Designing Highly Available Storage – Network for iSCSI (Stretch Cloud)
Best Practices for Designing Highly Available Storage – Storage LUNs (Stretch Cloud)
Best Practices for Designing Highly Available Storage – Overall Storage System (Stretch Cloud)
Storage Landscape (Part 1) – Disruptive Technology Trends (Talking Tech with SHD)
Storage Landscape (Part 2) – Storage Architectures (Talking Tech with SHD)
Software-Defined Storage & Hyper-Converged Infrastructure; Two Sides of the Same Coin? (Talking Tech with SHD)
vSphere and 2TB LUNs – changes from VI3.x (Virtual Geek)
Data Compression, Deduplication, & Single Instance Storage (Virtual Storage Guy)
vSphere Introduces the Plug-n-Play SAN (Virtual Storage Guy)
Storage Basics – Part I: An Introduction (VM Today)
Storage Basics – Part II: IOPS (VM Today)
Storage Basics – Part III: RAID (VM Today)
Storage Basics – Part IV: Interface (VM Today)
Storage Basics – Part V: Controllers, Cache and Coalescing (VM Today)
Storage Basics – Part VI: Storage Workload Characterization (VM Today)
Storage Basics – Part VII: Storage Alignment (VM Today)
Storage Basics – Part VIII – The Difference in Consumer vs. Enterprise Class Disks and Storage Arrays (VM Today)
Storage Basics – Part IX: Alternate IOPS Formula (VM Today)
Improve Storage Efficiency and Management with VMware vSphere 4 (VMware)
What’s New in VMware vSphere 4.0 – Storage (VMware Tech Paper)
What’s New in VMware vSphere 4.1 – Storage (VMware Tech Paper)
What’s New in VMware vSphere 5.0 – Storage (VMware Tech Paper)
What’s New in VMware vSphere 5.1 – Storage (VMware Tech Paper)
VMware vSphere 4: Exchange Server on NFS, iSCSI, and Fibre Channel (VMware)
Advanced VMkernel Settings for Disk Storage (VMware vSphere Blog)
Ye olde Controller, Target, Device (ctd) Numbering (VMware vSphere Blog)
How much storage can I present to a Virtual Machine? (VMware vSphere Blog)
Best Practice: How to correctly remove a LUN from an ESX host (VMware vSphere Blog)
Should I defrag my Guest OS? (VMware vSphere Blog)
Misaligned VMs? (VMware vSphere Blog)
Guest OS Partition Alignment (VMware vSphere Blog)
Storage Oversubscription Technologies (Xtravirt)
Why Queue Depth matters! (Yellow Bricks)

Performance

vStorage: Troubleshooting Performance (Professional VMware)
Benchmarking Storage for VMware (Peacon)
Avoid Storage I/O Bottlenecks With vCenter and Esxtop (Petri)
Calculate IOPS in a storage array (Tech Republic)
Performance Troubleshooting VMware vSphere – Storage (Virtual Insanity)
Comparing the I/O Performance of 2 or more Virtual Machines SSD, SATA & IOmeter (Vinf.net)
Storage System Performance Analysis with Iometer (VMware)
Poor performance and high disk latency with some storage configurations (VMware KB)
vSOM: A Framework for Virtual Machine-centric Analysis of End-to-End Storage IO Operations (VMware Labs Tech Paper)
Performance Best Practices for VMware vSphere 5.1 (VMware Tech Paper)
Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel (VMware Tech Paper)
Storage Workload Characterization and Consolidation in Virtualized Environments (VMware Tech Pub)
An Analysis of Disk Performance in VMware ESX Virtual Machines (VMware Tech Pub)
IOPs? (Yellow Bricks)

pvSCSI

More Bang for Your Buck with PVSCSI (Part 1) (Virtual Insanity)
More Bang for your Buck with PVSCSI (Part 2) (Virtual Insanity)
Boot from Paravirtualized SCSI Adapter (Xtravirt)
Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters (VMware KB)

Raw Device Mappings (RDMs)

Physical RDM to VMDK Migration Feature (VMware vSphere Blog)
Migrating RDMs, and a question for RDM Users (VMware vSphere Blog)

Storage DRS

Storage DRS, more than I/O load-balancing only (VMware vSphere Blog)
Storage DRS Affinity & Anti-Affinity Rules (VMware vSphere Blog)
Storage DRS and Storage Array Feature Interoperability (VMware vSphere Blog)
VMware vSphere Storage DRS Interoperability (VMware Tech Paper)
Storage DRS: Automated Management of Storage Devices In a Virtualized Datacenter (VMware Labs Tech Paper)
Should I use many small LUNs or a couple large LUNs for Storage DRS? (Yellow Bricks)

Storage I/O Control

Debunking Storage I/O Control Myths (VMware vSphere Blog)
Storage I/O Control Enhancements in vSphere 5.0 (VMware vSphere Blog)
Using both Storage I/O Control & Network I/O Control for NFS (VMware vSphere Blog)
Performance Implications of Storage I/O Control-Enabled NFS Datastores in VMware vSphere 5.0 (VMware Tech Paper)
Storage IO Control Technical Overview & Considerations for Deployment (VMware Tech Paper)

Storage vMotion

Answering some Storage vMotion questions (VMware vSphere Blog)
Storage vMotion of a Virtualized SQL Server Database (VMware Tech Paper)
The Design and Evolution of Live Storage Migration in VMware ESX (VMware Tech Paper)

Virtual Disk

To Zero or not to Zero, that is the question… (Defined By Software)
Extending an EagerZeroedThick Disk (VMware vSphere Blog)
2TB VMDKs on Upgraded VMFS-3 to VMFS-5. Really? (VMware vSphere Blog)
Virtual Disk Format 5.0 (VMware Tech Paper)

VDI

VDI User Sizing Example (EMC)
Space-Efficient Sparse Virtual Disks and VMware View (My Virtual Cloud)
Clearing up Space-Efficient Virtual Disk questions (My Virtual Cloud)
Managing storage for virtual desktops (Storage Magazine)
VDI and Storage: Deep Impact (Virtuall.eu)
A closer look at the View Storage Accelerator [incl. Video] (VMware vSphere Blog)
VMFS File Locking and Its Impact in VMware View 5.1 (VMware Tech Paper)
Storage Considerations for VMware View (VMware Tech Paper)
View Storage Accelerator in VMware View 5.1 (VMware Tech Paper)

VMFS

VMFS Extents – Are they bad, or simply misunderstood? (VMware vSphere Blog)
Exactly how big can I make a single-extent VMFS-5 datastore? (VMware vSphere Blog)
Some useful vmkfstools ‘hidden’ options (VMware vSphere Blog)
VMFS Locking Uncovered (VMware vSphere Blog)
VMFS Heap Considerations (VMware vSphere Blog)
Something I didn’t know about VMFS sub-blocks (VMware vSphere Blog)
Upgraded VMFS-5: Automatic Partition Format Change (VMware vSphere Blog)
What could be writing to a VMFS when no Virtual Machines are running? (VMware vSphere Blog)
VMware vStorage Virtual Machine File System – Technical Overview and Best Practices (VMware Tech Paper)
VMware vSphere VMFS-5 Upgrade Considerations (VMware Tech Paper)

vSphere Storage Appliance (VSA)

vSphere Storage Appliance (VSA) Resilience – Network Outage Scenario #1: Back End (VMware vSphere Blog)
vSphere Storage Appliance (VSA) Resilience – Network Outage Scenario #2: Front End (VMware vSphere Blog)
vSphere Storage Appliance (VSA) – Introduction (VMware vSphere Blog)
vSphere Storage Appliance – Can I run vCenter on a VSA cluster member? (VMware vSphere Blog)
Performance of VSA in VMware vSphere 5 (VMware Tech Paper)
VMware vSphere Storage Appliance Technical Deep Dive (VMware Tech Paper)
What’s New in VMware vSphere® Storage Appliance 5.1 (VMware Tech Paper)

vSphere 5.0 Blog Series

vSphere 5.0 Storage Features Part 1 – VMFS-5 (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 2 – Storage vMotion (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 3 – VAAI (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 4 – Storage DRS – Initial Placement (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 5 – Storage DRS – Balance On Space Usage (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 6 – Storage DRS – Balance On I/O Metrics (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 7 – VMFS-5 & GPT (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 8 – Handling the All Paths Down (APD) condition (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 9 – Snapshot Consolidate (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 10 – VASA – vSphere Storage APIs – Storage Awareness (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 11 – Profile Driven Storage (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 12 – iSCSI Multipathing Enhancements (VMware vSphere Blog)

vSphere 5.1 Blog Series

vSphere 5.1 Storage Enhancements – Part 1: VMFS-5 (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 2: SE Sparse Disks (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 3: vCloud Director (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 4: All Paths Down (APD) (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 5: Storage Protocols (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 6: IODM & SSD Monitoring (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 7: Storage vMotion (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 8: Storage I/O Control (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 9: Storage DRS (Cormac Hogan)
vSphere 5.1 Storage Enhancements – Part 10: 5 Node MSCS Support (Cormac Hogan)

vSphere 5.5 Posts

vSphere 5.5 Storage Enhancements Part 1: 62TB VMDK (Cormac Hogan)
Hot-Extending Large VMDKs in vSphere 5.5 (Cormac Hogan)
vSphere 5.5 Storage Profiles Are Now Storage Policies (Everything Should Be Virtual)
vSphere 5.5 Jumbo VMDK Deep Dive (Long White Clouds)
vSphere 5.5 Windows Failover Clustering Support (Long White Clouds)
VMworld 2013: What’s new in vSphere 5.5: Storage (Mike Laverick)
vSphere 5.5 UNMAP Deep Dive (VMware vEvangelist)
64TB VMDKs? Yes we can. (VM Today)
VMware’s Strategy for Software-Defined Storage (VMware CTO Blog)
What’s New in vSphere 5.5 Storage (VMware vSphere Blog)
Comparing VMware VSA & VMware Virtual SAN (VMware vSphere Blog)
VMworld 2013: STO5715-S – Software-defined Storage – The Next Phase in the Evolution of Enterprise Storage (VMworld TV)
vSphere 5.5 Improvements Part 3 – Lions, Tigers, and 62TB VMDKs (Wahl Network)
vSphere 5.5 nuggets: changes to disk.terminateVMOnPDLDefault (Yellow Bricks)
vSphere 5.5 nuggets: Change Disk.SchedNumReqOutstanding per device! (Yellow Bricks)
