
Archive

Posts Tagged ‘Storage’

The Top 10 Things You MUST Know About Storage for vSphere

August 8th, 2013

If you’re going to VMworld this year, be sure to check out my session STO5545 - The Top 10 Things You MUST Know About Storage for vSphere, which will be on Tuesday, Aug. 27th from 5:00-6:00 pm. The session was showing full last week, but they must have moved it to a larger room as it is currently showing 89 seats available. This session is crammed full of storage tips, best practices, design considerations and lots of other information related to storage. So sign up now before it fills up again, and I look forward to seeing you there!


Author: esiebert7625 Categories: News Tags: , ,

vSphere Storage I/O Control: What it does and how to configure it

November 28th, 2011

Storage is the slowest and most complex host resource, and when bottlenecks occur, they can bring your virtual machines (VMs) to a crawl. In a VMware environment, Storage I/O Control provides much-needed control of storage I/O and should be used to ensure that the performance of your critical VMs is not affected by VMs from other hosts when there is contention for I/O resources.

Storage I/O Control was introduced in vSphere 4.1, taking storage resource controls built into vSphere to a much broader level. In vSphere 5, Storage I/O Control has been enhanced with support for NFS data stores and clusterwide I/O shares.

Prior to vSphere 4.1, storage resource controls could be set on each host at the VM level using shares that provided priority access to storage resources. While this worked well enough for individual hosts, many hosts commonly share the same data stores, and because each host controlled VM access to disk resources independently, VMs on one host could starve VMs on other hosts of disk resources.

The following example illustrates the problem:

  • Host A has a number of noncritical VMs on Data Store 1, with disk shares set to Normal
  • Host B runs a critical SQL Server VM that is also located on Data Store 1, with disk shares set to High
  • A noncritical VM on Host A starts generating intense disk I/O due to a job that was kicked off; since Host A has no resource contention, the VM is given all the storage I/O resources it needs
  • Data Store 1 starts experiencing a lot of demand for I/O resources from the VM on Host A
  • Storage performance for the critical SQL VM on Host B starts to suffer as a result

How Storage I/O Control works

Storage I/O Control solves this problem by enforcing storage resource controls at the data store level so all hosts and VMs in a cluster accessing a data store are taken into account when prioritizing VM access to storage resources. Therefore, a VM with Low or Normal shares will be throttled if higher-priority VMs on other hosts need more storage resources. Storage I/O Control can be enabled on each data store and, once enabled, uses a congestion threshold that measures latency in the storage subsystem. Once the threshold is reached, Storage I/O Control begins enforcing storage priorities on each host accessing the data store to ensure VMs with higher priority have the resources they need.
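To make the shares mechanism concrete, here is a minimal Python sketch of datastore-wide, latency-triggered proportional throttling. It is my own illustration rather than VMware’s implementation; the Low/Normal/High share values and the 30 ms congestion threshold match the vSphere defaults, but the VM names and IOPS figures are made up.

```python
# Illustrative sketch only -- not VMware's actual implementation.
# Models datastore-wide proportional I/O throttling driven by a
# latency-based congestion threshold, the way Storage I/O Control works.

SHARE_VALUES = {"Low": 500, "Normal": 1000, "High": 2000}  # vSphere default share values

def allocate_iops(vms, datastore_iops_capacity, observed_latency_ms,
                  congestion_threshold_ms=30):
    """Return a per-VM IOPS cap for one datastore.

    vms: list of (vm_name, share_level) tuples for ALL VMs on the
         datastore, across every host -- not just a single host.
    """
    if observed_latency_ms < congestion_threshold_ms:
        return {name: None for name, _ in vms}  # no contention, no throttling

    total_shares = sum(SHARE_VALUES[level] for _, level in vms)
    return {
        name: datastore_iops_capacity * SHARE_VALUES[level] / total_shares
        for name, level in vms
    }

# Hypothetical version of the scenario above: noncritical VMs on Host A
# and a critical SQL Server VM on Host B, all on Data Store 1.
vms = [("fileserver-a", "Normal"), ("batchjob-a", "Normal"),
       ("sql-critical-b", "High")]
print(allocate_iops(vms, datastore_iops_capacity=4000, observed_latency_ms=45))
# The High-shares SQL VM gets twice the IOPS of each Normal-shares VM.
```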

Read the full article at searchvirtualstorage.com…

Author: esiebert7625 Categories: News Tags: , ,

Storage I/O Bottlenecks in a Virtual Environment

November 17th, 2011

Today I wanted to highlight another white paper that I wrote for SolarWinds, titled “Storage I/O Bottlenecks in a Virtual Environment”. I enjoyed writing this one the most as it digs really deep into the technical aspects of storage I/O bottlenecks. This white paper covers topics such as the effects of storage I/O bottlenecks, common causes, how to identify them and how to solve them. Below is an excerpt from the white paper; you can register and read the full paper over at the SolarWinds website.

There are several key statistics related to bottlenecks that should be monitored on your storage subsystem, but perhaps the most important is latency. Disk latency is defined as the time it takes for the selected disk sector to be positioned under the drive head so it can be read or written to. When a VM issues a read or write to its virtual disk, that request must follow a path to make its way from the guest OS to the physical storage device. A bottleneck can occur at different points along that path, and different statistics can be used to help pinpoint where in the path it is occurring. The figure below illustrates the path that data takes to get from the VM to the storage device.

[Figure: the storage I/O path from the guest OS through the virtual storage adapter, VMM and VMkernel to the physical storage device]

The storage I/O goes through the operating system as it normally would and makes its way to the device driver for the virtual storage adapter. From there it goes through the Virtual Machine Monitor (VMM) of the hypervisor, which emulates the virtual storage adapter that the guest sees. It then travels through the VMkernel and a series of queues before it gets to the device driver for the physical storage adapter in the host. For shared storage it continues out of the host over the storage network and makes its way to its final destination, the physical storage device. Total guest latency is measured from the point where the storage I/O enters the VMkernel to the point where it arrives at the physical storage device.

The total guest latency (GAVG/cmd, as it is referred to in the esxtop utility) is measured in milliseconds and consists of the combined values of kernel latency (KAVG/cmd) plus device latency (DAVG/cmd). The kernel latency includes all the time that I/O spends in the VMkernel before it exits to the destination storage device. Queue latency (QAVG/cmd) is part of the kernel latency but is also measured independently. The device latency is the total amount of time that I/O spends in the VMkernel physical driver code and the physical storage device; in other words, once I/O leaves the VMkernel this is the time it takes to reach the storage device and return. A guest latency value that is too high is a pretty clear indication that you have a storage I/O bottleneck that can cause severe performance issues. Once total guest latency exceeds 20 ms you will notice the performance of your VMs suffer, and as it approaches 50 ms your VMs will become unresponsive.
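The arithmetic behind these counters is simple, and a small illustrative Python helper (not part of esxtop; the sample values are invented) makes the troubleshooting logic explicit: compare KAVG and DAVG to see whether time is being lost inside the VMkernel or out at the physical storage device.

```python
# Illustrative sketch of the esxtop latency arithmetic described above.
# GAVG/cmd (guest latency) = KAVG/cmd (kernel) + DAVG/cmd (device),
# with QAVG/cmd (queue latency) counted inside KAVG.

def diagnose_latency(kavg_ms, davg_ms, qavg_ms=0.0):
    gavg_ms = kavg_ms + davg_ms
    findings = [f"GAVG {gavg_ms:.1f} ms (KAVG {kavg_ms:.1f} + DAVG {davg_ms:.1f})"]

    if gavg_ms >= 50:
        findings.append("VMs will likely appear unresponsive (>= 50 ms)")
    elif gavg_ms >= 20:
        findings.append("VM performance will noticeably suffer (>= 20 ms)")

    # Where is the time being spent?
    if davg_ms > kavg_ms:
        findings.append("most time is spent at or below the physical device "
                        "(check the array, fabric and LUN)")
    elif kavg_ms > 2:  # sustained KAVG above a couple of ms usually means queuing
        findings.append(f"I/O is queuing inside the VMkernel (QAVG {qavg_ms:.1f} ms); "
                        "check adapter and LUN queue depths")
    return findings

for line in diagnose_latency(kavg_ms=3.5, davg_ms=28.0, qavg_ms=3.1):
    print(line)
```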

The full paper, including information on the key statistics related to storage I/O bottlenecks, is available here

Author: esiebert7625 Categories: News Tags: , , ,

Managing storage for virtual desktops

March 21st, 2011

Implementing a virtual desktop infrastructure (VDI) involves many critical considerations, but storage may be the most vital. User experience can often determine the success of a VDI implementation, and storage is perhaps the one area that has the most impact on the user experience. If you don’t design, implement and manage your VDI storage properly, you’re asking for trouble.

VDI’s impact on storage

The biggest challenge for storage in VDI environments is accommodating the periods of peak usage when storage I/O is at its highest. The most common event that can cause an I/O spike is the “boot storm” that occurs when a large group of users boots up and loads applications simultaneously. Initial startup of a desktop is a very resource-intensive activity with the operating system and applications doing a lot of reading from disk. Multiplied by hundreds of desktops, the amount of storage I/O generated can easily bring a storage array to its knees. Boot storms aren’t just momentary occurrences — they can last from 30 minutes to two hours and can have significant impact.

After users boot up, log in and load applications, storage I/O typically settles down; however, events like patching desktops, antivirus updates/scans and the end-of-day user log off can also cause high I/O. Having a data storage infrastructure that can handle these peak periods is therefore critical.

Cost is another concern. The ROI with VDI isn’t the same as with server virtualization, so getting adequate funding can be a challenge. A proper storage infrastructure for VDI can be very costly, and to get the required I/O operations per second (IOPS) you may have to purchase more data storage capacity than you’ll need.
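As a rough illustration of that tradeoff, the sketch below runs a back-of-the-envelope sizing calculation. The per-desktop IOPS, read/write mix, RAID write penalty and per-spindle IOPS figures are placeholder rules of thumb rather than measurements, but they show how sizing for the boot-storm peak drives up the spindle count (and therefore raw capacity) well beyond what typical usage would suggest.

```python
# Back-of-the-envelope VDI storage sizing -- illustrative numbers only.
# Shows why meeting peak (boot-storm) IOPS can force you to buy more
# spindles, and therefore more raw capacity, than you otherwise need.

def spindles_needed(desktops, iops_per_desktop, read_pct,
                    raid_write_penalty, iops_per_spindle):
    """Estimate disk count from front-end IOPS and the RAID write penalty."""
    front_end = desktops * iops_per_desktop
    backend = (front_end * read_pct
               + front_end * (1 - read_pct) * raid_write_penalty)
    disks = -(-backend // iops_per_spindle)  # ceiling division
    return front_end, backend, disks

# Hypothetical 500-desktop pool, sized for a boot storm vs. steady state.
for label, iops, read_pct in (("boot storm", 50, 0.8), ("steady state", 10, 0.2)):
    fe, be, disks = spindles_needed(desktops=500, iops_per_desktop=iops,
                                    read_pct=read_pct,
                                    raid_write_penalty=4,    # RAID 5
                                    iops_per_spindle=180)    # rough 15K-drive figure
    print(f"{label}: {fe:.0f} front-end IOPS -> {be:.0f} back-end IOPS -> {disks:.0f} disks")
```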

Expect to spend more time on administration, too. Hundreds or thousands of virtual disks for the virtual desktops will have to be created and maintained, which can be a difficult and time-consuming task.

Read the full article in the March 2011 issue of Storage Magazine…

Author: esiebert7625 Categories: News Tags: , ,

10 tips for managing storage for virtual servers and virtual desktops

October 5th, 2010


Server and desktop virtualization have provided relatively easy ways to consolidate and conserve, allowing a reduction in physical systems. But these technologies have also introduced problems for data storage managers who need to effectively configure their storage resources to meet the needs of a consolidated infrastructure.

Server virtualization typically concentrates the workloads of many servers onto a few shared storage devices, often creating bottlenecks as many virtual machines (VMs) compete for storage resources. With desktop virtualization this concentration becomes even denser as many more desktops are typically running on a single host. As a result, managing storage in a virtual environment is an ongoing challenge that usually requires the combined efforts of desktop, server, virtualization and storage administrators to ensure that virtualized servers and desktops perform well. Here are 10 tips to help you better manage your storage in virtual environments.

Read the full article at searchstorage.com…

Author: esiebert7625 Categories: News Tags: ,

Affordable shared storage options for VMware vSphere

August 9th, 2010

You can use VMware vSphere without a shared storage device, but doing so limits the number of advanced features that you can use with it. Certain features in vSphere require that a virtual machine (VM) reside on a shared storage device that is accessible by multiple hosts concurrently. These features include high availability (HA), Distributed Resource Scheduler (DRS), Fault Tolerance (FT) and VMotion, which provide high/continuous availability as well as workload load balancing and live migration of virtual machines. For some storage administrators these features may be merely nice to have, but they are essential for many IT environments that cannot afford to have VMs down for an extended amount of time.

A few years ago, VMware shared storage typically meant using a Fibre Channel (FC) SAN, which was expensive, required specialized equipment and was complicated to manage. In recent years, other shared storage options that use standard network components to connect to storage devices have become popular and make for affordable, easy-to-use shared storage solutions. The protocols used for this are iSCSI and NFS, both of which are natively supported in vSphere. The performance of NFS and iSCSI is similar, but both can vary depending on a variety of factors, including the data storage device characteristics, network speed/latency and host server resources. Because both protocols use software built into vSphere to manage the storage connections over the network, there is some minimal CPU resource usage on the host server as a result.

Read the full article at searchsmbstorage.com…

Author: esiebert7625 Categories: News Tags: , ,

New storage toys and new storage woes

February 10th, 2010

In the last week I’ve gotten some new storage devices, both at work and at home. Unfortunately I’ve experienced problems with both, and it’s not been as fun a week as I would have liked. The new work storage device is an HP MSA-2312i, which is the iSCSI version of HP’s Modular Storage Array line.


The new home storage device is an Iomega ix4-200d 4TB, a relatively low-cost network storage device that supports iSCSI & NFS and much more.


The MSA problems have all been firmware related; basically, it kept getting stuck in a firmware upgrade loop. If you own one or plan on buying one, I would say don’t upgrade the firmware unless you have a reason to, and if you do, make sure you schedule downtime. I’ll be sharing some tips for upgrading the firmware on that unit later on.


The Iomega problems presumably stem from a flaky hard drive; not long after I plugged the unit in and started configuring it, I received a message that drive 3 was missing. After talking to support and rebuilding the RAID group, the problem briefly went away and then came right back. They graciously waived the $25 replacement fee (it’s brand new, they’d better!) but refused to expedite the shipping unless I paid $40 (again, it’s brand new; you would think they would want to make a new customer happy). Having a flaky drive in a brand new unit doesn’t exactly inspire confidence in storing critical data on the device, so I’ll have to see how it goes once the drive is replaced.


So look forward to some upcoming posts on using and configuring both devices. The MSA will be used as part of a Domino virtualization project, and I’ll be doing performance testing on it in various configurations. The Iomega I’ll be using with VMware Workstation 7 on my home computer to provide both iSCSI & NFS datastores.

Author: esiebert7625 Categories: News Tags:

Storage Links

May 11th, 2009

General

(Alternative) VM swap file locations Q&A (Frank Denneman)
A look at the ESX I/O stack (NetApp)
vSphere Storage: Features and Enhancements (Professional VMware)
Storage Changes in VMware ESX 3.5 Update 4 (Stephen Foskett)
Storage Changes in the VMware vSphere 4 Family (Stephen Foskett)
Storage Changes in VMware vSphere 5 (Stephen Foskett)
Storage Changes in VMware vSphere 5.1 (Stephen Foskett)
Everything you need to know about vSphere and data storage (Storage Magazine)
10 tips for managing storage for virtual servers and virtual desktops (Storage Magazine)
Best Practices for Designing Highly Available Storage - Host Perspective (Stretch Cloud)
Best Practices for Designing Highly Available Storage - FC SAN Perspective (Stretch Cloud)
Best Practices for Designing Highly Available Storage - iSCSI SAN Perspective (Stretch Cloud)
Best Practices for Designing Highly Available Storage - Network for iSCSI (Stretch Cloud)
Best Practices for Designing Highly Available Storage - Storage LUNs (Stretch Cloud)
Best Practices for Designing Highly Available Storage - Overall Storage System (Stretch Cloud)
vSphere and 2TB LUNs - changes from VI3.x (Virtual Geek)
Data Compression, Deduplication, & Single Instance Storage (Virtual Storage Guy)
vSphere Introduces the Plug-n-Play SAN (Virtual Storage Guy)
Storage Basics - Part I: An Introduction (VM Today)
Storage Basics - Part II: IOPS (VM Today)
Storage Basics - Part III: RAID (VM Today)
Storage Basics - Part IV: Interface (VM Today)
Storage Basics - Part V: Controllers, Cache and Coalescing (VM Today)
Storage Basics - Part VI: Storage Workload Characterization (VM Today)
Storage Basics - Part VII: Storage Alignment (VM Today)
Storage Basics - Part VIII - The Difference in Consumer vs. Enterprise Class Disks and Storage Arrays (VM Today)
Storage Basics - Part IX: Alternate IOPS Formula (VM Today)
Improve Storage Efficiency and Management with VMware vSphere 4 (VMware)
What’s New in VMware vSphere 4.0 - Storage (VMware Tech Paper)
What’s New in VMware vSphere 4.1 - Storage (VMware Tech Paper)
What’s New in VMware vSphere 5.0 - Storage (VMware Tech Paper)
What’s New in VMware vSphere 5.1 - Storage (VMware Tech Paper)
VMware vSphere 4: Exchange Server on NFS, iSCSI, and Fibre Channel (VMware)
Advanced VMkernel Settings for Disk Storage (VMware vSphere Blog)
Ye olde Controller, Target, Device (ctd) Numbering (VMware vSphere Blog)
How much storage can I present to a Virtual Machine? (VMware vSphere Blog)
Best Practice: How to correctly remove a LUN from an ESX host (VMware vSphere Blog)
Should I defrag my Guest OS? (VMware vSphere Blog)
Misaligned VMs? (VMware vSphere Blog)
Guest OS Partition Alignment (VMware vSphere Blog)
Storage Oversubscription Technologies (Xtravirt)

vStorage APIs

VAAI (Array Integration)

VAAI Comparison - Block versus NAS (Cormac Hogan)
VMware vSphere 4.1 vStorage APIs for Array Integration (VAAI) understanding (GeekSilver)
VAAI and Deployment - a Practical Example (NTPro.nl)
VMware VAAI pros and cons and the hidden fourth primitive (SearchVMware.com)
VMware VAAI Storage Array Support in Plain English (Stephen Foskett)
A Complete List of VMware VAAI Primitives (Stephen Foskett)
Exploring the performance benefits of VAAI (The Lower Case w)
vSphere 4.1 - What do the “vStorage APIs for Array Integration” mean to you? (Virtual Geek)
vSphere 4.1 and vStorage APIs for Array Integration (VAAI) (Virtual Storage Guy)
VMware vSphere VAAI Demo with NetApp (Virtual Storage Guy)
VAAI and the Unlimited VMs per Datastore Urban Myth (Virtualization Evangelist)
vStorage APIs for Array Integration (VAAI) (VMTN)
vStorage APIs for Array Integration FAQ (VMware KB)
VAAI Offloads and KAVG Latency (VMware vSphere Blog)
Low Level VAAI Behaviour (VMware vSphere Blog)
A brief history of VAAI & how VMware is contributing to T10 standards (VMware vSphere Blog)
VAAI Offload Failures & the role of the VMKernel Data Mover (VMware vSphere Blog)
VAAI Thin Provisioning Block Reclaim/UNMAP Issue (VMware vSphere Blog)
VAAI Thin Provisioning Block Reclaim/UNMAP In Action (VMware vSphere Blog)
VAAI Thin Provisioning Block Reclaim/UNMAP is back in 5.0U1 (VMware vSphere Blog)
VMware vSphere Storage APIs - Array Integration (VAAI) (VMware Tech Paper)
VAAI sweetness (Yellow Bricks)
Using ESXTOP to check VAAI primitive stats (Yellow Bricks)
vStorage APIs for Array Integration aka VAAI (Yellow Bricks)

VASA (Storage Awareness)

What is VMware VASA? Not Much (Yet) (Stephen Foskett)

VAMP (Multi-pathing)

Pluggable Storage Architecture (PSA) Deep-Dive - Part 1 (Cormac Hogan)
Pluggable Storage Architecture (PSA) Deep-Dive - Part 2 (Cormac Hogan)
VMware PSP and SATP in Plain English (Stephen Foskett)
Configure VMware ESX(i) Round Robin on EMC Storage (boche.net)
What’s that ALUA exactly? (Yellow Bricks)
Pluggable Storage Architecture, exploring the next version of ESX/vCenter (Yellow Bricks)
A couple important (ALUA and SRM) notes (Virtual Geek)
Understanding more about NMP RR and iooperationslimit=1 (Virtual Geek)
vSphere Introduces the Plug-n-Play SAN (Virtual Storage Guy)
VMware PSA, MPP, NMP, PSP, MRU, … And Tutti Quanti! (DeinosCloud)
Best practices for HP EVA, vSphere 4 and Round Robin multi-pathing (Ivobeerens.nl)
vSphere Round Robin MultiPathing (Phil the Virtualizer)
Multipathing policies in ESX 4.x (VMware KB)
Did you know that you can now prioritize I/O Paths in the event of a failover? (VMware vSphere Blog)
Configuration Settings for ALUA Devices (VMware vSphere Blog)
Path failure and related SATP/PSP behaviour (VMware vSphere Blog)

N-Port ID Virtualization (NPIV)

NPIV: N-Port ID Virtualization (VMware vSphere Blog)
Configuring and Troubleshooting N-Port ID Virtualization (VMware Tech Paper)

Performance

vStorage: Troubleshooting Performance (Professional VMware)
Benchmarking Storage for VMware (Peacon)
Avoid Storage I/O Bottlenecks With vCenter and Esxtop (Petri)
Calculate IOPS in a storage array (Tech Republic)
Performance Troubleshooting VMware vSphere - Storage (Virtual Insanity)
Comparing the I/O Performance of 2 or more Virtual Machines SSD, SATA & IOmeter (Vinf.net)
Storage System Performance Analysis with Iometer (VMware)
Poor performance and high disk latency with some storage configurations (VMware KB)
vSOM: A Framework for Virtual Machine-centric Analysis of End-to-End Storage IO Operations (VMware Labs Tech Paper)
Performance Best Practices for VMware vSphere 5.1 (VMware Tech Paper)
Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel (VMware Tech Paper)
Storage Workload Characterization and Consolidation in Virtualized Environments (VMware Tech Pub)
An Analysis of Disk Performance in VMware ESX Virtual Machines (VMware Tech Pub)
IOPs? (Yellow Bricks)

Protocols

General

iSCSI Beats Fibre Channel at Interop 2011 (Video)
Storage Protocol Comparison - A vSphere Perspective (VMware vSphere Blog)
Storage Protocol Comparison (VMware Tech Paper)
The Debate - Why NFS vs Block Access for OS/Applications (vTexan)

iSCSI

A “Multivendor Post” on using iSCSI with VMware vSphere (Virtual Geek)
A “Multivendor Post” to help our mutual iSCSI customers using VMware (Virtual Geek)
Using iSCSI storage with vSphere (Storage Magazine)
Configuring VMware vSphere Software iSCSI with Dell Equallogic PS Series Storage (Equallogic)
How to Configure Openfiler iSCSI Storage for VMware ESX 4 (Xtravirt)
Putting your storage to the test - Part 1 iSCSI on Iomega IX4-200D (Gabe’s Virtual World)
How-to connect ESX4, vSphere to Openfiler iSCSI NAS (Vladan.fr)
EMC Virtual Infrastructure for Exchange 2007 using vSphere 4.0 and iSCSI (EMC)
How to setup basic software iSCSI for VMware vSphere (video)
iSCSI Design Considerations and Deployment Guide (VMware)
Converged Storage Infrastructure for VMware vSphere 4.1 (Broadcom)
Why can you not use NIC Teaming with iSCSI Binding? (VMware vSphere Blog)
How to configure ESXi to boot via Software iSCSI? (VMware vSphere Blog)
iSCSI Advanced Settings (VMware vSphere Blog)
Configuring Proper vSphere iSCSI Multipathing via Binding VMkernel Ports [Video] (Wahl Network)

NFS

NFS Best Practices - Part 1: Networking (Cormac Hogan)
NFS Best Practices - Part 2: Advanced Settings (Cormac Hogan)
NFS Best Practices - Part 3: Interoperability Considerations (Cormac Hogan)
NFS Best Practices - Part 4: Sizing Considerations (Cormac Hogan)
A “Multivendor Post” to help our mutual NFS customers using VMware (Virtual Geek)
Using NAS for virtual machines (Storage Magazine)
Putting your storage to the test Part 2 NFS on Iomega IX4-200D (Gabe’s Virtual World)
Best Practices for Running vSphere on NFS Storage (VMware)
NFS Block Sizes, Transfer Sizes & Locking (VMware vSphere Blog)
Load Balancing with NFS and Round-Robin DNS (VMware vSphere Blog)
Best Practices for Running vSphere on NFS Storage (VMware Tech Paper)
Republished: Dispelling Some VMware over NFS Myths (Scott Lowe)
Reasons For Using NFS With VMware Virtual Infrastructure (VM/ETC)

Fiber Channel over Ethernet (FCoE)

“FCoE vs. iSCSI - Making the Choice” from Interop Las Vegas 2011 (Stephen Foskett)
VMware ESX FCoE CNA Compatibility in Plain English (Stephen Foskett)
How FCoE and iSCSI Fit into Your Storage Strategy (NetApp Tech OnTap)
Fibre Channel over Ethernet in the Data Center: An Introduction (Cisco)
VMware’s Software FCoE (Fibre Channel over Ethernet) Adapter (VMware vSphere Blog)

SAN/Fiber Channel

SAN System Design and Deployment Guide (VMware)
Configuring and Troubleshooting N-Port ID Virtualization (VMware)

pvSCSI

More Bang for Your Buck with PVSCSI (Part 1) (Virtual Insanity)
More Bang for your Buck with PVSCSI (Part 2) (Virtual Insanity)
Boot from Paravirtualized SCSI Adapter (Xtravirt)
Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters (VMware KB)

Raw Device Mappings (RDMs)

Physical RDM to VMDK Migration Feature (VMware vSphere Blog)
Migrating RDMs, and a question for RDM Users (VMware vSphere Blog)

Snapshots

Storage vMotion, Storage DRS & Virtual Machine Snapshots Interoperability (VMware vSphere Blog)

Solid-State Disks (SSDs)

Understanding TLC NAND (Anandtech)
An Introduction to Flash Technology (Cormac Hogan)
Storage 101: Flash Storage Myths and Facts (Enterprise Storage Guide)
What is the difference between MLC Flash and eMLC Flash, and is it required for Enterprise Flash? (Hu’s Blog)
Anatomy of SSDs (Linux Magazine)
NAND Flash Primer (Micron)
How Solid State Drives are Made (Micron)
Flash Memory Reliability - Read, Program, and Erase Latency Versus Endurance Cycling (NASA)
Increasing Flash SSD Reliability (Silicon Systems)
E‐MLC vs. MLC NAND Flash (Smart Storage Systems)
Making the case for solid-state storage (Storage Magazine)
The truth about SSD performance benchmarks (Storage Magazine)
NAND vs. NOR Flash Memory Technology Overview (Toshiba)
FAQ: Using SSDs with ESXi (VMware Front Experience)

Storage DRS

Storage DRS, more than I/O load-balancing only (VMware vSphere Blog)
Storage DRS Affinity & Anti-Affinity Rules (VMware vSphere Blog)
Storage DRS and Storage Array Feature Interoperability (VMware vSphere Blog)
VMware vSphere Storage DRS Interoperability (VMware Tech Paper)
Storage DRS: Automated Management of Storage Devices In a Virtualized Datacenter (VMware Labs Tech Paper)
Should I use many small LUNs or a couple large LUNs for Storage DRS? (Yellow Bricks)

Storage I/O Control

Debunking Storage I/O Control Myths (VMware vSphere Blog)
Storage I/O Control Enhancements in vSphere 5.0 (VMware vSphere Blog)
Using both Storage I/O Control & Network I/O Control for NFS (VMware vSphere Blog)
Performance Implications of Storage I/O Control-Enabled NFS Datastores in VMware vSphere 5.0 (VMware Tech Paper)
Storage IO Control Technical Overview & Considerations for Deployment (VMware Tech Paper)

Storage vMotion

Answering some Storage vMotion questions (VMware vSphere Blog)
Storage vMotion of a Virtualized SQL Server Database (VMware Tech Paper)
The Design and Evolution of Live Storage Migration in VMware ESX (VMware Tech Paper)

Thin Provisioning

Dynamic Storage Provisioning (VMware)
Thin Provisioning - What’s the scoop? (VMware vSphere Blog)
Thin Provisioning Storage Choices (Virtualization Evangelist)
VMware ESX 4 Reclaiming Thin Provisioned disk Unused Space (Virtualization Team)

Vendor Specific

EMC

Storage Protocol Choices & Storage Best Practices for vSphere (EMC Presentation)
Using EMC VNX Storage with VMware vSphere (EMC Tech Paper)
EMC Powerpath/VE for VMware vSphere Best Practices planning (EMC Tech Paper)
EMC VPLEX - vSphere 5.1 Stretched Cluster Best Practices (Virtualization Team)
Implementing EMC Symmetrix Virtual Provisioning with VMware vSphere (VMware Tech Paper)

HP

HP P4000 LeftHand SAN Solutions with VMware vSphere Best Practices (VMware Tech Paper)
VMware vSphere VAAI for HP LeftHand Storage performance benefits (HP Tech Paper)
Best practices for deploying VMware vSphere 5 with VMware High Availability and Fault Tolerance on HP LeftHand Multi-Site SAN cluster (HP Tech Paper)
Implementing VMware vSphere Metro Storage Cluster with HP LeftHand Multi-Site storage (HP Tech Paper)
HP 3PAR Storage and VMware vSphere 5 best practices (HP Tech Paper)
VMware vSphere VAAI for HP 3PAR Storage performance benefits (VMware Tech Paper)
3PAR Utility Storage with VMware vSphere (VMware Tech Paper)
HP Enterprise Virtual Array Storage and VMware vSphere 4.0, 4.1 and 5.x configuration best practices (HP Tech Paper)

NetApp

NetApp TR-3808 - VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS (NetApp)
NetApp Storage Best Practices for VMware vSphere (Netapp)

Virtual Disk

Extending an EagerZeroedThick Disk (VMware vSphere Blog)
2TB VMDKs on Upgraded VMFS-3 to VMFS-5. Really? (VMware vSphere Blog)
Virtual Disk Format 5.0 (VMware Tech Paper)

Virtual Volumes (vVols)

Virtual Volumes (VVOLs) Tech Preview [with video] (VMware vSphere Blog)
What is Software Defined Storage? A VMware TMM Perspective (VMware vSphere Blog)

VDI

VDI User Sizing Example (EMC)
Space-Efficient Sparse Virtual Disks and VMware View (My Virtual Cloud)
Clearing up Space-Efficient Virtual Disk questions (My Virtual Cloud)
Managing storage for virtual desktops (Storage Magazine)
VDI and Storage: Deep Impact (Virtuall.eu)
A closer look at the View Storage Accelerator [incl. Video] (VMware vSphere Blog)
VMFS File Locking and Its Impact in VMware View 5.1 (VMware Tech Paper)
Storage Considerations for VMware View (VMware Tech Paper)
View Storage Accelerator in VMware View 5.1 (VMware Tech Paper)

VMFS

VMFS Extents - Are they bad, or simply misunderstood? (VMware vSphere Blog)
Exactly how big can I make a single-extent VMFS-5 datastore? (VMware vSphere Blog)
Some useful vmkfstools ‘hidden’ options (VMware vSphere Blog)
VMFS Locking Uncovered (VMware vSphere Blog)
VMFS Heap Considerations (VMware vSphere Blog)
Something I didn’t know about VMFS sub-blocks (VMware vSphere Blog)
Upgraded VMFS-5: Automatic Partition Format Change (VMware vSphere Blog)
What could be writing to a VMFS when no Virtual Machines are running? (VMware vSphere Blog)
VMware vStorage Virtual Machine File System - Technical Overview and Best Practices (VMware Tech Paper)
VMware vSphere VMFS-5 Upgrade Considerations (VMware Tech Paper)

vSphere Metro Storage Cluster (vMSC)

vSphere Metro Storage Cluster solutions and PDL’s? (VMware vSphere Blog)
VMware vSphere Metro Storage Cluster Case Study (VMware Tech Paper)
vSphere Metro Storage Cluster - Uniform vs Non-Uniform (Yellow Bricks)

vSphere Storage Appliance (VSA)

vSphere Storage Appliance (VSA) Resilience - Network Outage Scenario #1: Back End (VMware vSphere Blog)
vSphere Storage Appliance (VSA) Resilience - Network Outage Scenario #2: Front End (VMware vSphere Blog)
vSphere Storage Appliance (VSA) - Introduction (VMware vSphere Blog)
vSphere Storage Appliance - Can I run vCenter on a VSA cluster member? (VMware vSphere Blog)
Performance of VSA in VMware vSphere 5 (VMware Tech Paper)
VMware vSphere Storage Appliance Technical Deep Dive (VMware Tech Paper)
What’s New in VMware vSphere® Storage Appliance 5.1 (VMware Tech Paper)

vSphere 5.0 Blog Series

vSphere 5.0 Storage Features Part 1 - VMFS-5 (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 2 - Storage vMotion (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 3 - VAAI (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 4 - Storage DRS - Initial Placement (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 5 - Storage DRS - Balance On Space Usage (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 6 - Storage DRS - Balance On I/O Metrics (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 7 - VMFS-5 & GPT (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 8 - Handling the All Paths Down (APD) condition (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 9 - Snapshot Consolidate (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 10 - VASA - vSphere Storage APIs - Storage Awareness (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 11 - Profile Driven Storage (VMware vSphere Blog)
vSphere 5.0 Storage Features Part 12 - iSCSI Multipathing Enhancements (VMware vSphere Blog)

vSphere 5.1 Blog Series

vSphere 5.1 Storage Enhancements - Part 1: VMFS-5 (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 2: SE Sparse Disks (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 3: vCloud Director (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 4: All Paths Down (APD) (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 5: Storage Protocols (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 6: IODM & SSD Monitoring (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 7: Storage vMotion (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 8: Storage I/O Control (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 9: Storage DRS (Cormac Hogan)
vSphere 5.1 Storage Enhancements - Part 10: 5 Node MSCS Support (Cormac Hogan)

Author: esiebert7625 Categories: vSphere Links Tags:

Storage Links

May 7th, 2009
General

All-in-one Storage and Virtualization Learning Guide
Layers of Virtual Storage in VMware VI3: Configuration without Confusion
Choosing and Architecting Storage for your Environment (VMworld 2006)
iSCSI, NAS and IP Storage Configuration for VMware ESX Server (VMworld 2006)
A look at the ESX I/O Stack
Improving system performance cost effectively (10 vs. 15K drives)
Network Appliance and VMware ESX Server 3.0 Storage Best Practices
Network Appliance and VMware ESX Server 3.0 Building a Virtual Infrastructure from Server to Storage
Get to know RAID levels
What is new in Storage in VI3 Release 3.5
Comparison of Storage Protocol Performance
Performance Report: Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems
Design, Build and Manage your SAN Environment using VI3 (VMworld 2007)
NFS & iSCSI - Performance Characterization and Best Practices (VMworld 2007)
Tuning ESX Server 3.5 for Better Storage Performance by Modifying the Maximum I/O Block Size
Easy and Efficient Disk I/O Workload Characterization in VMware ESX Server
Which storage protocol is best?
Internal vs. external guest virtual machine storage
Storage IO crash consistency with VMware products

Fibre Channel/SAN

SAN Configuration Guide
Fibre Channel SAN Configuration Guide
SAN Conceptual and Design Basics
SAN System Design and Deployment Guide
Using VMware ESX Server with Hitachi Data Systems NSC or USP Storage
VMware Infrastructure 3, HP StorageWorks Best Practices
Using multi-pathing in ESX Server
Round Robin Load Balancing
Obtaining LUN pathing information for ESX Server 3
Queue Depth
How to check if a LUN is locked by a host
Scripting Queue Depth in a QLogic/EMC environment
EnableResignature and/or DisallowSnapshotLUN
Script for Balancing Multipathing in ESX 3.x
Increasing the queue depth?
Understanding MRU behavior in VMware ESX 3.x
Are You Stuck with a Single REALLY busy array port when using ESX?? Script for Balancing Multipathing in ESX 3.x

iSCSI

Ethernet-based Storage Configuration
Configuring iSCSI in a VMware 3 environment
How to create an inexpensive iSCSI SAN for VMware ESX
iSCSI Design Considerations and Deployment Guide
Best Practices in an EqualLogic iSCSI Virtualized SAN
Virtualized iSCSI SANS: Flexible, Scalable Enterprise Storage for Virtual Infrastructures
iSCSI in VMware ESX 3
Connect VMware ESX Server to a free iSCSI SAN using Openfiler
Use OpenFiler as your Free VMware ESX SAN Server
iSCSI: Superior Storage for Virtualization
How to configure OpenFiler v2.3 iSCSI Storage for use with VMware ESX
A “Multivendor Post” to help our mutual iSCSI customers using VMware
Installing LeftHand’s virtual storage appliance in VMware ESX
Connecting VMware ESX to LeftHand’s virtual storage appliance
Configuring and troubleshooting basic Software iSCSI setup

NFS

VMware over NFS
Why VMware over NetApp NFS
Important Note Regarding VMware over NFS
VMware and NFS on NetApp Filers
NFS Datastores and what was their BIG issue…
NFS.LockDisable what should it be 1 or 0
Advisory for advanced VMkernel parameter NFS.LockDisabled
Mythbusters - NFS for VMware
VMFS vs. NFS for VMware Infrastructure?

Author: esiebert7625 Categories: General/VI3 Links Tags:

Top 10 things you must read about VMware Storage

April 24th, 2008
  1. SAN Configuration Guide - A big guide from VMware on SAN background, installation and management.
  2. SAN System Design and Deployment Guide - Another big guide from VMware on designing and deploying SAN environments to use with VI3.
  3. Design, Build and Manage your SAN Environment using VI3 - A VMworld 2007 presentation from VMware that discusses how VMware Infrastructure 3 can solve SAN management problems by providing solutions such as keeping multiple hosts/clients from sprawling, multipathing management without the high cost and complexity, a cluster file system for HA solutions, LUN security, and storage consolidation. This is a vendor-neutral session that gives SAN architects and administrators ideas on ways to best deploy VMware Infrastructure 3 on a SAN.
  4. NFS & iSCSI - Performance Characterization and Best Practices - A VMworld 2007 presentation from VMware that provides a performance-oriented overview of the technology along with performance troubleshooting techniques and best-practice recommendations for typical ESX Server deployments. Up-to-date performance data, a review of currently available performance optimizations and a preview of features in upcoming releases are also presented.
  5. Choosing and Architecting Storage for your Environment - A VMworld 2006 presentation on selecting and architecting the right storage solution for your ESX environment.
  6. ESX Storage Virtualization Insights - A TSX 2007 presentation on the ESX storage stack, VMFS vs. RDM and multi-pathing.
  7. Network Appliance and VMware ESX Server 3.0 Storage Best Practices - A white paper from Netapp with general best practices and recommendations on using storage with ESX.
  8. iSCSI, NAS and IP Storage Configuration for VMware ESX Server - A VMworld 2006 presentation on using iSCSI and NAS instead of a SAN with ESX.
  9. Comparison of Storage Protocol Performance - A VMware performance study comparing Fibre Channel, Hardware iSCSI, Software iSCSI and NFS.
  10. Configuring iSCSI in a VMware 3 environment - A white paper from VMware on using and configuring iSCSI in your ESX environment.
Author: esiebert7625 Categories: Top 10 List Tags: ,