The polls will open next week for the annual VMware/virtualization top blog voting, so if you want your site included, make sure I have your blog listed on my vLaunchPad. This year will be a bit different: instead of just a top 25, we’ll also have categories, much like many awards shows do. Some of the categories will be Best New Blog, Best Storage Blog, Blogger You Most Want To Meet, etc. I’m still playing with the categories, so sound off in the comments if you have any ideas.
Nov 02 2011
So long, farewell, auf wiedersehen, goodbye … to Eric Siebert
It’s always difficult to say goodbye, especially when you’ve been doing something you enjoy for many years.
I started writing for SearchVMware.com more than three years ago, in March 2008. Up to that point I had no writing experience, and I had never really thought of myself as a writer. On a whim, I responded to a post on TechTarget’s website about writing opportunities. They gave me a shot, and I found that I was pretty good at it; since then I’ve written more than 100 articles for SearchVMware.com and more than 100 posts for the Virtualization Pro blog. My writing at TechTarget got me a lot of exposure and opportunities over the years, and it really helped my career develop and blossom.
Looking back at my writing over the years, I thought I would highlight some of the articles that I enjoyed doing the most and am most proud of. I tend to write pretty detailed and lengthy articles, because I like to give complete coverage to a product or feature, and as a result many of my articles were turned into series. My very first assignment was to write about VMware Converter, which ended up being a three-part series. I was a little green starting out, but one of the nice things about having editors is that they take your work and make it look even better.
Read the full article at searchvmware.com…
Nov 02 2011
vSphere 5 features for VARs: vCenter Server Appliance, Auto Deploy (Part 2)
While part one of this vSphere 5 upgrade series dealt with the challenges that solution providers may face during a migration, there are also benefits. Thanks to their simplicity and improved capabilities, the vSphere 5 features listed below can be assets during a customer upgrade:
vCenter Server appliance
vSphere 5 supports running vCenter Server as a prebuilt Linux virtual appliance. This makes deploying and maintaining vCenter Server much easier, and it also means vCenter Server no longer has to run on a Windows operating system (OS). The virtual appliance comes packaged with an embedded IBM DB2 Express database and supports only Oracle or DB2 for external databases. This will appeal to customers that mainly use Linux, because vCenter Server no longer requires any Microsoft products.
Solution providers can use a Web user interface (UI) for configuration, and the appliance is compatible with the new Flex-based Web UI that is part of vSphere 5. Flex is an Adobe framework for Web development that enables rich functionality in Web browsers, which lets VMware build a Web administration UI that mimics the functionality of the Windows-based vSphere Client. The previous Web UI in vSphere 4 used basic HTML and offered far fewer features than the new Flex-based Web UI in vSphere 5.
Read the full article at searchsystemschannel.com…
Nov 02 2011
vSphere 5 upgrade challenges: Licensing and ESXi (Part 1)
Because of its new features and enhancements, many of your customers are likely eager to start planning their vSphere 5 upgrade, and it’s important to know the new caveats and issues before you let them dive right in.
Solution providers can ensure a smooth transition to vSphere 5 by taking note of these important modifications and understanding what they mean to individual customer environments:
New licensing
Perhaps the biggest vSphere 5 deviation from previous versions is that your customers will need to deal with the new licensing model.
vSphere 4 licensing was fairly straightforward: licenses were bought per CPU socket, and you could run an unlimited number of virtual machines (VMs) on a host. That model is no more in vSphere 5. While licenses are still sold per CPU socket, each license now comes with a fixed amount of virtual RAM (vRAM) that can be assigned to VMs. Depending on the environment, this could force a customer to spend thousands of dollars on additional licenses to stay compliant with the new vSphere 5 licensing. The new model favors scale-out architectures with a greater number of hosts that each have fewer resources: fewer VMs run on each host, which results in less vRAM usage per host. When you try to scale up with hosts that have large amounts of RAM, the additional licenses needed to cover all of that RAM can get very costly.
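To make the trade-off concrete, here is a minimal Python sketch of the licensing math. The per-license vRAM entitlement and price are hypothetical placeholders, since the actual figures vary by vSphere 5 edition:

    # Minimal sketch of vSphere 5 vRAM-pool licensing math. The entitlement
    # and price are hypothetical placeholders; actual values vary by edition.
    VRAM_PER_LICENSE_GB = 32   # hypothetical per-license vRAM entitlement
    PRICE_PER_LICENSE = 3000   # hypothetical list price per CPU-socket license

    def licenses_needed(total_sockets, total_vram_gb):
        # Licenses must cover every CPU socket AND the pooled vRAM in use.
        by_vram = -(-total_vram_gb // VRAM_PER_LICENSE_GB)  # ceiling division
        return max(total_sockets, by_vram)

    # Scale-up: one big host with 2 sockets and 512 GB of vRAM assigned to VMs.
    scale_up = licenses_needed(total_sockets=2, total_vram_gb=512)    # 16
    # Scale-out: 8 smaller hosts, 2 sockets each, 64 GB of vRAM apiece.
    scale_out = licenses_needed(total_sockets=16, total_vram_gb=512)  # 16

    print(f"Scale-up:  {scale_up} licenses, ${scale_up * PRICE_PER_LICENSE}")
    print(f"Scale-out: {scale_out} licenses, ${scale_out * PRICE_PER_LICENSE}")

Both designs end up at 16 licenses here, but the scale-up host is paying for 14 licenses beyond its 2 CPU sockets purely to cover vRAM, which is exactly the penalty described above.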
Read the full article at searchsystemschannel.com…
Oct 28 2011
Setting up VMware Auto Deploy for customers (Part 2)
You learned about VMware Auto Deploy’s benefits in part one of our two-part series, but it’s also important to know the vSphere 5 feature’s nuts and bolts and how to set it up for customers.
Auto Deploy takes advantage of the Preboot Execution Environment (PXE) boot feature that is present in many physical network interface cards (NICs). This allows a server to boot from a remote image file using only a physical NIC and without local storage.
A server booting with PXE first obtains an IP address using Dynamic Host Configuration Protocol (DHCP) and then loads a boot image from a Trivial File Transfer Protocol (TFTP) server. Auto Deploy uses PXE booting to download an ESXi image file to a host, and its components define which image a host should use, along with customizations and host-specific configuration information.
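The sequence is easier to follow laid out as code. This Python sketch is purely illustrative: the addresses, option values and file names are made up, and the stub functions stand in for real DHCP and TFTP exchanges; it just traces the order of operations a PXE-booting host follows:

    # Illustrative, self-contained trace of the PXE boot flow that Auto Deploy
    # relies on. All addresses and file names are hypothetical examples.
    def dhcp_request():
        # Stub for the DHCP exchange; a real server would return a lease
        # carrying the two PXE-specific options shown here.
        return {
            "ip_address": "192.168.1.50",
            "next_server": "192.168.1.10",    # DHCP option 66: TFTP server
            "boot_filename": "undionly.kpxe", # DHCP option 67: boot loader file
        }

    def tftp_download(server, filename):
        # Stub for the TFTP transfer of the small initial boot loader.
        return f"boot loader '{filename}' fetched from {server}"

    def pxe_boot_host():
        # 1. The NIC's PXE firmware broadcasts a DHCP request and receives an
        #    IP address plus the location of the boot loader.
        lease = dhcp_request()
        # 2. The host pulls the initial boot loader over TFTP.
        print(tftp_download(lease["next_server"], lease["boot_filename"]))
        # 3. The boot loader contacts the Auto Deploy server, which matches the
        #    host to an image profile and streams the ESXi image over HTTP.
        print("handing off to the Auto Deploy server for the ESXi image...")

    pxe_boot_host()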
Auto Deploy relies on software depots that are used to store collections of vSphere Installation Bundles (VIB) and image files that are accessed remotely via HTTP to deploy or update hosts. VIB files are used to deploy the ESXi software and any hardware vendor customizations. The Auto Deploy Server uses the software depot to pull VIBs and image profiles when booting an ESXi host.
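As a rough mental model (this is not VMware’s actual tooling, which exposes depots and image profiles through Image Builder; the VIB names here are made up), a depot can be thought of as a set of VIBs from which image profiles are composed:

    # Rough mental model of a software depot: a set of named VIBs from which
    # an image profile is composed. Not VMware's actual API, just an
    # illustration of how the pieces relate.
    depot = {
        "esx-base": "core ESXi software",
        "net-e1000": "Intel NIC driver",
        "oem-vendor-tools": "hypothetical hardware-vendor customization",
    }

    # An image profile is simply a named selection of VIBs from the depot.
    image_profile = {
        "name": "ESXi-5.0-custom",
        "vibs": ["esx-base", "net-e1000", "oem-vendor-tools"],
    }

    # At boot time, the Auto Deploy server resolves the profile's VIB list
    # against the depot and streams the result to the host.
    for vib in image_profile["vibs"]:
        print(f"{image_profile['name']}: {vib} ({depot[vib]})")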
Read the full article at searchsystemschannel.com…
Oct 28 2011
How VMware Auto Deploy can ease VAR workloads (Part 1)
Although creating new hosts is a common server virtualization task, it is also a tedious process. Solution providers can use VMware Auto Deploy for vSphere 5 to do it much more efficiently.
Installing a single host isn’t hard for solution providers, but configuring multiple hosts can be time-consuming. A VAR that has to create a new vSphere host without VMware Auto Deploy typically has to do the following:
1. Locate the installation media for ESX or ESXi and boot the server from it.
2. Follow the setup prompts to set configuration information.
3. Complete the installation and then reboot the server.
4. Verify that management console networking works properly.
5. Add the host to vCenter Server.
6. Configure networking, storage, security and other settings.
In doing all this, you risk making mistakes during the build process and could configure hosts inconsistently. Consistency is very important in a virtual environment, from both a security and an operational perspective. Once you’ve built and configured a host, another challenge is backing up the host configuration data so that, if a problem occurs and you need to rebuild the host, you don’t have to start over from scratch; the sketch below shows one way scripting can address both problems.
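Even without Auto Deploy, scripting the post-install configuration is a common way to keep hosts consistent. In this minimal Python sketch, a plain dictionary stands in for a real configuration mechanism (host profiles, PowerCLI or the vSphere APIs would fill this role in practice), and all settings and host names are hypothetical:

    # Sketch of template-driven host configuration for consistency. Settings
    # and host names are hypothetical; in practice host profiles, PowerCLI
    # or the vSphere APIs would do the actual work.
    standard_config = {
        "ntp_server": "ntp.example.com",
        "syslog_host": "syslog.example.com",
        "vswitch0_nics": ["vmnic0", "vmnic1"],
        "lockdown_mode": False,
    }

    def configure_host(hostname, config):
        # In a real script each setting would become an API call against
        # the host; here we just record what would be applied.
        print(f"{hostname}: applied {len(config)} standard settings")
        return dict(config)

    # One template applied everywhere removes per-host drift, and the saved
    # records double as a backup of each host's configuration for rebuilds.
    inventory = {host: configure_host(host, standard_config)
                 for host in ["esx01", "esx02", "esx03"]}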
Read the full article at searchsystemschannel.com…
Oct 28 2011
vSphere 5’s Storage DRS and storage profile function deliver control over storage resources
The release of VMware Inc.’s vSphere 5 brings many exciting new features and enhancements to the virtualization platform, especially when it comes to storage. Two of the biggest new features in that area are Storage Distributed Resource Scheduler (DRS) and Profile-Driven Storage, which provide some much-needed control over storage resources.
In previous versions of vSphere, Distributed Resource Scheduler balanced VM workloads based on CPU and memory resource utilization. Storage DRS extends this capability to storage, enabling intelligent VM initial placement and load balancing based on storage I/O and capacity conditions within a cluster. Profile-Driven Storage, for its part, ensures that VMs are placed on storage tiers based on service-level agreements (SLAs), availability, performance and capabilities of the underlying storage platform. In this tip, we’ll examine both Storage DRS and the storage profile functionality in detail.
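As a rough illustration of the Profile-Driven Storage idea (this is not VMware’s actual interface; the tier names and capabilities are made up), placement boils down to matching a VM’s required storage capabilities against what each data store advertises:

    # Illustrative sketch of profile-driven placement: match a VM's storage
    # profile against data store capabilities. Names and tiers are made up.
    datastores = {
        "gold-ds":   {"replicated": True,  "ssd": True},
        "silver-ds": {"replicated": True,  "ssd": False},
        "bronze-ds": {"replicated": False, "ssd": False},
    }

    vm_profile = {"replicated": True, "ssd": True}  # e.g., a tier-1 database VM

    def compliant(capabilities, profile):
        # A data store is compliant if it offers every capability the
        # profile demands.
        return all(capabilities.get(cap, False)
                   for cap, needed in profile.items() if needed)

    matches = [name for name, caps in datastores.items()
               if compliant(caps, vm_profile)]
    print(f"Compliant data stores: {matches}")  # ['gold-ds']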
Storage DRS
Similar to the traditional DRS feature, Storage DRS uses a new type of cluster called a data store cluster, which is a collection of data stores aggregated into a single unit of consumption. By controlling all of the storage resources, Storage DRS allows intelligent initial placement of VMs as they are powered on, as well as the shifting of workloads from one storage resource to another when needed to ensure optimum performance and avoid I/O bottlenecks. In simpler terms, just as vMotion moves VMs from host to host, VMs can now be moved from data store to data store as well; the decision to move a VM from one data store to another is made by Storage DRS, which tells Storage vMotion to make the move.
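Conceptually, that decision weighs space utilization and I/O latency across the data stores in the cluster. The sketch below is a simplified stand-in for the logic, not VMware’s actual algorithm, and the thresholds and data stores are example values:

    # Simplified stand-in for a Storage DRS-style placement decision.
    # Not VMware's actual algorithm; thresholds and data stores are examples.
    SPACE_THRESHOLD = 0.80     # act when a data store passes 80% full
    LATENCY_THRESHOLD_MS = 15  # or when its I/O latency passes 15 ms

    datastores = [
        {"name": "ds01", "used": 0.85, "latency_ms": 18},
        {"name": "ds02", "used": 0.40, "latency_ms": 6},
        {"name": "ds03", "used": 0.55, "latency_ms": 9},
    ]

    def under_pressure(ds):
        return (ds["used"] > SPACE_THRESHOLD
                or ds["latency_ms"] > LATENCY_THRESHOLD_MS)

    # Initial placement: pick the data store with the most free space among
    # those not already under space or I/O pressure.
    healthy = [ds for ds in datastores if not under_pressure(ds)]
    target = min(healthy, key=lambda ds: ds["used"])
    print(f"Place new VM on {target['name']}")  # ds02 in this example

    # Load balancing: data stores over threshold become sources for
    # Storage vMotion moves.
    for ds in datastores:
        if under_pressure(ds):
            print(f"{ds['name']} exceeds thresholds; consider Storage vMotion")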
Read the full article at searchvirtualstorage.com…
Sep 22 2011
Choosing a virtualization hypervisor: Eight factors to consider
Selecting a virtualization hypervisor begins with an important choice: Do you need a hosted or bare-metal hypervisor? Once you decide which type of hypervisor you need, there are lots of factors to consider.
You want a virtualization hypervisor that’s compatible with your hardware, allows for simple management and gives you the performance your virtual infrastructure needs. You should also consider high availability, reliability and scalability. And of course, look into costs.
Here are eight considerations for choosing a virtualization hypervisor:
Performance
If you want high performance, a bare-metal virtualization hypervisor is really your only option. Bare-metal virtualization offers the least amount of resource overhead. Bare-metal virtualization hypervisors also have advanced resource controls that allow you to guarantee, prioritize and limit virtual machine (VM) resource usage.
Hosted hypervisors typically have limited or no resource controls, so VMs have to fight each other for resources. Unlike bare-metal virtualization, hosted hypervisors often have steep resource-overhead penalties, especially when operating system services, tools and applications are running on the host operating system.
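The resource controls mentioned above map to what VMware calls reservations, shares and limits. Here is a rough Python sketch of how shares-based prioritization divides a contended resource; the VM names and numbers are made up, and a real scheduler would also redistribute any capacity freed by a limit (omitted for brevity):

    # Rough sketch of shares-based CPU prioritization under contention, in
    # the spirit of bare-metal hypervisor resource controls. VM names and
    # numbers are made up for illustration.
    HOST_CPU_MHZ = 10000  # total CPU capacity available to VMs

    vms = {
        "prod-db":  {"shares": 2000, "limit_mhz": None},  # high priority
        "prod-web": {"shares": 1000, "limit_mhz": None},  # normal priority
        "test-box": {"shares": 500,  "limit_mhz": 1000},  # low priority, capped
    }

    def allocate(vms, capacity):
        # Under contention each VM gets capacity in proportion to its
        # shares; a per-VM limit then caps the grant.
        total_shares = sum(vm["shares"] for vm in vms.values())
        grants = {}
        for name, vm in vms.items():
            grant = capacity * vm["shares"] / total_shares
            if vm["limit_mhz"] is not None:
                grant = min(grant, vm["limit_mhz"])
            grants[name] = round(grant)
        return grants

    for name, mhz in allocate(vms, HOST_CPU_MHZ).items():
        print(f"{name}: {mhz} MHz")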
Read the full article at searchservervirtualization.com…