VMware released a new technical white paper to coincide with the announcement of VSAN 6.2 that goes into more detail on some of the new architecture features such as deduplication, compression, erasure coding and QoS. The paper is written by Jase McCarty and Jeff Hunter from VMware and provides the missing technical details on VSAN 6.2 that you won’t find in the VMware announcements. Be sure to read my complete overview of VSAN 6.2 first and then give the technical paper a read, and you’ll know everything you need to know about VSAN 6.2.
Feb 11 2016
Based on a lot of feedback from my last post about new requirements for blogs to participate in Top vBlog, I’m going to set the minimum post count to be in the voting at 10 posts for the 12-month period of 2015. Almost everyone thought 6 was too low, with most saying 10-12 was just about right. So if you only had 9 or fewer posts last year you won’t be in the running for Top vBlog. You can check the post counts for all the blogs over at Andreas’s site. Based on that requirement there will only be about 200 blogs in Top vBlog this year; last year there were over 400. If you are a new blog for 2015 and you didn’t get 10 posts in, let me know for special consideration.
I’m doing this to make the voting more fair; if you didn’t put the time in and do at least 10 posts last year, you probably shouldn’t be voted one of the Top vBlogs among all the other bloggers that did put in a lot of hard work. What usually happens each year is people are voted for based on name recognition regardless of what their contributions were for the year, which isn’t fair. If you ever saw the Eddie Murphy movie The Distinguished Gentleman, it highlights a similar pattern; the synopsis for that movie is a con man gets on the election ballot using a dead Congressman’s old campaign material and runs a low-budget campaign that appeals to name recognition, figuring most people do not pay much attention and simply vote for the “name you know.” He wins a slim victory and is off to Washington.
I have the vLaunchpad mostly cleaned up; I archived almost 60 blogs that have not blogged in the last year. If your blog is not listed there, use this form and I’ll get it up there in the next week. The new coin design is set and being made right now, and I expect to kick things off with VMTurbo as the official sponsor of Top vBlog 2016 in the next few weeks.
Feb 10 2016
VMware just announced a new release of VSAN, version 6.2, and this post will provide you with an overview of what is new in this release. Before we jump into that, let’s look at a brief history of VSAN so you can see how it has evolved over its fairly short life cycle.
- August 2011 – VMware officially becomes a storage vendor with the release of vSphere Storage Appliance 1.0
- August 2012 – VMware CTO Steve Herrod announces new Virtual SAN initiative as part of his VMworld keynote (47:00 mark of this recording)
- September 2012 – VMware releases version 5.1 of their vSphere Storage Appliance
- August 2013 – VMware unveils VSAN as part of VMworld announcements
- September 2013 – VMware releases VSAN public beta
- March 2014 – GA of VSAN 1.0 as part of vSphere 5.5 Update 1
- April 2014 – VMware announces EOA of vSphere Storage Appliance
- February 2015 – VMware releases version 6.0 of VSAN as part of vSphere 6, which includes the following enhancements: All-flash deployment model, increased scalability to 64 hosts, new on-disk format, JBOD support, new vsanSparse snapshot disk type, improved fault domains and improved health monitoring. Read all about it here.
- August 2015 – VMware releases version 6.1 of VSAN which includes the following enhancements: stretched cluster support, vSMP support, enhanced replication and support for 2-node VSAN clusters. Read all about it here.
With this 6.2 release VSAN turns 2 years old, and it has come a long way in those two years. Note that while VMware has announced VSAN 6.2 it is not yet available; if VMware operates in their traditional manner I suspect you will see it GA sometime in March as part of vSphere 6.0 Update 2. Let’s now dive into what’s new in VSAN version 6.2. After reading this post you should also check out VMware’s What’s New with VMware Virtual SAN 6.2 white paper for more detailed information.
VMware continues to expand the Ecosystem and tweak licensing
VMware is continually trying to expand the market for VSAN and has put a lot of effort into working with hardware partners to expand the ecosystem. You’ll notice a couple of key things here that have changed some of the factors that have held them back in the past. The first is a more flexible licensing and support model. In addition, VMware is now trying to get VSAN pre-installed on server hardware to make it even easier for customers to deploy. You’ll see support from Fujitsu, Hitachi and SuperMicro right away on this; I suspect you’ll also see Dell and Cisco at some point, but don’t hold your breath for HP Enterprise to do this.
In VSAN 6.1 licensing was split into Standard and Advanced, with the Advanced license getting you the ability to use the All-Flash deployment model. In VSAN 6.2 a new licensing tier, Enterprise, is added, which provides the ability to use Stretched Clustering and QoS (IOPS limits). Note the new de-dupe and compression features in VSAN 6.2 are included in Advanced; also, current Advanced customers are entitled to VSAN Enterprise.
You would sure hope so: VMware is now claiming 3,000 VSAN customers. Back in August with the release of VSAN 6.1 they claimed 2,000 customers, so if you do the math they have added 1,000 new VSAN customers in the past 6 months. Not bad growth, but I’m sure VMware would like to see that number a lot higher after 2 years of VSAN GA. VMware is also claiming “More Customers Enable HCI with VMware HCS than Competition”; I’m not sure if I believe that claim, and I wonder where they got the numbers to prove it.
We’ll dive into these areas deeper, but this gives you the quick overview of what’s new in VSAN 6.2 if you want to do the TL;DR thing. The big things are deduplication and compression, QoS and new RAID levels.
If you’re going to play in the storage big leagues you have to have these key features, and VSAN now has them, but only on the All-Flash VSAN deployment model. This is pretty much in line with what you see in the industry, as de-dupe and compression are a perfect match for SSDs, letting you make more efficient use of the limited capacity of SSD drives. VMware hasn’t provided a lot of detail on how their implementation works under the covers beyond this slide, but I suspect you will see a technical white paper on it at some point.
Note this deduplication is enabled at the cluster level, so you can’t pick and choose which VSAN hosts it will be enabled on. While it is inline, the de-dupe operation occurs after data is written to the write cache and before it is moved to the capacity tier; compression happens right after de-dupe. VMware refers to this method as “nearline” and it allows them to avoid wasting resources trying to de-dupe “hot” data that is frequently changing. The de-dupe block size is fixed at 4KB; storage industry block sizes tend to range from 4KB to 32KB, with many vendors choosing greater than 4KB. 4KB is definitely a lot more granular, which can result in higher de-dupe ratios.
Deduplication and compression are tied together with VSAN, meaning they work together and you can’t enable just one or the other. Of course, enabling deduplication and compression will add resource overhead to your hosts, as they are both CPU-intensive operations. VMware claims it is minimal (around 5%) as they are using LZ4 compression, which is designed to be extremely fast with minimal CPU overhead, but I’d like to see comparisons with this enabled and disabled to see how much impact it really has.
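To make the nearline flow concrete, here is a minimal Python sketch of fixed-block dedupe followed by compression. This is purely illustrative of the general technique, not VMware’s actual implementation; the `store` dictionary and SHA-1 fingerprinting are my own assumptions, and `zlib` stands in for LZ4 (which is not in the Python standard library).

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # VSAN's fixed 4KB dedupe block size


def dedupe_and_compress(data, store):
    """Nearline-style pass: split data into 4KB blocks, dedupe by
    fingerprint, then compress only the unique blocks.

    store: dict mapping fingerprint -> compressed block (the "capacity tier").
    Returns the list of fingerprints referencing the stored blocks.
    """
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()
        if digest not in store:
            # Only unique blocks are compressed and written to capacity
            store[digest] = zlib.compress(block)  # zlib stands in for LZ4
        refs.append(digest)
    return refs
```

For example, 8KB of repeating data produces two block references but only one stored block, which is where the space savings come from.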
VSAN has never required the use of any hardware RAID configured on the server side; you essentially use RAID-0 (no RAID) when configuring your disks and then VSAN handles redundancy by doing its own RAID at the VM level. Prior to 6.2 there was only one option for this, which was essentially RAID-1 (mirroring), where whole copies of a VM are written to additional hosts for redundancy in case of a host failure. While that worked OK, it consumed a lot of extra disk capacity on hosts as well as more host overhead.
With 6.2 VMware has introduced two new RAID levels, RAID-5 and RAID-6, which improve efficiency and reduce capacity requirements. These new RAID levels are only available on the All-Flash deployment model and can be enabled on a per-VM level. They refer to these methods as “Erasure Coding,” which is different from traditional RAID in the way that data is broken up and written. Erasure coding is supposed to be more efficient than RAID when reconstructing data, but the downside is that it can be more CPU intensive. These new RAID levels work much like their equivalent traditional disk RAID levels, where parity data is striped across multiple hosts. In 6.2 these new RAID levels do not support stretched clustering, but VMware expects to support that later on.
RAID-5 requires a minimum of 4 hosts to enable and is configured as 3+1, where parity data is written across 3 other hosts. Using RAID-5 the data only requires 1.33 times the space, whereas RAID-1 always consumed double the space (2x). As a result, a VM that is 20GB in size will only consume an additional 7GB on other hosts with RAID-5; with RAID-1 it would consume 20GB, as you are writing an entire copy of the VM to other hosts.
With RAID-6 you get additional protection by writing an additional parity block; as a result there is a 6-host minimum (4+2) and the data consumes only 1.5 times the space. This provides better protection, allowing you to survive up to 2 host failures. Using RAID-6 a 20GB VM would only consume an additional 10GB of disk on other hosts; if you did this with RAID-1 it would consume an additional 40GB, as you are writing two copies of the entire VM to other hosts.
These RAID levels are tied to the Failures To Tolerate (FTT) setting in the VSAN configuration, which specifies how many failures VSAN can tolerate before data loss occurs. When FTT is set to 1, RAID-5 is utilized and you can tolerate one host failure and not lose any data. When FTT is set to 2, RAID-6 is utilized and you can tolerate two host failures and not lose any data. While there is a minimum number of hosts required to use these RAID levels, once you meet that number any number of hosts is supported. If you have fewer than 4 hosts in a VSAN cluster then the old RAID-1 is used.
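The capacity math above can be sketched as a small function. This is a simplified model based only on the overhead ratios stated here (RAID-1: one full copy per failure tolerated; RAID-5 3+1: 1/3 extra; RAID-6 4+2: 1/2 extra); the function name and structure are my own.

```python
def vsan_extra_capacity_gb(vm_size_gb, ftt, erasure_coding):
    """Approximate extra capacity consumed on other hosts for protection.

    ftt: Failures To Tolerate. With mirroring (RAID-1), each tolerated
    failure costs a full copy. With erasure coding, FTT=1 maps to
    RAID-5 (3+1) and FTT=2 maps to RAID-6 (4+2).
    """
    if not erasure_coding:
        return vm_size_gb * ftt      # full mirror copy per failure tolerated
    if ftt == 1:
        return vm_size_gb / 3        # RAID-5: one parity unit per 3 data units
    if ftt == 2:
        return vm_size_gb / 2        # RAID-6: two parity units per 4 data units
    raise ValueError("erasure coding supports FTT of 1 or 2 only")
```

Plugging in the 20GB VM from the examples: roughly 7GB extra with RAID-5, 10GB with RAID-6, versus 20GB and 40GB with mirroring.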
New Software Checksum
A new software checksum has been introduced for increased resiliency that is designed to provide even better data integrity and complement hardware checksums. This will help in case data corruption occurs because of disk errors. A checksum is a calculation using a hash function that essentially takes a block of data and assigns a value to it. A background process will use checksums to validate the integrity of data at rest by looking at disk blocks and comparing the current checksum of each block to its last known value, which is stored in a table. If an error or mismatch occurs it will replace that block with another copy that is stored in parity on other hosts. It is enabled by default at the cluster level and can be disabled on a per-VM basis if needed.
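The background scrub described above looks roughly like this sketch. This is a generic illustration of checksum-based scrubbing, not VSAN’s internals; the `fetch_replica` callback is a hypothetical stand-in for pulling a good copy from another host, and `zlib.crc32` stands in for whatever checksum function VSAN actually uses.

```python
import zlib


def scrub_blocks(blocks, checksum_table, fetch_replica):
    """Background scrub: verify each at-rest block against its stored
    checksum and repair mismatches from a replica on another host.

    blocks: dict block_id -> bytes (local data at rest)
    checksum_table: dict block_id -> last known checksum value
    fetch_replica: callable(block_id) -> good copy of the block
    Returns the list of block IDs that were repaired.
    """
    repaired = []
    for block_id, data in blocks.items():
        if zlib.crc32(data) != checksum_table[block_id]:
            # Mismatch detected: replace the corrupt block with a good copy
            blocks[block_id] = fetch_replica(block_id)
            repaired.append(block_id)
    return repaired
```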
VSAN has some new QoS controls designed to regulate storage performance within a host to protect against noisy neighbors, or for anyone looking to set and manage performance SLAs on a per-VM basis. The new QoS controls work via vSphere Storage Policies and allow you to set IOPS limits on VMs and virtual disks. These limits will initially be based on a 32KB block size, but that will be adjustable as needed. VMware didn’t go into a lot of detail on how this all works, but it seems fairly straightforward as you are just capping the number of IOPS that a VM can consume.
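One subtlety worth illustrating is the 32KB normalization: if limits are based on a 32KB block size, a larger I/O presumably counts as multiple I/Os toward the cap. Here is a minimal sketch of that accounting; the exact rounding behavior is my assumption, as VMware hasn’t published those details.

```python
import math

NORMALIZED_BLOCK = 32 * 1024  # limits are normalized to a 32KB I/O size


def normalized_iops(io_sizes_bytes):
    """Count each I/O as ceil(size / 32KB) toward the IOPS limit,
    with a minimum of 1 per I/O (so a 4KB read still counts as 1)."""
    return sum(max(1, math.ceil(size / NORMALIZED_BLOCK))
               for size in io_sizes_bytes)
```

So under this model a 4KB, a 32KB and a 64KB I/O in one second would count as 4 normalized IOPS, not 3, against a VM’s limit.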
This one is pretty straightforward: vSphere has had IPv6 support for years, VMware has had requests for IPv6 support with VSAN, and now they have it. There is support for a mixed IPv4 and IPv6 environment for migration purposes.
VSAN already has pretty good application support with key apps such as Oracle and Exchange, and they have extended that in 6.2 with new support for SAP and tighter integration with Horizon View. VMware is working hard to make VSAN capable of running just about any application workload.
It’s even easier to manage and monitor VSAN in 6.2 directly within vCenter; prior to 6.2 you had to leverage external tools such as vSAN Observer or vRealize Operations Manager to get detailed health, capacity and performance metrics. This new performance management capability is built directly into vCenter, but it’s separate from the traditional performance metrics that vCenter collects and stores in its database. The new VSAN performance service will have its own separate database contained within the VSAN object store. The size of this database will be around 255GB, and you can choose to protect it with either traditional mirroring (RAID-1) or the new erasure coding methods (RAID-5 or RAID-6). By default this is not enabled, to conserve host space, but it can be enabled if needed in the settings for VSAN.
You no longer need to use a special Health Check Plug-in to monitor the health of VSAN. This allows you to have end to end monitoring of VSAN to quickly recognize problems and issues and resolve them. They have also improved the ability to detect and identify faults to enable better health reporting with VSAN.
Finally there are a few minor additional improvements with VSAN in 6.2, and the first one is rather interesting. VMware is introducing a new client (host) cache in VSAN 6.2 that utilizes host memory (RAM) as a dynamic read cache to help improve performance. The size of this cache will be 0.4% of total host memory, up to a maximum size of 1GB. This is similar to what 3rd-party vendors such as PernixData and Infinio do by leveraging host memory as a cache to speed up storage operations. While this new client cache is currently limited to VSAN, you have to wonder if VMware will open it up in a future release to work with local VMFS datastores or SAN/NAS storage.
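The sizing rule is simple enough to express directly. A quick sketch based solely on the stated formula (0.4% of host RAM, capped at 1GB); the function name is my own.

```python
def client_cache_bytes(host_ram_bytes):
    """VSAN 6.2 client read cache size: 0.4% of host RAM, capped at 1GB."""
    return min(int(host_ram_bytes * 0.004), 1 * 1024**3)
```

So a 128GB host would dedicate roughly 512MB of RAM to the read cache, while anything with 256GB or more of RAM hits the 1GB cap.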
Another new feature is the ability to have your VM memory swap files (.vswp) use the new Sparse disk format that VMware introduced in VSAN 6.0 as a more efficient disk format. As memory over-commitment is not always used by customers, this enables you to reclaim a lot of the wasted space used by .vswp files that are created automatically when VMs are powered on.
Feb 09 2016
We don’t seem to have that many VMUGs in Denver as of late, so when one comes around you should definitely try and attend. There is an upcoming Denver VMUG on Thursday, Feb. 18th from 11:00am – 3:00pm at the usual NW location of CableLabs. Sponsoring this one are the good folks from HyTrust, maker of virtualization security products and a company I know very well, as I picked them as the winner of Best of VMworld many years ago. So go register and I look forward to seeing everyone there.
Feb 08 2016
The annual vExpert recognition from VMware has been announced for 2016, with over 1,300 people receiving the honor this year. I’m honored to be on that list again; I’ve been a recurring vExpert since the program’s inception in 2009 thanks to the efforts of John Troyer to help recognize members of the VMware community that continually give back by sharing their knowledge and experience with others. The original group was about 300 members and was mostly comprised of bloggers and VMUG leaders. The group has expanded over the years as both the number of bloggers has grown and the criteria and requirements have changed.
Personally I have always thought the group is too large and doesn’t distinguish that well based on the level of contributions. It seems like just about anyone that has a blog is included even if they only posted once or twice in a year. There are definitely people very deserving of the honor but I feel there are some people out there that start blogs just so they can get the vExpert title and they don’t put a lot of effort into it. Remember the vExpert title is not an official certification, it is simply a recognition award from VMware that validates your contributions to the VMware community. What you get from it is recognition and some other great perks like VMware licenses, beta program access, exclusive early access webinars, special events and more. Some vendors will also reward vExperts with special giveaways.
I’d like to see the bar set higher and it be a more exclusive club, and/or have recognition levels like vExpert Gold/Silver/Bronze based on the level of contributions and the duration of maintaining the vExpert title. Those that have been named a vExpert every year since the beginning should also get special recognition, as keeping it going year after year takes commitment and hard work. I think doing this would give the people that really deserve special recognition just that and put them in a higher tier. I’ve always felt there are vExperts, and then there are vExperts, meaning I’ve always seen those that do more to earn it differently than those that do the minimum.
I also see this directly relating to blogging; there are many opportunistic bloggers out there. They see starting a blog as their path to getting something, whether it be a new and better job or recognition from becoming a vExpert. Now there is nothing wrong with this; if someone wants to better themselves, good for them. It’s not the reason I started blogging, and I know that’s true for many other bloggers. What happens too often, though, is they get what they want and then they dump what got them there. Just this week I removed at least 50 dead blogs from my vLaunchpad. Again, there is nothing wrong with this; if that person is happy where they are and doesn’t want to blog any more, so be it.
The point I’m trying to make is those bloggers that stick with it year after year and publish great content should get special recognition, and they do via my annual Top vBlog voting. It would be nice to see this carry over to the vExpert program: recognize those that deserve it the most instead of publishing a huge list of names with no segregation based on accomplishments. Maybe have a point system that weights accomplishments and then separate the vExperts into different tiers. Seniority should also play into it; someone who has been a vExpert for 8 years should have a higher weight than someone new to the vExpert program.
The vExpert program is a great thing to have and I appreciate VMware’s hard work and continued commitment to it. As the program continues to grow larger, hopefully they can find some way to implement different levels of vExperts, which I believe would make the program even more special as well as motivate people to accomplish even more instead of just doing the bare minimum. Additionally, it would give those that deserve special recognition just that and give more meaning to the vExpert title.
Feb 07 2016
VMware is hosting a big online event this week themed “Enabling the Digital Enterprise”. It’s being presented as a 2-part event on multiple days, split into 2 tracks. The first track seems to be all about VDI, which is a bit of a change for VMware, which has historically put VDI second at events like VMworld. If VDI isn’t your cup of tea then tune in on the 2nd day, which is all about vSphere, the cloud and VSAN. I’m willing to bet that you’ll be hearing new product announcements as well as all sorts of licensing changes (hint: streamlined product portfolio).
Track 1 – Deliver and Secure Your Digital Workspace
VMware is helping customers develop consumer-simple, enterprise-secure digital workspaces that include their desktop and mobile environments along with critical components of security, identity and cloud infrastructure. Pat Gelsinger will be joined by Sanjay Poonen who will present VMware’s digital workspace vision and share exciting announcements that help companies securely deliver and manage any app on any device.
What You’ll Learn
- How to transform traditional IT culture, process, tools, and budgets by delivering and managing any app on any device from one platform
- What’s new with the VMware Horizon portfolio
- VMware’s new approach for managing your desktop and apps in the cloud
Americas Tuesday, February 9 at 9:30 AM PST
EMEA Wednesday, February 10 at 9:30 AM GMT
Asia Pacific Tuesday, February 16 at 9:00 AM (GMT +11)
Track 2 – Build and Manage Your Hybrid Cloud
Raghu Raghuram joins Pat Gelsinger to share how VMware’s software-defined approach can help simplify how you build and manage your hybrid cloud. See how VMware’s enterprise-ready cloud management platform (CMP) helps accelerate IT service delivery, improve IT efficiency, and optimize IT operations and capital spending. Get up to speed on how VMware is enabling high-performance hyper-converged infrastructure (HCI) solutions through radically simple storage and a tightly integrated software stack.
What You’ll Learn
- How companies are implementing CMPs for intelligent operations, automated IT to IaaS, and DevOps-ready IT
- VMware’s new streamlined product portfolio
- Why companies are embracing HCI solutions powered by Virtual SAN
Americas Wednesday, February 10 at 8:30 AM PST
EMEA Thursday, February 11 at 9:30 AM GMT
Asia Pacific Tuesday, February 16 at 9:00 AM (GMT +11)
Feb 06 2016
As you may have noticed if you are a partner or VAR, VMware decided last year to discontinue its annual Partner Exchange (PEX) conference that it has held each February. PEX was a partner- and VAR-only event, no customers allowed, and was mostly aimed at providing partners and VARs with the training and information they needed to better sell VMware solutions. It followed the same basic outline as VMworld, just on a much smaller scale (i.e. 5,000 attendees).
PEX was a bit of a tricky event for VMware as it was split into 2 audiences: partners that sold products and solutions into the VMware ecosystem, and VARs that sell both VMware and partner products. The tricky part comes from VMware directly competing with many (most?) of its partners in just about every area, from storage to networking to management to cloud to data protection. This caused VMware to actually ban some companies from the event in prior years. Because of the competitive angle, the conference prevented VMware from having the full attention of resellers, who could also learn about competitor products at the event.
Because PEX mostly duplicated the formula of VMworld and was still a fairly technical show, VMware decided last year to roll it into VMworld and not have PEX this year. The rolling-in part was pretty hasty and not very well executed. The technical part was simple, as VMworld is already very technically focused; the partner part was handled not so well, as partners didn’t get the opportunity to do boot camps, which were a big part of PEX, and the partner offerings were very limited and not presented far enough in advance to be able to plan for them.
The one nice thing about PEX was that it gave VMware another window in the year to do new product launches in front of a big audience outside of the yearly VMworld cycle. VMware has broken this out lately into special internet broadcast events like the one coming up next week. Hopefully they will do a better job with the execution next year, as they have more time to plan for it.
What replaced PEX this year is what VMware calls a Partner Leadership Summit, an invitation-only event focused on C-level business audiences within the company’s partner ecosystem. The event is being held March 6th-9th in Scottsdale, AZ; I didn’t get an invite so I must not be very important ;-( I’m not sure if many partners even received invites; I asked around at HP with our alliance teams and nobody had heard of the conference. I think it is more aimed at VARs than VMware technology partners; that way VMware can pitch VSAN and other products without any distractions from competitors. VMware mentioned the conference once last year in this blog post.
vExpert meets the Hit King at PEX 2013:
I for one won’t shed too many tears at the loss of PEX, as in later years it started to conflict with an even bigger annual event, the Super Bowl. Last year totally sucked for me, as I was set to fly from Phoenix to San Francisco Sunday morning to get in early enough to watch the big game. The Super Bowl was held in Phoenix last year and that Sunday was literally the one day in years that the whole city got hit with heavy fog (Phoenix almost never gets fog). As a result planes couldn’t land for hours, pilots timed out, my flight got canceled and I literally spent 12 hours at the airport, having to watch the Super Bowl at a small crappy airport bar. I finally flew out on one of the last flights of the night, which had one seat left.
That wasn’t my worst PEX travel experience, though. A few years before that, when PEX was in Vegas, I was set to fly from Denver to Vegas on Sunday when a huge snowstorm hit the Denver area that weekend. Of course most of the flights got canceled and the earliest re-bookings were late Monday or Tuesday, which meant missing the boot camp and half the show. Well, that didn’t deter me: I took a taxi to the airport, rented a Jeep Cherokee and hit the road Sunday evening to get to Vegas by morning.
The Jeep was great in the snow, and once you got outside of the Denver area into the mountains it wasn’t that bad. Late that night while driving through Utah, conditions were clear and the roads were good, but it was very cold out. I rounded a corner on I-70 and there was a herd of moose blocking the entire highway. The chances of stopping or avoiding them were zero; even with my lightning reflexes we hit at least one of them, which smashed up the front fender pretty good. They all scattered and the one that was hit limped off. At that hour in the middle of nowhere there was no traffic, and of course no cell service. Thankfully the car was still driveable, the airbag had not deployed and the cooling system seemed to be intact; the Jeep was like a tank.
Jeep vs big ass moose:
We drove for a while longer, finally got cell service and called the rental car company; since the car was still driving OK they said to bring it to them in Vegas and they would swap it out. As they needed a police report I called the Utah Highway Patrol; they wouldn’t come out for it, they just took a report and gave me a case #. We continued on to Vegas; I was constantly nervous the car might overheat in the freezing cold and we’d get stranded, but it made it just fine. I took the car to them and they didn’t say much, just exchanged cars with me. Despite that ordeal I ended up getting there just in the nick of time, about an hour before the boot camp started on Monday at 8:00am.
So I for one won’t miss PEX all that much, having to plan for our presence at PEX each year was a lot of duplicated effort. So goodbye PEX and hello VMworld.
Feb 06 2016
So this year I’m going to try something a bit different to try and limit the number of blogs that are in the voting. Every year the number of blogs in the voting continues to rise, with over 400 last year. Each year I do remove dead blogs from the voting; these are blogs that have not produced any new posts in the last year. The number of dead blogs is fairly small, though, maybe less than 20. With the number of blogs so high it gets very difficult for voters to sort through the huge number and pick their favorite blogs. It also makes all the work that goes into building the whole voting process and processing the results much more difficult.
So based on some feedback from last year, and especially thanks to Andreas Lesslhumer’s hard work of actually tallying up blog posts for each and every blogger last year, I’m going to set a minimum post count as a requirement for being on the Top vBlog ballot. Right now I’m thinking of setting it as a minimum of 6 posts in the prior year to be eligible for Top vBlog. This would eliminate at least 100 blogs from the voting. I know people get busy and it’s often hard to maintain a blog, but I think one post every 2 months is a fairly low bar to set for this. The end result is it will be more fair to the bloggers that put in more work, easier for the voters to choose their favorite blogs and easier for me to complete the whole process.
Let me know what you think about this: too high? Too low? Just right?
Jan 29 2016
Veeam Backup & Replication was first introduced as a 1.0 product back in 2008 and helped launch the revolution of the data protection industry with a backup product specifically designed for VMware environments. To put that in context, back in 2008 the VMware platform consisted of VirtualCenter 2.5 together with ESX 3.5, and ESXi was just being introduced. Back then Veeam was a small company of around 10 employees. Fast forward to today and they’ve come a long way: Veeam now has over 2,000 employees and has just released version 9 of their flagship Backup & Replication product. You can read more on the history of Veeam in a post I did back in 2014.
Veeam Backup & Replication is one of the core components of the Veeam Availability Suite, along with the Veeam ONE monitoring and reporting tool. The v9 release of Veeam Backup & Replication is packed full of new features and enhancements, including a lot more integration with some of the big storage array vendors. I was on a blogger early preview of the v9 release and one nice thing that caught my attention was new support for Direct NFS backups. Prior to v9 Veeam has always supported Direct SAN backups, where a backup appliance could directly back up VMs on a SAN without involving the hypervisor, which is more efficient; in v9 that has been extended to NFS storage arrays as well.
The list of new features and enhancements in this release is ridiculously long, so rather than list them all here, go check out the 10-page What’s New in v9 document that Veeam has published. You can also give this blog post from Doug Hazelman a read and check out a recorded webinar from Rick-a-tron that provides an overview of the v9 Availability Suite.
Jan 18 2016
Tom Fenton recently published an article on Virtualization Review detailing the current state of VMware’s new Virtual Volumes (VVols) storage architecture. In the article he polled a few vendors to find out what they are seeing as far as customer adoption of VVols. A few vendors responded, including myself; both HDS and Dell did not have an accurate way to track adoption and were mainly relying on customer feedback. They are mainly seeing customers testing it out right now and using it in Dev/Test environments. HDS stated one of the limiters to VVol adoption is customers still on vSphere 5.5, and Dell stated customers are still trying to understand it better before diving in.
At HPE, we can track actual VVol adoption via our array phone-home capability, which provides us with some usage stats on the array. In the article, based on my feedback, Tom wrote that we had seen at least 600 3PAR arrays with the VVols VASA Provider enabled within the array. More recent numbers put that at around 720 arrays, but it’s important to note that this just means they have the potential to use VVols, not that they have VMs running on VVol storage. More detailed stats show that about 50 customers have created VMs on VVol storage. So this is pretty much in line with what other vendors are seeing, which is pretty light adoption of VVols right now.
VVols has been available as part of vSphere 6 for almost a year now (March 2015), so why aren’t more people using it? There are probably a lot of reasons for this including:
- Customers haven’t migrated to vSphere 6
- Array firmware doesn’t support VVol
- Lack of replication capabilities in VASA 2.0
- Lack of knowledge/understanding of VVols
- Limited scalability and feature support in some implementations
- It’s essentially a 1.0 architecture
In my previous post on when customers would start adopting VVols I went into a lot more detail on the barriers and challenges to VVol adoption. I expect usage to pick up within the next year or so based on a number of factors:
- VASA 3.0 with replication support in the next vSphere release
- More arrays supporting VVols
- Increased scalability and more feature support
- More mature implementation from VMware and array vendors
- Better understanding of VVols and how to implement it
Until then I expect to see steadily increasing usage of VVols. Like any new technology or feature, adoption is almost always slow at first as customers are often cautious about jumping right in to something new. The same growing pains were apparent with VSAN when it was released as a new 1.0 storage architecture. If your array supports VVols I definitely encourage you to try it out and learn all you can about it, as VVols is the future and at some point I expect VMFS to be phased out just like ESX was. If you are looking for resources to learn more about VVols, be sure to check out my huge ever-growing VVols link collection and also my VMworld 2015 STO5888 session that VMware has made publicly available.
Jan 18 2016
I recently updated a popular paper that I wrote for Veeam years ago, which is now available on Veeam’s website…
Backing up virtual machines (VMs) may seem like a simple process, but there’s a lot more to it than meets the eye. For maximum data availability, you shouldn’t use the same method you used to back up physical servers: You need techniques and features that were designed specifically for virtualized environments.
In this FREE white paper, Top 10 Best Practices for VMware Data Availability, written by VMware vExpert Eric Siebert, you’ll learn the proper methods, techniques and configuration and how to leverage the built-in features of Veeam® Backup & Replication™ to take your backups to the next level. You’ll also learn about:
- Using the 3-2-1 Rule to keep your VMs and critical data safe
- Leveraging policy-based controls for smarter data protection
- Working with new vSphere features, including VSAN and VVols storage architectures
- And more!
Dec 30 2015
VMTurbo has created a vColoring Book with 13 pages of illustrations that can be colored in by you or your kids. The vColoring Book is entitled “The Origins Of The Green Circle Guardians” and is based on the common challenges found with virtualization. The vColoring Book is completely free; you can simply download it from their site. And best of all, by downloading it you’ll help support the Engineers Without Borders charity, as VMTurbo has pledged to donate $1 for every vColoring Book that is downloaded. This charity’s mission is the following:
“Engineers Without Borders USA builds a better world through engineering projects that empower communities to meet their basic human needs and equip leaders to solve the world’s most pressing challenges. Our 15,900 members work with communities to find appropriate solutions for water supply, sanitation, energy, agriculture, civil works and structures.”
Much like VMTurbo can help you solve virtualization challenges, Engineers Without Borders connects smart people to help solve real-world challenges around getting people some of the basic human needs that we take for granted. So this should be a no-brainer: go download the vColoring Book, help a great cause, bust out the crayons and have some fun.
Dec 30 2015
WordPress is a great platform for hosting a blog, but it isn’t the best platform when it comes to security. There are many vulnerabilities that are constantly being found and exploited in both WordPress and its thousands of plug-ins. I’ve been stung by malicious attacks in the past by hackers that injected malicious rows into my MySQL tables, edited my WordPress files or created malicious PHP files that would do things like display spam links at the top of my blog. Recovering from these is not easy; it took me hours to identify the infected files and tables and clean them up. In many cases your best option is to start from scratch, and some bloggers have lost their whole site and have had to start over with nothing to show for their years of hard work.
This week a small WordPress site that I maintain for my mom was hacked. Luckily the database was intact, so all I had to do was completely re-install WordPress and re-install the themes and plug-ins to get it back up and running. That experience reminded me of a post I did a year ago about the importance of backing up WordPress that I thought I would re-post. How would you feel if you lost all those hundreds of hours that you put into blogging? Not very good I bet, in fact probably downright awful, so do yourself a favor and don’t ignore your backups…
Have you ever had that awful, sick-to-your-stomach, oh-shit feeling when you just realized you lost a lot of important data, whether it be photos, documents or other important stuff that can’t easily be replaced? It sucks, doesn’t it? Usually it takes just one instance like that to inspire us to start taking backups seriously; unfortunately, though, it won’t bring back what you lost. Backups are one of those things that many people don’t think about, especially when they store data in a location that is hosted on the internet.
There are a great many people blogging about virtualization these days, and most of them are using WordPress as their platform of choice. WordPress is an ideal platform for blogging, but all that hard work you put into it could be wiped out if you don’t properly back up your WordPress site.
But doesn’t my hosting provider back up my site?
You should never assume that your hosting provider is backing up your website. Many of them do not back up your content, and if they do, they probably do not guarantee the backups; some hosting providers will offer backup as a paid add-on service. In addition, they usually are not backing up your WordPress MySQL database, which contains much of your valuable content. Take a look at this notice from my hosting provider; I think you’ll find the policy is similar with whatever provider you use, and if you don’t know, check with them.
How often should you back up your site?
Depending on how often you blog, you should back up your WordPress instance at least once a month. If you are blogging several times a week you should probably do it daily or weekly. You should also do a backup before you upgrade WordPress to a newer version or update your plug-ins. Just like you do in the data center, you should also plan on preserving older backups for as long as possible, as corruption or malicious content may have been present for a while and you may need to go back quite far to find a clean copy.
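To make that retention idea concrete, here’s a rough sketch in Python (purely illustrative, not tied to any plug-in or host) of a pruning rule that keeps every backup from the last 30 days plus the oldest backup of each earlier month:

```python
from datetime import date

def select_backups_to_keep(backups, today, keep_days=30):
    """Given a dict of {filename: backup_date}, keep everything from the
    last `keep_days` days, plus one backup per calendar month before that."""
    keep = set()
    monthly_seen = set()
    # Walk oldest-first so the earliest backup of each month is the keeper
    for name, when in sorted(backups.items(), key=lambda kv: kv[1]):
        if (today - when).days <= keep_days:
            keep.add(name)  # recent backup: always keep
        elif (when.year, when.month) not in monthly_seen:
            monthly_seen.add((when.year, when.month))
            keep.add(name)  # oldest backup of an older month
    return keep
```

Anything not in the returned set would be a candidate for deletion; the filenames and dates you feed it would come from whatever directory holds your backup archives.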
What should you back up?
With WordPress there are two main sets of data that you need to back up to ensure all your content is protected and you can easily recover if needed: your WordPress files and your WordPress database. When you install WordPress on your website there are hundreds of files that get copied to specific directories, which together make up the complete WordPress web application. A new install of WordPress is only about 16MB in size with around 1,100 files, but as you add content that will grow. Your WordPress database is typically hosted on a MySQL database that is installed and managed by your hosting provider; it has many tables that store the configuration and content for your WordPress website.
How do I back it up?
So now that we know what needs to be backed up, how do we actually do it? There are several ways that you can back up WordPress:
- Manually by copying all your files to a PC using FTP and then doing a SQL export of your WordPress MySQL database and copying that to your local PC as well.
- Automatically by using some type of PHP script that runs on your hosting provider’s server on a schedule set with a tool like cron.
- Automatically using a WordPress plug-in designed to backup WordPress.
- Some hosting providers will do it if you pay for an add-on backup service.
Doing a manual backup
This method is OK for ad-hoc backups, but it can be tedious to do on a frequent basis and it can easily slip your mind. I used this method for years; I did it nowhere near as often as I should have, and I got lucky a few times where I almost lost a lot of data. To back up WordPress manually you will need to copy all the appropriate files and directories from the hosting provider’s web server to your local PC, or even better to a cloud storage platform like Dropbox. Below are the files and directories that come with a new install of WordPress.
The easiest way to copy the files is to use an FTP application to transfer them from your web server to your local PC, and it’s a good idea to do this periodically as a backup. If you need an FTP client, check out FileZilla, which is a free open source application. You may need to set up an FTP username/password on your hosting site before you can connect to it. Create a new site in FileZilla and give it a name (i.e. mywebsite), use the IP address or DNS name of your website and then enter your login credentials. Once you connect to your web server you’ll see the directory listing of the contents; what you see will vary by hosting provider, as some providers partition you off so you don’t see much of the web server’s files.
You may not need all the files you see to back up WordPress, but it’s best to copy everything to a sub-directory on your PC so you have a full backup just in case. In the figure above you can see the 3 WordPress directories that you definitely need to copy, along with all the files that start with “wp” in the root directory. I’ve manually copied things to my site in the past (i.e. images) which I copy as well. Other files that are part of the hosting platform you typically don’t need to bring over, but it doesn’t hurt to copy them anyway. Once you’ve copied everything to your PC it’s time to move on to the next step, backing up your MySQL database.
As mentioned, your WordPress database has many tables that store the configuration and content for your website; you can find a complete description of the database tables here. Log into your hosting provider control panel for your website and you should see a link for database management via phpMyAdmin, a free software tool written in PHP that is used to administer MySQL over the web. Once you launch phpMyAdmin you should be prompted for a username and password to connect to your database. You probably won’t know or remember it, but you can easily look it up by opening the wp-config.php file that you copied to your PC as part of the backup in a text editor like Notepad and looking for the MySQL section, which will contain your MySQL username/password.
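If you’d rather pull those credentials out programmatically instead of eyeballing the file, the define() entries in wp-config.php are simple to parse. Here’s a quick Python sketch; it assumes the standard single-quoted define('DB_USER', '...') style, so a config that uses double quotes would need a tweaked pattern:

```python
import re

# wp-config.php stores credentials as lines like: define('DB_USER', 'admin');
DEFINE_RE = re.compile(r"define\(\s*'(DB_\w+)'\s*,\s*'([^']*)'\s*\)")

def read_db_settings(wp_config_text):
    """Return a dict of the DB_* constants found in wp-config.php text."""
    return dict(DEFINE_RE.findall(wp_config_text))
```

Feed it the contents of the wp-config.php file you copied down and it returns the DB_NAME, DB_USER, DB_PASSWORD and DB_HOST values in one dictionary.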
Note that some hosting providers may require you to whitelist your IP address to do remote MySQL administration; if so, there should be a section in your hosting control panel to put in your IP address. Once you are logged into phpMyAdmin you want to export your database, so click the Export link. You may be offered a quick export where you don’t need to enter a lot of options, which will work just fine; if you do get a selection screen you can typically just use the defaults, hit Go, and it will ask you for a location for the file on your PC and then begin the export. It shouldn’t take more than a few minutes. Here’s how the Export screen looks with my hosting provider:
Once you have completed this it will create a .SQL file on your local PC that you should save with the other WordPress files that you copied. You now have everything you need to restore WordPress if needed, by copying all the files you backed up back to the server and importing the .SQL file back into MySQL; see my other post on moving to a new hosting provider for more info on how to do that.
Also note that some hosting providers like GoDaddy provide a link in their control panel to kick off a database backup so you don’t have to go into phpMyAdmin. They dump the resulting .SQL file in a db backup directory on your website; just make sure you copy the file from there to your local PC.
Doing an automatic backup with a PHP script and cron
I’m not going to go into much detail on this method as it can be a bit technical to set up, and there are some WordPress plug-ins available that will make it easier. Your hosting provider control panel should have a section to set up and manage cron jobs like below:
You then need to configure the schedule and the action for the cron job to perform, like below:
Again, look for some WordPress plug-ins that support cron or a PHP script that is written to back up WordPress MySQL databases. If you are feeling adventurous you could also write your own PHP script; here’s one I found by searching the internet. Some scripts may only back up the database, so make sure you know what the script is doing and where it is storing your backups.
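If PHP isn’t your thing, the same scheduled-backup idea works in whatever language your host lets you run. Below is a rough Python sketch that zips up the WordPress directory and dumps the database with mysqldump; the paths, database name and credentials are placeholders you’d need to substitute, and it assumes mysqldump is available on the server:

```python
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

def archive_wordpress_files(wp_dir, backup_dir):
    """Zip the whole WordPress directory into a timestamped archive."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(backup_dir) / f"wordpress-files-{stamp}"
    return shutil.make_archive(str(target), "zip", wp_dir)

def dump_database(db_name, db_user, db_password, backup_dir):
    """Export the WordPress MySQL database to a .sql file with mysqldump."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    out = Path(backup_dir) / f"wordpress-db-{stamp}.sql"
    with open(out, "w") as f:
        subprocess.run(
            ["mysqldump", f"--user={db_user}",
             f"--password={db_password}", db_name],
            stdout=f, check=True)
    return out

# Example usage (placeholder paths/credentials -- substitute your own):
#   archive_wordpress_files("/var/www/html", "/home/backups")
#   dump_database("wordpress", "wp_user", "secret", "/home/backups")
```

You could then schedule it from your control panel’s cron section with an entry along the lines of 0 3 * * 0 python3 wp_backup.py for a weekly Sunday 3am run, and just like with the PHP scripts, remember that the backups land on the web server itself, so copy them off somewhere safe.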
Doing an automatic backup with a WordPress plug-in
This is probably the easiest and most convenient way to back up your WordPress site. There are many plug-ins available that will automate the backup of both your WordPress files and MySQL database, so you don’t have to do anything but install the plug-in and configure it. You can search through the WordPress plug-in directory and you’ll see many of them. The one I ended up using is UpDraftPlus Backup & Restoration, which has 4.8 out of 5 stars and is free, with some paid add-ons. You can go to their add-on page, which contains a big list of add-ons and pricing for each that expand the flexibility, functionality and backup targets supported. Another popular WordPress backup plug-in is BackWPup.
You can back up your site just fine with the free version, but it only puts the backup files on your hosting web server. If you want to use other backup destinations like Dropbox, Amazon S3, Rackspace, Google Drive and more, it will cost you about $10 for each. If you do back up to your website only, just remember to copy those files off periodically to somewhere safe.
Install UpDraftPlus like you would any other WordPress plug-in. Once activated, go into Settings, UpDraftPlus Backups in WordPress to set up your backup jobs and you will be at the main screen:
Here you can see your backup status and quick actions for backing up and restoring. It’s best to click on the Settings tab first to configure backup schedules and retention, and since most hosting providers now provide unlimited space, don’t be afraid to retain a lot of backups.
You can also specify what files to back up, database encryption if you are really paranoid, reporting, remote storage options and other advanced settings. Note that by default the free version will not back up your core WordPress files (i.e. wp-admin), but unless you customize them you won’t have to worry about those as you can easily download them again if needed. The files specific to your WordPress site are in the Themes and Plugins directories.
Once the backup runs you can look at the log files to see everything that occurred during the backup; it’s not something you’ll need to do regularly, but I looked as I was curious. If you are using the free version, which puts the backups into your WordPress directory, you’ll see a new sub-directory under wp-content called updraft that contains your backup files all zipped up. Make sure you copy these backup files somewhere else!
What about Backup as a Service?
If you prefer not to deal with your backups at all, you can outsource them to a company that provides backup services for WordPress. Note that both of the companies below back up your WordPress files as well as your database.
One such company that does this is blogVault. Their Basic plan starts at $9/month for backing up a single site and retains 30 days of backups. If you have more than one site, they have a Plus plan for $19/month that will back up 3 sites. It works by installing their WordPress plug-in on your site; their server then automatically contacts the plug-in every day to back up new changes to your site.
Another company that provides WordPress backup services is BackupBuddy. Their Blogger plan rate is $80/year for backing up 2 sites with 1GB of backup space available; presumably with that much space you could store more than 60 days of backups with them. They also have a Freelancer plan available for $100/year for up to 10 sites with 1GB of backup space. Again, it works by installing their WordPress plug-in on your site and then configuring it; they have a video that demonstrates the process.
And that’s all there is to it: pick the service, method or plug-in that works best for you. For me, I’ve set up UpDraftPlus and will also periodically do manual backups as well. Regardless of how you do it, the important thing is that you are backing up your WordPress site, which contains all the hard work that you never want to take the chance of losing.
Dec 29 2015
I recently wrote a white paper for Infinio, whose server-side storage acceleration product utilizes host RAM as a storage cache. The white paper focuses on some of the cool storage enhancements in vSphere 6.0 and Update 1, such as the new vSphere APIs for I/O Filtering (VAIO) and the new Virtual Volumes (VVols) storage architecture. It also covers the enhancements to VSAN, vMotion and Fault Tolerance. The white paper is a companion to the webinar that I did a few months ago on the same topic and covers the content in a bit more detail. So if you want to know more about the great new storage enhancements in vSphere 6 (of course you do), I encourage you to give it a read and give the webinar a listen as well. And if you want to learn more about Infinio’s cool storage acceleration product, give them a visit.
Nov 16 2015
Today VMware released Photon Controller, another component in the new container architecture that was announced months ago. I really haven’t been up to speed on all the container stuff, but I attended a blogger briefing on this latest piece last week, so I have a little better understanding of it now. I’ll try to summarize what I learned:
- There are several components to VMware’s container architecture, Lightwave (identity and access management), Photon Machine (stripped down ESXi), Photon Controller (management, kind of like vCenter) and Photon OS (container runtime environment).
- All components are open source; Lightwave and Photon OS have already launched, and Photon Controller is the last piece to launch.
- Below is a figure depicting VMware’s long-term architecture for the Photon Controller, you can see that integration into traditional vSphere tools like vROPs and Log Insight is planned along with 3rd party integration:
- Below is a figure depicting the Photon Platform architecture; you’ve got to love how VMware is still using the term ESX (maybe they are bringing it back):
- vSphere Integrated Containers is a separate infrastructure from Photon Platform and runs on traditional vSphere. Here’s VMware’s comparison of the two:
- Photon Platform does run on a stripped down ESXi hypervisor with a container runtime environment based on Photon OS. VMware wouldn’t say exactly what all was stripped out but some features that don’t fit into supporting containers were removed (i.e. HA, FT, DRS, integration APIs).
- I saw a demo of Photon Controller in action (below), most of the management and deployment is all CLI right now and it’s completely different from vSphere and very developer focused. To me it seemed like a pretty steep learning curve if you are used to traditional vSphere. Note Photon uses “Flavors” for resource policy management.
- Photon Machine can be managed with the VMware Embedded Host Client as shown below; no, you can’t manage it with vCenter:
- Photon Machine has no support for any of the current vSphere Storage APIs (VAAI/VASA), that may come later.
- Photon Machine only supports VMFS, no VVols support, that may come later.
- There is currently no management plug-in integration like there is in vCenter for 3rd party vendors to add-on to it.
- You can run Photon Controller as a VM in VMware Fusion or Workstation so you can have a whole container development environment on a desktop or laptop.
There is still a lot that I need to learn and understand about this new architecture. It will be interesting to see how VMware continues to develop and evolve this and how they position it against vSphere integrated containers. Here are some additional resources to help you learn more about it:
- VMware Photon Controller Deep Dive (VMware blog)
- Photon Controller Getting Started Guide End User Workflows (VMware doc)
- Project Lightwave FAQ (VMware doc)
- Project Photon OS FAQ (VMware doc)