Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)

This is broken up into two parts: first, a live migration, where no VMs are powered off during the migration; second, a migration where you temporarily power off the VMs attached to the SCSI datastore.

Why would you want to do it one way or the other?

Pros of live migration:

  • No VM downtime
  • Simpler configuration changes and overlap, so there is less to go wrong or mess up

Pros of powering off VMs:

  • The total migration time will be significantly less, because no data has to be moved. (A live migration, by contrast, copies everything, since VMware currently doesn’t support XCOPY for NVMe-oF, even on the same array.)
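
To make the live option concrete, here is a minimal PowerCLI sketch of the Storage vMotion step, assuming an existing Connect-VIServer session; the datastore names are hypothetical:

    # Hypothetical names: "scsi-ds" is the source datastore, "nvme-ds" the target
    $targetDs = Get-Datastore -Name "nvme-ds"

    # Live Storage vMotion of every VM on the SCSI datastore; no power-off
    # needed, but all data is copied by the host rather than offloaded
    Get-Datastore -Name "scsi-ds" | Get-VM | Move-VM -Datastore $targetDs
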
Continue reading “Migrating From SCSI To NVMe on vCenter (Part 1 – Live Migration)”

ActiveCluster and VAAI

Pure Storage recently introduced support for active/active replication on the FlashArray in a feature called ActiveCluster. A common question that comes up for active/active solutions alongside VMware is: how is VAAI supported?

The reason it is asked is that VAAI is often tricky, if not impossible, to support in an active/active scenario, because the storage platform has to perform each operation not just on one array but on both. XCOPY, which offloads VM copying, is therefore often not supported. Let’s take a look at VAAI with ActiveCluster, specifically these four features (a quick PowerCLI check of the host-side settings follows the list):

  1. Hardware Assisted Locking (ATOMIC TEST & SET)
  2. Zero Offload (WRITE SAME)
  3. Space Reclamation (UNMAP)
  4. Copy Offload (XCOPY)
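
Three of these four primitives can be toggled with host advanced settings; here is a minimal PowerCLI sketch for checking them, assuming an already-connected session and a hypothetical host name (UNMAP is invoked separately in this era of vSphere, e.g., with esxcli storage vmfs unmap):

    $esx = Get-VMHost -Name "esxi01.example.com"   # hypothetical name
    $vaaiSettings = @(
        "DataMover.HardwareAcceleratedMove",   # XCOPY (Copy Offload)
        "DataMover.HardwareAcceleratedInit",   # WRITE SAME (Zero Offload)
        "VMFS3.HardwareAcceleratedLocking"     # ATS (Hardware Assisted Locking)
    )
    # A value of 1 means the primitive is enabled on the host
    $vaaiSettings | ForEach-Object { Get-AdvancedSetting -Entity $esx -Name $_ } |
        Select-Object Name, Value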

Continue reading “ActiveCluster and VAAI”

VAAI XCOPY not being used with Powered-On Windows VM

This is an issue I discovered along with my good friend and former colleague Drew Tonnesen a few years back, and it has cropped up a few times in recent days. I noticed there wasn’t really any information about it online, so it made sense to put a quick post together.

In short, clone or Storage vMotion operations for Windows Server 2012 R2 virtual machines complete much more slowly when the VM is powered on than when it is powered off. The common explanation is that VAAI XCOPY does not work when the VM is powered on. This is not exactly true. Let me explain.
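
If you want to reproduce the comparison yourself, a rough sketch is to time a clone in each power state with PowerCLI (the VM and datastore names here are hypothetical):

    # Time a clone while the source VM is powered on, then again after
    # Stop-VM, and compare the two durations
    $source = Get-VM -Name "win2012r2-vm"
    $ds     = Get-Datastore -Name "flasharray-ds"

    Measure-Command {
        New-VM -Name "win2012r2-clone" -VM $source -Datastore $ds -VMHost $source.VMHost
    } | Select-Object TotalSeconds

Continue reading “VAAI XCOPY not being used with Powered-On Windows VM”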

XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks

Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into the exact design changes, so this is what I observed and my educated guess on how it was done. Look for more details as time goes on.

***This blog post turned out much longer than I expected, and probably should have been a two-parter, so I apologize for the length.***

As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware on improving how they handled XCOPY. XCOPY, as you may be aware, is the SCSI command used by ESXi to offload cloning, Storage vMotion, and deploy-from-template operations to a compatible array. It seems that VMware implemented these requests in vSphere 6.0 (my good friend Drew Tonnesen recently blogged on this). My request centered on three things:

  1. Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e., how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different).
  2. Allow ESXi to query the Maximum Segment Length via the Extended Copy (XCOPY) RECEIVE COPY RESULTS command and use that value. This value tells ESXi what to use as a maximum transfer size, which would let end users avoid the hassle of manual transfer size changes.
  3. Allow for thin virtual disks to leverage a larger transfer size than 1 MB.

The first two are currently supported by VMware in a very limited fashion (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
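For context, the transfer size in play for the first two items is the host-wide DataMover.MaxHWTransferSize advanced setting; here is a small PowerCLI sketch for inspecting and raising it (the host name is hypothetical, and note the value is expressed in KB). Per item three, thin virtual disks historically ignored this setting and were capped at the 1 MB VMFS block size, which is the enhancement discussed below.

    # The value is in KB: 4096 (4 MB) is the default, 16384 (16 MB) the maximum
    $esx = Get-VMHost -Name "esxi01.example.com"   # hypothetical name
    Get-AdvancedSetting -Entity $esx -Name "DataMover.MaxHWTransferSize" |
        Set-AdvancedSetting -Value 16384 -Confirm:$false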

Continue reading “XCOPY Improvement in vSphere 6.0 for Thin Virtual Disks”

FlashArray XCOPY Data Reduction Reporting Enhancement

Recently the Purity Operating Environment 4.1.1 release came out with quite a few enhancements. Many of these were for replication, there are some new GUI features, and the new vSphere Web Client Plugin is included. What I want to talk about here is a space reporting enhancement concerning VAAI XCOPY (Full Copy). First, some history…

First off, a quick refresher on XCOPY. XCOPY is a VAAI feature that offloads virtual disk copying/migration within a single array, for operations like Storage vMotion, cloning, or deploying from template. Telling an array to move data from location A to location B is much faster than having ESXi issue tons of reads and writes over the SAN, and it therefore also reduces CPU overhead on the ESXi host and traffic on the SAN. Faster cloning/migration and less overhead–yay! This lets ESXi focus on what it does best (managing and running virtual machines) while the array does what it does best (managing and moving data).
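
As an aside, if you want to confirm that a given device actually supports these offloads, esxcli exposes a per-device VAAI status that is reachable from PowerCLI; a minimal sketch, assuming a connected session and a hypothetical host name:

    $esx    = Get-VMHost -Name "esxi01.example.com"   # hypothetical name
    $esxcli = Get-EsxCli -VMHost $esx -V2

    # Clone = XCOPY, Zero = WRITE SAME, ATS = hardware assisted locking,
    # Delete = UNMAP
    $esxcli.storage.core.device.vaai.status.get.Invoke() |
        Select-Object Device, VAAIPluginName, ATSStatus, CloneStatus, ZeroStatus, DeleteStatus |
        Format-Table -AutoSize

Continue reading “FlashArray XCOPY Data Reduction Reporting Enhancement”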

Pure Storage and VMware VAAI

Today I posted a new document to our repository on purestorage.com: Pure Storage and VMware Storage APIs for Array Integration—VAAI. This is a new white paper that describes in detail the VAAI block primitives that VMware offers and that we support. Furthermore, performance expectations are described, comparing before and after, and showing how the operations do at scale. Some best practices are listed as well; the why and how of those recommendations are also described within.

I have to say, especially when it comes to XCOPY, I have never seen a storage array do so well with it. It is really quite impressive how fast XCOPY sessions complete and how scaling up (in terms of the number of VMs or the size of the VMDKs) doesn’t weaken the process at all. The main purpose of this post is to alert you to the new document, but I will go over some high-level performance information as well. Read the document for the details and more.


Continue reading “Pure Storage and VMware VAAI”

Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types

A bit of a long, rambling one here. Here goes…

The choice of virtual disk allocation mechanism was somewhat of a hot topic a few years back, with the “thin on thin” question almost becoming a religious debate at times. That conversation has essentially cooled down: vendors have made their recommendations, end users have their preferences, and that’s that. With the advent of the true all-flash array, such as the Pure Storage FlashArray with deduplication, compression, pattern removal, etc., I feel like this somewhat basic topic is worth revisiting now.

To review, there are three main virtual disk types (there are others, namely SESparse, but I am going to stick with the most common for this discussion); a short PowerCLI example of creating each type follows the list:

  • Eagerzeroedthick–This type is fully allocated upon creation. It reserves the entire indicated capacity on the VMFS volume and zeroes the entire encompassed region on the underlying storage before allowing writes from a guest OS. This means it takes longer to provision, as GBs or TBs of zeroes must be written before the virtual disk is ready to use. I will refer to this as EZT from now on.
  • Zeroedthick–This type fully reserves the space on VMFS but does not pre-zero the underlying storage. Zeroing is done on an as-needed basis: when a guest OS writes to a new segment of the virtual disk, the encompassing block is zeroed first, then the new write is committed.
  • Thin–This type neither reserves space on the VMFS nor pre-zeroes. Zeroing is done on an as-needed basis, like zeroedthick. The virtual disk’s physical capacity grows in segments defined by the block size of the VMFS, usually 1 MB.
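
Each type maps directly to a -StorageFormat value when creating a disk in PowerCLI; a minimal sketch with a hypothetical VM name:

    # Hypothetical VM; each call creates a 100 GB virtual disk of the named type
    $vm = Get-VM -Name "test-vm"

    New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat EagerZeroedThick   # EZT
    New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat Thick              # zeroedthick
    New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat Thin               # thin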

Continue reading “Pure Storage FlashArray and Re-Examining VMware Virtual Disk Types”

VMware PowerCLI and Pure Storage

This is a post I plan on updating on a rolling basis. I have been working on updating the vSphere and Pure Storage Best Practices document, and there are a few settings that can be tweaked to increase performance. A common question I have, and occasionally receive, is: can this be easily simplified or automated? Of course! And PowerCLI is the best option in most cases. I will continue to add to this post or update it as I find newer or better ways of doing things.
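
As a taste of the kind of tweak involved (the real, maintained scripts are in the GitHub repo linked below), here is a sketch that sets the Round Robin path policy on all Pure Storage FlashArray devices across the connected hosts:

    # One common best-practice tweak: Round Robin multipathing on Pure devices
    Get-VMHost | Get-ScsiLun -LunType disk |
        Where-Object { $_.Vendor -eq "PURE" } |
        Set-ScsiLun -MultipathPolicy RoundRobin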

****UPDATED SCRIPTS AND NEW FUNCTIONALITY: check out this blog post for insight****

Update: get my scripts on my GitHub page here:

https://github.com/codyhosterman/powercli

Continue reading “VMware PowerCLI and Pure Storage”