I have officially decided to “retire” my UNMAP script that uses direct REST calls to find before-and-after capacity changes for given volumes. From this point on I am only updating the version that uses the Pure Storage PowerShell SDK. The SDK version is much more robust, is not tied to specific REST API versions, and greatly simplifies managing the data in the script.
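For reference, connecting with the SDK and pulling space numbers is only a few lines. This is a minimal sketch, assuming the PureStoragePowerShellSDK module and its New-PfaArray and Get-PfaVolumeSpaceMetrics cmdlets; the array address and volume name are placeholders.

```powershell
# Minimal sketch, assuming the PureStoragePowerShellSDK module is installed.
Import-Module PureStoragePowerShellSDK

# Connect to the FlashArray (certificate check skipped for a self-signed cert).
$faCreds    = Get-Credential -Message "FlashArray credentials"
$flasharray = New-PfaArray -EndPoint 192.168.1.100 -Credentials $faCreds -IgnoreCertificateError

# Pull space metrics for a volume; the SDK handles the REST versioning and JSON parsing.
$space = Get-PfaVolumeSpaceMetrics -Array $flasharray -VolumeName "myvmfsvolume"
$space.data_reduction   # current data reduction ratio
$space.volumes          # physical written capacity for the volume (bytes)
```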
I have also created a second version for use in the VMware Fling called PowerActions. This allows the script to be executed from the vSphere Web Client.
***********UPDATE PLEASE REFER TO THE POST AT THIS LINK FOR UPDATED INFORMATION ON THIS SCRIPT***********************
Quick update. I have posted a new version of the PowerCLI UNMAP script that I maintain. I usually do not put a blog post out every time I update it, but there were enough changes here I think it was worth a quick note.
I have completed updates for two of my main VMware vSphere documents for the Pure Storage FlashArray. These include the standard best practices document and the white paper explaining VAAI in detail and how it works on the FlashArray.
The best practices document has mainly been updated with information that this blog has shown in the past couple of months. Notably:
vSphere 6 updates, supported Web Client Plugin versions, changes in virtual disk recommendations, in-guest UNMAP support, etc.
VMFS UNMAP changes when it comes to best practice recommendations
vRealize Operations Management Pack
EFI-enabled VMs and Disk.DiskMaxIOSize (see the example after this list)
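For context on that last item, Disk.DiskMaxIOSize is just a per-host advanced setting. Here is a minimal PowerCLI sketch; the 4096 KB (4 MB) value reflects the guidance discussed in the document, but check the current best practices before applying it to your hosts.

```powershell
# Lower the maximum I/O size passed down to the array so EFI-booted VMs behave.
# 4096 KB (4 MB) is the value discussed in the best practices guidance.
Get-VMHost | Get-AdvancedSetting -Name "Disk.DiskMaxIOSize" |
    Set-AdvancedSetting -Value 4096 -Confirm:$false
```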
The VAAI document received a similar set of updates:
vSphere 6 changes, mainly focused on the thin virtual disk XCOPY enhancements
UNMAP changes, block counts, performance and in-guest support (EnableBlockDelete; see the example after this list)
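The EnableBlockDelete setting mentioned above is likewise a host-level advanced option. A hedged PowerCLI sketch of turning it on follows; the host name is a placeholder, this applies to vSphere 6.0, and you should verify against the current VAAI paper before enabling it.

```powershell
# Enable in-guest UNMAP passthrough on an ESXi 6.0 host (off by default).
Get-VMHost -Name "esxi01.example.com" |
    Get-AdvancedSetting -Name "VMFS3.EnableBlockDelete" |
    Set-AdvancedSetting -Value 1 -Confirm:$false
```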
Both documents are also updated for the FlashArray//m, but that is mainly a cosmetic change: nothing really changes for the VMware environment and no recommendations are different. Of course, the documents have also been cleaned up and re-arranged into a semi-new, more reader-friendly format.
Important! If you have old versions of these documents, delete them! They get updated frequently (at least a few times a year) and the changes can be important. Whenever you need to refer to the guides, check back to the Pure Storage community for the latest version.
Enjoy! As always feedback on these documents is ALWAYS welcome.
I was recently doing some troubleshooting for a customer that was using my UNMAP PowerCLI script and discovered a change in ESXi 5.5+ UNMAP. The issue was that the script was taking quite a while to complete. After some logic optimizations and increased timeouts the script sped up a bit and fewer timeout errors occurred, but a bunch of the UNMAP operations were still taking a lot longer than expected. Eventually we threw our hands up and said it was good enough. More recently, I was testing a 3rd party UNMAP tool and ran into similar behavior, so I dug into it a bit more and found some semi-unexpected changes in how UNMAP works, specifically the behavior when leveraging non-default block iteration counts. Continue reading “UNMAP Block Count Behavior Change in ESXi 5.5 P3+”
Here is another “look what I found” storage-related post for vSphere 6. Once again, I am still looking into exact design changes, so this is what I observed and my educated guess on how it was done. Look for more details as time wears on.
***This blog post really turned out longer than I expected; it probably should have been a two-parter, so I apologize for the length.***
As usual, let me wax historical for a bit… A little over a year ago, in my previous job, I wrote a proposal document to VMware to improve how they handled XCOPY. XCOPY, as you may be aware, is the SCSI command used by ESXi to clone, Storage vMotion, or deploy-from-template VMs on a compatible array. It seems that in vSphere 6.0 VMware implemented these requests (my good friend Drew Tonnesen recently blogged on this). My request centered around three things:
Allow XCOPY to use a much larger transfer size (the current maximum is 16 MB), i.e. how much space a single XCOPY SCSI command can describe. Microsoft ODX, for example, can handle XCOPY sizes up to 256 MB (though the ODX implementation is a bit different).
Allow ESXi to query the Maximum Segment Length from the Extended Copy (XCOPY) Receive Copy Results command and use that value. This value tells ESXi what to use as a maximum transfer size, which would save the end user the hassle of manual transfer size changes (an example of that manual change appears below).
Allow for thin virtual disks to leverage a larger transfer size than 1 MB.
The first two are only supported in a very limited fashion by VMware right now (but stay tuned on this!), so for this post I am going to focus on the thin virtual disk enhancement and what it means on the FlashArray.
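For reference, the manual transfer size change mentioned above is the DataMover.MaxHWTransferSize advanced setting. This is a generic PowerCLI sketch of turning that knob, not FlashArray-specific guidance; 16384 KB is the 16 MB maximum and 4096 KB is the default.

```powershell
# Raise the XCOPY transfer size from the 4 MB default to the 16 MB maximum.
# This is a per-host setting; prior to vSphere 6, thin virtual disks ignored it
# and used a 1 MB transfer size regardless.
Get-VMHost | Get-AdvancedSetting -Name "DataMover.MaxHWTransferSize" |
    Set-AdvancedSetting -Value 16384 -Confirm:$false
```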
I recently received a question on another UNMAP post asking what the minimum permissions are to run UNMAP with PowerCLI, and I finally got around to looking into it. It turns out to be very straightforward: if you run it with a read-only account, it will fail. Since the operation creates a file and makes changes, some configuration authority is required. Running as read-only looks like this:
This is certainly not my first post about UNMAP and I am pretty sure it will not be my last, but I think this is one of the more interesting updates of late. vSphere 6.0 has a new feature that supports direct UNMAP operations issued by a guest OS from inside a virtual machine. Importantly, this is now supported using a virtual disk instead of the traditional requirement of a raw device mapping.
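To give a rough idea of what this looks like from inside the guest, here is a minimal sketch assuming a Windows guest, a thin virtual disk, and EnableBlockDelete turned on for the ESXi 6.0 host; the full set of requirements is covered in the post itself.

```powershell
# Inside a Windows guest: retrim the volume so the OS issues UNMAP for freed blocks.
# Assumes a thin virtual disk and VMFS3.EnableBlockDelete=1 on the ESXi 6.0 host;
# check the current requirements (guest OS version, VM hardware version, etc.).
Optimize-Volume -DriveLetter E -ReTrim -Verbose
```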
Recently the Purity Operating Environment 4.1.1 release came out with quite a few enhancements. Many of these were for replication, there are some new GUI features, and the new vSphere Web Client Plugin is included. What I wanted to talk about here is a space reporting enhancement concerning VAAI XCOPY (Full Copy). First, some history…
First off, a quick refresher on XCOPY. XCOPY is a VAAI feature that offloads virtual disk copying/migration within a single array, so operations like Storage vMotion, cloning, or Deploy from Template. Telling an array to move something from location A to location B is much faster than having ESXi issue tons of reads and writes over the SAN; it also reduces CPU overhead on the ESXi host and traffic on the SAN. Faster cloning/migration and less overhead. Yay! This lets ESXi focus on what it does best (managing and running virtual machines) while letting the array do what it does best: managing and moving data. Continue reading “FlashArray XCOPY Data Reduction Reporting Enhancement”
***********UPDATE PLEASE REFER TO THE POST AT THIS LINK FOR UPDATED INFORMATION ON THIS SCRIPT***********************
The most common request I get for scripts here at Pure Storage is an UNMAP script using PowerCLI. I have a basic one here that does the trick: it UNMAPs Pure Storage volumes in a vCenter. That being said, it is pretty dumb; it doesn't tell you much about what happened other than which volumes it is reclaiming (or not reclaiming), and then it moves on. A few requests have come in recently for something a little more in-depth, most notably the ability to see how much space has been reclaimed. This information cannot be gathered from the VMware side of things; it has to come from the FlashArray.
There are two options here: use our REST APIs directly, or use our PowerShell toolkit (which just wraps the REST calls). For this script I chose to use the REST API directly from within PowerShell. What the script does (a simplified sketch follows the list below):
Connects to the vCenter and FlashArray
Finds all of the datastores and counts how many are actually Pure Storage volumes (NAA comparison)
Iterates through all of the datastores
Skips it if it is not Pure
If it is, the current data reduction ratio is reported, as is the current physical written capacity on the FlashArray.
Runs UNMAP on the datastore
Reports the new data reduction and physical space after UNMAP completes and how much was reclaimed.
Repeats for the rest of the volumes.
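To make that flow concrete, here is a heavily simplified sketch of the logic. This is not the actual script: the server names and credentials are placeholders, REST API version 1.4 is assumed, and error handling, logging (the add-content piece) and certificate handling are stripped out. Use the real script for anything beyond experimentation.

```powershell
# Simplified outline of the UNMAP/space-reporting flow. Placeholders throughout.
$vcenter    = "vcenter.example.com"
$flasharray = "flasharray.example.com"
$blockcount = 200            # UNMAP block iteration count

Connect-VIServer -Server $vcenter -Credential (Get-Credential -Message "vCenter credentials")

# Authenticate to the FlashArray REST API and establish a session cookie.
$faCred = Get-Credential -Message "FlashArray credentials"
$token  = Invoke-RestMethod -Method Post -Uri "https://$flasharray/api/1.4/auth/apitoken" `
            -Body @{ username = $faCred.UserName; password = $faCred.GetNetworkCredential().Password }
$null   = Invoke-RestMethod -Method Post -Uri "https://$flasharray/api/1.4/auth/session" `
            -Body @{ api_token = $token.api_token } -SessionVariable faSession

# Grab the FlashArray volume list once so datastores can be matched by serial.
$purevols = Invoke-RestMethod -Method Get -Uri "https://$flasharray/api/1.4/volume" -WebSession $faSession

foreach ($ds in Get-Datastore) {
    if ($ds.Type -ne "VMFS") { continue }

    # Skip anything that is not a Pure Storage volume (NAA comparison).
    $naa = $ds.ExtensionData.Info.Vmfs.Extent[0].DiskName
    if ($naa -notlike "naa.624a9370*") { continue }

    # Match the datastore's device serial (embedded in the NAA) to a FlashArray volume.
    $vol = $purevols | Where-Object { $_.serial.ToLower() -eq $naa.Substring(12) }
    if (-not $vol) { continue }

    # Report data reduction and physical written capacity before UNMAP.
    $before = Invoke-RestMethod -Method Get -Uri "https://$flasharray/api/1.4/volume/$($vol.name)?space=true" -WebSession $faSession
    Write-Host ("{0}: data reduction {1}, physical space {2} bytes" -f $ds.Name, $before.data_reduction, $before.volumes)

    # Run UNMAP against the datastore from any host that sees it (V1 esxcli interface).
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Datastore $ds | Select-Object -First 1)
    $esxcli.storage.vmfs.unmap($blockcount, $ds.Name, $null) | Out-Null

    # Give the FlashArray time to reflect the reclaimed space, then report again.
    Start-Sleep -Seconds 60
    $after = Invoke-RestMethod -Method Get -Uri "https://$flasharray/api/1.4/volume/$($vol.name)?space=true" -WebSession $faSession
    Write-Host ("{0}: reclaimed roughly {1} bytes" -f $ds.Name, ($before.volumes - $after.volumes))
}
```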
The script reports all of this to the console window, but it also always writes it to a log file via add-content. If you don't want it to return the info to the console, simply delete the write-host lines. If you don't want it to log, delete the add-content lines.
There are a few required parameters: vCenter information (IP, username, password), FlashArray information (IP, username, password), the UNMAP block count and a log file location. These are hard-coded, but that can easily be changed by switching them to read-host prompts.
You may also note that after each UNMAP the script sleeps for 60 seconds; I do this to make sure the FlashArray has time to update its information right after the UNMAP. 60 seconds is VERY conservative (probably 10 or so is fine), so feel free to adjust that number if you don't like waiting. I also added another sleep at the end of each datastore operation to give you a quick chance to review the latest results before the next datastore's information starts spewing onto the screen (note this update didn't make it into the video demo below, which doesn't wait after each datastore).
See the script in action below. Essentially I am deleting a bunch of VMs across 4 datastores and then running the UNMAP. You can see the space get reclaimed on the FlashArray.
Note: You need particular access to vCenter to run UNMAP (see a blog post about that here). For the FlashArray, only Read Only is needed (higher is of course fine too).
Reclaiming “dirty” or “dead” space is a topic that comes across my desk quite often these days. Since the FlashArray is a data reduction array, it is especially important that space is not wasted on the array; wasted space throws off the economics, among other things. Therefore UNMAP is an important VAAI feature to leverage in any AFA environment. Supporting UNMAP is definitely table stakes for AFAs.
Note: I am going to use the terms “dead”, “dirty” and “stranded” interchangeably to describe space that needs to be reclaimed. So anyways…
Unfortunately, UNMAP in its current form does not satisfy all of the reclamation use cases. UNMAP will only reclaim space on an array when capacity is cleared from the VMFS volume, i.e. when a VM (or virtual disk) is deleted or migrated elsewhere. It does not have the ability to reclaim space when data is “deleted” inside a virtual machine by the guest OS on a virtual disk; VMware does not know that capacity has been cleared, so neither can the array. Until the virtual disk itself is deleted or moved, that capacity cannot be reclaimed with UNMAP. So to be clear, UNMAP with vmkfstools (in ESXi 5.0/5.1) or esxcli (in ESXi 5.5) does not allow you to reclaim space that remains stranded inside of virtual disks.
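For reference, those two VMFS-level reclaim commands look like the following when run from the ESXi shell; the datastore name is a placeholder and the exact options vary by build, so treat this as a rough sketch.

```
# ESXi 5.0/5.1: run from inside the datastore's directory; reclaims via a temporary
# balloon file sized as a percentage of free space (60% here).
cd /vmfs/volumes/mydatastore
vmkfstools -y 60

# ESXi 5.5+: per-datastore UNMAP, with an optional block count per iteration.
esxcli storage vmfs unmap -l mydatastore -n 200
```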