Another year, another VMworld! I have submitted a few sessions that I would love to present this year; hopefully you agree and will vote for them. Plus, there are some other Pure Storage sessions that you should take a look at.
I normally do not create a blog post about updating the guide, but this one was a major overhaul and I think it is worth mentioning. There are also a few documents I have written and published that I want to call out.
This is the second part of this post. In the first post, I explained the fix and how it affects Windows. In this post, we will look at how the change affects Linux-based virtual machines. See the original post here:
As you might’ve seen, Cormac Hogan just posted about an UNMAP fix that was recently released. This is a fix I have been eagerly awaiting for some time, so I am very happy to see it arrive. And thankfully it does not disappoint.
So I am in the middle of updating my best practices guide for vSphere on FlashArray, and one of the topics I want to provide better guidance on is ESXi queue management. This breaks down to a few settings (see the PowerCLI sketch after this list):
Array volume queue depth limit
Datastore queue depth limit
Virtual Machine vSCSI Adapter queue depth limit
Virtual Disk queue depth limit
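To make a couple of these concrete, here is a minimal PowerCLI sketch for inspecting two of the limits on a host. The host name is a hypothetical placeholder, and the property names are as I recall Get-EsxCli -V2 exposing the "esxcli storage core device list" fields, so verify them in your environment.

# Minimal sketch: inspect per-device queue limits via PowerCLI's esxcli v2 interface.
# "esxi01.example.com" is a hypothetical host name.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.example.com") -V2

# DeviceMaxQueueDepth is the device (volume) queue depth limit;
# NoofoutstandingIOswithcompetingworlds is DSNRO, the datastore-level limit
# that applies when more than one VM issues I/O to the same device.
$esxcli.storage.core.device.list.Invoke() |
    Select-Object Device, DeviceMaxQueueDepth, NoofoutstandingIOswithcompetingworlds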
I have had more than a few questions lately about handling this, either as general queries or as performance escalations. From what I have found, it generally comes down to a fundamental understanding of how ESXi queuing works, and how the FlashArray plays with it. So I put together a blog post that walks through a use case and solves a performance problem, explaining the concepts along the way.
Please note:
This is a simple example to explain how queuing works in ESXi
Mileage will vary depending on your workload and configuration
This workload is designed specifically to make the relationships easier to understand
PLEASE do not make changes in your environment until you have at least read my conclusion at the end, and frankly not without direct guidance from VMware support.
I am sorry, this is a long one. But hopefully informative!
If you prefer a video, here is my one-hour VMworld session that goes into depth on what I write about below:
This is a blog I have been waiting a long time to write. The past year and a half of my work has heavily focused on improving and building our VMware vRealize integration at Pure Storage. Log Insight and Operations Manager integration (analytics and so on) already existed, so the next logical step was provisioning (orchestration): vRealize Orchestrator and Automation. The first step I took was using the built-in REST plugin in vRO to build a workflow package that customers could use to manage the FlashArray from within vRO without much work on their own part.
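To give a sense of what a workflow package like that wraps, here is a hedged PowerShell sketch of the style of REST call involved. The array address, API version, and token are hypothetical placeholders, and the endpoints follow the Purity REST 1.x pattern, so check the FlashArray REST documentation for your release.

# Hedged sketch of a FlashArray REST session, Purity REST 1.x style.
# Array address, API version, and token are hypothetical placeholders.
$array = "flasharray.example.com"

# Establish an authenticated session from an API token
Invoke-RestMethod -Method Post -Uri "https://$array/api/1.6/auth/session" `
    -Body (@{ api_token = "<your-api-token>" } | ConvertTo-Json) `
    -ContentType "application/json" -SessionVariable faSession

# List volumes with the authenticated session
# (you may also need to handle the array's self-signed certificate)
Invoke-RestMethod -Method Get -Uri "https://$array/api/1.6/volume" -WebSession $faSession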
I started to realize that a workflow package was not enough, especially when it comes to vRA Anything-As-A-Service integration. A big part of what is missing from a workflow package is custom objects and inventory management, something a plugin can easily provide. So, without further ado, please meet the FlashArray vRO plugin! It is downloadable at the VMware Solution Exchange and fully certified by VMware and Pure Storage:
This is part 7 of this 7-part series. Questions around managing VMFS snapshots have been cropping up a lot lately, and I realized I didn't have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing it. So I put together a long series on how to do all of this.
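As a taste of the scripting side covered in the series, here is a minimal PowerCLI sketch of resignaturing a VMFS copy via esxcli; the host name and volume label are hypothetical placeholders.

# Minimal sketch: list unresolved VMFS copies and resignature one of them.
# Host name and volume label are hypothetical placeholders.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.example.com") -V2

# Show VMFS copies (snapshots/replicas) this host can see
$esxcli.storage.vmfs.snapshot.list.Invoke()

# Resignature a copy by its original volume label
$resigArgs = $esxcli.storage.vmfs.snapshot.resignature.CreateArgs()
$resigArgs.volumelabel = "ProdDatastore01"
$esxcli.storage.vmfs.snapshot.resignature.Invoke($resigArgs)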
Here is my storage manager for the FlashArray and VMware. It is based on PowerCLI but uses a front-end GUI. Enjoy!
NOTICE: THIS HAS BEEN DEPRECATED IN FAVOR OF THE POWERSHELL MODULE HERE:
https://www.codyhosterman.com/scripts-and-tools/pure-storage-powershell-vmware-module/
There are a variety of methods for managing VMware objects (VMFS volumes, VMs, VMDKs, and RDMs) and the underlying snapshots in order to recover or clone them. But I often get asked if I have a PowerShell (PowerCLI) script to do one or all of them. I have a bunch on my GitHub, but a week or so ago I decided to put something a bit more robust together. At first I was making it a standard interactive script, but it morphed into a GUI, using combo boxes and the like.
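For the curious, this is roughly the WinForms pattern such a GUI is built on, sketched minimally; the form title and the datastore population are illustrative, not the tool's actual code.

# Minimal WinForms sketch of the combo-box pattern; not the tool's actual code.
Add-Type -AssemblyName System.Windows.Forms

$form = New-Object System.Windows.Forms.Form
$form.Text = "FlashArray Storage Manager"

$combo = New-Object System.Windows.Forms.ComboBox
$combo.DropDownStyle = [System.Windows.Forms.ComboBoxStyle]::DropDownList
# Populate the drop-down from PowerCLI, for example:
# Get-Datastore | ForEach-Object { [void]$combo.Items.Add($_.Name) }
$form.Controls.Add($combo)

[void]$form.ShowDialog()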
This is part 6 of this 8-part series. Questions around managing VMFS snapshots have been cropping up a lot lately, and I realized I didn't have much specific Pure Storage and VMware resignaturing information out there, especially around scripting all of this and the various options for doing it. So I put together a long series on how to do all of this.
Using vCenter and our Web Client plugin, recovering a snapshot is a pretty straightforward process. The prerequisite here is having our Web Client plugin installed and configured; info on that is here. If you want the manual steps, scroll down further; the whole process is described in detail without the plugin, using just our GUI and vCenter.
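If you prefer the command line to the plugin for the mount itself, here is a hedged PowerCLI sketch. The host name and volume label are hypothetical, the no-persist argument name is as I recall esxcli v2 exposing it, and force-mounting with the original signature intact is only appropriate when the original volume is not also presented to the host.

# Sketch: mount a VMFS snapshot copy with its original signature intact.
# Only appropriate when the original volume is NOT presented to this host;
# otherwise, resignature instead. Names are hypothetical placeholders.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.example.com") -V2

$mountArgs = $esxcli.storage.vmfs.snapshot.mount.CreateArgs()
$mountArgs.volumelabel = "ProdDatastore01"
$mountArgs.nopersist = $true   # do not persist the mount across reboots
$esxcli.storage.vmfs.snapshot.mount.Invoke($mountArgs)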
So vSphere 6.5 introduced VMFS-6, which came with the highly desired automatic UNMAP. Yay! But some users might still need to run manual UNMAP on it. Immediate reasons that come to mind are:
They disabled automatic UNMAP on the VMFS for some reason
They need to get space back quickly and don’t have time to wait
When you run manual UNMAP, one of the options you can specify is the block count. Since ESXi 5.5, the UNMAP process iterates through the VMFS, issuing reclaim to one small segment of free space at a time until UNMAP has been issued to all of it. The block count dictates how big that segment is. By default, ESXi will use 200 blocks (which is 200 MB).
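For reference, here is a minimal PowerCLI sketch of a manual UNMAP with an explicit block count; the host and datastore names are hypothetical placeholders.

# Minimal sketch: manual UNMAP with an explicit reclaim unit (block count).
# Host and datastore names are hypothetical placeholders.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.example.com") -V2

$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = "MyVMFS6Datastore"
$unmapArgs.reclaimunit = "200"   # 200 blocks (200 MB) reclaimed per iteration
$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)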