Phew, another VMworld under my belt! Tons of fun and tons of work! Pure Storage had its largest presence ever at VMworld–we certainly painted Moscone orange, in addition to unleashing an arsenal of 4,500 Nerf guns upon the city. This post is meant to be more of a photo album of the week and what Pure was involved in–technical posts on what we did there will be forthcoming.
This is a question that has come up quite often, and I have blogged about it for several different products in the past: what firewall rules do I need to create to install and use the Pure Storage Plugin for the vSphere Web Client? Luckily, this is fairly simple. For instructions on installing and using the Web Client plugin, check out these posts here and here.
If you go to install the plugin from the array GUI and see the following error, it could very well be a network issue:
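Before digging into firewall configuration, it is worth verifying basic reachability. Below is a minimal sketch of a connectivity check from a Windows host on the management network, assuming the plugin registration talks to vCenter over the default HTTPS port (TCP 443); the hostname is a placeholder for your own vCenter server.

```powershell
# Hypothetical hostname; substitute your own vCenter server.
# Assumes the default HTTPS port (443) is used for plugin registration.
Test-NetConnection -ComputerName vcenter.example.com -Port 443
```

If TcpTestSucceeded comes back False, a firewall rule between the array management network and the vCenter server is the likely culprit.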
Just like last year, Pure Storage is sponsoring a post-VMworld event called “Evolve”. This will feature some great keynote speakers as well as some technical sessions on the state of the industry. This will take place on Thursday, August 28th after VMworld has officially come to a close and will be right there next to the Moscone Center. I highly recommend you stick around for this (yes, it’s free)!
Today I posted a new document to our repository on purestorage.com: Pure Storage and VMware Storage APIs for Array Integration—VAAI. This new white paper describes in detail the VAAI block primitives that VMware offers and that we support. It also sets performance expectations, comparing behavior before and after each primitive and showing how the operations fare at scale. Some best practices are listed as well, and the why and how of those recommendations are described within.
I have to say, especially when it comes to XCOPY, I have never seen a storage array handle it so well. It is really quite impressive how fast XCOPY sessions complete, and how scaling up (in terms of the number of VMs or the size of the VMDKs) doesn't weaken the process at all. The main purpose of this post is to alert you to the new document, but I will go over some high-level performance information as well. Read the document for the details and more.
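As an aside, if you want to confirm the block primitives are enabled on your hosts before testing, they are exposed as ESXi advanced settings. A quick PowerCLI sketch of checking them (assuming PowerCLI is installed and you are already connected with Connect-VIServer):

```powershell
# The three VAAI block primitives and their host advanced settings.
# A value of 1 means enabled (the ESXi default).
$settings = 'DataMover.HardwareAcceleratedMove',    # XCOPY (Full Copy)
            'DataMover.HardwareAcceleratedInit',    # Block Zero (WRITE SAME)
            'VMFS3.HardwareAcceleratedLocking'      # ATS (hardware-assisted locking)

foreach ($name in $settings) {
    Get-VMHost | Get-AdvancedSetting -Name $name |
        Select-Object Entity, Name, Value
}
```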
Quick post here. A PowerShell toolkit for Pure Storage has been released and can be downloaded here:
It currently offers:
- Create a volume
- Create a new volume from a source (snap or volume)
- Snap a volume
- Get volume statistics
I won’t go into detail because my colleague @themsftdude (who created the actual kit) did a fine job of describing it on his own blog:
This will save you a lot of time in your own scripts: the cmdlets leverage our REST API and spare you the steps of crafting your own REST calls. Look for this toolkit to expand in the near future.
A bit of a long, rambling one here. Here goes…
The choice of virtual disk allocation mechanism was somewhat of a hot topic a few years back, with the "thin on thin" question almost becoming a religious debate at times. The debate has since cooled down: vendors have made their recommendations, end users have their preferences, and that's that. But with the advent of all-flash arrays such as the Pure Storage FlashArray, which offer deduplication, compression, pattern removal, and so on, I feel this somewhat basic topic is worth revisiting now.
To review, there are three main virtual disk types (there are others, namely SESparse, but I am going to stick with the most common for this discussion):
- Eagerzeroedthick–This type is fully allocated upon creation. This means that it reserves the entire indicated capacity on the VMFS volume and zeroes the entire encompassed region on the underlying storage prior to allowing writes from a guest OS. Consequently, it takes longer to provision, as it has to write GBs or TBs of zeroes before the virtual disk creation is complete and ready for use. I will refer to this as EZT from now on.
- Zeroedthick–This type fully reserves the space on the VMFS but does not pre-zero the underlying storage. Zeroing is done on an as-needed basis: when a guest OS writes to a new segment of the virtual disk, the encompassing block is zeroed first, then the new write is committed.
- Thin–This type neither reserves space on the VMFS nor pre-zeroes the underlying storage. Zeroing is done on an as-needed basis, like zeroedthick. The virtual disk's physical capacity grows in segments defined by the block size of the VMFS volume, usually 1 MB.
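For reference, all three types can be created from PowerCLI via the -StorageFormat parameter of New-HardDisk (where "Thick" corresponds to zeroedthick). A minimal sketch, assuming an existing vCenter connection and a VM named 'TestVM', which is a placeholder:

```powershell
$vm = Get-VM -Name 'TestVM'   # placeholder VM name

New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat EagerZeroedThick  # EZT
New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat Thick             # zeroedthick
New-HardDisk -VM $vm -CapacityGB 100 -StorageFormat Thin              # thin
```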
Previously I blogged about using PowerShell with the Pure Storage FlashArray to enable scripting of common tasks like provisioning or snapshotting. In that post I showed how to use SSH to run Purity operations, but with the introduction of the REST API (fully available in Purity 3.4+) there is now a much better and cleaner way to script this. You no longer need to install extra SSH modules and the like; all you need is the Invoke-RestMethod cmdlet in PowerShell.
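To give a flavor of how simple this is, here is a minimal sketch: authenticate with an API token, then create a volume. The array address, API version, token, and volume name are all placeholders; with a self-signed management certificate you may also need to relax certificate validation in your session first.

```powershell
$array = 'https://flasharray.example.com'   # placeholder array address

# Exchange an API token for a session cookie.
$auth = @{ api_token = '<your-api-token>' } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "$array/api/1.4/auth/session" `
    -Body $auth -ContentType 'application/json' -SessionVariable pure

# Create a 100 GB volume named "myvolume" with the authenticated session.
$vol = @{ size = '100G' } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "$array/api/1.4/volume/myvolume" `
    -Body $vol -ContentType 'application/json' -WebSession $pure
```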