Ah yes. Tagging. The one above them all. A simple feature, but nonetheless powerful.
We have actually had tagging in Purity for quite some time, but it was hidden: we initially used it only for Virtual Volume metadata. There are a ton of use cases for tags beyond vVols, though: use cases I know customers need, and use cases that I need.
So in Purity 6.0 we added tagging: the ability to assign key-value tags to a volume or a snapshot. As of Purity 6.0, these tags are available in the CLI and the REST API; GUI support is upcoming. In this post I will walk through using the CLI to demonstrate tags. Stay tuned for information on using the REST API and specific scripting tools.
Purity 6.0 ships with a new REST API version, 2.2, which includes endpoints to manage ActiveDR operations (demote/promote), tagging (more on that in a later post), and more.
REST 2.x is a new major release of our REST API that changes the underlying structure of the API: the endpoints, authentication, queries, and more. Our current PowerShell SDK uses REST 1.x (though that is changing), but for folks who want to write their own PowerShell against REST 2.x, or start using it now, here is some help.
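To make the authentication change concrete, here is a minimal sketch of the REST 2.x login flow: POST to the login endpoint with an `api-token` header, then reuse the returned `x-auth-token` header on subsequent requests. It is shown in Python for brevity (the same two steps map to an `Invoke-RestMethod` call in PowerShell); the array hostname and token are placeholders.

```python
# Sketch of FlashArray REST 2.x authentication: exchange a long-lived
# API token for a short-lived session token. Hostname/token values
# here are placeholders, not real credentials.
import ssl
import urllib.request

API_VERSION = "2.2"

def login_url(array: str, version: str = API_VERSION) -> str:
    # Build the REST 2.x login endpoint for a given array.
    return f"https://{array}/api/{version}/login"

def get_session_token(array: str, api_token: str) -> str:
    # POST the api-token header; the array answers with x-auth-token.
    request = urllib.request.Request(
        login_url(array), method="POST", headers={"api-token": api_token}
    )
    # Arrays commonly present self-signed certificates; verification is
    # skipped here for brevity (use a proper CA bundle in production).
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(request, context=context) as response:
        return response.headers["x-auth-token"]
```

The returned session token is then sent as the `x-auth-token` header on every subsequent REST 2.x call until it expires, at which point you log in again.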
I have already posted about ActiveDR briefly here:
I wanted to go into more detail on ActiveDR (and more) in a “What’s New” series. One of the flagship features of the Purity 6.0 release is what we call ActiveDR. ActiveDR is a continuous replication feature, meaning it sends new data over to the secondary array as quickly as it can; it does not wait for an interval to replicate.
For the TL;DR, here is a video tech preview demo of the upcoming SRM integration as well as setup of ActiveDR itself
But ActiveDR is much more than just data replication: it protects your storage environment. Let me explain what that means.
Hey there! This week we announced the upcoming release of our latest operating environment for the FlashArray: Purity 6.0. There are quite a few new features, details of which I will get into in subsequent posts, but I wanted to focus on one related topic for now: replication. We have had array-based replication (in many forms) for years now; in Purity 6.0 we introduced a new offering called ActiveDR.
ActiveDR at a high level is a near-zero-RPO replication solution. When data gets written, we send it to the second array as fast as we can; there is no waiting for some set interval. This is not a fundamentally new concept. Asynchronous replication has been around for a long time, and in fact we already support a version of it. What is DIFFERENT about ActiveDR is how much thought has gone into the design to ensure simplicity while taking advantage of how the FlashArray is built. A LOT of thought went into the design: lessons learned from our own replication solutions and features and, of course, lessons from history around what people have traditionally found painful with asynchronous replication. But importantly, ActiveDR isn’t just about replicating your new writes; it also replicates snapshots, protection schedules, volume configurations, and more. It protects your protection! More on that in the second part.
Note: This is another guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.
One of the (many) fun things we get to work on at Pure is researching and figuring out new ways to streamline things that are traditionally repetitive and time-consuming (read: boring). Recently, we looked at how we could go about automating the deployment of FlashStack™ end-to-end; since a traditional deployment absolutely includes some of these repetitive tasks. Our goal is to start off with a completely greenfield FlashStack (racked, powered, cabled and otherwise completely unconfigured) and automate everything possible to end up with a fully-functional VMware environment ready for use. After some thought, reading and discussion, we found that this goal was achievable with the combination of SmartConfig™ and VMware Cloud Foundation™.
Automating a FlashStack deployment makes a ton of sense: from the moment new hardware is procured and delivered to a datacenter, the race is on for it to switch from a liability to a money-producing asset for the business. Further, using SmartConfig and Cloud Foundation together really combines two blueprint-driven solutions: Cisco Validated Designs (CVDs) and VMware Validated Designs (VVDs). That does a lot to take the guesswork out of building the underlying infrastructure and hypervisor layers, since firmware, hardware, and software versions have all been pre-validated and tested by Cisco, VMware, and Pure Storage. In addition, these two tools also set up these blueprints automatically via a customizable and repeatable framework.
Once we started working through this in the lab, the following automation workflow emerged:
Along with some introduction to the key technologies in play, we have divided the in-depth deployment guide into 3 core parts. All of these sections, including product overviews and click-by-click instructions are publicly available here on the Pure Storage VMware Platform Guide.
Deploy FlashStack with ESXi via SmartConfig. The input of this section will be factory-reset Cisco hardware, and the output will be a fully functional imaged/zoned/deployed UCS chassis with ESXi 7 installed and ready for use with VMware Cloud Foundation.
Build VMware Cloud Foundation SDDC Manager on FlashStack. The primary input for CloudBuilder is, fittingly, the output of part 1: ESXi hosts and their underlying infrastructure, from which we will automatically deploy a Management Domain with CloudBuilder.
The last section will show how to deploy a VMware Cloud Foundation Workload Domain with Pure Storage as both Principal Storage (VMFS on FC) and Supplemental Storage (vVols). Options such as iSCSI are covered in additional KB articles in the VMware Cloud Foundation section of the Pure Storage support site.
Post-deployment, customers will enjoy single-click lifecycle management for the bulk of their UCS and VMware components, as well as the ability to dynamically scale their Workload Domain resources up or down, independently or collectively, based upon specific needs (e.g. compute/memory, network, and/or storage), all from SDDC Manager.
For those who prefer a more interactive demo, I’ve recorded an in-depth overview video of this automation project followed by a four-part demo video series that shows click-by-click just how easy and fast it is to deploy a FlashStack with VMware from scratch.
Craig Waters and I gave a Light Board session on this subject:
And this is an in-depth PowerPoint overview of the project:
Finally, this is a video series showing the end-to-end process in-depth broken into a few parts for brevity.
One of the new features in vSphere 7 is support for NVMe-oF (Non-Volatile Memory Express over Fabric)–this replaces SCSI as a protocol and extends the NVMe command set over an external (to the host) fabric.
So what is it and why? I think this is worth a quick walk down memory lane to really answer both of these questions.
Before I get into it, below is a recent video/podcast/roundtable I did with Gestalt IT and a few wonderful people:
Christopher Kusek
Greg Stuart
Jason Massae
Stephen Foskett
The premise is “Is NVMe-oF ready for the primetime?” Check it out and find out where we all land! For more on my thoughts, read on.
Quick post: I did a quick Google search and found nothing immediately on this, so I figured a short post might be helpful for folks. My new install of Windows Terminal was defaulting to PowerShell 5:
And to switch to 7.0.1 (core) I had to go to the dropdown and open it each time. Such drudgery!
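The fix is to point Windows Terminal’s `defaultProfile` setting at the PowerShell 7 profile in its `settings.json` (reachable from Settings in that same dropdown). A trimmed sketch of the relevant portion is below; the GUID shown is illustrative, so copy the `guid` from the PowerShell 7 entry in your own `profiles` list:

```json
{
    "defaultProfile": "{574e775e-4f2a-5b96-ac1e-a2962a402336}",
    "profiles": {
        "list": [
            {
                "guid": "{574e775e-4f2a-5b96-ac1e-a2962a402336}",
                "name": "PowerShell",
                "source": "Windows.Terminal.PowershellCore"
            }
        ]
    }
}
```

Save the file and new tabs (and new windows) will open PowerShell 7 by default.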
Of course this probably gets an old “big deal, I’ve had that on Linux or Mac since the stone age” from those users. And fair enough.
Anyways, regardless of your platform, you might be a PowerShell user, since PowerShell Core is supported on multiple platforms. If you are a PowerCLI user, VMware added Core support a few years ago. Our base PowerShell module (for direct management of the FlashArray) does not yet support Core, though it is in plan. We also offer a VMware-focused Pure Storage PowerShell module that connects PowerCLI commands with FlashArray operations (when needed) to make managing a VMware and Pure environment a streamlined experience in PowerShell. Some of its cmdlets have dependencies on both modules, and some on just one of the two. The latter situation is what I am working on.