PowerShell Core Support for Installation of the Pure Storage vSphere Plugin

For the Windows users out there, one of the nice things that just went GA is Windows Terminal–which is pretty cool.

https://github.com/microsoft/terminal

Of course this probably gets an old “big deal, I’ve had that on Linux or Mac since the stone age” from those users. And fair enough.

Anyway, regardless of your platform, you might be a PowerShell user–PowerShell Core is supported on multiple platforms. If you are a PowerCLI user, VMware added Core support a few years ago. Our base PowerShell module (for direct management of the FlashArray) does not yet support Core, though it is in plan. We also offer a VMware-focused Pure Storage PowerShell module, which combines PowerCLI commands with FlashArray operations (when needed) to make managing a VMware and Pure environment a streamlined experience in PowerShell. Some of its cmdlets have dependencies on both modules, and some have dependencies on just one of the two. The latter situation is what I am working on.
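As a quick aside, checking which edition you are running (and which editions a given module declares support for) is a one-liner; the module query assumes the module is installed locally:

```powershell
# "Core" on PowerShell 6/7 (Windows, Linux, macOS); "Desktop" on Windows PowerShell 5.x
$PSVersionTable.PSEdition

# A module manifest advertises supported editions (empty if the manifest doesn't declare any)
(Get-Module -ListAvailable -Name PureStorage.FlashArray.VMware).CompatiblePSEditions
```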

Continue reading “PowerShell Core Support for Installation of the Pure Storage vSphere Plugin”

What’s New in vSphere 7.0 Storage Part III: GuestInfo VirtualDiskMapping Linux, PowerCLI support

I am a bit behind on my series here, and this was not meant to be in it, but after a conversation around it on Reddit, I dug in.

I posted about this earlier:

https://www.codyhosterman.com/2020/03/whats-new-in-vsphere-7-0-storage-part-ii-guestinfo-virtualdiskmapping/

But it was about the API and only Windows. What about Linux? What about getting it with PowerCLI?

All good questions.
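To set the stage, here is a hedged PowerCLI sketch of pulling the mapping–it assumes, per my reading of the 7.0 API, that each guest disk entry gains a Mappings property whose Key ties back to the virtual disk’s device key. The VM name is hypothetical:

```powershell
# Requires PowerCLI and an existing Connect-VIServer session
$vm = Get-VM -Name "MyLinuxVM"   # hypothetical VM name
foreach ($guestDisk in $vm.ExtensionData.Guest.Disk) {
    # Mappings (new in 7.0) ties an in-guest file system to the backing virtual disk key(s)
    $keys  = $guestDisk.Mappings.Key
    $vmdks = $vm.ExtensionData.Config.Hardware.Device |
        Where-Object { $_.Key -in $keys } |
        ForEach-Object { $_.Backing.FileName }
    "{0} -> {1}" -f $guestDisk.DiskPath, ($vmdks -join ", ")
}
```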

Continue reading “What’s New in vSphere 7.0 Storage Part III: GuestInfo VirtualDiskMapping Linux, PowerCLI support”

The Case for vVols and Ransomware

I was listening to an episode of the Pure Report recently in which Rob Ludeman interviewed Andrew Miller.

There is also a post on ransomware and FlashBlade:

https://blog.purestorage.com/ransomware-mitigation-solution

It’s a good listen–and it did get me thinking about vVols (like most things do these days). Before I get into that though… We (Pure) are doing a fair amount around helping customers protect against, or at least easily recover from, ransomware attacks. My personal thinking around this is certainly still evolving, and I have a fair amount to learn, but here are a few points I think are important.

  • Ransomware attacks do not begin and end with encryption of your data. Generally, once an attacker gets in, they find out what they can do. What can they access? What can they disable? Can they disable your protection? It is worth their time to figure out the answers to these questions. The more damage they do to your protection, the more likely they are to get paid.
  • You need to ASSUME that the attacker has gained administrative credentials. In building your protection, good RBAC is a part of it, but not the be-all, end-all. The attacker could even be a disgruntled sysadmin–it doesn’t have to be a shadowy figure in a cave.
  • Look at the forest and the trees. Protection requires consideration of each component (as an admin of this piece of the infrastructure how can I protect what I am in charge of?) and consideration of the entire infrastructure (how do I protect my business if an entire part of my stack gets compromised?).
  • Prevention, insulation, detection, mitigation, and restore. My five phases of ransomware.
    • How can I prevent it?
    • How can I reduce the blast radius if one part or many get successfully attacked?
    • Can I detect it?
    • How can I stop it?
    • How would I restore and how quickly?
  • When did the attack actually start? Restoring to a non-encrypted version doesn’t mean it isn’t infected. Having access to longer-term point-in-time copies, while still having fast restores, is important (see the sketch below).
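To make that restore point concrete, here is a minimal sketch with the PureStoragePowerShellSDK–$fa is an existing array connection, and the volume and snapshot names are hypothetical:

```powershell
# List the point-in-time snapshots available for a volume
Get-PfaVolumeSnapshots -Array $fa -VolumeName "sqlserver-data"

# Recover by overwriting the volume from a chosen snapshot--near-instant on the FlashArray
New-PfaVolume -Array $fa -VolumeName "sqlserver-data" -Source "sqlserver-data.daily-42" -Overwrite
```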
Continue reading “The Case for vVols and Ransomware”

What’s New in vSphere 7.0 Storage Part II: GuestInfo VirtualDiskMapping

This is a different kind of “what’s new” from what I usually talk about–it is not really a “storage” feature in the strict sense. But it is a really useful one that I intend to use a lot.

A common traditional problem was knowing what was going on in the guest from a storage perspective. Say you want to script something in the guest (unmount this file system), then do something with the virtual disk via the vSphere API, then do something on the storage. It was possible to use the in-guest API, but because it required additional credentials to get into the VM and was a multi-step operation, it didn’t scale well if you needed to query information from a bunch of VMs.

The ideal scenario would be for VMware Tools to report this to vCenter so it can easily be pulled from the API, right?
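And that is the payoff: once VMware Tools reports it, one API query covers every VM, with no guest credentials. A minimal PowerCLI sketch of the retrieval side:

```powershell
# One vCenter API call retrieves guest disk info for all VMs--no per-VM guest logins
$views = Get-View -ViewType VirtualMachine -Property "Name","Guest.Disk"
foreach ($view in $views) {
    foreach ($disk in $view.Guest.Disk) {
        # In 7.0 each entry can also carry the new virtual disk mapping (more in Part III)
        "{0}: {1} ({2:N0} GB free of {3:N0})" -f $view.Name, $disk.DiskPath,
            ($disk.FreeSpace/1GB), ($disk.Capacity/1GB)
    }
}
```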

Continue reading “What’s New in vSphere 7.0 Storage Part II: GuestInfo VirtualDiskMapping”

Extending vVols to VMware Cloud Foundation

Note: This is a guest blog by Kyle Grossmiller. Kyle is a Sr. Solutions Architect at Pure and works with Cody on all things VMware.

As we’ve covered in past posts, VMware Cloud Foundation (VCF) offers immense advantages to VMware users in terms of simplifying day 0 and day 1 activities and streamlining management operations within the vSphere ecosystem. Today, we dive into how to use Pure Storage’s leading vVols implementation as Supplemental Storage with your Management and Workload Domains.

First though, a brief description of the differences between Principal Storage and Supplemental Storage, and how they relate to VCF, is in order to set the table. Fortunately, it is very easy to distinguish between the two storage types:

Principal Storage is any storage type that you can connect to your Workload Domain as part of the setup process within SDDC Manager. Today, that comprises vSAN, NFS, and VMFS on Fibre Channel. We’ve shown how to use VMFS on FC previously.

Supplemental Storage simply means that you connect your storage system to a Workload Domain after it has been deployed.  Examples of this storage type today include iSCSI and the focus of this blog:  vVols.

Continue reading “Extending vVols to VMware Cloud Foundation”

What’s New in vSphere 7.0 Storage Part I: vVols are all over the place!

Ah it’s time for another round of “what’s new” with vSphere external storage. Before I get into the more traditional feature version of this series, I wanted to first note some important announcements around vVols.

So the first thing that’s “new” in storage with vSphere 7.0 is that VMware is taking vVols extremely seriously now. For vVols, 2018 was about spreading the word of their value, 2019 was about getting vendors to dig in, and 2020 is about VMware and storage partners delivering on it. This is just the start.

Site Recovery Manager

This is, of course, the big one. You can check out the announcement here:

https://blogs.vmware.com/virtualblocks/2020/03/10/whats-new-srm-vr-83/

Since day 1 of SRM, array-based replication was of primary importance. SRM was essentially built to provide a common orchestration tool for disaster recovery. It automated the VMware steps of recovering virtual machines while coordinating with the underlying replication on the array to make sure the data was on site B and was ready to be used when needed. This coordination was through something called a Storage Replication Adapter (an SRA).

The fundamental problem with SRAs was that they were entirely an SRM “thing”. Replication configuration and management had to be done elsewhere. It couldn’t be done natively in vSphere–at best there was a vSphere plugin that could help, but even then that only integrated the configuration of replication into the UI, not into vSphere itself, so managing changes wasn’t scalable. Furthermore, every vendor did it differently (if they even had a plugin that could do it).

There was ZERO consistency beyond how SRM ran recovery plans. This is what vVol replication integration was designed to fix.

First off, it integrates directly with VM provisioning and policy-based management. So there is no need to install or use a plugin to manage replication protection for VMs. It is also built into vSphere itself, not just the UI. This allows it to be managed and configured however you manage vSphere (PowerCLI, vRO, vRA, Python, etc) without additional plugins.

As vVols have REALLY picked up steam in the last year, VMware has re-focused its efforts on making sure the lingering issues/gaps that were preventing further vVol adoption get fixed–this is/was a common ask from customers.

Let’s be clear here: the stated path for VMware storage of the future is vVols and vSAN. VMware is obviously finally committing to this ideal.

So now in SRM, you can create a protection group that discovers replicated VMs not via the SRA, but by querying the vSphere API directly for vVol replication groups.

So you add vVol replication groups directly to an SRM protection group–very similar in concept to datastore groups with SRA-based protection.

When you choose an SPBM policy for a given VM, you then choose a replication group (if it is a replication-type policy). As you add (or remove) VMs to the replication group, they will be automatically protected (or unprotected) by SRM, further integrating the process into SPBM.
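As a rough sketch of what that looks like with PowerCLI’s SPBM cmdlets (VMware.VimAutomation.Storage)–the policy name, group choice, and VM name here are hypothetical:

```powershell
$policy  = Get-SpbmStoragePolicy -Name "FlashArray-Replicated"
$rg      = Get-SpbmReplicationGroup -StoragePolicy $policy | Select-Object -First 1
$vm      = Get-VM -Name "ImportantVM"

# Apply the replication policy and group to the VM home object and all of its disks;
# SRM protection then follows replication-group membership automatically
$configs = @(Get-SpbmEntityConfiguration -VM $vm)
$configs += Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm)
Set-SpbmEntityConfiguration -Configuration $configs -StoragePolicy $policy -ReplicationGroup $rg
```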

Stay tuned for a lot more on this!

vRealize Operations Manager

vRealize Operations Manager (vROps) is a fantastic tool for datacenter trending, analysis, balancing, monitoring, etc. Many vendors have what is called a management pack, which integrates their specific objects, metrics, and alerts into vROps so they can be associated with the various related VMware objects (and their metrics, alerts, and related objects).

When it came to vVols, there was a gap–vROps didn’t quite know how to understand a vVol datastore. Therefore it didn’t know how to relate VMs and their disks to it, and the vendor couldn’t really relate them to their storage objects. So any vVol integration by vendors was at best half done.

So in vROps 8.1 the vVol datastore exists:

[Image: the vVol datastore object in vROps 8.1]

This opens up a whole new world of storage management packs! I’m very excited to build more onto our management pack to take advantage of this final connector we needed!

vSphere with Kubernetes

Project Pacific no more! There are a lot of places to get more information on this, though a great place to start is here:

In short: K8s tightly integrated into vSphere. Manage and control your containers/K8s pods as 1st class citizens, just like your VMs of yore.

Persistent storage is presented through the VMware CSI driver, called CNS (Cloud Native Storage). CNS uses existing storage options for storage provisioning, but in a new way. First, it is based off of Storage Policy Based Management (your storage classes for CSI provisioning are based on policies). Furthermore, it uses First Class Disks (FCDs) instead of standard disks, which I talk about here:

They are just virtual disks, but in the API they are 1st class objects–they can be created and exist independently of a VM. Which makes sense for something that is not a VM (or, more to the point, something that might not be as persistent as a VM), like a container.

FCDs can be created, snapshotted, resized, etc., just like a virtual disk, but without a VM to own them. Sounds a lot like a persistent volume claim!
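A quick sketch of that independence with PowerCLI’s VDisk cmdlets (datastore and disk names are hypothetical):

```powershell
$ds  = Get-Datastore -Name "vVol-Datastore"

# Create a First Class Disk that exists on its own--no owning VM required
$fcd = New-VDisk -Name "pvc-demo" -CapacityGB 20 -Datastore $ds

# It can be listed, resized, attached, or removed independently later
Get-VDisk -Datastore $ds | Select-Object Name, CapacityGB
```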

vVols + FCDs make this story even better, because configuration is controlled through policies (get, set, check) and the volume is a 1st class object on the array too. On the FlashArray, since vVols are just volumes, if that persistent volume claim (that volume) is in use in a non-VMware K8s environment, it should be easily imported into vSphere with Kubernetes through a vVol FCD. Look for more information as we build out documentation and tools around this.

Very excited about the future of this!

VMware Cloud Foundation

The mother of all VMware automation. I blogged about it a while ago here:

This is becoming more and more important, and VMware is improving it to have better storage integration into SDDC Manager, as shown above. VMware has announced partner support for vVols as supplemental storage (we will have documentation on that very soon), which is just the start.

This is just the start to vVols in 2020! Stay tuned!

Default FlashArray Connection With PowerShell

In the VMware/Pure PowerShell module (PureStorage.FlashArray.VMware), there is a default array connection stored in a global variable called $Global:DefaultFlashArray, and all connected FlashArrays are stored in $Global:AllFlashArrays. The module automatically uses whatever is in the “default” variable.
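For example–a sketch based on the module’s connection cmdlet as I recall it (the array address is hypothetical):

```powershell
# Connect and mark this FlashArray as the default; the module stores the connection in
# $Global:DefaultFlashArray and adds it to $Global:AllFlashArrays
New-PfaConnection -Endpoint "flasharray-x50-1" -Credentials (Get-Credential) -DefaultArray -IgnoreCertificateError

# Module cmdlets now fall back to this connection when no array is specified
$Global:DefaultFlashArray
```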

The underlying “core” Pure Storage PowerShell module (PureStoragePowerShellSDK) does not yet take advantage of global connections, so for each cmdlet you run you must pass in the “array” parameter. For example, to get all of the volumes from an array:
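Something like this (a sketch–the endpoint and credentials are placeholders):

```powershell
# The SDK needs the connection object passed to every cmdlet via -Array
$fa = New-PfaArray -EndPoint "flasharray.example.com" -Credentials (Get-Credential) -IgnoreCertificateError
Get-PfaVolumes -Array $fa
```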

Kind of annoying if you are interactively running commands and only have one array connection you care about (or one that you primarily care about).

Continue reading “Default FlashArray Connection With PowerShell”

Testing New SRA Release with a 2nd SRM Pair

At the time of writing this post, we are at work on the next release of our Storage Replication Adapter for the FlashArray. In a discussion with a customer who needs the feature we are adding (what a nice coincidence!), the question came up: “what is the best way to test?” They want to test the SRA without fouling up their production SRM environment.

So a simple answer is: well, deploy two new vCenters and an SRM pair. But that requires hosts, plus similar network configuration, authentication, etc. So they wanted to use their existing vCenters but NOT their existing SRM servers.

SRM used to be a fairly rigid tool (for good reason–let’s not break your DR). But in the past few years VMware has really opened it up: a loosened vCenter-to-SRM version coupling, shared recovery sites, and multiple SRM pairs per vCenter pair. This is where we come in.

Continue reading “Testing New SRA Release with a 2nd SRM Pair”