Site Recovery Manager and ActiveCluster Part II: Configuring SRM

In my last post, I walked through configuring ActiveCluster and your VMware environment to prepare for use in Site Recovery Manager.

Site Recovery Manager and ActiveCluster Part I: Pre-SRM Configuration

In this post, I will walk through configuring Site Recovery Manager itself. There are a few prerequisites at this point:

  • Everything that was done in Part I.
  • Site Recovery Manager installed and paired (a quick PowerCLI check is sketched after this list).
  • Inventory mappings in SRM completed (networks, folders, clusters, resource pools, etc.).
  • The FlashArray SRA 3.x or later downloaded and installed on both SRM servers.
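If you want to confirm the pairing before moving on, the SRM public API is reachable from PowerCLI. This is a minimal sketch, assuming the VMware.VimAutomation.Srm module is available and using placeholder server names and credentials:

```powershell
# Minimal sketch: confirm SRM pairing and list existing recovery plans.
# Placeholder vCenter name and credentials; adjust for your environment.
Connect-VIServer -Server "vcenter-site-a.example.com" -Credential (Get-Credential)
$srm = Connect-SrmServer -Credential (Get-Credential)

# The SRM public API hangs off ExtensionData. GetPairedSite() should return
# the remote SRM site if pairing completed successfully.
$srmApi = $srm.ExtensionData
$srmApi.GetPairedSite()

# List any recovery plans that already exist on this site.
$srmApi.Recovery.ListPlans() | ForEach-Object { $_.GetInfo().Name }
```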

Continue reading “Site Recovery Manager and ActiveCluster Part II: Configuring SRM”

Site Recovery Manager and ActiveCluster Part I: Pre-SRM Configuration

About four years ago, we (Pure Storage) released support for Site Recovery Manager with our asynchronous replication by releasing our first storage replication adapter. In late 2017, we released ActiveCluster, our active-active synchronous replication feature.

Until SRM 6.1, SRM only supported active-passive replication, so a test failover or a failover would take a copy of the source VMFS (or RDM) on the target array and present it, rescan the ESXi environment, resignature the datastore(s), then register and power on the VMs in accordance with the SRM recovery plan.
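For context, the resignature step SRM drives can also be run by hand against an unresolved VMFS copy. This is a rough PowerCLI sketch of that manual equivalent (not what SRM runs internally), using placeholder host and datastore names; verify the argument key with CreateArgs() on your build:

```powershell
# Rough manual equivalent of the resignature step (placeholder names).
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# List unresolved VMFS copies (snapshots/replicas presented to this host).
$esxcli.storage.vmfs.snapshot.list.Invoke()

# Resignature one of them by its original volume label so it can be mounted.
# "volumelabel" is the assumed V2 key for --volume-label; confirm with CreateArgs().
$esxcli.storage.vmfs.snapshot.resignature.Invoke(@{volumelabel = "ProdDatastore01"})
```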

The downside to this, of course, is that the failover is disruptive, even if there was not actually a disaster driving the failover. But this is the nature of active-passive replication.

In SRM 6.1, SRM introduced support for active-active replication. And because this type of replication is fundamentally different, SRM also changed how it behaves to take advantage of what active-active replication offers. Continue reading “Site Recovery Manager and ActiveCluster Part I: Pre-SRM Configuration”

What’s New in Core Storage in vSphere 6.7 Part VI: Flat LUN ID Addressing Support

This post is part of my vSphere 6.7 core storage “what’s new” series.

A while back I wrote a blog post about LUN ID addressing and ESXi, which you can find here:

ESXi and the Missing LUNs: 256 or Higher

In short, VMware only supported one mechanism of LUN ID addressing, called “peripheral”. A different mechanism, called “flat”, is generally encouraged by the SAM (SCSI Architecture Model), especially for larger LUN IDs (256 and above). If a storage array used flat addressing, ESXi would not see LUNs from that target. This is often why ESXi could not see LUN IDs greater than 255, as arrays would typically use flat addressing for LUN IDs at that number or higher.
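If you want a quick look at which LUN IDs a host actually sees, the runtime name of each device ends with the LUN ID. A minimal PowerCLI sketch, with a placeholder host name:

```powershell
# List devices and their LUN IDs as ESXi sees them (placeholder host name).
# RuntimeName is of the form vmhbaX:C0:T0:L<LUN ID>; prior to 6.7, devices
# presented with flat addressing at ID 256+ simply would not appear here.
Get-ScsiLun -VmHost (Get-VMHost "esxi01.example.com") -LunType disk |
    Select-Object CanonicalName, RuntimeName |
    Sort-Object RuntimeName
```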

ESXi 6.7 adds support for flat addressing.  Continue reading “What’s New in Core Storage in vSphere 6.7 Part VI: Flat LUN ID Addressing Support”

Troubleshooting Virtual Volume Setup

VVols have been gaining quite a bit of traction of late, which has been great to see. I truly believe they solve a lot of problems that were traditionally faced in VMware environments and infrastructures in general. That being said, as VVols get adopted at scale, a few people inevitably run into problems setting them up.

The main issues have revolved around the fact that VVols are presented and configured in a different way than VMFS, so when someone runs into an issue, they often do not know exactly where to start.

The issues usually come down to one of the following places:

  • Initial Configuration
  • Registering VASA (a PowerCLI registration sketch follows this list)
  • Mounting a VVol datastore
  • Creating a VM on the VVol datastore
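Of these, the VASA registration step is the easiest to sanity-check from PowerCLI. This is a minimal sketch, assuming placeholder provider names, a placeholder URL (the exact VASA URL format is array-specific), and array credentials with VASA privileges:

```powershell
# Minimal sketch: register and verify a VASA provider (placeholder values).
$cred = Get-Credential   # array-side account with VASA privileges

New-VasaProvider -Name "FlashArray-VASA" `
                 -Url  "https://vasa-provider.example.com:8084" `
                 -Credential $cred

# Confirm the provider registered and check its reported status.
Get-VasaProvider
```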
Continue reading “Troubleshooting Virtual Volume Setup”

PowerCLI and vVols Part II: Finding vVol UUIDs

One of the great benefits of vVols is the fact that virtual disks are just volumes on your array. This means that if you want to do some data management with your virtual disks, you just need to work directly on the volumes that correspond to them.

The question is: which virtual disk corresponds to which volume on which array?

Well, some of that is very array-dependent (are you using Pure Storage or something else?), but the first steps are always the same. Let’s start there for the good of the order.
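As a preview of that first step, the vVol UUID for a given virtual disk can typically be pulled from the disk’s backing object in PowerCLI. A minimal sketch with a placeholder VM name; the BackingObjectId behavior described in the comments is my understanding for vVol-backed disks:

```powershell
# Minimal sketch: list each virtual disk of a VM with its vVol UUID (placeholder VM name).
# For vVol-backed disks, the backing carries a BackingObjectId that is the vVol UUID;
# for VMFS-backed disks this field is typically empty.
Get-VM "MyVVolVM" | Get-HardDisk | ForEach-Object {
    [PSCustomObject]@{
        Disk     = $_.Name
        FileName = $_.Filename
        VvolUuid = $_.ExtensionData.Backing.BackingObjectId
    }
}
```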

Continue reading “PowerCLI and vVols Part II: Finding vVol UUIDs”

Data Mobility Demo Journey Part I: Virtual Volumes

At the Pure//Accelerate conference this year, my colleague Barkz and I gave a session on data mobility–how the FlashArray enables you to put your data where you want it. The session video can be found here:

https://watch.purestorage.com/ondemand/detail/videos/enterprise-applications/video/5778647922001/moving-data-between-cloud-and-on-premises-virtualized-environments?autoStart=true 

In short, the session was a collection of demos of moving data between virtual environments (Hyper-V and ESXi), between FlashArrays, and between on-premises and the public cloud using FlashArray features.

Continue reading “Data Mobility Demo Journey Part I: Virtual Volumes”

What’s New in Purity 5.1: WRITE SAME Handling Improvement

Purity 5.1 introduced a variety of new features on the FlashArray, like CloudSnap to NFS and volume throughput limits, but it also included a variety of internal enhancements. I’d like to start this series with one of them.

VAAI (the vSphere Storage APIs for Array Integration) includes a variety of offloads that allow the underlying array to do certain storage-related tasks better (faster, more efficiently, etc.) than ESXi can do them. One of these offloads is called Block Zero, which leverages the SCSI command WRITE SAME. WRITE SAME is basically a SCSI operation that tells the storage to write a certain pattern, in this case zeros. So instead of ESXi issuing possibly terabytes of zeros, it just issues a few hundred or a few thousand small WRITE SAME I/Os and the array takes care of the zeroing. This greatly speeds up the process and also significantly reduces the impact on the SAN.
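If you want to confirm the Block Zero offload is enabled on a host, it is controlled by a host advanced setting. A quick PowerCLI sketch with a placeholder host name:

```powershell
# Check (and optionally enable) the Block Zero / WRITE SAME offload on a host.
# DataMover.HardwareAcceleratedInit = 1 means ESXi will use WRITE SAME for zeroing.
$vmhost = Get-VMHost "esxi01.example.com"

Get-AdvancedSetting -Entity $vmhost -Name "DataMover.HardwareAcceleratedInit"

# Uncomment to enable it if it has been turned off:
# Get-AdvancedSetting -Entity $vmhost -Name "DataMover.HardwareAcceleratedInit" |
#     Set-AdvancedSetting -Value 1 -Confirm:$false
```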

WRITE SAME is used in quite a few places, but there are a handful of commonly encountered scenarios.

What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP

This post is part of my vSphere 6.7 core storage “what’s new” series.

VMware has continued to improve and refine automatic UNMAP in vSphere 6.7. In vSphere 6.5, VMFS-6 introduced automatic space reclamation, so that you no longer had to run UNMAP manually to reclaim space after virtual disks or VMs had been deleted.
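To poke at the automatic UNMAP settings on a VMFS-6 datastore, the relevant esxcli namespace can be reached from PowerCLI. A minimal sketch with placeholder host and datastore names; since the argument keys for the set operation vary by build, the sketch discovers them with CreateArgs() rather than assuming them:

```powershell
# Inspect automatic UNMAP (space reclamation) settings for a VMFS-6 datastore.
# Placeholder host and datastore names.
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.example.com") -V2

# Current reclaim configuration (priority, and in 6.7, the rate settings).
$esxcli.storage.vmfs.reclaim.config.get.Invoke(@{volumelabel = "VMFS-DS01"})

# Discover the exact argument names supported by this build before changing anything.
$esxcli.storage.vmfs.reclaim.config.set.CreateArgs()
```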

Continue reading “What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP”

ESXi iSCSI, Multiple Subnets, and Port Binding

With the introduction of our active-active synchronous replication feature (called ActiveCluster), I have been getting more and more questions about multiple-subnet iSCSI access. Some customers have their two arrays in different datacenters and in different subnets (no stretched layer 2).

With ActiveCluster, a volume exists on both arrays, so the iSCSI targets on the second array essentially just look like additional paths to that volume. As far as the host knows, there are not two arrays; it just has more paths.

Consequently, this discussion applies equally whether a single array happens to use more than one subnet for its iSCSI targets or you are using active-active replication across two arrays.

There are some different considerations, though, which I will talk about later.

First off, should you use more than one subnet? Well, keeping things simple is good, and for a single FlashArray I would probably stick with one subnet. Chris Wahl wrote a great post on this a while back that explains the ins and outs well:

http://wahlnetwork.com/2015/03/09/when-to-use-multiple-subnet-iscsi-network-design/ 
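Whichever design you land on, it helps to know exactly what is bound to the software iSCSI adapter before changing anything. A minimal PowerCLI sketch, assuming a placeholder host name and the common software iSCSI adapter name vmhba64 (yours may differ):

```powershell
# Inspect the software iSCSI adapter and its bound VMkernel ports (placeholder names).
$vmhost = Get-VMHost "esxi01.example.com"
$esxcli = Get-EsxCli -VMHost $vmhost -V2

# Software iSCSI adapter(s) as PowerCLI sees them.
Get-VMHostHba -VMHost $vmhost -Type IScsi | Select-Object Device, Model, Status

# VMkernel ports currently bound to the adapter for port binding.
$esxcli.iscsi.networkportal.list.Invoke(@{adapter = "vmhba64"})
```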

Continue reading “ESXi iSCSI, Multiple Subnets, and Port Binding”

FlashArray vSphere Web Client now supports vSphere 6.7

Quick post: if you are looking at using vSphere 6.7, please note that the only version of our plugin that works with 6.7 is version 3.1.x or later. There were some API changes that prevent earlier versions from properly loading in the 6.7 interface.

Reach out to support if you would like the latest version! This is still only for the Flash-based vSphere Web Client; we are working on building an HTML5-supported one. Stay tuned on that.

Release notes are as follows:

What’s New

vSphere 6.7 Support
This release of the plugin includes support for vSphere 6.7. Users requiring support for vSphere 6.7 must upgrade to this version of the vSphere client plugin.

Continue reading “FlashArray vSphere Web Client now supports vSphere 6.7”