NMP Multipathing rules for the FlashArray are now default

As you might have noticed, vSphere 6.5 Update 1 just came out (7/27/2017), and there are quite a few enhancements and fixes. I will be blogging about these in subsequent posts, but there is one that I want to call out specifically and immediately.

Round Robin with an IO Operations Limit of 1 is now the default in ESXi for the Pure Storage FlashArray! This means that you no longer need to create a custom SATP rule when provisioning a new host or adding your first FlashArray to an existing environment. Continue reading “NMP Multipathing rules for the FlashArray are now default”
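For hosts still running earlier ESXi releases, you still need to add the custom SATP rule yourself. A minimal sketch of that rule via esxcli follows (run on each host before FlashArray devices are presented; the description string is just an example):

```
# Claim Pure FlashArray devices with Round Robin and an I/O Operations Limit of 1
# (only needed on ESXi releases prior to 6.5 Update 1)
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" \
  -P "VMW_PSP_RR" -O "iops=1" -e "Pure FlashArray Round Robin with IOPS=1"

# Verify the rule is in place
esxcli storage nmp satp rule list | grep -i pure
```

Note that a claim rule only affects devices claimed after it is added; devices that are already presented keep their existing path selection policy until they are reclaimed or the host is rebooted.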

Host Connectivity Reporting Changes and IO Balance: Part 2

In Part 1 of this two-parter, I spoke about our new CLI-based I/O balance tool, which customers can use to verify that the I/O coming from their host is balanced across the configured paths.

We have also made some enhancements to host connectivity reporting in the GUI. For a while now, there has been a screen inside the System tab of the FlashArray GUI that reports on the redundancy of host connections to the FlashArray:

Continue reading “Host Connectivity Reporting Changes and IO Balance: Part 2”

Host Connectivity Reporting Changes and IO Balance: Part 1

In the latest GA release of Purity, version 4.1.5, there have been some nice improvements in how we handle host connectivity and balance reporting: a new CLI command to monitor the balance of I/O from the host standpoint, and changes to how we report and display host connectivity in the FlashArray web GUI. Let’s take a look at these enhancements. In Part 1, I will cover the CLI enhancement.
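As a quick preview, the command in question is purehost monitor with the new --balance flag. A minimal sketch follows; the host name is a hypothetical host object on the array, so check purehost monitor --help on your array for the exact options in your Purity release:

```
# Show I/O balance across paths from the host standpoint (Purity 4.1.5+)
# "esxi-01" is a hypothetical host object name
purehost monitor --balance esxi-01

# Or monitor balance for every host connected to the array
purehost monitor --balance
```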

Continue reading “Host Connectivity Reporting Changes and IO Balance: Part 1”

Another look at ESXi iSCSI Multipathing (or a Lack Thereof)

I jumped on a call the other day to talk about iSCSI setup for a new FlashArray, and the main reason for the discussion was co-existence with a pre-existing array from another vendor. They had been following my blog post on iSCSI setup, and things didn’t quite match up.

The recommended way to set up multipathing for Software iSCSI is to configure more than one VMkernel port, each with exactly one active host adapter (physical NIC). You then bind those VMkernel ports to the iSCSI software adapter, which will use those specific NICs for I/O transmission and load-balance across them.
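As a rough sketch, the port binding looks like this from esxcli (the port group, vmnic, vmk, and vmhba names here are hypothetical placeholders for your environment, and vmhba33 stands in for your software iSCSI adapter):

```
# Override each iSCSI port group so exactly one uplink is active
# (move any other uplinks to unused in the port group teaming settings)
esxcli network vswitch standard portgroup policy failover set -p "iSCSI-A" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "iSCSI-B" -a vmnic3

# Bind each VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Confirm both VMkernel ports are bound
esxcli iscsi networkportal list -A vmhba33
```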

Continue reading “Another look at ESXi iSCSI Multipathing (or a Lack Thereof)”