Howdy doody folks. Lots of releases are coming down the pipe in short order, and the latest is, well, the latest release of the Pure Storage Plugin for the vSphere Client. This may be our last release in this architecture (though we may have one or so more, depending on things) in favor of the new preferred client-side architecture that VMware released in 6.7. Details on that here if you are curious.
Improved protection group import wizard. This feature pulls in FlashArray protection groups and converts them into vVol storage policies. Previously this was rudimentary at best; it is now a full-blown, much more flexible wizard.
Native performance charts. Previously, the performance charts for datastores (where we showed FlashArray performance stats in the vSphere Client) were actually an iframe we pulled from our GUI. This was a poor decision. We have redone this entirely from the ground up: we now pull the stats from the REST API and draw them natively using the Clarity UI. Furthermore, there are now many more stats shown.
Datastore connectivity management. A few releases ago we added a feature to add an existing datastore to new compute, but it wasn't particularly flexible: it wasn't helpful if there were connectivity issues, and it didn't provide good insight into what was already connected. We now have an entirely new page that focuses on this.
Host management. This has been entirely revamped. Initially, host management was laser focused on one use case: connecting a cluster to a new FlashArray. There was no ability to add or remove a host or make adjustments and, like above, no good insight into the current configuration. The host and cluster objects now have their own page with extensive controls.
vVol Datastore Summary. This shows some basic information around the vVol datastore object.
First off, how do you install? The easiest method is PowerShell. See details (and other options) here:
Also in that release are a few more cmdlets concerning storage policy creation, editing, and assignment. They were built to make the process easier. The original cmdlets are certainly still an option, and for very specific tasks they might be necessary, but the vast majority of common operations can be achieved more easily with these.
As always, to install run:
Install-Module PureStorage.FlashArray.VMware
Or to upgrade:
Update-Module PureStorage.FlashArray.VMware
These modules are open source, so if you just want to use my code, or to open an RFE or issue, go here:
Happy New Year everyone! Let’s work to make 2021 a better year.
In furtherance of that goal, I have put out a few new vVol-related PowerShell cmdlets! Baby steps, I guess.
The following are the new cmdlets:
Basics:
Get-PfaVvolStorageArray
Replication:
Get-PfaVvolReplicationGroup
Get-PfaVvolReplicationGroupPartner
Get-PfaVvolFaultDomain
Storage Policy Management:
Build-PfaVvolStoragePolicyConfig
Edit-PfaVvolStoragePolicy
Get-PfaVvolStoragePolicy
New-PfaVvolStoragePolicy
Set-PfaVvolVmStoragePolicy
Now let's walk through how to use them. This post will cover the basics and the replication cmdlets; the next post will cover the storage policy cmdlets.
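As a quick preview of the basics cmdlet, here is a sketch. It assumes an existing vCenter connection via Connect-VIServer, and the -Vm parameter name is my assumption, so check Get-Help Get-PfaVvolStorageArray for the exact syntax:
# A sketch: return the FlashArray that hosts a given vVol-based VM.
# Assumes Connect-VIServer has already been run; the VM name is a placeholder.
$vm = Get-VM -Name "MyVvolVM"
Get-PfaVvolStorageArray -Vm $vm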
From my armchair over the past few weeks, I have been watching the myriad announcements from AWS at re:Invent, and a few things caught my eye (well, a lot of things did, but a few in particular related to storage).
The first thing to note was the change in the consistency model of S3. Until now, consistency was "eventual" within S3 for certain operations, like overwrites of an existing object, and there are a ton of posts that do a great job of explaining this. One is below:
Google “S3 eventual consistency” and you will find tons of examples.
At a high level, when you modify or delete an object, the change may not be immediately reflected: an immediate subsequent read may not return what you wrote. For busy environments with high change/modification rates, this could lead to stale reads and, in turn, corruption downstream. So you needed to understand the behavior and build to (or around) it.
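To make the old behavior concrete, here is a sketch of the read-after-overwrite race using the AWS Tools for PowerShell (the bucket and key names are made up):
# Read-after-overwrite sketch; bucket and key are placeholders.
Import-Module AWS.Tools.S3
Write-S3Object -BucketName my-bucket -Key data.txt -Content "version 1"
Write-S3Object -BucketName my-bucket -Key data.txt -Content "version 2"
# Under the old eventual-consistency model this read could still return
# "version 1"; with strong consistency it always returns "version 2".
Read-S3Object -BucketName my-bucket -Key data.txt -File ./data.txt
Get-Content ./data.txt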
At re:Invent, AWS announced that strong consistency is now supported for S3 operations:
When deployed on Windows, the Pure1 PowerShell module takes advantage of Windows-based certificates in the user (or a specified) certificate store. On Linux or macOS, it uses RSA key pairs instead.
To relocate authentication from one non-Windows machine to another, you just copy the private key from wherever it is to the target. For Windows, though, you need to export the certificate (which includes the private key) from the certificate store; then you can copy the file wherever it needs to go.
In the latest release of the Pure1 PowerShell module (1.4.3.1) there is a new feature to do that for you, or at least to simplify the process of exporting the cert with the right settings.
Let’s walk through exporting and then importing the cert. In a future post I will go into some of the other enhancements in this release in more detail.
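Under the covers, this is standard Windows certificate export/import. If you want to see roughly what the module is handling for you, here is a sketch using the built-in PKI cmdlets (the thumbprint, password, and file paths are placeholders):
# Manual equivalent sketch; thumbprint, password, and paths are placeholders.
$pfxPwd = Read-Host -AsSecureString -Prompt "PFX password"
$thumb  = "<your-cert-thumbprint>"
Export-PfxCertificate -Cert "Cert:\CurrentUser\My\$thumb" -FilePath .\pure1cert.pfx -Password $pfxPwd
# Then, on the target Windows machine:
Import-PfxCertificate -FilePath .\pure1cert.pfx -CertStoreLocation Cert:\CurrentUser\My -Password $pfxPwd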
As always, the repo (and release notes) is here, and the module is best installed/updated via the PowerShell Gallery:
Install-Module PureStorage.Pure1
or
Update-Module PureStorage.Pure1
Sounds like a silly thing, but we all have to start somewhere. Generally, when I dig into something new, I like to start from a place I know well. So when it comes to using a new API, I like to use a tool I know how to use. Kubernetes, and its API, are fairly new to me from a hands-on perspective. PowerShell, however, is not; I have a decent handle on that. So it seems to me a good place to start with the k8s API.
I don’t know if this is the best way, or even a good way, but it does work. And there is also this:
But I am trying to learn authentication and the finer points of the API, so I like to start with first principles.
Create a Service Account
So the first step is to create a service account: create a new file and enter the following information into it, replacing the username and/or namespace with whatever you want:
vim newuseracct.yml
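Something along these lines (a minimal example; the pscody name and the default namespace are placeholders):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pscody
  namespace: default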
Then apply it:
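kubectl apply -f newuseracct.yml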
Again, using your favorite editor, create a new file:
vim newuser.yaml
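For example, a ClusterRoleBinding like this (again, pscody and default are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pscody-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: pscody
  namespace: default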
This will apply the cluster-admin role to that account. Replace the username, the namespace, or even the role as needed, then apply it with kubectl as before.
Now you need to get the server address for the cluster where you created the user. If you don't know it, look at the current context via kubectl config get-contexts:
Then run kubectl config view and pull the server address for the corresponding cluster. For mine, the cluster is cody-dev, so the address is https://10.21.202.237:6443.
Connect with Invoke-RestMethod
Now head over to PowerShell!
First, store your token in an object; I will use $token.
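If you have not already pulled the token for the service account, one way to do it is the following (this assumes a pre-1.24 Kubernetes cluster, where the service account's token secret is auto-created; pscody matches the example manifest above):
# Pull and decode the service account token; names are placeholders.
$secretName = kubectl get sa pscody -n default -o jsonpath='{.secrets[0].name}'
$token = kubectl get secret $secretName -n default -o jsonpath='{.data.token}' |
    ForEach-Object { [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($_)) }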
Then we need to form the header as a bearer token:
$k8sheader = @{authorization="Bearer $($token)"}
This is the format needed to authenticate with that token.
Now you are ready!
To pull the storage classes for instance run:
Invoke-RestMethod -Method GET -Uri https://10.21.202.237:6443/apis/storage.k8s.io/v1/storageclasses -Headers $k8sheader -SkipCertificateCheck
You will need to skip the certificate check for now, as I haven't configured certificate validation yet.
If we store the response in an object we can more easily dig in:
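For example (reusing the header from above; the annotation checked here is the standard Kubernetes default-class marker):
$response = Invoke-RestMethod -Method GET -Uri https://10.21.202.237:6443/apis/storage.k8s.io/v1/storageclasses -Headers $k8sheader -SkipCertificateCheck
# List the class names, then find whichever one carries the default annotation:
$response.items.metadata.name
$response.items | Where-Object { $_.metadata.annotations.'storageclass.kubernetes.io/is-default-class' -eq 'true' }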
And find my default storage class.
Definitely a lot more for me to learn here, but it is a start!
JSON Web Tokens (JWTs) are part of the mechanism that we (and many modern REST implementations) use to authorize connections. I think the term authorize is key here: authenticate vs. authorize. Think of it like logging into a website. You initially log in (authenticate) with a username and password, but the next time you go to the site, or re-launch your browser, you don't have to. Why? Because you already authenticated. An authorization is stored in a cookie so you don't have to authenticate again, at least for a certain amount of time or for the length of that browser session.
This is often somewhat abstracted away, but not always. If you want to authenticate directly to Pure1, for instance, you need to create a JWT. So let's dig into that process, and then talk about troubleshooting techniques for a rejected JWT.
The Anatomy of a JWT
So what is in a JWT? Well, the data can vary, but in this case I will be talking about the data required by Pure1.
There are three parts:
Headers
Payload
Signature
The header indicates what type of signing algorithm is used for the signature.
The payload contains the information required by the authenticator. Expiration. User. Key. Whatever.
The signature is the string generated by signing the header plus the payload data with a private key. So, an example.
For Pure1, the header looks like so. Always:
{
"alg": "RS256",
"typ": "JWT"
}
Basically this says: the signature of this JWT uses RSA with SHA-256 (RS256).
The payload is always structured the same, but the data varies:
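For example (the values here are illustrative only):
{
  "iss": "pure1:apikey:dssf2331sd",
  "iat": 1609459200,
  "exp": 1612137600
}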
The iss property is the Pure1 API key (application ID) assigned to an application. The iat property is the current epoch time, and the exp property is the expiration: the JWT cannot be used after that time to authorize any more connections.
As you might notice, this information is formatted as JSON. But it is not sent that way: since the data travels in an HTTPS request, each part is encoded using what is called base64 URL encoding (base64url).
There is a third section as well: the signature, which is generated by signing the first two parts with the RSA private key.
So the header and payload are effectively sent twice: once merely base64url encoded, and once signed AND base64url encoded.
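To make that concrete, here is a minimal sketch of assembling and signing such a JWT in PowerShell (7.1 or later, for ImportFromPem; the key path and iss value are placeholders):
# Minimal JWT assembly sketch; placeholder values throughout.
function ConvertTo-Base64Url([byte[]]$Bytes) {
    [Convert]::ToBase64String($Bytes).TrimEnd('=').Replace('+','-').Replace('/','_')
}
$header  = '{"alg":"RS256","typ":"JWT"}'
$now     = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
$payload = '{"iss":"<your-app-id>","iat":' + $now + ',"exp":' + ($now + 2592000) + '}'
# The part that gets signed: base64url(header) + "." + base64url(payload)
$toSign = (ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes($header))) + '.' +
          (ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes($payload)))
# Sign with the RSA private key (RS256 = RSA + SHA-256, PKCS#1 padding)
$rsa = [System.Security.Cryptography.RSA]::Create()
$rsa.ImportFromPem((Get-Content ./pure1-private.pem -Raw))
$sig = $rsa.SignData([Text.Encoding]::UTF8.GetBytes($toSign),
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)
$jwt = $toSign + '.' + (ConvertTo-Base64Url $sig)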
A good way to decode a JWT is the website jwt.io.
This site will decode the base64url-encoded (but otherwise unencrypted) data and display it on the right-hand side.
How Pure1 checks the JWT
So how does Pure1 check the JWT? Well, a few things happen. Before a JWT can be used for authorization, you must first create an RSA private key. Let's say our key is this:
This key is used to create the signature portion of the JWT. More on that in a bit.
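If you do not have a key pair yet, OpenSSL can generate one (the file names are arbitrary; 2048 bits is a common choice):
openssl genrsa -out pure1-private.pem 2048
openssl rsa -in pure1-private.pem -pubout -out pure1-public.pem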
From the private key, a public key is derived. This is used to verify anything signed by the private key. So the public key for the above private key is the following:
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4WVi9HtenBdPUbZKvjOo
efYxUsNOt+eUTPAMWU3dNoUR/2pkoy1i2+iOBwkQgh5veyKZpQQCjEnyGKeliZEk
MPh2bJgFuKKyG4L0wgU1v7AEsuhQoNg7mdpPUlmIiIkVU91mtLxjES95AXl0A0oj
kUFe5JsE71Qt+2GgVJoB7fRh2l8eQnvje9tcfGFaAKoW63FmalNrZq8FjDbhQ4N3
AFANClfRlP7Ru3QLGbPWl3WMdGkjZU7STYON3fg3TN7K/95xRheOqpbZ/JpL9FHr
wZNiSx4hY82pjqDrdeF2mGDVILMa3FQIHNBqNyY1ORkLc8EW+wKJk+8OsNwbr4mj
WwIDAQAB
-----END PUBLIC KEY-----
So the public key is entered into Pure1.
Once uploaded, Pure1 generates an Application ID that corresponds to that public key.
Navigate to the website jwt.io and paste the first two parts of the JWT into the left panel:
This will automatically decode them into the header and payload. If the result looks different or garbled, the JWT is formatted or encoded incorrectly. If it decodes correctly, the main thing you want to check here is that the API key is correct.
Verifying the Public Key
Next, the signature itself.
Now paste the FULL JWT into the left panel:
Note that the signature is flagged as invalid. This is because there is no key to check it with; for this you need the public key.
My public key is the following:
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4WVi9HtenBdPUbZKvjOo
efYxUsNOt+eUTPAMWU3dNoUR/2pkoy1i2+iOBwkQgh5veyKZpQQCjEnyGKeliZEk
MPh2bJgFuKKyG4L0wgU1v7AEsuhQoNg7mdpPUlmIiIkVU91mtLxjES95AXl0A0oj
kUFe5JsE71Qt+2GgVJoB7fRh2l8eQnvje9tcfGFaAKoW63FmalNrZq8FjDbhQ4N3
AFANClfRlP7Ru3QLGbPWl3WMdGkjZU7STYON3fg3TN7K/95xRheOqpbZ/JpL9FHr
wZNiSx4hY82pjqDrdeF2mGDVILMa3FQIHNBqNyY1ORkLc8EW+wKJk+8OsNwbr4mj
WwIDAQAB
-----END PUBLIC KEY-----
Take that and paste it into the public key box. If this is the right public key, the status will change to "signature verified". If it does not, you are using the wrong public key (or the JWT was signed with a different private key).
jwt.io can also create the JWT for you. If that JWT is different from the one you generated elsewhere, your other JWT was incorrectly created.
You can add in your public key as well to ensure it is all good:
What to do with a bad JWT
So if you get an authorization error from Pure1, what should you do? Make sure the combination you are using is correct: the right API key, the right public key, and the right private key. Figure out which one is wrong. The simplest thing is often to start over: create a new key pair, add the public key into Pure1, and retry.
The next step here is storage: I want to be able to provision persistent storage in Tanzu Kubernetes. Storage is generally managed and configured through a specification called the Container Storage Interface (CSI). CSI was created to provide a consistent experience for storage provisioning and management in an orchestrated container environment. There are a ton of different storage types (SAN, NAS, DAS, SDS, cloud, etc.) and 100x that many vendors, and management of, and interaction with, all of them is different. Many people deploying and managing containers are not experts in any of these, and have neither the time nor the interest to learn them. And if you change storage vendors, do you want to have to change your entire k8s practice for it? Probably not.
So CSI takes some proprietary storage layer and provides an API mapping:
Vendors can take that and build a CSI driver that manages their storage but provides a consistent experience above it.
At Pure Storage, for instance, we have our own CSI driver, called Pure Service Orchestrator, which I will get to in a later series. For now, let's get into VMware's CSI driver, which is part of a whole offering called Cloud Native Storage.
This has two parts: the CSI driver, which gets installed on the k8s nodes, and the CNS control plane within vSphere itself, which does the selecting and provisioning of storage. This requires vSphere 6.7 U3 or later. A benefit of using TKG is that the various CNS components come pre-installed.
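To make this concrete, here is a sketch of a StorageClass backed by the vSphere CSI driver; the class name and the SPBM policy name passed via storagepolicyname are made up for illustration:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: tanzu-gold
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "Gold vVol Policy"
Any PersistentVolumeClaim that references this class will be provisioned by CNS against the corresponding vSphere storage policy.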