
Utilizing the Pure Storage Integrations Released in Ansible 2.4


Next week Ansible® 2.4 will be released, and here at Pure Storage® we are happy to announce that the Pure Storage FlashArray now has native storage module support included in this Ansible release.

There are five separate modules available which cover volumes, hosts, host groups, protection groups and snapshots. With these modules, you can now configure your FlashArray directly from your Ansible playbooks without the need to use an intermediary module to act as a proxy.

In a previous post, I discussed how to use the HTTP execution module to pass REST API calls to your FlashArray, but that is now a thing of the past. With this release of Ansible, you can call Pure Storage FlashArray modules directly from within your playbook.

There is documentation for these modules included in the Ansible 2.4 release, but in this post I’ll cover the available commands for your FlashArray that allow you to prepare it for your hosts and applications.


The only pre-requisite for using these new storage modules is to have the Pure Storage Python SDK installed. This is simply done by running the pip install purestorage command.

Storage Modules

These are the Pure Storage storage modules included in Ansible 2.4 to help you configure your FlashArray:


The volume module (purefa_volume) will allow you to create, extend, clone and delete volumes. There are options to force overwriting when cloning if the target volume already exists and you can force a volume eradication as well on delete. When it comes to creating a volume, you have to define an initial size using any of the standard sizing units for a FlashArray volume, such as M, T, G or P.
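As a sketch of how this looks in practice, here are some volume tasks. The volume name and sizes are placeholders, the parameter names reflect the module as documented in Ansible 2.4, and the array credentials are assumed to come from the environment variables described later:

```yaml
# Create a 10G volume, grow it, then delete and eradicate it.
- name: Create a new volume
  purefa_volume:
    name: app-vol01
    size: 10G
    state: present

- name: Extend the volume to 50G
  purefa_volume:
    name: app-vol01
    size: 50G
    state: present

- name: Delete the volume and eradicate it immediately
  purefa_volume:
    name: app-vol01
    state: absent
    eradicate: true
```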


The host module (purefa_host) allows creation and deletion of host objects. When creating a host you can optionally specify whether it is connected using the Fibre Channel or iSCSI protocol, along with the associated WWNs or IQNs. WWNs and IQNs are given in standard Ansible list format, which you will see in the examples later. If you want to add additional WWNs or IQNs to a host object you can do that as well. Lastly, with this module you can attach volumes that already exist to the host when creating it.

When you delete a host it will disconnect any attached volumes, so don’t forget to clean these up as well if you need to.
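The host tasks follow the same pattern. In this hedged sketch the host names, WWNs, IQNs and attached volume are all illustrative placeholders; the list format for the initiators is the point to note:

```yaml
# Create an iSCSI host and attach an existing volume to it.
- name: Create an iSCSI host
  purefa_host:
    host: app-host01
    protocol: iscsi
    iqns:
      - iqn.1994-05.com.example:app-host01
    volume: app-vol01
    state: present

# Create a Fibre Channel host with two WWNs.
- name: Create a Fibre Channel host
  purefa_host:
    host: app-host02
    protocol: fc
    wwns:
      - 00:00:00:00:00:00:00:01
      - 00:00:00:00:00:00:00:02
    state: present
```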


With the hostgroup module (purefa_hg) you can create an empty hostgroup, or one with hosts and volumes already connected, assuming these hosts and volumes already exist as objects on the FlashArray. Today we can’t add new hosts or volumes to an existing hostgroup. Look for that in a later release.

Be aware that when you delete a hostgroup all the connected volumes and hosts are disconnected. These objects are not deleted, so delete them separately if necessary.
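A hostgroup task might look like the following sketch, assuming the two hosts and the shared volume were created in earlier tasks (all names are placeholders):

```yaml
# Create a hostgroup containing two existing hosts and one shared volume.
- name: Create a hostgroup with hosts and a shared volume
  purefa_hg:
    hostgroup: app-cluster
    host:
      - app-host01
      - app-host02
    volume:
      - shared-vol01
    state: present
```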

Protection Groups

In this release of the protection group module (purefa_pg) we are only providing local protection groups, not replicated groups. Replication support is planned for the next release of the modules.

When creating a protection group it is, by default, enabled to snapshot its contents using the default snapshot schedule, but you can disable this if you want.

You can create protection groups that contain volumes, hosts or hostgroups by specifying which you want in the playbook and a list of those object types to be included in the protection group.

As with hostgroups, you can’t amend existing protection groups with new members. Also, as with hostgroups, when you delete a protection group it doesn’t delete the objects that were covered by it. These have to be done separately as required.
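A protection group task is a minimal sketch along these lines; the group name and member list are placeholders, and the `enabled` flag shown here is my reading of how the default snapshot schedule is switched off:

```yaml
# Create a local protection group covering a hostgroup,
# with the default snapshot schedule disabled.
- name: Create a protection group for the hostgroup
  purefa_pg:
    pgroup: app-pg
    hostgroup:
      - app-cluster
    enabled: false
    state: present
```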


Finally, we have the snapshot module (purefa_snap) that allows you to create snapshots of volumes, create a read/write volume from an existing snapshot and delete specific snapshots.

When creating a snapshot you can specify the snapshot suffix yourself or let the module generate one as a timestamp.

Controlling the snapshot suffix makes it easier to create a volume from a snapshot, because you need to specify the exact snapshot you want to use, and you do that with the snapshot suffix and the copy state.
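Putting those two points together, here is a hedged sketch: snapshot a volume with a known suffix, then use that suffix with the copy state to create a read/write volume (volume and target names are placeholders):

```yaml
# Take a snapshot with an explicit suffix so we can refer to it later.
- name: Snapshot the volume
  purefa_snap:
    name: shared-vol01
    suffix: ansible-demo
    state: present

# Create a new read/write volume from that exact snapshot.
- name: Copy the snapshot to a new volume
  purefa_snap:
    name: shared-vol01
    suffix: ansible-demo
    target: restored-vol01
    state: copy
```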


So now we have all these modules available to use, how do we direct them at a specific FlashArray, or series of FlashArrays?

Authentication uses an API token together with the management VIP (or FQDN) of the FlashArray, and these can be supplied to playbook commands in one of two ways. The API token is obtained from the FlashArray and can come from any account configured to access the array, whether the pureuser default account or an external account linked to the array using a Directory Service. If you use an externally linked account, it needs to be in the correct admin group, or some of the commands you try will fail due to lack of privilege.

Firstly, we can define two environment variables that contain the IP address and the API token. These will then be used by all playbook commands and therefore all commands will be applied to only one FlashArray.

Using this method all we need to do is issue the following commands before running the playbook, or add these into your shell profile:

  # export PUREFA_URL=<Management VIP/FQDN of FlashArray>
  # export PUREFA_API=<API token for FlashArray user>

Secondly, we can define the API token and IP address as parameters in each playbook command. This means that each command could be applied to a different FlashArray, or you can bundle commands for multiple FlashArrays.

In this second method each playbook command needs to have the following two parameters defined:

  fa_url: <Management VIP/FQDN of FlashArray>
  api_token: <API token of FlashArray user>

This gives you per-task control over which FlashArray each command targets.
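For example, a minimal task supplying its own credentials might look like this sketch, where the volume name, address and token are all placeholders:

```yaml
# Create a volume on one specific FlashArray, overriding any
# environment-variable credentials for this task only.
- name: Create a volume on a named array
  purefa_volume:
    name: app-vol01
    size: 10G
    state: present
    fa_url: 10.0.0.2                  # management VIP/FQDN (placeholder)
    api_token: "<your-api-token>"     # API token for a FlashArray user
```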

To run an Ansible playbook, which should be in the appropriate YAML format, you only have to execute the following command:

  # ansible-playbook <playbook name>.yaml

There is no requirement to run this under a specific remote user, using the -u flag or the remote_user parameter in the playbook, as your API token will do all the authentication on the FlashArray.

You can execute these FlashArray modules on any host that has network connectivity to the FlashArray management IP address, but for simplicity I usually use localhost.

Playbook Example

Now that we’ve covered the modules, let’s create a playbook to build an environment on our FlashArray.

In this example, I’m going to create three volumes and two hosts. One host will use Fibre Channel connectivity, the other will use iSCSI, and each host will have one volume attached to it.

The third volume will be attached to a hostgroup that contains both of the above hosts. This means each host will share the third volume.

To show the different types of protection group, I’ll create three: one containing a volume, one containing selected hosts, and one containing the hostgroup.

After completing the volume and host related work I’m going to take a clone of the shared volume and also a snapshot of it. Finally, I’ll create a read/write volume from the snapshot of the shared volume.

The playbook will pause at this point to allow you to see the completed environment, then it will tear down the created environment. As part of the tear down the playbook will eradicate the protection groups and the volumes as they are deleted, making the array nice and clean.

This example playbook doesn’t gather facts, as these are unnecessary for the FlashArray modules.
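The build steps described above can be sketched as a condensed playbook. This is not the full example (it shows only one of the three protection groups and omits the clone and teardown tasks), all object names are placeholders, and the PUREFA_URL and PUREFA_API environment variables are assumed to be exported:

```yaml
---
# Condensed sketch of the environment build described above.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create three volumes
      purefa_volume:
        name: "{{ item }}"
        size: 10G
        state: present
      with_items:
        - fc-vol01
        - iscsi-vol01
        - shared-vol01

    - name: Create a Fibre Channel host with its own volume
      purefa_host:
        host: fc-host01
        protocol: fc
        wwns:
          - 00:00:00:00:00:00:00:01
        volume: fc-vol01
        state: present

    - name: Create an iSCSI host with its own volume
      purefa_host:
        host: iscsi-host01
        protocol: iscsi
        iqns:
          - iqn.1994-05.com.example:iscsi-host01
        volume: iscsi-vol01
        state: present

    - name: Share the third volume through a hostgroup
      purefa_hg:
        hostgroup: demo-hg
        host:
          - fc-host01
          - iscsi-host01
        volume:
          - shared-vol01
        state: present

    - name: Protect the hostgroup
      purefa_pg:
        pgroup: demo-pg
        hostgroup:
          - demo-hg
        state: present

    - name: Snapshot the shared volume with a known suffix
      purefa_snap:
        name: shared-vol01
        suffix: demo
        state: present

    - name: Create a read/write volume from the snapshot
      purefa_snap:
        name: shared-vol01
        suffix: demo
        target: restored-vol01
        state: copy

    - name: Pause to inspect the completed environment
      pause:
        prompt: "Inspect the array, then press Enter to tear down"
```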

Future Work

As I mentioned earlier, these modules will be developed further in future releases of Ansible, including support for replication: both configuring the initial replication and setting up replicated protection groups. We are also going to look at allowing modification of existing hosts, hostgroups and protection groups.

If there are other features you would like added to these modules, please let me know in the comments on this post, or even add them yourself. These modules are open source and we welcome contributions from the customer community.

I hope you have found this post useful and that you will find the Pure Storage modules for the FlashArray helpful when deploying your environments backed by Pure Storage.