With the recent release of Pure Storage® Cloud Block Store for AWS, Pure Storage makes unified storage-infrastructure-as-code a reality for DevOps professionals in hybrid cloud environments.
Automating data lifecycle management has historically been a challenge when lifting and shifting applications from on-premises to the cloud, or the other way around: on-premises and cloud APIs are different, forcing DevOps professionals to maintain two different sets of scripts that perform essentially the same operations (provisioning a volume, extending a volume's size, etc.), as shown in the diagram below:
With the advent of Cloud Block Store, this is no longer the case: you can now use the exact same APIs, whether you are dealing with an on-premises FlashArray™ or a Cloud Block Store instance. This means that any library, module, or tool that leverages the FlashArray REST API works both on FlashArray and Cloud Block Store:
Let’s get technical
Let’s see how this can be achieved with the Pure Storage FlashArray Ansible modules, which are built on top of the Pure Storage FlashArray Python REST Client.
The scenario I have implemented is one where you target both an on-premises FlashArray and a Cloud Block Store by running one single Ansible playbook that creates a volume with the same characteristics (name and size) on-prem and in the cloud:
Let’s watch the video below to see how that’s done:
The playbook I have used in the demo above is quite simple:
```yaml
- name: Pure Storage storage module examples
  hosts: local:aws
  tasks:
    - name: Create new chocolate-croissant volume
      purefa_volume:
        name: chocolate-croissant
        size: 300G
        fa_url: "{{ flasharray_url }}"
        api_token: "{{ flasharray_api_token }}"
```
It creates a 300GiB ‘chocolate-croissant’ volume using the `purefa_volume` Ansible module on a FlashArray and a Cloud Block Store at once. The secret sauce is to dynamically pass the URLs and API tokens of the FlashArray/Cloud Block Store by leveraging Ansible’s configuration flexibility. Indeed, you can specify a URL/API token pair for each of your Ansible hosts (named “local” and “aws” below) using your playbook’s inventory file, as shown below:
```ini
<aws_host_ip_address> ansible_ssh_private_key_file=<your_pem_file> ansible_user=ubuntu
```
You store the Pure-specific information relevant to each host in a `[hostname:vars]` section of your inventory file to set the `flasharray_url` and `flasharray_api_token` variables the Ansible playbook expects.
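As a sketch, the per-host variable sections of the inventory file could look like the following (the management addresses and tokens are placeholders, and the exact variable names are assumptions consistent with the playbook above, not copied from the repository):

```ini
[local:vars]
flasharray_url=<on_prem_flasharray_management_ip>
flasharray_api_token=<on_prem_api_token>

[aws:vars]
flasharray_url=<cloud_block_store_management_ip>
flasharray_api_token=<cbs_api_token>
```

With this in place, each host resolves `flasharray_url` and `flasharray_api_token` to its own array, so a single task targets both endpoints.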
Alternatively, you can create host-specific configuration files inside a special `host_vars` folder, as demonstrated in the video above. For extra security, you may even want to add the API token to the Ansible Vault so that it’s fully obfuscated from prying eyes.
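For illustration, a hypothetical per-host file (the file name and values below are assumptions, not taken from the repository) might contain:

```yaml
# host_vars/aws.yml -- hypothetical per-host variables file
flasharray_url: <cloud_block_store_management_ip>
flasharray_api_token: <cbs_api_token>
```

To vault the token, you can generate an encrypted value with `ansible-vault encrypt_string` and paste its output in place of the plain-text token.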
Check out the Pure Storage Hybrid Cloud Examples with Ansible GitHub repository to download the Ansible playbook as well as the various configuration files to test it in your own environment.
As I mentioned above, this API consistency across on-premises data centers, private clouds, and public clouds is available with all Pure Storage libraries and integrations that rely on our FlashArray REST API. I recommend reading Barkz’s Consistent API Experience for on-premises and the cloud blog post if you’re interested in a similar use case with PowerShell or Python. But if you’d rather stay in the Ansible universe, I highly recommend Simon Dodsley‘s latest post on Pure Storage Ansible modules, straight from the module author’s mouth!
I also encourage you to sign up for the Cloud Block Store Beta to be among the first to test this API ubiquity and consistency.
Last but not least, if you’re interested in getting more news about everything dev-related at Pure Storage, you should:
- Check out Pure/Code() for more developer resources
- Claim your Slack invite to join the Pure Storage developer community on Slack and ask us any question you might have. We have dedicated channels for Python, PowerShell, Ruby, Ansible and more, so it’s a great way to connect with like-minded developers
- Follow us on Twitter @PureStorageDev
I look forward to connecting with you on Twitter and Slack soon!