On April 30, 2015, the eleventh release of OpenStack hit the streets. Dubbed Kilo, this release adds new functionality that dovetails nicely with the enterprise-class features offered by the Pure Storage FlashArray.

The Pure Storage Block Storage (Cinder) driver is again included in the core OpenStack Kilo release. It carries forward all the features of the previous release and adds some significant enhancements for enterprise usage and ease of management.

One important thing to note about the Pure Storage Kilo Cinder driver is that it utilizes the Python SDK we released as a standalone product at the same time as OpenStack Juno. Because the SDK is now a prerequisite, you must install the pip command and then our SDK before installing OpenStack on your controller nodes. If you use deployment tools such as Foreman and Puppet to build your OpenStack environment, install the SDK immediately after deployment on the Cinder nodes running the cinder-volume service.

This is not difficult, as the few steps below show for installing the Pure Storage Python SDK on the latest versions of the major Linux distributions.

For RHEL 7 and CentOS 7 run the following commands:

yum install epel-release
yum install python-pip
pip install purestorage

For Ubuntu and Debian distributions:

apt-get install python-pip
pip install purestorage
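
To confirm the SDK is in place before deploying Cinder, a quick check from the shell is enough; if the import below exits cleanly, the purestorage module is installed and visible to Python:

pip show purestorage
python -c "import purestorage"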

You might ask why we decided to make our SDK an integral part of our Cinder driver. So did I, and this is the answer I got from one of our senior OpenStack developers…

The primary benefit is that we can add new features/functionality and bug fixes to the REST API client code without having to wait for the next OpenStack release. By de-coupling them we have more flexibility to quickly resolve issues. It also makes the code base smaller as far as things that need to be supported by our engineers which means less time fixing bugs/updating for new features and more time for us to work on cool new things.
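
To make that concrete, here is a minimal sketch of the standalone REST client in use; the array hostname and API token are placeholders you would replace with values from your own FlashArray:

# Minimal sketch: connect to a FlashArray with the standalone
# purestorage SDK and list its volumes. Hostname and API token
# below are placeholders, not real values.
import purestorage

array = purestorage.FlashArray("pure-array.example.com",
                               api_token="<your-api-token>")
print(array.get())                  # basic array information
for vol in array.list_volumes():    # volumes defined on the array
    print(vol["name"], vol["size"])
array.invalidate_cookie()           # close the REST session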

I mentioned the new features and enhancements in our Cinder driver earlier, so let’s take a quick look at each one as it relates to the Pure Storage FlashArray:

  • iSCSI CHAP support: Changes to the Cinder database mean that our driver can now automatically generate an iSCSI CHAP secret for each OpenStack node that needs to connect to the FlashArray, and persist it across Cinder service failovers (a configuration sketch follows this list).
  • Automated host object creation: There is no need to manually create array host objects for each and every OpenStack host that has volumes presented from the FlashArray. This links nicely with the iSCSI CHAP feature above.
  • Consistency Groups: OpenStack consistency groups allow multiple volumes to be snapshotted at the same time. From the FlashArray perspective these consistency groups map directly to our Protection Groups. This means the snapshots created are crash consistent, and for volume types in consistency groups you can leverage the local and remote replication scheduling embedded in our array (see the CLI example after this list).
  • Over-subscription in thin provisioning: Cinder was originally written for the thick-provisioned world of local DAS drives, whether ‘spinning rust’ or flash. The majority of enterprise-class storage arrays used as Cinder backends employ thin provisioning to allow over-subscription of the array’s capacity, and all-flash arrays now add cutting-edge data reduction on top of that. Kilo’s Cinder core allows over-subscription of thinly provisioned arrays, including the ability to be informed of varying data reduction rates from all-flash arrays, to ensure you get the most use out of your array. In a world where inline and post-processing data reduction is constantly changing an array’s overall data reduction rate, this can be critical to effective capacity utilization (see the configuration sketch below).
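
Putting two of these together, here is a sketch of what a Kilo cinder.conf backend stanza for the FlashArray might look like with CHAP and over-subscription enabled; the IP address, API token, and ratio are placeholder values, and max_over_subscription_ratio is the new Kilo option that tells the scheduler how far to over-provision:

[DEFAULT]
enabled_backends = pure-iscsi

[pure-iscsi]
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
volume_backend_name = pure-iscsi
san_ip = <FlashArray management VIP>
pure_api_token = <API token created on the FlashArray>
use_chap_auth = True
max_over_subscription_ratio = 20.0

And the consistency group workflow from the cinder CLI looks roughly like this, assuming a volume type named pure-iscsi that points at the backend above (note that the consistency group APIs are disabled in Cinder’s default policy.json and need to be enabled first):

cinder consisgroup-create --name mygroup pure-iscsi
cinder create --consisgroup-id <group-id> --volume-type pure-iscsi --name vol1 10
cinder cgsnapshot-create --name mygroup-snap <group-id>

The cgsnapshot-create call is what lands on the FlashArray as a crash-consistent Protection Group snapshot.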

For more details on configuring the Pure Storage Cinder driver in Kilo you can download the new configuration guide here.