Whilst we await the release of the official Pure Storage FlashBlade™ OpenStack® drivers, I thought it might be interesting to show you how you can use your FlashBlade system in an existing OpenStack environment, leveraging drivers that already exist.
The cool thing is that even though the FlashBlade is a scale-out NAS system, offering NFS connectivity, I’m going to show you how to use the system under Cinder, the block storage project within OpenStack. I’m also going to show you how you can use the same FlashBlade system to hold your Glance images.
This blog is actually based on a 2-node Devstack deployment of OpenStack Newton: a controller node (running Nova) and a compute-only node, so some of the directory names mentioned are Devstack-specific. Where appropriate I will give the directory name for a full OpenStack deployment.
The first thing we are going to do is set up an NFS share on the FlashBlade. From the FlashBlade GUI select the Storage tab in the left pane and then the + symbol at the top right. This will open the Create File System window, which we are going to complete as below.
Here I’m creating a 10TB file system called openstack and allowing anyone to access the export. I could add a specific IP address, or series of addresses, for the OpenStack controller nodes that are running the cinder-volume service, but for ease of description I’ll stick with a wildcard.
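If you do want to restrict access, FlashBlade export rules accept the familiar NFS-style syntax; as a purely illustrative sketch (the subnet below is made up for this example), a rule limiting the export to a controller subnet could look like this:

10.21.200.0/24(rw,no_root_squash)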
I could create two separate shares, one for Cinder and one for Glance, but instead I’m going to use a single share, create a couple of directories in it, and use those as the mount points.
The first thing I need to do is mount the newly created NFS share (this can be done from any host) and create two subdirectories in the share, called cinder and glance, ensuring that both directories have full read/write permissions, by running the following commands:
# mount -t nfs cloud-demo-fb-b08-35:/openstack /mnt
# mkdir /mnt/cinder; chmod 777 /mnt/cinder
# mkdir /mnt/glance; chmod 777 /mnt/glance
# umount /mnt
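If at any point you want to confirm that the FlashBlade is exporting the file system as expected, showmount (part of the standard NFS client tools) will list the exports from any client:

# showmount -e cloud-demo-fb-b08-35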
Now that we have our NFS share ready, we can configure our OpenStack controller nodes.
For the controller node running Glance, issue the following command:
# mount -t nfs cloud-demo-fb-b08-35:/openstack/glance /opt/stack/data/glance/images

Use this version for a non-Devstack deployment:
# mount -t nfs cloud-demo-fb-b08-35:/openstack/glance /var/lib/glance/images
Remember to add an appropriate entry to your /etc/fstab file so that the mount persists across reboots.
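As a purely illustrative example, the fstab entry for the non-Devstack path might look like this (the mount options shown are just a reasonable starting point):

cloud-demo-fb-b08-35:/openstack/glance  /var/lib/glance/images  nfs  vers=3,_netdev  0  0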
As far as Cinder is concerned, we don’t need to mount anything manually, as the cinder-volume service will handle the mounting automatically based on the changes we are about to make to the Cinder configuration file.
To allow Cinder to use the FlashBlade as a backend we need to use the generic Cinder NFS (reference) driver, more details of which can be found in the OpenStack Cinder documentation. In my Cinder configuration file (/etc/cinder/cinder.conf) I’m going to add a backend stanza for the FlashBlade and ensure that Cinder enables this backend, as follows:
[DEFAULT]
...
enabled_backends = fb-1
default_volume_type = fb-1
...

[fb-1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_mount_options = vers=3
nas_host = 10.21.97.47
nas_share_path = /openstack/cinder
volume_backend_name = fb-1
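Because default_volume_type is set to fb-1, a volume type with that name needs to exist. A minimal sketch of creating it with the standard Cinder CLI (run with admin credentials; the type name simply matches the backend name above):

# create a volume type named fb-1 and pin it to the FlashBlade backend
cinder type-create fb-1
cinder type-key fb-1 set volume_backend_name=fb-1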
I will also create an appropriate volume type in this way, pointing to the FlashBlade backend, so that the default_volume_type setting is always valid. By restarting the Cinder services (API, Scheduler and Volume) the FlashBlade mountpoint is automatically created, with no requirement for you to add it to your /etc/fstab file. We can see this by looking at the controller filesystem:
stack@demo-b08-15:~$ df
Filesystem                     1K-blocks    Used     Available    Use% Mounted on
udev                              65979628        4     65979624    1% /dev
tmpfs                             13198552      936     13197616    1% /run
/dev/sda1                          9476324  7131176      1840728   80% /
none                                     4        0            4    0% /sys/fs/cgroup
none                                  5120        0         5120    0% /run/lock
none                              65992752        0     65992752    0% /run/shm
none                                102400        0       102400    0% /run/user
/dev/sda6                         89682096    57056     85046328    1% /home
10.21.97.47:/openstack/cinder  10737418240  5521920  10731896320    1% /opt/stack/data/cinder/mnt/46185554c76779b4af125c54ccb188b2
10.21.97.47:/openstack/glance  10737418240  5521920  10731896320    1% /opt/stack/data/glance/images
In fact, Cinder will ensure that this mountpoint is made available on every Nova compute node in your OpenStack deployment, even without the Cinder services being deployed on those nodes. All that is required is to ensure the nfs-common package has been installed on all your controller and compute nodes.
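On Ubuntu-based nodes, for example, that is a one-liner (Red Hat-based distributions package the NFS client as nfs-utils instead):

sudo apt-get install -y nfs-common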
So now we are done, and the FlashBlade is integrated into your OpenStack environment, providing both a Cinder backend and a Glance image store.
Just so we can see the integrated FlashBlade and OpenStack environment, I have uploaded a number of Glance images using the standard glance image-upload command (an example is sketched further below), as you can see in this screenshot of Horizon:
and on the Glance controller node this is the Glance image store directory that we created on the FlashBlade NFS mountpoint:
stack@demo-b08-15:~/data/glance/images$ ls -lrt
total 3115640
-rw-r----- 1 stack stack   13287936 Apr 27 12:57 0a392c17-5895-4def-9cc4-5f21349c24cd
-rw-r----- 1 stack stack  261489152 Apr 27 13:16 739c3b7e-c4b7-4085-9bd6-f76e7ed6c90f
-rw-r----- 1 stack stack  265748992 Apr 27 13:16 af115fc3-099f-44d4-8e9e-a699f3f62c62
-rw-r----- 1 stack stack  286851072 Apr 27 13:16 b50e328e-4e10-4cb3-881a-1424ce192965
-rw-r----- 1 stack stack  234363392 Apr 27 13:16 b230897e-bebe-4746-9836-0ddd0b4346ac
-rw-r----- 1 stack stack  737345536 Apr 27 13:17 25b7a361-cafc-4393-a8b5-72cadde4ca0d
-rw-r----- 1 stack stack 1391329280 Apr 27 13:17 3fc86150-39bc-4fcc-94d0-bdda274c848e
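For reference, here is a hedged sketch of one way such an image could be added, using glance image-create to register and upload in a single step (the image name and source file shown are purely illustrative):

glance image-create --name xenial-server --disk-format qcow2 \
       --container-format bare --file xenial-server-cloudimg-amd64-disk1.img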
Now let’s create a couple of Cinder volumes, one an empty data volume and the other built from one of the Glance images:
stack@demo-b08-15:~$ cinder create --display_name data-volume 10
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2017-05-02T16:08:50.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | af340bac-fe1e-490c-8388-cf345f92d3ea |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | data-volume                          |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 73f553f1ef114dbc898622510e7bd439     |
| replication_status             | disabled                             |
| size                           | 10                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | fa829db3c0794162982b45443a58b6e9     |
| volume_type                    | fb-1                                 |
+--------------------------------+--------------------------------------+

stack@demo-b08-15:~$ cinder create --image-id b50e328e-4e10-4cb3-881a-1424ce192965 --display_name Xenial-Boot 8
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2017-05-02T16:16:42.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | cb747790-748d-4483-9215-be2cceca44cf |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | Xenial-Boot                          |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 73f553f1ef114dbc898622510e7bd439     |
| replication_status             | disabled                             |
| size                           | 8                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | fa829db3c0794162982b45443a58b6e9     |
| volume_type                    | fb-1                                 |
+--------------------------------+--------------------------------------+
We can see that these two volumes have been created as files on the NFS share, which Cinder, and any Nova instances that use them, will see as block devices.
stack@demo-b08-15:~$ ls -l /opt/stack/data/cinder/mnt/46185554c76779b4af125c54ccb188b2
total 18874368
-rw-rw---- 1 stack stack 10737418240 May  2 09:09 volume-af340bac-fe1e-490c-8388-cf345f92d3ea
-rw-rw---- 1 stack stack  8589934592 May  2  2017 volume-cb747790-748d-4483-9215-be2cceca44cf
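To consume one of these as a block device, the volume just needs to be attached to (or booted by) an instance; a hedged sketch using the Newton-era CLI, where the instance name, flavor and server name are purely illustrative:

# attach the empty data volume to an existing instance
nova volume-attach my-instance af340bac-fe1e-490c-8388-cf345f92d3ea

# or boot a new instance directly from the image-based volume
nova boot --flavor m1.small --boot-volume cb747790-748d-4483-9215-be2cceca44cf Xenial-VM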
All we need to do now is look at what the FlashBlade GUI is showing for the openstack share, which now holds these Glance images and Cinder volumes.
We can see that the share is giving a compression ratio of around 1.9:1 for the small amount of data we have in that share.