Let’s clarify the difference between these two similar commands before we go any further.
cinder-manage (with the hyphen) is a command for inspecting the current state of Cinder and manipulating the Cinder database.
cinder manage (no hyphen) imports an external block storage volume into the control of Cinder.
The latter is the one I will cover in this post.
Use case for cinder manage
As this command allows you to manage previously created volumes on a backend array, it lets you import block devices that have been used in other environments, whether OpenStack or bare-metal. These imported volumes could be replica volumes from a remote OpenStack cloud that you want to use for Disaster Recovery testing.
What can you import
Helpfully, from the Newton release there is an OpenStack Cinder command that lists the volumes on a backend that are capable of being imported into Cinder's management. All you need to know is some OpenStack configuration information for the storage backend you are interested in importing from.
Firstly, you must find the name of the storage pool used within OpenStack to reference the backend you are going to import from. "Pool" here is OpenStack terminology and doesn't mean you can only use this feature with arrays that support storage pools. Again, this is another unnecessary confusion that arises from Cinder terminology. Remember, it is hard to come up with generic feature names that are meaningful but don't clash with a feature name used by one of the eighty or so supported storage backends from forty or so different vendors…
Run the following command to get this pool information.
```
# cinder get-pools
+----------+-------------------------------------------------------------------+
| Property | Value                                                             |
+----------+-------------------------------------------------------------------+
| name     | sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1 |
+----------+-------------------------------------------------------------------+
```
In this case, we only have one storage backend configured, that being a Pure Storage® FlashArray.
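The pool name returned here follows Cinder's host@backend#pool convention. As a minimal illustration (plain Python, no OpenStack libraries; the helper name is my own), the reference can be split into its components like this:

```python
def parse_pool_reference(pool: str) -> dict:
    """Split a Cinder pool reference of the form host@backend#pool."""
    host, _, rest = pool.partition("@")
    backend, _, pool_name = rest.partition("#")
    return {"host": host, "backend": backend, "pool": pool_name}

ref = "sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1"
parts = parse_pool_reference(ref)
# Here the backend section and the pool name are both "puredriver-1",
# because this backend exposes a single pool named after the driver.
```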
Now that we have the name of the storage pool related to this backend array, we can look at the volumes that exist on the array, both those from our existing OpenStack cloud and those created by other applications. Here we use the helpful command that became available in OpenStack Newton, cinder manageable-list:
```
# cinder --os-volume-api-version 3.8 manageable-list sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1
+----------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
| reference                                                      | size | safe_to_manage | reason_not_safe         | cinder_id                            | extra_info |
+----------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
| {'name': 'volume-d01956e9-f1de-4485-be22-823d9f6212ef-cinder'} | 100  | True           | -                       | -                                    | -          |
| {'name': 'volume-c688327f-ce76-479c-b23a-f0f09b81f6f2-cinder'} | 100  | True           | -                       | -                                    | -          |
| {'name': 'volume-ac2fb473-ee6b-4b1c-85a9-6392c8793be5-cinder'} | 10   | False          | Volume already managed. | ac2fb473-ee6b-4b1c-85a9-6392c8793be5 | -          |
| {'name': 'volume-7daca491-78be-47a9-9ebd-9664c86b22e2-cinder'} | 8    | False          | Volume already managed. | 7daca491-78be-47a9-9ebd-9664c86b22e2 | -          |
| {'name': 'volume-75162556-bc55-495d-b80b-c7d4ec8e68be-cinder'} | 1    | False          | Volume already managed. | 75162556-bc55-495d-b80b-c7d4ec8e68be | -          |
| {'name': 'volume-5c7e9015-18ed-446d-89b3-96debfc0676d-cinder'} | 1    | False          | Volume already managed. | 5c7e9015-18ed-446d-89b3-96debfc0676d | -          |
| {'name': 'volume-3705bbc6-d6a8-4c8f-91bd-d4d40bc4a3d6-cinder'} | 300  | True           | -                       | -                                    | -          |
| {'name': 'please-manage-me'}                                   | 123  | True           | -                       | -                                    | -          |
+----------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
```
What this command shows is that the array has eight volumes in total, four of which are already managed by this OpenStack cloud. Of the remaining four, three look like OpenStack volumes but are not managed here, because they were created by another OpenStack cloud attached to the same Pure Storage FlashArray. The last volume, please-manage-me, is the one we are going to bring into our OpenStack environment.
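If you consume this listing programmatically, safe_to_manage is the field to filter on; a small sketch using abbreviated sample rows modelled on the output above (the row data is illustrative, not a client API):

```python
# Abbreviated sample rows modelled on the manageable-list output above.
rows = [
    {"name": "volume-d01956e9-cinder", "size": 100,
     "safe_to_manage": True, "reason_not_safe": None},
    {"name": "volume-ac2fb473-cinder", "size": 10,
     "safe_to_manage": False, "reason_not_safe": "Volume already managed."},
    {"name": "please-manage-me", "size": 123,
     "safe_to_manage": True, "reason_not_safe": None},
]

def importable(volumes):
    """Return the names of volumes Cinder reports as safe to manage."""
    return [v["name"] for v in volumes if v["safe_to_manage"]]

print(importable(rows))  # ['volume-d01956e9-cinder', 'please-manage-me']
```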
Managing the external volume
To bring this volume under our OpenStack Cinder control, we use the cinder manage command. The syntax looks a little complex when you read the CLI description, but it becomes very simple once you see it used.
```
# cinder manage --id-type name --volume-type puredriver-1 --name newly-managed-vol sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1 please-manage-me
+--------------------------------+-------------------------------------------------------------------+
| Property                       | Value                                                             |
+--------------------------------+-------------------------------------------------------------------+
| attachments                    | []                                                                |
| availability_zone              | nova                                                              |
| bootable                       | false                                                             |
| consistencygroup_id            | None                                                              |
| created_at                     | 2017-08-15T14:49:35.000000                                        |
| description                    | None                                                              |
| encrypted                      | False                                                             |
| id                             | 49696d13-ec6e-4afc-ba91-43556abf4973                              |
| metadata                       | {}                                                                |
| migration_status               | None                                                              |
| multiattach                    | False                                                             |
| name                           | newly-managed-vol                                                 |
| os-vol-host-attr:host          | sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1 |
| os-vol-mig-status-attr:migstat | None                                                              |
| os-vol-mig-status-attr:name_id | None                                                              |
| os-vol-tenant-attr:tenant_id   | c5cbcfbd24314ab2b6afc98c7489ee76                                  |
| replication_status             | None                                                              |
| size                           | 0                                                                 |
| snapshot_id                    | None                                                              |
| source_volid                   | None                                                              |
| status                         | creating                                                          |
| updated_at                     | 2017-08-15T14:49:35.000000                                        |
| user_id                        | 389494e05b4d4c10a737ca32d11c0db2                                  |
| volume_type                    | puredriver-1                                                      |
+--------------------------------+-------------------------------------------------------------------+
```
Let’s break this down.
- The parameter --id-type should be name, as we are going to reference the name field in the manageable-list output for the volume we want to manage
- --volume-type should be set to the volume type you want the volume to have when managed by your OpenStack cloud
- --name is what the imported volume will be called after it is made available to your environment
- next, we give the pool name of the backend array
- finally, we provide the name of the unmanaged volume we want to manage, as shown in the reference column of the manageable-list output
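The order of arguments matters: the two positional arguments (the pool, then the backend volume reference) come after the options. As a sketch, the full command line from the example could be assembled like this (the flags are real cinder CLI options; the helper function itself is hypothetical):

```python
def build_manage_command(pool, source_name, new_name, volume_type):
    """Assemble the argv for a `cinder manage` call that references
    the backend volume by name."""
    return [
        "cinder", "manage",
        "--id-type", "name",           # interpret the reference as a name
        "--volume-type", volume_type,  # type the volume gets once managed
        "--name", new_name,            # Cinder display name after import
        pool,                          # host@backend#pool from get-pools
        source_name,                   # unmanaged volume's backend name
    ]

cmd = build_manage_command(
    "sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1",
    "please-manage-me", "newly-managed-vol", "puredriver-1",
)
```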
Now let’s look at backend array again and see that the volume is now managed by our OpenStack Cinder environment:
```
# cinder --os-volume-api-version 3.8 manageable-list sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1
+----------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
| reference                                                      | size | safe_to_manage | reason_not_safe         | cinder_id                            | extra_info |
+----------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
| {'name': 'volume-d01956e9-f1de-4485-be22-823d9f6212ef-cinder'} | 100  | True           | -                       | -                                    | -          |
| {'name': 'volume-c688327f-ce76-479c-b23a-f0f09b81f6f2-cinder'} | 100  | True           | -                       | -                                    | -          |
| {'name': 'volume-ac2fb473-ee6b-4b1c-85a9-6392c8793be5-cinder'} | 10   | False          | Volume already managed. | ac2fb473-ee6b-4b1c-85a9-6392c8793be5 | -          |
| {'name': 'volume-7daca491-78be-47a9-9ebd-9664c86b22e2-cinder'} | 8    | False          | Volume already managed. | 7daca491-78be-47a9-9ebd-9664c86b22e2 | -          |
| {'name': 'volume-75162556-bc55-495d-b80b-c7d4ec8e68be-cinder'} | 1    | False          | Volume already managed. | 75162556-bc55-495d-b80b-c7d4ec8e68be | -          |
| {'name': 'volume-5c7e9015-18ed-446d-89b3-96debfc0676d-cinder'} | 1    | False          | Volume already managed. | 5c7e9015-18ed-446d-89b3-96debfc0676d | -          |
| {'name': 'volume-49696d13-ec6e-4afc-ba91-43556abf4973-cinder'} | 123  | False          | Volume already managed. | 49696d13-ec6e-4afc-ba91-43556abf4973 | -          |
| {'name': 'volume-3705bbc6-d6a8-4c8f-91bd-d4d40bc4a3d6-cinder'} | 300  | True           | -                       | -                                    | -          |
+----------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
```
Notice that the name of the volume has been changed to conform to the standard required by the driver of the backend storage array you are using. In the case of the Pure Storage FlashArray driver, the volume name is the cinder_id with -cinder appended. In this example, the driver performed the rename automatically on the backend array, so there is no need to do any management on your Pure Storage FlashArray. For other backends, you would need to consult the owner of that driver.
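For the Pure Storage driver specifically, that naming rule can be expressed in one line (a sketch of the convention visible in the listing, not the driver's actual code):

```python
def pure_managed_name(cinder_id: str) -> str:
    """Backend volume name the Pure Storage driver uses for a managed
    Cinder volume: the cinder_id wrapped in 'volume-' and '-cinder'."""
    return f"volume-{cinder_id}-cinder"

name = pure_managed_name("49696d13-ec6e-4afc-ba91-43556abf4973")
# 'volume-49696d13-ec6e-4afc-ba91-43556abf4973-cinder'
```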
Finally, from the standard Cinder view we see the volume with the name and volume type we defined in the manage
command:
```
# cinder list
+--------------------------------------+-----------+--------------------------+------+--------------+----------+--------------------------------------+
| ID                                   | Status    | Name                     | Size | Volume Type  | Bootable | Attached to                          |
+--------------------------------------+-----------+--------------------------+------+--------------+----------+--------------------------------------+
| 49696d13-ec6e-4afc-ba91-43556abf4973 | available | newly-managed-vol        | 123  | puredriver-1 | false    |                                      |
| 75162556-bc55-495d-b80b-c7d4ec8e68be | available | cirros-0.3.5-x86_64-disk | 1    | puredriver-1 | true     |                                      |
| ac2fb473-ee6b-4b1c-85a9-6392c8793be5 | in-use    |                          | 10   | puredriver-1 | true     | 7e821fce-3af1-4af8-bdd5-f72fd7f75e08 |
+--------------------------------------+-----------+--------------------------+------+--------------+----------+--------------------------------------+
```
So, not as hard or complicated as some may think.
Going back
So now that we have managed an external volume, how about we unmanage an OpenStack Cinder volume? This is simpler to achieve, as all we have to do is specify the Cinder ID or name of the volume.
Let’s unmanage the volume we just managed:
```
# cinder unmanage newly-managed-vol
```
Looking at the manageable list again, we can see that the volume is now outside of Cinder's control and has again been renamed, this time to its existing name on the backend array with -unmanaged appended:
```
# cinder --os-volume-api-version 3.8 manageable-list sn1-pool-e01-03.puretec.purestorage.com@puredriver-1#puredriver-1
+--------------------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
| reference                                                                | size | safe_to_manage | reason_not_safe         | cinder_id                            | extra_info |
+--------------------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
| {'name': 'volume-d01956e9-f1de-4485-be22-823d9f6212ef-cinder'}           | 100  | True           | -                       | -                                    | -          |
| {'name': 'volume-c688327f-ce76-479c-b23a-f0f09b81f6f2-cinder'}           | 100  | True           | -                       | -                                    | -          |
| {'name': 'volume-ac2fb473-ee6b-4b1c-85a9-6392c8793be5-cinder'}           | 10   | False          | Volume already managed. | ac2fb473-ee6b-4b1c-85a9-6392c8793be5 | -          |
| {'name': 'volume-7daca491-78be-47a9-9ebd-9664c86b22e2-cinder'}           | 8    | False          | Volume already managed. | 7daca491-78be-47a9-9ebd-9664c86b22e2 | -          |
| {'name': 'volume-75162556-bc55-495d-b80b-c7d4ec8e68be-cinder'}           | 1    | False          | Volume already managed. | 75162556-bc55-495d-b80b-c7d4ec8e68be | -          |
| {'name': 'volume-5c7e9015-18ed-446d-89b3-96debfc0676d-cinder'}           | 1    | False          | Volume already managed. | 5c7e9015-18ed-446d-89b3-96debfc0676d | -          |
| {'name': 'volume-49696d13-ec6e-4afc-ba91-43556abf4973-cinder-unmanaged'} | 123  | True           | -                       | -                                    | -          |
| {'name': 'volume-3705bbc6-d6a8-4c8f-91bd-d4d40bc4a3d6-cinder'}           | 300  | True           | -                       | -                                    | -          |
+--------------------------------------------------------------------------+------+----------------+-------------------------+--------------------------------------+------------+
```
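The rename on unmanage can be sketched the same way (again, this is the Pure Storage driver's behaviour as seen in the listing; the helper is illustrative only):

```python
def pure_unmanaged_name(managed_name: str) -> str:
    """Backend name the Pure Storage driver gives a volume after
    `cinder unmanage`: the managed name with '-unmanaged' appended."""
    return managed_name + "-unmanaged"

name = pure_unmanaged_name("volume-49696d13-ec6e-4afc-ba91-43556abf4973-cinder")
# 'volume-49696d13-ec6e-4afc-ba91-43556abf4973-cinder-unmanaged'
```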
I hope that this short post has been helpful, as I know I have been asked about how this feature works several times.