In this second blog post in the series, I describe how to perform a frequently used process using the Pure Storage® Ansible Collections for FlashArray™ and the modules available in the collection. The first blog post in the series discussed Ansible Collections and how to use them.
Over the years, I have been involved in many, many data migration projects. Believe me, I never want to go through those overnight and weekend marathons again, with all the associated application outages, rollbacks, complexity, and more.
If you had told me back then that there would be a simple, push-button way to migrate from one storage array to another while the application was still running, with little or no impact on the application users, I would have told you that you were insane. Well, fast-forward to today: migration nirvana is within your grasp if you are migrating from one Pure Storage FlashArray to another using the FlashArray Ansible Collections.
The rest of this blog post shows how to use Ansible to perform a volume migration from one FlashArray to another, with live I/O still being written to the filesystem on the underlying volume. Applications can remain active during the whole migration process with little or no impact on their users; depending on the application, there may be a slight increase in response times.
The process has been recorded in this short video, and the playbooks that are used in the video are detailed later in this blog post.
Notice the timestamps in the video. There are no smoke and mirrors here. This is all real-time.
The video shows the creation of a simple volume on a host connected to a single FlashArray running an application that is writing data to the volume. This volume is then migrated, whilst the application is still running, to another FlashArray.
The steps to perform the migration are:
- Create an ActiveCluster between the two arrays
- Create an ActiveCluster pod
- Move the volume to the pod
- Stretch the pod and let the data synchronize
- Move the volume out of the pod onto the second array
- Break the ActiveCluster connection
- Clean up the first array
As I stated above, the first part of this video creates a simple volume, then formats it and mounts it to be ready for an application to write data. The playbook to do this is:
```yaml
- name: Create volume for migration example
  hosts: localhost
  gather_facts: true
  collections:
    - purestorage.flasharray
  vars_files:
    - mig-vars.yaml
  tasks:
    - name: Get source FlashArray info
      purefa_info:
        gather_subset:
          - minimum
          - network
          - interfaces
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"
      register: src_array_info

    - name: Create volume for migration
      purefa_volume:
        name: "{{ migration_volume }}"
        size: 10G
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Get serial number of migration volume
      purefa_info:
        gather_subset: volumes
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"
      register: volumes_data

    - set_fact:
        volume_serial: "{{ volumes_data.purefa_info.volumes[migration_volume].serial }}"

    - name: Create host object on source array and connect volume
      purefa_host:
        host: "{{ ansible_hostname }}"
        iqn: "{{ ansible_iscsi_iqn }}"
        volume: "{{ migration_volume }}"
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Discover source FlashArray for iSCSI
      open_iscsi:
        show_nodes: yes
        discover: yes
        portal: "{{ src_array_info.purefa_info.network[src_iscsi_port].address }}"
      register: src_iscsi_iqn

    - name: Connect to source FlashArray over iSCSI
      open_iscsi:
        target: "{{ src_iscsi_iqn.nodes[0] }}"
        login: yes

    - name: Force multipath rescan
      command: /usr/sbin/multipath -r

    - name: Get multipath device for migration volume
      shell:
        cmd: /usr/sbin/multipath -ll | grep -i {{ volume_serial }} | awk '{print $2}'
      register: mpath_dev

    - name: Format migration volume
      filesystem:
        fstype: ext4
        dev: '/dev/{{ mpath_dev.stdout }}'

    - name: Mount migration volume
      mount:
        path: "{{ mount_path }}"
        fstype: ext4
        src: '/dev/{{ mpath_dev.stdout }}'
        state: mounted
```
You can also find this in the Pure Storage GitHub account.
The application I use to write data to the volume is a very simple random text generator, which ensures data is continually being written to the volume during the live migration.
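The generator itself is not part of the collection, and any steady writer will do. A minimal stand-in might look like the following sketch; the path, file name, and loop count are illustrative only (in the demo the target directory would be the mounted migration volume):

```shell
# Hypothetical stand-in for the random text generator used in the demo:
# repeatedly append random text to a file on the mounted volume.
MOUNT_PATH="${MOUNT_PATH:-/tmp/migration-demo}"   # in the demo: the migration volume's mount point
mkdir -p "$MOUNT_PATH"
for i in 1 2 3; do
    head -c 1024 /dev/urandom | base64 >> "$MOUNT_PATH/random.txt"
    sleep 1
done
```

In practice you would run something like this in the background (or use your real application's workload) so writes are in flight for the whole duration of the migration.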
The main Ansible playbook performs the actions listed earlier:
- First, start by creating an ActiveCluster link between the source and target FlashArrays.
- Next, create a pod and move the volume into the pod.
- Stretch the pod to the target array, which allows the pod to synchronize the volume across the two FlashArrays.
At this point, the playbook uses a loop process to wait for the pod to fully synchronize. This is the core part of the migration, and you can see that data is still being written to the volume all the way. Imagine your production database being migrated with zero downtime.
- After the volume has synchronized, it is connected to the host from the destination array, so the host now sees paths to the volume from both arrays.
- The volume in the ActiveCluster pod is disconnected from the host on the source array. The pod is then unstretched and the volume is moved out of the pod.
- Finally, all references to the ActiveCluster are removed, and the source array connections to the host are removed.
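The pod-synchronization wait in the playbook uses Ansible's retries/until mechanism; conceptually it is just a polling loop. As a hedged illustration of that logic, here is a shell sketch in which get_pod_status is a hypothetical stub standing in for the purefa_info pods query:

```shell
# Conceptual equivalent of the playbook's "Wait for pod sync" retries/until task.
# get_pod_status is a stub for illustration; in reality you would query the
# array for the pod's per-array status (e.g. via purefa_info).
get_pod_status() { echo "online online"; }

retries=40
delay=0   # the playbook uses delay: 5; zero here so the sketch runs instantly
synced=false
for i in $(seq 1 "$retries"); do
    # Fully synchronized when every array in the pod reports "online"
    if [ "$(get_pod_status)" = "online online" ]; then
        synced=true
        break
    fi
    sleep "$delay"
done
echo "synced=$synced"
```

The playbook's version of this (retries: 40, delay: 5) gives the pod up to around 200 seconds to reach the online/online state before the play fails.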
The playbook used to perform this migration is:
```yaml
- name: Live migration using ActiveCluster
  hosts: localhost
  gather_facts: true
  collections:
    - purestorage.flasharray
  vars_files:
    - mig-vars.yaml
  tasks:
    - name: Get source FlashArray info
      purefa_info:
        gather_subset:
          - minimum
          - network
          - interfaces
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"
      register: src_array_info

    - name: Get destination FlashArray info
      purefa_info:
        gather_subset:
          - minimum
          - network
          - interfaces
        fa_url: "{{ dst_array_ip }}"
        api_token: "{{ dst_array_api }}"
      register: dst_array_info

    - name: Connect arrays in ActiveCluster configuration
      purefa_connect:
        target_url: "{{ dst_array_ip }}"
        target_api: "{{ dst_array_api }}"
        connection: sync
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Create migration pod
      purefa_pod:
        name: "{{ migration_pod }}"
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Move migration volume to migration pod
      purefa_volume:
        name: "{{ migration_volume }}"
        move: "{{ migration_pod }}"
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Stretch migration pod to destination array
      purefa_pod:
        name: "{{ migration_pod }}"
        stretch: "{{ dst_array_info['purefa_info']['default']['array_name'] }}"
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Wait for pod sync
      purefa_info:
        gather_subset: pods
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"
      register: output
      retries: 40
      delay: 5
      until: "output | json_query('purefa_info.pods.\"{{ migration_pod }}\".arrays[].status') == ['online', 'online']"

    - name: Create host object on destination array
      purefa_host:
        host: "{{ ansible_hostname }}"
        iqn: "{{ ansible_iscsi_iqn }}"
        volume: "{{ migration_pod }}::{{ migration_volume }}"
        fa_url: "{{ dst_array_ip }}"
        api_token: "{{ dst_array_api }}"

    - name: Discover destination FlashArray using iSCSI
      open_iscsi:
        show_nodes: yes
        discover: yes
        portal: "{{ dst_array_info.purefa_info.network[dst_iscsi_port].address }}"
      register: dst_iscsi_iqn

    - name: Connect to destination FlashArray over iSCSI
      open_iscsi:
        target: "{{ dst_iscsi_iqn.nodes[0] }}"
        login: yes

    - name: Ensure new multipath links from destination array are connected
      command: /usr/sbin/multipath -r

    - debug:
        msg: "Volume fully sync'ed and ready for removal from source array"

    - pause:

    - name: Disconnect migration volume from host on source array
      purefa_host:
        volume: "{{ migration_pod }}::{{ migration_volume }}"
        host: "{{ ansible_hostname }}"
        state: absent
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Unstretch pod from source array
      purefa_pod:
        name: "{{ migration_pod }}"
        state: absent
        stretch: "{{ src_array_info['purefa_info']['default']['array_name'] }}"
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Move migrated volume out of pod on destination array
      purefa_volume:
        name: "{{ migration_pod }}::{{ migration_volume }}"
        move: local
        fa_url: "{{ dst_array_ip }}"
        api_token: "{{ dst_array_api }}"

    - name: Remove old multipath links to source array
      command: /usr/sbin/multipath -r

    - debug:
        msg: "Volume fully migrated to destination array. Ready to clean up both arrays."

    - pause:

    - name: Eradicate migration pod on destination array
      purefa_pod:
        name: "{{ migration_pod }}"
        state: absent
        eradicate: true
        fa_url: "{{ dst_array_ip }}"
        api_token: "{{ dst_array_api }}"

    - name: Cleanup hanging migration pod on source array
      purefa_pod:
        name: "{{ migration_pod }}.restretch"
        state: absent
        eradicate: true
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Disconnect arrays from ActiveCluster mode
      purefa_connect:
        target_url: "{{ dst_array_ip }}"
        target_api: "{{ dst_array_api }}"
        state: absent
        fa_url: "{{ src_array_ip }}"
        api_token: "{{ src_array_api }}"

    - name: Remove old multipath links to source array
      command: /usr/sbin/multipath -r

    - name: Get source array IQN
      shell:
        cmd: /usr/sbin/iscsiadm -m node | grep {{ src_array_info.purefa_info.network[src_iscsi_port].address }} | awk '{print $2}'
      register: src_iqn

    - name: Logout iSCSI sessions to source array
      shell:
        cmd: /usr/sbin/iscsiadm -m node -T {{ src_iqn.stdout }} -u

    - name: Delete iSCSI sessions to source array
      shell:
        cmd: /usr/sbin/iscsiadm -m node -o delete -T {{ src_iqn.stdout }}
```
You can also download this playbook from GitHub.
You wouldn’t do this next step in a production environment, but the video continues and uses a playbook to clean up the demonstration environment. This unmounts the file system from the host, deletes the volume on the target array, and removes all references to the target array from the host. The playbook used for this is:
```yaml
- name: Cleanup environment after migration complete
  hosts: localhost
  gather_facts: true
  collections:
    - purestorage.flasharray
  vars_files:
    - mig-vars.yaml
  tasks:
    - name: Get destination array information
      purefa_info:
        gather_subset:
          - network
          - volumes
        fa_url: "{{ dst_array_ip }}"
        api_token: "{{ dst_array_api }}"
      register: dst_array_info

    - set_fact:
        volume_serial: "{{ dst_array_info.purefa_info.volumes[migration_volume].serial }}"

    - name: Unmount filesystem
      mount:
        path: "{{ mount_path }}"
        state: absent

    - name: Get multipath device of migrated volume
      shell:
        cmd: /usr/sbin/multipath -ll | grep -i {{ volume_serial }} | awk '{print $2}'
      register: mpath_dev

    - name: Delete host object from destination array
      purefa_host:
        host: "{{ ansible_hostname }}"
        state: absent
        fa_url: "{{ dst_array_ip }}"
        api_token: "{{ dst_array_api }}"

    - name: Delete migrated volume from destination array
      purefa_volume:
        name: "{{ migration_volume }}"
        state: absent
        eradicate: true
        fa_url: "{{ dst_array_ip }}"
        api_token: "{{ dst_array_api }}"

    - name: Remove multipath device
      shell:
        cmd: /usr/sbin/multipath -r /dev/{{ mpath_dev.stdout }}

    - name: Flush multipath cache
      shell:
        cmd: /usr/sbin/multipath -F

    - name: Get destination array IQN
      shell:
        cmd: /usr/sbin/iscsiadm -m node | grep {{ dst_array_info.purefa_info.network[dst_iscsi_port].address }} | awk '{print $2}'
      register: dst_iqn

    - name: Logout iSCSI sessions to destination array
      shell:
        cmd: /usr/sbin/iscsiadm -m node -T {{ dst_iqn.stdout }} -u

    - name: Delete iSCSI sessions to destination array
      shell:
        cmd: /usr/sbin/iscsiadm -m node -o delete -T {{ dst_iqn.stdout }}
```
This is also available on the Pure Storage GitHub account.
Each of these playbooks calls a variable file that holds environmental information that allows the migration to occur. An example variable file is available.
If you download these playbooks, please ensure you update the variables file with your settings.
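The playbooks above reference a handful of variables from mig-vars.yaml. As a rough guide, the file might look like the following sketch; every value here is a placeholder you must replace with your own environment's details (the port names in particular depend on your arrays' network configuration):

```yaml
# Example mig-vars.yaml -- all values are placeholders
src_array_ip: 10.0.0.10           # management address of the source FlashArray
src_array_api: <source-api-token>
dst_array_ip: 10.0.0.20           # management address of the destination FlashArray
dst_array_api: <destination-api-token>
migration_volume: migration-vol   # volume to be migrated
migration_pod: migration-pod      # ActiveCluster pod used for the migration
mount_path: /mnt/migration        # where the volume is mounted on the host
src_iscsi_port: ct0.eth4          # iSCSI port names as reported by purefa_info
dst_iscsi_port: ct0.eth4
```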
This example only uses iSCSI for the host-volume connection, but I’m sure you could adapt it to support Fibre Channel connectivity. If you do, please upload a copy to the Pure Storage Ansible playbooks examples repository.