
In this post I will talk about using PowerCLI to run a test failover for VVol-based virtual machines. One of the many nice things about VVols is that in the VASA 3.0 API this process is largely automated for you. The SRM-like workflow of a test failover is built in, so the amount of storage-related PowerShell you have to write manually is fairly minimal.

 

With VVols, you don’t really fail over a single VM, you fail over a replication group. While it is certainly possible to recover just a single VM from that failover, for this blog post I will show failing over the entire group.

Get your replication group

So the first step is: which group? Most likely you are working from the VM side first. So for a given VM, which replication group am I failing over?

This is fairly simple to figure out. If you have a VM object from Get-VM, you can use Get-SpbmReplicationGroup to find the associated group.

So if I have a VM called vm-001, I will store it in $vm:
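Something like this, assuming you are already connected to vCenter with Connect-VIServer:

    $vm = Get-VM -Name "vm-001"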

Then I can get the replication group from that:

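A quick sketch of that lookup, assuming the SPBM cmdlets from the VMware.VimAutomation.Storage module are loaded ($sourceGroup is just an illustrative variable name):

    # The replication group that protects this VM on the source side
    $sourceGroup = Get-SpbmReplicationGroup -VM $vm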

 

Get target replication group

The next step is to get the target replication group, as this is what you are actually going to run the test failover against. Every source replication group has a target group on the replication side.

To find the target replication group, you can use Get-SpbmReplicationPair and pass in your source group.

Note that this will not return the target group by itself; it will return the group pair, meaning the source group and the corresponding target group. You can store the target group by pulling it from the response.
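For example (the pair object exposes the two groups as properties):

    # Returns the source/target pair for the source group
    $groupPair = Get-SpbmReplicationPair -Source $sourceGroup
    # Keep just the target group for the test failover
    $targetGroup = $groupPair.Target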

Optional: Choose a Point-in-Time

The next step is optional. Do you want to fail over to a specific point-in-time? If you do, you can query the available point-in-time replicas. Otherwise, skip this step.

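This is queried against the target group ($targetPiTs is an illustrative name):

    # List the point-in-time replicas available on the target side
    $targetPiTs = Get-SpbmPointInTimeReplica -ReplicationGroup $targetGroup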

You can choose the desired PiT by indexing to the right one. So if I want the second to last, I can just choose $targetPiTs[1].

Building the VVol Map

The last part is building what is called the VVol map. This indicates which VMs and which disks belong together. Every VVol has a UUID, and those UUIDs are stored in the VMX file and the corresponding VMDK files in the config VVol. These references need to be updated to the UUIDs of the VVols that are created from the PiT on the target site. The config VVol, which holds all of these files, is identically replicated, so the VMDK pointers will still reference the old UUIDs of the data VVols on the source side. The recovered data VVols will have new UUIDs. The test failover cmdlet will automatically update the files, but you do need to specify the source IDs.

This needs two things: the source VVol UUIDs and the source VVol datastore container ID.

The VVol datastore container ID is simple enough:
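The container ID is surfaced on the VVol datastore object in the vSphere API. A sketch ($containerId is an illustrative name):

    # Storage container ID of the source VVol datastore
    $vvolDatastore = $vm | Get-Datastore
    $containerId = $vvolDatastore.ExtensionData.Info.VvolDS.ScId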

Getting the source VVol IDs is fairly simple too. From your source replication group:
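One way to gather them, sketched below, is to walk the VMs that report this source group and pull each virtual disk's VVol UUID from its backing info ($groupVms and $vvolUuids are illustrative names):

    # Find the VMs that belong to the source replication group
    $groupVms = Get-VM | Where-Object {
        (Get-SpbmReplicationGroup -VM $_ -ErrorAction SilentlyContinue).Name -eq $sourceGroup.Name
    }
    # The VVol UUID of a VVol-backed disk is exposed as its BackingObjectId
    $vvolUuids = ($groupVms | Get-HardDisk).ExtensionData.Backing.BackingObjectId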
Then put them together:
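The exact string format the test failover cmdlet expects for these IDs is an assumption on my part; the colon-joined form below is only a sketch, so adjust it to whatever your VASA provider wants:

    # ASSUMPTION: "<VVol UUID>:<container ID>"; some providers may accept the bare UUIDs
    $vvolIds = @()
    foreach ($uuid in $vvolUuids) {
        $vvolIds += "$($uuid):$($containerId)"
    }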

Now we are ready to run the test!

Run Test Failover
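The cmdlet for this is Start-SpbmReplicationTestFailover, run against the target group. A sketch, carrying the variables over from the earlier snippets:

    # Kick off the test failover; the cmdlet returns the recovered VMX paths
    $testVms = Start-SpbmReplicationTestFailover -ReplicationGroup $targetGroup -VvolId $vvolIds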

If you want to specify a PiT, just add your previously selected PiT to that command with:
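For example, the full command might look like this:

    $testVms = Start-SpbmReplicationTestFailover -ReplicationGroup $targetGroup -VvolId $vvolIds -PointInTimeReplica $targetPiTs[1]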

(whichever index of it you want)

The failover on the array happens very quickly; the longest part is the VVol datastore file update process, which you will see in the task log.
The $testVms object will contain the VMX paths for all of the VMs, which you can now register and power on.
Then all you need to do is register the VMs and power them on:
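A sketch of that loop; the host selection is illustrative, so point it at whichever recovery host or resource pool you prefer:

    # Pick a host on the recovery side
    $esxiHost = Get-VMHost | Select-Object -First 1
    # Register each recovered VMX and power the VMs on
    $newVms = foreach ($vmxPath in $testVms) {
        New-VM -VMFilePath $vmxPath -VMHost $esxiHost
    }
    $newVms | Start-VM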
They are now test failed over.

Clean up

The last step is to clean up. Just shut the VMs down and unregister them:
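A sketch, reusing the $newVms list from the registration step:

    $newVms | Stop-VM -Confirm:$false
    # Remove-VM without -DeletePermanently only unregisters the VMs
    $newVms | Remove-VM -Confirm:$false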
You do not need to delete the VMs, just unregister them. VASA will do that work for you.
Then tell the array to clean it up:
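This is the stop cmdlet that pairs with the start cmdlet, run against the same target group:

    # Tears down the temporary test copies on the target side
    Stop-SpbmReplicationTestFailover -ReplicationGroup $targetGroup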

We can see on the FlashArray that the VMs (volume groups) have been destroyed. They will automatically be eradicated in 24 hours, but you can of course eradicate them earlier if you choose.
And you’re done!