VMFS UNMAP Process Switches Block Count

A recent question I got about my UNMAP PowerCLI script was: the script says it is using a certain block count, but when I looked at the log it was using 200. Why?

Well, I blogged before about why a given UNMAP process might revert to the default block count of 200 here. Essentially, if you indicate a block count larger than 1% of the free space of the VMFS, ESXi will revert it to 200. And if the VMFS is more than 75% full, ESXi will always override the block count back down to 200, regardless of what you specify.

The question was: does my script calculate the optimal block count (the larger the block count, the faster the reclaim process runs on the FlashArray)? It does. It finds the 1% number, and if it detects the volume is more than 75% full it uses 200, because it has no choice. A rough sketch of that logic is below.
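This is not the actual script, just a minimal sketch of the calculation in PowerCLI; the datastore name is hypothetical, and it assumes the 1 MB block size of VMFS-5:

```powershell
# Sketch: pick an UNMAP block count per ESXi's rules (1 MB VMFS-5 blocks assumed).
$datastore = Get-Datastore -Name "MyDatastore"    # hypothetical datastore name

$freeMB      = [math]::Floor($datastore.FreeSpaceGB * 1024)
$percentUsed = 100 * (1 - ($datastore.FreeSpaceGB / $datastore.CapacityGB))

if ($percentUsed -gt 75) {
    $blockCount = 200                             # ESXi will force 200 anyway
}
else {
    $blockCount = [math]::Floor($freeMB * 0.01)   # just under 1% of free space
}
$blockCount
```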

Great, so what’s the problem here?

Well, the situation was that the VMFS was not 75% full, so the script calculated the 1% value, 364,285 blocks, and reported that number in its log file.

The UNMAP for this process was taking a very long time, so the person looked at the hostd log on the ESXi server running the UNMAP procedure and saw it was using a block count of 200, not the 364,285 that the script had reported.

In the hostd log, ESXi reports each step of an UNMAP procedure and the block count used for that step, until UNMAP has been issued to all of the free blocks on the VMFS. So why was it 200?

There are only two reasons it reverts to 200: the block count exceeds 1% of the free space, or the VMFS is more than 75% full. This check is performed at EVERY step of the process. So while a calculated non-default block count can be correct when the process begins, the allocation on the VMFS may change along the way. If the volume suddenly becomes more than 75% full, for instance, due to a new VM or a VM that grows, the original calculation is no longer valid and ESXi will revert to 200.

So let's take an example. I have a 4 TB VMFS that is completely empty, so a block count equal to 1% of the free space would be around 41,943 blocks (~40 GB at the 1 MB VMFS block size). I will use 41,900, which is just slightly below 1% of the free space. So my UNMAP command looks like the one sketched below.
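Here is one way to issue it through PowerCLI's Get-EsxCli interface (a sketch, equivalent to running esxcli storage vmfs unmap on the host itself; the host and datastore names are hypothetical):

```powershell
# Issue the UNMAP with an explicit block count via PowerCLI's esxcli interface.
# Equivalent to "esxcli storage vmfs unmap -l MyDatastore -n 41900" on the host.
$esxcli = Get-EsxCli -VMHost "esxi01.example.com" -V2
$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = "MyDatastore"
$unmapArgs.reclaimunit = 41900        # blocks reclaimed per UNMAP iteration
$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)
```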

Then I immediately deploy a 3.5 TB thick virtual disk to consume a bunch of space on that VMFS.
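In PowerCLI, that deployment might look something like this (the VM name is hypothetical; 3.5 TB is 3,584 GB):

```powershell
# Add a 3.5 TB eager-zeroed thick disk to an existing VM to consume
# space on the datastore. The VM name is hypothetical.
New-HardDisk -VM (Get-VM -Name "TestVM") `
             -Datastore (Get-Datastore -Name "MyDatastore") `
             -CapacityGB 3584 `
             -StorageFormat EagerZeroedThick
```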

Now the free space is only 600 GB, so the appropriate block count would be about 6,000 blocks (6 GB), which is far smaller than the 41,900 issued with the command. You cannot change the block count once the command has been sent, so as soon as that value is no longer valid, ESXi overrides it for the remaining steps of the UNMAP.
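Re-running the 1% math against the new free capacity shows why (again assuming 1 MB blocks):

```powershell
# Re-run the 1% calculation against the new free capacity (1 MB blocks).
$freeMB = 600 * 1024              # ~600 GB free = 614,400 MB
[math]::Floor($freeMB * 0.01)     # => 6144 blocks (~6 GB), far below the 41900 in flight
```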

Sure enough, in my hostd log the calculated block count was used for the first 4 steps, but by the 5th step the VMFS free capacity had changed, so the value was overridden to 200 for the remaining steps of that UNMAP process.

So if you see your UNMAP process slowing down, even though the block count was valid at first, it might be because the free capacity of the VMFS changed enough during the UNMAP run to trigger the override.