A customer has two Dell PowerVault MD3620f SAN arrays that include the Remote Replication feature. The Virtual Disks (VDs) hosting the VMware Datastores are replicated across the two SANs, located in different buildings, using hardware synchronous replication. Some of the primary VDs reside on the SAN in Site A and are replicated to secondary VDs on the SAN in Site B, whilst the remaining primary VDs reside on the SAN in Site B and are replicated to the SAN in Site A.

This design was chosen because the customer did not purchase VMware Site Recovery Manager, so failover must be performed manually. By splitting the primary VDs across both sites, fewer VM Datastores need to be manually brought up should a failure occur, reducing downtime.

We had been using RAID 5 disk groups on the SANs, but after a recent hard drive failure we decided to change to Dynamic Disk Pools. When an HDD failed, the hot spare took over, and during the rebuild operation another drive reported unreadable sectors, which in turn affected some VDs.

In short, Dynamic Disk Pools allow us to provision storage faster, provide RAID 6-level protection with double parity, and utilise all the drives without the need for hot spares, because spare capacity is distributed across the disk pool.

In order to delete the Disk Groups, we needed to vacate one of the SANs by failing over the VDs, presenting the failed-over VDs to the ESXi hosts again and mounting them.

The example below shows the procedure that was carried out to fail over the ISOs Datastore from Site B to Site A in a vSphere 5.1 environment.

  • Switch off the VMs that reside on the Datastore that is going to be affected by the VD failover.
  • Stop presenting the Primary VD to the hosts by un-assigning the VD from the hosts.
  • Select the VD that is going to be failed over and switch the VD Role to Secondary.
  • Log in to the hosts via SSH and run esxcfg-rescan --all. The Datastore will now be shown as inactive.


  • Assign the new Primary VD to the hosts. Suspend the replication so that, should something go wrong, you still have a copy.
  • Run esxcfg-rescan --all again on the hosts.
  • Now run esxcfg-volume -l. This will list all the volumes that are available to be mounted.

  • To keep the existing signature, and avoid having to re-register the VMs and modify the .vmx and other virtual machine files to point to a Datastore with a new signature, we force-mount the Datastore by executing esxcfg-volume -m "DS-ISOs".
  • Refreshing the Datastore view in the vSphere Client will show the DS-ISOs Datastore as active again. Any VMs residing on it can be powered on again.
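The host-side part of the steps above can be sketched as a single SSH session. "DS-ISOs" is the Datastore label used in this environment, so substitute your own label:

```shell
# On each ESXi host (via SSH), after the VD roles have been switched on the SANs:

# 1. Rescan all adapters so the host notices the old Primary VD is gone
esxcfg-rescan --all

# 2. Rescan again once the new Primary VD has been assigned to the host
esxcfg-rescan --all

# 3. List unresolved VMFS volumes that are available to be mounted
esxcfg-volume -l

# 4. Force-mount the volume, keeping its original signature
esxcfg-volume -m "DS-ISOs"
```

The force-mount is persistent across reboots on ESXi 5.x; if you only want a non-persistent mount, esxcfg-volume also offers the -M option.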


Keep in mind that the failed-over VD is treated as a Datastore copy (snapshot) LUN by the ESXi hosts. As such, certain operations cannot be performed on it; for example, the Datastore cannot be resized, as per this KB.

(Screenshot: the Increase button is greyed out.)

To use the failed-over VD as a 'normal' Datastore, you would need to resignature the Datastore, modify the virtual machine files residing on it, and re-register the VMs. This is described in the vSphere ESX Configuration Guide.
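If you do choose to resignature from the command line rather than the vSphere Client, esxcfg-volume provides a -r option. This is only a sketch of that alternative path, again using the "DS-ISOs" label from this environment:

```shell
# Assign a new signature to the snapshot volume. The Datastore is remounted
# under a generated "snap-xxxxxxxx-<label>" style name, and any VMs on it
# must then be re-registered against the new Datastore name.
esxcfg-volume -r "DS-ISOs"
```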
In our case, we used Storage vMotion to migrate the virtual machines from the Datastore copy to a new Datastore created on the Disk Pool, which was then replicated back.


If, when you run esxcfg-volume -l, you see the line Can mount: No (the original volume is still online), you will need to restart the hosts on which this appears and then try force-mounting again.
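For reference, the output of esxcfg-volume -l looks similar to the illustrative sketch below (the UUID and extent name here are made up); the Can mount line is the one to check before attempting the force-mount:

```shell
# esxcfg-volume -l
#
# VMFS UUID/label: 4f3c1a2b-xxxxxxxx-xxxx-xxxxxxxxxxxx/DS-ISOs
# Can mount: No (the original volume is still online)
# Can resignature: No (the original volume is still online)
# Extent name: naa.xxxxxxxxxxxxxxxx:1     range: 0 - 102399 (MB)
```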


I have compiled the above method and information after a lot of research. Should you wish to follow these notes, please take care and ensure you have a proper backup.

Please leave a comment if you see something that needs to be adjusted or if you have any feedback.


By Brian Farrugia

I am the author of Phy2Vir.com. More info can be found on the about page.
