A customer of ours purchased a new server to create a Hyper-V cluster with their existing server. We decided to create a single-node cluster, migrate the VMs to the new server, and then re-install the “old” server (Host 1) and join it to the cluster.

The customer had only one switch (we know!) and, to avoid split-brain scenarios in case of switch failure and to use fewer ports, we decided to connect the Live Migration network and an additional cluster network between the servers back-to-back. The diagram below gives a basic overview of the setup, omitting the additional cluster network.


Basic Network Setup

We configured both nodes to use the back-to-back subnet for Live Migrations, to avoid disrupting the services being accessed from the corporate network.
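For reference, restricting Live Migration to a specific subnet can be done with the Hyper-V PowerShell module as well as through Hyper-V Manager. This is a minimal sketch to run on both hosts; the 10.0.0.0/24 subnet is a placeholder for whatever addressing your back-to-back link uses.

```powershell
# Enable inbound/outbound live migrations on this host.
Enable-VMMigration

# Don't allow migrations over just any network...
Set-VMHost -UseAnyNetworkForMigration $false

# ...only over the back-to-back subnet (placeholder - use your own).
Add-VMMigrationNetwork 10.0.0.0/24
```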

When we started the Shared Nothing Live Migration, i.e. migrating the VM data and storage to the new node (Host 2) and the shared storage (CSV), we noticed that the transfer was very slow. It ran in bursts of around 70 Mbps, peaking at 200 Mbps, and sometimes the rate dropped to bursts of a few Kbps. Each burst lasted a few seconds, followed by a pause of half a minute or so. Transferring 3 GB took roughly 45 minutes!

Knowing that this is not normal behaviour, we wanted to rule out a problem on the back-to-back link itself. We transferred an ISO file between the hosts over the live migration link and the copy ran at a steady 113 MB/s in both directions, so the link was fine. I searched a bit and found that enabling jumbo frames can improve Live Migration performance, so I configured the Ethernet adapters used for Live Migration on both hosts with an MTU of 9000. This did not improve the situation.
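If you want to try the jumbo-frames step yourself, it can be scripted rather than set in the adapter GUI. In this sketch, "LiveMigration" is a placeholder for the name of the back-to-back adapter, and note that the `*JumboPacket` value is driver-specific (many Broadcom/Intel drivers expect 9014, i.e. 9000 payload plus Ethernet headers; some expect 9000):

```powershell
# Run on both hosts. "LiveMigration" is a placeholder adapter name.
Set-NetAdapterAdvancedProperty -Name "LiveMigration" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify end-to-end: 8972 bytes of ICMP payload + 28 bytes of
# ICMP/IP headers = 9000, and -f forbids fragmentation, so a
# successful reply confirms jumbo frames work across the link.
ping 10.0.0.2 -f -l 8972
```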

After going through all the usual resolutions, we took a look at the network adapter parameters. Both servers had the Broadcom BCM5207C NetXtreme Gigabit Ethernet card. We decided to change the EEE (Energy Efficient Ethernet) Control Policies setting from Optimal Power and Performance to Maximum Performance on the Ethernet adapters used for Live Migration on both servers. Once this was done, Live Migrations ran at over 700 Mbps consistently, which is not bad over a Gigabit link.
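The same change can usually be made from PowerShell instead of the adapter's property sheet. The display name and value strings for the EEE setting vary by vendor and driver, so the names below are assumptions taken from our Broadcom property sheet; list the adapter's advanced properties first to find the exact strings your driver exposes:

```powershell
# Find the driver's exact display name/values for the EEE policy.
Get-NetAdapterAdvancedProperty -Name "LiveMigration" |
    Format-Table DisplayName, DisplayValue

# Apply the change (strings are driver-specific - check the output
# above; "LiveMigration" is a placeholder adapter name).
Set-NetAdapterAdvancedProperty -Name "LiveMigration" `
    -DisplayName "EEE Control Policies" `
    -DisplayValue "Maximum Performance"
```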

The screenshots below show the property that was changed, and the graph shows how the speed increased as soon as the change was made during a Live Migration.

Broadcom EEE Control Policies Property


Speed increase during Live Migration

I hope this article helps someone with similar issues, as I couldn’t find anything on the internet similar to what we were encountering.

If you have any thoughts on this, please make sure to leave a comment!


By Brian Farrugia

I am the author of Phy2Vir.com. More info can be found on the about page.

2 thoughts on “Slow Hyper-V Shared Nothing Live Migration”
  1. Have a very similar issue. It seems shared-nothing live migrations are slow when the guest VMs are powered ON; when they are OFF, the transfer is very fast. I think the EEE setting you found may also be influenced by disabling C-States in the BIOS and setting the BIOS to Maximum Performance. This helped a lot for me, but did not end up resolving the issue entirely. In order to get fast performance with live migrations, I have to power down the guest VMs first – which of course kind of defeats the purpose of Live Migrations.

    Here are my threads on the issue.



    1. Hi,
      I haven’t done this on a 10Gb card but if I do find anything I will let you know.
      If you are using Windows Server 2012 R2, have you tried applying the latest updates and hotfixes for the Hyper-V and Failover Cluster roles?

      You can use the script in this article to check for hotfixes.

      Keep us updated if you resolve it 🙂
