Don't we all remember witnessing our first vMotion and realizing what awesome things it made possible in virtualization?! In vSphere 6, vMotion got even better!
vMotion version history
First I'd like to give a short overview of what was already achieved in previous versions of vSphere.
- Multi-NIC vMotion
Reduced migration time by spreading vMotion traffic across multiple NICs.
- Stun During Page Send
Stuns the source machine when needed to let the pre-copy phase of vMotion make progress, so that the memory modification rate stays below the pre-copy transfer rate and pre-copy eventually succeeds.
- vMotion without shared storage
Allows a VM to be moved to another compute and storage resource simultaneously, without requiring shared storage.
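To make the Stun During Page Send idea above concrete, here is a tiny illustrative model of the iterative pre-copy loop. All numbers, names and the 50%-slowdown figure are my own assumptions for the sketch; real vMotion behaviour is far more nuanced.

```python
# Illustrative model of vMotion's iterative pre-copy with Stun During Page Send.
# All numbers and the slowdown factor are made up for this sketch.

def precopy_rounds(memory_gb, dirty_rate_gbps, transfer_rate_gbps,
                   switchover_threshold_gb=0.1, max_rounds=50):
    """Simulate pre-copy rounds; returns (rounds, slowdown_applied)."""
    slowdown = False
    # SDPS: if the VM dirties memory at least as fast as we can send it,
    # pre-copy can never converge, so the source VM is briefly stunned
    # (modelled here as capping the dirty rate below the transfer rate).
    if dirty_rate_gbps >= transfer_rate_gbps:
        dirty_rate_gbps = transfer_rate_gbps * 0.5
        slowdown = True

    remaining = memory_gb
    rounds = 0
    while remaining > switchover_threshold_gb and rounds < max_rounds:
        seconds = remaining / transfer_rate_gbps  # time to send this round
        remaining = dirty_rate_gbps * seconds     # pages dirtied meanwhile
        rounds += 1
    return rounds, slowdown

# A well-behaved VM converges without stunning...
print(precopy_rounds(8, 0.2, 1.25))  # (3, False)
# ...a write-heavy VM only converges once SDPS slows it down.
print(precopy_rounds(8, 2.0, 1.25))  # (7, True)
```

The point of the model: without the stun, the write-heavy VM's remaining dirty memory would never shrink between rounds.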
vSphere 6 vMotion Enhancements
vMotion across vCenter Servers
Previously vMotion was constrained to the boundaries of a vCenter Server, its Datacenter objects and Folder objects. This constraint no longer applies: vSphere 6 gives us cross-vCenter migration capabilities. Previously you had to export/import OVAs/OVFs or use other tooling to migrate VMs between vCenter Servers, but vSphere 6 allows VMs to move across vCenter Servers while simultaneously changing compute, storage, networks & management. I think it's justified to say that we can now really talk about Shared-Nothing vMotion. What's just as impressive is that most VM properties are maintained across the vCenter instances, like the UUID, historical data (events, alarms & task history) and HA properties; even DRS anti-affinity rules are honoured.
As discussed later in this article, this feature is now supported over local, metro and cross-continental distances. I was already impressed with the initial version of vMotion, but seamlessly moving VMs from one country to another (of course some prerequisites apply, like a stretched virtual network)? I would simply like to call that WOW!!
Requirements for using vMotion across vCenter Servers:
- vSphere 6+
- The same SSO domain for both vCenter Servers when using the UI; via the API it's possible to move VMs between SSO domains.
- At least 250 Mbit/s of bandwidth per vMotion migration
- Layer 3 connectivity between the vCenter Servers
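The 250 Mbit/s-per-migration requirement above also puts a practical ceiling on how many simultaneous migrations a given vMotion link can sustain. A minimal sketch (the function name and the idea of deriving a count this way are my own illustration, not a VMware API):

```python
# Quick sanity check for the 250 Mbit/s-per-migration requirement.
# Illustrative only: vSphere also enforces its own concurrency limits.

REQUIRED_MBIT_PER_VMOTION = 250

def max_concurrent_vmotions(link_mbit):
    """How many simultaneous vMotions a link can feed at 250 Mbit/s each."""
    return link_mbit // REQUIRED_MBIT_PER_VMOTION

print(max_concurrent_vmotions(1000))   # 1 GbE link  -> 4
print(max_concurrent_vmotions(10000))  # 10 GbE link -> 40
```

Note that in practice vSphere caps concurrent vMotions per host well below the raw bandwidth figure (if I recall correctly, 4 on 1 GbE and 8 on 10 GbE), so treat this purely as a bandwidth sanity check.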
vMotion across vSwitches (x-vSwitch vMotion)
This new feature allows the VM to be connected to a different vSwitch after the vMotion.
This allows for:
- VSS to VSS migration
- VSS to VDS migration
- VDS to VDS migration (transferring VDS port metadata)
It isn't possible to vMotion from a VDS to a VSS yet.
The requirement for x-vSwitch vMotion:
- Layer 2 VM network connection (stretched VLAN or overlay network)
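The supported migration paths above boil down to a small compatibility table, which can be sketched like this (the names and structure are my own illustration):

```python
# x-vSwitch vMotion compatibility, per the list above.
# (source_switch, destination_switch) pairs that are supported.

SUPPORTED_PATHS = {
    ("VSS", "VSS"),
    ("VSS", "VDS"),
    ("VDS", "VDS"),  # VDS port metadata is transferred along
    # ("VDS", "VSS") is deliberately absent: not possible yet
}

def can_cross_vswitch_vmotion(source, destination):
    return (source, destination) in SUPPORTED_PATHS

print(can_cross_vswitch_vmotion("VSS", "VDS"))  # True
print(can_cross_vswitch_vmotion("VDS", "VSS"))  # False
```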
Long distance vMotion (LD vMotion) & Increased Network Flexibility
As mentioned earlier, vMotion is now supported across long distances. VMware tells us it's supported up to 100ms RTT, but I had already heard rumors it could even be 150ms+. We would really like to get in touch with you about your experiences with these really long-distance vMotions. According to the backbone-latency table at //www.dotcom-tools.com/internet-backbone-latency.aspx, this would allow us to move VMs from Amsterdam to the US (FL & CA) seamlessly. That's mind-dazzling!
In vSphere 5.5, vMotion over long distances was supported as long as the RTT (Round Trip Time) was 10ms or less. In vSphere 6 the maximum RTT is increased to 100+ms.
[Update 10-4-2015] The rumors were correct! VMware has now officially validated 150ms RTT for Long Distance vMotion (reference: PDF @ Page 10). Even longer RTTs are possible (but not supported at this moment). Thanks to @DuncanYB (Duncan Epping) for confirming this and @TomVerhaeg (Tom Verhaeg) for pointing us at this information.
This feature allows for cross-continental US vMotion migrations and, for example, allows us in EMEA to migrate VMs from Poland to Paris, while still maintaining all vMotion guarantees. That's a 15x increase in latency tolerance. You have to give the engineering team credit for pulling that off.
Use cases for Long Distance vMotion include:
- Permanent migrations (for example to a cloud provider)
- Disaster avoidance
- SRM/DA testing
- Multi-site load balancing
- Capacity-bursting (Expansion to the cloud)
- Follow the sun
The requirements for Long Distance vMotion:
- Layer 2 connection (or overlay network) for the VM network
The same IP should be available at the destination.
- Layer 3 connection
In vSphere 6, vMotion can now be used over a routed vMotion network!
- Secured connection
The traffic should be secured at the transport level, to prevent sensitive data from being visible to prying eyes! So encrypt it or otherwise secure it so it cannot be intercepted.
- At least 250 Mbit/s per vMotion operation
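Given the RTT limits discussed above (100ms officially stated, 150ms validated per the update), a pre-migration check could look like this minimal sketch; the function, threshold constant and example RTT values are my own illustration:

```python
# Illustrative pre-check for Long Distance vMotion RTT.
# 150 ms is the officially validated limit per the update above.

VALIDATED_RTT_MS = 150

def ld_vmotion_rtt_ok(measured_rtt_ms):
    """True if the measured round-trip time is within the validated limit."""
    return measured_rtt_ms <= VALIDATED_RTT_MS

print(ld_vmotion_rtt_ok(95))   # True: comfortably within the limit
print(ld_vmotion_rtt_ok(180))  # False: beyond what is supported today
```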
NFC (Network File Copy)
Also new in vSphere 6 is the possibility to define a dedicated network for vSphere Replication NFC traffic, which is then used for moving cold data (cloning, and vMotioning powered-off VMs). This network can be a Layer 2 or a routed Layer 3 connection.
vMotion-compatible Shared-Disks Windows Clustering
[Updated 20-5-2016] Thanks to arfman2 on Reddit for pointing me at another major vMotion enhancement in vSphere 6.0, which allows us to vMotion clustered Windows machines with shared virtual disks (RDM) without causing a failure in WSFC or the underlying application in the VM.
For more information on this topic: //blogs.vmware.com/apps/2015/02/say-hello-vmotion-compatible-shared-disks-windows-clustering-vsphere.html
I think these vMotion enhancements will really increase the adoption of hybrid cloud environments, because they make moving your VMs around even easier!