What are the Key Requirements to Support vMotion Across Data Center Sites?
By Keysight Blog Team
Before we discuss the underlying network requirements for vMotion (that is, VM migration) to work across data center sites, let's first understand what vMotion is and what benefits it offers. vMotion technology allows live virtual machine (VM) mobility between two VMware hosts (hypervisors) without application downtime. It lets data center administrators perform hardware maintenance, better optimize CPU and memory resources, and migrate mission-critical applications out of a data center without affecting service SLAs.
One of the key use cases for VM mobility is migrating VMs across data center sites over a data center interconnect (DCI) WAN infrastructure.
Here are some of the key benefits offered by VM mobility across data centers:
- Data center maintenance without downtime: Applications running on a server or data center infrastructure that requires maintenance can be migrated offsite without any downtime.
- Data center consolidation: VM mobility enables data center consolidation efforts, allowing applications to migrate from one data center to another without business impact.
- Disaster avoidance: When a data center is in the path of a natural calamity (such as a hurricane), administrators can proactively migrate mission-critical applications to another data center in a different geographic region.
- Workload balancing across multiple sites: Migrate virtual machines between data centers to provide compute power from data centers closer to the clients (follow-the-sun model) or to load balance across multiple sites.
vMotion Requirements
vMotion application mobility depends heavily on the underlying network infrastructure. When migrating VMs across different subnets, special hardware and software features and careful network design are required.
- An IP network with a minimum of 622 Mbps of bandwidth is required.
- The maximum latency between the two vSphere servers cannot exceed 10 milliseconds.
- The source and destination ESX servers must be on the same broadcast domain; if they are in different domains, they must be reachable from each other.
- The IP subnet where the VM resides must be accessible from both the source and destination ESX servers, because the VM retains its IP address when it moves to the new destination server (so that TCP clients and other applications continue to work smoothly).
- The data storage location, including the boot device used by the VM, must be active and accessible by both the source and destination ESX servers.
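The bandwidth and latency requirements above lend themselves to an automated pre-flight check. The sketch below validates measured link statistics against the thresholds listed in this section; the function and field names are illustrative, not part of any VMware API.

```python
# Pre-flight check of the vMotion network requirements listed above.
# Thresholds mirror the article: 622 Mbps minimum bandwidth, 10 ms maximum
# latency between the two vSphere servers.
from dataclasses import dataclass

MIN_BANDWIDTH_MBPS = 622
MAX_LATENCY_MS = 10

@dataclass
class LinkStats:
    bandwidth_mbps: float   # measured available bandwidth between ESX servers
    latency_ms: float       # measured latency between ESX servers

def vmotion_precheck(link: LinkStats) -> list:
    """Return a list of requirement violations (empty list = ready to migrate)."""
    issues = []
    if link.bandwidth_mbps < MIN_BANDWIDTH_MBPS:
        issues.append(f"bandwidth {link.bandwidth_mbps} Mbps < {MIN_BANDWIDTH_MBPS} Mbps")
    if link.latency_ms > MAX_LATENCY_MS:
        issues.append(f"latency {link.latency_ms} ms > {MAX_LATENCY_MS} ms")
    return issues
```

A healthy DCI link (for example, 1000 Mbps at 5 ms) passes cleanly, while a constrained link fails both checks.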
So, what are the options available to support VM migration across data center sites configured in different network subnets?
There are many different technologies proposed by various network equipment manufacturers (NEMs), such as EVPN, PBB-EVPN, SPBM, LISP, L2VPN, VXLAN, NVGRE, and STT.
We are going to highlight how you can use VXLAN to support VM migration. The following network topology image represents data center sites connected over WAN infrastructure:
Here’s how the VM migration process will work by using VXLAN:
- The administrator initiates migration of the Green VM (IP 10.1.1.3) from site A to site B.
- After the VM is moved to site B, the host (on site B) generates a Reverse ARP (RARP) broadcast to announce the move.
- The VXLAN VTEP, running either on the host (hypervisor) or on a Top of Rack (ToR) switch, encapsulates the entire Ethernet broadcast frame into a UDP packet, with the multicast address assigned to that VXLAN Network Identifier (VNI) as the destination address and the VTEP address as the source IP address.
- The physical network delivers the multicast packet to all hosts that joined that group (that is, those serving the same VNI).
- This is how the other hosts learn that the VM has migrated to another host, and they start forwarding traffic to the new host's location instead of the original one.
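The encapsulation step above can be sketched in a few lines. The snippet below builds the 8-byte VXLAN header defined in RFC 7348 (flags byte 0x08 marking the VNI as valid, followed by the 24-bit VNI) and prepends it to an inner Ethernet frame such as the RARP broadcast; the VNI value and the placeholder frame bytes are illustrative, and the outer UDP/IP multicast delivery is left to the network stack.

```python
# Sketch of the VTEP encapsulation step: wrap an inner Ethernet frame
# (e.g., the RARP broadcast) in a VXLAN header per RFC 7348. The result
# would be carried in a UDP datagram (destination port 4789) addressed
# to the multicast group assigned to the VNI.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port (RFC 7348)

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags (0x08 = VNI valid); bytes 1-3: reserved (zero).
    # Bytes 4-6: 24-bit VNI; byte 7: reserved (zero).
    header = struct.pack("!BBBB", 0x08, 0, 0, 0) + struct.pack("!I", vni << 8)
    return header + inner_frame

# Illustrative broadcast frame: all-ones destination MAC plus placeholder bytes.
rarp_frame = bytes.fromhex("ffffffffffff") + b"\x00" * 8
packet = vxlan_encapsulate(5000, rarp_frame)  # encapsulate on VNI 5000
```

On the receiving side, a VTEP serving the same VNI strips the header and floods the inner broadcast to its local hosts, which is what lets remote sites learn the VM's new location.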
How Can Keysight Help Test This Scenario?
Keysight’s unique RackSim solution (a 2U appliance) allows users to emulate real-world data center network sites (racks of servers). Each appliance can emulate a large number of virtualized server hosts running various hypervisor types, such as ESXi or KVM. On top of the hypervisor, users can emulate a large number of VMs capable of generating real-world application traffic profiles (north-south and east-west).
It can also emulate the VM manager to generate events such as VM start/stop, VM deploy/destroy, and VM migration.
One of the key benefits it offers is measuring network convergence time, which is very important to test in VM migration scenarios. As mentioned in the requirements section above, the network must have very low latency to allow successful VM migration across the WAN.
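One common way to compute the convergence-time metric mentioned above is from per-packet receive timestamps: the disruption window is the gap between the last packet delivered to the VM's old location and the first packet delivered to its new one. The sketch below illustrates that calculation; the function and variable names are illustrative and not part of the RackSim product API.

```python
# Sketch: compute network convergence time for a VM migration from
# per-packet receive timestamps (in seconds) collected at the VM's old
# and new locations during the migration event.
def convergence_time(old_site_ts: list, new_site_ts: list) -> float:
    """Seconds of traffic disruption: gap between the last packet seen at
    the old site and the first packet seen at the new site (0.0 if the
    two traffic streams overlap, i.e., hitless convergence)."""
    if not old_site_ts or not new_site_ts:
        raise ValueError("need timestamps from both sites")
    gap = min(new_site_ts) - max(old_site_ts)
    return max(gap, 0.0)

# Example: old site last saw traffic at t=10.0 s; new site first saw
# traffic at t=10.35 s, giving a 0.35 s disruption window.
disruption = convergence_time([9.8, 10.0], [10.35, 10.4])
```

A shorter window indicates that the network (ARP/RARP learning, multicast delivery, forwarding-table updates) converged quickly after the move.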