Network Working Group                                        B. Sarikaya
Internet-Draft                                                 L. Dunbar
Intended status: Standards Track                              Huawei USA
Expires: November 15, 2014                                 B. Khasnabish
                                                           ZTE (TX) Inc.
                                                            May 14, 2014

   Virtual Machine Mobility Protocol Using Distributed Registrations
                draft-sarikaya-nvo3-vmm-dmm-pmip-03.txt

Abstract

   This document specifies a new IP-level protocol for seamless virtual
   machine mobility in data centers.  The source hypervisor registers a
   newly created virtual machine with a centrally available management
   node.  When the virtual machine moves to a destination hypervisor,
   the destination hypervisor updates the virtual machine record in the
   management node.  The management node sends a registration message
   to all previous source hypervisors in order to direct the ongoing
   traffic to the destination hypervisor.  Hot, warm and cold virtual
   machine mobility are achieved using dynamic Domain Name System
   updates and host routes.  Both intra data center and inter data
   center hot virtual machine mobility solutions are presented.  The
   requirements for warm and cold mobility of virtual machines are
   expected to be less stringent than those for the hot mobility
   option.  A short discussion of virtual machine lifecycle management
   is also included in this draft.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 15, 2014.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Conventions and Terminology
   3.  Requirements
   4.  Architecture
     4.1.  VM Mobility Protocol Architecture
   5.  VM Mobility Protocol Operation
   6.  Moving Local State of VM
   7.  Handling of Hot, Warm and Cold Virtual Machine Mobility
     7.1.  Intra Data Center Hot Virtual Machine Mobility
     7.2.  Inter Data Center Hot Virtual Machine Mobility
   8.  Virtual Machine Operation
     8.1.  Virtual Machine Lifecycle Management
   9.  Handling IPv4 Virtual Machine Mobility
   10. Security Considerations
   11. IANA Considerations
   12. Acknowledgements
   13. References
     13.1.  Normative References
     13.2.  Informative references
   Authors' Addresses

1.  Introduction

   Data center networks are increasingly used by telecom operators as
   well as by enterprises.  Currently these networks are organized as
   one large Layer 2 network in a single building.  In some cases such
   a network is extended geographically using virtual private network
   (VPN) technologies, still as one even larger Layer 2 network.

   Virtualization, which is used in almost all of today's data centers,
   enables many virtual machines to run on a single physical computer
   or compute server.  Virtual machines (VMs) need a hypervisor running
   on the physical compute server to provide them with shared
   processor/memory/storage and network connectivity
   [I-D.kreeger-nvo3-overlay-cp].  Being able to move VMs dynamically,
   i.e. live migration, from one server to another allows for dynamic
   load balancing or work distribution and is therefore a highly
   desirable feature [I-D.ietf-nvo3-vm-mobility-issues].  VM mobility
   is currently provided by Layer 2 [IEEE802.1Qbg] or Layer 2.5
   techniques [I-D.raggarwa-data-center-mobility].

   There are many challenges and requirements related to migration,
   mobility, and interconnection of Virtual Machines (VMs) and Virtual
   Network Elements (VNEs).  For example, the virtual local area
   network (VLAN) and virtual private network (VPN) techniques that are
   traditionally expected to support such migration, mobility, and
   interconnection have many limitations
   [I-D.khasnabish-vmmi-problems].  A new protocol solution needs to be
   developed.

   In view of the many virtual machine mobility schemes that exist
   today, there is a desire to define a standard control plane protocol
   for Layer 3 based virtual machine mobility.  The protocol should be
   based on IPv6.  In this document we specify such a protocol.

2.  Conventions and Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

   This document uses the terminology defined in
   [I-D.ietf-nvo3-vm-mobility-issues].  In addition we make the
   following definitions:

   Hot VM Mobility.  A given VM could be moved from one server to
   another in the running state.

   Warm VM Mobility.  In the case of warm VM mobility, the VM state is
   mirrored to the secondary server (or domain) at predefined
   (configurable) regular intervals.  This reduces the overhead and
   complexity, but it may also lead to a situation where the two
   servers do not contain exactly the same data (state information).

   Cold VM Mobility.  A given VM could be moved from one server to
   another in the stopped or suspended state.

3.  Requirements

   This section states the requirements for virtual machine mobility in
   data center networks.

   The data center network MUST support virtual machine mobility in
   IPv6.

   Virtual machine mobility SHOULD also be supported in IPv4.

   Tunneling MUST NOT be used between VMs in different data centers.
   Tunneling MUST NOT be used between a VM located in a data center and
   a host in some other site.

   Host routes MAY be used between VMs in different data centers and
   between a VM located in a data center and a host in some other site
   [I-D.shima-clouds-net-portability-reqs-and-models].

   Triangular routing MAY be used between VMs in different data
   centers.  The use of triangular routing SHOULD be minimized between
   a VM located in a data center and a host in some other site.

4.  Architecture

   A data center is Layer 2 based if packets are switched inside a rack
   and bridged among the racks, i.e. handled entirely at Layer 2.  From
   an IP point of view, the nodes are connected to a single link.
   Layer 2 based networks make it easy to move virtual machines from
   one server to another, but on the other hand they do not scale well
   for address resolution protocols such as ARP [RFC6820].

   In this document we assume an L3-based data center network and
   design a live virtual machine migration protocol.  The design makes
   minimal use of the Proxy Mobile IPv6 protocol [RFC5213].  The
   document is inspired by recent work on Distributed Proxy Mobile
   IPv6, such as [I-D.bernardos-dmm-distributed-anchoring] and
   [I-D.seite-dmm-dma], which extends the Proxy Mobile IPv6 protocol
   for distributed anchoring.

4.1.  VM Mobility Protocol Architecture

   Virtual machines connect to the network using a virtual interface
   supported by the hypervisor.  In this document, a hypervisor that
   supports distributed Proxy Mobile IP based virtual machine mobility
   is called an mHypervisor, or mHS for short.  For VMs that are
   created locally and have not yet moved, the mHS is the serving or
   destination mHS.  After a VM moves to a new mHypervisor, the
   previous mHS is called the anchor or source mHS (see Figure 1).

   The Top of Rack (ToR) switch connects the servers in a data center
   to the data center network.  The Border Router (BR) is the data
   center border router that provides connectivity between the VMs and
   the hosts communicating with the VMs.  The data center has an
   associated storage center.  The storage center is connected to the
   data center using fast means such as Fibre Channel (FC).

   When a VM is created, it registers with the data center management
   system.  The management system keeps a record of all VMs and their
   most recent addresses.  The data center management system manages
   all intra- and inter-data center VM mobility.

   After a VM is created, it starts to serve its users.  During this
   time, the VM may be moved at any moment.  Live VM migration is
   performed by the hypervisor.  The VM moves from the source
   hypervisor (anchor mHS) to the destination hypervisor (serving mHS).
   If the VM moves to a different subnet, its IP address(es) change.
   In Figure 1, if a VM moves from Hypervisor A to Hypervisor B,
   Hypervisor A is the source and Hypervisor B is the destination
   hypervisor.

                                      I N T E R N E T
                  |              |
               ------          ------
              |  BR  |        |  BR  |
               ------          ------
          ________|_____________|______________________________________
         |        |             |                            Data      |
         |     ------          ------       fc                Center   |
         |    |   R  |        |   R  |-----------------------|         |
         |     ------          ------                        |         |
         |        |              |   \                  _____|________ |
         |         --------------     \____________    |              ||
         |        |Agg. Switch   |    (   Network   )  |Storage Center||
         |         --------------      (_Management_)  |_____________ ||
         |         | \________/ |       (___Node___)                   |
         |     ------          ------                                  |
         |    |Switch|        |Switch|                                 |
         |     ------          ------                                  |
         |       |  \________/     |                                   |
         |       |      /     \    |                                   |
         |  ------------        -----                                  |
         | | ToR Switch |      | ToR |                                 |
         |  ------------        -----  ---------------------+          |
         |   |                    |                         |          |
         |   |   ----------       |   ----------                       |
         |   |--| Server   |      |--| Server   |       Other Servers  |
         |   |  |Hypervisor|      |   ----------                       |
         |   |  |  ----   A|      |                                    |
         |   |  | | VM |   |      |   ----------                       |
         |   |  |  -----   |       --| Server   |                      |
         |   |  |  | VM |  |         |Hypervisor|                      |
         |   |  |   -----  |         |   ----- B|                      |
         |   |  |   | VM | |         |   | VM | |                      |
         |   |  |    ----  |         |    ----  |                      |
         |   |   ----------           ----------                       |
         |   |-- Other servers                                         |
          -------------------------------------------------------------

                    Figure 1: Architecture of DPMIP VMM

5.  VM Mobility Protocol Operation

   When a virtual machine is created, its hypervisor (the source
   hypervisor) sends a VM mobility registration message, called the VM
   Mobility Registration Request or registration request for short, to
   the management node.  The message is structured similarly to the
   Proxy Binding Update (PBU) message [RFC5213], i.e. an IPv6 extension
   header containing length, type, checksum, sequence number and
   lifetime fields, followed by the message data.  The message data
   contains various options, which are structured in Type-Length-Value
   (TLV) format.

   The VM Mobility Registration Request message contains a Virtual
   Machine Link Layer Identifier option (renamed from the Mobile Node
   Link Layer Identifier option), in which the Link-Layer Identifier is
   the MAC address of the VM; a Virtual Machine Identifier option
   (renamed from the Mobile Node Identifier option), containing the
   VM-ID; and a Virtual Machine Address option (renamed from the Home
   Network Prefix option), containing a VM address.  More than one
   Virtual Machine Address option can be included, possibly one for
   each interface of the VM.  The source address of the VM Mobility
   Registration Request packet is used as the Proxy Care-of Address
   (Proxy-CoA).  The source Hypervisor keeps all these values for each
   VM in a data structure called the Binding Update List [RFC5213],
   with one entry per VM.
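
   The following Python sketch gives one possible in-memory
   representation of these option values and of a Binding Update List
   entry at the source Hypervisor.  It is a non-normative illustration;
   all class and field names are assumptions of this sketch and are not
   protocol elements.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class VMRegistrationRequest:
          # Values carried in the registration request options
          vm_id: str               # Virtual Machine Identifier option
          vm_mac: str              # VM Link Layer Identifier option (MAC)
          vm_addresses: List[str]  # one Virtual Machine Address option each
          proxy_coa: str           # packet source address used as Proxy-CoA
          lifetime: int            # 16-bit unsigned, in units of 4 seconds

      @dataclass
      class BindingUpdateListEntry:
          # One entry per VM kept by the Hypervisor [RFC5213]
          vm_id: str
          vm_mac: str
          vm_addresses: List[str] = field(default_factory=list)
          proxy_coa: str = ""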

   The management node records the virtual machine information in a
   binding cache entry for this virtual machine, including the source
   address of the VM Mobility Registration Request packet as the
   Proxy-CoA.  The management node sends a VM Mobility Registration
   Reply message to the hypervisor.  This message is structured
   similarly to the Proxy Binding Acknowledgement message [RFC5213].
   The VM Mobility Registration Reply message contains a status field,
   which is set to accepted or rejected.

   A virtual machine moves from its source Hypervisor to a new
   (destination) Hypervisor.  The move is initiated by the source
   Hypervisor.  If the move is not within the same L2 link, the virtual
   machine's IP address(es) change.  The VM obtains a new IP address
   from the destination Hypervisor.  The destination Hypervisor MUST
   send a VM Mobility Registration Request message to the management
   node.

   The management node receives the VM Mobility Registration Request
   message and searches the binding cache for a matching entry.  Once a
   match is found, the entry is modified to point to the new IP
   address(es) and Proxy-CoA.  Previous Proxy-CoAs are kept in the
   entry.  The management node sends a reply (VM Mobility Registration
   Reply message) to the destination Hypervisor to indicate acceptance
   of the registration.
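
   As an illustration only, the sketch below shows how a management
   node might maintain its binding cache when processing a VM Mobility
   Registration Request.  The dictionary layout and function name are
   assumptions of this sketch, not elements defined by this protocol.

      def process_registration(binding_cache, req):
          """Update the binding cache for the VM named in req and return
          the status for the VM Mobility Registration Reply."""
          entry = binding_cache.get(req.vm_id)
          if entry is None:
              # Initial registration from the source Hypervisor.
              binding_cache[req.vm_id] = {
                  "addresses": list(req.vm_addresses),
                  "proxy_coa": req.proxy_coa,
                  "previous_proxy_coas": [],
              }
          else:
              # The VM has moved: record the new address(es) and
              # Proxy-CoA, and keep the previous Proxy-CoAs so that the
              # previous source Hypervisors can be sent registration
              # requests.
              entry["previous_proxy_coas"].append(entry["proxy_coa"])
              entry["proxy_coa"] = req.proxy_coa
              entry["addresses"] = list(req.vm_addresses)
          return "accepted"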

   The source Hypervisor, or previous source Hypervisors, need to be
   informed of the new IP address(es) of the VM.  For this purpose, the
   management node sends a VM Mobility Registration Request message to
   the source Hypervisors.  A Hypervisor verifies that this message
   comes from the management node and otherwise rejects it.

   The source Hypervisor sends a VM Mobility Registration Reply back to
   the management node.  The source Hypervisor creates a host route
   pointing the old VM address to the new VM address.  The old VM
   address is obtained from the Binding Update List entry matching this
   VM, and the new VM address from the VM Mobility Registration Request
   message received from the management node.

   The VM Mobility Registration Request message contains a Lifetime
   field, a 16-bit unsigned integer.  The Lifetime field contains the
   lifetime of the registration as a number of time units of 4 seconds
   each.  The source Hypervisor sends its suggested value, and the
   management node returns the final value of the lifetime, which is
   equal to or less than the suggested value.  In order to extend a
   binding that is about to expire, the Hypervisor sends periodic
   re-registration messages (VM Mobility Registration Requests).
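
   For example, a requested lifetime of 600 seconds is encoded as 150
   time units.  A minimal, non-normative sketch of this conversion and
   of the management node's clamping of the suggested value follows;
   the function names are illustrative only.

      def lifetime_units(seconds):
          # The Lifetime field is a 16-bit unsigned count of 4-second
          # units, so the largest encodable value is 0xFFFF.
          return min(seconds // 4, 0xFFFF)

      def granted_lifetime(suggested_units, policy_max_units):
          # The management node grants a value equal to or less than
          # the value suggested by the Hypervisor.
          return min(suggested_units, policy_max_units)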

   All source hypervisors keep one entry in their Binding Update List
   for each virtual machine that was in communication before it was
   moved, i.e. for VMs in hot VM mobility.  The entries for VMs in cold
   VM mobility are removed after receiving the VM Mobility Registration
   Request message from the management node.  The Binding Update List
   is used to create the host routes.
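
   The derivation of host routes from the Binding Update List could be
   sketched as follows.  This is non-normative: it only emits
   (destination, next hop) pairs, installing them in the local routing
   table is platform specific, and all names are illustrative.

      def host_routes(bul_entry, new_addresses):
          """Pair each old VM address with a new VM address and return
          (destination, next_hop) host routes, one per old address."""
          routes = []
          for old, new in zip(bul_entry.vm_addresses, new_addresses):
              # Traffic arriving for the old /128 address is forwarded
              # toward the VM's new address at the destination
              # Hypervisor.
              routes.append((old + "/128", new))
          return routes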

   The source Hypervisor sends all packets from ongoing connections of
   the virtual machine to the destination Hypervisor using the host
   route.  The destination Hypervisor receives the packets and delivers
   them to the VM.  This delivery mechanism does not avoid triangular
   routing, but it avoids tunneling.  Route optimization, i.e. avoiding
   triangular routing, is explained in Section 7.

   At the source Hypervisor, virtual machine entries are kept in the
   binding update list until all inbound traffic to the virtual machine
   stops.  A timer may be used for this purpose.  When the timer times
   out, the entry is deleted.

   The destination Hypervisor of the virtual machine performs a dynamic
   Domain Name System update [RFC3007].  This update is done for all
   the services that the VM provides, which means that all traffic from
   new connections is directed to the new location of the virtual
   machine with no tunneling or triangular routing.
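
   One possible way to perform such a secure dynamic DNS update
   [RFC3007] is sketched below using the dnspython library.  The zone,
   server, record name and TSIG key are placeholders, and the use of
   dnspython is an assumption of this sketch, not a requirement of the
   protocol.

      import dns.query
      import dns.tsigkeyring
      import dns.update

      def ddns_update(zone, server, vm_name, new_addr, keyring):
          # Replace the AAAA record of the VM's service name so that
          # new connections are directed to the VM's new address.
          update = dns.update.Update(zone, keyring=keyring)
          update.replace(vm_name, 300, "AAAA", new_addr)
          return dns.query.tcp(update, server)

      # Example call with placeholder values:
      # keyring = dns.tsigkeyring.from_text({"vmm-key.": "c2VjcmV0"})
      # ddns_update("dc.example.com", "192.0.2.53", "vm1",
      #             "2001:db8:2::10", keyring)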

6.  Moving Local State of VM

   After the VM mobility related signaling (VM Mobility Registration
   Request/Reply), the virtual machine state needs to be transferred to
   the destination Hypervisor.  The state includes the VM's memory and
   file system.  The source Hypervisor opens a TCP connection with the
   destination Hypervisor over which the VM's memory state is
   transferred.

   The file system, or local storage, is more complicated to transfer.
   The transfer should ensure consistency, i.e. the VM at the
   destination should find the same file system it had at the source.
   Precopying is a commonly used technique for transferring the file
   system.  First the whole disk image is transferred while the VM
   continues to run.  After the VM is moved, any changes in the file
   system are packaged together and sent to the destination Hypervisor,
   which applies these changes to the file system locally at the
   destination.
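
   A highly simplified, non-normative sketch of this precopy approach
   is given below.  The callables it takes are placeholders for
   hypervisor-specific mechanisms, and all names are assumptions of
   this sketch.

      def precopy_storage(read_full_image, collect_changes,
                          apply_at_dst, wait_until_vm_moved):
          """Non-normative sketch of the precopy transfer of the VM's
          file system from the source to the destination Hypervisor."""
          # Step 1: transfer the whole disk image while the VM runs.
          apply_at_dst(read_full_image())
          # Step 2: once the VM has moved, package the changes made in
          # the meantime and send them; the destination applies them to
          # its local copy of the file system.
          wait_until_vm_moved()
          apply_at_dst(collect_changes())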

7.  Handling of Hot, Warm and Cold Virtual Machine Mobility

   When the VM moves to the destination hypervisor, it starts a secure
   dynamic DNS update [RFC3007].  The VM registers all its services in
   the DNS.  The dynamic DNS update solves the cold VM mobility
   problem, since all new communication with this VM can only be
   initiated at the new addresses that the VM acquired at the
   destination hypervisor.  Cold VM mobility also allows all source
   hypervisors to delete the binding update list entries of the VM.
   The VMs that are used for cold standby receive scheduled backup
   information, but less frequently than they would for the warm
   standby option.  Therefore, the cold mobility option can be used for
   non-critical applications and services.

   In the case of the warm standby option, the backup VMs receive
   backup information at regular intervals.  The duration of the
   interval determines the warmth of the standby option.  The longer
   the interval, the less warm (and hence the colder) the standby
   option becomes.

   In the case of the hot standby option, the VMs in both the primary
   and secondary domains have identical information and can provide
   services simultaneously, as in a load-sharing mode of operation.  If
   the VMs in the primary domain fail, there is no need to actively
   move the VMs to the secondary domain, because the VMs in the
   secondary domain already contain identical information.  The hot
   standby option is the most costly mechanism for providing
   redundancy, and hence it is used only for mission-critical
   applications and services.

   When a VM in motion has ongoing communications with outside hosts,
   packets will continue to arrive at the source hypervisors.  The
   source hypervisors create host routes based on the Binding Update
   List entries they have for the VM.  The host routes enable them to
   forward the ongoing communications to the destination hypervisor.
   If the VM has moved to a different data center, the packets are
   routed to the new data center.

   Host routes avoid the tunneling inherent in mobility protocols such
   as Proxy Mobile IPv6 [RFC5213].  However, host routes do not avoid
   triangular routing.  Route optimization is needed to avoid
   triangular routing.  In mobility protocols, route optimization is
   achieved by establishing a direct route between all communicating
   hosts (a.k.a. correspondent nodes) and the destination virtual
   machine.  Such a solution requires host modifications and is not
   scalable for virtual machine mobility.

   Optimal IP routing in virtual machine mobility has two components
   [I-D.ietf-nvo3-vm-mobility-issues]: outgoing traffic and incoming
   traffic.  Optimal IP routing for the outgoing traffic can be
   achieved by assigning a default router that is topologically closest
   to the ToR that connects the server presently hosting the VM.  This
   can be achieved by limiting the Layer 2 network to each rack and
   letting the ToR act as the default gateway for all the servers
   connected to that ToR.

   Optimal IP routing of the incoming traffic is divided into two
   components: intra data center traffic and inter data center traffic.

7.1.  Intra Data Center Hot Virtual Machine Mobility

   Optimal IP routing of the incoming intra data center traffic is
   achieved as follows.  After sending the VM Mobility Registration
   Request message to the source Hypervisors as described in Section 5,
   the management node also exchanges VM Mobility Registration
   Request/Reply messages with the default router of the source
   hypervisor.  In this document the default router is usually the Top
   of Rack switch, but it could be a different node depending on the
   configuration of the data center.

   The default router interprets the Virtual Machine Address (Home
   Network Prefix) values in pairs as host routes for virtual machines.
   The default router (ToR) establishes these host routes and uses them
   to redirect traffic coming from any correspondent nodes or from VMs
   in the servers connected to the ToR.
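
   As a non-normative illustration, the pairing of these address values
   into host routes at the default router could be expressed as
   follows.  The option ordering (old address followed by new address)
   and all names are assumptions of this sketch.

      def pair_host_routes(vm_address_options):
          """Interpret the address options two at a time as
          (old address, new address) host-route pairs."""
          pairs = zip(vm_address_options[0::2], vm_address_options[1::2])
          # Redirect traffic for each old /128 address toward the new
          # address at the destination Hypervisor.
          return [(old + "/128", new) for old, new in pairs]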

   The default routers MUST allow configuration of the host routes
   generated by the VM Mobility Registration protocol.  The VM is not
   moved until the Interior Gateway Protocol (IGP), e.g. OSPF or IS-IS,
   running at the default router of the destination Hypervisor has
   announced the route.  The VM can wait to move until the host route
   is set up.  The VM Mobility Registration protocol is essentially
   used to inform both routers that this process is going on.

7.2.  Inter Data Center Hot Virtual Machine Mobility

   Optimal IP routing of the incoming inter data center traffic can be
   achieved by propagating the host routes using inter-domain routing
   protocols such as the Border Gateway Protocol (BGP) [RFC4271].  If
   the host routes are propagated within a data center using IGPs, the
   normal redistribution mechanism can, by policy, redistribute the
   host routes at the border router.  A BGP community can be attached
   to the host routes to make them easier to process.

   The border router (BR) MAY send a BGP UPDATE message to its BGP
   peers.  Source hypervisors receiving incoming traffic for a VM that
   has moved first try to reroute the traffic using host routes.  The
   next action should be to ask the border router to initiate a BGP
   UPDATE message.  The source hypervisor may report to the BR each
   host route that it has in its binding update list for the VM.

   The border router generates an UPDATE message using the information
   it received from the source hypervisors.  The UPDATE message
   contains an ORIGIN path attribute, which is set to IGP.  The IPv4 or
   IPv6 address prefix values of the VM when it was at the source
   hypervisor, and the destination prefix, are carried in the Network
   Layer Reachability Information (NLRI) field of the UPDATE message.
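
   The sketch below only assembles, in a generic form, the parameters
   of such an announcement; it does not use any particular BGP
   implementation's API, and the community value and field names are
   placeholders chosen for illustration.

      def host_route_announcement(old_prefix, new_prefix, next_hop,
                                  community="65000:100"):
          """Parameters of one BGP UPDATE announcing a host route for a
          moved VM; ORIGIN is set to IGP as described above."""
          return {
              "nlri": [old_prefix, new_prefix],  # old and new prefixes
              "next_hop": next_hop,              # e.g. the border router
              "origin": "IGP",
              "communities": [community],        # constrains propagation
          }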

   UPDATE messages with host routes should be exchanged among a
   particular set of data centers, possibly the data centers belonging
   to the same operator.  This constrained propagation can be achieved
   by policy enforcement.

8.  Virtual Machine Operation

   Virtual machines are not involved in any mobility signalling.  Once
   a VM moves to the destination hypervisor, the VM should be able to
   continue to receive packets sent to its previous address(es).  This
   happens in hot VM mobility scenarios.

   The VM establishes a virtual interface for each of its previous
   addresses.  A virtual interface is established only if there is
   communication on a previously acquired address, and that address is
   assigned to the virtual interface.  These virtual interfaces enable
   the VM to continue to receive packets sent to its previous
   addresses.  A virtual interface is deleted when there is no longer
   any communication over its address.
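
   A non-normative sketch of this bookkeeping follows: a virtual
   interface is created only for a previous address that still has
   active communication, and it is removed once that communication
   stops.  The interface-management callables are placeholders for
   whatever mechanism the guest or hypervisor actually uses.

      def refresh_virtual_interfaces(previous_addresses,
                                     active_addresses, vifs,
                                     create_vif, delete_vif):
          """vifs maps an address to its virtual interface, if any."""
          for addr in previous_addresses:
              if addr in active_addresses and addr not in vifs:
                  vifs[addr] = create_vif(addr)  # ongoing traffic: keep
              elif addr not in active_addresses and addr in vifs:
                  delete_vif(vifs.pop(addr))     # traffic stopped: drop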

8.1.  Virtual Machine Lifecycle Management

   Managing the lifecycle of a VM includes creating the VM with all of
   the required resources and managing it seamlessly as the VM migrates
   from one service to another during its lifetime.  The on-boarding
   process includes the following steps:

   1.  Sending an allowed (authorized/authenticated) request to
       Management/Orchestration module in an acceptable format with
       mandatory/optional virtualized resources {cpu, memory, storage,
       process/thread support, etc.} and interface information

   2.  Receiving an acknowledgement from the Management/Orchestration
       module regarding availability and usability of virtualized
       resources and interface package

   3.  Sending a confirmation message to the Management/Orchestration
       module with request for approval to adapt/adjust/modify the
       virtualized resources and interface package for utilization in a
       service

9.  Handling IPv4 Virtual Machine Mobility

   Virtual machines may be created to serve legacy clients and therefore
   may be assigned IPv4 addresses.  In this section we specify how IPv4
   virtual machine mobility is handled.

   The source Hypervisor registers the virtual machine by sending a VM
   Mobility Registration Request message structured as in Section 5,
   except that instead of the Virtual Machine Address option it carries
   the Virtual Machine IPv4 Address option (renamed from the IPv4 Home
   Address Request option defined in [RFC5844]), in which the IPv4
   address of the virtual machine is placed.  The Hypervisor also adds
   the Virtual Machine Default Router Address option (renamed from the
   IPv4 Default-Router Address option defined in [RFC5844]), in which
   the Hypervisor's IPv4 address is placed.  The Hypervisor, which is
   dual stack, sends the VM Mobility Registration Request message over
   IPv6.
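
   For illustration only, the IPv4 case can be captured by extending
   the earlier sketch with the two renamed options; the class and field
   names remain assumptions of this sketch.

      from dataclasses import dataclass

      @dataclass
      class VMRegistrationRequestIPv4:
          vm_id: str
          vm_mac: str
          vm_ipv4_address: str      # Virtual Machine IPv4 Address option
          default_router_ipv4: str  # VM Default Router Address option
                                    # (the Hypervisor's IPv4 address)
          lifetime: int             # units of 4 seconds, as in IPv6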

   The management node replies to the Hypervisor with a VM Mobility
   Registration Reply message after registering the IPv4 address in the
   binding cache.  In the reply, the IPv4 Home Address Reply option is
   not used, because virtual machines are assigned IPv4 addresses by
   their hypervisors.  The registration contains a lifetime value, as
   in IPv6.  The management node records the virtual machine
   information in a binding cache entry for this virtual machine, with
   the Virtual Machine Default Router Address as the Proxy-CoA.  Any
   traffic tunnelled to this virtual machine is directed to the
   Proxy-CoA.

   When the virtual machine moves to a destination Hypervisor, that
   Hypervisor registers the VM with the management node as in
   Section 5.  Next, the management node sends a VM Mobility
   Registration Request message to the source Hypervisor(s).  For IPv4
   VMs, the source Hypervisor MUST create IPv4 host routes.  The old VM
   IPv4 address is obtained from the Binding Update List entry matching
   this virtual machine, and the new VM IPv4 address is obtained from
   the VM Mobility Registration Request message.

   The source Hypervisor transfers the virtual machine state to the
   destination Hypervisor over IPv4 TCP connection(s) using the old and
   new Proxy-CoA IPv4 addresses.

   If Network Address Translation (NAT) [RFC3022] is used, changing the
   NAT box after a move invalidates all private addresses the VM had at
   the source hypervisor.  The protocol described in this document can
   handle VM mobility in the presence of NAT when the NAT box is
   centrally located in the data center, such as at the border router.
   In that case, only intra data center VM mobility of privately
   addressed VMs can be handled.

10.  Security Considerations

   TBD.

11.  IANA Considerations

   TBD.

12.  Acknowledgements

   The authors are grateful to Susan Hares for careful reading and
   constructive comments.

13.  References

13.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2629]  Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
              June 1999.

   [RFC5213]  Gundavelli, S., Leung, K., Devarapalli, V., Chowdhury, K.,
              and B. Patil, "Proxy Mobile IPv6", RFC 5213, August 2008.

   [RFC5844]  Wakikawa, R. and S. Gundavelli, "IPv4 Support for Proxy
              Mobile IPv6", RFC 5844, May 2010.

   [RFC3007]  Wellington, B., "Secure Domain Name System (DNS) Dynamic
              Update", RFC 3007, November 2000.

   [RFC3022]  Srisuresh, P. and K. Egevang, "Traditional IP Network
              Address Translator (Traditional NAT)", RFC 3022, January
              2001.

   [RFC4271]  Rekhter, Y., Li, T., and S. Hares, "A Border Gateway
              Protocol 4 (BGP-4)", RFC 4271, January 2006.

   [RFC6820]  Narten, T., Karir, M., and I. Foo, "Address Resolution
              Problems in Large Data Center Networks", RFC 6820, January
              2013.

   [I-D.ietf-nvo3-vm-mobility-issues]
              Rekhter, Y., Henderickx, W., Shekhar, R., Fang, L.,
              Dunbar, L., and A. Sajassi, "Network-related VM Mobility
              Issues", draft-ietf-nvo3-vm-mobility-issues-02 (work in
              progress), December 2013.

   [I-D.ietf-nvo3-framework]
              Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
              Rekhter, "Framework for DC Network Virtualization", draft-
              ietf-nvo3-framework-05 (work in progress), January 2014.

13.2.  Informative references

   [I-D.narten-nvo3-overlay-problem-statement]
              Narten, T., Black, D., Dutt, D., Fang, L., Gray, E.,
              Kreeger, L., Napierala, M., and M. Sridhavan, "Problem
              Statement: Overlays for Network Virtualization", draft-
              narten-nvo3-overlay-problem-statement-04 (work in
              progress), August 2012.

   [I-D.kreeger-nvo3-overlay-cp]
              Kreeger, L., Dutt, D., Narten, T., Black, D., and M.
              Sridharan, "Network Virtualization Overlay Control
              Protocol Requirements", draft-kreeger-nvo3-overlay-cp-04
              (work in progress), June 2013.

   [I-D.wkumari-dcops-l3-vmmobility]
              Kumari, W. and J. Halpern, "Virtual Machine mobility in L3
              Networks.", draft-wkumari-dcops-l3-vmmobility-00 (work in
              progress), August 2011.

   [I-D.shima-clouds-net-portability-reqs-and-models]
              Shima, K., Sekiya, Y., and K. Horiba, "Network Portability
              Requirements and Models for Cloud Environment", draft-
              shima-clouds-net-portability-reqs-and-models-01 (work in
              progress), October 2011.

   [I-D.bernardos-dmm-distributed-anchoring]
              Bernardos, C. and J. Zuniga, "PMIPv6-based distributed
              anchoring", draft-bernardos-dmm-distributed-anchoring-03
              (work in progress), October 2013.

   [I-D.seite-dmm-dma]
              Seite, P., Bertin, P., and J. Lee, "Distributed Mobility
              Anchoring", draft-seite-dmm-dma-07 (work in progress),
              February 2014.

   [I-D.raggarwa-data-center-mobility]
              Aggarwal, R., Rekhter, Y., Henderickx, W., Shekhar, R.,
              Fang, L., and A. Sajassi, "Data Center Mobility based on
              E-VPN, BGP/MPLS IP VPN, IP Routing and NHRP", draft-
              raggarwa-data-center-mobility-06 (work in progress),
              December 2013.

   [I-D.khasnabish-vmmi-problems]
              Khasnabish, B., Liu, B., Lei, B., and F. Wang, "Mobility
              and Interconnection of Virtual Machines and Virtual
              Network Elements", draft-khasnabish-vmmi-problems-03 (work
              in progress), December 2012.

   [IEEE802.1Qbg]
              IEEE Draft Std 802.1Qbg/D2.2, "MAC Bridges and Virtual
              Bridged Local Area Networks - Amendment XX: Edge Virtual
              Bridging", February 2012.

Authors' Addresses

   Behcet Sarikaya
   Huawei USA
   5340 Legacy Dr. Building 3
   Plano, TX  75024

   Email: sarikaya@ieee.org

   Linda Dunbar
   Huawei USA
   5340 Legacy Dr. Building 3
   Plano, TX  75024

   Email: linda.dunbar@huawei.com

   Bhumip Khasnabish
   ZTE (TX) Inc.
   55 Madison Avenue, Suite 160
   Morristown, NJ  07960

   Email: vumip1@gmail.com, bhumip.khasnabish@ztetx.com
