Network Working Group                                    L. Dunbar
Internet Draft                                           Futurewei
Intended status: Informational                            A. Malis
Expires: January 26, 2021                         Malis Consulting
                                                       C. Jacquenet
                                                             Orange
                                                      July 26, 2020



           Networks Connecting to Hybrid Cloud DCs: Gap Analysis
                draft-ietf-rtgwg-net2cloud-gap-analysis-07

Abstract

   This document analyzes technical gaps in the IETF Routing area that
   may affect dynamically connecting enterprise premises to workloads
   and applications hosted in hybrid Cloud Data Centers.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on January 26, 2021.





Dunbar, et al.         Expires January 26, 2021                [Page 1]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
   2. Conventions used in this document..............................3
   3. Gap Analysis for Accessing Cloud Resources.....................4
      3.1. Multiple PEs connecting to virtual CPEs in Cloud DCs......6
      3.2. Access Control for workloads in the Cloud DCs.............6
      3.3. NAT Traversal.............................................7
      3.4. BGP between PEs and remote CPEs via Internet..............7
      3.5. Multicast traffic from/to the remote edges................8
   4. Gap Analysis of Traffic over Multiple Underlay Networks........9
   5. Aggregating VPN paths and Internet paths......................10
      5.1. Control Plane for Cloud Access via Heterogeneous Networks11
      5.2. Using BGP UPDATE Messages................................12
         5.2.1. Lack of ways to differentiate traffic in Cloud DCs..12
         5.2.2. Missing attributes in Tunnel-Encap...................12
      5.3. SECURE-EVPN/BGP-EDGE-DISCOVERY...........................12
      5.4. SECURE-L3VPN.............................................13
      5.5. Preventing attacks from Internet-facing ports............14
   6. Gap Summary...................................................14
   7. Manageability Considerations..................................15
   8. Security Considerations.......................................16
   9. IANA Considerations...........................................16
   10. References...................................................16
      10.1. Normative References....................................16
      10.2. Informative References..................................16
   11. Acknowledgments..............................................17




Dunbar, et al.         Expires January 26, 2021                [Page 2]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


1. Introduction

   [Net2Cloud-Problem] describes the problems enterprises face today
   when interconnecting their branch offices with dynamic workloads
   hosted in third party data centers (a.k.a. Cloud DCs). In
   particular, this document analyzes the available routing protocols
   to identify whether there are any gaps that may impede such
   interconnection and that may, for example, justify additional
   specification effort to define proper protocol extensions.

   For the sake of readability, the terms edge, C-PE, and CPE are used
   interchangeably throughout this document. More precisely:

     . Edge: may include multiple devices (virtual or physical);
     . C-PE: provider-owned edge, e.g. for SECURE-EVPN's PE-based
        BGP/MPLS VPN, where PE is the edge node;
     . CPE: device located in enterprise premises.



2. Conventions used in this document

   Cloud DC:   Third party Data Centers that usually host applications
               and workloads owned by different organizations or
               tenants.

   Controller: Used interchangeably with Overlay controller to manage
               overlay path creation/deletion and monitor the path
               conditions between sites.

   CPE-Based VPN: Virtual Private Network designed and deployed from
               CPEs. This is to differentiate it from the more commonly
               used PE-based VPNs as per RFC 4364.

   OnPrem:     On Premises data centers and branch offices








Dunbar, et al.         Expires January 26, 2021                [Page 3]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


3. Gap Analysis for Accessing Cloud Resources

   Because the Cloud DCs selected to host specific workloads/Apps can
   change over time, an enterprise or its network service provider may
   not have direct physical connections to the Cloud DCs that are
   optimal for hosting the enterprise's specific workloads/Apps. Under
   those circumstances, an overlay network design can be an option to
   interconnect the enterprise's on-premises data centers and branch
   offices to its desired Cloud DCs.

   However, overlay paths established over the public Internet can have
   unpredictable performance, especially over long distances.
   Therefore, it is highly desirable to minimize the distance or the
   number of segments over which traffic has to be forwarded across the
   public Internet.

   The Metro Ethernet Forum's Cloud Service Architecture [MEF-Cloud]
   also describes a use case of network operators using overlay paths
   over an LTE network or the public Internet for last-mile access,
   where the VPN service providers cannot always provide the required
   physical infrastructure.

   In some scenarios, some overlay edge nodes may not be directly
   attached to the PEs that participate in the delivery and operation
   of the enterprise's VPN.

   When using an overlay network to connect the enterprise's sites to
   the workloads hosted in Cloud DCs, the existing C-PEs at the
   enterprise's sites have to be upgraded to connect to the said
   overlay network.  If the workloads hosted in Cloud DCs need to be
   connected to many sites, the upgrade process can be very expensive.

   [Net2Cloud-Problem] describes a hybrid network approach that extends
   the existing MPLS-based VPNs to the Cloud DC Workloads over the
   access paths that are not under the VPN provider's control. To make
   it work properly, a small number of the PEs of the BGP/MPLS VPN can
   be designated to connect to the remote workloads via secure IPsec
   tunnels.  Those designated PEs are shown as fPE (floating PE or
   smart PE) in Figure 1. Once the secure IPsec tunnels are
   established, the workloads hosted in Cloud DCs can be reached by the
   enterprise's VPN without upgrading all of the enterprise's CPEs. The


Dunbar, et al.         Expires January 26, 2021                [Page 4]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


   only CPE that needs to connect to the overlay network would be a
   virtualized CPE instantiated within the cloud DC.


   +--------+                                             +--------+
   | Host-a +--+                                     +----| Host-b |
   |        |  |                                    (')   |        |
   +--------+  |           +-----------+           (   )  +--------+
               |  +-+--+  ++-+        ++-+  +--+-+  (_)
               |  | CPE|--|PE|        |PE+--+ CPE|   |
               +--|    |  |  |        |  |  |    |---+
                  +-+--+  ++-+        ++-+  +----+
                   /       |           |
                  /        |  MPLS   +-+---+    +--+-++--------+
          +------+-+       | Network |fPE-1|    |CPE || Host   |
          | Host   |       |         |     |- --|    ||   d    |
          |   c    |       +-----+   +-+---+    +--+-++--------+
          +--------+       |fPE-2|-----+
                           +---+-+    (|)
                              (|)     (|) Overlay
                              (|)     (|) over any access
                              +=\======+=========+
                             //   \    | Cloud DC \\
                            //      \ ++-----+       \\
                                      +      |
                                      | vCPE |
                                      +-+----+
                            ----+-------+-------+-----
                                |               |
                            +---+----+      +---+----+
                            | Remote |      | Remote |
                            | App-1  |      | App-2  |
                            +--------+      +--------+

                    Figure 1: VPN Extension to Cloud DC

   In Figure 1, the optimal Cloud DC to host the workloads (as a
   function of the proximity, capacity, pricing, or any other criteria
   chosen by the enterprise) does not have a direct connection to the
   PEs of the BGP/MPLS VPN that interconnects the enterprise's sites.



Dunbar, et al.         Expires January 26, 2021                [Page 5]


Internet-Draft          Net2Cloud Gap Analysis                July 2020



3.1. Multiple PEs connecting to virtual CPEs in Cloud DCs

   To extend BGP/MPLS VPNs to virtual CPEs in Cloud DCs, it is
   necessary to establish secure tunnels (such as IPsec tunnels)
   between the PEs and the vCPEs.

   Even though a set of PEs can be manually selected for a specific
   cloud data center, there are no standard protocols for those PEs to
   interact with the vCPEs instantiated in third party cloud data
   centers over unsecured networks. The interaction includes exchanging
   performance information, route information, etc.

   When there is more than one PE available for use (as there should be
   for resiliency purposes or because of the need to support multiple
   geographically scattered cloud DCs), it is not straightforward to
   designate an egress PE towards remote vCPEs on a per-application
   basis.  It might not be possible for PEs to recognize all
   applications because of the amount of traffic traversing the PEs.

   When there are multiple floating PEs that have established IPsec
   tunnels with a remote CPE, the remote CPE can forward outbound
   traffic to the optimal PE, which in turn forwards traffic to egress
   PEs to reach the final destinations. However, it is not
   straightforward for the ingress PE to select the egress PEs to which
   traffic should be sent. For example, in Figure 1:

     - fPE-1 is the optimal PE for communication between App-1 <->
     Host-a due to latency, pricing or other criteria.

     - fPE-2 is the optimal PE for communication between App-1 <->
     Host-b.


3.2. Access Control for workloads in the Cloud DCs

   Access policies for Cloud Resources are widely dispersed, and some
   of them are not easy to verify and validate. Because there are
   multiple parties involved in accessing Cloud Resources, policy
   enforcement points are not easily visible for policy refinement,
   monitoring, and testing.



Dunbar, et al.         Expires January 26, 2021                [Page 6]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


   The current state of the art for specifying access policies for
   Cloud Resources could be improved by having automated and reliable
   tools to map the user-friendly (natural language) rules into
   machine-readable policies and to provide interfaces for enterprises
   to self-manage policy enforcement points for their own workloads.

3.3. NAT Traversal

   In Cloud DCs that only assign private IPv4 addresses to the
   instantiated workloads, traffic to/from the workloads usually needs
   to traverse NATs.

   There is no automatic way for an enterprise's network controller to
   be informed of the NAT properties for its workloads in Cloud DCs.

   One potential solution could be utilizing the messages sent during
   the initialization of an IKE VPN when the NAT Traversal option is
   enabled.  There are some inherent problems when sending IPsec
   packets through NAT devices. One way to overcome these problems is
   to encapsulate IPsec packets in UDP. To do this effectively, there
   is a discovery phase in IKE (Phase 1) that tries to determine if
   either of the IPsec gateways is behind a NAT device. If a NAT device
   is found, IPsec-over-UDP is proposed during the IPsec (Phase 2)
   negotiation. If no NAT device is detected, plain IPsec is used.

   Another potential solution could be allowing the virtual CPE in
   Cloud DCs to solicit a STUN (Simple Traversal of UDP Through Network
   Address Translators, [RFC3489]) server to obtain information about
   the NAT properties, i.e., the public IP addresses and port numbers,
   so that such information can be communicated to the relevant peers.
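
   For illustration, the following Python sketch (standard library
   only) shows how a virtual CPE could query a STUN server to learn the
   public IP address and port that a NAT has allocated for one of its
   UDP ports.  It is a minimal sketch, not a complete NAT-type
   discovery procedure; the server name is a placeholder, and the
   framing follows RFC 5389 (the successor of [RFC3489]).

      import os
      import socket
      import struct

      MAGIC_COOKIE = 0x2112A442

      def discover_public_mapping(stun_server, stun_port=3478):
          # Build a STUN Binding Request (RFC 5389 framing).
          txn_id = os.urandom(12)
          request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(3)
          sock.sendto(request, (stun_server, stun_port))
          data, _ = sock.recvfrom(2048)
          sock.close()

          # Walk the attributes of the Binding response and extract the
          # XOR-MAPPED-ADDRESS, i.e. the public IP/port as seen by the
          # server.
          pos = 20
          while pos + 4 <= len(data):
              a_type, a_len = struct.unpack("!HH", data[pos:pos + 4])
              value = data[pos + 4:pos + 4 + a_len]
              if a_type == 0x0020 and len(value) >= 8:
                  port = struct.unpack("!H", value[2:4])[0] \
                         ^ (MAGIC_COOKIE >> 16)
                  addr = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
                  return socket.inet_ntoa(struct.pack("!I", addr)), port
              pos += 4 + a_len + ((4 - a_len % 4) % 4)  # 32-bit aligned
          return None

      # Example (placeholder server name):
      # print(discover_public_mapping("stun.example.net"))

   The learned public address/port (and how it differs from the local
   binding) is the kind of NAT property that would need to be reported
   to the enterprise's network controller or to the relevant peers.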

3.4. BGP between PEs and remote CPEs via Internet

   Even though an EBGP (external BGP) Multi-Hop design can be used to
   connect peers that are not directly connected to each other, there
   are still some issues when extending BGP from MPLS VPN PEs to remote
   CPEs in cloud DCs via any access path (e.g., the Internet).

   The path between the remote CPEs and VPN PEs that maintain VPN
   routes can traverse untrusted segments.



Dunbar, et al.         Expires January 26, 2021                [Page 7]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


   An EBGP Multi-hop design requires configuration on both peers,
   either manually or via NETCONF from a controller. To use EBGP
   between a PE and remote CPEs, the PE has to be manually configured
   with the "next-hop" set to the IP address of the CPEs. When remote
   CPEs, especially remote virtualized CPEs, are dynamically
   instantiated or removed, the configuration of Multi-Hop EBGP on the
   PE has to be changed accordingly.

     Egress peering engineering (EPE) is not sufficient. Running BGP on
     virtualized CPEs in Cloud DCs requires GRE tunnels to be
     established first, which requires the remote CPEs to support
     address and key management capabilities. RFC 7024 (Virtual Hub &
     Spoke) and Hierarchical VPN do not support the required
     properties.

      Also, there is a need for a mechanism to automatically trigger
      configuration changes on PEs when remote CPEs are instantiated,
      moved (leading to an IP address change), or deleted.

     EBGP Multi-hop design does not include a security mechanism by
     default. The PE and remote CPEs need secure communication channels
     when connecting via the public Internet.

   Remote CPEs, if instantiated in Cloud DCs, might have to traverse
   NATs to reach PEs. It is not clear how BGP can be used between a
   device located outside a NAT and a device located behind the NAT,
   nor how to configure the Next Hop on the PEs to reach private IPv4
   addresses.
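
   To illustrate the configuration churn described above, the Python
   sketch below models the per-neighbor state that a controller would
   have to push to a PE, and re-push whenever a remote vCPE is
   instantiated, changes its IP address, or is deleted. The field names
   are hypothetical; they do not correspond to any vendor CLI or YANG
   model.

      from dataclasses import dataclass
      from typing import Dict

      @dataclass
      class MultiHopNeighbor:
          """Hypothetical per-neighbor state a controller pushes to a
          PE for an EBGP Multi-Hop session with a remote (v)CPE."""
          cpe_name: str              # remote (virtual) CPE identity
          neighbor_ip: str           # address the PE must peer with
          remote_as: int
          ebgp_multihop_ttl: int = 255
          ipsec_tunnel_id: str = ""  # secure channel for the session

      class PeBgpConfig:
          """Tracks the multi-hop neighbors provisioned on one PE."""
          def __init__(self, pe_name):
              self.pe_name = pe_name
              self.neighbors: Dict[str, MultiHopNeighbor] = {}

          def cpe_instantiated(self, nbr):
              # A new vCPE appeared in a Cloud DC: reconfigure the PE.
              self.neighbors[nbr.cpe_name] = nbr

          def cpe_moved(self, cpe_name, new_ip):
              # The vCPE got a new address: the neighbor entry changes.
              self.neighbors[cpe_name].neighbor_ip = new_ip

          def cpe_deleted(self, cpe_name):
              self.neighbors.pop(cpe_name, None)

   The gap is that no standard mechanism triggers these updates
   automatically; today they rely on manual configuration or
   controller-specific provisioning.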




3.5. Multicast traffic from/to the remote edges

   Among the multiple floating PEs that are reachable from a remote CPE
   in a Cloud DC, multicast traffic sent by the remote CPE towards the
   MPLS VPN can be forwarded back to that same CPE, because the PE that
   receives the multicast packets forwards the multicast/broadcast
   frames to the other PEs, which in turn send them to all attached
   CPEs. This process may cause traffic loops.





Dunbar, et al.         Expires January 26, 2021                [Page 8]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


   This problem can be solved by selecting one floating PE as the CPE's
   Designated Forwarder, similar to TRILL's Appointed Forwarders
   [RFC6325].

   BGP/MPLS VPNs do not have features like TRILL's Appointed
   Forwarders.
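
   A deterministic selection of such a Designated Forwarder among the
   floating PEs could, for example, be derived from a hash of the CPE
   identity, similar in spirit to TRILL's Appointed Forwarders
   [RFC6325]. The Python sketch below is illustrative only; no such
   procedure is currently specified for BGP/MPLS VPNs.

      import hashlib

      def designated_forwarder(cpe_id, candidate_pes):
          """Pick one floating PE as Designated Forwarder for a CPE.

          All PEs running the same function over the same candidate
          list reach the same answer, so only the elected PE forwards
          multicast/broadcast frames towards that CPE.
          """
          ordered = sorted(candidate_pes)
          digest = hashlib.sha256(cpe_id.encode()).digest()
          return ordered[int.from_bytes(digest[:4], "big") % len(ordered)]

      # Example: elect a DF for the vCPE of Figure 1 among floating PEs.
      # designated_forwarder("vCPE-cloud-dc-1", ["fPE-1", "fPE-2"])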



4. Gap Analysis of Traffic over Multiple Underlay Networks

   Very often, hybrid Cloud DCs are interconnected by multiple types of
   underlay networks, such as VPNs, the public Internet, and wireless
   and wired infrastructures. Sometimes the enterprises' VPN providers
   do not have direct access to the Cloud DCs that host some specific
   applications or workloads operated by the enterprise.

   When a virtual CPE is reached over an untrusted network, all
   sensitive data to/from this virtual CPE has to be encrypted, usually
   by means of IPsec tunnels. When it is reached over trusted direct
   connect paths, sensitive data can be forwarded without encryption
   for better performance.

   If a virtual CPE in a Cloud DC can be reached by both trusted and
   untrusted paths, better performance can be achieved by mixing
   encrypted and unencrypted traffic, depending on which path the
   traffic is forwarded over. However, there is no appropriate control
   plane protocol to achieve this automatically.
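
   The decision that currently has to be made by static configuration
   can be expressed as a simple per-destination policy: forward
   natively over a trusted path when one reaches the destination,
   otherwise encrypt and forward over an untrusted path. The Python
   sketch below (illustrative names only) captures that policy; the gap
   is that no control plane protocol lets the virtual CPE and its
   controller learn and distribute the "trusted" property of each
   underlay automatically.

      def select_underlay(dest_prefix, underlays):
          """Pick an underlay for a destination and decide whether the
          traffic must be encrypted.

          Each underlay is described as a dict, for example:
            {"name": "MPLS-VPN", "trusted": True,
             "reaches": {"10.1.0.0/16"}}
            {"name": "Internet", "trusted": False, "reaches": set()}
          Matching is simplified to exact-prefix lookup.
          """
          for u in underlays:
              if u["trusted"] and dest_prefix in u["reaches"]:
                  return u["name"], False   # trusted path: no encryption
          for u in underlays:
              if not u["trusted"]:
                  return u["name"], True    # untrusted path: IPsec needed
          return None, True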

   Some networks automate the establishment of IPsec tunnels by using a
   modified version of the NHRP protocol [RFC2332] to register the
   network-facing ports of the edge nodes with their controller (or
   NHRP server), which then maps a private VPN address to a public IP
   address of the destination node/port. DSVPN [DSVPN] or DMVPN [DMVPN]
   is used to establish tunnels between the WAN ports of SDWAN edge
   nodes.

   NHRP was originally intended for ATM address resolution, and as a
   result, it lacks many attributes that are necessary for dynamic
   virtual C-PE registration with the controller (a data-model sketch
   of these attributes follows the list), such as:

   - Interworking with the MPLS VPN control plane. An overlay edge can
     have some ports facing the MPLS VPN network over which packets can
     be forwarded without any encryption and some ports facing the


Dunbar, et al.         Expires January 26, 2021                [Page 9]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


     public Internet over which sensitive traffic needs to be
     encrypted.
   - Scalability: NHRP/DSVPN/DMVPN work fine with small numbers of edge
     nodes. When a network has more than 100 nodes, these protocols do
     not scale well.
   - NHRP does not have the IPsec attributes, which are needed for
     peers to build Security Associations over the public Internet.
   - NHRP messages do not have any field to encode the C-PE supported
     encapsulation types, such as IPsec-GRE or IPsec-VxLAN.
   - NHRP messages do not have any field to encode C-PE Location
     identifiers, such as Site Identifier, System ID, and/or Port ID.
   - NHRP messages do not have any field to describe the gateway(s) to
     which the C-PE is attached. When a C-PE is instantiated in a Cloud
     DC, it is desirable for the C-PE's owner to be informed about how
     and where the C-PE is attached.
   - NHRP messages do not have any field to describe C-PE's NAT
     properties if the C-PE is using private IPv4 addresses, such as
     the NAT type, Private address, Public address, Private port,
     Public port, etc.
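
   As a data-model illustration only, the following Python sketch
   groups the attributes listed above into a single registration record
   that a dynamically instantiated C-PE would need to convey to its
   controller. The names are hypothetical and do not correspond to
   NHRP, DSVPN/DMVPN, or any existing registration protocol.

      from dataclasses import dataclass
      from typing import List, Optional

      @dataclass
      class NatProperties:
          nat_type: str             # e.g. "full-cone", "symmetric"
          private_address: str
          private_port: int
          public_address: str
          public_port: int

      @dataclass
      class CpeRegistration:
          """Attributes a dynamically instantiated C-PE reports to
          its controller."""
          site_id: str
          system_id: str
          port_id: str
          attached_gateways: List[str]     # how/where the C-PE attaches
          encap_types: List[str]           # e.g. IPsec-GRE, IPsec-VxLAN
          vpn_facing_ports: List[str]      # forwarded without encryption
          internet_facing_ports: List[str] # must be IPsec-protected
          ipsec_attributes: Optional[dict] = None  # SA parameters
          nat: Optional[NatProperties] = None      # only if behind a NAT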


5. Aggregating VPN paths and Internet paths

   Most likely, enterprises (especially the largest ones) already have
   their C-PEs interconnected by VPNs, based upon VPN techniques like
   EVPN, L2VPN, or L3VPN. Their VPN providers might have direct
   paths/links to the Cloud DCs that host their workloads and
   applications.

   When a short-term high traffic volume cannot justify increasing the
   VPN's capacity, enterprises can use the public Internet to reach
   their Cloud vCPEs.  It is then necessary for the vCPEs to
   communicate with the controller about how traffic is distributed
   among multiple heterogeneous underlay networks and to manage secure
   tunnels over untrusted networks.









Dunbar, et al.         Expires January 26, 2021               [Page 10]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


                                       +---+
                        +--------------|RR |----------+
                       /  Untrusted    +-+-+           \
                      /                            +----------------------
                     /                             |     \     Cloud DC
     +----+  +---------+  packets encrypted over   | +------+  +----+
     | TN3|--|         A1-----+ Untrusted    +-------+      |--| TN1|
     +----+  | C-PE    A2-\                        | | vCPE |  +----+
     +----+  |  A      A3--+--+              +---+ | |  B   |  +----+
     | TN2|--|         |   |PE+--------------+PE |---+      |--| TN3|
     +----+  +---------+   +--+   trusted    +---+ | +------+  +----+
                              |      WAN     |     |
     +----+  +---------+   +--+   packets    +---+ | +------+  +----+
     | TN1|--|         C1--|PE| go natively  |PE |---+      |--| TN1|
     +----+  | C-PE    C2--+--+ without encry+---+ | | vCPE |  +----+
             |  C      |      +--------------+     | |  D   |
             |         |                           | |      |
     +----+  |         C3--|  without encrypt over | |      |  +----+
     | TN2|--|         C4--+---- Untrusted  --+------+      |--| TN2|
     +----+  +---------+                           | +------+  +----+
                                                   |
                                                   +------------------------
                  Figure 2: vCPEs reached by Hybrid Paths


 5.1. Control Plane for Cloud Access via Heterogeneous Networks

   The Control Plane for managing applications and workloads in Cloud
   DCs reachable by heterogeneous networks needs to include the
   following properties:

      - A vCPE in a Cloud DC needs to communicate to its controller the
         properties of the directly connected underlay networks.

      - Controller-facilitated distribution of IPsec SA attributes and
         NAT information is needed.
           o The controller facilitates and manages the peer
              authentication for all IPsec tunnels terminated at the
              vCPEs.

     - Establishing and Managing the topology and reachability for
        services attached to the vCPEs in Cloud DCs.
          o This is for the overlay layer's route distribution, so
             that a vCPE can populate its overlay routing table with



Dunbar, et al.         Expires January 26, 2021               [Page 11]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


              entries that identify the next hop for reaching a
              specific route/service attached to the vCPEs (an
              illustrative entry format is sketched below).
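
   For the third property, the overlay routing entries that the
   controller distributes would need to bind each route/service to a
   next hop and to the underlay over which that next hop is reached, so
   that the vCPE knows whether the traffic must be protected. The
   following Python sketch is a minimal, hypothetical illustration of
   such an entry; it is not an existing protocol encoding.

      from dataclasses import dataclass

      @dataclass
      class OverlayRoute:
          prefix: str            # route/service reachable via the vCPE
          next_hop: str          # overlay next hop (another vCPE/C-PE)
          underlay: str          # e.g. "MPLS-VPN" or "Internet"
          encrypt: bool          # True when the underlay is untrusted
          ipsec_sa_id: str = ""  # SA protecting the path, if any

      # Example entries a controller might push to vCPE B in Figure 2:
      # OverlayRoute("10.10.1.0/24", "C-PE-A", "Internet", True, "SA-17")
      # OverlayRoute("10.20.2.0/24", "C-PE-C", "MPLS-VPN", False)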


 5.2. Using BGP UPDATE Messages

 5.2.1. Lack of ways to differentiate traffic in Cloud DCs

   One enterprise can have different types of applications in one Cloud
   DC. Some can be production applications, some can be testing
   applications, and some can belong to specific departments. The
   traffic to/from different applications might need to traverse
   different network paths or needs to be differentiated by the control
   plane and the data plane.

   BGP already has built-in mechanisms, like the Route Target, to
   differentiate VPNs. But the Route Target (RT) was designed for MPLS-
   based VPNs; therefore, an RT cannot be directly applied to virtual
   paths laid over a mix of VPN, IPsec, or public underlay networks.
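
   For reference, a Route Target is encoded as an 8-octet BGP extended
   community, as in the short Python sketch below. The point of the gap
   is not the encoding itself: an RT identifies a VPN, but it carries
   no indication of which underlay (MPLS VPN, IPsec over the Internet,
   etc.) a given overlay path uses.

      import struct

      def encode_route_target(asn, local_admin):
          """Encode a 2-octet-AS Route Target extended community.

          Type 0x00 (transitive, 2-octet AS specific), Sub-Type 0x02
          (Route Target), then the AS number and the local
          administrator value, for a total of 8 octets.
          """
          return struct.pack("!BBHI", 0x00, 0x02, asn, local_admin)

      # Example: RT 64512:100 identifies a VPN but says nothing about
      # whether the path to it runs over MPLS, IPsec, or the Internet.
      # encode_route_target(64512, 100)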

 5.2.2. Missing attributes in Tunnel-Encap

   [Tunnel-Encap] describes the BGP Tunnel Encapsulation attribute,
   which advertises an endpoint's tunnel encapsulation capabilities for
   the attached client routes encoded in the MP-NLRI Path Attribute of
   the same BGP UPDATE. The receivers of the BGP UPDATE can use any of
   the supported encapsulations encoded in the Tunnel Encapsulation
   attribute for those routes.

   Here are some of the issues raised by using [Tunnel-Encap] to
   indicate that client routes can be carried over a mix of hybrid
   networks (an encoding sketch follows the list):

   - [Tunnel-Encap] doesn't have encoding methods to advertise that a
     route can be carried by a mix of IPsec tunnels and other already
     supported tunnel types.
   - The mechanism defined in [Tunnel-Encap] does not facilitate the
     exchange of IPsec SA-specific attributes.
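
   The attribute is structured as a list of Tunnel TLVs, each
   identified by a 2-octet tunnel type and carrying its own sub-TLVs.
   The minimal Python sketch below (illustrative only; sub-TLV contents
   omitted) shows that an endpoint can advertise several tunnel types
   for the same routes, but there is no defined sub-TLV set to carry
   IPsec SA attributes, nor a way to state that a route is carried over
   a mix of IPsec and non-IPsec tunnels.

      import struct

      def tunnel_tlv(tunnel_type, sub_tlvs=b""):
          # Tunnel TLV: 2-octet tunnel type, 2-octet length, sub-TLVs.
          return struct.pack("!HH", tunnel_type, len(sub_tlvs)) + sub_tlvs

      GRE, VXLAN = 2, 8   # tunnel type codepoints used for illustration

      # An egress endpoint can advertise both GRE and VXLAN for its
      # routes ...
      attr_value = tunnel_tlv(GRE) + tunnel_tlv(VXLAN)

      # ... but there is no standard sub-TLV to convey IPsec SA
      # attributes, and no way to express "these routes may be carried
      # over a mix of IPsec tunnels and the tunnel types above".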


5.3. SECURE-EVPN/BGP-EDGE-DISCOVERY

   [SECURE-EVPN] describes a solution that utilizes BGP as the control
   plane for Scenario #1 described in [BGP-SDWAN-Usage]. It relies upon a


Dunbar, et al.         Expires January 26, 2021               [Page 12]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


   BGP cluster design to facilitate the key and policy exchange among
   PE devices to create private pair-wise IPsec Security Associations.
   [Secure-EVPN] attaches all the IPsec SA information to the actual
   client routes.

   [BGP-Edge-DISCOVERY] proposes that BGP UPDATEs for client routes
   only include IPsec SA identifiers (IDs) that reference the IPsec SA
   attributes advertised in separate Underlay Property BGP UPDATE
   messages. If a client route can be protected by multiple IPsec SAs,
   then multiple IPsec SA IDs are included in the Tunnel-Encap Path
   attribute for that client route.

   [BGP-Edge-DISCOVERY] proposes that the detailed IPsec SA attributes
   be advertised in separate BGP UPDATEs for the underlay networks.

   [Secure-EVPN] and [BGP-Edge-Discovery] differ in the information
   included with the client routes. [Secure-EVPN] attaches all the
   IPsec SA information to the actual client routes, whereas [BGP-Edge-
   Discovery] only includes the IPsec SA IDs with the client routes.
   The IPsec SA IDs used by [BGP-Edge-Discovery] point to the SA
   information that is advertised separately, attached to routes that
   describe the SDWAN underlay, such as WAN ports or node addresses.
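
   The difference in where the IPsec information is placed can be
   pictured as follows. The Python fragment is purely illustrative;
   these are not the drafts' actual encodings, and the values are
   placeholders.

      # [Secure-EVPN] style: each client route carries the full SA
      # information.
      secure_evpn_route = {
          "prefix": "10.1.1.0/24",
          "ipsec_sa": {"peer": "192.0.2.1",
                       "cipher": "AES-GCM-256", "spi": 4321},
      }

      # [BGP-Edge-Discovery] style: client routes carry only SA IDs;
      # the SA details are advertised separately with the underlay
      # (WAN port or node address) routes.
      edge_discovery_client_route = {
          "prefix": "10.1.1.0/24",
          "ipsec_sa_ids": ["SA-17", "SA-18"],
      }
      edge_discovery_underlay_route = {
          "wan_port": "198.51.100.2",
          "sa_info": {"SA-17": {"cipher": "AES-GCM-256", "spi": 4321},
                      "SA-18": {"cipher": "AES-GCM-256", "spi": 8765}},
      }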

5.4. SECURE-L3VPN

   [SECURE-L3VPN] describes a method to enrich BGP/MPLS VPN [RFC4364]
   capabilities to allow some PEs to connect to other PEs via public
   networks. [SECURE-L3VPN] introduces the concept of Red Interfaces
   and Black Interfaces on PEs: the Red Interfaces are used to forward
   traffic into the VPN, while the Black Interfaces are WAN ports
   through which only IPsec-protected packets are forwarded to the
   Internet or to any other backbone network, thereby eliminating the
   need for MPLS transport in the backbone.

   [SECURE-L3VPN] assumes PEs use MPLS over IPsec when sending traffic
   through the Black Interfaces.

   [SECURE-L3VPN] is useful, but it misses the aspects of aggregating
   VPN and Internet underlays. In addition:

   - The [SECURE-L3VPN] assumes that a CPE "registers" with the RR.
     However, it does not say how. It assumes that the remote CPEs are
     pre-configured with the IPsec SA manually. For overlay networks to
     connect Hybrid Cloud DCs, Zero Touch Provisioning is expected.
     Manual configuration is not an option.


Dunbar, et al.         Expires January 26, 2021               [Page 13]


Internet-Draft          Net2Cloud Gap Analysis                July 2020




   - [SECURE-L3VPN] assumes that C-PEs and RRs are connected via an
      IPsec tunnel. For a management channel, TLS/DTLS is more
      economical than IPsec. The following assumption made by [SECURE-
      L3VPN] can be difficult to meet in environments where zero touch
      provisioning is expected:
           A CPE must also be provisioned with whatever additional
           information is needed in order to set up an IPsec SA with
           each of the red RRs.

   - IPsec requires periodic refreshment of the keys. The [SECURE-
     L3VPN] does not provide any information about how to synchronize
     the refreshment among multiple nodes.
   - IPsec usually sends configuration parameters to two endpoints only
     and lets these endpoints negotiate the key. The [SECURE-L3VPN]
     assumes that the RR is responsible for creating/managing the key
     for all endpoints. When one endpoint is compromised, all other
     connections may be impacted.


5.5. Preventing attacks from Internet-facing ports

   When C-PEs have Internet-facing ports, additional security risks are
   raised.

   To mitigate security risks, in addition to requiring Anti-DDoS
   features on C-PEs, it is necessary for C-PEs to support means to
   determine whether traffic sent by remote peers is legitimate to
   prevent spoofing attacks, in particular.



6. Gap Summary

   Here is the summary of the technical gaps discussed in this
   document:

   - For Accessing Cloud Resources

       a) Traffic Path Management: when a remote vCPE can be reached by
          multiple PEs of one provider VPN network, it is not



Dunbar, et al.         Expires January 26, 2021               [Page 14]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


           straightforward to designate the egress PE towards the
           remote vCPE based on applications or performance.
       b) NAT Traversal: There is no automatic way for an enterprise's
          network controller to be informed of the NAT properties for
          its workloads in Cloud DCs.
        c) There is no loop prevention for the multicast traffic
           to/from remote vCPEs in Cloud DCs. A feature like TRILL's
           Appointed Forwarders is needed to prevent multicast data
           frames from looping.

       d) BGP between PEs and remote CPEs via untrusted networks.

   - A control plane to manage the propagation of the properties of the
     networks connected to the virtual nodes in Cloud DCs is missing.

      BGP UPDATE messages propagate clients' route information, but
      they do not distinguish underlay networks.

   - Issues of aggregating traffic over private paths and Internet
   paths

        a) Control plane messages for different overlay segmentations
           need to be differentiated. User traffic belonging to
           different segmentations needs to be differentiated as well.
        b) The BGP Tunnel Encapsulation attribute has no way to
           indicate that a route or prefix can be carried by both IPsec
           tunnels and VPN tunnels.
        c) Clear methods for preventing attacks from Internet-facing
           ports are missing.

7. Manageability Considerations

     Zero touch provisioning of overlay networks to interconnect Hybrid
     Clouds is highly desired. It is necessary for a newly powered up
     edge node to establish a secure connection (by means of TLS, DTLS,
     etc.) with its controller.







Dunbar, et al.         Expires January 26, 2021               [Page 15]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


8. Security Considerations

     Cloud Services are built upon shared infrastructures and are
     therefore not secure by nature.

     Secure user identity management, authentication, and access
     control mechanisms are important. Developing appropriate security
     measures can enhance the confidence needed by enterprises to fully
     take advantage of Cloud Services.



9. IANA Considerations

   This document requires no IANA actions. RFC Editor: Please remove
   this section before publication.

10. References


10.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

   [RFC8192] S. Hares, et al, "Interface to Network Security Functions
             (I2NSF) Problem Statement and Use Cases", July 2017

   [RFC2332] J. Luciani, et al, "NBMA Next Hop Resolution Protocol
              (NHRP)", April 1998.

   [RFC3489] J. Rosenberg, et al, "STUN - Simple Traversal of User
              Datagram Protocol (UDP) Through Network Address
              Translators (NATs)", March 2003.

   [RFC4364] E. Rosen and Y. Rekhter, "BGP/MPLS IP Virtual Private
              Networks (VPNs)", February 2006.

   [RFC5512] P. Mohapatra, E. Rosen, "The BGP Encapsulation Subsequent
              Address Family Identifier (SAFI) and the BGP Tunnel
              Encapsulation Attribute", April 2009.

   [RFC6325] R. Perlman, et al, "Routing Bridges (RBridges): Base
              Protocol Specification", July 2011.

   [BGP-EDGE-DISCOVERY] L. Dunbar, et al, "BGP UPDATE for SDWAN Edge
             Discovery ", draft-dunbar-idr-sdwan-edge-discovery-00,
             Work-in-progress, July 2020.

   [BGP-SDWAN-Usage] L. Dunbar, et al, "BGP Usage for SDWAN Overlay
             Networks ", draft-dunbar-bess-bgp-sdwan-usage-08, work-in-
             progress, July 2020.


Dunbar, et al.         Expires January 26, 2021               [Page 16]


Internet-Draft          Net2Cloud Gap Analysis                July 2020


   [Tunnel-Encap] K. Patel, et al, "The BGP Tunnel Encapsulation
             Attribute", draft-ietf-idr-tunnel-encaps-17, July 2020.

   [SECURE-EVPN] A. Sajassi, et al, draft-sajassi-bess-secure-evpn-01,
             work in progress, March 2019.

   [SECURE-L3VPN] E. Rosen, "Provide Secure Layer L3VPNs over Public
             Infrastructure", draft-rosen-bess-secure-l3vpn-00, work-
             in-progress, July 2018

   [DMVPN] Dynamic Multi-point VPN:
             https://www.cisco.com/c/en/us/products/security/dynamic-
             multipoint-vpn-dmvpn/index.html

   [DSVPN] Dynamic Smart VPN:
             http://forum.huawei.com/enterprise/en/thread-390771-1-
             1.html



   [ITU-T-X1036] ITU-T Recommendation X.1036, "Framework for creation,
             storage, distribution and enforcement of policies for
             network security", Nov 2007.

    [Net2Cloud-Problem] L. Dunbar and A. Malis, "Seamless Interconnect
             Underlay to Cloud Overlay Problem Statement", draft-dm-
             net2cloud-problem-statement-02, June 2018



11. Acknowledgments

   Acknowledgements to John Drake for his review and contributions.
   Many thanks to John Scudder for stimulating the clarification
   discussion on the Tunnel-Encap draft so that our gap analysis can be
   more accurate.

   This document was prepared using 2-Word-v2.0.template.dot.





Dunbar, et al.         Expires January 26, 2021               [Page 17]


Internet-Draft          Net2Cloud Gap Analysis                July 2020



Authors' Addresses


   Linda Dunbar
   Futurewei
   Email: ldunbar@futurewei.com

   Andrew G. Malis
   Malis Consulting
   Email: agmalis@gmail.com

   Christian Jacquenet
   Orange
   Rennes, 35000
   France
   Email: Christian.jacquenet@orange.com






























Dunbar, et al.         Expires January 26, 2021               [Page 18]