Framework for Abstraction and Control of Traffic Engineered Networks
draft-ietf-teas-actn-framework-08

This is an older version of an Internet-Draft that was ultimately
published as RFC 8453.

Authors:               Daniele Ceccarelli, Young Lee
Last updated:          2017-10-04
Replaces:              draft-ceccarelli-teas-actn-framework
RFC stream:            Internet Engineering Task Force (IETF)
Reviews:               GENART Last Call review (of -13) by Peter Yee:
                       Ready w/issues
Additional resources:  Mailing list discussion
WG state:              WG Document
Document shepherd:     (None)
IESG state:            Became RFC 8453 (Informational)
Consensus boilerplate: Unknown
Telechat date:         (None)
Responsible AD:        (None)
Send notices to:       (None)
Internet-Draft              ACTN Framework                 October 2017

   As the Customer Network Controller directly interfaces to the
   applications, it understands multiple application requirements and
   their service needs.

3.2. Multi-Domain Service Coordinator

   A Multi-Domain Service Coordinator (MDSC) is a functional block that
   implements all of the ACTN functions listed in Section 3 and
   described further in Section 4.2.  The MDSC sits at the center of
   the ACTN model between the CNC that issues connectivity requests and
   the Physical Network Controllers (PNCs) that manage the physical
   network resources.

   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology so that the customer can express the network as required
   by its business needs.  The MDSC envelops the instantiation of the
   right technology and network control to meet business criteria.  In
   essence, it controls and manages the primitives to achieve the
   functionalities desired by the CNC.

   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and PNCs.  In addition, it could also
   be possible to have an M:1 relationship between MDSCs and a PNC to
   allow for network resource partitioning/sharing among different
   customers that are not necessarily connected to the same MDSC
   (e.g., different service providers) but that all use the resources
   of a common network infrastructure provider.

3.3. Physical Network Controller

   The Physical Network Controller (PNC) is responsible for configuring
   the network elements, monitoring the topology (physical or virtual)
   of the network, and collecting information about the topology
   (either raw or abstracted).  The PNC functions can be implemented as
   part of an SDN domain controller, a Network Management System (NMS),
   an Element Management System (EMS), an active PCE-based controller
   [Centralized], or by any other means of dynamically controlling a
   set of nodes that implements an NBI compliant with the ACTN
   specification.
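The 1:N relationship between an MDSC and its PNCs described above might be sketched as follows. This is a minimal illustration only; all class and method names are hypothetical and not part of any ACTN specification.

```python
from dataclasses import dataclass, field

@dataclass
class PNC:
    """Controls the resources of a single network domain."""
    domain: str

    def provision(self, src: str, dst: str) -> str:
        # A real PNC would program network elements over its SBI;
        # here we only record the request for illustration.
        return f"{self.domain}:{src}->{dst}"

@dataclass
class MDSC:
    """Coordinates connectivity requests across one or more PNC domains."""
    pncs: list = field(default_factory=list)   # 1:N MDSC-to-PNC relationship

    def request_vns(self, segments):
        # Each segment (domain, src, dst) is delegated to the owning PNC.
        by_domain = {p.domain: p for p in self.pncs}
        return [by_domain[d].provision(s, t) for d, s, t in segments]

mdsc = MDSC(pncs=[PNC("X"), PNC("Y")])
print(mdsc.request_vns([("X", "PE1", "BrdrX.1"), ("Y", "BrdrY.1", "PE2")]))
```

An M:1 MDSC-to-PNC relationship would simply mean two `MDSC` instances sharing a reference to the same `PNC` object.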
   A PNC domain includes all the resources under the control of a
   single PNC.  It can be composed of different routing domains and
   administrative domains, and the resources may come from different
   layers.  The interconnection between PNC domains is illustrated in
   Figure 3.

Ceccarelli, Lee, et al.   Expires April 3, 2018                [Page 13]

                _______                          _______
              _(       )_                      _(       )_
            _(           )_                  _(           )_
           (               )    Border      (               )
          (  PNC    ------  )    Link      (  ------    PNC  )
          ( Domain X|Border|================|Border| Domain Y )
          (         | Node |                | Node |          )
          (          ------                  ------           )
           (_               _)              (_               _)
             (_           _)                  (_           _)
               (_______)                        (_______)

                      Figure 3: PNC Domain Borders

3.4. ACTN Interfaces

   Direct customer control of transport network elements and
   virtualized services is not a viable proposition for network
   providers due to security and policy concerns.  In addition, some
   networks may operate a control plane, and as such it is not
   practical for the customer to interface directly with the network
   elements.  Therefore, the network has to provide open, programmable
   interfaces, through which customer applications can create,
   replace, and modify virtual network resources and services in an
   interactive, flexible, and dynamic fashion while having no impact
   on other customers.  Three interfaces exist in the ACTN
   architecture, as shown in Figure 2.

   .  CMI: The CNC-MDSC Interface (CMI) is an interface between a CNC
      and an MDSC.  The CMI is a business boundary between customer
      and network provider.  It is used to request a VNS for an
      application.  All service-related information is conveyed over
      this interface (such as the VNS type, topology, bandwidth, and
      service constraints).  Most of the information over this
      interface is technology agnostic (the customer is unaware of the
      network technologies used to deliver the service), but there are
      some cases (e.g., access link configuration) where it is
      necessary to specify technology-specific details.

   .  MPI: The MDSC-PNC Interface (MPI) is an interface between an
      MDSC and a PNC.
      It communicates requests for new connectivity or for bandwidth
      changes in the physical network.  In multi-domain environments,
      the MDSC needs to communicate with multiple PNCs, each
      responsible for control of a domain.  The MPI presents an
      abstracted topology to the MDSC, hiding technology-specific
      aspects of the network and hiding topology according to policy.

   .  SBI: The Southbound Interface (SBI) is out of scope of ACTN.
      Many different SBIs have been defined for different
      environments, technologies, standards organizations, and
      vendors.  It is shown in Figure 3 for reference only.

4. Advanced ACTN Architectures

   This section describes advanced configurations of the ACTN
   architecture.

4.1. MDSC Hierarchy

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices, or the combination of
   different layers and technologies in the network.  Where there is a
   hierarchy of MDSCs, we introduce the terms higher-level MDSC
   (MDSC-H) and lower-level MDSC (MDSC-L).  The interface between them
   is a recursion of the MPI.  An implementation of an MDSC-H makes
   provisioning requests as normal using the MPI, but an MDSC-L must
   be able to receive requests as normal at the CMI and also at the
   MPI.  The hierarchy of MDSCs can be seen in Figure 4.

   Another implementation choice could foresee the use of an MDSC-L
   for all the PNCs related to a given network layer or technology
   (e.g., IP/MPLS), a different MDSC-L for the PNCs related to another
   layer/technology (e.g., OTN/WDM), and an MDSC-H to coordinate them.

                               +--------+
                               |  CNC   |
                               +--------+
                                    |
              +-----+               | CMI
              | CNC |               |
              +-----+          +----------+
                 |      -------|  MDSC-H  |-----
                 | CMI |       +----------+    |
                 |     |                       |
                 |     | MPI                   | MPI
                 |     |                       |
               +---------+                +---------+
               | MDSC-L  |                | MDSC-L  |
               +---------+                +---------+
              MPI |     |                   |     |
                  |     |                   |     |
                -----  -----              -----  -----
               | PNC || PNC |            | PNC || PNC |
                -----  -----              -----  -----

                        Figure 4: MDSC Hierarchy

4.2. Functional Split of MDSC Functions in Orchestrators

   An implementation choice could separate the MDSC functions into two
   groups: one group for service-related functions and the other for
   network-related functions.  This enables the implementation of a
   service orchestrator that provides the service-related functions of
   the MDSC and a network orchestrator that provides the network-
   related functions of the MDSC.  This split is consistent with the
   YANG service model architecture described in [Service-YANG].
   Figure 5 depicts this and shows how the ACTN interfaces may map to
   YANG models.

             +--------------------+
             |      Customer      |
             |      +-----+       |
             |      | CNC |       |
             |      +-----+       |
             +--------------------+
                CMI |  Customer Service Model
                    |
             +---------------------------------------+
             |         Service                       |
             |  ********|***********    Orchestrator |
             |  * MDSC  |           *                |
             |  * +-----------------+ *              |
             |  * | Service-related | *              |
             |  * | Functions       | *              |
             |  * +-----------------+ *              |
             +--*---------|---------*----------------+
                *         |         *
                *         |  Service Delivery Model
                *         |         *
             +--*---------|---------*----------------+
             |  *         |         *       Network  |
             |  * +-----------------+ * Orchestrator |
             |  * | Network-related | *              |
             |  * | Functions       | *              |
             |  * +-----------------+ *              |
             |  ********|***********                 |
             +----------|----------------------------+
                    MPI |  Network Configuration Model
                        |
             +------------------------+
             |         Domain         |
             |  +------+  Controller  |
             |  | PNC  |              |
             |  +------+              |
             +------------------------+
                    SBI |  Device Configuration Model
                        |
                   +--------+
                   | Device |
                   +--------+

     Figure 5: ACTN Architecture in the Context of the YANG Service
                                 Models

5. Topology Abstraction Methods

   Topology abstraction is described in [RFC7926].
   This section discusses topology abstraction factors, types, and
   their context in the ACTN architecture.  Abstraction in ACTN is
   performed by the PNC when presenting available topology to the
   MDSC, or by an MDSC-L when presenting topology to an MDSC-H.  This
   function is different from the creation of a VN (and particularly a
   Type 2 VN), which is not abstraction but the construction of
   virtual resources.

5.1. Abstraction Factors

   As discussed in [RFC7926], abstraction is tied to the policy of the
   networks.  For instance, per an operational policy, the PNC would
   not provide any technology-specific details (e.g., optical
   parameters for WSON) in the abstract topology it provides to the
   MDSC.

   There are many factors that may impact the choice of abstraction:

   - Abstraction depends on the nature of the underlying domain
     networks.  For instance, packet networks may be abstracted with
     fine granularity, while abstraction of optical networks depends
     on the switching units (such as wavelengths) and the end-to-end
     continuity and cross-connect limitations within the network.

   - Abstraction also depends on the capability of the PNCs.  As
     abstraction requires hiding details of the underlying network
     resources, the PNC's capability to run algorithms impacts the
     feasibility of abstraction.  Some PNCs may not have the ability
     to abstract native topology, while other PNCs may have the
     ability to use sophisticated algorithms.

   - Abstraction is a tool that can improve scalability.  Where the
     native network resource information is of large size, there is a
     specific scaling benefit to abstraction.

   - The proper abstraction level may depend on the frequency of
     topology updates, and vice versa.

   - The nature of the MDSC's support for technology-specific
     parameters impacts the degree/level of abstraction.  If the MDSC
     is not capable of handling such parameters, then a higher level
     of abstraction is needed.
   - In some cases, the PNC is required to hide key internal
     topological data from the MDSC.  Such confidentiality can be
     achieved through abstraction.

5.2. Abstraction Types

   This section defines the following three types of topology
   abstraction:

   .  Native/White Topology (Section 5.2.1)

   .  Black Topology (Section 5.2.2)

   .  Grey Topology (Section 5.2.3)

5.2.1. Native/White Topology

   This is a case where the PNC provides the actual network topology
   to the MDSC without any hiding or filtering of information; that
   is, no abstraction is performed.  In this case, the MDSC has full
   knowledge of the underlying network topology and can operate on it
   directly.

5.2.2. Black Topology

   A black topology replaces a full network with a minimal
   representation of the edge-to-edge topology without disclosing any
   node-internal connectivity information.  The entire domain network
   may be abstracted as a single abstract node with the network's
   access/egress links appearing as the ports of the abstract node and
   the implication that any port can be 'cross-connected' to any
   other.  Figure 6 depicts a native topology with the corresponding
   black topology with one virtual node and inter-domain links.  In
   this case, the MDSC has to make a provisioning request to the PNCs
   to establish the port-to-port connection.  If there is a large
   number of interconnected domains, this abstraction method may
   impose a heavy coordination load at the MDSC level in order to find
   an optimal end-to-end path, since the abstraction hides so much
   information that it is not possible to determine whether an end-to-
   end path is feasible without asking each PNC to set up each path
   fragment.  For this reason, the MPI might need to be enhanced to
   allow the PNCs to be queried for the practicality and
   characteristics of paths across the abstract node.
      .....................................
      :             PNC Domain            :
      :  +--+     +--+     +--+     +--+  :
      ---+  +-----+  +-----+  +-----+  +---
      :  ++-+     ++-+     +-++     +-++  :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :  ++-+     ++-+     +-++     +-++  :
      ---+  +-----+  +-----+  +-----+  +---
      :  +--+     +--+     +--+     +--+  :
      :...................................:

                 +----------+
              ---+          +---
                 | Abstract |
                 |   Node   |
              ---+          +---
                 +----------+

       Figure 6: Native Topology with Corresponding Black Topology
                      Expressed as an Abstract Node

5.2.3. Grey Topology

   A grey topology represents a compromise between black and white
   topologies from a granularity point of view.  In this case, the PNC
   exposes an abstract topology that comprises nodes and links.  The
   nodes and links may be physical or abstract, while the abstract
   topology represents the potential for connectivity across the PNC
   domain.

   Two modes of grey topology are identified:

   .  In a Type A grey topology, border nodes are connected by a full
      mesh of TE links (see Figure 7).

   .  In a Type B grey topology, border nodes are connected over a
      more detailed network comprising internal abstract nodes and
      abstracted links.  This mode of abstraction supplies the MDSC
      with more information about the internals of the PNC domain and
      allows it to make more informed choices about how to route
      connectivity over the underlying network.

      .....................................
      :             PNC Domain            :
      :  +--+     +--+     +--+     +--+  :
      ---+  +-----+  +-----+  +-----+  +---
      :  ++-+     ++-+     +-++     +-++  :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :   |        |         |        |   :
      :  ++-+     ++-+     +-++     +-++  :
      ---+  +-----+  +-----+  +-----+  +---
      :  +--+     +--+     +--+     +--+  :
      :...................................:

            ....................
            : Abstract Network :
            :                  :
            :  +--+      +--+  :
            ---+  +------+  +---
            :  ++-+      +-++  :
            :   | \      / |   :
            :   |  \    /  |   :
            :   |   \  /   |   :
            :   |    \/    |   :
            :   |    /\    |   :
            :   |   /  \   |   :
            :   |  /    \  |   :
            :   | /      \ |   :
            :  ++-+      +-++  :
            ---+  +------+  +---
            :  +--+      +--+  :
            :..................:

       Figure 7: Native Topology with Corresponding Grey Topology

5.3. Methods of Building Grey Topologies

   This section discusses two different methods of building a grey
   topology:

   .  Automatic generation of abstract topology by configuration
      (Section 5.3.1)

   .  On-demand generation of supplementary topology via path
      computation request/reply (Section 5.3.2)

5.3.1. Automatic Generation of Abstract Topology by Configuration

   Automatic generation is based on the abstraction/summarization of
   the whole domain by the PNC and its advertisement on the MPI.  The
   level of abstraction can be decided based on PNC configuration
   parameters (e.g., "provide the potential connectivity between any
   PE and any ASBR in an MPLS-TE network").

   The configuration parameters for this abstract topology can include
   available bandwidth, latency, or any combination of defined
   parameters.  How to generate such information is beyond the scope
   of this document.

   This abstract topology may need to be updated, periodically or
   incrementally, when there is a change in the underlying network or
   in the use of network resources that makes connectivity more or
   less available.

5.3.2. On-Demand Generation of Supplementary Topology via Path Compute
       Request/Reply

   While abstract topology is generated and updated automatically by
   configuration as explained in Section 5.3.1, additional
   supplementary topology may be obtained by the MDSC via a path
   compute request/reply mechanism.

   The abstract topology advertisements from PNCs give the MDSC the
   border node/link information for each domain.  Under this scenario,
   when the MDSC needs to create a new VN, the MDSC can issue path
   computation requests to PNCs with constraints matching the VN
   request, as described in [ACTN-YANG].  An example is provided in
   Figure 8, where the MDSC is creating a P2P VN between AP1 and AP2.
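The request/merge step of this on-demand method might be sketched as follows. The per-domain costs are purely hypothetical illustrations; a real MDSC would obtain them from PNC path-compute replies over the MPI.

```python
# Hypothetical path-compute replies: cost from an end point to each
# border node, as a PNC might return over the MPI.
domain_x = {("PE1", "BrdrX.1"): 10, ("PE1", "BrdrX.2"): 5}
domain_y = {("BrdrY.1", "PE2"): 4, ("BrdrY.2", "PE2"): 12}
inter_domain = {("BrdrX.1", "BrdrY.1"): 1, ("BrdrX.2", "BrdrY.2"): 1}

def best_end_to_end():
    # Stitch each candidate: intra-X cost + inter-domain link + intra-Y cost.
    candidates = []
    for (bx, by), link_cost in inter_domain.items():
        total = domain_x[("PE1", bx)] + link_cost + domain_y[(by, "PE2")]
        candidates.append((total, ["PE1", bx, by, "PE2"]))
    return min(candidates)

cost, path = best_end_to_end()
# Via BrdrX.1/BrdrY.1: 10 + 1 + 4 = 15; via BrdrX.2/BrdrY.2: 5 + 1 + 12 = 18.
print(cost, path)   # 15 ['PE1', 'BrdrX.1', 'BrdrY.1', 'PE2']
```

Note that the cheaper intra-domain segment in Domain X (via BrdrX.2) does not win end to end, which is exactly why the MDSC must merge results from all domains rather than optimizing each domain independently.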
   The MDSC could use two different inter-domain links to get from
   Domain X to Domain Y, but in order to choose the best end-to-end
   path it needs to know what Domains X and Y can offer in terms of
   connectivity and constraints between the PE nodes and the border
   nodes.

           -------                     --------
          (       )                   (        )
         - BrdrX.1-------------------BrdrY.1    -
        (+---+     )                 (     +---+ )
     -+-( |PE1| Dom.X )             ( Dom.Y |PE2| )-+-
      | (+---+     )                 (     +---+ )  |
     AP1  - BrdrX.2------------------BrdrY.2    -  AP2
          (       )                   (        )
           -------                     --------

                   Figure 8: A Multi-Domain Example

   The MDSC issues a path computation request to PNC.X asking for
   potential connectivity between PE1 and border node BrdrX.1 and
   between PE1 and BrdrX.2, with related objective functions and TE
   metric constraints.  A similar request for connectivity from the
   border nodes in Domain Y to PE2 will be issued to PNC.Y.  The MDSC
   merges the results to compute the optimal end-to-end path,
   including the inter-domain links.  The MDSC can use the result of
   this computation to request the PNCs to provision the underlying
   networks, and the MDSC can then use the end-to-end path as a
   virtual link in the VN it delivers to the customer.

5.4. Hierarchical Topology Abstraction Example

   This section illustrates how topology abstraction operates at
   different levels of a hierarchy of MDSCs, as shown in Figure 9.

                     +-----+
                     | CNC |        CNC wants to create a VN
                     +-----+        between CE A and CE B
                        |
            +-----------------------+
            |        MDSC-H         |
            +-----------------------+
                  /           \
                 /             \
          +---------+       +---------+
          | MDSC-L1 |       | MDSC-L2 |
          +---------+       +---------+
            /    \             /    \
           /      \           /      \
        +----+  +----+     +----+  +----+
 CE A o-|PNC1|  |PNC2|     |PNC3|  |PNC4|-o CE B
        +----+  +----+     +----+  +----+

   Virtual Network Delivered to CNC

        CE A o==============o CE B

   Topology operated on by MDSC-H

        CE A o----o==o==o===o----o CE B
   Topology operated on by MDSC-L1     Topology operated on by MDSC-L2

            _       _                            _       _
           ( )     ( )                          ( )     ( )
  CE A o--(o---o)==(o---o)==Dom.3      Dom.2==(o---o)==(o---o)--o CE B
           ( )     ( )                          ( )     ( )
           (_)     (_)                          (_)     (_)

   Actual Topology

            ___           ___           ___           ___
           (   )         (   )         (   )         (   )
          (  o  )       (  o  )       ( o--o )      (  o  )
          ( / \ )       ( |\  )       ( |  | )      ( / \ )
  CE A o--(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)--o CE B
          ( \ / )       ( | |/ )      ( |  | )      ( \ / )
          (  o  )       (o-o   )      ( o--o )      (  o  )
           (___)         (___)         (___)         (___)

          Domain 1      Domain 2      Domain 3      Domain 4

   Where
      o    is a node
      ---  is a link
      ===  is a border link

       Figure 9: Illustration of Hierarchical Topology Abstraction

   In the example depicted in Figure 9, there are four domains under
   the control of PNCs PNC1, PNC2, PNC3, and PNC4.  MDSC-L1 controls
   PNC1 and PNC2, while MDSC-L2 controls PNC3 and PNC4.  Each of the
   PNCs provides a grey topology abstraction that presents only border
   nodes and links across and outside the domain.  The abstract
   topology that MDSC-L1 operates on is a combination of the two
   topologies from PNC1 and PNC2.  Likewise, the abstract topology
   that MDSC-L2 operates on is shown in Figure 9.  Both MDSC-L1 and
   MDSC-L2 provide a black topology abstraction to MDSC-H, in which
   each PNC domain is presented as a single virtual node.  MDSC-H
   combines these two topologies to create the abstract topology on
   which it operates; it sees the whole of the four domain networks as
   four virtual nodes connected via virtual links.

6. Access Points and Virtual Network Access Points

   In order to map identification of connections between the
   customer's sites and the TE networks, and to scope the connectivity
   requested in the VNS, the CNC and the MDSC refer to the connections
   using the Access Point (AP) construct, as shown in Figure 10.
                 -------------
                (             )
               -               -
    +---+  X  (                 )  Z  +---+
    |CE1|---+----(           )----+---|CE2|
    +---+   |   (             )   |   +---+
           AP1   -           -   AP2
                  (         )
                   ---------

                  Figure 10: Customer View of APs

   Let us take as an example the scenario shown in Figure 10.  CE1 is
   connected to the network via a 10 Gb link and CE2 via a 40 Gb link.
   Before the creation of any VN between AP1 and AP2, the customer
   view can be summarized as shown in Table 1.

               +----------+------------------------+
               |End Point | Access Link Bandwidth  |
         +-----+----------+----------+-------------+
         |AP id| CE,port  | MaxResBw | AvailableBw |
         +-----+----------+----------+-------------+
         | AP1 |CE1,portX |   10Gb   |    10Gb     |
         +-----+----------+----------+-------------+
         | AP2 |CE2,portZ |   40Gb   |    40Gb     |
         +-----+----------+----------+-------------+

                   Table 1: AP - Customer View

   On the other hand, what the provider sees is shown in Figure 11.

           -------                 -------
          (       )               (       )
         -         -             -         -
     W  (+---+      )           (      +---+)  Y
    -+--( |PE1| Dom.X )--------( Dom.Y |PE2| )--+-
     |  (+---+      )           (      +---+)   |
    AP1  -         -             -         -   AP2
          (       )               (       )
           -------                 -------

                 Figure 11: Provider View of the AP

   This results in the summarization shown in Table 2.

               +----------+------------------------+
               |End Point | Access Link Bandwidth  |
         +-----+----------+----------+-------------+
         |AP id| PE,port  | MaxResBw | AvailableBw |
         +-----+----------+----------+-------------+
         | AP1 |PE1,portW |   10Gb   |    10Gb     |
         +-----+----------+----------+-------------+
         | AP2 |PE2,portY |   40Gb   |    40Gb     |
         +-----+----------+----------+-------------+

                   Table 2: AP - Provider View

   A Virtual Network Access Point (VNAP) needs to be defined as the
   binding between an AP and a VN; it is used to allow different VNs
   to start from the same AP.  It also allows for traffic engineering
   on the access and/or inter-domain links (e.g., keeping track of
   bandwidth allocation).  A different VNAP is created on an AP for
   each VN.
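The per-AP bandwidth accounting implied by the VNAP construct might be sketched as follows. This is a minimal illustration only; the class and its attribute names are hypothetical, not part of the ACTN model.

```python
class AccessPoint:
    """Tracks per-VN bandwidth reservations (VNAPs) on a single AP."""

    def __init__(self, ap_id, max_bw_gbps):
        self.ap_id = ap_id
        self.max_bw = max_bw_gbps
        self.vnaps = {}          # vn_id -> reserved bandwidth (Gbps)

    @property
    def available_bw(self):
        # Available bandwidth is the access-link capacity minus all
        # VNAP reservations, as in the provider view tables.
        return self.max_bw - sum(self.vnaps.values())

    def add_vnap(self, vn_id, bw_gbps):
        if bw_gbps > self.available_bw:
            raise ValueError("insufficient bandwidth on access link")
        self.vnaps[vn_id] = bw_gbps

ap1 = AccessPoint("AP1", 10)
ap1.add_vnap(vn_id=9, bw_gbps=1)   # VN 9 reserves 1 Gbps at AP1
ap1.add_vnap(vn_id=5, bw_gbps=2)   # VN 5 reserves 2 Gbps at AP1
print(ap1.available_bw)            # 7
```

Creating a second VN on the same AP simply adds another entry to `vnaps`, which is the point of the VNAP construct: several VNs can share one AP while their reservations are tracked independently.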
   In this simple scenario, suppose that we want to create two virtual
   networks: the first with VN identifier 9 between AP1 and AP2 with a
   bandwidth of 1 Gbps, and the second with VN identifier 5, again
   between AP1 and AP2, with a bandwidth of 2 Gbps.  The provider view
   would evolve as shown in Table 3.

                   +----------+------------------------+
                   |End Point | Access Link/VNAP Bw    |
         +---------+----------+----------+-------------+
         |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
         +---------+----------+----------+-------------+
         |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
         | -VNAP1.9|          |   1Gbps  |    N.A.     |
         | -VNAP1.5|          |   2Gbps  |    N.A.     |
         +---------+----------+----------+-------------+
         |AP2      |PE2,portY |  40Gbps  |   37Gbps    |
         | -VNAP2.9|          |   1Gbps  |    N.A.     |
         | -VNAP2.5|          |   2Gbps  |    N.A.     |
         +---------+----------+----------+-------------+

          Table 3: AP and VNAP - Provider View after VNS Creation

6.1. Dual-Homing Scenario

   Often there is a dual-homing relationship between a CE and a pair
   of PEs.  This case needs to be supported by the definition of VNs,
   APs, and VNAPs.  Suppose that CE1 is connected to two different PEs
   in the operator domain via AP1 and AP2 and that the customer needs
   5 Gbps of bandwidth between CE1 and CE2.  This is shown in
   Figure 12.

                         ____________
                    AP1 (            ) AP3
                -------(PE1)      (PE3)-------
               W  /    (            )    \  X
           +---+ /     (            )     \ +---+
           |CE1|       (            )       |CE2|
           +---+ \     (            )     / +---+
               Y  \    (            )    /  Z
                -------(PE2)      (PE4)-------
                    AP2 (____________)

                   Figure 12: Dual-Homing Scenario

   In this case, the customer will request a VN between AP1, AP2, and
   AP3, specifying a dual-homing relationship between AP1 and AP2.  As
   a consequence, no traffic will flow between AP1 and AP2.  The dual-
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as end points).  The
   customer view is shown in Table 4.
                   +----------+------------------------+
                   |End Point | Access Link/VNAP Bw    |
         +---------+----------+----------+-------------+-----------+
         |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
         +---------+----------+----------+-------------+-----------+
         |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
         | -VNAP1.9|          |   5Gbps  |    N.A.     |  VNAP2.9  |
         +---------+----------+----------+-------------+-----------+
         |AP2      |CE1,portY |  40Gbps  |   35Gbps    |           |
         | -VNAP2.9|          |   5Gbps  |    N.A.     |  VNAP1.9  |
         +---------+----------+----------+-------------+-----------+
         |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
         | -VNAP3.9|          |   5Gbps  |    N.A.     |   NONE    |
         +---------+----------+----------+-------------+-----------+

          Table 4: Dual-Homing - Customer View after VN Creation

7. Advanced ACTN Application: Multi-Destination Service

   A further advanced application of ACTN is the case of data center
   selection, where the customer requires the data center selection to
   be based on the network status; this is referred to as "multi-
   destination" in [ACTN-REQ].  In terms of ACTN, a CNC could request
   a connectivity service (virtual network) between a set of source
   APs and destination APs and leave it up to the network (MDSC) to
   decide which source and destination access points should be used to
   set up the connectivity service (virtual network).  The candidate
   list of source and destination APs is decided by a CNC (or an
   entity outside of ACTN) based on certain factors that are outside
   the scope of ACTN.

   Based on the AP selection as determined and returned by the network
   (MDSC), the CNC (or an entity outside of ACTN) should further take
   care of any subsequent actions, such as orchestration or service
   setup requirements.  These further actions are outside the scope of
   ACTN.
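The selection step performed by the MDSC might be sketched as follows, with purely hypothetical path costs standing in for whatever metrics, policies, and SLA constraints a real deployment would apply.

```python
# Hypothetical end-to-end path costs from the source AP (AP1/CE1) to
# each candidate destination AP, as the MDSC might derive from PNC
# topology and path-compute information.
candidate_costs = {"AP2(DC-A)": 12, "AP3(DC-B)": 20, "AP4(DC-C)": 15}

def select_destination(costs):
    # Minimize cost; a real MDSC would also weigh policies,
    # optimization criteria, and SLA constraints.
    return min(costs, key=costs.get)

print(select_destination(candidate_costs))   # AP2(DC-A)
```

The same structure extends to the pre-planned migration case below: the MDSC would keep the second-best candidate as the protection end point instead of discarding it.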
   Consider a case, as shown in Figure 13, where three data centers
   are available, but the customer requires the data center selection
   to be based on the network status and the connectivity service to
   be set up between AP1 (CE1) and one of the destination APs (AP2
   (DC-A), AP3 (DC-B), or AP4 (DC-C)).  The MDSC (in coordination with
   the PNCs) would select the best destination AP based on the
   constraints, optimization criteria, policies, etc., and set up the
   connectivity service (virtual network).

              -------               -------
             (       )             (       )
            -         -           -         -
   +---+   (           )         (           )   +----+
   |CE1|---+( Domain X  )-------(  Domain Y  )+---|DC-A|
   +---+   |(           )        (           )|   +----+
          AP1 -       -           -         - AP2
               (     )             (     )
               ---+---             ---+---
                  |                   |
              AP3-+               AP4-+
                  |                   |
               +----+              +----+
               |DC-B|              |DC-C|
               +----+              +----+

         Figure 13: End-Point Selection Based on Network Status

7.1. Pre-Planned End-Point Migration

   Furthermore, in the case of data center selection, the customer
   could request that a backup DC be selected such that, in case of
   failure, another DC site could provide hot standby protection.  As
   shown in Figure 14, DC-C is selected as a backup for DC-A.  Thus,
   the VN should be set up by the MDSC to include primary connectivity
   between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity
   between AP1 (CE1) and AP4 (DC-C).

              -------               -------
             (       )             (       )
            -         -           -         -
   +---+   (           )         (           )   +----+
   |CE1|---+( Domain X  )-------(  Domain Y  )+---|DC-A|
   +---+   |(           )        (           )|   +----+
          AP1 -       -           -         - AP2     |
               (     )             (     )            |
               ---+---             ---+---            |
                  |                   |               |
              AP3-+               AP4-+          HOT STANDBY
                  |                   |               |
               +----+              +----+             |
               |DC-D|              |DC-C|<------------+
               +----+              +----+

              Figure 14: Pre-Planned End-Point Migration

7.2. On-the-Fly End-Point Migration

   Compared to pre-planned end-point migration, on-the-fly end-point
   selection is dynamic in that the migration is not pre-planned but
   is decided based on network conditions.
   Under this scenario, the MDSC would monitor the network (based on
   the VN SLA) and notify the CNC in the case where some other
   destination AP would be a better choice based on the network
   parameters.  The CNC should instruct the MDSC when it is suitable
   to update the VN with the new AP, if that is required.

8. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow customers to request virtual
   connectivity across server network resources.  ACTN supports
   multiple customers, each with its own view of and control of a
   virtual network built on the server network; the network operator
   will need to partition (or "slice") their network resources and
   manage the resources accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservation of client and network layer connectivity.
   It will also need to provide performance monitoring and control of
   traffic engineered resources.  The management requirements may be
   categorized as follows:

   .  Management of external ACTN protocols

   .  Management of internal ACTN interfaces/protocols

   .  Management and monitoring of ACTN components

   .  Configuration of policy to be applied across the ACTN system

8.1. Policy

   Policy is an important aspect of ACTN control and management.
   Policies are used via the components and interfaces, during
   deployment of the service, to ensure that the service is compliant
   with agreed-upon policy factors and variations (often described in
   SLAs); these include, but are not limited to: connectivity,
   bandwidth, geographical transit, technology selection, security,
   resilience, and economic cost.

   Depending on the deployment of the ACTN architecture, some policies
   may have local or global significance.
   That is, certain policies may be ACTN component specific in scope,
   while others may have broader scope and interact with multiple ACTN
   components.  Two examples are provided below:

   .  A local policy might limit the number, type, size, and
      scheduling of virtual network services a customer may request
      via its CNC.  This type of policy would be implemented locally
      on the MDSC.

   .  A global policy might constrain certain customer types (or
      specific customer applications) to only use certain MDSCs and be
      restricted to physical network types managed by the PNCs.  A
      global policy agent would govern these types of policies.

   The objective of this section is to discuss the applicability of
   ACTN policy: requirements, components, interfaces, and examples.
   This section provides an analysis and does not mandate a specific
   method for enforcing policy or the type of policy agent that would
   be responsible for propagating policies across the ACTN components.
   It does highlight examples of how policy may be applied in the
   context of ACTN, but it is expected that further discussion in an
   applicability or solution-specific document will be required.

8.2. Policy Applied to the Customer Network Controller

   A virtual network service for a customer application will be
   requested by the CNC.  The request will reflect the application
   requirements and specific service needs, including bandwidth,
   traffic type, and survivability.  Furthermore, application access
   and the type of virtual network service requested by the CNC will
   need to adhere to specific access control policies.

8.3. Policy Applied to the Multi-Domain Service Coordinator

   A key objective of the MDSC is to support the customer's expression
   of the application connectivity request via its CNC as a set of
   desired business needs; therefore, policy will play an important
   role.
   Once authorized, the virtual network service will be instantiated
   via the CNC-MDSC Interface (CMI); it will reflect the customer
   application and connectivity requirements and specific service
   transport needs.  The CNC and the MDSC components will have agreed
   on connectivity end points; use of these end points should be
   defined as a policy expression when setting up or augmenting
   virtual network services.  Ensuring that permissible end points are
   defined for CNCs and applications will require the MDSC to maintain
   a registry of permissible connection points for CNCs and
   application types.

   Conflicts may occur when virtual network service optimization
   criteria are in competition.  For example, to meet objectives for
   service reachability, a request may require an interconnection
   point between multiple physical networks; however, this might break
   a confidentiality policy requirement of a specific type of end-to-
   end service.  Thus, an MDSC may have to balance a number of the
   constraints on a service request and between different requested
   services.  It may also have to balance requested services with
   operational norms for the underlying physical networks.  This
   balancing may be resolved using configured policy and using hard
   and soft policy constraints.

8.4. Policy Applied to the Physical Network Controller

   The PNC is responsible for configuring the network elements,
   monitoring physical network resources, and exposing connectivity
   (direct or abstracted) to the MDSC.  It is therefore expected that
   policy will dictate what connectivity information will be exported
   between the PNC and the MDSC via the MDSC-PNC Interface (MPI).

   Policy interactions may arise when a PNC determines that it cannot
   compute a requested path from the MDSC or notices that (per a
   locally configured policy) the network is low on resources (for
   example, the capacity on key links has become exhausted).
In either case, the PNC will be required to notify the MDSC, which may (again per policy) act to construct a virtual network service across another physical network topology. Furthermore, additional forms of policy-based resource management will be required to provide virtual network service performance, security, and resilience guarantees. This will likely be implemented via a local policy agent and additional protocol methods.

9. Security Considerations

The ACTN framework described in this document defines key components and interfaces for managed traffic engineered networks. Securing the request and control of resources, confidentiality of the information, and availability of function should all be critical security considerations when deploying and operating ACTN platforms.

Several distributed ACTN functional components are required, and implementations should consider encrypting data that flows between components, especially when they are implemented at remote nodes, regardless of whether these data flows are on external or internal network interfaces.

The ACTN security discussion is further split into two specific categories described in the following sub-sections:

- Interface between the Customer Network Controller and the Multi-Domain Service Coordinator (MDSC), the CNC-MDSC Interface (CMI)

- Interface between the Multi-Domain Service Coordinator and the Physical Network Controller (PNC), the MDSC-PNC Interface (MPI)

From a security and reliability perspective, ACTN may encounter many risks, such as malicious attacks and rogue elements attempting to connect to various ACTN components. Furthermore, some ACTN components represent a single point of failure and threat vector; they must also manage policy conflicts and guard against eavesdropping on communication between different ACTN components. The conclusion is that all protocols used to realize the ACTN framework should have rich security features, and customer, application, and network data should be stored in encrypted data stores.
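As one possible illustration of encrypting data in transit between ACTN components (for example, on the MPI), the sketch below builds a certificate-verifying TLS channel. The function names, host, port, and certificate path are placeholders; ACTN does not mandate this particular mechanism, and a deployment might equally use HTTPS or another PKI-based transport.

```python
# Hypothetical sketch: a TLS-protected channel between ACTN components,
# e.g., MDSC to PNC. Endpoint names and file paths are placeholders.

import socket
import ssl

def make_mpi_tls_context(ca_file=None):
    """Build a TLS context that verifies the peer's certificate."""
    context = ssl.create_default_context(cafile=ca_file)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # forbid legacy TLS
    return context

def connect_to_pnc(host, port, ca_file):
    """Open a TLS-protected connection to a PNC, checking its hostname."""
    context = make_mpi_tls_context(ca_file)
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Example usage (placeholder endpoint; not a real PNC address):
# channel = connect_to_pnc("pnc1.example.net", 4189, "ca.pem")
# channel.sendall(b"...MPI message...")
```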
Additional security risks may still exist. Therefore, discussion and applicability of specific security functions and protocols will be better described in documents that are use-case and environment specific.

9.1. CNC-MDSC Interface (CMI)

Data stored by the MDSC will reveal details of the virtual network services, and which CNC and customer/application is consuming the resource. The data stored must therefore be considered a candidate for encryption.

CNC access rights to an MDSC must be managed. The MDSC must allocate resources properly, and methods to prevent policy conflicts, resource wastage, and denial-of-service attacks on the MDSC by rogue CNCs should also be considered.

The CMI will likely be an external protocol interface. Suitable authentication and authorization of each CNC connecting to the MDSC will be required, especially as these are likely to be implemented by different organizations and on separate functional nodes. Use of AAA-based mechanisms would also provide role-based authorization methods, so that only authorized CNCs may access the different functions of the MDSC.

9.2. MDSC-PNC Interface (MPI)

Where the MDSC must interact with multiple (distributed) PNCs, a PKI-based mechanism is suggested, such as building a TLS or HTTPS connection between the MDSC and PNCs, to ensure trust between the physical network layer control components and the MDSC.

Which MDSC the PNC exports topology information to, and the level of detail (full or abstracted), should also be authenticated; specific access restrictions and topology views should be configurable and/or policy-based.

10. References

10.1. Informative References

[RFC2702] Awduche, D., et al., "Requirements for Traffic Engineering Over MPLS", RFC 2702, September 1999.

[RFC4655] Farrel, A., Vasseur, J.-P., and J.
Ash, "A Path Computation Element (PCE)-Based Architecture", RFC 4655, August 2006.

[RFC5654] Niven-Jenkins, B., Ed., Brungard, D., Ed., and M. Betts, Ed., "Requirements of an MPLS Transport Profile", RFC 5654, September 2009.

[RFC7149] Boucadair, M. and C. Jacquenet, "Software-Defined Networking: A Perspective from within a Service Provider Environment", RFC 7149, March 2014.

[RFC7926] Farrel, A., Ed., "Problem Statement and Architecture for Information Exchange between Interconnected Traffic-Engineered Networks", RFC 7926, July 2016.

[RFC3945] Mannie, E., Ed., "Generalized Multi-Protocol Label Switching (GMPLS) Architecture", RFC 3945, October 2004.

[ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue 1.1, ONF TR-521, June 2016.

[Centralized] Farrel, A., et al., "An Architecture for Use of PCE and PCEP in a Network with Central Control", draft-ietf-teas-pce-central-control, work in progress.

[Service-YANG] Lee, Y., Dhody, D., and D. Ceccarelli, "Traffic Engineering and Service Mapping Yang Model", draft-lee-teas-te-service-mapping-yang, work in progress.

[ACTN-YANG] Lee, Y., et al., "A Yang Data Model for ACTN VN Operation", draft-lee-teas-actn-vn-yang, work in progress.

[ACTN-REQ] Lee, Y., et al., "Requirements for Abstraction and Control of TE Networks", draft-ietf-teas-actn-requirements, work in progress.

11. Contributors

Adrian Farrel
Old Dog Consulting
Email: adrian@olddog.co.uk

Italo Busi
Huawei
Email: Italo.Busi@huawei.com

Khuzema Pithewan
Infinera
Email: kpithewan@infinera.com
Michael Scharf
Nokia
Email: michael.scharf@nokia.com

Luyuan Fang
eBay
Email: luyuanf@gmail.com

Diego Lopez
Telefonica I+D
Don Ramon de la Cruz, 82
28006 Madrid, Spain
Email: diego@tid.es

Sergio Belotti
Alcatel Lucent
Via Trento, 30
Vimercate, Italy
Email: sergio.belotti@nokia.com

Daniel King
Lancaster University
Email: d.king@lancaster.ac.uk

Dhruv Dhody
Huawei Technologies
Divyashree Techno Park, Whitefield
Bangalore, Karnataka 560066
India
Email: dhruv.ietf@gmail.com

Gert Grammel
Juniper Networks
Email: ggrammel@juniper.net

Authors' Addresses

Daniele Ceccarelli
Ericsson
Torshamnsgatan, 48
Stockholm, Sweden
Email: daniele.ceccarelli@ericsson.com

Young Lee
Huawei Technologies
5340 Legacy Drive
Plano, TX 75023, USA
Phone: (469) 277-5838
Email: leeyoung@huawei.com

APPENDIX A - Example of MDSC and PNC Functions Integrated in a Service/Network Orchestrator

This section provides an example of a possible deployment scenario in which a Service/Network Orchestrator includes a number of functionalities. In the example below, these are the PNC functionalities for Domain 2, and the MDSC functionalities needed to coordinate the PNC1 functionalities (hosted in a separate domain controller) with the PNC2 functionalities (co-hosted in the network orchestrator).

                Customer
                +-------------------------------+
                |    +-----+                    |
                |    | CNC |                    |
                |    +-----+                    |
                +-------|-----------------------+
                        |
    Service/Network     | CMI
    Orchestrator        |
                +-------|------------------------+
                |    +------+   MPI    +------+  |
                |    | MDSC |----------| PNC2 |  |
                |    +------+          +------+  |
                +-------|------------------|-----+
                        | MPI              |
    Domain Controller   |                  |
                +-------|-----+            |
                |    +-----+  |            | SBI
                |    |PNC1 |  |            |
                |    +-----+  |            |
                +-------|-----+            |
                        v SBI              v
                     -------            -------
                    (       )          (       )
                   -         -        -         -
                  (           )      (           )
                  (  Domain 1  )----(  Domain 2  )
                  (           )      (           )
                   -         -        -         -
                    (       )          (       )
                     -------            -------

APPENDIX B - Example of IP + Optical network with L3VPN service

This section provides a more complex deployment scenario in which the ACTN hierarchy is deployed to control a multi-layer network via an IP/MPLS PNC and an Optical PNC. The scenario is further enhanced by the introduction of an upper-layer service configuration (e.g., L3VPN). The provisioning of the L3VPN service is outside the ACTN scope, but it is worth showing how the two parts are integrated for end-to-end service fulfilment. An example of a service configuration function in the Service/Network Orchestrator is discussed in [I-D.dhjain-bess-bgp-l3vpn-yang].

                Customer
                +-------------------------------+
                |    +-----+                    |
                |    | CNC |                    |
                |    +-----+                    |
                +-------|--------+--------------+
                        |        |
                        |        | Customer Service Model
                        | CMI    | (non-ACTN interface)
    Service/Network     |        |
    Orchestrator        |        |
                +-------|--------|--------------------------+
                |       |   +-------------------------+     |
                |       |   |Service Mapping Function |     |
                |       |   +-------------------------+     |
                |       |       |              |            |
                |    +------+   |   +---------------+       |
                |    | MDSC |---+   |Service Config.|       |
                |    +------+       +---------------+       |
                +-------|------------------|----------------+
                   MPI  |      +-----------+ (non-ACTN Interf.)
                        |     /
                        +----/----------------------+
    IP/MPLS             |   /                       |
    Domain              |  /                        |   Optical Domain
    Controller          | /                         |   Controller
                +-------|/--------+             +---|--------------+
                | +-----+ +-----+ |             |  +-----+         |
                | |PNC1 | |Serv.| |             |  |PNC2 |         |
                | +-----+ +-----+ |             |  +-----+         |
                +-----------------+             +------------------+
                  SBI |                             |
                      v                             |
           +---------------------------------+     | SBI
          /        IP/MPLS Network            \    |
         +-------------------------------------+   |
                                                    v
             +--------------------------------------+
            /            Optical Network             \
           +------------------------------------------+