IPFIX Working Group                                          B. Trammell
Internet-Draft                                                ETH Zurich
Intended status: Standards Track                               A. Wagner
Expires: April 14, 2013                                      Consecom AG
                                                               B. Claise
                                                     Cisco Systems, Inc.
                                                        October 11, 2012

  Flow Aggregation for the IP Flow Information Export (IPFIX) Protocol
                      draft-ietf-ipfix-a9n-07.txt

Abstract

   This document provides a common implementation-independent basis for
   the interoperable application of the IP Flow Information Export
   (IPFIX) Protocol to the handling of Aggregated Flows, which are IPFIX
   Flows representing packets from multiple Original Flows sharing some
   set of common properties.  It does this through a detailed
   terminology and a descriptive Intermediate Aggregation Process
   architecture, including a specification of methods for Original Flow
   counting and counter distribution across intervals.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 14, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  IPFIX Protocol Overview
     1.2.  IPFIX Documents Overview
   2.  Terminology
   3.  Use Cases for IPFIX Aggregation
   4.  Architecture for Flow Aggregation
     4.1.  Aggregation within the IPFIX Architecture
     4.2.  Intermediate Aggregation Process Architecture
       4.2.1.  Correlation and Normalization
   5.  IP Flow Aggregation Operations
     5.1.  Temporal Aggregation through Interval Distribution
       5.1.1.  Distributing Values Across Intervals
       5.1.2.  Time Composition
       5.1.3.  External Interval Distribution
     5.2.  Spatial Aggregation of Flow Keys
       5.2.1.  Counting Original Flows
       5.2.2.  Counting Distinct Key Values
     5.3.  Spatial Aggregation of Non-Key Fields
       5.3.1.  Counter Statistics
       5.3.2.  Derivation of New Values from Flow Keys and non-Key fields
     5.4.  Aggregation Combination
   6.  Additional Considerations and Special Cases in Flow Aggregation
     6.1.  Exact versus Approximate Counting during Aggregation
     6.2.  Delay and Loss introduced by the IAP
     6.3.  Considerations for Aggregation of Sampled Flows
     6.4.  Considerations for Aggregation of Heterogeneous Flows
   7.  Export of Aggregated IP Flows using IPFIX
     7.1.  Time Interval Export
     7.2.  Flow Count Export
       7.2.1.  originalFlowsPresent
       7.2.2.  originalFlowsInitiated
       7.2.3.  originalFlowsCompleted
       7.2.4.  deltaFlowCount
     7.3.  Distinct Host Export
       7.3.1.  distinctCountOfSourceIPAddress
       7.3.2.  distinctCountOfDestinationIPAddress
       7.3.3.  distinctCountOfSourceIPv4Address
       7.3.4.  distinctCountOfDestinationIPv4Address
       7.3.5.  distinctCountOfSourceIPv6Address
       7.3.6.  distinctCountOfDestinationIPv6Address
     7.4.  Aggregate Counter Distribution Export
       7.4.1.  Aggregate Counter Distribution Options Template
       7.4.2.  valueDistributionMethod Information Element
   8.  Examples
     8.1.  Traffic Time-Series per Source
     8.2.  Core Traffic Matrix
     8.3.  Distinct Source Count per Destination Endpoint
     8.4.  Traffic Time-Series per Source with Counter Distribution
   9.  Security Considerations
   10. IANA Considerations
   11. Acknowledgments
   12. References
     12.1. Normative References
     12.2. Informative References
   Authors' Addresses

1.  Introduction

   The assembly of packet data into Flows serves a variety of different
   purposes, as noted in the requirements [RFC3917] and applicability
   statement [RFC5472] for the IP Flow Information Export (IPFIX)
   protocol [I-D.ietf-ipfix-protocol-rfc5101bis].  Aggregation beyond
   the flow level, into records representing multiple Flows, is a common
   analysis and data reduction technique as well, with applicability to
   large-scale network data analysis, archiving, and inter-organization
   exchange.  This applicability in large-scale situations, in
   particular, led to the inclusion of aggregation as part of the IPFIX
   Mediators Problem Statement [RFC5982], and the definition of an
   Intermediate Aggregation Process in the Mediator framework [RFC6183].

   Aggregation is used for analysis and data reduction in a wide variety
   of applications, for example in traffic matrix calculation,
   generation of time series data for visualizations or anomaly
   detection, or data reduction for long-term trending and storage.
   Depending on the keys used for aggregation, it may additionally have
   an anonymizing effect on the data: for example, aggregation
   operations which eliminate IP addresses make it impossible to later
   directly identify nodes using those addresses.

   Aggregation as defined and described in this document covers the
   applications defined in [RFC5982], including 5.1 "Adjusting Flow
   Granularity", 5.4 "Time Composition", and 5.5 "Spatial Composition".
   However, Section 4.2 of this document specifies a more flexible
   architecture for an Intermediate Aggregation Process than that
   envisioned by the original Mediator work.  Instead of a focus on
   these specific limited use cases, the Intermediate Aggregation
   Process is specified to cover any activity commonly described as
   "flow aggregation".  This architecture is intended to describe any
   such activity without reference to the specific implementation of
   aggregation.

   An Intermediate Aggregation Process may be applied to data collected
   from multiple Observation Points, as it is natural to use aggregation
   for data reduction when concentrating measurement data.  This
   document specifically does not address the protocol issues that arise
   when combining IPFIX data from multiple Observation Points and
   exporting from a single Mediator, as these issues are general to
   IPFIX Mediation; they are therefore treated in detail in the
   Mediation Protocol document [I-D.ietf-ipfix-mediation-protocol].

   Since Aggregated Flows as defined in the following section are
   essentially Flows, the IPFIX protocol
   [I-D.ietf-ipfix-protocol-rfc5101bis] can be used to export, and the
   IPFIX File Format [RFC5655] can be used to store, aggregated data
   "as-is"; there are no changes necessary to the protocol.  This
   document provides a common basis for the application of IPFIX to the
   handling of aggregated data, through a detailed terminology,
   Intermediate Aggregation Process architecture, and methods for
   Original Flow counting and counter distribution across intervals.

1.1.  IPFIX Protocol Overview

   In the IPFIX protocol, { type, length, value } tuples are expressed
   in Templates containing { type, length } pairs, specifying which {
   value } fields are present in data records conforming to the
   Template, giving great flexibility as to what data is transmitted.
   Since Templates are sent very infrequently compared with Data
   Records, this results in significant bandwidth savings.  Various
   different data formats may be transmitted simply by sending new
   Templates specifying the { type, length } pairs for the new data
   format.  See [I-D.ietf-ipfix-protocol-rfc5101bis] for more
   information.

   The IPFIX Information Element Registry [iana-ipfix-assignments]
   defines a large number of standard Information Elements which provide
   the necessary { type } information for Templates.  The use of
   standard elements enables interoperability among different vendors'
   implementations.  Additionally, non-standard enterprise-specific
   elements may be defined for private use.
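
   As a rough, non-normative illustration of this Template mechanism (a
   conceptual Python sketch only, not a wire-format implementation; the
   Information Element names and field lengths below are assumptions
   chosen for the example), a Data Record simply carries values in the
   order and lengths its Template declares:

   # A Template is conceptually a list of { type, length } pairs.
   TEMPLATE = [
       ("sourceIPv4Address", 4),
       ("destinationIPv4Address", 4),
       ("octetDeltaCount", 8),
   ]

   def encode_data_record(values):
       """Pack values in Template order, each in its declared length."""
       packed = b""
       for (_, length), value in zip(TEMPLATE, values):
           packed += value.to_bytes(length, "big")  # network byte order
       return packed

   # A record conforming to the Template: two addresses, one byte count.
   record = encode_data_record([0xC0A80001, 0xC0A80002, 1500])
   assert len(record) == 16  # no type information repeated per record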

1.2.  IPFIX Documents Overview

   "Specification of the IPFIX Protocol for the Exchange of IP Traffic
   Flow Information" [I-D.ietf-ipfix-protocol-rfc5101bis] and its
   associated documents define the IPFIX Protocol, which provides
   network engineers and administrators with access to IP traffic flow
   information.

   "Architecture for IP Flow Information Export" [RFC5470] defines the
   architecture for the export of measured IP flow information out of an
   IPFIX Exporting Process to an IPFIX Collecting Process, and the basic
   terminology used to describe the elements of this architecture, per
   the requirements defined in "Requirements for IP Flow Information
   Export" [RFC3917].  The IPFIX Protocol document
   [I-D.ietf-ipfix-protocol-rfc5101bis] then covers the details of the
   method for transporting IPFIX Data Records and Templates via a
   congestion-aware transport protocol from an IPFIX Exporting Process
   to an IPFIX Collecting Process.

   "IP Flow Information Export (IPFIX) Mediation: Problem Statement"
   [RFC5982] introduces the concept of IPFIX Mediators, and defines the
   use cases for which they were designed; "IP Flow Information Export
   (IPFIX) Mediation: Framework" [RFC6183] then provides an
   architectural framework for Mediators.  Protocol-level issues (e.g.,
   Template and Observation Domain handling across Mediators) are
   covered by "Specification of the Protocol for IPFIX Mediation"
   [I-D.ietf-ipfix-mediation-protocol].  This document specifies an
   Intermediate Process which may be applied at an IPFIX Mediator, as
   well as at an original Observation Point prior to export, or for
   analysis and data reduction purposes after receipt at a Collecting
   Process.

2.  Terminology

   Terms used in this document that are defined in the Terminology
   section of the IPFIX Protocol [I-D.ietf-ipfix-protocol-rfc5101bis]
   document are to be interpreted as defined there.

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   In addition, this document defines the following terms:

   Aggregated Flow:   A Flow, as defined by
      [I-D.ietf-ipfix-protocol-rfc5101bis], derived from a set of zero
      or more Original Flows within a defined Aggregation Interval.  The
      primary difference between a Flow and an Aggregated Flow in the
      general case is that the time interval (i.e., the two-tuple of
      start and end times) of a Flow is derived from information about
      the timing of the packets comprising the Flow, while the time
      interval of an Aggregated Flow is often externally imposed.  Note
      that an Aggregated Flow is defined in the context of an
      Intermediate Aggregation Process only.  Once an Aggregated Flow is
      exported, it is essentially a Flow as in
      [I-D.ietf-ipfix-protocol-rfc5101bis] and can be treated as such.

   Intermediate Aggregation Process (IAP):   an Intermediate Process, as
      in [RFC6183], that aggregates records based upon a set of Flow
      Keys or functions applied to fields from the record.

   Aggregation Interval:   A time interval imposed upon an Aggregated
      Flow.  Intermediate Aggregation Processes may use a regular
      Aggregation Interval (e.g. "every five minutes", "every calendar
      month"), though regularity is not necessary.  Aggregation
      intervals may also be derived from the time intervals of the
      Original Flows being aggregated.

   Partially Aggregated Flow:   A Flow during processing within an
      Intermediate Aggregation Process; refers to an intermediate data
      structure during aggregation within the Intermediate Aggregation
      Process architecture detailed in Section 4.2.

   Original Flow:   A Flow given as input to an Intermediate Aggregation
      Process in order to generate Aggregated Flows.

   Contributing Flow:   An Original Flow that is partially or completely
      represented within an Aggregated Flow.  Each Aggregated Flow is
      made up of zero or more Contributing Flows, and an Original Flow
      may contribute to zero or more Aggregated Flows.

   Original Exporter:   The Exporter from which the Original Flows are
      received; meaningful only when an IAP is deployed at a Mediator.

   The terminology presented herein improves the precision of, but does
   not supersede or contradict, the terms related to mediation and
   aggregation defined in the Mediation Problem Statement [RFC5982] and
   the Mediation Framework [RFC6183] documents.  Within this document,
   the terminology defined in this section is to be considered
   normative.

3.  Use Cases for IPFIX Aggregation

   Aggregation, as a common data reduction method used in traffic data
   analysis, has many applications.  When used with a regular
   Aggregation Interval and Original Flows containing timing
   information, it generates time series data from a collection of Flows
   with discrete intervals, as in the example in Section 8.1.  This time
   series data is itself useful for a wide variety of analysis tasks,
   such as generating input for network anomaly detection systems, or
   driving visualizations of volume per time for traffic with specific
   characteristics.  As a second example, traffic matrix calculation
   from flow data, as shown in Section 8.2, is inherently an aggregation
   action, spatially aggregating the Flow Key down to input or output
   interface, address prefix, or autonomous system.

   Irregular or data-dependent Aggregation Intervals and key aggregation
   operations can also be used to provide adaptive aggregation of
   network flow data.  Here, full Flow Records can be kept for Flows of
   interest, while Flows deemed "less interesting" to a given
   application can be aggregated.  For example, in an IPFIX Mediator
   equipped with traffic classification capabilities for security
   purposes, potentially malicious Flows could be exported directly,
   while known-good or probably-good Flows (e.g. normal web browsing)
   could be exported simply as time series volumes per web server.

   Aggregation can also be applied to final analysis of stored Flow
   data, as shown in the example in Section 8.3.  All such aggregation
   applications in which timing information is not available or not
   important can be treated as if an infinite Aggregation Interval
   applies.

   Note that an Intermediate Aggregation Process which removes
   potentially sensitive information as identified in [RFC6235] may tend
   to have an anonymizing effect on the Aggregated Flows as well;
   however, any application of aggregation as part of a data protection
   scheme should ensure that all the issues raised in [RFC6235] are
   addressed, specifically Section 4 "Anonymization of IP Flow Data",
   Section 7.2 "IPFIX-Specific Anonymization Guidelines", and Section 9
   "Security Considerations".

   While much of the discussion in this document, and all of the
   examples, apply to the common case that the Original Flows to be
   aggregated are all of the same underlying type (i.e., are represented
   with identical Templates or compatible Templates containing a core
   set of Information Elements which can be freely converted to one
   another), and that each packet observed by the Metering Process
   associated with the Original Exporter is represented, this is not a
   necessary assumption.  Aggregation can also be applied as part of a
   technique applying both aggregation and correlation to pull together
   multiple views of the same traffic from different Observation Points
   using different Templates.  For example, consider a set of
   applications running at different Observation Points for different
   purposes -- one generating flows with round-trip-times for passive
   performance measurement, and one generating billing records.  Once
   correlated, these flows could be used to produce Aggregated Flows
   containing both volume and performance information together.  The
   correlation and normalization operation described in Section 4.2.1
   handles this specific case of correlation.  Flow correlation in the
   general case is outside the scope of this document.

4.  Architecture for Flow Aggregation

   This section specifies the architecture of the Intermediate
   Aggregation Process, and how it fits into the IPFIX Architecture.

4.1.  Aggregation within the IPFIX Architecture

   An Intermediate Aggregation Process could be deployed at any of three
   places within the IPFIX Architecture.  While aggregation is most
   commonly done within a Mediator which collects Original Flows from an
   Original Exporter and exports Aggregated Flows, aggregation can also
   occur before initial export, or after final collection, as shown in
   Figure 1.  The presence of an IAP at any of these points is of course
   optional.

   +===========================================+
   |  IPFIX Exporter        +----------------+ |
   |                        | Metering Proc. | |
   | +-----------------+    +----------------+ |
   | | Metering Proc.  | or |      IAP       | |
   | +-----------------+----+----------------+ |
   | |           Exporting Process           | |
   | +-|----------------------------------|--+ |
   +===|==================================|====+
       |                                  |
   +===|===========================+      |
   |   |  Aggregating Mediator     |      |
   | +-V-------------------+       |      |
   | | Collecting Process  |       |      |
   | +---------------------+       |      |
   | |         IAP         |       |      |
   | +---------------------+       |      |
   | |  Exporting Process  |       |      |
   | +-|-------------------+       |      |
   +===|===========================+      |
       |                                  |
   +===|==================================|=====+
   |   | Collector                        |     |
   | +-V----------------------------------V-+   |
   | |         Collecting Process           |   |
   | +------------------+-------------------+   |
   |                    |        IAP        |   |
   |                    +-------------------+   |
   |  (Aggregation      |   File Writer     |   |
   |   for Storage)     +-----------|-------+   |
   +================================|===========+
                                    |
                             +------V-----------+
                             |    IPFIX File    |
                             +------------------+

                 Figure 1: Potential Aggregation Locations

   The Mediator use case is further shown in Figures A and B in
   [RFC6183].

   Aggregation can be applied for either intermediate or final analytic
   purposes.  In certain circumstances, it may make sense to export
   Aggregated Flows directly after metering, for example, if the
   Exporting Process is used to drive a time-series visualization, or
   when flow data export bandwidth is restricted and flow or packet
   sampling is not an option.  Note that this case, where the
   Aggregation Process is essentially integrated into the Metering
   Process, is essentially covered by the IPFIX architecture [RFC5470]:
   the Flow Keys used are simply a subset of those that would normally
   be used, and time intervals may be chosen other than those available
   from the cache policies customarily offered by the Metering Process.
   A Metering Process in this arrangement MAY choose to simulate the
   generation of larger Flows in order to generate Original Flow counts,
   if the application calls for compatibility with an Intermediate
   Aggregation Process deployed in a separate location.

   In the specific case that an Intermediate Aggregation Process is
   employed for data reduction for storage purposes, it can take
   Original Flows from a Collecting Process or File Reader and pass
   Aggregated Flows to a File Writer for storage.

   Deployment of an Intermediate Aggregation Process within a Mediator
   [RFC5982] is a much more flexible arrangement.  Here, the Mediator
   consumes Original Flows and produces Aggregated Flows; this
   arrangement is suited to any of the use cases detailed in Section 3.
   In a Mediator, Original Flows from multiple sources can also be
   aggregated into a single stream of Aggregated Flows; the
   architectural specifics of this arrangement are not addressed in this
   document, which is concerned only with the aggregation operation
   itself; see [I-D.ietf-ipfix-mediation-protocol] for details.

   The data paths into and out of an Intermediate Aggregation Process
   are shown in Figure 2.

     packets --+               IPFIX Messages      IPFIX Files
               |                     |                  |
               V                     V                  V
     +==================+ +====================+ +=============+
     | Metering Process | | Collecting Process | | File Reader |
     |                  | +====================+ +=============+
     | (Original Flows  |            |                  |
     |    or direct     |            |  Original Flows  |
     |   aggregation)   |            V                  V
     + - - - - - - - - -+======================================+
     |           Intermediate Aggregation Process (IAP)        |
     +=========================================================+
               | Aggregated                  Aggregated |
               | Flows                            Flows |
               V                                        V
     +===================+                       +=============+
     | Exporting Process |                       | File Writer |
     +===================+                       +=============+
               |                                        |
               V                                        V
         IPFIX Messages                            IPFIX Files

           Figure 2: Data paths through the aggregation process

   Note that as Aggregated Flows are IPFIX Flows, an Intermediate
   Aggregation Process may aggregate already-Aggregated Flows from an
   upstream IAP as well as Original Flows from an upstream Original
   Exporter or Metering Process.

   Aggregation may also need to correlate original flows from multiple
   Metering Processes, each according to a different Template with
   different Flow Keys and values.  This arrangement is shown in
   Figure 3; in this case, the correlation and normalization operation
   described in Section 4.2.1 handles merging the Original Flows before
   aggregation.

    packets --+---------------------+------------------+
              |                     |                  |
              V                     V                  V
    +====================+ +====================+ +====================+
    | Metering Process 1 | | Metering Process 2 | | Metering Process n |
    +====================+ +====================+ +====================+
              |                     |  Original Flows  |
              V                     V                  V
    +==================================================================+
    | Intermediate Aggregation Process  +  correlation / normalization |
    +==================================================================+
              | Aggregated                  Aggregated |
              | Flows                            Flows |
              V                                        V
    +===================+                       +=============+
    | Exporting Process |                       | File Writer |
    +===================+                       +=============+
              |                                        |
              +------------> IPFIX Messages <----------+

   Figure 3: Aggregating Original Flows from multiple Metering Processes

4.2.  Intermediate Aggregation Process Architecture

   Within this document, an Intermediate Aggregation Process can be seen
   as hosting a function composed of four types of operations on
   Partially Aggregated Flows, as illustrated in Figure 4: interval
   distribution (temporal), key aggregation (spatial), value aggregation
   (spatial), and aggregate combination.  "Partially Aggregated Flows"
   as defined in Section 2 are essentially the intermediate results of
   aggregation, internal to the Intermediate Aggregation Process.

           Original Flows  /   Original Flows requiring correlation
   +=============|===================|===================|=============+
   |             |   Intermediate    |    Aggregation    |   Process   |
   |             |                   V                   V             |
   |             |   +-----------------------------------------------+ |
   |             |   |   (optional) correlation and normalization    | |
   |             |   +-----------------------------------------------+ |
   |             |                          |                          |
   |             V                          V                          |
   |  +--------------------------------------------------------------+ |
   |  |                interval distribution (temporal)              | |
   |  +--------------------------------------------------------------+ |
   |           | ^                         | ^                |        |
   |           | |  Partially Aggregated   | |                |        |
   |           V |         Flows           V |                |        |
   |  +-------------------+       +--------------------+      |        |
   |  |  key aggregation  |<------|  value aggregation |      |        |
   |  |     (spatial)     |------>|      (spatial)     |      |        |
   |  +-------------------+       +--------------------+      |        |
   |            |                          |                  |        |
   |            |   Partially Aggregated   |                  |        |
   |            V          Flows           V                  V        |
   |  +--------------------------------------------------------------+ |
   |  |                     aggregate combination                    | |
   |  +--------------------------------------------------------------+ |
   |                                       |                           |
   +=======================================|===========================+
                                           V
                                   Aggregated Flows

    Figure 4: Conceptual model of aggregation operations within an IAP

   Interval distribution:   a temporal aggregation operation which
      imposes an Aggregation Interval on the Partially Aggregated Flow.
      This Aggregation Interval may be regular, irregular, or derived
      from the timing of the Original Flows themselves.  Interval
      distribution is discussed in detail in Section 5.1.

   Key aggregation:   a spatial aggregation operation which results in
      the addition, modification, or deletion of Flow Key fields in the
      Partially Aggregated Flows.  New Flow Keys may be derived from
      existing Flow Keys (e.g., looking up an AS number for an IP
      address), or "promoted" from specific non-Key fields (e.g., when
      aggregating Flows by packet count per Flow).  Key aggregation can
      also add new non-Key fields derived from Flow Keys that are
      deleted during key aggregation; mainly counters of unique reduced
      keys.  Key aggregation is discussed in detail in Section 5.2.

   Value aggregation:   a spatial aggregation operation which results in
      the addition, modification, or deletion of non-Key fields in the
      Partially Aggregated Flows.  These non-Key fields may be "demoted"
      from existing Key fields, or derived from existing Key or non-Key
      fields.  Value aggregation is discussed in detail in Section 5.3.

   Aggregate combination:   an operation combining multiple Partially
      Aggregated Flows having undergone interval distribution, key
      aggregation, and value aggregation which share Flow Keys and
      Aggregation Intervals into a single Aggregated Flow per set of
      Flow Key values and Aggregation Interval.  Aggregate combination
      is discussed in detail in Section 5.4.

   Correlation and normalization:   an optional operation that applies
      when accepting Original Flows from Metering Processes which export
      different views of essentially the same Flows before aggregation;
      the details of correlation and normalization are specified in
      Section 4.2.1, below.

   The first three of these operations may be carried out any number of
   times in any order, either on Original Flows or on the results of one
   of the operations above, with one caveat: since Flows carry their own
   interval data, any spatial aggregation operation implies a temporal
   aggregation operation, so at least one interval distribution step,
   even if implicit, is required by this architecture.  This is shown as
   the first step for the sake of simplicity in the diagram above.  Once
   all aggregation operations are complete, aggregate combination
   ensures that for a given Aggregation Interval, set of Flow Key
   values, and Observation Domain, only one Flow is produced by the
   Intermediate Aggregation Process.
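
   The following minimal Python sketch (the function and type names are
   illustrative assumptions, not a normative interface) shows one way
   the four operation types could compose into a single IAP function,
   mirroring Figure 4:

   from typing import Callable, Iterable, List

   # A Partially Aggregated Flow is modeled here simply as a dict.
   Flow = dict
   Operation = Callable[[List[Flow]], List[Flow]]

   def intermediate_aggregation_process(
           original_flows: Iterable[Flow],
           interval_distribution: Operation,
           key_aggregation: Operation,
           value_aggregation: Operation,
           aggregate_combination: Operation) -> List[Flow]:
       """Apply the operations of Figure 4 in one simple fixed order;
       the architecture allows the first three in any order, with at
       least one interval distribution step."""
       partial = list(original_flows)            # Original Flows in
       partial = interval_distribution(partial)  # temporal
       partial = key_aggregation(partial)        # spatial, Flow Keys
       partial = value_aggregation(partial)      # spatial, non-Key fields
       return aggregate_combination(partial)     # Aggregated Flows out

   identity = lambda flows: flows
   assert intermediate_aggregation_process(
       [{"octets": 1}], identity, identity, identity, identity
   ) == [{"octets": 1}]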

   This model describes the operations within a single Intermediate
   Aggregation Process, and it is anticipated that most aggregation will
   be applied within a single process.  However, as the steps in the
   model may be applied in any order and aggregate combination is
   idempotent, any number of Intermediate Aggregation Processes
   operating in series can be modeled as a single process.  This allows
   aggregation operations to be flexibly distributed across any number
   of processes, should application or deployment considerations so
   dictate.

4.2.1.  Correlation and Normalization

   When accepting Original Flows from multiple Metering Processes, each
   of which provides a different view of the Original Flow as seen from
   the point of view of the IAP, an optional correlation and
   normalization operation combines each of these single Flow Records
   into a set of unified Partially Aggregated Flows before applying
   interval distribution.  These unified Flows appear as if they had
   been measured at a single Metering Process which used the union of
   the set of Flow Keys and non-key fields of all Metering Processes
   sending Original Flows to the IAP.

   Due to export errors or other slight irregularities in flow metering,
   the multiple views may not be completely consistent; normalization
   therefore involves applying a set of aggregation-application-specific
   corrections in order to ensure consistency in the unified Flows.

   In general, correlation and normalization should take multiple views
   of essentially the same Flow, as determined by the configuration of
   the operation itself, and render them into a single unified Flow.
   Flows which are essentially different should not be unified by the
   correlation and normalization operation.  This operation therefore
   requires enough information about the configuration and deployment of
   the Metering Processes whose Original Flows it correlates in order to
   make this distinction correctly and consistently.

   The exact steps performed to correlate and normalize flows in this
   step are application-, implementation-, and deployment-specific, and
   will not be further specified in this document.

5.  IP Flow Aggregation Operations

   As stated in Section 2, an Aggregated Flow is simply an IPFIX Flow
   generated from Original Flows by an Intermediate Aggregation Process.
   Here, we detail the operations by which this is achieved within an
   Intermediate Aggregation Process.

5.1.  Temporal Aggregation through Interval Distribution

   Interval distribution imposes a time interval on the resulting
   Aggregated Flows.  The selection of an interval is specific to the
   given aggregation application.  Intervals may be derived from the
   Original Flows themselves (e.g., an interval may be selected to cover
   the entire time containing the set of all Flows sharing a given Key,
   as in Time Composition described in Section 5.1.2) or externally
   imposed; in the latter case the externally imposed interval may be
   regular (e.g., every five minutes) or irregular (e.g., to allow for
   different time resolutions at different times of day, under different
   network conditions, or indeed for different sets of Original Flows).

   The length of the imposed interval itself has tradeoffs.  Shorter
   intervals allow higher-resolution aggregated data and, in streaming
   applications, faster reaction time.  Longer intervals generally lead
   to greater data reduction and simplified counter distribution.
   Specifically, counter distribution is greatly simplified by the
   choice of an interval longer than the duration of the longest Original
   Flow, itself generally determined by the Original Flow's Metering
   Process active timeout; in this case an Original Flow can contribute
   to at most two Aggregated Flows, and the more complex value
   distribution methods become inapplicable.

   |                |                |                |
   | |<--Flow A-->| |                |                |
   |        |<--Flow B-->|           |                |
   |          |<-------------Flow C-------------->|   |
   |                |                |                |
   |   interval 0   |   interval 1   |   interval 2   |

              Figure 5: Illustration of interval distribution

   In Figure 5, we illustrate three common possibilities for interval
   distribution, as applied with regular intervals to a set of three
   Original Flows.  For Flow A, the start and end times lie within the
   boundaries of a single interval 0; therefore, Flow A contributes to
   only one Aggregated Flow.  Flow B, by contrast, has the same duration
   but crosses the boundary between intervals 0 and 1; therefore, it
   will contribute to two Aggregated Flows, and its counters must be
   distributed among these Flows, though in the two-interval case this
   can be simplified by simply accounting the Flow to one of the two
   intervals, or by distributing proportionally between them.  Only
   Flows like Flow A and Flow B will be produced when the interval is
   chosen to be longer than the duration of the longest Original Flow,
   as above.
   More complicated is the case of Flow C, which contributes to more
   than two Aggregated Flows, and must have its counters distributed
   according to some policy as in Section 5.1.1.
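
   A minimal Python sketch of this interval logic, assuming regular
   intervals of a fixed length and numeric (e.g., Unix-timestamp) Flow
   start and end times; all names and values here are illustrative:

   def covered_intervals(flow_start, flow_end, interval_len):
       """Return the indices of the regular intervals a Flow overlaps."""
       first = int(flow_start // interval_len)
       last = int(flow_end // interval_len)
       return list(range(first, last + 1))

   # With 300-second intervals: a Flow like Flow A stays within one
   # interval, Flow B crosses one boundary, and Flow C spans three.
   assert covered_intervals(10, 200, 300) == [0]         # "Flow A"
   assert covered_intervals(250, 400, 300) == [0, 1]     # "Flow B"
   assert covered_intervals(290, 650, 300) == [0, 1, 2]  # "Flow C"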

5.1.1.  Distributing Values Across Intervals

   In general, counters in Aggregated Flows are treated the same as in
   any Flow.  Each counter is independently calculated as if it were
   derived from the set of packets in the Original Flow: e.g., delta
   counters are summed, the most recent total count for each Original
   Flow is taken and then summed across Flows, and so on.

   When the Aggregation Interval is guaranteed to be longer than the
   longest Original Flow, a Flow can cross at most one Interval
   boundary, and will therefore contribute to at most two Aggregated
   Flows.  Most common in this case is to arbitrarily but consistently
   choose to account the Original Flow's counters either to the first or
   the last Aggregated Flow to which it could contribute.

   However, this becomes more complicated when the Aggregation Interval
   is shorter than the longest Original Flow in the source data.  In
   such cases, each Original Flow can incompletely cover one or more
   time intervals, and apply to one or more Aggregated Flows.  In this
   case, the Intermediate Aggregation Process must distribute the
   counters in the Original Flows across one or more resulting
   Aggregated Flows.  There are several methods for doing this, listed
   here in roughly increasing order of complexity and accuracy; most of
   these are necessary only in specialized cases.

   End Interval:   The counters for an Original Flow are added to the
      counters of the appropriate Aggregated Flow containing the end
      time of the Original Flow.

   Start Interval:   The counters for an Original Flow are added to the
      counters of the appropriate Aggregated Flow containing the start
      time of the Original Flow.

   Mid Interval:   The counters for an Original Flow are added to the
      counters of a single appropriate Aggregated Flow containing some
      timestamp between start and end time of the Original Flow.

   Simple Uniform Distribution:   Each counter for an Original Flow is
      divided by the number of time intervals the Original Flow covers
      (i.e., of appropriate Aggregated Flows sharing the same Flow
      Keys), and this number is added to each corresponding counter in
      each Aggregated Flow.

   Proportional Uniform Distribution:   This is like simple uniform
      distribution, but accounts for the fractional portions of a time
      interval covered by an Original Flow in the first and last time
      interval.  Each counter for an Original Flow is divided by the
      number of time _units_ the Original Flow covers, to derive a mean
      count rate.  This rate is then multiplied by the number of time
      units in the intersection of the duration of the Original Flow and
      the time interval of each Aggregated Flow.

   Simulated Process:   Each counter of the Original Flow is distributed
      among the intervals of the Aggregated Flows according to some
      function the Intermediate Aggregation Process uses based upon
      properties of Flows presumed to be like the Original Flow.  For
      example, Flow Records representing bulk transfer might follow a
      more or less proportional uniform distribution, while interactive
      processes are far more bursty.

   Direct:   The Intermediate Aggregation Process has access to the
      original packet timings from the packets making up the Original
      Flow, and uses these to distribute or recalculate the counters.

   A method for exporting the distribution of counters across multiple
   Aggregated Flows is detailed in Section 7.4.  In any case, counters
   MUST be distributed across the multiple Aggregated Flows in such a
   way that the total count is preserved, within the limits of accuracy
   of the implementation.  This property allows data to be aggregated
   and re-aggregated with negligible loss of original count information.
   To avoid confusion in interpretation of the aggregated data, all the
   counters from one Aggregated Flow MUST be distributed via the same
   method.
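
   As a non-normative Python sketch, the following implements two of the
   methods above under the assumption of regular, fixed-length
   intervals; note that both preserve the total count, as required:

   def start_interval(flow_start, flow_end, count, interval_len):
       """Start Interval: account the entire counter to the interval
       containing the Flow's start time."""
       return {int(flow_start // interval_len): count}

   def proportional_uniform(flow_start, flow_end, count, interval_len):
       """Proportional Uniform Distribution: derive a mean count rate
       and multiply it by the Flow's overlap with each interval."""
       duration = flow_end - flow_start
       if duration <= 0:  # degenerate single-point Flow
           return start_interval(flow_start, flow_end, count,
                                 interval_len)
       rate = count / duration
       shares = {}
       first = int(flow_start // interval_len)
       last = int(flow_end // interval_len)
       for i in range(first, last + 1):
           lo = max(flow_start, i * interval_len)
           hi = min(flow_end, (i + 1) * interval_len)
           shares[i] = rate * (hi - lo)
       return shares

   # A Flow like Flow C in Figure 5: the shares sum to the original
   # count, within floating-point accuracy.
   shares = proportional_uniform(290, 650, 3600, 300)
   assert abs(sum(shares.values()) - 3600) < 1e-9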

   More complex counter distribution methods generally require that the
   interval distribution process track multiple "current" time intervals
   at once.  This may introduce some delay into the aggregation
   operation, as an interval should only expire and be available for
   export when no additional Original Flows applying to the interval are
   expected to arrive at the Intermediate Aggregation Process.

   Note, however, that since there is no guarantee that Flows from the
   Original Exporter will arrive in any given order, whether for
   transport-specific reasons (i.e., UDP reordering) or Metering Process
   or Exporting Process implementation-specific reasons, even simpler
   distribution methods may need to deal with flows arriving in other
   than start time or end time order.  Therefore, the use of larger
   intervals does not obviate the need to buffer Partially Aggregated
   Flows within "current" time intervals, to ensure the IAP can accept
   flow time intervals in any arrival order.  More generally, the
   interval distribution process SHOULD accept flow start and end times
   in the Original Flows in any reasonable order.  The expiration of
   intervals in interval distribution operations is dependent on
   implementation and deployment requirements, and SHOULD be made
   configurable in contexts in which "reasonable order" is not obvious
   at implementation time.  This operation may lead to delay and loss
   introduced by the IAP, as detailed in Section 6.2.

5.1.2.  Time Composition

   Time Composition as in Section 5.4 of [RFC5982] (or interval
   combination) is a special case of aggregation, where interval
   distribution imposes longer intervals on Flows with matching keys and
   "chained" start and end times, without any key reduction, in order to
   join long-lived Flows which may have been split (e.g., due to an
   active timeout shorter than the actual duration of the Flow).  Here,
   no Key aggregation is applied, and the Aggregation Interval is chosen
   on a per-Flow basis to cover the interval spanned by the set of
   aggregated Flows.  This may be applied alone in order to normalize
   split Flows, or in combination with other aggregation functions in
   order to obtain more accurate Original Flow counts.
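
   A minimal Python sketch of Time Composition under simple assumptions
   (Flows are dicts with numeric start/end fields and summable delta
   counters; the key and field names, and the permitted gap, are
   illustrative):

   def time_compose(flows, flow_keys=("src", "dst"), max_gap=1.0):
       """Merge Flows with identical keys whose time intervals chain
       together, e.g. Flows split by an active timeout."""
       merged = []
       for flow in sorted(flows, key=lambda f: (
               tuple(f[k] for k in flow_keys), f["start"])):
           prev = merged[-1] if merged else None
           if (prev is not None
                   and all(prev[k] == flow[k] for k in flow_keys)
                   and flow["start"] - prev["end"] <= max_gap):
               prev["end"] = max(prev["end"], flow["end"])  # join interval
               prev["octets"] += flow["octets"]             # sum counters
               prev["packets"] += flow["packets"]
           else:
               merged.append(dict(flow))
       return merged

   split = [{"src": "a", "dst": "b", "start": 0, "end": 300,
             "octets": 1000, "packets": 10},
            {"src": "a", "dst": "b", "start": 300, "end": 450,
             "octets": 500, "packets": 5}]
   assert time_compose(split) == [{"src": "a", "dst": "b", "start": 0,
                                   "end": 450, "octets": 1500,
                                   "packets": 15}]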

5.1.3.  External Interval Distribution

   Note that much of the difficulty of interval distribution at an IAP
   can be avoided simply by configuring the original Exporters to
   synchronize the time intervals in the Original Flows with the desired
   aggregation interval.  The resulting Original Flows would then be
   split to align perfectly with the time intervals imposed during
   interval distribution, as shown in Figure 6, though this may reduce
   their usefulness for non-Aggregation purposes.  This approach allows
   the Intermediate Aggregation Process to use Start Interval or End
   Interval distribution, while having equivalent information to that
   available to Direct interval distribution.

   |                |                |                |
   |<----Flow D---->|<----Flow E---->|<----Flow F---->|
   |                |                |                |
   |   interval 0   |   interval 1   |   interval 2   |

         Figure 6: Illustration of external interval distribution

5.2.  Spatial Aggregation of Flow Keys

   Key aggregation generates a new set of Flow Key values for the
   Aggregated Flows from the Original Flow Key and non-Key fields in the
   Original Flows, or from correlation of the Original Flow information
   with some external source.  There are two basic operations here.
   First, Aggregated Flow Keys may be derived directly from Original
   Flow Keys through reduction, or the dropping of fields or precision
   in the Original Flow Keys.  Second, Aggregated Flow Keys may be
   derived through replacement, e.g. by removing one or more fields from
   the Original Flow and replacing them with fields derived from the
   removed fields.  Replacement may refer to external information (e.g.,
   IP to AS number mappings).  Replacement may apply to Flow Keys as
   well as non-key fields.  For example, consider an application which
   aggregates Original Flows by packet count (i.e., generating an
   Aggregated Flow for all one-packet Flows, one for all two-packet
   Flows, and so on).  This application would promote the packet count
   to a Flow Key.

   Key aggregation may also result in the addition of new non-Key fields
   to the Aggregated Flows, namely Original Flow counters and unique
   reduced key counters; these are treated in more detail in
   Section 5.2.1 and Section 5.2.2, respectively.

   In any key aggregation operation, reduction and/or replacement may be
   applied any number of times in any order.  Which of these operations
   are supported by a given implementation is implementation- and
   application-dependent.

   Original Flow Keys

   +---------+---------+----------+----------+-------+-----+
   | src ip4 | dst ip4 | src port | dst port | proto | tos |
   +---------+---------+----------+----------+-------+-----+
        |         |         |          |         |      |
     retain   mask /24      X          X         X      X
        |         |
        V         V
   +---------+-------------+
   | src ip4 | dst ip4 /24 |
   +---------+-------------+

   Aggregated Flow Keys (by source address and destination class-C)

          Figure 7: Illustration of key aggregation by reduction

   Figure 7 illustrates an example reduction operation, aggregation by
   source address and destination class C network.  Here, the port,
   protocol, and type-of-service information is removed from the Flow
   Key, the source address is retained, and the destination address is
   masked by dropping the lower 8 bits.

   Original Flow Keys

   +---------+---------+----------+----------+-------+-----+
   | src ip4 | dst ip4 | src port | dst port | proto | tos |
   +---------+---------+----------+----------+-------+-----+
        |         |         |          |         |      |
        V         V         |          |         |      |
   +-------------------+    X          X         X      X
   | ASN lookup table  |
   +-------------------+
        |         |
        V         V
   +---------+---------+
   | src asn | dst asn |
   +---------+---------+

   Aggregated Flow Keys (by source and dest ASN)

        Figure 8: Illustration of key aggregation by reduction and
                                replacement

   Figure 8 illustrates an example reduction and replacement operation,
   aggregation by source and destination Border Gateway Protocol (BGP)
   Autonomous System Number (ASN) without ASN information available in
   the Original Flow.  Here, the port, protocol, and type-of-service
   information is removed from the Flow Keys, while the source and
   destination addresses are run though an IP address to ASN lookup
   table, and the Aggregated Flow Keys are made up of the resulting
   source and destination ASNs.
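
   A minimal Python sketch of both operations, representing Flows as
   dicts and using a small hard-coded prefix-to-ASN table as a stand-in
   for a real BGP or registry lookup (all names, addresses, and ASNs
   below are illustrative assumptions):

   import ipaddress

   # Hypothetical lookup table standing in for an IP-to-ASN mapping.
   ASN_TABLE = {ipaddress.ip_network("192.0.2.0/24"): 64500,
                ipaddress.ip_network("198.51.100.0/24"): 64501}

   def lookup_asn(addr):
       a = ipaddress.ip_address(addr)
       for net, asn in ASN_TABLE.items():
           if a in net:
               return asn
       return 0  # "unknown"

   def reduce_keys(flow):
       """Reduction: drop ports/protocol/ToS, mask destination to /24."""
       dst_net = ipaddress.ip_network(flow["dst_ip4"] + "/24",
                                      strict=False)
       return {"src_ip4": flow["src_ip4"], "dst_net4": str(dst_net)}

   def replace_keys(flow):
       """Replacement: substitute source and destination ASNs."""
       return {"src_asn": lookup_asn(flow["src_ip4"]),
               "dst_asn": lookup_asn(flow["dst_ip4"])}

   flow = {"src_ip4": "192.0.2.1", "dst_ip4": "198.51.100.25",
           "src_port": 1234, "dst_port": 80, "proto": 6, "tos": 0}
   assert reduce_keys(flow) == {"src_ip4": "192.0.2.1",
                                "dst_net4": "198.51.100.0/24"}
   assert replace_keys(flow) == {"src_asn": 64500, "dst_asn": 64501}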

5.2.1.  Counting Original Flows

   When aggregating multiple Original Flows into an Aggregated Flow, it
   is often useful to know how many Original Flows are present in the
   Aggregated Flow.  Section 7.2 introduces four new Information
   Elements to export these counters.

   There are two possible ways to count Original Flows, which we call
   here conservative and non-conservative.  Conservative flow counting
   has the property that each Original Flow contributes exactly one to
   the total flow count within a set of Aggregated Flows.  In other
   words, conservative flow counters are distributed just as any other
   counter during interval distribution, except each Original Flow is
   assumed to have a flow count of one.  When a count for an Original
   Flow must be distributed across a set of Aggregated Flows, and a
   distribution method is used which does not account for that Original
   Flow completely within a single Aggregated Flow, conservative flow
   counting requires a fractional representation.

   By contrast, non-conservative flow counting is used to count how many
   Contributing Flows are represented in an Aggregated Flow.  Flow
   counters are not distributed in this case.  An Original Flow which is
   present within N Aggregated Flows would add N to the sum of non-
   conservative flow counts, one to each Aggregated Flow.  In other
   words, the sum of conservative flow counts over a set of Aggregated
   Flows is always equal to the number of Original Flows, while the sum
   of non-conservative flow counts is strictly greater than or equal to
   the number of Original Flows.

   For example, consider Flows A, B, and C as illustrated in Figure 5.
   Assume that the key aggregation step aggregates the keys of these
   three Flows to the same aggregated Flow Key, and that start interval
   counter distribution is in effect.  The conservative flow count for
   interval 0 is 3 (since Flows A, B, and C all begin in this interval),
   and for the other two intervals is 0.  The non-conservative flow
   count for interval 0 is also 3 (due to the presence of Flows A, B,
   and C), for interval 1 is 2 (Flows B and C), and for interval 2 is 1
   (Flow C).  The sum of the conservative counts is 3 + 0 + 0 = 3, the
   number of Original Flows, while the sum of the non-conservative
   counts is 3 + 2 + 1 = 6.
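
   The worked example above can be reproduced with a short Python
   sketch; the interval length and Flow times are assumptions chosen to
   match Figure 5, and Start Interval distribution is used for the
   conservative count:

   def flow_counts(flows, interval_len, num_intervals):
       """Per-interval conservative and non-conservative Original Flow
       counts, using Start Interval distribution for the former."""
       conservative = [0] * num_intervals
       non_conservative = [0] * num_intervals
       for start, end in flows:
           conservative[int(start // interval_len)] += 1
           for i in range(int(start // interval_len),
                          int(end // interval_len) + 1):
               non_conservative[i] += 1
       return conservative, non_conservative

   # Flows A, B, and C of Figure 5, with 300-unit intervals.
   flows = [(10, 200), (250, 400), (290, 650)]
   conservative, non_conservative = flow_counts(flows, 300, 3)
   assert conservative == [3, 0, 0]      # sums to the 3 Original Flows
   assert non_conservative == [3, 2, 1]  # sums to 6 Contributing Flows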

   Note that the active and inactive timeouts used to generate Original
   Flows, as well as the cache policy used to generate those Flows, have
   an effect on how meaningful either the conservative or non-
   conservative flow count will be during aggregation.  In general,
   Original Exporters using the IPFIX Configuration Model SHOULD be
   configured to export Flows with equal or similar activeTimeout and
   inactiveTimeout configuration values, and the same cacheMode, as
   defined in [I-D.ietf-ipfix-configuration-model]; Original Exporters
   not using the IPFIX Configuration Model SHOULD be configured
   equivalently.

5.2.2.  Counting Distinct Key Values

   One common case in aggregation is counting distinct key values that
   were reduced away during key aggregation.  The most common use case
   for this is counting distinct hosts per Flow Key; for example, in
   host characterization or anomaly detection, distinct sources per
   destination or distinct destinations per source are common metrics.
   These new non-Key fields are added during key aggregation.

   For such applications, Information Elements for distinct counts of
   IPv4 and IPv6 addresses are defined in Section 7.3.  These are named
   distinctCountOf(KeyName).  Additional such Information Elements
   SHOULD be registered with IANA on an as-needed basis.
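
   As an informal sketch (not part of this specification), a distinct
   count of a key reduced away during key aggregation can be
   maintained with a per-aggregate set of the values seen; the
   addresses below are illustrative.

   # Counting distinct source addresses per destination while the
   # source address is removed from the Flow Key.
   from collections import defaultdict

   original_flows = [
       ("192.0.2.2", "192.0.2.131"),
       ("192.0.2.3", "192.0.2.131"),
       ("192.0.2.2", "198.51.100.2"),
       ("192.0.2.4", "198.51.100.2"),
       ("192.0.2.2", "198.51.100.2"),
   ]

   sources_seen = defaultdict(set)  # aggregated key -> sources
   for src, dst in original_flows:
       sources_seen[dst].add(src)   # src is dropped from the key

   for dst, srcs in sources_seen.items():
       # exported as distinctCountOfSourceIPAddress per aggregate
       print(dst, len(srcs))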

5.3.  Spatial Aggregation of Non-Key Fields

   Aggregation operations may also lead to the addition of value fields
   demoted from key fields, or derived from other value fields in the
   Original Flows.  Specific cases of this are treated in the
   subsections below.

5.3.1.  Counter Statistics

   Some applications of aggregation may benefit from computing
   statistics other than those native to each non-key field (e.g.,
   flags are natively combined via union, and delta counters by
   summing): for example, minimum and maximum packet counts per Flow,
   mean bytes per packet per Contributing Flow, and so on.  Certain
   Information Elements for these applications are already provided in
   the IANA IPFIX Information Elements registry
   (http://www.iana.org/assignments/ipfix/ipfix.html), e.g.,
   minimumIpTotalLength.

   A complete specification of additional aggregate counter statistics
   is outside the scope of this document, and should be added in the
   future to the IANA IPFIX Information Elements registry on a per-
   application, as-needed basis.
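
   As a purely illustrative sketch (the statistic names below are not
   Information Elements defined by this document), such values might
   be computed over the Contributing Flows of a single aggregate as
   follows.

   # Per-aggregate statistics over Contributing Flows, each given as
   # (packetDeltaCount, octetDeltaCount); values are illustrative.
   contributing = [(12, 1637), (3, 119), (40, 16838)]

   packets = [p for p, o in contributing]
   stats = {
       "minPacketsPerFlow": min(packets),
       "maxPacketsPerFlow": max(packets),
       # mean bytes per packet, averaged per Contributing Flow
       "meanOctetsPerPacket":
           sum(o / p for p, o in contributing) / len(contributing),
   }
   print(stats)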

5.3.2.  Derivation of New Values from Flow Keys and Non-Key Fields

   More complex operations may lead to other derived fields being
   generated from the set of values or Flow Keys reduced away during
   aggregation.  A prime example is sample entropy calculation, which
   counts distinct values and their frequencies; it is therefore
   similar to distinct key counting as in Section 5.2.2, but may be
   applied to the distribution of values of any flow field.

   Sample entropy calculation provides a one-number normalized
   representation of the value spread and is useful for anomaly
   detection.  The behavior of entropy statistics is such that a small
   number of keys showing up very often drives the entropy value down
   towards zero, while a large number of keys, each showing up with
   lower frequency, drives the entropy value up.
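
   A minimal sketch of such a calculation (not part of this
   specification) follows; it computes the normalized Shannon entropy
   of a source address distribution, yielding a value near 0 when one
   address dominates and near 1 when addresses are evenly spread.

   # Normalized sample entropy of a flow field's value distribution.
   import math
   from collections import Counter

   values = ["192.0.2.2", "192.0.2.2", "192.0.2.2", "192.0.2.3",
             "192.0.2.4", "203.0.113.3"]

   counts = Counter(values)
   total = sum(counts.values())
   entropy = -sum((c / total) * math.log2(c / total)
                  for c in counts.values())
   if len(counts) > 1:
       entropy /= math.log2(len(counts))  # normalize to [0, 1]
   print(round(entropy, 3))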

   Entropy statistics are generally useful for identifier keys, such as
   IP addresses, port numbers, AS numbers, etc.  They can also be
   calculated on flow length, flow duration, and similar fields,
   although this generally yields less distinct value shifts when the
   traffic mix changes.

   As a practical example, one host scanning many other hosts will
   drive source IP entropy down and target IP entropy up; a similar
   effect can be observed for ports.  This pattern can also be caused
   by the scan traffic of a fast Internet worm.  A second example is a
   DDoS flooding attack against a single target (or a small number of
   targets), which drives source IP entropy up and target IP entropy
   down.

   A complete specification of additional derived values or entropy
   information elements is outside the scope of this document.  Any such
   Information Elements should be added in the future to the IANA IPFIX
   Information Elements registry on a per-application, as-needed basis.

5.4.  Aggregation Combination

   Interval distribution and key aggregation together may generate
   multiple Partially Aggregated Flows covering the same time interval
   with the same set of Flow Key values.  The process of combining these
   Partially Aggregated Flows into a single Aggregated Flow is called
   aggregation combination.  In general, non-Key values from multiple
   Contributing Flows are combined using the same operation by which
   values are combined from packets to form Flows for each Information
   Element.  Delta counters are summed, flags are unioned, and so on.
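
   A minimal sketch of aggregation combination (not part of this
   specification) is shown below; "octets" and "flags" stand in for a
   delta counter and a flags field such as octetDeltaCount and
   tcpControlBits.

   # Combining Partially Aggregated Flows that share the same Flow
   # Key and time interval into a single Aggregated Flow.
   partial = [
       {"key": "192.0.2.2", "octets": 119,  "flags": 0x02},
       {"key": "192.0.2.2", "octets": 1637, "flags": 0x18},
       {"key": "192.0.2.3", "octets": 111,  "flags": 0x10},
   ]

   combined = {}
   for flow in partial:
       agg = combined.setdefault(flow["key"],
                                 {"octets": 0, "flags": 0})
       agg["octets"] += flow["octets"]  # delta counters: summed
       agg["flags"] |= flow["flags"]    # flags: unioned
   print(combined)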

6.  Additional Considerations and Special Cases in Flow Aggregation

6.1.  Exact versus Approximate Counting during Aggregation

   In certain circumstances, particularly involving aggregation by
   devices with limited resources, and in situations where exact
   aggregated counts are less important than relative magnitudes
   (e.g., driving graphical displays), counter distribution during key
   aggregation may be performed by approximate counting means (e.g.,
   Bloom filters).  The choice to use approximate counting is
   implementation- and application-dependent.
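
   As an informal illustration only (this document does not prescribe
   any particular technique, and Bloom filters are named above merely
   as one example), the sketch below uses a count-min sketch, a
   counting relative of the Bloom filter, to keep approximate per-key
   counters in fixed memory.

   # Approximate per-key counters with a small count-min sketch.
   import hashlib

   DEPTH, WIDTH = 4, 256
   table = [[0] * WIDTH for _ in range(DEPTH)]

   def _rows(key):
       for d in range(DEPTH):
           h = hashlib.sha256(b"%d:%s" % (d, key.encode())).digest()
           yield d, int.from_bytes(h[:8], "big") % WIDTH

   def add(key, count):
       for d, i in _rows(key):
           table[d][i] += count

   def estimate(key):
       return min(table[d][i] for d, i in _rows(key))

   add("192.0.2.2", 119)
   add("192.0.2.2", 1637)
   add("192.0.2.3", 111)
   print(estimate("192.0.2.2"), estimate("192.0.2.3"))
   # -> 1756 111 (may overcount under hash collisions)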

6.2.  Delay and Loss introduced by the IAP

   When accepting Original Flows in export order from traffic captured
   live, the Intermediate Aggregation Process waits for all Original
   Flows which may contribute to a given interval during interval
   distribution.  This is generally dominated by the active timeout of
   the Metering Process measuring the Original Flows.  For example, with
   Metering Processes configured with a 5 minute active timeout, the
   Intermediate Aggregation Process introduces a delay of at least 5
   minutes to all exported Aggregated Flows to ensure it has received
   all Original Flows.  Note that when aggregating flows from multiple
   Metering Processes with different active timeouts, the delay is
   determined by the maximum active timeout.

   In certain circumstances, additional delay at the original Exporter
   may cause an IAP to close an interval before the last Original
   Flow(s) accountable to the interval arrives; in this case the IAP
   SHOULD drop the late Original Flow(s).  Accounting of flows lost at
   an Intermediate Process due to such issues is covered in
   [I-D.ietf-ipfix-mediation-protocol].

6.3.  Considerations for Aggregation of Sampled Flows

   The accuracy of Aggregated Flows may also be affected by sampling of
   the Original Flows, or sampling of packets making up the Original
   Flows.  At the time of writing, the effect of sampling on flow
   aggregation is still an open research question.  However, to maximize
   the comparability of Aggregated Flows, aggregation of sampled Flows
   SHOULD only use Original Flows sampled using the same sampling rate
   and sampling algorithm, Flows created from packets sampled using the
   same sampling rate and sampling algorithm, or Original Flows which
   have been normalized as if they had the same sampling rate and
   algorithm before aggregation.  For more on packet sampling within
   IPFIX, see [RFC5476].  For more on Flow sampling within the IPFIX
   Mediator Framework, see [I-D.ietf-ipfix-flow-selection-tech].

6.4.  Considerations for Aggregation of Heterogeneous Flows

   Aggregation may be applied to Original Flows from different sources
   and of different types (i.e., represented using different, perhaps
   wildly-different Templates).  When the goal is to separate the
   heterogeneous Original Flows and aggregate them into heterogeneous
   Aggregated Flows, each aggregation should be done at its own
   Intermediate Aggregation Process.  The Observation Domain ID on the
   Messages containing the output Aggregated Flows can be used to
   identify the different Processes, and to segregate the output.

   However, when the goal is to aggregate these Flows into a single
   stream of Aggregated Flows representing one type of data, and if the
   Original Flows may represent the same original packet at two
   different Observation Points, the Original Flows should be correlated
   by the correlation and normalization operation within the IAP to
   ensure that each packet is only represented in a single Aggregated
   Flow or set of Aggregated Flows differing only by aggregation
   interval.

7.  Export of Aggregated IP Flows using IPFIX

   In general, Aggregated Flows are exported in IPFIX as any other Flow.
   However, certain aspects of Aggregated Flow export benefit from
   additional guidelines, or new Information Elements to represent
   aggregation metadata or information generated during aggregation.
   These are detailed in the following subsections.

7.1.  Time Interval Export

   Since an Aggregated Flow is simply a Flow, the existing timestamp
   Information Elements in the IPFIX Information Model (e.g.,
   flowStartMilliseconds, flowEndNanoseconds) are sufficient to specify
   the time interval for aggregation.  Therefore, no new aggregation-
   specific Information Elements for exporting time interval information
   are necessary.

   Each Aggregated Flow carrying timing information SHOULD contain both
   an interval start and interval end timestamp.

7.2.  Flow Count Export

   The following four Information Elements are defined to count Original
   Flows as discussed in Section 5.2.1.

7.2.1.  originalFlowsPresent

   Description:   The non-conservative count of Original Flows
      contributing to this Aggregated Flow.  Non-conservative counts
      need not sum to the original count on re-aggregation.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   deltaCount

   ElementId:   TBD1

7.2.2.  originalFlowsInitiated

   Description:   The conservative count of Original Flows whose first
      packet is represented within this Aggregated Flow.  Conservative
      counts must sum to the original count on re-aggregation.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   deltaCount

   ElementId:   TBD2

7.2.3.  originalFlowsCompleted

   Description:   The conservative count of Original Flows whose last
      packet is represented within this Aggregated Flow.  Conservative
      counts must sum to the original count on re-aggregation.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   deltaCount

   ElementId:   TBD3

7.2.4.  deltaFlowCount

   Description:   The conservative count of Original Flows contributing
      to this Aggregated Flow; may be distributed via any of the methods
      expressed by the valueDistributionMethod Information Element.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   deltaCount

   ElementId:   3

   [IANA NOTE: This Information Element is compatible with Information
   Element 3 as used in NetFlow version 9.]

7.3.  Distinct Host Export

   The following six Information Elements represent the distinct counts
   of source and destination network-layer addresses, used to export
   distinct host counts reduced away during key aggregation.

7.3.1.  distinctCountOfSourceIPAddress

   Description:   The count of distinct source IP address values for
      Original Flows contributing to this Aggregated Flow, without
      regard to IP version.  This Information Element is preferred to
      the IP-version-specific counters, unless it is important to
      separate the counts by version.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   totalCount

   ElementId:   TBD4

7.3.2.  distinctCountOfDestinationIPAddress

   Description:   The count of distinct destination IP address values
      for Original Flows contributing to this Aggregated Flow, without
      regard to IP version.  This Information Element is preferred to
      the version-specific counters below, unless it is important to
      separate the counts by version.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   totalCount

   ElementId:   TBD5

7.3.3.  distinctCountOfSourceIPv4Address

   Description:   The count of distinct source IPv4 address values for
      Original Flows contributing to this Aggregated Flow.

   Abstract Data Type:   unsigned32

   Data Type Semantics:   totalCount

   ElementId:   TBD6

7.3.4.  distinctCountOfDestinationIPv4Address

   Description:   The count of distinct destination IPv4 address values
      for Original Flows contributing to this Aggregated Flow.

   Abstract Data Type:   unsigned32

   Data Type Semantics:   totalCount

   ElementId:   TBD7

   Status:   Current

7.3.5.  distinctCountOfSourceIPv6Address

   Description:   The count of distinct source IPv6 address values for
      Original Flows contributing to this Aggregated Flow.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   totalCount

   ElementId:   TBD8

   Status:   Current

7.3.6.  distinctCountOfDestinationIPv6Address

   Description:   The count of distinct destination IPv6 address values
      for Original Flows contributing to this Aggregated Flow.

   Abstract Data Type:   unsigned64

   Data Type Semantics:   totalCount

   ElementId:   TBD9

   Status:   Current

7.4.  Aggregate Counter Distribution Export

   When exporting counters distributed among Aggregated Flows, as
   described in Section 5.1.1, the Exporting Process MAY export an
   Aggregate Counter Distribution Option Record for each Template
   describing Aggregated Flow records; this Options Template is
   described below.  It uses the valueDistributionMethod Information
   Element, also defined below.  In many cases, distribution is simple:
   the counters from each Contributing Flow are accounted to the first
   interval to which the Flow contributes.  This is the default
   situation, for which no Aggregate Counter Distribution Record is
   necessary; Aggregate Counter Distribution Records are only
   applicable in more exotic situations, such as when the Aggregation
   Interval is smaller than the durations of the Original Flows.

7.4.1.  Aggregate Counter Distribution Options Template

   This Options Template defines the Aggregate Counter Distribution
   Record, which allows the binding of a value distribution method to a
   Template ID.  The scope is the Template Id, whose uniqueness, per
   [I-D.ietf-ipfix-protocol-rfc5101bis], is local to the Transport
   Session and Observation Domain that generated the Template ID.  This
   is used to signal to the Collecting Process how the counters were
   distributed.  The fields are as follows:

   +-------------------------+-----------------------------------------+
   | IE                      | Description                             |
   +-------------------------+-----------------------------------------+
   | templateId [scope]      | The Template ID of the Template         |
   |                         | defining the Aggregated Flows to which  |
   |                         | this distribution option applies.  This |
   |                         | Information Element MUST be defined as  |
   |                         | a Scope Field.                          |
   |                         |                                         |
   | valueDistributionMethod | The method used to distribute the       |
   |                         | counters for the Aggregated Flows       |
   |                         | defined by the associated Template.     |
   +-------------------------+-----------------------------------------+

7.4.2.  valueDistributionMethod Information Element

   Description:   A description of the method used to distribute the
      counters from Contributing Flows into the Aggregated Flow records
      described by an associated scope, generally a Template.  The
      method is deemed to apply to all the non-key Information Elements
      in the referenced scope for which value distribution is a valid
      operation; if the originalFlowsInitiated and/or
      originalFlowsCompleted Information Elements appear in the
      Template, they are not subject to this distribution method, as
      they each imply their own distribution method.  This is intended
      to be a complete set of possible value distribution methods; it is
      encoded as follows:

   +-------+-----------------------------------------------------------+
   | Value | Description                                               |
   +-------+-----------------------------------------------------------+
   | 0     | Unspecified: The counters for an Original Flow are        |
   |       | explicitly not distributed according to any other method  |
   |       | defined for this Information Element; use for arbitrary   |
   |       | distribution, or distribution algorithms not described by |
   |       | any other codepoint.                                      |
   |       | --------------------------------------------------------- |
   |       |                                                           |
   | 1     | Start Interval: The counters for an Original Flow are     |
   |       | added to the counters of the appropriate Aggregated Flow  |
   |       | containing the start time of the Original Flow.  This     |
   |       | should be assumed the default if value distribution       |
   |       | information is not available at a Collecting Process for  |
   |       | an Aggregated Flow.                                       |
   |       | --------------------------------------------------------- |
   |       |                                                           |
   | 2     | End Interval: The counters for an Original Flow are added |
   |       | to the counters of the appropriate Aggregated Flow        |
   |       | containing the end time of the Original Flow.             |
   |       | --------------------------------------------------------- |
   |       |                                                           |
   | 3     | Mid Interval: The counters for an Original Flow are added |
   |       | to the counters of a single appropriate Aggregated Flow   |
   |       | containing some timestamp between start and end time of   |
   |       | the Original Flow.                                        |
   |       | --------------------------------------------------------- |
   |       |                                                           |
   | 4     | Simple Uniform Distribution: Each counter for an Original |
   |       | Flow is divided by the number of time intervals the       |
   |       | Original Flow covers (i.e., of appropriate Aggregated     |
   |       | Flows sharing the same Flow Key), and this number is      |
   |       | added to each corresponding counter in each Aggregated    |
   |       | Flow.                                                     |
   |       | --------------------------------------------------------- |
   |       |                                                           |
   | 5     | Proportional Uniform Distribution: Each counter for an    |
   |       | Original Flow is divided by the number of time _units_    |
   |       | the Original Flow covers, to derive a mean count rate.    |
   |       | This mean count rate is then multiplied by the number of  |
   |       | time units in the intersection of the duration of the     |
   |       | Original Flow and the time interval of each Aggregated    |
   |       | Flow.  This is like simple uniform distribution, but      |
   |       | accounts for the fractional portions of a time interval   |
   |       | covered by an Original Flow in the first and last time    |
   |       | interval.                                                 |
   |       | --------------------------------------------------------- |
   |       |                                                           |
   | 6     | Simulated Process: Each counter of the Original Flow is   |
   |       | distributed among the intervals of the Aggregated Flows   |
   |       | according to some function the Intermediate Aggregation   |
   |       | Process uses based upon properties of Flows presumed to   |
   |       | be like the Original Flow.  This is essentially an        |
   |       | assertion that the Intermediate Aggregation Process has   |
   |       | no direct packet timing information but is nevertheless   |
   |       | not using one of the other simpler distribution methods.  |
   |       | The Intermediate Aggregation Process specifically makes   |
   |       | no assertion as to the correctness of the simulation.     |
   |       | --------------------------------------------------------- |
   |       |                                                           |
   | 7     | Direct: The Intermediate Aggregation Process has access   |
   |       | to the original packet timings from the packets making up |
   |       | the Original Flow, and uses these to distribute or        |
   |       | recalculate the counters.                                 |
   +-------+-----------------------------------------------------------+

   Abstract Data Type:   unsigned8

   ElementId:   TBD10

   Status:   Current
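
   As an informal illustration (not part of this specification), the
   sketch below distributes a single counter according to codepoints
   1, 4, and 5 above, assuming interval boundaries at integer
   multiples of a fixed interval length.  The example flow spans three
   300-second intervals; the simple uniform result reproduces the
   3733/3733/3734 split used in Section 8.4, under the
   (implementation-specific) assumption that the integer remainder is
   accounted to the last interval.

   # Distributing one Original Flow counter over the intervals it
   # covers: Start Interval (1), Simple Uniform (4), and Proportional
   # Uniform (5).
   INTERVAL = 300  # seconds

   def covered(start, end):
       """Indices of the intervals covered by [start, end]."""
       return list(range(int(start // INTERVAL),
                         int(end // INTERVAL) + 1))

   def start_interval(start, end, count):        # codepoint 1
       return {covered(start, end)[0]: count}

   def simple_uniform(start, end, count):        # codepoint 4
       ivals = covered(start, end)
       share, rem = divmod(count, len(ivals))
       # account the remainder to the last interval so that the
       # shares sum exactly to the original count
       return {i: share + (rem if i == ivals[-1] else 0)
               for i in ivals}

   def proportional_uniform(start, end, count):  # codepoint 5
       ivals = covered(start, end)
       rate = count / (end - start)              # mean count rate
       out = {}
       for i in ivals:
           lo = max(start, i * INTERVAL)
           hi = min(end, (i + 1) * INTERVAL)
           out[i] = rate * (hi - lo)
       return out

   print(start_interval(138.390, 826.598, 11200))
   print(simple_uniform(138.390, 826.598, 11200))
   print(proportional_uniform(138.390, 826.598, 11200))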

8.  Examples

   In these examples, the same data, described by the same Template,
   will be aggregated in several different ways; this illustrates the
   variety of functions that could be implemented by Intermediate
   Aggregation Processes.  Templates are shown in IESpec
   format as introduced in [I-D.ietf-ipfix-ie-doctors].  The source data
   format is a simplified flow: timestamps, traditional 5-tuple, and
   octet count; the flow key fields are the 5-tuple.  The Template is
   shown in Figure 9.

   flowStartMilliseconds(152)[8]
   flowEndMilliseconds(153)[8]
   sourceIPv4Address(8)[4]{key}
   destinationIPv4Address(12)[4]{key}
   sourceTransportPort(7)[2]{key}
   destinationTransportPort(11)[2]{key}
   protocolIdentifier(4)[1]{key}
   octetDeltaCount(1)[8]

                   Figure 9: Input Template for examples

   The data records given as input to the examples in this section are
   shown below; timestamps are given in H:MM:SS.sss format.  In this and
   subsequent tables, flowStartMilliseconds is shown in H:MM:SS.sss
   format as 'start time', flowEndMilliseconds is shown in H:MM:SS.sss
   format as 'end time', sourceIPv4Address is shown as 'source ip4' with
   the following 'port' representing sourceTransportPort,
   destinationIPv4Address is shown as 'dest ip4' with the following
   'port' representing destinationTransportPort, protocolIdentifier is
   shown as 'pt', and octetDeltaCount as 'oct'.

  start time |end time   |source ip4 |port |dest ip4      |port|pt|  oct
  9:00:00.138 9:00:00.138 192.0.2.2   47113 192.0.2.131    53   17   119
  9:00:03.246 9:00:03.246 192.0.2.2   22153 192.0.2.131    53   17    83
  9:00:00.478 9:00:03.486 192.0.2.2   52420 198.51.100.2   443  6   1637
  9:00:07.172 9:00:07.172 192.0.2.3   56047 192.0.2.131    53   17   111
  9:00:07.309 9:00:14.861 192.0.2.3   41183 198.51.100.67  80   6  16838
  9:00:03.556 9:00:19.876 192.0.2.2   17606 198.51.100.68  80   6  11538
  9:00:25.210 9:00:25.210 192.0.2.3   47113 192.0.2.131    53   17   119
  9:00:26.358 9:00:30.198 192.0.2.3   48458 198.51.100.133 80   6   2973
  9:00:29.213 9:01:00.061 192.0.2.4   61295 198.51.100.2   443  6   8350
  9:04:00.207 9:04:04.431 203.0.113.3 41256 198.51.100.133 80   6    778
  9:03:59.624 9:04:06.984 203.0.113.3 51662 198.51.100.3   80   6    883
  9:00:30.532 9:06:15.402 192.0.2.2   37581 198.51.100.2   80   6  15420
  9:06:56.813 9:06:59.821 203.0.113.3 52572 198.51.100.2   443  6   1637
  9:06:30.565 9:07:00.261 203.0.113.3 49914 198.51.100.133 80   6    561
  9:06:55.160 9:07:05.208 192.0.2.2   50824 198.51.100.2   443  6   1899
  9:06:49.322 9:07:05.322 192.0.2.3   34597 198.51.100.3   80   6   1284
  9:07:05.849 9:07:09.625 203.0.113.3 58907 198.51.100.4   80   6   2670
  9:10:45.161 9:10:45.161 192.0.2.4   22478 192.0.2.131    53   17    75
  9:10:45.209 9:11:01.465 192.0.2.4   49513 198.51.100.68  80   6   3374
  9:10:57.094 9:11:00.614 192.0.2.4   64832 198.51.100.67  80   6    138
  9:10:59.770 9:11:02.842 192.0.2.3   60833 198.51.100.69  443  6   2325
  9:02:18.390 9:13:46.598 203.0.113.3 39586 198.51.100.17  80   6  11200
  9:13:53.933 9:14:06.605 192.0.2.2   19638 198.51.100.3   80   6   2869
  9:13:02.864 9:14:08.720 192.0.2.3   40429 198.51.100.4   80   6  18289

                    Figure 10: Input data for examples

8.1.  Traffic Time-Series per Source

   Aggregating flows by source IP address in time series (i.e., with a
   regular interval) can be used in subsequent heavy-hitter analysis and
   as a source parameter for statistical anomaly detection techniques.
   Here, the Intermediate Aggregation Process imposes an interval,
   aggregates the key to remove all key fields other than the source IP
   address, then combines the result into a stream of Aggregated Flows.
   The imposed interval of 5 minutes is longer than the majority of
   flows; for those flows crossing interval boundaries, the entire flow
   is accounted to the interval containing the start time of the flow.

   In this example the Partially Aggregated Flows after each conceptual
   operation in the Intermediate Aggregation Process are shown.  These
   are meant to be illustrative of the conceptual operations only, and
   not to suggest an implementation (indeed, the example shown here
   would not necessarily be the most efficient method for performing
   these operations).  Subsequent examples will omit the Partially
   Aggregated Flows for brevity.

   The input to this process could be any Flow Record containing a
   source IP address and octet counter; consider for this example the
   Template and data from the introduction.  The Intermediate
   Aggregation Process would then output records containing just
   timestamps, source IP, and octetDeltaCount, as in Figure 11.

   flowStartMilliseconds(152)[8]
   flowEndMilliseconds(153)[8]
   sourceIPv4Address(8)[4]
   octetDeltaCount(1)[8]

           Figure 11: Output Template for time series per source

   Assume the goal is to get 5-minute (300s) time series of octet counts
   per source IP address.  The aggregation operations would then be
   arranged as in Figure 12.

                    Original Flows
                          |
                          V
              +-----------------------+
              | interval distribution |
              |  * impose uniform     |
              |    300s time interval |
              +-----------------------+
                  |
                  | Partially Aggregated Flows
                  V
   +------------------------+
   |  key aggregation       |
   |   * reduce key to only |
   |     sourceIPv4Address  |
   +------------------------+
                  |
                  | Partially Aggregated Flows
                  V
             +-------------------------+
             |  aggregate combination  |
             |   * sum octetDeltaCount |
             +-------------------------+
                          |
                          V
                  Aggregated Flows

       Figure 12: Aggregation operations for time series per source
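
   An informal sketch of this arrangement (not part of this
   specification) follows; it operates on a few of the Original Flows
   from Figure 10, given as (start, end, source address, octets) with
   times in seconds past 9:00:00, and is not intended to suggest an
   implementation.

   # 300-second time series of octet counts per source address,
   # using start-interval accounting.
   from collections import defaultdict

   original = [
       (0.138,    0.138, "192.0.2.2",    119),
       (0.478,    3.486, "192.0.2.2",   1637),
       (7.309,   14.861, "192.0.2.3",  16838),
       (30.532, 375.402, "192.0.2.2",  15420),
   ]

   INTERVAL = 300
   aggregated = defaultdict(int)
   for start, end, src, octets in original:
       # interval distribution: account the whole Flow to the
       # interval containing its start time (Start Interval policy)
       interval = int(start // INTERVAL) * INTERVAL
       # key aggregation + combination: sum octets per interval/src
       aggregated[(interval, src)] += octets

   for (interval, src), octets in sorted(aggregated.items()):
       print(interval, src, octets)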

   After applying the interval distribution step to the source data in
   Figure 10, only the time intervals have changed; the Partially
   Aggregated flows are shown in Figure 13.  Note that interval
   distribution follows the default Start Interval policy; that is, the
   entire flow is accounted to the interval containing the flow's start
   time.

  start time |end time   |source ip4 |port |dest ip4      |port|pt|  oct
  9:00:00.000 9:05:00.000 192.0.2.2   47113 192.0.2.131    53   17   119
  9:00:00.000 9:05:00.000 192.0.2.2   22153 192.0.2.131    53   17    83
  9:00:00.000 9:05:00.000 192.0.2.2   52420 198.51.100.2   443  6   1637
  9:00:00.000 9:05:00.000 192.0.2.3   56047 192.0.2.131    53   17   111
  9:00:00.000 9:05:00.000 192.0.2.3   41183 198.51.100.67  80   6  16838
  9:00:00.000 9:05:00.000 192.0.2.2   17606 198.51.100.68  80   6  11538
  9:00:00.000 9:05:00.000 192.0.2.3   47113 192.0.2.131    53   17   119
  9:00:00.000 9:05:00.000 192.0.2.3   48458 198.51.100.133 80   6   2973
  9:00:00.000 9:05:00.000 192.0.2.4   61295 198.51.100.2   443  6   8350
  9:00:00.000 9:05:00.000 203.0.113.3 41256 198.51.100.133 80   6    778
  9:00:00.000 9:05:00.000 203.0.113.3 51662 198.51.100.3   80   6    883
  9:00:00.000 9:05:00.000 192.0.2.2   37581 198.51.100.2   80   6  15420
  9:00:00.000 9:05:00.000 203.0.113.3 39586 198.51.100.17  80   6  11200
  9:05:00.000 9:10:00.000 203.0.113.3 52572 198.51.100.2   443  6   1637
  9:05:00.000 9:10:00.000 203.0.113.3 49914 198.51.100.133 80   6    561
  9:05:00.000 9:10:00.000 192.0.2.2   50824 198.51.100.2   443  6   1899
  9:05:00.000 9:10:00.000 192.0.2.3   34597 198.51.100.3   80   6   1284
  9:05:00.000 9:10:00.000 203.0.113.3 58907 198.51.100.4   80   6   2670
  9:10:00.000 9:15:00.000 192.0.2.4   22478 192.0.2.131    53   17    75
  9:10:00.000 9:15:00.000 192.0.2.4   49513 198.51.100.68  80   6   3374
  9:10:00.000 9:15:00.000 192.0.2.4   64832 198.51.100.67  80   6    138
  9:10:00.000 9:15:00.000 192.0.2.3   60833 198.51.100.69  443  6   2325
  9:10:00.000 9:15:00.000 192.0.2.2   19638 198.51.100.3   80   6   2869
  9:10:00.000 9:15:00.000 192.0.2.3   40429 198.51.100.4   80   6  18289

         Figure 13: Interval imposition for time series per source

   After the key aggregation step, all Flow Keys except the source IP
   address have been discarded, as shown in Figure 14.  This leaves
   duplicate Partially Aggregated flows to be combined in the final
   operation.

   start time |end time   |source ip4 |octets
   9:00:00.000 9:05:00.000 192.0.2.2      119
   9:00:00.000 9:05:00.000 192.0.2.2       83
   9:00:00.000 9:05:00.000 192.0.2.2     1637
   9:00:00.000 9:05:00.000 192.0.2.3      111
   9:00:00.000 9:05:00.000 192.0.2.3    16838
   9:00:00.000 9:05:00.000 192.0.2.2    11538
   9:00:00.000 9:05:00.000 192.0.2.3      119
   9:00:00.000 9:05:00.000 192.0.2.3     2973
   9:00:00.000 9:05:00.000 192.0.2.4     8350
   9:00:00.000 9:05:00.000 203.0.113.3    778
   9:00:00.000 9:05:00.000 203.0.113.3    883
   9:00:00.000 9:05:00.000 192.0.2.2    15420
   9:00:00.000 9:05:00.000 203.0.113.3  11200
   9:05:00.000 9:10:00.000 203.0.113.3   1637
   9:05:00.000 9:10:00.000 203.0.113.3    561
   9:05:00.000 9:10:00.000 192.0.2.2     1899
   9:05:00.000 9:10:00.000 192.0.2.3     1284
   9:05:00.000 9:10:00.000 203.0.113.3   2670
   9:10:00.000 9:15:00.000 192.0.2.4       75
   9:10:00.000 9:15:00.000 192.0.2.4     3374
   9:10:00.000 9:15:00.000 192.0.2.4      138
   9:10:00.000 9:15:00.000 192.0.2.3     2325
   9:10:00.000 9:15:00.000 192.0.2.2     2869
   9:10:00.000 9:15:00.000 192.0.2.3    18289

           Figure 14: Key aggregation for time series per source

   Aggregate combination sums the counters per key and interval; the
   summations for the first two keys in the first interval are shown in
   detail in Figure 15.

     start time |end time   |source ip4 |octets
     9:00:00.000 9:05:00.000 192.0.2.2      119
     9:00:00.000 9:05:00.000 192.0.2.2       83
     9:00:00.000 9:05:00.000 192.0.2.2     1637
     9:00:00.000 9:05:00.000 192.0.2.2    11538
   + 9:00:00.000 9:05:00.000 192.0.2.2    15420
                                          -----
   = 9:00:00.000 9:05:00.000 192.0.2.2    28797

     9:00:00.000 9:05:00.000 192.0.2.3      111
     9:00:00.000 9:05:00.000 192.0.2.3    16838
     9:00:00.000 9:05:00.000 192.0.2.3      119
   + 9:00:00.000 9:05:00.000 192.0.2.3     2973
                                          -----
   = 9:00:00.000 9:05:00.000 192.0.2.3    20041

             Figure 15: Summation during aggregate combination

   Applying this to each set of Partially Aggregated Flows produces the
   final Aggregated Flows shown in Figure 16, to be exported using the
   Template in Figure 11.

   start time |end time   |source ip4 |octets
   9:00:00.000 9:05:00.000 192.0.2.2    28797
   9:00:00.000 9:05:00.000 192.0.2.3    20041
   9:00:00.000 9:05:00.000 192.0.2.4     8350
   9:00:00.000 9:05:00.000 203.0.113.3  12861
   9:05:00.000 9:10:00.000 192.0.2.2     1899
   9:05:00.000 9:10:00.000 192.0.2.3     1284
   9:05:00.000 9:10:00.000 203.0.113.3   4868
   9:10:00.000 9:15:00.000 192.0.2.2     2869
   9:10:00.000 9:15:00.000 192.0.2.3    20614
   9:10:00.000 9:15:00.000 192.0.2.4     3587

          Figure 16: Aggregated Flows for time series per source

8.2.  Core Traffic Matrix

   Aggregating flows by source and destination autonomous system number
   in time series is used to generate core traffic matrices.  The core
   traffic matrix provides a view of the state of the routes within a
   network, and can be used for long-term planning of changes to network
   design based on traffic demand.  Here, imposed time intervals are
   generally much longer than active flow timeouts.  The traffic matrix
   is reported in terms of octets, packets, and flows, as each of these
   values may have a subtly different effect on capacity planning.

   This example demonstrates key aggregation using derived keys and
   Original Flow counting.  While some Original Flows may be generated
   by Exporting Processes on forwarding devices, and therefore contain
   the bgpSourceAsNumber and bgpDestinationAsNumber Information
   Elements, Original Flows from Exporting Processes on dedicated
   measurement devices without routing data contain only a
   destinationIPv[46]Address.  For these flows, the Mediator must look
   up the corresponding AS numbers in an IP-to-AS table, replacing the
   source and destination addresses with AS numbers.  The table used
   in this example is shown in Figure 17.  (Note that due to the
   limited example address space, this example ignores the common
   practice of routing only blocks of /24 or larger.)

   prefix           |ASN
   192.0.2.0/25      64496
   192.0.2.128/25    64497
   198.51.100/24     64498
   203.0.113.0/24    64499

              Figure 17: Example Autonomous system number map
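
   An informal sketch of such a lookup (not part of this
   specification) using longest-prefix matching over the table in
   Figure 17 might look as follows; it would be applied to both the
   source and destination addresses during key aggregation.

   # Replacing addresses with AS numbers using the map in Figure 17.
   import ipaddress

   AS_MAP = [
       (ipaddress.ip_network("192.0.2.0/25"),    64496),
       (ipaddress.ip_network("192.0.2.128/25"),  64497),
       (ipaddress.ip_network("198.51.100.0/24"), 64498),
       (ipaddress.ip_network("203.0.113.0/24"),  64499),
   ]

   def asn_for(address):
       addr = ipaddress.ip_address(address)
       # longest-prefix match over the (short) example table
       matches = [(net.prefixlen, asn) for net, asn in AS_MAP
                  if addr in net]
       return max(matches)[1] if matches else None

   print(asn_for("192.0.2.2"), asn_for("198.51.100.133"))
   # -> 64496 64498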

   The Template for Aggregated Flows produced by this example is shown
   in Figure 18.

   flowStartMilliseconds(152)[8]
   flowEndMilliseconds(153)[8]
   bgpSourceAsNumber(16)[4]
   bgpDestinationAsNumber(17)[4]
   octetDeltaCount(1)[8]

               Figure 18: Output Template for traffic matrix

   Assume the goal is to get 60-minute time series of octet counts per
   source/destination ASN pair.  The aggregation operations would then
   be arranged as in Figure 19.

                    Original Flows
                          |
                          V
              +-----------------------+
              | interval distribution |
              |  * impose uniform     |
              |   3600s time interval |
              +-----------------------+
                  |
                  | Partially Aggregated Flows
                  V
   +------------------------+
   |  key aggregation       |
   |  * reduce key to only  |
   |    sourceIPv4Address + |
   |    destIPv4Address     |
   +------------------------+
                  |
                  V
   +------------------------+
   |  key aggregation       |
   |  * replace addresses   |
   |    with ASN from map   |
   +------------------------+
                  |
                  | Partially Aggregated Flows
                  V
             +-------------------------+
             |  aggregate combination  |
             |   * sum octetDeltaCount |
             +-------------------------+
                          |
                          V
                  Aggregated Flows

           Figure 19: Aggregation operations for traffic matrix

   After applying the interval distribution step to the source data in
   Figure 10, the Partially Aggregated flows are shown in Figure 20.
   Note that the flows are identical to those in the interval
   distribution step of the previous example, except that the chosen
   interval (1 hour, 3600 seconds) is different; therefore, all the
   flows fit into a single interval.

   start time |end time |source ip4 |port |dest ip4      |port|pt|  oct
   9:00:00     10:00:00  192.0.2.2   47113 192.0.2.131    53   17   119
   9:00:00     10:00:00  192.0.2.2   22153 192.0.2.131    53   17    83
   9:00:00     10:00:00  192.0.2.2   52420 198.51.100.2   443  6   1637
   9:00:00     10:00:00  192.0.2.3   56047 192.0.2.131    53   17   111
   9:00:00     10:00:00  192.0.2.3   41183 198.51.100.67  80   6  16838
   9:00:00     10:00:00  192.0.2.2   17606 198.51.100.68  80   6  11538
   9:00:00     10:00:00  192.0.2.3   47113 192.0.2.131    53   17   119
   9:00:00     10:00:00  192.0.2.3   48458 198.51.100.133 80   6   2973
   9:00:00     10:00:00  192.0.2.4   61295 198.51.100.2   443  6   8350
   9:00:00     10:00:00  203.0.113.3 41256 198.51.100.133 80   6    778
   9:00:00     10:00:00  203.0.113.3 51662 198.51.100.3   80   6    883
   9:00:00     10:00:00  192.0.2.2   37581 198.51.100.2   80   6  15420
   9:00:00     10:00:00  203.0.113.3 52572 198.51.100.2   443  6   1637
   9:00:00     10:00:00  203.0.113.3 49914 198.51.100.133 80   6    561
   9:00:00     10:00:00  192.0.2.2   50824 198.51.100.2   443  6   1899
   9:00:00     10:00:00  192.0.2.3   34597 198.51.100.3   80   6   1284
   9:00:00     10:00:00  203.0.113.3 58907 198.51.100.4   80   6   2670
   9:00:00     10:00:00  192.0.2.4   22478 192.0.2.131    53   17    75
   9:00:00     10:00:00  192.0.2.4   49513 198.51.100.68  80   6   3374
   9:00:00     10:00:00  192.0.2.4   64832 198.51.100.67  80   6    138
   9:00:00     10:00:00  192.0.2.3   60833 198.51.100.69  443  6   2325
   9:00:00     10:00:00  203.0.113.3 39586 198.51.100.17  80   6  11200
   9:00:00     10:00:00  192.0.2.2   19638 198.51.100.3   80   6   2869
   9:00:00     10:00:00  192.0.2.3   40429 198.51.100.4   80   6  18289

             Figure 20: Interval imposition for traffic matrix

   The next steps are to discard irrelevant key fields and to replace
   the source and destination addresses with source and destination AS
   numbers in the map; the results of these key aggregation steps are
   shown in Figure 21.

   start time |end time |source ASN |dest ASN |octets
   9:00:00     10:00:00  AS64496     AS64497      119
   9:00:00     10:00:00  AS64496     AS64497       83
   9:00:00     10:00:00  AS64496     AS64498     1637
   9:00:00     10:00:00  AS64496     AS64497      111
   9:00:00     10:00:00  AS64496     AS64498    16838
   9:00:00     10:00:00  AS64496     AS64498    11538
   9:00:00     10:00:00  AS64496     AS64497      119
   9:00:00     10:00:00  AS64496     AS64498     2973
   9:00:00     10:00:00  AS64496     AS64498     8350
   9:00:00     10:00:00  AS64499     AS64498      778
   9:00:00     10:00:00  AS64499     AS64498      883
   9:00:00     10:00:00  AS64496     AS64498    15420
   9:00:00     10:00:00  AS64499     AS64498     1637
   9:00:00     10:00:00  AS64499     AS64498      561
   9:00:00     10:00:00  AS64496     AS64498     1899
   9:00:00     10:00:00  AS64496     AS64498     1284
   9:00:00     10:00:00  AS64499     AS64498     2670
   9:00:00     10:00:00  AS64496     AS64497       75
   9:00:00     10:00:00  AS64496     AS64498     3374
   9:00:00     10:00:00  AS64496     AS64498      138
   9:00:00     10:00:00  AS64496     AS64498     2325
   9:00:00     10:00:00  AS64499     AS64498    11200
   9:00:00     10:00:00  AS64496     AS64498     2869
   9:00:00     10:00:00  AS64496     AS64498    18289

       Figure 21: Key aggregation for traffic matrix: reduction and
                                replacement

   Finally, aggregate combination sums the counters per key and
   interval.  The resulting Aggregated Flows containing the traffic
   matrix, shown in Figure 22, are then exported using the Template in
   Figure 18.  Note that these aggregated flows represent a sparse
   matrix: AS pairs for which no traffic was received have no
   corresponding record in the output.

   start time  end time  source ASN  dest ASN  octets
   9:00:00     10:00:00  AS64496     AS64497      507
   9:00:00     10:00:00  AS64496     AS64498    86934
   9:00:00     10:00:00  AS64499     AS64498    17729

              Figure 22: Aggregated Flows for traffic matrix

   The output of this operation is suitable for re-aggregation: that is,
   traffic matrices from single links or Observation Points can be
   aggregated through the same interval imposition and aggregate
   combination steps in order to build a traffic matrix for an entire
   network.

8.3.  Distinct Source Count per Destination Endpoint

   Aggregating flows by destination address and port, and counting
   distinct sources aggregated away, can be used as part of passive
   service inventory and host characterization.  This example shows
   aggregation as an analysis technique, performed on source data stored
   in an IPFIX File.  As the Transport Session in this File is bounded,
   removal of all timestamp information allows summarization of the
   entire time interval contained within the File.  Removal of
   timing information during interval imposition is equivalent to an
   infinitely long imposed time interval.  This demonstrates both how
   infinite intervals work, and how unique counters work.  The
   aggregation operations are summarized in Figure 23.

                    Original Flows
                          |
                          V
              +-----------------------+
              | interval distribution |
              |  * discard timestamps |
              +-----------------------+
                  |
                  | Partially Aggregated Flows
                  V
   +----------------------------+
   |  value aggregation         |
   |  * discard octetDeltaCount |
   +----------------------------+
                  |
                  | Partially Aggregated Flows
                  V
   +----------------------------+
   |  key aggregation           |
   |   * reduce key to only     |
   |     destIPv4Address +      |
   |     destTransportPort,     |
   |   * count distinct sources |
   +----------------------------+
                  |
                  | Partially Aggregated Flows
                  V
       +----------------------------------------------+
       |  aggregate combination                       |
       |   * no-op (distinct sources already counted) |
       +----------------------------------------------+
                          |
                          V
                  Aggregated Flows

            Figure 23: Aggregation operations for source count

   The Template for Aggregated Flows produced by this example is shown
   in Figure 24.

   destinationIPv4Address(12)[4]
   destinationTransportPort(11)[2]
   distinctCountOfSourceIPAddress(TBD4)[8]

                Figure 24: Output Template for source count

   Interval distribution, in this case, merely discards the timestamp
   information from the Original Flows in Figure 10, and as such is not
   shown.  Likewise, the value aggregation step simply discards the
   octetDeltaCount value field.  The key aggregation step reduces the
   key to the destinationIPv4Address and destinationTransportPort,
   counting the distinct source addresses.  Since this is essentially
   the output of this aggregation function, the aggregate combination
   operation is a no-op; the resulting Aggregated Flows are shown in
   Figure 25.

   dest ip4      |port |dist src
   192.0.2.131    53           3
   198.51.100.2   80           1
   198.51.100.2   443          3
   198.51.100.67  80           2
   198.51.100.68  80           2
   198.51.100.133 80           2
   198.51.100.3   80           3
   198.51.100.4   80           2
   198.51.100.17  80           1
   198.51.100.69  443          1

               Figure 25: Aggregated flows for source count
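
   As an informal illustration (not part of this specification), the
   sketch below reproduces two of the rows of Figure 25 from the
   corresponding Original Flows in Figure 10; dropping the timestamps
   and octet counts from the input reflects the infinite imposed
   interval and the value aggregation step.

   # Distinct sources per destination endpoint, for the Original
   # Flows in Figure 10 destined to 192.0.2.131:53 and
   # 198.51.100.2:443.
   from collections import defaultdict

   original = [
       ("192.0.2.2",   "192.0.2.131",   53),
       ("192.0.2.2",   "192.0.2.131",   53),
       ("192.0.2.3",   "192.0.2.131",   53),
       ("192.0.2.3",   "192.0.2.131",   53),
       ("192.0.2.4",   "192.0.2.131",   53),
       ("192.0.2.2",   "198.51.100.2", 443),
       ("192.0.2.4",   "198.51.100.2", 443),
       ("203.0.113.3", "198.51.100.2", 443),
       ("192.0.2.2",   "198.51.100.2", 443),
   ]

   sources = defaultdict(set)
   for src, dst, port in original:
       sources[(dst, port)].add(src)

   for (dst, port), srcs in sources.items():
       print(dst, port, len(srcs))  # 3 distinct sources for each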

8.4.  Traffic Time-Series per Source with Counter Distribution

   Returning to the example in Section 8.1, note that our source data
   contains some flows with durations longer than the imposed interval
   of five minutes.  The default method for dealing with such flows is
   to account them to the interval containing the flow's start time.

   In this example, the same data is aggregated using the same
   arrangement of operations and the same output Template as in
   Section 8.1, but using a different counter distribution policy,
   Simple Uniform Distribution, as described in Section 5.1.1.  In order
   to do this, the Exporting Process first exports the Aggregate Counter
   Distribution Options Template, as in Figure 26.

   templateId(12)[2]{scope}
   valueDistributionMethod(TBD10)[1]

        Figure 26: Aggregate Counter Distribution Options Template

   This Template is followed by an Aggregate Counter Distribution Record
   described by this Template; assuming the output Template in Figure 11
   has ID 257, this record would appear as in Figure 27.

   template ID | value distribution method
           257   4 (simple uniform)

             Figure 27: Aggregate Counter Distribution Record

   Following metadata export, the aggregation steps proceed as before.
   However, two long flows are distributed across multiple intervals in
   the interval imposition step, as indicated with "*" in Figure 28.
   Note the uneven distribution of the three-interval, 11200-octet flow
   into three Partially Aggregated Flows of 3733, 3733, and 3734 octets;
   this ensures no cumulative error is injected by the interval
   distribution step.

 start time |end time   |source ip4 |port |dest ip4      |port|pt|  oct
 9:00:00.000 9:05:00.000 192.0.2.2   47113 192.0.2.131    53   17   119
 9:00:00.000 9:05:00.000 192.0.2.2   22153 192.0.2.131    53   17    83
 9:00:00.000 9:05:00.000 192.0.2.2   52420 198.51.100.2   443  6   1637
 9:00:00.000 9:05:00.000 192.0.2.3   56047 192.0.2.131    53   17   111
 9:00:00.000 9:05:00.000 192.0.2.3   41183 198.51.100.67  80   6  16838
 9:00:00.000 9:05:00.000 192.0.2.2   17606 198.51.100.68  80   6  11538
 9:00:00.000 9:05:00.000 192.0.2.3   47113 192.0.2.131    53   17   119
 9:00:00.000 9:05:00.000 192.0.2.3   48458 198.51.100.133 80   6   2973
 9:00:00.000 9:05:00.000 192.0.2.4   61295 198.51.100.2   443  6   8350
 9:00:00.000 9:05:00.000 203.0.113.3 41256 198.51.100.133 80   6    778
 9:00:00.000 9:05:00.000 203.0.113.3 51662 198.51.100.3   80   6    883
 9:00:00.000 9:05:00.000 192.0.2.2   37581 198.51.100.2   80   6   7710*
 9:00:00.000 9:05:00.000 203.0.113.3 39586 198.51.100.17  80   6   3733*
 9:05:00.000 9:10:00.000 203.0.113.3 52572 198.51.100.2   443  6   1637
 9:05:00.000 9:10:00.000 203.0.113.3 49914 198.51.100.133 80   6    561
 9:05:00.000 9:10:00.000 192.0.2.2   50824 198.51.100.2   443  6   1899
 9:05:00.000 9:10:00.000 192.0.2.3   34597 198.51.100.3   80   6   1284
 9:05:00.000 9:10:00.000 203.0.113.3 58907 198.51.100.4   80   6   2670
 9:05:00.000 9:10:00.000 192.0.2.2   37581 198.51.100.2   80   6   7710*
 9:05:00.000 9:10:00.000 203.0.113.3 39586 198.51.100.17  80   6   3733*
 9:10:00.000 9:15:00.000 192.0.2.4   22478 192.0.2.131    53   17    75
 9:10:00.000 9:15:00.000 192.0.2.4   49513 198.51.100.68  80   6   3374
 9:10:00.000 9:15:00.000 192.0.2.4   64832 198.51.100.67  80   6    138
 9:10:00.000 9:15:00.000 192.0.2.3   60833 198.51.100.69  443  6   2325
 9:10:00.000 9:15:00.000 192.0.2.2   19638 198.51.100.3   80   6   2869
 9:10:00.000 9:15:00.000 192.0.2.3   40429 198.51.100.4   80   6  18289
 9:10:00.000 9:15:00.000 203.0.113.3 39586 198.51.100.17  80   6   3734*

   Figure 28: Distributed interval imposition for time series per source

   Subsequent steps are as in Section 8.1; the results, to be exported
   using the Template shown in Figure 11, are shown in Figure 29, with
   Aggregated Flows differing from the example in Section 8.1 indicated
   by "*".

   start time |end time   |source ip4 |octets
   9:00:00.000 9:05:00.000 192.0.2.2    21087*
   9:00:00.000 9:05:00.000 192.0.2.3    20041
   9:00:00.000 9:05:00.000 192.0.2.4     8350
   9:00:00.000 9:05:00.000 203.0.113.3   5394*
   9:05:00.000 9:10:00.000 192.0.2.2     9609*
   9:05:00.000 9:10:00.000 192.0.2.3     1284
   9:05:00.000 9:10:00.000 203.0.113.3   8601*
   9:10:00.000 9:15:00.000 192.0.2.2     2869
   9:10:00.000 9:15:00.000 192.0.2.3    20614
   9:10:00.000 9:15:00.000 192.0.2.4     3587
   9:10:00.000 9:15:00.000 203.0.113.3   3734*

    Figure 29: Aggregated Flows for time series per source with counter
                               distribution

9.  Security Considerations

   This document specifies the operation of an Intermediate Aggregation
   Process with the IPFIX Protocol; the Security Considerations for the
   protocol itself in Section 11 [RFC-EDITOR NOTE: verify section
   number] of [I-D.ietf-ipfix-protocol-rfc5101bis] therefore apply.  In
   the common case that aggregation is performed on a Mediator, the
   Security Considerations for Mediators in Section 9 of [RFC6183] apply
   as well.

   As mentioned in Section 3, certain aggregation operations may tend to
   have an anonymizing effect on flow data by obliterating sensitive
   identifiers.  Aggregation may also be combined with anonymization
   within a Mediator, or as part of a chain of Mediators, to further
   leverage this effect.  In any case in which an Intermediate
   Aggregation Process is applied as part of a data anonymization or
   protection scheme, or is used together with anonymization as
   described in [RFC6235], the Security Considerations in Section 9 of
   [RFC6235] apply.

10.  IANA Considerations

   This document specifies the creation of new IPFIX Information
   Elements in the IPFIX Information Element registry located at
   http://www.iana.org/assignments/ipfix, as defined in Section 7 above.
   IANA has assigned Information Element numbers to these Information
   Elements, and entered them into the registry.

   [NOTE for IANA: The text TBDn should be replaced with the respective
   assigned Information Element numbers where they appear in this
   document.  Note that the deltaFlowCount Information Element has been
   assigned the number 3, as it is compatible with the corresponding
   existing (reserved) NetFlow v9 Information Element.  Other
   Information Element numbers should be assigned outside the NetFlow V9
   compatibility range, as these Information Elements are not supported
   by NetFlow V9.]

11.  Acknowledgments

   Special thanks to Elisa Boschi for early work on the concepts laid
   out in this document.  Thanks to Lothar Braun, Christian Henke, and
   Rahul Patel for their reviews and valuable feedback, with special
   thanks to Paul Aitken for his multiple detailed reviews.  This work
   is materially supported by the European Union Seventh Framework
   Programme under grant agreement 257315 (DEMONS).

12.  References

12.1.  Normative References

   [I-D.ietf-ipfix-protocol-rfc5101bis]
              Claise, B. and B. Trammell, "Specification of the IP Flow
              Information eXport (IPFIX) Protocol for the Exchange of
              Flow Information", draft-ietf-ipfix-protocol-rfc5101bis-02
              (work in progress), June 2012.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

12.2.  Informative References

   [RFC3917]  Quittek, J., Zseby, T., Claise, B., and S. Zander,
              "Requirements for IP Flow Information Export (IPFIX)",
              RFC 3917, October 2004.

   [RFC5470]  Sadasivan, G., Brownlee, N., Claise, B., and J. Quittek,
              "Architecture for IP Flow Information Export", RFC 5470,
              March 2009.

   [RFC5472]  Zseby, T., Boschi, E., Brownlee, N., and B. Claise, "IP
              Flow Information Export (IPFIX) Applicability", RFC 5472,
              March 2009.

   [RFC5476]  Claise, B., Johnson, A., and J. Quittek, "Packet Sampling
              (PSAMP) Protocol Specifications", RFC 5476, March 2009.

   [RFC5655]  Trammell, B., Boschi, E., Mark, L., Zseby, T., and A.
              Wagner, "Specification of the IP Flow Information Export
              (IPFIX) File Format", RFC 5655, October 2009.

   [RFC5982]  Kobayashi, A. and B. Claise, "IP Flow Information Export
              (IPFIX) Mediation: Problem Statement", RFC 5982,
              August 2010.

   [RFC6183]  Kobayashi, A., Claise, B., Muenz, G., and K. Ishibashi,
              "IP Flow Information Export (IPFIX) Mediation: Framework",
              RFC 6183, April 2011.

   [RFC6235]  Boschi, E. and B. Trammell, "IP Flow Anonymization
              Support", RFC 6235, May 2011.

   [I-D.ietf-ipfix-mediation-protocol]
              Claise, B., Kobayashi, A., and B. Trammell, "Operation of
               the IP Flow Information Export (IPFIX) Protocol on IPFIX
               Mediators", draft-ietf-ipfix-mediation-protocol-02 (work
              in progress), July 2012.

   [I-D.ietf-ipfix-ie-doctors]
              Trammell, B. and B. Claise, "Guidelines for Authors and
              Reviewers of IPFIX Information Elements",
              draft-ietf-ipfix-ie-doctors-07 (work in progress),
              October 2012.

   [I-D.ietf-ipfix-configuration-model]
              Muenz, G., Claise, B., and P. Aitken, "Configuration Data
              Model for IPFIX and PSAMP",
              draft-ietf-ipfix-configuration-model-11 (work in
              progress), June 2012.

   [I-D.ietf-ipfix-flow-selection-tech]
              D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow
              Selection Techniques",
              draft-ietf-ipfix-flow-selection-tech-12 (work in
              progress), September 2012.

   [iana-ipfix-assignments]
              Internet Assigned Numbers Authority, "IP Flow Information
              Export Information Elements
              (http://www.iana.org/assignments/ipfix/ipfix.xml)".

Authors' Addresses

   Brian Trammell
   Swiss Federal Institute of Technology Zurich
   Gloriastrasse 35
   8092 Zurich
   Switzerland

   Phone: +41 44 632 70 13
   Email: trammell@tik.ee.ethz.ch

   Arno Wagner
   Consecom AG
   Bleicherweg 64a
   8002 Zurich
   Switzerland

   Email: arno@wagner.name

   Benoit Claise
   Cisco Systems, Inc.
   De Kleetlaan 6a b1
   1831 Diegem
   Belgium

   Phone: +32 2 704 5622
   Email: bclaise@cisco.com
