Network Working Group M. Bagnulo
Internet-Draft UC3M
Intended status: Best Current Practice B. Claise
Expires: March 14, 2015 Cisco Systems, Inc.
P. Eardley
BT
A. Morton
AT&T Labs
A. Akhter
Cisco Systems, Inc.
September 10, 2014
Registry for Performance Metrics
draft-ietf-ippm-metric-registry-01
Abstract
This document defines the IANA Registry for Performance Metrics.
This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on March 14, 2015.
Copyright Notice
Copyright (c) 2014 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
   1. Open Issues
   2. Introduction
   3. Terminology
   4. Scope
   5. Design Considerations for the Registry and Registered Metrics
     5.1. Interoperability
     5.2. Criteria for Registered Performance Metrics
     5.3. Single point of reference for Performance metrics
     5.4. Side benefits
   6. Performance Metric Registry: Prior attempt
     6.1. Why this Attempt Will Succeed
   7. Definition of the Performance Metric Registry
     7.1. Summary Category
       7.1.1. Identifier
       7.1.2. Name
       7.1.3. URI
       7.1.4. Description
     7.2. Metric Definition Category
       7.2.1. Reference Definition
       7.2.2. Fixed Parameters
     7.3. Method of Measurement Category
       7.3.1. Reference Method
       7.3.2. Packet Generation Stream
       7.3.3. Traffic Filter
       7.3.4. Sampling distribution
       7.3.5. Run-time Parameters
       7.3.6. Role
     7.4. Output Category
       7.4.1. Value
       7.4.2. Data Format
       7.4.3. Reference
       7.4.4. Metric Units
     7.5. Administrative information
       7.5.1. Status
       7.5.2. Requester
       7.5.3. Revision
       7.5.4. Revision Date
     7.6. Comments and Remarks
   8. The Life-Cycle of Registered Metrics
     8.1. Adding new Performance Metrics to the Registry
     8.2. Revising Registered Performance Metrics
     8.3. Deprecating Registered Performance Metrics
   9. Performance Metric Registry and other Registries
   10. Security considerations
   11. IANA Considerations
   12. Acknowledgments
   13. References
     13.1. Normative References
     13.2. Informative References
   Authors' Addresses
1. Open Issues
1. Many aspects of the Naming convention are TBD, and need
discussion. For example, we have distinguished RTCP-XR metrics
as End-Point (neither active nor passive in the traditional
sense, so not Act_ or Pas_). Even though we may not cast all
naming conventions in stone at the start, it will be helpful to
look at several examples of passive metric names now.
2. We should expand on the different roles and responsibilities of
the Performance Metrics Experts versus the Performance Metrics
Directorate. At least, the description of the Performance Metrics
Directorate should be expanded. --- (v7) If these are different
entities, our only concern is the role of the "PM Experts".
3. Revised Registry Entries: Keep for history (deprecated) or
Delete?
4. Need to include an example of a name for a passive metric.
5. Definition of Parameter needs more work?
6. Whether the name of the metric should contain the version of the
metric
7. reserve some values for examples and private use?
8. Should we define a "type" column with the possible values
"active", "passive", "hybrid", and "endpoint"? If we go for all 4 of
them, we should define the corresponding prefixes for the metric
name (at this point only Pas_ and Act_ are defined).
9. URL: should we include a URL link in each registry entry with a
URL specific to the entry that links to a different text page that
contains all the details of the registry entry, as in
http://www.iana.org/assignments/xml-registry/xml-registry.xhtml#ns
2. Introduction
The IETF specifies and uses Performance Metrics of protocols and
applications transported over its protocols. Performance metrics are
such an important part of the operations of IETF protocols that
[RFC6390] specifies guidelines for their development.
The definition and use of Performance Metrics in the IETF happens in
various working groups (WG), most notably:
The "IP Performance Metrics" (IPPM) WG is the WG primarily
focusing on Performance Metrics definition at the IETF.
The "Metric Blocks for use with RTCP's Extended Report Framework"
(XRBLOCK) WG recently specified many Performance Metrics related
to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611],
which establishes a framework to allow new information to be
conveyed in RTCP, supplementing the original report blocks defined
in "RTP: A Transport Protocol for Real-Time Applications",
[RFC3550].
The "Benchmarking Methodology" WG (BMWG) defined many Performance
Metrics for use in laboratory benchmarking of inter-networking
technologies.
The "IP Flow Information eXport" (IPFIX) WG Information elements
related to Performance Metrics are currently proposed.
The "Performance Metrics for Other Layers" (PMOL) concluded WG,
defined some Performance Metrics related to Session Initiation
Protocol (SIP) voice quality [RFC6035].
It is expected that more Performance Metrics will be defined in the
future, not only IP-based metrics, but also metrics which are
protocol-specific and application-specific.
However, despite the importance of Performance Metrics, there are two
related problems for the industry. First, how to ensure that when
one party requests another party to measure (or report or in some way
act on) a particular Performance Metric, then both parties have
exactly the same understanding of what Performance Metric is being
referred to. Second, how to discover which Performance Metrics have
been specified, so as to avoid developing a new Performance Metric
that is very similar to an existing one. These problems can be
addressed by creating a registry of Performance Metrics. The usual
way in which the IETF organizes namespaces is with Internet Assigned
Numbers Authority (IANA) registries, and there is currently no
Performance Metrics Registry maintained by IANA.
This document therefore creates a Performance Metrics Registry. It
also provides best practices on how to define new or updated entries
in the Performance Metrics Registry.
3. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
[RFC2119].
The terms Performance Metric and Performance Metrics Directorate are
defined in [RFC6390], and are copied over in this document for the
reader's convenience.
Performance Metric: A Performance Metric is a quantitative measure
of performance, specific to an IETF-specified protocol or specific
to an application transported over an IETF-specified protocol.
Examples of Performance Metrics are the FTP response time for a
complete file download, the DNS response time to resolve the IP
address, a database logging time, etc.
Registered Performance Metric: A Registered Performance Metric (or
Registered Metric) is a Performance Metric expressed as an entry
in the Performance Metric Registry, and comprised of a
specifically named metric which has met all the registry review
criteria, is under the curation of IETF Performance Metrics
Experts, and whose changes are controlled by IANA.
Performance Metrics Registry: The IANA registry containing
Registered Performance Metrics. In this document, it is also
called simply "Registry".
Proprietary Registry: A set of metrics that are registered in a
proprietary registry, as opposed to Performance Metrics Registry.
Performance Metrics Experts: The Performance Metrics Experts is a
group of experts selected by the IESG to validate the Performance
Metrics before updating the Performance Metrics Registry. The
Performance Metrics Experts work closely with IANA.
Performance Metrics Directorate: The Performance Metrics Directorate
is a directorate that provides guidance for Performance Metrics
development in the IETF. The Performance Metrics Directorate
should be composed of experts in the performance community,
potentially selected from the IP Performance Metrics (IPPM),
Benchmarking Methodology (BMWG), and Performance Metrics for Other
Layers (PMOL) WGs.
Parameter: An input factor defined as a variable in the definition
of a metric; a numerical or other specified factor forming one of
a set that defines a metric or sets the conditions of its
operation. All Input Parameters must be known in order to measure
using a metric and to interpret the results. Although Input
Parameters do not change the fundamental nature of the metric's
definition, some have substantial influence on the network property
being assessed and on the interpretation of the results.
Consider packet loss in the following two cases. The first case
is packet loss as background loss, where the parameter set
includes a very sparse Poisson stream, and the metric only
characterizes the times when packets were lost. Actual user
streams likely see much higher loss at these times, due to tail
drop or radio errors. The second case is packet loss as the
inverse of Throughput, where the parameter set includes a very
dense, bursty stream, and the metric characterizes the loss
experienced by a stream that approximates a user stream. These
are both "loss metrics", but the interpretation of the results is
highly dependent on the Parameters (at least), to the extreme
where loss is actually used to infer its complement: delivered
throughput.
Active Measurement Method: Methods of Measurement conducted on
traffic which serves only the purpose of measurement and is
generated for that reason alone, and whose traffic characteristics
are known a priori. An Internet user's host can generate active
measurement traffic (virtually all typical user-generated traffic
is not dedicated to active measurement, but it can produce such
traffic with the necessary application operating).
Passive Measurement Method: Methods of Measurement conducted on
network traffic, generated either from the end users or from
network elements. One characteristic of Passive Measurement
Methods is that sensitive information may be observed, and as a
consequence, stored in the measurement system.
Hybrid Measurement Method: Methods of Measurement which use a
combination of Active Measurement and Passive Measurement methods.
4. Scope
The intended audience of this document includes those who prepare and
submit a request for a Registered Performance Metric, and the
Performance Metric Experts who review such requests.
This document specifies a Performance Metrics Registry in IANA. This
Performance Metric Registry is applicable to Performance Metrics
derived from Active Measurement, Passive Measurement, end-point
calculation, or any other form of Performance Metric. This registry
is designed to encompass Performance Metrics developed throughout the
IETF, especially in the following existing working groups: IPPM,
XRBLOCK, IPFIX, and BMWG. This document analyzes a prior attempt to
set up a Performance Metric Registry, and the reasons why that design
was inadequate [RFC6248]. Finally, this document gives a set of
guidelines for requesters and expert reviewers of candidate
Registered Performance Metrics.
This document makes no attempt to populate the Registry with initial
entries. It does provide a few examples that are merely
illustrative and should not be included in the Registry at this
point in time.
Based on [RFC5226] Section 4.3, this document is processed as Best
Current Practice (BCP) [RFC2026].
5. Design Considerations for the Registry and Registered Metrics
In this section, we detail several design considerations that are
relevant for understanding the motivations and expected use of the
Performance Metric Registry.
5.1. Interoperability
As with any IETF registry, the primary use of a registry is to manage
a namespace for its use within one or more protocols. In this
particular case of the Performance Metric Registry, there are two
types of protocols that will use the values defined in the Registry
for their operation:
o Control protocol: this type of protocol is used to allow one
entity to request that another entity perform a measurement using a
specific metric defined by the Registry. One particular example
is the LMAP framework [I-D.ietf-lmap-framework]. Using the LMAP
terminology, the Registry is used in the LMAP Control protocol to
allow a Controller to request a measurement task from one or more
Measurement Agents. In order to enable this use case, the entries
of the Performance Metric Registry must be defined well enough to
allow a Measurement Agent implementation to trigger a specific
measurement task upon the reception of a control protocol message.
This requirement heavily constrains the type of entries that are
acceptable for the Performance Metric Registry.
o Report protocol: this type of protocol is used to allow an entity
to report measurement results to another entity. By referencing
a specific Performance Metric Registry entry, it is possible to
properly characterize the measurement result data being
transferred. Using the LMAP terminology, the Registry is used in
the Report protocol to allow a Measurement Agent to report
measurement results to a Collector (see the illustrative sketch
after this list).
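As a purely illustrative, non-normative sketch (the message layout
and field names below are hypothetical and are not taken from the
LMAP protocol specifications), the following Python fragment shows
how a control request and the corresponding report could both carry
the same Registry reference, so that Controller, Measurement Agent,
and Collector share one unambiguous understanding of the metric:

   # Hypothetical control message: "run the Registered Metric
   # identified by this URI, with these Run-time Parameters".
   control_request = {
       "metric-uri":
           "urn:ietf:params:ippm:metric:Act_UDP_Latency_Poisson_99mean",
       "runtime-parameters": {"destination": "192.0.2.1",
                              "start": "2014-09-10T12:00:00Z"},
   }

   # Hypothetical report message: results are tagged with the same
   # URI, so the Collector knows exactly which Registered Metric
   # they follow.
   report = {
       "metric-uri": control_request["metric-uri"],
       "results": [{"value": 23.4, "units": "milliseconds"}],
   }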
5.2. Criteria for Registered Performance Metrics
It is neither possible nor desirable to populate the Registry with
all combinations of input parameters of all Performance Metrics. The
Registered Performance Metrics should be:
1. interpretable by the user,
2. implementable by the software designer,
3. deployable by network operators, without major impact on the
networks,
4. accurate, for interoperability and deployment across vendors,
5. operationally useful, so that it has significant industry interest
and/or has seen deployment,
6. sufficiently tightly defined, so that changing Parameters does
not change the fundamental nature of the measurement, nor change
the practicality of its implementation.
In essence, there needs to be evidence that a candidate Registry
entry has significant industry interest, or has seen deployment, and
there is agreement that the candidate Registered Metric serves its
intended purpose.
5.3. Single point of reference for Performance metrics
A Registry for Performance metrics serves as a single point of
reference for Performance Metrics defined in different working groups
in the IETF. As mentioned earlier, several WGs define Performance
Metrics in the IETF, and it is hard to keep track of all of them.
This results in multiple definitions of similar metrics
that attempt to measure the same phenomena but in slightly different
(and incompatible) ways. Having a Registry would allow both the IETF
community and external people to have a single list of relevant
Performance Metrics defined by the IETF (and others, where
appropriate). The single list is also an essential aspect of
communication about metrics, where different entities that request
measurements, execute measurements, and report the results can
benefit from a common understanding of the referenced metric.
5.4. Side benefits
There are a couple of side benefits of having such a Registry.
First, the Registry could serve as an inventory of useful and used
metrics that are normally supported by different implementations of
measurement agents. Second, the results of the metrics would be
comparable even if the measurements are performed by different
implementations
and in different networks, as the metric is properly defined. BCP
176 [RFC6576] examines whether the results produced by independent
implementations are equivalent in the context of evaluating the
completeness and clarity of metric specifications. This BCP defines
the standards track advancement testing for (active) IPPM metrics,
and the same process will likely suffice to determine whether
Registry entries are sufficiently well specified to result in
comparable (or equivalent) results. Registry entries which have
undergone such testing SHOULD be noted, with a reference to the test
results.
6. Performance Metric Registry: Prior attempt
There was a previous attempt to define a metrics registry, in RFC
4148 [RFC4148]. However, it was obsoleted by RFC 6248 [RFC6248] because
it was "found to be insufficiently detailed to uniquely identify IPPM
metrics... [there was too much] variability possible when
characterizing a metric exactly" which led to the RFC4148 registry
having "very few users, if any".
A couple of interesting additional quotes from RFC 6248 might help
understand the issues related to that registry.
1. "It is not believed to be feasible or even useful to register
every possible combination of Type P, metric parameters, and
Stream parameters using the current structure of the IPPM Metrics
Registry."
2. "The registry structure has been found to be insufficiently
detailed to uniquely identify IPPM metrics."
3. "Despite apparent efforts to find current or even future users,
no one responded to the call for interest in the RFC 4148
registry during the second half of 2010."
The current approach learns from this by tightly defining each entry
in the registry with only a few variable Parameters to be specified
by the measurement designer, if any. The idea is that entries in the
Registry represent different measurement methods which require input
parameters to set factors like source and destination addresses
(which do not change the fundamental nature of the measurement). The
downside of this approach is that it could result in a large number
of entries in the Registry. We believe that less is more in this
context - it is better to have a reduced set of useful metrics rather
than a large set of metrics with questionable usefulness. Therefore,
this document defines that the Registry only includes metrics that
are well defined and that have proven to be operationally useful. In
order to guarantee these two characteristics, we require that a set
of experts review each allocation request to verify that the metric
is well defined and operationally useful.
6.1. Why this Attempt Will Succeed
The Registry defined in this document addresses the main issues
identified in the previous attempt. As mentioned in the previous
section, one of the main issues with the previous registry was that
the metrics contained in the registry were too generic to be useful.
In this Registry, registration requests are evaluated by an expert
group, the Performance Metrics Experts, who will make sure that the
metric is properly defined. This document provides guidelines to
assess if a metric is properly defined.
Another key difference between this attempt and the previous one is
that in this case there is at least one clear user for the Registry:
the LMAP framework and protocol. Because the LMAP protocol will use
the Registry values in its operation, this actually helps to
determine if a metric is properly defined. In particular, since we
expect that the LMAP control protocol will enable a controller to
request a measurement agent to perform a measurement using a given
metric by embedding the Performance Metric Registry value in the
protocol, a metric is properly specified if it is defined well
enough that it is possible (and practical) to implement the metric
in the measurement agent. This was clearly not the case for the
previous attempt: defining a metric with an undefined Type-P makes
its implementation impractical.
7. Definition of the Performance Metric Registry
In this section we define the columns of the Performance Metric
Registry. This registry will contain all Registered Performance
Metrics including active, passive, hybrid, endpoint metrics and any
other type of performance metric that can be envisioned. Because of
that, it may be the case that some of the columns defined are not
applicable for a given type of metric. If this is the case, the
column(s) SHOULD be populated with the value "NA" (Not Applicable).
However, the "NA" value MUST NOT be used for any metric in the
following columns: Identifier, Name, URI, Status, Requester,
Revision, Revision Date, Description, and Reference Specification.
In addition, it may be possible that in the future a new
type of metric requires additional columns. Should that be the case,
it is possible to add new columns to the registry. The specification
defining the new column(s) MUST define how to populate the new
column(s) for existing entries.
The columns of the Performance Metric Registry are defined next. The
columns are grouped into "Categories" to facilitate the use of the
registry. Categories are described at the 7.x heading level, and
columns are at the 7.x.y heading level. The figure below illustrates
this organization. An entry (row) therefore gives a complete
description of a Registered Metric.
Each column serves as a check-list item and helps to avoid omissions
during registration and expert review. In some cases an entry (row)
may have some columns without specific entries, marked Not Applicable
(NA).
Registry Categories and Columns, shown as

   Category
   ------------------
   Column | Column |

Summary
-------------------------------
ID | Name | URI | Description |

Metric Definition
------------------------------------------
Reference Definition | Fixed Parameters |

Method of Measurement
-------------------------------------------------------------------
Reference | Packet     | Traffic | Sampling     | Run-time | Role |
Method    | Generation | Filter  | distribution | Param    |      |
          | Stream     |         |              |          |      |

Output
---------------------------------------
| Value | Data   | Reference | Units |
|       | Format |           |       |

Administrative information
------------------------------------
Status | Requester | Rev | Rev. Date |

Comments and Remarks
--------------------
7.1. Summary Category
7.1.1. Identifier
A numeric identifier for the Registered Performance Metric. This
identifier MUST be unique within the Performance Metric Registry.
The Registered Performance Metric unique identifier is a 16-bit
integer (range 0 to 65535). When adding newly Registered Performance
Metrics to the Performance Metric Registry, IANA SHOULD assign the
lowest available identifier to the next Registered Performance
Metric.
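A minimal sketch of this allocation rule follows; the function name
and the example set of already-assigned identifiers are purely
illustrative:

   def next_identifier(assigned_identifiers):
       # IANA assigns the lowest value in the 16-bit space (0..65535)
       # that is not already used by a Registered Performance Metric.
       for candidate in range(0, 65536):
           if candidate not in assigned_identifiers:
               return candidate
       raise ValueError("identifier space exhausted")

   print(next_identifier({0, 1, 2, 5}))   # prints 3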
7.1.2. Name
As the name of a Registered Performance Metric is the first thing a
potential implementor will use when determining whether it is
suitable for a given application, it is important to be as precise
and descriptive as possible. New names of Registered Performance
Metrics:
1. "MUST be chosen carefully to describe the Registered Performance
Metric and the context in which it will be used."
2. "MUST be unique within the Performance Metric Registry."
3. "MUST use capital letters for the first letter of each component
. All other letters MUST be lowercase, even for acronyms.
Exceptions are made for acronyms containing a mixture of
lowercase and capital letters, such as 'IPv4' and 'IPv6'."
4. "MUST use '_' between each component composing the Registered
Performance Metric name."
5. "MUST start with prefix Act_ for active measurement Registered
Performance Metric."
6. "MUST start with prefix Pas_ for passive monitoring Registered
Performance Metric."
7. Other types of metrics should define a proper prefix for
identifying the type.
8. The remaining rules for naming are left to the Performance
Metrics Experts to determine as they gather experience, so this is
an area of planned update by a future RFC.
An example is "Act_UDP_Latency_Poisson_99mean" for an active
monitoring UDP latency metric using a Poisson stream of packets and
producing the 99th percentile mean as output.
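As a non-normative illustration, the following Python sketch checks a
candidate name against the stable parts of the convention above (the
type prefix and the underscore-separated components); capitalization
details are deliberately not enforced, since rule 8 leaves further
rules to the Performance Metrics Experts, and the helper name is an
assumption of this sketch:

   import re

   TYPE_PREFIXES = ("Act_", "Pas_")   # rules 5 and 6 above

   def name_follows_convention(name):
       # Rule 4: components are separated by '_'; rules 5/6: prefix.
       if not name.startswith(TYPE_PREFIXES):
           return False
       return all(re.match(r"^\w+$", c) for c in name.split("_"))

   assert name_follows_convention("Act_UDP_Latency_Poisson_99mean")
   assert not name_follows_convention("udp_latency")   # no type prefix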
7.1.3. URI
The URI column MUST contain a URI [RFC3986] that uniquely identifies
the metric. The URI is a URN [RFC2141]. The URI is automatically
generated by prepending the prefix urn:ietf:params:ippm:metric: to
the metric name. The resulting URI is globally unique.
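Because the URI is derived mechanically from the name, the generation
step can be illustrated in a couple of lines (the function name is an
assumption of this sketch):

   URN_PREFIX = "urn:ietf:params:ippm:metric:"

   def metric_uri(name):
       # The registered prefix is simply prepended to the unique name.
       return URN_PREFIX + name

   # -> urn:ietf:params:ippm:metric:Act_UDP_Latency_Poisson_99mean
   print(metric_uri("Act_UDP_Latency_Poisson_99mean"))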
7.1.4. Description
A Registered Performance Metric Description is a written
representation of a particular Registry entry. It supplements the
metric name to help Registry users select relevant Registered
Performance Metrics.
7.2. Metric Definition Category
This category includes columns to prompt all necessary details
related to the metric definition, including the RFC reference and
values of input factors, called fixed parameters, which are left open
in the RFC but have a particular value defined by the performance
metric.
7.2.1. Reference Definition
This entry provides a reference (or references) to the relevant
section(s) of the document(s) that define the metric, as well as any
supplemental information needed to ensure an unambiguous definition
for implementations. The reference needs to be an immutable
document, such as an RFC; for other standards bodies, it is likely to
be necessary to reference a specific, dated version of a
specification.
7.2.2. Fixed Parameters
Fixed Parameters are input factors whose value must be specified in
the Registry. The measurement system uses these values.
Where referenced metrics supply a list of Parameters as part of their
descriptive template, a sub-set of the Parameters will be designated
as Fixed Parameters. For example, for active metrics, Fixed
Parameters determine most or all of the IPPM Framework convention
"packets of Type-P" as described in [RFC2330], such as transport
protocol, payload length, TTL, etc. An example for passive metrics
is an RTP packet loss calculation that relies on the validation of a
packet as RTP, which is a multi-packet validation controlled by
MIN_SEQUENTIAL as defined by [RFC3550]. Varying MIN_SEQUENTIAL
values can alter the loss report, so this value could be set as a
Fixed Parameter.
A Parameter which is Fixed for one Registry entry may be designated
as a Run-time Parameter for another Registry entry.
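To illustrate the distinction, the sketch below models one
hypothetical Registry entry as a data structure in which Fixed
Parameter values are recorded, while Run-time Parameters (Section
7.3.5) remain variables to be supplied on execution; the field names
and values are illustrative only, not a registry format:

   entry = {
       "name": "Act_UDP_Latency_Poisson_99mean",
       "fixed_parameters": {           # values recorded in the Registry
           "transport_protocol": "UDP",
           "payload_length": 128,      # bytes
           "ttl": 255,
       },
       "runtime_parameters": [         # supplied when the metric is run
           "source_address",
           "destination_address",
           "start_time",
           "end_time",
       ],
   }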
7.3. Method of Measurement Category
This category includes columns for references to relevant sections of
the RFC(s) and any supplemental information needed to ensure an
unambiguous method for implementations.
7.3.1. Reference Method
This entry provides references to relevant sections of the RFC(s)
describing the method of measurement, as well as any supplemental
information needed to ensure unambiguous interpretation for
implementations referring to the RFC text.
Specifically, this section should include pointers to pseudocode or
actual code that could be used for an unambiguous implementation.
7.3.2. Packet Generation Stream
This column applies to metrics that generate traffic for measurement
purposes, including but not necessarily limited to Active metrics.
The generated traffic is referred to as a stream, and this column
describes its characteristics. Principally, two different streams
are used in IPPM metrics: Poisson distributed as described in
[RFC2330], and Periodic as described in [RFC3432]. Both Poisson and
Periodic have their own unique parameters, and the relevant set of
values is specified in this column.
Each entry for this column contains the following information:
o Value: The name of the packet stream scheduling discipline
o Stream Parameters: The values and formats of input factors for
each type of stream. For example, the average packet rate and
distribution truncation value for streams with Poisson-distributed
inter-packet sending times.
o Reference: the specification where the stream is defined
The simplest example of stream specification is Singleton scheduling,
where a single atomic measurement is conducted. Each atomic
measurement could consist of sending a single packet (such as a DNS
request) or sending several packets (for example, to request a
webpage). Other streams support a series of atomic measurements in a
"sample", with a schedule defining the timing between each
transmitted packet and subsequent measurement.
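As a non-normative sketch, the following Python fragment generates
send times for a Poisson stream as described in [RFC2330]:
inter-packet gaps are drawn from an exponential distribution with a
given average rate and truncated at an upper limit, the two stream
Parameters mentioned above (the function and parameter names here are
illustrative):

   import random

   def poisson_send_times(rate_lambda, duration, upper_limit):
       # Exponentially distributed gaps (mean 1/rate_lambda), truncated
       # so that no single gap exceeds upper_limit seconds.
       t, times = 0.0, []
       while t < duration:
           t += min(random.expovariate(rate_lambda), upper_limit)
           if t < duration:
               times.append(t)
       return times

   # Roughly one packet per second over a 60 s sample, gaps capped
   # at 30 s.
   print(poisson_send_times(1.0, 60.0, 30.0)[:5])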
7.3.3. Traffic Filter
This column applies to metrics that observe packets flowing on the
wire, i.e., traffic that is not specifically addressed to the
measurement agent. This includes, but is not limited to, Passive
Metrics. The filter specifies the traffic constraints for which the
passive measurement method used is valid (or invalid). This includes
valid packet sampling ranges and the width of valid traffic matches
(e.g., all traffic on an interface, all UDP packets, or the packets
of a single flow such as the same RTP session).
It is possible that the measurement method itself has no specific
limitation; however, a specific registry entry, with its combination
of Fixed Parameters, may imply restrictions. Such restrictions would
be listed in this field.
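For illustration only, the sketch below expresses a Traffic Filter as
a simple predicate that keeps the UDP packets of one flow (a fixed
5-tuple); the packet representation as a dictionary of header fields
is an assumption of this sketch:

   FLOW = {"proto": "UDP", "src": "192.0.2.1", "dst": "198.51.100.2",
           "sport": 5004, "dport": 5004}

   def matches_filter(packet):
       # True only for packets whose header fields match the flow.
       return all(packet.get(k) == v for k, v in FLOW.items())

   observed = [
       {"proto": "UDP", "src": "192.0.2.1", "dst": "198.51.100.2",
        "sport": 5004, "dport": 5004, "size": 200},
       {"proto": "TCP", "src": "192.0.2.1", "dst": "198.51.100.2",
        "sport": 443, "dport": 51000, "size": 1400},
   ]
   kept = [p for p in observed if matches_filter(p)]  # only the UDP packet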
7.3.4. Sampling distribution
The sampling distribution defines which of the packets that match the
Traffic Filter are actually used for the measurement. One
possibility is "all", which means that all packets matching the
Traffic Filter are considered, but there may be other sampling
strategies. It includes the following information:
Value: the name of the sampling distribution
Parameters: if any.
Reference definition: pointer to the specification where the
sampling distribution is properly defined.
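One sampling strategy other than "all" is simple count-based 1-in-N
sampling over the packets that pass the Traffic Filter; the sketch
below is illustrative only, and N is the distribution's only
parameter here:

   def one_in_n(packets, n):
       # Keep packets 0, n, 2n, ... of those matching the Traffic Filter.
       return [p for i, p in enumerate(packets) if i % n == 0]

   sampled = one_in_n(list(range(100)), 10)   # 10 "packets" out of 100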
7.3.5. Run-time Parameters
Run-Time Parameters are input factors that must be determined,
configured into the measurement system, and reported with the results
for the context to be complete. However, the values of these
parameters are not specified in the Registry; rather, these
parameters are listed as an aid to the measurement system implementor
or user (they must be left as variables, and supplied on execution).
Where metrics supply a list of Parameters as part of their
descriptive template, a sub-set of the Parameters will be designated
as Run-Time Parameters.
A Data Format of each Run-time Parameter SHALL be specified in this
column, to simplify the control and implementation of measurement
devices.
Examples of Run-time Parameters include IP addresses, measurement
point designations, start times and end times for measurement, and
other information essential to the method of measurement.
7.3.6. Role
In some methods of measurement, several roles may be defined; e.g.,
in a one-way packet delay active measurement, one measurement agent
generates the packets and another receives them. This column
contains the name of the role for this particular entry. In the
previous example, there would be two entries in the Registry, one for
each role, so that a measurement agent instructed to perform the
one-way delay source metric knows that it is supposed to generate
packets. The values for this field are defined in the reference
method of measurement.
7.4. Output Category
For entries which involve a stream and many singleton measurements, a
statistic may be specified in this column to summarize the results to
a single value. If the complete set of measured singletons is
output, this will be specified here.
Some metrics embed one specific statistic in the reference metric
definition, while others allow several output types or statistics.
7.4.1. Value
This column contains the name of the output type. The output type
defines the type of result that the metric produces. It can be the
raw results, or it can be some form of statistic. The specification
of the output type must define the format of the output. In some
systems, format specifications will simplify both measurement
implementation and collection/storage tasks. Note that if two
different statistics are required from a single measurement (for
example, both "Xth percentile mean" and "Raw"), then a new output
type must be defined ("Xth percentile mean AND Raw").
7.4.2. Data Format
This column provides the data format for the output. It is provided
to simplify the communication with collection systems and
implementation of measurement devices.
7.4.3. Reference
This column contains a pointer to the specification where the output
type is defined.
7.4.4. Metric Units
The measured results must be expressed using some standard dimension
or units of measure. This column provides the units.
When a sample of singletons (see [RFC2330] for definitions of these
terms) is collected, this entry will specify the units for each
measured value.
7.5. Administrative information
7.5.1. Status
The status of the specification of this Registered Performance
Metric. Allowed values are 'current' and 'deprecated'. All newly
defined Information Elements have 'current' status.
7.5.2. Requester
The requester for the Registered Performance Metric. The requester
MAY be a document, such as an RFC, or a person.
7.5.3. Revision
The revision number of a Registered Performance Metric, starting at 0
for Registered Performance Metrics at time of definition and
incremented by one for each revision.
7.5.4. Revision Date
The date of acceptance or the most recent revision for the Registered
Performance Metric.
7.6. Comments and Remarks
Besides providing additional details which do not appear in other
categories, this open Category (single column) allows for unforeseen
issues to be addressed by simply updating this Informational entry.
8. The Life-Cycle of Registered Metrics
Once a Performance Metric or set of Performance Metrics has been
identified for a given application, candidate Registry entry
specifications in accordance with Section 7 are submitted to IANA to
follow the process for review by the Performance Metric Experts, as
defined below. This process is also used for other changes to the
Performance Metric Registry, such as deprecation or revision, as
described later in this section.
It is also desirable that the author(s) of a candidate Registry entry
seek review in the relevant IETF working group, or offer the
opportunity for review on the WG mailing list.
8.1. Adding new Performance Metrics to the Registry
Requests to change Registered Metrics in the Performance Metric
Registry are submitted to IANA, which forwards the request to a
designated group of experts (Performance Metric Experts) appointed by
the IESG; these are the reviewers called for by the Expert Review
RFC5226 policy defined for the Performance Metric Registry. The
Performance Metric Experts review the request for such things as
compliance with this document, compliance with other applicable
Performance Metric-related RFCs, and consistency with the currently
defined set of Registered Performance Metrics.
Authors are expected to review compliance with the specifications in
this document to check their submissions before sending them to IANA.
The Performance Metric Experts should endeavor to complete referred
reviews in a timely manner. If the request is acceptable, the
Performance Metric Experts signify their approval to IANA, which
changes the Performance Metric Registry. If the request is not
acceptable, the Performance Metric Experts can coordinate with the
requester to change the request to be compliant. The Performance
Metric Experts may also choose in exceptional circumstances to reject
clearly frivolous or inappropriate change requests outright.
This process should not in any way be construed as allowing the
Performance Metric Experts to overrule IETF consensus. Specifically,
any Registered Metrics that were added with IETF consensus require
IETF consensus for revision or deprecation.
Decisions by the Performance Metric Experts may be appealed as in
Section 7 of RFC5226.
8.2. Revising Registered Performance Metrics
A request for Revision is ONLY permissible when the changes maintain
backward-compatibility with implementations of the prior Registry
entry describing a Registered Metric (entries with lower revision
numbers, but the same Identifier and Name).
The purpose of the Status field in the Performance Metric Registry is
to indicate whether the entry for a Registered Metric is 'current' or
'deprecated'.
In addition, no policy is defined for revising IANA Performance
Metric entries or addressing errors therein. To be clear, changes
and deprecations within the Performance Metric Registry are not
encouraged, and should be avoided to the extent possible. However,
in recognition that change is inevitable, the provisions of this
section address the need for revisions.
Revisions are initiated by sending a candidate Registered Performance
Metric definition to IANA, as in Section X, identifying the existing
Registry entry.
The primary requirement in the definition of a policy for managing
changes to existing Registered Performance Metrics is avoidance of
interoperability problems; Performance Metric Experts must work to
maintain interoperability above all else. Changes to Registered
Performance Metrics may only be done in an inter-operable way;
necessary changes that cannot be done in a way to allow
interoperability with unchanged implementations must result in the
creation of a new Registered Metric and possibly the deprecation of
the earlier metric.
A change to a Registered Performance Metric is held to be backward-
compatible only when:
1. "it involves the correction of an error that is obviously only
editorial; or"
2. "it corrects an ambiguity in the Registered Performance Metric's
definition, which itself leads to issues severe enough to prevent
the Registered Performance Metric's usage as originally defined;
or"
3. "it corrects missing information in the metric definition without
changing its meaning (e.g., the explicit definition of 'quantity'
semantics for numeric fields without a Data Type Semantics
value); or"
4. "it harmonizes with an external reference that was itself
corrected."
5. "BENOIT: NOTE THAT THERE ARE MORE RULES IN RFC 7013 SECTION 5 BUT
THEY WOULD ONLY APPLY TO THE ACTIVE/PASSIVE DRAFTS. TO BE
DISCUSSED."
If a change is deemed permissible by the Performance Metric Experts,
IANA makes the change in the Performance Metric Registry. The
requester of the change is appended to the requester in the Registry.
Each Registered Performance Metric in the Registry has a revision
number, starting at zero. Each change to a Registered Performance
Metric following this process increments the revision number by one.
COMMENT: Al (and Phil) think we should keep old/revised entries as-
is, marked as deprecated >>>> Since any revision must be inter-
operable according to the criteria above, there is no need for the
Performance Metric Registry to store information about old revisions.
When a revised Registered Performance Metric is accepted into the
Performance Metric Registry, the date of acceptance of the most
recent revision is placed into the Revision Date column of the
Registry for that Registered Performance Metric.
Where applicable, additions to Registry entries in the form of text
Comments or Remarks should include the date, but such additions may
not constitute a revision according to this process.
8.3. Deprecating Registered Performance Metrics
Changes that are not permissible under the above criteria for
revising a Registered Metric may only be handled by deprecation. A
Registered Performance Metric MAY be deprecated and replaced when:
1. "the Registered Performance Metric definition has an error or
shortcoming that cannot be permissibly changed as in
Section Revising Registered Performance Metrics; or"
2. "the deprecation harmonizes with an external reference that was
itself deprecated through that reference's accepted deprecation
method; or"
A request for deprecation is sent to IANA, which passes it to the
Performance Metric Experts for review, as in Section 'The Process for
Review by the Performance Metric Experts'. When deprecating a
Performance Metric, the Performance Metric description in the
Performance Metric Registry must be updated to explain the
deprecation, as well as to refer to any new Performance Metrics
created to replace the deprecated Performance Metric.
The revision number of a Registered Performance Metric is incremented
upon deprecation, and the Revision Date is updated, as with any
revision.
The use of deprecated Registered Metrics should result in a log entry
or human-readable warning by the respective application.
Names and Metric IDs of deprecated Registered Metrics must not be
reused.
9. Performance Metric Registry and other Registries
BENOIT: TBD.
THE BASIC IDEA IS THAT PEOPLE COULD DIRECTLY DEFINE PERF. METRICS IN
OTHER EXISTING REGISTRIES, FOR SPECIFIC PROTOCOL/ENCODING. EXAMPLE:
IPFIX. IDEALLY, ALL PERF. METRICS SHOULD BE DEFINED IN THIS
REGISTRY AND REFERRED TO FROM OTHER REGISTRIES.
10. Security considerations
This draft does not introduce any new security considerations for the
Internet. However, the definition of Performance Metrics may
introduce some security concerns, and should be reviewed with
security in mind.
11. IANA Considerations
This document specifies the procedure for Performance Metrics
Registry setup. IANA is requested to create a new Registry for
Performance Metrics called "Registered Performance Metrics" with the
columns defined in Section 7.
New assignments for the Performance Metric Registry will be administered
by IANA through Expert Review [RFC5226], i.e., review by one of a
group of experts, the Performance Metric Experts, appointed by the
IESG upon recommendation of the Transport Area Directors. The
experts will initially be drawn from the Working Group Chairs and
document editors of the Performance Metrics Directorate [performance-
metrics-directorate].
This document requests the allocation of the URI prefix
urn:ietf:params:ippm:metric for the purpose of generating URIs for
registered metrics.
12. Acknowledgments
Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading
some brainstorming sessions on this topic.
13. References
13.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2026] Bradner, S., "The Internet Standards Process -- Revision
3", BCP 9, RFC 2026, October 1996.
[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", RFC 2330, May
1998.
[RFC4148] Stephan, E., "IP Performance Metrics (IPPM) Metrics
Registry", BCP 108, RFC 4148, August 2005.
[RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an
IANA Considerations Section in RFCs", BCP 26, RFC 5226,
May 2008.
[RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics
(IPPM) Registry of Metrics Are Obsolete", RFC 6248, April
2011.
[RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New
Performance Metric Development", BCP 170, RFC 6390,
October 2011.
[RFC6576] Geib, R., Morton, A., Fardid, R., and A. Steinmitz, "IP
Performance Metrics (IPPM) Standard Advancement Testing",
BCP 176, RFC 6576, March 2012.
[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
Resource Identifier (URI): Generic Syntax", STD 66, RFC
3986, January 2005.
[RFC2141] Moats, R., "URN Syntax", RFC 2141, May 1997.
13.2. Informative References
[RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control
Protocol Extended Reports (RTCP XR)", RFC 3611, November
2003.
[RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V.
Jacobson, "RTP: A Transport Protocol for Real-Time
Applications", STD 64, RFC 3550, July 2003.
[RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich,
"Session Initiation Protocol Event Package for Voice
Quality Reporting", RFC 6035, November 2010.
[I-D.ietf-lmap-framework]
Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
Aitken, P., and A. Akhter, "A framework for large-scale
measurement platforms (LMAP)", draft-ietf-lmap-
framework-08 (work in progress), August 2014.
[RFC5477] Dietz, T., Claise, B., Aitken, P., Dressler, F., and G.
Carle, "Information Model for Packet Sampling Exports",
RFC 5477, March 2009.
[RFC5102] Quittek, J., Bryant, S., Claise, B., Aitken, P., and J.
Meyer, "Information Model for IP Flow Information Export",
RFC 5102, January 2008.
[RFC6792] Wu, Q., Hunt, G., and P. Arden, "Guidelines for Use of the
RTP Monitoring Framework", RFC 6792, November 2012.
[RFC5905] Mills, D., Martin, J., Burbank, J., and W. Kasch, "Network
Time Protocol Version 4: Protocol and Algorithms
Specification", RFC 5905, June 2010.
[RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation
Metric for IP Performance Metrics (IPPM)", RFC 3393,
November 2002.
[RFC6776] Clark, A. and Q. Wu, "Measurement Identity and Information
Reporting Using a Source Description (SDES) Item and an
RTCP Extended Report (XR) Block", RFC 6776, October 2012.
[RFC7003] Clark, A., Huang, R., and Q. Wu, "RTP Control Protocol
(RTCP) Extended Report (XR) Block for Burst/Gap Discard
Metric Reporting", RFC 7003, September 2013.
[RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network
performance measurement with periodic streams", RFC 3432,
November 2002.
[RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session
Description Protocol", RFC 4566, July 2006.
[RFC5481] Morton, A. and B. Claise, "Packet Delay Variation
Applicability Statement", RFC 5481, March 2009.
Authors' Addresses
Marcelo Bagnulo
Universidad Carlos III de Madrid
Av. Universidad 30
Leganes, Madrid 28911
SPAIN
Phone: 34 91 6249500
Email: marcelo@it.uc3m.es
URI: http://www.it.uc3m.es
Benoit Claise
Cisco Systems, Inc.
De Kleetlaan 6a b1
1831 Diegem
Belgium
Email: bclaise@cisco.com
Philip Eardley
BT
Adastral Park, Martlesham Heath
Ipswich
ENGLAND
Email: philip.eardley@bt.com
Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ
USA
Email: acmorton@att.com
Aamer Akhter
Cisco Systems, Inc.
7025 Kit Creek Road
RTP, NC 27709
USA
Email: aakhter@cisco.com