Benchmarking Methodology for Software-Defined Networking (SDN) Controller Performance
draft-ietf-bmwg-sdn-controller-benchmark-meth-09

Document history

Date Rev. By Action
2018-10-16
09 (System) RFC Editor state changed to AUTH48-DONE from AUTH48
2018-08-31
09 (System) RFC Editor state changed to AUTH48 from RFC-EDITOR
2018-08-27
09 (System) RFC Editor state changed to RFC-EDITOR from AUTH
2018-08-03
09 (System) RFC Editor state changed to AUTH from EDIT
2018-05-25
09 (System) RFC Editor state changed to EDIT
2018-05-25
09 (System) IESG state changed to RFC Ed Queue from Approved-announcement sent
2018-05-25
09 (System) Announcement was received by RFC Editor
2018-05-25
09 (System) IANA Action state changed to No IC from In Progress
2018-05-25
09 (System) IANA Action state changed to In Progress
2018-05-25
09 Amy Vezza IESG state changed to Approved-announcement sent from Approved-announcement to be sent::AD Followup
2018-05-25
09 Amy Vezza IESG has approved the document
2018-05-25
09 Amy Vezza Closed "Approve" ballot
2018-05-25
09 Amy Vezza Ballot approval text was generated
2018-05-25
09 Amy Vezza RFC Editor Note was changed
2018-05-25
09 Amy Vezza RFC Editor Note was changed
2018-05-25
09 Amy Vezza RFC Editor Note for ballot was generated
2018-05-25
09 Amy Vezza RFC Editor Note for ballot was generated
2018-05-25
09 (System) Sub state has been changed to AD Followup from Revised ID Needed
2018-05-25
09 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-09.txt
2018-05-25
09 (System) New version approved
2018-05-25
09 (System) Request for posting confirmation emailed to previous authors: Vishwas Manral, Sarah Banks, Bhuvaneswaran Vengainathan, Anton Basil, Mark Tassinari
2018-05-25
09 Bhuvaneswaran Vengainathan Uploaded new revision
2018-04-19
08 Cindy Morgan IESG state changed to Approved-announcement to be sent::Revised I-D Needed from IESG Evaluation
2018-04-19
08 Suresh Krishnan
[Ballot comment]
I share Ignas's concern about this being too tightly associated with the OpenFlow model.

* Section 4.1
  The test cases SHOULD use Leaf-Spine topology with at least 1
  Network Device in the topology for benchmarking.

How is it even possible to have a leaf-spine topology with one Network Device?
2018-04-19
08 Suresh Krishnan [Ballot Position Update] New position, No Objection, has been recorded for Suresh Krishnan
2018-04-19
08 Ignas Bagdonas
[Ballot comment]
The document seems to assume the OpenFlow dataplane abstraction model, which is one of the possible models; the practical applicability of such a model to anything beyond experimental deployments is a completely separate question outside the scope of this document. The methodology tends to apply to a broader set of central-control-based systems, not only to data plane operations; the document therefore seems to provide at least something practically usable for benchmarking such central control systems. Possibly the document could mention the assumptions made about the overall model to which the defined methodology applies.

A nit: s/Khasanov Boris/Boris Khasanov, unless Boris himself would insist otherwise.
2018-04-19
08 Ignas Bagdonas [Ballot Position Update] New position, No Objection, has been recorded for Ignas Bagdonas
2018-04-19
08 Benjamin Kaduk
[Ballot comment]
In the Abstract:

  This document defines the methodologies for benchmarking control
  plane performance of SDN controllers.

Why "the" methodologies?  That seems more authoritative than is
appropriate in an Informational document.


Why do we need the test setup diagrams in both the terminology draft
and this one?  It seems like there is some excess redundancy here.


In Section 4.1, how can we even have a topology with just one
network device?  This "at least 1" seems too low.  Similarly, how
would TP1 and TP2 *not* be connected to the same node if there is
only one device?

Thank you for adding consideration to key distribution in Section
4.4, as noted by the secdir review.  But insisting on having key
distribution done prior to testing gives the impression that keys
are distributed once and updated never, which has questionable
security properties.  Perhaps there is value in doing some testing
while rekeying is in progress?

I agree with others that the statistical methodology is not clearly
justified, such as the sample size of 10 in Section 4.7 (with no
consideration for sample relative variance), use of sample vs.
population variance, etc.

It seems like the measurements being described sometimes start the
timer at an event at a network element and other times start the
timer when a message enters the SDN controller itself (similarly for
outgoing messages), which seems to include a different treatment of
propagation delays in the network, for different tests.  Assuming
these differences were made by conscious choice, it might be nice to
describe why the network propagation is/is not included for any
given measurement.

It looks like the term "Nrxn" is introduced implicitly and the
reader is supposed to infer that the 'n' represents a counter, with
Nrx1 corresponding to the first measurement, Nrx2 the second, etc.
It's probably worth mentioning this explicitly, for all fields that
are measured on a per-trial/counter basis.

I'm not sure that the end condition for the test in Section 5.2.2
makes sense.

It seems like the test in Section 5.2.3 should not allow flexibility
in "unique source and/or destination address" and rather should
specify exactly what happens.

In Section 5.3.1, only considering 2% of asynchronous messages as
invalid implies a preconception about what might be the reason for
such invalid messages, but that assumption might not hold in the
case of an active attack, which may be somewhat different from the
pure DoS scenario considered in the following section.

Section 5.4.1 says "with incremental sequence number and source
address" -- are both the sequence number and source address
incrementing for each packet sent?  This could be more clear.
It also is a little jarring to refer to "test traffic generator TP2"
when TP2 is just receiving traffic and not generating it.

Appendix B.3 indicates that plain TCP or TLS can be used for
communications between switch and controller.  It seems like this
would be a highly relevant test parameter to report with the results
for the tests described in this document, since TLS would introduce
additional overhead to be quantified!

The figure in Section B.4.5 leaves me a little confused as to what
is being measured, if the SDN Application is depicted as just
spontaneously installing a flow at some time vaguely related to
traffic generation but not dependent on or triggered by the traffic
generation.
2018-04-19
08 Benjamin Kaduk [Ballot Position Update] New position, No Objection, has been recorded for Benjamin Kaduk
2018-04-19
08 Adam Roach [Ballot comment]
I again share Martin's concerns about the use of the word "standard" in this document's abstract and introduction.
2018-04-19
08 Adam Roach [Ballot Position Update] New position, No Objection, has been recorded for Adam Roach
2018-04-18
08 Alvaro Retana [Ballot Position Update] New position, No Objection, has been recorded for Alvaro Retana
2018-04-18
08 Alissa Cooper
[Ballot comment]
Regarding this text:

"The test SHOULD use one of the test setups described in section 3.1
  or section 3.2 of this document in combination with Appendix A."

Appendix A is titled "Example Test Topology." If it's really an example, then it seems like it should not be normatively required. So either the appendix needs to be re-named, or the normative language needs to be removed. And if it is normatively required, why is it in an appendix? The document would also benefit from describing what the exception cases to the SHOULD are (I guess if the tester doesn't care about having comparable results with other tests?).
2018-04-18
08 Alissa Cooper [Ballot Position Update] New position, No Objection, has been recorded for Alissa Cooper
2018-04-18
08 Deborah Brungard [Ballot Position Update] New position, No Objection, has been recorded for Deborah Brungard
2018-04-18
08 Mirja Kühlewind
[Ballot comment]
Editorial comments:

1) sdn-controller-benchmark-term should probably rather be referred in the intro (instead of the abstract).

2) Is the test setup needed in both docs (this and sdn-controller-benchmark-term) or would a reference to sdn-controller-benchmark-term maybe be sufficient?

3) Appendix A.1 should probably also be moved to sdn-controller-benchmark-term
2018-04-18
08 Mirja Kühlewind Ballot comment text updated for Mirja Kühlewind
2018-04-18
08 Mirja Kühlewind
[Ballot comment]
Editorial comments:
1) sdn-controller-benchmark-term should probably rather be referred in the intro (instead of the abstract).
2) Is the test setup needed in both docs (this and sdn-controller-benchmark-term) or would a reference to sdn-controller-benchmark-term maybe be sufficient?
3) Appendix A.1 should probably also be moved to sdn-controller-benchmark-term
2018-04-18
08 Mirja Kühlewind [Ballot Position Update] New position, No Objection, has been recorded for Mirja Kühlewind
2018-04-18
08 Martin Vigoureux
[Ballot comment]
Hello,

I have the same question/comment as on the companion document:
I wonder about the use of the term "standard" in the abstract in view of the intended status of the document (Informational).
Could the use of this word confuse the reader?
2018-04-18
08 Martin Vigoureux [Ballot Position Update] New position, No Objection, has been recorded for Martin Vigoureux
2018-04-18
08 Terry Manderson [Ballot Position Update] New position, No Objection, has been recorded for Terry Manderson
2018-04-17
08 Eric Rescorla
[Ballot comment]
Rich version of this review at:
https://mozphab-ietf.devsvcdev.mozaws.net/D3948

COMMENTS
>      reported.

>  4.7. Test Repeatability

>      To increase the confidence in measured result, it is recommended
>      that each test SHOULD be repeated a minimum of 10 times.

Nit: you might be happier with "RECOMMENDED that each test be repeated
..."

Also, where does 10 come from? Generally, the number of trials you
need depends on the variance of each trial.



>      Test Reporting

>      Each test has a reporting format that contains some global and
>      identical reporting components, and some individual components that
>      are specific to individual tests. The following test configuration
>      parameters and controller settings parameters MUST be reflected in

This is an odd MUST, as it's not required for interop.


>      5. Stop the trial when the discovered topology information matches
>        the deployed network topology, or when the discovered topology
>        information return the same details for 3 consecutive queries.
>      6. Record the time last discovery message (Tmn) sent to controller
>        from the forwarding plane test emulator interface (I1) when the
>        trial completed successfully. (e.g., the topology matches).

How large is the TD usually? How much does 3 seconds compare to that?


>      Average Topology Discovery Time (TDm) = SUM[Tri] / Total Trials

>      Topology Discovery Time Variance (TDv)
>          = SUM[SQUAREOF(Tri - TDm)] / (Total Trials - 1)


You probably don't need to specify individual formulas for mean and
variance. However, you probably do want to explain why you are using
the n-1 sample variance formula.
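
For readers unfamiliar with the distinction: dividing by n-1 (Bessel's
correction) compensates for the bias introduced when the mean is estimated
from the same samples, which is why the sample formula is the usual choice
for a small number of trials. A small illustration with hypothetical
timing values:

    import statistics

    trials = [0.41, 0.39, 0.44, 0.40, 0.43]  # hypothetical TDi values, seconds
    print(statistics.pvariance(trials))  # population variance, divides by n
    print(statistics.variance(trials))   # sample variance, divides by n-1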



>  Measurement:

>                                                  (R1-T1) + (R2-T2) + ... + (Rn-Tn)
>      Asynchronous Message Processing Time Tr1 = ---------------------------------
>                                                                Nrx

Incidentally, this formula is the same as \sum_i{R_i} - \sum_i{T_i}
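
Written out in full (an editorial restatement of the quoted formula, which
also divides by Nrx):

    T_{r1} = \frac{1}{N_{rx}} \sum_{i=1}^{N_{rx}} (R_i - T_i)
           = \frac{\sum_i R_i - \sum_i T_i}{N_{rx}}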


>      messages transmitted to the controller.

>      If this test is repeated with varying number of nodes with same
>      topology, the results SHOULD be reported in the form of a graph. The
>      X coordinate SHOULD be the Number of nodes (N), the Y coordinate
>      SHOULD be the average Asynchronous Message Processing Time.

This is an odd metric because an implementation which handled overload
by dropping every other message would look better than one which
handled overload by queuing.
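
A hypothetical illustration of that concern, with made-up numbers:

    Implementation A (queues everything):   Nrx = 1000, mean = 50 ms
    Implementation B (drops every other):   Nrx = 500,  mean = 5 ms

    B reports a far better average while silently losing half the
    messages, because dropped messages never enter the (Ri - Ti) sum.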
2018-04-17
08 Eric Rescorla [Ballot Position Update] New position, No Objection, has been recorded for Eric Rescorla
2018-04-16
08 Spencer Dawkins
[Ballot comment]
I have a few questions, at the No Objection level ... do the right thing, of course.

I apologize for attempting to play amateur statistician, but it seems to me that this text

4.7. Test Repeatability

  To increase the confidence in measured result, it is recommended
  that each test SHOULD be repeated a minimum of 10 times.

is recommending a heuristic, when I'd think that you'd want to repeat a test until the results seem to be converging on some measure of central tendency, given some acceptable margin of error, and this text

Procedure:

  1. Establish the network connections between controller and network
    nodes.
  2. Query the controller for the discovered network topology
    information and compare it with the deployed network topology
    information.
  3. If the comparison is successful, increase the number of nodes by 1
    and repeat the trial.
    If the comparison is unsuccessful, decrease the number of nodes by
    1 and repeat the trial.
  4. Continue the trial until the comparison of step 3 is successful.
  5. Record the number of nodes for the last trial (Ns) where the
    topology comparison was successful.

seems to beg for a binary search, especially if you're testing whether a controller can support a large number of network nodes ...
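
A sketch of that binary search (an editorial illustration;
compare_topology(n) is a hypothetical predicate that deploys n nodes and
returns True when the discovered topology matches the deployed one, and
the search assumes success is monotonic in n):

    def max_supported_nodes(compare_topology, lo=1, hi=4096):
        # Largest n in [lo, hi] for which topology discovery succeeds.
        best = 0
        while lo <= hi:
            mid = (lo + hi) // 2
            if compare_topology(mid):
                best = mid       # mid nodes discovered correctly; try larger
                lo = mid + 1
            else:
                hi = mid - 1     # discovery failed; try smaller
        return best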

This text

Reference Test Setup:

  The test SHOULD use one of the test setups described in section 3.1
  or section 3.2 of this document in combination with Appendix A.

or some variation is repeated about 16 times, and I'm not understanding why this is using BCP 14 language, and if BCP 14 language is the right thing to do, I'm not understanding why it's always SHOULD.

I get the part that this will help compare results, if two researchers are running the same tests. Is there more to the requirement than that?

In this text,

Procedure:

  1. Perform the listed tests and launch a DoS attack towards
    controller while the trial is running.

  Note:

    DoS attacks can be launched on one of the following interfaces.

    a. Northbound (e.g., Query for flow entries continuously on
      northbound interface)
    b. Management (e.g., Ping requests to controller's management
      interface)
    c. Southbound (e.g., TCP SYN messages on southbound interface)

is there a canonical description of "DoS attack" that researchers should be using, in order to compare results? These are just examples, right?

Is the choice of

  [OpenFlow Switch Specification]  ONF,"OpenFlow Switch Specification"
              Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.

intentional? I'm googling that the current version of OpenFlow is 1.5.1, from 2015.
2018-04-16
08 Spencer Dawkins [Ballot Position Update] New position, No Objection, has been recorded for Spencer Dawkins
2018-04-16
08 Stewart Bryant Request for Telechat review by GENART Completed: Ready with Nits. Reviewer: Stewart Bryant. Sent review to list.
2018-03-08
08 Jean Mahoney Request for Telechat review by GENART is assigned to Stewart Bryant
2018-03-08
08 Jean Mahoney Request for Telechat review by GENART is assigned to Stewart Bryant
2018-03-02
08 (System) IANA Review state changed to IANA OK - No Actions Needed from Version Changed - Review Needed
2018-02-27
08 Warren Kumari Please review in conjunction with draft-ietf-bmwg-sdn-controller-benchmark-term.
2018-02-27
08 Warren Kumari IESG state changed to IESG Evaluation from Waiting for Writeup
2018-02-27
08 Warren Kumari Placed on agenda for telechat - 2018-04-19
2018-02-27
08 Warren Kumari Ballot has been issued
2018-02-27
08 Warren Kumari [Ballot Position Update] New position, Yes, has been recorded for Warren Kumari
2018-02-27
08 Warren Kumari Created "Approve" ballot
2018-02-27
08 Warren Kumari Ballot writeup was changed
2018-02-27
08 Warren Kumari Changed consensus to Yes from Unknown
2018-02-25
08 (System) IANA Review state changed to Version Changed - Review Needed from IANA OK - No Actions Needed
2018-02-25
08 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-08.txt
2018-02-25
08 (System) New version approved
2018-02-25
08 (System) Request for posting confirmation emailed to previous authors: Vishwas Manral, Sarah Banks, Bhuvaneswaran Vengainathan, Anton Basil, Mark Tassinari
2018-02-25
08 Bhuvaneswaran Vengainathan Uploaded new revision
2018-02-02
07 (System) IESG state changed to Waiting for Writeup from In Last Call
2018-01-31
07 (System) IANA Review state changed to IANA OK - No Actions Needed from IANA - Review Needed
2018-01-31
07 Sabrina Tanamal
(Via drafts-lastcall@iana.org): IESG/Authors/WG Chairs:

The IANA Services Operator has reviewed draft-ietf-bmwg-sdn-controller-benchmark-meth-07, which is currently in Last Call, and has the following comments:

We understand that this document doesn't require any registry actions.

While it's often helpful for a document's IANA Considerations section to remain in place upon publication even if there are no actions, if the authors strongly prefer to remove it, we do not object.

If this assessment is not accurate, please respond as soon as possible.

Thank you,

Sabrina Tanamal
Senior IANA Services Specialist
2018-01-30
07 Stewart Bryant Request for Last Call review by GENART Completed: Ready with Nits. Reviewer: Stewart Bryant. Sent review to list.
2018-01-29
07 Scott Bradner Request for Last Call review by OPSDIR Completed: Has Issues. Reviewer: Scott Bradner. Sent review to list.
2018-01-29
07 Min Ye Request for Last Call review by RTGDIR Completed: Has Nits. Reviewer: Henning Rogge.
2018-01-26
07 Russ Housley Request for Last Call review by SECDIR Completed: Has Issues. Reviewer: Russ Housley. Sent review to list.
2018-01-25
07 Min Ye Request for Last Call review by RTGDIR is assigned to Henning Rogge
2018-01-25
07 Min Ye Request for Last Call review by RTGDIR is assigned to Henning Rogge
2018-01-25
07 Tero Kivinen Request for Last Call review by SECDIR is assigned to Russ Housley
2018-01-25
07 Tero Kivinen Request for Last Call review by SECDIR is assigned to Russ Housley
2018-01-25
07 Jean Mahoney Request for Last Call review by GENART is assigned to Stewart Bryant
2018-01-25
07 Jean Mahoney Request for Last Call review by GENART is assigned to Stewart Bryant
2018-01-25
07 Gunter Van de Velde Request for Last Call review by OPSDIR is assigned to Scott Bradner
2018-01-25
07 Gunter Van de Velde Request for Last Call review by OPSDIR is assigned to Scott Bradner
2018-01-24
07 Min Ye Request for Last Call review by RTGDIR is assigned to Ravi Singh
2018-01-24
07 Min Ye Request for Last Call review by RTGDIR is assigned to Ravi Singh
2018-01-21
07 Min Ye Request for Last Call review by RTGDIR is assigned to Henning Rogge
2018-01-21
07 Min Ye Request for Last Call review by RTGDIR is assigned to Henning Rogge
2018-01-19
07 Alvaro Retana Requested Last Call review by RTGDIR
2018-01-19
07 Cindy Morgan IANA Review state changed to IANA - Review Needed
2018-01-19
07 Cindy Morgan
The following Last Call announcement was sent out (ends 2018-02-02):

From: The IESG
To: IETF-Announce
CC: draft-ietf-bmwg-sdn-controller-benchmark-meth@ietf.org, bmwg-chairs@ietf.org, acmorton@att.com, Al Morton, bmwg@ietf.org, warren@kumari.net
Reply-To: ietf@ietf.org
Sender:
Subject: Last Call:  (Benchmarking Methodology for SDN Controller Performance) to Informational RFC


The IESG has received a request from the Benchmarking Methodology WG (bmwg)
to consider the following document:
- 'Benchmarking Methodology for SDN Controller Performance'
  as Informational RFC

The IESG plans to make a decision in the next few weeks, and solicits final
comments on this action. Please send substantive comments to the
ietf@ietf.org mailing lists by 2018-02-02. Exceptionally, comments may be
sent to iesg@ietf.org instead. In either case, please retain the beginning of
the Subject line to allow automated sorting.

Abstract


  This document defines the methodologies for benchmarking control
  plane performance of SDN controllers. Terminology related to
  benchmarking SDN controllers is described in the companion
  terminology document. SDN controllers have been implemented with
  many varying designs in order to achieve their intended network
  functionality. Hence, the authors have taken the approach of
  considering an SDN controller as a black box, defining the
  methodology in a manner that is agnostic to protocols and network
  services supported by controllers. The intent of this document is to
  provide a standard mechanism to measure the performance of all
  controller implementations.

[ Note that this document is closely related to draft-ietf-bmwg-sdn-controller-benchmark-term.
It might be worth reading them together!]


The file can be obtained via
https://datatracker.ietf.org/doc/draft-ietf-bmwg-sdn-controller-benchmark-meth/

IESG discussion can be tracked via
https://datatracker.ietf.org/doc/draft-ietf-bmwg-sdn-controller-benchmark-meth/ballot/

No IPR declarations have been submitted directly on this I-D.

2018-01-19
07 Cindy Morgan IESG state changed to In Last Call from Last Call Requested
2018-01-19
07 Warren Kumari Last call was requested
2018-01-19
07 Warren Kumari Ballot approval text was generated
2018-01-19
07 Warren Kumari Ballot writeup was generated
2018-01-19
07 Warren Kumari IESG state changed to Last Call Requested from AD Evaluation
2018-01-19
07 Warren Kumari Last call announcement was changed
2018-01-18
07 Warren Kumari IESG state changed to AD Evaluation from Publication Requested
2018-01-10
07 Al Morton

Document Titles:
Terminology/Methodology for Benchmarking SDN Controller Performance
Filenames:
draft-ietf-bmwg-sdn-controller-benchmark-term-07
draft-ietf-bmwg-sdn-controller-benchmark-meth-07
Intended Status:
Informational


Changes are expected over time. This version is dated 24 February 2012.

(1) What type of RFC is being requested (BCP, Proposed Standard,
Internet Standard, Informational, Experimental, or Historic)?  Why
is this the proper type of RFC?  Is this type of RFC indicated in the
title page header?

Informational, all BMWG RFCs to date are Informational.
The status is correctly indicated on the title pages.

(2) The IESG approval announcement includes a Document Announcement
Write-Up. Please provide such a Document Announcement Write-Up. Recent
examples can be found in the "Action" announcements for approved
documents. The approval announcement contains the following sections:

Technical Summary

  These two memos specify the terminology, benchmark definitions
  and methods for characterizing key performance aspects of
  Software Defined Network (SDN) Controllers. Considering the
  number of benchmarking and performance comparison studies that
  have been published prior to standards development, this is an
  important area for new specifications and tools to implement them.
  These memos focus on the ability of SDN controllers to learn
  topology, communicate with switches, and instantiate paths using
  both reactive and proactive techniques (including controller clusters).
  Further, requirements for test traffic and network emulation
  capabilities of the test devices are specified. The memos approach
  this problem in a generic way (Openflow specific procedures are
  included in an Appendix) for broader applicability and longevity.

Working Group Summary

  Consensus for these drafts required several WGLC which prompted
  careful review and further comments. However, the process to achieve
  consensus was smooth, and at no time was there sustained controversy.

Document Quality

  There are existing implementations of the methods described here,
  both full and partial, and many are available as Open Source tools.
  Tools are tied to specific versions of Northbound and Southbound
  protocols, but the memo avoids this dependency (and redundancy with
  other industry efforts) by defining benchmarks for generic SDN
  controller functions (as suggested in https://mailarchive.ietf.org/arch/msg/bmwg/Glchqbvg6F7vOUc0ug3ACyztcNg ).
  The document benefits from review in the NFVRG, and from interactions
  with the Open Platform for Network Function Virtualization (OPNFV) "CPerf"
  project team (which includes members from Open Daylight Integration
  test team, ETSI NFV Test and Open Source WG, and other Linux Foundation
  and independent Open Source projects related to SDN performance).

Personnel

  Al Morton is the Document Shepherd
  Warren Kumari is the Responsible Area Director

(3) Briefly describe the review of this document that was performed by
the Document Shepherd.  If this version of the document is not ready
for publication, please explain why the document is being forwarded to
the IESG.

The Doc Shepherd has reviewed many versions of these drafts, and finds
that the current versions are ready for publication (recognizing that
some editorial suggestions are a certainty of further review, and that
the editors are very willing to implement them).

Note:
1. The Shepherd provided editorial suggestions for the authors,
  and they were addressed in 07 versions.
2. In the -term draft, it may be possible to avoid page breaks in the
  Figures and the Table in Sec 3.


(4) Does the document Shepherd have any concerns about the depth or
breadth of the reviews that have been performed? 
No concerns.

(5) Do portions of the document need review from a particular or from
broader perspective, e.g., security, operational complexity, AAA, DNS,
DHCP, XML, or internationalization? If so, describe the review that
took place.
No additional reviews appear to be needed (see below).

(6) Describe any specific concerns or issues that the Document Shepherd
has with this document that the Responsible Area Director and/or the
IESG should be aware of? For example, perhaps he or she is uncomfortable
with certain parts of the document, or has concerns whether there really
is a need for it. In any event, if the WG has discussed those issues and
has indicated that it still wishes to advance the document, detail those
concerns here.
No Specific issues.

(7) Has each author confirmed that any and all appropriate IPR
disclosures required for full conformance with the provisions of BCP 78
and BCP 79 have already been filed. If not, explain why.

All five authors listed on the drafts have confirmed that they are
unaware of any IPR related to these drafts.

(8) Has an IPR disclosure been filed that references this document?
If so, summarize any WG discussion and conclusion regarding the IPR
disclosures.

There are currently no IPR disclosures for either draft
officially submitted to the IETF.

(9) How solid is the WG consensus behind this document? Does it
represent the strong concurrence of a few individuals, with others
being silent, or does the WG as a whole understand and agree with it? 

Over time, most of the working group has participated in discussion
and review of these drafts, so I think it is fair to say that the
majority of the WG understands and agrees with the content.

(10) Has anyone threatened an appeal or otherwise indicated extreme
discontent? If so, please summarise the areas of conflict in separate
email messages to the Responsible Area Director. (It should be in a
separate email because this questionnaire is publicly available.)

No appeals threatened.

(11) Identify any ID nits the Document Shepherd has found in this
document. (See http://www.ietf.org/tools/idnits/ and the Internet-Drafts
Checklist). Boilerplate checks are not enough; this check needs to be
thorough.

The nits check is free of warnings and errors.

(12) Describe how the document meets any required formal review
criteria, such as the MIB Doctor, media type, and URI type reviews.

NA

(13) Have all references within this document been identified as
either normative or informative?

Yes.

(14) Are there normative references to documents that are not ready for
advancement or are otherwise in an unclear state? If such normative
references exist, what is the plan for their completion?

No

(15) Are there downward normative references (see RFC 3967)?
If so, list these downward references to support the Area Director in the
Last Call procedure.

No

(16) Will publication of this document change the status of any
existing RFCs? Are those RFCs listed on the title page header, listed
in the abstract, and discussed in the introduction? If the RFCs are not
listed in the Abstract and Introduction, explain why, and point to the
part of the document where the relationship of this document to the
other RFCs is discussed. If this information is not in the document,
explain why the WG considers it unnecessary.

No

(17) Describe the Document Shepherd's review of the IANA considerations
section, especially with regard to its consistency with the body of the
document. Confirm that all protocol extensions that the document makes
are associated with the appropriate reservations in IANA registries.
Confirm that any referenced IANA registries have been clearly
identified. Confirm that newly created IANA registries include a
detailed specification of the initial contents for the registry, that
allocations procedures for future registrations are defined, and a
reasonable name for the new registry has been suggested (see RFC 5226).

There are no requests of IANA, as indicated.

(18) List any new IANA registries that require Expert Review for future
allocations. Provide any public guidance that the IESG would find
useful in selecting the IANA Experts for these new registries.

NA

(19) Describe reviews and automated checks performed by the Document
Shepherd to validate sections of the document written in a formal
language, such as XML code, BNF rules, MIB definitions, etc.

NA
2018-01-10
07 Al Morton Responsible AD changed to Warren Kumari
2018-01-10
07 Al Morton IETF WG state changed to Submitted to IESG for Publication from WG Consensus: Waiting for Write-Up
2018-01-10
07 Al Morton IESG state changed to Publication Requested
2018-01-10
07 Al Morton IESG process started in state Publication Requested
2018-01-10
07 Al Morton Changed document writeup
2018-01-09
07 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-07.txt
2018-01-09
07 (System) New version approved
2018-01-09
07 (System) Request for posting confirmation emailed to previous authors: Vishwas Manral, Sarah Banks, Bhuvaneswaran Vengainathan, Anton Basil, Mark Tassinari
2018-01-09
07 Bhuvaneswaran Vengainathan Uploaded new revision
2018-01-07
06 Al Morton The Write-up is complete, Waiting on Author replies to IPR Disclosure Question.
2018-01-07
06 Al Morton IETF WG state changed to WG Consensus: Waiting for Write-Up from In WG Last Call
2018-01-07
06 Al Morton Changed document writeup
2017-11-16
06 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-06.txt
2017-11-16
06 (System) New version approved
2017-11-16
06 (System) Request for posting confirmation emailed to previous authors: Vishwas Manral, Sarah Banks, Bhuvaneswaran Vengainathan, Anton Basil, Mark Tassinari
2017-11-16
06 Bhuvaneswaran Vengainathan Uploaded new revision
2017-10-30
05 Al Morton Ends on Nov 16, 2017
2017-10-30
05 Al Morton Tag Revised I-D Needed - Issue raised by WGLC cleared.
2017-10-30
05 Al Morton IETF WG state changed to In WG Last Call from WG Document
2017-10-30
05 Al Morton Added to session: IETF-100: bmwg  Thu-1550
2017-10-02
05 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-05.txt
2017-10-02
05 (System) New version approved
2017-10-02
05 (System) Request for posting confirmation emailed to previous authors: Vishwas Manral, Sarah Banks, Bhuvaneswaran Vengainathan, Anton Basil, Mark Tassinari
2017-10-02
05 Bhuvaneswaran Vengainathan Uploaded new revision
2017-07-16
04 Al Morton Added to session: IETF-99: bmwg  Mon-0930
2017-06-29
04 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-04.txt
2017-06-29
04 (System) New version approved
2017-06-28
04 (System) Request for posting confirmation emailed to previous authors: Vishwas Manral, Sarah Banks, Bhuvaneswaran Vengainathan, Anton Basil, Mark Tassinari
2017-06-28
04 Bhuvaneswaran Vengainathan Uploaded new revision
2017-03-11
03 Al Morton Added to session: IETF-98: bmwg  Thu-0900
2017-01-07
03 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-03.txt
2017-01-07
03 (System) New version approved
2017-01-07
03 (System) Request for posting confirmation emailed to previous authors: "Bhuvaneswaran Vengainathan", "Mark Tassinari", "Anton Basil", "Vishwas Manral", "Sarah Banks"
2017-01-07
03 Bhuvaneswaran Vengainathan Uploaded new revision
2016-12-01
02 Al Morton Notification list changed to "Al Morton" <acmorton@att.com>
2016-12-01
02 Al Morton Document shepherd changed to Al Morton
2016-12-01
02 Al Morton First WGLC closed on Nov 15, 2016
2016-12-01
02 Al Morton Tag Revised I-D Needed - Issue raised by WGLC set.
2016-12-01
02 Al Morton IETF WG state changed to WG Document from In WG Last Call
2016-11-12
02 Al Morton IETF WG state changed to In WG Last Call from WG Document
2016-11-09
02 Al Morton Added to session: IETF-97: bmwg  Tue-0930
2016-07-08
02 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-02.txt
2016-04-04
01 Al Morton Added to session: IETF-95: bmwg  Thu-1000
2016-03-21
01 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-01.txt
2015-11-04
00 Al Morton Intended Status changed to Informational from None
2015-11-04
00 Al Morton This document now replaces draft-bhuvan-bmwg-sdn-controller-benchmark-meth instead of None
2015-10-18
00 Bhuvaneswaran Vengainathan New version available: draft-ietf-bmwg-sdn-controller-benchmark-meth-00.txt