
Benchmarking Methodology for Network Security Device Performance
draft-ietf-bmwg-ngfw-performance-15

Document history

Date Rev. By Action
2024-01-26
15 Gunter Van de Velde Request closed, assignment withdrawn: Carlos Martínez Last Call OPSDIR review
2024-01-26
15 Gunter Van de Velde Closed request for Last Call review by OPSDIR with state 'Overtaken by Events': Cleaning up stale OPSDIR queue
2023-03-09
15 (System) RFC Editor state changed to AUTH48-DONE from AUTH48
2023-02-28
15 (System) RFC Editor state changed to AUTH48
2023-02-07
15 (System) RFC Editor state changed to RFC-EDITOR from EDIT
2022-11-17
15 (System) IANA Action state changed to No IANA Actions from In Progress
2022-11-15
15 (System) IANA Action state changed to In Progress
2022-11-14
15 (System) RFC Editor state changed to EDIT
2022-11-14
15 (System) IESG state changed to RFC Ed Queue from Approved-announcement sent
2022-11-14
15 (System) Announcement was received by RFC Editor
2022-11-14
15 (System) Removed all action holders (IESG state changed)
2022-11-14
15 Cindy Morgan IESG state changed to Approved-announcement sent from IESG Evaluation::AD Followup
2022-11-14
15 Cindy Morgan IESG has approved the document
2022-11-14
15 Cindy Morgan Closed "Approve" ballot
2022-11-14
15 Cindy Morgan Ballot approval text was generated
2022-10-27
15 Murray Kucherawy
[Ballot comment]
Thanks for handling my DISCUSS about all the normative language used here.

Nits not yet mentioned by others:

Section 4.2:

* "... users SHOULD configure their device ..." -- s/device/devices/ (unless all users share one device)

Section 6.3:

* "The value SHOULD be expressed in millisecond." -- s/millisecond/milliseconds/
2022-10-27
15 Murray Kucherawy [Ballot Position Update] Position for Murray Kucherawy has been changed to No Objection from Discuss
2022-10-22
15 Roman Danyliw [Ballot comment]
Thanks for addressing my DISCUSS and COMMENT feedback.
2022-10-22
15 Roman Danyliw [Ballot Position Update] Position for Roman Danyliw has been changed to No Objection from Discuss
2022-10-22
15 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-15.txt
2022-10-22
15 (System) New version approved
2022-10-22
15 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2022-10-22
15 Balamuhunthan Balarajah Uploaded new revision
2022-10-13
14 Roman Danyliw
[Ballot discuss]
(Updated Ballot)

-- [per -13] Recognizing that NGFW, NGIPS and UTM are not precise product categories, offerings in this space commonly rely on statistical models or AI techniques (e.g., machine learning) to improve detection rates and reduce false positives to realize the capabilities in Table 1 and 2.  If even possible, how should these settings be tuned?  How should the training period be handled when describing the steps of the test regime (e.g., in Section 4.3.4? Section 7.2.4?)

[per -14] Thanks for explaining in your email response that the training phase would not be included in the threat emulation.  Since the goal of this document is to specify reproducible testing, the primary text I was looking for was an acknowledgment that the detection performance of some systems may be affected by learning from prior traffic.  Any state kept by such systems must be reset between testing runs.
2022-10-13
14 Roman Danyliw
[Ballot comment]
(Updated Ballot)

Thanks for the changes made in -13.

** [per -13] Section 3. Per “This document focuses on advanced, …”, what makes a testing method “advanced”?

** [per -13] Section 4.2.  Should the following additional features be noted as a feature of NGFWs and NGIPS (Table 2 and 3 in -14)?

-- geolocation or network topology-based classification/filtering (since there is normative text “Geographical location filtering SHOULD be configured.”)

** [per -13/14] Table 2.  Is there a reason Anti-Evasion (listed in Table 3 for NGIPS) is not mentioned here (for NGFW)?

** [per -13] Section 4.2.  Per “Logging SHOULD be enabled.”  How does this “SHOULD” align with “logging and reporting” being a RECOMMENDED in Table 1 and 2? 

[per -14]  Thanks for the edits here.  I think a regression was introduced: Table 3 (NGIPS) used to have “Logging and Reporting”, just like Table 2 in -12.
2022-10-13
14 Roman Danyliw Ballot comment and discuss text updated for Roman Danyliw
2022-09-12
14 Éric Vyncke
[Ballot comment]
Thank you for the work put into this document and for addressing my previous DISCUSS and my previous COMMENT. They are kept below only for archiving purposes.

Thanks to Toerless for his deep and detailed IoT directorate review, I have seen as well that the authors are engaged in email discussions on this review:
https://datatracker.ietf.org/doc/review-ietf-bmwg-ngfw-performance-13-iotdir-telechat-eckert-2022-01-30/

Special thanks to Al Morton for the shepherd's write-up including the section about the WG consensus.

I hope that this helps to improve the document,

Regards,

-éric


# previous DISCUSS for archiving

As noted in https://www.ietf.org/blog/handling-iesg-ballot-positions/, a DISCUSS ballot is a request to have a discussion on the following topics

The document obsoletes RFC 3511, but it does not include any performance testing of IP fragmentation (which RFC 3511 did), which is AFAIK still a performance/evasion problem. What was the reason for this lack of IP fragmentation support ? At the bare minimum, there should be some text explaining why IP fragmentation can be ignored.

# previous COMMENT for archiving

One generic comment about the lack of testing with IPv6 extension headers as they usually reduce the performance (even for NGFW/NGIPS). There should be some words about this lack of testing.

## Section 4.1

Please always use "ARP/ND" rather than "ARP".

## Section 4.2

Any reason why "SSL" is used rather than "TLS" ?

Suggest to replace "IP subnet" by "IP prefix".

## Section 4.3.1.2 (and other sections)

"non routable Private IPv4 address ranges" unsure what it is ? RFC 1918 addresses are routable albeit private, or is it about link-local IPv4 address ? 169.254.0.0/16 or 198.18.0.0/15 ?

## Section 4.3.1.3

Suggest to add a date information (e.g., 2022) in the sentence "The above ciphers and keys were those commonly used enterprise grade encryption cipher suites for TLS 1.2".

In "[RFC8446] defines the following cipher suites for use with TLS 1.3." is this about a SHOULD or a MUST ?

## Section 6.1

In "Results SHOULD resemble a pyramid in how it is reported" I have no clue how a report could resemble a pyramid. Explanations/descriptions are welcome in the text.

## Section 7.8.4 (and other sections)

In "This test procedure MAY be repeated multiple times with different IP types (IPv4 only, IPv6 only and IPv4 and IPv6 mixed traffic distribution)" should it be a "SHOULD" rather than a "MAY" ?
2022-09-12
14 Éric Vyncke [Ballot Position Update] Position for Éric Vyncke has been changed to Yes from Discuss
2022-09-12
14 Lars Eggert [Ballot Position Update] Position for Lars Eggert has been changed to No Objection from Discuss
2022-09-11
14 (System) Changed action holders to Warren Kumari (IESG state changed)
2022-09-11
14 (System) Sub state has been changed to AD Followup from Revised ID Needed
2022-09-11
14 (System) IANA Review state changed to Version Changed - Review Needed from IANA OK - No Actions Needed
2022-09-11
14 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-14.txt
2022-09-11
14 (System) New version approved
2022-09-11
14 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2022-09-11
14 Balamuhunthan Balarajah Uploaded new revision
2022-07-17
13 Al Morton Added to session: IETF-114: bmwg  Tue-1500
2022-05-24
13 Tommy Pauly Request for Telechat review by TSVART Completed: Almost Ready. Reviewer: Tommy Pauly. Sent review to list.
2022-05-13
13 Magnus Westerlund Request for Telechat review by TSVART is assigned to Tommy Pauly
2022-05-13
13 Magnus Westerlund Request for Telechat review by TSVART is assigned to Tommy Pauly
2022-05-09
13 Al Morton Requested Telechat review by TSVART
2022-03-19
13 Benjamin Kaduk
[Ballot comment]
[Updated to remove my Discuss point, as my colleagues have convinced me
that my concern was not reasonable]

I support Roman's Discuss (which you have already begun to resolve, thank
you).

Perhaps it is time to retire the term "SSL" in favor of the current
protocol name, "TLS".

Section 4.1

  In some deployment scenarios, the network security devices (Device
  Under Test/System Under Test) are connected to routers and switches,
  which will reduce the number of entries in MAC or ARP tables of the
  Device Under Test/System Under Test (DUT/SUT).  If MAC or ARP tables
  have many entries, this may impact the actual DUT/SUT performance due
  to MAC and ARP/ND (Neighbor Discovery) table lookup processes.  This

I understand the motivation for benchmarking the maximum performance from
the device under controlled circumstances, but it also seems that if a
device really will exhibit degraded performance due to the number of
entries in its MAC/ARP table, that would be useful information to have.
Perhaps a remark about how future work could include repeating
benchmarking results with different numbers of other devices on the local
network segment is in order.

Section 4.2

  Table 1 and Table 2 below describe the RECOMMENDED and OPTIONAL sets
  of network security feature list for NGFW and NGIPS respectively.

I agree with the IoTdir reviewer that Certificate Validation should surely
be a recommended feature for NGFWs.  But see also the DISCUSS point.

    | SSL Inspection | DUT/SUT intercepts and decrypts inbound HTTPS  |
    |                | traffic between servers and clients.  Once the |
    |                | content inspection has been completed, DUT/SUT |
    |                | encrypts the HTTPS traffic with ciphers and    |
    |                | keys used by the clients and servers.          |

This description could stand to be more clear, especially in light of the
fundamental differences between TLS 1.2 and TLS 1.3.
First, the description starts off with "intercepts and decrypts" and then
goes on to say that once inspection is over, the DUT/SUT "encrypts the
HTTPS traffic".  Does this mean that the DUT/SUT specifically needs to
re-encrypt after decrypting, or is it permissible to retain the original
ciphertext and just relay that ciphertext onward?
Second, in TLS 1.3, it is by construction impossible for a single set of
traffic encryption keys to be shared by all three of client, server, and
DUT/SUT -- RSA key transport is forbidden and ephemeral key exchange is
required.  In order to perform content inspection, such a middlebox needs
to be able to impersonate the server to the client (i.e., holding a
certificate and private key that is trusted by the client and represents
the identity of the real server, which is expected to require specific
configuration on the client to enable) and complete separate TLS
connections to client and to server.  In this scenario the middlebox must
remain as a "machine in the middle" for the duration of the entire
connection and decrypt/reencrypt all content using the different keys for
the client/middlebox and middlebox/server connections.

  *  Geographical location filtering, and Application Identification
      and Control SHOULD be configured to trigger based on a site or
      application from the defined traffic mix.

Do we have a sense for how sensitive the performance results are going to
be with respect to the proportion of traffic that triggers these classes
of filtering/control?  Would it be appropriate to require that this
breakdown be included in the report?

Section 4.3.1.3

  validation.  Depending on test scenarios and selected HTTP version,
  HTTP header compression MAY be set to enable or disable.  This

I didn't think it was possible to fully disable header compression for
HTTP/2 and HTTP/3 (just to set the dynamic table size to zero).

  [RFC8446] defines the following cipher suites for use with TLS 1.3.
  [...]

TLS_AES_128_CCM_8_SHA256 is marked as Recommended=N in the registry; I
think we should indicate that there is little need to benchmark it except
for those special circumstances where the cipher is appropriate.
Even TLS_AES_128_CCM_SHA256 (with full-length authentication tag) is
mostly only going to be used in IoT environments and is likely not needed
for a target of "enterprise grade encryption cipher suites".

Section 4.3.2.3

  server, TLS 1.2 or higher MUST be used with a maximum record size of
  16 KByte and MUST NOT use ticket resumption or session ID reuse.  The

Why is TLS resumption prohibited?
(As a technical matter, TLS 1.3 resumption uses a different mechanism than
the two TLS 1.2 resumption mechanisms, and it may be prudent to
specifically note whether TLS 1.3 resumption is also forbidden.)

  server SHALL serve a certificate to the client.  The HTTPS server
  MUST check host SNI information with the FQDN if SNI is in use.

What does "check host SNI information with the FQDN" mean?  Where is the
FQDN in question obtained from?  (In §4.3.3.1 we say that the proposed
(SNI) FQDN is compared to "the domain embedded in the certificate".  Note
that, of course, the certificate can contain more than one domain name,
e.g., via the now-quite-common use of subjectAltName.)
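On the "check host SNI information with the FQDN" point: one plausible reading is matching the SNI value against the certificate's subjectAltName entries. A hypothetical matcher (the function name and semantics are illustrative, not from the draft; real implementations follow RFC 6125):

```python
def hostname_matches(sni, san_entries):
    """Return True if the SNI hostname matches any subjectAltName entry.

    Supports a single leftmost wildcard label ("*.example.com"),
    which matches exactly one label, per common RFC 6125 practice.
    """
    sni = sni.lower().rstrip(".")
    for entry in san_entries:
        entry = entry.lower().rstrip(".")
        if entry == sni:
            return True
        if entry.startswith("*."):
            parts = sni.split(".", 1)
            # The wildcard covers one label: "*.example.com" matches
            # "www.example.com" but not "a.b.example.com".
            if len(parts) == 2 and "." + parts[1] == entry[1:]:
                return True
    return False

assert hostname_matches("www.example.com", ["*.example.com"])
assert not hostname_matches("a.b.example.com", ["*.example.com"])
assert hostname_matches("example.com", ["example.com", "www.example.com"])
```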

Section 6.1

      e.  Key test parameters

          *  Used cipher suites and keys

Do we really need to report the specific *keys* used (as opposed to
cryptographic parameters of the TLS connection like the group used for key
exchange, algorithm and key size of the server certificate, etc.)?

          *  Percentage of encrypted traffic and used cipher suites and
              keys (The RECOMMENDED ciphers and keys are defined in
              Section 4.3.1.3)

For what it's worth, trends in generic web traffic are rapidly converging
towards near-universal HTTPS usage.  I am not really sure that measuring
unencrypted traffic is going to be very interesting to many users (though
I concede that some will still be using it and find the corresponding
benchmarking results useful).

Section 7.3.3.2

  RECOMMENDED HTTP response object size: 1, 16, 64, 256 KByte, and
  mixed objects defined in Table 4.

With the explosion of video use on the modern Web, it might be worth
revisiting these recommended object sizes.  Is there likely to be value in
having very large objects for any of the tests?

Section 7.6.1, 7.7.1

  Test iterations MUST include common cipher suites and key strengths
  as well as forward looking stronger keys.  Specific test iterations

How/where would an implementor obtain more guidance on "common cipher suites
and key strengths" and "forward looking stronger keys"?  (With
understanding that this guidance will change over time and cannot be
permanently enshrined in this RFC-to-be.)

(Why does Section 7.8.1 not have similar language?)

Section 9

Hmm, I thought we typically had some language about how if the
benchmarking techniques specified in this document were used outside a
laboratory isolated test environment, security and other risks could arise
(e.g., due to DoS of nearby nodes/services).

Appendix A

I agree with Roman that the text around "CVEs" is imprecise and should be
talking about exploits that are identified by CVEs.
2022-03-19
13 Benjamin Kaduk [Ballot Position Update] Position for Benjamin Kaduk has been changed to No Objection from Discuss
2022-02-08
13 Éric Vyncke Request closed, assignment withdrawn: David Lamparter Telechat INTDIR review
2022-02-08
13 Éric Vyncke
Closed request for Telechat review by INTDIR with state 'Withdrawn': Telechat deadline has passed... The document has been approved by the IESG. Please next time, be explicit and refuse to review the document. Thank you. -éric
2022-02-03
13 (System) Changed action holders to Warren Kumari, Carsten Rossenhoevel, Balamuhunthan Balarajah, Brian Monkman (IESG state changed)
2022-02-03
13 Cindy Morgan IESG state changed to IESG Evaluation::Revised I-D Needed from IESG Evaluation
2022-02-03
13 Benjamin Kaduk
[Ballot discuss]
This is probably a minor point, but I'm putting it in the Discuss section
because one of the possible answers would be very problematic and I can't
rule that scenario out with just the information at hand.  As such (and
per
https://www.ietf.org/about/groups/iesg/statements/handling-ballot-positions/)
a response is greatly appreciated, to help clarify the intended meaning
and thus what (if any) changes to the document should be made in response.

In Section 4.2 we have a couple tables listing RECOMMENDED and OPTIONAL
features for different types of DUT/SUT.  Do these recommendations relate
to what features should be tested, what features should be enabled for use
in normal operation, what features should be implemented in devices, or
something else?  The latter options seem a bit far afield from the stated
scope of this document, and the particular recommendations listed probably
do not have IETF Consensus as to general applicability (most notably for
SSL Inspection).
2022-02-03
13 Benjamin Kaduk
[Ballot comment]
I support Roman's Discuss (which you have already begun to resolve, thank
you).

Perhaps it is time to retire the term "SSL" in favor of the current
protocol name, "TLS".

Section 4.1

  In some deployment scenarios, the network security devices (Device
  Under Test/System Under Test) are connected to routers and switches,
  which will reduce the number of entries in MAC or ARP tables of the
  Device Under Test/System Under Test (DUT/SUT).  If MAC or ARP tables
  have many entries, this may impact the actual DUT/SUT performance due
  to MAC and ARP/ND (Neighbor Discovery) table lookup processes.  This

I understand the motivation for benchmarking the maximum performance from
the device under controlled circumstances, but it also seems that if a
device really will exhibit degraded performance due to the number of
entries in its MAC/ARP table, that would be useful information to have.
Perhaps a remark about how future work could include repeating
benchmarking results with different numbers of other devices on the local
network segment is in order.

Section 4.2

  Table 1 and Table 2 below describe the RECOMMENDED and OPTIONAL sets
  of network security feature list for NGFW and NGIPS respectively.

I agree with the IoTdir reviewer that Certificate Validation should surely
be a recommended feature for NGFWs.  But see also the DISCUSS point.

    | SSL Inspection | DUT/SUT intercepts and decrypts inbound HTTPS  |
    |                | traffic between servers and clients.  Once the |
    |                | content inspection has been completed, DUT/SUT |
    |                | encrypts the HTTPS traffic with ciphers and    |
    |                | keys used by the clients and servers.          |

This description could stand to be more clear, especially in light of the
fundamental differences between TLS 1.2 and TLS 1.3.
First, the description starts off with "intercepts and decrypts" and then
goes on to say that once inspection is over, the DUT/SUT "encrypts the
HTTPS traffic".  Does this mean that the DUT/SUT specifically needs to
re-encrypt after decrypting, or is it permissible to retain the original
ciphertext and just relay that ciphertext onward?
Second, in TLS 1.3, it is by construction impossible for a single set of
traffic encryption keys to be shared by all three of client, server, and
DUT/SUT -- RSA key transport is forbidden and ephemeral key exchange is
required.  In order to perform content inspection, such a middlebox needs
to be able to impersonate the server to the client (i.e., holding a
certificate and private key that is trusted by the client and represents
the identity of the real server, which is expected to require specific
configuration on the client to enable) and complete separate TLS
connections to client and to server.  In this scenario the middlebox must
remain as a "machine in the middle" for the duration of the entire
connection and decrypt/reencrypt all content using the different keys for
the client/middlebox and middlebox/server connections.

  *  Geographical location filtering, and Application Identification
      and Control SHOULD be configured to trigger based on a site or
      application from the defined traffic mix.

Do we have a sense for how sensitive the performance results are going to
be with respect to the proportion of traffic that triggers these classes
of filtering/control?  Would it be appropriate to require that this
breakdown be included in the report?

Section 4.3.1.3

  validation.  Depending on test scenarios and selected HTTP version,
  HTTP header compression MAY be set to enable or disable.  This

I didn't think it was possible to fully disable header compression for
HTTP/2 and HTTP/3 (just to set the dynamic table size to zero).

  [RFC8446] defines the following cipher suites for use with TLS 1.3.
  [...]

TLS_AES_128_CCM_8_SHA256 is marked as Recommended=N in the registry; I
think we should indicate that there is little need to benchmark it except
for those special circumstances where the cipher is appropriate.
Even TLS_AES_128_CCM_SHA256 (with full-length authentication tag) is
mostly only going to be used in IoT environments and is likely not needed
for a target of "enterprise grade encryption cipher suites".

Section 4.3.2.3

  server, TLS 1.2 or higher MUST be used with a maximum record size of
  16 KByte and MUST NOT use ticket resumption or session ID reuse.  The

Why is TLS resumption prohibited?
(As a technical matter, TLS 1.3 resumption uses a different mechanism than
the two TLS 1.2 resumption mechanisms, and it may be prudent to
specifically note whether TLS 1.3 resumption is also forbidden.)

  server SHALL serve a certificate to the client.  The HTTPS server
  MUST check host SNI information with the FQDN if SNI is in use.

What does "check host SNI information with the FQDN" mean?  Where is the
FQDN in question obtained from?  (In §4.3.3.1 we say that the proposed
(SNI) FQDN is compared to "the domain embedded in the certificate".  Note
that, of course, the certificate can contain more than one domain name,
e.g., via the now-quite-common use of subjectAltName.)

Section 6.1

      e.  Key test parameters

          *  Used cipher suites and keys

Do we really need to report the specific *keys* used (as opposed to
cryptographic parameters of the TLS connection like the group used for key
exchange, algorithm and key size of the server certificate, etc.)?

          *  Percentage of encrypted traffic and used cipher suites and
              keys (The RECOMMENDED ciphers and keys are defined in
              Section 4.3.1.3)

For what it's worth, trends in generic web traffic are rapidly converging
towards near-universal HTTPS usage.  I am not really sure that measuring
unencrypted traffic is going to be very interesting to many users (though
I concede that some will still be using it and find the corresponding
benchmarking results useful).

Section 7.3.3.2

  RECOMMENDED HTTP response object size: 1, 16, 64, 256 KByte, and
  mixed objects defined in Table 4.

With the explosion of video use on the modern Web, it might be worth
revisiting these recommended object sizes.  Is there likely to be value in
having very large objects for any of the tests?

Section 7.6.1, 7.7.1

  Test iterations MUST include common cipher suites and key strengths
  as well as forward looking stronger keys.  Specific test iterations

How/where would an implementor obtain more guidance on "common cipher suites
and key strengths" and "forward looking stronger keys"?  (With
understanding that this guidance will change over time and cannot be
permanently enshrined in this RFC-to-be.)

(Why does Section 7.8.1 not have similar language?)

Section 9

Hmm, I thought we typically had some language about how if the
benchmarking techniques specified in this document were used outside a
laboratory isolated test environment, security and other risks could arise
(e.g., due to DoS of nearby nodes/services).

Appendix A

I agree with Roman that the text around "CVEs" is imprecise and should be
talking about exploits that are identified by CVEs.
2022-02-03
13 Benjamin Kaduk [Ballot Position Update] New position, Discuss, has been recorded for Benjamin Kaduk
2022-02-03
13 Lars Eggert
[Ballot discuss]
This document needs TSV and ART people to help with straightening out a lot of
issues related to TCP, TLS, and H1/2/3. Large parts of the document don't
correctly reflect the complex realities of what "HTTP" is these days (i.e.,
that we have H1 and H2 over either TCP or TLS, and H3 over only QUIC.) The
document is also giving unnecessarily detailed behavioral descriptions of TCP
and its parameters, while at the same time not being detailed enough about TLS,
H2 and esp. QUIC/H3. It feels like this started out as an H1/TCP document that
was then incompletely extended to H2/H3.

Section 4.3.1.1. , paragraph 2, discuss:
>    The TCP stack SHOULD use a congestion control algorithm at client and
>    server endpoints.  The IPv4 and IPv6 Maximum Segment Size (MSS)
>    SHOULD be set to 1460 bytes and 1440 bytes respectively and a TX and
>    RX initial receive windows of 64 KByte.  Client initial congestion
>    window SHOULD NOT exceed 10 times the MSS.  Delayed ACKs are
>    permitted and the maximum client delayed ACK SHOULD NOT exceed 10
>    times the MSS before a forced ACK.  Up to three retries SHOULD be
>    allowed before a timeout event is declared.  All traffic MUST set the
>    TCP PSH flag to high.  The source port range SHOULD be in the range
>    of 1024 - 65535.  Internal timeout SHOULD be dynamically scalable per
>    RFC 793.  The client SHOULD initiate and close TCP connections.  The
>    TCP connection MUST be initiated via a TCP three-way handshake (SYN,
>    SYN/ACK, ACK), and it MUST be closed via either a TCP three-way close
>    (FIN, FIN/ACK, ACK), or a TCP four-way close (FIN, ACK, FIN, ACK).

There are a lot of requirements in here that are either no-ops ("SHOULD use a
congestion control algorithm"), nonsensical ("maximum client delayed ACK SHOULD
NOT exceed 10 times the MSS") or under the sole control of the stack. This
needs to be reviewed and corrected by someone who understands TCP.
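Two of the quoted numbers do have a simple derivation, for what it's worth: the MSS values follow from a 1500-byte Ethernet MTU minus the fixed IP and TCP headers, and the initial-window cap lines up with RFC 6928's IW10. Illustrative arithmetic (editorial sketch, not draft text):

```python
ETHERNET_MTU = 1500
TCP_HEADER = 20                 # base TCP header, no options

def mss(mtu, ip_version):
    """MSS = MTU - IP header - TCP header (fixed header sizes only)."""
    ip_header = 20 if ip_version == 4 else 40   # IPv6 fixed header: 40 bytes
    return mtu - ip_header - TCP_HEADER

assert mss(ETHERNET_MTU, 4) == 1460   # the draft's IPv4 value
assert mss(ETHERNET_MTU, 6) == 1440   # the draft's IPv6 value

# "Client initial congestion window SHOULD NOT exceed 10 times the MSS"
# matches RFC 6928 (IW10) for IPv4:
assert 10 * mss(ETHERNET_MTU, 4) == 14600
```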
2022-02-03
13 Lars Eggert
[Ballot comment]
Section 1. , paragraph 2, comment:
>    18 years have passed since IETF recommended test methodology and
>    terminology for firewalls initially ([RFC3511]).  The requirements
>    for network security element performance and effectiveness have
>    increased tremendously since then.  In the eighteen years since

These sentences don't age well - rephrase without talking about particular
years?

Section 4.3.2.3. , paragraph 2, comment:
>    The server pool for HTTP SHOULD listen on TCP port 80 and emulate the
>    same HTTP version (HTTP 1.1 or HTTP/2 or HTTP/3) and settings chosen
>    by the client (emulated web browser).  The Server MUST advertise

An H3 server will not listen on TCP port 80. In general, the document needs to
be checked for the implicit assumption that HTTP uses TCP; there is text
throughout that is nonsensical for H3 (like this example).

Section 6.3. , paragraph 6, comment:
>      The average number of successfully established TCP connections per
>      second between hosts across the DUT/SUT, or between hosts and the
>      DUT/SUT.  The TCP connection MUST be initiated via a TCP three-way
>      handshake (SYN, SYN/ACK, ACK).  Then the TCP session data is sent.
>      The TCP session MUST be closed via either a TCP three-way close
>      (FIN, FIN/ACK, ACK), or a TCP four-way close (FIN, ACK, FIN, ACK),
>      and MUST NOT by RST.

This prohibits TCP fast open, why? Also, wouldn't it be enough to say that the
connection needs to not abnormally reset, rather than describing the TCP packet
sequences that are acceptable? Given that those are not the only possible
sequences, c.f., loss and reordering.
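The graceful-close-versus-RST distinction at issue here is easy to observe with an ordinary socket pair: closing normally emits a FIN exchange, and the peer's recv() then returns end-of-stream rather than an error. A minimal localhost sketch (editorial illustration; not part of the draft's procedure):

```python
import socket
import threading

received = []

def serve_once(listener):
    conn, _ = listener.accept()
    with conn:
        received.append(conn.recv(1024))
    # Closing here sends FIN, so the client sees a clean end-of-stream.

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=serve_once, args=(listener,))
t.start()

client = socket.socket()
client.connect(listener.getsockname())     # SYN, SYN/ACK, ACK
client.sendall(b"GET / HTTP/1.1\r\n\r\n")
client.shutdown(socket.SHUT_WR)            # client-side FIN
assert client.recv(1024) == b""            # FIN received, not an RST
client.close()
t.join()
listener.close()

assert received == [b"GET / HTTP/1.1\r\n\r\n"]
```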

Section 6.3. , paragraph 6, comment:
>      The average number of successfully completed transactions per
>      second.  For a particular transaction to be considered successful,
>      all data MUST have been transferred in its entirety.  In case of
>      HTTP(S) transactions, it MUST have a valid status code (200 OK),
>      and the appropriate FIN, FIN/ACK sequence MUST have been
>      completed.

H3 doesn't do FIN/ACK, etc. See above.

Section 7.1.3.4. , paragraph 4, comment:
>    a.  Number of failed application transactions (receiving any HTTP
>        response code other than 200 OK) MUST be less than 0.001% (1 out
>        of 100,000 transactions) of total attempted transactions.
>
>    b.  Number of Terminated TCP connections due to unexpected TCP RST
>        sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
>        connections) of total initiated TCP connections.

Why is a 0.001% failure rate deemed acceptable? (Also elsewhere.)
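For scale, the threshold being questioned permits strictly fewer than 1 failure per 100,000 attempts, so in integer terms (illustrative arithmetic only; the function name is an editorial invention):

```python
def max_allowed_failures(total_attempts):
    """Largest failure count satisfying "less than 0.001%" of attempts.

    failures / total < 1 / 100_000  <=>  failures * 100_000 < total,
    so the maximum is floor((total - 1) / 100_000).
    """
    return (total_attempts - 1) // 100_000

assert max_allowed_failures(100_000) == 0     # any failure at all fails the run
assert max_allowed_failures(1_000_000) == 9
assert max_allowed_failures(150_000) == 1
```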

Section 7.2.1. , paragraph 2, comment:
>    Using HTTP traffic, determine the sustainable TCP connection
>    establishment rate supported by the DUT/SUT under different
>    throughput load conditions.

H3 doesn't do TCP.

Section 7.2.3.2. , paragraph 9, comment:
>    The client SHOULD negotiate HTTP and close the connection with FIN
>    immediately after completion of one transaction.  In each test
>    iteration, client MUST send GET request requesting a fixed HTTP
>    response object size.

H3 doesn't do TCP FIN.

Section 7.2.3.3. , paragraph 6, comment:
>    c.  During the sustain phase, traffic SHOULD be forwarded at a
>        constant rate (considered as a constant rate if any deviation of
>        traffic forwarding rate is less than 5%).

What does this mean? How would traffic NOT be forwarded at a constant rate?

Section 7.2.3.3. , paragraph 5, comment:
>    d.  Concurrent TCP connections MUST be constant during steady state
>        and any deviation of concurrent TCP connections SHOULD be less
>        than 10%. This confirms the DUT opens and closes TCP connections
>        at approximately the same rate.

What does it mean for a TCP connection to be constant?

Section 7.4.1. , paragraph 4, comment:
>    Scenario 1: The client MUST negotiate HTTP and close the connection
>    with FIN immediately after completion of a single transaction (GET
>    and RESPONSE).

H3 sessions don't send TCP FINs. (Also elsewhere.)

Section 7.7. , paragraph 1, comment:
> 7.7.  HTTPS Throughput

Is this HTTPS as in H1, H2 or H3? All of the above?

Found terminology that should be reviewed for inclusivity; see
https://www.rfc-editor.org/part2/#inclusive_language for background and more
guidance:

* Term "dummy"; alternatives might be "placeholder", "sample", "stand-in",
  "substitute".

Thanks to Matt Joras for their General Area Review Team (Gen-ART) review
(https://mailarchive.ietf.org/arch/msg/gen-art/NUycZt5uKAZejOvCr6tdi_7SvPA).

-------------------------------------------------------------------------------
All comments below are about very minor potential issues that you may choose to
address in some way - or ignore - as you see fit. Some were flagged by
automated tools (via https://github.com/larseggert/ietf-reviewtool), so there
will likely be some false positives. There is no need to let me know what you
did with these suggestions.

Section 4.1. , paragraph 8, nit:
>  actively inspected by the DUT/SUT. Also "Fail-Open" behavior MUST be disable
>                                    ^^^^
A comma may be missing after the conjunctive/linking adverb "Also".

Section 4.2. , paragraph 9, nit:
> security vendors implement ACL decision making.) The configured ACL MUST NOT
>                                ^^^^^^^^^^^^^^^
The noun "decision-making" (= the process of deciding something) is spelled
with a hyphen.

Section 4.2.1. , paragraph 1, nit:
>  the MSS. Delayed ACKs are permitted and the maximum client delayed ACK SHOUL
>                                    ^^^^
Use a comma before "and" if it connects two independent clauses (unless they
are closely connected and short).

Section 4.3.1.3. , paragraph 3, nit:
>  the MSS. Delayed ACKs are permitted and the maximum server delayed ACK MUST
>                                    ^^^^
Use a comma before "and" if it connects two independent clauses (unless they
are closely connected and short).

Section 4.3.1.3. , paragraph 4, nit:
> IPv6 with a ratio identical to the clients distribution ratio. Note: The IAN
>                                    ^^^^^^^
An apostrophe may be missing.

Section 4.3.3.1. , paragraph 2, nit:
> S throughput performance test with smallest object size. 3. Ensure that any
>                                    ^^^^^^^^
A determiner may be missing.

Section 6.1. , paragraph 19, nit:
> sion with a more specific Kbit/s in parenthesis. * Time to First Byte (TTFB)
>                                  ^^^^^^^^^^^^^^
Did you mean "in parentheses"? "parenthesis" is the singular.

Section 7.5.3. , paragraph 2, nit:
> s and key strengths as well as forward looking stronger keys. Specific test
>                                ^^^^^^^^^^^^^^^
This word is normally spelled with a hyphen.

Section 7.5.4.2. , paragraph 3, nit:
> SHOULD NOT be reported, if the above mentioned KPI (especially inspected thro
>                                ^^^^^^^^^^^^^^^
The adjective "above-mentioned" is spelled with a hyphen.

Section 7.6.1. , paragraph 4, nit:
> s and key strengths as well as forward looking stronger keys. Specific test
>                                ^^^^^^^^^^^^^^^
This word is normally spelled with a hyphen.

Section 7.9.3.4. , paragraph 1, nit:
> * Accuracy of DUT/SUT statistics in term of vulnerabilities reporting A.2. T
>                                  ^^^^^^^^^^
Did you mean the commonly used phrase "in terms of"?

Section 7.9.4. , paragraph 2, nit:
> tected attack traffic MUST be dropped and the session SHOULD be reset A.3.2.
>                                      ^^^^
Use a comma before "and" if it connects two independent clauses (unless they
are closely connected and short).
2022-02-03
13 Lars Eggert [Ballot Position Update] New position, Discuss, has been recorded for Lars Eggert
2022-02-03
13 Zaheduzzaman Sarker
[Ballot comment]
Thanks for the efforts on this specification. I have been part of writing two testcase documents for real-time congestion control algorithms and understand getting things in a reasonable shape is hard.

I have a similar observation to Murray and Éric when it comes to obsoleting the previous specification; hence I support their discusses.

Some more comments/questions below -

  * Section 5 : What is the "packet loss latency" metric? Where is it defined? How do I measure it?

  * A traffic profile, which is a MUST to have, is missing from all the benchmark tests. If this is intentional, then a rationale needs to be added.

  * Section 7.3 and 7.7 : The HTTP throughput will look different not only because of object size but also because of how often the requests are sent. If the requests are sent all at once, the resulting throughput may look like a long file download; if they are sparse, they will look like small downloads on a sparse timeline. Here, it is not clear to me what the intention is. Again, the traffic profile is missing, and I am starting to think that Section 7.1.3.3 might be part of Section 7.1.3.2.

  * Section 7.4 and 7.8 : I have a similar view, per my comment on Section 7.3. It is not clear to me that only object size matters here for latency.
2022-02-03
13 Zaheduzzaman Sarker [Ballot Position Update] New position, No Objection, has been recorded for Zaheduzzaman Sarker
2022-02-03
13 Éric Vyncke
[Ballot discuss]
Thank you for the work put into this document.

Please find below one blocking DISCUSS point (probably easy to address but really important), some non-blocking COMMENT points (but replies would be appreciated even if only for my own education).

Thanks to Toerless for his deep and detailed IoT directorate review, I have seen as well that the authors are engaged in email discussions on this review:
https://datatracker.ietf.org/doc/review-ietf-bmwg-ngfw-performance-13-iotdir-telechat-eckert-2022-01-30/

Special thanks to Al Morton for the shepherd's write-up including the section about the WG consensus.

I hope that this helps to improve the document,

Regards,

-éric


# DISCUSS

As noted in https://www.ietf.org/blog/handling-iesg-ballot-positions/, a DISCUSS ballot is a request to have a discussion on the following topics

The document obsoletes RFC 3511, but it does not include any performance testing of IP fragmentation (which RFC 3511 did), which is AFAIK still a performance/evasion problem. What was the reason for this lack of IP fragmentation support ? At the bare minimum, there should be some text explaining why IP fragmentation can be ignored.
2022-02-03
13 Éric Vyncke
[Ballot comment]
One generic comment about the lack of testing with IPv6 extension headers as they usually reduce the performance (even for NGFW/NGIPS). There should be some words about this lack of testing.

## Section 4.1

Please always use "ARP/ND" rather than "ARP".

## Section 4.2

Any reason why "SSL" is used rather than "TLS" ?

Suggest to replace "IP subnet" by "IP prefix".

## Section 4.3.1.2 (and other sections)

"non routable Private IPv4 address ranges" unsure what it is ? RFC 1918 addresses are routable albeit private, or is it about link-local IPv4 address ? 169.254.0.0/16 or 198.18.0.0/15 ?

## Section 4.3.1.3

Suggest to add a date information (e.g., 2022) in the sentence "The above ciphers and keys were those commonly used enterprise grade encryption cipher suites for TLS 1.2".

In "[RFC8446] defines the following cipher suites for use with TLS 1.3." is this about a SHOULD or a MUST ?

## Section 6.1

In "Results SHOULD resemble a pyramid in how it is reported" I have no clue how a report could resemble a pyramid. Explanations/descriptions are welcome in the text.

## Section 7.8.4 (and other sections)

In "This test procedure MAY be repeated multiple times with different IP types (IPv4 only, IPv6 only and IPv4 and IPv6 mixed traffic distribution)" should it be a "SHOULD" rather than a "MAY" ?
2022-02-03
13 Éric Vyncke [Ballot Position Update] New position, Discuss, has been recorded for Éric Vyncke
2022-02-02
13 Erik Kline
[Ballot comment]
[throughout; comment]

* In all sections describing Configuration Parameters, both Client and
  Server "IP address range" is mentioned in the singular.  I think
  appropriate s/range/ranges/ might make sense.
2022-02-02
13 Erik Kline [Ballot Position Update] New position, No Objection, has been recorded for Erik Kline
2022-02-02
13 Murray Kucherawy
[Ballot discuss]
I may be wandering into unfamiliar territory here, i.e., how benchmarking specs are typically written, but this is sufficiently confusing that I'd like to discuss it.

I note that RFC 3511, which this document obsoletes, didn't cite RFC 2119 (BCP 14) but rather defined those same key words on its own.  Then it used SHOULD rather liberally, in a way that seems kind of peculiar to me (especially compared to the text of Section 6 of RFC 2119).  Do any of them matter to the outcome of the benchmark being constructed or executed?  If so and they would spoil the test, shouldn't they be MUSTs?  If not, why include them?  Or in the alternative, why might I, as someone setting up a test, legitimately do something contrary to the SHOULD in each case (which "SHOULD" expressly permits)?

This document does cite BCP 14 directly, and then seems to take that curious pattern to the next level.  Among the 130+ SHOULDs in here, I'm particularly confused by stuff like this in Section 4.3.1:

  This section specifies which parameters SHOULD be considered while
  configuring clients using test equipment. 

I have no idea what this means to the test.  If I've simply thought about these parameters, have I met the burden here?

This in Section 4.3.1.1 ("TCP Stack Attributes") seems an odd thing to have to stipulate:

  The client SHOULD initiate and close TCP connections.

Then Section 7.1.3, which contains subsections about each of the test parameters for the benchmark described in Section 7.1, consists of this text:

  In this section, the benchmarking test specific parameters SHOULD be
  defined.

As I read it, this is a self-referential SHOULD about this document!  I'm very confused.  This happens again in Section 7.2.3, 7.3.3, etc., up to 7.9.3, and even Appendix A.3.  I think in each case you just want:

  This section defines test-specific parameters for this benchmark.
2022-02-02
13 Murray Kucherawy
[Ballot comment]
Nits not yet mentioned by others:

Section 4.2:

* "... users SHOULD configure their device ..." -- s/device/devices/ (unless all users share one device)

Section 6.3:

* "The value SHOULD be expressed in millisecond." -- s/millisecond/milliseconds/
2022-02-02
13 Murray Kucherawy [Ballot Position Update] New position, Discuss, has been recorded for Murray Kucherawy
2022-02-02
13 Al Morton This was the individual draft.
2022-02-02
13 Al Morton This document now replaces draft-balarajah-bmwg-ngfw-performance instead of None
2022-02-02
13 Roman Danyliw
[Ballot discuss]
** A key element of successfully running the throughput tests described in Section 7, appears to be ensuring how to configure the device under test.  Section 4.2. helpfully specifies feature sets with recommendations configurations.  However, it appears there are elements of under-specification given the level of detail specified with normative language.  Specifically:

-- Section 4.2.1 seems unspecified regarding all the capabilities in Table 1 and 2.  The discussion around vulnerabilities (CVEs) does not appear to be relevant to configuration of anti-spyware, anti-virus, anti-botnet, DLP, and DDOS. 

-- Recognizing that NGFW, NGIPS and UTM are not precise product categories, offerings in this space commonly rely on statistical models or AI techniques (e.g., machine learning) to improve detection rates and reduce false positives to realize the capabilities in Table 1 and 2.  If even possible, how should these settings be tuned?  How should the training period be handled when describing the steps of the test regime (e.g., in Section 4.3.4? Section 7.2.4?)

** Appendix A.  The KPI measures don’t seem precise here – CVEs are unlikely to be the measure seen on the wire.  Wouldn’t it be exploits associated with a particular vulnerability (that’s numbered via CVE)?  There can be a one-to-many relationship between the vulnerability and exploits (e.g., multiple products affected by a single CVE); or the multiple implementations of an exploit.
2022-02-02
13 Roman Danyliw
[Ballot comment]
** Abstract.  NGFW, NGIPS and UTM are fuzzy product categories.  Do you want to define them somewhere?  How do they differ in functionality?  UTM is mentioned here, but not again in the document.

** Section 1.
The requirements
  for network security element performance and effectiveness have
  increased tremendously since then.  In the eighteen years since
  [RFC3511] was published, recommending test methodology and
  terminology for firewalls, requirements and expectations for network
  security elements has increased tremendously. 

I don’t follow how the intent of these two sentences is different.  Given the other text in this paragraph, these sentences also appear redundant.

** Section 3. Per “This document focuses on advanced, …”, what makes a testing method “advanced”?

** Section 4.2.  The abstract said that testing for NGFW, NGIPS and UTM would be provided.  This section is silent on UTM.

** Section 4.2.  Should the following additional features be noted as a feature of NGFWs and NGIPS (Tables 1 and 2)?

-- reconnaissance detection

-- geolocation or network topology-based classification/filtering

** Section 4.2. Thanks for the capability taxonomies describe here.  Should it be noted that “Table 1 and 2 are approximate taxonomies of features commonly found in currently deployed NGFW and NGIDS.  The features provided by specific implementations may be named differently and not necessarily have configuration settings that align to the taxonomy.”

** Table 1.  Is there a reason that DPI and Anti-Evasion (listed in Table 2 for NGIPS) are not mentioned here (for NGFW).  I don’t see how many (all?) of the features listed as RECOMMENDED could be done without it.

** Table 3.  For Anti-Botnet, should it read “detects and blocks”?

** Table 3.  For Web Filtering, is this scoped to be classification and threat detection by URI?

** Table 3.  This table is missing a description for DoS from Table 1 and DPI and Anti-Evasion from Table 2.

** Section 4.2.  Per “Logging SHOULD be enabled.”  How does this “SHOULD” align with “logging and reporting” being a RECOMMENDED in Table 1 and 2?  Same question on “Application Identification and Control SHOULD be configured”

** Section 4.3.1.1.  Why is such well-formed and well-behaved traffic assumed for a security device?

** Section 4.3.1.  What cipher suites should be used for TLS 1.3 based tests? The text is prescriptive for TLS 1.2 (using a RECOMMEND) but simply restates all of those registered by RFC8446.

** Section 9.  Given that the configurations of these test will include working exploits, it would be helpful to provide a reminder on the need control access to them.

** Section A.1.
In parallel, the CVEs will be sent to the DUT/SUT as
  encrypted and as well as clear text payload formats using a traffic
  generator. 

This guidance doesn’t seem appropriate for all cases.  Couldn’t the vulnerability being exploited involve a payload in the unencrypted part or a phase in the communication exchange before a secure channel is negotiated?

** Editorial nits
-- Section 1.  Editorial. s/for firewalls initially/for firewalls/

-- Section 5.  Typo. s/as test equipments/as test equipment/
2022-02-02
13 Roman Danyliw [Ballot Position Update] New position, Discuss, has been recorded for Roman Danyliw
2022-02-02
13 Alvaro Retana [Ballot comment]
The datatracker should indicate that this document replaces draft-balarajah-bmwg-ngfw-performance.
2022-02-02
13 Alvaro Retana [Ballot Position Update] New position, No Objection, has been recorded for Alvaro Retana
2022-02-01
13 Martin Duke
[Ballot comment]
(4.3.1.3) RFC8446 is not the reference for HTTP/2.

(4.3.1.1), (4.3.2.1) Is there a reason that delayed ack limits are defined only in terms of number of bytes, instead of time? What if an HTTP request (for example) ends, and the delayed ack is very long? Note also that the specification for delayed acks limits it to every two packets, although in the real world many endpoints use much higher thresholds. [It's OK to keep it at 10*MSS if you prefer].

(4.3.3.1) What is a "TCP persistence stack"?
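The byte-vs-time point can be illustrated with a toy delayed-ACK policy that combines the draft's byte threshold with the time cap the reviewer suggests. Everything here (function name, 200 ms cap) is a hypothetical sketch, not text from the draft:

```python
# Toy delayed-ACK decision: ACK when either the unacknowledged-byte
# threshold (10 * MSS, as in the draft) or a time cap is reached.
def should_ack(unacked_bytes: int, elapsed_ms: float,
               mss: int = 1460, byte_limit_mss: int = 10,
               time_limit_ms: float = 200.0) -> bool:
    return unacked_bytes >= byte_limit_mss * mss or elapsed_ms >= time_limit_ms

assert should_ack(15000, 5)      # byte threshold (10 * MSS = 14600) reached
assert should_ack(100, 250)      # time cap forces the ACK out
assert not should_ack(100, 5)    # neither limit reached yet
```

Without the time cap, the last segment of a short HTTP response could sit unacknowledged indefinitely, which is exactly the reviewer's concern.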
2022-02-01
13 Martin Duke [Ballot Position Update] New position, No Objection, has been recorded for Martin Duke
2022-02-01
13 Matt Joras Request for Telechat review by GENART Completed: Ready with Issues. Reviewer: Matt Joras. Sent review to list.
2022-01-30
13 Toerless Eckert Request for Telechat review by IOTDIR Completed: On the Right Track. Reviewer: Toerless Eckert. Sent review to list.
2022-01-26
13 Amanda Baber IANA Review state changed to IANA OK - No Actions Needed from Version Changed - Review Needed
2022-01-21
13 Jean Mahoney Closed request for Last Call review by GENART with state 'Overtaken by Events'
2022-01-20
13 Jean Mahoney Request for Telechat review by GENART is assigned to Matt Joras
2022-01-20
13 Jean Mahoney Request for Telechat review by GENART is assigned to Matt Joras
2022-01-20
13 Jean Mahoney Assignment of request for Last Call review by GENART to Matt Joras was withdrawn
2022-01-20
13 Tommy Pauly Request for Telechat review by TSVART Completed: Ready. Reviewer: Tommy Pauly. Sent review to list.
2022-01-19
13 Carlos Jesús Bernardos Request for Telechat review by INTDIR is assigned to David Lamparter
2022-01-19
13 Carlos Jesús Bernardos Request for Telechat review by INTDIR is assigned to David Lamparter
2022-01-19
13 Éric Vyncke Requested Telechat review by INTDIR
2022-01-18
13 Ines Robles Request for Telechat review by IOTDIR is assigned to Toerless Eckert
2022-01-18
13 Ines Robles Request for Telechat review by IOTDIR is assigned to Toerless Eckert
2022-01-18
13 Éric Vyncke Requested Telechat review by IOTDIR
2022-01-17
13 Magnus Westerlund Request for Telechat review by TSVART is assigned to Tommy Pauly
2022-01-17
13 Magnus Westerlund Request for Telechat review by TSVART is assigned to Tommy Pauly
2022-01-14
13 Cindy Morgan Placed on agenda for telechat - 2022-02-03
2022-01-14
13 Warren Kumari Ballot has been issued
2022-01-14
13 Warren Kumari [Ballot Position Update] New position, Yes, has been recorded for Warren Kumari
2022-01-14
13 Warren Kumari Created "Approve" ballot
2022-01-14
13 Warren Kumari IESG state changed to IESG Evaluation from Waiting for AD Go-Ahead
2022-01-12
13 (System) IANA Review state changed to Version Changed - Review Needed from IANA OK - No Actions Needed
2022-01-12
13 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-13.txt
2022-01-12
13 (System) New version approved
2022-01-12
13 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2022-01-12
13 Balamuhunthan Balarajah Uploaded new revision
2021-12-29
12 (System) IESG state changed to Waiting for AD Go-Ahead from In Last Call
2021-12-21
12 (System) IANA Review state changed to IANA OK - No Actions Needed from IANA - Review Needed
2021-12-21
12 Sabrina Tanamal
(Via drafts-lastcall@iana.org): IESG/Authors/WG Chairs:

The IANA Functions Operator has reviewed draft-ietf-bmwg-ngfw-performance-12, which is currently in Last Call, and has the following comments:

We understand that this document doesn't require any registry actions.

While it's often helpful for a document's IANA Considerations section to remain in place upon publication even if there are no actions, if the authors strongly prefer to remove it, we do not object.

If this assessment is not accurate, please respond as soon as possible.

Thank you,

Sabrina Tanamal
Lead IANA Services Specialist
2021-12-17
12 Tommy Pauly Request for Last Call review by TSVART Completed: On the Right Track. Reviewer: Tommy Pauly. Sent review to list.
2021-12-16
12 Jean Mahoney Request for Last Call review by GENART is assigned to Matt Joras
2021-12-16
12 Jean Mahoney Request for Last Call review by GENART is assigned to Matt Joras
2021-12-16
12 Gunter Van de Velde Request for Last Call review by OPSDIR is assigned to Carlos Martínez
2021-12-16
12 Gunter Van de Velde Request for Last Call review by OPSDIR is assigned to Carlos Martínez
2021-12-16
12 Magnus Westerlund Request for Last Call review by TSVART is assigned to Tommy Pauly
2021-12-16
12 Magnus Westerlund Request for Last Call review by TSVART is assigned to Tommy Pauly
2021-12-15
12 Cindy Morgan IANA Review state changed to IANA - Review Needed
2021-12-15
12 Cindy Morgan
The following Last Call announcement was sent out (ends 2021-12-29):

From: The IESG
To: IETF-Announce
CC: Al Morton , acm@research.att.com, bmwg-chairs@ietf.org, bmwg@ietf.org, draft-ietf-bmwg-ngfw-performance@ietf.org, warren@kumari.net
Reply-To: last-call@ietf.org
Sender:
Subject: Last Call:  (Benchmarking Methodology for Network Security Device Performance) to Informational RFC


The IESG has received a request from the Benchmarking Methodology WG (bmwg)
to consider the following document: - 'Benchmarking Methodology for Network
Security Device Performance'
  as Informational RFC

The IESG plans to make a decision in the next few weeks, and solicits final
comments on this action. Please send substantive comments to the
last-call@ietf.org mailing lists by 2021-12-29. Exceptionally, comments may
be sent to iesg@ietf.org instead. In either case, please retain the beginning
of the Subject line to allow automated sorting.

Abstract


  This document provides benchmarking terminology and methodology for
  next-generation network security devices including next-generation
  firewalls (NGFW), next-generation intrusion prevention systems
  (NGIPS), and unified threat management (UTM) implementations.  The
  main areas covered in this document are test terminology, test
  configuration parameters, and benchmarking methodology for NGFW and
  NGIPS.  This document aims to improve the applicability,
  reproducibility, and transparency of benchmarks and to align the test
  methodology with today's increasingly complex layer 7 security
  centric network application use cases.  As a result, this document
  makes [RFC3511] obsolete.




The file can be obtained via
https://datatracker.ietf.org/doc/draft-ietf-bmwg-ngfw-performance/



No IPR declarations have been submitted directly on this I-D.




2021-12-15
12 Cindy Morgan IESG state changed to In Last Call from Last Call Requested
2021-12-15
12 Warren Kumari Last call was requested
2021-12-15
12 Warren Kumari Last call announcement was generated
2021-12-15
12 Warren Kumari Ballot approval text was generated
2021-12-15
12 (System) Changed action holders to Warren Kumari (IESG state changed)
2021-12-15
12 Warren Kumari IESG state changed to Last Call Requested from Publication Requested
2021-12-15
12 Al Morton AD Review complete with only editorial comments - current version to be submitted for IETF Last Call.
2021-12-15
12 Al Morton Tag Other - see Comment Log set. Tag Doc Shepherd Follow-up Underway cleared.
2021-12-15
12 Warren Kumari Changed action holders to Al Morton, Sarah Banks, Warren Kumari
2021-12-15
12 Warren Kumari Changed consensus to Yes from Unknown
2021-12-15
12 Warren Kumari Ballot writeup was changed
2021-11-19
12 Al Morton
As required by RFC 4858, this is the current template for the Document
Shepherd Write-Up. Changes are expected over time.

This version is dated 1 November 2019.

(1) What type of RFC is being requested (BCP, Proposed Standard, Internet Standard, Informational, Experimental, or Historic)? Why is this the proper type of RFC? Is this type of RFC indicated in the title page header?

Informational, all BMWG RFCs to date are Informational.
The status is correctly indicated on the title pages.

(2) The IESG approval announcement includes a Document Announcement Write-Up. Please provide such a Document Announcement Write-Up. Recent examples can be found in the "Action" announcements for approved documents. The approval announcement contains the following sections:

Technical Summary:

This document provides benchmarking terminology and methodology for
next-generation network security devices including next-generation
firewalls (NGFW), next-generation intrusion prevention systems
(NGIPS), and unified threat management (UTM) implementations.  This
document aims to improve the applicability, reproducibility, and
transparency of benchmarks and to align the test methodology with
today's increasingly complex layer 7 security centric network
application use cases.  The main areas covered in this document are
test terminology, test configuration parameters, and benchmarking
methodology for NGFW and NGIPS.

Working Group Summary:

Consensus for these drafts required several WGLC which prompted
careful review and further comments. The scope of the document was
appropriately tightened during review. The process to achieve
consensus was long but smooth, and at no time was there sustained
controversy.

Document Quality:

There are at least two existing implementations of the test methods described in the memo, some full and some partial. Many layers of review contributed to the quality of the document (authors, the external NetSecOpen organization, and many working group participants sharing comments on the bmwg list).

Personnel:

Al Morton is the Document Shepherd.
Warren Kumari is the Responsible Area Director.

(3) Briefly describe the review of this document that was performed by the Document Shepherd. If this version of the document is not ready for publication, please explain why the document is being forwarded to the IESG.

The Doc Shepherd has reviewed this memo many times during development, and seen his comments addressed.

(4) Does the document Shepherd have any concerns about the depth or breadth of the reviews that have been performed?

No

(5) Do portions of the document need review from a particular or from broader perspective, e.g., security, operational complexity, AAA, DNS, DHCP, XML, or internationalization? If so, describe the review that took place.

No

(6) Describe any specific concerns or issues that the Document Shepherd has with this document that the Responsible Area Director and/or the IESG should be aware of? For example, perhaps he or she is uncomfortable with certain parts of the document, or has concerns whether there really is a need for it. In any event, if the WG has discussed those issues and has indicated that it still wishes to advance the document, detail those concerns here.

Review of the "next-generation" adjective prompted discussion, but the authors found that this adjective is in common use with the FW and NGIPS devices that are the target of this work. Also, the adjective helps to distinguish this work from the devices covered in the RFC 3511 time-frame.

(7) Has each author confirmed that any and all appropriate IPR disclosures required for full conformance with the provisions of BCP 78 and BCP 79 have already been filed. If not, explain why?

Yes

(8) Has an IPR disclosure been filed that references this document? If so, summarize any WG discussion and conclusion regarding the IPR disclosures.

No

(9) How solid is the WG consensus behind this document? Does it represent the strong concurrence of a few individuals, with others being silent, or does the WG as a whole understand and agree with it?

I think that most of the WG understands this document's goals and methods, and many members have reviewed the document in detail, according to their experience. The WG consensus is now clear.

(10) Has anyone threatened an appeal or otherwise indicated extreme discontent? If so, please summarise the areas of conflict in separate email messages to the Responsible Area Director. (It should be in a separate email because this questionnaire is publicly available.)

No

(11) Identify any ID nits the Document Shepherd has found in this document. (See http://www.ietf.org/tools/idnits/ and the Internet-Drafts Checklist). Boilerplate checks are not enough; this check needs to be thorough.

The current nits-check is below, with [acm] comments:

idnits 2.17.00 (12 Aug 2021)

/tmp/idnits22257/draft-ietf-bmwg-ngfw-performance-10.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

    No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

    No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  == There are 1 instance of lines with non-RFC6890-compliant IPv4 addresses
    in the document.  If these are example addresses, they should be changed.
[acm]
this is ok, BMWG's addresses are used.
/tmp/idnits31522/draft-ietf-bmwg-ngfw-performance-10_1_.txt(2895): update_references(    [RFC5180] and the IPv4 address block 198.18.0.0/15 has been allocated)

  == There are 1 instance of lines with non-RFC3849-compliant IPv6 addresses
    in the document.  If these are example addresses, they should be changed.
[acm]
this is ok, BMWG's addresses are used.
/tmp/idnits31522/draft-ietf-bmwg-ngfw-performance-10_1_.txt(2899): Found possible IPv6 address '2001:2::' in position 138 in the paragraph; this doesn't match RFC 3849's suggested 2001:DB8::/32 address range or RFC 4193's Unique Local Address range FC00::/7.
  -->  The IANA has assigned IPv4 and IPv6 address blocks in [RFC6890] that have been registered for special purposes.  The IPv6 address block 2001:2::/48 has been allocated for the purpose of IPv6 Benchmarking [RFC5180] and the IPv4 address block 198.18.0.0/15 has been allocated for the purpose of IPv4 Benchmarking [RFC2544].  This assignment was made to minimize the chance of conflict in case a testing device were to be accidentally connected to part of the Internet.

  -- The draft header indicates that this document obsoletes RFC3511, but the
    abstract doesn't seem to mention this, which it should.
[acm]
Fixed

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The document seems to lack the recommended RFC 2119 boilerplate, even if
    it appears to use RFC 2119 keywords.
[acm] this is ok, Section 2 provides the correct boilerplate.

    (The document does seem to have the reference to RFC 2119 which the
    ID-Checklist requires).
  -- The document date (September 2021) is 31 days in the past.  Is this
    intentional?


  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  -- Obsolete informational reference (is this intentional?): RFC 2616
    (Obsoleted by RFC 7230, RFC 7231, RFC 7232, RFC 7233, RFC 7234, RFC 7235)
[acm]
Updated

    Summary: 0 errors (**), 0 flaws (~~), 3 warnings (==), 3 comments (--).

    Run idnits with the --verbose option for more detailed information about
    the items above.
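The benchmarking address blocks flagged (harmlessly) in the nits output above can be checked mechanically. This is a minimal sketch using Python's standard ipaddress module; the helper name is hypothetical and the block values are taken from the draft text quoted above:

```python
import ipaddress

# Benchmarking address blocks cited in the idnits output above:
# 198.18.0.0/15 for IPv4 (RFC 2544) and 2001:2::/48 for IPv6 (RFC 5180).
IPV4_BENCH = ipaddress.ip_network("198.18.0.0/15")
IPV6_BENCH = ipaddress.ip_network("2001:2::/48")

def is_benchmarking_address(addr: str) -> bool:
    """Return True if addr falls inside either benchmarking block."""
    ip = ipaddress.ip_address(addr)
    net = IPV4_BENCH if ip.version == 4 else IPV6_BENCH
    return ip in net
```

A test harness could use such a check to confirm that configured test addresses stay within the allocated blocks, minimizing conflicts if a testing device were accidentally connected to the Internet.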

(12) Describe how the document meets any required formal review criteria, such as the MIB Doctor, YANG Doctor, media type, and URI type reviews.

NA

(13) Have all references within this document been identified as either normative or informative?

Yes.

(14) Are there normative references to documents that are not ready for advancement or are otherwise in an unclear state? If such normative references exist, what is the plan for their completion?

All normative refs are stable/RFCs.

(15) Are there downward normative references (see RFC 3967)? If so, list these downward references to support the Area Director in the Last Call procedure.

No.

(16) Will publication of this document change the status of any existing RFCs? Are those RFCs listed on the title page header, listed in the abstract, and discussed in the introduction? If the RFCs are not listed in the Abstract and Introduction, explain why, and point to the part of the document where the relationship of this document to the other RFCs is discussed. If this information is not in the document, explain why the WG considers it unnecessary.

RFC 3511 will be made obsolete; this is indicated in the title page header.

(17) Describe the Document Shepherd's review of the IANA considerations section, especially with regard to its consistency with the body of the document. Confirm that all protocol extensions that the document makes are associated with the appropriate reservations in IANA registries. Confirm that any referenced IANA registries have been clearly identified. Confirm that newly created IANA registries include a detailed specification of the initial contents for the registry, that allocations procedures for future registrations are defined, and a reasonable name for the new registry has been suggested (see RFC 8126).

The draft makes no specific request of IANA, and now says that first.


(18) List any new IANA registries that require Expert Review for future allocations. Provide any public guidance that the IESG would find useful in selecting the IANA Experts for these new registries.

NA

(19) Describe reviews and automated checks performed by the Document Shepherd to validate sections of the document written in a formal language, such as XML code, BNF rules, MIB definitions, YANG modules, etc.

NA

(20) If the document contains a YANG module, has the module been checked with any of the recommended validation tools (https://trac.ietf.org/trac/ops/wiki/yang-review-tools) for syntax and formatting validation? If there are any resulting errors or warnings, what is the justification for not fixing them at this time? Does the YANG module comply with the Network Management Datastore Architecture (NMDA) as specified in RFC8342?

NA
2021-11-19
12 Al Morton
As required by RFC 4858, this is the current template for the Document
Shepherd Write-Up. Changes are expected over time.

This version is dated 1 November 2019.

(1) What type of RFC is being requested (BCP, Proposed Standard, Internet Standard, Informational, Experimental, or Historic)? Why is this the proper type of RFC? Is this type of RFC indicated in the title page header?

Informational, all BMWG RFCs to date are Informational.
The status is correctly indicated on the title pages.

(2) The IESG approval announcement includes a Document Announcement Write-Up. Please provide such a Document Announcement Write-Up. Recent examples can be found in the "Action" announcements for approved documents. The approval announcement contains the following sections:

Technical Summary:

This document provides benchmarking terminology and methodology for
next-generation network security devices including next-generation
firewalls (NGFW), next-generation intrusion prevention systems
(NGIPS), and unified threat management (UTM) implementations.  This
document aims to improve the applicability, reproducibility, and
transparency of benchmarks and to align the test methodology with
today's increasingly complex layer 7 security centric network
application use cases.  The main areas covered in this document are
test terminology, test configuration parameters, and benchmarking
methodology for NGFW and NGIPS.

Working Group Summary:

Consensus for this draft required several WGLCs, which prompted
careful review and further comments. The scope of the document was
appropriately tightened during review. The process to achieve
consensus was long but smooth, and at no time was there sustained
controversy.

Document Quality:

There are at least two existing implementations of the test methods described in the memo, some full and some partial. Many layers of review contributed to the quality of the document (the authors, the external NetSecOpen organization, and many working group participants sharing comments on the bmwg list).

Personnel:

Al Morton is the Document Shepherd.
Warren Kumari is the Responsible Area Director.

(3) Briefly describe the review of this document that was performed by the Document Shepherd. If this version of the document is not ready for publication, please explain why the document is being forwarded to the IESG.

The Doc Shepherd has reviewed this memo many times during development, and has seen his comments addressed.

>>> One important comment remains to be addressed:
Since this memo Obsoletes RFC 3511, a sentence indicating this action must be added to the Abstract according to current practice.

>>> One additional comment on version 10:
The Security Directorate review usually goes more smoothly when the Security Considerations section (9) reinforces that the scope of this document is a laboratory Isolated Test Environment (and not production network testing). Sample text is available to use in this section, consistent with BMWG's lab-only charter.

Also, see a few ">>>" below.

(4) Does the document Shepherd have any concerns about the depth or breadth of the reviews that have been performed?

No

(5) Do portions of the document need review from a particular or from broader perspective, e.g., security, operational complexity, AAA, DNS, DHCP, XML, or internationalization? If so, describe the review that took place.

No

(6) Describe any specific concerns or issues that the Document Shepherd has with this document that the Responsible Area Director and/or the IESG should be aware of? For example, perhaps he or she is uncomfortable with certain parts of the document, or has concerns whether there really is a need for it. In any event, if the WG has discussed those issues and has indicated that it still wishes to advance the document, detail those concerns here.

Review of the "next-generation" adjective prompted discussion, but the authors found that this adjective is in common use for the NGFW and NGIPS devices that are the target of this work. Also, the adjective helps to distinguish this work from the devices covered in the RFC 3511 time-frame.

(7) Has each author confirmed that any and all appropriate IPR disclosures required for full conformance with the provisions of BCP 78 and BCP 79 have already been filed. If not, explain why?

>>> message sent 10/16/2021 <<<

(8) Has an IPR disclosure been filed that references this document? If so, summarize any WG discussion and conclusion regarding the IPR disclosures.

(9) How solid is the WG consensus behind this document? Does it represent the strong concurrence of a few individuals, with others being silent, or does the WG as a whole understand and agree with it?

I think that most of the WG understands this document's goals and methods, and many members have reviewed the document in detail, according to their experience. The WG consensus is now clear.

(10) Has anyone threatened an appeal or otherwise indicated extreme discontent? If so, please summarise the areas of conflict in separate email messages to the Responsible Area Director. (It should be in a separate email because this questionnaire is publicly available.)

No

(11) Identify any ID nits the Document Shepherd has found in this document. (See http://www.ietf.org/tools/idnits/ and the Internet-Drafts Checklist). Boilerplate checks are not enough; this check needs to be thorough.

The current nits-check is below, with [acm] comments:

idnits 2.17.00 (12 Aug 2021)

/tmp/idnits22257/draft-ietf-bmwg-ngfw-performance-10.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

    No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

    No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  == There are 1 instance of lines with non-RFC6890-compliant IPv4 addresses
    in the document.  If these are example addresses, they should be changed.
[acm]
this is ok, BMWG's addresses are used.
/tmp/idnits31522/draft-ietf-bmwg-ngfw-performance-10_1_.txt(2895): update_references(    [RFC5180] and the IPv4 address block 198.18.0.0/15 has been allocated)

  == There are 1 instance of lines with non-RFC3849-compliant IPv6 addresses
    in the document.  If these are example addresses, they should be changed.
[acm]
this is ok, BMWG's addresses are used.
/tmp/idnits31522/draft-ietf-bmwg-ngfw-performance-10_1_.txt(2899): Found possible IPv6 address '2001:2::' in position 138 in the paragraph; this doesn't match RFC 3849's suggested 2001:DB8::/32 address range or RFC 4193's Unique Local Address range FC00::/7.
  -->  The IANA has assigned IPv4 and IPv6 address blocks in [RFC6890] that have been registered for special purposes.  The IPv6 address block 2001:2::/48 has been allocated for the purpose of IPv6 Benchmarking [RFC5180] and the IPv4 address block 198.18.0.0/15 has been allocated for the purpose of IPv4 Benchmarking [RFC2544].  This assignment was made to minimize the chance of conflict in case a testing device were to be accidentally connected to part of the Internet.

  -- The draft header indicates that this document obsoletes RFC3511, but the
    abstract doesn't seem to mention this, which it should.
[acm]
>>> This needs a fix, as mentioned earlier.


  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The document seems to lack the recommended RFC 2119 boilerplate, even if
    it appears to use RFC 2119 keywords.
[acm] this is ok, Section 2 provides the correct boilerplate.

    (The document does seem to have the reference to RFC 2119 which the
    ID-Checklist requires).
  -- The document date (September 2021) is 31 days in the past.  Is this
    intentional?


  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  -- Obsolete informational reference (is this intentional?): RFC 2616
    (Obsoleted by RFC 7230, RFC 7231, RFC 7232, RFC 7233, RFC 7234, RFC 7235)
[acm]
>>>> Authors, Please check this ref, see if it can be updated.  <<<<

    Summary: 0 errors (**), 0 flaws (~~), 3 warnings (==), 3 comments (--).

    Run idnits with the --verbose option for more detailed information about
    the items above.

(12) Describe how the document meets any required formal review criteria, such as the MIB Doctor, YANG Doctor, media type, and URI type reviews.

NA

(13) Have all references within this document been identified as either normative or informative?

Yes.

(14) Are there normative references to documents that are not ready for advancement or are otherwise in an unclear state? If such normative references exist, what is the plan for their completion?

All normative refs are stable/RFCs.

(15) Are there downward normative references (see RFC 3967)? If so, list these downward references to support the Area Director in the Last Call procedure.

No.

(16) Will publication of this document change the status of any existing RFCs? Are those RFCs listed on the title page header, listed in the abstract, and discussed in the introduction? If the RFCs are not listed in the Abstract and Introduction, explain why, and point to the part of the document where the relationship of this document to the other RFCs is discussed. If this information is not in the document, explain why the WG considers it unnecessary.

>>> As mentioned twice above, RFC 3511 will become obsolete, and this fact needs to appear in the Abstract.

(17) Describe the Document Shepherd's review of the IANA considerations section, especially with regard to its consistency with the body of the document. Confirm that all protocol extensions that the document makes are associated with the appropriate reservations in IANA registries. Confirm that any referenced IANA registries have been clearly identified. Confirm that newly created IANA registries include a detailed specification of the initial contents for the registry, that allocations procedures for future registrations are defined, and a reasonable name for the new registry has been suggested (see RFC 8126).

>>> The draft discusses the BMWG address assignments in this section (8). However, the draft makes no specific request of IANA, and should say that first.


(18) List any new IANA registries that require Expert Review for future allocations. Provide any public guidance that the IESG would find useful in selecting the IANA Experts for these new registries.

NA

(19) Describe reviews and automated checks performed by the Document Shepherd to validate sections of the document written in a formal language, such as XML code, BNF rules, MIB definitions, YANG modules, etc.

NA

(20) If the document contains a YANG module, has the module been checked with any of the recommended validation tools (https://trac.ietf.org/trac/ops/wiki/yang-review-tools) for syntax and formatting validation? If there are any resulting errors or warnings, what is the justification for not fixing them at this time? Does the YANG module comply with the Network Management Datastore Architecture (NMDA) as specified in RFC8342?

NA
2021-11-19
12 Al Morton Responsible AD changed to Warren Kumari
2021-11-19
12 Al Morton IETF WG state changed to Submitted to IESG for Publication from WG Document
2021-11-19
12 Al Morton IESG state changed to Publication Requested from I-D Exists
2021-11-19
12 Al Morton IESG process started in state Publication Requested
2021-11-19
12 Al Morton IETF WG state changed to WG Document from WG Consensus: Waiting for Write-Up
2021-11-16
12 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-12.txt
2021-11-16
12 (System) New version approved
2021-11-16
12 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2021-11-16
12 Balamuhunthan Balarajah Uploaded new revision
2021-10-20
11 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-11.txt
2021-10-20
11 (System) New version approved
2021-10-20
11 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2021-10-20
11 Balamuhunthan Balarajah Uploaded new revision
2021-10-16
10 Al Morton Revised Doc also needed to respond to Shepherd's review points in preliminary write-up.
2021-10-16
10 Al Morton Tag Doc Shepherd Follow-up Underway set. Tag Revised I-D Needed - Issue raised by WGLC cleared.
2021-10-16
10 Al Morton IETF WG state changed to WG Consensus: Waiting for Write-Up from WG Document
2021-10-16
10 Al Morton Intended Status changed to Informational from None
2021-10-16
10 Al Morton
RFC 3511 time-frame.

(7) Has each author confirmed that any and all appropriate IPR disclosures required for full conformance with the provisions of BCP 78 and BCP 79 have already been filed. If not, explain why?

>>> message sent 10/16/2021 <<<

(8) Has an IPR disclosure been filed that references this document? If so, summarize any WG discussion and conclusion regarding the IPR disclosures.

(9) How solid is the WG consensus behind this document? Does it represent the strong concurrence of a few individuals, with others being silent, or does the WG as a whole understand and agree with it?

I think that most of the WG understands this document's goals and methods, and many members have reviewed the document in detail, according to their experience. The WG consensus is now clear.

(10) Has anyone threatened an appeal or otherwise indicated extreme discontent? If so, please summarise the areas of conflict in separate email messages to the Responsible Area Director. (It should be in a separate email because this questionnaire is publicly available.)

No

(11) Identify any ID nits the Document Shepherd has found in this document. (See http://www.ietf.org/tools/idnits/ and the Internet-Drafts Checklist). Boilerplate checks are not enough; this check needs to be thorough.

The current nits-check is below, with [acm] comments:

idnits 2.17.00 (12 Aug 2021)

/tmp/idnits22257/draft-ietf-bmwg-ngfw-performance-10.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

    No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

    No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  == There are 1 instance of lines with non-RFC6890-compliant IPv4 addresses
    in the document.  If these are example addresses, they should be changed.
[acm]
this is ok, BMWG's addresses are used.
/tmp/idnits31522/draft-ietf-bmwg-ngfw-performance-10_1_.txt(2895): update_references(    [RFC5180] and the IPv4 address block 198.18.0.0/15 has been allocated)

  == There are 1 instance of lines with non-RFC3849-compliant IPv6 addresses
    in the document.  If these are example addresses, they should be changed.
[acm]
this is ok, BMWG's addresses are used.
/tmp/idnits31522/draft-ietf-bmwg-ngfw-performance-10_1_.txt(2899): Found possible IPv6 address '2001:2::' in position 138 in the paragraph; this doesn't match RFC 3849's suggested 2001:DB8::/32 address range or RFC 4193's Unique Local Address range FC00::/7.
  -->  The IANA has assigned IPv4 and IPv6 address blocks in [RFC6890] that have been registered for special purposes.  The IPv6 address block 2001:2::/48 has been allocated for the purpose of IPv6 Benchmarking [RFC5180] and the IPv4 address block 198.18.0.0/15 has been allocated for the purpose of IPv4 Benchmarking [RFC2544].  This assignment was made to minimize the chance of conflict in case a testing device were to be accidentally connected to part of the Internet.

  -- The draft header indicates that this document obsoletes RFC3511, but the
    abstract doesn't seem to mention this, which it should.
[acm]
>>> This needs a fix, as mentioned earlier.


  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The document seems to lack the recommended RFC 2119 boilerplate, even if
    it appears to use RFC 2119 keywords.
[acm] this is ok, Section 2 provides the correct boilerplate.

    (The document does seem to have the reference to RFC 2119 which the
    ID-Checklist requires).
  -- The document date (September 2021) is 31 days in the past.  Is this
    intentional?


  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  -- Obsolete informational reference (is this intentional?): RFC 2616
    (Obsoleted by RFC 7230, RFC 7231, RFC 7232, RFC 7233, RFC 7234, RFC 7235)
[acm]
>>>> Authors, Please check this ref, see if it can be updated.  <<<<

    Summary: 0 errors (**), 0 flaws (~~), 3 warnings (==), 3 comments (--).

    Run idnits with the --verbose option for more detailed information about
    the items above.

(12) Describe how the document meets any required formal review criteria, such as the MIB Doctor, YANG Doctor, media type, and URI type reviews.

NA

(13) Have all references within this document been identified as either normative or informative?

Yes.

(14) Are there normative references to documents that are not ready for advancement or are otherwise in an unclear state? If such normative references exist, what is the plan for their completion?

All normative refs are stable/RFCs.

(15) Are there downward normative references (see RFC 3967)? If so, list these downward references to support the Area Director in the Last Call procedure.

No.

(16) Will publication of this document change the status of any existing RFCs? Are those RFCs listed on the title page header, listed in the abstract, and discussed in the introduction? If the RFCs are not listed in the Abstract and Introduction, explain why, and point to the part of the document where the relationship of this document to the other RFCs is discussed. If this information is not in the document, explain why the WG considers it unnecessary.

>>> As mentioned twice above, RFC 3511 will become obsolete, and this fact needs to appear in the Abstract.

(17) Describe the Document Shepherd's review of the IANA considerations section, especially with regard to its consistency with the body of the document. Confirm that all protocol extensions that the document makes are associated with the appropriate reservations in IANA registries. Confirm that any referenced IANA registries have been clearly identified. Confirm that newly created IANA registries include a detailed specification of the initial contents for the registry, that allocations procedures for future registrations are defined, and a reasonable name for the new registry has been suggested (see RFC 8126).

>>> The draft discusses the BMWG address assignments in this section (8). However, the draft makes no specific request of IANA, and should say that first.


(18) List any new IANA registries that require Expert Review for future allocations. Provide any public guidance that the IESG would find useful in selecting the IANA Experts for these new registries.

NA

(19) Describe reviews and automated checks performed by the Document Shepherd to validate sections of the document written in a formal language, such as XML code, BNF rules, MIB definitions, YANG modules, etc.

NA

(20) If the document contains a YANG module, has the module been checked with any of the recommended validation tools (https://trac.ietf.org/trac/ops/wiki/yang-review-tools) for syntax and formatting validation? If there are any resulting errors or warnings, what is the justification for not fixing them at this time? Does the YANG module comply with the Network Management Datastore Architecture (NMDA) as specified in RFC8342?

NA
2021-09-26
10 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-10.txt
2021-09-26
10 (System) New version approved
2021-09-26
10 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2021-09-26
10 Balamuhunthan Balarajah Uploaded new revision
2021-07-26
09 Al Morton Added to session: IETF-111: bmwg  Mon-1200
2021-05-21
09 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-09.txt
2021-05-21
09 (System) New version approved
2021-05-21
09 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2021-05-21
09 Balamuhunthan Balarajah Uploaded new revision
2021-04-16
08 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-08.txt
2021-04-16
08 (System) New version approved
2021-04-16
08 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2021-04-16
08 Balamuhunthan Balarajah Uploaded new revision
2021-04-06
07 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-07.txt
2021-04-06
07 (System) New version approved
2021-04-06
07 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2021-04-06
07 Balamuhunthan Balarajah Uploaded new revision
2021-02-22
06 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-06.txt
2021-02-22
06 (System) New version approved
2021-02-22
06 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , bmonkman
2021-02-22
06 Balamuhunthan Balarajah Uploaded new revision
2021-02-03
05 Al Morton Tag Revised I-D Needed - Issue raised by WGLC set.
2020-10-30
05 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-05.txt
2020-10-30
05 (System) New version approved
2020-10-30
05 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , bmonkman , Carsten Rossenhoevel
2020-10-30
05 Balamuhunthan Balarajah Uploaded new revision
2020-09-09
04 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-04.txt
2020-09-09
04 (System) New version approved
2020-09-09
04 (System) Request for posting confirmation emailed to previous authors: bmonkman , Carsten Rossenhoevel , Balamuhunthan Balarajah
2020-09-09
04 Balamuhunthan Balarajah Uploaded new revision
2020-03-09
03 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-03.txt
2020-03-09
03 (System) New version approved
2020-03-09
03 (System) Request for posting confirmation emailed to previous authors: Carsten Rossenhoevel , Balamuhunthan Balarajah , bmonkman
2020-03-09
03 Balamuhunthan Balarajah Uploaded new revision
2019-11-19
02 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-02.txt
2019-11-19
02 (System) New version approved
2019-11-19
02 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , bmonkman , Carsten Rossenhoevel
2019-11-19
02 Balamuhunthan Balarajah Uploaded new revision
2019-11-18
01 Al Morton Added to session: IETF-106: bmwg  Wed-1330
2019-09-03
01 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-01.txt
2019-09-03
01 (System) New version approved
2019-09-03
01 (System) Request for posting confirmation emailed to previous authors: Balamuhunthan Balarajah , Carsten Rossenhoevel , Brian Monkman
2019-09-03
01 Balamuhunthan Balarajah Uploaded new revision
2019-07-08
00 Kathleen Moriarty Request for Early review by SECDIR Completed: Has Nits. Reviewer: Kathleen Moriarty. Sent review to list.
2019-03-22
00 Tero Kivinen Request for Early review by SECDIR is assigned to Kathleen Moriarty
2019-03-22
00 Tero Kivinen Request for Early review by SECDIR is assigned to Kathleen Moriarty
2019-03-18
00 Al Morton Notification list changed to Al Morton <acm@research.att.com>
2019-03-18
00 Al Morton Document shepherd changed to Al Morton
2019-03-18
00 Al Morton Added to session: IETF-104: bmwg  Wed-1120
2019-03-18
00 Al Morton Requested Early review by SECDIR
2019-03-05
00 Balamuhunthan Balarajah New version available: draft-ietf-bmwg-ngfw-performance-00.txt
2019-03-05
00 (System) WG -00 approved
2019-03-05
00 Balamuhunthan Balarajah Set submitter to "Balamuhunthan Balarajah ", replaces to (none) and sent approval email to group chairs: bmwg-chairs@ietf.org
2019-03-05
00 Balamuhunthan Balarajah Uploaded new revision