Minutes IETF117: bmwg: Wed 16:30
minutes-117-bmwg-202307261630-00

Meeting Minutes Benchmarking Methodology (bmwg) WG
Date and time 2023-07-26 16:30
Title Minutes IETF117: bmwg: Wed 16:30
State Active
Last updated 2023-08-01


Benchmarking Methodology Working Group (bmwg) - IETF 117 San Francisco

Date and Time: Wednesday, 26 July 2023, 9:30-11:30 PDT
Location: Hilton San Francisco Union Square, Golden Gate 6
Chairs: Sarah Banks; Al Morton (in memoriam)
AD: Warren Kumari
Zulip Stream: https://zulip.ietf.org/login/#narrow/stream/bmwg

Note taker(s), Jabber, IPR

Paolo Volpato volunteered.
Also, notes from Eduard Vasilenko and Giuseppe Fioccola.

Administrative and WG Status (Chairs)

Introduction from the chair; the Note Well was presented.
In memoriam of Al Morton.
Al spent 20 years producing RFCs; his last was published in 2022.

Scott Bradner, Joel Jaeggli, and Warren Kumari shared many kind words about Al.

No agenda bashing.
No IPR declaration.

  • RFC 9411, Benchmarking Methodology for Network Security Device
    Performance

    • Congrats to the authors.
  • WG Adoptions and Proposals

WG Drafts

Multiple Loss Ratio Search and YANG Model, Vratko Polák

draft-ietf-bmwg-mlrsearch

The draft specifies a search for the highest load at which packet loss is
absent (or stays within a target loss ratio). Work status was presented,
with a focus on terminology.
Asking for feedback.
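
As a rough illustration of the idea (not the draft's actual algorithm), a
single-loss-ratio simplification of such a search can be expressed as a
bisection over the offered load; the function name and the measure_loss
callback below are hypothetical.

```python
def find_max_zero_loss_load(measure_loss, min_load, max_load, precision=0.5):
    """Hypothetical sketch: bisect over offered load (e.g., in Mpps) for
    the highest load at which no loss is measured.  measure_loss(load)
    is assumed to send traffic at the given load and return a loss ratio."""
    if measure_loss(min_load) > 0.0:
        return None  # even the minimum load shows loss
    lo, hi = min_load, max_load
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if measure_loss(mid) == 0.0:
            lo = mid  # no loss observed: the boundary is at or above mid
        else:
            hi = mid  # loss observed: the boundary is below mid
    return lo
```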

Sarah Banks: distinguish between terminology and methodology. Avoid
having definitions/terminology scattered across too many RFCs. If the
terminology is too long, keep it in a separate document.

Vratko Polak: the terminology will be kept to a minimum.

Benchmarking Methodology for Stateful NATxy Gateways using RFC 4814 Pseudorandom Port numbers, Gabor Lencse

draft-ietf-bmwg-benchmarking-stateful

The draft randomizes addresses and ports (per RFC 4814) to exercise how
the hash of the device under test works. The source code is on GitHub.
A summary of the proposal was presented, along with the progress of the
draft. The draft is almost ready for WGLC. Publications in scientific
journals are expected.
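
For context, a minimal sketch of RFC 4814-style pseudorandom port
selection is shown below; the address prefixes, host count, and port
range are illustrative assumptions, not values taken from the draft.

```python
import random

def random_flow(src_net="198.18.0.", dst_net="198.19.0.", hosts=256):
    """Illustrative sketch: build one flow's 4-tuple with pseudorandom
    source/destination ports so that a stateful NATxy gateway must create
    a distinct connection entry per flow, exercising its hash evenly.
    All values here are example assumptions, not the draft's settings."""
    return {
        "src_ip": src_net + str(random.randrange(hosts)),
        "dst_ip": dst_net + str(random.randrange(hosts)),
        "src_port": random.randint(1024, 65535),
        "dst_port": random.randint(1024, 65535),
    }

# e.g., generate 10,000 pseudorandom flows for a test run
flows = [random_flow() for _ in range(10_000)]
```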

Sarah Banks: are you expecting an update?

Gabor Lencse: small update on the address usage.

Sarah Banks: as a participant I like your changes. If people support it,
we can go to WGLC.

Proposals

Considerations for Benchmarking Network Performance in Containerized Infrastructures, Tran Minh Ngoc

draft-dcn-bmwg-containerized-infra

Tribute to Al Morton.
The history of and updates to the work were presented, with a list of
actions taken to address the comments received.
Through the updates, the authors hope to have addressed the concerns.
Asking for adoption.

Sarah Banks: thanks for the updates. Happy to take the adoption call to
the list after the meeting. This means reading the draft and answering
yes/no.

Vratko Polak: there is value in the draft, but I am not quite sure of
the scope; the scope should be clearer.
Two RFCs are compared: RFC 8172 and RFC 8204. They have a clear scope;
even though they speak of VMs, they do not go into the details.
Instead, they have more structure on the SUT. My expectation was that
this draft would be more similar to those two RFCs, but it has a
different scope.

Sarah Banks: the scope needs to be a bit clearer. Vratko, please share
your feedback on the list.

Benchmarking methodology for MPLS Segment Routing, Paolo Volpato

draft-vfv-bmwg-srmpls-bench-meth

Recap of the scope of the draft. The authors consider the draft quite
stable after addressing the comments received.

Carsten Rossenhoevel: I support the work. Asked whether it makes sense
to integrate both drafts into a general one.

Sarah Banks: I support keeping them separate.

Carsten Rossenhoevel: Having a single document can help address the
main points on how to benchmark Segment Routing in general.

Eduard Vasilenko: Several things are kept out of scope in the two
documents; since each has specific characteristics, two documents can
be a good choice.

Carsten Rossenhoevel: It may be important to work on a single document,
since it can serve as a single reference for Segment Routing.

Sarah Banks: It may be worth considering.

Paolo Volpato: Let's discuss offline.

Luis Contreras: I see pros and cons to merging or keeping them
separate. Considering that the solutions are different, it probably
makes sense to keep them separate.

Sarah Banks: It is important to analyze the methodology and understand
whether it makes sense or not.

Carsten: What about the adoption?

Sarah Banks: Yes, the WG Adoption will be done on the list together with
the discussion on the number of documents.

Benchmarking methodology for IPv6 Segment Routing, Eduard Vasilenko

draft-vfv-bmwg-srv6-bench-meth

An overview of the draft was presented.
Technical details were addressed to show the differences from SR-MPLS.
One issue is that there are no third-party testers.

Sarah Banks: as a participant, I am less concerned about having fewer
tests. If we can make progress on that, fine, but I am less concerned.
Eduard, you made a statement on why the two drafts should be separate.

Luis Contreras: we have been focused on interoperability; probably it
was not the time to focus on benchmarking.
Not committing, but we may have some results to share at IETF 118.

Carsten Rossenhoevel: we spent too much time on things that should be
reproducible.
Is it the case that the drafts are for interoperability?

Eduard Vasilenko: please read it for yourself.

Carsten Rossenhoevel: I did.

Sarah Banks: we are the Benchmarking WG, not an interoperability WG.
Continue the conversation on the list on 1) keeping the two drafts
separate and 2) keeping them focused on benchmarking.

Problems and requirements of evaluation methodology for integrated space and terrestrial networks, Qi Zhang

draft-lai-bmwg-istn-methodology

The need for a new methodology for integrated space/terrestrial
networks was introduced.
The updated environment was shown with new data.

Sarah Banks: excited about this work. How accurate a representation of
the real world is your lab? Happy that FITI and China Mobile partner
with you. Any results you can share with the group? That would be
useful for the IESG to consider when taking this through the RFC
process.

Qi Zhang: we tried to answer the two questions in our work. The
methodology is characterized as data-driven, which helps in dealing
with the fidelity of the simulation.
Once we get the inputs, such as the network parameters, we build the
environment and run the analysis.

Sarah Banks: happy to hear this. On the second question, a concern is
the height shown in the table, and how the numbers translate when
moving from a lab to real cases.

Carsten Rossenhoevel: great work, happy to contribute. There are
professional testing devices that enable capabilities useful for
specific benchmarking methodologies.
I am not sure of the scope of the draft: whether it is about
performance, routing scalability/resilience, or the comparability of
the emulated container scenario.

Joel Jaeggli: it's observable that the atmosphere can condition things
that are parametrized.
I am not saying it's easy to describe, but we have the tools to emulate
it.

Sarah Banks: one of the things the draft could use is what Joel
described. Please circle back and share on the list.

Qi Zhang: we look forward to answering the comments on the list.

Presentations

YANG model for management of Network Tester, Vladimir Vassilev

draft-ietf-bmwg-network-tester-cfg

Progress of the work was presented. Results from the IETF 116 Hackathon
were reported.

Warren Kumari: cool job. Just a comment on the 6-SFP board: I hadn't
put the name together with you.

Sarah Banks: thanks all for sharing the slides.

Remarks

Sarah Banks: I'd like to introduce the new chair of the WG: Giuseppe
Fioccola.

Giuseppe Fioccola: it's an honor to continue the work of Al.

Chat

Boris: Al was a great person: humanity, openness, kindness, etc.

Boris: Better to keep them separate, IMO, because SRv6, for example, is
still developing at a faster pace at the moment. So there are features
which belong only to SRv6, like SRH compression. So the methodology
will be quite different from SR-MPLS in some sense.

Carsten: Comparing the two drafts, the tables of contents are almost
identical. Section 5 of each has the same list of test cases
(ingress / transit / egress). The list of authors is almost identical.
So, good arguments from my point of view to combine them :-)

Eduard: Up to the chair and the WG. I vote to keep them separate
because people would not need two data plane tests at the same time;
they would probably do only one for the real deployment. But there is
no big problem with merging.

Boris: Indeed, Carsten :) but there is some difference inside too. The
point about the interop methodology part is very important for
customers (saying that as a customer), but it contradicts the BMWG
scope, as Sarah said.

Giuseppe: IMO it also depends on how you would eventually do the merge.
Since the tests for SR-MPLS and SRv6 need to be described separately,
the merge is likely to be a sequence of separate tests. For this reason
I think it can be clearer for a reader to keep them separate and
therefore describe the IPv6 and MPLS specific test environments
separately.

Boris: @Giuseppe +1

Eduard: Carsten, thanks for coming to BMWG with all your experience!

Boris: cool!

Carsten: In the end, it is up to the authors to suggest a way forward
that they are willing to go. Do you think the methodology would be that
much different? Would it be possible to separate just the SR-MPLS / SRv6
test bed setup and configuration into two different sub-sections? Could
the test cases just mention "setup tunnel as described in config
section", "send IP data into tunnel", "receive encapsulated data from
tunnel"? I hope that the small technological differences could be
encapsulated in just one subsection, and there would be joint test cases
covering both options.

Carsten: @Boris Yes, I agree! If functional testing and interoperability
were the single goal, the test case descriptions would be less
demanding. For benchmarking, detailed test steps/configs are needed to
create reproducible results and provide the guidance that the community
probably expects from IETF BMWG.