Minutes IETF112: sidrops
minutes-112-sidrops-00

Meeting Minutes SIDR Operations (sidrops) WG
Date and time 2021-11-08 12:00
Title Minutes IETF112: sidrops
State Active
Last updated 2021-11-10

sidrops 112

Monday 8 November 2021
12:00 UTC

Agenda
1. Chair slides - Chris Morrow
Alexander Azimov: both ASPA drafts still need some work before we can request
WG last call

2. Ben Maddison:
https://github.com/benmaddison/rpkimancer

Job Snijders: Expresses his appreciation.

Chris Morrow: Being able to look at the differences between objects over time
is useful.

Ben Maddison: Being able to do a diff between two known versions of a given
object is not difficult to implement, but it's difficult to scale storing that
data. This tool is not a good place to have that.

Chris Morrow: Perhaps a thought process on the mailing list on how to store
that historical data.

Ben Maddison: The issue is that you end up with massive duplication from
rollover of manifests and CRLs; tracking at the file level would make a diff
'almost free', but that is a bigger job than a small Python library like this.
(Comments mention https://rpkiviews.org and the RIPE NCC RIS data.)

Job Snijders: In order to track what happens "in the middle" we need
Certificate Transparency. That's perhaps a topic for the next IETF meeting.

Chris Morrow: Slideware/discussion welcome.

Ties de Kock (chat): My experience is that if you want to actually test RPs,
having a valid TAL + repo (so including valid URLs) is likely the way to go.
That needs one implementation instead of N. Otherwise this looks like a useful
tool! The RIPE NCC also has data per TAL (both effective VRPs and the raw
objects) since 2011.

Ben Maddison: I agree in general; hooking up a test RP to this data is probably
easiest to implement if you use one of the retrieval mechanisms like RRDP or
rsync running locally, but the problem is that because you need to embed the
URLs in the actual objects themselves, you need to spoof your own machine's
DNS, which feels a bit clunky. Doing both is probably the right answer.
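The file-level tracking Ben describes could look something like the sketch
below: record a snapshot of each repository state as a mapping of file path to
content hash, then diff two snapshots. The helper names are hypothetical, not
part of rpkimancer.

```python
import hashlib
from pathlib import Path

def snapshot_digest(directory: str) -> dict:
    """Map each file's repo-relative path to a SHA-256 hash of its contents.

    Storing only (path, hash) pairs per snapshot avoids duplicating the
    object bytes that manifest/CRL rollover re-publishes unchanged.
    """
    root = Path(directory)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def snapshot_diff(old: dict, new: dict):
    """Return (added, removed, changed) file paths between two snapshots."""
    added = sorted(new.keys() - old.keys())
    removed = sorted(old.keys() - new.keys())
    changed = sorted(f for f in old.keys() & new.keys() if old[f] != new[f])
    return added, removed, changed
```

Comparing two snapshots is then a set operation over small dictionaries, i.e.
'almost free' relative to re-parsing the objects; the expensive part remains
collecting and retaining snapshots over time.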

3. Oliver Borchert: BGP-ASPA Hackathon Report
Tools, Datasets to facilitate testing route leak mitigation
https://github.com/usnistgov/NIST-BGP-SRx

Ruediger Volk: Syntax question: the syntax that you give for the ASPA test
data ends in an asterisk; should it be a plus?

Oliver Borchert: Yes.

Ruediger Volk: That precludes empty AS provider sets, but I hope they are
still in ASPA.

Oliver Borchert: OK, we'll change it back so it's regex zero-or-more.

Alexander Azimov: I have a question about the source of data. Using CAIDA, as
I remember, peering relations are based on an "expert" set of Tier-1
providers. These T1s are not in the customer-to-provider set; you need to
specifically create the ASPA0. I'm surprised by the number of unknowns...

Oliver Borchert: Take the data with a grain of salt; we didn't spend time
analysing it. It was for the scale test, and for a dataset that makes some
kind of sense instead of being random. We want data which can be used to do
"two implementations, output the same" to validate that the algorithm is
deterministic.

Alexander Azimov: You should check how you are processing the data from CAIDA.
It is known to be noisy, to have false positives, and to have missing
providers. I suggest starting with simple ASPA records for Tier-1 providers.

Oliver Borchert: We needed a big dataset; we're open to other datasets.

Kotikalapudi Sriram: Small-data testing is available on the GitHub page.

Alexander Azimov: I observed that the number of unknowns is large; we noticed
that they should have an AS0 ASPA. AS 701 is a Tier-1 but appears as a
customer, which is unusual. From the draft there is no "empty ASPA".
(Alexander: yes, in the draft this is marked by AS0, but discussion on the
list continues.)

Oliver Borchert: Will check the PDU; if mistaken, will modify it and deal with
this.

Doug Montgomery (from the chat): Yep - our goal was just a large data set and
a common ASPA output format - run two ASPA implementations on the data set,
diff the output - algorithm "interoperability". It is not meant to be a
realistic test of ASPA effectiveness.
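The asterisk-versus-plus point can be illustrated with two regular
expressions. The pattern below is only illustrative (a one-line
"customer-AS provider-AS ..." record), not the actual syntax of the NIST test
data: with "*" (zero-or-more) a record may carry an empty provider set,
while "+" (one-or-more) requires at least one provider.

```python
import re

# Hypothetical record shape: a customer ASN followed by zero or more
# provider ASNs, separated by spaces.
ZERO_OR_MORE = re.compile(r"^\d+( \d+)*$")  # "*" admits an empty provider set
ONE_OR_MORE = re.compile(r"^\d+( \d+)+$")   # "+" demands at least one provider

print(bool(ZERO_OR_MORE.match("64512")))          # empty provider set accepted
print(bool(ONE_OR_MORE.match("64512")))           # empty provider set rejected
print(bool(ONE_OR_MORE.match("64512 174 3356")))  # providers present: accepted
```

This is why the choice matters for ASPA: an empty provider set (the AS0 /
"no providers" case discussed above) is only expressible under the
zero-or-more form.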

Chris Morrow closes the meeting at 13:21 UTC