Minutes IETF111: irtfopen
minutes-111-irtfopen-00
Last updated: 2021-07-27

IRTF Open Meeting
=================

Monday, 26 July 2021, at 21:30-22:30 UTC
Room: Room 2

Chair: Colin Perkins (CP)
Notes: Mat Ford

# Introduction and Status Update

  Colin Perkins, IRTF Chair

  Slides:
  https://datatracker.ietf.org/meeting/111/materials/slides-111-irtfopen-agenda-00

# Config2spec: Mining Network Specifications from Network Configurations

  Rüdiger Birkner (RB)
  https://www.usenix.org/conference/nsdi20/presentation/birkner

  Slides:
  https://datatracker.ietf.org/meeting/111/materials/slides-111-irtfopen-config2spec-mining-network-specifications-from-network-configurations-00

  Jonathan Hoyland: If we assume that the intended / correct specification
  is 4000 lines long, then how do I determine whether the generated
  4000-line spec is the one I want?

  RB: We will not learn the intended spec, only the spec that the config
  actually implements. 4000 predicates is way too many. One thing we're
  working on: we can do anomaly detection on the spec. If we see that all
  routers can reach certain destinations but one router cannot, we can
  highlight this anomaly. Spec summarisation is also possible: instead of a
  low-level spec (R1 can reach prefix A, R1 can reach prefix B), it can be
  summarised into something more readable.
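
  A minimal sketch of these two ideas, assuming a spec represented as a set
  of (router, prefix) reachability predicates; the names and data are
  invented for illustration, and this is not config2spec's actual code:

    # Hypothetical illustration of anomaly detection and summarisation
    # over a low-level reachability spec (not config2spec's code).
    from collections import defaultdict

    spec = {
        ("R1", "10.0.0.0/24"), ("R2", "10.0.0.0/24"), ("R3", "10.0.0.0/24"),
        ("R1", "10.0.1.0/24"), ("R2", "10.0.1.0/24"),  # R3 misses this one
    }

    routers = {router for router, _ in spec}
    by_prefix = defaultdict(set)
    for router, prefix in spec:
        by_prefix[prefix].add(router)

    for prefix, reachers in by_prefix.items():
        missing = routers - reachers
        if len(missing) == 1:
            # Anomaly: exactly one router lacks reachability.
            print(f"anomaly: {missing.pop()} cannot reach {prefix}")
        elif reachers == routers:
            # Summarisation: collapse N predicates into one statement.
            print(f"all routers can reach {prefix}")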

  Colin Perkins: You said the generated specs are quite low-level; are they
  human-readable?

  RB: I think they're readable, but for trained people - it depends on who
  is reading them. I hope a human operator will be able to work with them.

  CP: Is it possible to feed them into a synthesis tool and generate the
  config?

  RB: Yes, that is the hope. The policy language, or the set of predicates
  we support, is the challenge - maybe we can't support everything the user
  wants. The predicates we have are mostly data-plane policies, but we
  might need more control-plane policies. E.g., 'don't provide transit for
  neighbour X' isn't supported in config2spec currently.

  Hesham: Saw something similar from VMware. Maybe I should read your paper
  to find out what you have done differently.

  RB: Yes, take a look at the paper and feel free to contact us with any
  ideas, comments or questions.

  Jonathan Hoyland: I wonder if you could do anomaly detection by spec
  compressibility. An anomaly such as a single router not being able to
  reach another when all others can is probably less compressible than all
  routers being able to reach it.

  RB: Yes, indeed. This is especially helpful when policies are missing. You
  can detect missing policies if they allow you to create a more concise
  summary. Obviously, you never know whether you detected a real problem or
  it was an intended exception. That's why you need to involve the operator
  in the summarisation process. However, that should be reduced to a
  minimum.
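
  A sketch of the compressibility idea, using zlib output length as a crude
  proxy for description length; the specs below are invented and this is
  only an illustration, not part of config2spec:

    # A regular spec (every router reaches the same prefix) compresses
    # better than one containing a single exception.
    import zlib

    def spec_size(pairs):
        text = "\n".join(f"{r} can reach {p}" for r, p in sorted(pairs))
        return len(zlib.compress(text.encode()))

    routers = [f"R{i}" for i in range(1, 51)]
    regular = {(r, "10.0.0.0/24") for r in routers}
    anomalous = (regular - {("R7", "10.0.0.0/24")}) | {("R7", "10.9.9.0/24")}

    # The anomalous spec typically compresses to a few more bytes even
    # though it contains the same number of predicates.
    print(spec_size(regular), spec_size(anomalous))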

# Salsify: low-latency network video through tighter integration between a
  video codec and a transport protocol

  Sadjad Fouladi (SF)
  https://www.usenix.org/conference/nsdi18/presentation/fouladi

  Slides:
  https://datatracker.ietf.org/meeting/111/materials/slides-111-irtfopen-salsify-low-latency-network-video-through-tighter-integration-between-a-video-codec-and-a-transport-protocol-00

  Ali Begen: A variable frame rate could be annoying, depending on how much
  it varies.

  SF: That's true. In the case of Salsify, a variable frame rate just means
  "occasionally dropping and not sending frames" when the network cannot
  accommodate them (if we did send them, the resulting glitches would be
  more annoying!).
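
  A toy sketch of that behaviour - drop a frame, rather than queue it,
  whenever the transport's current estimate says the network cannot absorb
  it. All names here are invented for illustration; this is not Salsify's
  actual code:

    def send_frames(frames, capacity_estimates, encode):
        for frame, budget in zip(frames, capacity_estimates):
            data = encode(frame)
            if len(data) <= budget:
                yield data  # network can accommodate the frame: send it
            # else: skip it - a lower frame rate beats the glitches
            # caused by late, queued frames

    frames = ["f1", "f2", "f3"]
    sent = list(send_frames(frames, [10, 1, 10], lambda f: f.encode() * 3))
    print(len(sent), "of", len(frames), "frames sent")  # 2 of 3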

  Ali Begen: I would buy the idea if you were doing all intra-coding (like
  all-JPEG frames), but with predictive coding, arbitrary frame-level rate
  control will likely hurt the overall video quality (though it will surely
  gain from having fewer freezes).

  Hesham: Have you looked at holographic communications? Do you plan to?

  SF: No. This is the first time I've heard the term.

  Jana Iyengar: Thanks for the lovely talk. I've known this work for a
  little while and I'm curious to know if there's been more progress in
  terms of deployment or in terms of your own work. The architecture is
  simple, but it's effective - has this progressed since paper publication?

  SF: The biggest hurdle to deployment is the lack of hardware support for
  stateless codecs. The good news is that about a year ago stateless
  codecs found their way into the Linux kernel. Today you can buy chips
  that implement this functionality in hardware. So it seems like there is
  a path forward to bring ideas like Salsify to production, but until this
  hardware is widely deployed the challenge remains. Nevertheless, many
  small details in Salsify can bring improvements to existing deployments,
  for example dropping frames after encoding - engineers at WhatsApp and
  Google are interested in these ideas and are using them.

  JI: People in this room should talk to Sadjad - the ideas in Salsify can
  make your products better.

  Jonathan Hoyland: That was super interesting, thanks 😊

  Phil Hallam-Baker: Is there a library? What are the license terms?

  SF: https://github.com/excamera/alfalfa#license

  Mo Zanaty: This is called joint source-channel coding. There is a very
  large body of prior work. The main problem is the experts in each domain,
  source/video coding and channel/transport "coding" (really packet
  dynamics more than "coding"), rarely have deep knowledge of the other.
  And so you get old codecs bound to old transports, much as you presented.
  The challenge is to bind state-of-the-art codecs and transports.
  The "functional codec" you implemented in VP8 was already present in the
  original version of VP8 called "dual coder". It was for error resilience
  (clean and dirty decoding paths) not bandwidth speculation. But
  essentially the same concept.

  Spencer Dawkins: This work is quite good, and the presentation made some
  helpful points I don't remember from the paper - thanks for that, too!

  Ali Begen: In terms of stateless encoding, you are doing intra-coding, so
  things get really simplified, but quality will suffer greatly in my
  opinion. Looking at the results in the paper, using the SSIM metric isn't
  really correct - when FaceTime or some other app uses a much better codec
  than VP8, it's unreasonable to expect that its quality will be lower.
  Intra-coding simplifies frame-level rate control, but quality will suffer
  in comparison with more modern codecs using inter-coding. It's not a fair
  comparison.

  SF: Only the first frame is intra-coded; all other frames are inter-coded
  and use motion compensation.

  Ali Begen: The transport says the next frame size should be X, and you
  need to pick a quantisation factor to satisfy that requirement, right?
  You might significantly reduce the bit rate for that frame. But any
  subsequent frame dependent on that original frame might be using it as a
  source, so subsequent frames are also impacted by this reduced quality.

  SF: In the event of a slightly degraded network, the difference between
  subsequent frames is not really visible to the human eye. The more
  probable occurrence is dropping a frame entirely when network capacity
  drops significantly.
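
  A minimal sketch of the loop being discussed here: the transport proposes
  a target size for the next frame, and the encoder searches quantisation
  factors for the best quality that fits, dropping the frame when even the
  coarsest version is too big. The toy encode() function and quantiser
  range are invented stand-ins, not Salsify's actual API (which encodes
  candidate versions with a purely functional codec):

    def encode(frame, state, quantiser):
        # Toy stand-in for a functional (stateless) encoder: output
        # size shrinks as quantisation gets coarser.
        size = max(1, len(frame) // quantiser)
        return bytes(size), state + 1

    def next_output(frame, state, target_size):
        # Finest to coarsest: send the best-quality version that fits
        # the transport's target; keep the old state if we drop.
        for q in range(10, 64):
            data, new_state = encode(frame, state, q)
            if len(data) <= target_size:
                return data, new_state
        return None, state  # drop the frame entirely

    sent, state = next_output(bytes(100_000), state=0, target_size=5_000)
    print("dropped" if sent is None else f"sent {len(sent)} bytes")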

  Mo Zanaty: I would love to see a revival of joint source-channel coding
  using AV1 and QUIC (with BBR optimized for media, not Sprout). Salsify
  2022?

CP: Thanks for these two really interesting talks. Both talks are available
on the IRTF website. I hope both speakers will stick around for the rest of
the week's meetings.

Recordings of the talks, and links to the papers, are available from
https://irtf.org/anrp/