RFC 2525: Known TCP Implementation Problems (March 1999)

" the window, causing an
      inappropriate amount of data to be sent into the network after
      recovery.  One cause of this problem is the "header prediction"
      code, which is used to handle incoming segments that require
      little work.  In some implementations of TCP, the header
      prediction code does not check to make sure cwnd has not been
      artificially inflated, and therefore does not reduce the
      artificially increased cwnd when appropriate.

   Significance
      TCP senders that exhibit this problem will transmit a burst of
      data immediately after recovery, which can degrade performance, as
      well as network stability.  Effectively, the sender does not
      reduce the size of cwnd as much as it should (to half its value
      when loss was detected), if at all.  This can harm the performance
      of the TCP connection itself, as well as competing TCP flows.

   Implications
      A TCP sender exhibiting this problem does not reduce cwnd
      appropriately in times of congestion, and therefore may contribute
      to congestive collapse.

   Relevant RFCs
      RFC 2001 outlines the fast retransmit/fast recovery algorithms.
      [Brakmo95] outlines this implementation problem and offers a fix.

   Trace file demonstrating it
      The following trace file was taken using tcpdump at host A, the
      data sender.  The advertised window (which never changed) has been
      omitted for clarity, except for the first packet sent by each
      host.

   08:22:56.825635 A.7505 > B.7505: . 29697:30209(512) ack 1 win 4608
   08:22:57.038794 B.7505 > A.7505: . ack 27649 win 4096
   08:22:57.039279 A.7505 > B.7505: . 30209:30721(512) ack 1
   08:22:57.321876 B.7505 > A.7505: . ack 28161
   08:22:57.322356 A.7505 > B.7505: . 30721:31233(512) ack 1
   08:22:57.347128 B.7505 > A.7505: . ack 28673
   08:22:57.347572 A.7505 > B.7505: . 31233:31745(512) ack 1
   08:22:57.347782 A.7505 > B.7505: . 31745:32257(512) ack 1
   08:22:57.936393 B.7505 > A.7505: . ack 29185
   08:22:57.936864 A.7505 > B.7505: . 32257:32769(512) ack 1
   08:22:57.950802 B.7505 > A.7505: . ack 29697 win 4096
   08:22:57.951246 A.7505 > B.7505: . 32769:33281(512) ack 1
   08:22:58.169422 B.7505 > A.7505: . ack 29697
   08:22:58.638222 B.7505 > A.7505: . ack 29697
   08:22:58.643312 B.7505 > A.7505: . ack 29697
   08:22:58.643669 A.7505 > B.7505: . 29697:30209(512) ack 1
   08:22:58.936436 B.7505 > A.7505: . ack 29697
   08:22:59.002614 B.7505 > A.7505: . ack 29697
   08:22:59.003026 A.7505 > B.7505: . 33281:33793(512) ack 1
   08:22:59.682902 B.7505 > A.7505: . ack 33281
   08:22:59.683391 A.7505 > B.7505: P 33793:34305(512) ack 1
   08:22:59.683748 A.7505 > B.7505: P 34305:34817(512) ack 1 ***
   08:22:59.684043 A.7505 > B.7505: P 34817:35329(512) ack 1
   08:22:59.684266 A.7505 > B.7505: P 35329:35841(512) ack 1
   08:22:59.684567 A.7505 > B.7505: P 35841:36353(512) ack 1
   08:22:59.684810 A.7505 > B.7505: P 36353:36865(512) ack 1
   08:22:59.685094 A.7505 > B.7505: P 36865:37377(512) ack 1

      The first 12 lines of the trace show incoming ACKs clocking out a
      window of data segments.  At this point in the transfer, cwnd is 7
      segments.  The next 4 lines of the trace show 3 duplicate ACKs
      arriving from the receiver, followed by a retransmission from the
      sender.  At this point, cwnd is halved (to 3 segments) and
      artificially incremented by the three duplicate ACKs that have
      arrived, making cwnd 6 segments.  The next two lines show 2 more
      duplicate ACKs arriving, each of which increases cwnd by 1
      segment.  So, after these two duplicate ACKs arrive the cwnd is 8
      segments and the sender has permission to send 1 new segment
      (since there are 7 segments outstanding).  The next line in the
      trace shows this new segment being transmitted.  The next packet
      shown in the trace is an ACK from host B that covers the first 7
      outstanding segments (all but the new segment sent during
      recovery).  This should cause cwnd to be reduced to 3 segments and
      2 segments to be transmitted (since there is already 1 outstanding
      segment in the network).  However, as shown by the last 7 lines of
      the trace, cwnd is not reduced, causing a line-rate burst of 7 new
      segments.

   Trace file demonstrating correct behavior
      The trace would appear identical to the one above, only it would
      stop after the line marked "***", because at this point host A
      would correctly reduce cwnd after recovery, allowing only 2
      segments to be transmitted, rather than producing a burst of 7
      segments.

   References
      This problem is documented and the performance implications
      analyzed in [Brakmo95].

   How to detect
      Failure of window deflation after loss recovery can be found by
      examining sender-side packet traces recorded during periods of
      moderate loss (so cwnd can grow large enough to allow for fast
      recovery when loss occurs).

   How to fix
      When this bug is caused by incorrect header prediction, the fix is
      to add a predicate to the header prediction test that checks to
      see whether cwnd is inflated; if so, the header prediction test
      fails and the usual ACK processing occurs, which (in this case)
      takes care to deflate the window.  See [Brakmo95] for details.
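
      For illustration, here is a sketch of such a predicate in the
      style of the 4.4BSD header prediction test (BSD-style field
      names; the precise set of conditions varies by implementation):

          /* Take the fast path for a pure ACK only when cwnd is not
           * artificially inflated: no duplicate ACKs are outstanding
           * and cwnd already covers the offered window.  Otherwise,
           * fall through to the full ACK processing, which deflates
           * the window correctly. */
          if (tp->t_state == TCPS_ESTABLISHED &&
              /* ... usual header prediction checks ... */
              tp->t_dupacks == 0 &&
              tp->snd_cwnd >= tp->snd_wnd) {
                  /* header prediction fast path for pure ACKs */
          }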

2.9.

   Name of Problem
      Excessively short keepalive connection timeout

   Classification
      Reliability

   Description
      Keep-alive is a mechanism for checking whether an idle connection
      is still alive.  According to RFC 1122, keepalive should only be
      invoked in server applications that might otherwise hang
      indefinitely and consume resources unnecessarily if a client
      crashes or aborts a connection during a network failure.

      RFC 1122 also specifies that if a keep-alive mechanism is
      implemented it MUST NOT interpret failure to respond to any
      specific probe as a dead connection.  The RFC does not specify a
      particular mechanism for timing out a connection when no response
      is received for keepalive probes.  However, if the mechanism does
      not allow ample time for recovery from network congestion or
      delay, connections may be timed out unnecessarily.

   Significance
      In congested networks, can lead to unwarranted termination of
      connections.

   Implications
      It is possible for the network connection between two peer
      machines to become congested or to exhibit packet loss at the time
      that a keep-alive probe is sent on a connection.  If the keep-
      alive mechanism does not allow sufficient time before dropping
      connections in the face of unacknowledged probes, connections may
      be dropped even when both peers of a connection are still alive.

   Relevant RFCs
      RFC 1122 specifies that the keep-alive mechanism may be provided.
      It does not specify a mechanism for determining dead connections
      when keepalive probes are not acknowledged.

   Trace file demonstrating it
      Made using the Orchestra tool at the peer of the machine using
      keep-alive.  After connection establishment, incoming keep-alives
      were dropped by Orchestra to simulate a dead connection.

   22:11:12.040000 A > B: 22666019:0 win 8192 datasz 4 SYN
   22:11:12.060000 B > A: 2496001:22666020 win 4096 datasz 4 SYN ACK
   22:11:12.130000 A > B: 22666020:2496002 win 8760 datasz 0 ACK
   (more than two hours elapse)
   00:23:00.680000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:01.770000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:02.870000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:03.970000 A > B: 22666019:2496002 win 8760 datasz 1 ACK
   00:23:05.070000 A > B: 22666019:2496002 win 8760 datasz 1 ACK

      The initial three packets are the SYN exchange for connection
      setup.  About two hours later, the keepalive timer fires because
      the connection has been idle.  Keepalive probes are transmitted a
      total of 5 times, with a 1 second spacing between probes, after
      which the connection is dropped.  This is problematic because a 5
      second network outage at the time of the first probe results in
      the connection being killed.

   Trace file demonstrating correct behavior
      Made using the Orchestra tool at the peer of the machine using
      keep-alive.  After connection establishment, incoming keep-alives
      were dropped by Orchestra to simulate a dead connection.

   16:01:52.130000 A > B: 1804412929:0 win 4096 datasz 4 SYN
   16:01:52.360000 B > A: 16512001:1804412930 win 4096 datasz 4 SYN ACK
   16:01:52.410000 A > B: 1804412930:16512002 win 4096 datasz 0 ACK
   (two hours elapse)
   18:01:57.170000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:03:12.220000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:04:27.270000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:05:42.320000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:06:57.370000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:08:12.420000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:09:27.480000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:10:43.290000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:11:57.580000 A > B: 1804412929:16512002 win 4096 datasz 0 ACK
   18:13:12.630000 A > B: 1804412929:16512002 win 4096 datasz 0 RST ACK

      In this trace, when the keep-alive timer expires, 9 keepalive
      probes are sent at 75 second intervals.  75 seconds after the last
      probe is sent, a final RST segment is sent indicating that the
      connection has been closed.  This implementation waits about 11
      minutes before timing out the connection, while the first
      implementation shown allows only 5 seconds.

   References
      This problem is documented in [Dawson97].

   How to detect
      For implementations manifesting this problem, it shows up on a
      packet trace after the keepalive timer fires if the peer machine
      receiving the keepalive does not respond.  Usually the keepalive
      timer will fire at least two hours after keepalive is turned on,
      but it may be sooner if the timer value has been configured lower,
      or if the keepalive mechanism violates the specification (see
      Insufficient interval between keepalives problem).  In this
      example, suppressing the response of the peer to keepalive probes
      was accomplished using the Orchestra toolkit, which can be
      configured to drop packets.  It could also have been done by
      creating a connection, turning on keepalive, and disconnecting the
      network connection at the receiver machine.

   How to fix
      This problem can be fixed by using a different method for timing
      out keepalives that allows a longer period of time to elapse
      before dropping the connection.  For example, the algorithm for
      timing out on dropped data could be used.  Another possibility is
      an algorithm such as the one shown in the trace above, which sends
      9 probes at 75 second intervals and then waits an additional 75
      seconds for a response before closing the connection.
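
      For illustration, a sketch of a keepalive timer handler
      implementing the second approach, with constants matching the
      trace above (names are illustrative, in the BSD style):

          #define TCP_KEEPINTVL  75   /* seconds between probes */
          #define TCP_KEEPCNT     9   /* probes before giving up */

          /* Invoked each time the keepalive timer expires. */
          if (++tp->t_probes > TCP_KEEPCNT)
                  tcp_drop(tp);              /* still no response */
          else {
                  tcp_send_probe(tp);        /* garbage-byte probe */
                  tcp_set_timer(tp, TCP_KEEPINTVL);
          }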

2.10.

   Name of Problem
      Failure to back off retransmission timeout

   Classification
      Congestion control / reliability

   Description
      The retransmission timeout is used to determine when a packet has
      been dropped in the network.  When this timeout has expired
      without the arrival of an ACK, the segment is retransmitted. Each
      time a segment is retransmitted, the timeout is adjusted according
      to an exponential backoff algorithm, doubling each time.  If a TCP
      fails to receive an ACK after numerous attempts at retransmitting
      the same segment, it terminates the connection.  A TCP that fails
      to double its retransmission timeout upon repeated timeouts is
      said to exhibit "Failure to back off retransmission timeout".
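
      For example, 4.x BSD derivatives implement the doubling by
      applying a precomputed multiplier table to the base RTO; a
      sketch (the shift index is capped, so the interval eventually
      stops growing):

          /* multipliers indexed by the retransmission count */
          int tcp_backoff[] =
              { 1, 2, 4, 8, 16, 32, 64, 64, 64, 64, 64, 64, 64 };

          rexmt = base_rto * tcp_backoff[tp->t_rxtshift];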

   Significance
      Backing off the retransmission timer is a cornerstone of network
      stability in the presence of congestion.  Consequently, this bug
      can have severe adverse effects in congested networks.  It also
      affects TCP reliability in congested networks, as discussed in the
      next section.

   Implications
      It is possible for the network connection between two TCP peers to
      become congested or to exhibit packet loss at the time that a
      retransmission is sent on a connection.  If the retransmission
      mechanism does not allow sufficient time before dropping
      connections in the face of unacknowledged segments, connections
      may be dropped even when, by waiting longer, the connection could
      have continued.

   Relevant RFCs
      RFC 1122 specifies mandatory exponential backoff of the
      retransmission timeout, and the termination of connections after
      some period of time (at least 100 seconds).

   Trace file demonstrating it
      Made using tcpdump on an intermediate host:

   16:51:12.671727 A > B: S 510878852:510878852(0) win 16384
   16:51:12.672479 B > A: S 2392143687:2392143687(0)
                            ack 510878853 win 16384
   16:51:12.672581 A > B: . ack 1 win 16384
   16:51:15.244171 A > B: P 1:3(2) ack 1 win 16384
   16:51:15.244933 B > A: . ack 3 win 17518  (DF)

   <receiving host disconnected>

   16:51:19.381176 A > B: P 3:5(2) ack 1 win 16384
   16:51:20.162016 A > B: P 3:5(2) ack 1 win 16384
   16:51:21.161936 A > B: P 3:5(2) ack 1 win 16384
   16:51:22.161914 A > B: P 3:5(2) ack 1 win 16384
   16:51:23.161914 A > B: P 3:5(2) ack 1 win 16384
   16:51:24.161879 A > B: P 3:5(2) ack 1 win 16384
   16:51:25.161857 A > B: P 3:5(2) ack 1 win 16384
   16:51:26.161836 A > B: P 3:5(2) ack 1 win 16384
   16:51:27.161814 A > B: P 3:5(2) ack 1 win 16384
   16:51:28.161791 A > B: P 3:5(2) ack 1 win 16384
   16:51:29.161769 A > B: P 3:5(2) ack 1 win 16384
   16:51:30.161750 A > B: P 3:5(2) ack 1 win 16384
   16:51:31.161727 A > B: P 3:5(2) ack 1 win 16384

   16:51:32.161701 A > B: R 5:5(0) ack 1 win 16384

      The initial three packets are the SYN exchange for connection
      setup, then a single data packet, to verify that data can be
      transferred.  Then the connection to the destination host was
      disconnected, and more data sent.  Retransmissions occur every
      second for 12 seconds, and then the connection is terminated with
      a RST.  This is problematic because a 12 second pause in
      connectivity could result in the termination of a connection.

   Trace file demonstrating correct behavior
      Again, a tcpdump taken from a third host:

   16:59:05.398301 A > B: S 2503324757:2503324757(0) win 16384
   16:59:05.399673 B > A: S 2492674648:2492674648(0)
                           ack 2503324758 win 16384
   16:59:05.399866 A > B: . ack 1 win 17520
   16:59:06.538107 A > B: P 1:3(2) ack 1 win 17520
   16:59:06.540977 B > A: . ack 3 win 17518  (DF)

   <receiving host disconnected>

   16:59:13.121542 A > B: P 3:5(2) ack 1 win 17520
   16:59:14.010928 A > B: P 3:5(2) ack 1 win 17520
   16:59:16.010979 A > B: P 3:5(2) ack 1 win 17520
   16:59:20.011229 A > B: P 3:5(2) ack 1 win 17520
   16:59:28.011896 A > B: P 3:5(2) ack 1 win 17520
   16:59:44.013200 A > B: P 3:5(2) ack 1 win 17520
   17:00:16.015766 A > B: P 3:5(2) ack 1 win 17520
   17:01:20.021308 A > B: P 3:5(2) ack 1 win 17520
   17:02:24.027752 A > B: P 3:5(2) ack 1 win 17520
   17:03:28.034569 A > B: P 3:5(2) ack 1 win 17520
   17:04:32.041567 A > B: P 3:5(2) ack 1 win 17520
   17:05:36.048264 A > B: P 3:5(2) ack 1 win 17520
   17:06:40.054900 A > B: P 3:5(2) ack 1 win 17520

   17:07:44.061306 A > B: R 5:5(0) ack 1 win 17520

      In this trace, when the retransmission timer expires, 12
      retransmissions are sent at exponentially-increasing intervals,
      until the interval value reaches 64 seconds, at which time the
      interval stops growing.  64 seconds after the last retransmission,
      a final RST segment is sent indicating that the connection has
      been closed.  This implementation waits about 9 minutes before
      timing out the connection, while the first implementation shown
      allows only 12 seconds.

   References
      None known.

   How to detect
      A simple transfer can be easily interrupted by disconnecting the
      receiving host from the network.  tcpdump or another appropriate
      tool should show the retransmissions being sent.  Several trials
      in a low-rtt environment may be required to demonstrate the bug.

   How to fix
      For one of the implementations studied, this problem seemed to be
      the result of an error introduced with the addition of the
      Brakmo-Peterson RTO algorithm [Brakmo95], which can return a value
      of zero where the older Jacobson algorithm always returns a
      positive value.  Brakmo and Peterson specified an additional step
      of min(rtt + 2, RTO) to avoid problems with this.  Unfortunately,
      in the implementation this step was omitted when calculating the
      exponential backoff for the RTO.  This results in an RTO of 0
      seconds being multiplied by the backoff, yielding again zero, and
      then being subjected to a later MAX operation that increases it to
      1 second, regardless of the backoff factor.

      A similar TCP persist failure has the same cause.

2.11.

   Name of Problem
      Insufficient interval between keepalives

   Classification
      Reliability

   Description
      Keep-alive is a mechanism for checking whether an idle connection
      is still alive.  According to RFC 1122, keep-alive may be included
      in an implementation.  If it is included, the interval between
      keep-alive packets MUST be configurable, and MUST default to no
      less than two hours.
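
      In BSD-derived stacks, for example, this default appears as a
      compile-time constant (shown with the 4.4BSD name):

          #define TCPTV_KEEP_IDLE  (120*60*PR_SLOWHZ)  /* 2 hours */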

   Significance
      In congested networks, can lead to unwarranted termination of
      connections.

   Implications
      According to RFC 1122, keep-alive is not required of
      implementations because it could: (1) cause perfectly good
      connections to break during transient Internet failures; (2)
      consume unnecessary bandwidth ("if no one is using the connection,
      who cares if it is still good?"); and (3) cost money for an
      Internet path that charges for packets.  Regarding this last
      point, we note that in addition the presence of dial-on-demand
      links in the route can greatly magnify the cost penalty of excess
      keepalives, potentially forcing a full-time connection on a link
      that would otherwise only be connected a few minutes a day.

      If keepalive is provided the RFC states that the required inter-
      keepalive distance MUST default to no less than two hours.  If it
      does not, the probability of connections breaking increases, the
      bandwidth used due to keepalives increases, and cost increases
      over paths which charge per packet.

   Relevant RFCs
      RFC 1122 specifies that the keep-alive mechanism may be provided.
      It also specifies the two hour minimum for the default interval
      between keepalive probes.

   Trace file demonstrating it
      Made using the Orchestra tool at the peer of the machine using
      keep-alive.  Machine A was configured to use default settings for
      the keepalive timer.

   11:36:32.910000 A > B: 3288354305:0      win 28672 datasz 4 SYN
   11:36:32.930000 B > A: 896001:3288354306 win 4096  datasz 4 SYN ACK
   11:36:32.950000 A > B: 3288354306:896002 win 28672 datasz 0 ACK

   11:50:01.190000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   11:50:01.210000 B > A: 896002:3288354306 win 4096  datasz 0 ACK

   12:03:29.410000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:03:29.430000 B > A: 896002:3288354306 win 4096  datasz 0 ACK

   12:16:57.630000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:16:57.650000 B > A: 896002:3288354306 win 4096  datasz 0 ACK

   12:30:25.850000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:30:25.870000 B > A: 896002:3288354306 win 4096  datasz 0 ACK

   12:43:54.070000 A > B: 3288354305:896002 win 28672 datasz 0 ACK
   12:43:54.090000 B > A: 896002:3288354306 win 4096  datasz 0 ACK

      The initial three packets are the SYN exchange for connection
      setup.  About 13 minutes later, the keepalive timer fires because
      the connection is idle.  The keepalive is acknowledged, and the
      timer fires again in about 13 more minutes.  This behavior
      continues indefinitely until the connection is closed, and is a
      violation of the specification.

   Trace file demonstrating correct behavior
      Made using the Orchestra tool at the peer of the machine using
      keep-alive.  Machine A was configured to use default settings for
      the keepalive timer.

   17:37:20.500000 A > B: 34155521:0       win 4096 datasz 4 SYN
   17:37:20.520000 B > A: 6272001:34155522 win 4096 datasz 4 SYN ACK
   17:37:20.540000 A > B: 34155522:6272002 win 4096 datasz 0 ACK

   19:37:25.430000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   19:37:25.450000 B > A: 6272002:34155522 win 4096 datasz 0 ACK

   21:37:30.560000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   21:37:30.570000 B > A: 6272002:34155522 win 4096 datasz 0 ACK

   23:37:35.580000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   23:37:35.600000 B > A: 6272002:34155522 win 4096 datasz 0 ACK

   01:37:40.620000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   01:37:40.640000 B > A: 6272002:34155522 win 4096 datasz 0 ACK

   03:37:45.590000 A > B: 34155521:6272002 win 4096 datasz 0 ACK
   03:37:45.610000 B > A: 6272002:34155522 win 4096 datasz 0 ACK

      The initial three packets are the SYN exchange for connection
      setup.  Just over two hours later, the keepalive timer fires
      because the connection is idle.  The keepalive is acknowledged,
      and the timer fires again just over two hours later.  This
      behavior continues indefinitely until the connection is closed.

   References
      This problem is documented in [Dawson97].

   How to detect
      For implementations manifesting this problem, it shows up on a
      packet trace.  If the connection is left idle, the keepalive
      probes will arrive closer together than the two hour minimum.

2.12.

   Name of Problem
      Window probe deadlock

   Classification
      Reliability

   Description
      When an application reads a single byte from a full window, the
      window should not be updated, in order to avoid Silly Window
      Syndrome (SWS; see [RFC813]).  If the remote peer uses a single
      byte of data to probe the window, that byte can be accepted into
      the buffer.  In some implementations, at this point a negative
      argument to a signed comparison causes all further new data to be
      considered outside the window; consequently, it is discarded
      (after sending an ACK to resynchronize).  These discards include
      the ACKs for the data packets sent by the local TCP, so the TCP
      will consider the data unacknowledged.

      Consequently, the application may be unable to complete sending
      new data to the remote peer, because it has exhausted the transmit
      buffer available to its local TCP, and buffer space is never being
      freed because incoming ACKs that would do so are being discarded.
      If the application does not read any more data, which may happen
      due to its failure to complete such sends, then deadlock results.

   Significance
      It's relatively rare for applications to use TCP in a manner that
      can exercise this problem.  Most applications only transmit bulk
      data if they know the other end is prepared to receive the data.
      However, if a client fails to consume data, putting the server in
      persist mode, and then consumes a small amount of data, it can
      mistakenly compute a negative window.  At this point the client
      will discard all further packets from the server, including ACKs
      of the client's own data, since they are not inside the
      (impossibly-sized) window.  If subsequently the client consumes
      enough data to then send a window update to the server, the
      situation will be rectified.  That is, this situation can only
      happen if the client consumes 1 < N < MSS bytes, so as not to
      cause a window update, and then starts its own transmission
      towards the server of more than a window's worth of data.

   Implications
      TCP connections will hang and eventually time out.

   Relevant RFCs
      RFC 793 describes zero window probing.  RFC 813 describes Silly
      Window Syndrome.

   Trace file demonstrating it
      Trace made from a version of tcpdump modified to print out the
      sequence number attached to an ACK even if it's dataless.  An
      unmodified tcpdump would not print seq:seq(0); however, for this
      bug, the sequence number in the ACK is important for unambiguously
      determining how the TCP is behaving.

   [ Normal connection startup and data transmission from B to A.
     Options, including MSS of 16344 in both directions, omitted
     for clarity. ]
   16:07:32.327616 A > B: S 65360807:65360807(0) win 8192
   16:07:32.327304 B > A: S 65488807:65488807(0) ack 65360808 win 57344
   16:07:32.327425 A > B: . 1:1(0) ack 1 win 57344
   16:07:32.345732 B > A: P 1:2049(2048) ack 1 win 57344
   16:07:32.347013 B > A: P 2049:16385(14336) ack 1 win 57344
   16:07:32.347550 B > A: P 16385:30721(14336) ack 1 win 57344
   16:07:32.348683 B > A: P 30721:45057(14336) ack 1 win 57344
   16:07:32.467286 A > B: . 1:1(0) ack 45057 win 12288
   16:07:32.467854 B > A: P 45057:57345(12288) ack 1 win 57344

   [ B fills up A's offered window ]
   16:07:32.667276 A > B: . 1:1(0) ack 57345 win 0

   [ B probes A's window with a single byte ]
   16:07:37.467438 B > A: . 57345:57346(1) ack 1 win 57344

   [ A resynchronizes without accepting the byte ]
   16:07:37.467678 A > B: . 1:1(0) ack 57345 win 0

   [ B probes A's window again ]
   16:07:45.467438 B > A: . 57345:57346(1) ack 1 win 57344

   [ A resynchronizes and accepts the byte (per the ack field) ]
   16:07:45.667250 A > B: . 1:1(0) ack 57346 win 0

   [ The application on A has started generating data.  The first
     packet A sends is small due to a memory allocation bug. ]
   16:07:51.358459 A > B: P 1:2049(2048) ack 57346 win 0

   [ B acks A's first packet ]
   16:07:51.467239 B > A: . 57346:57346(0) ack 2049 win 57344

   [ This looks as though A accepted B's ACK and is sending
     another packet in response to it.  In fact, A is trying
     to resynchronize with B, and happens to have data to send
     and can send it because the first small packet didn't use
     up cwnd. ]
   16:07:51.467698 A > B: . 2049:14337(12288) ack 57346 win 0

   [ B acks all of the data that A has sent ]
   16:07:51.667283 B > A: . 57346:57346(0) ack 14337 win 57344

   [ A tries to resynchronize.  Notice that by the packets
     seen on the network, A and B *are* in fact synchronized;
     A only thinks that they aren't. ]
   16:07:51.667477 A > B: . 14337:14337(0) ack 57346 win 0

   [ A's retransmit timer fires, and B acks all of the data.
     A once again tries to resynchronize. ]
   16:07:52.467682 A > B: . 1:14337(14336) ack 57346 win 0
   16:07:52.468166 B > A: . 57346:57346(0) ack 14337 win 57344
   16:07:52.468248 A > B: . 14337:14337(0) ack 57346 win 0

   [ A's retransmit timer fires again, and B acks all of the data.
     A once again tries to resynchronize. ]
   16:07:55.467684 A > B: . 1:14337(14336) ack 57346 win 0
   16:07:55.468172 B > A: . 57346:57346(0) ack 14337 win 57344
   16:07:55.468254 A > B: . 14337:14337(0) ack 57346 win 0

   Trace file demonstrating correct behavior
      Made between the same two hosts after applying the bug fix
      mentioned below (and using the same modified tcpdump).

   [ Connection starts up with data transmission from B to A.
     Note that due to a separate bug (the fact that A and B
     are communicating over a loopback driver), B erroneously
     skips slow start. ]
   17:38:09.510854 A > B: S 3110066585:3110066585(0) win 16384
   17:38:09.510926 B > A: S 3110174850:3110174850(0)
                            ack 3110066586 win 57344
   17:38:09.510953 A > B: . 1:1(0) ack 1 win 57344
   17:38:09.512956 B > A: P 1:2049(2048) ack 1 win 57344
   17:38:09.513222 B > A: P 2049:16385(14336) ack 1 win 57344
   17:38:09.513428 B > A: P 16385:30721(14336) ack 1 win 57344
   17:38:09.513638 B > A: P 30721:45057(14336) ack 1 win 57344
   17:38:09.519531 A > B: . 1:1(0) ack 45057 win 12288
   17:38:09.519638 B > A: P 45057:57345(12288) ack 1 win 57344

   [ B fills up A's offered window ]
   17:38:09.719526 A > B: . 1:1(0) ack 57345 win 0

   [ B probes A's window with a single byte.  A resynchronizes
     without accepting the byte ]
   17:38:14.499661 B > A: . 57345:57346(1) ack 1 win 57344
   17:38:14.499724 A > B: . 1:1(0) ack 57345 win 0

   [ B probes A's window again.  A resynchronizes and accepts
     the byte, as indicated by the ack field ]
   17:38:19.499764 B > A: . 57345:57346(1) ack 1 win 57344
   17:38:19.519731 A > B: . 1:1(0) ack 57346 win 0

   [ B probes A's window with a single byte.  A resynchronizes
     without accepting the byte ]
   17:38:24.499865 B > A: . 57346:57347(1) ack 1 win 57344
   17:38:24.499934 A > B: . 1:1(0) ack 57346 win 0

   [ The application on A has started generating data.
     B acks A's data and A accepts the ACKs and the
     data transfer continues ]
   17:38:28.530265 A > B: P 1:2049(2048) ack 57346 win 0
   17:38:28.719914 B > A: . 57346:57346(0) ack 2049 win 57344

   17:38:28.720023 A > B: . 2049:16385(14336) ack 57346 win 0
   17:38:28.720089 A > B: . 16385:30721(14336) ack 57346 win 0
   17:38:28.720370 B > A: . 57346:57346(0) ack 30721 win 57344

   17:38:28.720462 A > B: . 30721:45057(14336) ack 57346 win 0
   17:38:28.720526 A > B: P 45057:59393(14336) ack 57346 win 0
   17:38:28.720824 A > B: P 59393:73729(14336) ack 57346 win 0
   17:38:28.721124 B > A: . 57346:57346(0) ack 73729 win 47104

   17:38:28.721198 A > B: P 73729:88065(14336) ack 57346 win 0
   17:38:28.721379 A > B: P 88065:102401(14336) ack 57346 win 0

   17:38:28.721557 A > B: P 102401:116737(14336) ack 57346 win 0
   17:38:28.721863 B > A: . 57346:57346(0) ack 116737 win 36864

   References
      None known.

   How to detect
      Initiate a connection from a client to a server.  Have the server
      continuously send data until its buffers have been full for long
      enough to exhaust the window.  Next, have the client read 1 byte
      and then delay for long enough that the server TCP sends a window
      probe.  Now have the client start sending data.  At this point, if
      it ignores the server's ACKs, then the client's TCP suffers from
      the problem.

   How to fix
      In one implementation known to exhibit the problem (derived from
      4.3-Reno), the problem was introduced when the macro MAX() was
      replaced by the function call max() for computing the amount of
      space in the receive window:

          tp->rcv_wnd = max(win, (int)(tp->rcv_adv - tp->rcv_nxt));

      When data has been received into a window beyond what has been
      advertised to the other side, rcv_nxt > rcv_adv, making this
      negative.  It's clear from the (int) cast that this is intended,
      but the unsigned max() function converts the negative value to a
      huge unsigned number, making it "larger".  The fix is to change
      max() to imax():

          tp->rcv_wnd = imax(win, (int)(tp->rcv_adv - tp->rcv_nxt));

      4.3-Tahoe and before did not have this bug, since it used the
      macro MAX() for this calculation.
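
      To see why the unsigned max() misbehaves, note that C's usual
      arithmetic conversions turn a negative int operand into a huge
      unsigned value (a hypothetical illustration):

          unsigned int max (unsigned int a, unsigned int b)
                          { return a > b ? a : b; }
          int          imax(int a, int b)
                          { return a > b ? a : b; }

          /* With win = 1024 and rcv_adv - rcv_nxt = -12 (32-bit ints):
           *   max(1024, -12)  converts -12 to 4294967284 and returns
           *                   4294967284, wrongly enlarging rcv_wnd;
           *   imax(1024, -12) returns 1024, as intended. */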

2.13.

   Name of Problem
      Stretch ACK violation

   Classification
      Congestion Control/Performance

   Description
      To improve efficiency (both computer and network) a data receiver
      may refrain from sending an ACK for each incoming segment,
      according to [RFC1122].  However, an ACK should not be delayed an
      inordinate amount of time.  Specifically, ACKs SHOULD be sent for
      every second full-sized segment that arrives.  If a second full-
      sized segment does not arrive within a given timeout (of no more
      than 0.5 seconds), an ACK should be transmitted, according to
      [RFC1122].  A TCP receiver which does not generate an ACK for
      every second full-sized segment exhibits a "Stretch ACK
      Violation".

   Significance
      TCP receivers exhibiting this behavior will cause TCP senders to
      generate burstier traffic, which can degrade performance in
      congested environments.  In addition, generating fewer ACKs
      increases the amount of time needed by the slow start algorithm to
      open the congestion window to an appropriate point, which
      diminishes performance in environments with large bandwidth-delay
      products.  Finally, generating fewer ACKs may cause needless
      retransmission timeouts in lossy environments, as it increases the
      possibility that an entire window of ACKs is lost, forcing a
      retransmission timeout.

   Implications
      When not in loss recovery, every ACK received by a TCP sender
      triggers the transmission of new data segments.  The burst size is
      determined by the number of previously unacknowledged segments
      each ACK covers.  Therefore, a TCP receiver ack'ing more than 2
      segments at a time causes the sending TCP to generate a larger
      burst of traffic upon receipt of the ACK.  This large burst of
      traffic can overwhelm an intervening gateway, leading to higher
      drop rates for both the connection and other connections passing
      through the congested gateway.

      In addition, the TCP slow start algorithm increases the congestion
      window by 1 segment for each ACK received.  Therefore, increasing
      the ACK interval (thus decreasing the rate at which ACKs are
      transmitted) increases the amount of time it takes slow start to
      increase the congestion window to an appropriate operating point,
      and the connection consequently suffers from reduced performance.
      This is especially true for connections using large windows.
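
      As a rough worked example: with an ACK for every second segment,
      each round trip's worth of ACKs grows cwnd by a factor of about
      1.5 during slow start; with an ACK for every third segment the
      factor drops to about 4/3.  Growing cwnd from 1 to 100 segments
      then takes roughly 16 round trips instead of 11.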

   Relevant RFCs
      RFC 1122 outlines delayed ACKs as a recommended mechanism.

   Trace file demonstrating it
      Trace file taken using tcpdump at host B, the data receiver (and
      ACK originator).  The advertised window (which never changed) and
      timestamp options have been omitted for clarity, except for the
      first packet sent by A:

   12:09:24.820187 A.1174 > B.3999: . 2049:3497(1448) ack 1
       win 33580 <nop,nop,timestamp 2249877 2249914> [tos 0x8]
   12:09:24.824147 A.1174 > B.3999: . 3497:4945(1448) ack 1
   12:09:24.832034 A.1174 > B.3999: . 4945:6393(1448) ack 1
   12:09:24.832222 B.3999 > A.1174: . ack 6393
   12:09:24.934837 A.1174 > B.3999: . 6393:7841(1448) ack 1
   12:09:24.942721 A.1174 > B.3999: . 7841:9289(1448) ack 1
   12:09:24.950605 A.1174 > B.3999: . 9289:10737(1448) ack 1
   12:09:24.950797 B.3999 > A.1174: . ack 10737
   12:09:24.958488 A.1174 > B.3999: . 10737:12185(1448) ack 1
   12:09:25.052330 A.1174 > B.3999: . 12185:13633(1448) ack 1
   12:09:25.060216 A.1174 > B.3999: . 13633:15081(1448) ack 1
   12:09:25.060405 B.3999 > A.1174: . ack 15081

      This portion of the trace clearly shows that the receiver (host B)
      sends an ACK for every third full sized packet received.  Further
      investigation of this implementation found that the cause of the
      increased ACK interval was the TCP options being used.  The
      implementation sent an ACK after it was holding 2*MSS worth of
      unacknowledged data.  In the above case, the MSS is 1460 bytes so
      the receiver transmits an ACK after it is holding at least 2920
      bytes of unacknowledged data.  However, the length of the TCP
      options being used [RFC1323] took 12 bytes away from the data
      portion of each packet.  This produced packets containing 1448
      bytes of data.  But the additional bytes used by the options in
      the header were not taken into account when determining when to
      trigger an ACK.  Therefore, it took 3 data segments before the
      data receiver was holding enough unacknowledged data (>= 2*MSS, or
      2920 bytes in the above example) to transmit an ACK.
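
      A sketch of the trigger logic just described (hypothetical
      variable names); the fix is to count against the actual
      per-segment payload rather than the full MSS:

          /* Buggy: options shrink each segment's payload to
           * mss - optlen, so 2*mss of unacknowledged data takes
           * three segments to accumulate. */
          if (unacked_bytes >= 2 * mss)
                  tcp_send_ack(tp);

          /* Fixed: trigger after two actual segments' worth. */
          if (unacked_bytes >= 2 * (mss - optlen))
                  tcp_send_ack(tp);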

   Trace file demonstrating correct behavior
      Trace file taken using tcpdump at host B, the data receiver (and
      ACK originator), again with window and timestamp information
      omitted except for the first packet:

   12:06:53.627320 A.1172 > B.3999: . 1449:2897(1448) ack 1
       win 33580 <nop,nop,timestamp 2249575 2249612> [tos 0x8]
   12:06:53.634773 A.1172 > B.3999: . 2897:4345(1448) ack 1
   12:06:53.634961 B.3999 > A.1172: . ack 4345
   12:06:53.737326 A.1172 > B.3999: . 4345:5793(1448) ack 1
   12:06:53.744401 A.1172 > B.3999: . 5793:7241(1448) ack 1
   12:06:53.744592 B.3999 > A.1172: . ack 7241
   12:06:53.752287 A.1172 > B.3999: . 7241:8689(1448) ack 1
   12:06:53.847332 A.1172 > B.3999: . 8689:10137(1448) ack 1
   12:06:53.847525 B.3999 > A.1172: . ack 10137

      This trace shows the TCP receiver (host B) ack'ing every second
      full-sized packet, according to [RFC1122].  This is the same
      implementation shown above, with slight modifications that allow
      the receiver to take the length of the options into account when
      deciding when to transmit an ACK.

   References
      This problem is documented in [Allman97] and [Paxson97].

   How to detect
      Stretch ACK violations show up immediately in receiver-side packet
      traces of bulk transfers, as shown above.  However, packet traces
      made on the sender side of the TCP connection may lead to
      ambiguities when diagnosing this problem due to the possibility of
      lost ACKs.

2.14.

   Name of Problem
      Retransmission sends multiple packets

   Classification
      Congestion control

   Description
      When a TCP retransmits a segment due to a timeout expiration or
      beginning a fast retransmission sequence, it should only transmit
      a single segment.  A TCP that transmits more than one segment
      exhibits "Retransmission Sends Multiple Packets".

      Instances of this problem have been known to occur due to
      miscomputations involving the use of TCP options.  TCP options
      increase the TCP header beyond its usual size of 20 bytes.  The
      total size of the header must be taken into account when
      retransmitting a packet.  If a TCP sender does not account for the
      length of the TCP options when determining how much data to
      retransmit, it will send too much data to fit into a single
      packet.  In this case, the correct retransmission will be followed
      by a short segment (tinygram) containing data that may not need to
      be retransmitted.

      A specific case is a TCP using the RFC 1323 timestamp option,
      which adds 12 bytes to the standard 20-byte TCP header.  On
      retransmission of a packet, the 12 byte option is incorrectly
      interpreted as part of the data portion of the segment.  A
      standard TCP header and a new 12-byte option are added to the
      data, which yields a transmission of 12 bytes more data than
      contained in the original segment.  This overflow causes a
      second, smaller packet, with 12 data bytes, to be transmitted.
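
      A sketch of the miscomputation (hypothetical names): the length
      to retransmit must be the original payload, not the original
      segment's wire length including options:

          /* Buggy: the 12 option bytes are counted as payload, so
           * 12 bytes too much data is queued and a 12-byte tinygram
           * follows the retransmission. */
          rexmt_len = orig_payload_len + optlen;

          /* Fixed: retransmit exactly the original payload. */
          rexmt_len = orig_payload_len;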

   Significance
      This problem is somewhat serious for congested environments
      because the TCP implementation injects more packets into the
      network than is appropriate.  However, since a tinygram is only
      sent in response to a fast retransmit or a timeout, it does not
      affect the sustained sending rate.

   Implications
      A TCP exhibiting this behavior is stressing the network with more
      traffic than appropriate, and stressing routers by increasing the
      number of packets they must process.  The redundant tinygram will
      also elicit a duplicate ACK from the receiver, resulting in yet
      another unnecessary transmission.

   Relevant RFCs
      RFC 1122 requires use of slow start after loss; RFC 2001
      explicates slow start; RFC 1323 describes the timestamp option
      that has been observed to lead to some implementations exhibiting
      this problem.

   Trace file demonstrating it
      Made using tcpdump recording at a machine on the same subnet as
      Host A.  Host A is the sender and Host B is the receiver.  The
      advertised window and timestamp options have been omitted for
      clarity, except for the first segment sent by host A.  In
      addition, portions of the trace file not pertaining to the packet
      in question have been removed (missing packets are denoted by
      "[...]" in the trace).

   11:55:22.701668 A > B: . 7361:7821(460) ack 1
       win 49324 <nop,nop,timestamp 3485348 3485113>
   11:55:22.702109 A > B: . 7821:8281(460) ack 1
   [...]

   11:55:23.112405 B > A: . ack 7821
   11:55:23.113069 A > B: . 12421:12881(460) ack 1
   11:55:23.113511 A > B: . 12881:13341(460) ack 1
   11:55:23.333077 B > A: . ack 7821
   11:55:23.336860 B > A: . ack 7821
   11:55:23.340638 B > A: . ack 7821
   11:55:23.341290 A > B: . 7821:8281(460) ack 1
   11:55:23.341317 A > B: . 8281:8293(12) ack 1
   11:55:23.498242 B > A: . ack 7821
   11:55:23.506850 B > A: . ack 7821
   11:55:23.510630 B > A: . ack 7821

   [...]

   11:55:23.746649 B > A: . ack 10581

      The second line of the above trace shows the original transmission
      of a segment which is later dropped.  After 3 duplicate ACKs, line
      9 of the trace shows the dropped packet (7821:8281), with a 460-
      byte payload, being retransmitted.  Immediately following this
      retransmission, a packet with a 12-byte payload is unnecessarily
      sent.

   Trace file demonstrating correct behavior
      The trace file would be identical to the one above, with a single
      line:

      11:55:23.341317 A > B: . 8281:8293(12) ack 1

      omitted.

   References
      [Brakmo95]

   How to detect
      This problem can be detected by examining a packet trace of the
      TCP connections of a machine using TCP options, during which a
      packet is retransmitted.

2.15.

   Name of Problem
      Failure to send FIN notification promptly

   Classification
      Performance

   Description
      When an application closes a connection, the corresponding TCP
      should send the FIN notification promptly to its peer (unless
      prevented by the congestion window).  If a TCP implementation
      delays in sending the FIN notification, for example due to waiting
      until unacknowledged data has been acknowledged, then it is said
      to exhibit "Failure to send FIN notification promptly".

      Also, while not strictly required, FIN segments should include the
      PSH flag to ensure expedited delivery of any pending data at the
      receiver.
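
      The distinction shows up in when the sending TCP decides the FIN
      may go out (a sketch with illustrative names):

          /* Buggy: hold the FIN until all data is acknowledged. */
          if (tp->snd_una == tp->snd_max)
                  tcp_send_fin(tp);

          /* Correct: emit the FIN (with PSH set) as soon as all
           * queued data has been transmitted, subject only to the
           * congestion window, without waiting for ACKs. */
          if (tp->snd_nxt == tp->snd_max)
                  tcp_send_fin(tp);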

   Significance
      The greatest impact occurs for short-lived connections, since for
      these the additional time required to close the connection
      introduces the greatest relative delay.

      The additional time can be significant in the common case of the
      sender waiting for an ACK that is delayed by the receiver.

   Implications
      Can diminish total throughput as seen at the application layer,
      because connection termination takes longer to complete.

   Relevant RFCs
      RFC 793 indicates that a receiver should treat an incoming FIN
      flag as implying the push function.

   Trace file demonstrating it
      Made using tcpdump (no losses reported by the packet filter).

   10:04:38.68 A > B: S 1031850376:1031850376(0) win 4096
                   <mss 1460,wscale 0,eol> (DF)
   10:04:38.71 B > A: S 596916473:596916473(0) ack 1031850377
                   win 8760 <mss 1460> (DF)
   10:04:38.73 A > B: . ack 1 win 4096 (DF)
   10:04:41.98 A > B: P 1:4(3) ack 1 win 4096 (DF)
   10:04:42.15 B > A: . ack 4 win 8757 (DF)
   10:04:42.23 A > B: P 4:7(3) ack 1 win 4096 (DF)
   10:04:42.25 B > A: P 1:11(10) ack 7 win 8754 (DF)
   10:04:42.32 A > B: . ack 11 win 4096 (DF)
   10:04:42.33 B > A: P 11:51(40) ack 7 win 8754 (DF)
   10:04:42.51 A > B: . ack 51 win 4096 (DF)
   10:04:42.53 B > A: F 51:51(0) ack 7 win 8754 (DF)
   10:04:42.56 A > B: FP 7:7(0) ack 52 win 4096 (DF)
   10:04:42.58 B > A: . ack 8 win 8754 (DF)

      Machine B in the trace above does not send out a FIN notification
      promptly if there is any data outstanding.  It instead waits for
      all unacknowledged data to be acknowledged before sending the FIN
      segment.  The connection was closed at 10:04:42.33 after
      requesting 40 bytes to be sent.  However, the FIN notification
      isn't sent until 10:04:42.53, after the (delayed) acknowledgement
      of the 40 bytes of data.

   Trace file demonstrating correct behavior
      Made using tcpdump (no losses reported by the packet filter).

   10:27:53.85 C > D: S 419744533:419744533(0) win 4096
                   <mss 1460,wscale 0,eol> (DF)
   10:27:53.92 D > C: S 10082297:10082297(0) ack 419744534
                   win 8760 <mss 1460> (DF)
   10:27:53.95 C > D: . ack 1 win 4096 (DF)
   10:27:54.42 C > D: P 1:4(3) ack 1 win 4096 (DF)
   10:27:54.62 D > C: . ack 4 win 8757 (DF)
   10:27:54.76 C > D: P 4:7(3) ack 1 win 4096 (DF)
   10:27:54.89 D > C: P 1:11(10) ack 7 win 8754 (DF)
   10:27:54.90 D > C: FP 11:51(40) ack 7 win 8754 (DF)
   10:27:54.92 C > D: . ack 52 win 4096 (DF)
   10:27:55.01 C > D: FP 7:7(0) ack 52 win 4096 (DF)
   10:27:55.09 D > C: . ack 8 win 8754 (DF)

      Here, Machine D sends a FIN with 40 bytes of data even before the
      original 10 octets have been acknowledged. This is correct
      behavior as it provides for the highest performance.

   References
      This problem is documented in [Dawson97].

   How to detect
      For implementations manifesting this problem, it shows up on a
      packet trace.

2.16.

   Name of Problem
      Failure to send a RST after Half Duplex Close

   Classification
      Resource management

   Description
      RFC 1122 4.2.2.13 states that a TCP SHOULD send a RST if data is
      received after "half duplex close", i.e. if it cannot be delivered
      to the application.  A TCP that fails to do so is said to exhibit
      "Failure to send a RST after Half Duplex Close".

   Significance
      Potentially serious for TCP endpoints that manage large numbers of
      connections, due to exhaustion of memory and/or process slots
      available for managing connection state.

   Implications
      Failure to send the RST can lead to permanently hung TCP
      connections.  This problem has been demonstrated when HTTP clients
      abort connections, common when users move on to a new page before
      the current page has finished downloading.  The HTTP client closes
      by transmitting a FIN while the server is transmitting images,
      text, etc.  The server TCP receives the FIN,  but its application
      does not close the connection until all data has been queued for
      transmission.  Since the server will not transmit a FIN until all
      the preceding data has been transmitted, deadlock results if the
      client TCP does not consume the pending data or tear down the
      connection: the window decreases to zero, since the client cannot
      pass the data to the application, and the server sends probe
      segments.  The client acknowledges the probe segments with a zero
      window.  As mandated in RFC 1122 4.2.2.17, the probe segments are
      transmitted forever.  Server connection state remains in
      CLOSE_WAIT, and eventually server processes are exhausted.

      Note that there are two bugs.  First, probe segments should be
      ignored if the window can never subsequently increase.  Second, a
      RST should be sent when data is received after half duplex close.
      Fixing the first bug, but not the second, results in the probe
      segments eventually timing out the connection, but the server
      remains in CLOSE_WAIT for a significant and unnecessary period.

   Relevant RFCs
      RFC 1122 sections 4.2.2.13 and 4.2.2.17.

   Trace file demonstrating it
      Made using an unknown network analyzer.  No drop information
      available.

   client.1391 > server.8080: S 0:1(0) ack: 0 win: 2000 <mss: 5b4>
   server.8080 > client.1391: SA 8c01:8c02(0) ack: 1 win: 8000 <mss:100>
   client.1391 > server.8080: PA
   client.1391 > server.8080: PA 1:1c2(1c1) ack: 8c02 win: 2000
   server.8080 > client.1391: [DF] PA 8c02:8cde(dc) ack: 1c2 win: 8000
   server.8080 > client.1391: [DF] A 8cde:9292(5b4) ack: 1c2 win: 8000
   server.8080 > client.1391: [DF] A 9292:9846(5b4) ack: 1c2 win: 8000
   server.8080 > client.1391: [DF] A 9846:9dfa(5b4) ack: 1c2 win: 8000
   client.1391 > server.8080: PA
   server.8080 > client.1391: [DF] A 9dfa:a3ae(5b4) ack: 1c2 win: 8000
   server.8080 > client.1391: [DF] A a3ae:a962(5b4) ack: 1c2 win: 8000
   server.8080 > client.1391: [DF] A a962:af16(5b4) ack: 1c2 win: 8000
   server.8080 > client.1391: [DF] A af16:b4ca(5b4) ack: 1c2 win: 8000
   client.1391 > server.8080: PA
   server.8080 > client.1391: [DF] A b4ca:ba7e(5b4) ack: 1c2 win: 8000
   server.8080 > client.1391: [DF] A b4ca:ba7e(5b4) ack: 1c2 win: 8000
   client.1391 > server.8080: PA
   server.8080 > client.1391: [DF] A ba7e:bdfa(37c) ack: 1c2 win: 8000
   client.1391 > server.8080: PA
   server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c2 win: 8000
   client.1391 > server.8080: PA

   [ HTTP client aborts and enters FIN_WAIT_1 ]

   client.1391 > server.8080: FPA

   [ server ACKs the FIN and enters CLOSE_WAIT ]

   server.8080 > client.1391: [DF] A

   [ client enters FIN_WAIT_2 ]

   server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000

   [ server continues to try to send its data ]

   client.1391 > server.8080: PA < window = 0 >
   server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000
   client.1391 > server.8080: PA < window = 0 >
   server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000
   client.1391 > server.8080: PA < window = 0 >
   server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000
   client.1391 > server.8080: PA < window = 0 >
   server.8080 > client.1391: [DF] A bdfa:bdfb(1) ack: 1c3 win: 8000
   client.1391 > server.8080: PA < window = 0 >

   [ ... repeat ad exhaustium ... ]

   Trace file demonstrating correct behavior
      Made using an unknown network analyzer.  No drop information
      available.

   client > server D=80 S=59500 Syn Seq=337 Len=0 Win=8760
   server > client D=59500 S=80 Syn Ack=338 Seq=80153 Len=0 Win=8760
   client > server D=80 S=59500 Ack=80154 Seq=338 Len=0 Win=8760

   [ ... normal data omitted ... ]

   client > server D=80 S=59500 Ack=114559 Seq=596 Len=0 Win=8760
   server > client D=59500 S=80 Ack=596 Seq=114559 Len=1460 Win=8760

   [ client closes connection ]

   client > server D=80 S=59500 Fin Seq=596 Len=0 Win=8760

   server > client D=59500 S=80 Ack=597 Seq=116019 Len=1460 Win=8760

   [ client sends RST (RFC1122 4.2.2.13) ]

   client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
   server > client D=59500 S=80 Ack=597 Seq=117479 Len=1460 Win=8760
   client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
   server > client D=59500 S=80 Ack=597 Seq=118939 Len=1460 Win=8760
   client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
   server > client D=59500 S=80 Ack=597 Seq=120399 Len=892 Win=8760
   client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0
   server > client D=59500 S=80 Ack=597 Seq=121291 Len=1460 Win=8760
   client > server D=80 S=59500 Rst Seq=597 Len=0 Win=0

      "client" sends a number of RSTs, one in response to each incoming
      packet from "server".  One might wonder why "server" keeps sending
      data packets after it has received a RST from "client"; the
      explanation is that "server" had already transmitted all five of
      the data packets before receiving the first RST from "client", so
      it is too late to avoid transmitting them.

   How to detect
      The problem can be detected by inspecting packet traces of a
      large, interrupted bulk transfer.

2.17.

   Name of Problem
      Failure to RST on close with data pending

   Classification
      Resource management

   Description
      When an application closes a connection in such a way that it can
      no longer read any received data, the TCP SHOULD, per section
      4.2.2.13 of RFC 1122, send a RST if there is any unread received
      data, or if any new data is received. A TCP that fails to do so
      exhibits "Failure to RST on close with data pending".

      Note that, for some TCPs, this situation can be caused by an
      application "crashing" while a peer is sending data.

      We have observed a number of TCPs that exhibit this problem.  The
      problem is less serious if any subsequent data sent to the now-
      closed connection endpoint elicits a RST (see illustration below).

   Significance
      This problem is most significant for endpoints that engage in
      large numbers of connections, as their ability to do so will be
      curtailed as they leak away resources.

   Implications
      Failure to reset the connection can lead to permanently hung
      connections, in which the remote endpoint takes no further action
      to tear down the connection because it is waiting on the local TCP
      to first take some action.  This is particularly the case if the
      local TCP also allows the advertised window to go to zero, and
      fails to tear down the connection when the remote TCP engages in
      "persist" probes (see example below).

   Relevant RFCs
      RFC 1122 section 4.2.2.13.  Also, 4.2.2.17 for the zero-window
      probing discussion below.

   Trace file demonstrating it
      Made using tcpdump.  No drop information available.

   13:11:46.04 A > B: S 458659166:458659166(0) win 4096
                       <mss 1460,wscale 0,eol> (DF)
   13:11:46.04 B > A: S 792320000:792320000(0) ack 458659167
                       win 4096
   13:11:46.04 A > B: . ack 1 win 4096 (DF)
   13:11:55.80 A > B: . 1:513(512) ack 1 win 4096 (DF)
   13:11:55.80 A > B: . 513:1025(512) ack 1 win 4096 (DF)
   13:11:55.83 B > A: . ack 1025 win 3072
   13:11:55.84 A > B: . 1025:1537(512) ack 1 win 4096 (DF)
   13:11:55.84 A > B: . 1537:2049(512) ack 1 win 4096 (DF)
   13:11:55.85 A > B: . 2049:2561(512) ack 1 win 4096 (DF)
   13:11:56.03 B > A: . ack 2561 win 1536
   13:11:56.05 A > B: . 2561:3073(512) ack 1 win 4096 (DF)
   13:11:56.06 A > B: . 3073:3585(512) ack 1 win 4096 (DF)
   13:11:56.06 A > B: . 3585:4097(512) ack 1 win 4096 (DF)
   13:11:56.23 B > A: . ack 4097 win 0
   13:11:58.16 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
   13:11:58.16 B > A: . ack 4097 win 0
   13:12:00.16 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
   13:12:00.16 B > A: . ack 4097 win 0
   13:12:02.16 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
   13:12:02.16 B > A: . ack 4097 win 0
   13:12:05.37 A > B: . 4096:4097(1) ack 1 win 4096 (DF)
   13:12:05.37 B > A: . ack 4097 win 0
   13:12:06.36 B > A: F 1:1(0) ack 4097 win 0
   13:12:06.37 A > B: . ack 2 win 4096 (DF)
   13:12:11.78 A > B: . 4096:4097(1) ack 2 win 4096 (DF)

   13:12:11.78 B > A: . ack 4097 win 0
   13:12:24.59 A > B: . 4096:4097(1) ack 2 win 4096 (DF)
   13:12:24.60 B > A: . ack 4097 win 0
   13:12:50.22 A > B: . 4096:4097(1) ack 2 win 4096 (DF)
   13:12:50.22 B > A: . ack 4097 win 0

      Machine B in the trace above does not drop received data when the
      socket is "closed" by the application (in this case, the
      application process was terminated). This occurred at
      approximately 13:12:06.36 and resulted in the FIN being sent in
      response to the close. However, because there is no longer an
      application to deliver the data to, the TCP should have instead
      sent a RST.

      Note: Machine A's zero-window probing is also broken.  It is
      resending old data (octet 4096, which has already been
      acknowledged), rather than new data (octet 4097, as Machine E
      correctly sends in the final trace below).  Section 3.7 in RFC
      793 and Section 4.2.2.17 in RFC 1122 discuss zero-window probing.

   Trace file demonstrating better behavior
      Made using tcpdump.  No drop information available.

      The behavior shown here is better, though still not fully
      correct, per the discussion below.  We show it because it has
      been observed for a number of different TCP implementations.

   13:48:29.24 C > D: S 73445554:73445554(0) win 4096
                       <mss 1460,wscale 0,eol> (DF)
   13:48:29.24 D > C: S 36050296:36050296(0) ack 73445555
                       win 4096 <mss 1460,wscale 0,eol> (DF)
   13:48:29.25 C > D: . ack 1 win 4096 (DF)
   13:48:30.78 C > D: . 1:1461(1460) ack 1 win 4096 (DF)
   13:48:30.79 C > D: . 1461:2921(1460) ack 1 win 4096 (DF)
   13:48:30.80 D > C: . ack 2921 win 1176 (DF)
   13:48:32.75 C > D: . 2921:4097(1176) ack 1 win 4096 (DF)
   13:48:32.82 D > C: . ack 4097 win 0 (DF)
   13:48:34.76 C > D: . 4096:4097(1) ack 1 win 4096 (DF)
   13:48:34.84 D > C: . ack 4097 win 0 (DF)
   13:48:36.34 D > C: FP 1:1(0) ack 4097 win 4096 (DF)
   13:48:36.34 C > D: . 4097:5557(1460) ack 2 win 4096 (DF)
   13:48:36.34 D > C: R 36050298:36050298(0) win 24576
   13:48:36.34 C > D: . 5557:7017(1460) ack 2 win 4096 (DF)
   13:48:36.34 D > C: R 36050298:36050298(0) win 24576

      In this trace, the application process is terminated on Machine D
      at approximately 13:48:36.34.  Its TCP sends the FIN with the
      window opened again (since it discarded the previously received
      data).  Machine C promptly sends more data, causing Machine D to
      reset the connection since it cannot deliver the data to the
      application.  Ideally, Machine D SHOULD have sent a RST at the
      time of the close, instead of dropping the data and re-opening
      the receive window.

      Note: Machine C's zero-window probing is broken in the same way
      as in the example above.

   Trace file demonstrating correct behavior
      Made using tcpdump.  No losses reported by the packet filter.

   14:12:02.19 E > F: S 1143360000:1143360000(0) win 4096
   14:12:02.19 F > E: S 1002988443:1002988443(0) ack 1143360001
                       win 4096 <mss 1460> (DF)
   14:12:02.19 E > F: . ack 1 win 4096
   14:12:10.43 E > F: . 1:513(512) ack 1 win 4096
   14:12:10.61 F > E: . ack 513 win 3584 (DF)
   14:12:10.61 E > F: . 513:1025(512) ack 1 win 4096
   14:12:10.61 E > F: . 1025:1537(512) ack 1 win 4096
   14:12:10.81 F > E: . ack 1537 win 2560 (DF)
   14:12:10.81 E > F: . 1537:2049(512) ack 1 win 4096
   14:12:10.81 E > F: . 2049:2561(512) ack 1 win 4096
   14:12:10.81 E > F: . 2561:3073(512) ack 1 win 4096
   14:12:11.01 F > E: . ack 3073 win 1024 (DF)
   14:12:11.01 E > F: . 3073:3585(512) ack 1 win 4096
   14:12:11.01 E > F: . 3585:4097(512) ack 1 win 4096
   14:12:11.21 F > E: . ack 4097 win 0 (DF)
   14:12:15.88 E > F: . 4097:4098(1) ack 1 win 4096
   14:12:16.06 F > E: . ack 4097 win 0 (DF)
   14:12:20.88 E > F: . 4097:4098(1) ack 1 win 4096
   14:12:20.91 F > E: . ack 4097 win 0 (DF)
   14:12:21.94 F > E: R 1002988444:1002988444(0) win 4096

      When the application terminates at 14:12:21.94, F immediately
      sends a RST.

      Note: Machine E's zero-window probing is (finally) correct.

   How to detect
      The problem can often be detected by inspecting packet traces of a
      transfer in which the receiving application terminates abnormally.
      When doing so, there can be an ambiguity (if only looking at the
      trace) as to whether the receiving TCP did indeed have unread data
      that it could now no longer deliver.  To provoke this to happen,
      it may help to suspend the receiving application so that it fails
      to consume any data, eventually exhausting the advertised window.
      At this point, since the advertised window is zero, we know that
      the receiving TCP has undelivered data buffered up.  Terminating
      the application process then should suffice to test the
      correctness of the TCP's behavior.
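
      For example, one way to provoke this situation, using the
      "netcat" utility [Hobbit96] as a stand-in receiving application
      (the host name "B" and the port number used here are arbitrary):

      1% nc -l -p 8080 > /dev/null &
      2% kill -STOP %1

      Then, on the sending host:

      3% nc B 8080 < /dev/zero

      Once a packet trace shows the receiver advertising a window of
      zero, terminate the suspended receiver:

      4% kill -9 %1

      A correct TCP on the receiving host should now send a RST; a TCP
      exhibiting the problem will instead continue to acknowledge the
      sender's zero-window probes with a window of zero.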

2.18.

   Name of Problem
      Options missing from TCP MSS calculation

   Classification
      Reliability / performance

   Description
      When a TCP determines how much data to send per packet, it
      calculates a segment size based on the MTU of the path.  It must
      then subtract from that MTU the size of the IP and TCP headers in
      the packet.  If IP options and TCP options are not taken into
      account correctly in this calculation, the resulting segment size
      may be too large.  TCPs that make this error are said to exhibit
      "Options missing from TCP MSS calculation".

   Significance
      In some implementations, this causes the transmission of strangely
      fragmented packets.  In some implementations with Path MTU (PMTU)
      discovery [RFC1191], this problem can actually result in a total
      failure to transmit any data at all, regardless of the environment
      (see below).

      Arguably, IP options appear only rarely in normal operation,
      especially since the wide deployment of firewalls.

   Implications
      In implementations using PMTU discovery, this problem can result
      in packets that are too large for the output interface, and that
      have the DF (don't fragment) bit set in the IP header.  Thus, the
      IP layer on the local machine is not allowed to fragment the
      packet to send it out the interface.  It instead informs the TCP
      layer of the correct MTU size of the interface; the TCP layer
      again miscomputes the MSS by failing to take into account the size
      of IP options; and the problem repeats, with no data flowing.

   Relevant RFCs
      RFC 1122 describes the calculation of the effective send MSS.  RFC
      1191 describes Path MTU discovery.

   Trace file demonstrating it
      Trace file taken using tcpdump on host C.  The first trace
      demonstrates the fragmentation that occurs without path MTU
      discovery:

   13:55:25.488728 A.65528 > C.discard:
           P 567833:569273(1440) ack 1 win 17520
           <nop,nop,timestamp 3839 1026342>
           (frag 20828:1472@0+)
           (ttl 62, optlen=8 LSRR{B#} NOP)

   13:55:25.488943 A > C:
           (frag 20828:8@1472)
           (ttl 62, optlen=8 LSRR{B#} NOP)

   13:55:25.489052 C.discard > A.65528:
           . ack 566385 win 60816
           <nop,nop,timestamp 1026345 3839> (DF)
           (ttl 60, id 41266)

      Host A repeatedly sends data segments that are 8 octets too large
      for the source-routed path; each is fragmented into two packets,
      one carrying 1440 octets of data, and another carrying the
      remaining 8 octets of data.
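
      (Assuming the usual 1500-octet Ethernet MTU: a correct TCP would
      compute 1500 - 20 (IP header) - 8 (IP options) - 20 (TCP header)
      - 12 (TCP options) = 1440 octets of data per segment; the extra
      8 octets seen here are exactly the unaccounted-for IP options.)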

      The second trace demonstrates the failure to send any data
      segments, sometimes seen with hosts doing path MTU discovery:

   13:55:44.332219 A.65527 > C.discard:
           S 1018235390:1018235390(0) win 16384
           <mss 1460,nop,wscale 0,nop,nop,timestamp 3876 0> (DF)
           (ttl 62, id 20912, optlen=8 LSRR{B#} NOP)

   13:55:44.333015 C.discard > A.65527:
           S 1271629000:1271629000(0) ack 1018235391 win 60816
           <mss 1460,nop,wscale 0,nop,nop,timestamp 1026383 3876> (DF)
           (ttl 60, id 41427)

   13:55:44.333206 C.discard > A.65527:
           S 1271629000:1271629000(0) ack 1018235391 win 60816
           <mss 1460,nop,wscale 0,nop,nop,timestamp 1026383 3876> (DF)
           (ttl 60, id 41427)

      This is all of the activity seen on this connection.  Eventually
      host C will time out attempting to establish the connection.

   How to detect
      The "netcat" utility [Hobbit96] is useful for generating source
      routed packets:

      1% nc C discard
      (interactive typing)
      ^C
      2% nc C discard < /dev/zero
      ^C
      3% nc -g B C discard
      (interactive typing)
      ^C
      4% nc -g B C discard < /dev/zero
      ^C

      Lines 1 through 3 should generate appropriate packets, which can
      be verified using tcpdump.  If the problem is present, line 4
      should generate one of the two kinds of packet traces shown.
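
      When verifying the generated packets with tcpdump, note that the
      verbose flag is needed to display the IP options and the ttl/id
      information shown in the traces above, e.g.:

      tcpdump -v host C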

   How to fix
      The implementation should ensure that the effective send MSS
      calculation includes a term for the IP and TCP options, as
      mandated by RFC 1122.
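
      For reference, section 4.2.2.6 of RFC 1122 gives the calculation
      of the effective send MSS as:

      Eff.snd.MSS = min(SendMSS + 20, MMS_S) - TCPhdrsize - IPoptionsize

      where SendMSS is the MSS value received from the remote host (or
      536 if none was received), MMS_S is the maximum size of a
      transport-layer message that IP can send, TCPhdrsize is the size
      of the TCP header (normally 20, but larger when TCP options are
      in use), and IPoptionsize is the size of any IP options that TCP
      will pass to IP with the current segment.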

3. Security Considerations

   This memo does not discuss any specific security-related TCP
   implementation problems, as the working group decided to pursue
   documenting those in a separate document.  Some of the implementation
   problems discussed here, however, can be used for denial-of-service
   attacks.  Those classified as congestion control can be exploited
   to make TCPs carrying legitimate traffic place excessive load on
   network elements.  Those classified as
   "performance", "reliability" and "resource management" may be
   exploitable for launching surreptitious denial-of-service attacks
   against the user of the TCP.  Both of these types of attacks can be
   extremely difficult to detect because in most respects they look
   identical to legitimate network traffic.

4. Acknowledgements

   Thanks to numerous correspondents on the tcp-impl mailing list for
   their input:  Steve Alexander, Larry Backman, Jerry Chu, Alan Cox,
   Kevin Fall, Richard Fox, Jim Gettys, Rick Jones, Allison Mankin, Neal
   McBurnett, Perry Metzger, der Mouse, Thomas Narten, Andras Olah,
   Steve Parker, Francesco Potortì, Luigi Rizzo, Allyn Romanow, Al
   Smith, Jerry Toporek, Joe Touch, and Curtis Villamizar.

   Thanks also to Josh Cohen for the traces documenting the "Failure to
   send a RST after Half Duplex Close" problem; and to John Polstra, who
   analyzed the "Window probe deadlock" problem.

5. References

   [Allman97]   M. Allman, "Fixing Two BSD TCP Bugs," Technical Report
                CR-204151, NASA Lewis Research Center, Oct. 1997.
                http://roland.grc.nasa.gov/~mallman/papers/bug.ps

   [RFC2414]    Allman, M., Floyd, S. and C. Partridge, "Increasing
                TCP's Initial Window", RFC 2414, September 1998.

   [RFC1122]    Braden, R., Editor, "Requirements for Internet Hosts --
                Communication Layers", STD 3, RFC 1122, October 1989.

   [RFC2119]    Bradner, S., "Key words for use in RFCs to Indicate
                Requirement Levels", BCP 14, RFC 2119, March 1997.

   [Brakmo95]   L. Brakmo and L. Peterson, "Performance Problems in
                BSD4.4 TCP," ACM Computer Communication Review,
                25(5):69-86, 1995.

   [RFC813]     Clark, D., "Window and Acknowledgement Strategy in TCP,"
                RFC 813, July 1982.

   [Dawson97]   S. Dawson, F. Jahanian, and T. Mitton, "Experiments on
                Six Commercial TCP Implementations Using a Software
                Fault Injection Tool," to appear in Software Practice &
                Experience, 1997.  A technical report version of this
                 paper can be obtained at
                 ftp://rtcl.eecs.umich.edu/outgoing/sdawson/CSE-TR-298-96.ps.gz.

   [Fall96]     K. Fall and S. Floyd, "Simulation-based Comparisons of
                Tahoe, Reno, and SACK TCP," ACM Computer Communication
                Review, 26(3):5-21, 1996.

   [Hobbit96]   Hobbit, Avian Research, netcat, available via anonymous
                ftp to ftp.avian.org, 1996.

   [Hoe96]      J. Hoe, "Improving the Start-up Behavior of a Congestion
                Control Scheme for TCP," Proc. SIGCOMM '96.

   [Jacobson88] V. Jacobson, "Congestion Avoidance and Control," Proc.
                SIGCOMM '88.  ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z

   [Jacobson89] V. Jacobson, C. Leres, and S. McCanne, tcpdump,
                available via anonymous ftp to ftp.ee.lbl.gov, Jun.
                1989.

   [RFC2018]    Mathis, M., Mahdavi, J., Floyd, S. and A. Romanow, "TCP
                Selective Acknowledgement Options", RFC 2018, October
                1996.

   [RFC1191]    Mogul, J. and S. Deering, "Path MTU discovery", RFC
                1191, November 1990.

   [RFC896]     Nagle, J., "Congestion Control in IP/TCP Internetworks",
                RFC 896, January 1984.

   [Paxson97]   V. Paxson, "Automated Packet Trace Analysis of TCP
                Implementations," Proc. SIGCOMM '97, available from
                ftp://ftp.ee.lbl.gov/papers/vp-tcpanaly-sigcomm97.ps.Z.

   [RFC793]     Postel, J., Editor, "Transmission Control Protocol," STD
                7, RFC 793, September 1981.

   [RFC2001]    Stevens, W., "TCP Slow Start, Congestion Avoidance, Fast
                Retransmit, and Fast Recovery Algorithms", RFC 2001,
                January 1997.

   [Stevens94]  W. Stevens, "TCP/IP Illustrated, Volume 1", Addison-
                Wesley Publishing Company, Reading, Massachusetts, 1994.

   [Wright95]   G. Wright and W. Stevens, "TCP/IP Illustrated, Volume
                 2", Addison-Wesley Publishing Company, Reading,
                 Massachusetts, 1995.

6. Authors' Addresses

   Vern Paxson
   ACIRI / ICSI
   1947 Center Street
   Suite 600
   Berkeley, CA 94704-1198

   Phone: +1 510/642-4274 x302
   EMail: vern@aciri.org

   Mark Allman <mallman@grc.nasa.gov>
   NASA Glenn Research Center/Sterling Software
   Lewis Field
   21000 Brookpark Road
   MS 54-2
   Cleveland, OH 44135
   USA

   Phone: +1 216/433-6586
   Email: mallman@grc.nasa.gov

   Scott Dawson
   Real-Time Computing Laboratory
   EECS Building
   University of Michigan
   Ann Arbor, MI  48109-2122
   USA

   Phone: +1 313/763-5363
   EMail: sdawson@eecs.umich.edu

   William C. Fenner
   Xerox PARC
   3333 Coyote Hill Road
   Palo Alto, CA 94304
   USA

   Phone: +1 650/812-4816
   EMail: fenner@parc.xerox.com

   Jim Griner <jgriner@grc.nasa.gov>
   NASA Glenn Research Center
   Lewis Field
   21000 Brookpark Road
   MS 54-2
   Cleveland, OH 44135
   USA

   Phone: +1 216/433-5787
   EMail: jgriner@grc.nasa.gov

   Ian Heavens
   Spider Software Ltd.
   8 John's Place, Leith
   Edinburgh EH6 7EL
   UK

   Phone: +44 131/475-7015
   EMail: ian@spider.com

   Kevin Lahey
   NASA Ames Research Center/MRJ
   MS 258-6
   Moffett Field, CA 94035
   USA

   Phone: +1 650/604-4334
   EMail: kml@nas.nasa.gov

   Jeff Semke
   Pittsburgh Supercomputing Center
   4400 Fifth Ave
   Pittsburgh, PA 15213
   USA

   Phone: +1 412/268-4960
   EMail: semke@psc.edu

   Bernie Volz
   Process Software Corporation
   959 Concord Street
   Framingham, MA 01701
   USA

   Phone: +1 508/879-6994
   EMail: volz@process.com

7.  Full Copyright Statement

   Copyright (C) The Internet Society (1999).  All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain it
   or assist in its implementation may be prepared, copied, published
   and distributed, in whole or in part, without restriction of any
   kind, provided that the above copyright notice and this paragraph are
   included on all such copies and derivative works.  However, this
   document itself may not be modified in any way, such as by removing
   the copyright notice or references to the Internet Society or other
   Internet organizations, except as needed for the purpose of
   developing Internet standards in which case the procedures for
   copyrights defined in the Internet Standards process must be
   followed, or as required to translate it into languages other than
   English.

   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
