* TCP retransmission timers, questions
@ 2003-11-29  9:05 Ronnie Sahlberg
From: Ronnie Sahlberg @ 2003-11-29  9:05 UTC (permalink / raw)
  To: netdev

Hi list, I hope this mail arrives (I'm not subscribed to the list, but
don't worry, I read the list through the mail archive).

From looking at the kernel sources, it seems that the minimum TCP
retransmission timeout is hardcoded to 200 ms.
Is this correct?
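For reference, the defines I am looking at (in include/net/tcp.h, if I
am reading the tree correctly) are roughly the following; HZ/5 jiffies
works out to 200 ms no matter what HZ is set to:

    #define TCP_RTO_MAX    ((unsigned)(120*HZ))
    #define TCP_RTO_MIN    ((unsigned)(HZ/5))    /* 200 ms */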

While I understand why it is important not to be too aggressive in
retransmitting, I wonder if it would be possible to get an interface
in /proc where one could "tune" this.
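
Something along the lines of the sketch below is what I have in mind.
To be clear, none of these names exist in the kernel today; the sysctl
name, the variable and the helper are all made up just to illustrate
the idea:

    /* Hypothetical sketch only: expose the RTO floor through
     * /proc/sys/net/ipv4/tcp_rto_min_ms and use it wherever the
     * timer code currently clamps against the hardcoded TCP_RTO_MIN.
     */
    int sysctl_tcp_rto_min = 200;  /* ms, default keeps today's behaviour */

    static inline unsigned int tcp_rto_floor(void)
    {
        /* convert the tunable from milliseconds to jiffies */
        return (sysctl_tcp_rto_min * HZ) / 1000;
    }

    /* ...and in the retransmission timer code, something like
     *     if (rto < tcp_rto_floor())
     *         rto = tcp_rto_floor();
     * instead of clamping against TCP_RTO_MIN directly.
     */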

The reason for this is that some applications run on a completely
private, dedicated network used for that one specific application.
Those networks can be dimensioned so that congestion "should" not
occur; even so, packets are still lost from time to time.
In those isolated, dedicated subnets, with end-to-end network latency
in the sub-millisecond range, would it not be useful to allow the
retransmission timeout to drop down to 5-10 ms?
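
To put rough numbers on it (nothing below is measured, the RTT figures
are simply assumed for the sake of the example, and I am ignoring the
clock-granularity term G in the RFC 2988 formula):

    /* Back-of-the-envelope RTO per RFC 2988:
     * RTO = SRTT + max(G, 4*RTTVAR), then clamped to the stack's
     * minimum.  On a sub-millisecond LAN the clamp decides everything.
     */
    #include <stdio.h>

    int main(void)
    {
        double srtt_ms   = 0.5;     /* assumed smoothed RTT on such a LAN */
        double rttvar_ms = 0.1;     /* assumed RTT variance */
        double floor_ms  = 200.0;   /* the hardcoded Linux minimum */

        double computed = srtt_ms + 4.0 * rttvar_ms;    /* 0.9 ms */
        double used = computed < floor_ms ? floor_ms : computed;

        printf("computed RTO %.1f ms, RTO actually used %.1f ms\n",
               computed, used);
        /* so a single lost packet stalls the connection for several
         * hundred round trips instead of one or two */
        return 0;
    }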

Does anyone know of any work/research in the area of TCP
retransmission timeouts for very high bandwidth, low latency networks?
I have checked the IETF list of drafts, Sally Floyd's pages and Google
but could not find anything.
It seems to me that all research/experimentation in high throughput
targets high bandwidth, high latency links and tuning the
slow-start/congestion avoidance algorithms.
What about high throughput, very low latency?  Does anyone know of any
papers in that area?


For specific applications, running on completely isolated, dedicated
networks, dimensioned to make congestion unlikely and isolated so that
they will NEVER compete for bandwidth with normal TCPs on the
internet, it would make sense to me to allow the retransmission
timeout to drop significantly below 200 ms.


Another question: I think it was RFC 2988 (but I cannot find it again)
that discussed that a TCP may add an artificial delay when sending
packets, based on the RTT, so that when sending an entire window the
packets are spaced equidistantly across the RTT interval instead of
going out in one big burst.
This is meant to reduce the burstiness of the traffic and make buffer
overruns/congestion less likely (see the toy sketch after these
questions for what I mean).
I have seen indications that w2k/bsd might do this under some
conditions.
Does Linux do this?  My search through the sources came up with
nothing.
Does anyone know whether there are other TCPs that do this?
As I said, I have seen something that looked like that on a BSD stack,
but it could have been related to something else.
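
Just to make it concrete, the toy calculation below is what I mean by
spacing; the RTT and window figures are made up and this is obviously
not how any real stack would implement it:

    /* Toy illustration of pacing: instead of sending a whole window
     * back to back, space the segments evenly across one RTT.
     */
    #include <stdio.h>

    int main(void)
    {
        double rtt_us   = 800.0;  /* assumed round-trip time, microseconds */
        int window_segs = 40;     /* assumed segments in flight per RTT */
        double gap_us   = rtt_us / window_segs;

        printf("send one segment every %.0f us instead of %d back to back\n",
               gap_us, window_segs);
        return 0;
    }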


best regards
    ronnie
