* 3 packet TCP window limit? @ 2010-05-05 9:10 dormando 2010-05-05 13:26 ` Brian Bloniarz 0 siblings, 1 reply; 13+ messages in thread From: dormando @ 2010-05-05 9:10 UTC (permalink / raw) To: netdev Hey, Noticed in Linux that no matter what sysctl variable I twiddle, or what TCP congestion algorithm is running, TCP will wait for remote acks after sending the first 3 packets. After that it's normal. Apologies, it's hard to describe: Linux server listening. Remote -> SYN (RTT wait) Linux -> SYN/ACK Remote -> ACK Remote -> Packet (small HTTP request) (RTT wait) Linux -> Packet (x 3) Remote -> (returning acks per packet) (RTT wait) Linux -> More packets (up to window size) If the request response fits in 3 packets or less, that third RTT wait never happens. The remote client gets all its data, and sends back all the FIN/ACK packets for closing the connection. What's bizarre is that this 3 packet/4 packet barrier exists regardless of how much data there is to send. I can cause the extra RTT to flip on or off by sending exactly +/- 1 byte to cause an extra packet. Holding the connection open and repeating the request any number of times runs just fine, after the initial request. You can pretty easily see this by: tc qdisc add dev eth0 root netem delay 100ms ... then fetching a 3k file, then a 4k file from an http server running linux. Well, at least I can see this easily. I tried on a half dozen boxes (2.6.11 through 2.6.32). I'm trying to track down where in the code this is, or why my sysctl tuning isn't affecting it. I can't discern its purpose. The lag it causes is pretty awful for far away clients; adding 300ms of latency will make a small request take a full second, instead of 600ms. I'm slogging through the code but any insight would be greatly appreciated! -Dormando ^ permalink raw reply [flat|nested] 13+ messages in thread
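The reproduction dormando describes can be sketched as a short shell session. This is an illustrative sketch, not from the original mail: the interface name, server hostname, and file names are placeholders, and the `tc` commands assume root on the server.

```shell
# On the Linux server: add 100ms of artificial one-way delay with netem.
tc qdisc add dev eth0 root netem delay 100ms

# From a remote client: time a ~3k fetch and a ~4k fetch.
# With a 3-segment initial window, the 4k fetch should cost one extra RTT.
time curl -s -o /dev/null http://server.example/3k.bin
time curl -s -o /dev/null http://server.example/4k.bin

# On the server: remove the artificial delay when done.
tc qdisc del dev eth0 root netem
```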
* Re: 3 packet TCP window limit? 2010-05-05 9:10 3 packet TCP window limit? dormando @ 2010-05-05 13:26 ` Brian Bloniarz 2010-05-05 20:01 ` dormando 0 siblings, 1 reply; 13+ messages in thread From: Brian Bloniarz @ 2010-05-05 13:26 UTC (permalink / raw) To: dormando; +Cc: netdev dormando wrote: > Hey, > > Noticed in Linux that no matter what sysctl variable I twiddle, or what > TCP congestion algorithm is running, TCP will wait for remote acks after > sending the first 3 packets. After that it's normal. > > Apologies, it's hard ot describe: > > Linux server listening. > > Remote -> SYN > (RTT wait) > Linux -> SYN/ACK > Remote -> ACK > Remote -> Packet (small HTTP request) > (RTT wait) > Linux -> Packet (x 3) > Remote -> (returning acks per packet) > (RTT wait) > Linux -> More packets (up to window size) > > If the request response fits in 3 packets or less, that third RTT wait > never happens. The remote client gets all its data, and sends back all the > FIN/ACK packets for closing the connection. > > What's bizarre is that this 3 packet/4 packet barrier is regardless of how > much data there is to send. I can cause the extra RTT to flip on or off by > sending exactly +/- 1 byte to cause an extra packet. > > Holding the connection open and repeating the request any number of times > runs just fine, after the initial request. > > You can pretty easily see this by: > tc qdisc add dev eth0 root netem delay 100ms > ... then fetching a 3k file, then 4k file from an http server running > linux. Well. at least I can see this easily. I tried on a half dozen boxes > (2.6.11 through 2.6.32). > > I'm trying to track down where in the code this is, or why my sysctl > tuning isn't affecting it. I can't discern its purpose. The lag it causes > is pretty awful for far away clients; adding 300ms of latency will make a > small request take a full second, instead of 600ms. > > I'm slugging through the code but any insight would be greatly > appreciated! 
This sounds like TCP slow start. http://en.wikipedia.org/wiki/Slow-start As far as tunables go, you might want to play with the initcwnd route flag (see "ip route help") > > -Dormando > > -- > To unsubscribe from this list: send the line "unsubscribe netdev" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 13+ messages in thread
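A minimal sketch of the knob Brian points at (the gateway address and device are placeholders; this rewrites the route, so it needs root and is best tried on a test box first):

```shell
# Inspect the current default route and its attributes.
ip route show default

# Re-install the default route with a larger initial congestion window
# (in segments). Requires a kernel/iproute2 that support initcwnd.
ip route change default via 192.0.2.1 dev eth0 initcwnd 10
```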
* Re: 3 packet TCP window limit? 2010-05-05 13:26 ` Brian Bloniarz @ 2010-05-05 20:01 ` dormando 2010-05-05 20:23 ` Rick Jones 2010-05-05 20:56 ` Brian Bloniarz 0 siblings, 2 replies; 13+ messages in thread From: dormando @ 2010-05-05 20:01 UTC (permalink / raw) To: Brian Bloniarz; +Cc: netdev > This sounds like TCP slow start. > > http://en.wikipedia.org/wiki/Slow-start > > As far as tunables you might want to play with the initcwnd route > flag (see "ip route help") Ah, yes, initcwnd was it. I'm well aware of TCP Congestion control / slow start / etc. However I couldn't find the damn tunable for it :) ssthresh/tso/etc didn't seem to unwedge it. Felt like describing it in the most generic way possible would help :) Other OS's appear to have a larger initcwnd. As do commercial load balancers. The default of 3 seems to be tuned for 56k dialup modems. I'm a little surprised that none of the pluggable TCP congestion control algorithms changed this value. I went through all of them except for tcp_yeah. Anyway, thanks and sorry for the nearly off-topic post here. I see some google papers on bumping initcwnd to 10... but I guess that's not linux's deal yet. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: 3 packet TCP window limit? 2010-05-05 20:01 ` dormando @ 2010-05-05 20:23 ` Rick Jones 2010-05-05 21:31 ` dormando 2010-05-05 20:56 ` Brian Bloniarz 1 sibling, 1 reply; 13+ messages in thread From: Rick Jones @ 2010-05-05 20:23 UTC (permalink / raw) To: dormando; +Cc: Brian Bloniarz, netdev dormando wrote: >>This sounds like TCP slow start. >> >>http://en.wikipedia.org/wiki/Slow-start >> >>As far as tunables you might want to play with the initcwnd route >>flag (see "ip route help") > > Ah, yes, initcwnd was it. I'm well aware of TCP Congestion control / slow > start / etc. However I couldn't find the damn tunable for it :) I don't believe linux as yet has a damn tunable for it :) > ssthresh/tso/etc didn't seem to unwedge it. If they did, it would be a bug. In fact there *was* a bug "way back when" where TSO being enabled caused the stack to ignore initcwnd, but that was fixed circa 2.6.14. Until it was fixed (it was difficult to notice unless one was speaking to a non-Linux receiver, since Linux receivers autotune the receive window) it did some very nice things for SPECweb benchmark results :) > Felt like describing it in the most generic way possible would help :) > > Other OS's appear to have a larger initcwnd. Names? Values? > As do commercial load balancers. Names? Values? > The default of 3 seems to be tuned for 56k dialup modems. I'm a > little surprised that none of the pluggable TCP congestion control > algorithms changed this value. I went through all of them except for > tcp_yeah. The initcwnd comes from IETF RFCs and their "thou shalts" and "thou shalt nots." As you note below, Google et al seek to alter/extend the RFCs. That is an ongoing discussion in some of the ietf related mailing lists. rick jones > Anyway, thanks and sorry for the nearly off-topic post here. I see some > google papers on bumping initcwnd to 10... but I guess that's not linux's > deal yet. 
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: 3 packet TCP window limit? 2010-05-05 20:23 ` Rick Jones @ 2010-05-05 21:31 ` dormando 2010-05-06 6:15 ` Lars Eggert 0 siblings, 1 reply; 13+ messages in thread From: dormando @ 2010-05-05 21:31 UTC (permalink / raw) To: Rick Jones; +Cc: Brian Bloniarz, netdev > I don't believe linux as yet has a damn tunable for it :) ip route initcwnd sure does it :) > > Other OS's appear to have a larger initcwnd. > > Names? Values? OpenBSD 4.6 makes the jump between a ~5k fetch and a ~6k fetch > > As do commercial load balancers. > > Names? Values? An older Big/IP appears to be between 5k and 6k as well. I remember a sales meeting with netscaler (pre-NDA) back in 2004 or 2005 where they claimed to have opened up slow start. There might be others but I can't remember which side of the NDA I was informed of their TCP tuning. Linux is consistently between 3k and 4k. Just the difference between the extra RTT hitting in the ~4k range versus the ~6k range makes our latency graphs go nutty. I've been testing a subset of traffic at an initcwnd of 10 for the last few hours and latency has dropped even more, though I see some bad outliers. > > The default of 3 seems to be tuned for 56k dialup modems. I'm a > > little surprised that none of the pluggable TCP congestion control > > algorithms changed this value. I went through all of them except for > > tcp_yeah. > > The initcwnd comes from IETF RFCs and their "thou shalts" and "thou shalt > nots." As you note below, Google et al seek to alter/extend the RFCs. That > is an ongoing discussion in some of the ietf related mailing lists. The RFC clearly states "around 4k", but these other OS's/products have an extra kilobyte snuck in? Could this be on purpose via RFC interpretation, or an off-by-one on the initcwnd estimator? :) ^ permalink raw reply [flat|nested] 13+ messages in thread
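One way to take the kind of per-OS measurement dormando describes — counting how many data segments arrive before the first ack round-trip — is a packet capture along these lines (interface, port, and hostnames are placeholders, not from the original mail):

```shell
# Watch a single HTTP fetch and count the data segments the server sends
# between receiving the request and the client's first data ACK; that
# count is the server's initial congestion window in segments.
tcpdump -n -i eth0 -ttt 'tcp port 80 and host client.example'
```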
* Re: 3 packet TCP window limit? 2010-05-05 21:31 ` dormando @ 2010-05-06 6:15 ` Lars Eggert 2010-05-06 8:51 ` dormando 0 siblings, 1 reply; 13+ messages in thread From: Lars Eggert @ 2010-05-06 6:15 UTC (permalink / raw) To: dormando; +Cc: Rick Jones, Brian Bloniarz, netdev@vger.kernel.org Hi, On 2010-5-5, at 23:31, dormando wrote: > The RFC clearly states "around 4k", no, it doesn't. RFC3390 gives a very precise formula for calculating the initial window: min (4*MSS, max (2*MSS, 4380 bytes)) Please see the RFC for why. More reading at http://www.icir.org/floyd/tcp_init_win.html I believe that Linux implements this behavior pretty faithfully. > but these other OS's/products have an > extra kilobyte snuck in? Could this be on purpose via rfc > interpretation, or an off by one on the initcwnd estimator? :) I'm surprised to hear that OpenBSD doesn't follow the RFC. Can you share a measurement? Are you sure the box you are measuring is using the default configuration? I don't think the RFC can be misread (it's pretty clear), and the formula is also not exactly complicated. My guess would be that some vendors have convinced themselves that using a slightly larger value is OK, esp. if they can show customers that "their" TCP is "faster" than some competitors' TCPs. An arms race between vendors in this space would really not be good for anyone - it's clear that at some point, problems due to overshoot will occur. (We can definitely argue about whether the current RFC-recommended value is too low, and Google and others are gathering data in support of making a convincing and backed-up argument for increasing the initial window to the IETF. Which is exactly the correct way of going about this.) Lars ^ permalink raw reply [flat|nested] 13+ messages in thread
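Lars's formula is easy to sanity-check in a few lines of shell arithmetic (an editorial sketch, not part of the original mail). Note that for a standard Ethernet MSS of 1460 bytes it yields 4380 bytes — exactly the 3 full segments dormando observed:

```shell
# RFC 3390 initial window in bytes: min(4*MSS, max(2*MSS, 4380)).
rfc3390_iw() {
    mss=$1
    two=$((2 * mss))
    four=$((4 * mss))
    max=$(( two > 4380 ? two : 4380 ))
    echo $(( four < max ? four : max ))
}

for mss in 536 1460 4380; do
    iw=$(rfc3390_iw $mss)
    echo "MSS $mss -> IW $iw bytes ($((iw / mss)) full segments)"
done
```

For any MSS between 1095 and 2190 bytes the formula pins the window to exactly 4380 bytes, which is why "around 4k" is a paraphrase rather than what the RFC says.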
* Re: 3 packet TCP window limit? 2010-05-06 6:15 ` Lars Eggert @ 2010-05-06 8:51 ` dormando [not found] ` <p2h349f35ee1005061513x1db24de0ld98a40256c481ac2@mail.gmail.com> 0 siblings, 1 reply; 13+ messages in thread From: dormando @ 2010-05-06 8:51 UTC (permalink / raw) To: Lars Eggert; +Cc: Rick Jones, Brian Bloniarz, netdev@vger.kernel.org > On 2010-5-5, at 23:31, dormando wrote: > > The RFC clearly states "around 4k", > > no, it doesn't. RFC3390 gives a very precise formula for calculating the initial window: > > min (4*MSS, max (2*MSS, 4380 bytes)) > > Please see the RFC for why. More reading at http://www.icir.org/floyd/tcp_init_win.html I believe that Linux implements behavior this pretty faithfully. Sorry, paraphrasing :) Web nerds have been working around this for a long time now. Google talks about using HTTP chunked encoding responses to send an initial "frame" of a webpage in under 3 packets. Which immediately gives the browser something to render and primes the TCP connection for more web junk. > I'm surprised to hear that OpenBSD doesn't follow the RFC. Can you share a measurement? Are you sure the box you are measuring is using the default configuration? Yeah, default config. OBSD was giving me back 4 packets in the first window, while linux always gives back 3. The Big/IP is based on linux 2.4.21. If that kernel didn't have it wrong, they tuned it. Already nuked my dumps. If you're curious I'll re-create. > I don't think the RFC can be misread (it's pretty clear), and the > formula is also not exactly complicated. My guess would be that some > vendors have convinced themselves that using a slightly larger value is > OK, esp. if they can show customers that "their" TCP is "faster" than > some competitors' TCPs. An arms race between vendors in this space would > really not be good for anyone - it's clear that at some point, problems > due to overshoot will occur. I clearly remember some vendors bragging about doing this. That was a long time ago? 
Perhaps they stopped? If it's true, they've been doing it for half a decade or more and haven't broken anything that anyone would notice. The only reason I set about tuning this is that our latency jumped while moving traffic from a commercial machine to a linux machine, and I had to figure out what they changed to do that. I've since turned the setting *back* to the standard, having confirmed what they did. Almost tempted to test this against a bunch of websites... > (We can definitely argue about whether the current RFC-recommended value > is too low, and Google and others are gathering data in support of > making a convincing and backed-up argument for increasing the initial > window to the IETF. Which is exactly the correct way of going about > this.) This sounds like fun. We have some diverse traffic, so I'm hoping we can contribute to that conversation. Still have a lot of reading to catch up with first :) ^ permalink raw reply [flat|nested] 13+ messages in thread
[parent not found: <p2h349f35ee1005061513x1db24de0ld98a40256c481ac2@mail.gmail.com>]
[parent not found: <q2ud1c2719f1005061613yf90cd7c6r46ee23cc49858e74@mail.gmail.com>]
* Re: 3 packet TCP window limit? [not found] ` <q2ud1c2719f1005061613yf90cd7c6r46ee23cc49858e74@mail.gmail.com> @ 2010-05-06 23:15 ` Jerry Chu 0 siblings, 0 replies; 13+ messages in thread From: Jerry Chu @ 2010-05-06 23:15 UTC (permalink / raw) To: dormando; +Cc: Lars Eggert, Rick Jones, Brian Bloniarz, netdev@vger.kernel.org From: dormando <dormando@rydia.net> > > Date: Thu, May 6, 2010 at 1:51 AM > Subject: Re: 3 packet TCP window limit? > To: Lars Eggert <lars.eggert@nokia.com> > Cc: Rick Jones <rick.jones2@hp.com>, Brian Bloniarz > <bmb@athenacr.com>, "netdev@vger.kernel.org" <netdev@vger.kernel.org> > > > > On 2010-5-5, at 23:31, dormando wrote: > > > The RFC clearly states "around 4k", > > > > no, it doesn't. RFC3390 gives a very precise formula for calculating the initial window: > > > > min (4*MSS, max (2*MSS, 4380 bytes)) > > > > Please see the RFC for why. More reading at http://www.icir.org/floyd/tcp_init_win.html I believe that Linux implements behavior this pretty faithfully. > > Sorry, paraphrasing :) Web nerds have been working around this for a long > time now. Google talks about using HTTP chunked encoding responses to send > an initial "frame" of a webpage in under 3 packets. Which immediately > gives the browser something to render and primes the TCP connection for > more web junk. > > > I'm surprised to hear that OpenBSD doesn't follow the RFC. Can you share a measurement? Are you sure the box you are measuring is using the default configuration? > > Yeah, default config. OBSD was giving me back 4 packets in the first > window, while linux always gives back 3. The Big/IP is based on linux > 2.4.21. If that kernel didn't have it wrong, they tuned it. > > Already nuked my dumps. If you're curious I'll re-create. > > > I don't think the RFC can be misread (it's pretty clear), and the > > formula is also not exactly complicated. My guess would be that some > > vendors have convinced themselves that using a slightly larger value is > > OK, esp. 
if they can show customers that "their" TCP is "faster" than > > some competitors' TCPs. An arms race between vendors in this space would > > really not be good for anyone - it's clear that at some point, problems > > due to overshoot will occur. > > I clearly remember some vendors bragging about doing this. That was a long > time ago? Perhaps they stopped? If it's true they've been doing it for > half a decade or more, and haven't broken anything someone would notice. > > The only reason why I set about tuning this is because our latency jumped > while moving traffic from a commercial machine to a linux machine, and I > had to figure out what they changed to do that. I've since turned the > setting *back* to the standard, having confirmed what they did. > > Almost tempted to test this against a bunch of websites... > > > (We can definitely argue about whether the current RFC-recommended value > > is too low, and Google and others are gathering data in support of > > making a convincing and backed-up argument for increasing the initial > > window to the IETF. Which is exactly the correct way of going about > > this.) > > This sounds like fun. We have some diverse traffic, so I'm hoping we can > contribute to that conversation. Still have a lot of reading to catch up > with first :) Yes please do. Our presentation at Anaheim IETF can be found at http://www.ietf.org/proceedings/10mar/slides/tcpm-4.pdf, with a paper describing the details of our experiments at http://code.google.com/speed/articles/tcp_initcwnd_paper.pdf. We've gotten a lot of feedback from IETF and are planning to collect more data to justify the proposal. But at this point we really need help from others as the scope of the work is certainly not a one-company job. Help can be in the form of more experiments/tests and/or simulations to study the effect of a larger initcwnd. 
Please contact me directly or send your data to IETF's TCPM WG list (http://www.ietf.org/mail-archive/web/tcpm/current/maillist.html). Thanks, Jerry ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: 3 packet TCP window limit? 2010-05-05 20:01 ` dormando 2010-05-05 20:23 ` Rick Jones @ 2010-05-05 20:56 ` Brian Bloniarz 2010-05-05 22:03 ` Stephen Hemminger 1 sibling, 1 reply; 13+ messages in thread From: Brian Bloniarz @ 2010-05-05 20:56 UTC (permalink / raw) To: dormando; +Cc: netdev, Rick Jones, shemminger dormando wrote: >> This sounds like TCP slow start. >> >> http://en.wikipedia.org/wiki/Slow-start >> >> As far as tunables you might want to play with the initcwnd route >> flag (see "ip route help") > > Ah, yes, initcwnd was it. I'm well aware of TCP Congestion control / slow > start / etc. However I couldn't find the damn tunable for it :) Documenting the flag in ip(8) might increase its visibility a little. I don't see it documented in the iproute2 git head, though it shows up on http://linux.die.net/man/8/ip somehow. Stephen, do you know why that is? ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: 3 packet TCP window limit? 2010-05-05 20:56 ` Brian Bloniarz @ 2010-05-05 22:03 ` Stephen Hemminger 2010-05-06 1:37 ` [PATCH iproute2] document initcwnd Brian Bloniarz 0 siblings, 1 reply; 13+ messages in thread From: Stephen Hemminger @ 2010-05-05 22:03 UTC (permalink / raw) To: Brian Bloniarz; +Cc: dormando, netdev, Rick Jones, shemminger On Wed, 05 May 2010 16:56:34 -0400 Brian Bloniarz <bmb@athenacr.com> wrote: > dormando wrote: > >> This sounds like TCP slow start. > >> > >> http://en.wikipedia.org/wiki/Slow-start > >> > >> As far as tunables you might want to play with the initcwnd route > >> flag (see "ip route help") > > > > Ah, yes, initcwnd was it. I'm well aware of TCP Congestion control / slow > > start / etc. However I couldn't find the damn tunable for it :) > > Documenting the flag in ip(8) might increase its visibility > a little. I don't see it documented in the iproute2 git head, > though it shows up on http://linux.die.net/man/8/ip somehow. > > Stephen, do you know why that is? No one sent me an official patch to change it? ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH iproute2] document initcwnd 2010-05-05 22:03 ` Stephen Hemminger @ 2010-05-06 1:37 ` Brian Bloniarz 2010-05-06 2:33 ` Stephen Hemminger 2010-05-19 15:31 ` Stephen Hemminger 0 siblings, 2 replies; 13+ messages in thread From: Brian Bloniarz @ 2010-05-06 1:37 UTC (permalink / raw) To: Stephen Hemminger; +Cc: dormando, netdev, Rick Jones, shemminger Stephen Hemminger wrote: > On Wed, 05 May 2010 16:56:34 -0400 > Brian Bloniarz <bmb@athenacr.com> wrote: > >> dormando wrote: >>>> This sounds like TCP slow start. >>>> >>>> http://en.wikipedia.org/wiki/Slow-start >>>> >>>> As far as tunables you might want to play with the initcwnd route >>>> flag (see "ip route help") >>> Ah, yes, initcwnd was it. I'm well aware of TCP Congestion control / slow >>> start / etc. However I couldn't find the damn tunable for it :) >> Documenting the flag in ip(8) might increase its visibility >> a little. I don't see it documented in the iproute2 git head, >> though it shows up on http://linux.die.net/man/8/ip somehow. >> >> Stephen, do you know why that is? > > No one sent me an official patch to change it? Mention initcwnd in ip(8). Text taken from doc/ip-cref.tex. Signed-off-by: Brian Bloniarz <bmb@athenacr.com> diff --git a/man/man8/ip.8 b/man/man8/ip.8 index a5d2915..777a0a7 100644 --- a/man/man8/ip.8 +++ b/man/man8/ip.8 @@ -211,7 +211,9 @@ replace " | " monitor " } " .B realms .IR REALM " ] [ " .B rto_min -.IR TIME " ]" +.IR TIME " ] [ " +.B initcwnd +.IR NUMBER " ]" .ti -8 .IR TYPE " := [ " @@ -1561,6 +1563,13 @@ the clamp for congestion window. It is ignored if the flag is not used. .TP +.BI initcwnd " NUMBER " "(2.5.70+ only)" +Initial congestion window size for connections to this destination. +Actual window size is this value multiplied by the MSS +(``Maximal Segment Size'') for same connection. The default is +zero, meaning to use the values specified in RFC2414. 
+ +.TP .BI advmss " NUMBER " "(2.3.15+ only)" the MSS ('Maximal Segment Size') to advertise to these destinations when establishing TCP connections. If it is not given, ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH iproute2] document initcwnd 2010-05-06 1:37 ` [PATCH iproute2] document initcwnd Brian Bloniarz @ 2010-05-06 2:33 ` Stephen Hemminger 2010-05-19 15:31 ` Stephen Hemminger 1 sibling, 0 replies; 13+ messages in thread From: Stephen Hemminger @ 2010-05-06 2:33 UTC (permalink / raw) To: Brian Bloniarz; +Cc: dormando, netdev, Rick Jones, shemminger On Wed, 05 May 2010 21:37:40 -0400 Brian Bloniarz <bmb@athenacr.com> wrote: > Stephen Hemminger wrote: > > On Wed, 05 May 2010 16:56:34 -0400 > > Brian Bloniarz <bmb@athenacr.com> wrote: > > > >> dormando wrote: > >>>> This sounds like TCP slow start. > >>>> > >>>> http://en.wikipedia.org/wiki/Slow-start > >>>> > >>>> As far as tunables you might want to play with the initcwnd route > >>>> flag (see "ip route help") > >>> Ah, yes, initcwnd was it. I'm well aware of TCP Congestion control / slow > >>> start / etc. However I couldn't find the damn tunable for it :) > >> Documenting the flag in ip(8) might increase its visibility > >> a little. I don't see it documented in the iproute2 git head, > >> though it shows up on http://linux.die.net/man/8/ip somehow. > >> > >> Stephen, do you know why that is? > > > > No one sent me an official patch to change it? > > Mention initcwnd in ip(8). Text taken from doc/ip-cref.tex. > > Signed-off-by: Brian Bloniarz <bmb@athenacr.com> Ok, I will add it with an explicit caution about not doing this on public networks. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH iproute2] document initcwnd 2010-05-06 1:37 ` [PATCH iproute2] document initcwnd Brian Bloniarz 2010-05-06 2:33 ` Stephen Hemminger @ 2010-05-19 15:31 ` Stephen Hemminger 1 sibling, 0 replies; 13+ messages in thread From: Stephen Hemminger @ 2010-05-19 15:31 UTC (permalink / raw) To: Brian Bloniarz; +Cc: dormando, netdev, Rick Jones, shemminger On Wed, 05 May 2010 21:37:40 -0400 Brian Bloniarz <bmb@athenacr.com> wrote: > Stephen Hemminger wrote: > > On Wed, 05 May 2010 16:56:34 -0400 > > Brian Bloniarz <bmb@athenacr.com> wrote: > > > >> dormando wrote: > >>>> This sounds like TCP slow start. > >>>> > >>>> http://en.wikipedia.org/wiki/Slow-start > >>>> > >>>> As far as tunables you might want to play with the initcwnd route > >>>> flag (see "ip route help") > >>> Ah, yes, initcwnd was it. I'm well aware of TCP Congestion control / slow > >>> start / etc. However I couldn't find the damn tunable for it :) > >> Documenting the flag in ip(8) might increase its visibility > >> a little. I don't see it documented in the iproute2 git head, > >> though it shows up on http://linux.die.net/man/8/ip somehow. > >> > >> Stephen, do you know why that is? > > > > No one sent me an official patch to change it? > > Mention initcwnd in ip(8). Text taken from doc/ip-cref.tex. Applied ^ permalink raw reply [flat|nested] 13+ messages in thread