* TCP default congestion control in linux should be newreno
[not found] <b98e548c0812030637j2851ed76xb0768932b434e35c@mail.gmail.com>
@ 2008-12-03 16:12 ` Saverio Mascolo
2008-12-04 12:41 ` Ilpo Järvinen
0 siblings, 1 reply; 14+ messages in thread
From: Saverio Mascolo @ 2008-12-03 16:12 UTC (permalink / raw)
To: netdev
we have added plots of cwnd at
http://c3lab.poliba.it/index.php/TCP_over_Hsdpa
in the case of newreno, westwood+, bic/cubic.
basically the goodput is similar with all variants but with
significantly larger packet losses and timeouts with bic/cubic. i am
pretty sure that this would happen with any algorithm -including h-tcp-
that makes the probing more aggressive than the van jacobson linear
phase.
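for concreteness, a minimal C sketch of the two growth laws (this is
not the kernel code; the cubic constants C=0.4 and beta=0.2 are the
ones from the published cubic paper, and the kernel may use different
values):

  #include <math.h>

  /* newreno congestion avoidance: after a loss halves the window,
   * probe by one segment per rtt (t is time since the loss) */
  double reno_cwnd(double w_loss, double t, double rtt)
  {
      return w_loss / 2.0 + t / rtt;
  }

  /* cubic growth law: W(t) = C*(t-K)^3 + Wmax,
   * with K = cbrt(Wmax*beta/C) */
  double cubic_cwnd(double w_max, double t)
  {
      const double C = 0.4, beta = 0.2;
      double K = cbrt(w_max * beta / C);
      return C * pow(t - K, 3.0) + w_max;
  }

near its plateau cubic is gentle, but away from it the window grows
much faster than one segment per rtt, which is the kind of probing
that produces the extra losses in our plots.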
westwood+ seems to "gain" on the side of rtt (i.e. less queueing)
because of the specific way it sets the window after a congestion episode.
at this stage of research, i think that newreno should be made the
default congestion control in linux.
saverio
On Fri, Nov 28, 2008 at 12:09 PM, Douglas Leith <Doug.Leith@nuim.ie> wrote:
>
> A bit of delayed input to this thread on netdev ...
>
>> I'm not so sure about this logic, regardless of the algorithms
>> involved.
>>
>> H-TCP was never the default in any distribution or release that
>> I know of. So its real-world exposure is effectively zero,
>> which is the same as the new CUBIC stuff.
>
>> They are effectively, therefore, equivalent choices.
>
> Not really. At this stage HTCP has undergone quite extensive independent testing by a good few groups (Caltech, Swinburne, North Carolina, etc). It's also been independently implemented in FreeBSD by the Swinburne folks. It's true it hasn't been the default in linux, but HTCP has been subject to *far* more testing than the new cubic algorithm, which has had no independent testing at all to my knowledge.
>
> I'd also like to add some new input to the discussion on the choice of congestion control algorithm in linux - and why it might be useful to evaluate alternatives like htcp. Almost all of the proposals for changes to tcp (including cubic) have really slow convergence to fairness when new flows start up. The question then is whether this matters, e.g. whether it negatively impacts users.
>
> To try to get a handle on this, we took one set of measurements from a home DSL line over a live link (so hopefully representative of common user experience), the other from the production link out of the hamilton institute (so maybe more like the experience of enterprise users). Plots of our measurements are at
>
> http://www.hamilton.ie/doug/tina2.eps (DSL link)
> http://www.hamilton.ie/doug/caltech.eps (hamilton link)
>
> and also attached.
>
> We started one long-ish flow (mimicking incumbent flows) and then started a second shorter flow. The plots show the completion time of the second flow vs its connection size. If the incumbent flow is slow to release bandwidth (as we expect with cubic), we expect the completion time of the second flow to increase, and indeed this is what we see.
>
> What's particularly interesting is (i) the magnitude of the difference - completion times are consistently x2 with cubic vs htcp over many tests - and (ii) that this effect is apparent not only on higher-speed links (caltech.eps) but also on regular DSL links (tina2.eps - we took measurements from a home DSL line, so it's not a sanitised lab setup or anything like that).
>
> As might be expected, the difference in completion times eventually washes out for long transfers, e.g. for the DSL link the most pronounced difference is for 1MB connections (where there is about a x2 difference in times between cubic and htcp) but it becomes smaller for longer flows. The point, however, is that most real flows are short, so the performance with a 1MB flow seems like it should matter more than the 10MB performance. For me the DSL performance is the more important one here since it affects so many people, and it was quite surprising, although I can also reproduce similar results on our testbed so it's not a weird corner case or anything like that.
>
> Wouldn't it be interesting to give h-tcp a go in linux to get wider feedback?
>
> Doug
--
Prof. Saverio Mascolo
Dipartimento di Elettrotecnica ed Elettronica
Politecnico di Bari
Via Orabona 4
70125 Bari
Italy
Tel. +39 080 5963621
Fax. +39 080 5963410
email:mascolo@poliba.it
http://www-dee.poliba.it/dee-web/Personale/mascolo.html
* Re: TCP default congestion control in linux should be newreno
2008-12-03 16:12 ` TCP default congestion control in linux should be newreno Saverio Mascolo
@ 2008-12-04 12:41 ` Ilpo Järvinen
2008-12-04 15:26 ` Saverio Mascolo
2008-12-04 17:50 ` Luca De Cicco
0 siblings, 2 replies; 14+ messages in thread
From: Ilpo Järvinen @ 2008-12-04 12:41 UTC (permalink / raw)
To: Saverio Mascolo; +Cc: Netdev
On Wed, 3 Dec 2008, Saverio Mascolo wrote:
> we have added plots of cwnd at
>
> http://c3lab.poliba.it/index.php/TCP_over_Hsdpa
>
> in the case of newreno, westwood+, bic/cubic.
You lack the most important detail, i.e., the kernel versions used! And
also whether any sysctls were tuned or not. This is especially
important since you seem to claim that bic is the default, which it hasn't
been for years?!
> basically the goodput is similar with all variants but with
> significantly larger packet losses and timeouts with bic/cubic.
I've never understood what exactly is wrong with a larger amount of
packet losses if they happen before (or at) the bottleneck; here they're
just a consequence of having the larger window.
> i am pretty sure that this would happen with any algorithm -including h-tcp-
> that makes the probing more aggressive than the van jacobson linear
> phase.
Probably, but you seem to completely lack the analysis to find out why the
rtos actually happened, whether they were due to most of the window being
lost or perhaps to spurious rtos?
--
i.
* Re: TCP default congestion control in linux should be newreno
2008-12-04 12:41 ` Ilpo Järvinen
@ 2008-12-04 15:26 ` Saverio Mascolo
2008-12-04 17:56 ` David Miller
2008-12-04 17:57 ` David Miller
2008-12-04 17:50 ` Luca De Cicco
1 sibling, 2 replies; 14+ messages in thread
From: Saverio Mascolo @ 2008-12-04 15:26 UTC (permalink / raw)
To: Ilpo Järvinen; +Cc: Netdev
On Thu, Dec 4, 2008 at 1:41 PM, Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> wrote:
> On Wed, 3 Dec 2008, Saverio Mascolo wrote:
>
>> we have added plots of cwnd at
>>
>> http://c3lab.poliba.it/index.php/TCP_over_Hsdpa
>>
>> in the case of newreno, westwood+, bic/cubic.
>
> > You lack the most important detail, i.e., the kernel versions used!
2.6
>And
> also whether any sysctls were tuned or not. This is especially
> important since you seem to claim that bic is the default, which it hasn't
> been for years?!
bic has been the default for a couple of years, to my knowledge. btw it still is now.
>
>> basically the goodput is similar with all variants but with
>> significantly larger packet losses and timeouts with bic/cubic.
>
> I've never understood what exactly is wrong with a larger amount of
> packet losses if they happen before (or at) the bottleneck; here they're
> just a consequence of having the larger window.
what matters is goodput. if protocols A and B provide the same goodput,
the better one is the one with fewer retransmissions, because
retransmissions waste bandwidth.
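as a toy illustration (the numbers are hypothetical), with equal
goodput the link must carry goodput*(1+r), where r is the
retransmission ratio, so the protocol with the larger r simply burns
more capacity:

  #include <stdio.h>

  int main(void)
  {
      double goodput = 2.0e6; /* bytes/s delivered to the application */
      double r_a = 0.02;      /* protocol A: 2% retransmissions */
      double r_b = 0.10;      /* protocol B: 10% retransmissions */

      printf("A occupies %.2f MB/s, B occupies %.2f MB/s\n",
             goodput * (1 + r_a) / 1e6, goodput * (1 + r_b) / 1e6);
      return 0;
  }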
>
>> i am pretty sure that this would happen with any algorithm -including h-tcp-
>> that makes the probing more aggressive than the van jacobson linear
>> phase.
>
> Probably, but you seem to completely lack the analysis to find out why the
> rtos actually happened, whether they were due to most of the window being
> lost or perhaps to spurious rtos?
using more aggressive probing (more than linear) increases retransmissions.
sm
>
>
> --
> i.
>
* Re: TCP default congestion control in linux should be newreno
2008-12-04 12:41 ` Ilpo Järvinen
2008-12-04 15:26 ` Saverio Mascolo
@ 2008-12-04 17:50 ` Luca De Cicco
2008-12-04 18:18 ` David Miller
2008-12-04 20:08 ` Ilpo Järvinen
1 sibling, 2 replies; 14+ messages in thread
From: Luca De Cicco @ 2008-12-04 17:50 UTC (permalink / raw)
To: Ilpo Järvinen; +Cc: Saverio Mascolo, Netdev
Dear Ilpo,
please find my replies inline.
On Thu, 4 Dec 2008 14:41:05 +0200 (EET)
"Ilpo Järvinen" <ilpo.jarvinen@helsinki.fi> wrote:
> On Wed, 3 Dec 2008, Saverio Mascolo wrote:
>
> > we have added plots of cwnd at
> >
> > http://c3lab.poliba.it/index.php/TCP_over_Hsdpa
> >
> > > in the case of newreno, westwood+, bic/cubic.
>
> You lack the most important detail, i.e., the kernel versions used!
> And also whether any sysctls were tuned or not. This is
> especially important since you seem to claim that bic is the default,
> which it hasn't been for years?!
>
Thank you for pointing that out: we used kernel 2.6.24 with the web100
patch in order to log the internal variables. You are right, cubic is
the default; that was simply a cut & paste error.
As for the sysctls, they were all set to the default values,
with the only exception of tcp_no_metrics_save, which was turned on so
that metrics (such as ssthresh) are not saved across connections, as
specified in [1]. The other sysctls were left at their defaults in order
to assess the performance of the algorithms as a normal user would.
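For completeness: the congestion control can also be selected per
socket through the TCP_CONGESTION socket option (available since
2.6.13), instead of the system-wide net.ipv4.tcp_congestion_control
sysctl. A minimal sketch, with error handling omitted:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <string.h>
  #include <sys/socket.h>

  /* Ask the kernel to use the named congestion control module on this
   * socket; this fails if the module is not built in or loaded
   * (e.g. "westwood" requires modprobe tcp_westwood). */
  static int set_cc(int fd, const char *name)
  {
      return setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
                        name, strlen(name));
  }

In our tests, however, we only changed the system-wide default, so as
to reproduce what a normal user would see.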
> > basically the goodput is similar with all variants but with
> > significantly larger packet losses and timeouts with bic/cubic.
>
> I've never understood what exactly is wrong with a larger amount of
> packet losses if they happen before (or at) the bottleneck; here
> they're just a consequence of having the larger window.
Saverio has already replied to this objection. I would like to add a
further consideration: the aggressive probing phase also has the
negative effect of inflating the RTT through excessive
queuing (see the RTT time evolution in the Cwnd/RTT figures).
>
> > i am pretty sure that this would happen with any algorithm -including
> > h-tcp- that makes the probing more aggressive than the van
> > jacobson linear phase.
>
> Probably, but you seem to completely lack the analysis to find out
> why the rtos actually happened, whether they were due to most of the
> window being lost or perhaps to spurious rtos?
Why are you suggesting spurious rtos? To my understanding, spurious
rtos should be mostly due to the link-layer retransmissions, which
are orthogonal to the congestion control algorithm employed.
Say the average number of spurious timeouts is X, independent
of the algorithm; the remaining timeouts should then be caused by
congestion, and that, IMHO, is what differentiates the
NewReno/Westwood+ pair from the Bic/Cubic one.
However, the high number of timeouts caused by Bic (and other TCP
variants) has already been observed in [2] in a different scenario.
Best regards,
Luca
--
Refs.
[1] http://www.linuxfoundation.org/en/Net:TCP_testing
[2] Saverio Mascolo and Francesco Vacirca, "The Effect of Reverse Traffic
on TCP Congestion Control Algorithms", Protocols for Fast Long-distance
Networks (PFLDnet), Nara, Japan, Feb. 2006.
(Available at http://c3lab.poliba.it/images/2/27/Pfldnet06.pdf)
* Re: TCP default congestion control in linux should be newreno
2008-12-04 15:26 ` Saverio Mascolo
@ 2008-12-04 17:56 ` David Miller
2008-12-04 17:57 ` David Miller
1 sibling, 0 replies; 14+ messages in thread
From: David Miller @ 2008-12-04 17:56 UTC (permalink / raw)
To: saverio.mascolo; +Cc: ilpo.jarvinen, netdev
From: "Saverio Mascolo" <saverio.mascolo@gmail.com>
Date: Thu, 4 Dec 2008 16:26:30 +0100
> On Thu, Dec 4, 2008 at 1:41 PM, Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> wrote:
> > On Wed, 3 Dec 2008, Saverio Mascolo wrote:
> >
> >> we have added plots of cwnd at
> >>
> >> http://c3lab.poliba.it/index.php/TCP_over_Hsdpa
> >>
> >> in the case of newreno, westwood+, bic/cubic.
> >
> > You lack the most important detail, i.e., the kernel versions used!
>
> 2.6
There are many different "2.6" kernels, which one in particular?
* Re: TCP default congestion control in linux should be newreno
2008-12-04 15:26 ` Saverio Mascolo
2008-12-04 17:56 ` David Miller
@ 2008-12-04 17:57 ` David Miller
1 sibling, 0 replies; 14+ messages in thread
From: David Miller @ 2008-12-04 17:57 UTC (permalink / raw)
To: saverio.mascolo; +Cc: ilpo.jarvinen, netdev
From: "Saverio Mascolo" <saverio.mascolo@gmail.com>
Date: Thu, 4 Dec 2008 16:26:30 +0100
> On Thu, Dec 4, 2008 at 1:41 PM, Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> wrote:
> >And
> > also whether any sysctls were tuned or not. This is especially
> > important since you seem to claim that bic is the default, which it hasn't
> > been for years?!
>
> bic has been the default for a couple of years, to my knowledge. btw it still is now.
Currently cubic is the default, and it has been for some
time now.
* Re: TCP default congestion control in linux should be newreno
2008-12-04 17:50 ` Luca De Cicco
@ 2008-12-04 18:18 ` David Miller
2008-12-04 19:05 ` Saverio Mascolo
2008-12-04 20:08 ` Ilpo Järvinen
1 sibling, 1 reply; 14+ messages in thread
From: David Miller @ 2008-12-04 18:18 UTC (permalink / raw)
To: ldecicco; +Cc: ilpo.jarvinen, saverio.mascolo, netdev
From: Luca De Cicco <ldecicco@gmail.com>
Date: Thu, 4 Dec 2008 18:50:22 +0100
> Thank you for pointing that out: we used kernel 2.6.24 with the web100
That is so ancient, especially TCP wise, it isn't even funny.
And using the web100 patch adds yet more variables to the equation.
It doesn't really reflect what users are actually running in the world
today, not by a country mile.
You're not comparing what we're actually shipping in the kernel.org
kernel at all with your tests, which means your tests and results are
something close to useless for us.
I don't understand why there are so many people who dice up their
kernels, or use very old kernels, then do some "research" and then
suggest we should change this or that with the kernel.org kernel as a
result.
Sorry, nobody here is going to take that seriously at all.
* Re: TCP default congestion control in linux should be newreno
2008-12-04 18:18 ` David Miller
@ 2008-12-04 19:05 ` Saverio Mascolo
2008-12-04 19:54 ` David Miller
2008-12-04 21:05 ` Ilpo Järvinen
0 siblings, 2 replies; 14+ messages in thread
From: Saverio Mascolo @ 2008-12-04 19:05 UTC (permalink / raw)
To: David Miller; +Cc: ldecicco, ilpo.jarvinen, netdev
dear david,
we have done many experiments, along with many other "researchers" as you
say. as a consequence, i have come to believe that changing the
probing phase of the van jacobson TCP, which is linear, would not be
a wise thing.
the rationale of the VJ choice seems simple to me: if the window is
increased by one packet in one rtt, i can drop at most one packet in
one rtt and so i have to recover only one packet in one rtt (note that
the rtt is the feedback reaction time). if cwnd is increased by N packets,
i could drop N packets and i could need to recover N packets. this is the
problem here.
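in code, the dynamics i am defending amount to something like this
minimal sketch (in the spirit of the kernel's tcp_reno_cong_avoid(),
not a copy of it):

  struct cc {
      unsigned int cwnd;     /* congestion window, in segments */
      unsigned int ssthresh; /* slow start threshold, in segments */
      unsigned int acked;    /* acks counted within one window */
  };

  static void on_ack(struct cc *s)
  {
      if (s->cwnd < s->ssthresh) {
          s->cwnd++;         /* slow start: cwnd doubles per rtt */
      } else if (++s->acked >= s->cwnd) {
          s->acked = 0;
          s->cwnd++;         /* linear phase: +1 segment per rtt */
      }
  }

  static void on_loss(struct cc *s)
  {
      s->ssthresh = s->cwnd / 2 > 2 ? s->cwnd / 2 : 2;
      s->cwnd = s->ssthresh; /* multiplicative decrease */
  }

increasing cwnd by at most one segment per rtt bounds the recovery
work to one packet per feedback time; any super-linear rule gives up
that bound.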
the window dynamics of the newreno cwnd behave better; they are smoother.
other cwnds seem chaotic.
if you have other experimental results showing the opposite, please
let me know.
thanks,
saverio
On Thu, Dec 4, 2008 at 7:18 PM, David Miller <davem@davemloft.net> wrote:
> From: Luca De Cicco <ldecicco@gmail.com>
> Date: Thu, 4 Dec 2008 18:50:22 +0100
>
>> Thank you for pointing that out: we used kernel 2.6.24 with the web100
>
> That is so ancient, especially TCP wise, it isn't even funny.
>
> And using the web100 patch adds yet more variables to the equation.
>
> It doesn't really reflect what users are actually running in the world
> today, not by a country mile.
>
> You're not comparing what we're actually shipping in the kernel.org
> kernel at all with your tests, which means your tests and results are
> something close to useless for us.
>
> I don't understand why there are so many people who dice up their
> kernels, or use very old kernels, then do some "research" and then
> suggest we should change this or that with the kernel.org kernel as a
> result.
>
> Sorry, nobody here is going to take that seriously at all.
>
* Re: TCP default congestion control in linux should be newreno
2008-12-04 19:05 ` Saverio Mascolo
@ 2008-12-04 19:54 ` David Miller
2008-12-04 20:05 ` Saverio Mascolo
2008-12-04 21:05 ` Ilpo Järvinen
1 sibling, 1 reply; 14+ messages in thread
From: David Miller @ 2008-12-04 19:54 UTC (permalink / raw)
To: saverio.mascolo; +Cc: ldecicco, ilpo.jarvinen, netdev
From: "Saverio Mascolo" <saverio.mascolo@gmail.com>
Date: Thu, 4 Dec 2008 20:05:22 +0100
> the rationale of the VJ choice seems simple to me: if the window is
> increased by one packet in one rtt, i can drop at most one packet in
> one rtt and so i have to recover only one packet in one rtt (note that
> the rtt is the feedback reaction time). if cwnd is increased by N packets,
> i could drop N packets and i could need to recover N packets. this is the
> problem here.
The bigger problem is that windows are so much larger now that going
back to the stone ages and newreno recovery behavior is absolutely not
an option any more.
So "go back to newreno" is not a suggestion to be taken seriously.
* Re: TCP default congestion control in linux should be newreno
2008-12-04 19:54 ` David Miller
@ 2008-12-04 20:05 ` Saverio Mascolo
2008-12-04 20:47 ` David Miller
0 siblings, 1 reply; 14+ messages in thread
From: Saverio Mascolo @ 2008-12-04 20:05 UTC (permalink / raw)
To: David Miller; +Cc: ldecicco, ilpo.jarvinen, netdev
>
> The bigger problem is that windows are so much larger now that going
> back to the stone ages and newreno recovery behavior is absolutely not
> an option any more.
the average window cannot be much larger, because average windows follow
network capacity in any case. so you can end up only with more
oscillating windows (i.e. more timeouts, more retransmissions)
>
> So "go back to newreno" is not a suggestion to be taken seriously.
this should be supported by experiments. i do not think reno is stone
age; i think it is very close to the optimal you can get given the
feedback available in current networks.
you should show fewer timeouts, fewer retransmissions and larger goodput
when using a rocket tcp.
-s
* Re: TCP default congestion control in linux should be newreno
2008-12-04 17:50 ` Luca De Cicco
2008-12-04 18:18 ` David Miller
@ 2008-12-04 20:08 ` Ilpo Järvinen
1 sibling, 0 replies; 14+ messages in thread
From: Ilpo Järvinen @ 2008-12-04 20:08 UTC (permalink / raw)
To: Luca De Cicco; +Cc: Saverio Mascolo, Netdev, David Miller
On Thu, 4 Dec 2008, Luca De Cicco wrote:
> Dear Ilpo,
>
> please find my replies inline.
>
> On Thu, 4 Dec 2008 14:41:05 +0200 (EET)
> "Ilpo Järvinen" <ilpo.jarvinen@helsinki.fi> wrote:
>
> > On Wed, 3 Dec 2008, Saverio Mascolo wrote:
> >
> > > we have added plots of cwnd at
> > >
> > > http://c3lab.poliba.it/index.php/TCP_over_Hsdpa
> > >
> > > in the case of newreno, westwood+, bic/cubic.
> >
> > You lack the most important detail, i.e., the kernel versions used!
> > And also whether any sysctls were tuned or not. This is
> > especially important since you seem to claim that bic is the default,
> > which it hasn't been for years?!
> >
>
> Thank you for pointing that out: we used kernel 2.6.24 with the web100
> patch in order to log the internal variables.
Thanks, this gives much more informative context to the results.
> You are right, cubic is the default; that was simply a cut & paste error.
Ok.
> As for the sysctls, they were all set to the default values,
> with the only exception of tcp_no_metrics_save, which was turned on so
> that metrics (such as ssthresh) are not saved across connections, as
> specified in [1]. The other sysctls were left at their defaults in order
> to assess the performance of the algorithms as a normal user would.
Did you know that 2.6.24 has a broken frto fallback to conventional
recovery? And frto is enabled by default... Once that bug was found,
stable-2.6.24 was already obsolete (and therefore not updated by the
stable team, so it never got fixed). That bug affects the behavior after
each rto quite a lot (basically it invalidates all 2.6.24 results if
any rto occurred during a test)! The fix is included from 2.6.25.7 and
2.6.26 onward.
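This is also why results should at minimum record the kernel version
and the relevant sysctls; a trivial sketch of such a check
(net.ipv4.tcp_frto is the real sysctl, the rest is just illustration):

  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/sys/net/ipv4/tcp_frto", "r");
      int frto = -1;

      if (f) {
          if (fscanf(f, "%d", &frto) != 1)
              frto = -1; /* unreadable value */
          fclose(f);
      }
      printf("net.ipv4.tcp_frto = %d\n", frto); /* 0 means disabled */
      return 0;
  }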
Yes, we know that ubuntu hardy cared very little about fixing that and
instead asked users to do pointless bisects again and again; I think they
never got around to fixing it for real until upstream did it for them in
intrepid. So in a sense it's what many people would run, but I don't find
that a good enough reason to make decisions about the future (which
includes the fix).
> > > basically the goodput is similar with all variants but with
> > > significantly larger packet losses and timeouts with bic/cubic.
> >
> > I've never understood what exactly is wrong with a larger amount of
> > packet losses if they happen before (or at) the bottleneck; here
> > they're just a consequence of having the larger window.
>
> Saverio has already replied to this objection. I would like to add a
> further consideration: the aggressive probing phase also has the
> negative effect of inflating the RTT through excessive
> queuing (see the RTT time evolution in the Cwnd/RTT figures).
Obviously cwnd and rtt have a strong correlation if any queuing happens.
In order to satisfy the needs of other paths (large bdp), there is this
aggressiveness tradeoff one makes; yes, the consequence is that the
window will be bigger and will hit more losses when they happen, but those
retransmissions won't be unnecessary and will therefore only waste spare
resources on non-bottleneck links; the utilization of the bottleneck is
kept full as long as the queue was long enough.
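The correlation is plain queuing arithmetic; a toy model (all numbers
hypothetical) of how cwnd beyond the bandwidth-delay product shows up
directly in the rtt:

  /* once cwnd exceeds the bdp, the excess sits in the bottleneck
   * queue, and every byte of backlog adds backlog/rate of delay */
  double rtt_with_queue(double base_rtt, double cwnd_bytes, double rate)
  {
      double bdp = rate * base_rtt; /* bytes the pipe itself holds */
      double backlog = cwnd_bytes > bdp ? cwnd_bytes - bdp : 0.0;
      return base_rtt + backlog / rate;
  }

E.g. with rate = 1 MB/s and base_rtt = 0.1 s, a cwnd of 200 kB doubles
the measured rtt.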
> > > i am pretty sure that this would happen with any algorithm -including
> > > h-tcp- that makes the probing more aggressive than the van
> > > jacobson linear phase.
> >
> > Probably, but you seem to completely lack the analysis to find out
> > why the rtos actually happened, whether they were due to most of the
> > window being lost or perhaps to spurious rtos?
>
> Why are you suggesting spurious rtos? To my understanding, spurious
> rtos should be mostly due to the link-layer retransmissions, which
> are orthogonal to the congestion control algorithm employed.
> Say the average number of spurious timeouts is X, independent
> of the algorithm; the remaining timeouts should then be caused by
> congestion, and that, IMHO, is what differentiates the
> NewReno/Westwood+ pair from the Bic/Cubic one.
Dynamics of tcp are quite hard to figure out. ...I heard a wise saying
yesterday (though in a bit different context) that it's not very wise, as
a scientist, to be guessing at things.
And you seem to totally ignore the nature of those wireless links. I
haven't had time to check how a real-world hsdpa queue behaves, but from
what I know of umts I'd say it's such a complex, dynamic setup that
your simple assumptions here are totally irrelevant in reality. And I
doubt that hsdpa differs that much from umts behavior, though ymmv a bit
depending on which operator's and manufacturer's devices are in question.
Anyway, now that I've heard it was the broken frto, many things might
just vanish if the fixed kernel were used.
> However, the high number of timeouts caused by Bic (and other TCP
> variants) has already been observed in [2] in a different scenario.
>
> [2] Saverio Mascolo and Francesco Vacirca, "The Effect of Reverse Traffic
> on TCP Congestion Control Algorithms", Protocols for Fast Long-distance
> Networks (PFLDnet), Nara, Japan, Feb. 2006.
I'll take a look into that later. ...I hope it tells which kernel version
was in use.
--
i.
* Re: TCP default congestion control in linux should be newreno
2008-12-04 20:05 ` Saverio Mascolo
@ 2008-12-04 20:47 ` David Miller
0 siblings, 0 replies; 14+ messages in thread
From: David Miller @ 2008-12-04 20:47 UTC (permalink / raw)
To: saverio.mascolo; +Cc: ldecicco, ilpo.jarvinen, netdev
From: "Saverio Mascolo" <saverio.mascolo@gmail.com>
Date: Thu, 4 Dec 2008 21:05:37 +0100
> > The bigger problem is that windows are so much larger now that going
> > back to the stone ages and newreno recovery behavior is absolutely not
> > an option any more.
>
> the average window cannot be much larger, because average windows follow
> network capacity in any case. so you can end up only with more
> oscillating windows (i.e. more timeouts, more retransmissions)
But loss-based fast recovery is more expensive unless you
modify the congestion window handling; reno simply does
not scale.
* Re: TCP default congestion control in linux should be newreno
2008-12-04 19:05 ` Saverio Mascolo
2008-12-04 19:54 ` David Miller
@ 2008-12-04 21:05 ` Ilpo Järvinen
2008-12-04 23:56 ` John Heffner
1 sibling, 1 reply; 14+ messages in thread
From: Ilpo Järvinen @ 2008-12-04 21:05 UTC (permalink / raw)
To: Saverio Mascolo; +Cc: David Miller, ldecicco, Netdev
On Thu, 4 Dec 2008, Saverio Mascolo wrote:
> we have done many experiments, along with many other "researchers" as you
> say. as a consequence, i have come to believe that changing the
> probing phase of the van jacobson TCP, which is linear, would not be
> a wise thing.
> the rationale of the VJ choice seems simple to me: if the window is
> increased by one packet in one rtt, i can drop at most one packet in
> one rtt and so i have to recover only one packet in one rtt (note that
> the rtt is the feedback reaction time). if cwnd is increased by N packets,
> i could drop N packets and i could need to recover N packets. this is the
> problem here.
1) N drops are not as bad as you seem to imply; sadly it is a very common
misconception in much tcp-related research that fast retransmits and the
following recovery are bad things in themselves. SACK handles them very
efficiently as long as N is considerably less than cwnd and the buffer
before the bottleneck was adequate to keep the link fully utilized over
that recovery rtt. Thus the first claim (the vj one) does not follow from
the latter one (N pkts is bad); it is bogus reasoning.
2) The main benefit of maintaining VJ's one-packet increment is the better
inter-flow fairness on short time scales, which again is a tradeoff.
> the window dynamics of the newreno cwnd behave better; they are smoother.
> other cwnds seem chaotic.
"seem chaotic" (to humans) != chaotic btw. E.g., any rapid change may
seem chaotic. Also utms buffers might "seem chaotic" btw :-) which
makes your equation even more complex one (this is not a joke, I've
actually probed its behavior a bit).
--
i.
* Re: TCP default congestion control in linux should be newreno
2008-12-04 21:05 ` Ilpo Järvinen
@ 2008-12-04 23:56 ` John Heffner
0 siblings, 0 replies; 14+ messages in thread
From: John Heffner @ 2008-12-04 23:56 UTC (permalink / raw)
To: Ilpo Järvinen; +Cc: Saverio Mascolo, David Miller, ldecicco, Netdev
On Thu, Dec 4, 2008 at 1:05 PM, Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> wrote:
> On Thu, 4 Dec 2008, Saverio Mascolo wrote:
>
>> we have done many experiments, along with many other "researchers" as you
>> say. as a consequence, i have come to believe that changing the
>> probing phase of the van jacobson TCP, which is linear, would not be
>> a wise thing.
>> the rationale of the VJ choice seems simple to me: if the window is
>> increased by one packet in one rtt, i can drop at most one packet in
>> one rtt and so i have to recover only one packet in one rtt (note that
>> the rtt is the feedback reaction time). if cwnd is increased by N packets,
>> i could drop N packets and i could need to recover N packets. this is the
>> problem here.
>
> 1) N drops are not as bad as you seem to imply; sadly it is a very common
> misconception in much tcp-related research that fast retransmits and the
> following recovery are bad things in themselves. SACK handles them very
> efficiently as long as N is considerably less than cwnd and the buffer
> before the bottleneck was adequate to keep the link fully utilized over
> that recovery rtt. Thus the first claim (the vj one) does not follow from
> the latter one (N pkts is bad); it is bogus reasoning.
One thing to note: you only get N drops upon a cwnd increase of N when
you have simple drop-tail queues. AQM significantly changes the
behavior here, but drop-tail is still nearly ubiquitous.
A notable effect of the multiple losses associated with increasing cwnd by
multiple segments, aside from outright SACK bugs, is that it tends to
synchronize losses across flows, since all N drops are not usually
from the same flow. (Again, talking only about drop-tail here.)
Synchronized losses lead to lower link utilization with many flows.
-John