* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Ian McDonald @ 2006-09-22 17:42 UTC (permalink / raw)
To: Douglas Leith; +Cc: end2end-interest, netdev, Stephen Hemminger
On 9/23/06, Douglas Leith <doug.leith@nuim.ie> wrote:
> For those interested in TCP for high-speed environments, and perhaps
> also people interested in TCP evaluation generally, I'd like to point
> you towards the results of a detailed experimental study which are now
> available at:
>
> http://www.hamilton.ie/net/eval/ToNfinal.pdf
>
> This study consistently compares Scalable-TCP, HS-TCP, BIC-TCP, FAST-TCP
> and H-TCP performance under a wide range of conditions, including
> mixes of long- and short-lived flows. This study has now been subject to
> peer review (to hopefully give it some legitimacy) and is due to appear
> in the Transactions on Networking.
>
> The conclusions (see summary below) seem especially topical as BIC-TCP
> is currently widely deployed as the default algorithm in Linux.
>
> Comments appreciated. Our measurements are publicly available - on the
> web or drop me a line if you'd like a copy.
>
> Summary:
> In this paper we present experimental results evaluating the
> performance of the Scalable-TCP, HS-TCP, BIC-TCP, FAST-TCP and
> H-TCP proposals in a series of benchmark tests.
>
> We find that many recent proposals perform surprisingly poorly in
> even the simplest test, namely achieving fairness between two
> competing flows in a dumbbell topology with the same round-trip
> times and shared bottleneck link. Specifically, both Scalable-TCP
> and FAST TCP exhibit very substantial unfairness in this test.
>
> We also find that Scalable-TCP, HS-TCP and BIC-TCP induce significantly
> greater RTT unfairness between competing flows with different round-trip
> times. The unfairness can be an order of magnitude greater than that
> with standard TCP and is such that flows with longer round-trip times
> can be completely starved of bandwidth.
>
> While the TCP proposals studied are all successful at improving
> the link utilisation in a relatively static environment with
> long-lived flows, in our tests many of the proposals exhibit poor
> responsiveness to changing network conditions. We observe that
> Scalable-TCP, HS-TCP and BIC-TCP can all suffer from extremely
> slow (>100s) convergence times following the startup of a new
> flow. We also observe that while FAST-TCP flows typically converge
> quickly initially, flows may later diverge again to create
> significant and sustained unfairness.
>
> --Doug
>
> Hamilton Institute
> www.hamilton.ie
>
>
>
Interesting reading and I am replying to netdev@vger.kernel.org as
well. I will read it in more detail later, but my first questions/comments
are:
- have you tested CUBIC subsequently, as it is meant to fix many of
the RTT issues? It will probably become the default in 2.6.19.
- have you subsequently tested on kernels more recent than 2.6.6?
Looks like some very useful information.
Regards,
Ian
--
Ian McDonald
Web: http://wand.net.nz/~iam4
Blog: http://imcdnzl.blogspot.com
WAND Network Research Group
Department of Computer Science
University of Waikato
New Zealand
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Douglas Leith @ 2006-09-22 19:38 UTC (permalink / raw)
To: Ian McDonald; +Cc: end2end-interest, netdev, Stephen Hemminger
Ian McDonald wrote:
> Interesting reading and I am replying to netdev@vger.kernel.org as
> well. I will read it in more detail later, but my first questions/comments
> are:
> - have you tested CUBIC subsequently, as it is meant to fix many of
> the RTT issues? It will probably become the default in 2.6.19.
> - have you subsequently tested on kernels more recent than 2.6.6?
>
> Looks like some very useful information.
>
> Regards,
>
> Ian
Ian,
We haven't tested CUBIC yet. Inevitably there's a delay in running
tests, getting them properly reviewed (essential for credibility, I
think, when results can be controversial) and so on, so there are quite
a few proposed algorithms that post-date our tests and so are not
covered, including CUBIC. I think this certainly motivates further
rounds of testing. We have tested later kernels, but we found the
results are much the same as with the 2.6.6 kernel used (which included
SACK processing patches etc. that have now largely been incorporated
into later kernels).
I wasn't aware of the planned move to CUBIC in Linux. Can I ask the
rationale for this? CUBIC is, of course, closely related to H-TCP
(borrowing the H-TCP idea of using the elapsed time since last backoff
as the quantity used to adjust the cwnd increase rate), which *is*
tested in the reported study. I'd be more than happy to run tests on
CUBIC, and I reckon we should do this sooner rather than later now that
you have flagged up plans to roll out CUBIC.
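For anyone who hasn't seen the two growth rules side by side, here is a
minimal sketch of the shared idea (my own illustration, using the
constants from the published H-TCP and CUBIC proposals, which may
differ from any particular kernel implementation):

    # Both rules make cwnd growth a function of the elapsed time since
    # the last backoff, rather than a fixed step per RTT.

    def htcp_alpha(delta, delta_l=1.0):
        """H-TCP per-RTT increase (packets), delta seconds after backoff."""
        if delta <= delta_l:       # low-speed regime: behave like standard TCP
            return 1.0
        d = delta - delta_l
        return 1.0 + 10.0 * d + (d / 2.0) ** 2

    def cubic_window(t, w_max, c=0.4, beta=0.2):
        """CUBIC target window, t seconds after backing off from w_max."""
        k = ((w_max * beta) / c) ** (1.0 / 3.0)  # time at which w_max is regained
        return c * (t - k) ** 3 + w_max

In both cases the aggressiveness of the increase adapts to the time
elapsed since the last backoff, which is the borrowed idea referred to
above.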
Doug
Hamilton Institute
www.hamilton.ie
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Ian McDonald @ 2006-09-22 20:04 UTC (permalink / raw)
To: Douglas Leith; +Cc: end2end-interest, netdev, Stephen Hemminger
> I wasn't aware of the planned move to CUBIC in Linux. Can I ask the
> rationale for this? CUBIC is, of course, closely related to H-TCP
> (borrowing the H-TCP idea of using the elapsed time since last backoff
> as the quantity used to adjust the cwnd increase rate), which *is*
> tested in the reported study. I'd be more than happy to run tests on
> CUBIC, and I reckon we should do this sooner rather than later now that
> you have flagged up plans to roll out CUBIC.
>
As I understand it, it is because CUBIC behaves better than BIC for
flows with differing RTTs, and BIC is the current default. Stephen
might like to add to that.
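To illustrate the intuition (a toy sketch of my own, not a claim about
either implementation): an increase rule driven by real time gives
flows with different RTTs roughly the same growth per second, whereas a
per-RTT rule favours the short-RTT flow.

    # Two loss-free flows with RTTs of 20 ms and 200 ms, growing for 10 s.
    # The step sizes (1 packet per RTT, 50 packets per second) are
    # arbitrary choices for the comparison.

    def grow(rtt, seconds, per_rtt_step=None, per_sec_rate=None):
        w, t = 1.0, 0.0
        while t < seconds:
            if per_rtt_step is not None:
                w += per_rtt_step           # per-RTT rule: short RTT wins
            else:
                w += per_sec_rate * rtt     # real-time rule: RTT-independent
            t += rtt
        return w

    for rtt in (0.020, 0.200):
        print(rtt, grow(rtt, 10.0, per_rtt_step=1.0),
              grow(rtt, 10.0, per_sec_rate=50.0))

The per-RTT column ends up differing by the RTT ratio (about 501 vs 51
packets), while the real-time column comes out the same for both flows.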
More tests are always good!
Ian
--
Ian McDonald
Web: http://wand.net.nz/~iam4
Blog: http://imcdnzl.blogspot.com
WAND Network Research Group
Department of Computer Science
University of Waikato
New Zealand
* Re: performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Injong Rhee @ 2006-09-23 2:34 UTC (permalink / raw)
To: doug.leith; +Cc: netdev, floyd, lisongxu, end2end-interest
This is a resend with fixed web links. The links were broken in my previous email -- sorry about multiple transmissions.
---------------------------------------------------------------------------------
Hi Doug,
Thanks for sharing your paper. Also, congratulations on the acceptance of your journal paper to ToN. But I am wondering what's new in this paper. At first glance, I did not find much that differs from your previously publicized reports. How much is this different from the ones you put out on this mailing list a year or two ago, and from the one publicized at PFLDnet in February this year (http://www.hpcc.jp/pfldnet2006/)? At that same workshop we also presented experimental results that show a significant discrepancy from yours, but I am not sure why you did not reference our experimental work presented at that same PFLDnet. Here is a link to a more detailed version of that report, accepted to COMNET: http://netsrv.csc.ncsu.edu/highspeed/comnet-asteppaper.pdf
The main point of contention (which we discussed at that PFLDnet workshop) is the presence of background traffic and the method of adding it. Your report mostly ignores the effect of background traffic. Some text in the paper states that you added some web traffic (10%), but the paper shows only results from scenarios with NO background traffic. Our results differ from yours in many respects. Below are links to our results (these links have been available on our BIC web site for a long time and are also mentioned in our PFLDnet paper; the results use the patch that corrects the H-TCP bugs).
[Convergence and intra protocol fairness]
without background traffic: http://netsrv.csc.ncsu.edu/highspeed/1200/nobk/intra_protocol/intra_protocol.htm
with background traffic: http://netsrv.csc.ncsu.edu/highspeed/1200/bk/intra_protocol/intra_protocol.htm
[RTT fairness]
without background traffic: http://netsrv.csc.ncsu.edu/highspeed/1200/nobk/rtt_fairness/rtt_fairness.htm
with background traffic: http://netsrv.csc.ncsu.edu/highspeed/1200/bk/rtt_fairness/rtt_fairness.htm
[TCP friendliness]
without background traffic: http://netsrv.csc.ncsu.edu/highspeed/1200/nobk/tcp_friendliness/tcp_friendliness.htm
with background traffic: http://netsrv.csc.ncsu.edu/highspeed/1200/bk/tcp_friendliness/tcp_friendliness.htm
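As a rough illustration of what the convergence pages above are probing (a toy synchronized-loss model of my own, not traces from either testbed; the 0.875 backoff is Scalable-TCP's published constant, applied to both rules to isolate the increase behaviour):

    # Two flows share a bottleneck. When their windows fill the pipe,
    # both see a loss and back off together (synchronized loss).

    def run(w1, w2, capacity, add=None, mult=None, beta=0.875, rounds=50):
        for _ in range(rounds):
            while w1 + w2 < capacity:
                if add is not None:   # AIMD: +add packets per RTT
                    w1, w2 = w1 + add, w2 + add
                else:                 # MIMD: Scalable's +0.01/ACK is ~x1.01 per RTT
                    w1, w2 = w1 * mult, w2 * mult
            w1, w2 = w1 * beta, w2 * beta
        return w1, w2

    print(run(100.0, 10.0, 500.0, add=1.0))    # additive: windows converge
    print(run(100.0, 10.0, 500.0, mult=1.01))  # multiplicative: 10:1 ratio persists

An additive increase shrinks the gap between the flows at every backoff; a multiplicative increase preserves the ratio indefinitely, which is why intra-protocol convergence is worth measuring at all.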
After our discussion at that PFLDnet, I puzzled over why we get different results. My guess is that the main difference between your experiments and ours is the inclusion of mid-sized flows with various RTTs; our experience suggests that the RTT variations of mid-sized flows play a very important role in creating significant dynamics in testing environments. The same point about the importance of mid-sized flows with RTT variations has been raised on several occasions by Sally Floyd as well, including at this year's E2E research group meeting. You can find some discussion of the importance of RTT variations in her paper too [http://www.icir.org/models/hotnetsFinal.pdf]. Just having web traffic (all with the same RTT) does not create a realistic environment, as it does nothing about RTT variation; flow sizes also tend to be highly skewed, following a Pareto distribution. But I don't know exactly how you create your testing environment with web traffic; I can only guess from the description of the web traffic in your paper.
Another puzzle is that even with no background traffic we get results different from yours, especially for FAST: with no background traffic, FAST seems to work fairly well in our experiments, with good RTT fairness, but your results show FAST having huge RTT unfairness. That is very strange. Is that because we have different bandwidths and buffer sizes in our setups? I think we need to compare notes further. In the journal paper of FAST experimental results [http://netlab.caltech.edu/publications/FAST-ToN-final-060209-2007.pdf], FAST also seems to work very well with no background traffic. We will verify our results again in exactly the environment described in your report, to make sure we can reproduce your results. In the meantime, here is a sample of our results for FAST:
http://netsrv.csc.ncsu.edu/highspeed/1200/nobk/rtt_fairness/1200--2.4_FAST-2.4_FAST-NONE--400-3-1333--1000-76-3-0-0-0-5-500--200000-0.6-1000-10-1200-64000-150--1/
In this experiment, the FAST flows are essentially perfect. The same result is confirmed in the FAST journal paper [http://netlab.caltech.edu/publications/FAST-ToN-final-060209-2007.pdf, Sections IV.B and C]. But your results show really bad RTT fairness.
Best regards,
Injong
---
Injong Rhee
NCSU
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Douglas Leith @ 2006-09-23 6:34 UTC (permalink / raw)
To: Injong Rhee; +Cc: floyd, netdev, end2end-interest, lisongxu
I suggest you take a closer look, Injong - there is a whole page of
data from tests covering a wide range of levels of background traffic.
These results are all new and, I think, significantly strengthen the
conclusions, as does the expanded explanatory discussion of the
observed behaviour of the various algorithms (the result of a fair bit
of detective work, of course). Your claim that "Your report mostly
ignores the effect of background traffic" is simply not true.
I can't really comment on your own tests without more information,
although I can say that we went to a good bit of trouble to make sure
our results were consistent and reproducible - in fact all our reported
results are from at least five, and usually more, runs of each test. We
were also careful to control for differences in kernel implementation so
that we compare congestion control algorithms rather than other aspects
of the network stack implementation. All of this is documented in the
paper. The kernel we used is available on the web. Our measurements
are also publicly available - the best way forward might be to pick one
or two tests and compare their results in detail, with a view to
diagnosing the source of any differences.
General comments such as "our experience suggests that the RTT
variations of mid-sized flows play a very important role in creating
significant dynamics in testing environments" are not too helpful.
What do you mean by a "mid-sized flow"? What do you mean by
"significant dynamics"? What do you mean by "important role" - is
this quantified? Best to
stick to science rather than grandstanding. This is especially true
when dealing with a sensitive subject such as the evaluation of
competing algorithms.
Re FAST, we have of course discussed our results with the Caltech folks.
As stated in the paper, some of the observed behaviour seems to be
associated with the alpha tuning algorithm. Other behaviour seems to be
associated with packet burst effects that have also been reported
independently by the Caltech folks. Similar results to ours have since
been observed by other groups, I believe. Perhaps differences between
our results point to some issue in your testbed setup.
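For context, here is a sketch of the published FAST window update (my
own paraphrase of the FAST papers, not the Caltech code; gamma = 0.5 is
a typical value from that literature). The point is that alpha is the
number of packets each flow tries to keep queued at the bottleneck, so
the alpha tuning heuristic directly moves the fairness operating point:

    def fast_update(w, base_rtt, rtt, alpha, gamma=0.5):
        # One periodic FAST window update.
        return min(2.0 * w,
                   (1.0 - gamma) * w + gamma * ((base_rtt / rtt) * w + alpha))

At equilibrium w = (base_rtt / rtt) * w + alpha, i.e. each flow parks
roughly alpha packets in the bottleneck buffer. If the buffer cannot
hold the sum of the competing flows' alpha values, the flows cannot all
reach their targets and sustained unfairness can result.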
Doug
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: rhee @ 2006-09-23 7:45 UTC (permalink / raw)
To: Douglas Leith; +Cc: Injong Rhee, floyd, netdev, end2end-interest, lisongxu
Doug Leith wrote:
> I suggest you take a closer look, Injong - there is a whole page of
> data from tests covering a wide range of levels of background traffic.
> These results are all new and, I think, significantly strengthen the
> conclusions, as does the expanded explanatory discussion of the
> observed behaviour of the various algorithms (the result of a fair bit
> of detective work, of course).
I was not sure whether this one new page is enough to justify another
public announcement of this paper -- you have publicized this paper
many times on these mailing lists and also at the workshop. It would
have saved us some time if you had just pointed out what is new.
>
> I can't really comment on your own tests without more information,
> although I can say that we went to a good bit of trouble to make sure
> our results were consistent and reproducible - in fact all our reported
> results are from at least five, and usually more, runs of each test.
I am not doubting your effort here, and I am sure your methods are
correct. I was just pondering why we got different results, and trying
to see if we can come to some understanding of them. Who knows,
together we might run into some fundamental research issues regarding
testing.
Also the "more" information about our own experiment is already given in
the paper and also in our web site. If you could tell what specific info
you need more, I can provide. Let's put our heads together to solve this
mystery of "different results".
>
> General comments such as "our experience suggests that the RTT
> variations of mid-sized flows play a very important role in creating
> significant dynamics in testing environments" are not too helpful.
> What do you mean by a "mid-sized flow"? What do you mean by
> "significant dynamics"? What do you mean by "important role" - is
> this quantified? Best to
> stick to science rather than grandstanding. This is especially true
> when dealing with a sensitive subject such as the evaluation of
> competing algorithms.
I hope you can perhaps enlighten us with this "science". Well, this
WAS just email; there wasn't much space to delve into "science" there.
That is why I gave the link to Floyd and Kohler's paper. Sally's paper
on the role of RTT variations provides a more scientific explanation of
these "dynamics". In case you missed it, here is the link again:
http://www.icir.org/models/hotnetsFinal.pdf. Please read Section 3.3.
Also, regarding mid-sized flows, I am referring to flow lifetimes.
Mid-sized flows are not well represented by a Pareto distribution:
because the distribution (of your web traffic sizes) follows a power
law, the probability mass is concentrated at very short flows, while
very long flows still have relatively high probability in the heavy
tail, so comparatively few flows fall in the middle.
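To make the shape concrete, here is an illustrative sample (my own
sketch; the shape parameter 1.2 is a common choice for heavy-tailed
transfer sizes, not a value from either testbed):

    import random

    random.seed(1)
    shape, scale = 1.2, 10.0    # scale = minimum flow size, in packets
    sizes = [scale * (1.0 - random.random()) ** (-1.0 / shape)
             for _ in range(100000)]

    print(sum(s < 100 for s in sizes),           # ~94%: very short flows
          sum(100 <= s < 10000 for s in sizes),  # ~6%: the mid-sized band
          sum(s >= 10000 for s in sizes))        # a few very long flows

Only a small fraction of the sampled flows fall in the mid-sized band,
which is the imbalance I am describing.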
So, speaking of "science", can you please tell me whether all the
flows in your web traffic have the same RTT or not? And if you could
point me to the results you have from your web traffic tests, instead
of simply waving your hands and saying they are just the same (or
similar) as the results from your NO background traffic tests, I'd
appreciate that very much.
>
> Re FAST, we have of course discussed our results with the Caltech folks.
> As stated in the paper, some of the observed behaviour seems to be
> associated with the alpha tuning algorithm. Other behaviour seems to be
> associated with packet burst effects that have also been reported
> independently by the Caltech folks. Similar results to ours have since
> been observed by other groups, I believe. Perhaps differences between
> our results point to some issue in your testbed setup.
That might be the case; thanks for pointing it out. But it is hard to
explain why we coincidentally got the same results as the FAST folks.
Maybe our testbed and the FAST folks' testbed both have "this issue"
while yours is completely sound and scientific. But I think it has more
to do with our different setups regarding buffer sizes and maximum
bandwidth. FAST does not adapt very well, especially with small
buffers, because of this alpha tuning.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Douglas Leith @ 2006-09-23 9:43 UTC (permalink / raw)
To: rhee; +Cc: Injong Rhee, floyd, netdev, end2end-interest, lisongxu
> I was not sure whether this one new page is enough to justify another
> public announcement of this paper
At the risk of repeating myself, the page referred to contains the
results of approx. 500 new test runs (and we have carried out many more
than that, which are summarised in the text), and directly addresses the
primary concern raised by yourself and others that situations with a mix
of connection lengths may lead to significantly different conclusions
from tests with only long-lived flows. Our finding is that, for the
metrics studied, the mix of flow sizes makes little difference to our
conclusions. That, combined with the scrutiny provided by the peer
review process, greatly strengthens our conclusions and certainly seems
worth reporting.
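(If it helps anyone reproduce this kind of comparison, here is a sketch
of one standard single-number summary of fairness, Jain's index,
offered purely as an illustration rather than as the metric used in the
paper:

    def jain_index(throughputs):
        # Jain's fairness index: 1.0 = perfectly fair; 1/n = one flow takes all.
        n = len(throughputs)
        return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

    print(jain_index([450.0, 450.0]))   # ~1.00: fair split of a link
    print(jain_index([880.0, 20.0]))    # ~0.52: one flow nearly starved

The paper itself reports per-flow throughputs for each test, from which
any such summary can be computed.)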
> I am not doubting your effort here, and I am sure your methods are
> correct. I was just pondering why we got different results, and trying
> to see if we can come to some understanding of them. Who knows,
> together we might run into some fundamental research issues regarding
> testing.
I'm certainly up for taking a closer look at this.
> Sally's paper on the role of RTT variations provides a more
> scientific explanation of these "dynamics". In case you missed it,
> here is the link again: http://www.icir.org/models/hotnetsFinal.pdf.
> Please read Section 3.3.
Section 3.3 of this paper seems to concern "Active Queue Management:
Oscillations". The discussion relates to the queue dynamics of RED.
How is this relevant? All of our tests are for drop-tail queues only.
> Also, regarding mid-sized flows, I am referring to flow lifetimes.
> Mid-sized flows are not well represented by a Pareto distribution:
> because the distribution (of your web traffic sizes) follows a power
> law, the probability mass is concentrated at very short flows, while
> very long flows still have relatively high probability in the heavy
> tail, so comparatively few flows fall in the middle.
I suspect your answers on the previous point and here just re-emphasise
my point. It's not clear, for example, what actual values of flow
lifetime you consider "mid-sized", nor what the basis for those values
is - there are a huge number of measurement studies of traffic
statistics, and if the aim is to get closer to real link behaviour then
it seems sensible to make use of that sort of data. I do agree it might
be interesting to see whether our test results are sensitive to the
connection size distribution used, although I suspect the answer will
be that they are largely insensitive - it should be easy enough to
check, though, if you'd be kind enough to send me details of the sort
of distribution you have in mind.
> That might be the case; thanks for pointing it out. But it is hard to
> explain why we coincidentally got the same results as the FAST folks.
It's hard for me to comment without more information - can you post a
link to the results from the FAST folks that you mention? Perhaps they
might also like to comment here? See also the next comment below.
> But I think it has more to do with our different setups regarding
> buffer sizes and maximum bandwidth. FAST does not adapt very well,
> especially with small buffers, because of this alpha tuning.
I thought you were suggesting in your last post that you obtained
different results for the *same* setup as ours? Some clarity here seems
important, as otherwise your comments are in danger of just serving to
muddy the water.
If the network setup is different, then it's maybe no surprise if the
results are a little different. Our own experience (and a key part of
the rationale for our work) underlines the need to carry out tests over
a broad range of conditions rather than confining testing to a small
number of specific scenarios (e.g. only gigabit-speed links or only
links with large buffers) - otherwise it's hard to get an overall feel
for expected behaviour. We did carry out tests over really quite a wide
range of network conditions, and we do already comment, for example,
that FAST performance depends on the buffer size.
Doug
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Stephen Hemminger @ 2006-09-24 4:18 UTC (permalink / raw)
To: Injong Rhee; +Cc: netdev
Since a lot of the discussion seems to be about emulated environments,
has anyone run tests with the current crop of TCP variants over a real
high-BDP network? The SLAC testbed results
(http://www-iepm.slac.stanford.edu/bw/tcp-eval/) were last updated in
2003. Since real-world traffic is so complex, and there could easily be
higher-order effects, I would prefer that the Linux defaults be based
on observed behaviour, just as the choice of I/O scheduler and other
heuristics is optimized to be right for real workloads.
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Lachlan Andrew @ 2006-09-27 23:20 UTC (permalink / raw)
To: rhee@ncsu.edu
Cc: Douglas Leith, netdev, floyd, lisongxu, Injong Rhee,
end2end-interest
Greetings all,
On 23/09/06, rhee@ncsu.edu <rhee@ncsu.edu> wrote:
> I was just pondering why we got different results, and trying to see
> if we can come to some understanding of them. Who knows, together we
> might run into some fundamental research issues regarding testing.
Since many interested parties will be around LA for PFLDnet, how about
getting together after that (Friday 9 Feb) to re-run some of the
disputed tests on one set of hardware, with everyone present to debate
the results?
You're all welcome to come to Caltech to do the testing. We can
provide a few servers, dummynets and Gigabit switches. Everyone is
welcome to bring their scripts, and any other hardware they need.
If there is interest, we could also have things like a round-table
discussion of the benefits of testing with different file-length
distributions (like long-lived flows to understand what is happening
vs. a range of flows to test suitability for deployment), and the
benefits of repeating other people's tests vs. testing in as many
scenarios as possible.
Who is interested in coming?
Cheers,
Lachlan
--
Lachlan Andrew Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603
* Re: [e2e] performance of BIC-TCP, High-Speed-TCP, H-TCP etc
From: Injong Rhee @ 2006-09-28 16:33 UTC (permalink / raw)
To: l.andrew, rhee; +Cc: Douglas Leith, netdev, floyd, lisongxu, end2end-interest
Sure, I don't mind doing this test. I am currently working with Doug
Leith to get to the bottom of this difference, so by the time we get to
PFLDnet we should have some more findings on this. But I am up for this
challenge.