* Re: TCP Westwood Patch
[not found] <20060524182006.32fd1fcd@localhost>
@ 2006-06-05 16:32 ` Stephen Hemminger
2006-06-06 16:41 ` [PATCH]TCP Westwood+ Luca De Cicco
0 siblings, 1 reply; 4+ messages in thread
From: Stephen Hemminger @ 2006-06-05 16:32 UTC (permalink / raw)
To: Luca De Cicco; +Cc: netdev, David S. Miller, Saverio Mascolo
On Wed, 24 May 2006 18:20:06 +0200
Luca De Cicco <ldecicco@gmail.com> wrote:
> Hi there to the list.
>
> I'm Luca De Cicco, a PhD student working on TCP Westwood+ at Politecnico
> di Bari.
>
> I have attached a patch (it applies to 2.6.16.18 and has also been
> tested on 2.6.15) for the TCP Westwood+ congestion control
> algorithm. It fixes a bug and adds some new features/improvements:
>
> o BUGFIX. The first sample was wrong
>
> o RTT_min is updated each time a timeout event occurs (in order to
> cope with hard handovers in wireless scenarios)
>
> o The bandwidth estimate filter is now initialized with the first
> bandwidth sample in order to give better performance in the case of
> small file transfers.
>
> Best regards,
>
> Luca De Cicco
> Politecnico di Bari
Are you going to split this up, or do you want me to do it?
You probably are busy with school work.
* [PATCH]TCP Westwood+
2006-06-05 16:32 ` TCP Westwood Patch Stephen Hemminger
@ 2006-06-06 16:41 ` Luca De Cicco
2006-06-06 18:53 ` TCP Westwood+ patches Stephen Hemminger
0 siblings, 1 reply; 4+ messages in thread
From: Luca De Cicco @ 2006-06-06 16:41 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: netdev, David S. Miller, Saverio Mascolo
[-- Attachment #1: Type: text/plain, Size: 801 bytes --]
Dear all,
As requested, please find attached the patches (they should be orthogonal) to
tcp_westwood.c. The changes are listed below:
* westwood_comments.diff: Updated comments pointing to essential papers
about TCP Westwood
* westwood_bugfix.diff: Fixes a subtle bug that caused the first sample
to be wrong
* westwood_faster_filter.diff: The bandwidth estimate filter is now
initialized with the first bandwidth sample in order to give better
performance in the case of small file transfers (a standalone sketch
follows the message text below).
* westwood_rtt_min_reset.diff: RTT_min is updated each time a timeout
event occurs (in order to cope with hard handovers in wireless
scenarios such as UMTS). A new inline function, update_rtt_min, has
been added.
Signed-off-by: Luca De Cicco <ldecicco@gmail.com>
Best Regards,
Luca De Cicco
Politecnico di Bari
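A minimal user-space sketch of the filter change carried by
westwood_faster_filter.diff: the same 7/8 low-pass filter as
westwood_do_filter(), but seeded with the first bandwidth sample instead
of starting from zero. Names and the constant sample value are
illustrative only, not kernel code.

#include <stdio.h>

/* Same low-pass filter as westwood_do_filter(): new = (7*old + sample) / 8 */
static unsigned int do_filter(unsigned int a, unsigned int b)
{
	return ((7 * a) + b) >> 3;
}

int main(void)
{
	unsigned int bw_zero = 0;    /* old behaviour: filter starts at 0  */
	unsigned int bw_seeded = 0;  /* patched: first sample seeds filter */
	unsigned int sample = 1000;  /* hypothetical constant bandwidth    */
	int i;

	for (i = 0; i < 10; i++) {
		bw_zero = do_filter(bw_zero, sample);
		if (i == 0)
			bw_seeded = sample;
		else
			bw_seeded = do_filter(bw_seeded, sample);
		printf("ack %2d: zero-init=%4u seeded=%4u\n",
		       i + 1, bw_zero, bw_seeded);
	}
	return 0;
}

With a constant sample, the zero-initialized estimate is still roughly a
quarter below the true value after ten ACKs, while the seeded filter is
right from the first sample; that is the benefit for short transfers.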
[-- Attachment #2: westwood_bugfix.diff --]
[-- Type: text/x-patch; name=westwood_bugfix.diff, Size: 1073 bytes --]
Index: ipv4/tcp_westwood.c
===================================================================
--- ipv4.orig/tcp_westwood.c 2006-06-06 18:08:03.000000000 +0200
+++ ipv4/tcp_westwood.c 2006-06-06 18:09:44.000000000 +0200
@@ -22,6 +22,7 @@
u32 accounted;
u32 rtt;
u32 rtt_min; /* minimum observed RTT */
+ u16 first_ack; /* flag which infers that this is the first ack */
};
@@ -52,6 +53,7 @@
w->rtt_min = w->rtt = TCP_WESTWOOD_INIT_RTT;
w->rtt_win_sx = tcp_time_stamp;
w->snd_una = tcp_sk(sk)->snd_una;
+ w->first_ack = 1;
}
/*
@@ -184,6 +186,15 @@
{
struct tcp_sock *tp = tcp_sk(sk);
struct westwood *w = inet_csk_ca(sk);
+
+ /* Initialise w->snd_una with the first acked sequence number in order
+ * to fix mismatch between tp->snd_una and w->snd_una for the first
+ * bandwidth sample
+ */
+ if(w->first_ack && (event == CA_EVENT_FAST_ACK||event == CA_EVENT_SLOW_ACK)) {
+ w->snd_una = tp->snd_una;
+ w->first_ack = 0;
+ }
switch(event) {
case CA_EVENT_FAST_ACK:
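A note on what this bugfix addresses, sketched outside the kernel with
illustrative names: Westwood+ counts newly acked bytes as the difference
between tp->snd_una and the value latched in w->snd_una, and the running
total bk feeds the bandwidth sample bk/delta. If w->snd_una still holds
whatever was captured at init time, that first difference need not
correspond to data acked while the estimator was running, so the first
sample could be wrong; re-latching w->snd_una on the first FAST/SLOW ACK,
as the hunk above does, gives the first sample a clean baseline.

#include <stdint.h>

/* Sketch of the acked-bytes accounting behind the bandwidth sample
 * (compare westwood_fast_bw() in the kernel). Sequence numbers are
 * 32-bit and wrap, hence uint32_t arithmetic. Not kernel code. */
static uint32_t account_acked(uint32_t tp_snd_una, uint32_t *w_snd_una,
			      uint32_t *w_bk)
{
	uint32_t acked = tp_snd_una - *w_snd_una; /* newly acked bytes */

	*w_bk += acked;           /* bk is later divided by delta -> bw */
	*w_snd_una = tp_snd_una;  /* baseline for the next ACK          */
	return acked;
}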
[-- Attachment #3: westwood_comments.diff --]
[-- Type: text/x-patch; name=westwood_comments.diff, Size: 1432 bytes --]
Index: ipv4/tcp_westwood.c
===================================================================
--- ipv4.orig/tcp_westwood.c 2006-06-06 17:56:13.000000000 +0200
+++ ipv4/tcp_westwood.c 2006-06-06 17:57:13.000000000 +0200
@@ -1,9 +1,29 @@
/*
- * TCP Westwood+
+ * TCP Westwood+: end-to-end bandwidth estimation for TCP
*
- * Angelo Dell'Aera: TCP Westwood+ support
+ * Angelo Dell'Aera: author of the first version of TCP Westwood+ in Linux 2.4
+ * Luca De Cicco: current support of Westwood+ and author of the last patch
+ * with: updated RTT_min and initial bandwidth estimate
+ *
+ * Support at http://c3lab.poliba.it/index.php/Westwood
+ * Main references in literature:
+ *
+ * - Mascolo S, Casetti, M. Gerla et al.
+ * "TCP Westwood: bandwidth estimation for TCP" Proc. ACM Mobicom 2001
+ *
+ * - A. Grieco, s. Mascolo
+ * "Performance evaluation of New Reno, Vegas, Westwood+ TCP" ACM Computer
+ * Comm. Review, 2004
+ *
+ * - A. Dell'Aera, L. Grieco, S. Mascolo.
+ * "Linux 2.4 Implementation of Westwood+ TCP with Rate-Halving :
+ * A Performance Evaluation Over the Internet" (ICC 2004), Paris, June 2004
+ *
+ * Westwood+ employs end-to-end bandwidth measurement to set cwnd and
+ * ssthresh after packet loss. The probing phase is as the original Reno.
*/
+
#include <linux/config.h>
#include <linux/mm.h>
#include <linux/module.h>
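To make the summary in the new header comment concrete: after a loss,
Westwood+ sets ssthresh from the estimated bandwidth-delay product rather
than halving cwnd as Reno does. The sketch below is a back-of-the-envelope
illustration in ordinary units, not the kernel helper (whose guards and
jiffies/segment scaling differ).

#include <stdio.h>

/* Illustrative only: ssthresh after loss ~ bw_est * RTT_min / MSS,
 * i.e. the estimated bandwidth-delay product in segments, floored at
 * two segments. bw_est is in bytes/s, rtt_min in seconds, mss in bytes. */
static unsigned int westwood_ssthresh_sketch(double bw_est, double rtt_min,
					     unsigned int mss)
{
	unsigned int segs = (unsigned int)(bw_est * rtt_min / mss);

	return segs > 2 ? segs : 2;
}

int main(void)
{
	/* hypothetical path: 10 Mbit/s, 40 ms minimum RTT, 1460-byte MSS */
	printf("ssthresh after loss: %u segments\n",
	       westwood_ssthresh_sketch(10e6 / 8, 0.040, 1460));
	return 0;
}

For this hypothetical path the estimate works out to about 34 segments,
instead of whatever half of the pre-loss cwnd happened to be.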
[-- Attachment #4: westwood_faster_filter.diff --]
[-- Type: text/x-patch; name=westwood_faster_filter.diff, Size: 799 bytes --]
Index: ipv4/tcp_westwood.c
===================================================================
--- ipv4.orig/tcp_westwood.c 2006-06-06 18:14:50.000000000 +0200
+++ ipv4/tcp_westwood.c 2006-06-06 18:16:49.000000000 +0200
@@ -65,8 +65,17 @@
static inline void westwood_filter(struct westwood *w, u32 delta)
{
- w->bw_ns_est = westwood_do_filter(w->bw_ns_est, w->bk / delta);
- w->bw_est = westwood_do_filter(w->bw_est, w->bw_ns_est);
+ /*
+ * If the filter is empty fill it with the first sample of bandwidth
+ */
+ if (w->bw_ns_est==0 && w->bw_est==0)
+ {
+ w->bw_ns_est = w->bk / delta;
+ w->bw_est = w->bw_ns_est ;
+ } else {
+ w->bw_ns_est = westwood_do_filter(w->bw_ns_est, w->bk / delta);
+ w->bw_est = westwood_do_filter(w->bw_est, w->bw_ns_est);
+ }
}
/*
[-- Attachment #5: westwood_rtt_min_reset.diff --]
[-- Type: text/x-patch; name=westwood_rtt_min_reset.diff, Size: 1820 bytes --]
Index: ipv4/tcp_westwood.c
===================================================================
--- ipv4.orig/tcp_westwood.c 2006-06-06 18:10:22.000000000 +0200
+++ ipv4/tcp_westwood.c 2006-06-06 18:13:07.000000000 +0200
@@ -15,6 +15,7 @@
struct westwood {
u32 bw_ns_est; /* first bandwidth estimation..not too smoothed 8) */
u32 bw_est; /* bandwidth estimate */
+ u16 reset_rtt_min; /* Please reset RTT min to RTT sample*/
u32 rtt_win_sx; /* here starts a new evaluation... */
u32 bk;
u32 snd_una; /* used for evaluating the number of acked bytes */
@@ -50,6 +51,7 @@
w->bw_est = 0;
w->accounted = 0;
w->cumul_ack = 0;
+ w->reset_rtt_min = 1;
w->rtt_min = w->rtt = TCP_WESTWOOD_INIT_RTT;
w->rtt_win_sx = tcp_time_stamp;
w->snd_una = tcp_sk(sk)->snd_una;
@@ -110,6 +112,19 @@
}
}
+static inline void update_rtt_min(struct sock *sk)
+{
+ struct westwood *w = inet_csk_ca(sk);
+
+ if (w->reset_rtt_min) {
+ w->rtt_min = w->rtt;
+ w->reset_rtt_min = 0;
+ } else {
+ w->rtt_min = min(w->rtt, w->rtt_min);
+ }
+}
+
+
/*
* @westwood_fast_bw
* It is called when we are in fast path. In particular it is called when
@@ -125,7 +140,7 @@
w->bk += tp->snd_una - w->snd_una;
w->snd_una = tp->snd_una;
- w->rtt_min = min(w->rtt, w->rtt_min);
+ update_rtt_min(sk);
}
/*
@@ -207,12 +222,14 @@
case CA_EVENT_FRTO:
tp->snd_ssthresh = westwood_bw_rttmin(sk);
+ /* Please update RTT_min when next ack arrives */
+ w->reset_rtt_min = 1;
break;
case CA_EVENT_SLOW_ACK:
westwood_update_window(sk);
w->bk += westwood_acked_count(sk);
- w->rtt_min = min(w->rtt, w->rtt_min);
+ update_rtt_min(sk);
break;
default:
* TCP Westwood+ patches
2006-06-06 16:41 ` [PATCH]TCP Westwood+ Luca De Cicco
@ 2006-06-06 18:53 ` Stephen Hemminger
2006-06-12 6:02 ` David Miller
0 siblings, 1 reply; 4+ messages in thread
From: Stephen Hemminger @ 2006-06-06 18:53 UTC (permalink / raw)
To: Luca De Cicco, David S. Miller; +Cc: netdev, Saverio Mascolo
I cleaned these up and put them in a git tree. Luca, if you want to be the
maintainer of this, the correct way is to send a patch for the MAINTAINERS
file.
-------
The following changes since commit bd6673d239e7041bc3f81f8d6d0242b7bf6dd62f:
Andreas Schwab:
[CONNECTOR]: Fix warning in cn_queue.c
are found in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/tcp-2.6.18.git#tcp
Luca De Cicco:
TCP Westwood: comment fixes
TCP Westwood: bandwidth filter startup
TCP Westwood: reset RTT min after FRTO
Stephen Hemminger:
TCP Westwood: fix first sample
net/ipv4/tcp_westwood.c | 62 ++++++++++++++++++++++++++++++++++++++++++-----
1 files changed, 55 insertions(+), 7 deletions(-)
diff --git a/net/ipv4/tcp_westwood.c b/net/ipv4/tcp_westwood.c
index 62a96b7..4247da1 100644
--- a/net/ipv4/tcp_westwood.c
+++ b/net/ipv4/tcp_westwood.c
@@ -1,7 +1,24 @@
/*
- * TCP Westwood+
+ * TCP Westwood+: end-to-end bandwidth estimation for TCP
*
- * Angelo Dell'Aera: TCP Westwood+ support
+ * Angelo Dell'Aera: author of the first version of TCP Westwood+ in Linux 2.4
+ *
+ * Support at http://c3lab.poliba.it/index.php/Westwood
+ * Main references in literature:
+ *
+ * - Mascolo S, Casetti, M. Gerla et al.
+ * "TCP Westwood: bandwidth estimation for TCP" Proc. ACM Mobicom 2001
+ *
+ * - A. Grieco, s. Mascolo
+ * "Performance evaluation of New Reno, Vegas, Westwood+ TCP" ACM Computer
+ * Comm. Review, 2004
+ *
+ * - A. Dell'Aera, L. Grieco, S. Mascolo.
+ * "Linux 2.4 Implementation of Westwood+ TCP with Rate-Halving :
+ * A Performance Evaluation Over the Internet" (ICC 2004), Paris, June 2004
+ *
+ * Westwood+ employs end-to-end bandwidth measurement to set cwnd and
+ * ssthresh after packet loss. The probing phase is as the original Reno.
*/
#include <linux/config.h>
@@ -23,6 +40,7 @@ struct westwood {
u32 rtt;
u32 rtt_min; /* minimum observed RTT */
u8 first_ack; /* flag which infers that this is the first ack */
+ u8 reset_rtt_min; /* Reset RTT min to next RTT sample*/
};
@@ -50,6 +68,7 @@ static void tcp_westwood_init(struct soc
w->bw_est = 0;
w->accounted = 0;
w->cumul_ack = 0;
+ w->reset_rtt_min = 1;
w->rtt_min = w->rtt = TCP_WESTWOOD_INIT_RTT;
w->rtt_win_sx = tcp_time_stamp;
w->snd_una = tcp_sk(sk)->snd_una;
@@ -65,10 +84,16 @@ static inline u32 westwood_do_filter(u32
return (((7 * a) + b) >> 3);
}
-static inline void westwood_filter(struct westwood *w, u32 delta)
+static void westwood_filter(struct westwood *w, u32 delta)
{
- w->bw_ns_est = westwood_do_filter(w->bw_ns_est, w->bk / delta);
- w->bw_est = westwood_do_filter(w->bw_est, w->bw_ns_est);
+ /* If the filter is empty fill it with the first sample of bandwidth */
+ if (w->bw_ns_est == 0 && w->bw_est == 0) {
+ w->bw_ns_est = w->bk / delta;
+ w->bw_est = w->bw_ns_est;
+ } else {
+ w->bw_ns_est = westwood_do_filter(w->bw_ns_est, w->bk / delta);
+ w->bw_est = westwood_do_filter(w->bw_est, w->bw_ns_est);
+ }
}
/*
@@ -93,7 +118,7 @@ static void westwood_update_window(struc
struct westwood *w = inet_csk_ca(sk);
s32 delta = tcp_time_stamp - w->rtt_win_sx;
- /* Initialise w->snd_una with the first acked sequence number in order
+ /* Initialize w->snd_una with the first acked sequence number in order
* to fix mismatch between tp->snd_una and w->snd_una for the first
* bandwidth sample
*/
@@ -119,6 +144,16 @@ static void westwood_update_window(struc
}
}
+static inline void update_rtt_min(struct westwood *w)
+{
+ if (w->reset_rtt_min) {
+ w->rtt_min = w->rtt;
+ w->reset_rtt_min = 0;
+ } else
+ w->rtt_min = min(w->rtt, w->rtt_min);
+}
+
+
/*
* @westwood_fast_bw
* It is called when we are in fast path. In particular it is called when
@@ -134,7 +169,7 @@ static inline void westwood_fast_bw(stru
w->bk += tp->snd_una - w->snd_una;
w->snd_una = tp->snd_una;
- w->rtt_min = min(w->rtt, w->rtt_min);
+ update_rtt_min(w);
}
/*
@@ -191,7 +226,7 @@ static void tcp_westwood_event(struct so
{
struct tcp_sock *tp = tcp_sk(sk);
struct westwood *w = inet_csk_ca(sk);
-
+
switch(event) {
case CA_EVENT_FAST_ACK:
westwood_fast_bw(sk);
@@ -203,12 +238,14 @@ static void tcp_westwood_event(struct so
case CA_EVENT_FRTO:
tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk);
+ /* Update RTT_min when next ack arrives */
+ w->reset_rtt_min = 1;
break;
case CA_EVENT_SLOW_ACK:
westwood_update_window(sk);
w->bk += westwood_acked_count(sk);
- w->rtt_min = min(w->rtt, w->rtt_min);
+ update_rtt_min(w);
break;
default:
* Re: TCP Westwood+ patches
2006-06-06 18:53 ` TCP Westwood+ patches Stephen Hemminger
@ 2006-06-12 6:02 ` David Miller
0 siblings, 0 replies; 4+ messages in thread
From: David Miller @ 2006-06-12 6:02 UTC (permalink / raw)
To: shemminger; +Cc: ldecicco, netdev, saverio.mascolo
From: Stephen Hemminger <shemminger@osdl.org>
Date: Tue, 6 Jun 2006 11:53:06 -0700
> I cleaned these up and put them in a git tree.
All 4 patches are in my tree now.
Thanks Stephen.