* [PATCH 1/2] tcp: fix undo after RTO for BIC
@ 2012-01-19 3:47 Neal Cardwell
2012-01-19 3:47 ` [PATCH 2/2] tcp: fix undo after RTO for CUBIC Neal Cardwell
2012-01-20 19:18 ` [PATCH 1/2] tcp: fix undo after RTO for BIC David Miller
From: Neal Cardwell @ 2012-01-19 3:47 UTC (permalink / raw)
To: David Miller
Cc: netdev, ilpo.jarvinen, Nandita Dukkipati, Yuchung Cheng,
Jerry Chu, Tom Herbert, Neal Cardwell
This patch fixes BIC so that cwnd reductions made during RTOs can be
undone (just as they already can be undone when using the default/Reno
behavior).
When undoing cwnd reductions, BIC-derived congestion control modules
were restoring the cwnd from last_max_cwnd. There were two problems
with using last_max_cwnd to restore a cwnd during undo:
(a) last_max_cwnd was set to 0 on state transitions into TCP_CA_Loss
(by calling the module's reset() functions), so cwnd reductions from
RTOs could not be undone.
(b) when fast_convergence is enabled (which it is by default),
last_max_cwnd does not actually hold the value of snd_cwnd before the
loss; instead, it holds a scaled-down version of snd_cwnd.
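For reference, the existing ssthresh recalculation in tcp_bic.c is roughly the
following (abridged sketch, not a verbatim copy of the file; the low_window
special case is omitted). It shows why last_max_cwnd can be a scaled-down
value while loss_cwnd records the unscaled cwnd at loss:
static u32 bictcp_recalc_ssthresh(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);
	struct bictcp *ca = inet_csk_ca(sk);
	/* Wmax and fast convergence: with fast_convergence enabled,
	 * last_max_cwnd is deliberately scaled below the cwnd at loss.
	 */
	if (tp->snd_cwnd < ca->last_max_cwnd && fast_convergence)
		ca->last_max_cwnd = (tp->snd_cwnd * (BICTCP_BETA_SCALE + beta))
			/ (2 * BICTCP_BETA_SCALE);
	else
		ca->last_max_cwnd = tp->snd_cwnd;
	/* loss_cwnd, by contrast, holds the unscaled cwnd at loss. */
	ca->loss_cwnd = tp->snd_cwnd;
	return max((tp->snd_cwnd * beta) / BICTCP_BETA_SCALE, 2U);
}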
This patch makes the following changes:
(1) upon undo, revert snd_cwnd to ca->loss_cwnd, which is already, as
the existing comment notes, the "congestion window at last loss"
(2) stop forgetting ca->loss_cwnd on TCP_CA_Loss events
(3) use ca->last_max_cwnd to check if we're in slow start
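For context, the undo itself is driven from net/ipv4/tcp_input.c, which calls
the congestion module's undo_cwnd hook when a cwnd reduction turns out to have
been spurious; simplified sketch of tcp_undo_cwr() (not the literal code):
	if (icsk->icsk_ca_ops->undo_cwnd)
		tp->snd_cwnd = icsk->icsk_ca_ops->undo_cwnd(sk);
	else
		tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh << 1);
With this patch, BIC's undo_cwnd hook returns max(tp->snd_cwnd, ca->loss_cwnd),
so the pre-loss window is restored even when the reduction came from an RTO.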
Signed-off-by: Neal Cardwell <ncardwell@google.com>
---
net/ipv4/tcp_bic.c | 11 +++++++----
1 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_bic.c b/net/ipv4/tcp_bic.c
index 6187eb4..f45e1c2 100644
--- a/net/ipv4/tcp_bic.c
+++ b/net/ipv4/tcp_bic.c
@@ -63,7 +63,6 @@ static inline void bictcp_reset(struct bictcp *ca)
 {
 	ca->cnt = 0;
 	ca->last_max_cwnd = 0;
-	ca->loss_cwnd = 0;
 	ca->last_cwnd = 0;
 	ca->last_time = 0;
 	ca->epoch_start = 0;
@@ -72,7 +71,11 @@ static inline void bictcp_reset(struct bictcp *ca)
 
 static void bictcp_init(struct sock *sk)
 {
-	bictcp_reset(inet_csk_ca(sk));
+	struct bictcp *ca = inet_csk_ca(sk);
+
+	bictcp_reset(ca);
+	ca->loss_cwnd = 0;
+
 	if (initial_ssthresh)
 		tcp_sk(sk)->snd_ssthresh = initial_ssthresh;
 }
@@ -127,7 +130,7 @@ static inline void bictcp_update(struct bictcp *ca, u32 cwnd)
 	}
 
 	/* if in slow start or link utilization is very low */
-	if (ca->loss_cwnd == 0) {
+	if (ca->last_max_cwnd == 0) {
 		if (ca->cnt > 20) /* increase cwnd 5% per RTT */
 			ca->cnt = 20;
 	}
@@ -185,7 +188,7 @@ static u32 bictcp_undo_cwnd(struct sock *sk)
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 	const struct bictcp *ca = inet_csk_ca(sk);
-	return max(tp->snd_cwnd, ca->last_max_cwnd);
+	return max(tp->snd_cwnd, ca->loss_cwnd);
 }
 
 static void bictcp_state(struct sock *sk, u8 new_state)
--
1.7.7.3
* [PATCH 2/2] tcp: fix undo after RTO for CUBIC
2012-01-19 3:47 [PATCH 1/2] tcp: fix undo after RTO for BIC Neal Cardwell
@ 2012-01-19 3:47 ` Neal Cardwell
2012-01-20 2:49 ` Stephen Hemminger
2012-01-20 19:18 ` [PATCH 1/2] tcp: fix undo after RTO for BIC David Miller
From: Neal Cardwell @ 2012-01-19 3:47 UTC (permalink / raw)
To: David Miller
Cc: netdev, ilpo.jarvinen, Nandita Dukkipati, Yuchung Cheng,
Jerry Chu, Tom Herbert, Neal Cardwell
This patch fixes CUBIC so that cwnd reductions made during RTOs can be
undone (just as they already can be undone when using the default/Reno
behavior).
When undoing cwnd reductions, BIC-derived congestion control modules
were restoring the cwnd from last_max_cwnd. There were two problems
with using last_max_cwnd to restore a cwnd during undo:
(a) last_max_cwnd was set to 0 on state transitions into TCP_CA_Loss
(by calling the module's reset() functions), so cwnd reductions from
RTOs could not be undone.
(b) when fast_convergence is enabled (which it is by default),
last_max_cwnd does not actually hold the value of snd_cwnd before the
loss; instead, it holds a scaled-down version of snd_cwnd.
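tcp_cubic.c has the same fast convergence scaling in its ssthresh
recalculation; roughly (abridged sketch, not a verbatim copy of the file):
	/* Wmax and fast convergence */
	if (tp->snd_cwnd < ca->last_max_cwnd && fast_convergence)
		ca->last_max_cwnd = (tp->snd_cwnd * (BICTCP_BETA_SCALE + beta))
			/ (2 * BICTCP_BETA_SCALE);
	else
		ca->last_max_cwnd = tp->snd_cwnd;
	ca->loss_cwnd = tp->snd_cwnd;	/* unscaled cwnd at loss */
So only loss_cwnd reliably records the window that should be restored on undo.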
This patch makes the following changes:
(1) upon undo, revert snd_cwnd to ca->loss_cwnd, which is already, as
the existing comment notes, the "congestion window at last loss"
(2) stop forgetting ca->loss_cwnd on TCP_CA_Loss events
(3) use ca->last_max_cwnd to check if we're in slow start
Signed-off-by: Neal Cardwell <ncardwell@google.com>
---
net/ipv4/tcp_cubic.c | 10 ++++++----
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index f376b05..a9077f4 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -107,7 +107,6 @@ static inline void bictcp_reset(struct bictcp *ca)
 {
 	ca->cnt = 0;
 	ca->last_max_cwnd = 0;
-	ca->loss_cwnd = 0;
 	ca->last_cwnd = 0;
 	ca->last_time = 0;
 	ca->bic_origin_point = 0;
@@ -142,7 +141,10 @@ static inline void bictcp_hystart_reset(struct sock *sk)
 
 static void bictcp_init(struct sock *sk)
 {
-	bictcp_reset(inet_csk_ca(sk));
+	struct bictcp *ca = inet_csk_ca(sk);
+
+	bictcp_reset(ca);
+	ca->loss_cwnd = 0;
 
 	if (hystart)
 		bictcp_hystart_reset(sk);
@@ -275,7 +277,7 @@ static inline void bictcp_update(struct bictcp *ca, u32 cwnd)
 	 * The initial growth of cubic function may be too conservative
 	 * when the available bandwidth is still unknown.
 	 */
-	if (ca->loss_cwnd == 0 && ca->cnt > 20)
+	if (ca->last_max_cwnd == 0 && ca->cnt > 20)
 		ca->cnt = 20;	/* increase cwnd 5% per RTT */
 
 	/* TCP Friendly */
@@ -342,7 +344,7 @@ static u32 bictcp_undo_cwnd(struct sock *sk)
 {
 	struct bictcp *ca = inet_csk_ca(sk);
 
-	return max(tcp_sk(sk)->snd_cwnd, ca->last_max_cwnd);
+	return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
 }
 
 static void bictcp_state(struct sock *sk, u8 new_state)
--
1.7.7.3
* Re: [PATCH 2/2] tcp: fix undo after RTO for CUBIC
2012-01-19 3:47 ` [PATCH 2/2] tcp: fix undo after RTO for CUBIC Neal Cardwell
@ 2012-01-20 2:49 ` Stephen Hemminger
2012-01-20 19:18 ` David Miller
From: Stephen Hemminger @ 2012-01-20 2:49 UTC (permalink / raw)
To: Neal Cardwell
Cc: David Miller, netdev, ilpo.jarvinen, Nandita Dukkipati,
Yuchung Cheng, Jerry Chu, Tom Herbert
On Wed, 18 Jan 2012 22:47:59 -0500
Neal Cardwell <ncardwell@google.com> wrote:
> This patch fixes CUBIC so that cwnd reductions made during RTOs can be
> undone (just as they already can be undone when using the default/Reno
> behavior).
>
> When undoing cwnd reductions, BIC-derived congestion control modules
> were restoring the cwnd from last_max_cwnd. There were two problems
> with using last_max_cwnd to restore a cwnd during undo:
>
> (a) last_max_cwnd was set to 0 on state transitions into TCP_CA_Loss
> (by calling the module's reset() functions), so cwnd reductions from
> RTOs could not be undone.
>
> (b) when fast_convergence is enabled (which it is by default),
> last_max_cwnd does not actually hold the value of snd_cwnd before the
> loss; instead, it holds a scaled-down version of snd_cwnd.
>
> This patch makes the following changes:
>
> (1) upon undo, revert snd_cwnd to ca->loss_cwnd, which is already, as
> the existing comment notes, the "congestion window at last loss"
>
> (2) stop forgetting ca->loss_cwnd on TCP_CA_Loss events
>
> (3) use ca->last_max_cwnd to check if we're in slow start
>
> Signed-off-by: Neal Cardwell <ncardwell@google.com>
I was unsure of this, so forwarded it to the author
and received his confirmation.
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Sangtae Ha <sangtae.ha@gmail.com>
* Re: [PATCH 2/2] tcp: fix undo after RTO for CUBIC
2012-01-20 2:49 ` Stephen Hemminger
@ 2012-01-20 19:18 ` David Miller
From: David Miller @ 2012-01-20 19:18 UTC (permalink / raw)
To: shemminger
Cc: ncardwell, netdev, ilpo.jarvinen, nanditad, ycheng, hkchu,
therbert
From: Stephen Hemminger <shemminger@vyatta.com>
Date: Thu, 19 Jan 2012 18:49:21 -0800
> On Wed, 18 Jan 2012 22:47:59 -0500
> Neal Cardwell <ncardwell@google.com> wrote:
>
>> This patch fixes CUBIC so that cwnd reductions made during RTOs can be
>> undone (just as they already can be undone when using the default/Reno
>> behavior).
>>
>> When undoing cwnd reductions, BIC-derived congestion control modules
>> were restoring the cwnd from last_max_cwnd. There were two problems
>> with using last_max_cwnd to restore a cwnd during undo:
>>
>> (a) last_max_cwnd was set to 0 on state transitions into TCP_CA_Loss
>> (by calling the module's reset() functions), so cwnd reductions from
>> RTOs could not be undone.
>>
>> (b) when fast_convergence is enabled (which it is by default),
>> last_max_cwnd does not actually hold the value of snd_cwnd before the
>> loss; instead, it holds a scaled-down version of snd_cwnd.
>>
>> This patch makes the following changes:
>>
>> (1) upon undo, revert snd_cwnd to ca->loss_cwnd, which is already, as
>> the existing comment notes, the "congestion window at last loss"
>>
>> (2) stop forgetting ca->loss_cwnd on TCP_CA_Loss events
>>
>> (3) use ca->last_max_cwnd to check if we're in slow start
>>
>> Signed-off-by: Neal Cardwell <ncardwell@google.com>
>
> I was unsure of this, so forwarded it to the author
> and received his confirmation.
>
> Acked-by: Stephen Hemminger <shemminger@vyatta.com>
> Acked-by: Sangtae Ha <sangtae.ha@gmail.com>
Applied.
* Re: [PATCH 1/2] tcp: fix undo after RTO for BIC
2012-01-19 3:47 [PATCH 1/2] tcp: fix undo after RTO for BIC Neal Cardwell
2012-01-19 3:47 ` [PATCH 2/2] tcp: fix undo after RTO for CUBIC Neal Cardwell
@ 2012-01-20 19:18 ` David Miller
From: David Miller @ 2012-01-20 19:18 UTC (permalink / raw)
To: ncardwell; +Cc: netdev, ilpo.jarvinen, nanditad, ycheng, hkchu, therbert
From: Neal Cardwell <ncardwell@google.com>
Date: Wed, 18 Jan 2012 22:47:58 -0500
> This patch fixes BIC so that cwnd reductions made during RTOs can be
> undone (just as they already can be undone when using the default/Reno
> behavior).
>
> When undoing cwnd reductions, BIC-derived congestion control modules
> were restoring the cwnd from last_max_cwnd. There were two problems
> with using last_max_cwnd to restore a cwnd during undo:
>
> (a) last_max_cwnd was set to 0 on state transitions into TCP_CA_Loss
> (by calling the module's reset() functions), so cwnd reductions from
> RTOs could not be undone.
>
> (b) when fast_convergence is enabled (which it is by default),
> last_max_cwnd does not actually hold the value of snd_cwnd before the
> loss; instead, it holds a scaled-down version of snd_cwnd.
>
> This patch makes the following changes:
>
> (1) upon undo, revert snd_cwnd to ca->loss_cwnd, which is already, as
> the existing comment notes, the "congestion window at last loss"
>
> (2) stop forgetting ca->loss_cwnd on TCP_CA_Loss events
>
> (3) use ca->last_max_cwnd to check if we're in slow start
>
> Signed-off-by: Neal Cardwell <ncardwell@google.com>
Applied.