From: Yuchung Cheng <ycheng@google.com>
To: Neal Cardwell <ncardwell@google.com>
Cc: Lisong Xu <xu@unl.edu>, Wei Sun <unlcsewsun@gmail.com>,
	netdev <netdev@vger.kernel.org>
Subject: Re: A buggy behavior for Linux TCP Reno and HTCP
Date: Mon, 24 Jul 2017 16:41:12 -0700
Message-ID: <CAK6E8=djw-eD-JE3tkJsDjYb5UXR8scQRSnAvpt1d4p18GwHeA@mail.gmail.com>
In-Reply-To: <CADVnQymwOwWVr_Ln5+04pyiGWChf-K0XcXYP5s-+db+q1zGB9w@mail.gmail.com>

On Mon, Jul 24, 2017 at 11:29 AM, Neal Cardwell <ncardwell@google.com> wrote:
> On Mon, Jul 24, 2017 at 2:17 PM, Yuchung Cheng <ycheng@google.com> wrote:
>> On Sun, Jul 23, 2017 at 7:37 PM, Neal Cardwell <ncardwell@google.com> wrote:
>>> On Sun, Jul 23, 2017 at 10:36 PM, Neal Cardwell <ncardwell@google.com> wrote:
> ...
>>>> What if we call the field tp->prior_cwnd? Then at least we'd have some
>>>> nice symmetry:
>>>>
>>>> - tp->snd_cwnd,  which is saved in tp->prior_cwnd  (and restored upon undo)
>>>> - tp->snd_ssthresh,  which is saved in tp->prior_ssthresh  (and
>>>> restored upon undo)
>>>>
>>>> That sounds appealing to me. WDYT?
>>>
>>> And, I should add, if we go with the tp->prior_cwnd approach, then we
>>> can have a single "default"/CUBIC-style undo function, instead of 15
>>> separate but identical implementations...
>> You mean all CC modules would share one ca_ops->undo_cwnd function?
>> Sounds like nice consolidation work.
>
> Yes, exactly.
>
> Right now we have 9 modules that have identical tcp_foo_cwnd_undo functions:
>
> tcp_bic.c:188:  return max(tp->snd_cwnd, ca->loss_cwnd);
> tcp_cubic.c:378:        return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
> tcp_dctcp.c:318:        return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
> tcp_highspeed.c:165:    return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
> tcp_illinois.c:309:     return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
> tcp_nv.c:190:   return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
> tcp_scalable.c:50:      return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
> tcp_veno.c:210: return max(tcp_sk(sk)->snd_cwnd, veno->loss_cwnd);
> tcp_yeah.c:232: return max(tcp_sk(sk)->snd_cwnd, yeah->loss_cwnd);
>
> And if we fix this bug in tcp_reno_undo_cwnd() by referring to
> ca->loss_cwnd then we will add another 6 like this.
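>
> For reference, each of those modules carries a whole function just to
> hold that one line; the tcp_bic.c version looks roughly like this
> (paraphrased from memory, so treat it as a sketch):
>
> static u32 bictcp_undo_cwnd(struct sock *sk)
> {
>         const struct tcp_sock *tp = tcp_sk(sk);
>         const struct bictcp *ca = inet_csk_ca(sk);
>
>         return max(tp->snd_cwnd, ca->loss_cwnd);
> }
>
> and the other eight differ only in the private struct type.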
>
> So my proposal would be
>
> - tp->snd_cwnd,  which is saved in tp->prior_cwnd  (and restored upon undo)
> - tp->snd_ssthresh,  which is saved in tp->prior_ssthresh  (and
>    restored upon undo)
>
> Actually, now that I re-read the code, we already do have a
> prior_cwnd, which is used for the PRR code, and already set upon
> entering CA_Recovery. So if we set prior_cwnd for CA_Loss, perhaps we
> can do something like:
>
> diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
> index fde983f6376b..c2b174469645 100644
> --- a/net/ipv4/tcp_cong.c
> +++ b/net/ipv4/tcp_cong.c
> @@ -456,7 +456,7 @@ u32 tcp_reno_undo_cwnd(struct sock *sk)
>  {
>         const struct tcp_sock *tp = tcp_sk(sk);
>
> -       return max(tp->snd_cwnd, tp->snd_ssthresh << 1);
> +       return max(tp->snd_cwnd, tp->prior_cwnd);
>  }
>  EXPORT_SYMBOL_GPL(tcp_reno_undo_cwnd);
>
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 2920e0cb09f8..ae790a84302d 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -1951,6 +1951,7 @@ void tcp_enter_loss(struct sock *sk)
>             !after(tp->high_seq, tp->snd_una) ||
>             (icsk->icsk_ca_state == TCP_CA_Loss && !icsk->icsk_retransmits)) {
>                 tp->prior_ssthresh = tcp_current_ssthresh(sk);
> +               tp->prior_cwnd = tp->snd_cwnd;
>                 tp->snd_ssthresh = icsk->icsk_ca_ops->ssthresh(sk);
>                 tcp_ca_event(sk, CA_EVENT_LOSS);
>                 tcp_init_undo(tp);
>
> And then change all the CC modules but BBR to use the
> tcp_reno_undo_cwnd() instead of their own custom undo code.
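>
> For tcp_cubic.c, for instance, the per-module change would be roughly
> the following (untested sketch; CUBIC's functions are still named
> bictcp_* in-tree, so the exact identifiers may differ):
>
> -       .undo_cwnd      = bictcp_undo_cwnd,
> +       .undo_cwnd      = tcp_reno_undo_cwnd,
>
> plus removing the now-unused bictcp_undo_cwnd() and, eventually, the
> private loss_cwnd fields.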
>
> WDYT?
Looks reasonable. But we might eventually want to refactor the TCP undo
code: the updates to the undo state (prior_ssthresh, prior_cwnd,
undo_marker, undo_retrans) are scattered across different helpers,
which makes the code hard to audit.
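
For example, tcp_init_undo() covers only part of that state (roughly,
from tcp_input.c as of this era):

static void tcp_init_undo(struct tcp_sock *tp)
{
	tp->undo_marker = tp->snd_una;
	/* Retransmission still in flight may cause DSACKs later. */
	tp->undo_retrans = tp->retrans_out ? : -1;
}

while prior_ssthresh (and, with your patch, prior_cwnd) is set
separately in the callers, as in the tcp_enter_loss() hunk above.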

>
> neal


Thread overview: 14+ messages
2017-07-18 21:36 A buggy behavior for Linux TCP Reno and HTCP Wei Sun
2017-07-19 19:31 ` Yuchung Cheng
2017-07-20 21:28   ` Wei Sun
2017-07-21 17:59     ` Yuchung Cheng
2017-07-21 20:26       ` Lisong Xu
2017-07-21 20:27       ` Lisong Xu
     [not found]         ` <CADVnQynG0MZcuAPpZ+hiK-9Ounx8JKPWxvb1n3t-OyyC7=es_Q@mail.gmail.com>
2017-07-21 20:49           ` Neal Cardwell
2017-07-21 21:16           ` Yuchung Cheng
2017-07-24  2:36             ` Neal Cardwell
2017-07-24  2:37               ` Neal Cardwell
2017-07-24 18:17                 ` Yuchung Cheng
2017-07-24 18:29                   ` Neal Cardwell
2017-07-24 23:41                     ` Yuchung Cheng [this message]
2017-07-25  4:19                       ` Stephen Hemminger
