From: Oleksandr Natalenko <oleksandr@natalenko.name>
To: "David S. Miller" <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Eric Dumazet <edumazet@google.com>,
	Soheil Hassas Yeganeh <soheil@google.com>,
	Neal Cardwell <ncardwell@google.com>,
	Yuchung Cheng <ycheng@google.com>, Van Jacobson <vanj@google.com>,
	Jerry Chu <hkchu@google.com>
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth
Date: Fri, 16 Feb 2018 16:15:51 +0100
Message-ID: <2189487.nPhU5NAnbi@natalenko.name>
In-Reply-To: <1697118.nv5eASg0nx@natalenko.name>

Hi, David, Eric, Neal et al.

On Thursday, 15 February 2018 21:42:26 CET Oleksandr Natalenko wrote:
> I've faced an issue with a limited TCP bandwidth between my laptop and a
> server in my 1 Gbps LAN while using BBR as a congestion control mechanism.
> To verify my observations, I've set up 2 KVM VMs with the following
> parameters:
> 
> 1) Linux v4.15.3
> 2) virtio NICs
> 3) 128 MiB of RAM
> 4) 2 vCPUs
> 5) tested on both non-PREEMPT/100 Hz and PREEMPT/1000 Hz
> 
> The VMs are interconnected via a host bridge (-netdev bridge). I was running
> iperf3 in both the default and the reverse mode. Here are the results:
> 
> 1) BBR on both VMs
> 
> upload: 3.42 Gbits/sec, cwnd ~ 320 KBytes
> download: 3.39 Gbits/sec, cwnd ~ 320 KBytes
> 
> 2) Reno on both VMs
> 
> upload: 5.50 Gbits/sec, cwnd = 976 KBytes (constant)
> download: 5.22 Gbits/sec, cwnd = 1.20 MBytes (constant)
> 
> 3) Reno on client, BBR on server
> 
> upload: 5.29 Gbits/sec, cwnd = 952 KBytes (constant)
> download: 3.45 Gbits/sec, cwnd ~ 320 KBytes
> 
> 4) BBR on client, Reno on server
> 
> upload: 3.36 Gbits/sec, cwnd ~ 370 KBytes
> download: 5.21 Gbits/sec, cwnd = 887 KBytes (constant)
> 
> So, as you can see, whenever BBR is in use, the upload rate is bad and the
> cwnd is low. On real HW (laptop and server on a 1 Gbps LAN), BBR limits the
> throughput to ~100 Mbps (verifiable not only with iperf3, but also with scp
> while transferring files between the hosts).
> 
> Also, I've tried to use YeAH instead of Reno, and it gives me the same
> results as Reno (IOW, YeAH works fine too).
> 
> Questions:
> 
> 1) is this expected?
> 2) or am I missing some extra BBR tuneable?
> 3) if it is not a regression (I don't have any previous data to compare
> with), how can I fix this?
> 4) if it is a bug in BBR, what else should I provide or check for a proper
> investigation?
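
For reference, the tests above were run roughly along these lines (a
sketch from memory; the exact iperf3 flags and durations may have
differed, and <server-ip> is just a placeholder for the server VM's
bridge address):

  # pick the congestion control on each VM
  sysctl -w net.ipv4.tcp_congestion_control=bbr   # or reno/yeah

  # on the server VM
  iperf3 -s

  # on the client VM: default (upload) and reverse (download) mode
  iperf3 -c <server-ip> -t 30
  iperf3 -c <server-ip> -t 30 -R

The cwnd values come from the Cwnd column that iperf3 prints on the
sending side.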

I've played with BBR a little bit more and managed to narrow the issue down to 
the changes between v4.12 and v4.13. Here are my observations:

v4.12 + BBR + fq_codel == OK
v4.12 + BBR + fq       == OK
v4.13 + BBR + fq_codel == Not OK
v4.13 + BBR + fq       == OK
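
The qdisc was switched between runs with plain tc, along these lines
(eth0 stands in for the VM's virtio interface):

  # install the qdisc under test as the root qdisc
  tc qdisc replace dev eth0 root fq
  # ...or...
  tc qdisc replace dev eth0 root fq_codel

  # confirm what is actually installed
  tc -s qdisc show dev eth0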

I think this has something to do with the internal TCP pacing
implementation that was introduced in v4.13 (commit 218af599fa63)
specifically to allow using BBR together with non-fq qdiscs. As long as BBR
runs on top of fq, the throughput is high and saturates the link, but with
any other qdisc, for instance fq_codel, the throughput drops. Just to be
sure, I've also tried pfifo_fast instead of fq_codel, with the same
outcome: low throughput.
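
If it is of any use, the sender's view of the low-throughput case can be
inspected with ss, which reports the congestion control state and the
pacing rate the stack has computed, e.g.:

  ss -tin dst <server-ip>

where the interesting fields for the iperf3 flow are cwnd, pacing_rate
and the bbr:(...) block.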

Unfortunately, I do not know whether this is expected behaviour or should
be considered a regression, so I'm asking for advice.

Ideas?

Thanks.

Regards,
  Oleksandr
