From: Eric Dumazet
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth
Date: Fri, 16 Feb 2018 08:21:49 -0800
To: Oleksandr Natalenko, "David S. Miller"
Cc: netdev@vger.kernel.org, Yuchung Cheng, Soheil Hassas Yeganeh, Neal Cardwell, Jerry Chu, Eric Dumazet

Let's CC the BBR folks at Google and remove the ones who probably have no idea.

On Thu, 2018-02-15 at 21:42 +0100, Oleksandr Natalenko wrote:
> Hello.
> 
> I've run into an issue with limited TCP bandwidth between my laptop and a
> server on my 1 Gbps LAN while using BBR as the congestion control mechanism.
> To verify my observations, I've set up two KVM VMs with the following
> parameters:
> 
> 1) Linux v4.15.3
> 2) virtio NICs
> 3) 128 MiB of RAM
> 4) 2 vCPUs
> 5) tested on both non-PREEMPT/100 Hz and PREEMPT/1000 Hz
> 
> The VMs are interconnected via a host bridge (-netdev bridge). I ran iperf3
> in both default and reverse modes. Here are the results:
> 
> 1) BBR on both VMs
> 
> upload: 3.42 Gbits/sec, cwnd ~ 320 KBytes
> download: 3.39 Gbits/sec, cwnd ~ 320 KBytes
> 
> 2) Reno on both VMs
> 
> upload: 5.50 Gbits/sec, cwnd = 976 KBytes (constant)
> download: 5.22 Gbits/sec, cwnd = 1.20 MBytes (constant)
> 
> 3) Reno on client, BBR on server
> 
> upload: 5.29 Gbits/sec, cwnd = 952 KBytes (constant)
> download: 3.45 Gbits/sec, cwnd ~ 320 KBytes
> 
> 4) BBR on client, Reno on server
> 
> upload: 3.36 Gbits/sec, cwnd ~ 370 KBytes
> download: 5.21 Gbits/sec, cwnd = 887 KBytes (constant)
> 
> So, as you can see, whenever BBR is in use, the upload rate is poor and the
> cwnd stays low. On real hardware (laptop and server on a 1 Gbps LAN), BBR
> limits the throughput to ~100 Mbps (verifiable not only with iperf3 but also
> with scp while transferring files between hosts).
> 
> I've also tried YeAH instead of Reno, and it gives the same results as Reno
> (IOW, YeAH works fine too).
> 
> Questions:
> 
> 1) Is this expected?
> 2) Or am I missing some extra BBR tunable?
> 3) If it is not a regression (I don't have any previous data to compare
> with), how can I fix this?
> 4) If it is a bug in BBR, what else should I provide or check for a proper
> investigation?
> 
> Thanks.
> 
> Regards,
> Oleksandr
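
For anyone wanting to reproduce this outside of iperf3, below is a minimal
Python sketch that selects the congestion control per socket via the standard
TCP_CONGESTION socket option (exposed by Python 3.6+ on Linux) and measures
bulk-transfer throughput. It is only an illustration, assuming a Linux kernel
with the tcp_bbr module loaded; the port, chunk size, and duration are
arbitrary placeholders, not values taken from the report above.

#!/usr/bin/env python3
# Minimal per-socket congestion-control throughput check (sketch).
# Assumes Linux; socket.TCP_CONGESTION is available in Python 3.6+ there.
import argparse
import socket
import time

CHUNK = 64 * 1024  # write/read size in bytes (arbitrary placeholder)

def serve(port):
    # Receiver: accept one connection, count bytes, report the rate.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, peer = srv.accept()
        with conn:
            total = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.monotonic() - start
            print(f"received from {peer[0]}: "
                  f"{total * 8 / elapsed / 1e9:.2f} Gbit/s")

def send(host, port, cc, seconds):
    # Sender: pick the congestion control for this socket only
    # (e.g. "bbr", "reno", "yeah"), then push data for a fixed time.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cc.encode())
        s.connect((host, port))
        buf = b"\x00" * CHUNK
        total = 0
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            s.sendall(buf)
            total += len(buf)
    print(f"sent with {cc}: {total * 8 / seconds / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--serve", action="store_true", help="run as receiver")
    p.add_argument("--host", default="127.0.0.1")
    p.add_argument("--port", type=int, default=5201)
    p.add_argument("--cc", default="bbr", help="congestion control name")
    p.add_argument("--time", type=int, default=10, help="send duration (s)")
    args = p.parse_args()
    if args.serve:
        serve(args.port)
    else:
        send(args.host, args.port, args.cc, args.time)

Run it with --serve on the receiver, then point the sender at it with
--cc bbr and again with --cc reno; comparing the two reported rates should
show the same asymmetry as the iperf3 numbers above if the problem is in the
congestion control rather than in iperf3 itself.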