From mboxrd@z Thu Jan 1 00:00:00 1970
From: Oleksandr Natalenko
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth
Date: Fri, 16 Feb 2018 18:25:51 +0100
Message-ID: <1555524.zWX3teDkM3@natalenko.name>
References: <1697118.nv5eASg0nx@natalenko.name> <1518798109.55655.2.camel@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Cc: Eric Dumazet, "David S. Miller", Netdev, Yuchung Cheng, Soheil Hassas Yeganeh, Jerry Chu, Eric Dumazet, Dave Taht
To: Neal Cardwell
Return-path:
Received: from vulcan.natalenko.name ([104.207.131.136]:43038 "EHLO vulcan.natalenko.name" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1161169AbeBPRZx (ORCPT); Fri, 16 Feb 2018 12:25:53 -0500
In-Reply-To:
Sender: netdev-owner@vger.kernel.org
List-ID:

Hi.

On Friday, 16 February 2018, 17:33:48 CET, Neal Cardwell wrote:
> Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
> have not run into this one in our team, but we will try to work with you to
> fix this.
>
> Would you be able to take a sender-side tcpdump trace of the slow BBR
> transfer ("v4.13 + BBR + fq_codel == Not OK")? Packet headers only would be
> fine. Maybe something like:
>
> tcpdump -w /tmp/test.pcap -c1000000 -s 100 -i eth0 port $PORT

So, going on with two real HW hosts. They are both running the latest stock Arch Linux kernel (4.15.3-1-ARCH, CONFIG_PREEMPT=y, CONFIG_HZ=1000) and are interconnected with a 1 Gbps link (via a switch, if that matters). Using iperf3, running each test for 20 seconds.
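For completeness, switching between the combinations below looks roughly like this (a sketch, not the exact commands I ran; the server address is a placeholder, and each piece has to be applied on the matching host):

```shell
#!/bin/sh
# Sketch: select the congestion control and root qdisc for a test run.
# Assumptions: interface enp2s0 (as in the capture below), server address
# 192.0.2.1 is a placeholder -- substitute your own.

# Congestion control algorithm for new TCP sockets (bbr or yeah here):
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Root qdisc on the outgoing interface (fq, fq_codel, or pfifo_fast):
tc qdisc replace dev enp2s0 root fq

# On the server:
#   iperf3 -s
# On the client, 20-second runs in both directions:
iperf3 -c 192.0.2.1 -t 20       # client to server
iperf3 -c 192.0.2.1 -t 20 -R    # server to client (reverse mode)
```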
Having BBR+fq_codel (or pfifo_fast, same result) on both hosts:

Client to server: 112 Mbits/sec
Server to client: 96.1 Mbits/sec

Having BBR+fq on both hosts:

Client to server: 347 Mbits/sec
Server to client: 397 Mbits/sec

Having YeAH+fq on both hosts:

Client to server: 928 Mbits/sec
Server to client: 711 Mbits/sec

(When the server generates the traffic, the throughput is a little lower, as you can see, but I assume that is because the server has a low-power Silvermont CPU, while the client has an Ivy Bridge beast.)

Now, to tcpdump. I've captured it twice, for the client-to-server flow (c2s) and for the server-to-client flow (s2c), while using BBR + pfifo_fast:

# tcpdump -w test_XXX.pcap -c1000000 -s 100 -i enp2s0 port 5201

I've uploaded both files here [1].

Thanks.

Oleksandr

[1] https://natalenko.name/myfiles/bbr/