From: Oleksandr Natalenko
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth
Date: Fri, 16 Feb 2018 18:37:08 +0100
To: Eric Dumazet
Cc: "David S. Miller", Alexey Kuznetsov, Hideaki YOSHIFUJI, netdev, LKML,
 Soheil Hassas Yeganeh, Neal Cardwell, Yuchung Cheng, Van Jacobson, Jerry Chu
Message-ID: <18081951.d6t0IUddpn@natalenko.name>
References: <1697118.nv5eASg0nx@natalenko.name> <2189487.nPhU5NAnbi@natalenko.name>
List-Id: netdev.vger.kernel.org

Hi.

On Friday, 16 February 2018 17:25:58 CET Eric Dumazet wrote:
> The way TCP pacing works, it defaults to internal pacing using a hint
> stored in the socket.
>
> If you change the qdisc while flow is alive, result could be unexpected.

I don't change the qdisc while the flow is alive. Either the VM is
completely restarted, or iperf3 is restarted on both sides.

> (TCP socket remembers that one FQ was supposed to handle the pacing)
>
> What results do you have if you use standard pfifo_fast ?

Almost the same as with fq_codel (see my previous email with the numbers).

> I am asking because TCP pacing relies on High resolution timers, and
> that might be weak on your VM.

Also, I've switched to measuring things on real HW only (also see my
previous email with the numbers).

Thanks.

Regards,
  Oleksandr
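
P.S. For anyone following along, here is a minimal, self-contained C sketch
of the per-socket pacing hint Eric describes above. It is not the actual
kernel code; the names in it (pacing_status, sock_hint, fq_saw_socket,
needs_internal_pacing) are made up for illustration. It only shows the idea
that the socket carries a hint, and that once an fq qdisc has claimed a flow,
TCP no longer does its own hrtimer-based pacing for it.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of a per-socket pacing hint, not kernel source. */
enum pacing_status {
	PACING_NONE,	/* no pacing requested yet */
	PACING_NEEDED,	/* TCP has to pace itself with hrtimers */
	PACING_FQ,	/* an fq qdisc has claimed pacing for this flow */
};

struct sock_hint {
	enum pacing_status pacing;
};

/* The fq qdisc would mark the socket once it starts handling its packets. */
static void fq_saw_socket(struct sock_hint *sk)
{
	sk->pacing = PACING_FQ;
}

/* TCP paces internally only while no fq qdisc has claimed the flow. */
static bool needs_internal_pacing(const struct sock_hint *sk)
{
	return sk->pacing == PACING_NEEDED;
}

int main(void)
{
	struct sock_hint sk = { .pacing = PACING_NEEDED };

	printf("internal pacing before fq: %d\n", needs_internal_pacing(&sk));
	fq_saw_socket(&sk);	/* e.g. qdisc switched to fq mid-flow */
	printf("internal pacing after fq:  %d\n", needs_internal_pacing(&sk));
	return 0;
}

This is just meant to make Eric's remark concrete: the socket remembers that
an FQ qdisc was supposed to handle the pacing, which is why swapping qdiscs
under a live flow can give unexpected results, and why I restart the VM or
iperf3 between runs.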