From: Eric Dumazet
Subject: Re: Performance regressions in TCP_STREAM tests in Linux 4.15 (and later)
Date: Wed, 2 May 2018 15:41:17 -0700
To: Michael Wenig, Eric Dumazet, Ben Greear, Steven Rostedt
Cc: netdev@vger.kernel.org, Shilpi Agarwal, Boon Ang, Darren Hart, Steven Rostedt, Abdul Anshad Azeez, Rajender M

On 05/02/2018 02:47 PM, Michael Wenig wrote:
> After applying Eric's proposed change (see below) to a 4.17-rc3 kernel, the
> regressions we had observed in our TCP_STREAM small-message tests with
> TCP_NODELAY enabled are now drastically reduced. Instead of the original 3x
> throughput and CPU cost regressions, the regression depth is now < 10% for
> throughput and between 10% and 20% for CPU cost. The improvements in the
> TCP_RR tests that we had observed after Eric's original commit are not
> impacted by the change. It would be great if this change could make it into
> a patch.

Thanks a lot for the testing. I will submit this patch after more tests on my side.

> Michael Wenig
> VMware Performance Engineering
>
> -----Original Message-----
> From: Eric Dumazet [mailto:eric.dumazet@gmail.com]
> Sent: Monday, April 30, 2018 10:48 AM
> To: Ben Greear; Steven Rostedt; Michael Wenig
> Cc: netdev@vger.kernel.org; Shilpi Agarwal; Boon Ang; Darren Hart; Steven Rostedt; Abdul Anshad Azeez
> Subject: Re: Performance regressions in TCP_STREAM tests in Linux 4.15 (and later)
>
> On 04/30/2018 09:36 AM, Eric Dumazet wrote:
>>
>> On 04/30/2018 09:14 AM, Ben Greear wrote:
>>> On 04/27/2018 08:11 PM, Steven Rostedt wrote:
>>>>
>>>> We'd like this email archived in the netdev list, but since netdev is
>>>> notorious for blocking Outlook email as spam, it didn't go through.
>>>> So I'm replying here to help get it into the archives.
>>>>
>>>> Thanks!
>>>>
>>>> -- Steve
>>>>
>>>> On Fri, 27 Apr 2018 23:05:46 +0000
>>>> Michael Wenig wrote:
>>>>
>>>>> As part of VMware's performance testing with the Linux 4.15 kernel,
>>>>> we identified CPU cost and throughput regressions when comparing to
>>>>> the Linux 4.14 kernel. The impacted test cases are mostly TCP_STREAM
>>>>> send tests using small message sizes. The regressions are significant
>>>>> (up to 3x) and were tracked down to a side effect of Eric Dumazet's
>>>>> RB-tree changes that went into the Linux 4.15 kernel. Further
>>>>> investigation showed that our use of the TCP_NODELAY flag in
>>>>> conjunction with Eric's change caused the regressions to show, and
>>>>> simply disabling TCP_NODELAY brought performance back to normal.
>>>>> Eric's change also resulted in significant improvements in our
>>>>> TCP_RR test cases.
>>>>>
>>>>> Based on these results, our theory is that Eric's change made the
>>>>> system overall faster (reduced latency), but as a side effect less
>>>>> aggregation is happening (with TCP_NODELAY), and that results in
>>>>> lower throughput. Previously, even though TCP_NODELAY was set, the
>>>>> system was slower and we still got some benefit from aggregation.
>>>>> Aggregation improves efficiency and throughput, although it can
>>>>> increase latency. If you are seeing a regression in your application
>>>>> throughput after this change, using TCP_NODELAY might help bring
>>>>> performance back; however, that might increase latency.
>>>
>>> I guess you mean _disabling_ TCP_NODELAY instead of _using_ TCP_NODELAY?
>>>
>>
>> Yeah, I guess auto-corking does not work as intended.
>
> I would try the following patch:
>
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 44be7f43455e4aefde8db61e2d941a69abcc642a..c9d00ef54deca15d5760bcbe154001a96fa1e2a7 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -697,7 +697,7 @@ static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb,
>  {
>  	return skb->len < size_goal &&
>  	       sock_net(sk)->ipv4.sysctl_tcp_autocorking &&
> -	       skb != tcp_write_queue_head(sk) &&
> +	       !tcp_rtx_queue_empty(sk) &&
>  	       refcount_read(&sk->sk_wmem_alloc) > skb->truesize;
>  }
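
For context: before the rtx-queue rework, sent-but-unacked skbs stayed on the
write queue, so "skb != tcp_write_queue_head(sk)" implied that packets were
already in flight and it was safe to cork. After the rework,
tcp_write_queue_head() only sees not-yet-sent data, so that test almost never
fired and autocorking effectively stopped. Checking !tcp_rtx_queue_empty(sk)
restores the original intent. As a sketch (modulo whatever the final submitted
patch ends up looking like), the function would then read:

    static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb,
                                    int size_goal)
    {
            /* Cork (delay sending) only when all of these hold:
             *  - this skb is still below the size goal, so more bytes
             *    could be appended to it;
             *  - autocorking is enabled via the net.ipv4.tcp_autocorking
             *    sysctl (on by default);
             *  - at least one packet is already in flight, i.e. the
             *    retransmit queue is not empty;
             *  - the socket owns more write memory than just this skb,
             *    so a pending tx completion will eventually flush the
             *    corked data.
             */
            return skb->len < size_goal &&
                   sock_net(sk)->ipv4.sysctl_tcp_autocorking &&
                   !tcp_rtx_queue_empty(sk) &&
                   refcount_read(&sk->sk_wmem_alloc) > skb->truesize;
    }

Note that autocorking remains gated by the net.ipv4.tcp_autocorking sysctl,
so anyone who really wants every small write pushed out immediately can still
turn it off globally.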
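And for anyone trying to reproduce Michael's results: the TCP_NODELAY toggle
discussed above is the standard socket option. A minimal userspace sketch
(the helper name here is illustrative, not taken from VMware's test harness):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable (on = 1) or disable (on = 0) Nagle's algorithm on a
     * connected TCP socket. With TCP_NODELAY set, small writes go out
     * immediately; with it clear, Nagle coalesces them, which is the
     * aggregation that restored throughput in the TCP_STREAM runs above.
     * Returns 0 on success, -1 with errno set on failure.
     */
    static int set_tcp_nodelay(int fd, int on)
    {
            return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                              &on, sizeof(on));
    }

If the tests are netperf-based, its test-specific -D option sets the same
option on the test sockets.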