From: Feng Tang <feng.tang@intel.com>
To: Eric Dumazet <edumazet@google.com>
Cc: kernel test robot <rong.a.chen@intel.com>,
Stephen Rothwell <sfr@canb.auug.org.au>,
Willem de Bruijn <willemb@google.com>,
Soheil Hassas Yeganeh <soheil@google.com>,
LKML <linux-kernel@vger.kernel.org>, "lkp@01.org" <lkp@01.org>,
"David S. Miller" <davem@davemloft.net>,
ying.huang@intel.com
Subject: Re: [LKP] [tcp] 8b27dae5a2: netperf.Throughput_Mbps -25.7% regression
Date: Wed, 12 Jun 2019 11:52:29 +0800
Message-ID: <20190612035133.GA18313@shbuild999.sh.intel.com>
In-Reply-To: <20190604100735.s2g3tc35ofybimek@shbuild888>
On Tue, Jun 04, 2019 at 06:07:35PM +0800, Feng Tang wrote:
> On Thu, May 30, 2019 at 11:23:14PM +0800, Feng Tang wrote:
> > Hi Eric,
> >
> > On Thu, May 30, 2019 at 05:21:40AM -0700, Eric Dumazet wrote:
> > > On Thu, May 30, 2019 at 3:31 AM Feng Tang <feng.tang@intel.com> wrote:
> > > >
> > > > On Wed, Apr 03, 2019 at 02:34:36PM +0800, kernel test robot wrote:
> > > > > Greeting,
> > > > >
> > > > > FYI, we noticed a -25.7% regression of netperf.Throughput_Mbps due to commit:
> > > > >
> > > > >
> > > > > commit: 8b27dae5a2e89a61c46c6dbc76c040c0e6d0ed4c ("tcp: add one skb cache for rx")
> > > > > https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
> > > >
> > > > Hi Eric,
> > > >
> > > > Could you help to check this? thanks,
> > > >
> > >
> > > Hmmm... patch is old and had some bugs that have been fixed.
> > >
> > > What numbers do you have with more recent kernels ?
> >
> >
> > I just ran the test with 5.2-rc2, and the regression is still there.
>
> Hi Eric,
>
> Any hint on this?
>
> From the perf data, the spinlock contention shows an obvious increase:
>
> 9.28 +7.6 16.91 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_one_page.__free_pages_ok.___pskb_trim
> 18.55 +8.6 27.14 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.skb_page_frag_refill
Hi Eric,
Any thoughts?
Actually I did some further checking. The increased lock contention
comes from the mm zone lock taken for page allocation/freeing. As an
experiment I changed SKB_FRAG_PAGE_ORDER so the frag allocation size
goes from 32KB to 64KB; the lock contention is dramatically reduced,
and the throughput recovers somewhat (a 10% ~ 15% gain, depending on
the HW platform), but that can't fully recover the -25.7% loss.
Hope this info helps.
Thanks,
Feng
>
> And for commit 8b27dae5a2 ("tcp: add one skb cache for rx"), IIUC, it
> is not a real cache like the "tx skb cache" patch, but rather a kind
> of delayed freeing.
>
> Thanks,
> Feng
>
>
Thread overview: 6+ messages
2019-04-03 6:34 [tcp] 8b27dae5a2: netperf.Throughput_Mbps -25.7% regression kernel test robot
2019-05-30 10:30 ` [LKP] " Feng Tang
2019-05-30 12:21 ` Eric Dumazet
2019-05-30 15:23 ` Feng Tang
2019-06-04 10:07 ` Feng Tang
2019-06-12 3:52 ` Feng Tang [this message]