From: Aaron Lu <aaron.lu@intel.com>
To: Tariq Toukan <tariqt@mellanox.com>
Cc: Linux Kernel Network Developers <netdev@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>,
Mel Gorman <mgorman@techsingularity.net>,
David Miller <davem@davemloft.net>,
Jesper Dangaard Brouer <brouer@redhat.com>,
Eric Dumazet <eric.dumazet@gmail.com>,
Alexei Starovoitov <ast@fb.com>,
Saeed Mahameed <saeedm@mellanox.com>,
Eran Ben Elisha <eranbe@mellanox.com>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>
Subject: Re: Page allocator bottleneck
Date: Mon, 23 Apr 2018 21:10:33 +0800
Message-ID: <20180423131033.GA13792@intel.com>
In-Reply-To: <0dea4da6-8756-22d4-c586-267217a5fa63@mellanox.com>
On Mon, Apr 23, 2018 at 11:54:57AM +0300, Tariq Toukan wrote:
> Hi,
>
> I ran my tests with your patches.
> Initial BW numbers are significantly higher than what I documented back
> then in this mail thread.
> For example, in driver #2 (see the original mail thread), with 6 rings, I
> now get 92Gbps (slightly below line rate), compared to 64Gbps back then.
>
> However, there have been many kernel changes since then, so I need to
> isolate your changes. I am not sure I can finish this today, but I will
> surely get to it next week after I'm back from vacation.
>
> Still, when I increase the scale (more rings, i.e. more CPUs), I see
> queued_spin_lock_slowpath reach 60%+ CPU. Still high, but lower than it
> used to be.
I wonder if the contention is on the allocation path or the free path?
Also, increasing the PCP size through vm.percpu_pagelist_fraction would
still help with my patches, since the resulting higher pcp->batch lets the
allocation path avoid touching even more cache lines (though pcp->batch
currently has an upper limit of 96).
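
For reference, here is a minimal userspace sketch of how the kernel derives
pcp->high and pcp->batch from vm.percpu_pagelist_fraction (paraphrasing
pageset_set_high() in mm/page_alloc.c around v4.16; the PAGE_SHIFT value
and the managed-pages figure below are illustrative assumptions):

#include <stdio.h>

#define PAGE_SHIFT 12	/* 4K pages, as on x86-64 */

/* When vm.percpu_pagelist_fraction is set, each zone gets
 * pcp->high = managed_pages / fraction and pcp->batch = high / 4,
 * capped at PAGE_SHIFT * 8 = 96 pages -- the limit mentioned above. */
static unsigned long pcp_batch(unsigned long high)
{
	unsigned long batch = high / 4 ? high / 4 : 1;

	if (high / 4 > PAGE_SHIFT * 8)
		batch = PAGE_SHIFT * 8;
	return batch;
}

int main(void)
{
	unsigned long managed_pages = 4UL << 20;	/* ~16GB zone, illustrative */
	unsigned long fractions[] = { 8, 32, 128 };
	int i;

	for (i = 0; i < 3; i++) {
		unsigned long high = managed_pages / fractions[i];
		printf("fraction=%3lu  pcp->high=%7lu  pcp->batch=%lu\n",
		       fractions[i], high, pcp_batch(high));
	}
	return 0;
}

With a zone this large, every allowed fraction value already hits the
96-page batch cap, so tuning the fraction mostly changes pcp->high (how
many pages a CPU can hold before spilling back under the zone lock) while
the batch stays pinned at 96.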
>
> This should be root solved by the (orthogonal) changes planned in network
> subsystem, which will change the SKB allocation/free scheme so that SKBs are
> released on the originating cpu.
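
To illustrate that idea (a toy userspace sketch with hypothetical names,
not the actual SKB code): instead of the consuming CPU freeing the buffer
itself -- and contending on the shared zone lock from a remote CPU -- it
hands the buffer back through a per-CPU return ring, and the originating
CPU frees it locally:

#include <stdlib.h>

#define RING_SIZE 256

/* One single-producer/single-consumer return ring per originating CPU:
 * the consumer pushes buffers back instead of freeing them remotely. */
struct return_ring {
	void *slots[RING_SIZE];
	unsigned int head;	/* advanced by the consuming CPU */
	unsigned int tail;	/* advanced by the originating CPU */
};

/* Consuming CPU: hand the buffer back instead of freeing it here. */
static int ring_return(struct return_ring *r, void *buf)
{
	unsigned int head = r->head;
	unsigned int tail = __atomic_load_n(&r->tail, __ATOMIC_ACQUIRE);

	if (head - tail == RING_SIZE)
		return -1;	/* ring full; caller falls back to free() */
	r->slots[head % RING_SIZE] = buf;
	__atomic_store_n(&r->head, head + 1, __ATOMIC_RELEASE);
	return 0;
}

/* Originating CPU: drain the ring and free into its own per-CPU cache. */
static void ring_drain(struct return_ring *r)
{
	unsigned int head = __atomic_load_n(&r->head, __ATOMIC_ACQUIRE);
	unsigned int tail = r->tail;

	while (tail != head)
		free(r->slots[tail++ % RING_SIZE]);
	__atomic_store_n(&r->tail, tail, __ATOMIC_RELEASE);
}

int main(void)
{
	struct return_ring ring = { 0 };
	void *buf = malloc(64);

	if (ring_return(&ring, buf))	/* consumer hands the buffer back */
		free(buf);
	ring_drain(&ring);		/* originating CPU frees it locally */
	return 0;
}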