From: Mel Gorman <mgorman@suse.de>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: lsf@lists.linux-foundation.org, linux-mm <linux-mm@kvack.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
Brenden Blanco <bblanco@plumgrid.com>,
James Bottomley <James.Bottomley@HansenPartnership.com>,
Tom Herbert <tom@herbertland.com>,
lsf-pc@lists.linux-foundation.org,
Alexei Starovoitov <alexei.starovoitov@gmail.com>
Subject: Re: [Lsf-pc] [LSF/MM TOPIC] Generic page-pool recycle facility?
Date: Mon, 11 Apr 2016 09:58:19 +0100
Message-ID: <20160411085819.GE21128@suse.de>
In-Reply-To: <20160407161715.52635cac@redhat.com>
On Thu, Apr 07, 2016 at 04:17:15PM +0200, Jesper Dangaard Brouer wrote:
> (Topic proposal for MM-summit)
>
> Network Interface Card (NIC) drivers and increasing link speeds stress
> the page allocator (and DMA APIs). A number of driver-specific
> open-coded approaches exist that work around these bottlenecks in the
> page allocator and DMA APIs, e.g. open-coded recycle mechanisms and
> allocating larger pages and handing out page "fragments".
>
> I'm proposing a generic page-pool recycle facility that can cover the
> driver use cases, increase performance, and open the door to zero-copy RX.
>
Which bottleneck dominates -- the page allocator or the DMA API when
setting up coherent pages?
I'm wary of another page allocator API being introduced if the motivation
is purely performance. In response to this thread, I spent two days on a
series that boosts allocator performance in the fast paths by 11-18%, to
illustrate that there was still low-hanging fruit to optimise. If the
one-LRU-per-node series were applied on top, there would be a further
boost on the allocation side. It could be improved further if debugging
checks and statistics updates were conditionally disabled by the caller.
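To sketch that last point, here is a hedged userspace illustration (the
flag name, stats counter and helpers are hypothetical, not kernel APIs) of
caller-conditional debug checks and statistics: hot callers pass a flag to
skip the bookkeeping, everyone else keeps it.

```c
#include <assert.h>
#include <stdlib.h>

#define ALLOC_NO_CHECKS 0x1u      /* hypothetical opt-out flag */

static unsigned long stat_allocs; /* stand-in for allocator statistics */

/* Trivial stand-in for the allocator's debug sanity checks. */
static int debug_check(const void *p)
{
	return p != NULL;
}

static void *do_alloc(size_t size, unsigned flags)
{
	void *p = malloc(size);

	if (!(flags & ALLOC_NO_CHECKS)) {
		assert(debug_check(p)); /* debugging checks */
		stat_allocs++;          /* statistics update */
	}
	return p;
}
```

A hot NIC receive path would pass ALLOC_NO_CHECKS and skip both branches;
cold callers pay the small cost and keep the diagnostics.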
The main reason another allocator concerns me is that its pages are
effectively pinned and cannot be reclaimed by the VM in low-memory
situations. It ends up needing its own API for tuning the pool size, with
the hope that every driver gets it right without causing OOM situations.
It becomes a slippery slope of introducing shrinkers, locking and
complexity. Then callers start worrying about NUMA locality and
maintaining multiple lists to preserve performance. Ultimately, it ends up
as slow as the page allocator and back to square one, except now with more code.
If it's the DMA API that dominates then something may be required, but it
should rely on the existing page allocator to allocate from and free to.
It would also need something like drain_all_pages to force-free everything
in the pool in low-memory situations. Remember that multiple instances
private to drivers or tasks will require shrinker implementations, and the
complexity may get unwieldy.
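To make the pinning and drain concern concrete, here is a minimal
userspace model (all names hypothetical; this is not kernel code) of a
private recycle pool. Cached buffers are invisible to the underlying
allocator until an explicit drain, which is exactly why each such pool
would need a drain_all_pages-style hook and, eventually, a shrinker.

```c
#include <stdlib.h>

#define POOL_MAX 64

/* Model of a per-driver recycle pool: freed buffers are stacked for
 * reuse instead of being returned to the system allocator. */
struct page_pool {
	void *slots[POOL_MAX];
	int count;              /* buffers currently cached (pinned) */
};

static void *pool_alloc(struct page_pool *pp, size_t size)
{
	if (pp->count > 0)
		return pp->slots[--pp->count]; /* fast path: recycle */
	return malloc(size);                   /* slow path: real allocator */
}

static void pool_free(struct page_pool *pp, void *buf)
{
	if (pp->count < POOL_MAX)
		pp->slots[pp->count++] = buf;  /* cache for reuse: stays pinned */
	else
		free(buf);                     /* overflow back to the allocator */
}

/* The drain_all_pages-style hook: give everything back under pressure. */
static void pool_drain(struct page_pool *pp)
{
	while (pp->count > 0)
		free(pp->slots[--pp->count]);
}
```

Note that nothing in the model frees cached buffers on its own; without
pool_drain (or a shrinker driving it), every cached buffer stays pinned
regardless of memory pressure.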
--
Mel Gorman
SUSE Labs