From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
Brenden Blanco <bblanco@plumgrid.com>,
lsf@lists.linux-foundation.org, linux-mm <linux-mm@kvack.org>,
Mel Gorman <mgorman@suse.de>, Tom Herbert <tom@herbertland.com>,
lsf-pc@lists.linux-foundation.org,
Alexei Starovoitov <alexei.starovoitov@gmail.com>,
brouer@redhat.com
Subject: Re: [Lsf] [Lsf-pc] [LSF/MM TOPIC] Generic page-pool recycle facility?
Date: Mon, 11 Apr 2016 18:19:07 +0200
Message-ID: <20160411181907.15fdb8b9@redhat.com>
In-Reply-To: <20160411130826.GB32073@techsingularity.net>

On Mon, 11 Apr 2016 14:08:27 +0100 Mel Gorman <mgorman@techsingularity.net> wrote:
> On Mon, Apr 11, 2016 at 02:26:39PM +0200, Jesper Dangaard Brouer wrote:
[...]
> >
> > It is always great if you can optimize the page allocator. IMHO the
> > page allocator is too slow.
>
> It's why I spent some time on it as any improvement in the allocator is
> an unconditional win without requiring driver modifications.
>
> > At least for my performance needs (67ns
> > per packet, approx 201 cycles at 3GHz). I've measured[1]
> > alloc_pages(order=0) + __free_pages() to cost 277 cycles(tsc).
> >
>
> It'd be worth retrying this with the branch
>
> http://git.kernel.org/cgit/linux/kernel/git/mel/linux.git/log/?h=mm-vmscan-node-lru-v4r5
>
The cost decreased to 228 cycles(tsc), but there is some variation;
sometimes it increases to 238 cycles(tsc).

Nice, but there is still a long way to go to reach my performance
target, where I can spend only 201 cycles on the entire forwarding
path. (That budget comes from 10Gbit/s wirespeed with minimum sized
64 byte packets: 14.88 Mpps, i.e. 67.2 ns per packet, approx 201
cycles at 3GHz.)
> This is an unreleased series that contains both the page allocator
> optimisations and the one-LRU-per-node series which in combination remove a
> lot of code from the page allocator fast paths. I have no data on how the
> combined series behaves but each series individually is known to improve
> page allocator performance.
>
> Once you have that, do a hack job to remove the debugging checks from both
> the alloc and free path and see what that leaves. They could be bypassed
> properly with a __GFP_NOACCT flag used only by drivers that absolutely
> require pages as quickly as possible and are willing to be less safe to
> get that performance.
I would be interested in testing/benchmarking a patch where you remove
the debugging checks...
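Roughly, I imagine the bypass could look something like this (sketch
only: __GFP_NOACCT does not exist in any tree, and the hook point is
my guess at the alloc fast path):

/* Hypothetical sketch -- __GFP_NOACCT is the proposed flag, not an
 * existing one.  Idea: a driver that sets it opts out of the debug
 * checks on newly allocated pages. */
static inline bool check_new_pages_fast(struct page *page,
					unsigned int order, gfp_t gfp_flags)
{
	unsigned int i;

	if (gfp_flags & __GFP_NOACCT)	/* driver accepts less safety */
		return false;		/* skip checks, report no bad page */

	for (i = 0; i < (1U << order); i++)
		if (check_new_page(page + i))
			return true;	/* bad page found */
	return false;
}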
You are also welcome to try out my benchmarking modules yourself:
https://github.com/netoptimizer/prototype-kernel/blob/master/getting_started.rst
This is really simple stuff (for rapid prototyping); I'm just doing:
modprobe page_bench01; rmmod page_bench01; dmesg | tail -n40
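For reference, the core of page_bench01 is roughly the loop below
(simplified sketch; the real module uses a small time_bench framework
and runs more variants):

/* Simplified sketch of what page_bench01 measures: the cost of
 * alloc_pages(order=0) + __free_pages() in a tight loop, reported as
 * average TSC cycles per iteration. */
#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/timex.h>	/* get_cycles() */

static int __init page_bench_sketch_init(void)
{
	unsigned long i, loops = 100000;
	cycles_t start, stop;
	struct page *page;

	start = get_cycles();
	for (i = 0; i < loops; i++) {
		page = alloc_pages(GFP_KERNEL, 0);
		if (unlikely(!page))
			break;	/* error handling kept minimal here */
		__free_pages(page, 0);
	}
	stop = get_cycles();
	pr_info("alloc+free order-0: %lu cycles(tsc) per iteration\n",
		(unsigned long)(stop - start) / loops);
	return 0;
}
module_init(page_bench_sketch_init);

static void __exit page_bench_sketch_exit(void)
{
}
module_exit(page_bench_sketch_exit);
MODULE_LICENSE("GPL");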
[...]
>
> Be aware that compound order allocs like this are a double edged sword as
> it'll be fast sometimes and other times require reclaim/compaction which
> can stall for prolonged periods of time.
Yes, I've noticed that there can be a fairly high variation when doing
compound order allocs, which is not so nice! I really don't like these
variations...
Drivers also do tricks where they fall back to smaller order pages on
failure; see e.g. the function mlx4_alloc_pages(). I've tried to
simulate that function here:
https://github.com/netoptimizer/prototype-kernel/blob/91d323fc53/kernel/mm/bench/page_bench01.c#L69
It does not seem very optimal. I tried to put the system under some
memory pressure to cause alloc_pages() to fail, and then the result
was very bad, something like 2500 cycles, and it usually ended up
getting the next (lower) order pages.
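For reference, the fallback scheme I'm simulating is roughly this
(simplified sketch, not the driver's exact code):

/* Simplified version of the mlx4 trick: try the high order first,
 * and step down one order at a time on failure. */
static struct page *alloc_pages_fallback(gfp_t gfp, unsigned int max_order,
					 unsigned int *got_order)
{
	unsigned int order = max_order;
	struct page *page;

	for (;;) {
		gfp_t gfp_try = gfp;

		if (order)	/* compound page; don't warn or retry hard */
			gfp_try |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;

		page = alloc_pages(gfp_try, order);
		if (page) {
			*got_order = order;
			return page;
		}
		if (!order)	/* even order-0 failed */
			return NULL;
		order--;
	}
}

The fast case stays fast, but the failure path is exactly where those
2500 cycle outliers show up.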
> > I've measured order 3 (32KB) alloc_pages(order=3) + __free_pages() to
> > cost approx 500 cycles(tsc). That was more expensive, BUT an order=3
> > page (32KB) corresponds to 8 pages (32768/4096), thus 500/8 = 62.5
> > cycles. Usually a network RX-frame only needs to be 2048 bytes, thus
> > the "bulk" effect speed-up is x16 (32768/2048), thus 31.25 cycles.
The order=3 cost was reduced to 417 cycles(tsc), nice! But I've also
seen it jump to 611 cycles.
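The per-frame arithmetic assumes the driver then carves the order-3
page into 2048 byte RX frames, roughly like this (a sketch of the
common driver pattern, not any specific driver's code):

/* Carve an order-3 page (32768 bytes) into 16 RX frames of 2048
 * bytes, amortizing one page alloc over 16 packets.  The page must
 * be allocated with __GFP_COMP for the get_page() refcounting. */
#include <linux/mm.h>

#define RX_FRAG_SIZE	2048
#define RX_PAGE_ORDER	3

struct rx_page_frag {
	struct page	*page;
	unsigned int	offset;
};

static void *rx_frag_get(struct rx_page_frag *f)
{
	void *buf;

	if (f->offset + RX_FRAG_SIZE > (PAGE_SIZE << RX_PAGE_ORDER))
		return NULL;	/* page used up; caller allocs a new one */

	buf = page_address(f->page) + f->offset;
	f->offset += RX_FRAG_SIZE;
	get_page(f->page);	/* each in-flight frame pins the page */
	return buf;
}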
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer