From: Matt Mackall <mpm@selenic.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Pekka J Enberg <penberg@cs.helsinki.fi>,
Ingo Molnar <mingo@elte.hu>, Hugh Dickins <hugh@veritas.com>,
Andi Kleen <andi@firstfloor.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] greatly reduce SLOB external fragmentation
Date: Thu, 10 Jan 2008 13:44:42 -0600
Message-ID: <1199994282.5331.173.camel@cinder.waste.org>
In-Reply-To: <Pine.LNX.4.64.0801101118070.20353@schroedinger.engr.sgi.com>
On Thu, 2008-01-10 at 11:24 -0800, Christoph Lameter wrote:
> On Thu, 10 Jan 2008, Matt Mackall wrote:
>
> > One idea I've been kicking around is pushing the boundary for the buddy
> > allocator back a bit (to 64k, say) and using SL*B under that. The page
> > allocators would call into buddy for larger than 64k (rare!) and SL*B
> > otherwise. This would let us greatly improve our handling of things like
> > task structs and skbs and possibly also things like 8k stacks and jumbo
> > frames. As SL*B would never be competing with the page allocator for
> > contiguous pages (the buddy allocator's granularity would be 64k), I
> > don't think this would exacerbate the page-level fragmentation issues.
>
> This would create another large page size (and that would have my
> enthusiastic support).
Well, I think we'd still have the same page size, in the sense that we'd
have a struct page for every hardware page and we'd still have hardware
page-sized pages in the page cache. We'd just change how we allocated
them. Right now we've got a stack that looks like:

  buddy / page allocator
  SL*B allocator
  kmalloc

And we'd change that to:

  buddy allocator
  SL*B allocator
  page allocator / kmalloc

So get_free_page() would still hand you back a hardware page; it would
just do it through SL*B.
> It would decrease listlock effect drastically for SLUB.
Not sure what you're referring to here.
> However, isn't this basically confessing that the page allocator is not
> efficient for 4k page allocations?
Well, I wasn't thinking of doing this for any performance reasons. But
there certainly could be some.
--
Mathematics is the supreme nostalgia of our time.