From: Matthew Dobson <colpatch@us.ibm.com>
To: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Paul Jackson <pj@sgi.com>,
bcrl@kvack.org, clameter@engr.sgi.com,
linux-kernel@vger.kernel.org, sri@us.ibm.com, andrea@suse.de,
pavel@suse.cz, linux-mm@kvack.org
Subject: Re: [patch 0/9] Critical Mempools
Date: Mon, 30 Jan 2006 14:38:59 -0800
Message-ID: <43DE9583.5050700@us.ibm.com>
In-Reply-To: <1138443711.8657.16.camel@localhost>
Pekka Enberg wrote:
> Hi,
>
> On Fri, 2006-01-27 at 16:41 -0800, Matthew Dobson wrote:
>
>>Now, a few pages of memory could be incredibly crucial, since
>>we're discussing an emergency (presumably) low-mem situation, but if
>>we're going to be getting several requests for the same
>>slab/kmalloc-size then we're probably better off giving a whole page to
>>the slab allocator. This is pure speculation, of course... :)
>
>
> Yeah but even then there's no guarantee that the critical allocations
> will be serviced first. The slab allocator can as well be giving away
> bits of the fresh page to non-critical allocations. For the exact same
> reason, I don't think it's enough that you pass a subsystem-specific
> page pool to the slab allocator.
Well, it would give at least one object from the new slab to the critical
request, but you're right, the rest of the slab could be allocated to
non-critical users. I had planned on a small follow-on patch to add
exclusivity to mempool/critical slab pages, but going a different route
seems to be the consensus.
> Sorry if this has been explained before but why aren't mempools
> sufficient for your purposes? Also one more alternative would be to
> create a separate object cache for each subsystem-specific critical
> allocation and implement an internal "page pool" for the slab allocator
> so that you could specify the number of pages an object cache
> guarantees to always hold on to.
Mempools aren't sufficient because in order to create a real critical pool
for the whole networking subsystem, we'd have to create dozens of mempools,
one each for all the different slabs & kmalloc sizes the networking stack
requires, plus another for whole pages. Not impossible, but U-G-L-Y. And
wasteful. Creating all those mempools is surely more wasteful than
creating one reasonably sized pool to back ALL the allocations. Or, at
least, such was my rationale... :)
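To make the "dozens of mempools" point concrete, here is a rough sketch (my illustration only -- the pool sizes and reserve counts are invented, and this is not code from the patch set) of what per-size critical pools for networking would look like with the stock mempool API from linux/mempool.h:

        /* Hypothetical: one mempool per kmalloc size the net stack uses,
         * plus one more for whole pages.  Every pool pins its own reserve. */
        #define NET_CRIT_MIN 16                 /* invented reserve per pool */

        static mempool_t *crit_pools[8];
        static mempool_t *crit_pages;

        static int __init net_crit_init(void)
        {
                size_t sizes[] = { 32, 64, 128, 256, 512, 1024, 2048, 4096 };
                int i;

                for (i = 0; i < ARRAY_SIZE(sizes); i++) {
                        crit_pools[i] = mempool_create_kmalloc_pool(NET_CRIT_MIN,
                                                                    sizes[i]);
                        if (!crit_pools[i])
                                return -ENOMEM;
                }
                /* ...and yet another pool for order-0 pages. */
                crit_pages = mempool_create_page_pool(NET_CRIT_MIN, 0);
                return crit_pages ? 0 : -ENOMEM;
        }

Each pool above pins its own worst-case reserve, which is the waste being argued against: a single shared pool of pages could back all of these sizes from one reserve.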
-Matt
Thread overview: 16+ messages
2006-01-25 19:39 [patch 0/9] Critical Mempools Matthew Dobson
2006-01-26 17:57 ` Christoph Lameter
2006-01-26 23:01 ` Matthew Dobson
2006-01-26 23:18 ` Christoph Lameter
2006-01-26 23:32 ` Matthew Dobson
2006-01-27 0:03 ` Benjamin LaHaise
2006-01-27 0:27 ` Matthew Dobson
2006-01-27 7:35 ` Pekka Enberg
2006-01-27 10:10 ` Paul Jackson
2006-01-27 11:07 ` Pekka Enberg
2006-01-28 0:41 ` Matthew Dobson
2006-01-28 10:21 ` Pekka Enberg
2006-01-30 22:38 ` Matthew Dobson [this message]
2006-01-27 15:36 ` Jan Kiszka
2006-01-27 8:34 ` Sridhar Samudrala
2006-01-27 8:29 ` Sridhar Samudrala