From: Nick Piggin <npiggin@suse.de>
To: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
"Zhang, Yanmin" <yanmin_zhang@linux.intel.com>,
Lin Ming <ming.m.lin@intel.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [patch] SLQB slab allocator
Date: Fri, 23 Jan 2009 16:53:07 +0100
Message-ID: <20090123155307.GB14517@wotan.suse.de>
In-Reply-To: <alpine.DEB.1.10.0901231033210.32253@qirst.com>

On Fri, Jan 23, 2009 at 10:41:15AM -0500, Christoph Lameter wrote:
> On Fri, 23 Jan 2009, Nick Piggin wrote:
>
> > > No, it cannot, because in SLUB objects must come from the same page.
> > > Multiple objects in a queue will only ever require a single page, not
> > > multiple pages as in SLAB.
> >
> > I don't know how that solves the problem. A task with memory policy A
> > allocates an object, which pulls in the "fast" page under policy A and
> > takes an object from it. Then a context switch to a task with memory
> > policy B, which allocates another object, and that object is taken
> > from the page allocated under policy A. Right?
>
> Correct. But this is only an issue if you think of policies as applying
> to individual object allocations (as realized in SLAB). If policies only
> apply to pages (which is sufficient for balancing, IMHO), then this is
> okay.

According to the memory policy semantics, a task's memory policy is
supposed to apply to its slab allocations too.
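
To illustrate the scenario concretely, here is a minimal userspace toy
(all names are invented for illustration; this is not actual SLUB code).
The backing page is placed according to whichever task's policy was
current when the page was pulled in, and every later caller is served
from that same page regardless of its own policy:

#include <stdio.h>
#include <stdlib.h>

#define OBJS_PER_PAGE 4

struct toy_page {
	int policy_at_alloc;		/* policy that placed this "page" */
	int next_free;			/* next unallocated object */
	char objects[OBJS_PER_PAGE][8];	/* 8-byte objects from one page */
};

static struct toy_page *current_page;

static struct toy_page *alloc_backing_page(int policy)
{
	struct toy_page *p = calloc(1, sizeof(*p));	/* toy: no NULL check */

	p->policy_at_alloc = policy;
	return p;
}

static void *toy_slab_alloc(int callers_policy)
{
	/* Page placement obeys the policy of whoever pulls the page in... */
	if (!current_page || current_page->next_free == OBJS_PER_PAGE)
		current_page = alloc_backing_page(callers_policy);

	/* ...but every caller is then served from that same page. */
	printf("task with policy %d got object placed under policy %d\n",
	       callers_policy, current_page->policy_at_alloc);
	return current_page->objects[current_page->next_free++];
}

int main(void)
{
	toy_slab_alloc(0);	/* task A pulls the page in under policy 0 */
	toy_slab_alloc(1);	/* task B gets memory placed under policy 0 */
	return 0;
}

The second call prints that a task with policy 1 received an object
placed under policy 0, which is exactly the per-object violation being
argued about; per-object policy application, as in SLAB, avoids it.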
> > (OK this doesn't give the wrong policy 100% of the time; I thought
> > there could have been a context switch race during page allocation
> > that would result in 100% incorrect, but anyway it could still be
> > significantly incorrect couldn't it?)
>
> Memory policies are applied in a fuzzy way anyway. A context switch can
> result in page allocation action that changes the expected interleave
> pattern. Page populations in an address space depend on the task policy,
> so the exact policy applied to a page depends on the task. This isn't an
> exact thing.

There are other memory policies than just interleave, though.
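
For reference, and as an aside of mine rather than something from the
exchange above: besides MPOL_INTERLEAVE, Linux has MPOL_DEFAULT,
MPOL_PREFERRED and MPOL_BIND, and MPOL_BIND in particular is a hard
placement constraint rather than a balancing hint. A minimal userspace
example using the real set_mempolicy(2) call (build with -lnuma):

#include <numaif.h>	/* set_mempolicy(), MPOL_BIND; from libnuma */
#include <stdio.h>

int main(void)
{
	unsigned long nodemask = 1UL << 0;	/* allow node 0 only */

	/*
	 * MPOL_BIND is a hard constraint: allocations that cannot be
	 * satisfied from node 0 fail rather than spilling elsewhere.
	 * So "fuzzy" per-page application is a semantic question here,
	 * not just a balancing one.
	 */
	if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8))
		perror("set_mempolicy");
	return 0;
}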
> > "the first cpu will consume more and more memory from the page allocator
> > whereas the second will build up huge per cpu lists"
> >
> > And this is wrong. There is another possible issue where every single
> > object on the freelist might come from a different (and otherwise free)
> > page, and thus e.g. 100 8-byte objects might consume 400K.
> >
> > That's not an invalid concern, but I think it will be quite rare, and
> > the periodic queue trimming should naturally help, because it will
> > cycle out those objects; if new allocations are needed, they will
> > come from new pages, which can be packed more densely.
>
> Well, but you said that you would defer the trimming (due to latency
> concerns). The longer you defer, the larger the lists will get.

But that is wrong. The lists obviously have high water marks that get
trimmed down. Periodic trimming, as I keep saying, is already so
infrequent that it is basically irrelevant (millions of objects per CPU
can be allocated between successive trimming passes anyway).
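
To make the watermark point concrete, here is a toy sketch of the idea
(invented names and numbers; this is not SLQB's actual code or its real
watermark values): a per-CPU free list that trims itself back to a
low-water target whenever a free pushes it past the high-water mark,
independently of any periodic trim.

#include <stdio.h>
#include <stdlib.h>

#define HIWATER	128	/* trim when the list grows past this... */
#define LOWATER	32	/* ...back down to this target */

struct toy_freelist {
	void *head;	/* objects linked through their first word */
	size_t nr;
};

/* Stand-in for handing storage back; a real allocator would return
 * pages to the page allocator once they become empty. */
static void flush_one(void *obj)
{
	free(obj);
}

static void toy_free(struct toy_freelist *q, void *obj)
{
	*(void **)obj = q->head;
	q->head = obj;
	q->nr++;

	/* Watermark trimming bounds the list even when the periodic
	 * trim is deferred for latency reasons. */
	if (q->nr > HIWATER) {
		while (q->nr > LOWATER) {
			void *victim = q->head;

			q->head = *(void **)victim;
			q->nr--;
			flush_one(victim);
		}
	}
}

int main(void)
{
	struct toy_freelist q = { NULL, 0 };
	int i;

	for (i = 0; i < 200; i++)
		toy_free(&q, malloc(16));
	printf("list length after 200 frees: %zu\n", q.nr);	/* bounded */
	return 0;
}

On the packing point quoted above, the worst case really is 100 live
8-byte objects each pinning an otherwise-free 4K page, i.e. 100 * 4K =
~400K, where dense packing would need less than one page; cycling queued
objects out means subsequent allocations come from fresh, densely packed
pages.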