From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
akpm@linux-foundation.org, ak@suse.de, eric.whitney@hp.com,
Mel Gorman <mel@csn.ul.ie>
Subject: Re: [PATCH] 2.6.23-rc6: Fix NUMA Memory Policy Reference Counting
Date: Mon, 17 Sep 2007 16:19:53 -0400
Message-ID: <1190060393.5460.143.camel@localhost>
In-Reply-To: <Pine.LNX.4.64.0709171235190.28178@schroedinger.engr.sgi.com>
On Mon, 2007-09-17 at 12:37 -0700, Christoph Lameter wrote:
> On Mon, 17 Sep 2007, Lee Schermerhorn wrote:
>
> > Here is the 23-rc6 version of the patch. Andi considers it a high
> > priority bug fix for .23. I'm a bit uncomfortable pushing it this
> > late in the .23 cycle. I've not heard of problems without this
> > patch, but then, maybe no one notices if they leak a memory policy
> > struct now and then, or occasionally allocate memory on the wrong
> > node because they used a prematurely freed memory policy.
>
> The patch does require concurrent increments and decrements in the main
> fault path. That risks creating another bouncing cacheline under
> concurrent faults, which looks like it would cause a performance issue.
Only for vma policy, right? show_numa_maps() isn't a performance path,
and shared policies are already reference counted--just not unref'd!
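For illustration, here's a minimal userspace sketch of the contention
you're describing--every simulated fault does an atomic get/put on a
single shared counter, so the cacheline holding it ping-pongs between
cpus. The names and counts are made up for the demo; this is not the
kernel code:

/* illustrative only -- not kernel code; build with -std=c11 -pthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS	16
#define NFAULTS		(1 << 22)

static atomic_int refcnt = 1;	/* stand-in for a shared mempolicy refcount */

static void *fault_loop(void *arg)
{
	(void)arg;
	for (long i = 0; i < NFAULTS; i++) {
		atomic_fetch_add(&refcnt, 1);	/* "get" in the fault path */
		/* ... fault work would go here ... */
		atomic_fetch_sub(&refcnt, 1);	/* "put" when done */
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		if (pthread_create(&tid[i], NULL, fault_loop, NULL))
			exit(1);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("final refcnt = %d\n", atomic_load(&refcnt));
	return 0;
}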
>
> > Kernel Build [16cpu, 32GB, ia64] - average of 10 runs:
> >
> >               w/o patch           w/ refcount patch
> >             Avg   Std Devn           Avg   Std Devn
> > Real:    100.59       0.38        100.63       0.43
> > User:   1209.60       0.37       1209.91       0.31
> > System:   81.52       0.42         81.64       0.34
>
> Single threaded build? I would suggest trying to fault memory
> concurrently from multiple processors. You may not see this on a kernel
> build, even one run with -j16, because concurrent faults are rare.
Well, it was a 32-way parallel build [-j32] on a 16-cpu system--my usual
build method. But I'm guessing that all of the build tools are
single-threaded and all use default policy, so no reference counting is
needed.
I'm taking a look at your 'pft' program, and I'll try that.
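For reference, here's a rough sketch of that kind of concurrent-fault
test--not pft itself; the thread count and region size are arbitrary.
Each thread mmaps its own anonymous region and touches every page, so
the faults land concurrently on all cpus:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTHREADS		16
#define BYTES_PER_THREAD	(256UL << 20)	/* 256MB each */

static void *toucher(void *arg)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, BYTES_PER_THREAD, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	(void)arg;
	if (p == MAP_FAILED) {
		perror("mmap");
		return NULL;
	}
	for (unsigned long off = 0; off < BYTES_PER_THREAD; off += pagesize)
		p[off] = 1;	/* first write faults the page in */
	munmap(p, BYTES_PER_THREAD);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, toucher, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}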
I do have some ideas for enhancements to memtoy to test vma policies in
a multi-threaded task. My mmtrace tool already has the basic
multi-threading infrastructure--binding threads to cpus, allocating
node-local stacks, thread state structs, ...--that I can probably hack
for use in memtoy to provoke cacheline bouncing of the mem policy;
something like the sketch below. But if pft does the trick, I won't
rush the memtoy enhancements...
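Roughly, what I have in mind (cpu and node numbers are hypothetical, and
this is only a sketch, not the eventual memtoy code): bind each thread
to a cpu, then have them all fault disjoint chunks of one mapping that
carries a single vma policy installed with mbind(), so every fault looks
up the same mempolicy. Build with -pthread -lnuma:

#define _GNU_SOURCE
#include <numaif.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTHREADS	16
#define CHUNK		(64UL << 20)		/* 64MB per thread */
#define REGION		(NTHREADS * CHUNK)

static char *region;

static void *worker(void *arg)
{
	long id = (long)arg;
	long pagesize = sysconf(_SC_PAGESIZE);
	char *p = region + id * CHUNK;
	cpu_set_t set;

	/* bind this thread to one cpu, as mmtrace does */
	CPU_ZERO(&set);
	CPU_SET((int)id, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	/* every fault below looks up the single shared vma policy */
	for (unsigned long off = 0; off < CHUNK; off += pagesize)
		p[off] = 1;
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	unsigned long nodemask = 1;	/* node 0 -- arbitrary for the demo */

	region = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* one vma policy shared by all the faulting threads */
	if (mbind(region, REGION, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, 0))
		perror("mbind");

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, (void *)i);
	for (long i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}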
Meanwhile, we do have a mem policy ref counting bug in the mainline.
Later,
Lee