From: Pekka Enberg <penberg@cs.helsinki.fi>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Nick Piggin <npiggin@suse.de>,
Christoph Lameter <cl@linux-foundation.org>,
heiko.carstens@de.ibm.com, sachinp@in.ibm.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Tejun Heo <tj@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 1/3] slqb: Do not use DEFINE_PER_CPU for per-node data
Date: Sun, 20 Sep 2009 11:45:54 +0300 [thread overview]
Message-ID: <84144f020909200145w74037ab9vb66dae65d3b8a048@mail.gmail.com> (raw)
In-Reply-To: <1253302451-27740-2-git-send-email-mel@csn.ul.ie>
On Fri, Sep 18, 2009 at 10:34 PM, Mel Gorman <mel@csn.ul.ie> wrote:
> SLQB used a seemingly nice hack to allocate per-node data for the statically
> initialised caches. Unfortunately, due to some unknown per-cpu
> optimisation, these regions are being reused by something else and the
> per-node data is getting randomly scrambled. This patch fixes the
> problem, but it is not yet fully understood *why* it fixes the
> problem.
Ouch, that sounds bad. I guess it's an architecture-specific bug, as x86
works OK? Let's CC Tejun.
Nick, are you okay with this patch being merged for now?
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> ---
> mm/slqb.c | 16 ++++++++--------
> 1 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/mm/slqb.c b/mm/slqb.c
> index 4ca85e2..4d72be2 100644
> --- a/mm/slqb.c
> +++ b/mm/slqb.c
> @@ -1944,16 +1944,16 @@ static void init_kmem_cache_node(struct kmem_cache *s,
> static DEFINE_PER_CPU(struct kmem_cache_cpu, kmem_cache_cpus);
> #endif
> #ifdef CONFIG_NUMA
> -/* XXX: really need a DEFINE_PER_NODE for per-node data, but this is better than
> - * a static array */
> -static DEFINE_PER_CPU(struct kmem_cache_node, kmem_cache_nodes);
> +/* XXX: really need a DEFINE_PER_NODE for per-node data because a static
> + * array is wasteful */
> +static struct kmem_cache_node kmem_cache_nodes[MAX_NUMNODES];
> #endif
>
> #ifdef CONFIG_SMP
> static struct kmem_cache kmem_cpu_cache;
> static DEFINE_PER_CPU(struct kmem_cache_cpu, kmem_cpu_cpus);
> #ifdef CONFIG_NUMA
> -static DEFINE_PER_CPU(struct kmem_cache_node, kmem_cpu_nodes); /* XXX per-nid */
> +static struct kmem_cache_node kmem_cpu_nodes[MAX_NUMNODES]; /* XXX per-nid */
> #endif
> #endif
>
> @@ -1962,7 +1962,7 @@ static struct kmem_cache kmem_node_cache;
> #ifdef CONFIG_SMP
> static DEFINE_PER_CPU(struct kmem_cache_cpu, kmem_node_cpus);
> #endif
> -static DEFINE_PER_CPU(struct kmem_cache_node, kmem_node_nodes); /*XXX per-nid */
> +static struct kmem_cache_node kmem_node_nodes[MAX_NUMNODES]; /*XXX per-nid */
> #endif
>
> #ifdef CONFIG_SMP
> @@ -2918,15 +2918,15 @@ void __init kmem_cache_init(void)
> for_each_node_state(i, N_NORMAL_MEMORY) {
> struct kmem_cache_node *n;
>
> - n = &per_cpu(kmem_cache_nodes, i);
> + n = &kmem_cache_nodes[i];
> init_kmem_cache_node(&kmem_cache_cache, n);
> kmem_cache_cache.node_slab[i] = n;
> #ifdef CONFIG_SMP
> - n = &per_cpu(kmem_cpu_nodes, i);
> + n = &kmem_cpu_nodes[i];
> init_kmem_cache_node(&kmem_cpu_cache, n);
> kmem_cpu_cache.node_slab[i] = n;
> #endif
> - n = &per_cpu(kmem_node_nodes, i);
> + n = &kmem_node_nodes[i];
> init_kmem_cache_node(&kmem_node_cache, n);
> kmem_node_cache.node_slab[i] = n;
> }
> --
> 1.6.3.3
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org
>
Thread overview: 29+ messages
2009-09-18 19:34 [RFC PATCH 0/3] Hatchet job for SLQB on memoryless configurations Mel Gorman
2009-09-18 19:34 ` [PATCH 1/3] slqb: Do not use DEFINE_PER_CPU for per-node data Mel Gorman
2009-09-20 8:45 ` Pekka Enberg [this message]
2009-09-20 10:00 ` Tejun Heo
2009-09-20 10:12 ` Pekka Enberg
2009-09-20 15:55 ` Tejun Heo
2009-09-21 6:24 ` Pekka Enberg
2009-09-21 8:46 ` Mel Gorman
2009-09-21 8:30 ` Sachin Sant
2009-09-21 8:42 ` Mel Gorman
2009-09-21 9:00 ` Tejun Heo
2009-09-21 9:44 ` Mel Gorman
2009-09-21 9:53 ` Tejun Heo
2009-09-21 10:04 ` Mel Gorman
2009-09-21 9:02 ` Sachin Sant
2009-09-21 9:09 ` Mel Gorman
2009-09-21 13:04 ` Mel Gorman
2009-09-21 13:31 ` Pekka Enberg
2009-09-21 13:45 ` Tejun Heo
2009-09-21 13:57 ` Mel Gorman
2009-09-21 23:54 ` Benjamin Herrenschmidt
2009-09-20 14:04 ` Mel Gorman
2009-09-18 19:34 ` [PATCH 2/3] slqb: Treat pages freed on a memoryless node as local node Mel Gorman
2009-09-18 21:01 ` Christoph Lameter
2009-09-19 11:46 ` Mel Gorman
2009-09-21 17:34 ` Lee Schermerhorn
2009-09-22 13:33 ` Mel Gorman
2009-09-22 18:29 ` Lee Schermerhorn
2009-09-18 19:34 ` [PATCH 3/3] slqb: Allow SLQB to be used on PPC and S390 Mel Gorman