* [PATCH] slub: move kmem_cache_node into its own cacheline
@ 2010-05-20 23:47 Alexander Duyck
2010-05-21 4:59 ` Pekka Enberg
2010-05-21 18:06 ` Christoph Lameter
From: Alexander Duyck @ 2010-05-20 23:47 UTC
To: penberg, cl; +Cc: linux-mm
This patch is meant to improve the performance of SLUB by moving the local
kmem_cache_node lock into its own cacheline, separate from kmem_cache.
This is accomplished by simply removing the local_node field when NUMA is enabled.

On my system with 2 nodes I saw around a 5% performance increase, with
hackbench times dropping from 6.2 seconds to 5.9 seconds on average. I
suspect the gain would grow as the number of nodes increases, but I do not
currently have the data to back that up.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
include/linux/slub_def.h | 11 ++++-------
mm/slub.c | 33 +++++++++++----------------------
2 files changed, 15 insertions(+), 29 deletions(-)
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 0249d41..e6217bb 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -52,7 +52,7 @@ struct kmem_cache_node {
atomic_long_t total_objects;
struct list_head full;
#endif
-};
+} ____cacheline_internodealigned_in_smp;
/*
* Word size structure that can be atomically updated or read and that
@@ -75,12 +75,6 @@ struct kmem_cache {
int offset; /* Free pointer offset. */
struct kmem_cache_order_objects oo;
- /*
- * Avoid an extra cache line for UP, SMP and for the node local to
- * struct kmem_cache.
- */
- struct kmem_cache_node local_node;
-
/* Allocation and freeing of slabs */
struct kmem_cache_order_objects max;
struct kmem_cache_order_objects min;
@@ -102,6 +96,9 @@ struct kmem_cache {
*/
int remote_node_defrag_ratio;
struct kmem_cache_node *node[MAX_NUMNODES];
+#else
+ /* Avoid an extra cache line for UP */
+ struct kmem_cache_node local_node;
#endif
};
diff --git a/mm/slub.c b/mm/slub.c
index 461314b..8af03de 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2141,7 +2141,7 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
for_each_node_state(node, N_NORMAL_MEMORY) {
struct kmem_cache_node *n = s->node[node];
- if (n && n != &s->local_node)
+ if (n)
kmem_cache_free(kmalloc_caches, n);
s->node[node] = NULL;
}
@@ -2150,33 +2150,22 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
static int init_kmem_cache_nodes(struct kmem_cache *s, gfp_t gfpflags)
{
int node;
- int local_node;
-
- if (slab_state >= UP && (s < kmalloc_caches ||
- s >= kmalloc_caches + KMALLOC_CACHES))
- local_node = page_to_nid(virt_to_page(s));
- else
- local_node = 0;
for_each_node_state(node, N_NORMAL_MEMORY) {
struct kmem_cache_node *n;
- if (local_node == node)
- n = &s->local_node;
- else {
- if (slab_state == DOWN) {
- early_kmem_cache_node_alloc(gfpflags, node);
- continue;
- }
- n = kmem_cache_alloc_node(kmalloc_caches,
- gfpflags, node);
-
- if (!n) {
- free_kmem_cache_nodes(s);
- return 0;
- }
+ if (slab_state == DOWN) {
+ early_kmem_cache_node_alloc(gfpflags, node);
+ continue;
+ }
+ n = kmem_cache_alloc_node(kmalloc_caches,
+ gfpflags, node);
+ if (!n) {
+ free_kmem_cache_nodes(s);
+ return 0;
}
+
s->node[node] = n;
init_kmem_cache_node(n, s);
}
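For context on why the patch only touches the init and free paths: the node lookup helper in mm/slub.c already hides the UP/NUMA layout difference behind one accessor, so once every node, including the local one, is reachable through s->node[], the NUMA fast path is unchanged. A sketch, paraphrased from the mm/slub.c of this era (check the tree for the authoritative version):

/* Paraphrased from 2.6.34-era mm/slub.c; not part of the patch above. */
static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
{
#ifdef CONFIG_NUMA
	return s->node[node];
#else
	return &s->local_node;
#endif
}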
* Re: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-20 23:47 [PATCH] slub: move kmem_cache_node into its own cacheline Alexander Duyck
@ 2010-05-21 4:59 ` Pekka Enberg
2010-05-21 14:41 ` Shi, Alex
2010-05-21 18:06 ` Christoph Lameter
From: Pekka Enberg @ 2010-05-21 4:59 UTC
To: Alexander Duyck; +Cc: cl, linux-mm, Alex Shi, Zhang Yanmin
On Fri, May 21, 2010 at 2:47 AM, Alexander Duyck
<alexander.h.duyck@intel.com> wrote:
> This patch is meant to improve the performance of SLUB by moving the local
> kmem_cache_node lock into its own cacheline, separate from kmem_cache.
> This is accomplished by simply removing the local_node field when NUMA is enabled.
>
> On my system with 2 nodes I saw around a 5% performance increase, with
> hackbench times dropping from 6.2 seconds to 5.9 seconds on average. I
> suspect the gain would grow as the number of nodes increases, but I do not
> currently have the data to back that up.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Yanmin, does this fix the hackbench regression for you?
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 4:59 ` Pekka Enberg
@ 2010-05-21 14:41 ` Shi, Alex
2010-05-21 18:03 ` Christoph Lameter
2010-05-24 18:14 ` Pekka Enberg
From: Shi, Alex @ 2010-05-21 14:41 UTC
To: Pekka Enberg, Duyck, Alexander H
Cc: cl@linux.com, linux-mm@kvack.org, Zhang Yanmin, Chen, Tim C
I have tested this patch on top of the latest Linus kernel tree. It really works!
We saw about a 10% improvement in hackbench thread mode and an 8%~13% improvement in process mode on our 2-socket Westmere machine, and about a 7% hackbench improvement on a 2-socket NHM machine.

Alex
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 14:41 ` Shi, Alex
@ 2010-05-21 18:03 ` Christoph Lameter
2010-05-24 18:14 ` Pekka Enberg
From: Christoph Lameter @ 2010-05-21 18:03 UTC
To: Shi, Alex
Cc: Pekka Enberg, Duyck, Alexander H, linux-mm@kvack.org,
Zhang Yanmin, Chen, Tim C
Yes, right. In the SMP case the cacheline that also contains local_node is
dirtied by the locking and will evict the cacheline used to look up the
per-cpu vector and other important information. The per-cpu patches
aggravated that problem by making more use of the fields that are evicted
along with the cacheline.
Acked-by: Christoph Lameter <cl@linux-foundation.org>
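To make the eviction pattern concrete, here is a minimal user-space sketch (hypothetical field names, assuming 64-byte cachelines; not the actual kernel structures) contrasting a node lock embedded next to hot read-mostly fields with one pushed onto its own line, as the patch effectively does by allocating kmem_cache_node separately:

#include <stdio.h>
#include <stddef.h>

/* Embedded layout: the lock shares a 64-byte line with fields the
 * allocation fast path reads, so every lock write invalidates that
 * line on other CPUs, which is the effect described above. */
struct cache_embedded {
	void *cpu_lookup;	/* read-mostly, hot */
	long hot_a, hot_b;
	int node_lock;		/* written on every list operation */
};

/* Separated layout: the lock gets a line of its own. */
struct cache_split {
	void *cpu_lookup;
	long hot_a, hot_b;
	int node_lock __attribute__((__aligned__(64)));
};

int main(void)
{
	printf("embedded: lock at offset %zu (same line as cpu_lookup)\n",
	       offsetof(struct cache_embedded, node_lock));
	printf("split:    lock at offset %zu (its own line)\n",
	       offsetof(struct cache_split, node_lock));
	return 0;
}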
* Re: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-20 23:47 [PATCH] slub: move kmem_cache_node into its own cacheline Alexander Duyck
2010-05-21 4:59 ` Pekka Enberg
@ 2010-05-21 18:06 ` Christoph Lameter
2010-05-21 18:17 ` Duyck, Alexander H
From: Christoph Lameter @ 2010-05-21 18:06 UTC
To: Alexander Duyck; +Cc: Pekka Enberg, linux-mm
On Thu, 20 May 2010, Alexander Duyck wrote:
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 0249d41..e6217bb 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -52,7 +52,7 @@ struct kmem_cache_node {
> atomic_long_t total_objects;
> struct list_head full;
> #endif
> -};
> +} ____cacheline_internodealigned_in_smp;
What does this do? Leftovers?
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 18:06 ` Christoph Lameter
@ 2010-05-21 18:17 ` Duyck, Alexander H
2010-05-21 18:24 ` Christoph Lameter
From: Duyck, Alexander H @ 2010-05-21 18:17 UTC
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm@kvack.org
Christoph Lameter wrote:
> What does this do? Leftovers?
It aligns the structure so that no two instances can occupy a shared cacheline. I put that in place to avoid any false sharing between objects that would otherwise fit into the same cacheline on a NUMA system.
Thanks,
Alex
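For reference, the annotation comes from include/linux/cache.h. Paraphrased from the headers of this era (a sketch; consult the actual tree for the authoritative definition), it expands to an alignment of one internode cacheline on SMP builds and to nothing otherwise:

/* Paraphrased from include/linux/cache.h. On most architectures
 * INTERNODE_CACHE_SHIFT equals L1_CACHE_SHIFT, but on x86 vSMP
 * configurations it is 12, giving 4096-byte (page) alignment. */
#if !defined(____cacheline_internodealigned_in_smp)
#if defined(CONFIG_SMP)
#define ____cacheline_internodealigned_in_smp \
	__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))
#else
#define ____cacheline_internodealigned_in_smp
#endif
#endif

The vSMP case is what produces the oversized kmem_cache structures Christoph mentions below.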
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 18:17 ` Duyck, Alexander H
@ 2010-05-21 18:24 ` Christoph Lameter
2010-05-21 18:33 ` Christoph Lameter
From: Christoph Lameter @ 2010-05-21 18:24 UTC
To: Duyck, Alexander H; +Cc: Pekka Enberg, linux-mm@kvack.org
On Fri, 21 May 2010, Duyck, Alexander H wrote:
> Christoph Lameter wrote:
> > What does this do? Leftovers?
>
> It aligns the structure so that no two instances can occupy a shared cacheline. I put that in place to avoid any false sharing between objects that would otherwise fit into the same cacheline on a NUMA system.
It has no effect in the NUMA case, since the slab allocator is used to
allocate the object; alignment would have to be specified at slab creation.
Maybe in the SMP case? But there struct kmem_cache_node is part of
struct kmem_cache.
Internode aligned? That creates kmem_cache structures larger than 4k on some
platforms.
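A sketch of what "specified at slab creation" means: for slab-allocated objects, per-object alignment comes from the align argument (or the SLAB_HWCACHE_ALIGN flag) passed to kmem_cache_create(), not from an attribute on the type. The cache below is hypothetical; in SLUB the kmem_cache_node objects actually come out of kmalloc_caches, so this is purely illustrative:

/* Hypothetical kernel-side example, not part of the patch.
 * Assumes linux/slab.h, linux/cache.h and linux/errno.h. */
static struct kmem_cache *node_demo_cache;

static int __init node_demo_init(void)
{
	node_demo_cache = kmem_cache_create("node_demo",
					    sizeof(struct kmem_cache_node),
					    cache_line_size(),	/* align */
					    SLAB_HWCACHE_ALIGN,
					    NULL);		/* no ctor */
	return node_demo_cache ? 0 : -ENOMEM;
}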
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 18:24 ` Christoph Lameter
@ 2010-05-21 18:33 ` Christoph Lameter
2010-05-21 20:23 ` Duyck, Alexander H
From: Christoph Lameter @ 2010-05-21 18:33 UTC
To: Duyck, Alexander H; +Cc: Pekka Enberg, linux-mm@kvack.org
struct kmem_cache is allocated without any alignment, so the alignment
spec does not work.
If you want this then you also need to align struct kmem_cache itself;
internode alignment would require kmem_cache to be page aligned. So let's
drop that hunk from this patch for now. A separate patch may convince us
to merge aligning kmem_cache_node within kmem_cache.
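A user-space sketch of Christoph's point (hypothetical demo, not kernel code): an aligned member only fixes the member's offset inside the struct, and the run-time address is base plus offset, so an unaligned allocation leaves the member unaligned too:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct outer {
	char before;
	struct { long data[4]; } inner __attribute__((__aligned__(64)));
};

int main(void)
{
	/* offsetof(inner) is a multiple of 64, but malloc() typically
	 * guarantees only 8- or 16-byte alignment for the base pointer,
	 * so base + offset need not be 64-byte aligned. */
	struct outer *o = malloc(sizeof(*o));

	printf("base %% 64 = %lu, &o->inner %% 64 = %lu\n",
	       (unsigned long)((uintptr_t)o % 64),
	       (unsigned long)((uintptr_t)&o->inner % 64));
	free(o);
	return 0;
}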
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 18:33 ` Christoph Lameter
@ 2010-05-21 20:23 ` Duyck, Alexander H
2010-05-21 20:41 ` Christoph Lameter
From: Duyck, Alexander H @ 2010-05-21 20:23 UTC
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm@kvack.org
Christoph Lameter wrote:
> struct kmem_cache is allocated without any alignment, so the alignment
> spec does not work.
>
> If you want this then you also need to align struct kmem_cache itself;
> internode alignment would require kmem_cache to be page aligned. So let's
> drop that hunk from this patch for now. A separate patch may convince us
> to merge aligning kmem_cache_node within kmem_cache.
I will pull that hunk out, test it, and resubmit within the next hour or so if everything looks good.
Thanks,
Alex
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 20:23 ` Duyck, Alexander H
@ 2010-05-21 20:41 ` Christoph Lameter
From: Christoph Lameter @ 2010-05-21 20:41 UTC
To: Duyck, Alexander H; +Cc: Pekka Enberg, linux-mm@kvack.org
On Fri, 21 May 2010, Duyck, Alexander H wrote:
>
> I will pull that hunk out, test it, and resubmit within the next hour or so if everything looks good.
Again, internode alignment may need page alignment; you may be getting into
messy issues. The architectures requiring internode alignment are NUMA
anyway, so it may not matter, because you only have local_node in the SMP
case.
Cacheline alignment therefore may be sufficient. But the variables at the
tail of the kmem_cache structure are mostly read-only, so maybe just forget
about the alignment. It likely makes no difference.
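The milder option mentioned here is plain L1 cacheline alignment, roughly as follows (paraphrased from the include/linux/cache.h of the period; a sketch, not authoritative):

/* SMP_CACHE_BYTES is typically L1_CACHE_BYTES, e.g. 64 on x86. */
#ifndef ____cacheline_aligned
#define ____cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
#endif

#ifdef CONFIG_SMP
#define ____cacheline_aligned_in_smp ____cacheline_aligned
#else
#define ____cacheline_aligned_in_smp
#endif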
* Re: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-21 14:41 ` Shi, Alex
2010-05-21 18:03 ` Christoph Lameter
@ 2010-05-24 18:14 ` Pekka Enberg
2010-05-26 0:52 ` Shi, Alex
From: Pekka Enberg @ 2010-05-24 18:14 UTC
To: Shi, Alex
Cc: Duyck, Alexander H, cl@linux.com, linux-mm@kvack.org,
Zhang Yanmin, Chen, Tim C
Applied, thanks!
* RE: [PATCH] slub: move kmem_cache_node into its own cacheline
2010-05-24 18:14 ` Pekka Enberg
@ 2010-05-26 0:52 ` Shi, Alex
From: Shi, Alex @ 2010-05-26 0:52 UTC
To: Pekka Enberg
Cc: Duyck, Alexander H, cl@linux.com, linux-mm@kvack.org,
Zhang Yanmin, Chen, Tim C
Tim reminded me that I need to explicitly add the following line to confirm my agreement with this patch. Sorry for missing this.
Tested-by: Alex Shi <alex.shi@intel.com>