linux-mm.kvack.org archive mirror
* [rfc PATCH]slub: per cpu partial statistics change
@ 2012-02-03  8:11 Alex,Shi
  2012-02-03 15:27 ` Christoph Lameter
  0 siblings, 1 reply; 11+ messages in thread
From: Alex,Shi @ 2012-02-03  8:11 UTC (permalink / raw)
  To: cl@linux.com
  Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org


This patch splits cpu_partial_free into two parts: cpu_partial_node, the
number of PCP (per cpu partial) refills from the node partial list; and
cpu_partial_free, which keeps the old name and counts PCP refills in the
slab_free slow path. A new statistic, 'release_cpu_partial', is added to
count PCP release events. This information is useful when tuning the PCP.

The slabinfo.c code is unchanged, since cpu_partial_node is not on the slow path.

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 include/linux/slub_def.h |    6 ++++--
 mm/slub.c                |   12 +++++++++---
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a32bcfd..57ea943 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -21,7 +21,7 @@ enum stat_item {
 	FREE_FROZEN,		/* Freeing to frozen slab */
 	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
 	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
-	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from partial list */
+	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
 	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
 	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
 	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
@@ -37,7 +37,9 @@ enum stat_item {
 	CMPXCHG_DOUBLE_CPU_FAIL,/* Failure of this_cpu_cmpxchg_double */
 	CMPXCHG_DOUBLE_FAIL,	/* Number of times that cmpxchg double did not match */
 	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
-	CPU_PARTIAL_FREE,	/* USed cpu partial on free */
+	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
+	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
+	RELEASE_CPU_PARTIAL,	/* Release per cpu partial */
 	NR_SLUB_STAT_ITEMS };
 
 struct kmem_cache_cpu {
diff --git a/mm/slub.c b/mm/slub.c
index 4907563..5dd299c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1560,6 +1560,7 @@ static void *get_partial_node(struct kmem_cache *s,
 		} else {
 			page->freelist = t;
 			available = put_cpu_partial(s, page, 0);
+			stat(s, CPU_PARTIAL_NODE);
 		}
 		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
 			break;
@@ -1973,6 +1974,7 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 				local_irq_restore(flags);
 				pobjects = 0;
 				pages = 0;
+				stat(s, RELEASE_CPU_PARTIAL);
 			}
 		}
 
@@ -1984,7 +1986,6 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		page->next = oldpage;
 
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
-	stat(s, CPU_PARTIAL_FREE);
 	return pobjects;
 }
 
@@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		 * If we just froze the page then put it onto the
 		 * per cpu partial list.
 		 */
-		if (new.frozen && !was_frozen)
+		if (new.frozen && !was_frozen) {
 			put_cpu_partial(s, page, 1);
-
+			stat(s, CPU_PARTIAL_FREE);
+		}
 		/*
 		 * The list lock was not taken therefore no list
 		 * activity can be necessary.
@@ -5059,6 +5061,8 @@ STAT_ATTR(CMPXCHG_DOUBLE_CPU_FAIL, cmpxchg_double_cpu_fail);
 STAT_ATTR(CMPXCHG_DOUBLE_FAIL, cmpxchg_double_fail);
 STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
+STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
+STAT_ATTR(RELEASE_CPU_PARTIAL, release_cpu_partial);
 #endif
 
 static struct attribute *slab_attrs[] = {
@@ -5124,6 +5128,8 @@ static struct attribute *slab_attrs[] = {
 	&cmpxchg_double_cpu_fail_attr.attr,
 	&cpu_partial_alloc_attr.attr,
 	&cpu_partial_free_attr.attr,
+	&cpu_partial_node_attr.attr,
+	&release_cpu_partial_attr.attr,
 #endif
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
-- 
1.7.5.1
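
With CONFIG_SLUB_STATS enabled, each of these counters is exported as a
per-cpu event file under /sys/kernel/slab/<cache>/. A minimal userspace
sketch of reading the counters this patch touches, assuming a kmalloc-64
cache; the cache name and the short attribute list are illustrative, and
release_cpu_partial is renamed later in this thread:

#include <stdio.h>

/* Attribute names matching the STAT_ATTR() entries in the patch. */
static const char *stats[] = {
	"cpu_partial_alloc",
	"cpu_partial_free",
	"cpu_partial_node",	/* new: PCP refills from node partial */
	"release_cpu_partial",	/* new: PCP releases */
};

int main(void)
{
	char path[128], line[256];
	unsigned int i;

	for (i = 0; i < sizeof(stats) / sizeof(stats[0]); i++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/kernel/slab/kmalloc-64/%s", stats[i]);
		f = fopen(path, "r");
		if (!f)
			continue;	/* missing file: no CONFIG_SLUB_STATS? */
		if (fgets(line, sizeof(line), f))
			printf("%-20s %s", stats[i], line);
		fclose(f);
	}
	return 0;
}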




* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-03  8:11 [rfc PATCH]slub: per cpu partial statistics change Alex,Shi
@ 2012-02-03 15:27 ` Christoph Lameter
  2012-02-04  0:56   ` Alex Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Lameter @ 2012-02-03 15:27 UTC (permalink / raw)
  To: Alex,Shi; +Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org

On Fri, 3 Feb 2012, Alex,Shi wrote:

> This patch splits cpu_partial_free into two parts: cpu_partial_node, the
> number of PCP (per cpu partial) refills from the node partial list; and
> cpu_partial_free, which keeps the old name and counts PCP refills in the
> slab_free slow path. A new statistic, 'release_cpu_partial', is added to
> count PCP release events. This information is useful when tuning the PCP.

Releasing? The code where you inserted the new statistics counts the pages
put on the cpu partial list when refilling from the node partial list.

See more below.

>  struct kmem_cache_cpu {
> diff --git a/mm/slub.c b/mm/slub.c
> index 4907563..5dd299c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1560,6 +1560,7 @@ static void *get_partial_node(struct kmem_cache *s,
>  		} else {
>  			page->freelist = t;
>  			available = put_cpu_partial(s, page, 0);
> +			stat(s, CPU_PARTIAL_NODE);

This is refilling the per cpu partial list from the node list.

>  		}
>  		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
>  			break;
> @@ -1973,6 +1974,7 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  				local_irq_restore(flags);
>  				pobjects = 0;
>  				pages = 0;
> +				stat(s, RELEASE_CPU_PARTIAL);

The callers count the cpu partial operations. Why is there now one in
put_cpu_partial? It is moving a page to the cpu partial list. Not
releasing it from the cpu partial list.

>
> @@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  		 * If we just froze the page then put it onto the
>  		 * per cpu partial list.
>  		 */
> -		if (new.frozen && !was_frozen)
> +		if (new.frozen && !was_frozen) {
>  			put_cpu_partial(s, page, 1);
> -
> +			stat(s, CPU_PARTIAL_FREE);

cpu partial list filled with a partial page created from a fully allocated
slab (which therefore was not on any list before).



* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-03 15:27 ` Christoph Lameter
@ 2012-02-04  0:56   ` Alex Shi
  2012-02-06 15:02     ` Christoph Lameter
  0 siblings, 1 reply; 11+ messages in thread
From: Alex Shi @ 2012-02-04  0:56 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org

On 02/03/2012 11:27 PM, Christoph Lameter wrote:

> On Fri, 3 Feb 2012, Alex,Shi wrote:
> 
>> This patch splits cpu_partial_free into two parts: cpu_partial_node, the
>> number of PCP (per cpu partial) refills from the node partial list; and
>> cpu_partial_free, which keeps the old name and counts PCP refills in the
>> slab_free slow path. A new statistic, 'release_cpu_partial', is added to
>> count PCP release events. This information is useful when tuning the PCP.
> 
> Releasing? The code where you inserted the new statistics counts the pages
> put on the cpu partial list when refilling from the node partial list.


Oops, are we talking about the same base kernel: Linus' tree?  :)
Here the releasing code is only called in the slow free path, when the
PCP is full at the same time, not when the PCP is refilled from the node
partial list.

More explanation below.

> See more below.

>

>>  struct kmem_cache_cpu {
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 4907563..5dd299c 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1560,6 +1560,7 @@ static void *get_partial_node(struct kmem_cache *s,
>>  		} else {
>>  			page->freelist = t;
>>  			available = put_cpu_partial(s, page, 0);
>> +			stat(s, CPU_PARTIAL_NODE);
> 
> This is refilling the per cpu partial list from the node list.


Yes, and the same as my explanation in the patch:
-       CPU_PARTIAL_FREE,       /* USed cpu partial on free */
+       CPU_PARTIAL_FREE,       /* Refill cpu partial on free */
+       CPU_PARTIAL_NODE,       /* Refill cpu partial from node partial */

> 
>>  		}
>>  		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
>>  			break;
>> @@ -1973,6 +1974,7 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>>  				local_irq_restore(flags);
>>  				pobjects = 0;
>>  				pages = 0;
>> +				stat(s, RELEASE_CPU_PARTIAL);
> 
> The callers count the cpu partial operations. Why is there now one in
> put_cpu_partial? It is moving a page to the cpu partial list. Not
> releasing it from the cpu partial list.


All old PCP pages are drained out on the running CPU by
unfreeze_partials(), even if the name is not accurate here. The new page
does not lose its counting; it is still counted as CPU_PARTIAL_FREE in
the following change, as before.

If 'release' is not the right word, maybe name it drain_cpu_partial or
unfreeze_cpu_partial?

> 
>>
>> @@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>>  		 * If we just froze the page then put it onto the
>>  		 * per cpu partial list.
>>  		 */
>> -		if (new.frozen && !was_frozen)
>> +		if (new.frozen && !was_frozen) {
>>  			put_cpu_partial(s, page, 1);
>> -
>> +			stat(s, CPU_PARTIAL_FREE);
> 
> cpu partial list filled with a partial page created from a fully allocated
> slab (which therefore was not on any list before).


Yes, but the counting is not new here. It just moved out of
put_cpu_partial().

> 
> 



* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-04  0:56   ` Alex Shi
@ 2012-02-06 15:02     ` Christoph Lameter
  2012-02-07  5:06       ` Alex,Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Lameter @ 2012-02-06 15:02 UTC (permalink / raw)
  To: Alex Shi; +Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org

On Sat, 4 Feb 2012, Alex Shi wrote:

> On 02/03/2012 11:27 PM, Christoph Lameter wrote:
>
> > On Fri, 3 Feb 2012, Alex,Shi wrote:
> >
> >> This patch splits cpu_partial_free into two parts: cpu_partial_node, the
> >> number of PCP (per cpu partial) refills from the node partial list; and
> >> cpu_partial_free, which keeps the old name and counts PCP refills in the
> >> slab_free slow path. A new statistic, 'release_cpu_partial', is added to
> >> count PCP release events. This information is useful when tuning the PCP.
> >
> > Releasing? The code where you inserted the new statistics counts the pages
> > put on the cpu partial list when refilling from the node partial list.
>
>
> Oops, are we talking about the same base kernel: Linus' tree?  :)
> Here the releasing code is only called in the slow free path, when the
> PCP is full at the same time, not when the PCP is refilled from the node
> partial list.

Well, the term 'releasing' is unfortunate. Per cpu partial pages can
migrate to and from the per node partial list and become per cpu slabs
under allocation.

> >> @@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> >>  		 * If we just froze the page then put it onto the
> >>  		 * per cpu partial list.
> >>  		 */
> >> -		if (new.frozen && !was_frozen)
> >> +		if (new.frozen && !was_frozen) {
> >>  			put_cpu_partial(s, page, 1);
> >> -
> >> +			stat(s, CPU_PARTIAL_FREE);
> >
> > cpu partial list filled with a partial page created from a fully allocated
> > slab (which therefore was not on any list before).
>
>
> Yes, but the counting is not new here. It just moved out of
> put_cpu_partial().

OK, but then you also added different accounting in put_cpu_partial.


* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-06 15:02     ` Christoph Lameter
@ 2012-02-07  5:06       ` Alex,Shi
  2012-02-07 15:12         ` Christoph Lameter
  0 siblings, 1 reply; 11+ messages in thread
From: Alex,Shi @ 2012-02-07  5:06 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org


> Well the term releasing is unfortunate. per cpu partial pages can migrate
> to and from the per node partial list and become per cpu slabs under
> allocation.

Yes, the word is really not good here. How about CPU_PARTIAL_UNFREEZE,
since unfreeze_partials() is just called before?
> 
> > >> @@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> > >>  		 * If we just froze the page then put it onto the
> > >>  		 * per cpu partial list.
> > >>  		 */
> > >> -		if (new.frozen && !was_frozen)
> > >> +		if (new.frozen && !was_frozen) {
> > >>  			put_cpu_partial(s, page, 1);
> > >> -
> > >> +			stat(s, CPU_PARTIAL_FREE);
> > >
> > > cpu partial list filled with a partial page created from a fully allocated
> > > slab (which therefore was not on any list before).
> >
> >
> > Yes, but the counting is not new here. It just moved out of
> > put_cpu_partial().
> 
> OK, but then you also added different accounting in put_cpu_partial.

Yes, I want to account the unfreeze_partials() actions in
put_cpu_partial(). The unfreeze accounting doesn't conflict with or
repeat the cpu_partial_free accounting, since they are different actions
on the PCP.

According to your comments above, how about the new patch with the new
accounting name?
------------
From bd2b79297b4550035b0b0ec16dd0f3008a3a76dc Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@intel.com>
Date: Fri, 3 Feb 2012 23:34:56 +0800
Subject: [PATCH] slub: per cpu partial statistics change

This patch splits cpu_partial_free into two parts: cpu_partial_node, the
number of PCP (per cpu partial) refills from the node partial list; and
cpu_partial_free, which keeps the old name and counts PCP refills in the
slab_free slow path. A new statistic, 'cpu_partial_unfreeze', is added to
count PCP unfreeze events. This information is useful when tuning the PCP.

The slabinfo.c code is unchanged, since cpu_partial_node is not on the slow path.

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 include/linux/slub_def.h |    6 ++++--
 mm/slub.c                |   12 +++++++++---
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a32bcfd..2549483 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -21,7 +21,7 @@ enum stat_item {
 	FREE_FROZEN,		/* Freeing to frozen slab */
 	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
 	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
-	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from partial list */
+	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
 	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
 	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
 	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
@@ -37,7 +37,9 @@ enum stat_item {
 	CMPXCHG_DOUBLE_CPU_FAIL,/* Failure of this_cpu_cmpxchg_double */
 	CMPXCHG_DOUBLE_FAIL,	/* Number of times that cmpxchg double did not match */
 	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
-	CPU_PARTIAL_FREE,	/* USed cpu partial on free */
+	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
+	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
+	CPU_PARTIAL_UNFREEZE,	/* Unfreeze cpu partial */
 	NR_SLUB_STAT_ITEMS };
 
 struct kmem_cache_cpu {
diff --git a/mm/slub.c b/mm/slub.c
index 4907563..6ededd7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1560,6 +1560,7 @@ static void *get_partial_node(struct kmem_cache *s,
 		} else {
 			page->freelist = t;
 			available = put_cpu_partial(s, page, 0);
+			stat(s, CPU_PARTIAL_NODE);
 		}
 		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
 			break;
@@ -1973,6 +1974,7 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 				local_irq_restore(flags);
 				pobjects = 0;
 				pages = 0;
+				stat(s, CPU_PARTIAL_UNFREEZE);
 			}
 		}
 
@@ -1984,7 +1986,6 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		page->next = oldpage;
 
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
-	stat(s, CPU_PARTIAL_FREE);
 	return pobjects;
 }
 
@@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		 * If we just froze the page then put it onto the
 		 * per cpu partial list.
 		 */
-		if (new.frozen && !was_frozen)
+		if (new.frozen && !was_frozen) {
 			put_cpu_partial(s, page, 1);
-
+			stat(s, CPU_PARTIAL_FREE);
+		}
 		/*
 		 * The list lock was not taken therefore no list
 		 * activity can be necessary.
@@ -5059,6 +5061,8 @@ STAT_ATTR(CMPXCHG_DOUBLE_CPU_FAIL, cmpxchg_double_cpu_fail);
 STAT_ATTR(CMPXCHG_DOUBLE_FAIL, cmpxchg_double_fail);
 STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
+STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
+STAT_ATTR(CPU_PARTIAL_UNFREEZE, cpu_partial_unfreeze);
 #endif
 
 static struct attribute *slab_attrs[] = {
@@ -5124,6 +5128,8 @@ static struct attribute *slab_attrs[] = {
 	&cmpxchg_double_cpu_fail_attr.attr,
 	&cpu_partial_alloc_attr.attr,
 	&cpu_partial_free_attr.attr,
+	&cpu_partial_node_attr.attr,
+	&cpu_partial_unfreeze_attr.attr,
 #endif
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
-- 
1.6.3.3





* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-07  5:06       ` Alex,Shi
@ 2012-02-07 15:12         ` Christoph Lameter
  2012-02-08  4:44           ` Alex,Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Lameter @ 2012-02-07 15:12 UTC (permalink / raw)
  To: Alex,Shi; +Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org


On Tue, 7 Feb 2012, Alex,Shi wrote:

> Yes, I want to account the unfreeze_partials() actions in
> put_cpu_partial(). The unfreeze accounting doesn't conflict with or
> repeat the cpu_partial_free accounting, since they are different actions
> on the PCP.

Well, what is happening here is that the whole per cpu partial list is
moved back to the per node partial list.

CPU_PARTIAL_DRAIN_TO_NODE_PARTIAL?

A bit long, I think. CPU_PARTIAL_DRAIN?

UNFREEZE does not truly reflect what is going on here.


* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-07 15:12         ` Christoph Lameter
@ 2012-02-08  4:44           ` Alex,Shi
  2012-02-08 14:46             ` Christoph Lameter
  0 siblings, 1 reply; 11+ messages in thread
From: Alex,Shi @ 2012-02-08  4:44 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org

On Tue, 2012-02-07 at 09:12 -0600, Christoph Lameter wrote:
> On Tue, 7 Feb 2012, Alex,Shi wrote:
> 
> > Yes, I want to account the unfreeze_partials() actions in
> > put_cpu_partial(). The unfreeze accounting doesn't conflict with or
> > repeat the cpu_partial_free accounting, since they are different actions
> > on the PCP.
> 
> Well what is happening here is that the whole per cpu partial list is
> moved back to the per node partial list.
> 
> CPU_PARTIAL_DRAIN_TO_NODE_PARTIAL?
>
> A bit long, I think. CPU_PARTIAL_DRAIN?

Yes, it is more meaningful. :)
Patch changed here.

----------------
From af88a7b0134d3eea82a4cf9985026852e50f5343 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@intel.com>
Date: Fri, 3 Feb 2012 23:34:56 +0800
Subject: [PATCH] slub: per cpu partial statistics change

This patch splits cpu_partial_free into two parts: cpu_partial_node, the
number of PCP (per cpu partial) refills from the node partial list; and
cpu_partial_free, which keeps the old name and counts PCP refills in the
slab_free slow path. A new statistic, 'cpu_partial_drain', is added to
count PCP drains to the node partial list. This information is useful
when tuning the PCP.

The slabinfo.c code is unchanged, since cpu_partial_node is not on the slow path.

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 include/linux/slub_def.h |    6 ++++--
 mm/slub.c                |   12 +++++++++---
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a32bcfd..6388a66 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -21,7 +21,7 @@ enum stat_item {
 	FREE_FROZEN,		/* Freeing to frozen slab */
 	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
 	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
-	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from partial list */
+	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
 	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
 	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
 	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
@@ -37,7 +37,9 @@ enum stat_item {
 	CMPXCHG_DOUBLE_CPU_FAIL,/* Failure of this_cpu_cmpxchg_double */
 	CMPXCHG_DOUBLE_FAIL,	/* Number of times that cmpxchg double did not match */
 	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
-	CPU_PARTIAL_FREE,	/* USed cpu partial on free */
+	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
+	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
+	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
 	NR_SLUB_STAT_ITEMS };
 
 struct kmem_cache_cpu {
diff --git a/mm/slub.c b/mm/slub.c
index 4907563..4e71a0a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1560,6 +1560,7 @@ static void *get_partial_node(struct kmem_cache *s,
 		} else {
 			page->freelist = t;
 			available = put_cpu_partial(s, page, 0);
+			stat(s, CPU_PARTIAL_NODE);
 		}
 		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
 			break;
@@ -1973,6 +1974,7 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 				local_irq_restore(flags);
 				pobjects = 0;
 				pages = 0;
+				stat(s, CPU_PARTIAL_DRAIN);
 			}
 		}
 
@@ -1984,7 +1986,6 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		page->next = oldpage;
 
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
-	stat(s, CPU_PARTIAL_FREE);
 	return pobjects;
 }
 
@@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		 * If we just froze the page then put it onto the
 		 * per cpu partial list.
 		 */
-		if (new.frozen && !was_frozen)
+		if (new.frozen && !was_frozen) {
 			put_cpu_partial(s, page, 1);
-
+			stat(s, CPU_PARTIAL_FREE);
+		}
 		/*
 		 * The list lock was not taken therefore no list
 		 * activity can be necessary.
@@ -5059,6 +5061,8 @@ STAT_ATTR(CMPXCHG_DOUBLE_CPU_FAIL, cmpxchg_double_cpu_fail);
 STAT_ATTR(CMPXCHG_DOUBLE_FAIL, cmpxchg_double_fail);
 STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
+STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
+STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
 #endif
 
 static struct attribute *slab_attrs[] = {
@@ -5124,6 +5128,8 @@ static struct attribute *slab_attrs[] = {
 	&cmpxchg_double_cpu_fail_attr.attr,
 	&cpu_partial_alloc_attr.attr,
 	&cpu_partial_free_attr.attr,
+	&cpu_partial_node_attr.attr,
+	&cpu_partial_drain_attr.attr,
 #endif
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
-- 
1.6.3.3
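
Taken together, the accounting in this version is: CPU_PARTIAL_NODE fires
in get_partial_node() when the PCP is refilled from the node partial
list, CPU_PARTIAL_FREE fires in __slab_free() when a just-frozen page is
put onto the PCP, and CPU_PARTIAL_DRAIN fires in put_cpu_partial() when a
full PCP is drained back to the node partial lists. A toy userspace model
of that flow, with all locking, cmpxchg and object bookkeeping omitted;
only the counter names mirror the patch, everything else is illustrative:

#include <stdio.h>

enum stat_item { CPU_PARTIAL_FREE, CPU_PARTIAL_NODE, CPU_PARTIAL_DRAIN,
		 NR_ITEMS };
static unsigned long stats[NR_ITEMS];
static void stat(enum stat_item item) { stats[item]++; }

static int pcp_pages;		/* pages on the per cpu partial list */
static const int pcp_limit = 4;	/* stand-in for the s->cpu_partial check */

/* Model of put_cpu_partial(): drain to node partial when the PCP is full. */
static void put_cpu_partial(void)
{
	if (pcp_pages >= pcp_limit) {
		pcp_pages = 0;		/* unfreeze_partials() */
		stat(CPU_PARTIAL_DRAIN);
	}
	pcp_pages++;
}

/* Model of get_partial_node(): refill the PCP from the node partial list. */
static void get_partial_node(void)
{
	put_cpu_partial();
	stat(CPU_PARTIAL_NODE);
}

/* Model of the __slab_free() slow path: a just-frozen page joins the PCP. */
static void slab_free_slowpath(void)
{
	put_cpu_partial();
	stat(CPU_PARTIAL_FREE);
}

int main(void)
{
	int i;

	for (i = 0; i < 10; i++)
		slab_free_slowpath();
	get_partial_node();
	printf("cpu_partial_free=%lu cpu_partial_node=%lu cpu_partial_drain=%lu\n",
	       stats[CPU_PARTIAL_FREE], stats[CPU_PARTIAL_NODE],
	       stats[CPU_PARTIAL_DRAIN]);
	return 0;
}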




* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-08  4:44           ` Alex,Shi
@ 2012-02-08 14:46             ` Christoph Lameter
  2012-02-17  7:06               ` Alex,Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Lameter @ 2012-02-08 14:46 UTC (permalink / raw)
  To: Alex,Shi; +Cc: linux-kernel@vger.kernel.org, Pekka Enberg, linux-mm@kvack.org

On Wed, 8 Feb 2012, Alex,Shi wrote:

> > A bit long, I think. CPU_PARTIAL_DRAIN?
>
> Yes, it is more meaningful. :)
> Patch changed here.

Acked-by: Christoph Lameter <cl@linux.com>


* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-08 14:46             ` Christoph Lameter
@ 2012-02-17  7:06               ` Alex,Shi
  2012-02-18  9:02                 ` Pekka Enberg
  0 siblings, 1 reply; 11+ messages in thread
From: Alex,Shi @ 2012-02-17  7:06 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, cl@linux.com

On Wed, 2012-02-08 at 08:46 -0600, Christoph Lameter wrote:
> On Wed, 8 Feb 2012, Alex,Shi wrote:
> 
> > > A bit long, I think. CPU_PARTIAL_DRAIN?
> >
> > Yes, it is more meaningful. :)
> > Patch changed here.
> 
> Acked-by: Christoph Lameter <cl@linux.com>

Pekka:
Would you like to pick up this patch? It works on the latest Linus' tree.

Thanks!
========
From af88a7b0134d3eea82a4cf9985026852e50f5343 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@intel.com>
Date: Fri, 3 Feb 2012 23:34:56 +0800
Subject: [PATCH] slub: per cpu partial statistics change

This patch splits cpu_partial_free into two parts: cpu_partial_node, the
number of PCP (per cpu partial) refills from the node partial list; and
cpu_partial_free, which keeps the old name and counts PCP refills in the
slab_free slow path. A new statistic, 'cpu_partial_drain', is added to
count PCP drains to the node partial list. This information is useful
when tuning the PCP.

The slabinfo.c code is unchanged, since cpu_partial_node is not on the slow path.

Signed-off-by: Alex Shi <alex.shi@intel.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 include/linux/slub_def.h |    6 ++++--
 mm/slub.c                |   12 +++++++++---
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a32bcfd..6388a66 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -21,7 +21,7 @@ enum stat_item {
 	FREE_FROZEN,		/* Freeing to frozen slab */
 	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
 	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
-	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from partial list */
+	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
 	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
 	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
 	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
@@ -37,7 +37,9 @@ enum stat_item {
 	CMPXCHG_DOUBLE_CPU_FAIL,/* Failure of this_cpu_cmpxchg_double */
 	CMPXCHG_DOUBLE_FAIL,	/* Number of times that cmpxchg double did not match */
 	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
-	CPU_PARTIAL_FREE,	/* USed cpu partial on free */
+	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
+	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
+	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
 	NR_SLUB_STAT_ITEMS };
 
 struct kmem_cache_cpu {
diff --git a/mm/slub.c b/mm/slub.c
index 4907563..4e71a0a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1560,6 +1560,7 @@ static void *get_partial_node(struct kmem_cache *s,
 		} else {
 			page->freelist = t;
 			available = put_cpu_partial(s, page, 0);
+			stat(s, CPU_PARTIAL_NODE);
 		}
 		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
 			break;
@@ -1973,6 +1974,7 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 				local_irq_restore(flags);
 				pobjects = 0;
 				pages = 0;
+				stat(s, CPU_PARTIAL_DRAIN);
 			}
 		}
 
@@ -1984,7 +1986,6 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		page->next = oldpage;
 
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
-	stat(s, CPU_PARTIAL_FREE);
 	return pobjects;
 }
 
@@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		 * If we just froze the page then put it onto the
 		 * per cpu partial list.
 		 */
-		if (new.frozen && !was_frozen)
+		if (new.frozen && !was_frozen) {
 			put_cpu_partial(s, page, 1);
-
+			stat(s, CPU_PARTIAL_FREE);
+		}
 		/*
 		 * The list lock was not taken therefore no list
 		 * activity can be necessary.
@@ -5059,6 +5061,8 @@ STAT_ATTR(CMPXCHG_DOUBLE_CPU_FAIL, cmpxchg_double_cpu_fail);
 STAT_ATTR(CMPXCHG_DOUBLE_FAIL, cmpxchg_double_fail);
 STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
+STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
+STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
 #endif
 
 static struct attribute *slab_attrs[] = {
@@ -5124,6 +5128,8 @@ static struct attribute *slab_attrs[] = {
 	&cmpxchg_double_cpu_fail_attr.attr,
 	&cpu_partial_alloc_attr.attr,
 	&cpu_partial_free_attr.attr,
+	&cpu_partial_node_attr.attr,
+	&cpu_partial_drain_attr.attr,
 #endif
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
-- 
1.6.3.3




* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-17  7:06               ` Alex,Shi
@ 2012-02-18  9:02                 ` Pekka Enberg
  2012-02-20  0:45                   ` Alex,Shi
  0 siblings, 1 reply; 11+ messages in thread
From: Pekka Enberg @ 2012-02-18  9:02 UTC (permalink / raw)
  To: Alex,Shi
  Cc: Pekka Enberg, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	cl@linux.com

On Fri, 17 Feb 2012, Alex,Shi wrote:
> Pekka:
> Would you like to pick up this patch? It works on the latest Linus' tree.

Applied, thanks! Can you please use my @kernel.org email address in the
future? I don't really follow this account that often.

 			Pekka


* Re: [rfc PATCH]slub: per cpu partial statistics change
  2012-02-18  9:02                 ` Pekka Enberg
@ 2012-02-20  0:45                   ` Alex,Shi
  0 siblings, 0 replies; 11+ messages in thread
From: Alex,Shi @ 2012-02-20  0:45 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Pekka Enberg, linux-kernel@vger.kernel.org, linux-mm@kvack.org

On Sat, 2012-02-18 at 11:02 +0200, Pekka Enberg wrote:
> On Fri, 17 Feb 2012, Alex,Shi wrote:
> > Pekka:
> > Would you like to pick up this patch? It works on the latest Linus' tree.
> 
> Applied, thanks! Can you please use my @kernel.org email address in the
> future? I don't really follow this account that often.

Thanks! Pekka, I will use your kernel.org address. :) 
> 
>  			Pekka


