* [PATCH] mm/slub: fix accumulation of per-cpu partial cache objects
@ 2013-12-27  9:46 Wanpeng Li
  2013-12-28  1:50 ` Li Zefan
  0 siblings, 1 reply; 5+ messages in thread
From: Wanpeng Li @ 2013-12-27  9:46 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Joonsoo Kim,
	linux-mm, linux-kernel, Wanpeng Li

The SLUB per-cpu partial cache is a singly linked list of partially filled
slab pages kept to accelerate object allocation. However, the current code
only accumulates the object count of the first page on each CPU's partial
list instead of traversing the whole list, so the object counts reported
through sysfs are understated.
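
For illustration, a minimal userspace sketch of the accounting change
(hypothetical code, not the kernel's: struct partial_page stands in for
the few struct page fields used here, and it assumes, as this patch does,
that each node's counters describe only that node):

#include <stdio.h>

struct partial_page {
	struct partial_page *next;	/* next slab page on the partial list */
	int pages;			/* slab pages accounted to this node */
	int pobjects;			/* objects accounted to this node */
};

/* Old behaviour: look only at the head of the per-cpu partial list. */
static int count_objects_head(struct partial_page *page)
{
	return page ? page->pobjects : 0;
}

/* New behaviour: walk the whole list, mirroring the loops this patch
 * adds to show_slab_objects() and slabs_cpu_partial_show(). */
static int count_objects_all(struct partial_page *page)
{
	struct partial_page *p;
	int objects = 0;

	while ((p = page)) {
		page = p->next;
		objects += p->pobjects;
	}
	return objects;
}

int main(void)
{
	/* Three-node list a -> b -> c with 7, 5 and 3 objects each. */
	struct partial_page c = { NULL, 1, 3 };
	struct partial_page b = { &c, 1, 5 };
	struct partial_page a = { &b, 1, 7 };

	printf("head only: %d\n", count_objects_head(&a));	/* prints 7 */
	printf("full walk: %d\n", count_objects_all(&a));	/* prints 15 */
	return 0;
}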

Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
 mm/slub.c |   32 +++++++++++++++++++++++---------
 1 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 545a170..799bfdc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4280,7 +4280,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 			struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab,
 							       cpu);
 			int node;
-			struct page *page;
+			struct page *page, *p;
 
 			page = ACCESS_ONCE(c->page);
 			if (!page)
@@ -4298,8 +4298,9 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 			nodes[node] += x;
 
 			page = ACCESS_ONCE(c->partial);
-			if (page) {
-				x = page->pobjects;
+			while ((p = page)) {
+				page = p->next;
+				x = p->pobjects;
 				total += x;
 				nodes[node] += x;
 			}
@@ -4520,13 +4521,15 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
 	int pages = 0;
 	int cpu;
 	int len;
+	struct page *p;
 
 	for_each_online_cpu(cpu) {
 		struct page *page = per_cpu_ptr(s->cpu_slab, cpu)->partial;
 
-		if (page) {
-			pages += page->pages;
-			objects += page->pobjects;
+		while ((p = page)) {
+			page = p->next;
+			pages += p->pages;
+			objects += p->pobjects;
 		}
 	}
 
@@ -4535,10 +4538,21 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
 #ifdef CONFIG_SMP
 	for_each_online_cpu(cpu) {
 		struct page *page = per_cpu_ptr(s->cpu_slab, cpu) ->partial;
+		objects = 0;
+		pages = 0;
+
+		if (!page)
+			continue;
+
+		while ((p = page)) {
+			page = p->next;
+			pages += p->pages;
+			objects += p->pobjects;
+		}
 
-		if (page && len < PAGE_SIZE - 20)
-			len += sprintf(buf + len, " C%d=%d(%d)", cpu,
-				page->pobjects, page->pages);
+		if (len < PAGE_SIZE - 20)
+			len += sprintf(buf + len, " C%d=%d(%d)", cpu,
+				objects, pages);
 	}
 #endif
 	return len + sprintf(buf + len, "\n");
-- 
1.7.7.6




Thread overview: 5+ messages
2013-12-27  9:46 [PATCH] mm/slub: fix accumulation of per-cpu partial cache objects Wanpeng Li
2013-12-28  1:50 ` Li Zefan
2013-12-29 11:49   ` Pekka Enberg
2013-12-30  1:08     ` Wanpeng Li
     [not found]     ` <20131230010800.GA1623@hacker.(null)>
2013-12-30  9:54       ` Pekka Enberg
