* [PATCH] mm/slub: fix accumulate per cpu partial cache objects
From: Wanpeng Li @ 2013-12-27 9:46 UTC
To: Pekka Enberg
Cc: Christoph Lameter, Andrew Morton, David Rientjes, Joonsoo Kim,
linux-mm, linux-kernel, Wanpeng Li
The SLUB per-cpu partial cache is a list of partially filled slab pages used
to speed up object allocation. However, the current code only accounts for
the objects of the first page on the per-cpu partial list instead of
traversing the whole list.
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
mm/slub.c | 32 +++++++++++++++++++++++---------
1 files changed, 23 insertions(+), 9 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 545a170..799bfdc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4280,7 +4280,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab,
cpu);
int node;
- struct page *page;
+ struct page *page, *p;
page = ACCESS_ONCE(c->page);
if (!page)
@@ -4298,8 +4298,9 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
nodes[node] += x;
page = ACCESS_ONCE(c->partial);
- if (page) {
- x = page->pobjects;
+ while ((p = page)) {
+ page = p->next;
+ x = p->pobjects;
total += x;
nodes[node] += x;
}
@@ -4520,13 +4521,15 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
int pages = 0;
int cpu;
int len;
+ struct page *p;
for_each_online_cpu(cpu) {
struct page *page = per_cpu_ptr(s->cpu_slab, cpu)->partial;
- if (page) {
- pages += page->pages;
- objects += page->pobjects;
+ while ((p = page)) {
+ page = p->next;
+ pages += p->pages;
+ objects += p->pobjects;
}
}
@@ -4535,10 +4538,21 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
#ifdef CONFIG_SMP
for_each_online_cpu(cpu) {
struct page *page = per_cpu_ptr(s->cpu_slab, cpu) ->partial;
+ objects = 0;
+ pages = 0;
+
+ if (!page)
+ continue;
+
+ while ((p = page)) {
+ page = p->next;
+ pages += p->pages;
+ objects += p->pobjects;
+ }
- if (page && len < PAGE_SIZE - 20)
- len += sprintf(buf + len, " C%d=%d(%d)", cpu,
- page->pobjects, page->pages);
+ if (len < PAGE_SIZE - 20)
+ len += sprintf(buf + len, " C%d=%d(%d)", cpu,
+ objects, pages);
}
#endif
return len + sprintf(buf + len, "\n");
--
1.7.7.6
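For readers skimming the thread, here is a minimal standalone userspace sketch of the loop shape the patch introduces: walk the whole ->next chain of the per-cpu partial list instead of reading only the head page. The struct and field names below are hypothetical stand-ins for the three struct page members the patch touches; this is not kernel code.

#include <stdio.h>

/* Hypothetical stand-in for the struct page fields used by the patch. */
struct partial_page {
	struct partial_page *next;	/* next slab page on the per-cpu partial list */
	int pages;			/* slab count recorded on this page */
	int pobjects;			/* object count recorded on this page */
};

/* Accumulate over the whole list, mirroring the while ((p = page)) loop above. */
static void count_partial_list(struct partial_page *page, int *pages, int *objects)
{
	struct partial_page *p;

	while ((p = page)) {
		page = p->next;
		*pages += p->pages;
		*objects += p->pobjects;
	}
}

int main(void)
{
	struct partial_page b = { NULL, 1, 5 };
	struct partial_page a = { &b, 2, 12 };
	int pages = 0, objects = 0;

	count_partial_list(&a, &pages, &objects);
	printf("pages=%d objects=%d\n", pages, objects);	/* pages=3 objects=17 */
	return 0;
}

Built with a plain C compiler this prints pages=3 objects=17, i.e. both pages on the hypothetical list contribute, whereas reading only the head would report 2 and 12.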
* Re: [PATCH] mm/slub: fix accumulate per cpu partial cache objects
From: Li Zefan @ 2013-12-28 1:50 UTC
To: Wanpeng Li, Andrew Morton
Cc: Pekka Enberg, Christoph Lameter, David Rientjes, Joonsoo Kim,
linux-mm, linux-kernel
On 2013/12/27 17:46, Wanpeng Li wrote:
> The SLUB per-cpu partial cache is a list of partially filled slab pages used
> to speed up object allocation. However, the current code only accounts for
> the objects of the first page on the per-cpu partial list instead of
> traversing the whole list.
>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
> mm/slub.c | 32 +++++++++++++++++++++++---------
> 1 files changed, 23 insertions(+), 9 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 545a170..799bfdc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4280,7 +4280,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
> struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab,
> cpu);
> int node;
> - struct page *page;
> + struct page *page, *p;
>
> page = ACCESS_ONCE(c->page);
> if (!page)
> @@ -4298,8 +4298,9 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
> nodes[node] += x;
>
> page = ACCESS_ONCE(c->partial);
> - if (page) {
> - x = page->pobjects;
> + while ((p = page)) {
> + page = p->next;
> + x = p->pobjects;
> total += x;
> nodes[node] += x;
> }
Can we apply this patch first? It was sent a month ago, but Pekka was not responsive.
=============================
[PATCH] slub: Fix calculation of cpu slabs
/sys/kernel/slab/:t-0000048 # cat cpu_slabs
231 N0=16 N1=215
/sys/kernel/slab/:t-0000048 # cat slabs
145 N0=36 N1=109
See, the number of slabs is smaller than that of cpu slabs.
The bug was introduced by commit 49e2258586b423684f03c278149ab46d8f8b6700
("slub: per cpu cache for partial pages").
We should use page->pages instead of page->pobjects when calculating
the number of cpu partial slabs. This also fixes the mapping of slabs
and nodes.
As there's no variable storing the number of total/active objects in
cpu partial slabs, and we don't have user interfaces requiring those
statistics, I just add WARN_ON_ONCE() for those cases.
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Li Zefan <lizefan@huawei.com>
Acked-by: Christoph Lameter <cl@linux.com>
Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
mm/slub.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index e3ba1f2..6ea461d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4300,7 +4300,13 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
page = ACCESS_ONCE(c->partial);
if (page) {
- x = page->pobjects;
+ node = page_to_nid(page);
+ if (flags & SO_TOTAL)
+ WARN_ON_ONCE(1);
+ else if (flags & SO_OBJECTS)
+ WARN_ON_ONCE(1);
+ else
+ x = page->pages;
total += x;
nodes[node] += x;
}
--
1.8.0.2
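As a rough, self-contained illustration of the dispatch this fix adds (userspace C with hypothetical flag values and a stand-in struct, not the kernel's show_slab_objects()): exact object counts are not available for the per-cpu partial list, so SO_TOTAL/SO_OBJECTS requests warn, while the plain slab count is taken from page->pages and binned under the node the head page sits on.

#include <stdio.h>

/* Hypothetical flag values; the real SO_* flags live in mm/slub.c. */
enum { SO_OBJECTS = 1 << 0, SO_TOTAL = 1 << 1 };

struct partial_head {
	int node;	/* stand-in for page_to_nid(page) */
	int pages;	/* slabs tracked by the head of the per-cpu partial list */
	int pobjects;	/* approximate objects; not an exact total/active count */
};

static int count_cpu_partial(const struct partial_head *h, unsigned int flags,
			     unsigned long nodes[])
{
	int x = 0;

	if (flags & (SO_TOTAL | SO_OBJECTS))
		fprintf(stderr, "no exact object counts here\n");	/* WARN_ON_ONCE() in the patch */
	else
		x = h->pages;	/* count slabs via pages, not pobjects */

	nodes[h->node] += x;	/* bin under the node of the head page */
	return x;
}

int main(void)
{
	struct partial_head h = { .node = 1, .pages = 4, .pobjects = 60 };
	unsigned long nodes[2] = { 0, 0 };
	int total = count_cpu_partial(&h, 0, nodes);

	printf("total=%d N0=%lu N1=%lu\n", total, nodes[0], nodes[1]);
	return 0;
}

With flags left at zero this prints total=4 N0=0 N1=4, the same shape of per-node breakdown as the cpu_slabs output quoted above.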
* Re: [PATCH] mm/slub: fix accumulate per cpu partial cache objects
From: Pekka Enberg @ 2013-12-29 11:49 UTC
To: Li Zefan
Cc: Wanpeng Li, Andrew Morton, Christoph Lameter, David Rientjes,
Joonsoo Kim, linux-mm@kvack.org, LKML
On Sat, Dec 28, 2013 at 3:50 AM, Li Zefan <lizefan@huawei.com> wrote:
> On 2013/12/27 17:46, Wanpeng Li wrote:
>> The SLUB per-cpu partial cache is a list of partially filled slab pages used
>> to speed up object allocation. However, the current code only accounts for
>> the objects of the first page on the per-cpu partial list instead of
>> traversing the whole list.
>>
>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>> ---
>> mm/slub.c | 32 +++++++++++++++++++++++---------
>> 1 files changed, 23 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 545a170..799bfdc 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -4280,7 +4280,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>> struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab,
>> cpu);
>> int node;
>> - struct page *page;
>> + struct page *page, *p;
>>
>> page = ACCESS_ONCE(c->page);
>> if (!page)
>> @@ -4298,8 +4298,9 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>> nodes[node] += x;
>>
>> page = ACCESS_ONCE(c->partial);
>> - if (page) {
>> - x = page->pobjects;
>> + while ((p = page)) {
>> + page = p->next;
>> + x = p->pobjects;
>> total += x;
>> nodes[node] += x;
>> }
>
> Can we apply this patch first? It was sent a month ago, but Pekka was not responsive.
Applied. Wanpeng, care to resend your patch?
* Re: [PATCH] mm/slub: fix accumulate per cpu partial cache objects
From: Wanpeng Li @ 2013-12-30 1:08 UTC
To: Pekka Enberg
Cc: Li Zefan, Andrew Morton, Christoph Lameter, David Rientjes,
Joonsoo Kim, linux-mm@kvack.org, LKML
On Sun, Dec 29, 2013 at 01:49:48PM +0200, Pekka Enberg wrote:
>On Sat, Dec 28, 2013 at 3:50 AM, Li Zefan <lizefan@huawei.com> wrote:
>> On 2013/12/27 17:46, Wanpeng Li wrote:
>>> The SLUB per-cpu partial cache is a list of partially filled slab pages used
>>> to speed up object allocation. However, the current code only accounts for
>>> the objects of the first page on the per-cpu partial list instead of
>>> traversing the whole list.
>>>
>>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>>> ---
>>> mm/slub.c | 32 +++++++++++++++++++++++---------
>>> 1 files changed, 23 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 545a170..799bfdc 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -4280,7 +4280,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>>> struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab,
>>> cpu);
>>> int node;
>>> - struct page *page;
>>> + struct page *page, *p;
>>>
>>> page = ACCESS_ONCE(c->page);
>>> if (!page)
>>> @@ -4298,8 +4298,9 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>>> nodes[node] += x;
>>>
>>> page = ACCESS_ONCE(c->partial);
>>> - if (page) {
>>> - x = page->pobjects;
>>> + while ((p = page)) {
>>> + page = p->next;
>>> + x = p->pobjects;
>>> total += x;
>>> nodes[node] += x;
>>> }
>>
>> Can we apply this patch first? It was sent a month ago, but Pekka was not responsive.
>
>Applied. Wanpeng, care to resend your patch?
Zefan's patch is good enough; mine isn't needed any more.
Regards,
Wanpeng Li
* Re: [PATCH] mm/slub: fix accumulate per cpu partial cache objects
From: Pekka Enberg @ 2013-12-30 9:54 UTC
To: Wanpeng Li, Pekka Enberg
Cc: Li Zefan, Andrew Morton, Christoph Lameter, David Rientjes,
Joonsoo Kim, linux-mm@kvack.org, LKML
On 12/30/2013 03:08 AM, Wanpeng Li wrote:
> Zefan's patch is good enough; mine isn't needed any more.
OK, thanks guys!
Pekka