* [BUG] a bug in slub on 3.2 and 3.4
From: Zefan Li @ 2015-06-03 3:37 UTC
To: Ben Hutchings, Christoph Lameter, Pekka Enberg, Joonsoo Kim, stable
I think 3.2 and 3.4 need to backport 43d77867a4f3 ("slub: refactoring unfreeze_partials()")
On a 3.4 kernel we found tens of thousands of free task_struct slabs sitting
in the partial list of node 0 when they should have been discarded, which is
essentially a huge memory leak.

Looking into the code, the cause seems to be that we check n->nr_partial, but
the page may belong to a different node n2 (n != n2), so the page is added to
n2's partial list even though n2->nr_partial > s->min_partial, and the free
slab is never discarded.
static void unfreeze_partials(struct kmem_cache *s)
{
        ...
        while ((page = c->partial)) {
                ...
                c->partial = page->next;
                ...
                        if (!new.inuse && (!n || n->nr_partial > s->min_partial))
                                m = M_FREE;
                        else {
                                struct kmem_cache_node *n2 = get_node(s,
                                                page_to_nid(page));

                                m = M_PARTIAL;
                                if (n != n2) {
                                        ...
                                        n = n2;
                                        ...
                                }
                                ...
                        }
                ...
        }
        ...
}
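For reference, the refactored function after that commit (roughly sketched below from my reading of the upstream code, with irrelevant parts elided; the actual backport may differ in detail) looks up and locks the page's own node before deciding whether to keep or discard it, so the nr_partial check is always made against the right node:

static void unfreeze_partials(struct kmem_cache *s)
{
        struct kmem_cache_node *n = NULL, *n2 = NULL;
        struct page *page, *discard_page = NULL;
        ...
        while ((page = c->partial)) {
                ...
                c->partial = page->next;

                /* find and lock the node the page actually belongs to */
                n2 = get_node(s, page_to_nid(page));
                if (n != n2) {
                        if (n)
                                spin_unlock(&n->list_lock);
                        n = n2;
                        spin_lock(&n->list_lock);
                }

                ... /* cmpxchg loop that unfreezes the page */

                /* n is now the page's own node, so this check is reliable */
                if (!new.inuse && n->nr_partial > s->min_partial) {
                        /* empty slab and the node already has enough partials */
                        page->next = discard_page;
                        discard_page = page;
                } else {
                        add_partial(n, page, DEACTIVATE_TO_TAIL);
                        stat(s, FREE_ADD_PARTIAL);
                }
        }
        ...
}

Empty slabs collected on the local discard list are then freed once the node lock is dropped, so nothing accumulates on a partial list that is already over s->min_partial.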
* RE: [BUG] a bug in slub on 3.2 and 3.4
From: Joonsoo Kim @ 2015-06-04 1:39 UTC
To: 'Zefan Li', 'Ben Hutchings', 'Christoph Lameter', 'Pekka Enberg', 'stable'
> -----Original Message-----
> From: Zefan Li [mailto:lizefan@huawei.com]
> Sent: Wednesday, June 03, 2015 12:37 PM
> To: Ben Hutchings; Christoph Lameter; Pekka Enberg; Joonsoo Kim; stable
> Subject: [BUG] a bug in slub on 3.2 and 3.4
>
> I think 3.2 and 3.4 need to backport 43d77867a4f3 ("slub: refactoring
> unfreeze_partials()")
>
> On a 3.4 kernel we found tens of thousands of free task_struct slabs
> sitting in the partial list of node 0 when they should have been
> discarded, which is essentially a huge memory leak.
>
> Looking into the code, the cause seems to be that we check n->nr_partial,
> but the page may belong to a different node n2 (n != n2), so the page is
> added to n2's partial list even though n2->nr_partial > s->min_partial,
> and the free slab is never discarded.
Looks like a possible scenario.
And, yes, commit 43d77867a4f3 will fix the issue.
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Thanks.
* Re: [BUG] a bug in slub on 3.2 and 3.4
From: Ben Hutchings @ 2015-08-01 19:31 UTC
To: Zefan Li, Christoph Lameter, Pekka Enberg, Joonsoo Kim, stable
On Wed, 2015-06-03 at 11:37 +0800, Zefan Li wrote:
> I think 3.2 and 3.4 need to backport 43d77867a4f3 ("slub: refactoring unfreeze_partials()")
>
> On a 3.4 kernel we found tens of thousands of free task_struct slabs
> sitting in the partial list of node 0 when they should have been
> discarded, which is essentially a huge memory leak.
>
> Looking into the code, the cause seems to be that we check n->nr_partial,
> but the page may belong to a different node n2 (n != n2), so the page is
> added to n2's partial list even though n2->nr_partial > s->min_partial,
> and the free slab is never discarded.
[...]
Thanks, I've queued this up for 3.2 with a small adjustment.
Ben.
--
Ben Hutchings
One of the nice things about standards is that there are so many of them.