* anon_vma accumulating for certain load still not addressed
From: Michal Hocko @ 2014-11-14 13:08 UTC
To: linux-mm
Cc: Andrew Morton, Rik van Riel, Hugh Dickins, Michel Lespinasse,
Andrea Arcangeli, Linus Torvalds, Vlastimil Babka, Daniel Forrest,
LKML
Hi,
back in 2012 [1] there was a discussion about a forking load which
accumulates anon_vmas. There was a trivial test case which triggers this
and which a local user can potentially use to deplete memory.
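For reference, here is a minimal sketch of the kind of forking load that
triggers this. It is an illustration rather than the exact test case
from [1]: each generation dirties a private anonymous page and forks,
the parent exits immediately, and the chain of anon_vmas inherited from
dead ancestors grows by one object per generation:

#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
        /* one private anonymous mapping, inherited across every fork */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                exit(1);

        for (;;) {
                *p = 1;         /* COW fault in each new child */
                if (fork())
                        _exit(0);       /* parent dies; its anon_vma
                                         * stays pinned by descendants */
        }
}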
We have a report for an older enterprise distribution where nsd is most
probably suffering from this issue (I haven't debugged it thoroughly,
but anon_vma structs accumulating over time sounds like a good enough
fit) and has to be restarted after some time to release the accumulated
anon_vma objects.
There was a patch which tried to work around the issue [2], but I do
not see any follow-ups nor any indication that the issue has been
addressed in any other way.
The test program from [1] ran for around 39 minutes on my laptop, and
here is the result:
$ date +%s; grep anon_vma /proc/slabinfo
1415960225
anon_vma 11664 11900 160 25 1 : tunables 0 0 0 : slabdata 476 476 0
$ ./a # The reproducer
$ date +%s; grep anon_vma /proc/slabinfo
1415962592
anon_vma 34875 34875 160 25 1 : tunables 0 0 0 : slabdata 1395 1395 0
$ killall a
$ date +%s; grep anon_vma /proc/slabinfo
1415962607
anon_vma 11277 12175 160 25 1 : tunables 0 0 0 : slabdata 487 487 0
So we accumulated 23211 objects (34875 - 11664 active anon_vma objects)
over that time period before the offender was killed, which released
all of them.
The proposed workaround is kind of ugly, but do people have a better
idea than reference counting? If not, should we merge it?
---
[1] https://lkml.org/lkml/2012/8/15/765
[2] https://lkml.org/lkml/2013/6/3/568
--
Michal Hocko
SUSE Labs
* Re: anon_vma accumulating for certain load still not addressed
From: Rik van Riel @ 2014-11-14 15:06 UTC
To: Michal Hocko, linux-mm
Cc: Andrew Morton, Hugh Dickins, Michel Lespinasse, Andrea Arcangeli,
Linus Torvalds, Vlastimil Babka, Daniel Forrest, LKML
On 11/14/2014 08:08 AM, Michal Hocko wrote:
> Hi,
> back in 2012 [1] there was a discussion about a forking load which
> accumulates anon_vmas. There was a trivial test case which triggers this
> and which a local user can potentially use to deplete memory.
>
> We have a report for an older enterprise distribution where nsd is most
> probably suffering from this issue (I haven't debugged it thoroughly,
> but anon_vma structs accumulating over time sounds like a good enough
> fit) and has to be restarted after some time to release the accumulated
> anon_vma objects.
>
> There was a patch which tried to work around the issue [2], but I do
> not see any follow-ups nor any indication that the issue has been
> addressed in any other way.
>
> The test program from [1] ran for around 39 minutes on my laptop, and
> here is the result:
>
> $ date +%s; grep anon_vma /proc/slabinfo
> 1415960225
> anon_vma 11664 11900 160 25 1 : tunables 0 0 0 : slabdata 476 476 0
>
> $ ./a # The reproducer
>
> $ date +%s; grep anon_vma /proc/slabinfo
> 1415962592
> anon_vma 34875 34875 160 25 1 : tunables 0 0 0 : slabdata 1395 1395 0
>
> $ killall a
> $ date +%s; grep anon_vma /proc/slabinfo
> 1415962607
> anon_vma 11277 12175 160 25 1 : tunables 0 0 0 : slabdata 487 487 0
>
> So we accumulated 23211 objects (34875 - 11664 active anon_vma objects)
> over that time period before the offender was killed, which released
> all of them.
>
> The proposed workaround is kind of ugly, but do people have a better
> idea than reference counting? If not, should we merge it?
I believe we should just merge that patch.
I have not seen any better ideas come by.
The comment should probably be fixed to reflect the
chain length of 5 though :)
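For readers without [2] open, the rough shape of that workaround is
something like the following. This is an illustrative sketch only, not
the actual patch; in particular the chain_depth field and the
ANON_VMA_CHAIN_MAX constant are hypothetical names:

/*
 * Sketch: cap how deep the anon_vma chain of a fork hierarchy may
 * grow. Once the cap is reached, a child reuses its parent's
 * anon_vma instead of allocating a fresh one, so an exiting parent
 * no longer leaves an unreclaimable object behind each generation.
 */
#define ANON_VMA_CHAIN_MAX      5       /* the "chain length of 5" above */

static struct anon_vma *anon_vma_for_child(struct anon_vma *parent)
{
        struct anon_vma *anon_vma;

        if (parent->chain_depth >= ANON_VMA_CHAIN_MAX) {
                get_anon_vma(parent);   /* just take another reference */
                return parent;
        }

        anon_vma = anon_vma_alloc();
        if (anon_vma)
                anon_vma->chain_depth = parent->chain_depth + 1;
        return anon_vma;
}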
* Re: anon_vma accumulating for certain load still not addressed
From: Vlastimil Babka @ 2014-11-14 17:10 UTC
To: Rik van Riel, Michal Hocko, linux-mm
Cc: Andrew Morton, Hugh Dickins, Michel Lespinasse, Andrea Arcangeli,
Linus Torvalds, Daniel Forrest, LKML
On 11/14/2014 04:06 PM, Rik van Riel wrote:
> On 11/14/2014 08:08 AM, Michal Hocko wrote:
>> Hi,
>> back in 2012 [1] there was a discussion about a forking load which
>> accumulates anon_vmas. There was a trivial test case which triggers this
>> and which a local user can potentially use to deplete memory.
>>
>> We have a report for an older enterprise distribution where nsd is most
>> probably suffering from this issue (I haven't debugged it thoroughly,
>> but anon_vma structs accumulating over time sounds like a good enough
>> fit) and has to be restarted after some time to release the accumulated
>> anon_vma objects.
>>
>> There was a patch which tried to work around the issue [2], but I do
>> not see any follow-ups nor any indication that the issue has been
>> addressed in any other way.
>>
>> The test program from [1] ran for around 39 minutes on my laptop, and
>> here is the result:
>>
>> $ date +%s; grep anon_vma /proc/slabinfo
>> 1415960225
>> anon_vma 11664 11900 160 25 1 : tunables 0 0 0 : slabdata 476 476 0
>>
>> $ ./a # The reproducer
>>
>> $ date +%s; grep anon_vma /proc/slabinfo
>> 1415962592
>> anon_vma 34875 34875 160 25 1 : tunables 0 0 0 : slabdata 1395 1395 0
>>
>> $ killall a
>> $ date +%s; grep anon_vma /proc/slabinfo
>> 1415962607
>> anon_vma 11277 12175 160 25 1 : tunables 0 0 0 : slabdata 487 487 0
>>
>> So we accumulated 23211 objects (34875 - 11664 active anon_vma objects)
>> over that time period before the offender was killed, which released
>> all of them.
>>
>> The proposed workaround is kind of ugly, but do people have a better
>> idea than reference counting? If not, should we merge it?
>
> I believe we should just merge that patch.
>
> I have not seen any better ideas come by.
I have some very vague idea that if we could distinguish (with a flag?)
an anon_vma_chain (avc) pointing to the parent's anon_vma from avcs
created for new anon_vmas in the child, we could maybe detect, at
"child-type" avc removal time, that the only avcs left for a non-root
anon_vma are those of "parent-type" pointing from children. Then we
could go through all pages that map the anon_vma and change their
mapping to the root anon_vma. The root would have to stay, orphaned or
not, because of the lock there. A rough sketch of the detection step
follows.
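Very rough pseudocode of that detection, just to make the idea
concrete. None of these helpers exist, the "child-type"/"parent-type"
distinction is hypothetical, and the locking is hand-waved entirely:

/*
 * Sketch: called when a "child-type" avc is unlinked from anon_vma.
 * If only "parent-type" avcs from children remain, re-home the pages
 * to the root anon_vma so this orphan can be freed.
 */
static void on_child_avc_unlink(struct anon_vma *anon_vma)
{
        if (anon_vma == anon_vma->root)
                return;         /* the root must stay, it holds the lock */

        if (anon_vma_only_parent_type_avcs(anon_vma)) { /* hypothetical */
                remap_pages_to_root(anon_vma);          /* hypothetical */
                put_anon_vma(anon_vma);
        }
}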
That would remove the need for determining a magic constant, and also
the possibility that we still leave non-useful "orphaned" anon_vmas on
the top levels of the fork hierarchy while all the bottom levels have
to share the last anon_vmas that were allowed to be created. I'm not
sure if that is the case for nsd - whether, besides the "orphaned
parent" forks, it also forks some workers that would no longer benefit
from having their private anon_vmas.
Of course the downside is that the idea would be too complicated wrt
locking and would incur overhead on some fast paths (process exit?).
And I admit I'm not very familiar with the code (which is perhaps a
euphemism :)
Still, what do you think, Rik?
Vlastimil
> The comment should probably be fixed to reflect the
> chain length of 5 though :)
>