From mboxrd@z Thu Jan  1 00:00:00 1970
From: Glauber Costa
Subject: Re: [PATCH 11/11] protect architectures where THREAD_SIZE >= PAGE_SIZE against fork bombs
Date: Tue, 26 Jun 2012 17:37:41 +0400
Message-ID: <4FE9BB25.60905@parallels.com>
References: <1340633728-12785-1-git-send-email-glommer@parallels.com>
 <1340633728-12785-12-git-send-email-glommer@parallels.com>
 <4FE89807.50708@redhat.com>
 <20120625183818.GH3869@google.com>
 <4FE9AF88.5070803@parallels.com>
 <20120626133838.GA11519@somewhere.redhat.com>
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Return-path:
In-Reply-To: <20120626133838.GA11519-oHC15RC7JGTpAmv0O++HtFaTQe2KTcn/@public.gmane.org>
Sender: cgroups-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
List-ID:
Content-Type: text/plain; charset="us-ascii"; format="flowed"
To: Frederic Weisbecker
Cc: Tejun Heo, Frederic Weisbecker, cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org, Andrew Morton,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, David Rientjes,
 Pekka Enberg, Michal Hocko, Johannes Weiner, Christoph Lameter,
 devel-GEFAQzZX7r8dnm+yROfE0A@public.gmane.org,
 kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org,
 Pekka Enberg, Suleiman Souhlal

On 06/26/2012 05:38 PM, Frederic Weisbecker wrote:
> On Tue, Jun 26, 2012 at 04:48:08PM +0400, Glauber Costa wrote:
>> On 06/25/2012 10:38 PM, Tejun Heo wrote:
>>> On Mon, Jun 25, 2012 at 06:55:35PM +0200, Frederic Weisbecker wrote:
>>>> On 06/25/2012 04:15 PM, Glauber Costa wrote:
>>>>
>>>>> Because those architectures will draw their stacks directly from
>>>>> the page allocator, rather than the slab cache, we can directly
>>>>> pass the __GFP_KMEMCG flag, and issue the corresponding free_pages.
>>>>>
>>>>> This code path is taken when the architecture doesn't define
>>>>> CONFIG_ARCH_THREAD_INFO_ALLOCATOR (only ia64 seems to), and has
>>>>> THREAD_SIZE >= PAGE_SIZE. Luckily, most - if not all - of the
>>>>> remaining architectures fall in this category.
>>>>>
>>>>> This will guarantee that every stack page is accounted to the memcg
>>>>> the process currently lives in, and that the allocations fail if
>>>>> they go over the limit.
>>>>>
>>>>> For the time being, I am defining a new variant of THREADINFO_GFP,
>>>>> so as not to mess with the other path. Once the slab is also tracked
>>>>> by memcg, we can get rid of that flag.
>>>>>
>>>>> Tested to successfully protect against :(){ :|:& };:
>>>>>
>>>>> Signed-off-by: Glauber Costa
>>>>> CC: Christoph Lameter
>>>>> CC: Pekka Enberg
>>>>> CC: Michal Hocko
>>>>> CC: Kamezawa Hiroyuki
>>>>> CC: Johannes Weiner
>>>>> CC: Suleiman Souhlal
>>>>
>>>> Acked-by: Frederic Weisbecker
>>>
>>> Frederic, does this (with proper slab accounting added later) achieve
>>> what you wanted with the task counter?
>>>
>>
>> A note: Frederic may confirm, but I think he doesn't even need
>> the slab accounting to follow in order to achieve that goal.
>
> Limiting is enough. But that requires internal accounting.
>

Yes, but why does the *slab* need to get involved? Accounting task
stack pages should be equivalent to what you were doing, even without
slab accounting. Right?
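
For reference, a rough sketch of the shape the changelog above describes
for the page-allocator path in kernel/fork.c -- an illustration only, not
the patch itself. THREADINFO_GFP_ACCOUNTED and free_accounted_pages() are
assumed names for the new THREADINFO_GFP variant and its matching
uncharging free; alloc_pages_node(), page_address(), THREAD_SIZE_ORDER and
THREADINFO_GFP are the existing kernel interfaces.

    #include <linux/gfp.h>
    #include <linux/sched.h>
    #include <linux/thread_info.h>

    /* Assumed name for the new variant from the changelog: the usual
     * stack allocation flags plus the memcg kmem accounting bit. */
    #define THREADINFO_GFP_ACCOUNTED	(THREADINFO_GFP | __GFP_KMEMCG)

    static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
                                                      int node)
    {
            /* Stack pages come straight from the page allocator here
             * (no CONFIG_ARCH_THREAD_INFO_ALLOCATOR and
             * THREAD_SIZE >= PAGE_SIZE), so the accounting flag charges
             * them to the current task's memcg, and the allocation fails
             * once the group goes over its kmem limit. */
            struct page *page = alloc_pages_node(node, THREADINFO_GFP_ACCOUNTED,
                                                 THREAD_SIZE_ORDER);

            return page ? page_address(page) : NULL;
    }

    static void free_thread_info(struct thread_info *ti)
    {
            /* Assumed helper: the "corresponding free_pages" mentioned in
             * the changelog, which also uncharges the pages from the
             * memcg they were billed to. */
            free_accounted_pages((unsigned long)ti, THREAD_SIZE_ORDER);
    }

With stack pages accounted like this, a :(){ :|:& };: running inside a
limited group sees fork() fail with -ENOMEM once the stack allocations hit
the limit, instead of exhausting the whole machine.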