From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751996Ab1GTN7S (ORCPT );
	Wed, 20 Jul 2011 09:59:18 -0400
Received: from relay.parallels.com ([195.214.232.42]:57944 "EHLO relay.parallels.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751854Ab1GTN7R (ORCPT );
	Wed, 20 Jul 2011 09:59:17 -0400
Message-ID: <4E26DF34.6010500@openvz.org>
Date: Wed, 20 Jul 2011 17:59:16 +0400
From: Konstantin Khlebnikov 
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.18)
	Gecko/20110416 SeaMonkey/2.0.13
MIME-Version: 1.0
To: Pekka Enberg 
CC: Andrew Morton , "linux-mm@kvack.org" ,
	Christoph Lameter , "linux-kernel@vger.kernel.org" ,
	Matt Mackall , "mgorman@suse.de" 
Subject: Re: [PATCH] mm-slab: allocate kmem_cache with __GFP_REPEAT
References: <20110720121612.28888.38970.stgit@localhost6>
	<4E26D7EA.3000902@parallels.com> <4E26DD25.4010707@parallels.com>
In-Reply-To: <4E26DD25.4010707@parallels.com>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Konstantin Khlebnikov wrote:
> Pekka Enberg wrote:
>> On Wed, 20 Jul 2011, Konstantin Khlebnikov wrote:
>>>> The changelog isn't that convincing, really. This is kmem_cache_create()
>>>> so I'm surprised we'd ever get NULL here in practice. Does this fix some
>>>> problem you're seeing? If this is really an issue, I'd blame the page
>>>> allocator as GFP_KERNEL should just work.
>>>
>>> nf_conntrack creates a separate slab cache for each net namespace.
>>> This patch of course does not eliminate the chance of failure, but it
>>> makes failure far less likely.
>>
>> I'm still surprised you are seeing failures. mm/slab.c hasn't changed
>> significantly in a long time. Why hasn't anyone reported this before? I'd
>> still be inclined to shift the blame to the page allocator... Mel,
>> Christoph?
>>
>> On Wed, 20 Jul 2011, Konstantin Khlebnikov wrote:
>>> struct kmem_cache for slub is more compact: it uses percpu pointers
>>> instead of a dumb NR_CPUS-sized array.
>>> It is probably better to fix this on the slab side...
>>
>> So how big is 'struct kmem_cache' for your configuration anyway? Fixing
>> the per-cpu data structures would be nice but I'm guessing it'll be
>> slightly painful for mm/slab.c.
>
> With NR_CPUS=4096 and MAX_NUMNODES=512 it's over 9k!
> So it requires an order-4 page, while PAGE_ALLOC_COSTLY_ORDER is 3.

Sorry, it is actually 0x9070 bytes: more than 36 KB, more than 9 pages.

>
>>
>> Pekka
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/