From: JoonSoo Kim
Subject: Re: [PATCH v5 11/18] sl[au]b: Allocate objects from memcg cache
Date: Tue, 30 Oct 2012 00:14:34 +0900
References: <1350656442-1523-1-git-send-email-glommer@parallels.com>
 <1350656442-1523-12-git-send-email-glommer@parallels.com>
In-Reply-To: <1350656442-1523-12-git-send-email-glommer-bzQdu9zFT3WakBO8gow8eQ@public.gmane.org>
To: Glauber Costa
Cc: linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 Mel Gorman, Tejun Heo, Andrew Morton, Michal Hocko, Johannes Weiner,
 kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org,
 Christoph Lameter, David Rientjes, Pekka Enberg,
 devel-GEFAQzZX7r8dnm+yROfE0A@public.gmane.org,
 Pekka Enberg, Suleiman Souhlal

Hi, Glauber.

2012/10/19 Glauber Costa :
> We are able to match a cache allocation to a particular memcg. If the
> task doesn't change groups during the allocation itself - a rare event,
> this will give us a good picture about who is the first group to touch a
> cache page.
>
> This patch uses the now available infrastructure by calling
> memcg_kmem_get_cache() before all the cache allocations.
>
> Signed-off-by: Glauber Costa
> CC: Christoph Lameter
> CC: Pekka Enberg
> CC: Michal Hocko
> CC: Kamezawa Hiroyuki
> CC: Johannes Weiner
> CC: Suleiman Souhlal
> CC: Tejun Heo
> ---
>  include/linux/slub_def.h | 15 ++++++++++-----
>  mm/memcontrol.c          |  3 +++
>  mm/slab.c                |  6 +++++-
>  mm/slub.c                |  5 +++--
>  4 files changed, 21 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 961e72e..ed330df 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -13,6 +13,8 @@
>  #include
>
>  #include
> +#include
> +#include
>
>  enum stat_item {
>  	ALLOC_FASTPATH,		/* Allocation from cpu slab */
> @@ -209,14 +211,14 @@ static __always_inline int kmalloc_index(size_t size)
>   * This ought to end up with a global pointer to the right cache
>   * in kmalloc_caches.
>   */
> -static __always_inline struct kmem_cache *kmalloc_slab(size_t size)
> +static __always_inline struct kmem_cache *kmalloc_slab(gfp_t flags, size_t size)
>  {
>  	int index = kmalloc_index(size);
>
>  	if (index == 0)
>  		return NULL;
>
> -	return kmalloc_caches[index];
> +	return memcg_kmem_get_cache(kmalloc_caches[index], flags);
>  }

You don't need this, because memcg_kmem_get_cache() is invoked in both
slab_alloc() and __cache_alloc_node().