public inbox for linux-kernel@vger.kernel.org
From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: linux-kernel@vger.kernel.org, cl@linux.com, penberg@kernel.org,
	rientjes@google.com, mm-commits@vger.kernel.org,
	brouer@redhat.com
Subject: Re: + slub-bulk-alloc-extract-objects-from-the-per-cpu-slab.patch added to -mm tree
Date: Tue, 9 Jun 2015 09:09:56 +0200	[thread overview]
Message-ID: <20150609090956.3345d9c8@redhat.com> (raw)
In-Reply-To: <20150609002639.GB9687@js1304-P5Q-DELUXE>

On Tue, 9 Jun 2015 09:26:39 +0900
Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:

> On Wed, Apr 08, 2015 at 03:53:13PM -0700, akpm@linux-foundation.org wrote:
> > 
> > The patch titled
> >      Subject: slub bulk alloc: extract objects from the per cpu slab
> > has been added to the -mm tree.  Its filename is
> >      slub-bulk-alloc-extract-objects-from-the-per-cpu-slab.patch
> > 
> > This patch should soon appear at
> >     http://ozlabs.org/~akpm/mmots/broken-out/slub-bulk-alloc-extract-objects-from-the-per-cpu-slab.patch
> > and later at
> >     http://ozlabs.org/~akpm/mmotm/broken-out/slub-bulk-alloc-extract-objects-from-the-per-cpu-slab.patch
> > 
> > Before you just go and hit "reply", please:
> >    a) Consider who else should be cc'ed
> >    b) Prefer to cc a suitable mailing list as well
> >    c) Ideally: find the original patch on the mailing list and do a
> >       reply-to-all to that, adding suitable additional cc's
> > 
> > *** Remember to use Documentation/SubmitChecklist when testing your code ***
> > 
> > The -mm tree is included into linux-next and is updated
> > there every 3-4 working days
> > 
> > ------------------------------------------------------
> > From: Christoph Lameter <cl@linux.com>
> > Subject: slub bulk alloc: extract objects from the per cpu slab
> > 
> > First piece: acceleration of retrieval of per cpu objects
> > 
> > If we are allocating lots of objects then it is advantageous to disable
> > interrupts and avoid the this_cpu_cmpxchg() operation, to get these objects
> > faster.  Note that we cannot take the fast path if debugging is
> > enabled.  Note also that keeping interrupts disabled throughout
> > avoids having to save and restore the processor flags.
> > 
> > Allocate as many objects as possible in the fast way and then fall back to
> > the generic implementation for the rest of the objects.
> > 
> > Signed-off-by: Christoph Lameter <cl@linux.com>
> > Cc: Jesper Dangaard Brouer <brouer@redhat.com>
> > Cc: Christoph Lameter <cl@linux.com>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> > ---
> > 
> >  mm/slub.c |   27 ++++++++++++++++++++++++++-
> >  1 file changed, 26 insertions(+), 1 deletion(-)
> > 
> > diff -puN mm/slub.c~slub-bulk-alloc-extract-objects-from-the-per-cpu-slab mm/slub.c
> > --- a/mm/slub.c~slub-bulk-alloc-extract-objects-from-the-per-cpu-slab
> > +++ a/mm/slub.c
> > @@ -2759,7 +2759,32 @@ EXPORT_SYMBOL(kmem_cache_free_bulk);
> >  bool kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> >  								void **p)
> >  {
> > -	return kmem_cache_alloc_bulk(s, flags, size, p);
> > +	if (!kmem_cache_debug(s)) {
> > +		struct kmem_cache_cpu *c;
> > +
> > +		/* Drain objects in the per cpu slab */
> > +		local_irq_disable();
> > +		c = this_cpu_ptr(s->cpu_slab);
> > +
> > +		while (size) {
> > +			void *object = c->freelist;
> > +
> > +			if (!object)
> > +				break;
> > +
> > +			c->freelist = get_freepointer(s, object);
> > +			*p++ = object;
> > +			size--;
> > +
> > +			if (unlikely(flags & __GFP_ZERO))
> > +				memset(object, 0, s->object_size);
> > +		}
> > +		c->tid = next_tid(c->tid);
> > +
> > +		local_irq_enable();
> > +	}
> > +
> > +	return __kmem_cache_alloc_bulk(s, flags, size, p);
> 
> Hello,
> 
So, if __kmem_cache_alloc_bulk() fails, all objects already placed in the
array should be freed, but __kmem_cache_alloc_bulk() can't know about
the objects allocated by this SLUB-specific kmem_cache_alloc_bulk()
function. Please fix it.

Ack, I've already noticed this, and have fixed it in my local git
tree.

How do I submit a fix to AKPM? (do I replace the commit/patch, or do I
apply a patch on top)

(And as you noticed, I've also moved the memset out of the loop, to
after local_irq_enable())
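
For illustration only -- not the kernel code itself, and all names here
are hypothetical -- a minimal userspace sketch of the corrected pattern
the thread converges on: drain as many objects as possible from a
per-cpu-style freelist, zero them only after leaving the
"irq-disabled" section, fall back to a slow path for the remainder,
and undo the fast-path allocations if the slow path fails so the
caller sees all-or-nothing semantics:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define OBJ_SIZE 32

/* Toy stand-in for a per-cpu freelist: a small stack of objects. */
struct cpu_cache {
	void *freelist[8];
	int nr;
};

static void *cache_pop(struct cpu_cache *c)
{
	return c->nr ? c->freelist[--c->nr] : NULL;
}

/* Slow-path stub; may fail (returns false) without allocating. */
static bool slow_path_ok = true;
static bool slow_alloc_bulk(size_t size, void **p)
{
	if (!slow_path_ok)
		return false;
	for (size_t i = 0; i < size; i++)
		p[i] = malloc(OBJ_SIZE);
	return true;
}

static bool bulk_alloc(struct cpu_cache *c, bool zero,
		       size_t size, void **p)
{
	size_t fast = 0;

	/* Fast path (would run with interrupts disabled): drain freelist. */
	while (fast < size) {
		void *object = cache_pop(c);
		if (!object)
			break;
		p[fast++] = object;
	}

	/* Zeroing moved out of the critical section, per the fix above. */
	if (zero)
		for (size_t i = 0; i < fast; i++)
			memset(p[i], 0, OBJ_SIZE);

	if (fast == size)
		return true;

	if (!slow_alloc_bulk(size - fast, p + fast)) {
		/* Slow path failed: undo the fast-path allocations. */
		while (fast)
			free(p[--fast]);
		return false;
	}
	return true;
}
```

The undo loop is the point Joonsoo raises: without it, a slow-path
failure leaks (or strands) the objects the fast path already handed
out.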

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
