public inbox for linux-kernel@vger.kernel.org
From: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
To: Christoph Lameter <cl@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	Pekka Enberg <penberg@cs.helsinki.fi>, Tejun Heo <tj@kernel.org>,
	Mel Gorman <mel@csn.ul.ie>,
	mingo@elte.hu
Subject: Re: [this_cpu_xx V5 19/19] SLUB: Experimental new fastpath w/o interrupt disable
Date: Mon, 12 Oct 2009 09:56:00 -0400	[thread overview]
Message-ID: <20091012135600.GB15605@Krystal> (raw)
In-Reply-To: <alpine.DEB.1.10.0910081652190.8030@gentwo.org>

* Christoph Lameter (cl@linux-foundation.org) wrote:
> On Thu, 8 Oct 2009, Mathieu Desnoyers wrote:
> 
> > > Index: linux-2.6/mm/slub.c
> > > ===================================================================
> > > --- linux-2.6.orig/mm/slub.c	2009-10-08 11:35:59.000000000 -0500
> > > +++ linux-2.6/mm/slub.c	2009-10-08 14:03:22.000000000 -0500
> > > @@ -1606,7 +1606,14 @@ static void *__slab_alloc(struct kmem_ca
> > >  			  unsigned long addr)
> > >  {
> > >  	void **object;
> > > -	struct page *page = __this_cpu_read(s->cpu_slab->page);
> > > +	struct page *page;
> > > +	unsigned long flags;
> > > +	int hotpath;
> > > +
> > > +	local_irq_save(flags);
> >
> > (Recommend adding)
> >
> > 	preempt_enable_no_resched();
> >
> >
> > The preempt enable right in the middle of a big function is adding an
> > unnecessary barrier(), which keeps gcc from optimizing across it.
> > This might hurt performance.
> 
> In the middle of the function we have determined that we have to go to the
> page allocator to get more memory. There is not much the compiler can do
> to speed that up.

Indeed, the compiler cannot do much about it. However, the programmer
(you) can move the preempt_enable_no_resched() part of the
preempt_enable() to the beginning of the function.

> 
> > I still recommend the preempt_enable_no_resched() at the beginning of
> > __slab_alloc(), and simply putting a check_resched() here (which saves
> > us the odd compiler barrier in the middle of function).
> 
> Then preemption would be unnecessarily disabled for the page allocator
> call?

No?
preempt_enable_no_resched() enables preemption; it only skips the
resched check.

> 
> > >  	if (gfpflags & __GFP_WAIT)
> > >  		local_irq_enable();
> > >
> > > +	preempt_enable();
> >
> > We could replace the above by:
> >
> > if (gfpflags & __GFP_WAIT) {
> > 	local_irq_enable();
> > 	preempt_check_resched();
> > }
> 
> Which would leave preempt off for the page allocator.

Not if you do preempt_enable_no_resched() at the beginning of the
function, after disabling interrupts.

> 
> > > +	irqsafe_cpu_inc(s->cpu_slab->active);
> > > +	barrier();
> > >  	object = __this_cpu_read(s->cpu_slab->freelist);
> > > -	if (unlikely(!object || !node_match(s, node)))
> > > +	if (unlikely(!object || !node_match(s, node) ||
> > > +			__this_cpu_read(s->cpu_slab->active)))
> >
> > Missing a barrier() here ?
> 
> The modifications of the s->cpu_slab->freelist in __slab_alloc() are only
> done after interrupts have been disabled and after the slab has been locked.

I was concerned about a potential race between cpu_slab->active and
cpu_slab->freelist if an interrupt came in. I understand that as soon as
you get a hint that you must take the slow path, you don't care about
the order in which these operations are done.

> 
> > The idea is to let gcc know that "active" inc/dec and "freelist" reads
> > must never be reordered. Even when the decrement is done in the slow
> > path branch.
> 
> Right. How could that occur with this code?
> 

__slab_alloc() calls __this_cpu_dec(s->cpu_slab->active); without any
compiler barrier. But I get that when __slab_alloc() is executed, we don't
care whether the "active" decrement is reordered, because we're no longer
touching fast path data.

> > > +		preempt_enable();
> > >  		stat(s, FREE_FASTPATH);
> > > -	} else
> > > +	} else {
> >
> > Perhaps missing a barrier() in the else ?
> 
> Not sure why that would be necessary. __slab_free() does not even touch
> s->cpu_slab->freelist if you have the same reasons as in the alloc path.

My intent was to order __this_cpu_read(s->cpu_slab->page) against
irqsafe_cpu_dec(s->cpu_slab->active), but I get that if you take the slow
path, you don't care about some spilling of the slow path over the slab
"active" critical section.

Thanks,

Mathieu

> 
> 

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68

