From: Eric Dumazet <dada1@cosmosbay.com>
To: paulmck@linux.vnet.ibm.com
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	netfilter-devel@vger.kernel.org, mingo@elte.hu,
	akpm@linux-foundation.org, torvalds@linux-foundation.org,
	davem@davemloft.net, zbr@ioremap.net, jeff.chua.linux@gmail.com,
	paulus@samba.org, laijs@cn.fujitsu.com, jengelh@medozas.de,
	r000n@r000n.net, benh@kernel.crashing.org,
	mathieu.desnoyers@polymtl.ca
Subject: Re: [PATCH RFC] v4 somewhat-expedited "big hammer" RCU grace periods
Date: Fri, 08 May 2009 19:28:31 +0200
Message-ID: <4A046BBF.9070400@cosmosbay.com>
In-Reply-To: <20090508170815.GA9708@linux.vnet.ibm.com>

Paul E. McKenney wrote:
> Fourth cut of "big hammer" expedited RCU grace periods.  This uses
> a kthread that schedules itself on all online CPUs in turn, thus
> forcing a grace period.  The synchronize_sched(), synchronize_rcu(),
> and synchronize_bh() primitives wake this kthread up and then wait for
> it to force the grace period.
> 
> As before, this does nothing to expedite callbacks already registered
> with call_rcu() or call_rcu_bh(), but there is no need to.  On preemptable
> RCU, which has more complex grace-period detection, this just maps to
> synchronize_rcu() and a new synchronize_rcu_bh() -- this can be fixed later.
> 
> Passes light rcutorture testing.  Grace periods take around 200
> microseconds on an 8-CPU Power machine.  This is a good order of magnitude
> better than v3, but an order of magnitude slower than v2.  Furthermore,
> it will get slower the more CPUs you have, and eight CPUs is not all
> that many these days.  So this implementation still does not cut it.
> 
> Once again, I am posting this on the off-chance that I made some stupid
> mistake that someone might spot.  Absent that, I am taking yet another
> different approach, namely setting up per-CPU threads that are awakened
> via smp_call_function(), permitting the quiescent states to be waited
> for in parallel.
> 

I don't know, but isn't it possible for one CPU to be dedicated to a
CPU-hungry real-time thread?

Wouldn't krcu_sched_expedited() then deadlock, or at least block forever?
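
To make the concern concrete, here is a hypothetical user-space sketch
(not from the patch; CPU 1 and the priority value are arbitrary choices
for the example) of the kind of task that would cause the problem: a
SCHED_FIFO thread pinned to one CPU that never yields.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = 50 };

	/* Pin ourselves to CPU 1 (arbitrary choice for this example). */
	CPU_ZERO(&set);
	CPU_SET(1, &set);
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		exit(1);
	}

	/* Become a SCHED_FIFO task; needs root or CAP_SYS_NICE. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
		perror("sched_setscheduler");
		exit(1);
	}

	/* Spin forever: ordinary SCHED_NORMAL tasks on this CPU are
	 * effectively starved while this task runs. */
	for (;;)
		;
	return 0;
}

A SCHED_NORMAL kthread whose affinity is set to that CPU would not run
there until the FIFO task yields, so krcu_sched_expedited() could sit on
that CPU indefinitely and every caller waiting on sched_expedited_done_wq
would stall with it.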

> Shortcomings:
> 
> o	Too slow!!!  Thinking in terms of using per-CPU kthreads.
> 
> o	The wait_event() calls result in 120-second warnings, need
> 	to use something like wait_event_interruptible().  There are
> 	probably other corner cases that need attention.
> 
> o	Does not address preemptable RCU.
> 
> Changes since v3:
> 
> o	Use a kthread that schedules itself on each CPU in turn to
> 	force a grace period.  The synchronize_rcu() primitive
> 	wakes up the kthread in order to avoid messing with affinity
> 	masks on user tasks.
> 
> o	Tried a number of additional variations on the v3 approach, none
> 	of which helped much.
> 
> Changes since v2:
> 
> o	Use reschedule IPIs rather than a softirq.
> 
> Changes since v1:
> 
> o	Added rcutorture support, and added exports required by
> 	rcutorture.
> 
> o	Added comment stating that smp_call_function() implies a
> 	memory barrier, suggested by Mathieu.
> 
> o	Added #include for delay.h.
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> 
>  include/linux/rcuclassic.h |   16 +++
>  include/linux/rcupdate.h   |   24 ++---
>  include/linux/rcupreempt.h |   10 ++
>  include/linux/rcutree.h    |   13 ++
>  kernel/rcupdate.c          |  103 +++++++++++++++++++++++
>  kernel/rcupreempt.c        |    1 
>  kernel/rcutorture.c        |  200 ++++++++++++++++++++++++---------------------
>  7 files changed, 261 insertions(+), 106 deletions(-)
> 

> +/*
> + * Kernel thread that processes synchronize_sched_expedited() requests.
> + * This is implemented as a separate kernel thread to avoid the need
> + * to mess with other tasks' cpumasks.
> + */
> +static int krcu_sched_expedited(void *arg)
> +{
> +	int cpu;
> +
> +	do {
> +		wait_event(need_sched_expedited_wq, need_sched_expedited);
> +		need_sched_expedited = 0;
> +		get_online_cpus();
> +		for_each_online_cpu(cpu) {
> +			sched_setaffinity(0, &cpumask_of_cpu(cpu));
> +			schedule();

<< if such a real-time thread owns this cpu, the kthread may never get
   scheduled here, so this loop never completes and the waiters never
   wake up >>

> +		}
> +		put_online_cpus();
> +		sched_expedited_done = 1;
> +		wake_up(&sched_expedited_done_wq);
> +	} while (!kthread_should_stop());
> +	return 0;
> +}
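
For what it's worth, the per-CPU-thread direction mentioned at the top of
the mail might look roughly like the sketch below.  This is only an
illustration of the idea, not the actual follow-up patch: every name here
(rcu_exp_thread, synchronize_sched_expedited_sketch, and so on) is
invented, and error handling, memory ordering, and CPU-hotplug races are
glossed over.

/*
 * Rough sketch of the "per-CPU kthread + IPI broadcast" idea mentioned
 * above -- invented names, no hotplug or wakeup-race handling.
 */
#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/cpu.h>
#include <linux/smp.h>
#include <linux/percpu.h>
#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/err.h>

static DEFINE_PER_CPU(struct task_struct *, rcu_exp_task);
static DEFINE_MUTEX(rcu_exp_mutex);
static DECLARE_COMPLETION(rcu_exp_done);
static atomic_t rcu_exp_pending;

/* Runs on each CPU from the IPI broadcast: wake that CPU's kthread. */
static void rcu_exp_ipi(void *unused)
{
	wake_up_process(per_cpu(rcu_exp_task, smp_processor_id()));
}

/* One of these is bound to each CPU with kthread_bind(). */
static int rcu_exp_thread(void *unused)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
		/*
		 * We are running on our own CPU, so that CPU has gone
		 * through a context switch: one quiescent state done.
		 */
		if (atomic_read(&rcu_exp_pending) &&
		    atomic_dec_and_test(&rcu_exp_pending))
			complete(&rcu_exp_done);
	}
	return 0;
}

/* Waiters kick all CPUs at once instead of visiting them in turn. */
static void synchronize_sched_expedited_sketch(void)
{
	mutex_lock(&rcu_exp_mutex);
	get_online_cpus();
	init_completion(&rcu_exp_done);
	atomic_set(&rcu_exp_pending, num_online_cpus());
	on_each_cpu(rcu_exp_ipi, NULL, 1);
	wait_for_completion(&rcu_exp_done);
	put_online_cpus();
	mutex_unlock(&rcu_exp_mutex);
}

/* Spawn and bind one thread per online CPU (hotplug handling omitted). */
static int __init rcu_exp_init(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct task_struct *t;

		t = kthread_create(rcu_exp_thread, NULL, "rcu_exp/%d", cpu);
		if (IS_ERR(t))
			return PTR_ERR(t);
		kthread_bind(t, cpu);
		per_cpu(rcu_exp_task, cpu) = t;
		wake_up_process(t);
	}
	return 0;
}

Note that this alone does not answer the starvation question above:
unless the per-CPU threads run at elevated priority, a CPU-hungry
real-time task could still keep its CPU's thread from ever running.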



