From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Jones <davej@redhat.com>,
	cl@linux.com, linux-kernel@vger.kernel.org, mingo@kernel.org,
	tj@kernel.org, grygorii.strashko@ti.com
Subject: Re: How do I increment a per-CPU variable without warning?
Date: Wed, 16 Apr 2014 06:03:55 -0700
Message-ID: <20140416130355.GW4496@linux.vnet.ibm.com>
In-Reply-To: <20140416052147.GK26782@laptop.programming.kicks-ass.net>

On Wed, Apr 16, 2014 at 07:21:48AM +0200, Peter Zijlstra wrote:
> On Tue, Apr 15, 2014 at 08:54:19PM -0700, Paul E. McKenney wrote:
> > But falling back on the old ways of doing this at least looks a bit
> > nicer:
> > 
> > 	static inline bool rcu_should_resched(void)
> > 	{
> > 		int t;
> > 		int *tp = &per_cpu(rcu_cond_resched_count, raw_smp_processor_id());
> > 
> > 		t = ACCESS_ONCE(*tp) + 1;
> > 		if (t < RCU_COND_RESCHED_LIM) {
> 
> <here>
> 
> > 			ACCESS_ONCE(*tp) = t;
> > 			return false;
> > 		}
> > 		return true;
> > 	}
> > 
> > Other thoughts?
> 
> Still broken, if A starts out on CPU1, gets migrated to CPU0 at <here>,
> then B starts the same on CPU1. It is possible for both CPU0 and CPU1 to
> write a different value into your rcu_cond_resched_count.

That is actually OK.  The values written are guaranteed to be between
zero and RCU_COND_RESCHED_LIM-1.  In theory, yes, a horribly unlucky
sequence of preemptions could keep losing increments, so that
rcu_should_resched() fails to report a needed reschedule, but the
probability is -way- lower than that of hardware failure.
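
To spell out my reading of that interleaving (tasks A and B, both
starting on CPU 1):

	A on CPU 1:  tp = &per_cpu(rcu_cond_resched_count, 1);  t = *tp + 1;
	    ... A is migrated to CPU 0 at <here> ...
	B on CPU 1:  tp = &per_cpu(rcu_cond_resched_count, 1);  t = *tp + 1;
	A on CPU 0:  ACCESS_ONCE(*tp) = t;  (stores into CPU 1's counter)
	B on CPU 1:  ACCESS_ONCE(*tp) = t;  (one increment can be lost)

Both stores land in CPU 1's counter, and an increment can be lost, but
each value stored is a bounded read plus one, so the counter stays in
range.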

However...

> You really want to disable preemption around there. The proper old way
> would've been get_cpu_var()/put_cpu_var().

If you are OK with unconditional disabling of preemption at this point,
that would avoid worrying about probabilities and would be quite a bit
simpler.

So unconditional preempt_disable()/preempt_enable() it is.
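
For concreteness, here is a sketch of that variant (illustrative only,
not the final patch; same RCU_COND_RESCHED_LIM and per-CPU counter as
above):

	static inline bool rcu_should_resched(void)
	{
		bool ret = false;
		int *tp;

		preempt_disable();
		/* Preemption is off, so this CPU's counter is stable. */
		tp = this_cpu_ptr(&rcu_cond_resched_count);
		if (*tp + 1 < RCU_COND_RESCHED_LIM)
			(*tp)++;  /* no ACCESS_ONCE() needed: no cross-CPU writer */
		else
			ret = true;
		preempt_enable();
		return ret;
	}

The get_cpu_var()/put_cpu_var() pair that Peter mentions would do
equally well; it is the same preempt_disable()/preempt_enable()
wrapped around the address computation.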

							Thanx, Paul


Thread overview: 17+ messages
2014-04-15 22:17 How do I increment a per-CPU variable without warning? Paul E. McKenney
2014-04-15 22:29 ` Dave Jones
2014-04-15 22:47   ` Paul E. McKenney
2014-04-16  3:54     ` Paul E. McKenney
2014-04-16  5:21       ` Peter Zijlstra
2014-04-16 13:03         ` Paul E. McKenney [this message]
2014-04-16 15:12         ` Christoph Lameter
2014-04-16  5:19 ` Peter Zijlstra
2014-04-16 15:08 ` Christoph Lameter
2014-04-16 16:06   ` Paul E. McKenney
2014-04-16 17:12     ` Paul E. McKenney
2014-04-16 18:29       ` Christoph Lameter
2014-04-16 18:47         ` Paul E. McKenney
2014-04-17 17:36           ` Christoph Lameter
2014-04-17 17:46             ` Paul E. McKenney
2014-04-17 17:53               ` Christoph Lameter
2014-04-17 18:21                 ` Paul E. McKenney
