public inbox for linux-kernel@vger.kernel.org
From: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
To: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org, mingo@elte.hu,
	laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, josht@linux.vnet.ibm.com,
	dvhltc@us.ibm.com, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org
Subject: Re: [PATCH -tip] Create rcutree plugins to handle hotplug CPU for multi-level trees
Date: Tue, 25 Aug 2009 14:48:00 -0400
Message-ID: <20090825184800.GD2448@Krystal>
In-Reply-To: <20090825182204.GA26736@linux.vnet.ibm.com>

* Paul E. McKenney (paulmck@linux.vnet.ibm.com) wrote:
> When offlining CPUs from a multi-level tree, there is the possibility
> of offlining the last CPU from a given node when there are preempted
> RCU read-side critical sections that started life on one of the CPUs on
> that node.  In this case, the corresponding tasks will be enqueued via
> the task_struct's rcu_node_entry list_head onto one of the rcu_node's
> blocked_tasks[] lists.  These tasks need to be moved somewhere else
> so that they will prevent the current grace period from ending.
> That somewhere is the root rcu_node.
> 
> With this patch, TREE_PREEMPT_RCU passes moderate rcutorture testing
> with aggressive CPU-hotplugging (no delay between inserting/removing
> randomly selected CPU).
> 
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
[...]
>  /*
> + * Handle tasklist migration for case in which all CPUs covered by the
> + * specified rcu_node have gone offline.  Move them up to the root
> + * rcu_node.  The reason for not just moving them to the immediate
> + * parent is to remove the need for rcu_read_unlock_special() to
> + * make more than two attempts to acquire the target rcu_node's lock.
> + *
> + * The caller must hold rnp->lock with irqs disabled.
> + */
> +static void rcu_preempt_offline_tasks(struct rcu_state *rsp,
> +				      struct rcu_node *rnp)
> +{
> +	int i;
> +	struct list_head *lp;
> +	struct list_head *lp_root;
> +	struct rcu_node *rnp_root = rcu_get_root(rsp);
> +	struct task_struct *tp;
> +
> +	if (rnp == rnp_root)
> +		return;  /* Shouldn't happen: at least one CPU online. */
> +

Hrm, is it "shouldn't happen", or "can be called, but nothing should be
moved"?

If it is really the former, we could put a WARN_ON_ONCE() (or, more
aggressively, a BUG_ON()) there to catch a misbehaving caller rather
than silently ignoring the error.
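(For readers unfamiliar with the idiom: WARN_ON_ONCE() complains only the
first time the condition triggers. A rough userspace approximation, not the
kernel's actual macro from <asm-generic/bug.h>, and with made-up names:)

```c
#include <stdio.h>

/* Userspace sketch of WARN_ON_ONCE() semantics: report the first time
 * the condition is true, stay silent on later occurrences, and return
 * the condition's truth value so it can be used in an if (). */
#define MY_WARN_ON_ONCE(cond) ({                                \
	static int __warned;                                    \
	int __ret = !!(cond);                                   \
	if (__ret && !__warned) {                               \
		__warned = 1;                                   \
		fprintf(stderr, "warning: %s at %s:%d\n",       \
			#cond, __FILE__, __LINE__);             \
	}                                                       \
	__ret;                                                  \
})

/* Hypothetical caller mirroring the early return in the patch. */
static int check(int rnp_is_root)
{
	if (MY_WARN_ON_ONCE(rnp_is_root))
		return 1;	/* "shouldn't happen": complain, then bail */
	return 0;
}
```

With this, a buggy caller produces exactly one warning instead of being
silently ignored.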

> +	/*
> +	 * Move tasks up to root rcu_node.  Rely on the fact that the
> +	 * root rcu_node can be at most one ahead of the rest of the
> +	 * rcu_nodes in terms of gp_num value.

Do you gather descriptions of such constraints in a central place,
somewhere in the code or in the design documentation in the kernel tree?
I just want to point out that every clever assumption like this one,
based on the constraints imposed by the current design, should be easy
to list a year from now if we ever decide to move from tree to hashed
RCU (or whatever next step is necessary then).

I am just worried that the migration helpers seem to have been added to
the design as an afterthought, and might therefore make future evolution
more difficult.

Thanks,

Mathieu

>  This fact allows us to
> +	 * move the blocked_tasks[] array directly, element by element.
> +	 */
> +	for (i = 0; i < 2; i++) {
> +		lp = &rnp->blocked_tasks[i];
> +		lp_root = &rnp_root->blocked_tasks[i];
> +		while (!list_empty(lp)) {
> +			tp = list_entry(lp->next, typeof(*tp), rcu_node_entry);
> +			spin_lock(&rnp_root->lock); /* irqs already disabled */
> +			list_del(&tp->rcu_node_entry);
> +			tp->rcu_blocked_node = rnp_root;
> +			list_add(&tp->rcu_node_entry, lp_root);
> +			spin_unlock(&rnp_root->lock); /* irqs remain disabled */
> +		}
> +	}
> +}
> +
> +/*
>   * Do CPU-offline processing for preemptable RCU.
>   */
>  static void rcu_preempt_offline_cpu(int cpu)
> @@ -410,6 +460,15 @@ static int rcu_preempted_readers(struct rcu_node *rnp)
>  #ifdef CONFIG_HOTPLUG_CPU
>  
>  /*
> + * Because preemptable RCU does not exist, it never needs to migrate
> + * tasks that were blocked within RCU read-side critical sections.
> + */
> +static void rcu_preempt_offline_tasks(struct rcu_state *rsp,
> +				      struct rcu_node *rnp)
> +{
> +}
> +
> +/*
>   * Because preemptable RCU does not exist, it never needs CPU-offline
>   * processing.
>   */

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
