From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755917AbYJQO7R (ORCPT); Fri, 17 Oct 2008 10:59:17 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1754640AbYJQO7B (ORCPT); Fri, 17 Oct 2008 10:59:01 -0400
Received: from e3.ny.us.ibm.com ([32.97.182.143]:58841 "EHLO e3.ny.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754613AbYJQO7A (ORCPT); Fri, 17 Oct 2008 10:59:00 -0400
Date: Fri, 17 Oct 2008 07:58:54 -0700
From: "Paul E. McKenney"
To: Lai Jiangshan
Cc: Ingo Molnar, Linux Kernel Mailing List, Dipankar Sarma, Thomas Gleixner
Subject: Re: [PATCH] rcupdate: fix bug of rcu_barrier*()
Message-ID: <20081017145854.GD6706@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <48F8335E.5060401@cn.fujitsu.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <48F8335E.5060401@cn.fujitsu.com>
User-Agent: Mutt/1.5.15+20070412 (2007-04-11)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 17, 2008 at 02:40:30PM +0800, Lai Jiangshan wrote:
> 
> The current rcu_barrier_bh() is like this:
> 
> void rcu_barrier_bh(void)
> {
>         BUG_ON(in_interrupt());
>         /* Take cpucontrol mutex to protect against CPU hotplug */
>         mutex_lock(&rcu_barrier_mutex);
>         init_completion(&rcu_barrier_completion);
>         atomic_set(&rcu_barrier_cpu_count, 0);
>         /*
>          * The queueing of callbacks in all CPUs must be atomic with
>          * respect to RCU, otherwise one CPU may queue a callback,
>          * wait for a grace period, decrement barrier count and call
>          * complete(), while other CPUs have not yet queued anything.
>          * So, we need to make sure that grace periods cannot complete
>          * until all the callbacks are queued.
>          */
>         rcu_read_lock();
>         on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
>         rcu_read_unlock();
>         wait_for_completion(&rcu_barrier_completion);
>         mutex_unlock(&rcu_barrier_mutex);
> }
> 
> The inconsistency between the code and the comment shows a bug here:
> rcu_read_lock() cannot make sure that "grace periods for RCU_BH
> cannot complete until all the callbacks are queued"; it only makes
> sure that grace periods for RCU cannot complete until all the
> callbacks are queued.
> 
> So we would have to use rcu_read_lock_bh() for rcu_barrier_bh(),
> like this:
> 
> void rcu_barrier_bh(void)
> {
>         ......
>         rcu_read_lock_bh();
>         on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
>         rcu_read_unlock_bh();
>         ......
> }
> 
> Fixing rcu_barrier() and rcu_barrier_sched() the same way would bring
> a lot of duplicate code, so my patch uses another way to fix this
> bug; please see the comment in my patch.
> Thanks to Paul E. McKenney for rewriting the comment.

Still looks good to me!  Thank you again, Jiangshan, for finding and
fixing this one!!!

							Thanx, Paul

> Signed-off-by: Lai Jiangshan
> Reviewed-by: Paul E. McKenney
> ---
> diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
> index 467d594..ad63af8 100644
> --- a/kernel/rcupdate.c
> +++ b/kernel/rcupdate.c
> @@ -119,18 +119,19 @@ static void _rcu_barrier(enum rcu_barrier type)
>  	/* Take cpucontrol mutex to protect against CPU hotplug */
>  	mutex_lock(&rcu_barrier_mutex);
>  	init_completion(&rcu_barrier_completion);
> -	atomic_set(&rcu_barrier_cpu_count, 0);
>  	/*
> -	 * The queueing of callbacks in all CPUs must be atomic with
> -	 * respect to RCU, otherwise one CPU may queue a callback,
> -	 * wait for a grace period, decrement barrier count and call
> -	 * complete(), while other CPUs have not yet queued anything.
> -	 * So, we need to make sure that grace periods cannot complete
> -	 * until all the callbacks are queued.
> +	 * Initialize rcu_barrier_cpu_count to 1, then invoke
> +	 * rcu_barrier_func() on each CPU, so that each CPU also has
> +	 * incremented rcu_barrier_cpu_count.  Only then is it safe to
> +	 * decrement rcu_barrier_cpu_count -- otherwise the first CPU
> +	 * might complete its grace period before all of the other CPUs
> +	 * did their increment, causing this function to return too
> +	 * early.
>  	 */
> -	rcu_read_lock();
> +	atomic_set(&rcu_barrier_cpu_count, 1);
>  	on_each_cpu(rcu_barrier_func, (void *)type, 1);
> -	rcu_read_unlock();
> +	if (atomic_dec_and_test(&rcu_barrier_cpu_count))
> +		complete(&rcu_barrier_completion);
>  	wait_for_completion(&rcu_barrier_completion);
>  	mutex_unlock(&rcu_barrier_mutex);
>  }
> 
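For reference, the patch relies on the rcu_barrier_func()/rcu_barrier_callback()
pair, which is not shown in the thread.  The following is only a rough sketch
reconstructed from the discussion above, not part of the patch; the per-CPU
variable rcu_barrier_head, the callback name, and the RCU_BARRIER_STD /
RCU_BARRIER_SCHED enumerators are assumed and may not match kernel/rcupdate.c
exactly.

	/*
	 * Sketch for context only, not part of the patch.  Each CPU bumps
	 * rcu_barrier_cpu_count and queues a callback of the requested
	 * flavor; when that callback runs after a full grace period it
	 * drops the count and signals completion once it reaches zero.
	 */
	static DEFINE_PER_CPU(struct rcu_head, rcu_barrier_head);

	static void rcu_barrier_callback(struct rcu_head *notused)
	{
		if (atomic_dec_and_test(&rcu_barrier_cpu_count))
			complete(&rcu_barrier_completion);
	}

	static void rcu_barrier_func(void *type)
	{
		struct rcu_head *head = &per_cpu(rcu_barrier_head, smp_processor_id());

		atomic_inc(&rcu_barrier_cpu_count);
		switch ((enum rcu_barrier)(unsigned long)type) {
		case RCU_BARRIER_STD:
			call_rcu(head, rcu_barrier_callback);
			break;
		case RCU_BARRIER_BH:
			call_rcu_bh(head, rcu_barrier_callback);
			break;
		case RCU_BARRIER_SCHED:
			call_rcu_sched(head, rcu_barrier_callback);
			break;
		}
	}

In the old code, rcu_read_lock() only blocks classic RCU grace periods, so
for the _bh and _sched flavors the first queued callback could already run,
see the count hit zero, and call complete() before the other CPUs had queued
anything.  Starting the count at 1 and doing the final atomic_dec_and_test()
in _rcu_barrier() itself closes that window for all three flavors without
needing a flavor-specific read-side critical section.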