From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 30 May 2015 22:04:25 +0200
From: Oleg Nesterov
To: "Paul E. McKenney"
Cc: Peter Zijlstra, tj@kernel.org, mingo@redhat.com,
	linux-kernel@vger.kernel.org, der.herr@hofr.at, dave@stgolabs.net,
	torvalds@linux-foundation.org, josh@joshtriplett.org
Subject: ring_buffer_attach && cond_synchronize_rcu (Was: percpu-rwsem: Optimize readers and reduce global impact)
Message-ID: <20150530200425.GA15748@redhat.com>
References: <20150526114356.609107918@infradead.org>
	<20150526120215.042527659@infradead.org>
	<20150530171806.GB14999@linux.vnet.ibm.com>
In-Reply-To: <20150530171806.GB14999@linux.vnet.ibm.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 05/30, Paul E. McKenney wrote:
>
> But it looks like you need the RCU-sched variant.  Please see below for
> an untested patch providing this support.  One benefit of this patch
> is that it does not add any bloat to Tiny RCU.

I don't think so, see another email. But perhaps I am totally confused,
please correct me.

Well, actually the first writer (need_sync == T) could use it, but I do
not think this makes sense: it calls sync() right after it observes
GP_IDLE and drops the lock, so the window is too small.

> ------------------------------------------------------------------------
>
> rcu: Add RCU-sched flavors of get-state and cond-sync

However, to me this patch makes sense anyway. I just don't think
rcu_sync or percpu_rw_semaphore can use the new helpers.

And.
I tried to find other users of get_state/cond_sync. I found
ring_buffer_attach() and it looks obviously buggy? Again, perhaps I am
totally confused, but don't we need to ensure that we have a
"synchronize" _between_ list_del() and list_add()?

IOW. Suppose that ring_buffer_attach() is preempted right after
get_state_synchronize_rcu() and the grace period completes before
spin_lock(). In this case cond_synchronize_rcu() does nothing, and we
reuse ->rb_entry without waiting for a grace period in between?

Don't we need the patch below? (It also moves the ->rcu_pending check
under "if (rb)", to make it more readable imo.)

Peter?

Oleg.

--- x/kernel/events/core.c
+++ x/kernel/events/core.c
@@ -4310,20 +4310,20 @@ static void ring_buffer_attach(struct pe
 		WARN_ON_ONCE(event->rcu_pending);
 
 		old_rb = event->rb;
-		event->rcu_batches = get_state_synchronize_rcu();
-		event->rcu_pending = 1;
-
 		spin_lock_irqsave(&old_rb->event_lock, flags);
 		list_del_rcu(&event->rb_entry);
 		spin_unlock_irqrestore(&old_rb->event_lock, flags);
-	}
 
-	if (event->rcu_pending && rb) {
-		cond_synchronize_rcu(event->rcu_batches);
-		event->rcu_pending = 0;
+		event->rcu_batches = get_state_synchronize_rcu();
+		event->rcu_pending = 1;
 	}
 
 	if (rb) {
+		if (event->rcu_pending) {
+			cond_synchronize_rcu(event->rcu_batches);
+			event->rcu_pending = 0;
+		}
+
 		spin_lock_irqsave(&rb->event_lock, flags);
 		list_add_rcu(&event->rb_entry, &rb->event_list);
 		spin_unlock_irqrestore(&rb->event_lock, flags);