From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 7 Oct 2017 17:10:04 +0200
From: Andrea Parri
To: Peter Zijlstra
Cc: Mathieu Desnoyers, "Paul E. McKenney", linux-kernel, Ingo Molnar,
	Lai Jiangshan, dipankar, Andrew Morton, Josh Triplett,
	Thomas Gleixner, rostedt, David Howells, Eric Dumazet, fweisbec,
	Oleg Nesterov, Boqun Feng, Andrew Hunter, maged michael, gromer,
	Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Dave Watson, Alan Stern, Will Deacon,
	Andy Lutomirski, Ingo Molnar, Alexander Viro, Nicholas Piggin,
	linuxppc-dev, linux-arch
Subject: Re: [PATCH tip/core/rcu 1/3] membarrier: Provide register expedited private command
Message-ID: <20171007151004.GA3874@andrea>
References: <20171004213734.GA11463@linux.vnet.ibm.com>
	<1507153075-12345-1-git-send-email-paulmck@linux.vnet.ibm.com>
	<20171005121250.prr5ff5kf3lxq6hx@hirez.programming.kicks-ass.net>
	<312162.31738.1507219326334.JavaMail.zimbra@efficios.com>
	<20171005220214.GA7140@andrea>
	<206890579.32344.1507241955010.JavaMail.zimbra@efficios.com>
	<20171006083219.asdpl5w4pl6hedcd@hirez.programming.kicks-ass.net>
In-Reply-To: <20171006083219.asdpl5w4pl6hedcd@hirez.programming.kicks-ass.net>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 06, 2017 at 10:32:19AM +0200, Peter Zijlstra wrote:
> > AFAIU the scheduler rq->lock is held while preemption is disabled.
> > synchronize_sched() is used here to ensure that all pre-existing
> > preempt-off critical sections have completed.
> >
> > So saying that we use synchronize_sched() to synchronize with rq->lock
> > would be stretching the truth a bit. It's actually only true because the
> > scheduler holding the rq->lock is surrounded by a preempt-off
> > critical section.
>
> No, rq->lock is sufficient, note that rq->lock is a raw_spinlock_t which
> implies !preempt. Yes, we also surround the rq->lock usage with a
> slightly larger preempt_disable() section but that's not in fact
> required for this.

That matches the current sources: we seemed to agree that what we rely on
here is a preempt-off critical section, and that the start of this
critical section is not marked by that raw_spin_lock().

  Andrea
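For reference, a minimal sketch of the reliance discussed above (kernel-style
pseudocode, not a compilable unit; only synchronize_sched(),
raw_spin_lock()/raw_spin_unlock(), preempt_disable() and the rq->lock/rq->curr
names come from the discussion, the rest is illustrative):

```c
/* Scheduler side: taking rq->lock begins a preempt-off critical
 * section, because raw_spin_lock() on a raw_spinlock_t implies
 * preempt_disable(). */
raw_spin_lock(&rq->lock);     /* preemption now disabled */
rq->curr = next;              /* e.g. publish the new current task */
raw_spin_unlock(&rq->lock);   /* preemption re-enabled; no wider
                               * preempt_disable() section is needed
                               * for the property below */

/* membarrier side: synchronize_sched() waits for all pre-existing
 * preempt-off critical sections, hence for any in-flight rq->lock
 * holder above, to complete before returning. */
synchronize_sched();
```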