From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 1 Jul 2015 08:59:14 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Josh Triplett <josh@joshtriplett.org>
Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, laijs@cn.fujitsu.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, dhowells@redhat.com,
	edumazet@google.com, dvhart@linux.intel.com, fweisbec@gmail.com,
	oleg@redhat.com, bobby.prani@gmail.com
Subject: Re: [PATCH RFC tip/core/rcu 0/5] Expedited grace periods encouraging normal ones
Message-ID: <20150701155914.GI3717@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20150630214805.GA7795@linux.vnet.ibm.com>
	<20150630220014.GA10916@cloud>
	<20150630221224.GQ3717@linux.vnet.ibm.com>
	<20150630234633.GA11450@cloud>
	<20150701001558.GU3717@linux.vnet.ibm.com>
	<20150701004214.GA30853@x>
	<20150701033701.GV3717@linux.vnet.ibm.com>
	<20150701154354.GA14535@jtriplet-mobl1>
In-Reply-To: <20150701154354.GA14535@jtriplet-mobl1>
List-ID: <linux-kernel.vger.kernel.org>

On Wed, Jul 01, 2015 at 08:43:54AM -0700, Josh Triplett wrote:
> On Tue, Jun 30, 2015 at 08:37:01PM -0700, Paul E. McKenney wrote:
> > On Tue, Jun 30, 2015 at 05:42:14PM -0700, Josh Triplett wrote:
> > > On Tue, Jun 30, 2015 at 05:15:58PM -0700, Paul E. McKenney wrote:
> > > > On Tue, Jun 30, 2015 at 04:46:33PM -0700, josh@joshtriplett.org wrote:
> > > > > On Tue, Jun 30, 2015 at 03:12:24PM -0700, Paul E. McKenney wrote:
> > > > > > On Tue, Jun 30, 2015 at 03:00:15PM -0700, josh@joshtriplett.org wrote:
> > > > > > > On Tue, Jun 30, 2015 at 02:48:05PM -0700, Paul E. McKenney wrote:
> > > > > > > > Hello!
> > > > > > > >
> > > > > > > > This series contains some highly experimental patches that allow normal
> > > > > > > > grace periods to take advantage of the work done by concurrent expedited
> > > > > > > > grace periods. This can reduce the overhead incurred by normal grace
> > > > > > > > periods by eliminating the need for force-quiescent-state scans that
> > > > > > > > would otherwise have happened after the expedited grace period completed.
> > > > > > > > It is not clear whether this is a useful tradeoff. Nevertheless, this
> > > > > > > > series contains the following patches:
> > > > > > >
> > > > > > > While it makes sense to avoid unnecessarily delaying a normal grace
> > > > > > > period if the expedited machinery has provided the necessary delay, I'm
> > > > > > > also *deeply* concerned that this will create a new class of
> > > > > > > nondeterministic performance issues. Something that uses RCU may
> > > > > > > perform badly due to grace period latency, but then suddenly start
> > > > > > > performing well because an unrelated task starts hammering expedited
> > > > > > > grace periods. This seems particularly likely during boot, for
> > > > > > > instance, where RCU grace periods can be a significant component of boot
> > > > > > > time (when you're trying to boot to userspace in small fractions of a
> > > > > > > second).
> > > > > >
> > > > > > I will take that as another vote against. And for a reason that I had
> > > > > > not yet come up with, so good show! ;-)
> > > > >
> > > > > Consider it a fairly weak concern against. Increasing performance seems
> > > > > like a good thing in general; I just don't relish the future "feels less
> > > > > responsive" bug reports that take a long time to track down and turn out
> > > > > to be "this completely unrelated driver was loaded and started using
> > > > > expedited grace periods".
> > > >
> > > > From what I can see, this one needs a good reason to go in, as opposed
> > > > to a good reason to stay out.
> > > >
> > > > > Then again, perhaps the more relevant concern would be why drivers use
> > > > > expedited grace periods in the first place.
> > > >
> > > > Networking uses expedited grace periods when RTNL is held to reduce
> > > > contention on that lock.
> > >
> > > Wait, what? Why is anything using traditional (non-S) RCU while *any*
> > > lock is held?
> >
> > In their defense, it is a sleeplock that is never taken except when
> > rearranging networking configuration. Sometimes they need a grace period
> > under the lock. So synchronize_net() checks to see if RTNL is held, and
> > does a synchronize_rcu_expedited() if so and a synchronize_rcu() if not.
> >
> > But maybe I am misunderstanding your question?
>
> No, you understood my question. It seems wrong that they would need a
> grace period *under* the lock, rather than the usual case of making a
> change under the lock, dropping the lock, and *then* synchronizing.

On that I must defer to the networking folks.

							Thanx, Paul