Date: Thu, 25 Oct 2012 08:07:56 -0700
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Mikulas Patocka
Cc: Linus Torvalds, Oleg Nesterov, Ingo Molnar, Peter Zijlstra,
	Srikar Dronamraju, Ananth N Mavinakayanahalli, Anton Arapov,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] percpu-rw-semaphores: use rcu_read_lock_sched
Message-ID: <20121025150755.GV2465@linux.vnet.ibm.com>
References: <20121017224430.GC2518@linux.vnet.ibm.com>
 <20121018162409.GA28504@redhat.com>
 <20121018163833.GK2518@linux.vnet.ibm.com>
 <20121018175747.GA30691@redhat.com>
 <20121019192838.GM2518@linux.vnet.ibm.com>
 <20121024161638.GA2465@linux.vnet.ibm.com>

On Thu, Oct 25, 2012 at 10:54:11AM -0400, Mikulas Patocka wrote:
>
>
> On Wed, 24 Oct 2012, Paul E. McKenney wrote:
>
> > On Mon, Oct 22, 2012 at 07:39:16PM -0400, Mikulas Patocka wrote:
> > > Use rcu_read_lock_sched / rcu_read_unlock_sched / synchronize_sched
> > > instead of rcu_read_lock / rcu_read_unlock / synchronize_rcu.
> > >
> > > This is an optimization. The RCU-protected region is very small, so
> > > there will be no latency problems if we disable preemption in this
> > > region.
> > >
> > > So we use rcu_read_lock_sched / rcu_read_unlock_sched, which translate
> > > to preempt_disable / preempt_enable. This is smaller (and supposedly
> > > faster) than preemptible rcu_read_lock / rcu_read_unlock.
> > >
> > > Signed-off-by: Mikulas Patocka
> >
> > OK, as promised/threatened, I finally got a chance to take a closer look.
> >
> > The light_mb() and heavy_mb() definitions aren't doing much for me;
> > the code would be clearer with them expanded inline. And while the
> > approach of pairing barrier() with synchronize_sched() is interesting,
> > it would be simpler to rely on RCU's properties. The key point is that
> > if RCU cannot prove that a given RCU-sched read-side critical section
> > is seen by all CPUs to have started after a given synchronize_sched(),
> > then that synchronize_sched() must wait for that RCU-sched read-side
> > critical section to complete.
>
> Also note that you can define both light_mb() and heavy_mb() to be
> smp_mb(), which slows down the reader path a bit and speeds up the
> writer path.
>
> On architectures with in-order memory access (and thus smp_mb() equal
> to barrier()), it doesn't hurt the reader but helps the writer, for
> example:
>
> #ifdef ARCH_HAS_INORDER_MEMORY_ACCESS
> #define light_mb() smp_mb()
> #define heavy_mb() smp_mb()
> #else
> #define light_mb() barrier()
> #define heavy_mb() synchronize_sched()
> #endif

Except that there are no systems running Linux with in-order memory
access. Even x86 and s390 require a barrier instruction for smp_mb()
on SMP=y builds.

							Thanx, Paul
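
For readers following the thread, here is a minimal, untested sketch of
the pattern being discussed: a reader fast path that is one RCU-sched
(preempt-disabled) read-side critical section, paired with a writer that
relies on synchronize_sched() to wait out any reader that might not have
seen its flag. The structure layout and the *_sketch() names are made up
for illustration only; this is not the actual patch, just the shape of
the rcu_read_lock_sched() / synchronize_sched() pairing.

#include <linux/types.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
#include <linux/compiler.h>

/* Hypothetical layout, for illustration only; not the patch's structure. */
struct pcpu_rw_sem_sketch {
	unsigned int __percpu *counters;	/* active readers, per CPU */
	bool locked;				/* a writer is (or will be) in */
};

/*
 * Reader fast path: the whole check-and-increment is a single RCU-sched
 * read-side critical section, i.e. a preempt-disabled region.
 * Returns false if the caller must fall back to a (not shown) slow path
 * that blocks on the writer.
 */
static bool percpu_read_fastpath_sketch(struct pcpu_rw_sem_sketch *p)
{
	bool ok = false;

	rcu_read_lock_sched();			/* preempt_disable() */
	if (likely(!ACCESS_ONCE(p->locked))) {
		__this_cpu_inc(*p->counters);
		ok = true;
	}
	rcu_read_unlock_sched();		/* preempt_enable() */
	return ok;
}

/*
 * Writer side: after synchronize_sched() returns, every reader fast path
 * that could have missed "locked == true" has completed, so every reader
 * that did get in is already reflected in the per-CPU counters, which the
 * writer can then wait to drain.
 */
static void percpu_write_begin_sketch(struct pcpu_rw_sem_sketch *p)
{
	p->locked = true;
	synchronize_sched();
	/* ... now wait for the per-CPU counters to drain to zero ... */
}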