Date: Fri, 15 Jul 2016 18:30:54 +0200
From: Oleg Nesterov
To: Peter Zijlstra
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, tj@kernel.org,
    paulmck@linux.vnet.ibm.com, john.stultz@linaro.org, dimitrysh@google.com,
    romlem@google.com, ccross@google.com, tkjos@google.com
Subject: Re: [PATCH 1/2] locking/percpu-rwsem: Optimize readers and reduce global impact
Message-ID: <20160715163054.GA2995@redhat.com>
References: <20160714182545.786693675@infradead.org>
 <20160714183022.272275102@infradead.org>
In-Reply-To: <20160714183022.272275102@infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 07/14, Peter Zijlstra wrote:
>
> Currently the percpu-rwsem switches to (global) atomic ops while a
> writer is waiting; which could be quite a while and slows down
> releasing the readers.
>
> This patch cures this problem by ordering the reader-state vs
> reader-count (see the comments in __percpu_down_read() and
> percpu_down_write()). This changes a global atomic op into a full
> memory barrier, which doesn't have the global cacheline contention.

I've applied this patch + another change you sent on top of it.
Everything looks good to me except the __this_cpu_inc() in
__percpu_down_read(),

> +	__down_read(&sem->rw_sem);
> +	__this_cpu_inc(*sem->read_count);
> +	__up_read(&sem->rw_sem);

Preemption is already enabled, don't we need this_cpu_inc() ?

> -EXPORT_SYMBOL_GPL(percpu_up_write);
> +EXPORT_SYMBOL(percpu_up_write);

and this one ;) I do not really care, but it seems you did this change
by accident.

Actually, I _think_ we can do some cleanups/improvements on top of this
change, but we can do this later. In particular, _perhaps_ we can avoid
the unconditional wakeup in __percpu_up_read(), but I am not sure and in
any case this needs another change.

Reviewed-by: Oleg Nesterov