From: Heiko Carstens <heiko.carstens@de.ibm.com>
To: Christian Borntraeger
Cc: Paolo Bonzini, Peter Zijlstra, "Paul E. McKenney", linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com, dipankar@in.ibm.com, akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org, tglx@linutronix.de, rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, kvm@vger.kernel.org, Linus Torvalds, Martin Schwidefsky, linux-s390
Subject: Re: [PATCH RFC tip/core/rcu 1/2] srcu: Allow use of Tiny/Tree SRCU from both process and interrupt context
Date: Tue, 6 Jun 2017 17:27:06 +0200
Message-Id: <20170606152705.GD6681@osiris>
References: <20170605220919.GA27820@linux.vnet.ibm.com> <1496700591-30177-1-git-send-email-paulmck@linux.vnet.ibm.com> <20170606105343.ibhzrk6jwhmoja5t@hirez.programming.kicks-ass.net>
On Tue, Jun 06, 2017 at 04:45:57PM +0200, Christian Borntraeger wrote:
> Adding s390 folks and list
>
> >> Only s390 is TSO, arm64 is very much a weak arch.
> >
> > Right, and thus arm64 can implement a fast this_cpu_inc using LL/SC.
> > s390 cannot because its atomic_inc has implicit memory barriers.
> >
> > s390's this_cpu_inc is *faster* than the generic one, but still pretty slow.
>
> FWIW, we improved the performance of local_irq_save/restore some time ago
> with commit 204ee2c5643199a2 ("s390/irqflags: optimize irq restore") and
> disable/enable seem to be reasonably fast (3-5ns on my system doing both
> disable/enable in a loop) on today's systems. So I would assume that the
> generic implementation would not be that bad.
>
> At the same time, the implicit memory barrier of the atomic_inc should be
> even cheaper. In contrast to x86, a full smp_mb seems to be almost for
> free (looks like <= 1 cycle for a bcr 14,0 and no contention). So I
> _think_ that this should be really fast enough.
>
> As a side note, I am asking myself, though, why we do need the
> preempt_disable/enable for the cases where we use the opcodes
> like lao (atomic load and or to a memory location) and friends.

Because you want the atomic instruction to be executed on the local cpu,
for which you have the per cpu pointer. If you get preempted to a
different cpu between the ptr__ assignment and the lan instruction, it
might not be executed on the local cpu. It's not really a correctness
issue, since the instruction is atomic either way; it may just hit the
previous cpu's per cpu variable.
#define arch_this_cpu_to_op(pcp, val, op)				\
{									\
	typedef typeof(pcp) pcp_op_T__;					\
	pcp_op_T__ val__ = (val);					\
	pcp_op_T__ old__, *ptr__;					\
	preempt_disable();						\
	ptr__ = raw_cpu_ptr(&(pcp));					\
	asm volatile(							\
		op "	%[old__],%[val__],%[ptr__]\n"			\
		: [old__] "=d" (old__), [ptr__] "+Q" (*ptr__)		\
		: [val__] "d" (val__)					\
		: "cc");						\
	preempt_enable();						\
}

#define this_cpu_and_4(pcp, val)	arch_this_cpu_to_op(pcp, val, "lan")

However, in reality it doesn't matter at all, since all distributions we
care about run with preemption disabled, in which case the
preempt_disable/enable pair compiles away.

So this_cpu_inc() should just generate three instructions: two to
calculate the per cpu pointer and an additional asi for the atomic
increment, with operand specific serialization. This is supposed to be a
lot faster than disabling/enabling interrupts around a non-atomic
operation.

But maybe I didn't get the point of this thread :)