Date: Wed, 26 Jul 2017 18:45:02 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Mathieu Desnoyers
Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Lai Jiangshan, dipankar,
	Andrew Morton, Josh Triplett, Thomas Gleixner, rostedt, David Howells,
	Eric Dumazet, fweisbec, Oleg Nesterov, Will Deacon
Subject: Re: [PATCH tip/core/rcu 4/5] sys_membarrier: Add expedited option
Message-Id: <20170727014502.GA17160@linux.vnet.ibm.com>
In-Reply-To: <20170726211146.GA3730@linux.vnet.ibm.com>

On Wed, Jul 26, 2017 at 02:11:46PM -0700, Paul E. McKenney wrote:
> On Wed, Jul 26, 2017 at 08:37:23PM +0000, Mathieu Desnoyers wrote:
> > ----- On Jul 26, 2017, at 2:30 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com wrote:
> >
> > > On Wed, Jul 26, 2017 at 06:01:15PM +0000, Mathieu Desnoyers wrote:
> > >> ----- On Jul 26, 2017, at 11:42 AM, Paul E. McKenney paulmck@linux.vnet.ibm.com
> > >> wrote:
> > >>
> > >> > On Wed, Jul 26, 2017 at 09:46:56AM +0200, Peter Zijlstra wrote:
> > >> >> On Tue, Jul 25, 2017 at 10:50:13PM +0000, Mathieu Desnoyers wrote:
> > >> >> > This would implement a MEMBARRIER_CMD_PRIVATE_EXPEDITED (or such) flag
> > >> >> > for expedited process-local effect. This differs from the "SHARED" flag,
> > >> >> > since the SHARED flag affects threads accessing memory mappings shared
> > >> >> > across processes as well.
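For concreteness, a minimal userspace sketch of what invoking such a
process-local expedited command might look like. The
MEMBARRIER_CMD_PRIVATE_EXPEDITED name and its flag value are assumptions
at this point in the thread -- only MEMBARRIER_CMD_QUERY and
MEMBARRIER_CMD_SHARED are in the released ABI:

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	/* Hypothetical command: not yet in <linux/membarrier.h>. */
	#ifndef MEMBARRIER_CMD_PRIVATE_EXPEDITED
	#define MEMBARRIER_CMD_PRIVATE_EXPEDITED (1 << 3)
	#endif

	/* Thin wrapper: glibc provides no membarrier() stub. */
	static int membarrier(int cmd, int flags)
	{
		return syscall(__NR_membarrier, cmd, flags);
	}

	int main(void)
	{
		/*
		 * Acts as a memory barrier with respect to all other
		 * threads of the calling process only; threads of other
		 * processes mapping the same shared memory are unaffected.
		 */
		if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) < 0) {
			perror("membarrier");
			return 1;
		}
		return 0;
	}
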
> > >> >> >
> > >> >> > I wonder if we could create a MEMBARRIER_CMD_SHARED_EXPEDITED behavior
> > >> >> > by iterating on all memory mappings mapped into the current process,
> > >> >> > and build a cpumask based on the union of all mm masks encountered?
> > >> >> > Then we could send the IPI to all cpus belonging to that cpumask. Or
> > >> >> > am I missing something obvious?
> > >> >>
> > >> >> I would readily object to such a beast. You far too quickly end up
> > >> >> having to IPI everybody because of some stupid shared map or something
> > >> >> (yes I know, normal DSOs are mapped private).
> > >> >
> > >> > Agreed, we should keep things simple to start with. The user can always
> > >> > invoke sys_membarrier() from each process.
> > >>
> > >> Another alternative for a MEMBARRIER_CMD_SHARED_EXPEDITED would be rate-limiting
> > >> per thread. For instance, we could add a new "ulimit" that would bound the
> > >> number of expedited membarrier calls that a thread can do per millisecond,
> > >> and switch to synchronize_sched() whenever a thread goes beyond that limit
> > >> for the rest of the time-slot.
> > >>
> > >> An RT system that really cares about not having userspace sending IPIs
> > >> to all cpus could set the ulimit value to 0, which would always use
> > >> synchronize_sched().
> > >>
> > >> Thoughts?
> > >
> > > The patch I posted reverts to synchronize_sched() in kernels booted with
> > > rcupdate.rcu_normal=1. ;-)
> > >
> > > But who is pushing for multiple-process sys_membarrier()? Everyone I
> > > have talked to is OK with it being local to the current process.
> >
> > I guess I'm probably the guilty one intending to do weird stuff in userspace ;)
> >
> > Here are my two use-cases:
> >
> > * a new multi-process liburcu flavor, useful if e.g. a set of processes are
> >   responsible for updating a shared memory data structure, and a separate set
> >   of processes read that data structure. The readers can be killed without ill
> >   effect on the other processes. The synchronization could be done by one
> >   multi-process liburcu flavor per reader process "group".
> >
> > * lttng-ust user-space ring buffers (shared across processes).
> >
> > Both rely on a shared memory mapping for communication between processes, and
> > I would like to be able to issue a sys_membarrier targeting all CPUs that may
> > currently touch the shared memory mapping.
> >
> > I don't really need a system-wide effect, but I would like to be able to target
> > a shared memory mapping and efficiently do an expedited sys_membarrier on all
> > cpus involved.
> >
> > With lttng-ust, the shared buffers can span 1000+ processes, so
> > asking each process to issue sys_membarrier would add lots of unneeded overhead,
> > because this would issue lots of needless memory barriers.
> >
> > Thoughts?
>
> Dealing explicitly with 1000+ processes sounds like no picnic. It instead
> sounds like a job for synchronize_sched_expedited(). ;-)

Actually... Mathieu, does your use case require unprivileged access to
sys_membarrier()?

							Thanx, Paul
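
P.S.: For reference, a sketch of the detect-and-fall-back shape a
userspace library such as liburcu or lttng-ust would presumably need:
probe the kernel with MEMBARRIER_CMD_QUERY (in the ABI since Linux 4.3),
use the expedited private command if the kernel advertises it, and
otherwise fall back to MEMBARRIER_CMD_SHARED, which waits for a full
synchronize_sched() grace period. The expedited command and the
membarrier_all_threads() helper name are again assumptions:

	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/membarrier.h>	/* MEMBARRIER_CMD_QUERY, _SHARED */

	/* Hypothetical command: not yet in <linux/membarrier.h>. */
	#ifndef MEMBARRIER_CMD_PRIVATE_EXPEDITED
	#define MEMBARRIER_CMD_PRIVATE_EXPEDITED (1 << 3)
	#endif

	static int membarrier(int cmd, int flags)
	{
		return syscall(__NR_membarrier, cmd, flags);
	}

	/* Issue the cheapest barrier the running kernel supports. */
	static int membarrier_all_threads(void)
	{
		int mask = membarrier(MEMBARRIER_CMD_QUERY, 0);

		if (mask < 0)
			return -1;	/* no membarrier support at all */
		if (mask & MEMBARRIER_CMD_PRIVATE_EXPEDITED)
			return membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
		/* Slow path: waits for a synchronize_sched() grace period. */
		return membarrier(MEMBARRIER_CMD_SHARED, 0);
	}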