From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 27 Jul 2017 07:44:39 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Mathieu Desnoyers
Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Lai Jiangshan, dipankar,
	Andrew Morton, Josh Triplett, Thomas Gleixner, rostedt,
	David Howells, Eric Dumazet, fweisbec, Oleg Nesterov, Will Deacon
Subject: Re: [PATCH tip/core/rcu 4/5] sys_membarrier: Add expedited option
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170724215758.GA12075@linux.vnet.ibm.com>
 <20170726074656.obadfdu6hdlrmy7r@hirez.programming.kicks-ass.net>
 <20170726154229.GO3730@linux.vnet.ibm.com>
 <1415161894.26359.1501092075006.JavaMail.zimbra@efficios.com>
 <20170726183032.GV3730@linux.vnet.ibm.com>
 <304686293.28015.1501101443139.JavaMail.zimbra@efficios.com>
 <20170726211146.GA3730@linux.vnet.ibm.com>
 <20170727014502.GA17160@linux.vnet.ibm.com>
 <1441520000.28295.1501159176561.JavaMail.zimbra@efficios.com>
In-Reply-To: <1441520000.28295.1501159176561.JavaMail.zimbra@efficios.com>
Message-Id: <20170727144439.GX3730@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jul 27, 2017 at 12:39:36PM +0000, Mathieu Desnoyers wrote:
> ----- On Jul 26, 2017, at 9:45 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com wrote:
> 
> > On Wed, Jul 26, 2017 at 02:11:46PM -0700, Paul E. McKenney wrote:
> >> On Wed, Jul 26, 2017 at 08:37:23PM +0000, Mathieu Desnoyers wrote:
> >> > ----- On Jul 26, 2017, at 2:30 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com
> >> > wrote:
> >> > 
> >> > > On Wed, Jul 26, 2017 at 06:01:15PM +0000, Mathieu Desnoyers wrote:
> >> > >> ----- On Jul 26, 2017, at 11:42 AM, Paul E. McKenney paulmck@linux.vnet.ibm.com
> >> > >> wrote:
> >> > >> 
> >> > >> > On Wed, Jul 26, 2017 at 09:46:56AM +0200, Peter Zijlstra wrote:
> >> > >> >> On Tue, Jul 25, 2017 at 10:50:13PM +0000, Mathieu Desnoyers wrote:
> >> > >> >> > This would implement a MEMBARRIER_CMD_PRIVATE_EXPEDITED (or such) flag
> >> > >> >> > for expedited process-local effect. This differs from the "SHARED" flag,
> >> > >> >> > since the SHARED flag affects threads accessing memory mappings shared
> >> > >> >> > across processes as well.
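
For concreteness, here is a minimal userspace sketch of the command under
discussion.  The MEMBARRIER_CMD_PRIVATE_EXPEDITED value is an assumption
for illustration only: at this point <linux/membarrier.h> defines just
MEMBARRIER_CMD_QUERY and MEMBARRIER_CMD_SHARED.

#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Assumed value; not yet in the uapi header. */
#ifndef MEMBARRIER_CMD_PRIVATE_EXPEDITED
#define MEMBARRIER_CMD_PRIVATE_EXPEDITED	(1 << 3)
#endif

/* Thin wrapper; glibc does not expose membarrier(2) directly. */
static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/* Expedited barrier covering only this process's threads. */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0)) {
		perror("membarrier");
		return 1;
	}
	return 0;
}
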
> >> > >> >> > 
> >> > >> >> > I wonder if we could create a MEMBARRIER_CMD_SHARED_EXPEDITED behavior
> >> > >> >> > by iterating on all memory mappings mapped into the current process,
> >> > >> >> > and build a cpumask based on the union of all mm masks encountered?
> >> > >> >> > Then we could send the IPI to all cpus belonging to that cpumask. Or
> >> > >> >> > am I missing something obvious?
> >> > >> >> 
> >> > >> >> I would readily object to such a beast. You far too quickly end up
> >> > >> >> having to IPI everybody because of some stupid shared map or something
> >> > >> >> (yes I know, normal DSOs are mapped private).
> >> > >> > 
> >> > >> > Agreed, we should keep things simple to start with. The user can always
> >> > >> > invoke sys_membarrier() from each process.
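
To make that objection concrete, here is a rough, non-mergeable sketch of
the union walk described above, assuming file-backed shared mappings and
glossing over mm reference counting, hugetlb, anonymous shared memory,
and the fact that mm_cpumask() is only maintained on some architectures:

/* Union the mm_cpumask() of every mm mapping a file we map shared. */
static void membarrier_shared_cpumask(struct cpumask *mask)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;

	cpumask_clear(mask);
	down_read(&mm->mmap_sem);
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		struct address_space *mapping;
		struct vm_area_struct *other;

		if (!(vma->vm_flags & VM_SHARED) || !vma->vm_file)
			continue;
		mapping = vma->vm_file->f_mapping;
		/* Walk every VMA, in any mm, mapping this address_space. */
		i_mmap_lock_read(mapping);
		vma_interval_tree_foreach(other, &mapping->i_mmap,
					  0, ULONG_MAX)
			cpumask_or(mask, mask, mm_cpumask(other->vm_mm));
		i_mmap_unlock_read(mapping);
	}
	up_read(&mm->mmap_sem);
}

Even simplified this far, one widely shared mapping (a tmpfs segment, or
a file mapped MAP_SHARED by many processes) pulls every mm touching it
into the mask, which is exactly how "IPI everybody" happens.
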
> >> > >> 
> >> > >> Another alternative for a MEMBARRIER_CMD_SHARED_EXPEDITED would be rate-limiting
> >> > >> per thread. For instance, we could add a new "ulimit" that would bound the
> >> > >> number of expedited membarriers per thread that can be done per millisecond,
> >> > >> and switch to synchronize_sched() whenever a thread goes beyond that limit
> >> > >> for the rest of the time-slot.
> >> > >> 
> >> > >> An RT system that really cares about not having userspace sending IPIs
> >> > >> to all cpus could set the ulimit value to 0, which would always use
> >> > >> synchronize_sched().
> >> > >> 
> >> > >> Thoughts?
> >> > > 
> >> > > The patch I posted reverts to synchronize_sched() in kernels booted with
> >> > > rcupdate.rcu_normal=1. ;-)
> >> > > 
> >> > > But who is pushing for multiple-process sys_membarrier()? Everyone I
> >> > > have talked to is OK with it being local to the current process.
> >> > 
> >> > I guess I'm probably the guilty one intending to do weird stuff in userspace ;)
> >> > 
> >> > Here are my two use-cases:
> >> > 
> >> > * a new multi-process liburcu flavor, useful if e.g. a set of processes are
> >> > responsible for updating a shared memory data structure, and a separate set
> >> > of processes read that data structure. The readers can be killed without ill
> >> > effect on the other processes. The synchronization could be done by one
> >> > multi-process liburcu flavor per reader process "group".
> >> > 
> >> > * lttng-ust user-space ring buffers (shared across processes).
> >> > 
> >> > Both rely on a shared memory mapping for communication between processes, and
> >> > I would like to be able to issue a sys_membarrier targeting all CPUs that may
> >> > currently touch the shared memory mapping.
> >> > 
> >> > I don't really need a system-wide effect, but I would like to be able to target
> >> > a shared memory mapping and efficiently do an expedited sys_membarrier on all
> >> > cpus involved.
> >> > 
> >> > With lttng-ust, the shared buffers can span across 1000+ processes, so
> >> > asking each process to issue sys_membarrier would add lots of unneeded overhead,
> >> > because this would issue lots of needless memory barriers.
> >> > 
> >> > Thoughts?
> >> 
> >> Dealing explicitly with 1000+ processes sounds like no picnic. It instead
> >> sounds like a job for synchronize_sched_expedited(). ;-)
> > 
> > Actually...
> > 
> > Mathieu, does your use case require unprivileged access to sys_membarrier()?
> 
> Unfortunately, yes, it does require sys_membarrier to be used from non-root
> both for lttng-ust and liburcu multi-process. And as Peter pointed out, stuff
> like containers complicates things even for the root case.

Hey, I was hoping!  ;-)

							Thanx, Paul
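
As a footnote to the unprivileged-use requirement: the natural userspace
pattern is to query the supported-command mask once and fall back to the
non-expedited command, which may block in synchronize_sched() on the
kernel side.  A sketch, again with the PRIVATE_EXPEDITED value assumed
and urcu_membarrier() a hypothetical helper name:

#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Assumed value; not yet in the uapi header. */
#ifndef MEMBARRIER_CMD_PRIVATE_EXPEDITED
#define MEMBARRIER_CMD_PRIVATE_EXPEDITED	(1 << 3)
#endif

/* Hypothetical helper: expedited when the kernel offers it. */
static int urcu_membarrier(void)
{
	static int expedited = -1;

	if (expedited < 0) {
		/* CMD_QUERY returns a bitmask of supported commands. */
		int mask = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);

		expedited = mask > 0 &&
			    (mask & MEMBARRIER_CMD_PRIVATE_EXPEDITED);
	}
	if (expedited)
		return syscall(__NR_membarrier,
			       MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
	/* Non-expedited; may wait for a full grace period. */
	return syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0);
}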