Date: Thu, 27 Jul 2017 12:39:36 +0000 (UTC)
From: Mathieu Desnoyers
To: "Paul E. McKenney"
Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Lai Jiangshan, dipankar,
    Andrew Morton, Josh Triplett, Thomas Gleixner, rostedt, David Howells,
    Eric Dumazet, fweisbec, Oleg Nesterov, Will Deacon
Message-ID: <1441520000.28295.1501159176561.JavaMail.zimbra@efficios.com>
In-Reply-To: <20170727014502.GA17160@linux.vnet.ibm.com>
References: <20170724215758.GA12075@linux.vnet.ibm.com>
 <20170726074656.obadfdu6hdlrmy7r@hirez.programming.kicks-ass.net>
 <20170726154229.GO3730@linux.vnet.ibm.com>
 <1415161894.26359.1501092075006.JavaMail.zimbra@efficios.com>
 <20170726183032.GV3730@linux.vnet.ibm.com>
 <304686293.28015.1501101443139.JavaMail.zimbra@efficios.com>
 <20170726211146.GA3730@linux.vnet.ibm.com>
 <20170727014502.GA17160@linux.vnet.ibm.com>
Subject: Re: [PATCH tip/core/rcu 4/5] sys_membarrier: Add expedited option

----- On Jul 26, 2017, at 9:45 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com wrote:

> On Wed, Jul 26, 2017 at 02:11:46PM -0700, Paul E. McKenney wrote:
>> On Wed, Jul 26, 2017 at 08:37:23PM +0000, Mathieu Desnoyers wrote:
>> > ----- On Jul 26, 2017, at 2:30 PM, Paul E. McKenney paulmck@linux.vnet.ibm.com
>> > wrote:
>> > 
>> > > On Wed, Jul 26, 2017 at 06:01:15PM +0000, Mathieu Desnoyers wrote:
>> > >> ----- On Jul 26, 2017, at 11:42 AM, Paul E. McKenney paulmck@linux.vnet.ibm.com
>> > >> wrote:
>> > >> 
>> > >> > On Wed, Jul 26, 2017 at 09:46:56AM +0200, Peter Zijlstra wrote:
>> > >> >> On Tue, Jul 25, 2017 at 10:50:13PM +0000, Mathieu Desnoyers wrote:
>> > >> >> > This would implement a MEMBARRIER_CMD_PRIVATE_EXPEDITED (or such) flag
>> > >> >> > for expedited process-local effect. This differs from the "SHARED" flag,
>> > >> >> > since the SHARED flag affects threads accessing memory mappings shared
>> > >> >> > across processes as well.
>> > >> >> > 
>> > >> >> > I wonder if we could create a MEMBARRIER_CMD_SHARED_EXPEDITED behavior
>> > >> >> > by iterating on all memory mappings mapped into the current process,
>> > >> >> > and build a cpumask based on the union of all mm masks encountered ?
>> > >> >> > Then we could send the IPI to all cpus belonging to that cpumask. Or
>> > >> >> > am I missing something obvious ?
>> > >> >> 
>> > >> >> I would readily object to such a beast. You far too quickly end up
>> > >> >> having to IPI everybody because of some stupid shared map or something
>> > >> >> (yes I know, normal DSOs are mapped private).
>> > >> > 
>> > >> > Agreed, we should keep things simple to start with. The user can always
>> > >> > invoke sys_membarrier() from each process.
>> > >> 
>> > >> Another alternative for a MEMBARRIER_CMD_SHARED_EXPEDITED would be
>> > >> rate-limiting per thread.
>> > >> For instance, we could add a new "ulimit" that would bound the
>> > >> number of expedited membarrier calls a thread can do per millisecond,
>> > >> and switch to synchronize_sched() whenever a thread goes beyond that
>> > >> limit for the rest of the time slot.
>> > >> 
>> > >> An RT system that really cares about not having userspace send IPIs
>> > >> to all cpus could set the ulimit value to 0, which would always use
>> > >> synchronize_sched().
>> > >> 
>> > >> Thoughts ?
>> > > 
>> > > The patch I posted reverts to synchronize_sched() in kernels booted with
>> > > rcupdate.rcu_normal=1. ;-)
>> > > 
>> > > But who is pushing for multiple-process sys_membarrier()? Everyone I
>> > > have talked to is OK with it being local to the current process.
>> > 
>> > I guess I'm probably the guilty one intending to do weird stuff in userspace ;)
>> > 
>> > Here are my two use-cases:
>> > 
>> > * a new multi-process liburcu flavor, useful if e.g. a set of processes is
>> >   responsible for updating a shared memory data structure, and a separate
>> >   set of processes reads that data structure. The readers can be killed
>> >   without ill effect on the other processes. The synchronization could be
>> >   done by one multi-process liburcu flavor per reader process "group".
>> > 
>> > * lttng-ust user-space ring buffers (shared across processes).
>> > 
>> > Both rely on a shared memory mapping for communication between processes, and
>> > I would like to be able to issue a sys_membarrier targeting all CPUs that may
>> > currently touch the shared memory mapping.
>> > 
>> > I don't really need a system-wide effect, but I would like to be able to target
>> > a shared memory mapping and efficiently do an expedited sys_membarrier on all
>> > cpus involved.
>> > 
>> > With lttng-ust, the shared buffers can span 1000+ processes, so asking
>> > each process to issue sys_membarrier would add lots of unneeded overhead,
>> > because this would issue lots of needless memory barriers.
>> > 
>> > Thoughts ?
>> 
>> Dealing explicitly with 1000+ processes sounds like no picnic. It instead
>> sounds like a job for synchronize_sched_expedited(). ;-)
> 
> Actually...
> 
> Mathieu, does your use case require unprivileged access to sys_membarrier()?

Unfortunately, yes: both lttng-ust and the multi-process liburcu flavor
require sys_membarrier to be usable from non-root. And as Peter pointed
out, stuff like containers complicates things even for the root case.

Thanks,

Mathieu

> 
>                                                              Thanx, Paul

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
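
[For context, a minimal sketch of how userspace invokes membarrier(2) as it
exists at the time of this thread (Linux 4.3+), where only MEMBARRIER_CMD_QUERY
and MEMBARRIER_CMD_SHARED are available. The MEMBARRIER_CMD_PRIVATE_EXPEDITED
command discussed above is still only a proposal here, so it appears only as a
comment; glibc has no wrapper, hence the raw syscall().]

/*
 * Sketch: query the supported membarrier commands, then issue the
 * non-expedited SHARED barrier (which may block for a grace period).
 * A future MEMBARRIER_CMD_PRIVATE_EXPEDITED would be called the same
 * way, but with process-local effect and an IPI-based fast path.
 */
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	int mask;

	/* Returns a bitmask of supported commands, or -1 on error. */
	mask = membarrier(MEMBARRIER_CMD_QUERY, 0);
	if (mask < 0) {
		perror("membarrier");
		return 1;
	}
	if (mask & MEMBARRIER_CMD_SHARED) {
		/* Orders memory accesses against all running threads. */
		if (membarrier(MEMBARRIER_CMD_SHARED, 0))
			perror("membarrier");
	}
	return 0;
}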
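
[And a rough sketch of the per-thread rate-limiting idea floated above, just
to make the proposal concrete. This is not from any posted patch: the rlimit
slot, the task_struct fields, and the IPI helper are all hypothetical names.
The thread gets a budget of expedited calls per millisecond; once the budget
is spent, the call falls back to synchronize_sched() until the window ends,
and a limit of 0 means "always use synchronize_sched()".]

/*
 * Hypothetical kernel-side sketch of the rate-limited expedited path.
 * RLIMIT_MEMBARRIER, the current->membarrier_* fields, and
 * membarrier_expedited_ipi() do not exist; they only illustrate the idea.
 */
static void membarrier_expedited_throttled(void)
{
	unsigned long limit = rlimit(RLIMIT_MEMBARRIER);  /* hypothetical rlimit */
	u64 now = ktime_get_ns();

	/* Start a new 1 ms window when the previous one has elapsed. */
	if (now >= current->membarrier_window_end) {      /* hypothetical field */
		current->membarrier_window_end = now + NSEC_PER_MSEC;
		current->membarrier_calls = 0;            /* hypothetical field */
	}
	if (current->membarrier_calls < limit) {
		current->membarrier_calls++;
		membarrier_expedited_ipi();  /* hypothetical IPI fast path */
	} else {
		/* Budget exhausted (or limit == 0): take the slow path. */
		synchronize_sched();
	}
}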