Date: Tue, 25 Jul 2017 17:01:28 -0700
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Mathieu Desnoyers
Cc: Peter Zijlstra, linux-kernel, Ingo Molnar, Lai Jiangshan, dipankar,
	Andrew Morton, Josh Triplett, Thomas Gleixner, rostedt, David Howells,
	Eric Dumazet, fweisbec, Oleg Nesterov, Will Deacon
Subject: Re: [PATCH tip/core/rcu 4/5] sys_membarrier: Add expedited option
Message-Id: <20170726000128.GD3730@linux.vnet.ibm.com>
In-Reply-To: <1480480872.25829.1501023013862.JavaMail.zimbra@efficios.com>

On Tue, Jul 25, 2017 at 10:50:13PM +0000, Mathieu Desnoyers wrote:
> ----- On Jul 25, 2017, at 5:55 PM, Peter Zijlstra peterz@infradead.org wrote:
> 
> > On Tue, Jul 25, 2017 at 02:19:26PM -0700, Paul E. McKenney wrote:
> >> On Tue, Jul 25, 2017 at 10:24:51PM +0200, Peter Zijlstra wrote:
> >> > On Tue, Jul 25, 2017 at 12:36:12PM -0700, Paul E. McKenney wrote:
> [...]
> > 
> >> But it would not be hard for userspace code to force IPIs by repeatedly
> >> awakening higher-priority threads that sleep immediately after being
> >> awakened, right?
> > 
> > RT tasks are not readily available to !root, and the user might have
> > been constrained to a subset of available CPUs.
> > 
> >> > Well, I'm not sure there is an easy means of doing machine-wide IPIs
> >> > for !root out there. This would be a first.
> >> > 
> >> > Something along the lines of:
> >> > 
> >> > void dummy(void *arg)
> >> > {
> >> > 	/* IPIs are assumed to be serializing */
> >> > }
> >> > 
> >> > void ipi_mm(struct mm_struct *mm)
> >> > {
> >> > 	cpumask_var_t cpus;
> >> > 	int cpu;
> >> > 
> >> > 	zalloc_cpumask_var(&cpus, GFP_KERNEL);
> >> > 
> >> > 	for_each_cpu(cpu, mm_cpumask(mm)) {
> >> > 		struct task_struct *p;
> >> > 
> >> > 		/*
> >> > 		 * If the current task of @cpu isn't of this @mm, then
> >> > 		 * it needs a context switch to become one, which will
> >> > 		 * provide the ordering we require.
> >> > 		 */
> >> > 		rcu_read_lock();
> >> > 		p = task_rcu_dereference(&cpu_curr(cpu));
> >> > 		if (p && p->mm == mm)
> >> > 			__cpumask_set_cpu(cpu, cpus);
> >> > 		rcu_read_unlock();
> >> > 	}
> >> > 
> >> > 	on_each_cpu_mask(cpus, dummy, NULL, 1);
> >> > 	free_cpumask_var(cpus);
> >> > }
> >> > 
> >> > Would appear to be minimally invasive and only shoot at CPUs we're
> >> > currently running our process on, which greatly reduces the impact.
> >> 
> >> I am good with this approach as well, and I do very much like that it
> >> avoids IPIing CPUs that aren't running our process (at least in the
> >> common case).  But don't we also need added memory ordering?  It is
> >> sort of OK to IPI a CPU that just now switched away from our process,
> >> but not so good to miss IPIing a CPU that switched to our process just
> >> a little before sys_membarrier().
> > 
> > My thinking was that if we observe '!= mm' that CPU will have to do a
> > context switch in order to make it true. That context switch will
> > provide the ordering we're after so all is well.
> > 
> > Quite possible there's a hole in it, but since I'm running on fumes
> > someone needs to spell it out for me :-)
> > 
> >> I was intending to base this on the last few versions of a 2010 patch,
> >> but maybe things have changed:
> >> 
> >> https://marc.info/?l=linux-kernel&m=126358017229620&w=2
> >> https://marc.info/?l=linux-kernel&m=126436996014016&w=2
> >> https://marc.info/?l=linux-kernel&m=126601479802978&w=2
> >> https://marc.info/?l=linux-kernel&m=126970692903302&w=2
> >> 
> >> Discussion here:
> >> 
> >> https://marc.info/?l=linux-kernel&m=126349766324224&w=2
> >> 
> >> The discussion led to acquiring the runqueue locks, as there was
> >> otherwise a need to add code to the scheduler fastpaths.
> > 
> > TL;DR.. that's far too much to trawl through.
> > 
> >> Some architectures are less precise than others in tracking which
> >> CPUs are running a given process due to ASIDs, though this is
> >> thought to be a non-problem:
> >> 
> >> https://marc.info/?l=linux-arch&m=126716090413065&w=2
> >> https://marc.info/?l=linux-arch&m=126716262815202&w=2
> >> 
> >> Thoughts?
> > 
> > Yes, there are architectures that only accumulate bits in mm_cpumask();
> > with the additional check to see if the remote task belongs to our MM,
> > this should be a non-issue.
> 
> This would implement a MEMBARRIER_CMD_PRIVATE_EXPEDITED (or such) flag
> for expedited process-local effect. This differs from the "SHARED" flag,
> since the SHARED flag affects threads accessing memory mappings shared
> across processes as well.
> 
> I wonder if we could create a MEMBARRIER_CMD_SHARED_EXPEDITED behavior
> by iterating on all memory mappings mapped into the current process,
> and building a cpumask based on the union of all mm masks encountered?
> Then we could send the IPI to all CPUs belonging to that cpumask. Or
> am I missing something obvious?

I suspect that something like this would work, but I agree with your
2010 self, who argued that this should be follow-on functionality.
After all, the user probably needs to be aware of who is sharing for
other reasons, and can then make each process do sys_membarrier().

							Thanx, Paul