Date: Tue, 30 May 2017 09:25:08 -0700
From: "Paul E. McKenney"
To: Peter Zijlstra
Cc: Thomas Gleixner, LKML, Ingo Molnar, Steven Rostedt, Sebastian Siewior,
	Mathieu Desnoyers, Masami Hiramatsu
Subject: Re: [patch V3 23/32] perf/tracing/cpuhotplug: Fix locking order
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170524081511.203800767@linutronix.de>
 <20170524081548.930941109@linutronix.de>
 <20170524183018.GH3956@linux.vnet.ibm.com>
 <20170530112235.e3rt7rmfubamn3nq@hirez.programming.kicks-ass.net>
In-Reply-To: <20170530112235.e3rt7rmfubamn3nq@hirez.programming.kicks-ass.net>
Message-Id: <20170530162508.GX3956@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 30, 2017 at 01:22:35PM +0200, Peter Zijlstra wrote:
> On Wed, May 24, 2017 at 11:30:18AM -0700, Paul E. McKenney wrote:
> > > @@ -8920,7 +8912,7 @@ perf_event_mux_interval_ms_store(struct
> > >  	pmu->hrtimer_interval_ms = timer;
> > > 
> > >  	/* update all cpuctx for this PMU */
> > > -	get_online_cpus();
> > > +	cpus_read_lock();
> > 
> > OK, I'll bite...
> > 
> > Why is this piece using cpus_read_lock() instead of pmus_lock?
> > 
> > My guess is for the benefit of the cpu_function_call() below, but if
> > the code instead cycled through the perf_online_mask, wouldn't any
> > CPU selected be guaranteed to be online?
> > 
> > Or is there some reason that it would be necessary to specially handle
> > CPUs that perf does not consider to be active, but that are still at
> > least partway online?
> 
> Mostly just lazy. This code path didn't present a problem with the lock
> ordering. Find the conversion below.

I know that feeling!  With this addition:

Acked-by: Paul E. McKenney
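
As an aside, here is a minimal userspace model of the pattern being
discussed -- a subsystem keeping its own online mask, updated from the
hotplug callbacks under the subsystem mutex, so that iterating the mask
under that same mutex only ever visits CPUs the subsystem considers
online.  All names below are illustrative, not the actual perf code:

/*
 * Sketch: subsystem-private hotplug tracking.  The "hotplug callbacks"
 * and the update path both take subsys_lock, so the update path never
 * needs the global hotplug lock (get_online_cpus()/cpus_read_lock()).
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

static pthread_mutex_t subsys_lock = PTHREAD_MUTEX_INITIALIZER;
static bool subsys_online[NR_CPUS];	/* plays the role of perf_online_mask */
static int per_cpu_interval[NR_CPUS];	/* plays the role of cpuctx state */

/* Hotplug callbacks: flip the mask under subsys_lock. */
static void subsys_cpu_up(int cpu)
{
	pthread_mutex_lock(&subsys_lock);
	subsys_online[cpu] = true;
	pthread_mutex_unlock(&subsys_lock);
}

static void subsys_cpu_down(int cpu)
{
	pthread_mutex_lock(&subsys_lock);
	subsys_online[cpu] = false;
	pthread_mutex_unlock(&subsys_lock);
}

/* Update path: the subsystem mutex alone orders us against hotplug. */
static void subsys_set_interval(int ms)
{
	pthread_mutex_lock(&subsys_lock);
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!subsys_online[cpu])
			continue;
		per_cpu_interval[cpu] = ms;	/* CPU is "online" by construction */
	}
	pthread_mutex_unlock(&subsys_lock);
}

int main(void)
{
	subsys_cpu_up(0);
	subsys_cpu_up(2);
	subsys_set_interval(4);
	subsys_cpu_down(2);
	subsys_set_interval(1);
	printf("cpu0=%d cpu2=%d\n", per_cpu_interval[0], per_cpu_interval[2]);
	return 0;
}

The point being that the hotplug side has to take the same mutex before
it tears the per-CPU state down, so the update path cannot observe a CPU
that is partway offline -- which is, as I understand it, what the
conversion below relies on.
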
> > >  	for_each_online_cpu(cpu) {
> > >  		struct perf_cpu_context *cpuctx;
> > >  		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
> > > @@ -8929,7 +8921,7 @@ perf_event_mux_interval_ms_store(struct
> > >  		cpu_function_call(cpu,
> > >  			(remote_function_f)perf_mux_hrtimer_restart, cpuctx);
> > >  	}
> > > -	put_online_cpus();
> > > +	cpus_read_unlock();
> > >  	mutex_unlock(&mux_interval_mutex);
> > > 
> > >  	return count;
> > 
> ---
> Subject: perf: Complete CPU hotplug conversion
> 
> Remove the last cpus_read_lock() user in perf in favour of our internal
> state.  This conversion is non-critical, as the lock ordering wasn't
> problematic, but it's nice to be consistent.
> 
> Reported-by: "Paul E. McKenney"
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  kernel/events/core.c | 21 ++++++++++++++-------
>  1 file changed, 14 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 8d6acaeeea17..ad4f7f03b519 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -389,6 +389,16 @@ static atomic_t nr_switch_events __read_mostly;
>  static LIST_HEAD(pmus);
>  static DEFINE_MUTEX(pmus_lock);
>  static struct srcu_struct pmus_srcu;
> +
> +/*
> + * CPU hotplug handling, also see perf_event_{exit,init}_cpu().
> + *
> + * We use @pmus_lock to serialize PMU (un)registration against CPU hotplug,
> + * tracking the online state in @perf_online_mask and
> + * pmu->pmu_cpu_context->online.  The latter is set while holding ctx->mutex
> + * and therefore holding ctx->mutex is sufficient to serialize against
> + * hotplug wrt cpuctx->online.
> + */
>  static cpumask_var_t perf_online_mask;
> 
>  /*
> @@ -8887,8 +8897,6 @@ perf_event_mux_interval_ms_show(struct device *dev,
>  	return snprintf(page, PAGE_SIZE-1, "%d\n", pmu->hrtimer_interval_ms);
>  }
> 
> -static DEFINE_MUTEX(mux_interval_mutex);
> -
>  static ssize_t
>  perf_event_mux_interval_ms_store(struct device *dev,
>  				 struct device_attribute *attr,
> @@ -8908,12 +8916,12 @@ perf_event_mux_interval_ms_store(struct device *dev,
>  	if (timer == pmu->hrtimer_interval_ms)
>  		return count;
> 
> -	mutex_lock(&mux_interval_mutex);
> +	/* use pmus_lock to order against hotplug and self serialize */
> +	mutex_lock(&pmus_lock);
>  	pmu->hrtimer_interval_ms = timer;
> 
>  	/* update all cpuctx for this PMU */
> -	cpus_read_lock();
> -	for_each_online_cpu(cpu) {
> +	for_each_cpu(cpu, perf_online_mask) {
>  		struct perf_cpu_context *cpuctx;
>  		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
>  		cpuctx->hrtimer_interval = ns_to_ktime(NSEC_PER_MSEC * timer);
> @@ -8921,8 +8929,7 @@ perf_event_mux_interval_ms_store(struct device *dev,
>  		cpu_function_call(cpu,
>  			(remote_function_f)perf_mux_hrtimer_restart, cpuctx);
>  	}
> -	cpus_read_unlock();
> -	mutex_unlock(&mux_interval_mutex);
> +	mutex_unlock(&pmus_lock);
> 
>  	return count;
>  }
> 
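
For completeness, a similarly minimal model of the cpuctx->online scheme
that the comment block above describes: the hotplug path flips the
per-context online flag while holding the context mutex, so any path that
already holds that mutex sees a stable value and cannot race with the CPU
going away underneath it.  Again, the names are illustrative rather than
the real perf code:

/*
 * Sketch: per-context online flag serialized by the context mutex,
 * modelling "holding ctx->mutex is sufficient to serialize against
 * hotplug wrt cpuctx->online".
 */
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct cpu_ctx {
	pthread_mutex_t mutex;		/* plays the role of ctx->mutex */
	bool online;			/* plays the role of cpuctx->online */
	int nr_events;
};

static struct cpu_ctx ctx = {
	.mutex = PTHREAD_MUTEX_INITIALIZER,
};

/* Hotplug side: update ->online under the context mutex. */
static void ctx_set_online(struct cpu_ctx *c, bool online)
{
	pthread_mutex_lock(&c->mutex);
	c->online = online;
	pthread_mutex_unlock(&c->mutex);
}

/* Event-open side: ->online is stable for as long as the mutex is held. */
static int ctx_install_event(struct cpu_ctx *c)
{
	int ret = 0;

	pthread_mutex_lock(&c->mutex);
	if (!c->online)
		ret = -ENODEV;		/* refuse events on an offline CPU */
	else
		c->nr_events++;
	pthread_mutex_unlock(&c->mutex);
	return ret;
}

int main(void)
{
	ctx_set_online(&ctx, true);
	printf("install while online:  %d\n", ctx_install_event(&ctx));
	ctx_set_online(&ctx, false);
	printf("install while offline: %d\n", ctx_install_event(&ctx));
	return 0;
}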