Date: Wed, 11 Jan 2017 14:59:20 +0000
From: Mark Rutland
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Arnaldo Carvalho de Melo,
	Thomas Gleixner, Sebastian Andrzej Siewior, jeremy.linton@arm.com,
	Will Deacon
Subject: Re: Perf hotplug lockup in v4.9-rc8
Message-ID: <20170111145920.GB26344@leverpostej>
References: <20161207135217.GA25605@leverpostej>
	<20161207175347.GB13840@leverpostej>
	<20161207183455.GQ3124@twins.programming.kicks-ass.net>
	<20161209135900.GU3174@twins.programming.kicks-ass.net>
In-Reply-To: <20161209135900.GU3174@twins.programming.kicks-ass.net>

Hi Peter,

Sorry for the delay; this fell into my backlog over the holiday.

On Fri, Dec 09, 2016 at 02:59:00PM +0100, Peter Zijlstra wrote:
> So while I went back and forth trying to make that less ugly, I figured
> there was another problem.
>
> Imagine the cpu_function_call() hitting the 'right' CPU, but not finding
> the task current. It will then continue to install the event in the
> context. However, that doesn't stop another CPU from pulling the task in
> question from our rq and scheduling it elsewhere.
>
> This all led me to the below patch. Now it has a rather large comment,
> and while it represents my current thinking on the matter, I'm not at
> all sure it's entirely correct. I got my brain in a fair twist while
> writing it.
>
> Please think about it carefully.

FWIW, I've given the below a spin on a few systems, and with it applied
my reproducer no longer triggers the issue. Unfortunately, most of the
ordering concerns have gone over my head. :/

> @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
>  	/*
>  	 * Installing events is tricky because we cannot rely on ctx->is_active
>  	 * to be set in case this is the nr_events 0 -> 1 transition.
> +	 *
> +	 * Instead we use task_curr(), which tells us if the task is running.
> +	 * However, since we use task_curr() outside of rq::lock, we can race
> +	 * against the actual state. This means the result can be wrong.
> +	 *
> +	 * If we get a false positive, we retry; this is harmless.
> +	 *
> +	 * If we get a false negative, things are complicated. If we are after
> +	 * perf_event_context_sched_in(), ctx::lock will serialize us, and the
> +	 * value must be correct. If we're before, it doesn't matter since
> +	 * perf_event_context_sched_in() will program the counter.
> +	 *
> +	 * However, this hinges on the remote context switch having observed
> +	 * our task->perf_event_ctxp[] store, such that it will in fact take
> +	 * ctx::lock in perf_event_context_sched_in().

Sorry if I'm being thick here, but which store are we describing above?
i.e. which function performs it, and how does it relate to
perf_install_in_context()? I haven't managed to wrap my head around why
this matters. :/

Thanks,
Mark.
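
For context, the install/retry flow that the quoted comment describes looks
roughly like the following. This is a minimal sketch, not the actual patch:
task_function_call(), task_curr(), and the raw_spin_lock_irq() calls are the
usual kernel helpers, while install_sketch() is a hypothetical stand-in for
the real entry point, with error handling and the ctx->task checks elided.

	/*
	 * Minimal sketch of the flow described in the quoted comment --
	 * NOT the exact patch. A failed cross-call (the false-positive
	 * task_curr() case) is simply retried; the direct install under
	 * ctx->lock covers the not-running case.
	 */
	static void install_sketch(struct task_struct *task,
				   struct perf_event_context *ctx,
				   struct perf_event *event)
	{
	again:
		/* Run __perf_install_in_context() on the task's CPU, if it is current. */
		if (!task_function_call(task, __perf_install_in_context, event))
			return;	/* ran on the task's CPU; event is installed */

		raw_spin_lock_irq(&ctx->lock);
		if (task_curr(task)) {
			/*
			 * The task started running after the cross-call
			 * missed it; drop the lock and retry.
			 */
			raw_spin_unlock_irq(&ctx->lock);
			goto again;
		}

		/*
		 * The task is not running: install directly, and rely on
		 * the next perf_event_context_sched_in() taking ctx->lock
		 * to program the counter. Per the comment, this is only
		 * safe if a concurrent remote context switch observes the
		 * task->perf_event_ctxp[] store and therefore takes
		 * ctx->lock itself.
		 */
		add_event_to_ctx(event, ctx);
		raw_spin_unlock_irq(&ctx->lock);
	}

The retry loop makes the task_curr() race benign in one direction only,
which is why the comment's ordering requirement on the remote context
switch carries all the weight for the other direction.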