From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 14 Mar 2017 09:24:40 -0700
From: "Paul E. McKenney"
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com, fweisbec@gmail.com
Subject: Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170308215306.GA8776@linux.vnet.ibm.com> <20170309152420.GC3343@twins.programming.kicks-ass.net> <20170309153114.GU30506@linux.vnet.ibm.com> <20170309183732.GB13748@linux.vnet.ibm.com> <20170313124621.GA3328@twins.programming.kicks-ass.net> <20170313155521.GZ30506@linux.vnet.ibm.com>
In-Reply-To: <20170313155521.GZ30506@linux.vnet.ibm.com>
Message-Id: <20170314162440.GA11696@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 13, 2017 at 08:55:21AM -0700, Paul E. McKenney wrote:
> On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> > On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > > And it does pass light testing.  I will hammer it harder this evening.
> > >
> > > So please send a formal patch!
> >
> > Changed it a bit...
> >
> > ---
> > Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
> >
> > Paul reported two independent problems with clear_sched_clock_stable().
> >
> >  - if we tickle it during hotplug (even though the sched_clock was
> >    already marked unstable) we'll attempt to schedule_work() and
> >    this explodes because RCU isn't watching the new CPU quite yet.
> >
> >  - since we run all of __clear_sched_clock_stable() from workqueue
> >    context, there's a preempt problem.
> >
> > Cure both by only doing the static_branch_disable() from a workqueue,
> > and only when it's still stable.
> >
> > This leaves the problem what to do about hotplug actually wrecking TSC
> > though, because if it was stable and now isn't, then we will want to run
> > that work, which then will prod RCU the wrong way.  Bloody hotplug.
> >
> > Reported-by: "Paul E. McKenney"
> > Signed-off-by: Peter Zijlstra (Intel)
>
> This passes initial testing.  I will hammer it harder overnight, but
> in the meantime:

And it did just fine overnight, thank you!

> Tested-by: "Paul E. McKenney"

I am guessing that you will be pushing this one for -rc3, but please
let me know.
							Thanx, Paul

> > ---
> >  kernel/sched/clock.c | 17 ++++++++++++-----
> >  1 file changed, 12 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> > index a08795e21628..fec0f58c8dee 100644
> > --- a/kernel/sched/clock.c
> > +++ b/kernel/sched/clock.c
> > @@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
> >  	tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
> >  }
> >
> > -static void __clear_sched_clock_stable(struct work_struct *work)
> > +static void __sched_clock_work(struct work_struct *work)
> > +{
> > +	static_branch_disable(&__sched_clock_stable);
> > +}
> > +
> > +static DECLARE_WORK(sched_clock_work, __sched_clock_work);
> > +
> > +static void __clear_sched_clock_stable(void)
> >  {
> >  	struct sched_clock_data *scd = this_scd();
> >
> > @@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
> >  		scd->tick_gtod, gtod_offset,
> >  		scd->tick_raw, raw_offset);
> >
> > -	static_branch_disable(&__sched_clock_stable);
> >  	tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
> > -}
> >
> > -static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
> > +	if (sched_clock_stable())
> > +		schedule_work(&sched_clock_work);
> > +}
> >
> >  void clear_sched_clock_stable(void)
> >  {
> > @@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
> >  	smp_mb(); /* matches sched_clock_init_late() */
> >
> >  	if (sched_clock_running == 2)
> > -		schedule_work(&sched_clock_work);
> > +		__clear_sched_clock_stable();
> >  }
> >
> >  void sched_clock_init_late(void)
> >