Date: Thu, 6 Apr 2017 07:59:43 -0700
From: "Paul E. McKenney"
To: Steven Rostedt
Cc: LKML
Subject: Re: [BUG] stack tracing causes: kernel/module.c:271 module_assert_mutex_or_preempt
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170405093207.404f8deb@gandalf.local.home>
 <20170405162515.GF1600@linux.vnet.ibm.com>
 <20170405124539.730a2365@gandalf.local.home>
 <20170405175925.GG1600@linux.vnet.ibm.com>
 <20170405221224.0d265a3d@gandalf.local.home>
 <20170406041515.GX1600@linux.vnet.ibm.com>
 <20170406101425.61f661b7@gandalf.local.home>
In-Reply-To: <20170406101425.61f661b7@gandalf.local.home>
Message-Id: <20170406145943.GA1600@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
On Thu, Apr 06, 2017 at 10:14:25AM -0400, Steven Rostedt wrote:
> On Wed, 5 Apr 2017 21:15:15 -0700
> "Paul E. McKenney" wrote:
> 
> > > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > > index 8efd9fe..28e3019 100644
> > > --- a/kernel/trace/ftrace.c
> > > +++ b/kernel/trace/ftrace.c
> > > @@ -2808,18 +2808,28 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
> > >  	 * callers are done before leaving this function.
> > >  	 * The same goes for freeing the per_cpu data of the per_cpu
> > >  	 * ops.
> > > -	 *
> > > -	 * Again, normal synchronize_sched() is not good enough.
> > > -	 * We need to do a hard force of sched synchronization.
> > > -	 * This is because we use preempt_disable() to do RCU, but
> > > -	 * the function tracers can be called where RCU is not watching
> > > -	 * (like before user_exit()). We can not rely on the RCU
> > > -	 * infrastructure to do the synchronization, thus we must do it
> > > -	 * ourselves.
> > >  	 */
> > >  	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)) {
> > > +		/*
> > > +		 * We need to do a hard force of sched synchronization.
> > > +		 * This is because we use preempt_disable() to do RCU, but
> > > +		 * the function tracers can be called where RCU is not watching
> > > +		 * (like before user_exit()). We can not rely on the RCU
> > > +		 * infrastructure to do the synchronization, thus we must do it
> > > +		 * ourselves.
> > > +		 */
> > >  		schedule_on_each_cpu(ftrace_sync);
> > 
> > Great header comment on ftrace_sync(): "Yes, function tracing is rude."
> > And schedule_on_each_cpu() looks like a great workqueue gatling gun!  ;-)
> > 
> > > +#ifdef CONFIG_PREEMPT
> > > +		/*
> > > +		 * When the kernel is preemptive, tasks can be preempted
> > > +		 * while on a ftrace trampoline. Just scheduling a task on
> > > +		 * a CPU is not good enough to flush them. Calling
> > > +		 * synchronize_rcu_tasks() will wait for those tasks to
> > > +		 * execute and either schedule voluntarily or enter user space.
> > > +		 */
> > > +		synchronize_rcu_tasks();
> > > +#endif
> > 
> > How about this to save a line?
> > 
> > 	if (IS_ENABLED(CONFIG_PREEMPT))
> > 		synchronize_rcu_tasks();
> 
> Ah, this works as gcc optimizes it out. Otherwise I received a compile
> error with synchronize_rcu_tasks() not defined. But that's because I
> never enabled CONFIG_TASKS_RCU.

Should I define a synchronize_rcu_tasks() that does BUG() for this case?

> > One thing that might speed this up a bit (or might not) would be to
> > do the schedule_on_each_cpu() from a delayed workqueue.  That way,
> > if any of the activity from schedule_on_each_cpu() involved a voluntary
> > context switch (from a cond_resched() or some such), then
> > synchronize_rcu_tasks() would get the benefit of that context switch.
> > 
> > You would need a flush_work() to wait for that delayed workqueue
> > as well, of course.
> 
> This is a very slow path, I'm not too interested in making it complex
> to speed it up.

Makes sense to me!

> > Not sure whether it is worth it, but figured I should pass it along.
> > 
> > >  	arch_ftrace_trampoline_free(ops);
> > > 
> > >  	if (ops->flags & FTRACE_OPS_FL_PER_CPU)
> > > @@ -5366,22 +5376,6 @@ void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)
> > > 
> > >  static void ftrace_update_trampoline(struct ftrace_ops *ops)
> > >  {
> > > -
> > > -/*
> > > - * Currently there's no safe way to free a trampoline when the kernel
> > > - * is configured with PREEMPT. That is because a task could be preempted
> > > - * when it jumped to the trampoline, it may be preempted for a long time
> > > - * depending on the system load, and currently there's no way to know
> > > - * when it will be off the trampoline. If the trampoline is freed
> > > - * too early, when the task runs again, it will be executing on freed
> > > - * memory and crash.
> > > - */
> > > -#ifdef CONFIG_PREEMPT
> > > -	/* Currently, only non dynamic ops can have a trampoline */
> > > -	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
> > > -		return;
> > > -#endif
> > > -
> > >  	arch_ftrace_update_trampoline(ops);
> > >  }
> > 
> > Agreed, straightforward patch!
> 
> Great, I'll start making it official then.

Sounds very good!

							Thanx, Paul