Date: Thu, 6 Apr 2017 11:12:22 -0700
From: "Paul E. McKenney"
To: Steven Rostedt
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Andrew Morton
Subject: Re: [PATCH 3/4] tracing: Add stack_tracer_disable/enable() functions
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170406164237.874767449@goodmis.org> <20170406164432.361457723@goodmis.org>
In-Reply-To: <20170406164432.361457723@goodmis.org>
Message-Id: <20170406181222.GH1600@linux.vnet.ibm.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 06, 2017 at 12:42:40PM
-0400, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)"
>
> There are certain parts of the kernel that can not let stack tracing
> proceed (namely in RCU), because the stack tracer uses RCU, and parts of RCU
> internals can not handle having RCU read side locks taken.
>
> Add stack_tracer_disable() and stack_tracer_enable() functions to let RCU
> stop stack tracing on the current CPU as it is in those critical sections.

s/as it is in/when it is in/?

> Signed-off-by: Steven Rostedt (VMware)

One quibble above, one objection below.

							Thanx, Paul

> ---
>  include/linux/ftrace.h     |  6 ++++++
>  kernel/trace/trace_stack.c | 28 ++++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+)
>
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index ef7123219f14..40afee35565a 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -286,6 +286,12 @@ int
>  stack_trace_sysctl(struct ctl_table *table, int write,
>  		   void __user *buffer, size_t *lenp,
>  		   loff_t *ppos);
> +
> +void stack_tracer_disable(void);
> +void stack_tracer_enable(void);
> +#else
> +static inline void stack_tracer_disable(void) { }
> +static inline void stack_tracer_enabe(void) { }
>  #endif
>
>  struct ftrace_func_command {
> diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
> index 05ad2b86461e..5adbb73ec2ec 100644
> --- a/kernel/trace/trace_stack.c
> +++ b/kernel/trace/trace_stack.c
> @@ -41,6 +41,34 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
>  int stack_tracer_enabled;
>  static int last_stack_tracer_enabled;
>
> +/**
> + * stack_tracer_disable - temporarily disable the stack tracer
> + *
> + * There's a few locations (namely in RCU) where stack tracing
> + * can not be executed. This function is used to disable stack
> + * tracing during those critical sections.
> + *
> + * This function will disable preemption. stack_tracer_enable()
> + * must be called shortly after this is called.
> + */
> +void stack_tracer_disable(void)
> +{
> +	preempt_disable_notrace();

Interrupts are disabled in all current call points, so you don't really
need to disable preemption.  I would normally not worry, given the
ease-of-use improvements, but some people get annoyed about even slight
increases in idle-entry overhead.

> +	this_cpu_inc(trace_active);
> +}
> +
> +/**
> + * stack_tracer_enable - re-enable the stack tracer
> + *
> + * After stack_tracer_disable() is called, stack_tracer_enable()
> + * must shortly be called afterward.
> + */
> +void stack_tracer_enable(void)
> +{
> +	this_cpu_dec(trace_active);
> +	preempt_enable_notrace();

Ditto...

> +}
> +
>  void stack_trace_print(void)
>  {
>  	long i;
> --
> 2.10.2
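[Editor's sketch, not part of the original thread.] For concreteness, the
variant Paul's objection points toward might look like the following. This
is only a sketch under his stated assumption — that every current caller
already runs with interrupts disabled, which by itself prevents migration
to another CPU — so the preempt_disable_notrace()/preempt_enable_notrace()
pair can be dropped and the raw, preemption-unsafe per-CPU accessors
__this_cpu_inc()/__this_cpu_dec() used instead:

```c
/*
 * Sketch only: assumes all callers run with interrupts disabled,
 * so no preemption handling is needed here.  trace_active is the
 * per-CPU counter declared in kernel/trace/trace_stack.c.
 */
void stack_tracer_disable(void)
{
	/* Raw accessor: safe only because the caller cannot migrate. */
	__this_cpu_inc(trace_active);
}

void stack_tracer_enable(void)
{
	__this_cpu_dec(trace_active);
}
```

The trade-off being debated: the posted patch pays a small
preempt-count increment/decrement on the idle-entry path in exchange
for an API that is safe to call from any context, while the sketch
above shifts the burden onto callers to guarantee they cannot be
preempted between the paired calls.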