From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752474Ab0KKJJl (ORCPT ); Thu, 11 Nov 2010 04:09:41 -0500
Received: from mx1.redhat.com ([209.132.183.28]:1026 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1749667Ab0KKJJi
	(ORCPT ); Thu, 11 Nov 2010 04:09:38 -0500
From: Jiri Olsa
To: mingo@elte.hu, rostedt@goodmis.org, andi@firstfloor.org,
	lwoodman@redhat.com, hch@infradead.org
Cc: linux-kernel@vger.kernel.org, Jiri Olsa
Subject: [PATCHv2 1/2] tracing - fix recursive user stack trace
Date: Thu, 11 Nov 2010 10:09:08 +0100
Message-Id: <1289466549-7602-2-git-send-email-jolsa@redhat.com>
In-Reply-To: <20101110164413.GA5360@nowhere>
References: <20101110164413.GA5360@nowhere>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

The user stack trace can fault while examining the trace. The fault
invokes the do_page_fault handler, which traces again, which takes the
user stack trace again, which faults and calls do_page_fault again ...
causing unbounded recursion. Add a recursion detector to break the
cycle.

Signed-off-by: Steven Rostedt
Signed-off-by: Jiri Olsa
---
 kernel/trace/trace.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 82d9b81..1905a72 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1284,6 +1284,8 @@ void trace_dump_stack(void)
 	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count());
 }
 
+static DEFINE_PER_CPU(int, user_stack_count);
+
 void
 ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
 {
@@ -1302,6 +1304,16 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
 	if (unlikely(in_nmi()))
 		return;
 
+	/*
+	 * Prevent recursion, since the user stack tracing may
+	 * trigger other kernel events.
+	 */
+	preempt_disable();
+	if (__get_cpu_var(user_stack_count))
+		goto out;
+
+	__get_cpu_var(user_stack_count)++;
+
 	event = trace_buffer_lock_reserve(buffer, TRACE_USER_STACK,
 					  sizeof(*entry), flags, pc);
 	if (!event)
@@ -1319,6 +1331,11 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
 	save_stack_trace_user(&trace);
 	if (!filter_check_discard(call, entry, buffer, event))
 		ring_buffer_unlock_commit(buffer, event);
+
+	__get_cpu_var(user_stack_count)--;
+
+ out:
+	preempt_enable();
 }
 
 #ifdef UNUSED
-- 
1.7.1