From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 17 Nov 2008 09:49:23 +0100
From: Ingo Molnar
To: Frederic Weisbecker
Cc: Steven Rostedt, Linux Kernel
Subject: Re: [PATCH 3/3] tracing/function-return-tracer: add the overrun field
Message-ID: <20081117084923.GD28786@elte.hu>
In-Reply-To: <4920D571.4050007@gmail.com>
User-Agent: Mutt/1.5.18 (2008-05-17)
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

* Frederic Weisbecker wrote:

> Impact: help to find the best depth of trace
>
> We decided to arbitrarily define the depth of the function return
> trace as 20. Perhaps this is not enough. To help find an optimal
> depth, we now measure the overrun: the number of functions that have
> been missed for the current thread. By default this is not displayed;
> we have to set a particular flag on the return tracer:
>
>   echo overrun > /debug/tracing/trace_options
>
> And the overrun will be printed on the right.
>
> As the trace shows below, the current depth of 20 is not enough:
>
>  update_wall_time+0x37f/0x8c0 -> update_xtime_cache (345 ns) (Overruns: 2838)
>  update_wall_time+0x384/0x8c0 -> clocksource_get_next (1141 ns) (Overruns: 2838)
>  do_timer+0x23/0x100 -> update_wall_time (3882 ns) (Overruns: 2838)

hm, interesting. Have you tried to figure out what a practical depth
limit would be?

With lockdep our experience was that function call stacks can be very
deep - if we count IRQ contexts too, it can be up to 100 in extreme
cases. (but at that stage kernel stack limits start hitting us)

I'd say 50 would be needed.

	Ingo