From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Dave Jones <davej@redhat.com>
Cc: "Joao Correia" <joaomiguelcorreia@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	"Américo Wang" <xiyou.wangcong@gmail.com>,
	"Frederic Weisbecker" <fweisbec@gmail.com>,
	"Arjan van de Ven" <arjan@linux.intel.com>,
	"Catalin Marinas" <catalin.marinas@arm.com>
Subject: Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
Date: Wed, 08 Jul 2009 20:36:04 +0200
Message-ID: <1247078164.16156.18.camel@laptop>
In-Reply-To: <20090708172248.GB2521@redhat.com>

On Wed, 2009-07-08 at 13:22 -0400, Dave Jones wrote:
> On Tue, Jul 07, 2009 at 05:55:01PM +0200, Peter Zijlstra wrote:
>  > On Tue, 2009-07-07 at 16:50 +0100, Joao Correia wrote:
>  > 
>  > > >> Yes. Anything 2.6.31 forward triggers this immediately during init
>  > > >> process, at random places.
>  > > >
>  > > > Not on my machines it doesn't.. so I suspect it's something weird in
>  > > > your .config or maybe due to some hardware you have that I don't that
>  > > > triggers different drivers or somesuch.
>  > > 
>  > > I am not the only one reporting this, and it happens, for example,
>  > > with a stock .config from a Fedora 11 install.
>  > > 
>  > > It may, of course, be a funny driver interaction yes, but other than
>  > > stripping the box piece by piece, how would one go about debugging
>  > > this otherwise?
>  > 
>  > One thing to do is stare (or share) at the output
>  > of /proc/lockdep_chains and see if there are some particularly large
>  > chains in there, or many of the same name or something.
> 
> I don't see any long chains, just lots of them.
> 29065 lines on my box that's hitting MAX_STACK_TRACE_ENTRIES.
> 
>  > /proc/lockdep_stats might also be interesting, mine reads like:
>  
>  lock-classes:                         1518 [max: 8191]
>  direct dependencies:                  7142 [max: 16384]

Since we have 7 states per class and can take one trace per state, plus
one trace per dependency, this would yield a max of:

  7*1518+7142 = 17768 stack traces

With the current limit of 262144 stack-trace entries, that would leave
us with an avg depth of:

  262144/17768 = 14.75

Now, since we would not use all states for each class, the actual trace
count is a bit lower and the average depth a little higher, but that
still suggests we have rather deep stack traces on average.

Looking at a lockdep dump hch gave me, that certainly seems possible: I
see tons of very deep callchains.

/me wonders if we're getting significantly deeper..

OK, I guess we can raise this one; does doubling work? That would give
us around 29 entries per trace (2 * 262144 / 17768 ≈ 29.5).
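To make that arithmetic easy to replay, here is a small userspace
sketch (the class/dependency counts are Dave's numbers from above;
262144 is the current MAX_STACK_TRACE_ENTRIES):

#include <stdio.h>

int main(void)
{
	unsigned long classes = 1518;	/* lock-classes in use (Dave's box) */
	unsigned long deps    = 7142;	/* direct dependencies */
	unsigned long states  = 7;	/* one trace per usage state per class */
	unsigned long limit   = 262144;	/* MAX_STACK_TRACE_ENTRIES */
	unsigned long traces  = states * classes + deps;

	printf("max stack traces:   %lu\n", traces);	/* 17768 */
	printf("avg entries/trace:  %.2f\n",
	       (double)limit / traces);			/* 14.75 */
	printf("avg after doubling: %.2f\n",
	       (double)(2 * limit) / traces);		/* 29.51 */
	return 0;
}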

Also, Dave, do these distro init scripts still load every module on the
planet, or are we more sensible these days?

Module load/unload cycles are really bad for lockdep resources.

--

As a side note, I see that each and every trace ends with a -1 entry:

...
[ 1194.412158]    [<c01f7990>] do_mount+0x3c0/0x7c0
[ 1194.412158]    [<c01f7e14>] sys_mount+0x84/0xb0
[ 1194.412158]    [<c01221b1>] syscall_call+0x7/0xb
[ 1194.412158]    [<ffffffff>] 0xffffffff

Which seems to come from:

void save_stack_trace(struct stack_trace *trace)
{
        dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
        if (trace->nr_entries < trace->max_entries)
                trace->entries[trace->nr_entries++] = ULONG_MAX;
}
EXPORT_SYMBOL_GPL(save_stack_trace);

commit 006e84ee3a54e393ec6bef2a9bc891dc5bde2843 seems involved...

Anybody got clue?
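
If the -1 really is just an end-of-trace sentinel that some arches
append, one way to stop it eating a stack_trace entry per trace might
be to trim it on the lockdep side, in save_trace() in kernel/lockdep.c,
right after the save_stack_trace() call. A minimal, untested sketch:

	save_stack_trace(trace);

	/*
	 * Some arches append a ULONG_MAX (-1) entry to mark the
	 * end of the trace; don't let the sentinel consume one of
	 * our (limited) stack_trace entries.
	 */
	if (trace->nr_entries != 0 &&
	    trace->entries[trace->nr_entries - 1] == ULONG_MAX)
		trace->nr_entries--;

With ~17768 traces that alone would save one entry each, some 17k
entries total.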


Thread overview: 21+ messages
2009-07-07 15:25 [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES Joao Correia
2009-07-07 15:33 ` Peter Zijlstra
     [not found]   ` <a5d9929e0907070838q7ed3306du3bb7880e47d7207b@mail.gmail.com>
2009-07-07 15:38     ` Fwd: " Joao Correia
     [not found]     ` <1246981444.9777.11.camel@twins>
2009-07-07 15:50       ` Joao Correia
2009-07-07 15:55         ` Peter Zijlstra
2009-07-07 15:59           ` Joao Correia
2009-07-08 17:22           ` Dave Jones
2009-07-08 18:36             ` Peter Zijlstra [this message]
2009-07-08 18:44               ` Dave Jones
2009-07-08 19:48               ` Joao Correia
2009-07-08 19:56                 ` Peter Zijlstra
2009-07-09  4:39               ` Dave Jones
2009-07-09  8:02                 ` Peter Zijlstra
2009-07-09 16:10                   ` Dave Jones
2009-07-09 17:07                     ` Peter Zijlstra
2009-07-10 15:50                       ` Joao Correia
2009-07-09  9:06               ` Catalin Marinas
2009-07-09  9:09                 ` Peter Zijlstra
2009-07-20 13:31                   ` [PATCH] lockdep: fixup stacktrace wastage Peter Zijlstra
2009-08-02 13:14                     ` [tip:core/locking] lockdep: Fix backtraces tip-bot for Peter Zijlstra
2009-08-02 13:51                     ` tip-bot for Peter Zijlstra
