* lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Sasha Levin @ 2013-12-19 5:53 UTC
To: Peter Zijlstra, Ingo Molnar; +Cc: LKML
Hi all,
I think that my bloated kernel managed to create way too many entries in the
dependency table. If that sounds right, I can send a patch to increase those.
Attached /proc/lock_stat as requested as well.
[ 1078.462898] BUG: MAX_LOCKDEP_ENTRIES too low!
[ 1078.462898] turning off the locking correctness validator.
[ 1078.463109] Please attach the output of /proc/lock_stat to the bug report
[ 1078.463968] CPU: 1 PID: 26372 Comm: trinity-child16 Tainted: G W 3.13.0-rc4-next-20131218-sasha-00014-gf6be483-dirty #4160
[ 1078.463968] ffff8801bb9b8c10 ffff8801bb34d788 ffffffff8439598c ffff8801bb34d788
[ 1078.464297] ffff8801bb9b8bd8 ffff8801bb34d798 ffffffff8118e681 ffff8801bb34d838
[ 1078.465304] ffffffff81193a7a ffff8801bb34d7b8 00000000810b4214 ffff8800cc1d83c0
[ 1078.466442] Call Trace:
[ 1078.470111] [<ffffffff8439598c>] dump_stack+0x52/0x7f
[ 1078.470111] [<ffffffff8118e681>] T.1062+0x31/0xb0
[ 1078.470111] [<ffffffff81193a7a>] check_prev_add+0x4ba/0x550
[ 1078.470111] [<ffffffff810b4214>] ? kvm_clock_read+0x24/0x50
[ 1078.470121] [<ffffffff810b4214>] ? kvm_clock_read+0x24/0x50
[ 1078.470121] [<ffffffff811941d3>] validate_chain+0x6c3/0x7b0
[ 1078.470121] [<ffffffff81174d58>] ? sched_clock_cpu+0x108/0x120
[ 1078.470121] [<ffffffff8119476d>] __lock_acquire+0x4ad/0x580
[ 1078.470121] [<ffffffff811949c2>] lock_acquire+0x182/0x1d0
[ 1078.470121] [<ffffffff83fb67dc>] ? tcp_sendmsg+0x2c/0xd50
[ 1078.470121] [<ffffffff83e508aa>] lock_sock_nested+0x8a/0xa0
[ 1078.470121] [<ffffffff83fb67dc>] ? tcp_sendmsg+0x2c/0xd50
[ 1078.470121] [<ffffffff8107499d>] ? sched_clock+0x1d/0x30
[ 1078.470121] [<ffffffff83fb67dc>] tcp_sendmsg+0x2c/0xd50
[ 1078.470121] [<ffffffff81174d58>] ? sched_clock_cpu+0x108/0x120
[ 1078.470121] [<ffffffff8118dbca>] ? get_lock_stats+0x2a/0x60
[ 1078.470121] [<ffffffff8118dc0e>] ? put_lock_stats+0xe/0x30
[ 1078.470121] [<ffffffff83fe6ef8>] ? inet_sendmsg+0x178/0x1f0
[ 1078.470121] [<ffffffff81aba87a>] ? delay_tsc+0xfa/0x120
[ 1078.470121] [<ffffffff83fe6f53>] inet_sendmsg+0x1d3/0x1f0
[ 1078.470121] [<ffffffff83fe6d80>] ? inet_recvmsg+0x200/0x200
[ 1078.470121] [<ffffffff810b4214>] ? kvm_clock_read+0x24/0x50
[ 1078.470121] [<ffffffff83e4d620>] sock_sendmsg+0x90/0xc0
[ 1078.470121] [<ffffffff810b4214>] ? kvm_clock_read+0x24/0x50
[ 1078.470121] [<ffffffff810b4214>] ? kvm_clock_read+0x24/0x50
[ 1078.470121] [<ffffffff8107499d>] ? sched_clock+0x1d/0x30
[ 1078.470121] [<ffffffff81174be5>] ? sched_clock_local+0x25/0x90
[ 1078.470121] [<ffffffff83e4d831>] kernel_sendmsg+0x41/0x60
[ 1078.470121] [<ffffffff83e50b16>] sock_no_sendpage+0x96/0xb0
[ 1078.470121] [<ffffffff83fe72d8>] ? inet_sendpage+0x198/0x240
[ 1078.470121] [<ffffffff83fb4864>] tcp_sendpage+0x34/0x80
[ 1078.470121] [<ffffffff81aba729>] ? __const_udelay+0x29/0x30
[ 1078.470121] [<ffffffff811b2bf4>] ? __rcu_read_unlock+0x44/0xb0
[ 1078.470121] [<ffffffff83fe733f>] inet_sendpage+0x1ff/0x240
[ 1078.470121] [<ffffffff83fe7140>] ? inet_release+0x1d0/0x1d0
[ 1078.470121] [<ffffffff81192b79>] ? mark_held_locks+0x109/0x140
[ 1078.470121] [<ffffffff83e4a70b>] kernel_sendpage+0x1b/0x30
[ 1078.470121] [<ffffffff83e4a753>] sock_sendpage+0x33/0x40
[ 1078.470121] [<ffffffff81307118>] pipe_to_sendpage+0x78/0x80
[ 1078.470121] [<ffffffff813071ac>] splice_from_pipe_feed+0x8c/0x140
[ 1078.470121] [<ffffffff813070a0>] ? direct_splice_actor+0x40/0x40
[ 1078.470121] [<ffffffff812db39d>] ? pipe_lock+0x1d/0x20
[ 1078.470121] [<ffffffff813070a0>] ? direct_splice_actor+0x40/0x40
[ 1078.470121] [<ffffffff81307ae4>] __splice_from_pipe+0x44/0x80
[ 1078.470121] [<ffffffff813070a0>] ? direct_splice_actor+0x40/0x40
[ 1078.470121] [<ffffffff813070a0>] ? direct_splice_actor+0x40/0x40
[ 1078.470121] [<ffffffff81307b71>] splice_from_pipe+0x51/0x70
[ 1078.470121] [<ffffffff812d237c>] ? rw_verify_area+0xcc/0x100
[ 1078.470121] [<ffffffff81307bd5>] generic_splice_sendpage+0x15/0x20
[ 1078.470121] [<ffffffff8130821c>] do_splice+0x19c/0x310
[ 1078.470121] [<ffffffff81308449>] SyS_splice+0xb9/0x110
[ 1078.470121] [<ffffffff843a6ad0>] tracesys+0xdd/0xe2
Thanks,
Sasha
[-- Attachment #2: stats.gz --]
[-- Type: application/gzip, Size: 67893 bytes --]
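A sketch of what such a bump would look like, against the 3.13-era kernel/locking/lockdep_internals.h (the 16384UL base value matches the [max: 16384] reported later in the thread; the doubled replacement value is arbitrary):

	--- a/kernel/locking/lockdep_internals.h
	+++ b/kernel/locking/lockdep_internals.h
	-#define MAX_LOCKDEP_ENTRIES	16384UL
	+#define MAX_LOCKDEP_ENTRIES	32768UL

The entries live in a static array sized by this constant, so the cost of raising it is paid in kernel image size whether or not the pool ever fills up.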
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Peter Zijlstra @ 2013-12-19 10:34 UTC
To: Sasha Levin; +Cc: Ingo Molnar, LKML
On Thu, Dec 19, 2013 at 12:53:56AM -0500, Sasha Levin wrote:
> Hi all,
>
> I think that my bloated kernel managed to create way too many entries in the
> dependency table. If that sounds right, I can send a patch to increase those.
>
> Attached /proc/lock_stat as requested as well.
/proc/lockdep_stats not lock_stat :-)
Do you still happen to have that?
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Sasha Levin @ 2013-12-19 14:02 UTC
To: Peter Zijlstra; +Cc: Ingo Molnar, LKML
On 12/19/2013 05:34 AM, Peter Zijlstra wrote:
> On Thu, Dec 19, 2013 at 12:53:56AM -0500, Sasha Levin wrote:
>> Hi all,
>>
>> I think that my bloated kernel managed to create way too many entries in the
>> dependency table. If that sounds right, I can send a patch to increase those.
>>
>> Attached /proc/lock_stat as requested as well.
>
> /proc/lockdep_stats not lock_stat :-)
>
> Do you still happen to have that?
Is the BUG message intentional ("Please attach the output of /proc/lock_stat to the bug report")?
Here's /proc/lockdep_stats:
lock-classes: 2397 [max: 8191]
direct dependencies: 16384 [max: 16384]
indirect dependencies: 60481
all direct dependencies: 308079
dependency chains: 21162 [max: 32768]
dependency chain hlocks: 67298 [max: 163840]
in-hardirq chains: 51
in-softirq chains: 430
in-process chains: 19782
stack-trace entries: 235133 [max: 262144]
combined max dependencies: 443376596
hardirq-safe locks: 59
hardirq-unsafe locks: 1285
softirq-safe locks: 156
softirq-unsafe locks: 1136
irq-safe locks: 168
irq-unsafe locks: 1285
hardirq-read-safe locks: 4
hardirq-read-unsafe locks: 357
softirq-read-safe locks: 7
softirq-read-unsafe locks: 337
irq-read-safe locks: 8
irq-read-unsafe locks: 357
uncategorized locks: 281
unused locks: 0
max locking depth: 12
max bfs queue depth: 663
chain lookup misses: 21432
chain lookup hits: 490237032
cyclic checks: 20219
find-mask forwards checks: 12353
find-mask backwards checks: 128734
hardirq on events: 372837372
hardirq off events: 372837362
redundant hardirq ons: 3467
redundant hardirq offs: 361517382
softirq on events: 3688513
softirq off events: 3688541
redundant softirq ons: 0
redundant softirq offs: 0
debug_locks: 0
Thanks,
Sasha
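For reference, the [max:] columns above map onto compile-time constants in kernel/locking/lockdep_internals.h. A 3.13-era sketch, reconstructed to match the maxima reported (treat the exact spelling as approximate):

	#define MAX_LOCKDEP_KEYS_BITS	13
	#define MAX_LOCKDEP_KEYS	((1UL << MAX_LOCKDEP_KEYS_BITS) - 1)	/* 8191 lock-classes */
	#define MAX_LOCKDEP_ENTRIES	16384UL					/* direct dependencies */
	#define MAX_LOCKDEP_CHAINS_BITS	15
	#define MAX_LOCKDEP_CHAINS	(1UL << MAX_LOCKDEP_CHAINS_BITS)	/* 32768 chains */
	#define MAX_LOCKDEP_CHAIN_HLOCKS	(MAX_LOCKDEP_CHAINS*5)		/* 163840 chain hlocks */
	#define MAX_STACK_TRACE_ENTRIES	262144UL				/* stack-trace entries */

The "direct dependencies: 16384 [max: 16384]" line shows that pool exactly full, which is what tripped the BUG in the original report.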
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Peter Zijlstra @ 2013-12-19 15:20 UTC
To: Sasha Levin; +Cc: Ingo Molnar, LKML, Dave Jones
On Thu, Dec 19, 2013 at 09:02:14AM -0500, Sasha Levin wrote:
> On 12/19/2013 05:34 AM, Peter Zijlstra wrote:
> >On Thu, Dec 19, 2013 at 12:53:56AM -0500, Sasha Levin wrote:
> >>Hi all,
> >>
> >>I think that my bloated kernel managed to create way too many entries in the
> >>dependency table. If that sounds right, I can send a patch to increase those.
> >>
> >>Attached /proc/lock_stat as requested as well.
> >
> >/proc/lockdep_stats not lock_stat :-)
> >
> >Do you still happen to have that?
>
> Is the BUG message intentional ("Please attach the output of /proc/lock_stat to the bug report")?
It does? This happened when I wasn't looking..
Commit 199e371f59d31 did that, and its changelog fails to mention why or what. Ingo, Dave?
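For context: that commit routes the pool-exhaustion messages through a small helper, which is where the lock_stat line in Sasha's dmesg comes from. An abridged sketch of its shape (not verbatim):

	static void print_lockdep_off(const char *bug_msg)
	{
		printk(KERN_DEBUG "%s\n", bug_msg);
		printk(KERN_DEBUG "turning off the locking correctness validator.\n");
		printk(KERN_DEBUG "Please attach the output of /proc/lock_stat to the bug report\n");
	}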
So the thing I referred to was from Documentation/lockdep-design.txt:
---
Troubleshooting:
----------------
The validator tracks a maximum of MAX_LOCKDEP_KEYS number of lock classes.
Exceeding this number will trigger the following lockdep warning:
(DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
By default, MAX_LOCKDEP_KEYS is currently set to 8191, and typical
desktop systems have less than 1,000 lock classes, so this warning
normally results from lock-class leakage or failure to properly
initialize locks. These two problems are illustrated below:
1. Repeated module loading and unloading while running the validator
will result in lock-class leakage. The issue here is that each
load of the module will create a new set of lock classes for
that module's locks, but module unloading does not remove old
classes (see below discussion of reuse of lock classes for why).
Therefore, if that module is loaded and unloaded repeatedly,
the number of lock classes will eventually reach the maximum.
2. Using structures such as arrays that have large numbers of
locks that are not explicitly initialized. For example,
a hash table with 8192 buckets where each bucket has its own
spinlock_t will consume 8192 lock classes -unless- each spinlock
is explicitly initialized at runtime, for example, using the
run-time spin_lock_init() as opposed to compile-time initializers
such as __SPIN_LOCK_UNLOCKED(). Failure to properly initialize
the per-bucket spinlocks would guarantee lock-class overflow.
In contrast, a loop that called spin_lock_init() on each lock
would place all 8192 locks into a single lock class.
The moral of this story is that you should always explicitly
initialize your locks.
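To make point 2 concrete, a minimal sketch (hypothetical structure; the names are illustrative, not from any real driver):

	struct bucket {
		spinlock_t lock;
		struct hlist_head chain;
	};
	static struct bucket table[8192];

	static void table_init(void)
	{
		int i;

		/*
		 * A single spin_lock_init() call site: all 8192 locks
		 * share one lock class, instead of one class per lock.
		 */
		for (i = 0; i < 8192; i++) {
			spin_lock_init(&table[i].lock);
			INIT_HLIST_HEAD(&table[i].chain);
		}
	}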
One might argue that the validator should be modified to allow
lock classes to be reused. However, if you are tempted to make this
argument, first review the code and think through the changes that would
be required, keeping in mind that the lock classes to be removed are
likely to be linked into the lock-dependency graph. This turns out to
be harder to do than to say.
Of course, if you do run out of lock classes, the next thing to do is
to find the offending lock classes. First, the following command gives
you the number of lock classes currently in use along with the maximum:
grep "lock-classes" /proc/lockdep_stats
This command produces the following output on a modest system:
lock-classes: 748 [max: 8191]
If the number allocated (748 above) increases continually over time,
then there is likely a leak. The following command can be used to
identify the leaking lock classes:
grep "BD" /proc/lockdep
Run the command and save the output, then compare against the output from
a later run of this command to identify the leakers. This same output
can also help you find situations where runtime lock initialization has
been omitted.
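A concrete way to run that comparison (file names arbitrary):

	grep "BD" /proc/lockdep > lockdep.before
	# ... run the suspect workload or module load/unload cycle ...
	grep "BD" /proc/lockdep > lockdep.after
	diff lockdep.before lockdep.after

Classes that appear only in the second snapshot are the candidates for a leak or for missing runtime initialization.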
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Sasha Levin @ 2013-12-19 15:39 UTC
To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, Dave Jones
On 12/19/2013 10:20 AM, Peter Zijlstra wrote:
> On Thu, Dec 19, 2013 at 09:02:14AM -0500, Sasha Levin wrote:
>> On 12/19/2013 05:34 AM, Peter Zijlstra wrote:
>>> On Thu, Dec 19, 2013 at 12:53:56AM -0500, Sasha Levin wrote:
>>>> Hi all,
>>>>
>>>> I think that my bloated kernel managed to create way too many entries in the
>>>> dependency table. If that sounds right, I can send a patch to increase those.
>>>>
>>>> Attached /proc/lock_stat as requested as well.
>>>
>>> /proc/lockdep_stats not lock_stat :-)
>>>
>>> Do you still happen to have that?
>>
>> Is the BUG message intentional ("Please attach the output of /proc/lock_stat to the bug report")?
>
> It does? This happened when I wasn't looking..
>
> Commit 199e371f59d31 did that; and the Changelog fails to mention why or
> what. Ingo, Dave?
>
> So the thing I referred to was from Documentation/lockdep-design.txt:
[snip]
That discusses lockdep classes, which is actually fine in my case. I ran out of
MAX_LOCKDEP_ENTRIES, which isn't mentioned anywhere in Documentation/ .
Thanks,
Sasha
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Ingo Molnar @ 2013-12-19 15:49 UTC
To: Peter Zijlstra; +Cc: Sasha Levin, LKML, Dave Jones
* Peter Zijlstra <peterz@infradead.org> wrote:
> On Thu, Dec 19, 2013 at 09:02:14AM -0500, Sasha Levin wrote:
> > On 12/19/2013 05:34 AM, Peter Zijlstra wrote:
> > >On Thu, Dec 19, 2013 at 12:53:56AM -0500, Sasha Levin wrote:
> > >>Hi all,
> > >>
> > >>I think that my bloated kernel managed to create way too many entries in the
> > >>dependency table. If that sounds right, I can send a patch to increase those.
> > >>
> > >>Attached /proc/lock_stat as requested as well.
> > >
> > >/proc/lockdep_stats not lock_stat :-)
> > >
> > >Do you still happen to have that?
> >
> > Is the BUG message intentional ("Please attach the output of /proc/lock_stat to the bug report")?
>
> It does? This happened when I wasn't looking..
>
> Commit 199e371f59d31 did that; and the Changelog fails to mention why or
> what. Ingo, Dave?
Simple oversight I think, should be fixed.
> [...]
>
> One might argue that the validator should be modified to allow lock
> classes to be reused. However, if you are tempted to make this
> argument, first review the code and think through the changes that
> would be required, keeping in mind that the lock classes to be
> removed are likely to be linked into the lock-dependency graph.
> This turns out to be harder to do than to say.
Yes, an append-only data structure was a conscious simplification I
decided on very early. (It also increases general robustness if your
data structure can never go away.)
Thanks,
Ingo
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Peter Zijlstra @ 2013-12-19 15:51 UTC
To: Sasha Levin; +Cc: Ingo Molnar, LKML, Dave Jones
On Thu, Dec 19, 2013 at 10:39:57AM -0500, Sasha Levin wrote:
> That discusses lockdep classes, which is actually fine in my case. I ran out of
> MAX_LOCKDEP_ENTRIES, which isn't mentioned anywhere in Documentation/ .
Yeah, it suffers from the same problem though. Lockdep has static
resource allocation and never frees them.
The lock classes are the smallest pool and usually run out first, but
the same could happen for the entries, after all, the more classes we
have the more class connections can happen.
Anyway, barring a leak and silly class mistakes like mentioned in the
document there's nothing we can do except raise the number.
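The entries pool in question is one of those static allocations; in kernel/locking/lockdep.c it is handed out by a simple bump allocator and never returned. An abridged 3.13-era sketch:

	static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
	unsigned long nr_list_entries;

	static struct lock_list *alloc_list_entry(void)
	{
		if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
			if (!debug_locks_off_graph_unlock())
				return NULL;

			print_lockdep_off("BUG: MAX_LOCKDEP_ENTRIES too low!");
			dump_stack();
			return NULL;
		}
		return list_entries + nr_list_entries++;
	}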
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Dave Jones @ 2013-12-19 15:59 UTC
To: Peter Zijlstra; +Cc: Sasha Levin, Ingo Molnar, LKML
On Thu, Dec 19, 2013 at 04:51:21PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 19, 2013 at 10:39:57AM -0500, Sasha Levin wrote:
> > That discusses lockdep classes, which is actually fine in my case. I ran out of
> > MAX_LOCKDEP_ENTRIES, which isn't mentioned anywhere in Documentation/ .
>
> Yeah, it suffers from the same problem though. Lockdep has static
> resource allocation and never frees them.
>
> The lock classes are the smallest pool and usually run out first, but
> the same could happen for the entries, after all, the more classes we
> have the more class connections can happen.
>
> Anyway, barring a leak and silly class mistakes like mentioned in the
> document there's nothing we can do except raise the number.
I tried this. When you bump it to 32k, it fares better but then you
start seeing "BUG: MAX_LOCKDEP_CHAINS too low!" instead.
I've not tried bumping that yet, as I've stopped seeing these lately
due to hitting more serious bugs first.
Dave
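Worth noting for anyone following along: in the 3.13-era header the chain pool is sized by a bit count, and the chain-hlocks pool scales with it (MAX_LOCKDEP_CHAINS * 5, matching the 163840 hlock maximum in the stats above), so the analogous bump is a single bit, which doubles both pools at once. A sketch:

	-#define MAX_LOCKDEP_CHAINS_BITS	15	/* 1 << 15 = 32768 chains */
	+#define MAX_LOCKDEP_CHAINS_BITS	16	/* 1 << 16 = 65536 chains */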
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Peter Zijlstra @ 2013-12-19 16:01 UTC
To: Dave Jones, Sasha Levin, Ingo Molnar, LKML
On Thu, Dec 19, 2013 at 10:59:17AM -0500, Dave Jones wrote:
> On Thu, Dec 19, 2013 at 04:51:21PM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 19, 2013 at 10:39:57AM -0500, Sasha Levin wrote:
> > > That discusses lockdep classes, which is actually fine in my case. I ran out of
> > > MAX_LOCKDEP_ENTRIES, which isn't mentioned anywhere in Documentation/ .
> >
> > Yeah, it suffers from the same problem though. Lockdep has static
> > resource allocation and never frees them.
> >
> > The lock classes are the smallest pool and usually run out first, but
> > the same could happen for the entries, after all, the more classes we
> > have the more class connections can happen.
> >
> > Anyway, barring a leak and silly class mistakes like mentioned in the
> > document there's nothing we can do except raise the number.
>
> I tried this. When you bump it to 32k, it fares better but then you
> start seeing "BUG: MAX_LOCKDEP_CHAINS too low!" instead.
> I've not tried bumping that yet, as I've stopped seeing these lately
> due to hitting more serious bugs first.
What are you doing to trigger all this? I don't see these. Are you
loading/unloading modules a lot?
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Dave Jones @ 2013-12-19 16:10 UTC
To: Peter Zijlstra; +Cc: Sasha Levin, Ingo Molnar, LKML
On Thu, Dec 19, 2013 at 05:01:16PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 19, 2013 at 10:59:17AM -0500, Dave Jones wrote:
> > On Thu, Dec 19, 2013 at 04:51:21PM +0100, Peter Zijlstra wrote:
> > > On Thu, Dec 19, 2013 at 10:39:57AM -0500, Sasha Levin wrote:
> > > > That discusses lockdep classes, which is actually fine in my case. I ran out of
> > > > MAX_LOCKDEP_ENTRIES, which isn't mentioned anywhere in Documentation/ .
> > >
> > > Yeah, it suffers from the same problem though. Lockdep has static
> > > resource allocation and never frees them.
> > >
> > > The lock classes are the smallest pool and usually run out first, but
> > > the same could happen for the entries, after all, the more classes we
> > > have the more class connections can happen.
> > >
> > > Anyway, barring a leak and silly class mistakes like mentioned in the
> > > document there's nothing we can do except raise the number.
> >
> > I tried this. When you bump it to 32k, it fares better but then you
> > start seeing "BUG: MAX_LOCKDEP_CHAINS too low!" instead.
> > I've not tried bumping that yet, as I've stopped seeing these lately
> > due to hitting more serious bugs first.
>
> What are you doing to trigger all this? I don't see these. Are you
> loading/unloading modules a lot?
syscall fuzzing.
Dave
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Sasha Levin @ 2013-12-19 16:51 UTC
To: Dave Jones, Peter Zijlstra, Ingo Molnar, LKML
On 12/19/2013 10:59 AM, Dave Jones wrote:
> On Thu, Dec 19, 2013 at 04:51:21PM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 19, 2013 at 10:39:57AM -0500, Sasha Levin wrote:
> > > That discusses lockdep classes, which is actually fine in my case. I ran out of
> > > MAX_LOCKDEP_ENTRIES, which isn't mentioned anywhere in Documentation/ .
> >
> > Yeah, it suffers from the same problem though. Lockdep has static
> > resource allocation and never frees them.
> >
> > The lock classes are the smallest pool and usually run out first, but
> > the same could happen for the entries, after all, the more classes we
> > have the more class connections can happen.
> >
> > Anyway, barring a leak and silly class mistakes like mentioned in the
> > document there's nothing we can do except raise the number.
>
> I tried this. When you bump it to 32k, it fares better but then you
> start seeing "BUG: MAX_LOCKDEP_CHAINS too low!" instead.
> I've not tried bumping that yet, as I've stopped seeing these lately
> due to hitting more serious bugs first.
Yeah, I see what you're saying. After upping that to 32k I've hit MAX_STACK_TRACE_ENTRIES.
Going to try upping that and see how that goes.
Thanks,
Sasha
* Re: lockdep: BUG: MAX_LOCKDEP_ENTRIES too low!
From: Mike Galbraith @ 2013-12-20 4:05 UTC
To: Dave Jones; +Cc: Peter Zijlstra, Sasha Levin, Ingo Molnar, LKML
On Thu, 2013-12-19 at 10:59 -0500, Dave Jones wrote:
> On Thu, Dec 19, 2013 at 04:51:21PM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 19, 2013 at 10:39:57AM -0500, Sasha Levin wrote:
> > > That discusses lockdep classes, which is actually fine in my case. I ran out of
> > > MAX_LOCKDEP_ENTRIES, which isn't mentioned anywhere in Documentation/ .
> >
> > Yeah, it suffers from the same problem though. Lockdep has static
> > resource allocation and never frees them.
> >
> > The lock classes are the smallest pool and usually run out first, but
> > the same could happen for the entries, after all, the more classes we
> > have the more class connections can happen.
> >
> > Anyway, barring a leak and silly class mistakes like mentioned in the
> > document there's nothing we can do except raise the number.
>
> I tried this. When you bump it to 32k, it fares better but then you
> start seeing "BUG: MAX_LOCKDEP_CHAINS too low!" instead.
> I've not tried bumping that yet, as I've stopped seeing these lately
> due to hitting more serious bugs first.
I had the same problem while test-driving a 3.6-rt kernel with lots of
debug enabled... had to double both ENTRIES/CHAINS to keep lockdep from
running away in a snit.
-Mike