Subject: [PATCH] degrade severity of lockdep printk
From: Kyle McMartin @ 2009-03-23 20:16 UTC
To: mingo; +Cc: linux-kernel

From: Kyle McMartin <kyle@redhat.com>

While it means something is awry, the BUG prefix confuses users and
possibly the kerneloops tool. It's also being trivially exhausted in
normal use on my laptop and several other folks' machines, but that
will need another look.

Signed-off-by: Kyle McMartin <kyle@redhat.com>

diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index 06b0c35..139a81a 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -368,7 +368,7 @@ static int save_trace(struct stack_trace *trace)
 		if (!debug_locks_off_graph_unlock())
 			return 0;
 
-		printk("BUG: MAX_STACK_TRACE_ENTRIES too low!\n");
+		printk("warning: MAX_STACK_TRACE_ENTRIES too low!\n");
 		printk("turning off the locking correctness validator.\n");
 		dump_stack();
Subject: Re: [PATCH] degrade severity of lockdep printk
From: Ingo Molnar @ 2009-03-23 20:20 UTC
To: Kyle McMartin, Peter Zijlstra; +Cc: mingo, linux-kernel

* Kyle McMartin <kyle@mcmartin.ca> wrote:

> From: Kyle McMartin <kyle@redhat.com>
>
> While it means something is awry, the BUG prefix confuses users
> and possibly the kerneloops tool. It's also being trivially
> exhausted in normal use on my laptop and several other folks'
> machines, but that will need another look.
>
> Signed-off-by: Kyle McMartin <kyle@redhat.com>

Please report it in that case and get us to increase the limit.

	Ingo
Subject: Re: [PATCH] degrade severity of lockdep printk
From: Kyle McMartin @ 2009-03-23 20:28 UTC
To: Ingo Molnar; +Cc: Kyle McMartin, Peter Zijlstra, mingo, linux-kernel

On Mon, Mar 23, 2009 at 09:20:54PM +0100, Ingo Molnar wrote:
>
> Please report it in that case and get us to increase the limit.
>

https://bugzilla.redhat.com/show_bug.cgi?id=489563

Although I don't really see how this can be fixed as such, since any
workload that allocates a struct with a lock, initializes it, and then
eventually frees it a whole bunch of times will run out of lock_lists,
won't it?

(I still think the text should be changed from BUG, though, since we're
not actually calling BUG()...)

regards, Kyle
Subject: Re: [PATCH] degrade severity of lockdep printk
From: Peter Zijlstra @ 2009-03-23 20:32 UTC
To: Kyle McMartin; +Cc: Ingo Molnar, mingo, linux-kernel

On Mon, 2009-03-23 at 16:28 -0400, Kyle McMartin wrote:
> On Mon, Mar 23, 2009 at 09:20:54PM +0100, Ingo Molnar wrote:
> >
> > Please report it in that case and get us to increase the limit.
> >
>
> https://bugzilla.redhat.com/show_bug.cgi?id=489563
>
> Although I don't really see how this can be fixed as such, since any
> workload that allocates a struct with a lock, initializes it, and then
> eventually frees it a whole bunch of times will run out of lock_lists,
> won't it?

Not if the lock init site doesn't change. Modules are the big exception:
cycling modules will run you out of lockdep space.
Subject: Re: [PATCH] degrade severity of lockdep printk
From: Kyle McMartin @ 2009-03-23 20:39 UTC
To: Peter Zijlstra; +Cc: Kyle McMartin, Ingo Molnar, mingo, linux-kernel

On Mon, Mar 23, 2009 at 09:32:35PM +0100, Peter Zijlstra wrote:
> > Although I don't really see how this can be fixed as such, since any
> > workload that allocates a struct with a lock, initializes it, and then
> > eventually frees it a whole bunch of times will run out of lock_lists,
> > won't it?
>
> Not if the lock init site doesn't change. Modules are the big exception:
> cycling modules will run you out of lockdep space.

Somewhat confused by this... possibly I'm just being thick and missing
some subtlety, but surely the context is equally important? I mean, the
locks held on entry to, say, a fs_operations struct member function
could be different on every possible callpath...

regards, Kyle
Subject: Re: [PATCH] degrade severity of lockdep printk
From: Peter Zijlstra @ 2009-03-23 21:12 UTC
To: Kyle McMartin; +Cc: Ingo Molnar, mingo, linux-kernel

On Mon, 2009-03-23 at 16:39 -0400, Kyle McMartin wrote:
> Somewhat confused by this... possibly I'm just being thick and missing
> some subtlety, but surely the context is equally important? I mean, the
> locks held on entry to, say, a fs_operations struct member function
> could be different on every possible callpath...

I'm thinking we're not quite understanding each other here, so let me
try and explain things (hopefully more clearly).

Each lock gets a key object, and this key object must be in static
storage. Static lock objects, e.g.:

	static spinlock_t my_lock;

can use the spinlock_t itself as the key. Dynamic objects:

	struct my_struct {
		spinlock_t lock;
		...
	};

	struct my_struct *my_obj = kmalloc(sizeof(*my_obj), GFP_KERNEL);
	spin_lock_init(&my_obj->lock);

will use static storage associated with the init site:

	# define spin_lock_init(lock)				\
	do {							\
		static struct lock_class_key __key;		\
								\
		__spin_lock_init((lock), #lock, &__key);	\
	} while (0)

So, no matter how many my_struct instances you allocate and free, it
will not mean more or fewer objects for lockdep. All that matters are
the static key objects. Each key object creates a lock-class.

And these lock_list thingies which we're running low on are used to
encode the relationships between such classes. This relation is, again,
purely static: it's a property of the code. And unless we have an
in-kernel JIT, that doesn't change (only, we're detecting the relations
dynamically -- but their structure is already encoded in the code).

The second thing you mention, callpaths -- distinctly different from
structure instances -- these too are static and predetermined at
compile time.

Now, since we're detecting these things at run-time, since we have to
deduce them from our code structure, we need this allocation stuff, and
it might turn out our estimates were too low, or alternatively, our
lock dependencies have bloated over time (certainly possible).

All this gets messed up by module unloading: we find all keys in the
module space we're about to unload and destroy all related and derived
information we had about them -- _NOT_ releasing anything back to these
static pools like the list_entries[] array. So a module load/unload
cycle will eat lockdep storage like crazy.

We could possibly fix that, but it will add complexity, which imho
isn't worth it -- just say no to modules :-)