public inbox for linux-kernel@vger.kernel.org
* [BUG] Lockdep recursive locking in kmem_cache_free
@ 2006-07-27 23:56 Thomas Gleixner
  2006-07-28  5:22 ` Pekka Enberg
  0 siblings, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2006-07-27 23:56 UTC (permalink / raw)
  To: LKML; +Cc: Ingo Molnar, Arjan van de Ven

x86_64, latest git

	tglx

[   57.971202] =============================================
[   57.973118] [ INFO: possible recursive locking detected ]
[   57.973231] ---------------------------------------------
[   57.973343] events/0/5 is trying to acquire lock:
[   57.973452]  (&nc->lock){.+..}, at: [<ffffffff8028f201>] kmem_cache_free+0x141/0x210
[   57.973839]
[   57.973840] but task is already holding lock:
[   57.974040]  (&nc->lock){.+..}, at: [<ffffffff80290501>] cache_reap+0xd1/0x290
[   57.974420]
[   57.974421] other info that might help us debug this:
[   57.974625] 3 locks held by events/0/5:
[   57.974729]  #0:  (cache_chain_mutex){--..}, at: [<ffffffff80290458>] cache_reap+0x28/0x290
[   57.975171]  #1:  (&nc->lock){.+..}, at: [<ffffffff80290501>] cache_reap+0xd1/0x290
[   57.975610]  #2:  (&parent->list_lock){.+..}, at: [<ffffffff8028f572>] __drain_alien_cache+0x42/0x90
[   57.976056]
[   57.976057] stack backtrace:
[   57.976250]
[   57.976251] Call Trace:
[   57.976447]  [<ffffffff802542fc>] __lock_acquire+0x8cc/0xcb0
[   57.976562]  [<ffffffff80254a02>] lock_acquire+0x52/0x70
[   57.976675]  [<ffffffff8028f201>] kmem_cache_free+0x141/0x210
[   57.976790]  [<ffffffff804a6b74>] _spin_lock+0x34/0x50
[   57.976903]  [<ffffffff8028f201>] kmem_cache_free+0x141/0x210
[   57.977018]  [<ffffffff8028f388>] slab_destroy+0xb8/0xf0
[   57.977131]  [<ffffffff8028f4c8>] free_block+0x108/0x170
[   57.977245]  [<ffffffff8028f598>] __drain_alien_cache+0x68/0x90
[   57.977360]  [<ffffffff80290501>] cache_reap+0xd1/0x290
[   57.977473]  [<ffffffff8029069f>] cache_reap+0x26f/0x290
[   57.977588]  [<ffffffff80249a13>] run_workqueue+0xc3/0x120
[   57.977701]  [<ffffffff80290430>] cache_reap+0x0/0x290
[   57.977814]  [<ffffffff80249c91>] worker_thread+0x121/0x160
[   57.977928]  [<ffffffff802305d0>] default_wake_function+0x0/0x10
[   57.978045]  [<ffffffff80249b70>] worker_thread+0x0/0x160
[   57.978158]  [<ffffffff8024d7ba>] kthread+0xda/0x110
[   57.978270]  [<ffffffff8020af2a>] child_rip+0x8/0x12
[   57.978381]  [<ffffffff804a725b>] _spin_unlock_irq+0x2b/0x60
[   57.978496]  [<ffffffff8020a53b>] restore_args+0x0/0x30
[   57.978609]  [<ffffffff8024d6e0>] kthread+0x0/0x110
[   57.978719]  [<ffffffff8020af22>] child_rip+0x0/0x12





Thread overview: 22+ messages
-- links below jump to the message on this page --
2006-07-27 23:56 [BUG] Lockdep recursive locking in kmem_cache_free Thomas Gleixner
2006-07-28  5:22 ` Pekka Enberg
2006-07-28  6:14   ` Thomas Gleixner
2006-07-28 15:35     ` Christoph Lameter
2006-07-28 20:11       ` Thomas Gleixner
2006-07-28 20:18         ` Christoph Lameter
2006-07-28 20:27           ` Arjan van de Ven
2006-07-28 20:27           ` Thomas Gleixner
2006-07-28 20:35             ` Thomas Gleixner
2006-07-28 20:36               ` Christoph Lameter
2006-07-28 20:47                 ` Thomas Gleixner
2006-07-28 20:48                   ` Christoph Lameter
2006-07-28 21:12                     ` Ravikiran G Thirumalai
2006-07-28 21:20                       ` Thomas Gleixner
2006-08-02 19:10                         ` Ravikiran G Thirumalai
2006-08-07  7:27                           ` Thomas Gleixner
2006-07-28 21:26                       ` Christoph Lameter
2006-07-28 21:34                         ` Alok Kataria
2006-07-29  4:26                         ` Ravikiran G Thirumalai
2006-07-28 14:53   ` Christoph Lameter
2006-07-28 17:11     ` Ravikiran G Thirumalai
2006-07-28 17:14       ` Arjan van de Ven
