From: Ingo Molnar <mingo@elte.hu>
To: Zdenek Kabelac <zdenek.kabelac@gmail.com>,
	"Rafael J. Wysocki" <rjw@sisk.pl>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Oleg Nesterov <oleg@redhat.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: INFO: possible circular locking dependency at cleanup_workqueue_thread
Date: Sun, 17 May 2009 09:18:34 +0200
Message-ID: <20090517071834.GA8507@elte.hu>
In-Reply-To: <c4e36d110905120059i4c155508oa00672da0b33a07a@mail.gmail.com>


Cc:s added. This dependency:

> -> #2 (cfg80211_mutex){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffffa007e66a>] reg_todo+0x19a/0x590 [cfg80211]
>        [<ffffffff80258f18>] worker_thread+0x1e8/0x3a0
>        [<ffffffff8025dc3a>] kthread+0x5a/0xa0
>        [<ffffffff8020d23a>] child_rip+0xa/0x20

is what turns the dependency chain upside down.

	Ingo
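
The cycle, reduced to its essentials, looks roughly like this. This is
a simplified sketch rather than the actual kernel code: the lock and
function names are taken from the traces below, and
flush_scheduled_work() stands in for the wait that
cleanup_workqueue_thread() performs on the "events" workqueue during
CPU down:

	#include <linux/mutex.h>
	#include <linux/workqueue.h>

	static DEFINE_MUTEX(cfg80211_mutex);
	static DEFINE_MUTEX(dpm_list_mtx);

	/* (1) chain #3, driver probe: cfg80211_mutex is taken
	 * before dpm_list_mtx */
	static void probe_path(void)
	{
		mutex_lock(&cfg80211_mutex);	/* wiphy_register() */
		mutex_lock(&dpm_list_mtx);	/* device_add() -> device_pm_add() */
		mutex_unlock(&dpm_list_mtx);
		mutex_unlock(&cfg80211_mutex);
	}

	/* (2) chain #2: a work item running on the "events"
	 * workqueue takes cfg80211_mutex */
	static void reg_work_fn(struct work_struct *work)
	{
		mutex_lock(&cfg80211_mutex);	/* reg_todo() */
		mutex_unlock(&cfg80211_mutex);
	}

	/* (3) chain #0, suspend: dpm_list_mtx is held while the
	 * events workqueue is drained, i.e. while waiting for
	 * reg_work_fn() to complete */
	static void suspend_path(void)
	{
		mutex_lock(&dpm_list_mtx);	/* device_pm_lock() */
		flush_scheduled_work();		/* cleanup_workqueue_thread() */
		mutex_unlock(&dpm_list_mtx);
	}

If the work item is pending when suspend_path() drains the queue, the
flush waits for a function that wants cfg80211_mutex, while probe_path()
orders cfg80211_mutex before dpm_list_mtx: a closed loop, which is
exactly the circle lockdep reports.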

* Zdenek Kabelac <zdenek.kabelac@gmail.com> wrote:

> Hi
> 
> With this kernel a4d7749be5de4a7261bcbe3c7d96c748792ec455
> 
> I've got this INFO trace during suspend:
> 
> 
> CPU 1 is now offline
> lockdep: fixing up alternatives.
> SMP alternatives: switching to UP code
> CPU0 attaching NULL sched-domain.
> CPU1 attaching NULL sched-domain.
> CPU0 attaching NULL sched-domain.
> 
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.30-rc5-00097-gd665355 #59
> -------------------------------------------------------
> pm-suspend/12129 is trying to acquire lock:
>  (events){+.+.+.}, at: [<ffffffff80259496>] cleanup_workqueue_thread+0x26/0xd0
> 
> but task is already holding lock:
>  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff80246e57>] cpu_maps_update_begin+0x17/0x20
> 
> which lock already depends on the new lock.
> 
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #5 (cpu_add_remove_lock){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff80246e57>] cpu_maps_update_begin+0x17/0x20
>        [<ffffffff80259c33>] __create_workqueue_key+0xc3/0x250
>        [<ffffffff80287b20>] stop_machine_create+0x40/0xb0
>        [<ffffffff8027a784>] sys_delete_module+0x84/0x270
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #4 (setup_lock){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff80287af7>] stop_machine_create+0x17/0xb0
>        [<ffffffff80246f06>] disable_nonboot_cpus+0x26/0x130
>        [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>        [<ffffffff8027dff7>] enter_state+0x107/0x170
>        [<ffffffff8027e0f9>] state_store+0x99/0x100
>        [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>        [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>        [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>        [<ffffffff802e2771>] sys_write+0x51/0x90
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #3 (dpm_list_mtx){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff804532ff>] device_pm_add+0x1f/0xe0
>        [<ffffffff8044b9bf>] device_add+0x45f/0x570
>        [<ffffffffa007c578>] wiphy_register+0x158/0x280 [cfg80211]
>        [<ffffffffa017567c>] ieee80211_register_hw+0xbc/0x410 [mac80211]
>        [<ffffffffa01f7c5c>] iwl3945_pci_probe+0xa1c/0x1080 [iwl3945]
>        [<ffffffff803c4307>] local_pci_probe+0x17/0x20
>        [<ffffffff803c5458>] pci_device_probe+0x88/0xb0
>        [<ffffffff8044e1e9>] driver_probe_device+0x89/0x180
>        [<ffffffff8044e37b>] __driver_attach+0x9b/0xa0
>        [<ffffffff8044d67c>] bus_for_each_dev+0x6c/0xa0
>        [<ffffffff8044e03e>] driver_attach+0x1e/0x20
>        [<ffffffff8044d955>] bus_add_driver+0xd5/0x290
>        [<ffffffff8044e668>] driver_register+0x78/0x140
>        [<ffffffff803c56f6>] __pci_register_driver+0x66/0xe0
>        [<ffffffffa00bd05c>] 0xffffffffa00bd05c
>        [<ffffffff8020904f>] do_one_initcall+0x3f/0x1c0
>        [<ffffffff8027d071>] sys_init_module+0xb1/0x200
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #2 (cfg80211_mutex){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffffa007e66a>] reg_todo+0x19a/0x590 [cfg80211]
>        [<ffffffff80258f18>] worker_thread+0x1e8/0x3a0
>        [<ffffffff8025dc3a>] kthread+0x5a/0xa0
>        [<ffffffff8020d23a>] child_rip+0xa/0x20
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #1 (reg_work){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff80258f12>] worker_thread+0x1e2/0x3a0
>        [<ffffffff8025dc3a>] kthread+0x5a/0xa0
>        [<ffffffff8020d23a>] child_rip+0xa/0x20
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #0 (events){+.+.+.}:
>        [<ffffffff80271b36>] __lock_acquire+0xd36/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff802594bd>] cleanup_workqueue_thread+0x4d/0xd0
>        [<ffffffff8053fe79>] workqueue_cpu_callback+0xc9/0x10f
>        [<ffffffff805539a8>] notifier_call_chain+0x68/0xa0
>        [<ffffffff802630d6>] raw_notifier_call_chain+0x16/0x20
>        [<ffffffff8053d9ec>] _cpu_down+0x1cc/0x2d0
>        [<ffffffff80246f90>] disable_nonboot_cpus+0xb0/0x130
>        [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>        [<ffffffff8027dff7>] enter_state+0x107/0x170
>        [<ffffffff8027e0f9>] state_store+0x99/0x100
>        [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>        [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>        [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>        [<ffffffff802e2771>] sys_write+0x51/0x90
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> other info that might help us debug this:
> 
> 4 locks held by pm-suspend/12129:
>  #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff8033f154>] sysfs_write_file+0x44/0x160
>  #1:  (pm_mutex){+.+.+.}, at: [<ffffffff8027df44>] enter_state+0x54/0x170
>  #2:  (dpm_list_mtx){+.+.+.}, at: [<ffffffff80452747>] device_pm_lock+0x17/0x20
>  #3:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff80246e57>] cpu_maps_update_begin+0x17/0x20
> 
> stack backtrace:
> Pid: 12129, comm: pm-suspend Not tainted 2.6.30-rc5-00097-gd665355 #59
> Call Trace:
>  [<ffffffff8026fa3d>] print_circular_bug_tail+0x9d/0xe0
>  [<ffffffff80271b36>] __lock_acquire+0xd36/0x10a0
>  [<ffffffff80270000>] ? mark_lock+0x3e0/0x400
>  [<ffffffff80271f38>] lock_acquire+0x98/0x140
>  [<ffffffff80259496>] ? cleanup_workqueue_thread+0x26/0xd0
>  [<ffffffff802594bd>] cleanup_workqueue_thread+0x4d/0xd0
>  [<ffffffff80259496>] ? cleanup_workqueue_thread+0x26/0xd0
>  [<ffffffff802703ed>] ? trace_hardirqs_on+0xd/0x10
>  [<ffffffff8053fe79>] workqueue_cpu_callback+0xc9/0x10f
>  [<ffffffff8054acf1>] ? cpu_callback+0x12/0x280
>  [<ffffffff805539a8>] notifier_call_chain+0x68/0xa0
>  [<ffffffff802630d6>] raw_notifier_call_chain+0x16/0x20
>  [<ffffffff8053d9ec>] _cpu_down+0x1cc/0x2d0
>  [<ffffffff80246f90>] disable_nonboot_cpus+0xb0/0x130
>  [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>  [<ffffffff8027dff7>] enter_state+0x107/0x170
>  [<ffffffff8027e0f9>] state_store+0x99/0x100
>  [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>  [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>  [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>  [<ffffffff8029088c>] ? audit_syscall_entry+0x21c/0x240
>  [<ffffffff802e2771>] sys_write+0x51/0x90
>  [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
> CPU1 is down
> Extended CMOS year: 2000
> x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
> Back to C!
