From: Peter Zijlstra <peterz@infradead.org>
To: Ming Lei <tom.leiming@gmail.com>
Cc: Johannes Berg <johannes@sipsolutions.net>,
	Ingo Molnar <mingo@elte.hu>,
	Zdenek Kabelac <zdenek.kabelac@gmail.com>,
	"Rafael J. Wysocki" <rjw@sisk.pl>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Gautham R Shenoy <ego@in.ibm.com>,
	Oleg Nesterov <onestero@redhat.com>
Subject: Re: INFO: possible circular locking dependency at cleanup_workqueue_thread
Date: Sun, 24 May 2009 20:58:46 +0200
Message-ID: <1243191526.6954.26.camel@laptop>
In-Reply-To: <c4e36d110905120059i4c155508oa00672da0b33a07a@mail.gmail.com>

Below are the original lockdep output and the one generated after
applying Ming Lei's BFS shortest-cycle patch.

It appears to find a slightly shorter variant, with setup_lock dropped
from the cycle -- though that might simply be down to a difference in
kernel setup or userland rather than the patch itself.
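
(For illustration only -- a minimal userspace sketch of the BFS idea,
nothing like the actual patch or the lockdep data structures;
graph[][], NR_CLASSES and bfs_shortest_cycle() are made-up names:)

	#include <stdio.h>
	#include <string.h>

	#define NR_CLASSES 128

	/* graph[a][b] != 0 means the dependency "a -> b" was observed */
	static int graph[NR_CLASSES][NR_CLASSES];
	static int parent[NR_CLASSES];

	/*
	 * BFS from 'target' over the recorded dependencies; the first
	 * time we reach 'source' we have, by BFS's nature, the shortest
	 * target ~> source path, i.e. the shortest cycle that a new
	 * source -> target edge would close.  Returns the cycle length
	 * in edges, or -1 if no cycle would result.
	 */
	static int bfs_shortest_cycle(int source, int target, int nr)
	{
		int queue[NR_CLASSES], head = 0, tail = 0, u, v, len;

		memset(parent, -1, sizeof(parent));
		parent[target] = target;	/* marks 'visited' too */
		queue[tail++] = target;

		while (head < tail) {
			u = queue[head++];
			if (u == source) {
				/* walk back to target, counting edges */
				for (len = 1, v = u; v != target; v = parent[v])
					len++;	/* starts at 1 for the new source -> target edge */
				return len;
			}
			for (v = 0; v < nr; v++) {
				if (graph[u][v] && parent[v] < 0) {
					parent[v] = u;
					queue[tail++] = v;
				}
			}
		}
		return -1;	/* source not reachable; no cycle closed */
	}

With the edges from the report below recorded, such a search would
return the minimal loop through events and cpu_add_remove_lock rather
than whichever path a DFS happened to walk first.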

Looking again at Oleg's example, I think this again falls short of
finding the L1-L2 inversion, simply because we establish (and therefore
find) the longer cycle first.

Because we warn at the first cycle detected, we never go on to record
the dependencies that would build a shorter one... I think?
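
(Again a toy sketch, continuing the one above -- not the real
check paths, and add_dependency() is a made-up name; the real code
shuts itself up via debug_locks_off() after the first splat. The point
is only that the check runs when an edge is *added*, and the first hit
stops all further recording, so an edge that would have closed a
shorter cycle is never seen:)

	static int debug_locks = 1;	/* stand-in for the kernel's debug_locks */

	/* called for every new lock dependency "from -> to" */
	static int add_dependency(int from, int to, int nr)
	{
		if (!debug_locks)
			return 0;	/* already warned; nothing gets recorded now */

		if (bfs_shortest_cycle(from, to, nr) >= 0) {
			printf("possible circular locking dependency: %d -> %d\n",
			       from, to);
			debug_locks = 0;	/* warn once, then give up */
			return -1;
		}

		graph[from][to] = 1;	/* only now do later searches see this edge */
		return 0;
	}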

/me goes off to try to construct a scenario disproving the above.

On Tue, 2009-05-12 at 09:59 +0200, Zdenek Kabelac wrote:
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.30-rc5-00097-gd665355 #59
> -------------------------------------------------------
> pm-suspend/12129 is trying to acquire lock:
>  (events){+.+.+.}, at: [<ffffffff80259496>] cleanup_workqueue_thread+0x26/0xd0
> 
> but task is already holding lock:
>  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff80246e57>] cpu_maps_update_begin+0x17/0x20
> 
> which lock already depends on the new lock.
> 
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #5 (cpu_add_remove_lock){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff80246e57>] cpu_maps_update_begin+0x17/0x20
>        [<ffffffff80259c33>] __create_workqueue_key+0xc3/0x250
>        [<ffffffff80287b20>] stop_machine_create+0x40/0xb0
>        [<ffffffff8027a784>] sys_delete_module+0x84/0x270
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #4 (setup_lock){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff80287af7>] stop_machine_create+0x17/0xb0
>        [<ffffffff80246f06>] disable_nonboot_cpus+0x26/0x130
>        [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>        [<ffffffff8027dff7>] enter_state+0x107/0x170
>        [<ffffffff8027e0f9>] state_store+0x99/0x100
>        [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>        [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>        [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>        [<ffffffff802e2771>] sys_write+0x51/0x90
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #3 (dpm_list_mtx){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff804532ff>] device_pm_add+0x1f/0xe0
>        [<ffffffff8044b9bf>] device_add+0x45f/0x570
>        [<ffffffffa007c578>] wiphy_register+0x158/0x280 [cfg80211]
>        [<ffffffffa017567c>] ieee80211_register_hw+0xbc/0x410 [mac80211]
>        [<ffffffffa01f7c5c>] iwl3945_pci_probe+0xa1c/0x1080 [iwl3945]
>        [<ffffffff803c4307>] local_pci_probe+0x17/0x20
>        [<ffffffff803c5458>] pci_device_probe+0x88/0xb0
>        [<ffffffff8044e1e9>] driver_probe_device+0x89/0x180
>        [<ffffffff8044e37b>] __driver_attach+0x9b/0xa0
>        [<ffffffff8044d67c>] bus_for_each_dev+0x6c/0xa0
>        [<ffffffff8044e03e>] driver_attach+0x1e/0x20
>        [<ffffffff8044d955>] bus_add_driver+0xd5/0x290
>        [<ffffffff8044e668>] driver_register+0x78/0x140
>        [<ffffffff803c56f6>] __pci_register_driver+0x66/0xe0
>        [<ffffffffa00bd05c>] 0xffffffffa00bd05c
>        [<ffffffff8020904f>] do_one_initcall+0x3f/0x1c0
>        [<ffffffff8027d071>] sys_init_module+0xb1/0x200
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #2 (cfg80211_mutex){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffffa007e66a>] reg_todo+0x19a/0x590 [cfg80211]
>        [<ffffffff80258f18>] worker_thread+0x1e8/0x3a0
>        [<ffffffff8025dc3a>] kthread+0x5a/0xa0
>        [<ffffffff8020d23a>] child_rip+0xa/0x20
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #1 (reg_work){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff80258f12>] worker_thread+0x1e2/0x3a0
>        [<ffffffff8025dc3a>] kthread+0x5a/0xa0
>        [<ffffffff8020d23a>] child_rip+0xa/0x20
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #0 (events){+.+.+.}:
>        [<ffffffff80271b36>] __lock_acquire+0xd36/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff802594bd>] cleanup_workqueue_thread+0x4d/0xd0
>        [<ffffffff8053fe79>] workqueue_cpu_callback+0xc9/0x10f
>        [<ffffffff805539a8>] notifier_call_chain+0x68/0xa0
>        [<ffffffff802630d6>] raw_notifier_call_chain+0x16/0x20
>        [<ffffffff8053d9ec>] _cpu_down+0x1cc/0x2d0
>        [<ffffffff80246f90>] disable_nonboot_cpus+0xb0/0x130
>        [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>        [<ffffffff8027dff7>] enter_state+0x107/0x170
>        [<ffffffff8027e0f9>] state_store+0x99/0x100
>        [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>        [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>        [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>        [<ffffffff802e2771>] sys_write+0x51/0x90
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> other info that might help us debug this:
> 
> 4 locks held by pm-suspend/12129:
>  #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff8033f154>] sysfs_write_file+0x44/0x160
>  #1:  (pm_mutex){+.+.+.}, at: [<ffffffff8027df44>] enter_state+0x54/0x170
>  #2:  (dpm_list_mtx){+.+.+.}, at: [<ffffffff80452747>] device_pm_lock+0x17/0x20
>  #3:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff80246e57>] cpu_maps_update_begin+0x17/0x20
> 
> stack backtrace:
> Pid: 12129, comm: pm-suspend Not tainted 2.6.30-rc5-00097-gd665355 #59
> Call Trace:
>  [<ffffffff8026fa3d>] print_circular_bug_tail+0x9d/0xe0
>  [<ffffffff80271b36>] __lock_acquire+0xd36/0x10a0
>  [<ffffffff80270000>] ? mark_lock+0x3e0/0x400
>  [<ffffffff80271f38>] lock_acquire+0x98/0x140
>  [<ffffffff80259496>] ? cleanup_workqueue_thread+0x26/0xd0
>  [<ffffffff802594bd>] cleanup_workqueue_thread+0x4d/0xd0
>  [<ffffffff80259496>] ? cleanup_workqueue_thread+0x26/0xd0
>  [<ffffffff802703ed>] ? trace_hardirqs_on+0xd/0x10
>  [<ffffffff8053fe79>] workqueue_cpu_callback+0xc9/0x10f
>  [<ffffffff8054acf1>] ? cpu_callback+0x12/0x280
>  [<ffffffff805539a8>] notifier_call_chain+0x68/0xa0
>  [<ffffffff802630d6>] raw_notifier_call_chain+0x16/0x20
>  [<ffffffff8053d9ec>] _cpu_down+0x1cc/0x2d0
>  [<ffffffff80246f90>] disable_nonboot_cpus+0xb0/0x130
>  [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>  [<ffffffff8027dff7>] enter_state+0x107/0x170
>  [<ffffffff8027e0f9>] state_store+0x99/0x100
>  [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>  [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>  [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>  [<ffffffff8029088c>] ? audit_syscall_entry+0x21c/0x240
>  [<ffffffff802e2771>] sys_write+0x51/0x90
>  [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b


=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.30-rc6-tip #9
-------------------------------------------------------
bash/6174 is trying to acquire lock:
 (events){+.+.+.}, at: [<ffffffff81059076>] cleanup_workqueue_thread+0x28/0x10a

but task is already holding lock:
 (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff81046c26>] disable_nonboot_cpus+0x38/0x128

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (cpu_add_remove_lock){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff813553fe>] __mutex_lock_common+0x5e/0x494
       [<ffffffff81355883>] mutex_lock_nested+0x19/0x1b
       [<ffffffff81046c26>] disable_nonboot_cpus+0x38/0x128
       [<ffffffff8107b38e>] suspend_devices_and_enter+0xf5/0x1f4
       [<ffffffff8107b623>] enter_state+0x168/0x1ce
       [<ffffffff8107b745>] state_store+0xbc/0xdd
       [<ffffffff811737fb>] kobj_attr_store+0x17/0x19
       [<ffffffff81136620>] sysfs_write_file+0xe9/0x11e
       [<ffffffff810e2422>] vfs_write+0xb0/0x10a
       [<ffffffff810e254a>] sys_write+0x4c/0x75
       [<ffffffff8100bdf2>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #3 (dpm_list_mtx){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff813553fe>] __mutex_lock_common+0x5e/0x494
       [<ffffffff81355883>] mutex_lock_nested+0x19/0x1b
       [<ffffffff812312fe>] device_pm_add+0x23/0xcd
       [<ffffffff8122ad87>] device_add+0x38b/0x549
       [<ffffffff8130de59>] wiphy_register+0x139/0x1ee
       [<ffffffff81317dff>] ieee80211_register_hw+0xee/0x3bf
       [<ffffffff8126163a>] iwl_setup_mac+0x8b/0xd1
       [<ffffffff8126fbcd>] iwl_pci_probe+0x7f5/0x921
       [<ffffffff81188d21>] local_pci_probe+0x17/0x1b
       [<ffffffff81059262>] do_work_for_cpu+0x18/0x2a
       [<ffffffff8105dc98>] kthread+0x5b/0x88
       [<ffffffff8100cf7a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #2 (cfg80211_mutex){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff813553fe>] __mutex_lock_common+0x5e/0x494
       [<ffffffff81355883>] mutex_lock_nested+0x19/0x1b
       [<ffffffff8130f52a>] reg_todo+0x53/0x490
       [<ffffffff81058870>] worker_thread+0x250/0x3dc
       [<ffffffff8105dc98>] kthread+0x5b/0x88
       [<ffffffff8100cf7a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #1 (reg_work){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff810587f7>] worker_thread+0x1d7/0x3dc
       [<ffffffff8105dc98>] kthread+0x5b/0x88
       [<ffffffff8100cf7a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (events){+.+.+.}:
       [<ffffffff8106f641>] __lock_acquire+0x974/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff8105909d>] cleanup_workqueue_thread+0x4f/0x10a
       [<ffffffff81343fe4>] workqueue_cpu_callback+0xc9/0x10d
       [<ffffffff81359cfd>] notifier_call_chain+0x33/0x5b
       [<ffffffff81062254>] raw_notifier_call_chain+0x14/0x16
       [<ffffffff81342624>] _cpu_down+0x283/0x2a0
       [<ffffffff81046c6b>] disable_nonboot_cpus+0x7d/0x128
       [<ffffffff8107b38e>] suspend_devices_and_enter+0xf5/0x1f4
       [<ffffffff8107b623>] enter_state+0x168/0x1ce
       [<ffffffff8107b745>] state_store+0xbc/0xdd
       [<ffffffff811737fb>] kobj_attr_store+0x17/0x19
       [<ffffffff81136620>] sysfs_write_file+0xe9/0x11e
       [<ffffffff810e2422>] vfs_write+0xb0/0x10a
       [<ffffffff810e254a>] sys_write+0x4c/0x75
       [<ffffffff8100bdf2>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

4 locks held by bash/6174:
 #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff81136574>] sysfs_write_file+0x3d/0x11e
 #1:  (pm_mutex){+.+.+.}, at: [<ffffffff8107b680>] enter_state+0x1c5/0x1ce
 #2:  (dpm_list_mtx){+.+.+.}, at: [<ffffffff8123084d>] device_pm_lock+0x17/0x19
 #3:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff81046c26>] disable_nonboot_cpus+0x38/0x128

stack backtrace:
Pid: 6174, comm: bash Not tainted 2.6.30-rc6-tip #9
Call Trace:
 [<ffffffff8106e92d>] print_circular_bug+0x1cc/0x201
 [<ffffffff8106f641>] __lock_acquire+0x974/0xc08
 [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
 [<ffffffff81059076>] ? cleanup_workqueue_thread+0x28/0x10a
 [<ffffffff8105909d>] cleanup_workqueue_thread+0x4f/0x10a
 [<ffffffff81059076>] ? cleanup_workqueue_thread+0x28/0x10a
 [<ffffffff81343fe4>] workqueue_cpu_callback+0xc9/0x10d
 [<ffffffff81359cfd>] notifier_call_chain+0x33/0x5b
 [<ffffffff81062254>] raw_notifier_call_chain+0x14/0x16
 [<ffffffff81342624>] _cpu_down+0x283/0x2a0
 [<ffffffff81046c6b>] disable_nonboot_cpus+0x7d/0x128
 [<ffffffff8107b38e>] suspend_devices_and_enter+0xf5/0x1f4
 [<ffffffff8107b623>] enter_state+0x168/0x1ce
 [<ffffffff8107b745>] state_store+0xbc/0xdd
 [<ffffffff811737fb>] kobj_attr_store+0x17/0x19
 [<ffffffff81136620>] sysfs_write_file+0xe9/0x11e
 [<ffffffff810e2422>] vfs_write+0xb0/0x10a
 [<ffffffff810e254a>] sys_write+0x4c/0x75
 [<ffffffff8100bdf2>] system_call_fastpath+0x16/0x1b



Thread overview: 43+ messages
2009-05-12  7:59 INFO: possible circular locking dependency at cleanup_workqueue_thread Zdenek Kabelac
2009-05-17  7:18 ` Ingo Molnar
2009-05-17 10:42   ` Ming Lei
2009-05-17 11:18   ` Johannes Berg
2009-05-17 13:10     ` Ingo Molnar
2009-05-18 19:47     ` Oleg Nesterov
2009-05-18 20:00       ` Peter Zijlstra
2009-05-18 20:16         ` Oleg Nesterov
2009-05-18 20:40           ` Peter Zijlstra
2009-05-18 22:14             ` Oleg Nesterov
2009-05-19  9:13               ` Peter Zijlstra
2009-05-19 10:49                 ` Peter Zijlstra
2009-05-19 14:53                   ` Oleg Nesterov
2009-05-19  8:51       ` Johannes Berg
2009-05-19 12:00         ` Oleg Nesterov
2009-05-19 15:33           ` Johannes Berg
2009-05-19 16:09             ` Oleg Nesterov
2009-05-19 16:27               ` Johannes Berg
2009-05-19 18:51                 ` Oleg Nesterov
2009-05-22 10:46                   ` Johannes Berg
2009-05-22 22:23                     ` Rafael J. Wysocki
2009-05-23  8:21                       ` Johannes Berg
2009-05-23 23:20                         ` Rafael J. Wysocki
2009-05-24  3:29                           ` Ming Lei
2009-05-24 11:09                             ` Rafael J. Wysocki
2009-05-24 12:48                               ` Ming Lei
2009-05-24 19:09                                 ` Rafael J. Wysocki
2009-05-24 14:30                           ` Alan Stern
2009-05-24 19:06                             ` Rafael J. Wysocki
2009-05-20  3:36             ` Ming Lei
2009-05-20  6:47               ` Johannes Berg
2009-05-20  7:09                 ` Ming Lei
2009-05-20  7:12                   ` Johannes Berg
2009-05-20  8:21                     ` Ming Lei
2009-05-20  8:45                       ` Johannes Berg
2009-05-22  8:03                 ` Ming Lei
2009-05-22  8:11                   ` Johannes Berg
2009-05-20 12:18   ` Peter Zijlstra
2009-05-20 13:18     ` Oleg Nesterov
2009-05-20 13:44       ` Peter Zijlstra
2009-05-20 13:55         ` Oleg Nesterov
2009-05-20 14:12           ` Peter Zijlstra
2009-05-24 18:58 ` Peter Zijlstra [this message]
