From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1758215AbYDNGyT (ORCPT ); Mon, 14 Apr 2008 02:54:19 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org id S1754630AbYDNGyK (ORCPT ); Mon, 14 Apr 2008 02:54:10 -0400
Received: from pentafluge.infradead.org ([213.146.154.40]:50266 "EHLO pentafluge.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754565AbYDNGyJ (ORCPT ); Mon, 14 Apr 2008 02:54:09 -0400
Subject: Re: 2.6.25-rc9 -- INFO: possible circular locking dependency detected
From: Peter Zijlstra
To: Miles Lane
Cc: LKML, Gautham Shenoy, "Rafael J. Wysocki", Ingo Molnar
In-Reply-To:
References:
Content-Type: text/plain
Date: Mon, 14 Apr 2008 08:54:05 +0200
Message-Id: <1208156045.7427.16.camel@twins>
Mime-Version: 1.0
X-Mailer: Evolution 2.22.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Fun. I will need to sort out this code before I can say anything about
that; perhaps Gautham and/or Rafael have ideas before I can come up with
something?

On Sun, 2008-04-13 at 23:04 -0400, Miles Lane wrote:
> [ 3217.586003] [ INFO: possible circular locking dependency detected ]
> [ 3217.586006] 2.6.25-rc9 #1
> [ 3217.586008] -------------------------------------------------------
> [ 3217.586011] pm-suspend/7421 is trying to acquire lock:
> [ 3217.586013]  (&per_cpu(cpu_policy_rwsem, cpu)){----}, at: [] lock_policy_rwsem_write+0x33/0x5a
> [ 3217.586023]
> [ 3217.586024] but task is already holding lock:
> [ 3217.586026]  (&cpu_hotplug.lock){--..}, at: [] cpu_hotplug_begin+0x2f/0x89
> [ 3217.586033]
> [ 3217.586033] which lock already depends on the new lock.
> [ 3217.586035]
> [ 3217.586036]
> [ 3217.586037] the existing dependency chain (in reverse order) is:
> [ 3217.586039]
> [ 3217.586040] -> #3 (&cpu_hotplug.lock){--..}:
> [ 3217.586044]        [] __lock_acquire+0xa02/0xbaf
> [ 3217.586052]        [] lock_acquire+0x76/0x9d
> [ 3217.586058]        [] mutex_lock_nested+0xd5/0x274
> [ 3217.586066]        [] get_online_cpus+0x2c/0x3e
> [ 3217.586072]        [] sched_getaffinity+0xe/0x4d
> [ 3217.586079]        [] __synchronize_sched+0x11/0x5f
> [ 3217.586087]        [] synchronize_srcu+0x22/0x5b
> [ 3217.586093]        [] srcu_notifier_chain_unregister+0x45/0x4c
> [ 3217.586100]        [] cpufreq_unregister_notifier+0x1f/0x2f
> [ 3217.586107]        [] cpufreq_governor_dbs+0x1e9/0x242 [cpufreq_conservative]
> [ 3217.586117]        [] __cpufreq_governor+0xb2/0xe9
> [ 3217.586124]        [] __cpufreq_set_policy+0x13f/0x1c3
> [ 3217.586130]        [] store_scaling_governor+0x150/0x17f
> [ 3217.586137]        [] store+0x42/0x5b
> [ 3217.586143]        [] sysfs_write_file+0xb8/0xe3
> [ 3217.586151]        [] vfs_write+0x8c/0x108
> [ 3217.586158]        [] sys_write+0x3b/0x60
> [ 3217.586165]        [] sysenter_past_esp+0x6d/0xc5
> [ 3217.586172]        [] 0xffffffff
> [ 3217.586184]
> [ 3217.586185] -> #2 (&sp->mutex){--..}:
> [ 3217.586188]        [] __lock_acquire+0xa02/0xbaf
> [ 3217.586195]        [] lock_acquire+0x76/0x9d
> [ 3217.586201]        [] mutex_lock_nested+0xd5/0x274
> [ 3217.586208]        [] synchronize_srcu+0x16/0x5b
> [ 3217.586214]        [] srcu_notifier_chain_unregister+0x45/0x4c
> [ 3217.586220]        [] cpufreq_unregister_notifier+0x1f/0x2f
> [ 3217.586227]        [] cpufreq_governor_dbs+0x1e9/0x242 [cpufreq_conservative]
> [ 3217.586235]        [] __cpufreq_governor+0xb2/0xe9
> [ 3217.586242]        [] __cpufreq_set_policy+0x13f/0x1c3
> [ 3217.586248]        [] store_scaling_governor+0x150/0x17f
> [ 3217.586255]        [] store+0x42/0x5b
> [ 3217.586261]        [] sysfs_write_file+0xb8/0xe3
> [ 3217.586268]        [] vfs_write+0x8c/0x108
> [ 3217.586274]        [] sys_write+0x3b/0x60
> [ 3217.586280]        [] sysenter_past_esp+0x6d/0xc5
> [ 3217.586287]        [] 0xffffffff
> [ 3217.586297]
> [ 3217.586298] -> #1 (dbs_mutex#2){--..}:
> [ 3217.586302]        [] __lock_acquire+0xa02/0xbaf
> [ 3217.586309]        [] lock_acquire+0x76/0x9d
> [ 3217.586315]        [] mutex_lock_nested+0xd5/0x274
> [ 3217.586322]        [] cpufreq_governor_dbs+0x6e/0x242 [cpufreq_conservative]
> [ 3217.586330]        [] __cpufreq_governor+0xb2/0xe9
> [ 3217.586336]        [] __cpufreq_set_policy+0x155/0x1c3
> [ 3217.586343]        [] store_scaling_governor+0x150/0x17f
> [ 3217.586349]        [] store+0x42/0x5b
> [ 3217.586355]        [] sysfs_write_file+0xb8/0xe3
> [ 3217.586362]        [] vfs_write+0x8c/0x108
> [ 3217.586369]        [] sys_write+0x3b/0x60
> [ 3217.586375]        [] sysenter_past_esp+0x6d/0xc5
> [ 3217.586381]        [] 0xffffffff
> [ 3217.586451]
> [ 3217.586452] -> #0 (&per_cpu(cpu_policy_rwsem, cpu)){----}:
> [ 3217.586456]        [] __lock_acquire+0x929/0xbaf
> [ 3217.586463]        [] lock_acquire+0x76/0x9d
> [ 3217.586469]        [] down_write+0x28/0x44
> [ 3217.586475]        [] lock_policy_rwsem_write+0x33/0x5a
> [ 3217.586482]        [] cpufreq_cpu_callback+0x43/0x66
> [ 3217.586489]        [] notifier_call_chain+0x2b/0x4a
> [ 3217.586495]        [] __raw_notifier_call_chain+0xe/0x10
> [ 3217.586501]        [] _cpu_down+0x71/0x1f8
> [ 3217.586507]        [] disable_nonboot_cpus+0x4a/0xc6
> [ 3217.586513]        [] suspend_devices_and_enter+0x6c/0x101
> [ 3217.586521]        [] enter_state+0xc4/0x119
> [ 3217.586527]        [] state_store+0x96/0xac
> [ 3217.586533]        [] kobj_attr_store+0x1a/0x22
> [ 3217.586541]        [] sysfs_write_file+0xb8/0xe3
> [ 3217.586547]        [] vfs_write+0x8c/0x108
> [ 3217.586554]        [] sys_write+0x3b/0x60
> [ 3217.586560]        [] sysenter_past_esp+0x6d/0xc5
> [ 3217.586567]        [] 0xffffffff
> [ 3217.586578]
> [ 3217.586578] other info that might help us debug this:
> [ 3217.586580]
> [ 3217.586582] 5 locks held by pm-suspend/7421:
> [ 3217.586584]  #0:  (&buffer->mutex){--..}, at: [] sysfs_write_file+0x25/0xe3
> [ 3217.586590]  #1:  (pm_mutex){--..}, at: [] enter_state+0x103/0x119
> [ 3217.586596]  #2:  (pm_sleep_rwsem){--..}, at: [] device_suspend+0x25/0x1ad
> [ 3217.586604]  #3:  (cpu_add_remove_lock){--..}, at: [] cpu_maps_update_begin+0xf/0x11
> [ 3217.586610]  #4:  (&cpu_hotplug.lock){--..}, at: [] cpu_hotplug_begin+0x2f/0x89
> [ 3217.586616]
> [ 3217.586617] stack backtrace:
> [ 3217.586620] Pid: 7421, comm: pm-suspend Not tainted 2.6.25-rc9 #1
> [ 3217.586627]  [] print_circular_bug_tail+0x5b/0x66
> [ 3217.586634]  [] ? print_circular_bug_entry+0x39/0x43
> [ 3217.586643]  [] __lock_acquire+0x929/0xbaf
> [ 3217.586656]  [] lock_acquire+0x76/0x9d
> [ 3217.586661]  [] ? lock_policy_rwsem_write+0x33/0x5a
> [ 3217.586668]  [] down_write+0x28/0x44
> [ 3217.586673]  [] ? lock_policy_rwsem_write+0x33/0x5a
> [ 3217.586678]  [] lock_policy_rwsem_write+0x33/0x5a
> [ 3217.586684]  [] cpufreq_cpu_callback+0x43/0x66
> [ 3217.586689]  [] notifier_call_chain+0x2b/0x4a
> [ 3217.586696]  [] __raw_notifier_call_chain+0xe/0x10
> [ 3217.586701]  [] _cpu_down+0x71/0x1f8
> [ 3217.586710]  [] disable_nonboot_cpus+0x4a/0xc6
> [ 3217.586716]  [] suspend_devices_and_enter+0x6c/0x101
> [ 3217.586721]  [] enter_state+0xc4/0x119
> [ 3217.586726]  [] state_store+0x96/0xac
> [ 3217.586731]  [] ? state_store+0x0/0xac
> [ 3217.586736]  [] kobj_attr_store+0x1a/0x22
> [ 3217.586742]  [] sysfs_write_file+0xb8/0xe3
> [ 3217.586750]  [] ? sysfs_write_file+0x0/0xe3
> [ 3217.586755]  [] vfs_write+0x8c/0x108
> [ 3217.586762]  [] sys_write+0x3b/0x60
> [ 3217.586769]  [] sysenter_past_esp+0x6d/0xc5
> [ 3217.586780]  =======================
> [ 3217.588064] Breaking affinity for irq 16