* unsafe locks seen with netperf on net-2.6.29 tree
@ 2008-12-25 10:25 Jeff Kirsher
2008-12-25 11:26 ` Herbert Xu
0 siblings, 1 reply; 30+ messages in thread
From: Jeff Kirsher @ 2008-12-25 10:25 UTC (permalink / raw)
To: netdev, David Miller
Cc: Emil Tantilov, Peter P Waskiewicz Jr, Alexander Duyck
When testing multiple drivers (igb and ixgbe) with netperf, running
UDP, TCP, and _RR tests over IPv4 and IPv6 with multiple threads, we
are seeing unsafe-lock messages. The test kernels have the locking
checks from the "Kernel hacking" config menu enabled. The unsafe
condition points to the netperf process, and it only started occurring
recently on Dave's net-next-2.6 tree. Any insight would be great.
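(For reference, we're running with the lock dependency checker. The
exact option set below is from memory rather than the actual .config,
but CONFIG_PROVE_LOCKING is what generates the report that follows:)

CONFIG_DEBUG_KERNEL=y
CONFIG_PROVE_LOCKING=y
# PROVE_LOCKING selects CONFIG_LOCKDEP and CONFIG_DEBUG_LOCK_ALLOC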
--- cut here ---
[ 975.131762] calling igb_init_module+0x0/0x72 [igb] @ 19212
[ 975.131914] Intel(R) Gigabit Ethernet Network Driver - version 1.2.45-k2
[ 975.131917] Copyright (c) 2008 Intel Corporation.
[ 975.132004] igb 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[ 975.132045] igb 0000:01:00.0: setting latency timer to 64
[ 975.142963] igb 0000:01:00.0: irq 1279 for MSI/MSI-X
[ 975.142972] igb 0000:01:00.0: irq 1278 for MSI/MSI-X
[ 975.142978] igb 0000:01:00.0: irq 1277 for MSI/MSI-X
[ 975.142984] igb 0000:01:00.0: irq 1276 for MSI/MSI-X
[ 975.142989] igb 0000:01:00.0: irq 1275 for MSI/MSI-X
[ 975.142995] igb 0000:01:00.0: irq 1274 for MSI/MSI-X
[ 975.143001] igb 0000:01:00.0: irq 1273 for MSI/MSI-X
[ 975.143006] igb 0000:01:00.0: irq 1272 for MSI/MSI-X
[ 975.143012] igb 0000:01:00.0: irq 1271 for MSI/MSI-X
[ 975.250556] igb 0000:01:00.0: Intel(R) Gigabit Ethernet Network Connection
[ 975.250565] igb 0000:01:00.0: eth0: (PCIe:2.5Gb/s:Width x4) 00:1b:21:2a:8c:4c
[ 975.250645] igb 0000:01:00.0: eth0: PBA No: e19418-006
[ 975.250650] igb 0000:01:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[ 975.250716] igb 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
[ 975.250768] igb 0000:01:00.1: setting latency timer to 64
[ 975.251242] igb 0000:01:00.1: irq 1270 for MSI/MSI-X
[ 975.251256] igb 0000:01:00.1: irq 1269 for MSI/MSI-X
[ 975.251267] igb 0000:01:00.1: irq 1268 for MSI/MSI-X
[ 975.251278] igb 0000:01:00.1: irq 1267 for MSI/MSI-X
[ 975.251289] igb 0000:01:00.1: irq 1266 for MSI/MSI-X
[ 975.251300] igb 0000:01:00.1: irq 1265 for MSI/MSI-X
[ 975.251310] igb 0000:01:00.1: irq 1264 for MSI/MSI-X
[ 975.251320] igb 0000:01:00.1: irq 1263 for MSI/MSI-X
[ 975.251330] igb 0000:01:00.1: irq 1262 for MSI/MSI-X
[ 975.367412] igb 0000:01:00.1: Intel(R) Gigabit Ethernet Network Connection
[ 975.367418] igb 0000:01:00.1: eth1: (PCIe:2.5Gb/s:Width x4) 00:1b:21:2a:8c:4d
[ 975.367493] igb 0000:01:00.1: eth1: PBA No: e19418-006
[ 975.367497] igb 0000:01:00.1: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[ 975.367629] initcall igb_init_module+0x0/0x72 [igb] returned 0 after 230178 usecs
[ 980.600660] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 980.600886] ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 980.603009] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 980.793018] ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 990.700023] eth0: no IPv6 routers present
[ 1439.758435]
[ 1439.758437] ======================================================
[ 1439.758724] [ INFO: soft-safe -> soft-unsafe lock order detected ]
[ 1439.758868] 2.6.28-rc8-net-next-igb #13
[ 1439.759007] ------------------------------------------------------
[ 1439.759150] netperf/22302 [HC0[0]:SC0[1]:HE1:SE0] is trying to acquire:
[ 1439.759293] (&fbc->lock){--..}, at: [<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
[ 1439.759581]
[ 1439.759582] and this task is already holding:
[ 1439.759853] (slock-AF_INET){-+..}, at: [<ffffffff804fdca6>] tcp_close+0x16c/0x2da
[ 1439.760137] which would create a new lock dependency:
[ 1439.762122] (slock-AF_INET){-+..} -> (&fbc->lock){--..}
[ 1439.762122]
[ 1439.762122] but this new dependency connects a soft-irq-safe lock:
[ 1439.762122] (slock-AF_INET){-+..}
[ 1439.762122] ... which became soft-irq-safe at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] to a soft-irq-unsafe lock:
[ 1439.762122] (&fbc->lock){--..}
[ 1439.762122] ... which became soft-irq-unsafe at:
[ 1439.762122] ... [<ffffffff80260480>] mark_irqflags+0xda/0xf2
[ 1439.762122] [<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
[ 1439.762122] [<ffffffff802a3168>] percpu_counter_add+0xe/0x10
[ 1439.762122] [<ffffffff802a31db>] percpu_counter_inc+0xe/0x10
[ 1439.762122] [<ffffffff802a364c>] get_empty_filp+0x75/0x117
[ 1439.762122] [<ffffffff802acb19>] path_lookup_open+0x2b/0x9b
[ 1439.762122] [<ffffffff802acf12>] do_filp_open+0xae/0x6b8
[ 1439.762122] [<ffffffff802a0b2f>] do_sys_open+0x4f/0x99
[ 1439.762122] [<ffffffff802a0ba2>] sys_open+0x1b/0x1d
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] other info that might help us debug this:
[ 1439.762122]
[ 1439.762122] 1 lock held by netperf/22302:
[ 1439.762122] #0: (slock-AF_INET){-+..}, at: [<ffffffff804fdca6>] tcp_close+0x16c/0x2da
[ 1439.762122]
[ 1439.762122] the soft-irq-safe lock's dependencies:
[ 1439.762122] -> (slock-AF_INET){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564481>] _spin_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffff804cb420>] lock_sock_nested+0x30/0x7f
[ 1439.762122]
[<ffffffff804fc544>] lock_sock+0xb/0xd
[ 1439.762122]
[<ffffffff804fdb54>] tcp_close+0x1a/0x2da
[ 1439.762122]
[<ffffffff80519082>] inet_release+0x58/0x5f
[ 1439.762122]
[<ffffffff804c9354>] sock_release+0x1a/0x76
[ 1439.762122]
[<ffffffff804c9804>] sock_close+0x22/0x26
[ 1439.762122]
[<ffffffff802a345e>] __fput+0x82/0x110
[ 1439.762122]
[<ffffffff802a381e>] fput+0x15/0x17
[ 1439.762122]
[<ffffffff802a09c9>] filp_close+0x67/0x72
[ 1439.762122]
[<ffffffff802a0a4f>] sys_close+0x7b/0xbe
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564481>] _spin_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffff804cb420>] lock_sock_nested+0x30/0x7f
[ 1439.762122]
[<ffffffff804fc544>] lock_sock+0xb/0xd
[ 1439.762122]
[<ffffffff804fdb54>] tcp_close+0x1a/0x2da
[ 1439.762122]
[<ffffffff80519082>] inet_release+0x58/0x5f
[ 1439.762122]
[<ffffffff804c9354>] sock_release+0x1a/0x76
[ 1439.762122]
[<ffffffff804c9804>] sock_close+0x22/0x26
[ 1439.762122]
[<ffffffff802a345e>] __fput+0x82/0x110
[ 1439.762122]
[<ffffffff802a381e>] fput+0x15/0x17
[ 1439.762122]
[<ffffffff802a09c9>] filp_close+0x67/0x72
[ 1439.762122]
[<ffffffff802a0a4f>] sys_close+0x7b/0xbe
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80eae530>]
af_family_slock_keys+0x10/0x120
[ 1439.762122] -> (&list->lock){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122]
[<ffffffff804cec52>] skb_queue_tail+0x1d/0x3f
[ 1439.762122]
[<ffffffff804ea247>] netlink_broadcast_deliver+0x44/0x6b
[ 1439.762122]
[<ffffffff804eb073>] do_one_broadcast+0xcc/0xfb
[ 1439.762122]
[<ffffffff804eb1b7>] netlink_broadcast+0x94/0xec
[ 1439.762122]
[<ffffffff8035e56b>] kobject_uevent_env+0x2c3/0x369
[ 1439.762122]
[<ffffffff8035e61c>] kobject_uevent+0xb/0xd
[ 1439.762122]
[<ffffffff803d11de>] device_del+0x15d/0x175
[ 1439.762122]
[<ffffffff803d1207>] device_unregister+0x11/0x1d
[ 1439.762122]
[<ffffffff803d1249>] device_destroy+0x36/0x3a
[ 1439.762122]
[<ffffffff803b739d>] vcs_remove_sysfs+0x23/0x42
[ 1439.762122]
[<ffffffff803bd227>] con_shutdown+0x33/0x45
[ 1439.762122]
[<ffffffff803af322>] release_one_tty+0x21/0x8c
[ 1439.762122]
[<ffffffff8035e6f6>] kref_put+0x43/0x4f
[ 1439.762122]
[<ffffffff803ace6f>] tty_kref_put+0x19/0x1b
[ 1439.762122]
[<ffffffff803ad0a0>] release_tty+0x3b/0x3f
[ 1439.762122]
[<ffffffff803af28e>] tty_release_dev+0x443/0x464
[ 1439.762122]
[<ffffffff803af2c8>] tty_release+0x19/0x25
[ 1439.762122]
[<ffffffff802a345e>] __fput+0x82/0x110
[ 1439.762122]
[<ffffffff802a381e>] fput+0x15/0x17
[ 1439.762122]
[<ffffffff802a09c9>] filp_close+0x67/0x72
[ 1439.762122]
[<ffffffff802a0a4f>] sys_close+0x7b/0xbe
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564481>] _spin_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffff80513515>] udp_poll+0x60/0xe7
[ 1439.762122]
[<ffffffff804c7f1b>] sock_poll+0x18/0x1a
[ 1439.762122]
[<ffffffff802af335>] do_pollfd+0x4a/0x7c
[ 1439.762122]
[<ffffffff802af40d>] do_poll+0xa6/0x178
[ 1439.762122]
[<ffffffff802af7ba>] do_sys_poll+0x13b/0x1ae
[ 1439.762122]
[<ffffffff802af87f>] sys_poll+0x52/0xb3
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80eae3b0>] __key.23956+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&q->lock){++..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff805644bf>] _spin_lock_irq+0x32/0x3e
[ 1439.762122]
[<ffffffff80562e1f>] wait_for_common+0x30/0x54
[ 1439.762122]
[<ffffffff80562ecd>] wait_for_completion+0x18/0x1a
[ 1439.762122]
[<ffffffff80252a90>] kthread_create+0xe3/0x154
[ 1439.762122]
[<ffffffff80560db0>] migration_call+0x44/0x343
[ 1439.762122]
[<ffffffff807fa0d4>] migration_init+0x25/0x5b
[ 1439.762122]
[<ffffffff80209191>] do_one_initcall+0x58/0x138
[ 1439.762122]
[<ffffffff807e64e6>] do_pre_smp_initcalls+0x1e/0x2b
[ 1439.762122]
[<ffffffff807e6a2b>] kernel_init+0x57/0xa7
[ 1439.762122]
[<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-hardirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff808a7998>] __key.15533+0x0/0x8
[ 1439.762122] -> (&rq->lock){++..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122]
[<ffffffff802328cf>] rq_attach_root+0x16/0xb2
[ 1439.762122]
[<ffffffff807fa472>] sched_init+0x2ad/0x364
[ 1439.762122]
[<ffffffff807e6b24>] start_kernel+0x76/0x281
[ 1439.762122]
[<ffffffff807e62c8>] x86_64_start_reservations+0x7c/0x80
[ 1439.762122]
[<ffffffff807e6375>] x86_64_start_kernel+0x6a/0x72
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-hardirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff808629f0>] __key.42413+0x0/0x8
[ 1439.762122] -> (&vec->lock){.+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122]
[<ffffffff8027664a>] cpupri_set+0xae/0xf9
[ 1439.762122]
[<ffffffff802358e6>] rq_online_rt+0x3e/0x43
[ 1439.762122]
[<ffffffff80231aa4>] set_rq_online+0x49/0x57
[ 1439.762122]
[<ffffffff80232957>] rq_attach_root+0x9e/0xb2
[ 1439.762122]
[<ffffffff807fa472>] sched_init+0x2ad/0x364
[ 1439.762122]
[<ffffffff807e6b24>] start_kernel+0x76/0x281
[ 1439.762122]
[<ffffffff807e62c8>] x86_64_start_reservations+0x7c/0x80
[ 1439.762122]
[<ffffffff807e6375>] x86_64_start_kernel+0x6a/0x72
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80e96bb0>] __key.12617+0x0/0x10
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122] [<ffffffff8027664a>] cpupri_set+0xae/0xf9
[ 1439.762122] [<ffffffff802358e6>] rq_online_rt+0x3e/0x43
[ 1439.762122] [<ffffffff80231aa4>] set_rq_online+0x49/0x57
[ 1439.762122] [<ffffffff80232957>] rq_attach_root+0x9e/0xb2
[ 1439.762122] [<ffffffff807fa472>] sched_init+0x2ad/0x364
[ 1439.762122] [<ffffffff807e6b24>] start_kernel+0x76/0x281
[ 1439.762122] [<ffffffff807e62c8>] x86_64_start_reservations+0x7c/0x80
[ 1439.762122] [<ffffffff807e6375>] x86_64_start_kernel+0x6a/0x72
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&rt_b->rt_runtime_lock){.+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff80235aa8>] start_rt_bandwidth+0x2e/0x66
[ 1439.762122]
[<ffffffff80235b79>] inc_rt_tasks+0x99/0x9e
[ 1439.762122]
[<ffffffff80235bc5>] __enqueue_rt_entity+0x47/0x4b
[ 1439.762122]
[<ffffffff80235c29>] enqueue_rt_entity+0x1e/0x22
[ 1439.762122]
[<ffffffff80235c52>] enqueue_task_rt+0x25/0x3e
[ 1439.762122]
[<ffffffff80230317>] enqueue_task+0x13/0x1e
[ 1439.762122]
[<ffffffff802303a6>] activate_task+0x22/0x2a
[ 1439.762122]
[<ffffffff80236e47>] try_to_wake_up+0x11d/0x190
[ 1439.762122]
[<ffffffff80236ee6>] wake_up_process+0x10/0x12
[ 1439.762122]
[<ffffffff80560e59>] migration_call+0xed/0x343
[ 1439.762122]
[<ffffffff807fa0f6>] migration_init+0x47/0x5b
[ 1439.762122]
[<ffffffff80209191>] do_one_initcall+0x58/0x138
[ 1439.762122]
[<ffffffff807e64e6>] do_pre_smp_initcalls+0x1e/0x2b
[ 1439.762122]
[<ffffffff807e6a2b>] kernel_init+0x57/0xa7
[ 1439.762122]
[<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff808629f8>] __key.34299+0x0/0x8
[ 1439.762122] -> (&cpu_base->lock){++..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-hardirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff808a79d0>] __key.18493+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122] [<ffffffff8025600f>] lock_hrtimer_base+0x25/0x4b
[ 1439.762122] [<ffffffff80256100>] hrtimer_start_range_ns+0x26/0xbc
[ 1439.762122] [<ffffffff80232fd6>] hrtimer_start_expires+0x18/0x1a
[ 1439.762122] [<ffffffff80235ad1>] start_rt_bandwidth+0x57/0x66
[ 1439.762122] [<ffffffff80235b79>] inc_rt_tasks+0x99/0x9e
[ 1439.762122] [<ffffffff80235bc5>] __enqueue_rt_entity+0x47/0x4b
[ 1439.762122] [<ffffffff80235c29>] enqueue_rt_entity+0x1e/0x22
[ 1439.762122] [<ffffffff80235c52>] enqueue_task_rt+0x25/0x3e
[ 1439.762122] [<ffffffff80230317>] enqueue_task+0x13/0x1e
[ 1439.762122] [<ffffffff802303a6>] activate_task+0x22/0x2a
[ 1439.762122] [<ffffffff80236e47>] try_to_wake_up+0x11d/0x190
[ 1439.762122] [<ffffffff80236ee6>] wake_up_process+0x10/0x12
[ 1439.762122] [<ffffffff80560e59>] migration_call+0xed/0x343
[ 1439.762122] [<ffffffff807fa0f6>] migration_init+0x47/0x5b
[ 1439.762122] [<ffffffff80209191>] do_one_initcall+0x58/0x138
[ 1439.762122] [<ffffffff807e64e6>] do_pre_smp_initcalls+0x1e/0x2b
[ 1439.762122] [<ffffffff807e6a2b>] kernel_init+0x57/0xa7
[ 1439.762122] [<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&rt_rq->rt_runtime_lock){++..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff8023558e>] update_curr_rt+0x8b/0xc1
[ 1439.762122]
[<ffffffff80235be6>] dequeue_task_rt+0x12/0x37
[ 1439.762122]
[<ffffffff80230379>] dequeue_task+0x57/0x62
[ 1439.762122]
[<ffffffff802303d0>] deactivate_task+0x22/0x2a
[ 1439.762122]
[<ffffffff80562f91>] schedule+0xc2/0x19e
[ 1439.762122]
[<ffffffff8023a568>] migration_thread+0xad/0x152
[ 1439.762122]
[<ffffffff80252c40>] kthread+0x49/0x79
[ 1439.762122]
[<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-hardirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80862a00>] __key.42365+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff80233fc2>] __enable_runtime+0x3a/0x7a
[ 1439.762122] [<ffffffff802358ca>] rq_online_rt+0x22/0x43
[ 1439.762122] [<ffffffff80231aa4>] set_rq_online+0x49/0x57
[ 1439.762122] [<ffffffff80560e9d>] migration_call+0x131/0x343
[ 1439.762122] [<ffffffff80566de8>] notifier_call_chain+0x33/0x5b
[ 1439.762122] [<ffffffff802570d7>] __raw_notifier_call_chain+0x9/0xb
[ 1439.762122] [<ffffffff802570e8>] raw_notifier_call_chain+0xf/0x11
[ 1439.762122] [<ffffffff805612b8>] _cpu_up+0xd8/0x111
[ 1439.762122] [<ffffffff80561373>] cpu_up+0x57/0x67
[ 1439.762122] [<ffffffff807e6658>] smp_init+0x4f/0x95
[ 1439.762122] [<ffffffff807e6a30>] kernel_init+0x5c/0xa7
[ 1439.762122] [<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff80235aa8>] start_rt_bandwidth+0x2e/0x66
[ 1439.762122] [<ffffffff80235b79>] inc_rt_tasks+0x99/0x9e
[ 1439.762122] [<ffffffff80235bc5>] __enqueue_rt_entity+0x47/0x4b
[ 1439.762122] [<ffffffff80235c29>] enqueue_rt_entity+0x1e/0x22
[ 1439.762122] [<ffffffff80235c52>] enqueue_task_rt+0x25/0x3e
[ 1439.762122] [<ffffffff80230317>] enqueue_task+0x13/0x1e
[ 1439.762122] [<ffffffff802303a6>] activate_task+0x22/0x2a
[ 1439.762122] [<ffffffff80236e47>] try_to_wake_up+0x11d/0x190
[ 1439.762122] [<ffffffff80236ee6>] wake_up_process+0x10/0x12
[ 1439.762122] [<ffffffff80560e59>] migration_call+0xed/0x343
[ 1439.762122] [<ffffffff807fa0f6>] migration_init+0x47/0x5b
[ 1439.762122] [<ffffffff80209191>] do_one_initcall+0x58/0x138
[ 1439.762122] [<ffffffff807e64e6>] do_pre_smp_initcalls+0x1e/0x2b
[ 1439.762122] [<ffffffff807e6a2b>] kernel_init+0x57/0xa7
[ 1439.762122] [<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff8023558e>] update_curr_rt+0x8b/0xc1
[ 1439.762122] [<ffffffff80235be6>] dequeue_task_rt+0x12/0x37
[ 1439.762122] [<ffffffff80230379>] dequeue_task+0x57/0x62
[ 1439.762122] [<ffffffff802303d0>] deactivate_task+0x22/0x2a
[ 1439.762122] [<ffffffff80562f91>] schedule+0xc2/0x19e
[ 1439.762122] [<ffffffff8023a568>] migration_thread+0xad/0x152
[ 1439.762122] [<ffffffff80252c40>] kthread+0x49/0x79
[ 1439.762122] [<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&rq->lock/1){.+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff8056440c>] _spin_lock_nested+0x2a/0x36
[ 1439.762122]
[<ffffffff80235fee>] double_rq_lock+0x4d/0x62
[ 1439.762122]
[<ffffffff80236103>] __migrate_task+0x65/0xed
[ 1439.762122]
[<ffffffff8023a5ae>] migration_thread+0xf3/0x152
[ 1439.762122]
[<ffffffff80252c40>] kthread+0x49/0x79
[ 1439.762122]
[<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff808629f1>] __key.42413+0x1/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff8056440c>] _spin_lock_nested+0x2a/0x36
[ 1439.762122] [<ffffffff80235fee>] double_rq_lock+0x4d/0x62
[ 1439.762122] [<ffffffff80236103>] __migrate_task+0x65/0xed
[ 1439.762122] [<ffffffff8023a5ae>] migration_thread+0xf3/0x152
[ 1439.762122] [<ffffffff80252c40>] kthread+0x49/0x79
[ 1439.762122] [<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff80232c24>] task_rq_lock+0x45/0x7a
[ 1439.762122] [<ffffffff80236db6>] try_to_wake_up+0x8c/0x190
[ 1439.762122] [<ffffffff80236ec7>] default_wake_function+0xd/0xf
[ 1439.762122] [<ffffffff8023084a>] __wake_up_common+0x41/0x75
[ 1439.762122] [<ffffffff802329ef>] complete+0x38/0x4c
[ 1439.762122] [<ffffffff80252c1e>] kthread+0x27/0x79
[ 1439.762122] [<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&ep->lock){....} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122]
[<ffffffff802cb47f>] ep_insert+0x132/0x231
[ 1439.762122]
[<ffffffff802cb677>] sys_epoll_ctl+0xf9/0x15f
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80e99bd0>] __key.20932+0x0/0x10
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122] [<ffffffff802cb9bf>] ep_poll_callback+0x1d/0xe1
[ 1439.762122] [<ffffffff8023084a>] __wake_up_common+0x41/0x75
[ 1439.762122] [<ffffffff80232a3d>] __wake_up_sync+0x3a/0x4e
[ 1439.762122] [<ffffffff804cca48>] sock_def_readable+0x3e/0x5d
[ 1439.762122] [<ffffffff8052601d>] unix_stream_sendmsg+0x1e3/0x2a4
[ 1439.762122] [<ffffffff804c7d98>] __sock_sendmsg+0x28/0x2a
[ 1439.762122] [<ffffffff804c7eff>] do_sock_write+0x7f/0x83
[ 1439.762122] [<ffffffff804c830a>] sock_aio_write+0x53/0x67
[ 1439.762122] [<ffffffff802a23b6>] do_sync_write+0xe2/0x126
[ 1439.762122] [<ffffffff802a2ad8>] vfs_write+0xba/0xe1
[ 1439.762122] [<ffffffff802a2bb9>] sys_write+0x47/0x6c
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&list->lock#2){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80eaf634>] __key.19759+0x0/0xc
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&hashinfo->ehash_locks[i]){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff804f93a9>] __inet_hash_nolisten+0x62/0x93
[ 1439.762122]
[<ffffffff804f975b>] __inet_hash_connect+0x1a3/0x253
[ 1439.762122]
[<ffffffff804f983b>] inet_hash_connect+0x30/0x35
[ 1439.762122]
[<ffffffff8050d083>] tcp_v4_connect+0x244/0x333
[ 1439.762122]
[<ffffffff80518ade>] inet_stream_connect+0x9a/0x183
[ 1439.762122]
[<ffffffff804c89ba>] sys_connect+0x68/0x8b
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff804f93a9>] __inet_hash_nolisten+0x62/0x93
[ 1439.762122]
[<ffffffff804f975b>] __inet_hash_connect+0x1a3/0x253
[ 1439.762122]
[<ffffffff804f983b>] inet_hash_connect+0x30/0x35
[ 1439.762122]
[<ffffffff8050d083>] tcp_v4_connect+0x244/0x333
[ 1439.762122]
[<ffffffff80518ade>] inet_stream_connect+0x9a/0x183
[ 1439.762122]
[<ffffffff804c89ba>] sys_connect+0x68/0x8b
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80eb03f8>] __key.35382+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&tcp_hashinfo.bhash[i].lock){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff804fb6d2>] inet_csk_get_port+0xdc/0x1f3
[ 1439.762122]
[<ffffffff80519456>] inet_bind+0x110/0x1a5
[ 1439.762122]
[<ffffffff804c8a9c>] sys_bind+0x60/0x83
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff804fb6d2>] inet_csk_get_port+0xdc/0x1f3
[ 1439.762122]
[<ffffffff80519456>] inet_bind+0x110/0x1a5
[ 1439.762122]
[<ffffffff804c8a9c>] sys_bind+0x60/0x83
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80eb03f0>] __key.42004+0x0/0x8
[ 1439.762122] -> (&parent->list_lock){++..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff8029f21d>] cache_alloc_refill+0x6d/0x216
[ 1439.762122]
[<ffffffff8029f41a>] ____cache_alloc+0x54/0x59
[ 1439.762122]
[<ffffffff8029f591>] kmem_cache_alloc+0x33/0x8b
[ 1439.762122]
[<ffffffff8029f5f8>] kmem_cache_zalloc+0xf/0x11
[ 1439.762122]
[<ffffffff802a020c>] kmem_cache_create+0x237/0x465
[ 1439.762122]
[<ffffffff807fe580>] kmem_cache_init+0x1ae/0x403
[ 1439.762122]
[<ffffffff807e6cb5>] start_kernel+0x207/0x281
[ 1439.762122]
[<ffffffff807e62c8>] x86_64_start_reservations+0x7c/0x80
[ 1439.762122]
[<ffffffff807e6375>] x86_64_start_kernel+0x6a/0x72
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-hardirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80e98e40>] __key.22137+0x0/0x8
[ 1439.762122] -> (&zone->lock){.+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff8027bd89>] free_pages_bulk+0x2c/0x96
[ 1439.762122]
[<ffffffff8027c206>] free_hot_cold_page+0x105/0x131
[ 1439.762122]
[<ffffffff8027c286>] free_hot_page+0xb/0xd
[ 1439.762122]
[<ffffffff8027c375>] __free_pages+0x18/0x21
[ 1439.762122]
[<ffffffff80811afb>] __free_pages_bootmem+0x56/0x58
[ 1439.762122]
[<ffffffff807fc8da>] free_all_bootmem_core+0xf2/0x1c2
[ 1439.762122]
[<ffffffff807fc9ba>] free_all_bootmem+0x10/0x12
[ 1439.762122]
[<ffffffff807f8e78>] mem_init+0x37/0x161
[ 1439.762122]
[<ffffffff807e6cab>] start_kernel+0x1fd/0x281
[ 1439.762122]
[<ffffffff807e62c8>] x86_64_start_reservations+0x7c/0x80
[ 1439.762122]
[<ffffffff807e6375>] x86_64_start_kernel+0x6a/0x72
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80e96c0c>] __key.28439+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&on_slab_l3_key){++..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff8029f21d>] cache_alloc_refill+0x6d/0x216
[ 1439.762122]
[<ffffffff8029f41a>] ____cache_alloc+0x54/0x59
[ 1439.762122]
[<ffffffff8029f466>] __kmalloc+0x47/0xa0
[ 1439.762122]
[<ffffffff8029f686>] kmalloc+0x9/0xb
[ 1439.762122]
[<ffffffff8029f6a2>] kmalloc_node+0x9/0xb
[ 1439.762122]
[<ffffffff8029f79e>] alloc_arraycache+0x2c/0x6c
[ 1439.762122]
[<ffffffff8029f9a2>] do_tune_cpucache+0x4c/0x151
[ 1439.762122]
[<ffffffff8029fc64>] enable_cpucache+0x68/0xb2
[ 1439.762122]
[<ffffffff8055140d>] setup_cpu_cache+0x1e/0x15e
[ 1439.762122]
[<ffffffff802a03a4>] kmem_cache_create+0x3cf/0x465
[ 1439.762122]
[<ffffffff80800564>] idr_init_cache+0x23/0x2c
[ 1439.762122]
[<ffffffff807e6cba>] start_kernel+0x20c/0x281
[ 1439.762122]
[<ffffffff807e62c8>] x86_64_start_reservations+0x7c/0x80
[ 1439.762122]
[<ffffffff807e6375>] x86_64_start_kernel+0x6a/0x72
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-hardirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80e98e64>] on_slab_l3_key+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff8027bd29>] free_one_page+0x23/0x57
[ 1439.762122] [<ffffffff8027c33b>] __free_pages_ok+0xb3/0xd5
[ 1439.762122] [<ffffffff8027c37c>] __free_pages+0x1f/0x21
[ 1439.762122] [<ffffffff8027c3b0>] free_pages+0x32/0x37
[ 1439.762122] [<ffffffff8029df15>] kmem_freepages+0xcd/0xda
[ 1439.762122] [<ffffffff8029eb26>] slab_destroy+0x4e/0x6f
[ 1439.762122] [<ffffffff8029ec16>] free_block+0xcf/0x122
[ 1439.762122] [<ffffffff8029e8c7>] cache_flusharray+0x83/0xec
[ 1439.762122] [<ffffffff8029e991>] __cache_free+0x61/0x7b
[ 1439.762122] [<ffffffff8029e9f7>] kfree+0x4c/0x70
[ 1439.762122] [<ffffffff804cf335>] skb_release_data+0xa8/0xad
[ 1439.762122] [<ffffffff804cfa1c>] skb_release_all+0x19/0x1d
[ 1439.762122] [<ffffffff804cf1c9>] __kfree_skb+0x11/0x1e
[ 1439.762122] [<ffffffff804cf1fd>] kfree_skb+0x27/0x29
[ 1439.762122] [<ffffffff804d1b2d>] skb_free_datagram+0x14/0x20
[ 1439.762122] [<ffffffff804ec492>] netlink_recvmsg+0x13e/0x1a4
[ 1439.762122] [<ffffffff804c7dc6>] __sock_recvmsg+0x2c/0x2e
[ 1439.762122] [<ffffffff804c8e2a>] sock_recvmsg+0xca/0xe3
[ 1439.762122] [<ffffffff804c9c11>] sys_recvfrom+0xa4/0xf2
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff8029e882>] cache_flusharray+0x3e/0xec
[ 1439.762122] [<ffffffff8029e991>] __cache_free+0x61/0x7b
[ 1439.762122] [<ffffffff8029ea52>] kmem_cache_free+0x37/0x5b
[ 1439.762122] [<ffffffff8029eb3f>] slab_destroy+0x67/0x6f
[ 1439.762122] [<ffffffff8029ec16>] free_block+0xcf/0x122
[ 1439.762122] [<ffffffff8029e8c7>] cache_flusharray+0x83/0xec
[ 1439.762122] [<ffffffff8029e991>] __cache_free+0x61/0x7b
[ 1439.762122] [<ffffffff8029e9f7>] kfree+0x4c/0x70
[ 1439.762122] [<ffffffff804cf335>] skb_release_data+0xa8/0xad
[ 1439.762122] [<ffffffff804cfa1c>] skb_release_all+0x19/0x1d
[ 1439.762122] [<ffffffff804cf1c9>] __kfree_skb+0x11/0x1e
[ 1439.762122] [<ffffffff804fc64e>] sk_eat_skb+0x30/0x49
[ 1439.762122] [<ffffffff804ff339>] tcp_recvmsg+0x644/0x83b
[ 1439.762122] [<ffffffff804ca628>] sock_common_recvmsg+0x32/0x47
[ 1439.762122] [<ffffffff804c7dc6>] __sock_recvmsg+0x2c/0x2e
[ 1439.762122] [<ffffffff804c7e7c>] do_sock_read+0x77/0x7b
[ 1439.762122] [<ffffffff804c837d>] sock_aio_read+0x5f/0x73
[ 1439.762122] [<ffffffff802a24dc>] do_sync_read+0xe2/0x126
[ 1439.762122] [<ffffffff802a2c95>] vfs_read+0xb7/0xde
[ 1439.762122] [<ffffffff802a2ed6>] sys_read+0x47/0x6b
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff8029f21d>] cache_alloc_refill+0x6d/0x216
[ 1439.762122] [<ffffffff8029f41a>] ____cache_alloc+0x54/0x59
[ 1439.762122] [<ffffffff8029f591>] kmem_cache_alloc+0x33/0x8b
[ 1439.762122] [<ffffffff804f9586>] inet_bind_bucket_create+0x1d/0x4f
[ 1439.762122] [<ffffffff804fb742>] inet_csk_get_port+0x14c/0x1f3
[ 1439.762122] [<ffffffff80519456>] inet_bind+0x110/0x1a5
[ 1439.762122] [<ffffffff804c8a9c>] sys_bind+0x60/0x83
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff804f93a9>] __inet_hash_nolisten+0x62/0x93
[ 1439.762122] [<ffffffff804f975b>] __inet_hash_connect+0x1a3/0x253
[ 1439.762122] [<ffffffff804f983b>] inet_hash_connect+0x30/0x35
[ 1439.762122] [<ffffffff8050d083>] tcp_v4_connect+0x244/0x333
[ 1439.762122] [<ffffffff80518ade>] inet_stream_connect+0x9a/0x183
[ 1439.762122] [<ffffffff804c89ba>] sys_connect+0x68/0x8b
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&queue->syn_wait_lock){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff8056457d>] _write_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffff804cd1f7>] reqsk_queue_alloc+0xc1/0xda
[ 1439.762122]
[<ffffffff804fb446>] inet_csk_listen_start+0x1f/0xa3
[ 1439.762122]
[<ffffffff80518fb4>] inet_listen+0x5b/0x85
[ 1439.762122]
[<ffffffff804c8a21>] sys_listen+0x44/0x5f
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff8056457d>] _write_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffff804cd1f7>] reqsk_queue_alloc+0xc1/0xda
[ 1439.762122]
[<ffffffff804fb446>] inet_csk_listen_start+0x1f/0xa3
[ 1439.762122]
[<ffffffff80518fb4>] inet_listen+0x5b/0x85
[ 1439.762122]
[<ffffffff804c8a21>] sys_listen+0x44/0x5f
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80eae768>] __key.31295+0x0/0x18
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&base->lock){++..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122]
[<ffffffff8024837e>] lock_timer_base+0x26/0x4a
[ 1439.762122]
[<ffffffff80248449>] __mod_timer+0x2e/0xb8
[ 1439.762122]
[<ffffffff80248772>] mod_timer+0x25/0x27
[ 1439.762122]
[<ffffffff80804101>] con_init+0xbd/0x233
[ 1439.762122]
[<ffffffff808037b9>] console_init+0x19/0x2a
[ 1439.762122]
[<ffffffff807e6c05>] start_kernel+0x157/0x281
[ 1439.762122]
[<ffffffff807e62c8>] x86_64_start_reservations+0x7c/0x80
[ 1439.762122]
[<ffffffff807e6375>] x86_64_start_kernel+0x6a/0x72
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-hardirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff808a7258>] __key.21110+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&rt_hash_locks[i]){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564481>] _spin_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffff804ef842>] rt_intern_hash+0x83/0x354
[ 1439.762122]
[<ffffffff804efb5a>] ip_mkroute_output+0x47/0x50
[ 1439.762122]
[<ffffffff804efec4>] ip_route_output_slow+0x361/0x3a2
[ 1439.762122]
[<ffffffff804f008f>] __ip_route_output_key+0x18a/0x196
[ 1439.762122]
[<ffffffff804f00ad>] ip_route_output_flow+0x12/0x4b
[ 1439.762122]
[<ffffffff80512ab0>] udp_sendmsg+0x2ff/0x58e
[ 1439.762122]
[<ffffffff8051925a>] inet_sendmsg+0x46/0x53
[ 1439.762122]
[<ffffffff804c7d98>] __sock_sendmsg+0x28/0x2a
[ 1439.762122]
[<ffffffff804c8f0a>] sock_sendmsg+0xc7/0xe0
[ 1439.762122]
[<ffffffff804c9252>] sys_sendto+0xdf/0x104
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564481>] _spin_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffff804ef842>] rt_intern_hash+0x83/0x354
[ 1439.762122]
[<ffffffff804efb5a>] ip_mkroute_output+0x47/0x50
[ 1439.762122]
[<ffffffff804efec4>] ip_route_output_slow+0x361/0x3a2
[ 1439.762122]
[<ffffffff804f008f>] __ip_route_output_key+0x18a/0x196
[ 1439.762122]
[<ffffffff804f00ad>] ip_route_output_flow+0x12/0x4b
[ 1439.762122]
[<ffffffff80512ab0>] udp_sendmsg+0x2ff/0x58e
[ 1439.762122]
[<ffffffff8051925a>] inet_sendmsg+0x46/0x53
[ 1439.762122]
[<ffffffff804c7d98>] __sock_sendmsg+0x28/0x2a
[ 1439.762122]
[<ffffffff804c8f0a>] sock_sendmsg+0xc7/0xe0
[ 1439.762122]
[<ffffffff804c9252>] sys_sendto+0xdf/0x104
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80eaf7d4>] __key.40605+0x0/0x8
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff8029f21d>] cache_alloc_refill+0x6d/0x216
[ 1439.762122] [<ffffffff8029f41a>] ____cache_alloc+0x54/0x59
[ 1439.762122] [<ffffffff8029f591>] kmem_cache_alloc+0x33/0x8b
[ 1439.762122] [<ffffffff804dc8c6>] kmem_cache_zalloc+0xf/0x11
[ 1439.762122] [<ffffffff804de04a>] neigh_alloc+0x84/0x13b
[ 1439.762122] [<ffffffff804de12c>] neigh_create+0x2b/0x1a4
[ 1439.762122] [<ffffffff80513a8f>] __neigh_lookup_errno+0x2e/0x36
[ 1439.762122] [<ffffffff80513b3a>] arp_bind_neighbour+0x4e/0x67
[ 1439.762122] [<ffffffff804ef99b>] rt_intern_hash+0x1dc/0x354
[ 1439.762122] [<ffffffff804efb5a>] ip_mkroute_output+0x47/0x50
[ 1439.762122] [<ffffffff804efec4>] ip_route_output_slow+0x361/0x3a2
[ 1439.762122] [<ffffffff804f008f>] __ip_route_output_key+0x18a/0x196
[ 1439.762122] [<ffffffff804f00ad>] ip_route_output_flow+0x12/0x4b
[ 1439.762122] [<ffffffff80512ab0>] udp_sendmsg+0x2ff/0x58e
[ 1439.762122] [<ffffffff8051925a>] inet_sendmsg+0x46/0x53
[ 1439.762122] [<ffffffff804c7d98>] __sock_sendmsg+0x28/0x2a
[ 1439.762122] [<ffffffff804c8f0a>] sock_sendmsg+0xc7/0xe0
[ 1439.762122] [<ffffffff804c9252>] sys_sendto+0xdf/0x104
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (rt_peer_lock){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff8078bbb8>]
rt_peer_lock.41458+0x18/0x40
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (inet_peer_idlock){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff8078c158>]
inet_peer_idlock+0x18/0x40
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (inet_peer_unused_lock){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff8078c198>]
inet_peer_unused_lock+0x18/0x30
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] -> (&pool->sp_lock){-+..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564481>] _spin_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffffa0095f74>] svc_prepare_thread+0x59/0xf6 [sunrpc]
[ 1439.762122]
[<ffffffffa00c8f75>] lockd_up+0xa0/0x16f [lockd]
[ 1439.762122]
[<ffffffffa00de359>] nfsd_init_socks+0x3c/0x71 [nfsd]
[ 1439.762122]
[<ffffffffa00de60f>] nfsd_svc+0x7a/0xb6 [nfsd]
[ 1439.762122]
[<ffffffffa00df06c>] write_svc+0x1f/0x23 [nfsd]
[ 1439.762122]
[<ffffffffa00df541>] nfsctl_transaction_write+0x4b/0x6f [nfsd]
[ 1439.762122]
[<ffffffff802d6e85>] sys_nfsservctl+0x9d/0xec
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] in-softirq-W at:
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564481>] _spin_lock_bh+0x31/0x3d
[ 1439.762122]
[<ffffffffa0095f74>] svc_prepare_thread+0x59/0xf6 [sunrpc]
[ 1439.762122]
[<ffffffffa00c8f75>] lockd_up+0xa0/0x16f [lockd]
[ 1439.762122]
[<ffffffffa00de359>] nfsd_init_socks+0x3c/0x71 [nfsd]
[ 1439.762122]
[<ffffffffa00de60f>] nfsd_svc+0x7a/0xb6 [nfsd]
[ 1439.762122]
[<ffffffffa00df06c>] write_svc+0x1f/0x23 [nfsd]
[ 1439.762122]
[<ffffffffa00df541>] nfsctl_transaction_write+0x4b/0x6f [nfsd]
[ 1439.762122]
[<ffffffff802d6e85>] sys_nfsservctl+0x9d/0xec
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffffa00b89a0>]
__key.23540+0x0/0xfffffffffffe8f33 [sunrpc]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564502>] _spin_lock_irqsave+0x37/0x49
[ 1439.762122] [<ffffffff80253015>] add_wait_queue+0x15/0x44
[ 1439.762122] [<ffffffffa00a0c2d>] svc_recv+0x2d8/0x584 [sunrpc]
[ 1439.762122] [<ffffffffa00c8e0b>] lockd+0xc7/0x191 [lockd]
[ 1439.762122] [<ffffffff80252c40>] kthread+0x49/0x79
[ 1439.762122] [<ffffffff8020cae9>] child_rip+0xa/0x11
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122] ... acquired at:
[ 1439.762122] [<ffffffff80260e80>] check_prev_add+0x118/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff8027bd89>] free_pages_bulk+0x2c/0x96
[ 1439.762122] [<ffffffff8027c206>] free_hot_cold_page+0x105/0x131
[ 1439.762122] [<ffffffff8027c286>] free_hot_page+0xb/0xd
[ 1439.762122] [<ffffffff8027c375>] __free_pages+0x18/0x21
[ 1439.762122] [<ffffffff8050cd55>] tcp_v4_destroy_sock+0x6a/0x85
[ 1439.762122] [<ffffffff804fad4f>] inet_csk_destroy_sock+0x8a/0xae
[ 1439.762122] [<ffffffff804fddf6>] tcp_close+0x2bc/0x2da
[ 1439.762122] [<ffffffff80519082>] inet_release+0x58/0x5f
[ 1439.762122] [<ffffffff804c9354>] sock_release+0x1a/0x76
[ 1439.762122] [<ffffffff804c9804>] sock_close+0x22/0x26
[ 1439.762122] [<ffffffff802a345e>] __fput+0x82/0x110
[ 1439.762122] [<ffffffff802a381e>] fput+0x15/0x17
[ 1439.762122] [<ffffffff802a09c9>] filp_close+0x67/0x72
[ 1439.762122] [<ffffffff80240ae3>] close_files+0x66/0x8d
[ 1439.762122] [<ffffffff80240b39>] put_files_struct+0x19/0x42
[ 1439.762122] [<ffffffff80240b98>] exit_files+0x36/0x3b
[ 1439.762122] [<ffffffff80241eec>] do_exit+0x1b7/0x2b1
[ 1439.762122] [<ffffffff80242087>] sys_exit_group+0x0/0x14
[ 1439.762122] [<ffffffff80242099>] sys_exit_group+0x12/0x14
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122] [<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122]
[ 1439.762122]
[ 1439.762122] the soft-irq-unsafe lock's dependencies:
[ 1439.762122] -> (&fbc->lock){--..} ops: 0 {
[ 1439.762122] initial-use at:
[ 1439.762122]
[<ffffffff80261243>] __lock_acquire+0x1d8/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
[ 1439.762122]
[<ffffffff802a3168>] percpu_counter_add+0xe/0x10
[ 1439.762122]
[<ffffffff802a31db>] percpu_counter_inc+0xe/0x10
[ 1439.762122]
[<ffffffff802a364c>] get_empty_filp+0x75/0x117
[ 1439.762122]
[<ffffffff802acb19>] path_lookup_open+0x2b/0x9b
[ 1439.762122]
[<ffffffff802acf12>] do_filp_open+0xae/0x6b8
[ 1439.762122]
[<ffffffff802a0b2f>] do_sys_open+0x4f/0x99
[ 1439.762122]
[<ffffffff802a0ba2>] sys_open+0x1b/0x1d
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] softirq-on-W at:
[ 1439.762122]
[<ffffffff80260480>] mark_irqflags+0xda/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
[ 1439.762122]
[<ffffffff802a3168>] percpu_counter_add+0xe/0x10
[ 1439.762122]
[<ffffffff802a31db>] percpu_counter_inc+0xe/0x10
[ 1439.762122]
[<ffffffff802a364c>] get_empty_filp+0x75/0x117
[ 1439.762122]
[<ffffffff802acb19>] path_lookup_open+0x2b/0x9b
[ 1439.762122]
[<ffffffff802acf12>] do_filp_open+0xae/0x6b8
[ 1439.762122]
[<ffffffff802a0b2f>] do_sys_open+0x4f/0x99
[ 1439.762122]
[<ffffffff802a0ba2>] sys_open+0x1b/0x1d
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] hardirq-on-W at:
[ 1439.762122]
[<ffffffff80260463>] mark_irqflags+0xbd/0xf2
[ 1439.762122]
[<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 1439.762122]
[<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122]
[<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122]
[<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
[ 1439.762122]
[<ffffffff802a3168>] percpu_counter_add+0xe/0x10
[ 1439.762122]
[<ffffffff802a31db>] percpu_counter_inc+0xe/0x10
[ 1439.762122]
[<ffffffff802a364c>] get_empty_filp+0x75/0x117
[ 1439.762122]
[<ffffffff802acb19>] path_lookup_open+0x2b/0x9b
[ 1439.762122]
[<ffffffff802acf12>] do_filp_open+0xae/0x6b8
[ 1439.762122]
[<ffffffff802a0b2f>] do_sys_open+0x4f/0x99
[ 1439.762122]
[<ffffffff802a0ba2>] sys_open+0x1b/0x1d
[ 1439.762122]
[<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
[ 1439.762122]
[<ffffffffffffffff>] 0xffffffffffffffff
[ 1439.762122] }
[ 1439.762122] ... key at: [<ffffffff80e9d78c>] __key.10254+0x0/0x8
[ 1439.762122]
[ 1439.762122] stack backtrace:
[ 1439.762122] Pid: 22302, comm: netperf Not tainted 2.6.28-rc8-net-next-igb #13
[ 1439.762122] Call Trace:
[ 1439.762122] [<ffffffff802609fb>] print_bad_irq_dependency+0x250/0x261
[ 1439.762122] [<ffffffff80260a82>] check_usage+0x76/0x83
[ 1439.762122] [<ffffffff80260b04>] check_prev_add_irq+0x75/0xae
[ 1439.762122] [<ffffffff80260dc1>] check_prev_add+0x59/0x182
[ 1439.762122] [<ffffffff80260f5d>] check_prevs_add+0x73/0xe5
[ 1439.762122] [<ffffffff80261042>] validate_chain+0x73/0x9c
[ 1439.762122] [<ffffffff802612f0>] __lock_acquire+0x285/0x2ee
[ 1439.762122] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 1439.762122] [<ffffffff803691a6>] ? __percpu_counter_add+0x4a/0x6d
[ 1439.762122] [<ffffffff80564444>] _spin_lock+0x2c/0x38
[ 1439.762122] [<ffffffff803691a6>] ? __percpu_counter_add+0x4a/0x6d
[ 1439.762122] [<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
[ 1439.762122] [<ffffffff8050b196>] percpu_counter_add+0xe/0x10
[ 1439.762122] [<ffffffff8050b1a5>] percpu_counter_dec+0xd/0xf
[ 1439.762122] [<ffffffff8050cd6c>] tcp_v4_destroy_sock+0x81/0x85
[ 1439.762122] [<ffffffff804fad4f>] inet_csk_destroy_sock+0x8a/0xae
[ 1439.762122] [<ffffffff804fdca6>] ? tcp_close+0x16c/0x2da
[ 1439.762122] [<ffffffff804fddf6>] tcp_close+0x2bc/0x2da
[ 1439.762122] [<ffffffff80519082>] inet_release+0x58/0x5f
[ 1439.762122] [<ffffffff804c9354>] sock_release+0x1a/0x76
[ 1439.762122] [<ffffffff804c9804>] sock_close+0x22/0x26
[ 1439.762122] [<ffffffff802a345e>] __fput+0x82/0x110
[ 1439.762122] [<ffffffff802a381e>] fput+0x15/0x17
[ 1439.762122] [<ffffffff802a09c9>] filp_close+0x67/0x72
[ 1439.762122] [<ffffffff802a0a4f>] sys_close+0x7b/0xbe
[ 1439.762122] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
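The acquisition that trips the detector is the plain spin_lock()
inside __percpu_counter_add(), reached above via tcp_close ->
tcp_v4_destroy_sock -> percpu_counter_dec. A simplified sketch of
that function, reconstructed from memory of the 2.6.28-era
lib/percpu_counter.c (for illustration, not quoted verbatim from
the tree):

#include <linux/percpu_counter.h>

void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
{
        s64 count;
        s32 *pcount;
        int cpu = get_cpu();

        pcount = per_cpu_ptr(fbc->counters, cpu);
        count = *pcount + amount;
        if (count >= batch || count <= -batch) {
                /* Plain spin_lock(), no _bh/_irqsave variant: lockdep
                 * therefore classes fbc->lock as soft-irq-unsafe, and
                 * taking it while holding the soft-irq-safe
                 * slock-AF_INET class is what it reports here. */
                spin_lock(&fbc->lock);
                fbc->count += count;
                *pcount = 0;
                spin_unlock(&fbc->lock);
        } else {
                *pcount = count;
        }
        put_cpu();
}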
--
Cheers,
Jeff
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-25 10:25 unsafe locks seen with netperf on net-2.6.29 tree Jeff Kirsher
@ 2008-12-25 11:26 ` Herbert Xu
2008-12-26 14:08 ` Peter Zijlstra
2008-12-29 9:57 ` Ingo Molnar
0 siblings, 2 replies; 30+ messages in thread
From: Herbert Xu @ 2008-12-25 11:26 UTC (permalink / raw)
To: Jeff Kirsher
Cc: netdev, David Miller, Emil Tantilov, Peter P Waskiewicz Jr,
Alexander Duyck, Peter Zijlstra, Ingo Molnar
On Thu, Dec 25, 2008 at 10:25:44AM +0000, Jeff Kirsher wrote:
>
> [ 1439.758437] ======================================================
> [ 1439.758724] [ INFO: soft-safe -> soft-unsafe lock order detected ]
> [ 1439.758868] 2.6.28-rc8-net-next-igb #13
> [ 1439.759007] ------------------------------------------------------
> [ 1439.759150] netperf/22302 [HC0[0]:SC0[1]:HE1:SE0] is trying to acquire:
> [ 1439.759293] (&fbc->lock){--..}, at: [<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
> [ 1439.759581]
> [ 1439.759582] and this task is already holding:
> [ 1439.759853] (slock-AF_INET){-+..}, at: [<ffffffff804fdca6>] tcp_close+0x16c/0x2da
> [ 1439.760137] which would create a new lock dependency:
> [ 1439.762122] (slock-AF_INET){-+..} -> (&fbc->lock){--..}
This is a false positive. The lock slock is not a normal lock.
It's an ancient creature that's a spinlock in interrupt context
and a semaphore in process context.
In particular, holding slock in process context does not disable
softirqs and you're still allowed to take the spinlock portion of
slock on the same CPU through an interrupt. What happens is that
the softirq will notice that the slock is already taken by process
context, and defer the work for later.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-25 11:26 ` Herbert Xu
@ 2008-12-26 14:08 ` Peter Zijlstra
2008-12-27 19:38 ` Tantilov, Emil S
2008-12-29 9:57 ` Ingo Molnar
1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2008-12-26 14:08 UTC (permalink / raw)
To: Herbert Xu
Cc: Jeff Kirsher, netdev, David Miller, Emil Tantilov,
Peter P Waskiewicz Jr, Alexander Duyck, Ingo Molnar, Eric Dumazet
On Thu, 2008-12-25 at 22:26 +1100, Herbert Xu wrote:
> On Thu, Dec 25, 2008 at 10:25:44AM +0000, Jeff Kirsher wrote:
> >
> > [ 1439.758437] ======================================================
> > [ 1439.758724] [ INFO: soft-safe -> soft-unsafe lock order detected ]
> > [ 1439.758868] 2.6.28-rc8-net-next-igb #13
> > [ 1439.759007] ------------------------------------------------------
> > [ 1439.759150] netperf/22302 [HC0[0]:SC0[1]:HE1:SE0] is trying to acquire:
> > [ 1439.759293] (&fbc->lock){--..}, at: [<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
> > [ 1439.759581]
> > [ 1439.759582] and this task is already holding:
> > [ 1439.759853] (slock-AF_INET){-+..}, at: [<ffffffff804fdca6>] tcp_close+0x16c/0x2da
> > [ 1439.760137] which would create a new lock dependency:
> > [ 1439.762122] (slock-AF_INET){-+..} -> (&fbc->lock){--..}
>
> This is a false positive. The lock slock is not a normal lock.
> It's an ancient creature that's a spinlock in interrupt context
> and a semaphore in process context.
>
> In particular, holding slock in process context does not disable
> softirqs and you're still allowed to take the spinlock portion of
> slock on the same CPU through an interrupt. What happens is that
> the softirq will notice that the slock is already taken by process
> context, and defer the work for later.
Which doesn't seem to be relevant to the report in question.
What happens is that two different percpu counters with different irq
semantics get put in the same class (I suspect Eric never tested this
stuff with lockdep enabled).
Does the below -- which isn't even compile tested -- solve the issue?
Jeff, would you, in future, take care not to word wrap splats like that;
it takes way too much effort to untangle that mess.
---
Subject: lockdep: annotate percpu_counter
Classify percpu_counter instances similar to regular lock objects --
that is, per instantiation site.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
include/linux/percpu_counter.h | 14 ++++++++++----
lib/percpu_counter.c | 18 ++++--------------
lib/proportions.c | 6 +++---
mm/backing-dev.c | 2 +-
4 files changed, 18 insertions(+), 22 deletions(-)
diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index 9007ccd..a074d77 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -30,8 +30,16 @@ struct percpu_counter {
#define FBC_BATCH (NR_CPUS*4)
#endif
-int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
-int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
+int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+                         struct lock_class_key *key);
+
+#define percpu_counter_init(fbc, value)                                \
+        do {                                                           \
+                static struct lock_class_key __key;                    \
+                                                                       \
+                __percpu_counter_init(fbc, value, &__key);             \
+        } while (0)
+
void percpu_counter_destroy(struct percpu_counter *fbc);
void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch);
@@ -85,8 +93,6 @@ static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
return 0;
}
-#define percpu_counter_init_irq percpu_counter_init
-
static inline void percpu_counter_destroy(struct percpu_counter *fbc)
{
}
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index b255b93..4bb0ed3 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -68,11 +68,11 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
}
EXPORT_SYMBOL(__percpu_counter_sum);
-static struct lock_class_key percpu_counter_irqsafe;
-
-int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
+int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+                         struct lock_class_key *key)
{
spin_lock_init(&fbc->lock);
+ lockdep_set_class(&fbc->lock, key);
fbc->count = amount;
fbc->counters = alloc_percpu(s32);
if (!fbc->counters)
@@ -84,17 +84,7 @@ int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
#endif
return 0;
}
-EXPORT_SYMBOL(percpu_counter_init);
-
-int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount)
-{
- int err;
-
- err = percpu_counter_init(fbc, amount);
- if (!err)
- lockdep_set_class(&fbc->lock, &percpu_counter_irqsafe);
- return err;
-}
+EXPORT_SYMBOL(__percpu_counter_init);
void percpu_counter_destroy(struct percpu_counter *fbc)
{
diff --git a/lib/proportions.c b/lib/proportions.c
index 4f387a6..7367f2b 100644
--- a/lib/proportions.c
+++ b/lib/proportions.c
@@ -83,11 +83,11 @@ int prop_descriptor_init(struct prop_descriptor *pd, int shift)
pd->index = 0;
pd->pg[0].shift = shift;
mutex_init(&pd->mutex);
- err = percpu_counter_init_irq(&pd->pg[0].events, 0);
+ err = percpu_counter_init(&pd->pg[0].events, 0);
if (err)
goto out;
- err = percpu_counter_init_irq(&pd->pg[1].events, 0);
+ err = percpu_counter_init(&pd->pg[1].events, 0);
if (err)
percpu_counter_destroy(&pd->pg[0].events);
@@ -191,7 +191,7 @@ int prop_local_init_percpu(struct prop_local_percpu *pl)
spin_lock_init(&pl->lock);
pl->shift = 0;
pl->period = 0;
- return percpu_counter_init_irq(&pl->events, 0);
+ return percpu_counter_init(&pl->events, 0);
}
void prop_local_destroy_percpu(struct prop_local_percpu *pl)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 801c08b..a7c6c56 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -223,7 +223,7 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->max_prop_frac = PROP_FRAC_BASE;
for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
- err = percpu_counter_init_irq(&bdi->bdi_stat[i], 0);
+ err = percpu_counter_init(&bdi->bdi_stat[i], 0);
if (err)
goto err;
}
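The effect of the macro is that every textual call site gets its own
static struct lock_class_key, so counters with different irq semantics
land in different lockdep classes even though they share the same code.
A hypothetical usage sketch (the counter names are made up and error
handling is elided):

        static struct percpu_counter nr_foo;    /* only touched in process context */
        static struct percpu_counter nr_bar;    /* also updated from softirq */

        static void example_init(void)
        {
                /* two call sites -> two static __key objects -> two
                 * lockdep classes, so nr_bar's softirq usage no longer
                 * taints nr_foo */
                percpu_counter_init(&nr_foo, 0);
                percpu_counter_init(&nr_bar, 0);
        }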
* RE: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-26 14:08 ` Peter Zijlstra
@ 2008-12-27 19:38 ` Tantilov, Emil S
2008-12-27 20:38 ` Peter Zijlstra
0 siblings, 1 reply; 30+ messages in thread
From: Tantilov, Emil S @ 2008-12-27 19:38 UTC (permalink / raw)
To: Peter Zijlstra, Herbert Xu
Cc: Kirsher, Jeffrey T, netdev, David Miller, Waskiewicz Jr, Peter P,
Duyck, Alexander H, Ingo Molnar, Eric Dumazet
Peter Zijlstra wrote:
> On Thu, 2008-12-25 at 22:26 +1100, Herbert Xu wrote:
>> On Thu, Dec 25, 2008 at 10:25:44AM +0000, Jeff Kirsher wrote:
>>>
>>> [ 1439.758437] ======================================================
>>> [ 1439.758724] [ INFO: soft-safe -> soft-unsafe lock order detected ]
>>> [ 1439.758868] 2.6.28-rc8-net-next-igb #13
>>> [ 1439.759007] ------------------------------------------------------
>>> [ 1439.759150] netperf/22302 [HC0[0]:SC0[1]:HE1:SE0] is trying to acquire:
>>> [ 1439.759293] (&fbc->lock){--..}, at: [<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
>>> [ 1439.759581]
>>> [ 1439.759582] and this task is already holding:
>>> [ 1439.759853] (slock-AF_INET){-+..}, at: [<ffffffff804fdca6>] tcp_close+0x16c/0x2da
>>> [ 1439.760137] which would create a new lock dependency:
>>> [ 1439.762122] (slock-AF_INET){-+..} -> (&fbc->lock){--..}
>>
>> This is a false positive. The lock slock is not a normal lock.
>> It's an ancient creature that's a spinlock in interrupt context
>> and a semaphore in process context.
>>
>> In particular, holding slock in process context does not disable
>> softirqs and you're still allowed to take the spinlock portion of
>> slock on the same CPU through an interrupt. What happens is that
>> the softirq will notice that the slock is already taken by process
>> context, and defer the work for later.
>
> Which doesn't seem to be relevant to the report in question.
>
> What happens is that two different percpu counters with different irq
> semantics get put in the same class (I suspect Eric never tested this
> stuff with lockdep enabled).
>
> Does the below -- which isn't even compile tested -- solve the issue?
>
> Jeff, would you, in future, take care not to word wrap splats like that;
> it takes way too much effort to untangle that mess.
>
> ---
> Subject: lockdep: annotate percpu_counter
>
> Classify percpu_counter instances similar to regular lock objects --
> that is, per instantiation site.
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> ---
> include/linux/percpu_counter.h | 14 ++++++++++----
> lib/percpu_counter.c | 18 ++++--------------
> lib/proportions.c | 6 +++---
> mm/backing-dev.c | 2 +-
> 4 files changed, 18 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/percpu_counter.h
> b/include/linux/percpu_counter.h
> index 9007ccd..a074d77 100644
> --- a/include/linux/percpu_counter.h
> +++ b/include/linux/percpu_counter.h
> @@ -30,8 +30,16 @@ struct percpu_counter {
> #define FBC_BATCH (NR_CPUS*4)
> #endif
>
> -int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
> -int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
> +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> +                         struct lock_class_key *key);
> +
> +#define percpu_counter_init(fbc, value)                                \
> +        do {                                                           \
> +                static struct lock_class_key __key;                    \
> +                                                                       \
> +                __percpu_counter_init(fbc, value, &__key);             \
> +        } while (0)
> +
> void percpu_counter_destroy(struct percpu_counter *fbc);
> void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
> void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch);
> @@ -85,8 +93,6 @@ static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
>         return 0;
> }
>
> -#define percpu_counter_init_irq percpu_counter_init
> -
> static inline void percpu_counter_destroy(struct percpu_counter *fbc)
> {
> }
> diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> index b255b93..4bb0ed3 100644
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -68,11 +68,11 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
> }
> EXPORT_SYMBOL(__percpu_counter_sum);
>
> -static struct lock_class_key percpu_counter_irqsafe;
> -
> -int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
> +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> +                         struct lock_class_key *key)
> {
> spin_lock_init(&fbc->lock);
> + lockdep_set_class(&fbc->lock, key);
> fbc->count = amount;
> fbc->counters = alloc_percpu(s32);
> if (!fbc->counters)
> @@ -84,17 +84,7 @@ int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
> #endif
> return 0;
> }
> -EXPORT_SYMBOL(percpu_counter_init);
> -
> -int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount)
> -{
> - int err;
> -
> - err = percpu_counter_init(fbc, amount);
> - if (!err)
> - lockdep_set_class(&fbc->lock, &percpu_counter_irqsafe);
> - return err;
> -}
> +EXPORT_SYMBOL(__percpu_counter_init);
>
> void percpu_counter_destroy(struct percpu_counter *fbc)
> {
> diff --git a/lib/proportions.c b/lib/proportions.c
> index 4f387a6..7367f2b 100644
> --- a/lib/proportions.c
> +++ b/lib/proportions.c
> @@ -83,11 +83,11 @@ int prop_descriptor_init(struct prop_descriptor *pd, int shift)
>         pd->index = 0;
> pd->pg[0].shift = shift;
> mutex_init(&pd->mutex);
> - err = percpu_counter_init_irq(&pd->pg[0].events, 0);
> + err = percpu_counter_init(&pd->pg[0].events, 0);
> if (err)
> goto out;
>
> - err = percpu_counter_init_irq(&pd->pg[1].events, 0);
> + err = percpu_counter_init(&pd->pg[1].events, 0);
> if (err)
> percpu_counter_destroy(&pd->pg[0].events);
>
> @@ -191,7 +191,7 @@ int prop_local_init_percpu(struct prop_local_percpu *pl)
>         spin_lock_init(&pl->lock);
> pl->shift = 0;
> pl->period = 0;
> - return percpu_counter_init_irq(&pl->events, 0);
> + return percpu_counter_init(&pl->events, 0);
> }
>
> void prop_local_destroy_percpu(struct prop_local_percpu *pl)
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 801c08b..a7c6c56 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -223,7 +223,7 @@ int bdi_init(struct backing_dev_info *bdi)
> bdi->max_prop_frac = PROP_FRAC_BASE;
>
> for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
> - err = percpu_counter_init_irq(&bdi->bdi_stat[i], 0);
> + err = percpu_counter_init(&bdi->bdi_stat[i], 0);
> if (err)
> goto err;
> }
This patch fails to compile:
mm/backing-dev.c: In function 'bdi_init':
mm/backing-dev.c:226: error: expected expression before 'do'
Thanks,
Emil
* RE: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-27 19:38 ` Tantilov, Emil S
@ 2008-12-27 20:38 ` Peter Zijlstra
2008-12-28 0:54 ` Tantilov, Emil S
0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2008-12-27 20:38 UTC (permalink / raw)
To: Tantilov, Emil S
Cc: Herbert Xu, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
On Sat, 2008-12-27 at 12:38 -0700, Tantilov, Emil S wrote:
> Peter Zijlstra wrote:
> > index 9007ccd..a074d77 100644
> > --- a/include/linux/percpu_counter.h
> > +++ b/include/linux/percpu_counter.h
> > @@ -30,8 +30,16 @@ struct percpu_counter {
> > #define FBC_BATCH (NR_CPUS*4)
> > #endif
> >
> > -int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
> > -int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
> > +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> > +                         struct lock_class_key *key);
> > +
> > +#define percpu_counter_init(fbc, value)                                \
> > +        do {                                                           \
> > +                static struct lock_class_key __key;                    \
> > +                                                                       \
> > +                __percpu_counter_init(fbc, value, &__key);             \
> > +        } while (0)
> > +
> > void percpu_counter_destroy(struct percpu_counter *fbc);
> > void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
> > void __percpu_counter_add(struct percpu_counter *fbc, s64 amount,
> This patch fails to compile:
>
> mm/backing-dev.c: In function 'bdi_init':
> mm/backing-dev.c:226: error: expected expression bedore 'do'
Ah indeed, stupid me...
Please try something like this instead of the above hunk:
@@ -30,8 +30,16 @@ struct percpu_counter {
#define FBC_BATCH (NR_CPUS*4)
#endif
-int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
-int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
+int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+                         struct lock_class_key *key);
+
+#define percpu_counter_init(fbc, value)                                \
+        ({                                                             \
+                static struct lock_class_key __key;                    \
+                                                                       \
+                __percpu_counter_init(fbc, value, &__key);             \
+        })
+
void percpu_counter_destroy(struct percpu_counter *fbc);
void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
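The ({ ... }) form is a GCC statement expression, which, unlike
do { ... } while (0), yields a value -- and bdi_init() assigns the
macro's result to err, hence the error above. A minimal userspace
illustration of the same point (plain GCC, nothing kernel-specific):

        #include <stdio.h>

        /* a statement expression evaluates to its last expression */
        #define SQUARE(x)                       \
                ({                              \
                        typeof(x) _x = (x);     \
                        _x * _x;                \
                })

        int main(void)
        {
                int err = SQUARE(3);    /* legal: SQUARE() is an expression */
                printf("%d\n", err);    /* prints 9 */
                return 0;
        }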
* RE: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-27 20:38 ` Peter Zijlstra
@ 2008-12-28 0:54 ` Tantilov, Emil S
2008-12-29 10:02 ` Peter Zijlstra
0 siblings, 1 reply; 30+ messages in thread
From: Tantilov, Emil S @ 2008-12-28 0:54 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Herbert Xu, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
Peter Zijlstra wrote:
> On Sat, 2008-12-27 at 12:38 -0700, Tantilov, Emil S wrote:
>> Peter Zijlstra wrote:
>
>>> index 9007ccd..a074d77 100644
>>> --- a/include/linux/percpu_counter.h
>>> +++ b/include/linux/percpu_counter.h
>>> @@ -30,8 +30,16 @@ struct percpu_counter {
>>> #define FBC_BATCH (NR_CPUS*4)
>>> #endif
>>>
>>> -int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
>>> -int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
>>> +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
>>> +                         struct lock_class_key *key);
>>> +
>>> +#define percpu_counter_init(fbc, value)                                \
>>> +        do {                                                           \
>>> +                static struct lock_class_key __key;                    \
>>> +                                                                       \
>>> +                __percpu_counter_init(fbc, value, &__key);             \
>>> +        } while (0)
>>> +
>>> void percpu_counter_destroy(struct percpu_counter *fbc);
>>> void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
>>> void __percpu_counter_add(struct percpu_counter *fbc, s64 amount,
>
>> This patch fails to compile:
>>
>> mm/backing-dev.c: In function 'bdi_init':
>> mm/backing-dev.c:226: error: expected expression before 'do'
>
> Ah indeed, stupid me...
>
> Please try something like this instead of the above hunk:
>
> @@ -30,8 +30,16 @@ struct percpu_counter {
> #define FBC_BATCH (NR_CPUS*4)
> #endif
>
> -int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
> -int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
> +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> +                         struct lock_class_key *key);
> +
> +#define percpu_counter_init(fbc, value)                                \
> +        ({                                                             \
> +                static struct lock_class_key __key;                    \
> +                                                                       \
> +                __percpu_counter_init(fbc, value, &__key);             \
> +        })
> +
> void percpu_counter_destroy(struct percpu_counter *fbc);
> void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
> void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
With this compiled, but I still get the following:
[ 435.632627] =================================
[ 435.633030] [ INFO: inconsistent lock state ]
[ 435.633037] 2.6.28-rc8-net-next-igbL #14
[ 435.633040] ---------------------------------
[ 435.633044] inconsistent {in-softirq-W} -> {softirq-on-W} usage.
[ 435.633049] netperf/12669 [HC0[0]:SC0[0]:HE1:SE1] takes:
[ 435.633053] (key#8){-+..}, at: [<ffffffff803691ac>] __percpu_counter_add+0x4a/0x6d
[ 435.633068] {in-softirq-W} state was registered at:
[ 435.633070] [<ffffffffffffffff>] 0xffffffffffffffff
[ 435.633078] irq event stamp: 988533
[ 435.633080] hardirqs last enabled at (988533): [<ffffffff80243712>] _local_bh_enable_ip+0xc8/0xcd
[ 435.633088] hardirqs last disabled at (988531): [<ffffffff8024369e>] _local_bh_enable_ip+0x54/0xcd
[ 435.633093] softirqs last enabled at (988532): [<ffffffff804fc814>] sock_orphan+0x3f/0x44
[ 435.633100] softirqs last disabled at (988530): [<ffffffff8056454d>] _write_lock_bh+0x11/0x3d
[ 435.633107]
[ 435.633108] other info that might help us debug this:
[ 435.633110] 1 lock held by netperf/12669:
[ 435.633112] #0: (sk_lock-AF_INET6){--..}, at: [<ffffffff804fc544>] lock_sock+0xb/0xd
[ 435.633119]
[ 435.633120] stack backtrace:
[ 435.633124] Pid: 12669, comm: netperf Not tainted 2.6.28-rc8-net-next-igbL #14
[ 435.633127] Call Trace:
[ 435.633134] [<ffffffff8025ffb8>] print_usage_bug+0x159/0x16a
[ 435.633139] [<ffffffff8026000e>] valid_state+0x45/0x52
[ 435.633143] [<ffffffff802601cf>] mark_lock_irq+0x1b4/0x27b
[ 435.633148] [<ffffffff80260339>] mark_lock+0xa3/0x110
[ 435.633152] [<ffffffff80260480>] mark_irqflags+0xda/0xf2
[ 435.633157] [<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
[ 435.633161] [<ffffffff80261d93>] lock_acquire+0x55/0x71
[ 435.633166] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
[ 435.633170] [<ffffffff80564434>] _spin_lock+0x2c/0x38
[ 435.633175] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
[ 435.633179] [<ffffffff803691ac>] __percpu_counter_add+0x4a/0x6d
[ 435.633184] [<ffffffff804fc827>] percpu_counter_add+0xe/0x10
[ 435.633188] [<ffffffff804fc837>] percpu_counter_inc+0xe/0x10
[ 435.633193] [<ffffffff804fdc91>] tcp_close+0x157/0x2da
[ 435.633197] [<ffffffff8051907e>] inet_release+0x58/0x5f
[ 435.633204] [<ffffffff80527c48>] inet6_release+0x30/0x35
[ 435.633213] [<ffffffff804c9354>] sock_release+0x1a/0x76
[ 435.633221] [<ffffffff804c9804>] sock_close+0x22/0x26
[ 435.633229] [<ffffffff802a345a>] __fput+0x82/0x110
[ 435.633234] [<ffffffff802a381a>] fput+0x15/0x17
[ 435.633239] [<ffffffff802a09c5>] filp_close+0x67/0x72
[ 435.633246] [<ffffffff80240ae3>] close_files+0x66/0x8d
[ 435.633251] [<ffffffff80240b39>] put_files_struct+0x19/0x42
[ 435.633256] [<ffffffff80240b98>] exit_files+0x36/0x3b
[ 435.633260] [<ffffffff80241eec>] do_exit+0x1b7/0x2b1
[ 435.633265] [<ffffffff80242087>] sys_exit_group+0x0/0x14
[ 435.633269] [<ffffffff80242099>] sys_exit_group+0x12/0x14
[ 435.633275] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
Thanks,
Emil
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-25 11:26 ` Herbert Xu
2008-12-26 14:08 ` Peter Zijlstra
@ 2008-12-29 9:57 ` Ingo Molnar
1 sibling, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 9:57 UTC (permalink / raw)
To: Herbert Xu
Cc: Jeff Kirsher, netdev, David Miller, Emil Tantilov,
Peter P Waskiewicz Jr, Alexander Duyck, Peter Zijlstra,
linux-kernel
* Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Dec 25, 2008 at 10:25:44AM +0000, Jeff Kirsher wrote:
> >
> > [ 1439.758437] ======================================================
> > [ 1439.758724] [ INFO: soft-safe -> soft-unsafe lock order detected ]
> > [ 1439.758868] 2.6.28-rc8-net-next-igb #13
> > [ 1439.759007] ------------------------------------------------------
> > [ 1439.759150] netperf/22302 [HC0[0]:SC0[1]:HE1:SE0] is trying to acquire:
> > [ 1439.759293] (&fbc->lock){--..}, at: [<ffffffff803691a6>] __percpu_counter_add+0x4a/0x6d
> > [ 1439.759581]
> > [ 1439.759582] and this task is already holding:
> > [ 1439.759853] (slock-AF_INET){-+..}, at: [<ffffffff804fdca6>] tcp_close+0x16c/0x2da
> > [ 1439.760137] which would create a new lock dependency:
> > [ 1439.762122] (slock-AF_INET){-+..} -> (&fbc->lock){--..}
>
> This is a false positive. The lock slock is not a normal lock.
> It's an ancient creature that's a spinlock in interrupt context
> and a semaphore in process context.
>
> In particular, holding slock in process context does not disable
> softirqs and you're still allowed to take the spinlock portion of slock
> on the same CPU through an interrupt. What happens is that the softirq
> will notice that the slock is already taken by process context, and
> defer the work for later.
False positive or not, this splat has now been allowed upstream - I just
got it with v2.6.28-3114-g3c92ec8:
[ 42.312021] eth0: no IPv6 routers present
[ 71.252349]
[ 71.252354] =================================
[ 71.256258] [ INFO: inconsistent lock state ]
[ 71.256258] 2.6.28-tip-03857-g33ad6a3-dirty #13095
[ 71.256258] ---------------------------------
[ 71.256258] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
[ 71.256258] cc1/3913 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 71.256258] (&fbc->lock){-+..}, at: [<c0417b35>] __percpu_counter_add+0x65/0xb0
[ 71.256258] {softirq-on-W} state was registered at:
[ 71.256258] [<c016696c>] __lock_acquire+0x4cc/0x640
[ 71.256258] [<c0166b69>] lock_acquire+0x89/0xc0
[ 71.256258] [<c0dddb7b>] _spin_lock+0x3b/0x70
[ 71.256258] [<c0417b35>] __percpu_counter_add+0x65/0xb0
[ 71.256258] [<c01bf0fa>] get_empty_filp+0x6a/0x1d0
[ 71.256258] [<c01c85e9>] path_lookup_open+0x29/0x90
[ 71.256258] [<c01c86e2>] do_filp_open+0x92/0x7a0
[ 71.256258] [<c01bc32c>] do_sys_open+0x4c/0x90
[ 71.256258] [<c01bc3de>] sys_open+0x2e/0x40
[ 71.256258] [<c0103e6f>] sysenter_do_call+0x12/0x43
[ 71.256258] [<ffffffff>] 0xffffffff
[ 71.256258] irq event stamp: 18174
[ 71.256258] hardirqs last enabled at (18174): [<c0196de6>] free_hot_cold_page+0x1b6/0x280
[ 71.256258] hardirqs last disabled at (18173): [<c0196d3e>] free_hot_cold_page+0x10e/0x280
[ 71.256258] softirqs last enabled at (18136): [<c0143ee2>] __do_softirq+0x132/0x180
[ 71.256258] softirqs last disabled at (18139): [<c010631a>] call_on_stack+0x1a/0x30
[ 71.256258]
[ 71.256258] other info that might help us debug this:
[ 71.256258] 4 locks held by cc1/3913:
[ 71.256258] #0: (rcu_read_lock){..--}, at: [<c0ba49b0>] net_rx_action+0xd0/0x250
[ 71.256258] #1: (rcu_read_lock){..--}, at: [<c0ba1f62>] netif_receive_skb+0xf2/0x340
[ 71.256258] #2: (rcu_read_lock){..--}, at: [<c0c01556>] ip_local_deliver_finish+0x36/0x200
[ 71.256258] #3: (slock-AF_INET/1){-+..}, at: [<c0c1f3b8>] tcp_v4_rcv+0x5b8/0x820
[ 71.256258]
[ 71.256258] stack backtrace:
[ 71.256258] Pid: 3913, comm: cc1 Not tainted 2.6.28-tip-03857-g33ad6a3-dirty #13095
[ 71.256258] Call Trace:
[ 71.256258] [<c0163fc6>] print_usage_bug+0x176/0x1d0
[ 71.256258] [<c01662ef>] mark_lock+0xbcf/0xd80
[ 71.256258] [<c0109566>] ? sched_clock+0x16/0x40
[ 71.256258] [<c016692b>] __lock_acquire+0x48b/0x640
[ 71.256258] [<c0167581>] ? trace_hardirqs_on_caller+0x81/0x1e0
[ 71.256258] [<c01673f0>] ? mark_held_locks+0x30/0x80
[ 71.256258] [<c0196de6>] ? free_hot_cold_page+0x1b6/0x280
[ 71.256258] [<c0166b69>] lock_acquire+0x89/0xc0
[ 71.256258] [<c0417b35>] ? __percpu_counter_add+0x65/0xb0
[ 71.256258] [<c0dddb7b>] _spin_lock+0x3b/0x70
[ 71.256258] [<c0417b35>] ? __percpu_counter_add+0x65/0xb0
[ 71.256258] [<c0417b35>] __percpu_counter_add+0x65/0xb0
[ 71.256258] [<c0c0ac19>] inet_csk_destroy_sock+0x89/0x160
[ 71.256258] [<c0c0a1b5>] ? inet_csk_clear_xmit_timers+0x45/0x50
[ 71.256258] [<c0c0cbad>] tcp_done+0x4d/0x70
[ 71.256258] [<c0c16b6c>] tcp_rcv_state_process+0x68c/0x950
[ 71.256258] [<c0c1bd66>] ? tcp_v4_md5_do_lookup+0x16/0x70
[ 71.256258] [<c0c1e68b>] tcp_v4_do_rcv+0xdb/0x340
[ 71.256258] [<c0c1f3b8>] ? tcp_v4_rcv+0x5b8/0x820
[ 71.256258] [<c0dddb2f>] ? _spin_lock_nested+0x5f/0x70
[ 71.256258] [<c0c1f3d6>] tcp_v4_rcv+0x5d6/0x820
[ 71.256258] [<c0c219be>] ? raw_local_deliver+0xe/0x190
[ 71.256258] [<c0c01556>] ? ip_local_deliver_finish+0x36/0x200
[ 71.256258] [<c0c015d6>] ip_local_deliver_finish+0xb6/0x200
[ 71.256258] [<c0c01556>] ? ip_local_deliver_finish+0x36/0x200
[ 71.256258] [<c0c01752>] ip_local_deliver+0x32/0xa0
[ 71.256258] [<c0c01520>] ? ip_local_deliver_finish+0x0/0x200
[ 71.256258] [<c0c018ce>] ip_rcv_finish+0x10e/0x2f0
[ 71.256258] [<c0c017c0>] ? ip_rcv_finish+0x0/0x2f0
[ 71.256258] [<c0c01c8c>] ip_rcv+0x1dc/0x290
[ 71.256258] [<c0c017c0>] ? ip_rcv_finish+0x0/0x2f0
[ 71.256258] [<c0c01ab0>] ? ip_rcv+0x0/0x290
[ 71.256258] [<c0ba210c>] netif_receive_skb+0x29c/0x340
[ 71.256258] [<c0ba1f62>] ? netif_receive_skb+0xf2/0x340
[ 71.256258] [<c0166815>] ? __lock_acquire+0x375/0x640
[ 71.256258] [<c0b996d9>] ? skb_pull+0x9/0x40
[ 71.256258] [<c060f44e>] nv_napi_poll+0x33e/0x630
[ 71.256258] [<c0ba4a34>] net_rx_action+0x154/0x250
[ 71.256258] [<c0ba49b0>] ? net_rx_action+0xd0/0x250
[ 71.256258] [<c0143e59>] __do_softirq+0xa9/0x180
[ 71.256258] [<c0143db0>] ? __do_softirq+0x0/0x180
[ 71.256258] <IRQ> [<c012d778>] ? idle_cpu+0x8/0x30
[ 71.256258] [<c014436c>] ? irq_exit+0x7c/0x90
[ 71.256258] [<c0106601>] ? do_IRQ+0xa1/0x120
[ 71.256258] [<c010452c>] ? common_interrupt+0x2c/0x34
Ingo
* RE: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-28 0:54 ` Tantilov, Emil S
@ 2008-12-29 10:02 ` Peter Zijlstra
2008-12-29 10:07 ` Herbert Xu
2008-12-29 10:31 ` Herbert Xu
0 siblings, 2 replies; 30+ messages in thread
From: Peter Zijlstra @ 2008-12-29 10:02 UTC (permalink / raw)
To: Tantilov, Emil S
Cc: Herbert Xu, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
On Sat, 2008-12-27 at 17:54 -0700, Tantilov, Emil S wrote:
> Peter Zijlstra wrote:
> > On Sat, 2008-12-27 at 12:38 -0700, Tantilov, Emil S wrote:
> >> Peter Zijlstra wrote:
> >
> >>> index 9007ccd..a074d77 100644
> >>> --- a/include/linux/percpu_counter.h
> >>> +++ b/include/linux/percpu_counter.h
> >>> @@ -30,8 +30,16 @@ struct percpu_counter {
> >>> #define FBC_BATCH (NR_CPUS*4)
> >>> #endif
> >>>
> >>> -int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
> >>> -int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
> >>> +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> >>> +                         struct lock_class_key *key);
> >>> +
> >>> +#define percpu_counter_init(fbc, value)                                \
> >>> +        do {                                                           \
> >>> +                static struct lock_class_key __key;                    \
> >>> +                                                                       \
> >>> +                __percpu_counter_init(fbc, value, &__key);             \
> >>> +        } while (0)
> >>> +
> >>> void percpu_counter_destroy(struct percpu_counter *fbc);
> >>> void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
> >>> void __percpu_counter_add(struct percpu_counter *fbc, s64 amount,
> >
> >> This patch fails to compile:
> >>
> >> mm/backing-dev.c: In function 'bdi_init':
> >> mm/backing-dev.c:226: error: expected expression before 'do'
> >
> > Ah indeed, stupid me...
> >
> > Please try something like this instead of the above hunk:
> >
> > @@ -30,8 +30,16 @@ struct percpu_counter {
> > #define FBC_BATCH (NR_CPUS*4)
> > #endif
> >
> > -int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
> > -int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
> > +int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
> > +                         struct lock_class_key *key);
> > +
> > +#define percpu_counter_init(fbc, value)                                \
> > +        ({                                                             \
> > +                static struct lock_class_key __key;                    \
> > +                                                                       \
> > +                __percpu_counter_init(fbc, value, &__key);             \
> > +        })
> > +
> > void percpu_counter_destroy(struct percpu_counter *fbc);
> > void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
> > void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
>
> With this compiled, but I still get the following:
>
> [ 435.632627] =================================
> [ 435.633030] [ INFO: inconsistent lock state ]
> [ 435.633037] 2.6.28-rc8-net-next-igbL #14
> [ 435.633040] ---------------------------------
> [ 435.633044] inconsistent {in-softirq-W} -> {softirq-on-W} usage.
> [ 435.633049] netperf/12669 [HC0[0]:SC0[0]:HE1:SE1] takes:
> [ 435.633053] (key#8){-+..}, at: [<ffffffff803691ac>] __percpu_counter_add+0x4a/0x6d
> [ 435.633068] {in-softirq-W} state was registered at:
> [ 435.633070] [<ffffffffffffffff>] 0xffffffffffffffff
> [ 435.633078] irq event stamp: 988533
> [ 435.633080] hardirqs last enabled at (988533): [<ffffffff80243712>] _local_bh_enable_ip+0xc8/0xcd
> [ 435.633088] hardirqs last disabled at (988531): [<ffffffff8024369e>] _local_bh_enable_ip+0x54/0xcd
> [ 435.633093] softirqs last enabled at (988532): [<ffffffff804fc814>] sock_orphan+0x3f/0x44
> [ 435.633100] softirqs last disabled at (988530): [<ffffffff8056454d>] _write_lock_bh+0x11/0x3d
> [ 435.633107]
> [ 435.633108] other info that might help us debug this:
> [ 435.633110] 1 lock held by netperf/12669:
> [ 435.633112] #0: (sk_lock-AF_INET6){--..}, at: [<ffffffff804fc544>] lock_sock+0xb/0xd
> [ 435.633119]
> [ 435.633120] stack backtrace:
> [ 435.633124] Pid: 12669, comm: netperf Not tainted 2.6.28-rc8-net-next-igbL #14
> [ 435.633127] Call Trace:
> [ 435.633134] [<ffffffff8025ffb8>] print_usage_bug+0x159/0x16a
> [ 435.633139] [<ffffffff8026000e>] valid_state+0x45/0x52
> [ 435.633143] [<ffffffff802601cf>] mark_lock_irq+0x1b4/0x27b
> [ 435.633148] [<ffffffff80260339>] mark_lock+0xa3/0x110
> [ 435.633152] [<ffffffff80260480>] mark_irqflags+0xda/0xf2
> [ 435.633157] [<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
> [ 435.633161] [<ffffffff80261d93>] lock_acquire+0x55/0x71
> [ 435.633166] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
> [ 435.633170] [<ffffffff80564434>] _spin_lock+0x2c/0x38
> [ 435.633175] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
> [ 435.633179] [<ffffffff803691ac>] __percpu_counter_add+0x4a/0x6d
> [ 435.633184] [<ffffffff804fc827>] percpu_counter_add+0xe/0x10
> [ 435.633188] [<ffffffff804fc837>] percpu_counter_inc+0xe/0x10
> [ 435.633193] [<ffffffff804fdc91>] tcp_close+0x157/0x2da
> [ 435.633197] [<ffffffff8051907e>] inet_release+0x58/0x5f
> [ 435.633204] [<ffffffff80527c48>] inet6_release+0x30/0x35
> [ 435.633213] [<ffffffff804c9354>] sock_release+0x1a/0x76
> [ 435.633221] [<ffffffff804c9804>] sock_close+0x22/0x26
> [ 435.633229] [<ffffffff802a345a>] __fput+0x82/0x110
> [ 435.633234] [<ffffffff802a381a>] fput+0x15/0x17
> [ 435.633239] [<ffffffff802a09c5>] filp_close+0x67/0x72
> [ 435.633246] [<ffffffff80240ae3>] close_files+0x66/0x8d
> [ 435.633251] [<ffffffff80240b39>] put_files_struct+0x19/0x42
> [ 435.633256] [<ffffffff80240b98>] exit_files+0x36/0x3b
> [ 435.633260] [<ffffffff80241eec>] do_exit+0x1b7/0x2b1
> [ 435.633265] [<ffffffff80242087>] sys_exit_group+0x0/0x14
> [ 435.633269] [<ffffffff80242099>] sys_exit_group+0x12/0x14
> [ 435.633275] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
Afaict this is a real deadlock.
All the code around that percpu_counter_inc() in tcp_close() seems to
disable softirqs, which suggests a softirq can really happen there.
So either we disable softirqs around the percpu op, or we move it a few
lines down, like:
---
Subject: net: fix tcp deadlock
Lockdep spotted that the percpu counter op takes a lock so that softirq
recursion deadlocks can occur. Delay the op until we've disabled
softirqs to avoid this.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
net/ipv4/tcp.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 564d3a9..03eddf1 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1835,12 +1835,10 @@ adjudge_to_death:
state = sk->sk_state;
sock_hold(sk);
sock_orphan(sk);
- percpu_counter_inc(sk->sk_prot->orphan_count);
/* It is the last release_sock in its life. It will remove backlog. */
release_sock(sk);
-
/* Now socket is owned by kernel and we acquire BH lock
to finish close. No need to check for user refs.
*/
@@ -1848,6 +1846,9 @@ adjudge_to_death:
bh_lock_sock(sk);
WARN_ON(sock_owned_by_user(sk));
+ /* account the orphan state now that we have softirqs disabled. */
+ percpu_counter_inc(sk->sk_prot->orphan_count);
+
/* Have we already been destroyed by a softirq or backlog? */
if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
goto out;
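For comparison, the first option above -- disabling softirqs around the
percpu op itself -- would look something like this (sketch only; the
patch takes the second route instead):

        local_bh_disable();
        percpu_counter_inc(sk->sk_prot->orphan_count);
        local_bh_enable();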
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 10:02 ` Peter Zijlstra
@ 2008-12-29 10:07 ` Herbert Xu
2008-12-29 10:16 ` Peter Zijlstra
2008-12-29 10:31 ` Herbert Xu
1 sibling, 1 reply; 30+ messages in thread
From: Herbert Xu @ 2008-12-29 10:07 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Tantilov, Emil S, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
On Mon, Dec 29, 2008 at 11:02:07AM +0100, Peter Zijlstra wrote:
>
> > [ 435.633134] [<ffffffff8025ffb8>] print_usage_bug+0x159/0x16a
> > [ 435.633139] [<ffffffff8026000e>] valid_state+0x45/0x52
> > [ 435.633143] [<ffffffff802601cf>] mark_lock_irq+0x1b4/0x27b
> > [ 435.633148] [<ffffffff80260339>] mark_lock+0xa3/0x110
> > [ 435.633152] [<ffffffff80260480>] mark_irqflags+0xda/0xf2
> > [ 435.633157] [<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
> > [ 435.633161] [<ffffffff80261d93>] lock_acquire+0x55/0x71
> > [ 435.633166] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
> > [ 435.633170] [<ffffffff80564434>] _spin_lock+0x2c/0x38
> > [ 435.633175] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
> > [ 435.633179] [<ffffffff803691ac>] __percpu_counter_add+0x4a/0x6d
> > [ 435.633184] [<ffffffff804fc827>] percpu_counter_add+0xe/0x10
> > [ 435.633188] [<ffffffff804fc837>] percpu_counter_inc+0xe/0x10
> > [ 435.633193] [<ffffffff804fdc91>] tcp_close+0x157/0x2da
> > [ 435.633197] [<ffffffff8051907e>] inet_release+0x58/0x5f
> > [ 435.633204] [<ffffffff80527c48>] inet6_release+0x30/0x35
> > [ 435.633213] [<ffffffff804c9354>] sock_release+0x1a/0x76
> > [ 435.633221] [<ffffffff804c9804>] sock_close+0x22/0x26
> > [ 435.633229] [<ffffffff802a345a>] __fput+0x82/0x110
> > [ 435.633234] [<ffffffff802a381a>] fput+0x15/0x17
> > [ 435.633239] [<ffffffff802a09c5>] filp_close+0x67/0x72
> > [ 435.633246] [<ffffffff80240ae3>] close_files+0x66/0x8d
> > [ 435.633251] [<ffffffff80240b39>] put_files_struct+0x19/0x42
> > [ 435.633256] [<ffffffff80240b98>] exit_files+0x36/0x3b
> > [ 435.633260] [<ffffffff80241eec>] do_exit+0x1b7/0x2b1
> > [ 435.633265] [<ffffffff80242087>] sys_exit_group+0x0/0x14
> > [ 435.633269] [<ffffffff80242099>] sys_exit_group+0x12/0x14
> > [ 435.633275] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
>
> Afaict this is a real deadlock.
No, this is the same case as before, i.e., a false positive. The
only way we can call tcp_done in softirq context is if user-space
is not holding slock. On the other hand, userspace never touches
the per-cpu counter without slock, QED.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 10:07 ` Herbert Xu
@ 2008-12-29 10:16 ` Peter Zijlstra
2008-12-29 10:22 ` Herbert Xu
0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2008-12-29 10:16 UTC (permalink / raw)
To: Herbert Xu
Cc: Tantilov, Emil S, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
On Mon, 2008-12-29 at 21:07 +1100, Herbert Xu wrote:
> On Mon, Dec 29, 2008 at 11:02:07AM +0100, Peter Zijlstra wrote:
> >
> > > [ 435.633134] [<ffffffff8025ffb8>] print_usage_bug+0x159/0x16a
> > > [ 435.633139] [<ffffffff8026000e>] valid_state+0x45/0x52
> > > [ 435.633143] [<ffffffff802601cf>] mark_lock_irq+0x1b4/0x27b
> > > [ 435.633148] [<ffffffff80260339>] mark_lock+0xa3/0x110
> > > [ 435.633152] [<ffffffff80260480>] mark_irqflags+0xda/0xf2
> > > [ 435.633157] [<ffffffff8026122e>] __lock_acquire+0x1c3/0x2ee
> > > [ 435.633161] [<ffffffff80261d93>] lock_acquire+0x55/0x71
> > > [ 435.633166] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
> > > [ 435.633170] [<ffffffff80564434>] _spin_lock+0x2c/0x38
> > > [ 435.633175] [<ffffffff803691ac>] ? __percpu_counter_add+0x4a/0x6d
> > > [ 435.633179] [<ffffffff803691ac>] __percpu_counter_add+0x4a/0x6d
> > > [ 435.633184] [<ffffffff804fc827>] percpu_counter_add+0xe/0x10
> > > [ 435.633188] [<ffffffff804fc837>] percpu_counter_inc+0xe/0x10
> > > [ 435.633193] [<ffffffff804fdc91>] tcp_close+0x157/0x2da
> > > [ 435.633197] [<ffffffff8051907e>] inet_release+0x58/0x5f
> > > [ 435.633204] [<ffffffff80527c48>] inet6_release+0x30/0x35
> > > [ 435.633213] [<ffffffff804c9354>] sock_release+0x1a/0x76
> > > [ 435.633221] [<ffffffff804c9804>] sock_close+0x22/0x26
> > > [ 435.633229] [<ffffffff802a345a>] __fput+0x82/0x110
> > > [ 435.633234] [<ffffffff802a381a>] fput+0x15/0x17
> > > [ 435.633239] [<ffffffff802a09c5>] filp_close+0x67/0x72
> > > [ 435.633246] [<ffffffff80240ae3>] close_files+0x66/0x8d
> > > [ 435.633251] [<ffffffff80240b39>] put_files_struct+0x19/0x42
> > > [ 435.633256] [<ffffffff80240b98>] exit_files+0x36/0x3b
> > > [ 435.633260] [<ffffffff80241eec>] do_exit+0x1b7/0x2b1
> > > [ 435.633265] [<ffffffff80242087>] sys_exit_group+0x0/0x14
> > > [ 435.633269] [<ffffffff80242099>] sys_exit_group+0x12/0x14
> > > [ 435.633275] [<ffffffff8020b9cb>] system_call_fastpath+0x16/0x1b
> >
> > Afaict this is a real deadlock.
>
> No, this is the same case as before, i.e., a false positive. The
> only way we can call tcp_done in softirq context is if user-space
> is not holding slock. On the other hand, userspace never touches
> the per-cpu counter without slock, QED.
It's a protocol-wide counter, therefore not protected by slock.

  sk1                             sk2

  close()
    tcp_close()
      lock_sock(sk1)
      percpu_counter_inc()
        spin_lock(sk1->sk_prot->orphan_count->lock);
  -----> softirq
                                  bh_lock_sock(sk2)
                                  percpu_counter_foo()
                                    spin_lock(sk2->sk_prot->orphan_count->lock);

last time I checked that spelled deadlock.
Stop smoking crack -- it's _NOT_ ok to let lockdep splats into mainline
without considerable effort to either fix or annotate them.
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 10:16 ` Peter Zijlstra
@ 2008-12-29 10:22 ` Herbert Xu
0 siblings, 0 replies; 30+ messages in thread
From: Herbert Xu @ 2008-12-29 10:22 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Tantilov, Emil S, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
On Mon, Dec 29, 2008 at 11:16:17AM +0100, Peter Zijlstra wrote:
>
> It's a protocol-wide counter, therefore not protected by slock.
My bad, yes we need to do it with bh disabled.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 10:02 ` Peter Zijlstra
2008-12-29 10:07 ` Herbert Xu
@ 2008-12-29 10:31 ` Herbert Xu
2008-12-29 10:37 ` Herbert Xu
1 sibling, 1 reply; 30+ messages in thread
From: Herbert Xu @ 2008-12-29 10:31 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Tantilov, Emil S, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
On Mon, Dec 29, 2008 at 11:02:07AM +0100, Peter Zijlstra wrote:
>
> Subject: net: fix tcp deadlock
>
> Lockdep spotted that the percpu counter op takes a lock so that softirq
> recursion deadlocks can occur. Delay the op until we've disabled
> softirqs to avoid this.
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
You also need to patch the other places where we use that percpu
counter from process context, e.g., for /proc/net/tcp.
Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 10:31 ` Herbert Xu
@ 2008-12-29 10:37 ` Herbert Xu
2008-12-29 11:28 ` Ingo Molnar
0 siblings, 1 reply; 30+ messages in thread
From: Herbert Xu @ 2008-12-29 10:37 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Tantilov, Emil S, Kirsher, Jeffrey T, netdev, David Miller,
Waskiewicz Jr, Peter P, Duyck, Alexander H, Ingo Molnar,
Eric Dumazet
On Mon, Dec 29, 2008 at 09:31:54PM +1100, Herbert Xu wrote:
>
> You also need to patch the other places where we use that percpu
> counter from process context, e.g., for /proc/net/tcp.
In fact, it looks like just about every spot in the original
changeset (dd24c00191d5e4a1ae896aafe33c6b8095ab4bd1) may be run
from process context. So you might be better off making BH-disabling
variants of the percpu counter ops.
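Something along these lines, presumably -- the percpu_counter_inc_bh()
name below is made up, sketching what such a variant could look like:

        /* hypothetical BH-disabling variant, per the suggestion above */
        static inline void percpu_counter_inc_bh(struct percpu_counter *fbc)
        {
                local_bh_disable();
                percpu_counter_inc(fbc);
                local_bh_enable();
        }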
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 10:37 ` Herbert Xu
@ 2008-12-29 11:28 ` Ingo Molnar
2008-12-29 11:31 ` Ingo Molnar
2008-12-29 11:49 ` Herbert Xu
0 siblings, 2 replies; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 11:28 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
* Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Mon, Dec 29, 2008 at 09:31:54PM +1100, Herbert Xu wrote:
> >
> > You also need to patch the other places where we use that percpu
> > counter from process context, e.g., for /proc/net/tcp.
>
> In fact, it looks like just about every spot in the original
> changeset (dd24c00191d5e4a1ae896aafe33c6b8095ab4bd1) may be run
> from process context. So you might be better off making BH-disabling
> variants of the percpu counter ops.
I got the splat further below with Peter's workaround applied.
Ingo
[ 65.163041]
[ 65.163041] ======================================================
[ 65.164028] [ INFO: soft-safe -> soft-unsafe lock order detected ]
[ 65.164028] 2.6.28-tip-03865-gb8c8c2c-dirty #13131
[ 65.164028] ------------------------------------------------------
[ 65.164028] ssh/2856 [HC0[0]:SC0[1]:HE1:SE0] is trying to acquire:
[ 65.164028] (&fbc->lock){--..}, at: [<c03a94d4>] __percpu_counter_add+0x52/0x7a
[ 65.164028]
[ 65.164028] and this task is already holding:
[ 65.164028] (slock-AF_INET){-+..}, at: [<c05fd3f3>] tcp_close+0x184/0x335
[ 65.164028] which would create a new lock dependency:
[ 65.164028] (slock-AF_INET){-+..} -> (&fbc->lock){--..}
[ 65.164028]
[ 65.164028] but this new dependency connects a soft-irq-safe lock:
[ 65.164028] (slock-AF_INET){-+..}
[ 65.164028] ... which became soft-irq-safe at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0608651>] tcp_delack_timer+0x17/0x1b1
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] to a soft-irq-unsafe lock:
[ 65.164028] (&fbc->lock){--..}
[ 65.164028] ... which became soft-irq-unsafe at:
[ 65.164028] ... [<c014da19>] __lock_acquire+0x58f/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c03a94d4>] __percpu_counter_add+0x52/0x7a
[ 65.164028] [<c019a133>] get_empty_filp+0x9f/0x1b0
[ 65.164028] [<c01a2612>] path_lookup_open+0x23/0x7a
[ 65.164028] [<c01a2710>] do_filp_open+0xa7/0x67e
[ 65.164028] [<c019777c>] do_sys_open+0x47/0xbc
[ 65.164028] [<c019783d>] sys_open+0x23/0x2b
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] other info that might help us debug this:
[ 65.164028]
[ 65.164028] 1 lock held by ssh/2856:
[ 65.164028] #0: (slock-AF_INET){-+..}, at: [<c05fd3f3>] tcp_close+0x184/0x335
[ 65.164028]
[ 65.164028] the soft-irq-safe lock's dependencies:
[ 65.164028] -> (slock-AF_INET){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05c3916>] lock_sock_nested+0x2c/0xc3
[ 65.164028] [<c05fd289>] tcp_close+0x1a/0x335
[ 65.164028] [<c0615b42>] inet_release+0x47/0x4d
[ 65.164028] [<c05c20be>] sock_release+0x15/0x58
[ 65.164028] [<c05c2510>] sock_close+0x21/0x25
[ 65.164028] [<c0199f08>] __fput+0xcf/0x1a2
[ 65.164028] [<c019a2da>] fput+0x1c/0x1e
[ 65.164028] [<c0197685>] filp_close+0x55/0x5f
[ 65.164028] [<c01976fc>] sys_close+0x6d/0xa6
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0608651>] tcp_delack_timer+0x17/0x1b1
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05c3916>] lock_sock_nested+0x2c/0xc3
[ 65.164028] [<c05fd289>] tcp_close+0x1a/0x335
[ 65.164028] [<c0615b42>] inet_release+0x47/0x4d
[ 65.164028] [<c05c20be>] sock_release+0x15/0x58
[ 65.164028] [<c05c2510>] sock_close+0x21/0x25
[ 65.164028] [<c0199f08>] __fput+0xcf/0x1a2
[ 65.164028] [<c019a2da>] fput+0x1c/0x1e
[ 65.164028] [<c0197685>] filp_close+0x55/0x5f
[ 65.164028] [<c01976fc>] sys_close+0x6d/0xa6
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c132a184>] af_family_slock_keys+0x10/0x120
[ 65.164028] -> (pool_lock){.+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c03a5da9>] __debug_object_init+0xb5/0x29e
[ 65.164028] [<c03a5fbd>] debug_object_init+0x13/0x16
[ 65.164028] [<c0140344>] hrtimer_init+0x1b/0x2b
[ 65.164028] [<c0a7b1da>] sched_init+0x99/0x333
[ 65.164028] [<c0a6e6a9>] start_kernel+0x13e/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5d3b>] __debug_object_init+0x47/0x29e
[ 65.164028] [<c03a5fbd>] debug_object_init+0x13/0x16
[ 65.164028] [<c0134b9b>] timer_fixup_activate+0x2d/0x7c
[ 65.164028] [<c03a5805>] debug_object_fixup+0x17/0x22
[ 65.164028] [<c03a5bb1>] debug_object_activate+0xb6/0xbd
[ 65.164028] [<c0135350>] __mod_timer+0x6d/0xc8
[ 65.164028] [<c0135585>] mod_timer+0x38/0x3c
[ 65.164028] [<c05f9c54>] inet_twsk_schedule+0x13f/0x158
[ 65.164028] [<c060c7aa>] tcp_time_wait+0x169/0x1a7
[ 65.164028] [<c0600b00>] tcp_fin+0x80/0x16a
[ 65.164028] [<c060159d>] tcp_data_queue+0x293/0xa6d
[ 65.164028] [<c0604869>] tcp_rcv_state_process+0x852/0x8b7
[ 65.164028] [<c0609f92>] tcp_v4_do_rcv+0x25c/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c09f5924>] pool_lock+0x10/0x24
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5d3b>] __debug_object_init+0x47/0x29e
[ 65.164028] [<c03a5fbd>] debug_object_init+0x13/0x16
[ 65.164028] [<c0134b4d>] init_timer+0x15/0x1f
[ 65.164028] [<c05faa8f>] inet_csk_init_xmit_timers+0x25/0x6a
[ 65.164028] [<c0608305>] tcp_init_xmit_timers+0x1c/0x1f
[ 65.164028] [<c060c40d>] tcp_create_openreq_child+0x190/0x3c4
[ 65.164028] [<c060a5f4>] tcp_v4_syn_recv_sock+0x53/0x1c0
[ 65.164028] [<c060c0f0>] tcp_check_req+0x1e0/0x36d
[ 65.164028] [<c0609ee8>] tcp_v4_do_rcv+0x1b2/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&obj_hash[i].lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5d89>] __debug_object_init+0x95/0x29e
[ 65.164028] [<c03a5fbd>] debug_object_init+0x13/0x16
[ 65.164028] [<c0140344>] hrtimer_init+0x1b/0x2b
[ 65.164028] [<c0a7b1da>] sched_init+0x99/0x333
[ 65.164028] [<c0a6e6a9>] start_kernel+0x13e/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5a95>] debug_object_deactivate+0x29/0x8f
[ 65.164028] [<c0140473>] __run_hrtimer+0x47/0xae
[ 65.164028] [<c01407e2>] hrtimer_run_queues+0xf6/0x124
[ 65.164028] [<c0134fb5>] run_local_timers+0xd/0x1e
[ 65.164028] [<c0135445>] update_process_times+0x29/0x53
[ 65.164028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 65.164028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 65.164028] [<c06c943c>] __irqentry_text_start+0x74/0x87
[ 65.164028] [<c0104061>] apic_timer_interrupt+0x2d/0x34
[ 65.164028] [<c012d4a0>] printk+0x1a/0x1c
[ 65.164028] [<c0101132>] do_one_initcall+0x4a/0x17f
[ 65.164028] [<c0a6e4ee>] kernel_init+0xfc/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5a95>] debug_object_deactivate+0x29/0x8f
[ 65.164028] [<c0134d00>] run_timer_softirq+0x116/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c127aabc>] __key.19955+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c03a5da9>] __debug_object_init+0xb5/0x29e
[ 65.164028] [<c03a5fbd>] debug_object_init+0x13/0x16
[ 65.164028] [<c0140344>] hrtimer_init+0x1b/0x2b
[ 65.164028] [<c0a7b1da>] sched_init+0x99/0x333
[ 65.164028] [<c0a6e6a9>] start_kernel+0x13e/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5d89>] __debug_object_init+0x95/0x29e
[ 65.164028] [<c03a5fbd>] debug_object_init+0x13/0x16
[ 65.164028] [<c0134b4d>] init_timer+0x15/0x1f
[ 65.164028] [<c05faa8f>] inet_csk_init_xmit_timers+0x25/0x6a
[ 65.164028] [<c0608305>] tcp_init_xmit_timers+0x1c/0x1f
[ 65.164028] [<c060c40d>] tcp_create_openreq_child+0x190/0x3c4
[ 65.164028] [<c060a5f4>] tcp_v4_syn_recv_sock+0x53/0x1c0
[ 65.164028] [<c060c0f0>] tcp_check_req+0x1e0/0x36d
[ 65.164028] [<c0609ee8>] tcp_v4_do_rcv+0x1b2/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&hashinfo->ehash_locks[i]){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05f89c7>] __inet_hash_nolisten+0xea/0x11f
[ 65.164028] [<c05f8d9a>] __inet_hash_connect+0x17a/0x213
[ 65.164028] [<c05f8e6e>] inet_hash_connect+0x3b/0x42
[ 65.164028] [<c060b222>] tcp_v4_connect+0x33f/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05f9d0d>] __inet_twsk_hashdance+0xa0/0x103
[ 65.164028] [<c060c772>] tcp_time_wait+0x131/0x1a7
[ 65.164028] [<c0600b00>] tcp_fin+0x80/0x16a
[ 65.164028] [<c060159d>] tcp_data_queue+0x293/0xa6d
[ 65.164028] [<c0604869>] tcp_rcv_state_process+0x852/0x8b7
[ 65.164028] [<c0609f92>] tcp_v4_do_rcv+0x25c/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05f89c7>] __inet_hash_nolisten+0xea/0x11f
[ 65.164028] [<c05f8d9a>] __inet_hash_connect+0x17a/0x213
[ 65.164028] [<c05f8e6e>] inet_hash_connect+0x3b/0x42
[ 65.164028] [<c060b222>] tcp_v4_connect+0x33f/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c132b8a8>] __key.38748+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05f89c7>] __inet_hash_nolisten+0xea/0x11f
[ 65.164028] [<c060a71a>] tcp_v4_syn_recv_sock+0x179/0x1c0
[ 65.164028] [<c060c0f0>] tcp_check_req+0x1e0/0x36d
[ 65.164028] [<c0609ee8>] tcp_v4_do_rcv+0x1b2/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&tcp_hashinfo.bhash[i].lock){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05fa7de>] inet_csk_get_port+0xc4/0x1d2
[ 65.164028] [<c0615f9e>] inet_bind+0x102/0x1a4
[ 65.164028] [<c05c27de>] sys_bind+0x53/0x72
[ 65.164028] [<c05c2da0>] sys_socketcall+0x7f/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05f9cbc>] __inet_twsk_hashdance+0x4f/0x103
[ 65.164028] [<c060c772>] tcp_time_wait+0x131/0x1a7
[ 65.164028] [<c0600b00>] tcp_fin+0x80/0x16a
[ 65.164028] [<c060159d>] tcp_data_queue+0x293/0xa6d
[ 65.164028] [<c0604869>] tcp_rcv_state_process+0x852/0x8b7
[ 65.164028] [<c0609f92>] tcp_v4_do_rcv+0x25c/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05fa7de>] inet_csk_get_port+0xc4/0x1d2
[ 65.164028] [<c0615f9e>] inet_bind+0x102/0x1a4
[ 65.164028] [<c05c27de>] sys_bind+0x53/0x72
[ 65.164028] [<c05c2da0>] sys_socketcall+0x7f/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c132b8a0>] __key.45901+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05f89c7>] __inet_hash_nolisten+0xea/0x11f
[ 65.164028] [<c05f8d9a>] __inet_hash_connect+0x17a/0x213
[ 65.164028] [<c05f8e6e>] inet_hash_connect+0x3b/0x42
[ 65.164028] [<c060b222>] tcp_v4_connect+0x33f/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&n->list_lock){.+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0193510>] deactivate_slab+0x71/0xd6
[ 65.164028] [<c0193ac0>] __slab_alloc+0x91/0x441
[ 65.164028] [<c0194f21>] __kmalloc+0x99/0x10c
[ 65.164028] [<c0195a3d>] kmem_cache_create+0x1bd/0x1d4
[ 65.164028] [<c0a8033e>] vfs_caches_init+0x63/0x107
[ 65.164028] [<c0a6e837>] start_kernel+0x2cc/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c019373f>] __slab_free+0x1b5/0x243
[ 65.164028] [<c0194002>] kfree+0xc1/0xf7
[ 65.164028] [<c016e092>] rcu_free_old_probes+0xd/0xf
[ 65.164028] [<c0169ea3>] __rcu_process_callbacks+0x162/0x1f7
[ 65.164028] [<c0169f5e>] rcu_process_callbacks+0x26/0x46
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c12296a4>] __key.25975+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0193ad7>] __slab_alloc+0xa8/0x441
[ 65.164028] [<c0193ed3>] kmem_cache_alloc+0x63/0xd1
[ 65.164028] [<c05f8be8>] inet_bind_bucket_create+0x19/0x51
[ 65.164028] [<c05f8d21>] __inet_hash_connect+0x101/0x213
[ 65.164028] [<c05f8e6e>] inet_hash_connect+0x3b/0x42
[ 65.164028] [<c060b222>] tcp_v4_connect+0x33f/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05f8ea7>] __inet_inherit_port+0x32/0x66
[ 65.164028] [<c060a724>] tcp_v4_syn_recv_sock+0x183/0x1c0
[ 65.164028] [<c060c0f0>] tcp_check_req+0x1e0/0x36d
[ 65.164028] [<c0609ee8>] tcp_v4_do_rcv+0x1b2/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&queue->syn_wait_lock){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8bf6>] _write_lock_bh+0x28/0x58
[ 65.164028] [<c05c579b>] reqsk_queue_alloc+0xba/0xcf
[ 65.164028] [<c05fa676>] inet_csk_listen_start+0x1e/0xc2
[ 65.164028] [<c0615e78>] inet_listen+0x43/0x67
[ 65.164028] [<c05c1a51>] sys_listen+0x3f/0x59
[ 65.164028] [<c05c2dbb>] sys_socketcall+0x9a/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b9e>] _write_lock+0x23/0x53
[ 65.164028] [<c05fa403>] inet_csk_reqsk_queue_hash_add+0xca/0x101
[ 65.164028] [<c060ab5f>] tcp_v4_conn_request+0x3fe/0x435
[ 65.164028] [<c0604079>] tcp_rcv_state_process+0x62/0x8b7
[ 65.164028] [<c0609f92>] tcp_v4_do_rcv+0x25c/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8bf6>] _write_lock_bh+0x28/0x58
[ 65.164028] [<c05c579b>] reqsk_queue_alloc+0xba/0xcf
[ 65.164028] [<c05fa676>] inet_csk_listen_start+0x1e/0xc2
[ 65.164028] [<c0615e78>] inet_listen+0x43/0x67
[ 65.164028] [<c05c1a51>] sys_listen+0x3f/0x59
[ 65.164028] [<c05c2dbb>] sys_socketcall+0x9a/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c132a3bc>] __key.34733+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b9e>] _write_lock+0x23/0x53
[ 65.164028] [<c060c15f>] tcp_check_req+0x24f/0x36d
[ 65.164028] [<c0609ee8>] tcp_v4_do_rcv+0x1b2/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&base->lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c01351eb>] lock_timer_base+0x24/0x43
[ 65.164028] [<c0135312>] __mod_timer+0x2f/0xc8
[ 65.164028] [<c0135585>] mod_timer+0x38/0x3c
[ 65.164028] [<c0a88246>] con_init+0xa1/0x215
[ 65.164028] [<c0a87aff>] console_init+0x12/0x20
[ 65.164028] [<c0a6e795>] start_kernel+0x22a/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c01351eb>] lock_timer_base+0x24/0x43
[ 65.164028] [<c0135312>] __mod_timer+0x2f/0xc8
[ 65.164028] [<c0135585>] mod_timer+0x38/0x3c
[ 65.164028] [<c05dbeb0>] __netdev_watchdog_up+0x47/0x55
[ 65.164028] [<c05dc228>] netif_carrier_on+0x34/0x37
[ 65.164028] [<c04b0fed>] nv_linkchange+0x21/0x5b
[ 65.164028] [<c04b1057>] nv_link_irq+0x30/0x33
[ 65.164028] [<c04b2201>] nv_nic_irq_optimized+0xfe/0x22d
[ 65.164028] [<c0166953>] handle_IRQ_event+0x1f/0x54
[ 65.164028] [<c0167f4b>] handle_fasteoi_irq+0xaa/0xb7
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8df5>] _spin_lock_irq+0x32/0x62
[ 65.164028] [<c0134c21>] run_timer_softirq+0x37/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0e2fbac>] __key.23710+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5b27>] debug_object_activate+0x2c/0xbd
[ 65.164028] [<c0135350>] __mod_timer+0x6d/0xc8
[ 65.164028] [<c0135585>] mod_timer+0x38/0x3c
[ 65.164028] [<c0a88246>] con_init+0xa1/0x215
[ 65.164028] [<c0a87aff>] console_init+0x12/0x20
[ 65.164028] [<c0a6e795>] start_kernel+0x22a/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5d3b>] __debug_object_init+0x47/0x29e
[ 65.164028] [<c03a5fbd>] debug_object_init+0x13/0x16
[ 65.164028] [<c0134b9b>] timer_fixup_activate+0x2d/0x7c
[ 65.164028] [<c03a5805>] debug_object_fixup+0x17/0x22
[ 65.164028] [<c03a5bb1>] debug_object_activate+0xb6/0xbd
[ 65.164028] [<c0135350>] __mod_timer+0x6d/0xc8
[ 65.164028] [<c04408f4>] __reschedule_timeout+0x6f/0xaf
[ 65.164028] [<c0440961>] reschedule_timeout+0x2d/0x3f
[ 65.164028] [<c0a8998a>] floppy_init+0x172/0xdb3
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e4ee>] kernel_init+0xfc/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&base->lock/1){....} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8a65>] _spin_lock_nested+0x21/0x51
[ 65.164028] [<c06c462e>] timer_cpu_notify+0x1a2/0x22b
[ 65.164028] [<c0141b6b>] notifier_call_chain+0x49/0x71
[ 65.164028] [<c0141c09>] __raw_notifier_call_chain+0x13/0x15
[ 65.164028] [<c0141c1c>] raw_notifier_call_chain+0x11/0x13
[ 65.164028] [<c069a366>] _cpu_down+0x168/0x221
[ 65.164028] [<c069a454>] cpu_down+0x35/0x57
[ 65.164028] [<c069b594>] store_online+0x2a/0x5e
[ 65.164028] [<c04397a5>] sysdev_store+0x20/0x28
[ 65.164028] [<c01d2f30>] sysfs_write_file+0xbd/0xe8
[ 65.164028] [<c019958a>] vfs_write+0x91/0x138
[ 65.164028] [<c0199a95>] sys_write+0x40/0x65
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0e2fbad>] __key.23710+0x1/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5a95>] debug_object_deactivate+0x29/0x8f
[ 65.164028] [<c0134af4>] migrate_timer_list+0x1d/0x4d
[ 65.164028] [<c06c4644>] timer_cpu_notify+0x1b8/0x22b
[ 65.164028] [<c0141b6b>] notifier_call_chain+0x49/0x71
[ 65.164028] [<c0141c09>] __raw_notifier_call_chain+0x13/0x15
[ 65.164028] [<c0141c1c>] raw_notifier_call_chain+0x11/0x13
[ 65.164028] [<c069a366>] _cpu_down+0x168/0x221
[ 65.164028] [<c069a454>] cpu_down+0x35/0x57
[ 65.164028] [<c069b594>] store_online+0x2a/0x5e
[ 65.164028] [<c04397a5>] sysdev_store+0x20/0x28
[ 65.164028] [<c01d2f30>] sysfs_write_file+0xbd/0xe8
[ 65.164028] [<c019958a>] vfs_write+0x91/0x138
[ 65.164028] [<c0199a95>] sys_write+0x40/0x65
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8a65>] _spin_lock_nested+0x21/0x51
[ 65.164028] [<c06c462e>] timer_cpu_notify+0x1a2/0x22b
[ 65.164028] [<c0141b6b>] notifier_call_chain+0x49/0x71
[ 65.164028] [<c0141c09>] __raw_notifier_call_chain+0x13/0x15
[ 65.164028] [<c0141c1c>] raw_notifier_call_chain+0x11/0x13
[ 65.164028] [<c069a366>] _cpu_down+0x168/0x221
[ 65.164028] [<c069a454>] cpu_down+0x35/0x57
[ 65.164028] [<c069b594>] store_online+0x2a/0x5e
[ 65.164028] [<c04397a5>] sysdev_store+0x20/0x28
[ 65.164028] [<c01d2f30>] sysfs_write_file+0xbd/0xe8
[ 65.164028] [<c019958a>] vfs_write+0x91/0x138
[ 65.164028] [<c0199a95>] sys_write+0x40/0x65
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c01351eb>] lock_timer_base+0x24/0x43
[ 65.164028] [<c01355af>] del_timer+0x26/0x66
[ 65.164028] [<c05c39c4>] sk_stop_timer+0x17/0x22
[ 65.164028] [<c05f9efb>] inet_csk_delete_keepalive_timer+0x13/0x15
[ 65.164028] [<c060c192>] tcp_check_req+0x282/0x36d
[ 65.164028] [<c0609ee8>] tcp_v4_do_rcv+0x1b2/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&q->lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8df5>] _spin_lock_irq+0x32/0x62
[ 65.164028] [<c06c5c49>] wait_for_common+0x2f/0x110
[ 65.164028] [<c06c5dc5>] wait_for_completion+0x17/0x19
[ 65.164028] [<c013df64>] kthread_create+0x75/0xa0
[ 65.164028] [<c06c3afb>] migration_call+0x39/0x4d6
[ 65.164028] [<c0a7aeca>] migration_init+0x1d/0x4b
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e445>] kernel_init+0x53/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c011fd42>] __wake_up+0x1a/0x40
[ 65.164028] [<c013aea2>] insert_work+0x48/0x50
[ 65.164028] [<c013b5df>] __queue_work+0x22/0x30
[ 65.164028] [<c013b650>] queue_work_on+0x3a/0x46
[ 65.164028] [<c013b710>] queue_work+0x1a/0x1d
[ 65.164028] [<c013b727>] schedule_work+0x14/0x16
[ 65.164028] [<c044142d>] schedule_bh+0x17/0x19
[ 65.164028] [<c04415a7>] floppy_interrupt+0x15a/0x171
[ 65.164028] [<c044385a>] floppy_hardint+0x22/0xe1
[ 65.164028] [<c0166953>] handle_IRQ_event+0x1f/0x54
[ 65.164028] [<c0167ffe>] handle_edge_irq+0xa6/0x10d
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c011fcb3>] complete+0x17/0x43
[ 65.164028] [<c013c1ee>] wakeme_after_rcu+0x10/0x12
[ 65.164028] [<c0169ea3>] __rcu_process_callbacks+0x162/0x1f7
[ 65.164028] [<c0169f5e>] rcu_process_callbacks+0x26/0x46
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0e2ff64>] __key.18770+0x0/0x8
[ 65.164028] -> (&rq->lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c0123e2a>] rq_attach_root+0x19/0x191
[ 65.164028] [<c0a7b355>] sched_init+0x214/0x333
[ 65.164028] [<c0a6e6a9>] start_kernel+0x13e/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c012a118>] scheduler_tick+0x3a/0x211
[ 65.164028] [<c0135463>] update_process_times+0x47/0x53
[ 65.164028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 65.164028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 65.164028] [<c0105c7a>] timer_interrupt+0x46/0x4d
[ 65.164028] [<c0166953>] handle_IRQ_event+0x1f/0x54
[ 65.164028] [<c0168123>] handle_level_irq+0xbe/0xcb
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c012a118>] scheduler_tick+0x3a/0x211
[ 65.164028] [<c0135463>] update_process_times+0x47/0x53
[ 65.164028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 65.164028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 65.164028] [<c06c943c>] __irqentry_text_start+0x74/0x87
[ 65.164028] [<c0104061>] apic_timer_interrupt+0x2d/0x34
[ 65.164028] [<c0134d86>] run_timer_softirq+0x19c/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0c2cdf4>] __key.45602+0x0/0x8
[ 65.164028] -> (&vec->lock){.+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c0178677>] cpupri_set+0xc1/0x12f
[ 65.164028] [<c0122d2f>] rq_online_rt+0x92/0x9a
[ 65.164028] [<c012012c>] set_rq_online+0x72/0x80
[ 65.164028] [<c0123f8f>] rq_attach_root+0x17e/0x191
[ 65.164028] [<c0a7b355>] sched_init+0x214/0x333
[ 65.164028] [<c0a6e6a9>] start_kernel+0x13e/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c0178607>] cpupri_set+0x51/0x12f
[ 65.164028] [<c0122f72>] __enqueue_rt_entity+0xa3/0x14a
[ 65.164028] [<c012307d>] enqueue_task_rt+0x35/0x47
[ 65.164028] [<c011f747>] enqueue_task+0x51/0x5d
[ 65.164028] [<c011f771>] activate_task+0x1e/0x24
[ 65.164028] [<c0126c24>] try_to_wake_up+0x23e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c012757b>] rebalance_domains+0x3ea/0x4d5
[ 65.164028] [<c012939f>] run_rebalance_domains+0x33/0xf2
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c1222338>] __key.15525+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c0178677>] cpupri_set+0xc1/0x12f
[ 65.164028] [<c0122d2f>] rq_online_rt+0x92/0x9a
[ 65.164028] [<c012012c>] set_rq_online+0x72/0x80
[ 65.164028] [<c0123f8f>] rq_attach_root+0x17e/0x191
[ 65.164028] [<c0a7b355>] sched_init+0x214/0x333
[ 65.164028] [<c0a6e6a9>] start_kernel+0x13e/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&rt_b->rt_runtime_lock){.+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0122fb6>] __enqueue_rt_entity+0xe7/0x14a
[ 65.164028] [<c012307d>] enqueue_task_rt+0x35/0x47
[ 65.164028] [<c011f747>] enqueue_task+0x51/0x5d
[ 65.164028] [<c011f771>] activate_task+0x1e/0x24
[ 65.164028] [<c0126c24>] try_to_wake_up+0x23e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c06c3b7b>] migration_call+0xb9/0x4d6
[ 65.164028] [<c0a7aee8>] migration_init+0x3b/0x4b
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e445>] kernel_init+0x53/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0122fb6>] __enqueue_rt_entity+0xe7/0x14a
[ 65.164028] [<c012307d>] enqueue_task_rt+0x35/0x47
[ 65.164028] [<c011f747>] enqueue_task+0x51/0x5d
[ 65.164028] [<c011f771>] activate_task+0x1e/0x24
[ 65.164028] [<c0126c24>] try_to_wake_up+0x23e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c012757b>] rebalance_domains+0x3ea/0x4d5
[ 65.164028] [<c012939f>] run_rebalance_domains+0x33/0xf2
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0c2cdfc>] __key.37294+0x0/0x8
[ 65.164028] -> (&cpu_base->lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c01408f7>] lock_hrtimer_base+0x1d/0x38
[ 65.164028] [<c0140a62>] hrtimer_start_range_ns+0x1e/0x142
[ 65.164028] [<c0123002>] __enqueue_rt_entity+0x133/0x14a
[ 65.164028] [<c012307d>] enqueue_task_rt+0x35/0x47
[ 65.164028] [<c011f747>] enqueue_task+0x51/0x5d
[ 65.164028] [<c011f771>] activate_task+0x1e/0x24
[ 65.164028] [<c0126c24>] try_to_wake_up+0x23e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c06c3b7b>] migration_call+0xb9/0x4d6
[ 65.164028] [<c0a7aee8>] migration_init+0x3b/0x4b
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e445>] kernel_init+0x53/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c01407c1>] hrtimer_run_queues+0xd5/0x124
[ 65.164028] [<c0134fb5>] run_local_timers+0xd/0x1e
[ 65.164028] [<c0135445>] update_process_times+0x29/0x53
[ 65.164028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 65.164028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 65.164028] [<c06c943c>] __irqentry_text_start+0x74/0x87
[ 65.164028] [<c0104061>] apic_timer_interrupt+0x2d/0x34
[ 65.164028] [<c012d4a0>] printk+0x1a/0x1c
[ 65.164028] [<c01011ac>] do_one_initcall+0xc4/0x17f
[ 65.164028] [<c0a6e445>] kernel_init+0x53/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c01407c1>] hrtimer_run_queues+0xd5/0x124
[ 65.164028] [<c0134fb5>] run_local_timers+0xd/0x1e
[ 65.164028] [<c0135445>] update_process_times+0x29/0x53
[ 65.164028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 65.164028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 65.164028] [<c06c943c>] __irqentry_text_start+0x74/0x87
[ 65.164028] [<c0104061>] apic_timer_interrupt+0x2d/0x34
[ 65.164028] [<c0134d86>] run_timer_softirq+0x19c/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0e2ff8c>] __key.21090+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5b27>] debug_object_activate+0x2c/0xbd
[ 65.164028] [<c0140505>] enqueue_hrtimer+0x2b/0x15d
[ 65.164028] [<c0140b6e>] hrtimer_start_range_ns+0x12a/0x142
[ 65.164028] [<c0123002>] __enqueue_rt_entity+0x133/0x14a
[ 65.164028] [<c012307d>] enqueue_task_rt+0x35/0x47
[ 65.164028] [<c011f747>] enqueue_task+0x51/0x5d
[ 65.164028] [<c011f771>] activate_task+0x1e/0x24
[ 65.164028] [<c0126c24>] try_to_wake_up+0x23e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c06c3b7b>] migration_call+0xb9/0x4d6
[ 65.164028] [<c0a7aee8>] migration_init+0x3b/0x4b
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e445>] kernel_init+0x53/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&cpu_base->lock/1){....} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8a65>] _spin_lock_nested+0x21/0x51
[ 65.164028] [<c06c4789>] hrtimer_cpu_notify+0xd2/0x179
[ 65.164028] [<c0141b6b>] notifier_call_chain+0x49/0x71
[ 65.164028] [<c0141c09>] __raw_notifier_call_chain+0x13/0x15
[ 65.164028] [<c0141c1c>] raw_notifier_call_chain+0x11/0x13
[ 65.164028] [<c069a366>] _cpu_down+0x168/0x221
[ 65.164028] [<c069a454>] cpu_down+0x35/0x57
[ 65.164028] [<c069b594>] store_online+0x2a/0x5e
[ 65.164028] [<c04397a5>] sysdev_store+0x20/0x28
[ 65.164028] [<c01d2f30>] sysfs_write_file+0xbd/0xe8
[ 65.164028] [<c019958a>] vfs_write+0x91/0x138
[ 65.164028] [<c0199a95>] sys_write+0x40/0x65
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0e2ff8d>] __key.21090+0x1/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c03a5a95>] debug_object_deactivate+0x29/0x8f
[ 65.164028] [<c06c47c5>] hrtimer_cpu_notify+0x10e/0x179
[ 65.164028] [<c0141b6b>] notifier_call_chain+0x49/0x71
[ 65.164028] [<c0141c09>] __raw_notifier_call_chain+0x13/0x15
[ 65.164028] [<c0141c1c>] raw_notifier_call_chain+0x11/0x13
[ 65.164028] [<c069a366>] _cpu_down+0x168/0x221
[ 65.164028] [<c069a454>] cpu_down+0x35/0x57
[ 65.164028] [<c069b594>] store_online+0x2a/0x5e
[ 65.164028] [<c04397a5>] sysdev_store+0x20/0x28
[ 65.164028] [<c01d2f30>] sysfs_write_file+0xbd/0xe8
[ 65.164028] [<c019958a>] vfs_write+0x91/0x138
[ 65.164028] [<c0199a95>] sys_write+0x40/0x65
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8a65>] _spin_lock_nested+0x21/0x51
[ 65.164028] [<c06c4789>] hrtimer_cpu_notify+0xd2/0x179
[ 65.164028] [<c0141b6b>] notifier_call_chain+0x49/0x71
[ 65.164028] [<c0141c09>] __raw_notifier_call_chain+0x13/0x15
[ 65.164028] [<c0141c1c>] raw_notifier_call_chain+0x11/0x13
[ 65.164028] [<c069a366>] _cpu_down+0x168/0x221
[ 65.164028] [<c069a454>] cpu_down+0x35/0x57
[ 65.164028] [<c069b594>] store_online+0x2a/0x5e
[ 65.164028] [<c04397a5>] sysdev_store+0x20/0x28
[ 65.164028] [<c01d2f30>] sysfs_write_file+0xbd/0xe8
[ 65.164028] [<c019958a>] vfs_write+0x91/0x138
[ 65.164028] [<c0199a95>] sys_write+0x40/0x65
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c01408f7>] lock_hrtimer_base+0x1d/0x38
[ 65.164028] [<c0140a62>] hrtimer_start_range_ns+0x1e/0x142
[ 65.164028] [<c0123002>] __enqueue_rt_entity+0x133/0x14a
[ 65.164028] [<c012307d>] enqueue_task_rt+0x35/0x47
[ 65.164028] [<c011f747>] enqueue_task+0x51/0x5d
[ 65.164028] [<c011f771>] activate_task+0x1e/0x24
[ 65.164028] [<c0126c24>] try_to_wake_up+0x23e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c06c3b7b>] migration_call+0xb9/0x4d6
[ 65.164028] [<c0a7aee8>] migration_init+0x3b/0x4b
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e445>] kernel_init+0x53/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&rt_rq->rt_runtime_lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c012246d>] update_curr_rt+0xdf/0x198
[ 65.164028] [<c012302c>] dequeue_task_rt+0x13/0x2f
[ 65.164028] [<c011f6c3>] dequeue_task+0xfb/0x10a
[ 65.164028] [<c011f6f0>] deactivate_task+0x1e/0x24
[ 65.164028] [<c06c5fc3>] schedule+0x168/0x98a
[ 65.164028] [<c0127df5>] migration_thread+0x1b3/0x24a
[ 65.164028] [<c013dfcf>] kthread+0x40/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c012314f>] sched_rt_period_timer+0xc0/0x239
[ 65.164028] [<c01404a8>] __run_hrtimer+0x7c/0xae
[ 65.164028] [<c01407e2>] hrtimer_run_queues+0xf6/0x124
[ 65.164028] [<c0134fb5>] run_local_timers+0xd/0x1e
[ 65.164028] [<c0135445>] update_process_times+0x29/0x53
[ 65.164028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 65.164028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 65.164028] [<c06c943c>] __irqentry_text_start+0x74/0x87
[ 65.164028] [<c0104061>] apic_timer_interrupt+0x2d/0x34
[ 65.164028] [<c0108f9b>] default_idle+0x5d/0x97
[ 65.164028] [<c010233d>] cpu_idle+0x84/0xa5
[ 65.164028] [<c0699b3f>] rest_init+0x53/0x55
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c012314f>] sched_rt_period_timer+0xc0/0x239
[ 65.164028] [<c01404a8>] __run_hrtimer+0x7c/0xae
[ 65.164028] [<c01407e2>] hrtimer_run_queues+0xf6/0x124
[ 65.164028] [<c0134fb5>] run_local_timers+0xd/0x1e
[ 65.164028] [<c0135445>] update_process_times+0x29/0x53
[ 65.164028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 65.164028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 65.164028] [<c06c943c>] __irqentry_text_start+0x74/0x87
[ 65.164028] [<c0104061>] apic_timer_interrupt+0x2d/0x34
[ 65.164028] [<c017b301>] mempool_free_slab+0x13/0x15
[ 65.164028] [<c017b36f>] mempool_free+0x6c/0x74
[ 65.164028] [<c01b72c5>] bio_free+0x2d/0x49
[ 65.164028] [<c01b72f4>] bio_fs_destructor+0x13/0x15
[ 65.164028] [<c01b6467>] bio_put+0x2b/0x2d
[ 65.164028] [<c01b5732>] end_bio_bh_io_sync+0x2f/0x32
[ 65.164028] [<c01b61fc>] bio_endio+0x2d/0x30
[ 65.164028] [<c0384bc6>] req_bio_endio+0x81/0x9e
[ 65.164028] [<c0384d4f>] __end_that_request_first+0x16c/0x265
[ 65.164028] [<c0384e61>] end_that_request_data+0x19/0x47
[ 65.164028] [<c0385753>] blk_end_io+0x21/0x6f
[ 65.164028] [<c03857da>] blk_end_request+0x11/0x13
[ 65.164028] [<c04ec256>] scsi_end_request+0x24/0x80
[ 65.164028] [<c04ece1e>] scsi_io_completion+0x1a4/0x376
[ 65.164028] [<c04e7588>] scsi_finish_command+0xd1/0xd9
[ 65.164028] [<c04ed0e3>] scsi_softirq_done+0xf3/0xfb
[ 65.164028] [<c0389705>] blk_done_softirq+0x72/0x81
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0c2ce04>] __key.45584+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0121921>] __enable_runtime+0x33/0x7d
[ 65.164028] [<c0122d15>] rq_online_rt+0x78/0x9a
[ 65.164028] [<c012012c>] set_rq_online+0x72/0x80
[ 65.164028] [<c06c3be9>] migration_call+0x127/0x4d6
[ 65.164028] [<c0141b6b>] notifier_call_chain+0x49/0x71
[ 65.164028] [<c0141c09>] __raw_notifier_call_chain+0x13/0x15
[ 65.164028] [<c0141c1c>] raw_notifier_call_chain+0x11/0x13
[ 65.164028] [<c06c40cb>] _cpu_up+0xbd/0xef
[ 65.164028] [<c06c4146>] cpu_up+0x49/0x59
[ 65.164028] [<c0a6e48d>] kernel_init+0x9b/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0122fb6>] __enqueue_rt_entity+0xe7/0x14a
[ 65.164028] [<c012307d>] enqueue_task_rt+0x35/0x47
[ 65.164028] [<c011f747>] enqueue_task+0x51/0x5d
[ 65.164028] [<c011f771>] activate_task+0x1e/0x24
[ 65.164028] [<c0126c24>] try_to_wake_up+0x23e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c06c3b7b>] migration_call+0xb9/0x4d6
[ 65.164028] [<c0a7aee8>] migration_init+0x3b/0x4b
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e445>] kernel_init+0x53/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c012246d>] update_curr_rt+0xdf/0x198
[ 65.164028] [<c012302c>] dequeue_task_rt+0x13/0x2f
[ 65.164028] [<c011f6c3>] dequeue_task+0xfb/0x10a
[ 65.164028] [<c011f6f0>] deactivate_task+0x1e/0x24
[ 65.164028] [<c06c5fc3>] schedule+0x168/0x98a
[ 65.164028] [<c0127df5>] migration_thread+0x1b3/0x24a
[ 65.164028] [<c013dfcf>] kthread+0x40/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&zone->lock){.+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c017cfbf>] free_pages_bulk+0x21/0x1f2
[ 65.164028] [<c017d559>] free_hot_cold_page+0x1d8/0x233
[ 65.164028] [<c017d602>] free_hot_page+0xf/0x11
[ 65.164028] [<c017d9b4>] __free_pages+0x2a/0x35
[ 65.164028] [<c0a9c969>] __free_pages_bootmem+0x7e/0x82
[ 65.164028] [<c0a7eb2c>] free_all_bootmem_core+0xcb/0x180
[ 65.164028] [<c0a7ebee>] free_all_bootmem+0xd/0xf
[ 65.164028] [<c0a79f2a>] mem_init+0x26/0x261
[ 65.164028] [<c0a6e7c8>] start_kernel+0x25d/0x30a
[ 65.164028] [<c0a6e056>] i386_start_kernel+0x56/0x5e
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c017d7ad>] __free_pages_ok+0x1a9/0x386
[ 65.164028] [<c017d9bd>] __free_pages+0x33/0x35
[ 65.164028] [<c0193463>] __free_slab+0xad/0xb5
[ 65.164028] [<c019349d>] discard_slab+0x32/0x34
[ 65.164028] [<c0193650>] __slab_free+0xc6/0x243
[ 65.164028] [<c0194ac2>] kmem_cache_free+0x9c/0xd2
[ 65.164028] [<c012ac09>] free_task+0x38/0x3b
[ 65.164028] [<c012c301>] __put_task_struct+0xd0/0xd5
[ 65.164028] [<c012ddd2>] delayed_put_task_struct+0x41/0x45
[ 65.164028] [<c0169ea3>] __rcu_process_callbacks+0x162/0x1f7
[ 65.164028] [<c0169f5e>] rcu_process_callbacks+0x26/0x46
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c12223a0>] __key.31326+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c017ce6c>] rmqueue_bulk+0x2b/0x70
[ 65.164028] [<c017dbb6>] get_page_from_freelist+0x167/0x42a
[ 65.164028] [<c017df4c>] __alloc_pages_internal+0xb8/0x370
[ 65.164028] [<c0193b79>] __slab_alloc+0x14a/0x441
[ 65.164028] [<c0193ed3>] kmem_cache_alloc+0x63/0xd1
[ 65.164028] [<c0391c65>] alloc_cpumask_var+0x23/0x6f
[ 65.164028] [<c06c6004>] schedule+0x1a9/0x98a
[ 65.164028] [<c0166552>] watchdog+0x3f/0x1d5
[ 65.164028] [<c013dfcf>] kthread+0x40/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0193510>] deactivate_slab+0x71/0xd6
[ 65.164028] [<c0193ac0>] __slab_alloc+0x91/0x441
[ 65.164028] [<c0193ed3>] kmem_cache_alloc+0x63/0xd1
[ 65.164028] [<c0391c65>] alloc_cpumask_var+0x23/0x6f
[ 65.164028] [<c06c6004>] schedule+0x1a9/0x98a
[ 65.164028] [<c06c6934>] schedule_timeout+0x1b/0x97
[ 65.164028] [<c06c5cdb>] wait_for_common+0xc1/0x110
[ 65.164028] [<c06c5dc5>] wait_for_completion+0x17/0x19
[ 65.164028] [<c013a8ad>] call_usermodehelper_exec+0x90/0xd0
[ 65.164028] [<c0393cf2>] kobject_uevent_env+0x489/0x4c5
[ 65.164028] [<c0393d38>] kobject_uevent+0xa/0xc
[ 65.164028] [<c03932c0>] kset_register+0x2e/0x34
[ 65.164028] [<c04399c1>] sysdev_class_register+0x96/0x9e
[ 65.164028] [<c0a8954c>] cpu_dev_init+0xf/0x49
[ 65.164028] [<c0a895ca>] driver_init+0x26/0x28
[ 65.164028] [<c0a6e4db>] kernel_init+0xe9/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (trace_wait.lock){.+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c011fd42>] __wake_up+0x1a/0x40
[ 65.164028] [<c0175152>] trace_wake_up+0x2b/0x2e
[ 65.164028] [<c0177897>] trace_boot_call+0x74/0x7b
[ 65.164028] [<c0101148>] do_one_initcall+0x60/0x17f
[ 65.164028] [<c0a6e4ee>] kernel_init+0xfc/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c011fd42>] __wake_up+0x1a/0x40
[ 65.164028] [<c0175152>] trace_wake_up+0x2b/0x2e
[ 65.164028] [<c01751fe>] tracing_sched_wakeup_trace+0xa9/0xb4
[ 65.164028] [<c0176369>] probe_sched_wakeup+0x71/0xa5
[ 65.164028] [<c0126c44>] try_to_wake_up+0x25e/0x2bf
[ 65.164028] [<c0126cdc>] wake_up_process+0x14/0x16
[ 65.164028] [<c017fdba>] pdflush_operation+0x76/0x8c
[ 65.164028] [<c017eec4>] wb_timer_fn+0x14/0x30
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c09dcd98>] trace_wait+0x10/0x2c
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c011fd42>] __wake_up+0x1a/0x40
[ 65.164028] [<c0175152>] trace_wake_up+0x2b/0x2e
[ 65.164028] [<c01751fe>] tracing_sched_wakeup_trace+0xa9/0xb4
[ 65.164028] [<c0176369>] probe_sched_wakeup+0x71/0xa5
[ 65.164028] [<c0129f55>] wake_up_new_task+0x8f/0xc9
[ 65.164028] [<c012bf31>] do_fork+0x205/0x2a2
[ 65.164028] [<c01022b1>] kernel_thread+0x7c/0x84
[ 65.164028] [<c013de53>] kthreadd+0xb1/0x14d
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&rq->lock/1){.+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8a65>] _spin_lock_nested+0x21/0x51
[ 65.164028] [<c012679c>] double_rq_lock+0x53/0x85
[ 65.164028] [<c01273b5>] rebalance_domains+0x224/0x4d5
[ 65.164028] [<c012939f>] run_rebalance_domains+0x33/0xf2
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8a65>] _spin_lock_nested+0x21/0x51
[ 65.164028] [<c012679c>] double_rq_lock+0x53/0x85
[ 65.164028] [<c01273b5>] rebalance_domains+0x224/0x4d5
[ 65.164028] [<c012939f>] run_rebalance_domains+0x33/0xf2
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0c2cdf5>] __key.45602+0x1/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8a65>] _spin_lock_nested+0x21/0x51
[ 65.164028] [<c012679c>] double_rq_lock+0x53/0x85
[ 65.164028] [<c01273b5>] rebalance_domains+0x224/0x4d5
[ 65.164028] [<c012939f>] run_rebalance_domains+0x33/0xf2
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c011fe83>] task_rq_lock+0x4e/0x78
[ 65.164028] [<c0126a87>] try_to_wake_up+0xa1/0x2bf
[ 65.164028] [<c0126cb5>] default_wake_function+0x10/0x12
[ 65.164028] [<c011ec20>] __wake_up_common+0x3a/0x60
[ 65.164028] [<c011fccc>] complete+0x30/0x43
[ 65.164028] [<c013dfb3>] kthread+0x24/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c011fcfd>] __wake_up_sync+0x1e/0x49
[ 65.164028] [<c05c5301>] sock_def_readable+0x3d/0x68
[ 65.164028] [<c060beac>] tcp_child_process+0x4c/0x9a
[ 65.164028] [<c0609f77>] tcp_v4_do_rcv+0x241/0x2a5
[ 65.164028] [<c060be0b>] tcp_v4_rcv+0x67d/0x6d2
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (tcp_lock){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8bf6>] _write_lock_bh+0x28/0x58
[ 65.164028] [<c05e7a08>] tcp_packet+0x7a/0xea6
[ 65.164028] [<c05e4da5>] nf_conntrack_in+0x6e9/0x7d4
[ 65.164028] [<c0620120>] ipv4_conntrack_local+0x55/0x5c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f5dc5>] __ip_local_out+0x82/0x89
[ 65.164028] [<c05f5ddc>] ip_local_out+0x10/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0607fed>] tcp_connect+0x2ad/0x388
[ 65.164028] [<c060b32a>] tcp_v4_connect+0x447/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8bf6>] _write_lock_bh+0x28/0x58
[ 65.164028] [<c05e7a08>] tcp_packet+0x7a/0xea6
[ 65.164028] [<c05e4da5>] nf_conntrack_in+0x6e9/0x7d4
[ 65.164028] [<c06204ac>] ipv4_conntrack_in+0x1a/0x1c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f3008>] ip_rcv+0x1e2/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8bf6>] _write_lock_bh+0x28/0x58
[ 65.164028] [<c05e7a08>] tcp_packet+0x7a/0xea6
[ 65.164028] [<c05e4da5>] nf_conntrack_in+0x6e9/0x7d4
[ 65.164028] [<c0620120>] ipv4_conntrack_local+0x55/0x5c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f5dc5>] __ip_local_out+0x82/0x89
[ 65.164028] [<c05f5ddc>] ip_local_out+0x10/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0607fed>] tcp_connect+0x2ad/0x388
[ 65.164028] [<c060b32a>] tcp_v4_connect+0x447/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0a29b08>] tcp_lock+0x10/0x24
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8bf6>] _write_lock_bh+0x28/0x58
[ 65.164028] [<c05e7a08>] tcp_packet+0x7a/0xea6
[ 65.164028] [<c05e4da5>] nf_conntrack_in+0x6e9/0x7d4
[ 65.164028] [<c0620120>] ipv4_conntrack_local+0x55/0x5c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f5dc5>] __ip_local_out+0x82/0x89
[ 65.164028] [<c05f5ddc>] ip_local_out+0x10/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0605ff0>] tcp_send_ack+0xb3/0xbb
[ 65.164028] [<c0608794>] tcp_delack_timer+0x15a/0x1b1
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (nf_conntrack_lock){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05e4a7b>] nf_conntrack_in+0x3bf/0x7d4
[ 65.164028] [<c0620120>] ipv4_conntrack_local+0x55/0x5c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f5dc5>] __ip_local_out+0x82/0x89
[ 65.164028] [<c05f5ddc>] ip_local_out+0x10/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0607fed>] tcp_connect+0x2ad/0x388
[ 65.164028] [<c060b32a>] tcp_v4_connect+0x447/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05e3fc7>] __nf_ct_refresh_acct+0x57/0x167
[ 65.164028] [<c05e8816>] tcp_packet+0xe88/0xea6
[ 65.164028] [<c05e4da5>] nf_conntrack_in+0x6e9/0x7d4
[ 65.164028] [<c06204ac>] ipv4_conntrack_in+0x1a/0x1c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f3008>] ip_rcv+0x1e2/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05e4a7b>] nf_conntrack_in+0x3bf/0x7d4
[ 65.164028] [<c0620120>] ipv4_conntrack_local+0x55/0x5c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f5dc5>] __ip_local_out+0x82/0x89
[ 65.164028] [<c05f5ddc>] ip_local_out+0x10/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0607fed>] tcp_connect+0x2ad/0x388
[ 65.164028] [<c060b32a>] tcp_v4_connect+0x447/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0a2983c>] nf_conntrack_lock+0x10/0x24
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c01351eb>] lock_timer_base+0x24/0x43
[ 65.164028] [<c0135312>] __mod_timer+0x2f/0xc8
[ 65.164028] [<c05e45e9>] __nf_conntrack_confirm+0x23c/0x30f
[ 65.164028] [<c0620585>] ipv4_confirm+0xa7/0xc5
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f656f>] ip_output+0x68/0x7c
[ 65.164028] [<c05f5de9>] ip_local_out+0x1d/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0607fed>] tcp_connect+0x2ad/0x388
[ 65.164028] [<c060b32a>] tcp_v4_connect+0x447/0x49e
[ 65.164028] [<c06160cf>] inet_stream_connect+0x8f/0x204
[ 65.164028] [<c05c276e>] sys_connect+0x59/0x76
[ 65.164028] [<c05c2db2>] sys_socketcall+0x91/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05e3fc7>] __nf_ct_refresh_acct+0x57/0x167
[ 65.164028] [<c05e8816>] tcp_packet+0xe88/0xea6
[ 65.164028] [<c05e4da5>] nf_conntrack_in+0x6e9/0x7d4
[ 65.164028] [<c0620120>] ipv4_conntrack_local+0x55/0x5c
[ 65.164028] [<c05e0265>] nf_iterate+0x34/0x80
[ 65.164028] [<c05e030f>] nf_hook_slow+0x5e/0xca
[ 65.164028] [<c05f5dc5>] __ip_local_out+0x82/0x89
[ 65.164028] [<c05f5ddc>] ip_local_out+0x10/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0605ff0>] tcp_send_ack+0xb3/0xbb
[ 65.164028] [<c0608794>] tcp_delack_timer+0x15a/0x1b1
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&list->lock#4){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05dc5a8>] dev_deactivate+0x11d/0x15a
[ 65.164028] [<c05d7238>] __linkwatch_run_queue+0x112/0x141
[ 65.164028] [<c05d728c>] linkwatch_event+0x25/0x2c
[ 65.164028] [<c013ad65>] run_workqueue+0xc3/0x193
[ 65.164028] [<c013b81f>] worker_thread+0xbb/0xc7
[ 65.164028] [<c013dfcf>] kthread+0x40/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05cfafa>] dev_queue_xmit+0x304/0x482
[ 65.164028] [<c0610c3b>] arp_xmit+0x38/0x3d
[ 65.164028] [<c061159a>] arp_send+0x37/0x3c
[ 65.164028] [<c0611ef0>] arp_solicit+0x16e/0x185
[ 65.164028] [<c05d5123>] neigh_timer_handler+0x24f/0x298
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05dc5a8>] dev_deactivate+0x11d/0x15a
[ 65.164028] [<c05d7238>] __linkwatch_run_queue+0x112/0x141
[ 65.164028] [<c05d728c>] linkwatch_event+0x25/0x2c
[ 65.164028] [<c013ad65>] run_workqueue+0xc3/0x193
[ 65.164028] [<c013b81f>] worker_thread+0xbb/0xc7
[ 65.164028] [<c013dfcf>] kthread+0x40/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c132b094>] __key.22497+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05cfafa>] dev_queue_xmit+0x304/0x482
[ 65.164028] [<c05f629e>] ip_finish_output+0x1e0/0x218
[ 65.164028] [<c05f657e>] ip_output+0x77/0x7c
[ 65.164028] [<c05f5de9>] ip_local_out+0x1d/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0605ff0>] tcp_send_ack+0xb3/0xbb
[ 65.164028] [<c0608794>] tcp_delack_timer+0x15a/0x1b1
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (_xmit_ETHER#2){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05dc509>] dev_deactivate+0x7e/0x15a
[ 65.164028] [<c05d7238>] __linkwatch_run_queue+0x112/0x141
[ 65.164028] [<c05d728c>] linkwatch_event+0x25/0x2c
[ 65.164028] [<c013ad65>] run_workqueue+0xc3/0x193
[ 65.164028] [<c013b81f>] worker_thread+0xbb/0xc7
[ 65.164028] [<c013dfcf>] kthread+0x40/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05dc628>] dev_watchdog+0x43/0x193
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05dc509>] dev_deactivate+0x7e/0x15a
[ 65.164028] [<c05d7238>] __linkwatch_run_queue+0x112/0x141
[ 65.164028] [<c05d728c>] linkwatch_event+0x25/0x2c
[ 65.164028] [<c013ad65>] run_workqueue+0xc3/0x193
[ 65.164028] [<c013b81f>] worker_thread+0xbb/0xc7
[ 65.164028] [<c013dfcf>] kthread+0x40/0x69
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c132a944>] netdev_xmit_lock_key+0x8/0x1c8
[ 65.164028] -> (&np->lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8df5>] _spin_lock_irq+0x32/0x62
[ 65.164028] [<c06afc22>] nv_probe+0xb87/0x1b85
[ 65.164028] [<c03af7d6>] pci_device_probe+0x3e/0x5e
[ 65.164028] [<c043b7b0>] driver_probe_device+0x158/0x235
[ 65.164028] [<c043b8e1>] __driver_attach+0x54/0x76
[ 65.164028] [<c043ac57>] bus_for_each_dev+0x43/0x65
[ 65.164028] [<c043b4c3>] driver_attach+0x19/0x1b
[ 65.164028] [<c043b10b>] bus_add_driver+0xef/0x202
[ 65.164028] [<c043ba82>] driver_register+0x76/0xd2
[ 65.164028] [<c03afa04>] __pci_register_driver+0x58/0x84
[ 65.164028] [<c0a8e884>] init_nic+0x14/0x16
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e4ee>] kernel_init+0xfc/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c04b2185>] nv_nic_irq_optimized+0x82/0x22d
[ 65.164028] [<c0166953>] handle_IRQ_event+0x1f/0x54
[ 65.164028] [<c0167f4b>] handle_fasteoi_irq+0xaa/0xb7
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W
<3>INFO: RCU detected CPU 0 stall (t=4294911085/2500 jiffies)
[ 75.156028] Pid: 2701, comm: distccd Not tainted 2.6.28-tip-03865-gb8c8c2c-dirty #13131
[ 75.156028] Call Trace:
[ 75.156028] [<c016a06c>] __rcu_pending+0x50/0x182
[ 75.156028] [<c016a1bf>] rcu_pending+0x21/0x4c
[ 75.156028] [<c013544c>] update_process_times+0x30/0x53
[ 75.156028] [<c0147766>] tick_periodic+0x6b/0x6d
[ 75.156028] [<c0147786>] tick_handle_periodic+0x1e/0x60
[ 75.156028] [<c06c943c>] __irqentry_text_start+0x74/0x87
[ 75.156028] [<c0104061>] apic_timer_interrupt+0x2d/0x34
[ 75.156028] [<c0397971>] ? delay_tsc+0xe/0xb5
[ 75.156028] [<c03978c8>] __delay+0xe/0x10
[ 75.156028] [<c03a5687>] _raw_spin_lock+0xa0/0xf9
[ 75.156028] [<c06c8a84>] _spin_lock_nested+0x40/0x51
[ 75.156028] [<c060ba68>] tcp_v4_rcv+0x2da/0x6d2
[ 75.156028] [<c05f2bb5>] ? ip_local_deliver_finish+0x32/0x1c5
[ 75.156028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 75.156028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 75.156028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 75.156028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 75.156028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 75.156028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 75.156028] [<c05cef7d>] ? net_rx_action+0x67/0x1f7
[ 75.156028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 75.156028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 75.156028] [<c013144f>] ? __do_softirq+0x0/0x154
[ 75.156028] <IRQ> [<c0167ea1>] ? handle_fasteoi_irq+0x0/0xb7
[ 75.156028] [<c0131421>] ? irq_exit+0x49/0x77
[ 75.156028] [<c010593a>] ? do_IRQ+0xb5/0xcb
[ 75.156028] [<c0103f2c>] ? common_interrupt+0x2c/0x34
[ 75.156028] [<c0116355>] ? __ticket_spin_is_locked+0xb/0x16
[ 75.156028] [<c03a5486>] ? _raw_spin_unlock+0x2d/0x92
[ 75.156028] [<c06c8976>] ? _spin_unlock+0x22/0x25
[ 75.156028] [<c019bdce>] ? inode_add_bytes+0x5b/0x61
[ 75.156028] [<c01d935e>] ? ext3_new_blocks+0x7f/0x5b5
[ 75.156028] [<c06c7437>] ? mutex_lock_nested+0x2b9/0x2d5
[ 75.156028] [<c06c744b>] ? mutex_lock_nested+0x2cd/0x2d5
[ 75.156028] [<c01dc150>] ? ext3_get_blocks_handle+0x136/0x816
[ 75.156028] [<c01dc35a>] ? ext3_get_blocks_handle+0x340/0x816
[ 75.156028] [<c01dca32>] ? ext3_get_block+0x90/0xc5
[ 75.156028] [<c01b4306>] ? __block_prepare_write+0x152/0x32b
[ 75.156028] [<c01b4589>] ? block_write_begin+0x7e/0xdb
[ 75.156028] [<c01dc9a2>] ? ext3_get_block+0x0/0xc5
[ 75.156028] [<c01dddc7>] ? ext3_write_begin+0xcd/0x192
[ 75.156028] [<c01dc9a2>] ? ext3_get_block+0x0/0xc5
[ 75.156028] [<c0179cd6>] ? generic_file_buffered_write+0xce/0x204
[ 75.156028] [<c017a3f8>] ? __generic_file_aio_write_nolock+0x3f9/0x433
[ 75.156028] [<c014c042>] ? mark_held_locks+0x45/0x5c
[ 75.156028] [<c06c744b>] ? mutex_lock_nested+0x2cd/0x2d5
[ 75.156028] [<c017a49b>] ? generic_file_aio_write+0x69/0xbd
[ 75.156028] [<c01da3ff>] ? ext3_file_write+0x1f/0x90
[ 75.156028] [<c0198e09>] ? do_sync_write+0xc0/0xfe
[ 75.156028] [<c013e0a8>] ? autoremove_wake_function+0x0/0x35
[ 75.156028] [<c0149c1a>] ? lock_release_holdtime+0x8a/0x92
[ 75.156028] [<c01c5437>] ? dnotify_parent+0x5d/0x62
[ 75.156028] [<c03a54e7>] ? _raw_spin_unlock+0x8e/0x92
[ 75.156028] [<c0359dad>] ? security_file_permission+0x14/0x16
[ 75.156028] [<c0198d49>] ? do_sync_write+0x0/0xfe
[ 75.156028] [<c019958a>] ? vfs_write+0x91/0x138
[ 75.156028] [<c0199a95>] ? sys_write+0x40/0x65
[ 75.156028] [<c010388b>] ? sysenter_do_call+0x12/0x3f
[ 65.164028] at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c04b2185>] nv_nic_irq_optimized+0x82/0x22d
[ 65.164028] [<c0166953>] handle_IRQ_event+0x1f/0x54
[ 65.164028] [<c0167f4b>] handle_fasteoi_irq+0xaa/0xb7
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c13263dc>] __key.35695+0x0/0x8
[ 65.164028] -> (lweventlist_lock){+...} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c05d703b>] linkwatch_add_event+0x15/0x36
[ 65.164028] [<c05d7110>] linkwatch_fire_event+0x2d/0x43
[ 65.164028] [<c05dc1f2>] netif_carrier_off+0x26/0x28
[ 65.164028] [<c0469f41>] bond_create+0x297/0x2fd
[ 65.164028] [<c0a8e09c>] bonding_init+0x720/0x7a6
[ 65.164028] [<c0101152>] do_one_initcall+0x6a/0x17f
[ 65.164028] [<c0a6e4ee>] kernel_init+0xfc/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c05d703b>] linkwatch_add_event+0x15/0x36
[ 65.164028] [<c05d7110>] linkwatch_fire_event+0x2d/0x43
[ 65.164028] [<c05dc21b>] netif_carrier_on+0x27/0x37
[ 65.164028] [<c04b0fed>] nv_linkchange+0x21/0x5b
[ 65.164028] [<c04b1057>] nv_link_irq+0x30/0x33
[ 65.164028] [<c04b2201>] nv_nic_irq_optimized+0xfe/0x22d
[ 65.164028] [<c0166953>] handle_IRQ_event+0x1f/0x54
[ 65.164028] [<c0167f4b>] handle_fasteoi_irq+0xaa/0xb7
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0a28c10>] lweventlist_lock+0x10/0x24
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c05d703b>] linkwatch_add_event+0x15/0x36
[ 65.164028] [<c05d7110>] linkwatch_fire_event+0x2d/0x43
[ 65.164028] [<c05dc1f2>] netif_carrier_off+0x26/0x28
[ 65.164028] [<c04b19f5>] nv_open+0x4b4/0x518
[ 65.164028] [<c05cf66c>] dev_open+0x6c/0xa1
[ 65.164028] [<c05cd684>] dev_change_flags+0x9b/0x149
[ 65.164028] [<c06145da>] devinet_ioctl+0x215/0x4e3
[ 65.164028] [<c0615318>] inet_ioctl+0x93/0xac
[ 65.164028] [<c05c17ea>] sock_ioctl+0x1cc/0x1f0
[ 65.164028] [<c01a39e9>] vfs_ioctl+0x27/0x6c
[ 65.164028] [<c01a3ebf>] do_vfs_ioctl+0x3c3/0x3f5
[ 65.164028] [<c01a3f36>] sys_ioctl+0x45/0x5f
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (&cwq->lock){++..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c013b5d1>] __queue_work+0x14/0x30
[ 65.164028] [<c013b650>] queue_work_on+0x3a/0x46
[ 65.164028] [<c013b710>] queue_work+0x1a/0x1d
[ 65.164028] [<c013a8a0>] call_usermodehelper_exec+0x83/0xd0
[ 65.164028] [<c0393cf2>] kobject_uevent_env+0x489/0x4c5
[ 65.164028] [<c0393d38>] kobject_uevent+0xa/0xc
[ 65.164028] [<c03932c0>] kset_register+0x2e/0x34
[ 65.164028] [<c043ae7e>] bus_register+0x93/0x231
[ 65.164028] [<c0a89528>] platform_bus_init+0x1e/0x33
[ 65.164028] [<c0a895c0>] driver_init+0x1c/0x28
[ 65.164028] [<c0a6e4db>] kernel_init+0xe9/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-hardirq-W at:
[ 65.164028] [<c014d983>] __lock_acquire+0x4f9/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c013b5d1>] __queue_work+0x14/0x30
[ 65.164028] [<c013b650>] queue_work_on+0x3a/0x46
[ 65.164028] [<c013b710>] queue_work+0x1a/0x1d
[ 65.164028] [<c013b727>] schedule_work+0x14/0x16
[ 65.164028] [<c044142d>] schedule_bh+0x17/0x19
[ 65.164028] [<c04415a7>] floppy_interrupt+0x15a/0x171
[ 65.164028] [<c044385a>] floppy_hardint+0x22/0xe1
[ 65.164028] [<c0166953>] handle_IRQ_event+0x1f/0x54
[ 65.164028] [<c0167ffe>] handle_edge_irq+0xa6/0x10d
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c013b5d1>] __queue_work+0x14/0x30
[ 65.164028] [<c013b613>] delayed_work_timer_fn+0x26/0x29
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0e2fc0c>] __key.24394+0x0/0x8
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c011fd42>] __wake_up+0x1a/0x40
[ 65.164028] [<c013aea2>] insert_work+0x48/0x50
[ 65.164028] [<c013b5df>] __queue_work+0x22/0x30
[ 65.164028] [<c013b650>] queue_work_on+0x3a/0x46
[ 65.164028] [<c013b710>] queue_work+0x1a/0x1d
[ 65.164028] [<c013a8a0>] call_usermodehelper_exec+0x83/0xd0
[ 65.164028] [<c0393cf2>] kobject_uevent_env+0x489/0x4c5
[ 65.164028] [<c0393d38>] kobject_uevent+0xa/0xc
[ 65.164028] [<c03932c0>] kset_register+0x2e/0x34
[ 65.164028] [<c043ae7e>] bus_register+0x93/0x231
[ 65.164028] [<c0a89528>] platform_bus_init+0x1e/0x33
[ 65.164028] [<c0a895c0>] driver_init+0x1c/0x28
[ 65.164028] [<c0a6e4db>] kernel_init+0xe9/0x14a
[ 65.164028] [<c010417b>] kernel_thread_helper+0x7/0x10
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c013b5d1>] __queue_work+0x14/0x30
[ 65.164028] [<c013b650>] queue_work_on+0x3a/0x46
[ 65.164028] [<c013b710>] queue_work+0x1a/0x1d
[ 65.164028] [<c013b73a>] queue_delayed_work+0x11/0x23
[ 65.164028] [<c013b762>] schedule_delayed_work+0x16/0x18
[ 65.164028] [<c05d70aa>] linkwatch_schedule_work+0x4e/0x87
[ 65.164028] [<c05d7122>] linkwatch_fire_event+0x3f/0x43
[ 65.164028] [<c05dc1f2>] netif_carrier_off+0x26/0x28
[ 65.164028] [<c04b19f5>] nv_open+0x4b4/0x518
[ 65.164028] [<c05cf66c>] dev_open+0x6c/0xa1
[ 65.164028] [<c05cd684>] dev_change_flags+0x9b/0x149
[ 65.164028] [<c06145da>] devinet_ioctl+0x215/0x4e3
[ 65.164028] [<c0615318>] inet_ioctl+0x93/0xac
[ 65.164028] [<c05c17ea>] sock_ioctl+0x1cc/0x1f0
[ 65.164028] [<c01a39e9>] vfs_ioctl+0x27/0x6c
[ 65.164028] [<c01a3ebf>] do_vfs_ioctl+0x3c3/0x3f5
[ 65.164028] [<c01a3f36>] sys_ioctl+0x45/0x5f
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c01351eb>] lock_timer_base+0x24/0x43
[ 65.164028] [<c0135312>] __mod_timer+0x2f/0xc8
[ 65.164028] [<c0135585>] mod_timer+0x38/0x3c
[ 65.164028] [<c04b1a39>] nv_open+0x4f8/0x518
[ 65.164028] [<c05cf66c>] dev_open+0x6c/0xa1
[ 65.164028] [<c05cd684>] dev_change_flags+0x9b/0x149
[ 65.164028] [<c06145da>] devinet_ioctl+0x215/0x4e3
[ 65.164028] [<c0615318>] inet_ioctl+0x93/0xac
[ 65.164028] [<c05c17ea>] sock_ioctl+0x1cc/0x1f0
[ 65.164028] [<c01a39e9>] vfs_ioctl+0x27/0x6c
[ 65.164028] [<c01a3ebf>] do_vfs_ioctl+0x3c3/0x3f5
[ 65.164028] [<c01a3f36>] sys_ioctl+0x45/0x5f
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8f2e>] _spin_lock_irqsave+0x3f/0x72
[ 65.164028] [<c04b3dfb>] nv_start_xmit_optimized+0x36a/0x411
[ 65.164028] [<c05cd1e2>] dev_hard_start_xmit+0x235/0x299
[ 65.164028] [<c05dc2f7>] __qdisc_run+0xcc/0x1ad
[ 65.164028] [<c05cfb37>] dev_queue_xmit+0x341/0x482
[ 65.164028] [<c0630943>] packet_sendmsg+0x1c1/0x20a
[ 65.164028] [<c05c1e4e>] sock_sendmsg+0xcf/0xe6
[ 65.164028] [<c05c26d7>] sys_sendto+0xa9/0xc8
[ 65.164028] [<c05c2e16>] sys_socketcall+0xf5/0x18a
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c05dc2dc>] __qdisc_run+0xb1/0x1ad
[ 65.164028] [<c05cfb37>] dev_queue_xmit+0x341/0x482
[ 65.164028] [<c05f629e>] ip_finish_output+0x1e0/0x218
[ 65.164028] [<c05f657e>] ip_output+0x77/0x7c
[ 65.164028] [<c05f5de9>] ip_local_out+0x1d/0x20
[ 65.164028] [<c05f68ce>] ip_queue_xmit+0x2b6/0x2f7
[ 65.164028] [<c0605e8a>] tcp_transmit_skb+0x5ca/0x604
[ 65.164028] [<c0605ff0>] tcp_send_ack+0xb3/0xbb
[ 65.164028] [<c0608794>] tcp_delack_timer+0x15a/0x1b1
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (rt_peer_lock){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f0f43>] rt_bind_peer+0x23/0x4d
[ 65.164028] [<c05f0ff7>] __ip_select_ident+0x2e/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f0f43>] rt_bind_peer+0x23/0x4d
[ 65.164028] [<c05f0ff7>] __ip_select_ident+0x2e/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f0f43>] rt_bind_peer+0x23/0x4d
[ 65.164028] [<c05f0ff7>] __ip_select_ident+0x2e/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0a2a500>] rt_peer_lock.45038+0x10/0x24
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f0f43>] rt_bind_peer+0x23/0x4d
[ 65.164028] [<c05f0ff7>] __ip_select_ident+0x2e/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] -> (inet_peer_idlock){-+..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f100b>] __ip_select_ident+0x42/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] in-softirq-W at:
[ 65.164028] [<c014d9a4>] __lock_acquire+0x51a/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f100b>] __ip_select_ident+0x42/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f100b>] __ip_select_ident+0x42/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c0a2a810>] inet_peer_idlock+0x10/0x24
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8b10>] _spin_lock_bh+0x28/0x58
[ 65.164028] [<c05f100b>] __ip_select_ident+0x42/0xb9
[ 65.164028] [<c05f6000>] ip_push_pending_frames+0x214/0x2d2
[ 65.164028] [<c06121e3>] icmp_push_reply+0xc9/0xd4
[ 65.164028] [<c06123aa>] icmp_reply+0x141/0x15d
[ 65.164028] [<c06127f3>] icmp_echo+0x4c/0x4e
[ 65.164028] [<c0612606>] icmp_rcv+0x1d5/0x209
[ 65.164028] [<c05f2c95>] ip_local_deliver_finish+0x112/0x1c5
[ 65.164028] [<c05f309a>] ip_local_deliver+0x59/0x63
[ 65.164028] [<c05f2b6d>] ip_rcv_finish+0x285/0x29b
[ 65.164028] [<c05f3017>] ip_rcv+0x1f1/0x21b
[ 65.164028] [<c05cc853>] netif_receive_skb+0x2f2/0x325
[ 65.164028] [<c04b597f>] nv_napi_poll+0x3b9/0x4d2
[ 65.164028] [<c05cefec>] net_rx_action+0xd6/0x1f7
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028] ... acquired at:
[ 65.164028] [<c014d1b5>] validate_chain+0x8f9/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c0193510>] deactivate_slab+0x71/0xd6
[ 65.164028] [<c0193ac0>] __slab_alloc+0x91/0x441
[ 65.164028] [<c0193ed3>] kmem_cache_alloc+0x63/0xd1
[ 65.164028] [<c05c7613>] __alloc_skb+0x2e/0x10c
[ 65.164028] [<c0605f67>] tcp_send_ack+0x2a/0xbb
[ 65.164028] [<c0608794>] tcp_delack_timer+0x15a/0x1b1
[ 65.164028] [<c0134d2e>] run_timer_softirq+0x144/0x1a4
[ 65.164028] [<c01314ec>] __do_softirq+0x9d/0x154
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028]
[ 65.164028]
[ 65.164028] the soft-irq-unsafe lock's dependencies:
[ 65.164028] -> (&fbc->lock){--..} ops: 0 {
[ 65.164028] initial-use at:
[ 65.164028] [<c014da2e>] __lock_acquire+0x5a4/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c03a94d4>] __percpu_counter_add+0x52/0x7a
[ 65.164028] [<c019a133>] get_empty_filp+0x9f/0x1b0
[ 65.164028] [<c01a2612>] path_lookup_open+0x23/0x7a
[ 65.164028] [<c01a2710>] do_filp_open+0xa7/0x67e
[ 65.164028] [<c019777c>] do_sys_open+0x47/0xbc
[ 65.164028] [<c019783d>] sys_open+0x23/0x2b
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] softirq-on-W at:
[ 65.164028] [<c014da19>] __lock_acquire+0x58f/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c03a94d4>] __percpu_counter_add+0x52/0x7a
[ 65.164028] [<c019a133>] get_empty_filp+0x9f/0x1b0
[ 65.164028] [<c01a2612>] path_lookup_open+0x23/0x7a
[ 65.164028] [<c01a2710>] do_filp_open+0xa7/0x67e
[ 65.164028] [<c019777c>] do_sys_open+0x47/0xbc
[ 65.164028] [<c019783d>] sys_open+0x23/0x2b
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] hardirq-on-W at:
[ 65.164028] [<c014d9f8>] __lock_acquire+0x56e/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c03a94d4>] __percpu_counter_add+0x52/0x7a
[ 65.164028] [<c019a133>] get_empty_filp+0x9f/0x1b0
[ 65.164028] [<c01a2612>] path_lookup_open+0x23/0x7a
[ 65.164028] [<c01a2710>] do_filp_open+0xa7/0x67e
[ 65.164028] [<c019777c>] do_sys_open+0x47/0xbc
[ 65.164028] [<c019783d>] sys_open+0x23/0x2b
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
[ 65.164028] [<ffffffff>] 0xffffffff
[ 65.164028] }
[ 65.164028] ... key at: [<c131d2dc>] __key.12530+0x0/0x8
[ 65.164028]
[ 65.164028] stack backtrace:
[ 65.164028] Pid: 2856, comm: ssh Not tainted 2.6.28-tip-03865-gb8c8c2c-dirty #13131
[ 65.164028] Call Trace:
[ 65.164028] [<c014c7f5>] check_usage+0x345/0x356
[ 65.164028] [<c014cf3b>] validate_chain+0x67f/0xbce
[ 65.164028] [<c014db6a>] __lock_acquire+0x6e0/0x746
[ 65.164028] [<c014dc2b>] lock_acquire+0x5b/0x81
[ 65.164028] [<c03a94d4>] ? __percpu_counter_add+0x52/0x7a
[ 65.164028] [<c06c8ab8>] _spin_lock+0x23/0x53
[ 65.164028] [<c03a94d4>] ? __percpu_counter_add+0x52/0x7a
[ 65.164028] [<c03a94d4>] __percpu_counter_add+0x52/0x7a
[ 65.164028] [<c05fd421>] tcp_close+0x1b2/0x335
[ 65.164028] [<c0615b42>] inet_release+0x47/0x4d
[ 65.164028] [<c05c20be>] sock_release+0x15/0x58
[ 65.164028] [<c05c2510>] sock_close+0x21/0x25
[ 65.164028] [<c0199f08>] __fput+0xcf/0x1a2
[ 65.164028] [<c019a2da>] fput+0x1c/0x1e
[ 65.164028] [<c0197685>] filp_close+0x55/0x5f
[ 65.164028] [<c01976fc>] sys_close+0x6d/0xa6
[ 65.164028] [<c010388b>] sysenter_do_call+0x12/0x3f
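For readers skimming the report: lockdep has seen &fbc->lock -- the spinlock
inside a percpu_counter -- taken with plain spin_lock() from process context
(the sys_open()/get_empty_filp() path above) while the same lock class sits
in dependency chains that run in softirq context. The hazard it extrapolates
is the classic pattern below; this is a schematic, not actual kernel code:

    spin_lock(&fbc->lock);     /* process context, softirqs still enabled */
    /* a softirq (e.g. a TCP timer) fires on this CPU, and its
     * handler then does: */
    spin_lock(&fbc->lock);     /* same CPU, same lock: self-deadlock */

The usual cures are taking the lock with spin_lock_bh() at process-context
call sites, or teaching lockdep that counters used only in one context live
in separate classes -- both approaches come up in the discussion below.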
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 11:28 ` Ingo Molnar
@ 2008-12-29 11:31 ` Ingo Molnar
2008-12-29 11:49 ` Herbert Xu
1 sibling, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 11:31 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
* Ingo Molnar <mingo@elte.hu> wrote:
> * Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> > On Mon, Dec 29, 2008 at 09:31:54PM +1100, Herbert Xu wrote:
> > >
> > > You also need to patch the other places where we use that percpu
> > > counter from process context, e.g., for /proc/net/tcp.
> >
> > In fact, it looks like just about every spot in the original
> > changeset (dd24c00191d5e4a1ae896aafe33c6b8095ab4bd1) may be run
> > from process context. So you might be better off making BH-disabling
> > variants of the percpu counter ops.
>
> i got the splat further below with Peter's workaround applied.
so i'm trying the (partly manual) temporary revert below. The deadlock
warning triggers very frequently and masks a lot of other potential
problems, so i needed a quick solution. i can test any other patch as
well.
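A BH-disabling variant of the counter ops, along the lines Herbert suggests
above, might look like this sketch -- percpu_counter_inc_bh() is a
hypothetical name for illustration, not an existing API:

    /* hypothetical sketch: disable softirqs around the counter update
     * so fbc->lock is never held while a softirq can preempt us */
    static inline void percpu_counter_inc_bh(struct percpu_counter *fbc)
    {
            local_bh_disable();
            percpu_counter_inc(fbc);
            local_bh_enable();
    }

Process-context users such as the /proc/net/tcp path would then call the
_bh variant, while softirq-context users keep the plain ops.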
Ingo
--------------->
From 71b9aceb2b5a1d0e4c6ed5ad0967f45184b6d2f8 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@elte.hu>
Date: Mon, 29 Dec 2008 12:27:09 +0100
Subject: [PATCH] Revert "net: Use a percpu_counter for orphan_count"
This reverts commit dd24c00191d5e4a1ae896aafe33c6b8095ab4bd1.
Conflicts:
net/ipv4/inet_connection_sock.c
Lockdep splat as per:
http://lkml.org/lkml/2008/12/29/50
---
include/net/sock.h | 2 +-
include/net/tcp.h | 2 +-
net/dccp/dccp.h | 2 +-
net/dccp/proto.c | 16 ++++++----------
net/ipv4/inet_connection_sock.c | 4 ++--
net/ipv4/proc.c | 2 +-
net/ipv4/tcp.c | 12 +++++-------
net/ipv4/tcp_timer.c | 2 +-
8 files changed, 18 insertions(+), 24 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 5a3a151..a2a3890 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -666,7 +666,7 @@ struct proto {
unsigned int obj_size;
int slab_flags;
- struct percpu_counter *orphan_count;
+ atomic_t *orphan_count;
struct request_sock_ops *rsk_prot;
struct timewait_sock_ops *twsk_prot;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 218235d..a64e5da 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -46,7 +46,7 @@
extern struct inet_hashinfo tcp_hashinfo;
-extern struct percpu_counter tcp_orphan_count;
+extern atomic_t tcp_orphan_count;
extern void tcp_time_wait(struct sock *sk, int state, int timeo);
#define MAX_TCP_HEADER (128 + MAX_HEADER)
diff --git a/net/dccp/dccp.h b/net/dccp/dccp.h
index 0bc4c9a..3fd16e8 100644
--- a/net/dccp/dccp.h
+++ b/net/dccp/dccp.h
@@ -49,7 +49,7 @@ extern int dccp_debug;
extern struct inet_hashinfo dccp_hashinfo;
-extern struct percpu_counter dccp_orphan_count;
+extern atomic_t dccp_orphan_count;
extern void dccp_time_wait(struct sock *sk, int state, int timeo);
diff --git a/net/dccp/proto.c b/net/dccp/proto.c
index d5c2bac..959da85 100644
--- a/net/dccp/proto.c
+++ b/net/dccp/proto.c
@@ -40,7 +40,8 @@ DEFINE_SNMP_STAT(struct dccp_mib, dccp_statistics) __read_mostly;
EXPORT_SYMBOL_GPL(dccp_statistics);
-struct percpu_counter dccp_orphan_count;
+atomic_t dccp_orphan_count = ATOMIC_INIT(0);
+
EXPORT_SYMBOL_GPL(dccp_orphan_count);
struct inet_hashinfo dccp_hashinfo;
@@ -964,7 +965,7 @@ adjudge_to_death:
state = sk->sk_state;
sock_hold(sk);
sock_orphan(sk);
- percpu_counter_inc(sk->sk_prot->orphan_count);
+ atomic_inc(sk->sk_prot->orphan_count);
/*
* It is the last release_sock in its life. It will remove backlog.
@@ -1028,21 +1029,18 @@ static int __init dccp_init(void)
{
unsigned long goal;
int ehash_order, bhash_order, i;
- int rc;
+ int rc = -ENOBUFS;
BUILD_BUG_ON(sizeof(struct dccp_skb_cb) >
FIELD_SIZEOF(struct sk_buff, cb));
- rc = percpu_counter_init(&dccp_orphan_count, 0);
- if (rc)
- goto out;
- rc = -ENOBUFS;
+
inet_hashinfo_init(&dccp_hashinfo);
dccp_hashinfo.bind_bucket_cachep =
kmem_cache_create("dccp_bind_bucket",
sizeof(struct inet_bind_bucket), 0,
SLAB_HWCACHE_ALIGN, NULL);
if (!dccp_hashinfo.bind_bucket_cachep)
- goto out_free_percpu;
+ goto out;
/*
* Size and allocate the main established and bind bucket
@@ -1135,8 +1133,6 @@ out_free_dccp_ehash:
out_free_bind_bucket_cachep:
kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep);
dccp_hashinfo.bind_bucket_cachep = NULL;
-out_free_percpu:
- percpu_counter_destroy(&dccp_orphan_count);
goto out;
}
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index c7cda1c..2ebfd9e 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -562,7 +562,7 @@ void inet_csk_destroy_sock(struct sock *sk)
sk_refcnt_debug_release(sk);
- percpu_counter_dec(sk->sk_prot->orphan_count);
+ atomic_dec(sk->sk_prot->orphan_count);
sock_put(sk);
}
@@ -633,7 +633,7 @@ void inet_csk_listen_stop(struct sock *sk)
acc_req = req->dl_next;
- percpu_counter_inc(sk->sk_prot->orphan_count);
+ atomic_inc(sk->sk_prot->orphan_count);
local_bh_disable();
bh_lock_sock(child);
diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 614958b..4944b47 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -54,7 +54,7 @@ static int sockstat_seq_show(struct seq_file *seq, void *v)
socket_seq_show(seq);
seq_printf(seq, "TCP: inuse %d orphan %d tw %d alloc %d mem %d\n",
sock_prot_inuse_get(net, &tcp_prot),
- (int)percpu_counter_sum_positive(&tcp_orphan_count),
+ atomic_read(&tcp_orphan_count),
tcp_death_row.tw_count,
(int)percpu_counter_sum_positive(&tcp_sockets_allocated),
atomic_read(&tcp_memory_allocated));
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 1f3d529..82b9425 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -277,7 +277,8 @@
int sysctl_tcp_fin_timeout __read_mostly = TCP_FIN_TIMEOUT;
-struct percpu_counter tcp_orphan_count;
+atomic_t tcp_orphan_count = ATOMIC_INIT(0);
+
EXPORT_SYMBOL_GPL(tcp_orphan_count);
int sysctl_tcp_mem[3] __read_mostly;
@@ -1836,7 +1837,7 @@ adjudge_to_death:
state = sk->sk_state;
sock_hold(sk);
sock_orphan(sk);
- percpu_counter_inc(sk->sk_prot->orphan_count);
+ atomic_inc(sk->sk_prot->orphan_count);
/* It is the last release_sock in its life. It will remove backlog. */
release_sock(sk);
@@ -1887,11 +1888,9 @@ adjudge_to_death:
}
}
if (sk->sk_state != TCP_CLOSE) {
- int orphan_count = percpu_counter_read_positive(
- sk->sk_prot->orphan_count);
-
sk_mem_reclaim(sk);
- if (tcp_too_many_orphans(sk, orphan_count)) {
+ if (tcp_too_many_orphans(sk,
+ atomic_read(sk->sk_prot->orphan_count))) {
if (net_ratelimit())
printk(KERN_INFO "TCP: too many of orphaned "
"sockets\n");
@@ -2790,7 +2789,6 @@ void __init tcp_init(void)
BUILD_BUG_ON(sizeof(struct tcp_skb_cb) > sizeof(skb->cb));
percpu_counter_init(&tcp_sockets_allocated, 0);
- percpu_counter_init(&tcp_orphan_count, 0);
tcp_hashinfo.bind_bucket_cachep =
kmem_cache_create("tcp_bind_bucket",
sizeof(struct inet_bind_bucket), 0,
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index 0170e91..076030c 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -65,7 +65,7 @@ static void tcp_write_err(struct sock *sk)
static int tcp_out_of_resources(struct sock *sk, int do_reset)
{
struct tcp_sock *tp = tcp_sk(sk);
- int orphans = percpu_counter_read_positive(&tcp_orphan_count);
+ int orphans = atomic_read(&tcp_orphan_count);
/* If peer does not open window for long time, or did not transmit
* anything for long time, penalize it. */
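Note what the revert trades away: atomic_inc()/atomic_read() are lock-free
and safe from process, softirq and hardirq context alike, so they cannot
trigger this splat -- but all CPUs now bounce a single shared cache line,
which is what the percpu_counter conversion was meant to avoid in the first
place. Roughly:

    static atomic_t tcp_orphan_count = ATOMIC_INIT(0);
    int orphans;

    atomic_inc(&tcp_orphan_count);             /* safe from any context */
    atomic_dec(&tcp_orphan_count);             /* safe from any context */
    orphans = atomic_read(&tcp_orphan_count);  /* lockless snapshot */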
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 11:28 ` Ingo Molnar
2008-12-29 11:31 ` Ingo Molnar
@ 2008-12-29 11:49 ` Herbert Xu
2008-12-29 11:58 ` Ingo Molnar
1 sibling, 1 reply; 30+ messages in thread
From: Herbert Xu @ 2008-12-29 11:49 UTC (permalink / raw)
To: Ingo Molnar
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
On Mon, Dec 29, 2008 at 12:28:58PM +0100, Ingo Molnar wrote:
>
> i got the splat further below with Peter's workaround applied.
This looks like the issue that Peter's original patch (the one
that differentiated between instances of percpu counters) fixed.
Did you have both of his patches applied?
Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 11:49 ` Herbert Xu
@ 2008-12-29 11:58 ` Ingo Molnar
2008-12-29 12:01 ` Herbert Xu
0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 11:58 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
* Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Mon, Dec 29, 2008 at 12:28:58PM +0100, Ingo Molnar wrote:
> >
> > i got the splat further below with Peter's workaround applied.
>
> This looks like the issue that Peter's original patch (the one that
> differentiated between instances of percpu counters) fixed. Did you have
> both of his patches applied?
no, i only applied one of them. Is his second patch a good solution in
your opinion, and should i thus test both of them? (or will the second one
iterate some more - in which case i will keep the revert for now)
Ingo
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 11:58 ` Ingo Molnar
@ 2008-12-29 12:01 ` Herbert Xu
2008-12-29 12:16 ` Ingo Molnar
0 siblings, 1 reply; 30+ messages in thread
From: Herbert Xu @ 2008-12-29 12:01 UTC (permalink / raw)
To: Ingo Molnar
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
On Mon, Dec 29, 2008 at 12:58:27PM +0100, Ingo Molnar wrote:
>
> no, i only applied one of them. Is his second patch a good solution in
> your opinion, and should i thus test both of them? (or will the second one
> iterate some more - in which case i will keep the revert for now)
Well the second patch is definitely the right solution to the
problem as reported. It just needs to be extended to fix other
similar bugs introduced by the original changeset.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 12:01 ` Herbert Xu
@ 2008-12-29 12:16 ` Ingo Molnar
2008-12-29 12:38 ` Ingo Molnar
0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 12:16 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
* Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Mon, Dec 29, 2008 at 12:58:27PM +0100, Ingo Molnar wrote:
> >
> > no, i only applied one of them. Is his second patch a good solution in
> > your opinion, and should i thus test both of them? (or will the second one
> > iterate some more - in which case i will keep the revert for now)
>
> Well the second patch is definitely the right solution to the problem as
> reported. It just needs to be extended to fix other similar bugs
> introduced by the original changeset.
okay - will keep the revert for now and will wait for you guys to do the
full fix.
Ingo
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 12:16 ` Ingo Molnar
@ 2008-12-29 12:38 ` Ingo Molnar
2008-12-29 12:44 ` [patch] locking, percpu counters: introduce separate lock classes Ingo Molnar
2008-12-29 12:49 ` unsafe locks seen with netperf on net-2.6.29 tree Herbert Xu
0 siblings, 2 replies; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 12:38 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
* Ingo Molnar <mingo@elte.hu> wrote:
>
> * Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> > On Mon, Dec 29, 2008 at 12:58:27PM +0100, Ingo Molnar wrote:
> > >
> > > no, i only applied one of them. Is his second patch a good solution in
> > > your opinion, and should i thus test both of them? (or will the second one
> > > iterate some more - in which case i will keep the revert for now)
> >
> > Well the second patch is definitely the right solution to the problem
> > as reported. It just needs to be extended to fix other similar bugs
> > introduced by the original changeset.
>
> okay - will keep the revert for now and will wait for you guys to do the
> full fix.
hm, even with the revert i got the splat below. So some other commits are
causing this too?
Ingo
=================================
[ INFO: inconsistent lock state ]
2.6.28-tip-03883-gf855e6c-dirty #13150
---------------------------------
inconsistent {softirq-on-W} -> {in-softirq-W} usage.
kjournald/1435 [HC0[0]:SC1[1]:HE1:SE0] takes:
(&fbc->lock){-+..}, at: [<c034fc75>] __percpu_counter_add+0x65/0xb0
{softirq-on-W} state was registered at:
[<c015da56>] __lock_acquire+0x4c6/0xae0
[<c015e0f9>] lock_acquire+0x89/0xc0
[<c07247a8>] _spin_lock+0x38/0x50
[<c034fc75>] __percpu_counter_add+0x65/0xb0
[<c01b704a>] get_empty_filp+0x6a/0x1d0
[<c01c10a9>] path_lookup_open+0x29/0x90
[<c01c134e>] do_filp_open+0x9e/0x790
[<c01b3e60>] do_sys_open+0x50/0xe0
[<c01b3f5e>] sys_open+0x2e/0x40
[<c0103e76>] syscall_call+0x7/0xb
[<ffffffff>] 0xffffffff
irq event stamp: 125790
hardirqs last enabled at (125790): [<c0191c56>] free_hot_cold_page+0x1b6/0x280
hardirqs last disabled at (125789): [<c0191bae>] free_hot_cold_page+0x10e/0x280
softirqs last enabled at (123900): [<c013ca12>] __do_softirq+0x132/0x180
softirqs last disabled at (125765): [<c010621a>] call_on_stack+0x1a/0x30
other info that might help us debug this:
4 locks held by kjournald/1435:
#0: (rcu_read_lock){..--}, at: [<c05bef70>] net_rx_action+0xd0/0x220
#1: (rcu_read_lock){..--}, at: [<c05bbfb1>] netif_receive_skb+0x101/0x3a0
#2: (rcu_read_lock){..--}, at: [<c05f1bf5>] ip_local_deliver+0x55/0x1d0
#3: (slock-AF_INET/1){-+..}, at: [<c060ec3a>] tcp_v4_rcv+0x55a/0x6e0
stack backtrace:
Pid: 1435, comm: kjournald Not tainted 2.6.28-tip-03883-gf855e6c-dirty #13150
Call Trace:
[<c015a0d6>] print_usage_bug+0x176/0x1d0
[<c015b800>] mark_lock+0xbd0/0xd80
[<c015da13>] __lock_acquire+0x483/0xae0
[<c015bcdb>] ? trace_hardirqs_on+0xb/0x10
[<c015e0f9>] lock_acquire+0x89/0xc0
[<c034fc75>] ? __percpu_counter_add+0x65/0xb0
[<c07247a8>] _spin_lock+0x38/0x50
[<c034fc75>] ? __percpu_counter_add+0x65/0xb0
[<c034fc75>] __percpu_counter_add+0x65/0xb0
[<c060dc49>] tcp_v4_destroy_sock+0x1d9/0x240
[<c05fa06a>] inet_csk_destroy_sock+0x4a/0x140
[<c05fa675>] ? inet_csk_clear_xmit_timers+0x45/0x50
[<c05fb96d>] tcp_done+0x4d/0x70
[<c060655c>] tcp_rcv_state_process+0x68c/0x950
[<c060c9b6>] tcp_v4_do_rcv+0xd6/0x310
[<c072475d>] ? _spin_lock_nested+0x3d/0x50
[<c060ecc4>] tcp_v4_rcv+0x5e4/0x6e0
[<c05f1bf5>] ? ip_local_deliver+0x55/0x1d0
[<c05f1c44>] ip_local_deliver+0xa4/0x1d0
[<c05f1bf5>] ? ip_local_deliver+0x55/0x1d0
[<c05f201a>] ip_rcv+0x2aa/0x510
[<c05bbfb1>] ? netif_receive_skb+0x101/0x3a0
[<c05f1d70>] ? ip_rcv+0x0/0x510
[<c05bc199>] netif_receive_skb+0x2e9/0x3a0
[<c05bbfb1>] ? netif_receive_skb+0x101/0x3a0
[<c015d8f1>] ? __lock_acquire+0x361/0xae0
[<c05bc541>] napi_gro_receive+0x1c1/0x200
[<c015b9e0>] ? mark_held_locks+0x30/0x80
[<c05bf1bb>] ? process_backlog+0x7b/0xd0
[<c05bf1d2>] process_backlog+0x92/0xd0
[<c05beff4>] net_rx_action+0x154/0x220
[<c05bef70>] ? net_rx_action+0xd0/0x220
[<c013c989>] __do_softirq+0xa9/0x180
[<c013c8e0>] ? __do_softirq+0x0/0x180
<IRQ> [<c013c8cd>] ? irq_exit+0x4d/0x60
[<c01064ca>] ? do_IRQ+0x8a/0xe0
[<c01b086f>] ? check_object+0xef/0x1f0
[<c01044ac>] ? common_interrupt+0x2c/0x34
[<c01b27d2>] ? kmem_cache_free+0xc2/0xf0
[<c0234b65>] ? journal_write_revoke_records+0xa5/0x140
[<c0234b65>] ? journal_write_revoke_records+0xa5/0x140
[<c0234b65>] ? journal_write_revoke_records+0xa5/0x140
[<c023276d>] ? journal_commit_transaction+0x42d/0xe80
[<c015bc6e>] ? trace_hardirqs_on_caller+0x17e/0x1e0
[<c015bcdb>] ? trace_hardirqs_on+0xb/0x10
[<c014101e>] ? try_to_del_timer_sync+0x4e/0x60
[<c023608b>] ? kjournald+0xbb/0x1d0
[<c014b8e0>] ? autoremove_wake_function+0x0/0x40
[<c0235fd0>] ? kjournald+0x0/0x1d0
[<c014b5d7>] ? kthread+0x47/0x80
[<c014b590>] ? kthread+0x0/0x80
[<c010472f>] ? kernel_thread_helper+0x7/0x10
* [patch] locking, percpu counters: introduce separate lock classes
2008-12-29 12:38 ` Ingo Molnar
@ 2008-12-29 12:44 ` Ingo Molnar
2008-12-29 14:14 ` Ingo Molnar
2008-12-29 12:49 ` unsafe locks seen with netperf on net-2.6.29 tree Herbert Xu
1 sibling, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 12:44 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
* Ingo Molnar <mingo@elte.hu> wrote:
> hm, even with the revert i got the splat below. So some other commits
> are causing this too?
in any case, i picked up and tidied up Peter's annotation patch - see it
below. Chances are that this plus the revert will do the trick - and
that's what i'm testing now in tip/master.
Ingo
------------------------>
From ea319518ba3de282c13ae1cf4bf2215c5e03e67e Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Fri, 26 Dec 2008 15:08:55 +0100
Subject: [PATCH] locking, percpu counters: introduce separate lock classes
Impact: fix lockdep false positives
Classify percpu_counter instances similar to regular lock objects --
that is, per instantiation site.
The networking code has increased its use of percpu_counters, which
leads to false positives if they are treated as a single class.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
include/linux/percpu_counter.h | 14 ++++++++++----
lib/percpu_counter.c | 18 ++++--------------
lib/proportions.c | 6 +++---
mm/backing-dev.c | 2 +-
4 files changed, 18 insertions(+), 22 deletions(-)
diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index 9007ccd..96bdde3 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -30,8 +30,16 @@ struct percpu_counter {
#define FBC_BATCH (NR_CPUS*4)
#endif
-int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
-int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount);
+int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+ struct lock_class_key *key);
+
+#define percpu_counter_init(fbc, value) \
+ ({ \
+ static struct lock_class_key __key; \
+ \
+ __percpu_counter_init(fbc, value, &__key); \
+ })
+
void percpu_counter_destroy(struct percpu_counter *fbc);
void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch);
@@ -85,8 +93,6 @@ static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
return 0;
}
-#define percpu_counter_init_irq percpu_counter_init
-
static inline void percpu_counter_destroy(struct percpu_counter *fbc)
{
}
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index a866389..c7fe2e4 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -71,11 +71,11 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
}
EXPORT_SYMBOL(__percpu_counter_sum);
-static struct lock_class_key percpu_counter_irqsafe;
-
-int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
+int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
+ struct lock_class_key *key)
{
spin_lock_init(&fbc->lock);
+ lockdep_set_class(&fbc->lock, key);
fbc->count = amount;
fbc->counters = alloc_percpu(s32);
if (!fbc->counters)
@@ -87,17 +87,7 @@ int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
#endif
return 0;
}
-EXPORT_SYMBOL(percpu_counter_init);
-
-int percpu_counter_init_irq(struct percpu_counter *fbc, s64 amount)
-{
- int err;
-
- err = percpu_counter_init(fbc, amount);
- if (!err)
- lockdep_set_class(&fbc->lock, &percpu_counter_irqsafe);
- return err;
-}
+EXPORT_SYMBOL(__percpu_counter_init);
void percpu_counter_destroy(struct percpu_counter *fbc)
{
diff --git a/lib/proportions.c b/lib/proportions.c
index 4f387a6..7367f2b 100644
--- a/lib/proportions.c
+++ b/lib/proportions.c
@@ -83,11 +83,11 @@ int prop_descriptor_init(struct prop_descriptor *pd, int shift)
pd->index = 0;
pd->pg[0].shift = shift;
mutex_init(&pd->mutex);
- err = percpu_counter_init_irq(&pd->pg[0].events, 0);
+ err = percpu_counter_init(&pd->pg[0].events, 0);
if (err)
goto out;
- err = percpu_counter_init_irq(&pd->pg[1].events, 0);
+ err = percpu_counter_init(&pd->pg[1].events, 0);
if (err)
percpu_counter_destroy(&pd->pg[0].events);
@@ -191,7 +191,7 @@ int prop_local_init_percpu(struct prop_local_percpu *pl)
spin_lock_init(&pl->lock);
pl->shift = 0;
pl->period = 0;
- return percpu_counter_init_irq(&pl->events, 0);
+ return percpu_counter_init(&pl->events, 0);
}
void prop_local_destroy_percpu(struct prop_local_percpu *pl)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index f2e574d..f3b1258 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -220,7 +220,7 @@ int bdi_init(struct backing_dev_info *bdi)
bdi->max_prop_frac = PROP_FRAC_BASE;
for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
- err = percpu_counter_init_irq(&bdi->bdi_stat[i], 0);
+ err = percpu_counter_init(&bdi->bdi_stat[i], 0);
if (err)
goto err;
}
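The effect of the macro change: every textual expansion of
percpu_counter_init() now defines its own static lock_class_key, so lockdep
classifies fbc->lock per instantiation site instead of lumping every
percpu_counter in the kernel into a single class. Illustratively, with two
of the counters from this thread:

    /* two expansions => two static __key objects => two lockdep classes */
    percpu_counter_init(&tcp_sockets_allocated, 0);   /* class #1 */
    percpu_counter_init(&tcp_orphan_count, 0);        /* class #2 */

A softirq-unsafe use of one counter then no longer flags counters that are
only ever updated with BHs disabled.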
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 12:38 ` Ingo Molnar
2008-12-29 12:44 ` [patch] locking, percpu counters: introduce separate lock classes Ingo Molnar
@ 2008-12-29 12:49 ` Herbert Xu
2008-12-29 12:55 ` Ingo Molnar
1 sibling, 1 reply; 30+ messages in thread
From: Herbert Xu @ 2008-12-29 12:49 UTC (permalink / raw)
To: Ingo Molnar
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
On Mon, Dec 29, 2008 at 01:38:19PM +0100, Ingo Molnar wrote:
>
> hm, even with the revert i got the splat below. So some other commits are
> causing this too?
Indeed, there is more :)
> stack backtrace:
> Pid: 1435, comm: kjournald Not tainted 2.6.28-tip-03883-gf855e6c-dirty #13150
> Call Trace:
> [<c015a0d6>] print_usage_bug+0x176/0x1d0
> [<c015b800>] mark_lock+0xbd0/0xd80
> [<c015da13>] __lock_acquire+0x483/0xae0
> [<c015bcdb>] ? trace_hardirqs_on+0xb/0x10
> [<c015e0f9>] lock_acquire+0x89/0xc0
> [<c034fc75>] ? __percpu_counter_add+0x65/0xb0
> [<c07247a8>] _spin_lock+0x38/0x50
> [<c034fc75>] ? __percpu_counter_add+0x65/0xb0
> [<c034fc75>] __percpu_counter_add+0x65/0xb0
> [<c060dc49>] tcp_v4_destroy_sock+0x1d9/0x240
This came from 1748376b6626acf59c24e9592ac67b3fe2a0e026 which also
has the same bug (although this particular trace is bogus and is
fixed by Peter's first patch).
I think these are the only two percpu counter patches around that
time frame.
Also watch out for 6976a1d6c222c50ac93d2273b9cf57e6fd047e59 when
reverting.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: unsafe locks seen with netperf on net-2.6.29 tree
2008-12-29 12:49 ` unsafe locks seen with netperf on net-2.6.29 tree Herbert Xu
@ 2008-12-29 12:55 ` Ingo Molnar
0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 12:55 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet
* Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Mon, Dec 29, 2008 at 01:38:19PM +0100, Ingo Molnar wrote:
> >
> > hm, even with the revert i got the splat below. So some other commits are
> > causing this too?
>
> Indeed, there is more :)
>
> > stack backtrace:
> > Pid: 1435, comm: kjournald Not tainted 2.6.28-tip-03883-gf855e6c-dirty #13150
> > Call Trace:
> > [<c015a0d6>] print_usage_bug+0x176/0x1d0
> > [<c015b800>] mark_lock+0xbd0/0xd80
> > [<c015da13>] __lock_acquire+0x483/0xae0
> > [<c015bcdb>] ? trace_hardirqs_on+0xb/0x10
> > [<c015e0f9>] lock_acquire+0x89/0xc0
> > [<c034fc75>] ? __percpu_counter_add+0x65/0xb0
> > [<c07247a8>] _spin_lock+0x38/0x50
> > [<c034fc75>] ? __percpu_counter_add+0x65/0xb0
> > [<c034fc75>] __percpu_counter_add+0x65/0xb0
> > [<c060dc49>] tcp_v4_destroy_sock+0x1d9/0x240
>
> This came from 1748376b6626acf59c24e9592ac67b3fe2a0e026 which also
> has the same bug (although this particular trace is bogus and is
> fixed by Peter's first patch).
>
> I think these are the only two percpu counter patches around that
> time frame.
>
> Also watch out for 6976a1d6c222c50ac93d2273b9cf57e6fd047e59 when
> reverting.
okay, since i'm not really in the business of reverting various networking
patches, would you mind Cc:ing me on the real fixes once they are
available? Or do you think a revert will be the approach in -git? (in
which case i can guinea-pig the above reverts, i guess)
Ingo
* Re: [patch] locking, percpu counters: introduce separate lock classes
2008-12-29 12:44 ` [patch] locking, percpu counters: introduce separate lock classes Ingo Molnar
@ 2008-12-29 14:14 ` Ingo Molnar
2008-12-30 3:58 ` Herbert Xu
0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2008-12-29 14:14 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet, linux-kernel
* Ingo Molnar <mingo@elte.hu> wrote:
> > hm, even with the revert i got the splat below. So some other commits
> > are causing this too?
>
> in any case, i picked up and tidied up Peter's annotation patch - see it
> below. Chances are that this plus the revert will do the trick - and
> that's what i'm testing now in tip/master.
my testing efforts today are not particularly dominated by luck :-)
Below is the latest splat that i got with Peter's patch plus the revert of
dd24c00191 applied.
Ingo
[ 78.679386]
[ 78.679389] =================================
[ 78.680039] [ INFO: inconsistent lock state ]
[ 78.680039] 2.6.28-tip-03885-g44c31d5-dirty #13188
[ 78.680039] ---------------------------------
[ 78.680039] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
[ 78.680039] ssh/4054 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 78.680039] (key#8){-+..}, at: [<c042710e>] __percpu_counter_add+0x52/0x7a
[ 78.680039] {softirq-on-W} state was registered at:
[ 78.680039] [<c0163e6e>] __lock_acquire+0x288/0xa93
[ 78.680039] [<c01646d6>] lock_acquire+0x5d/0x7a
[ 78.680039] [<c0ae0473>] _spin_lock+0x20/0x2f
[ 78.680039] [<c042710e>] __percpu_counter_add+0x52/0x7a
[ 78.680039] [<c09c3e2f>] percpu_counter_add+0xf/0x12
[ 78.680039] [<c09c50ef>] tcp_v4_init_sock+0xe5/0xea
[ 78.680039] [<c09d10d5>] inet_create+0x277/0x2a4
[ 78.680039] [<c095e5d0>] __sock_create+0xfd/0x159
[ 78.680039] [<c095e673>] sock_create+0x29/0x2e
[ 78.680039] [<c095e844>] sys_socket+0x31/0x5f
[ 78.680039] [<c095ef20>] sys_socketcall+0x56/0x16a
[ 78.680039] [<c011bff5>] sysenter_do_call+0x12/0x35
[ 78.680039] [<ffffffff>] 0xffffffff
[ 78.680039] irq event stamp: 16094
[ 78.680039] hardirqs last enabled at (16094): [<c0188987>] free_hot_cold_page+0x117/0x123
[ 78.680039] hardirqs last disabled at (16093): [<c01888e0>] free_hot_cold_page+0x70/0x123
[ 78.680039] softirqs last enabled at (16022): [<c09b817f>] tcp_close+0x2bd/0x2d6
[ 78.680039] softirqs last disabled at (16045): [<c014a6a8>] do_softirq+0x3f/0x57
[ 78.680039]
[ 78.680039] other info that might help us debug this:
[ 78.680039] 4 locks held by ssh/4054:
[ 78.680039] #0: (rcu_read_lock){..--}, at: [<c096bb18>] net_rx_action+0x61/0x1a8
[ 78.680039] #1: (rcu_read_lock){..--}, at: [<c096904d>] netif_receive_skb+0xda/0x312
[ 78.680039] #2: (rcu_read_lock){..--}, at: [<c09aeb24>] ip_local_deliver_finish+0x42/0x1c3
[ 78.680039] #3: (slock-AF_INET/1){-+..}, at: [<c09c597a>] tcp_v4_rcv+0x1f5/0x4e2
[ 78.680039]
[ 78.680039] stack backtrace:
[ 78.680039] Pid: 4054, comm: ssh Not tainted 2.6.28-tip-03885-g44c31d5-dirty #13188
[ 78.680039] Call Trace:
[ 78.680039] [<c0162c1d>] valid_state+0x12a/0x13d
[ 78.680039] [<c016343c>] mark_lock+0x109/0x313
[ 78.680039] [<c0163e01>] __lock_acquire+0x21b/0xa93
[ 78.680039] [<c0164660>] ? __lock_acquire+0xa7a/0xa93
[ 78.680039] [<c0163353>] ? mark_lock+0x20/0x313
[ 78.680039] [<c01646d6>] lock_acquire+0x5d/0x7a
[ 78.680039] [<c042710e>] ? __percpu_counter_add+0x52/0x7a
[ 78.680039] [<c0ae0473>] _spin_lock+0x20/0x2f
[ 78.680039] [<c042710e>] ? __percpu_counter_add+0x52/0x7a
[ 78.680039] [<c042710e>] __percpu_counter_add+0x52/0x7a
[ 78.680039] [<c09c3e2f>] percpu_counter_add+0xf/0x12
[ 78.680039] [<c09c5005>] tcp_v4_destroy_sock+0xe7/0xec
[ 78.680039] [<c09b5ea5>] inet_csk_destroy_sock+0x90/0xe8
[ 78.680039] [<c09b7ebf>] tcp_done+0x66/0x69
[ 78.680039] [<c09c716a>] tcp_time_wait+0x18f/0x199
[ 78.680039] [<c09c13d3>] ? tcp_send_ack+0x85/0x8d
[ 78.680039] [<c09bc712>] tcp_fin+0x7f/0x10a
[ 78.680039] [<c09bd012>] tcp_data_queue+0x1c4/0x7ff
[ 78.680039] [<c014e5c9>] ? mod_timer+0x35/0x39
[ 78.680039] [<c09bc49e>] ? tcp_urg+0xe/0x174
[ 78.680039] [<c0961986>] ? sk_reset_timer+0x14/0x22
[ 78.680039] [<c09bff27>] tcp_rcv_state_process+0x7ce/0x807
[ 78.680039] [<c09c4d8d>] tcp_v4_do_rcv+0x156/0x197
[ 78.680039] [<c09c5c13>] tcp_v4_rcv+0x48e/0x4e2
[ 78.680039] [<c09aec02>] ip_local_deliver_finish+0x120/0x1c3
[ 78.680039] [<c09aef21>] ip_local_deliver+0x5b/0x64
[ 78.680039] [<c09aea68>] ip_rcv_finish+0x26d/0x283
[ 78.680039] [<c09aee83>] ? ip_rcv+0x1de/0x221
[ 78.680039] [<c09aee92>] ip_rcv+0x1ed/0x221
[ 78.680039] [<c0969252>] netif_receive_skb+0x2df/0x312
[ 78.680039] [<c096953d>] napi_gro_receive+0x1bd/0x1c5
[ 78.680039] [<c01637ac>] ? trace_hardirqs_on_caller+0x10a/0x15f
[ 78.680039] [<c096b604>] process_backlog+0x6c/0x8d
[ 78.680039] [<c096bb85>] net_rx_action+0xce/0x1a8
[ 78.680039] [<c014a5d6>] __do_softirq+0x94/0x127
[ 78.680039] [<c014a6a8>] do_softirq+0x3f/0x57
[ 78.680039] [<c014ab0b>] irq_exit+0x4c/0x83
[ 78.680039] [<c011df67>] do_IRQ+0x97/0xb0
[ 78.680039] [<c011c5ec>] common_interrupt+0x2c/0x34
[ 93.926827] cc1 used greatest stack depth: 5332 bytes left
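Reading the splat: lockdep first saw key#8 -- the percpu counter's
internal fbc->lock -- taken with softirqs enabled in tcp_v4_init_sock(),
and recorded {softirq-on-W}; here it catches the same lock class taken
from inside the rx softirq via tcp_v4_destroy_sock(), i.e.
{in-softirq-W}. A stripped-down sketch of the two states being
correlated, where only the function names in the comments come from
the trace above:

	static DEFINE_SPINLOCK(lock);	/* stands in for fbc->lock / key#8 */

	static void init_side(void)	/* tcp_v4_init_sock(), process ctx */
	{
		spin_lock(&lock);	/* registered as {softirq-on-W} */
		spin_unlock(&lock);
	}

	static void rx_side(void)	/* net_rx_action() -> tcp_v4_destroy_sock() */
	{
		spin_lock(&lock);	/* observed {in-softirq-W}: deadlockable */
		spin_unlock(&lock);
	}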
* Re: [patch] locking, percpu counters: introduce separate lock classes
2008-12-29 14:14 ` Ingo Molnar
@ 2008-12-30 3:58 ` Herbert Xu
2008-12-30 6:05 ` Ingo Molnar
0 siblings, 1 reply; 30+ messages in thread
From: Herbert Xu @ 2008-12-30 3:58 UTC (permalink / raw)
To: Ingo Molnar
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet, linux-kernel
On Mon, Dec 29, 2008 at 03:14:17PM +0100, Ingo Molnar wrote:
>
> my testing efforts today are not particularly dominated by luck :-)
>
> Below is the latest splat that i got with Peter's patch plus the revert of
> dd24c00191 applied.
> [ 78.679386]
> [ 78.679389] =================================
> [ 78.680039] [ INFO: inconsistent lock state ]
> [ 78.680039] 2.6.28-tip-03885-g44c31d5-dirty #13188
> [ 78.680039] ---------------------------------
> [ 78.680039] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
> [ 78.680039] ssh/4054 [HC0[0]:SC1[1]:HE1:SE0] takes:
> [ 78.680039] (key#8){-+..}, at: [<c042710e>] __percpu_counter_add+0x52/0x7a
> [ 78.680039] {softirq-on-W} state was registered at:
> [ 78.680039] [<c0163e6e>] __lock_acquire+0x288/0xa93
> [ 78.680039] [<c01646d6>] lock_acquire+0x5d/0x7a
> [ 78.680039] [<c0ae0473>] _spin_lock+0x20/0x2f
> [ 78.680039] [<c042710e>] __percpu_counter_add+0x52/0x7a
> [ 78.680039] [<c09c3e2f>] percpu_counter_add+0xf/0x12
> [ 78.680039] [<c09c50ef>] tcp_v4_init_sock+0xe5/0xea
Right, this is the correct version of the earlier splat :)
Anyway, I've extended Peter's patch to cover the other cases.
Please let me know if it still bitches with this + Peter's fbc
patch.
net: Fix percpu counters deadlock
When we converted the protocol atomic counters, such as the orphan
count and the total socket count, deadlocks were introduced due to
the mismatch in BH status at the spots that used the percpu counter
operations.
Based on the diagnosis and patch by Peter Zijlstra, this patch
fixes these issues by disabling BH where we may be in process
context.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
diff --git a/net/dccp/proto.c b/net/dccp/proto.c
index d5c2bac..1747cca 100644
--- a/net/dccp/proto.c
+++ b/net/dccp/proto.c
@@ -964,7 +964,6 @@ adjudge_to_death:
state = sk->sk_state;
sock_hold(sk);
sock_orphan(sk);
- percpu_counter_inc(sk->sk_prot->orphan_count);
/*
* It is the last release_sock in its life. It will remove backlog.
@@ -978,6 +977,8 @@ adjudge_to_death:
bh_lock_sock(sk);
WARN_ON(sock_owned_by_user(sk));
+ percpu_counter_inc(sk->sk_prot->orphan_count);
+
/* Have we already been destroyed by a softirq or backlog? */
if (state != DCCP_CLOSED && sk->sk_state == DCCP_CLOSED)
goto out;
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index c7cda1c..f26ab38 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -633,8 +633,6 @@ void inet_csk_listen_stop(struct sock *sk)
acc_req = req->dl_next;
- percpu_counter_inc(sk->sk_prot->orphan_count);
-
local_bh_disable();
bh_lock_sock(child);
WARN_ON(sock_owned_by_user(child));
@@ -644,6 +642,8 @@ void inet_csk_listen_stop(struct sock *sk)
sock_orphan(child);
+ percpu_counter_inc(sk->sk_prot->orphan_count);
+
inet_csk_destroy_sock(child);
bh_unlock_sock(child);
diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 614958b..eb62e58 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -38,6 +38,7 @@
#include <net/tcp.h>
#include <net/udp.h>
#include <net/udplite.h>
+#include <linux/bottom_half.h>
#include <linux/inetdevice.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
@@ -50,13 +51,17 @@
static int sockstat_seq_show(struct seq_file *seq, void *v)
{
struct net *net = seq->private;
+ int orphans, sockets;
+
+ local_bh_disable();
+ orphans = percpu_counter_sum_positive(&tcp_orphan_count);
+ sockets = percpu_counter_sum_positive(&tcp_sockets_allocated);
+ local_bh_enable();
socket_seq_show(seq);
seq_printf(seq, "TCP: inuse %d orphan %d tw %d alloc %d mem %d\n",
- sock_prot_inuse_get(net, &tcp_prot),
- (int)percpu_counter_sum_positive(&tcp_orphan_count),
- tcp_death_row.tw_count,
- (int)percpu_counter_sum_positive(&tcp_sockets_allocated),
+ sock_prot_inuse_get(net, &tcp_prot), orphans,
+ tcp_death_row.tw_count, sockets,
atomic_read(&tcp_memory_allocated));
seq_printf(seq, "UDP: inuse %d mem %d\n",
sock_prot_inuse_get(net, &udp_prot),
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 1f3d529..f28acf1 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1836,7 +1836,6 @@ adjudge_to_death:
state = sk->sk_state;
sock_hold(sk);
sock_orphan(sk);
- percpu_counter_inc(sk->sk_prot->orphan_count);
/* It is the last release_sock in its life. It will remove backlog. */
release_sock(sk);
@@ -1849,6 +1848,8 @@ adjudge_to_death:
bh_lock_sock(sk);
WARN_ON(sock_owned_by_user(sk));
+ percpu_counter_inc(sk->sk_prot->orphan_count);
+
/* Have we already been destroyed by a softirq or backlog? */
if (state != TCP_CLOSE && sk->sk_state == TCP_CLOSE)
goto out;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 1017248..9d839fa 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -51,6 +51,7 @@
*/
+#include <linux/bottom_half.h>
#include <linux/types.h>
#include <linux/fcntl.h>
#include <linux/module.h>
@@ -1797,7 +1798,9 @@ static int tcp_v4_init_sock(struct sock *sk)
sk->sk_sndbuf = sysctl_tcp_wmem[1];
sk->sk_rcvbuf = sysctl_tcp_rmem[1];
+ local_bh_disable();
percpu_counter_inc(&tcp_sockets_allocated);
+ local_bh_enable();
return 0;
}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 8702b06..e8b8337 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -23,6 +23,7 @@
* 2 of the License, or (at your option) any later version.
*/
+#include <linux/bottom_half.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/types.h>
@@ -1830,7 +1831,9 @@ static int tcp_v6_init_sock(struct sock *sk)
sk->sk_sndbuf = sysctl_tcp_wmem[1];
sk->sk_rcvbuf = sysctl_tcp_rmem[1];
+ local_bh_disable();
percpu_counter_inc(&tcp_sockets_allocated);
+ local_bh_enable();
return 0;
}
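Note the two flavours of the fix: in tcp_close(), dccp_close() and
inet_csk_listen_stop() the orphan_count increment simply moves below
the local_bh_disable()/bh_lock_sock() pair, into a section where BH
is already off, while tcp_v{4,6}_init_sock() and the /proc reader
gain explicit local_bh_disable()/local_bh_enable(). Either way the
invariant is the same -- a sketch of the pattern, not a line from the
patch:

	local_bh_disable();	/* rx softirq cannot preempt and re-take fbc->lock */
	percpu_counter_inc(&tcp_sockets_allocated);
	local_bh_enable();

This keeps __percpu_counter_add() lock-free on the fast path and
avoids the heavier alternative of making fbc->lock irq-safe for all
users.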
Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [patch] locking, percpu counters: introduce separate lock classes
2008-12-30 3:58 ` Herbert Xu
@ 2008-12-30 6:05 ` Ingo Molnar
2008-12-30 6:39 ` David Miller
0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2008-12-30 6:05 UTC (permalink / raw)
To: Herbert Xu
Cc: Peter Zijlstra, Tantilov, Emil S, Kirsher, Jeffrey T, netdev,
David Miller, Waskiewicz Jr, Peter P, Duyck, Alexander H,
Eric Dumazet, linux-kernel
* Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Mon, Dec 29, 2008 at 03:14:17PM +0100, Ingo Molnar wrote:
> >
> > my testing efforts today are not particularly dominated by luck :-)
> >
> > Below is the latest splat that i got with Peter's patch plus the revert of
> > dd24c00191 applied.
>
> > [ 78.679386]
> > [ 78.679389] =================================
> > [ 78.680039] [ INFO: inconsistent lock state ]
> > [ 78.680039] 2.6.28-tip-03885-g44c31d5-dirty #13188
> > [ 78.680039] ---------------------------------
> > [ 78.680039] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
> > [ 78.680039] ssh/4054 [HC0[0]:SC1[1]:HE1:SE0] takes:
> > [ 78.680039] (key#8){-+..}, at: [<c042710e>] __percpu_counter_add+0x52/0x7a
> > [ 78.680039] {softirq-on-W} state was registered at:
> > [ 78.680039] [<c0163e6e>] __lock_acquire+0x288/0xa93
> > [ 78.680039] [<c01646d6>] lock_acquire+0x5d/0x7a
> > [ 78.680039] [<c0ae0473>] _spin_lock+0x20/0x2f
> > [ 78.680039] [<c042710e>] __percpu_counter_add+0x52/0x7a
> > [ 78.680039] [<c09c3e2f>] percpu_counter_add+0xf/0x12
> > [ 78.680039] [<c09c50ef>] tcp_v4_init_sock+0xe5/0xea
>
> Right, this is the correct version of the earlier splat :)
>
> Anyway, I've extended Peter's patch to cover the other cases.
> Please let me know if it still bitches with this + Peter's fbc
> patch.
>
> net: Fix percpu counters deadlock
>
> When we converted the protocol atomic counters, such as the orphan
> count and the total socket count, deadlocks were introduced due to
> the mismatch in BH status at the spots that used the percpu counter
> operations.
>
> Based on the diagnosis and patch by Peter Zijlstra, this patch
> fixes these issues by disabling BH where we may be in process
> context.
>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
thanks, will start testing it now.
One small nit: could you please add the Reported-by line for Jeff Kirsher
who reported the problem originally:
Reported-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Ingo
* Re: [patch] locking, percpu counters: introduce separate lock classes
2008-12-30 6:05 ` Ingo Molnar
@ 2008-12-30 6:39 ` David Miller
2008-12-30 6:56 ` Ingo Molnar
0 siblings, 1 reply; 30+ messages in thread
From: David Miller @ 2008-12-30 6:39 UTC (permalink / raw)
To: mingo
Cc: herbert, a.p.zijlstra, emil.s.tantilov, jeffrey.t.kirsher, netdev,
peter.p.waskiewicz.jr, alexander.h.duyck, dada1, linux-kernel
From: Ingo Molnar <mingo@elte.hu>
Date: Tue, 30 Dec 2008 07:05:36 +0100
> One small nit: could you please add the Reported-by line for Jeff Kirsher
> who reported the problem originally:
>
> Reported-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
I'll make sure to add that to the commit message after
you successfully test Herbert's patch.
Thanks.
* Re: [patch] locking, percpu counters: introduce separate lock classes
2008-12-30 6:39 ` David Miller
@ 2008-12-30 6:56 ` Ingo Molnar
2008-12-30 7:04 ` David Miller
0 siblings, 1 reply; 30+ messages in thread
From: Ingo Molnar @ 2008-12-30 6:56 UTC (permalink / raw)
To: David Miller
Cc: herbert, a.p.zijlstra, emil.s.tantilov, jeffrey.t.kirsher, netdev,
peter.p.waskiewicz.jr, alexander.h.duyck, dada1, linux-kernel
* David Miller <davem@davemloft.net> wrote:
> From: Ingo Molnar <mingo@elte.hu>
> Date: Tue, 30 Dec 2008 07:05:36 +0100
>
> > One small nit: could you please add the Reported-by line for Jeff Kirsher
> > who reported the problem originally:
> >
> > Reported-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
>
> I'll make sure to add that to the commit message after
> you successfully test Herbert's patch.
>
> Thanks.
early indications are good: after about 15 random bootup tests the lockdep
warning has not triggered. (it would trigger on every 2nd kernel before
that - there's a 66% chance for lockdep to be enabled in my randconfig
tests)
Ingo
* Re: [patch] locking, percpu counters: introduce separate lock classes
2008-12-30 6:56 ` Ingo Molnar
@ 2008-12-30 7:04 ` David Miller
2008-12-30 7:21 ` Ingo Molnar
0 siblings, 1 reply; 30+ messages in thread
From: David Miller @ 2008-12-30 7:04 UTC (permalink / raw)
To: mingo
Cc: herbert, a.p.zijlstra, emil.s.tantilov, jeffrey.t.kirsher, netdev,
peter.p.waskiewicz.jr, alexander.h.duyck, dada1, linux-kernel
From: Ingo Molnar <mingo@elte.hu>
Date: Tue, 30 Dec 2008 07:56:50 +0100
>
> * David Miller <davem@davemloft.net> wrote:
>
> > From: Ingo Molnar <mingo@elte.hu>
> > Date: Tue, 30 Dec 2008 07:05:36 +0100
> >
> > > One small nit: could you please add the Reported-by line for Jeff Kirsher
> > > who reported the problem originally:
> > >
> > > Reported-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> >
> > I'll make sure to add that to the commit message after
> > you successfully test Herbert's patch.
> >
> > Thanks.
>
> early indications are good: after about 15 random bootup tests the lockdep
> warning has not triggered. (it would trigger on every 2nd kernel before
> that - there's a 66% chance for lockdep to be enabled in my randconfig
> tests)
Sounds good. I'll toss this into my net-2.6 tree, with Jeff's
reported-by and your Tested-by, and send this off to Linus
tonight.
Thanks everyone.
* Re: [patch] locking, percpu counters: introduce separate lock classes
2008-12-30 7:04 ` David Miller
@ 2008-12-30 7:21 ` Ingo Molnar
0 siblings, 0 replies; 30+ messages in thread
From: Ingo Molnar @ 2008-12-30 7:21 UTC (permalink / raw)
To: David Miller
Cc: herbert, a.p.zijlstra, emil.s.tantilov, jeffrey.t.kirsher, netdev,
peter.p.waskiewicz.jr, alexander.h.duyck, dada1, linux-kernel
* David Miller <davem@davemloft.net> wrote:
> From: Ingo Molnar <mingo@elte.hu>
> Date: Tue, 30 Dec 2008 07:56:50 +0100
>
> >
> > * David Miller <davem@davemloft.net> wrote:
> >
> > > From: Ingo Molnar <mingo@elte.hu>
> > > Date: Tue, 30 Dec 2008 07:05:36 +0100
> > >
> > > > One small nit: could you please add the Reported-by line for Jeff Kirsher
> > > > who reported the problem originally:
> > > >
> > > > Reported-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> > >
> > > I'll make sure to add that to the commit message after
> > > you successfully test Herbert's patch.
> > >
> > > Thanks.
> >
> > early indications are good: after about 15 random bootup tests the lockdep
> > warning has not triggered. (it would trigger on every 2nd kernel before
> > that - there's a 66% chance for lockdep to be enabled in my randconfig
> > tests)
>
> Sounds good. I'll toss this into my net-2.6 tree, with Jeff's
> reported-by and your Tested-by, and send this off to Linus tonight.
cool. If by tonight you don't get a follow-up mail from me then you can
assume that the patch passed hundreds of tests here.
Ingo
end of thread
Thread overview: 30+ messages
2008-12-25 10:25 unsafe locks seen with netperf on net-2.6.29 tree Jeff Kirsher
2008-12-25 11:26 ` Herbert Xu
2008-12-26 14:08 ` Peter Zijlstra
2008-12-27 19:38 ` Tantilov, Emil S
2008-12-27 20:38 ` Peter Zijlstra
2008-12-28 0:54 ` Tantilov, Emil S
2008-12-29 10:02 ` Peter Zijlstra
2008-12-29 10:07 ` Herbert Xu
2008-12-29 10:16 ` Peter Zijlstra
2008-12-29 10:22 ` Herbert Xu
2008-12-29 10:31 ` Herbert Xu
2008-12-29 10:37 ` Herbert Xu
2008-12-29 11:28 ` Ingo Molnar
2008-12-29 11:31 ` Ingo Molnar
2008-12-29 11:49 ` Herbert Xu
2008-12-29 11:58 ` Ingo Molnar
2008-12-29 12:01 ` Herbert Xu
2008-12-29 12:16 ` Ingo Molnar
2008-12-29 12:38 ` Ingo Molnar
2008-12-29 12:44 ` [patch] locking, percpu counters: introduce separate lock classes Ingo Molnar
2008-12-29 14:14 ` Ingo Molnar
2008-12-30 3:58 ` Herbert Xu
2008-12-30 6:05 ` Ingo Molnar
2008-12-30 6:39 ` David Miller
2008-12-30 6:56 ` Ingo Molnar
2008-12-30 7:04 ` David Miller
2008-12-30 7:21 ` Ingo Molnar
2008-12-29 12:49 ` unsafe locks seen with netperf on net-2.6.29 tree Herbert Xu
2008-12-29 12:55 ` Ingo Molnar
2008-12-29 9:57 ` Ingo Molnar