* Inconsistent lock state in inet_frag_find
From: Eric Sesterhenn @ 2008-05-29 12:02 UTC
To: netdev
Hi,
the following just popped up on my test box with
tcpsic6 -s ::1 -d ::1 -p 100000 -r 4995
[ 63.616218] =================================
[ 63.616456] [ INFO: inconsistent lock state ]
[ 63.616456] 2.6.26-rc4 #5
[ 63.616456] ---------------------------------
[ 63.616456] inconsistent {softirq-on-W} -> {in-softirq-R} usage.
[ 63.616456] tcpsic6/3869 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 63.616456] (&f->lock){---?}, at: [<c06be62e>] inet_frag_find+0x1e/0x140
[ 63.616456] {softirq-on-W} state was registered at:
[ 63.616456] [<c0143b7a>] __lock_acquire+0x3aa/0x1080
[ 63.616456] [<c01448c6>] lock_acquire+0x76/0xa0
[ 63.616456] [<c07a8d7b>] _write_lock+0x2b/0x40
[ 63.616456] [<c06be6df>] inet_frag_find+0xcf/0x140
[ 63.616456] [<c072740c>] nf_ct_frag6_gather+0x3cc/0x900
[ 63.616456] [<c0726653>] ipv6_defrag+0x23/0x70
[ 63.616456] [<c0673563>] nf_iterate+0x53/0x80
[ 63.616456] [<c0673717>] nf_hook_slow+0xb7/0x100
[ 63.616456] [<c07102e9>] rawv6_sendmsg+0x719/0xc10
[ 63.616456] [<c06b6864>] inet_sendmsg+0x34/0x60
[ 63.616456] [<c06472df>] sock_sendmsg+0xff/0x120
[ 63.616456] [<c0647d95>] sys_sendto+0xa5/0xd0
[ 63.616456] [<c06486cb>] sys_socketcall+0x16b/0x290
[ 63.616456] [<c0103005>] sysenter_past_esp+0x6a/0xb1
[ 63.616456] [<ffffffff>] 0xffffffff
[ 63.616456] irq event stamp: 3590
[ 63.616456] hardirqs last enabled at (3590): [<c0127a7d>] local_bh_enable+0x7d/0xf0
[ 63.616456] hardirqs last disabled at (3589): [<c0127a27>] local_bh_enable+0x27/0xf0
[ 63.616456] softirqs last enabled at (3572): [<c0655674>] dev_queue_xmit+0xd4/0x370
[ 63.616456] softirqs last disabled at (3573): [<c0105814>] do_softirq+0x84/0xc0
[ 63.616456]
[ 63.616456] other info that might help us debug this:
[ 63.616456] 3 locks held by tcpsic6/3869:
[ 63.616456] #0: (rcu_read_lock){..--}, at: [<c0654b30>] net_rx_action+0x60/0x1c0
[ 63.616456] #1: (rcu_read_lock){..--}, at: [<c0652540>] netif_receive_skb+0x100/0x320
[ 63.616456] #2: (rcu_read_lock){..--}, at: [<c06fcb40>] ip6_input_finish+0x0/0x330
[ 63.616456]
[ 63.616456] stack backtrace:
[ 63.616456] Pid: 3869, comm: tcpsic6 Not tainted 2.6.26-rc4 #5
[ 63.616456] [<c0142313>] print_usage_bug+0x153/0x160
[ 63.616456] [<c0142ff9>] mark_lock+0x469/0x590
[ 63.616456] [<c0143c90>] __lock_acquire+0x4c0/0x1080
[ 63.616456] [<c0143a3d>] ? __lock_acquire+0x26d/0x1080
[ 63.616456] [<c0143a3d>] ? __lock_acquire+0x26d/0x1080
[ 63.616456] [<c01432b8>] ? trace_hardirqs_on+0x78/0x150
[ 63.616456] [<c07257d8>] ? ip6t_do_table+0x258/0x360
[ 63.616456] [<c01448c6>] lock_acquire+0x76/0xa0
[ 63.616456] [<c06be62e>] ? inet_frag_find+0x1e/0x140
[ 63.616456] [<c07a8e7b>] _read_lock+0x2b/0x40
[ 63.616456] [<c06be62e>] ? inet_frag_find+0x1e/0x140
[ 63.616456] [<c06be62e>] inet_frag_find+0x1e/0x140
[ 63.616456] [<c071739a>] ipv6_frag_rcv+0xba/0xbd0
[ 63.616456] [<c067bf1a>] ? nf_ct_deliver_cached_events+0x1a/0x80
[ 63.616456] [<c0726964>] ? ipv6_confirm+0xb4/0xe0
[ 63.616456] [<c06fcc5d>] ip6_input_finish+0x11d/0x330
[ 63.616456] [<c06fcb40>] ? ip6_input_finish+0x0/0x330
[ 63.616456] [<c06fcec7>] ip6_input+0x57/0x60
[ 63.616456] [<c06fcb40>] ? ip6_input_finish+0x0/0x330
[ 63.616456] [<c06fd154>] ipv6_rcv+0x1e4/0x340
[ 63.616456] [<c06fcf30>] ? ip6_rcv_finish+0x0/0x40
[ 63.616456] [<c06fcf70>] ? ipv6_rcv+0x0/0x340
[ 63.616456] [<c06526c0>] netif_receive_skb+0x280/0x320
[ 63.616456] [<c0652540>] ? netif_receive_skb+0x100/0x320
[ 63.616456] [<c06552ca>] process_backlog+0x6a/0xc0
[ 63.616456] [<c0654c09>] net_rx_action+0x139/0x1c0
[ 63.616456] [<c0654b30>] ? net_rx_action+0x60/0x1c0
[ 63.616456] [<c0127c72>] __do_softirq+0x52/0xb0
[ 63.616456] [<c0105814>] do_softirq+0x84/0xc0
[ 63.616456] [<c0127a95>] local_bh_enable+0x95/0xf0
[ 63.616456] [<c0655674>] dev_queue_xmit+0xd4/0x370
[ 63.616456] [<c06555d4>] ? dev_queue_xmit+0x34/0x370
[ 63.616456] [<c06fa1b0>] ip6_output_finish+0x70/0xc0
[ 63.616456] [<c06fa5cb>] ip6_output2+0xbb/0x1d0
[ 63.616456] [<c06fa140>] ? ip6_output_finish+0x0/0xc0
[ 63.616456] [<c06fac9e>] ip6_output+0x4fe/0xa40
[ 63.616456] [<c0725902>] ? ip6t_local_out_hook+0x22/0x30
[ 63.616456] [<c0673723>] ? nf_hook_slow+0xc3/0x100
[ 63.616456] [<c0673739>] ? nf_hook_slow+0xd9/0x100
[ 63.616456] [<c070eef0>] ? dst_output+0x0/0x10
[ 63.616456] [<c071065d>] rawv6_sendmsg+0xa8d/0xc10
[ 63.616456] [<c070eef0>] ? dst_output+0x0/0x10
[ 63.616456] [<c0143a3d>] ? __lock_acquire+0x26d/0x1080
[ 63.616456] [<c0143160>] ? mark_held_locks+0x40/0x80
[ 63.616456] [<c07a91a7>] ? _spin_unlock_irqrestore+0x47/0x60
[ 63.616456] [<c06b6864>] inet_sendmsg+0x34/0x60
[ 63.616456] [<c06472df>] sock_sendmsg+0xff/0x120
[ 63.616456] [<c0135970>] ? autoremove_wake_function+0x0/0x40
[ 63.616456] [<c01432f9>] ? trace_hardirqs_on+0xb9/0x150
[ 63.616456] [<c07a9062>] ? _read_unlock_irq+0x22/0x30
[ 63.616456] [<c0647d95>] sys_sendto+0xa5/0xd0
[ 63.616456] [<c016f311>] ? __do_fault+0x191/0x3a0
[ 63.616456] [<c06486cb>] sys_socketcall+0x16b/0x290
[ 63.616456] [<c0103005>] sysenter_past_esp+0x6a/0xb1
[ 63.616456] =======================
Greetings, Eric
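A rough gloss of the lockdep shorthand in the report above (a summary,
using the annotation names as they appear in the report):

  {softirq-on-W}   the lock was write-locked with softirqs enabled
  {in-softirq-R}   the lock was read-locked from inside softirq context
  HC0[0]:SC1[1]    not in hardirq context; in softirq context
  HE1:SE0          hardirqs enabled; softirqs disabled (we are in one)

Together these say that &f->lock is write-locked in process context
without disabling BHs, yet also read-locked from the softirq receive
path: a softirq arriving on the same CPU while the write lock is held
would spin on the lock forever.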
* Re: Inconsistent lock state in inet_frag_find
From: Jarek Poplawski @ 2008-05-30 10:53 UTC
To: Eric Sesterhenn; +Cc: netdev, Patrick McHardy
On 29-05-2008 14:02, Eric Sesterhenn wrote:
> Hi,
>
> the following just popped up on my test box with
> tcpsic6 -s ::1 -d ::1 -p 100000 -r 4995
>
> [ 63.616218] =================================
> [ 63.616456] [ INFO: inconsistent lock state ]
> [ 63.616456] 2.6.26-rc4 #5
> [ 63.616456] ---------------------------------
> [ 63.616456] inconsistent {softirq-on-W} -> {in-softirq-R} usage.
> [ 63.616456] tcpsic6/3869 [HC0[0]:SC1[1]:HE1:SE0] takes:
> [ 63.616456] (&f->lock){---?}, at: [<c06be62e>] inet_frag_find+0x1e/0x140
...
Hi,
Could you try this patch?
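The pattern it applies, as a minimal sketch (illustrative code with
made-up names, not the kernel source): a lock that is taken from softirq
context may only be taken in process context with BHs disabled, and when
the lock is acquired inside a callee (as inet_frag_find() acquires
f->lock), wrapping the call in local_bh_disable()/local_bh_enable() is
equivalent to using the _bh lock variants directly:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_RWLOCK(table_lock);	/* shared with softirq, like f->lock */

/* Softirq context, e.g. the packet receive path. */
static void softirq_reader(void)
{
	read_lock(&table_lock);
	/* ... hash lookup ... */
	read_unlock(&table_lock);
}

/* Process context, e.g. the local sendmsg path. */
static void process_context_writer(void)
{
	local_bh_disable();		/* keep softirq_reader() off this CPU */
	write_lock(&table_lock);
	/* ... insert into the hash ... */
	write_unlock(&table_lock);
	local_bh_enable();		/* pending softirqs may run again */
}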
Regards,
Jarek P.
---
net/ipv6/netfilter/nf_conntrack_reasm.c | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index 2dccad4..f3c36d6 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -209,7 +209,9 @@ fq_find(__be32 id, struct in6_addr *src, struct in6_addr *dst)
 	arg.dst = dst;
 	hash = ip6qhashfn(id, src, dst);
+	local_bh_disable();
 	q = inet_frag_find(&nf_init_frags, &nf_frags, &arg, hash);
+	local_bh_enable();
 	if (q == NULL)
 		goto oom;
* Re: Inconsistent lock state in inet_frag_find
From: Eric Sesterhenn @ 2008-05-30 13:18 UTC
To: Jarek Poplawski; +Cc: netdev, Patrick McHardy
* Jarek Poplawski (jarkao2@gmail.com) wrote:
> On 29-05-2008 14:02, Eric Sesterhenn wrote:
> > Hi,
> >
> > the following just popped up on my test box with
> > tcpsic6 -s ::1 -d ::1 -p 100000 -r 4995
> >
> > [ 63.616218] =================================
> > [ 63.616456] [ INFO: inconsistent lock state ]
> > [ 63.616456] 2.6.26-rc4 #5
> > [ 63.616456] ---------------------------------
> > [ 63.616456] inconsistent {softirq-on-W} -> {in-softirq-R} usage.
> > [ 63.616456] tcpsic6/3869 [HC0[0]:SC1[1]:HE1:SE0] takes:
> > [ 63.616456] (&f->lock){---?}, at: [<c06be62e>] inet_frag_find+0x1e/0x140
> ...
>
> Hi,
>
> Could you try this patch?
With the patch applied I get the following lockdep warning:
[ 63.531438] =================================
[ 63.531520] [ INFO: inconsistent lock state ]
[ 63.531520] 2.6.26-rc4 #7
[ 63.531520] ---------------------------------
[ 63.531520] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
[ 63.531520] tcpsic6/3864 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 63.531520] (&q->lock#2){-+..}, at: [<c07175b0>] ipv6_frag_rcv+0xd0/0xbd0
[ 63.531520] {softirq-on-W} state was registered at:
[ 63.531520] [<c0143bba>] __lock_acquire+0x3aa/0x1080
[ 63.531520] [<c0144906>] lock_acquire+0x76/0xa0
[ 63.531520] [<c07a8f0b>] _spin_lock+0x2b/0x40
[ 63.531520] [<c0727636>] nf_ct_frag6_gather+0x3f6/0x910
[ 63.531520] [<c0726853>] ipv6_defrag+0x23/0x70
[ 63.531520] [<c0673773>] nf_iterate+0x53/0x80
[ 63.531520] [<c0673927>] nf_hook_slow+0xb7/0x100
[ 63.531520] [<c07104f9>] rawv6_sendmsg+0x719/0xc10
[ 63.531520] [<c06b6a74>] inet_sendmsg+0x34/0x60
[ 63.531520] [<c06474df>] sock_sendmsg+0xff/0x120
[ 63.531520] [<c0647f95>] sys_sendto+0xa5/0xd0
[ 63.531520] [<c06488cb>] sys_socketcall+0x16b/0x290
[ 63.531520] [<c0103005>] sysenter_past_esp+0x6a/0xb1
[ 63.531520] [<ffffffff>] 0xffffffff
[ 63.531520] irq event stamp: 3344
[ 63.531520] hardirqs last enabled at (3344): [<c07a93b7>] _spin_unlock_irqrestore+0x47/0x60
[ 63.531520] hardirqs last disabled at (3343): [<c07a9296>] _spin_lock_irqsave+0x16/0x50
[ 63.531520] softirqs last enabled at (3320): [<c0655884>] dev_queue_xmit+0xd4/0x370
[ 63.531520] softirqs last disabled at (3321): [<c0105814>] do_softirq+0x84/0xc0
[ 63.531520]
[ 63.531520] other info that might help us debug this:
[ 63.531520] 3 locks held by tcpsic6/3864:
[ 63.531520] #0: (rcu_read_lock){..--}, at: [<c0654d40>] net_rx_action+0x60/0x1c0
[ 63.531520] #1: (rcu_read_lock){..--}, at: [<c0652750>] netif_receive_skb+0x100/0x320
[ 63.531520] #2: (rcu_read_lock){..--}, at: [<c06fcd50>] ip6_input_finish+0x0/0x330
[ 63.531520]
[ 63.531520] stack backtrace:
[ 63.531520] Pid: 3864, comm: tcpsic6 Not tainted 2.6.26-rc4 #7
[ 63.531520] [<c0142353>] print_usage_bug+0x153/0x160
[ 63.531520] [<c0143144>] mark_lock+0x574/0x590
[ 63.531520] [<c0143b75>] __lock_acquire+0x365/0x1080
[ 63.531520] [<c07a93b7>] ? _spin_unlock_irqrestore+0x47/0x60
[ 63.531520] [<c0144906>] lock_acquire+0x76/0xa0
[ 63.531520] [<c07175b0>] ? ipv6_frag_rcv+0xd0/0xbd0
[ 63.531520] [<c07a8f0b>] _spin_lock+0x2b/0x40
[ 63.531520] [<c07175b0>] ? ipv6_frag_rcv+0xd0/0xbd0
[ 63.531520] [<c07175b0>] ipv6_frag_rcv+0xd0/0xbd0
[ 63.531520] [<c067c12a>] ? nf_ct_deliver_cached_events+0x1a/0x80
[ 63.531520] [<c0726b64>] ? ipv6_confirm+0xb4/0xe0
[ 63.531520] [<c06fce6d>] ip6_input_finish+0x11d/0x330
[ 63.531520] [<c06fcd50>] ? ip6_input_finish+0x0/0x330
[ 63.531520] [<c06fd0d7>] ip6_input+0x57/0x60
[ 63.531520] [<c06fcd50>] ? ip6_input_finish+0x0/0x330
[ 63.531520] [<c06fd364>] ipv6_rcv+0x1e4/0x340
[ 63.531520] [<c06fd140>] ? ip6_rcv_finish+0x0/0x40
[ 63.531520] [<c06fd180>] ? ipv6_rcv+0x0/0x340
[ 63.531520] [<c06528d0>] netif_receive_skb+0x280/0x320
[ 63.531520] [<c0652750>] ? netif_receive_skb+0x100/0x320
[ 63.531520] [<c06554da>] process_backlog+0x6a/0xc0
[ 63.531520] [<c0654e19>] net_rx_action+0x139/0x1c0
[ 63.531520] [<c0654d40>] ? net_rx_action+0x60/0x1c0
[ 63.531520] [<c0127c92>] __do_softirq+0x52/0xb0
[ 63.531520] [<c0105814>] do_softirq+0x84/0xc0
[ 63.531520] [<c0127ab5>] local_bh_enable+0x95/0xf0
[ 63.531520] [<c0655884>] dev_queue_xmit+0xd4/0x370
[ 63.531520] [<c06557e4>] ? dev_queue_xmit+0x34/0x370
[ 63.531520] [<c06fa3c0>] ip6_output_finish+0x70/0xc0
[ 63.531520] [<c06fa7db>] ip6_output2+0xbb/0x1d0
[ 63.531520] [<c06fa350>] ? ip6_output_finish+0x0/0xc0
[ 63.531520] [<c06faeae>] ip6_output+0x4fe/0xa40
[ 63.531520] [<c0725b02>] ? ip6t_local_out_hook+0x22/0x30
[ 63.531520] [<c0673933>] ? nf_hook_slow+0xc3/0x100
[ 63.531520] [<c0673949>] ? nf_hook_slow+0xd9/0x100
[ 63.531520] [<c070f100>] ? dst_output+0x0/0x10
[ 63.531520] [<c071086d>] rawv6_sendmsg+0xa8d/0xc10
[ 63.531520] [<c070f100>] ? dst_output+0x0/0x10
[ 63.531520] [<c0143a7d>] ? __lock_acquire+0x26d/0x1080
[ 63.531520] [<c01431a0>] ? mark_held_locks+0x40/0x80
[ 63.531520] [<c07a93b7>] ? _spin_unlock_irqrestore+0x47/0x60
[ 63.531520] [<c06b6a74>] inet_sendmsg+0x34/0x60
[ 63.531520] [<c06474df>] sock_sendmsg+0xff/0x120
[ 63.531520] [<c0135990>] ? autoremove_wake_function+0x0/0x40
[ 63.531520] [<c0143339>] ? trace_hardirqs_on+0xb9/0x150
[ 63.531520] [<c07a9272>] ? _read_unlock_irq+0x22/0x30
[ 63.531520] [<c0647f95>] sys_sendto+0xa5/0xd0
[ 63.531520] [<c016f351>] ? __do_fault+0x191/0x3a0
[ 63.531520] [<c06488cb>] sys_socketcall+0x16b/0x290
[ 63.531520] [<c0103005>] sysenter_past_esp+0x6a/0xb1
[ 63.531520] =======================
Greetings, Eric
* Re: Inconsistent lock state in inet_frag_find
From: Jarek Poplawski @ 2008-05-30 17:10 UTC
To: Eric Sesterhenn; +Cc: netdev, Patrick McHardy
On Fri, May 30, 2008 at 03:18:45PM +0200, Eric Sesterhenn wrote:
...
> With the patch applied I get the following lockdep warning:
>
> [ 63.531438] =================================
> [ 63.531520] [ INFO: inconsistent lock state ]
> [ 63.531520] 2.6.26-rc4 #7
> [ 63.531520] ---------------------------------
> [ 63.531520] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
> [ 63.531520] tcpsic6/3864 [HC0[0]:SC1[1]:HE1:SE0] takes:
> [ 63.531520] (&q->lock#2){-+..}, at: [<c07175b0>] ipv6_frag_rcv+0xd0/0xbd0
> [ 63.531520] {softirq-on-W} state was registered at:
> [ 63.531520] [<c0143bba>] __lock_acquire+0x3aa/0x1080
> [ 63.531520] [<c0144906>] lock_acquire+0x76/0xa0
> [ 63.531520] [<c07a8f0b>] _spin_lock+0x2b/0x40
> [ 63.531520] [<c0727636>] nf_ct_frag6_gather+0x3f6/0x910
...
I hope it's not a Pandora's box: it looks like there are still a few
of these locks around, which could produce more such reports, but since
I don't know this code, I'd prefer not to get ahead of lockdep...
Thanks,
Jarek P.
(Take 2: please revert the previous patch before applying.)
---
net/ipv6/netfilter/nf_conntrack_reasm.c | 8 +++++---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index 2dccad4..e65e26e 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -209,7 +209,9 @@ fq_find(__be32 id, struct in6_addr *src, struct in6_addr *dst)
arg.dst = dst;
hash = ip6qhashfn(id, src, dst);
+ local_bh_disable();
q = inet_frag_find(&nf_init_frags, &nf_frags, &arg, hash);
+ local_bh_enable();
if (q == NULL)
goto oom;
@@ -638,10 +640,10 @@ struct sk_buff *nf_ct_frag6_gather(struct sk_buff *skb)
goto ret_orig;
}
- spin_lock(&fq->q.lock);
+ spin_lock_bh(&fq->q.lock);
if (nf_ct_frag6_queue(fq, clone, fhdr, nhoff) < 0) {
- spin_unlock(&fq->q.lock);
+ spin_unlock_bh(&fq->q.lock);
pr_debug("Can't insert skb to queue\n");
fq_put(fq);
goto ret_orig;
@@ -653,7 +655,7 @@ struct sk_buff *nf_ct_frag6_gather(struct sk_buff *skb)
if (ret_skb == NULL)
pr_debug("Can't reassemble fragmented packets\n");
}
- spin_unlock(&fq->q.lock);
+ spin_unlock_bh(&fq->q.lock);
fq_put(fq);
return ret_skb;
* Re: Inconsistent lock state in inet_frag_find
From: Eric Sesterhenn @ 2008-05-30 21:13 UTC
To: Jarek Poplawski; +Cc: netdev, Patrick McHardy
* Jarek Poplawski (jarkao2@gmail.com) wrote:
> On Fri, May 30, 2008 at 03:18:45PM +0200, Eric Sesterhenn wrote:
> ...
>> With the patch applied I get the following lockdep warning:
>>
>> [ 63.531438] =================================
>> [ 63.531520] [ INFO: inconsistent lock state ]
>> [ 63.531520] 2.6.26-rc4 #7
>> [ 63.531520] ---------------------------------
>> [ 63.531520] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
>> [ 63.531520] tcpsic6/3864 [HC0[0]:SC1[1]:HE1:SE0] takes:
>> [ 63.531520] (&q->lock#2){-+..}, at: [<c07175b0>] ipv6_frag_rcv+0xd0/0xbd0
>> [ 63.531520] {softirq-on-W} state was registered at:
>> [ 63.531520] [<c0143bba>] __lock_acquire+0x3aa/0x1080
>> [ 63.531520] [<c0144906>] lock_acquire+0x76/0xa0
>> [ 63.531520] [<c07a8f0b>] _spin_lock+0x2b/0x40
>> [ 63.531520] [<c0727636>] nf_ct_frag6_gather+0x3f6/0x910
> ...
>
> I hope it's not a Pandora's box: it looks like there are still a few
> of these locks around, which could produce more such reports, but
> since I don't know this code, I'd prefer not to get ahead of lockdep...
Looks like we got lucky :-)
> (Take 2: please revert the previous patch before applying.)
This one does the trick. I let tcpsic6 run a bit longer and didn't
see any more lockdep warnings.
Thanks, Eric
* [PATCH][NETFILTER]: fix inconsistent lock state in nf_ct_frag6_gather()
From: Jarek Poplawski @ 2008-05-30 21:53 UTC
To: Patrick McHardy, David Miller; +Cc: Eric Sesterhenn, netdev
On Fri, May 30, 2008 at 11:13:15PM +0200, Eric Sesterhenn wrote:
...
> This one does the trick. I let tcpsic6 run a bit longer and didn't
> see any more lockdep warnings.
>
> Thanks, Eric
David & Patrick,
It looks like this patch could be applied.
Thanks,
Jarek P.
-------------------------->
[NETFILTER]: fix inconsistent lock state in nf_ct_frag6_gather()
[ 63.531438] =================================
[ 63.531520] [ INFO: inconsistent lock state ]
[ 63.531520] 2.6.26-rc4 #7
[ 63.531520] ---------------------------------
[ 63.531520] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
[ 63.531520] tcpsic6/3864 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 63.531520] (&q->lock#2){-+..}, at: [<c07175b0>] ipv6_frag_rcv+0xd0/0xbd0
[ 63.531520] {softirq-on-W} state was registered at:
[ 63.531520] [<c0143bba>] __lock_acquire+0x3aa/0x1080
[ 63.531520] [<c0144906>] lock_acquire+0x76/0xa0
[ 63.531520] [<c07a8f0b>] _spin_lock+0x2b/0x40
[ 63.531520] [<c0727636>] nf_ct_frag6_gather+0x3f6/0x910
...
According to this and another similar lockdep report, the inet_fragment
locks are taken from nf_ct_frag6_gather() with softirqs enabled, but
these locks are mainly used in softirq context, so disabling BHs is
necessary.
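In other words, the lockup this avoids looks roughly like this on a
single CPU (a sketch, not code from the tree):

	spin_lock(&fq->q.lock);		/* process context, BHs enabled */
		/*
		 * A softirq fires on this CPU and the receive path
		 * (ipv6_frag_rcv in the trace) tries to take the same
		 * lock: it spins forever, since the holder below can
		 * never run again on this CPU.
		 */
	spin_unlock(&fq->q.lock);	/* never reached */

	spin_lock_bh(&fq->q.lock);	/* the fix: BHs off while held */
	/* ... critical section ... */
	spin_unlock_bh(&fq->q.lock);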
Reported-and-tested-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
---
net/ipv6/netfilter/nf_conntrack_reasm.c | 8 +++++---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index 2dccad4..e65e26e 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -209,7 +209,9 @@ fq_find(__be32 id, struct in6_addr *src, struct in6_addr *dst)
 	arg.dst = dst;
 	hash = ip6qhashfn(id, src, dst);
+	local_bh_disable();
 	q = inet_frag_find(&nf_init_frags, &nf_frags, &arg, hash);
+	local_bh_enable();
 	if (q == NULL)
 		goto oom;
@@ -638,10 +640,10 @@ struct sk_buff *nf_ct_frag6_gather(struct sk_buff *skb)
 		goto ret_orig;
 	}
-	spin_lock(&fq->q.lock);
+	spin_lock_bh(&fq->q.lock);
 	if (nf_ct_frag6_queue(fq, clone, fhdr, nhoff) < 0) {
-		spin_unlock(&fq->q.lock);
+		spin_unlock_bh(&fq->q.lock);
 		pr_debug("Can't insert skb to queue\n");
 		fq_put(fq);
 		goto ret_orig;
@@ -653,7 +655,7 @@ struct sk_buff *nf_ct_frag6_gather(struct sk_buff *skb)
 		if (ret_skb == NULL)
 			pr_debug("Can't reassemble fragmented packets\n");
 	}
-	spin_unlock(&fq->q.lock);
+	spin_unlock_bh(&fq->q.lock);
 	fq_put(fq);
 	return ret_skb;
* Re: [PATCH][NETFILTER]: fix inconsistent lock state in nf_ct_frag6_gather()
From: Patrick McHardy @ 2008-06-02 10:43 UTC
To: Jarek Poplawski; +Cc: David Miller, Eric Sesterhenn, netdev
Jarek Poplawski wrote:
> According to this and another similar lockdep report, the inet_fragment
> locks are taken from nf_ct_frag6_gather() with softirqs enabled, but
> these locks are mainly used in softirq context, so disabling BHs is
> necessary.
Yes, this can happen on the local output path.
> David & Patrick,
>
> It looks like this patch could be applied.
Looks mostly fine, but don't we also have to disable BHs for
the inet_frag_find() call in nf_ct_frag6_gather()?
* Re: [PATCH][NETFILTER]: fix inconsistent lock state in nf_ct_frag6_gather()
From: Jarek Poplawski @ 2008-06-02 12:45 UTC
To: Patrick McHardy; +Cc: David Miller, Eric Sesterhenn, netdev
On Mon, Jun 02, 2008 at 12:43:42PM +0200, Patrick McHardy wrote:
> Jarek Poplawski wrote:
>> According to this and another similar lockdep report, the inet_fragment
>> locks are taken from nf_ct_frag6_gather() with softirqs enabled, but
>> these locks are mainly used in softirq context, so disabling BHs is
>> necessary.
>
> Yes, this can happen on the local output path.
>
>> David & Patrick,
>>
>> It looks like this patch could be applied.
>
> Looks mostly fine, but don't we also have to disable BHs for
> the inet_frag_find() call in nf_ct_frag6_gather()?
>
Probably I'm missing something, but I think this patch does cover it:
I can see only one such call in nf_ct_frag6_gather() - via
fq_find()?
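For reference, the call chain in question, as the diff context suggests:

	nf_ct_frag6_gather()
	    -> fq_find()		/* the wrapper patched above */
		-> inet_frag_find()	/* now called with BHs disabled */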
BTW, it looks like this lockdep warning points at another possible
problem: inet_fragment.c uses an SMP "optimization" in
inet_frag_intern(), probably assuming softirq context; so without such
BH blocking around inet_frag_find(), preemption would at least have to
be disabled, but I don't know if it's a problem for nf_ct.
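The construct in question, roughly (a paraphrase of the 2.6.26-era
inet_frag_intern() with details elided; treat it as a sketch, not the
real code):

	write_lock(&f->lock);
#ifdef CONFIG_SMP
	/*
	 * Another CPU may have created the same entry while the lock
	 * was not held: the SMP build re-walks the hash chain here and
	 * reuses a matching entry instead of inserting a duplicate.
	 */
#endif
	/*
	 * On UP that recheck is compiled out, which is only safe if the
	 * caller cannot be preempted between the failed lookup and this
	 * insert.  That holds in softirq context, and local_bh_disable()
	 * in the patched fq_find() (which also disables preemption)
	 * restores it for process context.
	 */
	hlist_add_head(&q->list, &f->hash[hash]);
	write_unlock(&f->lock);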
Regards,
Jarek P.
* Re: [PATCH][NETFILTER]: fix inconsistent lock state in nf_ct_frag6_gather()
From: Patrick McHardy @ 2008-06-02 13:24 UTC
To: Jarek Poplawski; +Cc: David Miller, Eric Sesterhenn, netdev
Jarek Poplawski wrote:
> On Mon, Jun 02, 2008 at 12:43:42PM +0200, Patrick McHardy wrote:
>> Jarek Poplawski wrote:
>>> According to this and another similar lockdep report, the inet_fragment
>>> locks are taken from nf_ct_frag6_gather() with softirqs enabled, but
>>> these locks are mainly used in softirq context, so disabling BHs is
>>> necessary.
>> Yes, this can happen on the local output path.
>>
>>> David & Patrick,
>>>
>>> It looks like this patch could be applied.
>> Looks mostly fine, but don't we also have to disable BHs for
>> the inet_frag_find() call in nf_ct_frag6_gather()?
>>
>
> Probably I'm missing something, but I think this patch does cover it:
> I can see only one such call in nf_ct_frag6_gather() - via
> fq_find()?
Right, I missed that, sorry.
> BTW, it looks like this lockdep warning points at another possible
> problem: inet_fragment.c uses an SMP "optimization" in
> inet_frag_intern(), probably assuming softirq context; so without such
> BH blocking around inet_frag_find(), preemption would at least have to
> be disabled, but I don't know if it's a problem for nf_ct.
Yes, good catch. With your patch things should be fine however.
I've applied your patch and will also push it to -stable.
^ permalink raw reply [flat|nested] 9+ messages in thread