From: Simon Horman <horms@kernel.org>
To: Eric Dumazet <edumazet@google.com>
Cc: Wang Liang <wangliang74@huawei.com>,
davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
dsahern@kernel.org, kuniyu@amazon.com, luoxuanqiang@kylinos.cn,
kernelxing@tencent.com, kirjanov@gmail.com,
yuehaibing@huawei.com, zhangchangzhong@huawei.com,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH net v2] net: fix data-races around sk->sk_forward_alloc
Date: Wed, 6 Nov 2024 15:14:01 +0000 [thread overview]
Message-ID: <20241106151401.GA120192@kernel.org> (raw)
In-Reply-To: <CANn89iJ8mOqtOkMvrn6c892XrA_m3uf5FabmDWzA_pk-tTMCzw@mail.gmail.com>
On Tue, Nov 05, 2024 at 10:52:34AM +0100, Eric Dumazet wrote:
> On Tue, Nov 5, 2024 at 8:46 AM Wang Liang <wangliang74@huawei.com> wrote:
> >
> > Syzkaller reported this warning:
> > ------------[ cut here ]------------
> > WARNING: CPU: 0 PID: 16 at net/ipv4/af_inet.c:156 inet_sock_destruct+0x1c5/0x1e0
> > Modules linked in:
> > CPU: 0 UID: 0 PID: 16 Comm: ksoftirqd/0 Not tainted 6.12.0-rc5 #26
> > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
> > RIP: 0010:inet_sock_destruct+0x1c5/0x1e0
> > Code: 24 12 4c 89 e2 5b 48 c7 c7 98 ec bb 82 41 5c e9 d1 18 17 ff 4c 89 e6 5b 48 c7 c7 d0 ec bb 82 41 5c e9 bf 18 17 ff 0f 0b eb 83 <0f> 0b eb 97 0f 0b eb 87 0f 0b e9 68 ff ff ff 66 66 2e 0f 1f 84 00
> > RSP: 0018:ffffc9000008bd90 EFLAGS: 00010206
> > RAX: 0000000000000300 RBX: ffff88810b172a90 RCX: 0000000000000007
> > RDX: 0000000000000002 RSI: 0000000000000300 RDI: ffff88810b172a00
> > RBP: ffff88810b172a00 R08: ffff888104273c00 R09: 0000000000100007
> > R10: 0000000000020000 R11: 0000000000000006 R12: ffff88810b172a00
> > R13: 0000000000000004 R14: 0000000000000000 R15: ffff888237c31f78
> > FS: 0000000000000000(0000) GS:ffff888237c00000(0000) knlGS:0000000000000000
> > CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > CR2: 00007ffc63fecac8 CR3: 000000000342e000 CR4: 00000000000006f0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > Call Trace:
> > <TASK>
> > ? __warn+0x88/0x130
> > ? inet_sock_destruct+0x1c5/0x1e0
> > ? report_bug+0x18e/0x1a0
> > ? handle_bug+0x53/0x90
> > ? exc_invalid_op+0x18/0x70
> > ? asm_exc_invalid_op+0x1a/0x20
> > ? inet_sock_destruct+0x1c5/0x1e0
> > __sk_destruct+0x2a/0x200
> > rcu_do_batch+0x1aa/0x530
> > ? rcu_do_batch+0x13b/0x530
> > rcu_core+0x159/0x2f0
> > handle_softirqs+0xd3/0x2b0
> > ? __pfx_smpboot_thread_fn+0x10/0x10
> > run_ksoftirqd+0x25/0x30
> > smpboot_thread_fn+0xdd/0x1d0
> > kthread+0xd3/0x100
> > ? __pfx_kthread+0x10/0x10
> > ret_from_fork+0x34/0x50
> > ? __pfx_kthread+0x10/0x10
> > ret_from_fork_asm+0x1a/0x30
> > </TASK>
> > ---[ end trace 0000000000000000 ]---
> >
> > It's possible for two threads to call tcp_v6_do_rcv()/sk_forward_alloc_add()
> > concurrently when sk->sk_state == TCP_LISTEN and the socket lock is not
> > held, which triggers data-races on sk->sk_forward_alloc:
> > tcp_v6_rcv
> >   tcp_v6_do_rcv
> >     skb_clone_and_charge_r
> >       sk_rmem_schedule
> >         __sk_mem_schedule
> >           sk_forward_alloc_add()
> >       skb_set_owner_r
> >         sk_mem_charge
> >           sk_forward_alloc_add()
> >     __kfree_skb
> >       skb_release_all
> >         skb_release_head_state
> >           sock_rfree
> >             sk_mem_uncharge
> >               sk_forward_alloc_add()
> >               sk_mem_reclaim
> >                 // set local var reclaimable
> >                 __sk_mem_reclaim
> >                   sk_forward_alloc_add()
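> >
> > For context, sk_forward_alloc_add() (include/net/sock.h, roughly as of
> > v6.12) is a plain read-modify-write; WRITE_ONCE() only annotates the
> > store for KCSAN, it does not make the addition atomic, so the helper is
> > only safe when the socket lock serializes its callers. A sketch with
> > the race window marked in comments:
> >
> >     static inline void sk_forward_alloc_add(struct sock *sk, int val)
> >     {
> >             /* Without the socket lock, two CPUs can both read the
> >              * same old value of sk->sk_forward_alloc here ...
> >              */
> >             WRITE_ONCE(sk->sk_forward_alloc, sk->sk_forward_alloc + val);
> >             /* ... and one of the two updates is then lost. */
> >     }
> >
> > On a TCP_LISTEN socket this path runs in softirq context without
> > owning the socket lock (see the Fixes: commit below), so the
> > charge/uncharge calls from the two CPUs interleave freely.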
> >
> > In this syzkaller testcase, two threads call tcp_v6_do_rcv() with
> > skb->truesize=768, and sk_forward_alloc changes like this:
> > (cpu 1)             | (cpu 2)             | sk_forward_alloc
> > ...                 | ...                 | 0
> > __sk_mem_schedule() |                     | +4096 = 4096
> >                     | __sk_mem_schedule() | +4096 = 8192
> > sk_mem_charge()     |                     |  -768 = 7424
> >                     | sk_mem_charge()     |  -768 = 6656
> > ...                 | ...                 |
> > sk_mem_uncharge()   |                     |  +768 = 7424
> > reclaimable=7424    |                     |
> >                     | sk_mem_uncharge()   |  +768 = 8192
> >                     | reclaimable=8192    |
> > __sk_mem_reclaim()  |                     | -4096 = 4096
> >                     | __sk_mem_reclaim()  | -8192 = -4096 != 0
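> >
> > The -4096 vs. -8192 steps come from page rounding in the reclaim
> > path. Sketched from roughly v6.12 sources (abridged; accounting-class
> > checks omitted): sk_mem_reclaim() samples sk_forward_alloc into a
> > local variable without any lock, and __sk_mem_reclaim() then subtracts
> > that possibly stale amount rounded down to whole pages:
> >
> >     static inline void sk_mem_reclaim(struct sock *sk)
> >     {
> >             /* Lockless sample: may already be stale by the time
> >              * the subtraction below is applied.
> >              */
> >             int reclaimable = sk->sk_forward_alloc -
> >                               sk_unused_reserved_mem(sk);
> >
> >             if (reclaimable >= (int)PAGE_SIZE)
> >                     __sk_mem_reclaim(sk, reclaimable);
> >     }
> >
> >     void __sk_mem_reclaim(struct sock *sk, int amount)
> >     {
> >             amount >>= PAGE_SHIFT;  /* 7424 -> 1 page, 8192 -> 2 pages */
> >             sk_forward_alloc_add(sk, -(amount << PAGE_SHIFT));
> >             sk_mem_reduce_allocated(sk, amount);
> >     }
> >
> > So cpu 1 reclaims only 4096 (one page, from its stale
> > reclaimable=7424) while cpu 2 reclaims 8192, leaving sk_forward_alloc
> > at -4096 and tripping the WARN_ON in inet_sock_destruct() when the
> > socket is destroyed, as in the trace above.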
> >
> > skb_clone_and_charge_r() should not be called in tcp_v6_do_rcv() when
> > sk->sk_state is TCP_LISTEN; the charging happens later, in
> > tcp_v6_syn_recv_sock(). Fix the same issue in dccp_v6_do_rcv().
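> >
> > A plausible shape for the fix (the actual hunk is not quoted in this
> > message and may differ in detail) is to guard the IPV6_PKTOPTIONS
> > clone with a state check:
> >
> >     /* In tcp_v6_do_rcv(): skip the clone-and-charge on a listener,
> >      * where the socket lock is not held; the equivalent charge is
> >      * done later, under proper locking, in tcp_v6_syn_recv_sock().
> >      */
> >     if (np->rxopt.all && sk->sk_state != TCP_LISTEN)
> >             opt_skb = skb_clone_and_charge_r(skb, sk);
> >
> > with the analogous DCCP_LISTEN check in dccp_v6_do_rcv().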
> >
> > Suggested-by: Eric Dumazet <edumazet@google.com>
> > Fixes: e994b2f0fb92 ("tcp: do not lock listener to process SYN packets")
> > Signed-off-by: Wang Liang <wangliang74@huawei.com>
>
> Reviewed-by: Eric Dumazet <edumazet@google.com>
Hi Wang Liang,

Please post a non-RFC variant of this patch so it can be considered for
inclusion in net. And please include Eric's Reviewed-by tag.

Thanks!
Thread overview: 4+ messages
2024-11-05 8:03 [RFC PATCH net v2] net: fix data-races around sk->sk_forward_alloc Wang Liang
2024-11-05 9:52 ` Eric Dumazet
2024-11-06 15:14 ` Simon Horman [this message]
2024-11-08 1:34 ` Wang Liang