From: Kuniyuki Iwashima <kuniyu@amazon.com>
To: "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>,
	Kuniyuki Iwashima <kuni1840@gmail.com>, <netdev@vger.kernel.org>,
	syzbot <syzkaller@googlegroups.com>,
	Wei Chen <harperchen1110@gmail.com>
Subject: [PATCH v1 net] af_unix: Call sk_diag_fill() under the bucket lock.
Date: Tue, 22 Nov 2022 12:58:11 -0800
Message-ID: <20221122205811.20910-1-kuniyu@amazon.com>

Wei Chen reported that sk->sk_socket can be NULL in sk_user_ns(). [0][1]

It seems that syzbot was dumping an AF_UNIX socket while it was being
closed, hitting the race below.

  unix_release_sock               unix_diag_handler_dump
  |                               `- unix_diag_get_exact
  |                                  |- unix_lookup_by_ino
  |                                  |  |- spin_lock(&net->unx.table.locks[i])
  |                                  |  |- sock_hold
  |                                  |  `- spin_unlock(&net->unx.table.locks[i])
  |- unix_remove_socket(net, sk)     |     /* after releasing this lock,
  |  /* from here, sk cannot be      |      * there is no guarantee that
  |   * seen in the hash table.      |      * sk is not SOCK_DEAD.
  |   */                             |      */
  |                                  |
  |- unix_state_lock(sk)             |
  |- sock_orphan(sk)                 `- sk_diag_fill
  |  |- sock_set_flag(sk, SOCK_DEAD)    `- sk_diag_dump_uid
  |  `- sk_set_socket(sk, NULL)            `- sk_user_ns
  `- unix_state_unlock(sk)                   `- sk->sk_socket->file->f_cred->user_ns
                                                /* NULL deref here */

Once the bucket lock is released, there is no guarantee that the found
socket is still alive.  Thus, we would have to check the SOCK_DEAD flag
under unix_state_lock() and keep holding the lock while accessing the
socket.
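
For reference, that usual pattern would look roughly like the sketch
below (illustration only, not part of this patch; error handling
trimmed):

  unix_state_lock(sk);
  if (sock_flag(sk, SOCK_DEAD)) {
          /* unix_release_sock() has already orphaned the socket,
           * so sk->sk_socket may be NULL; give up on dumping it.
           */
          unix_state_unlock(sk);
          return -ENOENT;
  }
  /* ... safe to dereference sk->sk_socket here ... */
  unix_state_unlock(sk);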

In this case, however, we cannot acquire unix_state_lock() in
unix_lookup_by_ino() because the same lock is taken again later in
sk_diag_dump_peer(), which would result in a deadlock.

Instead, let's not release the bucket lock; then unix_release_sock()
cannot remove the socket from the hash table and mark it SOCK_DEAD, so
we can safely access sk->sk_socket later in sk_user_ns(), and no
deadlock is possible.  We already use this strategy in unix_diag_dump().

Note that we have to call nlmsg_new() before unix_lookup_by_ino() so
that we do not need to change the allocation flag from GFP_KERNEL to
GFP_ATOMIC while holding the bucket lock.
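
The resulting flow in unix_diag_get_exact() is roughly the following
(a condensed sketch of the diff below; the retry path for a too-small
buffer and the error labels are omitted):

  rep = nlmsg_new(sizeof(struct unix_diag_msg) + extra_len, GFP_KERNEL);

  /* unix_lookup_by_ino() now returns with the bucket lock held. */
  sk = unix_lookup_by_ino(net, req->udiag_ino);

  err = sock_diag_check_cookie(sk, req->udiag_cookie);

  err = sk_diag_fill(sk, rep, req, NETLINK_CB(in_skb).portid,
                     nlh->nlmsg_seq, 0, req->udiag_ino);

  /* Only now may unix_release_sock() remove sk and orphan it. */
  spin_unlock(&net->unx.table.locks[sk->sk_hash]);

  err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid);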

[0]: https://lore.kernel.org/netdev/CAO4mrfdvyjFpokhNsiwZiP-wpdSD0AStcJwfKcKQdAALQ9_2Qw@mail.gmail.com/
[1]:
BUG: kernel NULL pointer dereference, address: 0000000000000270
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 12bbce067 P4D 12bbce067 PUD 12bc40067 PMD 0
Oops: 0000 [#1] PREEMPT SMP
CPU: 0 PID: 27942 Comm: syz-executor.0 Not tainted 6.1.0-rc5-next-20221118 #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
RIP: 0010:sk_user_ns include/net/sock.h:920 [inline]
RIP: 0010:sk_diag_dump_uid net/unix/diag.c:119 [inline]
RIP: 0010:sk_diag_fill+0x77d/0x890 net/unix/diag.c:170
Code: 89 ef e8 66 d4 2d fd c7 44 24 40 00 00 00 00 49 8d 7c 24 18 e8
54 d7 2d fd 49 8b 5c 24 18 48 8d bb 70 02 00 00 e8 43 d7 2d fd <48> 8b
9b 70 02 00 00 48 8d 7b 10 e8 33 d7 2d fd 48 8b 5b 10 48 8d
RSP: 0018:ffffc90000d67968 EFLAGS: 00010246
RAX: ffff88812badaa48 RBX: 0000000000000000 RCX: ffffffff840d481d
RDX: 0000000000000465 RSI: 0000000000000000 RDI: 0000000000000270
RBP: ffffc90000d679a8 R08: 0000000000000277 R09: 0000000000000000
R10: 0001ffffffffffff R11: 0001c90000d679a8 R12: ffff88812ac03800
R13: ffff88812c87c400 R14: ffff88812ae42210 R15: ffff888103026940
FS:  00007f08b4e6f700(0000) GS:ffff88813bc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000270 CR3: 000000012c58b000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 unix_diag_get_exact net/unix/diag.c:285 [inline]
 unix_diag_handler_dump+0x3f9/0x500 net/unix/diag.c:317
 __sock_diag_cmd net/core/sock_diag.c:235 [inline]
 sock_diag_rcv_msg+0x237/0x250 net/core/sock_diag.c:266
 netlink_rcv_skb+0x13e/0x250 net/netlink/af_netlink.c:2564
 sock_diag_rcv+0x24/0x40 net/core/sock_diag.c:277
 netlink_unicast_kernel net/netlink/af_netlink.c:1330 [inline]
 netlink_unicast+0x5e9/0x6b0 net/netlink/af_netlink.c:1356
 netlink_sendmsg+0x739/0x860 net/netlink/af_netlink.c:1932
 sock_sendmsg_nosec net/socket.c:714 [inline]
 sock_sendmsg net/socket.c:734 [inline]
 ____sys_sendmsg+0x38f/0x500 net/socket.c:2476
 ___sys_sendmsg net/socket.c:2530 [inline]
 __sys_sendmsg+0x197/0x230 net/socket.c:2559
 __do_sys_sendmsg net/socket.c:2568 [inline]
 __se_sys_sendmsg net/socket.c:2566 [inline]
 __x64_sys_sendmsg+0x42/0x50 net/socket.c:2566
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x4697f9
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48
89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d
01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f08b4e6ec48 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 000000000077bf80 RCX: 00000000004697f9
RDX: 0000000000000000 RSI: 00000000200001c0 RDI: 0000000000000003
RBP: 00000000004d29e9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000077bf80
R13: 0000000000000000 R14: 000000000077bf80 R15: 00007ffdb36bc6c0
 </TASK>
Modules linked in:
CR2: 0000000000000270

Fixes: 5d3cae8bc39d ("unix_diag: Dumping exact socket core")
Reported-by: syzbot <syzkaller@googlegroups.com>
Reported-by: Wei Chen <harperchen1110@gmail.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
---
 net/unix/diag.c | 38 +++++++++++++++++++++++---------------
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/net/unix/diag.c b/net/unix/diag.c
index 105f522a89fe..96583cb71cf5 100644
--- a/net/unix/diag.c
+++ b/net/unix/diag.c
@@ -242,8 +242,9 @@ static struct sock *unix_lookup_by_ino(struct net *net, unsigned int ino)
 		spin_lock(&net->unx.table.locks[i]);
 		sk_for_each(sk, &net->unx.table.buckets[i]) {
 			if (ino == sock_i_ino(sk)) {
-				sock_hold(sk);
-				spin_unlock(&net->unx.table.locks[i]);
+				/* sk_diag_fill() must be done under the bucket
+				 * lock not to race with unix_release_sock().
+				 */
 				return sk;
 			}
 		}
@@ -264,15 +265,6 @@ static int unix_diag_get_exact(struct sk_buff *in_skb,
 
 	err = -EINVAL;
 	if (req->udiag_ino == 0)
-		goto out_nosk;
-
-	sk = unix_lookup_by_ino(net, req->udiag_ino);
-	err = -ENOENT;
-	if (sk == NULL)
-		goto out_nosk;
-
-	err = sock_diag_check_cookie(sk, req->udiag_cookie);
-	if (err)
 		goto out;
 
 	extra_len = 256;
@@ -282,8 +274,21 @@ static int unix_diag_get_exact(struct sk_buff *in_skb,
 	if (!rep)
 		goto out;
 
+	/* Acquire a bucket lock on success. */
+	sk = unix_lookup_by_ino(net, req->udiag_ino);
+	err = -ENOENT;
+	if (!sk)
+		goto free;
+
+	err = sock_diag_check_cookie(sk, req->udiag_cookie);
+	if (err)
+		goto unlock;
+
 	err = sk_diag_fill(sk, rep, req, NETLINK_CB(in_skb).portid,
 			   nlh->nlmsg_seq, 0, req->udiag_ino);
+
+	spin_unlock(&net->unx.table.locks[sk->sk_hash]);
+
 	if (err < 0) {
 		nlmsg_free(rep);
 		extra_len += 256;
@@ -292,13 +297,16 @@ static int unix_diag_get_exact(struct sk_buff *in_skb,
 
 		goto again;
 	}
-	err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid);
 
+	err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid);
 out:
-	if (sk)
-		sock_put(sk);
-out_nosk:
 	return err;
+
+unlock:
+	spin_unlock(&net->unx.table.locks[sk->sk_hash]);
+free:
+	nlmsg_free(rep);
+	goto out;
 }
 
 static int unix_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
-- 
2.30.2

