[PATCH v2] 9p/fd: set req refcount to zero to avoid uninitialized usage
From: Schspa Shi @ 2022-11-30 13:08 UTC
  To: ericvh, lucho, asmadeus, linux_oss, davem, edumazet, kuba, pabeni
  Cc: v9fs-developer, netdev, linux-kernel, Schspa Shi,
	syzbot+8f1060e2aaf8ca55220b

When the transport layer cancels a request, the request is removed on the
client side, but the server may still send a response carrying the now-freed
tag.

When a new request is allocated, it is added to the idr and the id returned
by the idr is used as the tag, which is very likely to be the same as the
freed tag. However, the refcount is only initialized after the request has
been added to the idr.

If p9_read_work receives a response before the refcount is initialized, it
will use an uninitialized req, which results in a corrupted request data
structure.
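
A rough sketch of the lookup path, paraphrasing p9_tag_lookup() in
net/9p/client.c (simplified; the exact code differs in detail):

  rcu_read_lock();
  req = idr_find(&c->reqs, tag);
  /* If req->refcount still holds uninitialized garbage, this check can
   * succeed and hand out a half-initialized request.
   */
  if (req && !refcount_inc_not_zero(&req->refcount))
          req = NULL;
  rcu_read_unlock();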

Here is the log from syzbot:

Corrupted memory at 0xffff88807eade00b [ 0xff 0x07 0x00 0x00 0x00 0x00
0x00 0x00 . . . . . . . . ] (in kfence-#110):
 p9_fcall_fini net/9p/client.c:248 [inline]
 p9_req_put net/9p/client.c:396 [inline]
 p9_req_put+0x208/0x250 net/9p/client.c:390
 p9_client_walk+0x247/0x540 net/9p/client.c:1165
 clone_fid fs/9p/fid.h:21 [inline]
 v9fs_fid_xattr_set+0xe4/0x2b0 fs/9p/xattr.c:118
 v9fs_xattr_set fs/9p/xattr.c:100 [inline]
 v9fs_xattr_handler_set+0x6f/0x120 fs/9p/xattr.c:159
 __vfs_setxattr+0x119/0x180 fs/xattr.c:182
 __vfs_setxattr_noperm+0x129/0x5f0 fs/xattr.c:216
 __vfs_setxattr_locked+0x1d3/0x260 fs/xattr.c:277
 vfs_setxattr+0x143/0x340 fs/xattr.c:309
 setxattr+0x146/0x160 fs/xattr.c:617
 path_setxattr+0x197/0x1c0 fs/xattr.c:636
 __do_sys_setxattr fs/xattr.c:652 [inline]
 __se_sys_setxattr fs/xattr.c:648 [inline]
 __ia32_sys_setxattr+0xc0/0x160 fs/xattr.c:648
 do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
 __do_fast_syscall_32+0x65/0xf0 arch/x86/entry/common.c:178
 do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203
 entry_SYSENTER_compat_after_hwframe+0x70/0x82

Below is a similar scenario. The scenario in the syzbot log looks more
complicated than this one, but this patch should fix it as well.

     T21124                   p9_read_work
======================== second trans =================================
p9_client_walk
  p9_client_rpc
    p9_client_prepare_req
      p9_tag_alloc
        req = kmem_cache_alloc(p9_req_cache, GFP_NOFS);
        tag = idr_alloc
        << preempted >>
        req->tc.tag = tag;
                            /* req->[refcount/tag] == uninitialized */
                            m->rreq = p9_tag_lookup(m->client, m->rc.tag);

        refcount_set(&req->refcount, 2);
                            << do response/error >>
                            p9_req_put(m->client, m->rreq);
                            /* req->refcount == 1 */

    /* req->refcount == 1 */
    << got a bad refcount >>

To fix it, initialize the refcount to zero before adding the request to
the idr.
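
With the patch, the ordering in p9_tag_alloc() roughly becomes (simplified
sketch; locking, error handling and exact arguments omitted):

  req = kmem_cache_alloc(p9_req_cache, GFP_NOFS); /* memory is not zeroed */
  ...
  refcount_set(&req->refcount, 0); /* before req becomes visible in the idr */
  tag = idr_alloc(...);            /* req now reachable via p9_tag_lookup() */
  req->tc.tag = tag;
  ...
  refcount_set(&req->refcount, 2); /* fully initialized; lookups may now succeed */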

Reported-by: syzbot+8f1060e2aaf8ca55220b@syzkaller.appspotmail.com

Signed-off-by: Schspa Shi <schspa@gmail.com>
---
 net/9p/client.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/net/9p/client.c b/net/9p/client.c
index aaa37b07e30a..a72cb597a8ab 100644
--- a/net/9p/client.c
+++ b/net/9p/client.c
@@ -297,6 +297,10 @@ p9_tag_alloc(struct p9_client *c, int8_t type, uint t_size, uint r_size,
 	p9pdu_reset(&req->rc);
 	req->t_err = 0;
 	req->status = REQ_STATUS_ALLOC;
+	/* p9_tag_lookup relies on this refcount to be zero to avoid
+	 * getting a freed request.
+	 */
+	refcount_set(&req->refcount, 0);
 	init_waitqueue_head(&req->wq);
 	INIT_LIST_HEAD(&req->req_list);
 
-- 
2.37.3


