From: libaokun@huaweicloud.com
To: netfs@lists.linux.dev, dhowells@redhat.com, jlayton@kernel.org
Cc: hsiangkao@linux.alibaba.com, jefflexu@linux.alibaba.com,
zhujia.zj@bytedance.com, linux-erofs@lists.ozlabs.org,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
libaokun@huaweicloud.com, yangerkun@huawei.com,
houtao1@huawei.com, yukuai3@huawei.com, wozizhi@huawei.com,
Baokun Li <libaokun1@huawei.com>
Subject: [PATCH v2 10/12] cachefiles: Set object to close if ondemand_id < 0 in copen
Date: Wed, 15 May 2024 16:45:59 +0800
Message-ID: <20240515084601.3240503-11-libaokun@huaweicloud.com>
In-Reply-To: <20240515084601.3240503-1-libaokun@huaweicloud.com>
From: Zizhi Wo <wozizhi@huawei.com>
If copen is maliciously called from user mode, it may delete a request
corresponding to a random id, including a request that has not been read
yet. Note that when the object is set to reopen, the open request in the
above case is completed while the object is still in the reopen state. As
a result, requests for this object are always skipped in the select_req
function, so the read request is never completed and blocks other
processes.
Fix this issue by simply setting the object to close if its ondemand_id
< 0 in copen.
Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Jia Zhu <zhujia.zj@bytedance.com>
---
fs/cachefiles/ondemand.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 3a36613e00a7..a511d0a89109 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -182,6 +182,7 @@ int cachefiles_ondemand_copen(struct cachefiles_cache *cache, char *args)
xas_store(&xas, NULL);
xa_unlock(&cache->reqs);
+ info = req->object->ondemand;
/* fail OPEN request if copen format is invalid */
ret = kstrtol(psize, 0, &size);
if (ret) {
@@ -201,7 +202,6 @@ int cachefiles_ondemand_copen(struct cachefiles_cache *cache, char *args)
goto out;
}
- info = req->object->ondemand;
spin_lock(&info->lock);
/*
* The anonymous fd was closed before copen ? Fail the request.
@@ -241,6 +241,11 @@ int cachefiles_ondemand_copen(struct cachefiles_cache *cache, char *args)
wake_up_all(&cache->daemon_pollwq);
out:
+ spin_lock(&info->lock);
+ /* Need to set object close to avoid reopen status continuing */
+ if (info->ondemand_id == CACHEFILES_ONDEMAND_ID_CLOSED)
+ cachefiles_ondemand_set_object_close(req->object);
+ spin_unlock(&info->lock);
complete(&req->done);
return ret;
}
--
2.39.2