From: Jingbo Xu <jefflexu@linux.alibaba.com>
To: Baokun Li <libaokun@huaweicloud.com>,
netfs@lists.linux.dev, dhowells@redhat.com, jlayton@kernel.org
Cc: hsiangkao@linux.alibaba.com, zhujia.zj@bytedance.com,
linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org, yangerkun@huawei.com,
houtao1@huawei.com, yukuai3@huawei.com, wozizhi@huawei.com,
Baokun Li <libaokun1@huawei.com>
Subject: Re: [PATCH v2 03/12] cachefiles: fix slab-use-after-free in cachefiles_ondemand_get_fd()
Date: Mon, 20 May 2024 17:10:25 +0800 [thread overview]
Message-ID: <d0e6d1f6-002f-4255-a481-6bd17f3da7fc@linux.alibaba.com> (raw)
In-Reply-To: <d8154eed-98d0-9cb7-4a2c-6b68ed75b7a2@huaweicloud.com>

On 5/20/24 4:38 PM, Baokun Li wrote:
> Hi Jingbo,
>
> Thanks for your review!
>
> On 2024/5/20 15:24, Jingbo Xu wrote:
>>
>> On 5/15/24 4:45 PM, libaokun@huaweicloud.com wrote:
>>> From: Baokun Li <libaokun1@huawei.com>
>>>
>>> We got the following issue in a fuzz test of randomly issuing the restore
>>> command:
>>>
>>> ==================================================================
>>> BUG: KASAN: slab-use-after-free in cachefiles_ondemand_daemon_read+0x609/0xab0
>>> Write of size 4 at addr ffff888109164a80 by task ondemand-04-dae/4962
>>>
>>> CPU: 11 PID: 4962 Comm: ondemand-04-dae Not tainted 6.8.0-rc7-dirty #542
>>> Call Trace:
>>> kasan_report+0x94/0xc0
>>> cachefiles_ondemand_daemon_read+0x609/0xab0
>>> vfs_read+0x169/0xb50
>>> ksys_read+0xf5/0x1e0
>>>
>>> Allocated by task 626:
>>> __kmalloc+0x1df/0x4b0
>>> cachefiles_ondemand_send_req+0x24d/0x690
>>> cachefiles_create_tmpfile+0x249/0xb30
>>> cachefiles_create_file+0x6f/0x140
>>> cachefiles_look_up_object+0x29c/0xa60
>>> cachefiles_lookup_cookie+0x37d/0xca0
>>> fscache_cookie_state_machine+0x43c/0x1230
>>> [...]
>>>
>>> Freed by task 626:
>>> kfree+0xf1/0x2c0
>>> cachefiles_ondemand_send_req+0x568/0x690
>>> cachefiles_create_tmpfile+0x249/0xb30
>>> cachefiles_create_file+0x6f/0x140
>>> cachefiles_look_up_object+0x29c/0xa60
>>> cachefiles_lookup_cookie+0x37d/0xca0
>>> fscache_cookie_state_machine+0x43c/0x1230
>>> [...]
>>> ==================================================================
>>>
>>> Following is the process that triggers the issue:
>>>
>>>  mount  |     daemon_thread1     |     daemon_thread2
>>> ------------------------------------------------------------
>>> cachefiles_ondemand_init_object
>>>  cachefiles_ondemand_send_req
>>>   REQ_A = kzalloc(sizeof(*req) + data_len)
>>>   wait_for_completion(&REQ_A->done)
>>>
>>>             cachefiles_daemon_read
>>>              cachefiles_ondemand_daemon_read
>>>               REQ_A = cachefiles_ondemand_select_req
>>>               cachefiles_ondemand_get_fd
>>>               copy_to_user(_buffer, msg, n)
>>>             process_open_req(REQ_A)
>>>             ------ restore ------
>>>             cachefiles_ondemand_restore
>>>             xas_for_each(&xas, req, ULONG_MAX)
>>>              xas_set_mark(&xas, CACHEFILES_REQ_NEW);
>>>
>>>                                      cachefiles_daemon_read
>>>                                       cachefiles_ondemand_daemon_read
>>>                                        REQ_A = cachefiles_ondemand_select_req
>>>
>>>             write(devfd, ("copen %u,%llu", msg->msg_id, size));
>>>             cachefiles_ondemand_copen
>>>              xa_erase(&cache->reqs, id)
>>>              complete(&REQ_A->done)
>>>   kfree(REQ_A)
>>>                                        cachefiles_ondemand_get_fd(REQ_A)
>>>                                         fd = get_unused_fd_flags
>>>                                         file = anon_inode_getfile
>>>                                         fd_install(fd, file)
>>>                                         load = (void *)REQ_A->msg.data;
>>>                                         load->fd = fd;
>>>                                         // load UAF !!!
>>>
>>> This issue is caused by issuing a restore command when the daemon is still
>>> alive, which results in a request being processed multiple times thus
>>> triggering a UAF. So to avoid this problem, add an additional reference
>>> count to cachefiles_req, which is held while waiting and reading, and then
>>> released when the waiting and reading is over.
>>>
>>> Note that since there is only one reference count for waiting, we need to
>>> avoid the same request being completed multiple times, so we can only
>>> complete the request if it is successfully removed from the xarray.
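
For reference, my reading of the proposed scheme is roughly the sketch below;
the struct layout and helper names are simplified assumptions (the real
struct cachefiles_req has more members), not the exact patch:

```c
/*
 * Minimal sketch only: assumed names, not the actual cachefiles code.
 * One reference belongs to the waiter, and each reader takes its own
 * reference around copy_to_user(), so the request cannot be freed while
 * either side is still using it.
 */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct req_sketch {
	struct completion done;
	refcount_t ref;		/* waiter's ref plus one per reader */
	int error;
};

static void req_sketch_put(struct req_sketch *req)
{
	/* Whoever drops the last reference frees the request. */
	if (refcount_dec_and_test(&req->ref))
		kfree(req);
}

/* Waiter side, i.e. what cachefiles_ondemand_send_req() roughly does. */
static int send_req_sketch(void)
{
	struct req_sketch *req;
	int ret;

	req = kzalloc(sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;
	refcount_set(&req->ref, 1);	/* the waiter's reference */
	init_completion(&req->done);

	/* ... publish req in the xarray and wake the daemon here ... */

	wait_for_completion(&req->done);
	ret = req->error;
	req_sketch_put(req);		/* was: kfree(req) */
	return ret;
}

/* Reader side: pin the request before using it outside the xarray lock. */
static void daemon_read_sketch(struct req_sketch *req)
{
	refcount_inc(&req->ref);	/* the reader's reference */
	/* ... copy the message to userspace, install the anon fd, etc. ... */
	req_sketch_put(req);
}
```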
>> Sorry, the above description confuses me. As the same request may be
>> picked up by different daemon threads multiple times, the introduced
>> refcount mechanism can't protect it from being completed multiple times
>> (which is expected). The refcount only protects it from being freed
>> multiple times.
> The idea here is that because the wait only holds one reference count,
> complete(&req->done) can only be called when the req has been
> successfully removed from the xarray, otherwise the following UAF may
> occur:
"complete(&req->done) can only be called when the req has been
successfully removed from the xarry ..."
How this is done? since the following xarray_erase() following the first
xarray_erase() will fail as the xarray slot referred by the same id has
already been erased?
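
In other words, I would expect the completion side to look roughly like the
sketch below, where only the caller that wins the xa_erase() gets to complete
the request (a paraphrase reusing the req_sketch type from the sketch above,
not the actual patch):

```c
#include <linux/errno.h>
#include <linux/xarray.h>

/*
 * Paraphrased sketch: xa_erase() returns the stored entry exactly once,
 * so a racing second "copen" (or flush) for the same id sees NULL and
 * must neither complete nor drop a reference on the request.
 */
static int copen_complete_sketch(struct xarray *reqs, unsigned long id,
				 int error)
{
	struct req_sketch *req;

	req = xa_erase(reqs, id);	/* NULL if already erased */
	if (!req)
		return -EINVAL;

	req->error = error;
	complete(&req->done);		/* the waiter wakes and drops its ref */
	return 0;
}
```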
>>> @@ -455,7 +459,7 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
>>>  	wake_up_all(&cache->daemon_pollwq);
>>>  	wait_for_completion(&req->done);
>>>  	ret = req->error;
>>> -	kfree(req);
>>> +	cachefiles_req_put(req);
>>>  	return ret;
>>>  out:
>>>  	/* Reset the object to close state in error handling path.
>>
>> Don't we need to also convert "kfree(req)" to cachefiles_req_put(req)
>> for the error path of cachefiles_ondemand_send_req()?
>>
>> ```
>> out:
>> 	/* Reset the object to close state in error handling path.
>> 	 * If error occurs after creating the anonymous fd,
>> 	 * cachefiles_ondemand_fd_release() will set object to close.
>> 	 */
>> 	if (opcode == CACHEFILES_OP_OPEN)
>> 		cachefiles_ondemand_set_object_close(object);
>> 	kfree(req);
>> 	return ret;
>> ```
> When "goto out;" is called in cachefiles_ondemand_send_req(),
> it means that the req is unallocated/failed to be allocated/failed to
> be inserted into the xarry, and therefore the req can only be accessed
> by the current function, so there is no need to consider concurrency
> and reference counting.
Okay, I understand. But this is indeed quite confusing. I see no cost in
also converting it to cachefiles_req_put(req).
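
That is, something along these lines (untested sketch, not the actual patch);
as you note, req may still be unallocated (NULL) on this path, so unlike
kfree(NULL) the put would need a NULL check unless cachefiles_req_put()
itself tolerates NULL:

```c
out:
	/* Reset the object to close state in error handling path.
	 * If error occurs after creating the anonymous fd,
	 * cachefiles_ondemand_fd_release() will set object to close.
	 */
	if (opcode == CACHEFILES_OP_OPEN)
		cachefiles_ondemand_set_object_close(object);
	/* req was never visible to the daemon on this path, so this is
	 * simply the last (and only) reference being dropped. */
	if (req)
		cachefiles_req_put(req);
	return ret;
```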
--
Thanks,
Jingbo