From: "Luis Henriques (SUSE)" <luis.henriques@linux.dev>
To: Xiubo Li <xiubli@redhat.com>, Ilya Dryomov <idryomov@gmail.com>
Cc: ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
"Luis Henriques (SUSE)" <luis.henriques@linux.dev>
Subject: [RFC PATCH] ceph: fix out-of-bounds array access when doing a file read
Date: Thu, 22 Aug 2024 16:01:13 +0100 [thread overview]
Message-ID: <20240822150113.14274-1-luis.henriques@linux.dev> (raw)
If, while doing a read, the inode is updated and its size is set to zero,
__ceph_sync_read() may not be able to handle it. It is thus easy to hit a
NULL pointer dereference by continuously reading a file while, on another
client, we keep truncating it and writing new data into it.
This patch fixes the issue by adding extra checks to avoid integer overflows
when the inode size is zero. This prevents the page-copy loop from running
and thus from accessing the pages[] array beyond num_pages.
Link: https://tracker.ceph.com/issues/67524
Signed-off-by: Luis Henriques (SUSE) <luis.henriques@linux.dev>
---
Hi!
Please note that this patch is only lightly tested and, to be honest, I'm
not sure this is the correct way to fix this bug. For example, if the
inode size is 0, then maybe ceph_osdc_wait_request() should have returned
0 and the problem would be solved. However, it seems to be returning the
size of the reply message, and that's not something easy to change. Or maybe
I'm just reading it wrong. Anyway, this is just an RFC to see if there are
other ideas.
Also, the tracker contains a simple testcase for crashing the client.
fs/ceph/file.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 4b8d59ebda00..dc23d5e5b11e 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -1200,9 +1200,9 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
}
idx = 0;
- if (ret <= 0)
+ if ((ret <= 0) || (i_size == 0))
left = 0;
- else if (off + ret > i_size)
+ else if ((i_size >= off) && (off + ret > i_size))
left = i_size - off;
else
left = ret;
@@ -1210,6 +1210,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
size_t plen, copied;
plen = min_t(size_t, left, PAGE_SIZE - page_off);
+ WARN_ON_ONCE(idx >= num_pages);
SetPageUptodate(pages[idx]);
copied = copy_page_to_iter(pages[idx++],
page_off, plen, to);
@@ -1234,7 +1235,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
}
if (ret > 0) {
- if (off >= i_size) {
+ if ((i_size >= *ki_pos) && (off >= i_size)) {
*retry_op = CHECK_EOF;
ret = i_size - *ki_pos;
*ki_pos = i_size;