From: <gregkh@linuxfoundation.org>
To: adilger.kernel@dilger.ca, akpm@linux-foundation.org,
anna@kernel.org, axboe@kernel.dk, chao@kernel.org,
djwong@kernel.org, dlemoal@kernel.org,
gregkh@linuxfoundation.org, hare@suse.de, hch@infradead.org,
hch@lst.de, idryomov@gmail.com, jaegeuk@kernel.org,
jlayton@kernel.org, konishi.ryusuke@gmail.com,
linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org,
mcgrof@kernel.org, mngyadam@amazon.de, nagy@khwaternagy.com,
shinichiro.kawasaki@wdc.com, trond.myklebust@hammerspace.com,
tytso@mit.edu, viro@zeniv.linux.org.uk, willy@infradead.org,
xiubli@redhat.com
Cc: stable-commits@vger.kernel.org
Subject: [f2fs-dev] Patch "block: fix race between set_blocksize and read paths" has been added to the 6.1-stable tree
Date: Mon, 03 Nov 2025 10:46:56 +0900
Message-ID: <2025110356-shrapnel-squash-a5dc@gregkh>
In-Reply-To: <20251021070353.96705-9-mngyadam@amazon.de>
This is a note to let you know that I've just added the patch titled
block: fix race between set_blocksize and read paths
to the 6.1-stable tree, which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
block-fix-race-between-set_blocksize-and-read-paths.patch
and it can be found in the queue-6.1 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From stable+bounces-188302-greg=kroah.com@vger.kernel.org Tue Oct 21 16:13:58 2025
From: Mahmoud Adam <mngyadam@amazon.de>
Date: Tue, 21 Oct 2025 09:03:42 +0200
Subject: block: fix race between set_blocksize and read paths
To: <stable@vger.kernel.org>
Cc: <gregkh@linuxfoundation.org>, <nagy@khwaternagy.com>, "Darrick J. Wong" <djwong@kernel.org>, Christoph Hellwig <hch@lst.de>, Luis Chamberlain <mcgrof@kernel.org>, Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>, "Jens Axboe" <axboe@kernel.dk>, Xiubo Li <xiubli@redhat.com>, Ilya Dryomov <idryomov@gmail.com>, Jeff Layton <jlayton@kernel.org>, Alexander Viro <viro@zeniv.linux.org.uk>, Theodore Ts'o <tytso@mit.edu>, Andreas Dilger <adilger.kernel@dilger.ca>, Jaegeuk Kim <jaegeuk@kernel.org>, Chao Yu <chao@kernel.org>, Christoph Hellwig <hch@infradead.org>, Trond Myklebust <trond.myklebust@hammerspace.com>, Anna Schumaker <anna@kernel.org>, "Ryusuke Konishi" <konishi.ryusuke@gmail.com>, "Matthew Wilcox (Oracle)" <willy@infradead.org>, Andrew Morton <akpm@linux-foundation.org>, "Hannes Reinecke" <hare@suse.de>, Damien Le Moal <dlemoal@kernel.org>, <linux-block@vger.kernel.org>, <linux-kernel@vger.kernel.org>, <ceph-devel@vger.kernel.org>, <linux-fsdevel@vger.kernel.org>, <linux-ext4@vger.kernel.org>, <linux-f2fs-devel@lists.sourceforge.net>, <linux-xfs@vger.kernel.org>, <linux-nfs@vger.kernel.org>, <linux-nilfs@vger.kernel.org>, <linux-mm@kvack.org>
Message-ID: <20251021070353.96705-9-mngyadam@amazon.de>
From: "Darrick J. Wong" <djwong@kernel.org>
commit c0e473a0d226479e8e925d5ba93f751d8df628e9 upstream.
With the new large sector size support, it's now the case that
set_blocksize can change i_blksize and the folio order in a manner that
conflicts with a concurrent reader and causes a kernel crash.
Specifically, let's say that udev-worker calls libblkid to detect the
labels on a block device. The read call can create an order-0 folio to
read the first 4096 bytes from the disk. But then udev is preempted.
Next, someone tries to mount an 8k-sectorsize filesystem from the same
block device. The filesystem calls set_blocksize, which sets i_blksize to
8192 and the minimum folio order to 1.
Now udev resumes, still holding the order-0 folio it allocated. It then
tries to schedule a read bio and do_mpage_readahead tries to create
bufferheads for the folio. Unfortunately, blocks_per_folio == 0 because
the page size is 4096 but the blocksize is 8192 so no bufferheads are
attached and the bh walk never sets bdev. We then submit the bio with a
NULL block device and crash.
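To make the failure concrete, here is a minimal sketch of the fatal
arithmetic (illustrative only; the local variable names are hypothetical
and this is not the exact do_mpage_readahead code):

    /*
     * Sketch: why the buffer-head walk sees zero blocks per folio.
     * blksize_bits() is the real kernel helper; the locals are made up.
     */
    unsigned int blkbits = blksize_bits(8192);   /* 13, after set_blocksize */
    size_t folio_bytes = 4096;                   /* the stale order-0 folio */
    unsigned int blocks_per_folio = folio_bytes >> blkbits; /* 4096 >> 13 == 0 */
    /* With zero blocks, no buffer_heads are attached, the bh walk never
     * sets b_bdev, and the bio goes out with a NULL block device. */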
Therefore, truncate the page cache after flushing but before updating
i_blksize. However, that's not enough -- we also need to lock out file
IO and page faults during the update. Take both the i_rwsem and the
invalidate_lock in exclusive mode for invalidations, and in shared mode
for read/write operations.
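Condensed from the hunks below, the locking pattern in this backport looks
roughly like this (a sketch only, using bdev->bd_inode as the 6.1 code
does; error handling omitted):

    /* Invalidation side (set_blocksize, discard/zeroout ioctls): exclusive. */
    inode_lock(bdev->bd_inode);
    filemap_invalidate_lock(bdev->bd_inode->i_mapping);
    /* ... sync_blockdev(), kill_bdev(), update i_blkbits ... */
    filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
    inode_unlock(bdev->bd_inode);

    /* Buffered read/write side (blkdev_read_iter, blkdev_write_iter): shared. */
    inode_lock_shared(bdev->bd_inode);
    /* ... filemap_read() / generic_perform_write() ... */
    inode_unlock_shared(bdev->bd_inode);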
I don't know if this is the correct fix, but xfs/259 found it.
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Link: https://lore.kernel.org/r/174543795699.4139148.2086129139322431423.stgit@frogsfrogsfrogs
Signed-off-by: Jens Axboe <axboe@kernel.dk>
[use bdev->bd_inode instead & fix small contextual changes]
Signed-off-by: Mahmoud Adam <mngyadam@amazon.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
block/bdev.c | 17 +++++++++++++++++
block/blk-zoned.c | 5 ++++-
block/fops.c | 16 ++++++++++++++++
block/ioctl.c | 6 ++++++
4 files changed, 43 insertions(+), 1 deletion(-)
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -147,9 +147,26 @@ int set_blocksize(struct block_device *b
/* Don't change the size if it is same as current */
if (bdev->bd_inode->i_blkbits != blksize_bits(size)) {
+ /*
+ * Flush and truncate the pagecache before we reconfigure the
+ * mapping geometry because folio sizes are variable now. If a
+ * reader has already allocated a folio whose size is smaller
+ * than the new min_order but invokes readahead after the new
+ * min_order becomes visible, readahead will think there are
+ * "zero" blocks per folio and crash. Take the inode and
+ * invalidation locks to avoid racing with
+ * read/write/fallocate.
+ */
+ inode_lock(bdev->bd_inode);
+ filemap_invalidate_lock(bdev->bd_inode->i_mapping);
+
sync_blockdev(bdev);
+ kill_bdev(bdev);
+
bdev->bd_inode->i_blkbits = blksize_bits(size);
kill_bdev(bdev);
+ filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+ inode_unlock(bdev->bd_inode);
}
return 0;
}
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -417,6 +417,7 @@ int blkdev_zone_mgmt_ioctl(struct block_
op = REQ_OP_ZONE_RESET;
/* Invalidate the page cache, including dirty pages. */
+ inode_lock(bdev->bd_inode);
filemap_invalidate_lock(bdev->bd_inode->i_mapping);
ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
if (ret)
@@ -439,8 +440,10 @@ int blkdev_zone_mgmt_ioctl(struct block_
GFP_KERNEL);
fail:
- if (cmd == BLKRESETZONE)
+ if (cmd == BLKRESETZONE) {
filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+ inode_unlock(bdev->bd_inode);
+ }
return ret;
}
--- a/block/fops.c
+++ b/block/fops.c
@@ -592,7 +592,14 @@ static ssize_t blkdev_write_iter(struct
ret = direct_write_fallback(iocb, from, ret,
generic_perform_write(iocb, from));
} else {
+ /*
+ * Take i_rwsem and invalidate_lock to avoid racing with
+ * set_blocksize changing i_blkbits/folio order and punching
+ * out the pagecache.
+ */
+ inode_lock_shared(bd_inode);
ret = generic_perform_write(iocb, from);
+ inode_unlock_shared(bd_inode);
}
if (ret > 0)
@@ -605,6 +612,7 @@ static ssize_t blkdev_write_iter(struct
static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
struct block_device *bdev = iocb->ki_filp->private_data;
+ struct inode *bd_inode = bdev->bd_inode;
loff_t size = bdev_nr_bytes(bdev);
loff_t pos = iocb->ki_pos;
size_t shorted = 0;
@@ -652,7 +660,13 @@ static ssize_t blkdev_read_iter(struct k
goto reexpand;
}
+ /*
+ * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize
+ * changing i_blkbits/folio order and punching out the pagecache.
+ */
+ inode_lock_shared(bd_inode);
ret = filemap_read(iocb, to, ret);
+ inode_unlock_shared(bd_inode);
reexpand:
if (unlikely(shorted))
@@ -695,6 +709,7 @@ static long blkdev_fallocate(struct file
if ((start | len) & (bdev_logical_block_size(bdev) - 1))
return -EINVAL;
+ inode_lock(inode);
filemap_invalidate_lock(inode->i_mapping);
/*
@@ -735,6 +750,7 @@ static long blkdev_fallocate(struct file
fail:
filemap_invalidate_unlock(inode->i_mapping);
+ inode_unlock(inode);
return error;
}
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -114,6 +114,7 @@ static int blk_ioctl_discard(struct bloc
end > bdev_nr_bytes(bdev))
return -EINVAL;
+ inode_lock(inode);
filemap_invalidate_lock(inode->i_mapping);
err = truncate_bdev_range(bdev, mode, start, end - 1);
if (err)
@@ -121,6 +122,7 @@ static int blk_ioctl_discard(struct bloc
err = blkdev_issue_discard(bdev, start >> 9, len >> 9, GFP_KERNEL);
fail:
filemap_invalidate_unlock(inode->i_mapping);
+ inode_unlock(inode);
return err;
}
@@ -146,12 +148,14 @@ static int blk_ioctl_secure_erase(struct
end > bdev_nr_bytes(bdev))
return -EINVAL;
+ inode_lock(bdev->bd_inode);
filemap_invalidate_lock(bdev->bd_inode->i_mapping);
err = truncate_bdev_range(bdev, mode, start, end - 1);
if (!err)
err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
GFP_KERNEL);
filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+ inode_unlock(bdev->bd_inode);
return err;
}
@@ -184,6 +188,7 @@ static int blk_ioctl_zeroout(struct bloc
return -EINVAL;
/* Invalidate the page cache, including dirty pages */
+ inode_lock(inode);
filemap_invalidate_lock(inode->i_mapping);
err = truncate_bdev_range(bdev, mode, start, end);
if (err)
@@ -194,6 +199,7 @@ static int blk_ioctl_zeroout(struct bloc
fail:
filemap_invalidate_unlock(inode->i_mapping);
+ inode_unlock(inode);
return err;
}
Patches currently in stable-queue which might be from mngyadam@amazon.de are
queue-6.1/block-fix-race-between-set_blocksize-and-read-paths.patch
queue-6.1/filemap-add-a-kiocb_invalidate_pages-helper.patch
queue-6.1/fs-factor-out-a-direct_write_fallback-helper.patch
queue-6.1/direct_write_fallback-on-error-revert-the-ki_pos-update-from-buffered-write.patch
queue-6.1/filemap-update-ki_pos-in-generic_perform_write.patch
queue-6.1/filemap-add-a-kiocb_invalidate_post_direct_write-helper.patch
queue-6.1/nilfs2-fix-deadlock-warnings-caused-by-lock-dependency-in-init_nilfs.patch
queue-6.1/block-open-code-__generic_file_write_iter-for-blkdev-writes.patch