linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 1/2] block: fix race between set_blocksize and read paths
@ 2025-04-18 15:54 Darrick J. Wong
  2025-04-18 15:58 ` [PATCH 2/2] xfs: stop using set_blocksize Darrick J. Wong
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Darrick J. Wong @ 2025-04-18 15:54 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Shinichiro Kawasaki, Luis Chamberlain,
	Matthew Wilcox, linux-block, linux-fsdevel, xfs

From: Darrick J. Wong <djwong@kernel.org>

With the new large sector size support, it's now the case that
set_blocksize can change i_blkbits and the minimum folio order in a
manner that conflicts with a concurrent reader and causes a kernel
crash.

Specifically, let's say that udev-worker calls libblkid to detect the
labels on a block device.  The read call can create an order-0 folio to
read the first 4096 bytes from the disk.  But then udev is preempted.

Next, someone tries to mount an 8k-sectorsize filesystem from the same
block device.  The filesystem calls set_blocksize, which updates
i_blkbits for the 8192-byte block size and raises the minimum folio
order to 1.

Now udev resumes, still holding the order-0 folio it allocated.  It
then tries to schedule a read bio, and do_mpage_readpage tries to
create bufferheads for the folio.  Unfortunately, blocks_per_folio == 0
because the folio size is 4096 but the blocksize is 8192, so no
bufferheads are attached and the bh walk never sets the bdev.  We then
submit the bio with a NULL block device and crash.

Therefore, truncate the page cache after flushing but before updating
i_blkbits.  However, that's not enough -- we also need to lock out file
IO and page faults during the update.  Take both the i_rwsem and the
invalidate_lock in exclusive mode for invalidations, and in shared mode
for read/write operations.

I don't know if this is the correct fix, but xfs/259 found it.

Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>
---
 block/bdev.c      |   17 +++++++++++++++++
 block/blk-zoned.c |    5 ++++-
 block/fops.c      |   16 ++++++++++++++++
 block/ioctl.c     |    6 ++++++
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/block/bdev.c b/block/bdev.c
index 7b4e35a661b0c9..1313ad256593c5 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -169,11 +169,28 @@ int set_blocksize(struct file *file, int size)
 
 	/* Don't change the size if it is same as current */
 	if (inode->i_blkbits != blksize_bits(size)) {
+		/*
+		 * Flush and truncate the pagecache before we reconfigure the
+		 * mapping geometry because folio sizes are variable now.  If a
+		 * reader has already allocated a folio whose size is smaller
+		 * than the new min_order but invokes readahead after the new
+		 * min_order becomes visible, readahead will think there are
+		 * "zero" blocks per folio and crash.  Take the inode and
+		 * invalidation locks to avoid racing with
+		 * read/write/fallocate.
+		 */
+		inode_lock(inode);
+		filemap_invalidate_lock(inode->i_mapping);
+
 		sync_blockdev(bdev);
+		kill_bdev(bdev);
+
 		inode->i_blkbits = blksize_bits(size);
 		mapping_set_folio_order_range(inode->i_mapping,
 				get_order(size), get_order(size));
 		kill_bdev(bdev);
+		filemap_invalidate_unlock(inode->i_mapping);
+		inode_unlock(inode);
 	}
 	return 0;
 }
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 0c77244a35c92e..8f15d1aa6eb89a 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -343,6 +343,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
 		op = REQ_OP_ZONE_RESET;
 
 		/* Invalidate the page cache, including dirty pages. */
+		inode_lock(bdev->bd_mapping->host);
 		filemap_invalidate_lock(bdev->bd_mapping);
 		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
 		if (ret)
@@ -364,8 +365,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
 	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors);
 
 fail:
-	if (cmd == BLKRESETZONE)
+	if (cmd == BLKRESETZONE) {
 		filemap_invalidate_unlock(bdev->bd_mapping);
+		inode_unlock(bdev->bd_mapping->host);
+	}
 
 	return ret;
 }
diff --git a/block/fops.c b/block/fops.c
index be9f1dbea9ce0a..e221fdcaa8aaf8 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -746,7 +746,14 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
 			ret = direct_write_fallback(iocb, from, ret,
 					blkdev_buffered_write(iocb, from));
 	} else {
+		/*
+		 * Take i_rwsem and invalidate_lock to avoid racing with
+		 * set_blocksize changing i_blkbits/folio order and punching
+		 * out the pagecache.
+		 */
+		inode_lock_shared(bd_inode);
 		ret = blkdev_buffered_write(iocb, from);
+		inode_unlock_shared(bd_inode);
 	}
 
 	if (ret > 0)
@@ -757,6 +764,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
 
 static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
+	struct inode *bd_inode = bdev_file_inode(iocb->ki_filp);
 	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
 	loff_t size = bdev_nr_bytes(bdev);
 	loff_t pos = iocb->ki_pos;
@@ -793,7 +801,13 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 			goto reexpand;
 	}
 
+	/*
+	 * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize
+	 * changing i_blkbits/folio order and punching out the pagecache.
+	 */
+	inode_lock_shared(bd_inode);
 	ret = filemap_read(iocb, to, ret);
+	inode_unlock_shared(bd_inode);
 
 reexpand:
 	if (unlikely(shorted))
@@ -836,6 +850,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	if ((start | len) & (bdev_logical_block_size(bdev) - 1))
 		return -EINVAL;
 
+	inode_lock(inode);
 	filemap_invalidate_lock(inode->i_mapping);
 
 	/*
@@ -868,6 +883,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 
  fail:
 	filemap_invalidate_unlock(inode->i_mapping);
+	inode_unlock(inode);
 	return error;
 }
 
diff --git a/block/ioctl.c b/block/ioctl.c
index faa40f383e2736..e472cc1030c60c 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -142,6 +142,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
 	if (err)
 		return err;
 
+	inode_lock(bdev->bd_mapping->host);
 	filemap_invalidate_lock(bdev->bd_mapping);
 	err = truncate_bdev_range(bdev, mode, start, start + len - 1);
 	if (err)
@@ -174,6 +175,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
 	blk_finish_plug(&plug);
 fail:
 	filemap_invalidate_unlock(bdev->bd_mapping);
+	inode_unlock(bdev->bd_mapping->host);
 	return err;
 }
 
@@ -199,12 +201,14 @@ static int blk_ioctl_secure_erase(struct block_device *bdev, blk_mode_t mode,
 	    end > bdev_nr_bytes(bdev))
 		return -EINVAL;
 
+	inode_lock(bdev->bd_mapping->host);
 	filemap_invalidate_lock(bdev->bd_mapping);
 	err = truncate_bdev_range(bdev, mode, start, end - 1);
 	if (!err)
 		err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
 						GFP_KERNEL);
 	filemap_invalidate_unlock(bdev->bd_mapping);
+	inode_unlock(bdev->bd_mapping->host);
 	return err;
 }
 
@@ -236,6 +240,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, blk_mode_t mode,
 		return -EINVAL;
 
 	/* Invalidate the page cache, including dirty pages */
+	inode_lock(bdev->bd_mapping->host);
 	filemap_invalidate_lock(bdev->bd_mapping);
 	err = truncate_bdev_range(bdev, mode, start, end);
 	if (err)
@@ -246,6 +251,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, blk_mode_t mode,
 
 fail:
 	filemap_invalidate_unlock(bdev->bd_mapping);
+	inode_unlock(bdev->bd_mapping->host);
 	return err;
 }
 


* [PATCH 2/2] xfs: stop using set_blocksize
  2025-04-18 15:54 [PATCH 1/2] block: fix race between set_blocksize and read paths Darrick J. Wong
@ 2025-04-18 15:58 ` Darrick J. Wong
  2025-04-18 17:56   ` Luis Chamberlain
  2025-04-21  7:59   ` Christoph Hellwig
  2025-04-18 16:02 ` [PATCH 1/2] block: fix race between set_blocksize and read paths Darrick J. Wong
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 8+ messages in thread
From: Darrick J. Wong @ 2025-04-18 15:58 UTC (permalink / raw)
  To: Carlos Maiolino
  Cc: Jens Axboe, Christoph Hellwig, Shinichiro Kawasaki,
	Luis Chamberlain, Matthew Wilcox, linux-block, linux-fsdevel, xfs

From: Darrick J. Wong <djwong@kernel.org>

XFS has its own buffer cache for metadata that uses submit_bio, which
means that it no longer uses the block device pagecache for anything.
Create a more lightweight helper that runs the blocksize checks and
flushes dirty data, and use that instead.  No more truncating the
pagecache, because why would XFS care?

Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>
---
 include/linux/blkdev.h |    1 +
 block/bdev.c           |   33 +++++++++++++++++++++++++++------
 fs/xfs/xfs_buf.c       |   15 +++++++++++----
 3 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f442639dfae224..df6df616740371 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1618,6 +1618,7 @@ static inline void bio_end_io_acct(struct bio *bio, unsigned long start_time)
 	return bio_end_io_acct_remapped(bio, start_time, bio->bi_bdev);
 }
 
+int bdev_validate_blocksize(struct block_device *bdev, int block_size);
 int set_blocksize(struct file *file, int size);
 
 int lookup_bdev(const char *pathname, dev_t *dev);
diff --git a/block/bdev.c b/block/bdev.c
index 1313ad256593c5..0196b62007d343 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -152,17 +152,38 @@ static void set_init_blocksize(struct block_device *bdev)
 				    get_order(bsize), get_order(bsize));
 }
 
+/**
+ * bdev_validate_blocksize - check that this block size is acceptable
+ * @bdev:	blockdevice to check
+ * @block_size:	block size to check
+ *
+ * For block device users that do not use buffer heads or the block device
+ * page cache, make sure that this block size can be used with the device.
+ *
+ * Return: On success zero is returned, negative error code on failure.
+ */
+int bdev_validate_blocksize(struct block_device *bdev, int block_size)
+{
+	if (blk_validate_block_size(block_size))
+		return -EINVAL;
+
+	/* Size cannot be smaller than the size supported by the device */
+	if (block_size < bdev_logical_block_size(bdev))
+		return -EINVAL;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(bdev_validate_blocksize);
+
 int set_blocksize(struct file *file, int size)
 {
 	struct inode *inode = file->f_mapping->host;
 	struct block_device *bdev = I_BDEV(inode);
+	int ret;
 
-	if (blk_validate_block_size(size))
-		return -EINVAL;
-
-	/* Size cannot be smaller than the size supported by the device */
-	if (size < bdev_logical_block_size(bdev))
-		return -EINVAL;
+	ret = bdev_validate_blocksize(bdev, size);
+	if (ret)
+		return ret;
 
 	if (!file->private_data)
 		return -EINVAL;
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 8e7f1b324b3bea..0b4bd16cb568c8 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1718,18 +1718,25 @@ xfs_setsize_buftarg(
 	struct xfs_buftarg	*btp,
 	unsigned int		sectorsize)
 {
+	int			error;
+
 	/* Set up metadata sector size info */
 	btp->bt_meta_sectorsize = sectorsize;
 	btp->bt_meta_sectormask = sectorsize - 1;
 
-	if (set_blocksize(btp->bt_bdev_file, sectorsize)) {
+	error = bdev_validate_blocksize(btp->bt_bdev, sectorsize);
+	if (error) {
 		xfs_warn(btp->bt_mount,
-			"Cannot set_blocksize to %u on device %pg",
-			sectorsize, btp->bt_bdev);
+			"Cannot use blocksize %u on device %pg, err %d",
+			sectorsize, btp->bt_bdev, error);
 		return -EINVAL;
 	}
 
-	return 0;
+	/*
+	 * Flush the block device pagecache so our bios see anything dirtied
+	 * before mount.
+	 */
+	return sync_blockdev(btp->bt_bdev);
 }
 
 int


* Re: [PATCH 1/2] block: fix race between set_blocksize and read paths
  2025-04-18 15:54 [PATCH 1/2] block: fix race between set_blocksize and read paths Darrick J. Wong
  2025-04-18 15:58 ` [PATCH 2/2] xfs: stop using set_blocksize Darrick J. Wong
@ 2025-04-18 16:02 ` Darrick J. Wong
  2025-04-18 17:56   ` Luis Chamberlain
  2025-04-18 17:55 ` Luis Chamberlain
  2025-04-21  7:58 ` Christoph Hellwig
  3 siblings, 1 reply; 8+ messages in thread
From: Darrick J. Wong @ 2025-04-18 16:02 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Shinichiro Kawasaki, Luis Chamberlain,
	Matthew Wilcox, linux-block, linux-fsdevel, xfs

On Fri, Apr 18, 2025 at 08:54:58AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@kernel.org>
> 
> [...]
> 
> Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>

I think this could also have the tag:
Fixes: 3c20917120ce61 ("block/bdev: enable large folio support for large logical block sizes")

Not sure anyone cares about that for a fix for 6.15-rc1 though.

--D


* Re: [PATCH 1/2] block: fix race between set_blocksize and read paths
  2025-04-18 15:54 [PATCH 1/2] block: fix race between set_blocksize and read paths Darrick J. Wong
  2025-04-18 15:58 ` [PATCH 2/2] xfs: stop using set_blocksize Darrick J. Wong
  2025-04-18 16:02 ` [PATCH 1/2] block: fix race between set_blocksize and read paths Darrick J. Wong
@ 2025-04-18 17:55 ` Luis Chamberlain
  2025-04-21  7:58 ` Christoph Hellwig
  3 siblings, 0 replies; 8+ messages in thread
From: Luis Chamberlain @ 2025-04-18 17:55 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Jens Axboe, Christoph Hellwig, Shinichiro Kawasaki,
	Matthew Wilcox, linux-block, linux-fsdevel, xfs

On Fri, Apr 18, 2025 at 08:54:58AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@kernel.org>
> 
> [...]
> 
> Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

  Luis


* Re: [PATCH 2/2] xfs: stop using set_blocksize
  2025-04-18 15:58 ` [PATCH 2/2] xfs: stop using set_blocksize Darrick J. Wong
@ 2025-04-18 17:56   ` Luis Chamberlain
  2025-04-21  7:59   ` Christoph Hellwig
  1 sibling, 0 replies; 8+ messages in thread
From: Luis Chamberlain @ 2025-04-18 17:56 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Carlos Maiolino, Jens Axboe, Christoph Hellwig,
	Shinichiro Kawasaki, Matthew Wilcox, linux-block, linux-fsdevel,
	xfs

On Fri, Apr 18, 2025 at 08:58:04AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@kernel.org>
> 
> XFS has its own buffer cache for metadata that uses submit_bio, which
> means that it no longer uses the block device pagecache for anything.
> Create a more lightweight helper that runs the blocksize checks and
> flushes dirty data and use that instead.  No more truncating the
> pagecache because why would XFS care?
> 
> Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>

  Luis


* Re: [PATCH 1/2] block: fix race between set_blocksize and read paths
  2025-04-18 16:02 ` [PATCH 1/2] block: fix race between set_blocksize and read paths Darrick J. Wong
@ 2025-04-18 17:56   ` Luis Chamberlain
  0 siblings, 0 replies; 8+ messages in thread
From: Luis Chamberlain @ 2025-04-18 17:56 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Jens Axboe, Christoph Hellwig, Shinichiro Kawasaki,
	Matthew Wilcox, linux-block, linux-fsdevel, xfs

On Fri, Apr 18, 2025 at 09:02:34AM -0700, Darrick J. Wong wrote:
> On Fri, Apr 18, 2025 at 08:54:58AM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@kernel.org>
> > 
> > [...]
> 
> I think this could also have the tag:
> Fixes: 3c20917120ce61 ("block/bdev: enable large folio support for large logical block sizes")
> 
> Not sure anyone cares about that for a fix for 6.15-rc1 though.

It's a fix, so I'd prefer this go to v6.15-rcX for sure.

  Luis


* Re: [PATCH 1/2] block: fix race between set_blocksize and read paths
  2025-04-18 15:54 [PATCH 1/2] block: fix race between set_blocksize and read paths Darrick J. Wong
                   ` (2 preceding siblings ...)
  2025-04-18 17:55 ` Luis Chamberlain
@ 2025-04-21  7:58 ` Christoph Hellwig
  3 siblings, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2025-04-21  7:58 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Jens Axboe, Christoph Hellwig, Shinichiro Kawasaki,
	Luis Chamberlain, Matthew Wilcox, linux-block, linux-fsdevel, xfs

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 2/2] xfs: stop using set_blocksize
  2025-04-18 15:58 ` [PATCH 2/2] xfs: stop using set_blocksize Darrick J. Wong
  2025-04-18 17:56   ` Luis Chamberlain
@ 2025-04-21  7:59   ` Christoph Hellwig
  1 sibling, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2025-04-21  7:59 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Carlos Maiolino, Jens Axboe, Christoph Hellwig,
	Shinichiro Kawasaki, Luis Chamberlain, Matthew Wilcox,
	linux-block, linux-fsdevel, xfs

Please split this into separate patches for the block helper and the
use of it in xfs.  Each part looks fine, though.


