* [PATCH 1/9] xfs: write page faults in iomap are not buffered writes
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corruption due to stale cached iomaps Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 5:58 ` [PATCH 2/9] xfs: punching delalloc extents on write failure is racy Dave Chinner
` (7 subsequent siblings)
8 siblings, 0 replies; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
When we reserve a delalloc region in xfs_buffered_write_iomap_begin,
we mark the iomap as IOMAP_F_NEW so that the write context
understands that it allocated the delalloc region.
If we then fail that buffered write, xfs_buffered_write_iomap_end()
checks for the IOMAP_F_NEW flag and if it is set, it punches out
the unused delalloc region that was allocated for the write.
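In outline, that cleanup currently looks like this (a condensed
sketch of xfs_buffered_write_iomap_end(); the full version is visible
in the patch 2 diff later in this thread):

    if ((iomap->flags & IOMAP_F_NEW) && start_fsb < end_fsb) {
        truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
                XFS_FSB_TO_B(mp, end_fsb) - 1);
        error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
                end_fsb - start_fsb);
    }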
The assumption this code makes is that all buffered write operations
that can allocate space are run under an exclusive lock (i_rwsem).
This is an invalid assumption: page faults in mmap()d regions call
through this same function pair to map the file range being faulted,
and they run holding only the inode->i_mapping->invalidate_lock in
shared mode.
IOWs, we can have races between page faults and write() calls that
fail the nested page cache write operation, and those races result
in data loss.
That is, the failing iomap_end call will punch out the data that
the other racing iomap iteration brought into the page cache. This
can be reproduced with generic/34[46] if we arbitrarily fail page
cache copy-in operations from write() syscalls.
Code analysis tells us that the iomap_page_mkwrite() function holds
the already instantiated and uptodate folio locked across the iomap
mapping iterations. Hence the folio cannot be removed from memory
whilst we are mapping the range it covers, and as such we do not
care if the mapping changes state underneath the iomap iteration
loop:
1. if the folio is not already dirty, there are no writeback races
possible.
2. if we allocated the mapping (delalloc or unwritten), the folio
cannot already be dirty. See #1.
3. If the folio is already dirty, it must be up to date. As we hold
it locked, it cannot be reclaimed from memory. Hence we always
have valid data in the page cache while iterating the mapping.
4. Valid data in the page cache can exist when the underlying
mapping is DELALLOC, UNWRITTEN or WRITTEN. Having the mapping
change from DELALLOC->UNWRITTEN or UNWRITTEN->WRITTEN does not
change the data in the page - it only affects actions if we are
initialising a new page. Hence #3 applies and we don't care
about these extent map transitions racing with
iomap_page_mkwrite().
5. iomap_page_mkwrite() checks for page invalidation races
(truncate, hole punch, etc) after it locks the folio. We also
hold the mapping->invalidate_lock here, and hence the mapping
cannot change due to extent removal operations while we are
iterating the folio.
As such, filesystems that don't use bufferheads will never fail
the iomap_folio_mkwrite_iter() operation on the current mapping,
regardless of whether the iomap should be considered stale.
Further, the range we are asked to iterate is limited to the range
inside EOF that the folio spans. Hence, for XFS, we will only map
the exact range we are asked for, and we will only do speculative
preallocation with delalloc if we are mapping a hole at the EOF
page. The iterator will consume the entire range of the folio that
is within EOF, and anything beyond the EOF block cannot be accessed.
We never need to truncate this post-EOF speculative prealloc away in
the context of the iomap_page_mkwrite() iterator because if it
remains unused we'll remove it when the last reference to the inode
goes away.
Hence we don't actually need an .iomap_end() cleanup/error handling
path at all for iomap_page_mkwrite() for XFS. This means we can
separate the page fault processing from the complexity of the
.iomap_end() processing in the buffered write path. This also means
that the buffered write path will be able to take the
mapping->invalidate_lock as necessary.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
fs/xfs/xfs_file.c | 2 +-
fs/xfs/xfs_iomap.c | 9 +++++++++
fs/xfs/xfs_iomap.h | 1 +
3 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index e462d39c840e..595a5bcf46b9 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1325,7 +1325,7 @@ __xfs_filemap_fault(
if (write_fault) {
xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
ret = iomap_page_mkwrite(vmf,
- &xfs_buffered_write_iomap_ops);
+ &xfs_page_mkwrite_iomap_ops);
xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
} else {
ret = filemap_fault(vmf);
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 07da03976ec1..5cea069a38b4 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1187,6 +1187,15 @@ const struct iomap_ops xfs_buffered_write_iomap_ops = {
.iomap_end = xfs_buffered_write_iomap_end,
};
+/*
+ * iomap_page_mkwrite() will never fail in a way that requires delalloc extents
+ * that it allocated to be revoked. Hence we do not need an .iomap_end method
+ * for this operation.
+ */
+const struct iomap_ops xfs_page_mkwrite_iomap_ops = {
+ .iomap_begin = xfs_buffered_write_iomap_begin,
+};
+
static int
xfs_read_iomap_begin(
struct inode *inode,
diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
index c782e8c0479c..0f62ab633040 100644
--- a/fs/xfs/xfs_iomap.h
+++ b/fs/xfs/xfs_iomap.h
@@ -47,6 +47,7 @@ xfs_aligned_fsb_count(
}
extern const struct iomap_ops xfs_buffered_write_iomap_ops;
+extern const struct iomap_ops xfs_page_mkwrite_iomap_ops;
extern const struct iomap_ops xfs_direct_write_iomap_ops;
extern const struct iomap_ops xfs_read_iomap_ops;
extern const struct iomap_ops xfs_seek_iomap_ops;
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 2/9] xfs: punching delalloc extents on write failure is racy
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corruption due to stale cached iomaps Dave Chinner
2022-11-17 5:58 ` [PATCH 1/9] xfs: write page faults in iomap are not buffered writes Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 17:50 ` Darrick J. Wong
2022-11-17 5:58 ` [PATCH 3/9] xfs: use byte ranges for write cleanup ranges Dave Chinner
` (6 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
xfs_buffered_write_iomap_end() has a comment about the safety of
punching delalloc extents based on holding the IOLOCK_EXCL. This
comment is wrong, and punching delalloc extents is not race free.
When we punch out a delalloc extent after a write failure in
xfs_buffered_write_iomap_end(), we punch out the page cache with
truncate_pagecache_range() before we punch out the delalloc extents.
At this point, we only hold the IOLOCK_EXCL, so there is nothing
stopping mmap() write faults racing with this cleanup operation,
reinstantiating a folio over the range we are about to punch and
hence requiring the delalloc extent to be kept.
If this race condition is hit, we can end up with a dirty page in
the page cache that has no delalloc extent or space reservation
backing it. This leads to bad things happening at writeback time.
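Schematically, the window looks something like this (an illustrative
interleaving, not a captured trace):

    write() task                       mmap write fault task
    ------------                       ---------------------
    xfs_buffered_write_iomap_end()
      truncate_pagecache_range()
                                       __xfs_filemap_fault()
                                         takes invalidate_lock shared
                                         ->page_mkwrite instantiates
                                         and dirties a folio in the
                                         punch range
      xfs_bmap_punch_delalloc_range()
        removes the delalloc extent
        backing the now-dirty folio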
To avoid this race condition, we need the page cache truncation to
be atomic w.r.t. the extent manipulation. We can do this by holding
the mapping->invalidate_lock exclusively across this operation -
this will prevent new pages from being inserted into the page cache
whilst we are removing the pages and the backing extent and space
reservation.
Taking the mapping->invalidate_lock exclusively in the buffered
write IO path is safe - it naturally nests inside the IOLOCK (see
truncate and fallocate paths). iomap_zero_range() can be called from
under the mapping->invalidate_lock (from the truncate path via
either xfs_zero_eof() or xfs_truncate_page()), but iomap_zero_iter()
will not instantiate new delalloc pages (because it skips holes) and
hence will not ever need to punch out delalloc extents on failure.
Fix the locking issue, and clean up the code logic a little to avoid
unnecessary work if we didn't allocate the delalloc extent or wrote
the entire region we allocated.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
fs/xfs/xfs_iomap.c | 41 +++++++++++++++++++++++------------------
1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 5cea069a38b4..a2e45ea1b0cb 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1147,6 +1147,10 @@ xfs_buffered_write_iomap_end(
written = 0;
}
+ /* If we didn't reserve the blocks, we're not allowed to punch them. */
+ if (!(iomap->flags & IOMAP_F_NEW))
+ return 0;
+
/*
* start_fsb refers to the first unused block after a short write. If
* nothing was written, round offset down to point at the first block in
@@ -1158,27 +1162,28 @@ xfs_buffered_write_iomap_end(
start_fsb = XFS_B_TO_FSB(mp, offset + written);
end_fsb = XFS_B_TO_FSB(mp, offset + length);
+ /* Nothing to do if we've written the entire delalloc extent */
+ if (start_fsb >= end_fsb)
+ return 0;
+
/*
- * Trim delalloc blocks if they were allocated by this write and we
- * didn't manage to write the whole range.
- *
- * We don't need to care about racing delalloc as we hold i_mutex
- * across the reserve/allocate/unreserve calls. If there are delalloc
- * blocks in the range, they are ours.
+ * Lock the mapping to avoid races with page faults re-instantiating
+ * folios and dirtying them via ->page_mkwrite between the page cache
+ * truncation and the delalloc extent removal. Failing to do this can
+ * leave dirty pages with no space reservation in the cache.
*/
- if ((iomap->flags & IOMAP_F_NEW) && start_fsb < end_fsb) {
- truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
- XFS_FSB_TO_B(mp, end_fsb) - 1);
-
- error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
- end_fsb - start_fsb);
- if (error && !xfs_is_shutdown(mp)) {
- xfs_alert(mp, "%s: unable to clean up ino %lld",
- __func__, ip->i_ino);
- return error;
- }
+ filemap_invalidate_lock(inode->i_mapping);
+ truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
+ XFS_FSB_TO_B(mp, end_fsb) - 1);
+
+ error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
+ end_fsb - start_fsb);
+ filemap_invalidate_unlock(inode->i_mapping);
+ if (error && !xfs_is_shutdown(mp)) {
+ xfs_alert(mp, "%s: unable to clean up ino %lld",
+ __func__, ip->i_ino);
+ return error;
}
-
return 0;
}
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 2/9] xfs: punching delalloc extents on write failure is racy
2022-11-17 5:58 ` [PATCH 2/9] xfs: punching delalloc extents on write failure is racy Dave Chinner
@ 2022-11-17 17:50 ` Darrick J. Wong
0 siblings, 0 replies; 18+ messages in thread
From: Darrick J. Wong @ 2022-11-17 17:50 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 04:58:03PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> xfs_buffered_write_iomap_end() has a comment about the safety of
> punching delalloc extents based holding the IOLOCK_EXCL. This
> comment is wrong, and punching delalloc extents is not race free.
>
> When we punch out a delalloc extent after a write failure in
> xfs_buffered_write_iomap_end(), we punch out the page cache with
> truncate_pagecache_range() before we punch out the delalloc extents.
> At this point, we only hold the IOLOCK_EXCL, so there is nothing
> stopping mmap() write faults racing with this cleanup operation,
> reinstantiating a folio over the range we are about to punch and
> hence requiring the delalloc extent to be kept.
>
> If this race condition is hit, we can end up with a dirty page in
> the page cache that has no delalloc extent or space reservation
> backing it. This leads to bad things happening at writeback time.
>
> To avoid this race condition, we need the page cache truncation to
> be atomic w.r.t. the extent manipulation. We can do this by holding
> the mapping->invalidate_lock exclusively across this operation -
> this will prevent new pages from being inserted into the page cache
> whilst we are removing the pages and the backing extent and space
> reservation.
>
> Taking the mapping->invalidate_lock exclusively in the buffered
> write IO path is safe - it naturally nests inside the IOLOCK (see
> truncate and fallocate paths). iomap_zero_range() can be called from
> under the mapping->invalidate_lock (from the truncate path via
> either xfs_zero_eof() or xfs_truncate_page(), but iomap_zero_iter()
> will not instantiate new delalloc pages (because it skips holes) and
> hence will not ever need to punch out delalloc extents on failure.
>
> Fix the locking issue, and clean up the code logic a little to avoid
> unnecessary work if we didn't allocate the delalloc extent or wrote
> the entire region we allocated.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
It's really odd how this RVB tag keeps falling off...
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
> ---
> fs/xfs/xfs_iomap.c | 41 +++++++++++++++++++++++------------------
> 1 file changed, 23 insertions(+), 18 deletions(-)
>
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 5cea069a38b4..a2e45ea1b0cb 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -1147,6 +1147,10 @@ xfs_buffered_write_iomap_end(
> written = 0;
> }
>
> + /* If we didn't reserve the blocks, we're not allowed to punch them. */
> + if (!(iomap->flags & IOMAP_F_NEW))
> + return 0;
> +
> /*
> * start_fsb refers to the first unused block after a short write. If
> * nothing was written, round offset down to point at the first block in
> @@ -1158,27 +1162,28 @@ xfs_buffered_write_iomap_end(
> start_fsb = XFS_B_TO_FSB(mp, offset + written);
> end_fsb = XFS_B_TO_FSB(mp, offset + length);
>
> + /* Nothing to do if we've written the entire delalloc extent */
> + if (start_fsb >= end_fsb)
> + return 0;
> +
> /*
> - * Trim delalloc blocks if they were allocated by this write and we
> - * didn't manage to write the whole range.
> - *
> - * We don't need to care about racing delalloc as we hold i_mutex
> - * across the reserve/allocate/unreserve calls. If there are delalloc
> - * blocks in the range, they are ours.
> + * Lock the mapping to avoid races with page faults re-instantiating
> + * folios and dirtying them via ->page_mkwrite between the page cache
> + * truncation and the delalloc extent removal. Failing to do this can
> + * leave dirty pages with no space reservation in the cache.
> */
> - if ((iomap->flags & IOMAP_F_NEW) && start_fsb < end_fsb) {
> - truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
> - XFS_FSB_TO_B(mp, end_fsb) - 1);
> -
> - error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
> - end_fsb - start_fsb);
> - if (error && !xfs_is_shutdown(mp)) {
> - xfs_alert(mp, "%s: unable to clean up ino %lld",
> - __func__, ip->i_ino);
> - return error;
> - }
> + filemap_invalidate_lock(inode->i_mapping);
> + truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
> + XFS_FSB_TO_B(mp, end_fsb) - 1);
> +
> + error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
> + end_fsb - start_fsb);
> + filemap_invalidate_unlock(inode->i_mapping);
> + if (error && !xfs_is_shutdown(mp)) {
> + xfs_alert(mp, "%s: unable to clean up ino %lld",
> + __func__, ip->i_ino);
> + return error;
> }
> -
> return 0;
> }
>
> --
> 2.37.2
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 3/9] xfs: use byte ranges for write cleanup ranges
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corruption due to stale cached iomaps Dave Chinner
2022-11-17 5:58 ` [PATCH 1/9] xfs: write page faults in iomap are not buffered writes Dave Chinner
2022-11-17 5:58 ` [PATCH 2/9] xfs: punching delalloc extents on write failure is racy Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 5:58 ` [PATCH 4/9] xfs,iomap: move delalloc punching to iomap Dave Chinner
` (5 subsequent siblings)
8 siblings, 0 replies; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
xfs_buffered_write_iomap_end() currently converts the byte ranges
passed to it into filesystem blocks so it can pass them to the bmap
code to punch out delalloc blocks, but then has to convert the
filesystem blocks back into byte ranges for the page cache truncate.
We're about to make the page cache truncate go away and replace it
with a page cache walk, so having to convert everything to and from
filesystem blocks is messy and error-prone. It is much easier to
pass around byte ranges and convert to page indexes and/or
filesystem blocks only where those units are needed.
In preparation for the page cache walk being added, add a helper
that converts byte ranges to filesystem blocks and calls
xfs_bmap_punch_delalloc_range() and convert
xfs_buffered_write_iomap_end() to calculate limits in byte ranges.
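As a concrete example of the rounding this introduces (made up
numbers, assuming a 4096 byte block size): a write at offset 10000 of
length 20000 that only copies 3000 bytes before failing gives

    start_byte = round_up(10000 + 3000, 4096);    /* = 16384 */
    end_byte   = round_up(10000 + 20000, 4096);   /* = 32768 */
    /* if nothing had been copied: round_down(10000, 4096) = 8192 */

i.e. only whole filesystem blocks beyond the data actually written
are candidates for cleanup.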
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
fs/xfs/xfs_iomap.c | 40 +++++++++++++++++++++++++---------------
1 file changed, 25 insertions(+), 15 deletions(-)
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index a2e45ea1b0cb..7bb55dbc19d3 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1120,6 +1120,20 @@ xfs_buffered_write_iomap_begin(
return error;
}
+static int
+xfs_buffered_write_delalloc_punch(
+ struct inode *inode,
+ loff_t start_byte,
+ loff_t end_byte)
+{
+ struct xfs_mount *mp = XFS_M(inode->i_sb);
+ xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, start_byte);
+ xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, end_byte);
+
+ return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
+ end_fsb - start_fsb);
+}
+
static int
xfs_buffered_write_iomap_end(
struct inode *inode,
@@ -1129,10 +1143,9 @@ xfs_buffered_write_iomap_end(
unsigned flags,
struct iomap *iomap)
{
- struct xfs_inode *ip = XFS_I(inode);
- struct xfs_mount *mp = ip->i_mount;
- xfs_fileoff_t start_fsb;
- xfs_fileoff_t end_fsb;
+ struct xfs_mount *mp = XFS_M(inode->i_sb);
+ loff_t start_byte;
+ loff_t end_byte;
int error = 0;
if (iomap->type != IOMAP_DELALLOC)
@@ -1157,13 +1170,13 @@ xfs_buffered_write_iomap_end(
* the range.
*/
if (unlikely(!written))
- start_fsb = XFS_B_TO_FSBT(mp, offset);
+ start_byte = round_down(offset, mp->m_sb.sb_blocksize);
else
- start_fsb = XFS_B_TO_FSB(mp, offset + written);
- end_fsb = XFS_B_TO_FSB(mp, offset + length);
+ start_byte = round_up(offset + written, mp->m_sb.sb_blocksize);
+ end_byte = round_up(offset + length, mp->m_sb.sb_blocksize);
/* Nothing to do if we've written the entire delalloc extent */
- if (start_fsb >= end_fsb)
+ if (start_byte >= end_byte)
return 0;
/*
@@ -1173,15 +1186,12 @@ xfs_buffered_write_iomap_end(
* leave dirty pages with no space reservation in the cache.
*/
filemap_invalidate_lock(inode->i_mapping);
- truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
- XFS_FSB_TO_B(mp, end_fsb) - 1);
-
- error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
- end_fsb - start_fsb);
+ truncate_pagecache_range(inode, start_byte, end_byte - 1);
+ error = xfs_buffered_write_delalloc_punch(inode, start_byte, end_byte);
filemap_invalidate_unlock(inode->i_mapping);
if (error && !xfs_is_shutdown(mp)) {
- xfs_alert(mp, "%s: unable to clean up ino %lld",
- __func__, ip->i_ino);
+ xfs_alert(mp, "%s: unable to clean up ino 0x%llx",
+ __func__, XFS_I(inode)->i_ino);
return error;
}
return 0;
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 4/9] xfs,iomap: move delalloc punching to iomap
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corruption due to stale cached iomaps Dave Chinner
` (2 preceding siblings ...)
2022-11-17 5:58 ` [PATCH 3/9] xfs: use byte ranges for write cleanup ranges Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 17:57 ` Darrick J. Wong
2022-11-17 5:58 ` [PATCH 5/9] iomap: buffered write failure should not truncate the page cache Dave Chinner
` (4 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
Because that's what Christoph wants for this error handling path
only XFS uses.
It requires a new iomap export for handling errors over delalloc
ranges. This is basically the XFS code as it stands, but even though
Christoph wants this as iomap functionality, we still have
to call it from the filesystem specific ->iomap_end callback, and
call into the iomap code with yet another filesystem specific
callback to punch the delalloc extent within the defined ranges.
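For a filesystem other than XFS, use of the new export from its
->iomap_end method would look roughly like the sketch below; the
foo_* names are hypothetical and only illustrate the shape of the
API, the real XFS conversion is in the diff that follows.

    static int foo_punch_delalloc(struct inode *inode, loff_t offset,
            loff_t length)
    {
        /* remove delalloc reservations over [offset, offset + length) */
        return foo_remove_delalloc_reservation(inode, offset, length);
    }

    static int foo_buffered_write_iomap_end(struct inode *inode,
            loff_t offset, loff_t length, ssize_t written,
            unsigned flags, struct iomap *iomap)
    {
        return iomap_file_buffered_write_punch_delalloc(inode, iomap,
                offset, length, written, foo_punch_delalloc);
    }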
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
fs/iomap/buffered-io.c | 60 ++++++++++++++++++++++++++++++++++++++++++
fs/xfs/xfs_iomap.c | 47 ++++++---------------------------
include/linux/iomap.h | 6 +++++
3 files changed, 74 insertions(+), 39 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 91ee0b308e13..77f391fd90ca 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -832,6 +832,66 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
}
EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
+/*
+ * When a short write occurs, the filesystem may need to remove reserved space
+ * that was allocated in ->iomap_begin from it's ->iomap_end method. For
+ * filesystems that use delayed allocation, we need to punch out delalloc
+ * extents from the range that are not dirty in the page cache. As the write can
+ * race with page faults, there can be dirty pages over the delalloc extent
+ * outside the range of a short write but still within the delalloc extent
+ * allocated for this iomap.
+ *
+ * This function uses [start_byte, end_byte) intervals (i.e. open ended) to
+ * simplify range iterations, but converts them back to {offset,len} tuples for
+ * the punch callback.
+ */
+int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
+ struct iomap *iomap, loff_t offset, loff_t length,
+ ssize_t written,
+ int (*punch)(struct inode *inode, loff_t offset, loff_t length))
+{
+ loff_t start_byte;
+ loff_t end_byte;
+ int blocksize = 1 << inode->i_blkbits;
+ int error = 0;
+
+ if (iomap->type != IOMAP_DELALLOC)
+ return 0;
+
+ /* If we didn't reserve the blocks, we're not allowed to punch them. */
+ if (!(iomap->flags & IOMAP_F_NEW))
+ return 0;
+
+ /*
+ * start_byte refers to the first unused block after a short write. If
+ * nothing was written, round offset down to point at the first block in
+ * the range.
+ */
+ if (unlikely(!written))
+ start_byte = round_down(offset, blocksize);
+ else
+ start_byte = round_up(offset + written, blocksize);
+ end_byte = round_up(offset + length, blocksize);
+
+ /* Nothing to do if we've written the entire delalloc extent */
+ if (start_byte >= end_byte)
+ return 0;
+
+ /*
+ * Lock the mapping to avoid races with page faults re-instantiating
+ * folios and dirtying them via ->page_mkwrite between the page cache
+ * truncation and the delalloc extent removal. Failing to do this can
+ * leave dirty pages with no space reservation in the cache.
+ */
+ filemap_invalidate_lock(inode->i_mapping);
+ truncate_pagecache_range(inode, start_byte, end_byte - 1);
+ error = punch(inode, start_byte, end_byte - start_byte);
+ filemap_invalidate_unlock(inode->i_mapping);
+
+ return error;
+}
+EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
+
static loff_t iomap_unshare_iter(struct iomap_iter *iter)
{
struct iomap *iomap = &iter->iomap;
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 7bb55dbc19d3..ea96e8a34868 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1123,12 +1123,12 @@ xfs_buffered_write_iomap_begin(
static int
xfs_buffered_write_delalloc_punch(
struct inode *inode,
- loff_t start_byte,
- loff_t end_byte)
+ loff_t offset,
+ loff_t length)
{
struct xfs_mount *mp = XFS_M(inode->i_sb);
- xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, start_byte);
- xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, end_byte);
+ xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, offset);
+ xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + length);
return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
end_fsb - start_fsb);
@@ -1143,13 +1143,9 @@ xfs_buffered_write_iomap_end(
unsigned flags,
struct iomap *iomap)
{
- struct xfs_mount *mp = XFS_M(inode->i_sb);
- loff_t start_byte;
- loff_t end_byte;
- int error = 0;
- if (iomap->type != IOMAP_DELALLOC)
- return 0;
+ struct xfs_mount *mp = XFS_M(inode->i_sb);
+ int error;
/*
* Behave as if the write failed if drop writes is enabled. Set the NEW
@@ -1160,35 +1156,8 @@ xfs_buffered_write_iomap_end(
written = 0;
}
- /* If we didn't reserve the blocks, we're not allowed to punch them. */
- if (!(iomap->flags & IOMAP_F_NEW))
- return 0;
-
- /*
- * start_fsb refers to the first unused block after a short write. If
- * nothing was written, round offset down to point at the first block in
- * the range.
- */
- if (unlikely(!written))
- start_byte = round_down(offset, mp->m_sb.sb_blocksize);
- else
- start_byte = round_up(offset + written, mp->m_sb.sb_blocksize);
- end_byte = round_up(offset + length, mp->m_sb.sb_blocksize);
-
- /* Nothing to do if we've written the entire delalloc extent */
- if (start_byte >= end_byte)
- return 0;
-
- /*
- * Lock the mapping to avoid races with page faults re-instantiating
- * folios and dirtying them via ->page_mkwrite between the page cache
- * truncation and the delalloc extent removal. Failing to do this can
- * leave dirty pages with no space reservation in the cache.
- */
- filemap_invalidate_lock(inode->i_mapping);
- truncate_pagecache_range(inode, start_byte, end_byte - 1);
- error = xfs_buffered_write_delalloc_punch(inode, start_byte, end_byte);
- filemap_invalidate_unlock(inode->i_mapping);
+ error = iomap_file_buffered_write_punch_delalloc(inode, iomap, offset,
+ length, written, &xfs_buffered_write_delalloc_punch);
if (error && !xfs_is_shutdown(mp)) {
xfs_alert(mp, "%s: unable to clean up ino 0x%llx",
__func__, XFS_I(inode)->i_ino);
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 238a03087e17..6bbed915c83a 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -226,6 +226,12 @@ static inline const struct iomap *iomap_iter_srcmap(const struct iomap_iter *i)
ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
const struct iomap_ops *ops);
+int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
+ struct iomap *iomap, loff_t offset, loff_t length,
+ ssize_t written,
+ int (*punch)(struct inode *inode,
+ loff_t offset, loff_t length));
+
int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops);
void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops);
bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count);
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 4/9] xfs,iomap: move delalloc punching to iomap
2022-11-17 5:58 ` [PATCH 4/9] xfs,iomap: move delalloc punching to iomap Dave Chinner
@ 2022-11-17 17:57 ` Darrick J. Wong
0 siblings, 0 replies; 18+ messages in thread
From: Darrick J. Wong @ 2022-11-17 17:57 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 04:58:05PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> Because that's what Christoph wants for this error handling path
> only XFS uses.
>
> It requires a new iomap export for handling errors over delalloc
> ranges. This is basically the XFS code as is stands, but even though
> Christoph wants this as iomap funcitonality, we still have
> to call it from the filesystem specific ->iomap_end callback, and
> call into the iomap code with yet another filesystem specific
> callback to punch the delalloc extent within the defined ranges.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
> fs/iomap/buffered-io.c | 60 ++++++++++++++++++++++++++++++++++++++++++
> fs/xfs/xfs_iomap.c | 47 ++++++---------------------------
> include/linux/iomap.h | 6 +++++
> 3 files changed, 74 insertions(+), 39 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 91ee0b308e13..77f391fd90ca 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -832,6 +832,66 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
> }
> EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
>
> +/*
> + * When a short write occurs, the filesystem may need to remove reserved space
> + * that was allocated in ->iomap_begin from it's ->iomap_end method. For
> + * filesystems that use delayed allocation, we need to punch out delalloc
> + * extents from the range that are not dirty in the page cache. As the write can
> + * race with page faults, there can be dirty pages over the delalloc extent
> + * outside the range of a short write but still within the delalloc extent
> + * allocated for this iomap.
> + *
> + * This function uses [start_byte, end_byte) intervals (i.e. open ended) to
> + * simplify range iterations, but converts them back to {offset,len} tuples for
> + * the punch callback.
> + */
> +int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
> + struct iomap *iomap, loff_t offset, loff_t length,
Nit: loff_t pos, not 'offset', to (try to) be consistent with the rest
of the codebase.
I'll fix this on commit if you don't beat me to the punch, so
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
> + ssize_t written,
> + int (*punch)(struct inode *inode, loff_t offset, loff_t length))
> +{
> + loff_t start_byte;
> + loff_t end_byte;
> + int blocksize = 1 << inode->i_blkbits;
> + int error = 0;
> +
> + if (iomap->type != IOMAP_DELALLOC)
> + return 0;
> +
> + /* If we didn't reserve the blocks, we're not allowed to punch them. */
> + if (!(iomap->flags & IOMAP_F_NEW))
> + return 0;
> +
> + /*
> + * start_byte refers to the first unused block after a short write. If
> + * nothing was written, round offset down to point at the first block in
> + * the range.
> + */
> + if (unlikely(!written))
> + start_byte = round_down(offset, blocksize);
> + else
> + start_byte = round_up(offset + written, blocksize);
> + end_byte = round_up(offset + length, blocksize);
> +
> + /* Nothing to do if we've written the entire delalloc extent */
> + if (start_byte >= end_byte)
> + return 0;
> +
> + /*
> + * Lock the mapping to avoid races with page faults re-instantiating
> + * folios and dirtying them via ->page_mkwrite between the page cache
> + * truncation and the delalloc extent removal. Failing to do this can
> + * leave dirty pages with no space reservation in the cache.
> + */
> + filemap_invalidate_lock(inode->i_mapping);
> + truncate_pagecache_range(inode, start_byte, end_byte - 1);
> + error = punch(inode, start_byte, end_byte - start_byte);
> + filemap_invalidate_unlock(inode->i_mapping);
> +
> + return error;
> +}
> +EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
> +
> static loff_t iomap_unshare_iter(struct iomap_iter *iter)
> {
> struct iomap *iomap = &iter->iomap;
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 7bb55dbc19d3..ea96e8a34868 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -1123,12 +1123,12 @@ xfs_buffered_write_iomap_begin(
> static int
> xfs_buffered_write_delalloc_punch(
> struct inode *inode,
> - loff_t start_byte,
> - loff_t end_byte)
> + loff_t offset,
> + loff_t length)
> {
> struct xfs_mount *mp = XFS_M(inode->i_sb);
> - xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, start_byte);
> - xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, end_byte);
> + xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, offset);
> + xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + length);
>
> return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
> end_fsb - start_fsb);
> @@ -1143,13 +1143,9 @@ xfs_buffered_write_iomap_end(
> unsigned flags,
> struct iomap *iomap)
> {
> - struct xfs_mount *mp = XFS_M(inode->i_sb);
> - loff_t start_byte;
> - loff_t end_byte;
> - int error = 0;
>
> - if (iomap->type != IOMAP_DELALLOC)
> - return 0;
> + struct xfs_mount *mp = XFS_M(inode->i_sb);
> + int error;
>
> /*
> * Behave as if the write failed if drop writes is enabled. Set the NEW
> @@ -1160,35 +1156,8 @@ xfs_buffered_write_iomap_end(
> written = 0;
> }
>
> - /* If we didn't reserve the blocks, we're not allowed to punch them. */
> - if (!(iomap->flags & IOMAP_F_NEW))
> - return 0;
> -
> - /*
> - * start_fsb refers to the first unused block after a short write. If
> - * nothing was written, round offset down to point at the first block in
> - * the range.
> - */
> - if (unlikely(!written))
> - start_byte = round_down(offset, mp->m_sb.sb_blocksize);
> - else
> - start_byte = round_up(offset + written, mp->m_sb.sb_blocksize);
> - end_byte = round_up(offset + length, mp->m_sb.sb_blocksize);
> -
> - /* Nothing to do if we've written the entire delalloc extent */
> - if (start_byte >= end_byte)
> - return 0;
> -
> - /*
> - * Lock the mapping to avoid races with page faults re-instantiating
> - * folios and dirtying them via ->page_mkwrite between the page cache
> - * truncation and the delalloc extent removal. Failing to do this can
> - * leave dirty pages with no space reservation in the cache.
> - */
> - filemap_invalidate_lock(inode->i_mapping);
> - truncate_pagecache_range(inode, start_byte, end_byte - 1);
> - error = xfs_buffered_write_delalloc_punch(inode, start_byte, end_byte);
> - filemap_invalidate_unlock(inode->i_mapping);
> + error = iomap_file_buffered_write_punch_delalloc(inode, iomap, offset,
> + length, written, &xfs_buffered_write_delalloc_punch);
> if (error && !xfs_is_shutdown(mp)) {
> xfs_alert(mp, "%s: unable to clean up ino 0x%llx",
> __func__, XFS_I(inode)->i_ino);
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 238a03087e17..6bbed915c83a 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -226,6 +226,12 @@ static inline const struct iomap *iomap_iter_srcmap(const struct iomap_iter *i)
>
> ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
> const struct iomap_ops *ops);
> +int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
> + struct iomap *iomap, loff_t offset, loff_t length,
> + ssize_t written,
> + int (*punch)(struct inode *inode,
> + loff_t offset, loff_t length));
> +
> int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops);
> void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops);
> bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count);
> --
> 2.37.2
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 5/9] iomap: buffered write failure should not truncate the page cache
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corruption due to stale cached iomaps Dave Chinner
` (3 preceding siblings ...)
2022-11-17 5:58 ` [PATCH 4/9] xfs,iomap: move delalloc punching to iomap Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 18:36 ` Darrick J. Wong
2022-11-17 5:58 ` [PATCH 6/9] xfs: xfs_bmap_punch_delalloc_range() should take a byte range Dave Chinner
` (3 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
iomap_file_buffered_write_punch_delalloc() currently invalidates the
page cache over the unused range of the delalloc extent that was
allocated. While the write allocated the delalloc extent, it does
not own it exclusively as the write does not hold any locks that
prevent either writeback or mmap page faults from changing the state
of either the page cache or the extent state backing this range.
Whilst xfs_bmap_punch_delalloc_range() already handles races in
extent conversion - it will only punch out delalloc extents and it
ignores any other type of extent - the page cache truncate does not
discriminate between data written by this write or some other task.
As a result, truncating the page cache can result in data corruption
if the write races with mmap modifications to the file over the same
range.
generic/346 exercises this workload, and if we randomly fail writes
(as will happen when iomap gets stale iomap detection later in the
patchset), it will randomly corrupt the file data because it removes
data written by mmap() in the same page as the write() that failed.
Hence we do not want to punch out the page cache over the range of
the extent we failed to write to - what we actually need to do is
detect the ranges that have dirty data in cache over them and *not
punch them out*.
To do this, we have to walk the page cache over the range of the
delalloc extent we want to remove. This is made complex by the fact
we have to handle partially up-to-date folios correctly and this can
happen even when the FSB size == PAGE_SIZE because we now support
multi-page folios in the page cache.
Because we are only interested in discovering the edges of data
ranges in the page cache (i.e. hole-data boundaries) we can make use
of mapping_seek_hole_data() to find those transitions in the page
cache. As we hold the invalidate_lock, we know that the boundaries
are not going to change while we walk the range. This interface is
also byte-based and is sub-page block aware, so we can find the data
ranges in the cache based on byte offsets rather than page, folio or
fs block sized chunks. This greatly simplifies the logic of finding
dirty cached ranges in the page cache.
Once we've identified a range that contains cached data, we can then
iterate the range folio by folio. This allows us to determine if the
data is dirty and hence perform the correct delalloc extent punching
operations. The seek interface we use to iterate data ranges will
give us sub-folio start/end granularity, so we may end up looking up
the same folio multiple times as the seek interface iterates across
each discontiguous data region in the folio.
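As an illustrative example (made up offsets): say the failed write
leaves a delalloc extent backing bytes [0, 256k), and the page cache
holds a clean folio over [0, 64k) plus a dirty folio over
[128k, 192k) written by a racing mmap fault, with nothing else
cached. The walk then proceeds as:

    SEEK_DATA(0)    -> 0,    SEEK_HOLE(0)    -> 64k
      folio at 0 is clean: nothing punched yet
    SEEK_DATA(64k)  -> 128k, SEEK_HOLE(128k) -> 192k
      folio at 128k is dirty: punch [0, 128k), keep [128k, 192k)
    SEEK_DATA(192k) -> ENXIO
      punch the trailing range [192k, 256k)

The delalloc backing survives only under the dirty folio; punching
under the clean folio is safe because its data has either already
been written back or is just zeroes from a read fault.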
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
fs/iomap/buffered-io.c | 187 +++++++++++++++++++++++++++++++++++++----
1 file changed, 169 insertions(+), 18 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 77f391fd90ca..028cdf115477 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -832,6 +832,151 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
}
EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
+/*
+ * Scan the data range passed to us for dirty page cache folios. If we find a
+ * dirty folio, punch out the preceeding range and update the offset from which
+ * the next punch will start from.
+ *
+ * We can punch out clean pages because they either contain data that has been
+ * written back - in which case the delalloc punch over that range is a no-op -
+ * or they have been read faults in which case they contain zeroes and we can
+ * remove the delalloc backing range and any new writes to those pages will do
+ * the normal hole filling operation...
+ *
+ * This makes the logic simple: we only need to keep the delalloc extents only
+ * over the dirty ranges of the page cache.
+ *
+ * This function uses [start_byte, end_byte) intervals (i.e. open ended) to
+ * simplify range iterations.
+ */
+static int iomap_write_delalloc_scan(struct inode *inode,
+ loff_t *punch_start_byte, loff_t start_byte, loff_t end_byte,
+ int (*punch)(struct inode *inode, loff_t offset, loff_t length))
+{
+ while (start_byte < end_byte) {
+ struct folio *folio;
+
+ /* grab locked page */
+ folio = filemap_lock_folio(inode->i_mapping,
+ start_byte >> PAGE_SHIFT);
+ if (!folio) {
+ start_byte = ALIGN_DOWN(start_byte, PAGE_SIZE) +
+ PAGE_SIZE;
+ continue;
+ }
+
+ /* if dirty, punch up to offset */
+ if (folio_test_dirty(folio)) {
+ if (start_byte > *punch_start_byte) {
+ int error;
+
+ error = punch(inode, *punch_start_byte,
+ start_byte - *punch_start_byte);
+ if (error) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return error;
+ }
+ }
+
+ /*
+ * Make sure the next punch start is correctly bound to
+ * the end of this data range, not the end of the folio.
+ */
+ *punch_start_byte = min_t(loff_t, end_byte,
+ folio_next_index(folio) << PAGE_SHIFT);
+ }
+
+ /* move offset to start of next folio in range */
+ start_byte = folio_next_index(folio) << PAGE_SHIFT;
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+ return 0;
+}
+
+/*
+ * Punch out all the delalloc blocks in the range given except for those that
+ * have dirty data still pending in the page cache - those are going to be
+ * written and so must still retain the delalloc backing for writeback.
+ *
+ * As we are scanning the page cache for data, we don't need to reimplement the
+ * wheel - mapping_seek_hole_data() does exactly what we need to identify the
+ * start and end of data ranges correctly even for sub-folio block sizes. This
+ * byte range based iteration is especially convenient because it means we don't
+ * have to care about variable size folios, nor where the start or end of the
+ * data range lies within a folio, if they lie within the same folio or even if
+ * there are multiple discontiguous data ranges within the folio.
+ *
+ * Intervals are of the form [start_byte, end_byte) (i.e. open ended) because
+ * it matches the intervals returned by mapping_seek_hole_data(). i.e. SEEK_DATA
+ * returns the start of a data range (start_byte), and SEEK_HOLE(start_byte)
+ * returns the end of the data range (data_end). Using closed intervals would
+ * require sprinkling this code with magic "+ 1" and "- 1" arithmetic and expose
+ * the code to subtle off-by-one bugs....
+ */
+static int iomap_write_delalloc_release(struct inode *inode,
+ loff_t start_byte, loff_t end_byte,
+ int (*punch)(struct inode *inode, loff_t offset, loff_t length))
+{
+ loff_t punch_start_byte = start_byte;
+ int error = 0;
+
+ /*
+ * Lock the mapping to avoid races with page faults re-instantiating
+ * folios and dirtying them via ->page_mkwrite whilst we walk the
+ * cache and perform delalloc extent removal. Failing to do this can
+ * leave dirty pages with no space reservation in the cache.
+ */
+ filemap_invalidate_lock(inode->i_mapping);
+ while (start_byte < end_byte) {
+ loff_t data_end;
+
+ start_byte = mapping_seek_hole_data(inode->i_mapping,
+ start_byte, end_byte, SEEK_DATA);
+ /*
+ * If there is no more data to scan, all that is left is to
+ * punch out the remaining range.
+ */
+ if (start_byte == -ENXIO || start_byte == end_byte)
+ break;
+ if (start_byte < 0) {
+ error = start_byte;
+ goto out_unlock;
+ }
+ WARN_ON_ONCE(start_byte < punch_start_byte);
+ WARN_ON_ONCE(start_byte > end_byte);
+
+ /*
+ * We find the end of this contiguous cached data range by
+ * seeking from start_byte to the beginning of the next hole.
+ */
+ data_end = mapping_seek_hole_data(inode->i_mapping, start_byte,
+ end_byte, SEEK_HOLE);
+ if (data_end < 0) {
+ error = data_end;
+ goto out_unlock;
+ }
+ WARN_ON_ONCE(data_end <= start_byte);
+ WARN_ON_ONCE(data_end > end_byte);
+
+ error = iomap_write_delalloc_scan(inode, &punch_start_byte,
+ start_byte, data_end, punch);
+ if (error)
+ goto out_unlock;
+
+ /* The next data search starts at the end of this one. */
+ start_byte = data_end;
+ }
+
+ if (punch_start_byte < end_byte)
+ error = punch(inode, punch_start_byte,
+ end_byte - punch_start_byte);
+out_unlock:
+ filemap_invalidate_unlock(inode->i_mapping);
+ return error;
+}
+
/*
* When a short write occurs, the filesystem may need to remove reserved space
* that was allocated in ->iomap_begin from it's ->iomap_end method. For
@@ -842,18 +987,34 @@ EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
* allocated for this iomap.
*
* This function uses [start_byte, end_byte) intervals (i.e. open ended) to
- * simplify range iterations, but converts them back to {offset,len} tuples for
- * the punch callback.
+ * simplify range iterations.
+ *
+ * The punch() callback *must* only punch delalloc extents in the range passed
+ * to it. It must skip over all other types of extents in the range and leave
+ * them completely unchanged. It must do this punch atomically with respect to
+ * other extent modifications.
+ *
+ * The punch() callback may be called with a folio locked to prevent writeback
+ * extent allocation racing at the edge of the range we are currently punching.
+ * The locked folio may or may not cover the range being punched, so it is not
+ * safe for the punch() callback to lock folios itself.
+ *
+ * Lock order is:
+ *
+ * inode->i_rwsem (shared or exclusive)
+ * inode->i_mapping->invalidate_lock (exclusive)
+ * folio_lock()
+ * ->punch
+ * internal filesystem allocation lock
*/
int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
struct iomap *iomap, loff_t offset, loff_t length,
ssize_t written,
int (*punch)(struct inode *inode, loff_t offset, loff_t length))
{
- loff_t start_byte;
- loff_t end_byte;
- int blocksize = 1 << inode->i_blkbits;
- int error = 0;
+ loff_t start_byte;
+ loff_t end_byte;
+ int blocksize = 1 << inode->i_blkbits;
if (iomap->type != IOMAP_DELALLOC)
return 0;
@@ -877,18 +1038,8 @@ int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
if (start_byte >= end_byte)
return 0;
- /*
- * Lock the mapping to avoid races with page faults re-instantiating
- * folios and dirtying them via ->page_mkwrite between the page cache
- * truncation and the delalloc extent removal. Failing to do this can
- * leave dirty pages with no space reservation in the cache.
- */
- filemap_invalidate_lock(inode->i_mapping);
- truncate_pagecache_range(inode, start_byte, end_byte - 1);
- error = punch(inode, start_byte, end_byte - start_byte);
- filemap_invalidate_unlock(inode->i_mapping);
-
- return error;
+ return iomap_write_delalloc_release(inode, start_byte, end_byte,
+ punch);
}
EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 5/9] iomap: buffered write failure should not truncate the page cache
2022-11-17 5:58 ` [PATCH 5/9] iomap: buffered write failure should not truncate the page cache Dave Chinner
@ 2022-11-17 18:36 ` Darrick J. Wong
0 siblings, 0 replies; 18+ messages in thread
From: Darrick J. Wong @ 2022-11-17 18:36 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 04:58:06PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> iomap_file_buffered_write_punch_delalloc() currently invalidates the
> page cache over the unused range of the delalloc extent that was
> allocated. While the write allocated the delalloc extent, it does
> not own it exclusively as the write does not hold any locks that
> prevent either writeback or mmap page faults from changing the state
> of either the page cache or the extent state backing this range.
>
> Whilst xfs_bmap_punch_delalloc_range() already handles races in
> extent conversion - it will only punch out delalloc extents and it
> ignores any other type of extent - the page cache truncate does not
> discriminate between data written by this write or some other task.
> As a result, truncating the page cache can result in data corruption
> if the write races with mmap modifications to the file over the same
> range.
>
> generic/346 exercises this workload, and if we randomly fail writes
> (as will happen when iomap gets stale iomap detection later in the
> patchset), it will randomly corrupt the file data because it removes
> data written by mmap() in the same page as the write() that failed.
>
> Hence we do not want to punch out the page cache over the range of
> the extent we failed to write to - what we actually need to do is
> detect the ranges that have dirty data in cache over them and *not
> punch them out*.
>
> To do this, we have to walk the page cache over the range of the
> delalloc extent we want to remove. This is made complex by the fact
> we have to handle partially up-to-date folios correctly and this can
> happen even when the FSB size == PAGE_SIZE because we now support
> multi-page folios in the page cache.
>
> Because we are only interested in discovering the edges of data
> ranges in the page cache (i.e. hole-data boundaries) we can make use
> of mapping_seek_hole_data() to find those transitions in the page
> cache. As we hold the invalidate_lock, we know that the boundaries
> are not going to change while we walk the range. This interface is
> also byte-based and is sub-page block aware, so we can find the data
> ranges in the cache based on byte offsets rather than page, folio or
> fs block sized chunks. This greatly simplifies the logic of finding
> dirty cached ranges in the page cache.
>
> Once we've identified a range that contains cached data, we can then
> iterate the range folio by folio. This allows us to determine if the
> data is dirty and hence perform the correct delalloc extent punching
> operations. The seek interface we use to iterate data ranges will
> give us sub-folio start/end granularity, so we may end up looking up
> the same folio multiple times as the seek interface iterates across
> each discontiguous data region in the folio.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
> fs/iomap/buffered-io.c | 187 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 169 insertions(+), 18 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 77f391fd90ca..028cdf115477 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -832,6 +832,151 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
> }
> EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
>
> +/*
> + * Scan the data range passed to us for dirty page cache folios. If we find a
> + * dirty folio, punch out the preceeding range and update the offset from which
> + * the next punch will start from.
> + *
> + * We can punch out clean pages because they either contain data that has been
I got briefly confused by this when I read this in the v2 patchset,
because I thought we were punching out clean pages, not the delalloc
iomapping under them. Maybe:
"We can punch out storage reservations under clean pages..."
> + * written back - in which case the delalloc punch over that range is a no-op -
> + * or they have been read faults in which case they contain zeroes and we can
> + * remove the delalloc backing range and any new writes to those pages will do
> + * the normal hole filling operation...
> + *
> + * This makes the logic simple: we only need to keep the delalloc extents only
> + * over the dirty ranges of the page cache.
> + *
> + * This function uses [start_byte, end_byte) intervals (i.e. open ended) to
> + * simplify range iterations.
> + */
> +static int iomap_write_delalloc_scan(struct inode *inode,
> + loff_t *punch_start_byte, loff_t start_byte, loff_t end_byte,
> + int (*punch)(struct inode *inode, loff_t offset, loff_t length))
> +{
> + while (start_byte < end_byte) {
> + struct folio *folio;
> +
> + /* grab locked page */
> + folio = filemap_lock_folio(inode->i_mapping,
> + start_byte >> PAGE_SHIFT);
> + if (!folio) {
> + start_byte = ALIGN_DOWN(start_byte, PAGE_SIZE) +
> + PAGE_SIZE;
> + continue;
> + }
> +
> + /* if dirty, punch up to offset */
> + if (folio_test_dirty(folio)) {
> + if (start_byte > *punch_start_byte) {
> + int error;
> +
> + error = punch(inode, *punch_start_byte,
> + start_byte - *punch_start_byte);
> + if (error) {
> + folio_unlock(folio);
> + folio_put(folio);
> + return error;
> + }
> + }
> +
> + /*
> + * Make sure the next punch start is correctly bound to
> + * the end of this data range, not the end of the folio.
> + */
> + *punch_start_byte = min_t(loff_t, end_byte,
> + folio_next_index(folio) << PAGE_SHIFT);
> + }
> +
> + /* move offset to start of next folio in range */
> + start_byte = folio_next_index(folio) << PAGE_SHIFT;
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> + return 0;
> +}
> +
> +/*
> + * Punch out all the delalloc blocks in the range given except for those that
> + * have dirty data still pending in the page cache - those are going to be
> + * written and so must still retain the delalloc backing for writeback.
> + *
> + * As we are scanning the page cache for data, we don't need to reimplement the
> + * wheel - mapping_seek_hole_data() does exactly what we need to identify the
> + * start and end of data ranges correctly even for sub-folio block sizes. This
> + * byte range based iteration is especially convenient because it means we don't
> + * have to care about variable size folios, nor where the start or end of the
> + * data range lies within a folio, if they lie within the same folio or even if
> + * there are multiple discontiguous data ranges within the folio.
> + *
> + * Intervals are of the form [start_byte, end_byte) (i.e. open ended) because
> + * it matches the intervals returned by mapping_seek_hole_data(). i.e. SEEK_DATA
> + * returns the start of a data range (start_byte), and SEEK_HOLE(start_byte)
> + * returns the end of the data range (data_end). Using closed intervals would
> + * require sprinkling this code with magic "+ 1" and "- 1" arithmetic and expose
> + * the code to subtle off-by-one bugs....
> + */
> +static int iomap_write_delalloc_release(struct inode *inode,
> + loff_t start_byte, loff_t end_byte,
> + int (*punch)(struct inode *inode, loff_t offset, loff_t length))
> +{
> + loff_t punch_start_byte = start_byte;
> + int error = 0;
> +
> + /*
> + * Lock the mapping to avoid races with page faults re-instantiating
> + * folios and dirtying them via ->page_mkwrite whilst we walk the
> + * cache and perform delalloc extent removal. Failing to do this can
> + * leave dirty pages with no space reservation in the cache.
> + */
> + filemap_invalidate_lock(inode->i_mapping);
> + while (start_byte < end_byte) {
> + loff_t data_end;
> +
> + start_byte = mapping_seek_hole_data(inode->i_mapping,
> + start_byte, end_byte, SEEK_DATA);
> + /*
> + * If there is no more data to scan, all that is left is to
> + * punch out the remaining range.
> + */
> + if (start_byte == -ENXIO || start_byte == end_byte)
> + break;
> + if (start_byte < 0) {
> + error = start_byte;
> + goto out_unlock;
> + }
> + WARN_ON_ONCE(start_byte < punch_start_byte);
> + WARN_ON_ONCE(start_byte > end_byte);
> +
> + /*
> + * We find the end of this contiguous cached data range by
> + * seeking from start_byte to the beginning of the next hole.
> + */
> + data_end = mapping_seek_hole_data(inode->i_mapping, start_byte,
> + end_byte, SEEK_HOLE);
> + if (data_end < 0) {
> + error = data_end;
> + goto out_unlock;
> + }
> + WARN_ON_ONCE(data_end <= start_byte);
> + WARN_ON_ONCE(data_end > end_byte);
> +
> + error = iomap_write_delalloc_scan(inode, &punch_start_byte,
> + start_byte, data_end, punch);
> + if (error)
> + goto out_unlock;
> +
> + /* The next data search starts at the end of this one. */
> + start_byte = data_end;
> + }
> +
> + if (punch_start_byte < end_byte)
> + error = punch(inode, punch_start_byte,
> + end_byte - punch_start_byte);
> +out_unlock:
> + filemap_invalidate_unlock(inode->i_mapping);
> + return error;
> +}
> +
> /*
> * When a short write occurs, the filesystem may need to remove reserved space
> * that was allocated in ->iomap_begin from it's ->iomap_end method. For
> @@ -842,18 +987,34 @@ EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
> * allocated for this iomap.
> *
> * This function uses [start_byte, end_byte) intervals (i.e. open ended) to
> - * simplify range iterations, but converts them back to {offset,len} tuples for
> - * the punch callback.
> + * simplify range iterations.
> + *
> + * The punch() callback *must* only punch delalloc extents in the range passed
> + * to it. It must skip over all other types of extents in the range and leave
> + * them completely unchanged. It must do this punch atomically with respect to
> + * other extent modifications.
> + *
> + * The punch() callback may be called with a folio locked to prevent writeback
> + * extent allocation racing at the edge of the range we are currently punching.
> + * The locked folio may or may not cover the range being punched, so it is not
> + * safe for the punch() callback to lock folios itself.
> + *
> + * Lock order is:
> + *
> + * inode->i_rwsem (shared or exclusive)
> + * inode->i_mapping->invalidate_lock (exclusive)
> + * folio_lock()
> + * ->punch
> + * internal filesystem allocation lock
Yay locking diagrams!
> */
> int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
> struct iomap *iomap, loff_t offset, loff_t length,
> ssize_t written,
> int (*punch)(struct inode *inode, loff_t offset, loff_t length))
> {
> - loff_t start_byte;
> - loff_t end_byte;
> - int blocksize = 1 << inode->i_blkbits;
> - int error = 0;
> + loff_t start_byte;
> + loff_t end_byte;
> + int blocksize = 1 << inode->i_blkbits;
int blocksize = i_blocksize(inode);
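(For reference, i_blocksize() is just the VFS helper that wraps this shift - quoted from memory from include/linux/fs.h, so double-check it against the actual tree:

static inline unsigned int i_blocksize(const struct inode *node)
{
	return (1 << node->i_blkbits);
}

so the substitution is purely cosmetic.)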
With all those bits fixed up I think I'm ready (again) to send this to
the fstests cloud and see what happens. If you end up doing a v4 then
please fix these things before submitting them.
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
>
> if (iomap->type != IOMAP_DELALLOC)
> return 0;
> @@ -877,18 +1038,8 @@ int iomap_file_buffered_write_punch_delalloc(struct inode *inode,
> if (start_byte >= end_byte)
> return 0;
>
> - /*
> - * Lock the mapping to avoid races with page faults re-instantiating
> - * folios and dirtying them via ->page_mkwrite between the page cache
> - * truncation and the delalloc extent removal. Failing to do this can
> - * leave dirty pages with no space reservation in the cache.
> - */
> - filemap_invalidate_lock(inode->i_mapping);
> - truncate_pagecache_range(inode, start_byte, end_byte - 1);
> - error = punch(inode, start_byte, end_byte - start_byte);
> - filemap_invalidate_unlock(inode->i_mapping);
> -
> - return error;
> + return iomap_write_delalloc_release(inode, start_byte, end_byte,
> + punch);
> }
> EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
>
> --
> 2.37.2
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 6/9] xfs: xfs_bmap_punch_delalloc_range() should take a byte range
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corrupton due to stale cached iomaps Dave Chinner
` (4 preceding siblings ...)
2022-11-17 5:58 ` [PATCH 5/9] iomap: buffered write failure should not truncate the page cache Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 18:37 ` Darrick J. Wong
2022-11-17 5:58 ` [PATCH 7/9] iomap: write iomap validity checks Dave Chinner
` (2 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
All the callers of xfs_bmap_punch_delalloc_range() jump through
hoops to convert a byte range to filesystem blocks before calling
xfs_bmap_punch_delalloc_range(). Instead, pass the byte range to
xfs_bmap_punch_delalloc_range() and have it do the conversion to
filesystem blocks internally.
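For illustration, a minimal standalone sketch (not part of this patch) of the rounding behaviour the byte-range interface relies on: the start byte is rounded down to a block boundary, as XFS_B_TO_FSBT() does, and the end byte is rounded up, as XFS_B_TO_FSB() does, so a partial-block byte range always covers the whole underlying block:

/* Illustrative only - not the XFS macros themselves. */
static inline unsigned long long b_to_fsbt(unsigned long long bytes,
					   unsigned int blocklog)
{
	return bytes >> blocklog;			/* round down */
}

static inline unsigned long long b_to_fsb(unsigned long long bytes,
					  unsigned int blocklog)
{
	return (bytes + (1ULL << blocklog) - 1) >> blocklog;	/* round up */
}

With 4kB blocks (blocklog = 12), the byte range [100, 5000) therefore maps to filesystem blocks [0, 2).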
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
fs/xfs/xfs_aops.c | 16 ++++++----------
fs/xfs/xfs_bmap_util.c | 10 ++++++----
fs/xfs/xfs_bmap_util.h | 2 +-
fs/xfs/xfs_iomap.c | 8 ++------
4 files changed, 15 insertions(+), 21 deletions(-)
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 5d1a995b15f8..6aadc5815068 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -114,9 +114,8 @@ xfs_end_ioend(
if (unlikely(error)) {
if (ioend->io_flags & IOMAP_F_SHARED) {
xfs_reflink_cancel_cow_range(ip, offset, size, true);
- xfs_bmap_punch_delalloc_range(ip,
- XFS_B_TO_FSBT(mp, offset),
- XFS_B_TO_FSB(mp, size));
+ xfs_bmap_punch_delalloc_range(ip, offset,
+ offset + size);
}
goto done;
}
@@ -455,12 +454,8 @@ xfs_discard_folio(
struct folio *folio,
loff_t pos)
{
- struct inode *inode = folio->mapping->host;
- struct xfs_inode *ip = XFS_I(inode);
+ struct xfs_inode *ip = XFS_I(folio->mapping->host);
struct xfs_mount *mp = ip->i_mount;
- size_t offset = offset_in_folio(folio, pos);
- xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, pos);
- xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, offset);
int error;
if (xfs_is_shutdown(mp))
@@ -470,8 +465,9 @@ xfs_discard_folio(
"page discard on page "PTR_FMT", inode 0x%llx, pos %llu.",
folio, ip->i_ino, pos);
- error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
- i_blocks_per_folio(inode, folio) - pageoff_fsb);
+ error = xfs_bmap_punch_delalloc_range(ip, pos,
+ round_up(pos, folio_size(folio)));
+
if (error && !xfs_is_shutdown(mp))
xfs_alert(mp, "page discard unable to remove delalloc mapping.");
}
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index 04d0c2bff67c..867645b74d88 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -590,11 +590,13 @@ xfs_getbmap(
int
xfs_bmap_punch_delalloc_range(
struct xfs_inode *ip,
- xfs_fileoff_t start_fsb,
- xfs_fileoff_t length)
+ xfs_off_t start_byte,
+ xfs_off_t end_byte)
{
+ struct xfs_mount *mp = ip->i_mount;
struct xfs_ifork *ifp = &ip->i_df;
- xfs_fileoff_t end_fsb = start_fsb + length;
+ xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, start_byte);
+ xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, end_byte);
struct xfs_bmbt_irec got, del;
struct xfs_iext_cursor icur;
int error = 0;
@@ -607,7 +609,7 @@ xfs_bmap_punch_delalloc_range(
while (got.br_startoff + got.br_blockcount > start_fsb) {
del = got;
- xfs_trim_extent(&del, start_fsb, length);
+ xfs_trim_extent(&del, start_fsb, end_fsb - start_fsb);
/*
* A delete can push the cursor forward. Step back to the
diff --git a/fs/xfs/xfs_bmap_util.h b/fs/xfs/xfs_bmap_util.h
index 24b37d211f1d..6888078f5c31 100644
--- a/fs/xfs/xfs_bmap_util.h
+++ b/fs/xfs/xfs_bmap_util.h
@@ -31,7 +31,7 @@ xfs_bmap_rtalloc(struct xfs_bmalloca *ap)
#endif /* CONFIG_XFS_RT */
int xfs_bmap_punch_delalloc_range(struct xfs_inode *ip,
- xfs_fileoff_t start_fsb, xfs_fileoff_t length);
+ xfs_off_t start_byte, xfs_off_t end_byte);
struct kgetbmap {
__s64 bmv_offset; /* file offset of segment in blocks */
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index ea96e8a34868..09676ff6940e 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1126,12 +1126,8 @@ xfs_buffered_write_delalloc_punch(
loff_t offset,
loff_t length)
{
- struct xfs_mount *mp = XFS_M(inode->i_sb);
- xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, offset);
- xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + length);
-
- return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
- end_fsb - start_fsb);
+ return xfs_bmap_punch_delalloc_range(XFS_I(inode), offset,
+ offset + length);
}
static int
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 6/9] xfs: xfs_bmap_punch_delalloc_range() should take a byte range
2022-11-17 5:58 ` [PATCH 6/9] xfs: xfs_bmap_punch_delalloc_range() should take a byte range Dave Chinner
@ 2022-11-17 18:37 ` Darrick J. Wong
0 siblings, 0 replies; 18+ messages in thread
From: Darrick J. Wong @ 2022-11-17 18:37 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 04:58:07PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> All the callers of xfs_bmap_punch_delalloc_range() jump through
> hoops to convert a byte range to filesystem blocks before calling
> xfs_bmap_punch_delalloc_range(). Instead, pass the byte range to
> xfs_bmap_punch_delalloc_range() and have it do the conversion to
> filesystem blocks internally.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
Looks fine to me,
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
> ---
> fs/xfs/xfs_aops.c | 16 ++++++----------
> fs/xfs/xfs_bmap_util.c | 10 ++++++----
> fs/xfs/xfs_bmap_util.h | 2 +-
> fs/xfs/xfs_iomap.c | 8 ++------
> 4 files changed, 15 insertions(+), 21 deletions(-)
>
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 5d1a995b15f8..6aadc5815068 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -114,9 +114,8 @@ xfs_end_ioend(
> if (unlikely(error)) {
> if (ioend->io_flags & IOMAP_F_SHARED) {
> xfs_reflink_cancel_cow_range(ip, offset, size, true);
> - xfs_bmap_punch_delalloc_range(ip,
> - XFS_B_TO_FSBT(mp, offset),
> - XFS_B_TO_FSB(mp, size));
> + xfs_bmap_punch_delalloc_range(ip, offset,
> + offset + size);
> }
> goto done;
> }
> @@ -455,12 +454,8 @@ xfs_discard_folio(
> struct folio *folio,
> loff_t pos)
> {
> - struct inode *inode = folio->mapping->host;
> - struct xfs_inode *ip = XFS_I(inode);
> + struct xfs_inode *ip = XFS_I(folio->mapping->host);
> struct xfs_mount *mp = ip->i_mount;
> - size_t offset = offset_in_folio(folio, pos);
> - xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, pos);
> - xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, offset);
> int error;
>
> if (xfs_is_shutdown(mp))
> @@ -470,8 +465,9 @@ xfs_discard_folio(
> "page discard on page "PTR_FMT", inode 0x%llx, pos %llu.",
> folio, ip->i_ino, pos);
>
> - error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
> - i_blocks_per_folio(inode, folio) - pageoff_fsb);
> + error = xfs_bmap_punch_delalloc_range(ip, pos,
> + round_up(pos, folio_size(folio)));
> +
> if (error && !xfs_is_shutdown(mp))
> xfs_alert(mp, "page discard unable to remove delalloc mapping.");
> }
> diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
> index 04d0c2bff67c..867645b74d88 100644
> --- a/fs/xfs/xfs_bmap_util.c
> +++ b/fs/xfs/xfs_bmap_util.c
> @@ -590,11 +590,13 @@ xfs_getbmap(
> int
> xfs_bmap_punch_delalloc_range(
> struct xfs_inode *ip,
> - xfs_fileoff_t start_fsb,
> - xfs_fileoff_t length)
> + xfs_off_t start_byte,
> + xfs_off_t end_byte)
> {
> + struct xfs_mount *mp = ip->i_mount;
> struct xfs_ifork *ifp = &ip->i_df;
> - xfs_fileoff_t end_fsb = start_fsb + length;
> + xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, start_byte);
> + xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, end_byte);
> struct xfs_bmbt_irec got, del;
> struct xfs_iext_cursor icur;
> int error = 0;
> @@ -607,7 +609,7 @@ xfs_bmap_punch_delalloc_range(
>
> while (got.br_startoff + got.br_blockcount > start_fsb) {
> del = got;
> - xfs_trim_extent(&del, start_fsb, length);
> + xfs_trim_extent(&del, start_fsb, end_fsb - start_fsb);
>
> /*
> * A delete can push the cursor forward. Step back to the
> diff --git a/fs/xfs/xfs_bmap_util.h b/fs/xfs/xfs_bmap_util.h
> index 24b37d211f1d..6888078f5c31 100644
> --- a/fs/xfs/xfs_bmap_util.h
> +++ b/fs/xfs/xfs_bmap_util.h
> @@ -31,7 +31,7 @@ xfs_bmap_rtalloc(struct xfs_bmalloca *ap)
> #endif /* CONFIG_XFS_RT */
>
> int xfs_bmap_punch_delalloc_range(struct xfs_inode *ip,
> - xfs_fileoff_t start_fsb, xfs_fileoff_t length);
> + xfs_off_t start_byte, xfs_off_t end_byte);
>
> struct kgetbmap {
> __s64 bmv_offset; /* file offset of segment in blocks */
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index ea96e8a34868..09676ff6940e 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -1126,12 +1126,8 @@ xfs_buffered_write_delalloc_punch(
> loff_t offset,
> loff_t length)
> {
> - struct xfs_mount *mp = XFS_M(inode->i_sb);
> - xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, offset);
> - xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + length);
> -
> - return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
> - end_fsb - start_fsb);
> + return xfs_bmap_punch_delalloc_range(XFS_I(inode), offset,
> + offset + length);
> }
>
> static int
> --
> 2.37.2
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 7/9] iomap: write iomap validity checks
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corrupton due to stale cached iomaps Dave Chinner
` (5 preceding siblings ...)
2022-11-17 5:58 ` [PATCH 6/9] xfs: xfs_bmap_punch_delalloc_range() should take a byte range Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 18:40 ` Darrick J. Wong
2022-11-17 5:58 ` [PATCH 8/9] xfs: use iomap_valid method to detect stale cached iomaps Dave Chinner
2022-11-17 5:58 ` [PATCH 9/9] xfs: drop write error injection is unfixable, remove it Dave Chinner
8 siblings, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
A recent multithreaded write data corruption has been uncovered in
the iomap write code. The core of the problem is that a partial folio
write can be flushed to disk while a new racing write maps the same
folio and fills the rest of the page:
writeback new write
allocate blocks
blocks are unwritten
submit IO
.....
map blocks
iomap indicates UNWRITTEN range
loop {
lock folio
copyin data
.....
IO completes
runs unwritten extent conv
blocks are marked written
<iomap now stale>
get next folio
}
Now add memory pressure such that memory reclaim evicts the
partially written folio that has already been written to disk.
When the new write finally gets to the last partial page of the new
write, it does not find it in cache, so it instantiates a new page,
sees the iomap is unwritten, and zeros the part of the page that
it does not have data from. This overwrites the data on disk that
was originally written.
The full description of the corruption mechanism can be found here:
https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/
To solve this problem, we need to check whether the iomap is still
valid after we lock each folio during the write. We have to do it
after we lock the page so that we don't end up with state changes
occurring while we wait for the folio to be locked.
Hence we need a mechanism to be able to check that the cached iomap
is still valid (similar to what we already do in buffered
writeback), and we need a way for ->begin_write to back out and
tell the high level iomap iterator that we need to remap the
remaining write range.
The iomap needs to grow some storage for the validity cookie that
the filesystem provides to travel with the iomap. XFS, in
particular, also needs to know some more information about what the
iomap maps (attribute extents rather than file data extents) to for
the validity cookie to cover all the types of iomaps we might need
to validate.
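To make the shape of the mechanism concrete, a minimal illustrative sketch of how a filesystem might wire the hook up (not part of this patch; example_fs_seq() is a hypothetical helper returning a counter the filesystem bumps on every extent map change):

#include <linux/iomap.h>

/* Hypothetical: a counter bumped on every extent map change. */
static u64 example_fs_seq(struct inode *inode);

static bool example_iomap_valid(struct inode *inode, const struct iomap *iomap)
{
	/* The cached iomap is stale if the map changed since ->iomap_begin. */
	return iomap->validity_cookie == example_fs_seq(inode);
}

static const struct iomap_page_ops example_page_ops = {
	.iomap_valid	= example_iomap_valid,
};

static int example_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
		unsigned flags, struct iomap *iomap, struct iomap *srcmap)
{
	/* ... look up the extent and fill in *iomap as usual ... */
	iomap->validity_cookie = example_fs_seq(inode);
	iomap->page_ops = &example_page_ops;
	return 0;
}

The core then calls ->iomap_valid() with the folio locked and, if it returns false, sets IOMAP_F_STALE and loops back to remap the remaining range.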
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
fs/iomap/buffered-io.c | 29 +++++++++++++++++++++++++++-
fs/iomap/iter.c | 19 ++++++++++++++++++-
include/linux/iomap.h | 43 ++++++++++++++++++++++++++++++++++--------
3 files changed, 81 insertions(+), 10 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 028cdf115477..1a7702de76fb 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -584,7 +584,7 @@ static int iomap_write_begin_inline(const struct iomap_iter *iter,
return iomap_read_inline_data(iter, folio);
}
-static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
+static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
size_t len, struct folio **foliop)
{
const struct iomap_page_ops *page_ops = iter->iomap.page_ops;
@@ -618,6 +618,27 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
status = (iter->flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOMEM;
goto out_no_page;
}
+
+ /*
+ * Now we have a locked folio, before we do anything with it we need to
+ * check that the iomap we have cached is not stale. The inode extent
+ * mapping can change due to concurrent IO in flight (e.g.
+ * IOMAP_UNWRITTEN state can change and memory reclaim could have
+ * reclaimed a previously partially written page at this index after IO
+ * completion before this write reaches this file offset) and hence we
+ * could do the wrong thing here (zero a page range incorrectly or fail
+ * to zero) and corrupt data.
+ */
+ if (page_ops && page_ops->iomap_valid) {
+ bool iomap_valid = page_ops->iomap_valid(iter->inode,
+ &iter->iomap);
+ if (!iomap_valid) {
+ iter->iomap.flags |= IOMAP_F_STALE;
+ status = 0;
+ goto out_unlock;
+ }
+ }
+
if (pos + len > folio_pos(folio) + folio_size(folio))
len = folio_pos(folio) + folio_size(folio) - pos;
@@ -773,6 +794,8 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
status = iomap_write_begin(iter, pos, bytes, &folio);
if (unlikely(status))
break;
+ if (iter->iomap.flags & IOMAP_F_STALE)
+ break;
page = folio_file_page(folio, pos >> PAGE_SHIFT);
if (mapping_writably_mapped(mapping))
@@ -1067,6 +1090,8 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter)
status = iomap_write_begin(iter, pos, bytes, &folio);
if (unlikely(status))
return status;
+ if (iter->iomap.flags & IOMAP_F_STALE)
+ break;
status = iomap_write_end(iter, pos, bytes, bytes, folio);
if (WARN_ON_ONCE(status == 0))
@@ -1122,6 +1147,8 @@ static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
status = iomap_write_begin(iter, pos, bytes, &folio);
if (status)
return status;
+ if (iter->iomap.flags & IOMAP_F_STALE)
+ break;
offset = offset_in_folio(folio, pos);
if (bytes > folio_size(folio) - offset)
diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
index a1c7592d2ade..79a0614eaab7 100644
--- a/fs/iomap/iter.c
+++ b/fs/iomap/iter.c
@@ -7,12 +7,28 @@
#include <linux/iomap.h>
#include "trace.h"
+/*
+ * Advance to the next range we need to map.
+ *
+ * If the iomap is marked IOMAP_F_STALE, it means the existing map was not fully
+ * processed - it was aborted because the extent the iomap spanned may have been
+ * changed during the operation. In this case, the iteration behaviour is to
+ * remap the unprocessed range of the iter, and that means we may need to remap
+ * even when we've made no progress (i.e. iter->processed = 0). Hence the
+ * "finished iterating" case needs to distinguish between
+ * (processed = 0) meaning we are done and (processed = 0 && stale) meaning we
+ * need to remap the entire remaining range.
+ */
static inline int iomap_iter_advance(struct iomap_iter *iter)
{
+ bool stale = iter->iomap.flags & IOMAP_F_STALE;
+
/* handle the previous iteration (if any) */
if (iter->iomap.length) {
- if (iter->processed <= 0)
+ if (iter->processed < 0)
return iter->processed;
+ if (!iter->processed && !stale)
+ return 0;
if (WARN_ON_ONCE(iter->processed > iomap_length(iter)))
return -EIO;
iter->pos += iter->processed;
@@ -33,6 +49,7 @@ static inline void iomap_iter_done(struct iomap_iter *iter)
WARN_ON_ONCE(iter->iomap.offset > iter->pos);
WARN_ON_ONCE(iter->iomap.length == 0);
WARN_ON_ONCE(iter->iomap.offset + iter->iomap.length <= iter->pos);
+ WARN_ON_ONCE(iter->iomap.flags & IOMAP_F_STALE);
trace_iomap_iter_dstmap(iter->inode, &iter->iomap);
if (iter->srcmap.type != IOMAP_HOLE)
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 6bbed915c83a..2ecdd9d90efc 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -49,26 +49,35 @@ struct vm_fault;
*
* IOMAP_F_BUFFER_HEAD indicates that the file system requires the use of
* buffer heads for this mapping.
+ *
+ * IOMAP_F_XATTR indicates that the iomap is for an extended attribute extent
+ * rather than a file data extent.
*/
-#define IOMAP_F_NEW 0x01
-#define IOMAP_F_DIRTY 0x02
-#define IOMAP_F_SHARED 0x04
-#define IOMAP_F_MERGED 0x08
-#define IOMAP_F_BUFFER_HEAD 0x10
-#define IOMAP_F_ZONE_APPEND 0x20
+#define IOMAP_F_NEW (1U << 0)
+#define IOMAP_F_DIRTY (1U << 1)
+#define IOMAP_F_SHARED (1U << 2)
+#define IOMAP_F_MERGED (1U << 3)
+#define IOMAP_F_BUFFER_HEAD (1U << 4)
+#define IOMAP_F_ZONE_APPEND (1U << 5)
+#define IOMAP_F_XATTR (1U << 6)
/*
* Flags set by the core iomap code during operations:
*
* IOMAP_F_SIZE_CHANGED indicates to the iomap_end method that the file size
* has changed as the result of this write operation.
+ *
+ * IOMAP_F_STALE indicates that the iomap is not valid any longer and the file
+ * range it covers needs to be remapped by the high level before the operation
+ * can proceed.
*/
-#define IOMAP_F_SIZE_CHANGED 0x100
+#define IOMAP_F_SIZE_CHANGED (1U << 8)
+#define IOMAP_F_STALE (1U << 9)
/*
* Flags from 0x1000 up are for file system specific usage:
*/
-#define IOMAP_F_PRIVATE 0x1000
+#define IOMAP_F_PRIVATE (1U << 12)
/*
@@ -89,6 +98,7 @@ struct iomap {
void *inline_data;
void *private; /* filesystem private */
const struct iomap_page_ops *page_ops;
+ u64 validity_cookie; /* used with .iomap_valid() */
};
static inline sector_t iomap_sector(const struct iomap *iomap, loff_t pos)
@@ -128,6 +138,23 @@ struct iomap_page_ops {
int (*page_prepare)(struct inode *inode, loff_t pos, unsigned len);
void (*page_done)(struct inode *inode, loff_t pos, unsigned copied,
struct page *page);
+
+ /*
+ * Check that the cached iomap still maps correctly to the filesystem's
+ * internal extent map. FS internal extent maps can change while iomap
+ * is iterating a cached iomap, so this hook allows iomap to detect that
+ * the iomap needs to be refreshed during a long running write
+ * operation.
+ *
+ * The filesystem can store internal state (e.g. a sequence number) in
+ * iomap->validity_cookie when the iomap is first mapped to be able to
+ * detect changes between mapping time and whenever .iomap_valid() is
+ * called.
+ *
+ * This is called with the folio over the specified file position held
+ * locked by the iomap code.
+ */
+ bool (*iomap_valid)(struct inode *inode, const struct iomap *iomap);
};
/*
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 7/9] iomap: write iomap validity checks
2022-11-17 5:58 ` [PATCH 7/9] iomap: write iomap validity checks Dave Chinner
@ 2022-11-17 18:40 ` Darrick J. Wong
0 siblings, 0 replies; 18+ messages in thread
From: Darrick J. Wong @ 2022-11-17 18:40 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 04:58:08PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> A recent multithreaded write data corruption has been uncovered in
> the iomap write code. The core of the problem is that a partial folio
> write can be flushed to disk while a new racing write maps the same
> folio and fills the rest of the page:
>
> writeback new write
>
> allocate blocks
> blocks are unwritten
> submit IO
> .....
> map blocks
> iomap indicates UNWRITTEN range
> loop {
> lock folio
> copyin data
> .....
> IO completes
> runs unwritten extent conv
> blocks are marked written
> <iomap now stale>
> get next folio
> }
>
> Now add memory pressure such that memory reclaim evicts the
> partially written folio that has already been written to disk.
>
> When the new write finally gets to the last partial page of the new
> write, it does not find it in cache, so it instantiates a new page,
> sees the iomap is unwritten, and zeros the part of the page that
> it does not have data from. This overwrites the data on disk that
> was originally written.
>
> The full description of the corruption mechanism can be found here:
>
> https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/
>
> To solve this problem, we need to check whether the iomap is still
> valid after we lock each folio during the write. We have to do it
> after we lock the page so that we don't end up with state changes
> occurring while we wait for the folio to be locked.
>
> Hence we need a mechanism to be able to check that the cached iomap
> is still valid (similar to what we already do in buffered
> writeback), and we need a way for ->begin_write to back out and
> tell the high level iomap iterator that we need to remap the
> remaining write range.
>
> The iomap needs to grow some storage for the validity cookie that
> the filesystem provides to travel with the iomap. XFS, in
> particular, also needs to know some more information about what the
> iomap maps (attribute extents rather than file data extents) to for
> the validity cookie to cover all the types of iomaps we might need
> to validate.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
OK, I think I grok this enough to:
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
> ---
> fs/iomap/buffered-io.c | 29 +++++++++++++++++++++++++++-
> fs/iomap/iter.c | 19 ++++++++++++++++++-
> include/linux/iomap.h | 43 ++++++++++++++++++++++++++++++++++--------
> 3 files changed, 81 insertions(+), 10 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 028cdf115477..1a7702de76fb 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -584,7 +584,7 @@ static int iomap_write_begin_inline(const struct iomap_iter *iter,
> return iomap_read_inline_data(iter, folio);
> }
>
> -static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> +static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
> size_t len, struct folio **foliop)
> {
> const struct iomap_page_ops *page_ops = iter->iomap.page_ops;
> @@ -618,6 +618,27 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
> status = (iter->flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOMEM;
> goto out_no_page;
> }
> +
> + /*
> + * Now we have a locked folio, before we do anything with it we need to
> + * check that the iomap we have cached is not stale. The inode extent
> + * mapping can change due to concurrent IO in flight (e.g.
> + * IOMAP_UNWRITTEN state can change and memory reclaim could have
> + * reclaimed a previously partially written page at this index after IO
> + * completion before this write reaches this file offset) and hence we
> + * could do the wrong thing here (zero a page range incorrectly or fail
> + * to zero) and corrupt data.
> + */
> + if (page_ops && page_ops->iomap_valid) {
> + bool iomap_valid = page_ops->iomap_valid(iter->inode,
> + &iter->iomap);
> + if (!iomap_valid) {
> + iter->iomap.flags |= IOMAP_F_STALE;
> + status = 0;
> + goto out_unlock;
> + }
> + }
> +
> if (pos + len > folio_pos(folio) + folio_size(folio))
> len = folio_pos(folio) + folio_size(folio) - pos;
>
> @@ -773,6 +794,8 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
> status = iomap_write_begin(iter, pos, bytes, &folio);
> if (unlikely(status))
> break;
> + if (iter->iomap.flags & IOMAP_F_STALE)
> + break;
>
> page = folio_file_page(folio, pos >> PAGE_SHIFT);
> if (mapping_writably_mapped(mapping))
> @@ -1067,6 +1090,8 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter)
> status = iomap_write_begin(iter, pos, bytes, &folio);
> if (unlikely(status))
> return status;
> + if (iter->iomap.flags & IOMAP_F_STALE)
> + break;
>
> status = iomap_write_end(iter, pos, bytes, bytes, folio);
> if (WARN_ON_ONCE(status == 0))
> @@ -1122,6 +1147,8 @@ static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
> status = iomap_write_begin(iter, pos, bytes, &folio);
> if (status)
> return status;
> + if (iter->iomap.flags & IOMAP_F_STALE)
> + break;
>
> offset = offset_in_folio(folio, pos);
> if (bytes > folio_size(folio) - offset)
> diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
> index a1c7592d2ade..79a0614eaab7 100644
> --- a/fs/iomap/iter.c
> +++ b/fs/iomap/iter.c
> @@ -7,12 +7,28 @@
> #include <linux/iomap.h>
> #include "trace.h"
>
> +/*
> + * Advance to the next range we need to map.
> + *
> + * If the iomap is marked IOMAP_F_STALE, it means the existing map was not fully
> + * processed - it was aborted because the extent the iomap spanned may have been
> + * changed during the operation. In this case, the iteration behaviour is to
> + * remap the unprocessed range of the iter, and that means we may need to remap
> + * even when we've made no progress (i.e. iter->processed = 0). Hence the
> + * "finished iterating" case needs to distinguish between
> + * (processed = 0) meaning we are done and (processed = 0 && stale) meaning we
> + * need to remap the entire remaining range.
> + */
> static inline int iomap_iter_advance(struct iomap_iter *iter)
> {
> + bool stale = iter->iomap.flags & IOMAP_F_STALE;
> +
> /* handle the previous iteration (if any) */
> if (iter->iomap.length) {
> - if (iter->processed <= 0)
> + if (iter->processed < 0)
> return iter->processed;
> + if (!iter->processed && !stale)
> + return 0;
> if (WARN_ON_ONCE(iter->processed > iomap_length(iter)))
> return -EIO;
> iter->pos += iter->processed;
> @@ -33,6 +49,7 @@ static inline void iomap_iter_done(struct iomap_iter *iter)
> WARN_ON_ONCE(iter->iomap.offset > iter->pos);
> WARN_ON_ONCE(iter->iomap.length == 0);
> WARN_ON_ONCE(iter->iomap.offset + iter->iomap.length <= iter->pos);
> + WARN_ON_ONCE(iter->iomap.flags & IOMAP_F_STALE);
>
> trace_iomap_iter_dstmap(iter->inode, &iter->iomap);
> if (iter->srcmap.type != IOMAP_HOLE)
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 6bbed915c83a..2ecdd9d90efc 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -49,26 +49,35 @@ struct vm_fault;
> *
> * IOMAP_F_BUFFER_HEAD indicates that the file system requires the use of
> * buffer heads for this mapping.
> + *
> + * IOMAP_F_XATTR indicates that the iomap is for an extended attribute extent
> + * rather than a file data extent.
> */
> -#define IOMAP_F_NEW 0x01
> -#define IOMAP_F_DIRTY 0x02
> -#define IOMAP_F_SHARED 0x04
> -#define IOMAP_F_MERGED 0x08
> -#define IOMAP_F_BUFFER_HEAD 0x10
> -#define IOMAP_F_ZONE_APPEND 0x20
> +#define IOMAP_F_NEW (1U << 0)
> +#define IOMAP_F_DIRTY (1U << 1)
> +#define IOMAP_F_SHARED (1U << 2)
> +#define IOMAP_F_MERGED (1U << 3)
> +#define IOMAP_F_BUFFER_HEAD (1U << 4)
> +#define IOMAP_F_ZONE_APPEND (1U << 5)
> +#define IOMAP_F_XATTR (1U << 6)
>
> /*
> * Flags set by the core iomap code during operations:
> *
> * IOMAP_F_SIZE_CHANGED indicates to the iomap_end method that the file size
> * has changed as the result of this write operation.
> + *
> + * IOMAP_F_STALE indicates that the iomap is not valid any longer and the file
> + * range it covers needs to be remapped by the high level before the operation
> + * can proceed.
> */
> -#define IOMAP_F_SIZE_CHANGED 0x100
> +#define IOMAP_F_SIZE_CHANGED (1U << 8)
> +#define IOMAP_F_STALE (1U << 9)
>
> /*
> * Flags from 0x1000 up are for file system specific usage:
> */
> -#define IOMAP_F_PRIVATE 0x1000
> +#define IOMAP_F_PRIVATE (1U << 12)
>
>
> /*
> @@ -89,6 +98,7 @@ struct iomap {
> void *inline_data;
> void *private; /* filesystem private */
> const struct iomap_page_ops *page_ops;
> + u64 validity_cookie; /* used with .iomap_valid() */
> };
>
> static inline sector_t iomap_sector(const struct iomap *iomap, loff_t pos)
> @@ -128,6 +138,23 @@ struct iomap_page_ops {
> int (*page_prepare)(struct inode *inode, loff_t pos, unsigned len);
> void (*page_done)(struct inode *inode, loff_t pos, unsigned copied,
> struct page *page);
> +
> + /*
> + * Check that the cached iomap still maps correctly to the filesystem's
> + * internal extent map. FS internal extent maps can change while iomap
> + * is iterating a cached iomap, so this hook allows iomap to detect that
> + * the iomap needs to be refreshed during a long running write
> + * operation.
> + *
> + * The filesystem can store internal state (e.g. a sequence number) in
> + * iomap->validity_cookie when the iomap is first mapped to be able to
> + * detect changes between mapping time and whenever .iomap_valid() is
> + * called.
> + *
> + * This is called with the folio over the specified file position held
> + * locked by the iomap code.
> + */
> + bool (*iomap_valid)(struct inode *inode, const struct iomap *iomap);
> };
>
> /*
> --
> 2.37.2
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 8/9] xfs: use iomap_valid method to detect stale cached iomaps
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corrupton due to stale cached iomaps Dave Chinner
` (6 preceding siblings ...)
2022-11-17 5:58 ` [PATCH 7/9] iomap: write iomap validity checks Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 18:59 ` Darrick J. Wong
2022-11-17 5:58 ` [PATCH 9/9] xfs: drop write error injection is unfixable, remove it Dave Chinner
8 siblings, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
Now that iomap supports a mechanism to validate cached iomaps for
buffered write operations, hook it up to the XFS buffered write ops
so that we can avoid data corruptions that result from stale cached
iomaps. See:
https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/
or the ->iomap_valid() introduction commit for exact details of the
corruption vector.
The validity cookie we store in the iomap is based on the type of
iomap we return. It is expected that the iomap->flags we set in
xfs_bmbt_to_iomap() are not perturbed by the iomap core and are
returned to us in the iomap passed via the .iomap_valid() callback.
This ensures that the validity cookie is always checking the correct
inode fork sequence numbers to detect potential changes that affect
the extent cached by the iomap.
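For reference, a minimal sketch (not part of this patch) of the resulting cookie layout: attribute fork mappings use the attr fork sequence number alone, while data/COW mappings pack the COW fork sequence into the upper 32 bits and the data fork sequence into the lower 32 bits:

/* Illustrative only: how the validity cookie packs the fork sequence numbers. */
static inline unsigned long long example_pack_cookie(unsigned int cow_seq,
						     unsigned int data_seq)
{
	return ((unsigned long long)cow_seq << 32) | data_seq;
}

/* e.g. cow_seq = 5, data_seq = 12 -> cookie = 0x000000050000000c */

Any change to either fork's sequence number therefore changes the cookie and invalidates the cached iomap.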
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
fs/xfs/libxfs/xfs_bmap.c | 6 ++-
fs/xfs/xfs_aops.c | 2 +-
fs/xfs/xfs_iomap.c | 96 +++++++++++++++++++++++++++++++---------
fs/xfs/xfs_iomap.h | 5 ++-
fs/xfs/xfs_pnfs.c | 6 ++-
5 files changed, 88 insertions(+), 27 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index 49d0d4ea63fc..56b9b7db38bb 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -4551,7 +4551,8 @@ xfs_bmapi_convert_delalloc(
* the extent. Just return the real extent at this offset.
*/
if (!isnullstartblock(bma.got.br_startblock)) {
- xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags);
+ xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags,
+ xfs_iomap_inode_sequence(ip, flags));
*seq = READ_ONCE(ifp->if_seq);
goto out_trans_cancel;
}
@@ -4599,7 +4600,8 @@ xfs_bmapi_convert_delalloc(
XFS_STATS_INC(mp, xs_xstrat_quick);
ASSERT(!isnullstartblock(bma.got.br_startblock));
- xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags);
+ xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags,
+ xfs_iomap_inode_sequence(ip, flags));
*seq = READ_ONCE(ifp->if_seq);
if (whichfork == XFS_COW_FORK)
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 6aadc5815068..a22d90af40c8 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -372,7 +372,7 @@ xfs_map_blocks(
isnullstartblock(imap.br_startblock))
goto allocate_blocks;
- xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0, 0);
+ xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0, 0, XFS_WPC(wpc)->data_seq);
trace_xfs_map_blocks_found(ip, offset, count, whichfork, &imap);
return 0;
allocate_blocks:
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 09676ff6940e..a9b3a1bcc3fd 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -48,13 +48,45 @@ xfs_alert_fsblock_zero(
return -EFSCORRUPTED;
}
+u64
+xfs_iomap_inode_sequence(
+ struct xfs_inode *ip,
+ u16 iomap_flags)
+{
+ u64 cookie = 0;
+
+ if (iomap_flags & IOMAP_F_XATTR)
+ return READ_ONCE(ip->i_af.if_seq);
+ if ((iomap_flags & IOMAP_F_SHARED) && ip->i_cowfp)
+ cookie = (u64)READ_ONCE(ip->i_cowfp->if_seq) << 32;
+ return cookie | READ_ONCE(ip->i_df.if_seq);
+}
+
+/*
+ * Check that the iomap passed to us is still valid for the given offset and
+ * length.
+ */
+static bool
+xfs_iomap_valid(
+ struct inode *inode,
+ const struct iomap *iomap)
+{
+ return iomap->validity_cookie ==
+ xfs_iomap_inode_sequence(XFS_I(inode), iomap->flags);
+}
+
+const struct iomap_page_ops xfs_iomap_page_ops = {
+ .iomap_valid = xfs_iomap_valid,
+};
+
int
xfs_bmbt_to_iomap(
struct xfs_inode *ip,
struct iomap *iomap,
struct xfs_bmbt_irec *imap,
unsigned int mapping_flags,
- u16 iomap_flags)
+ u16 iomap_flags,
+ u64 sequence_cookie)
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_buftarg *target = xfs_inode_buftarg(ip);
@@ -91,6 +123,9 @@ xfs_bmbt_to_iomap(
if (xfs_ipincount(ip) &&
(ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
iomap->flags |= IOMAP_F_DIRTY;
+
+ iomap->validity_cookie = sequence_cookie;
+ iomap->page_ops = &xfs_iomap_page_ops;
return 0;
}
@@ -195,7 +230,8 @@ xfs_iomap_write_direct(
xfs_fileoff_t offset_fsb,
xfs_fileoff_t count_fsb,
unsigned int flags,
- struct xfs_bmbt_irec *imap)
+ struct xfs_bmbt_irec *imap,
+ u64 *seq)
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_trans *tp;
@@ -285,6 +321,8 @@ xfs_iomap_write_direct(
error = xfs_alert_fsblock_zero(ip, imap);
out_unlock:
+ if (seq)
+ *seq = xfs_iomap_inode_sequence(ip, 0);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
return error;
@@ -743,6 +781,7 @@ xfs_direct_write_iomap_begin(
bool shared = false;
u16 iomap_flags = 0;
unsigned int lockmode = XFS_ILOCK_SHARED;
+ u64 seq;
ASSERT(flags & (IOMAP_WRITE | IOMAP_ZERO));
@@ -811,9 +850,10 @@ xfs_direct_write_iomap_begin(
goto out_unlock;
}
+ seq = xfs_iomap_inode_sequence(ip, iomap_flags);
xfs_iunlock(ip, lockmode);
trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap);
- return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags);
+ return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags, seq);
allocate_blocks:
error = -EAGAIN;
@@ -839,24 +879,26 @@ xfs_direct_write_iomap_begin(
xfs_iunlock(ip, lockmode);
error = xfs_iomap_write_direct(ip, offset_fsb, end_fsb - offset_fsb,
- flags, &imap);
+ flags, &imap, &seq);
if (error)
return error;
trace_xfs_iomap_alloc(ip, offset, length, XFS_DATA_FORK, &imap);
return xfs_bmbt_to_iomap(ip, iomap, &imap, flags,
- iomap_flags | IOMAP_F_NEW);
+ iomap_flags | IOMAP_F_NEW, seq);
out_found_cow:
- xfs_iunlock(ip, lockmode);
length = XFS_FSB_TO_B(mp, cmap.br_startoff + cmap.br_blockcount);
trace_xfs_iomap_found(ip, offset, length - offset, XFS_COW_FORK, &cmap);
if (imap.br_startblock != HOLESTARTBLOCK) {
- error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0);
+ seq = xfs_iomap_inode_sequence(ip, 0);
+ error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq);
if (error)
- return error;
+ goto out_unlock;
}
- return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, IOMAP_F_SHARED);
+ seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED);
+ xfs_iunlock(ip, lockmode);
+ return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, IOMAP_F_SHARED, seq);
out_unlock:
if (lockmode)
@@ -915,6 +957,7 @@ xfs_buffered_write_iomap_begin(
int allocfork = XFS_DATA_FORK;
int error = 0;
unsigned int lockmode = XFS_ILOCK_EXCL;
+ u64 seq;
if (xfs_is_shutdown(mp))
return -EIO;
@@ -1094,26 +1137,31 @@ xfs_buffered_write_iomap_begin(
* Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
* them out if the write happens to fail.
*/
+ seq = xfs_iomap_inode_sequence(ip, IOMAP_F_NEW);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
trace_xfs_iomap_alloc(ip, offset, count, allocfork, &imap);
- return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW);
+ return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW, seq);
found_imap:
+ seq = xfs_iomap_inode_sequence(ip, 0);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
- return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0);
+ return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq);
found_cow:
- xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ seq = xfs_iomap_inode_sequence(ip, 0);
if (imap.br_startoff <= offset_fsb) {
- error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0);
+ error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq);
if (error)
- return error;
+ goto out_unlock;
+ seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED);
+ xfs_iunlock(ip, XFS_ILOCK_EXCL);
return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags,
- IOMAP_F_SHARED);
+ IOMAP_F_SHARED, seq);
}
xfs_trim_extent(&cmap, offset_fsb, imap.br_startoff - offset_fsb);
- return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0);
+ xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0, seq);
out_unlock:
xfs_iunlock(ip, XFS_ILOCK_EXCL);
@@ -1193,6 +1241,7 @@ xfs_read_iomap_begin(
int nimaps = 1, error = 0;
bool shared = false;
unsigned int lockmode = XFS_ILOCK_SHARED;
+ u64 seq;
ASSERT(!(flags & (IOMAP_WRITE | IOMAP_ZERO)));
@@ -1206,13 +1255,14 @@ xfs_read_iomap_begin(
&nimaps, 0);
if (!error && (flags & IOMAP_REPORT))
error = xfs_reflink_trim_around_shared(ip, &imap, &shared);
+ seq = xfs_iomap_inode_sequence(ip, shared ? IOMAP_F_SHARED : 0);
xfs_iunlock(ip, lockmode);
if (error)
return error;
trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap);
return xfs_bmbt_to_iomap(ip, iomap, &imap, flags,
- shared ? IOMAP_F_SHARED : 0);
+ shared ? IOMAP_F_SHARED : 0, seq);
}
const struct iomap_ops xfs_read_iomap_ops = {
@@ -1237,6 +1287,7 @@ xfs_seek_iomap_begin(
struct xfs_bmbt_irec imap, cmap;
int error = 0;
unsigned lockmode;
+ u64 seq;
if (xfs_is_shutdown(mp))
return -EIO;
@@ -1271,8 +1322,9 @@ xfs_seek_iomap_begin(
if (data_fsb < cow_fsb + cmap.br_blockcount)
end_fsb = min(end_fsb, data_fsb);
xfs_trim_extent(&cmap, offset_fsb, end_fsb);
+ seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED);
error = xfs_bmbt_to_iomap(ip, iomap, &cmap, flags,
- IOMAP_F_SHARED);
+ IOMAP_F_SHARED, seq);
/*
* This is a COW extent, so we must probe the page cache
* because there could be dirty page cache being backed
@@ -1293,8 +1345,9 @@ xfs_seek_iomap_begin(
imap.br_startblock = HOLESTARTBLOCK;
imap.br_state = XFS_EXT_NORM;
done:
+ seq = xfs_iomap_inode_sequence(ip, 0);
xfs_trim_extent(&imap, offset_fsb, end_fsb);
- error = xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0);
+ error = xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq);
out_unlock:
xfs_iunlock(ip, lockmode);
return error;
@@ -1320,6 +1373,7 @@ xfs_xattr_iomap_begin(
struct xfs_bmbt_irec imap;
int nimaps = 1, error = 0;
unsigned lockmode;
+ int seq;
if (xfs_is_shutdown(mp))
return -EIO;
@@ -1336,12 +1390,14 @@ xfs_xattr_iomap_begin(
error = xfs_bmapi_read(ip, offset_fsb, end_fsb - offset_fsb, &imap,
&nimaps, XFS_BMAPI_ATTRFORK);
out_unlock:
+
+ seq = xfs_iomap_inode_sequence(ip, IOMAP_F_XATTR);
xfs_iunlock(ip, lockmode);
if (error)
return error;
ASSERT(nimaps);
- return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0);
+ return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_XATTR, seq);
}
const struct iomap_ops xfs_xattr_iomap_ops = {
diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
index 0f62ab633040..4da13440bae9 100644
--- a/fs/xfs/xfs_iomap.h
+++ b/fs/xfs/xfs_iomap.h
@@ -13,14 +13,15 @@ struct xfs_bmbt_irec;
int xfs_iomap_write_direct(struct xfs_inode *ip, xfs_fileoff_t offset_fsb,
xfs_fileoff_t count_fsb, unsigned int flags,
- struct xfs_bmbt_irec *imap);
+ struct xfs_bmbt_irec *imap, u64 *sequence);
int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
xfs_fileoff_t xfs_iomap_eof_align_last_fsb(struct xfs_inode *ip,
xfs_fileoff_t end_fsb);
+u64 xfs_iomap_inode_sequence(struct xfs_inode *ip, u16 iomap_flags);
int xfs_bmbt_to_iomap(struct xfs_inode *ip, struct iomap *iomap,
struct xfs_bmbt_irec *imap, unsigned int mapping_flags,
- u16 iomap_flags);
+ u16 iomap_flags, u64 sequence_cookie);
int xfs_zero_range(struct xfs_inode *ip, loff_t pos, loff_t len,
bool *did_zero);
diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c
index 37a24f0f7cd4..38d23f0e703a 100644
--- a/fs/xfs/xfs_pnfs.c
+++ b/fs/xfs/xfs_pnfs.c
@@ -125,6 +125,7 @@ xfs_fs_map_blocks(
int nimaps = 1;
uint lock_flags;
int error = 0;
+ u64 seq;
if (xfs_is_shutdown(mp))
return -EIO;
@@ -176,6 +177,7 @@ xfs_fs_map_blocks(
lock_flags = xfs_ilock_data_map_shared(ip);
error = xfs_bmapi_read(ip, offset_fsb, end_fsb - offset_fsb,
&imap, &nimaps, bmapi_flags);
+ seq = xfs_iomap_inode_sequence(ip, 0);
ASSERT(!nimaps || imap.br_startblock != DELAYSTARTBLOCK);
@@ -189,7 +191,7 @@ xfs_fs_map_blocks(
xfs_iunlock(ip, lock_flags);
error = xfs_iomap_write_direct(ip, offset_fsb,
- end_fsb - offset_fsb, 0, &imap);
+ end_fsb - offset_fsb, 0, &imap, &seq);
if (error)
goto out_unlock;
@@ -209,7 +211,7 @@ xfs_fs_map_blocks(
}
xfs_iunlock(ip, XFS_IOLOCK_EXCL);
- error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0, 0);
+ error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0, 0, seq);
*device_generation = mp->m_generation;
return error;
out_unlock:
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 8/9] xfs: use iomap_valid method to detect stale cached iomaps
2022-11-17 5:58 ` [PATCH 8/9] xfs: use iomap_valid method to detect stale cached iomaps Dave Chinner
@ 2022-11-17 18:59 ` Darrick J. Wong
2022-11-17 21:36 ` Dave Chinner
0 siblings, 1 reply; 18+ messages in thread
From: Darrick J. Wong @ 2022-11-17 18:59 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 04:58:09PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> Now that iomap supports a mechanism to validate cached iomaps for
> buffered write operations, hook it up to the XFS buffered write ops
> so that we can avoid data corruptions that result from stale cached
> iomaps. See:
>
> https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/
>
> or the ->iomap_valid() introduction commit for exact details of the
> corruption vector.
>
> The validity cookie we store in the iomap is based on the type of
> iomap we return. It is expected that the iomap->flags we set in
> xfs_bmbt_to_iomap() are not perturbed by the iomap core and are
> returned to us in the iomap passed via the .iomap_valid() callback.
> This ensures that the validity cookie is always checking the correct
> inode fork sequence numbers to detect potential changes that affect
> the extent cached by the iomap.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
> fs/xfs/libxfs/xfs_bmap.c | 6 ++-
> fs/xfs/xfs_aops.c | 2 +-
> fs/xfs/xfs_iomap.c | 96 +++++++++++++++++++++++++++++++---------
> fs/xfs/xfs_iomap.h | 5 ++-
> fs/xfs/xfs_pnfs.c | 6 ++-
> 5 files changed, 88 insertions(+), 27 deletions(-)
>
> diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
> index 49d0d4ea63fc..56b9b7db38bb 100644
> --- a/fs/xfs/libxfs/xfs_bmap.c
> +++ b/fs/xfs/libxfs/xfs_bmap.c
> @@ -4551,7 +4551,8 @@ xfs_bmapi_convert_delalloc(
> * the extent. Just return the real extent at this offset.
> */
> if (!isnullstartblock(bma.got.br_startblock)) {
> - xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags);
> + xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags,
> + xfs_iomap_inode_sequence(ip, flags));
> *seq = READ_ONCE(ifp->if_seq);
Question beyond the scope of this series: Does it make any sense to use
the same u64 sequence cookie for writeback validation? I think the
answer is that we could, but that would increase (unnecessary)
invalidations during writeback due to (say) writing to the cow fork
whilst someone else messes with the data fork.
> goto out_trans_cancel;
> }
> @@ -4599,7 +4600,8 @@ xfs_bmapi_convert_delalloc(
> XFS_STATS_INC(mp, xs_xstrat_quick);
>
> ASSERT(!isnullstartblock(bma.got.br_startblock));
> - xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags);
> + xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags,
> + xfs_iomap_inode_sequence(ip, flags));
> *seq = READ_ONCE(ifp->if_seq);
>
> if (whichfork == XFS_COW_FORK)
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 6aadc5815068..a22d90af40c8 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -372,7 +372,7 @@ xfs_map_blocks(
> isnullstartblock(imap.br_startblock))
> goto allocate_blocks;
>
> - xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0, 0);
> + xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0, 0, XFS_WPC(wpc)->data_seq);
If I'm following this correctly, we never use wpc->iomap's sequence
counters, so (ATM) it doesn't matter that we don't call
xfs_iomap_inode_sequence here.
> trace_xfs_map_blocks_found(ip, offset, count, whichfork, &imap);
> return 0;
> allocate_blocks:
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 09676ff6940e..a9b3a1bcc3fd 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -48,13 +48,45 @@ xfs_alert_fsblock_zero(
> return -EFSCORRUPTED;
> }
>
> +u64
> +xfs_iomap_inode_sequence(
> + struct xfs_inode *ip,
> + u16 iomap_flags)
> +{
> + u64 cookie = 0;
> +
> + if (iomap_flags & IOMAP_F_XATTR)
> + return READ_ONCE(ip->i_af.if_seq);
> + if ((iomap_flags & IOMAP_F_SHARED) && ip->i_cowfp)
I checked that IOMAP_F_SHARED is always set when we stuff the cow fork
extent into @iomap and the data fork extent into @srcmap, and hence this
is the correct flag handling logic.
> + cookie = (u64)READ_ONCE(ip->i_cowfp->if_seq) << 32;
> + return cookie | READ_ONCE(ip->i_df.if_seq);
> +}
> +
> +/*
> + * Check that the iomap passed to us is still valid for the given offset and
> + * length.
> + */
> +static bool
> +xfs_iomap_valid(
> + struct inode *inode,
> + const struct iomap *iomap)
> +{
> + return iomap->validity_cookie ==
> + xfs_iomap_inode_sequence(XFS_I(inode), iomap->flags);
> +}
> +
> +const struct iomap_page_ops xfs_iomap_page_ops = {
> + .iomap_valid = xfs_iomap_valid,
> +};
> +
> int
> xfs_bmbt_to_iomap(
> struct xfs_inode *ip,
> struct iomap *iomap,
> struct xfs_bmbt_irec *imap,
> unsigned int mapping_flags,
> - u16 iomap_flags)
> + u16 iomap_flags,
> + u64 sequence_cookie)
> {
> struct xfs_mount *mp = ip->i_mount;
> struct xfs_buftarg *target = xfs_inode_buftarg(ip);
> @@ -91,6 +123,9 @@ xfs_bmbt_to_iomap(
> if (xfs_ipincount(ip) &&
> (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
> iomap->flags |= IOMAP_F_DIRTY;
> +
> + iomap->validity_cookie = sequence_cookie;
> + iomap->page_ops = &xfs_iomap_page_ops;
> return 0;
> }
>
> @@ -195,7 +230,8 @@ xfs_iomap_write_direct(
> xfs_fileoff_t offset_fsb,
> xfs_fileoff_t count_fsb,
> unsigned int flags,
> - struct xfs_bmbt_irec *imap)
> + struct xfs_bmbt_irec *imap,
> + u64 *seq)
> {
> struct xfs_mount *mp = ip->i_mount;
> struct xfs_trans *tp;
> @@ -285,6 +321,8 @@ xfs_iomap_write_direct(
> error = xfs_alert_fsblock_zero(ip, imap);
>
> out_unlock:
> + if (seq)
> + *seq = xfs_iomap_inode_sequence(ip, 0);
None of the callers pass NULL, so I think this can go away?
That aside, I think I'm satisfied now.
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
So the next question is -- how should we regression-test the
revalidation schemes in the write and writeback paths? Do you have
something ready to go that supersedes what I built in patches 13-16 of
https://lore.kernel.org/linux-xfs/166801781760.3992140.10078383339454429922.stgit@magnolia/T/#u
Please let me know what you're thinking.
--D
> xfs_iunlock(ip, XFS_ILOCK_EXCL);
> return error;
>
> @@ -743,6 +781,7 @@ xfs_direct_write_iomap_begin(
> bool shared = false;
> u16 iomap_flags = 0;
> unsigned int lockmode = XFS_ILOCK_SHARED;
> + u64 seq;
>
> ASSERT(flags & (IOMAP_WRITE | IOMAP_ZERO));
>
> @@ -811,9 +850,10 @@ xfs_direct_write_iomap_begin(
> goto out_unlock;
> }
>
> + seq = xfs_iomap_inode_sequence(ip, iomap_flags);
> xfs_iunlock(ip, lockmode);
> trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap);
> - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags);
> + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags, seq);
>
> allocate_blocks:
> error = -EAGAIN;
> @@ -839,24 +879,26 @@ xfs_direct_write_iomap_begin(
> xfs_iunlock(ip, lockmode);
>
> error = xfs_iomap_write_direct(ip, offset_fsb, end_fsb - offset_fsb,
> - flags, &imap);
> + flags, &imap, &seq);
> if (error)
> return error;
>
> trace_xfs_iomap_alloc(ip, offset, length, XFS_DATA_FORK, &imap);
> return xfs_bmbt_to_iomap(ip, iomap, &imap, flags,
> - iomap_flags | IOMAP_F_NEW);
> + iomap_flags | IOMAP_F_NEW, seq);
>
> out_found_cow:
> - xfs_iunlock(ip, lockmode);
> length = XFS_FSB_TO_B(mp, cmap.br_startoff + cmap.br_blockcount);
> trace_xfs_iomap_found(ip, offset, length - offset, XFS_COW_FORK, &cmap);
> if (imap.br_startblock != HOLESTARTBLOCK) {
> - error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0);
> + seq = xfs_iomap_inode_sequence(ip, 0);
> + error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq);
> if (error)
> - return error;
> + goto out_unlock;
> }
> - return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, IOMAP_F_SHARED);
> + seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED);
> + xfs_iunlock(ip, lockmode);
> + return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, IOMAP_F_SHARED, seq);
>
> out_unlock:
> if (lockmode)
> @@ -915,6 +957,7 @@ xfs_buffered_write_iomap_begin(
> int allocfork = XFS_DATA_FORK;
> int error = 0;
> unsigned int lockmode = XFS_ILOCK_EXCL;
> + u64 seq;
>
> if (xfs_is_shutdown(mp))
> return -EIO;
> @@ -1094,26 +1137,31 @@ xfs_buffered_write_iomap_begin(
> * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
> * them out if the write happens to fail.
> */
> + seq = xfs_iomap_inode_sequence(ip, IOMAP_F_NEW);
> xfs_iunlock(ip, XFS_ILOCK_EXCL);
> trace_xfs_iomap_alloc(ip, offset, count, allocfork, &imap);
> - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW);
> + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW, seq);
>
> found_imap:
> + seq = xfs_iomap_inode_sequence(ip, 0);
> xfs_iunlock(ip, XFS_ILOCK_EXCL);
> - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0);
> + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq);
>
> found_cow:
> - xfs_iunlock(ip, XFS_ILOCK_EXCL);
> + seq = xfs_iomap_inode_sequence(ip, 0);
> if (imap.br_startoff <= offset_fsb) {
> - error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0);
> + error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq);
> if (error)
> - return error;
> + goto out_unlock;
> + seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED);
> + xfs_iunlock(ip, XFS_ILOCK_EXCL);
> return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags,
> - IOMAP_F_SHARED);
> + IOMAP_F_SHARED, seq);
> }
>
> xfs_trim_extent(&cmap, offset_fsb, imap.br_startoff - offset_fsb);
> - return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0);
> + xfs_iunlock(ip, XFS_ILOCK_EXCL);
> + return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0, seq);
>
> out_unlock:
> xfs_iunlock(ip, XFS_ILOCK_EXCL);
> @@ -1193,6 +1241,7 @@ xfs_read_iomap_begin(
> int nimaps = 1, error = 0;
> bool shared = false;
> unsigned int lockmode = XFS_ILOCK_SHARED;
> + u64 seq;
>
> ASSERT(!(flags & (IOMAP_WRITE | IOMAP_ZERO)));
>
> @@ -1206,13 +1255,14 @@ xfs_read_iomap_begin(
> &nimaps, 0);
> if (!error && (flags & IOMAP_REPORT))
> error = xfs_reflink_trim_around_shared(ip, &imap, &shared);
> + seq = xfs_iomap_inode_sequence(ip, shared ? IOMAP_F_SHARED : 0);
> xfs_iunlock(ip, lockmode);
>
> if (error)
> return error;
> trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap);
> return xfs_bmbt_to_iomap(ip, iomap, &imap, flags,
> - shared ? IOMAP_F_SHARED : 0);
> + shared ? IOMAP_F_SHARED : 0, seq);
> }
>
> const struct iomap_ops xfs_read_iomap_ops = {
> @@ -1237,6 +1287,7 @@ xfs_seek_iomap_begin(
> struct xfs_bmbt_irec imap, cmap;
> int error = 0;
> unsigned lockmode;
> + u64 seq;
>
> if (xfs_is_shutdown(mp))
> return -EIO;
> @@ -1271,8 +1322,9 @@ xfs_seek_iomap_begin(
> if (data_fsb < cow_fsb + cmap.br_blockcount)
> end_fsb = min(end_fsb, data_fsb);
> xfs_trim_extent(&cmap, offset_fsb, end_fsb);
> + seq = xfs_iomap_inode_sequence(ip, IOMAP_F_SHARED);
> error = xfs_bmbt_to_iomap(ip, iomap, &cmap, flags,
> - IOMAP_F_SHARED);
> + IOMAP_F_SHARED, seq);
> /*
> * This is a COW extent, so we must probe the page cache
> * because there could be dirty page cache being backed
> @@ -1293,8 +1345,9 @@ xfs_seek_iomap_begin(
> imap.br_startblock = HOLESTARTBLOCK;
> imap.br_state = XFS_EXT_NORM;
> done:
> + seq = xfs_iomap_inode_sequence(ip, 0);
> xfs_trim_extent(&imap, offset_fsb, end_fsb);
> - error = xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0);
> + error = xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq);
> out_unlock:
> xfs_iunlock(ip, lockmode);
> return error;
> @@ -1320,6 +1373,7 @@ xfs_xattr_iomap_begin(
> struct xfs_bmbt_irec imap;
> int nimaps = 1, error = 0;
> unsigned lockmode;
> + int seq;
>
> if (xfs_is_shutdown(mp))
> return -EIO;
> @@ -1336,12 +1390,14 @@ xfs_xattr_iomap_begin(
> error = xfs_bmapi_read(ip, offset_fsb, end_fsb - offset_fsb, &imap,
> &nimaps, XFS_BMAPI_ATTRFORK);
> out_unlock:
> +
> + seq = xfs_iomap_inode_sequence(ip, IOMAP_F_XATTR);
> xfs_iunlock(ip, lockmode);
>
> if (error)
> return error;
> ASSERT(nimaps);
> - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0);
> + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_XATTR, seq);
> }
>
> const struct iomap_ops xfs_xattr_iomap_ops = {
> diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
> index 0f62ab633040..4da13440bae9 100644
> --- a/fs/xfs/xfs_iomap.h
> +++ b/fs/xfs/xfs_iomap.h
> @@ -13,14 +13,15 @@ struct xfs_bmbt_irec;
>
> int xfs_iomap_write_direct(struct xfs_inode *ip, xfs_fileoff_t offset_fsb,
> xfs_fileoff_t count_fsb, unsigned int flags,
> - struct xfs_bmbt_irec *imap);
> + struct xfs_bmbt_irec *imap, u64 *sequence);
> int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
> xfs_fileoff_t xfs_iomap_eof_align_last_fsb(struct xfs_inode *ip,
> xfs_fileoff_t end_fsb);
>
> +u64 xfs_iomap_inode_sequence(struct xfs_inode *ip, u16 iomap_flags);
> int xfs_bmbt_to_iomap(struct xfs_inode *ip, struct iomap *iomap,
> struct xfs_bmbt_irec *imap, unsigned int mapping_flags,
> - u16 iomap_flags);
> + u16 iomap_flags, u64 sequence_cookie);
>
> int xfs_zero_range(struct xfs_inode *ip, loff_t pos, loff_t len,
> bool *did_zero);
> diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c
> index 37a24f0f7cd4..38d23f0e703a 100644
> --- a/fs/xfs/xfs_pnfs.c
> +++ b/fs/xfs/xfs_pnfs.c
> @@ -125,6 +125,7 @@ xfs_fs_map_blocks(
> int nimaps = 1;
> uint lock_flags;
> int error = 0;
> + u64 seq;
>
> if (xfs_is_shutdown(mp))
> return -EIO;
> @@ -176,6 +177,7 @@ xfs_fs_map_blocks(
> lock_flags = xfs_ilock_data_map_shared(ip);
> error = xfs_bmapi_read(ip, offset_fsb, end_fsb - offset_fsb,
> &imap, &nimaps, bmapi_flags);
> + seq = xfs_iomap_inode_sequence(ip, 0);
>
> ASSERT(!nimaps || imap.br_startblock != DELAYSTARTBLOCK);
>
> @@ -189,7 +191,7 @@ xfs_fs_map_blocks(
> xfs_iunlock(ip, lock_flags);
>
> error = xfs_iomap_write_direct(ip, offset_fsb,
> - end_fsb - offset_fsb, 0, &imap);
> + end_fsb - offset_fsb, 0, &imap, &seq);
> if (error)
> goto out_unlock;
>
> @@ -209,7 +211,7 @@ xfs_fs_map_blocks(
> }
> xfs_iunlock(ip, XFS_IOLOCK_EXCL);
>
> - error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0, 0);
> + error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0, 0, seq);
> *device_generation = mp->m_generation;
> return error;
> out_unlock:
> --
> 2.37.2
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 8/9] xfs: use iomap_valid method to detect stale cached iomaps
2022-11-17 18:59 ` Darrick J. Wong
@ 2022-11-17 21:36 ` Dave Chinner
0 siblings, 0 replies; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 21:36 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 10:59:11AM -0800, Darrick J. Wong wrote:
> On Thu, Nov 17, 2022 at 04:58:09PM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
[snip code, I'm on PTO for the next couple of days, so just a quick
process answer here...]
> So the next question is -- how should we regression-test the
> revalidation schemes in the write and writeback paths? Do you have
> something ready to go that supersedes what I built in patches 13-16 of
> https://lore.kernel.org/linux-xfs/166801781760.3992140.10078383339454429922.stgit@magnolia/T/#u
Short answer is no.
Longer answer is that I haven't needed to write new tests to
exercise the code added to fix the bug.
I've found that g/346 stresses the IOMAP_F_STALE path quite well
because it mixes racing unaligned sub-folio write() calls with mmap
write faults, often to the same folio. It's similar in nature to the
original reproducer in that it does concurrent ascending-offset
unaligned sub-block writes to a single file.
g/346 repeatedly found data corruptions (it's a data integrity test)
as a result of the delalloc punch code doing the wrong thing with
1kB block size, as well as with 4kB block size when the mmap page
faults instantiated multi-page folios....
g/269 and g/270 also seem to trigger IOMAP_F_STALE conditions quite
frequently: streaming writes at ENOSPC, combined with fsstress
running sync() operations in the background, mean that writeback is
racing with the streaming writes all the time. These
tests exposed bugs that caused stale delalloc blocks to be left
behind by the delalloc punch code.
fsx also tripped over a couple of corruptions when run with
buffered writes. Because fsx is single threaded, this
implies that it was writeback that was triggering the IOMAP_F_STALE
write() invalidations....
So from an "exercise the IOMAP_F_STALE write() case causing iomap
invalidation, delalloc punching and continuing to complete the rest
of the write" perspective, I think we've got a fair bunch of
existing tests that
cover both the "racing mmap dirties data in the punch range" and the
"writeback/racing mmap triggers extent changes so triggers
IOMAP_F_STALE" cases.
As for the specific data corruption reproducer, I haven't done
anything other than run the original regression test. I've been
using it as, well, a regression test. I haven't had a chance to look
at any of the other variants that have been written, because all the
actual development was done running "-g rw -g enospc" on 1kB block
size filesystems and repeatedly running g/346 and g/270 until they
passed tens of iterations in a row. I only ran the original
regression test to confirm that I hadn't broken the fix whilst
getting all the fstests to pass....
> Please let me know what you're thinking.
I'll look at the other tests next week. Until then, I can't really
comment on them.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 9/9] xfs: drop write error injection is unfixable, remove it
2022-11-17 5:58 [PATCH 0/8 v3] xfs, iomap: fix data corrupton due to stale cached iomaps Dave Chinner
` (7 preceding siblings ...)
2022-11-17 5:58 ` [PATCH 8/9] xfs: use iomap_valid method to detect stale cached iomaps Dave Chinner
@ 2022-11-17 5:58 ` Dave Chinner
2022-11-17 18:01 ` Darrick J. Wong
8 siblings, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2022-11-17 5:58 UTC (permalink / raw)
To: linux-xfs; +Cc: linux-fsdevel
From: Dave Chinner <dchinner@redhat.com>
With the changes to scan the page cache for dirty data to avoid data
corruptions from partial write cleanup racing with other page cache
operations, the drop writes error injection no longer works the same
way it used to and causes xfs/196 to fail. This is because xfs/196
writes to the file and populates the page cache before it turns on
the error injection and starts failing -overwrites-.
The result is that the original drop-writes code failed writes only
-after- overwriting the data in the cache, followed by invalidating
the cached data, then punching out the delalloc extent from under
that data.
On the surface, this looks fine. The problem is that page cache
invalidation *doesn't guarantee that it removes anything from the
page cache* and it doesn't change the dirty state of the folio. When
block size == page size and we do page aligned IO (as xfs/196 does)
everything happens to align perfectly and page cache invalidation
removes the single page folios that span the written data. Hence the
followup delalloc punch pass does not find cached data over that
range and it can punch the extent out.
IOWs, xfs/196 "works" for block size == page size with the new
code. I say "works", because it actually only works for the case
where IO is page aligned, and no data was read from disk before
writes occur. Because the moment we actually read data first, the
readahead code allocates multipage folios and suddenly the
invalidate code goes back to zeroing subfolio ranges without
changing dirty state.
Hence, with multipage folios in play, block size == page size is
functionally identical to block size < page size behaviour, and
drop-writes is manifestly broken w.r.t. this case. Invalidation of
a subfolio range doesn't result in the folio being removed from the
cache, just the range gets zeroed. Hence after we've sequentially
walked over a folio that we've dirtied (via write data) and then
invalidated, we end up with a dirty folio full of zeroed data.
And because the new code skips punching ranges that have dirty
folios covering them, we end up leaving the delalloc range intact
after failing all the writes. Hence failed writes now end up
writing zeroes to disk in the cases where invalidation zeroes folios
rather than removing them from cache.
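
To illustrate that skip-dirty behaviour in isolation, here is a
minimal, self-contained user-space sketch (all names invented, not
the XFS implementation): the punch walk only tears down delalloc
blocks in subranges that no dirty folio covers, so a folio that
invalidation left dirty-but-zeroed keeps its delalloc backing and is
later written back as zeroes.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative only: one entry per filesystem block in the failed
 * write range. "delalloc" means delayed allocation backs the block,
 * "dirty" means a dirty folio in the page cache still covers it.
 */
struct demo_block {
    bool delalloc;
    bool dirty;
};

/*
 * Clean up after a failed buffered write: punch out unused delalloc
 * blocks, but never under ranges that dirty folios still cover -
 * that data (even zeroes left behind by invalidation) keeps its
 * backing store.
 */
static void demo_punch_failed_write(struct demo_block *blocks, int nr)
{
    for (int i = 0; i < nr; i++) {
        if (!blocks[i].delalloc)
            continue;
        if (blocks[i].dirty) {
            printf("block %d: dirty folio, delalloc kept\n", i);
            continue;
        }
        blocks[i].delalloc = false;
        printf("block %d: delalloc punched\n", i);
    }
}

int main(void)
{
    struct demo_block range[4] = {
        { .delalloc = true,  .dirty = true  },  /* zeroed by invalidation */
        { .delalloc = true,  .dirty = false },
        { .delalloc = true,  .dirty = true  },  /* zeroed by invalidation */
        { .delalloc = false, .dirty = false },
    };

    demo_punch_failed_write(range, 4);
    return 0;
}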
This is a fundamental change of behaviour that is needed to avoid
the data corruption vectors that exist in the old write fail path,
and it renders the drop-writes injection non-functional and
unworkable as it stands.
As it is, I think the error injection is also now unnecessary, as
partial writes that need delalloc extent punching are going to be a lot more
common with stale iomap detection in place. Hence this patch removes
the drop-writes error injection completely. xfs/196 can remain for
testing kernels that don't have this data corruption fix, but those
that do will report:
xfs/196 3s ... [not run] XFS error injection drop_writes unknown on this kernel.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
fs/xfs/libxfs/xfs_errortag.h | 12 +++++-------
fs/xfs/xfs_error.c | 27 ++++++++++++++++++++-------
fs/xfs/xfs_iomap.c | 9 ---------
3 files changed, 25 insertions(+), 23 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_errortag.h b/fs/xfs/libxfs/xfs_errortag.h
index 5362908164b0..580ccbd5aadc 100644
--- a/fs/xfs/libxfs/xfs_errortag.h
+++ b/fs/xfs/libxfs/xfs_errortag.h
@@ -40,13 +40,12 @@
#define XFS_ERRTAG_REFCOUNT_FINISH_ONE 25
#define XFS_ERRTAG_BMAP_FINISH_ONE 26
#define XFS_ERRTAG_AG_RESV_CRITICAL 27
+
/*
- * DEBUG mode instrumentation to test and/or trigger delayed allocation
- * block killing in the event of failed writes. When enabled, all
- * buffered writes are silenty dropped and handled as if they failed.
- * All delalloc blocks in the range of the write (including pre-existing
- * delalloc blocks!) are tossed as part of the write failure error
- * handling sequence.
+ * Drop-writes support removed because write error handling cannot trash
+ * pre-existing delalloc extents in any useful way anymore. We retain the
+ * definition so that we can reject it as an invalid value in
+ * xfs_errortag_valid().
*/
#define XFS_ERRTAG_DROP_WRITES 28
#define XFS_ERRTAG_LOG_BAD_CRC 29
@@ -95,7 +94,6 @@
#define XFS_RANDOM_REFCOUNT_FINISH_ONE 1
#define XFS_RANDOM_BMAP_FINISH_ONE 1
#define XFS_RANDOM_AG_RESV_CRITICAL 4
-#define XFS_RANDOM_DROP_WRITES 1
#define XFS_RANDOM_LOG_BAD_CRC 1
#define XFS_RANDOM_LOG_ITEM_PIN 1
#define XFS_RANDOM_BUF_LRU_REF 2
diff --git a/fs/xfs/xfs_error.c b/fs/xfs/xfs_error.c
index c6b2aabd6f18..dea3c0649d2f 100644
--- a/fs/xfs/xfs_error.c
+++ b/fs/xfs/xfs_error.c
@@ -46,7 +46,7 @@ static unsigned int xfs_errortag_random_default[] = {
XFS_RANDOM_REFCOUNT_FINISH_ONE,
XFS_RANDOM_BMAP_FINISH_ONE,
XFS_RANDOM_AG_RESV_CRITICAL,
- XFS_RANDOM_DROP_WRITES,
+ 0, /* XFS_RANDOM_DROP_WRITES has been removed */
XFS_RANDOM_LOG_BAD_CRC,
XFS_RANDOM_LOG_ITEM_PIN,
XFS_RANDOM_BUF_LRU_REF,
@@ -162,7 +162,6 @@ XFS_ERRORTAG_ATTR_RW(refcount_continue_update, XFS_ERRTAG_REFCOUNT_CONTINUE_UPDA
XFS_ERRORTAG_ATTR_RW(refcount_finish_one, XFS_ERRTAG_REFCOUNT_FINISH_ONE);
XFS_ERRORTAG_ATTR_RW(bmap_finish_one, XFS_ERRTAG_BMAP_FINISH_ONE);
XFS_ERRORTAG_ATTR_RW(ag_resv_critical, XFS_ERRTAG_AG_RESV_CRITICAL);
-XFS_ERRORTAG_ATTR_RW(drop_writes, XFS_ERRTAG_DROP_WRITES);
XFS_ERRORTAG_ATTR_RW(log_bad_crc, XFS_ERRTAG_LOG_BAD_CRC);
XFS_ERRORTAG_ATTR_RW(log_item_pin, XFS_ERRTAG_LOG_ITEM_PIN);
XFS_ERRORTAG_ATTR_RW(buf_lru_ref, XFS_ERRTAG_BUF_LRU_REF);
@@ -206,7 +205,6 @@ static struct attribute *xfs_errortag_attrs[] = {
XFS_ERRORTAG_ATTR_LIST(refcount_finish_one),
XFS_ERRORTAG_ATTR_LIST(bmap_finish_one),
XFS_ERRORTAG_ATTR_LIST(ag_resv_critical),
- XFS_ERRORTAG_ATTR_LIST(drop_writes),
XFS_ERRORTAG_ATTR_LIST(log_bad_crc),
XFS_ERRORTAG_ATTR_LIST(log_item_pin),
XFS_ERRORTAG_ATTR_LIST(buf_lru_ref),
@@ -256,6 +254,19 @@ xfs_errortag_del(
kmem_free(mp->m_errortag);
}
+static bool
+xfs_errortag_valid(
+ unsigned int error_tag)
+{
+ if (error_tag >= XFS_ERRTAG_MAX)
+ return false;
+
+ /* Error out removed injection types */
+ if (error_tag == XFS_ERRTAG_DROP_WRITES)
+ return false;
+ return true;
+}
+
bool
xfs_errortag_test(
struct xfs_mount *mp,
@@ -277,7 +288,9 @@ xfs_errortag_test(
if (!mp->m_errortag)
return false;
- ASSERT(error_tag < XFS_ERRTAG_MAX);
+ if (!xfs_errortag_valid(error_tag))
+ return false;
+
randfactor = mp->m_errortag[error_tag];
if (!randfactor || prandom_u32_max(randfactor))
return false;
@@ -293,7 +306,7 @@ xfs_errortag_get(
struct xfs_mount *mp,
unsigned int error_tag)
{
- if (error_tag >= XFS_ERRTAG_MAX)
+ if (!xfs_errortag_valid(error_tag))
return -EINVAL;
return mp->m_errortag[error_tag];
@@ -305,7 +318,7 @@ xfs_errortag_set(
unsigned int error_tag,
unsigned int tag_value)
{
- if (error_tag >= XFS_ERRTAG_MAX)
+ if (!xfs_errortag_valid(error_tag))
return -EINVAL;
mp->m_errortag[error_tag] = tag_value;
@@ -319,7 +332,7 @@ xfs_errortag_add(
{
BUILD_BUG_ON(ARRAY_SIZE(xfs_errortag_random_default) != XFS_ERRTAG_MAX);
- if (error_tag >= XFS_ERRTAG_MAX)
+ if (!xfs_errortag_valid(error_tag))
return -EINVAL;
return xfs_errortag_set(mp, error_tag,
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index a9b3a1bcc3fd..d294a00ca28b 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1191,15 +1191,6 @@ xfs_buffered_write_iomap_end(
struct xfs_mount *mp = XFS_M(inode->i_sb);
int error;
- /*
- * Behave as if the write failed if drop writes is enabled. Set the NEW
- * flag to force delalloc cleanup.
- */
- if (XFS_TEST_ERROR(false, mp, XFS_ERRTAG_DROP_WRITES)) {
- iomap->flags |= IOMAP_F_NEW;
- written = 0;
- }
-
error = iomap_file_buffered_write_punch_delalloc(inode, iomap, offset,
length, written, &xfs_buffered_write_delalloc_punch);
if (error && !xfs_is_shutdown(mp)) {
--
2.37.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH 9/9] xfs: drop write error injection is unfixable, remove it
2022-11-17 5:58 ` [PATCH 9/9] xfs: drop write error injection is unfixable, remove it Dave Chinner
@ 2022-11-17 18:01 ` Darrick J. Wong
0 siblings, 0 replies; 18+ messages in thread
From: Darrick J. Wong @ 2022-11-17 18:01 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-xfs, linux-fsdevel
On Thu, Nov 17, 2022 at 04:58:10PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> With the changes to scan the page cache for dirty data to avoid data
> corruptions from partial write cleanup racing with other page cache
> operations, the drop writes error injection no longer works the same
> way it used to and causes xfs/196 to fail. This is because xfs/196
> writes to the file and populates the page cache before it turns on
> the error injection and starts failing -overwrites-.
>
> The result is that the original drop-writes code failed writes only
> -after- overwriting the data in the cache, followed by invalidating
> the cached data, then punching out the delalloc extent from under
> that data.
>
> On the surface, this looks fine. The problem is that page cache
> invalidation *doesn't guarantee that it removes anything from the
> page cache* and it doesn't change the dirty state of the folio. When
> block size == page size and we do page aligned IO (as xfs/196 does)
> everything happens to align perfectly and page cache invalidation
> removes the single page folios that span the written data. Hence the
> followup delalloc punch pass does not find cached data over that
> range and it can punch the extent out.
>
> IOWs, xfs/196 "works" for block size == page size with the new
> code. I say "works", because it actually only works for the case
> where IO is page aligned, and no data was read from disk before
> writes occur. Because the moment we actually read data first, the
> readahead code allocates multipage folios and suddenly the
> invalidate code goes back to zeroing subfolio ranges without
> changing dirty state.
>
> Hence, with multipage folios in play, block size == page size is
> functionally identical to block size < page size behaviour, and
> drop-writes is manifestly broken w.r.t. this case. Invalidation of
> a subfolio range doesn't result in the folio being removed from the
> cache, just the range gets zeroed. Hence after we've sequentially
> walked over a folio that we've dirtied (via write data) and then
> invalidated, we end up with a dirty folio full of zeroed data.
>
> And because the new code skips punching ranges that have dirty
> folios covering them, we end up leaving the delalloc range intact
> after failing all the writes. Hence failed writes now end up
> writing zeroes to disk in the cases where invalidation zeroes folios
> rather than removing them from cache.
>
> This is a fundamental change of behaviour that is needed to avoid
> the data corruption vectors that exist in the old write fail path,
> and it renders the drop-writes injection non-functional and
> unworkable as it stands.
>
> As it is, I think the error injection is also now unnecessary, as
> partial writes that need delalloc extent punching are going to be a lot more
> common with stale iomap detection in place. Hence this patch removes
> the drop-writes error injection completely. xfs/196 can remain for
> testing kernels that don't have this data corruption fix, but those
> that do will report:
>
> xfs/196 3s ... [not run] XFS error injection drop_writes unknown on this kernel.
Assuming you're planning to scuttle xfs/196 as well,
(I appreciate the cleanups in xfs_error.c too.)
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
^ permalink raw reply [flat|nested] 18+ messages in thread