* [PATCH 0/4] iomap: trivial fixes for ext4 conversion
@ 2026-05-14 6:29 Zhang Yi
2026-05-14 6:29 ` [PATCH 1/4] iomap: correct the range of a partial dirty clear Zhang Yi
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Zhang Yi @ 2026-05-14 6:29 UTC (permalink / raw)
To: linux-fsdevel, linux-xfs
Cc: linux-ext4, brauner, djwong, hch, yi.zhang, yi.zhang, yizhang089,
yangerkun, yukuai
From: Zhang Yi <yi.zhang@huawei.com>
This patch series contains a few trivial iomap-related fixes in
preparation for converting ext4 buffered I/O to use iomap.
The first three patches are taken from my ext4 conversion series [1], as
suggested by Christoph. The last patch fixes a bug originally reported
by Sashiko during review of my series; although unrelated to the ext4
conversion, it is worth fixing on its own. Please see the following
patches for details.
Thanks,
Yi.
[1] https://lore.kernel.org/linux-ext4/20260511072344.191271-1-yi.zhang@huaweicloud.com/
Zhang Yi (4):
iomap: correct the range of a partial dirty clear
iomap: support invalidating partial folios
iomap: fix incorrect did_zero setting in iomap_zero_iter()
iomap: fix out-of-bounds bitmap_set() with zero-length range
fs/iomap/buffered-io.c | 45 +++++++++++++++++++++++++++++-------------
1 file changed, 31 insertions(+), 14 deletions(-)
--
2.52.0
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH 1/4] iomap: correct the range of a partial dirty clear
2026-05-14 6:29 [PATCH 0/4] iomap: trivial fixes for ext4 conversion Zhang Yi
@ 2026-05-14 6:29 ` Zhang Yi
2026-05-14 18:03 ` Darrick J. Wong
2026-05-14 6:29 ` [PATCH 2/4] iomap: support invalidating partial folios Zhang Yi
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Zhang Yi @ 2026-05-14 6:29 UTC (permalink / raw)
To: linux-fsdevel, linux-xfs
Cc: linux-ext4, brauner, djwong, hch, yi.zhang, yi.zhang, yizhang089,
yangerkun, yukuai
From: Zhang Yi <yi.zhang@huawei.com>
The block range calculation in ifs_clear_range_dirty() is incorrect when
partially clearing a range in a folio. We cannot clear the dirty bit of
the first block or the last block if the start or end offset is not
blocksize-aligned. This has not yet caused any issues since we always
clear a whole folio in iomap_writeback_folio().
Fix this by rounding the first block up to blocksize alignment and
calculating the last block by rounding down (truncation). Correct the
nr_blks calculation accordingly.
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
This is modified from:
https://lore.kernel.org/linux-fsdevel/20240812121159.3775074-2-yi.zhang@huaweicloud.com/
Changes:
- Use round_up() instead of DIV_ROUND_UP() to prevent wasted integer
division.
fs/iomap/buffered-io.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index d7b648421a70..64351a448a8b 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -176,13 +176,17 @@ static void ifs_clear_range_dirty(struct folio *folio,
{
struct inode *inode = folio->mapping->host;
unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
- unsigned int first_blk = (off >> inode->i_blkbits);
- unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
- unsigned int nr_blks = last_blk - first_blk + 1;
+ unsigned int first_blk = round_up(off, i_blocksize(inode)) >>
+ inode->i_blkbits;
+ unsigned int last_blk = (off + len) >> inode->i_blkbits;
unsigned long flags;
+ if (first_blk >= last_blk)
+ return;
+
spin_lock_irqsave(&ifs->state_lock, flags);
- bitmap_clear(ifs->state, first_blk + blks_per_folio, nr_blks);
+ bitmap_clear(ifs->state, first_blk + blks_per_folio,
+ last_blk - first_blk);
spin_unlock_irqrestore(&ifs->state_lock, flags);
}
--
2.52.0
* [PATCH 2/4] iomap: support invalidating partial folios
2026-05-14 6:29 [PATCH 0/4] iomap: trivial fixes for ext4 conversion Zhang Yi
2026-05-14 6:29 ` [PATCH 1/4] iomap: correct the range of a partial dirty clear Zhang Yi
@ 2026-05-14 6:29 ` Zhang Yi
2026-05-14 6:29 ` [PATCH 3/4] iomap: fix incorrect did_zero setting in iomap_zero_iter() Zhang Yi
2026-05-14 6:29 ` [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range Zhang Yi
3 siblings, 0 replies; 8+ messages in thread
From: Zhang Yi @ 2026-05-14 6:29 UTC (permalink / raw)
To: linux-fsdevel, linux-xfs
Cc: linux-ext4, brauner, djwong, hch, yi.zhang, yi.zhang, yizhang089,
yangerkun, yukuai
From: Zhang Yi <yi.zhang@huawei.com>
Currently, iomap_invalidate_folio() can only invalidate an entire folio. If
we truncate a partial folio on a filesystem where the block size is
smaller than the folio size, it will leave behind dirty bits for the
truncated or punched blocks. During the write-back process, it will
attempt to map the invalid hole range. Fortunately, this has not caused
any real problems so far because the ->writeback_range() function
corrects the length.
However, the implementation of FALLOC_FL_ZERO_RANGE in ext4 depends on
support for invalidating partial folios. When ext4 partially zeroes out
a dirty unwritten folio, it does not flush the folio first as XFS does.
Therefore, if the dirty bits of the corresponding area cannot be
cleared, the zeroed area after writeback remains in the written state
rather than reverting to the unwritten state. Fix this by supporting
invalidation of partial folios.
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
This is taken from:
https://lore.kernel.org/linux-fsdevel/20240812121159.3775074-3-yi.zhang@huaweicloud.com/
No code changes; only the commit message is updated to explain why ext4
needs this.
fs/iomap/buffered-io.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 64351a448a8b..876c2f507f58 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -761,6 +761,8 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
WARN_ON_ONCE(folio_test_writeback(folio));
folio_cancel_dirty(folio);
ifs_free(folio);
+ } else {
+ iomap_clear_range_dirty(folio, offset, len);
}
}
EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
--
2.52.0
* [PATCH 3/4] iomap: fix incorrect did_zero setting in iomap_zero_iter()
2026-05-14 6:29 [PATCH 0/4] iomap: trivial fixes for ext4 conversion Zhang Yi
2026-05-14 6:29 ` [PATCH 1/4] iomap: correct the range of a partial dirty clear Zhang Yi
2026-05-14 6:29 ` [PATCH 2/4] iomap: support invalidating partial folios Zhang Yi
@ 2026-05-14 6:29 ` Zhang Yi
2026-05-14 6:29 ` [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range Zhang Yi
3 siblings, 0 replies; 8+ messages in thread
From: Zhang Yi @ 2026-05-14 6:29 UTC (permalink / raw)
To: linux-fsdevel, linux-xfs
Cc: linux-ext4, brauner, djwong, hch, yi.zhang, yi.zhang, yizhang089,
yangerkun, yukuai
From: Zhang Yi <yi.zhang@huawei.com>
The did_zero output parameter was unconditionally set after the loop,
which is incorrect. It should only be set when the zeroing operation
actually completes, not when IOMAP_F_STALE is set or when
IOMAP_F_FOLIO_BATCH is set but !folio causes the loop to break early,
or when iomap_iter_advance() returns an error.
This causes did_zero to be incorrectly set when zeroing a clean
unwritten extent because the loop exits early without actually zeroing
any data.
Fix it by using a local variable to track whether any folio was actually
zeroed, and only set did_zero after the loop if zeroing happened.
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
---
This is taken from:
https://lore.kernel.org/linux-fsdevel/20260310082250.3535486-1-yi.zhang@huaweicloud.com/
No changes.
fs/iomap/buffered-io.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 876c2f507f58..27ab33edbdee 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1542,6 +1542,7 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero,
const struct iomap_write_ops *write_ops)
{
u64 bytes = iomap_length(iter);
+ bool zeroed = false;
int status;
do {
@@ -1560,6 +1561,8 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero,
/* a NULL folio means we're done with a folio batch */
if (!folio) {
status = iomap_iter_advance_full(iter);
+ if (status)
+ return status;
break;
}
@@ -1570,6 +1573,7 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero,
bytes);
folio_zero_range(folio, offset, bytes);
+ zeroed = true;
folio_mark_accessed(folio);
ret = iomap_write_end(iter, bytes, bytes, folio);
@@ -1579,10 +1583,10 @@ static int iomap_zero_iter(struct iomap_iter *iter, bool *did_zero,
status = iomap_iter_advance(iter, bytes);
if (status)
- break;
+ return status;
} while ((bytes = iomap_length(iter)) > 0);
- if (did_zero)
+ if (did_zero && zeroed)
*did_zero = true;
return status;
}
--
2.52.0
* [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range
2026-05-14 6:29 [PATCH 0/4] iomap: trivial fixes for ext4 conversion Zhang Yi
` (2 preceding siblings ...)
2026-05-14 6:29 ` [PATCH 3/4] iomap: fix incorrect did_zero setting in iomap_zero_iter() Zhang Yi
@ 2026-05-14 6:29 ` Zhang Yi
2026-05-14 15:08 ` Joanne Koong
2026-05-14 18:10 ` Darrick J. Wong
3 siblings, 2 replies; 8+ messages in thread
From: Zhang Yi @ 2026-05-14 6:29 UTC (permalink / raw)
To: linux-fsdevel, linux-xfs
Cc: linux-ext4, brauner, djwong, hch, yi.zhang, yi.zhang, yizhang089,
yangerkun, yukuai
From: Zhang Yi <yi.zhang@huawei.com>
ifs_set_range_dirty() and ifs_set_range_uptodate() compute last_blk
as (off + len - 1) >> i_blkbits. When off is 0 and len is 0, the
unsigned subtraction underflows to SIZE_MAX, producing a huge
last_blk and nr_blks value that causes bitmap_set() to write far
beyond the ifs->state allocation.
Regarding ifs_set_range_uptodate(), it is temporarily safe because len
cannot be passed in as 0. However, for ifs_set_range_dirty() this is
reachable from __iomap_write_end(): when copy_folio_from_iter_atomic()
returns 0 (e.g. user buffer fault) and the folio is already uptodate,
the guard at the top of __iomap_write_end() does not trigger because
!folio_test_uptodate() is false, and iomap_set_range_dirty() is called
with copied == 0.
Add a !len guard to both functions before the computation, so that a
zero-length range is a no-op.
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
fs/iomap/buffered-io.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 27ab33edbdee..6fe5f7e998fd 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -67,11 +67,14 @@ static bool ifs_set_range_uptodate(struct folio *folio,
struct iomap_folio_state *ifs, size_t off, size_t len)
{
struct inode *inode = folio->mapping->host;
- unsigned int first_blk = off >> inode->i_blkbits;
- unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
- unsigned int nr_blks = last_blk - first_blk + 1;
+ unsigned int first_blk, last_blk;
- bitmap_set(ifs->state, first_blk, nr_blks);
+ if (!len)
+ return true;
+
+ first_blk = off >> inode->i_blkbits;
+ last_blk = (off + len - 1) >> inode->i_blkbits;
+ bitmap_set(ifs->state, first_blk, last_blk - first_blk + 1);
return ifs_is_fully_uptodate(folio, ifs);
}
@@ -203,13 +206,17 @@ static void ifs_set_range_dirty(struct folio *folio,
{
struct inode *inode = folio->mapping->host;
unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
- unsigned int first_blk = (off >> inode->i_blkbits);
- unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
- unsigned int nr_blks = last_blk - first_blk + 1;
+ unsigned int first_blk, last_blk;
unsigned long flags;
+ if (!len)
+ return;
+
+ first_blk = off >> inode->i_blkbits;
+ last_blk = (off + len - 1) >> inode->i_blkbits;
spin_lock_irqsave(&ifs->state_lock, flags);
- bitmap_set(ifs->state, first_blk + blks_per_folio, nr_blks);
+ bitmap_set(ifs->state, first_blk + blks_per_folio,
+ last_blk - first_blk + 1);
spin_unlock_irqrestore(&ifs->state_lock, flags);
}
--
2.52.0
* Re: [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range
2026-05-14 6:29 ` [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range Zhang Yi
@ 2026-05-14 15:08 ` Joanne Koong
2026-05-14 18:10 ` Darrick J. Wong
1 sibling, 0 replies; 8+ messages in thread
From: Joanne Koong @ 2026-05-14 15:08 UTC (permalink / raw)
To: Zhang Yi
Cc: linux-fsdevel, linux-xfs, linux-ext4, brauner, djwong, hch,
yi.zhang, yizhang089, yangerkun, yukuai
On Wed, May 13, 2026 at 11:35 PM Zhang Yi <yi.zhang@huaweicloud.com> wrote:
>
> From: Zhang Yi <yi.zhang@huawei.com>
>
> ifs_set_range_dirty() and ifs_set_range_uptodate() compute last_blk
> as (off + len - 1) >> i_blkbits. When off is 0 and len is 0, the
> unsigned subtraction underflows to SIZE_MAX, producing a huge
> last_blk and nr_blks value that causes bitmap_set() to write far
> beyond the ifs->state allocation.
>
> Regarding ifs_set_range_uptodate(), it is temporarily safe because len
> cannot be passed in as 0. However, for ifs_set_range_dirty() this is
> reachable from __iomap_write_end(): when copy_folio_from_iter_atomic()
> returns 0 (e.g. user buffer fault) and the folio is already uptodate,
> the guard at the top of __iomap_write_end() does not trigger because
> !folio_test_uptodate() is false, and iomap_set_range_dirty() is called
> with copied == 0.
Is the Fixes: 4ce02c679722 ("iomap: Add per-block dirty state
tracking to improve performance") tag needed for this?
>
> Add a !len guard to both functions before the computation, so that a
> zero-length range is a no-op.
>
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
> ---
> fs/iomap/buffered-io.c | 23 +++++++++++++++--------
> 1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 27ab33edbdee..6fe5f7e998fd 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -67,11 +67,14 @@ static bool ifs_set_range_uptodate(struct folio *folio,
> struct iomap_folio_state *ifs, size_t off, size_t len)
> {
> struct inode *inode = folio->mapping->host;
> - unsigned int first_blk = off >> inode->i_blkbits;
> - unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
> - unsigned int nr_blks = last_blk - first_blk + 1;
> + unsigned int first_blk, last_blk;
>
> - bitmap_set(ifs->state, first_blk, nr_blks);
> + if (!len)
> + return true;
I think both callers of ifs_set_range_uptodate() use the return value
to decide whether to mark the folio uptodate or not - does this still
need to return ifs_is_fully_uptodate(folio, ifs) instead of always
true?
Thanks,
Joanne
> +
> + first_blk = off >> inode->i_blkbits;
> + last_blk = (off + len - 1) >> inode->i_blkbits;
> + bitmap_set(ifs->state, first_blk, last_blk - first_blk + 1);
> return ifs_is_fully_uptodate(folio, ifs);
> }
* Re: [PATCH 1/4] iomap: correct the range of a partial dirty clear
2026-05-14 6:29 ` [PATCH 1/4] iomap: correct the range of a partial dirty clear Zhang Yi
@ 2026-05-14 18:03 ` Darrick J. Wong
0 siblings, 0 replies; 8+ messages in thread
From: Darrick J. Wong @ 2026-05-14 18:03 UTC (permalink / raw)
To: Zhang Yi
Cc: linux-fsdevel, linux-xfs, linux-ext4, brauner, hch, yi.zhang,
yizhang089, yangerkun, yukuai
On Thu, May 14, 2026 at 02:29:52PM +0800, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@huawei.com>
>
> The block range calculation in ifs_clear_range_dirty() is incorrect when
> partially clearing a range in a folio. We cannot clear the dirty bit of
> the first block or the last block if the start or end offset is not
> blocksize-aligned. This has not yet caused any issues since we always
> clear a whole folio in iomap_writeback_folio().
>
> Fix this by rounding the first block up to blocksize alignment and
> calculating the last block by rounding down (truncation). Correct the
> nr_blks calculation accordingly.
>
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Cc: <stable@vger.kernel.org> # v6.6
Fixes: 4ce02c67972211 ("iomap: Add per-block dirty state tracking to improve performance")
> ---
> This is modified from:
> https://lore.kernel.org/linux-fsdevel/20240812121159.3775074-2-yi.zhang@huaweicloud.com/
> Changes:
> - Use round_up() instead of DIV_ROUND_UP() to prevent wasted integer
> division.
>
> fs/iomap/buffered-io.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index d7b648421a70..64351a448a8b 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -176,13 +176,17 @@ static void ifs_clear_range_dirty(struct folio *folio,
> {
> struct inode *inode = folio->mapping->host;
> unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
> - unsigned int first_blk = (off >> inode->i_blkbits);
> - unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
> - unsigned int nr_blks = last_blk - first_blk + 1;
> + unsigned int first_blk = round_up(off, i_blocksize(inode)) >>
> + inode->i_blkbits;
Ok, so now we round off up to the next fsblock to compute first_blk...
> + unsigned int last_blk = (off + len) >> inode->i_blkbits;
...and last_blk (which is really the next block number after the range
that we're undirtying) is now rounded down. Presumably off/len have to
be aligned to fsblock granularity so we'll never have to deal with
unaligned situations like (off=324,len=1), right?
> unsigned long flags;
>
> + if (first_blk >= last_blk)
Do we need this check? When would the test actually be true?
--D
> + return;
> +
> spin_lock_irqsave(&ifs->state_lock, flags);
> - bitmap_clear(ifs->state, first_blk + blks_per_folio, nr_blks);
> + bitmap_clear(ifs->state, first_blk + blks_per_folio,
> + last_blk - first_blk);
> spin_unlock_irqrestore(&ifs->state_lock, flags);
> }
>
> --
> 2.52.0
>
>
* Re: [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range
2026-05-14 6:29 ` [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range Zhang Yi
2026-05-14 15:08 ` Joanne Koong
@ 2026-05-14 18:10 ` Darrick J. Wong
1 sibling, 0 replies; 8+ messages in thread
From: Darrick J. Wong @ 2026-05-14 18:10 UTC (permalink / raw)
To: Zhang Yi
Cc: linux-fsdevel, linux-xfs, linux-ext4, brauner, hch, yi.zhang,
yizhang089, yangerkun, yukuai
On Thu, May 14, 2026 at 02:29:55PM +0800, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@huawei.com>
>
> ifs_set_range_dirty() and ifs_set_range_uptodate() compute last_blk
> as (off + len - 1) >> i_blkbits. When off is 0 and len is 0, the
> unsigned subtraction underflows to SIZE_MAX, producing a huge
> last_blk and nr_blks value that causes bitmap_set() to write far
> beyond the ifs->state allocation.
>
> Regarding ifs_set_range_uptodate(), it is temporarily safe because len
> cannot be passed in as 0. However, for ifs_set_range_dirty() this is
> reachable from __iomap_write_end(): when copy_folio_from_iter_atomic()
> returns 0 (e.g. user buffer fault) and the folio is already uptodate,
> the guard at the top of __iomap_write_end() does not trigger because
> !folio_test_uptodate() is false, and iomap_set_range_dirty() is called
> with copied == 0.
>
> Add a !len guard to both functions before the computation, so that a
> zero-length range is a no-op.
>
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
> ---
> fs/iomap/buffered-io.c | 23 +++++++++++++++--------
> 1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 27ab33edbdee..6fe5f7e998fd 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -67,11 +67,14 @@ static bool ifs_set_range_uptodate(struct folio *folio,
> struct iomap_folio_state *ifs, size_t off, size_t len)
> {
> struct inode *inode = folio->mapping->host;
> - unsigned int first_blk = off >> inode->i_blkbits;
> - unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
> - unsigned int nr_blks = last_blk - first_blk + 1;
> + unsigned int first_blk, last_blk;
>
> - bitmap_set(ifs->state, first_blk, nr_blks);
> + if (!len)
> + return true;
> +
> + first_blk = off >> inode->i_blkbits;
> + last_blk = (off + len - 1) >> inode->i_blkbits;
> + bitmap_set(ifs->state, first_blk, last_blk - first_blk + 1);
> return ifs_is_fully_uptodate(folio, ifs);
> }
>
> @@ -203,13 +206,17 @@ static void ifs_set_range_dirty(struct folio *folio,
> {
> struct inode *inode = folio->mapping->host;
> unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
> - unsigned int first_blk = (off >> inode->i_blkbits);
> - unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
> - unsigned int nr_blks = last_blk - first_blk + 1;
> + unsigned int first_blk, last_blk;
> unsigned long flags;
>
> + if (!len)
> + return;
> +
> + first_blk = off >> inode->i_blkbits;
> + last_blk = (off + len - 1) >> inode->i_blkbits;
> spin_lock_irqsave(&ifs->state_lock, flags);
> - bitmap_set(ifs->state, first_blk + blks_per_folio, nr_blks);
> + bitmap_set(ifs->state, first_blk + blks_per_folio,
> + last_blk - first_blk + 1);
I'm curious about the inconsistency in the computations between
ifs_clear_range_dirty and ifs_set_range_dirty now. In the function that
clears dirty bits, off/len are rounded inwards:
unsigned int first_blk = round_up(off, i_blocksize(inode)) >>
inode->i_blkbits;
unsigned int last_blk = (off + len) >> inode->i_blkbits;
unsigned long flags;
if (first_blk >= last_blk)
return;
spin_lock_irqsave(&ifs->state_lock, flags);
bitmap_clear(ifs->state, first_blk + blks_per_folio,
last_blk - first_blk);
but here we're still rounding outwards:
if (!len)
return;
first_blk = off >> inode->i_blkbits;
last_blk = (off + len - 1) >> inode->i_blkbits;
spin_lock_irqsave(&ifs->state_lock, flags);
bitmap_set(ifs->state, first_blk + blks_per_folio,
last_blk - first_blk + 1);
That doesn't quite sound right to me without an explanation in the code,
which currently lacks one. I *think* the reason for the discrepancy is
that if we want to dirty part of an fsblock, we need to mark the whole
block dirty in the ifs so that all the blocks get written out; but when
we're clearing dirty bits, we want to leave an fsblock dirty if we only
wrote back part of that fsblock. Does that sound right?
--D
end of thread [~2026-05-14 18:10 UTC | newest]
Thread overview: 8+ messages
2026-05-14 6:29 [PATCH 0/4] iomap: trivial fixes for ext4 conversion Zhang Yi
2026-05-14 6:29 ` [PATCH 1/4] iomap: correct the range of a partial dirty clear Zhang Yi
2026-05-14 18:03 ` Darrick J. Wong
2026-05-14 6:29 ` [PATCH 2/4] iomap: support invalidating partial folios Zhang Yi
2026-05-14 6:29 ` [PATCH 3/4] iomap: fix incorrect did_zero setting in iomap_zero_iter() Zhang Yi
2026-05-14 6:29 ` [PATCH 4/4] iomap: fix out-of-bounds bitmap_set() with zero-length range Zhang Yi
2026-05-14 15:08 ` Joanne Koong
2026-05-14 18:10 ` Darrick J. Wong