From: alexjlzheng@gmail.com
To: brauner@kernel.org, djwong@kernel.org
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org, yi.zhang@huawei.com,
Jinliang Zheng <alexjlzheng@tencent.com>
Subject: [PATCH v3 4/4] iomap: don't abandon the whole copy when we have iomap_folio_state
Date: Tue, 12 Aug 2025 17:15:38 +0800
Message-ID: <20250812091538.2004295-5-alexjlzheng@tencent.com>
In-Reply-To: <20250812091538.2004295-1-alexjlzheng@tencent.com>
From: Jinliang Zheng <alexjlzheng@tencent.com>
With iomap_folio_state, uptodate state is tracked at the block level,
and a read_folio read can correctly handle partially uptodate folios.
Therefore, when a short copy occurs, accept the block-aligned portion
of the partial write instead of rejecting the entire write.
For example, suppose a folio is 2MB, the block size is 4kB, and 2MB-3kB
bytes are copied.
Without this patchset, we would have to recopy from the beginning of
the folio in the next iteration, which means 2MB-3kB bytes end up being
copied twice:
|<-------------------- 2MB -------------------->|
+-------+-------+-------+-------+-------+-------+
| block | ... | block | block | ... | block | folio
+-------+-------+-------+-------+-------+-------+
|<-4kB->|
|<--------------- copied 2MB-3kB --------->| first time copied
|<-------- 1MB -------->| next time we need copy (chunk /= 2)
|<-------- 1MB -------->| next next time we need copy.
|<------ 2MB-3kB bytes duplicate copy ---->|
With this patchset, we can accept 2MB-4kB bytes, which is block-aligned.
This means only the remaining 4kB need to be processed in the next
iteration, so only 1kB is copied twice:
|<-------------------- 2MB -------------------->|
+-------+-------+-------+-------+-------+-------+
| block | ... | block | block | ... | block | folio
+-------+-------+-------+-------+-------+-------+
|<-4kB->|
|<--------------- copied 2MB-3kB --------->| first time copied
|<-4kB->| next time we need copy
|<>|
only 1kB bytes duplicate copy
Although partial writes are inherently rare and contribute little to
typical performance benchmarks, this optimization is still worthwhile
at the scale of large data centers.
Signed-off-by: Jinliang Zheng <alexjlzheng@tencent.com>
---
fs/iomap/buffered-io.c | 32 +++++++++++++++++++++++++++-----
1 file changed, 27 insertions(+), 5 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 7b9193f8243a..743e369b64d4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -873,6 +873,25 @@ static int iomap_write_begin(struct iomap_iter *iter,
return status;
}
+static int iomap_trim_tail_partial(struct inode *inode, loff_t pos,
+ size_t copied, struct folio *folio)
+{
+ struct iomap_folio_state *ifs = folio->private;
+ unsigned block_size, last_blk, last_blk_bytes;
+
+ if (!ifs || !copied)
+ return 0;
+
+ block_size = 1 << inode->i_blkbits;
+ last_blk = offset_in_folio(folio, pos + copied - 1) >> inode->i_blkbits;
+ last_blk_bytes = (pos + copied) & (block_size - 1);
+
+ if (!ifs_block_is_uptodate(ifs, last_blk))
+ copied -= min(copied, last_blk_bytes);
+
+ return copied;
+}
+
static int __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
size_t copied, struct folio *folio)
{
@@ -886,12 +905,15 @@ static int __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
* read_folio might come in and destroy our partial write.
*
* Do the simplest thing and just treat any short write to a
- * non-uptodate page as a zero-length write, and force the caller to
- * redo the whole thing.
+ * non-uptodate block as a zero-length write, and force the caller to
+ * redo the write starting from that block.
*/
- if (unlikely(copied < len && !folio_test_uptodate(folio)))
- return 0;
- iomap_set_range_uptodate(folio, offset_in_folio(folio, pos), len);
+ if (unlikely(copied < len && !folio_test_uptodate(folio))) {
+ copied = iomap_trim_tail_partial(inode, pos, copied, folio);
+ if (!copied)
+ return 0;
+ }
+ iomap_set_range_uptodate(folio, offset_in_folio(folio, pos), copied);
iomap_set_range_dirty(folio, offset_in_folio(folio, pos), copied);
filemap_dirty_folio(inode->i_mapping, folio);
return copied;
--
2.49.0
Thread overview: 9+ messages
2025-08-12 9:15 [PATCH v3 0/4] allow partial folio write with iomap_folio_state alexjlzheng
2025-08-12 9:15 ` [PATCH v3 1/4] iomap: make sure iomap_adjust_read_range() are aligned with block_size alexjlzheng
2025-08-12 9:15 ` [PATCH v3 2/4] iomap: move iter revert case out of the unwritten branch alexjlzheng
2025-08-12 9:15 ` [PATCH v3 3/4] iomap: make iomap_write_end() return the number of written length again alexjlzheng
2025-08-12 9:15 ` alexjlzheng [this message]
2025-08-25 6:41 ` [PATCH v3 0/4] allow partial folio write with iomap_folio_state Jinliang Zheng
2025-08-25 9:34 ` Christoph Hellwig
2025-08-25 11:39 ` Jinliang Zheng
2025-08-26 13:20 ` Christoph Hellwig