From: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com>
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org,
"Darrick J . Wong" <djwong@kernel.org>,
Andreas Gruenbacher <agruenba@redhat.com>,
Matthew Wilcox <willy@infradead.org>,
Christoph Hellwig <hch@infradead.org>,
Brian Foster <bfoster@redhat.com>,
Ojaswin Mujoo <ojaswin@linux.ibm.com>,
Disha Goel <disgoel@linux.ibm.com>,
"Ritesh Harjani (IBM)" <ritesh.list@gmail.com>,
Christoph Hellwig <hch@lst.de>
Subject: [PATCHv10 6/8] iomap: Refactor iomap_write_delalloc_punch() function out
Date: Mon, 19 Jun 2023 07:58:49 +0530
Message-ID: <1b7e89f65fbee7cc0b1909f136d40552e68d9829.1687140389.git.ritesh.list@gmail.com>
In-Reply-To: <cover.1687140389.git.ritesh.list@gmail.com>
This patch factors out the iomap_write_delalloc_punch() function, which is
responsible for the actual punch-out operation.
The reason for doing this is to avoid deep indentation when a later patch
(which adds per-block dirty status handling to iomap) brings punch-out of
individual non-dirty blocks within a dirty folio, to avoid leaking delalloc
blocks.
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
fs/iomap/buffered-io.c | 53 ++++++++++++++++++++++++++----------------
1 file changed, 33 insertions(+), 20 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index e03ffdc259c4..2d79061022d8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -882,6 +882,32 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
}
EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
+static int iomap_write_delalloc_punch(struct inode *inode, struct folio *folio,
+ loff_t *punch_start_byte, loff_t start_byte, loff_t end_byte,
+ iomap_punch_t punch)
+{
+ int ret = 0;
+
+ if (!folio_test_dirty(folio))
+ return ret;
+
+ /* if dirty, punch up to offset */
+ if (start_byte > *punch_start_byte) {
+ ret = punch(inode, *punch_start_byte,
+ start_byte - *punch_start_byte);
+ if (ret)
+ return ret;
+ }
+ /*
+ * Make sure the next punch start is correctly bound to
+ * the end of this data range, not the end of the folio.
+ */
+ *punch_start_byte = min_t(loff_t, end_byte,
+ folio_pos(folio) + folio_size(folio));
+
+ return ret;
+}
+
/*
* Scan the data range passed to us for dirty page cache folios. If we find a
* dirty folio, punch out the preceeding range and update the offset from which
@@ -905,6 +931,7 @@ static int iomap_write_delalloc_scan(struct inode *inode,
{
while (start_byte < end_byte) {
struct folio *folio;
+ int ret;
/* grab locked page */
folio = filemap_lock_folio(inode->i_mapping,
@@ -915,26 +942,12 @@ static int iomap_write_delalloc_scan(struct inode *inode,
continue;
}
- /* if dirty, punch up to offset */
- if (folio_test_dirty(folio)) {
- if (start_byte > *punch_start_byte) {
- int error;
-
- error = punch(inode, *punch_start_byte,
- start_byte - *punch_start_byte);
- if (error) {
- folio_unlock(folio);
- folio_put(folio);
- return error;
- }
- }
-
- /*
- * Make sure the next punch start is correctly bound to
- * the end of this data range, not the end of the folio.
- */
- *punch_start_byte = min_t(loff_t, end_byte,
- folio_pos(folio) + folio_size(folio));
+ ret = iomap_write_delalloc_punch(inode, folio, punch_start_byte,
+ start_byte, end_byte, punch);
+ if (ret) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return ret;
}
/* move offset to start of next folio in range */
--
2.40.1