Linux Btrfs filesystem development
* [PATCH 0/2] btrfs: simple cleanup around
@ 2026-05-03  9:47 Qu Wenruo
  2026-05-03  9:47 ` [PATCH 1/2] btrfs: unexport and move extent_invalidate_folio() Qu Wenruo
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Qu Wenruo @ 2026-05-03  9:47 UTC (permalink / raw)
  To: linux-btrfs

The first patch moves and unexports the function.

The second one open-codes the function into its only caller and removes
the need for the @offset/@length parameters.

Qu Wenruo (2):
  btrfs: unexport and move extent_invalidate_folio()
  btrfs: simplify the btree folio wait during invalidation

 fs/btrfs/disk-io.c   | 30 +++++++++++++++++++++++++++---
 fs/btrfs/extent_io.c | 32 --------------------------------
 fs/btrfs/extent_io.h |  2 --
 3 files changed, 27 insertions(+), 37 deletions(-)

-- 
2.54.0



* [PATCH 1/2] btrfs: unexport and move extent_invalidate_folio()
  2026-05-03  9:47 [PATCH 0/2] btrfs: simple cleanup around Qu Wenruo
@ 2026-05-03  9:47 ` Qu Wenruo
  2026-05-03  9:47 ` [PATCH 2/2] btrfs: simplify the btree folio wait during invalidation Qu Wenruo
  2026-05-04 13:22 ` [PATCH 0/2] btrfs: simple cleanup around David Sterba
  2 siblings, 0 replies; 4+ messages in thread
From: Qu Wenruo @ 2026-05-03  9:47 UTC (permalink / raw)
  To: linux-btrfs

The function extent_invalidate_folio() has only a single caller inside
btree_invalidate_folio().

There is no need to export such a function just for a single caller inside
another file.

Unexport extent_invalidate_folio() and move it to disk-io.c.

And since we're moving the code, update the comment to match the current
style, and remove the seemingly stale comment on the extent state
removal; that is better explained by the comment just before
btrfs_unlock_extent().

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/disk-io.c   | 31 +++++++++++++++++++++++++++++++
 fs/btrfs/extent_io.c | 32 --------------------------------
 fs/btrfs/extent_io.h |  2 --
 3 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 48ddbeb18e3c..f925dcea0c46 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -488,6 +488,37 @@ static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags)
 	return try_release_extent_buffer(folio);
 }
 
+/*
+ * Basic invalidate_folio code, this waits on any locked or writeback
+ * ranges corresponding to the folio.
+ */
+static int extent_invalidate_folio(struct extent_io_tree *tree,
+				   struct folio *folio, size_t offset)
+{
+	struct extent_state *cached_state = NULL;
+	u64 start = folio_pos(folio);
+	u64 end = start + folio_size(folio) - 1;
+	size_t blocksize = folio_to_fs_info(folio)->sectorsize;
+
+	/* This function is only called for the btree inode */
+	ASSERT(tree->owner == IO_TREE_BTREE_INODE_IO);
+
+	start += ALIGN(offset, blocksize);
+	if (start > end)
+		return 0;
+
+	btrfs_lock_extent(tree, start, end, &cached_state);
+	folio_wait_writeback(folio);
+
+	/*
+	 * Currently for btree io tree, only EXTENT_LOCKED is utilized,
+	 * so here we only need to unlock the extent range to free any
+	 * existing extent state.
+	 */
+	btrfs_unlock_extent(tree, start, end, &cached_state);
+	return 0;
+}
+
 static void btree_invalidate_folio(struct folio *folio, size_t offset,
 				 size_t length)
 {
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 31a65c662b65..ebf9a63946e5 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2734,38 +2734,6 @@ void btrfs_readahead(struct readahead_control *rac)
 	submit_one_bio(&bio_ctrl);
 }
 
-/*
- * basic invalidate_folio code, this waits on any locked or writeback
- * ranges corresponding to the folio, and then deletes any extent state
- * records from the tree
- */
-int extent_invalidate_folio(struct extent_io_tree *tree,
-			  struct folio *folio, size_t offset)
-{
-	struct extent_state *cached_state = NULL;
-	u64 start = folio_pos(folio);
-	u64 end = start + folio_size(folio) - 1;
-	size_t blocksize = folio_to_fs_info(folio)->sectorsize;
-
-	/* This function is only called for the btree inode */
-	ASSERT(tree->owner == IO_TREE_BTREE_INODE_IO);
-
-	start += ALIGN(offset, blocksize);
-	if (start > end)
-		return 0;
-
-	btrfs_lock_extent(tree, start, end, &cached_state);
-	folio_wait_writeback(folio);
-
-	/*
-	 * Currently for btree io tree, only EXTENT_LOCKED is utilized,
-	 * so here we only need to unlock the extent range to free any
-	 * existing extent state.
-	 */
-	btrfs_unlock_extent(tree, start, end, &cached_state);
-	return 0;
-}
-
 /*
  * A helper for struct address_space_operations::release_folio, this tests for
  * areas of the folio that are locked or under IO and drops the related state
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index b310a5145cf6..ede7abbe4031 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -381,8 +381,6 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
 				  const struct folio *locked_folio,
 				  struct extent_state **cached,
 				  u32 bits_to_clear, unsigned long page_ops);
-int extent_invalidate_folio(struct extent_io_tree *tree,
-			    struct folio *folio, size_t offset);
 void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans,
 			      struct extent_buffer *buf);
 
-- 
2.54.0



* [PATCH 2/2] btrfs: simplify the btree folio wait during invalidation
  2026-05-03  9:47 [PATCH 0/2] btrfs: simple cleanup around Qu Wenruo
  2026-05-03  9:47 ` [PATCH 1/2] btrfs: unexport and move extent_invalidate_folio() Qu Wenruo
@ 2026-05-03  9:47 ` Qu Wenruo
  2026-05-04 13:22 ` [PATCH 0/2] btrfs: simple cleanup around David Sterba
  2 siblings, 0 replies; 4+ messages in thread
From: Qu Wenruo @ 2026-05-03  9:47 UTC (permalink / raw)
  To: linux-btrfs

The btree inode is very different from regular data inodes, as it is
never exposed to user space operations.

All operations on it are initiated either by btrfs metadata operations
or by the MM layer, e.g. releasing folios under memory pressure.

This means we never need to handle partial folio invalidation inside
btree_invalidate_folio().

With that said, we can slightly simplify the btree folio invalidation
by:

- Add ASSERT()s to make sure the range covers the whole folio

- Remove the "if (start > end)" check
  Since the range always covers the full folio, the check is always
  false.

- Open code extent_invalidate_folio()
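The dead check can be illustrated with a minimal standalone model of the
range math (these are plain stand-in functions and values, not the
kernel's folio helpers or ALIGN() macro):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the start/end computation in the old extent_invalidate_folio().
 * Returns 1 when the "if (start > end)" early-return would trigger. */
static int range_is_empty(unsigned long long folio_pos, size_t folio_size,
			  size_t offset, size_t blocksize)
{
	unsigned long long start = folio_pos;
	unsigned long long end = folio_pos + folio_size - 1;

	/* ALIGN(offset, blocksize): round offset up to a block boundary. */
	start += (offset + blocksize - 1) / blocksize * blocksize;
	return start > end;
}
```

With offset == 0 (which the new ASSERT()s guarantee for the btree
inode), start stays at folio_pos, which is always <= end, so the check
can never fire; only a non-zero offset could ever make it true.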

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/disk-io.c | 37 +++++++++++++++----------------------
 1 file changed, 15 insertions(+), 22 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index f925dcea0c46..9e1da0b812e0 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -488,25 +488,27 @@ static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags)
 	return try_release_extent_buffer(folio);
 }
 
-/*
- * Basic invalidate_folio code, this waits on any locked or writeback
- * ranges corresponding to the folio.
- */
-static int extent_invalidate_folio(struct extent_io_tree *tree,
-				   struct folio *folio, size_t offset)
+static void btree_invalidate_folio(struct folio *folio, size_t offset,
+				 size_t length)
 {
+	struct extent_io_tree *tree = &folio_to_inode(folio)->io_tree;
 	struct extent_state *cached_state = NULL;
-	u64 start = folio_pos(folio);
-	u64 end = start + folio_size(folio) - 1;
-	size_t blocksize = folio_to_fs_info(folio)->sectorsize;
+	const u64 start = folio_pos(folio);
+	const u64 end = folio_next_pos(folio) - 1;
+
+	/*
+	 * The range must cover the full @folio.
+	 * Btree inode is never exposed to regular file operations, thus there
+	 * is no partial truncation.
+	 * The folio is only invalidated when the btree inode is evicted.
+	 */
+	ASSERT(offset == 0, "folio=%llu offset=%zu", folio_pos(folio), offset);
+	ASSERT(length == folio_size(folio), "folio=%llu folio_size=%zu length=%zu",
+	       folio_pos(folio), folio_size(folio), length);
 
 	/* This function is only called for the btree inode */
 	ASSERT(tree->owner == IO_TREE_BTREE_INODE_IO);
 
-	start += ALIGN(offset, blocksize);
-	if (start > end)
-		return 0;
-
 	btrfs_lock_extent(tree, start, end, &cached_state);
 	folio_wait_writeback(folio);
 
@@ -516,16 +518,7 @@ static int extent_invalidate_folio(struct extent_io_tree *tree,
 	 * existing extent state.
 	 */
 	btrfs_unlock_extent(tree, start, end, &cached_state);
-	return 0;
-}
 
-static void btree_invalidate_folio(struct folio *folio, size_t offset,
-				 size_t length)
-{
-	struct extent_io_tree *tree;
-
-	tree = &folio_to_inode(folio)->io_tree;
-	extent_invalidate_folio(tree, folio, offset);
 	btree_release_folio(folio, GFP_NOFS);
 	if (folio_get_private(folio)) {
 		btrfs_warn(folio_to_fs_info(folio),
-- 
2.54.0



* Re: [PATCH 0/2] btrfs: simple cleanup around
  2026-05-03  9:47 [PATCH 0/2] btrfs: simple cleanup around Qu Wenruo
  2026-05-03  9:47 ` [PATCH 1/2] btrfs: unexport and move extent_invalidate_folio() Qu Wenruo
  2026-05-03  9:47 ` [PATCH 2/2] btrfs: simplify the btree folio wait during invalidation Qu Wenruo
@ 2026-05-04 13:22 ` David Sterba
  2 siblings, 0 replies; 4+ messages in thread
From: David Sterba @ 2026-05-04 13:22 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Sun, May 03, 2026 at 07:17:49PM +0930, Qu Wenruo wrote:
> The first patch moves and unexports the function.
> 
> The second one open-codes the function into its only caller and removes
> the need for the @offset/@length parameters.
> 
> Qu Wenruo (2):
>   btrfs: unexport and move extent_invalidate_folio()
>   btrfs: simplify the btree folio wait during invalidation

Reviewed-by: David Sterba <dsterba@suse.com>

