linux-btrfs.vger.kernel.org archive mirror
* [PATCH v2 0/4] Folio pos + size cleanups
@ 2025-06-10 12:17 David Sterba
  2025-06-10 12:17 ` [PATCH v2 1/4] btrfs: simplify range end calculations in truncate_block_zero_beyond_eof() David Sterba
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: David Sterba @ 2025-06-10 12:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Add a helper to simplify the numerous folio_pos() + folio_size() calculations.

v2:
- preparatory patches: rename variables so they match the semantics of the
  locking helpers (inclusive "-1" end of range), and simplify some range calculations
- updated changelogs with stats of potential usage outside of btrfs
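
For reference, the helper added in patch 3 and one typical conversion; this is
only a sketch of what the series does, the exact hunks are in the individual
patches:

  static inline u64 folio_end(struct folio *folio)
  {
          return folio_pos(folio) + folio_size(folio);
  }

  /* before */
  range_len = min_t(u64, folio_pos(folio) + folio_size(folio), end + 1) - range_start;
  /* after */
  range_len = min_t(u64, folio_end(folio), end + 1) - range_start;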

David Sterba (4):
  btrfs: simplify range end calculations in
    truncate_block_zero_beyond_eof()
  btrfs: rename variables for locked range in defrag_prepare_one_folio()
  btrfs: add helper folio_end()
  btrfs: use folio_end() where appropriate

 fs/btrfs/compression.h  |  7 +++----
 fs/btrfs/defrag.c       | 19 +++++++++----------
 fs/btrfs/extent_io.c    | 17 ++++++++---------
 fs/btrfs/file.c         |  9 ++++-----
 fs/btrfs/inode.c        | 12 +++++-------
 fs/btrfs/misc.h         |  7 +++++++
 fs/btrfs/ordered-data.c |  2 +-
 fs/btrfs/subpage.c      |  5 ++---
 8 files changed, 39 insertions(+), 39 deletions(-)

-- 
2.47.1



* [PATCH v2 1/4] btrfs: simplify range end calculations in truncate_block_zero_beyond_eof()
  2025-06-10 12:17 [PATCH v2 0/4] Folio pos + size cleanups David Sterba
@ 2025-06-10 12:17 ` David Sterba
  2025-06-11 10:26   ` Johannes Thumshirn
  2025-06-10 12:17 ` [PATCH v2 2/4] btrfs: rename variables for locked range in defrag_prepare_one_folio() David Sterba
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: David Sterba @ 2025-06-10 12:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The way zoned_end is calculated and used does a -1 and +1 that
effectively cancel out, so this can be simplified. This is also a
preparatory patch for using a helper for folio_pos + folio_size with the
semantics of an exclusive end of the range.
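
To make the cancellation explicit (illustrative only, using the variable
names from the function):

  /* Inclusive end: the "-1" here and the "+1" in the length cancel out. */
  zero_end = folio_pos(folio) + folio_size(folio) - 1;
  len = zero_end - zero_start + 1;   /* == folio_pos(folio) + folio_size(folio) - zero_start */

  /* Exclusive end: neither adjustment is needed. */
  zero_end = folio_pos(folio) + folio_size(folio);
  len = zero_end - zero_start;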

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/inode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 37626c6816f1..50e99b599275 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4819,9 +4819,9 @@ static int truncate_block_zero_beyond_eof(struct btrfs_inode *inode, u64 start)
 	 */
 
 	zero_start = max_t(u64, folio_pos(folio), start);
-	zero_end = folio_pos(folio) + folio_size(folio) - 1;
+	zero_end = folio_pos(folio) + folio_size(folio);
 	folio_zero_range(folio, zero_start - folio_pos(folio),
-			 zero_end - zero_start + 1);
+			 zero_end - zero_start);
 
 out_unlock:
 	folio_unlock(folio);
-- 
2.47.1



* [PATCH v2 2/4] btrfs: rename variables for locked range in defrag_prepare_one_folio()
  2025-06-10 12:17 [PATCH v2 0/4] Folio pos + size cleanups David Sterba
  2025-06-10 12:17 ` [PATCH v2 1/4] btrfs: simplify range end calculations in truncate_block_zero_beyond_eof() David Sterba
@ 2025-06-10 12:17 ` David Sterba
  2025-06-10 12:17 ` [PATCH v2 3/4] btrfs: add helper folio_end() David Sterba
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: David Sterba @ 2025-06-10 12:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

In preparation for using a helper for folio_pos + folio_size, rename the
variables for the locked range so they don't use the 'folio_' prefix. The
extent locking helpers take an inclusive end of the range (hence the "-1"),
so keeping the 'folio_' prefix would be confusing, as the folio helpers
typically use an exclusive end of the range.
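
A short sketch of the two conventions (illustrative only, end_exclusive is
just a name for the example, the real code uses the variables below):

  lock_start = folio_pos(folio);
  /* Extent locking takes an inclusive end, hence the "-1". */
  lock_end = folio_pos(folio) + folio_size(folio) - 1;
  /* The folio helpers (and the later folio_end()) use an exclusive end. */
  end_exclusive = folio_pos(folio) + folio_size(folio);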

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/defrag.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index 6dca263b224e..faa563ee3000 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -848,8 +848,8 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 {
 	struct address_space *mapping = inode->vfs_inode.i_mapping;
 	gfp_t mask = btrfs_alloc_write_mask(mapping);
-	u64 folio_start;
-	u64 folio_end;
+	u64 lock_start;
+	u64 lock_end;
 	struct extent_state *cached_state = NULL;
 	struct folio *folio;
 	int ret;
@@ -885,15 +885,15 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 		return ERR_PTR(ret);
 	}
 
-	folio_start = folio_pos(folio);
-	folio_end = folio_pos(folio) + folio_size(folio) - 1;
+	lock_start = folio_pos(folio);
+	lock_end = folio_pos(folio) + folio_size(folio) - 1;
 	/* Wait for any existing ordered extent in the range */
 	while (1) {
 		struct btrfs_ordered_extent *ordered;
 
-		btrfs_lock_extent(&inode->io_tree, folio_start, folio_end, &cached_state);
-		ordered = btrfs_lookup_ordered_range(inode, folio_start, folio_size(folio));
-		btrfs_unlock_extent(&inode->io_tree, folio_start, folio_end, &cached_state);
+		btrfs_lock_extent(&inode->io_tree, lock_start, lock_end, &cached_state);
+		ordered = btrfs_lookup_ordered_range(inode, lock_start, folio_size(folio));
+		btrfs_unlock_extent(&inode->io_tree, lock_start, lock_end, &cached_state);
 		if (!ordered)
 			break;
 
-- 
2.47.1



* [PATCH v2 3/4] btrfs: add helper folio_end()
  2025-06-10 12:17 [PATCH v2 0/4] Folio pos + size cleanups David Sterba
  2025-06-10 12:17 ` [PATCH v2 1/4] btrfs: simplify range end calculations in truncate_block_zero_beyond_eof() David Sterba
  2025-06-10 12:17 ` [PATCH v2 2/4] btrfs: rename variables for locked range in defrag_prepare_one_folio() David Sterba
@ 2025-06-10 12:17 ` David Sterba
  2025-07-29  9:59   ` Andreas Gruenbacher
  2025-06-10 12:17 ` [PATCH v2 4/4] btrfs: use folio_end() where appropriate David Sterba
  2025-06-11 10:27 ` [PATCH v2 0/4] Folio pos + size cleanups Johannes Thumshirn
  4 siblings, 1 reply; 10+ messages in thread
From: David Sterba @ 2025-06-10 12:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

There are several cases of folio_pos + folio_size, add a convenience
helper for that. This is a local helper and not proposed as folio API
because it does not seem to be heavily used elsewhere:

A quick grep (folio_size + folio_end) in fs/ shows

     24 btrfs
      4 iomap
      4 ext4
      2 xfs
      2 netfs
      1 gfs2
      1 f2fs
      1 bcachefs
      1 buffer.c

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/misc.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
index 9cc292402696..ff5eac84d819 100644
--- a/fs/btrfs/misc.h
+++ b/fs/btrfs/misc.h
@@ -7,6 +7,8 @@
 #include <linux/bitmap.h>
 #include <linux/sched.h>
 #include <linux/wait.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
 #include <linux/math64.h>
 #include <linux/rbtree.h>
 
@@ -158,4 +160,9 @@ static inline bool bitmap_test_range_all_zero(const unsigned long *addr,
 	return (found_set == start + nbits);
 }
 
+static inline u64 folio_end(struct folio *folio)
+{
+	return folio_pos(folio) + folio_size(folio);
+}
+
 #endif
-- 
2.47.1



* [PATCH v2 4/4] btrfs: use folio_end() where appropriate
  2025-06-10 12:17 [PATCH v2 0/4] Folio pos + size cleanups David Sterba
                   ` (2 preceding siblings ...)
  2025-06-10 12:17 ` [PATCH v2 3/4] btrfs: add helper folio_end() David Sterba
@ 2025-06-10 12:17 ` David Sterba
  2025-06-11 10:27 ` [PATCH v2 0/4] Folio pos + size cleanups Johannes Thumshirn
  4 siblings, 0 replies; 10+ messages in thread
From: David Sterba @ 2025-06-10 12:17 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Simplify the open-coded folio_pos() + folio_size() calculations by using the new helper.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/compression.h  |  7 +++----
 fs/btrfs/defrag.c       |  7 +++----
 fs/btrfs/extent_io.c    | 17 ++++++++---------
 fs/btrfs/file.c         |  9 ++++-----
 fs/btrfs/inode.c        | 10 ++++------
 fs/btrfs/ordered-data.c |  2 +-
 fs/btrfs/subpage.c      |  5 ++---
 7 files changed, 25 insertions(+), 32 deletions(-)

diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index d34c4341eaf4..1df3c8dec40a 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -13,6 +13,7 @@
 #include <linux/wait.h>
 #include <linux/pagemap.h>
 #include "bio.h"
+#include "fs.h"
 #include "messages.h"
 
 struct address_space;
@@ -77,12 +78,10 @@ struct compressed_bio {
 /* @range_end must be exclusive. */
 static inline u32 btrfs_calc_input_length(struct folio *folio, u64 range_end, u64 cur)
 {
-	const u64 folio_end = folio_pos(folio) + folio_size(folio);
-
 	/* @cur must be inside the folio. */
 	ASSERT(folio_pos(folio) <= cur);
-	ASSERT(cur < folio_end);
-	return min(range_end, folio_end) - cur;
+	ASSERT(cur < folio_end(folio));
+	return min(range_end, folio_end(folio)) - cur;
 }
 
 int __init btrfs_init_compress(void);
diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index faa563ee3000..701b6b51ea85 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -886,7 +886,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 	}
 
 	lock_start = folio_pos(folio);
-	lock_end = folio_pos(folio) + folio_size(folio) - 1;
+	lock_end = folio_end(folio) - 1;
 	/* Wait for any existing ordered extent in the range */
 	while (1) {
 		struct btrfs_ordered_extent *ordered;
@@ -1178,8 +1178,7 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
 
 		if (!folio)
 			break;
-		if (start >= folio_pos(folio) + folio_size(folio) ||
-		    start + len <= folio_pos(folio))
+		if (start >= folio_end(folio) || start + len <= folio_pos(folio))
 			continue;
 		btrfs_folio_clamp_clear_checked(fs_info, folio, start, len);
 		btrfs_folio_clamp_set_dirty(fs_info, folio, start, len);
@@ -1220,7 +1219,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 			folios[i] = NULL;
 			goto free_folios;
 		}
-		cur = folio_pos(folios[i]) + folio_size(folios[i]);
+		cur = folio_end(folios[i]);
 	}
 	for (int i = 0; i < nr_pages; i++) {
 		if (!folios[i])
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 8445f297e664..bc99df91f629 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -266,8 +266,7 @@ static noinline int lock_delalloc_folios(struct inode *inode,
 				goto out;
 			}
 			range_start = max_t(u64, folio_pos(folio), start);
-			range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
-					  end + 1) - range_start;
+			range_len = min_t(u64, folio_end(folio), end + 1) - range_start;
 			btrfs_folio_set_lock(fs_info, folio, range_start, range_len);
 
 			processed_end = range_start + range_len - 1;
@@ -321,7 +320,7 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
 	ASSERT(orig_end > orig_start);
 
 	/* The range should at least cover part of the folio */
-	ASSERT(!(orig_start >= folio_pos(locked_folio) + folio_size(locked_folio) ||
+	ASSERT(!(orig_start >= folio_end(locked_folio) ||
 		 orig_end <= folio_pos(locked_folio)));
 again:
 	/* step one, find a bunch of delalloc bytes starting at start */
@@ -419,7 +418,7 @@ static void end_folio_read(struct folio *folio, bool uptodate, u64 start, u32 le
 	struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
 
 	ASSERT(folio_pos(folio) <= start &&
-	       start + len <= folio_pos(folio) + folio_size(folio));
+	       start + len <= folio_end(folio));
 
 	if (uptodate && btrfs_verify_folio(folio, start, len))
 		btrfs_folio_set_uptodate(fs_info, folio, start, len);
@@ -1086,7 +1085,7 @@ static bool can_skip_one_ordered_range(struct btrfs_inode *inode,
 	 * finished our folio read and unlocked the folio.
 	 */
 	if (btrfs_folio_test_dirty(fs_info, folio, cur, blocksize)) {
-		u64 range_len = min(folio_pos(folio) + folio_size(folio),
+		u64 range_len = min(folio_end(folio),
 				    ordered->file_offset + ordered->num_bytes) - cur;
 
 		ret = true;
@@ -1108,7 +1107,7 @@ static bool can_skip_one_ordered_range(struct btrfs_inode *inode,
 	 * So we return true and update @next_ret to the OE/folio boundary.
 	 */
 	if (btrfs_folio_test_uptodate(fs_info, folio, cur, blocksize)) {
-		u64 range_len = min(folio_pos(folio) + folio_size(folio),
+		u64 range_len = min(folio_end(folio),
 				    ordered->file_offset + ordered->num_bytes) - cur;
 
 		/*
@@ -2085,7 +2084,7 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
 	for (int i = 0; i < num_extent_folios(eb); i++) {
 		struct folio *folio = eb->folios[i];
 		u64 range_start = max_t(u64, eb->start, folio_pos(folio));
-		u32 range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
+		u32 range_len = min_t(u64, folio_end(folio),
 				      eb->start + eb->len) - range_start;
 
 		folio_lock(folio);
@@ -2489,7 +2488,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
 			continue;
 		}
 
-		cur_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1, end);
+		cur_end = min_t(u64, folio_end(folio) - 1, end);
 		cur_len = cur_end + 1 - cur;
 
 		ASSERT(folio_test_locked(folio));
@@ -3729,7 +3728,7 @@ int read_extent_buffer_pages_nowait(struct extent_buffer *eb, int mirror_num,
 	for (int i = 0; i < num_extent_folios(eb); i++) {
 		struct folio *folio = eb->folios[i];
 		u64 range_start = max_t(u64, eb->start, folio_pos(folio));
-		u32 range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
+		u32 range_len = min_t(u64, folio_end(folio),
 				      eb->start + eb->len) - range_start;
 
 		bio_add_folio_nofail(&bbio->bio, folio, range_len,
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 2902de88dd1b..082630562838 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -89,8 +89,7 @@ int btrfs_dirty_folio(struct btrfs_inode *inode, struct folio *folio, loff_t pos
 	num_bytes = round_up(write_bytes + pos - start_pos,
 			     fs_info->sectorsize);
 	ASSERT(num_bytes <= U32_MAX);
-	ASSERT(folio_pos(folio) <= pos &&
-	       folio_pos(folio) + folio_size(folio) >= pos + write_bytes);
+	ASSERT(folio_pos(folio) <= pos && folio_end(folio) >= pos + write_bytes);
 
 	end_of_last_block = start_pos + num_bytes - 1;
 
@@ -801,7 +800,7 @@ static int prepare_uptodate_folio(struct inode *inode, struct folio *folio, u64
 				  u64 len)
 {
 	u64 clamp_start = max_t(u64, pos, folio_pos(folio));
-	u64 clamp_end = min_t(u64, pos + len, folio_pos(folio) + folio_size(folio));
+	u64 clamp_end = min_t(u64, pos + len, folio_end(folio));
 	const u32 blocksize = inode_to_fs_info(inode)->sectorsize;
 	int ret = 0;
 
@@ -1233,8 +1232,8 @@ static int copy_one_range(struct btrfs_inode *inode, struct iov_iter *iter,
 	 * The reserved range goes beyond the current folio, shrink the reserved
 	 * space to the folio boundary.
 	 */
-	if (reserved_start + reserved_len > folio_pos(folio) + folio_size(folio)) {
-		const u64 last_block = folio_pos(folio) + folio_size(folio);
+	if (reserved_start + reserved_len > folio_end(folio)) {
+		const u64 last_block = folio_end(folio);
 
 		shrink_reserved_space(inode, *data_reserved, reserved_start,
 				      reserved_len, last_block - reserved_start,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 50e99b599275..a56c900df989 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2328,8 +2328,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct folio *locked_fol
 	 * The range must cover part of the @locked_folio, or a return of 1
 	 * can confuse the caller.
 	 */
-	ASSERT(!(end <= folio_pos(locked_folio) ||
-		 start >= folio_pos(locked_folio) + folio_size(locked_folio)));
+	ASSERT(!(end <= folio_pos(locked_folio) || start >= folio_end(locked_folio)));
 
 	if (should_nocow(inode, start, end)) {
 		ret = run_delalloc_nocow(inode, locked_folio, start, end);
@@ -2737,7 +2736,7 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
 	struct btrfs_inode *inode = fixup->inode;
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	u64 page_start = folio_pos(folio);
-	u64 page_end = folio_pos(folio) + folio_size(folio) - 1;
+	u64 page_end = folio_end(folio) - 1;
 	int ret = 0;
 	bool free_delalloc_space = true;
 
@@ -4819,7 +4818,7 @@ static int truncate_block_zero_beyond_eof(struct btrfs_inode *inode, u64 start)
 	 */
 
 	zero_start = max_t(u64, folio_pos(folio), start);
-	zero_end = folio_pos(folio) + folio_size(folio);
+	zero_end = folio_end(folio);
 	folio_zero_range(folio, zero_start - folio_pos(folio),
 			 zero_end - zero_start);
 
@@ -4999,8 +4998,7 @@ int btrfs_truncate_block(struct btrfs_inode *inode, u64 offset, u64 start, u64 e
 		 * not reach disk, it still affects our page caches.
 		 */
 		zero_start = max_t(u64, folio_pos(folio), start);
-		zero_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
-				 end);
+		zero_end = min_t(u64, folio_end(folio) - 1, end);
 	} else {
 		zero_start = max_t(u64, block_start, start);
 		zero_end = min_t(u64, block_end, end);
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 9212ce110cde..2829f20d7bb5 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -359,7 +359,7 @@ static bool can_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
 	if (folio) {
 		ASSERT(folio->mapping);
 		ASSERT(folio_pos(folio) <= file_offset);
-		ASSERT(file_offset + len <= folio_pos(folio) + folio_size(folio));
+		ASSERT(file_offset + len <= folio_end(folio));
 
 		/*
 		 * Ordered flag indicates whether we still have
diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
index 9b63a4d1c989..2c5c9262b1a8 100644
--- a/fs/btrfs/subpage.c
+++ b/fs/btrfs/subpage.c
@@ -187,7 +187,7 @@ static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info,
 	 */
 	if (folio->mapping)
 		ASSERT(folio_pos(folio) <= start &&
-		       start + len <= folio_pos(folio) + folio_size(folio));
+		       start + len <= folio_end(folio));
 }
 
 #define subpage_calc_start_bit(fs_info, folio, name, start, len)	\
@@ -216,8 +216,7 @@ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len)
 	if (folio_pos(folio) >= orig_start + orig_len)
 		*len = 0;
 	else
-		*len = min_t(u64, folio_pos(folio) + folio_size(folio),
-			     orig_start + orig_len) - *start;
+		*len = min_t(u64, folio_end(folio), orig_start + orig_len) - *start;
 }
 
 static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
-- 
2.47.1



* Re: [PATCH v2 1/4] btrfs: simplify range end calculations in truncate_block_zero_beyond_eof()
  2025-06-10 12:17 ` [PATCH v2 1/4] btrfs: simplify range end calculations in truncate_block_zero_beyond_eof() David Sterba
@ 2025-06-11 10:26   ` Johannes Thumshirn
  0 siblings, 0 replies; 10+ messages in thread
From: Johannes Thumshirn @ 2025-06-11 10:26 UTC (permalink / raw)
  To: David Sterba, linux-btrfs@vger.kernel.org

On 10.06.25 14:19, David Sterba wrote:
> The way zoned_end is calculated and used does a -1 and +1 that

s/zoned_end/zero_end/




* Re: [PATCH v2 0/4] Folio pos + size cleanups
  2025-06-10 12:17 [PATCH v2 0/4] Folio pos + size cleanups David Sterba
                   ` (3 preceding siblings ...)
  2025-06-10 12:17 ` [PATCH v2 4/4] btrfs: use folio_end() where appropriate David Sterba
@ 2025-06-11 10:27 ` Johannes Thumshirn
  4 siblings, 0 replies; 10+ messages in thread
From: Johannes Thumshirn @ 2025-06-11 10:27 UTC (permalink / raw)
  To: David Sterba, linux-btrfs@vger.kernel.org

Minus the nit in 1/4's commit message
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


* Re: [PATCH v2 3/4] btrfs: add helper folio_end()
  2025-06-10 12:17 ` [PATCH v2 3/4] btrfs: add helper folio_end() David Sterba
@ 2025-07-29  9:59   ` Andreas Gruenbacher
  2025-07-30 21:58     ` Matthew Wilcox
  0 siblings, 1 reply; 10+ messages in thread
From: Andreas Gruenbacher @ 2025-07-29  9:59 UTC (permalink / raw)
  To: David Sterba; +Cc: Matthew Wilcox, linux-btrfs

Hi David,

On Tue, Jul 29, 2025 at 11:44 AM David Sterba <dsterba@suse.com> wrote:
> There are several cases of folio_pos + folio_size, add a convenience
> helper for that. This is a local helper and not proposed as folio API
> because it does not seem to be heavily used elsewhere:
>
> A quick grep (folio_size + folio_end) in fs/ shows
>
>      24 btrfs
>       4 iomap
>       4 ext4
>       2 xfs
>       2 netfs
>       1 gfs2
>       1 f2fs
>       1 bcachefs
>       1 buffer.c

we now have a folio_end() helper in btrfs and a folio_end_pos() helper
in bcachefs, so people obviously keep reinventing the same thing.
Could this please be turned into a proper common helper?

I guess Willy will have a preferred function name.

Thanks,
Andreas

> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/misc.h | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
> index 9cc292402696..ff5eac84d819 100644
> --- a/fs/btrfs/misc.h
> +++ b/fs/btrfs/misc.h
> @@ -7,6 +7,8 @@
>  #include <linux/bitmap.h>
>  #include <linux/sched.h>
>  #include <linux/wait.h>
> +#include <linux/mm.h>
> +#include <linux/pagemap.h>
>  #include <linux/math64.h>
>  #include <linux/rbtree.h>
>
> @@ -158,4 +160,9 @@ static inline bool bitmap_test_range_all_zero(const unsigned long *addr,
>         return (found_set == start + nbits);
>  }
>
> +static inline u64 folio_end(struct folio *folio)
> +{
> +       return folio_pos(folio) + folio_size(folio);
> +}
> +
>  #endif
> --
> 2.47.1
>



* Re: [PATCH v2 3/4] btrfs: add helper folio_end()
  2025-07-29  9:59   ` Andreas Gruenbacher
@ 2025-07-30 21:58     ` Matthew Wilcox
  2025-08-04 13:14       ` David Sterba
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2025-07-30 21:58 UTC (permalink / raw)
  To: Andreas Gruenbacher; +Cc: David Sterba, linux-btrfs

On Tue, Jul 29, 2025 at 11:59:12AM +0200, Andreas Gruenbacher wrote:
> On Tue, Jul 29, 2025 at 11:44 AM David Sterba <dsterba@suse.com> wrote:
> > There are several cases of folio_pos + folio_size, add a convenience
> > helper for that. This is a local helper and not proposed as folio API
> > because it does not seem to be heavily used elsewhere:
> >
> > A quick grep (folio_size + folio_end) in fs/ shows
> >
> >      24 btrfs
> >       4 iomap
> >       4 ext4
> >       2 xfs
> >       2 netfs
> >       1 gfs2
> >       1 f2fs
> >       1 bcachefs
> >       1 buffer.c
> 
> we now have a folio_end() helper in btrfs and a folio_end_pos() helper
> in bcachefs, so people obviously keep reinventing the same thing.
> Could this please be turned into a proper common helper?
> 
> I guess Willy will have a preferred function name.

Yes, and even an implementation ;-)

folio_end() is too generic -- folios have a lot of "ends" (virtual,
physical, logical) in various different units (bytes, sectors,
PAGE_SIZE).  And then the question becomes whether this is an inclusive
or exclusive 'end'.

Fortunately, we already have a great function which almost does what you
want -- folio_next_index().  The only reason you don't want to use that
is that it returns the answer in the wrong units (PAGE_SIZE) instead of
bytes.  So:

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8eb4b73b6884..a6d32d4a77c3 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -899,11 +899,22 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
  *
  * Return: The index of the folio which follows this folio in the file.
  */
-static inline pgoff_t folio_next_index(struct folio *folio)
+static inline pgoff_t folio_next_index(const struct folio *folio)
 {
 	return folio->index + folio_nr_pages(folio);
 }
 
+/**
+ * folio_next_pos - Get the file position of the next folio.
+ * @folio: The current folio.
+ *
+ * Return: The position of the folio which follows this folio in the file.
+ */
+static inline loff_t folio_next_pos(const struct folio *folio)
+{
+	return (loff_t)folio_next_index(folio) << PAGE_SHIFT;
+}
+
 /**
  * folio_file_page - The page for a particular index.
  * @folio: The folio which contains this index.


Best of all, there's even a tiny code quality improvement with this
implementation.  Converting just one folio_pos() + folio_size() to
folio_next_pos() gives:

$ ./scripts/bloat-o-meter before.o after.o
add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-5 (-5)
Function                                     old     new   delta
lock_delalloc_folios                         649     644      -5

The assembly is a little tough to follow, but basically we do
(a + b) << 12 instead of (a << 12) + (b << 12).
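
A minimal sketch of the difference, assuming the common case where
folio_pos() reduces to folio->index << PAGE_SHIFT and folio_size() to
folio_nr_pages(folio) << PAGE_SHIFT:

  /* open-coded folio_pos() + folio_size(): shift twice, then add */
  end = ((loff_t)folio->index << PAGE_SHIFT) +
        ((loff_t)folio_nr_pages(folio) << PAGE_SHIFT);

  /* folio_next_pos(): add, then shift once */
  end = (loff_t)(folio->index + folio_nr_pages(folio)) << PAGE_SHIFT;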


* Re: [PATCH v2 3/4] btrfs: add helper folio_end()
  2025-07-30 21:58     ` Matthew Wilcox
@ 2025-08-04 13:14       ` David Sterba
  0 siblings, 0 replies; 10+ messages in thread
From: David Sterba @ 2025-08-04 13:14 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Andreas Gruenbacher, David Sterba, linux-btrfs

On Wed, Jul 30, 2025 at 10:58:10PM +0100, Matthew Wilcox wrote:
> On Tue, Jul 29, 2025 at 11:59:12AM +0200, Andreas Gruenbacher wrote:
> > On Tue, Jul 29, 2025 at 11:44 AM David Sterba <dsterba@suse.com> wrote:
> > > There are several cases of folio_pos + folio_size, add a convenience
> > > helper for that. This is a local helper and not proposed as folio API
> > > because it does not seem to be heavily used elsewhere:
> > >
> > > A quick grep (folio_size + folio_end) in fs/ shows
> > >
> > >      24 btrfs
> > >       4 iomap
> > >       4 ext4
> > >       2 xfs
> > >       2 netfs
> > >       1 gfs2
> > >       1 f2fs
> > >       1 bcachefs
> > >       1 buffer.c
> > 
> > we now have a folio_end() helper in btrfs and a folio_end_pos() helper
> > in bcachefs, so people obviously keep reinventing the same thing.
> > Could this please be turned into a proper common helper?
> > 
> > I guess Willy will have a preferred function name.
> 
> Yes, and even implementation ;-)
> 
> folio_end() is too generic -- folios have a lot of "ends" (virtual,
> physical, logical) in various different units (bytes, sectors,
> PAGE_SIZE).  And then the question becomes whether this is an inclusive
> or exclusive 'end'.
> 
> Fortunately, we already have a great function which almost does what you
> want -- folio_next_index().  The only reason you don't want to use that
> is that it returns the answer in the wrong units (PAGE_SIZE) instead of
> bytes.  So:
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 8eb4b73b6884..a6d32d4a77c3 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -899,11 +899,22 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
>   *
>   * Return: The index of the folio which follows this folio in the file.
>   */
> -static inline pgoff_t folio_next_index(struct folio *folio)
> +static inline pgoff_t folio_next_index(const struct folio *folio)
>  {
>  	return folio->index + folio_nr_pages(folio);
>  }
>  
> +/**
> + * folio_next_pos - Get the file position of the next folio.
> + * @folio: The current folio.
> + *
> + * Return: The position of the folio which follows this folio in the file.
> + */
> +static inline loff_t folio_next_pos(const struct folio *folio)
> +{
> +	return (loff_t)folio_next_index(folio) << PAGE_SHIFT;
> +}

Works for me. The change in semantics from "end" to "start of the next
one" is still understandable; it's used in bounds checks or when moving
to the next folio.
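
If the common helper lands, the local btrfs helper could presumably become a
thin wrapper, keeping the existing callers unchanged. A hypothetical sketch,
not part of this series:

  static inline u64 folio_end(struct folio *folio)
  {
          return folio_next_pos(folio);
  }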


Thread overview: 10+ messages
2025-06-10 12:17 [PATCH v2 0/4] Folio pos + size cleanups David Sterba
2025-06-10 12:17 ` [PATCH v2 1/4] btrfs: simplify range end calculations in truncate_block_zero_beyond_eof() David Sterba
2025-06-11 10:26   ` Johannes Thumshirn
2025-06-10 12:17 ` [PATCH v2 2/4] btrfs: rename variables for locked range in defrag_prepare_one_folio() David Sterba
2025-06-10 12:17 ` [PATCH v2 3/4] btrfs: add helper folio_end() David Sterba
2025-07-29  9:59   ` Andreas Gruenbacher
2025-07-30 21:58     ` Matthew Wilcox
2025-08-04 13:14       ` David Sterba
2025-06-10 12:17 ` [PATCH v2 4/4] btrfs: use folio_end() where appropriate David Sterba
2025-06-11 10:27 ` [PATCH v2 0/4] Folio pos + size cleanups Johannes Thumshirn
