public inbox for linux-btrfs@vger.kernel.org
* [PATCH 0/2] Folio pos + size cleanups
@ 2025-06-03  8:16 David Sterba
  2025-06-03  8:16 ` [PATCH 1/2] btrfs: add helper folio_end() David Sterba
  2025-06-03  8:16 ` [PATCH 2/2] btrfs: use folio_end() where appropriate David Sterba
  0 siblings, 2 replies; 7+ messages in thread
From: David Sterba @ 2025-06-03  8:16 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Helper to simplify numerous folio_pos() + folio_size() calculations.

David Sterba (2):
  btrfs: add helper folio_end()
  btrfs: use folio_end() where appropriate

 fs/btrfs/compression.h  |  7 +++----
 fs/btrfs/defrag.c       | 13 ++++++-------
 fs/btrfs/extent_io.c    | 17 ++++++++---------
 fs/btrfs/file.c         |  9 ++++-----
 fs/btrfs/inode.c        | 10 ++++------
 fs/btrfs/misc.h         |  7 +++++++
 fs/btrfs/ordered-data.c |  2 +-
 fs/btrfs/subpage.c      |  5 ++---
 8 files changed, 35 insertions(+), 35 deletions(-)

-- 
2.49.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/2] btrfs: add helper folio_end()
  2025-06-03  8:16 [PATCH 0/2] Folio pos + size cleanups David Sterba
@ 2025-06-03  8:16 ` David Sterba
  2025-06-03 18:54   ` Boris Burkov
  2025-06-03  8:16 ` [PATCH 2/2] btrfs: use folio_end() where appropriate David Sterba
  1 sibling, 1 reply; 7+ messages in thread
From: David Sterba @ 2025-06-03  8:16 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

There are several cases of folio_pos + folio_size, add a convenience
helper for that. Rename local variable in defrag_prepare_one_folio() to
avoid name clash.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/defrag.c | 8 ++++----
 fs/btrfs/misc.h   | 7 +++++++
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index 6dca263b224e87..e5739835ad02f0 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -849,7 +849,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 	struct address_space *mapping = inode->vfs_inode.i_mapping;
 	gfp_t mask = btrfs_alloc_write_mask(mapping);
 	u64 folio_start;
-	u64 folio_end;
+	u64 folio_last;
 	struct extent_state *cached_state = NULL;
 	struct folio *folio;
 	int ret;
@@ -886,14 +886,14 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 	}
 
 	folio_start = folio_pos(folio);
-	folio_end = folio_pos(folio) + folio_size(folio) - 1;
+	folio_last = folio_pos(folio) + folio_size(folio) - 1;
 	/* Wait for any existing ordered extent in the range */
 	while (1) {
 		struct btrfs_ordered_extent *ordered;
 
-		btrfs_lock_extent(&inode->io_tree, folio_start, folio_end, &cached_state);
+		btrfs_lock_extent(&inode->io_tree, folio_start, folio_last, &cached_state);
 		ordered = btrfs_lookup_ordered_range(inode, folio_start, folio_size(folio));
-		btrfs_unlock_extent(&inode->io_tree, folio_start, folio_end, &cached_state);
+		btrfs_unlock_extent(&inode->io_tree, folio_start, folio_last, &cached_state);
 		if (!ordered)
 			break;
 
diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
index 9cc292402696cc..ff5eac84d819d8 100644
--- a/fs/btrfs/misc.h
+++ b/fs/btrfs/misc.h
@@ -7,6 +7,8 @@
 #include <linux/bitmap.h>
 #include <linux/sched.h>
 #include <linux/wait.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
 #include <linux/math64.h>
 #include <linux/rbtree.h>
 
@@ -158,4 +160,9 @@ static inline bool bitmap_test_range_all_zero(const unsigned long *addr,
 	return (found_set == start + nbits);
 }
 
+static inline u64 folio_end(struct folio *folio)
+{
+	return folio_pos(folio) + folio_size(folio);
+}
+
 #endif
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 2/2] btrfs: use folio_end() where appropriate
  2025-06-03  8:16 [PATCH 0/2] Folio pos + size cleanups David Sterba
  2025-06-03  8:16 ` [PATCH 1/2] btrfs: add helper folio_end() David Sterba
@ 2025-06-03  8:16 ` David Sterba
  1 sibling, 0 replies; 7+ messages in thread
From: David Sterba @ 2025-06-03  8:16 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Simplify folio_pos() + folio_size() and use the new helper.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/compression.h  |  7 +++----
 fs/btrfs/defrag.c       |  7 +++----
 fs/btrfs/extent_io.c    | 17 ++++++++---------
 fs/btrfs/file.c         |  9 ++++-----
 fs/btrfs/inode.c        | 10 ++++------
 fs/btrfs/ordered-data.c |  2 +-
 fs/btrfs/subpage.c      |  5 ++---
 7 files changed, 25 insertions(+), 32 deletions(-)

diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index d34c4341eaf485..1df3c8dec40a91 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -13,6 +13,7 @@
 #include <linux/wait.h>
 #include <linux/pagemap.h>
 #include "bio.h"
+#include "fs.h"
 #include "messages.h"
 
 struct address_space;
@@ -77,12 +78,10 @@ struct compressed_bio {
 /* @range_end must be exclusive. */
 static inline u32 btrfs_calc_input_length(struct folio *folio, u64 range_end, u64 cur)
 {
-	const u64 folio_end = folio_pos(folio) + folio_size(folio);
-
 	/* @cur must be inside the folio. */
 	ASSERT(folio_pos(folio) <= cur);
-	ASSERT(cur < folio_end);
-	return min(range_end, folio_end) - cur;
+	ASSERT(cur < folio_end(folio));
+	return min(range_end, folio_end(folio)) - cur;
 }
 
 int __init btrfs_init_compress(void);
diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index e5739835ad02f0..ed15034f33d736 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -886,7 +886,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 	}
 
 	folio_start = folio_pos(folio);
-	folio_last = folio_pos(folio) + folio_size(folio) - 1;
+	folio_last = folio_end(folio) - 1;
 	/* Wait for any existing ordered extent in the range */
 	while (1) {
 		struct btrfs_ordered_extent *ordered;
@@ -1178,8 +1178,7 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
 
 		if (!folio)
 			break;
-		if (start >= folio_pos(folio) + folio_size(folio) ||
-		    start + len <= folio_pos(folio))
+		if (start >= folio_end(folio) || start + len <= folio_pos(folio))
 			continue;
 		btrfs_folio_clamp_clear_checked(fs_info, folio, start, len);
 		btrfs_folio_clamp_set_dirty(fs_info, folio, start, len);
@@ -1220,7 +1219,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 			folios[i] = NULL;
 			goto free_folios;
 		}
-		cur = folio_pos(folios[i]) + folio_size(folios[i]);
+		cur = folio_end(folios[i]);
 	}
 	for (int i = 0; i < nr_pages; i++) {
 		if (!folios[i])
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index c562b589876e50..9c099e672236e2 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -266,8 +266,7 @@ static noinline int lock_delalloc_folios(struct inode *inode,
 				goto out;
 			}
 			range_start = max_t(u64, folio_pos(folio), start);
-			range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
-					  end + 1) - range_start;
+			range_len = min_t(u64, folio_end(folio), end + 1) - range_start;
 			btrfs_folio_set_lock(fs_info, folio, range_start, range_len);
 
 			processed_end = range_start + range_len - 1;
@@ -321,7 +320,7 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
 	ASSERT(orig_end > orig_start);
 
 	/* The range should at least cover part of the folio */
-	ASSERT(!(orig_start >= folio_pos(locked_folio) + folio_size(locked_folio) ||
+	ASSERT(!(orig_start >= folio_end(locked_folio) ||
 		 orig_end <= folio_pos(locked_folio)));
 again:
 	/* step one, find a bunch of delalloc bytes starting at start */
@@ -419,7 +418,7 @@ static void end_folio_read(struct folio *folio, bool uptodate, u64 start, u32 le
 	struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
 
 	ASSERT(folio_pos(folio) <= start &&
-	       start + len <= folio_pos(folio) + folio_size(folio));
+	       start + len <= folio_end(folio));
 
 	if (uptodate && btrfs_verify_folio(folio, start, len))
 		btrfs_folio_set_uptodate(fs_info, folio, start, len);
@@ -1086,7 +1085,7 @@ static bool can_skip_one_ordered_range(struct btrfs_inode *inode,
 	 * finished our folio read and unlocked the folio.
 	 */
 	if (btrfs_folio_test_dirty(fs_info, folio, cur, blocksize)) {
-		u64 range_len = min(folio_pos(folio) + folio_size(folio),
+		u64 range_len = min(folio_end(folio),
 				    ordered->file_offset + ordered->num_bytes) - cur;
 
 		ret = true;
@@ -1108,7 +1107,7 @@ static bool can_skip_one_ordered_range(struct btrfs_inode *inode,
 	 * So we return true and update @next_ret to the OE/folio boundary.
 	 */
 	if (btrfs_folio_test_uptodate(fs_info, folio, cur, blocksize)) {
-		u64 range_len = min(folio_pos(folio) + folio_size(folio),
+		u64 range_len = min(folio_end(folio),
 				    ordered->file_offset + ordered->num_bytes) - cur;
 
 		/*
@@ -2085,7 +2084,7 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
 	for (int i = 0; i < num_extent_folios(eb); i++) {
 		struct folio *folio = eb->folios[i];
 		u64 range_start = max_t(u64, eb->start, folio_pos(folio));
-		u32 range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
+		u32 range_len = min_t(u64, folio_end(folio),
 				      eb->start + eb->len) - range_start;
 
 		folio_lock(folio);
@@ -2489,7 +2488,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
 			continue;
 		}
 
-		cur_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1, end);
+		cur_end = min_t(u64, folio_end(folio) - 1, end);
 		cur_len = cur_end + 1 - cur;
 
 		ASSERT(folio_test_locked(folio));
@@ -3726,7 +3725,7 @@ int read_extent_buffer_pages_nowait(struct extent_buffer *eb, int mirror_num,
 	for (int i = 0; i < num_extent_folios(eb); i++) {
 		struct folio *folio = eb->folios[i];
 		u64 range_start = max_t(u64, eb->start, folio_pos(folio));
-		u32 range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
+		u32 range_len = min_t(u64, folio_end(folio),
 				      eb->start + eb->len) - range_start;
 
 		bio_add_folio_nofail(&bbio->bio, folio, range_len,
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 8ce6f45f45e0bb..12fa50e2fd39cf 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -89,8 +89,7 @@ int btrfs_dirty_folio(struct btrfs_inode *inode, struct folio *folio, loff_t pos
 	num_bytes = round_up(write_bytes + pos - start_pos,
 			     fs_info->sectorsize);
 	ASSERT(num_bytes <= U32_MAX);
-	ASSERT(folio_pos(folio) <= pos &&
-	       folio_pos(folio) + folio_size(folio) >= pos + write_bytes);
+	ASSERT(folio_pos(folio) <= pos && folio_end(folio) >= pos + write_bytes);
 
 	end_of_last_block = start_pos + num_bytes - 1;
 
@@ -801,7 +800,7 @@ static int prepare_uptodate_folio(struct inode *inode, struct folio *folio, u64
 				  u64 len)
 {
 	u64 clamp_start = max_t(u64, pos, folio_pos(folio));
-	u64 clamp_end = min_t(u64, pos + len, folio_pos(folio) + folio_size(folio));
+	u64 clamp_end = min_t(u64, pos + len, folio_end(folio));
 	const u32 blocksize = inode_to_fs_info(inode)->sectorsize;
 	int ret = 0;
 
@@ -1233,8 +1232,8 @@ static int copy_one_range(struct btrfs_inode *inode, struct iov_iter *iter,
 	 * The reserved range goes beyond the current folio, shrink the reserved
 	 * space to the folio boundary.
 	 */
-	if (reserved_start + reserved_len > folio_pos(folio) + folio_size(folio)) {
-		const u64 last_block = folio_pos(folio) + folio_size(folio);
+	if (reserved_start + reserved_len > folio_end(folio)) {
+		const u64 last_block = folio_end(folio);
 
 		shrink_reserved_space(inode, *data_reserved, reserved_start,
 				      reserved_len, last_block - reserved_start,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 98abb101623aa1..162ccd58613e15 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2328,8 +2328,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct folio *locked_fol
 	 * The range must cover part of the @locked_folio, or a return of 1
 	 * can confuse the caller.
 	 */
-	ASSERT(!(end <= folio_pos(locked_folio) ||
-		 start >= folio_pos(locked_folio) + folio_size(locked_folio)));
+	ASSERT(!(end <= folio_pos(locked_folio) || start >= folio_end(locked_folio)));
 
 	if (should_nocow(inode, start, end)) {
 		ret = run_delalloc_nocow(inode, locked_folio, start, end);
@@ -2737,7 +2736,7 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
 	struct btrfs_inode *inode = fixup->inode;
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	u64 page_start = folio_pos(folio);
-	u64 page_end = folio_pos(folio) + folio_size(folio) - 1;
+	u64 page_end = folio_end(folio) - 1;
 	int ret = 0;
 	bool free_delalloc_space = true;
 
@@ -4818,7 +4817,7 @@ static int truncate_block_zero_beyond_eof(struct btrfs_inode *inode, u64 start)
 	 */
 
 	zero_start = max_t(u64, folio_pos(folio), start);
-	zero_end = folio_pos(folio) + folio_size(folio) - 1;
+	zero_end = folio_end(folio) - 1;
 	folio_zero_range(folio, zero_start - folio_pos(folio),
 			 zero_end - zero_start + 1);
 
@@ -4998,8 +4997,7 @@ int btrfs_truncate_block(struct btrfs_inode *inode, u64 offset, u64 start, u64 e
 		 * not reach disk, it still affects our page caches.
 		 */
 		zero_start = max_t(u64, folio_pos(folio), start);
-		zero_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
-				 end);
+		zero_end = min_t(u64, folio_end(folio) - 1, end);
 	} else {
 		zero_start = max_t(u64, block_start, start);
 		zero_end = min_t(u64, block_end, end);
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 9212ce110cdeef..2829f20d7bb59d 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -359,7 +359,7 @@ static bool can_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
 	if (folio) {
 		ASSERT(folio->mapping);
 		ASSERT(folio_pos(folio) <= file_offset);
-		ASSERT(file_offset + len <= folio_pos(folio) + folio_size(folio));
+		ASSERT(file_offset + len <= folio_end(folio));
 
 		/*
 		 * Ordered flag indicates whether we still have
diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
index 9b63a4d1c98906..2c5c9262b1a83f 100644
--- a/fs/btrfs/subpage.c
+++ b/fs/btrfs/subpage.c
@@ -187,7 +187,7 @@ static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info,
 	 */
 	if (folio->mapping)
 		ASSERT(folio_pos(folio) <= start &&
-		       start + len <= folio_pos(folio) + folio_size(folio));
+		       start + len <= folio_end(folio));
 }
 
 #define subpage_calc_start_bit(fs_info, folio, name, start, len)	\
@@ -216,8 +216,7 @@ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len)
 	if (folio_pos(folio) >= orig_start + orig_len)
 		*len = 0;
 	else
-		*len = min_t(u64, folio_pos(folio) + folio_size(folio),
-			     orig_start + orig_len) - *start;
+		*len = min_t(u64, folio_end(folio), orig_start + orig_len) - *start;
 }
 
 static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH 1/2] btrfs: add helper folio_end()
  2025-06-03  8:16 ` [PATCH 1/2] btrfs: add helper folio_end() David Sterba
@ 2025-06-03 18:54   ` Boris Burkov
  2025-06-03 19:36     ` David Sterba
  2025-06-03 22:46     ` David Sterba
  0 siblings, 2 replies; 7+ messages in thread
From: Boris Burkov @ 2025-06-03 18:54 UTC (permalink / raw)
  To: David Sterba; +Cc: linux-btrfs

On Tue, Jun 03, 2025 at 10:16:17AM +0200, David Sterba wrote:
> There are several cases of folio_pos + folio_size, add a convenience
> helper for that. Rename local variable in defrag_prepare_one_folio() to
> avoid name clash.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/defrag.c | 8 ++++----
>  fs/btrfs/misc.h   | 7 +++++++
>  2 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
> index 6dca263b224e87..e5739835ad02f0 100644
> --- a/fs/btrfs/defrag.c
> +++ b/fs/btrfs/defrag.c
> @@ -849,7 +849,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
>  	struct address_space *mapping = inode->vfs_inode.i_mapping;
>  	gfp_t mask = btrfs_alloc_write_mask(mapping);
>  	u64 folio_start;
> -	u64 folio_end;
> +	u64 folio_last;

This is nitpicky, but I think introducing the new word "last" in an
inconsistent fashion is a mistake.

In patch 2 at truncate_block_zero_beyond_eof() and
btrfs_writepage_fixup_worker, you have variables called "X_end" that get
assigned to folio_end() - 1. Either those should also get called
"X_last" or this one should have "end" in its name.

>  	struct extent_state *cached_state = NULL;
>  	struct folio *folio;
>  	int ret;
> @@ -886,14 +886,14 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
>  	}
>  
>  	folio_start = folio_pos(folio);
> -	folio_end = folio_pos(folio) + folio_size(folio) - 1;
> +	folio_last = folio_pos(folio) + folio_size(folio) - 1;
>  	/* Wait for any existing ordered extent in the range */
>  	while (1) {
>  		struct btrfs_ordered_extent *ordered;
>  
> -		btrfs_lock_extent(&inode->io_tree, folio_start, folio_end, &cached_state);
> +		btrfs_lock_extent(&inode->io_tree, folio_start, folio_last, &cached_state);
>  		ordered = btrfs_lookup_ordered_range(inode, folio_start, folio_size(folio));
> -		btrfs_unlock_extent(&inode->io_tree, folio_start, folio_end, &cached_state);
> +		btrfs_unlock_extent(&inode->io_tree, folio_start, folio_last, &cached_state);
>  		if (!ordered)
>  			break;
>  
> diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
> index 9cc292402696cc..ff5eac84d819d8 100644
> --- a/fs/btrfs/misc.h
> +++ b/fs/btrfs/misc.h
> @@ -7,6 +7,8 @@
>  #include <linux/bitmap.h>
>  #include <linux/sched.h>
>  #include <linux/wait.h>
> +#include <linux/mm.h>
> +#include <linux/pagemap.h>
>  #include <linux/math64.h>
>  #include <linux/rbtree.h>
>  
> @@ -158,4 +160,9 @@ static inline bool bitmap_test_range_all_zero(const unsigned long *addr,
>  	return (found_set == start + nbits);
>  }
>  
> +static inline u64 folio_end(struct folio *folio)
> +{
> +	return folio_pos(folio) + folio_size(folio);
> +}
> +

Is there a reason we can't propose this as a generic folio helper
function alongside folio_pos() and folio_size()? Too many variables out
there called folio_end from places that would include it?

>  #endif
> -- 
> 2.49.0
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 1/2] btrfs: add helper folio_end()
  2025-06-03 18:54   ` Boris Burkov
@ 2025-06-03 19:36     ` David Sterba
  2025-06-03 20:21       ` Boris Burkov
  2025-06-03 22:46     ` David Sterba
  1 sibling, 1 reply; 7+ messages in thread
From: David Sterba @ 2025-06-03 19:36 UTC (permalink / raw)
  To: Boris Burkov; +Cc: David Sterba, linux-btrfs

On Tue, Jun 03, 2025 at 11:54:42AM -0700, Boris Burkov wrote:
> On Tue, Jun 03, 2025 at 10:16:17AM +0200, David Sterba wrote:
> > There are several cases of folio_pos + folio_size, add a convenience
> > helper for that. Rename local variable in defrag_prepare_one_folio() to
> > avoid name clash.
> > 
> > Signed-off-by: David Sterba <dsterba@suse.com>
> > ---
> >  fs/btrfs/defrag.c | 8 ++++----
> >  fs/btrfs/misc.h   | 7 +++++++
> >  2 files changed, 11 insertions(+), 4 deletions(-)
> > 
> > diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
> > index 6dca263b224e87..e5739835ad02f0 100644
> > --- a/fs/btrfs/defrag.c
> > +++ b/fs/btrfs/defrag.c
> > @@ -849,7 +849,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
> >  	struct address_space *mapping = inode->vfs_inode.i_mapping;
> >  	gfp_t mask = btrfs_alloc_write_mask(mapping);
> >  	u64 folio_start;
> > -	u64 folio_end;
> > +	u64 folio_last;
> 
> This is nitpicky, but I think introducing the new word "last" in an
> inconsistent fashion is a mistake.
> 
> In patch 2 at truncate_block_zero_beyond_eof() and
> btrfs_writepage_fixup_worker, you have variables called "X_end" that get
> assigned to folio_end() - 1. Either those should also get called
> "X_last" or this one should have "end" in its name.

Ok, then the "-1" should be renamed to _last, so it's "last byte of the
range", but unfortunately the "_end" suffix could mean the same.

> >  	struct extent_state *cached_state = NULL;
> >  	struct folio *folio;
> >  	int ret;
> > diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
> > index 9cc292402696cc..ff5eac84d819d8 100644
> > --- a/fs/btrfs/misc.h
> > +++ b/fs/btrfs/misc.h
> > @@ -158,4 +160,9 @@ static inline bool bitmap_test_range_all_zero(const unsigned long *addr,
> >  	return (found_set == start + nbits);
> >  }
> >  
> > +static inline u64 folio_end(struct folio *folio)
> > +{
> > +	return folio_pos(folio) + folio_size(folio);
> > +}
> 
> > Is there a reason we can't propose this as a generic folio helper
> function alongside folio_pos() and folio_size()?

We can eventually. The difference is that now it's a local convenience
helper while in MM it would be part of the API and I haven't done the
extensive research of its use.

A quick grep (folio_size + folio_end) in fs/ shows

     24 btrfs
      4 iomap
      4 ext4
      2 xfs
      2 netfs
      1 gfs2
      1 f2fs
      1 buffer.c
      1 bcachefs

where bcachefs has its own helper folio_end_pos() with 19 uses. In mm/
there are 2.

> Too many variables out
> there called folio_end from places that would include it?

I found only a handful of cases, so should be doable.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 1/2] btrfs: add helper folio_end()
  2025-06-03 19:36     ` David Sterba
@ 2025-06-03 20:21       ` Boris Burkov
  0 siblings, 0 replies; 7+ messages in thread
From: Boris Burkov @ 2025-06-03 20:21 UTC (permalink / raw)
  To: David Sterba; +Cc: David Sterba, linux-btrfs

On Tue, Jun 03, 2025 at 09:36:59PM +0200, David Sterba wrote:
> On Tue, Jun 03, 2025 at 11:54:42AM -0700, Boris Burkov wrote:
> > On Tue, Jun 03, 2025 at 10:16:17AM +0200, David Sterba wrote:
> > > There are several cases of folio_pos + folio_size, add a convenience
> > > helper for that. Rename local variable in defrag_prepare_one_folio() to
> > > avoid name clash.
> > > 
> > > Signed-off-by: David Sterba <dsterba@suse.com>
> > > ---
> > >  fs/btrfs/defrag.c | 8 ++++----
> > >  fs/btrfs/misc.h   | 7 +++++++
> > >  2 files changed, 11 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
> > > index 6dca263b224e87..e5739835ad02f0 100644
> > > --- a/fs/btrfs/defrag.c
> > > +++ b/fs/btrfs/defrag.c
> > > @@ -849,7 +849,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
> > >  	struct address_space *mapping = inode->vfs_inode.i_mapping;
> > >  	gfp_t mask = btrfs_alloc_write_mask(mapping);
> > >  	u64 folio_start;
> > > -	u64 folio_end;
> > > +	u64 folio_last;
> > 
> > This is nitpicky, but I think introducing the new word "last" in an
> > inconsistent fashion is a mistake.
> > 
> > In patch 2 at truncate_block_zero_beyond_eof() and
> > btrfs_writepage_fixup_worker, you have variables called "X_end" that get
> > assigned to folio_end() - 1. Either those should also get called
> > "X_last" or this one should have "end" in its name.
> 
> Ok, then the "-1" should be renamed to _last, so it's "last byte of the
> range", but unfortunately the "_end" suffix could mean the same.
> 

Agreed that it's still not supremely clear. Whatever you decide you
can add

Reviewed-by: Boris Burkov <boris@bur.io>

to both patches.

> > >  	struct extent_state *cached_state = NULL;
> > >  	struct folio *folio;
> > >  	int ret;
> > > diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
> > > index 9cc292402696cc..ff5eac84d819d8 100644
> > > --- a/fs/btrfs/misc.h
> > > +++ b/fs/btrfs/misc.h
> > > @@ -158,4 +160,9 @@ static inline bool bitmap_test_range_all_zero(const unsigned long *addr,
> > >  	return (found_set == start + nbits);
> > >  }
> > >  
> > > +static inline u64 folio_end(struct folio *folio)
> > > +{
> > > +	return folio_pos(folio) + folio_size(folio);
> > > +}
> > 
> > Is there a reason we can't propose this as a generic folio helper
> > function alongside folio_pos() and folio_size()?
> 
> We can eventually. The difference is that now it's a local convenience
> helper while in MM it would be part of the API and I haven't done the
> extensive research of its use.
> 
> A quick grep (folio_size + folio_end) in fs/ shows
> 
>      24 btrfs
>       4 iomap
>       4 ext4
>       2 xfs
>       2 netfs
>       1 gfs2
>       1 f2fs
>       1 buffer.c
>       1 bcachefs
> 
> where bcachefs has its own helper folio_end_pos() with 19 uses. In mm/
> there are 2.
> 

OK, thanks for looking into it. It seems like it's not a crazy change
either way, without too many relevant users.

> > Too many variables out
> > there called folio_end from places that would include it?
> 
> I found only a handful of cases, so should be doable.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 1/2] btrfs: add helper folio_end()
  2025-06-03 18:54   ` Boris Burkov
  2025-06-03 19:36     ` David Sterba
@ 2025-06-03 22:46     ` David Sterba
  1 sibling, 0 replies; 7+ messages in thread
From: David Sterba @ 2025-06-03 22:46 UTC (permalink / raw)
  To: Boris Burkov; +Cc: David Sterba, linux-btrfs

On Tue, Jun 03, 2025 at 11:54:42AM -0700, Boris Burkov wrote:
> > --- a/fs/btrfs/defrag.c
> > +++ b/fs/btrfs/defrag.c
> > @@ -849,7 +849,7 @@ static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
> >  	struct address_space *mapping = inode->vfs_inode.i_mapping;
> >  	gfp_t mask = btrfs_alloc_write_mask(mapping);
> >  	u64 folio_start;
> > -	u64 folio_end;
> > +	u64 folio_last;
> 
> This is nitpicky, but I think introducing the new word "last" in an
> inconsistent fashion is a mistake.
> 
> In patch 2 at truncate_block_zero_beyond_eof()

In truncate_block_zero_beyond_eof, if zero_end is folio_end() exactly,
then we don't have to do "zero_end - zero_start + 1" in the length
parameter of folio_zero_range(), as the -1 and +1 cancel out.

> and btrfs_writepage_fixup_worker,

Here the page_start and page_end are passed to the extent locking, where
the end of the range is "-1", but from what I've seen we're using "end"
in the variable names.

To avoid confusion I'd rename the folio_start and folio_end in
defrag_prepare_one_folio to lock_start and lock_end so the _last
semantic for ranges does exist.

> you have variables called "X_end" that get
> assigned to folio_end() - 1. Either those should also get called
> "X_last" or this one should have "end" in its name.

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2025-06-03 22:46 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-03  8:16 [PATCH 0/2] Folio pos + size cleanups David Sterba
2025-06-03  8:16 ` [PATCH 1/2] btrfs: add helper folio_end() David Sterba
2025-06-03 18:54   ` Boris Burkov
2025-06-03 19:36     ` David Sterba
2025-06-03 20:21       ` Boris Burkov
2025-06-03 22:46     ` David Sterba
2025-06-03  8:16 ` [PATCH 2/2] btrfs: use folio_end() where appropriate David Sterba

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox