public inbox for linux-kernel@vger.kernel.org
* [RFC PATCH 0/8] use cleanup.h in btrfs
@ 2025-11-12 18:49 Gladyshev Ilya
  2025-11-12 18:49 ` [RFC PATCH 1/8] btrfs: remove redundant label in __del_qgroup_relation Gladyshev Ilya
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

This series is my experiment with refactoring using cleanup guards.
In my opinion, RAII-style locking improves readability in most cases
and makes the code more robust against future changes, so I tried to
refactor the simple cases that really benefit from lock guards.

However, readability is subjective, so feel free to disagree with and
reject any of these changes; I won't insist on any of them. Please note
that patches 1-3 are useful even without lock guards.

I wasn't sure how to split this series, mostly because it's a lot of
small changes, so I split it by type of transformation:

1. Patches 1-3 contain preparation work and simple fixes I noticed.
2. Patches 4-6 gradually increase the complexity of the refactored
  situations, from simple lock/unlock pairs to scoped guards.
3. Patch 7 refactors functions whose control flow really benefits from
  removing cleanups on exit, e.g. getting rid of obscure if statements
  in exit paths.
4. Patch 8 is an example of overdone refactoring, and I expect it to be
  dropped anyway.

There are no TODOs for this series, but it's rough enough to be marked
as RFC.
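For readers unfamiliar with cleanup.h, a minimal userspace sketch of the
underlying mechanism (illustrative only, not btrfs or kernel code; names
are made up, and a pthread mutex stands in for a spinlock): a variable
annotated with __attribute__((cleanup)) has its cleanup function run
whenever it goes out of scope, so the unlock happens on every return
path automatically.

```c
#include <pthread.h>

/* Hypothetical demo lock and counter; not kernel code. */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static int protected_counter;

/* Cleanup handler: runs when the guard variable leaves scope. */
static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

int guarded_increment(int abort_early)
{
	pthread_mutex_lock(&demo_lock);
	pthread_mutex_t *guard __attribute__((cleanup(unlock_cleanup))) = &demo_lock;

	if (abort_early)
		return -1;		/* demo_lock is dropped here... */

	protected_counter++;
	return protected_counter;	/* ...and here, automatically */
}
```

If the early return leaked the lock, a second call would deadlock; with
the cleanup attribute both paths release it.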

Gladyshev Ilya (8):
  btrfs: remove redundant label in __del_qgroup_relation
  btrfs: move kfree out of btrfs_create_qgroup's cleanup path
  btrfs: simplify control flow in scrub_simple_mirror
  btrfs: simplify function protections with guards
  btrfs: use cleanup.h guard()s to simplify unlocks on return
  btrfs: simplify cleanup via scoped_guard()
  btrfs: simplify return path via cleanup.h
  btrfs: simplify cleanup in btrfs_add_qgroup_relation

 fs/btrfs/block-group.c      |  24 ++----
 fs/btrfs/compression.c      |  13 ++-
 fs/btrfs/discard.c          |  20 ++---
 fs/btrfs/disk-io.c          |   9 +-
 fs/btrfs/extent-io-tree.c   |  72 ++++++----------
 fs/btrfs/extent-tree.c      | 104 ++++++++++-------------
 fs/btrfs/extent_io.c        |  33 ++++----
 fs/btrfs/file-item.c        |   6 +-
 fs/btrfs/free-space-cache.c |  87 +++++++------------
 fs/btrfs/fs.c               |   9 +-
 fs/btrfs/inode.c            |   3 +-
 fs/btrfs/ordered-data.c     |  67 ++++++---------
 fs/btrfs/qgroup.c           | 165 ++++++++++++++----------------------
 fs/btrfs/raid56.c           |  20 ++---
 fs/btrfs/scrub.c            |  19 ++---
 fs/btrfs/send.c             |  40 ++++-----
 fs/btrfs/space-info.c       |   4 +-
 fs/btrfs/subpage.c          |  41 +++------
 fs/btrfs/tree-log.c         |  28 +++---
 fs/btrfs/volumes.c          |   3 +-
 fs/btrfs/zoned.c            |  13 +--
 fs/btrfs/zstd.c             |  13 +--
 22 files changed, 299 insertions(+), 494 deletions(-)


base-commit: 24172e0d79900908cf5ebf366600616d29c9b417
-- 
2.51.1.dirty


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [RFC PATCH 1/8] btrfs: remove redundant label in __del_qgroup_relation
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-12 18:49 ` [RFC PATCH 2/8] btrfs: move kfree out of btrfs_create_qgroup's cleanup path Gladyshev Ilya
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

The 'out' label in __del_qgroup_relation() contains only a direct
return and performs no actual cleanup. Replace all 'goto out'
statements with direct returns to improve readability.

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/qgroup.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 31ad8580322a..9904bcfd3a60 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1600,10 +1600,8 @@ static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
 	int ret = 0;
 	int ret2;
 
-	if (!fs_info->quota_root) {
-		ret = -ENOTCONN;
-		goto out;
-	}
+	if (!fs_info->quota_root)
+		return -ENOTCONN;
 
 	member = find_qgroup_rb(fs_info, src);
 	parent = find_qgroup_rb(fs_info, dst);
@@ -1625,10 +1623,10 @@ static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
 delete_item:
 	ret = del_qgroup_relation_item(trans, src, dst);
 	if (ret < 0 && ret != -ENOENT)
-		goto out;
+		return ret;
 	ret2 = del_qgroup_relation_item(trans, dst, src);
 	if (ret2 < 0 && ret2 != -ENOENT)
-		goto out;
+		return ret;
 
 	/* At least one deletion succeeded, return 0 */
 	if (!ret || !ret2)
@@ -1640,7 +1638,6 @@ static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
 		ret = quick_update_accounting(fs_info, src, dst, -1);
 		spin_unlock(&fs_info->qgroup_lock);
 	}
-out:
 	return ret;
 }
 
-- 
2.51.1.dirty



* [RFC PATCH 2/8] btrfs: move kfree out of btrfs_create_qgroup's cleanup path
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
  2025-11-12 18:49 ` [RFC PATCH 1/8] btrfs: remove redundant label in __del_qgroup_relation Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-12 20:30   ` Qu Wenruo
  2025-11-12 18:49 ` [RFC PATCH 3/8] btrfs: simplify control flow in scrub_simple_mirror Gladyshev Ilya
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

Relocate kfree() from the generic cleanup path to the specific error
exit where the allocation would otherwise leak. This prepares for
later simplification: the 'out' label can then be removed and a mutex
guard used for cleaner resource management.

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/qgroup.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 9904bcfd3a60..a8474d0a9c58 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1659,7 +1659,7 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
 	struct btrfs_fs_info *fs_info = trans->fs_info;
 	struct btrfs_root *quota_root;
 	struct btrfs_qgroup *qgroup;
-	struct btrfs_qgroup *prealloc = NULL;
+	struct btrfs_qgroup *prealloc;
 	int ret = 0;
 
 	mutex_lock(&fs_info->qgroup_ioctl_lock);
@@ -1681,18 +1681,18 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
 	}
 
 	ret = add_qgroup_item(trans, quota_root, qgroupid);
-	if (ret)
+	if (ret) {
+		kfree(prealloc);
 		goto out;
+	}
 
 	spin_lock(&fs_info->qgroup_lock);
 	qgroup = add_qgroup_rb(fs_info, prealloc, qgroupid);
 	spin_unlock(&fs_info->qgroup_lock);
-	prealloc = NULL;
 
 	ret = btrfs_sysfs_add_one_qgroup(fs_info, qgroup);
 out:
 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
-	kfree(prealloc);
 	return ret;
 }
 
-- 
2.51.1.dirty



* [RFC PATCH 3/8] btrfs: simplify control flow in scrub_simple_mirror
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
  2025-11-12 18:49 ` [RFC PATCH 1/8] btrfs: remove redundant label in __del_qgroup_relation Gladyshev Ilya
  2025-11-12 18:49 ` [RFC PATCH 2/8] btrfs: move kfree out of btrfs_create_qgroup's cleanup path Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-12 18:49 ` [RFC PATCH 4/8] btrfs: simplify function protections with guards Gladyshev Ilya
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

Replace the 'ret = VAL; break;' pattern with direct return statements
to reduce indirection and improve code clarity.

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/scrub.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index ba20d9286a34..dd4e6d20e35f 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -2253,7 +2253,7 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
 	const u64 logical_end = logical_start + logical_length;
 	u64 cur_logical = logical_start;
-	int ret = 0;
+	int ret;
 
 	/* The range must be inside the bg */
 	ASSERT(logical_start >= bg->start && logical_end <= bg->start + bg->length);
@@ -2265,10 +2265,9 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 
 		/* Canceled? */
 		if (atomic_read(&fs_info->scrub_cancel_req) ||
-		    atomic_read(&sctx->cancel_req)) {
-			ret = -ECANCELED;
-			break;
-		}
+		    atomic_read(&sctx->cancel_req))
+			return -ECANCELED;
+
 		/* Paused? */
 		if (atomic_read(&fs_info->scrub_pause_req)) {
 			/* Push queued extents */
@@ -2278,8 +2277,7 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 		spin_lock(&bg->lock);
 		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &bg->runtime_flags)) {
 			spin_unlock(&bg->lock);
-			ret = 0;
-			break;
+			return 0;
 		}
 		spin_unlock(&bg->lock);
 
@@ -2291,11 +2289,10 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 			spin_lock(&sctx->stat_lock);
 			sctx->stat.last_physical = physical + logical_length;
 			spin_unlock(&sctx->stat_lock);
-			ret = 0;
-			break;
+			return 0;
 		}
 		if (ret < 0)
-			break;
+			return ret;
 
 		/* queue_scrub_stripe() returned 0, @found_logical must be updated. */
 		ASSERT(found_logical != U64_MAX);
@@ -2304,7 +2301,7 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 		/* Don't hold CPU for too long time */
 		cond_resched();
 	}
-	return ret;
+	return 0;
 }
 
 /* Calculate the full stripe length for simple stripe based profiles */
-- 
2.51.1.dirty



* [RFC PATCH 4/8] btrfs: simplify function protections with guards
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
                   ` (2 preceding siblings ...)
  2025-11-12 18:49 ` [RFC PATCH 3/8] btrfs: simplify control flow in scrub_simple_mirror Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-13  8:43   ` David Sterba
  2025-11-12 18:49 ` [RFC PATCH 5/8] btrfs: use cleanup.h guard()s to simplify unlocks on return Gladyshev Ilya
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

Replace cases like

void foo(void)
{
	spin_lock(&lock);
	... some code ...
	spin_unlock(&lock);
}

with

void foo(void)
{
	guard(spinlock)(&lock);
	... some code ...
}

While this has no measurable impact, it makes it clear that the whole
function body runs under the lock and prevents future errors when
additional cleanup paths are added.

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/discard.c          | 10 +++-------
 fs/btrfs/disk-io.c          |  3 +--
 fs/btrfs/extent-io-tree.c   | 15 +++++----------
 fs/btrfs/free-space-cache.c |  4 +---
 fs/btrfs/inode.c            |  3 +--
 fs/btrfs/ordered-data.c     |  6 ++----
 fs/btrfs/qgroup.c           |  9 +++------
 fs/btrfs/raid56.c           |  7 ++-----
 fs/btrfs/send.c             |  3 +--
 fs/btrfs/subpage.c          | 36 +++++++++---------------------------
 fs/btrfs/volumes.c          |  3 +--
 11 files changed, 29 insertions(+), 70 deletions(-)

diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
index 89fe85778115..9bd7a8ad45c4 100644
--- a/fs/btrfs/discard.c
+++ b/fs/btrfs/discard.c
@@ -158,7 +158,7 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
 	bool running = false;
 	bool queued = false;
 
-	spin_lock(&discard_ctl->lock);
+	guard(spinlock)(&discard_ctl->lock);
 
 	if (block_group == discard_ctl->block_group) {
 		running = true;
@@ -171,8 +171,6 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
 	if (queued)
 		btrfs_put_block_group(block_group);
 
-	spin_unlock(&discard_ctl->lock);
-
 	return running;
 }
 
@@ -236,7 +234,7 @@ static struct btrfs_block_group *peek_discard_list(
 {
 	struct btrfs_block_group *block_group;
 
-	spin_lock(&discard_ctl->lock);
+	guard(spinlock)(&discard_ctl->lock);
 again:
 	block_group = find_next_block_group(discard_ctl, now);
 
@@ -276,7 +274,6 @@ static struct btrfs_block_group *peek_discard_list(
 		*discard_state = block_group->discard_state;
 		*discard_index = block_group->discard_index;
 	}
-	spin_unlock(&discard_ctl->lock);
 
 	return block_group;
 }
@@ -694,7 +691,7 @@ void btrfs_discard_punt_unused_bgs_list(struct btrfs_fs_info *fs_info)
 {
 	struct btrfs_block_group *block_group, *next;
 
-	spin_lock(&fs_info->unused_bgs_lock);
+	guard(spinlock)(&fs_info->unused_bgs_lock);
 	/* We enabled async discard, so punt all to the queue */
 	list_for_each_entry_safe(block_group, next, &fs_info->unused_bgs,
 				 bg_list) {
@@ -706,7 +703,6 @@ void btrfs_discard_punt_unused_bgs_list(struct btrfs_fs_info *fs_info)
 		 */
 		btrfs_put_block_group(block_group);
 	}
-	spin_unlock(&fs_info->unused_bgs_lock);
 }
 
 /*
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 0aa7e5d1b05f..55c29557c26c 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -4759,7 +4759,7 @@ static void btrfs_free_all_qgroup_pertrans(struct btrfs_fs_info *fs_info)
 	int i;
 	int ret;
 
-	spin_lock(&fs_info->fs_roots_radix_lock);
+	guard(spinlock)(&fs_info->fs_roots_radix_lock);
 	while (1) {
 		ret = radix_tree_gang_lookup_tag(&fs_info->fs_roots_radix,
 						 (void **)gang, 0,
@@ -4776,7 +4776,6 @@ static void btrfs_free_all_qgroup_pertrans(struct btrfs_fs_info *fs_info)
 					BTRFS_ROOT_TRANS_TAG);
 		}
 	}
-	spin_unlock(&fs_info->fs_roots_radix_lock);
 }
 
 void btrfs_cleanup_one_transaction(struct btrfs_transaction *cur_trans)
diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index bb2ca1c9c7b0..3a9774bc714f 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -118,7 +118,7 @@ void btrfs_extent_io_tree_release(struct extent_io_tree *tree)
 	struct extent_state *state;
 	struct extent_state *tmp;
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 	root = tree->state;
 	tree->state = RB_ROOT;
 	rbtree_postorder_for_each_entry_safe(state, tmp, &root, rb_node) {
@@ -139,7 +139,6 @@ void btrfs_extent_io_tree_release(struct extent_io_tree *tree)
 	 * be accessing the tree anymore.
 	 */
 	ASSERT(RB_EMPTY_ROOT(&tree->state));
-	spin_unlock(&tree->lock);
 }
 
 static struct extent_state *alloc_extent_state(gfp_t mask)
@@ -958,7 +957,7 @@ bool btrfs_find_contiguous_extent_bit(struct extent_io_tree *tree, u64 start,
 
 	ASSERT(!btrfs_fs_incompat(btrfs_extent_io_tree_to_fs_info(tree), NO_HOLES));
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 	state = find_first_extent_bit_state(tree, start, bits);
 	if (state) {
 		*start_ret = state->start;
@@ -970,7 +969,6 @@ bool btrfs_find_contiguous_extent_bit(struct extent_io_tree *tree, u64 start,
 		}
 		ret = true;
 	}
-	spin_unlock(&tree->lock);
 	return ret;
 }
 
@@ -1757,7 +1755,7 @@ bool btrfs_test_range_bit_exists(struct extent_io_tree *tree, u64 start, u64 end
 
 	ASSERT(is_power_of_2(bit));
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 	state = tree_search(tree, start);
 	while (state) {
 		if (state->start > end)
@@ -1772,7 +1770,6 @@ bool btrfs_test_range_bit_exists(struct extent_io_tree *tree, u64 start, u64 end
 			break;
 		state = next_state(state);
 	}
-	spin_unlock(&tree->lock);
 	return bitset;
 }
 
@@ -1790,7 +1787,7 @@ void btrfs_get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *
 
 	*bits = 0;
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 	state = tree_search(tree, start);
 	if (state && state->start < end) {
 		*cached_state = state;
@@ -1807,7 +1804,6 @@ void btrfs_get_range_bits(struct extent_io_tree *tree, u64 start, u64 end, u32 *
 
 		state = next_state(state);
 	}
-	spin_unlock(&tree->lock);
 }
 
 /*
@@ -1931,12 +1927,11 @@ struct extent_state *btrfs_next_extent_state(struct extent_io_tree *tree,
 {
 	struct extent_state *next;
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 	ASSERT(extent_state_in_tree(state));
 	next = next_state(state);
 	if (next)
 		refcount_inc(&next->refs);
-	spin_unlock(&tree->lock);
 
 	return next;
 }
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index ab873bd67192..30e361ab02dc 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -3845,7 +3845,7 @@ static void reset_trimming_bitmap(struct btrfs_free_space_ctl *ctl, u64 offset)
 {
 	struct btrfs_free_space *entry;
 
-	spin_lock(&ctl->tree_lock);
+	guard(spinlock)(&ctl->tree_lock);
 	entry = tree_search_offset(ctl, offset, 1, 0);
 	if (entry) {
 		if (btrfs_free_space_trimmed(entry)) {
@@ -3855,8 +3855,6 @@ static void reset_trimming_bitmap(struct btrfs_free_space_ctl *ctl, u64 offset)
 		}
 		entry->trim_state = BTRFS_TRIM_STATE_UNTRIMMED;
 	}
-
-	spin_unlock(&ctl->tree_lock);
 }
 
 static void end_trimming_bitmap(struct btrfs_free_space_ctl *ctl,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 6282911e536f..fdcf4948fa56 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -10424,12 +10424,11 @@ void btrfs_update_inode_bytes(struct btrfs_inode *inode,
 	if (add_bytes == del_bytes)
 		return;
 
-	spin_lock(&inode->lock);
+	guard(spinlock)(&inode->lock);
 	if (del_bytes > 0)
 		inode_sub_bytes(&inode->vfs_inode, del_bytes);
 	if (add_bytes > 0)
 		inode_add_bytes(&inode->vfs_inode, add_bytes);
-	spin_unlock(&inode->lock);
 }
 
 /*
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 2829f20d7bb5..27a16bacdf9c 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -328,9 +328,8 @@ void btrfs_add_ordered_sum(struct btrfs_ordered_extent *entry,
 {
 	struct btrfs_inode *inode = entry->inode;
 
-	spin_lock_irq(&inode->ordered_tree_lock);
+	guard(spinlock_irq)(&inode->ordered_tree_lock);
 	list_add_tail(&sum->list, &entry->list);
-	spin_unlock_irq(&inode->ordered_tree_lock);
 }
 
 void btrfs_mark_ordered_extent_error(struct btrfs_ordered_extent *ordered)
@@ -1041,7 +1040,7 @@ void btrfs_get_ordered_extents_for_logging(struct btrfs_inode *inode,
 
 	btrfs_assert_inode_locked(inode);
 
-	spin_lock_irq(&inode->ordered_tree_lock);
+	guard(spinlock_irq)(&inode->ordered_tree_lock);
 	for (n = rb_first(&inode->ordered_tree); n; n = rb_next(n)) {
 		struct btrfs_ordered_extent *ordered;
 
@@ -1055,7 +1054,6 @@ void btrfs_get_ordered_extents_for_logging(struct btrfs_inode *inode,
 		refcount_inc(&ordered->refs);
 		trace_btrfs_ordered_extent_lookup_for_logging(inode, ordered);
 	}
-	spin_unlock_irq(&inode->ordered_tree_lock);
 }
 
 /*
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index a8474d0a9c58..683905f4481d 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -3973,7 +3973,7 @@ qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info)
 	struct rb_node *n;
 	struct btrfs_qgroup *qgroup;
 
-	spin_lock(&fs_info->qgroup_lock);
+	guard(spinlock)(&fs_info->qgroup_lock);
 	/* clear all current qgroup tracking information */
 	for (n = rb_first(&fs_info->qgroup_tree); n; n = rb_next(n)) {
 		qgroup = rb_entry(n, struct btrfs_qgroup, node);
@@ -3983,7 +3983,6 @@ qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info)
 		qgroup->excl_cmpr = 0;
 		qgroup_dirty(fs_info, qgroup);
 	}
-	spin_unlock(&fs_info->qgroup_lock);
 }
 
 int
@@ -4419,12 +4418,11 @@ static void add_root_meta_rsv(struct btrfs_root *root, int num_bytes,
 	if (num_bytes == 0)
 		return;
 
-	spin_lock(&root->qgroup_meta_rsv_lock);
+	guard(spinlock)(&root->qgroup_meta_rsv_lock);
 	if (type == BTRFS_QGROUP_RSV_META_PREALLOC)
 		root->qgroup_meta_rsv_prealloc += num_bytes;
 	else
 		root->qgroup_meta_rsv_pertrans += num_bytes;
-	spin_unlock(&root->qgroup_meta_rsv_lock);
 }
 
 static int sub_root_meta_rsv(struct btrfs_root *root, int num_bytes,
@@ -4436,7 +4434,7 @@ static int sub_root_meta_rsv(struct btrfs_root *root, int num_bytes,
 	if (num_bytes == 0)
 		return 0;
 
-	spin_lock(&root->qgroup_meta_rsv_lock);
+	guard(spinlock)(&root->qgroup_meta_rsv_lock);
 	if (type == BTRFS_QGROUP_RSV_META_PREALLOC) {
 		num_bytes = min_t(u64, root->qgroup_meta_rsv_prealloc,
 				  num_bytes);
@@ -4446,7 +4444,6 @@ static int sub_root_meta_rsv(struct btrfs_root *root, int num_bytes,
 				  num_bytes);
 		root->qgroup_meta_rsv_pertrans -= num_bytes;
 	}
-	spin_unlock(&root->qgroup_meta_rsv_lock);
 	return num_bytes;
 }
 
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 0135dceb7baa..6b9fda36d3c6 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -516,13 +516,12 @@ static void btrfs_clear_rbio_cache(struct btrfs_fs_info *info)
 
 	table = info->stripe_hash_table;
 
-	spin_lock(&table->cache_lock);
+	guard(spinlock)(&table->cache_lock);
 	while (!list_empty(&table->stripe_cache)) {
 		rbio = list_first_entry(&table->stripe_cache,
 					struct btrfs_raid_bio, stripe_cache);
 		__remove_rbio_from_cache(rbio);
 	}
-	spin_unlock(&table->cache_lock);
 }
 
 /*
@@ -1234,11 +1233,9 @@ static void index_rbio_pages(struct btrfs_raid_bio *rbio)
 {
 	struct bio *bio;
 
-	spin_lock(&rbio->bio_list_lock);
+	guard(spinlock)(&rbio->bio_list_lock);
 	bio_list_for_each(bio, &rbio->bio_list)
 		index_one_bio(rbio, bio);
-
-	spin_unlock(&rbio->bio_list_lock);
 }
 
 static void bio_get_trace_info(struct btrfs_raid_bio *rbio, struct bio *bio,
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 96a030d28e09..e7e33c9feca0 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -7980,7 +7980,7 @@ static int flush_delalloc_roots(struct send_ctx *sctx)
 
 static void btrfs_root_dec_send_in_progress(struct btrfs_root* root)
 {
-	spin_lock(&root->root_item_lock);
+	guard(spinlock)(&root->root_item_lock);
 	root->send_in_progress--;
 	/*
 	 * Not much left to do, we don't know why it's unbalanced and
@@ -7990,7 +7990,6 @@ static void btrfs_root_dec_send_in_progress(struct btrfs_root* root)
 		btrfs_err(root->fs_info,
 			  "send_in_progress unbalanced %d root %llu",
 			  root->send_in_progress, btrfs_root_id(root));
-	spin_unlock(&root->root_item_lock);
 }
 
 static void dedupe_in_progress_warn(const struct btrfs_root *root)
diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
index 5ca8d4db6722..8c8563e87aea 100644
--- a/fs/btrfs/subpage.c
+++ b/fs/btrfs/subpage.c
@@ -364,13 +364,11 @@ void btrfs_subpage_set_uptodate(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							uptodate, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	if (subpage_test_bitmap_all_set(fs_info, folio, uptodate))
 		folio_mark_uptodate(folio);
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 void btrfs_subpage_clear_uptodate(const struct btrfs_fs_info *fs_info,
@@ -379,12 +377,10 @@ void btrfs_subpage_clear_uptodate(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							uptodate, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	folio_clear_uptodate(folio);
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info,
@@ -417,14 +413,12 @@ bool btrfs_subpage_clear_and_test_dirty(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							dirty, start, len);
-	unsigned long flags;
 	bool last = false;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	if (subpage_test_bitmap_all_zero(fs_info, folio, dirty))
 		last = true;
-	spin_unlock_irqrestore(&bfs->lock, flags);
 	return last;
 }
 
@@ -444,9 +438,8 @@ void btrfs_subpage_set_writeback(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							writeback, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 
 	/*
@@ -467,7 +460,6 @@ void btrfs_subpage_set_writeback(const struct btrfs_fs_info *fs_info,
 		xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE);
 		xas_unlock_irqrestore(&xas, flags);
 	}
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 void btrfs_subpage_clear_writeback(const struct btrfs_fs_info *fs_info,
@@ -476,15 +468,13 @@ void btrfs_subpage_clear_writeback(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							writeback, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	if (subpage_test_bitmap_all_zero(fs_info, folio, writeback)) {
 		ASSERT(folio_test_writeback(folio));
 		folio_end_writeback(folio);
 	}
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 void btrfs_subpage_set_ordered(const struct btrfs_fs_info *fs_info,
@@ -493,12 +483,10 @@ void btrfs_subpage_set_ordered(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							ordered, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	folio_set_ordered(folio);
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 void btrfs_subpage_clear_ordered(const struct btrfs_fs_info *fs_info,
@@ -507,13 +495,11 @@ void btrfs_subpage_clear_ordered(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							ordered, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	if (subpage_test_bitmap_all_zero(fs_info, folio, ordered))
 		folio_clear_ordered(folio);
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 void btrfs_subpage_set_checked(const struct btrfs_fs_info *fs_info,
@@ -522,13 +508,11 @@ void btrfs_subpage_set_checked(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							checked, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	if (subpage_test_bitmap_all_set(fs_info, folio, checked))
 		folio_set_checked(folio);
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 void btrfs_subpage_clear_checked(const struct btrfs_fs_info *fs_info,
@@ -537,12 +521,10 @@ void btrfs_subpage_clear_checked(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
 							checked, start, len);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
 	folio_clear_checked(folio);
-	spin_unlock_irqrestore(&bfs->lock, flags);
 }
 
 /*
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 2bec544d8ba3..3ccf6c958388 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -8152,7 +8152,7 @@ bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr)
 	struct btrfs_swapfile_pin *sp;
 	struct rb_node *node;
 
-	spin_lock(&fs_info->swapfile_pins_lock);
+	guard(spinlock)(&fs_info->swapfile_pins_lock);
 	node = fs_info->swapfile_pins.rb_node;
 	while (node) {
 		sp = rb_entry(node, struct btrfs_swapfile_pin, node);
@@ -8163,7 +8163,6 @@ bool btrfs_pinned_by_swapfile(struct btrfs_fs_info *fs_info, void *ptr)
 		else
 			break;
 	}
-	spin_unlock(&fs_info->swapfile_pins_lock);
 	return node != NULL;
 }
 
-- 
2.51.1.dirty



* [RFC PATCH 5/8] btrfs: use cleanup.h guard()s to simplify unlocks on return
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
                   ` (3 preceding siblings ...)
  2025-11-12 18:49 ` [RFC PATCH 4/8] btrfs: simplify function protections with guards Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-12 18:49 ` [RFC PATCH 6/8] btrfs: simplify cleanup via scoped_guard() Gladyshev Ilya
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

Simplify cleanup in functions with multiple exit paths / cleanup gotos.
While its only measurable benefit is a slightly reduced code size, it
improves the readability and robustness of resource cleanup,
preventing future errors.
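The multi-exit case is where the scope-bound unlock pays off most. A
userspace sketch (illustrative, not btrfs code; names and the pthread
mutex are stand-ins): every early error return drops the lock
automatically, so the "out: unlock; return ret;" pattern disappears.

```c
#include <pthread.h>
#include <errno.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static int table[4];

static void drop_lock(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

int table_store(int idx, int v)
{
	pthread_mutex_lock(&table_lock);
	pthread_mutex_t *g __attribute__((cleanup(drop_lock))) = &table_lock;

	if (idx < 0 || idx >= 4)
		return -EINVAL;	/* unlocked here... */
	if (v < 0)
		return -ERANGE;	/* ...and here... */

	table[idx] = v;
	return 0;		/* ...and on success */
}
```

With an explicit unlock, each new error check would need either a goto
to a shared exit label or its own unlock call; the guard removes both.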

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/block-group.c      |  4 +--
 fs/btrfs/discard.c          | 10 ++-----
 fs/btrfs/disk-io.c          |  6 ++--
 fs/btrfs/extent-io-tree.c   | 36 +++++++++--------------
 fs/btrfs/file-item.c        |  6 ++--
 fs/btrfs/free-space-cache.c | 20 ++++---------
 fs/btrfs/fs.c               |  9 ++----
 fs/btrfs/ordered-data.c     | 15 ++++------
 fs/btrfs/qgroup.c           | 57 +++++++++++++------------------------
 fs/btrfs/raid56.c           | 13 ++-------
 fs/btrfs/space-info.c       |  4 +--
 fs/btrfs/subpage.c          |  5 +---
 fs/btrfs/tree-log.c         | 15 +++++-----
 fs/btrfs/zoned.c            | 13 +++------
 fs/btrfs/zstd.c             | 13 +++------
 15 files changed, 73 insertions(+), 153 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 5322ef2ae015..0ef8917e7a71 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -416,16 +416,14 @@ struct btrfs_caching_control *btrfs_get_caching_control(
 		struct btrfs_block_group *cache)
 {
 	struct btrfs_caching_control *ctl;
+	guard(spinlock)(&cache->lock);
 
-	spin_lock(&cache->lock);
 	if (!cache->caching_ctl) {
-		spin_unlock(&cache->lock);
 		return NULL;
 	}
 
 	ctl = cache->caching_ctl;
 	refcount_inc(&ctl->count);
-	spin_unlock(&cache->lock);
 	return ctl;
 }
 
diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
index 9bd7a8ad45c4..4dd9f58118bc 100644
--- a/fs/btrfs/discard.c
+++ b/fs/btrfs/discard.c
@@ -129,12 +129,11 @@ static void add_to_discard_unused_list(struct btrfs_discard_ctl *discard_ctl,
 {
 	bool queued;
 
-	spin_lock(&discard_ctl->lock);
+	guard(spinlock)(&discard_ctl->lock);
 
 	queued = !list_empty(&block_group->discard_list);
 
 	if (!btrfs_run_discard_work(discard_ctl)) {
-		spin_unlock(&discard_ctl->lock);
 		return;
 	}
 
@@ -148,8 +147,6 @@ static void add_to_discard_unused_list(struct btrfs_discard_ctl *discard_ctl,
 		btrfs_get_block_group(block_group);
 	list_add_tail(&block_group->discard_list,
 		      &discard_ctl->discard_list[BTRFS_DISCARD_INDEX_UNUSED]);
-
-	spin_unlock(&discard_ctl->lock);
 }
 
 static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
@@ -592,7 +589,7 @@ void btrfs_discard_calc_delay(struct btrfs_discard_ctl *discard_ctl)
 	if (!discardable_extents)
 		return;
 
-	spin_lock(&discard_ctl->lock);
+	guard(spinlock)(&discard_ctl->lock);
 
 	/*
 	 * The following is to fix a potential -1 discrepancy that we're not
@@ -611,7 +608,6 @@ void btrfs_discard_calc_delay(struct btrfs_discard_ctl *discard_ctl)
 			     &discard_ctl->discardable_bytes);
 
 	if (discardable_extents <= 0) {
-		spin_unlock(&discard_ctl->lock);
 		return;
 	}
 
@@ -630,8 +626,6 @@ void btrfs_discard_calc_delay(struct btrfs_discard_ctl *discard_ctl)
 
 	delay = clamp(delay, min_delay, BTRFS_DISCARD_MAX_DELAY_MSEC);
 	discard_ctl->delay_ms = delay;
-
-	spin_unlock(&discard_ctl->lock);
 }
 
 /*
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 55c29557c26c..23dd82a944ee 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1142,12 +1142,10 @@ static struct btrfs_root *btrfs_lookup_fs_root(struct btrfs_fs_info *fs_info,
 {
 	struct btrfs_root *root;
 
-	spin_lock(&fs_info->fs_roots_radix_lock);
+	guard(spinlock)(&fs_info->fs_roots_radix_lock);
 	root = radix_tree_lookup(&fs_info->fs_roots_radix,
 				 (unsigned long)root_id);
-	root = btrfs_grab_root(root);
-	spin_unlock(&fs_info->fs_roots_radix_lock);
-	return root;
+	return btrfs_grab_root(root);
 }
 
 static struct btrfs_root *btrfs_get_global_root(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index 3a9774bc714f..69ea2bd359a6 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -987,7 +987,7 @@ bool btrfs_find_delalloc_range(struct extent_io_tree *tree, u64 *start,
 	bool found = false;
 	u64 total_bytes = 0;
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 
 	/*
 	 * This search will find all the extents that end after our range
@@ -996,18 +996,18 @@ bool btrfs_find_delalloc_range(struct extent_io_tree *tree, u64 *start,
 	state = tree_search(tree, cur_start);
 	if (!state) {
 		*end = (u64)-1;
-		goto out;
+		return false;
 	}
 
 	while (state) {
 		if (found && (state->start != cur_start ||
 			      (state->state & EXTENT_BOUNDARY))) {
-			goto out;
+			return true;
 		}
 		if (!(state->state & EXTENT_DELALLOC)) {
 			if (!found)
 				*end = state->end;
-			goto out;
+			return found;
 		}
 		if (!found) {
 			*start = state->start;
@@ -1019,11 +1019,9 @@ bool btrfs_find_delalloc_range(struct extent_io_tree *tree, u64 *start,
 		cur_start = state->end + 1;
 		total_bytes += state->end - state->start + 1;
 		if (total_bytes >= max_bytes)
-			break;
+			return true;
 		state = next_state(state);
 	}
-out:
-	spin_unlock(&tree->lock);
 	return found;
 }
 
@@ -1548,7 +1546,7 @@ void btrfs_find_first_clear_extent_bit(struct extent_io_tree *tree, u64 start,
 	struct extent_state *state;
 	struct extent_state *prev = NULL, *next = NULL;
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 
 	/* Find first extent with bits cleared */
 	while (1) {
@@ -1560,7 +1558,7 @@ void btrfs_find_first_clear_extent_bit(struct extent_io_tree *tree, u64 start,
 			 */
 			*start_ret = 0;
 			*end_ret = -1;
-			goto out;
+			return;
 		} else if (!state && !next) {
 			/*
 			 * We are past the last allocated chunk, set start at
@@ -1568,7 +1566,7 @@ void btrfs_find_first_clear_extent_bit(struct extent_io_tree *tree, u64 start,
 			 */
 			*start_ret = prev->end + 1;
 			*end_ret = -1;
-			goto out;
+			return;
 		} else if (!state) {
 			state = next;
 		}
@@ -1631,8 +1629,6 @@ void btrfs_find_first_clear_extent_bit(struct extent_io_tree *tree, u64 start,
 		}
 		state = next_state(state);
 	}
-out:
-	spin_unlock(&tree->lock);
 }
 
 /*
@@ -1813,12 +1809,11 @@ bool btrfs_test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, u32 b
 			  struct extent_state *cached)
 {
 	struct extent_state *state;
-	bool bitset = true;
 
 	ASSERT(is_power_of_2(bit));
 	ASSERT(start < end);
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 	if (cached && extent_state_in_tree(cached) && cached->start <= start &&
 	    cached->end > start)
 		state = cached;
@@ -1826,17 +1821,15 @@ bool btrfs_test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, u32 b
 		state = tree_search(tree, start);
 	while (state) {
 		if (state->start > start) {
-			bitset = false;
-			break;
+			return false;
 		}
 
 		if ((state->state & bit) == 0) {
-			bitset = false;
-			break;
+			return false;
 		}
 
 		if (state->end >= end)
-			break;
+			return true;
 
 		/* Next state must start where this one ends. */
 		start = state->end + 1;
@@ -1844,10 +1837,7 @@ bool btrfs_test_range_bit(struct extent_io_tree *tree, u64 start, u64 end, u32 b
 	}
 
 	/* We ran out of states and were still inside of our range. */
-	if (!state)
-		bitset = false;
-	spin_unlock(&tree->lock);
-	return bitset;
+	return false;
 }
 
 /* Wrappers around set/clear extent bit */
diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
index a42e6d54e7cd..7dfbfe468a34 100644
--- a/fs/btrfs/file-item.c
+++ b/fs/btrfs/file-item.c
@@ -48,11 +48,11 @@ void btrfs_inode_safe_disk_i_size_write(struct btrfs_inode *inode, u64 new_i_siz
 	u64 start, end, i_size;
 	bool found;
 
-	spin_lock(&inode->lock);
+	guard(spinlock)(&inode->lock);
 	i_size = new_i_size ?: i_size_read(&inode->vfs_inode);
 	if (!inode->file_extent_tree) {
 		inode->disk_i_size = i_size;
-		goto out_unlock;
+		return;
 	}
 
 	found = btrfs_find_contiguous_extent_bit(inode->file_extent_tree, 0, &start,
@@ -62,8 +62,6 @@ void btrfs_inode_safe_disk_i_size_write(struct btrfs_inode *inode, u64 new_i_siz
 	else
 		i_size = 0;
 	inode->disk_i_size = i_size;
-out_unlock:
-	spin_unlock(&inode->lock);
 }
 
 /*
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index 30e361ab02dc..7ad3f635576e 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -3061,24 +3061,21 @@ bool btrfs_is_free_space_trimmed(struct btrfs_block_group *block_group)
 	struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;
 	struct btrfs_free_space *info;
 	struct rb_node *node;
-	bool ret = true;
 
-	spin_lock(&ctl->tree_lock);
+	guard(spinlock)(&ctl->tree_lock);
 	node = rb_first(&ctl->free_space_offset);
 
 	while (node) {
 		info = rb_entry(node, struct btrfs_free_space, offset_index);
 
 		if (!btrfs_free_space_trimmed(info)) {
-			ret = false;
-			break;
+			return false;
 		}
 
 		node = rb_next(node);
 	}
 
-	spin_unlock(&ctl->tree_lock);
-	return ret;
+	return true;
 }
 
 u64 btrfs_find_space_for_alloc(struct btrfs_block_group *block_group,
@@ -3583,23 +3580,21 @@ int btrfs_find_space_cluster(struct btrfs_block_group *block_group,
 		min_bytes = fs_info->sectorsize;
 	}
 
-	spin_lock(&ctl->tree_lock);
+	guard(spinlock)(&ctl->tree_lock);
 
 	/*
 	 * If we know we don't have enough space to make a cluster don't even
 	 * bother doing all the work to try and find one.
 	 */
 	if (ctl->free_space < bytes) {
-		spin_unlock(&ctl->tree_lock);
 		return -ENOSPC;
 	}
 
-	spin_lock(&cluster->lock);
+	guard(spinlock)(&cluster->lock);
 
 	/* someone already found a cluster, hooray */
 	if (cluster->block_group) {
-		ret = 0;
-		goto out;
+		return 0;
 	}
 
 	trace_btrfs_find_cluster(block_group, offset, bytes, empty_size,
@@ -3625,9 +3620,6 @@ int btrfs_find_space_cluster(struct btrfs_block_group *block_group,
 	} else {
 		trace_btrfs_failed_cluster_setup(block_group);
 	}
-out:
-	spin_unlock(&cluster->lock);
-	spin_unlock(&ctl->tree_lock);
 
 	return ret;
 }
diff --git a/fs/btrfs/fs.c b/fs/btrfs/fs.c
index feb0a2faa837..774c1ffa032c 100644
--- a/fs/btrfs/fs.c
+++ b/fs/btrfs/fs.c
@@ -108,16 +108,13 @@ bool __attribute_const__ btrfs_supported_blocksize(u32 blocksize)
 bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
 			enum btrfs_exclusive_operation type)
 {
-	bool ret = false;
-
-	spin_lock(&fs_info->super_lock);
+	guard(spinlock)(&fs_info->super_lock);
 	if (fs_info->exclusive_operation == BTRFS_EXCLOP_NONE) {
 		fs_info->exclusive_operation = type;
-		ret = true;
+		return true;
 	}
-	spin_unlock(&fs_info->super_lock);
 
-	return ret;
+	return false;
 }
 
 /*
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 27a16bacdf9c..451b60de4550 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -970,22 +970,19 @@ struct btrfs_ordered_extent *btrfs_lookup_ordered_extent(struct btrfs_inode *ino
 {
 	struct rb_node *node;
 	struct btrfs_ordered_extent *entry = NULL;
-	unsigned long flags;
 
-	spin_lock_irqsave(&inode->ordered_tree_lock, flags);
+	guard(spinlock_irqsave)(&inode->ordered_tree_lock);
 	node = ordered_tree_search(inode, file_offset);
 	if (!node)
-		goto out;
+		return NULL;
 
 	entry = rb_entry(node, struct btrfs_ordered_extent, rb_node);
 	if (!in_range(file_offset, entry->file_offset, entry->num_bytes))
-		entry = NULL;
+		return NULL;
 	if (entry) {
 		refcount_inc(&entry->refs);
 		trace_btrfs_ordered_extent_lookup(inode, entry);
 	}
-out:
-	spin_unlock_irqrestore(&inode->ordered_tree_lock, flags);
 	return entry;
 }
 
@@ -1066,16 +1063,14 @@ btrfs_lookup_first_ordered_extent(struct btrfs_inode *inode, u64 file_offset)
 	struct rb_node *node;
 	struct btrfs_ordered_extent *entry = NULL;
 
-	spin_lock_irq(&inode->ordered_tree_lock);
+	guard(spinlock_irq)(&inode->ordered_tree_lock);
 	node = ordered_tree_search(inode, file_offset);
 	if (!node)
-		goto out;
+		return NULL;
 
 	entry = rb_entry(node, struct btrfs_ordered_extent, rb_node);
 	refcount_inc(&entry->refs);
 	trace_btrfs_ordered_extent_lookup_first(inode, entry);
-out:
-	spin_unlock_irq(&inode->ordered_tree_lock);
 	return entry;
 }
 
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 683905f4481d..b76fc5474ae9 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1662,38 +1662,31 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
 	struct btrfs_qgroup *prealloc;
 	int ret = 0;
 
-	mutex_lock(&fs_info->qgroup_ioctl_lock);
-	if (!fs_info->quota_root) {
-		ret = -ENOTCONN;
-		goto out;
-	}
+	guard(mutex)(&fs_info->qgroup_ioctl_lock);
+
+	if (!fs_info->quota_root)
+		return -ENOTCONN;
+
 	quota_root = fs_info->quota_root;
 	qgroup = find_qgroup_rb(fs_info, qgroupid);
-	if (qgroup) {
-		ret = -EEXIST;
-		goto out;
-	}
+	if (qgroup)
+		return -EEXIST;
 
 	prealloc = kzalloc(sizeof(*prealloc), GFP_NOFS);
-	if (!prealloc) {
-		ret = -ENOMEM;
-		goto out;
-	}
+	if (!prealloc)
+		return -ENOMEM;
 
 	ret = add_qgroup_item(trans, quota_root, qgroupid);
 	if (ret) {
 		kfree(prealloc);
-		goto out;
+		return ret;
 	}
 
 	spin_lock(&fs_info->qgroup_lock);
 	qgroup = add_qgroup_rb(fs_info, prealloc, qgroupid);
 	spin_unlock(&fs_info->qgroup_lock);
 
-	ret = btrfs_sysfs_add_one_qgroup(fs_info, qgroup);
-out:
-	mutex_unlock(&fs_info->qgroup_ioctl_lock);
-	return ret;
+	return btrfs_sysfs_add_one_qgroup(fs_info, qgroup);
 }
 
 /*
@@ -3175,13 +3168,10 @@ int btrfs_qgroup_check_inherit(struct btrfs_fs_info *fs_info,
 		if (btrfs_qgroup_level(qgroupid) == 0)
 			return -EINVAL;
 
-		spin_lock(&fs_info->qgroup_lock);
+		guard(spinlock)(&fs_info->qgroup_lock);
 		qgroup = find_qgroup_rb(fs_info, qgroupid);
-		if (!qgroup) {
-			spin_unlock(&fs_info->qgroup_lock);
+		if (!qgroup)
 			return -ENOENT;
-		}
-		spin_unlock(&fs_info->qgroup_lock);
 	}
 	return 0;
 }
@@ -4640,9 +4630,9 @@ void btrfs_qgroup_clean_swapped_blocks(struct btrfs_root *root)
 
 	swapped_blocks = &root->swapped_blocks;
 
-	spin_lock(&swapped_blocks->lock);
+	guard(spinlock)(&swapped_blocks->lock);
 	if (!swapped_blocks->swapped)
-		goto out;
+		return;
 	for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
 		struct rb_root *cur_root = &swapped_blocks->blocks[i];
 		struct btrfs_qgroup_swapped_block *entry;
@@ -4654,8 +4644,6 @@ void btrfs_qgroup_clean_swapped_blocks(struct btrfs_root *root)
 		swapped_blocks->blocks[i] = RB_ROOT;
 	}
 	swapped_blocks->swapped = false;
-out:
-	spin_unlock(&swapped_blocks->lock);
 }
 
 static int qgroup_swapped_block_bytenr_key_cmp(const void *key, const struct rb_node *node)
@@ -4873,7 +4861,6 @@ void btrfs_qgroup_destroy_extent_records(struct btrfs_transaction *trans)
 int btrfs_record_squota_delta(struct btrfs_fs_info *fs_info,
 			      const struct btrfs_squota_delta *delta)
 {
-	int ret;
 	struct btrfs_qgroup *qgroup;
 	struct btrfs_qgroup *qg;
 	LIST_HEAD(qgroup_list);
@@ -4891,14 +4878,11 @@ int btrfs_record_squota_delta(struct btrfs_fs_info *fs_info,
 	if (delta->generation < fs_info->qgroup_enable_gen)
 		return 0;
 
-	spin_lock(&fs_info->qgroup_lock);
+	guard(spinlock)(&fs_info->qgroup_lock);
 	qgroup = find_qgroup_rb(fs_info, root);
-	if (!qgroup) {
-		ret = -ENOENT;
-		goto out;
-	}
+	if (!qgroup)
+		return -ENOENT;
 
-	ret = 0;
 	qgroup_iterator_add(&qgroup_list, qgroup);
 	list_for_each_entry(qg, &qgroup_list, iterator) {
 		struct btrfs_qgroup_list *glist;
@@ -4911,8 +4895,5 @@ int btrfs_record_squota_delta(struct btrfs_fs_info *fs_info,
 			qgroup_iterator_add(&qgroup_list, glist->group);
 	}
 	qgroup_iterator_clean(&qgroup_list);
-
-out:
-	spin_unlock(&fs_info->qgroup_lock);
-	return ret;
+	return 0;
 }
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 6b9fda36d3c6..98959d35132e 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -614,15 +614,10 @@ static void run_xor(void **pages, int src_cnt, ssize_t len)
 static int rbio_is_full(struct btrfs_raid_bio *rbio)
 {
 	unsigned long size = rbio->bio_list_bytes;
-	int ret = 1;
 
-	spin_lock(&rbio->bio_list_lock);
-	if (size != rbio->nr_data * BTRFS_STRIPE_LEN)
-		ret = 0;
+	guard(spinlock)(&rbio->bio_list_lock);
 	BUG_ON(size > rbio->nr_data * BTRFS_STRIPE_LEN);
-	spin_unlock(&rbio->bio_list_lock);
-
-	return ret;
+	return size == rbio->nr_data * BTRFS_STRIPE_LEN;
 }
 
 /*
@@ -969,16 +964,14 @@ static struct sector_ptr *sector_in_rbio(struct btrfs_raid_bio *rbio,
 	index = stripe_nr * rbio->stripe_nsectors + sector_nr;
 	ASSERT(index >= 0 && index < rbio->nr_sectors);
 
-	spin_lock(&rbio->bio_list_lock);
+	guard(spinlock)(&rbio->bio_list_lock);
 	sector = &rbio->bio_sectors[index];
 	if (sector->has_paddr || bio_list_only) {
 		/* Don't return sector without a valid page pointer */
 		if (!sector->has_paddr)
 			sector = NULL;
-		spin_unlock(&rbio->bio_list_lock);
 		return sector;
 	}
-	spin_unlock(&rbio->bio_list_lock);
 
 	return &rbio->stripe_sectors[index];
 }
diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
index 97452fb5d29b..01b68037296b 100644
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -1050,10 +1050,9 @@ static bool steal_from_global_rsv(struct btrfs_fs_info *fs_info,
 	if (global_rsv->space_info != space_info)
 		return false;
 
-	spin_lock(&global_rsv->lock);
+	guard(spinlock)(&global_rsv->lock);
 	min_bytes = mult_perc(global_rsv->size, 10);
 	if (global_rsv->reserved < min_bytes + ticket->bytes) {
-		spin_unlock(&global_rsv->lock);
 		return false;
 	}
 	global_rsv->reserved -= ticket->bytes;
@@ -1063,7 +1062,6 @@ static bool steal_from_global_rsv(struct btrfs_fs_info *fs_info,
 	space_info->tickets_id++;
 	if (global_rsv->reserved < global_rsv->size)
 		global_rsv->full = 0;
-	spin_unlock(&global_rsv->lock);
 
 	return true;
 }
diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
index 8c8563e87aea..89c8e2a4c590 100644
--- a/fs/btrfs/subpage.c
+++ b/fs/btrfs/subpage.c
@@ -226,14 +226,13 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
 	struct btrfs_folio_state *bfs = folio_get_private(folio);
 	const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
 	const int nbits = (len >> fs_info->sectorsize_bits);
-	unsigned long flags;
 	unsigned int cleared = 0;
 	int bit = start_bit;
 	bool last;
 
 	btrfs_subpage_assert(fs_info, folio, start, len);
 
-	spin_lock_irqsave(&bfs->lock, flags);
+	guard(spinlock_irqsave)(&bfs->lock);
 	/*
 	 * We have call sites passing @lock_page into
 	 * extent_clear_unlock_delalloc() for compression path.
@@ -242,7 +241,6 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
 	 * subpage::locked is 0.  Handle them in a special way.
 	 */
 	if (atomic_read(&bfs->nr_locked) == 0) {
-		spin_unlock_irqrestore(&bfs->lock, flags);
 		return true;
 	}
 
@@ -252,7 +250,6 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
 	}
 	ASSERT(atomic_read(&bfs->nr_locked) >= cleared);
 	last = atomic_sub_and_test(cleared, &bfs->nr_locked);
-	spin_unlock_irqrestore(&bfs->lock, flags);
 	return last;
 }
 
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index 30f3c3b849c1..e7a9e9582246 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -3695,8 +3695,6 @@ int btrfs_free_log_root_tree(struct btrfs_trans_handle *trans,
 static bool mark_inode_as_not_logged(const struct btrfs_trans_handle *trans,
 				     struct btrfs_inode *inode)
 {
-	bool ret = false;
-
 	/*
 	 * Do this only if ->logged_trans is still 0 to prevent races with
 	 * concurrent logging as we may see the inode not logged when
@@ -3707,14 +3705,15 @@ static bool mark_inode_as_not_logged(const struct btrfs_trans_handle *trans,
 	 * and link operations may end up not logging new names and removing old
 	 * names from the log.
 	 */
-	spin_lock(&inode->lock);
-	if (inode->logged_trans == 0)
+	guard(spinlock)(&inode->lock);
+	if (inode->logged_trans == 0) {
 		inode->logged_trans = trans->transid - 1;
-	else if (inode->logged_trans == trans->transid)
-		ret = true;
-	spin_unlock(&inode->lock);
+		return false;
+	} else if (inode->logged_trans == trans->transid) {
+		return true;
+	}
 
-	return ret;
+	return false;
 }
 
 /*
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index d1db7fa1fe58..3e1d9e8eab78 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2690,9 +2690,9 @@ void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logica
 	/* It should be called on a previous data relocation block group. */
 	ASSERT(block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA));
 
-	spin_lock(&block_group->lock);
+	guard(spinlock)(&block_group->lock);
 	if (!test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &block_group->runtime_flags))
-		goto out;
+		return;
 
 	/* All relocation extents are written. */
 	if (block_group->start + block_group->alloc_offset == logical + length) {
@@ -2704,8 +2704,6 @@ void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logica
 			  &block_group->runtime_flags);
 	}
 
-out:
-	spin_unlock(&block_group->lock);
 	btrfs_put_block_group(block_group);
 }
 
@@ -2720,14 +2718,12 @@ int btrfs_zone_finish_one_bg(struct btrfs_fs_info *fs_info)
 	list_for_each_entry(block_group, &fs_info->zone_active_bgs,
 			    active_bg_list) {
 		u64 avail;
+		guard(spinlock)(&block_group->lock);
 
-		spin_lock(&block_group->lock);
 		if (block_group->reserved || block_group->alloc_offset == 0 ||
 		    !(block_group->flags & BTRFS_BLOCK_GROUP_DATA) ||
-		    test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &block_group->runtime_flags)) {
-			spin_unlock(&block_group->lock);
+		    test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &block_group->runtime_flags))
 			continue;
-		}
 
 		avail = block_group->zone_capacity - block_group->alloc_offset;
 		if (min_avail > avail) {
@@ -2737,7 +2733,6 @@ int btrfs_zone_finish_one_bg(struct btrfs_fs_info *fs_info)
 			min_avail = avail;
 			btrfs_get_block_group(min_bg);
 		}
-		spin_unlock(&block_group->lock);
 	}
 	spin_unlock(&fs_info->zone_active_bgs_lock);
 
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index c9cddcfa337b..ce59214a3778 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -114,12 +114,10 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
 	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
 	struct list_head *pos, *next;
 
-	spin_lock(&zwsm->lock);
+	guard(spinlock)(&zwsm->lock);
 
-	if (list_empty(&zwsm->lru_list)) {
-		spin_unlock(&zwsm->lock);
+	if (list_empty(&zwsm->lru_list))
 		return;
-	}
 
 	list_for_each_prev_safe(pos, next, &zwsm->lru_list) {
 		struct workspace *victim = container_of(pos, struct workspace,
@@ -145,8 +143,6 @@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
 
 	if (!list_empty(&zwsm->lru_list))
 		mod_timer(&zwsm->timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
-
-	spin_unlock(&zwsm->lock);
 }
 
 /*
@@ -251,7 +247,8 @@ static struct list_head *zstd_find_workspace(struct btrfs_fs_info *fs_info, int
 	int i = clip_level(level);
 
 	ASSERT(zwsm);
-	spin_lock_bh(&zwsm->lock);
+	guard(spinlock_bh)(&zwsm->lock);
+
 	for_each_set_bit_from(i, &zwsm->active_map, ZSTD_BTRFS_MAX_LEVEL) {
 		if (!list_empty(&zwsm->idle_ws[i])) {
 			ws = zwsm->idle_ws[i].next;
@@ -263,11 +260,9 @@ static struct list_head *zstd_find_workspace(struct btrfs_fs_info *fs_info, int
 				list_del(&workspace->lru_list);
 			if (list_empty(&zwsm->idle_ws[i]))
 				clear_bit(i, &zwsm->active_map);
-			spin_unlock_bh(&zwsm->lock);
 			return ws;
 		}
 	}
-	spin_unlock_bh(&zwsm->lock);
 
 	return NULL;
 }
-- 
2.51.1.dirty


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH 6/8] btrfs: simplify cleanup via scoped_guard()
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
                   ` (4 preceding siblings ...)
  2025-11-12 18:49 ` [RFC PATCH 5/8] btrfs: use cleanup.h guard()s to simplify unlocks on return Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-13  8:48   ` David Sterba
  2025-11-12 18:49 ` [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h Gladyshev Ilya
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

Simplify cases with multiple unlock paths like

spin_lock(&lock);
if (something) {
        spin_unlock(&lock);
        goto faraway; // or return
}
spin_unlock(&lock);

with scoped_guard() to improve readability and robustness.
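
The same example after conversion would read (a sketch; the
scoped_guard() macro from include/linux/cleanup.h releases the lock on
any scope exit, including goto and return):

```c
scoped_guard(spinlock, &lock) {
        if (something)
                goto faraway; /* or return; unlock happens automatically */
}
```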

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/block-group.c      | 20 +++++-------
 fs/btrfs/compression.c      | 13 ++++----
 fs/btrfs/extent-tree.c      |  8 ++---
 fs/btrfs/extent_io.c        | 33 +++++++++----------
 fs/btrfs/free-space-cache.c | 63 +++++++++++++++----------------------
 fs/btrfs/qgroup.c           | 38 +++++++++++-----------
 fs/btrfs/send.c             | 37 ++++++++++------------
 fs/btrfs/tree-log.c         | 13 +++-----
 8 files changed, 97 insertions(+), 128 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 0ef8917e7a71..73c283344ea4 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -93,13 +93,11 @@ static u64 btrfs_reduce_alloc_profile(struct btrfs_fs_info *fs_info, u64 flags)
 	 * See if restripe for this chunk_type is in progress, if so try to
 	 * reduce to the target profile
 	 */
-	spin_lock(&fs_info->balance_lock);
-	target = get_restripe_target(fs_info, flags);
-	if (target) {
-		spin_unlock(&fs_info->balance_lock);
-		return extended_to_chunk(target);
+	scoped_guard(spinlock, &fs_info->balance_lock) {
+		target = get_restripe_target(fs_info, flags);
+		if (target)
+			return extended_to_chunk(target);
 	}
-	spin_unlock(&fs_info->balance_lock);
 
 	/* First, mask out the RAID levels which aren't possible */
 	for (raid_type = 0; raid_type < BTRFS_NR_RAID_TYPES; raid_type++) {
@@ -3402,13 +3400,11 @@ int btrfs_start_dirty_block_groups(struct btrfs_trans_handle *trans)
 	struct list_head *io = &cur_trans->io_bgs;
 	int loops = 0;
 
-	spin_lock(&cur_trans->dirty_bgs_lock);
-	if (list_empty(&cur_trans->dirty_bgs)) {
-		spin_unlock(&cur_trans->dirty_bgs_lock);
-		return 0;
+	scoped_guard(spinlock, &cur_trans->dirty_bgs_lock) {
+		if (list_empty(&cur_trans->dirty_bgs))
+			return 0;
+		list_splice_init(&cur_trans->dirty_bgs, &dirty);
 	}
-	list_splice_init(&cur_trans->dirty_bgs, &dirty);
-	spin_unlock(&cur_trans->dirty_bgs_lock);
 
 again:
 	/* Make sure all the block groups on our dirty list actually exist */
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index bacad18357b3..39dbc0b2ead2 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -962,14 +962,13 @@ void btrfs_put_workspace(struct btrfs_fs_info *fs_info, int type, struct list_he
 	ws_wait	 = &gwsm->ws_wait;
 	free_ws	 = &gwsm->free_ws;
 
-	spin_lock(ws_lock);
-	if (*free_ws <= num_online_cpus()) {
-		list_add(ws, idle_ws);
-		(*free_ws)++;
-		spin_unlock(ws_lock);
-		goto wake;
+	scoped_guard(spinlock, ws_lock) {
+		if (*free_ws <= num_online_cpus()) {
+			list_add(ws, idle_ws);
+			(*free_ws)++;
+			goto wake;
+		}
 	}
-	spin_unlock(ws_lock);
 
 	free_workspace(type, ws);
 	atomic_dec(total_ws);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index dc4ca98c3780..f9744e456c6c 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2165,12 +2165,10 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans, u64 min_bytes)
 	if (min_bytes == U64_MAX) {
 		btrfs_create_pending_block_groups(trans);
 
-		spin_lock(&delayed_refs->lock);
-		if (xa_empty(&delayed_refs->head_refs)) {
-			spin_unlock(&delayed_refs->lock);
-			return 0;
+		scoped_guard(spinlock, &delayed_refs->lock) {
+			if (xa_empty(&delayed_refs->head_refs))
+				return 0;
 		}
-		spin_unlock(&delayed_refs->lock);
 
 		cond_resched();
 		goto again;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 23273d0e6f22..9d7501da2dda 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4503,27 +4503,24 @@ int try_release_extent_buffer(struct folio *folio)
 	 * We need to make sure nobody is changing folio private, as we rely on
 	 * folio private as the pointer to extent buffer.
 	 */
-	spin_lock(&folio->mapping->i_private_lock);
-	if (!folio_test_private(folio)) {
-		spin_unlock(&folio->mapping->i_private_lock);
-		return 1;
-	}
+	scoped_guard(spinlock, &folio->mapping->i_private_lock) {
+		if (!folio_test_private(folio))
+			return 1;
 
-	eb = folio_get_private(folio);
-	BUG_ON(!eb);
+		eb = folio_get_private(folio);
+		BUG_ON(!eb);
 
-	/*
-	 * This is a little awful but should be ok, we need to make sure that
-	 * the eb doesn't disappear out from under us while we're looking at
-	 * this page.
-	 */
-	spin_lock(&eb->refs_lock);
-	if (refcount_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
-		spin_unlock(&eb->refs_lock);
-		spin_unlock(&folio->mapping->i_private_lock);
-		return 0;
+		/*
+		 * This is a little awful but should be ok, we need to make sure that
+		 * the eb doesn't disappear out from under us while we're looking at
+		 * this page.
+		 */
+		spin_lock(&eb->refs_lock);
+		if (refcount_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
+			spin_unlock(&eb->refs_lock);
+			return 0;
+		}
 	}
-	spin_unlock(&folio->mapping->i_private_lock);
 
 	/*
 	 * If tree ref isn't set then we know the ref on this eb is a real ref,
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index 7ad3f635576e..5e5560043eeb 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -958,12 +958,10 @@ int load_free_space_cache(struct btrfs_block_group *block_group)
 	 * If this block group has been marked to be cleared for one reason or
 	 * another then we can't trust the on disk cache, so just return.
 	 */
-	spin_lock(&block_group->lock);
-	if (block_group->disk_cache_state != BTRFS_DC_WRITTEN) {
-		spin_unlock(&block_group->lock);
-		return 0;
+	scoped_guard(spinlock, &block_group->lock) {
+		if (block_group->disk_cache_state != BTRFS_DC_WRITTEN)
+			return 0;
 	}
-	spin_unlock(&block_group->lock);
 
 	path = btrfs_alloc_path();
 	if (!path)
@@ -1525,12 +1523,10 @@ int btrfs_write_out_cache(struct btrfs_trans_handle *trans,
 	struct inode *inode;
 	int ret = 0;
 
-	spin_lock(&block_group->lock);
-	if (block_group->disk_cache_state < BTRFS_DC_SETUP) {
-		spin_unlock(&block_group->lock);
-		return 0;
+	scoped_guard(spinlock, &block_group->lock) {
+		if (block_group->disk_cache_state < BTRFS_DC_SETUP)
+			return 0;
 	}
-	spin_unlock(&block_group->lock);
 
 	inode = lookup_free_space_inode(block_group, path);
 	if (IS_ERR(inode))
@@ -3154,20 +3150,17 @@ void btrfs_return_cluster_to_free_space(
 	struct btrfs_free_space_ctl *ctl;
 
 	/* first, get a safe pointer to the block group */
-	spin_lock(&cluster->lock);
-	if (!block_group) {
-		block_group = cluster->block_group;
+	scoped_guard(spinlock, &cluster->lock) {
 		if (!block_group) {
-			spin_unlock(&cluster->lock);
+			block_group = cluster->block_group;
+			if (!block_group)
+				return;
+		} else if (cluster->block_group != block_group) {
+			/* someone else has already freed it don't redo their work */
 			return;
 		}
-	} else if (cluster->block_group != block_group) {
-		/* someone else has already freed it don't redo their work */
-		spin_unlock(&cluster->lock);
-		return;
+		btrfs_get_block_group(block_group);
 	}
-	btrfs_get_block_group(block_group);
-	spin_unlock(&cluster->lock);
 
 	ctl = block_group->free_space_ctl;
 
@@ -4018,13 +4011,11 @@ int btrfs_trim_block_group(struct btrfs_block_group *block_group,
 
 	*trimmed = 0;
 
-	spin_lock(&block_group->lock);
-	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
-		spin_unlock(&block_group->lock);
-		return 0;
+	scoped_guard(spinlock, &block_group->lock) {
+		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags))
+			return 0;
+		btrfs_freeze_block_group(block_group);
 	}
-	btrfs_freeze_block_group(block_group);
-	spin_unlock(&block_group->lock);
 
 	ret = trim_no_bitmap(block_group, trimmed, start, end, minlen, false);
 	if (ret)
@@ -4048,13 +4039,11 @@ int btrfs_trim_block_group_extents(struct btrfs_block_group *block_group,
 
 	*trimmed = 0;
 
-	spin_lock(&block_group->lock);
-	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
-		spin_unlock(&block_group->lock);
-		return 0;
+	scoped_guard(spinlock, &block_group->lock) {
+		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags))
+			return 0;
+		btrfs_freeze_block_group(block_group);
 	}
-	btrfs_freeze_block_group(block_group);
-	spin_unlock(&block_group->lock);
 
 	ret = trim_no_bitmap(block_group, trimmed, start, end, minlen, async);
 	btrfs_unfreeze_block_group(block_group);
@@ -4070,13 +4059,11 @@ int btrfs_trim_block_group_bitmaps(struct btrfs_block_group *block_group,
 
 	*trimmed = 0;
 
-	spin_lock(&block_group->lock);
-	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags)) {
-		spin_unlock(&block_group->lock);
-		return 0;
+	scoped_guard(spinlock, &block_group->lock) {
+		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags))
+			return 0;
+		btrfs_freeze_block_group(block_group);
 	}
-	btrfs_freeze_block_group(block_group);
-	spin_unlock(&block_group->lock);
 
 	ret = trim_bitmaps(block_group, trimmed, start, end, minlen, maxlen,
 			   async);
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index b76fc5474ae9..9b2f2c8ca505 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -4791,29 +4791,27 @@ int btrfs_qgroup_trace_subtree_after_cow(struct btrfs_trans_handle *trans,
 	if (!btrfs_is_fstree(btrfs_root_id(root)) || !root->reloc_root)
 		return 0;
 
-	spin_lock(&blocks->lock);
-	if (!blocks->swapped) {
-		spin_unlock(&blocks->lock);
-		return 0;
-	}
-	node = rb_find(&subvol_eb->start, &blocks->blocks[level],
-			qgroup_swapped_block_bytenr_key_cmp);
-	if (!node) {
-		spin_unlock(&blocks->lock);
-		goto out;
-	}
-	block = rb_entry(node, struct btrfs_qgroup_swapped_block, node);
+	scoped_guard(spinlock, &blocks->lock) {
+		if (!blocks->swapped)
+			return 0;
 
-	/* Found one, remove it from @blocks first and update blocks->swapped */
-	rb_erase(&block->node, &blocks->blocks[level]);
-	for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
-		if (RB_EMPTY_ROOT(&blocks->blocks[i])) {
-			swapped = true;
-			break;
+		node = rb_find(&subvol_eb->start, &blocks->blocks[level],
+			       qgroup_swapped_block_bytenr_key_cmp);
+		if (!node)
+			goto out;
+
+		block = rb_entry(node, struct btrfs_qgroup_swapped_block, node);
+
+		/* Found one, remove it from @blocks first and update blocks->swapped */
+		rb_erase(&block->node, &blocks->blocks[level]);
+		for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
+			if (RB_EMPTY_ROOT(&blocks->blocks[i])) {
+				swapped = true;
+				break;
+			}
 		}
+		blocks->swapped = swapped;
 	}
-	blocks->swapped = swapped;
-	spin_unlock(&blocks->lock);
 
 	check.level = block->level;
 	check.transid = block->reloc_generation;
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index e7e33c9feca0..5af63c71a01a 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -8020,27 +8020,24 @@ long btrfs_ioctl_send(struct btrfs_root *send_root, const struct btrfs_ioctl_sen
 	 * The subvolume must remain read-only during send, protect against
 	 * making it RW. This also protects against deletion.
 	 */
-	spin_lock(&send_root->root_item_lock);
-	/*
-	 * Unlikely but possible, if the subvolume is marked for deletion but
-	 * is slow to remove the directory entry, send can still be started.
-	 */
-	if (btrfs_root_dead(send_root)) {
-		spin_unlock(&send_root->root_item_lock);
-		return -EPERM;
-	}
-	/* Userspace tools do the checks and warn the user if it's not RO. */
-	if (!btrfs_root_readonly(send_root)) {
-		spin_unlock(&send_root->root_item_lock);
-		return -EPERM;
-	}
-	if (send_root->dedupe_in_progress) {
-		dedupe_in_progress_warn(send_root);
-		spin_unlock(&send_root->root_item_lock);
-		return -EAGAIN;
+	scoped_guard(spinlock, &send_root->root_item_lock) {
+		/*
+		 * Unlikely but possible, if the subvolume is marked for deletion but
+		 * is slow to remove the directory entry, send can still be started.
+		 */
+		if (btrfs_root_dead(send_root))
+			return -EPERM;
+
+		/* Userspace tools do the checks and warn the user if it's not RO. */
+		if (!btrfs_root_readonly(send_root))
+			return -EPERM;
+
+		if (send_root->dedupe_in_progress) {
+			dedupe_in_progress_warn(send_root);
+			return -EAGAIN;
+		}
+		send_root->send_in_progress++;
 	}
-	send_root->send_in_progress++;
-	spin_unlock(&send_root->root_item_lock);
 
 	/*
 	 * Check that we don't overflow at later allocations, we request
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index e7a9e9582246..f6ce72f0fbcb 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -3750,15 +3750,12 @@ static int inode_logged(const struct btrfs_trans_handle *trans,
 	 * see a positive value that is not trans->transid and assume the inode
 	 * was not logged when it was.
 	 */
-	spin_lock(&inode->lock);
-	if (inode->logged_trans == trans->transid) {
-		spin_unlock(&inode->lock);
-		return 1;
-	} else if (inode->logged_trans > 0) {
-		spin_unlock(&inode->lock);
-		return 0;
+	scoped_guard(spinlock, &inode->lock) {
+		if (inode->logged_trans == trans->transid)
+			return 1;
+		else if (inode->logged_trans > 0)
+			return 0;
 	}
-	spin_unlock(&inode->lock);
 
 	/*
 	 * If no log tree was created for this root in this transaction, then
-- 
2.51.1.dirty


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
                   ` (5 preceding siblings ...)
  2025-11-12 18:49 ` [RFC PATCH 6/8] btrfs: simplify cleanup via scoped_guard() Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-12 20:50   ` Qu Wenruo
  2025-11-12 18:49 ` [RFC PATCH 8/8] btrfs: simplify cleanup in btrfs_add_qgroup_relation Gladyshev Ilya
  2025-11-12 20:55 ` [RFC PATCH 0/8] use cleanup.h in btrfs Qu Wenruo
  8 siblings, 1 reply; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

In some functions, removing the cleanup path simplifies the overall
code, so replace those cleanup paths with guard()s.

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/extent-io-tree.c | 21 ++++-----
 fs/btrfs/extent-tree.c    | 96 ++++++++++++++++-----------------------
 fs/btrfs/ordered-data.c   | 46 ++++++++-----------
 3 files changed, 68 insertions(+), 95 deletions(-)

diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index 69ea2bd359a6..88d7aed7055f 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -890,9 +890,8 @@ bool btrfs_find_first_extent_bit(struct extent_io_tree *tree, u64 start,
 				 struct extent_state **cached_state)
 {
 	struct extent_state *state;
-	bool ret = false;
 
-	spin_lock(&tree->lock);
+	guard(spinlock)(&tree->lock);
 	if (cached_state && *cached_state) {
 		state = *cached_state;
 		if (state->end == start - 1 && extent_state_in_tree(state)) {
@@ -911,23 +910,21 @@ bool btrfs_find_first_extent_bit(struct extent_io_tree *tree, u64 start,
 			*cached_state = NULL;
 			if (state)
 				goto got_it;
-			goto out;
+			return false;
 		}
 		btrfs_free_extent_state(*cached_state);
 		*cached_state = NULL;
 	}
 
 	state = find_first_extent_bit_state(tree, start, bits);
+	if (!state)
+		return false;
+
 got_it:
-	if (state) {
-		cache_state_if_flags(state, cached_state, 0);
-		*start_ret = state->start;
-		*end_ret = state->end;
-		ret = true;
-	}
-out:
-	spin_unlock(&tree->lock);
-	return ret;
+	cache_state_if_flags(state, cached_state, 0);
+	*start_ret = state->start;
+	*end_ret = state->end;
+	return true;
 }
 
 /*
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index f9744e456c6c..cb3d61d96e66 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -1878,16 +1878,14 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
 	 * and then re-check to make sure nobody got added.
 	 */
 	spin_unlock(&head->lock);
-	spin_lock(&delayed_refs->lock);
-	spin_lock(&head->lock);
-	if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op) {
-		spin_unlock(&head->lock);
-		spin_unlock(&delayed_refs->lock);
-		return 1;
+	{
+		guard(spinlock)(&delayed_refs->lock);
+		guard(spinlock)(&head->lock);
+
+		if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op)
+			return 1;
+		btrfs_delete_ref_head(fs_info, delayed_refs, head);
 	}
-	btrfs_delete_ref_head(fs_info, delayed_refs, head);
-	spin_unlock(&head->lock);
-	spin_unlock(&delayed_refs->lock);
 
 	if (head->must_insert_reserved) {
 		btrfs_pin_extent(trans, head->bytenr, head->num_bytes, 1);
@@ -3391,30 +3389,29 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans,
 	int ret = 0;
 
 	delayed_refs = &trans->transaction->delayed_refs;
-	spin_lock(&delayed_refs->lock);
-	head = btrfs_find_delayed_ref_head(fs_info, delayed_refs, bytenr);
-	if (!head)
-		goto out_delayed_unlock;
-
-	spin_lock(&head->lock);
-	if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root))
-		goto out;
+	{
+		guard(spinlock)(&delayed_refs->lock);
+		head = btrfs_find_delayed_ref_head(fs_info, delayed_refs, bytenr);
+		if (!head)
+			return 0;
 
-	if (cleanup_extent_op(head) != NULL)
-		goto out;
+		guard(spinlock)(&head->lock);
+		if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root))
+			return 0;
 
-	/*
-	 * waiting for the lock here would deadlock.  If someone else has it
-	 * locked they are already in the process of dropping it anyway
-	 */
-	if (!mutex_trylock(&head->mutex))
-		goto out;
+		if (cleanup_extent_op(head) != NULL)
+			return 0;
 
-	btrfs_delete_ref_head(fs_info, delayed_refs, head);
-	head->processing = false;
+		/*
+		 * waiting for the lock here would deadlock.  If someone else has it
+		 * locked they are already in the process of dropping it anyway
+		 */
+		if (!mutex_trylock(&head->mutex))
+			return 0;
 
-	spin_unlock(&head->lock);
-	spin_unlock(&delayed_refs->lock);
+		btrfs_delete_ref_head(fs_info, delayed_refs, head);
+		head->processing = false;
+	}
 
 	BUG_ON(head->extent_op);
 	if (head->must_insert_reserved)
@@ -3424,12 +3421,6 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans,
 	mutex_unlock(&head->mutex);
 	btrfs_put_delayed_ref_head(head);
 	return ret;
-out:
-	spin_unlock(&head->lock);
-
-out_delayed_unlock:
-	spin_unlock(&delayed_refs->lock);
-	return 0;
 }
 
 int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
@@ -3910,13 +3901,13 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 		 */
 	}
 
-	spin_lock(&space_info->lock);
-	spin_lock(&block_group->lock);
-	spin_lock(&fs_info->treelog_bg_lock);
-	spin_lock(&fs_info->relocation_bg_lock);
+	guard(spinlock)(&space_info->lock);
+	guard(spinlock)(&block_group->lock);
+	guard(spinlock)(&fs_info->treelog_bg_lock);
+	guard(spinlock)(&fs_info->relocation_bg_lock);
 
 	if (ret)
-		goto out;
+		goto err;
 
 	ASSERT(!ffe_ctl->for_treelog ||
 	       block_group->start == fs_info->treelog_bg ||
@@ -3928,8 +3919,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	if (block_group->ro ||
 	    (!ffe_ctl->for_data_reloc &&
 	     test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &block_group->runtime_flags))) {
-		ret = 1;
-		goto out;
+		goto err;
 	}
 
 	/*
@@ -3938,8 +3928,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	 */
 	if (ffe_ctl->for_treelog && !fs_info->treelog_bg &&
 	    (block_group->used || block_group->reserved)) {
-		ret = 1;
-		goto out;
+		goto err;
 	}
 
 	/*
@@ -3948,8 +3937,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	 */
 	if (ffe_ctl->for_data_reloc && !fs_info->data_reloc_bg &&
 	    (block_group->used || block_group->reserved)) {
-		ret = 1;
-		goto out;
+		goto err;
 	}
 
 	WARN_ON_ONCE(block_group->alloc_offset > block_group->zone_capacity);
@@ -3963,8 +3951,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 			ffe_ctl->max_extent_size = avail;
 			ffe_ctl->total_free_space = avail;
 		}
-		ret = 1;
-		goto out;
+		goto err;
 	}
 
 	if (ffe_ctl->for_treelog && !fs_info->treelog_bg)
@@ -4003,17 +3990,14 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
 	 */
 
 	ffe_ctl->search_start = ffe_ctl->found_offset;
+	return 0;
 
-out:
-	if (ret && ffe_ctl->for_treelog)
+err:
+	if (ffe_ctl->for_treelog)
 		fs_info->treelog_bg = 0;
-	if (ret && ffe_ctl->for_data_reloc)
+	if (ffe_ctl->for_data_reloc)
 		fs_info->data_reloc_bg = 0;
-	spin_unlock(&fs_info->relocation_bg_lock);
-	spin_unlock(&fs_info->treelog_bg_lock);
-	spin_unlock(&block_group->lock);
-	spin_unlock(&space_info->lock);
-	return ret;
+	return 1;
 }
 
 static int do_allocation(struct btrfs_block_group *block_group,
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 451b60de4550..4dbec4ef4ffd 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -995,35 +995,29 @@ struct btrfs_ordered_extent *btrfs_lookup_ordered_range(
 	struct rb_node *node;
 	struct btrfs_ordered_extent *entry = NULL;
 
-	spin_lock_irq(&inode->ordered_tree_lock);
+	guard(spinlock_irq)(&inode->ordered_tree_lock);
 	node = ordered_tree_search(inode, file_offset);
 	if (!node) {
 		node = ordered_tree_search(inode, file_offset + len);
 		if (!node)
-			goto out;
+			return NULL;
 	}
 
 	while (1) {
 		entry = rb_entry(node, struct btrfs_ordered_extent, rb_node);
-		if (btrfs_range_overlaps(entry, file_offset, len))
-			break;
+		if (btrfs_range_overlaps(entry, file_offset, len)) {
+			refcount_inc(&entry->refs);
+			trace_btrfs_ordered_extent_lookup_range(inode, entry);
+			return entry;
+		}
 
 		if (entry->file_offset >= file_offset + len) {
-			entry = NULL;
-			break;
+			return NULL;
 		}
-		entry = NULL;
 		node = rb_next(node);
 		if (!node)
-			break;
+			return NULL;
 	}
-out:
-	if (entry) {
-		refcount_inc(&entry->refs);
-		trace_btrfs_ordered_extent_lookup_range(inode, entry);
-	}
-	spin_unlock_irq(&inode->ordered_tree_lock);
-	return entry;
 }
 
 /*
@@ -1092,7 +1086,7 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
 	struct rb_node *next;
 	struct btrfs_ordered_extent *entry = NULL;
 
-	spin_lock_irq(&inode->ordered_tree_lock);
+	guard(spinlock_irq)(&inode->ordered_tree_lock);
 	node = inode->ordered_tree.rb_node;
 	/*
 	 * Here we don't want to use tree_search() which will use tree->last
@@ -1112,12 +1106,12 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
 			 * Direct hit, got an ordered extent that starts at
 			 * @file_offset
 			 */
-			goto out;
+			goto ret_entry;
 		}
 	}
 	if (!entry) {
 		/* Empty tree */
-		goto out;
+		return NULL;
 	}
 
 	cur = &entry->rb_node;
@@ -1132,22 +1126,20 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
 	if (prev) {
 		entry = rb_entry(prev, struct btrfs_ordered_extent, rb_node);
 		if (btrfs_range_overlaps(entry, file_offset, len))
-			goto out;
+			goto ret_entry;
 	}
 	if (next) {
 		entry = rb_entry(next, struct btrfs_ordered_extent, rb_node);
 		if (btrfs_range_overlaps(entry, file_offset, len))
-			goto out;
+			goto ret_entry;
 	}
 	/* No ordered extent in the range */
-	entry = NULL;
-out:
-	if (entry) {
-		refcount_inc(&entry->refs);
-		trace_btrfs_ordered_extent_lookup_first_range(inode, entry);
-	}
+	return NULL;
+
+ret_entry:
+	refcount_inc(&entry->refs);
+	trace_btrfs_ordered_extent_lookup_first_range(inode, entry);
 
-	spin_unlock_irq(&inode->ordered_tree_lock);
 	return entry;
 }
 
-- 
2.51.1.dirty


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH 8/8] btrfs: simplify cleanup in btrfs_add_qgroup_relation
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
                   ` (6 preceding siblings ...)
  2025-11-12 18:49 ` [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h Gladyshev Ilya
@ 2025-11-12 18:49 ` Gladyshev Ilya
  2025-11-12 20:46   ` Qu Wenruo
  2025-11-12 20:55 ` [RFC PATCH 0/8] use cleanup.h in btrfs Qu Wenruo
  8 siblings, 1 reply; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-12 18:49 UTC (permalink / raw)
  To: foxido; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

Remove the mutex_unlock from the cleanup path via guard() and the
kfree via the __free(kfree) macro. With those two cleanups gone, we can
remove the `out` label and replace all gotos with direct returns.

Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
---
 fs/btrfs/qgroup.c | 42 ++++++++++++++++--------------------------
 1 file changed, 16 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 9b2f2c8ca505..238c17c7d969 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1528,65 +1528,55 @@ static int quick_update_accounting(struct btrfs_fs_info *fs_info,
  * callers and transferred here (either used or freed on error).
  */
 int btrfs_add_qgroup_relation(struct btrfs_trans_handle *trans, u64 src, u64 dst,
-			      struct btrfs_qgroup_list *prealloc)
+			      struct btrfs_qgroup_list *_prealloc)
 {
 	struct btrfs_fs_info *fs_info = trans->fs_info;
 	struct btrfs_qgroup *parent;
 	struct btrfs_qgroup *member;
 	struct btrfs_qgroup_list *list;
+	struct btrfs_qgroup_list *prealloc __free(kfree) = _prealloc;
 	int ret = 0;
 
 	ASSERT(prealloc);
 
 	/* Check the level of src and dst first */
 	if (btrfs_qgroup_level(src) >= btrfs_qgroup_level(dst)) {
-		kfree(prealloc);
 		return -EINVAL;
 	}
 
-	mutex_lock(&fs_info->qgroup_ioctl_lock);
-	if (!fs_info->quota_root) {
-		ret = -ENOTCONN;
-		goto out;
-	}
+	guard(mutex)(&fs_info->qgroup_ioctl_lock);
+
+	if (!fs_info->quota_root)
+		return -ENOTCONN;
+
 	member = find_qgroup_rb(fs_info, src);
 	parent = find_qgroup_rb(fs_info, dst);
-	if (!member || !parent) {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (!member || !parent)
+		return -EINVAL;
 
 	/* check if such qgroup relation exist firstly */
 	list_for_each_entry(list, &member->groups, next_group) {
-		if (list->group == parent) {
-			ret = -EEXIST;
-			goto out;
-		}
+		if (list->group == parent)
+			return -EEXIST;
 	}
 
 	ret = add_qgroup_relation_item(trans, src, dst);
 	if (ret)
-		goto out;
+		return ret;
 
 	ret = add_qgroup_relation_item(trans, dst, src);
 	if (ret) {
 		del_qgroup_relation_item(trans, src, dst);
-		goto out;
+		return ret;
 	}
 
-	spin_lock(&fs_info->qgroup_lock);
+	guard(spinlock)(&fs_info->qgroup_lock);
 	ret = __add_relation_rb(prealloc, member, parent);
 	prealloc = NULL;
 	if (ret < 0) {
-		spin_unlock(&fs_info->qgroup_lock);
-		goto out;
+		return ret;
 	}
-	ret = quick_update_accounting(fs_info, src, dst, 1);
-	spin_unlock(&fs_info->qgroup_lock);
-out:
-	kfree(prealloc);
-	mutex_unlock(&fs_info->qgroup_ioctl_lock);
-	return ret;
+	return quick_update_accounting(fs_info, src, dst, 1);
 }
 
 static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
-- 
2.51.1.dirty


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 2/8] btrfs: move kfree out of btrfs_create_qgroup's cleanup path
  2025-11-12 18:49 ` [RFC PATCH 2/8] btrfs: move kfree out of btrfs_create_qgroup's cleanup path Gladyshev Ilya
@ 2025-11-12 20:30   ` Qu Wenruo
  0 siblings, 0 replies; 21+ messages in thread
From: Qu Wenruo @ 2025-11-12 20:30 UTC (permalink / raw)
  To: Gladyshev Ilya; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel



On 2025/11/13 05:19, Gladyshev Ilya wrote:
> Relocate kfree() from the generic cleanup path to the specific error
> exit where the allocation could leak. This prepares for future
> simplification by allowing removal of the 'out' label and use of
> mutex_guard for cleaner resource management.
> 
> Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
> ---
>   fs/btrfs/qgroup.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> index 9904bcfd3a60..a8474d0a9c58 100644
> --- a/fs/btrfs/qgroup.c
> +++ b/fs/btrfs/qgroup.c
> @@ -1659,7 +1659,7 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
>   	struct btrfs_fs_info *fs_info = trans->fs_info;
>   	struct btrfs_root *quota_root;
>   	struct btrfs_qgroup *qgroup;
> -	struct btrfs_qgroup *prealloc = NULL;
> +	struct btrfs_qgroup *prealloc;
>   	int ret = 0;
>   
>   	mutex_lock(&fs_info->qgroup_ioctl_lock);
> @@ -1681,18 +1681,18 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
>   	}
>   
>   	ret = add_qgroup_item(trans, quota_root, qgroupid);
> -	if (ret)
> +	if (ret) {
> +		kfree(prealloc);
>   		goto out;
> +	}
>   
>   	spin_lock(&fs_info->qgroup_lock);
>   	qgroup = add_qgroup_rb(fs_info, prealloc, qgroupid);
>   	spin_unlock(&fs_info->qgroup_lock);
> -	prealloc = NULL;
>   
>   	ret = btrfs_sysfs_add_one_qgroup(fs_info, qgroup);
>   out:
>   	mutex_unlock(&fs_info->qgroup_ioctl_lock);

You're not on the latest for-next branch, which has the following patch 
applied doing the extra sanity checks:

https://lore.kernel.org/linux-btrfs/20251024102143.236665-5-mssola@mssola.com/

With the extra ASSERT()s, the old code makes more sense.

Thanks,
Qu
> -	kfree(prealloc);
>   	return ret;
>   }
>   


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 8/8] btrfs: simplify cleanup in btrfs_add_qgroup_relation
  2025-11-12 18:49 ` [RFC PATCH 8/8] btrfs: simplify cleanup in btrfs_add_qgroup_relation Gladyshev Ilya
@ 2025-11-12 20:46   ` Qu Wenruo
  0 siblings, 0 replies; 21+ messages in thread
From: Qu Wenruo @ 2025-11-12 20:46 UTC (permalink / raw)
  To: Gladyshev Ilya; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel



On 2025/11/13 05:19, Gladyshev Ilya wrote:
> Remove the mutex_unlock from the cleanup path via guard() and the
> kfree via the __free(kfree) macro. With those two cleanups gone, we can
> remove the `out` label and replace all gotos with direct returns.
> 
> Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
> ---
>   fs/btrfs/qgroup.c | 42 ++++++++++++++++--------------------------
>   1 file changed, 16 insertions(+), 26 deletions(-)
> 
> diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> index 9b2f2c8ca505..238c17c7d969 100644
> --- a/fs/btrfs/qgroup.c
> +++ b/fs/btrfs/qgroup.c
> @@ -1528,65 +1528,55 @@ static int quick_update_accounting(struct btrfs_fs_info *fs_info,
>    * callers and transferred here (either used or freed on error).
>    */
>   int btrfs_add_qgroup_relation(struct btrfs_trans_handle *trans, u64 src, u64 dst,
> -			      struct btrfs_qgroup_list *prealloc)
> +			      struct btrfs_qgroup_list *_prealloc)

We don't use '_' prefix. Even '__' usage is discouraged.

>   {
>   	struct btrfs_fs_info *fs_info = trans->fs_info;
>   	struct btrfs_qgroup *parent;
>   	struct btrfs_qgroup *member;
>   	struct btrfs_qgroup_list *list;
> +	struct btrfs_qgroup_list *prealloc __free(kfree) = _prealloc;

I do not think preallocation is a good use case for scope based cleanup, 
especially when there are two similarly named variables.

This reduces the readability.

Thanks,
Qu

>   	int ret = 0;
>   
>   	ASSERT(prealloc);
>   
>   	/* Check the level of src and dst first */
>   	if (btrfs_qgroup_level(src) >= btrfs_qgroup_level(dst)) {
> -		kfree(prealloc);
>   		return -EINVAL;
>   	}
>   
> -	mutex_lock(&fs_info->qgroup_ioctl_lock);
> -	if (!fs_info->quota_root) {
> -		ret = -ENOTCONN;
> -		goto out;
> -	}
> +	guard(mutex)(&fs_info->qgroup_ioctl_lock);
> +
> +	if (!fs_info->quota_root)
> +		return -ENOTCONN;
> +
>   	member = find_qgroup_rb(fs_info, src);
>   	parent = find_qgroup_rb(fs_info, dst);
> -	if (!member || !parent) {
> -		ret = -EINVAL;
> -		goto out;
> -	}
> +	if (!member || !parent)
> +		return -EINVAL;
>   
>   	/* check if such qgroup relation exist firstly */
>   	list_for_each_entry(list, &member->groups, next_group) {
> -		if (list->group == parent) {
> -			ret = -EEXIST;
> -			goto out;
> -		}
> +		if (list->group == parent)
> +			return -EEXIST;
>   	}
>   
>   	ret = add_qgroup_relation_item(trans, src, dst);
>   	if (ret)
> -		goto out;
> +		return ret;
>   
>   	ret = add_qgroup_relation_item(trans, dst, src);
>   	if (ret) {
>   		del_qgroup_relation_item(trans, src, dst);
> -		goto out;
> +		return ret;
>   	}
>   
> -	spin_lock(&fs_info->qgroup_lock);
> +	guard(spinlock)(&fs_info->qgroup_lock);
>   	ret = __add_relation_rb(prealloc, member, parent);
>   	prealloc = NULL;
>   	if (ret < 0) {
> -		spin_unlock(&fs_info->qgroup_lock);
> -		goto out;
> +		return ret;
>   	}
> -	ret = quick_update_accounting(fs_info, src, dst, 1);
> -	spin_unlock(&fs_info->qgroup_lock);
> -out:
> -	kfree(prealloc);
> -	mutex_unlock(&fs_info->qgroup_ioctl_lock);
> -	return ret;
> +	return quick_update_accounting(fs_info, src, dst, 1);
>   }
>   
>   static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h
  2025-11-12 18:49 ` [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h Gladyshev Ilya
@ 2025-11-12 20:50   ` Qu Wenruo
  2025-11-13  8:54     ` David Sterba
  0 siblings, 1 reply; 21+ messages in thread
From: Qu Wenruo @ 2025-11-12 20:50 UTC (permalink / raw)
  To: Gladyshev Ilya; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel



On 2025/11/13 05:19, Gladyshev Ilya wrote:
> In some functions, removing the cleanup path simplifies the overall
> code, so replace those cleanup paths with guard()s.
> 
> Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
> ---
>   fs/btrfs/extent-io-tree.c | 21 ++++-----
>   fs/btrfs/extent-tree.c    | 96 ++++++++++++++++-----------------------
>   fs/btrfs/ordered-data.c   | 46 ++++++++-----------
>   3 files changed, 68 insertions(+), 95 deletions(-)
> 
> diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
> index 69ea2bd359a6..88d7aed7055f 100644
> --- a/fs/btrfs/extent-io-tree.c
> +++ b/fs/btrfs/extent-io-tree.c
> @@ -890,9 +890,8 @@ bool btrfs_find_first_extent_bit(struct extent_io_tree *tree, u64 start,
>   				 struct extent_state **cached_state)
>   {
>   	struct extent_state *state;
> -	bool ret = false;
>   
> -	spin_lock(&tree->lock);
> +	guard(spinlock)(&tree->lock);
>   	if (cached_state && *cached_state) {
>   		state = *cached_state;
>   		if (state->end == start - 1 && extent_state_in_tree(state)) {
> @@ -911,23 +910,21 @@ bool btrfs_find_first_extent_bit(struct extent_io_tree *tree, u64 start,
>   			*cached_state = NULL;
>   			if (state)
>   				goto got_it;
> -			goto out;
> +			return false;
>   		}
>   		btrfs_free_extent_state(*cached_state);
>   		*cached_state = NULL;
>   	}
>   
>   	state = find_first_extent_bit_state(tree, start, bits);
> +	if (!state)
> +		return false;
> +
>   got_it:
> -	if (state) {
> -		cache_state_if_flags(state, cached_state, 0);
> -		*start_ret = state->start;
> -		*end_ret = state->end;
> -		ret = true;
> -	}
> -out:
> -	spin_unlock(&tree->lock);
> -	return ret;
> +	cache_state_if_flags(state, cached_state, 0);
> +	*start_ret = state->start;
> +	*end_ret = state->end;
> +	return true;
>   }
>   
>   /*
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index f9744e456c6c..cb3d61d96e66 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -1878,16 +1878,14 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
>   	 * and then re-check to make sure nobody got added.
>   	 */
>   	spin_unlock(&head->lock);
> -	spin_lock(&delayed_refs->lock);
> -	spin_lock(&head->lock);
> -	if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op) {
> -		spin_unlock(&head->lock);
> -		spin_unlock(&delayed_refs->lock);
> -		return 1;
> +	{

There has been some internal discussion about such anonymous code block usage.

Although I support such usage, especially when it can reduce the 
lifespan of local variables, it's not a commonly accepted pattern yet.

Thanks,
Qu

> +		guard(spinlock)(&delayed_refs->lock);
> +		guard(spinlock)(&head->lock);
> +
> +		if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op)
> +			return 1;
> +		btrfs_delete_ref_head(fs_info, delayed_refs, head);
>   	}
> -	btrfs_delete_ref_head(fs_info, delayed_refs, head);
> -	spin_unlock(&head->lock);
> -	spin_unlock(&delayed_refs->lock);
>   
>   	if (head->must_insert_reserved) {
>   		btrfs_pin_extent(trans, head->bytenr, head->num_bytes, 1);
> @@ -3391,30 +3389,29 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans,
>   	int ret = 0;
>   
>   	delayed_refs = &trans->transaction->delayed_refs;
> -	spin_lock(&delayed_refs->lock);
> -	head = btrfs_find_delayed_ref_head(fs_info, delayed_refs, bytenr);
> -	if (!head)
> -		goto out_delayed_unlock;
> -
> -	spin_lock(&head->lock);
> -	if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root))
> -		goto out;
> +	{
> +		guard(spinlock)(&delayed_refs->lock);
> +		head = btrfs_find_delayed_ref_head(fs_info, delayed_refs, bytenr);
> +		if (!head)
> +			return 0;
>   
> -	if (cleanup_extent_op(head) != NULL)
> -		goto out;
> +		guard(spinlock)(&head->lock);
> +		if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root))
> +			return 0;
>   
> -	/*
> -	 * waiting for the lock here would deadlock.  If someone else has it
> -	 * locked they are already in the process of dropping it anyway
> -	 */
> -	if (!mutex_trylock(&head->mutex))
> -		goto out;
> +		if (cleanup_extent_op(head) != NULL)
> +			return 0;
>   
> -	btrfs_delete_ref_head(fs_info, delayed_refs, head);
> -	head->processing = false;
> +		/*
> +		 * waiting for the lock here would deadlock.  If someone else has it
> +		 * locked they are already in the process of dropping it anyway
> +		 */
> +		if (!mutex_trylock(&head->mutex))
> +			return 0;
>   
> -	spin_unlock(&head->lock);
> -	spin_unlock(&delayed_refs->lock);
> +		btrfs_delete_ref_head(fs_info, delayed_refs, head);
> +		head->processing = false;
> +	}
>   
>   	BUG_ON(head->extent_op);
>   	if (head->must_insert_reserved)
> @@ -3424,12 +3421,6 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans,
>   	mutex_unlock(&head->mutex);
>   	btrfs_put_delayed_ref_head(head);
>   	return ret;
> -out:
> -	spin_unlock(&head->lock);
> -
> -out_delayed_unlock:
> -	spin_unlock(&delayed_refs->lock);
> -	return 0;
>   }
>   
>   int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
> @@ -3910,13 +3901,13 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>   		 */
>   	}
>   
> -	spin_lock(&space_info->lock);
> -	spin_lock(&block_group->lock);
> -	spin_lock(&fs_info->treelog_bg_lock);
> -	spin_lock(&fs_info->relocation_bg_lock);
> +	guard(spinlock)(&space_info->lock);
> +	guard(spinlock)(&block_group->lock);
> +	guard(spinlock)(&fs_info->treelog_bg_lock);
> +	guard(spinlock)(&fs_info->relocation_bg_lock);
>   
>   	if (ret)
> -		goto out;
> +		goto err;
>   
>   	ASSERT(!ffe_ctl->for_treelog ||
>   	       block_group->start == fs_info->treelog_bg ||
> @@ -3928,8 +3919,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>   	if (block_group->ro ||
>   	    (!ffe_ctl->for_data_reloc &&
>   	     test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &block_group->runtime_flags))) {
> -		ret = 1;
> -		goto out;
> +		goto err;
>   	}
>   
>   	/*
> @@ -3938,8 +3928,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>   	 */
>   	if (ffe_ctl->for_treelog && !fs_info->treelog_bg &&
>   	    (block_group->used || block_group->reserved)) {
> -		ret = 1;
> -		goto out;
> +		goto err;
>   	}
>   
>   	/*
> @@ -3948,8 +3937,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>   	 */
>   	if (ffe_ctl->for_data_reloc && !fs_info->data_reloc_bg &&
>   	    (block_group->used || block_group->reserved)) {
> -		ret = 1;
> -		goto out;
> +		goto err;
>   	}
>   
>   	WARN_ON_ONCE(block_group->alloc_offset > block_group->zone_capacity);
> @@ -3963,8 +3951,7 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>   			ffe_ctl->max_extent_size = avail;
>   			ffe_ctl->total_free_space = avail;
>   		}
> -		ret = 1;
> -		goto out;
> +		goto err;
>   	}
>   
>   	if (ffe_ctl->for_treelog && !fs_info->treelog_bg)
> @@ -4003,17 +3990,14 @@ static int do_allocation_zoned(struct btrfs_block_group *block_group,
>   	 */
>   
>   	ffe_ctl->search_start = ffe_ctl->found_offset;
> +	return 0;
>   
> -out:
> -	if (ret && ffe_ctl->for_treelog)
> +err:
> +	if (ffe_ctl->for_treelog)
>   		fs_info->treelog_bg = 0;
> -	if (ret && ffe_ctl->for_data_reloc)
> +	if (ffe_ctl->for_data_reloc)
>   		fs_info->data_reloc_bg = 0;
> -	spin_unlock(&fs_info->relocation_bg_lock);
> -	spin_unlock(&fs_info->treelog_bg_lock);
> -	spin_unlock(&block_group->lock);
> -	spin_unlock(&space_info->lock);
> -	return ret;
> +	return 1;
>   }
>   
>   static int do_allocation(struct btrfs_block_group *block_group,
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index 451b60de4550..4dbec4ef4ffd 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -995,35 +995,29 @@ struct btrfs_ordered_extent *btrfs_lookup_ordered_range(
>   	struct rb_node *node;
>   	struct btrfs_ordered_extent *entry = NULL;
>   
> -	spin_lock_irq(&inode->ordered_tree_lock);
> +	guard(spinlock_irq)(&inode->ordered_tree_lock);
>   	node = ordered_tree_search(inode, file_offset);
>   	if (!node) {
>   		node = ordered_tree_search(inode, file_offset + len);
>   		if (!node)
> -			goto out;
> +			return NULL;
>   	}
>   
>   	while (1) {
>   		entry = rb_entry(node, struct btrfs_ordered_extent, rb_node);
> -		if (btrfs_range_overlaps(entry, file_offset, len))
> -			break;
> +		if (btrfs_range_overlaps(entry, file_offset, len)) {
> +			refcount_inc(&entry->refs);
> +			trace_btrfs_ordered_extent_lookup_range(inode, entry);
> +			return entry;
> +		}
>   
>   		if (entry->file_offset >= file_offset + len) {
> -			entry = NULL;
> -			break;
> +			return NULL;
>   		}
> -		entry = NULL;
>   		node = rb_next(node);
>   		if (!node)
> -			break;
> +			return NULL;
>   	}
> -out:
> -	if (entry) {
> -		refcount_inc(&entry->refs);
> -		trace_btrfs_ordered_extent_lookup_range(inode, entry);
> -	}
> -	spin_unlock_irq(&inode->ordered_tree_lock);
> -	return entry;
>   }
>   
>   /*
> @@ -1092,7 +1086,7 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
>   	struct rb_node *next;
>   	struct btrfs_ordered_extent *entry = NULL;
>   
> -	spin_lock_irq(&inode->ordered_tree_lock);
> +	guard(spinlock_irq)(&inode->ordered_tree_lock);
>   	node = inode->ordered_tree.rb_node;
>   	/*
>   	 * Here we don't want to use tree_search() which will use tree->last
> @@ -1112,12 +1106,12 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
>   			 * Direct hit, got an ordered extent that starts at
>   			 * @file_offset
>   			 */
> -			goto out;
> +			goto ret_entry;
>   		}
>   	}
>   	if (!entry) {
>   		/* Empty tree */
> -		goto out;
> +		return NULL;
>   	}
>   
>   	cur = &entry->rb_node;
> @@ -1132,22 +1126,20 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
>   	if (prev) {
>   		entry = rb_entry(prev, struct btrfs_ordered_extent, rb_node);
>   		if (btrfs_range_overlaps(entry, file_offset, len))
> -			goto out;
> +			goto ret_entry;
>   	}
>   	if (next) {
>   		entry = rb_entry(next, struct btrfs_ordered_extent, rb_node);
>   		if (btrfs_range_overlaps(entry, file_offset, len))
> -			goto out;
> +			goto ret_entry;
>   	}
>   	/* No ordered extent in the range */
> -	entry = NULL;
> -out:
> -	if (entry) {
> -		refcount_inc(&entry->refs);
> -		trace_btrfs_ordered_extent_lookup_first_range(inode, entry);
> -	}
> +	return NULL;
> +
> +ret_entry:
> +	refcount_inc(&entry->refs);
> +	trace_btrfs_ordered_extent_lookup_first_range(inode, entry);
>   
> -	spin_unlock_irq(&inode->ordered_tree_lock);
>   	return entry;
>   }
>   


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 0/8] use cleanup.h in btrfs
  2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
                   ` (7 preceding siblings ...)
  2025-11-12 18:49 ` [RFC PATCH 8/8] btrfs: simplify cleanup in btrfs_add_qgroup_relation Gladyshev Ilya
@ 2025-11-12 20:55 ` Qu Wenruo
  2025-11-13  8:01   ` Gladyshev Ilya
  8 siblings, 1 reply; 21+ messages in thread
From: Qu Wenruo @ 2025-11-12 20:55 UTC (permalink / raw)
  To: Gladyshev Ilya; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel



On 2025/11/13 05:19, Gladyshev Ilya wrote:
> This series represents my experimentation with refactoring with
> cleanup guards. In my opinion, RAII-style locking improves readability
> in most cases and also improves code robustness for future code changes,
> so I tried to refactor simple cases that really benefit from lock guards.

Although I totally agree with the guard usage, it's not yet determined 
whether we will fully embrace guards.

> 
> However readability is a subjective concept, so you can freely disagree
> and reject any of those changes, I won't insist on any. Please note that
> patches 1-3 can be useful even without lock guards.
> 
> I didn't know how to split this series, mostly because it's just a lot of
> small changes... so I tried to split it by types of transformation:

And even if we're determined to go down the guard path, I doubt it 
should be done in such a rushed way.

There are already some cases where scope based auto-cleanup conversion 
led to some regressions, no matter how trivial they seem.
Thankfully they are all caught early, but we have to ask one critical 
question:

   Have you run the full fstests test suite?

If not, please run it first. Such a huge change is not really that easy 
to review.


Although I love the new scope based auto cleanup, I still tend to be 
more cautious doing the conversion.

Thus my recommendation on the conversion would be:

- Up to the author/expert on the involved field
   E.g. if Filipe wants to use guards for send, he is 100% fine to
   send out dedicated patches to do the conversion.

   This also ensures reviewability, as such a change will only involve
   one functionality.

- During other refactors of the code
   This is pretty much the same for any code-style fixups.
   We do not accept dedicated patches just fixing up whitespace/code-
   style errors.
   But if one is refactoring some code, it's recommended to fix any code-
   style related problems near the touched part.

So I'm afraid we're not yet at the stage of accepting huge conversions.

Thanks,
Qu

> 
> 1. Patches 1-3 include some preparation work and simple fixes I noticed.
> 2. Patches 4-6  gradually increase the complexity of the refactored
>    situations, from simple lock/unlock pairs to scoped guards.
> 3. Patch 7 refactors functions whose control flow can really benefit from
>    removed cleanups on exit. E.g. we can get rid of obscure if statements
>    in exit paths.
> 4. Patch 8 is kinda an example of overdone code refactoring and I predict
>    that it will be dropped anyway.
> 
> There are no TODOs for this series, but it's junk enough to be marked as
> RFC.
> 
> Gladyshev Ilya (8):
>    btrfs: remove redundant label in __del_qgroup_relation
>    btrfs: move kfree out of btrfs_create_qgroup's cleanup path
>    btrfs: simplify control flow in scrub_simple_mirror
>    btrfs: simplify function protections with guards
>    btrfs: use cleanup.h guard()s to simplify unlocks on return
>    btrfs: simplify cleanup via scoped_guard()
>    btrfs: simplify return path via cleanup.h
>    btrfs: simplify cleanup in btrfs_add_qgroup_relation
> 
>   fs/btrfs/block-group.c      |  24 ++----
>   fs/btrfs/compression.c      |  13 ++-
>   fs/btrfs/discard.c          |  20 ++---
>   fs/btrfs/disk-io.c          |   9 +-
>   fs/btrfs/extent-io-tree.c   |  72 ++++++----------
>   fs/btrfs/extent-tree.c      | 104 ++++++++++-------------
>   fs/btrfs/extent_io.c        |  33 ++++----
>   fs/btrfs/file-item.c        |   6 +-
>   fs/btrfs/free-space-cache.c |  87 +++++++------------
>   fs/btrfs/fs.c               |   9 +-
>   fs/btrfs/inode.c            |   3 +-
>   fs/btrfs/ordered-data.c     |  67 ++++++---------
>   fs/btrfs/qgroup.c           | 165 ++++++++++++++----------------------
>   fs/btrfs/raid56.c           |  20 ++---
>   fs/btrfs/scrub.c            |  19 ++---
>   fs/btrfs/send.c             |  40 ++++-----
>   fs/btrfs/space-info.c       |   4 +-
>   fs/btrfs/subpage.c          |  41 +++------
>   fs/btrfs/tree-log.c         |  28 +++---
>   fs/btrfs/volumes.c          |   3 +-
>   fs/btrfs/zoned.c            |  13 +--
>   fs/btrfs/zstd.c             |  13 +--
>   22 files changed, 299 insertions(+), 494 deletions(-)
> 
> 
> base-commit: 24172e0d79900908cf5ebf366600616d29c9b417


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 0/8] use cleanup.h in btrfs
  2025-11-12 20:55 ` [RFC PATCH 0/8] use cleanup.h in btrfs Qu Wenruo
@ 2025-11-13  8:01   ` Gladyshev Ilya
  0 siblings, 0 replies; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-13  8:01 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

On 11/12/25 23:55, Qu Wenruo wrote:
> On 2025/11/13 05:19, Gladyshev Ilya wrote:
>> This series represents my experimentation with refactoring with
>> cleanup guards. In my opinion, RAII-style locking improves readability
>> in most cases and also improves code robustness for future code changes,
>> so I tried to refactor simple cases that really benefit from lock
>> guards.
> 
> Although I totally agree with the guard usage, it's not yet determined 
> whether we will fully embrace guards.
> 
>>
>> However readability is a subjective concept, so you can freely disagree
>> and reject any of those changes, I won't insist on any. Please note that
>> patches 1-3 can be useful even without lock guards.
>>
>> I didn't know how to split this series, mostly because it's just a lot of
>> small changes... so I tried to split it by types of transformation:
> 
> And even if we're determined to go down the guard path, I doubt it 
> should be done in such a rushed way.
> 
> There are already some cases where scope based auto-cleanup conversion 
> led to some regressions, no matter how trivial they seem.
> Thankfully they are all caught early, but we have to ask one critical 
> question:
> 
>    Have you run the full fstests test suite?
> 
> If not, please run it first. Such a huge change is not really that easy 
> to review.

No, because it's RFC only. [However, I tried to verify that no locking 
semantics/scopes changed, and I tried not to touch any really complex 
scenarios.]

> Although I love the new scope based auto cleanup, I still tend to be 
> more cautious doing the conversion.
> 
> Thus my recommendation on the conversion would be:
> 
> - Up to the author/expert on the involved field
>    E.g. if Filipe wants to use guards for send, he is 100% fine to
>    send out dedicated patches to do the conversion.
> 
>    This also ensures reviewability, as such a change will only involve
>    one functionality.
> 
> - During other refactors of the code
>    This is pretty much the same for any code-style fixups.
>    We do not accept dedicated patches just fixing up whitespace/code-
>    style errors.
>    But if one is refactoring some code, it's recommended to fix any code-
>    style related problems near the touched part.

Personally, I don't like this approach for correctness-sensitive 
refactoring. When it's something dedicated and standalone, it's easier 
to focus and verify that all changes are valid.

> So I'm afraid we're not yet at the stage of accepting huge conversions.

I would be surprised if such a patchset were accepted as one piece, 
honestly. :) For now, its only purpose is to show how the code _can_ be 
refactored in theory [not how it _should_ be]. And then, for example, if 
there is positive feedback on scoped guards but not on full-function 
guards, I will send a smaller, fully tested patch with only the approved 
approaches.

I probably should've been clearer about that in the cover letter, sorry...

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 4/8] btrfs: simplify function protections with guards
  2025-11-12 18:49 ` [RFC PATCH 4/8] btrfs: simplify function protections with guards Gladyshev Ilya
@ 2025-11-13  8:43   ` David Sterba
  2025-11-13 10:06     ` Gladyshev Ilya
  0 siblings, 1 reply; 21+ messages in thread
From: David Sterba @ 2025-11-13  8:43 UTC (permalink / raw)
  To: Gladyshev Ilya; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

On Wed, Nov 12, 2025 at 09:49:40PM +0300, Gladyshev Ilya wrote:
> Replaces cases like
> 
> void foo() {
>     spin_lock(&lock);
>     ... some code ...
>     spin_unlock(&lock)
> }
> 
> with
> 
> void foo() {
>     guard(spinlock)(&lock);
>     ... some code ...
> }
> 
> While it doesn't have any measurable impact,

There is impact, as Daniel mentioned elsewhere: this also adds a
variable on the stack. It's measurable on the stack meter; one variable is
"nothing", but combined wherever the guards are used it can add up. We don't
mind adding variables where needed, occasional cleanups and stack
reduction is done. Here it's a systematic stack use increase, not a
reason to reject the guards but still something I cannot just brush off.

> it makes clear that the whole
> function body is protected under lock and removes future errors with
> additional cleanup paths.

The pattern above is the one I find the most problematic, namely in
longer functions or evolved code. Using your example as a starting point,
a followup change adds code outside of the locked section:

void foo() {
    spin_lock(&lock);
    ... some code ...
    spin_unlock(&lock)

    new code;
}

with

void foo() {
    guard(spinlock)(&lock);
    ... some code ...

    new code;
}

This will lead to longer critical sections or even incorrect locking
when the new code must not run under the lock.

The fix is to convert it to scoped locking, with indentation and churn
to code unrelated to the new change.

Suggestions like refactoring to make minimal helpers and functions are
another unnecessary churn and break the code reading flow.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 6/8] btrfs: simplify cleanup via scoped_guard()
  2025-11-12 18:49 ` [RFC PATCH 6/8] btrfs: simplify cleanup via scoped_guard() Gladyshev Ilya
@ 2025-11-13  8:48   ` David Sterba
  0 siblings, 0 replies; 21+ messages in thread
From: David Sterba @ 2025-11-13  8:48 UTC (permalink / raw)
  To: Gladyshev Ilya; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

On Wed, Nov 12, 2025 at 09:49:42PM +0300, Gladyshev Ilya wrote:
> Simplify cases with multiple unlock paths like
> 
> spin_lock(&lock);
> if (something) {
>         spin_unlock(&lock);
>         goto faraway; // or return
> }
> spin_unlock(&lock);
> 
> with scoped_guards() to improve readability and robustness.
> 
> Signed-off-by: Gladyshev Ilya <foxido@foxido.dev>
> ---
>  fs/btrfs/block-group.c      | 20 +++++-------
>  fs/btrfs/compression.c      | 13 ++++----
>  fs/btrfs/extent-tree.c      |  8 ++---
>  fs/btrfs/extent_io.c        | 33 +++++++++----------
>  fs/btrfs/free-space-cache.c | 63 +++++++++++++++----------------------
>  fs/btrfs/qgroup.c           | 38 +++++++++++-----------
>  fs/btrfs/send.c             | 37 ++++++++++------------
>  fs/btrfs/tree-log.c         | 13 +++-----

This is probably the best example of the worst case of the conversions:
8 files changed, many of them likely targets of future fixes that would
need to be backported; churn in many lines besides locking; code flow
and error handling are supposed to be improved, but each instance needs
to be verified anyway. So this is quite a costly cleanup.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h
  2025-11-12 20:50   ` Qu Wenruo
@ 2025-11-13  8:54     ` David Sterba
  2025-11-13 12:48       ` Daniel Vacek
  0 siblings, 1 reply; 21+ messages in thread
From: David Sterba @ 2025-11-13  8:54 UTC (permalink / raw)
  To: Qu Wenruo
  Cc: Gladyshev Ilya, Chris Mason, David Sterba, linux-btrfs,
	linux-kernel

On Thu, Nov 13, 2025 at 07:20:01AM +1030, Qu Wenruo wrote:
> > --- a/fs/btrfs/extent-tree.c
> > +++ b/fs/btrfs/extent-tree.c
> > @@ -1878,16 +1878,14 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
> >   	 * and then re-check to make sure nobody got added.
> >   	 */
> >   	spin_unlock(&head->lock);
> > -	spin_lock(&delayed_refs->lock);
> > -	spin_lock(&head->lock);
> > -	if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op) {
> > -		spin_unlock(&head->lock);
> > -		spin_unlock(&delayed_refs->lock);
> > -		return 1;
> > +	{
> 
> There are some internal discussion about such anonymous code block usage.
> 
> Although I support such usage, especially when it can reduce the 
> lifespan of local variables, it's not a commonly accepted pattern yet.

And the discussion is going great; I think we won't find a consensus 
without somebody either missing a coding pattern (you) or suffering 
every time they look at such code (me). Others have similar mixed 
feelings about the guard use.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 4/8] btrfs: simplify function protections with guards
  2025-11-13  8:43   ` David Sterba
@ 2025-11-13 10:06     ` Gladyshev Ilya
  2025-11-13 11:25       ` David Sterba
  0 siblings, 1 reply; 21+ messages in thread
From: Gladyshev Ilya @ 2025-11-13 10:06 UTC (permalink / raw)
  To: dsterba; +Cc: Chris Mason, David Sterba, linux-btrfs, linux-kernel

On 11/13/25 11:43, David Sterba wrote:
> On Wed, Nov 12, 2025 at 09:49:40PM +0300, Gladyshev Ilya wrote:
>> Replaces cases like
>>
>> void foo() {
>>      spin_lock(&lock);
>>      ... some code ...
>>      spin_unlock(&lock)
>> }
>>
>> with
>>
>> void foo() {
>>      guard(spinlock)(&lock);
>>      ... some code ...
>> }
>>
>> While it doesn't have any measurable impact,
> 
> There is impact, as Daniel mentioned elsewhere: this also adds a
> variable on the stack. It's measurable on the stack meter; one variable is
> "nothing", but combined wherever the guards are used it can add up. We don't
> mind adding variables where needed, occasional cleanups and stack
> reduction is done. Here it's a systematic stack use increase, not a
> reason to reject the guards but still something I cannot just brush off
I thought it would be optimized out by the compiler in the end; I will 
probably recheck this.

>> it makes clear that the whole
>> function body is protected under lock and removes future errors with
>> additional cleanup paths.
> 
> The pattern above is the one I find problematic the most, namely in
> longer functions or evolved code. Using your example as starting point
> a followup change adds code outside of the locked section:
> 
> void foo() {
>      spin_lock(&lock);
>      ... some code ...
>      spin_unlock(&lock)
> 
>      new code;
> }
> 
> with
> 
> void foo() {
>      guard(spinlock)(&lock);
>      ... some code ...
> 
>      new code;
> }
> 
> This will lead to longer critical sections or even incorrect code
> regarding locking when new code must not run under lock.
> 
> The fix is to convert it to scoped locking, with indentation and code
> churn to unrelated code to the new one.
> 
> Suggestions like refactoring to make minimal helpers and functions are
> another unnecessary churn and break the code reading flow.

What if something like C++ unique_lock existed? So code like this would 
be possible:

void foo() {
   GUARD = unique_lock(&spin);

   if (a)
     // No unlocking required -> it will be done automatically
     return;

   unlock(GUARD);

   ... unlocked code ...

   // Guard understands that it's unlocked and does nothing
   return;
}

It has similar semantics to scoped_guard [however it has weaker 
protection -- a goto from the locked section can bypass `unlock` and 
hold the lock until return], but it doesn't introduce diff noise 
correlated with indentation.

Another approach is allowing scoped_guard to use a different indentation 
style to avoid indenting the internal block [like goto labels, for 
example].

However, both of these approaches have their own downsides and are pure 
proposals.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 4/8] btrfs: simplify function protections with guards
  2025-11-13 10:06     ` Gladyshev Ilya
@ 2025-11-13 11:25       ` David Sterba
  2025-11-13 12:30         ` Daniel Vacek
  0 siblings, 1 reply; 21+ messages in thread
From: David Sterba @ 2025-11-13 11:25 UTC (permalink / raw)
  To: Gladyshev Ilya; +Cc: David Sterba, neelx, linux-btrfs, linux-kernel

On Thu, Nov 13, 2025 at 01:06:42PM +0300, Gladyshev Ilya wrote:
> On 11/13/25 11:43, David Sterba wrote:
> > On Wed, Nov 12, 2025 at 09:49:40PM +0300, Gladyshev Ilya wrote:
> >> Replaces cases like
> >>
> >> void foo() {
> >>      spin_lock(&lock);
> >>      ... some code ...
> >>      spin_unlock(&lock)
> >> }
> >>
> >> with
> >>
> >> void foo() {
> >>      guard(spinlock)(&lock);
> >>      ... some code ...
> >> }
> >>
> >> While it doesn't have any measurable impact,
> > 
> > There is impact, as Daniel mentioned elsewhere: this also adds a
> > variable on the stack. It's measurable on the stack meter; one variable is
> > "nothing", but combined wherever the guards are used it can add up. We don't
> > mind adding variables where needed, occasional cleanups and stack
> > reduction is done. Here it's a systematic stack use increase, not a
> > reason to reject the guards but still something I cannot just brush off
> I thought it would be optimized out by the compiler in the end, I will 
> probably recheck this

There are cases where the compiler will optimize it out, IIRC when the lock
is embedded in a structure, and not when the pointer is dereferenced.

> >> it makes clear that the whole
> >> function body is protected under lock and removes future errors with
> >> additional cleanup paths.
> > 
> > The pattern above is the one I find problematic the most, namely in
> > longer functions or evolved code. Using your example as starting point
> > a followup change adds code outside of the locked section:
> > 
> > void foo() {
> >      spin_lock(&lock);
> >      ... some code ...
> >      spin_unlock(&lock)
> > 
> >      new code;
> > }
> > 
> > with
> > 
> > void foo() {
> >      guard(spinlock)(&lock);
> >      ... some code ...
> > 
> >      new code;
> > }
> > 
> > This will lead to longer critical sections or even incorrect code
> > regarding locking when new code must not run under lock.
> > 
> > The fix is to convert it to scoped locking, with indentation and code
> > churn to unrelated code to the new one.
> > 
> > Suggestions like refactoring to make minimal helpers and functions are
> > another unnecessary churn and break the code reading flow.
> 
> What if something like C++ unique_lock existed? So code like this would 
> be possible:
> 
> void foo() {
>    GUARD = unique_lock(&spin);
> 
>    if (a)
>      // No unlocking required -> it will be done automatically
>      return;
> 
>    unlock(GUARD);
> 
>    ... unlocked code ...
> 
> >    // Guard understands that it's unlocked and does nothing
>    return;
> }
> 
> It has similar semantics to scoped_guard [however it has weaker 
> protection -- goto from locked section can bypass `unlock` and hold lock 
> until return], but it doesn't introduce diff noise correlated with 
> indentations.
> 
> Another approach is allowing scoped_guards to have different indentation 
> codestyle to avoid indentation of internal block [like goto labels for 
> example].
> 
> However, both of these approaches have their own downsides and are pure 
> proposals.

Thanks, it's good to have concrete examples to discuss. It feels like 
we'd be switching away from C and forcing patterns that are common in 
other languages like C++, which do things under the hood; that's fine 
there, as it matches the mental programming model one has. This feels 
alien to kernel C programming, namely for locking, where "hide things" 
is IMO worse than "make things explicit".

There are cases where switching to guards would be reasonable because
the functions are short and simple, but then it does not utilize the
semantics of automatic cleanup. In more complex functions it would mean
making more structural changes to the code, at the cost of churn and
merge conflicts in backported patches.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 4/8] btrfs: simplify function protections with guards
  2025-11-13 11:25       ` David Sterba
@ 2025-11-13 12:30         ` Daniel Vacek
  0 siblings, 0 replies; 21+ messages in thread
From: Daniel Vacek @ 2025-11-13 12:30 UTC (permalink / raw)
  To: dsterba; +Cc: Gladyshev Ilya, David Sterba, linux-btrfs, linux-kernel

On Thu, 13 Nov 2025 at 12:25, David Sterba <dsterba@suse.cz> wrote:
>
> On Thu, Nov 13, 2025 at 01:06:42PM +0300, Gladyshev Ilya wrote:
> > On 11/13/25 11:43, David Sterba wrote:
> > > On Wed, Nov 12, 2025 at 09:49:40PM +0300, Gladyshev Ilya wrote:
> > >> Replaces cases like
> > >>
> > >> void foo() {
> > >>      spin_lock(&lock);
> > >>      ... some code ...
> > >>      spin_unlock(&lock)
> > >> }
> > >>
> > >> with
> > >>
> > >> void foo() {
> > >>      guard(spinlock)(&lock);
> > >>      ... some code ...
> > >> }
> > >>
> > >> While it doesn't have any measurable impact,
> > >
> > > There is impact, as Daniel mentioned elsewhere: this also adds a
> > > variable on the stack. It's measurable on the stack meter; one variable is
> > > "nothing", but combined wherever the guards are used it can add up. We don't
> > > mind adding variables where needed, occasional cleanups and stack
> > > reduction is done. Here it's a systematic stack use increase, not a
> > > reason to reject the guards but still something I cannot just brush off
> > I thought it would be optimized out by the compiler in the end, I will
> > probably recheck this
>
> There are cases where the compiler will optimize it out, IIRC when the lock
> is embedded in a structure, and not when the pointer is dereferenced.

Yes. If you have a pointer to a structure with a lock embedded in it,
the compiler knows the local variable is stable (as in non-volatile or
const, the pointer does not change) and is happy to use it and
optimize out the explicit copy. It knows both values are the same and
so it is safe to do so. At least starting with -O2 with gcc with my
tests.

But if you have a pointer to a pointer to some structure, the compiler
(unlike us) cannot make the same assumptions. Strictly speaking, since
the address of the structure containing the lock is not in a local
variable but somewhere in memory, it can change during the
execution of the function. We as developers know the pointers are
reasonably stable as the code is usually designed in such a way. And
if it's not, we know that as well and modify the locking accordingly.
So usually we're happy to `spin_unlock(&foo->bar->lock)`, but for the
compiler this may be a different lock than the one initially locked
before. From the language point of view, the address of `bar` could
have changed in the meantime. And so the compiler is forced to use the
local copy of the pointer to the lock saved when acquiring the lock
and cannot optimize it out.

So this really depends case by case. Using a guard in place of
patterns like `spin_unlock(&foo->lock)` can be optimized out, while a
guard for patterns like `spin_unlock(&foo->bar->lock)` needs to use
extra stack space for the local variable storing a copy of the lock
address.

You would really have to use `struct bar *bar = foo->bar;
guard(spinlock)(&bar->lock); ...`. But then you are explicitly using
the stack yourself. So it's equivalent.

> > >> it makes clear that the whole
> > >> function body is protected under lock and removes future errors with
> > >> additional cleanup paths.
> > >
> > > The pattern above is the one I find problematic the most, namely in
> > > longer functions or evolved code. Using your example as starting point
> > > a followup change adds code outside of the locked section:
> > >
> > > void foo() {
> > >      spin_lock(&lock);
> > >      ... some code ...
> > >      spin_unlock(&lock)
> > >
> > >      new code;
> > > }
> > >
> > > with
> > >
> > > void foo() {
> > >      guard(spinlock)(&lock);
> > >      ... some code ...
> > >
> > >      new code;
> > > }
> > >
> > > This will lead to longer critical sections or even incorrect code
> > > regarding locking when new code must not run under lock.
> > >
> > > The fix is to convert it to scoped locking, with indentation and code
> > > churn to unrelated code to the new one.
> > >
> > > Suggestions like refactoring to make minimal helpers and functions are
> > > another unnecessary churn and break the code reading flow.
> >
> > What if something like C++ unique_lock existed? So code like this would
> > be possible:
> >
> > void foo() {
> >    GUARD = unique_lock(&spin);
> >
> >    if (a)
> >      // No unlocking required -> it will be done automatically
> >      return;
> >
> >    unlock(GUARD);
> >
> >    ... unlocked code ...
> >
> > >    // Guard understands that it's unlocked and does nothing
> >    return;
> > }
> >
> > It has similar semantics to scoped_guard [however it has weaker
> > protection -- goto from locked section can bypass `unlock` and hold lock
> > until return], but it doesn't introduce diff noise correlated with
> > indentations.
> >
> > Another approach is allowing scoped_guards to have different indentation
> > codestyle to avoid indentation of internal block [like goto labels for
> > example].
> >
> > However, both of these approaches have their own downsides and are pure
> > proposals.
>
> Thanks, it's good to have concrete examples to discuss. It feels like
> we'd switch away from C and force patterns that are maybe common in
> other languages like C++ that do things under the hood and it's fine
> there as it's the mental programming model one has. This feels alien to
> kernel C programming, namely for locking where the "hide things" is IMO
> worse than "make things explicit".
>
> There are cases where switching to guards would be reasonable because
> the functions are short and simple but then it does not utilize the
> semantics of automatic cleanup. In more complex functions it would mean
> to make more structural changes to the code at the cost of churn and
> merge conflicts of backported patches.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h
  2025-11-13  8:54     ` David Sterba
@ 2025-11-13 12:48       ` Daniel Vacek
  0 siblings, 0 replies; 21+ messages in thread
From: Daniel Vacek @ 2025-11-13 12:48 UTC (permalink / raw)
  To: dsterba
  Cc: Qu Wenruo, Gladyshev Ilya, Chris Mason, David Sterba, linux-btrfs,
	linux-kernel

On Thu, 13 Nov 2025 at 09:59, David Sterba <dsterba@suse.cz> wrote:
>
> On Thu, Nov 13, 2025 at 07:20:01AM +1030, Qu Wenruo wrote:
> > > --- a/fs/btrfs/extent-tree.c
> > > +++ b/fs/btrfs/extent-tree.c
> > > @@ -1878,16 +1878,14 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
> > >      * and then re-check to make sure nobody got added.
> > >      */
> > >     spin_unlock(&head->lock);
> > > -   spin_lock(&delayed_refs->lock);
> > > -   spin_lock(&head->lock);
> > > -   if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op) {
> > > -           spin_unlock(&head->lock);
> > > -           spin_unlock(&delayed_refs->lock);
> > > -           return 1;
> > > +   {
> >
> > There are some internal discussion about such anonymous code block usage.
> >
> > Although I support such usage, especially when it can reduce the
> > lifespan of local variables, it's not a commonly accepted pattern yet.
>
> And the discussion is going great, I think we wont't find a consensus
> without somebody either missing a coding pattern (you) or suffering to
> look at such code each time (me). Others have similar mixed feelings
> about the guards use.

And yet I can imagine even wilder creativity like:

> +     scoped_guard(spinlock, &delayed_refs->lock)
> +     scoped_guard(spinlock, &head->lock) {
> +             if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op)
> +                     return 1;
> +             btrfs_delete_ref_head(fs_info, delayed_refs, head);
>       }

Here the indentation is irregular, but it still looks kind of justified.
Would we be happy with such exceptions?

Otherwise this could end up rather mixed and that does not look
preferable, at least to me:

> +     scoped_guard(spinlock, &delayed_refs->lock) {
> +             guard(spinlock)(&head->lock);
> +             if (!RB_EMPTY_ROOT(&head->ref_tree.rb_root) || head->extent_op)
> +                     return 1;
> +             btrfs_delete_ref_head(fs_info, delayed_refs, head);
>       }

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2025-11-13 12:48 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-11-12 18:49 [RFC PATCH 0/8] use cleanup.h in btrfs Gladyshev Ilya
2025-11-12 18:49 ` [RFC PATCH 1/8] btrfs: remove redundant label in __del_qgroup_relation Gladyshev Ilya
2025-11-12 18:49 ` [RFC PATCH 2/8] btrfs: move kfree out of btrfs_create_qgroup's cleanup path Gladyshev Ilya
2025-11-12 20:30   ` Qu Wenruo
2025-11-12 18:49 ` [RFC PATCH 3/8] btrfs: simplify control flow in scrub_simple_mirror Gladyshev Ilya
2025-11-12 18:49 ` [RFC PATCH 4/8] btrfs: simplify function protections with guards Gladyshev Ilya
2025-11-13  8:43   ` David Sterba
2025-11-13 10:06     ` Gladyshev Ilya
2025-11-13 11:25       ` David Sterba
2025-11-13 12:30         ` Daniel Vacek
2025-11-12 18:49 ` [RFC PATCH 5/8] btrfs: use cleanup.h guard()s to simplify unlocks on return Gladyshev Ilya
2025-11-12 18:49 ` [RFC PATCH 6/8] btrfs: simplify cleanup via scoped_guard() Gladyshev Ilya
2025-11-13  8:48   ` David Sterba
2025-11-12 18:49 ` [RFC PATCH 7/8] btrfs: simplify return path via cleanup.h Gladyshev Ilya
2025-11-12 20:50   ` Qu Wenruo
2025-11-13  8:54     ` David Sterba
2025-11-13 12:48       ` Daniel Vacek
2025-11-12 18:49 ` [RFC PATCH 8/8] btrfs: simplify cleanup in btrfs_add_qgroup_relation Gladyshev Ilya
2025-11-12 20:46   ` Qu Wenruo
2025-11-12 20:55 ` [RFC PATCH 0/8] use cleanup.h in btrfs Qu Wenruo
2025-11-13  8:01   ` Gladyshev Ilya

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox