public inbox for linux-bcachefs@vger.kernel.org
* [PATCH 0/4] bcachefs: Random cleanup
@ 2025-04-15  5:33 Alan Huang
  2025-04-15  5:33 ` [PATCH 1/4] bcachefs: Kill bch2_trans_unlock_noassert Alan Huang
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Alan Huang @ 2025-04-15  5:33 UTC (permalink / raw)
  To: kent.overstreet; +Cc: linux-bcachefs, Alan Huang

Alan Huang (4):
  bcachefs: Kill bch2_trans_unlock_noassert
  bcachefs: mark_btree_node_locked_noreset ->
    mark_btree_node_locked_reset
  bcachefs: Remove spurious +1/-1 operation
  bcachefs: Simplify logic

 fs/bcachefs/btree_cache.c           |  2 +-
 fs/bcachefs/btree_io.c              |  8 ++------
 fs/bcachefs/btree_key_cache.c       |  6 +++---
 fs/bcachefs/btree_locking.c         | 13 +++----------
 fs/bcachefs/btree_locking.h         | 11 +++++------
 fs/bcachefs/btree_types.h           |  4 ++--
 fs/bcachefs/btree_update_interior.c |  4 ++--
 7 files changed, 18 insertions(+), 30 deletions(-)

-- 
2.48.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/4] bcachefs: Kill bch2_trans_unlock_noassert
  2025-04-15  5:33 [PATCH 0/4] bcachefs: Random cleanup Alan Huang
@ 2025-04-15  5:33 ` Alan Huang
  2025-04-15 14:34   ` Kent Overstreet
  2025-04-15  5:33 ` [PATCH 2/4] bcachefs: mark_btree_node_locked_noreset -> mark_btree_node_locked_reset Alan Huang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Alan Huang @ 2025-04-15  5:33 UTC (permalink / raw)
  To: kent.overstreet; +Cc: linux-bcachefs, Alan Huang

Signed-off-by: Alan Huang <mmpgouride@gmail.com>
---
 fs/bcachefs/btree_cache.c   | 2 +-
 fs/bcachefs/btree_locking.c | 7 -------
 fs/bcachefs/btree_locking.h | 1 -
 3 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/fs/bcachefs/btree_cache.c b/fs/bcachefs/btree_cache.c
index 9b80201c7982..153980a9a1cc 100644
--- a/fs/bcachefs/btree_cache.c
+++ b/fs/bcachefs/btree_cache.c
@@ -978,7 +978,7 @@ static noinline struct btree *bch2_btree_node_fill(struct btree_trans *trans,
 
 		/* Unlock before doing IO: */
 		six_unlock_intent(&b->c.lock);
-		bch2_trans_unlock_noassert(trans);
+		bch2_trans_unlock(trans);
 
 		bch2_btree_node_read(trans, b, sync);
 
diff --git a/fs/bcachefs/btree_locking.c b/fs/bcachefs/btree_locking.c
index 94eb2b73a843..f4f563944340 100644
--- a/fs/bcachefs/btree_locking.c
+++ b/fs/bcachefs/btree_locking.c
@@ -799,13 +799,6 @@ int bch2_trans_relock_notrace(struct btree_trans *trans)
 	return __bch2_trans_relock(trans, false);
 }
 
-void bch2_trans_unlock_noassert(struct btree_trans *trans)
-{
-	__bch2_trans_unlock(trans);
-
-	trans_set_unlocked(trans);
-}
-
 void bch2_trans_unlock(struct btree_trans *trans)
 {
 	__bch2_trans_unlock(trans);
diff --git a/fs/bcachefs/btree_locking.h b/fs/bcachefs/btree_locking.h
index b33ab7af8440..66b27c0853a5 100644
--- a/fs/bcachefs/btree_locking.h
+++ b/fs/bcachefs/btree_locking.h
@@ -15,7 +15,6 @@
 
 void bch2_btree_lock_init(struct btree_bkey_cached_common *, enum six_lock_init_flags, gfp_t gfp);
 
-void bch2_trans_unlock_noassert(struct btree_trans *);
 void bch2_trans_unlock_write(struct btree_trans *);
 
 static inline bool is_btree_node(struct btree_path *path, unsigned l)
-- 
2.48.1



* [PATCH 2/4] bcachefs: mark_btree_node_locked_noreset -> mark_btree_node_locked_reset
  2025-04-15  5:33 [PATCH 0/4] bcachefs: Random cleanup Alan Huang
  2025-04-15  5:33 ` [PATCH 1/4] bcachefs: Kill bch2_trans_unlock_noassert Alan Huang
@ 2025-04-15  5:33 ` Alan Huang
  2025-04-15 14:34   ` Kent Overstreet
  2025-04-15  5:33 ` [PATCH 3/4] bcachefs: Remove spurious +1/-1 operation Alan Huang
  2025-04-15  5:33 ` [PATCH 4/4] bcachefs: Simplify logic Alan Huang
  3 siblings, 1 reply; 11+ messages in thread
From: Alan Huang @ 2025-04-15  5:33 UTC (permalink / raw)
  To: kent.overstreet; +Cc: linux-bcachefs, Alan Huang

The semantics are 'reset' now; rename the function to reflect that.

Signed-off-by: Alan Huang <mmpgouride@gmail.com>
---
 fs/bcachefs/btree_key_cache.c       |  6 +++---
 fs/bcachefs/btree_locking.c         |  6 +++---
 fs/bcachefs/btree_locking.h         | 10 +++++-----
 fs/bcachefs/btree_update_interior.c |  4 ++--
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/bcachefs/btree_key_cache.c b/fs/bcachefs/btree_key_cache.c
index 2b186584a291..c7ad01c1355c 100644
--- a/fs/bcachefs/btree_key_cache.c
+++ b/fs/bcachefs/btree_key_cache.c
@@ -240,7 +240,7 @@ static int btree_key_cache_create(struct btree_trans *trans,
 	ck->flags		= 1U << BKEY_CACHED_ACCESSED;
 
 	if (unlikely(key_u64s > ck->u64s)) {
-		mark_btree_node_locked_noreset(ck_path, 0, BTREE_NODE_UNLOCKED);
+		mark_btree_node_locked_reset(ck_path, 0, BTREE_NODE_UNLOCKED);
 
 		struct bkey_i *new_k = allocate_dropping_locks(trans, ret,
 				kmalloc(key_u64s * sizeof(u64), _gfp));
@@ -282,7 +282,7 @@ static int btree_key_cache_create(struct btree_trans *trans,
 	return 0;
 err:
 	bkey_cached_free(bc, ck);
-	mark_btree_node_locked_noreset(ck_path, 0, BTREE_NODE_UNLOCKED);
+	mark_btree_node_locked_reset(ck_path, 0, BTREE_NODE_UNLOCKED);
 
 	return ret;
 }
@@ -500,7 +500,7 @@ static int btree_key_cache_flush_pos(struct btree_trans *trans,
 			atomic_long_dec(&c->btree_key_cache.nr_dirty);
 		}
 
-		mark_btree_node_locked_noreset(path, 0, BTREE_NODE_UNLOCKED);
+		mark_btree_node_locked_reset(path, 0, BTREE_NODE_UNLOCKED);
 		if (bkey_cached_evict(&c->btree_key_cache, ck)) {
 			bkey_cached_free(&c->btree_key_cache, ck);
 		} else {
diff --git a/fs/bcachefs/btree_locking.c b/fs/bcachefs/btree_locking.c
index f4f563944340..71dbac0bcd58 100644
--- a/fs/bcachefs/btree_locking.c
+++ b/fs/bcachefs/btree_locking.c
@@ -435,7 +435,7 @@ int __bch2_btree_node_lock_write(struct btree_trans *trans, struct btree_path *p
 	six_lock_readers_add(&b->lock, readers);
 
 	if (ret)
-		mark_btree_node_locked_noreset(path, b->level, BTREE_NODE_INTENT_LOCKED);
+		mark_btree_node_locked_reset(path, b->level, BTREE_NODE_INTENT_LOCKED);
 
 	return ret;
 }
@@ -564,7 +564,7 @@ bool bch2_btree_node_upgrade(struct btree_trans *trans,
 	trace_and_count(trans->c, btree_path_upgrade_fail, trans, _RET_IP_, path, level);
 	return false;
 success:
-	mark_btree_node_locked_noreset(path, level, BTREE_NODE_INTENT_LOCKED);
+	mark_btree_node_locked_reset(path, level, BTREE_NODE_INTENT_LOCKED);
 	return true;
 }
 
@@ -693,7 +693,7 @@ void __bch2_btree_path_downgrade(struct btree_trans *trans,
 		} else {
 			if (btree_node_intent_locked(path, l)) {
 				six_lock_downgrade(&path->l[l].b->c.lock);
-				mark_btree_node_locked_noreset(path, l, BTREE_NODE_READ_LOCKED);
+				mark_btree_node_locked_reset(path, l, BTREE_NODE_READ_LOCKED);
 			}
 			break;
 		}
diff --git a/fs/bcachefs/btree_locking.h b/fs/bcachefs/btree_locking.h
index 66b27c0853a5..8978f7969bef 100644
--- a/fs/bcachefs/btree_locking.h
+++ b/fs/bcachefs/btree_locking.h
@@ -63,7 +63,7 @@ static inline bool btree_node_locked(struct btree_path *path, unsigned level)
 	return btree_node_locked_type(path, level) != BTREE_NODE_UNLOCKED;
 }
 
-static inline void mark_btree_node_locked_noreset(struct btree_path *path,
+static inline void mark_btree_node_locked_reset(struct btree_path *path,
 						  unsigned level,
 						  enum btree_node_locked_type type)
 {
@@ -80,7 +80,7 @@ static inline void mark_btree_node_locked(struct btree_trans *trans,
 					  unsigned level,
 					  enum btree_node_locked_type type)
 {
-	mark_btree_node_locked_noreset(path, level, (enum btree_node_locked_type) type);
+	mark_btree_node_locked_reset(path, level, (enum btree_node_locked_type) type);
 #ifdef CONFIG_BCACHEFS_LOCK_TIME_STATS
 	path->l[level].lock_taken_time = local_clock();
 #endif
@@ -134,7 +134,7 @@ static inline void btree_node_unlock(struct btree_trans *trans,
 		}
 		six_unlock_type(&path->l[level].b->c.lock, lock_type);
 		btree_trans_lock_hold_time_update(trans, path, level);
-		mark_btree_node_locked_noreset(path, level, BTREE_NODE_UNLOCKED);
+		mark_btree_node_locked_reset(path, level, BTREE_NODE_UNLOCKED);
 	}
 }
 
@@ -183,7 +183,7 @@ bch2_btree_node_unlock_write_inlined(struct btree_trans *trans, struct btree_pat
 	EBUG_ON(path->l[b->c.level].lock_seq != six_lock_seq(&b->c.lock));
 	EBUG_ON(btree_node_locked_type(path, b->c.level) != SIX_LOCK_write);
 
-	mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
+	mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
 	__bch2_btree_node_unlock_write(trans, b);
 }
 
@@ -315,7 +315,7 @@ static inline int __btree_node_lock_write(struct btree_trans *trans,
 	 * write lock: thus, we need to tell the cycle detector we have a write
 	 * lock _before_ taking the lock:
 	 */
-	mark_btree_node_locked_noreset(path, b->level, BTREE_NODE_WRITE_LOCKED);
+	mark_btree_node_locked_reset(path, b->level, BTREE_NODE_WRITE_LOCKED);
 
 	return likely(six_trylock_write(&b->lock))
 		? 0
diff --git a/fs/bcachefs/btree_update_interior.c b/fs/bcachefs/btree_update_interior.c
index 55fbeeb8eaaa..29e03408a019 100644
--- a/fs/bcachefs/btree_update_interior.c
+++ b/fs/bcachefs/btree_update_interior.c
@@ -245,7 +245,7 @@ static void bch2_btree_node_free_inmem(struct btree_trans *trans,
 	mutex_unlock(&c->btree_cache.lock);
 
 	six_unlock_write(&b->c.lock);
-	mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
+	mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
 
 	bch2_trans_node_drop(trans, b);
 }
@@ -788,7 +788,7 @@ static void btree_update_nodes_written(struct btree_update *as)
 
 		mutex_unlock(&c->btree_interior_update_lock);
 
-		mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
+		mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
 		six_unlock_write(&b->c.lock);
 
 		btree_node_write_if_need(trans, b, SIX_LOCK_intent);
-- 
2.48.1



* [PATCH 3/4] bcachefs: Remove spurious +1/-1 operation
  2025-04-15  5:33 [PATCH 0/4] bcachefs: Random cleanup Alan Huang
  2025-04-15  5:33 ` [PATCH 1/4] bcachefs: Kill bch2_trans_unlock_noassert Alan Huang
  2025-04-15  5:33 ` [PATCH 2/4] bcachefs: mark_btree_node_locked_noreset -> mark_btree_node_locked_reset Alan Huang
@ 2025-04-15  5:33 ` Alan Huang
  2025-04-15 14:35   ` Kent Overstreet
  2025-04-15  5:33 ` [PATCH 4/4] bcachefs: Simplify logic Alan Huang
  3 siblings, 1 reply; 11+ messages in thread
From: Alan Huang @ 2025-04-15  5:33 UTC (permalink / raw)
  To: kent.overstreet; +Cc: linux-bcachefs, Alan Huang

Signed-off-by: Alan Huang <mmpgouride@gmail.com>
---
 fs/bcachefs/btree_types.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/bcachefs/btree_types.h b/fs/bcachefs/btree_types.h
index dd109dea0f1c..c2e05824e99b 100644
--- a/fs/bcachefs/btree_types.h
+++ b/fs/bcachefs/btree_types.h
@@ -656,13 +656,13 @@ static inline struct bset_tree *bset_tree_last(struct btree *b)
 static inline void *
 __btree_node_offset_to_ptr(const struct btree *b, u16 offset)
 {
-	return (void *) ((u64 *) b->data + 1 + offset);
+	return (void *) ((u64 *) b->data + offset);
 }
 
 static inline u16
 __btree_node_ptr_to_offset(const struct btree *b, const void *p)
 {
-	u16 ret = (u64 *) p - 1 - (u64 *) b->data;
+	u16 ret = (u64 *) p - (u64 *) b->data;
 
 	EBUG_ON(__btree_node_offset_to_ptr(b, ret) != p);
 	return ret;
-- 
2.48.1



* [PATCH 4/4] bcachefs: Simplify logic
  2025-04-15  5:33 [PATCH 0/4] bcachefs: Random cleanup Alan Huang
                   ` (2 preceding siblings ...)
  2025-04-15  5:33 ` [PATCH 3/4] bcachefs: Remove spurious +1/-1 operation Alan Huang
@ 2025-04-15  5:33 ` Alan Huang
  2025-04-15 14:36   ` Kent Overstreet
  3 siblings, 1 reply; 11+ messages in thread
From: Alan Huang @ 2025-04-15  5:33 UTC (permalink / raw)
  To: kent.overstreet; +Cc: linux-bcachefs, Alan Huang

Signed-off-by: Alan Huang <mmpgouride@gmail.com>
---
 fs/bcachefs/btree_io.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/fs/bcachefs/btree_io.c b/fs/bcachefs/btree_io.c
index 14e3329baa43..afcd13092807 100644
--- a/fs/bcachefs/btree_io.c
+++ b/fs/bcachefs/btree_io.c
@@ -1017,7 +1017,6 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
 	bool used_mempool, blacklisted;
 	bool updated_range = b->key.k.type == KEY_TYPE_btree_ptr_v2 &&
 		BTREE_PTR_RANGE_UPDATED(&bkey_i_to_btree_ptr_v2(&b->key)->v);
-	unsigned u64s;
 	unsigned ptr_written = btree_ptr_sectors_written(bkey_i_to_s_c(&b->key));
 	u64 max_journal_seq = 0;
 	struct printbuf buf = PRINTBUF;
@@ -1224,23 +1223,20 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
 	sorted = btree_bounce_alloc(c, btree_buf_bytes(b), &used_mempool);
 	sorted->keys.u64s = 0;
 
-	set_btree_bset(b, b->set, &b->data->keys);
-
 	b->nr = bch2_key_sort_fix_overlapping(c, &sorted->keys, iter);
 	memset((uint8_t *)(sorted + 1) + b->nr.live_u64s * sizeof(u64), 0,
 			btree_buf_bytes(b) -
 			sizeof(struct btree_node) -
 			b->nr.live_u64s * sizeof(u64));
 
-	u64s = le16_to_cpu(sorted->keys.u64s);
+	b->data->keys.u64s = sorted->keys.u64s;
 	*sorted = *b->data;
-	sorted->keys.u64s = cpu_to_le16(u64s);
 	swap(sorted, b->data);
 	set_btree_bset(b, b->set, &b->data->keys);
 	b->nsets = 1;
 	b->data->keys.journal_seq = cpu_to_le64(max_journal_seq);
 
-	BUG_ON(b->nr.live_u64s != u64s);
+	BUG_ON(b->nr.live_u64s != le16_to_cpu(b->data->keys.u64s));
 
 	btree_bounce_free(c, btree_buf_bytes(b), used_mempool, sorted);
 
-- 
2.48.1



* Re: [PATCH 2/4] bcachefs: mark_btree_node_locked_noreset -> mark_btree_node_locked_reset
  2025-04-15  5:33 ` [PATCH 2/4] bcachefs: mark_btree_node_locked_noreset -> mark_btree_node_locked_reset Alan Huang
@ 2025-04-15 14:34   ` Kent Overstreet
  2025-04-15 14:40     ` Alan Huang
  0 siblings, 1 reply; 11+ messages in thread
From: Kent Overstreet @ 2025-04-15 14:34 UTC (permalink / raw)
  To: Alan Huang; +Cc: linux-bcachefs

On Tue, Apr 15, 2025 at 01:33:05PM +0800, Alan Huang wrote:
> The semantics are 'reset' now; rename the function to reflect that.
> 
> Signed-off-by: Alan Huang <mmpgouride@gmail.com>

erm, are you sure?

mark_btree_node_locked() is the outer, more 'standard' interface; we
generally reserve the more specialized naming for the inner,
'are you sure this is the one you want' helper

> ---
>  fs/bcachefs/btree_key_cache.c       |  6 +++---
>  fs/bcachefs/btree_locking.c         |  6 +++---
>  fs/bcachefs/btree_locking.h         | 10 +++++-----
>  fs/bcachefs/btree_update_interior.c |  4 ++--
>  4 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/fs/bcachefs/btree_key_cache.c b/fs/bcachefs/btree_key_cache.c
> index 2b186584a291..c7ad01c1355c 100644
> --- a/fs/bcachefs/btree_key_cache.c
> +++ b/fs/bcachefs/btree_key_cache.c
> @@ -240,7 +240,7 @@ static int btree_key_cache_create(struct btree_trans *trans,
>  	ck->flags		= 1U << BKEY_CACHED_ACCESSED;
>  
>  	if (unlikely(key_u64s > ck->u64s)) {
> -		mark_btree_node_locked_noreset(ck_path, 0, BTREE_NODE_UNLOCKED);
> +		mark_btree_node_locked_reset(ck_path, 0, BTREE_NODE_UNLOCKED);
>  
>  		struct bkey_i *new_k = allocate_dropping_locks(trans, ret,
>  				kmalloc(key_u64s * sizeof(u64), _gfp));
> @@ -282,7 +282,7 @@ static int btree_key_cache_create(struct btree_trans *trans,
>  	return 0;
>  err:
>  	bkey_cached_free(bc, ck);
> -	mark_btree_node_locked_noreset(ck_path, 0, BTREE_NODE_UNLOCKED);
> +	mark_btree_node_locked_reset(ck_path, 0, BTREE_NODE_UNLOCKED);
>  
>  	return ret;
>  }
> @@ -500,7 +500,7 @@ static int btree_key_cache_flush_pos(struct btree_trans *trans,
>  			atomic_long_dec(&c->btree_key_cache.nr_dirty);
>  		}
>  
> -		mark_btree_node_locked_noreset(path, 0, BTREE_NODE_UNLOCKED);
> +		mark_btree_node_locked_reset(path, 0, BTREE_NODE_UNLOCKED);
>  		if (bkey_cached_evict(&c->btree_key_cache, ck)) {
>  			bkey_cached_free(&c->btree_key_cache, ck);
>  		} else {
> diff --git a/fs/bcachefs/btree_locking.c b/fs/bcachefs/btree_locking.c
> index f4f563944340..71dbac0bcd58 100644
> --- a/fs/bcachefs/btree_locking.c
> +++ b/fs/bcachefs/btree_locking.c
> @@ -435,7 +435,7 @@ int __bch2_btree_node_lock_write(struct btree_trans *trans, struct btree_path *p
>  	six_lock_readers_add(&b->lock, readers);
>  
>  	if (ret)
> -		mark_btree_node_locked_noreset(path, b->level, BTREE_NODE_INTENT_LOCKED);
> +		mark_btree_node_locked_reset(path, b->level, BTREE_NODE_INTENT_LOCKED);
>  
>  	return ret;
>  }
> @@ -564,7 +564,7 @@ bool bch2_btree_node_upgrade(struct btree_trans *trans,
>  	trace_and_count(trans->c, btree_path_upgrade_fail, trans, _RET_IP_, path, level);
>  	return false;
>  success:
> -	mark_btree_node_locked_noreset(path, level, BTREE_NODE_INTENT_LOCKED);
> +	mark_btree_node_locked_reset(path, level, BTREE_NODE_INTENT_LOCKED);
>  	return true;
>  }
>  
> @@ -693,7 +693,7 @@ void __bch2_btree_path_downgrade(struct btree_trans *trans,
>  		} else {
>  			if (btree_node_intent_locked(path, l)) {
>  				six_lock_downgrade(&path->l[l].b->c.lock);
> -				mark_btree_node_locked_noreset(path, l, BTREE_NODE_READ_LOCKED);
> +				mark_btree_node_locked_reset(path, l, BTREE_NODE_READ_LOCKED);
>  			}
>  			break;
>  		}
> diff --git a/fs/bcachefs/btree_locking.h b/fs/bcachefs/btree_locking.h
> index 66b27c0853a5..8978f7969bef 100644
> --- a/fs/bcachefs/btree_locking.h
> +++ b/fs/bcachefs/btree_locking.h
> @@ -63,7 +63,7 @@ static inline bool btree_node_locked(struct btree_path *path, unsigned level)
>  	return btree_node_locked_type(path, level) != BTREE_NODE_UNLOCKED;
>  }
>  
> -static inline void mark_btree_node_locked_noreset(struct btree_path *path,
> +static inline void mark_btree_node_locked_reset(struct btree_path *path,
>  						  unsigned level,
>  						  enum btree_node_locked_type type)
>  {
> @@ -80,7 +80,7 @@ static inline void mark_btree_node_locked(struct btree_trans *trans,
>  					  unsigned level,
>  					  enum btree_node_locked_type type)
>  {
> -	mark_btree_node_locked_noreset(path, level, (enum btree_node_locked_type) type);
> +	mark_btree_node_locked_reset(path, level, (enum btree_node_locked_type) type);
>  #ifdef CONFIG_BCACHEFS_LOCK_TIME_STATS
>  	path->l[level].lock_taken_time = local_clock();
>  #endif
> @@ -134,7 +134,7 @@ static inline void btree_node_unlock(struct btree_trans *trans,
>  		}
>  		six_unlock_type(&path->l[level].b->c.lock, lock_type);
>  		btree_trans_lock_hold_time_update(trans, path, level);
> -		mark_btree_node_locked_noreset(path, level, BTREE_NODE_UNLOCKED);
> +		mark_btree_node_locked_reset(path, level, BTREE_NODE_UNLOCKED);
>  	}
>  }
>  
> @@ -183,7 +183,7 @@ bch2_btree_node_unlock_write_inlined(struct btree_trans *trans, struct btree_pat
>  	EBUG_ON(path->l[b->c.level].lock_seq != six_lock_seq(&b->c.lock));
>  	EBUG_ON(btree_node_locked_type(path, b->c.level) != SIX_LOCK_write);
>  
> -	mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
> +	mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>  	__bch2_btree_node_unlock_write(trans, b);
>  }
>  
> @@ -315,7 +315,7 @@ static inline int __btree_node_lock_write(struct btree_trans *trans,
>  	 * write lock: thus, we need to tell the cycle detector we have a write
>  	 * lock _before_ taking the lock:
>  	 */
> -	mark_btree_node_locked_noreset(path, b->level, BTREE_NODE_WRITE_LOCKED);
> +	mark_btree_node_locked_reset(path, b->level, BTREE_NODE_WRITE_LOCKED);
>  
>  	return likely(six_trylock_write(&b->lock))
>  		? 0
> diff --git a/fs/bcachefs/btree_update_interior.c b/fs/bcachefs/btree_update_interior.c
> index 55fbeeb8eaaa..29e03408a019 100644
> --- a/fs/bcachefs/btree_update_interior.c
> +++ b/fs/bcachefs/btree_update_interior.c
> @@ -245,7 +245,7 @@ static void bch2_btree_node_free_inmem(struct btree_trans *trans,
>  	mutex_unlock(&c->btree_cache.lock);
>  
>  	six_unlock_write(&b->c.lock);
> -	mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
> +	mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>  
>  	bch2_trans_node_drop(trans, b);
>  }
> @@ -788,7 +788,7 @@ static void btree_update_nodes_written(struct btree_update *as)
>  
>  		mutex_unlock(&c->btree_interior_update_lock);
>  
> -		mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
> +		mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>  		six_unlock_write(&b->c.lock);
>  
>  		btree_node_write_if_need(trans, b, SIX_LOCK_intent);
> -- 
> 2.48.1
> 


* Re: [PATCH 1/4] bcachefs: Kill bch2_trans_unlock_noassert
  2025-04-15  5:33 ` [PATCH 1/4] bcachefs: Kill bch2_trans_unlock_noassert Alan Huang
@ 2025-04-15 14:34   ` Kent Overstreet
  0 siblings, 0 replies; 11+ messages in thread
From: Kent Overstreet @ 2025-04-15 14:34 UTC (permalink / raw)
  To: Alan Huang; +Cc: linux-bcachefs

On Tue, Apr 15, 2025 at 01:33:04PM +0800, Alan Huang wrote:
> Signed-off-by: Alan Huang <mmpgouride@gmail.com>

Applied

> ---
>  fs/bcachefs/btree_cache.c   | 2 +-
>  fs/bcachefs/btree_locking.c | 7 -------
>  fs/bcachefs/btree_locking.h | 1 -
>  3 files changed, 1 insertion(+), 9 deletions(-)
> 
> diff --git a/fs/bcachefs/btree_cache.c b/fs/bcachefs/btree_cache.c
> index 9b80201c7982..153980a9a1cc 100644
> --- a/fs/bcachefs/btree_cache.c
> +++ b/fs/bcachefs/btree_cache.c
> @@ -978,7 +978,7 @@ static noinline struct btree *bch2_btree_node_fill(struct btree_trans *trans,
>  
>  		/* Unlock before doing IO: */
>  		six_unlock_intent(&b->c.lock);
> -		bch2_trans_unlock_noassert(trans);
> +		bch2_trans_unlock(trans);
>  
>  		bch2_btree_node_read(trans, b, sync);
>  
> diff --git a/fs/bcachefs/btree_locking.c b/fs/bcachefs/btree_locking.c
> index 94eb2b73a843..f4f563944340 100644
> --- a/fs/bcachefs/btree_locking.c
> +++ b/fs/bcachefs/btree_locking.c
> @@ -799,13 +799,6 @@ int bch2_trans_relock_notrace(struct btree_trans *trans)
>  	return __bch2_trans_relock(trans, false);
>  }
>  
> -void bch2_trans_unlock_noassert(struct btree_trans *trans)
> -{
> -	__bch2_trans_unlock(trans);
> -
> -	trans_set_unlocked(trans);
> -}
> -
>  void bch2_trans_unlock(struct btree_trans *trans)
>  {
>  	__bch2_trans_unlock(trans);
> diff --git a/fs/bcachefs/btree_locking.h b/fs/bcachefs/btree_locking.h
> index b33ab7af8440..66b27c0853a5 100644
> --- a/fs/bcachefs/btree_locking.h
> +++ b/fs/bcachefs/btree_locking.h
> @@ -15,7 +15,6 @@
>  
>  void bch2_btree_lock_init(struct btree_bkey_cached_common *, enum six_lock_init_flags, gfp_t gfp);
>  
> -void bch2_trans_unlock_noassert(struct btree_trans *);
>  void bch2_trans_unlock_write(struct btree_trans *);
>  
>  static inline bool is_btree_node(struct btree_path *path, unsigned l)
> -- 
> 2.48.1
> 


* Re: [PATCH 3/4] bcachefs: Remove spurious +1/-1 operation
  2025-04-15  5:33 ` [PATCH 3/4] bcachefs: Remove spurious +1/-1 operation Alan Huang
@ 2025-04-15 14:35   ` Kent Overstreet
  0 siblings, 0 replies; 11+ messages in thread
From: Kent Overstreet @ 2025-04-15 14:35 UTC (permalink / raw)
  To: Alan Huang; +Cc: linux-bcachefs

On Tue, Apr 15, 2025 at 01:33:06PM +0800, Alan Huang wrote:
> Signed-off-by: Alan Huang <mmpgouride@gmail.com>

Applied

> ---
>  fs/bcachefs/btree_types.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/bcachefs/btree_types.h b/fs/bcachefs/btree_types.h
> index dd109dea0f1c..c2e05824e99b 100644
> --- a/fs/bcachefs/btree_types.h
> +++ b/fs/bcachefs/btree_types.h
> @@ -656,13 +656,13 @@ static inline struct bset_tree *bset_tree_last(struct btree *b)
>  static inline void *
>  __btree_node_offset_to_ptr(const struct btree *b, u16 offset)
>  {
> -	return (void *) ((u64 *) b->data + 1 + offset);
> +	return (void *) ((u64 *) b->data + offset);
>  }
>  
>  static inline u16
>  __btree_node_ptr_to_offset(const struct btree *b, const void *p)
>  {
> -	u16 ret = (u64 *) p - 1 - (u64 *) b->data;
> +	u16 ret = (u64 *) p - (u64 *) b->data;
>  
>  	EBUG_ON(__btree_node_offset_to_ptr(b, ret) != p);
>  	return ret;
> -- 
> 2.48.1
> 


* Re: [PATCH 4/4] bcachefs: Simplify logic
  2025-04-15  5:33 ` [PATCH 4/4] bcachefs: Simplify logic Alan Huang
@ 2025-04-15 14:36   ` Kent Overstreet
  0 siblings, 0 replies; 11+ messages in thread
From: Kent Overstreet @ 2025-04-15 14:36 UTC (permalink / raw)
  To: Alan Huang; +Cc: linux-bcachefs

On Tue, Apr 15, 2025 at 01:33:07PM +0800, Alan Huang wrote:
> Signed-off-by: Alan Huang <mmpgouride@gmail.com>

Applied

> ---
>  fs/bcachefs/btree_io.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/bcachefs/btree_io.c b/fs/bcachefs/btree_io.c
> index 14e3329baa43..afcd13092807 100644
> --- a/fs/bcachefs/btree_io.c
> +++ b/fs/bcachefs/btree_io.c
> @@ -1017,7 +1017,6 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
>  	bool used_mempool, blacklisted;
>  	bool updated_range = b->key.k.type == KEY_TYPE_btree_ptr_v2 &&
>  		BTREE_PTR_RANGE_UPDATED(&bkey_i_to_btree_ptr_v2(&b->key)->v);
> -	unsigned u64s;
>  	unsigned ptr_written = btree_ptr_sectors_written(bkey_i_to_s_c(&b->key));
>  	u64 max_journal_seq = 0;
>  	struct printbuf buf = PRINTBUF;
> @@ -1224,23 +1223,20 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
>  	sorted = btree_bounce_alloc(c, btree_buf_bytes(b), &used_mempool);
>  	sorted->keys.u64s = 0;
>  
> -	set_btree_bset(b, b->set, &b->data->keys);
> -
>  	b->nr = bch2_key_sort_fix_overlapping(c, &sorted->keys, iter);
>  	memset((uint8_t *)(sorted + 1) + b->nr.live_u64s * sizeof(u64), 0,
>  			btree_buf_bytes(b) -
>  			sizeof(struct btree_node) -
>  			b->nr.live_u64s * sizeof(u64));
>  
> -	u64s = le16_to_cpu(sorted->keys.u64s);
> +	b->data->keys.u64s = sorted->keys.u64s;
>  	*sorted = *b->data;
> -	sorted->keys.u64s = cpu_to_le16(u64s);
>  	swap(sorted, b->data);
>  	set_btree_bset(b, b->set, &b->data->keys);
>  	b->nsets = 1;
>  	b->data->keys.journal_seq = cpu_to_le64(max_journal_seq);
>  
> -	BUG_ON(b->nr.live_u64s != u64s);
> +	BUG_ON(b->nr.live_u64s != le16_to_cpu(b->data->keys.u64s));
>  
>  	btree_bounce_free(c, btree_buf_bytes(b), used_mempool, sorted);
>  
> -- 
> 2.48.1
> 


* Re: [PATCH 2/4] bcachefs: mark_btree_node_locked_noreset -> mark_btree_node_locked_reset
  2025-04-15 14:34   ` Kent Overstreet
@ 2025-04-15 14:40     ` Alan Huang
  2025-04-15 15:10       ` Kent Overstreet
  0 siblings, 1 reply; 11+ messages in thread
From: Alan Huang @ 2025-04-15 14:40 UTC (permalink / raw)
  To: Kent Overstreet; +Cc: linux-bcachefs

On Apr 15, 2025, at 22:34, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> 
> On Tue, Apr 15, 2025 at 01:33:05PM +0800, Alan Huang wrote:
>> The semantics are 'reset' now; rename the function to reflect that.
>> 
>> Signed-off-by: Alan Huang <mmpgouride@gmail.com>
> 
> erm, are you sure?
> 
> mark_btree_node_locked() is the outer, more 'standard' interface, we
> generally do the more specialized naming for the inner, more
> specialized, 'are you sure this is the one you want' helper

I checked the git log: the original _noreset helper really did have no-reset
semantics, i.e. it only added a new state. The logic has since changed, but
the name hasn't.

> 
>> ---
>> fs/bcachefs/btree_key_cache.c       |  6 +++---
>> fs/bcachefs/btree_locking.c         |  6 +++---
>> fs/bcachefs/btree_locking.h         | 10 +++++-----
>> fs/bcachefs/btree_update_interior.c |  4 ++--
>> 4 files changed, 13 insertions(+), 13 deletions(-)
>> 
>> diff --git a/fs/bcachefs/btree_key_cache.c b/fs/bcachefs/btree_key_cache.c
>> index 2b186584a291..c7ad01c1355c 100644
>> --- a/fs/bcachefs/btree_key_cache.c
>> +++ b/fs/bcachefs/btree_key_cache.c
>> @@ -240,7 +240,7 @@ static int btree_key_cache_create(struct btree_trans *trans,
>> ck->flags = 1U << BKEY_CACHED_ACCESSED;
>> 
>> if (unlikely(key_u64s > ck->u64s)) {
>> - mark_btree_node_locked_noreset(ck_path, 0, BTREE_NODE_UNLOCKED);
>> + mark_btree_node_locked_reset(ck_path, 0, BTREE_NODE_UNLOCKED);
>> 
>> struct bkey_i *new_k = allocate_dropping_locks(trans, ret,
>> kmalloc(key_u64s * sizeof(u64), _gfp));
>> @@ -282,7 +282,7 @@ static int btree_key_cache_create(struct btree_trans *trans,
>> return 0;
>> err:
>> bkey_cached_free(bc, ck);
>> - mark_btree_node_locked_noreset(ck_path, 0, BTREE_NODE_UNLOCKED);
>> + mark_btree_node_locked_reset(ck_path, 0, BTREE_NODE_UNLOCKED);
>> 
>> return ret;
>> }
>> @@ -500,7 +500,7 @@ static int btree_key_cache_flush_pos(struct btree_trans *trans,
>> atomic_long_dec(&c->btree_key_cache.nr_dirty);
>> }
>> 
>> - mark_btree_node_locked_noreset(path, 0, BTREE_NODE_UNLOCKED);
>> + mark_btree_node_locked_reset(path, 0, BTREE_NODE_UNLOCKED);
>> if (bkey_cached_evict(&c->btree_key_cache, ck)) {
>> bkey_cached_free(&c->btree_key_cache, ck);
>> } else {
>> diff --git a/fs/bcachefs/btree_locking.c b/fs/bcachefs/btree_locking.c
>> index f4f563944340..71dbac0bcd58 100644
>> --- a/fs/bcachefs/btree_locking.c
>> +++ b/fs/bcachefs/btree_locking.c
>> @@ -435,7 +435,7 @@ int __bch2_btree_node_lock_write(struct btree_trans *trans, struct btree_path *p
>> six_lock_readers_add(&b->lock, readers);
>> 
>> if (ret)
>> - mark_btree_node_locked_noreset(path, b->level, BTREE_NODE_INTENT_LOCKED);
>> + mark_btree_node_locked_reset(path, b->level, BTREE_NODE_INTENT_LOCKED);
>> 
>> return ret;
>> }
>> @@ -564,7 +564,7 @@ bool bch2_btree_node_upgrade(struct btree_trans *trans,
>> trace_and_count(trans->c, btree_path_upgrade_fail, trans, _RET_IP_, path, level);
>> return false;
>> success:
>> - mark_btree_node_locked_noreset(path, level, BTREE_NODE_INTENT_LOCKED);
>> + mark_btree_node_locked_reset(path, level, BTREE_NODE_INTENT_LOCKED);
>> return true;
>> }
>> 
>> @@ -693,7 +693,7 @@ void __bch2_btree_path_downgrade(struct btree_trans *trans,
>> } else {
>> if (btree_node_intent_locked(path, l)) {
>> six_lock_downgrade(&path->l[l].b->c.lock);
>> - mark_btree_node_locked_noreset(path, l, BTREE_NODE_READ_LOCKED);
>> + mark_btree_node_locked_reset(path, l, BTREE_NODE_READ_LOCKED);
>> }
>> break;
>> }
>> diff --git a/fs/bcachefs/btree_locking.h b/fs/bcachefs/btree_locking.h
>> index 66b27c0853a5..8978f7969bef 100644
>> --- a/fs/bcachefs/btree_locking.h
>> +++ b/fs/bcachefs/btree_locking.h
>> @@ -63,7 +63,7 @@ static inline bool btree_node_locked(struct btree_path *path, unsigned level)
>> return btree_node_locked_type(path, level) != BTREE_NODE_UNLOCKED;
>> }
>> 
>> -static inline void mark_btree_node_locked_noreset(struct btree_path *path,
>> +static inline void mark_btree_node_locked_reset(struct btree_path *path,
>>  unsigned level,
>>  enum btree_node_locked_type type)
>> {
>> @@ -80,7 +80,7 @@ static inline void mark_btree_node_locked(struct btree_trans *trans,
>>  unsigned level,
>>  enum btree_node_locked_type type)
>> {
>> - mark_btree_node_locked_noreset(path, level, (enum btree_node_locked_type) type);
>> + mark_btree_node_locked_reset(path, level, (enum btree_node_locked_type) type);
>> #ifdef CONFIG_BCACHEFS_LOCK_TIME_STATS
>> path->l[level].lock_taken_time = local_clock();
>> #endif
>> @@ -134,7 +134,7 @@ static inline void btree_node_unlock(struct btree_trans *trans,
>> }
>> six_unlock_type(&path->l[level].b->c.lock, lock_type);
>> btree_trans_lock_hold_time_update(trans, path, level);
>> - mark_btree_node_locked_noreset(path, level, BTREE_NODE_UNLOCKED);
>> + mark_btree_node_locked_reset(path, level, BTREE_NODE_UNLOCKED);
>> }
>> }
>> 
>> @@ -183,7 +183,7 @@ bch2_btree_node_unlock_write_inlined(struct btree_trans *trans, struct btree_pat
>> EBUG_ON(path->l[b->c.level].lock_seq != six_lock_seq(&b->c.lock));
>> EBUG_ON(btree_node_locked_type(path, b->c.level) != SIX_LOCK_write);
>> 
>> - mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>> + mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>> __bch2_btree_node_unlock_write(trans, b);
>> }
>> 
>> @@ -315,7 +315,7 @@ static inline int __btree_node_lock_write(struct btree_trans *trans,
>> * write lock: thus, we need to tell the cycle detector we have a write
>> * lock _before_ taking the lock:
>> */
>> - mark_btree_node_locked_noreset(path, b->level, BTREE_NODE_WRITE_LOCKED);
>> + mark_btree_node_locked_reset(path, b->level, BTREE_NODE_WRITE_LOCKED);
>> 
>> return likely(six_trylock_write(&b->lock))
>> ? 0
>> diff --git a/fs/bcachefs/btree_update_interior.c b/fs/bcachefs/btree_update_interior.c
>> index 55fbeeb8eaaa..29e03408a019 100644
>> --- a/fs/bcachefs/btree_update_interior.c
>> +++ b/fs/bcachefs/btree_update_interior.c
>> @@ -245,7 +245,7 @@ static void bch2_btree_node_free_inmem(struct btree_trans *trans,
>> mutex_unlock(&c->btree_cache.lock);
>> 
>> six_unlock_write(&b->c.lock);
>> - mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>> + mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>> 
>> bch2_trans_node_drop(trans, b);
>> }
>> @@ -788,7 +788,7 @@ static void btree_update_nodes_written(struct btree_update *as)
>> 
>> mutex_unlock(&c->btree_interior_update_lock);
>> 
>> - mark_btree_node_locked_noreset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>> + mark_btree_node_locked_reset(path, b->c.level, BTREE_NODE_INTENT_LOCKED);
>> six_unlock_write(&b->c.lock);
>> 
>> btree_node_write_if_need(trans, b, SIX_LOCK_intent);
>> -- 
>> 2.48.1
>> 



* Re: [PATCH 2/4] bcachefs: mark_btree_node_locked_noreset -> mark_btree_node_locked_reset
  2025-04-15 14:40     ` Alan Huang
@ 2025-04-15 15:10       ` Kent Overstreet
  0 siblings, 0 replies; 11+ messages in thread
From: Kent Overstreet @ 2025-04-15 15:10 UTC (permalink / raw)
  To: Alan Huang; +Cc: linux-bcachefs

On Tue, Apr 15, 2025 at 10:40:27PM +0800, Alan Huang wrote:
> On Apr 15, 2025, at 22:34, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> > 
> > On Tue, Apr 15, 2025 at 01:33:05PM +0800, Alan Huang wrote:
> >> The semantics are 'reset' now; rename the function to reflect that.
> >> 
> >> Signed-off-by: Alan Huang <mmpgouride@gmail.com>
> > 
> > erm, are you sure?
> > 
> > mark_btree_node_locked() is the outer, more 'standard' interface, we
> > generally do the more specialized naming for the inner, more
> > specialized, 'are you sure this is the one you want' helper
> 
> I checked the git log, the original _noreset helper was no reset semantic, that is, add a new state.
> And then the logic has been changed, but the name doesn't

Not sure what you're seeing? mark_btree_node_locked() resets lock_taken_time


