public inbox for linux-bcachefs@vger.kernel.org
* [PATCH] bcachefs: Change bucket_lock() to use bit_spin_lock()
@ 2023-09-14  0:37 Kent Overstreet
  2023-09-14 12:56 ` Brian Foster
  0 siblings, 1 reply; 5+ messages in thread
From: Kent Overstreet @ 2023-09-14  0:37 UTC (permalink / raw)
  Cc: Kent Overstreet, linux-bcachefs

bucket_lock() previously open-coded a spinlock, because we need to cram
a spinlock into a single byte.

But it turns out that not all architectures support xchg() on a single
byte; since we need struct bucket to be small, this means we have to
play fun games with casts and ifdefs for endianness.

This fixes building on 32 bit arm, and likely other architectures.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Cc: linux-bcachefs@vger.kernel.org
---
 fs/bcachefs/buckets.h | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/fs/bcachefs/buckets.h b/fs/bcachefs/buckets.h
index f192809f50cf..e055c1076e63 100644
--- a/fs/bcachefs/buckets.h
+++ b/fs/bcachefs/buckets.h
@@ -40,15 +40,35 @@ static inline size_t sector_to_bucket_and_offset(const struct bch_dev *ca, secto
 	for (_b = (_buckets)->b + (_buckets)->first_bucket;	\
 	     _b < (_buckets)->b + (_buckets)->nbuckets; _b++)
 
+/*
+ * Ugly hack alert:
+ *
+ * We need to cram a spinlock in a single byte, because that's what we have left
+ * in struct bucket, and we care about the size of these - during fsck, we need
+ * in memory state for every single bucket on every device.
+ *
+ * We used to do
+ *   while (xchg(&b->lock, 1)) cpu_relax();
+ * but, it turns out not all architectures support xchg on a single byte.
+ *
+ * So now we use bit_spin_lock(), with fun games since we can't burn a whole
+ * ulong for this.
+ */
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+#define BUCKET_LOCK_BITNR	0
+#else
+#define BUCKET_LOCK_BITNR	(BITS_PER_LONG - 1)
+#endif
+
 static inline void bucket_unlock(struct bucket *b)
 {
-	smp_store_release(&b->lock, 0);
+	bit_spin_unlock(BUCKET_LOCK_BITNR, (void *) &b->lock);
 }
 
 static inline void bucket_lock(struct bucket *b)
 {
-	while (xchg(&b->lock, 1))
-		cpu_relax();
+	bit_spin_lock(BUCKET_LOCK_BITNR, (void *) &b->lock);
 }
 
 static inline struct bucket_array *gc_bucket_array(struct bch_dev *ca)
-- 
2.40.1



end of thread, other threads: [~2023-09-19 14:32 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-14  0:37 [PATCH] bcachefs: Change bucket_lock() to use bit_spin_lock() Kent Overstreet
2023-09-14 12:56 ` Brian Foster
2023-09-14 19:47   ` Kent Overstreet
2023-09-15 10:51     ` Brian Foster
2023-09-19 14:31       ` Brian Foster
