* [PATCH v5 00/13] Zacas/Zabha support and qspinlocks
@ 2024-08-18 6:35 Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 01/13] riscv: Move cpufeature.h macros into their own header Alexandre Ghiti
` (13 more replies)
0 siblings, 14 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti
This series implements the [cmp]xchgXX() macros using the Zacas and
Zabha extensions and finally uses those newly introduced macros to add
support for qspinlocks: note that this implementation of qspinlocks
satisfies the forward-progress guarantee.
It also relies on Ziccrse to provide the forward-progress guarantee
required by the qspinlock implementation.
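To illustrate, here is roughly what this changes for a fully-ordered
32-bit cmpxchg() (a sketch, register choices are illustrative):

  # Without Zacas: LR/SC retry loop
  0: lr.w     a5, (s1)
     bne      a5, a4, 1f
     sc.w.rl  a6, a3, (s1)
     bnez     a6, 0b
     fence    rw, rw
  1:

  # With Zacas: a single fully-ordered instruction
     amocas.w.aqrl a5, a4, (s1)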
Thanks to Guo and Leonardo for their work!
v4: https://lore.kernel.org/linux-riscv/20240731072405.197046-1-alexghiti@rivosinc.com/
v3: https://lore.kernel.org/linux-riscv/20240717061957.140712-1-alexghiti@rivosinc.com/
v2: https://lore.kernel.org/linux-riscv/20240626130347.520750-1-alexghiti@rivosinc.com/
v1: https://lore.kernel.org/linux-riscv/20240528151052.313031-1-alexghiti@rivosinc.com/
Changes in v5:
- Remove useless include in cpufeature.h and add required ones (Drew)
- Add RB from Drew
- Add AB from Conor and Peter
- use macros to help readability of arch_cmpxchg_XXX() (Drew)
- restore the build_bug() for size > 8 (Drew)
- Update Ziccrse riscv profile spec version commit hash (Conor)
Changes in v4:
- rename sc_sfx into sc_cas_sfx in _arch_cmpxchg (Drew)
- cmpxchg() depends on 64BIT (Drew)
- rename xX register into tX (Drew)
- cas operations require the old value in rd, so make this assignment more
explicit as it seems to confuse people (Drew, Andrea)
- Fix ticket/queued configs build errors (Andrea)
- riscv_spinlock_init() is only needed for combo spinlocks but implement it
anyway to report the type of spinlocks in use (Andrea)
- Add RB from Guo
- Add NONPORTABLE to RISCV_QUEUED_SPINLOCKS (Samuel)
- Add a link to Guo's qspinlocks results on the sophgo platform
- Reorder ZICCRSE (Samuel)
- Use riscv_has_extension_unlikely() instead of direct asm goto, which is way
cleaner and fixes the llvm 16 bug
- Add dependency on RISCV_ALTERNATIVE in Kconfig
- Rebase on top of 6.11, add patches to fix header circular dependency and
to fix build_bug()
Changes in v3:
- Fix patch 4 to restrict the optimization to fully ordered AMO (Andrea)
- Move RISCV_ISA_EXT_ZABHA definition to patch 4 (Andrea)
- !Zacas at build time => no CAS from Zabha too (Andrea)
- drop patch 7 "riscv: Improve amoswap.X use in xchg()" (Andrea)
- Switch lr/sc and cas order (Guo)
- Combo spinlocks do not depend on Zabha
- Add a Kconfig for ticket/queued/combo (Guo)
- Use Ziccrse (Guo)
Changes in v2:
- Add patch for Zabha dtbinding (Conor)
- Fix cmpxchg128() build warnings missed in v1
- Make arch_cmpxchg128() fully ordered
- Improve Kconfig help texts for both extensions (Conor)
- Fix Makefile dependencies by requiring TOOLCHAIN_HAS_XXX (Nathan)
- Fix compilation errors when the toolchain does not support the
extensions (Nathan)
- Fix C23 warnings about labels at the end of compound statements (Nathan)
- Fix Zabha and !Zacas configurations (Andrea)
- Add COMBO spinlocks (Guo)
- Improve amocas fully ordered operations by using .aqrl semantics and
removing the fence rw, rw (Andrea)
- Rebase on top of "riscv: Fix fully ordered LR/SC xchg[8|16]() implementations"
- Add ARCH_WEAK_RELEASE_ACQUIRE (Andrea)
- Remove the extension version in march for LLVM since it is only required
for experimental extensions (Nathan)
- Fix cmpxchg128() implementation by adding both registers of a pair
in the list of input/output operands
Alexandre Ghiti (11):
riscv: Move cpufeature.h macros into their own header
riscv: Do not fail to build on byte/halfword operations with Zawrs
riscv: Implement cmpxchg32/64() using Zacas
dt-bindings: riscv: Add Zabha ISA extension description
riscv: Implement cmpxchg8/16() using Zabha
riscv: Improve zacas fully-ordered cmpxchg()
riscv: Implement arch_cmpxchg128() using Zacas
riscv: Implement xchg8/16() using Zabha
riscv: Add ISA extension parsing for Ziccrse
dt-bindings: riscv: Add Ziccrse ISA extension description
riscv: Add qspinlock support
Guo Ren (2):
asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock
asm-generic: ticket-lock: Add separate ticket-lock.h
.../devicetree/bindings/riscv/extensions.yaml | 12 +
.../locking/queued-spinlocks/arch-support.txt | 2 +-
arch/riscv/Kconfig | 69 +++++
arch/riscv/Makefile | 6 +
arch/riscv/include/asm/Kbuild | 4 +-
arch/riscv/include/asm/cmpxchg.h | 286 +++++++++++++-----
arch/riscv/include/asm/cpufeature-macros.h | 66 ++++
arch/riscv/include/asm/cpufeature.h | 61 +---
arch/riscv/include/asm/hwcap.h | 2 +
arch/riscv/include/asm/spinlock.h | 47 +++
arch/riscv/kernel/cpufeature.c | 2 +
arch/riscv/kernel/setup.c | 37 +++
include/asm-generic/qspinlock.h | 2 +
include/asm-generic/spinlock.h | 87 +-----
include/asm-generic/spinlock_types.h | 12 +-
include/asm-generic/ticket_spinlock.h | 105 +++++++
16 files changed, 567 insertions(+), 233 deletions(-)
create mode 100644 arch/riscv/include/asm/cpufeature-macros.h
create mode 100644 arch/riscv/include/asm/spinlock.h
create mode 100644 include/asm-generic/ticket_spinlock.h
--
2.39.2
* [PATCH v5 01/13] riscv: Move cpufeature.h macros into their own header
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 02/13] riscv: Do not fail to build on byte/halfword operations with Zawrs Alexandre Ghiti
` (12 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti, Andrew Jones
asm/cmpxchg.h will soon need the riscv_has_extension_unlikely() macros
and would then have to include asm/cpufeature.h, which introduces a lot
of circular header dependencies.
So move the riscv_has_extension_XXX() macros into their own header,
which prevents such circular dependencies by including only a restricted
number of headers.
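As an illustration, a header such as asm/cmpxchg.h can now do the
following without pulling in asm/cpufeature.h (hypothetical consumer,
sketch only):

  #include <asm/cpufeature-macros.h>

  /* Compile-time gate plus runtime (alternatives-patched) check. */
  static __always_inline bool have_zacas(void)
  {
          return IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) &&
                 riscv_has_extension_unlikely(RISCV_ISA_EXT_ZACAS);
  }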
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
---
arch/riscv/include/asm/cpufeature-macros.h | 66 ++++++++++++++++++++++
arch/riscv/include/asm/cpufeature.h | 61 ++------------------
2 files changed, 70 insertions(+), 57 deletions(-)
create mode 100644 arch/riscv/include/asm/cpufeature-macros.h
diff --git a/arch/riscv/include/asm/cpufeature-macros.h b/arch/riscv/include/asm/cpufeature-macros.h
new file mode 100644
index 000000000000..a8103edbf51f
--- /dev/null
+++ b/arch/riscv/include/asm/cpufeature-macros.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright 2022-2024 Rivos, Inc
+ */
+
+#ifndef _ASM_CPUFEATURE_MACROS_H
+#define _ASM_CPUFEATURE_MACROS_H
+
+#include <asm/hwcap.h>
+#include <asm/alternative-macros.h>
+
+#define STANDARD_EXT 0
+
+bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, unsigned int bit);
+#define riscv_isa_extension_available(isa_bitmap, ext) \
+ __riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext)
+
+static __always_inline bool __riscv_has_extension_likely(const unsigned long vendor,
+ const unsigned long ext)
+{
+ asm goto(ALTERNATIVE("j %l[l_no]", "nop", %[vendor], %[ext], 1)
+ :
+ : [vendor] "i" (vendor), [ext] "i" (ext)
+ :
+ : l_no);
+
+ return true;
+l_no:
+ return false;
+}
+
+static __always_inline bool __riscv_has_extension_unlikely(const unsigned long vendor,
+ const unsigned long ext)
+{
+ asm goto(ALTERNATIVE("nop", "j %l[l_yes]", %[vendor], %[ext], 1)
+ :
+ : [vendor] "i" (vendor), [ext] "i" (ext)
+ :
+ : l_yes);
+
+ return false;
+l_yes:
+ return true;
+}
+
+static __always_inline bool riscv_has_extension_unlikely(const unsigned long ext)
+{
+ compiletime_assert(ext < RISCV_ISA_EXT_MAX, "ext must be < RISCV_ISA_EXT_MAX");
+
+ if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE))
+ return __riscv_has_extension_unlikely(STANDARD_EXT, ext);
+
+ return __riscv_isa_extension_available(NULL, ext);
+}
+
+static __always_inline bool riscv_has_extension_likely(const unsigned long ext)
+{
+ compiletime_assert(ext < RISCV_ISA_EXT_MAX, "ext must be < RISCV_ISA_EXT_MAX");
+
+ if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE))
+ return __riscv_has_extension_likely(STANDARD_EXT, ext);
+
+ return __riscv_isa_extension_available(NULL, ext);
+}
+
+#endif /* _ASM_CPUFEATURE_MACROS_H */
diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
index 45f9c1171a48..87ed88fc950d 100644
--- a/arch/riscv/include/asm/cpufeature.h
+++ b/arch/riscv/include/asm/cpufeature.h
@@ -8,9 +8,11 @@
#include <linux/bitmap.h>
#include <linux/jump_label.h>
+#include <linux/kconfig.h>
+#include <linux/percpu-defs.h>
+#include <linux/threads.h>
#include <asm/hwcap.h>
-#include <asm/alternative-macros.h>
-#include <asm/errno.h>
+#include <asm/cpufeature-macros.h>
/*
* These are probed via a device_initcall(), via either the SBI or directly
@@ -103,61 +105,6 @@ extern const size_t riscv_isa_ext_count;
extern bool riscv_isa_fallback;
unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
-
-#define STANDARD_EXT 0
-
-bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, unsigned int bit);
-#define riscv_isa_extension_available(isa_bitmap, ext) \
- __riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext)
-
-static __always_inline bool __riscv_has_extension_likely(const unsigned long vendor,
- const unsigned long ext)
-{
- asm goto(ALTERNATIVE("j %l[l_no]", "nop", %[vendor], %[ext], 1)
- :
- : [vendor] "i" (vendor), [ext] "i" (ext)
- :
- : l_no);
-
- return true;
-l_no:
- return false;
-}
-
-static __always_inline bool __riscv_has_extension_unlikely(const unsigned long vendor,
- const unsigned long ext)
-{
- asm goto(ALTERNATIVE("nop", "j %l[l_yes]", %[vendor], %[ext], 1)
- :
- : [vendor] "i" (vendor), [ext] "i" (ext)
- :
- : l_yes);
-
- return false;
-l_yes:
- return true;
-}
-
-static __always_inline bool riscv_has_extension_unlikely(const unsigned long ext)
-{
- compiletime_assert(ext < RISCV_ISA_EXT_MAX, "ext must be < RISCV_ISA_EXT_MAX");
-
- if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE))
- return __riscv_has_extension_unlikely(STANDARD_EXT, ext);
-
- return __riscv_isa_extension_available(NULL, ext);
-}
-
-static __always_inline bool riscv_has_extension_likely(const unsigned long ext)
-{
- compiletime_assert(ext < RISCV_ISA_EXT_MAX, "ext must be < RISCV_ISA_EXT_MAX");
-
- if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE))
- return __riscv_has_extension_likely(STANDARD_EXT, ext);
-
- return __riscv_isa_extension_available(NULL, ext);
-}
-
static __always_inline bool riscv_cpu_has_extension_likely(int cpu, const unsigned long ext)
{
compiletime_assert(ext < RISCV_ISA_EXT_MAX, "ext must be < RISCV_ISA_EXT_MAX");
--
2.39.2
* [PATCH v5 02/13] riscv: Do not fail to build on byte/halfword operations with Zawrs
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 01/13] riscv: Move cpufeature.h macros into their own header Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-21 14:11 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 03/13] riscv: Implement cmpxchg32/64() using Zacas Alexandre Ghiti
` (11 subsequent siblings)
13 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti
riscv does not have lr instructions on byte and halfword, but the
qspinlock implementation actually uses such atomics provided by the
Zabha extension, so those sizes are legitimate.
So instead of failing to build, just fall back to the !Zawrs path.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/riscv/include/asm/cmpxchg.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index ebbce134917c..ac1d7df898ef 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -245,6 +245,11 @@ static __always_inline void __cmpwait(volatile void *ptr,
: : : : no_zawrs);
switch (size) {
+ case 1:
+ fallthrough;
+ case 2:
+ /* RISC-V doesn't have lr instructions on byte and half-word. */
+ goto no_zawrs;
case 4:
asm volatile(
" lr.w %0, %1\n"
--
2.39.2
* [PATCH v5 03/13] riscv: Implement cmpxchg32/64() using Zacas
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 01/13] riscv: Move cpufeature.h macros into their own header Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 02/13] riscv: Do not fail to build on byte/halfword operations with Zawrs Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 04/13] dt-bindings: riscv: Add Zabha ISA extension description Alexandre Ghiti
` (10 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti, Andrew Jones
This adds runtime support for Zacas in cmpxchg operations.
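For the 32-bit relaxed case, the Zacas fast path below boils down to
something like this (illustrative sketch; amocas returns the old value
in rd, hence r must be seeded with the expected value):

  __typeof__(*ptr) r = old;

  __asm__ __volatile__ (
          "       amocas.w %0, %z2, %1\n"
          : "+&r" (r), "+A" (*ptr)
          : "rJ" (new)
          : "memory");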
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
---
arch/riscv/Kconfig | 16 +++++++++++
arch/riscv/Makefile | 3 ++
arch/riscv/include/asm/cmpxchg.h | 48 +++++++++++++++++++++-----------
3 files changed, 50 insertions(+), 17 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0f3cd7c3a436..d955c64d50c2 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -613,6 +613,22 @@ config RISCV_ISA_ZAWRS
use of these instructions in the kernel when the Zawrs extension is
detected at boot.
+config TOOLCHAIN_HAS_ZACAS
+ bool
+ default y
+ depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zacas)
+ depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zacas)
+ depends on AS_HAS_OPTION_ARCH
+
+config RISCV_ISA_ZACAS
+ bool "Zacas extension support for atomic CAS"
+ depends on TOOLCHAIN_HAS_ZACAS
+ depends on RISCV_ALTERNATIVE
+ default y
+ help
+ Enable the use of the Zacas ISA-extension to implement kernel atomic
+ cmpxchg operations when it is detected at boot.
+
If you don't know what to do here, say Y.
config TOOLCHAIN_HAS_ZBB
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index 6fe682139d2e..f1788131d5fe 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -82,6 +82,9 @@ else
riscv-march-$(CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI) := $(riscv-march-y)_zicsr_zifencei
endif
+# Check if the toolchain supports Zacas
+riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZACAS) := $(riscv-march-y)_zacas
+
# Remove F,D,V from isa string for all. Keep extensions between "fd" and "v" by
# matching non-v and non-multi-letter extensions out with the filter ([^v_]*)
KBUILD_CFLAGS += -march=$(shell echo $(riscv-march-y) | sed -E 's/(rv32ima|rv64ima)fd([^v_]*)v?/\1\2/')
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index ac1d7df898ef..39c1daf39f6a 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -12,6 +12,7 @@
#include <asm/fence.h>
#include <asm/hwcap.h>
#include <asm/insn-def.h>
+#include <asm/cpufeature-macros.h>
#define __arch_xchg_masked(sc_sfx, prepend, append, r, p, n) \
({ \
@@ -137,24 +138,37 @@
r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
})
-#define __arch_cmpxchg(lr_sfx, sc_sfx, prepend, append, r, p, co, o, n) \
+#define __arch_cmpxchg(lr_sfx, sc_cas_sfx, prepend, append, r, p, co, o, n) \
({ \
- register unsigned int __rc; \
+ if (IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) && \
+ riscv_has_extension_unlikely(RISCV_ISA_EXT_ZACAS)) { \
+ r = o; \
\
- __asm__ __volatile__ ( \
- prepend \
- "0: lr" lr_sfx " %0, %2\n" \
- " bne %0, %z3, 1f\n" \
- " sc" sc_sfx " %1, %z4, %2\n" \
- " bnez %1, 0b\n" \
- append \
- "1:\n" \
- : "=&r" (r), "=&r" (__rc), "+A" (*(p)) \
- : "rJ" (co o), "rJ" (n) \
- : "memory"); \
+ __asm__ __volatile__ ( \
+ prepend \
+ " amocas" sc_cas_sfx " %0, %z2, %1\n" \
+ append \
+ : "+&r" (r), "+A" (*(p)) \
+ : "rJ" (n) \
+ : "memory"); \
+ } else { \
+ register unsigned int __rc; \
+ \
+ __asm__ __volatile__ ( \
+ prepend \
+ "0: lr" lr_sfx " %0, %2\n" \
+ " bne %0, %z3, 1f\n" \
+ " sc" sc_cas_sfx " %1, %z4, %2\n" \
+ " bnez %1, 0b\n" \
+ append \
+ "1:\n" \
+ : "=&r" (r), "=&r" (__rc), "+A" (*(p)) \
+ : "rJ" (co o), "rJ" (n) \
+ : "memory"); \
+ } \
})
-#define _arch_cmpxchg(ptr, old, new, sc_sfx, prepend, append) \
+#define _arch_cmpxchg(ptr, old, new, sc_cas_sfx, prepend, append) \
({ \
__typeof__(ptr) __ptr = (ptr); \
__typeof__(*(__ptr)) __old = (old); \
@@ -164,15 +178,15 @@
switch (sizeof(*__ptr)) { \
case 1: \
case 2: \
- __arch_cmpxchg_masked(sc_sfx, prepend, append, \
+ __arch_cmpxchg_masked(sc_cas_sfx, prepend, append, \
__ret, __ptr, __old, __new); \
break; \
case 4: \
- __arch_cmpxchg(".w", ".w" sc_sfx, prepend, append, \
+ __arch_cmpxchg(".w", ".w" sc_cas_sfx, prepend, append, \
__ret, __ptr, (long), __old, __new); \
break; \
case 8: \
- __arch_cmpxchg(".d", ".d" sc_sfx, prepend, append, \
+ __arch_cmpxchg(".d", ".d" sc_cas_sfx, prepend, append, \
__ret, __ptr, /**/, __old, __new); \
break; \
default: \
--
2.39.2
* [PATCH v5 04/13] dt-bindings: riscv: Add Zabha ISA extension description
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (2 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 03/13] riscv: Implement cmpxchg32/64() using Zacas Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-21 14:16 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 05/13] riscv: Implement cmpxchg8/16() using Zabha Alexandre Ghiti
` (9 subsequent siblings)
13 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti, Conor Dooley
Add a description for the Zabha ISA extension, which was ratified in
April 2024.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
---
Documentation/devicetree/bindings/riscv/extensions.yaml | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/Documentation/devicetree/bindings/riscv/extensions.yaml b/Documentation/devicetree/bindings/riscv/extensions.yaml
index a06dbc6b4928..a63578b95c4a 100644
--- a/Documentation/devicetree/bindings/riscv/extensions.yaml
+++ b/Documentation/devicetree/bindings/riscv/extensions.yaml
@@ -171,6 +171,12 @@ properties:
memory types as ratified in the 20191213 version of the privileged
ISA specification.
+ - const: zabha
+ description: |
+ The Zabha extension for Byte and Halfword Atomic Memory Operations
+ as ratified at commit 49f49c842ff9 ("Update to Rafified state") of
+ riscv-zabha.
+
- const: zacas
description: |
The Zacas extension for Atomic Compare-and-Swap (CAS) instructions
--
2.39.2
* [PATCH v5 05/13] riscv: Implement cmpxchg8/16() using Zabha
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (3 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 04/13] dt-bindings: riscv: Add Zabha ISA extension description Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 06/13] riscv: Improve zacas fully-ordered cmpxchg() Alexandre Ghiti
` (8 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti, Andrew Jones
This adds runtime support for Zabha in cmpxchg8/16() operations.
Note that in the absence of Zacas support in the toolchain, CAS
instructions from Zabha won't be used.
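With both extensions detected at runtime, a byte cmpxchg then becomes a
single instruction instead of the masked lr.w/sc.w loop (sketch of the
relaxed case):

  u8 r = old;

  __asm__ __volatile__ (
          "       amocas.b %0, %z2, %1\n"
          : "+&r" (r), "+A" (*ptr)
          : "rJ" (new)
          : "memory");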
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
---
arch/riscv/Kconfig | 18 ++++++++
arch/riscv/Makefile | 3 ++
arch/riscv/include/asm/cmpxchg.h | 78 ++++++++++++++++++++------------
arch/riscv/include/asm/hwcap.h | 1 +
arch/riscv/kernel/cpufeature.c | 1 +
5 files changed, 72 insertions(+), 29 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d955c64d50c2..212ec2aab389 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -613,6 +613,24 @@ config RISCV_ISA_ZAWRS
use of these instructions in the kernel when the Zawrs extension is
detected at boot.
+config TOOLCHAIN_HAS_ZABHA
+ bool
+ default y
+ depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zabha)
+ depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zabha)
+ depends on AS_HAS_OPTION_ARCH
+
+config RISCV_ISA_ZABHA
+ bool "Zabha extension support for atomic byte/halfword operations"
+ depends on TOOLCHAIN_HAS_ZABHA
+ depends on RISCV_ALTERNATIVE
+ default y
+ help
+ Enable the use of the Zabha ISA-extension to implement kernel
+ byte/halfword atomic memory operations when it is detected at boot.
+
+ If you don't know what to do here, say Y.
+
config TOOLCHAIN_HAS_ZACAS
bool
default y
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index f1788131d5fe..f6dc5ba7c526 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -85,6 +85,9 @@ endif
# Check if the toolchain supports Zacas
riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZACAS) := $(riscv-march-y)_zacas
+# Check if the toolchain supports Zabha
+riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZABHA) := $(riscv-march-y)_zabha
+
# Remove F,D,V from isa string for all. Keep extensions between "fd" and "v" by
# matching non-v and non-multi-letter extensions out with the filter ([^v_]*)
KBUILD_CFLAGS += -march=$(shell echo $(riscv-march-y) | sed -E 's/(rv32ima|rv64ima)fd([^v_]*)v?/\1\2/')
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 39c1daf39f6a..1f4cd12e4664 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -108,34 +108,49 @@
* indicated by comparing RETURN with OLD.
*/
-#define __arch_cmpxchg_masked(sc_sfx, prepend, append, r, p, o, n) \
-({ \
- u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \
- ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
- ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
- << __s; \
- ulong __newx = (ulong)(n) << __s; \
- ulong __oldx = (ulong)(o) << __s; \
- ulong __retx; \
- ulong __rc; \
- \
- __asm__ __volatile__ ( \
- prepend \
- "0: lr.w %0, %2\n" \
- " and %1, %0, %z5\n" \
- " bne %1, %z3, 1f\n" \
- " and %1, %0, %z6\n" \
- " or %1, %1, %z4\n" \
- " sc.w" sc_sfx " %1, %1, %2\n" \
- " bnez %1, 0b\n" \
- append \
- "1:\n" \
- : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
- : "rJ" ((long)__oldx), "rJ" (__newx), \
- "rJ" (__mask), "rJ" (~__mask) \
- : "memory"); \
- \
- r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
+#define __arch_cmpxchg_masked(sc_sfx, cas_sfx, prepend, append, r, p, o, n) \
+({ \
+ if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) && \
+ IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) && \
+ riscv_has_extension_unlikely(RISCV_ISA_EXT_ZABHA) && \
+ riscv_has_extension_unlikely(RISCV_ISA_EXT_ZACAS)) { \
+ r = o; \
+ \
+ __asm__ __volatile__ ( \
+ prepend \
+ " amocas" cas_sfx " %0, %z2, %1\n" \
+ append \
+ : "+&r" (r), "+A" (*(p)) \
+ : "rJ" (n) \
+ : "memory"); \
+ } else { \
+ u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \
+ ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
+ ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
+ << __s; \
+ ulong __newx = (ulong)(n) << __s; \
+ ulong __oldx = (ulong)(o) << __s; \
+ ulong __retx; \
+ ulong __rc; \
+ \
+ __asm__ __volatile__ ( \
+ prepend \
+ "0: lr.w %0, %2\n" \
+ " and %1, %0, %z5\n" \
+ " bne %1, %z3, 1f\n" \
+ " and %1, %0, %z6\n" \
+ " or %1, %1, %z4\n" \
+ " sc.w" sc_sfx " %1, %1, %2\n" \
+ " bnez %1, 0b\n" \
+ append \
+ "1:\n" \
+ : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
+ : "rJ" ((long)__oldx), "rJ" (__newx), \
+ "rJ" (__mask), "rJ" (~__mask) \
+ : "memory"); \
+ \
+ r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
+ } \
})
#define __arch_cmpxchg(lr_sfx, sc_cas_sfx, prepend, append, r, p, co, o, n) \
@@ -177,8 +192,13 @@
\
switch (sizeof(*__ptr)) { \
case 1: \
+ __arch_cmpxchg_masked(sc_cas_sfx, ".b" sc_cas_sfx, \
+ prepend, append, \
+ __ret, __ptr, __old, __new); \
+ break; \
case 2: \
- __arch_cmpxchg_masked(sc_cas_sfx, prepend, append, \
+ __arch_cmpxchg_masked(sc_cas_sfx, ".h" sc_cas_sfx, \
+ prepend, append, \
__ret, __ptr, __old, __new); \
break; \
case 4: \
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 5a0bd27fd11a..f5d53251c947 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -92,6 +92,7 @@
#define RISCV_ISA_EXT_ZCF 83
#define RISCV_ISA_EXT_ZCMOP 84
#define RISCV_ISA_EXT_ZAWRS 85
+#define RISCV_ISA_EXT_ZABHA 86
#define RISCV_ISA_EXT_XLINUXENVCFG 127
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index b427188b28fc..67ebcc4c9424 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -322,6 +322,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
__RISCV_ISA_EXT_DATA(zihpm, RISCV_ISA_EXT_ZIHPM),
__RISCV_ISA_EXT_DATA(zimop, RISCV_ISA_EXT_ZIMOP),
+ __RISCV_ISA_EXT_DATA(zabha, RISCV_ISA_EXT_ZABHA),
__RISCV_ISA_EXT_DATA(zacas, RISCV_ISA_EXT_ZACAS),
__RISCV_ISA_EXT_DATA(zawrs, RISCV_ISA_EXT_ZAWRS),
__RISCV_ISA_EXT_DATA(zfa, RISCV_ISA_EXT_ZFA),
--
2.39.2
* [PATCH v5 06/13] riscv: Improve zacas fully-ordered cmpxchg()
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (4 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 05/13] riscv: Implement cmpxchg8/16() using Zabha Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-21 14:26 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 07/13] riscv: Implement arch_cmpxchg128() using Zacas Alexandre Ghiti
` (7 subsequent siblings)
13 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti, Andrea Parri
The current fully-ordered cmpxchgXX() implementation results in:
amocas.X.rl a5,a4,(s1)
fence rw,rw
This provides sufficient ordering, but we can actually use the following
better mapping instead:
amocas.X.aqrl a5,a4,(s1)
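For reference, here are the combinations the reworked macros below end
up selecting for each memory-ordering variant, assuming the usual
barrier definitions from asm/fence.h (a sketch, LR/SC path vs amocas
path):

  arch_cmpxchg_relaxed():  lr/sc                    | amocas
  arch_cmpxchg_acquire():  lr/sc + fence r,rw       | amocas + fence r,rw
  arch_cmpxchg_release():  fence rw,w + lr/sc       | fence rw,w + amocas
  arch_cmpxchg():          lr + sc.rl + fence rw,rw | amocas.aqrl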
Suggested-by: Andrea Parri <andrea@rivosinc.com>
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/riscv/include/asm/cmpxchg.h | 92 ++++++++++++++++++++++----------
1 file changed, 64 insertions(+), 28 deletions(-)
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 1f4cd12e4664..5b2f95f7f310 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -107,8 +107,10 @@
* store NEW in MEM. Return the initial value in MEM. Success is
* indicated by comparing RETURN with OLD.
*/
-
-#define __arch_cmpxchg_masked(sc_sfx, cas_sfx, prepend, append, r, p, o, n) \
+#define __arch_cmpxchg_masked(sc_sfx, cas_sfx, \
+ sc_prepend, sc_append, \
+ cas_prepend, cas_append, \
+ r, p, o, n) \
({ \
if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) && \
IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) && \
@@ -117,9 +119,9 @@
r = o; \
\
__asm__ __volatile__ ( \
- prepend \
+ cas_prepend \
" amocas" cas_sfx " %0, %z2, %1\n" \
- append \
+ cas_append \
: "+&r" (r), "+A" (*(p)) \
: "rJ" (n) \
: "memory"); \
@@ -134,7 +136,7 @@
ulong __rc; \
\
__asm__ __volatile__ ( \
- prepend \
+ sc_prepend \
"0: lr.w %0, %2\n" \
" and %1, %0, %z5\n" \
" bne %1, %z3, 1f\n" \
@@ -142,7 +144,7 @@
" or %1, %1, %z4\n" \
" sc.w" sc_sfx " %1, %1, %2\n" \
" bnez %1, 0b\n" \
- append \
+ sc_append \
"1:\n" \
: "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
: "rJ" ((long)__oldx), "rJ" (__newx), \
@@ -153,16 +155,19 @@
} \
})
-#define __arch_cmpxchg(lr_sfx, sc_cas_sfx, prepend, append, r, p, co, o, n) \
+#define __arch_cmpxchg(lr_sfx, sc_sfx, cas_sfx, \
+ sc_prepend, sc_append, \
+ cas_prepend, cas_append, \
+ r, p, co, o, n) \
({ \
if (IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) && \
riscv_has_extension_unlikely(RISCV_ISA_EXT_ZACAS)) { \
r = o; \
\
__asm__ __volatile__ ( \
- prepend \
- " amocas" sc_cas_sfx " %0, %z2, %1\n" \
- append \
+ cas_prepend \
+ " amocas" cas_sfx " %0, %z2, %1\n" \
+ cas_append \
: "+&r" (r), "+A" (*(p)) \
: "rJ" (n) \
: "memory"); \
@@ -170,12 +175,12 @@
register unsigned int __rc; \
\
__asm__ __volatile__ ( \
- prepend \
+ sc_prepend \
"0: lr" lr_sfx " %0, %2\n" \
" bne %0, %z3, 1f\n" \
- " sc" sc_cas_sfx " %1, %z4, %2\n" \
+ " sc" sc_sfx " %1, %z4, %2\n" \
" bnez %1, 0b\n" \
- append \
+ sc_append \
"1:\n" \
: "=&r" (r), "=&r" (__rc), "+A" (*(p)) \
: "rJ" (co o), "rJ" (n) \
@@ -183,7 +188,9 @@
} \
})
-#define _arch_cmpxchg(ptr, old, new, sc_cas_sfx, prepend, append) \
+#define _arch_cmpxchg(ptr, old, new, sc_sfx, cas_sfx, \
+ sc_prepend, sc_append, \
+ cas_prepend, cas_append) \
({ \
__typeof__(ptr) __ptr = (ptr); \
__typeof__(*(__ptr)) __old = (old); \
@@ -192,22 +199,28 @@
\
switch (sizeof(*__ptr)) { \
case 1: \
- __arch_cmpxchg_masked(sc_cas_sfx, ".b" sc_cas_sfx, \
- prepend, append, \
- __ret, __ptr, __old, __new); \
+ __arch_cmpxchg_masked(sc_sfx, ".b" cas_sfx, \
+ sc_prepend, sc_append, \
+ cas_prepend, cas_append, \
+ __ret, __ptr, __old, __new); \
break; \
case 2: \
- __arch_cmpxchg_masked(sc_cas_sfx, ".h" sc_cas_sfx, \
- prepend, append, \
- __ret, __ptr, __old, __new); \
+ __arch_cmpxchg_masked(sc_sfx, ".h" cas_sfx, \
+ sc_prepend, sc_append, \
+ cas_prepend, cas_append, \
+ __ret, __ptr, __old, __new); \
break; \
case 4: \
- __arch_cmpxchg(".w", ".w" sc_cas_sfx, prepend, append, \
- __ret, __ptr, (long), __old, __new); \
+ __arch_cmpxchg(".w", ".w" sc_sfx, ".w" cas_sfx, \
+ sc_prepend, sc_append, \
+ cas_prepend, cas_append, \
+ __ret, __ptr, (long), __old, __new); \
break; \
case 8: \
- __arch_cmpxchg(".d", ".d" sc_cas_sfx, prepend, append, \
- __ret, __ptr, /**/, __old, __new); \
+ __arch_cmpxchg(".d", ".d" sc_sfx, ".d" cas_sfx, \
+ sc_prepend, sc_append, \
+ cas_prepend, cas_append, \
+ __ret, __ptr, /**/, __old, __new); \
break; \
default: \
BUILD_BUG(); \
@@ -215,17 +228,40 @@
(__typeof__(*(__ptr)))__ret; \
})
+/*
+ * Those macros are there only to make the arch_cmpxchg_XXX() macros
+ * more readable.
+ */
+#define SC_SFX(x) x
+#define CAS_SFX(x) x
+#define SC_PREPEND(x) x
+#define SC_APPEND(x) x
+#define CAS_PREPEND(x) x
+#define CAS_APPEND(x) x
+
#define arch_cmpxchg_relaxed(ptr, o, n) \
- _arch_cmpxchg((ptr), (o), (n), "", "", "")
+ _arch_cmpxchg((ptr), (o), (n), \
+ SC_SFX(""), CAS_SFX(""), \
+ SC_PREPEND(""), SC_APPEND(""), \
+ CAS_PREPEND(""), CAS_APPEND(""))
#define arch_cmpxchg_acquire(ptr, o, n) \
- _arch_cmpxchg((ptr), (o), (n), "", "", RISCV_ACQUIRE_BARRIER)
+ _arch_cmpxchg((ptr), (o), (n), \
+ SC_SFX(""), CAS_SFX(""), \
+ SC_PREPEND(""), SC_APPEND(RISCV_ACQUIRE_BARRIER), \
+ CAS_PREPEND(""), CAS_APPEND(RISCV_ACQUIRE_BARRIER))
#define arch_cmpxchg_release(ptr, o, n) \
- _arch_cmpxchg((ptr), (o), (n), "", RISCV_RELEASE_BARRIER, "")
+ _arch_cmpxchg((ptr), (o), (n), \
+ SC_SFX(""), CAS_SFX(""), \
+ SC_PREPEND(RISCV_RELEASE_BARRIER), SC_APPEND(""), \
+ CAS_PREPEND(RISCV_RELEASE_BARRIER), CAS_APPEND(""))
#define arch_cmpxchg(ptr, o, n) \
- _arch_cmpxchg((ptr), (o), (n), ".rl", "", " fence rw, rw\n")
+ _arch_cmpxchg((ptr), (o), (n), \
+ SC_SFX(".rl"), CAS_SFX(".aqrl"), \
+ SC_PREPEND(""), SC_APPEND(RISCV_FULL_BARRIER), \
+ CAS_PREPEND(""), CAS_APPEND(""))
#define arch_cmpxchg_local(ptr, o, n) \
arch_cmpxchg_relaxed((ptr), (o), (n))
--
2.39.2
* [PATCH v5 07/13] riscv: Implement arch_cmpxchg128() using Zacas
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (5 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 06/13] riscv: Improve zacas fully-ordered cmpxchg() Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 08/13] riscv: Implement xchg8/16() using Zabha Alexandre Ghiti
` (6 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti, Andrew Jones
Now that Zacas is supported in the kernel, let's use the double-word
atomic version of amocas to improve the SLUB allocator.
Note that we have to select fixed registers: otherwise, gcc fails to
pick even-numbered registers for the register pairs and then produces a
reserved encoding which fails to assemble.
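A usage sketch (64-bit kernel with Zacas detected at boot; u128 is
naturally 16-byte aligned):

  static u128 v;

  u128 expected = 0, new = 1;
  /* Fully ordered: expands to a single amocas.q.aqrl. */
  u128 prev = arch_cmpxchg128(&v, expected, new);

  if (prev == expected)
          /* the exchange succeeded */;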
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/cmpxchg.h | 38 ++++++++++++++++++++++++++++++++
2 files changed, 39 insertions(+)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 212ec2aab389..ef55ab94027e 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -111,6 +111,7 @@ config RISCV
select GENERIC_VDSO_TIME_NS if HAVE_GENERIC_VDSO
select HARDIRQS_SW_RESEND
select HAS_IOPORT if MMU
+ select HAVE_ALIGNED_STRUCT_PAGE
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_HUGE_VMALLOC if HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_HUGE_VMAP if MMU && 64BIT
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 5b2f95f7f310..05ba8a8e2ef5 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -296,6 +296,44 @@
arch_cmpxchg_release((ptr), (o), (n)); \
})
+#if defined(CONFIG_64BIT) && defined(CONFIG_RISCV_ISA_ZACAS)
+
+#define system_has_cmpxchg128() riscv_has_extension_unlikely(RISCV_ISA_EXT_ZACAS)
+
+union __u128_halves {
+ u128 full;
+ struct {
+ u64 low, high;
+ };
+};
+
+#define __arch_cmpxchg128(p, o, n, cas_sfx) \
+({ \
+ __typeof__(*(p)) __o = (o); \
+ union __u128_halves __hn = { .full = (n) }; \
+ union __u128_halves __ho = { .full = (__o) }; \
+ register unsigned long t1 asm ("t1") = __hn.low; \
+ register unsigned long t2 asm ("t2") = __hn.high; \
+ register unsigned long t3 asm ("t3") = __ho.low; \
+ register unsigned long t4 asm ("t4") = __ho.high; \
+ \
+ __asm__ __volatile__ ( \
+ " amocas.q" cas_sfx " %0, %z3, %2" \
+ : "+&r" (t3), "+&r" (t4), "+A" (*(p)) \
+ : "rJ" (t1), "rJ" (t2) \
+ : "memory"); \
+ \
+ ((u128)t4 << 64) | t3; \
+})
+
+#define arch_cmpxchg128(ptr, o, n) \
+ __arch_cmpxchg128((ptr), (o), (n), ".aqrl")
+
+#define arch_cmpxchg128_local(ptr, o, n) \
+ __arch_cmpxchg128((ptr), (o), (n), "")
+
+#endif /* CONFIG_64BIT && CONFIG_RISCV_ISA_ZACAS */
+
#ifdef CONFIG_RISCV_ISA_ZAWRS
/*
* Despite wrs.nto being "WRS-with-no-timeout", in the absence of changes to
--
2.39.2
* [PATCH v5 08/13] riscv: Implement xchg8/16() using Zabha
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (6 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 07/13] riscv: Implement arch_cmpxchg128() using Zacas Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 09/13] asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock Alexandre Ghiti
` (5 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti, Andrew Jones
This adds runtime support for Zabha in xchg8/16() operations.
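With Zabha detected, the relaxed byte case becomes a single AMO instead
of the masked lr.w/sc.w loop (sketch):

  u8 r;

  __asm__ __volatile__ (
          "       amoswap.b %0, %z2, %1\n"
          : "=&r" (r), "+A" (*ptr)
          : "rJ" (new)
          : "memory");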
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
---
arch/riscv/include/asm/cmpxchg.h | 65 ++++++++++++++++++++------------
1 file changed, 41 insertions(+), 24 deletions(-)
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 05ba8a8e2ef5..19779d4d134a 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -14,29 +14,41 @@
#include <asm/insn-def.h>
#include <asm/cpufeature-macros.h>
-#define __arch_xchg_masked(sc_sfx, prepend, append, r, p, n) \
-({ \
- u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \
- ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
- ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
- << __s; \
- ulong __newx = (ulong)(n) << __s; \
- ulong __retx; \
- ulong __rc; \
- \
- __asm__ __volatile__ ( \
- prepend \
- "0: lr.w %0, %2\n" \
- " and %1, %0, %z4\n" \
- " or %1, %1, %z3\n" \
- " sc.w" sc_sfx " %1, %1, %2\n" \
- " bnez %1, 0b\n" \
- append \
- : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
- : "rJ" (__newx), "rJ" (~__mask) \
- : "memory"); \
- \
- r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
+#define __arch_xchg_masked(sc_sfx, swap_sfx, prepend, sc_append, \
+ swap_append, r, p, n) \
+({ \
+ if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) && \
+ riscv_has_extension_unlikely(RISCV_ISA_EXT_ZABHA)) { \
+ __asm__ __volatile__ ( \
+ prepend \
+ " amoswap" swap_sfx " %0, %z2, %1\n" \
+ swap_append \
+ : "=&r" (r), "+A" (*(p)) \
+ : "rJ" (n) \
+ : "memory"); \
+ } else { \
+ u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \
+ ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
+ ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
+ << __s; \
+ ulong __newx = (ulong)(n) << __s; \
+ ulong __retx; \
+ ulong __rc; \
+ \
+ __asm__ __volatile__ ( \
+ prepend \
+ "0: lr.w %0, %2\n" \
+ " and %1, %0, %z4\n" \
+ " or %1, %1, %z3\n" \
+ " sc.w" sc_sfx " %1, %1, %2\n" \
+ " bnez %1, 0b\n" \
+ sc_append \
+ : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
+ : "rJ" (__newx), "rJ" (~__mask) \
+ : "memory"); \
+ \
+ r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
+ } \
})
#define __arch_xchg(sfx, prepend, append, r, p, n) \
@@ -59,8 +71,13 @@
\
switch (sizeof(*__ptr)) { \
case 1: \
+ __arch_xchg_masked(sc_sfx, ".b" swap_sfx, \
+ prepend, sc_append, swap_append, \
+ __ret, __ptr, __new); \
+ break; \
case 2: \
- __arch_xchg_masked(sc_sfx, prepend, sc_append, \
+ __arch_xchg_masked(sc_sfx, ".h" swap_sfx, \
+ prepend, sc_append, swap_append, \
__ret, __ptr, __new); \
break; \
case 4: \
--
2.39.2
* [PATCH v5 09/13] asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (7 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 08/13] riscv: Implement xchg8/16() using Zabha Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-21 14:28 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 10/13] asm-generic: ticket-lock: Add separate ticket-lock.h Alexandre Ghiti
` (4 subsequent siblings)
13 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Guo Ren
From: Guo Ren <guoren@linux.alibaba.com>
The arch_spinlock_t of qspinlock already contains the atomic_t val,
which satisfies the ticket-lock requirement. Thus, unify the
arch_spinlock_t into qspinlock_types.h. This is in preparation for the
combo spinlock introduced later in this series.
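After this change, both lock flavours share the qspinlock layout
(simplified view; the actual union in qspinlock_types.h also exposes
locked/pending/tail sub-fields):

  typedef struct qspinlock {
          union {
                  atomic_t val;
                  /* ... byte/halfword views of the lock word ... */
          };
  } arch_spinlock_t;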
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/linux-riscv/CAK8P3a2rnz9mQqhN6-e0CGUUv9rntRELFdxt_weiD7FxH7fkfQ@mail.gmail.com/
Signed-off-by: Guo Ren <guoren@kernel.org>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/asm-generic/spinlock.h | 14 +++++++-------
include/asm-generic/spinlock_types.h | 12 ++----------
2 files changed, 9 insertions(+), 17 deletions(-)
diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index 90803a826ba0..4773334ee638 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -32,7 +32,7 @@
static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
{
- u32 val = atomic_fetch_add(1<<16, lock);
+ u32 val = atomic_fetch_add(1<<16, &lock->val);
u16 ticket = val >> 16;
if (ticket == (u16)val)
@@ -46,31 +46,31 @@ static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
* have no outstanding writes due to the atomic_fetch_add() the extra
* orderings are free.
*/
- atomic_cond_read_acquire(lock, ticket == (u16)VAL);
+ atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
smp_mb();
}
static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
{
- u32 old = atomic_read(lock);
+ u32 old = atomic_read(&lock->val);
if ((old >> 16) != (old & 0xffff))
return false;
- return atomic_try_cmpxchg(lock, &old, old + (1<<16)); /* SC, for RCsc */
+ return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
}
static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
- u32 val = atomic_read(lock);
+ u32 val = atomic_read(&lock->val);
smp_store_release(ptr, (u16)val + 1);
}
static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
- u32 val = lock.counter;
+ u32 val = lock.val.counter;
return ((val >> 16) == (val & 0xffff));
}
@@ -84,7 +84,7 @@ static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
{
- u32 val = atomic_read(lock);
+ u32 val = atomic_read(&lock->val);
return (s16)((val >> 16) - (val & 0xffff)) > 1;
}
diff --git a/include/asm-generic/spinlock_types.h b/include/asm-generic/spinlock_types.h
index 8962bb730945..f534aa5de394 100644
--- a/include/asm-generic/spinlock_types.h
+++ b/include/asm-generic/spinlock_types.h
@@ -3,15 +3,7 @@
#ifndef __ASM_GENERIC_SPINLOCK_TYPES_H
#define __ASM_GENERIC_SPINLOCK_TYPES_H
-#include <linux/types.h>
-typedef atomic_t arch_spinlock_t;
-
-/*
- * qrwlock_types depends on arch_spinlock_t, so we must typedef that before the
- * include.
- */
-#include <asm/qrwlock_types.h>
-
-#define __ARCH_SPIN_LOCK_UNLOCKED ATOMIC_INIT(0)
+#include <asm-generic/qspinlock_types.h>
+#include <asm-generic/qrwlock_types.h>
#endif /* __ASM_GENERIC_SPINLOCK_TYPES_H */
--
2.39.2
* [PATCH v5 10/13] asm-generic: ticket-lock: Add separate ticket-lock.h
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (8 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 09/13] asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-21 14:32 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 11/13] riscv: Add ISA extension parsing for Ziccrse Alexandre Ghiti
` (3 subsequent siblings)
13 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Guo Ren
From: Guo Ren <guoren@linux.alibaba.com>
Add a separate ticket_spinlock.h so that an architecture can include
multiple spinlock versions and select one at compile time or runtime.
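An architecture can then select an implementation at compile time along
these lines (a sketch; the runtime selection for riscv comes later in
this series):

  #ifdef CONFIG_QUEUED_SPINLOCKS
  #include <asm/qspinlock.h>
  #else
  #include <asm-generic/ticket_spinlock.h>
  #endif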
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/linux-riscv/CAK8P3a2rnz9mQqhN6-e0CGUUv9rntRELFdxt_weiD7FxH7fkfQ@mail.gmail.com/
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/asm-generic/spinlock.h | 87 +---------------------
include/asm-generic/ticket_spinlock.h | 103 ++++++++++++++++++++++++++
2 files changed, 104 insertions(+), 86 deletions(-)
create mode 100644 include/asm-generic/ticket_spinlock.h
diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index 4773334ee638..970590baf61b 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -1,94 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * 'Generic' ticket-lock implementation.
- *
- * It relies on atomic_fetch_add() having well defined forward progress
- * guarantees under contention. If your architecture cannot provide this, stick
- * to a test-and-set lock.
- *
- * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
- * sub-word of the value. This is generally true for anything LL/SC although
- * you'd be hard pressed to find anything useful in architecture specifications
- * about this. If your architecture cannot do this you might be better off with
- * a test-and-set.
- *
- * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
- * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
- * a full fence after the spin to upgrade the otherwise-RCpc
- * atomic_cond_read_acquire().
- *
- * The implementation uses smp_cond_load_acquire() to spin, so if the
- * architecture has WFE like instructions to sleep instead of poll for word
- * modifications be sure to implement that (see ARM64 for example).
- *
- */
-
#ifndef __ASM_GENERIC_SPINLOCK_H
#define __ASM_GENERIC_SPINLOCK_H
-#include <linux/atomic.h>
-#include <asm-generic/spinlock_types.h>
-
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
- u32 val = atomic_fetch_add(1<<16, &lock->val);
- u16 ticket = val >> 16;
-
- if (ticket == (u16)val)
- return;
-
- /*
- * atomic_cond_read_acquire() is RCpc, but rather than defining a
- * custom cond_read_rcsc() here we just emit a full fence. We only
- * need the prior reads before subsequent writes ordering from
- * smb_mb(), but as atomic_cond_read_acquire() just emits reads and we
- * have no outstanding writes due to the atomic_fetch_add() the extra
- * orderings are free.
- */
- atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
- smp_mb();
-}
-
-static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
-{
- u32 old = atomic_read(&lock->val);
-
- if ((old >> 16) != (old & 0xffff))
- return false;
-
- return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
- u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
- u32 val = atomic_read(&lock->val);
-
- smp_store_release(ptr, (u16)val + 1);
-}
-
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
- u32 val = lock.val.counter;
-
- return ((val >> 16) == (val & 0xffff));
-}
-
-static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
- arch_spinlock_t val = READ_ONCE(*lock);
-
- return !arch_spin_value_unlocked(val);
-}
-
-static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
- u32 val = atomic_read(&lock->val);
-
- return (s16)((val >> 16) - (val & 0xffff)) > 1;
-}
-
+#include <asm-generic/ticket_spinlock.h>
#include <asm/qrwlock.h>
#endif /* __ASM_GENERIC_SPINLOCK_H */
diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
new file mode 100644
index 000000000000..cfcff22b37b3
--- /dev/null
+++ b/include/asm-generic/ticket_spinlock.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * 'Generic' ticket-lock implementation.
+ *
+ * It relies on atomic_fetch_add() having well defined forward progress
+ * guarantees under contention. If your architecture cannot provide this, stick
+ * to a test-and-set lock.
+ *
+ * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
+ * sub-word of the value. This is generally true for anything LL/SC although
+ * you'd be hard pressed to find anything useful in architecture specifications
+ * about this. If your architecture cannot do this you might be better off with
+ * a test-and-set.
+ *
+ * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
+ * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
+ * a full fence after the spin to upgrade the otherwise-RCpc
+ * atomic_cond_read_acquire().
+ *
+ * The implementation uses smp_cond_load_acquire() to spin, so if the
+ * architecture has WFE like instructions to sleep instead of poll for word
+ * modifications be sure to implement that (see ARM64 for example).
+ *
+ */
+
+#ifndef __ASM_GENERIC_TICKET_SPINLOCK_H
+#define __ASM_GENERIC_TICKET_SPINLOCK_H
+
+#include <linux/atomic.h>
+#include <asm-generic/spinlock_types.h>
+
+static __always_inline void ticket_spin_lock(arch_spinlock_t *lock)
+{
+ u32 val = atomic_fetch_add(1<<16, &lock->val);
+ u16 ticket = val >> 16;
+
+ if (ticket == (u16)val)
+ return;
+
+ /*
+ * atomic_cond_read_acquire() is RCpc, but rather than defining a
+ * custom cond_read_rcsc() here we just emit a full fence. We only
+ * need the prior reads before subsequent writes ordering from
+ * smb_mb(), but as atomic_cond_read_acquire() just emits reads and we
+ * have no outstanding writes due to the atomic_fetch_add() the extra
+ * orderings are free.
+ */
+ atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
+ smp_mb();
+}
+
+static __always_inline bool ticket_spin_trylock(arch_spinlock_t *lock)
+{
+ u32 old = atomic_read(&lock->val);
+
+ if ((old >> 16) != (old & 0xffff))
+ return false;
+
+ return atomic_try_cmpxchg(&lock->val, &old, old + (1<<16)); /* SC, for RCsc */
+}
+
+static __always_inline void ticket_spin_unlock(arch_spinlock_t *lock)
+{
+ u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
+ u32 val = atomic_read(&lock->val);
+
+ smp_store_release(ptr, (u16)val + 1);
+}
+
+static __always_inline int ticket_spin_value_unlocked(arch_spinlock_t lock)
+{
+ u32 val = lock.val.counter;
+
+ return ((val >> 16) == (val & 0xffff));
+}
+
+static __always_inline int ticket_spin_is_locked(arch_spinlock_t *lock)
+{
+ arch_spinlock_t val = READ_ONCE(*lock);
+
+ return !ticket_spin_value_unlocked(val);
+}
+
+static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
+{
+ u32 val = atomic_read(&lock->val);
+
+ return (s16)((val >> 16) - (val & 0xffff)) > 1;
+}
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * ticket spinlock functions.
+ */
+#define arch_spin_is_locked(l) ticket_spin_is_locked(l)
+#define arch_spin_is_contended(l) ticket_spin_is_contended(l)
+#define arch_spin_value_unlocked(l) ticket_spin_value_unlocked(l)
+#define arch_spin_lock(l) ticket_spin_lock(l)
+#define arch_spin_trylock(l) ticket_spin_trylock(l)
+#define arch_spin_unlock(l) ticket_spin_unlock(l)
+
+#endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
--
2.39.2
* [PATCH v5 11/13] riscv: Add ISA extension parsing for Ziccrse
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (9 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 10/13] asm-generic: ticket-lock: Add separate ticket-lock.h Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-21 14:33 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description Alexandre Ghiti
` (2 subsequent siblings)
13 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti
Add support for parsing the Ziccrse extension in the riscv,isa string.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
arch/riscv/include/asm/hwcap.h | 1 +
arch/riscv/kernel/cpufeature.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index f5d53251c947..9e228b079a6d 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -93,6 +93,7 @@
#define RISCV_ISA_EXT_ZCMOP 84
#define RISCV_ISA_EXT_ZAWRS 85
#define RISCV_ISA_EXT_ZABHA 86
+#define RISCV_ISA_EXT_ZICCRSE 87
#define RISCV_ISA_EXT_XLINUXENVCFG 127
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 67ebcc4c9424..ea9c255bbe3d 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -314,6 +314,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
riscv_ext_zicbom_validate),
__RISCV_ISA_EXT_SUPERSET_VALIDATE(zicboz, RISCV_ISA_EXT_ZICBOZ, riscv_xlinuxenvcfg_exts,
riscv_ext_zicboz_validate),
+ __RISCV_ISA_EXT_DATA(ziccrse, RISCV_ISA_EXT_ZICCRSE),
__RISCV_ISA_EXT_DATA(zicntr, RISCV_ISA_EXT_ZICNTR),
__RISCV_ISA_EXT_DATA(zicond, RISCV_ISA_EXT_ZICOND),
__RISCV_ISA_EXT_DATA(zicsr, RISCV_ISA_EXT_ZICSR),
--
2.39.2
* [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (10 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 11/13] riscv: Add ISA extension parsing for Ziccrse Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-18 22:40 ` Conor Dooley
2024-08-21 14:35 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 13/13] riscv: Add qspinlock support Alexandre Ghiti
2024-11-13 15:12 ` [PATCH v5 00/13] Zacas/Zabha support and qspinlocks patchwork-bot+linux-riscv
13 siblings, 2 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti
Add a description for the Ziccrse ISA extension, which was ratified in
v1.0 of the riscv profiles specification.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Guo Ren <guoren@kernel.org>
---
Documentation/devicetree/bindings/riscv/extensions.yaml | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/Documentation/devicetree/bindings/riscv/extensions.yaml b/Documentation/devicetree/bindings/riscv/extensions.yaml
index a63578b95c4a..4f174c4c08ff 100644
--- a/Documentation/devicetree/bindings/riscv/extensions.yaml
+++ b/Documentation/devicetree/bindings/riscv/extensions.yaml
@@ -289,6 +289,12 @@ properties:
in commit 64074bc ("Update version numbers for Zfh/Zfinx") of
riscv-isa-manual.
+ - const: ziccrse
+ description:
+ The standard Ziccrse extension which provides forward progress
+ guarantee on LR/SC sequences, as ratified in commit b1d806605f87
+ ("Updated to ratified state.") of the riscv profiles specification.
+
- const: zk
description:
The standard Zk Standard Scalar cryptography extension as ratified
--
2.39.2
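To make the guarantee concrete, here is a minimal sketch (not part of the
series; the demo function name is made up) of the kind of constrained
LR/SC sequence Ziccrse covers, written in the inline-asm style of
arch/riscv/include/asm/cmpxchg.h. Ziccrse promises that such a loop
eventually succeeds, i.e. the sc.w cannot fail forever, which is exactly
the forward progress property the qspinlock patch relies on:

/* Atomically OR @mask into *@p and return the old value. */
static inline unsigned int demo_atomic_or(unsigned int *p, unsigned int mask)
{
        unsigned int old, rc;

        __asm__ __volatile__ (
                "0:     lr.w    %0, %2\n"       /* load-reserved old value */
                "       or      %1, %0, %z3\n"  /* compute new value */
                "       sc.w    %1, %1, %2\n"   /* %1 == 0 on success */
                "       bnez    %1, 0b\n"       /* retry until sc.w succeeds */
                : "=&r" (old), "=&r" (rc), "+A" (*p)
                : "rJ" (mask)
                : "memory");

        return old;
}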
* [PATCH v5 13/13] riscv: Add qspinlock support
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (11 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description Alexandre Ghiti
@ 2024-08-18 6:35 ` Alexandre Ghiti
2024-08-21 14:51 ` Andrew Jones
2024-11-13 15:12 ` [PATCH v5 00/13] Zacas/Zabha support and qspinlocks patchwork-bot+linux-riscv
13 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-18 6:35 UTC (permalink / raw)
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Cc: Alexandre Ghiti
In order to produce a generic kernel, a user can select
CONFIG_RISCV_COMBO_SPINLOCKS, which will fall back at runtime to the
ticket spinlock implementation if Zabha or Ziccrse are not present.
Note that we can't use alternatives here because the discovery of
extensions is done too late, and we need to start with the qspinlock
implementation because the ticket spinlock implementation would pollute
the spinlock value. So let's use static keys.
This is largely based on Guo's work and Leonardo's reviews at [1].
Link: https://lore.kernel.org/linux-riscv/20231225125847.2778638-1-guoren@kernel.org/ [1]
Signed-off-by: Guo Ren <guoren@kernel.org>
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
.../locking/queued-spinlocks/arch-support.txt | 2 +-
arch/riscv/Kconfig | 34 ++++++++++++++
arch/riscv/include/asm/Kbuild | 4 +-
arch/riscv/include/asm/spinlock.h | 47 +++++++++++++++++++
arch/riscv/kernel/setup.c | 37 +++++++++++++++
include/asm-generic/qspinlock.h | 2 +
include/asm-generic/ticket_spinlock.h | 2 +
7 files changed, 126 insertions(+), 2 deletions(-)
create mode 100644 arch/riscv/include/asm/spinlock.h
diff --git a/Documentation/features/locking/queued-spinlocks/arch-support.txt b/Documentation/features/locking/queued-spinlocks/arch-support.txt
index 22f2990392ff..cf26042480e2 100644
--- a/Documentation/features/locking/queued-spinlocks/arch-support.txt
+++ b/Documentation/features/locking/queued-spinlocks/arch-support.txt
@@ -20,7 +20,7 @@
| openrisc: | ok |
| parisc: | TODO |
| powerpc: | ok |
- | riscv: | TODO |
+ | riscv: | ok |
| s390: | TODO |
| sh: | TODO |
| sparc: | ok |
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index ef55ab94027e..201d0669db7f 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -79,6 +79,7 @@ config RISCV
select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
select ARCH_WANTS_NO_INSTR
select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
+ select ARCH_WEAK_RELEASE_ACQUIRE if ARCH_USE_QUEUED_SPINLOCKS
select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
select BUILDTIME_TABLE_SORT if MMU
select CLINT_TIMER if RISCV_M_MODE
@@ -488,6 +489,39 @@ config NODES_SHIFT
Specify the maximum number of NUMA Nodes available on the target
system. Increases memory reserved to accommodate various tables.
+choice
+ prompt "RISC-V spinlock type"
+ default RISCV_COMBO_SPINLOCKS
+
+config RISCV_TICKET_SPINLOCKS
+ bool "Using ticket spinlock"
+
+config RISCV_QUEUED_SPINLOCKS
+ bool "Using queued spinlock"
+ depends on SMP && MMU && NONPORTABLE
+ select ARCH_USE_QUEUED_SPINLOCKS
+ help
+	  The queued spinlock implementation requires the forward progress
+	  guarantee of cmpxchg()/xchg() atomic operations: CAS with Zabha or
+	  LR/SC with Ziccrse provides such a guarantee.
+
+	  Select this if and only if Zabha or Ziccrse is available on your
+	  platform; RISCV_QUEUED_SPINLOCKS must not be selected for platforms
+	  without one of those extensions.
+
+ If unsure, select RISCV_COMBO_SPINLOCKS, which will use qspinlocks
+ when supported and otherwise ticket spinlocks.
+
+config RISCV_COMBO_SPINLOCKS
+ bool "Using combo spinlock"
+ depends on SMP && MMU
+ select ARCH_USE_QUEUED_SPINLOCKS
+ help
+ Embed both queued spinlock and ticket lock so that the spinlock
+ implementation can be chosen at runtime.
+
+endchoice
+
config RISCV_ALTERNATIVE
bool
depends on !XIP_KERNEL
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 5c589770f2a8..1c2618c964f0 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -5,10 +5,12 @@ syscall-y += syscall_table_64.h
generic-y += early_ioremap.h
generic-y += flat.h
generic-y += kvm_para.h
+generic-y += mcs_spinlock.h
generic-y += parport.h
-generic-y += spinlock.h
generic-y += spinlock_types.h
+generic-y += ticket_spinlock.h
generic-y += qrwlock.h
generic-y += qrwlock_types.h
+generic-y += qspinlock.h
generic-y += user.h
generic-y += vmlinux.lds.h
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..e5121b89acea
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_RISCV_SPINLOCK_H
+#define __ASM_RISCV_SPINLOCK_H
+
+#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
+#define _Q_PENDING_LOOPS (1 << 9)
+
+#define __no_arch_spinlock_redefine
+#include <asm/ticket_spinlock.h>
+#include <asm/qspinlock.h>
+#include <asm/jump_label.h>
+
+/*
+ * TODO: Use an alternative instead of a static key when we are able to parse
+ * the extensions string earlier in the boot process.
+ */
+DECLARE_STATIC_KEY_TRUE(qspinlock_key);
+
+#define SPINLOCK_BASE_DECLARE(op, type, type_lock) \
+static __always_inline type arch_spin_##op(type_lock lock) \
+{ \
+ if (static_branch_unlikely(&qspinlock_key)) \
+ return queued_spin_##op(lock); \
+ return ticket_spin_##op(lock); \
+}
+
+SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
+
+#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS)
+
+#include <asm/qspinlock.h>
+
+#else
+
+#include <asm/ticket_spinlock.h>
+
+#endif
+
+#include <asm/qrwlock.h>
+
+#endif /* __ASM_RISCV_SPINLOCK_H */
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index a2cde65b69e9..438e4f6ad2ad 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -244,6 +244,42 @@ static void __init parse_dtb(void)
#endif
}
+#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
+DEFINE_STATIC_KEY_TRUE(qspinlock_key);
+EXPORT_SYMBOL(qspinlock_key);
+#endif
+
+static void __init riscv_spinlock_init(void)
+{
+ char *using_ext = NULL;
+
+ if (IS_ENABLED(CONFIG_RISCV_TICKET_SPINLOCKS)) {
+ pr_info("Ticket spinlock: enabled\n");
+ return;
+ }
+
+ if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&
+ IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) &&
+ riscv_isa_extension_available(NULL, ZABHA) &&
+ riscv_isa_extension_available(NULL, ZACAS)) {
+ using_ext = "using Zabha";
+ } else if (riscv_isa_extension_available(NULL, ZICCRSE)) {
+ using_ext = "using Ziccrse";
+ }
+#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
+ else {
+ static_branch_disable(&qspinlock_key);
+ pr_info("Ticket spinlock: enabled\n");
+ return;
+ }
+#endif
+
+ if (!using_ext)
+		pr_err("Queued spinlock without Zabha or Ziccrse\n");
+ else
+ pr_info("Queued spinlock %s: enabled\n", using_ext);
+}
+
extern void __init init_rt_signal_env(void);
void __init setup_arch(char **cmdline_p)
@@ -297,6 +333,7 @@ void __init setup_arch(char **cmdline_p)
riscv_set_dma_cache_alignment();
riscv_user_isa_enable();
+ riscv_spinlock_init();
}
bool arch_cpu_is_hotpluggable(int cpu)
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 0655aa5b57b2..bf47cca2c375 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -136,6 +136,7 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
}
#endif
+#ifndef __no_arch_spinlock_redefine
/*
* Remapping spinlock architecture specific functions to the corresponding
* queued spinlock functions.
@@ -146,5 +147,6 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
#define arch_spin_lock(l) queued_spin_lock(l)
#define arch_spin_trylock(l) queued_spin_trylock(l)
#define arch_spin_unlock(l) queued_spin_unlock(l)
+#endif
#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
index cfcff22b37b3..325779970d8a 100644
--- a/include/asm-generic/ticket_spinlock.h
+++ b/include/asm-generic/ticket_spinlock.h
@@ -89,6 +89,7 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
return (s16)((val >> 16) - (val & 0xffff)) > 1;
}
+#ifndef __no_arch_spinlock_redefine
/*
* Remapping spinlock architecture specific functions to the corresponding
* ticket spinlock functions.
@@ -99,5 +100,6 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
#define arch_spin_lock(l) ticket_spin_lock(l)
#define arch_spin_trylock(l) ticket_spin_trylock(l)
#define arch_spin_unlock(l) ticket_spin_unlock(l)
+#endif
#endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
--
2.39.2
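Nothing changes for callers; a hypothetical user (the demo_* names are
made up) illustrates that the combo selection is invisible at the API
level: spin_lock() resolves to arch_spin_lock(), whose static branch is
settled once by riscv_spinlock_init():

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void demo_critical_section(void)
{
        /* Queued or ticket under the hood, chosen once at boot. */
        spin_lock(&demo_lock);
        /* ... critical section ... */
        spin_unlock(&demo_lock);
}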
* Re: [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description
2024-08-18 6:35 ` [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description Alexandre Ghiti
@ 2024-08-18 22:40 ` Conor Dooley
2024-08-21 14:35 ` Andrew Jones
1 sibling, 0 replies; 26+ messages in thread
From: Conor Dooley @ 2024-08-18 22:40 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Rob Herring, Krzysztof Kozlowski, Andrea Parri, Nathan Chancellor,
Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng,
Arnd Bergmann, Leonardo Bras, Guo Ren, linux-doc, devicetree,
linux-kernel, linux-riscv, linux-arch
On Sun, Aug 18, 2024 at 08:35:37AM +0200, Alexandre Ghiti wrote:
> Add a description for the Ziccrse ISA extension, which was ratified in
> the riscv profiles specification v1.0.
>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> Reviewed-by: Guo Ren <guoren@kernel.org>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
* Re: [PATCH v5 02/13] riscv: Do not fail to build on byte/halfword operations with Zawrs
2024-08-18 6:35 ` [PATCH v5 02/13] riscv: Do not fail to build on byte/halfword operations with Zawrs Alexandre Ghiti
@ 2024-08-21 14:11 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:11 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
On Sun, Aug 18, 2024 at 08:35:27AM GMT, Alexandre Ghiti wrote:
> riscv does not have lr instructions on bytes and halfwords, but the
> qspinlock implementation actually uses such atomics as provided by the
> Zabha extension, so those sizes are legitimate.
>
> Then, instead of failing to build, just fall back to the !Zawrs path.
>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> ---
> arch/riscv/include/asm/cmpxchg.h | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> index ebbce134917c..ac1d7df898ef 100644
> --- a/arch/riscv/include/asm/cmpxchg.h
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -245,6 +245,11 @@ static __always_inline void __cmpwait(volatile void *ptr,
> : : : : no_zawrs);
>
> switch (size) {
> + case 1:
> + fallthrough;
> + case 2:
> + /* RISC-V doesn't have lr instructions on byte and half-word. */
> + goto no_zawrs;
> case 4:
> asm volatile(
> " lr.w %0, %1\n"
> --
> 2.39.2
>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
* Re: [PATCH v5 04/13] dt-bindings: riscv: Add Zabha ISA extension description
2024-08-18 6:35 ` [PATCH v5 04/13] dt-bindings: riscv: Add Zabha ISA extension description Alexandre Ghiti
@ 2024-08-21 14:16 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:16 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch,
Conor Dooley
On Sun, Aug 18, 2024 at 08:35:29AM GMT, Alexandre Ghiti wrote:
> Add a description for the Zabha ISA extension, which was ratified in April
> 2024.
>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> Reviewed-by: Guo Ren <guoren@kernel.org>
> Acked-by: Conor Dooley <conor.dooley@microchip.com>
> ---
> Documentation/devicetree/bindings/riscv/extensions.yaml | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/riscv/extensions.yaml b/Documentation/devicetree/bindings/riscv/extensions.yaml
> index a06dbc6b4928..a63578b95c4a 100644
> --- a/Documentation/devicetree/bindings/riscv/extensions.yaml
> +++ b/Documentation/devicetree/bindings/riscv/extensions.yaml
> @@ -171,6 +171,12 @@ properties:
> memory types as ratified in the 20191213 version of the privileged
> ISA specification.
>
> + - const: zabha
> + description: |
> + The Zabha extension for Byte and Halfword Atomic Memory Operations
> + as ratified at commit 49f49c842ff9 ("Update to Rafified state") of
The typo is verbatim from the commit, so
"Reviewfified-by:", err...
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
> + riscv-zabha.
> +
> - const: zacas
> description: |
> The Zacas extension for Atomic Compare-and-Swap (CAS) instructions
> --
> 2.39.2
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
* Re: [PATCH v5 06/13] riscv: Improve zacas fully-ordered cmpxchg()
2024-08-18 6:35 ` [PATCH v5 06/13] riscv: Improve zacas fully-ordered cmpxchg() Alexandre Ghiti
@ 2024-08-21 14:26 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:26 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch,
Andrea Parri
On Sun, Aug 18, 2024 at 08:35:31AM GMT, Alexandre Ghiti wrote:
> The current fully-ordered cmpxchgXX() implementation results in:
>
> amocas.X.rl a5,a4,(s1)
> fence rw,rw
>
> This provides enough sync, but we can actually use the following better
> mapping instead:
>
> amocas.X.aqrl a5,a4,(s1)
>
> Suggested-by: Andrea Parri <andrea@rivosinc.com>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> ---
> arch/riscv/include/asm/cmpxchg.h | 92 ++++++++++++++++++++++----------
> 1 file changed, 64 insertions(+), 28 deletions(-)
>
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> index 1f4cd12e4664..5b2f95f7f310 100644
> --- a/arch/riscv/include/asm/cmpxchg.h
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -107,8 +107,10 @@
> * store NEW in MEM. Return the initial value in MEM. Success is
> * indicated by comparing RETURN with OLD.
> */
> -
> -#define __arch_cmpxchg_masked(sc_sfx, cas_sfx, prepend, append, r, p, o, n) \
> +#define __arch_cmpxchg_masked(sc_sfx, cas_sfx, \
> + sc_prepend, sc_append, \
> + cas_prepend, cas_append, \
> + r, p, o, n) \
> ({ \
> if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) && \
> IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) && \
> @@ -117,9 +119,9 @@
> r = o; \
> \
> __asm__ __volatile__ ( \
> - prepend \
> + cas_prepend \
> " amocas" cas_sfx " %0, %z2, %1\n" \
> - append \
> + cas_append \
> : "+&r" (r), "+A" (*(p)) \
> : "rJ" (n) \
> : "memory"); \
> @@ -134,7 +136,7 @@
> ulong __rc; \
> \
> __asm__ __volatile__ ( \
> - prepend \
> + sc_prepend \
> "0: lr.w %0, %2\n" \
> " and %1, %0, %z5\n" \
> " bne %1, %z3, 1f\n" \
> @@ -142,7 +144,7 @@
> " or %1, %1, %z4\n" \
> " sc.w" sc_sfx " %1, %1, %2\n" \
> " bnez %1, 0b\n" \
> - append \
> + sc_append \
> "1:\n" \
> : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
> : "rJ" ((long)__oldx), "rJ" (__newx), \
> @@ -153,16 +155,19 @@
> } \
> })
>
> -#define __arch_cmpxchg(lr_sfx, sc_cas_sfx, prepend, append, r, p, co, o, n) \
> +#define __arch_cmpxchg(lr_sfx, sc_sfx, cas_sfx, \
> + sc_prepend, sc_append, \
> + cas_prepend, cas_append, \
> + r, p, co, o, n) \
> ({ \
> if (IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) && \
> riscv_has_extension_unlikely(RISCV_ISA_EXT_ZACAS)) { \
> r = o; \
> \
> __asm__ __volatile__ ( \
> - prepend \
> - " amocas" sc_cas_sfx " %0, %z2, %1\n" \
> - append \
> + cas_prepend \
> + " amocas" cas_sfx " %0, %z2, %1\n" \
> + cas_append \
> : "+&r" (r), "+A" (*(p)) \
> : "rJ" (n) \
> : "memory"); \
> @@ -170,12 +175,12 @@
> register unsigned int __rc; \
> \
> __asm__ __volatile__ ( \
> - prepend \
> + sc_prepend \
> "0: lr" lr_sfx " %0, %2\n" \
> " bne %0, %z3, 1f\n" \
> - " sc" sc_cas_sfx " %1, %z4, %2\n" \
> + " sc" sc_sfx " %1, %z4, %2\n" \
> " bnez %1, 0b\n" \
> - append \
> + sc_append \
> "1:\n" \
> : "=&r" (r), "=&r" (__rc), "+A" (*(p)) \
> : "rJ" (co o), "rJ" (n) \
> @@ -183,7 +188,9 @@
> } \
> })
>
> -#define _arch_cmpxchg(ptr, old, new, sc_cas_sfx, prepend, append) \
> +#define _arch_cmpxchg(ptr, old, new, sc_sfx, cas_sfx, \
> + sc_prepend, sc_append, \
> + cas_prepend, cas_append) \
> ({ \
> __typeof__(ptr) __ptr = (ptr); \
> __typeof__(*(__ptr)) __old = (old); \
> @@ -192,22 +199,28 @@
> \
> switch (sizeof(*__ptr)) { \
> case 1: \
> - __arch_cmpxchg_masked(sc_cas_sfx, ".b" sc_cas_sfx, \
> - prepend, append, \
> - __ret, __ptr, __old, __new); \
> + __arch_cmpxchg_masked(sc_sfx, ".b" cas_sfx, \
> + sc_prepend, sc_append, \
> + cas_prepend, cas_append, \
> + __ret, __ptr, __old, __new); \
> break; \
> case 2: \
> - __arch_cmpxchg_masked(sc_cas_sfx, ".h" sc_cas_sfx, \
> - prepend, append, \
> - __ret, __ptr, __old, __new); \
> + __arch_cmpxchg_masked(sc_sfx, ".h" cas_sfx, \
> + sc_prepend, sc_append, \
> + cas_prepend, cas_append, \
> + __ret, __ptr, __old, __new); \
> break; \
> case 4: \
> - __arch_cmpxchg(".w", ".w" sc_cas_sfx, prepend, append, \
> - __ret, __ptr, (long), __old, __new); \
> + __arch_cmpxchg(".w", ".w" sc_sfx, ".w" cas_sfx, \
> + sc_prepend, sc_append, \
> + cas_prepend, cas_append, \
> + __ret, __ptr, (long), __old, __new); \
> break; \
> case 8: \
> - __arch_cmpxchg(".d", ".d" sc_cas_sfx, prepend, append, \
> - __ret, __ptr, /**/, __old, __new); \
> + __arch_cmpxchg(".d", ".d" sc_sfx, ".d" cas_sfx, \
> + sc_prepend, sc_append, \
> + cas_prepend, cas_append, \
> + __ret, __ptr, /**/, __old, __new); \
> break; \
> default: \
> BUILD_BUG(); \
> @@ -215,17 +228,40 @@
> (__typeof__(*(__ptr)))__ret; \
> })
>
> +/*
> + * Those macros are there only to make the arch_cmpxchg_XXX() macros
These macros are here to improve the readability of the arch_cmpxchg_XXX()
macros.
> + * more readable.
> + */
> +#define SC_SFX(x) x
> +#define CAS_SFX(x) x
> +#define SC_PREPEND(x) x
> +#define SC_APPEND(x) x
> +#define CAS_PREPEND(x) x
> +#define CAS_APPEND(x) x
> +
> #define arch_cmpxchg_relaxed(ptr, o, n) \
> - _arch_cmpxchg((ptr), (o), (n), "", "", "")
> + _arch_cmpxchg((ptr), (o), (n), \
nit: no need for the () around the macro args when the arg is not used
in an expression.
> + SC_SFX(""), CAS_SFX(""), \
> + SC_PREPEND(""), SC_APPEND(""), \
> + CAS_PREPEND(""), CAS_APPEND(""))
>
> #define arch_cmpxchg_acquire(ptr, o, n) \
> - _arch_cmpxchg((ptr), (o), (n), "", "", RISCV_ACQUIRE_BARRIER)
> + _arch_cmpxchg((ptr), (o), (n), \
> + SC_SFX(""), CAS_SFX(""), \
> + SC_PREPEND(""), SC_APPEND(RISCV_ACQUIRE_BARRIER), \
> + CAS_PREPEND(""), CAS_APPEND(RISCV_ACQUIRE_BARRIER))
>
> #define arch_cmpxchg_release(ptr, o, n) \
> - _arch_cmpxchg((ptr), (o), (n), "", RISCV_RELEASE_BARRIER, "")
> + _arch_cmpxchg((ptr), (o), (n), \
> + SC_SFX(""), CAS_SFX(""), \
> + SC_PREPEND(RISCV_RELEASE_BARRIER), SC_APPEND(""), \
> + CAS_PREPEND(RISCV_RELEASE_BARRIER), CAS_APPEND(""))
>
> #define arch_cmpxchg(ptr, o, n) \
> - _arch_cmpxchg((ptr), (o), (n), ".rl", "", " fence rw, rw\n")
> + _arch_cmpxchg((ptr), (o), (n), \
> + SC_SFX(".rl"), CAS_SFX(".aqrl"), \
> + SC_PREPEND(""), SC_APPEND(RISCV_FULL_BARRIER), \
> + CAS_PREPEND(""), CAS_APPEND(""))
>
> #define arch_cmpxchg_local(ptr, o, n) \
> arch_cmpxchg_relaxed((ptr), (o), (n))
> --
> 2.39.2
>
Besides the comment wording and the nit about macro args,
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
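To see what the sfx/prepend/append plumbing above actually produces, this
is roughly what the fully ordered arch_cmpxchg() resolves to for a 32-bit
object after this patch. The expansion is done by hand and is illustrative
only; r, rc, p, o and n stand for the macro locals:

/* Zacas path: CAS_SFX(".aqrl"), no fence prepended or appended */
r = o;	/* amocas expects the expected old value in rd */
__asm__ __volatile__ (
        "       amocas.w.aqrl %0, %z2, %1\n"
        : "+&r" (r), "+A" (*p)
        : "rJ" (n)
        : "memory");

/* LR/SC fallback: SC_SFX(".rl") plus SC_APPEND(RISCV_FULL_BARRIER) */
__asm__ __volatile__ (
        "0:     lr.w     %0, %2\n"
        "       bne      %0, %z3, 1f\n"
        "       sc.w.rl  %1, %z4, %2\n"
        "       bnez     %1, 0b\n"
        "       fence    rw, rw\n"
        "1:\n"
        : "=&r" (r), "=&r" (rc), "+A" (*p)
        : "rJ" ((long)o), "rJ" (n)
        : "memory");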
* Re: [PATCH v5 09/13] asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock
2024-08-18 6:35 ` [PATCH v5 09/13] asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock Alexandre Ghiti
@ 2024-08-21 14:28 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:28 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch,
Guo Ren
On Sun, Aug 18, 2024 at 08:35:34AM GMT, Alexandre Ghiti wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> The arch_spinlock_t of qspinlock contains an atomic_t val, which
> satisfies the ticket-lock requirement. Thus, unify arch_spinlock_t
> into qspinlock_types.h. This is preparation for the combo spinlock
> that follows.
>
> Reviewed-by: Leonardo Bras <leobras@redhat.com>
> Suggested-by: Arnd Bergmann <arnd@arndb.de>
> Link: https://lore.kernel.org/linux-riscv/CAK8P3a2rnz9mQqhN6-e0CGUUv9rntRELFdxt_weiD7FxH7fkfQ@mail.gmail.com/
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> include/asm-generic/spinlock.h | 14 +++++++-------
> include/asm-generic/spinlock_types.h | 12 ++----------
> 2 files changed, 9 insertions(+), 17 deletions(-)
>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
* Re: [PATCH v5 10/13] asm-generic: ticket-lock: Add separate ticket-lock.h
2024-08-18 6:35 ` [PATCH v5 10/13] asm-generic: ticket-lock: Add separate ticket-lock.h Alexandre Ghiti
@ 2024-08-21 14:32 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:32 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch,
Guo Ren
On Sun, Aug 18, 2024 at 08:35:35AM GMT, Alexandre Ghiti wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> Add a separate ticket-lock.h so that multiple spinlock versions can be
> included and one of them selected at compile time or runtime.
>
> Reviewed-by: Leonardo Bras <leobras@redhat.com>
> Suggested-by: Arnd Bergmann <arnd@arndb.de>
> Link: https://lore.kernel.org/linux-riscv/CAK8P3a2rnz9mQqhN6-e0CGUUv9rntRELFdxt_weiD7FxH7fkfQ@mail.gmail.com/
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> include/asm-generic/spinlock.h | 87 +---------------------
> include/asm-generic/ticket_spinlock.h | 103 ++++++++++++++++++++++++++
> 2 files changed, 104 insertions(+), 86 deletions(-)
> create mode 100644 include/asm-generic/ticket_spinlock.h
>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
* Re: [PATCH v5 11/13] riscv: Add ISA extension parsing for Ziccrse
2024-08-18 6:35 ` [PATCH v5 11/13] riscv: Add ISA extension parsing for Ziccrse Alexandre Ghiti
@ 2024-08-21 14:33 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:33 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
On Sun, Aug 18, 2024 at 08:35:36AM GMT, Alexandre Ghiti wrote:
> Add support for parsing the Ziccrse extension in the riscv,isa string.
>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> ---
> arch/riscv/include/asm/hwcap.h | 1 +
> arch/riscv/kernel/cpufeature.c | 1 +
> 2 files changed, 2 insertions(+)
>
> diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
> index f5d53251c947..9e228b079a6d 100644
> --- a/arch/riscv/include/asm/hwcap.h
> +++ b/arch/riscv/include/asm/hwcap.h
> @@ -93,6 +93,7 @@
> #define RISCV_ISA_EXT_ZCMOP 84
> #define RISCV_ISA_EXT_ZAWRS 85
> #define RISCV_ISA_EXT_ZABHA 86
> +#define RISCV_ISA_EXT_ZICCRSE 87
>
> #define RISCV_ISA_EXT_XLINUXENVCFG 127
>
> diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> index 67ebcc4c9424..ea9c255bbe3d 100644
> --- a/arch/riscv/kernel/cpufeature.c
> +++ b/arch/riscv/kernel/cpufeature.c
> @@ -314,6 +314,7 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
> riscv_ext_zicbom_validate),
> __RISCV_ISA_EXT_SUPERSET_VALIDATE(zicboz, RISCV_ISA_EXT_ZICBOZ, riscv_xlinuxenvcfg_exts,
> riscv_ext_zicboz_validate),
> + __RISCV_ISA_EXT_DATA(ziccrse, RISCV_ISA_EXT_ZICCRSE),
> __RISCV_ISA_EXT_DATA(zicntr, RISCV_ISA_EXT_ZICNTR),
> __RISCV_ISA_EXT_DATA(zicond, RISCV_ISA_EXT_ZICOND),
> __RISCV_ISA_EXT_DATA(zicsr, RISCV_ISA_EXT_ZICSR),
> --
> 2.39.2
>
>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
* Re: [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description
2024-08-18 6:35 ` [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description Alexandre Ghiti
2024-08-18 22:40 ` Conor Dooley
@ 2024-08-21 14:35 ` Andrew Jones
1 sibling, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:35 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
On Sun, Aug 18, 2024 at 08:35:37AM GMT, Alexandre Ghiti wrote:
> Add a description for the Ziccrse ISA extension, which was ratified in
> the riscv profiles specification v1.0.
>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> Reviewed-by: Guo Ren <guoren@kernel.org>
> ---
> Documentation/devicetree/bindings/riscv/extensions.yaml | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/riscv/extensions.yaml b/Documentation/devicetree/bindings/riscv/extensions.yaml
> index a63578b95c4a..4f174c4c08ff 100644
> --- a/Documentation/devicetree/bindings/riscv/extensions.yaml
> +++ b/Documentation/devicetree/bindings/riscv/extensions.yaml
> @@ -289,6 +289,12 @@ properties:
> in commit 64074bc ("Update version numbers for Zfh/Zfinx") of
> riscv-isa-manual.
>
> + - const: ziccrse
> + description:
> + The standard Ziccrse extension which provides forward progress
> + guarantee on LR/SC sequences, as ratified in commit b1d806605f87
> + ("Updated to ratified state.") of the riscv profiles specification.
> +
> - const: zk
> description:
> The standard Zk Standard Scalar cryptography extension as ratified
> --
> 2.39.2
>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
* Re: [PATCH v5 13/13] riscv: Add qspinlock support
2024-08-18 6:35 ` [PATCH v5 13/13] riscv: Add qspinlock support Alexandre Ghiti
@ 2024-08-21 14:51 ` Andrew Jones
2024-08-28 8:16 ` Alexandre Ghiti
0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2024-08-21 14:51 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
On Sun, Aug 18, 2024 at 08:35:38AM GMT, Alexandre Ghiti wrote:
> In order to produce a generic kernel, a user can select
> CONFIG_RISCV_COMBO_SPINLOCKS, which will fall back at runtime to the
> ticket spinlock implementation if Zabha or Ziccrse are not present.
>
> Note that we can't use alternatives here because the discovery of
> extensions is done too late, and we need to start with the qspinlock
> implementation because the ticket spinlock implementation would pollute
> the spinlock value. So let's use static keys.
>
> This is largely based on Guo's work and Leonardo's reviews at [1].
>
> Link: https://lore.kernel.org/linux-riscv/20231225125847.2778638-1-guoren@kernel.org/ [1]
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> ---
> .../locking/queued-spinlocks/arch-support.txt | 2 +-
> arch/riscv/Kconfig | 34 ++++++++++++++
> arch/riscv/include/asm/Kbuild | 4 +-
> arch/riscv/include/asm/spinlock.h | 47 +++++++++++++++++++
> arch/riscv/kernel/setup.c | 37 +++++++++++++++
> include/asm-generic/qspinlock.h | 2 +
> include/asm-generic/ticket_spinlock.h | 2 +
> 7 files changed, 126 insertions(+), 2 deletions(-)
> create mode 100644 arch/riscv/include/asm/spinlock.h
>
> diff --git a/Documentation/features/locking/queued-spinlocks/arch-support.txt b/Documentation/features/locking/queued-spinlocks/arch-support.txt
> index 22f2990392ff..cf26042480e2 100644
> --- a/Documentation/features/locking/queued-spinlocks/arch-support.txt
> +++ b/Documentation/features/locking/queued-spinlocks/arch-support.txt
> @@ -20,7 +20,7 @@
> | openrisc: | ok |
> | parisc: | TODO |
> | powerpc: | ok |
> - | riscv: | TODO |
> + | riscv: | ok |
> | s390: | TODO |
> | sh: | TODO |
> | sparc: | ok |
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index ef55ab94027e..201d0669db7f 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -79,6 +79,7 @@ config RISCV
> select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
> select ARCH_WANTS_NO_INSTR
> select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
> + select ARCH_WEAK_RELEASE_ACQUIRE if ARCH_USE_QUEUED_SPINLOCKS
Hi Alex,
Did you get a chance to experiment with this as suggested by Andrea?
> select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
> select BUILDTIME_TABLE_SORT if MMU
> select CLINT_TIMER if RISCV_M_MODE
> @@ -488,6 +489,39 @@ config NODES_SHIFT
> Specify the maximum number of NUMA Nodes available on the target
> system. Increases memory reserved to accommodate various tables.
>
> +choice
> + prompt "RISC-V spinlock type"
> + default RISCV_COMBO_SPINLOCKS
> +
> +config RISCV_TICKET_SPINLOCKS
> + bool "Using ticket spinlock"
> +
> +config RISCV_QUEUED_SPINLOCKS
> + bool "Using queued spinlock"
> + depends on SMP && MMU && NONPORTABLE
> + select ARCH_USE_QUEUED_SPINLOCKS
> + help
> +	  The queued spinlock implementation requires the forward progress
> +	  guarantee of cmpxchg()/xchg() atomic operations: CAS with Zabha or
> +	  LR/SC with Ziccrse provides such a guarantee.
> +
> +	  Select this if and only if Zabha or Ziccrse is available on your
> +	  platform; RISCV_QUEUED_SPINLOCKS must not be selected for platforms
> +	  without one of those extensions.
> +
> + If unsure, select RISCV_COMBO_SPINLOCKS, which will use qspinlocks
> + when supported and otherwise ticket spinlocks.
> +
> +config RISCV_COMBO_SPINLOCKS
> + bool "Using combo spinlock"
> + depends on SMP && MMU
> + select ARCH_USE_QUEUED_SPINLOCKS
> + help
> + Embed both queued spinlock and ticket lock so that the spinlock
> + implementation can be chosen at runtime.
> +
> +endchoice
> +
> config RISCV_ALTERNATIVE
> bool
> depends on !XIP_KERNEL
> diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
> index 5c589770f2a8..1c2618c964f0 100644
> --- a/arch/riscv/include/asm/Kbuild
> +++ b/arch/riscv/include/asm/Kbuild
> @@ -5,10 +5,12 @@ syscall-y += syscall_table_64.h
> generic-y += early_ioremap.h
> generic-y += flat.h
> generic-y += kvm_para.h
> +generic-y += mcs_spinlock.h
> generic-y += parport.h
> -generic-y += spinlock.h
> generic-y += spinlock_types.h
> +generic-y += ticket_spinlock.h
> generic-y += qrwlock.h
> generic-y += qrwlock_types.h
> +generic-y += qspinlock.h
> generic-y += user.h
> generic-y += vmlinux.lds.h
> diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
> new file mode 100644
> index 000000000000..e5121b89acea
> --- /dev/null
> +++ b/arch/riscv/include/asm/spinlock.h
> @@ -0,0 +1,47 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef __ASM_RISCV_SPINLOCK_H
> +#define __ASM_RISCV_SPINLOCK_H
> +
> +#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
> +#define _Q_PENDING_LOOPS (1 << 9)
> +
> +#define __no_arch_spinlock_redefine
> +#include <asm/ticket_spinlock.h>
> +#include <asm/qspinlock.h>
> +#include <asm/jump_label.h>
> +
> +/*
> + * TODO: Use an alternative instead of a static key when we are able to parse
> + * the extensions string earlier in the boot process.
> + */
> +DECLARE_STATIC_KEY_TRUE(qspinlock_key);
> +
> +#define SPINLOCK_BASE_DECLARE(op, type, type_lock) \
> +static __always_inline type arch_spin_##op(type_lock lock) \
> +{ \
> + if (static_branch_unlikely(&qspinlock_key)) \
> + return queued_spin_##op(lock); \
> + return ticket_spin_##op(lock); \
> +}
I guess there were still a couple of questions about the kernel size
impact of this.
> +
> +SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
> +SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
> +SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
> +SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
> +SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
> +SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
> +
> +#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS)
> +
> +#include <asm/qspinlock.h>
> +
> +#else
> +
> +#include <asm/ticket_spinlock.h>
> +
> +#endif
> +
> +#include <asm/qrwlock.h>
> +
> +#endif /* __ASM_RISCV_SPINLOCK_H */
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index a2cde65b69e9..438e4f6ad2ad 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -244,6 +244,42 @@ static void __init parse_dtb(void)
> #endif
> }
>
> +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
> +DEFINE_STATIC_KEY_TRUE(qspinlock_key);
> +EXPORT_SYMBOL(qspinlock_key);
> +#endif
> +
> +static void __init riscv_spinlock_init(void)
> +{
> + char *using_ext = NULL;
> +
> + if (IS_ENABLED(CONFIG_RISCV_TICKET_SPINLOCKS)) {
> + pr_info("Ticket spinlock: enabled\n");
> + return;
> + }
> +
> + if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&
> + IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) &&
> + riscv_isa_extension_available(NULL, ZABHA) &&
> + riscv_isa_extension_available(NULL, ZACAS)) {
> + using_ext = "using Zabha";
> + } else if (riscv_isa_extension_available(NULL, ZICCRSE)) {
> + using_ext = "using Ziccrse";
> + }
> +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
> + else {
> + static_branch_disable(&qspinlock_key);
> + pr_info("Ticket spinlock: enabled\n");
> + return;
> + }
> +#endif
> +
> + if (!using_ext)
> +		pr_err("Queued spinlock without Zabha or Ziccrse\n");
> + else
> + pr_info("Queued spinlock %s: enabled\n", using_ext);
> +}
> +
> extern void __init init_rt_signal_env(void);
>
> void __init setup_arch(char **cmdline_p)
> @@ -297,6 +333,7 @@ void __init setup_arch(char **cmdline_p)
> riscv_set_dma_cache_alignment();
>
> riscv_user_isa_enable();
> + riscv_spinlock_init();
> }
>
> bool arch_cpu_is_hotpluggable(int cpu)
> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> index 0655aa5b57b2..bf47cca2c375 100644
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -136,6 +136,7 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
> }
> #endif
>
> +#ifndef __no_arch_spinlock_redefine
> /*
> * Remapping spinlock architecture specific functions to the corresponding
> * queued spinlock functions.
> @@ -146,5 +147,6 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
> #define arch_spin_lock(l) queued_spin_lock(l)
> #define arch_spin_trylock(l) queued_spin_trylock(l)
> #define arch_spin_unlock(l) queued_spin_unlock(l)
> +#endif
>
> #endif /* __ASM_GENERIC_QSPINLOCK_H */
> diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
> index cfcff22b37b3..325779970d8a 100644
> --- a/include/asm-generic/ticket_spinlock.h
> +++ b/include/asm-generic/ticket_spinlock.h
> @@ -89,6 +89,7 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
> return (s16)((val >> 16) - (val & 0xffff)) > 1;
> }
>
> +#ifndef __no_arch_spinlock_redefine
> /*
> * Remapping spinlock architecture specific functions to the corresponding
> * ticket spinlock functions.
> @@ -99,5 +100,6 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
> #define arch_spin_lock(l) ticket_spin_lock(l)
> #define arch_spin_trylock(l) ticket_spin_trylock(l)
> #define arch_spin_unlock(l) ticket_spin_unlock(l)
> +#endif
>
> #endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
> --
> 2.39.2
>
The patch looks good to me, so
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
It'd still be good to hear more about ARCH_WEAK_RELEASE_ACQUIRE and the
kernel size though.
Thanks,
drew
* Re: [PATCH v5 13/13] riscv: Add qspinlock support
2024-08-21 14:51 ` Andrew Jones
@ 2024-08-28 8:16 ` Alexandre Ghiti
2024-08-28 9:06 ` Andrew Jones
0 siblings, 1 reply; 26+ messages in thread
From: Alexandre Ghiti @ 2024-08-28 8:16 UTC (permalink / raw)
To: Andrew Jones
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
Hi drew,
On Wed, Aug 21, 2024 at 4:51 PM Andrew Jones <ajones@ventanamicro.com> wrote:
>
> On Sun, Aug 18, 2024 at 08:35:38AM GMT, Alexandre Ghiti wrote:
> > In order to produce a generic kernel, a user can select
> > CONFIG_RISCV_COMBO_SPINLOCKS, which will fall back at runtime to the
> > ticket spinlock implementation if Zabha or Ziccrse are not present.
> >
> > Note that we can't use alternatives here because the discovery of
> > extensions is done too late, and we need to start with the qspinlock
> > implementation because the ticket spinlock implementation would pollute
> > the spinlock value. So let's use static keys.
> >
> > This is largely based on Guo's work and Leonardo's reviews at [1].
> >
> > Link: https://lore.kernel.org/linux-riscv/20231225125847.2778638-1-guoren@kernel.org/ [1]
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> > ---
> > .../locking/queued-spinlocks/arch-support.txt | 2 +-
> > arch/riscv/Kconfig | 34 ++++++++++++++
> > arch/riscv/include/asm/Kbuild | 4 +-
> > arch/riscv/include/asm/spinlock.h | 47 +++++++++++++++++++
> > arch/riscv/kernel/setup.c | 37 +++++++++++++++
> > include/asm-generic/qspinlock.h | 2 +
> > include/asm-generic/ticket_spinlock.h | 2 +
> > 7 files changed, 126 insertions(+), 2 deletions(-)
> > create mode 100644 arch/riscv/include/asm/spinlock.h
> >
> > diff --git a/Documentation/features/locking/queued-spinlocks/arch-support.txt b/Documentation/features/locking/queued-spinlocks/arch-support.txt
> > index 22f2990392ff..cf26042480e2 100644
> > --- a/Documentation/features/locking/queued-spinlocks/arch-support.txt
> > +++ b/Documentation/features/locking/queued-spinlocks/arch-support.txt
> > @@ -20,7 +20,7 @@
> > | openrisc: | ok |
> > | parisc: | TODO |
> > | powerpc: | ok |
> > - | riscv: | TODO |
> > + | riscv: | ok |
> > | s390: | TODO |
> > | sh: | TODO |
> > | sparc: | ok |
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index ef55ab94027e..201d0669db7f 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -79,6 +79,7 @@ config RISCV
> > select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
> > select ARCH_WANTS_NO_INSTR
> > select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
> > + select ARCH_WEAK_RELEASE_ACQUIRE if ARCH_USE_QUEUED_SPINLOCKS
>
> Hi Alex,
>
> Did you get a chance to experiment with this as suggested by Andrea?
We talked about it and that will be a separate patch (not sure when
though, as I don't feel really comfortable sending what Andrea
suggested).
>
> > select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
> > select BUILDTIME_TABLE_SORT if MMU
> > select CLINT_TIMER if RISCV_M_MODE
> > @@ -488,6 +489,39 @@ config NODES_SHIFT
> > Specify the maximum number of NUMA Nodes available on the target
> > system. Increases memory reserved to accommodate various tables.
> >
> > +choice
> > + prompt "RISC-V spinlock type"
> > + default RISCV_COMBO_SPINLOCKS
> > +
> > +config RISCV_TICKET_SPINLOCKS
> > + bool "Using ticket spinlock"
> > +
> > +config RISCV_QUEUED_SPINLOCKS
> > + bool "Using queued spinlock"
> > + depends on SMP && MMU && NONPORTABLE
> > + select ARCH_USE_QUEUED_SPINLOCKS
> > + help
> > +	  The queued spinlock implementation requires the forward progress
> > +	  guarantee of cmpxchg()/xchg() atomic operations: CAS with Zabha or
> > +	  LR/SC with Ziccrse provides such a guarantee.
> > +
> > +	  Select this if and only if Zabha or Ziccrse is available on your
> > +	  platform; RISCV_QUEUED_SPINLOCKS must not be selected for platforms
> > +	  without one of those extensions.
> > +
> > + If unsure, select RISCV_COMBO_SPINLOCKS, which will use qspinlocks
> > + when supported and otherwise ticket spinlocks.
> > +
> > +config RISCV_COMBO_SPINLOCKS
> > + bool "Using combo spinlock"
> > + depends on SMP && MMU
> > + select ARCH_USE_QUEUED_SPINLOCKS
> > + help
> > + Embed both queued spinlock and ticket lock so that the spinlock
> > + implementation can be chosen at runtime.
> > +
> > +endchoice
> > +
> > config RISCV_ALTERNATIVE
> > bool
> > depends on !XIP_KERNEL
> > diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
> > index 5c589770f2a8..1c2618c964f0 100644
> > --- a/arch/riscv/include/asm/Kbuild
> > +++ b/arch/riscv/include/asm/Kbuild
> > @@ -5,10 +5,12 @@ syscall-y += syscall_table_64.h
> > generic-y += early_ioremap.h
> > generic-y += flat.h
> > generic-y += kvm_para.h
> > +generic-y += mcs_spinlock.h
> > generic-y += parport.h
> > -generic-y += spinlock.h
> > generic-y += spinlock_types.h
> > +generic-y += ticket_spinlock.h
> > generic-y += qrwlock.h
> > generic-y += qrwlock_types.h
> > +generic-y += qspinlock.h
> > generic-y += user.h
> > generic-y += vmlinux.lds.h
> > diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
> > new file mode 100644
> > index 000000000000..e5121b89acea
> > --- /dev/null
> > +++ b/arch/riscv/include/asm/spinlock.h
> > @@ -0,0 +1,47 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +#ifndef __ASM_RISCV_SPINLOCK_H
> > +#define __ASM_RISCV_SPINLOCK_H
> > +
> > +#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
> > +#define _Q_PENDING_LOOPS (1 << 9)
> > +
> > +#define __no_arch_spinlock_redefine
> > +#include <asm/ticket_spinlock.h>
> > +#include <asm/qspinlock.h>
> > +#include <asm/jump_label.h>
> > +
> > +/*
> > + * TODO: Use an alternative instead of a static key when we are able to parse
> > + * the extensions string earlier in the boot process.
> > + */
> > +DECLARE_STATIC_KEY_TRUE(qspinlock_key);
> > +
> > +#define SPINLOCK_BASE_DECLARE(op, type, type_lock) \
> > +static __always_inline type arch_spin_##op(type_lock lock) \
> > +{ \
> > + if (static_branch_unlikely(&qspinlock_key)) \
> > + return queued_spin_##op(lock); \
> > + return ticket_spin_##op(lock); \
> > +}
>
> I guess there were still a couple of questions about the kernel size
> impact of this.
>
> > +
> > +SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
> > +SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
> > +SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
> > +SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
> > +SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
> > +SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
> > +
> > +#elif defined(CONFIG_RISCV_QUEUED_SPINLOCKS)
> > +
> > +#include <asm/qspinlock.h>
> > +
> > +#else
> > +
> > +#include <asm/ticket_spinlock.h>
> > +
> > +#endif
> > +
> > +#include <asm/qrwlock.h>
> > +
> > +#endif /* __ASM_RISCV_SPINLOCK_H */
> > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > index a2cde65b69e9..438e4f6ad2ad 100644
> > --- a/arch/riscv/kernel/setup.c
> > +++ b/arch/riscv/kernel/setup.c
> > @@ -244,6 +244,42 @@ static void __init parse_dtb(void)
> > #endif
> > }
> >
> > +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
> > +DEFINE_STATIC_KEY_TRUE(qspinlock_key);
> > +EXPORT_SYMBOL(qspinlock_key);
> > +#endif
> > +
> > +static void __init riscv_spinlock_init(void)
> > +{
> > + char *using_ext = NULL;
> > +
> > + if (IS_ENABLED(CONFIG_RISCV_TICKET_SPINLOCKS)) {
> > + pr_info("Ticket spinlock: enabled\n");
> > + return;
> > + }
> > +
> > + if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&
> > + IS_ENABLED(CONFIG_RISCV_ISA_ZACAS) &&
> > + riscv_isa_extension_available(NULL, ZABHA) &&
> > + riscv_isa_extension_available(NULL, ZACAS)) {
> > + using_ext = "using Zabha";
> > + } else if (riscv_isa_extension_available(NULL, ZICCRSE)) {
> > + using_ext = "using Ziccrse";
> > + }
> > +#if defined(CONFIG_RISCV_COMBO_SPINLOCKS)
> > + else {
> > + static_branch_disable(&qspinlock_key);
> > + pr_info("Ticket spinlock: enabled\n");
> > + return;
> > + }
> > +#endif
> > +
> > + if (!using_ext)
> > +		pr_err("Queued spinlock without Zabha or Ziccrse\n");
> > + else
> > + pr_info("Queued spinlock %s: enabled\n", using_ext);
> > +}
> > +
> > extern void __init init_rt_signal_env(void);
> >
> > void __init setup_arch(char **cmdline_p)
> > @@ -297,6 +333,7 @@ void __init setup_arch(char **cmdline_p)
> > riscv_set_dma_cache_alignment();
> >
> > riscv_user_isa_enable();
> > + riscv_spinlock_init();
> > }
> >
> > bool arch_cpu_is_hotpluggable(int cpu)
> > diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> > index 0655aa5b57b2..bf47cca2c375 100644
> > --- a/include/asm-generic/qspinlock.h
> > +++ b/include/asm-generic/qspinlock.h
> > @@ -136,6 +136,7 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
> > }
> > #endif
> >
> > +#ifndef __no_arch_spinlock_redefine
> > /*
> > * Remapping spinlock architecture specific functions to the corresponding
> > * queued spinlock functions.
> > @@ -146,5 +147,6 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
> > #define arch_spin_lock(l) queued_spin_lock(l)
> > #define arch_spin_trylock(l) queued_spin_trylock(l)
> > #define arch_spin_unlock(l) queued_spin_unlock(l)
> > +#endif
> >
> > #endif /* __ASM_GENERIC_QSPINLOCK_H */
> > diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
> > index cfcff22b37b3..325779970d8a 100644
> > --- a/include/asm-generic/ticket_spinlock.h
> > +++ b/include/asm-generic/ticket_spinlock.h
> > @@ -89,6 +89,7 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
> > return (s16)((val >> 16) - (val & 0xffff)) > 1;
> > }
> >
> > +#ifndef __no_arch_spinlock_redefine
> > /*
> > * Remapping spinlock architecture specific functions to the corresponding
> > * ticket spinlock functions.
> > @@ -99,5 +100,6 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
> > #define arch_spin_lock(l) ticket_spin_lock(l)
> > #define arch_spin_trylock(l) ticket_spin_trylock(l)
> > #define arch_spin_unlock(l) ticket_spin_unlock(l)
> > +#endif
> >
> > #endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
> > --
> > 2.39.2
> >
>
> The patch looks good to me, so
>
> Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
>
> It'd still be good to hear more about ARCH_WEAK_RELEASE_ACQUIRE and the
> kernel size though.
>
I sent the kernel size impact using -Os as asked, and
ARCH_WEAK_RELEASE_ACQUIRE should be handled by Andrea.
Thanks for all the reviews, drew, the patchset is way better now! I
won't respin a new version unless you insist, since there is only a
minor comment change requested in "riscv: Improve zacas fully-ordered
cmpxchg()".
Thanks again,
Alex
> Thanks,
> drew
* Re: [PATCH v5 13/13] riscv: Add qspinlock support
2024-08-28 8:16 ` Alexandre Ghiti
@ 2024-08-28 9:06 ` Andrew Jones
0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2024-08-28 9:06 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Conor Dooley, Rob Herring, Krzysztof Kozlowski, Andrea Parri,
Nathan Chancellor, Peter Zijlstra, Ingo Molnar, Will Deacon,
Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras, Guo Ren,
linux-doc, devicetree, linux-kernel, linux-riscv, linux-arch
On Wed, Aug 28, 2024 at 10:16:40AM GMT, Alexandre Ghiti wrote:
...
> I sent the kernel size impact using -Os as asked and
> ARCH_WEAK_RELEASE_ACQUIRE should be handled by Andrea.
Sounds good. Thanks.
>
> Thanks for all the reviews drew, the patchset is way better now! I
> won't respin a new version since there is only a minor comment change
> requested in "riscv: Improve zacas fully-ordered cmpxchg()" unless you
> insist.
I don't insist. It might be nice to change the comment text at merge,
though, since it's safe to do with only compile-testing and it would
improve readability.
Thanks,
drew
* Re: [PATCH v5 00/13] Zacas/Zabha support and qspinlocks
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
` (12 preceding siblings ...)
2024-08-18 6:35 ` [PATCH v5 13/13] riscv: Add qspinlock support Alexandre Ghiti
@ 2024-11-13 15:12 ` patchwork-bot+linux-riscv
13 siblings, 0 replies; 26+ messages in thread
From: patchwork-bot+linux-riscv @ 2024-11-13 15:12 UTC (permalink / raw)
To: Alexandre Ghiti
Cc: linux-riscv, corbet, paul.walmsley, palmer, aou, conor, robh,
krzk+dt, parri.andrea, nathan, peterz, mingo, will, longman,
boqun.feng, arnd, leobras, guoren, linux-doc, devicetree,
linux-kernel, linux-arch
Hello:
This series was applied to riscv/linux.git (for-next)
by Palmer Dabbelt <palmer@rivosinc.com>:
On Sun, 18 Aug 2024 08:35:25 +0200 you wrote:
> This implements [cmp]xchgXX() macros using Zacas and Zabha extensions
> and finally uses those newly introduced macros to add support for
> qspinlocks: note that this implementation of qspinlocks satisfies the
> forward progress guarantee.
>
> It also uses Ziccrse to provide the qspinlock implementation.
>
> [...]
Here is the summary with links:
- [v5,01/13] riscv: Move cpufeature.h macros into their own header
https://git.kernel.org/riscv/c/010e12aa4925
- [v5,02/13] riscv: Do not fail to build on byte/halfword operations with Zawrs
https://git.kernel.org/riscv/c/af042c457db0
- [v5,03/13] riscv: Implement cmpxchg32/64() using Zacas
https://git.kernel.org/riscv/c/38acdee32d23
- [v5,04/13] dt-bindings: riscv: Add Zabha ISA extension description
(no matching commit)
- [v5,05/13] riscv: Implement cmpxchg8/16() using Zabha
(no matching commit)
- [v5,06/13] riscv: Improve zacas fully-ordered cmpxchg()
(no matching commit)
- [v5,07/13] riscv: Implement arch_cmpxchg128() using Zacas
https://git.kernel.org/riscv/c/f7bd2be7663c
- [v5,08/13] riscv: Implement xchg8/16() using Zabha
https://git.kernel.org/riscv/c/97ddab7fbea8
- [v5,09/13] asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock
https://git.kernel.org/riscv/c/cbe82e140bb7
- [v5,10/13] asm-generic: ticket-lock: Add separate ticket-lock.h
https://git.kernel.org/riscv/c/22c33321e260
- [v5,11/13] riscv: Add ISA extension parsing for Ziccrse
(no matching commit)
- [v5,12/13] dt-bindings: riscv: Add Ziccrse ISA extension description
https://git.kernel.org/riscv/c/447b2afbcde1
- [v5,13/13] riscv: Add qspinlock support
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
Thread overview: 26+ messages
2024-08-18 6:35 [PATCH v5 00/13] Zacas/Zabha support and qspinlocks Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 01/13] riscv: Move cpufeature.h macros into their own header Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 02/13] riscv: Do not fail to build on byte/halfword operations with Zawrs Alexandre Ghiti
2024-08-21 14:11 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 03/13] riscv: Implement cmpxchg32/64() using Zacas Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 04/13] dt-bindings: riscv: Add Zabha ISA extension description Alexandre Ghiti
2024-08-21 14:16 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 05/13] riscv: Implement cmpxchg8/16() using Zabha Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 06/13] riscv: Improve zacas fully-ordered cmpxchg() Alexandre Ghiti
2024-08-21 14:26 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 07/13] riscv: Implement arch_cmpxchg128() using Zacas Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 08/13] riscv: Implement xchg8/16() using Zabha Alexandre Ghiti
2024-08-18 6:35 ` [PATCH v5 09/13] asm-generic: ticket-lock: Reuse arch_spinlock_t of qspinlock Alexandre Ghiti
2024-08-21 14:28 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 10/13] asm-generic: ticket-lock: Add separate ticket-lock.h Alexandre Ghiti
2024-08-21 14:32 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 11/13] riscv: Add ISA extension parsing for Ziccrse Alexandre Ghiti
2024-08-21 14:33 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 12/13] dt-bindings: riscv: Add Ziccrse ISA extension description Alexandre Ghiti
2024-08-18 22:40 ` Conor Dooley
2024-08-21 14:35 ` Andrew Jones
2024-08-18 6:35 ` [PATCH v5 13/13] riscv: Add qspinlock support Alexandre Ghiti
2024-08-21 14:51 ` Andrew Jones
2024-08-28 8:16 ` Alexandre Ghiti
2024-08-28 9:06 ` Andrew Jones
2024-11-13 15:12 ` [PATCH v5 00/13] Zacas/Zabha support and qspinlocks patchwork-bot+linux-riscv