public inbox for linux-doc@vger.kernel.org
* [PATCH 00/26] locking/atomic: restructuring + kerneldoc
@ 2023-05-22 12:24 Mark Rutland
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Hi all,

These patches restructure the generated atomic headers, and add
kerneldoc comments for all of the generic atomic{,64,_long}_t
operations. This is intended to supersede Paul's earlier attempt:

  https://lore.kernel.org/lkml/19135936-06d7-4705-8bc8-bb31c2a478ca@paulmck-laptop/

The core headers now generate raw_atomic*() operations as the
fundamental instrumentation-safe atomics, with the arch_atomic*()
functions being an implementation detail that shouldn't be used
directly.

Each raw_atomic*() op is given a single definition with all related
ifdeffery inside, e.g.

| /**
|  * raw_atomic_inc_return_acquire() - atomic increment with acquire ordering
|  * @v: pointer to atomic_t
|  *
|  * Atomically updates @v to (@v + 1) with acquire ordering.
|  *
|  * Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere.
|  *
|  * Return: the new value of @v.
|  */
| static __always_inline int
| raw_atomic_inc_return_acquire(atomic_t *v)
| {
| #if defined(arch_atomic_inc_return_acquire)
| 	return arch_atomic_inc_return_acquire(v);
| #elif defined(arch_atomic_inc_return_relaxed)
| 	int ret = arch_atomic_inc_return_relaxed(v);
| 	__atomic_acquire_fence();
| 	return ret;
| #elif defined(arch_atomic_inc_return)
| 	return arch_atomic_inc_return(v);
| #else
| 	return raw_atomic_add_return_acquire(1, v);
| #endif
| }

Similarly, the regular atomic*() ops (which already have a single
definition) are given kerneldoc comments, e.g.

| /**
|  * atomic_inc_return_acquire() - atomic increment with acquire ordering
|  * @v: pointer to atomic_t
|  *
|  * Atomically updates @v to (@v + 1) with acquire ordering.
|  *
|  * Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there.
|  *
|  * Return: the new value of @v.
|  */
| static __always_inline int
| atomic_inc_return_acquire(atomic_t *v)
| {
| 	instrument_atomic_read_write(v, sizeof(*v));
| 	return raw_atomic_inc_return_acquire(v);
| }

The kerneldoc comments themselves are built from templates, as the
fallbacks are, which should allow them to be extended in future if
necessary.
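
As an illustration, a kerneldoc template can follow the same shell
heredoc style as the existing fallback templates. The sketch below is
hypothetical (the variable names are illustrative rather than the ones
the scripts under scripts/atomic/kerneldoc/ necessarily use):

| cat <<EOF
| /**
|  * ${atomic}_inc_return${order}() - atomic increment with ${order} ordering
|  * @v: pointer to ${atomic}_t
|  *
|  * Atomically updates @v to (@v + 1) with ${order} ordering.
|  *
|  * Return: the new value of @v.
|  */
| EOF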

I've compile-tested this for a number of architectures and
configurations, but as usual this probably needs to see some testing by
build robots.

The patches are based on the tip tree's locking/core branch,
specifically commit:

  3cf363a4daf359e8 ("s390/cpum_sf: Convert to cmpxchg128()")

Thanks,
Mark.

Mark Rutland (25):
  locking/atomic: arm: fix sync ops
  locking/atomic: remove fallback comments
  locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
  locking/atomic: make atomic*_{cmp,}xchg optional
  locking/atomic: arc: add preprocessor symbols
  locking/atomic: arm: add preprocessor symbols
  locking/atomic: hexagon: add preprocessor symbols
  locking/atomic: m68k: add preprocessor symbols
  locking/atomic: parisc: add preprocessor symbols
  locking/atomic: sh: add preprocessor symbols
  locking/atomic: sparc: add preprocessor symbols
  locking/atomic: x86: add preprocessor symbols
  locking/atomic: xtensa: add preprocessor symbols
  locking/atomic: scripts: remove bogus order parameter
  locking/atomic: scripts: remove leftover "${mult}"
  locking/atomic: scripts: factor out order template generation
  locking/atomic: scripts: add trivial raw_atomic*_<op>()
  locking/atomic: treewide: use raw_atomic*_<op>()
  locking/atomic: scripts: build raw_atomic_long*() directly
  locking/atomic: scripts: restructure fallback ifdeffery
  locking/atomic: scripts: split pfx/name/sfx/order
  locking/atomic: scripts: simplify raw_atomic_long*() definitions
  locking/atomic: scripts: simplify raw_atomic*() definitions
  locking/atomic: scripts: generate kerneldoc comments
  locking/atomic: treewide: delete arch_atomic_*() kerneldoc

Paul E. McKenney (1):
  locking/atomic: docs: Add atomic operations to the driver basic API
    documentation

 Documentation/driver-api/basics.rst          |    5 +-
 arch/alpha/include/asm/atomic.h              |   35 -
 arch/arc/include/asm/atomic-spinlock.h       |    9 +
 arch/arc/include/asm/atomic.h                |   24 -
 arch/arc/include/asm/atomic64-arcv2.h        |   19 +-
 arch/arm/include/asm/assembler.h             |   17 +
 arch/arm/include/asm/atomic.h                |   15 +-
 arch/arm/include/asm/sync_bitops.h           |   29 +-
 arch/arm/lib/bitops.h                        |   14 +-
 arch/arm/lib/testchangebit.S                 |    4 +
 arch/arm/lib/testclearbit.S                  |    4 +
 arch/arm/lib/testsetbit.S                    |    4 +
 arch/arm64/include/asm/atomic.h              |   28 -
 arch/csky/include/asm/atomic.h               |   35 -
 arch/hexagon/include/asm/atomic.h            |   69 +-
 arch/ia64/include/asm/atomic.h               |    7 -
 arch/loongarch/include/asm/atomic.h          |   56 -
 arch/m68k/include/asm/atomic.h               |   18 +-
 arch/mips/include/asm/atomic.h               |   11 -
 arch/openrisc/include/asm/atomic.h           |    3 -
 arch/parisc/include/asm/atomic.h             |   27 +-
 arch/powerpc/include/asm/atomic.h            |   24 -
 arch/powerpc/kernel/smp.c                    |   12 +-
 arch/riscv/include/asm/atomic.h              |   72 -
 arch/sh/include/asm/atomic-grb.h             |    9 +
 arch/sh/include/asm/atomic-irq.h             |    9 +
 arch/sh/include/asm/atomic-llsc.h            |    9 +
 arch/sh/include/asm/atomic.h                 |    3 -
 arch/sparc/include/asm/atomic_32.h           |   18 +-
 arch/sparc/include/asm/atomic_64.h           |   29 +-
 arch/x86/include/asm/atomic.h                |   87 -
 arch/x86/include/asm/atomic64_32.h           |   76 -
 arch/x86/include/asm/atomic64_64.h           |   81 -
 arch/x86/include/asm/cmpxchg_64.h            |    4 +
 arch/x86/kernel/alternative.c                |    4 +-
 arch/x86/kernel/cpu/mce/core.c               |   16 +-
 arch/x86/kernel/nmi.c                        |    2 +-
 arch/x86/kernel/pvclock.c                    |    4 +-
 arch/x86/kvm/x86.c                           |    2 +-
 arch/xtensa/include/asm/atomic.h             |   12 +-
 include/asm-generic/atomic.h                 |    3 -
 include/asm-generic/bitops/atomic.h          |   12 +-
 include/asm-generic/bitops/lock.h            |    8 +-
 include/linux/atomic/atomic-arch-fallback.h  | 5200 ++++++++++++------
 include/linux/atomic/atomic-instrumented.h   | 3484 ++++++++++--
 include/linux/atomic/atomic-long.h           | 2122 ++++---
 include/linux/context_tracking.h             |    4 +-
 include/linux/context_tracking_state.h       |    2 +-
 include/linux/cpumask.h                      |    2 +-
 include/linux/jump_label.h                   |    2 +-
 kernel/context_tracking.c                    |   12 +-
 kernel/sched/clock.c                         |    2 +-
 scripts/atomic/atomic-tbl.sh                 |  112 +-
 scripts/atomic/atomics.tbl                   |    2 +-
 scripts/atomic/fallbacks/acquire             |    4 -
 scripts/atomic/fallbacks/add_negative        |   14 +-
 scripts/atomic/fallbacks/add_unless          |   15 +-
 scripts/atomic/fallbacks/andnot              |    6 +-
 scripts/atomic/fallbacks/cmpxchg             |    3 +
 scripts/atomic/fallbacks/dec                 |    6 +-
 scripts/atomic/fallbacks/dec_and_test        |   14 +-
 scripts/atomic/fallbacks/dec_if_positive     |    8 +-
 scripts/atomic/fallbacks/dec_unless_positive |    8 +-
 scripts/atomic/fallbacks/fence               |    4 -
 scripts/atomic/fallbacks/fetch_add_unless    |   17 +-
 scripts/atomic/fallbacks/inc                 |    6 +-
 scripts/atomic/fallbacks/inc_and_test        |   14 +-
 scripts/atomic/fallbacks/inc_not_zero        |   13 +-
 scripts/atomic/fallbacks/inc_unless_negative |    8 +-
 scripts/atomic/fallbacks/read_acquire        |    6 +-
 scripts/atomic/fallbacks/release             |    4 -
 scripts/atomic/fallbacks/set_release         |    6 +-
 scripts/atomic/fallbacks/sub_and_test        |   15 +-
 scripts/atomic/fallbacks/try_cmpxchg         |    6 +-
 scripts/atomic/fallbacks/xchg                |    3 +
 scripts/atomic/gen-atomic-fallback.sh        |  264 +-
 scripts/atomic/gen-atomic-instrumented.sh    |   23 +-
 scripts/atomic/gen-atomic-long.sh            |   38 +-
 scripts/atomic/kerneldoc/add                 |   13 +
 scripts/atomic/kerneldoc/add_negative        |   13 +
 scripts/atomic/kerneldoc/add_unless          |   18 +
 scripts/atomic/kerneldoc/and                 |   13 +
 scripts/atomic/kerneldoc/andnot              |   13 +
 scripts/atomic/kerneldoc/cmpxchg             |   14 +
 scripts/atomic/kerneldoc/dec                 |   12 +
 scripts/atomic/kerneldoc/dec_and_test        |   12 +
 scripts/atomic/kerneldoc/dec_if_positive     |   12 +
 scripts/atomic/kerneldoc/dec_unless_positive |   12 +
 scripts/atomic/kerneldoc/inc                 |   12 +
 scripts/atomic/kerneldoc/inc_and_test        |   12 +
 scripts/atomic/kerneldoc/inc_not_zero        |   12 +
 scripts/atomic/kerneldoc/inc_unless_negative |   12 +
 scripts/atomic/kerneldoc/or                  |   13 +
 scripts/atomic/kerneldoc/read                |   12 +
 scripts/atomic/kerneldoc/set                 |   13 +
 scripts/atomic/kerneldoc/sub                 |   13 +
 scripts/atomic/kerneldoc/sub_and_test        |   13 +
 scripts/atomic/kerneldoc/try_cmpxchg         |   15 +
 scripts/atomic/kerneldoc/xchg                |   13 +
 scripts/atomic/kerneldoc/xor                 |   13 +
 100 files changed, 8975 insertions(+), 3688 deletions(-)
 create mode 100755 scripts/atomic/fallbacks/cmpxchg
 create mode 100755 scripts/atomic/fallbacks/xchg
 create mode 100644 scripts/atomic/kerneldoc/add
 create mode 100644 scripts/atomic/kerneldoc/add_negative
 create mode 100644 scripts/atomic/kerneldoc/add_unless
 create mode 100644 scripts/atomic/kerneldoc/and
 create mode 100644 scripts/atomic/kerneldoc/andnot
 create mode 100644 scripts/atomic/kerneldoc/cmpxchg
 create mode 100644 scripts/atomic/kerneldoc/dec
 create mode 100644 scripts/atomic/kerneldoc/dec_and_test
 create mode 100644 scripts/atomic/kerneldoc/dec_if_positive
 create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive
 create mode 100644 scripts/atomic/kerneldoc/inc
 create mode 100644 scripts/atomic/kerneldoc/inc_and_test
 create mode 100644 scripts/atomic/kerneldoc/inc_not_zero
 create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative
 create mode 100644 scripts/atomic/kerneldoc/or
 create mode 100644 scripts/atomic/kerneldoc/read
 create mode 100644 scripts/atomic/kerneldoc/set
 create mode 100644 scripts/atomic/kerneldoc/sub
 create mode 100644 scripts/atomic/kerneldoc/sub_and_test
 create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg
 create mode 100644 scripts/atomic/kerneldoc/xchg
 create mode 100644 scripts/atomic/kerneldoc/xor

-- 
2.30.2


* [PATCH 01/26] locking/atomic: arm: fix sync ops
@ 2023-05-22 12:24 Mark Rutland
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.
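
For background: the kernel's SMP barriers are elided at build time in
UP kernels, roughly as in the following simplified sketch (this is
illustrative, not the exact kernel definitions):

| #ifdef CONFIG_SMP
| #define smp_mb()	__smp_mb()	/* real hardware barrier */
| #else
| #define smp_mb()	barrier()	/* compiler barrier only */
| #endif

Hence the sync_*() ops, which must remain SMP-safe even on a UP kernel
(e.g. against other Xen domains on the same host), cannot simply be
built on the regular bitops.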

Fix this by defining sync ops with the required barriers.

Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.

Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/include/asm/assembler.h   | 17 +++++++++++++++++
 arch/arm/include/asm/sync_bitops.h | 29 +++++++++++++++++++++++++----
 arch/arm/lib/bitops.h              | 14 +++++++++++---
 arch/arm/lib/testchangebit.S       |  4 ++++
 arch/arm/lib/testclearbit.S        |  4 ++++
 arch/arm/lib/testsetbit.S          |  4 ++++
 6 files changed, 65 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 505a306e0271a..aebe2c8f6a686 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
 #endif
 	.endm
 
+/*
+ * Raw SMP data memory barrier
+ */
+	.macro	__smp_dmb mode
+#if __LINUX_ARM_ARCH__ >= 7
+	.ifeqs "\mode","arm"
+	dmb	ish
+	.else
+	W(dmb)	ish
+	.endif
+#elif __LINUX_ARM_ARCH__ == 6
+	mcr	p15, 0, r0, c7, c10, 5	@ dmb
+#else
+	.error "Incompatible SMP platform"
+#endif
+	.endm
+
 #if defined(CONFIG_CPU_V7M)
 	/*
 	 * setmode is used to assert to be in svc mode during boot. For v7-M
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 6f5d627c44a3c..f46b3c570f92e 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -14,14 +14,35 @@
  * ops which are SMP safe even on a UP kernel.
  */
 
+/*
+ * Unordered
+ */
+
 #define sync_set_bit(nr, p)		_set_bit(nr, p)
 #define sync_clear_bit(nr, p)		_clear_bit(nr, p)
 #define sync_change_bit(nr, p)		_change_bit(nr, p)
-#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
-#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
-#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
 #define sync_test_bit(nr, addr)		test_bit(nr, addr)
-#define arch_sync_cmpxchg		arch_cmpxchg
 
+/*
+ * Fully ordered
+ */
+
+int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_set_bit(nr, p)	_sync_test_and_set_bit(nr, p)
+
+int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_clear_bit(nr, p)	_sync_test_and_clear_bit(nr, p)
+
+int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_change_bit(nr, p)	_sync_test_and_change_bit(nr, p)
+
+#define arch_sync_cmpxchg(ptr, old, new)				\
+({									\
+	__typeof__(*(ptr)) __ret;					\
+	__smp_mb__before_atomic();					\
+	__ret = arch_cmpxchg_relaxed((ptr), (old), (new));		\
+	__smp_mb__after_atomic();					\
+	__ret;								\
+})
 
 #endif
diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
index 95bd359912889..f069d1b2318e6 100644
--- a/arch/arm/lib/bitops.h
+++ b/arch/arm/lib/bitops.h
@@ -28,7 +28,7 @@ UNWIND(	.fnend		)
 ENDPROC(\name		)
 	.endm
 
-	.macro	testop, name, instr, store
+	.macro	__testop, name, instr, store, barrier
 ENTRY(	\name		)
 UNWIND(	.fnstart	)
 	ands	ip, r1, #3
@@ -38,7 +38,7 @@ UNWIND(	.fnstart	)
 	mov	r0, r0, lsr #5
 	add	r1, r1, r0, lsl #2	@ Get word offset
 	mov	r3, r2, lsl r3		@ create mask
-	smp_dmb
+	\barrier
 #if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
 	.arch_extension	mp
 	ALT_SMP(W(pldw)	[r1])
@@ -50,13 +50,21 @@ UNWIND(	.fnstart	)
 	strex	ip, r2, [r1]
 	cmp	ip, #0
 	bne	1b
-	smp_dmb
+	\barrier
 	cmp	r0, #0
 	movne	r0, #1
 2:	bx	lr
 UNWIND(	.fnend		)
 ENDPROC(\name		)
 	.endm
+
+	.macro	testop, name, instr, store
+	__testop \name, \instr, \store, smp_dmb
+	.endm
+
+	.macro	sync_testop, name, instr, store
+	__testop \name, \instr, \store, __smp_dmb
+	.endm
 #else
 	.macro	bitop, name, instr
 ENTRY(	\name		)
diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
index 4ebecc67e6e04..f13fe9bc2399a 100644
--- a/arch/arm/lib/testchangebit.S
+++ b/arch/arm/lib/testchangebit.S
@@ -10,3 +10,7 @@
                 .text
 
 testop	_test_and_change_bit, eor, str
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_change_bit, eor, str
+#endif
diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
index 009afa0f5b4a7..4d2c5ca620ebf 100644
--- a/arch/arm/lib/testclearbit.S
+++ b/arch/arm/lib/testclearbit.S
@@ -10,3 +10,7 @@
                 .text
 
 testop	_test_and_clear_bit, bicne, strne
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_clear_bit, bicne, strne
+#endif
diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
index f3192e55acc87..649dbab65d8d0 100644
--- a/arch/arm/lib/testsetbit.S
+++ b/arch/arm/lib/testsetbit.S
@@ -10,3 +10,7 @@
                 .text
 
 testop	_test_and_set_bit, orreq, streq
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_set_bit, orreq, streq
+#endif
-- 
2.30.2


* [PATCH 02/26] locking/atomic: remove fallback comments
@ 2023-05-22 12:24 Mark Rutland
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Currently, a subset of the fallback templates has kerneldoc comments,
resulting in a haphazard set of generated comments, as only some
operations have fallback templates to begin with.

We'd like to generate more consistent kerneldoc comments, and to do so
we'll need to restructure the way the fallback code is generated.

To minimize churn and to make it easier to restructure the fallback
code, this patch removes the existing kerneldoc comments from the
fallback templates. We can add new kerneldoc comments in subsequent
patches.

Aside from generated documentation being reduced, there should be no
functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/atomic/atomic-arch-fallback.h | 166 +-------------------
 scripts/atomic/fallbacks/add_negative       |   8 -
 scripts/atomic/fallbacks/add_unless         |   9 --
 scripts/atomic/fallbacks/dec_and_test       |   8 -
 scripts/atomic/fallbacks/fetch_add_unless   |   9 --
 scripts/atomic/fallbacks/inc_and_test       |   8 -
 scripts/atomic/fallbacks/inc_not_zero       |   7 -
 scripts/atomic/fallbacks/sub_and_test       |   9 --
 8 files changed, 1 insertion(+), 223 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1722ddb6f17e0..3ce4cb5e790c5 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1272,15 +1272,6 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 #endif /* arch_atomic_try_cmpxchg_relaxed */
 
 #ifndef arch_atomic_sub_and_test
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_sub_and_test(int i, atomic_t *v)
 {
@@ -1290,14 +1281,6 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_dec_and_test
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic_dec_and_test(atomic_t *v)
 {
@@ -1307,14 +1290,6 @@ arch_atomic_dec_and_test(atomic_t *v)
 #endif
 
 #ifndef arch_atomic_inc_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic_inc_and_test(atomic_t *v)
 {
@@ -1331,14 +1306,6 @@ arch_atomic_inc_and_test(atomic_t *v)
 #endif /* arch_atomic_add_negative */
 
 #ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative(int i, atomic_t *v)
 {
@@ -1348,14 +1315,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_acquire(int i, atomic_t *v)
 {
@@ -1365,14 +1324,6 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_release(int i, atomic_t *v)
 {
@@ -1382,14 +1333,6 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
 #endif
 
 #ifndef arch_atomic_add_negative_relaxed
-/**
- * arch_atomic_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic_add_negative_relaxed(int i, atomic_t *v)
 {
@@ -1437,15 +1380,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
 #endif /* arch_atomic_add_negative_relaxed */
 
 #ifndef arch_atomic_fetch_add_unless
-/**
- * arch_atomic_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline int
 arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
@@ -1462,15 +1396,6 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #endif
 
 #ifndef arch_atomic_add_unless
-/**
- * arch_atomic_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic_add_unless(atomic_t *v, int a, int u)
 {
@@ -1480,13 +1405,6 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
 #endif
 
 #ifndef arch_atomic_inc_not_zero
-/**
- * arch_atomic_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic_inc_not_zero(atomic_t *v)
 {
@@ -2488,15 +2406,6 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 #endif /* arch_atomic64_try_cmpxchg_relaxed */
 
 #ifndef arch_atomic64_sub_and_test
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
@@ -2506,14 +2415,6 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_dec_and_test
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer of type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_atomic64_dec_and_test(atomic64_t *v)
 {
@@ -2523,14 +2424,6 @@ arch_atomic64_dec_and_test(atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_inc_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_atomic64_inc_and_test(atomic64_t *v)
 {
@@ -2547,14 +2440,6 @@ arch_atomic64_inc_and_test(atomic64_t *v)
 #endif /* arch_atomic64_add_negative */
 
 #ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
@@ -2564,14 +2449,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 {
@@ -2581,14 +2458,6 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 {
@@ -2598,14 +2467,6 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
 #endif
 
 #ifndef arch_atomic64_add_negative_relaxed
-/**
- * arch_atomic64_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
 {
@@ -2653,15 +2514,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_add_negative_relaxed */
 
 #ifndef arch_atomic64_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline s64
 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2678,15 +2530,6 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif
 
 #ifndef arch_atomic64_add_unless
-/**
- * arch_atomic64_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
@@ -2696,13 +2539,6 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 #endif
 
 #ifndef arch_atomic64_inc_not_zero
-/**
- * arch_atomic64_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_atomic64_inc_not_zero(atomic64_t *v)
 {
@@ -2761,4 +2597,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 52dfc6fe4a2e7234bbd2aa3e16a377c1db793a53
+// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980abf5904e..d0bd2dfbb244c 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,12 +1,4 @@
 cat <<EOF
-/**
- * arch_${atomic}_add_negative${order} - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type ${atomic}_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
 static __always_inline bool
 arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
 {
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 9e5159c2ccfc8..cf79b9da38dbb 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,13 +1,4 @@
 cat << EOF
-/**
- * arch_${atomic}_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
 static __always_inline bool
 arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
 {
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 8549f359bd0ef..3f6b6a8b47733 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,12 +1,4 @@
 cat <<EOF
-/**
- * arch_${atomic}_dec_and_test - decrement and test
- * @v: pointer of type ${atomic}_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool
 arch_${atomic}_dec_and_test(${atomic}_t *v)
 {
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 68ce13c8b9dad..81d2834f03d23 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,13 +1,4 @@
 cat << EOF
-/**
- * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
 static __always_inline ${int}
 arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
 {
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index 0cf23fe1efb85..c726a6d0634d3 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,12 +1,4 @@
 cat <<EOF
-/**
- * arch_${atomic}_inc_and_test - increment and test
- * @v: pointer of type ${atomic}_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_${atomic}_inc_and_test(${atomic}_t *v)
 {
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index ed8a1f5626675..97603591aac2a 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,11 +1,4 @@
 cat <<EOF
-/**
- * arch_${atomic}_inc_not_zero - increment unless the number is zero
- * @v: pointer of type ${atomic}_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
 static __always_inline bool
 arch_${atomic}_inc_not_zero(${atomic}_t *v)
 {
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index 260f37341c888..da8a049c9b02b 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,13 +1,4 @@
 cat <<EOF
-/**
- * arch_${atomic}_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type ${atomic}_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool
 arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
 {
-- 
2.30.2


* [PATCH 03/26] locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
@ 2023-05-22 12:24 Mark Rutland
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Hexagon's implementation of arch_atomic_cmpxchg() is identical to its
implementation of arch_cmpxchg(). Have it define arch_atomic_cmpxchg()
in terms of arch_cmpxchg(), matching what it does for arch_atomic_xchg()
and arch_xchg().

At the same time, remove the kerneldoc comments for hexagon's
arch_atomic_xchg() and arch_atomic_cmpxchg(). The arch_atomic_*()
namespace is shared by all architectures, so the API should be
documented centrally; the existing comments aren't all that helpful
as-is.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/hexagon/include/asm/atomic.h | 46 +++----------------------------
 1 file changed, 4 insertions(+), 42 deletions(-)

diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 6e94f8d04146f..738857e10d6ec 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,49 +36,11 @@ static inline void arch_atomic_set(atomic_t *v, int new)
  */
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)
 
-/**
- * arch_atomic_xchg - atomic
- * @v: pointer to memory to change
- * @new: new value (technically passed in a register -- see xchg)
- */
-#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), (new)))
-
-
-/**
- * arch_atomic_cmpxchg - atomic compare-and-exchange values
- * @v: pointer to value to change
- * @old:  desired old value to match
- * @new:  new value to put in
- *
- * Parameters are then pointer, value-in-register, value-in-register,
- * and the output is the old value.
- *
- * Apparently this is complicated for archs that don't support
- * the memw_locked like we do (or it's broken or whatever).
- *
- * Kind of the lynchpin of the rest of the generically defined routines.
- * Remember V2 had that bug with dotnew predicate set by memw_locked.
- *
- * "old" is "expected" old val, __oldval is actual old value
- */
-static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
-	int __oldval;
+#define arch_atomic_xchg(v, new)					\
+	(arch_xchg(&((v)->counter), (new)))
 
-	asm volatile(
-		"1:	%0 = memw_locked(%1);\n"
-		"	{ P0 = cmp.eq(%0,%2);\n"
-		"	  if (!P0.new) jump:nt 2f; }\n"
-		"	memw_locked(%1,P0) = %3;\n"
-		"	if (!P0) jump 1b;\n"
-		"2:\n"
-		: "=&r" (__oldval)
-		: "r" (&v->counter), "r" (old), "r" (new)
-		: "memory", "p0"
-	);
-
-	return __oldval;
-}
+#define arch_atomic_cmpxchg(v, old, new)				\
+	(arch_cmpxchg(&((v)->counter), (old), (new)))
 
 #define ATOMIC_OP(op)							\
 static inline void arch_atomic_##op(int i, atomic_t *v)			\
-- 
2.30.2


* [PATCH 04/26] locking/atomic: make atomic*_{cmp,}xchg optional
@ 2023-05-22 12:24 Mark Rutland
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively.

Add fallbacks for these cases and remove the trivial cases from arch
code. On some architectures the existing definitions are kept, as they
are used to build other arch_atomic*() operations.
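
The new fallback templates are trivial wrappers. As a sketch, the
scripts/atomic/fallbacks/xchg template plausibly expands along these
lines (inferred from the generated code; not quoted from the patch):

| cat <<EOF
| static __always_inline ${int}
| arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
| {
| 	return arch_xchg${order}(&v->counter, new);
| }
| EOF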

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/alpha/include/asm/atomic.h             |  10 --
 arch/arc/include/asm/atomic.h               |  24 ---
 arch/arc/include/asm/atomic64-arcv2.h       |   2 +
 arch/arm/include/asm/atomic.h               |   3 +-
 arch/arm64/include/asm/atomic.h             |  28 ----
 arch/csky/include/asm/atomic.h              |  35 -----
 arch/hexagon/include/asm/atomic.h           |   6 -
 arch/ia64/include/asm/atomic.h              |   7 -
 arch/loongarch/include/asm/atomic.h         |   7 -
 arch/m68k/include/asm/atomic.h              |   9 +-
 arch/mips/include/asm/atomic.h              |  11 --
 arch/openrisc/include/asm/atomic.h          |   3 -
 arch/parisc/include/asm/atomic.h            |   9 --
 arch/powerpc/include/asm/atomic.h           |  24 ---
 arch/riscv/include/asm/atomic.h             |  72 ---------
 arch/sh/include/asm/atomic.h                |   3 -
 arch/sparc/include/asm/atomic_32.h          |   2 +
 arch/sparc/include/asm/atomic_64.h          |  11 --
 arch/xtensa/include/asm/atomic.h            |   3 -
 include/asm-generic/atomic.h                |   3 -
 include/linux/atomic/atomic-arch-fallback.h | 158 +++++++++++++++++++-
 scripts/atomic/fallbacks/cmpxchg            |   7 +
 scripts/atomic/fallbacks/xchg               |   7 +
 23 files changed, 179 insertions(+), 265 deletions(-)
 create mode 100755 scripts/atomic/fallbacks/cmpxchg
 create mode 100755 scripts/atomic/fallbacks/xchg

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index f2861a43a61ef..ec8ab552c527a 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -200,16 +200,6 @@ ATOMIC_OPS(xor, xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define arch_atomic64_cmpxchg(v, old, new) \
-	(arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-
-#define arch_atomic_cmpxchg(v, old, new) \
-	(arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-
 /**
  * arch_atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 52ee51e1ff7c2..592d7fffc223c 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -22,30 +22,6 @@
 #include <asm/atomic-spinlock.h>
 #endif
 
-#define arch_atomic_cmpxchg(v, o, n)					\
-({									\
-	arch_cmpxchg(&((v)->counter), (o), (n));			\
-})
-
-#ifdef arch_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_relaxed(v, o, n)				\
-({									\
-	arch_cmpxchg_relaxed(&((v)->counter), (o), (n));		\
-})
-#endif
-
-#define arch_atomic_xchg(v, n)						\
-({									\
-	arch_xchg(&((v)->counter), (n));				\
-})
-
-#ifdef arch_xchg_relaxed
-#define arch_atomic_xchg_relaxed(v, n)					\
-({									\
-	arch_xchg_relaxed(&((v)->counter), (n));			\
-})
-#endif
-
 /*
  * 64-bit atomics
  */
diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h
index c5a8010fdc97d..2b7c9e61a2947 100644
--- a/arch/arc/include/asm/atomic64-arcv2.h
+++ b/arch/arc/include/asm/atomic64-arcv2.h
@@ -159,6 +159,7 @@ arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
 
 	return prev;
 }
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
 
 static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
 {
@@ -179,6 +180,7 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
 
 	return prev;
 }
+#define arch_atomic64_xchg arch_atomic64_xchg
 
 /**
  * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index db8512d9a918d..9458d47ff209c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -210,6 +210,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 
 	return ret;
 }
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
 
 #define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot
 
@@ -240,8 +241,6 @@ ATOMIC_OPS(xor, ^=, eor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 #ifndef CONFIG_GENERIC_ATOMIC64
 typedef struct {
 	s64 counter;
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index c9979273d3898..400d279e0f8d0 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -142,24 +142,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
 #define arch_atomic_fetch_xor_release		arch_atomic_fetch_xor_release
 #define arch_atomic_fetch_xor			arch_atomic_fetch_xor
 
-#define arch_atomic_xchg_relaxed(v, new) \
-	arch_xchg_relaxed(&((v)->counter), (new))
-#define arch_atomic_xchg_acquire(v, new) \
-	arch_xchg_acquire(&((v)->counter), (new))
-#define arch_atomic_xchg_release(v, new) \
-	arch_xchg_release(&((v)->counter), (new))
-#define arch_atomic_xchg(v, new) \
-	arch_xchg(&((v)->counter), (new))
-
-#define arch_atomic_cmpxchg_relaxed(v, old, new) \
-	arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_acquire(v, old, new) \
-	arch_cmpxchg_acquire(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_release(v, old, new) \
-	arch_cmpxchg_release(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg(v, old, new) \
-	arch_cmpxchg(&((v)->counter), (old), (new))
-
 #define arch_atomic_andnot			arch_atomic_andnot
 
 /*
@@ -209,16 +191,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
 #define arch_atomic64_fetch_xor_release		arch_atomic64_fetch_xor_release
 #define arch_atomic64_fetch_xor			arch_atomic64_fetch_xor
 
-#define arch_atomic64_xchg_relaxed		arch_atomic_xchg_relaxed
-#define arch_atomic64_xchg_acquire		arch_atomic_xchg_acquire
-#define arch_atomic64_xchg_release		arch_atomic_xchg_release
-#define arch_atomic64_xchg			arch_atomic_xchg
-
-#define arch_atomic64_cmpxchg_relaxed		arch_atomic_cmpxchg_relaxed
-#define arch_atomic64_cmpxchg_acquire		arch_atomic_cmpxchg_acquire
-#define arch_atomic64_cmpxchg_release		arch_atomic_cmpxchg_release
-#define arch_atomic64_cmpxchg			arch_atomic_cmpxchg
-
 #define arch_atomic64_andnot			arch_atomic64_andnot
 
 #define arch_atomic64_dec_if_positive		arch_atomic64_dec_if_positive
diff --git a/arch/csky/include/asm/atomic.h b/arch/csky/include/asm/atomic.h
index 60406ef9c2bbc..4dab44f6143a5 100644
--- a/arch/csky/include/asm/atomic.h
+++ b/arch/csky/include/asm/atomic.h
@@ -195,41 +195,6 @@ arch_atomic_dec_if_positive(atomic_t *v)
 }
 #define arch_atomic_dec_if_positive arch_atomic_dec_if_positive
 
-#define ATOMIC_OP()							\
-static __always_inline							\
-int arch_atomic_xchg_relaxed(atomic_t *v, int n)			\
-{									\
-	return __xchg_relaxed(n, &(v->counter), 4);			\
-}									\
-static __always_inline							\
-int arch_atomic_cmpxchg_relaxed(atomic_t *v, int o, int n)		\
-{									\
-	return __cmpxchg_relaxed(&(v->counter), o, n, 4);		\
-}									\
-static __always_inline							\
-int arch_atomic_cmpxchg_acquire(atomic_t *v, int o, int n)		\
-{									\
-	return __cmpxchg_acquire(&(v->counter), o, n, 4);		\
-}									\
-static __always_inline							\
-int arch_atomic_cmpxchg(atomic_t *v, int o, int n)			\
-{									\
-	return __cmpxchg(&(v->counter), o, n, 4);			\
-}
-
-#define ATOMIC_OPS()							\
-	ATOMIC_OP()
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed	arch_atomic_xchg_relaxed
-#define arch_atomic_cmpxchg_relaxed	arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire	arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg		arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
 #else
 #include <asm-generic/atomic.h>
 #endif
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 738857e10d6ec..ad6c111e9c10f 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,12 +36,6 @@ static inline void arch_atomic_set(atomic_t *v, int new)
  */
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)
 
-#define arch_atomic_xchg(v, new)					\
-	(arch_xchg(&((v)->counter), (new)))
-
-#define arch_atomic_cmpxchg(v, old, new)				\
-	(arch_cmpxchg(&((v)->counter), (old), (new)))
-
 #define ATOMIC_OP(op)							\
 static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 266c429b91372..6540a628d2573 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -207,13 +207,6 @@ ATOMIC64_FETCH_OP(xor, ^)
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_OP
 
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#define arch_atomic64_cmpxchg(v, old, new) \
-	(arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 #define arch_atomic_add(i,v)		(void)arch_atomic_add_return((i), (v))
 #define arch_atomic_sub(i,v)		(void)arch_atomic_sub_return((i), (v))
 
diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 6b9aca9ab6e9f..8d73c85911b08 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -181,9 +181,6 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 	return result;
 }
 
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
 /*
  * arch_atomic_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic_t
@@ -342,10 +339,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 	return result;
 }
 
-#define arch_atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
 /*
  * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic64_t
diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index cfba83d230fde..190a032f19be7 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -158,12 +158,7 @@ static inline int arch_atomic_inc_and_test(atomic_t *v)
 }
 #define arch_atomic_inc_and_test arch_atomic_inc_and_test
 
-#ifdef CONFIG_RMW_INSNS
-
-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#else /* !CONFIG_RMW_INSNS */
+#ifndef CONFIG_RMW_INSNS
 
 static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
@@ -177,6 +172,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 	local_irq_restore(flags);
 	return prev;
 }
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
 
 static inline int arch_atomic_xchg(atomic_t *v, int new)
 {
@@ -189,6 +185,7 @@ static inline int arch_atomic_xchg(atomic_t *v, int new)
 	local_irq_restore(flags);
 	return prev;
 }
+#define arch_atomic_xchg arch_atomic_xchg
 
 #endif /* !CONFIG_RMW_INSNS */
 
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 712fb5a6a5682..ba188e77768b2 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -33,17 +33,6 @@ static __always_inline void arch_##pfx##_set(pfx##_t *v, type i)	\
 {									\
 	WRITE_ONCE(v->counter, i);					\
 }									\
-									\
-static __always_inline type						\
-arch_##pfx##_cmpxchg(pfx##_t *v, type o, type n)			\
-{									\
-	return arch_cmpxchg(&v->counter, o, n);				\
-}									\
-									\
-static __always_inline type arch_##pfx##_xchg(pfx##_t *v, type n)	\
-{									\
-	return arch_xchg(&v->counter, n);				\
-}
 
 ATOMIC_OPS(atomic, int)
 
diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
index 326167e4783a9..8ce67ec7c9a30 100644
--- a/arch/openrisc/include/asm/atomic.h
+++ b/arch/openrisc/include/asm/atomic.h
@@ -130,7 +130,4 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 
 #include <asm/cmpxchg.h>
 
-#define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (v)))
-#define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (old), (new)))
-
 #endif /* __ASM_OPENRISC_ATOMIC_H */
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index dd5a299ada695..0b3f64c92e3c0 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -73,10 +73,6 @@ static __inline__ int arch_atomic_read(const atomic_t *v)
 	return READ_ONCE((v)->counter);
 }
 
-/* exported interface */
-#define arch_atomic_cmpxchg(v, o, n)	(arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), new))
-
 #define ATOMIC_OP(op, c_op)						\
 static __inline__ void arch_atomic_##op(int i, atomic_t *v)		\
 {									\
@@ -218,11 +214,6 @@ arch_atomic64_read(const atomic64_t *v)
 	return READ_ONCE((v)->counter);
 }
 
-/* exported interface */
-#define arch_atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 #endif /* !CONFIG_64BIT */
 
 
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 47228b1774781..5bf6a4d49268c 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -126,18 +126,6 @@ ATOMIC_OPS(xor, xor, "", K)
 #undef ATOMIC_OP_RETURN_RELAXED
 #undef ATOMIC_OP
 
-#define arch_atomic_cmpxchg(v, o, n) \
-	(arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_cmpxchg_relaxed(v, o, n) \
-	arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic_cmpxchg_acquire(v, o, n) \
-	arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-#define arch_atomic_xchg_relaxed(v, new) \
-	arch_xchg_relaxed(&((v)->counter), (new))
-
 /**
  * atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t
@@ -396,18 +384,6 @@ static __inline__ s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 }
 #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
-#define arch_atomic64_cmpxchg(v, o, n) \
-	(arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_cmpxchg_relaxed(v, o, n) \
-	arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic64_cmpxchg_acquire(v, o, n) \
-	arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic64_xchg(v, new) \
-	(arch_xchg(&((v)->counter), new))
-#define arch_atomic64_xchg_relaxed(v, new) \
-	arch_xchg_relaxed(&((v)->counter), (new))
-
 /**
  * atomic64_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index bba472928b539..f5dfef6c2153f 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -238,78 +238,6 @@ static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a,
 #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 #endif
 
-/*
- * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
- * {cmp,}xchg and the operations that return, so they need a full barrier.
- */
-#define ATOMIC_OP(c_t, prefix, size)					\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n)	\
-{									\
-	return __xchg_relaxed(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n)	\
-{									\
-	return __xchg_acquire(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n)	\
-{									\
-	return __xchg_release(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n)		\
-{									\
-	return __arch_xchg(&(v->counter), n, size);			\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v,	\
-				     c_t o, c_t n)			\
-{									\
-	return __cmpxchg_relaxed(&(v->counter), o, n, size);		\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v,	\
-				     c_t o, c_t n)			\
-{									\
-	return __cmpxchg_acquire(&(v->counter), o, n, size);		\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v,	\
-				     c_t o, c_t n)			\
-{									\
-	return __cmpxchg_release(&(v->counter), o, n, size);		\
-}									\
-static __always_inline							\
-c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
-{									\
-	return __cmpxchg(&(v->counter), o, n, size);			\
-}
-
-#ifdef CONFIG_GENERIC_ATOMIC64
-#define ATOMIC_OPS()							\
-	ATOMIC_OP(int,   , 4)
-#else
-#define ATOMIC_OPS()							\
-	ATOMIC_OP(int,   , 4)						\
-	ATOMIC_OP(s64, 64, 8)
-#endif
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed	arch_atomic_xchg_relaxed
-#define arch_atomic_xchg_acquire	arch_atomic_xchg_acquire
-#define arch_atomic_xchg_release	arch_atomic_xchg_release
-#define arch_atomic_xchg		arch_atomic_xchg
-#define arch_atomic_cmpxchg_relaxed	arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire	arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg_release	arch_atomic_cmpxchg_release
-#define arch_atomic_cmpxchg		arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
 static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v)
 {
 	int prev, rc;
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h
index 528bfeda78f56..7a18cb2a1c1ac 100644
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -30,9 +30,6 @@
 #include <asm/atomic-irq.h>
 #endif
 
-#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), new))
-#define arch_atomic_cmpxchg(v, o, n)	(arch_cmpxchg(&((v)->counter), (o), (n)))
-
 #endif /* CONFIG_CPU_J2 */
 
 #endif /* __ASM_SH_ATOMIC_H */
diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index d775daa83d129..1c9e6c7366e41 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -24,7 +24,9 @@ int arch_atomic_fetch_and(int, atomic_t *);
 int arch_atomic_fetch_or(int, atomic_t *);
 int arch_atomic_fetch_xor(int, atomic_t *);
 int arch_atomic_cmpxchg(atomic_t *, int, int);
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
 int arch_atomic_xchg(atomic_t *, int);
+#define arch_atomic_xchg arch_atomic_xchg
 int arch_atomic_fetch_add_unless(atomic_t *, int, int);
 void arch_atomic_set(atomic_t *, int);
 
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 077891686715a..df6a8b07d7e63 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -49,17 +49,6 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-
-static inline int arch_atomic_xchg(atomic_t *v, int new)
-{
-	return arch_xchg(&v->counter, new);
-}
-
-#define arch_atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 s64 arch_atomic64_dec_if_positive(atomic64_t *v);
 #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 52da614f953ce..1d323a864002c 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -257,7 +257,4 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
 #endif /* _XTENSA_ATOMIC_H */
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index e271d6708c876..22142c71d35a1 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -130,7 +130,4 @@ ATOMIC_OP(xor, ^)
 #define arch_atomic_read(v)			READ_ONCE((v)->counter)
 #define arch_atomic_set(v, i)			WRITE_ONCE(((v)->counter), (i))
 
-#define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (u32)(v)))
-#define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (u32)(old), (u32)(new)))
-
 #endif /* __ASM_GENERIC_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 3ce4cb5e790c5..1a2d81dbc2e48 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1091,9 +1091,48 @@ arch_atomic_fetch_xor(int i, atomic_t *v)
 #endif /* arch_atomic_fetch_xor_relaxed */
 
 #ifndef arch_atomic_xchg_relaxed
+#ifdef arch_atomic_xchg
 #define arch_atomic_xchg_acquire arch_atomic_xchg
 #define arch_atomic_xchg_release arch_atomic_xchg
 #define arch_atomic_xchg_relaxed arch_atomic_xchg
+#endif /* arch_atomic_xchg */
+
+#ifndef arch_atomic_xchg
+static __always_inline int
+arch_atomic_xchg(atomic_t *v, int new)
+{
+	return arch_xchg(&v->counter, new);
+}
+#define arch_atomic_xchg arch_atomic_xchg
+#endif
+
+#ifndef arch_atomic_xchg_acquire
+static __always_inline int
+arch_atomic_xchg_acquire(atomic_t *v, int new)
+{
+	return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
+#endif
+
+#ifndef arch_atomic_xchg_release
+static __always_inline int
+arch_atomic_xchg_release(atomic_t *v, int new)
+{
+	return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic_xchg_release arch_atomic_xchg_release
+#endif
+
+#ifndef arch_atomic_xchg_relaxed
+static __always_inline int
+arch_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+	return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#endif
+
 #else /* arch_atomic_xchg_relaxed */
 
 #ifndef arch_atomic_xchg_acquire
@@ -1133,9 +1172,48 @@ arch_atomic_xchg(atomic_t *v, int i)
 #endif /* arch_atomic_xchg_relaxed */
 
 #ifndef arch_atomic_cmpxchg_relaxed
+#ifdef arch_atomic_cmpxchg
 #define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
 #define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
 #define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#endif /* arch_atomic_cmpxchg */
+
+#ifndef arch_atomic_cmpxchg
+static __always_inline int
+arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
+#endif
+
+#ifndef arch_atomic_cmpxchg_acquire
+static __always_inline int
+arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic_cmpxchg_release
+static __always_inline int
+arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#endif
+
+#ifndef arch_atomic_cmpxchg_relaxed
+static __always_inline int
+arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+	return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#endif
+
 #else /* arch_atomic_cmpxchg_relaxed */
 
 #ifndef arch_atomic_cmpxchg_acquire
@@ -2225,9 +2303,48 @@ arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
 #endif /* arch_atomic64_fetch_xor_relaxed */
 
 #ifndef arch_atomic64_xchg_relaxed
+#ifdef arch_atomic64_xchg
 #define arch_atomic64_xchg_acquire arch_atomic64_xchg
 #define arch_atomic64_xchg_release arch_atomic64_xchg
 #define arch_atomic64_xchg_relaxed arch_atomic64_xchg
+#endif /* arch_atomic64_xchg */
+
+#ifndef arch_atomic64_xchg
+static __always_inline s64
+arch_atomic64_xchg(atomic64_t *v, s64 new)
+{
+	return arch_xchg(&v->counter, new);
+}
+#define arch_atomic64_xchg arch_atomic64_xchg
+#endif
+
+#ifndef arch_atomic64_xchg_acquire
+static __always_inline s64
+arch_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+	return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
+#endif
+
+#ifndef arch_atomic64_xchg_release
+static __always_inline s64
+arch_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+	return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic64_xchg_release arch_atomic64_xchg_release
+#endif
+
+#ifndef arch_atomic64_xchg_relaxed
+static __always_inline s64
+arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+	return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
+#endif
+
 #else /* arch_atomic64_xchg_relaxed */
 
 #ifndef arch_atomic64_xchg_acquire
@@ -2267,9 +2384,48 @@ arch_atomic64_xchg(atomic64_t *v, s64 i)
 #endif /* arch_atomic64_xchg_relaxed */
 
 #ifndef arch_atomic64_cmpxchg_relaxed
+#ifdef arch_atomic64_cmpxchg
 #define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
 #define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg
 #define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
+#endif /* arch_atomic64_cmpxchg */
+
+#ifndef arch_atomic64_cmpxchg
+static __always_inline s64
+arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
+#endif
+
+#ifndef arch_atomic64_cmpxchg_acquire
+static __always_inline s64
+arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic64_cmpxchg_release
+static __always_inline s64
+arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
+#endif
+
+#ifndef arch_atomic64_cmpxchg_relaxed
+static __always_inline s64
+arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+	return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
+#endif
+
 #else /* arch_atomic64_cmpxchg_relaxed */
 
 #ifndef arch_atomic64_cmpxchg_acquire
@@ -2597,4 +2753,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
 #endif
 
 #endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
+// e1cee558cc61cae887890db30fcdf93baca9f498
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
new file mode 100755
index 0000000000000..87cd010f98d58
--- /dev/null
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
+{
+	return arch_cmpxchg${order}(&v->counter, old, new);
+}
+EOF
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
new file mode 100755
index 0000000000000..733b8980b2f3b
--- /dev/null
+++ b/scripts/atomic/fallbacks/xchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
+{
+	return arch_xchg${order}(&v->counter, new);
+}
+EOF
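
For reference: instantiated with ${atomic}=atomic, ${int}=int and
${order}=_acquire, the xchg template above expands to exactly the
fallback generated earlier in this patch:

| static __always_inline int
| arch_atomic_xchg_acquire(atomic_t *v, int new)
| {
| 	return arch_xchg_acquire(&v->counter, new);
| }
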
-- 
2.30.2



* [PATCH 05/26] locking/atomic: arc: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (3 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 04/26] locking/atomic: make atomic*_{cmp,}xchg optional Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 06/26] locking/atomic: arm: " Mark Rutland
                   ` (18 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
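
As a sketch of how these symbols get consumed (mirroring the
arch_atomic_xchg fallback added earlier in the series), the generated
header can test for each op's presence directly:

| #ifndef arch_atomic_xchg
| static __always_inline int
| arch_atomic_xchg(atomic_t *v, int new)
| {
| 	return arch_xchg(&v->counter, new);
| }
| #define arch_atomic_xchg arch_atomic_xchg
| #endif

Where an architecture provides the op and defines the matching symbol,
the preprocessor skips the whole fallback block.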

Add the required definitions to arch/arc.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arc/include/asm/atomic-spinlock.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arc/include/asm/atomic-spinlock.h b/arch/arc/include/asm/atomic-spinlock.h
index 2c830347bfb4e..89d12a60f84c0 100644
--- a/arch/arc/include/asm/atomic-spinlock.h
+++ b/arch/arc/include/asm/atomic-spinlock.h
@@ -81,6 +81,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
+#define arch_atomic_fetch_add		arch_atomic_fetch_add
+#define arch_atomic_fetch_sub		arch_atomic_fetch_sub
+#define arch_atomic_add_return		arch_atomic_add_return
+#define arch_atomic_sub_return		arch_atomic_sub_return
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
 	ATOMIC_OP(op, c_op, asm_op)					\
@@ -92,7 +97,11 @@ ATOMIC_OPS(or, |=, or)
 ATOMIC_OPS(xor, ^=, xor)
 
 #define arch_atomic_andnot		arch_atomic_andnot
+
+#define arch_atomic_fetch_and		arch_atomic_fetch_and
 #define arch_atomic_fetch_andnot	arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or		arch_atomic_fetch_or
+#define arch_atomic_fetch_xor		arch_atomic_fetch_xor
 
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
-- 
2.30.2



* [PATCH 06/26] locking/atomic: arm: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (4 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 05/26] locking/atomic: arc: add preprocessor symbols Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 07/26] locking/atomic: hexagon: " Mark Rutland
                   ` (17 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/arm.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/include/asm/atomic.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 9458d47ff209c..f0e3b01afa746 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -197,6 +197,16 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 	return val;							\
 }
 
+#define arch_atomic_add_return			arch_atomic_add_return
+#define arch_atomic_sub_return			arch_atomic_sub_return
+#define arch_atomic_fetch_add			arch_atomic_fetch_add
+#define arch_atomic_fetch_sub			arch_atomic_fetch_sub
+
+#define arch_atomic_fetch_and			arch_atomic_fetch_and
+#define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or			arch_atomic_fetch_or
+#define arch_atomic_fetch_xor			arch_atomic_fetch_xor
+
 static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
@@ -212,8 +222,6 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 }
 #define arch_atomic_cmpxchg arch_atomic_cmpxchg
 
-#define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot
-
 #endif /* __LINUX_ARM_ARCH__ */
 
 #define ATOMIC_OPS(op, c_op, asm_op)					\
-- 
2.30.2



* [PATCH 07/26] locking/atomic: hexagon: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (5 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 06/26] locking/atomic: arm: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 08/26] locking/atomic: m68k: " Mark Rutland
                   ` (16 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/hexagon.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/hexagon/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index ad6c111e9c10f..5c8440016c762 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -91,6 +91,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return			arch_atomic_add_return
+#define arch_atomic_sub_return			arch_atomic_sub_return
+#define arch_atomic_fetch_add			arch_atomic_fetch_add
+#define arch_atomic_fetch_sub			arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -98,6 +103,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and			arch_atomic_fetch_and
+#define arch_atomic_fetch_or			arch_atomic_fetch_or
+#define arch_atomic_fetch_xor			arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
-- 
2.30.2



* [PATCH 08/26] locking/atomic: m68k: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (6 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 07/26] locking/atomic: hexagon: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 09/26] locking/atomic: parisc: " Mark Rutland
                   ` (15 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/m68k.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/m68k/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index 190a032f19be7..4bfbc25f6ecf4 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -106,6 +106,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v)		\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
+#define arch_atomic_add_return			arch_atomic_add_return
+#define arch_atomic_sub_return			arch_atomic_sub_return
+#define arch_atomic_fetch_add			arch_atomic_fetch_add
+#define arch_atomic_fetch_sub			arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
 	ATOMIC_OP(op, c_op, asm_op)					\
@@ -115,6 +120,10 @@ ATOMIC_OPS(and, &=, and)
 ATOMIC_OPS(or, |=, or)
 ATOMIC_OPS(xor, ^=, eor)
 
+#define arch_atomic_fetch_and			arch_atomic_fetch_and
+#define arch_atomic_fetch_or			arch_atomic_fetch_or
+#define arch_atomic_fetch_xor			arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
-- 
2.30.2



* [PATCH 09/26] locking/atomic: parisc: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (7 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 08/26] locking/atomic: m68k: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 10/26] locking/atomic: sh: " Mark Rutland
                   ` (14 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/parisc.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/parisc/include/asm/atomic.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 0b3f64c92e3c0..d4f023887ff87 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -118,6 +118,11 @@ static __inline__ int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
 
+#define arch_atomic_add_return	arch_atomic_add_return
+#define arch_atomic_sub_return	arch_atomic_sub_return
+#define arch_atomic_fetch_add	arch_atomic_fetch_add
+#define arch_atomic_fetch_sub	arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op)						\
 	ATOMIC_OP(op, c_op)						\
@@ -127,6 +132,10 @@ ATOMIC_OPS(and, &=)
 ATOMIC_OPS(or, |=)
 ATOMIC_OPS(xor, ^=)
 
+#define arch_atomic_fetch_and	arch_atomic_fetch_and
+#define arch_atomic_fetch_or	arch_atomic_fetch_or
+#define arch_atomic_fetch_xor	arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
@@ -181,6 +190,11 @@ static __inline__ s64 arch_atomic64_fetch_##op(s64 i, atomic64_t *v)	\
 ATOMIC64_OPS(add, +=)
 ATOMIC64_OPS(sub, -=)
 
+#define arch_atomic64_add_return	arch_atomic64_add_return
+#define arch_atomic64_sub_return	arch_atomic64_sub_return
+#define arch_atomic64_fetch_add		arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub		arch_atomic64_fetch_sub
+
 #undef ATOMIC64_OPS
 #define ATOMIC64_OPS(op, c_op)						\
 	ATOMIC64_OP(op, c_op)						\
@@ -190,6 +204,10 @@ ATOMIC64_OPS(and, &=)
 ATOMIC64_OPS(or, |=)
 ATOMIC64_OPS(xor, ^=)
 
+#define arch_atomic64_fetch_and		arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or		arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor		arch_atomic64_fetch_xor
+
 #undef ATOMIC64_OPS
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_OP_RETURN
-- 
2.30.2



* [PATCH 10/26] locking/atomic: sh: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (8 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 09/26] locking/atomic: parisc: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 11/26] locking/atomic: sparc: " Mark Rutland
                   ` (13 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/sh.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/sh/include/asm/atomic-grb.h  | 9 +++++++++
 arch/sh/include/asm/atomic-irq.h  | 9 +++++++++
 arch/sh/include/asm/atomic-llsc.h | 9 +++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/sh/include/asm/atomic-grb.h b/arch/sh/include/asm/atomic-grb.h
index 059791fd394fc..cf1c10f15528b 100644
--- a/arch/sh/include/asm/atomic-grb.h
+++ b/arch/sh/include/asm/atomic-grb.h
@@ -71,6 +71,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return	arch_atomic_add_return
+#define arch_atomic_sub_return	arch_atomic_sub_return
+#define arch_atomic_fetch_add	arch_atomic_fetch_add
+#define arch_atomic_fetch_sub	arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -78,6 +83,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and	arch_atomic_fetch_and
+#define arch_atomic_fetch_or	arch_atomic_fetch_or
+#define arch_atomic_fetch_xor	arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-irq.h b/arch/sh/include/asm/atomic-irq.h
index 7665de9d00d0d..b4090cc354935 100644
--- a/arch/sh/include/asm/atomic-irq.h
+++ b/arch/sh/include/asm/atomic-irq.h
@@ -55,6 +55,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
 
+#define arch_atomic_add_return	arch_atomic_add_return
+#define arch_atomic_sub_return	arch_atomic_sub_return
+#define arch_atomic_fetch_add	arch_atomic_fetch_add
+#define arch_atomic_fetch_sub	arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op)						\
 	ATOMIC_OP(op, c_op)						\
@@ -64,6 +69,10 @@ ATOMIC_OPS(and, &=)
 ATOMIC_OPS(or, |=)
 ATOMIC_OPS(xor, ^=)
 
+#define arch_atomic_fetch_and	arch_atomic_fetch_and
+#define arch_atomic_fetch_or	arch_atomic_fetch_or
+#define arch_atomic_fetch_xor	arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index b63dcfbfa14ef..9ef1fb1dd12ee 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -73,6 +73,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return	arch_atomic_add_return
+#define arch_atomic_sub_return	arch_atomic_sub_return
+#define arch_atomic_fetch_add	arch_atomic_fetch_add
+#define arch_atomic_fetch_sub	arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -80,6 +85,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and	arch_atomic_fetch_and
+#define arch_atomic_fetch_or	arch_atomic_fetch_or
+#define arch_atomic_fetch_xor	arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
-- 
2.30.2



* [PATCH 11/26] locking/atomic: sparc: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (9 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 10/26] locking/atomic: sh: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 12/26] locking/atomic: x86: " Mark Rutland
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/sparc.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/sparc/include/asm/atomic_32.h | 16 ++++++++++++++--
 arch/sparc/include/asm/atomic_64.h | 18 ++++++++++++++++++
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index 1c9e6c7366e41..60ce2fe57fcd7 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -19,19 +19,31 @@
 #include <asm-generic/atomic64.h>
 
 int arch_atomic_add_return(int, atomic_t *);
+#define arch_atomic_add_return arch_atomic_add_return
+
 int arch_atomic_fetch_add(int, atomic_t *);
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+
 int arch_atomic_fetch_and(int, atomic_t *);
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+
 int arch_atomic_fetch_or(int, atomic_t *);
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+
 int arch_atomic_fetch_xor(int, atomic_t *);
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
 int arch_atomic_cmpxchg(atomic_t *, int, int);
 #define arch_atomic_cmpxchg arch_atomic_cmpxchg
+
 int arch_atomic_xchg(atomic_t *, int);
 #define arch_atomic_xchg arch_atomic_xchg
-int arch_atomic_fetch_add_unless(atomic_t *, int, int);
-void arch_atomic_set(atomic_t *, int);
 
+int arch_atomic_fetch_add_unless(atomic_t *, int, int);
 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
+void arch_atomic_set(atomic_t *, int);
+
 #define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index df6a8b07d7e63..a5e9c37605a70 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -37,6 +37,16 @@ s64 arch_atomic64_fetch_##op(s64, atomic64_t *);
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return			arch_atomic_add_return
+#define arch_atomic_sub_return			arch_atomic_sub_return
+#define arch_atomic_fetch_add			arch_atomic_fetch_add
+#define arch_atomic_fetch_sub			arch_atomic_fetch_sub
+
+#define arch_atomic64_add_return		arch_atomic64_add_return
+#define arch_atomic64_sub_return		arch_atomic64_sub_return
+#define arch_atomic64_fetch_add			arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub			arch_atomic64_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -44,6 +54,14 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and			arch_atomic_fetch_and
+#define arch_atomic_fetch_or			arch_atomic_fetch_or
+#define arch_atomic_fetch_xor			arch_atomic_fetch_xor
+
+#define arch_atomic64_fetch_and			arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or			arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor			arch_atomic64_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
-- 
2.30.2



* [PATCH 12/26] locking/atomic: x86: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (10 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 11/26] locking/atomic: sparc: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 13/26] locking/atomic: xtensa: " Mark Rutland
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/x86.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/x86/include/asm/cmpxchg_64.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index 3e6e3eef701b3..44b08b53ab32f 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -45,11 +45,13 @@ static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 n
 {
 	return __arch_cmpxchg128(ptr, old, new, LOCK_PREFIX);
 }
+#define arch_cmpxchg128 arch_cmpxchg128
 
 static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old, u128 new)
 {
 	return __arch_cmpxchg128(ptr, old, new,);
 }
+#define arch_cmpxchg128_local arch_cmpxchg128_local
 
 #define __arch_try_cmpxchg128(_ptr, _oldp, _new, _lock)			\
 ({									\
@@ -75,11 +77,13 @@ static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *oldp,
 {
 	return __arch_try_cmpxchg128(ptr, oldp, new, LOCK_PREFIX);
 }
+#define arch_try_cmpxchg128 arch_try_cmpxchg128
 
 static __always_inline bool arch_try_cmpxchg128_local(volatile u128 *ptr, u128 *oldp, u128 new)
 {
 	return __arch_try_cmpxchg128(ptr, oldp, new,);
 }
+#define arch_try_cmpxchg128_local arch_try_cmpxchg128_local
 
 #define system_has_cmpxchg128()		boot_cpu_has(X86_FEATURE_CX16)
 
-- 
2.30.2



* [PATCH 13/26] locking/atomic: xtensa: add preprocessor symbols
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (11 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 12/26] locking/atomic: x86: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 14/26] locking/atomic: scripts: remove bogus order parameter Mark Rutland
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).

Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.

Add the required definitions to arch/xtensa.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/xtensa/include/asm/atomic.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 1d323a864002c..7308b7f777d79 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -245,6 +245,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v)		\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
+#define arch_atomic_add_return			arch_atomic_add_return
+#define arch_atomic_sub_return			arch_atomic_sub_return
+#define arch_atomic_fetch_add			arch_atomic_fetch_add
+#define arch_atomic_fetch_sub			arch_atomic_fetch_sub
+
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
 
@@ -252,6 +257,10 @@ ATOMIC_OPS(and)
 ATOMIC_OPS(or)
 ATOMIC_OPS(xor)
 
+#define arch_atomic_fetch_and			arch_atomic_fetch_and
+#define arch_atomic_fetch_or			arch_atomic_fetch_or
+#define arch_atomic_fetch_xor			arch_atomic_fetch_xor
+
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
-- 
2.30.2



* [PATCH 14/26] locking/atomic: scripts: remove bogus order parameter
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (12 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 13/26] locking/atomic: xtensa: " Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 15/26] locking/atomic: scripts: remove leftover "${mult}" Mark Rutland
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

At the start of gen_proto_order_variants(), the ${order} variable is not
yet defined, and will be substituted with an empty string.
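
Concretely, this means the existing lookup

| 	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"

already behaves exactly as if "" had been written, since the shell
substitutes the unset ${order} with the empty string.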

Replace the current bogus use of ${order} with an empty string instead.

This results in no change to the generated headers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 scripts/atomic/gen-atomic-fallback.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index a70acd548fcd8..7a6bcea8f565b 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -81,7 +81,7 @@ gen_proto_order_variants()
 
 	local basename="arch_${atomic}_${pfx}${name}${sfx}"
 
-	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
+	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
 
 	# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
 	# read_acquire and set_release need to be templated, though
-- 
2.30.2



* [PATCH 15/26] locking/atomic: scripts: remove leftover "${mult}"
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (13 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 14/26] locking/atomic: scripts: remove bogus order parameter Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 16/26] locking/atomic: scripts: factor out order template generation Mark Rutland
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

We removed cmpxchg_double() and variants in commit:

  b4cf83b2d1da40b2 ("arch: Remove cmpxchg_double")

That commit removed the need for "${mult}" in the instrumentation
logic, but unfortunately we missed one instance of it.
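
For context (reconstructed from memory of the old script, so treat the
exact spelling as an assumption): the double-word ops set ${mult} to
"2 * " so that their generated instrumentation covered both words, e.g.:

| 	instrument_atomic_read_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \

With cmpxchg_double() gone, ${mult} now always expands to the empty
string.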

There is no change to the generated header.
There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 scripts/atomic/gen-atomic-instrumented.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index a2ef735be8ca9..68557bfbbdc5e 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -118,7 +118,7 @@ cat <<EOF
 EOF
 [ -n "$kcsan_barrier" ] && printf "\t${kcsan_barrier}; \\\\\n"
 cat <<EOF
-	instrument_atomic_read_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
+	instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \\
 	arch_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
 })
 EOF
-- 
2.30.2



* [PATCH 16/26] locking/atomic: scripts: factor out order template generation
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (14 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 15/26] locking/atomic: scripts: remove leftover "${mult}" Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 18/26] locking/atomic: treewide: use raw_atomic*_<op>() Mark Rutland
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Currently gen_proto_order_variants() hard-codes the path for the templates used
for order fallbacks. Factor this out into a helper so that it can be reused
elsewhere.
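
The helper resolves the template path with two shell expansions:
${order#_} strips the leading underscore, and ${tmpl_order:-fence}
falls back to "fence" when the result is empty, giving the same mapping
as the three hard-coded calls being replaced:

|   "_acquire" -> ${ATOMICDIR}/fallbacks/acquire
|   "_release" -> ${ATOMICDIR}/fallbacks/release
|   ""         -> ${ATOMICDIR}/fallbacks/fence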

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 scripts/atomic/gen-atomic-fallback.sh | 34 +++++++++++++--------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 7a6bcea8f565b..337330865fa2e 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -32,6 +32,20 @@ gen_template_fallback()
 	fi
 }
 
+#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
+gen_order_fallback()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+
+	local tmpl_order=${order#_}
+	local tmpl="${ATOMICDIR}/fallbacks/${tmpl_order:-fence}"
+	gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+}
+
 #gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
 gen_proto_fallback()
 {
@@ -56,20 +70,6 @@ cat << EOF
 EOF
 }
 
-gen_proto_order_variant()
-{
-	local meta="$1"; shift
-	local pfx="$1"; shift
-	local name="$1"; shift
-	local sfx="$1"; shift
-	local order="$1"; shift
-	local atomic="$1"
-
-	local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
-	printf "#define ${basename}${order} ${basename}${order}\n"
-}
-
 #gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
 gen_proto_order_variants()
 {
@@ -117,9 +117,9 @@ gen_proto_order_variants()
 
 	printf "#else /* ${basename}_relaxed */\n\n"
 
-	gen_template_fallback "${ATOMICDIR}/fallbacks/acquire"  "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
-	gen_template_fallback "${ATOMICDIR}/fallbacks/release"  "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
-	gen_template_fallback "${ATOMICDIR}/fallbacks/fence"  "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+	gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
 
 	printf "#endif /* ${basename}_relaxed */\n\n"
 }
-- 
2.30.2



* [PATCH 18/26] locking/atomic: treewide: use raw_atomic*_<op>()
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (15 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 16/26] locking/atomic: scripts: factor out order template generation Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 19/26] locking/atomic: scripts: build raw_atomic_long*() directly Mark Rutland
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Now that we have raw_atomic*_<op>() definitions, there's no need to use
arch_atomic*_<op>() definitions outside of the low-level atomic
definitions.

Move treewide users of arch_atomic*_<op>() over to the equivalent
raw_atomic*_<op>().

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 arch/powerpc/kernel/smp.c              | 12 ++++++------
 arch/x86/kernel/alternative.c          |  4 ++--
 arch/x86/kernel/cpu/mce/core.c         | 16 ++++++++--------
 arch/x86/kernel/nmi.c                  |  2 +-
 arch/x86/kernel/pvclock.c              |  4 ++--
 arch/x86/kvm/x86.c                     |  2 +-
 include/asm-generic/bitops/atomic.h    | 12 ++++++------
 include/asm-generic/bitops/lock.h      |  8 ++++----
 include/linux/context_tracking.h       |  4 ++--
 include/linux/context_tracking_state.h |  2 +-
 include/linux/cpumask.h                |  2 +-
 include/linux/jump_label.h             |  2 +-
 kernel/context_tracking.c              | 12 ++++++------
 kernel/sched/clock.c                   |  2 +-
 14 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 265801a3e94cf..e8965f18686f0 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -417,9 +417,9 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags)
 {
 	raw_local_irq_save(*flags);
 	hard_irq_disable();
-	while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) {
+	while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) {
 		raw_local_irq_restore(*flags);
-		spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0);
+		spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0);
 		raw_local_irq_save(*flags);
 		hard_irq_disable();
 	}
@@ -427,15 +427,15 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags)
 
 noinstr static void nmi_ipi_lock(void)
 {
-	while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
-		spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0);
+	while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
+		spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0);
 }
 
 noinstr static void nmi_ipi_unlock(void)
 {
 	smp_mb();
-	WARN_ON(arch_atomic_read(&__nmi_ipi_lock) != 1);
-	arch_atomic_set(&__nmi_ipi_lock, 0);
+	WARN_ON(raw_atomic_read(&__nmi_ipi_lock) != 1);
+	raw_atomic_set(&__nmi_ipi_lock, 0);
 }
 
 noinstr static void nmi_ipi_unlock_end(unsigned long *flags)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index f615e0cb6d932..18f16e93838fe 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -1799,7 +1799,7 @@ struct bp_patching_desc *try_get_desc(void)
 {
 	struct bp_patching_desc *desc = &bp_desc;
 
-	if (!arch_atomic_inc_not_zero(&desc->refs))
+	if (!raw_atomic_inc_not_zero(&desc->refs))
 		return NULL;
 
 	return desc;
@@ -1810,7 +1810,7 @@ static __always_inline void put_desc(void)
 	struct bp_patching_desc *desc = &bp_desc;
 
 	smp_mb__before_atomic();
-	arch_atomic_dec(&desc->refs);
+	raw_atomic_dec(&desc->refs);
 }
 
 static __always_inline void *text_poke_addr(struct text_poke_loc *tp)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 2eec60f50057a..ab156e6e71208 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1022,12 +1022,12 @@ static noinstr int mce_start(int *no_way_out)
 	if (!timeout)
 		return ret;
 
-	arch_atomic_add(*no_way_out, &global_nwo);
+	raw_atomic_add(*no_way_out, &global_nwo);
 	/*
 	 * Rely on the implied barrier below, such that global_nwo
 	 * is updated before mce_callin.
 	 */
-	order = arch_atomic_inc_return(&mce_callin);
+	order = raw_atomic_inc_return(&mce_callin);
 	arch_cpumask_clear_cpu(smp_processor_id(), &mce_missing_cpus);
 
 	/* Enable instrumentation around calls to external facilities */
@@ -1036,10 +1036,10 @@ static noinstr int mce_start(int *no_way_out)
 	/*
 	 * Wait for everyone.
 	 */
-	while (arch_atomic_read(&mce_callin) != num_online_cpus()) {
+	while (raw_atomic_read(&mce_callin) != num_online_cpus()) {
 		if (mce_timed_out(&timeout,
 				  "Timeout: Not all CPUs entered broadcast exception handler")) {
-			arch_atomic_set(&global_nwo, 0);
+			raw_atomic_set(&global_nwo, 0);
 			goto out;
 		}
 		ndelay(SPINUNIT);
@@ -1054,7 +1054,7 @@ static noinstr int mce_start(int *no_way_out)
 		/*
 		 * Monarch: Starts executing now, the others wait.
 		 */
-		arch_atomic_set(&mce_executing, 1);
+		raw_atomic_set(&mce_executing, 1);
 	} else {
 		/*
 		 * Subject: Now start the scanning loop one by one in
@@ -1062,10 +1062,10 @@ static noinstr int mce_start(int *no_way_out)
 		 * This way when there are any shared banks it will be
 		 * only seen by one CPU before cleared, avoiding duplicates.
 		 */
-		while (arch_atomic_read(&mce_executing) < order) {
+		while (raw_atomic_read(&mce_executing) < order) {
 			if (mce_timed_out(&timeout,
 					  "Timeout: Subject CPUs unable to finish machine check processing")) {
-				arch_atomic_set(&global_nwo, 0);
+				raw_atomic_set(&global_nwo, 0);
 				goto out;
 			}
 			ndelay(SPINUNIT);
@@ -1075,7 +1075,7 @@ static noinstr int mce_start(int *no_way_out)
 	/*
 	 * Cache the global no_way_out state.
 	 */
-	*no_way_out = arch_atomic_read(&global_nwo);
+	*no_way_out = raw_atomic_read(&global_nwo);
 
 	ret = order;
 
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 776f4b1e395b5..a0c551846b35f 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -496,7 +496,7 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
 	 */
 	sev_es_nmi_complete();
 	if (IS_ENABLED(CONFIG_NMI_CHECK_CPU))
-		arch_atomic_long_inc(&nsp->idt_calls);
+		raw_atomic_long_inc(&nsp->idt_calls);
 
 	if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id()))
 		return;
diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 56acf53a782ad..b3f81379c2fc0 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -101,11 +101,11 @@ u64 __pvclock_clocksource_read(struct pvclock_vcpu_time_info *src, bool dowd)
 	 * updating at the same time, and one of them could be slightly behind,
 	 * making the assumption that last_value always go forward fail to hold.
 	 */
-	last = arch_atomic64_read(&last_value);
+	last = raw_atomic64_read(&last_value);
 	do {
 		if (ret <= last)
 			return last;
-	} while (!arch_atomic64_try_cmpxchg(&last_value, &last, ret));
+	} while (!raw_atomic64_try_cmpxchg(&last_value, &last, ret));
 
 	return ret;
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ceb7c5e9cf9e9..ac6f609068106 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13155,7 +13155,7 @@ EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
 
 bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm)
 {
-	return arch_atomic_read(&kvm->arch.assigned_device_count);
+	return raw_atomic_read(&kvm->arch.assigned_device_count);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
 
diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index 71ab4ba9c25d1..e076e079f6b2e 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -15,21 +15,21 @@ static __always_inline void
 arch_set_bit(unsigned int nr, volatile unsigned long *p)
 {
 	p += BIT_WORD(nr);
-	arch_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
+	raw_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
 static __always_inline void
 arch_clear_bit(unsigned int nr, volatile unsigned long *p)
 {
 	p += BIT_WORD(nr);
-	arch_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
+	raw_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
 static __always_inline void
 arch_change_bit(unsigned int nr, volatile unsigned long *p)
 {
 	p += BIT_WORD(nr);
-	arch_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
+	raw_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
 static __always_inline int
@@ -39,7 +39,7 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
+	old = raw_atomic_long_fetch_or(mask, (atomic_long_t *)p);
 	return !!(old & mask);
 }
 
@@ -50,7 +50,7 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+	old = raw_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
 	return !!(old & mask);
 }
 
@@ -61,7 +61,7 @@ arch_test_and_change_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	old = arch_atomic_long_fetch_xor(mask, (atomic_long_t *)p);
+	old = raw_atomic_long_fetch_xor(mask, (atomic_long_t *)p);
 	return !!(old & mask);
 }
 
diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 630f2f6b95956..40913516e654c 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -25,7 +25,7 @@ arch_test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p)
 	if (READ_ONCE(*p) & mask)
 		return 1;
 
-	old = arch_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
+	old = raw_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
 	return !!(old & mask);
 }
 
@@ -41,7 +41,7 @@ static __always_inline void
 arch_clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
 {
 	p += BIT_WORD(nr);
-	arch_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p);
+	raw_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
 /**
@@ -63,7 +63,7 @@ arch___clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
 	p += BIT_WORD(nr);
 	old = READ_ONCE(*p);
 	old &= ~BIT_MASK(nr);
-	arch_atomic_long_set_release((atomic_long_t *)p, old);
+	raw_atomic_long_set_release((atomic_long_t *)p, old);
 }
 
 /**
@@ -83,7 +83,7 @@ static inline bool arch_clear_bit_unlock_is_negative_byte(unsigned int nr,
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	old = arch_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
+	old = raw_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
 	return !!(old & BIT(7));
 }
 #define arch_clear_bit_unlock_is_negative_byte arch_clear_bit_unlock_is_negative_byte
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d3cbb6c16babf..6e76b9dba00e7 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -119,7 +119,7 @@ extern void ct_idle_exit(void);
  */
 static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
 {
-	return !(arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX);
+	return !(raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX);
 }
 
 /*
@@ -128,7 +128,7 @@ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
  */
 static __always_inline unsigned long ct_state_inc(int incby)
 {
-	return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
+	return raw_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
 }
 
 static __always_inline bool warn_rcu_enter(void)
diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h
index fdd537ea513ff..bbff5f7f88030 100644
--- a/include/linux/context_tracking_state.h
+++ b/include/linux/context_tracking_state.h
@@ -51,7 +51,7 @@ DECLARE_PER_CPU(struct context_tracking, context_tracking);
 #ifdef CONFIG_CONTEXT_TRACKING_USER
 static __always_inline int __ct_state(void)
 {
-	return arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
+	return raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
 }
 #endif
 
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index ca736b05ec7b0..0d2e2a38b92d0 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -1071,7 +1071,7 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
  */
 static __always_inline unsigned int num_online_cpus(void)
 {
-	return arch_atomic_read(&__num_online_cpus);
+	return raw_atomic_read(&__num_online_cpus);
 }
 #define num_possible_cpus()	cpumask_weight(cpu_possible_mask)
 #define num_present_cpus()	cpumask_weight(cpu_present_mask)
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 4e968ebadce60..f0a949b7c9733 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -257,7 +257,7 @@ extern enum jump_label_type jump_label_init_type(struct jump_entry *entry);
 
 static __always_inline int static_key_count(struct static_key *key)
 {
-	return arch_atomic_read(&key->enabled);
+	return raw_atomic_read(&key->enabled);
 }
 
 static __always_inline void jump_label_init(void)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index a09f1c19336ae..6ef0b35fc28c5 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -510,7 +510,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
 			 * In this case we don't care about any concurrency/ordering.
 			 */
 			if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
-				arch_atomic_set(&ct->state, state);
+				raw_atomic_set(&ct->state, state);
 		} else {
 			/*
 			 * Even if context tracking is disabled on this CPU, because it's outside
@@ -527,7 +527,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
 			 */
 			if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
 				/* Tracking for vtime only, no concurrent RCU EQS accounting */
-				arch_atomic_set(&ct->state, state);
+				raw_atomic_set(&ct->state, state);
 			} else {
 				/*
 				 * Tracking for vtime and RCU EQS. Make sure we don't race
@@ -535,7 +535,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
 				 * RCU only requires RCU_DYNTICKS_IDX increments to be fully
 				 * ordered.
 				 */
-				arch_atomic_add(state, &ct->state);
+				raw_atomic_add(state, &ct->state);
 			}
 		}
 	}
@@ -630,12 +630,12 @@ void noinstr __ct_user_exit(enum ctx_state state)
 			 * In this case we don't care about any concurrency/ordering.
 			 */
 			if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
-				arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+				raw_atomic_set(&ct->state, CONTEXT_KERNEL);
 
 		} else {
 			if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
 				/* Tracking for vtime only, no concurrent RCU EQS accounting */
-				arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+				raw_atomic_set(&ct->state, CONTEXT_KERNEL);
 			} else {
 				/*
 				 * Tracking for vtime and RCU EQS. Make sure we don't race
@@ -643,7 +643,7 @@ void noinstr __ct_user_exit(enum ctx_state state)
 				 * RCU only requires RCU_DYNTICKS_IDX increments to be fully
 				 * ordered.
 				 */
-				arch_atomic_sub(state, &ct->state);
+				raw_atomic_sub(state, &ct->state);
 			}
 		}
 	}
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index b5cc2b53464de..71443cff31f0d 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -287,7 +287,7 @@ static __always_inline u64 sched_clock_local(struct sched_clock_data *scd)
 	clock = wrap_max(clock, min_clock);
 	clock = wrap_min(clock, max_clock);
 
-	if (!arch_try_cmpxchg64(&scd->clock, &old_clock, clock))
+	if (!raw_try_cmpxchg64(&scd->clock, &old_clock, clock))
 		goto again;
 
 	return clock;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 19/26] locking/atomic: scripts: build raw_atomic_long*() directly
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (16 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 18/26] locking/atomic: treewide: use raw_atomic*_<op>() Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 21/26] locking/atomic: scripts: split pfx/name/sfx/order Mark Rutland
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Now that arch_atomic*() usage is limited to the atomic headers, we no
longer have any users of arch_atomic_long_*(), and can generate
raw_atomic_long_*() directly.

Generate the raw_atomic_long_*() ops directly, rather than generating
arch_atomic_long_*() ops and wrapping them as raw_atomic_long_*().
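
As an illustrative sketch (condensed, not the literal generated code):
the ops are generated in two complete sets, one built on the
raw_atomic64_*() ops for 64-bit and one built on the raw_atomic_*()
ops for 32-bit, so each op effectively ends up with the following
shape:

| static __always_inline long
| raw_atomic_long_read(const atomic_long_t *v)
| {
| #ifdef CONFIG_64BIT
| 	return raw_atomic64_read(v);
| #else
| 	return raw_atomic_read(v);
| #endif
| }

Since these now build directly on the raw_atomic*() ops,
<linux/atomic/atomic-long.h> must be included after
<linux/atomic/atomic-raw.h>, hence the include reordering in
<linux/atomic.h>.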

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/atomic.h             |   2 +-
 include/linux/atomic/atomic-long.h | 682 ++++++++++++++---------------
 include/linux/atomic/atomic-raw.h  | 512 +---------------------
 scripts/atomic/gen-atomic-long.sh  |   4 +-
 scripts/atomic/gen-atomic-raw.sh   |   4 -
 5 files changed, 345 insertions(+), 859 deletions(-)

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 127f5dc63a7df..296cfae0389fe 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -78,8 +78,8 @@
 })
 
 #include <linux/atomic/atomic-arch-fallback.h>
-#include <linux/atomic/atomic-long.h>
 #include <linux/atomic/atomic-raw.h>
+#include <linux/atomic/atomic-long.h>
 #include <linux/atomic/atomic-instrumented.h>
 
 #endif /* _LINUX_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 2fc51ba66bebd..92dc82ce1ce6d 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -24,1027 +24,1027 @@ typedef atomic_t atomic_long_t;
 #ifdef CONFIG_64BIT
 
 static __always_inline long
-arch_atomic_long_read(const atomic_long_t *v)
+raw_atomic_long_read(const atomic_long_t *v)
 {
-	return arch_atomic64_read(v);
+	return raw_atomic64_read(v);
 }
 
 static __always_inline long
-arch_atomic_long_read_acquire(const atomic_long_t *v)
+raw_atomic_long_read_acquire(const atomic_long_t *v)
 {
-	return arch_atomic64_read_acquire(v);
+	return raw_atomic64_read_acquire(v);
 }
 
 static __always_inline void
-arch_atomic_long_set(atomic_long_t *v, long i)
+raw_atomic_long_set(atomic_long_t *v, long i)
 {
-	arch_atomic64_set(v, i);
+	raw_atomic64_set(v, i);
 }
 
 static __always_inline void
-arch_atomic_long_set_release(atomic_long_t *v, long i)
+raw_atomic_long_set_release(atomic_long_t *v, long i)
 {
-	arch_atomic64_set_release(v, i);
+	raw_atomic64_set_release(v, i);
 }
 
 static __always_inline void
-arch_atomic_long_add(long i, atomic_long_t *v)
+raw_atomic_long_add(long i, atomic_long_t *v)
 {
-	arch_atomic64_add(i, v);
+	raw_atomic64_add(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return(long i, atomic_long_t *v)
+raw_atomic_long_add_return(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_return(i, v);
+	return raw_atomic64_add_return(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_return_acquire(i, v);
+	return raw_atomic64_add_return_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return_release(long i, atomic_long_t *v)
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_return_release(i, v);
+	return raw_atomic64_add_return_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_return_relaxed(i, v);
+	return raw_atomic64_add_return_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_add(i, v);
+	return raw_atomic64_fetch_add(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_add_acquire(i, v);
+	return raw_atomic64_fetch_add_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_add_release(i, v);
+	return raw_atomic64_fetch_add_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_add_relaxed(i, v);
+	return raw_atomic64_fetch_add_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_sub(long i, atomic_long_t *v)
+raw_atomic_long_sub(long i, atomic_long_t *v)
 {
-	arch_atomic64_sub(i, v);
+	raw_atomic64_sub(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return(long i, atomic_long_t *v)
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
 {
-	return arch_atomic64_sub_return(i, v);
+	return raw_atomic64_sub_return(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_sub_return_acquire(i, v);
+	return raw_atomic64_sub_return_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return_release(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_sub_return_release(i, v);
+	return raw_atomic64_sub_return_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_sub_return_relaxed(i, v);
+	return raw_atomic64_sub_return_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_sub(i, v);
+	return raw_atomic64_fetch_sub(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_sub_acquire(i, v);
+	return raw_atomic64_fetch_sub_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_sub_release(i, v);
+	return raw_atomic64_fetch_sub_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_sub_relaxed(i, v);
+	return raw_atomic64_fetch_sub_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_inc(atomic_long_t *v)
+raw_atomic_long_inc(atomic_long_t *v)
 {
-	arch_atomic64_inc(v);
+	raw_atomic64_inc(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return(atomic_long_t *v)
+raw_atomic_long_inc_return(atomic_long_t *v)
 {
-	return arch_atomic64_inc_return(v);
+	return raw_atomic64_inc_return(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return_acquire(atomic_long_t *v)
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
 {
-	return arch_atomic64_inc_return_acquire(v);
+	return raw_atomic64_inc_return_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return_release(atomic_long_t *v)
+raw_atomic_long_inc_return_release(atomic_long_t *v)
 {
-	return arch_atomic64_inc_return_release(v);
+	return raw_atomic64_inc_return_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return_relaxed(atomic_long_t *v)
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
 {
-	return arch_atomic64_inc_return_relaxed(v);
+	return raw_atomic64_inc_return_relaxed(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc(atomic_long_t *v)
+raw_atomic_long_fetch_inc(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_inc(v);
+	return raw_atomic64_fetch_inc(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_inc_acquire(v);
+	return raw_atomic64_fetch_inc_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc_release(atomic_long_t *v)
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_inc_release(v);
+	return raw_atomic64_fetch_inc_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_inc_relaxed(v);
+	return raw_atomic64_fetch_inc_relaxed(v);
 }
 
 static __always_inline void
-arch_atomic_long_dec(atomic_long_t *v)
+raw_atomic_long_dec(atomic_long_t *v)
 {
-	arch_atomic64_dec(v);
+	raw_atomic64_dec(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return(atomic_long_t *v)
+raw_atomic_long_dec_return(atomic_long_t *v)
 {
-	return arch_atomic64_dec_return(v);
+	return raw_atomic64_dec_return(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return_acquire(atomic_long_t *v)
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
 {
-	return arch_atomic64_dec_return_acquire(v);
+	return raw_atomic64_dec_return_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return_release(atomic_long_t *v)
+raw_atomic_long_dec_return_release(atomic_long_t *v)
 {
-	return arch_atomic64_dec_return_release(v);
+	return raw_atomic64_dec_return_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return_relaxed(atomic_long_t *v)
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
 {
-	return arch_atomic64_dec_return_relaxed(v);
+	return raw_atomic64_dec_return_relaxed(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec(atomic_long_t *v)
+raw_atomic_long_fetch_dec(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_dec(v);
+	return raw_atomic64_fetch_dec(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_dec_acquire(v);
+	return raw_atomic64_fetch_dec_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec_release(atomic_long_t *v)
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_dec_release(v);
+	return raw_atomic64_fetch_dec_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
 {
-	return arch_atomic64_fetch_dec_relaxed(v);
+	return raw_atomic64_fetch_dec_relaxed(v);
 }
 
 static __always_inline void
-arch_atomic_long_and(long i, atomic_long_t *v)
+raw_atomic_long_and(long i, atomic_long_t *v)
 {
-	arch_atomic64_and(i, v);
+	raw_atomic64_and(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_and(i, v);
+	return raw_atomic64_fetch_and(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_and_acquire(i, v);
+	return raw_atomic64_fetch_and_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_and_release(i, v);
+	return raw_atomic64_fetch_and_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_and_relaxed(i, v);
+	return raw_atomic64_fetch_and_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_andnot(long i, atomic_long_t *v)
+raw_atomic_long_andnot(long i, atomic_long_t *v)
 {
-	arch_atomic64_andnot(i, v);
+	raw_atomic64_andnot(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_andnot(i, v);
+	return raw_atomic64_fetch_andnot(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_andnot_acquire(i, v);
+	return raw_atomic64_fetch_andnot_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_andnot_release(i, v);
+	return raw_atomic64_fetch_andnot_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_andnot_relaxed(i, v);
+	return raw_atomic64_fetch_andnot_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_or(long i, atomic_long_t *v)
+raw_atomic_long_or(long i, atomic_long_t *v)
 {
-	arch_atomic64_or(i, v);
+	raw_atomic64_or(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_or(i, v);
+	return raw_atomic64_fetch_or(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_or_acquire(i, v);
+	return raw_atomic64_fetch_or_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_or_release(i, v);
+	return raw_atomic64_fetch_or_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_or_relaxed(i, v);
+	return raw_atomic64_fetch_or_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_xor(long i, atomic_long_t *v)
+raw_atomic_long_xor(long i, atomic_long_t *v)
 {
-	arch_atomic64_xor(i, v);
+	raw_atomic64_xor(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_xor(i, v);
+	return raw_atomic64_fetch_xor(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_xor_acquire(i, v);
+	return raw_atomic64_fetch_xor_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_xor_release(i, v);
+	return raw_atomic64_fetch_xor_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_fetch_xor_relaxed(i, v);
+	return raw_atomic64_fetch_xor_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long i)
 {
-	return arch_atomic64_xchg(v, i);
+	return raw_atomic64_xchg(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
 {
-	return arch_atomic64_xchg_acquire(v, i);
+	return raw_atomic64_xchg_acquire(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
 {
-	return arch_atomic64_xchg_release(v, i);
+	return raw_atomic64_xchg_release(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
 {
-	return arch_atomic64_xchg_relaxed(v, i);
+	return raw_atomic64_xchg_relaxed(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic64_cmpxchg(v, old, new);
+	return raw_atomic64_cmpxchg(v, old, new);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic64_cmpxchg_acquire(v, old, new);
+	return raw_atomic64_cmpxchg_acquire(v, old, new);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic64_cmpxchg_release(v, old, new);
+	return raw_atomic64_cmpxchg_release(v, old, new);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic64_cmpxchg_relaxed(v, old, new);
+	return raw_atomic64_cmpxchg_relaxed(v, old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic64_try_cmpxchg(v, (s64 *)old, new);
+	return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
+	return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
+	return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
+	return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_sub_and_test(long i, atomic_long_t *v)
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
 {
-	return arch_atomic64_sub_and_test(i, v);
+	return raw_atomic64_sub_and_test(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_dec_and_test(atomic_long_t *v)
+raw_atomic_long_dec_and_test(atomic_long_t *v)
 {
-	return arch_atomic64_dec_and_test(v);
+	return raw_atomic64_dec_and_test(v);
 }
 
 static __always_inline bool
-arch_atomic_long_inc_and_test(atomic_long_t *v)
+raw_atomic_long_inc_and_test(atomic_long_t *v)
 {
-	return arch_atomic64_inc_and_test(v);
+	return raw_atomic64_inc_and_test(v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative(long i, atomic_long_t *v)
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_negative(i, v);
+	return raw_atomic64_add_negative(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_negative_acquire(i, v);
+	return raw_atomic64_add_negative_acquire(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_negative_release(i, v);
+	return raw_atomic64_add_negative_release(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic64_add_negative_relaxed(i, v);
+	return raw_atomic64_add_negative_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
 {
-	return arch_atomic64_fetch_add_unless(v, a, u);
+	return raw_atomic64_fetch_add_unless(v, a, u);
 }
 
 static __always_inline bool
-arch_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
 {
-	return arch_atomic64_add_unless(v, a, u);
+	return raw_atomic64_add_unless(v, a, u);
 }
 
 static __always_inline bool
-arch_atomic_long_inc_not_zero(atomic_long_t *v)
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
 {
-	return arch_atomic64_inc_not_zero(v);
+	return raw_atomic64_inc_not_zero(v);
 }
 
 static __always_inline bool
-arch_atomic_long_inc_unless_negative(atomic_long_t *v)
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
 {
-	return arch_atomic64_inc_unless_negative(v);
+	return raw_atomic64_inc_unless_negative(v);
 }
 
 static __always_inline bool
-arch_atomic_long_dec_unless_positive(atomic_long_t *v)
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
 {
-	return arch_atomic64_dec_unless_positive(v);
+	return raw_atomic64_dec_unless_positive(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_if_positive(atomic_long_t *v)
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
 {
-	return arch_atomic64_dec_if_positive(v);
+	return raw_atomic64_dec_if_positive(v);
 }
 
 #else /* CONFIG_64BIT */
 
 static __always_inline long
-arch_atomic_long_read(const atomic_long_t *v)
+raw_atomic_long_read(const atomic_long_t *v)
 {
-	return arch_atomic_read(v);
+	return raw_atomic_read(v);
 }
 
 static __always_inline long
-arch_atomic_long_read_acquire(const atomic_long_t *v)
+raw_atomic_long_read_acquire(const atomic_long_t *v)
 {
-	return arch_atomic_read_acquire(v);
+	return raw_atomic_read_acquire(v);
 }
 
 static __always_inline void
-arch_atomic_long_set(atomic_long_t *v, long i)
+raw_atomic_long_set(atomic_long_t *v, long i)
 {
-	arch_atomic_set(v, i);
+	raw_atomic_set(v, i);
 }
 
 static __always_inline void
-arch_atomic_long_set_release(atomic_long_t *v, long i)
+raw_atomic_long_set_release(atomic_long_t *v, long i)
 {
-	arch_atomic_set_release(v, i);
+	raw_atomic_set_release(v, i);
 }
 
 static __always_inline void
-arch_atomic_long_add(long i, atomic_long_t *v)
+raw_atomic_long_add(long i, atomic_long_t *v)
 {
-	arch_atomic_add(i, v);
+	raw_atomic_add(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return(long i, atomic_long_t *v)
+raw_atomic_long_add_return(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_return(i, v);
+	return raw_atomic_add_return(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_return_acquire(i, v);
+	return raw_atomic_add_return_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return_release(long i, atomic_long_t *v)
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_return_release(i, v);
+	return raw_atomic_add_return_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_return_relaxed(i, v);
+	return raw_atomic_add_return_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_add(i, v);
+	return raw_atomic_fetch_add(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_add_acquire(i, v);
+	return raw_atomic_fetch_add_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_add_release(i, v);
+	return raw_atomic_fetch_add_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_add_relaxed(i, v);
+	return raw_atomic_fetch_add_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_sub(long i, atomic_long_t *v)
+raw_atomic_long_sub(long i, atomic_long_t *v)
 {
-	arch_atomic_sub(i, v);
+	raw_atomic_sub(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return(long i, atomic_long_t *v)
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
 {
-	return arch_atomic_sub_return(i, v);
+	return raw_atomic_sub_return(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_sub_return_acquire(i, v);
+	return raw_atomic_sub_return_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return_release(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_sub_return_release(i, v);
+	return raw_atomic_sub_return_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_sub_return_relaxed(i, v);
+	return raw_atomic_sub_return_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_sub(i, v);
+	return raw_atomic_fetch_sub(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_sub_acquire(i, v);
+	return raw_atomic_fetch_sub_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_sub_release(i, v);
+	return raw_atomic_fetch_sub_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_sub_relaxed(i, v);
+	return raw_atomic_fetch_sub_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_inc(atomic_long_t *v)
+raw_atomic_long_inc(atomic_long_t *v)
 {
-	arch_atomic_inc(v);
+	raw_atomic_inc(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return(atomic_long_t *v)
+raw_atomic_long_inc_return(atomic_long_t *v)
 {
-	return arch_atomic_inc_return(v);
+	return raw_atomic_inc_return(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return_acquire(atomic_long_t *v)
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
 {
-	return arch_atomic_inc_return_acquire(v);
+	return raw_atomic_inc_return_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return_release(atomic_long_t *v)
+raw_atomic_long_inc_return_release(atomic_long_t *v)
 {
-	return arch_atomic_inc_return_release(v);
+	return raw_atomic_inc_return_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_inc_return_relaxed(atomic_long_t *v)
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
 {
-	return arch_atomic_inc_return_relaxed(v);
+	return raw_atomic_inc_return_relaxed(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc(atomic_long_t *v)
+raw_atomic_long_fetch_inc(atomic_long_t *v)
 {
-	return arch_atomic_fetch_inc(v);
+	return raw_atomic_fetch_inc(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
 {
-	return arch_atomic_fetch_inc_acquire(v);
+	return raw_atomic_fetch_inc_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc_release(atomic_long_t *v)
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
 {
-	return arch_atomic_fetch_inc_release(v);
+	return raw_atomic_fetch_inc_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
 {
-	return arch_atomic_fetch_inc_relaxed(v);
+	return raw_atomic_fetch_inc_relaxed(v);
 }
 
 static __always_inline void
-arch_atomic_long_dec(atomic_long_t *v)
+raw_atomic_long_dec(atomic_long_t *v)
 {
-	arch_atomic_dec(v);
+	raw_atomic_dec(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return(atomic_long_t *v)
+raw_atomic_long_dec_return(atomic_long_t *v)
 {
-	return arch_atomic_dec_return(v);
+	return raw_atomic_dec_return(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return_acquire(atomic_long_t *v)
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
 {
-	return arch_atomic_dec_return_acquire(v);
+	return raw_atomic_dec_return_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return_release(atomic_long_t *v)
+raw_atomic_long_dec_return_release(atomic_long_t *v)
 {
-	return arch_atomic_dec_return_release(v);
+	return raw_atomic_dec_return_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_return_relaxed(atomic_long_t *v)
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
 {
-	return arch_atomic_dec_return_relaxed(v);
+	return raw_atomic_dec_return_relaxed(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec(atomic_long_t *v)
+raw_atomic_long_fetch_dec(atomic_long_t *v)
 {
-	return arch_atomic_fetch_dec(v);
+	return raw_atomic_fetch_dec(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
 {
-	return arch_atomic_fetch_dec_acquire(v);
+	return raw_atomic_fetch_dec_acquire(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec_release(atomic_long_t *v)
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
 {
-	return arch_atomic_fetch_dec_release(v);
+	return raw_atomic_fetch_dec_release(v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
 {
-	return arch_atomic_fetch_dec_relaxed(v);
+	return raw_atomic_fetch_dec_relaxed(v);
 }
 
 static __always_inline void
-arch_atomic_long_and(long i, atomic_long_t *v)
+raw_atomic_long_and(long i, atomic_long_t *v)
 {
-	arch_atomic_and(i, v);
+	raw_atomic_and(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_and(i, v);
+	return raw_atomic_fetch_and(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_and_acquire(i, v);
+	return raw_atomic_fetch_and_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_and_release(i, v);
+	return raw_atomic_fetch_and_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_and_relaxed(i, v);
+	return raw_atomic_fetch_and_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_andnot(long i, atomic_long_t *v)
+raw_atomic_long_andnot(long i, atomic_long_t *v)
 {
-	arch_atomic_andnot(i, v);
+	raw_atomic_andnot(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_andnot(i, v);
+	return raw_atomic_fetch_andnot(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_andnot_acquire(i, v);
+	return raw_atomic_fetch_andnot_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_andnot_release(i, v);
+	return raw_atomic_fetch_andnot_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_andnot_relaxed(i, v);
+	return raw_atomic_fetch_andnot_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_or(long i, atomic_long_t *v)
+raw_atomic_long_or(long i, atomic_long_t *v)
 {
-	arch_atomic_or(i, v);
+	raw_atomic_or(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_or(i, v);
+	return raw_atomic_fetch_or(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_or_acquire(i, v);
+	return raw_atomic_fetch_or_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_or_release(i, v);
+	return raw_atomic_fetch_or_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_or_relaxed(i, v);
+	return raw_atomic_fetch_or_relaxed(i, v);
 }
 
 static __always_inline void
-arch_atomic_long_xor(long i, atomic_long_t *v)
+raw_atomic_long_xor(long i, atomic_long_t *v)
 {
-	arch_atomic_xor(i, v);
+	raw_atomic_xor(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_xor(i, v);
+	return raw_atomic_fetch_xor(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_xor_acquire(i, v);
+	return raw_atomic_fetch_xor_acquire(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_xor_release(i, v);
+	return raw_atomic_fetch_xor_release(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_fetch_xor_relaxed(i, v);
+	return raw_atomic_fetch_xor_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long i)
 {
-	return arch_atomic_xchg(v, i);
+	return raw_atomic_xchg(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
 {
-	return arch_atomic_xchg_acquire(v, i);
+	return raw_atomic_xchg_acquire(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
 {
-	return arch_atomic_xchg_release(v, i);
+	return raw_atomic_xchg_release(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
 {
-	return arch_atomic_xchg_relaxed(v, i);
+	return raw_atomic_xchg_relaxed(v, i);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic_cmpxchg(v, old, new);
+	return raw_atomic_cmpxchg(v, old, new);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic_cmpxchg_acquire(v, old, new);
+	return raw_atomic_cmpxchg_acquire(v, old, new);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic_cmpxchg_release(v, old, new);
+	return raw_atomic_cmpxchg_release(v, old, new);
 }
 
 static __always_inline long
-arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
 {
-	return arch_atomic_cmpxchg_relaxed(v, old, new);
+	return raw_atomic_cmpxchg_relaxed(v, old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic_try_cmpxchg(v, (int *)old, new);
+	return raw_atomic_try_cmpxchg(v, (int *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic_try_cmpxchg_acquire(v, (int *)old, new);
+	return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic_try_cmpxchg_release(v, (int *)old, new);
+	return raw_atomic_try_cmpxchg_release(v, (int *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
 {
-	return arch_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+	return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
 }
 
 static __always_inline bool
-arch_atomic_long_sub_and_test(long i, atomic_long_t *v)
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
 {
-	return arch_atomic_sub_and_test(i, v);
+	return raw_atomic_sub_and_test(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_dec_and_test(atomic_long_t *v)
+raw_atomic_long_dec_and_test(atomic_long_t *v)
 {
-	return arch_atomic_dec_and_test(v);
+	return raw_atomic_dec_and_test(v);
 }
 
 static __always_inline bool
-arch_atomic_long_inc_and_test(atomic_long_t *v)
+raw_atomic_long_inc_and_test(atomic_long_t *v)
 {
-	return arch_atomic_inc_and_test(v);
+	return raw_atomic_inc_and_test(v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative(long i, atomic_long_t *v)
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_negative(i, v);
+	return raw_atomic_add_negative(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_negative_acquire(i, v);
+	return raw_atomic_add_negative_acquire(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_negative_release(i, v);
+	return raw_atomic_add_negative_release(i, v);
 }
 
 static __always_inline bool
-arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
 {
-	return arch_atomic_add_negative_relaxed(i, v);
+	return raw_atomic_add_negative_relaxed(i, v);
 }
 
 static __always_inline long
-arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
 {
-	return arch_atomic_fetch_add_unless(v, a, u);
+	return raw_atomic_fetch_add_unless(v, a, u);
 }
 
 static __always_inline bool
-arch_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
 {
-	return arch_atomic_add_unless(v, a, u);
+	return raw_atomic_add_unless(v, a, u);
 }
 
 static __always_inline bool
-arch_atomic_long_inc_not_zero(atomic_long_t *v)
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
 {
-	return arch_atomic_inc_not_zero(v);
+	return raw_atomic_inc_not_zero(v);
 }
 
 static __always_inline bool
-arch_atomic_long_inc_unless_negative(atomic_long_t *v)
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
 {
-	return arch_atomic_inc_unless_negative(v);
+	return raw_atomic_inc_unless_negative(v);
 }
 
 static __always_inline bool
-arch_atomic_long_dec_unless_positive(atomic_long_t *v)
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
 {
-	return arch_atomic_dec_unless_positive(v);
+	return raw_atomic_dec_unless_positive(v);
 }
 
 static __always_inline long
-arch_atomic_long_dec_if_positive(atomic_long_t *v)
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
 {
-	return arch_atomic_dec_if_positive(v);
+	return raw_atomic_dec_if_positive(v);
 }
 
 #endif /* CONFIG_64BIT */
 #endif /* _LINUX_ATOMIC_LONG_H */
-// a194c07d7d2f4b0e178d3c118c919775d5d65f50
+// 108784846d3bbbb201b8dabe621c5dc30b216206
diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h
index 83ff0269657e7..8b2fc04cf8c54 100644
--- a/include/linux/atomic/atomic-raw.h
+++ b/include/linux/atomic/atomic-raw.h
@@ -1026,516 +1026,6 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
 	return arch_atomic64_dec_if_positive(v);
 }
 
-static __always_inline long
-raw_atomic_long_read(const atomic_long_t *v)
-{
-	return arch_atomic_long_read(v);
-}
-
-static __always_inline long
-raw_atomic_long_read_acquire(const atomic_long_t *v)
-{
-	return arch_atomic_long_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_long_set(atomic_long_t *v, long i)
-{
-	arch_atomic_long_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_set_release(atomic_long_t *v, long i)
-{
-	arch_atomic_long_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_add(long i, atomic_long_t *v)
-{
-	arch_atomic_long_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_add_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_add_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_sub(long i, atomic_long_t *v)
-{
-	arch_atomic_long_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_sub_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_sub_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_sub_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_sub_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_sub_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_sub_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_inc(atomic_long_t *v)
-{
-	arch_atomic_long_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return(atomic_long_t *v)
-{
-	return arch_atomic_long_inc_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_acquire(atomic_long_t *v)
-{
-	return arch_atomic_long_inc_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_release(atomic_long_t *v)
-{
-	return arch_atomic_long_inc_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
-{
-	return arch_atomic_long_inc_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_inc_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_release(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_inc_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_dec(atomic_long_t *v)
-{
-	arch_atomic_long_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return(atomic_long_t *v)
-{
-	return arch_atomic_long_dec_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_acquire(atomic_long_t *v)
-{
-	return arch_atomic_long_dec_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_release(atomic_long_t *v)
-{
-	return arch_atomic_long_dec_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
-{
-	return arch_atomic_long_dec_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_dec_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_release(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_dec_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_and(long i, atomic_long_t *v)
-{
-	arch_atomic_long_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_and_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_and_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_andnot(long i, atomic_long_t *v)
-{
-	arch_atomic_long_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_andnot_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_or(long i, atomic_long_t *v)
-{
-	arch_atomic_long_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_or_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_or_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_xor(long i, atomic_long_t *v)
-{
-	arch_atomic_long_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_xor_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_xor_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
-{
-	return arch_atomic_long_xchg(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
-{
-	return arch_atomic_long_xchg_acquire(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
-{
-	return arch_atomic_long_xchg_release(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
-{
-	return arch_atomic_long_xchg_relaxed(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
-{
-	return arch_atomic_long_cmpxchg(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
-{
-	return arch_atomic_long_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
-{
-	return arch_atomic_long_cmpxchg_release(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
-{
-	return arch_atomic_long_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
-{
-	return arch_atomic_long_try_cmpxchg(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
-{
-	return arch_atomic_long_try_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
-{
-	return arch_atomic_long_try_cmpxchg_release(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
-{
-	return arch_atomic_long_try_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_and_test(atomic_long_t *v)
-{
-	return arch_atomic_long_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_and_test(atomic_long_t *v)
-{
-	return arch_atomic_long_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
-{
-	return arch_atomic_long_add_negative_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
-{
-	return arch_atomic_long_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
-{
-	return arch_atomic_long_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_not_zero(atomic_long_t *v)
-{
-	return arch_atomic_long_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_unless_negative(atomic_long_t *v)
-{
-	return arch_atomic_long_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_unless_positive(atomic_long_t *v)
-{
-	return arch_atomic_long_dec_unless_positive(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_if_positive(atomic_long_t *v)
-{
-	return arch_atomic_long_dec_if_positive(v);
-}
-
 #define raw_xchg(...) \
 	arch_xchg(__VA_ARGS__)
 
@@ -1642,4 +1132,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
 	arch_try_cmpxchg128_local(__VA_ARGS__)
 
 #endif /* _LINUX_ATOMIC_RAW_H */
-// 01d54200571b3857755a07c10074a4fd58cef6b1
+// b23ed4424e85200e200ded094522e1d743b3a5b1
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index eda89cea6e1d1..75e91d6da30d3 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -47,9 +47,9 @@ gen_proto_order_variant()
 
 cat <<EOF
 static __always_inline ${ret}
-arch_atomic_long_${name}(${params})
+raw_atomic_long_${name}(${params})
 {
-	${retstmt}arch_${atomic}_${name}(${argscast});
+	${retstmt}raw_${atomic}_${name}(${argscast});
 }
 
 EOF
diff --git a/scripts/atomic/gen-atomic-raw.sh b/scripts/atomic/gen-atomic-raw.sh
index ba8d136f30e4c..c7e3c52b49279 100755
--- a/scripts/atomic/gen-atomic-raw.sh
+++ b/scripts/atomic/gen-atomic-raw.sh
@@ -63,10 +63,6 @@ grep '^[a-z]' "$1" | while read name meta args; do
 	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
 done
 
-grep '^[a-z]' "$1" | while read name meta args; do
-	gen_proto "${meta}" "${name}" "atomic_long" "long" ${args}
-done
-
 for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128" "try_cmpxchg" "try_cmpxchg64" "try_cmpxchg128"; do
 	for order in "" "_acquire" "_release" "_relaxed"; do
 		gen_xchg "${xchg}" "${order}"
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 21/26] locking/atomic: scripts: split pfx/name/sfx/order
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (17 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 19/26] locking/atomic: scripts: build raw_atomic_long*() directly Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 22/26] locking/atomic: scripts: simplify raw_atomic_long*() definitions Mark Rutland
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Currently gen-atomic-long.sh's gen_proto_order_variant() function
combines the pfx/name/sfx/order variables immediately, unlike other
functions in gen-atomic-*.sh.

This is fine today, but subsequent patches will require the individual
pfx/name/sfx/order variables within gen-atomic-long.sh's
gen_proto_order_variant() function. In preparation for this, split the
variables in the style of other gen-atomic-*.sh scripts.
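
As an illustrative sketch (assuming the usual meta-table decomposition
used by the other gen-atomic-*.sh scripts), an op such as
fetch_add_acquire is then recombined from its parts:

| # pfx="fetch_" name="add" sfx="" order="_acquire"
| local atomicname="${pfx}${name}${sfx}${order}"
| # atomicname="fetch_add_acquire"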

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 scripts/atomic/gen-atomic-long.sh | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 75e91d6da30d3..13832171f7219 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -36,10 +36,15 @@ gen_args_cast()
 gen_proto_order_variant()
 {
 	local meta="$1"; shift
-	local name="$1$2$3$4"; shift; shift; shift; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
 	local atomic="$1"; shift
 	local int="$1"; shift
 
+	local atomicname="${pfx}${name}${sfx}${order}"
+
 	local ret="$(gen_ret_type "${meta}" "long")"
 	local params="$(gen_params "long" "atomic_long" "$@")"
 	local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
@@ -47,9 +52,9 @@ gen_proto_order_variant()
 
 cat <<EOF
 static __always_inline ${ret}
-raw_atomic_long_${name}(${params})
+raw_atomic_long_${atomicname}(${params})
 {
-	${retstmt}raw_${atomic}_${name}(${argscast});
+	${retstmt}raw_${atomic}_${atomicname}(${argscast});
 }
 
 EOF
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 22/26] locking/atomic: scripts: simplify raw_atomic_long*() definitions
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (18 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 21/26] locking/atomic: scripts: split pfx/name/sfx/order Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 12:24 ` [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation Mark Rutland
                   ` (3 subsequent siblings)
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Currently, atomic-long is split into two sections, one defining the
raw_atomic_long_*() ops for CONFIG_64BIT, and one defining the
raw_atomic_long_*() ops for !CONFIG_64BIT.

With many lines elided, this looks like:

| #ifdef CONFIG_64BIT
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
|         return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| }
| ...
| #else /* CONFIG_64BIT */
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
|         return raw_atomic_try_cmpxchg(v, (int *)old, new);
| }
| ...
| #endif

The two definitions are spread far apart in the file, and duplicate the
prototype, making it hard to have a legible set of kerneldoc comments.

Make this simpler by defining the C prototype once, and writing the two
definitions inline. For example, the above becomes:

| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| #ifdef CONFIG_64BIT
|         return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| #else
|         return raw_atomic_try_cmpxchg(v, (int *)old, new);
| #endif
| }

As we now always have a single copy of the C prototype wrapping all the
potential definitions, there is an obvious single location for kerneldoc
comments. As a bonus, both the script and the generated file are
somewhat shorter.
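
For example (the comment wording below is illustrative only), a
kerneldoc block can now be attached to the single definition:

| /**
|  * raw_atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
|  * @v: pointer to atomic_long_t
|  * @old: pointer to long value to compare with
|  * @new: long value to assign
|  *
|  * Return: true if the exchange occurred, false otherwise.
|  */
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| ...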

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/atomic/atomic-long.h | 857 ++++++++++++-----------------
 scripts/atomic/gen-atomic-long.sh  |  27 +-
 2 files changed, 350 insertions(+), 534 deletions(-)

diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 92dc82ce1ce6d..63e0b4078ebd5 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -21,1030 +21,855 @@ typedef atomic_t atomic_long_t;
 #define atomic_long_cond_read_relaxed	atomic_cond_read_relaxed
 #endif
 
-#ifdef CONFIG_64BIT
-
-static __always_inline long
-raw_atomic_long_read(const atomic_long_t *v)
-{
-	return raw_atomic64_read(v);
-}
-
-static __always_inline long
-raw_atomic_long_read_acquire(const atomic_long_t *v)
-{
-	return raw_atomic64_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_long_set(atomic_long_t *v, long i)
-{
-	raw_atomic64_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_set_release(atomic_long_t *v, long i)
-{
-	raw_atomic64_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_add(long i, atomic_long_t *v)
-{
-	raw_atomic64_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_add_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_add_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_sub(long i, atomic_long_t *v)
-{
-	raw_atomic64_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return(long i, atomic_long_t *v)
-{
-	return raw_atomic64_sub_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_sub_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_sub_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_sub_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_sub_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_sub_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_inc(atomic_long_t *v)
-{
-	raw_atomic64_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return(atomic_long_t *v)
-{
-	return raw_atomic64_inc_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_acquire(atomic_long_t *v)
-{
-	return raw_atomic64_inc_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_release(atomic_long_t *v)
-{
-	return raw_atomic64_inc_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
-{
-	return raw_atomic64_inc_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_inc_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_release(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_inc_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_dec(atomic_long_t *v)
-{
-	raw_atomic64_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return(atomic_long_t *v)
-{
-	return raw_atomic64_dec_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_acquire(atomic_long_t *v)
-{
-	return raw_atomic64_dec_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_release(atomic_long_t *v)
-{
-	return raw_atomic64_dec_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
-{
-	return raw_atomic64_dec_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_dec_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_release(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_dec_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
-{
-	return raw_atomic64_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_and(long i, atomic_long_t *v)
-{
-	raw_atomic64_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_and_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_and_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_andnot(long i, atomic_long_t *v)
-{
-	raw_atomic64_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_andnot_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_or(long i, atomic_long_t *v)
-{
-	raw_atomic64_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_or_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_or_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_xor(long i, atomic_long_t *v)
-{
-	raw_atomic64_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_xor_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_xor_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
-{
-	return raw_atomic64_xchg(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
-{
-	return raw_atomic64_xchg_acquire(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
-{
-	return raw_atomic64_xchg_release(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
-{
-	return raw_atomic64_xchg_relaxed(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
-{
-	return raw_atomic64_cmpxchg(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
-{
-	return raw_atomic64_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
-{
-	return raw_atomic64_cmpxchg_release(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
-{
-	return raw_atomic64_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
-{
-	return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
-{
-	return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
-{
-	return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
-{
-	return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
-{
-	return raw_atomic64_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_and_test(atomic_long_t *v)
-{
-	return raw_atomic64_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_and_test(atomic_long_t *v)
-{
-	return raw_atomic64_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
-{
-	return raw_atomic64_add_negative_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
-{
-	return raw_atomic64_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
-{
-	return raw_atomic64_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_not_zero(atomic_long_t *v)
-{
-	return raw_atomic64_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_unless_negative(atomic_long_t *v)
-{
-	return raw_atomic64_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_unless_positive(atomic_long_t *v)
-{
-	return raw_atomic64_dec_unless_positive(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_if_positive(atomic_long_t *v)
-{
-	return raw_atomic64_dec_if_positive(v);
-}
-
-#else /* CONFIG_64BIT */
-
 static __always_inline long
 raw_atomic_long_read(const atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_read(v);
+#else
 	return raw_atomic_read(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_read_acquire(const atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_read_acquire(v);
+#else
 	return raw_atomic_read_acquire(v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_set(atomic_long_t *v, long i)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_set(v, i);
+#else
 	raw_atomic_set(v, i);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_set_release(atomic_long_t *v, long i)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_set_release(v, i);
+#else
 	raw_atomic_set_release(v, i);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_add(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_add(i, v);
+#else
 	raw_atomic_add(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_add_return(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_return(i, v);
+#else
 	return raw_atomic_add_return(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_return_acquire(i, v);
+#else
 	return raw_atomic_add_return_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_add_return_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_return_release(i, v);
+#else
 	return raw_atomic_add_return_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_return_relaxed(i, v);
+#else
 	return raw_atomic_add_return_relaxed(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_add(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_add(i, v);
+#else
 	return raw_atomic_fetch_add(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_add_acquire(i, v);
+#else
 	return raw_atomic_fetch_add_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_add_release(i, v);
+#else
 	return raw_atomic_fetch_add_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_add_relaxed(i, v);
+#else
 	return raw_atomic_fetch_add_relaxed(i, v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_sub(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_sub(i, v);
+#else
 	raw_atomic_sub(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_sub_return(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_sub_return(i, v);
+#else
 	return raw_atomic_sub_return(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_sub_return_acquire(i, v);
+#else
 	return raw_atomic_sub_return_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_sub_return_release(i, v);
+#else
 	return raw_atomic_sub_return_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_sub_return_relaxed(i, v);
+#else
 	return raw_atomic_sub_return_relaxed(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_sub(i, v);
+#else
 	return raw_atomic_fetch_sub(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_sub_acquire(i, v);
+#else
 	return raw_atomic_fetch_sub_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_sub_release(i, v);
+#else
 	return raw_atomic_fetch_sub_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_sub_relaxed(i, v);
+#else
 	return raw_atomic_fetch_sub_relaxed(i, v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_inc(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_inc(v);
+#else
 	raw_atomic_inc(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_inc_return(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_inc_return(v);
+#else
 	return raw_atomic_inc_return(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_inc_return_acquire(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_inc_return_acquire(v);
+#else
 	return raw_atomic_inc_return_acquire(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_inc_return_release(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_inc_return_release(v);
+#else
 	return raw_atomic_inc_return_release(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_inc_return_relaxed(v);
+#else
 	return raw_atomic_inc_return_relaxed(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_inc(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_inc(v);
+#else
 	return raw_atomic_fetch_inc(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_inc_acquire(v);
+#else
 	return raw_atomic_fetch_inc_acquire(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_inc_release(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_inc_release(v);
+#else
 	return raw_atomic_fetch_inc_release(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_inc_relaxed(v);
+#else
 	return raw_atomic_fetch_inc_relaxed(v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_dec(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_dec(v);
+#else
 	raw_atomic_dec(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_dec_return(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_dec_return(v);
+#else
 	return raw_atomic_dec_return(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_dec_return_acquire(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_dec_return_acquire(v);
+#else
 	return raw_atomic_dec_return_acquire(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_dec_return_release(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_dec_return_release(v);
+#else
 	return raw_atomic_dec_return_release(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_dec_return_relaxed(v);
+#else
 	return raw_atomic_dec_return_relaxed(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_dec(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_dec(v);
+#else
 	return raw_atomic_fetch_dec(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_dec_acquire(v);
+#else
 	return raw_atomic_fetch_dec_acquire(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_dec_release(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_dec_release(v);
+#else
 	return raw_atomic_fetch_dec_release(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_dec_relaxed(v);
+#else
 	return raw_atomic_fetch_dec_relaxed(v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_and(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_and(i, v);
+#else
 	raw_atomic_and(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_and(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_and(i, v);
+#else
 	return raw_atomic_fetch_and(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_and_acquire(i, v);
+#else
 	return raw_atomic_fetch_and_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_and_release(i, v);
+#else
 	return raw_atomic_fetch_and_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_and_relaxed(i, v);
+#else
 	return raw_atomic_fetch_and_relaxed(i, v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_andnot(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_andnot(i, v);
+#else
 	raw_atomic_andnot(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_andnot(i, v);
+#else
 	return raw_atomic_fetch_andnot(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_andnot_acquire(i, v);
+#else
 	return raw_atomic_fetch_andnot_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_andnot_release(i, v);
+#else
 	return raw_atomic_fetch_andnot_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_andnot_relaxed(i, v);
+#else
 	return raw_atomic_fetch_andnot_relaxed(i, v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_or(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_or(i, v);
+#else
 	raw_atomic_or(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_or(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_or(i, v);
+#else
 	return raw_atomic_fetch_or(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_or_acquire(i, v);
+#else
 	return raw_atomic_fetch_or_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_or_release(i, v);
+#else
 	return raw_atomic_fetch_or_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_or_relaxed(i, v);
+#else
 	return raw_atomic_fetch_or_relaxed(i, v);
+#endif
 }
 
 static __always_inline void
 raw_atomic_long_xor(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	raw_atomic64_xor(i, v);
+#else
 	raw_atomic_xor(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_xor(i, v);
+#else
 	return raw_atomic_fetch_xor(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_xor_acquire(i, v);
+#else
 	return raw_atomic_fetch_xor_acquire(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_xor_release(i, v);
+#else
 	return raw_atomic_fetch_xor_release(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_xor_relaxed(i, v);
+#else
 	return raw_atomic_fetch_xor_relaxed(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_xchg(atomic_long_t *v, long i)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_xchg(v, i);
+#else
 	return raw_atomic_xchg(v, i);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_xchg_acquire(v, i);
+#else
 	return raw_atomic_xchg_acquire(v, i);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_xchg_release(atomic_long_t *v, long i)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_xchg_release(v, i);
+#else
 	return raw_atomic_xchg_release(v, i);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_xchg_relaxed(v, i);
+#else
 	return raw_atomic_xchg_relaxed(v, i);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_cmpxchg(v, old, new);
+#else
 	return raw_atomic_cmpxchg(v, old, new);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_cmpxchg_acquire(v, old, new);
+#else
 	return raw_atomic_cmpxchg_acquire(v, old, new);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_cmpxchg_release(v, old, new);
+#else
 	return raw_atomic_cmpxchg_release(v, old, new);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_cmpxchg_relaxed(v, old, new);
+#else
 	return raw_atomic_cmpxchg_relaxed(v, old, new);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
+#else
 	return raw_atomic_try_cmpxchg(v, (int *)old, new);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
+#else
 	return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
+#else
 	return raw_atomic_try_cmpxchg_release(v, (int *)old, new);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
+#else
 	return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_sub_and_test(i, v);
+#else
 	return raw_atomic_sub_and_test(i, v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_dec_and_test(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_dec_and_test(v);
+#else
 	return raw_atomic_dec_and_test(v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_inc_and_test(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_inc_and_test(v);
+#else
 	return raw_atomic_inc_and_test(v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_add_negative(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_negative(i, v);
+#else
 	return raw_atomic_add_negative(i, v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_negative_acquire(i, v);
+#else
 	return raw_atomic_add_negative_acquire(i, v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_negative_release(i, v);
+#else
 	return raw_atomic_add_negative_release(i, v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_negative_relaxed(i, v);
+#else
 	return raw_atomic_add_negative_relaxed(i, v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_fetch_add_unless(v, a, u);
+#else
 	return raw_atomic_fetch_add_unless(v, a, u);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_add_unless(v, a, u);
+#else
 	return raw_atomic_add_unless(v, a, u);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_inc_not_zero(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_inc_not_zero(v);
+#else
 	return raw_atomic_inc_not_zero(v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_inc_unless_negative(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_inc_unless_negative(v);
+#else
 	return raw_atomic_inc_unless_negative(v);
+#endif
 }
 
 static __always_inline bool
 raw_atomic_long_dec_unless_positive(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_dec_unless_positive(v);
+#else
 	return raw_atomic_dec_unless_positive(v);
+#endif
 }
 
 static __always_inline long
 raw_atomic_long_dec_if_positive(atomic_long_t *v)
 {
+#ifdef CONFIG_64BIT
+	return raw_atomic64_dec_if_positive(v);
+#else
 	return raw_atomic_dec_if_positive(v);
+#endif
 }
 
-#endif /* CONFIG_64BIT */
 #endif /* _LINUX_ATOMIC_LONG_H */
-// 108784846d3bbbb201b8dabe621c5dc30b216206
+// ad09f849db0db5b30c82e497eeb9056a394c5f22
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 13832171f7219..af27a71b37ef1 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -32,7 +32,7 @@ gen_args_cast()
 	done
 }
 
-#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+#gen_proto_order_variant(meta, pfx, name, sfx, order, arg...)
 gen_proto_order_variant()
 {
 	local meta="$1"; shift
@@ -40,21 +40,24 @@ gen_proto_order_variant()
 	local name="$1"; shift
 	local sfx="$1"; shift
 	local order="$1"; shift
-	local atomic="$1"; shift
-	local int="$1"; shift
 
 	local atomicname="${pfx}${name}${sfx}${order}"
 
 	local ret="$(gen_ret_type "${meta}" "long")"
 	local params="$(gen_params "long" "atomic_long" "$@")"
-	local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
+	local argscast_32="$(gen_args_cast "int" "atomic" "$@")"
+	local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")"
 	local retstmt="$(gen_ret_stmt "${meta}")"
 
 cat <<EOF
 static __always_inline ${ret}
 raw_atomic_long_${atomicname}(${params})
 {
-	${retstmt}raw_${atomic}_${atomicname}(${argscast});
+#ifdef CONFIG_64BIT
+	${retstmt}raw_atomic64_${atomicname}(${argscast_64});
+#else
+	${retstmt}raw_atomic_${atomicname}(${argscast_32});
+#endif
 }
 
 EOF
@@ -84,24 +87,12 @@ typedef atomic_t atomic_long_t;
 #define atomic_long_cond_read_relaxed	atomic_cond_read_relaxed
 #endif
 
-#ifdef CONFIG_64BIT
-
-EOF
-
-grep '^[a-z]' "$1" | while read name meta args; do
-	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
-done
-
-cat <<EOF
-#else /* CONFIG_64BIT */
-
 EOF
 
 grep '^[a-z]' "$1" | while read name meta args; do
-	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+	gen_proto "${meta}" "${name}" ${args}
 done
 
 cat <<EOF
-#endif /* CONFIG_64BIT */
 #endif /* _LINUX_ATOMIC_LONG_H */
 EOF
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (19 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 22/26] locking/atomic: scripts: simplify raw_atomic_long*() definitions Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-24 14:10   ` Akira Yokosawa
  2023-05-22 12:24 ` [PATCH 26/26] locking/atomic: treewide: delete arch_atomic_*() kerneldoc Mark Rutland
                   ` (2 subsequent siblings)
  23 siblings, 1 reply; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

From: "Paul E. McKenney" <paulmck@kernel.org>

Add the generated atomic headers to driver-api/basics.rst in order to
provide documentation for the Linux kernel's atomic operations.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kees Cook <keescook@chromium.org>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: <linux-doc@vger.kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
[Mark: add atomic-long.h]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 Documentation/driver-api/basics.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
index 4b4d8e28d3be4..a1fbd97fb79fb 100644
--- a/Documentation/driver-api/basics.rst
+++ b/Documentation/driver-api/basics.rst
@@ -87,6 +87,12 @@ Atomics
 .. kernel-doc:: arch/x86/include/asm/atomic.h
    :internal:
 
+.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
+   :internal:
+
+.. kernel-doc:: include/linux/atomic/atomic-long.h
+   :internal:
+
 Kernel objects manipulation
 ---------------------------
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 26/26] locking/atomic: treewide: delete arch_atomic_*() kerneldoc
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (20 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation Mark Rutland
@ 2023-05-22 12:24 ` Mark Rutland
  2023-05-22 20:58 ` [PATCH 00/26] locking/atomic: restructuring + kerneldoc Kees Cook
       [not found] ` <20230522122429.1915021-25-mark.rutland@arm.com>
  23 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-22 12:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: akiyks, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, mark.rutland, paulmck, peterz, sstabellini, will

Currently several architectures have kerneldoc comments for
arch_atomic_*(), which is unhelpful as these live in a shared namespace
where they clash, and the arch_atomic_*() ops are now an implementation
detail of the raw_atomic_*() ops, which no-one should use directly.

Delete the kerneldoc comments for arch_atomic_*(), along with
pseudo-kerneldoc comments which are in the correct style but are missing
the leading '/**' necessary to be true kerneldoc comments.
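
That is, only the opening marker distinguishes the two (illustrative):

| /**
|  * <- true kerneldoc; picked up by scripts/kernel-doc
|  */
| /*
|  * <- pseudo-kerneldoc; same layout, but ignored by kernel-doc
|  */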

Drop x86's asm/atomic.h from Documentation/driver-api/basics.rst as it
no longer contains any relevant kerneldoc comments.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
---
 Documentation/driver-api/basics.rst   |  3 -
 arch/alpha/include/asm/atomic.h       | 25 --------
 arch/arc/include/asm/atomic64-arcv2.h | 17 ------
 arch/hexagon/include/asm/atomic.h     | 16 -----
 arch/loongarch/include/asm/atomic.h   | 49 ---------------
 arch/x86/include/asm/atomic.h         | 87 ---------------------------
 arch/x86/include/asm/atomic64_32.h    | 76 -----------------------
 arch/x86/include/asm/atomic64_64.h    | 81 -------------------------
 8 files changed, 354 deletions(-)

diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
index a1fbd97fb79fb..d8e4e5c82bcf0 100644
--- a/Documentation/driver-api/basics.rst
+++ b/Documentation/driver-api/basics.rst
@@ -84,9 +84,6 @@ Reference counting
 Atomics
 -------
 
-.. kernel-doc:: arch/x86/include/asm/atomic.h
-   :internal:
-
 .. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
    :internal:
 
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index ec8ab552c527a..cbd9244571af0 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -200,15 +200,6 @@ ATOMIC_OPS(xor, xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-/**
- * arch_atomic_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns the old value of @v.
- */
 static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int c, new, old;
@@ -232,15 +223,6 @@ static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns the old value of @v.
- */
 static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	s64 c, new, old;
@@ -264,13 +246,6 @@ static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u
 }
 #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 
-/*
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
 static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	s64 old, tmp;
diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h
index 2b7c9e61a2947..6b6db981967ae 100644
--- a/arch/arc/include/asm/atomic64-arcv2.h
+++ b/arch/arc/include/asm/atomic64-arcv2.h
@@ -182,14 +182,6 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
 }
 #define arch_atomic64_xchg arch_atomic64_xchg
 
-/**
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic64_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
-
 static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	s64 val;
@@ -214,15 +206,6 @@ static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 }
 #define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if it was not @u.
- * Returns the old value of @v
- */
 static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	s64 old, temp;
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 5c8440016c762..2447d083c432f 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -28,12 +28,6 @@ static inline void arch_atomic_set(atomic_t *v, int new)
 
 #define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
-/**
- * arch_atomic_read - reads a word, atomically
- * @v: pointer to atomic value
- *
- * Assumes all word reads on our architecture are atomic.
- */
 #define arch_atomic_read(v)		READ_ONCE((v)->counter)
 
 #define ATOMIC_OP(op)							\
@@ -112,16 +106,6 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-/**
- * arch_atomic_fetch_add_unless - add unless the number is a given value
- * @v: pointer to value
- * @a: amount to add
- * @u: unless value is equal to u
- *
- * Returns old value.
- *
- */
-
 static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int __oldval;
diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 8d73c85911b08..e27f0c72d3242 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -29,21 +29,7 @@
 
 #define ATOMIC_INIT(i)	  { (i) }
 
-/*
- * arch_atomic_read - read atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically reads the value of @v.
- */
 #define arch_atomic_read(v)	READ_ONCE((v)->counter)
-
-/*
- * arch_atomic_set - set atomic variable
- * @v: pointer of type atomic_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
 #define arch_atomic_set(v, i)	WRITE_ONCE((v)->counter, (i))
 
 #define ATOMIC_OP(op, I, asm_op)					\
@@ -139,14 +125,6 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
-/*
- * arch_atomic_sub_if_positive - conditionally subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically test @v and subtract @i if @v is greater or equal than @i.
- * The function returns the old value of @v minus @i.
- */
 static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 {
 	int result;
@@ -181,28 +159,13 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 	return result;
 }
 
-/*
- * arch_atomic_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- */
 #define arch_atomic_dec_if_positive(v)	arch_atomic_sub_if_positive(1, v)
 
 #ifdef CONFIG_64BIT
 
 #define ATOMIC64_INIT(i)    { (i) }
 
-/*
- * arch_atomic64_read - read atomic variable
- * @v: pointer of type atomic64_t
- *
- */
 #define arch_atomic64_read(v)	READ_ONCE((v)->counter)
-
-/*
- * arch_atomic64_set - set atomic variable
- * @v: pointer of type atomic64_t
- * @i: required value
- */
 #define arch_atomic64_set(v, i)	WRITE_ONCE((v)->counter, (i))
 
 #define ATOMIC64_OP(op, I, asm_op)					\
@@ -297,14 +260,6 @@ static inline long arch_atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
 }
 #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 
-/*
- * arch_atomic64_sub_if_positive - conditionally subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically test @v and subtract @i if @v is greater or equal than @i.
- * The function returns the old value of @v minus @i.
- */
 static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 {
 	long result;
@@ -339,10 +294,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 	return result;
 }
 
-/*
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic64_t
- */
 #define arch_atomic64_dec_if_positive(v)	arch_atomic64_sub_if_positive(1, v)
 
 #endif /* CONFIG_64BIT */
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 5e754e8957671..55a55ec043502 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -14,12 +14,6 @@
  * resource counting etc..
  */
 
-/**
- * arch_atomic_read - read atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically reads the value of @v.
- */
 static __always_inline int arch_atomic_read(const atomic_t *v)
 {
 	/*
@@ -29,25 +23,11 @@ static __always_inline int arch_atomic_read(const atomic_t *v)
 	return __READ_ONCE((v)->counter);
 }
 
-/**
- * arch_atomic_set - set atomic variable
- * @v: pointer of type atomic_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
 static __always_inline void arch_atomic_set(atomic_t *v, int i)
 {
 	__WRITE_ONCE(v->counter, i);
 }
 
-/**
- * arch_atomic_add - add integer to atomic variable
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v.
- */
 static __always_inline void arch_atomic_add(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0"
@@ -55,13 +35,6 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
 		     : "ir" (i) : "memory");
 }
 
-/**
- * arch_atomic_sub - subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v.
- */
 static __always_inline void arch_atomic_sub(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "subl %1,%0"
@@ -69,27 +42,12 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v)
 		     : "ir" (i) : "memory");
 }
 
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, e, "er", i);
 }
 #define arch_atomic_sub_and_test arch_atomic_sub_and_test
 
-/**
- * arch_atomic_inc - increment atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1.
- */
 static __always_inline void arch_atomic_inc(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "incl %0"
@@ -97,12 +55,6 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
 }
 #define arch_atomic_inc arch_atomic_inc
 
-/**
- * arch_atomic_dec - decrement atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1.
- */
 static __always_inline void arch_atomic_dec(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "decl %0"
@@ -110,69 +62,30 @@ static __always_inline void arch_atomic_dec(atomic_t *v)
 }
 #define arch_atomic_dec arch_atomic_dec
 
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
 {
 	return GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, e);
 }
 #define arch_atomic_dec_and_test arch_atomic_dec_and_test
 
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
 {
 	return GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, e);
 }
 #define arch_atomic_inc_and_test arch_atomic_inc_and_test
 
-/**
- * arch_atomic_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
 static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, s, "er", i);
 }
 #define arch_atomic_add_negative arch_atomic_add_negative
 
-/**
- * arch_atomic_add_return - add integer and return
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns @i + @v
- */
 static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 #define arch_atomic_add_return arch_atomic_add_return
 
-/**
- * arch_atomic_sub_return - subtract integer and return
- * @v: pointer of type atomic_t
- * @i: integer value to subtract
- *
- * Atomically subtracts @i from @v and returns @v - @i
- */
 static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
 {
 	return arch_atomic_add_return(-i, v);
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 808b4eece251e..3486d91b8595f 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -61,30 +61,12 @@ ATOMIC64_DECL(add_unless);
 #undef __ATOMIC64_DECL
 #undef ATOMIC64_EXPORT
 
-/**
- * arch_atomic64_cmpxchg - cmpxchg atomic64 variable
- * @v: pointer to type atomic64_t
- * @o: expected value
- * @n: new value
- *
- * Atomically sets @v to @n if it was equal to @o and returns
- * the old value.
- */
-
 static __always_inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 {
 	return arch_cmpxchg64(&v->counter, o, n);
 }
 #define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
 
-/**
- * arch_atomic64_xchg - xchg atomic64 variable
- * @v: pointer to type atomic64_t
- * @n: value to assign
- *
- * Atomically xchgs the value of @v to @n and returns
- * the old value.
- */
 static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
 {
 	s64 o;
@@ -97,13 +79,6 @@ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
 }
 #define arch_atomic64_xchg arch_atomic64_xchg
 
-/**
- * arch_atomic64_set - set atomic64 variable
- * @v: pointer to type atomic64_t
- * @i: value to assign
- *
- * Atomically sets the value of @v to @n.
- */
 static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned high = (unsigned)(i >> 32);
@@ -113,12 +88,6 @@ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
 			     : "eax", "edx", "memory");
 }
 
-/**
- * arch_atomic64_read - read atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically reads the value of @v and returns it.
- */
 static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	s64 r;
@@ -126,13 +95,6 @@ static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
 	return r;
 }
 
-/**
- * arch_atomic64_add_return - add and return
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns @i + *@v
- */
 static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
 	alternative_atomic64(add_return,
@@ -142,9 +104,6 @@ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 }
 #define arch_atomic64_add_return arch_atomic64_add_return
 
-/*
- * Other variants with different arithmetic operators:
- */
 static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
 {
 	alternative_atomic64(sub_return,
@@ -172,13 +131,6 @@ static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v)
 }
 #define arch_atomic64_dec_return arch_atomic64_dec_return
 
-/**
- * arch_atomic64_add - add integer to atomic64 variable
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v.
- */
 static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
 {
 	__alternative_atomic64(add, add_return,
@@ -187,13 +139,6 @@ static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
 	return i;
 }
 
-/**
- * arch_atomic64_sub - subtract the atomic64 variable
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v.
- */
 static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
 {
 	__alternative_atomic64(sub, sub_return,
@@ -202,12 +147,6 @@ static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
 	return i;
 }
 
-/**
- * arch_atomic64_inc - increment atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1.
- */
 static __always_inline void arch_atomic64_inc(atomic64_t *v)
 {
 	__alternative_atomic64(inc, inc_return, /* no output */,
@@ -215,12 +154,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
 }
 #define arch_atomic64_inc arch_atomic64_inc
 
-/**
- * arch_atomic64_dec - decrement atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1.
- */
 static __always_inline void arch_atomic64_dec(atomic64_t *v)
 {
 	__alternative_atomic64(dec, dec_return, /* no output */,
@@ -228,15 +161,6 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
 }
 #define arch_atomic64_dec arch_atomic64_dec
 
-/**
- * arch_atomic64_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns non-zero if the add was done, zero otherwise.
- */
 static __always_inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	unsigned low = (unsigned)u;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index c496595bf6012..3165c0feedf74 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -10,37 +10,16 @@
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-/**
- * arch_atomic64_read - read atomic64 variable
- * @v: pointer of type atomic64_t
- *
- * Atomically reads the value of @v.
- * Doesn't imply a read memory barrier.
- */
 static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	return __READ_ONCE((v)->counter);
 }
 
-/**
- * arch_atomic64_set - set atomic64 variable
- * @v: pointer to type atomic64_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
 static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	__WRITE_ONCE(v->counter, i);
 }
 
-/**
- * arch_atomic64_add - add integer to atomic64 variable
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v.
- */
 static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
@@ -48,13 +27,6 @@ static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
 		     : "er" (i), "m" (v->counter) : "memory");
 }
 
-/**
- * arch_atomic64_sub - subtract the atomic64 variable
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v.
- */
 static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
@@ -62,27 +34,12 @@ static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
 		     : "er" (i), "m" (v->counter) : "memory");
 }
 
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i);
 }
 #define arch_atomic64_sub_and_test arch_atomic64_sub_and_test
 
-/**
- * arch_atomic64_inc - increment atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1.
- */
 static __always_inline void arch_atomic64_inc(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "incq %0"
@@ -91,12 +48,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
 }
 #define arch_atomic64_inc arch_atomic64_inc
 
-/**
- * arch_atomic64_dec - decrement atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1.
- */
 static __always_inline void arch_atomic64_dec(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "decq %0"
@@ -105,56 +56,24 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
 }
 #define arch_atomic64_dec arch_atomic64_dec
 
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
 static __always_inline bool arch_atomic64_dec_and_test(atomic64_t *v)
 {
 	return GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, e);
 }
 #define arch_atomic64_dec_and_test arch_atomic64_dec_and_test
 
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
 static __always_inline bool arch_atomic64_inc_and_test(atomic64_t *v)
 {
 	return GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, e);
 }
 #define arch_atomic64_inc_and_test arch_atomic64_inc_and_test
 
-/**
- * arch_atomic64_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
 static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i);
 }
 #define arch_atomic64_add_negative arch_atomic64_add_negative
 
-/**
- * arch_atomic64_add_return - add and return
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns @i + @v
- */
 static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
 	return i + xadd(&v->counter, i);
-- 
2.30.2



* Re: [PATCH 00/26] locking/atomic: restructuring + kerneldoc
  2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
                   ` (21 preceding siblings ...)
  2023-05-22 12:24 ` [PATCH 26/26] locking/atomic: treewide: delete arch_atomic_*() kerneldoc Mark Rutland
@ 2023-05-22 20:58 ` Kees Cook
       [not found] ` <20230522122429.1915021-25-mark.rutland@arm.com>
  23 siblings, 0 replies; 36+ messages in thread
From: Kees Cook @ 2023-05-22 20:58 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, akiyks, boqun.feng, corbet, linux-arch, linux,
	linux-doc, paulmck, peterz, sstabellini, will

On Mon, May 22, 2023 at 01:24:03PM +0100, Mark Rutland wrote:
> The patches are based on the tip tree's locking/core branch,
> specifically commit:
> 
>   3cf363a4daf359e8 ("s390/cpum_sf: Convert to cmpxchg128()")

I did successful builds of:

m68k
parisc
sh4
sparc64
arm
arm64
i386
x86_64

Reviewed-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
       [not found] ` <20230522122429.1915021-25-mark.rutland@arm.com>
@ 2023-05-24 14:03   ` Akira Yokosawa
  2023-05-24 14:11     ` Peter Zijlstra
  2023-05-24 14:17   ` Peter Zijlstra
  1 sibling, 1 reply; 36+ messages in thread
From: Akira Yokosawa @ 2023-05-24 14:03 UTC (permalink / raw)
  To: Mark Rutland, linux-kernel
  Cc: boqun.feng, corbet, keescook, linux-arch, linux, linux-doc,
	paulmck, peterz, sstabellini, will, Akira Yokosawa

Hi Mark,

Thank you for the nice documentation improvements!

Please see inline comments for minor nits.

On Mon, 22 May 2023 13:24:27 +0100, Mark Rutland wrote:
> Currently the atomics are documented in Documentation/atomic_t.txt, and
> have no kerneldoc comments. There are a sufficient number of gotchas
> (e.g. semantics, noinstr-safety) that it would be nice to have comments
> to call these out, and it would be nice to have kerneldoc comments such
> that these can be collated.
> 
> While it's possible to derive the semantics from the code, this can be
> painful given the amount of indirection we currently have (e.g. fallback
> paths), and it's easy to be misled by naming, e.g.
> 
> * The unconditional void-returning ops *only* have relaxed variants
>   without a _relaxed suffix, and can easily be mistaken for being fully
>   ordered.
> 
>   It would be nice to give these a _relaxed() suffix, but this would
>   result in significant churn throughout the kernel.
> 
> * Our naming of conditional and unconditional+test ops is rather
>   inconsistent, and it can be difficult to derive the name of an
>   operation, or to identify whether an op is conditional or
>   unconditional+test.
> 
>   Some ops are clearly conditional:
>   - dec_if_positive
>   - add_unless
>   - dec_unless_positive
>   - inc_unless_negative
> 
>   Some ops are clearly unconditional+test:
>   - sub_and_test
>   - dec_and_test
>   - inc_and_test
> 
>   However, what exactly those ops test is not obvious. A _test_zero suffix
>   might be clearer.
> 
>   Others could be read ambiguously:
>   - inc_not_zero	// conditional
>   - add_negative	// unconditional+test
> 
>   It would probably be worth renaming these, e.g. to inc_unless_zero and
>   add_test_negative.
> 
> As a step towards making this more consistent and easier to understand,
> this patch adds kerneldoc comments for all generated *atomic*_*()
> functions. These are generated from templates, with some common text
> shared, making it easy to extend these in future if necessary.
> 
> I've tried to make these as consistent and clear as possible, and I've
> deliberately ensured:
> 
> * All ops have their ordering explicitly mentioned in the short and long
>   description.
> 
> * All test ops have "test" in their short description.
> 
> * All ops are described as an expression using their usual C operator.
>   For example:
> 
>   andnot: "Atomically updates @v to (@v & ~@i)"

The kernel-doc script converts "~@i" into reST source of "~**i**",
where the emphasis of i is not recognized by Sphinx.

For the "@" to work as expected, please say "~(@i)" or "~ @i".
My preference is the former.
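
To make the difference concrete, consider a hypothetical kerneldoc
fragment using the former spelling (a sketch only, not the exact text
the scripts generate):

  /**
   * atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
   * @i: int value
   * @v: pointer to atomic_t
   *
   * Atomically updates @v to (@v & ~(@i)) with relaxed ordering.
   */

Here "~(@i)" becomes "~(**i**)" in the reST output, and the opening
parenthesis lets Sphinx recognize the emphasis markers.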

>   inc:    "Atomically updates @v to (@v + 1)"
> 
>   Which may be clearer to non-naative English speakers, and allows all
                            non-native

>   the operations to be described in the same style.
> 
> * All conditional ops have their condition described as an expression
>   using the usual C operators. For example:
> 
>   add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
>   cmpxchg:    "If (@v == @old), atomically updates @v to @new"
> 
>   Which may be clearer to non-naative English speakers, and allows all

Ditto.

>   the operations to be described in the same style.
> 
> * All bitwise ops (and,andnot,or,xor) explicitly mention that they are
>   bitwise in their short description, so that they are not mistaken for
>   performing their logical equivalents.
> 
> * The noinstr safety of each op is explicitly described, with a
>   description of whether or not to use the raw_ form of the op.
> 
> There should be no functional change as a result of this patch.
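
To make the conditional-op wording concrete: the documented semantics of
add_unless correspond to a cmpxchg loop along the lines of the sketch
below (illustrative only, written against the generic API; it is not the
actual generated fallback, and the _sketch name is made up):

  /* If (@v != @u), atomically updates @v to (@v + @i); returns true if so. */
  static inline bool atomic_add_unless_sketch(atomic_t *v, int i, int u)
  {
  	int old = atomic_read(v);

  	do {
  		if (old == u)
  			return false;
  	} while (!atomic_try_cmpxchg(v, &old, old + i));

  	return true;
  }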
> 
> Reported-by: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>

FWIW,

Reviewed-by: Akira Yokosawa <akiyks@gmail.com>

        Thanks, Akira

> ---
>  include/linux/atomic/atomic-arch-fallback.h  | 1848 +++++++++++-
>  include/linux/atomic/atomic-instrumented.h   | 2771 +++++++++++++++++-
>  include/linux/atomic/atomic-long.h           |  925 +++++-
>  scripts/atomic/atomic-tbl.sh                 |  112 +-
>  scripts/atomic/gen-atomic-fallback.sh        |    2 +
>  scripts/atomic/gen-atomic-instrumented.sh    |    2 +
>  scripts/atomic/gen-atomic-long.sh            |    2 +
>  scripts/atomic/kerneldoc/add                 |   13 +
>  scripts/atomic/kerneldoc/add_negative        |   13 +
>  scripts/atomic/kerneldoc/add_unless          |   18 +
>  scripts/atomic/kerneldoc/and                 |   13 +
>  scripts/atomic/kerneldoc/andnot              |   13 +
>  scripts/atomic/kerneldoc/cmpxchg             |   14 +
>  scripts/atomic/kerneldoc/dec                 |   12 +
>  scripts/atomic/kerneldoc/dec_and_test        |   12 +
>  scripts/atomic/kerneldoc/dec_if_positive     |   12 +
>  scripts/atomic/kerneldoc/dec_unless_positive |   12 +
>  scripts/atomic/kerneldoc/inc                 |   12 +
>  scripts/atomic/kerneldoc/inc_and_test        |   12 +
>  scripts/atomic/kerneldoc/inc_not_zero        |   12 +
>  scripts/atomic/kerneldoc/inc_unless_negative |   12 +
>  scripts/atomic/kerneldoc/or                  |   13 +
>  scripts/atomic/kerneldoc/read                |   12 +
>  scripts/atomic/kerneldoc/set                 |   13 +
>  scripts/atomic/kerneldoc/sub                 |   13 +
>  scripts/atomic/kerneldoc/sub_and_test        |   13 +
>  scripts/atomic/kerneldoc/try_cmpxchg         |   15 +
>  scripts/atomic/kerneldoc/xchg                |   13 +
>  scripts/atomic/kerneldoc/xor                 |   13 +
>  29 files changed, 5940 insertions(+), 7 deletions(-)
>  create mode 100644 scripts/atomic/kerneldoc/add
>  create mode 100644 scripts/atomic/kerneldoc/add_negative
>  create mode 100644 scripts/atomic/kerneldoc/add_unless
>  create mode 100644 scripts/atomic/kerneldoc/and
>  create mode 100644 scripts/atomic/kerneldoc/andnot
>  create mode 100644 scripts/atomic/kerneldoc/cmpxchg
>  create mode 100644 scripts/atomic/kerneldoc/dec
>  create mode 100644 scripts/atomic/kerneldoc/dec_and_test
>  create mode 100644 scripts/atomic/kerneldoc/dec_if_positive
>  create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive
>  create mode 100644 scripts/atomic/kerneldoc/inc
>  create mode 100644 scripts/atomic/kerneldoc/inc_and_test
>  create mode 100644 scripts/atomic/kerneldoc/inc_not_zero
>  create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative
>  create mode 100644 scripts/atomic/kerneldoc/or
>  create mode 100644 scripts/atomic/kerneldoc/read
>  create mode 100644 scripts/atomic/kerneldoc/set
>  create mode 100644 scripts/atomic/kerneldoc/sub
>  create mode 100644 scripts/atomic/kerneldoc/sub_and_test
>  create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg
>  create mode 100644 scripts/atomic/kerneldoc/xchg
>  create mode 100644 scripts/atomic/kerneldoc/xor
> 
[...]



* Re: [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation
  2023-05-22 12:24 ` [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation Mark Rutland
@ 2023-05-24 14:10   ` Akira Yokosawa
  2023-05-30 12:33     ` Mark Rutland
  0 siblings, 1 reply; 36+ messages in thread
From: Akira Yokosawa @ 2023-05-24 14:10 UTC (permalink / raw)
  To: Mark Rutland, linux-kernel
  Cc: boqun.feng, corbet, keescook, linux-arch, linux, linux-doc,
	paulmck, peterz, sstabellini, will, Akira Yokosawa

On Mon, 22 May 2023 13:24:28 +0100, Mark Rutland wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> Add the generated atomic headers to driver-api/basics.rst in order to
> provide documentation for the Linux kernel's atomic operations.
> 
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Akira Yokosawa <akiyks@gmail.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: <linux-doc@vger.kernel.org>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> [Mark: add atomic-long.h]
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> ---
>  Documentation/driver-api/basics.rst | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
> index 4b4d8e28d3be4..a1fbd97fb79fb 100644
> --- a/Documentation/driver-api/basics.rst
> +++ b/Documentation/driver-api/basics.rst
> @@ -87,6 +87,12 @@ Atomics
>  .. kernel-doc:: arch/x86/include/asm/atomic.h
>     :internal:
>  
> +.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
> +   :internal:
> +
> +.. kernel-doc:: include/linux/atomic/atomic-long.h
> +   :internal:
> +

Why not add

  .. kernel-doc:: include/linux/atomic/atomic-instrumented.h
     :internal:

as well ??

I think those kernel-doc comments are the most relevant ones
for driver writers.
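
For instance, a typical driver refcounting pattern built on these ops
looks like the sketch below (the foo_* names are made up for
illustration):

  static atomic_t foo_users = ATOMIC_INIT(0);	/* hypothetical counter */

  static void foo_get(void)
  {
  	atomic_inc(&foo_users);		/* unconditional, returns nothing */
  }

  static bool foo_put(void)
  {
  	/* unconditional+test: true once the count reaches zero */
  	return atomic_dec_and_test(&foo_users);
  }

Having atomic_inc() and atomic_dec_and_test() documented in the
driver-api book lets authors of such code look the semantics up in one
place.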

        Thanks, Akira

>  Kernel objects manipulation
>  ---------------------------
>  


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-24 14:03   ` [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments Akira Yokosawa
@ 2023-05-24 14:11     ` Peter Zijlstra
  2023-05-26  3:17       ` Akira Yokosawa
  0 siblings, 1 reply; 36+ messages in thread
From: Peter Zijlstra @ 2023-05-24 14:11 UTC (permalink / raw)
  To: Akira Yokosawa
  Cc: Mark Rutland, linux-kernel, boqun.feng, corbet, keescook,
	linux-arch, linux, linux-doc, paulmck, sstabellini, will

On Wed, May 24, 2023 at 11:03:58PM +0900, Akira Yokosawa wrote:

> > * All ops are described as an expression using their usual C operator.
> >   For example:
> > 
> >   andnot: "Atomically updates @v to (@v & ~@i)"
> 
> The kernel-doc script converts "~@i" into reST source of "~**i**",
> where the emphasis of i is not recognized by Sphinx.
> 
> For the "@" to work as expected, please say "~(@i)" or "~ @i".
> My preference is the former.

And here we start :-/ making the actual comment less readable because
retarded tooling.

> >   inc:    "Atomically updates @v to (@v + 1)"
> > 
> >   Which may be clearer to non-naative English speakers, and allows all
>                             non-native
> 
> >   the operations to be described in the same style.
> > 
> > * All conditional ops have their condition described as an expression
> >   using the usual C operators. For example:
> > 
> >   add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
> >   cmpxchg:    "If (@v == @old), atomically updates @v to @new"
> > 
> >   Which may be clearer to non-naative English speakers, and allows all
> 
> Ditto.

How about we just keep it as is, and all the rst and html weenies learn
to use a text editor to read code comments?


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
       [not found] ` <20230522122429.1915021-25-mark.rutland@arm.com>
  2023-05-24 14:03   ` [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments Akira Yokosawa
@ 2023-05-24 14:17   ` Peter Zijlstra
  1 sibling, 0 replies; 36+ messages in thread
From: Peter Zijlstra @ 2023-05-24 14:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, akiyks, boqun.feng, corbet, keescook, linux-arch,
	linux, linux-doc, paulmck, sstabellini, will

On Mon, May 22, 2023 at 01:24:27PM +0100, Mark Rutland wrote:
>  include/linux/atomic/atomic-arch-fallback.h  | 1848 +++++++++++-
>  include/linux/atomic/atomic-instrumented.h   | 2771 +++++++++++++++++-
>  include/linux/atomic/atomic-long.h           |  925 +++++-

>  29 files changed, 5940 insertions(+), 7 deletions(-)

So my biggest concern with all this is 5k+ lines of comments that GCC
has to read and discard over and over and over.

I'll see if I can measure a difference in compile time before and after
this here patch.


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-24 14:11     ` Peter Zijlstra
@ 2023-05-26  3:17       ` Akira Yokosawa
  2023-05-26  4:51         ` Randy Dunlap
  0 siblings, 1 reply; 36+ messages in thread
From: Akira Yokosawa @ 2023-05-26  3:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, linux-kernel, boqun.feng, corbet, keescook,
	linux-arch, linux, linux-doc, paulmck, sstabellini, will,
	Akira Yokosawa

On Wed, 24 May 2023 16:11:52 +0200, Peter Zijlstra wrote:
> On Wed, May 24, 2023 at 11:03:58PM +0900, Akira Yokosawa wrote:
> 
>>> * All ops are described as an expression using their usual C operator.
>>>   For example:
>>>
>>>   andnot: "Atomically updates @v to (@v & ~@i)"
>>
>> The kernel-doc script converts "~@i" into reST source of "~**i**",
>> where the emphasis of i is not recognized by Sphinx.
>>
>> For the "@" to work as expected, please say "~(@i)" or "~ @i".
>> My preference is the former.
> 
> And here we start :-/ making the actual comment less readable because
> retarded tooling.
> 
>>>   inc:    "Atomically updates @v to (@v + 1)"
>>>
>>>   Which may be clearer to non-naative English speakers, and allows all
>>                             non-native
>>
>>>   the operations to be described in the same style.
>>>
>>> * All conditional ops have their condition described as an expression
>>>   using the usual C operators. For example:
>>>
>>>   add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
>>>   cmpxchg:    "If (@v == @old), atomically updates @v to @new"
>>>
>>>   Which may be clearer to non-naative English speakers, and allows all
>>
>> Ditto.
> 
> How about we just keep it as is, and all the rst and html weenies learn
> to use a text editor to read code comments?

:-) :-) :-)

It turns out that kernel-doc is aware of !@var [1].
Similar tricks can be added for ~@var.
So let's keep it as is!
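
For illustration, the current and expected conversions side by side:

  !@var  ->  **!var**   (handled since ee2aa7590398)
  ~@var  ->  ~**var**   (today; Sphinx does not recognize the emphasis)
  ~@var  ->  **~var**   (with an analogous tweak for '~')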

I'll ask documentation folks to update kernel-doc when this change
is merged eventually.

[1]: ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")

        Thanks, Akira



* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-26  3:17       ` Akira Yokosawa
@ 2023-05-26  4:51         ` Randy Dunlap
  2023-05-26 10:27           ` Akira Yokosawa
  0 siblings, 1 reply; 36+ messages in thread
From: Randy Dunlap @ 2023-05-26  4:51 UTC (permalink / raw)
  To: Akira Yokosawa, Peter Zijlstra
  Cc: Mark Rutland, linux-kernel, boqun.feng, corbet, keescook,
	linux-arch, linux, linux-doc, paulmck, sstabellini, will

Hi Akira,

On 5/25/23 20:17, Akira Yokosawa wrote:
> On Wed, 24 May 2023 16:11:52 +0200, Peter Zijlstra wrote:
>> On Wed, May 24, 2023 at 11:03:58PM +0900, Akira Yokosawa wrote:
>>
>>>> * All ops are described as an expression using their usual C operator.
>>>>   For example:
>>>>
>>>>   andnot: "Atomically updates @v to (@v & ~@i)"
>>>
>>> The kernel-doc script converts "~@i" into reST source of "~**i**",
>>> where the emphasis of i is not recognized by Sphinx.
>>>
>>> For the "@" to work as expected, please say "~(@i)" or "~ @i".
>>> My preference is the former.
>>
>> And here we start :-/ making the actual comment less readable because
>> retarded tooling.
>>
>>>>   inc:    "Atomically updates @v to (@v + 1)"
>>>>
>>>>   Which may be clearer to non-naative English speakers, and allows all
>>>                             non-native
>>>
>>>>   the operations to be described in the same style.
>>>>
>>>> * All conditional ops have their condition described as an expression
>>>>   using the usual C operators. For example:
>>>>
>>>>   add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
>>>>   cmpxchg:    "If (@v == @old), atomically updates @v to @new"
>>>>
>>>>   Which may be clearer to non-naative English speakers, and allows all
>>>
>>> Ditto.
>>
>> How about we just keep it as is, and all the rst and html weenies learn
>> to use a text editor to read code comments?
> 
> :-) :-) :-)
> 
> It turns out that kernel-doc is aware of !@var [1].
> Similar tricks can be added for ~@var.
> So let's keep it as is!
> 
> I'll ask documentation folks to update kernel-doc when this change
> is merged eventually.

What do you mean by that?
What needs to be updated and how?


> [1]: ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")

thanks.
-- 
~Randy


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-26  4:51         ` Randy Dunlap
@ 2023-05-26 10:27           ` Akira Yokosawa
  2023-05-26 15:24             ` Randy Dunlap
  2023-05-30 12:42             ` Mark Rutland
  0 siblings, 2 replies; 36+ messages in thread
From: Akira Yokosawa @ 2023-05-26 10:27 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: Mark Rutland, linux-kernel, boqun.feng, corbet, keescook,
	linux-arch, linux, linux-doc, paulmck, sstabellini, will,
	Peter Zijlstra, Akira Yokosawa

Hi Randy,

On 2023/05/26 13:51, Randy Dunlap wrote:
> Hi Akira,
> 
> On 5/25/23 20:17, Akira Yokosawa wrote:
>> On Wed, 24 May 2023 16:11:52 +0200, Peter Zijlstra wrote:
>>> On Wed, May 24, 2023 at 11:03:58PM +0900, Akira Yokosawa wrote:
>>>
>>>>> * All ops are described as an expression using their usual C operator.
>>>>>   For example:
>>>>>
>>>>>   andnot: "Atomically updates @v to (@v & ~@i)"
>>>>
>>>> The kernel-doc script converts "~@i" into reST source of "~**i**",
>>>> where the emphasis of i is not recognized by Sphinx.
>>>>
>>>> For the "@" to work as expected, please say "~(@i)" or "~ @i".
>>>> My preference is the former.
>>>
>>> And here we start :-/ making the actual comment less readable because
>>> retarded tooling.
>>>
>>>>>   inc:    "Atomically updates @v to (@v + 1)"
>>>>>
>>>>>   Which may be clearer to non-naative English speakers, and allows all
>>>>                             non-native
>>>>
>>>>>   the operations to be described in the same style.
>>>>>
>>>>> * All conditional ops have their condition described as an expression
>>>>>   using the usual C operators. For example:
>>>>>
>>>>>   add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
>>>>>   cmpxchg:    "If (@v == @old), atomically updates @v to @new"
>>>>>
>>>>>   Which may be clearer to non-naative English speakers, and allows all
>>>>
>>>> Ditto.
>>>
>>> How about we just keep it as is, and all the rst and html weenies learn
>>> to use a text editor to read code comments?
>>
>> :-) :-) :-)
>>
>> It turns out that kernel-doc is aware of !@var [1].
>> Similar tricks can be added for ~@var.
>> So let's keep it as is!
>>
>> I'll ask documentation folks to update kernel-doc when this change
>> is merged eventually.
> 
> What do you mean by that?
> What needs to be updated and how?
 
I mean, scripts/kernel-doc needs to be updated so that "~@var"
is converted into "**~var**".

I think adding "~" to the substitution pattern added in [1] as follows
should do the trick (not well tested):

diff --git a/scripts/kernel-doc b/scripts/kernel-doc
index 2486689ffc7b..eb70c1fd4e86 100755
--- a/scripts/kernel-doc
+++ b/scripts/kernel-doc
@@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
 my $type_constant2 = '\%([-_\w]+)';
 my $type_func = '(\w+)\(\)';
 my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
-my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
+my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
 my $type_fp_param = '\@(\w+)\(\)';  # Special RST handling for func ptr params
 my $type_fp_param2 = '\@(\w+->\S+)\(\)';  # Special RST handling for structs with func ptr params
 my $type_env = '(\$\w+)';
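
If I read the substitution right, the optional leading operator captured
by the first group is folded into the **...** emphasis together with the
parameter name, so a comment fragment like (hypothetical):

  * Atomically updates @v to (@v & ~@i)

should render via "(**v** & **~i**)" instead of the unparsable "~**i**".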

Thoughts?

        Thanks, Akira

> 
> 
>> [1]: ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")
> 
> thanks.


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-26 10:27           ` Akira Yokosawa
@ 2023-05-26 15:24             ` Randy Dunlap
  2023-05-30 12:42             ` Mark Rutland
  1 sibling, 0 replies; 36+ messages in thread
From: Randy Dunlap @ 2023-05-26 15:24 UTC (permalink / raw)
  To: Akira Yokosawa
  Cc: Mark Rutland, linux-kernel, boqun.feng, corbet, keescook,
	linux-arch, linux, linux-doc, paulmck, sstabellini, will,
	Peter Zijlstra



On 5/26/23 03:27, Akira Yokosawa wrote:
> Hi Randy,
> 
> On 2023/05/26 13:51, Randy Dunlap wrote:
>> Hi Akira,
>>
>> On 5/25/23 20:17, Akira Yokosawa wrote:
>>> On Wed, 24 May 2023 16:11:52 +0200, Peter Zijlstra wrote:
>>>> On Wed, May 24, 2023 at 11:03:58PM +0900, Akira Yokosawa wrote:
>>>>
>>>>>> * All ops are described as an expression using their usual C operator.
>>>>>>   For example:
>>>>>>
>>>>>>   andnot: "Atomically updates @v to (@v & ~@i)"
>>>>>
>>>>> The kernel-doc script converts "~@i" into reST source of "~**i**",
>>>>> where the emphasis of i is not recognized by Sphinx.
>>>>>
>>>>> For the "@" to work as expected, please say "~(@i)" or "~ @i".
>>>>> My preference is the former.
>>>>
>>>> And here we start :-/ making the actual comment less readable because
>>>> retarded tooling.
>>>>
>>>>>>   inc:    "Atomically updates @v to (@v + 1)"
>>>>>>
>>>>>>   Which may be clearer to non-naative English speakers, and allows all
>>>>>                             non-native
>>>>>
>>>>>>   the operations to be described in the same style.
>>>>>>
>>>>>> * All conditional ops have their condition described as an expression
>>>>>>   using the usual C operators. For example:
>>>>>>
>>>>>>   add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
>>>>>>   cmpxchg:    "If (@v == @old), atomically updates @v to @new"
>>>>>>
>>>>>>   Which may be clearer to non-naative English speakers, and allows all
>>>>>
>>>>> Ditto.
>>>>
>>>> How about we just keep it as is, and all the rst and html weenies learn
>>>> to use a text editor to read code comments?
>>>
>>> :-) :-) :-)
>>>
>>> It turns out that kernel-doc is aware of !@var [1].
>>> Similar tricks can be added for ~@var.
>>> So let's keep it as is!
>>>
>> I'll ask documentation folks to update kernel-doc when this change
>>> is merged eventually.
>>
>> What do you mean by that?
>> What needs to be updated and how?
>  
> I mean, scripts/kernel-doc needs to be updated so that "~@var"
> is converted into "**~var**".
> 
> I think adding "~" to the substitution pattern added in [1] as follows
> should do the trick (not well tested):
> 
> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
> index 2486689ffc7b..eb70c1fd4e86 100755
> --- a/scripts/kernel-doc
> +++ b/scripts/kernel-doc
> @@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
>  my $type_constant2 = '\%([-_\w]+)';
>  my $type_func = '(\w+)\(\)';
>  my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
> -my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
> +my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
>  my $type_fp_param = '\@(\w+)\(\)';  # Special RST handling for func ptr params
>  my $type_fp_param2 = '\@(\w+->\S+)\(\)';  # Special RST handling for structs with func ptr params
>  my $type_env = '(\$\w+)';
> 

At a quick glance, that looks OK.
I haven't had enough coffee yet to be able to read all of that regex though.

Just submit the patch (when it is needed) to see what breaks. :)

>>
>>
>>> [1]: ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")
>>

Thanks.
-- 
~Randy


* Re: [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation
  2023-05-24 14:10   ` Akira Yokosawa
@ 2023-05-30 12:33     ` Mark Rutland
  0 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-05-30 12:33 UTC (permalink / raw)
  To: Akira Yokosawa
  Cc: linux-kernel, boqun.feng, corbet, keescook, linux-arch, linux,
	linux-doc, paulmck, peterz, sstabellini, will

On Wed, May 24, 2023 at 11:10:48PM +0900, Akira Yokosawa wrote:
> On Mon, 22 May 2023 13:24:28 +0100, Mark Rutland wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > Add the generated atomic headers to driver-api/basics.rst in order to
> > provide documentation for the Linux kernel's atomic operations.
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > Cc: Jonathan Corbet <corbet@lwn.net>
> > Cc: Kees Cook <keescook@chromium.org>
> > Cc: Akira Yokosawa <akiyks@gmail.com>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: <linux-doc@vger.kernel.org>
> > Reviewed-by: Kees Cook <keescook@chromium.org>
> > [Mark: add atomic-long.h]
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > ---
> >  Documentation/driver-api/basics.rst | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
> > index 4b4d8e28d3be4..a1fbd97fb79fb 100644
> > --- a/Documentation/driver-api/basics.rst
> > +++ b/Documentation/driver-api/basics.rst
> > @@ -87,6 +87,12 @@ Atomics
> >  .. kernel-doc:: arch/x86/include/asm/atomic.h
> >     :internal:
> >  
> > +.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
> > +   :internal:
> > +
> > +.. kernel-doc:: include/linux/atomic/atomic-long.h
> > +   :internal:
> > +
> 
> Why not add
> 
>   .. kernel-doc:: include/linux/atomic/atomic-instrumented.h
>      :internal:
> 
> as well ??
> 
> I think those kernel-doc comments are the most relevant ones
> for driver writers.

Yes, that should be there too.

I've added that (and dropped the x86 header) with an updated commit message.

Thanks,
Mark.


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-26 10:27           ` Akira Yokosawa
  2023-05-26 15:24             ` Randy Dunlap
@ 2023-05-30 12:42             ` Mark Rutland
  2023-05-31 23:41               ` Akira Yokosawa
  1 sibling, 1 reply; 36+ messages in thread
From: Mark Rutland @ 2023-05-30 12:42 UTC (permalink / raw)
  To: Akira Yokosawa
  Cc: Randy Dunlap, linux-kernel, boqun.feng, corbet, keescook,
	linux-arch, linux, linux-doc, paulmck, sstabellini, will,
	Peter Zijlstra

Hi Akira,

On Fri, May 26, 2023 at 07:27:56PM +0900, Akira Yokosawa wrote:
> I think adding "~" to the substitution pattern added in [1] as follows
> should do the trick (not well tested):
> 
> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
> index 2486689ffc7b..eb70c1fd4e86 100755
> --- a/scripts/kernel-doc
> +++ b/scripts/kernel-doc
> @@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
>  my $type_constant2 = '\%([-_\w]+)';
>  my $type_func = '(\w+)\(\)';
>  my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
> -my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
> +my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
>  my $type_fp_param = '\@(\w+)\(\)';  # Special RST handling for func ptr params
>  my $type_fp_param2 = '\@(\w+->\S+)\(\)';  # Special RST handling for structs with func ptr params
>  my $type_env = '(\$\w+)';

Are you happy to send this as a patch?

I'd like to pick it into this series, so if you're happy to provide your
Signed-off-by tag here, I'm happy to go write the commit message and so on.

Thanks,
Mark.

> 
> Thoughts?
> 
>         Thanks, Akira
> 
> > 
> > 
> >> [1]: ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")
> > 
> > thanks.


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-30 12:42             ` Mark Rutland
@ 2023-05-31 23:41               ` Akira Yokosawa
  2023-06-01 10:22                 ` Mark Rutland
  0 siblings, 1 reply; 36+ messages in thread
From: Akira Yokosawa @ 2023-05-31 23:41 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Randy Dunlap, linux-kernel, corbet, linux-doc, paulmck,
	Akira Yokosawa

[Keeping documentation folks in CC]

On 2023/05/30 21:42, Mark Rutland wrote:
> Hi Akira,
> 
> On Fri, May 26, 2023 at 07:27:56PM +0900, Akira Yokosawa wrote:
>> I think adding "~" to the substitution pattern added in [1] as follows
>> should do the trick (not well tested):
>>
>> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
>> index 2486689ffc7b..eb70c1fd4e86 100755
>> --- a/scripts/kernel-doc
>> +++ b/scripts/kernel-doc
>> @@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
>>  my $type_constant2 = '\%([-_\w]+)';
>>  my $type_func = '(\w+)\(\)';
>>  my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
>> -my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
>> +my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
>>  my $type_fp_param = '\@(\w+)\(\)';  # Special RST handling for func ptr params
>>  my $type_fp_param2 = '\@(\w+->\S+)\(\)';  # Special RST handling for structs with func ptr params
>>  my $type_env = '(\$\w+)';
> 
> Are you happy to send this as a patch?

I'm afraid I am not at the moment.

The reason is that I have never made changes in that 2500-line perl
script.  Please consider the change above as a random suggestion from
someone who doesn't/can't understand the script fully ...
> 
> I'd like to pick it into this series, so if you're happy to provide your
> Signed-off-by tag here, I'm happy to go write the commit message and so on.

If you happen to know perl well and are confident the change won't have
any side effects, I wouldn't mind if you go forward and make a patch
on your own, maybe with my Suggested-by.  That patch should also have
explicit Cc: tags to Jon and Mauro who are in the SOB chain of commit
ee2aa7590398.

        Thanks, Akira

> 
> Thanks,
> Mark.
> 
>>
>> Thoughts?
>>
>>         Thanks, Akira
>>
>>>
>>>
>>>> [1]: ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")
>>>
>>> thanks.


* Re: [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments
  2023-05-31 23:41               ` Akira Yokosawa
@ 2023-06-01 10:22                 ` Mark Rutland
  0 siblings, 0 replies; 36+ messages in thread
From: Mark Rutland @ 2023-06-01 10:22 UTC (permalink / raw)
  To: Akira Yokosawa; +Cc: Randy Dunlap, linux-kernel, corbet, linux-doc, paulmck

On Thu, Jun 01, 2023 at 08:41:29AM +0900, Akira Yokosawa wrote:
> [Keeping documentation folks in CC]
> 
> On 2023/05/30 21:42, Mark Rutland wrote:
> > Hi Akira,
> > 
> > On Fri, May 26, 2023 at 07:27:56PM +0900, Akira Yokosawa wrote:
> >> I think adding "~" to the substitution pattern added in [1] as follows
> >> should do the trick (not well tested):
> >>
> >> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
> >> index 2486689ffc7b..eb70c1fd4e86 100755
> >> --- a/scripts/kernel-doc
> >> +++ b/scripts/kernel-doc
> >> @@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
> >>  my $type_constant2 = '\%([-_\w]+)';
> >>  my $type_func = '(\w+)\(\)';
> >>  my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
> >> -my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
> >> +my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
> >>  my $type_fp_param = '\@(\w+)\(\)';  # Special RST handling for func ptr params
> >>  my $type_fp_param2 = '\@(\w+->\S+)\(\)';  # Special RST handling for structs with func ptr params
> >>  my $type_env = '(\$\w+)';
> > 
> > Are you happy to send this as a patch?
> 
> I'm afraid I am not at the moment.
> 
> The reason is that I have never made changes in that 2500-line perl
> script.  Please consider the change above as a random suggestion from
> someone who doesn't/can't understand the script fully ...
> > 
> > I'd like to pick it into this series, so if you're happy to provide your
> > Signed-off-by tag here, I'm happy to go write the commit message and so on.
> 
> If you happen to know perl well and are confident the change won't have
> any side effects, I wouldn't mind if you go forward and make a patch
> on your own, maybe with my Suggested-by.  That patch should also have
> explicit Cc: tags to Jon and Mauro who are in the SOB chain of commit
> ee2aa7590398.

Sure; I'm happy to go try that, thanks for letting me know!

Mark.


end of thread

Thread overview: 36+ messages
2023-05-22 12:24 [PATCH 00/26] locking/atomic: restructuring + kerneldoc Mark Rutland
2023-05-22 12:24 ` [PATCH 01/26] locking/atomic: arm: fix sync ops Mark Rutland
2023-05-22 12:24 ` [PATCH 02/26] locking/atomic: remove fallback comments Mark Rutland
2023-05-22 12:24 ` [PATCH 03/26] locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg Mark Rutland
2023-05-22 12:24 ` [PATCH 04/26] locking/atomic: make atomic*_{cmp,}xchg optional Mark Rutland
2023-05-22 12:24 ` [PATCH 05/26] locking/atomic: arc: add preprocessor symbols Mark Rutland
2023-05-22 12:24 ` [PATCH 06/26] locking/atomic: arm: " Mark Rutland
2023-05-22 12:24 ` [PATCH 07/26] locking/atomic: hexagon: " Mark Rutland
2023-05-22 12:24 ` [PATCH 08/26] locking/atomic: m68k: " Mark Rutland
2023-05-22 12:24 ` [PATCH 09/26] locking/atomic: parisc: " Mark Rutland
2023-05-22 12:24 ` [PATCH 10/26] locking/atomic: sh: " Mark Rutland
2023-05-22 12:24 ` [PATCH 11/26] locking/atomic: sparc: " Mark Rutland
2023-05-22 12:24 ` [PATCH 12/26] locking/atomic: x86: " Mark Rutland
2023-05-22 12:24 ` [PATCH 13/26] locking/atomic: xtensa: " Mark Rutland
2023-05-22 12:24 ` [PATCH 14/26] locking/atomic: scripts: remove bogus order parameter Mark Rutland
2023-05-22 12:24 ` [PATCH 15/26] locking/atomic: scripts: remove leftover "${mult}" Mark Rutland
2023-05-22 12:24 ` [PATCH 16/26] locking/atomic: scripts: factor out order template generation Mark Rutland
2023-05-22 12:24 ` [PATCH 18/26] locking/atomic: treewide: use raw_atomic*_<op>() Mark Rutland
2023-05-22 12:24 ` [PATCH 19/26] locking/atomic: scripts: build raw_atomic_long*() directly Mark Rutland
2023-05-22 12:24 ` [PATCH 21/26] locking/atomic: scripts: split pfx/name/sfx/order Mark Rutland
2023-05-22 12:24 ` [PATCH 22/26] locking/atomic: scripts: simplify raw_atomic_long*() definitions Mark Rutland
2023-05-22 12:24 ` [PATCH 25/26] locking/atomic: docs: Add atomic operations to the driver basic API documentation Mark Rutland
2023-05-24 14:10   ` Akira Yokosawa
2023-05-30 12:33     ` Mark Rutland
2023-05-22 12:24 ` [PATCH 26/26] locking/atomic: treewide: delete arch_atomic_*() kerneldoc Mark Rutland
2023-05-22 20:58 ` [PATCH 00/26] locking/atomic: restructuring + kerneldoc Kees Cook
     [not found] ` <20230522122429.1915021-25-mark.rutland@arm.com>
2023-05-24 14:03   ` [PATCH 24/26] locking/atomic: scripts: generate kerneldoc comments Akira Yokosawa
2023-05-24 14:11     ` Peter Zijlstra
2023-05-26  3:17       ` Akira Yokosawa
2023-05-26  4:51         ` Randy Dunlap
2023-05-26 10:27           ` Akira Yokosawa
2023-05-26 15:24             ` Randy Dunlap
2023-05-30 12:42             ` Mark Rutland
2023-05-31 23:41               ` Akira Yokosawa
2023-06-01 10:22                 ` Mark Rutland
2023-05-24 14:17   ` Peter Zijlstra
