* [PATCH 0/2] lib/atomic64: ring-buffer: Fix infinite recursion on some 32bit archs
@ 2025-01-20 23:56 Steven Rostedt
2025-01-20 23:56 ` [PATCH 1/2] ring-buffer: Do not allow events in NMI with generic atomic64 cmpxchg() Steven Rostedt
2025-01-20 23:56 ` [PATCH 2/2] atomic64: Use arch_spin_locks instead of raw_spin_locks Steven Rostedt
From: Steven Rostedt @ 2025-01-20 23:56 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Peter Zijlstra, Thomas Gleixner, Linus Torvalds, Ludwig Rydberg,
Andreas Larsson
It was reported that the tracing ring buffer code causes an infinite
recursion and crash on the riscv32 and sparc32 architectures. The reason is
that those architectures use the generic atomic64 operations, which take
raw_spin_locks. As raw_spin_locks can themselves be traced, tracing the lock
ends up back in the ring buffer code, which calls the atomic64 operations
again, and the recursion never terminates. Instead, the generic atomic64
operations should be using arch_spin_locks: they are an architecture
specific implementation, the locks that are taken should never have any
other locks taken while they are held, and they are always taken with
interrupts disabled. This means they do not need to be checked by lockdep
either.
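A simplified sketch of the recursion (an illustrative call chain, not an
actual backtrace from the report):

  ring_buffer_lock_reserve()
    atomic64 operation (CONFIG_GENERIC_ATOMIC64)
      raw_spin_lock_irqsave()
        lock tracing / lockdep hooks
          ring_buffer_lock_reserve()
            ... and so on until the stack overflows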
Another issue is that locks must not be taken in NMI context, and the ring
buffer can be called from NMI. If the architecture uses the generic atomic64
operations, do not allow events to be recorded in NMI, as the atomic64
operations are not safe in NMI context.
Steven Rostedt (2):
ring-buffer: Do not allow events in NMI with generic atomic64 cmpxchg()
atomic64: Use arch_spin_locks instead of raw_spin_locks
----
kernel/trace/ring_buffer.c | 9 ++++--
lib/atomic64.c | 78 ++++++++++++++++++++++++++++------------------
2 files changed, 55 insertions(+), 32 deletions(-)
* [PATCH 1/2] ring-buffer: Do not allow events in NMI with generic atomic64 cmpxchg()
2025-01-20 23:56 [PATCH 0/2] lib/atomic64: ring-buffer: Fix infinite recursion on some 32bit archs Steven Rostedt
@ 2025-01-20 23:56 ` Steven Rostedt
2025-01-21 7:59 ` Masami Hiramatsu
2025-01-20 23:56 ` [PATCH 2/2] atomic64: Use arch_spin_locks instead of raw_spin_locks Steven Rostedt
From: Steven Rostedt @ 2025-01-20 23:56 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Peter Zijlstra, Thomas Gleixner, Linus Torvalds, Ludwig Rydberg,
Andreas Larsson, stable
From: Steven Rostedt <rostedt@goodmis.org>
Some architectures cannot safely do atomic64 operations in NMI context.
Since the ring buffer relies on atomic64 operations for its timekeeping,
if an event is requested in NMI context, reject it for these
architectures.
Cc: stable@vger.kernel.org
Fixes: c84897c0ff592 ("ring-buffer: Remove 32bit timestamp logic")
Closes: https://lore.kernel.org/all/86fb4f86-a0e4-45a2-a2df-3154acc4f086@gaisler.com/
Reported-by: Ludwig Rydberg <ludwig.rydberg@gaisler.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
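[ Note, not part of the change itself: an illustrative sketch of why the
  generic atomic64 operations are not NMI safe. The generic implementation
  serializes with a spinlock, and an NMI can arrive while the interrupted
  context already holds that lock:

    generic_atomic64_cmpxchg()
      takes the atomic64 spinlock
        <NMI>
          rb_reserve_next_event()
            generic_atomic64_cmpxchg()
              spins on the same lock forever; the NMI never returns ]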
kernel/trace/ring_buffer.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 6d61ff78926b..b8e0ae15ca5b 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4398,8 +4398,13 @@ rb_reserve_next_event(struct trace_buffer *buffer,
int nr_loops = 0;
int add_ts_default;
- /* ring buffer does cmpxchg, make sure it is safe in NMI context */
- if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+ /*
+ * The ring buffer does cmpxchg as well as atomic64 operations
+ * (for which some archs fall back to locking), so make sure
+ * this is safe in NMI context.
+ */
+ if ((!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) ||
+ IS_ENABLED(CONFIG_GENERIC_ATOMIC64)) &&
(unlikely(in_nmi()))) {
return NULL;
}
--
2.45.2
* [PATCH 2/2] atomic64: Use arch_spin_locks instead of raw_spin_locks
2025-01-20 23:56 [PATCH 0/2] lib/atomic64: ring-buffer: Fix infinite recursion on some 32bit archs Steven Rostedt
2025-01-20 23:56 ` [PATCH 1/2] ring-buffer: Do not allow events in NMI with generic atomic64 cmpxchg() Steven Rostedt
@ 2025-01-20 23:56 ` Steven Rostedt
2025-01-21 8:04 ` Masami Hiramatsu
From: Steven Rostedt @ 2025-01-20 23:56 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Peter Zijlstra, Thomas Gleixner, Linus Torvalds, Ludwig Rydberg,
Andreas Larsson, stable
From: Steven Rostedt <rostedt@goodmis.org>
raw_spin_locks can be traced by lockdep or by the tracing infrastructure
itself, and atomic64 operations are used in that same tracing infrastructure.
When an architecture does not have true atomic64 operations it can use the
generic version, which disables interrupts and takes raw_spin_locks.

The tracing ring buffer code uses atomic64 operations for its timekeeping.
But because some architectures use the generic operations, the locking inside
the atomic64 operations can cause an infinite recursion.

As atomic64 is an architecture specific operation, it should not be using
raw_spin_locks but instead arch_spin_locks, as that is the purpose of
arch_spin_locks: to be used in architecture specific implementations of
generic infrastructure such as atomic64 operations.
Cc: stable@vger.kernel.org
Fixes: c84897c0ff592 ("ring-buffer: Remove 32bit timestamp logic")
Closes: https://lore.kernel.org/all/86fb4f86-a0e4-45a2-a2df-3154acc4f086@gaisler.com/
Reported-by: Ludwig Rydberg <ludwig.rydberg@gaisler.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
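[ Note, not part of the change itself: a minimal sketch of the locking
  pattern the conversion below switches to. arch_spin_lock() is the lowest
  level spinlock primitive with no lockdep or tracing hooks, so it cannot
  recurse back into the tracer, but the caller must disable interrupts
  itself. The example_* names are illustrative only and mirror the pattern
  used by the generic_atomic64_*() helpers in the diff: ]

#include <linux/irqflags.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static arch_spinlock_t example_lock = __ARCH_SPIN_LOCK_UNLOCKED;

static inline s64 example_locked_read(const s64 *counter)
{
	unsigned long flags;
	s64 val;

	local_irq_save(flags);		/* arch_spin_lock() does not disable interrupts */
	arch_spin_lock(&example_lock);
	val = *counter;
	arch_spin_unlock(&example_lock);
	local_irq_restore(flags);

	return val;
}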
lib/atomic64.c | 78 +++++++++++++++++++++++++++++++-------------------
1 file changed, 48 insertions(+), 30 deletions(-)
diff --git a/lib/atomic64.c b/lib/atomic64.c
index caf895789a1e..1a72bba36d24 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -25,15 +25,15 @@
* Ensure each lock is in a separate cacheline.
*/
static union {
- raw_spinlock_t lock;
+ arch_spinlock_t lock;
char pad[L1_CACHE_BYTES];
} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
[0 ... (NR_LOCKS - 1)] = {
- .lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
+ .lock = __ARCH_SPIN_LOCK_UNLOCKED,
},
};
-static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
+static inline arch_spinlock_t *lock_addr(const atomic64_t *v)
{
unsigned long addr = (unsigned long) v;
@@ -45,12 +45,14 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
s64 generic_atomic64_read(const atomic64_t *v)
{
unsigned long flags;
- raw_spinlock_t *lock = lock_addr(v);
+ arch_spinlock_t *lock = lock_addr(v);
s64 val;
- raw_spin_lock_irqsave(lock, flags);
+ local_irq_save(flags);
+ arch_spin_lock(lock);
val = v->counter;
- raw_spin_unlock_irqrestore(lock, flags);
+ arch_spin_unlock(lock);
+ local_irq_restore(flags);
return val;
}
EXPORT_SYMBOL(generic_atomic64_read);
@@ -58,11 +60,13 @@ EXPORT_SYMBOL(generic_atomic64_read);
void generic_atomic64_set(atomic64_t *v, s64 i)
{
unsigned long flags;
- raw_spinlock_t *lock = lock_addr(v);
+ arch_spinlock_t *lock = lock_addr(v);
- raw_spin_lock_irqsave(lock, flags);
+ local_irq_save(flags);
+ arch_spin_lock(lock);
v->counter = i;
- raw_spin_unlock_irqrestore(lock, flags);
+ arch_spin_unlock(lock);
+ local_irq_restore(flags);
}
EXPORT_SYMBOL(generic_atomic64_set);
@@ -70,11 +74,13 @@ EXPORT_SYMBOL(generic_atomic64_set);
void generic_atomic64_##op(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
- raw_spinlock_t *lock = lock_addr(v); \
+ arch_spinlock_t *lock = lock_addr(v); \
\
- raw_spin_lock_irqsave(lock, flags); \
+ local_irq_save(flags); \
+ arch_spin_lock(lock); \
v->counter c_op a; \
- raw_spin_unlock_irqrestore(lock, flags); \
+ arch_spin_unlock(lock); \
+ local_irq_restore(flags); \
} \
EXPORT_SYMBOL(generic_atomic64_##op);
@@ -82,12 +88,14 @@ EXPORT_SYMBOL(generic_atomic64_##op);
s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
- raw_spinlock_t *lock = lock_addr(v); \
+ arch_spinlock_t *lock = lock_addr(v); \
s64 val; \
\
- raw_spin_lock_irqsave(lock, flags); \
+ local_irq_save(flags); \
+ arch_spin_lock(lock); \
val = (v->counter c_op a); \
- raw_spin_unlock_irqrestore(lock, flags); \
+ arch_spin_unlock(lock); \
+ local_irq_restore(flags); \
return val; \
} \
EXPORT_SYMBOL(generic_atomic64_##op##_return);
@@ -96,13 +104,15 @@ EXPORT_SYMBOL(generic_atomic64_##op##_return);
s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
- raw_spinlock_t *lock = lock_addr(v); \
+ arch_spinlock_t *lock = lock_addr(v); \
s64 val; \
\
- raw_spin_lock_irqsave(lock, flags); \
+ local_irq_save(flags); \
+ arch_spin_lock(lock); \
val = v->counter; \
v->counter c_op a; \
- raw_spin_unlock_irqrestore(lock, flags); \
+ arch_spin_unlock(lock); \
+ local_irq_restore(flags); \
return val; \
} \
EXPORT_SYMBOL(generic_atomic64_fetch_##op);
@@ -131,14 +141,16 @@ ATOMIC64_OPS(xor, ^=)
s64 generic_atomic64_dec_if_positive(atomic64_t *v)
{
unsigned long flags;
- raw_spinlock_t *lock = lock_addr(v);
+ arch_spinlock_t *lock = lock_addr(v);
s64 val;
- raw_spin_lock_irqsave(lock, flags);
+ local_irq_save(flags);
+ arch_spin_lock(lock);
val = v->counter - 1;
if (val >= 0)
v->counter = val;
- raw_spin_unlock_irqrestore(lock, flags);
+ arch_spin_unlock(lock);
+ local_irq_restore(flags);
return val;
}
EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
@@ -146,14 +158,16 @@ EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
{
unsigned long flags;
- raw_spinlock_t *lock = lock_addr(v);
+ arch_spinlock_t *lock = lock_addr(v);
s64 val;
- raw_spin_lock_irqsave(lock, flags);
+ local_irq_save(flags);
+ arch_spin_lock(lock);
val = v->counter;
if (val == o)
v->counter = n;
- raw_spin_unlock_irqrestore(lock, flags);
+ arch_spin_unlock(lock);
+ local_irq_restore(flags);
return val;
}
EXPORT_SYMBOL(generic_atomic64_cmpxchg);
@@ -161,13 +175,15 @@ EXPORT_SYMBOL(generic_atomic64_cmpxchg);
s64 generic_atomic64_xchg(atomic64_t *v, s64 new)
{
unsigned long flags;
- raw_spinlock_t *lock = lock_addr(v);
+ arch_spinlock_t *lock = lock_addr(v);
s64 val;
- raw_spin_lock_irqsave(lock, flags);
+ local_irq_save(flags);
+ arch_spin_lock(lock);
val = v->counter;
v->counter = new;
- raw_spin_unlock_irqrestore(lock, flags);
+ arch_spin_unlock(lock);
+ local_irq_restore(flags);
return val;
}
EXPORT_SYMBOL(generic_atomic64_xchg);
@@ -175,14 +191,16 @@ EXPORT_SYMBOL(generic_atomic64_xchg);
s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
unsigned long flags;
- raw_spinlock_t *lock = lock_addr(v);
+ arch_spinlock_t *lock = lock_addr(v);
s64 val;
- raw_spin_lock_irqsave(lock, flags);
+ local_irq_save(flags);
+ arch_spin_lock(lock);
val = v->counter;
if (val != u)
v->counter += a;
- raw_spin_unlock_irqrestore(lock, flags);
+ arch_spin_unlock(lock);
+ local_irq_restore(flags);
return val;
}
--
2.45.2
* Re: [PATCH 1/2] ring-buffer: Do not allow events in NMI with generic atomic64 cmpxchg()
2025-01-20 23:56 ` [PATCH 1/2] ring-buffer: Do not allow events in NMI with generic atomic64 cmpxchg() Steven Rostedt
@ 2025-01-21 7:59 ` Masami Hiramatsu
From: Masami Hiramatsu @ 2025-01-21 7:59 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
Mathieu Desnoyers, Andrew Morton, Peter Zijlstra, Thomas Gleixner,
Linus Torvalds, Ludwig Rydberg, Andreas Larsson, stable
On Mon, 20 Jan 2025 18:56:56 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> From: Steven Rostedt <rostedt@goodmis.org>
>
> Some architectures cannot safely do atomic64 operations in NMI context.
> Since the ring buffer relies on atomic64 operations for its timekeeping,
> if an event is requested in NMI context, reject it for these
> architectures.
>
Looks good to me.
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Cc: stable@vger.kernel.org
> Fixes: c84897c0ff592 ("ring-buffer: Remove 32bit timestamp logic")
> Closes: https://lore.kernel.org/all/86fb4f86-a0e4-45a2-a2df-3154acc4f086@gaisler.com/
> Reported-by: Ludwig Rydberg <ludwig.rydberg@gaisler.com>
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
> kernel/trace/ring_buffer.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 6d61ff78926b..b8e0ae15ca5b 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -4398,8 +4398,13 @@ rb_reserve_next_event(struct trace_buffer *buffer,
> int nr_loops = 0;
> int add_ts_default;
>
> - /* ring buffer does cmpxchg, make sure it is safe in NMI context */
> - if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
> + /*
> + * The ring buffer does cmpxchg as well as atomic64 operations
> + * (for which some archs fall back to locking), so make sure
> + * this is safe in NMI context.
> + */
> + if ((!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) ||
> + IS_ENABLED(CONFIG_GENERIC_ATOMIC64)) &&
> (unlikely(in_nmi()))) {
> return NULL;
> }
> --
> 2.45.2
>
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH 2/2] atomic64: Use arch_spin_locks instead of raw_spin_locks
2025-01-20 23:56 ` [PATCH 2/2] atomic64: Use arch_spin_locks instead of raw_spin_locks Steven Rostedt
@ 2025-01-21 8:04 ` Masami Hiramatsu
From: Masami Hiramatsu @ 2025-01-21 8:04 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
Mathieu Desnoyers, Andrew Morton, Peter Zijlstra, Thomas Gleixner,
Linus Torvalds, Ludwig Rydberg, Andreas Larsson, stable
On Mon, 20 Jan 2025 18:56:57 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> From: Steven Rostedt <rostedt@goodmis.org>
>
> raw_spin_locks can be traced by lockdep or by the tracing infrastructure
> itself, and atomic64 operations are used in that same tracing
> infrastructure. When an architecture does not have true atomic64
> operations it can use the generic version, which disables interrupts and
> takes raw_spin_locks.
>
> The tracing ring buffer code uses atomic64 operations for its
> timekeeping. But because some architectures use the generic operations,
> the locking inside the atomic64 operations can cause an infinite
> recursion.
>
> As atomic64 is an architecture specific operation, it should not be using
> raw_spin_locks but instead arch_spin_locks, as that is the purpose of
> arch_spin_locks: to be used in architecture specific implementations of
> generic infrastructure such as atomic64 operations.
>
Looks good to me.
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Thanks,
> Cc: stable@vger.kernel.org
> Fixes: c84897c0ff592 ("ring-buffer: Remove 32bit timestamp logic")
> Closes: https://lore.kernel.org/all/86fb4f86-a0e4-45a2-a2df-3154acc4f086@gaisler.com/
> Reported-by: Ludwig Rydberg <ludwig.rydberg@gaisler.com>
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
> lib/atomic64.c | 78 +++++++++++++++++++++++++++++++-------------------
> 1 file changed, 48 insertions(+), 30 deletions(-)
>
> diff --git a/lib/atomic64.c b/lib/atomic64.c
> index caf895789a1e..1a72bba36d24 100644
> --- a/lib/atomic64.c
> +++ b/lib/atomic64.c
> @@ -25,15 +25,15 @@
> * Ensure each lock is in a separate cacheline.
> */
> static union {
> - raw_spinlock_t lock;
> + arch_spinlock_t lock;
> char pad[L1_CACHE_BYTES];
> } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
> [0 ... (NR_LOCKS - 1)] = {
> - .lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
> + .lock = __ARCH_SPIN_LOCK_UNLOCKED,
> },
> };
>
> -static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
> +static inline arch_spinlock_t *lock_addr(const atomic64_t *v)
> {
> unsigned long addr = (unsigned long) v;
>
> @@ -45,12 +45,14 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
> s64 generic_atomic64_read(const atomic64_t *v)
> {
> unsigned long flags;
> - raw_spinlock_t *lock = lock_addr(v);
> + arch_spinlock_t *lock = lock_addr(v);
> s64 val;
>
> - raw_spin_lock_irqsave(lock, flags);
> + local_irq_save(flags);
> + arch_spin_lock(lock);
> val = v->counter;
> - raw_spin_unlock_irqrestore(lock, flags);
> + arch_spin_unlock(lock);
> + local_irq_restore(flags);
> return val;
> }
> EXPORT_SYMBOL(generic_atomic64_read);
> @@ -58,11 +60,13 @@ EXPORT_SYMBOL(generic_atomic64_read);
> void generic_atomic64_set(atomic64_t *v, s64 i)
> {
> unsigned long flags;
> - raw_spinlock_t *lock = lock_addr(v);
> + arch_spinlock_t *lock = lock_addr(v);
>
> - raw_spin_lock_irqsave(lock, flags);
> + local_irq_save(flags);
> + arch_spin_lock(lock);
> v->counter = i;
> - raw_spin_unlock_irqrestore(lock, flags);
> + arch_spin_unlock(lock);
> + local_irq_restore(flags);
> }
> EXPORT_SYMBOL(generic_atomic64_set);
>
> @@ -70,11 +74,13 @@ EXPORT_SYMBOL(generic_atomic64_set);
> void generic_atomic64_##op(s64 a, atomic64_t *v) \
> { \
> unsigned long flags; \
> - raw_spinlock_t *lock = lock_addr(v); \
> + arch_spinlock_t *lock = lock_addr(v); \
> \
> - raw_spin_lock_irqsave(lock, flags); \
> + local_irq_save(flags); \
> + arch_spin_lock(lock); \
> v->counter c_op a; \
> - raw_spin_unlock_irqrestore(lock, flags); \
> + arch_spin_unlock(lock); \
> + local_irq_restore(flags); \
> } \
> EXPORT_SYMBOL(generic_atomic64_##op);
>
> @@ -82,12 +88,14 @@ EXPORT_SYMBOL(generic_atomic64_##op);
> s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v) \
> { \
> unsigned long flags; \
> - raw_spinlock_t *lock = lock_addr(v); \
> + arch_spinlock_t *lock = lock_addr(v); \
> s64 val; \
> \
> - raw_spin_lock_irqsave(lock, flags); \
> + local_irq_save(flags); \
> + arch_spin_lock(lock); \
> val = (v->counter c_op a); \
> - raw_spin_unlock_irqrestore(lock, flags); \
> + arch_spin_unlock(lock); \
> + local_irq_restore(flags); \
> return val; \
> } \
> EXPORT_SYMBOL(generic_atomic64_##op##_return);
> @@ -96,13 +104,15 @@ EXPORT_SYMBOL(generic_atomic64_##op##_return);
> s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v) \
> { \
> unsigned long flags; \
> - raw_spinlock_t *lock = lock_addr(v); \
> + arch_spinlock_t *lock = lock_addr(v); \
> s64 val; \
> \
> - raw_spin_lock_irqsave(lock, flags); \
> + local_irq_save(flags); \
> + arch_spin_lock(lock); \
> val = v->counter; \
> v->counter c_op a; \
> - raw_spin_unlock_irqrestore(lock, flags); \
> + arch_spin_unlock(lock); \
> + local_irq_restore(flags); \
> return val; \
> } \
> EXPORT_SYMBOL(generic_atomic64_fetch_##op);
> @@ -131,14 +141,16 @@ ATOMIC64_OPS(xor, ^=)
> s64 generic_atomic64_dec_if_positive(atomic64_t *v)
> {
> unsigned long flags;
> - raw_spinlock_t *lock = lock_addr(v);
> + arch_spinlock_t *lock = lock_addr(v);
> s64 val;
>
> - raw_spin_lock_irqsave(lock, flags);
> + local_irq_save(flags);
> + arch_spin_lock(lock);
> val = v->counter - 1;
> if (val >= 0)
> v->counter = val;
> - raw_spin_unlock_irqrestore(lock, flags);
> + arch_spin_unlock(lock);
> + local_irq_restore(flags);
> return val;
> }
> EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
> @@ -146,14 +158,16 @@ EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
> s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
> {
> unsigned long flags;
> - raw_spinlock_t *lock = lock_addr(v);
> + arch_spinlock_t *lock = lock_addr(v);
> s64 val;
>
> - raw_spin_lock_irqsave(lock, flags);
> + local_irq_save(flags);
> + arch_spin_lock(lock);
> val = v->counter;
> if (val == o)
> v->counter = n;
> - raw_spin_unlock_irqrestore(lock, flags);
> + arch_spin_unlock(lock);
> + local_irq_restore(flags);
> return val;
> }
> EXPORT_SYMBOL(generic_atomic64_cmpxchg);
> @@ -161,13 +175,15 @@ EXPORT_SYMBOL(generic_atomic64_cmpxchg);
> s64 generic_atomic64_xchg(atomic64_t *v, s64 new)
> {
> unsigned long flags;
> - raw_spinlock_t *lock = lock_addr(v);
> + arch_spinlock_t *lock = lock_addr(v);
> s64 val;
>
> - raw_spin_lock_irqsave(lock, flags);
> + local_irq_save(flags);
> + arch_spin_lock(lock);
> val = v->counter;
> v->counter = new;
> - raw_spin_unlock_irqrestore(lock, flags);
> + arch_spin_unlock(lock);
> + local_irq_restore(flags);
> return val;
> }
> EXPORT_SYMBOL(generic_atomic64_xchg);
> @@ -175,14 +191,16 @@ EXPORT_SYMBOL(generic_atomic64_xchg);
> s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> unsigned long flags;
> - raw_spinlock_t *lock = lock_addr(v);
> + arch_spinlock_t *lock = lock_addr(v);
> s64 val;
>
> - raw_spin_lock_irqsave(lock, flags);
> + local_irq_save(flags);
> + arch_spin_lock(lock);
> val = v->counter;
> if (val != u)
> v->counter += a;
> - raw_spin_unlock_irqrestore(lock, flags);
> + arch_spin_unlock(lock);
> + local_irq_restore(flags);
>
> return val;
> }
> --
> 2.45.2
>
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>