From: Lyude Paul <lyude@redhat.com>
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
Thomas Gleixner <tglx@linutronix.de>
Cc: "Boqun Feng" <boqun.feng@gmail.com>,
"Daniel Almeida" <daniel.almeida@collabora.com>,
"Miguel Ojeda" <ojeda@kernel.org>,
"Alex Gaynor" <alex.gaynor@gmail.com>,
"Gary Guo" <gary@garyguo.net>,
"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
"Benno Lossin" <lossin@kernel.org>,
"Andreas Hindborg" <a.hindborg@kernel.org>,
"Alice Ryhl" <aliceryhl@google.com>,
"Trevor Gross" <tmgross@umich.edu>,
"Danilo Krummrich" <dakr@kernel.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Peter Zijlstra" <peterz@infradead.org>,
"Ingo Molnar" <mingo@redhat.com>, "Will Deacon" <will@kernel.org>,
"Waiman Long" <longman@redhat.com>
Subject: [PATCH v14 04/16] irq & spin_lock: Add counted interrupt disabling/enabling
Date: Thu, 20 Nov 2025 16:45:56 -0500
Message-ID: <20251120214616.14386-5-lyude@redhat.com>
In-Reply-To: <20251120214616.14386-1-lyude@redhat.com>
From: Boqun Feng <boqun.feng@gmail.com>
Currently, nested interrupt disabling and enabling is provided by the
_irqsave() and _irqrestore() APIs, which are relatively unsafe. For
example:

	<interrupts are enabled at the beginning>
	spin_lock_irqsave(l1, flags1);
	spin_lock_irqsave(l2, flags2);
	spin_unlock_irqrestore(l1, flags1);
	<l2 is still held but interrupts are enabled>
	// accesses to interrupt-disable protected data will cause races.
This is even easier to trigger with guard facilities:

	unsigned long flags2;

	scoped_guard(spin_lock_irqsave, l1) {
		spin_lock_irqsave(l2, flags2);
	}
	// l2 is locked but interrupts are enabled.
	spin_unlock_irqrestore(l2, flags2);
(Hand-over-hand locking critical sections like this are not uncommon in
fine-grained lock designs.)

Because of this unsafety, Rust cannot easily wrap interrupt-disabling
locks in a safe API, which complicates the design.
To resolve this, introduce a new set of interrupt disabling APIs:
* local_interrupt_disable();
* local_interrupt_enable();
They work like local_irq_save() and local_irq_restore(), except that 1)
the outermost local_interrupt_disable() call saves the interrupt state
into a percpu variable, so that the outermost local_interrupt_enable()
can restore the state, and 2) a percpu counter records the nesting
level of these calls, so that interrupts are not accidentally enabled
inside the outermost critical section.
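
For illustration, the intended nesting semantics look like this (a
sketch; the counter lives in the preempt count via the HARDIRQ_DISABLE_*
bits introduced earlier in this series):

	local_interrupt_disable();  // 0 -> 1: IRQ state saved, IRQs disabled
	local_interrupt_disable();  // 1 -> 2: nested, no state change
	local_interrupt_enable();   // 2 -> 1: IRQs remain disabled
	local_interrupt_enable();   // 1 -> 0: saved IRQ state restored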
Also add the corresponding spinlock primitives, spin_lock_irq_disable()
and spin_unlock_irq_enable(). As a result, code like the following:

	spin_lock_irq_disable(l1);
	spin_lock_irq_disable(l2);
	spin_unlock_irq_enable(l1);
	// Interrupts are still disabled.
	spin_unlock_irq_enable(l2);

no longer suffers from interrupts being accidentally enabled while l2
is held.
This also makes it easier to design Rust wrappers for
interrupt-disabling locks.
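
For instance, the guard-based example above can be written safely with
the new primitives (a sketch only: cleanup-guard support for these
variants is wired up in the final patch of this series, so the guard
name here is illustrative):

	scoped_guard(spin_lock_irq_disable, l1) {
		spin_lock_irq_disable(l2);
	}
	// Interrupts are still disabled, as required while l2 is held.
	spin_unlock_irq_enable(l2);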
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
---
V10:
* Add missing __raw_spin_lock_irq_disable() definition in spinlock.c
V11:
* Move definition of spin_trylock_irq_disable() into this commit
* Get rid of leftover space
* Remove unneeded preempt_disable()/preempt_enable()
V12:
* Move local_interrupt_enable()/local_interrupt_disable() out of
include/linux/spinlock.h, into include/linux/irqflags.h
V14:
* Move local_interrupt_enable()/disable() again, this time into its own
header, interrupt_rc.h, in order to fix a hexagon-specific build issue
caught by the CKI bot.
The reason this is needed is that on most architectures, irqflags.h
ends up including <arch/smp.h>, which provides the definition of the
raw_smp_processor_id() function that we depend on like so:

	local_interrupt_disable()
	  -> raw_cpu_write()            /* <linux/percpu-defs.h> */
	    -> raw_smp_processor_id()   /* <arch/smp.h> */

Unfortunately, hexagon appears to be one architecture that does not
pull in <arch/smp.h> by default here, causing kernel builds to fail
with a claim that raw_smp_processor_id() is undeclared:
	In file included from kernel/sched/rq-offsets.c:5:
	In file included from kernel/sched/sched.h:8:
	In file included from include/linux/sched/affinity.h:1:
	In file included from include/linux/sched.h:37:
	In file included from include/linux/spinlock.h:59:
	>> include/linux/irqflags.h:277:3: error: call to undeclared function
	   'raw_smp_processor_id'; ISO C99 and later do not support implicit
	   function declarations [-Wimplicit-function-declaration]
	     277 |         raw_cpu_write(local_interrupt_disable_state.flags, flags);
	         |         ^
	include/linux/percpu-defs.h:413:34: note: expanded from macro
	'raw_cpu_write'
While including <arch/smp.h> in <linux/irqflags.h> does fix the build
on hexagon, it ends up breaking the build on x86_64:
	In file included from kernel/sched/rq-offsets.c:5:
	In file included from kernel/sched/sched.h:8:
	In file included from ./include/linux/sched/affinity.h:1:
	In file included from ./include/linux/sched.h:13:
	In file included from ./arch/x86/include/asm/processor.h:25:
	In file included from ./arch/x86/include/asm/special_insns.h:10:
	In file included from ./include/linux/irqflags.h:22:
	In file included from ./arch/x86/include/asm/smp.h:6:
	In file included from ./include/linux/thread_info.h:60:
	In file included from ./arch/x86/include/asm/thread_info.h:59:
	./arch/x86/include/asm/cpufeature.h:110:40: error: use of undeclared
	identifier 'boot_cpu_data'
	  [cap_byte] "i" (&((const char *)boot_cpu_data.x86_capability)[bit >> 3])
	                                  ^
While boot_cpu_data is defined in <asm/processor.h>, it's not possible
for us to include that header in irqflags.h because, in this chain, we
are already inside of <asm/processor.h>.

As a result, I concluded that there's no reasonable way to keep these
functions in <linux/irqflags.h>, given how many low-level asm headers
depend on it. So, we go with the solution of simply giving them their
own header file.
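
With the new header, code that only needs the counted interrupt API can
include it directly; <linux/spinlock.h> also pulls it in. A minimal
usage sketch (the function name is hypothetical):

	#include <linux/interrupt_rc.h>

	static void frob_protected_data(void)	/* hypothetical */
	{
		local_interrupt_disable();
		/* ... access interrupt-disable protected data ... */
		local_interrupt_enable();
	}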
 include/linux/interrupt_rc.h     | 61 ++++++++++++++++++++++++++++++++
 include/linux/preempt.h          |  4 +++
 include/linux/spinlock.h         | 25 +++++++++++++
 include/linux/spinlock_api_smp.h | 27 ++++++++++++++
 include/linux/spinlock_api_up.h  |  8 +++++
 include/linux/spinlock_rt.h      | 15 ++++++++
 kernel/locking/spinlock.c        | 29 +++++++++++++++
 kernel/softirq.c                 |  3 ++
 8 files changed, 172 insertions(+)
create mode 100644 include/linux/interrupt_rc.h
diff --git a/include/linux/interrupt_rc.h b/include/linux/interrupt_rc.h
new file mode 100644
index 0000000000000..2f131f8ef1d61
--- /dev/null
+++ b/include/linux/interrupt_rc.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * include/linux/interrupt_rc.h - refcounted local processor interrupt
+ * management.
+ *
+ * Since the implementation of this API currently depends on
+ * local_irq_save()/local_irq_restore(), we split this into its own header to
+ * make it easier to include without hitting circular header dependencies.
+ */
+
+#ifndef __LINUX_INTERRUPT_RC_H
+#define __LINUX_INTERRUPT_RC_H
+
+#include <linux/irqflags.h>
+#include <asm/processor.h>
+#include <asm/smp.h>
+
+/* Per-cpu interrupt disabling state for local_interrupt_{disable,enable}() */
+struct interrupt_disable_state {
+	unsigned long flags;
+};
+
+DECLARE_PER_CPU(struct interrupt_disable_state, local_interrupt_disable_state);
+
+static inline void local_interrupt_disable(void)
+{
+	unsigned long flags;
+	int new_count;
+
+	new_count = hardirq_disable_enter();
+
+	if ((new_count & HARDIRQ_DISABLE_MASK) == HARDIRQ_DISABLE_OFFSET) {
+		local_irq_save(flags);
+		raw_cpu_write(local_interrupt_disable_state.flags, flags);
+	}
+}
+
+static inline void local_interrupt_enable(void)
+{
+	int new_count;
+
+	new_count = hardirq_disable_exit();
+
+	if ((new_count & HARDIRQ_DISABLE_MASK) == 0) {
+		unsigned long flags;
+
+		flags = raw_cpu_read(local_interrupt_disable_state.flags);
+		local_irq_restore(flags);
+		/*
+		 * TODO: re-reading the preempt count here could be avoided,
+		 * but that would require should_resched() to take the
+		 * current preempt count as a parameter.
+		 */
+#ifdef CONFIG_PREEMPTION
+		if (should_resched(0))
+			__preempt_schedule();
+#endif
+	}
+}
+
+#endif /* !__LINUX_INTERRUPT_RC_H */
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 860769a717c10..2f2ee9006f544 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -153,6 +153,10 @@ static __always_inline unsigned char interrupt_context_level(void)
#define in_softirq() (softirq_count())
#define in_interrupt() (irq_count())
+#define hardirq_disable_count() ((preempt_count() & HARDIRQ_DISABLE_MASK) >> HARDIRQ_DISABLE_SHIFT)
+#define hardirq_disable_enter() __preempt_count_add_return(HARDIRQ_DISABLE_OFFSET)
+#define hardirq_disable_exit() __preempt_count_sub_return(HARDIRQ_DISABLE_OFFSET)
+
/*
* The preempt_count offset after preempt_disable();
*/
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index d3561c4a080e2..bbbee61c6f5df 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -57,6 +57,7 @@
#include <linux/linkage.h>
#include <linux/compiler.h>
#include <linux/irqflags.h>
+#include <linux/interrupt_rc.h>
#include <linux/thread_info.h>
#include <linux/stringify.h>
#include <linux/bottom_half.h>
@@ -272,9 +273,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
#endif
#define raw_spin_lock_irq(lock) _raw_spin_lock_irq(lock)
+#define raw_spin_lock_irq_disable(lock) _raw_spin_lock_irq_disable(lock)
#define raw_spin_lock_bh(lock) _raw_spin_lock_bh(lock)
#define raw_spin_unlock(lock) _raw_spin_unlock(lock)
#define raw_spin_unlock_irq(lock) _raw_spin_unlock_irq(lock)
+#define raw_spin_unlock_irq_enable(lock) _raw_spin_unlock_irq_enable(lock)
#define raw_spin_unlock_irqrestore(lock, flags) \
do { \
@@ -300,6 +303,13 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
1 : ({ local_irq_restore(flags); 0; }); \
})
+#define raw_spin_trylock_irq_disable(lock) \
+({ \
+	local_interrupt_disable(); \
+	raw_spin_trylock(lock) ? \
+	1 : ({ local_interrupt_enable(); 0; }); \
+})
+
#ifndef CONFIG_PREEMPT_RT
/* Include rwlock functions for !RT */
#include <linux/rwlock.h>
@@ -376,6 +386,11 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
raw_spin_lock_irq(&lock->rlock);
}
+static __always_inline void spin_lock_irq_disable(spinlock_t *lock)
+{
+	raw_spin_lock_irq_disable(&lock->rlock);
+}
+
#define spin_lock_irqsave(lock, flags) \
do { \
raw_spin_lock_irqsave(spinlock_check(lock), flags); \
@@ -401,6 +416,11 @@ static __always_inline void spin_unlock_irq(spinlock_t *lock)
raw_spin_unlock_irq(&lock->rlock);
}
+static __always_inline void spin_unlock_irq_enable(spinlock_t *lock)
+{
+	raw_spin_unlock_irq_enable(&lock->rlock);
+}
+
static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
{
raw_spin_unlock_irqrestore(&lock->rlock, flags);
@@ -421,6 +441,11 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
})
+static __always_inline int spin_trylock_irq_disable(spinlock_t *lock)
+{
+	return raw_spin_trylock_irq_disable(&lock->rlock);
+}
+
/**
* spin_is_locked() - Check whether a spinlock is locked.
* @lock: Pointer to the spinlock.
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index 9ecb0ab504e32..92532103b9eaa 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -28,6 +28,8 @@ _raw_spin_lock_nest_lock(raw_spinlock_t *lock, struct lockdep_map *map)
void __lockfunc _raw_spin_lock_bh(raw_spinlock_t *lock) __acquires(lock);
void __lockfunc _raw_spin_lock_irq(raw_spinlock_t *lock)
__acquires(lock);
+void __lockfunc _raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+								__acquires(lock);
unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock)
__acquires(lock);
@@ -39,6 +41,7 @@ int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock);
void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock);
void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock);
void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock);
+void __lockfunc _raw_spin_unlock_irq_enable(raw_spinlock_t *lock) __releases(lock);
void __lockfunc
_raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
__releases(lock);
@@ -55,6 +58,11 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
#define _raw_spin_lock_irq(lock) __raw_spin_lock_irq(lock)
#endif
+/* Use the same config as spin_lock_irq() temporarily. */
+#ifdef CONFIG_INLINE_SPIN_LOCK_IRQ
+#define _raw_spin_lock_irq_disable(lock) __raw_spin_lock_irq_disable(lock)
+#endif
+
#ifdef CONFIG_INLINE_SPIN_LOCK_IRQSAVE
#define _raw_spin_lock_irqsave(lock) __raw_spin_lock_irqsave(lock)
#endif
@@ -79,6 +87,11 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
#define _raw_spin_unlock_irq(lock) __raw_spin_unlock_irq(lock)
#endif
+/* Use the same config as spin_unlock_irq() temporarily. */
+#ifdef CONFIG_INLINE_SPIN_UNLOCK_IRQ
+#define _raw_spin_unlock_irq_enable(lock) __raw_spin_unlock_irq_enable(lock)
+#endif
+
#ifdef CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE
#define _raw_spin_unlock_irqrestore(lock, flags) __raw_spin_unlock_irqrestore(lock, flags)
#endif
@@ -120,6 +133,13 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
}
+static inline void __raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	local_interrupt_disable();
+	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
+}
+
static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
{
__local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
@@ -160,6 +180,13 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
preempt_enable();
}
+static inline void __raw_spin_unlock_irq_enable(raw_spinlock_t *lock)
+{
+	spin_release(&lock->dep_map, _RET_IP_);
+	do_raw_spin_unlock(lock);
+	local_interrupt_enable();
+}
+
static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
{
spin_release(&lock->dep_map, _RET_IP_);
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index 819aeba1c87e6..d02a73671713b 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -36,6 +36,9 @@
#define __LOCK_IRQ(lock) \
do { local_irq_disable(); __LOCK(lock); } while (0)
+#define __LOCK_IRQ_DISABLE(lock) \
+  do { local_interrupt_disable(); __LOCK(lock); } while (0)
+
#define __LOCK_IRQSAVE(lock, flags) \
do { local_irq_save(flags); __LOCK(lock); } while (0)
@@ -52,6 +55,9 @@
#define __UNLOCK_IRQ(lock) \
do { local_irq_enable(); __UNLOCK(lock); } while (0)
+#define __UNLOCK_IRQ_ENABLE(lock) \
+  do { __UNLOCK(lock); local_interrupt_enable(); } while (0)
+
#define __UNLOCK_IRQRESTORE(lock, flags) \
do { local_irq_restore(flags); __UNLOCK(lock); } while (0)
@@ -64,6 +70,7 @@
#define _raw_read_lock_bh(lock) __LOCK_BH(lock)
#define _raw_write_lock_bh(lock) __LOCK_BH(lock)
#define _raw_spin_lock_irq(lock) __LOCK_IRQ(lock)
+#define _raw_spin_lock_irq_disable(lock) __LOCK_IRQ_DISABLE(lock)
#define _raw_read_lock_irq(lock) __LOCK_IRQ(lock)
#define _raw_write_lock_irq(lock) __LOCK_IRQ(lock)
#define _raw_spin_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags)
@@ -80,6 +87,7 @@
#define _raw_write_unlock_bh(lock) __UNLOCK_BH(lock)
#define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock)
#define _raw_spin_unlock_irq(lock) __UNLOCK_IRQ(lock)
+#define _raw_spin_unlock_irq_enable(lock) __UNLOCK_IRQ_ENABLE(lock)
#define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock)
#define _raw_write_unlock_irq(lock) __UNLOCK_IRQ(lock)
#define _raw_spin_unlock_irqrestore(lock, flags) \
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index f6499c37157df..074182f7cfeea 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -93,6 +93,11 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
rt_spin_lock(lock);
}
+static __always_inline void spin_lock_irq_disable(spinlock_t *lock)
+{
+	rt_spin_lock(lock);
+}
+
#define spin_lock_irqsave(lock, flags) \
do { \
typecheck(unsigned long, flags); \
@@ -116,12 +121,22 @@ static __always_inline void spin_unlock_irq(spinlock_t *lock)
rt_spin_unlock(lock);
}
+static __always_inline void spin_unlock_irq_enable(spinlock_t *lock)
+{
+	rt_spin_unlock(lock);
+}
+
static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
unsigned long flags)
{
rt_spin_unlock(lock);
}
+static __always_inline int spin_trylock_irq_disable(spinlock_t *lock)
+{
+	return rt_spin_trylock(lock);
+}
+
#define spin_trylock(lock) \
__cond_lock(lock, rt_spin_trylock(lock))
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 7685defd7c526..da54b220b5a45 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -125,6 +125,19 @@ static void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock) \
*/
BUILD_LOCK_OPS(spin, raw_spinlock);
+/* No rwlock_t variants for now, so just build this function by hand */
+static void __lockfunc __raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	for (;;) {
+		local_interrupt_disable();
+		if (likely(do_raw_spin_trylock(lock)))
+			break;
+		local_interrupt_enable();
+
+		arch_spin_relax(&lock->raw_lock);
+	}
+}
+
#ifndef CONFIG_PREEMPT_RT
BUILD_LOCK_OPS(read, rwlock);
BUILD_LOCK_OPS(write, rwlock);
@@ -172,6 +185,14 @@ noinline void __lockfunc _raw_spin_lock_irq(raw_spinlock_t *lock)
EXPORT_SYMBOL(_raw_spin_lock_irq);
#endif
+#ifndef CONFIG_INLINE_SPIN_LOCK_IRQ
+noinline void __lockfunc _raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	__raw_spin_lock_irq_disable(lock);
+}
+EXPORT_SYMBOL_GPL(_raw_spin_lock_irq_disable);
+#endif
+
#ifndef CONFIG_INLINE_SPIN_LOCK_BH
noinline void __lockfunc _raw_spin_lock_bh(raw_spinlock_t *lock)
{
@@ -204,6 +225,14 @@ noinline void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock)
EXPORT_SYMBOL(_raw_spin_unlock_irq);
#endif
+#ifndef CONFIG_INLINE_SPIN_UNLOCK_IRQ
+noinline void __lockfunc _raw_spin_unlock_irq_enable(raw_spinlock_t *lock)
+{
+	__raw_spin_unlock_irq_enable(lock);
+}
+EXPORT_SYMBOL_GPL(_raw_spin_unlock_irq_enable);
+#endif
+
#ifndef CONFIG_INLINE_SPIN_UNLOCK_BH
noinline void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)
{
diff --git a/kernel/softirq.c b/kernel/softirq.c
index af47ea23aba3b..b681545eabbbe 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -88,6 +88,9 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
#endif
+DEFINE_PER_CPU(struct interrupt_disable_state, local_interrupt_disable_state);
+EXPORT_PER_CPU_SYMBOL_GPL(local_interrupt_disable_state);
+
DEFINE_PER_CPU(unsigned int, nmi_nesting);
/*
--
2.51.1
Thread overview: 25+ messages
2025-11-20 21:45 [PATCH v14 00/16] Refcounted interrupts, SpinLockIrq for rust Lyude Paul
2025-11-20 21:45 ` [PATCH v14 01/16] preempt: Introduce HARDIRQ_DISABLE_BITS Lyude Paul
2025-11-20 21:45 ` [PATCH v14 02/16] preempt: Track NMI nesting to separate per-CPU counter Lyude Paul
2025-11-20 21:45 ` [PATCH v14 03/16] preempt: Introduce __preempt_count_{sub, add}_return() Lyude Paul
2025-11-20 21:45 ` Lyude Paul [this message]
2025-11-20 21:45 ` [PATCH v14 05/16] irq: Add KUnit test for refcounted interrupt enable/disable Lyude Paul
2025-11-20 21:45 ` [PATCH v14 06/16] rust: Introduce interrupt module Lyude Paul
2025-11-20 21:45 ` [PATCH v14 07/16] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers Lyude Paul
2025-11-20 21:46 ` [PATCH v14 08/16] rust: sync: Add SpinLockIrq Lyude Paul
2025-11-20 21:46 ` [PATCH v14 09/16] rust: sync: Introduce lock::Backend::Context Lyude Paul
2025-11-20 21:46 ` [PATCH v14 10/16] rust: sync: lock: Add `Backend::BackendInContext` Lyude Paul
2025-11-20 21:46 ` [PATCH v14 11/16] rust: sync: lock/global: Rename B to G in trait bounds Lyude Paul
2025-11-20 21:46 ` [PATCH v14 12/16] rust: sync: Add a lifetime parameter to lock::global::GlobalGuard Lyude Paul
2025-11-20 21:46 ` [PATCH v14 13/16] rust: sync: Expose lock::Backend Lyude Paul
2025-11-20 21:46 ` [PATCH v14 14/16] rust: sync: lock/global: Add Backend parameter to GlobalGuard Lyude Paul
2025-11-20 21:46 ` [PATCH v14 15/16] rust: sync: lock/global: Add BackendInContext support to GlobalLock Lyude Paul
2025-11-20 21:46 ` [PATCH v14 16/16] locking: Switch to _irq_{disable,enable}() variants in cleanup guards Lyude Paul
2025-11-20 23:16 ` [PATCH v14 00/16] Refcounted interrupts, SpinLockIrq for rust John Hubbard
2025-11-21 17:47 ` Boqun Feng
2025-11-22 1:09 ` John Hubbard
2025-11-22 2:38 ` Boqun Feng
2025-11-22 2:56 ` John Hubbard
2025-11-22 3:35 ` Boqun Feng
2025-11-22 4:14 ` John Hubbard
2025-11-22 4:24 ` Boqun Feng