public inbox for rust-for-linux@vger.kernel.org
* [PATCH v18 0/5] SpinLockIrq for rust
@ 2026-02-05 20:44 Lyude Paul
  2026-02-05 20:44 ` [PATCH v18 1/5] rust: Introduce interrupt module Lyude Paul
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Lyude Paul @ 2026-02-05 20:44 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

This is the latest patch series for adding Rust bindings for controlling
local processor interrupts, adding support for spinlocks in Rust that
are acquired with local processor interrupts disabled, and implementing
local interrupt controls through refcounting in the kernel.

The previous version of this patch series can be found here:

  https://lore.kernel.org/all/20260121223933.1568682-1-lyude@redhat.com/

This patch series applies on top of boqun's rust-sync branch:

  https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/?h=rust-sync

Now that the C side of these changes has landed, this patch series is
exclusively concerned with the Rust side. For more information on how
we arrived at the design for the C side of the API, see the explanation
in the previous version of the patch series.

Boqun Feng (1):
  rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers

Lyude Paul (4):
  rust: Introduce interrupt module
  rust: sync: use super::* in spinlock.rs
  rust: sync: Add SpinLockIrq
  rust: sync: Introduce SpinLockIrq::lock_with() and friends

			    FULL CHANGELOG:

Series-wide:
  V15:
    * Added a patch for fixing build errors introduced on OpenRISC
  V17:
    * Squash:
      - "rust: sync: Introduce lock::Backend::Context"
      - "rust: sync: lock: Add `Backend::BackendInContext`"
      With BackendWithContext being a separate trait, it no longer makes
      sense for these patches to be separate.
  V18:
    * Add "rust: sync: use super::* in spinlock.rs"

Patch-specific:
  * rust: Introduce interrupt module
    V10:
      * Fix documentation typos
    V11:
      * Get rid of unneeded `use bindings;`
      * Move ASSUME_DISABLED into assume_disabled()
      * Confirm using lockdep_assert_irqs_disabled() that local interrupts are
        in fact disabled when LocalInterruptDisabled::assume_disabled() is called.
  * rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers
    V18:
      * Add missing __rust_helper annotations
  * rust: sync: Add SpinLockIrq
    V10:
      * Also add support to GlobalLock
      * Documentation fixes from Dirk
    V11:
      * Add unit test requested by Daniel Almeida
    V14:
      * Improve rustdoc for SpinLockIrqBackend
    V18:
      * Add missing __rust_helper annotations
  * rust: sync: Introduce SpinLockIrq::lock_with() and friends
    V10:
      * Fix typos - Dirk
    V17:
      * Introduce `BackendWithContext`, move context-related bits into there and
        out of `Backend`.
      * Add missing #[must_use = …] for try_lock_with()
      * Remove all unsafe code from lock_with() and try_lock_with():
        Somehow I never noticed that literally none of the unsafe code in these
        two functions is needed with as_lock_in_context()...
    V18:
      * Drop the traits, just implement this on SpinLockIrq
  * rust: sync: Expose lock::Backend
    V10:
      * Fix typos - Dirk/Lyude
      * Since we're adding support for context locks to GlobalLock as well, let's
        also make sure to cover try_lock while we're at it and add try_lock_with
      * Add a private function as_lock_in_context() for handling casting from a
        Lock<T, B> to Lock<T, B::BackendInContext> so we don't have to duplicate
        safety comments
    V11:
      * Fix clippy::ref_as_ptr error in Lock::as_lock_in_context()
    V14:
      * Add benchmark results, rewrite commit message
  * rust: sync: lock/global: Rename B to G in trait bounds
  * rust: sync: Add a lifetime parameter to lock::global::GlobalGuard
  * rust: sync: lock/global: Add Backend parameter to GlobalGuard
    V17:
      * Add default parameter for generic `B` to `GlobalGuard`
  * rust: sync: lock/global: Add ContextualBackend support to GlobalLock
  * locking: Switch to _irq_{disable,enable}() variants in cleanup guards
    V10:
      * Add PREEMPT_RT build fix from Guangbo Cui

 rust/helpers/helpers.c            |   1 +
 rust/helpers/interrupt.c          |  18 ++
 rust/helpers/spinlock.c           |  15 ++
 rust/helpers/sync.c               |   5 +
 rust/kernel/interrupt.rs          |  86 ++++++++
 rust/kernel/lib.rs                |   1 +
 rust/kernel/sync.rs               |   4 +-
 rust/kernel/sync/lock/global.rs   |   3 +
 rust/kernel/sync/lock/spinlock.rs | 313 +++++++++++++++++++++++++++++-
 9 files changed, 440 insertions(+), 6 deletions(-)
 create mode 100644 rust/helpers/interrupt.c
 create mode 100644 rust/kernel/interrupt.rs



base-commit: de718b2ca866e10e2a26c259ab0493a5af411879
-- 
2.53.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v18 1/5] rust: Introduce interrupt module
  2026-02-05 20:44 [PATCH v18 0/5] SpinLockIrq for rust Lyude Paul
@ 2026-02-05 20:44 ` Lyude Paul
  2026-02-05 23:13   ` Gary Guo
  2026-02-05 20:44 ` [PATCH v18 2/5] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers Lyude Paul
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Lyude Paul @ 2026-02-05 20:44 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

This introduces a module for dealing with interrupt-disabled contexts,
including the ability to enable and disable interrupts, and to annotate
functions as expecting that IRQs are already disabled on the local CPU.
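
The refcounted, guard-based approach described above can be sketched outside
the kernel as follows. This is an illustrative model only, not kernel code:
`DISABLE_COUNT` and the thread-local counter are stand-ins invented for the
sketch; the real mechanism is the local_interrupt_disable() /
local_interrupt_enable() pair and the `LocalInterruptDisabled` type in the
patch below.

```rust
// Standalone model (NOT kernel code) of refcounted interrupt disablement
// with an RAII guard. `DISABLE_COUNT` stands in for the per-CPU refcount
// maintained by the real local_interrupt_disable()/local_interrupt_enable().
use std::cell::Cell;

thread_local! {
    static DISABLE_COUNT: Cell<u32> = Cell::new(0);
}

// Zero-sized guard, mirroring LocalInterruptDisabled in the patch below.
struct LocalInterruptDisabled;

fn local_interrupt_disable() -> LocalInterruptDisabled {
    // The first caller "disables interrupts"; nested callers only bump the
    // refcount.
    DISABLE_COUNT.with(|c| c.set(c.get() + 1));
    LocalInterruptDisabled
}

impl Drop for LocalInterruptDisabled {
    fn drop(&mut self) {
        // Interrupts are "re-enabled" only when the last guard is dropped.
        DISABLE_COUNT.with(|c| c.set(c.get() - 1));
    }
}

fn irqs_disabled() -> bool {
    DISABLE_COUNT.with(|c| c.get() > 0)
}

fn main() {
    assert!(!irqs_disabled());
    let outer = local_interrupt_disable();
    {
        let _inner = local_interrupt_disable(); // nesting is fine
        assert!(irqs_disabled());
    } // _inner dropped; interrupts stay "disabled" because of `outer`
    assert!(irqs_disabled());
    drop(outer);
    assert!(!irqs_disabled());
}
```

Dropping the guard in the reverse order of acquisition is not required by the
model; only the count reaching zero matters, which is exactly the property the
refcounting scheme provides.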

[Boqun: This is based on Lyude's work on the interrupt disable abstraction.
I ported it to the new local_interrupt_disable() mechanism to make it work
as a guard type. I cannot even take credit for this design, since
Lyude also brought up the same idea on Zulip. Anyway, this is only for
POC purposes, and of course all bugs are mine]

Signed-off-by: Lyude Paul <lyude@redhat.com>
Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Benno Lossin <lossin@kernel.org>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>

---

V10:
* Fix documentation typos
V11:
* Get rid of unneeded `use bindings;`
* Move ASSUME_DISABLED into assume_disabled()
* Confirm using lockdep_assert_irqs_disabled() that local interrupts are in
  fact disabled when LocalInterruptDisabled::assume_disabled() is called.
V18:
* Add missing __rust_helper annotations

 rust/helpers/helpers.c   |  1 +
 rust/helpers/interrupt.c | 18 +++++++++
 rust/helpers/sync.c      |  5 +++
 rust/kernel/interrupt.rs | 86 ++++++++++++++++++++++++++++++++++++++++
 rust/kernel/lib.rs       |  1 +
 5 files changed, 111 insertions(+)
 create mode 100644 rust/helpers/interrupt.c
 create mode 100644 rust/kernel/interrupt.rs

diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index a3c42e51f00a0..ff628e9c5ffdd 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
 #include "err.c"
 #include "irq.c"
 #include "fs.c"
+#include "interrupt.c"
 #include "io.c"
 #include "jump_label.c"
 #include "kunit.c"
diff --git a/rust/helpers/interrupt.c b/rust/helpers/interrupt.c
new file mode 100644
index 0000000000000..51b319bd4c006
--- /dev/null
+++ b/rust/helpers/interrupt.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/spinlock.h>
+
+__rust_helper void rust_helper_local_interrupt_disable(void)
+{
+	local_interrupt_disable();
+}
+
+__rust_helper void rust_helper_local_interrupt_enable(void)
+{
+	local_interrupt_enable();
+}
+
+__rust_helper bool rust_helper_irqs_disabled(void)
+{
+	return irqs_disabled();
+}
diff --git a/rust/helpers/sync.c b/rust/helpers/sync.c
index 82d6aff73b04f..4f474fe847c41 100644
--- a/rust/helpers/sync.c
+++ b/rust/helpers/sync.c
@@ -11,3 +11,8 @@ __rust_helper void rust_helper_lockdep_unregister_key(struct lock_class_key *k)
 {
 	lockdep_unregister_key(k);
 }
+
+__rust_helper void rust_helper_lockdep_assert_irqs_disabled(void)
+{
+	lockdep_assert_irqs_disabled();
+}
diff --git a/rust/kernel/interrupt.rs b/rust/kernel/interrupt.rs
new file mode 100644
index 0000000000000..6c8d2f58bca70
--- /dev/null
+++ b/rust/kernel/interrupt.rs
@@ -0,0 +1,86 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Interrupt controls
+//!
+//! This module allows Rust code to annotate areas of code where local processor interrupts should
+//! be disabled, along with actually disabling local processor interrupts.
+//!
+//! # ⚠️ Warning! ⚠️
+//!
+//! Using this module can be more complicated than it first appears, especially surrounding
+//! [preemptible kernels]. It's recommended to take care when using the functions and types defined
+//! here, and to familiarize yourself with the relevant documentation, including the documents
+//! linked below, before using them.
+//!
+//! # Reading material
+//!
+//! - [Software interrupts and realtime (LWN)](https://lwn.net/Articles/520076)
+//!
+//! [preemptible kernels]: https://www.kernel.org/doc/html/latest/locking/preempt-locking.html
+
+use kernel::types::NotThreadSafe;
+
+/// A guard that represents local processor interrupt disablement on preemptible kernels.
+///
+/// [`LocalInterruptDisabled`] is a guard type that represents that local processor interrupts have
+/// been disabled on a preemptible kernel.
+///
+/// Certain functions take an immutable reference of [`LocalInterruptDisabled`] in order to require
+/// that they may only be run in local-interrupt-disabled contexts on preemptible kernels.
+///
+/// This is a marker type; it has no size, and is simply used as a compile-time guarantee that local
+/// processor interrupts are disabled on preemptible kernels. Note that no guarantees about the
+/// state of interrupts are made by this type on non-preemptible kernels.
+///
+/// # Invariants
+///
+/// Local processor interrupts are disabled on preemptible kernels for as long as an object of this
+/// type exists.
+pub struct LocalInterruptDisabled(NotThreadSafe);
+
+/// Disable local processor interrupts on a preemptible kernel.
+///
+/// This function disables local processor interrupts on a preemptible kernel, and returns a
+/// [`LocalInterruptDisabled`] token as proof of this. On non-preemptible kernels, this function is
+/// a no-op.
+///
+/// **Usage of this function is discouraged** unless you are absolutely sure you know what you are
+/// doing, as kernel interfaces for Rust that deal with interrupt state will typically handle local
+/// processor interrupt state management on their own, and managing this by hand is quite
+/// error-prone.
+pub fn local_interrupt_disable() -> LocalInterruptDisabled {
+    // SAFETY: It's always safe to call `local_interrupt_disable()`.
+    unsafe { bindings::local_interrupt_disable() };
+
+    LocalInterruptDisabled(NotThreadSafe)
+}
+
+impl Drop for LocalInterruptDisabled {
+    fn drop(&mut self) {
+        // SAFETY: Per the type invariants, `local_interrupt_disable()` must have been called to
+        // create this object, hence calling the corresponding `local_interrupt_enable()` is safe.
+        unsafe { bindings::local_interrupt_enable() };
+    }
+}
+
+impl LocalInterruptDisabled {
+    /// Assume that local processor interrupts are disabled on preemptible kernels.
+    ///
+    /// This can be used for annotating code that is known to be run in contexts where local
+    /// processor interrupts are disabled on preemptible kernels. It makes no changes to the local
+    /// interrupt state on its own.
+    ///
+    /// # Safety
+    ///
+    /// For the whole lifetime `'a`, local interrupts must be disabled on preemptible kernels. This
+    /// could be, for example, a context such as an interrupt handler.
+    pub unsafe fn assume_disabled<'a>() -> &'a LocalInterruptDisabled {
+        const ASSUME_DISABLED: &LocalInterruptDisabled = &LocalInterruptDisabled(NotThreadSafe);
+
+        // Confirm they're actually disabled if lockdep is available
+        // SAFETY: It's always safe to call `lockdep_assert_irqs_disabled()`
+        unsafe { bindings::lockdep_assert_irqs_disabled() };
+
+        ASSUME_DISABLED
+    }
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index f812cf1200428..fbe830c4c54e4 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -101,6 +101,7 @@
 pub mod i2c;
 pub mod id_pool;
 pub mod init;
+pub mod interrupt;
 pub mod io;
 pub mod ioctl;
 pub mod iov;
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v18 2/5] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers
  2026-02-05 20:44 [PATCH v18 0/5] SpinLockIrq for rust Lyude Paul
  2026-02-05 20:44 ` [PATCH v18 1/5] rust: Introduce interrupt module Lyude Paul
@ 2026-02-05 20:44 ` Lyude Paul
  2026-02-05 23:13   ` Gary Guo
  2026-02-05 20:44 ` [PATCH v18 3/5] rust: sync: use super::* in spinlock.rs Lyude Paul
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Lyude Paul @ 2026-02-05 20:44 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

From: Boqun Feng <boqun.feng@gmail.com>

spin_lock_irq_disable() and spin_unlock_irq_enable() are inline
functions, so helpers are introduced in order to use them from Rust.
This is for the interrupt-disabling lock abstraction in Rust.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>

---
V18:
* Add missing __rust_helper annotations

 rust/helpers/spinlock.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c
index 4d13062cf253e..d53400c150227 100644
--- a/rust/helpers/spinlock.c
+++ b/rust/helpers/spinlock.c
@@ -36,3 +36,18 @@ __rust_helper void rust_helper_spin_assert_is_held(spinlock_t *lock)
 {
 	lockdep_assert_held(lock);
 }
+
+__rust_helper void rust_helper_spin_lock_irq_disable(spinlock_t *lock)
+{
+	spin_lock_irq_disable(lock);
+}
+
+__rust_helper void rust_helper_spin_unlock_irq_enable(spinlock_t *lock)
+{
+	spin_unlock_irq_enable(lock);
+}
+
+__rust_helper int rust_helper_spin_trylock_irq_disable(spinlock_t *lock)
+{
+	return spin_trylock_irq_disable(lock);
+}
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v18 3/5] rust: sync: use super::* in spinlock.rs
  2026-02-05 20:44 [PATCH v18 0/5] SpinLockIrq for rust Lyude Paul
  2026-02-05 20:44 ` [PATCH v18 1/5] rust: Introduce interrupt module Lyude Paul
  2026-02-05 20:44 ` [PATCH v18 2/5] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers Lyude Paul
@ 2026-02-05 20:44 ` Lyude Paul
  2026-02-05 20:44 ` [PATCH v18 4/5] rust: sync: Add SpinLockIrq Lyude Paul
  2026-02-05 20:44 ` [PATCH v18 5/5] rust: sync: Introduce SpinLockIrq::lock_with() and friends Lyude Paul
  4 siblings, 0 replies; 11+ messages in thread
From: Lyude Paul @ 2026-02-05 20:44 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

No functional changes.

Signed-off-by: Lyude Paul <lyude@redhat.com>
---
 rust/kernel/sync/lock/spinlock.rs | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index ef76fa07ca3a2..d75af32218bae 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -3,6 +3,7 @@
 //! A kernel spinlock.
 //!
 //! This module allows Rust code to use the kernel's `spinlock_t`.
+use super::*;
 
 /// Creates a [`SpinLock`] initialiser with the given name and a newly-created lock class.
 ///
@@ -82,7 +83,7 @@ macro_rules! new_spinlock {
 /// ```
 ///
 /// [`spinlock_t`]: srctree/include/linux/spinlock.h
-pub type SpinLock<T> = super::Lock<T, SpinLockBackend>;
+pub type SpinLock<T> = Lock<T, SpinLockBackend>;
 
 /// A kernel `spinlock_t` lock backend.
 pub struct SpinLockBackend;
@@ -91,13 +92,11 @@ macro_rules! new_spinlock {
 ///
 /// This is simply a type alias for a [`Guard`] returned from locking a [`SpinLock`]. It will unlock
 /// the [`SpinLock`] upon being dropped.
-///
-/// [`Guard`]: super::Guard
-pub type SpinLockGuard<'a, T> = super::Guard<'a, T, SpinLockBackend>;
+pub type SpinLockGuard<'a, T> = Guard<'a, T, SpinLockBackend>;
 
 // SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion. `relock` uses the
 // default implementation that always calls the same locking method.
-unsafe impl super::Backend for SpinLockBackend {
+unsafe impl Backend for SpinLockBackend {
     type State = bindings::spinlock_t;
     type GuardState = ();
 
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v18 4/5] rust: sync: Add SpinLockIrq
  2026-02-05 20:44 [PATCH v18 0/5] SpinLockIrq for rust Lyude Paul
                   ` (2 preceding siblings ...)
  2026-02-05 20:44 ` [PATCH v18 3/5] rust: sync: use super::* in spinlock.rs Lyude Paul
@ 2026-02-05 20:44 ` Lyude Paul
  2026-02-05 23:19   ` Gary Guo
  2026-02-05 20:44 ` [PATCH v18 5/5] rust: sync: Introduce SpinLockIrq::lock_with() and friends Lyude Paul
  4 siblings, 1 reply; 11+ messages in thread
From: Lyude Paul @ 2026-02-05 20:44 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

Add a variant of `SpinLock` that ensures interrupts are disabled in the
critical section. `lock()` will disable local interrupts if they are not
already disabled, and `unlock()` will reverse the respective operation.

[Boqun: Port to use spin_lock_irq_disable() and
spin_unlock_irq_enable()]

Signed-off-by: Lyude Paul <lyude@redhat.com>
Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

---
V10:
* Also add support to GlobalLock
* Documentation fixes from Dirk
V11:
* Add unit test requested by Daniel Almeida
V14:
* Improve rustdoc for SpinLockIrqBackend
V17:
* Update Git summary according to Benno's review
V18:
* Add missing comment change requested by Benno in V16

 rust/kernel/sync.rs               |   4 +-
 rust/kernel/sync/lock/global.rs   |   3 +
 rust/kernel/sync/lock/spinlock.rs | 228 ++++++++++++++++++++++++++++++
 3 files changed, 234 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 993dbf2caa0e3..58af785d62b99 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -27,7 +27,9 @@
 pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult};
 pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy};
 pub use lock::mutex::{new_mutex, Mutex, MutexGuard};
-pub use lock::spinlock::{new_spinlock, SpinLock, SpinLockGuard};
+pub use lock::spinlock::{
+    new_spinlock, new_spinlock_irq, SpinLock, SpinLockGuard, SpinLockIrq, SpinLockIrqGuard,
+};
 pub use locked_by::LockedBy;
 pub use refcount::Refcount;
 pub use set_once::SetOnce;
diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global.rs
index aecbdc34738fb..32efcbc870500 100644
--- a/rust/kernel/sync/lock/global.rs
+++ b/rust/kernel/sync/lock/global.rs
@@ -304,4 +304,7 @@ macro_rules! global_lock_inner {
     (backend SpinLock) => {
         $crate::sync::lock::spinlock::SpinLockBackend
     };
+    (backend SpinLockIrq) => {
+        $crate::sync::lock::spinlock::SpinLockIrqBackend
+    };
 }
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index d75af32218bae..f11a84505ba0e 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -4,6 +4,7 @@
 //!
 //! This module allows Rust code to use the kernel's `spinlock_t`.
 use super::*;
+use crate::prelude::*;
 
 /// Creates a [`SpinLock`] initialiser with the given name and a newly-created lock class.
 ///
@@ -143,3 +144,230 @@ unsafe fn assert_is_held(ptr: *mut Self::State) {
         unsafe { bindings::spin_assert_is_held(ptr) }
     }
 }
+
+/// Creates a [`SpinLockIrq`] initialiser with the given name and a newly-created lock class.
+///
+/// It uses the name if one is given, otherwise it generates one based on the file name and line
+/// number.
+#[macro_export]
+macro_rules! new_spinlock_irq {
+    ($inner:expr $(, $name:literal)? $(,)?) => {
+        $crate::sync::SpinLockIrq::new(
+            $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
+    };
+}
+pub use new_spinlock_irq;
+
+/// A variant of `SpinLock` that ensures interrupts are disabled in the critical section.
+///
+/// This is a version of [`SpinLock`] that can only be used in contexts where interrupts for the
+/// local CPU are disabled. It can be acquired in two ways:
+///
+/// - Using [`lock()`] like any other type of lock, in which case the bindings will modify the
+///   interrupt state to ensure that local processor interrupts remain disabled for at least as long
+///   as the [`SpinLockIrqGuard`] exists.
+/// - Using [`lock_with()`] in contexts where a [`LocalInterruptDisabled`] token is present and
+///   local processor interrupts are already known to be disabled, in which case the local interrupt
+///   state will not be touched. This method should be preferred if a [`LocalInterruptDisabled`]
+///   token is present in the scope.
+///
+/// For more info on spinlocks, see [`SpinLock`]. For more information on interrupts,
+/// [see the interrupt module](kernel::interrupt).
+///
+/// # Examples
+///
+/// The following example shows how to declare, allocate, initialise and access a struct (`Example`)
+/// that contains an inner struct (`Inner`) that is protected by a spinlock that requires local
+/// processor interrupts to be disabled.
+///
+/// ```
+/// use kernel::sync::{new_spinlock_irq, SpinLockIrq};
+///
+/// struct Inner {
+///     a: u32,
+///     b: u32,
+/// }
+///
+/// #[pin_data]
+/// struct Example {
+///     #[pin]
+///     c: SpinLockIrq<Inner>,
+///     #[pin]
+///     d: SpinLockIrq<Inner>,
+/// }
+///
+/// impl Example {
+///     fn new() -> impl PinInit<Self> {
+///         pin_init!(Self {
+///             c <- new_spinlock_irq!(Inner { a: 0, b: 10 }),
+///             d <- new_spinlock_irq!(Inner { a: 20, b: 30 }),
+///         })
+///     }
+/// }
+///
+/// // Allocate a boxed `Example`
+/// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
+///
+/// // Accessing an `Example` from a context where interrupts may not be disabled already.
+/// let c_guard = e.c.lock(); // interrupts are disabled now, +1 interrupt disable refcount
+/// let d_guard = e.d.lock(); // no interrupt state change, +1 interrupt disable refcount
+///
+/// assert_eq!(c_guard.a, 0);
+/// assert_eq!(c_guard.b, 10);
+/// assert_eq!(d_guard.a, 20);
+/// assert_eq!(d_guard.b, 30);
+///
+/// drop(c_guard); // Dropping c_guard will not re-enable interrupts just yet, since d_guard is
+///                // still in scope.
+/// drop(d_guard); // Last interrupt disable reference dropped here, so interrupts are re-enabled
+///                // now
+/// # Ok::<(), Error>(())
+/// ```
+///
+/// [`lock()`]: SpinLockIrq::lock
+/// [`lock_with()`]: SpinLockIrq::lock_with
+pub type SpinLockIrq<T> = super::Lock<T, SpinLockIrqBackend>;
+
+/// A kernel `spinlock_t` lock backend that can only be acquired in interrupt disabled contexts.
+pub struct SpinLockIrqBackend;
+
+/// A [`Guard`] acquired from locking a [`SpinLockIrq`] using [`lock()`].
+///
+/// This is simply a type alias for a [`Guard`] returned from locking a [`SpinLockIrq`] using
+/// [`lock()`]. It will unlock the [`SpinLockIrq`] and decrement the local processor's
+/// interrupt disablement refcount upon being dropped.
+///
+/// [`lock()`]: SpinLockIrq::lock
+/// [`lock_with()`]: SpinLockIrq::lock_with
+pub type SpinLockIrqGuard<'a, T> = Guard<'a, T, SpinLockIrqBackend>;
+
+// SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion. `relock` uses the
+// default implementation that always calls the same locking method.
+unsafe impl Backend for SpinLockIrqBackend {
+    type State = bindings::spinlock_t;
+    type GuardState = ();
+
+    unsafe fn init(
+        ptr: *mut Self::State,
+        name: *const crate::ffi::c_char,
+        key: *mut bindings::lock_class_key,
+    ) {
+        // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and
+        // `key` are valid for read indefinitely.
+        unsafe { bindings::__spin_lock_init(ptr, name, key) }
+    }
+
+    unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
+        // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
+        // memory, and that it has been initialised before.
+        unsafe { bindings::spin_lock_irq_disable(ptr) }
+    }
+
+    unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
+        // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
+        // caller is the owner of the spinlock.
+        unsafe { bindings::spin_unlock_irq_enable(ptr) }
+    }
+
+    unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
+        // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
+        let result = unsafe { bindings::spin_trylock_irq_disable(ptr) };
+
+        if result != 0 {
+            Some(())
+        } else {
+            None
+        }
+    }
+
+    unsafe fn assert_is_held(ptr: *mut Self::State) {
+        // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
+        unsafe { bindings::spin_assert_is_held(ptr) }
+    }
+}
+
+#[kunit_tests(rust_spinlock_irq_condvar)]
+mod tests {
+    use super::*;
+    use crate::{
+        sync::*,
+        workqueue::{self, impl_has_work, new_work, Work, WorkItem},
+    };
+
+    struct TestState {
+        value: u32,
+        waiter_ready: bool,
+    }
+
+    #[pin_data]
+    struct Test {
+        #[pin]
+        state: SpinLockIrq<TestState>,
+
+        #[pin]
+        state_changed: CondVar,
+
+        #[pin]
+        waiter_state_changed: CondVar,
+
+        #[pin]
+        wait_work: Work<Self>,
+    }
+
+    impl_has_work! {
+        impl HasWork<Self> for Test { self.wait_work }
+    }
+
+    impl Test {
+        pub(crate) fn new() -> Result<Arc<Self>> {
+            Arc::try_pin_init(
+                try_pin_init!(
+                    Self {
+                        state <- new_spinlock_irq!(TestState {
+                            value: 1,
+                            waiter_ready: false
+                        }),
+                        state_changed <- new_condvar!(),
+                        waiter_state_changed <- new_condvar!(),
+                        wait_work <- new_work!("IrqCondvarTest::wait_work")
+                    }
+                ),
+                GFP_KERNEL,
+            )
+        }
+    }
+
+    impl WorkItem for Test {
+        type Pointer = Arc<Self>;
+
+        fn run(this: Arc<Self>) {
+            // Wait for the test to be ready to wait for us
+            let mut state = this.state.lock();
+
+            while !state.waiter_ready {
+                this.waiter_state_changed.wait(&mut state);
+            }
+
+            // Deliver the exciting value update our test has been waiting for
+            state.value += 1;
+            this.state_changed.notify_sync();
+        }
+    }
+
+    #[test]
+    fn spinlock_irq_condvar() -> Result {
+        let testdata = Test::new()?;
+
+        let _ = workqueue::system().enqueue(testdata.clone());
+
+        // Let the updater know when we're ready to wait
+        let mut state = testdata.state.lock();
+        state.waiter_ready = true;
+        testdata.waiter_state_changed.notify_sync();
+
+        // Wait for the exciting value update
+        testdata.state_changed.wait(&mut state);
+        assert_eq!(state.value, 2);
+        Ok(())
+    }
+}
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v18 5/5] rust: sync: Introduce SpinLockIrq::lock_with() and friends
  2026-02-05 20:44 [PATCH v18 0/5] SpinLockIrq for rust Lyude Paul
                   ` (3 preceding siblings ...)
  2026-02-05 20:44 ` [PATCH v18 4/5] rust: sync: Add SpinLockIrq Lyude Paul
@ 2026-02-05 20:44 ` Lyude Paul
  2026-02-05 23:23   ` Gary Guo
  4 siblings, 1 reply; 11+ messages in thread
From: Lyude Paul @ 2026-02-05 20:44 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

`SpinLockIrq` and `SpinLock` use the exact same underlying C structure,
the only real difference being that the former uses the irq_disable()
and irq_enable() variants for locking/unlocking. These variants can
introduce some minor overhead in contexts where we already know that
local processor interrupts are disabled, so we want a way to skip
modifying the processor interrupt state in such contexts - just like
the current C API allows us to do.

In order to do this, we add some special functions for SpinLockIrq:
lock_with() and try_lock_with(), which allow acquiring the lock without
changing the interrupt state - as long as the caller can provide a
LocalInterruptDisabled reference to prove that local processor interrupts
have been disabled.

In some hacked-together benchmarks we ran, this did seem to lead to a
noticeable reduction in overhead most of the time:

  From an aarch64 VM running on a MacBook M4:
    lock() when irq is disabled, 100 times cost Delta { nanos: 500 }
    lock_with() when irq is disabled, 100 times cost Delta { nanos: 292 }
    lock() when irq is enabled, 100 times cost Delta { nanos: 834 }

    lock() when irq is disabled, 100 times cost Delta { nanos: 459 }
    lock_with() when irq is disabled, 100 times cost Delta { nanos: 291 }
    lock() when irq is enabled, 100 times cost Delta { nanos: 709 }

  From an x86_64 VM (qemu/kvm) running on a i7-13700H
    lock() when irq is disabled, 100 times cost Delta { nanos: 1002 }
    lock_with() when irq is disabled, 100 times cost Delta { nanos: 729 }
    lock() when irq is enabled, 100 times cost Delta { nanos: 1516 }

    lock() when irq is disabled, 100 times cost Delta { nanos: 754 }
    lock_with() when irq is disabled, 100 times cost Delta { nanos: 966 }
    lock() when irq is enabled, 100 times cost Delta { nanos: 1227 }

    (note that there were some runs on x86_64 where lock() on irq disabled
    vs. lock_with() on irq disabled had equivalent benchmarks, but it very
    much appeared to be a minority of test runs.)

While it's not clear how this affects real-world workloads yet, let's add
this for the time being so we can find out.

This makes it so that a `SpinLockIrq` will work like a `SpinLock` if
interrupts are disabled. So a function:

        (&'a SpinLockIrq, &'a InterruptDisabled) -> Guard<'a, .., SpinLockBackend>

makes sense. Note that due to `Guard` and `InterruptDisabled` having the
same lifetime, interrupts cannot be enabled while the Guard exists.
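The refcounting semantics this relies on can be modeled outside the kernel. The following is a user-space Rust sketch (all names hypothetical, not this series' actual API): each guard bumps a per-thread "interrupts disabled" refcount, and only dropping the last guard "re-enables" interrupts.

```rust
use std::cell::Cell;

thread_local! {
    // Stand-in for the local CPU's interrupt-disable refcount.
    static IRQ_DISABLE_COUNT: Cell<u32> = Cell::new(0);
}

struct LocalInterruptDisabled;

// Mock of local_interrupt_disable(): increment the refcount and hand
// back a guard whose Drop decrements it again.
fn local_interrupt_disable() -> LocalInterruptDisabled {
    IRQ_DISABLE_COUNT.with(|c| c.set(c.get() + 1));
    LocalInterruptDisabled
}

impl Drop for LocalInterruptDisabled {
    fn drop(&mut self) {
        IRQ_DISABLE_COUNT.with(|c| c.set(c.get() - 1));
    }
}

fn irqs_disabled() -> bool {
    IRQ_DISABLE_COUNT.with(|c| c.get() > 0)
}

fn main() {
    assert!(!irqs_disabled());
    let a = local_interrupt_disable(); // +1: "interrupts" now disabled
    let b = local_interrupt_disable(); // +1: nested, no state change
    assert!(irqs_disabled());
    drop(a); // refcount 2 -> 1: still disabled
    assert!(irqs_disabled());
    drop(b); // refcount 1 -> 0: "re-enabled"
    assert!(!irqs_disabled());
}
```

This mirrors why a `Guard` tied to the same lifetime as the `LocalInterruptDisabled` token cannot outlive the disabled section.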

Signed-off-by: Lyude Paul <lyude@redhat.com>
Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

---
This was originally two patches, but keeping them split didn't make sense
after going from BackendInContext to BackendWithContext.

V10:
* Fix typos - Dirk/Lyude
* Since we're adding support for context locks to GlobalLock as well, let's
  also make sure to cover try_lock while we're at it and add try_lock_with
* Add a private function as_lock_in_context() for handling casting from a
  Lock<T, B> to Lock<T, B::ContextualBackend> so we don't have to duplicate
  safety comments
V11:
* Fix clippy::ref_as_ptr error in Lock::as_lock_in_context()
V14:
* Add benchmark results, rewrite commit message
V17:
* Introduce `BackendWithContext`, move context-related bits into there and
  out of `Backend`.
* Add missing #[must_use = …] for try_lock_with()
* Remove all unsafe code from lock_with() and try_lock_with():
  Somehow I never noticed that literally none of the unsafe code in these
  two functions is needed with as_lock_in_context()...
V18:
* Get rid of BackendWithContext
* Just use transmute in as_lock_in_context()
* Now that we're only supporting IRQ spinlocks and not using traits, use
  the type aliases for SpinLock and SpinLockGuard
* Improve the docs now that we're not using traits.

 rust/kernel/sync/lock/spinlock.rs | 78 ++++++++++++++++++++++++++++++-
 1 file changed, 77 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index f11a84505ba0e..de736cb777e93 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -4,7 +4,10 @@
 //!
 //! This module allows Rust code to use the kernel's `spinlock_t`.
 use super::*;
-use crate::prelude::*;
+use crate::{
+    interrupt::LocalInterruptDisabled,
+    prelude::*, //
+};
 
 /// Creates a [`SpinLock`] initialiser with the given name and a newly-created lock class.
 ///
@@ -224,6 +227,45 @@ macro_rules! new_spinlock_irq {
 /// # Ok::<(), Error>(())
 /// ```
 ///
+/// The next example demonstrates locking a [`SpinLockIrq`] using [`lock_with()`] in a function
+/// which can only be called when local processor interrupts are already disabled.
+///
+/// ```
+/// use kernel::sync::{new_spinlock_irq, SpinLockIrq};
+/// use kernel::interrupt::*;
+///
+/// struct Inner {
+///     a: u32,
+/// }
+///
+/// #[pin_data]
+/// struct Example {
+///     #[pin]
+///     inner: SpinLockIrq<Inner>,
+/// }
+///
+/// impl Example {
+///     fn new() -> impl PinInit<Self> {
+///         pin_init!(Self {
+///             inner <- new_spinlock_irq!(Inner { a: 20 }),
+///         })
+///     }
+/// }
+///
+/// // Accessing an `Example` from a function that can only be called in no-interrupt contexts.
+/// fn noirq_work(e: &Example, interrupt_disabled: &LocalInterruptDisabled) {
+///     // Because we know interrupts are disabled from `interrupt_disabled`, we can skip toggling
+///     // the interrupt state by using lock_with() and the provided token.
+///     assert_eq!(e.inner.lock_with(interrupt_disabled).a, 20);
+/// }
+///
+/// # let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
+/// # let interrupt_guard = local_interrupt_disable();
+/// # noirq_work(&e, &interrupt_guard);
+/// #
+/// # Ok::<(), Error>(())
+/// ```
+///
 /// [`lock()`]: SpinLockIrq::lock
 /// [`lock_with()`]: SpinLockIrq::lock_with
 pub type SpinLockIrq<T> = super::Lock<T, SpinLockIrqBackend>;
@@ -286,6 +328,40 @@ unsafe fn assert_is_held(ptr: *mut Self::State) {
     }
 }
 
+impl<T: ?Sized> Lock<T, SpinLockIrqBackend> {
+    /// Casts the lock as a `Lock<T, SpinLockBackend>`.
+    fn as_lock_in_interrupt<'a>(&'a self, _context: &'a LocalInterruptDisabled) -> &'a SpinLock<T> {
+        // SAFETY:
+        // - `Lock<T, SpinLockBackend>` and `Lock<T, SpinLockIrqBackend>` both have identical data
+        //   layouts.
+        // - As long as local interrupts are disabled (which is proven to be true by _context), it
+        //   is safe to treat a lock with SpinLockIrqBackend as a SpinLockBackend lock.
+        unsafe { core::mem::transmute(self) }
+    }
+
+    /// Acquires the lock without modifying local interrupt state.
+    ///
+    /// This function should be used in place of the more expensive [`Lock::lock()`] function when
+    /// possible for [`SpinLockIrq`] locks.
+    pub fn lock_with<'a>(&'a self, context: &'a LocalInterruptDisabled) -> SpinLockGuard<'a, T> {
+        self.as_lock_in_interrupt(context).lock()
+    }
+
+    /// Tries to acquire the lock without modifying local interrupt state.
+    ///
+    /// This function should be used in place of the more expensive [`Lock::try_lock()`] function
+    /// when possible for [`SpinLockIrq`] locks.
+    ///
+    /// Returns a guard that can be used to access the data protected by the lock if successful.
+    #[must_use = "if unused, the lock will be immediately unlocked"]
+    pub fn try_lock_with<'a>(
+        &'a self,
+        context: &'a LocalInterruptDisabled,
+    ) -> Option<SpinLockGuard<'a, T>> {
+        self.as_lock_in_interrupt(context).try_lock()
+    }
+}
+
 #[kunit_tests(rust_spinlock_irq_condvar)]
 mod tests {
     use super::*;
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v18 1/5] rust: Introduce interrupt module
  2026-02-05 20:44 ` [PATCH v18 1/5] rust: Introduce interrupt module Lyude Paul
@ 2026-02-05 23:13   ` Gary Guo
  0 siblings, 0 replies; 11+ messages in thread
From: Gary Guo @ 2026-02-05 23:13 UTC (permalink / raw)
  To: Lyude Paul, rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu Feb 5, 2026 at 8:44 PM GMT, Lyude Paul wrote:
> This introduces a module for dealing with interrupt-disabled contexts,
> including the ability to enable and disable interrupts along with the
> ability to annotate functions as expecting that IRQs are already
> disabled on the local CPU.
>
> [Boqun: This is based on Lyude's work on interrupt disable abstraction,
> I port to the new local_interrupt_disable() mechanism to make it work
> as a guard type. I cannot even take the credit of this design, since
> Lyude also brought up the same idea in zulip. Anyway, this is only for
> POC purpose, and of course all bugs are mine]
>
> Signed-off-by: Lyude Paul <lyude@redhat.com>
> Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Reviewed-by: Benno Lossin <lossin@kernel.org>
> Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
>
> ---
>
> V10:
> * Fix documentation typos
> V11:
> * Get rid of unneeded `use bindings;`
> * Move ASSUME_DISABLED into assume_disabled()
> * Confirm using lockdep_assert_irqs_disabled() that local interrupts are in
>   fact disabled when LocalInterruptDisabled::assume_disabled() is called.
> V18:
> * Add missing __rust_helper annotations
>
>  rust/helpers/helpers.c   |  1 +
>  rust/helpers/interrupt.c | 18 +++++++++
>  rust/helpers/sync.c      |  5 +++
>  rust/kernel/interrupt.rs | 86 ++++++++++++++++++++++++++++++++++++++++
>  rust/kernel/lib.rs       |  1 +
>  5 files changed, 111 insertions(+)
>  create mode 100644 rust/helpers/interrupt.c
>  create mode 100644 rust/kernel/interrupt.rs
>
> diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
> index a3c42e51f00a0..ff628e9c5ffdd 100644
> --- a/rust/helpers/helpers.c
> +++ b/rust/helpers/helpers.c
> @@ -32,6 +32,7 @@
>  #include "err.c"
>  #include "irq.c"
>  #include "fs.c"
> +#include "interrupt.c"
>  #include "io.c"
>  #include "jump_label.c"
>  #include "kunit.c"
> diff --git a/rust/helpers/interrupt.c b/rust/helpers/interrupt.c
> new file mode 100644
> index 0000000000000..51b319bd4c006
> --- /dev/null
> +++ b/rust/helpers/interrupt.c
> @@ -0,0 +1,18 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/spinlock.h>
> +
> +__rust_helper void rust_helper_local_interrupt_disable(void)
> +{
> +	local_interrupt_disable();
> +}
> +
> +__rust_helper void rust_helper_local_interrupt_enable(void)
> +{
> +	local_interrupt_enable();
> +}
> +
> +__rust_helper bool rust_helper_irqs_disabled(void)
> +{
> +	return irqs_disabled();
> +}
> diff --git a/rust/helpers/sync.c b/rust/helpers/sync.c
> index 82d6aff73b04f..4f474fe847c41 100644
> --- a/rust/helpers/sync.c
> +++ b/rust/helpers/sync.c
> @@ -11,3 +11,8 @@ __rust_helper void rust_helper_lockdep_unregister_key(struct lock_class_key *k)
>  {
>  	lockdep_unregister_key(k);
>  }
> +
> +__rust_helper void rust_helper_lockdep_assert_irqs_disabled(void)
> +{
> +	lockdep_assert_irqs_disabled();
> +}
> diff --git a/rust/kernel/interrupt.rs b/rust/kernel/interrupt.rs
> new file mode 100644
> index 0000000000000..6c8d2f58bca70
> --- /dev/null
> +++ b/rust/kernel/interrupt.rs
> @@ -0,0 +1,86 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Interrupt controls
> +//!
> +//! This module allows Rust code to annotate areas of code where local processor interrupts should
> +//! be disabled, along with actually disabling local processor interrupts.
> +//!
> +//! # ⚠️ Warning! ⚠️
> +//!
> +//! The usage of this module can be more complicated than meets the eye, especially surrounding
> +//! [preemptible kernels]. It's recommended to take care when using the functions and types defined
> +//! here and familiarize yourself with the various documentation we have before using them, along
> +//! with the various documents we link to here.
> +//!
> +//! # Reading material
> +//!
> +//! - [Software interrupts and realtime (LWN)](https://lwn.net/Articles/520076)
> +//!
> +//! [preemptible kernels]: https://www.kernel.org/doc/html/latest/locking/preempt-locking.html
> +
> +use kernel::types::NotThreadSafe;

I think we currently more commonly use `crate::`, although I personally wouldn't
mind using `kernel::`.

> +
> +/// A guard that represents local processor interrupt disablement on preemptible kernels.
> +///
> +/// [`LocalInterruptDisabled`] is a guard type that represents that local processor interrupts have
> +/// been disabled on a preemptible kernel.
> +///
> +/// Certain functions take an immutable reference of [`LocalInterruptDisabled`] in order to require
> +/// that they may only be run in local-interrupt-disabled contexts on preemptible kernels.
> +///
> +/// This is a marker type; it has no size, and is simply used as a compile-time guarantee that local
> +/// processor interrupts are disabled on preemptible kernels. Note that no guarantees about the
> +/// state of interrupts are made by this type on non-preemptible kernels.
> +///
> +/// # Invariants
> +///
> +/// Local processor interrupts are disabled on preemptible kernels for as long as an object of this
> +/// type exists.
> +pub struct LocalInterruptDisabled(NotThreadSafe);
> +
> +/// Disable local processor interrupts on a preemptible kernel.
> +///
> +/// This function disables local processor interrupts on a preemptible kernel, and returns a
> +/// [`LocalInterruptDisabled`] token as proof of this. On non-preemptible kernels, this function is
> +/// a no-op.
> +///
> +/// **Usage of this function is discouraged** unless you are absolutely sure you know what you are
> +/// doing, as kernel interfaces for rust that deal with interrupt state will typically handle local
> +/// processor interrupt state management on their own and managing this by hand is quite error
> +/// prone.

#[inline] for this and other small wrapping functions.

This is a non-generic function. While newer versions of the Rust compiler have
started to automatically mark small functions inline when they see fit, we should
still ensure that trivial functions are annotated as such.
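The suggestion above can be shown on a compilable sketch; the `bindings` module here is a user-space stub standing in for the kernel's generated bindings, and the surrounding types are simplified from patch 1.

```rust
// User-space stub for the kernel `bindings` crate, so the suggested
// annotation can be demonstrated on a function that actually compiles.
mod bindings {
    pub fn local_interrupt_disable() {}
}

struct NotThreadSafe;
pub struct LocalInterruptDisabled(NotThreadSafe);

// The suggested change applied: the trivial, non-generic wrapper gets
// an explicit #[inline] hint instead of relying on compiler heuristics
// (for non-generic functions, #[inline] is what makes cross-crate
// inlining possible without LTO).
#[inline]
pub fn local_interrupt_disable() -> LocalInterruptDisabled {
    bindings::local_interrupt_disable();
    LocalInterruptDisabled(NotThreadSafe)
}

fn main() {
    let _guard = local_interrupt_disable();
    // Marker type: carries no data, exists purely as a compile-time token.
    assert_eq!(std::mem::size_of::<LocalInterruptDisabled>(), 0);
}
```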

> +pub fn local_interrupt_disable() -> LocalInterruptDisabled {
> +    // SAFETY: It's always safe to call `local_interrupt_disable()`.
> +    unsafe { bindings::local_interrupt_disable() };
> +
> +    LocalInterruptDisabled(NotThreadSafe)
> +}
> +
> +impl Drop for LocalInterruptDisabled {
> +    fn drop(&mut self) {
> +        // SAFETY: Per the type invariants, `local_interrupt_disable()` must have been called to
> +        // create this object, hence calling the corresponding `local_interrupt_enable()` is safe.
> +        unsafe { bindings::local_interrupt_enable() };
> +    }
> +}
> +
> +impl LocalInterruptDisabled {
> +    /// Assume that local processor interrupts are disabled on preemptible kernels.
> +    ///
> +    /// This can be used for annotating code that is known to be run in contexts where local
> +    /// processor interrupts are disabled on preemptible kernels. It makes no changes to the local
> +    /// interrupt state on its own.
> +    ///
> +    /// # Safety
> +    ///
> +    /// For the whole lifetime `'a`, local interrupts must be disabled on preemptible kernels.
> +    /// This could be, for example, a context such as an interrupt handler.
> +    pub unsafe fn assume_disabled<'a>() -> &'a LocalInterruptDisabled {
> +        const ASSUME_DISABLED: &LocalInterruptDisabled = &LocalInterruptDisabled(NotThreadSafe);
> +
> +        // Confirm they're actually disabled if lockdep is available
> +        // SAFETY: It's always safe to call `lockdep_assert_irqs_disabled()`
> +        unsafe { bindings::lockdep_assert_irqs_disabled() };
> +
> +        ASSUME_DISABLED
> +    }
> +}
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index f812cf1200428..fbe830c4c54e4 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -101,6 +101,7 @@
>  pub mod i2c;
>  pub mod id_pool;
>  pub mod init;
> +pub mod interrupt;
>  pub mod io;
>  pub mod ioctl;
>  pub mod iov;



* Re: [PATCH v18 2/5] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers
  2026-02-05 20:44 ` [PATCH v18 2/5] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers Lyude Paul
@ 2026-02-05 23:13   ` Gary Guo
  0 siblings, 0 replies; 11+ messages in thread
From: Gary Guo @ 2026-02-05 23:13 UTC (permalink / raw)
  To: Lyude Paul, rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu Feb 5, 2026 at 8:44 PM GMT, Lyude Paul wrote:
> From: Boqun Feng <boqun.feng@gmail.com>
> 
> spin_lock_irq_disable() and spin_unlock_irq_enable() are inline
> functions, to use them in Rust, helpers are introduced. This is for
> interrupt disabling lock abstraction in Rust.
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>

Reviewed-by: Gary Guo <gary@garyguo.net>

> ---
> V18:
> * Add missing __rust_helper annotations
> 
>  rust/helpers/spinlock.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)



* Re: [PATCH v18 4/5] rust: sync: Add SpinLockIrq
  2026-02-05 20:44 ` [PATCH v18 4/5] rust: sync: Add SpinLockIrq Lyude Paul
@ 2026-02-05 23:19   ` Gary Guo
  0 siblings, 0 replies; 11+ messages in thread
From: Gary Guo @ 2026-02-05 23:19 UTC (permalink / raw)
  To: Lyude Paul, rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu Feb 5, 2026 at 8:44 PM GMT, Lyude Paul wrote:
> A variant of `SpinLock` that ensures interrupts are disabled in the
> critical section. `lock()` will ensure that either interrupts are already
> disabled or disable them. `unlock()` will reverse the respective operation.
>
> [Boqun: Port to use spin_lock_irq_disable() and
> spin_unlock_irq_enable()]
>
> Signed-off-by: Lyude Paul <lyude@redhat.com>
> Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>
> ---
> V10:
> * Also add support to GlobalLock
> * Documentation fixes from Dirk
> V11:
> * Add unit test requested by Daniel Almeida
> V14:
> * Improve rustdoc for SpinLockIrqBackend
> V17:
> * Update Git summary according to Benno's review
> V18:
> * Add missing comment change requested by Benno in V16
>
>  rust/kernel/sync.rs               |   4 +-
>  rust/kernel/sync/lock/global.rs   |   3 +
>  rust/kernel/sync/lock/spinlock.rs | 228 ++++++++++++++++++++++++++++++
>  3 files changed, 234 insertions(+), 1 deletion(-)
>
> diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> index 993dbf2caa0e3..58af785d62b99 100644
> --- a/rust/kernel/sync.rs
> +++ b/rust/kernel/sync.rs
> @@ -27,7 +27,9 @@
>  pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult};
>  pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy};
>  pub use lock::mutex::{new_mutex, Mutex, MutexGuard};
> -pub use lock::spinlock::{new_spinlock, SpinLock, SpinLockGuard};
> +pub use lock::spinlock::{
> +    new_spinlock, new_spinlock_irq, SpinLock, SpinLockGuard, SpinLockIrq, SpinLockIrqGuard,
> +};

Please convert to kernel import style

>  pub use locked_by::LockedBy;
>  pub use refcount::Refcount;
>  pub use set_once::SetOnce;
> diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global.rs
> index aecbdc34738fb..32efcbc870500 100644
> --- a/rust/kernel/sync/lock/global.rs
> +++ b/rust/kernel/sync/lock/global.rs
> @@ -304,4 +304,7 @@ macro_rules! global_lock_inner {
>      (backend SpinLock) => {
>          $crate::sync::lock::spinlock::SpinLockBackend
>      };
> +    (backend SpinLockIrq) => {
> +        $crate::sync::lock::spinlock::SpinLockIrqBackend
> +    };
>  }
> diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
> index d75af32218bae..f11a84505ba0e 100644
> --- a/rust/kernel/sync/lock/spinlock.rs
> +++ b/rust/kernel/sync/lock/spinlock.rs
> @@ -4,6 +4,7 @@
>  //!
>  //! This module allows Rust code to use the kernel's `spinlock_t`.
>  use super::*;
> +use crate::prelude::*;
>  
>  /// Creates a [`SpinLock`] initialiser with the given name and a newly-created lock class.
>  ///
> @@ -143,3 +144,230 @@ unsafe fn assert_is_held(ptr: *mut Self::State) {
>          unsafe { bindings::spin_assert_is_held(ptr) }
>      }
>  }
> +
> +/// Creates a [`SpinLockIrq`] initialiser with the given name and a newly-created lock class.
> +///
> +/// It uses the name if one is given, otherwise it generates one based on the file name and line
> +/// number.
> +#[macro_export]
> +macro_rules! new_spinlock_irq {
> +    ($inner:expr $(, $name:literal)? $(,)?) => {
> +        $crate::sync::SpinLockIrq::new(
> +            $inner, $crate::optional_name!($($name)?), $crate::static_lock_class!())
> +    };
> +}
> +pub use new_spinlock_irq;
> +
> +/// A variant of `SpinLock` that ensures interrupts are disabled in the critical section.
> +///
> +/// This is a version of [`SpinLock`] that can only be used in contexts where interrupts for the
> +/// local CPU are disabled. It can be acquired in two ways:
> +///
> +/// - Using [`lock()`] like any other type of lock, in which case the bindings will modify the
> +///   interrupt state to ensure that local processor interrupts remain disabled for at least as long
> +///   as the [`SpinLockIrqGuard`] exists.
> +/// - Using [`lock_with()`] in contexts where a [`LocalInterruptDisabled`] token is present and
> +///   local processor interrupts are already known to be disabled, in which case the local interrupt
> +///   state will not be touched. This method should be preferred if a [`LocalInterruptDisabled`]
> +///   token is present in the scope.
> +///
> +/// For more info on spinlocks, see [`SpinLock`]. For more information on interrupts,
> +/// [see the interrupt module](kernel::interrupt).
> +///
> +/// # Examples
> +///
> +/// The following example shows how to declare, allocate, initialise and access a struct (`Example`)
> +/// that contains an inner struct (`Inner`) that is protected by a spinlock that requires local
> +/// processor interrupts to be disabled.
> +///
> +/// ```
> +/// use kernel::sync::{new_spinlock_irq, SpinLockIrq};
> +///
> +/// struct Inner {
> +///     a: u32,
> +///     b: u32,
> +/// }
> +///
> +/// #[pin_data]
> +/// struct Example {
> +///     #[pin]
> +///     c: SpinLockIrq<Inner>,
> +///     #[pin]
> +///     d: SpinLockIrq<Inner>,
> +/// }
> +///
> +/// impl Example {
> +///     fn new() -> impl PinInit<Self> {
> +///         pin_init!(Self {
> +///             c <- new_spinlock_irq!(Inner { a: 0, b: 10 }),
> +///             d <- new_spinlock_irq!(Inner { a: 20, b: 30 }),
> +///         })
> +///     }
> +/// }
> +///
> +/// // Allocate a boxed `Example`
> +/// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
> +///
> +/// // Accessing an `Example` from a context where interrupts may not be disabled already.
> +/// let c_guard = e.c.lock(); // interrupts are disabled now, +1 interrupt disable refcount
> +/// let d_guard = e.d.lock(); // no interrupt state change, +1 interrupt disable refcount
> +///
> +/// assert_eq!(c_guard.a, 0);
> +/// assert_eq!(c_guard.b, 10);
> +/// assert_eq!(d_guard.a, 20);
> +/// assert_eq!(d_guard.b, 30);
> +///
> +/// drop(c_guard); // Dropping c_guard will not re-enable interrupts just yet, since d_guard is
> +///                // still in scope.
> +/// drop(d_guard); // Last interrupt disable reference dropped here, so interrupts are re-enabled
> +///                // now
> +/// # Ok::<(), Error>(())
> +/// ```
> +///
> +/// [`lock()`]: SpinLockIrq::lock
> +/// [`lock_with()`]: SpinLockIrq::lock_with
> +pub type SpinLockIrq<T> = super::Lock<T, SpinLockIrqBackend>;
> +
> +/// A kernel `spinlock_t` lock backend that can only be acquired in interrupt disabled contexts.
> +pub struct SpinLockIrqBackend;
> +
> +/// A [`Guard`] acquired from locking a [`SpinLockIrq`] using [`lock()`].
> +///
> +/// This is simply a type alias for a [`Guard`] returned from locking a [`SpinLockIrq`] using
> +/// [`lock_with()`]. It will unlock the [`SpinLockIrq`] and decrement the local processor's
> +/// interrupt disablement refcount upon being dropped.
> +///
> +/// [`lock()`]: SpinLockIrq::lock
> +/// [`lock_with()`]: SpinLockIrq::lock_with
> +pub type SpinLockIrqGuard<'a, T> = Guard<'a, T, SpinLockIrqBackend>;
> +
> +// SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion. `relock` uses the
> +// default implementation that always calls the same locking method.
> +unsafe impl Backend for SpinLockIrqBackend {
> +    type State = bindings::spinlock_t;
> +    type GuardState = ();
> +

#[inline] hints for all of these functions.

> +    unsafe fn init(
> +        ptr: *mut Self::State,
> +        name: *const crate::ffi::c_char,
> +        key: *mut bindings::lock_class_key,
> +    ) {
> +        // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and
> +        // `key` are valid for read indefinitely.
> +        unsafe { bindings::__spin_lock_init(ptr, name, key) }
> +    }
> +
> +    unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
> +        // SAFETY: The safety requirements of this function ensure that `ptr` points to valid
> +        // memory, and that it has been initialised before.
> +        unsafe { bindings::spin_lock_irq_disable(ptr) }
> +    }
> +
> +    unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
> +        // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
> +        // caller is the owner of the spinlock.
> +        unsafe { bindings::spin_unlock_irq_enable(ptr) }
> +    }
> +
> +    unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
> +        // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
> +        let result = unsafe { bindings::spin_trylock_irq_disable(ptr) };
> +
> +        if result != 0 {
> +            Some(())
> +        } else {
> +            None
> +        }
> +    }
> +
> +    unsafe fn assert_is_held(ptr: *mut Self::State) {
> +        // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
> +        unsafe { bindings::spin_assert_is_held(ptr) }
> +    }
> +}
> +
> +#[kunit_tests(rust_spinlock_irq_condvar)]
> +mod tests {
> +    use super::*;
> +    use crate::{
> +        sync::*,
> +        workqueue::{self, impl_has_work, new_work, Work, WorkItem},
> +    };
> +
> +    struct TestState {
> +        value: u32,
> +        waiter_ready: bool,
> +    }
> +
> +    #[pin_data]
> +    struct Test {
> +        #[pin]
> +        state: SpinLockIrq<TestState>,
> +
> +        #[pin]
> +        state_changed: CondVar,
> +
> +        #[pin]
> +        waiter_state_changed: CondVar,
> +
> +        #[pin]
> +        wait_work: Work<Self>,
> +    }
> +
> +    impl_has_work! {
> +        impl HasWork<Self> for Test { self.wait_work }
> +    }
> +
> +    impl Test {
> +        pub(crate) fn new() -> Result<Arc<Self>> {
> +            Arc::try_pin_init(
> +                try_pin_init!(
> +                    Self {
> +                        state <- new_spinlock_irq!(TestState {
> +                            value: 1,
> +                            waiter_ready: false
> +                        }),
> +                        state_changed <- new_condvar!(),
> +                        waiter_state_changed <- new_condvar!(),
> +                        wait_work <- new_work!("IrqCondvarTest::wait_work")
> +                    }
> +                ),
> +                GFP_KERNEL,
> +            )
> +        }
> +    }
> +
> +    impl WorkItem for Test {
> +        type Pointer = Arc<Self>;
> +
> +        fn run(this: Arc<Self>) {
> +            // Wait for the test to be ready to wait for us
> +            let mut state = this.state.lock();

Perhaps throw in an irq actually disabled assertion from the module in patch 1
here?
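The suggested assertion can be sketched in user space: a mock `SpinLockIrq` whose `lock()` bumps a per-thread "interrupts disabled" refcount (mirroring the backend in patch 4), so that an `irqs_disabled()`-style check holds inside the critical section. All names here are hypothetical stand-ins, not the kernel API.

```rust
use std::cell::Cell;
use std::sync::{Mutex, MutexGuard};

thread_local! {
    // Stand-in for the local CPU's interrupt-disable refcount.
    static IRQ_OFF: Cell<u32> = Cell::new(0);
}

fn irqs_disabled() -> bool {
    IRQ_OFF.with(|c| c.get() > 0)
}

// Mock SpinLockIrq: lock() bumps the refcount, the guard's Drop
// decrements it again on unlock.
struct SpinLockIrqMock<T>(Mutex<T>);
struct IrqGuard<'a, T>(MutexGuard<'a, T>);

impl<T> SpinLockIrqMock<T> {
    fn new(t: T) -> Self {
        Self(Mutex::new(t))
    }
    fn lock(&self) -> IrqGuard<'_, T> {
        IRQ_OFF.with(|c| c.set(c.get() + 1));
        IrqGuard(self.0.lock().unwrap())
    }
}

impl<T> Drop for IrqGuard<'_, T> {
    fn drop(&mut self) {
        IRQ_OFF.with(|c| c.set(c.get() - 1));
    }
}

fn main() {
    let lock = SpinLockIrqMock::new(1u32);
    let state = lock.lock();
    // The assertion suggested above: inside the critical section,
    // the interrupt-disabled invariant must hold.
    assert!(irqs_disabled());
    assert_eq!(*state.0, 1);
    drop(state);
    assert!(!irqs_disabled());
}
```

In the actual test, the kernel-side equivalent would presumably assert via the lockdep helper from patch 1 rather than a refcount read.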

With these fixed:

Reviewed-by: Gary Guo <gary@garyguo.net>

Best,
Gary

> +
> +            while !state.waiter_ready {
> +                this.waiter_state_changed.wait(&mut state);
> +            }
> +
> +            // Deliver the exciting value update our test has been waiting for
> +            state.value += 1;
> +            this.state_changed.notify_sync();
> +        }
> +    }
> +
> +    #[test]
> +    fn spinlock_irq_condvar() -> Result {
> +        let testdata = Test::new()?;
> +
> +        let _ = workqueue::system().enqueue(testdata.clone());
> +
> +        // Let the updater know when we're ready to wait
> +        let mut state = testdata.state.lock();
> +        state.waiter_ready = true;
> +        testdata.waiter_state_changed.notify_sync();
> +
> +        // Wait for the exciting value update
> +        testdata.state_changed.wait(&mut state);
> +        assert_eq!(state.value, 2);
> +        Ok(())
> +    }
> +}



* Re: [PATCH v18 5/5] rust: sync: Introduce SpinLockIrq::lock_with() and friends
  2026-02-05 20:44 ` [PATCH v18 5/5] rust: sync: Introduce SpinLockIrq::lock_with() and friends Lyude Paul
@ 2026-02-05 23:23   ` Gary Guo
  2026-03-02 22:47     ` lyude
  0 siblings, 1 reply; 11+ messages in thread
From: Gary Guo @ 2026-02-05 23:23 UTC (permalink / raw)
  To: Lyude Paul, rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu Feb 5, 2026 at 8:44 PM GMT, Lyude Paul wrote:
> `SpinLockIrq` and `SpinLock` use the exact same underlying C structure,
> with the only real difference being that the former uses the irq_disable()
> and irq_enable() variants for locking/unlocking. These variants can
> introduce some minor overhead in contexts where we already know that
> local processor interrupts are disabled, and as such we want a way to be
> able to skip modifying processor interrupt state in said contexts in order
> to avoid some overhead - just like the current C API allows us to do.
>
> In order to do this, we add some special functions for SpinLockIrq:
> lock_with() and try_lock_with(), which allow acquiring the lock without
> changing the interrupt state - as long as the caller can provide a
> LocalInterruptDisabled reference to prove that local processor interrupts
> have been disabled.
>
> In some hacked-together benchmarks we ran, most of the time this did
> actually seem to lead to a noticeable difference in overhead:
>
>   From an aarch64 VM running on a MacBook M4:
>     lock() when irq is disabled, 100 times cost Delta { nanos: 500 }
>     lock_with() when irq is disabled, 100 times cost Delta { nanos: 292 }
>     lock() when irq is enabled, 100 times cost Delta { nanos: 834 }
>
>     lock() when irq is disabled, 100 times cost Delta { nanos: 459 }
>     lock_with() when irq is disabled, 100 times cost Delta { nanos: 291 }
>     lock() when irq is enabled, 100 times cost Delta { nanos: 709 }
>
>   From an x86_64 VM (qemu/kvm) running on a i7-13700H
>     lock() when irq is disabled, 100 times cost Delta { nanos: 1002 }
>     lock_with() when irq is disabled, 100 times cost Delta { nanos: 729 }
>     lock() when irq is enabled, 100 times cost Delta { nanos: 1516 }
>
>     lock() when irq is disabled, 100 times cost Delta { nanos: 754 }
>     lock_with() when irq is disabled, 100 times cost Delta { nanos: 966 }
>     lock() when irq is enabled, 100 times cost Delta { nanos: 1227 }
>
>     (note that there were some runs on x86_64 where lock() on irq disabled
>     vs. lock_with() on irq disabled had equivalent benchmarks, but it very
>     much appeared to be a minority of test runs.
>
> While it's not clear how this affects real-world workloads yet, let's add
> this for the time being so we can find out.
>
> This makes it so that a `SpinLockIrq` will work like a `SpinLock` if
> interrupts are disabled. So a function:
>
>         (&'a SpinLockIrq, &'a InterruptDisabled) -> Guard<'a, .., SpinLockBackend>
>
> makes sense. Note that due to `Guard` and `InterruptDisabled` having the
> same lifetime, interrupts cannot be enabled while the Guard exists.
>
> Signed-off-by: Lyude Paul <lyude@redhat.com>
> Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>
> ---
> This was originally two patches, but keeping them split didn't make sense
> after going from BackendInContext to BackendWithContext.
>
> V10:
> * Fix typos - Dirk/Lyude
> * Since we're adding support for context locks to GlobalLock as well, let's
>   also make sure to cover try_lock while we're at it and add try_lock_with
> * Add a private function as_lock_in_context() for handling casting from a
>   Lock<T, B> to Lock<T, B::ContextualBackend> so we don't have to duplicate
>   safety comments
> V11:
> * Fix clippy::ref_as_ptr error in Lock::as_lock_in_context()
> V14:
> * Add benchmark results, rewrite commit message
> V17:
> * Introduce `BackendWithContext`, move context-related bits into there and
>   out of `Backend`.
> * Add missing #[must_use = …] for try_lock_with()
> * Remove all unsafe code from lock_with() and try_lock_with():
>   Somehow I never noticed that literally none of the unsafe code in these
>   two functions is needed with as_lock_in_context()...
> V18:
> * Get rid of BackendWithContext
> * Just use transmute in as_lock_in_context()
> * Now that we're only supporting IRQ spinlocks and not using traits, use
>   the type aliases for SpinLock and SpinLockGuard
> * Improve the docs now that we're not using traits.
>
>  rust/kernel/sync/lock/spinlock.rs | 78 ++++++++++++++++++++++++++++++-
>  1 file changed, 77 insertions(+), 1 deletion(-)
>
> diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
> index f11a84505ba0e..de736cb777e93 100644
> --- a/rust/kernel/sync/lock/spinlock.rs
> +++ b/rust/kernel/sync/lock/spinlock.rs
> @@ -4,7 +4,10 @@
>  //!
>  //! This module allows Rust code to use the kernel's `spinlock_t`.
>  use super::*;
> -use crate::prelude::*;
> +use crate::{
> +    interrupt::LocalInterruptDisabled,
> +    prelude::*, //
> +};
>  
>  /// Creates a [`SpinLock`] initialiser with the given name and a newly-created lock class.
>  ///
> @@ -224,6 +227,45 @@ macro_rules! new_spinlock_irq {
>  /// # Ok::<(), Error>(())
>  /// ```
>  ///
> +/// The next example demonstrates locking a [`SpinLockIrq`] using [`lock_with()`] in a function
> +/// which can only be called when local processor interrupts are already disabled.
> +///
> +/// ```
> +/// use kernel::sync::{new_spinlock_irq, SpinLockIrq};
> +/// use kernel::interrupt::*;
> +///
> +/// struct Inner {
> +///     a: u32,
> +/// }
> +///
> +/// #[pin_data]
> +/// struct Example {
> +///     #[pin]
> +///     inner: SpinLockIrq<Inner>,
> +/// }
> +///
> +/// impl Example {
> +///     fn new() -> impl PinInit<Self> {
> +///         pin_init!(Self {
> +///             inner <- new_spinlock_irq!(Inner { a: 20 }),
> +///         })
> +///     }
> +/// }
> +///
> +/// // Accessing an `Example` from a function that can only be called in no-interrupt contexts.
> +/// fn noirq_work(e: &Example, interrupt_disabled: &LocalInterruptDisabled) {
> +///     // Because we know interrupts are disabled from `interrupt_disabled`, we can skip
> +///     // toggling the interrupt state by using lock_with() and the provided token.
> +///     assert_eq!(e.inner.lock_with(interrupt_disabled).a, 20);
> +/// }
> +///
> +/// # let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
> +/// # let interrupt_guard = local_interrupt_disable();
> +/// # noirq_work(&e, &interrupt_guard);
> +/// #
> +/// # Ok::<(), Error>(())
> +/// ```
> +///
>  /// [`lock()`]: SpinLockIrq::lock
>  /// [`lock_with()`]: SpinLockIrq::lock_with
>  pub type SpinLockIrq<T> = super::Lock<T, SpinLockIrqBackend>;
> @@ -286,6 +328,40 @@ unsafe fn assert_is_held(ptr: *mut Self::State) {
>      }
>  }
>  

This is much cleaner now compared to the previous series. Thanks!

With functions marked as `#[inline]` as appropriate:

    Reviewed-by: Gary Guo <gary@garyguo.net>

BTW: do we want to have a

    impl<'a> lock::Guard<'a, T, SpinLockIrqBackend> {
        fn as_local_irq_disabled(&self) -> &LocalInterruptDisabled;
    }

?
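To make the shape of the proposed accessor concrete, here is a toy model in
plain std Rust (not kernel code): a guard that lends out a borrowed
zero-sized proof token whose lifetime is tied to the guard, so the token
cannot outlive the critical section. `IrqDisabled` and `IrqGuard` are
hypothetical stand-ins for `LocalInterruptDisabled` and `lock::Guard`.

```rust
// Zero-sized proof token: holding a `&IrqDisabled` asserts that local
// interrupts are (notionally) disabled for the duration of the borrow.
struct IrqDisabled(());

// Stand-in for a lock guard acquired with interrupts disabled.
struct IrqGuard {
    token: IrqDisabled,
}

impl IrqGuard {
    fn acquire() -> Self {
        // In the kernel this would disable local interrupts and take the lock.
        IrqGuard { token: IrqDisabled(()) }
    }

    // The proposed accessor: borrow the proof token from the guard. The
    // returned reference cannot outlive `self`, mirroring how interrupts
    // cannot be re-enabled while the kernel guard exists.
    fn as_local_irq_disabled(&self) -> &IrqDisabled {
        &self.token
    }
}

// A function that demands proof that interrupts are disabled.
fn needs_noirq(_proof: &IrqDisabled) -> u32 {
    42
}

fn main() {
    let guard = IrqGuard::acquire();
    let v = needs_noirq(guard.as_local_irq_disabled());
    assert_eq!(v, 42);
}
```

The borrow checker enforces the invariant: the token reference cannot be
stored anywhere that outlives the guard.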

Best,
Gary

> +impl<T: ?Sized> Lock<T, SpinLockIrqBackend> {
> +    /// Casts the lock as a `Lock<T, SpinLockBackend>`.
> +    fn as_lock_in_interrupt<'a>(&'a self, _context: &'a LocalInterruptDisabled) -> &'a SpinLock<T> {
> +        // SAFETY:
> +        // - `Lock<T, SpinLockBackend>` and `Lock<T, SpinLockIrqBackend>` both have identical data
> +        //   layouts.
> +        // - As long as local interrupts are disabled (which is proven to be true by _context), it
> +        //   is safe to treat a lock with SpinLockIrqBackend as a SpinLockBackend lock.
> +        unsafe { core::mem::transmute(self) }
> +    }
> +
> +    /// Acquires the lock without modifying local interrupt state.
> +    ///
> +    /// This function should be used in place of the more expensive [`Lock::lock()`] function when
> +    /// possible for [`SpinLockIrq`] locks.
> +    pub fn lock_with<'a>(&'a self, context: &'a LocalInterruptDisabled) -> SpinLockGuard<'a, T> {
> +        self.as_lock_in_interrupt(context).lock()
> +    }
> +
> +    /// Tries to acquire the lock without modifying local interrupt state.
> +    ///
> +    /// This function should be used in place of the more expensive [`Lock::try_lock()`] function
> +    /// when possible for [`SpinLockIrq`] locks.
> +    ///
> +    /// Returns a guard that can be used to access the data protected by the lock if successful.
> +    #[must_use = "if unused, the lock will be immediately unlocked"]
> +    pub fn try_lock_with<'a>(
> +        &'a self,
> +        context: &'a LocalInterruptDisabled,
> +    ) -> Option<SpinLockGuard<'a, T>> {
> +        self.as_lock_in_interrupt(context).try_lock()
> +    }
> +}
> +
>  #[kunit_tests(rust_spinlock_irq_condvar)]
>  mod tests {
>      use super::*;


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v18 5/5] rust: sync: Introduce SpinLockIrq::lock_with() and friends
  2026-02-05 23:23   ` Gary Guo
@ 2026-03-02 22:47     ` lyude
  0 siblings, 0 replies; 11+ messages in thread
From: lyude @ 2026-03-02 22:47 UTC (permalink / raw)
  To: Gary Guo, rust-for-linux, linux-kernel, Thomas Gleixner
  Cc: Boqun Feng, Daniel Almeida, Miguel Ojeda, Alex Gaynor,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Andrew Morton, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Waiman Long

On Thu, 2026-02-05 at 23:23 +0000, Gary Guo wrote:
> On Thu Feb 5, 2026 at 8:44 PM GMT, Lyude Paul wrote:
> > [...]
> 
> This is much cleaner now compared to the previous series. Thanks!
> 
> With functions marked as `#[inline]` as appropriate:
> 
>     Reviewed-by: Gary Guo <gary@garyguo.net>
> 
> BTW: do we want to have a
> 
>     impl<'a> lock::Guard<'a, T, SpinLockIrqBackend> {
>         fn as_local_irq_disabled(&self) -> &LocalInterruptDisabled;
>     }
> 

Yes, IMO - since it lets us skip assertions to the same degree that C is
currently able to. Though I wonder if this would be better suited to
AsRef?

I will save this for a follow-up patch though, since we're so close to
getting this in right now.
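For reference, the AsRef form could look like this toy model in plain std
Rust (not kernel code): instead of a named accessor, the guard implements
`AsRef` for the proof token, so any API taking `&IrqDisabled` can accept
`guard.as_ref()`. `IrqDisabled` and `IrqGuard` are hypothetical stand-ins
for `LocalInterruptDisabled` and the lock guard type.

```rust
// Zero-sized proof token, as in the named-accessor variant.
struct IrqDisabled(());

// Stand-in for a lock guard acquired with interrupts disabled.
struct IrqGuard {
    token: IrqDisabled,
}

// The AsRef alternative: the guard converts to a borrowed proof token.
impl AsRef<IrqDisabled> for IrqGuard {
    fn as_ref(&self) -> &IrqDisabled {
        &self.token
    }
}

// A function that demands proof that interrupts are disabled.
fn needs_noirq(_proof: &IrqDisabled) -> u32 {
    7
}

fn main() {
    let guard = IrqGuard { token: IrqDisabled(()) };
    // Inference picks AsRef<IrqDisabled> from the expected parameter type.
    assert_eq!(needs_noirq(guard.as_ref()), 7);
}
```

A trade-off worth noting: `AsRef` is more composable with generic code, but
a named method like `as_local_irq_disabled()` is more self-documenting at
the call site.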

> ?
> 
> Best,
> Gary


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2026-03-02 22:47 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2026-02-05 20:44 [PATCH v18 0/5] SpinLockIrq for rust Lyude Paul
2026-02-05 20:44 ` [PATCH v18 1/5] rust: Introduce interrupt module Lyude Paul
2026-02-05 23:13   ` Gary Guo
2026-02-05 20:44 ` [PATCH v18 2/5] rust: helper: Add spin_{un,}lock_irq_{enable,disable}() helpers Lyude Paul
2026-02-05 23:13   ` Gary Guo
2026-02-05 20:44 ` [PATCH v18 3/5] rust: sync: use super::* in spinlock.rs Lyude Paul
2026-02-05 20:44 ` [PATCH v18 4/5] rust: sync: Add SpinLockIrq Lyude Paul
2026-02-05 23:19   ` Gary Guo
2026-02-05 20:44 ` [PATCH v18 5/5] rust: sync: Introduce SpinLockIrq::lock_with() and friends Lyude Paul
2026-02-05 23:23   ` Gary Guo
2026-03-02 22:47     ` lyude

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox