From: Lyude Paul <lyude@redhat.com>
To: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
Thomas Gleixner <tglx@linutronix.de>
Cc: "Boqun Feng" <boqun.feng@gmail.com>,
"Daniel Almeida" <daniel.almeida@collabora.com>,
"Miguel Ojeda" <ojeda@kernel.org>,
"Alex Gaynor" <alex.gaynor@gmail.com>,
"Gary Guo" <gary@garyguo.net>,
"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
"Benno Lossin" <lossin@kernel.org>,
"Andreas Hindborg" <a.hindborg@kernel.org>,
"Alice Ryhl" <aliceryhl@google.com>,
"Trevor Gross" <tmgross@umich.edu>,
"Danilo Krummrich" <dakr@kernel.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Peter Zijlstra" <peterz@infradead.org>,
"Ingo Molnar" <mingo@redhat.com>, "Will Deacon" <will@kernel.org>,
"Waiman Long" <longman@redhat.com>
Subject: [PATCH v14 10/16] rust: sync: lock: Add `Backend::BackendInContext`
Date: Thu, 20 Nov 2025 16:46:02 -0500
Message-ID: <20251120214616.14386-11-lyude@redhat.com>
In-Reply-To: <20251120214616.14386-1-lyude@redhat.com>
From: Boqun Feng <boqun.feng@gmail.com>
`SpinLockIrq` and `SpinLock` use the exact same underlying C structure;
the only real difference is that the former uses the irq_disable() and
irq_enable() variants for locking/unlocking. These variants introduce
some minor overhead in contexts where we already know that local
processor interrupts are disabled, and as such we want a way to skip
modifying processor interrupt state in said contexts in order to avoid
that overhead - just like the current C API allows us to do. So,
`BackendInContext` allows us to cast a lock into its contextless version
for situations where we already have whatever guarantees would be
provided by `Backend::Context` in place.

In some hacked-together benchmarks we ran, this did indeed lead to a
noticeable reduction in overhead most of the time:
From an aarch64 VM running on a MacBook M4:
lock() when irq is disabled, 100 times cost Delta { nanos: 500 }
lock_with() when irq is disabled, 100 times cost Delta { nanos: 292 }
lock() when irq is enabled, 100 times cost Delta { nanos: 834 }
lock() when irq is disabled, 100 times cost Delta { nanos: 459 }
lock_with() when irq is disabled, 100 times cost Delta { nanos: 291 }
lock() when irq is enabled, 100 times cost Delta { nanos: 709 }
From an x86_64 VM (qemu/kvm) running on an i7-13700H:
lock() when irq is disabled, 100 times cost Delta { nanos: 1002 }
lock_with() when irq is disabled, 100 times cost Delta { nanos: 729 }
lock() when irq is enabled, 100 times cost Delta { nanos: 1516 }
lock() when irq is disabled, 100 times cost Delta { nanos: 754 }
lock_with() when irq is disabled, 100 times cost Delta { nanos: 966 }
lock() when irq is enabled, 100 times cost Delta { nanos: 1227 }
(Note that there were some runs on x86_64 where lock() with irqs disabled
and lock_with() with irqs disabled benchmarked equivalently, but that very
much appeared to be a minority of test runs.)
While it's not clear how this affects real-world workloads yet, let's add
this for the time being so we can find out.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Co-developed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
---
V10:
* Fix typos - Dirk/Lyude
* Since we're adding support for context locks to GlobalLock as well, let's
also make sure to cover try_lock while we're at it and add try_lock_with
* Add a private function as_lock_in_context() for handling casting from a
Lock<T, B> to Lock<T, B::BackendInContext> so we don't have to duplicate
safety comments
V11:
* Fix clippy::ref_as_ptr error in Lock::as_lock_in_context()
V14:
* Add benchmark results, rewrite commit message
rust/kernel/sync/lock.rs | 61 ++++++++++++++++++++++++++++++-
rust/kernel/sync/lock/mutex.rs | 1 +
rust/kernel/sync/lock/spinlock.rs | 41 +++++++++++++++++++++
3 files changed, 101 insertions(+), 2 deletions(-)
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index 7df62923608b7..abc7ceff35994 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -30,10 +30,15 @@
/// is owned, that is, between calls to [`lock`] and [`unlock`].
/// - Implementers must also ensure that [`relock`] uses the same locking method as the original
/// lock operation.
+/// - Implementers must ensure that if [`BackendInContext`] is a [`Backend`], it is safe to acquire
+/// the lock under the [`Context`], and that the [`State`] of the two backends is the same.
///
/// [`lock`]: Backend::lock
/// [`unlock`]: Backend::unlock
/// [`relock`]: Backend::relock
+/// [`BackendInContext`]: Backend::BackendInContext
+/// [`Context`]: Backend::Context
+/// [`State`]: Backend::State
pub unsafe trait Backend {
/// The state required by the lock.
type State;
@@ -47,6 +52,9 @@ pub unsafe trait Backend {
/// The context which can be provided to acquire the lock with a different backend.
type Context<'a>;
+ /// The alternative backend we can use if a [`Context`](Backend::Context) is provided.
+ type BackendInContext: Sized;
+
/// Initialises the lock.
///
/// # Safety
@@ -166,10 +174,59 @@ pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a Self {
}
impl<T: ?Sized, B: Backend> Lock<T, B> {
+ /// Casts the lock as a `Lock<T, B::BackendInContext>`.
+ fn as_lock_in_context<'a>(
+ &'a self,
+ _context: B::Context<'a>,
+ ) -> &'a Lock<T, B::BackendInContext>
+ where
+ B::BackendInContext: Backend,
+ {
+ // SAFETY:
+ // - Per the safety guarantee of `Backend`, `B::BackendInContext` and `B` have the same
+ //   state, so the layout of the lock is the same and it is safe to convert one to the
+ //   other.
+ // - The caller provided `B::Context<'a>`, so it is safe to recast and return this lock.
+ unsafe { &*(core::ptr::from_ref(self) as *const _) }
+ }
+
/// Acquires the lock with the given context and gives the caller access to the data protected
/// by it.
- pub fn lock_with<'a>(&'a self, _context: B::Context<'a>) -> Guard<'a, T, B> {
- todo!()
+ pub fn lock_with<'a>(&'a self, context: B::Context<'a>) -> Guard<'a, T, B::BackendInContext>
+ where
+ B::BackendInContext: Backend,
+ {
+ let lock = self.as_lock_in_context(context);
+
+ // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
+ // that `init` was called. The safety guarantee of `Backend` ensures that `B::State` is the
+ // same as `B::BackendInContext::State`, and the provided `B::Context<'a>` makes it safe to
+ // call the alternative backend.
+ let state = unsafe { B::BackendInContext::lock(lock.state.get()) };
+
+ // SAFETY: The lock was just acquired.
+ unsafe { Guard::new(lock, state) }
+ }
+
+ /// Tries to acquire the lock with the given context.
+ ///
+ /// Returns a guard that can be used to access the data protected by the lock if successful.
+ pub fn try_lock_with<'a>(
+ &'a self,
+ context: B::Context<'a>,
+ ) -> Option<Guard<'a, T, B::BackendInContext>>
+ where
+ B::BackendInContext: Backend,
+ {
+ let lock = self.as_lock_in_context(context);
+
+ // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
+ // that `init` was called. The safety guarantee of `Backend` ensures that `B::State` is the
+ // same as `B::BackendInContext::State`, and the provided `B::Context<'a>` makes it safe to
+ // call the alternative backend.
+ unsafe {
+ B::BackendInContext::try_lock(lock.state.get()).map(|state| Guard::new(lock, state))
+ }
}
/// Acquires the lock and gives the caller access to the data protected by it.
diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
index be1e2e18cf42d..662a530750703 100644
--- a/rust/kernel/sync/lock/mutex.rs
+++ b/rust/kernel/sync/lock/mutex.rs
@@ -102,6 +102,7 @@ unsafe impl super::Backend for MutexBackend {
type State = bindings::mutex;
type GuardState = ();
type Context<'a> = ();
+ type BackendInContext = ();
unsafe fn init(
ptr: *mut Self::State,
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index 70d19a2636afe..81384ea239955 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -102,6 +102,7 @@ unsafe impl super::Backend for SpinLockBackend {
type State = bindings::spinlock_t;
type GuardState = ();
type Context<'a> = ();
+ type BackendInContext = ();
unsafe fn init(
ptr: *mut Self::State,
@@ -221,6 +222,45 @@ macro_rules! new_spinlock_irq {
/// # Ok::<(), Error>(())
/// ```
///
+/// The next example demonstrates locking a [`SpinLockIrq`] using [`lock_with()`] in a function
+/// which can only be called when local processor interrupts are already disabled.
+///
+/// ```
+/// use kernel::sync::{new_spinlock_irq, SpinLockIrq};
+/// use kernel::interrupt::*;
+///
+/// struct Inner {
+/// a: u32,
+/// }
+///
+/// #[pin_data]
+/// struct Example {
+/// #[pin]
+/// inner: SpinLockIrq<Inner>,
+/// }
+///
+/// impl Example {
+/// fn new() -> impl PinInit<Self> {
+/// pin_init!(Self {
+/// inner <- new_spinlock_irq!(Inner { a: 20 }),
+/// })
+/// }
+/// }
+///
+/// // Accessing an `Example` from a function that can only be called in no-interrupt contexts.
+/// fn noirq_work(e: &Example, interrupt_disabled: &LocalInterruptDisabled) {
+/// // Because `interrupt_disabled` proves that interrupts are disabled, we can skip toggling
+/// // interrupt state by using lock_with() with the provided token.
+/// assert_eq!(e.inner.lock_with(interrupt_disabled).a, 20);
+/// }
+///
+/// # let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
+/// # let interrupt_guard = local_interrupt_disable();
+/// # noirq_work(&e, &interrupt_guard);
+/// #
+/// # Ok::<(), Error>(())
+/// ```
+///
/// [`lock()`]: SpinLockIrq::lock
/// [`lock_with()`]: SpinLockIrq::lock_with
pub type SpinLockIrq<T> = super::Lock<T, SpinLockIrqBackend>;
@@ -245,6 +285,7 @@ unsafe impl super::Backend for SpinLockIrqBackend {
type State = bindings::spinlock_t;
type GuardState = ();
type Context<'a> = &'a LocalInterruptDisabled;
+ type BackendInContext = SpinLockBackend;
unsafe fn init(
ptr: *mut Self::State,
--
2.51.1