public inbox for linux-kernel@vger.kernel.org
From: "Onur Özkan" <work@onurozkan.dev>
To: rust-for-linux@vger.kernel.org
Cc: lossin@kernel.org, lyude@redhat.com, ojeda@kernel.org,
	alex.gaynor@gmail.com, boqun.feng@gmail.com, gary@garyguo.net,
	a.hindborg@kernel.org, aliceryhl@google.com, tmgross@umich.edu,
	dakr@kernel.org, peterz@infradead.org, mingo@redhat.com,
	will@kernel.org, longman@redhat.com, felipe_life@live.com,
	daniel@sedlak.dev, bjorn3_gh@protonmail.com,
	daniel.almeida@collabora.com, linux-kernel@vger.kernel.org,
	"Onur Özkan" <work@onurozkan.dev>
Subject: [PATCH v7 5/6] rust: ww_mutex: implement LockSet
Date: Sat,  1 Nov 2025 19:10:55 +0300
Message-ID: <20251101161056.22408-6-work@onurozkan.dev>
In-Reply-To: <20251101161056.22408-1-work@onurozkan.dev>

`LockSet` is a safe, high-level API built on top of
ww_mutex. It provides a simple interface while keeping
the ww_mutex semantics.

When `EDEADLK` is hit, it drops all held locks, resets
the acquire context and retries the user-supplied
locking algorithm until it succeeds.

Signed-off-by: Onur Özkan <work@onurozkan.dev>
---
 rust/kernel/sync/lock/ww_mutex.rs          |   6 +
 rust/kernel/sync/lock/ww_mutex/lock_set.rs | 245 +++++++++++++++++++++
 2 files changed, 251 insertions(+)
 create mode 100644 rust/kernel/sync/lock/ww_mutex/lock_set.rs

diff --git a/rust/kernel/sync/lock/ww_mutex.rs b/rust/kernel/sync/lock/ww_mutex.rs
index 2a9c1c20281b..d4c3b272912d 100644
--- a/rust/kernel/sync/lock/ww_mutex.rs
+++ b/rust/kernel/sync/lock/ww_mutex.rs
@@ -5,6 +5,10 @@
 //! It is designed to avoid deadlocks when locking multiple [`Mutex`]es
 //! that belong to the same [`Class`]. Each lock acquisition uses an
 //! [`AcquireCtx`] to track ordering and ensure forward progress.
+//!
+//! It is recommended to use [`LockSet`], as it provides a safe,
+//! high-level interface that automatically handles deadlocks,
+//! retries and context management.
 
 use crate::error::to_result;
 use crate::prelude::*;
@@ -16,9 +20,11 @@
 
 pub use acquire_ctx::AcquireCtx;
 pub use class::Class;
+pub use lock_set::LockSet;
 
 mod acquire_ctx;
 mod class;
+mod lock_set;
 
 /// A wound-wait (ww) mutex that is powered with deadlock avoidance
 /// when acquiring multiple locks of the same [`Class`].
diff --git a/rust/kernel/sync/lock/ww_mutex/lock_set.rs b/rust/kernel/sync/lock/ww_mutex/lock_set.rs
new file mode 100644
index 000000000000..ae234fd1e0be
--- /dev/null
+++ b/rust/kernel/sync/lock/ww_mutex/lock_set.rs
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Provides [`LockSet`] which automatically detects [`EDEADLK`],
+//! releases all locks, resets the state and retries the
+//! user-supplied locking algorithm until it succeeds.
+
+use super::{AcquireCtx, Class, Mutex};
+use crate::bindings;
+use crate::prelude::*;
+use crate::types::NotThreadSafe;
+use core::ptr::NonNull;
+
+/// A tracked set of [`Mutex`] locks acquired under the same [`Class`].
+///
+/// It ensures proper cleanup and retries on deadlock, and provides
+/// safe access to locked data via [`LockSet::with_locked`].
+///
+/// Typical usage is through [`LockSet::lock_all`], which retries a
+/// user-supplied locking algorithm until it succeeds without deadlock.
+pub struct LockSet<'a> {
+    acquire_ctx: Pin<KBox<AcquireCtx<'a>>>,
+    taken: KVec<RawGuard>,
+    class: &'a Class,
+}
+
+/// Used by `LockSet` to track acquired locks.
+///
+/// This type is strictly crate-private and must never be exposed
+/// outside this crate.
+struct RawGuard {
+    mutex_ptr: NonNull<bindings::ww_mutex>,
+    _not_send: NotThreadSafe,
+}
+
+impl Drop for RawGuard {
+    fn drop(&mut self) {
+        // SAFETY: `mutex_ptr` originates from a locked `Mutex` and remains
+        // valid for the lifetime of this guard, so unlocking here is sound.
+        unsafe { bindings::ww_mutex_unlock(self.mutex_ptr.as_ptr()) };
+    }
+}
+
+impl<'a> Drop for LockSet<'a> {
+    fn drop(&mut self) {
+        self.release_all_locks();
+    }
+}
+
+impl<'a> LockSet<'a> {
+    /// Creates a new [`LockSet`] with the given class.
+    ///
+    /// All locks taken through this [`LockSet`] must belong to the
+    /// same class.
+    pub fn new(class: &'a Class) -> Result<Self> {
+        Ok(Self {
+            acquire_ctx: KBox::pin_init(AcquireCtx::new(class), GFP_KERNEL)?,
+            taken: KVec::new(),
+            class,
+        })
+    }
+
+    /// Creates a new [`LockSet`] using an existing [`AcquireCtx`] and
+    /// [`Class`].
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that `acquire_ctx` is properly initialized,
+    /// holds no mutexes and that the provided `class` matches the one used
+    /// to initialize the given `acquire_ctx`.
+    pub unsafe fn new_with_acquire_ctx(
+        acquire_ctx: Pin<KBox<AcquireCtx<'a>>>,
+        class: &'a Class,
+    ) -> Self {
+        Self {
+            acquire_ctx,
+            taken: KVec::new(),
+            class,
+        }
+    }
+
+    /// Attempts to lock a [`Mutex`] and records the guard.
+    ///
+    /// Returns [`EDEADLK`] if lock ordering would cause a deadlock.
+    ///
+    /// Returns [`EBUSY`] if `mutex` was locked outside of this [`LockSet`].
+    ///
+    /// # Safety
+    ///
+    /// The given `mutex` must be created with the [`Class`] that was used
+    /// to initialize this [`LockSet`].
+    pub unsafe fn lock<T>(&mut self, mutex: &'a Mutex<'a, T>) -> Result {
+        if mutex.is_locked()
+            && !self
+                .taken
+                .iter()
+                .any(|guard| guard.mutex_ptr.as_ptr() == mutex.inner.get())
+        {
+            return Err(EBUSY);
+        }
+
+        // SAFETY: By the safety contract, `mutex` belongs to the same `Class`
+        // as `self.acquire_ctx` does.
+        let guard = unsafe { self.acquire_ctx.lock(mutex)? };
+
+        self.taken.push(
+            RawGuard {
+                // SAFETY: We just locked it above so it's a valid pointer.
+                mutex_ptr: unsafe { NonNull::new_unchecked(guard.mutex.inner.get()) },
+                _not_send: NotThreadSafe,
+            },
+            GFP_KERNEL,
+        )?;
+
+        // Avoid unlocking here; `release_all_locks` (also run by `Drop`)
+        // performs the unlock for `LockSet`.
+        core::mem::forget(guard);
+
+        Ok(())
+    }
+
+    /// Runs `locking_algorithm` until it succeeds, retrying on deadlock.
+    ///
+    /// `locking_algorithm` should attempt to acquire all needed locks.
+    /// If [`EDEADLK`] is detected, this function will roll back, reset
+    /// the context and retry automatically.
+    ///
+    /// Once all locks are acquired successfully, `on_all_locks_taken` is
+    /// invoked for exclusive access to the locked values. Afterwards, all
+    /// locks are released.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// use kernel::alloc::KBox;
+    /// use kernel::c_str;
+    /// use kernel::prelude::*;
+    /// use kernel::sync::Arc;
+    /// use kernel::sync::lock::ww_mutex::{Class, LockSet, Mutex};
+    /// use pin_init::stack_pin_init;
+    ///
+    /// stack_pin_init!(let class = Class::new_wound_wait(c_str!("test")));
+    ///
+    /// let mutex1 = Arc::pin_init(Mutex::new(0, &class), GFP_KERNEL)?;
+    /// let mutex2 = Arc::pin_init(Mutex::new(0, &class), GFP_KERNEL)?;
+    /// let mut lock_set = KBox::pin_init(LockSet::new(&class)?, GFP_KERNEL)?;
+    ///
+    /// lock_set.lock_all(
+    ///     // `locking_algorithm` closure
+    ///     |lock_set| {
+    ///         // SAFETY: Both `lock_set` and `mutex1` use the same class.
+    ///         unsafe { lock_set.lock(&mutex1)? };
+    ///
+    ///         // SAFETY: Both `lock_set` and `mutex2` use the same class.
+    ///         unsafe { lock_set.lock(&mutex2)? };
+    ///
+    ///         Ok(())
+    ///     },
+    ///     // `on_all_locks_taken` closure
+    ///     |lock_set| {
+    ///         // Safely mutate both values while holding the locks.
+    ///         lock_set.with_locked(&mutex1, |v| *v += 1)?;
+    ///         lock_set.with_locked(&mutex2, |v| *v += 1)?;
+    ///
+    ///         Ok(())
+    ///     },
+    /// )?;
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn lock_all<T, Y, Z>(
+        &mut self,
+        mut locking_algorithm: T,
+        mut on_all_locks_taken: Y,
+    ) -> Result<Z>
+    where
+        T: FnMut(&mut LockSet<'a>) -> Result,
+        Y: FnMut(&mut LockSet<'a>) -> Result<Z>,
+    {
+        loop {
+            match locking_algorithm(self) {
+                Ok(()) => {
+                    // All locks in `locking_algorithm` succeeded.
+                    // The user can now safely use them in `on_all_locks_taken`.
+                    let res = on_all_locks_taken(self);
+                    self.release_all_locks();
+
+                    return res;
+                }
+                Err(e) if e == EDEADLK => {
+                    // Deadlock detected, retry from scratch.
+                    self.cleanup_on_deadlock();
+                    continue;
+                }
+                Err(e) => {
+                    self.release_all_locks();
+                    return Err(e);
+                }
+            }
+        }
+    }
+
+    /// Executes `access` with a mutable reference to the data behind `mutex`.
+    ///
+    /// Fails with [`EINVAL`] if the mutex was not locked in this [`LockSet`].
+    pub fn with_locked<T: Unpin, Y>(
+        &mut self,
+        mutex: &'a Mutex<'a, T>,
+        access: impl for<'b> FnOnce(&'b mut T) -> Y,
+    ) -> Result<Y> {
+        let mutex_ptr = mutex.inner.get();
+
+        if self
+            .taken
+            .iter()
+            .any(|guard| guard.mutex_ptr.as_ptr() == mutex_ptr)
+        {
+            // SAFETY: We hold the lock corresponding to `mutex`, so we have
+            // exclusive access to its protected data.
+            let value = unsafe { &mut *mutex.data.get() };
+            Ok(access(value))
+        } else {
+            // `mutex` isn't locked in this `LockSet`.
+            Err(EINVAL)
+        }
+    }
+
+    /// Releases all currently held locks in this [`LockSet`].
+    fn release_all_locks(&mut self) {
+        // The `Drop` implementation of `RawGuard` takes care of the unlocking.
+        self.taken.clear();
+    }
+
+    /// Resets this [`LockSet`] after a deadlock detection.
+    ///
+    /// Drops all held locks and reinitializes the [`AcquireCtx`].
+    ///
+    /// It is intended for internal use only.
+    fn cleanup_on_deadlock(&mut self) {
+        self.release_all_locks();
+
+        // SAFETY: We are passing the same `class` that was used
+        // to initialize `self.acquire_ctx`.
+        unsafe { self.acquire_ctx.as_mut().reinit(self.class) };
+    }
+}
-- 
2.51.2


Thread overview: 29+ messages
2025-11-01 16:10 [PATCH v7 0/6] rust: add `ww_mutex` support Onur Özkan
2025-11-01 16:10 ` [PATCH v7 1/6] rust: add C wrappers for ww_mutex inline functions Onur Özkan
2025-11-21 19:08   ` Lyude Paul
2025-11-25 15:53   ` Daniel Almeida
2025-11-01 16:10 ` [PATCH v7 2/6] rust: implement `Class` for ww_class support Onur Özkan
2025-11-21 19:15   ` Lyude Paul
2025-11-27  8:57     ` Onur Özkan
2025-11-25 16:12   ` Daniel Almeida
2025-11-01 16:10 ` [PATCH v7 3/6] rust: error: add EDEADLK Onur Özkan
2025-11-21 19:49   ` Lyude Paul
2025-11-25 16:13   ` Daniel Almeida
2025-11-01 16:10 ` [PATCH v7 4/6] rust: ww_mutex: add Mutex, AcquireCtx and MutexGuard Onur Özkan
2025-11-21 21:00   ` Lyude Paul
2025-11-27  9:24     ` Onur Özkan
2025-11-28 11:37     ` Onur Özkan
2025-11-25 18:32   ` Daniel Almeida
2025-11-25 18:59     ` Onur Özkan
2025-11-01 16:10 ` Onur Özkan [this message]
2025-11-21 22:34   ` [PATCH v7 5/6] rust: ww_mutex: implement LockSet Lyude Paul
2025-11-24 15:49     ` Onur Özkan
2025-11-25 19:01       ` Daniel Almeida
2025-11-25 20:08         ` Onur Özkan
2025-11-25 21:35       ` Lyude Paul
2025-11-25 21:47         ` Daniel Almeida
2025-11-25 22:14           ` Lyude Paul
2025-11-27 10:16         ` Onur Özkan
2025-11-27 13:46           ` Daniel Almeida
2025-11-01 16:10 ` [PATCH v7 6/6] rust: add test coverage for ww_mutex implementation Onur Özkan
2025-11-10  5:28 ` [PATCH v7 0/6] rust: add `ww_mutex` support Onur Özkan
