From: Oliver Mangold <oliver.mangold@pm.me>
To: Andreas Hindborg <a.hindborg@kernel.org>
Cc: "Miguel Ojeda" <ojeda@kernel.org>,
"Alex Gaynor" <alex.gaynor@gmail.com>,
"Boqun Feng" <boqun.feng@gmail.com>,
"Gary Guo" <gary@garyguo.net>,
"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
"Benno Lossin" <benno.lossin@proton.me>,
"Alice Ryhl" <aliceryhl@google.com>,
"Trevor Gross" <tmgross@umich.edu>,
linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org
Subject: [PATCH v3] rust: adding UniqueRefCounted and UniqueRef types
Date: Mon, 03 Mar 2025 13:29:36 +0000 [thread overview]
Message-ID: <Z8Wuud2UQX6Yukyr@mango> (raw)
In-Reply-To: <87frjxncsx.fsf@kernel.org>
From 5bdbcd54855fed6ad9ae6bcc53dba3aab2c6c6b1 Mon Sep 17 00:00:00 2001
From: Oliver Mangold <oliver.mangold@pm.me>
Date: Fri, 21 Feb 2025 08:36:46 +0100
Subject: [PATCH] rust: adding UniqueRefCounted and UniqueRef types
Add `UniqueRef` as a variant of `ARef` that is guaranteed to be unique.
This is useful when mutable access to the underlying type is required
and we can guarantee uniqueness, and when APIs that would normally take
an `ARef` require uniqueness.
Signed-off-by: Oliver Mangold <oliver.mangold@pm.me>
---
rust/kernel/types.rs | 315 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 315 insertions(+)
diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
index 55ddd50e8aaa..7ea0a266caa5 100644
--- a/rust/kernel/types.rs
+++ b/rust/kernel/types.rs
@@ -543,6 +543,12 @@ fn from(b: &T) -> Self {
}
}
+impl<T: UniqueRefCounted> From<UniqueRef<T>> for ARef<T> {
+ fn from(b: UniqueRef<T>) -> Self {
+ UniqueRefCounted::unique_to_shared(b)
+ }
+}
+
impl<T: AlwaysRefCounted> Drop for ARef<T> {
fn drop(&mut self) {
// SAFETY: The type invariants guarantee that the `ARef` owns the reference we're about to
@@ -551,6 +557,315 @@ fn drop(&mut self) {
}
}
+/// Types that are [`AlwaysRefCounted`] and can be safely converted to a [`UniqueRef`].
+///
+/// # Safety
+///
+/// Implementers must ensure that the methods of the trait
+/// change the reference count of the underlying object such that:
+/// - the uniqueness invariant is upheld, i.e. it is not possible
+/// to obtain another reference by any means (other than through the [`UniqueRef`])
+/// until the [`UniqueRef`] is dropped or converted to an [`ARef`].
+/// - [`dec_ref()`](UniqueRefCounted::dec_ref) correctly frees the underlying object.
+/// - [`unique_to_shared()`](UniqueRefCounted::unique_to_shared) sets the reference count
+/// to the value which the returned [`ARef`] expects for an object with a single
+/// reference in existence. This implies that if
+/// [`unique_to_shared()`](UniqueRefCounted::unique_to_shared) is left at the default
+/// implementation, which simply rewraps the underlying object, the reference count
+/// need not be modified when converting a [`UniqueRef`] to an [`ARef`].
+///
+/// # Examples
+///
+/// A minimal example implementation of [`AlwaysRefCounted`] and
+/// [`UniqueRefCounted`] and their usage
+/// with [`ARef`] and [`UniqueRef`] looks like this:
+///
+/// ```
+/// # #![expect(clippy::disallowed_names)]
+/// use core::cell::Cell;
+/// use core::ptr::NonNull;
+/// use kernel::alloc::{flags, kbox::KBox, AllocError};
+/// use kernel::types::{
+/// ARef, AlwaysRefCounted, UniqueRef, UniqueRefCounted,
+/// };
+///
+/// struct Foo {
+/// refcount: Cell<usize>,
+/// }
+///
+/// impl Foo {
+/// fn new() -> Result<UniqueRef<Self>, AllocError> {
+/// // Use a KBox to handle the actual allocation
+/// let result = KBox::new(
+/// Foo {
+/// refcount: Cell::new(1),
+/// },
+/// flags::GFP_KERNEL,
+/// )?;
+/// // SAFETY: we just allocated the `Foo`, thus it is valid
+/// Ok(unsafe { UniqueRef::from_raw(NonNull::new(KBox::into_raw(result)).unwrap()) })
+/// }
+/// }
+///
+/// // SAFETY: we increment and decrement correctly and only free the Foo
+/// // when the refcount reaches zero
+/// unsafe impl AlwaysRefCounted for Foo {
+/// fn inc_ref(&self) {
+/// self.refcount.replace(self.refcount.get() + 1);
+/// }
+/// unsafe fn dec_ref(this: NonNull<Self>) {
+/// // SAFETY: the underlying object is always valid when the function is called
+/// let refcount = unsafe { &this.as_ref().refcount };
+/// let new_refcount = refcount.get() - 1;
+/// if new_refcount == 0 {
+/// // Foo will be dropped when KBox goes out of scope
+/// // SAFETY: the `KBox<Foo>` is still alive as the old refcount is 1
+/// unsafe { KBox::from_raw(this.as_ptr()) };
+/// } else {
+/// refcount.replace(new_refcount);
+/// }
+/// }
+/// }
+/// // SAFETY: we only convert into a `UniqueRef` when the refcount is 1
+/// unsafe impl UniqueRefCounted for Foo {
+/// fn try_shared_to_unique(this: ARef<Self>) -> Result<UniqueRef<Self>, ARef<Self>> {
+/// if this.refcount.get() == 1 {
+/// // SAFETY: the `Foo` is still alive as the refcount is 1
+/// Ok(unsafe { UniqueRef::from_raw(ARef::into_raw(this)) })
+/// } else {
+/// Err(this)
+/// }
+/// }
+/// }
+///
+/// let foo = Foo::new().unwrap();
+/// let mut foo = ARef::from(foo);
+/// {
+/// let bar = foo.clone();
+/// assert!(UniqueRef::try_from(bar).is_err());
+/// }
+/// assert!(UniqueRef::try_from(foo).is_ok());
+/// ```
+pub unsafe trait UniqueRefCounted: AlwaysRefCounted + Sized {
+ /// Checks whether the [`ARef`] is unique and converts it
+ /// to a [`UniqueRef`] if that is the case.
+ /// Otherwise an [`ARef`] to the same underlying object
+ /// is returned as the error.
+ fn try_shared_to_unique(this: ARef<Self>) -> Result<UniqueRef<Self>, ARef<Self>>;
+ /// Converts the [`UniqueRef`] into an [`ARef`].
+ fn unique_to_shared(this: UniqueRef<Self>) -> ARef<Self> {
+ // SAFETY: safe by the conditions on implementing the trait
+ unsafe { ARef::from_raw(UniqueRef::into_raw(this)) }
+ }
+ /// Decrements the reference count on the object when the [`UniqueRef`] is dropped.
+ ///
+ /// Frees the object when the count reaches zero.
+ ///
+ /// It defaults to [`AlwaysRefCounted::dec_ref`],
+ /// but overriding it may be useful, e.g. in case of non-standard refcounting
+ /// schemes.
+ ///
+ /// # Safety
+ ///
+ /// The same safety constraints as for [`AlwaysRefCounted::dec_ref`] apply,
+ /// but as the reference is unique, it can be assumed that the function
+ /// will not be called twice. If the default implementation is not
+ /// overridden, it must be ensured that the call to [`AlwaysRefCounted::dec_ref`]
+ /// is also correct for a [`UniqueRef`].
+ unsafe fn dec_ref(obj: NonNull<Self>) {
+ // SAFETY: correct by function safety requirements
+ unsafe { AlwaysRefCounted::dec_ref(obj) };
+ }
+}
+
+/// This trait allows implementing [`UniqueRefCounted`] in a simplified way,
+/// requiring only an [`is_unique()`](SimpleUniqueRefCounted::is_unique) method.
+///
+/// # Safety
+///
+/// - The same safety requirements as for [`UniqueRefCounted`] apply.
+/// - [`is_unique`](SimpleUniqueRefCounted::is_unique) must only return `true`
+/// if exactly one [`ARef`] exists and it is impossible to obtain another one
+/// other than by cloning an existing [`ARef`] or converting a [`UniqueRef`] to an [`ARef`].
+/// - It must be safe to convert a unique [`ARef`] into a [`UniqueRef`]
+/// simply by re-wrapping the underlying object without modifying the refcount.
+///
+/// # Examples
+///
+/// A minimal example implementation of [`AlwaysRefCounted`] and
+/// [`SimpleUniqueRefCounted`] and their usage
+/// with [`ARef`] and [`UniqueRef`] looks like this:
+///
+/// ```
+/// # #![expect(clippy::disallowed_names)]
+/// use core::cell::Cell;
+/// use core::ptr::NonNull;
+/// use kernel::alloc::{flags, kbox::KBox, AllocError};
+/// use kernel::types::{
+/// ARef, AlwaysRefCounted, SimpleUniqueRefCounted, UniqueRef,
+/// };
+///
+/// struct Foo {
+/// refcount: Cell<usize>,
+/// }
+///
+/// impl Foo {
+/// fn new() -> Result<UniqueRef<Self>, AllocError> {
+/// // Use a KBox to handle the actual allocation
+/// let result = KBox::new(
+/// Foo {
+/// refcount: Cell::new(1),
+/// },
+/// flags::GFP_KERNEL,
+/// )?;
+/// // SAFETY: we just allocated the `Foo`, thus it is valid
+/// Ok(unsafe { UniqueRef::from_raw(NonNull::new(KBox::into_raw(result)).unwrap()) })
+/// }
+/// }
+///
+/// // SAFETY: we increment and decrement correctly and only free the Foo
+/// // when the refcount reaches zero
+/// unsafe impl AlwaysRefCounted for Foo {
+/// fn inc_ref(&self) {
+/// self.refcount.replace(self.refcount.get() + 1);
+/// }
+/// unsafe fn dec_ref(this: NonNull<Self>) {
+/// // SAFETY: the underlying object is always valid when the function is called
+/// let refcount = unsafe { &this.as_ref().refcount };
+/// let new_refcount = refcount.get() - 1;
+/// if new_refcount == 0 {
+/// // Foo will be dropped when KBox goes out of scope
+/// // SAFETY: the `KBox<Foo>` is still alive as the old refcount is 1
+/// unsafe { KBox::from_raw(this.as_ptr()) };
+/// } else {
+/// refcount.replace(new_refcount);
+/// }
+/// }
+/// }
+///
+/// // SAFETY: We check the refcount as required. Races are impossible as the object is not `Sync`.
+/// unsafe impl SimpleUniqueRefCounted for Foo {
+/// fn is_unique(&self) -> bool {
+/// self.refcount.get() == 1
+/// }
+/// }
+///
+/// let foo = Foo::new().unwrap();
+/// let mut foo = ARef::from(foo);
+/// {
+/// let bar = foo.clone();
+/// assert!(UniqueRef::try_from(bar).is_err());
+/// }
+/// assert!(UniqueRef::try_from(foo).is_ok());
+/// ```
+pub unsafe trait SimpleUniqueRefCounted: AlwaysRefCounted + Sized {
+ /// Checks whether exactly one [`ARef`] to the object exists.
+ /// If the object is [`Sync`], the check must be race-free.
+ fn is_unique(&self) -> bool;
+}
+
+// SAFETY: safe by the requirements on implementing [`SimpleUniqueRefCounted`]
+unsafe impl<T: SimpleUniqueRefCounted> UniqueRefCounted for T {
+ fn try_shared_to_unique(this: ARef<Self>) -> Result<UniqueRef<Self>, ARef<Self>> {
+ if this.is_unique() {
+ // SAFETY: safe by the requirements on implementing [`SimpleUniqueRefCounted`]
+ Ok(unsafe { UniqueRef::from_raw(ARef::into_raw(this)) })
+ } else {
+ Err(this)
+ }
+ }
+}
+
+/// A unique, owned reference to an [`AlwaysRefCounted`] object.
+///
+/// It works the same way as [`ARef`] but additionally guarantees that the reference is unique
+/// and thus can be dereferenced mutably.
+///
+/// # Invariants
+///
+/// - The pointer stored in `ptr` is non-null and valid for the lifetime of the [`UniqueRef`]
+/// instance. In particular, the [`UniqueRef`] instance owns an increment
+/// on the underlying object's reference count.
+/// - No other [`UniqueRef`] or [`ARef`] to the underlying object exist
+/// while the [`UniqueRef`] is live.
+pub struct UniqueRef<T: UniqueRefCounted> {
+ ptr: NonNull<T>,
+ _p: PhantomData<T>,
+}
+
+// SAFETY: It is safe to send `UniqueRef<T>` to another thread
+// when the underlying `T` is `Send`, because doing so effectively transfers ownership,
+// equivalent to sending a `Box<T>`.
+unsafe impl<T: UniqueRefCounted + Send> Send for UniqueRef<T> {}
+
+// SAFETY: It is safe to send `&UniqueRef<T>` to another thread when the underlying `T` is `Sync`
+// because it effectively means sharing `&T` (which is safe because `T` is `Sync`).
+unsafe impl<T: UniqueRefCounted + Sync> Sync for UniqueRef<T> {}
+
+impl<T: UniqueRefCounted> UniqueRef<T> {
+ /// Creates a new instance of [`UniqueRef`].
+ ///
+ /// It takes over an increment of the reference count on the underlying object.
+ ///
+ /// # Safety
+ ///
+ /// Callers must ensure that the reference count is set to a value
+ /// such that a call to [`dec_ref()`](UniqueRefCounted::dec_ref) releases the underlying
+ /// object in the way expected when the last reference is dropped.
+ /// Callers must not use the underlying object anymore --
+ /// it is only safe to do so via the newly created [`UniqueRef`].
+ pub unsafe fn from_raw(ptr: NonNull<T>) -> Self {
+ // INVARIANT: The safety requirements guarantee that the new instance now owns the
+ // increment on the refcount.
+ Self {
+ ptr,
+ _p: PhantomData,
+ }
+ }
+
+ /// Consumes the [`UniqueRef`], returning a raw pointer.
+ ///
+ /// This function does not change the refcount. After calling this function, the caller is
+ /// responsible for the refcount previously managed by the [`UniqueRef`].
+ pub fn into_raw(me: Self) -> NonNull<T> {
+ ManuallyDrop::new(me).ptr
+ }
+}
+
+impl<T: UniqueRefCounted> Deref for UniqueRef<T> {
+ type Target = T;
+
+ fn deref(&self) -> &Self::Target {
+ // SAFETY: The type invariants guarantee that the object is valid.
+ unsafe { self.ptr.as_ref() }
+ }
+}
+
+impl<T: UniqueRefCounted> DerefMut for UniqueRef<T> {
+ fn deref_mut(&mut self) -> &mut Self::Target {
+ // SAFETY: The type invariants guarantee that the object is valid.
+ unsafe { self.ptr.as_mut() }
+ }
+}
+
+impl<T: UniqueRefCounted> TryFrom<ARef<T>> for UniqueRef<T> {
+ type Error = ARef<T>;
+ /// Tries to convert the [`ARef`] to a [`UniqueRef`]
+ /// by calling [`try_shared_to_unique()`](UniqueRefCounted::try_shared_to_unique).
+ /// If the [`ARef`] is not unique, an [`ARef`] to the same
+ /// underlying object is returned as the error.
+ fn try_from(b: ARef<T>) -> Result<UniqueRef<T>, Self::Error> {
+ UniqueRefCounted::try_shared_to_unique(b)
+ }
+}
+
+impl<T: UniqueRefCounted> Drop for UniqueRef<T> {
+ fn drop(&mut self) {
+ // SAFETY: The type invariants guarantee that the [`UniqueRef`] owns the reference
+ // we're about to decrement.
+ unsafe { UniqueRefCounted::dec_ref(self.ptr) };
+ }
+}
+
/// A sum type that always holds either a value of type `L` or `R`.
///
/// # Examples
--
This should address all issues that have been raised with v2:
- Added a default implementation for unique_to_shared() which does a simple
  rewrap of the underlying object.
- Added a SimpleUniqueRefCounted trait which only requires implementing
  is_unique(), as Benoît asked for. Maybe the feature is not worth
  the extra code, though; I would be fine with either keeping or removing it.
- Removed the unsound conversion from &T to UniqueRef, as spotted by Benoît.
- Relaxed the requirements for Send and Sync to be identical to the ones
  for Box. See comment below.
- Added examples for both UniqueRefCounted and SimpleUniqueRefCounted,
  as asked for by Boqun Feng.
  They compile and run without errors as KUnit tests for me.
- Changed the commit message as suggested by Andreas.
@Benoît: I think you are right about Send and Sync.
What gave me a bit of a headache is whether Send really does not require
the underlying object to be Sync, as the refcount itself --
which is part of the object -- might be touched concurrently in a case
like with tag_to_req(), but I think one would not implement
something like that without having a synchronized refcount.
Best regards,
Oliver
Thread overview: 13+ messages
2025-02-28 14:43 [PATCH] rust: adding UniqueRefCounted and UniqueRef types Oliver Mangold
2025-02-28 15:21 ` Miguel Ojeda
2025-02-28 15:54 ` Boqun Feng
2025-02-28 18:01 ` [PATCH v2] " Oliver Mangold
2025-02-28 18:09 ` Boqun Feng
2025-02-28 18:16 ` Boqun Feng
2025-02-28 18:29 ` Andreas Hindborg
2025-03-03 13:29 ` Oliver Mangold [this message]
2025-03-03 14:22 ` [PATCH v3] " Andreas Hindborg
2025-03-03 16:33 ` Oliver Mangold
2025-03-03 14:09 ` [PATCH v2] " Andreas Hindborg
2025-02-28 23:41 ` [PATCH] " Benoît du Garreau
2025-03-01 8:06 ` Oliver Mangold