* [PATCH v1 0/1] rust: add Work::disable_sync
@ 2026-04-28 10:44 Onur Özkan
2026-04-28 10:44 ` [PATCH v1 1/1] " Onur Özkan
0 siblings, 1 reply; 5+ messages in thread
From: Onur Özkan @ 2026-04-28 10:44 UTC (permalink / raw)
To: rust-for-linux, linux-kernel
Cc: ojeda, boqun, gary, bjorn3_gh, lossin, a.hindborg, aliceryhl,
tmgross, dakr, peterz, fujita.tomonori, tamird, Onur Özkan
The immediate motivation is the Tyr reset infrastructure [1], which needs
to stop queued or running reset work during teardown before dropping the
resources used by that work. The reset series was accumulating too many
independent dependencies, so this is split out as a standalone change to
keep the reset series focused on the reset logic and easier to review,
rebase and land.
[1]: https://lore.kernel.org/all/20260416171728.205141-1-work@onurozkan.dev/
Onur Özkan (1):
rust: add Work::disable_sync
rust/kernel/workqueue.rs | 102 +++++++++++++++++++++++++++++----------
1 file changed, 76 insertions(+), 26 deletions(-)
--
2.51.2
* [PATCH v1 1/1] rust: add Work::disable_sync
2026-04-28 10:44 [PATCH v1 0/1] rust: add Work::disable_sync Onur Özkan
@ 2026-04-28 10:44 ` Onur Özkan
2026-04-28 11:42 ` Gary Guo
2026-05-02 11:18 ` kernel test robot
0 siblings, 2 replies; 5+ messages in thread
From: Onur Özkan @ 2026-04-28 10:44 UTC (permalink / raw)
To: rust-for-linux, linux-kernel
Cc: ojeda, boqun, gary, bjorn3_gh, lossin, a.hindborg, aliceryhl,
tmgross, dakr, peterz, fujita.tomonori, tamird, Onur Özkan
Add Work::disable_sync() as a safe wrapper around disable_work_sync().
Drivers can use this during teardown to stop new queueing and wait for
queued or running work to finish before dropping related resources.
Signed-off-by: Onur Özkan <work@onurozkan.dev>
---
rust/kernel/workqueue.rs | 102 +++++++++++++++++++++++++++++----------
1 file changed, 76 insertions(+), 26 deletions(-)
diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
index 7e253b6f299c..443fc84dbeeb 100644
--- a/rust/kernel/workqueue.rs
+++ b/rust/kernel/workqueue.rs
@@ -442,21 +442,48 @@ pub unsafe trait RawDelayedWorkItem<const ID: u64>: RawWorkItem<ID> {}
///
/// # Safety
///
-/// Implementers must ensure that [`__enqueue`] uses a `work_struct` initialized with the [`run`]
-/// method of this trait as the function pointer.
-///
-/// [`__enqueue`]: RawWorkItem::__enqueue
-/// [`run`]: WorkItemPointer::run
-pub unsafe trait WorkItemPointer<const ID: u64>: RawWorkItem<ID> {
+/// Implementers must ensure that [`WorkItemPointer::from_raw_work`] rebuilds the exact ownership
+/// transferred by a successful [`RawWorkItem::__enqueue`] call.
+pub unsafe trait WorkItemPointer<const ID: u64>: RawWorkItem<ID> + Sized {
+ /// The work item type containing the embedded `work_struct`.
+ type Item: WorkItem<ID, Pointer = Self> + ?Sized;
+
+ /// Rebuild this work item's pointer from its embedded `work_struct`.
+ ///
+ /// # Safety
+ ///
+ /// The provided `work_struct` pointer must originate from a previous call to
+ /// [`RawWorkItem::__enqueue`] where the `queue_work_on` closure returned true
+ /// and the pointer must still be valid.
+ unsafe fn from_raw_work(ptr: *mut bindings::work_struct) -> Self;
+
/// Run this work item.
///
/// # Safety
///
- /// The provided `work_struct` pointer must originate from a previous call to [`__enqueue`]
- /// where the `queue_work_on` closure returned true, and the pointer must still be valid.
+ /// The provided `work_struct` pointer must satisfy the same requirements as
+ /// [`WorkItemPointer::from_raw_work`].
+ #[inline]
+ unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
+ <Self::Item as WorkItem<ID>>::run(
+ // SAFETY: The requirements for `run` are exactly those of `from_raw_work`.
+ unsafe { Self::from_raw_work(ptr) },
+ );
+ }
+
+ /// Reclaim a previously enqueued work item that will no longer run.
+ ///
+ /// # Safety
///
- /// [`__enqueue`]: RawWorkItem::__enqueue
- unsafe extern "C" fn run(ptr: *mut bindings::work_struct);
+ /// The provided `work_struct` pointer must satisfy the same requirements as
+ /// [`WorkItemPointer::from_raw_work`].
+ #[inline]
+ unsafe fn cancel(ptr: *mut bindings::work_struct) {
+ drop(
+ // SAFETY: The requirements for `cancel` are exactly those of `from_raw_work`.
+ unsafe { Self::from_raw_work(ptr) },
+ );
+ }
}
/// Defines the method that should be called when this work item is executed.
@@ -537,6 +564,29 @@ pub unsafe fn raw_get(ptr: *const Self) -> *mut bindings::work_struct {
// the compiler does not complain that the `work` field is unused.
unsafe { Opaque::cast_into(core::ptr::addr_of!((*ptr).work)) }
}
+
+ /// Disables this work item and waits for queued/running executions to finish.
+ ///
+ /// # Note
+ ///
+ /// Should be called from a sleepable context if the work was last queued on a non-BH
+ /// workqueue.
+ #[inline]
+ pub fn disable_sync(&self)
+ where
+ T: WorkItem<ID>,
+ {
+ let ptr: *const Self = self;
+ // SAFETY: `self` points to a valid initialized work.
+ let raw_work = unsafe { Self::raw_get(ptr) };
+ // SAFETY: `raw_work` is a valid embedded `work_struct`.
+ if unsafe { bindings::disable_work_sync(raw_work) } {
+ // SAFETY: A `true` return means the work was pending and got canceled, so the queued
+ // ownership transfer performed by `__enqueue` must be reclaimed here and the work
+ // item will not subsequently run.
+ unsafe { T::Pointer::cancel(raw_work) };
+ }
+ }
}
/// Declares that a type contains a [`Work<T, ID>`].
@@ -817,22 +867,22 @@ unsafe fn work_container_of(
// - `Work::new` makes sure that `T::Pointer::run` is passed to `init_work_with_key`.
// - Finally `Work` and `RawWorkItem` guarantee that the correct `Work` field
// will be used because of the ID const generic bound. This makes sure that `T::raw_get_work`
-// uses the correct offset for the `Work` field, and `Work::new` picks the correct
-// implementation of `WorkItemPointer` for `Arc<T>`.
+// uses the correct offset for the `Work` field, and `T::Pointer::from_raw_work` rebuilds the
+// correct pointer type for `Arc<T>`.
unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Arc<T>
where
T: WorkItem<ID, Pointer = Self>,
T: HasWork<T, ID>,
{
- unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
+ type Item = T;
+
+ unsafe fn from_raw_work(ptr: *mut bindings::work_struct) -> Self {
// The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
let ptr = ptr.cast::<Work<T, ID>>();
// SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`.
let ptr = unsafe { T::work_container_of(ptr) };
// SAFETY: This pointer comes from `Arc::into_raw` and we've been given back ownership.
- let arc = unsafe { Arc::from_raw(ptr) };
-
- T::run(arc)
+ unsafe { Arc::from_raw(ptr) }
}
}
@@ -887,7 +937,9 @@ unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<KBox<T>>
T: WorkItem<ID, Pointer = Self>,
T: HasWork<T, ID>,
{
- unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
+ type Item = T;
+
+ unsafe fn from_raw_work(ptr: *mut bindings::work_struct) -> Self {
// The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
let ptr = ptr.cast::<Work<T, ID>>();
// SAFETY: This computes the pointer that `__enqueue` got from `KBox::into_raw`.
@@ -895,9 +947,7 @@ unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<KBox<T>>
// SAFETY: This pointer comes from `KBox::into_raw` and we've been given back ownership.
let boxed = unsafe { KBox::from_raw(ptr) };
// SAFETY: The box was already pinned when it was enqueued.
- let pinned = unsafe { Pin::new_unchecked(boxed) };
-
- T::run(pinned)
+ unsafe { Pin::new_unchecked(boxed) }
}
}
@@ -950,15 +1000,17 @@ unsafe impl<T, const ID: u64> RawDelayedWorkItem<ID> for Pin<KBox<T>>
// - `Work::new` makes sure that `T::Pointer::run` is passed to `init_work_with_key`.
// - Finally `Work` and `RawWorkItem` guarantee that the correct `Work` field
// will be used because of the ID const generic bound. This makes sure that `T::raw_get_work`
-// uses the correct offset for the `Work` field, and `Work::new` picks the correct
-// implementation of `WorkItemPointer` for `ARef<T>`.
+// uses the correct offset for the `Work` field, and `T::Pointer::from_raw_work` rebuilds the
+// correct pointer type for `ARef<T>`.
unsafe impl<T, const ID: u64> WorkItemPointer<ID> for ARef<T>
where
T: AlwaysRefCounted,
T: WorkItem<ID, Pointer = Self>,
T: HasWork<T, ID>,
{
- unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
+ type Item = T;
+
+ unsafe fn from_raw_work(ptr: *mut bindings::work_struct) -> Self {
// The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
let ptr = ptr.cast::<Work<T, ID>>();
@@ -972,9 +1024,7 @@ unsafe impl<T, const ID: u64> WorkItemPointer<ID> for ARef<T>
// SAFETY: This pointer comes from `ARef::into_raw` and we've been given
// back ownership.
- let aref = unsafe { ARef::from_raw(ptr) };
-
- T::run(aref)
+ unsafe { ARef::from_raw(ptr) }
}
}
--
2.51.2
* Re: [PATCH v1 1/1] rust: add Work::disable_sync
2026-04-28 10:44 ` [PATCH v1 1/1] " Onur Özkan
@ 2026-04-28 11:42 ` Gary Guo
2026-04-28 12:33 ` Onur Özkan
2026-05-02 11:18 ` kernel test robot
1 sibling, 1 reply; 5+ messages in thread
From: Gary Guo @ 2026-04-28 11:42 UTC (permalink / raw)
To: Onur Özkan, rust-for-linux, linux-kernel
Cc: ojeda, boqun, gary, bjorn3_gh, lossin, a.hindborg, aliceryhl,
tmgross, dakr, peterz, fujita.tomonori, tamird
On Tue Apr 28, 2026 at 11:44 AM BST, Onur Özkan wrote:
> Adds Work::disable_sync() as a safe wrapper for disable_work_sync().
>
> Drivers can use this during teardown to stop new queueing and wait for
> queued or running work to finish before dropping related resources.
>
> Signed-off-by: Onur Özkan <work@onurozkan.dev>
> ---
> rust/kernel/workqueue.rs | 102 +++++++++++++++++++++++++++++----------
> 1 file changed, 76 insertions(+), 26 deletions(-)
>
> diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
> index 7e253b6f299c..443fc84dbeeb 100644
> --- a/rust/kernel/workqueue.rs
> +++ b/rust/kernel/workqueue.rs
> @@ -442,21 +442,48 @@ pub unsafe trait RawDelayedWorkItem<const ID: u64>: RawWorkItem<ID> {}
> ///
> /// # Safety
> ///
> -/// Implementers must ensure that [`__enqueue`] uses a `work_struct` initialized with the [`run`]
> -/// method of this trait as the function pointer.
> -///
> -/// [`__enqueue`]: RawWorkItem::__enqueue
> -/// [`run`]: WorkItemPointer::run
> -pub unsafe trait WorkItemPointer<const ID: u64>: RawWorkItem<ID> {
> +/// Implementers must ensure that [`WorkItemPointer::from_raw_work`] rebuilds the exact ownership
> +/// transferred by a successful [`RawWorkItem::__enqueue`] call.
> +pub unsafe trait WorkItemPointer<const ID: u64>: RawWorkItem<ID> + Sized {
> + /// The work item type containing the embedded `work_struct`.
> + type Item: WorkItem<ID, Pointer = Self> + ?Sized;
> +
> + /// Rebuild this work item's pointer from its embedded `work_struct`.
> + ///
> + /// # Safety
> + ///
> + /// The provided `work_struct` pointer must originate from a previous call to
> + /// [`RawWorkItem::__enqueue`] where the `queue_work_on` closure returned true
> + /// and the pointer must still be valid.
> + unsafe fn from_raw_work(ptr: *mut bindings::work_struct) -> Self;
> +
> /// Run this work item.
> ///
> /// # Safety
> ///
> - /// The provided `work_struct` pointer must originate from a previous call to [`__enqueue`]
> - /// where the `queue_work_on` closure returned true, and the pointer must still be valid.
> + /// The provided `work_struct` pointer must satisfy the same requirements as
> + /// [`WorkItemPointer::from_raw_work`].
> + #[inline]
> + unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
> + <Self::Item as WorkItem<ID>>::run(
> + // SAFETY: The requirements for `run` are exactly those of `from_raw_work`.
> + unsafe { Self::from_raw_work(ptr) },
> + );
> + }
> +
> + /// Reclaim a previously enqueued work item that will no longer run.
> + ///
> + /// # Safety
> ///
> - /// [`__enqueue`]: RawWorkItem::__enqueue
> - unsafe extern "C" fn run(ptr: *mut bindings::work_struct);
> + /// The provided `work_struct` pointer must satisfy the same requirements as
> + /// [`WorkItemPointer::from_raw_work`].
> + #[inline]
> + unsafe fn cancel(ptr: *mut bindings::work_struct) {
> + drop(
> + // SAFETY: The requirements for `cancel` are exactly those of `from_raw_work`.
> + unsafe { Self::from_raw_work(ptr) },
> + );
> + }
> }
>
> /// Defines the method that should be called when this work item is executed.
> @@ -537,6 +564,29 @@ pub unsafe fn raw_get(ptr: *const Self) -> *mut bindings::work_struct {
> // the compiler does not complain that the `work` field is unused.
> unsafe { Opaque::cast_into(core::ptr::addr_of!((*ptr).work)) }
> }
> +
> + /// Disables this work item and waits for queued/running executions to finish.
> + ///
> + /// # Note
> + ///
> + /// Should be called from a sleepable context if the work was last queued on a non-BH
> + /// workqueue.
> + #[inline]
> + pub fn disable_sync(&self)
> + where
> + T: WorkItem<ID>,
> + {
> + let ptr: *const Self = self;
> + // SAFETY: `self` points to a valid initialized work.
> + let raw_work = unsafe { Self::raw_get(ptr) };
> + // SAFETY: `raw_work` is a valid embedded `work_struct`.
> + if unsafe { bindings::disable_work_sync(raw_work) } {
> + // SAFETY: A `true` return means the work was pending and got canceled, so the queued
> + // ownership transfer performed by `__enqueue` must be reclaimed here and the work
> + // item will not subsequently run.
> + unsafe { T::Pointer::cancel(raw_work) };
I think having an explicit drop is clearer and removes the need for an
additional function in the trait.
// SAFETY: A `true` return means the work was pending and got canceled, so the queued
// ownership transfer performed by `__enqueue` is reclaimed here.
drop(unsafe { T::Pointer::from_raw_work(raw_work) })
The `cancel` name is confusing because this is called *after* the work has
already been cancelled.
Best,
Gary
* Re: [PATCH v1 1/1] rust: add Work::disable_sync
2026-04-28 11:42 ` Gary Guo
@ 2026-04-28 12:33 ` Onur Özkan
0 siblings, 0 replies; 5+ messages in thread
From: Onur Özkan @ 2026-04-28 12:33 UTC (permalink / raw)
To: Gary Guo
Cc: rust-for-linux, linux-kernel, ojeda, boqun, bjorn3_gh, lossin,
a.hindborg, aliceryhl, tmgross, dakr, peterz, fujita.tomonori,
tamird
On Tue, 28 Apr 2026 12:42:59 +0100
Gary Guo <gary@garyguo.net> wrote:
> On Tue Apr 28, 2026 at 11:44 AM BST, Onur Özkan wrote:
> > [...]
> > + /// Disables this work item and waits for queued/running executions to finish.
> > + ///
> > + /// # Note
> > + ///
> > + /// Should be called from a sleepable context if the work was last queued on a non-BH
> > + /// workqueue.
> > + #[inline]
> > + pub fn disable_sync(&self)
> > + where
> > + T: WorkItem<ID>,
> > + {
> > + let ptr: *const Self = self;
> > + // SAFETY: `self` points to a valid initialized work.
> > + let raw_work = unsafe { Self::raw_get(ptr) };
> > + // SAFETY: `raw_work` is a valid embedded `work_struct`.
> > + if unsafe { bindings::disable_work_sync(raw_work) } {
> > + // SAFETY: A `true` return means the work was pending and got canceled, so the queued
> > + // ownership transfer performed by `__enqueue` must be reclaimed here and the work
> > + // item will not subsequently run.
> > + unsafe { T::Pointer::cancel(raw_work) };
>
> I think having an explicit drop is clearer and removes the need for an
> additional function in the trait.
>
> // SAFETY: A `true` return means the work was pending and got canceled, so the queued
> // ownership transfer performed by `__enqueue` is reclaimed here.
> drop(unsafe { T::Pointer::from_raw_work(raw_work) })
>
> The `cancel` name is confusing because this is called *after* the work has
> already been cancelled.
Makes sense.
Thanks,
Onur
* Re: [PATCH v1 1/1] rust: add Work::disable_sync
2026-04-28 10:44 ` [PATCH v1 1/1] " Onur Özkan
2026-04-28 11:42 ` Gary Guo
@ 2026-05-02 11:18 ` kernel test robot
1 sibling, 0 replies; 5+ messages in thread
From: kernel test robot @ 2026-05-02 11:18 UTC (permalink / raw)
To: Onur Özkan, rust-for-linux, linux-kernel
Cc: oe-kbuild-all, ojeda, boqun, gary, bjorn3_gh, lossin, a.hindborg,
aliceryhl, tmgross, dakr, peterz, fujita.tomonori, tamird,
Onur Özkan
Hi Onur,
kernel test robot noticed the following build warnings:
[auto build test WARNING on rust/rust-next]
[also build test WARNING on linus/master v7.1-rc1 next-20260430]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Onur-zkan/rust-add-Work-disable_sync/20260502-140019
base: https://github.com/Rust-for-Linux/linux rust-next
patch link: https://lore.kernel.org/r/20260428104459.174602-2-work%40onurozkan.dev
patch subject: [PATCH v1 1/1] rust: add Work::disable_sync
config: x86_64-rhel-9.4-rust (https://download.01.org/0day-ci/archive/20260502/202605021337.ck8OCWEF-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260502/202605021337.ck8OCWEF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202605021337.ck8OCWEF-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> warning: unresolved link to `run`
--> rust/kernel/workqueue.rs:436:62
|
436 | /// structs, you would implement [`WorkItem`] instead. The [`run`] method on
| ^^^ no item named `run` in scope
|
= help: to escape `[` and `]` characters, add '\' before them like `\[` or `\]`
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki