public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v3 0/2] Creation of workqueues in Rust
@ 2026-02-27 14:53 Alice Ryhl
  2026-02-27 14:53 ` [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs Alice Ryhl
  2026-02-27 14:53 ` [PATCH v3 2/2] rust: workqueue: add creation of workqueues Alice Ryhl
  0 siblings, 2 replies; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 14:53 UTC (permalink / raw)
  To: Tejun Heo, Miguel Ojeda
  Cc: Lai Jiangshan, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, Daniel Almeida, John Hubbard,
	Philipp Stanner, rust-for-linux, linux-kernel, Alice Ryhl,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

GPU drivers often need to create their own workqueues for various
reasons. Add the ability to do so.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Changes in v3:
- Switch to builder pattern.
- Drop BH workqueues for now.
- Mark delayed wq change as fix.
- Link to v2: https://lore.kernel.org/r/20251113-create-workqueue-v2-0-8b45277119bc@google.com

Changes in v2:
- Redo how flagging works.
- Restrict delayed work to not be usable on custom workqueues.
- Link to v1: https://lore.kernel.org/r/20250411-create-workqueue-v1-1-f7dbe7f1e05f@google.com

---
Alice Ryhl (2):
      rust: workqueue: restrict delayed work to global wqs
      rust: workqueue: add creation of workqueues

 rust/helpers/workqueue.c |   7 ++
 rust/kernel/workqueue.rs | 199 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 202 insertions(+), 4 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20250411-create-workqueue-d053158c7a4b

Best regards,
-- 
Alice Ryhl <aliceryhl@google.com>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 14:53 [PATCH v3 0/2] Creation of workqueues in Rust Alice Ryhl
@ 2026-02-27 14:53 ` Alice Ryhl
  2026-02-27 15:10   ` Danilo Krummrich
                     ` (2 more replies)
  2026-02-27 14:53 ` [PATCH v3 2/2] rust: workqueue: add creation of workqueues Alice Ryhl
  1 sibling, 3 replies; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 14:53 UTC (permalink / raw)
  To: Tejun Heo, Miguel Ojeda
  Cc: Lai Jiangshan, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, Daniel Almeida, John Hubbard,
	Philipp Stanner, rust-for-linux, linux-kernel, Alice Ryhl,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

When a workqueue is shut down, delayed work that is pending but not
scheduled does not get properly cleaned up, so it's not safe to use
`enqueue_delayed` on a workqueue that might be destroyed. To fix this,
restrict `enqueue_delayed` to static queues.

Cc: stable@vger.kernel.org
Fixes: 7c098cd5eaae ("workqueue: rust: add delayed work items")
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/workqueue.rs | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
index 706e833e9702..1acd113c04ee 100644
--- a/rust/kernel/workqueue.rs
+++ b/rust/kernel/workqueue.rs
@@ -296,8 +296,15 @@ pub fn enqueue<W, const ID: u64>(&self, w: W) -> W::EnqueueOutput
     ///
     /// This may fail if the work item is already enqueued in a workqueue.
     ///
+    /// This is only valid for global workqueues (with static lifetimes) because those are the only
+    /// ones that outlive all possible delayed work items.
+    ///
     /// The work item will be submitted using `WORK_CPU_UNBOUND`.
-    pub fn enqueue_delayed<W, const ID: u64>(&self, w: W, delay: Jiffies) -> W::EnqueueOutput
+    pub fn enqueue_delayed<W, const ID: u64>(
+        &'static self,
+        w: W,
+        delay: Jiffies,
+    ) -> W::EnqueueOutput
     where
         W: RawDelayedWorkItem<ID> + Send + 'static,
     {

-- 
2.53.0.473.g4a7958ca14-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 14:53 [PATCH v3 0/2] Creation of workqueues in Rust Alice Ryhl
  2026-02-27 14:53 ` [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs Alice Ryhl
@ 2026-02-27 14:53 ` Alice Ryhl
  2026-02-27 15:30   ` Danilo Krummrich
                     ` (2 more replies)
  1 sibling, 3 replies; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 14:53 UTC (permalink / raw)
  To: Tejun Heo, Miguel Ojeda
  Cc: Lai Jiangshan, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, Daniel Almeida, John Hubbard,
	Philipp Stanner, rust-for-linux, linux-kernel, Alice Ryhl,
	Boqun Feng, Benno Lossin, Tamir Duberstein

Creating workqueues is needed by various GPU drivers. Not only does it
give better control over execution, it also allows a driver to ensure
that all work items have completed before the device is unbound (or
similar) by running the workqueue destructor.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/helpers/workqueue.c |   7 ++
 rust/kernel/workqueue.rs | 190 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 194 insertions(+), 3 deletions(-)

diff --git a/rust/helpers/workqueue.c b/rust/helpers/workqueue.c
index ce1c3a5b2150..e4b9d1b3d6bf 100644
--- a/rust/helpers/workqueue.c
+++ b/rust/helpers/workqueue.c
@@ -14,3 +14,10 @@ __rust_helper void rust_helper_init_work_with_key(struct work_struct *work,
 	INIT_LIST_HEAD(&work->entry);
 	work->func = func;
 }
+
+__rust_helper
+struct workqueue_struct *rust_helper_alloc_workqueue(const char *fmt, unsigned int flags,
+						     int max_active, const void *data)
+{
+	return alloc_workqueue(fmt, flags, max_active, data);
+}
diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
index 1acd113c04ee..4ef02a537cd9 100644
--- a/rust/kernel/workqueue.rs
+++ b/rust/kernel/workqueue.rs
@@ -186,7 +186,10 @@
 //! C header: [`include/linux/workqueue.h`](srctree/include/linux/workqueue.h)
 
 use crate::{
-    alloc::{AllocError, Flags},
+    alloc::{
+        self,
+        AllocError, //
+    },
     container_of,
     prelude::*,
     sync::Arc,
@@ -194,7 +197,11 @@
     time::Jiffies,
     types::Opaque,
 };
-use core::marker::PhantomData;
+use core::{
+    marker::PhantomData,
+    ops::Deref,
+    ptr::NonNull, //
+};
 
 /// Creates a [`Work`] initialiser with the given name and a newly-created lock class.
 #[macro_export]
@@ -340,7 +347,7 @@ pub fn enqueue_delayed<W, const ID: u64>(
     /// This method can fail because it allocates memory to store the work item.
     pub fn try_spawn<T: 'static + Send + FnOnce()>(
         &self,
-        flags: Flags,
+        flags: alloc::Flags,
         func: T,
     ) -> Result<(), AllocError> {
         let init = pin_init!(ClosureWork {
@@ -353,6 +360,183 @@ pub fn try_spawn<T: 'static + Send + FnOnce()>(
     }
 }
 
+/// Workqueue builder.
+///
+/// A valid combination of workqueue flags contains one of the base flags (`WQ_UNBOUND`, `WQ_BH`,
+/// or `WQ_PERCPU`) and a combination of modifier flags that are compatible with the selected base
+/// flag.
+///
+/// For details, please refer to `Documentation/core-api/workqueue.rst`.
+pub struct Builder {
+    flags: bindings::wq_flags,
+    max_active: i32,
+}
+
+impl Builder {
+    /// Not bound to any cpu.
+    #[inline]
+    pub fn unbound() -> Builder {
+        Builder {
+            flags: bindings::wq_flags_WQ_UNBOUND,
+            max_active: 0,
+        }
+    }
+
+    /// Bound to a specific cpu.
+    #[inline]
+    pub fn percpu() -> Builder {
+        Builder {
+            flags: bindings::wq_flags_WQ_PERCPU,
+            max_active: 0,
+        }
+    }
+
+    /// Set the maximum number of work items that can execute concurrently.
+    ///
+    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.
+    #[inline]
+    pub fn max_active(mut self, max_active: u32) -> Builder {
+        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
+        self
+    }
+
+    /// Allow this workqueue to be frozen during suspend.
+    #[inline]
+    pub fn freezable(mut self) -> Self {
+        self.flags |= bindings::wq_flags_WQ_FREEZABLE;
+        self
+    }
+
+    /// This workqueue may be used during memory reclaim.
+    #[inline]
+    pub fn mem_reclaim(mut self) -> Self {
+        self.flags |= bindings::wq_flags_WQ_MEM_RECLAIM;
+        self
+    }
+
+    /// Mark this workqueue as cpu intensive.
+    #[inline]
+    pub fn cpu_intensive(mut self) -> Self {
+        self.flags |= bindings::wq_flags_WQ_CPU_INTENSIVE;
+        self
+    }
+
+    /// Make this workqueue visible in sysfs.
+    #[inline]
+    pub fn sysfs(mut self) -> Self {
+        self.flags |= bindings::wq_flags_WQ_SYSFS;
+        self
+    }
+
+    /// Mark this workqueue high priority.
+    #[inline]
+    pub fn highpri(mut self) -> Self {
+        self.flags |= bindings::wq_flags_WQ_HIGHPRI;
+        self
+    }
+
+    /// Allocates a new workqueue.
+    ///
+    /// The provided name is used verbatim as the workqueue name.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::workqueue;
+    ///
+    /// // create an unbound workqueue registered with sysfs
+    /// let wq = workqueue::Builder::unbound().sysfs().build(c"my-wq")?;
+    ///
+    /// // spawn a work item on it
+    /// wq.try_spawn(
+    ///     GFP_KERNEL,
+    ///     || pr_warn!("Printing from my-wq"),
+    /// )?;
+    /// # Ok::<(), Error>(())
+    /// ```
+    #[inline]
+    pub fn build(self, name: &CStr) -> Result<OwnedQueue, AllocError> {
+        // SAFETY:
+        // * c"%s" is compatible with passing the name as a c-string.
+        // * the builder only permits valid flag combinations
+        let ptr = unsafe {
+            bindings::alloc_workqueue(
+                c"%s".as_char_ptr(),
+                self.flags,
+                self.max_active,
+                name.as_char_ptr().cast::<c_void>(),
+            )
+        };
+
+        Ok(OwnedQueue {
+            queue: NonNull::new(ptr).ok_or(AllocError)?.cast(),
+        })
+    }
+
+    /// Allocates a new workqueue.
+    ///
+    /// # Examples
+    ///
+    /// This example shows how to pass a Rust string formatter to the workqueue name, creating
+    /// workqueues with names such as `my-wq-1` and `my-wq-2`.
+    ///
+    /// ```
+    /// use kernel::{
+    ///     alloc::AllocError,
+    ///     workqueue::{self, OwnedQueue},
+    /// };
+    ///
+    /// fn my_wq(num: u32) -> Result<OwnedQueue, AllocError> {
+    ///     // create a percpu workqueue called my-wq-{num}
+    ///     workqueue::Builder::percpu().build_fmt(fmt!("my-wq-{num}"))
+    /// }
+    /// ```
+    #[inline]
+    pub fn build_fmt(self, name: kernel::fmt::Arguments<'_>) -> Result<OwnedQueue, AllocError> {
+        // SAFETY:
+        // * c"%pA" is compatible with passing an `Arguments` pointer.
+        // * the builder only permits valid flag combinations
+        let ptr = unsafe {
+            bindings::alloc_workqueue(
+                c"%pA".as_char_ptr(),
+                self.flags,
+                self.max_active,
+                core::ptr::from_ref(&name).cast::<c_void>(),
+            )
+        };
+
+        Ok(OwnedQueue {
+            queue: NonNull::new(ptr).ok_or(AllocError)?.cast(),
+        })
+    }
+}
+
+/// An owned kernel work queue.
+///
+/// Dropping a workqueue blocks on all pending work.
+///
+/// # Invariants
+///
+/// `queue` points at a valid workqueue that is owned by this `OwnedQueue`.
+pub struct OwnedQueue {
+    queue: NonNull<Queue>,
+}
+
+impl Deref for OwnedQueue {
+    type Target = Queue;
+    fn deref(&self) -> &Queue {
+        // SAFETY: By the type invariants, this pointer references a valid queue.
+        unsafe { &*self.queue.as_ptr() }
+    }
+}
+
+impl Drop for OwnedQueue {
+    fn drop(&mut self) {
+        // SAFETY: The `OwnedQueue` is being destroyed, so we can destroy the workqueue it owns.
+        unsafe { bindings::destroy_workqueue(self.queue.as_ptr().cast()) }
+    }
+}
+
 /// A helper type used in [`try_spawn`].
 ///
 /// [`try_spawn`]: Queue::try_spawn

-- 
2.53.0.473.g4a7958ca14-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 14:53 ` [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs Alice Ryhl
@ 2026-02-27 15:10   ` Danilo Krummrich
  2026-02-27 15:47   ` Gary Guo
  2026-02-27 17:09   ` Tejun Heo
  2 siblings, 0 replies; 26+ messages in thread
From: Danilo Krummrich @ 2026-02-27 15:10 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
> When a workqueue is shut down, delayed work that is pending but not
> scheduled does not get properly cleaned up, so it's not safe to use
> `enqueue_delayed` on a workqueue that might be destroyed. To fix this,
> restrict `enqueue_delayed` to static queues.

:(

Reviewed-by: Danilo Krummrich <dakr@kernel.org>

> Cc: stable@vger.kernel.org
> Fixes: 7c098cd5eaae ("workqueue: rust: add delayed work items")
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  rust/kernel/workqueue.rs | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
> index 706e833e9702..1acd113c04ee 100644
> --- a/rust/kernel/workqueue.rs
> +++ b/rust/kernel/workqueue.rs
> @@ -296,8 +296,15 @@ pub fn enqueue<W, const ID: u64>(&self, w: W) -> W::EnqueueOutput
>      ///
>      /// This may fail if the work item is already enqueued in a workqueue.
>      ///
> +    /// This is only valid for global workqueues (with static lifetimes) because those are the only
> +    /// ones that outlive all possible delayed work items.

We should probably add a FIXME comment pointing out that this should be fixed in
the C code.

Maybe also link your approach?

> +    ///
>      /// The work item will be submitted using `WORK_CPU_UNBOUND`.
> -    pub fn enqueue_delayed<W, const ID: u64>(&self, w: W, delay: Jiffies) -> W::EnqueueOutput
> +    pub fn enqueue_delayed<W, const ID: u64>(
> +        &'static self,
> +        w: W,
> +        delay: Jiffies,
> +    ) -> W::EnqueueOutput
>      where
>          W: RawDelayedWorkItem<ID> + Send + 'static,
>      {
>
> -- 
> 2.53.0.473.g4a7958ca14-goog


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 14:53 ` [PATCH v3 2/2] rust: workqueue: add creation of workqueues Alice Ryhl
@ 2026-02-27 15:30   ` Danilo Krummrich
  2026-02-27 16:00     ` Gary Guo
  2026-02-27 19:05     ` Alice Ryhl
  2026-02-27 16:04   ` Gary Guo
  2026-02-28  2:24   ` kernel test robot
  2 siblings, 2 replies; 26+ messages in thread
From: Danilo Krummrich @ 2026-02-27 15:30 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
> +    /// Set the maximum number of work items that can execute concurrently.
> +    ///
> +    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.

Should we just mention the default value?

> +    #[inline]
> +    pub fn max_active(mut self, max_active: u32) -> Builder {
> +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);

The workqueue code prints a warning for max_active > WQ_MAX_ACTIVE. Maybe use
debug_assert()?

It's also a bit unfortunate that alloc_ordered_workqueue() becomes
.max_active(1).

At the same time having a separate ordered() method competes with max_active().

Maybe a type state, i.e. Builder<Ordered> that doesn't have max_active()?

> +        self
> +    }

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 14:53 ` [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs Alice Ryhl
  2026-02-27 15:10   ` Danilo Krummrich
@ 2026-02-27 15:47   ` Gary Guo
  2026-02-27 17:09   ` Tejun Heo
  2 siblings, 0 replies; 26+ messages in thread
From: Gary Guo @ 2026-02-27 15:47 UTC (permalink / raw)
  To: Alice Ryhl, Tejun Heo, Miguel Ojeda
  Cc: Lai Jiangshan, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, Daniel Almeida, John Hubbard,
	Philipp Stanner, rust-for-linux, linux-kernel, Boqun Feng,
	Benno Lossin, Tamir Duberstein, stable

On Fri Feb 27, 2026 at 2:53 PM GMT, Alice Ryhl wrote:
> When a workqueue is shut down, delayed work that is pending but not
> scheduled does not get properly cleaned up, so it's not safe to use
> `enqueue_delayed` on a workqueue that might be destroyed. To fix this,
> restrict `enqueue_delayed` to static queues.
> 
> Cc: stable@vger.kernel.org
> Fixes: 7c098cd5eaae ("workqueue: rust: add delayed work items")
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

Reviewed-by: Gary Guo <gary@garyguo.net>

> ---
>  rust/kernel/workqueue.rs | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 15:30   ` Danilo Krummrich
@ 2026-02-27 16:00     ` Gary Guo
  2026-02-27 16:12       ` Danilo Krummrich
  2026-02-27 19:05     ` Alice Ryhl
  1 sibling, 1 reply; 26+ messages in thread
From: Gary Guo @ 2026-02-27 16:00 UTC (permalink / raw)
  To: Danilo Krummrich, Alice Ryhl
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Fri Feb 27, 2026 at 3:30 PM GMT, Danilo Krummrich wrote:
> On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
>> +    /// Set the maximum number of work items that can execute concurrently.
>> +    ///
>> +    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.
>
> Should we just mention the default value?
>
>> +    #[inline]
>> +    pub fn max_active(mut self, max_active: u32) -> Builder {
>> +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
>
> The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
> debug_assert()?
>
> It's also a bit unfortunate that alloc_ordered_workqueue() becomes
> .max_active(1).

I think an ordered workqueue still needs to have the __WQ_ORDERED flag, too?

>
> At the same time having a separate ordered() method competes with max_active().

I don't think it's too bad to have both, and to check before constructing that
it's used correctly:

    warn_on!(self.flags & __WQ_ORDERED != 0 && self.max_active != 1);

but type state works, too.

Best,
Gary

>
> Maybe a type state, i.e. Builder<Ordered> that doesn't have max_active()?
>
>> +        self
>> +    }


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 14:53 ` [PATCH v3 2/2] rust: workqueue: add creation of workqueues Alice Ryhl
  2026-02-27 15:30   ` Danilo Krummrich
@ 2026-02-27 16:04   ` Gary Guo
  2026-02-27 19:08     ` Alice Ryhl
  2026-02-28  2:24   ` kernel test robot
  2 siblings, 1 reply; 26+ messages in thread
From: Gary Guo @ 2026-02-27 16:04 UTC (permalink / raw)
  To: Alice Ryhl, Tejun Heo, Miguel Ojeda
  Cc: Lai Jiangshan, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, Daniel Almeida, John Hubbard,
	Philipp Stanner, rust-for-linux, linux-kernel, Boqun Feng,
	Benno Lossin, Tamir Duberstein

On Fri Feb 27, 2026 at 2:53 PM GMT, Alice Ryhl wrote:
> Creating workqueues is needed by various GPU drivers. Not only does it
> give better control over execution, it also allows a driver to ensure
> that all work items have completed before the device is unbound (or
> similar) by running the workqueue destructor.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  rust/helpers/workqueue.c |   7 ++
>  rust/kernel/workqueue.rs | 190 ++++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 194 insertions(+), 3 deletions(-)
>
> diff --git a/rust/helpers/workqueue.c b/rust/helpers/workqueue.c
> index ce1c3a5b2150..e4b9d1b3d6bf 100644
> --- a/rust/helpers/workqueue.c
> +++ b/rust/helpers/workqueue.c
> @@ -14,3 +14,10 @@ __rust_helper void rust_helper_init_work_with_key(struct work_struct *work,
>  	INIT_LIST_HEAD(&work->entry);
>  	work->func = func;
>  }
> +
> +__rust_helper
> +struct workqueue_struct *rust_helper_alloc_workqueue(const char *fmt, unsigned int flags,
> +						     int max_active, const void *data)
> +{
> +	return alloc_workqueue(fmt, flags, max_active, data);
> +}
> diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
> index 1acd113c04ee..4ef02a537cd9 100644
> --- a/rust/kernel/workqueue.rs
> +++ b/rust/kernel/workqueue.rs
> @@ -186,7 +186,10 @@
>  //! C header: [`include/linux/workqueue.h`](srctree/include/linux/workqueue.h)
>  
>  use crate::{
> -    alloc::{AllocError, Flags},
> +    alloc::{
> +        self,
> +        AllocError, //
> +    },
>      container_of,
>      prelude::*,
>      sync::Arc,
> @@ -194,7 +197,11 @@
>      time::Jiffies,
>      types::Opaque,
>  };
> -use core::marker::PhantomData;
> +use core::{
> +    marker::PhantomData,
> +    ops::Deref,
> +    ptr::NonNull, //
> +};
>  
>  /// Creates a [`Work`] initialiser with the given name and a newly-created lock class.
>  #[macro_export]
> @@ -340,7 +347,7 @@ pub fn enqueue_delayed<W, const ID: u64>(
>      /// This method can fail because it allocates memory to store the work item.
>      pub fn try_spawn<T: 'static + Send + FnOnce()>(
>          &self,
> -        flags: Flags,
> +        flags: alloc::Flags,
>          func: T,
>      ) -> Result<(), AllocError> {
>          let init = pin_init!(ClosureWork {
> @@ -353,6 +360,183 @@ pub fn try_spawn<T: 'static + Send + FnOnce()>(
>      }
>  }
>  
> +/// Workqueue builder.
> +///
> +/// A valid combination of workqueue flags contains one of the base flags (`WQ_UNBOUND`, `WQ_BH`,
> +/// or `WQ_PERCPU`) and a combination of modifier flags that are compatible with the selected base
> +/// flag.
> +///
> +/// For details, please refer to `Documentation/core-api/workqueue.rst`.
> +pub struct Builder {
> +    flags: bindings::wq_flags,
> +    max_active: i32,
> +}
> +
> +impl Builder {
> +    /// Not bound to any cpu.
> +    #[inline]
> +    pub fn unbound() -> Builder {

These can all be "-> Self".

> +        Builder {
> +            flags: bindings::wq_flags_WQ_UNBOUND,
> +            max_active: 0,
> +        }
> +    }
> +
> +    /// Bound to a specific cpu.
> +    #[inline]
> +    pub fn percpu() -> Builder {
> +        Builder {
> +            flags: bindings::wq_flags_WQ_PERCPU,
> +            max_active: 0,
> +        }
> +    }
> +
> +    /// Set the maximum number of work items that can execute concurrently.
> +    ///
> +    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.
> +    #[inline]
> +    pub fn max_active(mut self, max_active: u32) -> Builder {
> +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
> +        self
> +    }
> +
> +    /// Allow this workqueue to be frozen during suspend.
> +    #[inline]
> +    pub fn freezable(mut self) -> Self {
> +        self.flags |= bindings::wq_flags_WQ_FREEZABLE;
> +        self
> +    }
> +
> +    /// This workqueue may be used during memory reclaim.
> +    #[inline]
> +    pub fn mem_reclaim(mut self) -> Self {
> +        self.flags |= bindings::wq_flags_WQ_MEM_RECLAIM;
> +        self
> +    }
> +
> +    /// Mark this workqueue as cpu intensive.
> +    #[inline]
> +    pub fn cpu_intensive(mut self) -> Self {
> +        self.flags |= bindings::wq_flags_WQ_CPU_INTENSIVE;
> +        self
> +    }
> +
> +    /// Make this workqueue visible in sysfs.
> +    #[inline]
> +    pub fn sysfs(mut self) -> Self {
> +        self.flags |= bindings::wq_flags_WQ_SYSFS;
> +        self
> +    }
> +
> +    /// Mark this workqueue high priority.
> +    #[inline]
> +    pub fn highpri(mut self) -> Self {
> +        self.flags |= bindings::wq_flags_WQ_HIGHPRI;
> +        self
> +    }
> +
> +    /// Allocates a new workqueue.
> +    ///
> +    /// The provided name is used verbatim as the workqueue name.
> +    ///
> +    /// # Examples
> +    ///
> +    /// ```
> +    /// use kernel::workqueue;
> +    ///
> +    /// // create an unbound workqueue registered with sysfs
> +    /// let wq = workqueue::Builder::unbound().sysfs().build(c"my-wq")?;
> +    ///
> +    /// // spawn a work item on it
> +    /// wq.try_spawn(
> +    ///     GFP_KERNEL,
> +    ///     || pr_warn!("Printing from my-wq"),
> +    /// )?;
> +    /// # Ok::<(), Error>(())
> +    /// ```
> +    #[inline]
> +    pub fn build(self, name: &CStr) -> Result<OwnedQueue, AllocError> {
> +        // SAFETY:
> +        // * c"%s" is compatible with passing the name as a c-string.
> +        // * the builder only permits valid flag combinations
> +        let ptr = unsafe {
> +            bindings::alloc_workqueue(
> +                c"%s".as_char_ptr(),
> +                self.flags,
> +                self.max_active,
> +                name.as_char_ptr().cast::<c_void>(),
> +            )
> +        };

INVARIANT comments?

> +
> +        Ok(OwnedQueue {
> +            queue: NonNull::new(ptr).ok_or(AllocError)?.cast(),
> +        })
> +    }
> +
> +    /// Allocates a new workqueue.
> +    ///
> +    /// # Examples
> +    ///
> +    /// This example shows how to pass a Rust string formatter to the workqueue name, creating
> +    /// workqueues with names such as `my-wq-1` and `my-wq-2`.
> +    ///
> +    /// ```
> +    /// use kernel::{
> +    ///     alloc::AllocError,
> +    ///     workqueue::{self, OwnedQueue},
> +    /// };
> +    ///
> +    /// fn my_wq(num: u32) -> Result<OwnedQueue, AllocError> {
> +    ///     // create a percpu workqueue called my-wq-{num}
> +    ///     workqueue::Builder::percpu().build_fmt(fmt!("my-wq-{num}"))
> +    /// }
> +    /// ```
> +    #[inline]
> +    pub fn build_fmt(self, name: kernel::fmt::Arguments<'_>) -> Result<OwnedQueue, AllocError> {
> +        // SAFETY:
> +        // * c"%pA" is compatible with passing an `Arguments` pointer.
> +        // * the builder only permits valid flag combinations
> +        let ptr = unsafe {
> +            bindings::alloc_workqueue(
> +                c"%pA".as_char_ptr(),
> +                self.flags,
> +                self.max_active,
> +                core::ptr::from_ref(&name).cast::<c_void>(),
> +            )
> +        };
> +
> +        Ok(OwnedQueue {
> +            queue: NonNull::new(ptr).ok_or(AllocError)?.cast(),
> +        })
> +    }
> +}
> +
> +/// An owned kernel work queue.
> +///
> +/// Dropping a workqueue blocks on all pending work.
> +///
> +/// # Invariants
> +///
> +/// `queue` points at a valid workqueue that is owned by this `OwnedQueue`.
> +pub struct OwnedQueue {
> +    queue: NonNull<Queue>,
> +}
> +
> +impl Deref for OwnedQueue {
> +    type Target = Queue;
> +    fn deref(&self) -> &Queue {
> +        // SAFETY: By the type invariants, this pointer references a valid queue.
> +        unsafe { &*self.queue.as_ptr() }
> +    }
> +}
> +
> +impl Drop for OwnedQueue {
> +    fn drop(&mut self) {
> +        // SAFETY: The `OwnedQueue` is being destroyed, so we can destroy the workqueue it owns.

Hmm, is this correct? This should say *why* the call is safe, not what it is
doing.

I think this should mention: (1) the pointer is valid and (2) no delayed work is
being scheduled on this queue.

Best,
Gary

> +        unsafe { bindings::destroy_workqueue(self.queue.as_ptr().cast()) }
> +    }
> +}
> +
>  /// A helper type used in [`try_spawn`].
>  ///
>  /// [`try_spawn`]: Queue::try_spawn


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 16:00     ` Gary Guo
@ 2026-02-27 16:12       ` Danilo Krummrich
  0 siblings, 0 replies; 26+ messages in thread
From: Danilo Krummrich @ 2026-02-27 16:12 UTC (permalink / raw)
  To: Gary Guo
  Cc: Alice Ryhl, Tejun Heo, Miguel Ojeda, Lai Jiangshan,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Fri Feb 27, 2026 at 5:00 PM CET, Gary Guo wrote:
> On Fri Feb 27, 2026 at 3:30 PM GMT, Danilo Krummrich wrote:
>> On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
>>> +    /// Set the maximum number of work items that can execute concurrently.
>>> +    ///
>>> +    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.
>>
>> Should we just mention the default value?
>>
>>> +    #[inline]
>>> +    pub fn max_active(mut self, max_active: u32) -> Builder {
>>> +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
>>
>> The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
>> debug_assert()?
>>
>> It's also a bit unfortunate that alloc_ordered_workqueue() becomes
>> .max_active(1).
>
> I think an ordered workqueue still needs to have the __WQ_ORDERED flag, too?
>
>>
>> At the same time having a separate ordered() method competes with max_active().
>
> I don't think it's too bad to have both, and to check before constructing that
> it's used correctly:
>
>     warn_on!(self.flags & __WQ_ORDERED != 0 && self.max_active != 1);
>
> but type state works, too.

I think we can ensure this at compile time, so we should do it.
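A standalone sketch of what the typestate version could look like (all names
and flag values are invented for illustration; the real builder would carry
bindings::wq_flags and call alloc_workqueue()):

```rust
use core::marker::PhantomData;

// Hypothetical flag values for illustration only.
const WQ_UNBOUND: u32 = 1 << 1;
const WQ_ORDERED: u32 = 1 << 31;

/// Marker state: max_active is still configurable.
struct Concurrent;
/// Marker state: ordered queue, max_active fixed at 1.
struct Ordered;

struct Builder<S> {
    flags: u32,
    max_active: i32,
    _state: PhantomData<S>,
}

impl Builder<Concurrent> {
    fn unbound() -> Self {
        Builder {
            flags: WQ_UNBOUND,
            max_active: 0,
            _state: PhantomData,
        }
    }

    /// Only callable in the `Concurrent` state.
    fn max_active(mut self, max_active: u32) -> Self {
        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
        self
    }

    /// Transition to the `Ordered` state; `max_active()` no longer
    /// exists on the result, so `.ordered().max_active(4)` is a
    /// compile error rather than a runtime warning.
    fn ordered(self) -> Builder<Ordered> {
        Builder {
            flags: self.flags | WQ_ORDERED,
            max_active: 1,
            _state: PhantomData,
        }
    }
}
```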

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 14:53 ` [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs Alice Ryhl
  2026-02-27 15:10   ` Danilo Krummrich
  2026-02-27 15:47   ` Gary Guo
@ 2026-02-27 17:09   ` Tejun Heo
  2026-02-27 19:01     ` Alice Ryhl
  2 siblings, 1 reply; 26+ messages in thread
From: Tejun Heo @ 2026-02-27 17:09 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 02:53:20PM +0000, Alice Ryhl wrote:
> When a workqueue is shut down, delayed work that is pending but not
> scheduled does not get properly cleaned up, so it's not safe to use
> `enqueue_delayed` on a workqueue that might be destroyed. To fix this,
> restrict `enqueue_delayed` to static queues.

C being C, we've been just chalking this up as "user error", but please feel
free to add per-workqueue percpu ref for pending delayed work items if
that'd help. That shouldn't be noticeably expensive and should help
straighten this out for rust hopefully.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 17:09   ` Tejun Heo
@ 2026-02-27 19:01     ` Alice Ryhl
  2026-02-27 19:08       ` Tejun Heo
  0 siblings, 1 reply; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 19:01 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 07:09:07AM -1000, Tejun Heo wrote:
> On Fri, Feb 27, 2026 at 02:53:20PM +0000, Alice Ryhl wrote:
> > When a workqueue is shut down, delayed work that is pending but not
> > scheduled does not get properly cleaned up, so it's not safe to use
> > `enqueue_delayed` on a workqueue that might be destroyed. To fix this,
> > restrict `enqueue_delayed` to static queues.
> 
> C being C, we've been just chalking this up as "user error", but please feel
> free to add per-workqueue percpu ref for pending delayed work items if
> that'd help. That shouldn't be noticeably expensive and should help
> straighten this out for rust hopefully.

I had been thinking I would pick up this patch again:
https://lore.kernel.org/all/20250423-destroy-workqueue-flush-v1-1-3d74820780a5@google.com/

but it sounds like you're suggesting a different solution?

Alice


* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 15:30   ` Danilo Krummrich
  2026-02-27 16:00     ` Gary Guo
@ 2026-02-27 19:05     ` Alice Ryhl
  2026-02-27 19:23       ` Danilo Krummrich
  1 sibling, 1 reply; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 19:05 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Fri, Feb 27, 2026 at 04:30:59PM +0100, Danilo Krummrich wrote:
> On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
> > +    /// Set the maximum number of in-flight work items.
> > +    ///
> > +    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.
> 
> Should we just mention the default value?

I can mention the constant name, but I'd like to avoid mentioning a
value that might change.

> > +    #[inline]
> > +    pub fn max_active(mut self, max_active: u32) -> Builder {
> > +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
> 
> The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
> debug_assert()?

What's wrong with just making use of the C-side warning?

> It's also a bit unfortunate that alloc_ordered_workqueue() becomes
> .max_active(1).
> 
> At the same time having a separate ordered() method competes with max_active().
> 
> Maybe a type state, i.e. Builder<Ordered> that doesn't have max_active()?

Sorry, I'm a bit confused by this. Why would an ordered() method compete
with max_active()?

Alice


* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 16:04   ` Gary Guo
@ 2026-02-27 19:08     ` Alice Ryhl
  0 siblings, 0 replies; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 19:08 UTC (permalink / raw)
  To: Gary Guo
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein

On Fri, Feb 27, 2026 at 04:04:57PM +0000, Gary Guo wrote:
> On Fri Feb 27, 2026 at 2:53 PM GMT, Alice Ryhl wrote:
> > Creating workqueues is needed by various GPU drivers. Not only does it
> > give you better control over execution, it also allows devices to ensure
> > that all tasks have exited before the device is unbound (or similar) by
> > running the workqueue destructor.
> >
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> > ---
> >  rust/helpers/workqueue.c |   7 ++
> >  rust/kernel/workqueue.rs | 190 ++++++++++++++++++++++++++++++++++++++++++++++-
> >  2 files changed, 194 insertions(+), 3 deletions(-)
> >
> > diff --git a/rust/helpers/workqueue.c b/rust/helpers/workqueue.c
> > index ce1c3a5b2150..e4b9d1b3d6bf 100644
> > --- a/rust/helpers/workqueue.c
> > +++ b/rust/helpers/workqueue.c
> > @@ -14,3 +14,10 @@ __rust_helper void rust_helper_init_work_with_key(struct work_struct *work,
> >  	INIT_LIST_HEAD(&work->entry);
> >  	work->func = func;
> >  }
> > +
> > +__rust_helper
> > +struct workqueue_struct *rust_helper_alloc_workqueue(const char *fmt, unsigned int flags,
> > +						     int max_active, const void *data)
> > +{
> > +	return alloc_workqueue(fmt, flags, max_active, data);
> > +}
> > diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
> > index 1acd113c04ee..4ef02a537cd9 100644
> > --- a/rust/kernel/workqueue.rs
> > +++ b/rust/kernel/workqueue.rs
> > @@ -186,7 +186,10 @@
> >  //! C header: [`include/linux/workqueue.h`](srctree/include/linux/workqueue.h)
> >  
> >  use crate::{
> > -    alloc::{AllocError, Flags},
> > +    alloc::{
> > +        self,
> > +        AllocError, //
> > +    },
> >      container_of,
> >      prelude::*,
> >      sync::Arc,
> > @@ -194,7 +197,11 @@
> >      time::Jiffies,
> >      types::Opaque,
> >  };
> > -use core::marker::PhantomData;
> > +use core::{
> > +    marker::PhantomData,
> > +    ops::Deref,
> > +    ptr::NonNull, //
> > +};
> >  
> >  /// Creates a [`Work`] initialiser with the given name and a newly-created lock class.
> >  #[macro_export]
> > @@ -340,7 +347,7 @@ pub fn enqueue_delayed<W, const ID: u64>(
> >      /// This method can fail because it allocates memory to store the work item.
> >      pub fn try_spawn<T: 'static + Send + FnOnce()>(
> >          &self,
> > -        flags: Flags,
> > +        flags: alloc::Flags,
> >          func: T,
> >      ) -> Result<(), AllocError> {
> >          let init = pin_init!(ClosureWork {
> > @@ -353,6 +360,183 @@ pub fn try_spawn<T: 'static + Send + FnOnce()>(
> >      }
> >  }
> >  
> > +/// Workqueue builder.
> > +///
> > +/// A valid combination of workqueue flags contains one of the base flags (`WQ_UNBOUND`, `WQ_BH`,
> > +/// or `WQ_PERCPU`) and a combination of modifier flags that are compatible with the selected base
> > +/// flag.
> > +///
> > +/// For details, please refer to `Documentation/core-api/workqueue.rst`.
> > +pub struct Builder {
> > +    flags: bindings::wq_flags,
> > +    max_active: i32,
> > +}
> > +
> > +impl Builder {
> > +    /// Not bound to any cpu.
> > +    #[inline]
> > +    pub fn unbound() -> Builder {
> 
> These can all be "-> Self".
> 
> > +        Builder {
> > +            flags: bindings::wq_flags_WQ_UNBOUND,
> > +            max_active: 0,
> > +        }
> > +    }
> > +
> > +    /// Bound to a specific cpu.
> > +    #[inline]
> > +    pub fn percpu() -> Builder {
> > +        Builder {
> > +            flags: bindings::wq_flags_WQ_PERCPU,
> > +            max_active: 0,
> > +        }
> > +    }
> > +
> > +    /// Set the maximum number of in-flight work items.
> > +    ///
> > +    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.
> > +    #[inline]
> > +    pub fn max_active(mut self, max_active: u32) -> Builder {
> > +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
> > +        self
> > +    }
> > +
> > +    /// Allow this workqueue to be frozen during suspend.
> > +    #[inline]
> > +    pub fn freezable(mut self) -> Self {
> > +        self.flags |= bindings::wq_flags_WQ_FREEZABLE;
> > +        self
> > +    }
> > +
> > +    /// This workqueue may be used during memory reclaim.
> > +    #[inline]
> > +    pub fn mem_reclaim(mut self) -> Self {
> > +        self.flags |= bindings::wq_flags_WQ_MEM_RECLAIM;
> > +        self
> > +    }
> > +
> > +    /// Mark this workqueue as cpu intensive.
> > +    #[inline]
> > +    pub fn cpu_intensive(mut self) -> Self {
> > +        self.flags |= bindings::wq_flags_WQ_CPU_INTENSIVE;
> > +        self
> > +    }
> > +
> > +    /// Make this workqueue visible in sysfs.
> > +    #[inline]
> > +    pub fn sysfs(mut self) -> Self {
> > +        self.flags |= bindings::wq_flags_WQ_SYSFS;
> > +        self
> > +    }
> > +
> > +    /// Mark this workqueue high priority.
> > +    #[inline]
> > +    pub fn highpri(mut self) -> Self {
> > +        self.flags |= bindings::wq_flags_WQ_HIGHPRI;
> > +        self
> > +    }
> > +
> > +    /// Allocates a new workqueue.
> > +    ///
> > +    /// The provided name is used verbatim as the workqueue name.
> > +    ///
> > +    /// # Examples
> > +    ///
> > +    /// ```
> > +    /// use kernel::workqueue;
> > +    ///
> > +    /// // create an unbound workqueue registered with sysfs
> > +    /// let wq = workqueue::Builder::unbound().sysfs().build(c"my-wq")?;
> > +    ///
> > +    /// // spawn a work item on it
> > +    /// wq.try_spawn(
> > +    ///     GFP_KERNEL,
> > +    ///     || pr_warn!("Printing from my-wq"),
> > +    /// )?;
> > +    /// # Ok::<(), Error>(())
> > +    /// ```
> > +    #[inline]
> > +    pub fn build(self, name: &CStr) -> Result<OwnedQueue, AllocError> {
> > +        // SAFETY:
> > +        // * c"%s" is compatible with passing the name as a c-string.
> > +        // * the builder only permits valid flag combinations
> > +        let ptr = unsafe {
> > +            bindings::alloc_workqueue(
> > +                c"%s".as_char_ptr(),
> > +                self.flags,
> > +                self.max_active,
> > +                name.as_char_ptr().cast::<c_void>(),
> > +            )
> > +        };
> 
> INVARIANT comments?
> 
> > +
> > +        Ok(OwnedQueue {
> > +            queue: NonNull::new(ptr).ok_or(AllocError)?.cast(),
> > +        })
> > +    }
> > +
> > +    /// Allocates a new workqueue.
> > +    ///
> > +    /// # Examples
> > +    ///
> > +    /// This example shows how to pass a Rust string formatter to the workqueue name, creating
> > +    /// workqueues with names such as `my-wq-1` and `my-wq-2`.
> > +    ///
> > +    /// ```
> > +    /// use kernel::{
> > +    ///     alloc::AllocError,
> > +    ///     workqueue::{self, OwnedQueue},
> > +    /// };
> > +    ///
> > +    /// fn my_wq(num: u32) -> Result<OwnedQueue, AllocError> {
> > +    ///     // create a percpu workqueue called my-wq-{num}
> > +    ///     workqueue::Builder::percpu().build_fmt(fmt!("my-wq-{num}"))
> > +    /// }
> > +    /// ```
> > +    #[inline]
> > +    pub fn build_fmt(self, name: kernel::fmt::Arguments<'_>) -> Result<OwnedQueue, AllocError> {
> > +        // SAFETY:
> > +        // * c"%pA" is compatible with passing an `Arguments` pointer.
> > +        // * the builder only permits valid flag combinations
> > +        let ptr = unsafe {
> > +            bindings::alloc_workqueue(
> > +                c"%pA".as_char_ptr(),
> > +                self.flags,
> > +                self.max_active,
> > +                core::ptr::from_ref(&name).cast::<c_void>(),
> > +            )
> > +        };
> > +
> > +        Ok(OwnedQueue {
> > +            queue: NonNull::new(ptr).ok_or(AllocError)?.cast(),
> > +        })
> > +    }
> > +}
> > +
> > +/// An owned kernel work queue.
> > +///
> > +/// Dropping a workqueue blocks on all pending work.
> > +///
> > +/// # Invariants
> > +///
> > +/// `queue` points at a valid workqueue that is owned by this `OwnedQueue`.
> > +pub struct OwnedQueue {
> > +    queue: NonNull<Queue>,
> > +}
> > +
> > +impl Deref for OwnedQueue {
> > +    type Target = Queue;
> > +    fn deref(&self) -> &Queue {
> > +        // SAFETY: By the type invariants, this pointer references a valid queue.
> > +        unsafe { &*self.queue.as_ptr() }
> > +    }
> > +}
> > +
> > +impl Drop for OwnedQueue {
> > +    fn drop(&mut self) {
> > +        // SAFETY: The `OwnedQueue` is being destroyed, so we can destroy the workqueue it owns.
> 
> Hmm, is this correct? This should say *why* the call is safe, not what it is
> doing.
> 
> I think this should mention: (1) the pointer is valid and (2) no delayed work is
> being scheduled on this queue.

Although it is missing the part about delayed work, I do think this
talks about why. The pointer being valid is far from sufficient. We need
actual ownership of the workqueue - nobody can use it after this. We can
destroy the inner workqueue *because* the OwnedQueue owning it is being
dropped.

Alice


* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 19:01     ` Alice Ryhl
@ 2026-02-27 19:08       ` Tejun Heo
  2026-02-27 19:19         ` Alice Ryhl
  0 siblings, 1 reply; 26+ messages in thread
From: Tejun Heo @ 2026-02-27 19:08 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 07:01:18PM +0000, Alice Ryhl wrote:
> On Fri, Feb 27, 2026 at 07:09:07AM -1000, Tejun Heo wrote:
> > On Fri, Feb 27, 2026 at 02:53:20PM +0000, Alice Ryhl wrote:
> > > When a workqueue is shut down, delayed work that is pending but not
> > > scheduled does not get properly cleaned up, so it's not safe to use
> > > `enqueue_delayed` on a workqueue that might be destroyed. To fix this,
> > > restrict `enqueue_delayed` to static queues.
> > 
> > C being C, we've been just chalking this up as "user error", but please feel
> > free to add per-workqueue percpu ref for pending delayed work items if
> > that'd help. That shouldn't be noticeably expensive and should help
> > straighten this out for rust hopefully.
> 
> I had been thinking I would pick up this patch again:
> https://lore.kernel.org/all/20250423-destroy-workqueue-flush-v1-1-3d74820780a5@google.com/
> 
> but it sounds like you're suggesting a different solution?

I'm not remembering much context at this point, but if it *could* work,
percpu refcnt counting the number of delayed work items would be cheaper.
Again, I could easily be forgetting why we didn't do that in the first
place.

Thanks.

-- 
tejun


* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 19:08       ` Tejun Heo
@ 2026-02-27 19:19         ` Alice Ryhl
  2026-02-27 19:24           ` Tejun Heo
  0 siblings, 1 reply; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 19:19 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 09:08:55AM -1000, Tejun Heo wrote:
> On Fri, Feb 27, 2026 at 07:01:18PM +0000, Alice Ryhl wrote:
> > On Fri, Feb 27, 2026 at 07:09:07AM -1000, Tejun Heo wrote:
> > > On Fri, Feb 27, 2026 at 02:53:20PM +0000, Alice Ryhl wrote:
> > > > When a workqueue is shut down, delayed work that is pending but not
> > > > scheduled does not get properly cleaned up, so it's not safe to use
> > > > `enqueue_delayed` on a workqueue that might be destroyed. To fix this,
> > > > restrict `enqueue_delayed` to static queues.
> > > 
> > > C being C, we've been just chalking this up as "user error", but please feel
> > > free to add per-workqueue percpu ref for pending delayed work items if
> > > that'd help. That shouldn't be noticeably expensive and should help
> > > straighten this out for rust hopefully.
> > 
> > I had been thinking I would pick up this patch again:
> > https://lore.kernel.org/all/20250423-destroy-workqueue-flush-v1-1-3d74820780a5@google.com/
> > 
> > but it sounds like you're suggesting a different solution?
> 
> I'm not remembering much context at this point, but if it *could* work,
> percpu refcnt counting the number of delayed work items would be cheaper.
> Again, I could easily be forgetting why we didn't do that in the first
> place.

I guess the question is, what does destroy_workqueue() do?

- Does it wait for the timers to finish?
- Does it immediately run the delayed works?
- Does it exit without waiting for timers?

It sounds like the refcount approach is the last solution, where
destroy_workqueue() just exits without waiting for timers, but then
keeping the workqueue alive until the timers elapse.

The main concern I can see is that this means that delayed work can run
after destroy_workqueue() is called. That may be a problem if
destroy_workqueue() is used to guard module unload (or device unbind).

Alice


* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 19:05     ` Alice Ryhl
@ 2026-02-27 19:23       ` Danilo Krummrich
  2026-02-28 12:59         ` Alice Ryhl
  0 siblings, 1 reply; 26+ messages in thread
From: Danilo Krummrich @ 2026-02-27 19:23 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Fri Feb 27, 2026 at 8:05 PM CET, Alice Ryhl wrote:
> On Fri, Feb 27, 2026 at 04:30:59PM +0100, Danilo Krummrich wrote:
>> On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
>> > +    /// Set the maximum number of in-flight work items.
>> > +    ///
>> > +    /// If not set, a reasonable default value is used. The maximum value is `WQ_MAX_ACTIVE`.
>> 
>> Should we just mention the default value?
>
> I can mention the constant name, but I'd like to avoid mentioning a
> value that might change.

Yes, that's what I meant.

>> > +    #[inline]
>> > +    pub fn max_active(mut self, max_active: u32) -> Builder {
>> > +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
>> 
>> The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
>> debug_assert()?
>
> What's wrong with just making use of the C-side warning?

IIRC, we have the same pattern in other Rust code where we use debug_assert()
when a value gets clamped, e.g. in udelay().

>> It's also a bit unfortunate that alloc_ordered_workqueue() becomes
>> .max_active(1).
>> 
>> At the same time having a separate ordered() method competes with max_active().
>> 
>> Maybe a type state, i.e. Builder<Ordered> that doesn't have max_active()?
>
> Sorry, I'm a bit confused by this. Why would an ordered() method compete
> with max_active()?

Because you could get an inconsistent state with __WQ_ORDERED and
max_active > 1.

It also conflicts with sysfs() I think [1].

[1] https://elixir.bootlin.com/linux/v6.19.3/source/kernel/workqueue.c#L7417


* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 19:19         ` Alice Ryhl
@ 2026-02-27 19:24           ` Tejun Heo
  2026-02-27 19:28             ` Alice Ryhl
  0 siblings, 1 reply; 26+ messages in thread
From: Tejun Heo @ 2026-02-27 19:24 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

Hello,

On Fri, Feb 27, 2026 at 07:19:56PM +0000, Alice Ryhl wrote:
> I guess the question is, what does destroy_workqueue() do?
> 
> - Does it wait for the timers to finish?
> - Does it immediately run the delayed works?
> - Does it exit without waiting for timers?
> 
> It sounds like the refcount approach is the last solution, where
> destroy_workqueue() just exits without waiting for timers, but then
> keeping the workqueue alive until the timers elapse.
> 
> The main concern I can see is that this means that delayed work can run
> after destroy_workqueue() is called. That may be a problem if
> destroy_workqueue() is used to guard module unload (or device unbind).

delayed_work is just pointing to the wq pointer. On destroy_workqueue(), we
can shut it down and free all the supporting stuff while leaving zombie wq
struct which noops execution and let the whole thing go away when refs reach
zero?
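That zombie-struct scheme can be modeled in ordinary Rust with reference counting (purely illustrative, not kernel code; `Arc` stands in for the per-workqueue ref, and the names here are made up):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Toy model: each pending delayed work item holds a reference to the
// queue. destroy_workqueue() marks the queue dead and drops the owner's
// reference; the struct stays allocated (via the pending items' refs)
// until the last timer fires, but execution after destroy is a no-op.
struct Queue {
    dead: AtomicBool,
}

fn alloc_queue() -> Arc<Queue> {
    Arc::new(Queue { dead: AtomicBool::new(false) })
}

// Enqueueing a delayed item hands the item its own reference.
fn enqueue_delayed(q: &Arc<Queue>) -> Arc<Queue> {
    Arc::clone(q)
}

// Shut down: mark dead and give up the owner's reference.
fn destroy_workqueue(q: Arc<Queue>) {
    q.dead.store(true, Ordering::Release);
    // `q` is dropped here; pending items keep the struct alive.
}

// Called when an item's timer elapses; returns whether the work ran.
fn timer_fired(pending: Arc<Queue>) -> bool {
    !pending.dead.load(Ordering::Acquire)
}
```

Note that because execution noops after destroy, this model has exactly the self-freeing-work problem raised below: work that was supposed to free its own memory never runs.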

Thanks.

-- 
tejun


* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 19:24           ` Tejun Heo
@ 2026-02-27 19:28             ` Alice Ryhl
  2026-02-27 19:46               ` Tejun Heo
  0 siblings, 1 reply; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 19:28 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 09:24:34AM -1000, Tejun Heo wrote:
> Hello,
> 
> On Fri, Feb 27, 2026 at 07:19:56PM +0000, Alice Ryhl wrote:
> > I guess the question is, what does destroy_workqueue() do?
> > 
> > - Does it wait for the timers to finish?
> > - Does it immediately run the delayed works?
> > - Does it exit without waiting for timers?
> > 
> > It sounds like the refcount approach is the last solution, where
> > destroy_workqueue() just exits without waiting for timers, but then
> > keeping the workqueue alive until the timers elapse.
> > 
> > The main concern I can see is that this means that delayed work can run
> > after destroy_workqueue() is called. That may be a problem if
> > destroy_workqueue() is used to guard module unload (or device unbind).
> 
> delayed_work is just pointing to the wq pointer. On destroy_workqueue(), we
> can shut it down and free all the supporting stuff while leaving zombie wq
> struct which noops execution and let the whole thing go away when refs reach
> zero?

But isn't that a problem for e.g. self-freeing work? If we don't run the
work, then its memory is just leaked.

Alice


* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 19:28             ` Alice Ryhl
@ 2026-02-27 19:46               ` Tejun Heo
  2026-02-27 20:36                 ` Alice Ryhl
  0 siblings, 1 reply; 26+ messages in thread
From: Tejun Heo @ 2026-02-27 19:46 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 07:28:11PM +0000, Alice Ryhl wrote:
> > delayed_work is just pointing to the wq pointer. On destroy_workqueue(), we
> > can shut it down and free all the supporting stuff while leaving zombie wq
> > struct which noops execution and let the whole thing go away when refs reach
> > zero?
> 
> But isn't that a problem for e.g. self-freeing work? If we don't run the
> work, then its memory is just leaked.

Yeah, good point. Maybe we should just keep the whole thing up while
removing it from sysfs. Would that work?

Thanks.

-- 
tejun


* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 19:46               ` Tejun Heo
@ 2026-02-27 20:36                 ` Alice Ryhl
  2026-02-27 21:25                   ` Tejun Heo
  0 siblings, 1 reply; 26+ messages in thread
From: Alice Ryhl @ 2026-02-27 20:36 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 8:46 PM Tejun Heo <tj@kernel.org> wrote:
>
> On Fri, Feb 27, 2026 at 07:28:11PM +0000, Alice Ryhl wrote:
> > > delayed_work is just pointing to the wq pointer. On destroy_workqueue(), we
> > > can shut it down and free all the supporting stuff while leaving zombie wq
> > > struct which noops execution and let the whole thing go away when refs reach
> > > zero?
> >
> > But isn't that a problem for e.g. self-freeing work? If we don't run the
> > work, then its memory is just leaked.
>
> Yeah, good point. Maybe we should just keep the whole thing up while
> removing it from sysfs. Would that work?

We can, but there are two variants of that:

If destroy_workqueue() waits for delayed work, then it may take a long time.

If destroy_workqueue() does not wait for delayed work, then I'm
worried about bugs resulting from module unload and similar.

Alice


* Re: [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs
  2026-02-27 20:36                 ` Alice Ryhl
@ 2026-02-27 21:25                   ` Tejun Heo
  0 siblings, 0 replies; 26+ messages in thread
From: Tejun Heo @ 2026-02-27 21:25 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Boqun Feng, Benno Lossin, Tamir Duberstein, stable

On Fri, Feb 27, 2026 at 09:36:22PM +0100, Alice Ryhl wrote:
> On Fri, Feb 27, 2026 at 8:46 PM Tejun Heo <tj@kernel.org> wrote:
> >
> > On Fri, Feb 27, 2026 at 07:28:11PM +0000, Alice Ryhl wrote:
> > > > delayed_work is just pointing to the wq pointer. On destroy_workqueue(), we
> > > > can shut it down and free all the supporting stuff while leaving zombie wq
> > > > struct which noops execution and let the whole thing go away when refs reach
> > > > zero?
> > >
> > > But isn't that a problem for e.g. self-freeing work? If we don't run the
> > > work, then its memory is just leaked.
> >
> > Yeah, good point. Maybe we should just keep the whole thing up while
> > removing it from sysfs. Would that work?
> 
> We can, but there are two variants of that:
> 
> If destroy_workqueue() waits for delayed work, then it may take a long time.
> 
> If destroy_workqueue() does not wait for delayed work, then I'm
> worried about bugs resulting from module unload and similar.

I see. Yeah, neither seems workable. We should be able to flush the delayed
work items. Maybe we can make that an optional feature so that rust wrappers
can turn it on for safety.

Thanks.

-- 
tejun


* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 14:53 ` [PATCH v3 2/2] rust: workqueue: add creation of workqueues Alice Ryhl
  2026-02-27 15:30   ` Danilo Krummrich
  2026-02-27 16:04   ` Gary Guo
@ 2026-02-28  2:24   ` kernel test robot
  2 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2026-02-28  2:24 UTC (permalink / raw)
  To: Alice Ryhl, Tejun Heo, Miguel Ojeda
  Cc: oe-kbuild-all, Lai Jiangshan, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, Daniel Almeida,
	John Hubbard, Philipp Stanner, rust-for-linux, linux-kernel,
	Alice Ryhl, Boqun Feng, Benno Lossin, Tamir Duberstein

Hi Alice,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f]

url:    https://github.com/intel-lab-lkp/linux/commits/Alice-Ryhl/rust-workqueue-restrict-delayed-work-to-global-wqs/20260227-230029
base:   6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
patch link:    https://lore.kernel.org/r/20260227-create-workqueue-v3-2-87de133f7849%40google.com
patch subject: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
config: riscv-randconfig-r063-20260228 (https://download.01.org/0day-ci/archive/20260228/202602281033.afIyhHsn-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 9a109fbb6e184ec9bcce10615949f598f4c974a9)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260228/202602281033.afIyhHsn-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602281033.afIyhHsn-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> clang diag: include/linux/workqueue.h:513:76: warning: diagnostic behavior may be improved by adding the 'format(printf, 1, 4)' attribute to the declaration of 'rust_helper_alloc_workqueue' [-Wmissing-format-attribute]
--
   In file included from rust/helpers/helpers.c:66:
>> rust/helpers/workqueue.c:22:9: warning: diagnostic behavior may be improved by adding the 'format(printf, 1, 4)' attribute to the declaration of 'rust_helper_alloc_workqueue' [-Wmissing-format-attribute]
      19 | struct workqueue_struct *rust_helper_alloc_workqueue(const char *fmt, unsigned int flags,
         | __attribute__((format(printf, 1, 4))) 
      20 |                                                      int max_active, const void *data)
      21 | {
      22 |         return alloc_workqueue(fmt, flags, max_active, data);
         |                ^
   include/linux/workqueue.h:513:76: note: expanded from macro 'alloc_workqueue'
     513 | #define alloc_workqueue(...)    alloc_hooks(alloc_workqueue_noprof(__VA_ARGS__))
         |                                                                               ^
   include/linux/alloc_tag.h:265:31: note: expanded from macro 'alloc_hooks'
     265 |         alloc_hooks_tag(&_alloc_tag, _do_alloc);                        \
         |                                      ^
   include/linux/alloc_tag.h:251:9: note: expanded from macro 'alloc_hooks_tag'
     251 |         typeof(_do_alloc) _res;                                         \
         |                ^
   rust/helpers/workqueue.c:19:26: note: 'rust_helper_alloc_workqueue' declared here
      19 | struct workqueue_struct *rust_helper_alloc_workqueue(const char *fmt, unsigned int flags,
         |                          ^
   1 warning generated.
--
   llvm-nm: error: rust/helpers/helpers.o: No such file or directory
>> clang diag: include/linux/workqueue.h:513:76: warning: diagnostic behavior may be improved by adding the 'format(printf, 1, 4)' attribute to the declaration of 'rust_helper_alloc_workqueue' [-Wmissing-format-attribute]

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-27 19:23       ` Danilo Krummrich
@ 2026-02-28 12:59         ` Alice Ryhl
  2026-02-28 14:43           ` Danilo Krummrich
  0 siblings, 1 reply; 26+ messages in thread
From: Alice Ryhl @ 2026-02-28 12:59 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Fri, Feb 27, 2026 at 08:23:44PM +0100, Danilo Krummrich wrote:
> On Fri Feb 27, 2026 at 8:05 PM CET, Alice Ryhl wrote:
> > On Fri, Feb 27, 2026 at 04:30:59PM +0100, Danilo Krummrich wrote:
> >> On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
> >> > +    #[inline]
> >> > +    pub fn max_active(mut self, max_active: u32) -> Builder {
> >> > +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
> >> 
> >> The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
> >> debug_assert()?
> >
> > What's wrong with just making use of the C-side warning?
> 
> IIRC, we have the same pattern in other Rust code where we use debug_assert()
> when a value gets clamped, e.g. in udelay().

In udelay(), the clamping happens on the Rust side, so it makes sense
that Rust is the one to warn about it.

Here, the clamping happens in C code. To warn about it, I'd have to
duplicate the existing C-side check to clamp in Rust.
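For comparison, the udelay()-style pattern being referred to would look roughly like this on the Rust side (a sketch; the `WQ_MAX_ACTIVE` value here is illustrative, not the kernel's):

```rust
// Sketch of the clamp-plus-debug_assert pattern: loud in debug builds
// when the caller passes an out-of-range value, silently clamped in
// release builds. The constant's value is made up for the example.
const WQ_MAX_ACTIVE: u32 = 512;

fn clamp_max_active(max_active: u32) -> i32 {
    // Catch out-of-range callers in debug builds...
    debug_assert!(
        max_active <= WQ_MAX_ACTIVE,
        "max_active exceeds WQ_MAX_ACTIVE"
    );
    // ...but clamp and carry on in release builds, duplicating the
    // check that C-side alloc_workqueue() already performs.
    max_active.min(WQ_MAX_ACTIVE) as i32
}
```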

> >> It's also a bit unfortunate that alloc_ordered_workqueue() becomes
> >> .max_active(1).
> >> 
> >> At the same time having a separate ordered() method competes with max_active().
> >> 
> >> Maybe a type state, i.e. Builder<Ordered> that doesn't have max_active()?
> >
> > Sorry I'm a bit confused by this. Why does an ordered() compete with
> > max_active()?
> 
> Because you could get an inconsistent state with __WQ_ORDERED and
> max_active > 1.
> 
> It also conflicts with sysfs() I think [1].
> 
> [1] https://elixir.bootlin.com/linux/v6.19.3/source/kernel/workqueue.c#L7417

And I guess the further argument is that we have a use-case for ordered
workqueues?

Alice

^ permalink raw reply	[flat|nested] 26+ messages in thread
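[The two approaches debated above can be sketched roughly as follows. This is an illustrative stand-in, not the patch's actual code: the `Builder` struct is simplified and `WQ_MAX_ACTIVE` here is just a local constant mirroring the C-side limit.]

```rust
// Simplified stand-in for the kernel's limit; the real bound lives in
// include/linux/workqueue.h on the C side.
const WQ_MAX_ACTIVE: i32 = 512;

struct Builder {
    max_active: i32,
}

impl Builder {
    // The patch's approach: saturate into i32 and rely on the C side
    // to warn about and clamp anything above WQ_MAX_ACTIVE.
    fn max_active(mut self, max_active: u32) -> Builder {
        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
        self
    }

    // The alternative discussed: duplicate the C-side bound check so
    // debug builds catch out-of-range values before crossing into C.
    fn max_active_checked(mut self, max_active: u32) -> Builder {
        debug_assert!((1..=WQ_MAX_ACTIVE as u32).contains(&max_active));
        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
        self
    }
}
```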

* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-28 12:59         ` Alice Ryhl
@ 2026-02-28 14:43           ` Danilo Krummrich
  2026-03-01 11:55             ` Alice Ryhl
  0 siblings, 1 reply; 26+ messages in thread
From: Danilo Krummrich @ 2026-02-28 14:43 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Sat Feb 28, 2026 at 1:59 PM CET, Alice Ryhl wrote:
> On Fri, Feb 27, 2026 at 08:23:44PM +0100, Danilo Krummrich wrote:
>> On Fri Feb 27, 2026 at 8:05 PM CET, Alice Ryhl wrote:
>> > On Fri, Feb 27, 2026 at 04:30:59PM +0100, Danilo Krummrich wrote:
>> >> On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
>> >> > +    #[inline]
>> >> > +    pub fn max_active(mut self, max_active: u32) -> Builder {
>> >> > +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
>> >> 
>> >> The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
>> >> debug_assert()?
>> >
>> > What's wrong with just making use of the C-side warning?
>> 
>> IIRC, we have the same pattern in other Rust code that we use debug_assert()
>> when a value got clamped, e.g. in udelay().
>
> In udelay(), the clamping happens on the Rust side, so it makes sense
> that Rust is the one to warn about it.
>
> Here, the clamping happens in C code. To warn about it, I'd have to
> duplicate the existing C-side check to clamp in Rust.

That's fair, although I also think that it is not unreasonable. Given that this
uses the builder pattern, I think it would be nice to ensure that nothing
"invalid" can be built in the first place.

Maybe we can use a bounded integer?

>> >> It's also a bit unfortunate that alloc_ordered_workqueue() becomes
>> >> .max_active(1).
>> >> 
>> >> At the same time having a separate ordered() method competes with max_active().
>> >> 
>> >> Maybe a type state, i.e. Builder<Ordered> that doesn't have max_active()?
>> >
>> > Sorry I'm a bit confused by this. Why does an ordered() compete with
>> > max_active()?
>> 
>> Because you could get an inconsistent state with __WQ_ORDERED and
>> max_active > 1.
>> 
>> It also conflicts with sysfs() I think [1].
>> 
>> [1] https://elixir.bootlin.com/linux/v6.19.3/source/kernel/workqueue.c#L7417
>
> And I guess the further argument is that we have a use-case for ordered
> workqueues?

In the context of

	GPU drivers often need to create their own workqueues for various
	reasons. Add the ability to do so.

I think we do.

Depending on the final implementation details and the driver, it may be needed
by the job queue.

They are also pretty common outside the scheduler use-case in GPU drivers. I
think panthor has one as well, so you might also need one in Tyr. In nova-core I
expect this to be used in MM code.

But even without that, I think it would be reasonable to consider ordered queues
for this abstraction, since alloc_ordered_workqueue() and
create_singlethread_workqueue() seem to have more users than the non-ordered
constructors (without checking whether alloc_workqueue() is also used directly
to create ordered queues).

^ permalink raw reply	[flat|nested] 26+ messages in thread
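[The Builder<Ordered> typestate idea proposed above could look roughly like this. Everything here is hypothetical: the marker types, the flag bit, and the field layout are illustrative, not what the patch defines, and `WQ_ORDERED_FLAG` is a placeholder rather than the kernel's actual __WQ_ORDERED value.]

```rust
use core::marker::PhantomData;

// Marker types for the two builder states.
struct Concurrent;
struct Ordered;

// Placeholder bit standing in for the C-side __WQ_ORDERED flag.
const WQ_ORDERED_FLAG: u32 = 1 << 17;

struct Builder<State = Concurrent> {
    max_active: i32,
    flags: u32,
    _state: PhantomData<State>,
}

impl Builder<Concurrent> {
    fn new() -> Self {
        Builder { max_active: 0, flags: 0, _state: PhantomData }
    }

    // Only the concurrent state exposes max_active(), so an ordered
    // queue can never be combined with max_active > 1.
    fn max_active(mut self, max_active: u32) -> Self {
        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
        self
    }

    // Transitioning to the ordered state pins max_active to 1;
    // Builder<Ordered> simply has no max_active() method, so the
    // inconsistent combination is unrepresentable.
    fn ordered(self) -> Builder<Ordered> {
        Builder {
            max_active: 1,
            flags: self.flags | WQ_ORDERED_FLAG,
            _state: PhantomData,
        }
    }
}
```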

* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-02-28 14:43           ` Danilo Krummrich
@ 2026-03-01 11:55             ` Alice Ryhl
  2026-03-02 12:45               ` Philipp Stanner
  0 siblings, 1 reply; 26+ messages in thread
From: Alice Ryhl @ 2026-03-01 11:55 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Sat, Feb 28, 2026 at 03:43:02PM +0100, Danilo Krummrich wrote:
> On Sat Feb 28, 2026 at 1:59 PM CET, Alice Ryhl wrote:
> > On Fri, Feb 27, 2026 at 08:23:44PM +0100, Danilo Krummrich wrote:
> >> On Fri Feb 27, 2026 at 8:05 PM CET, Alice Ryhl wrote:
> >> > On Fri, Feb 27, 2026 at 04:30:59PM +0100, Danilo Krummrich wrote:
> >> >> On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
> >> >> > +    #[inline]
> >> >> > +    pub fn max_active(mut self, max_active: u32) -> Builder {
> >> >> > +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
> >> >> 
> >> >> The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
> >> >> debug_assert()?
> >> >
> >> > What's wrong with just making use of the C-side warning?
> >> 
> >> IIRC, we have the same pattern in other Rust code that we use debug_assert()
> >> when a value got clamped, e.g. in udelay().
> >
> > In udelay(), the clamping happens on the Rust side, so it makes sense
> > that Rust is the one to warn about it.
> >
> > Here, the clamping happens in C code. To warn about it, I'd have to
> > duplicate the existing C-side check to clamp in Rust.
> 
> That's fair, although I also think that it is not unreasonable. Given that this
> uses the builder pattern, I think it would be nice to ensure that nothing
> "invalid" can be built in the first place.
> 
> Maybe we can use a bounded integer?

Bounded integers allow zero, which is also illegal.

I think it's a bit much honestly.

> >> >> It's also a bit unfortunate that alloc_ordered_workqueue() becomes
> >> >> .max_active(1).
> >> >> 
> >> >> At the same time having a separate ordered() method competes with max_active().
> >> >> 
> >> >> Maybe a type state, i.e. Builder<Ordered> that doesn't have max_active()?
> >> >
> >> > Sorry I'm a bit confused by this. Why does an ordered() compete with
> >> > max_active()?
> >> 
> >> Because you could get an inconsistent state with __WQ_ORDERED and
> >> max_active > 1.
> >> 
> >> It also conflicts with sysfs() I think [1].
> >> 
> >> [1] https://elixir.bootlin.com/linux/v6.19.3/source/kernel/workqueue.c#L7417
> >
> > And I guess the further argument is that we have a use-case for ordered
> > workqueues?
> 
> In the context of
> 
> 	GPU drivers often need to create their own workqueues for various
> 	reasons. Add the ability to do so.
> 
> I think we do.
> 
> Depending on the final implementation details and the driver it may be needed by
> the job queue.
> 
> They are also pretty common outside the scheduler use-case in GPU drivers. I
> think panthor has one as well, so you might also need one in Tyr. In nova-core I
> expect this to be used in MM code.
> 
> But even without that, I think it would be reasonble to consider ordered queues
> for this abstraction, since alloc_ordered_workqueue() and
> create_singlethread_workqueue() seem to have more users than the non-ordered
> constructors (without checking whether alloc_workqueue() is also used directly
> to create ordered queues).

Ok, I'll consider this for next version.

Alice

^ permalink raw reply	[flat|nested] 26+ messages in thread
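[For what a bounded, non-zero wrapper could look like despite the objection above: a sketch under the assumption that `WQ_MAX_ACTIVE` mirrors the C-side limit. `NonZeroU32` rules out the illegal zero at the type level, and construction clamps the upper bound, so no "invalid" value reaches the C side. This is not proposed API, just an illustration of the idea.]

```rust
use core::num::NonZeroU32;

// Assumed to mirror the C-side limit on max_active.
const WQ_MAX_ACTIVE: u32 = 512;

struct MaxActive(NonZeroU32);

impl MaxActive {
    fn new(raw: NonZeroU32) -> MaxActive {
        // NonZeroU32 implements Ord, so clamping preserves non-zeroness.
        MaxActive(raw.min(NonZeroU32::new(WQ_MAX_ACTIVE).unwrap()))
    }

    // The value handed across the FFI boundary, already known to be
    // in 1..=WQ_MAX_ACTIVE.
    fn get(self) -> i32 {
        self.0.get() as i32
    }
}
```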

* Re: [PATCH v3 2/2] rust: workqueue: add creation of workqueues
  2026-03-01 11:55             ` Alice Ryhl
@ 2026-03-02 12:45               ` Philipp Stanner
  0 siblings, 0 replies; 26+ messages in thread
From: Philipp Stanner @ 2026-03-02 12:45 UTC (permalink / raw)
  To: Alice Ryhl, Danilo Krummrich
  Cc: Tejun Heo, Miguel Ojeda, Lai Jiangshan, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Trevor Gross,
	Daniel Almeida, John Hubbard, Philipp Stanner, rust-for-linux,
	linux-kernel, Boqun Feng, Benno Lossin, Tamir Duberstein

On Sun, 2026-03-01 at 11:55 +0000, Alice Ryhl wrote:
> On Sat, Feb 28, 2026 at 03:43:02PM +0100, Danilo Krummrich wrote:
> > On Sat Feb 28, 2026 at 1:59 PM CET, Alice Ryhl wrote:
> > > On Fri, Feb 27, 2026 at 08:23:44PM +0100, Danilo Krummrich wrote:
> > > > On Fri Feb 27, 2026 at 8:05 PM CET, Alice Ryhl wrote:
> > > > > On Fri, Feb 27, 2026 at 04:30:59PM +0100, Danilo Krummrich wrote:
> > > > > > On Fri Feb 27, 2026 at 3:53 PM CET, Alice Ryhl wrote:
> > > > > > > +    #[inline]
> > > > > > > +    pub fn max_active(mut self, max_active: u32) -> Builder {
> > > > > > > +        self.max_active = i32::try_from(max_active).unwrap_or(i32::MAX);
> > > > > > 
> > > > > > The workqueue code prints a warning for max_active >  WQ_MAX_ACTIVE. Maybe use
> > > > > > debug_assert()?
> > > > > 
> > > > > What's wrong with just making use of the C-side warning?
> > > > 
> > > > IIRC, we have the same pattern in other Rust code that we use debug_assert()
> > > > when a value got clamped, e.g. in udelay().
> > > 
> > > In udelay(), the clamping happens on the Rust side, so it makes sense
> > > that Rust is the one to warn about it.
> > > 
> > > Here, the clamping happens in C code. To warn about it, I'd have to
> > > duplicate the existing C-side check to clamp in Rust.
> > 
> > That's fair, although I also think that it is not unreasonable. Given that this
> > uses the builder pattern, I think it would be nice to ensure that nothing
> > "invalid" can be built in the first place.
> > 
> > Maybe we can use a bounded integer?
> 
> Bounded integers allow zero, which is also illegal.
> 
> I think it's a bit much honestly.


My two cents here as well: it's more elegant to just leverage
the C side's warning.

P.

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2026-03-02 12:54 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-27 14:53 [PATCH v3 0/2] Creation of workqueues in Rust Alice Ryhl
2026-02-27 14:53 ` [PATCH v3 1/2] rust: workqueue: restrict delayed work to global wqs Alice Ryhl
2026-02-27 15:10   ` Danilo Krummrich
2026-02-27 15:47   ` Gary Guo
2026-02-27 17:09   ` Tejun Heo
2026-02-27 19:01     ` Alice Ryhl
2026-02-27 19:08       ` Tejun Heo
2026-02-27 19:19         ` Alice Ryhl
2026-02-27 19:24           ` Tejun Heo
2026-02-27 19:28             ` Alice Ryhl
2026-02-27 19:46               ` Tejun Heo
2026-02-27 20:36                 ` Alice Ryhl
2026-02-27 21:25                   ` Tejun Heo
2026-02-27 14:53 ` [PATCH v3 2/2] rust: workqueue: add creation of workqueues Alice Ryhl
2026-02-27 15:30   ` Danilo Krummrich
2026-02-27 16:00     ` Gary Guo
2026-02-27 16:12       ` Danilo Krummrich
2026-02-27 19:05     ` Alice Ryhl
2026-02-27 19:23       ` Danilo Krummrich
2026-02-28 12:59         ` Alice Ryhl
2026-02-28 14:43           ` Danilo Krummrich
2026-03-01 11:55             ` Alice Ryhl
2026-03-02 12:45               ` Philipp Stanner
2026-02-27 16:04   ` Gary Guo
2026-02-27 19:08     ` Alice Ryhl
2026-02-28  2:24   ` kernel test robot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox