* [PATCH] rust: add task_work abstraction
@ 2026-04-21 6:38 Ashutosh Desai
From: Ashutosh Desai @ 2026-04-21 6:38 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
linux-kernel, rust-for-linux, Ashutosh Desai
The kernel's task_work API (include/linux/task_work.h) allows any
kernel thread to queue a callback that runs on a specific task the
next time it returns to user space (or on exit). The C interface
requires manual lifetime management of the callback_head allocation
and careful ordering of init_task_work() and task_work_add().
Add a safe Rust abstraction consisting of:
- TaskWork<T>: an owned, heap-allocated work item. Allocating
with TaskWork::new() ties a user-supplied value T (bound by the
new TaskWorkItem trait) to a callback_head. Scheduling with
TaskWork::add() initializes the callback, transfers ownership to
the kernel, and returns Err((ESRCH, data)) if the target task is
already exiting, so callers can always recover the value.
- TaskWorkItem: a trait for types that can be scheduled as a task
work item. Implementors provide a run() method that receives the
inner value by move when the callback fires.
- NotifyMode: a type-safe enum wrapping task_work_notify_mode,
covering TWA_NONE, TWA_RESUME, TWA_SIGNAL, TWA_SIGNAL_NO_IPI,
and TWA_NMI_CURRENT.
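The error-recovery contract of add() can be illustrated with a small
userspace mock (everything here is local to the example, not kernel
API; ESRCH is hardcoded to its conventional value):

```rust
// Mock of the contract: `add` either consumes the value or hands it
// back on failure, mirroring Err((ESRCH, data)) in the real binding.

const ESRCH: i32 = 3;

trait TaskWorkItem: Sized + Send {
    fn run(this: Self);
}

struct LogEntry(String);

impl TaskWorkItem for LogEntry {
    fn run(this: Self) {
        println!("ran: {}", this.0);
    }
}

// Mock scheduler: `exiting` simulates the target task already exiting.
fn mock_add<T: TaskWorkItem>(data: T, exiting: bool) -> Result<(), (i32, T)> {
    if exiting {
        // The caller gets the value back and can clean up or retry.
        return Err((ESRCH, data));
    }
    // On success the "kernel" owns the value and runs it later.
    T::run(data);
    Ok(())
}

fn main() {
    assert!(mock_add(LogEntry("hello".into()), false).is_ok());
    match mock_add(LogEntry("orphan".into()), true) {
        Err((e, recovered)) => {
            assert_eq!(e, ESRCH);
            assert_eq!(recovered.0, "orphan");
        }
        Ok(()) => unreachable!(),
    }
}
```

Because the value comes back by move on failure, no caller-side clone
or Drop bookkeeping is needed.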
The repr(C) layout of the internal TaskWorkInner<T> places
callback_head at offset 0, which lets the C callback pointer be
cast back to the full allocation. Option<T> in that struct permits
moving the data out before dropping the allocation without a Drop
impl on TaskWork<T>.
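The layout argument can be checked in plain userspace Rust; this
sketch uses only std, and Head/Inner are names local to the example
(Head stands in for struct callback_head):

```rust
// With repr(C), the first field sits at offset 0, so a pointer to it
// can be cast back to the whole allocation, and Option::take moves
// the payload out before the Box is dropped.

#[repr(C)]
struct Head {
    _pad: [u8; 16], // stand-in for struct callback_head
}

#[repr(C)]
struct Inner<T> {
    head: Head,      // offset 0, like callback_head in TaskWorkInner<T>
    data: Option<T>, // taken out in the callback before the drop
}

// What the "kernel" would hand back: a pointer to the head field.
fn into_head_ptr<T>(inner: Box<Inner<T>>) -> *mut Head {
    Box::into_raw(inner).cast::<Head>()
}

// The callback side: recover the full allocation from the head pointer.
fn run_from_head<T>(head: *mut Head) -> T {
    // SAFETY: head is the first field of a live allocation produced by
    // into_head_ptr, so the cast recovers the original Box.
    let mut inner = unsafe { Box::from_raw(head.cast::<Inner<T>>()) };
    inner.data.take().expect("data present")
    // The Box drops here, freeing the allocation after the payload moved out.
}

fn main() {
    let inner = Box::new(Inner {
        head: Head { _pad: [0; 16] },
        data: Some(42u32),
    });
    let head = into_head_ptr(inner);
    assert_eq!(run_from_head::<u32>(head), 42);
}
```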
Signed-off-by: Ashutosh Desai <ashutoshdesai993@gmail.com>
---
rust/kernel/lib.rs | 1 +
rust/kernel/task_work.rs | 158 +++++++++++++++++++++++++++++++++++++++
2 files changed, 159 insertions(+)
create mode 100644 rust/kernel/task_work.rs
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index d93292d47420..22878fd73528 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -154,6 +154,7 @@
pub mod str;
pub mod sync;
pub mod task;
+pub mod task_work;
pub mod time;
pub mod tracepoint;
pub mod transmute;
diff --git a/rust/kernel/task_work.rs b/rust/kernel/task_work.rs
new file mode 100644
index 000000000000..59635d5fed13
--- /dev/null
+++ b/rust/kernel/task_work.rs
@@ -0,0 +1,158 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2026 Ashutosh Desai <ashutoshdesai993@gmail.com>
+
+//! Task work.
+//!
+//! Wraps the kernel's task work API, which lets you schedule a callback to run
+//! on a task the next time it returns to userspace.
+//!
+//! The entry point is [`TaskWork`]. Allocate one with [`TaskWork::new`] and
+//! schedule it with [`TaskWork::add`]. On success the kernel takes ownership
+//! and calls [`TaskWorkItem::run`] when the task returns to userspace.
+//!
+//! C header: [`include/linux/task_work.h`](srctree/include/linux/task_work.h)
+
+use crate::{
+ alloc::{AllocError, Flags, KBox},
+ bindings,
+ error::{code::ESRCH, Error},
+ task::Task,
+ types::Opaque,
+};
+
+/// Notification mode passed to [`TaskWork::add`].
+///
+/// Controls how the target task is woken after the work item is queued.
+#[derive(Copy, Clone, Debug, Eq, PartialEq)]
+pub enum NotifyMode {
+ /// No extra wakeup; the work runs when the task returns to userspace naturally.
+ None,
+ /// Notify via TIF_NOTIFY_RESUME; the work runs at the task's next return to userspace without forcing it out of a blocking syscall.
+ Resume,
+ /// Use signal-style notification so interruptible syscalls return early and the work runs promptly.
+ Signal,
+ /// Like [`NotifyMode::Signal`] but without an inter-processor interrupt when
+ /// the target is running on a remote CPU.
+ SignalNoIpi,
+ /// Queue onto the current task from NMI context; the work still runs when
+ /// the task next returns to userspace. Only valid from an NMI handler.
+ NmiCurrent,
+}
+
+impl NotifyMode {
+ fn as_raw(self) -> bindings::task_work_notify_mode {
+ match self {
+ NotifyMode::None => bindings::task_work_notify_mode_TWA_NONE,
+ NotifyMode::Resume => bindings::task_work_notify_mode_TWA_RESUME,
+ NotifyMode::Signal => bindings::task_work_notify_mode_TWA_SIGNAL,
+ NotifyMode::SignalNoIpi => bindings::task_work_notify_mode_TWA_SIGNAL_NO_IPI,
+ NotifyMode::NmiCurrent => bindings::task_work_notify_mode_TWA_NMI_CURRENT,
+ }
+ }
+}
+
+/// Implemented by types that can be scheduled as a task work item.
+///
+/// The implementing type is heap-allocated by [`TaskWork::new`] and passed by
+/// value to [`run`] when the target task returns to userspace. Implementors
+/// must be [`Send`] because the value crosses into the target task's context.
+///
+/// [`run`]: TaskWorkItem::run
+pub trait TaskWorkItem: Sized + Send {
+ /// Called when the target task returns to userspace.
+ fn run(this: Self);
+}
+
+// The kernel's `struct callback_head` is the first field so that a
+// `*mut callback_head` handed back by the kernel can be cast directly to
+// `*mut TaskWorkInner<T>`.
+#[repr(C)]
+struct TaskWorkInner<T> {
+ callback_head: Opaque<bindings::callback_head>,
+ // Wrapped in Option so that we can move T out before the allocation is
+ // freed, without needing a Drop impl on TaskWork (which would block
+ // moving self.inner in add()).
+ data: Option<T>,
+}
+
+/// A heap-allocated task work item ready to be scheduled on a [`Task`].
+///
+/// Dropping this before calling [`add`] drops the inner value normally.
+///
+/// # C counterpart
+///
+/// Wraps `struct callback_head` from `<linux/task_work.h>`.
+///
+/// [`add`]: TaskWork::add
+#[doc(alias = "callback_head")]
+pub struct TaskWork<T: TaskWorkItem> {
+ inner: KBox<TaskWorkInner<T>>,
+}
+
+impl<T: TaskWorkItem> TaskWork<T> {
+ /// Allocates a new task work item wrapping `data`.
+ pub fn new(data: T, flags: Flags) -> Result<Self, AllocError> {
+ Ok(Self {
+ inner: KBox::new(
+ TaskWorkInner {
+ callback_head: Opaque::uninit(),
+ data: Some(data),
+ },
+ flags,
+ )?,
+ })
+ }
+
+ /// Schedules this item to run when `task` next returns to userspace.
+ ///
+ /// On success, ownership passes to the kernel and [`TaskWorkItem::run`] will
+ /// be called with the inner data when the task returns to userspace or exits.
+ ///
+ /// On failure (`ESRCH`), the task is exiting and the inner data is returned
+ /// so the caller can clean up.
+ pub fn add(self, task: &Task, mode: NotifyMode) -> Result<(), (Error, T)> {
+ let inner = KBox::into_raw(self.inner);
+
+ // SAFETY: We have exclusive access to the allocation and are writing
+ // the callback pointer before passing it to the kernel.
+ unsafe {
+ bindings::init_task_work(
+ Opaque::cast_into(core::ptr::addr_of!((*inner).callback_head)),
+ Some(run_callback::<T>),
+ );
+ }
+
+ // SAFETY: The callback_head is initialized above and sits at offset 0
+ // of the repr(C) TaskWorkInner<T>. On success the kernel owns the
+ // allocation until run_callback fires; on failure we reclaim it below.
+ let ret = unsafe {
+ bindings::task_work_add(
+ task.as_ptr(),
+ Opaque::cast_into(core::ptr::addr_of!((*inner).callback_head)),
+ mode.as_raw(),
+ )
+ };
+
+ if ret != 0 {
+ // SAFETY: task_work_add failed so we still own the allocation.
+ let mut boxed = unsafe { KBox::from_raw(inner) };
+ let data = boxed.data.take().expect("data present before add");
+ return Err((ESRCH, data));
+ }
+
+ Ok(())
+ }
+}
+
+// SAFETY: cb points at the callback_head field of a TaskWorkInner<T> allocated
+// by TaskWork::new and successfully handed to task_work_add. The kernel gives us
+// back exclusive ownership of the allocation here.
+unsafe extern "C" fn run_callback<T: TaskWorkItem>(cb: *mut bindings::callback_head) {
+ // SAFETY: callback_head is at offset 0 of the repr(C) TaskWorkInner<T> and
+ // Opaque<T> is repr(transparent), so the cast is valid.
+ let mut inner = unsafe { KBox::from_raw(cb.cast::<TaskWorkInner<T>>()) };
+ let data = inner.data.take().expect("data present in callback");
+ drop(inner);
+ T::run(data);
+}
base-commit: 7f87a5ea75f011d2c9bc8ac0167e5e2d1adb1594
prerequisite-patch-id: 161541863a5d4d8c2a41c6f5cfdf712463bf50c9
prerequisite-patch-id: 3b286037e1aeb5a942ff450ced7ec52048bfea7a
prerequisite-patch-id: 14ac8c330c1207cbba2096c4771e000ae82bb765
--
2.34.1
* Re: [PATCH] rust: add task_work abstraction
2026-04-21 6:38 [PATCH] rust: add task_work abstraction Ashutosh Desai
@ 2026-04-21 6:44 ` Greg KH
From: Greg KH @ 2026-04-21 6:44 UTC (permalink / raw)
To: Ashutosh Desai
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, linux-kernel, rust-for-linux
On Tue, Apr 21, 2026 at 06:38:36AM +0000, Ashutosh Desai wrote:
> The kernel's task_work API (include/linux/task_work.h) allows any
> kernel thread to queue a callback that runs on a specific task the
> next time it returns to user space (or on exit). The C interface
> requires manual lifetime management of the callback_head allocation
> and careful ordering of init_task_work() and task_work_add().
>
> Add a safe Rust abstraction consisting of:
>
> - TaskWork<T>: an owned, heap-allocated work item. Allocating
> with TaskWork::new() ties a user-supplied value T (bound by the
> new TaskWorkItem trait) to a callback_head. Scheduling with
> TaskWork::add() initializes the callback, transfers ownership to
> the kernel, and returns Err((ESRCH, data)) if the target task is
> already exiting, so callers can always recover the value.
>
> - TaskWorkItem: a trait for types that can be scheduled as a task
> work item. Implementors provide a run() method that receives the
> inner value by move when the callback fires.
>
> - NotifyMode: a type-safe enum wrapping task_work_notify_mode,
> covering TWA_NONE, TWA_RESUME, TWA_SIGNAL, TWA_SIGNAL_NO_IPI,
> and TWA_NMI_CURRENT.
>
> The repr(C) layout of the internal TaskWorkInner<T> places
> callback_head at offset 0, which lets the C callback pointer be
> cast back to the full allocation. Option<T> in that struct permits
> moving the data out before dropping the allocation without a Drop
> impl on TaskWork<T>.
>
> Signed-off-by: Ashutosh Desai <ashutoshdesai993@gmail.com>
> ---
> rust/kernel/lib.rs | 1 +
> rust/kernel/task_work.rs | 158 +++++++++++++++++++++++++++++++++++++++
> 2 files changed, 159 insertions(+)
> create mode 100644 rust/kernel/task_work.rs
Do you have a user for this binding so that we can see how it is being
used to determine if it is correct?
thanks,
greg k-h