From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mitchell Levy <levymitchell0@gmail.com>
Date: Fri, 10 Apr 2026 14:35:36 -0700
Subject: [PATCH v5 6/8] rust: percpu: add a rust per-CPU variable sample
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260410-rust-percpu-v5-6-4292380d7a41@gmail.com>
References: <20260410-rust-percpu-v5-0-4292380d7a41@gmail.com>
In-Reply-To: <20260410-rust-percpu-v5-0-4292380d7a41@gmail.com>
To: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
 Alice Ryhl, Trevor Gross, Andrew Morton, Dennis Zhou, Tejun Heo,
 Christoph Lameter, Danilo Krummrich, Benno Lossin, Yury Norov,
 Viresh Kumar, Boqun Feng
Cc: Tyler Hicks, Allen Pais, linux-kernel@vger.kernel.org,
 rust-for-linux@vger.kernel.org, linux-mm@kvack.org, Mitchell Levy
X-Mailer: b4 0.15.1

Add a short exercise for Rust's per-CPU variable API, modelled after
lib/percpu_test.c.

Signed-off-by: Mitchell Levy <levymitchell0@gmail.com>
---
 rust/helpers/percpu.c       |   1 +
 rust/kernel/percpu.rs       |   2 +-
 samples/rust/Kconfig        |   9 ++
 samples/rust/Makefile       |   1 +
 samples/rust/rust_percpu.rs | 278 ++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 290 insertions(+), 1 deletion(-)

diff --git a/rust/helpers/percpu.c b/rust/helpers/percpu.c
index 3b2f69a96c66..173b516cd813 100644
--- a/rust/helpers/percpu.c
+++ b/rust/helpers/percpu.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 
 #include <linux/percpu.h>
+#include <linux/smp.h>
 
 __rust_helper
 void __percpu *rust_helper_alloc_percpu(size_t sz, size_t align)
diff --git a/rust/kernel/percpu.rs b/rust/kernel/percpu.rs
index 0ec1245038bb..72c83fef68ee 100644
--- a/rust/kernel/percpu.rs
+++ b/rust/kernel/percpu.rs
@@ -2,7 +2,7 @@
 //! Per-CPU variables.
 //!
 //! See the [`crate::define_per_cpu!`] macro, the [`DynamicPerCpu`] type, and the [`PerCpu`]
-//! trait.
+//! trait. Example usage can be found in `samples/rust/rust_percpu.rs`.
 
 pub mod cpu_guard;
 mod dynamic;
diff --git a/samples/rust/Kconfig b/samples/rust/Kconfig
index c49ab9106345..2616dbd7b37a 100644
--- a/samples/rust/Kconfig
+++ b/samples/rust/Kconfig
@@ -172,6 +172,15 @@ config SAMPLE_RUST_SOC
 
 	  If unsure, say N.
 
+config SAMPLE_RUST_PERCPU
+	tristate "Per-CPU support"
+	depends on m
+	help
+	  Enable this option to build a module which demonstrates Rust per-CPU
+	  operations.
+
+	  If unsure, say N.
+
 config SAMPLE_RUST_HOSTPROGS
 	bool "Host programs"
 	help
diff --git a/samples/rust/Makefile b/samples/rust/Makefile
index 6c0aaa58cccc..1ce120ac0402 100644
--- a/samples/rust/Makefile
+++ b/samples/rust/Makefile
@@ -16,6 +16,7 @@ obj-$(CONFIG_SAMPLE_RUST_DRIVER_FAUX) += rust_driver_faux.o
 obj-$(CONFIG_SAMPLE_RUST_DRIVER_AUXILIARY) += rust_driver_auxiliary.o
 obj-$(CONFIG_SAMPLE_RUST_CONFIGFS) += rust_configfs.o
 obj-$(CONFIG_SAMPLE_RUST_SOC) += rust_soc.o
+obj-$(CONFIG_SAMPLE_RUST_PERCPU) += rust_percpu.o
 
 rust_print-y := rust_print_main.o rust_print_events.o
diff --git a/samples/rust/rust_percpu.rs b/samples/rust/rust_percpu.rs
new file mode 100644
index 000000000000..5adb30509bd4
--- /dev/null
+++ b/samples/rust/rust_percpu.rs
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0
+//! A simple demonstration of the Rust per-CPU API.
+
+use core::cell::RefCell;
+use core::ffi::c_void;
+
+use kernel::{
+    bindings::on_each_cpu,
+    cpu::CpuId,
+    define_per_cpu, get_static_per_cpu,
+    percpu::{cpu_guard::*, *},
+    pr_info,
+    prelude::*,
+    sync::Arc,
+};
+
+module! {
+    type: PerCpuMod,
+    name: "rust_percpu",
+    authors: ["Mitchell Levy"],
+    description: "Sample to demonstrate the Rust per-CPU API",
+    license: "GPL v2",
+}
+
+struct PerCpuMod;
+
+define_per_cpu!(PERCPU: i64 = 0);
+define_per_cpu!(UPERCPU: u64 = 0);
+define_per_cpu!(CHECKED: RefCell<u64> = RefCell::new(0));
+
+impl kernel::Module for PerCpuMod {
+    fn init(_module: &'static ThisModule) -> Result<Self> {
+        pr_info!("rust percpu test start\n");
+
+        let mut native: i64 = 0;
+        let mut pcpu: StaticPerCpu<i64> = get_static_per_cpu!(PERCPU);
+
+        // SAFETY: We only have one PerCpu that points at PERCPU
+        unsafe { pcpu.get_mut(CpuGuard::new()) }.with(|val: &mut i64| {
+            pr_info!("The contents of pcpu are {}\n", *val);
+
+            native += -1;
+            *val += -1;
+            pr_info!("Native: {}, *pcpu: {}\n", native, *val);
+            assert!(native == *val && native == -1);
+
+            native += 1;
+            *val += 1;
+            pr_info!("Native: {}, *pcpu: {}\n", native, *val);
+            assert!(native == *val && native == 0);
+        });
+
+        let mut unative: u64 = 0;
+        let mut upcpu: StaticPerCpu<u64> = get_static_per_cpu!(UPERCPU);
+
+        // SAFETY: We only have one PerCpu pointing at UPERCPU
+        unsafe { upcpu.get_mut(CpuGuard::new()) }.with(|val: &mut u64| {
+            unative += 1;
+            *val += 1;
+            pr_info!("Unative: {}, *upcpu: {}\n", unative, *val);
+            assert!(unative == *val && unative == 1);
+
+            unative = unative.wrapping_add((-1i64) as u64);
+            *val = val.wrapping_add((-1i64) as u64);
+            pr_info!("Unative: {}, *upcpu: {}\n", unative, *val);
+            assert!(unative == *val && unative == 0);
+
+            unative = unative.wrapping_add((-1i64) as u64);
+            *val = val.wrapping_add((-1i64) as u64);
+            pr_info!("Unative: {}, *upcpu: {}\n", unative, *val);
+            assert!(unative == *val && unative == (-1i64) as u64);
+
+            unative = 0;
+            *val = 0;
+
+            unative = unative.wrapping_sub(1);
+            *val = val.wrapping_sub(1);
+            pr_info!("Unative: {}, *upcpu: {}\n", unative, *val);
+            assert!(unative == *val && unative == (-1i64) as u64);
+            assert!(unative == *val && unative == u64::MAX);
+        });
+
+        let mut checked_native: u64 = 0;
+        let checked: StaticPerCpu<RefCell<u64>> = get_static_per_cpu!(CHECKED);
+        checked.get(CpuGuard::new()).with(|val: &RefCell<u64>| {
+            checked_native += 1;
+            *val.borrow_mut() += 1;
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == 1);
+
+            checked_native = checked_native.wrapping_add((-1i64) as u64);
+            val.replace_with(|old: &mut u64| old.wrapping_add((-1i64) as u64));
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == 0);
+
+            checked_native = checked_native.wrapping_add((-1i64) as u64);
+            val.replace_with(|old: &mut u64| old.wrapping_add((-1i64) as u64));
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == (-1i64) as u64);
+
+            checked_native = 0;
+            *val.borrow_mut() = 0;
+
+            checked_native = checked_native.wrapping_sub(1);
+            val.replace_with(|old: &mut u64| old.wrapping_sub(1));
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == (-1i64) as u64);
+            assert!(checked_native == *val.borrow() && checked_native == u64::MAX);
+        });
+
+        pr_info!("rust static percpu test done\n");
+
+        pr_info!("rust dynamic percpu test start\n");
+        let mut test: DynamicPerCpu<u64> = DynamicPerCpu::new_zero(GFP_KERNEL).unwrap();
+
+        // SAFETY: No prerequisites for on_each_cpu.
+        unsafe {
+            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 0);
+            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 0);
+            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 0);
+            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 1);
+            on_each_cpu(Some(check_percpu_u64), (&raw mut test).cast(), 1);
+        }
+
+        let checked: DynamicPerCpu<RefCell<u64>> =
+            DynamicPerCpu::new_with(&RefCell::new(100), GFP_KERNEL).unwrap();
+
+        // SAFETY: No prerequisites for on_each_cpu.
+        unsafe {
+            on_each_cpu(
+                Some(inc_percpu_refcell_u64),
+                (&raw const checked) as *mut c_void,
+                0,
+            );
+            on_each_cpu(
+                Some(inc_percpu_refcell_u64),
+                (&raw const checked) as *mut c_void,
+                0,
+            );
+            on_each_cpu(
+                Some(inc_percpu_refcell_u64),
+                (&raw const checked) as *mut c_void,
+                0,
+            );
+            on_each_cpu(
+                Some(inc_percpu_refcell_u64),
+                (&raw const checked) as *mut c_void,
+                1,
+            );
+            on_each_cpu(
+                Some(check_percpu_refcell_u64),
+                (&raw const checked) as *mut c_void,
+                1,
+            );
+        }
+
+        checked.get(CpuGuard::new()).with(|val: &RefCell<u64>| {
+            assert!(*val.borrow() == 104);
+
+            let mut checked_native = 0;
+            *val.borrow_mut() = 0;
+
+            checked_native += 1;
+            *val.borrow_mut() += 1;
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == 1);
+
+            checked_native = checked_native.wrapping_add((-1i64) as u64);
+            val.replace_with(|old: &mut u64| old.wrapping_add((-1i64) as u64));
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == 0);
+
+            checked_native = checked_native.wrapping_add((-1i64) as u64);
+            val.replace_with(|old: &mut u64| old.wrapping_add((-1i64) as u64));
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == (-1i64) as u64);
+
+            checked_native = 0;
+            *val.borrow_mut() = 0;
+
+            checked_native = checked_native.wrapping_sub(1);
+            val.replace_with(|old: &mut u64| old.wrapping_sub(1));
+            pr_info!(
+                "Checked native: {}, *checked: {}\n",
+                checked_native,
+                *val.borrow()
+            );
+            assert!(checked_native == *val.borrow() && checked_native == (-1i64) as u64);
+            assert!(checked_native == *val.borrow() && checked_native == u64::MAX);
+        });
+
+        let arc = Arc::new(0, GFP_KERNEL).unwrap();
+        {
+            let _arc_pcpu: DynamicPerCpu<Arc<u64>> =
+                DynamicPerCpu::new_with(&arc, GFP_KERNEL).unwrap();
+        }
+        // `arc` should be unique, since all the clones on each CPU should be dropped when
+        // `_arc_pcpu` is dropped
+        assert!(Arc::into_unique_or_drop(arc).is_some());
+
+        pr_info!("rust dynamic percpu test done\n");
+
+        // Return Err to unload the module
+        Result::Err(EINVAL)
+    }
+}
+
+extern "C" fn inc_percpu_u64(info: *mut c_void) {
+    // SAFETY: We know that info is a void *const DynamicPerCpu<u64> and
+    // DynamicPerCpu<u64> is Send.
+    let mut pcpu = unsafe { (*(info as *const DynamicPerCpu<u64>)).clone() };
+    pr_info!("Incrementing on {}\n", CpuId::current().as_u32());
+
+    // SAFETY: We don't have multiple clones of pcpu in scope
+    unsafe { pcpu.get_mut(CpuGuard::new()) }.with(|val: &mut u64| *val += 1);
+}
+
+extern "C" fn check_percpu_u64(info: *mut c_void) {
+    // SAFETY: We know that info is a void *const DynamicPerCpu<u64> and
+    // DynamicPerCpu<u64> is Send.
+    let mut pcpu = unsafe { (*(info as *const DynamicPerCpu<u64>)).clone() };
+    pr_info!("Asserting on {}\n", CpuId::current().as_u32());
+
+    // SAFETY: We don't have multiple clones of pcpu in scope
+    unsafe { pcpu.get_mut(CpuGuard::new()) }.with(|val: &mut u64| assert!(*val == 4));
+}
+
+extern "C" fn inc_percpu_refcell_u64(info: *mut c_void) {
+    // SAFETY: We know that info is a void *const DynamicPerCpu<RefCell<u64>> and
+    // DynamicPerCpu<RefCell<u64>> is Send.
+    let pcpu = unsafe { (*(info as *const DynamicPerCpu<RefCell<u64>>)).clone() };
+    pr_info!("Incrementing on {}\n", CpuId::current().as_u32());
+
+    pcpu.get(CpuGuard::new()).with(|val: &RefCell<u64>| {
+        let mut val = val.borrow_mut();
+        *val += 1;
+    });
+}
+
+extern "C" fn check_percpu_refcell_u64(info: *mut c_void) {
+    // SAFETY: We know that info is a void *const DynamicPerCpu<RefCell<u64>> and
+    // DynamicPerCpu<RefCell<u64>> is Send.
+    let pcpu = unsafe { (*(info as *const DynamicPerCpu<RefCell<u64>>)).clone() };
+    pr_info!("Asserting on {}\n", CpuId::current().as_u32());
+
+    pcpu.get(CpuGuard::new()).with(|val: &RefCell<u64>| {
+        let val = val.borrow();
+        assert!(*val == 104);
+    });
+}

-- 
2.34.1
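For reviewers who want to exercise the sample, a possible build-and-load sequence is sketched below. This is not part of the patch: the toolchain flags and paths are illustrative assumptions for a tree with `CONFIG_RUST=y` and `CONFIG_SAMPLE_RUST_PERCPU=m`, and note that the module's `init` deliberately returns `Err(EINVAL)` so it unloads itself after the tests run, making `insmod` report "Invalid argument" even on success.

```shell
# Hypothetical usage sketch; assumes an LLVM/Rust-enabled kernel tree with
# CONFIG_SAMPLE_RUST_PERCPU=m already set in .config.
make LLVM=1 samples/rust/rust_percpu.ko

# On the target machine. insmod is expected to fail with -EINVAL by design
# (init returns Err(EINVAL) to unload the module once the tests pass).
sudo insmod samples/rust/rust_percpu.ko || true

# The pr_info! output from the tests lands in the kernel log.
sudo dmesg | grep 'rust.*percpu'
```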