From: Mitchell Levy <levymitchell0@gmail.com>
To: "Miguel Ojeda" <ojeda@kernel.org>,
"Alex Gaynor" <alex.gaynor@gmail.com>,
"Gary Guo" <gary@garyguo.net>,
"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
"Andreas Hindborg" <a.hindborg@kernel.org>,
"Alice Ryhl" <aliceryhl@google.com>,
"Trevor Gross" <tmgross@umich.edu>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Dennis Zhou" <dennis@kernel.org>, "Tejun Heo" <tj@kernel.org>,
"Christoph Lameter" <cl@linux.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"Benno Lossin" <lossin@kernel.org>,
"Yury Norov" <yury.norov@gmail.com>,
"Viresh Kumar" <viresh.kumar@linaro.org>,
"Boqun Feng" <boqun@kernel.org>
Cc: Tyler Hicks <code@tyhicks.com>,
Allen Pais <apais@linux.microsoft.com>,
linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
linux-mm@kvack.org, Mitchell Levy <levymitchell0@gmail.com>
Subject: [PATCH v5 1/8] rust: cpumask: Add a `Cpumask` iterator
Date: Fri, 10 Apr 2026 14:35:31 -0700
Message-ID: <20260410-rust-percpu-v5-1-4292380d7a41@gmail.com>
In-Reply-To: <20260410-rust-percpu-v5-0-4292380d7a41@gmail.com>

Add an iterator for `Cpumask`, making use of C's `cpumask_next`.

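
For illustration only, the iteration pattern is sketched below as a
self-contained userspace program. `MaskIter` and `NR_CPU_IDS` are
stand-ins (a plain `u64` replaces `struct cpumask`, and a constant
replaces `kernel::cpu::nr_cpu_ids()`); this is not kernel code, just a
model of the `cpumask_next`-style "find the next set bit after `n`"
iteration this patch wraps:

```rust
/// Stand-in for `kernel::cpu::nr_cpu_ids()` in this userspace sketch.
const NR_CPU_IDS: u32 = 64;

/// Userspace model of `CpumaskIter`: a `u64` stands in for `struct cpumask`.
struct MaskIter {
    mask: u64,
    /// `None` before the first call; otherwise the last bit returned.
    last: Option<u32>,
}

impl Iterator for MaskIter {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        // Mirror `cpumask_next(n, mask)`: scan for the first set bit
        // strictly after `last` (from bit 0 when `last` is `None`).
        let start = self.last.map_or(0, |l| l + 1);
        if start >= NR_CPU_IDS {
            return None;
        }
        match (start..NR_CPU_IDS).find(|&b| self.mask & (1u64 << b) != 0) {
            Some(b) => {
                self.last = Some(b);
                Some(b)
            }
            None => {
                // Exhausted: park `last` at the limit, as the kernel
                // iterator does with `nr_cpu_ids`.
                self.last = Some(NR_CPU_IDS);
                None
            }
        }
    }
}

fn main() {
    let it = MaskIter { mask: 0b1010_0101, last: None };
    let bits: Vec<u32> = it.collect();
    println!("{:?}", bits); // the set bits: [0, 2, 5, 7]
}
```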
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Mitchell Levy <levymitchell0@gmail.com>
---
rust/helpers/cpumask.c | 6 ++++++
rust/kernel/cpumask.rs | 51 +++++++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 56 insertions(+), 1 deletion(-)
diff --git a/rust/helpers/cpumask.c b/rust/helpers/cpumask.c
index 5deced5b975e..76990e14dfdd 100644
--- a/rust/helpers/cpumask.c
+++ b/rust/helpers/cpumask.c
@@ -50,6 +50,12 @@ bool rust_helper_cpumask_full(struct cpumask *srcp)
 	return cpumask_full(srcp);
 }
 
+__rust_helper
+unsigned int rust_helper_cpumask_next(int n, struct cpumask *srcp)
+{
+	return cpumask_next(n, srcp);
+}
+
 __rust_helper
 unsigned int rust_helper_cpumask_weight(struct cpumask *srcp)
 {
diff --git a/rust/kernel/cpumask.rs b/rust/kernel/cpumask.rs
index 44bb36636ee3..b74a3fccf4b4 100644
--- a/rust/kernel/cpumask.rs
+++ b/rust/kernel/cpumask.rs
@@ -14,7 +14,10 @@
 #[cfg(CONFIG_CPUMASK_OFFSTACK)]
 use core::ptr::{self, NonNull};
 
-use core::ops::{Deref, DerefMut};
+use core::{
+    iter::FusedIterator,
+    ops::{Deref, DerefMut},
+};
 
 /// A CPU Mask.
 ///
@@ -161,6 +164,52 @@ pub fn copy(&self, dstp: &mut Self) {
     }
 }
 
+/// Iterator for a `Cpumask`.
+pub struct CpumaskIter<'a> {
+    mask: &'a Cpumask,
+    /// [`None`] if no bits have been returned yet, or the index of the last bit returned by the
+    /// iterator. Equal to [`kernel::cpu::nr_cpu_ids()`] when the iterator has been exhausted.
+    last: Option<u32>,
+}
+
+impl<'a> CpumaskIter<'a> {
+    /// Creates a new `CpumaskIter` for the given `Cpumask`.
+    fn new(mask: &'a Cpumask) -> CpumaskIter<'a> {
+        Self { mask, last: None }
+    }
+}
+
+impl<'a> Iterator for CpumaskIter<'a> {
+    type Item = CpuId;
+
+    fn next(&mut self) -> Option<Self::Item> {
+        // cpumask_next WARNs if the first argument is >= nr_cpu_ids when CONFIG_DEBUG_PER_CPU_MAPS
+        // is set. So, early out in that case
+        if let Some(last) = self.last {
+            if last >= kernel::cpu::nr_cpu_ids() {
+                return None;
+            }
+        }
+
+        // SAFETY: By the type invariant, `self.mask.as_raw` is a `struct cpumask *`.
+        let next = unsafe {
+            bindings::cpumask_next(self.last.map_or(-1, |l| l as i32), self.mask.as_raw())
+        };
+
+        self.last = Some(next);
+        CpuId::from_u32(next)
+    }
+}
+
+impl<'a> FusedIterator for CpumaskIter<'a> {}
+
+impl Cpumask {
+    /// Returns an iterator over the set bits in the cpumask.
+    pub fn iter(&self) -> CpumaskIter<'_> {
+        CpumaskIter::new(self)
+    }
+}
+
 /// A CPU Mask pointer.
 ///
 /// Rust abstraction for the C `struct cpumask_var_t`.
--
2.34.1
Thread overview:
2026-04-10 21:35 [PATCH v5 0/8] rust: Add Per-CPU Variable API Mitchell Levy
2026-04-10 21:35 ` Mitchell Levy [this message]
2026-04-10 21:35 ` [PATCH v5 2/8] rust: cpumask: Add getters for globally defined cpumasks Mitchell Levy
2026-04-10 21:35 ` [PATCH v5 3/8] rust: percpu: Add C bindings for per-CPU variable API Mitchell Levy
2026-04-10 21:35 ` [PATCH v5 4/8] rust: percpu: introduce a rust API for static per-CPU variables Mitchell Levy
2026-04-10 21:35 ` [PATCH v5 5/8] rust: percpu: introduce a rust API for dynamic " Mitchell Levy
2026-04-10 21:35 ` [PATCH v5 6/8] rust: percpu: add a rust per-CPU variable sample Mitchell Levy
2026-04-10 21:35 ` [PATCH v5 7/8] rust: percpu: Add pin-hole optimizations for numerics Mitchell Levy
2026-04-10 21:35 ` [PATCH v5 8/8] rust: percpu: cache per-CPU pointers in the dynamic case Mitchell Levy