* [PATCH v10 1/5] rust: add bindings for bitmap.h
2025-06-02 13:36 [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
@ 2025-06-02 13:36 ` Burak Emir
2025-06-02 13:36 ` [PATCH v10 2/5] rust: add bindings for bitops.h Burak Emir
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Burak Emir @ 2025-06-02 13:36 UTC (permalink / raw)
To: Yury Norov, Kees Cook
Cc: Burak Emir, Rasmus Villemoes, Viresh Kumar, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
Makes the bitmap_copy_and_extend inline function available to Rust.
Adds an F: entry to the existing MAINTAINERS section BITMAP API BINDINGS [RUST].
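For illustration, a minimal sketch of how the binding could be called from
Rust abstraction code (the wrapper function and its parameters are made up
for this example; only `bindings::bitmap_copy_and_extend` is provided by
this patch):

```rust
use kernel::bindings;

/// Illustrative wrapper only, not part of this patch.
///
/// # Safety
///
/// `dst` and `src` must point to bitmaps holding at least `size` and
/// `count` bits, respectively.
unsafe fn copy_and_extend_sketch(dst: *mut usize, src: *const usize, count: u32, size: u32) {
    // SAFETY: guaranteed by the caller, see the safety requirements above.
    unsafe { bindings::bitmap_copy_and_extend(dst, src, count, size) };
}
```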
Suggested-by: Alice Ryhl <aliceryhl@google.com>
Suggested-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Burak Emir <bqe@google.com>
Acked-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
---
MAINTAINERS | 1 +
rust/bindings/bindings_helper.h | 1 +
rust/helpers/bitmap.c | 9 +++++++++
rust/helpers/helpers.c | 1 +
4 files changed, 12 insertions(+)
create mode 100644 rust/helpers/bitmap.c
diff --git a/MAINTAINERS b/MAINTAINERS
index d48dd6726fe6..86cae0ca5287 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4124,6 +4124,7 @@ F: tools/lib/find_bit.c
BITMAP API BINDINGS [RUST]
M: Yury Norov <yury.norov@gmail.com>
S: Maintained
+F: rust/helpers/bitmap.c
F: rust/helpers/cpumask.c
BITOPS API
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index ab37e1d35c70..b6bf3b039c1b 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -7,6 +7,7 @@
*/
#include <kunit/test.h>
+#include <linux/bitmap.h>
#include <linux/blk-mq.h>
#include <linux/blk_types.h>
#include <linux/blkdev.h>
diff --git a/rust/helpers/bitmap.c b/rust/helpers/bitmap.c
new file mode 100644
index 000000000000..a50e2f082e47
--- /dev/null
+++ b/rust/helpers/bitmap.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bitmap.h>
+
+void rust_helper_bitmap_copy_and_extend(unsigned long *to, const unsigned long *from,
+ unsigned int count, unsigned int size)
+{
+ bitmap_copy_and_extend(to, from, count, size);
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 1e7c84df7252..92721d165e35 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,7 @@
* Sorted alphabetically.
*/
+#include "bitmap.c"
#include "blk.c"
#include "bug.c"
#include "build_assert.c"
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v10 2/5] rust: add bindings for bitops.h
2025-06-02 13:36 [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
2025-06-02 13:36 ` [PATCH v10 1/5] rust: add bindings for bitmap.h Burak Emir
@ 2025-06-02 13:36 ` Burak Emir
2025-06-02 13:36 ` [PATCH v10 3/5] rust: add bitmap API Burak Emir
` (3 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Burak Emir @ 2025-06-02 13:36 UTC (permalink / raw)
To: Yury Norov, Kees Cook
Cc: Burak Emir, Rasmus Villemoes, Viresh Kumar, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
Makes the atomic set_bit and clear_bit inline functions, as well as the
non-atomic variants __set_bit and __clear_bit, available to Rust.
Adds a new MAINTAINERS section BITOPS API BINDINGS [RUST].
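As a rough sketch, this is the kind of call the bindings enable on the
Rust side (the function and the `word` variable are illustrative; only the
`bindings::*` functions come from this patch):

```rust
use kernel::bindings;

// Illustrative only: `word` stands in for bitmap storage owned elsewhere.
fn bitops_sketch() {
    let mut word: usize = 0;
    let addr: *mut usize = &mut word;

    // SAFETY: bit 3 lies within the single `unsigned long` at `addr`, and
    // we have exclusive access, so the non-atomic variant is fine.
    unsafe { bindings::__set_bit(3, addr) };

    // SAFETY: bit 3 is in bounds; the atomic variant may be used even
    // under concurrent access to the same word.
    unsafe { bindings::clear_bit(3, addr) };
}
```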
Suggested-by: Alice Ryhl <aliceryhl@google.com>
Suggested-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Burak Emir <bqe@google.com>
Acked-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
---
MAINTAINERS | 5 +++++
rust/helpers/bitops.c | 23 +++++++++++++++++++++++
rust/helpers/helpers.c | 1 +
3 files changed, 29 insertions(+)
create mode 100644 rust/helpers/bitops.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 86cae0ca5287..04d6727e944c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4141,6 +4141,11 @@ F: include/linux/bitops.h
F: lib/test_bitops.c
F: tools/*/bitops*
+BITOPS API BINDINGS [RUST]
+M: Yury Norov <yury.norov@gmail.com>
+S: Maintained
+F: rust/helpers/bitops.c
+
BLINKM RGB LED DRIVER
M: Jan-Simon Moeller <jansimon.moeller@gmx.de>
S: Maintained
diff --git a/rust/helpers/bitops.c b/rust/helpers/bitops.c
new file mode 100644
index 000000000000..5d0861d29d3f
--- /dev/null
+++ b/rust/helpers/bitops.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bitops.h>
+
+void rust_helper___set_bit(unsigned long nr, unsigned long *addr)
+{
+ __set_bit(nr, addr);
+}
+
+void rust_helper___clear_bit(unsigned long nr, unsigned long *addr)
+{
+ __clear_bit(nr, addr);
+}
+
+void rust_helper_set_bit(unsigned long nr, volatile unsigned long *addr)
+{
+ set_bit(nr, addr);
+}
+
+void rust_helper_clear_bit(unsigned long nr, volatile unsigned long *addr)
+{
+ clear_bit(nr, addr);
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 92721d165e35..4de8ac390241 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -8,6 +8,7 @@
*/
#include "bitmap.c"
+#include "bitops.c"
#include "blk.c"
#include "bug.c"
#include "build_assert.c"
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v10 3/5] rust: add bitmap API.
2025-06-02 13:36 [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
2025-06-02 13:36 ` [PATCH v10 1/5] rust: add bindings for bitmap.h Burak Emir
2025-06-02 13:36 ` [PATCH v10 2/5] rust: add bindings for bitops.h Burak Emir
@ 2025-06-02 13:36 ` Burak Emir
2025-06-02 13:36 ` [PATCH v10 4/5] rust: add find_bit_benchmark_rust module Burak Emir
` (2 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Burak Emir @ 2025-06-02 13:36 UTC (permalink / raw)
To: Yury Norov, Kees Cook
Cc: Burak Emir, Rasmus Villemoes, Viresh Kumar, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
Provides an abstraction for the C bitmap API and bitops operations.
This commit enables a Rust implementation of an Android Binder
data structure from commit 15d9da3f818c ("binder: use bitmap for faster
descriptor lookup"), which can be found in drivers/android/dbitmap.h.
It is a step towards upstreaming the Rust port of Android Binder driver.
We follow the C Bitmap API closely in naming and semantics, with
a few differences that take advantage of Rust language facilities
and idioms:
* We leverage Rust type system guarantees as follows:
* all (non-atomic) mutating operations require a &mut reference which
amounts to exclusive access.
* the Bitmap type implements Send. This enables transferring
ownership between threads and is needed for Binder.
* the Bitmap type implements Sync, which enables passing shared
references &Bitmap between threads. Atomic operations can be
used to safely modify from multiple threads (interior
mutability), though without ordering guarantees.
* The Rust API uses `{set,clear}_bit` vs `{set,clear}_bit_atomic` as
names, which differs from the C naming convention, where set_bit is
the atomic variant and __set_bit the non-atomic one.
* We include enough operations for the API to be useful, but not all
operations are exposed yet, in order to avoid dead code. The missing
ones can be added later.
* We follow the C API closely with a fine-grained approach to safety:
* Low-level bit-ops get a safe API with bounds checks. Calling
{set,clear}_bit with an out-of-bounds argument becomes a no-op and
is logged as an error.
* We introduce a RUST_BITMAP_HARDENED config, which
causes invocations with out-of-bounds arguments to panic.
* Methods corresponding to the find_* C functions tolerate
out-of-bounds arguments, since the C implementation does. Here too,
out-of-bounds arguments are logged as errors and panic in
RUST_BITMAP_HARDENED mode.
* We add a way to "borrow" bitmaps from C in Rust, so that bitmaps
allocated in C are directly usable in Rust code (`CBitmap`).
* The Rust API is optimized to represent the bitmap inline if it
would fit into a pointer. This saves an allocation, which is
relevant in the Binder use case.
The underlying C bitmap is *not* exposed, and must never be exposed
(except in tests). Exposing the representation of the owned bitmap would
lose static guarantees.
An alternative route of vendoring an existing Rust bitmap package was
considered but deemed suboptimal overall. Reusing the C implementation
is preferable for a basic data structure like bitmaps. It keeps Rust
code a lot more similar and predictable with respect to C code that
uses the same data structures, and it reuses code that has been
tried and tested in the kernel, with the same performance
characteristics whenever possible.
We use the `usize` type for sizes of and indices into the bitmap,
because Rust generally uses that type for indices and lengths, and it
is more convenient if the API accepts it. This means that we need to
perform some casts to/from u32 and usize, since the C headers use
unsigned int instead of size_t/unsigned long for these numbers in
some places.
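For illustration, the cast pattern this leads to looks roughly as follows
(a sketch mirroring what `Bitmap::new` in this patch does; the helper name
is made up):

```rust
use kernel::alloc::AllocError;

// Sketch only: converts a Rust-side `usize` bit count into the
// `unsigned int` expected by the C bitmap functions.
fn nbits_to_c(nbits: usize) -> Result<u32, AllocError> {
    // The C prototypes take `unsigned int`, so sizes above `i32::MAX`
    // are rejected up front rather than silently truncated.
    if nbits > i32::MAX as usize {
        return Err(AllocError);
    }
    Ok(u32::try_from(nbits).unwrap())
}
```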
Adds a new MAINTAINERS section BITMAP API [RUST].
Suggested-by: Alice Ryhl <aliceryhl@google.com>
Suggested-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Burak Emir <bqe@google.com>
---
MAINTAINERS | 7 +
rust/kernel/bitmap.rs | 574 +++++++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 1 +
security/Kconfig.hardening | 10 +
4 files changed, 592 insertions(+)
create mode 100644 rust/kernel/bitmap.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 04d6727e944c..565eaa015d9e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4127,6 +4127,13 @@ S: Maintained
F: rust/helpers/bitmap.c
F: rust/helpers/cpumask.c
+BITMAP API [RUST]
+M: Alice Ryhl <aliceryhl@google.com>
+M: Burak Emir <bqe@google.com>
+R: Yury Norov <yury.norov@gmail.com>
+S: Maintained
+F: rust/kernel/bitmap.rs
+
BITOPS API
M: Yury Norov <yury.norov@gmail.com>
R: Rasmus Villemoes <linux@rasmusvillemoes.dk>
diff --git a/rust/kernel/bitmap.rs b/rust/kernel/bitmap.rs
new file mode 100644
index 000000000000..28c11e400d1e
--- /dev/null
+++ b/rust/kernel/bitmap.rs
@@ -0,0 +1,574 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2025 Google LLC.
+
+//! Rust API for bitmap.
+//!
+//! C headers: [`include/linux/bitmap.h`](srctree/include/linux/bitmap.h).
+
+use crate::alloc::{AllocError, Flags};
+use crate::bindings;
+use crate::pr_err;
+use core::ptr::NonNull;
+
+/// Represents a C bitmap. Wraps underlying C bitmap API.
+///
+/// # Invariants
+///
+/// Must reference a `[c_ulong]` long enough to fit `data.len()` bits.
+#[cfg_attr(CONFIG_64BIT, repr(align(8)))]
+#[cfg_attr(not(CONFIG_64BIT), repr(align(4)))]
+pub struct CBitmap {
+ data: [()],
+}
+
+/// SAFETY: All methods that take immutable references are either atomic or read-only.
+unsafe impl Sync for CBitmap {}
+
+impl CBitmap {
+ /// Borrows a C bitmap.
+ ///
+ /// # Safety
+ ///
+ /// * `ptr` holds a non-null address of an initialized array of `unsigned long`
+ /// that is large enough to hold `nbits` bits.
+ /// * the array must not be freed for the lifetime of this [`CBitmap`]
+ /// * concurrent access only happens through atomic operations
+ pub unsafe fn from_raw<'a>(ptr: *const usize, nbits: usize) -> &'a CBitmap {
+ let data: *const [()] = core::ptr::slice_from_raw_parts(ptr.cast(), nbits);
+ // INVARIANT: `data` references an initialized array that can hold `nbits` bits.
+ // SAFETY:
+ // The caller guarantees that `data` (derived from `ptr` and `nbits`)
+ // points to a valid, initialized, and appropriately sized memory region
+ // that will not be freed for the lifetime 'a.
+ // We are casting `*const [()]` to `*const CBitmap`. `CBitmap` is a
+ // dynamically sized type whose only field is `data: [()]`, so its
+ // layout is compatible with a slice of `()`: it occupies zero bytes
+ // and carries its length in the fat-pointer metadata.
+ // `slice_from_raw_parts` encodes the length (the number of bits, not
+ // the number of words) into that metadata, so dereferencing this
+ // pointer as `&CBitmap` is sound given the caller's guarantees.
+ unsafe { &*(data as *const CBitmap) }
+ }
+
+ /// Borrows a C bitmap exclusively.
+ ///
+ /// # Safety
+ ///
+ /// * `ptr` holds a non-null address of an initialized array of `unsigned long`
+ /// that is large enough to hold `nbits` bits.
+ /// * the array must not be freed for the lifetime of this [`CBitmap`]
+ /// * no concurrent access may happen.
+ pub unsafe fn from_raw_mut<'a>(ptr: *mut usize, nbits: usize) -> &'a mut CBitmap {
+ let data: *mut [()] = core::ptr::slice_from_raw_parts_mut(ptr.cast(), nbits);
+ // INVARIANT: `data` references an initialized array that can hold `nbits` bits.
+ // SAFETY:
+ // The caller guarantees that `data` (derived from `ptr` and `nbits`)
+ // points to a valid, initialized, and appropriately sized memory region
+ // that will not be freed for the lifetime 'a.
+ // Furthermore, the caller guarantees no concurrent access will happen,
+ // which upholds the exclusivity requirement for a mutable reference.
+ // Similar to `from_raw`, casting `*mut [()]` to `*mut CBitmap` is
+ // sound because `CBitmap` is a dynamically sized type whose only
+ // field is `data: [()]`, so its layout is compatible with a slice of `()`.
+ unsafe { &mut *(data as *mut CBitmap) }
+ }
+
+ /// Returns a raw pointer to the backing bitmap storage.
+ pub fn as_ptr(&self) -> *const usize {
+ self as *const CBitmap as *const usize
+ }
+
+ /// Returns a mutable raw pointer to the backing bitmap storage.
+ pub fn as_mut_ptr(&mut self) -> *mut usize {
+ self as *mut CBitmap as *mut usize
+ }
+
+ /// Returns length of this [`CBitmap`].
+ #[allow(clippy::len_without_is_empty)]
+ pub fn len(&self) -> usize {
+ self.data.len()
+ }
+}
+
+/// Holds either a pointer to an array of `unsigned long` or a small inline bitmap.
+#[repr(C)]
+union BitmapRepr {
+ bitmap: usize,
+ ptr: NonNull<usize>,
+}
+
+macro_rules! bitmap_assert {
+ ($cond:expr, $($arg:tt)+) => {
+ #[cfg(CONFIG_RUST_BITMAP_HARDENED)]
+ assert!($cond, $($arg)*);
+ }
+}
+
+macro_rules! bitmap_assert_return {
+ ($cond:expr, $($arg:tt)+) => {
+ #[cfg(CONFIG_RUST_BITMAP_HARDENED)]
+ assert!($cond, $($arg)*);
+
+ #[cfg(not(CONFIG_RUST_BITMAP_HARDENED))]
+ if !($cond) {
+ pr_err!($($arg)*);
+ return
+ }
+ }
+}
+
+/// Represents an owned bitmap.
+///
+/// Wraps underlying C bitmap API. See [`CBitmap`] for available
+/// methods.
+///
+/// # Examples
+///
+/// Basic usage
+///
+/// ```
+/// use kernel::alloc::flags::GFP_KERNEL;
+/// use kernel::bitmap::Bitmap;
+///
+/// let mut b = Bitmap::new(16, GFP_KERNEL)?;
+///
+/// assert_eq!(16, b.len());
+/// for i in 0..16 {
+/// if i % 4 == 0 {
+/// b.set_bit(i);
+/// }
+/// }
+/// assert_eq!(Some(0), b.next_bit(0));
+/// assert_eq!(Some(1), b.next_zero_bit(0));
+/// assert_eq!(Some(4), b.next_bit(1));
+/// assert_eq!(Some(5), b.next_zero_bit(4));
+/// assert_eq!(Some(12), b.last_bit());
+/// # Ok::<(), Error>(())
+/// ```
+///
+/// # Invariants
+///
+/// * `nbits` is `<= i32::MAX` and never changes.
+/// * if `nbits <= bindings::BITS_PER_LONG`, then `repr` is a `usize`.
+/// * otherwise, `repr` holds a non-null pointer to an initialized
+/// array of `unsigned long` that is large enough to hold `nbits` bits.
+pub struct Bitmap {
+ /// Representation of bitmap.
+ repr: BitmapRepr,
+ /// Length of this bitmap. Must be `<= i32::MAX`.
+ nbits: usize,
+}
+
+impl core::ops::Deref for Bitmap {
+ type Target = CBitmap;
+
+ fn deref(&self) -> &CBitmap {
+ let ptr = if self.nbits <= bindings::BITS_PER_LONG as _ {
+ // SAFETY: Bitmap is represented inline.
+ unsafe { core::ptr::addr_of!(self.repr.bitmap) }
+ } else {
+ // SAFETY: Bitmap is represented as array of `unsigned long`.
+ unsafe { self.repr.ptr.as_ptr() }
+ };
+
+ // SAFETY: We got the right pointer and invariants of [`Bitmap`] hold.
+ // An inline bitmap is treated like an array with single element.
+ unsafe { CBitmap::from_raw(ptr, self.nbits) }
+ }
+}
+
+impl core::ops::DerefMut for Bitmap {
+ fn deref_mut(&mut self) -> &mut CBitmap {
+ let ptr = if self.nbits <= bindings::BITS_PER_LONG as _ {
+ // SAFETY: Bitmap is represented inline.
+ unsafe { core::ptr::addr_of_mut!(self.repr.bitmap) }
+ } else {
+ // SAFETY: Bitmap is represented as array of `unsigned long`.
+ unsafe { self.repr.ptr.as_mut() }
+ };
+
+ // SAFETY: We got the right pointer and invariants of [`Bitmap`] hold.
+ // An inline bitmap is treated like an array with single element.
+ unsafe { CBitmap::from_raw_mut(ptr, self.nbits) }
+ }
+}
+
+/// Enable ownership transfer to other threads.
+///
+/// SAFETY: We own the underlying bitmap representation.
+unsafe impl Send for Bitmap {}
+
+/// Enable unsynchronized concurrent access to [`Bitmap`] through shared references.
+///
+/// SAFETY: `deref()` will return a reference to a [`CBitmap`] which is Sync. Its methods
+/// that take immutable references are either atomic or read-only.
+unsafe impl Sync for Bitmap {}
+
+impl Drop for Bitmap {
+ fn drop(&mut self) {
+ if self.nbits <= bindings::BITS_PER_LONG as _ {
+ return;
+ }
+ // SAFETY: `self.repr.ptr` was returned by the C `bitmap_zalloc`.
+ //
+ // INVARIANT: there is no other use of `self.repr.ptr` after this
+ // call and the value is being dropped so the broken invariant is
+ // not observable on function exit.
+ unsafe { bindings::bitmap_free(self.repr.ptr.as_ptr()) };
+ }
+}
+
+impl Bitmap {
+ /// Constructs a new [`Bitmap`].
+ ///
+ /// Fails with [`AllocError`] when the [`Bitmap`] could not be allocated. This
+ /// includes the case when `nbits` is greater than `i32::MAX`.
+ #[inline]
+ pub fn new(nbits: usize, flags: Flags) -> Result<Self, AllocError> {
+ if nbits <= bindings::BITS_PER_LONG as _ {
+ return Ok(Bitmap {
+ repr: BitmapRepr { bitmap: 0 },
+ nbits,
+ });
+ }
+ if nbits > i32::MAX.try_into().unwrap() {
+ return Err(AllocError);
+ }
+ let nbits_u32 = u32::try_from(nbits).unwrap();
+ // SAFETY: `bindings::BITS_PER_LONG < nbits` and `nbits <= i32::MAX`.
+ let ptr = unsafe { bindings::bitmap_zalloc(nbits_u32, flags.as_raw()) };
+ let ptr = NonNull::new(ptr).ok_or(AllocError)?;
+ // INVARIANT: `ptr` returned by C `bitmap_zalloc` and `nbits` checked.
+ Ok(Bitmap {
+ repr: BitmapRepr { ptr },
+ nbits,
+ })
+ }
+
+ /// Returns length of this [`Bitmap`].
+ #[allow(clippy::len_without_is_empty)]
+ #[inline]
+ pub fn len(&self) -> usize {
+ self.nbits
+ }
+}
+
+impl CBitmap {
+ /// Set bit with index `index`.
+ ///
+ /// ATTENTION: `set_bit` is non-atomic, which differs from the naming
+ /// convention in C code. The corresponding C function is `__set_bit`.
+ ///
+ /// If RUST_BITMAP_HARDENED is not enabled and `index` is greater than
+ /// or equal to `self.len()`, does nothing.
+ ///
+ /// # Panics
+ ///
+ /// Panics if RUST_BITMAP_HARDENED is enabled and `index` is greater than
+ /// or equal to `self.len()`.
+ #[inline]
+ pub fn set_bit(&mut self, index: usize) {
+ bitmap_assert_return!(
+ index < self.len(),
+ "Bit `index` must be < {}, was {}",
+ self.len(),
+ index
+ );
+ // SAFETY: Bit `index` is within bounds.
+ unsafe { bindings::__set_bit(index, self.as_mut_ptr()) };
+ }
+
+ /// Set bit with index `index`, atomically.
+ ///
+ /// This is a relaxed atomic operation (no implied memory barriers).
+ ///
+ /// ATTENTION: The naming convention differs from C, where the corresponding
+ /// function is called `set_bit`.
+ ///
+ /// If RUST_BITMAP_HARDENED is not enabled and `index` is greater than
+ /// or equal to `self.len()`, does nothing.
+ ///
+ /// # Panics
+ ///
+ /// Panics if RUST_BITMAP_HARDENED is enabled and `index` is greater than
+ /// or equal to `self.len()`.
+ #[inline]
+ pub fn set_bit_atomic(&self, index: usize) {
+ bitmap_assert_return!(
+ index < self.len(),
+ "Bit `index` must be < {}, was {}",
+ self.len(),
+ index
+ );
+ // SAFETY: `index` is within bounds and the caller has ensured that
+ // there is no mix of non-atomic and atomic operations.
+ unsafe { bindings::set_bit(index, self.as_ptr() as *mut usize) };
+ }
+
+ /// Clear `index` bit.
+ ///
+ /// ATTENTION: `clear_bit` is non-atomic, which differs from the naming
+ /// convention in C code. The corresponding C function is `__clear_bit`.
+ ///
+ /// If RUST_BITMAP_HARDENED is not enabled and `index` is greater than
+ /// or equal to `self.len()`, does nothing.
+ ///
+ /// # Panics
+ ///
+ /// Panics if RUST_BITMAP_HARDENED is enabled and `index` is greater than
+ /// or equal to `self.len()`.
+ #[inline]
+ pub fn clear_bit(&mut self, index: usize) {
+ bitmap_assert_return!(
+ index < self.len(),
+ "Bit `index` must be < {}, was {}",
+ self.len(),
+ index
+ );
+ // SAFETY: `index` is within bounds.
+ unsafe { bindings::__clear_bit(index, self.as_mut_ptr()) };
+ }
+
+ /// Clear `index` bit, atomically.
+ ///
+ /// This is a relaxed atomic operation (no implied memory barriers).
+ ///
+ /// ATTENTION: The naming convention differs from C, where the corresponding
+ /// function is called `clear_bit`.
+ ///
+ /// If RUST_BITMAP_HARDENED is not enabled and `index` is greater than
+ /// or equal to `self.len()`, does nothing.
+ ///
+ /// # Panics
+ ///
+ /// Panics if RUST_BITMAP_HARDENED is enabled and `index` is greater than
+ /// or equal to `self.len()`.
+ #[inline]
+ pub fn clear_bit_atomic(&self, index: usize) {
+ bitmap_assert_return!(
+ index < self.len(),
+ "Bit `index` must be < {}, was {}",
+ self.len(),
+ index
+ );
+ // SAFETY: `index` is within bounds and the caller has ensured that
+ // there is no mix of non-atomic and atomic operations.
+ unsafe { bindings::clear_bit(index, self.as_ptr() as *mut usize) };
+ }
+
+ /// Copy `src` into this bitmap and set any remaining bits to zero.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
+ /// use kernel::bitmap::Bitmap;
+ ///
+ /// let mut long_bitmap = Bitmap::new(256, GFP_KERNEL)?;
+ ///
+ /// assert_eq!(None, long_bitmap.last_bit());
+ ///
+ /// let mut short_bitmap = Bitmap::new(16, GFP_KERNEL)?;
+ ///
+ /// short_bitmap.set_bit(7);
+ /// long_bitmap.copy_and_extend(&short_bitmap);
+ /// assert_eq!(Some(7), long_bitmap.last_bit());
+ ///
+ /// # Ok::<(), AllocError>(())
+ /// ```
+ #[inline]
+ pub fn copy_and_extend(&mut self, src: &Bitmap) {
+ let len = core::cmp::min(src.nbits, self.len());
+ // SAFETY: access to `self` and `src` is within bounds.
+ unsafe {
+ bindings::bitmap_copy_and_extend(
+ self.as_mut_ptr(),
+ src.as_ptr(),
+ len as u32,
+ self.len() as u32,
+ )
+ };
+ }
+
+ /// Finds last set bit.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
+ /// use kernel::bitmap::Bitmap;
+ ///
+ /// let bitmap = Bitmap::new(64, GFP_KERNEL)?;
+ ///
+ /// match bitmap.last_bit() {
+ /// Some(idx) => {
+ /// pr_info!("The last bit has index {idx}.\n");
+ /// }
+ /// None => {
+ /// pr_info!("All bits in this bitmap are 0.\n");
+ /// }
+ /// }
+ /// # Ok::<(), AllocError>(())
+ /// ```
+ #[inline]
+ pub fn last_bit(&self) -> Option<usize> {
+ // SAFETY: `_find_last_bit` access is within bounds due to the invariant.
+ let index = unsafe { bindings::_find_last_bit(self.as_ptr(), self.len()) };
+ if index >= self.len() {
+ None
+ } else {
+ Some(index)
+ }
+ }
+
+ /// Finds next set bit, starting from `start`.
+ /// Returns `None` if `start` is greater than or equal to `self.len()`.
+ #[inline]
+ pub fn next_bit(&self, start: usize) -> Option<usize> {
+ bitmap_assert!(
+ start < self.len(),
+ "`start` must be < {} was {}",
+ self.len(),
+ start
+ );
+ // SAFETY: `_find_next_bit` tolerates out-of-bounds arguments and returns a
+ // value larger than or equal to `self.len()` in that case.
+ let index = unsafe { bindings::_find_next_bit(self.as_ptr(), self.len(), start) };
+ if index >= self.len() {
+ None
+ } else {
+ Some(index)
+ }
+ }
+
+ /// Finds next zero bit, starting from `start`.
+ /// Returns `None` if `start` is greater than or equal to `self.len()`.
+ #[inline]
+ pub fn next_zero_bit(&self, start: usize) -> Option<usize> {
+ bitmap_assert!(
+ start < self.len(),
+ "`start` must be < {} was {}",
+ self.len(),
+ start
+ );
+ // SAFETY: `_find_next_zero_bit` tolerates out-of-bounds arguments and returns a
+ // value larger than or equal to `self.len()` in that case.
+ let index = unsafe { bindings::_find_next_zero_bit(self.as_ptr(), self.len(), start) };
+ if index >= self.len() {
+ None
+ } else {
+ Some(index)
+ }
+ }
+}
+
+use macros::kunit_tests;
+
+#[kunit_tests(rust_kernel_bitmap)]
+mod tests {
+ use super::*;
+ use kernel::alloc::flags::GFP_KERNEL;
+
+ #[test]
+ fn cbitmap_borrow() {
+ let fake_c_bitmap: [usize; 2] = [0, 0];
+ // SAFETY: `fake_c_bitmap` is an array of expected length.
+ let b = unsafe {
+ CBitmap::from_raw(
+ core::ptr::addr_of!(fake_c_bitmap) as *const usize,
+ 2 * bindings::BITS_PER_LONG as usize,
+ )
+ };
+ assert_eq!(2 * bindings::BITS_PER_LONG as usize, b.len());
+ assert_eq!(None, b.next_bit(0));
+ }
+
+ #[test]
+ fn cbitmap_copy() {
+ let fake_c_bitmap: usize = 0xFF;
+ // SAFETY: `fake_c_bitmap` can be used as one-element array of expected length.
+ let b = unsafe { CBitmap::from_raw(core::ptr::addr_of!(fake_c_bitmap), 8) };
+ assert_eq!(8, b.len());
+ assert_eq!(None, b.next_zero_bit(0));
+ }
+
+ #[test]
+ fn bitmap_new() {
+ let b = Bitmap::new(0, GFP_KERNEL).unwrap();
+ assert_eq!(0, b.len());
+
+ let b = Bitmap::new(3, GFP_KERNEL).unwrap();
+ assert_eq!(3, b.len());
+
+ let b = Bitmap::new(1024, GFP_KERNEL).unwrap();
+ assert_eq!(1024, b.len());
+
+ // Requesting too large values results in [`AllocError`].
+ let b = Bitmap::new(1 << 31, GFP_KERNEL);
+ assert!(b.is_err());
+ }
+
+ #[test]
+ fn bitmap_set_clear_find() {
+ let mut b = Bitmap::new(128, GFP_KERNEL).unwrap();
+
+ // Zero-initialized
+ assert_eq!(None, b.next_bit(0));
+ assert_eq!(Some(0), b.next_zero_bit(0));
+ assert_eq!(None, b.last_bit());
+
+ b.set_bit(17);
+
+ assert_eq!(Some(17), b.next_bit(0));
+ assert_eq!(Some(17), b.next_bit(17));
+ assert_eq!(None, b.next_bit(18));
+ assert_eq!(Some(17), b.last_bit());
+
+ b.set_bit(107);
+
+ assert_eq!(Some(17), b.next_bit(0));
+ assert_eq!(Some(17), b.next_bit(17));
+ assert_eq!(Some(107), b.next_bit(18));
+ assert_eq!(Some(107), b.last_bit());
+
+ b.clear_bit(17);
+
+ assert_eq!(Some(107), b.next_bit(0));
+ assert_eq!(Some(107), b.last_bit());
+ }
+
+ #[cfg(not(CONFIG_RUST_BITMAP_HARDENED))]
+ #[test]
+ fn bitmap_out_of_bounds() {
+ let mut b = Bitmap::new(128, GFP_KERNEL).unwrap();
+
+ b.set_bit(2048);
+ b.set_bit_atomic(2048);
+ b.clear_bit(2048);
+ b.clear_bit_atomic(2048);
+ assert_eq!(None, b.next_bit(2048));
+ assert_eq!(None, b.next_zero_bit(2048));
+ assert_eq!(None, b.last_bit());
+ }
+
+ // TODO: add a test for RUST_BITMAP_HARDENED once KUnit supports this type of config.
+
+ #[test]
+ fn bitmap_copy_and_extend() {
+ let mut long_bitmap = Bitmap::new(256, GFP_KERNEL).unwrap();
+
+ long_bitmap.set_bit(3);
+ long_bitmap.set_bit(200);
+
+ let mut short_bitmap = Bitmap::new(32, GFP_KERNEL).unwrap();
+
+ short_bitmap.set_bit(17);
+
+ long_bitmap.copy_and_extend(&short_bitmap);
+
+ // Previous bits have been cleared.
+ assert_eq!(Some(17), long_bitmap.next_bit(0));
+ assert_eq!(Some(17), long_bitmap.last_bit());
+ }
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index de07aadd1ff5..8c4161cd82ac 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -38,6 +38,7 @@
pub use ffi;
pub mod alloc;
+pub mod bitmap;
#[cfg(CONFIG_BLOCK)]
pub mod block;
#[doc(hidden)]
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 3fe9d7b945c4..3ca3c7dc4381 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -324,6 +324,16 @@ config LIST_HARDENED
If unsure, say N.
+config RUST_BITMAP_HARDENED
+ bool "Check bounds of operations in the Rust bitmap API"
+ depends on RUST
+ help
+ Enables additional assertions in the Rust Bitmap API to catch
+ out-of-bounds arguments, which are not guaranteed to result in an
+ immediate access fault.
+
+ If unsure, say N.
+
config BUG_ON_DATA_CORRUPTION
bool "Trigger a BUG when data corruption is detected"
select LIST_HARDENED
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v10 4/5] rust: add find_bit_benchmark_rust module.
2025-06-02 13:36 [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
` (2 preceding siblings ...)
2025-06-02 13:36 ` [PATCH v10 3/5] rust: add bitmap API Burak Emir
@ 2025-06-02 13:36 ` Burak Emir
2025-06-02 14:31 ` Yury Norov
2025-06-02 13:36 ` [PATCH v10 5/5] rust: add dynamic ID pool abstraction for bitmap Burak Emir
2025-06-02 13:48 ` [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
5 siblings, 1 reply; 11+ messages in thread
From: Burak Emir @ 2025-06-02 13:36 UTC (permalink / raw)
To: Yury Norov, Kees Cook
Cc: Burak Emir, Rasmus Villemoes, Viresh Kumar, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
Adds a microbenchmark protected by the FIND_BIT_BENCHMARK_RUST config,
following `find_bit_benchmark.c` but testing the Rust Bitmap API.
We add a fill_random() method protected by the config in order to
maintain the abstraction.
Sample output from the benchmark, for both the C and Rust versions:
find_bit_benchmark.c output:
```
Start testing find_bit() with random-filled bitmap
[ 438.101937] find_next_bit: 860188 ns, 163419 iterations
[ 438.109471] find_next_zero_bit: 912342 ns, 164262 iterations
[ 438.116820] find_last_bit: 726003 ns, 163419 iterations
[ 438.130509] find_nth_bit: 7056993 ns, 16269 iterations
[ 438.139099] find_first_bit: 1963272 ns, 16270 iterations
[ 438.173043] find_first_and_bit: 27314224 ns, 32654 iterations
[ 438.180065] find_next_and_bit: 398752 ns, 73705 iterations
[ 438.186689]
Start testing find_bit() with sparse bitmap
[ 438.193375] find_next_bit: 9675 ns, 656 iterations
[ 438.201765] find_next_zero_bit: 1766136 ns, 327025 iterations
[ 438.208429] find_last_bit: 9017 ns, 656 iterations
[ 438.217816] find_nth_bit: 2749742 ns, 655 iterations
[ 438.225168] find_first_bit: 721799 ns, 656 iterations
[ 438.231797] find_first_and_bit: 2819 ns, 1 iterations
[ 438.238441] find_next_and_bit: 3159 ns, 1 iterations
```
find_bit_benchmark_rust.rs output:
```
[ 451.182459] find_bit_benchmark_rust_module:
[ 451.186688] Start testing find_bit() Rust with random-filled bitmap
[ 451.194450] next_bit: 777950 ns, 163644 iterations
[ 451.201997] next_zero_bit: 918889 ns, 164036 iterations
[ 451.208642] Start testing find_bit() Rust with sparse bitmap
[ 451.214300] next_bit: 9181 ns, 654 iterations
[ 451.222806] next_zero_bit: 1855504 ns, 327026 iterations
```
Here are the results from 32 samples, with 95% confidence interval.
The microbenchmark was built with RUST_BITMAP_HARDENED=n and run on a
machine that did not execute other processes.
Random-filled bitmap:
+-----------+-------+-----------+--------------+-----------+-----------+
| Benchmark | Lang | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_bit/ | C | 825.07 | 53.89 | 806.40 | 843.74 |
| next_bit | Rust | 870.91 | 46.29 | 854.88 | 886.95 |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_zero/| C | 933.56 | 56.34 | 914.04 | 953.08 |
| next_zero | Rust | 945.85 | 60.44 | 924.91 | 966.79 |
+-----------+-------+-----------+--------------+-----------+-----------+
Rust appears 5.5% slower for next_bit, 1.3% slower for next_zero.
Sparse bitmap:
+-----------+-------+-----------+--------------+-----------+-----------+
| Benchmark | Lang | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_bit/ | C | 13.17 | 6.21 | 11.01 | 15.32 |
| next_bit | Rust | 14.30 | 8.27 | 11.43 | 17.17 |
+-----------+-------+-----------+--------------+-----------+-----------+
| find_zero/| C | 1859.31 | 82.30 | 1830.80 | 1887.83 |
| next_zero | Rust | 1908.09 | 139.82 | 1859.65 | 1956.54 |
+-----------+-------+-----------+--------------+-----------+-----------+
Rust appears 8.5% slower for next_bit, 2.6% slower for next_zero.
In summary, taking the arithmetic mean of all slow-downs, we can say
the Rust API has a 4.5% slowdown.
Suggested-by: Alice Ryhl <aliceryhl@google.com>
Suggested-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Burak Emir <bqe@google.com>
---
MAINTAINERS | 1 +
lib/Kconfig.debug | 13 +++++
lib/Makefile | 1 +
lib/find_bit_benchmark_rust.rs | 95 +++++++++++++++++++++++++++++++++
rust/bindings/bindings_helper.h | 1 +
rust/kernel/bitmap.rs | 14 +++++
6 files changed, 125 insertions(+)
create mode 100644 lib/find_bit_benchmark_rust.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 565eaa015d9e..943d85ed1876 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4132,6 +4132,7 @@ M: Alice Ryhl <aliceryhl@google.com>
M: Burak Emir <bqe@google.com>
R: Yury Norov <yury.norov@gmail.com>
S: Maintained
+F: lib/find_bit_benchmark_rust.rs
F: rust/kernel/bitmap.rs
BITOPS API
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index f9051ab610d5..d8ed53f35495 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2605,6 +2605,19 @@ config FIND_BIT_BENCHMARK
If unsure, say N.
+config FIND_BIT_BENCHMARK_RUST
+ tristate "Test find_bit functions in Rust"
+ depends on RUST
+ help
+ This builds the "find_bit_benchmark_rust" module. It is a micro
+ benchmark that measures the performance of Rust functions that
+ correspond to the find_*_bit() operations in C. It follows the
+ FIND_BIT_BENCHMARK closely but will in general not yield the same
+ numbers due to extra bounds checks and overhead of foreign
+ function calls.
+
+ If unsure, say N.
+
config TEST_FIRMWARE
tristate "Test firmware loading via userspace interface"
depends on FW_LOADER
diff --git a/lib/Makefile b/lib/Makefile
index f07b24ce1b3f..99e49a8f5bf8 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -62,6 +62,7 @@ obj-y += hexdump.o
obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
obj-y += kstrtox.o
obj-$(CONFIG_FIND_BIT_BENCHMARK) += find_bit_benchmark.o
+obj-$(CONFIG_FIND_BIT_BENCHMARK_RUST) += find_bit_benchmark_rust.o
obj-$(CONFIG_TEST_BPF) += test_bpf.o
test_dhry-objs := dhry_1.o dhry_2.o dhry_run.o
obj-$(CONFIG_TEST_DHRY) += test_dhry.o
diff --git a/lib/find_bit_benchmark_rust.rs b/lib/find_bit_benchmark_rust.rs
new file mode 100644
index 000000000000..468a2087f68c
--- /dev/null
+++ b/lib/find_bit_benchmark_rust.rs
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+//! Benchmark for find_bit-like methods in Bitmap Rust API.
+
+use kernel::alloc::flags::GFP_KERNEL;
+use kernel::bindings;
+use kernel::bitmap::Bitmap;
+use kernel::error::{code, Result};
+use kernel::prelude::module;
+use kernel::time::Ktime;
+use kernel::ThisModule;
+use kernel::{pr_cont, pr_err};
+
+const BITMAP_LEN: usize = 4096 * 8 * 10;
+// Reciprocal of the fraction of bits that are set in sparse bitmap.
+const SPARSENESS: usize = 500;
+
+/// Test module that benchmarks performance of traversing bitmaps.
+struct FindBitBenchmarkModule();
+
+fn test_next_bit(bitmap: &Bitmap) {
+ let mut time = Ktime::ktime_get();
+ let mut cnt = 0;
+ let mut i = 0;
+
+ while let Some(index) = bitmap.next_bit(i) {
+ cnt += 1;
+ i = index + 1;
+ }
+
+ time = Ktime::ktime_get() - time;
+ pr_cont!(
+ "next_bit: {:18} ns, {:6} iterations\n",
+ time.to_ns(),
+ cnt
+ );
+}
+
+fn test_next_zero_bit(bitmap: &Bitmap) {
+ let mut time = Ktime::ktime_get();
+ let mut cnt = 0;
+ let mut i = 0;
+
+ while let Some(index) = bitmap.next_zero_bit(i) {
+ cnt += 1;
+ i = index + 1;
+ }
+
+ time = Ktime::ktime_get() - time;
+ pr_cont!(
+ "next_zero_bit: {:18} ns, {:6} iterations\n",
+ time.to_ns(),
+ cnt
+ );
+}
+
+fn find_bit_test() {
+ pr_err!("\n");
+ pr_cont!("Start testing find_bit() Rust with random-filled bitmap\n");
+
+ let mut bitmap = Bitmap::new(BITMAP_LEN, GFP_KERNEL).expect("alloc bitmap failed");
+ bitmap.fill_random();
+
+ test_next_bit(&bitmap);
+ test_next_zero_bit(&bitmap);
+
+ pr_cont!("Start testing find_bit() Rust with sparse bitmap\n");
+
+ let mut bitmap = Bitmap::new(BITMAP_LEN, GFP_KERNEL).expect("alloc sparse bitmap failed");
+ let nbits = BITMAP_LEN / SPARSENESS;
+ for _i in 0..nbits {
+ // SAFETY: BITMAP_LEN fits in 32 bits.
+ let bit: usize =
+ unsafe { bindings::__get_random_u32_below(BITMAP_LEN.try_into().unwrap()) as _ };
+ bitmap.set_bit(bit);
+ }
+
+ test_next_bit(&bitmap);
+ test_next_zero_bit(&bitmap);
+}
+
+impl kernel::Module for FindBitBenchmarkModule {
+ fn init(_module: &'static ThisModule) -> Result<Self> {
+ find_bit_test();
+ // Return error so test module can be inserted again without rmmod.
+ Err(code::EINVAL)
+ }
+}
+
+module! {
+ type: FindBitBenchmarkModule,
+ name: "find_bit_benchmark_rust_module",
+ authors: ["Burak Emir <bqe@google.com>"],
+ description: "Module with benchmark for bitmap Rust API",
+ license: "GPL v2",
+}
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index b6bf3b039c1b..f6ca7f1dd08b 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -31,6 +31,7 @@
#include <linux/platform_device.h>
#include <linux/poll.h>
#include <linux/property.h>
+#include <linux/random.h>
#include <linux/refcount.h>
#include <linux/sched.h>
#include <linux/security.h>
diff --git a/rust/kernel/bitmap.rs b/rust/kernel/bitmap.rs
index 28c11e400d1e..9fefb2473099 100644
--- a/rust/kernel/bitmap.rs
+++ b/rust/kernel/bitmap.rs
@@ -252,6 +252,20 @@ pub fn new(nbits: usize, flags: Flags) -> Result<Self, AllocError> {
pub fn len(&self) -> usize {
self.nbits
}
+
+ /// Fills this `Bitmap` with random bits.
+ #[cfg(CONFIG_FIND_BIT_BENCHMARK_RUST)]
+ pub fn fill_random(&mut self) {
+ // SAFETY: `self.as_mut_ptr()` points to storage for `self.nbits` bits;
+ // the byte length passed below is exactly the size of that storage.
+ unsafe {
+ bindings::get_random_bytes(
+ self.as_mut_ptr() as *mut ffi::c_void,
+ usize::div_ceil(self.nbits, bindings::BITS_PER_LONG as usize)
+ * core::mem::size_of::<usize>(),
+ );
+ }
+ }
}
impl CBitmap {
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v10 4/5] rust: add find_bit_benchmark_rust module.
2025-06-02 13:36 ` [PATCH v10 4/5] rust: add find_bit_benchmark_rust module Burak Emir
@ 2025-06-02 14:31 ` Yury Norov
2025-06-02 14:40 ` Alice Ryhl
2025-06-02 14:44 ` Miguel Ojeda
0 siblings, 2 replies; 11+ messages in thread
From: Yury Norov @ 2025-06-02 14:31 UTC (permalink / raw)
To: Burak Emir
Cc: Kees Cook, Rasmus Villemoes, Viresh Kumar, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
On Mon, Jun 02, 2025 at 01:36:45PM +0000, Burak Emir wrote:
> Microbenchmark protected by a config FIND_BIT_BENCHMARK_RUST,
> following `find_bit_benchmark.c` but testing the Rust Bitmap API.
>
> We add a fill_random() method protected by the config in order to
> maintain the abstraction.
>
> The sample output from the benchmark, both C and Rust version:
>
> find_bit_benchmark.c output:
> ```
> Start testing find_bit() with random-filled bitmap
> [ 438.101937] find_next_bit: 860188 ns, 163419 iterations
> [ 438.109471] find_next_zero_bit: 912342 ns, 164262 iterations
> [ 438.116820] find_last_bit: 726003 ns, 163419 iterations
> [ 438.130509] find_nth_bit: 7056993 ns, 16269 iterations
> [ 438.139099] find_first_bit: 1963272 ns, 16270 iterations
> [ 438.173043] find_first_and_bit: 27314224 ns, 32654 iterations
> [ 438.180065] find_next_and_bit: 398752 ns, 73705 iterations
> [ 438.186689]
> Start testing find_bit() with sparse bitmap
> [ 438.193375] find_next_bit: 9675 ns, 656 iterations
> [ 438.201765] find_next_zero_bit: 1766136 ns, 327025 iterations
> [ 438.208429] find_last_bit: 9017 ns, 656 iterations
> [ 438.217816] find_nth_bit: 2749742 ns, 655 iterations
> [ 438.225168] find_first_bit: 721799 ns, 656 iterations
> [ 438.231797] find_first_and_bit: 2819 ns, 1 iterations
> [ 438.238441] find_next_and_bit: 3159 ns, 1 iterations
> ```
>
> find_bit_benchmark_rust.rs output:
> ```
> [ 451.182459] find_bit_benchmark_rust_module:
> [ 451.186688] Start testing find_bit() Rust with random-filled bitmap
> [ 451.194450] next_bit: 777950 ns, 163644 iterations
> [ 451.201997] next_zero_bit: 918889 ns, 164036 iterations
> [ 451.208642] Start testing find_bit() Rust with sparse bitmap
> [ 451.214300] next_bit: 9181 ns, 654 iterations
> [ 451.222806] next_zero_bit: 1855504 ns, 327026 iterations
> ```
>
> Here are the results from 32 samples, with 95% confidence interval.
> The microbenchmark was built with RUST_BITMAP_HARDENED=n and run on a
> machine that did not execute other processes.
>
> Random-filled bitmap:
> +-----------+-------+-----------+--------------+-----------+-----------+
> | Benchmark | Lang | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
> +-----------+-------+-----------+--------------+-----------+-----------+
> | find_bit/ | C | 825.07 | 53.89 | 806.40 | 843.74 |
> | next_bit | Rust | 870.91 | 46.29 | 854.88 | 886.95 |
> +-----------+-------+-----------+--------------+-----------+-----------+
> | find_zero/| C | 933.56 | 56.34 | 914.04 | 953.08 |
> | next_zero | Rust | 945.85 | 60.44 | 924.91 | 966.79 |
> +-----------+-------+-----------+--------------+-----------+-----------+
>
> Rust appears 5.5% slower for next_bit, 1.3% slower for next_zero.
>
> Sparse bitmap:
> +-----------+-------+-----------+--------------+-----------+-----------+
> | Benchmark | Lang | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
> +-----------+-------+-----------+--------------+-----------+-----------+
> | find_bit/ | C | 13.17 | 6.21 | 11.01 | 15.32 |
> | next_bit | Rust | 14.30 | 8.27 | 11.43 | 17.17 |
> +-----------+-------+-----------+--------------+-----------+-----------+
> | find_zero/| C | 1859.31 | 82.30 | 1830.80 | 1887.83 |
> | next_zero | Rust | 1908.09 | 139.82 | 1859.65 | 1956.54 |
> +-----------+-------+-----------+--------------+-----------+-----------+
>
> Rust appears 8.5% slower for next_bit, 2.6% slower for next_zero.
>
> In summary, taking the arithmetic mean of all slow-downs, we can say
> the Rust API has a 4.5% slowdown.
>
> Suggested-by: Alice Ryhl <aliceryhl@google.com>
> Suggested-by: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Burak Emir <bqe@google.com>
> ---
> MAINTAINERS | 1 +
> lib/Kconfig.debug | 13 +++++
> lib/Makefile | 1 +
> lib/find_bit_benchmark_rust.rs | 95 +++++++++++++++++++++++++++++++++
> rust/bindings/bindings_helper.h | 1 +
> rust/kernel/bitmap.rs | 14 +++++
> 6 files changed, 125 insertions(+)
> create mode 100644 lib/find_bit_benchmark_rust.rs
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 565eaa015d9e..943d85ed1876 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -4132,6 +4132,7 @@ M: Alice Ryhl <aliceryhl@google.com>
> M: Burak Emir <bqe@google.com>
> R: Yury Norov <yury.norov@gmail.com>
> S: Maintained
> +F: lib/find_bit_benchmark_rust.rs
> F: rust/kernel/bitmap.rs
>
> BITOPS API
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index f9051ab610d5..d8ed53f35495 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -2605,6 +2605,19 @@ config FIND_BIT_BENCHMARK
>
> If unsure, say N.
>
> +config FIND_BIT_BENCHMARK_RUST
> + tristate "Test find_bit functions in Rust"
> + depends on RUST
> + help
> + This builds the "find_bit_benchmark_rust" module. It is a micro
> + benchmark that measures the performance of Rust functions that
> + correspond to the find_*_bit() operations in C. It follows the
> + FIND_BIT_BENCHMARK closely but will in general not yield same
> + numbers due to extra bounds checks and overhead of foreign
> + function calls.
> +
> + If unsure, say N.
> +
> config TEST_FIRMWARE
> tristate "Test firmware loading via userspace interface"
> depends on FW_LOADER
> diff --git a/lib/Makefile b/lib/Makefile
> index f07b24ce1b3f..99e49a8f5bf8 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -62,6 +62,7 @@ obj-y += hexdump.o
> obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
> obj-y += kstrtox.o
> obj-$(CONFIG_FIND_BIT_BENCHMARK) += find_bit_benchmark.o
> +obj-$(CONFIG_FIND_BIT_BENCHMARK_RUST) += find_bit_benchmark_rust.o
> obj-$(CONFIG_TEST_BPF) += test_bpf.o
> test_dhry-objs := dhry_1.o dhry_2.o dhry_run.o
> obj-$(CONFIG_TEST_DHRY) += test_dhry.o
> diff --git a/lib/find_bit_benchmark_rust.rs b/lib/find_bit_benchmark_rust.rs
> new file mode 100644
> index 000000000000..468a2087f68c
> --- /dev/null
> +++ b/lib/find_bit_benchmark_rust.rs
> @@ -0,0 +1,95 @@
> +// SPDX-License-Identifier: GPL-2.0
> +//! Benchmark for find_bit-like methods in Bitmap Rust API.
> +
> +use kernel::alloc::flags::GFP_KERNEL;
> +use kernel::bindings;
> +use kernel::bitmap::Bitmap;
> +use kernel::error::{code, Result};
> +use kernel::prelude::module;
> +use kernel::time::Ktime;
> +use kernel::ThisModule;
> +use kernel::{pr_cont, pr_err};
> +
> +const BITMAP_LEN: usize = 4096 * 8 * 10;
> +// Reciprocal of the fraction of bits that are set in sparse bitmap.
> +const SPARSENESS: usize = 500;
Is there any simple mechanism to keep C and rust sizes synced? (If no,
not a big deal to redefine them.)
> +
> +/// Test module that benchmarks performance of traversing bitmaps.
> +struct FindBitBenchmarkModule();
> +
> +fn test_next_bit(bitmap: &Bitmap) {
> + let mut time = Ktime::ktime_get();
> + let mut cnt = 0;
> + let mut i = 0;
> +
> + while let Some(index) = bitmap.next_bit(i) {
> + cnt += 1;
> + i = index + 1;
> + }
> +
> + time = Ktime::ktime_get() - time;
> + pr_cont!(
> + "next_bit: {:18} ns, {:6} iterations\n",
> + time.to_ns(),
> + cnt
> + );
> +}
> +
> +fn test_next_zero_bit(bitmap: &Bitmap) {
> + let mut time = Ktime::ktime_get();
> + let mut cnt = 0;
> + let mut i = 0;
> +
> + while let Some(index) = bitmap.next_zero_bit(i) {
> + cnt += 1;
> + i = index + 1;
> + }
> +
> + time = Ktime::ktime_get() - time;
> + pr_cont!(
> + "next_zero_bit: {:18} ns, {:6} iterations\n",
> + time.to_ns(),
> + cnt
> + );
> +}
> +
> +fn find_bit_test() {
> + pr_err!("\n");
> + pr_cont!("Start testing find_bit() Rust with random-filled bitmap\n");
> +
> + let mut bitmap = Bitmap::new(BITMAP_LEN, GFP_KERNEL).expect("alloc bitmap failed");
> + bitmap.fill_random();
> +
> + test_next_bit(&bitmap);
> + test_next_zero_bit(&bitmap);
> +
> + pr_cont!("Start testing find_bit() Rust with sparse bitmap\n");
> +
> + let mut bitmap = Bitmap::new(BITMAP_LEN, GFP_KERNEL).expect("alloc sparse bitmap failed");
> + let nbits = BITMAP_LEN / SPARSENESS;
> + for _i in 0..nbits {
> + // SAFETY: BITMAP_LEN fits in 32 bits.
> + let bit: usize =
> + unsafe { bindings::__get_random_u32_below(BITMAP_LEN.try_into().unwrap()) as _ };
> + bitmap.set_bit(bit);
> + }
> +
> + test_next_bit(&bitmap);
> + test_next_zero_bit(&bitmap);
> +}
> +
> +impl kernel::Module for FindBitBenchmarkModule {
> + fn init(_module: &'static ThisModule) -> Result<Self> {
> + find_bit_test();
> + // Return error so test module can be inserted again without rmmod.
> + Err(code::EINVAL)
> + }
> +}
> +
> +module! {
> + type: FindBitBenchmarkModule,
I think we agreed to have the type something less unique, like:
Benchmark.
> + name: "find_bit_benchmark_rust_module",
What is the name policy for rust? Maybe a more human-readable name
would work better here?
All the above are nits. Please have my
Reviewed-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Thanks,
Yury
> + authors: ["Burak Emir <bqe@google.com>"],
> + description: "Module with benchmark for bitmap Rust API",
> + license: "GPL v2",
> +}
> diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
> index b6bf3b039c1b..f6ca7f1dd08b 100644
> --- a/rust/bindings/bindings_helper.h
> +++ b/rust/bindings/bindings_helper.h
> @@ -31,6 +31,7 @@
> #include <linux/platform_device.h>
> #include <linux/poll.h>
> #include <linux/property.h>
> +#include <linux/random.h>
> #include <linux/refcount.h>
> #include <linux/sched.h>
> #include <linux/security.h>
> diff --git a/rust/kernel/bitmap.rs b/rust/kernel/bitmap.rs
> index 28c11e400d1e..9fefb2473099 100644
> --- a/rust/kernel/bitmap.rs
> +++ b/rust/kernel/bitmap.rs
> @@ -252,6 +252,20 @@ pub fn new(nbits: usize, flags: Flags) -> Result<Self, AllocError> {
> pub fn len(&self) -> usize {
> self.nbits
> }
> +
> + /// Fills this `Bitmap` with random bits.
> + #[cfg(CONFIG_FIND_BIT_BENCHMARK_RUST)]
> + pub fn fill_random(&mut self) {
> + // SAFETY: `self.as_mut_ptr` points to either an array of the
> + // appropriate length or one usize.
> + unsafe {
> + bindings::get_random_bytes(
> + self.as_mut_ptr() as *mut ffi::c_void,
> + usize::div_ceil(self.nbits, bindings::BITS_PER_LONG as usize)
> + * bindings::BITS_PER_LONG as usize,
> + );
> + }
> + }
> }
>
> impl CBitmap {
> --
> 2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v10 4/5] rust: add find_bit_benchmark_rust module.
2025-06-02 14:31 ` Yury Norov
@ 2025-06-02 14:40 ` Alice Ryhl
2025-06-02 14:44 ` Miguel Ojeda
1 sibling, 0 replies; 11+ messages in thread
From: Alice Ryhl @ 2025-06-02 14:40 UTC (permalink / raw)
To: Yury Norov
Cc: Burak Emir, Kees Cook, Rasmus Villemoes, Viresh Kumar,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
On Mon, Jun 2, 2025 at 4:32 PM Yury Norov <yury.norov@gmail.com> wrote:
>
> On Mon, Jun 02, 2025 at 01:36:45PM +0000, Burak Emir wrote:
> > Microbenchmark protected by a config FIND_BIT_BENCHMARK_RUST,
> > following `find_bit_benchmark.c` but testing the Rust Bitmap API.
> >
> > We add a fill_random() method protected by the config in order to
> > maintain the abstraction.
> >
> > The sample output from the benchmark, both C and Rust version:
> >
> > find_bit_benchmark.c output:
> > ```
> > Start testing find_bit() with random-filled bitmap
> > [ 438.101937] find_next_bit: 860188 ns, 163419 iterations
> > [ 438.109471] find_next_zero_bit: 912342 ns, 164262 iterations
> > [ 438.116820] find_last_bit: 726003 ns, 163419 iterations
> > [ 438.130509] find_nth_bit: 7056993 ns, 16269 iterations
> > [ 438.139099] find_first_bit: 1963272 ns, 16270 iterations
> > [ 438.173043] find_first_and_bit: 27314224 ns, 32654 iterations
> > [ 438.180065] find_next_and_bit: 398752 ns, 73705 iterations
> > [ 438.186689]
> > Start testing find_bit() with sparse bitmap
> > [ 438.193375] find_next_bit: 9675 ns, 656 iterations
> > [ 438.201765] find_next_zero_bit: 1766136 ns, 327025 iterations
> > [ 438.208429] find_last_bit: 9017 ns, 656 iterations
> > [ 438.217816] find_nth_bit: 2749742 ns, 655 iterations
> > [ 438.225168] find_first_bit: 721799 ns, 656 iterations
> > [ 438.231797] find_first_and_bit: 2819 ns, 1 iterations
> > [ 438.238441] find_next_and_bit: 3159 ns, 1 iterations
> > ```
> >
> > find_bit_benchmark_rust.rs output:
> > ```
> > [ 451.182459] find_bit_benchmark_rust_module:
> > [ 451.186688] Start testing find_bit() Rust with random-filled bitmap
> > [ 451.194450] next_bit: 777950 ns, 163644 iterations
> > [ 451.201997] next_zero_bit: 918889 ns, 164036 iterations
> > [ 451.208642] Start testing find_bit() Rust with sparse bitmap
> > [ 451.214300] next_bit: 9181 ns, 654 iterations
> > [ 451.222806] next_zero_bit: 1855504 ns, 327026 iterations
> > ```
> >
> > Here are the results from 32 samples, with 95% confidence interval.
> > The microbenchmark was built with RUST_BITMAP_HARDENED=n and run on a
> > machine that did not execute other processes.
> >
> > Random-filled bitmap:
> > +-----------+-------+-----------+--------------+-----------+-----------+
> > | Benchmark | Lang | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
> > +-----------+-------+-----------+--------------+-----------+-----------+
> > | find_bit/ | C | 825.07 | 53.89 | 806.40 | 843.74 |
> > | next_bit | Rust | 870.91 | 46.29 | 854.88 | 886.95 |
> > +-----------+-------+-----------+--------------+-----------+-----------+
> > | find_zero/| C | 933.56 | 56.34 | 914.04 | 953.08 |
> > | next_zero | Rust | 945.85 | 60.44 | 924.91 | 966.79 |
> > +-----------+-------+-----------+--------------+-----------+-----------+
> >
> > Rust appears 5.5% slower for next_bit, 1.3% slower for next_zero.
> >
> > Sparse bitmap:
> > +-----------+-------+-----------+--------------+-----------+-----------+
> > | Benchmark | Lang | Mean (ms) | Std Dev (ms) | 95% CI Lo | 95% CI Hi |
> > +-----------+-------+-----------+--------------+-----------+-----------+
> > | find_bit/ | C | 13.17 | 6.21 | 11.01 | 15.32 |
> > | next_bit | Rust | 14.30 | 8.27 | 11.43 | 17.17 |
> > +-----------+-------+-----------+--------------+-----------+-----------+
> > | find_zero/| C | 1859.31 | 82.30 | 1830.80 | 1887.83 |
> > | next_zero | Rust | 1908.09 | 139.82 | 1859.65 | 1956.54 |
> > +-----------+-------+-----------+--------------+-----------+-----------+
> >
> > Rust appears 8.5% slower for next_bit, 2.6% slower for next_zero.
> >
> > In summary, taking the arithmetic mean of all slow-downs, we can say
> > the Rust API has a 4.5% slowdown.
> >
> > Suggested-by: Alice Ryhl <aliceryhl@google.com>
> > Suggested-by: Yury Norov <yury.norov@gmail.com>
> > Signed-off-by: Burak Emir <bqe@google.com>
> > +const BITMAP_LEN: usize = 4096 * 8 * 10;
> > +// Reciprocal of the fraction of bits that are set in sparse bitmap.
> > +const SPARSENESS: usize = 500;
>
> Is there any simple mechanism to keep C and rust sizes synced? (If no,
> not a big deal to redefine them.)
Rust can access constants from header files, so you can move it to a
header file.
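For example, something along these lines (the constant name is made up;
the header just needs to be reachable from rust/bindings/bindings_helper.h,
and bindgen generally exposes simple integer #defines as constants):

```rust
// Sketch only: assumes the C side has, in a shared header,
//
//     #define FIND_BIT_BENCH_LEN (4096 * 8 * 10)
//
// which bindgen exposes as a constant, so the Rust module can reuse it:
const BITMAP_LEN: usize = kernel::bindings::FIND_BIT_BENCH_LEN as usize;
```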
> > +module! {
> > + type: FindBitBenchmarkModule,
>
> I think we agreed to have the type something less unique, like:
>
> Benchmark.
>
> > + name: "find_bit_benchmark_rust_module",
>
> What is the name policy for rust? Maybe a more human-readable name
> would work better here?
I don't think there's any particular policy for Rust. Name modules in
the same manner you would C modules.
> All the above are nits. Please have my
>
> Reviewed-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
>
> Thanks,
> Yury
Alice
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v10 4/5] rust: add find_bit_benchmark_rust module.
2025-06-02 14:31 ` Yury Norov
2025-06-02 14:40 ` Alice Ryhl
@ 2025-06-02 14:44 ` Miguel Ojeda
2025-06-11 19:36 ` Burak Emir
1 sibling, 1 reply; 11+ messages in thread
From: Miguel Ojeda @ 2025-06-02 14:44 UTC (permalink / raw)
To: Yury Norov
Cc: Burak Emir, Kees Cook, Rasmus Villemoes, Viresh Kumar,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
On Mon, Jun 2, 2025 at 4:32 PM Yury Norov <yury.norov@gmail.com> wrote:
>
> > +const BITMAP_LEN: usize = 4096 * 8 * 10;
> > +// Reciprocal of the fraction of bits that are set in sparse bitmap.
> > +const SPARSENESS: usize = 500;
>
> Is there any simple mechanism to keep C and rust sizes synced? (If no,
> not a big deal to redefine them.)
One may pick them from C (possibly with a `RUST_HELPER_*` if needed).
If they are non-trivial macros, then using an `enum` instead of a
`#define` on the C side is also an alternative.
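For completeness, a sketch of the enum flavour (again with a made-up name):

```rust
// Sketch only: assumes the C side declares
//
//     enum { FIND_BIT_BENCH_LEN = 4096 * 8 * 10 };
//
// in a header visible to bindgen, which turns the enumerator into a
// constant usable from Rust:
const BITMAP_LEN: usize = kernel::bindings::FIND_BIT_BENCH_LEN as usize;
```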
> What is the name policy for rust? Maybe a more human-readable name
> would work better here?
Up to the maintainers, and generally the same as for C. In the global
Rust samples and things like that we have `rust` in the name since
they are Rust samples after all, but there is no need to say `rust` or
`module` in actual modules etc. unless there is a reason for it.
I hope that helps!
Cheers,
Miguel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v10 4/5] rust: add find_bit_benchmark_rust module.
2025-06-02 14:44 ` Miguel Ojeda
@ 2025-06-11 19:36 ` Burak Emir
0 siblings, 0 replies; 11+ messages in thread
From: Burak Emir @ 2025-06-11 19:36 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Yury Norov, Kees Cook, Rasmus Villemoes, Viresh Kumar,
Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
Thanks all for the review & comments.
On Mon, Jun 2, 2025 at 4:44 PM Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
>
> On Mon, Jun 2, 2025 at 4:32 PM Yury Norov <yury.norov@gmail.com> wrote:
> >
> > > +const BITMAP_LEN: usize = 4096 * 8 * 10;
> > > +// Reciprocal of the fraction of bits that are set in sparse bitmap.
> > > +const SPARSENESS: usize = 500;
> >
> > Is there any simple mechanism to keep C and rust sizes synced? (If no,
> > not a big deal to redefine them.)
>
> One may pick them from C (possibly with a `RUST_HELPER_*` if needed).
> If they are non-trivial macros, then using an `enum` instead of a
> `#define` on the C side is also an alternative.
I'd prefer not to move these to a header file and define RUST_HELPER and such.
I'd prefer if test & benchmark code was somewhat contained (although I
agree it would be nice to keep the two definitions in sync).
> > What is the name policy for rust? Maybe a more human-readable name
> > would work better here?
>
> Up to the maintainers, and generally the same as for C. In the global
> Rust samples and things like that we have `rust` in the name since
> they are Rust samples after all, but there is no need to say `rust` or
> `module` in actual modules etc. unless there is a reason for it.
>
> I hope that helps!
I renamed the module struct to `Benchmark` now and made the "name" the
same as the file name.
So the _rust suffix is there because the Rust file really corresponds
to the C file.
In this particular case, I think the suffix does help to show how the
two relate.
Cheers,
- Burak
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v10 5/5] rust: add dynamic ID pool abstraction for bitmap
2025-06-02 13:36 [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
` (3 preceding siblings ...)
2025-06-02 13:36 ` [PATCH v10 4/5] rust: add find_bit_benchmark_rust module Burak Emir
@ 2025-06-02 13:36 ` Burak Emir
2025-06-02 13:48 ` [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
5 siblings, 0 replies; 11+ messages in thread
From: Burak Emir @ 2025-06-02 13:36 UTC (permalink / raw)
To: Yury Norov, Kees Cook
Cc: Burak Emir, Rasmus Villemoes, Viresh Kumar, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Gustavo A . R . Silva, Carlos LLama, Pekka Ristola,
rust-for-linux, linux-kernel, linux-hardening
This is a port of the Binder data structure introduced in commit
15d9da3f818c ("binder: use bitmap for faster descriptor lookup") to
Rust.
Like drivers/android/dbitmap.h, the ID pool abstraction lets
clients acquire and release IDs. The implementation uses a bitmap to
know what IDs are in use, and gives clients fine-grained control over
the time of allocation. This fine-grained control is needed in the
Android Binder. We provide an example that releases a spinlock for
allocation, as well as unit tests (rustdoc examples).
The implementation does not permit shrinking the capacity below
BITS_PER_LONG.
Suggested-by: Alice Ryhl <aliceryhl@google.com>
Suggested-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Burak Emir <bqe@google.com>
---
MAINTAINERS | 1 +
rust/kernel/id_pool.rs | 222 +++++++++++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 1 +
3 files changed, 224 insertions(+)
create mode 100644 rust/kernel/id_pool.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 943d85ed1876..bc95d98f266b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4134,6 +4134,7 @@ R: Yury Norov <yury.norov@gmail.com>
S: Maintained
F: lib/find_bit_benchmark_rust.rs
F: rust/kernel/bitmap.rs
+F: rust/kernel/id_pool.rs
BITOPS API
M: Yury Norov <yury.norov@gmail.com>
diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
new file mode 100644
index 000000000000..cf26d405d9bb
--- /dev/null
+++ b/rust/kernel/id_pool.rs
@@ -0,0 +1,222 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2025 Google LLC.
+
+//! Rust API for an ID pool backed by a [`Bitmap`].
+
+use crate::alloc::{AllocError, Flags};
+use crate::bitmap::Bitmap;
+
+/// Represents a dynamic ID pool backed by a [`Bitmap`].
+///
+/// Clients acquire and release IDs from unset bits in a bitmap.
+///
+/// The capacity of the ID pool may be adjusted by users as
+/// needed. The API supports the scenario where users need precise control
+/// over the time of allocation of a new backing bitmap, which may require
+/// releasing a spinlock.
+/// Due to concurrent updates, grow and shrink operations are re-verified
+/// to determine whether they are still valid.
+///
+/// # Examples
+///
+/// Basic usage
+///
+/// ```
+/// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
+/// use kernel::id_pool::IdPool;
+///
+/// let mut pool = IdPool::new(64, GFP_KERNEL)?;
+/// for i in 0..64 {
+/// assert_eq!(i, pool.acquire_next_id(i).ok_or(ENOSPC)?);
+/// }
+///
+/// pool.release_id(23);
+/// assert_eq!(23, pool.acquire_next_id(0).ok_or(ENOSPC)?);
+///
+/// assert_eq!(None, pool.acquire_next_id(0)); // time to realloc.
+/// let resizer = pool.grow_request().ok_or(ENOSPC)?.realloc(GFP_KERNEL)?;
+/// pool.grow(resizer);
+///
+/// assert_eq!(pool.acquire_next_id(0), Some(64));
+/// # Ok::<(), Error>(())
+/// ```
+///
+/// Releasing spinlock to grow the pool
+///
+/// ```no_run
+/// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
+/// use kernel::sync::{new_spinlock, SpinLock};
+/// use kernel::id_pool::IdPool;
+///
+/// fn get_id_maybe_realloc(guarded_pool: &SpinLock<IdPool>) -> Result<usize, AllocError> {
+/// let mut pool = guarded_pool.lock();
+/// loop {
+/// match pool.acquire_next_id(0) {
+/// Some(index) => return Ok(index),
+/// None => {
+/// let alloc_request = pool.grow_request();
+/// drop(pool);
+/// let resizer = alloc_request.ok_or(AllocError)?.realloc(GFP_KERNEL)?;
+/// pool = guarded_pool.lock();
+/// pool.grow(resizer)
+/// }
+/// }
+/// }
+/// }
+/// ```
+pub struct IdPool {
+ map: Bitmap,
+}
+
+/// Indicates that an [`IdPool`] should change to a new target size.
+pub struct ReallocRequest {
+ num_ids: usize,
+}
+
+/// Contains a [`Bitmap`] of a size suitable for reallocating [`IdPool`].
+pub struct PoolResizer {
+ new: Bitmap,
+}
+
+impl ReallocRequest {
+ /// Allocates a new backing [`Bitmap`] for [`IdPool`].
+ ///
+ /// This method only prepares reallocation and does not complete it.
+ /// Reallocation will complete after passing the [`PoolResizer`] to the
+ /// [`IdPool::grow`] or [`IdPool::shrink`] operation, which will check
+ /// that reallocation still makes sense.
+ pub fn realloc(&self, flags: Flags) -> Result<PoolResizer, AllocError> {
+ let new = Bitmap::new(self.num_ids, flags)?;
+ Ok(PoolResizer { new })
+ }
+}
+
+impl IdPool {
+    /// Constructs a new [`IdPool`].
+ ///
+ /// A capacity below [`BITS_PER_LONG`] is adjusted to [`BITS_PER_LONG`].
+ #[inline]
+ pub fn new(num_ids: usize, flags: Flags) -> Result<Self, AllocError> {
+ let num_ids = core::cmp::max(num_ids, bindings::BITS_PER_LONG as usize);
+ let map = Bitmap::new(num_ids, flags)?;
+ Ok(Self { map })
+ }
+
+    /// Returns the number of IDs this pool can currently hold.
+ #[inline]
+ pub fn len(&self) -> usize {
+ self.map.len()
+ }
+
+ /// Returns a [`ReallocRequest`] if the [`IdPool`] can be shrunk, [`None`] otherwise.
+ ///
+ /// The capacity of an [`IdPool`] cannot be shrunk below [`BITS_PER_LONG`].
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
+ /// use kernel::id_pool::{ReallocRequest, IdPool};
+ ///
+ /// let mut pool = IdPool::new(1024, GFP_KERNEL)?;
+ /// let alloc_request = pool.shrink_request().ok_or(AllocError)?;
+ /// let resizer = alloc_request.realloc(GFP_KERNEL)?;
+ /// pool.shrink(resizer);
+ /// assert_eq!(pool.len(), kernel::bindings::BITS_PER_LONG as usize);
+ /// # Ok::<(), AllocError>(())
+ /// ```
+ #[inline]
+ pub fn shrink_request(&self) -> Option<ReallocRequest> {
+ let len = self.map.len();
+ // Shrinking below [`BITS_PER_LONG`] is never possible.
+ if len <= bindings::BITS_PER_LONG as usize {
+ return None;
+ }
+ // Determine if the bitmap can shrink based on the position of
+ // its last set bit. If the bit is within the first quarter of
+ // the bitmap then shrinking is possible. In this case, the
+ // bitmap should shrink to half its current size.
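+        // For example, with a current size of 1024 bits, a last set bit
+        // at index 100 (within the first 256 bits) allows shrinking to
+        // 512 bits, whereas a last set bit at index 300 prevents it.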
+        let num_ids = match self.map.last_bit() {
+            // No bits are set at all: shrink to the minimum capacity.
+            None => bindings::BITS_PER_LONG as usize,
+            // The last set bit lies beyond the first quarter: cannot shrink.
+            Some(bit) if bit >= (len / 4) => return None,
+            // Otherwise, shrink to half the current size.
+            Some(_) => core::cmp::max(bindings::BITS_PER_LONG as usize, len / 2),
+        };
+        Some(ReallocRequest { num_ids })
+ }
+
+ /// Shrinks pool by using a new [`Bitmap`], if still possible.
+ #[inline]
+ pub fn shrink(&mut self, mut resizer: PoolResizer) {
+ // Between request to shrink that led to allocation of `resizer` and now,
+ // bits may have changed.
+ // Verify that shrinking is still possible. In case shrinking to
+ // the size of `resizer` is no longer possible, do nothing,
+ // drop `resizer` and move on.
+        let Some(updated) = self.shrink_request() else {
+            return;
+        };
+        if updated.num_ids > resizer.new.len() {
+            return;
+        }
+
+ resizer.new.copy_and_extend(&self.map);
+ self.map = resizer.new;
+ }
+
+ /// Returns a [`ReallocRequest`] for growing this [`IdPool`], if possible.
+ ///
+ /// The capacity of an [`IdPool`] cannot be grown above [`i32::MAX`].
+ #[inline]
+ pub fn grow_request(&self) -> Option<ReallocRequest> {
+ let num_ids = self.map.len() * 2;
+ if num_ids > i32::MAX.try_into().unwrap() {
+ return None;
+ }
+ Some(ReallocRequest { num_ids })
+ }
+
+ /// Grows pool by using a new [`Bitmap`], if still necessary.
+ ///
+    /// The `resizer` argument has to be obtained by calling [`IdPool::grow_request`]
+ /// on this object and performing a `realloc`.
+ #[inline]
+ pub fn grow(&mut self, mut resizer: PoolResizer) {
+ // Between request to grow that led to allocation of `resizer` and now,
+ // another thread may have already grown the capacity.
+ // In this case, do nothing, drop `resizer` and move on.
+ if resizer.new.len() <= self.map.len() {
+ return;
+ }
+
+ resizer.new.copy_and_extend(&self.map);
+ self.map = resizer.new;
+ }
+
+ /// Acquires a new ID by finding and setting the next zero bit in the
+ /// bitmap.
+ ///
+    /// Upon success, returns the index of the acquired ID. Otherwise, returns `None`
+ /// to indicate that a `grow_request` is needed.
+ #[inline]
+ pub fn acquire_next_id(&mut self, offset: usize) -> Option<usize> {
+ let next_zero_bit = self.map.next_zero_bit(offset);
+ if let Some(nr) = next_zero_bit {
+ self.map.set_bit(nr);
+ }
+ next_zero_bit
+ }
+
+ /// Releases an ID.
+ #[inline]
+ pub fn release_id(&mut self, id: usize) {
+ self.map.clear_bit(id);
+ }
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 8c4161cd82ac..d7def807900a 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -54,6 +54,7 @@
#[cfg(CONFIG_RUST_FW_LOADER_ABSTRACTIONS)]
pub mod firmware;
pub mod fs;
+pub mod id_pool;
pub mod init;
pub mod io;
pub mod ioctl;
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings
2025-06-02 13:36 [PATCH v10 0/5] rust: adds Bitmap API, ID pool and bindings Burak Emir
` (4 preceding siblings ...)
2025-06-02 13:36 ` [PATCH v10 5/5] rust: add dynamic ID pool abstraction for bitmap Burak Emir
@ 2025-06-02 13:48 ` Burak Emir
5 siblings, 0 replies; 11+ messages in thread
From: Burak Emir @ 2025-06-02 13:48 UTC (permalink / raw)
To: Yury Norov, Kees Cook
Cc: Rasmus Villemoes, Viresh Kumar, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Gustavo A . R . Silva,
Carlos LLama, Pekka Ristola, rust-for-linux, linux-kernel,
linux-hardening
Sorry for the spam, please ignore this version. Had to fix one more
typo in Kconfig.
On Mon, Jun 2, 2025 at 3:36 PM Burak Emir <bqe@google.com> wrote:
>
> This series adds a Rust bitmap API for porting the approach from
> commit 15d9da3f818c ("binder: use bitmap for faster descriptor lookup")
> to Rust. The functionality in dbitmap.h makes use of bitmap and bitops.
^ permalink raw reply [flat|nested] 11+ messages in thread