* [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver
@ 2025-11-12 12:47 Alice Ryhl
  2025-11-12 12:47 ` [PATCH v5 1/6] rust: bitmap: add MAX_LEN and MAX_INLINE_LEN constants Alice Ryhl
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

See the commit message of the Rust Binder commit for details.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Changes in v5:
- Rename NO_ALLOC_MAX_LEN to MAX_INLINE_LEN.
- Update all uses of MAX_LEN/MAX_INLINE_LEN to reference the new
  constants.
- Link to v4: https://lore.kernel.org/r/20251110-binder-bitmap-v4-0-5ed8a7fab1b9@google.com

Changes in v4:
- Keep existing IdPool constructor (renamed to with_capacity).
- Undo docs changes that were due to the removal of the existing IdPool
  constructor.
- List id < pool.map.len() as invariant for UnusedId and mention it in
  as_u32() comment.
- Fix [`acquire`] docs links.
- Link to v3: https://lore.kernel.org/r/20251028-binder-bitmap-v3-0-32822d4b3207@google.com

Changes in v3:
- Also update documentation to fix compile errors.
- Use new_inline() instead of new_small().
- Reword commit message of "rust: id_pool: do not supply starting capacity"
- Change UnusedId::acquire() to return usize.
- Link to v2: https://lore.kernel.org/r/20251021-binder-bitmap-v2-0-e652d172c62b@google.com

Changes in v2:
- Use id_pool.
- Link to v1: https://lore.kernel.org/r/20251020-binder-bitmap-v1-0-879bec9cddc1@google.com

---
Alice Ryhl (6):
      rust: bitmap: add MAX_LEN and MAX_INLINE_LEN constants
      rust: bitmap: add BitmapVec::new_inline()
      rust: bitmap: rename IdPool::new() to with_capacity()
      rust: id_pool: do not supply starting capacity
      rust: id_pool: do not immediately acquire new ids
      rust_binder: use bitmap for allocation of handles

 drivers/android/binder/process.rs |  63 ++++++++++++++-----
 rust/kernel/bitmap.rs             |  43 ++++++++-----
 rust/kernel/id_pool.rs            | 129 ++++++++++++++++++++++++++++----------
 3 files changed, 171 insertions(+), 64 deletions(-)
---
base-commit: 211ddde0823f1442e4ad052a2f30f050145ccada
change-id: 20251007-binder-bitmap-5aa0d966fb1f

Best regards,
-- 
Alice Ryhl <aliceryhl@google.com>



* [PATCH v5 1/6] rust: bitmap: add MAX_LEN and MAX_INLINE_LEN constants
  2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
@ 2025-11-12 12:47 ` Alice Ryhl
  2025-11-12 12:47 ` [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline() Alice Ryhl
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

To avoid hard-coding these values in drivers, define constants for them
that drivers can reference. Also, update all places in bitmap.rs and
id_pool.rs that hard-code these values to use the new constants.
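
For illustration only (not part of this patch; check_len() and
`requested` are hypothetical), a driver can now reference the limits
instead of hard-coding them:

    use kernel::bitmap::BitmapVec;
    use kernel::prelude::*;

    fn check_len(requested: usize) -> Result {
        // Lengths above MAX_LEN cannot be represented; lengths up to
        // MAX_INLINE_LEN will not require a heap allocation.
        if requested > BitmapVec::MAX_LEN {
            return Err(EINVAL);
        }
        Ok(())
    }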

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/bitmap.rs  | 33 +++++++++++++++++++--------------
 rust/kernel/id_pool.rs | 29 ++++++++++++++---------------
 2 files changed, 33 insertions(+), 29 deletions(-)

diff --git a/rust/kernel/bitmap.rs b/rust/kernel/bitmap.rs
index aa8fc7bf06fc99865ae755d8694e4bec3dc8e7f0..0705646c6251a49f213a45f1f013cb9eb2ed81de 100644
--- a/rust/kernel/bitmap.rs
+++ b/rust/kernel/bitmap.rs
@@ -12,8 +12,6 @@
 use crate::pr_err;
 use core::ptr::NonNull;
 
-const BITS_PER_LONG: usize = bindings::BITS_PER_LONG as usize;
-
 /// Represents a C bitmap. Wraps underlying C bitmap API.
 ///
 /// # Invariants
@@ -149,14 +147,14 @@ macro_rules! bitmap_assert_return {
 ///
 /// # Invariants
 ///
-/// * `nbits` is `<= i32::MAX` and never changes.
-/// * if `nbits <= bindings::BITS_PER_LONG`, then `repr` is a `usize`.
+/// * `nbits` is `<= MAX_LEN`.
+/// * if `nbits <= MAX_INLINE_LEN`, then `repr` is a `usize`.
 /// * otherwise, `repr` holds a non-null pointer to an initialized
 ///   array of `unsigned long` that is large enough to hold `nbits` bits.
 pub struct BitmapVec {
     /// Representation of bitmap.
     repr: BitmapRepr,
-    /// Length of this bitmap. Must be `<= i32::MAX`.
+    /// Length of this bitmap. Must be `<= MAX_LEN`.
     nbits: usize,
 }
 
@@ -164,7 +162,7 @@ impl core::ops::Deref for BitmapVec {
     type Target = Bitmap;
 
     fn deref(&self) -> &Bitmap {
-        let ptr = if self.nbits <= BITS_PER_LONG {
+        let ptr = if self.nbits <= BitmapVec::MAX_INLINE_LEN {
             // SAFETY: Bitmap is represented inline.
             #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")]
             unsafe {
@@ -183,7 +181,7 @@ fn deref(&self) -> &Bitmap {
 
 impl core::ops::DerefMut for BitmapVec {
     fn deref_mut(&mut self) -> &mut Bitmap {
-        let ptr = if self.nbits <= BITS_PER_LONG {
+        let ptr = if self.nbits <= BitmapVec::MAX_INLINE_LEN {
             // SAFETY: Bitmap is represented inline.
             #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")]
             unsafe {
@@ -213,7 +211,7 @@ unsafe impl Sync for BitmapVec {}
 
 impl Drop for BitmapVec {
     fn drop(&mut self) {
-        if self.nbits <= BITS_PER_LONG {
+        if self.nbits <= BitmapVec::MAX_INLINE_LEN {
             return;
         }
         // SAFETY: `self.ptr` was returned by the C `bitmap_zalloc`.
@@ -226,23 +224,29 @@ fn drop(&mut self) {
 }
 
 impl BitmapVec {
+    /// The maximum possible length of a `BitmapVec`.
+    pub const MAX_LEN: usize = i32::MAX as usize;
+
+    /// The maximum length that uses the inline representation.
+    pub const MAX_INLINE_LEN: usize = usize::BITS as usize;
+
     /// Constructs a new [`BitmapVec`].
     ///
     /// Fails with [`AllocError`] when the [`BitmapVec`] could not be allocated. This
-    /// includes the case when `nbits` is greater than `i32::MAX`.
+    /// includes the case when `nbits` is greater than `MAX_LEN`.
     #[inline]
     pub fn new(nbits: usize, flags: Flags) -> Result<Self, AllocError> {
-        if nbits <= BITS_PER_LONG {
+        if nbits <= BitmapVec::MAX_INLINE_LEN {
             return Ok(BitmapVec {
                 repr: BitmapRepr { bitmap: 0 },
                 nbits,
             });
         }
-        if nbits > i32::MAX.try_into().unwrap() {
+        if nbits > Self::MAX_LEN {
             return Err(AllocError);
         }
         let nbits_u32 = u32::try_from(nbits).unwrap();
-        // SAFETY: `BITS_PER_LONG < nbits` and `nbits <= i32::MAX`.
+        // SAFETY: `MAX_INLINE_LEN < nbits` and `nbits <= MAX_LEN`.
         let ptr = unsafe { bindings::bitmap_zalloc(nbits_u32, flags.as_raw()) };
         let ptr = NonNull::new(ptr).ok_or(AllocError)?;
         // INVARIANT: `ptr` returned by C `bitmap_zalloc` and `nbits` checked.
@@ -495,9 +499,10 @@ mod tests {
     #[test]
     fn bitmap_borrow() {
         let fake_bitmap: [usize; 2] = [0, 0];
+        let fake_bitmap_len = 2 * usize::BITS as usize;
         // SAFETY: `fake_c_bitmap` is an array of expected length.
-        let b = unsafe { Bitmap::from_raw(fake_bitmap.as_ptr(), 2 * BITS_PER_LONG) };
-        assert_eq!(2 * BITS_PER_LONG, b.len());
+        let b = unsafe { Bitmap::from_raw(fake_bitmap.as_ptr(), fake_bitmap_len) };
+        assert_eq!(fake_bitmap_len, b.len());
         assert_eq!(None, b.next_bit(0));
     }
 
diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
index a41a3404213ca92d53b14c80101afff6ac8c416e..8f68b45a3da1f62dd0d010480837de49b9a343ba 100644
--- a/rust/kernel/id_pool.rs
+++ b/rust/kernel/id_pool.rs
@@ -7,8 +7,6 @@
 use crate::alloc::{AllocError, Flags};
 use crate::bitmap::BitmapVec;
 
-const BITS_PER_LONG: usize = bindings::BITS_PER_LONG as usize;
-
 /// Represents a dynamic ID pool backed by a [`BitmapVec`].
 ///
 /// Clients acquire and release IDs from unset bits in a bitmap.
@@ -97,13 +95,12 @@ pub fn realloc(&self, flags: Flags) -> Result<PoolResizer, AllocError> {
 impl IdPool {
     /// Constructs a new [`IdPool`].
     ///
-    /// A capacity below [`BITS_PER_LONG`] is adjusted to
-    /// [`BITS_PER_LONG`].
+    /// A capacity below [`MAX_INLINE_LEN`] is adjusted to [`MAX_INLINE_LEN`].
     ///
-    /// [`BITS_PER_LONG`]: srctree/include/asm-generic/bitsperlong.h
+    /// [`MAX_INLINE_LEN`]: BitmapVec::MAX_INLINE_LEN
     #[inline]
     pub fn new(num_ids: usize, flags: Flags) -> Result<Self, AllocError> {
-        let num_ids = core::cmp::max(num_ids, BITS_PER_LONG);
+        let num_ids = usize::max(num_ids, BitmapVec::MAX_INLINE_LEN);
         let map = BitmapVec::new(num_ids, flags)?;
         Ok(Self { map })
     }
@@ -116,9 +113,9 @@ pub fn capacity(&self) -> usize {
 
     /// Returns a [`ReallocRequest`] if the [`IdPool`] can be shrunk, [`None`] otherwise.
     ///
-    /// The capacity of an [`IdPool`] cannot be shrunk below [`BITS_PER_LONG`].
+    /// The capacity of an [`IdPool`] cannot be shrunk below [`MAX_INLINE_LEN`].
     ///
-    /// [`BITS_PER_LONG`]: srctree/include/asm-generic/bitsperlong.h
+    /// [`MAX_INLINE_LEN`]: BitmapVec::MAX_INLINE_LEN
     ///
     /// # Examples
     ///
@@ -130,14 +127,14 @@ pub fn capacity(&self) -> usize {
     /// let alloc_request = pool.shrink_request().ok_or(AllocError)?;
     /// let resizer = alloc_request.realloc(GFP_KERNEL)?;
     /// pool.shrink(resizer);
-    /// assert_eq!(pool.capacity(), kernel::bindings::BITS_PER_LONG as usize);
+    /// assert_eq!(pool.capacity(), kernel::BitmapVec::MAX_INLINE_LEN);
     /// # Ok::<(), AllocError>(())
     /// ```
     #[inline]
     pub fn shrink_request(&self) -> Option<ReallocRequest> {
         let cap = self.capacity();
-        // Shrinking below [`BITS_PER_LONG`] is never possible.
-        if cap <= BITS_PER_LONG {
+        // Shrinking below `MAX_INLINE_LEN` is never possible.
+        if cap <= BitmapVec::MAX_INLINE_LEN {
             return None;
         }
         // Determine if the bitmap can shrink based on the position of
@@ -146,13 +143,13 @@ pub fn shrink_request(&self) -> Option<ReallocRequest> {
         // bitmap should shrink to half its current size.
         let Some(bit) = self.map.last_bit() else {
             return Some(ReallocRequest {
-                num_ids: BITS_PER_LONG,
+                num_ids: BitmapVec::MAX_INLINE_LEN,
             });
         };
         if bit >= (cap / 4) {
             return None;
         }
-        let num_ids = usize::max(BITS_PER_LONG, cap / 2);
+        let num_ids = usize::max(BitmapVec::MAX_INLINE_LEN, cap / 2);
         Some(ReallocRequest { num_ids })
     }
 
@@ -177,11 +174,13 @@ pub fn shrink(&mut self, mut resizer: PoolResizer) {
 
     /// Returns a [`ReallocRequest`] for growing this [`IdPool`], if possible.
     ///
-    /// The capacity of an [`IdPool`] cannot be grown above [`i32::MAX`].
+    /// The capacity of an [`IdPool`] cannot be grown above [`MAX_LEN`].
+    ///
+    /// [`MAX_LEN`]: BitmapVec::MAX_LEN
     #[inline]
     pub fn grow_request(&self) -> Option<ReallocRequest> {
         let num_ids = self.capacity() * 2;
-        if num_ids > i32::MAX.try_into().unwrap() {
+        if num_ids > BitmapVec::MAX_LEN {
             return None;
         }
         Some(ReallocRequest { num_ids })

-- 
2.51.2.1041.gc1ab5b90ca-goog



* [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline()
  2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
  2025-11-12 12:47 ` [PATCH v5 1/6] rust: bitmap: add MAX_LEN and MAX_INLINE_LEN constants Alice Ryhl
@ 2025-11-12 12:47 ` Alice Ryhl
  2025-11-12 15:36   ` Yury Norov
  2025-11-13 17:41   ` kernel test robot
  2025-11-12 12:47 ` [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity() Alice Ryhl
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

This constructor is useful when you just want to create a BitmapVec
without allocating but don't care how large it is.
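
Intended usage looks like this (illustrative sketch, not part of the
patch; set_bit() and len() come from the Deref to Bitmap):

    use kernel::bitmap::BitmapVec;

    // Infallible and allocation-free: the bits are stored inline.
    let mut bitmap = BitmapVec::new_inline();
    assert_eq!(bitmap.len(), BitmapVec::MAX_INLINE_LEN);
    bitmap.set_bit(0);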

Acked-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
Reviewed-by: Burak Emir <bqe@google.com>
Reviewed-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/bitmap.rs | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/rust/kernel/bitmap.rs b/rust/kernel/bitmap.rs
index 0705646c6251a49f213a45f1f013cb9eb2ed81de..bbbf0f9e5d9251953584581af57042143447b996 100644
--- a/rust/kernel/bitmap.rs
+++ b/rust/kernel/bitmap.rs
@@ -230,6 +230,16 @@ impl BitmapVec {
     /// The maximum length that uses the inline representation.
     pub const MAX_INLINE_LEN: usize = usize::BITS as usize;
 
+    /// Construct a longest possible inline [`BitmapVec`].
+    #[inline]
+    pub fn new_inline() -> Self {
+        // INVARIANT: `nbits <= NO_ALLOC_MAX_LEN`, so an inline bitmap is the right repr.
+        BitmapVec {
+            repr: BitmapRepr { bitmap: 0 },
+            nbits: BitmapVec::NO_ALLOC_MAX_LEN,
+        }
+    }
+
     /// Constructs a new [`BitmapVec`].
     ///
     /// Fails with [`AllocError`] when the [`BitmapVec`] could not be allocated. This

-- 
2.51.2.1041.gc1ab5b90ca-goog



* [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity()
  2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
  2025-11-12 12:47 ` [PATCH v5 1/6] rust: bitmap: add MAX_LEN and MAX_INLINE_LEN constants Alice Ryhl
  2025-11-12 12:47 ` [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline() Alice Ryhl
@ 2025-11-12 12:47 ` Alice Ryhl
  2025-11-12 12:55   ` Alice Ryhl
                     ` (2 more replies)
  2025-11-12 12:47 ` [PATCH v5 4/6] rust: id_pool: do not supply starting capacity Alice Ryhl
                   ` (3 subsequent siblings)
  6 siblings, 3 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

We want to change ::new() to take no parameters and produce a pool that
is as large as possible while also being inline because that is the
constructor that Rust Binder actually needs.

However, to avoid complications in examples, we still need the current
constructor. So rename it to with_capacity(), which is the idiomatic
Rust name for this kind of constructor.

Reviewed-by: Burak Emir <bqe@google.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/id_pool.rs | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
index 8f68b45a3da1f62dd0d010480837de49b9a343ba..90836b05c6155f98ab8393c8291749f408dbd4da 100644
--- a/rust/kernel/id_pool.rs
+++ b/rust/kernel/id_pool.rs
@@ -26,7 +26,7 @@
 /// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
 /// use kernel::id_pool::IdPool;
 ///
-/// let mut pool = IdPool::new(64, GFP_KERNEL)?;
+/// let mut pool = IdPool::with_capacity(64, GFP_KERNEL)?;
 /// for i in 0..64 {
 ///     assert_eq!(i, pool.acquire_next_id(i).ok_or(ENOSPC)?);
 /// }
@@ -93,14 +93,14 @@ pub fn realloc(&self, flags: Flags) -> Result<PoolResizer, AllocError> {
 }
 
 impl IdPool {
-    /// Constructs a new [`IdPool`].
+    /// Constructs a new [`IdPool`] with space for a specific number of bits.
     ///
     /// A capacity below [`MAX_INLINE_LEN`] is adjusted to [`MAX_INLINE_LEN`].
     ///
     /// [`MAX_INLINE_LEN`]: BitmapVec::MAX_INLINE_LEN
     #[inline]
-    pub fn new(num_ids: usize, flags: Flags) -> Result<Self, AllocError> {
-        let num_ids = usize::max(num_ids, BitmapVec::MAX_INLINE_LEN);
+    pub fn with_capacity(num_ids: usize, flags: Flags) -> Result<Self, AllocError> {
+        let num_ids = usize::max(num_ids, BITS_PER_LONG);
         let map = BitmapVec::new(num_ids, flags)?;
         Ok(Self { map })
     }
@@ -123,7 +123,7 @@ pub fn capacity(&self) -> usize {
     /// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
     /// use kernel::id_pool::{ReallocRequest, IdPool};
     ///
-    /// let mut pool = IdPool::new(1024, GFP_KERNEL)?;
+    /// let mut pool = IdPool::with_capacity(1024, GFP_KERNEL)?;
     /// let alloc_request = pool.shrink_request().ok_or(AllocError)?;
     /// let resizer = alloc_request.realloc(GFP_KERNEL)?;
     /// pool.shrink(resizer);

-- 
2.51.2.1041.gc1ab5b90ca-goog



* [PATCH v5 4/6] rust: id_pool: do not supply starting capacity
  2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
                   ` (2 preceding siblings ...)
  2025-11-12 12:47 ` [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity() Alice Ryhl
@ 2025-11-12 12:47 ` Alice Ryhl
  2025-11-12 12:47 ` [PATCH v5 5/6] rust: id_pool: do not immediately acquire new ids Alice Ryhl
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

Rust Binder wants to use inline bitmaps whenever possible to avoid
allocations, so introduce a constructor for an IdPool that stores the
bitmap inline, without the caller having to supply a starting capacity.

The existing constructor has been renamed to with_capacity() in the
previous patch to match constructors for other similar types.
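
Intended usage (illustrative sketch, not part of the patch):

    use kernel::bitmap::BitmapVec;
    use kernel::id_pool::IdPool;

    // Infallible and allocation-free: the backing bitmap is inline.
    let pool = IdPool::new();
    assert_eq!(pool.capacity(), BitmapVec::MAX_INLINE_LEN);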

Acked-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
Reviewed-by: Burak Emir <bqe@google.com>
Reviewed-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/id_pool.rs | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
index 90836b05c6155f98ab8393c8291749f408dbd4da..dc856fc68f4cf38971332f15e63440f27868a3b6 100644
--- a/rust/kernel/id_pool.rs
+++ b/rust/kernel/id_pool.rs
@@ -93,6 +93,18 @@ pub fn realloc(&self, flags: Flags) -> Result<PoolResizer, AllocError> {
 }
 
 impl IdPool {
+    /// Constructs a new [`IdPool`].
+    ///
+    /// The pool will have a capacity of [`NO_ALLOC_MAX_LEN`].
+    ///
+    /// [`NO_ALLOC_MAX_LEN`]: BitmapVec::NO_ALLOC_MAX_LEN
+    #[inline]
+    pub fn new() -> Self {
+        Self {
+            map: BitmapVec::new_inline(),
+        }
+    }
+
     /// Constructs a new [`IdPool`] with space for a specific number of bits.
     ///
     /// A capacity below [`MAX_INLINE_LEN`] is adjusted to [`MAX_INLINE_LEN`].
@@ -223,3 +235,10 @@ pub fn release_id(&mut self, id: usize) {
         self.map.clear_bit(id);
     }
 }
+
+impl Default for IdPool {
+    #[inline]
+    fn default() -> Self {
+        Self::new()
+    }
+}

-- 
2.51.2.1041.gc1ab5b90ca-goog



* [PATCH v5 5/6] rust: id_pool: do not immediately acquire new ids
  2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
                   ` (3 preceding siblings ...)
  2025-11-12 12:47 ` [PATCH v5 4/6] rust: id_pool: do not supply starting capacity Alice Ryhl
@ 2025-11-12 12:47 ` Alice Ryhl
  2025-11-12 12:47 ` [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles Alice Ryhl
  2025-11-12 15:43 ` [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Yury Norov
  6 siblings, 0 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

When Rust Binder assigns a new ID, it performs various fallible
operations before it "commits" to actually using the new ID. To support
this pattern, change acquire_next_id() so that it does not immediately
call set_bit(), but instead returns an object that may be used to call
set_bit() later.

The UnusedId type holds an exclusive reference to the IdPool, so it's
guaranteed that nobody else can call find_unused_id() while the UnusedId
object is live.
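
The intended calling pattern looks roughly like this (sketch only, not
part of the patch; the `setup` closure stands in for the driver's
fallible preparation work):

    use kernel::id_pool::IdPool;
    use kernel::prelude::*;

    fn reserve_and_commit(
        pool: &mut IdPool,
        setup: impl FnOnce(u32) -> Result,
    ) -> Result<usize> {
        let unused = pool.find_unused_id(0).ok_or(ENOSPC)?;
        // Fallible work runs while the id is only reserved by the
        // exclusive borrow; if it fails, there is nothing to roll back.
        setup(unused.as_u32())?;
        // Commit the id only once nothing else can fail.
        Ok(unused.acquire())
    }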

Reviewed-by: Burak Emir <bqe@google.com>
Reviewed-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/id_pool.rs | 75 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 60 insertions(+), 15 deletions(-)

diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
index dc856fc68f4cf38971332f15e63440f27868a3b6..114e39176cb48b3806797aba48e4d43e0c42e3d2 100644
--- a/rust/kernel/id_pool.rs
+++ b/rust/kernel/id_pool.rs
@@ -23,8 +23,8 @@
 /// Basic usage
 ///
 /// ```
-/// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
-/// use kernel::id_pool::IdPool;
+/// use kernel::alloc::AllocError;
+/// use kernel::id_pool::{IdPool, UnusedId};
 ///
 /// let mut pool = IdPool::with_capacity(64, GFP_KERNEL)?;
 /// for i in 0..64 {
@@ -32,13 +32,13 @@
 /// }
 ///
 /// pool.release_id(23);
-/// assert_eq!(23, pool.acquire_next_id(0).ok_or(ENOSPC)?);
+/// assert_eq!(23, pool.find_unused_id(0).ok_or(ENOSPC)?.acquire());
 ///
-/// assert_eq!(None, pool.acquire_next_id(0));  // time to realloc.
+/// assert!(pool.find_unused_id(0).is_none());  // time to realloc.
 /// let resizer = pool.grow_request().ok_or(ENOSPC)?.realloc(GFP_KERNEL)?;
 /// pool.grow(resizer);
 ///
-/// assert_eq!(pool.acquire_next_id(0), Some(64));
+/// assert_eq!(pool.find_unused_id(0).ok_or(ENOSPC)?.acquire(), 64);
 /// # Ok::<(), Error>(())
 /// ```
 ///
@@ -52,8 +52,8 @@
 /// fn get_id_maybe_realloc(guarded_pool: &SpinLock<IdPool>) -> Result<usize, AllocError> {
 ///     let mut pool = guarded_pool.lock();
 ///     loop {
-///         match pool.acquire_next_id(0) {
-///             Some(index) => return Ok(index),
+///         match pool.find_unused_id(0) {
+///             Some(index) => return Ok(index.acquire()),
 ///             None => {
 ///                 let alloc_request = pool.grow_request();
 ///                 drop(pool);
@@ -215,18 +215,18 @@ pub fn grow(&mut self, mut resizer: PoolResizer) {
         self.map = resizer.new;
     }
 
-    /// Acquires a new ID by finding and setting the next zero bit in the
-    /// bitmap.
+    /// Finds an unused ID in the bitmap.
     ///
     /// Upon success, returns its index. Otherwise, returns [`None`]
     /// to indicate that a [`Self::grow_request`] is needed.
     #[inline]
-    pub fn acquire_next_id(&mut self, offset: usize) -> Option<usize> {
-        let next_zero_bit = self.map.next_zero_bit(offset);
-        if let Some(nr) = next_zero_bit {
-            self.map.set_bit(nr);
-        }
-        next_zero_bit
+    #[must_use]
+    pub fn find_unused_id(&mut self, offset: usize) -> Option<UnusedId<'_>> {
+        // INVARIANT: `next_zero_bit()` returns None or an integer less than `map.len()`
+        Some(UnusedId {
+            id: self.map.next_zero_bit(offset)?,
+            pool: self,
+        })
     }
 
     /// Releases an ID.
@@ -236,6 +236,51 @@ pub fn release_id(&mut self, id: usize) {
     }
 }
 
+/// Represents an unused id in an [`IdPool`].
+///
+/// # Invariants
+///
+/// The value of `id` is less than `pool.map.len()`.
+pub struct UnusedId<'pool> {
+    id: usize,
+    pool: &'pool mut IdPool,
+}
+
+impl<'pool> UnusedId<'pool> {
+    /// Get the unused id as an usize.
+    ///
+    /// Be aware that the id has not yet been acquired in the pool. The
+    /// [`acquire`] method must be called to prevent others from taking the id.
+    ///
+    /// [`acquire`]: UnusedId::acquire()
+    #[inline]
+    #[must_use]
+    pub fn as_usize(&self) -> usize {
+        self.id
+    }
+
+    /// Get the unused id as a u32.
+    ///
+    /// Be aware that the id has not yet been acquired in the pool. The
+    /// [`acquire`] method must be called to prevent others from taking the id.
+    ///
+    /// [`acquire`]: UnusedId::acquire()
+    #[inline]
+    #[must_use]
+    pub fn as_u32(&self) -> u32 {
+        // CAST: By the type invariants:
+        // `self.id < pool.map.len() <= BitmapVec::MAX_LEN = i32::MAX`.
+        self.id as u32
+    }
+
+    /// Acquire the unused id.
+    #[inline]
+    pub fn acquire(self) -> usize {
+        self.pool.map.set_bit(self.id);
+        self.id
+    }
+}
+
 impl Default for IdPool {
     #[inline]
     fn default() -> Self {

-- 
2.51.2.1041.gc1ab5b90ca-goog



* [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
                   ` (4 preceding siblings ...)
  2025-11-12 12:47 ` [PATCH v5 5/6] rust: id_pool: do not immediately acquire new ids Alice Ryhl
@ 2025-11-12 12:47 ` Alice Ryhl
  2025-11-12 19:09   ` Yury Norov
  2025-11-12 15:43 ` [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Yury Norov
  6 siblings, 1 reply; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

To find an unused Binder handle, Rust Binder currently iterates the
red/black tree from the beginning until it finds a gap in the keys. This
is extremely slow.

To improve the performance, add a bitmap that keeps track of which
indices are actually in use. This allows us to quickly find an unused
key in the red/black tree.

This logic matches the approach used by C Binder. It was chosen
partially because it's the most memory efficient solution.

Reviewed-by: Burak Emir <bqe@google.com>
Acked-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/android/binder/process.rs | 63 ++++++++++++++++++++++++++++-----------
 1 file changed, 46 insertions(+), 17 deletions(-)

diff --git a/drivers/android/binder/process.rs b/drivers/android/binder/process.rs
index f13a747e784c84a0fb09cbf47442712106eba07c..933b0f737b38ffac536b19c9330dfc63ffc72f2b 100644
--- a/drivers/android/binder/process.rs
+++ b/drivers/android/binder/process.rs
@@ -19,6 +19,7 @@
     cred::Credential,
     error::Error,
     fs::file::{self, File},
+    id_pool::IdPool,
     list::{List, ListArc, ListArcField, ListLinks},
     mm,
     prelude::*,
@@ -367,6 +368,8 @@ impl ListItem<{Self::LIST_NODE}> for NodeRefInfo {
 struct ProcessNodeRefs {
     /// Used to look up nodes using the 32-bit id that this process knows it by.
     by_handle: RBTree<u32, ListArc<NodeRefInfo, { NodeRefInfo::LIST_PROC }>>,
+    /// Used to quickly find unused ids in `by_handle`.
+    handle_is_present: IdPool,
     /// Used to look up nodes without knowing their local 32-bit id. The usize is the address of
     /// the underlying `Node` struct as returned by `Node::global_id`.
     by_node: RBTree<usize, u32>,
@@ -381,6 +384,7 @@ impl ProcessNodeRefs {
     fn new() -> Self {
         Self {
             by_handle: RBTree::new(),
+            handle_is_present: IdPool::new(),
             by_node: RBTree::new(),
             freeze_listeners: RBTree::new(),
         }
@@ -775,7 +779,7 @@ pub(crate) fn get_node(
     pub(crate) fn insert_or_update_handle(
         self: ArcBorrow<'_, Process>,
         node_ref: NodeRef,
-        is_mananger: bool,
+        is_manager: bool,
     ) -> Result<u32> {
         {
             let mut refs = self.node_refs.lock();
@@ -794,7 +798,32 @@ pub(crate) fn insert_or_update_handle(
         let reserve2 = RBTreeNodeReservation::new(GFP_KERNEL)?;
         let info = UniqueArc::new_uninit(GFP_KERNEL)?;
 
-        let mut refs = self.node_refs.lock();
+        let mut refs_lock = self.node_refs.lock();
+        let mut refs = &mut *refs_lock;
+
+        let (unused_id, by_handle_slot) = loop {
+            // ID 0 may only be used by the manager.
+            let start = if is_manager { 0 } else { 1 };
+
+            if let Some(res) = refs.handle_is_present.find_unused_id(start) {
+                match refs.by_handle.entry(res.as_u32()) {
+                    rbtree::Entry::Vacant(entry) => break (res, entry),
+                    rbtree::Entry::Occupied(_) => {
+                        pr_err!("Detected mismatch between handle_is_present and by_handle");
+                        res.acquire();
+                        continue;
+                    }
+                }
+            }
+
+            let grow_request = refs.handle_is_present.grow_request().ok_or(ENOMEM)?;
+            drop(refs_lock);
+            let resizer = grow_request.realloc(GFP_KERNEL)?;
+            refs_lock = self.node_refs.lock();
+            refs = &mut *refs_lock;
+            refs.handle_is_present.grow(resizer);
+        };
+        let handle = unused_id.as_u32();
 
         // Do a lookup again as node may have been inserted before the lock was reacquired.
         if let Some(handle_ref) = refs.by_node.get(&node_ref.node.global_id()) {
@@ -804,20 +833,9 @@ pub(crate) fn insert_or_update_handle(
             return Ok(handle);
         }
 
-        // Find id.
-        let mut target: u32 = if is_mananger { 0 } else { 1 };
-        for handle in refs.by_handle.keys() {
-            if *handle > target {
-                break;
-            }
-            if *handle == target {
-                target = target.checked_add(1).ok_or(ENOMEM)?;
-            }
-        }
-
         let gid = node_ref.node.global_id();
         let (info_proc, info_node) = {
-            let info_init = NodeRefInfo::new(node_ref, target, self.into());
+            let info_init = NodeRefInfo::new(node_ref, handle, self.into());
             match info.pin_init_with(info_init) {
                 Ok(info) => ListArc::pair_from_pin_unique(info),
                 // error is infallible
@@ -838,9 +856,10 @@ pub(crate) fn insert_or_update_handle(
         // `info_node` into the right node's `refs` list.
         unsafe { info_proc.node_ref2().node.insert_node_info(info_node) };
 
-        refs.by_node.insert(reserve1.into_node(gid, target));
-        refs.by_handle.insert(reserve2.into_node(target, info_proc));
-        Ok(target)
+        refs.by_node.insert(reserve1.into_node(gid, handle));
+        by_handle_slot.insert(info_proc, reserve2);
+        unused_id.acquire();
+        Ok(handle)
     }
 
     pub(crate) fn get_transaction_node(&self, handle: u32) -> BinderResult<NodeRef> {
@@ -905,6 +924,16 @@ pub(crate) fn update_ref(
                 let id = info.node_ref().node.global_id();
                 refs.by_handle.remove(&handle);
                 refs.by_node.remove(&id);
+                refs.handle_is_present.release_id(handle as usize);
+
+                if let Some(shrink) = refs.handle_is_present.shrink_request() {
+                    drop(refs);
+                    // This intentionally ignores allocation failures.
+                    if let Ok(new_bitmap) = shrink.realloc(GFP_KERNEL) {
+                        refs = self.node_refs.lock();
+                        refs.handle_is_present.shrink(new_bitmap);
+                    }
+                }
             }
         } else {
             // All refs are cleared in process exit, so this warning is expected in that case.

-- 
2.51.2.1041.gc1ab5b90ca-goog



* Re: [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity()
  2025-11-12 12:47 ` [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity() Alice Ryhl
@ 2025-11-12 12:55   ` Alice Ryhl
  2025-11-12 15:46   ` Yury Norov
  2025-11-14  1:31   ` kernel test robot
  2 siblings, 0 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 12:55 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Yury Norov
  Cc: Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
	Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Wed, Nov 12, 2025 at 12:47:21PM +0000, Alice Ryhl wrote:
> We want to change ::new() to take no parameters and produce a pool that
> is as large as possible while also being inline because that is the
> constructor that Rust Binder actually needs.
> 
> However, to avoid complications in examples, we still need the current
> constructor. So rename it to with_capacity(), which is the idiomatic
> Rust name for this kind of constructor.
> 
> Reviewed-by: Burak Emir <bqe@google.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

> -        let num_ids = usize::max(num_ids, BitmapVec::MAX_INLINE_LEN);
> +        let num_ids = usize::max(num_ids, BITS_PER_LONG);

Ah ... I messed up the rebase here.

Alice


* Re: [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline()
  2025-11-12 12:47 ` [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline() Alice Ryhl
@ 2025-11-12 15:36   ` Yury Norov
  2025-11-13 17:41   ` kernel test robot
  1 sibling, 0 replies; 20+ messages in thread
From: Yury Norov @ 2025-11-12 15:36 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Wed, Nov 12, 2025 at 12:47:20PM +0000, Alice Ryhl wrote:
> This constructor is useful when you just want to create a BitmapVec
> without allocating but don't care how large it is.
> 
> Acked-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
> Reviewed-by: Burak Emir <bqe@google.com>
> Reviewed-by: Danilo Krummrich <dakr@kernel.org>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  rust/kernel/bitmap.rs | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/rust/kernel/bitmap.rs b/rust/kernel/bitmap.rs
> index 0705646c6251a49f213a45f1f013cb9eb2ed81de..bbbf0f9e5d9251953584581af57042143447b996 100644
> --- a/rust/kernel/bitmap.rs
> +++ b/rust/kernel/bitmap.rs
> @@ -230,6 +230,16 @@ impl BitmapVec {
>      /// The maximum length that uses the inline representation.
>      pub const MAX_INLINE_LEN: usize = usize::BITS as usize;
>  
> +    /// Construct a longest possible inline [`BitmapVec`].
> +    #[inline]
> +    pub fn new_inline() -> Self {
> +        // INVARIANT: `nbits <= NO_ALLOC_MAX_LEN`, so an inline bitmap is the right repr.
> +        BitmapVec {
> +            repr: BitmapRepr { bitmap: 0 },
> +            nbits: BitmapVec::NO_ALLOC_MAX_LEN,

This should be MAX_INLINE_LEN, I guess?

> +        }
> +    }
> +
>      /// Constructs a new [`BitmapVec`].
>      ///
>      /// Fails with [`AllocError`] when the [`BitmapVec`] could not be allocated. This
> 
> -- 
> 2.51.2.1041.gc1ab5b90ca-goog


* Re: [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver
  2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
                   ` (5 preceding siblings ...)
  2025-11-12 12:47 ` [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles Alice Ryhl
@ 2025-11-12 15:43 ` Yury Norov
  6 siblings, 0 replies; 20+ messages in thread
From: Yury Norov @ 2025-11-12 15:43 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Wed, Nov 12, 2025 at 12:47:18PM +0000, Alice Ryhl wrote:
> See the commit message of the Rust Binder commit for details.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
> Changes in v5:
> - Rename NO_ALLOC_MAX_LEN to MAX_INLINE_LEN.
> - Update all uses of MAX_LEN/MAX_INLINE_LEN to reference the new
>   constants.
> - Link to v4: https://lore.kernel.org/r/20251110-binder-bitmap-v4-0-5ed8a7fab1b9@google.com

Patches 2 and 4 still refer to NO_ALLOC_MAX_LEN. I think you need to
resend.

Thanks,
Yury
 
> Changes in v4:
> - Keep existing IdPool constructor (renamed to with_capacity).
> - Undo docs changes that were due to the removal of the existing IdPool
>   constructor.
> - List id < pool.map.len() as invariant for UnusedId and mention it in
>   as_u32() comment.
> - Fix [`acquire`] docs links.
> - Link to v3: https://lore.kernel.org/r/20251028-binder-bitmap-v3-0-32822d4b3207@google.com
> 
> Changes in v3:
> - Also update documentation to fix compile errors.
> - Use new_inline() instead of new_small().
> - Reword commit message of "rust: id_pool: do not supply starting capacity"
> - Change UnusedId::acquire() to return usize.
> - Link to v2: https://lore.kernel.org/r/20251021-binder-bitmap-v2-0-e652d172c62b@google.com
> 
> Changes in v2:
> - Use id_pool.
> - Link to v1: https://lore.kernel.org/r/20251020-binder-bitmap-v1-0-879bec9cddc1@google.com
> 
> ---
> Alice Ryhl (6):
>       rust: bitmap: add MAX_LEN and MAX_INLINE_LEN constants
>       rust: bitmap: add BitmapVec::new_inline()
>       rust: bitmap: rename IdPool::new() to with_capacity()
>       rust: id_pool: do not supply starting capacity
>       rust: id_pool: do not immediately acquire new ids
>       rust_binder: use bitmap for allocation of handles
> 
>  drivers/android/binder/process.rs |  63 ++++++++++++++-----
>  rust/kernel/bitmap.rs             |  43 ++++++++-----
>  rust/kernel/id_pool.rs            | 129 ++++++++++++++++++++++++++++----------
>  3 files changed, 171 insertions(+), 64 deletions(-)
> ---
> base-commit: 211ddde0823f1442e4ad052a2f30f050145ccada
> change-id: 20251007-binder-bitmap-5aa0d966fb1f
> 
> Best regards,
> -- 
> Alice Ryhl <aliceryhl@google.com>


* Re: [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity()
  2025-11-12 12:47 ` [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity() Alice Ryhl
  2025-11-12 12:55   ` Alice Ryhl
@ 2025-11-12 15:46   ` Yury Norov
  2025-11-14  1:31   ` kernel test robot
  2 siblings, 0 replies; 20+ messages in thread
From: Yury Norov @ 2025-11-12 15:46 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Wed, Nov 12, 2025 at 12:47:21PM +0000, Alice Ryhl wrote:
> We want to change ::new() to take no parameters and produce a pool that
> is as large as possible while also being inline because that is the
> constructor that Rust Binder actually needs.
> 
> However, to avoid complications in examples, we still need the current
> constructor. So rename it to with_capacity(), which is the idiomatic
> Rust name for this kind of constructor.
> 
> Reviewed-by: Burak Emir <bqe@google.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  rust/kernel/id_pool.rs | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
> index 8f68b45a3da1f62dd0d010480837de49b9a343ba..90836b05c6155f98ab8393c8291749f408dbd4da 100644
> --- a/rust/kernel/id_pool.rs
> +++ b/rust/kernel/id_pool.rs
> @@ -26,7 +26,7 @@
>  /// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
>  /// use kernel::id_pool::IdPool;
>  ///
> -/// let mut pool = IdPool::new(64, GFP_KERNEL)?;
> +/// let mut pool = IdPool::with_capacity(64, GFP_KERNEL)?;
>  /// for i in 0..64 {
>  ///     assert_eq!(i, pool.acquire_next_id(i).ok_or(ENOSPC)?);
>  /// }
> @@ -93,14 +93,14 @@ pub fn realloc(&self, flags: Flags) -> Result<PoolResizer, AllocError> {
>  }
>  
>  impl IdPool {
> -    /// Constructs a new [`IdPool`].
> +    /// Constructs a new [`IdPool`] with space for a specific number of bits.
>      ///
>      /// A capacity below [`MAX_INLINE_LEN`] is adjusted to [`MAX_INLINE_LEN`].
>      ///
>      /// [`MAX_INLINE_LEN`]: BitmapVec::MAX_INLINE_LEN
>      #[inline]
> -    pub fn new(num_ids: usize, flags: Flags) -> Result<Self, AllocError> {
> -        let num_ids = usize::max(num_ids, BitmapVec::MAX_INLINE_LEN);
> +    pub fn with_capacity(num_ids: usize, flags: Flags) -> Result<Self, AllocError> {
> +        let num_ids = usize::max(num_ids, BITS_PER_LONG);

           let num_ids = usize::max(num_ids, MAX_INLINE_LEN);

>          let map = BitmapVec::new(num_ids, flags)?;
>          Ok(Self { map })
>      }
> @@ -123,7 +123,7 @@ pub fn capacity(&self) -> usize {
>      /// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
>      /// use kernel::id_pool::{ReallocRequest, IdPool};
>      ///
> -    /// let mut pool = IdPool::new(1024, GFP_KERNEL)?;
> +    /// let mut pool = IdPool::with_capacity(1024, GFP_KERNEL)?;
>      /// let alloc_request = pool.shrink_request().ok_or(AllocError)?;
>      /// let resizer = alloc_request.realloc(GFP_KERNEL)?;
>      /// pool.shrink(resizer);
> 
> -- 
> 2.51.2.1041.gc1ab5b90ca-goog


* Re: [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-12 12:47 ` [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles Alice Ryhl
@ 2025-11-12 19:09   ` Yury Norov
  2025-11-12 23:35     ` Alice Ryhl
  0 siblings, 1 reply; 20+ messages in thread
From: Yury Norov @ 2025-11-12 19:09 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Wed, Nov 12, 2025 at 12:47:24PM +0000, Alice Ryhl wrote:
> To find an unused Binder handle, Rust Binder currently iterates the
> red/black tree from the beginning until it finds a gap in the keys. This
> is extremely slow.

Can you share performance numbers? 
 
> To improve the performance, add a bitmap that keeps track of which
> indices are actually in use. This allows us to quickly find an unused
> key in the red/black tree.
> 
> This logic matches the approach used by C Binder. It was chosen
> partially because it's the most memory efficient solution.

That's inaccurate. You are adding a new data structure (bitmap), advocating
it with an improvement on search side, and that makes sense.

But now you're saying it's also a more memory efficient approach, which
doesn't sound trivial because the most memory efficient solution is to
bring no new data structures at all.

I guess you meant that bitmap is the most efficient data structure to
index used/unused nodes. If so, can you please rephrase the sentence?

> Reviewed-by: Burak Emir <bqe@google.com>
> Acked-by: Carlos Llamas <cmllamas@google.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  drivers/android/binder/process.rs | 63 ++++++++++++++++++++++++++++-----------
>  1 file changed, 46 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/android/binder/process.rs b/drivers/android/binder/process.rs
> index f13a747e784c84a0fb09cbf47442712106eba07c..933b0f737b38ffac536b19c9330dfc63ffc72f2b 100644
> --- a/drivers/android/binder/process.rs
> +++ b/drivers/android/binder/process.rs
> @@ -19,6 +19,7 @@
>      cred::Credential,
>      error::Error,
>      fs::file::{self, File},
> +    id_pool::IdPool,
>      list::{List, ListArc, ListArcField, ListLinks},
>      mm,
>      prelude::*,
> @@ -367,6 +368,8 @@ impl ListItem<{Self::LIST_NODE}> for NodeRefInfo {
>  struct ProcessNodeRefs {
>      /// Used to look up nodes using the 32-bit id that this process knows it by.
>      by_handle: RBTree<u32, ListArc<NodeRefInfo, { NodeRefInfo::LIST_PROC }>>,
> +    /// Used to quickly find unused ids in `by_handle`.
> +    handle_is_present: IdPool,
>      /// Used to look up nodes without knowing their local 32-bit id. The usize is the address of
>      /// the underlying `Node` struct as returned by `Node::global_id`.
>      by_node: RBTree<usize, u32>,
> @@ -381,6 +384,7 @@ impl ProcessNodeRefs {
>      fn new() -> Self {
>          Self {
>              by_handle: RBTree::new(),
> +            handle_is_present: IdPool::new(),
>              by_node: RBTree::new(),
>              freeze_listeners: RBTree::new(),
>          }
> @@ -775,7 +779,7 @@ pub(crate) fn get_node(
>      pub(crate) fn insert_or_update_handle(
>          self: ArcBorrow<'_, Process>,
>          node_ref: NodeRef,
> -        is_mananger: bool,
> +        is_manager: bool,
>      ) -> Result<u32> {
>          {
>              let mut refs = self.node_refs.lock();
> @@ -794,7 +798,32 @@ pub(crate) fn insert_or_update_handle(
>          let reserve2 = RBTreeNodeReservation::new(GFP_KERNEL)?;
>          let info = UniqueArc::new_uninit(GFP_KERNEL)?;
>  
> -        let mut refs = self.node_refs.lock();
> +        let mut refs_lock = self.node_refs.lock();
> +        let mut refs = &mut *refs_lock;
> +
> +        let (unused_id, by_handle_slot) = loop {
> +            // ID 0 may only be used by the manager.
> +            let start = if is_manager { 0 } else { 1 };
> +
> +            if let Some(res) = refs.handle_is_present.find_unused_id(start) {
> +                match refs.by_handle.entry(res.as_u32()) {
> +                    rbtree::Entry::Vacant(entry) => break (res, entry),
> +                    rbtree::Entry::Occupied(_) => {
> +                        pr_err!("Detected mismatch between handle_is_present and by_handle");
> +                        res.acquire();
> +                        continue;

At this point you've detected mismatch between two linked data
structures. It means that one of them or both are corrupted. To
me it looks fatal, and your pr_err() confirms it. How could you
continue then?

> +                    }
> +                }
> +            }
> +
> +            let grow_request = refs.handle_is_present.grow_request().ok_or(ENOMEM)?;
> +            drop(refs_lock);
> +            let resizer = grow_request.realloc(GFP_KERNEL)?;
> +            refs_lock = self.node_refs.lock();
> +            refs = &mut *refs_lock;
> +            refs.handle_is_present.grow(resizer);

Is it possible to turn this block into a less wordy statement? Maybe a
wrapper function for it? Ideally, the grow request should be handled
transparently in .find_unused_id().

> +        };
> +        let handle = unused_id.as_u32();
>  
>          // Do a lookup again as node may have been inserted before the lock was reacquired.
>          if let Some(handle_ref) = refs.by_node.get(&node_ref.node.global_id()) {
> @@ -804,20 +833,9 @@ pub(crate) fn insert_or_update_handle(
>              return Ok(handle);
>          }
>  
> -        // Find id.
> -        let mut target: u32 = if is_mananger { 0 } else { 1 };
> -        for handle in refs.by_handle.keys() {
> -            if *handle > target {
> -                break;
> -            }
> -            if *handle == target {
> -                target = target.checked_add(1).ok_or(ENOMEM)?;
> -            }
> -        }
> -
>          let gid = node_ref.node.global_id();
>          let (info_proc, info_node) = {
> -            let info_init = NodeRefInfo::new(node_ref, target, self.into());
> +            let info_init = NodeRefInfo::new(node_ref, handle, self.into());
>              match info.pin_init_with(info_init) {
>                  Ok(info) => ListArc::pair_from_pin_unique(info),
>                  // error is infallible
> @@ -838,9 +856,10 @@ pub(crate) fn insert_or_update_handle(
>          // `info_node` into the right node's `refs` list.
>          unsafe { info_proc.node_ref2().node.insert_node_info(info_node) };
>  
> -        refs.by_node.insert(reserve1.into_node(gid, target));
> -        refs.by_handle.insert(reserve2.into_node(target, info_proc));
> -        Ok(target)
> +        refs.by_node.insert(reserve1.into_node(gid, handle));
> +        by_handle_slot.insert(info_proc, reserve2);
> +        unused_id.acquire();
> +        Ok(handle)
>      }
>  
>      pub(crate) fn get_transaction_node(&self, handle: u32) -> BinderResult<NodeRef> {
> @@ -905,6 +924,16 @@ pub(crate) fn update_ref(
>                  let id = info.node_ref().node.global_id();
>                  refs.by_handle.remove(&handle);
>                  refs.by_node.remove(&id);
> +                refs.handle_is_present.release_id(handle as usize);
> +
> +                if let Some(shrink) = refs.handle_is_present.shrink_request() {

This is questionable. Shrinking is usually the very slow path, and we
don't shrink unless we're really close or even inside the OOM condition.

In this case, shrink_request() on average returns false, but it's
O(N), which makes _every_ release_id() O(N), while it should be O(1).

Consider a very realistic case: you're destroying every object, and thus
removing every ID in the associate ID pool, doing it in LIFO order. That
way you will need to call shrink_request() about O(log(N)) times, making
the whole complexity ~O(N*log(N)); and you'll have to make log(N)
realloc()s for nothing. If you release IDs in FIFO order, you don't
call realloc(), but your shrink_request() total complexity will be O(N^2). 

Can you compare performance numbers with and without shrinking under a
typical payload? Is there any mechanism to inspect the ID pools at runtime,
like expose via procfs?

Can you make your shrinking logic conditional on some reasonable OOM
heuristics, maybe OOM event driven?

And even without OOM, you can safely skip shrinking if the number of IDs in
use is greater than 1/4 of the capacity, or there's any used ID with
the index greater than the 1/2 capacity.
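
Something along these lines, as an untested sketch (`in_use` would be
a new counter maintained by acquire() and release_id()):

    pub fn shrink_request(&self) -> Option<ReallocRequest> {
        let cap = self.capacity();
        // Cheap bail-outs before paying for the O(N) last_bit() scan:
        // with more than cap / 4 IDs in use, the last set bit is
        // necessarily >= cap / 4, so shrinking would be refused anyway.
        if cap <= BitmapVec::MAX_INLINE_LEN || self.in_use > cap / 4 {
            return None;
        }
        let num_ids = match self.map.last_bit() {
            None => BitmapVec::MAX_INLINE_LEN,
            Some(bit) if bit >= cap / 4 => return None,
            Some(_) => usize::max(BitmapVec::MAX_INLINE_LEN, cap / 2),
        };
        Some(ReallocRequest { num_ids })
    }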

> +                    drop(refs);
> +                    // This intentionally ignores allocation failures.
> +                    if let Ok(new_bitmap) = shrink.realloc(GFP_KERNEL) {
> +                        refs = self.node_refs.lock();
> +                        refs.handle_is_present.shrink(new_bitmap);
> +                    }
> +                }
>              }
>          } else {
>              // All refs are cleared in process exit, so this warning is expected in that case.
> 
> -- 
> 2.51.2.1041.gc1ab5b90ca-goog


* Re: [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-12 19:09   ` Yury Norov
@ 2025-11-12 23:35     ` Alice Ryhl
  2025-11-13  8:32       ` Miguel Ojeda
  2025-11-13 17:50       ` Yury Norov
  0 siblings, 2 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-12 23:35 UTC (permalink / raw)
  To: Yury Norov
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Wed, Nov 12, 2025 at 02:09:19PM -0500, Yury Norov wrote:
> On Wed, Nov 12, 2025 at 12:47:24PM +0000, Alice Ryhl wrote:
> > To find an unused Binder handle, Rust Binder currently iterates the
> > red/black tree from the beginning until it finds a gap in the keys. This
> > is extremely slow.
> 
> Can you share performance numbers? 

I have not benchmarked it in the Rust driver, but it replaces
potentially thousands of calls to rb_next() with a single call to
find_unused_id(), so I'm feeling good about the performance. And the
equivalent change in the C driver was done because this particular piece
of code was causing contention issues by holding the spinlock for a long
time.

A characteristic of this collection is that there is one per process
using the driver. Most processes have only a few entries in this bitmap,
so the inline representation will apply in most cases. However, a few
processes have a large number of entries, in the four- to five-figure
range. Those processes are what caused the contention issue mentioned
above.

> > To improve the performance, add a bitmap that keeps track of which
> > indices are actually in use. This allows us to quickly find an unused
> > key in the red/black tree.
> > 
> > This logic matches the approach used by C Binder. It was chosen
> > partially because it's the most memory efficient solution.
> 
> That inaccurate. You are adding a new data structure (bitmap), advocating
> it with an improvement on search side, and that makes sense.
> 
> But now you're saying it's also a more memory efficient approach, which
> doesn't sound trivial because the most memory efficient solution is to
> bring no new data structures at all.
> 
> I guess you meant that bitmap is the most efficient data structure to
> index used/unused nodes. If so, can you please rephrase the sentence?

Yes, I can rephrase.

Adding more data is of course always less memory efficient. What I meant
is that this is more memory efficient than the competing solution of
using an augmented rbtree that Carlos mentioned here:

https://lore.kernel.org/p/aC1PQ7tmcqMSmbHc@google.com

> > +            if let Some(res) = refs.handle_is_present.find_unused_id(start) {
> > +                match refs.by_handle.entry(res.as_u32()) {
> > +                    rbtree::Entry::Vacant(entry) => break (res, entry),
> > +                    rbtree::Entry::Occupied(_) => {
> > +                        pr_err!("Detected mismatch between handle_is_present and by_handle");
> > +                        res.acquire();
> > +                        continue;
> 
> At this point you've detected mismatch between two linked data
> structures. It means that one of them or both are corrupted. To
> me it looks fatal, and your pr_err() confirms it. How could you
> continue then?

Although we should never hit this codepath in real code, I don't think
we need to kill the kernel. We can treat the r/b tree as the source of
truth and adjust the bitmap when mismatches are detected.

I could add a kernel warning, though. That shouldn't kill an Android
device.

> > +                    }
> > +                }
> > +            }
> > +
> > +            let grow_request = refs.handle_is_present.grow_request().ok_or(ENOMEM)?;
> > +            drop(refs_lock);
> > +            let resizer = grow_request.realloc(GFP_KERNEL)?;
> > +            refs_lock = self.node_refs.lock();
> > +            refs = &mut *refs_lock;
> > +            refs.handle_is_present.grow(resizer);
> 
> Is it possible to turn this block into a less wordy statement? Maybe a
> wrapper function for it? Ideally, the grow request should be handled
> transparently in .find_unused_id().

I can extract this block into a separate function, but I think it would
be tricky to move the entire logic inside .find_unused_id() because of
the mutex lock/unlock situation.
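
Extracted, the helper could look roughly like this (sketch only: the
helper name is made up, and Guard/NodeRefs stand in for the real guard
and inner types):

    // Grow handle_is_present without holding the lock across the
    // allocation: hand back the guard, realloc, then re-take the lock.
    fn grow_handle_is_present<'a>(
        &'a self,
        refs_lock: Guard<'a, NodeRefs>,
    ) -> Result<Guard<'a, NodeRefs>> {
        let grow_request = refs_lock.handle_is_present.grow_request().ok_or(ENOMEM)?;
        drop(refs_lock);
        let resizer = grow_request.realloc(GFP_KERNEL)?;
        let mut refs_lock = self.node_refs.lock();
        refs_lock.handle_is_present.grow(resizer);
        Ok(refs_lock)
    }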

> > @@ -905,6 +924,16 @@ pub(crate) fn update_ref(
> >                  let id = info.node_ref().node.global_id();
> >                  refs.by_handle.remove(&handle);
> >                  refs.by_node.remove(&id);
> > +                refs.handle_is_present.release_id(handle as usize);
> > +
> > +                if let Some(shrink) = refs.handle_is_present.shrink_request() {
> 
> This is questionable. Shrinking is usually the very slow path, and we
> don't shrink unless we're really close or even inside the OOM condition.
> 
> In this case, shrink_request() on average returns false, but it's
> O(N), which makes _every_ release_id() O(N), while it should be O(1).

The current implementation of shrink_request() will refuse to shrink the
pool unless the largest bit is less than 1/4 of the capacity, so it
should not perform the expensive operation very often.

That said, it does call find_last_bit() each time, which I guess is
O(N). But my assumption was that find_last_bit() is cheap enough that it
wouldn't be a problem.
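
In other words, the heuristic is roughly the following (a sketch, not
the actual id_pool code; it assumes a last_bit() accessor wrapping
find_last_bit()):

    // Only propose shrinking when the highest used id sits below a
    // quarter of the capacity; the find_last_bit() scan behind
    // last_bit() is what makes every release O(N) in the worst case.
    fn shrink_target(map: &BitmapVec) -> Option<usize> {
        let capacity = map.len();
        let highest = map.last_bit().map_or(0, |bit| bit + 1);
        if highest < capacity / 4 {
            Some(capacity / 2) // proposed new capacity
        } else {
            None
        }
    }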

> Consider a very realistic case: you're destroying every object, and thus
> removing every ID in the associated ID pool, doing it in LIFO order. That
> way you will need to call shrink_request() about O(log(N)) times, making
> the whole complexity ~O(N*log(N)); and you'll have to make log(N)
> realloc()s for nothing. If you release IDs in FIFO order, you don't
> call realloc(), but your shrink_request() total complexity will be O(N^2). 

Even if we end up making log(N) reallocs, the total complexity of the
reallocs is O(N) because the amount of data being realloc'd halves each
time. So the total number of bytes copied by reallocs ends up being:

    1 + 2 + 4 + 8 + ... + 2^log(N) <= 2^(1+log(N)) = 2*N

which is O(N).

Of course, deleting the corresponding entry from the red/black tree is
O(log N), so it's still O(N*log(N)) for the N deletions from the rb
tree.

> Can you compare performance numbers with and without shrinking under a
> typical payload? Is there any mechanism to inspect the ID pools at runtime,
> like expose via procfs?

We expose the contents of the red/black tree via the binder_logs
mechanism, but that doesn't show the *capacity* of the bitmap. Only the
index of the largest set bit.

> Can you make your shrinking logic conditional on some reasonable OOM
> heuristics, maybe OOM event driven?
> 
> And even without OOM, you can safely skip shrinking if the number of IDs in
> use is greater than 1/4 of the capacity, or there's any used ID with
> the index greater than the 1/2 capacity.

I guess we could register a shrinker, but I don't think it's worth it.

Thanks for the review!

Alice

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-12 23:35     ` Alice Ryhl
@ 2025-11-13  8:32       ` Miguel Ojeda
  2025-11-13  9:14         ` Alice Ryhl
  2025-11-13 17:50       ` Yury Norov
  1 sibling, 1 reply; 20+ messages in thread
From: Miguel Ojeda @ 2025-11-13  8:32 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Yury Norov, Greg Kroah-Hartman, Arve Hjønnevåg,
	Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner,
	Carlos Llamas, Suren Baghdasaryan, Burak Emir, Miguel Ojeda,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, rust-for-linux,
	linux-kernel

On Thu, Nov 13, 2025 at 12:35 AM Alice Ryhl <aliceryhl@google.com> wrote:
>
> Although we should never hit this codepath in real code, I don't think
> we need to kill the kernel. We can treat the r/b tree as the source of truth
> and adjust the bitmap when mismatches are detected.
>
> I could add a kernel warning, though. That shouldn't kill an Android
> device.

I guess you mean warning in the sense of `pr_warn!` instead of
`warn_on!`, right?

If so, could you please add as well a `debug_assert!(false)` on that path?
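
For that path, the combination would look roughly like this (placement
illustrative):

    pr_err!("Detected mismatch between handle_is_present and by_handle");
    // Compiled out unless debug assertions are enabled, so production
    // builds keep the recovery path while test builds trap early.
    debug_assert!(false, "handle_is_present out of sync with by_handle");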

Thanks!

I will create a 'good first issue' for the combination, since I hope we can
use it more and more.

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-13  8:32       ` Miguel Ojeda
@ 2025-11-13  9:14         ` Alice Ryhl
  2025-11-13  9:21           ` Miguel Ojeda
  0 siblings, 1 reply; 20+ messages in thread
From: Alice Ryhl @ 2025-11-13  9:14 UTC (permalink / raw)
  To: Miguel Ojeda
  Cc: Yury Norov, Greg Kroah-Hartman, Arve Hjønnevåg,
	Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner,
	Carlos Llamas, Suren Baghdasaryan, Burak Emir, Miguel Ojeda,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, rust-for-linux,
	linux-kernel

On Thu, Nov 13, 2025 at 09:32:10AM +0100, Miguel Ojeda wrote:
> On Thu, Nov 13, 2025 at 12:35 AM Alice Ryhl <aliceryhl@google.com> wrote:
> >
> > Although we should never hit this codepath in real code, I don't think
> > we need to kill the kernel. We can treat the r/b tree as the source of truth
> > and adjust the bitmap when mismatches are detected.
> >
> > I could add a kernel warning, though. That shouldn't kill an Android
> > device.
> 
> I guess you mean warning in the sense of `pr_warn!` instead of
> `warn_on!`, right?

I was thinking of warn_on!. There is already a pr_err call.

> If so, could you please add as well a `debug_assert!(false)` on that path?
> 
> Thanks!
> 
> I will create a first issue for the combination, since I hope we can
> use it more and more.
> 
> Cheers,
> Miguel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-13  9:14         ` Alice Ryhl
@ 2025-11-13  9:21           ` Miguel Ojeda
  0 siblings, 0 replies; 20+ messages in thread
From: Miguel Ojeda @ 2025-11-13  9:21 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Yury Norov, Greg Kroah-Hartman, Arve Hjønnevåg,
	Todd Kjos, Martijn Coenen, Joel Fernandes, Christian Brauner,
	Carlos Llamas, Suren Baghdasaryan, Burak Emir, Miguel Ojeda,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Trevor Gross, Danilo Krummrich, rust-for-linux,
	linux-kernel

On Thu, Nov 13, 2025 at 10:14 AM Alice Ryhl <aliceryhl@google.com> wrote:
>
> I was thinking of warn_on!. There is already a pr_err call.

I see -- if you end up not adding the `warn_on!`, then please add the
`debug_assert!` instead.

That should help and it also makes it even more clear that it is
something that should never ever happen.

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline()
  2025-11-12 12:47 ` [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline() Alice Ryhl
  2025-11-12 15:36   ` Yury Norov
@ 2025-11-13 17:41   ` kernel test robot
  1 sibling, 0 replies; 20+ messages in thread
From: kernel test robot @ 2025-11-13 17:41 UTC (permalink / raw)
  To: Alice Ryhl, Greg Kroah-Hartman, Yury Norov
  Cc: llvm, oe-kbuild-all, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

Hi Alice,

kernel test robot noticed the following build errors:

[auto build test ERROR on 211ddde0823f1442e4ad052a2f30f050145ccada]

url:    https://github.com/intel-lab-lkp/linux/commits/Alice-Ryhl/rust-bitmap-add-MAX_LEN-and-MAX_INLINE_LEN-constants/20251112-205752
base:   211ddde0823f1442e4ad052a2f30f050145ccada
patch link:    https://lore.kernel.org/r/20251112-binder-bitmap-v5-2-8b9d7c7eca82%40google.com
patch subject: [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline()
config: x86_64-rhel-9.4-rust (https://download.01.org/0day-ci/archive/20251114/202511140146.shdRx5TZ-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251114/202511140146.shdRx5TZ-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511140146.shdRx5TZ-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from kernel/sched/rq-offsets.c:5:
   kernel/sched/sched.h:3743:18: warning: variable 'cpumask' set but not used [-Wunused-but-set-variable]
   3743 |         struct cpumask *cpumask;
   |                         ^
   1 warning generated.
>> error[E0599]: no associated item named `NO_ALLOC_MAX_LEN` found for struct `bitmap::BitmapVec` in the current scope
   --> rust/kernel/bitmap.rs:239:31
   |
   154 | pub struct BitmapVec {
   | -------------------- associated item `NO_ALLOC_MAX_LEN` not found for this struct
   ...
   239 |             nbits: BitmapVec::NO_ALLOC_MAX_LEN,
   |                               ^^^^^^^^^^^^^^^^ associated item not found in `BitmapVec`

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-12 23:35     ` Alice Ryhl
  2025-11-13  8:32       ` Miguel Ojeda
@ 2025-11-13 17:50       ` Yury Norov
  2025-11-19 23:16         ` Alice Ryhl
  1 sibling, 1 reply; 20+ messages in thread
From: Yury Norov @ 2025-11-13 17:50 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Wed, Nov 12, 2025 at 11:35:07PM +0000, Alice Ryhl wrote:
> On Wed, Nov 12, 2025 at 02:09:19PM -0500, Yury Norov wrote:
> > On Wed, Nov 12, 2025 at 12:47:24PM +0000, Alice Ryhl wrote:
> > > To find an unused Binder handle, Rust Binder currently iterates the
> > > red/black tree from the beginning until it finds a gap in the keys. This
> > > is extremely slow.
> > 
> > Can you share performance numbers? 
> 
> I have not benchmarked it in the Rust driver, but it replaces
> potentially thousands of calls to rb_next() with a single call to
> find_unused_id(), so I'm feeling good about the performance. And the

Feelings are good, but numbers are better. In the original dbitmap
patch, Carlos shared the experiment details and rough numbers, and
it's enough for me. Can you just reproduce his steps?

> equivalent change in the C driver was done because this particular piece
> of code was causing contention issues by holding the spinlock for a long
> time.
> 
> A characteristic of this collection is that there is one per process
> using the driver. Most processes have only a few entries in this bitmap,
> so the inline representation will apply in most cases. However, there
> are a few processes that have a large number of entries in the 4 to
> maybe 5 figures range. Those processes are what caused the contention
> issue mentioned above.
> 
> > > To improve the performance, add a bitmap that keeps track of which
> > > indices are actually in use. This allows us to quickly find an unused
> > > key in the red/black tree.
> > > 
> > > This logic matches the approach used by C Binder. It was chosen
> > > partially because it's the most memory efficient solution.
> > 
> > That's inaccurate. You are adding a new data structure (bitmap), advocating
> > it with an improvement on the search side, and that makes sense.
> > 
> > But now you're saying it's also a more memory efficient approach, which
> > doesn't sound trivial because the most memory efficient solution is to
> > bring no new data structures at all.
> > 
> > I guess you meant that a bitmap is the most efficient data structure to
> > index used/unused nodes. If so, can you please rephrase the sentence?
> 
> Yes I can rephrase.
> 
> Adding more data is of course always less memory efficient. What I meant
> is that this is more memory efficient than the competing solution of
> using an augmented rbtree that Carlos mentioned here:
> 
> https://lore.kernel.org/p/aC1PQ7tmcqMSmbHc@google.com
> 
> > > +            if let Some(res) = refs.handle_is_present.find_unused_id(start) {
> > > +                match refs.by_handle.entry(res.as_u32()) {
> > > +                    rbtree::Entry::Vacant(entry) => break (res, entry),
> > > +                    rbtree::Entry::Occupied(_) => {
> > > +                        pr_err!("Detected mismatch between handle_is_present and by_handle");
> > > +                        res.acquire();
> > > +                        continue;
> > 
> > At this point you've detected mismatch between two linked data
> > structures. It means that one of them or both are corrupted. To
> > me it looks fatal, and your pr_err() confirms it. How could you
> > continue then?
> 
> Although we should never hit this codepath in real code, I don't think
> we need to kill the kernel. We can treat the r/b tree as the source of truth
> and adjust the bitmap when mismatches are detected.

There's no such thing as a 'source of truth', and the rb-tree is not even
close to that.

You add bitmap for performance reasons, but with that you also bring
some redundancy. Now, you cross-check two data structures and see
discrepancy. At this point you can only say that either one of them
or both are corrupted.

Relying on the rb-tree over the bitmap is simply wrong. Statistically
speaking, there are more chances to corrupt the rb-tree - just because it
is scattered and takes more memory.

> I could add a kernel warning, though. That shouldn't kill an Android
> device.

Assuming you're talking about WARN(), I agree. But it looks like my
reasoning differs.
 
You never hit this path during a normal operation, as you said. So if
you're there, it means that something is already going wrong. If you
issue WARN(), you let those focused on system integrity leverage
panic_on_warn and shut the system down to minimize any possible damage. 

With that precaution, you're free to do whatever you find practical to
'recover', or even do nothing. But please avoid referring to the rb-tree as a
source of truth - it's a misleading and dangerous misconception.

> > > +                    }
> > > +                }
> > > +            }
> > > +
> > > +            let grow_request = refs.handle_is_present.grow_request().ok_or(ENOMEM)?;
> > > +            drop(refs_lock);
> > > +            let resizer = grow_request.realloc(GFP_KERNEL)?;
> > > +            refs_lock = self.node_refs.lock();
> > > +            refs = &mut *refs_lock;
> > > +            refs.handle_is_present.grow(resizer);
> > 
> > Is it possible to turn this block into a less wordy statement? Maybe a
> > wrapper function for it? Ideally, the grow request should be handled
> > transparently in .find_unused_id().
> 
> I can extract this block into a separate function, but I think it would
> be tricky to move the entire logic inside .find_unused_id() because of
> the mutex lock/unlock situation.
> 
> > > @@ -905,6 +924,16 @@ pub(crate) fn update_ref(
> > >                  let id = info.node_ref().node.global_id();
> > >                  refs.by_handle.remove(&handle);
> > >                  refs.by_node.remove(&id);
> > > +                refs.handle_is_present.release_id(handle as usize);
> > > +
> > > +                if let Some(shrink) = refs.handle_is_present.shrink_request() {
> > 
> > This is questionable. Shrinking is usually the very slow path, and we
> > don't shrink unless we're really close or even inside the OOM condition.
> > 
> > In this case, shrink_request() on average returns false, but it's
> > O(N), which makes _every_ release_id() O(N), while it should be O(1).
> 
> The current implementation of shrink_request() will refuse to shrink the
> pool unless the largest bit is less than 1/4 of the capacity, so it
> should not perform the expensive operation very often.
> 
> That said, it does call find_last_bit() each time, which I guess is
> O(N). But my assumption was that find_last_bit() is cheap enough that it
> wouldn't be a problem.

It's O(N), while the expectation for release_id() is to be O(1). But if
it's OK for you - I'm OK as well. Can you explicitly mention it in the
function description?

> > Consider a very realistic case: you're destroying every object, and thus
> > removing every ID in the associated ID pool, doing it in LIFO order. That
> > way you will need to call shrink_request() about O(log(N)) times, making
> > the whole complexity ~O(N*log(N)); and you'll have to make log(N)
> > realloc()s for nothing. If you release IDs in FIFO order, you don't
> > call realloc(), but your shrink_request() total complexity will be O(N^2). 
> 
> Even if we end up making log(N) reallocs, the total complexity of the
> > reallocs is O(N) because the amount of data being realloc'd halves each
> time. So the total number of bytes copied by reallocs ends up being:
> 
>     1 + 2 + 4 + 8 + ... + 2^log(N) <= 2^(1+log(N)) = 2*N
> 
> which is O(N).

OK, I trust your math better than mine. :)

> Of course, deleting the corresponding entry from the red/black tree is
> O(log N), so it's still O(N*log(N)) for the N deletions from the rb
> tree.
> 
> > Can you compare performance numbers with and without shrinking under a
> > typical payload? Is there any mechanism to inspect the ID pools at runtime,
> > like expose via procfs?
> 
> We expose the contents of the red/black tree via the binder_logs
> mechanism, but that doesn't show the *capacity* of the bitmap. Only the
> index of the largest set bit.
> 
> > Can you make your shrinking logic conditional on some reasonable OOM
> > heuristics, maybe OOM event driven?
> > 
> > And even without OOM, you can safely skip shrinking if the number of IDs in
> > use is greater than 1/4 of the capacity, or there's any used ID with
> > the index greater than the 1/2 capacity.
> 
> I guess we could register a shrinker, but I don't think it's worth it.

OK, if you're fine with O(N) for release_id() - I'm fine as well. Can
you mention OOM-driven shrinking as a possible alternative for future
improvements?

Thanks,
Yury

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity()
  2025-11-12 12:47 ` [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity() Alice Ryhl
  2025-11-12 12:55   ` Alice Ryhl
  2025-11-12 15:46   ` Yury Norov
@ 2025-11-14  1:31   ` kernel test robot
  2 siblings, 0 replies; 20+ messages in thread
From: kernel test robot @ 2025-11-14  1:31 UTC (permalink / raw)
  To: Alice Ryhl, Greg Kroah-Hartman, Yury Norov
  Cc: llvm, oe-kbuild-all, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel,
	Alice Ryhl

Hi Alice,

kernel test robot noticed the following build errors:

[auto build test ERROR on 211ddde0823f1442e4ad052a2f30f050145ccada]

url:    https://github.com/intel-lab-lkp/linux/commits/Alice-Ryhl/rust-bitmap-add-MAX_LEN-and-MAX_INLINE_LEN-constants/20251112-205752
base:   211ddde0823f1442e4ad052a2f30f050145ccada
patch link:    https://lore.kernel.org/r/20251112-binder-bitmap-v5-3-8b9d7c7eca82%40google.com
patch subject: [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity()
config: x86_64-rhel-9.4-rust (https://download.01.org/0day-ci/archive/20251114/202511140825.Bbsm1dKT-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251114/202511140825.Bbsm1dKT-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511140825.Bbsm1dKT-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from kernel/sched/rq-offsets.c:5:
   kernel/sched/sched.h:3743:18: warning: variable 'cpumask' set but not used [-Wunused-but-set-variable]
   3743 |         struct cpumask *cpumask;
   |                         ^
   1 warning generated.
>> error[E0425]: cannot find value `BITS_PER_LONG` in this scope
   --> rust/kernel/id_pool.rs:103:43
   |
   103 |         let num_ids = usize::max(num_ids, BITS_PER_LONG);
   |                                           ^^^^^^^^^^^^^ not found in this scope
   |
   help: consider importing one of these constants
   |
   7   + use crate::bindings::BITS_PER_LONG;
   |
   7   + use crate::uapi::BITS_PER_LONG;
   |
   7   + use bindings::BITS_PER_LONG;
   |
   7   + use uapi::BITS_PER_LONG;
   |

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles
  2025-11-13 17:50       ` Yury Norov
@ 2025-11-19 23:16         ` Alice Ryhl
  0 siblings, 0 replies; 20+ messages in thread
From: Alice Ryhl @ 2025-11-19 23:16 UTC (permalink / raw)
  To: Yury Norov
  Cc: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
	Martijn Coenen, Joel Fernandes, Christian Brauner, Carlos Llamas,
	Suren Baghdasaryan, Burak Emir, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, rust-for-linux, linux-kernel

On Thu, Nov 13, 2025 at 12:50:37PM -0500, Yury Norov wrote:
> On Wed, Nov 12, 2025 at 11:35:07PM +0000, Alice Ryhl wrote:
> > On Wed, Nov 12, 2025 at 02:09:19PM -0500, Yury Norov wrote:
> > > On Wed, Nov 12, 2025 at 12:47:24PM +0000, Alice Ryhl wrote:
> > > > To find an unused Binder handle, Rust Binder currently iterates the
> > > > red/black tree from the beginning until it finds a gap in the keys. This
> > > > is extremely slow.
> > > 
> > > Can you share performance numbers? 
> > 
> > I have not benchmarked it in the Rust driver, but it replaces
> > potentially thousands of calls to rb_next() with a single call to
> > find_unused_id(), so I'm feeling good about the performance. And the
> 
> Feelings are good, but numbers are better. In the original dbitmap
> patch, Carlos shared the experiment details and rough numbers, and
> it's enough for me. Can you just reproduce his steps?

Unfortunately Carlos doesn't have his benchmark files anymore, but I
made my own benchmark. It's a test where a client repeatedly sends a
transaction containing a node to a server, and the server stashes it
forever. This way, the number of nodes keeps growing forever. (Up to
100k nodes.)

Transaction Range │ Baseline (Rust) │ Bitmap (Rust) │ Comparison (C)
0 - 10,000        │          176.88 │         92.93 │          99.41
10,000 - 20,000   │          437.37 │         87.74 │          98.55
20,000 - 30,000   │          677.49 │         76.24 │          96.37
30,000 - 40,000   │          901.76 │         83.39 │          96.73
40,000 - 50,000   │         1126.62 │        100.44 │          94.57
50,000 - 60,000   │         1288.98 │         94.38 │          96.64
60,000 - 70,000   │         1588.74 │         88.27 │          96.36
70,000 - 80,000   │         1812.97 │         93.97 │          91.24
80,000 - 90,000   │         2062.95 │         92.22 │         102.01
90,000 - 100,000  │         2330.03 │         97.18 │         100.31

The numbers are roundtrip latencies in microseconds: the time it takes
to send the transaction containing the node and to receive the
response.

I would not necessarily trust the precise numbers. Rust seems slightly
better here, but I ran it on an emulator so it may be noisy. Regardless,
I think it's pretty clear that an improvement has happened.
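
For reference, the measurement loop has roughly this shape (schematic
only; `roundtrip` is a stand-in for the test harness, not a real binder
API):

    use std::time::{Duration, Instant};

    // One iteration = one transaction carrying a node plus its reply.
    // The server stashes every node, so the handle table only grows.
    fn run_benchmark(mut roundtrip: impl FnMut()) -> Vec<Duration> {
        let mut latencies = Vec::with_capacity(100_000);
        for _ in 0..100_000 {
            let start = Instant::now();
            roundtrip();
            latencies.push(start.elapsed());
        }
        latencies
    }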

> > equivalent change in the C driver was done because this particular piece
> > of code was causing contention issues by holding the spinlock for a long
> > time.
> > 
> > The characteristics of this collection is that there is one per process
> > using the driver. Most processes have only a few entries in this bitmap,
> > so the inline representation will apply in most cases. However, there
> > are a few processes that have a large number of entries in the 4 to
> > maybe 5 figures range. Those processes are what caused the contention
> > issue mentioned above.
> > 
> > > > To improve the performance, add a bitmap that keeps track of which
> > > > indices are actually in use. This allows us to quickly find an unused
> > > > key in the red/black tree.
> > > > 
> > > > This logic matches the approach used by C Binder. It was chosen
> > > > partially because it's the most memory efficient solution.
> > > 
> > > That's inaccurate. You are adding a new data structure (bitmap), advocating
> > > it with an improvement on the search side, and that makes sense.
> > > 
> > > But now you're saying it's also a more memory efficient approach, which
> > > doesn't sound trivial because the most memory efficient solution is to
> > > bring no new data structures at all.
> > > 
> > > I guess you meant that a bitmap is the most efficient data structure to
> > > index used/unused nodes. If so, can you please rephrase the sentence?
> > 
> > Yes I can rephrase.
> > 
> > Adding more data is of course always less memory efficient. What I meant
> > is that this is more memory efficient than the competing solution of
> > using an augmented rbtree that Carlos mentioned here:
> > 
> > https://lore.kernel.org/p/aC1PQ7tmcqMSmbHc@google.com
> > 
> > > > +            if let Some(res) = refs.handle_is_present.find_unused_id(start) {
> > > > +                match refs.by_handle.entry(res.as_u32()) {
> > > > +                    rbtree::Entry::Vacant(entry) => break (res, entry),
> > > > +                    rbtree::Entry::Occupied(_) => {
> > > > +                        pr_err!("Detected mismatch between handle_is_present and by_handle");
> > > > +                        res.acquire();
> > > > +                        continue;
> > > 
> > > At this point you've detected mismatch between two linked data
> > > structures. It means that one of them or both are corrupted. To
> > > me it looks fatal, and your pr_err() confirms it. How could you
> > > continue then?
> > 
> > Although we should never hit this codepath in real code, I don't think
> > we need to kill the kernel. We can treat the r/b tree as the source of truth
> > and adjust the bitmap when mismatches are detected.
> 
> There's no such thing as a 'source of truth', and the rb-tree is not even
> close to that.
> 
> You add bitmap for performance reasons, but with that you also bring
> some redundancy. Now, you cross-check two data structures and see
> discrepancy. At this point you can only say that either one of them
> or both are corrupted.
> 
> Relying on the rb-tree over the bitmap is simply wrong. Statistically
> speaking, there are more chances to corrupt the rb-tree - just because it
> is scattered and takes more memory.
> 
> > I could add a kernel warning, though. That shouldn't kill an Android
> > device.
> 
> Assuming you're talking about WARN(), I agree. But it looks like my
> reasoning differs.
>  
> You never hit this path during a normal operation, as you said. So if
> you're there, it means that something is already going wrong. If you
> issue WARN(), you let those focused on system integrity leverage
> panic_on_warn and shut the system down to minimize any possible damage. 
> 
> With that precaution, you're free to do whatever you find practical to
> 'recover', or even do nothing. But please avoid referring to the rb-tree as a
> source of truth - it's a misleading and dangerous misconception.

Ok.

I picked the rb-tree because using the bitmap isn't possible - it
doesn't store the auxiliary data that we need.

> > > > +                    }
> > > > +                }
> > > > +            }
> > > > +
> > > > +            let grow_request = refs.handle_is_present.grow_request().ok_or(ENOMEM)?;
> > > > +            drop(refs_lock);
> > > > +            let resizer = grow_request.realloc(GFP_KERNEL)?;
> > > > +            refs_lock = self.node_refs.lock();
> > > > +            refs = &mut *refs_lock;
> > > > +            refs.handle_is_present.grow(resizer);
> > > 
> > > Is it possible to turn this block into a less wordy statement? Maybe a
> > > wrapper function for it? Ideally, the grow request should be handled
> > > transparently in .find_unused_id().
> > 
> > I can extract this block into a separate function, but I think it would
> > be tricky to move the entire logic inside .find_unused_id() because of
> > the mutex lock/unlock situation.
> > 
> > > > @@ -905,6 +924,16 @@ pub(crate) fn update_ref(
> > > >                  let id = info.node_ref().node.global_id();
> > > >                  refs.by_handle.remove(&handle);
> > > >                  refs.by_node.remove(&id);
> > > > +                refs.handle_is_present.release_id(handle as usize);
> > > > +
> > > > +                if let Some(shrink) = refs.handle_is_present.shrink_request() {
> > > 
> > > This is questionable. Shrinking is usually the very slow path, and we
> > > don't shrink unless we're really close or even inside the OOM condition.
> > > 
> > > In this case, shrink_request() on average returns false, but it's
> > > O(N), which makes _every_ release_id() O(N), while it should be O(1).
> > 
> > The current implementation of shrink_request() will refuse to shrink the
> > pool unless the largest bit is less than 1/4 of the capacity, so it
> > should not perform the expensive operation very often.
> > 
> > That said, it does call find_last_bit() each time, which I guess is
> > O(N). But my assumption was that find_last_bit() is cheap enough that it
> > wouldn't be a problem.
> 
> It's O(N), while the expectation for release_id() is to be O(1). But if
> it's OK for you - I'm OK as well. Can you explicitly mention it in the
> function description?

Sure I will mention it.

> > > Consider a very realistic case: you're destroying every object, and thus
> > > removing every ID in the associate ID pool, doing it in LIFO order. That
> > > way you will need to call shrink_request() about O(log(N)) times, making
> > > the whole complexity ~O(N*log(N)); and you'll have to make log(N)
> > > realloc()s for nothing. If you release IDs in FIFO order, you don't
> > > call realloc(), but your shrink_request() total complexity will be O(N^2). 
> > 
> > Even if we end up making log(N) reallocs, the total complexity of the
> > reallocs is O(N) because the amount of data being realloc'd halves each
> > time. So the total number of bytes copied by reallocs ends up being:
> > 
> >     1 + 2 + 4 + 8 + ... + 2^log(N) <= 2^(1+log(N)) = 2*N
> > 
> > which is O(N).
> 
> OK, I trust your math better than mine. :)
> 
> > Of course, deleting the corresponding entry from the red/black tree is
> > O(log N), so it's still O(N*log(N)) for the N deletions from the rb
> > tree.
> > 
> > > Can you compare performance numbers with and without shrinking under a
> > > typical payload? Is there any mechanism to inspect the ID pools at runtime,
> > > like expose via procfs?
> > 
> > We expose the contents of the red/black tree via the binder_logs
> > mechanism, but that doesn't show the *capacity* of the bitmap. Only the
> > index of the largest set bit.
> > 
> > > Can you make your shrinking logic conditional on some reasonable OOM
> > > heuristics, maybe OOM event driven?
> > > 
> > > And even without OOM, you can safely skip shrinking if the number of IDs in
> > > use is greater than 1/4 of the capacity, or there's any used ID with
> > > the index greater than the 1/2 capacity.
> > 
> > I guess we could register a shrinker, but I don't think it's worth it.
> 
> OK, if you're fine with O(N) for release_id() - I'm fine as well. Can
> you mention OOM-driven shrinking as a possible alternative for future
> improvements?

Sure.

Alice

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2025-11-19 23:16 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-11-12 12:47 [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Alice Ryhl
2025-11-12 12:47 ` [PATCH v5 1/6] rust: bitmap: add MAX_LEN and MAX_INLINE_LEN constants Alice Ryhl
2025-11-12 12:47 ` [PATCH v5 2/6] rust: bitmap: add BitmapVec::new_inline() Alice Ryhl
2025-11-12 15:36   ` Yury Norov
2025-11-13 17:41   ` kernel test robot
2025-11-12 12:47 ` [PATCH v5 3/6] rust: bitmap: rename IdPool::new() to with_capacity() Alice Ryhl
2025-11-12 12:55   ` Alice Ryhl
2025-11-12 15:46   ` Yury Norov
2025-11-14  1:31   ` kernel test robot
2025-11-12 12:47 ` [PATCH v5 4/6] rust: id_pool: do not supply starting capacity Alice Ryhl
2025-11-12 12:47 ` [PATCH v5 5/6] rust: id_pool: do not immediately acquire new ids Alice Ryhl
2025-11-12 12:47 ` [PATCH v5 6/6] rust_binder: use bitmap for allocation of handles Alice Ryhl
2025-11-12 19:09   ` Yury Norov
2025-11-12 23:35     ` Alice Ryhl
2025-11-13  8:32       ` Miguel Ojeda
2025-11-13  9:14         ` Alice Ryhl
2025-11-13  9:21           ` Miguel Ojeda
2025-11-13 17:50       ` Yury Norov
2025-11-19 23:16         ` Alice Ryhl
2025-11-12 15:43 ` [PATCH v5 0/6] Use Rust Bitmap from Rust Binder driver Yury Norov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).