public inbox for linux-kernel@vger.kernel.org
* [PATCH] rust: add a ring buffer implementation
@ 2026-02-15 20:24 Andreas Hindborg
  2026-02-16  4:35 ` Daniel Almeida
  2026-02-16 12:25 ` Alice Ryhl
  0 siblings, 2 replies; 17+ messages in thread
From: Andreas Hindborg @ 2026-02-15 20:24 UTC (permalink / raw)
  To: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich
  Cc: linux-kernel, rust-for-linux, Andreas Hindborg

Add a fixed-capacity FIFO ring buffer. The implementation uses a circular
buffer with head and tail pointers, providing constant-time push and pop
operations.

The module includes a few tests covering basic operations, wrap-around
behavior, interleaved push/pop sequences, and edge cases such as
single-capacity buffers.

Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
 rust/kernel/lib.rs        |   1 +
 rust/kernel/ringbuffer.rs | 321 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 322 insertions(+)

diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index f812cf1200428..d6555ccceb32f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -133,6 +133,7 @@
 pub mod rbtree;
 pub mod regulator;
 pub mod revocable;
+pub mod ringbuffer;
 pub mod scatterlist;
 pub mod security;
 pub mod seq_file;
diff --git a/rust/kernel/ringbuffer.rs b/rust/kernel/ringbuffer.rs
new file mode 100644
index 0000000000000..9a66ebf1bb390
--- /dev/null
+++ b/rust/kernel/ringbuffer.rs
@@ -0,0 +1,321 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A fixed-capacity FIFO ring buffer.
+//!
+//! This module provides [`RingBuffer`], a circular buffer implementation that
+//! supports efficient push and pop operations at opposite ends of the buffer.
+
+use kernel::prelude::*;
+
+/// A fixed-capacity FIFO ring buffer.
+///
+/// `RingBuffer` is a circular buffer that allows pushing elements to the head
+/// and popping elements from the tail in constant time. The buffer has a fixed
+/// capacity specified at construction time and will return an error if a push
+/// is attempted when full.
+///
+/// # Invariants
+///
+/// - `self.head` points at the next empty slot.
+/// - `self.tail` points at the last full slot, except if the buffer is empty.
+/// - The buffer is empty when `self.head == self.tail`.
+/// - The buffer will always have at least one empty slot, even when full.
+pub struct RingBuffer<T> {
+    nodes: KVec<Option<T>>,
+    size: usize,
+    head: usize,
+    tail: usize,
+}
+
+impl<T> RingBuffer<T> {
+    /// Creates a new `RingBuffer` with the specified capacity.
+    ///
+    /// The buffer will be able to hold exactly `capacity` elements. Memory is
+    /// allocated during construction.
+    ///
+    /// # Errors
+    ///
+    /// Returns an error if memory allocation fails.
+    pub fn new(capacity: usize) -> Result<Self> {
+        let mut this = Self {
+            nodes: KVec::with_capacity(capacity + 1, GFP_KERNEL)?,
+            size: capacity + 1,
+            head: 0,
+            tail: 0,
+        };
+
+        for _ in 0..this.size {
+            this.nodes.push_within_capacity(None)?;
+        }
+
+        Ok(this)
+    }
+
+    /// Returns `true` if the buffer is full.
+    ///
+    /// When the buffer is full, any call to [`push_head`] will return an error.
+    ///
+    /// [`push_head`]: Self::push_head
+    pub fn full(&self) -> bool {
+        (self.head + 1) % self.size == self.tail
+    }
+
+    fn empty(&self) -> bool {
+        self.head == self.tail
+    }
+
+    /// Returns the number of available slots in the buffer.
+    ///
+    /// This is the number of elements that can be pushed before the buffer
+    /// becomes full.
+    pub fn free_count(&self) -> usize {
+        (if self.head >= self.tail {
+            self.size - (self.head - self.tail)
+        } else {
+            (self.size - self.tail) + self.head
+        } - 1)
+    }
+
+    /// Pushes a value to the head of the buffer.
+    ///
+    /// # Errors
+    ///
+    /// Returns [`ENOSPC`] if the buffer is full.
+    pub fn push_head(&mut self, value: T) -> Result {
+        if self.full() {
+            return Err(ENOSPC);
+        }
+
+        self.nodes[self.head] = Some(value);
+        self.head = (self.head + 1) % self.size;
+
+        Ok(())
+    }
+
+    /// Pops and returns the value at the tail of the buffer.
+    ///
+    /// Returns `None` if the buffer is empty. Values are returned in FIFO
+    /// order, i.e., the oldest value is returned first.
+    pub fn pop_tail(&mut self) -> Option<T> {
+        if self.empty() {
+            return None;
+        }
+
+        let value = self.nodes[self.tail].take();
+        self.tail = (self.tail + 1) % self.size;
+        value
+    }
+}
+
+impl<T> Drop for RingBuffer<T> {
+    fn drop(&mut self) {
+        while !self.empty() {
+            drop(self.pop_tail().expect("Not empty"));
+        }
+    }
+}
+
+#[kunit_tests(rust_kernel_ringbuffer)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_new_buffer_is_empty() {
+        let buffer: RingBuffer<i32> = RingBuffer::new(5).expect("Failed to create buffer");
+        assert!(!buffer.full());
+        assert!(buffer.empty());
+    }
+
+    #[test]
+    fn test_new_buffer_has_correct_capacity() {
+        let capacity = 10;
+        let buffer: RingBuffer<i32> = RingBuffer::new(capacity).expect("Failed to create buffer");
+        assert_eq!(buffer.free_count(), capacity);
+    }
+
+    #[test]
+    fn test_push_single_element() {
+        let mut buffer: RingBuffer<i32> = RingBuffer::new(5).expect("Failed to create buffer");
+        assert!(buffer.push_head(42).is_ok());
+        assert!(!buffer.empty());
+        assert!(!buffer.full());
+    }
+
+    #[test]
+    fn test_push_and_pop_single_element() {
+        let mut buffer: RingBuffer<i32> = RingBuffer::new(5).expect("Failed to create buffer");
+        buffer.push_head(42).expect("Failed to push");
+        let value = buffer.pop_tail();
+        assert_eq!(value, Some(42));
+        assert!(buffer.empty());
+    }
+
+    #[test]
+    fn test_push_and_pop_multiple_elements() {
+        let mut buffer: RingBuffer<i32> = RingBuffer::new(5).expect("Failed to create buffer");
+
+        for i in 0..5 {
+            buffer.push_head(i).expect("Failed to push");
+        }
+
+        for i in 0..5 {
+            assert_eq!(buffer.pop_tail(), Some(i));
+        }
+
+        assert!(buffer.empty());
+    }
+
+    #[test]
+    fn test_pop_from_empty_buffer() {
+        let mut buffer: RingBuffer<i32> = RingBuffer::new(5).expect("Failed to create buffer");
+        assert_eq!(buffer.pop_tail(), None);
+    }
+
+    #[test]
+    fn test_buffer_full() {
+        let capacity = 3;
+        let mut buffer: RingBuffer<i32> =
+            RingBuffer::new(capacity).expect("Failed to create buffer");
+
+        for i in 0..capacity {
+            assert!(buffer.push_head(i as i32).is_ok());
+        }
+
+        assert!(buffer.full());
+        assert_eq!(buffer.free_count(), 0);
+    }
+
+    #[test]
+    fn test_push_to_full_buffer_fails() {
+        let capacity = 3;
+        let mut buffer: RingBuffer<i32> =
+            RingBuffer::new(capacity).expect("Failed to create buffer");
+
+        for i in 0..capacity {
+            buffer.push_head(i as i32).expect("Failed to push");
+        }
+
+        assert!(buffer.full());
+        assert!(buffer.push_head(999).is_err());
+    }
+
+    #[test]
+    fn test_free_count_decreases_on_push() {
+        let capacity = 5;
+        let mut buffer: RingBuffer<i32> =
+            RingBuffer::new(capacity).expect("Failed to create buffer");
+
+        assert_eq!(buffer.free_count(), capacity);
+
+        for i in 0..capacity {
+            buffer.push_head(i as i32).expect("Failed to push");
+            assert_eq!(buffer.free_count(), capacity - i - 1);
+        }
+    }
+
+    #[test]
+    fn test_free_count_increases_on_pop() {
+        let capacity = 5;
+        let mut buffer: RingBuffer<i32> =
+            RingBuffer::new(capacity).expect("Failed to create buffer");
+
+        for i in 0..capacity {
+            buffer.push_head(i as i32).expect("Failed to push");
+        }
+
+        assert_eq!(buffer.free_count(), 0);
+
+        for i in 1..=capacity {
+            buffer.pop_tail();
+            assert_eq!(buffer.free_count(), i);
+        }
+    }
+
+    #[test]
+    fn test_wrap_around_behavior() {
+        let capacity = 3;
+        let mut buffer: RingBuffer<i32> =
+            RingBuffer::new(capacity).expect("Failed to create buffer");
+
+        // Fill the buffer.
+        for i in 0..capacity {
+            buffer.push_head(i as i32).expect("Failed to push");
+        }
+
+        // Pop two elements.
+        assert_eq!(buffer.pop_tail(), Some(0));
+        assert_eq!(buffer.pop_tail(), Some(1));
+
+        // Push two more (should wrap around).
+        buffer.push_head(10).expect("Failed to push");
+        buffer.push_head(11).expect("Failed to push");
+
+        // Verify order.
+        assert_eq!(buffer.pop_tail(), Some(2));
+        assert_eq!(buffer.pop_tail(), Some(10));
+        assert_eq!(buffer.pop_tail(), Some(11));
+        assert!(buffer.empty());
+    }
+
+    #[test]
+    fn test_fifo_order() {
+        let mut buffer: RingBuffer<i32> = RingBuffer::new(10).expect("Failed to create buffer");
+
+        let values = [1, 2, 3, 4, 5];
+        for &value in &values {
+            buffer.push_head(value).expect("Failed to push");
+        }
+
+        for &expected in &values {
+            assert_eq!(buffer.pop_tail(), Some(expected));
+        }
+    }
+
+    #[test]
+    fn test_interleaved_push_pop() {
+        let mut buffer: RingBuffer<i32> = RingBuffer::new(5).expect("Failed to create buffer");
+
+        buffer.push_head(1).expect("Failed to push");
+        buffer.push_head(2).expect("Failed to push");
+        assert_eq!(buffer.pop_tail(), Some(1));
+
+        buffer.push_head(3).expect("Failed to push");
+        assert_eq!(buffer.pop_tail(), Some(2));
+        assert_eq!(buffer.pop_tail(), Some(3));
+
+        assert!(buffer.empty());
+    }
+
+    #[test]
+    fn test_buffer_state_after_fill_and_drain() {
+        let capacity = 4;
+        let mut buffer: RingBuffer<i32> =
+            RingBuffer::new(capacity).expect("Failed to create buffer");
+
+        // Fill and drain twice.
+        for _ in 0..2 {
+            for i in 0..capacity {
+                buffer.push_head(i as i32).expect("Failed to push");
+            }
+            assert!(buffer.full());
+
+            for _ in 0..capacity {
+                buffer.pop_tail();
+            }
+            assert!(buffer.empty());
+        }
+    }
+
+    #[test]
+    fn test_single_capacity_buffer() {
+        let mut buffer: RingBuffer<i32> = RingBuffer::new(1).expect("Failed to create buffer");
+
+        assert_eq!(buffer.free_count(), 1);
+        buffer.push_head(42).expect("Failed to push");
+        assert!(buffer.full());
+        assert_eq!(buffer.free_count(), 0);
+
+        assert_eq!(buffer.pop_tail(), Some(42));
+        assert!(buffer.empty());
+    }
+}

---
base-commit: 05f7e89ab9731565d8a62e3b5d1ec206485eeb0b
change-id: 20260215-ringbuffer-42455964aaf2

Best regards,
-- 
Andreas Hindborg <a.hindborg@kernel.org>



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-15 20:24 [PATCH] rust: add a ring buffer implementation Andreas Hindborg
@ 2026-02-16  4:35 ` Daniel Almeida
  2026-02-16  7:11   ` Andreas Hindborg
  2026-02-16 12:25 ` Alice Ryhl
  1 sibling, 1 reply; 17+ messages in thread
From: Daniel Almeida @ 2026-02-16  4:35 UTC (permalink / raw)
  To: Andreas Hindborg
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	linux-kernel, rust-for-linux, Andreas Hindborg


Hi Andreas,

> On 15 Feb 2026, at 17:25, Andreas Hindborg <a.hindborg@kernel.org> wrote:
> 
> Add a fixed-capacity FIFO ring buffer. The implementation uses a circular
> buffer with head and tail pointers, providing constant-time push and pop
> operations.
> 
> The module includes a few tests covering basic operations, wrap-around
> behavior, interleaved push/pop sequences, and edge cases such as
> single-capacity buffers.
> 
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> 
> <cut>

(sorry for the formatting, replying from my phone)

We should probably make the backing memory configurable from the get go instead of hardcoding KVec. It will be cumbersome to retrofit this change later on, IMHO. I assume that others agree?

For example, we can immediately use this in Tyr if you let the backing memory be a mapped GEM object. Having it be DMA memory is probably also going to be useful for others.
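
Roughly what I have in mind, as an untested userspace sketch (the `Backing` trait and its method names are invented here, and `Vec` stands in for `KVec`):

```rust
// Untested sketch: the backing storage becomes a trait so that KVec, a mapped
// GEM object, or DMA memory can all sit behind the same interface. The trait
// and method names are invented, and plain std types stand in for kernel ones.
trait Backing<T> {
    /// Number of slots in the backing storage.
    fn len(&self) -> usize;
    /// Mutable access to one slot.
    fn slot_mut(&mut self, idx: usize) -> &mut Option<T>;
}

// A Vec-backed implementation, standing in for KVec.
impl<T> Backing<T> for Vec<Option<T>> {
    fn len(&self) -> usize {
        Vec::len(self)
    }
    fn slot_mut(&mut self, idx: usize) -> &mut Option<T> {
        &mut self[idx]
    }
}
```

The ring buffer would then only ever touch its storage through this trait.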

— Daniel


* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16  4:35 ` Daniel Almeida
@ 2026-02-16  7:11   ` Andreas Hindborg
  2026-02-16 11:44     ` Daniel Almeida
  0 siblings, 1 reply; 17+ messages in thread
From: Andreas Hindborg @ 2026-02-16  7:11 UTC (permalink / raw)
  To: Daniel Almeida
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	linux-kernel, rust-for-linux

"Daniel Almeida" <daniel.almeida@collabora.com> writes:

> Hi Andreas,
>

<cut>

>
> (sorry for the formatting, replying from my phone)
>
> We should probably make the backing memory configurable from the get go instead of hardcoding KVec. It will be cumbersome to retrofit this change later on, IMHO. I assume that others agree?
>
> For example, we can immediately use this in Tyr if you let the backing memory be a mapped GEM object. Having it be DMA memory is probably also going to be useful for others.

It should at least take flags.

We can make the backing completely configurable. We could have an
unsafe initializer that requires just a pointer and a size at the lowest
level. The current version uses array indexing to access the memory. We
could overlay a mutable slice on the pointer and continue using array
accesses, or we could switch to plain pointer offset calculations, but
the latter seems less ideal.
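
As an untested sketch of that lowest level (userspace types standing in for kernel ones, and the names are made up):

```rust
use core::slice;

// Untested sketch of the fully configurable lowest level: the caller hands us
// raw memory and we overlay a mutable slice on it, keeping the array-indexing
// style of access.
struct RawRingBuffer<'a, T> {
    nodes: &'a mut [Option<T>],
    head: usize,
    tail: usize,
}

impl<'a, T> RawRingBuffer<'a, T> {
    /// # Safety
    ///
    /// `ptr` must point to `size` initialized `Option<T>` slots that remain
    /// valid and unaliased for the lifetime `'a`.
    unsafe fn from_raw_parts(ptr: *mut Option<T>, size: usize) -> Self {
        Self {
            // SAFETY: guaranteed by the caller per the safety contract above.
            nodes: unsafe { slice::from_raw_parts_mut(ptr, size) },
            head: 0,
            tail: 0,
        }
    }
}
```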


Best regards,
Andreas Hindborg




* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16  7:11   ` Andreas Hindborg
@ 2026-02-16 11:44     ` Daniel Almeida
  0 siblings, 0 replies; 17+ messages in thread
From: Daniel Almeida @ 2026-02-16 11:44 UTC (permalink / raw)
  To: Andreas Hindborg
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	linux-kernel, rust-for-linux



> On 16 Feb 2026, at 04:14, Andreas Hindborg <a.hindborg@kernel.org> wrote:
> 
> "Daniel Almeida" <daniel.almeida@collabora.com> writes:
> 
>> Hi Andreas,
>> 
> 
> <cut>
> 
>> 
>> (sorry for the formatting, replying from my phone)
>> 
>> We should probably make the backing memory configurable from the get go instead of hardcoding KVec. It will be cumbersome to retrofit this change later on, IMHO. I assume that others agree?
>> 
>> For example, we can immediately use this in Tyr if you let the backing memory be a mapped GEM object. Having it be DMA memory is probably also going to be useful for others.
> 
> It should at least take flags.
> 
> We can make the backing completely configurable. We could have an
> unsafe initializer that requires just a pointer and a size at the lowest
> level. The current version uses array indexing to access the memory. We
> could overlay a mutable slice on the pointer and continue using array
> accesses, or we could switch to plain pointer offset calculations, but
> the latter seems less ideal.
> 
> 
> Best regards,
> Andreas Hindborg
> 


Consider adding a second generic parameter with a bound on IoCapable. This is probably what we want here, as it lets us delegate the memory accesses to the underlying IoCapable implementation.
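
A hedged sketch of what I mean (`IoCapable` and its method names are stand-ins here, since the exact trait shape is up for discussion; std types instead of kernel ones):

```rust
// Invented stand-in for the proposed IoCapable bound: every slot access is
// delegated to the backing implementation, which could be plain memory, a
// mapped GEM object, or DMA memory.
trait IoCapable<T> {
    fn read_slot(&self, idx: usize) -> T;
    fn write_slot(&mut self, idx: usize, value: T);
}

// Trivial Vec-based backing for illustration.
impl<T: Copy> IoCapable<T> for Vec<T> {
    fn read_slot(&self, idx: usize) -> T {
        self[idx]
    }
    fn write_slot(&mut self, idx: usize, value: T) {
        self[idx] = value;
    }
}

struct RingBuffer<T, B: IoCapable<T>> {
    backing: B,
    size: usize,
    head: usize,
    tail: usize,
    _marker: std::marker::PhantomData<T>,
}

impl<T, B: IoCapable<T>> RingBuffer<T, B> {
    fn push_head(&mut self, value: T) -> Result<(), ()> {
        if (self.head + 1) % self.size == self.tail {
            return Err(()); // full
        }
        // The memory access is delegated to the IoCapable implementation.
        self.backing.write_slot(self.head, value);
        self.head = (self.head + 1) % self.size;
        Ok(())
    }
}
```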

— Daniel 


* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-15 20:24 [PATCH] rust: add a ring buffer implementation Andreas Hindborg
  2026-02-16  4:35 ` Daniel Almeida
@ 2026-02-16 12:25 ` Alice Ryhl
  2026-02-16 12:43   ` Daniel Almeida
  2026-02-16 13:27   ` Andreas Hindborg
  1 sibling, 2 replies; 17+ messages in thread
From: Alice Ryhl @ 2026-02-16 12:25 UTC (permalink / raw)
  To: Andreas Hindborg
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Trevor Gross, Danilo Krummrich, linux-kernel,
	rust-for-linux

On Sun, Feb 15, 2026 at 09:24:59PM +0100, Andreas Hindborg wrote:
> Add a fixed-capacity FIFO ring buffer. The implementation uses a circular
> buffer with head and tail pointers, providing constant-time push and pop
> operations.
> 
> The module includes a few tests covering basic operations, wrap-around
> behavior, interleaved push/pop sequences, and edge cases such as
> single-capacity buffers.
> 
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>

Why call this ringbuffer instead of matching the stdlib name for the
same collection? VecDeque.

And a more general question .. is there any chance we could avoid
rolling our own for this? Is there an impl in the kernel we could take?
Or could we vendor code from stdlib?

>  rust/kernel/lib.rs        |   1 +
>  rust/kernel/ringbuffer.rs | 321 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 322 insertions(+)
> 
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index f812cf1200428..d6555ccceb32f 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -133,6 +133,7 @@
>  pub mod rbtree;
>  pub mod regulator;
>  pub mod revocable;
> +pub mod ringbuffer;
>  pub mod scatterlist;
>  pub mod security;
>  pub mod seq_file;
> diff --git a/rust/kernel/ringbuffer.rs b/rust/kernel/ringbuffer.rs
> new file mode 100644
> index 0000000000000..9a66ebf1bb390
> --- /dev/null
> +++ b/rust/kernel/ringbuffer.rs

This should probably be in rust/kernel/alloc/ next to Vec?

> @@ -0,0 +1,321 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! A fixed-capacity FIFO ring buffer.
> +//!
> +//! This module provides [`RingBuffer`], a circular buffer implementation that
> +//! supports efficient push and pop operations at opposite ends of the buffer.
> +
> +use kernel::prelude::*;
> +
> +/// A fixed-capacity FIFO ring buffer.
> +///
> +/// `RingBuffer` is a circular buffer that allows pushing elements to the head
> +/// and popping elements from the tail in constant time. The buffer has a fixed
> +/// capacity specified at construction time and will return an error if a push
> +/// is attempted when full.
> +///
> +/// # Invariants
> +///
> +/// - `self.head` points at the next empty slot.
> +/// - `self.tail` points at the last full slot, except if the buffer is empty.
> +/// - The buffer is empty when `self.head == self.tail`.
> +/// - The buffer will always have at least one empty slot, even when full.
> +pub struct RingBuffer<T> {
> +    nodes: KVec<Option<T>>,

This is quite inefficient storage for any type T that does not contain a
non-nullable pointer.
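
For example, on a typical 64-bit target (illustrative userspace snippet; the niche optimization only makes `Option` free for types like `Box` with an impossible bit pattern):

```rust
use std::mem::size_of;

// Option<Box<u64>> reuses the null niche of the pointer, so None costs
// nothing, while Option<u64> needs an extra discriminant word plus padding.
// Sizes below assume a typical 64-bit target.
fn option_sizes() -> (usize, usize) {
    (size_of::<Option<Box<u64>>>(), size_of::<Option<u64>>())
}
```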

> +    size: usize,

This size is just the vector's capacity. Field is redundant.

> +    pub fn new(capacity: usize) -> Result<Self> {
> +        let mut this = Self {
> +            nodes: KVec::with_capacity(capacity + 1, GFP_KERNEL)?,

Should return ENOMEM on capacity == usize::MAX instead of panic.
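
I.e., something like this untested sketch (a plain `Result` and a string stand in for the kernel's `Result` and `ENOMEM`):

```rust
// Untested sketch: reject capacity == usize::MAX up front rather than
// panicking on overflow. "ENOMEM" stands in for the kernel error constant.
fn checked_size(capacity: usize) -> Result<usize, &'static str> {
    // One slot is reserved to distinguish full from empty.
    capacity.checked_add(1).ok_or("ENOMEM")
}
```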

> +    /// Returns the number of available slots in the buffer.
> +    ///
> +    /// This is the number of elements that can be pushed before the buffer
> +    /// becomes full.
> +    pub fn free_count(&self) -> usize {
> +        (if self.head >= self.tail {
> +            self.size - (self.head - self.tail)
> +        } else {
> +            (self.size - self.tail) + self.head
> +        } - 1)

The else branch should just be `self.tail - self.head`.
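
I.e., as an untested free-standing sketch with the trailing `- 1` folded into each branch:

```rust
// Untested sketch of the simplified computation. When head < tail, the
// occupied region wraps around, and tail - head - 1 slots remain free (one
// slot is always reserved to distinguish full from empty).
fn free_count(size: usize, head: usize, tail: usize) -> usize {
    if head >= tail {
        size - (head - tail) - 1
    } else {
        tail - head - 1
    }
}
```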

> +impl<T> Drop for RingBuffer<T> {
> +    fn drop(&mut self) {
> +        while !self.empty() {
> +            drop(self.pop_tail().expect("Not empty"));
> +        }

The destructor of KVec already drops the items.

Alice


* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 12:25 ` Alice Ryhl
@ 2026-02-16 12:43   ` Daniel Almeida
  2026-02-16 13:27   ` Andreas Hindborg
  1 sibling, 0 replies; 17+ messages in thread
From: Daniel Almeida @ 2026-02-16 12:43 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Andreas Hindborg, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross,
	Danilo Krummrich, linux-kernel, rust-for-linux




> On 16 Feb 2026, at 09:26, Alice Ryhl <aliceryhl@google.com> wrote:
> 
> On Sun, Feb 15, 2026 at 09:24:59PM +0100, Andreas Hindborg wrote:
>> Add a fixed-capacity FIFO ring buffer. The implementation uses a circular
>> buffer with head and tail pointers, providing constant-time push and pop
>> operations.
>> 
>> The module includes a few tests covering basic operations, wrap-around
>> behavior, interleaved push/pop sequences, and edge cases such as
>> single-capacity buffers.
>> 
>> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> 
> Why call this ringbuffer instead of matching the stdlib name for the
> same collection? VecDeque.
> 
> And a more general question .. is there any chance we could avoid
> rolling our own for this? Is there an impl in the kernel we could take?
> Or could we vendor code from stdlib?
> 

Well, a pretty good reason is matching this with IoCapable, as I suggested. That is also a strong argument against VecDeque, since this is now unrelated to Vec itself.


— Daniel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 12:25 ` Alice Ryhl
  2026-02-16 12:43   ` Daniel Almeida
@ 2026-02-16 13:27   ` Andreas Hindborg
  2026-02-16 13:45     ` Daniel Almeida
  1 sibling, 1 reply; 17+ messages in thread
From: Andreas Hindborg @ 2026-02-16 13:27 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Trevor Gross, Danilo Krummrich, linux-kernel,
	rust-for-linux

"Alice Ryhl" <aliceryhl@google.com> writes:

> On Sun, Feb 15, 2026 at 09:24:59PM +0100, Andreas Hindborg wrote:
>> Add a fixed-capacity FIFO ring buffer. The implementation uses a circular
>> buffer with head and tail pointers, providing constant-time push and pop
>> operations.
>>
>> The module includes a few tests covering basic operations, wrap-around
>> behavior, interleaved push/pop sequences, and edge cases such as
>> single-capacity buffers.
>>
>> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
>
> Why call this ringbuffer instead of matching the stdlib name for the
> same collection? VecDeque.

I did not have stdlib in mind at all when writing this. I needed a
ringbuffer, so that is what I called it. I am fine with renaming it to
whatever is more idiomatic.

>
> And a more general question .. is there any chance we could avoid
> rolling our own for this? Is there an impl in the kernel we could take?
> Or could we vendor code from stdlib?

I did not have a look at the stdlib VecDeque. With us decoupling from
`alloc`, I don't think that makes sense. If you think it is worthwhile, I
can go have a look.

>
>>  rust/kernel/lib.rs        |   1 +
>>  rust/kernel/ringbuffer.rs | 321 ++++++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 322 insertions(+)
>>
>> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
>> index f812cf1200428..d6555ccceb32f 100644
>> --- a/rust/kernel/lib.rs
>> +++ b/rust/kernel/lib.rs
>> @@ -133,6 +133,7 @@
>>  pub mod rbtree;
>>  pub mod regulator;
>>  pub mod revocable;
>> +pub mod ringbuffer;
>>  pub mod scatterlist;
>>  pub mod security;
>>  pub mod seq_file;
>> diff --git a/rust/kernel/ringbuffer.rs b/rust/kernel/ringbuffer.rs
>> new file mode 100644
>> index 0000000000000..9a66ebf1bb390
>> --- /dev/null
>> +++ b/rust/kernel/ringbuffer.rs
>
> This should probably be in rust/kernel/alloc/ next to Vec?

Ok.

>
>> @@ -0,0 +1,321 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +//! A fixed-capacity FIFO ring buffer.
>> +//!
>> +//! This module provides [`RingBuffer`], a circular buffer implementation that
>> +//! supports efficient push and pop operations at opposite ends of the buffer.
>> +
>> +use kernel::prelude::*;
>> +
>> +/// A fixed-capacity FIFO ring buffer.
>> +///
>> +/// `RingBuffer` is a circular buffer that allows pushing elements to the head
>> +/// and popping elements from the tail in constant time. The buffer has a fixed
>> +/// capacity specified at construction time and will return an error if a push
>> +/// is attempted when full.
>> +///
>> +/// # Invariants
>> +///
>> +/// - `self.head` points at the next empty slot.
>> +/// - `self.tail` points at the last full slot, except if the buffer is empty.
>> +/// - The buffer is empty when `self.head == self.tail`.
>> +/// - The buffer will always have at least one empty slot, even when full.
>> +pub struct RingBuffer<T> {
>> +    nodes: KVec<Option<T>>,
>
> This is quite inefficient storage for any type T that does not contain a
> non-nullable pointer.

But for nonzero types it is great, and makes this entire module have
zero unsafe code. We can remove it and rely on head and tail to decide
what is valid if you prefer?
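To make the storage cost concrete, here is a userspace sketch (std Rust rather than kernel code; the pointer niche is guaranteed by the language, the integer sizes hold on typical 64-bit targets):

```rust
use std::mem::size_of;

fn main() {
    // A non-nullable pointer gives Option a "niche": no extra space.
    assert_eq!(size_of::<Option<Box<u64>>>(), size_of::<Box<u64>>());
    // A plain integer has no niche, so Option<u64> needs an extra
    // discriminant word, padded up to the alignment of u64.
    assert_eq!(size_of::<Option<u64>>(), 2 * size_of::<u64>());
    println!("ok");
}
```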

>
>> +    size: usize,
>
> This size is just the vector's capacity. Field is redundant.

Cool.

>
>> +    pub fn new(capacity: usize) -> Result<Self> {
>> +        let mut this = Self {
>> +            nodes: KVec::with_capacity(capacity + 1, GFP_KERNEL)?,
>
> Should return ENOMEM on capacity == usize::MAX instead of panic.

Good call.

>
>> +    /// Returns the number of available slots in the buffer.
>> +    ///
>> +    /// This is the number of elements that can be pushed before the buffer
>> +    /// becomes full.
>> +    pub fn free_count(&self) -> usize {
>> +        (if self.head >= self.tail {
>> +            self.size - (self.head - self.tail)
>> +        } else {
>> +            (self.size - self.tail) + self.head
>> +        } - 1)
>
> The else branch should just be `self.tail - self.head`.

Right.

>
>> +impl<T> Drop for RingBuffer<T> {
>> +    fn drop(&mut self) {
>> +        while !self.empty() {
>> +            drop(self.pop_tail().expect("Not empty"));
>> +        }
>
> The destructor of KVec already drops the items.

Of course. While writing this response, I was thinking of getting rid of
the `Option` and making the KVec elements MaybeUninit. Then we would need
to drop the valid ones here.
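A userspace sketch of what that would look like (std `Vec` and `MaybeUninit` standing in for the kernel types; `Ring`, `Tracked`, and the methods are illustrative, not the patch API):

```rust
use std::cell::Cell;
use std::mem::MaybeUninit;
use std::rc::Rc;

// Minimal ring with uninitialized slots; the Vec destructor no longer
// drops elements, so Drop must walk the valid range itself.
struct Ring<T> {
    slots: Vec<MaybeUninit<T>>,
    head: usize, // next empty slot
    tail: usize, // oldest full slot (== head when empty)
}

impl<T> Ring<T> {
    fn new(capacity: usize) -> Self {
        let mut slots = Vec::new();
        slots.resize_with(capacity + 1, MaybeUninit::uninit);
        Ring { slots, head: 0, tail: 0 }
    }

    fn push(&mut self, value: T) {
        // Sketch only: a real implementation must reject pushes when full.
        self.slots[self.head] = MaybeUninit::new(value);
        self.head = (self.head + 1) % self.slots.len();
    }
}

impl<T> Drop for Ring<T> {
    fn drop(&mut self) {
        let mut i = self.tail;
        while i != self.head {
            // SAFETY: slots in tail..head (modulo wrap-around) were
            // initialized by `push` and not yet popped.
            unsafe { self.slots[i].assume_init_drop() };
            i = (i + 1) % self.slots.len();
        }
    }
}

// Counts drops so we can observe that exactly the live elements die.
struct Tracked(Rc<Cell<usize>>);

impl Drop for Tracked {
    fn drop(&mut self) {
        self.0.set(self.0.get() + 1);
    }
}

fn main() {
    let drops = Rc::new(Cell::new(0));
    {
        let mut ring = Ring::new(4);
        ring.push(Tracked(drops.clone()));
        ring.push(Tracked(drops.clone()));
    } // Ring::drop runs here.
    assert_eq!(drops.get(), 2);
    println!("ok");
}
```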

Best regards,
Andreas Hindborg



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 13:27   ` Andreas Hindborg
@ 2026-02-16 13:45     ` Daniel Almeida
  2026-02-16 14:06       ` Danilo Krummrich
  0 siblings, 1 reply; 17+ messages in thread
From: Daniel Almeida @ 2026-02-16 13:45 UTC (permalink / raw)
  To: Andreas Hindborg
  Cc: Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross,
	Danilo Krummrich, linux-kernel, rust-for-linux


> 
>> 
>>> rust/kernel/lib.rs        |   1 +
>>> rust/kernel/ringbuffer.rs | 321 ++++++++++++++++++++++++++++++++++++++++++++++
>>> 2 files changed, 322 insertions(+)
>>> 
>>> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
>>> index f812cf1200428..d6555ccceb32f 100644
>>> --- a/rust/kernel/lib.rs
>>> +++ b/rust/kernel/lib.rs
>>> @@ -133,6 +133,7 @@
>>> pub mod rbtree;
>>> pub mod regulator;
>>> pub mod revocable;
>>> +pub mod ringbuffer;
>>> pub mod scatterlist;
>>> pub mod security;
>>> pub mod seq_file;
>>> diff --git a/rust/kernel/ringbuffer.rs b/rust/kernel/ringbuffer.rs
>>> new file mode 100644
>>> index 0000000000000..9a66ebf1bb390
>>> --- /dev/null
>>> +++ b/rust/kernel/ringbuffer.rs
>> 
>> This should probably be in rust/kernel/alloc/ next to Vec?
> 
> Ok.

With the allocation being handled by a separate component, I don’t think
this is right. I think a better location is rust/kernel/io



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 13:45     ` Daniel Almeida
@ 2026-02-16 14:06       ` Danilo Krummrich
  2026-02-16 14:21         ` Daniel Almeida
  0 siblings, 1 reply; 17+ messages in thread
From: Danilo Krummrich @ 2026-02-16 14:06 UTC (permalink / raw)
  To: Daniel Almeida
  Cc: Andreas Hindborg, Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux

On Mon Feb 16, 2026 at 2:45 PM CET, Daniel Almeida wrote:
> With the allocation being handled by a separate component, I don’t think
> this is right. I think a better location is rust/kernel/io

I'm not sure it is reasonable to ask people who just want a ringbuffer in system
memory to take the indirection over an I/O ringbuffer implementation with
generic I/O backends choosing the system memory I/O backend.

The proposed code is simple, without comments and tests, less than 100 lines of
code. The I/O infrastructure to make this happen is still WIP. So, I think it's
fine to land it as VecDeque for now.

Once we have the I/O backend infrastructure, a system memory I/O backend that
can deal with separate allocators *and* a ring buffer implementation that sits
on top of it, we can still revisit if it makes sense to take advantage of
synergies.

But for now this seems a bit premature in terms of delaying Andreas' work.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 14:06       ` Danilo Krummrich
@ 2026-02-16 14:21         ` Daniel Almeida
  2026-02-16 14:39           ` Danilo Krummrich
  0 siblings, 1 reply; 17+ messages in thread
From: Daniel Almeida @ 2026-02-16 14:21 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Andreas Hindborg, Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux



> On 16 Feb 2026, at 11:06, Danilo Krummrich <dakr@kernel.org> wrote:
> 
> On Mon Feb 16, 2026 at 2:45 PM CET, Daniel Almeida wrote:
>> With the allocation being handled by a separate component, I don’t think
>> this is right. I think a better location is rust/kernel/io
> 
> I'm not sure it is reasonable to ask people who just want a ringbuffer in system
> memory to take the indirection over an I/O ringbuffer implementation with
> generic I/O backends choosing the system memory I/O backend.
> 
> The proposed code is simple, without comments and tests, less than 100 lines of
> code. The I/O infrastructure to make this happen is still WIP. So, I think it's
> fine to land it as VecDeque for now.


Well, this is a 100-line patch, but nothing was said about how much else was going
to be added on top in the future. In order to avoid iterating on what I
consider the wrong approach, I suggested that we start out in the right
direction from the start, something that Andreas himself apparently agreed to.


> Once we have the I/O backend infrastructure, a system memory I/O backend that
> can deal with separate allocators *and* a ring buffer implementation that sits
> on top of it, we can still revisit if it makes sense to take advantage of
> synergies.
> 
> But for now this seems a bit premature in terms of delaying Andreas' work.

IIUC, and feel free to correct me on this, the I/O backends are already in the
works. What is missing is a trivial system memory backend, and similarly a
ringbuffer implementation, which is the subject of this patch. I don't see this
as a lot of work or an unreasonable ask.

— Daniel

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 14:21         ` Daniel Almeida
@ 2026-02-16 14:39           ` Danilo Krummrich
  2026-02-16 14:46             ` Daniel Almeida
  0 siblings, 1 reply; 17+ messages in thread
From: Danilo Krummrich @ 2026-02-16 14:39 UTC (permalink / raw)
  To: Daniel Almeida
  Cc: Andreas Hindborg, Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux

On Mon Feb 16, 2026 at 3:21 PM CET, Daniel Almeida wrote:
>
>
>> On 16 Feb 2026, at 11:06, Danilo Krummrich <dakr@kernel.org> wrote:
>> 
>> On Mon Feb 16, 2026 at 2:45 PM CET, Daniel Almeida wrote:
>>> With the allocation being handled by a separate component, I don’t think
>>> this is right. I think a better location is rust/kernel/io
>> 
>> I'm not sure it is reasonable to ask people who just want a ringbuffer in system
>> memory to take the indirection over an I/O ringbuffer implementation with
>> generic I/O backends choosing the system memory I/O backend.
>> 
>> The proposed code is simple, without comments and tests, less than 100 lines of
>> code. The I/O infrastructure to make this happen is still WIP. So, I think it's
>> fine to land it as VecDeque for now.
>
>
> Well, this is a 100-line patch, but nothing was said about how much else was going
> to be added on top in the future. In order to avoid iterating on what I
> consider the wrong approach, I suggested that we start out in the right
> direction from the start, something that Andreas himself apparently agreed to.

I haven't seen any commitment to implement this in terms of generic I/O backends
from Andreas.

>> Once we have the I/O backend infrastructure, a system memory I/O backend that
>> can deal with separate allocators *and* a ring buffer implementation that sits
>> on top of it, we can still revisit if it makes sense to take advantage of
>> synergies.
>> 
>> But for now this seems a bit premature in terms of delaying Andreas' work.
>
> IIUC, and feel free to correct me on this, the I/O backends are already in the
> works. What is missing is a trivial system memory backend, and similarly a
> ringbuffer implementation, which is the subject of this patch. I don't see this
> as a lot of work or an unreasonable ask.

I think you are highly underestimating the consequences of this design.

First of all, we're missing IoView (the generalization of IoSlice), which also
supports projection. Gary is working on this, but it is completely unclear when
it will land.

Second, the system memory backend would have to be implemented and needs to be
generic over arbitrary allocators. Also note that I/O backends do *not*
implement growing and shrinking of the backing memory, which would be another
limitation for a derived VecDeque type to swallow. Alternatively, it is yet
another thing we have to implement.

Then we have to implement the I/O ringbuffer type, which, due to being generic
over I/O backends, also has to consider the device lifecycle requirements for
the corresponding I/O backend, which is additional complexity as well.

So, don't get me wrong, it is a good idea and exactly in the sense of my vision
of how powerful I want the I/O infrastructure to be, but for now, I think it is
unreasonable to ask Andreas to wait for all this.

- Danilo

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 14:39           ` Danilo Krummrich
@ 2026-02-16 14:46             ` Daniel Almeida
  2026-02-17 10:02               ` Andreas Hindborg
  0 siblings, 1 reply; 17+ messages in thread
From: Daniel Almeida @ 2026-02-16 14:46 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Andreas Hindborg, Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux



> On 16 Feb 2026, at 11:39, Danilo Krummrich <dakr@kernel.org> wrote:
> 
> On Mon Feb 16, 2026 at 3:21 PM CET, Daniel Almeida wrote:
>> 
>> 
>>> On 16 Feb 2026, at 11:06, Danilo Krummrich <dakr@kernel.org> wrote:
>>> 
>>> On Mon Feb 16, 2026 at 2:45 PM CET, Daniel Almeida wrote:
>>>> With the allocation being handled by a separate component, I don’t think
>>>> this is right. I think a better location is rust/kernel/io
>>> 
>>> I'm not sure it is reasonable to ask people who just want a ringbuffer in system
>>> memory to take the indirection over an I/O ringbuffer implementation with
>>> generic I/O backends choosing the system memory I/O backend.
>>> 
>>> The proposed code is simple, without comments and tests, less than 100 lines of
>>> code. The I/O infrastructure to make this happen is still WIP. So, I think it's
>>> fine to land it as VecDeque for now.
>> 
>> 
>> Well, this is a 100-line patch, but nothing was said about how much else was going
>> to be added on top in the future. In order to avoid iterating on what I
>> consider the wrong approach, I suggested that we start out in the right
>> direction from the start, something that Andreas himself apparently agreed to.
> 
> I haven't seen any commitment to implement this in terms of generic I/O backends
> from Andreas.
> 
>>> Once we have the I/O backend infrastructure, a system memory I/O backend that
>>> can deal with separate allocators *and* a ring buffer implementation that sits
>>> on top of it, we can still revisit if it makes sense to take advantage of
>>> synergies.
>>> 
>>> But for now this seems a bit premature in terms of delaying Andreas' work.
>> 
>> IIUC, and feel free to correct me on this, the I/O backends are already in the
>> works. What is missing is a trivial system memory backend, and similarly a
>> ringbuffer implementation, which is the subject of this patch. I don't see this
>> as a lot of work or an unreasonable ask.
> 
> I think you are highly underestimating the consequences of this design.

Might very well be the case, yeah.

> 
> First of all, we're missing IoView (the generalization of IoSlice), which also
> supports projection. Gary is working on this, but it is completely unclear when
> it will land.
> 
> Second, the system memory backend would have to be implemented and needs to be
> generic over arbitrary allocators. Also note that I/O backends do *not*
> implement growing and shrinking of the backing memory, which would be another
> limitation for a derived VecDeque type to swallow. Alternatively, it is yet
> another thing we have to implement.
> 
> Then we have to implement the I/O ringbuffer type, which, due to being generic
> over I/O backends, also has to consider the device lifecycle requirements for
> the corresponding I/O backend, which is additional complexity as well.
> 
> So, don't get me wrong, it is a good idea and exactly in the sense of my vision
> of how powerful I want the I/O infrastructure to be, but for now, I think it is
> unreasonable to ask Andreas to wait for all this.
> 
> - Danilo

Fine, let’s stick to VecDeque then. From what I am reading above, there
don’t seem to be any NAKs of a more general ring buffer type because of the
current patch; it’s just not a good time yet. In which case, there are no
complaints from me.

— Daniel



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-16 14:46             ` Daniel Almeida
@ 2026-02-17 10:02               ` Andreas Hindborg
  2026-02-17 14:26                 ` Danilo Krummrich
  0 siblings, 1 reply; 17+ messages in thread
From: Andreas Hindborg @ 2026-02-17 10:02 UTC (permalink / raw)
  To: Daniel Almeida, Danilo Krummrich
  Cc: Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux

"Daniel Almeida" <daniel.almeida@collabora.com> writes:

>> On 16 Feb 2026, at 11:39, Danilo Krummrich <dakr@kernel.org> wrote:
>>
>> On Mon Feb 16, 2026 at 3:21 PM CET, Daniel Almeida wrote:
>>>
>>>
>>>> On 16 Feb 2026, at 11:06, Danilo Krummrich <dakr@kernel.org> wrote:
>>>>
>>>> On Mon Feb 16, 2026 at 2:45 PM CET, Daniel Almeida wrote:
>>>>> With the allocation being handled by a separate component, I don’t think
>>>>> this is right. I think a better location is rust/kernel/io
>>>>
>>>> I'm not sure it is reasonable to ask people who just want a ringbuffer in system
>>>> memory to take the indirection over an I/O ringbuffer implementation with
>>>> generic I/O backends choosing the system memory I/O backend.
>>>>
>>>> The proposed code is simple, without comments and tests, less than 100 lines of
>>>> code. The I/O infrastructure to make this happen is still WIP. So, I think it's
>>>> fine to land it as VecDeque for now.
>>>
>>>
>>> Well, this is a 100-line patch, but nothing was said about how much else was going
>>> to be added on top in the future. In order to avoid iterating on what I
>>> consider the wrong approach, I suggested that we start out in the right
>>> direction from the start, something that Andreas himself apparently agreed to.
>>
>> I haven't seen any commitment to implement this in terms of generic I/O backends
>> from Andreas.
>>
>>>> Once we have the I/O backend infrastructure, a system memory I/O backend that
>>>> can deal with separate allocators *and* a ring buffer implementation that sits
>>>> on top of it, we can still revisit if it makes sense to take advantage of
>>>> synergies.
>>>>
>>>> But for now this seems a bit premature in terms of delaying Andreas' work.
>>>
>>> IIUC, and feel free to correct me on this, the I/O backends are already in the
>>> works. What is missing is a trivial system memory backend, and similarly a
>>> ringbuffer implementation, which is the subject of this patch. I don't see this
>>> as a lot of work or an unreasonable ask.
>>
>> I think you are highly underestimating the consequences of this design.
>
> Might very well be the case, yeah.
>
>>
>> First of all, we're missing IoView (the generalization of IoSlice), which also
>> supports projection. Gary is working on this, but it is completely unclear when
>> it will land.
>>
>> Second, the system memory backend would have to be implemented and needs to be
>> generic over arbitrary allocators. Also note that I/O backends do *not*
>> implement growing and shrinking of the backing memory, which would be another
>> limitation for a derived VecDeque type to swallow. Alternatively, it is yet
>> another thing we have to implement.
>>
>> Then we have to implement the I/O ringbuffer type, which, due to being generic
>> over I/O backends, also has to consider the device lifecycle requirements for
>> the corresponding I/O backend, which is additional complexity as well.
>>
>> So, don't get me wrong, it is a good idea and exactly in the sense of my vision
>> of how powerful I want the I/O infrastructure to be, but for now, I think it is
>> unreasonable to ask Andreas to wait for all this.
>>
>> - Danilo
>
> Fine, let’s stick to VecDeque then. From what I am reading above, there
> doesn’t seem to be any naks for a more general ring buffer type due to the
> current patch; it’s just not a good time yet. In which case, there’s no
> complaints from me.

We can change the code down the road, no problem. It's not set in stone
just because we merge it without generic alloc support.

Perhaps one could imagine a simple API like in this patch being provided
by a configurable implementation behind the scenes.


Best regards,
Andreas Hindborg



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-17 10:02               ` Andreas Hindborg
@ 2026-02-17 14:26                 ` Danilo Krummrich
  2026-02-17 19:10                   ` Andreas Hindborg
  0 siblings, 1 reply; 17+ messages in thread
From: Danilo Krummrich @ 2026-02-17 14:26 UTC (permalink / raw)
  To: Andreas Hindborg
  Cc: Daniel Almeida, Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux

On Tue Feb 17, 2026 at 11:02 AM CET, Andreas Hindborg wrote:
> We can change the code down the road, no problem. It's not set in stone
> just because we merge it without generic alloc support.

Just to avoid any ambiguity, we should merge it with generic allocator support,
but aiming for arbitrary I/O backend support would be a bit too much.

> Perhaps one could imagine a simple API like in this patch being provided
> by a configurable implementation behind the scenes.

Yeah, in the future we could implement the system memory specific one with a
type alias on top of the I/O backend agnostic one. But it remains to be seen
whether this will actually work out properly or if it's even worth it in terms of
maintainability etc.

(E.g. one of the limitations that I've mentioned already is that I/O backends do
not support growing and shrinking of the backing memory. And I'm not yet sure if
they ever should.)

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-17 14:26                 ` Danilo Krummrich
@ 2026-02-17 19:10                   ` Andreas Hindborg
  2026-02-17 19:25                     ` Daniel Almeida
  2026-02-18  8:29                     ` Alice Ryhl
  0 siblings, 2 replies; 17+ messages in thread
From: Andreas Hindborg @ 2026-02-17 19:10 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Daniel Almeida, Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux

"Danilo Krummrich" <dakr@kernel.org> writes:

> On Tue Feb 17, 2026 at 11:02 AM CET, Andreas Hindborg wrote:
>> We can change the code down the road, no problem. It's not set in stone
>> just because we merge it without generic alloc support.
>
> Just to avoid any ambiguity, we should merge it with generic allocator support,
> but aiming for arbitrary I/O backend support would be a bit too much.

Right.

>
>> Perhaps one could imagine a simple API like in this patch being provided
>> by a configurable implementation behind the scenes.
>
> Yeah, in the future we could implement the system memory specific one with a
> type alias on top of the I/O backend agnostic one. But it remains to see if this
> will actually work out properly or if it's even worth in terms of
> maintainability etc.
>
> (E.g. one of the limitations that I've mentioned already is that I/O backends do
> not support growing and shrinking of the backing memory. And I'm not yet sure if
> they ever should.)

This particular ringbuffer* implementation does not support changing the
capacity after initialization, so I don't see that being an issue. For
applications that need to scale the size of the ring dynamically, we
could have another type. Or a generic parameter, if that makes sense.

Best regards,
Andreas Hindborg


* VecDeque? This kind of structure has always been a ring to me.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-17 19:10                   ` Andreas Hindborg
@ 2026-02-17 19:25                     ` Daniel Almeida
  2026-02-18  8:29                     ` Alice Ryhl
  1 sibling, 0 replies; 17+ messages in thread
From: Daniel Almeida @ 2026-02-17 19:25 UTC (permalink / raw)
  To: Andreas Hindborg
  Cc: Danilo Krummrich, Alice Ryhl, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Trevor Gross, linux-kernel,
	rust-for-linux



> On 17 Feb 2026, at 16:10, Andreas Hindborg <a.hindborg@kernel.org> wrote:
> 
> "Danilo Krummrich" <dakr@kernel.org> writes:
> 
>> On Tue Feb 17, 2026 at 11:02 AM CET, Andreas Hindborg wrote:
>>> We can change the code down the road, no problem. It's not set in stone
>>> just because we merge it without generic alloc support.
>> 
>> Just to avoid any ambiguity, we should merge it with generic allocator support,
>> but aiming for arbitrary I/O backend support would be a bit too much.
> 
> Right.
> 
>> 
>>> Perhaps one could imagine a simple API like in this patch being provided
>>> by a configurable implementation behind the scenes.
>> 
>> Yeah, in the future we could implement the system memory specific one with a
>> type alias on top of the I/O backend agnostic one. But it remains to be seen
>> whether this will actually work out properly or if it's even worth it in terms of
>> maintainability etc.
>> 
>> (E.g. one of the limitations that I've mentioned already is that I/O backends do
>> not support growing and shrinking of the backing memory. And I'm not yet sure if
>> they ever should.)
> 
> This particular ringbuffer* implementation does not support changing the
> capacity after initialization, so I don't see that being an issue. For
> applications that need to scale the size of the ring dynamically, we
> could have another type. Or a generic parameter, if that makes sense.

At the risk of sounding a bit silly with the oversimplification here,
isn’t this merely a problem of offering specific impls for backends where
you can reallocate? I.e.:

impl<T> RingBuffer<T, SomeActualBackend> {
    // Realloc if needed; only implemented for backends where this is possible.
    fn force_push(&mut self, item: T) { ... }
}

> 
> Best regards,
> Andreas Hindborg
> 
> 
> * VecDeque? This kind of structure has always been a ring to me.



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] rust: add a ring buffer implementation
  2026-02-17 19:10                   ` Andreas Hindborg
  2026-02-17 19:25                     ` Daniel Almeida
@ 2026-02-18  8:29                     ` Alice Ryhl
  1 sibling, 0 replies; 17+ messages in thread
From: Alice Ryhl @ 2026-02-18  8:29 UTC (permalink / raw)
  To: Andreas Hindborg
  Cc: Danilo Krummrich, Daniel Almeida, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Trevor Gross,
	linux-kernel, rust-for-linux

On Tue, Feb 17, 2026 at 8:11 PM Andreas Hindborg <a.hindborg@kernel.org> wrote:
> * VecDeque? This kind of structure has always been a ring to me.

I suppose it's only a deque if you can push/pop from both ends.
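For reference, that is exactly the stdlib split: a deque exposes both ends, and a bounded FIFO is the one-sided restriction of it. A userspace sketch using std `VecDeque` (the `cap` check is illustrative; std `VecDeque` itself grows on demand):

```rust
use std::collections::VecDeque;

fn main() {
    let cap = 3;
    let mut q: VecDeque<u32> = VecDeque::with_capacity(cap);

    // FIFO / ring use: push to one end, pop from the other.
    for v in [1, 2, 3] {
        if q.len() < cap {
            q.push_back(v);
        }
    }
    assert_eq!(q.pop_front(), Some(1));

    // What makes it a *deque*: the opposite ends work too.
    q.push_front(0);
    assert_eq!(q.pop_back(), Some(3));
    println!("ok");
}
```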

Alice

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2026-02-18  8:29 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-15 20:24 [PATCH] rust: add a ring buffer implementation Andreas Hindborg
2026-02-16  4:35 ` Daniel Almeida
2026-02-16  7:11   ` Andreas Hindborg
2026-02-16 11:44     ` Daniel Almeida
2026-02-16 12:25 ` Alice Ryhl
2026-02-16 12:43   ` Daniel Almeida
2026-02-16 13:27   ` Andreas Hindborg
2026-02-16 13:45     ` Daniel Almeida
2026-02-16 14:06       ` Danilo Krummrich
2026-02-16 14:21         ` Daniel Almeida
2026-02-16 14:39           ` Danilo Krummrich
2026-02-16 14:46             ` Daniel Almeida
2026-02-17 10:02               ` Andreas Hindborg
2026-02-17 14:26                 ` Danilo Krummrich
2026-02-17 19:10                   ` Andreas Hindborg
2026-02-17 19:25                     ` Daniel Almeida
2026-02-18  8:29                     ` Alice Ryhl

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox