* [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings
@ 2025-12-04 21:51 Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Joel Fernandes @ 2025-12-04 21:51 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, dri-devel, Danilo Krummrich,
David Airlie
Cc: Alexandre Courbot, Alistair Popple, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Simona Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, John Hubbard,
Timur Tabi, joel, Elle Rhumsaa, Daniel Almeida, Andrea Righi,
Philipp Stanner, nouveau, Zhi Wang, Christian König,
Jonathan Corbet, Alex Deucher, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Edwin Peer, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This series combines a number of other series that build toward the same goal:
making it possible to use the DRM buddy allocator from nova-core Rust code. See
the links to the individual series below.
The git tree with all patches can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: clist-buddy-v4-20251204)
Changes for v4:
- Combined the clist and DRM buddy series:
  - A Rust module (`CList`) to access circular linked lists from Rust code.
  - The DRM buddy allocator bindings that were originally part of the RFC.
- Moved the DRM buddy allocator one level up to drivers/gpu/ so it can be used by
  GPU drivers (e.g., nova-core) that have use cases other than DRM.
- Added Rust bindings for the GPU buddy allocator.
Notes from past cover letters about CList:
Introduction
============
This patchset introduces an interface to iterate over circular doubly linked
lists used in the kernel (allocated by C kernel code). The main use case is
iterating over the list of blocks provided by the GPU buddy allocator.
The series also moves the DRM buddy allocator one level up and adds Rust
bindings for it, enabling GPU drivers like nova-core to use it.
A question may arise: Why not use Rust's list.rs for this?
==========================================================
Rust's list.rs provides safe intrusive lists for Rust-allocated items. In doing
so, it takes ownership of the items in the list and of the links between them.
In the GPU buddy allocator bindings use case, however, the C side allocates the
items in the list and also links them together. This creates an ownership
conflict that makes list.rs a poor fit for this use case: what we need is a view
of the list, not ownership of it. Further, in a bindings use case the list links
may come from C-allocated objects, not from the Rust side.
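To make the distinction concrete, here is a minimal sketch of what such a view
looks like using the clist interface from patch 1. `Block` is a hypothetical
#[repr(transparent)] wrapper over the C `struct gpu_buddy_block` (the real
wrappers are added in patch 3), and `head` is the sentinel `list_head` of a
block list that the C allocator built and owns:

  use kernel::{bindings, clist_create, types::Opaque};

  #[repr(transparent)]
  struct Block(Opaque<bindings::gpu_buddy_block>);

  // SAFETY: `head` is a valid sentinel and the list is not modified while the
  // view is in use; the items themselves are C-allocated and only borrowed.
  let blocks = unsafe { clist_create!(head, Block, bindings::gpu_buddy_block, link) };

  for _block in blocks.iter() {
      // Each `_block: &Block` is a read-only view; nothing is freed or relinked.
  }

The list is created, populated and eventually torn down entirely by the C side;
the Rust side only iterates over it.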
Link to v2 (clist only): https://lore.kernel.org/all/20251111171315.2196103-4-joelagnelf@nvidia.com/
Notes and patches about DRM buddy code movement and DRM buddy bindings:
Link to RFC: https://lore.kernel.org/all/20251030190613.1224287-1-joelagnelf@nvidia.com/
Link to DRM buddy move discussion: https://lore.kernel.org/all/20251124234432.1988476-1-joelagnelf@nvidia.com/
Joel Fernandes (3):
rust: clist: Add support to interface with C linked lists
gpu: Move DRM buddy allocator one level up
rust: gpu: Add GPU buddy allocator bindings
Documentation/gpu/drm-mm.rst | 10 +-
MAINTAINERS | 7 +
drivers/gpu/Kconfig | 13 +
drivers/gpu/Makefile | 2 +
drivers/gpu/buddy.c | 1310 +++++++++++++++++
drivers/gpu/drm/Kconfig | 1 +
drivers/gpu/drm/Kconfig.debug | 4 +-
drivers/gpu/drm/amd/amdgpu/Kconfig | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 80 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 20 +-
drivers/gpu/drm/drm_buddy.c | 1287 +---------------
drivers/gpu/drm/i915/Kconfig | 1 +
drivers/gpu/drm/i915/i915_scatterlist.c | 10 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 6 +-
.../drm/i915/selftests/intel_memory_region.c | 20 +-
drivers/gpu/drm/tests/Makefile | 1 -
.../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 5 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 4 +-
drivers/gpu/drm/xe/Kconfig | 1 +
drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
drivers/gpu/drm/xe/xe_svm.c | 12 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 73 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 4 +-
drivers/gpu/tests/Makefile | 3 +
.../gpu_buddy_test.c} | 390 ++---
drivers/gpu/tests/gpu_random.c | 48 +
drivers/gpu/tests/gpu_random.h | 28 +
drivers/video/Kconfig | 2 +
include/drm/drm_buddy.h | 163 +-
include/linux/gpu_buddy.h | 177 +++
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 +
rust/helpers/helpers.c | 2 +
rust/helpers/list.c | 12 +
rust/kernel/clist.rs | 357 +++++
rust/kernel/gpu/buddy.rs | 527 +++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 3 +
42 files changed, 2944 insertions(+), 1800 deletions(-)
create mode 100644 drivers/gpu/Kconfig
create mode 100644 drivers/gpu/buddy.c
create mode 100644 drivers/gpu/tests/Makefile
rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (68%)
create mode 100644 drivers/gpu/tests/gpu_random.c
create mode 100644 drivers/gpu/tests/gpu_random.h
create mode 100644 include/linux/gpu_buddy.h
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
--
2.34.1
^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH v4 1/3] rust: clist: Add support to interface with C linked lists
2025-12-04 21:51 [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes
@ 2025-12-04 21:51 ` Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 2/3] gpu: Move DRM buddy allocator one level up Joel Fernandes
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Joel Fernandes @ 2025-12-04 21:51 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, dri-devel, Danilo Krummrich,
David Airlie
Cc: Alexandre Courbot, Alistair Popple, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Simona Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, John Hubbard,
Timur Tabi, joel, Elle Rhumsaa, Daniel Almeida, Andrea Righi,
Philipp Stanner, nouveau, Zhi Wang, Christian König,
Jonathan Corbet, Alex Deucher, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Edwin Peer, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add a new module `clist` for working with C's circular doubly linked
lists. Provide low-level iteration over list nodes.
Typed iteration over the actual items is provided via a `clist_create`
macro, which assists in creating the `CList` type.
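For instance (illustrative only: `foo` and its `Foo` wrapper are placeholder
names; a complete, compilable version of this example lives in the clist.rs
module documentation):

  // C side:    struct foo { int value; struct list_head link; };
  // Rust side: #[repr(transparent)] struct Foo(Opaque<bindings::foo>);

  // SAFETY: `head` is a valid, initialized sentinel `*mut bindings::list_head`
  // and the list is not modified while the view is alive.
  let list = unsafe { clist_create!(head, Foo, bindings::foo, link) };
  for foo in list.iter() {
      // `foo: &Foo` borrows a C-allocated item; no ownership is taken.
  }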
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 7 +
rust/helpers/helpers.c | 1 +
rust/helpers/list.c | 12 ++
rust/kernel/clist.rs | 357 +++++++++++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 1 +
5 files changed, 378 insertions(+)
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/clist.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 5f7aa6a8a9a1..fb2ff877b6d1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22666,6 +22666,13 @@ F: rust/kernel/init.rs
F: rust/pin-init/
K: \bpin-init\b|pin_init\b|PinInit
+RUST TO C LIST INTERFACES
+M: Joel Fernandes <joelagnelf@nvidia.com>
+M: Alexandre Courbot <acourbot@nvidia.com>
+L: rust-for-linux@vger.kernel.org
+S: Maintained
+F: rust/kernel/clist.rs
+
RXRPC SOCKETS (AF_RXRPC)
M: David Howells <dhowells@redhat.com>
M: Marc Dionne <marc.dionne@auristor.com>
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 79c72762ad9c..634fa2386bbb 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
+#include "list.c"
#include "maple_tree.c"
#include "mm.c"
#include "mutex.c"
diff --git a/rust/helpers/list.c b/rust/helpers/list.c
new file mode 100644
index 000000000000..6044979c7a2e
--- /dev/null
+++ b/rust/helpers/list.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Helpers for the C circular doubly linked list implementation.
+ */
+
+#include <linux/list.h>
+
+void rust_helper_list_add_tail(struct list_head *new, struct list_head *head)
+{
+ list_add_tail(new, head);
+}
diff --git a/rust/kernel/clist.rs b/rust/kernel/clist.rs
new file mode 100644
index 000000000000..b4ee3149903a
--- /dev/null
+++ b/rust/kernel/clist.rs
@@ -0,0 +1,357 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! A C circular doubly linked intrusive list interface for Rust code.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! bindings,
+//! clist::init_list_head,
+//! clist_create,
+//! types::Opaque, //
+//! };
+//! # // Create test list with values (0, 10, 20) - normally done by C code but it is
+//! # // emulated here for doctests using the C bindings.
+//! # use core::mem::MaybeUninit;
+//! #
+//! # /// C struct with embedded `list_head` (typically will be allocated by C code).
+//! # #[repr(C)]
+//! # pub(crate) struct SampleItemC {
+//! # pub value: i32,
+//! # pub link: bindings::list_head,
+//! # }
+//! #
+//! # let mut head = MaybeUninit::<bindings::list_head>::uninit();
+//! #
+//! # let head = head.as_mut_ptr();
+//! # // SAFETY: head and all the items are test objects allocated in this scope.
+//! # unsafe { init_list_head(head) };
+//! #
+//! # let mut items = [
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # ];
+//! #
+//! # for (i, item) in items.iter_mut().enumerate() {
+//! # let ptr = item.as_mut_ptr();
+//! # // SAFETY: pointers are to allocated test objects with a list_head field.
+//! # unsafe {
+//! # (*ptr).value = i as i32 * 10;
+//! # // addr_of_mut!() computes address of link directly as link is uninitialized.
+//! # init_list_head(core::ptr::addr_of_mut!((*ptr).link));
+//! # bindings::list_add_tail(&mut (*ptr).link, head);
+//! # }
+//! # }
+//!
+//! // Rust wrapper for the C struct.
+//! // The list item struct in this example is defined in C code as:
+//! // struct SampleItemC {
+//! // int value;
+//! // struct list_head link;
+//! // };
+//! //
+//! #[repr(transparent)]
+//! pub(crate) struct Item(Opaque<SampleItemC>);
+//!
+//! impl Item {
+//! pub(crate) fn value(&self) -> i32 {
+//! // SAFETY: [`Item`] has same layout as [`SampleItemC`].
+//! unsafe { (*self.0.get()).value }
+//! }
+//! }
+//!
+//! // Create typed [`CList`] from sentinel head.
+//! // SAFETY: head is valid, items are [`SampleItemC`] with embedded `link` field.
+//! let list = unsafe { clist_create!(head, Item, SampleItemC, link) };
+//!
+//! // Iterate directly over typed items.
+//! let mut found_0 = false;
+//! let mut found_10 = false;
+//! let mut found_20 = false;
+//!
+//! for item in list.iter() {
+//! let val = item.value();
+//! if val == 0 { found_0 = true; }
+//! if val == 10 { found_10 = true; }
+//! if val == 20 { found_20 = true; }
+//! }
+//!
+//! assert!(found_0 && found_10 && found_20);
+//! ```
+
+use core::{
+ iter::FusedIterator,
+ marker::PhantomData, //
+};
+
+use crate::{
+ bindings,
+ types::Opaque, //
+};
+
+use pin_init::PinInit;
+
+/// Initialize a `list_head` object to point to itself.
+///
+/// # Safety
+///
+/// `list` must be a valid pointer to a `list_head` object.
+#[inline]
+pub unsafe fn init_list_head(list: *mut bindings::list_head) {
+ // SAFETY: Caller guarantees `list` is a valid pointer to a `list_head`.
+ unsafe {
+ (*list).next = list;
+ (*list).prev = list;
+ }
+}
+
+/// Wraps a `list_head` object for use in intrusive linked lists.
+///
+/// # Invariants
+///
+/// - [`CListHead`] represents an allocated and valid `list_head` structure.
+/// - Once a [`CListHead`] is created in Rust, it will not be modified by non-Rust code.
+/// - All `list_head` for individual items are not modified for the lifetime of [`CListHead`].
+#[repr(transparent)]
+pub struct CListHead(Opaque<bindings::list_head>);
+
+impl CListHead {
+ /// Create a `&CListHead` reference from a raw `list_head` pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
+ /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
+ // SAFETY:
+ // - [`CListHead`] has same layout as `list_head`.
+ // - `ptr` is valid and unmodified for 'a.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Get the raw `list_head` pointer.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::list_head {
+ self.0.get()
+ }
+
+ /// Get the next [`CListHead`] in the list.
+ #[inline]
+ pub fn next(&self) -> &Self {
+ let raw = self.as_raw();
+ // SAFETY:
+ // - `self.as_raw()` is valid per type invariants.
+ // - The `next` pointer is guaranteed to be non-NULL.
+ unsafe { Self::from_raw((*raw).next) }
+ }
+
+ /// Get the previous [`CListHead`] in the list.
+ #[inline]
+ pub fn prev(&self) -> &Self {
+ let raw = self.as_raw();
+ // SAFETY:
+ // - self.as_raw() is valid per type invariants.
+ // - The `prev` pointer is guaranteed to be non-NULL.
+ unsafe { Self::from_raw((*raw).prev) }
+ }
+
+ /// Check if this node is linked in a list (not isolated).
+ #[inline]
+ pub fn is_linked(&self) -> bool {
+ let raw = self.as_raw();
+ // SAFETY: self.as_raw() is valid per type invariants.
+ unsafe { (*raw).next != raw && (*raw).prev != raw }
+ }
+
+ /// Fallible pin-initializer that initializes and then calls user closure.
+ ///
+ /// Initializes the list head first, then passes `&CListHead` to the closure.
+ /// This hides the raw FFI pointer from the user.
+ pub fn try_init<E>(
+ init_func: impl FnOnce(&CListHead) -> Result<(), E>,
+ ) -> impl PinInit<Self, E> {
+ // SAFETY: init_list_head initializes the list_head to point to itself.
+ // After initialization, we create a reference to pass to the closure.
+ unsafe {
+ pin_init::pin_init_from_closure(move |slot: *mut Self| {
+ init_list_head(slot.cast());
+ // SAFETY: slot is now initialized, safe to create reference.
+ init_func(&*slot)
+ })
+ }
+ }
+}
+
+// SAFETY: [`CListHead`] can be sent to any thread.
+unsafe impl Send for CListHead {}
+
+// SAFETY: [`CListHead`] can be shared among threads as it is not modified
+// by non-Rust code per type invariants.
+unsafe impl Sync for CListHead {}
+
+impl PartialEq for CListHead {
+ fn eq(&self, other: &Self) -> bool {
+ self.as_raw() == other.as_raw()
+ }
+}
+
+impl Eq for CListHead {}
+
+/// Low-level iterator over `list_head` nodes.
+///
+/// An iterator used to iterate over a C intrusive linked list (`list_head`). Caller has to
+/// perform conversion of returned [`CListHead`] to an item (using `container_of` macro or similar).
+///
+/// # Invariants
+///
+/// [`CListHeadIter`] is iterating over an allocated, initialized and valid list.
+struct CListHeadIter<'a> {
+ current_head: &'a CListHead,
+ list_head: &'a CListHead,
+}
+
+impl<'a> Iterator for CListHeadIter<'a> {
+ type Item = &'a CListHead;
+
+ #[inline]
+ fn next(&mut self) -> Option<Self::Item> {
+ // Advance to next node.
+ let next = self.current_head.next();
+
+ // Check if we've circled back to the sentinel head.
+ if next == self.list_head {
+ None
+ } else {
+ self.current_head = next;
+ Some(self.current_head)
+ }
+ }
+}
+
+impl<'a> FusedIterator for CListHeadIter<'a> {}
+
+/// A typed C linked list with a sentinel head.
+///
+/// A sentinel head represents the entire linked list and can be used for
+/// iteration over items of type `T`; it is not associated with a specific item.
+///
+/// The const generic `OFFSET` specifies the byte offset of the `list_head` field within
+/// the struct that `T` wraps.
+///
+/// # Invariants
+///
+/// - `head` is an allocated and valid C `list_head` structure that is the list's sentinel.
+/// - `OFFSET` is the byte offset of the `list_head` field within the struct that `T` wraps.
+/// - All the list's `list_head` nodes are allocated and have valid next/prev pointers.
+/// - The underlying `list_head` (and entire list) is not modified for the lifetime `'a`.
+pub struct CList<'a, T, const OFFSET: usize> {
+ head: &'a CListHead,
+ _phantom: PhantomData<&'a T>,
+}
+
+impl<'a, T, const OFFSET: usize> CList<'a, T, OFFSET> {
+ /// Create a typed [`CList`] from a raw sentinel `list_head` pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure
+ /// representing a list sentinel.
+ /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+ /// - The list must contain items where the `list_head` field is at byte offset `OFFSET`.
+ /// - `T` must be `#[repr(transparent)]` over the C struct.
+ #[inline]
+ pub unsafe fn from_raw(ptr: *mut bindings::list_head) -> Self {
+ Self {
+ // SAFETY: Caller guarantees `ptr` is a valid, sentinel `list_head` object.
+ head: unsafe { CListHead::from_raw(ptr) },
+ _phantom: PhantomData,
+ }
+ }
+
+ /// Get the raw sentinel `list_head` pointer.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::list_head {
+ self.head.as_raw()
+ }
+
+ /// Check if the list is empty.
+ #[inline]
+ pub fn is_empty(&self) -> bool {
+ let raw = self.as_raw();
+ // SAFETY: self.as_raw() is valid per type invariants.
+ unsafe { (*raw).next == raw }
+ }
+
+ /// Create an iterator over typed items.
+ #[inline]
+ pub fn iter(&self) -> CListIter<'a, T, OFFSET> {
+ CListIter {
+ head_iter: CListHeadIter {
+ current_head: self.head,
+ list_head: self.head,
+ },
+ _phantom: PhantomData,
+ }
+ }
+}
+
+/// High-level iterator over typed list items.
+pub struct CListIter<'a, T, const OFFSET: usize> {
+ head_iter: CListHeadIter<'a>,
+ _phantom: PhantomData<&'a T>,
+}
+
+impl<'a, T, const OFFSET: usize> Iterator for CListIter<'a, T, OFFSET> {
+ type Item = &'a T;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ let head = self.head_iter.next()?;
+
+ // Convert to item using OFFSET.
+ // SAFETY: `item_ptr` calculation from `OFFSET` (calculated using offset_of!)
+ // is valid per invariants.
+ Some(unsafe { &*head.as_raw().byte_sub(OFFSET).cast::<T>() })
+ }
+}
+
+impl<'a, T, const OFFSET: usize> FusedIterator for CListIter<'a, T, OFFSET> {}
+
+/// Create a C circular doubly linked list interface [`CList`] from a raw `list_head` pointer.
+///
+/// This macro creates a [`CList<T, OFFSET>`] that can iterate over items of type `$rust_type`
+/// linked via the `$field` field in the underlying C struct `$c_type`.
+///
+/// # Arguments
+///
+/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
+/// - `$rust_type`: Each item's Rust wrapper type.
+/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
+/// - `$field`: The name of the `list_head` field within the C struct.
+///
+/// # Safety
+///
+/// The caller must ensure:
+/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
+/// unmodified for the lifetime of the Rust [`CList`].
+/// - The list contains items of type `$c_type` linked via an embedded `$field`.
+/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
+/// - The macro is called from an unsafe block.
+///
+/// # Examples
+///
+/// Refer to the examples in the [`crate::clist`] module documentation.
+#[macro_export]
+macro_rules! clist_create {
+ ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
+ // Compile-time check that field path is a list_head.
+ let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
+ |p| ::core::ptr::addr_of!((*p).$($field).+);
+
+ // Calculate offset and create `CList`.
+ const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
+ $crate::clist::CList::<$rust_type, OFFSET>::from_raw($head)
+ }};
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index c2eea9a2a345..b69cc5ed3b59 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -72,6 +72,7 @@
pub mod bug;
#[doc(hidden)]
pub mod build_assert;
+pub mod clist;
pub mod clk;
#[cfg(CONFIG_CONFIGFS_FS)]
pub mod configfs;
--
2.34.1
^ permalink raw reply related [flat|nested] 5+ messages in thread
* [PATCH v4 2/3] gpu: Move DRM buddy allocator one level up
2025-12-04 21:51 [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
@ 2025-12-04 21:51 ` Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 3/3] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
2025-12-04 21:57 ` [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes
3 siblings, 0 replies; 5+ messages in thread
From: Joel Fernandes @ 2025-12-04 21:51 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, dri-devel, Danilo Krummrich,
David Airlie
Cc: Alexandre Courbot, Alistair Popple, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Simona Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, John Hubbard,
Timur Tabi, joel, Elle Rhumsaa, Daniel Almeida, Andrea Righi,
Philipp Stanner, nouveau, Zhi Wang, Christian König,
Jonathan Corbet, Alex Deucher, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Edwin Peer, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Move the DRM buddy allocator one level up so that it can be used by GPU
drivers (e.g., nova-core) that have use cases other than DRM (such as
VFIO vGPU support). Modify the API, structures and Kconfigs to use
"gpu_buddy" terminology. Adapt the drivers and tests to use the new API.
The commit cannot be split without breaking bisectability; however, no
functional change is intended. Verified by running the KUnit tests and by
build-testing various configurations.
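For illustration, a minimal usage sketch of the renamed API (error handling
elided; the drm_buddy_*() equivalents map onto these calls one-to-one):

  #include <linux/gpu_buddy.h>
  #include <linux/sizes.h>

  static int gpu_buddy_example(void)
  {
          struct gpu_buddy mm;
          LIST_HEAD(blocks);
          int err;

          /* Manage 1 GiB of address space with a 4 KiB minimum chunk size. */
          err = gpu_buddy_init(&mm, SZ_1G, SZ_4K);
          if (err)
                  return err;

          /* Allocate 1 MiB anywhere in the managed range. */
          err = gpu_buddy_alloc_blocks(&mm, 0, SZ_1G, SZ_1M, SZ_4K, &blocks, 0);
          if (!err)
                  gpu_buddy_free_list(&mm, &blocks, 0);

          gpu_buddy_fini(&mm);
          return err;
  }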
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
Documentation/gpu/drm-mm.rst | 10 +-
drivers/gpu/Kconfig | 13 +
drivers/gpu/Makefile | 2 +
drivers/gpu/buddy.c | 1310 +++++++++++++++++
drivers/gpu/drm/Kconfig | 1 +
drivers/gpu/drm/Kconfig.debug | 4 +-
drivers/gpu/drm/amd/amdgpu/Kconfig | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 80 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 20 +-
drivers/gpu/drm/drm_buddy.c | 1287 +---------------
drivers/gpu/drm/i915/Kconfig | 1 +
drivers/gpu/drm/i915/i915_scatterlist.c | 10 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 6 +-
.../drm/i915/selftests/intel_memory_region.c | 20 +-
drivers/gpu/drm/tests/Makefile | 1 -
.../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 5 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 4 +-
drivers/gpu/drm/xe/Kconfig | 1 +
drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
drivers/gpu/drm/xe/xe_svm.c | 12 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 73 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 4 +-
drivers/gpu/tests/Makefile | 3 +
.../gpu_buddy_test.c} | 390 ++---
drivers/gpu/tests/gpu_random.c | 48 +
drivers/gpu/tests/gpu_random.h | 28 +
drivers/video/Kconfig | 2 +
include/drm/drm_buddy.h | 163 +-
include/linux/gpu_buddy.h | 177 +++
33 files changed, 1997 insertions(+), 1800 deletions(-)
create mode 100644 drivers/gpu/Kconfig
create mode 100644 drivers/gpu/buddy.c
create mode 100644 drivers/gpu/tests/Makefile
rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (68%)
create mode 100644 drivers/gpu/tests/gpu_random.c
create mode 100644 drivers/gpu/tests/gpu_random.h
create mode 100644 include/linux/gpu_buddy.h
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index d55751cad67c..8e0d31230b29 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -509,8 +509,14 @@ DRM GPUVM Function References
DRM Buddy Allocator
===================
-DRM Buddy Function References
------------------------------
+Buddy Allocator Function References (GPU buddy)
+-----------------------------------------------
+
+.. kernel-doc:: drivers/gpu/buddy.c
+ :export:
+
+DRM Buddy Specific Logging Function References
+----------------------------------------------
.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
:export:
diff --git a/drivers/gpu/Kconfig b/drivers/gpu/Kconfig
new file mode 100644
index 000000000000..59bac03b0df6
--- /dev/null
+++ b/drivers/gpu/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0
+
+config GPU_BUDDY
+ tristate
+ help
+ A page based buddy allocator for GPU memory.
+
+config GPU_BUDDY_KUNIT_TEST
+ tristate "KUnit tests for GPU buddy allocator" if !KUNIT_ALL_TESTS
+ depends on GPU_BUDDY && KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ KUnit tests for the GPU buddy allocator.
diff --git a/drivers/gpu/Makefile b/drivers/gpu/Makefile
index 36a54d456630..5063caccabdf 100644
--- a/drivers/gpu/Makefile
+++ b/drivers/gpu/Makefile
@@ -6,3 +6,5 @@ obj-y += host1x/ drm/ vga/
obj-$(CONFIG_IMX_IPUV3_CORE) += ipu-v3/
obj-$(CONFIG_TRACE_GPU_MEM) += trace/
obj-$(CONFIG_NOVA_CORE) += nova-core/
+obj-$(CONFIG_GPU_BUDDY) += buddy.o
+obj-y += tests/
diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
new file mode 100644
index 000000000000..1347c0436617
--- /dev/null
+++ b/drivers/gpu/buddy.c
@@ -0,0 +1,1310 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <kunit/test-bug.h>
+
+#include <linux/export.h>
+#include <linux/gpu_buddy.h>
+#include <linux/kmemleak.h>
+#include <linux/module.h>
+#include <linux/sizes.h>
+
+static struct kmem_cache *slab_blocks;
+
+static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
+ struct gpu_buddy_block *parent,
+ unsigned int order,
+ u64 offset)
+{
+ struct gpu_buddy_block *block;
+
+ BUG_ON(order > GPU_BUDDY_MAX_ORDER);
+
+ block = kmem_cache_zalloc(slab_blocks, GFP_KERNEL);
+ if (!block)
+ return NULL;
+
+ block->header = offset;
+ block->header |= order;
+ block->parent = parent;
+
+ RB_CLEAR_NODE(&block->rb);
+
+ BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
+ return block;
+}
+
+static void gpu_block_free(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ kmem_cache_free(slab_blocks, block);
+}
+
+static enum gpu_buddy_free_tree
+get_block_tree(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_is_clear(block) ?
+ GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
+}
+
+static struct gpu_buddy_block *
+rbtree_get_free_block(const struct rb_node *node)
+{
+ return node ? rb_entry(node, struct gpu_buddy_block, rb) : NULL;
+}
+
+static struct gpu_buddy_block *
+rbtree_last_free_block(struct rb_root *root)
+{
+ return rbtree_get_free_block(rb_last(root));
+}
+
+static bool rbtree_is_empty(struct rb_root *root)
+{
+ return RB_EMPTY_ROOT(root);
+}
+
+static bool gpu_buddy_block_offset_less(const struct gpu_buddy_block *block,
+ const struct gpu_buddy_block *node)
+{
+ return gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node);
+}
+
+static bool rbtree_block_offset_less(struct rb_node *block,
+ const struct rb_node *node)
+{
+ return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
+ rbtree_get_free_block(node));
+}
+
+static void rbtree_insert(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
+ enum gpu_buddy_free_tree tree)
+{
+ rb_add(&block->rb,
+ &mm->free_trees[tree][gpu_buddy_block_order(block)],
+ rbtree_block_offset_less);
+}
+
+static void rbtree_remove(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ unsigned int order = gpu_buddy_block_order(block);
+ enum gpu_buddy_free_tree tree;
+ struct rb_root *root;
+
+ tree = get_block_tree(block);
+ root = &mm->free_trees[tree][order];
+
+ rb_erase(&block->rb, root);
+ RB_CLEAR_NODE(&block->rb);
+}
+
+static void clear_reset(struct gpu_buddy_block *block)
+{
+ block->header &= ~GPU_BUDDY_HEADER_CLEAR;
+}
+
+static void mark_cleared(struct gpu_buddy_block *block)
+{
+ block->header |= GPU_BUDDY_HEADER_CLEAR;
+}
+
+static void mark_allocated(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ block->header &= ~GPU_BUDDY_HEADER_STATE;
+ block->header |= GPU_BUDDY_ALLOCATED;
+
+ rbtree_remove(mm, block);
+}
+
+static void mark_free(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ enum gpu_buddy_free_tree tree;
+
+ block->header &= ~GPU_BUDDY_HEADER_STATE;
+ block->header |= GPU_BUDDY_FREE;
+
+ tree = get_block_tree(block);
+ rbtree_insert(mm, block, tree);
+}
+
+static void mark_split(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ block->header &= ~GPU_BUDDY_HEADER_STATE;
+ block->header |= GPU_BUDDY_SPLIT;
+
+ rbtree_remove(mm, block);
+}
+
+static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2)
+{
+ return s1 <= e2 && e1 >= s2;
+}
+
+static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
+{
+ return s1 <= s2 && e1 >= e2;
+}
+
+static struct gpu_buddy_block *
+__get_buddy(struct gpu_buddy_block *block)
+{
+ struct gpu_buddy_block *parent;
+
+ parent = block->parent;
+ if (!parent)
+ return NULL;
+
+ if (parent->left == block)
+ return parent->right;
+
+ return parent->left;
+}
+
+static unsigned int __gpu_buddy_free(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
+ bool force_merge)
+{
+ struct gpu_buddy_block *parent;
+ unsigned int order;
+
+ while ((parent = block->parent)) {
+ struct gpu_buddy_block *buddy;
+
+ buddy = __get_buddy(block);
+
+ if (!gpu_buddy_block_is_free(buddy))
+ break;
+
+ if (!force_merge) {
+ /*
+ * Check the block and its buddy clear state and exit
+ * the loop if they both have the dissimilar state.
+ */
+ if (gpu_buddy_block_is_clear(block) !=
+ gpu_buddy_block_is_clear(buddy))
+ break;
+
+ if (gpu_buddy_block_is_clear(block))
+ mark_cleared(parent);
+ }
+
+ rbtree_remove(mm, buddy);
+ if (force_merge && gpu_buddy_block_is_clear(buddy))
+ mm->clear_avail -= gpu_buddy_block_size(mm, buddy);
+
+ gpu_block_free(mm, block);
+ gpu_block_free(mm, buddy);
+
+ block = parent;
+ }
+
+ order = gpu_buddy_block_order(block);
+ mark_free(mm, block);
+
+ return order;
+}
+
+static int __force_merge(struct gpu_buddy *mm,
+ u64 start,
+ u64 end,
+ unsigned int min_order)
+{
+ unsigned int tree, order;
+ int i;
+
+ if (!min_order)
+ return -ENOMEM;
+
+ if (min_order > mm->max_order)
+ return -EINVAL;
+
+ for_each_free_tree(tree) {
+ for (i = min_order - 1; i >= 0; i--) {
+ struct rb_node *iter = rb_last(&mm->free_trees[tree][i]);
+
+ while (iter) {
+ struct gpu_buddy_block *block, *buddy;
+ u64 block_start, block_end;
+
+ block = rbtree_get_free_block(iter);
+ iter = rb_prev(iter);
+
+ if (!block || !block->parent)
+ continue;
+
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
+
+ if (!contains(start, end, block_start, block_end))
+ continue;
+
+ buddy = __get_buddy(block);
+ if (!gpu_buddy_block_is_free(buddy))
+ continue;
+
+ WARN_ON(gpu_buddy_block_is_clear(block) ==
+ gpu_buddy_block_is_clear(buddy));
+
+ /*
+ * Advance to the next node when the current node is the buddy,
+ * as freeing the block will also remove its buddy from the tree.
+ */
+ if (iter == &buddy->rb)
+ iter = rb_prev(iter);
+
+ rbtree_remove(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
+
+ order = __gpu_buddy_free(mm, block, true);
+ if (order >= min_order)
+ return 0;
+ }
+ }
+ }
+
+ return -ENOMEM;
+}
+
+/**
+ * gpu_buddy_init - init memory manager
+ *
+ * @mm: GPU buddy manager to initialize
+ * @size: size in bytes to manage
+ * @chunk_size: minimum page size in bytes for our allocations
+ *
+ * Initializes the memory manager and its resources.
+ *
+ * Returns:
+ * 0 on success, error code on failure.
+ */
+int gpu_buddy_init(struct gpu_buddy *mm, u64 size, u64 chunk_size)
+{
+ unsigned int i, j, root_count = 0;
+ u64 offset = 0;
+
+ if (size < chunk_size)
+ return -EINVAL;
+
+ if (chunk_size < SZ_4K)
+ return -EINVAL;
+
+ if (!is_power_of_2(chunk_size))
+ return -EINVAL;
+
+ size = round_down(size, chunk_size);
+
+ mm->size = size;
+ mm->avail = size;
+ mm->clear_avail = 0;
+ mm->chunk_size = chunk_size;
+ mm->max_order = ilog2(size) - ilog2(chunk_size);
+
+ BUG_ON(mm->max_order > GPU_BUDDY_MAX_ORDER);
+
+ mm->free_trees = kmalloc_array(GPU_BUDDY_MAX_FREE_TREES,
+ sizeof(*mm->free_trees),
+ GFP_KERNEL);
+ if (!mm->free_trees)
+ return -ENOMEM;
+
+ for_each_free_tree(i) {
+ mm->free_trees[i] = kmalloc_array(mm->max_order + 1,
+ sizeof(struct rb_root),
+ GFP_KERNEL);
+ if (!mm->free_trees[i])
+ goto out_free_tree;
+
+ for (j = 0; j <= mm->max_order; ++j)
+ mm->free_trees[i][j] = RB_ROOT;
+ }
+
+ mm->n_roots = hweight64(size);
+
+ mm->roots = kmalloc_array(mm->n_roots,
+ sizeof(struct gpu_buddy_block *),
+ GFP_KERNEL);
+ if (!mm->roots)
+ goto out_free_tree;
+
+ /*
+ * Split into power-of-two blocks, in case we are given a size that is
+ * not itself a power-of-two.
+ */
+ do {
+ struct gpu_buddy_block *root;
+ unsigned int order;
+ u64 root_size;
+
+ order = ilog2(size) - ilog2(chunk_size);
+ root_size = chunk_size << order;
+
+ root = gpu_block_alloc(mm, NULL, order, offset);
+ if (!root)
+ goto out_free_roots;
+
+ mark_free(mm, root);
+
+ BUG_ON(root_count > mm->max_order);
+ BUG_ON(gpu_buddy_block_size(mm, root) < chunk_size);
+
+ mm->roots[root_count] = root;
+
+ offset += root_size;
+ size -= root_size;
+ root_count++;
+ } while (size);
+
+ return 0;
+
+out_free_roots:
+ while (root_count--)
+ gpu_block_free(mm, mm->roots[root_count]);
+ kfree(mm->roots);
+out_free_tree:
+ while (i--)
+ kfree(mm->free_trees[i]);
+ kfree(mm->free_trees);
+ return -ENOMEM;
+}
+EXPORT_SYMBOL(gpu_buddy_init);
+
+/**
+ * gpu_buddy_fini - tear down the memory manager
+ *
+ * @mm: GPU buddy manager to free
+ *
+ * Cleanup memory manager resources and the freetree
+ */
+void gpu_buddy_fini(struct gpu_buddy *mm)
+{
+ u64 root_size, size, start;
+ unsigned int order;
+ int i;
+
+ size = mm->size;
+
+ for (i = 0; i < mm->n_roots; ++i) {
+ order = ilog2(size) - ilog2(mm->chunk_size);
+ start = gpu_buddy_block_offset(mm->roots[i]);
+ __force_merge(mm, start, start + size, order);
+
+ if (WARN_ON(!gpu_buddy_block_is_free(mm->roots[i])))
+ kunit_fail_current_test("buddy_fini() root");
+
+ gpu_block_free(mm, mm->roots[i]);
+
+ root_size = mm->chunk_size << order;
+ size -= root_size;
+ }
+
+ WARN_ON(mm->avail != mm->size);
+
+ for_each_free_tree(i)
+ kfree(mm->free_trees[i]);
+ kfree(mm->roots);
+}
+EXPORT_SYMBOL(gpu_buddy_fini);
+
+static int split_block(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ unsigned int block_order = gpu_buddy_block_order(block) - 1;
+ u64 offset = gpu_buddy_block_offset(block);
+
+ BUG_ON(!gpu_buddy_block_is_free(block));
+ BUG_ON(!gpu_buddy_block_order(block));
+
+ block->left = gpu_block_alloc(mm, block, block_order, offset);
+ if (!block->left)
+ return -ENOMEM;
+
+ block->right = gpu_block_alloc(mm, block, block_order,
+ offset + (mm->chunk_size << block_order));
+ if (!block->right) {
+ gpu_block_free(mm, block->left);
+ return -ENOMEM;
+ }
+
+ mark_split(mm, block);
+
+ if (gpu_buddy_block_is_clear(block)) {
+ mark_cleared(block->left);
+ mark_cleared(block->right);
+ clear_reset(block);
+ }
+
+ mark_free(mm, block->left);
+ mark_free(mm, block->right);
+
+ return 0;
+}
+
+/**
+ * gpu_get_buddy - get buddy address
+ *
+ * @block: GPU buddy block
+ *
+ * Returns the corresponding buddy block for @block, or NULL
+ * if this is a root block and can't be merged further.
+ * Requires some kind of locking to protect against
+ * any concurrent allocate and free operations.
+ */
+struct gpu_buddy_block *
+gpu_get_buddy(struct gpu_buddy_block *block)
+{
+ return __get_buddy(block);
+}
+EXPORT_SYMBOL(gpu_get_buddy);
+
+/**
+ * gpu_buddy_reset_clear - reset blocks clear state
+ *
+ * @mm: GPU buddy manager
+ * @is_clear: blocks clear state
+ *
+ * Reset the clear state based on @is_clear value for each block
+ * in the freetree.
+ */
+void gpu_buddy_reset_clear(struct gpu_buddy *mm, bool is_clear)
+{
+ enum gpu_buddy_free_tree src_tree, dst_tree;
+ u64 root_size, size, start;
+ unsigned int order;
+ int i;
+
+ size = mm->size;
+ for (i = 0; i < mm->n_roots; ++i) {
+ order = ilog2(size) - ilog2(mm->chunk_size);
+ start = gpu_buddy_block_offset(mm->roots[i]);
+ __force_merge(mm, start, start + size, order);
+
+ root_size = mm->chunk_size << order;
+ size -= root_size;
+ }
+
+ src_tree = is_clear ? GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
+ dst_tree = is_clear ? GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
+
+ for (i = 0; i <= mm->max_order; ++i) {
+ struct rb_root *root = &mm->free_trees[src_tree][i];
+ struct gpu_buddy_block *block, *tmp;
+
+ rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
+ rbtree_remove(mm, block);
+ if (is_clear) {
+ mark_cleared(block);
+ mm->clear_avail += gpu_buddy_block_size(mm, block);
+ } else {
+ clear_reset(block);
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
+ }
+
+ rbtree_insert(mm, block, dst_tree);
+ }
+ }
+}
+EXPORT_SYMBOL(gpu_buddy_reset_clear);
+
+/**
+ * gpu_buddy_free_block - free a block
+ *
+ * @mm: GPU buddy manager
+ * @block: block to be freed
+ */
+void gpu_buddy_free_block(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ BUG_ON(!gpu_buddy_block_is_allocated(block));
+ mm->avail += gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail += gpu_buddy_block_size(mm, block);
+
+ __gpu_buddy_free(mm, block, false);
+}
+EXPORT_SYMBOL(gpu_buddy_free_block);
+
+static void __gpu_buddy_free_list(struct gpu_buddy *mm,
+ struct list_head *objects,
+ bool mark_clear,
+ bool mark_dirty)
+{
+ struct gpu_buddy_block *block, *on;
+
+ WARN_ON(mark_dirty && mark_clear);
+
+ list_for_each_entry_safe(block, on, objects, link) {
+ if (mark_clear)
+ mark_cleared(block);
+ else if (mark_dirty)
+ clear_reset(block);
+ gpu_buddy_free_block(mm, block);
+ cond_resched();
+ }
+ INIT_LIST_HEAD(objects);
+}
+
+static void gpu_buddy_free_list_internal(struct gpu_buddy *mm,
+ struct list_head *objects)
+{
+ /*
+ * Don't touch the clear/dirty bit, since allocation is still internal
+ * at this point. For example we might have just failed part of the
+ * allocation.
+ */
+ __gpu_buddy_free_list(mm, objects, false, false);
+}
+
+/**
+ * gpu_buddy_free_list - free blocks
+ *
+ * @mm: GPU buddy manager
+ * @objects: input list head to free blocks
+ * @flags: optional flags like GPU_BUDDY_CLEARED
+ */
+void gpu_buddy_free_list(struct gpu_buddy *mm,
+ struct list_head *objects,
+ unsigned int flags)
+{
+ bool mark_clear = flags & GPU_BUDDY_CLEARED;
+
+ __gpu_buddy_free_list(mm, objects, mark_clear, !mark_clear);
+}
+EXPORT_SYMBOL(gpu_buddy_free_list);
+
+static bool block_incompatible(struct gpu_buddy_block *block, unsigned int flags)
+{
+ bool needs_clear = flags & GPU_BUDDY_CLEAR_ALLOCATION;
+
+ return needs_clear != gpu_buddy_block_is_clear(block);
+}
+
+static struct gpu_buddy_block *
+__alloc_range_bias(struct gpu_buddy *mm,
+ u64 start, u64 end,
+ unsigned int order,
+ unsigned long flags,
+ bool fallback)
+{
+ u64 req_size = mm->chunk_size << order;
+ struct gpu_buddy_block *block;
+ struct gpu_buddy_block *buddy;
+ LIST_HEAD(dfs);
+ int err;
+ int i;
+
+ end = end - 1;
+
+ for (i = 0; i < mm->n_roots; ++i)
+ list_add_tail(&mm->roots[i]->tmp_link, &dfs);
+
+ do {
+ u64 block_start;
+ u64 block_end;
+
+ block = list_first_entry_or_null(&dfs,
+ struct gpu_buddy_block,
+ tmp_link);
+ if (!block)
+ break;
+
+ list_del(&block->tmp_link);
+
+ if (gpu_buddy_block_order(block) < order)
+ continue;
+
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
+
+ if (!overlaps(start, end, block_start, block_end))
+ continue;
+
+ if (gpu_buddy_block_is_allocated(block))
+ continue;
+
+ if (block_start < start || block_end > end) {
+ u64 adjusted_start = max(block_start, start);
+ u64 adjusted_end = min(block_end, end);
+
+ if (round_down(adjusted_end + 1, req_size) <=
+ round_up(adjusted_start, req_size))
+ continue;
+ }
+
+ if (!fallback && block_incompatible(block, flags))
+ continue;
+
+ if (contains(start, end, block_start, block_end) &&
+ order == gpu_buddy_block_order(block)) {
+ /*
+ * Find the free block within the range.
+ */
+ if (gpu_buddy_block_is_free(block))
+ return block;
+
+ continue;
+ }
+
+ if (!gpu_buddy_block_is_split(block)) {
+ err = split_block(mm, block);
+ if (unlikely(err))
+ goto err_undo;
+ }
+
+ list_add(&block->right->tmp_link, &dfs);
+ list_add(&block->left->tmp_link, &dfs);
+ } while (1);
+
+ return ERR_PTR(-ENOSPC);
+
+err_undo:
+ /*
+ * We really don't want to leave around a bunch of split blocks, since
+ * bigger is better, so make sure we merge everything back before we
+ * free the allocated blocks.
+ */
+ buddy = __get_buddy(block);
+ if (buddy &&
+ (gpu_buddy_block_is_free(block) &&
+ gpu_buddy_block_is_free(buddy)))
+ __gpu_buddy_free(mm, block, false);
+ return ERR_PTR(err);
+}
+
+static struct gpu_buddy_block *
+__gpu_buddy_alloc_range_bias(struct gpu_buddy *mm,
+ u64 start, u64 end,
+ unsigned int order,
+ unsigned long flags)
+{
+ struct gpu_buddy_block *block;
+ bool fallback = false;
+
+ block = __alloc_range_bias(mm, start, end, order,
+ flags, fallback);
+ if (IS_ERR(block))
+ return __alloc_range_bias(mm, start, end, order,
+ flags, !fallback);
+
+ return block;
+}
+
+static struct gpu_buddy_block *
+get_maxblock(struct gpu_buddy *mm,
+ unsigned int order,
+ enum gpu_buddy_free_tree tree)
+{
+ struct gpu_buddy_block *max_block = NULL, *block = NULL;
+ struct rb_root *root;
+ unsigned int i;
+
+ for (i = order; i <= mm->max_order; ++i) {
+ root = &mm->free_trees[tree][i];
+ block = rbtree_last_free_block(root);
+ if (!block)
+ continue;
+
+ if (!max_block) {
+ max_block = block;
+ continue;
+ }
+
+ if (gpu_buddy_block_offset(block) >
+ gpu_buddy_block_offset(max_block)) {
+ max_block = block;
+ }
+ }
+
+ return max_block;
+}
+
+static struct gpu_buddy_block *
+alloc_from_freetree(struct gpu_buddy *mm,
+ unsigned int order,
+ unsigned long flags)
+{
+ struct gpu_buddy_block *block = NULL;
+ struct rb_root *root;
+ enum gpu_buddy_free_tree tree;
+ unsigned int tmp;
+ int err;
+
+ tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
+ GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
+
+ if (flags & GPU_BUDDY_TOPDOWN_ALLOCATION) {
+ block = get_maxblock(mm, order, tree);
+ if (block)
+ /* Store the obtained block order */
+ tmp = gpu_buddy_block_order(block);
+ } else {
+ for (tmp = order; tmp <= mm->max_order; ++tmp) {
+ /* Get RB tree root for this order and tree */
+ root = &mm->free_trees[tree][tmp];
+ block = rbtree_last_free_block(root);
+ if (block)
+ break;
+ }
+ }
+
+ if (!block) {
+ /* Try allocating from the other tree */
+ tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
+ GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
+
+ for (tmp = order; tmp <= mm->max_order; ++tmp) {
+ root = &mm->free_trees[tree][tmp];
+ block = rbtree_last_free_block(root);
+ if (block)
+ break;
+ }
+
+ if (!block)
+ return ERR_PTR(-ENOSPC);
+ }
+
+ BUG_ON(!gpu_buddy_block_is_free(block));
+
+ while (tmp != order) {
+ err = split_block(mm, block);
+ if (unlikely(err))
+ goto err_undo;
+
+ block = block->right;
+ tmp--;
+ }
+ return block;
+
+err_undo:
+ if (tmp != order)
+ __gpu_buddy_free(mm, block, false);
+ return ERR_PTR(err);
+}
+
+static int __alloc_range(struct gpu_buddy *mm,
+ struct list_head *dfs,
+ u64 start, u64 size,
+ struct list_head *blocks,
+ u64 *total_allocated_on_err)
+{
+ struct gpu_buddy_block *block;
+ struct gpu_buddy_block *buddy;
+ u64 total_allocated = 0;
+ LIST_HEAD(allocated);
+ u64 end;
+ int err;
+
+ end = start + size - 1;
+
+ do {
+ u64 block_start;
+ u64 block_end;
+
+ block = list_first_entry_or_null(dfs,
+ struct gpu_buddy_block,
+ tmp_link);
+ if (!block)
+ break;
+
+ list_del(&block->tmp_link);
+
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
+
+ if (!overlaps(start, end, block_start, block_end))
+ continue;
+
+ if (gpu_buddy_block_is_allocated(block)) {
+ err = -ENOSPC;
+ goto err_free;
+ }
+
+ if (contains(start, end, block_start, block_end)) {
+ if (gpu_buddy_block_is_free(block)) {
+ mark_allocated(mm, block);
+ total_allocated += gpu_buddy_block_size(mm, block);
+ mm->avail -= gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
+ list_add_tail(&block->link, &allocated);
+ continue;
+ } else if (!mm->clear_avail) {
+ err = -ENOSPC;
+ goto err_free;
+ }
+ }
+
+ if (!gpu_buddy_block_is_split(block)) {
+ err = split_block(mm, block);
+ if (unlikely(err))
+ goto err_undo;
+ }
+
+ list_add(&block->right->tmp_link, dfs);
+ list_add(&block->left->tmp_link, dfs);
+ } while (1);
+
+ if (total_allocated < size) {
+ err = -ENOSPC;
+ goto err_free;
+ }
+
+ list_splice_tail(&allocated, blocks);
+
+ return 0;
+
+err_undo:
+ /*
+ * We really don't want to leave around a bunch of split blocks, since
+ * bigger is better, so make sure we merge everything back before we
+ * free the allocated blocks.
+ */
+ buddy = __get_buddy(block);
+ if (buddy &&
+ (gpu_buddy_block_is_free(block) &&
+ gpu_buddy_block_is_free(buddy)))
+ __gpu_buddy_free(mm, block, false);
+
+err_free:
+ if (err == -ENOSPC && total_allocated_on_err) {
+ list_splice_tail(&allocated, blocks);
+ *total_allocated_on_err = total_allocated;
+ } else {
+ gpu_buddy_free_list_internal(mm, &allocated);
+ }
+
+ return err;
+}
+
+static int __gpu_buddy_alloc_range(struct gpu_buddy *mm,
+ u64 start,
+ u64 size,
+ u64 *total_allocated_on_err,
+ struct list_head *blocks)
+{
+ LIST_HEAD(dfs);
+ int i;
+
+ for (i = 0; i < mm->n_roots; ++i)
+ list_add_tail(&mm->roots[i]->tmp_link, &dfs);
+
+ return __alloc_range(mm, &dfs, start, size,
+ blocks, total_allocated_on_err);
+}
+
+static int __alloc_contig_try_harder(struct gpu_buddy *mm,
+ u64 size,
+ u64 min_block_size,
+ struct list_head *blocks)
+{
+ u64 rhs_offset, lhs_offset, lhs_size, filled;
+ struct gpu_buddy_block *block;
+ unsigned int tree, order;
+ LIST_HEAD(blocks_lhs);
+ unsigned long pages;
+ u64 modify_size;
+ int err;
+
+ modify_size = rounddown_pow_of_two(size);
+ pages = modify_size >> ilog2(mm->chunk_size);
+ order = fls(pages) - 1;
+ if (order == 0)
+ return -ENOSPC;
+
+ for_each_free_tree(tree) {
+ struct rb_root *root;
+ struct rb_node *iter;
+
+ root = &mm->free_trees[tree][order];
+ if (rbtree_is_empty(root))
+ continue;
+
+ iter = rb_last(root);
+ while (iter) {
+ block = rbtree_get_free_block(iter);
+
+ /* Allocate blocks traversing RHS */
+ rhs_offset = gpu_buddy_block_offset(block);
+ err = __gpu_buddy_alloc_range(mm, rhs_offset, size,
+ &filled, blocks);
+ if (!err || err != -ENOSPC)
+ return err;
+
+ lhs_size = max((size - filled), min_block_size);
+ if (!IS_ALIGNED(lhs_size, min_block_size))
+ lhs_size = round_up(lhs_size, min_block_size);
+
+ /* Allocate blocks traversing LHS */
+ lhs_offset = gpu_buddy_block_offset(block) - lhs_size;
+ err = __gpu_buddy_alloc_range(mm, lhs_offset, lhs_size,
+ NULL, &blocks_lhs);
+ if (!err) {
+ list_splice(&blocks_lhs, blocks);
+ return 0;
+ } else if (err != -ENOSPC) {
+ gpu_buddy_free_list_internal(mm, blocks);
+ return err;
+ }
+ /* Free blocks for the next iteration */
+ gpu_buddy_free_list_internal(mm, blocks);
+
+ iter = rb_prev(iter);
+ }
+ }
+
+ return -ENOSPC;
+}
+
+/**
+ * gpu_buddy_block_trim - free unused pages
+ *
+ * @mm: GPU buddy manager
+ * @start: start address to begin the trimming.
+ * @new_size: original size requested
+ * @blocks: Input and output list of allocated blocks.
+ * MUST contain single block as input to be trimmed.
+ * On success will contain the newly allocated blocks
+ * making up the @new_size. Blocks always appear in
+ * ascending order
+ *
+ * For contiguous allocation, we round up the size to the nearest
+ * power of two value, drivers consume *actual* size, so remaining
+ * portions are unused and can be optionally freed with this function
+ *
+ * Returns:
+ * 0 on success, error code on failure.
+ */
+int gpu_buddy_block_trim(struct gpu_buddy *mm,
+ u64 *start,
+ u64 new_size,
+ struct list_head *blocks)
+{
+ struct gpu_buddy_block *parent;
+ struct gpu_buddy_block *block;
+ u64 block_start, block_end;
+ LIST_HEAD(dfs);
+ u64 new_start;
+ int err;
+
+ if (!list_is_singular(blocks))
+ return -EINVAL;
+
+ block = list_first_entry(blocks,
+ struct gpu_buddy_block,
+ link);
+
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block);
+
+ if (WARN_ON(!gpu_buddy_block_is_allocated(block)))
+ return -EINVAL;
+
+ if (new_size > gpu_buddy_block_size(mm, block))
+ return -EINVAL;
+
+ if (!new_size || !IS_ALIGNED(new_size, mm->chunk_size))
+ return -EINVAL;
+
+ if (new_size == gpu_buddy_block_size(mm, block))
+ return 0;
+
+ new_start = block_start;
+ if (start) {
+ new_start = *start;
+
+ if (new_start < block_start)
+ return -EINVAL;
+
+ if (!IS_ALIGNED(new_start, mm->chunk_size))
+ return -EINVAL;
+
+ if (range_overflows(new_start, new_size, block_end))
+ return -EINVAL;
+ }
+
+ list_del(&block->link);
+ mark_free(mm, block);
+ mm->avail += gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail += gpu_buddy_block_size(mm, block);
+
+ /* Prevent recursively freeing this node */
+ parent = block->parent;
+ block->parent = NULL;
+
+ list_add(&block->tmp_link, &dfs);
+ err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL);
+ if (err) {
+ mark_allocated(mm, block);
+ mm->avail -= gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
+ list_add(&block->link, blocks);
+ }
+
+ block->parent = parent;
+ return err;
+}
+EXPORT_SYMBOL(gpu_buddy_block_trim);
+
+static struct gpu_buddy_block *
+__gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
+ u64 start, u64 end,
+ unsigned int order,
+ unsigned long flags)
+{
+ if (flags & GPU_BUDDY_RANGE_ALLOCATION)
+ /* Allocate traversing within the range */
+ return __gpu_buddy_alloc_range_bias(mm, start, end,
+ order, flags);
+ else
+ /* Allocate from freetree */
+ return alloc_from_freetree(mm, order, flags);
+}
+
+/**
+ * gpu_buddy_alloc_blocks - allocate power-of-two blocks
+ *
+ * @mm: GPU buddy manager to allocate from
+ * @start: start of the allowed range for this block
+ * @end: end of the allowed range for this block
+ * @size: size of the allocation in bytes
+ * @min_block_size: alignment of the allocation
+ * @blocks: output list head to add allocated blocks
+ * @flags: GPU_BUDDY_*_ALLOCATION flags
+ *
+ * alloc_range_bias() called on range limitations, which traverses
+ * the tree and returns the desired block.
+ *
+ * alloc_from_freetree() called when *no* range restrictions
+ * are enforced, which picks the block from the freetree.
+ *
+ * Returns:
+ * 0 on success, error code on failure.
+ */
+int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
+ u64 start, u64 end, u64 size,
+ u64 min_block_size,
+ struct list_head *blocks,
+ unsigned long flags)
+{
+ struct gpu_buddy_block *block = NULL;
+ u64 original_size, original_min_size;
+ unsigned int min_order, order;
+ LIST_HEAD(allocated);
+ unsigned long pages;
+ int err;
+
+ if (size < mm->chunk_size)
+ return -EINVAL;
+
+ if (min_block_size < mm->chunk_size)
+ return -EINVAL;
+
+ if (!is_power_of_2(min_block_size))
+ return -EINVAL;
+
+ if (!IS_ALIGNED(start | end | size, mm->chunk_size))
+ return -EINVAL;
+
+ if (end > mm->size)
+ return -EINVAL;
+
+ if (range_overflows(start, size, mm->size))
+ return -EINVAL;
+
+ /* Actual range allocation */
+ if (start + size == end) {
+ if (!IS_ALIGNED(start | end, min_block_size))
+ return -EINVAL;
+
+ return __gpu_buddy_alloc_range(mm, start, size, NULL, blocks);
+ }
+
+ original_size = size;
+ original_min_size = min_block_size;
+
+ /* Roundup the size to power of 2 */
+ if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
+ size = roundup_pow_of_two(size);
+ min_block_size = size;
+ /* Align size value to min_block_size */
+ } else if (!IS_ALIGNED(size, min_block_size)) {
+ size = round_up(size, min_block_size);
+ }
+
+ pages = size >> ilog2(mm->chunk_size);
+ order = fls(pages) - 1;
+ min_order = ilog2(min_block_size) - ilog2(mm->chunk_size);
+
+ do {
+ order = min(order, (unsigned int)fls(pages) - 1);
+ BUG_ON(order > mm->max_order);
+ BUG_ON(order < min_order);
+
+ do {
+ block = __gpu_buddy_alloc_blocks(mm, start,
+ end,
+ order,
+ flags);
+ if (!IS_ERR(block))
+ break;
+
+ if (order-- == min_order) {
+ /* Try allocation through force merge method */
+ if (mm->clear_avail &&
+ !__force_merge(mm, start, end, min_order)) {
+ block = __gpu_buddy_alloc_blocks(mm, start,
+ end,
+ min_order,
+ flags);
+ if (!IS_ERR(block)) {
+ order = min_order;
+ break;
+ }
+ }
+
+ /*
+ * Try contiguous block allocation through
+ * try harder method.
+ */
+ if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
+ !(flags & GPU_BUDDY_RANGE_ALLOCATION))
+ return __alloc_contig_try_harder(mm,
+ original_size,
+ original_min_size,
+ blocks);
+ err = -ENOSPC;
+ goto err_free;
+ }
+ } while (1);
+
+ mark_allocated(mm, block);
+ mm->avail -= gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
+ kmemleak_update_trace(block);
+ list_add_tail(&block->link, &allocated);
+
+ pages -= BIT(order);
+
+ if (!pages)
+ break;
+ } while (1);
+
+ /* Trim the allocated block to the required size */
+ if (!(flags & GPU_BUDDY_TRIM_DISABLE) &&
+ original_size != size) {
+ struct list_head *trim_list;
+ LIST_HEAD(temp);
+ u64 trim_size;
+
+ trim_list = &allocated;
+ trim_size = original_size;
+
+ if (!list_is_singular(&allocated)) {
+ block = list_last_entry(&allocated, typeof(*block), link);
+ list_move(&block->link, &temp);
+ trim_list = &temp;
+ trim_size = gpu_buddy_block_size(mm, block) -
+ (size - original_size);
+ }
+
+ gpu_buddy_block_trim(mm,
+ NULL,
+ trim_size,
+ trim_list);
+
+ if (!list_empty(&temp))
+ list_splice_tail(trim_list, &allocated);
+ }
+
+ list_splice_tail(&allocated, blocks);
+ return 0;
+
+err_free:
+ gpu_buddy_free_list_internal(mm, &allocated);
+ return err;
+}
+EXPORT_SYMBOL(gpu_buddy_alloc_blocks);
+
+/**
+ * gpu_buddy_block_print - print block information
+ *
+ * @mm: GPU buddy manager
+ * @block: GPU buddy block
+ */
+void gpu_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ u64 start = gpu_buddy_block_offset(block);
+ u64 size = gpu_buddy_block_size(mm, block);
+
+ pr_info("%#018llx-%#018llx: %llu\n", start, start + size, size);
+}
+EXPORT_SYMBOL(gpu_buddy_block_print);
+
+/**
+ * gpu_buddy_print - print allocator state
+ *
+ * @mm: GPU buddy manager
+ */
+void gpu_buddy_print(struct gpu_buddy *mm)
+{
+ int order;
+
+ pr_info("chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
+ mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
+
+ for (order = mm->max_order; order >= 0; order--) {
+ struct gpu_buddy_block *block, *tmp;
+ struct rb_root *root;
+ u64 count = 0, free;
+ unsigned int tree;
+
+ for_each_free_tree(tree) {
+ root = &mm->free_trees[tree][order];
+
+ rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
+ BUG_ON(!gpu_buddy_block_is_free(block));
+ count++;
+ }
+ }
+
+ free = count * (mm->chunk_size << order);
+ if (free < SZ_1M)
+ pr_info("order-%2d free: %8llu KiB, blocks: %llu\n",
+ order, free >> 10, count);
+ else
+ pr_info("order-%2d free: %8llu MiB, blocks: %llu\n",
+ order, free >> 20, count);
+ }
+}
+EXPORT_SYMBOL(gpu_buddy_print);
+
+static void gpu_buddy_module_exit(void)
+{
+ kmem_cache_destroy(slab_blocks);
+}
+
+static int __init gpu_buddy_module_init(void)
+{
+ slab_blocks = KMEM_CACHE(gpu_buddy_block, 0);
+ if (!slab_blocks)
+ return -ENOMEM;
+
+ return 0;
+}
+
+module_init(gpu_buddy_module_init);
+module_exit(gpu_buddy_module_exit);
+
+MODULE_DESCRIPTION("GPU Buddy Allocator");
+MODULE_LICENSE("Dual MIT/GPL");
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 7e6bc0b3a589..0475defb37f0 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -220,6 +220,7 @@ config DRM_GPUSVM
config DRM_BUDDY
tristate
depends on DRM
+ select GPU_BUDDY
help
A page based buddy allocator
diff --git a/drivers/gpu/drm/Kconfig.debug b/drivers/gpu/drm/Kconfig.debug
index 05dc43c0b8c5..1f4c408c7920 100644
--- a/drivers/gpu/drm/Kconfig.debug
+++ b/drivers/gpu/drm/Kconfig.debug
@@ -71,6 +71,7 @@ config DRM_KUNIT_TEST
select DRM_KUNIT_TEST_HELPERS
select DRM_LIB_RANDOM
select DRM_SYSFB_HELPER
+ select GPU_BUDDY
select PRIME_NUMBERS
default KUNIT_ALL_TESTS
help
@@ -88,10 +89,11 @@ config DRM_TTM_KUNIT_TEST
tristate "KUnit tests for TTM" if !KUNIT_ALL_TESTS
default n
depends on DRM && KUNIT && MMU && (UML || COMPILE_TEST)
- select DRM_TTM
select DRM_BUDDY
+ select DRM_TTM
select DRM_EXPORT_FOR_TESTS if m
select DRM_KUNIT_TEST_HELPERS
+ select GPU_BUDDY
default KUNIT_ALL_TESTS
help
Enables unit tests for TTM, a GPU memory manager subsystem used
diff --git a/drivers/gpu/drm/amd/amdgpu/Kconfig b/drivers/gpu/drm/amd/amdgpu/Kconfig
index 1acfed2f92ef..e2b5a431a3b3 100644
--- a/drivers/gpu/drm/amd/amdgpu/Kconfig
+++ b/drivers/gpu/drm/amd/amdgpu/Kconfig
@@ -23,6 +23,7 @@ config DRM_AMDGPU
select CRC16
select BACKLIGHT_CLASS_DEVICE
select INTERVAL_TREE
+ select GPU_BUDDY
select DRM_BUDDY
select DRM_SUBALLOC_HELPER
select DRM_EXEC
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index c8b4dd3ea5c3..7b0c19695c8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -5601,7 +5601,7 @@ int amdgpu_ras_add_critical_region(struct amdgpu_device *adev,
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct amdgpu_vram_mgr_resource *vres;
struct ras_critical_region *region;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
int ret = 0;
if (!bo || !bo->tbo.resource)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
index be2e56ce1355..8908d9e08a30 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
@@ -55,7 +55,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
uint64_t start, uint64_t size,
struct amdgpu_res_cursor *cur)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct list_head *head, *next;
struct drm_mm_node *node;
@@ -71,7 +71,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
head = &to_amdgpu_vram_mgr_resource(res)->blocks;
block = list_first_entry_or_null(head,
- struct drm_buddy_block,
+ struct gpu_buddy_block,
link);
if (!block)
goto fallback;
@@ -81,7 +81,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
next = block->link.next;
if (next != head)
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
}
cur->start = amdgpu_vram_mgr_block_start(block) + start;
@@ -125,7 +125,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
*/
static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct drm_mm_node *node;
struct list_head *next;
@@ -146,7 +146,7 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
block = cur->node;
next = block->link.next;
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
cur->node = block;
cur->start = amdgpu_vram_mgr_block_start(block);
@@ -175,7 +175,7 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
*/
static inline bool amdgpu_res_cleared(struct amdgpu_res_cursor *cur)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
switch (cur->mem_type) {
case TTM_PL_VRAM:
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 9d934c07fa6b..6c06a9c9b13f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -23,6 +23,8 @@
*/
#include <linux/dma-mapping.h>
+
+#include <drm/drm_buddy.h>
#include <drm/ttm/ttm_range_manager.h>
#include <drm/drm_drv.h>
@@ -52,15 +54,15 @@ to_amdgpu_device(struct amdgpu_vram_mgr *mgr)
return container_of(mgr, struct amdgpu_device, mman.vram_mgr);
}
-static inline struct drm_buddy_block *
+static inline struct gpu_buddy_block *
amdgpu_vram_mgr_first_block(struct list_head *list)
{
- return list_first_entry_or_null(list, struct drm_buddy_block, link);
+ return list_first_entry_or_null(list, struct gpu_buddy_block, link);
}
static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 start, size;
block = amdgpu_vram_mgr_first_block(head);
@@ -71,7 +73,7 @@ static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
start = amdgpu_vram_mgr_block_start(block);
size = amdgpu_vram_mgr_block_size(block);
- block = list_entry(block->link.next, struct drm_buddy_block, link);
+ block = list_entry(block->link.next, struct gpu_buddy_block, link);
if (start + size != amdgpu_vram_mgr_block_start(block))
return false;
}
@@ -81,7 +83,7 @@ static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
static inline u64 amdgpu_vram_mgr_blocks_size(struct list_head *head)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 size = 0;
list_for_each_entry(block, head, link)
@@ -254,7 +256,7 @@ const struct attribute_group amdgpu_vram_mgr_attr_group = {
* Calculate how many bytes of the DRM BUDDY block are inside visible VRAM
*/
static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
- struct drm_buddy_block *block)
+ struct gpu_buddy_block *block)
{
u64 start = amdgpu_vram_mgr_block_start(block);
u64 end = start + amdgpu_vram_mgr_block_size(block);
@@ -279,7 +281,7 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct ttm_resource *res = bo->tbo.resource;
struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 usage = 0;
if (amdgpu_gmc_vram_full_visible(&adev->gmc))
@@ -299,15 +301,15 @@ static void amdgpu_vram_mgr_do_reserve(struct ttm_resource_manager *man)
{
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct amdgpu_device *adev = to_amdgpu_device(mgr);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
struct amdgpu_vram_reservation *rsv, *temp;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
uint64_t vis_usage;
list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, blocks) {
- if (drm_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
+ if (gpu_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
rsv->size, mm->chunk_size, &rsv->allocated,
- DRM_BUDDY_RANGE_ALLOCATION))
+ GPU_BUDDY_RANGE_ALLOCATION))
continue;
block = amdgpu_vram_mgr_first_block(&rsv->allocated);
@@ -403,7 +405,7 @@ int amdgpu_vram_mgr_query_address_block_info(struct amdgpu_vram_mgr *mgr,
uint64_t address, struct amdgpu_vram_block_info *info)
{
struct amdgpu_vram_mgr_resource *vres;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 start, size;
int ret = -ENOENT;
@@ -450,8 +452,8 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
struct amdgpu_vram_mgr_resource *vres;
u64 size, remaining_size, lpfn, fpfn;
unsigned int adjust_dcc_size = 0;
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
unsigned long pages_per_block;
int r;
@@ -493,17 +495,17 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
INIT_LIST_HEAD(&vres->blocks);
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ vres->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)
- vres->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ vres->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION;
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CLEARED)
- vres->flags |= DRM_BUDDY_CLEAR_ALLOCATION;
+ vres->flags |= GPU_BUDDY_CLEAR_ALLOCATION;
if (fpfn || lpfn != mgr->mm.size)
/* Allocate blocks in desired range */
- vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ vres->flags |= GPU_BUDDY_RANGE_ALLOCATION;
if (bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC &&
adev->gmc.gmc_funcs->get_dcc_alignment)
@@ -516,7 +518,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
dcc_size = roundup_pow_of_two(vres->base.size + adjust_dcc_size);
remaining_size = (u64)dcc_size;
- vres->flags |= DRM_BUDDY_TRIM_DISABLE;
+ vres->flags |= GPU_BUDDY_TRIM_DISABLE;
}
mutex_lock(&mgr->lock);
@@ -536,7 +538,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
BUG_ON(min_block_size < mm->chunk_size);
- r = drm_buddy_alloc_blocks(mm, fpfn,
+ r = gpu_buddy_alloc_blocks(mm, fpfn,
lpfn,
size,
min_block_size,
@@ -545,7 +547,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
if (unlikely(r == -ENOSPC) && pages_per_block == ~0ul &&
!(place->flags & TTM_PL_FLAG_CONTIGUOUS)) {
- vres->flags &= ~DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ vres->flags &= ~GPU_BUDDY_CONTIGUOUS_ALLOCATION;
pages_per_block = max_t(u32, 2UL << (20UL - PAGE_SHIFT),
tbo->page_alignment);
@@ -566,7 +568,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
list_add_tail(&vres->vres_node, &mgr->allocated_vres_list);
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
- struct drm_buddy_block *dcc_block;
+ struct gpu_buddy_block *dcc_block;
unsigned long dcc_start;
u64 trim_start;
@@ -576,7 +578,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
roundup((unsigned long)amdgpu_vram_mgr_block_start(dcc_block),
adjust_dcc_size);
trim_start = (u64)dcc_start;
- drm_buddy_block_trim(mm, &trim_start,
+ gpu_buddy_block_trim(mm, &trim_start,
(u64)vres->base.size,
&vres->blocks);
}
@@ -614,7 +616,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
return 0;
error_free_blocks:
- drm_buddy_free_list(mm, &vres->blocks, 0);
+ gpu_buddy_free_list(mm, &vres->blocks, 0);
mutex_unlock(&mgr->lock);
error_fini:
ttm_resource_fini(man, &vres->base);
@@ -637,8 +639,8 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res);
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct amdgpu_device *adev = to_amdgpu_device(mgr);
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
uint64_t vis_usage = 0;
mutex_lock(&mgr->lock);
@@ -649,7 +651,7 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
list_for_each_entry(block, &vres->blocks, link)
vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
- drm_buddy_free_list(mm, &vres->blocks, vres->flags);
+ gpu_buddy_free_list(mm, &vres->blocks, vres->flags);
amdgpu_vram_mgr_do_reserve(man);
mutex_unlock(&mgr->lock);
@@ -688,7 +690,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
if (!*sgt)
return -ENOMEM;
- /* Determine the number of DRM_BUDDY blocks to export */
+ /* Determine the number of GPU_BUDDY blocks to export */
amdgpu_res_first(res, offset, length, &cursor);
while (cursor.remaining) {
num_entries++;
@@ -704,10 +706,10 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
sg->length = 0;
/*
- * Walk down DRM_BUDDY blocks to populate scatterlist nodes
- * @note: Use iterator api to get first the DRM_BUDDY block
+ * Walk down GPU_BUDDY blocks to populate scatterlist nodes
+ * @note: Use iterator api to get the first GPU_BUDDY block
* and the number of bytes from it. Access the following
- * DRM_BUDDY block(s) if more buffer needs to exported
+ * GPU_BUDDY block(s) if more buffer needs to be exported
*/
amdgpu_res_first(res, offset, length, &cursor);
for_each_sgtable_sg((*sgt), sg, i) {
@@ -792,10 +794,10 @@ uint64_t amdgpu_vram_mgr_vis_usage(struct amdgpu_vram_mgr *mgr)
void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev)
{
struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
mutex_lock(&mgr->lock);
- drm_buddy_reset_clear(mm, false);
+ gpu_buddy_reset_clear(mm, false);
mutex_unlock(&mgr->lock);
}
@@ -815,7 +817,7 @@ static bool amdgpu_vram_mgr_intersects(struct ttm_resource_manager *man,
size_t size)
{
struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
/* Check each drm buddy block individually */
list_for_each_entry(block, &mgr->blocks, link) {
@@ -848,7 +850,7 @@ static bool amdgpu_vram_mgr_compatible(struct ttm_resource_manager *man,
size_t size)
{
struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
/* Check each drm buddy block individually */
list_for_each_entry(block, &mgr->blocks, link) {
@@ -877,7 +879,7 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
struct amdgpu_vram_reservation *rsv;
drm_printf(printer, " vis usage:%llu\n",
@@ -930,7 +932,7 @@ int amdgpu_vram_mgr_init(struct amdgpu_device *adev)
mgr->default_page_size = PAGE_SIZE;
man->func = &amdgpu_vram_mgr_func;
- err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
+ err = gpu_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
if (err)
return err;
@@ -965,11 +967,11 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
kfree(rsv);
list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, blocks) {
- drm_buddy_free_list(&mgr->mm, &rsv->allocated, 0);
+ gpu_buddy_free_list(&mgr->mm, &rsv->allocated, 0);
kfree(rsv);
}
if (!adev->gmc.is_app_apu)
- drm_buddy_fini(&mgr->mm);
+ gpu_buddy_fini(&mgr->mm);
mutex_unlock(&mgr->lock);
ttm_resource_manager_cleanup(man);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
index 5f5fd9a911c2..429a21a2e9b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
@@ -24,11 +24,11 @@
#ifndef __AMDGPU_VRAM_MGR_H__
#define __AMDGPU_VRAM_MGR_H__
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
struct amdgpu_vram_mgr {
struct ttm_resource_manager manager;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
/* protects access to buffer objects */
struct mutex lock;
struct list_head reservations_pending;
@@ -57,19 +57,19 @@ struct amdgpu_vram_mgr_resource {
struct amdgpu_vres_task task;
};
-static inline u64 amdgpu_vram_mgr_block_start(struct drm_buddy_block *block)
+static inline u64 amdgpu_vram_mgr_block_start(struct gpu_buddy_block *block)
{
- return drm_buddy_block_offset(block);
+ return gpu_buddy_block_offset(block);
}
-static inline u64 amdgpu_vram_mgr_block_size(struct drm_buddy_block *block)
+static inline u64 amdgpu_vram_mgr_block_size(struct gpu_buddy_block *block)
{
- return (u64)PAGE_SIZE << drm_buddy_block_order(block);
+ return (u64)PAGE_SIZE << gpu_buddy_block_order(block);
}
-static inline bool amdgpu_vram_mgr_is_cleared(struct drm_buddy_block *block)
+static inline bool amdgpu_vram_mgr_is_cleared(struct gpu_buddy_block *block)
{
- return drm_buddy_block_is_clear(block);
+ return gpu_buddy_block_is_clear(block);
}
static inline struct amdgpu_vram_mgr_resource *
@@ -82,8 +82,8 @@ static inline void amdgpu_vram_mgr_set_cleared(struct ttm_resource *res)
{
struct amdgpu_vram_mgr_resource *ares = to_amdgpu_vram_mgr_resource(res);
- WARN_ON(ares->flags & DRM_BUDDY_CLEARED);
- ares->flags |= DRM_BUDDY_CLEARED;
+ WARN_ON(ares->flags & GPU_BUDDY_CLEARED);
+ ares->flags |= GPU_BUDDY_CLEARED;
}
int amdgpu_vram_mgr_query_address_block_info(struct amdgpu_vram_mgr *mgr,
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index f2c92902e4a3..d8445adb507a 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -3,1261 +3,24 @@
* Copyright © 2021 Intel Corporation
*/
-#include <kunit/test-bug.h>
-
#include <linux/export.h>
-#include <linux/kmemleak.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <drm/drm_buddy.h>
-enum drm_buddy_free_tree {
- DRM_BUDDY_CLEAR_TREE = 0,
- DRM_BUDDY_DIRTY_TREE,
- DRM_BUDDY_MAX_FREE_TREES,
-};
-
-static struct kmem_cache *slab_blocks;
-
-#define for_each_free_tree(tree) \
- for ((tree) = 0; (tree) < DRM_BUDDY_MAX_FREE_TREES; (tree)++)
-
-static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
- struct drm_buddy_block *parent,
- unsigned int order,
- u64 offset)
-{
- struct drm_buddy_block *block;
-
- BUG_ON(order > DRM_BUDDY_MAX_ORDER);
-
- block = kmem_cache_zalloc(slab_blocks, GFP_KERNEL);
- if (!block)
- return NULL;
-
- block->header = offset;
- block->header |= order;
- block->parent = parent;
-
- RB_CLEAR_NODE(&block->rb);
-
- BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED);
- return block;
-}
-
-static void drm_block_free(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- kmem_cache_free(slab_blocks, block);
-}
-
-static enum drm_buddy_free_tree
-get_block_tree(struct drm_buddy_block *block)
-{
- return drm_buddy_block_is_clear(block) ?
- DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
-}
-
-static struct drm_buddy_block *
-rbtree_get_free_block(const struct rb_node *node)
-{
- return node ? rb_entry(node, struct drm_buddy_block, rb) : NULL;
-}
-
-static struct drm_buddy_block *
-rbtree_last_free_block(struct rb_root *root)
-{
- return rbtree_get_free_block(rb_last(root));
-}
-
-static bool rbtree_is_empty(struct rb_root *root)
-{
- return RB_EMPTY_ROOT(root);
-}
-
-static bool drm_buddy_block_offset_less(const struct drm_buddy_block *block,
- const struct drm_buddy_block *node)
-{
- return drm_buddy_block_offset(block) < drm_buddy_block_offset(node);
-}
-
-static bool rbtree_block_offset_less(struct rb_node *block,
- const struct rb_node *node)
-{
- return drm_buddy_block_offset_less(rbtree_get_free_block(block),
- rbtree_get_free_block(node));
-}
-
-static void rbtree_insert(struct drm_buddy *mm,
- struct drm_buddy_block *block,
- enum drm_buddy_free_tree tree)
-{
- rb_add(&block->rb,
- &mm->free_trees[tree][drm_buddy_block_order(block)],
- rbtree_block_offset_less);
-}
-
-static void rbtree_remove(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- unsigned int order = drm_buddy_block_order(block);
- enum drm_buddy_free_tree tree;
- struct rb_root *root;
-
- tree = get_block_tree(block);
- root = &mm->free_trees[tree][order];
-
- rb_erase(&block->rb, root);
- RB_CLEAR_NODE(&block->rb);
-}
-
-static void clear_reset(struct drm_buddy_block *block)
-{
- block->header &= ~DRM_BUDDY_HEADER_CLEAR;
-}
-
-static void mark_cleared(struct drm_buddy_block *block)
-{
- block->header |= DRM_BUDDY_HEADER_CLEAR;
-}
-
-static void mark_allocated(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- block->header &= ~DRM_BUDDY_HEADER_STATE;
- block->header |= DRM_BUDDY_ALLOCATED;
-
- rbtree_remove(mm, block);
-}
-
-static void mark_free(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- enum drm_buddy_free_tree tree;
-
- block->header &= ~DRM_BUDDY_HEADER_STATE;
- block->header |= DRM_BUDDY_FREE;
-
- tree = get_block_tree(block);
- rbtree_insert(mm, block, tree);
-}
-
-static void mark_split(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- block->header &= ~DRM_BUDDY_HEADER_STATE;
- block->header |= DRM_BUDDY_SPLIT;
-
- rbtree_remove(mm, block);
-}
-
-static inline bool overlaps(u64 s1, u64 e1, u64 s2, u64 e2)
-{
- return s1 <= e2 && e1 >= s2;
-}
-
-static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
-{
- return s1 <= s2 && e1 >= e2;
-}
-
-static struct drm_buddy_block *
-__get_buddy(struct drm_buddy_block *block)
-{
- struct drm_buddy_block *parent;
-
- parent = block->parent;
- if (!parent)
- return NULL;
-
- if (parent->left == block)
- return parent->right;
-
- return parent->left;
-}
-
-static unsigned int __drm_buddy_free(struct drm_buddy *mm,
- struct drm_buddy_block *block,
- bool force_merge)
-{
- struct drm_buddy_block *parent;
- unsigned int order;
-
- while ((parent = block->parent)) {
- struct drm_buddy_block *buddy;
-
- buddy = __get_buddy(block);
-
- if (!drm_buddy_block_is_free(buddy))
- break;
-
- if (!force_merge) {
- /*
- * Check the block and its buddy clear state and exit
- * the loop if they both have the dissimilar state.
- */
- if (drm_buddy_block_is_clear(block) !=
- drm_buddy_block_is_clear(buddy))
- break;
-
- if (drm_buddy_block_is_clear(block))
- mark_cleared(parent);
- }
-
- rbtree_remove(mm, buddy);
- if (force_merge && drm_buddy_block_is_clear(buddy))
- mm->clear_avail -= drm_buddy_block_size(mm, buddy);
-
- drm_block_free(mm, block);
- drm_block_free(mm, buddy);
-
- block = parent;
- }
-
- order = drm_buddy_block_order(block);
- mark_free(mm, block);
-
- return order;
-}
-
-static int __force_merge(struct drm_buddy *mm,
- u64 start,
- u64 end,
- unsigned int min_order)
-{
- unsigned int tree, order;
- int i;
-
- if (!min_order)
- return -ENOMEM;
-
- if (min_order > mm->max_order)
- return -EINVAL;
-
- for_each_free_tree(tree) {
- for (i = min_order - 1; i >= 0; i--) {
- struct rb_node *iter = rb_last(&mm->free_trees[tree][i]);
-
- while (iter) {
- struct drm_buddy_block *block, *buddy;
- u64 block_start, block_end;
-
- block = rbtree_get_free_block(iter);
- iter = rb_prev(iter);
-
- if (!block || !block->parent)
- continue;
-
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block) - 1;
-
- if (!contains(start, end, block_start, block_end))
- continue;
-
- buddy = __get_buddy(block);
- if (!drm_buddy_block_is_free(buddy))
- continue;
-
- WARN_ON(drm_buddy_block_is_clear(block) ==
- drm_buddy_block_is_clear(buddy));
-
- /*
- * Advance to the next node when the current node is the buddy,
- * as freeing the block will also remove its buddy from the tree.
- */
- if (iter == &buddy->rb)
- iter = rb_prev(iter);
-
- rbtree_remove(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
-
- order = __drm_buddy_free(mm, block, true);
- if (order >= min_order)
- return 0;
- }
- }
- }
-
- return -ENOMEM;
-}
-
-/**
- * drm_buddy_init - init memory manager
- *
- * @mm: DRM buddy manager to initialize
- * @size: size in bytes to manage
- * @chunk_size: minimum page size in bytes for our allocations
- *
- * Initializes the memory manager and its resources.
- *
- * Returns:
- * 0 on success, error code on failure.
- */
-int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
-{
- unsigned int i, j, root_count = 0;
- u64 offset = 0;
-
- if (size < chunk_size)
- return -EINVAL;
-
- if (chunk_size < SZ_4K)
- return -EINVAL;
-
- if (!is_power_of_2(chunk_size))
- return -EINVAL;
-
- size = round_down(size, chunk_size);
-
- mm->size = size;
- mm->avail = size;
- mm->clear_avail = 0;
- mm->chunk_size = chunk_size;
- mm->max_order = ilog2(size) - ilog2(chunk_size);
-
- BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);
-
- mm->free_trees = kmalloc_array(DRM_BUDDY_MAX_FREE_TREES,
- sizeof(*mm->free_trees),
- GFP_KERNEL);
- if (!mm->free_trees)
- return -ENOMEM;
-
- for_each_free_tree(i) {
- mm->free_trees[i] = kmalloc_array(mm->max_order + 1,
- sizeof(struct rb_root),
- GFP_KERNEL);
- if (!mm->free_trees[i])
- goto out_free_tree;
-
- for (j = 0; j <= mm->max_order; ++j)
- mm->free_trees[i][j] = RB_ROOT;
- }
-
- mm->n_roots = hweight64(size);
-
- mm->roots = kmalloc_array(mm->n_roots,
- sizeof(struct drm_buddy_block *),
- GFP_KERNEL);
- if (!mm->roots)
- goto out_free_tree;
-
- /*
- * Split into power-of-two blocks, in case we are given a size that is
- * not itself a power-of-two.
- */
- do {
- struct drm_buddy_block *root;
- unsigned int order;
- u64 root_size;
-
- order = ilog2(size) - ilog2(chunk_size);
- root_size = chunk_size << order;
-
- root = drm_block_alloc(mm, NULL, order, offset);
- if (!root)
- goto out_free_roots;
-
- mark_free(mm, root);
-
- BUG_ON(root_count > mm->max_order);
- BUG_ON(drm_buddy_block_size(mm, root) < chunk_size);
-
- mm->roots[root_count] = root;
-
- offset += root_size;
- size -= root_size;
- root_count++;
- } while (size);
-
- return 0;
-
-out_free_roots:
- while (root_count--)
- drm_block_free(mm, mm->roots[root_count]);
- kfree(mm->roots);
-out_free_tree:
- while (i--)
- kfree(mm->free_trees[i]);
- kfree(mm->free_trees);
- return -ENOMEM;
-}
-EXPORT_SYMBOL(drm_buddy_init);
-
-/**
- * drm_buddy_fini - tear down the memory manager
- *
- * @mm: DRM buddy manager to free
- *
- * Cleanup memory manager resources and the freetree
- */
-void drm_buddy_fini(struct drm_buddy *mm)
-{
- u64 root_size, size, start;
- unsigned int order;
- int i;
-
- size = mm->size;
-
- for (i = 0; i < mm->n_roots; ++i) {
- order = ilog2(size) - ilog2(mm->chunk_size);
- start = drm_buddy_block_offset(mm->roots[i]);
- __force_merge(mm, start, start + size, order);
-
- if (WARN_ON(!drm_buddy_block_is_free(mm->roots[i])))
- kunit_fail_current_test("buddy_fini() root");
-
- drm_block_free(mm, mm->roots[i]);
-
- root_size = mm->chunk_size << order;
- size -= root_size;
- }
-
- WARN_ON(mm->avail != mm->size);
-
- for_each_free_tree(i)
- kfree(mm->free_trees[i]);
- kfree(mm->roots);
-}
-EXPORT_SYMBOL(drm_buddy_fini);
-
-static int split_block(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- unsigned int block_order = drm_buddy_block_order(block) - 1;
- u64 offset = drm_buddy_block_offset(block);
-
- BUG_ON(!drm_buddy_block_is_free(block));
- BUG_ON(!drm_buddy_block_order(block));
-
- block->left = drm_block_alloc(mm, block, block_order, offset);
- if (!block->left)
- return -ENOMEM;
-
- block->right = drm_block_alloc(mm, block, block_order,
- offset + (mm->chunk_size << block_order));
- if (!block->right) {
- drm_block_free(mm, block->left);
- return -ENOMEM;
- }
-
- mark_split(mm, block);
-
- if (drm_buddy_block_is_clear(block)) {
- mark_cleared(block->left);
- mark_cleared(block->right);
- clear_reset(block);
- }
-
- mark_free(mm, block->left);
- mark_free(mm, block->right);
-
- return 0;
-}
-
-/**
- * drm_get_buddy - get buddy address
- *
- * @block: DRM buddy block
- *
- * Returns the corresponding buddy block for @block, or NULL
- * if this is a root block and can't be merged further.
- * Requires some kind of locking to protect against
- * any concurrent allocate and free operations.
- */
-struct drm_buddy_block *
-drm_get_buddy(struct drm_buddy_block *block)
-{
- return __get_buddy(block);
-}
-EXPORT_SYMBOL(drm_get_buddy);
-
-/**
- * drm_buddy_reset_clear - reset blocks clear state
- *
- * @mm: DRM buddy manager
- * @is_clear: blocks clear state
- *
- * Reset the clear state based on @is_clear value for each block
- * in the freetree.
- */
-void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
-{
- enum drm_buddy_free_tree src_tree, dst_tree;
- u64 root_size, size, start;
- unsigned int order;
- int i;
-
- size = mm->size;
- for (i = 0; i < mm->n_roots; ++i) {
- order = ilog2(size) - ilog2(mm->chunk_size);
- start = drm_buddy_block_offset(mm->roots[i]);
- __force_merge(mm, start, start + size, order);
-
- root_size = mm->chunk_size << order;
- size -= root_size;
- }
-
- src_tree = is_clear ? DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
- dst_tree = is_clear ? DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
-
- for (i = 0; i <= mm->max_order; ++i) {
- struct rb_root *root = &mm->free_trees[src_tree][i];
- struct drm_buddy_block *block, *tmp;
-
- rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
- rbtree_remove(mm, block);
- if (is_clear) {
- mark_cleared(block);
- mm->clear_avail += drm_buddy_block_size(mm, block);
- } else {
- clear_reset(block);
- mm->clear_avail -= drm_buddy_block_size(mm, block);
- }
-
- rbtree_insert(mm, block, dst_tree);
- }
- }
-}
-EXPORT_SYMBOL(drm_buddy_reset_clear);
-
-/**
- * drm_buddy_free_block - free a block
- *
- * @mm: DRM buddy manager
- * @block: block to be freed
- */
-void drm_buddy_free_block(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- BUG_ON(!drm_buddy_block_is_allocated(block));
- mm->avail += drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail += drm_buddy_block_size(mm, block);
-
- __drm_buddy_free(mm, block, false);
-}
-EXPORT_SYMBOL(drm_buddy_free_block);
-
-static void __drm_buddy_free_list(struct drm_buddy *mm,
- struct list_head *objects,
- bool mark_clear,
- bool mark_dirty)
-{
- struct drm_buddy_block *block, *on;
-
- WARN_ON(mark_dirty && mark_clear);
-
- list_for_each_entry_safe(block, on, objects, link) {
- if (mark_clear)
- mark_cleared(block);
- else if (mark_dirty)
- clear_reset(block);
- drm_buddy_free_block(mm, block);
- cond_resched();
- }
- INIT_LIST_HEAD(objects);
-}
-
-static void drm_buddy_free_list_internal(struct drm_buddy *mm,
- struct list_head *objects)
-{
- /*
- * Don't touch the clear/dirty bit, since allocation is still internal
- * at this point. For example we might have just failed part of the
- * allocation.
- */
- __drm_buddy_free_list(mm, objects, false, false);
-}
-
-/**
- * drm_buddy_free_list - free blocks
- *
- * @mm: DRM buddy manager
- * @objects: input list head to free blocks
- * @flags: optional flags like DRM_BUDDY_CLEARED
- */
-void drm_buddy_free_list(struct drm_buddy *mm,
- struct list_head *objects,
- unsigned int flags)
-{
- bool mark_clear = flags & DRM_BUDDY_CLEARED;
-
- __drm_buddy_free_list(mm, objects, mark_clear, !mark_clear);
-}
-EXPORT_SYMBOL(drm_buddy_free_list);
-
-static bool block_incompatible(struct drm_buddy_block *block, unsigned int flags)
-{
- bool needs_clear = flags & DRM_BUDDY_CLEAR_ALLOCATION;
-
- return needs_clear != drm_buddy_block_is_clear(block);
-}
-
-static struct drm_buddy_block *
-__alloc_range_bias(struct drm_buddy *mm,
- u64 start, u64 end,
- unsigned int order,
- unsigned long flags,
- bool fallback)
-{
- u64 req_size = mm->chunk_size << order;
- struct drm_buddy_block *block;
- struct drm_buddy_block *buddy;
- LIST_HEAD(dfs);
- int err;
- int i;
-
- end = end - 1;
-
- for (i = 0; i < mm->n_roots; ++i)
- list_add_tail(&mm->roots[i]->tmp_link, &dfs);
-
- do {
- u64 block_start;
- u64 block_end;
-
- block = list_first_entry_or_null(&dfs,
- struct drm_buddy_block,
- tmp_link);
- if (!block)
- break;
-
- list_del(&block->tmp_link);
-
- if (drm_buddy_block_order(block) < order)
- continue;
-
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block) - 1;
-
- if (!overlaps(start, end, block_start, block_end))
- continue;
-
- if (drm_buddy_block_is_allocated(block))
- continue;
-
- if (block_start < start || block_end > end) {
- u64 adjusted_start = max(block_start, start);
- u64 adjusted_end = min(block_end, end);
-
- if (round_down(adjusted_end + 1, req_size) <=
- round_up(adjusted_start, req_size))
- continue;
- }
-
- if (!fallback && block_incompatible(block, flags))
- continue;
-
- if (contains(start, end, block_start, block_end) &&
- order == drm_buddy_block_order(block)) {
- /*
- * Find the free block within the range.
- */
- if (drm_buddy_block_is_free(block))
- return block;
-
- continue;
- }
-
- if (!drm_buddy_block_is_split(block)) {
- err = split_block(mm, block);
- if (unlikely(err))
- goto err_undo;
- }
-
- list_add(&block->right->tmp_link, &dfs);
- list_add(&block->left->tmp_link, &dfs);
- } while (1);
-
- return ERR_PTR(-ENOSPC);
-
-err_undo:
- /*
- * We really don't want to leave around a bunch of split blocks, since
- * bigger is better, so make sure we merge everything back before we
- * free the allocated blocks.
- */
- buddy = __get_buddy(block);
- if (buddy &&
- (drm_buddy_block_is_free(block) &&
- drm_buddy_block_is_free(buddy)))
- __drm_buddy_free(mm, block, false);
- return ERR_PTR(err);
-}
-
-static struct drm_buddy_block *
-__drm_buddy_alloc_range_bias(struct drm_buddy *mm,
- u64 start, u64 end,
- unsigned int order,
- unsigned long flags)
-{
- struct drm_buddy_block *block;
- bool fallback = false;
-
- block = __alloc_range_bias(mm, start, end, order,
- flags, fallback);
- if (IS_ERR(block))
- return __alloc_range_bias(mm, start, end, order,
- flags, !fallback);
-
- return block;
-}
-
-static struct drm_buddy_block *
-get_maxblock(struct drm_buddy *mm,
- unsigned int order,
- enum drm_buddy_free_tree tree)
-{
- struct drm_buddy_block *max_block = NULL, *block = NULL;
- struct rb_root *root;
- unsigned int i;
-
- for (i = order; i <= mm->max_order; ++i) {
- root = &mm->free_trees[tree][i];
- block = rbtree_last_free_block(root);
- if (!block)
- continue;
-
- if (!max_block) {
- max_block = block;
- continue;
- }
-
- if (drm_buddy_block_offset(block) >
- drm_buddy_block_offset(max_block)) {
- max_block = block;
- }
- }
-
- return max_block;
-}
-
-static struct drm_buddy_block *
-alloc_from_freetree(struct drm_buddy *mm,
- unsigned int order,
- unsigned long flags)
-{
- struct drm_buddy_block *block = NULL;
- struct rb_root *root;
- enum drm_buddy_free_tree tree;
- unsigned int tmp;
- int err;
-
- tree = (flags & DRM_BUDDY_CLEAR_ALLOCATION) ?
- DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
-
- if (flags & DRM_BUDDY_TOPDOWN_ALLOCATION) {
- block = get_maxblock(mm, order, tree);
- if (block)
- /* Store the obtained block order */
- tmp = drm_buddy_block_order(block);
- } else {
- for (tmp = order; tmp <= mm->max_order; ++tmp) {
- /* Get RB tree root for this order and tree */
- root = &mm->free_trees[tree][tmp];
- block = rbtree_last_free_block(root);
- if (block)
- break;
- }
- }
-
- if (!block) {
- /* Try allocating from the other tree */
- tree = (tree == DRM_BUDDY_CLEAR_TREE) ?
- DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
-
- for (tmp = order; tmp <= mm->max_order; ++tmp) {
- root = &mm->free_trees[tree][tmp];
- block = rbtree_last_free_block(root);
- if (block)
- break;
- }
-
- if (!block)
- return ERR_PTR(-ENOSPC);
- }
-
- BUG_ON(!drm_buddy_block_is_free(block));
-
- while (tmp != order) {
- err = split_block(mm, block);
- if (unlikely(err))
- goto err_undo;
-
- block = block->right;
- tmp--;
- }
- return block;
-
-err_undo:
- if (tmp != order)
- __drm_buddy_free(mm, block, false);
- return ERR_PTR(err);
-}
-
-static int __alloc_range(struct drm_buddy *mm,
- struct list_head *dfs,
- u64 start, u64 size,
- struct list_head *blocks,
- u64 *total_allocated_on_err)
-{
- struct drm_buddy_block *block;
- struct drm_buddy_block *buddy;
- u64 total_allocated = 0;
- LIST_HEAD(allocated);
- u64 end;
- int err;
-
- end = start + size - 1;
-
- do {
- u64 block_start;
- u64 block_end;
-
- block = list_first_entry_or_null(dfs,
- struct drm_buddy_block,
- tmp_link);
- if (!block)
- break;
-
- list_del(&block->tmp_link);
-
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block) - 1;
-
- if (!overlaps(start, end, block_start, block_end))
- continue;
-
- if (drm_buddy_block_is_allocated(block)) {
- err = -ENOSPC;
- goto err_free;
- }
-
- if (contains(start, end, block_start, block_end)) {
- if (drm_buddy_block_is_free(block)) {
- mark_allocated(mm, block);
- total_allocated += drm_buddy_block_size(mm, block);
- mm->avail -= drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
- list_add_tail(&block->link, &allocated);
- continue;
- } else if (!mm->clear_avail) {
- err = -ENOSPC;
- goto err_free;
- }
- }
-
- if (!drm_buddy_block_is_split(block)) {
- err = split_block(mm, block);
- if (unlikely(err))
- goto err_undo;
- }
-
- list_add(&block->right->tmp_link, dfs);
- list_add(&block->left->tmp_link, dfs);
- } while (1);
-
- if (total_allocated < size) {
- err = -ENOSPC;
- goto err_free;
- }
-
- list_splice_tail(&allocated, blocks);
-
- return 0;
-
-err_undo:
- /*
- * We really don't want to leave around a bunch of split blocks, since
- * bigger is better, so make sure we merge everything back before we
- * free the allocated blocks.
- */
- buddy = __get_buddy(block);
- if (buddy &&
- (drm_buddy_block_is_free(block) &&
- drm_buddy_block_is_free(buddy)))
- __drm_buddy_free(mm, block, false);
-
-err_free:
- if (err == -ENOSPC && total_allocated_on_err) {
- list_splice_tail(&allocated, blocks);
- *total_allocated_on_err = total_allocated;
- } else {
- drm_buddy_free_list_internal(mm, &allocated);
- }
-
- return err;
-}
-
-static int __drm_buddy_alloc_range(struct drm_buddy *mm,
- u64 start,
- u64 size,
- u64 *total_allocated_on_err,
- struct list_head *blocks)
-{
- LIST_HEAD(dfs);
- int i;
-
- for (i = 0; i < mm->n_roots; ++i)
- list_add_tail(&mm->roots[i]->tmp_link, &dfs);
-
- return __alloc_range(mm, &dfs, start, size,
- blocks, total_allocated_on_err);
-}
-
-static int __alloc_contig_try_harder(struct drm_buddy *mm,
- u64 size,
- u64 min_block_size,
- struct list_head *blocks)
-{
- u64 rhs_offset, lhs_offset, lhs_size, filled;
- struct drm_buddy_block *block;
- unsigned int tree, order;
- LIST_HEAD(blocks_lhs);
- unsigned long pages;
- u64 modify_size;
- int err;
-
- modify_size = rounddown_pow_of_two(size);
- pages = modify_size >> ilog2(mm->chunk_size);
- order = fls(pages) - 1;
- if (order == 0)
- return -ENOSPC;
-
- for_each_free_tree(tree) {
- struct rb_root *root;
- struct rb_node *iter;
-
- root = &mm->free_trees[tree][order];
- if (rbtree_is_empty(root))
- continue;
-
- iter = rb_last(root);
- while (iter) {
- block = rbtree_get_free_block(iter);
-
- /* Allocate blocks traversing RHS */
- rhs_offset = drm_buddy_block_offset(block);
- err = __drm_buddy_alloc_range(mm, rhs_offset, size,
- &filled, blocks);
- if (!err || err != -ENOSPC)
- return err;
-
- lhs_size = max((size - filled), min_block_size);
- if (!IS_ALIGNED(lhs_size, min_block_size))
- lhs_size = round_up(lhs_size, min_block_size);
-
- /* Allocate blocks traversing LHS */
- lhs_offset = drm_buddy_block_offset(block) - lhs_size;
- err = __drm_buddy_alloc_range(mm, lhs_offset, lhs_size,
- NULL, &blocks_lhs);
- if (!err) {
- list_splice(&blocks_lhs, blocks);
- return 0;
- } else if (err != -ENOSPC) {
- drm_buddy_free_list_internal(mm, blocks);
- return err;
- }
- /* Free blocks for the next iteration */
- drm_buddy_free_list_internal(mm, blocks);
-
- iter = rb_prev(iter);
- }
- }
-
- return -ENOSPC;
-}
-
-/**
- * drm_buddy_block_trim - free unused pages
- *
- * @mm: DRM buddy manager
- * @start: start address to begin the trimming.
- * @new_size: original size requested
- * @blocks: Input and output list of allocated blocks.
- * MUST contain single block as input to be trimmed.
- * On success will contain the newly allocated blocks
- * making up the @new_size. Blocks always appear in
- * ascending order
- *
- * For contiguous allocation, we round up the size to the nearest
- * power of two value, drivers consume *actual* size, so remaining
- * portions are unused and can be optionally freed with this function
- *
- * Returns:
- * 0 on success, error code on failure.
- */
-int drm_buddy_block_trim(struct drm_buddy *mm,
- u64 *start,
- u64 new_size,
- struct list_head *blocks)
-{
- struct drm_buddy_block *parent;
- struct drm_buddy_block *block;
- u64 block_start, block_end;
- LIST_HEAD(dfs);
- u64 new_start;
- int err;
-
- if (!list_is_singular(blocks))
- return -EINVAL;
-
- block = list_first_entry(blocks,
- struct drm_buddy_block,
- link);
-
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block);
-
- if (WARN_ON(!drm_buddy_block_is_allocated(block)))
- return -EINVAL;
-
- if (new_size > drm_buddy_block_size(mm, block))
- return -EINVAL;
-
- if (!new_size || !IS_ALIGNED(new_size, mm->chunk_size))
- return -EINVAL;
-
- if (new_size == drm_buddy_block_size(mm, block))
- return 0;
-
- new_start = block_start;
- if (start) {
- new_start = *start;
-
- if (new_start < block_start)
- return -EINVAL;
-
- if (!IS_ALIGNED(new_start, mm->chunk_size))
- return -EINVAL;
-
- if (range_overflows(new_start, new_size, block_end))
- return -EINVAL;
- }
-
- list_del(&block->link);
- mark_free(mm, block);
- mm->avail += drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail += drm_buddy_block_size(mm, block);
-
- /* Prevent recursively freeing this node */
- parent = block->parent;
- block->parent = NULL;
-
- list_add(&block->tmp_link, &dfs);
- err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL);
- if (err) {
- mark_allocated(mm, block);
- mm->avail -= drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
- list_add(&block->link, blocks);
- }
-
- block->parent = parent;
- return err;
-}
-EXPORT_SYMBOL(drm_buddy_block_trim);
-
-static struct drm_buddy_block *
-__drm_buddy_alloc_blocks(struct drm_buddy *mm,
- u64 start, u64 end,
- unsigned int order,
- unsigned long flags)
-{
- if (flags & DRM_BUDDY_RANGE_ALLOCATION)
- /* Allocate traversing within the range */
- return __drm_buddy_alloc_range_bias(mm, start, end,
- order, flags);
- else
- /* Allocate from freetree */
- return alloc_from_freetree(mm, order, flags);
-}
-
-/**
- * drm_buddy_alloc_blocks - allocate power-of-two blocks
- *
- * @mm: DRM buddy manager to allocate from
- * @start: start of the allowed range for this block
- * @end: end of the allowed range for this block
- * @size: size of the allocation in bytes
- * @min_block_size: alignment of the allocation
- * @blocks: output list head to add allocated blocks
- * @flags: DRM_BUDDY_*_ALLOCATION flags
- *
- * alloc_range_bias() called on range limitations, which traverses
- * the tree and returns the desired block.
- *
- * alloc_from_freetree() called when *no* range restrictions
- * are enforced, which picks the block from the freetree.
- *
- * Returns:
- * 0 on success, error code on failure.
- */
-int drm_buddy_alloc_blocks(struct drm_buddy *mm,
- u64 start, u64 end, u64 size,
- u64 min_block_size,
- struct list_head *blocks,
- unsigned long flags)
-{
- struct drm_buddy_block *block = NULL;
- u64 original_size, original_min_size;
- unsigned int min_order, order;
- LIST_HEAD(allocated);
- unsigned long pages;
- int err;
-
- if (size < mm->chunk_size)
- return -EINVAL;
-
- if (min_block_size < mm->chunk_size)
- return -EINVAL;
-
- if (!is_power_of_2(min_block_size))
- return -EINVAL;
-
- if (!IS_ALIGNED(start | end | size, mm->chunk_size))
- return -EINVAL;
-
- if (end > mm->size)
- return -EINVAL;
-
- if (range_overflows(start, size, mm->size))
- return -EINVAL;
-
- /* Actual range allocation */
- if (start + size == end) {
- if (!IS_ALIGNED(start | end, min_block_size))
- return -EINVAL;
-
- return __drm_buddy_alloc_range(mm, start, size, NULL, blocks);
- }
-
- original_size = size;
- original_min_size = min_block_size;
-
- /* Roundup the size to power of 2 */
- if (flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION) {
- size = roundup_pow_of_two(size);
- min_block_size = size;
- /* Align size value to min_block_size */
- } else if (!IS_ALIGNED(size, min_block_size)) {
- size = round_up(size, min_block_size);
- }
-
- pages = size >> ilog2(mm->chunk_size);
- order = fls(pages) - 1;
- min_order = ilog2(min_block_size) - ilog2(mm->chunk_size);
-
- do {
- order = min(order, (unsigned int)fls(pages) - 1);
- BUG_ON(order > mm->max_order);
- BUG_ON(order < min_order);
-
- do {
- block = __drm_buddy_alloc_blocks(mm, start,
- end,
- order,
- flags);
- if (!IS_ERR(block))
- break;
-
- if (order-- == min_order) {
- /* Try allocation through force merge method */
- if (mm->clear_avail &&
- !__force_merge(mm, start, end, min_order)) {
- block = __drm_buddy_alloc_blocks(mm, start,
- end,
- min_order,
- flags);
- if (!IS_ERR(block)) {
- order = min_order;
- break;
- }
- }
-
- /*
- * Try contiguous block allocation through
- * try harder method.
- */
- if (flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION &&
- !(flags & DRM_BUDDY_RANGE_ALLOCATION))
- return __alloc_contig_try_harder(mm,
- original_size,
- original_min_size,
- blocks);
- err = -ENOSPC;
- goto err_free;
- }
- } while (1);
-
- mark_allocated(mm, block);
- mm->avail -= drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
- kmemleak_update_trace(block);
- list_add_tail(&block->link, &allocated);
-
- pages -= BIT(order);
-
- if (!pages)
- break;
- } while (1);
-
- /* Trim the allocated block to the required size */
- if (!(flags & DRM_BUDDY_TRIM_DISABLE) &&
- original_size != size) {
- struct list_head *trim_list;
- LIST_HEAD(temp);
- u64 trim_size;
-
- trim_list = &allocated;
- trim_size = original_size;
-
- if (!list_is_singular(&allocated)) {
- block = list_last_entry(&allocated, typeof(*block), link);
- list_move(&block->link, &temp);
- trim_list = &temp;
- trim_size = drm_buddy_block_size(mm, block) -
- (size - original_size);
- }
-
- drm_buddy_block_trim(mm,
- NULL,
- trim_size,
- trim_list);
-
- if (!list_empty(&temp))
- list_splice_tail(trim_list, &allocated);
- }
-
- list_splice_tail(&allocated, blocks);
- return 0;
-
-err_free:
- drm_buddy_free_list_internal(mm, &allocated);
- return err;
-}
-EXPORT_SYMBOL(drm_buddy_alloc_blocks);
-
/**
* drm_buddy_block_print - print block information
*
- * @mm: DRM buddy manager
- * @block: DRM buddy block
+ * @mm: GPU buddy manager
+ * @block: GPU buddy block
* @p: DRM printer to use
*/
-void drm_buddy_block_print(struct drm_buddy *mm,
- struct drm_buddy_block *block,
+void drm_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
struct drm_printer *p)
{
- u64 start = drm_buddy_block_offset(block);
- u64 size = drm_buddy_block_size(mm, block);
+ u64 start = gpu_buddy_block_offset(block);
+ u64 size = gpu_buddy_block_size(mm, block);
drm_printf(p, "%#018llx-%#018llx: %llu\n", start, start + size, size);
}
@@ -1266,27 +29,32 @@ EXPORT_SYMBOL(drm_buddy_block_print);
/**
* drm_buddy_print - print allocator state
*
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
* @p: DRM printer to use
*/
-void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
+void drm_buddy_print(struct gpu_buddy *mm, struct drm_printer *p)
{
int order;
- drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
- mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
+ drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
+ mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
for (order = mm->max_order; order >= 0; order--) {
- struct drm_buddy_block *block, *tmp;
+ struct gpu_buddy_block *block, *tmp;
struct rb_root *root;
u64 count = 0, free;
unsigned int tree;
- for_each_free_tree(tree) {
+ for_each_free_tree(tree) {
root = &mm->free_trees[tree][order];
- rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
- BUG_ON(!drm_buddy_block_is_free(block));
+ rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
+ BUG_ON(!gpu_buddy_block_is_free(block));
count++;
}
}
@@ -1304,22 +72,5 @@ void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
}
EXPORT_SYMBOL(drm_buddy_print);
-static void drm_buddy_module_exit(void)
-{
- kmem_cache_destroy(slab_blocks);
-}
-
-static int __init drm_buddy_module_init(void)
-{
- slab_blocks = KMEM_CACHE(drm_buddy_block, 0);
- if (!slab_blocks)
- return -ENOMEM;
-
- return 0;
-}
-
-module_init(drm_buddy_module_init);
-module_exit(drm_buddy_module_exit);
-
-MODULE_DESCRIPTION("DRM Buddy Allocator");
+MODULE_DESCRIPTION("DRM-specific GPU Buddy Allocator Print Helpers");
MODULE_LICENSE("Dual MIT/GPL");
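On the DRM side only the drm_printer-facing wrappers remain; a driver that wants the
usual debugfs-style dump would use them roughly like this (sketch only;
example_dump_buddy_state() is a hypothetical name, not something this series adds):

#include <drm/drm_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_print.h>

static void example_dump_buddy_state(struct drm_device *drm,
				     struct gpu_buddy *mm)
{
	struct drm_printer p = drm_info_printer(drm->dev);

	/* Per-order free block summary, emitted through the DRM printer. */
	drm_buddy_print(mm, &p);
}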
diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index 5e939004b646..859aeca87c19 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -38,6 +38,7 @@ config DRM_I915
select CEC_CORE if CEC_NOTIFIER
select VMAP_PFN
select DRM_TTM
+ select GPU_BUDDY
select DRM_BUDDY
select AUXILIARY_BUS
help
diff --git a/drivers/gpu/drm/i915/i915_scatterlist.c b/drivers/gpu/drm/i915/i915_scatterlist.c
index 4d830740946d..6a34dae13769 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.c
+++ b/drivers/gpu/drm/i915/i915_scatterlist.c
@@ -7,7 +7,7 @@
#include "i915_scatterlist.h"
#include "i915_ttm_buddy_manager.h"
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_mm.h>
#include <linux/slab.h>
@@ -167,9 +167,9 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
const u64 size = res->size;
const u32 max_segment = round_down(UINT_MAX, page_alignment);
- struct drm_buddy *mm = bman_res->mm;
+ struct gpu_buddy *mm = bman_res->mm;
struct list_head *blocks = &bman_res->blocks;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct i915_refct_sgt *rsgt;
struct scatterlist *sg;
struct sg_table *st;
@@ -202,8 +202,8 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
list_for_each_entry(block, blocks, link) {
u64 block_size, offset;
- block_size = min_t(u64, size, drm_buddy_block_size(mm, block));
- offset = drm_buddy_block_offset(block);
+ block_size = min_t(u64, size, gpu_buddy_block_size(mm, block));
+ offset = gpu_buddy_block_offset(block);
while (block_size) {
u64 len;
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
index 942345548bc3..2b4987a3ca0f 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
@@ -4,6 +4,7 @@
*/
#include <linux/slab.h>
+#include <linux/gpu_buddy.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_bo.h>
@@ -16,7 +17,7 @@
struct i915_ttm_buddy_manager {
struct ttm_resource_manager manager;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
struct list_head reserved;
struct mutex lock;
unsigned long visible_size;
@@ -38,7 +39,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
{
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
struct i915_ttm_buddy_resource *bman_res;
- struct drm_buddy *mm = &bman->mm;
+ struct gpu_buddy *mm = &bman->mm;
unsigned long n_pages, lpfn;
u64 min_page_size;
u64 size;
@@ -57,13 +58,13 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
bman_res->mm = mm;
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- bman_res->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ bman_res->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (place->flags & TTM_PL_FLAG_CONTIGUOUS)
- bman_res->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ bman_res->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION;
if (place->fpfn || lpfn != man->size)
- bman_res->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ bman_res->flags |= GPU_BUDDY_RANGE_ALLOCATION;
GEM_BUG_ON(!bman_res->base.size);
size = bman_res->base.size;
@@ -89,7 +90,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
goto err_free_res;
}
- err = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
+ err = gpu_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
(u64)lpfn << PAGE_SHIFT,
(u64)n_pages << PAGE_SHIFT,
min_page_size,
@@ -101,15 +102,15 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
if (lpfn <= bman->visible_size) {
bman_res->used_visible_size = PFN_UP(bman_res->base.size);
} else {
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long start =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
if (start < bman->visible_size) {
unsigned long end = start +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
bman_res->used_visible_size +=
min(end, bman->visible_size) - start;
@@ -126,7 +127,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
return 0;
err_free_blocks:
- drm_buddy_free_list(mm, &bman_res->blocks, 0);
+ gpu_buddy_free_list(mm, &bman_res->blocks, 0);
mutex_unlock(&bman->lock);
err_free_res:
ttm_resource_fini(man, &bman_res->base);
@@ -141,7 +142,7 @@ static void i915_ttm_buddy_man_free(struct ttm_resource_manager *man,
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
mutex_lock(&bman->lock);
- drm_buddy_free_list(&bman->mm, &bman_res->blocks, 0);
+ gpu_buddy_free_list(&bman->mm, &bman_res->blocks, 0);
bman->visible_avail += bman_res->used_visible_size;
mutex_unlock(&bman->lock);
@@ -156,8 +157,8 @@ static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man,
{
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &bman->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -176,9 +177,9 @@ static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man,
/* Check each drm buddy block individually */
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (place->fpfn < lpfn && place->lpfn > fpfn)
return true;
@@ -194,8 +195,8 @@ static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man,
{
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &bman->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -209,9 +210,9 @@ static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man,
/* Check each drm buddy block individually */
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (fpfn < place->fpfn || lpfn > place->lpfn)
return false;
@@ -224,7 +225,7 @@ static void i915_ttm_buddy_man_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
mutex_lock(&bman->lock);
drm_printf(printer, "default_page_size: %lluKiB\n",
@@ -293,7 +294,7 @@ int i915_ttm_buddy_man_init(struct ttm_device *bdev,
if (!bman)
return -ENOMEM;
- err = drm_buddy_init(&bman->mm, size, chunk_size);
+ err = gpu_buddy_init(&bman->mm, size, chunk_size);
if (err)
goto err_free_bman;
@@ -333,7 +334,7 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev, unsigned int type)
{
struct ttm_resource_manager *man = ttm_manager_type(bdev, type);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
+ struct gpu_buddy *mm = &bman->mm;
int ret;
ttm_resource_manager_set_used(man, false);
@@ -345,8 +346,8 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev, unsigned int type)
ttm_set_driver_manager(bdev, type, NULL);
mutex_lock(&bman->lock);
- drm_buddy_free_list(mm, &bman->reserved, 0);
- drm_buddy_fini(mm);
+ gpu_buddy_free_list(mm, &bman->reserved, 0);
+ gpu_buddy_fini(mm);
bman->visible_avail += bman->visible_reserved;
WARN_ON_ONCE(bman->visible_avail != bman->visible_size);
mutex_unlock(&bman->lock);
@@ -371,15 +372,15 @@ int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man,
u64 start, u64 size)
{
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
+ struct gpu_buddy *mm = &bman->mm;
unsigned long fpfn = start >> PAGE_SHIFT;
unsigned long flags = 0;
int ret;
- flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ flags |= GPU_BUDDY_RANGE_ALLOCATION;
mutex_lock(&bman->lock);
- ret = drm_buddy_alloc_blocks(mm, start,
+ ret = gpu_buddy_alloc_blocks(mm, start,
start + size,
size, mm->chunk_size,
&bman->reserved,
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
index d64620712830..4a92dcf09766 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
@@ -13,14 +13,14 @@
struct ttm_device;
struct ttm_resource_manager;
-struct drm_buddy;
+struct gpu_buddy;
/**
* struct i915_ttm_buddy_resource
*
* @base: struct ttm_resource base class we extend
* @blocks: the list of struct i915_buddy_block for this resource/allocation
- * @flags: DRM_BUDDY_*_ALLOCATION flags
+ * @flags: GPU_BUDDY_*_ALLOCATION flags
* @used_visible_size: How much of this resource, if any, uses the CPU visible
* portion, in pages.
* @mm: the struct i915_buddy_mm for this resource
@@ -33,7 +33,7 @@ struct i915_ttm_buddy_resource {
struct list_head blocks;
unsigned long flags;
unsigned long used_visible_size;
- struct drm_buddy *mm;
+ struct gpu_buddy *mm;
};
/**
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 7b856b5090f9..8307390943a2 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -6,7 +6,7 @@
#include <linux/prime_numbers.h>
#include <linux/sort.h>
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include "../i915_selftest.h"
@@ -371,7 +371,7 @@ static int igt_mock_splintered_region(void *arg)
struct drm_i915_private *i915 = mem->i915;
struct i915_ttm_buddy_resource *res;
struct drm_i915_gem_object *obj;
- struct drm_buddy *mm;
+ struct gpu_buddy *mm;
unsigned int expected_order;
LIST_HEAD(objects);
u64 size;
@@ -447,8 +447,8 @@ static int igt_mock_max_segment(void *arg)
struct drm_i915_private *i915 = mem->i915;
struct i915_ttm_buddy_resource *res;
struct drm_i915_gem_object *obj;
- struct drm_buddy_block *block;
- struct drm_buddy *mm;
+ struct gpu_buddy_block *block;
+ struct gpu_buddy *mm;
struct list_head *blocks;
struct scatterlist *sg;
I915_RND_STATE(prng);
@@ -487,8 +487,8 @@ static int igt_mock_max_segment(void *arg)
mm = res->mm;
size = 0;
list_for_each_entry(block, blocks, link) {
- if (drm_buddy_block_size(mm, block) > size)
- size = drm_buddy_block_size(mm, block);
+ if (gpu_buddy_block_size(mm, block) > size)
+ size = gpu_buddy_block_size(mm, block);
}
if (size < max_segment) {
pr_err("%s: Failed to create a huge contiguous block [> %u], largest block %lld\n",
@@ -527,14 +527,14 @@ static u64 igt_object_mappable_total(struct drm_i915_gem_object *obj)
struct intel_memory_region *mr = obj->mm.region;
struct i915_ttm_buddy_resource *bman_res =
to_ttm_buddy_resource(obj->mm.res);
- struct drm_buddy *mm = bman_res->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = bman_res->mm;
+ struct gpu_buddy_block *block;
u64 total;
total = 0;
list_for_each_entry(block, &bman_res->blocks, link) {
- u64 start = drm_buddy_block_offset(block);
- u64 end = start + drm_buddy_block_size(mm, block);
+ u64 start = gpu_buddy_block_offset(block);
+ u64 end = start + gpu_buddy_block_size(mm, block);
if (start < resource_size(&mr->io))
total += min_t(u64, end, resource_size(&mr->io)) - start;
diff --git a/drivers/gpu/drm/tests/Makefile b/drivers/gpu/drm/tests/Makefile
index c0e952293ad0..524310241e16 100644
--- a/drivers/gpu/drm/tests/Makefile
+++ b/drivers/gpu/drm/tests/Makefile
@@ -7,7 +7,6 @@ obj-$(CONFIG_DRM_KUNIT_TEST) += \
drm_atomic_test.o \
drm_atomic_state_test.o \
drm_bridge_test.o \
- drm_buddy_test.o \
drm_cmdline_parser_test.o \
drm_connector_test.o \
drm_damage_helper_test.o \
diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
index 3a1eef83190c..87de969a04b2 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
@@ -3,6 +3,7 @@
* Copyright © 2023 Intel Corporation
*/
#include <linux/delay.h>
+#include <linux/gpu_buddy.h>
#include <linux/kthread.h>
#include <drm/ttm/ttm_resource.h>
@@ -251,7 +252,7 @@ static void ttm_bo_validate_basic(struct kunit *test)
NULL, &dummy_ttm_bo_destroy);
KUNIT_EXPECT_EQ(test, err, 0);
- snd_place = ttm_place_kunit_init(test, snd_mem, DRM_BUDDY_TOPDOWN_ALLOCATION);
+ snd_place = ttm_place_kunit_init(test, snd_mem, GPU_BUDDY_TOPDOWN_ALLOCATION);
snd_placement = ttm_placement_kunit_init(test, snd_place, 1);
err = ttm_bo_validate(bo, snd_placement, &ctx_val);
@@ -263,7 +264,7 @@ static void ttm_bo_validate_basic(struct kunit *test)
KUNIT_EXPECT_TRUE(test, ttm_tt_is_populated(bo->ttm));
KUNIT_EXPECT_EQ(test, bo->resource->mem_type, snd_mem);
KUNIT_EXPECT_EQ(test, bo->resource->placement,
- DRM_BUDDY_TOPDOWN_ALLOCATION);
+ GPU_BUDDY_TOPDOWN_ALLOCATION);
ttm_bo_fini(bo);
ttm_mock_manager_fini(priv->ttm_dev, snd_mem);
diff --git a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c
index d7eb6471f2ed..a9c233f982f6 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c
@@ -30,7 +30,7 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man,
{
struct ttm_mock_manager *manager = to_mock_mgr(man);
struct ttm_mock_resource *mock_res;
- struct drm_buddy *mm = &manager->mm;
+ struct gpu_buddy *mm = &manager->mm;
u64 lpfn, fpfn, alloc_size;
int err;
@@ -46,14 +46,14 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man,
INIT_LIST_HEAD(&mock_res->blocks);
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- mock_res->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ mock_res->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (place->flags & TTM_PL_FLAG_CONTIGUOUS)
- mock_res->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ mock_res->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION;
alloc_size = (uint64_t)mock_res->base.size;
mutex_lock(&manager->lock);
- err = drm_buddy_alloc_blocks(mm, fpfn, lpfn, alloc_size,
+ err = gpu_buddy_alloc_blocks(mm, fpfn, lpfn, alloc_size,
manager->default_page_size,
&mock_res->blocks,
mock_res->flags);
@@ -66,7 +66,7 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man,
return 0;
error_free_blocks:
- drm_buddy_free_list(mm, &mock_res->blocks, 0);
+ gpu_buddy_free_list(mm, &mock_res->blocks, 0);
ttm_resource_fini(man, &mock_res->base);
mutex_unlock(&manager->lock);
@@ -78,10 +78,10 @@ static void ttm_mock_manager_free(struct ttm_resource_manager *man,
{
struct ttm_mock_manager *manager = to_mock_mgr(man);
struct ttm_mock_resource *mock_res = to_mock_mgr_resource(res);
- struct drm_buddy *mm = &manager->mm;
+ struct gpu_buddy *mm = &manager->mm;
mutex_lock(&manager->lock);
- drm_buddy_free_list(mm, &mock_res->blocks, 0);
+ gpu_buddy_free_list(mm, &mock_res->blocks, 0);
mutex_unlock(&manager->lock);
ttm_resource_fini(man, res);
@@ -105,7 +105,7 @@ int ttm_mock_manager_init(struct ttm_device *bdev, u32 mem_type, u32 size)
mutex_init(&manager->lock);
- err = drm_buddy_init(&manager->mm, size, PAGE_SIZE);
+ err = gpu_buddy_init(&manager->mm, size, PAGE_SIZE);
if (err) {
kfree(manager);
@@ -141,7 +141,7 @@ void ttm_mock_manager_fini(struct ttm_device *bdev, u32 mem_type)
ttm_resource_manager_set_used(man, false);
mutex_lock(&mock_man->lock);
- drm_buddy_fini(&mock_man->mm);
+ gpu_buddy_fini(&mock_man->mm);
mutex_unlock(&mock_man->lock);
ttm_set_driver_manager(bdev, mem_type, NULL);
diff --git a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
index e4c95f86a467..08710756fd8e 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
+++ b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
@@ -5,11 +5,11 @@
#ifndef TTM_MOCK_MANAGER_H
#define TTM_MOCK_MANAGER_H
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
struct ttm_mock_manager {
struct ttm_resource_manager man;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
u64 default_page_size;
/* protects allocations of mock buffer objects */
struct mutex lock;
diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
index 7219f6b884b6..0f2a480fda33 100644
--- a/drivers/gpu/drm/xe/Kconfig
+++ b/drivers/gpu/drm/xe/Kconfig
@@ -11,6 +11,7 @@ config DRM_XE
# the shmem_readpage() which depends upon tmpfs
select SHMEM
select TMPFS
+ select GPU_BUDDY
select DRM_BUDDY
select DRM_CLIENT_SELECTION
select DRM_EXEC
diff --git a/drivers/gpu/drm/xe/xe_res_cursor.h b/drivers/gpu/drm/xe/xe_res_cursor.h
index 4e00008b7081..5f4ab08c0686 100644
--- a/drivers/gpu/drm/xe/xe_res_cursor.h
+++ b/drivers/gpu/drm/xe/xe_res_cursor.h
@@ -58,7 +58,7 @@ struct xe_res_cursor {
/** @dma_addr: Current element in a struct drm_pagemap_addr array */
const struct drm_pagemap_addr *dma_addr;
/** @mm: Buddy allocator for VRAM cursor */
- struct drm_buddy *mm;
+ struct gpu_buddy *mm;
/**
* @dma_start: DMA start address for the current segment.
* This may be different to @dma_addr.addr since elements in
@@ -69,7 +69,7 @@ struct xe_res_cursor {
u64 dma_seg_size;
};
-static struct drm_buddy *xe_res_get_buddy(struct ttm_resource *res)
+static struct gpu_buddy *xe_res_get_buddy(struct ttm_resource *res)
{
struct ttm_resource_manager *mgr;
@@ -104,30 +104,30 @@ static inline void xe_res_first(struct ttm_resource *res,
case XE_PL_STOLEN:
case XE_PL_VRAM0:
case XE_PL_VRAM1: {
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct list_head *head, *next;
- struct drm_buddy *mm = xe_res_get_buddy(res);
+ struct gpu_buddy *mm = xe_res_get_buddy(res);
head = &to_xe_ttm_vram_mgr_resource(res)->blocks;
block = list_first_entry_or_null(head,
- struct drm_buddy_block,
+ struct gpu_buddy_block,
link);
if (!block)
goto fallback;
- while (start >= drm_buddy_block_size(mm, block)) {
- start -= drm_buddy_block_size(mm, block);
+ while (start >= gpu_buddy_block_size(mm, block)) {
+ start -= gpu_buddy_block_size(mm, block);
next = block->link.next;
if (next != head)
- block = list_entry(next, struct drm_buddy_block,
+ block = list_entry(next, struct gpu_buddy_block,
link);
}
cur->mm = mm;
- cur->start = drm_buddy_block_offset(block) + start;
- cur->size = min(drm_buddy_block_size(mm, block) - start,
+ cur->start = gpu_buddy_block_offset(block) + start;
+ cur->size = min(gpu_buddy_block_size(mm, block) - start,
size);
cur->remaining = size;
cur->node = block;
@@ -259,7 +259,7 @@ static inline void xe_res_first_dma(const struct drm_pagemap_addr *dma_addr,
*/
static inline void xe_res_next(struct xe_res_cursor *cur, u64 size)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct list_head *next;
u64 start;
@@ -295,18 +295,18 @@ static inline void xe_res_next(struct xe_res_cursor *cur, u64 size)
block = cur->node;
next = block->link.next;
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
- while (start >= drm_buddy_block_size(cur->mm, block)) {
- start -= drm_buddy_block_size(cur->mm, block);
+ while (start >= gpu_buddy_block_size(cur->mm, block)) {
+ start -= gpu_buddy_block_size(cur->mm, block);
next = block->link.next;
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
}
- cur->start = drm_buddy_block_offset(block) + start;
- cur->size = min(drm_buddy_block_size(cur->mm, block) - start,
+ cur->start = gpu_buddy_block_offset(block) + start;
+ cur->size = min(gpu_buddy_block_size(cur->mm, block) - start,
cur->remaining);
cur->node = block;
break;
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 13af589715a7..5b2cec42147a 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -677,7 +677,7 @@ static u64 block_offset_to_pfn(struct xe_vram_region *vr, u64 offset)
return PHYS_PFN(offset + vr->hpa_base);
}
-static struct drm_buddy *vram_to_buddy(struct xe_vram_region *vram)
+static struct gpu_buddy *vram_to_buddy(struct xe_vram_region *vram)
{
return &vram->ttm.mm;
}
@@ -688,16 +688,16 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati
struct xe_bo *bo = to_xe_bo(devmem_allocation);
struct ttm_resource *res = bo->ttm.resource;
struct list_head *blocks = &to_xe_ttm_vram_mgr_resource(res)->blocks;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
int j = 0;
list_for_each_entry(block, blocks, link) {
struct xe_vram_region *vr = block->private;
- struct drm_buddy *buddy = vram_to_buddy(vr);
- u64 block_pfn = block_offset_to_pfn(vr, drm_buddy_block_offset(block));
+ struct gpu_buddy *buddy = vram_to_buddy(vr);
+ u64 block_pfn = block_offset_to_pfn(vr, gpu_buddy_block_offset(block));
int i;
- for (i = 0; i < drm_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i)
+ for (i = 0; i < gpu_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i)
pfn[j++] = block_pfn + i;
}
@@ -864,7 +864,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
struct xe_vram_region *vr = container_of(dpagemap, typeof(*vr), dpagemap);
struct xe_device *xe = vr->xe;
struct device *dev = xe->drm.dev;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct xe_validation_ctx vctx;
struct list_head *blocks;
struct drm_exec exec;
diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
index 9f70802fce92..8192957261e8 100644
--- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
+++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
@@ -4,8 +4,9 @@
* Copyright (C) 2021-2022 Red Hat
*/
-#include <drm/drm_managed.h>
+#include <drm/drm_buddy.h>
#include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_range_manager.h>
@@ -17,16 +18,16 @@
#include "xe_ttm_vram_mgr.h"
#include "xe_vram_types.h"
-static inline struct drm_buddy_block *
+static inline struct gpu_buddy_block *
xe_ttm_vram_mgr_first_block(struct list_head *list)
{
- return list_first_entry_or_null(list, struct drm_buddy_block, link);
+ return list_first_entry_or_null(list, struct gpu_buddy_block, link);
}
-static inline bool xe_is_vram_mgr_blocks_contiguous(struct drm_buddy *mm,
+static inline bool xe_is_vram_mgr_blocks_contiguous(struct gpu_buddy *mm,
struct list_head *head)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 start, size;
block = xe_ttm_vram_mgr_first_block(head);
@@ -34,12 +35,12 @@ static inline bool xe_is_vram_mgr_blocks_contiguous(struct drm_buddy *mm,
return false;
while (head != block->link.next) {
- start = drm_buddy_block_offset(block);
- size = drm_buddy_block_size(mm, block);
+ start = gpu_buddy_block_offset(block);
+ size = gpu_buddy_block_size(mm, block);
- block = list_entry(block->link.next, struct drm_buddy_block,
+ block = list_entry(block->link.next, struct gpu_buddy_block,
link);
- if (start + size != drm_buddy_block_offset(block))
+ if (start + size != gpu_buddy_block_offset(block))
return false;
}
@@ -53,7 +54,7 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
{
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
struct xe_ttm_vram_mgr_resource *vres;
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
u64 size, min_page_size;
unsigned long lpfn;
int err;
@@ -80,10 +81,10 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
INIT_LIST_HEAD(&vres->blocks);
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ vres->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (place->fpfn || lpfn != man->size >> PAGE_SHIFT)
- vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ vres->flags |= GPU_BUDDY_RANGE_ALLOCATION;
if (WARN_ON(!vres->base.size)) {
err = -EINVAL;
@@ -119,27 +120,27 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
lpfn = max_t(unsigned long, place->fpfn + (size >> PAGE_SHIFT), lpfn);
}
- err = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
+ err = gpu_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
(u64)lpfn << PAGE_SHIFT, size,
min_page_size, &vres->blocks, vres->flags);
if (err)
goto error_unlock;
if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
- if (!drm_buddy_block_trim(mm, NULL, vres->base.size, &vres->blocks))
+ if (!gpu_buddy_block_trim(mm, NULL, vres->base.size, &vres->blocks))
size = vres->base.size;
}
if (lpfn <= mgr->visible_size >> PAGE_SHIFT) {
vres->used_visible_size = size;
} else {
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
list_for_each_entry(block, &vres->blocks, link) {
- u64 start = drm_buddy_block_offset(block);
+ u64 start = gpu_buddy_block_offset(block);
if (start < mgr->visible_size) {
- u64 end = start + drm_buddy_block_size(mm, block);
+ u64 end = start + gpu_buddy_block_size(mm, block);
vres->used_visible_size +=
min(end, mgr->visible_size) - start;
@@ -159,11 +160,11 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
* the object.
*/
if (vres->base.placement & TTM_PL_FLAG_CONTIGUOUS) {
- struct drm_buddy_block *block = list_first_entry(&vres->blocks,
+ struct gpu_buddy_block *block = list_first_entry(&vres->blocks,
typeof(*block),
link);
- vres->base.start = drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ vres->base.start = gpu_buddy_block_offset(block) >> PAGE_SHIFT;
} else {
vres->base.start = XE_BO_INVALID_OFFSET;
}
@@ -185,10 +186,10 @@ static void xe_ttm_vram_mgr_del(struct ttm_resource_manager *man,
struct xe_ttm_vram_mgr_resource *vres =
to_xe_ttm_vram_mgr_resource(res);
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
mutex_lock(&mgr->lock);
- drm_buddy_free_list(mm, &vres->blocks, 0);
+ gpu_buddy_free_list(mm, &vres->blocks, 0);
mgr->visible_avail += vres->used_visible_size;
mutex_unlock(&mgr->lock);
@@ -201,7 +202,7 @@ static void xe_ttm_vram_mgr_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
mutex_lock(&mgr->lock);
drm_printf(printer, "default_page_size: %lluKiB\n",
@@ -224,8 +225,8 @@ static bool xe_ttm_vram_mgr_intersects(struct ttm_resource_manager *man,
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
struct xe_ttm_vram_mgr_resource *vres =
to_xe_ttm_vram_mgr_resource(res);
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -235,9 +236,9 @@ static bool xe_ttm_vram_mgr_intersects(struct ttm_resource_manager *man,
list_for_each_entry(block, &vres->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (place->fpfn < lpfn && place->lpfn > fpfn)
return true;
@@ -254,8 +255,8 @@ static bool xe_ttm_vram_mgr_compatible(struct ttm_resource_manager *man,
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
struct xe_ttm_vram_mgr_resource *vres =
to_xe_ttm_vram_mgr_resource(res);
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -265,9 +266,9 @@ static bool xe_ttm_vram_mgr_compatible(struct ttm_resource_manager *man,
list_for_each_entry(block, &vres->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (fpfn < place->fpfn || lpfn > place->lpfn)
return false;
@@ -297,7 +298,7 @@ static void xe_ttm_vram_mgr_fini(struct drm_device *dev, void *arg)
WARN_ON_ONCE(mgr->visible_avail != mgr->visible_size);
- drm_buddy_fini(&mgr->mm);
+ gpu_buddy_fini(&mgr->mm);
ttm_resource_manager_cleanup(&mgr->manager);
@@ -328,7 +329,7 @@ int __xe_ttm_vram_mgr_init(struct xe_device *xe, struct xe_ttm_vram_mgr *mgr,
mgr->visible_avail = io_size;
ttm_resource_manager_init(man, &xe->ttm, size);
- err = drm_buddy_init(&mgr->mm, man->size, default_page_size);
+ err = gpu_buddy_init(&mgr->mm, man->size, default_page_size);
if (err)
return err;
@@ -376,7 +377,7 @@ int xe_ttm_vram_mgr_alloc_sgt(struct xe_device *xe,
if (!*sgt)
return -ENOMEM;
- /* Determine the number of DRM_BUDDY blocks to export */
+ /* Determine the number of GPU_BUDDY blocks to export */
xe_res_first(res, offset, length, &cursor);
while (cursor.remaining) {
num_entries++;
@@ -393,10 +394,10 @@ int xe_ttm_vram_mgr_alloc_sgt(struct xe_device *xe,
sg->length = 0;
/*
- * Walk down DRM_BUDDY blocks to populate scatterlist nodes
- * @note: Use iterator api to get first the DRM_BUDDY block
+ * Walk down GPU_BUDDY blocks to populate scatterlist nodes
+ * @note: Use the iterator API to get the first GPU_BUDDY block
* and the number of bytes from it. Access the following
- * DRM_BUDDY block(s) if more buffer needs to exported
+ * GPU_BUDDY block(s) if more of the buffer needs to be exported
*/
xe_res_first(res, offset, length, &cursor);
for_each_sgtable_sg((*sgt), sg, i) {
diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
index a71e14818ec2..9106da056b49 100644
--- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
+++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
@@ -6,7 +6,7 @@
#ifndef _XE_TTM_VRAM_MGR_TYPES_H_
#define _XE_TTM_VRAM_MGR_TYPES_H_
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include <drm/ttm/ttm_device.h>
/**
@@ -18,7 +18,7 @@ struct xe_ttm_vram_mgr {
/** @manager: Base TTM resource manager */
struct ttm_resource_manager manager;
/** @mm: DRM buddy allocator which manages the VRAM */
- struct drm_buddy mm;
+ struct gpu_buddy mm;
/** @visible_size: Proped size of the CPU visible portion */
u64 visible_size;
/** @visible_avail: CPU visible portion still unallocated */
diff --git a/drivers/gpu/tests/Makefile b/drivers/gpu/tests/Makefile
new file mode 100644
index 000000000000..31a5ff44cb4e
--- /dev/null
+++ b/drivers/gpu/tests/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_GPU_BUDDY_KUNIT_TEST) += gpu_buddy_test.o gpu_random.o
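To try the relocated KUnit suite, a minimal sketch of a .kunitconfig and a kunit.py invocation follows; the two GPU_BUDDY symbols come from this series, while the config fragment itself and its path are only an assumed example and may need adjusting to your tree:

  # assumed .kunitconfig fragment for the moved buddy tests
  CONFIG_KUNIT=y
  CONFIG_GPU_BUDDY=y
  CONFIG_GPU_BUDDY_KUNIT_TEST=y

  # then, from the kernel source root:
  ./tools/testing/kunit/kunit.py run --kunitconfig=<path-to-the-fragment-above>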
diff --git a/drivers/gpu/drm/tests/drm_buddy_test.c b/drivers/gpu/tests/gpu_buddy_test.c
similarity index 68%
rename from drivers/gpu/drm/tests/drm_buddy_test.c
rename to drivers/gpu/tests/gpu_buddy_test.c
index 5f40b5343bd8..dcd4741a905d 100644
--- a/drivers/gpu/drm/tests/drm_buddy_test.c
+++ b/drivers/gpu/tests/gpu_buddy_test.c
@@ -10,9 +10,9 @@
#include <linux/sched/signal.h>
#include <linux/sizes.h>
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
-#include "../lib/drm_random.h"
+#include "gpu_random.h"
static unsigned int random_seed;
@@ -21,9 +21,9 @@ static inline u64 get_size(int order, u64 chunk_size)
return (1 << order) * chunk_size;
}
-static void drm_test_buddy_fragmentation_performance(struct kunit *test)
+static void gpu_test_buddy_fragmentation_performance(struct kunit *test)
{
- struct drm_buddy_block *block, *tmp;
+ struct gpu_buddy_block *block, *tmp;
int num_blocks, i, ret, count = 0;
LIST_HEAD(allocated_blocks);
unsigned long elapsed_ms;
@@ -32,7 +32,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
LIST_HEAD(clear_list);
LIST_HEAD(dirty_list);
LIST_HEAD(free_list);
- struct drm_buddy mm;
+ struct gpu_buddy mm;
u64 mm_size = SZ_4G;
ktime_t start, end;
@@ -47,7 +47,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
* quickly the allocator can satisfy larger, aligned requests from a pool of
* highly fragmented space.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
num_blocks = mm_size / SZ_64K;
@@ -55,7 +55,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
start = ktime_get();
/* Allocate with maximum fragmentation - 8K blocks with 64K alignment */
for (i = 0; i < num_blocks; i++)
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
&allocated_blocks, 0),
"buddy_alloc hit an error size=%u\n", SZ_8K);
@@ -68,21 +68,21 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
}
/* Free with different flags to ensure no coalescing */
- drm_buddy_free_list(&mm, &clear_list, DRM_BUDDY_CLEARED);
- drm_buddy_free_list(&mm, &dirty_list, 0);
+ gpu_buddy_free_list(&mm, &clear_list, GPU_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &dirty_list, 0);
for (i = 0; i < num_blocks; i++)
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_64K, SZ_64K,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_64K, SZ_64K,
&test_blocks, 0),
"buddy_alloc hit an error size=%u\n", SZ_64K);
- drm_buddy_free_list(&mm, &test_blocks, 0);
+ gpu_buddy_free_list(&mm, &test_blocks, 0);
end = ktime_get();
elapsed_ms = ktime_to_ms(ktime_sub(end, start));
kunit_info(test, "Fragmented allocation took %lu ms\n", elapsed_ms);
- drm_buddy_fini(&mm);
+ gpu_buddy_fini(&mm);
/*
* Reverse free order under fragmentation
@@ -96,13 +96,13 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
* deallocation occurs in the opposite order of allocation, exposing the
* cost difference between a linear freelist scan and an ordered tree lookup.
*/
- ret = drm_buddy_init(&mm, mm_size, SZ_4K);
+ ret = gpu_buddy_init(&mm, mm_size, SZ_4K);
KUNIT_ASSERT_EQ(test, ret, 0);
start = ktime_get();
/* Allocate maximum fragmentation */
for (i = 0; i < num_blocks; i++)
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
&allocated_blocks, 0),
"buddy_alloc hit an error size=%u\n", SZ_8K);
@@ -111,28 +111,28 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
list_move_tail(&block->link, &free_list);
count++;
}
- drm_buddy_free_list(&mm, &free_list, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &free_list, GPU_BUDDY_CLEARED);
list_for_each_entry_safe_reverse(block, tmp, &allocated_blocks, link)
list_move(&block->link, &reverse_list);
- drm_buddy_free_list(&mm, &reverse_list, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &reverse_list, GPU_BUDDY_CLEARED);
end = ktime_get();
elapsed_ms = ktime_to_ms(ktime_sub(end, start));
kunit_info(test, "Reverse-ordered free took %lu ms\n", elapsed_ms);
- drm_buddy_fini(&mm);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_range_bias(struct kunit *test)
+static void gpu_test_buddy_alloc_range_bias(struct kunit *test)
{
u32 mm_size, size, ps, bias_size, bias_start, bias_end, bias_rem;
- DRM_RND_STATE(prng, random_seed);
+ GPU_RND_STATE(prng, random_seed);
unsigned int i, count, *order;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
unsigned long flags;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
LIST_HEAD(allocated);
bias_size = SZ_1M;
@@ -142,11 +142,11 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
kunit_info(test, "mm_size=%u, ps=%u\n", mm_size, ps);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
"buddy_init failed\n");
count = mm_size / bias_size;
- order = drm_random_order(count, &prng);
+ order = gpu_random_order(count, &prng);
KUNIT_EXPECT_TRUE(test, order);
/*
@@ -166,79 +166,79 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
/* internal round_up too big */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_size + ps, bias_size,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_size, bias_size);
/* size too big */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_size + ps, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_size + ps, ps);
/* bias range too small for size */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start + ps,
+ gpu_buddy_alloc_blocks(&mm, bias_start + ps,
bias_end, bias_size, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start + ps, bias_end, bias_size, ps);
/* bias misaligned */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start + ps,
+ gpu_buddy_alloc_blocks(&mm, bias_start + ps,
bias_end - ps,
bias_size >> 1, bias_size >> 1,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc h didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start + ps, bias_end - ps, bias_size >> 1, bias_size >> 1);
/* single big page */
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_size, bias_size,
&tmp,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc i failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_size, bias_size);
- drm_buddy_free_list(&mm, &tmp, 0);
+ gpu_buddy_free_list(&mm, &tmp, 0);
/* single page with internal round_up */
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, ps, bias_size,
&tmp,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, ps, bias_size);
- drm_buddy_free_list(&mm, &tmp, 0);
+ gpu_buddy_free_list(&mm, &tmp, 0);
/* random size within */
size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
if (size)
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&tmp,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, size, ps);
bias_rem -= size;
/* too big for current avail */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_rem + ps, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_rem + ps, ps);
@@ -248,10 +248,10 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
size = max(size, ps);
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, size, ps);
/*
@@ -259,15 +259,15 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
* unallocated, and ideally not always on the bias
* boundaries.
*/
- drm_buddy_free_list(&mm, &tmp, 0);
+ gpu_buddy_free_list(&mm, &tmp, 0);
} else {
list_splice_tail(&tmp, &allocated);
}
}
kfree(order);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
/*
* Something more free-form. Idea is to pick a random starting bias
@@ -278,7 +278,7 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
* allocated nodes in the middle of the address space.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
"buddy_init failed\n");
bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps);
@@ -290,10 +290,10 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
u32 size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, size, ps);
bias_rem -= size;
@@ -319,24 +319,24 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
KUNIT_ASSERT_EQ(test, bias_start, 0);
KUNIT_ASSERT_EQ(test, bias_end, mm_size);
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start, bias_end,
+ gpu_buddy_alloc_blocks(&mm, bias_start, bias_end,
ps, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc passed with bias(%x-%x), size=%u\n",
bias_start, bias_end, ps);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
/*
- * Allocate cleared blocks in the bias range when the DRM buddy's clear avail is
+ * Allocate cleared blocks in the bias range when the GPU buddy's clear avail is
* zero. This will validate the bias range allocation in scenarios like system boot
* when no cleared blocks are available and exercise the fallback path too. The resulting
* blocks should always be dirty.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
"buddy_init failed\n");
bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps);
@@ -344,11 +344,11 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
bias_end = max(bias_end, bias_start + ps);
bias_rem = bias_end - bias_start;
- flags = DRM_BUDDY_CLEAR_ALLOCATION | DRM_BUDDY_RANGE_ALLOCATION;
+ flags = GPU_BUDDY_CLEAR_ALLOCATION | GPU_BUDDY_RANGE_ALLOCATION;
size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&allocated,
flags),
@@ -356,27 +356,27 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
bias_start, bias_end, size, ps);
list_for_each_entry(block, &allocated, link)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_clear(struct kunit *test)
+static void gpu_test_buddy_alloc_clear(struct kunit *test)
{
unsigned long n_pages, total, i = 0;
const unsigned long ps = SZ_4K;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
const int max_order = 12;
LIST_HEAD(allocated);
- struct drm_buddy mm;
+ struct gpu_buddy mm;
unsigned int order;
u32 mm_size, size;
LIST_HEAD(dirty);
LIST_HEAD(clean);
mm_size = SZ_4K << max_order;
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
@@ -389,11 +389,11 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
* is indeed all dirty pages and vice versa. Free it all again,
* keeping the dirty/clear status.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
5 * ps, ps, &allocated,
- DRM_BUDDY_TOPDOWN_ALLOCATION),
+ GPU_BUDDY_TOPDOWN_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 5 * ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
n_pages = 10;
do {
@@ -406,37 +406,37 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
flags = 0;
} else {
list = &clean;
- flags = DRM_BUDDY_CLEAR_ALLOCATION;
+ flags = GPU_BUDDY_CLEAR_ALLOCATION;
}
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
ps, ps, list,
flags),
"buddy_alloc hit an error size=%lu\n", ps);
} while (++i < n_pages);
list_for_each_entry(block, &clean, link)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), true);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), true);
list_for_each_entry(block, &dirty, link)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);
- drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
/*
* Trying to go over the clear limit for some allocation.
* The allocation should never fail with reasonable page-size.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
10 * ps, ps, &clean,
- DRM_BUDDY_CLEAR_ALLOCATION),
+ GPU_BUDDY_CLEAR_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 10 * ps);
- drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
- drm_buddy_free_list(&mm, &dirty, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &dirty, 0);
+ gpu_buddy_fini(&mm);
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
/*
* Create a new mm. Intentionally fragment the address space by creating
@@ -458,34 +458,34 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
else
list = &clean;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
ps, ps, list, 0),
"buddy_alloc hit an error size=%lu\n", ps);
} while (++i < n_pages);
- drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
- drm_buddy_free_list(&mm, &dirty, 0);
+ gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &dirty, 0);
order = 1;
do {
size = SZ_4K << order;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
size, size, &allocated,
- DRM_BUDDY_CLEAR_ALLOCATION),
+ GPU_BUDDY_CLEAR_ALLOCATION),
"buddy_alloc hit an error size=%u\n", size);
total = 0;
list_for_each_entry(block, &allocated, link) {
if (size != mm_size)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
- total += drm_buddy_block_size(&mm, block);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);
+ total += gpu_buddy_block_size(&mm, block);
}
KUNIT_EXPECT_EQ(test, total, size);
- drm_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_free_list(&mm, &allocated, 0);
} while (++order <= max_order);
- drm_buddy_fini(&mm);
+ gpu_buddy_fini(&mm);
/*
* Create a new mm with a non power-of-two size. Allocate a random size from each
@@ -494,44 +494,44 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
*/
mm_size = (SZ_4K << max_order) + (SZ_4K << (max_order - 2));
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
4 * ps, ps, &allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 4 * ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
2 * ps, ps, &allocated,
- DRM_BUDDY_CLEAR_ALLOCATION),
+ GPU_BUDDY_CLEAR_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 2 * ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, SZ_4K << max_order, mm_size,
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, SZ_4K << max_order, mm_size,
ps, ps, &allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_contiguous(struct kunit *test)
+static void gpu_test_buddy_alloc_contiguous(struct kunit *test)
{
const unsigned long ps = SZ_4K, mm_size = 16 * 3 * SZ_4K;
unsigned long i, n_pages, total;
- struct drm_buddy_block *block;
- struct drm_buddy mm;
+ struct gpu_buddy_block *block;
+ struct gpu_buddy mm;
LIST_HEAD(left);
LIST_HEAD(middle);
LIST_HEAD(right);
LIST_HEAD(allocated);
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
/*
* Idea is to fragment the address space by alternating block
* allocations between three different lists; one for left, middle and
* right. We can then free a list to simulate fragmentation. In
- * particular we want to exercise the DRM_BUDDY_CONTIGUOUS_ALLOCATION,
+ * particular we want to exercise the GPU_BUDDY_CONTIGUOUS_ALLOCATION,
* including the try_harder path.
*/
@@ -548,66 +548,66 @@ static void drm_test_buddy_alloc_contiguous(struct kunit *test)
else
list = &right;
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_alloc_blocks(&mm, 0, mm_size,
ps, ps, list, 0),
"buddy_alloc hit an error size=%lu\n",
ps);
} while (++i < n_pages);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 3 * ps);
- drm_buddy_free_list(&mm, &middle, 0);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_free_list(&mm, &middle, 0);
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 3 * ps);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
2 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 2 * ps);
- drm_buddy_free_list(&mm, &right, 0);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_free_list(&mm, &right, 0);
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 3 * ps);
/*
* At this point we should have enough contiguous space for 2 blocks,
* however they are never buddies (since we freed middle and right) so
* will require the try_harder logic to find them.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
2 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 2 * ps);
- drm_buddy_free_list(&mm, &left, 0);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_free_list(&mm, &left, 0);
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 3 * ps);
total = 0;
list_for_each_entry(block, &allocated, link)
- total += drm_buddy_block_size(&mm, block);
+ total += gpu_buddy_block_size(&mm, block);
KUNIT_ASSERT_EQ(test, total, ps * 2 + ps * 3);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_pathological(struct kunit *test)
+static void gpu_test_buddy_alloc_pathological(struct kunit *test)
{
u64 mm_size, size, start = 0;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
const int max_order = 3;
unsigned long flags = 0;
int order, top;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
LIST_HEAD(blocks);
LIST_HEAD(holes);
LIST_HEAD(tmp);
@@ -620,7 +620,7 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
*/
mm_size = SZ_4K << max_order;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
@@ -630,18 +630,18 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
block = list_first_entry_or_null(&blocks, typeof(*block), link);
if (block) {
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
}
for (order = top; order--;) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start,
mm_size, size, size,
&tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d, top=%d\n",
order, top);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -649,45 +649,45 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
/* There should be one final page for this sub-allocation */
size = get_size(0, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM for hole\n");
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &holes);
size = get_size(top, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded at top-order %d/%d, it should be full!",
top, max_order);
}
- drm_buddy_free_list(&mm, &holes, 0);
+ gpu_buddy_free_list(&mm, &holes, 0);
/* Nothing larger than blocks of chunk_size now available */
for (order = 1; order <= max_order; order++) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded at order %d, it should be full!",
order);
}
list_splice_tail(&holes, &blocks);
- drm_buddy_free_list(&mm, &blocks, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &blocks, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
+static void gpu_test_buddy_alloc_pessimistic(struct kunit *test)
{
u64 mm_size, size, start = 0;
- struct drm_buddy_block *block, *bn;
+ struct gpu_buddy_block *block, *bn;
const unsigned int max_order = 16;
unsigned long flags = 0;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
unsigned int order;
LIST_HEAD(blocks);
LIST_HEAD(tmp);
@@ -699,19 +699,19 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
*/
mm_size = SZ_4K << max_order;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
for (order = 0; order < max_order; order++) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d\n",
order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -719,11 +719,11 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
/* And now the last remaining block available */
size = get_size(0, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM on final alloc\n");
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -731,58 +731,58 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
/* Should be completely full! */
for (order = max_order; order--;) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded, it should be full!");
}
block = list_last_entry(&blocks, typeof(*block), link);
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
/* As we free in increasing size, we make available larger blocks */
order = 1;
list_for_each_entry_safe(block, bn, &blocks, link) {
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d\n",
order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
order++;
}
/* To confirm, now the whole mm should be available */
size = get_size(max_order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc (realloc) hit -ENOMEM with order=%d\n",
max_order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
- drm_buddy_free_list(&mm, &blocks, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_block(&mm, block);
+ gpu_buddy_free_list(&mm, &blocks, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_optimistic(struct kunit *test)
+static void gpu_test_buddy_alloc_optimistic(struct kunit *test)
{
u64 mm_size, size, start = 0;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
unsigned long flags = 0;
const int max_order = 16;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
LIST_HEAD(blocks);
LIST_HEAD(tmp);
int order;
@@ -794,19 +794,19 @@ static void drm_test_buddy_alloc_optimistic(struct kunit *test)
mm_size = SZ_4K * ((1 << (max_order + 1)) - 1);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
for (order = 0; order <= max_order; order++) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d\n",
order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -814,80 +814,80 @@ static void drm_test_buddy_alloc_optimistic(struct kunit *test)
/* Should be completely full! */
size = get_size(0, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded, it should be full!");
- drm_buddy_free_list(&mm, &blocks, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &blocks, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_limit(struct kunit *test)
+static void gpu_test_buddy_alloc_limit(struct kunit *test)
{
u64 size = U64_MAX, start = 0;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
unsigned long flags = 0;
LIST_HEAD(allocated);
- struct drm_buddy mm;
+ struct gpu_buddy mm;
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, size, SZ_4K));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, size, SZ_4K));
- KUNIT_EXPECT_EQ_MSG(test, mm.max_order, DRM_BUDDY_MAX_ORDER,
+ KUNIT_EXPECT_EQ_MSG(test, mm.max_order, GPU_BUDDY_MAX_ORDER,
"mm.max_order(%d) != %d\n", mm.max_order,
- DRM_BUDDY_MAX_ORDER);
+ GPU_BUDDY_MAX_ORDER);
size = mm.chunk_size << mm.max_order;
- KUNIT_EXPECT_FALSE(test, drm_buddy_alloc_blocks(&mm, start, size, size,
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_alloc_blocks(&mm, start, size, size,
mm.chunk_size, &allocated, flags));
- block = list_first_entry_or_null(&allocated, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&allocated, struct gpu_buddy_block, link);
KUNIT_EXPECT_TRUE(test, block);
- KUNIT_EXPECT_EQ_MSG(test, drm_buddy_block_order(block), mm.max_order,
+ KUNIT_EXPECT_EQ_MSG(test, gpu_buddy_block_order(block), mm.max_order,
"block order(%d) != %d\n",
- drm_buddy_block_order(block), mm.max_order);
+ gpu_buddy_block_order(block), mm.max_order);
- KUNIT_EXPECT_EQ_MSG(test, drm_buddy_block_size(&mm, block),
+ KUNIT_EXPECT_EQ_MSG(test, gpu_buddy_block_size(&mm, block),
BIT_ULL(mm.max_order) * mm.chunk_size,
"block size(%llu) != %llu\n",
- drm_buddy_block_size(&mm, block),
+ gpu_buddy_block_size(&mm, block),
BIT_ULL(mm.max_order) * mm.chunk_size);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
}
-static int drm_buddy_suite_init(struct kunit_suite *suite)
+static int gpu_buddy_suite_init(struct kunit_suite *suite)
{
while (!random_seed)
random_seed = get_random_u32();
- kunit_info(suite, "Testing DRM buddy manager, with random_seed=0x%x\n",
+ kunit_info(suite, "Testing GPU buddy manager, with random_seed=0x%x\n",
random_seed);
return 0;
}
-static struct kunit_case drm_buddy_tests[] = {
- KUNIT_CASE(drm_test_buddy_alloc_limit),
- KUNIT_CASE(drm_test_buddy_alloc_optimistic),
- KUNIT_CASE(drm_test_buddy_alloc_pessimistic),
- KUNIT_CASE(drm_test_buddy_alloc_pathological),
- KUNIT_CASE(drm_test_buddy_alloc_contiguous),
- KUNIT_CASE(drm_test_buddy_alloc_clear),
- KUNIT_CASE(drm_test_buddy_alloc_range_bias),
- KUNIT_CASE(drm_test_buddy_fragmentation_performance),
+static struct kunit_case gpu_buddy_tests[] = {
+ KUNIT_CASE(gpu_test_buddy_alloc_limit),
+ KUNIT_CASE(gpu_test_buddy_alloc_optimistic),
+ KUNIT_CASE(gpu_test_buddy_alloc_pessimistic),
+ KUNIT_CASE(gpu_test_buddy_alloc_pathological),
+ KUNIT_CASE(gpu_test_buddy_alloc_contiguous),
+ KUNIT_CASE(gpu_test_buddy_alloc_clear),
+ KUNIT_CASE(gpu_test_buddy_alloc_range_bias),
+ KUNIT_CASE(gpu_test_buddy_fragmentation_performance),
{}
};
-static struct kunit_suite drm_buddy_test_suite = {
- .name = "drm_buddy",
- .suite_init = drm_buddy_suite_init,
- .test_cases = drm_buddy_tests,
+static struct kunit_suite gpu_buddy_test_suite = {
+ .name = "gpu_buddy",
+ .suite_init = gpu_buddy_suite_init,
+ .test_cases = gpu_buddy_tests,
};
-kunit_test_suite(drm_buddy_test_suite);
+kunit_test_suite(gpu_buddy_test_suite);
MODULE_AUTHOR("Intel Corporation");
-MODULE_DESCRIPTION("Kunit test for drm_buddy functions");
+MODULE_DESCRIPTION("Kunit test for gpu_buddy functions");
MODULE_LICENSE("GPL");
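The test hunks above are a mechanical rename, so the allocator's call pattern is unchanged; as a quick reference, here is a compressed sketch of that pattern using the gpu_buddy_* names from this series (everything other than the gpu_buddy_* API, including the function name and sizes, is illustrative only):

#include <linux/gpu_buddy.h>
#include <linux/list.h>
#include <linux/sizes.h>

/* Sketch only: mirrors the init/alloc/free/fini sequence used by the tests. */
static int gpu_buddy_usage_sketch(void)
{
	struct gpu_buddy mm;
	LIST_HEAD(blocks);
	int err;

	err = gpu_buddy_init(&mm, SZ_4G, SZ_4K);	/* total size, chunk size */
	if (err)
		return err;

	/* start, end, size, min block size, result list, flags */
	err = gpu_buddy_alloc_blocks(&mm, 0, SZ_4G, SZ_64K, SZ_64K,
				     &blocks, GPU_BUDDY_TOPDOWN_ALLOCATION);
	if (!err)
		gpu_buddy_free_list(&mm, &blocks, 0);

	gpu_buddy_fini(&mm);
	return err;
}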
diff --git a/drivers/gpu/tests/gpu_random.c b/drivers/gpu/tests/gpu_random.c
new file mode 100644
index 000000000000..54f1f6a3a6c1
--- /dev/null
+++ b/drivers/gpu/tests/gpu_random.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bitops.h>
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "gpu_random.h"
+
+u32 gpu_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
+{
+ return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
+}
+EXPORT_SYMBOL(gpu_prandom_u32_max_state);
+
+void gpu_random_reorder(unsigned int *order, unsigned int count,
+ struct rnd_state *state)
+{
+ unsigned int i, j;
+
+ for (i = 0; i < count; ++i) {
+ BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));
+ j = gpu_prandom_u32_max_state(count, state);
+ swap(order[i], order[j]);
+ }
+}
+EXPORT_SYMBOL(gpu_random_reorder);
+
+unsigned int *gpu_random_order(unsigned int count, struct rnd_state *state)
+{
+ unsigned int *order, i;
+
+ order = kmalloc_array(count, sizeof(*order), GFP_KERNEL);
+ if (!order)
+ return order;
+
+ for (i = 0; i < count; i++)
+ order[i] = i;
+
+ gpu_random_reorder(order, count, state);
+ return order;
+}
+EXPORT_SYMBOL(gpu_random_order);
+
+MODULE_DESCRIPTION("GPU Randomization Utilities");
+MODULE_LICENSE("Dual MIT/GPL");
diff --git a/drivers/gpu/tests/gpu_random.h b/drivers/gpu/tests/gpu_random.h
new file mode 100644
index 000000000000..b68cf3448264
--- /dev/null
+++ b/drivers/gpu/tests/gpu_random.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __GPU_RANDOM_H__
+#define __GPU_RANDOM_H__
+
+/* This is a temporary home for a couple of utility functions that should
+ * be transposed to lib/ at the earliest convenience.
+ */
+
+#include <linux/prandom.h>
+
+#define GPU_RND_STATE_INITIALIZER(seed__) ({ \
+ struct rnd_state state__; \
+ prandom_seed_state(&state__, (seed__)); \
+ state__; \
+})
+
+#define GPU_RND_STATE(name__, seed__) \
+ struct rnd_state name__ = GPU_RND_STATE_INITIALIZER(seed__)
+
+unsigned int *gpu_random_order(unsigned int count,
+ struct rnd_state *state);
+void gpu_random_reorder(unsigned int *order,
+ unsigned int count,
+ struct rnd_state *state);
+u32 gpu_prandom_u32_max_state(u32 ep_ro,
+ struct rnd_state *state);
+
+#endif /* !__GPU_RANDOM_H__ */
diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
index d51777df12d1..6ae1383b0e2e 100644
--- a/drivers/video/Kconfig
+++ b/drivers/video/Kconfig
@@ -37,6 +37,8 @@ source "drivers/char/agp/Kconfig"
source "drivers/gpu/vga/Kconfig"
+source "drivers/gpu/Kconfig"
+
source "drivers/gpu/host1x/Kconfig"
source "drivers/gpu/ipu-v3/Kconfig"
source "drivers/gpu/nova-core/Kconfig"
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
index c2e05a281252..dbbffad9c402 100644
--- a/include/drm/drm_buddy.h
+++ b/include/drm/drm_buddy.h
@@ -6,166 +6,13 @@
#ifndef __DRM_BUDDY_H__
#define __DRM_BUDDY_H__
-#include <linux/bitops.h>
-#include <linux/list.h>
-#include <linux/slab.h>
-#include <linux/sched.h>
-#include <linux/rbtree.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_print.h>
-#define DRM_BUDDY_RANGE_ALLOCATION BIT(0)
-#define DRM_BUDDY_TOPDOWN_ALLOCATION BIT(1)
-#define DRM_BUDDY_CONTIGUOUS_ALLOCATION BIT(2)
-#define DRM_BUDDY_CLEAR_ALLOCATION BIT(3)
-#define DRM_BUDDY_CLEARED BIT(4)
-#define DRM_BUDDY_TRIM_DISABLE BIT(5)
-
-struct drm_buddy_block {
-#define DRM_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12)
-#define DRM_BUDDY_HEADER_STATE GENMASK_ULL(11, 10)
-#define DRM_BUDDY_ALLOCATED (1 << 10)
-#define DRM_BUDDY_FREE (2 << 10)
-#define DRM_BUDDY_SPLIT (3 << 10)
-#define DRM_BUDDY_HEADER_CLEAR GENMASK_ULL(9, 9)
-/* Free to be used, if needed in the future */
-#define DRM_BUDDY_HEADER_UNUSED GENMASK_ULL(8, 6)
-#define DRM_BUDDY_HEADER_ORDER GENMASK_ULL(5, 0)
- u64 header;
-
- struct drm_buddy_block *left;
- struct drm_buddy_block *right;
- struct drm_buddy_block *parent;
-
- void *private; /* owned by creator */
-
- /*
- * While the block is allocated by the user through drm_buddy_alloc*,
- * the user has ownership of the link, for example to maintain within
- * a list, if so desired. As soon as the block is freed with
- * drm_buddy_free* ownership is given back to the mm.
- */
- union {
- struct rb_node rb;
- struct list_head link;
- };
-
- struct list_head tmp_link;
-};
-
-/* Order-zero must be at least SZ_4K */
-#define DRM_BUDDY_MAX_ORDER (63 - 12)
-
-/*
- * Binary Buddy System.
- *
- * Locking should be handled by the user, a simple mutex around
- * drm_buddy_alloc* and drm_buddy_free* should suffice.
- */
-struct drm_buddy {
- /* Maintain a free list for each order. */
- struct rb_root **free_trees;
-
- /*
- * Maintain explicit binary tree(s) to track the allocation of the
- * address space. This gives us a simple way of finding a buddy block
- * and performing the potentially recursive merge step when freeing a
- * block. Nodes are either allocated or free, in which case they will
- * also exist on the respective free list.
- */
- struct drm_buddy_block **roots;
-
- /*
- * Anything from here is public, and remains static for the lifetime of
- * the mm. Everything above is considered do-not-touch.
- */
- unsigned int n_roots;
- unsigned int max_order;
-
- /* Must be at least SZ_4K */
- u64 chunk_size;
- u64 size;
- u64 avail;
- u64 clear_avail;
-};
-
-static inline u64
-drm_buddy_block_offset(const struct drm_buddy_block *block)
-{
- return block->header & DRM_BUDDY_HEADER_OFFSET;
-}
-
-static inline unsigned int
-drm_buddy_block_order(struct drm_buddy_block *block)
-{
- return block->header & DRM_BUDDY_HEADER_ORDER;
-}
-
-static inline unsigned int
-drm_buddy_block_state(struct drm_buddy_block *block)
-{
- return block->header & DRM_BUDDY_HEADER_STATE;
-}
-
-static inline bool
-drm_buddy_block_is_allocated(struct drm_buddy_block *block)
-{
- return drm_buddy_block_state(block) == DRM_BUDDY_ALLOCATED;
-}
-
-static inline bool
-drm_buddy_block_is_clear(struct drm_buddy_block *block)
-{
- return block->header & DRM_BUDDY_HEADER_CLEAR;
-}
-
-static inline bool
-drm_buddy_block_is_free(struct drm_buddy_block *block)
-{
- return drm_buddy_block_state(block) == DRM_BUDDY_FREE;
-}
-
-static inline bool
-drm_buddy_block_is_split(struct drm_buddy_block *block)
-{
- return drm_buddy_block_state(block) == DRM_BUDDY_SPLIT;
-}
-
-static inline u64
-drm_buddy_block_size(struct drm_buddy *mm,
- struct drm_buddy_block *block)
-{
- return mm->chunk_size << drm_buddy_block_order(block);
-}
-
-int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size);
-
-void drm_buddy_fini(struct drm_buddy *mm);
-
-struct drm_buddy_block *
-drm_get_buddy(struct drm_buddy_block *block);
-
-int drm_buddy_alloc_blocks(struct drm_buddy *mm,
- u64 start, u64 end, u64 size,
- u64 min_page_size,
- struct list_head *blocks,
- unsigned long flags);
-
-int drm_buddy_block_trim(struct drm_buddy *mm,
- u64 *start,
- u64 new_size,
- struct list_head *blocks);
-
-void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear);
-
-void drm_buddy_free_block(struct drm_buddy *mm, struct drm_buddy_block *block);
-
-void drm_buddy_free_list(struct drm_buddy *mm,
- struct list_head *objects,
- unsigned int flags);
-
-void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p);
-void drm_buddy_block_print(struct drm_buddy *mm,
- struct drm_buddy_block *block,
+/* DRM-specific GPU Buddy Allocator print helpers */
+void drm_buddy_print(struct gpu_buddy *mm, struct drm_printer *p);
+void drm_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
struct drm_printer *p);
#endif
diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
new file mode 100644
index 000000000000..3e4bd11ccb71
--- /dev/null
+++ b/include/linux/gpu_buddy.h
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __GPU_BUDDY_H__
+#define __GPU_BUDDY_H__
+
+#include <linux/bitops.h>
+#include <linux/list.h>
+#include <linux/rbtree.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+
+#define GPU_BUDDY_RANGE_ALLOCATION BIT(0)
+#define GPU_BUDDY_TOPDOWN_ALLOCATION BIT(1)
+#define GPU_BUDDY_CONTIGUOUS_ALLOCATION BIT(2)
+#define GPU_BUDDY_CLEAR_ALLOCATION BIT(3)
+#define GPU_BUDDY_CLEARED BIT(4)
+#define GPU_BUDDY_TRIM_DISABLE BIT(5)
+
+enum gpu_buddy_free_tree {
+ GPU_BUDDY_CLEAR_TREE = 0,
+ GPU_BUDDY_DIRTY_TREE,
+ GPU_BUDDY_MAX_FREE_TREES,
+};
+
+#define for_each_free_tree(tree) \
+ for ((tree) = 0; (tree) < GPU_BUDDY_MAX_FREE_TREES; (tree)++)
+
+struct gpu_buddy_block {
+#define GPU_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12)
+#define GPU_BUDDY_HEADER_STATE GENMASK_ULL(11, 10)
+#define GPU_BUDDY_ALLOCATED (1 << 10)
+#define GPU_BUDDY_FREE (2 << 10)
+#define GPU_BUDDY_SPLIT (3 << 10)
+#define GPU_BUDDY_HEADER_CLEAR GENMASK_ULL(9, 9)
+/* Free to be used, if needed in the future */
+#define GPU_BUDDY_HEADER_UNUSED GENMASK_ULL(8, 6)
+#define GPU_BUDDY_HEADER_ORDER GENMASK_ULL(5, 0)
+ u64 header;
+
+ struct gpu_buddy_block *left;
+ struct gpu_buddy_block *right;
+ struct gpu_buddy_block *parent;
+
+ void *private; /* owned by creator */
+
+ /*
+ * While the block is allocated by the user through gpu_buddy_alloc*,
+ * the user has ownership of the link, for example to maintain within
+ * a list, if so desired. As soon as the block is freed with
+ * gpu_buddy_free* ownership is given back to the mm.
+ */
+ union {
+ struct rb_node rb;
+ struct list_head link;
+ };
+
+ struct list_head tmp_link;
+};
+
+/* Order-zero must be at least SZ_4K */
+#define GPU_BUDDY_MAX_ORDER (63 - 12)
+
+/*
+ * Binary Buddy System.
+ *
+ * Locking should be handled by the user, a simple mutex around
+ * gpu_buddy_alloc* and gpu_buddy_free* should suffice.
+ */
+struct gpu_buddy {
+ /* Maintain a free list for each order. */
+ struct rb_root **free_trees;
+
+ /*
+ * Maintain explicit binary tree(s) to track the allocation of the
+ * address space. This gives us a simple way of finding a buddy block
+ * and performing the potentially recursive merge step when freeing a
+ * block. Nodes are either allocated or free, in which case they will
+ * also exist on the respective free list.
+ */
+ struct gpu_buddy_block **roots;
+
+ /*
+ * Anything from here is public, and remains static for the lifetime of
+ * the mm. Everything above is considered do-not-touch.
+ */
+ unsigned int n_roots;
+ unsigned int max_order;
+
+ /* Must be at least SZ_4K */
+ u64 chunk_size;
+ u64 size;
+ u64 avail;
+ u64 clear_avail;
+};
+
+static inline u64
+gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return block->header & GPU_BUDDY_HEADER_OFFSET;
+}
+
+static inline unsigned int
+gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return block->header & GPU_BUDDY_HEADER_ORDER;
+}
+
+static inline unsigned int
+gpu_buddy_block_state(struct gpu_buddy_block *block)
+{
+ return block->header & GPU_BUDDY_HEADER_STATE;
+}
+
+static inline bool
+gpu_buddy_block_is_allocated(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_state(block) == GPU_BUDDY_ALLOCATED;
+}
+
+static inline bool
+gpu_buddy_block_is_clear(struct gpu_buddy_block *block)
+{
+ return block->header & GPU_BUDDY_HEADER_CLEAR;
+}
+
+static inline bool
+gpu_buddy_block_is_free(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_state(block) == GPU_BUDDY_FREE;
+}
+
+static inline bool
+gpu_buddy_block_is_split(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_state(block) == GPU_BUDDY_SPLIT;
+}
+
+static inline u64
+gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return mm->chunk_size << gpu_buddy_block_order(block);
+}
+
+int gpu_buddy_init(struct gpu_buddy *mm, u64 size, u64 chunk_size);
+
+void gpu_buddy_fini(struct gpu_buddy *mm);
+
+struct gpu_buddy_block *
+gpu_get_buddy(struct gpu_buddy_block *block);
+
+int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
+ u64 start, u64 end, u64 size,
+ u64 min_page_size,
+ struct list_head *blocks,
+ unsigned long flags);
+
+int gpu_buddy_block_trim(struct gpu_buddy *mm,
+ u64 *start,
+ u64 new_size,
+ struct list_head *blocks);
+
+void gpu_buddy_reset_clear(struct gpu_buddy *mm, bool is_clear);
+
+void gpu_buddy_free_block(struct gpu_buddy *mm, struct gpu_buddy_block *block);
+
+void gpu_buddy_free_list(struct gpu_buddy *mm,
+ struct list_head *objects,
+ unsigned int flags);
+
+void gpu_buddy_print(struct gpu_buddy *mm);
+void gpu_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block);
+#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 5+ messages in thread
* [PATCH v4 3/3] rust: gpu: Add GPU buddy allocator bindings
2025-12-04 21:51 [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 2/3] gpu: Move DRM buddy allocator one level up Joel Fernandes
@ 2025-12-04 21:51 ` Joel Fernandes
2025-12-04 21:57 ` [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes
3 siblings, 0 replies; 5+ messages in thread
From: Joel Fernandes @ 2025-12-04 21:51 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, dri-devel, Danilo Krummrich,
David Airlie
Cc: Alexandre Courbot, Alistair Popple, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Simona Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, John Hubbard,
Timur Tabi, joel, Elle Rhumsaa, Daniel Almeida, Andrea Righi,
Philipp Stanner, nouveau, Zhi Wang, Christian König,
Jonathan Corbet, Alex Deucher, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Edwin Peer, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system that is well suited to GPU physical
memory allocation; nova-core will use these bindings to manage its
physical memory.
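The API is exercised by the doc-test added in rust/kernel/gpu/buddy.rs
below; a condensed sketch of the intended usage follows (assuming a
caller in a context that can propagate Result errors, with the types
and size constants exactly as introduced by this patch):

  use kernel::{
      gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
      prelude::*,
      sizes::*,
  };

  // Create a 1 GiB allocator with a 4 KiB minimum chunk size.
  let mut buddy = GpuBuddy::new(GpuBuddyParams {
      physical_memory_size_bytes: SZ_1G as u64,
      chunk_size_bytes: SZ_4K as u64,
  })?;

  // Allocate 16 MiB anywhere in the managed range; the blocks are
  // returned to the allocator when `blocks` is dropped.
  let blocks = buddy.alloc_blocks(GpuBuddyAllocParams {
      start_range_address: 0,
      end_range_address: 0, // 0 selects the entire range.
      size_bytes: SZ_16M as u64,
      min_block_size_bytes: SZ_16M as u64,
      buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
  })?;

  for block in blocks.iter() {
      pr_info!("offset={:#x} order={} size={}\n",
               block.offset(), block.order(), block.size());
  }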
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 527 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
6 files changed, 569 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 6b973589a546..86a7e304b7ab 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -112,6 +113,16 @@ const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC = XA_FLAGS_ALLOC;
const gfp_t RUST_CONST_HELPER_XA_FLAGS_ALLOC1 = XA_FLAGS_ALLOC1;
const vm_flags_t RUST_CONST_HELPER_VM_MERGEABLE = VM_MERGEABLE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..415836b86abf
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 634fa2386bbb..6db7c4c25afa 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -29,6 +29,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..3e1a9617e6aa
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,527 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
+//! prelude::*,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let mut buddy = GpuBuddy::new(GpuBuddyParams {
+//! physical_memory_size_bytes: SZ_1G as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Verify initial state.
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K as u64);
+//! let initial_free = buddy.free_memory_bytes();
+//!
+//! // Base allocation params - reused across tests with field overrides.
+//! let params = GpuBuddyAllocParams {
+//! start_range_address: 0,
+//! end_range_address: 0, // Entire range.
+//! size_bytes: SZ_16M as u64,
+//! min_block_size_bytes: SZ_16M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
+//! };
+//!
+//! // Test top-down allocation (allocates from highest addresses).
+//! let topdown = buddy.alloc_blocks(GpuBuddyAllocParams {
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::TOPDOWN_ALLOCATION)?,
+//! ..params
+//! })?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in topdown.iter() {
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Allocate 16MB - should result in a single 16MB block at offset 0.
+//! let allocated = buddy.alloc_blocks(params)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in allocated.iter() {
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 pages
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test non-contiguous allocation with fragmented memory.
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! let params_4m = GpuBuddyAllocParams {
+//! end_range_address: SZ_4M as u64,
+//! size_bytes: SZ_4M as u64,
+//! min_block_size_bytes: SZ_4M as u64,
+//! ..params
+//! };
+//! let frag1 = buddy.alloc_blocks(params_4m)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_4M as u64);
+//!
+//! let frag2 = buddy.alloc_blocks(GpuBuddyAllocParams {
+//! start_range_address: SZ_8M as u64,
+//! end_range_address: (SZ_8M + SZ_4M) as u64,
+//! ..params_4m
+//! })?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB without CONTIGUOUS - should return 2 blocks from the holes.
+//! let fragmented = buddy.alloc_blocks(GpuBuddyAllocParams {
+//! end_range_address: SZ_16M as u64,
+//! size_bytes: SZ_8M as u64,
+//! min_block_size_bytes: SZ_4M as u64,
+//! ..params
+//! })?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - (SZ_16M) as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! // The 8MB allocation should return 2 blocks, each 4MB.
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! drop(fragmented);
+//! drop(frag2);
+//! drop(frag1);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test CONTIGUOUS failure when only fragmented space available.
+//! // Create a small buddy allocator with only 16MB of memory.
+//! let mut small = GpuBuddy::new(GpuBuddyParams {
+//! physical_memory_size_bytes: SZ_16M as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Allocate 4MB blocks at [0,4M) and [8M,12M) to create fragmented memory.
+//! let hole1 = small.alloc_blocks(params_4m)?;
+//! let hole2 = small.alloc_blocks(GpuBuddyAllocParams {
+//! start_range_address: SZ_8M as u64,
+//! end_range_address: (SZ_8M + SZ_4M) as u64,
+//! ..params_4m
+//! })?;
+//!
+//! // 8MB contiguous should fail - only two non-contiguous 4MB holes exist.
+//! let result = small.alloc_blocks(GpuBuddyAllocParams {
+//! size_bytes: SZ_8M as u64,
+//! min_block_size_bytes: SZ_4M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?,
+//! ..params
+//! });
+//! assert!(result.is_err());
+//! drop(hole2);
+//! drop(hole1);
+//!
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist::CListHead,
+ clist_create,
+ error::to_result,
+ new_mutex,
+ prelude::*,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque,
+};
+
+/// Flags for GPU buddy allocator operations.
+///
+/// These flags control the allocation behavior of the buddy allocator.
+#[derive(Clone, Copy, Default, PartialEq, Eq)]
+pub struct BuddyFlags(usize);
+
+impl BuddyFlags {
+ /// Range-based allocation from start to end addresses.
+ pub const RANGE_ALLOCATION: usize = bindings::GPU_BUDDY_RANGE_ALLOCATION;
+
+ /// Allocate from top of address space downward.
+ pub const TOPDOWN_ALLOCATION: usize = bindings::GPU_BUDDY_TOPDOWN_ALLOCATION;
+
+ /// Allocate physically contiguous blocks.
+ pub const CONTIGUOUS_ALLOCATION: usize = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+
+ /// Request allocation from the cleared (zeroed) memory. The zeroing is not
+ /// done by the allocator, but by the caller before freeing old blocks.
+ pub const CLEAR_ALLOCATION: usize = bindings::GPU_BUDDY_CLEAR_ALLOCATION;
+
+ /// Disable trimming of partially used blocks.
+ pub const TRIM_DISABLE: usize = bindings::GPU_BUDDY_TRIM_DISABLE;
+
+ /// Mark blocks as cleared (zeroed) when freeing. When set during free,
+ /// indicates that the caller has already zeroed the memory.
+ pub const CLEARED: usize = bindings::GPU_BUDDY_CLEARED;
+
+ /// Create [`BuddyFlags`] from a raw value with validation.
+ ///
+ /// Use the `|` operator to combine flags, if needed, before calling this method.
+ pub fn try_new(flags: usize) -> Result<Self> {
+ // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
+ if flags > u32::MAX as usize {
+ return Err(EINVAL);
+ }
+
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
+
+ Ok(Self(flags))
+ }
+
+ /// Get raw value of the flags.
+ pub(crate) fn as_raw(self) -> usize {
+ self.0
+ }
+}
+
+/// Parameters for creating a GPU buddy allocator.
+#[derive(Clone, Copy)]
+pub struct GpuBuddyParams {
+ /// Total physical memory size managed by the allocator in bytes.
+ pub physical_memory_size_bytes: u64,
+ /// Minimum allocation unit / chunk size in bytes, must be >= 4KB.
+ pub chunk_size_bytes: u64,
+}
+
+/// Parameters for allocating blocks from a GPU buddy allocator.
+#[derive(Clone, Copy)]
+pub struct GpuBuddyAllocParams {
+ /// Start of allocation range in bytes. Use 0 for beginning.
+ pub start_range_address: u64,
+ /// End of allocation range in bytes. Use 0 for entire range.
+ pub end_range_address: u64,
+ /// Total size to allocate in bytes.
+ pub size_bytes: u64,
+ /// Minimum block size for fragmented allocations in bytes.
+ pub min_block_size_bytes: u64,
+ /// Buddy allocator behavior flags.
+ pub buddy_flags: BuddyFlags,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// The internal [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ #[pin]
+ lock: Mutex<()>,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: &GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let size = params.physical_memory_size_bytes;
+ let chunk_size = params.chunk_size_bytes;
+
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: ptr points to valid uninitialized memory from the pin-init
+ // infrastructure. gpu_buddy_init will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size) })
+ }),
+ lock <- new_mutex!(()),
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: [`GpuBuddyInner`] can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: [`GpuBuddyInner`] is `Sync` because the internal [`GpuBuddyGuard`]
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+/// Guard that proves the lock is held, enabling access to the allocator.
+///
+/// # Invariants
+///
+/// The inner `_guard` holds the lock for the duration of this guard's lifetime.
+pub(crate) struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// # Invariants
+///
+/// The inner [`Arc`] points to a valid, initialized GPU buddy allocator.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(
+ GpuBuddyInner::new(¶ms),
+ GFP_KERNEL,
+ )?))
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> u64 {
+ let guard = self.0.lock();
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).chunk_size }
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ let guard = self.0.lock();
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).size }
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory_bytes(&self) -> u64 {
+ let guard = self.0.lock();
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns an [`Arc<AllocatedBlocks>`] structure that owns the allocated blocks
+ /// and automatically frees them when all references are dropped.
+ pub fn alloc_blocks(&mut self, params: GpuBuddyAllocParams) -> Result<Arc<AllocatedBlocks>> {
+ let buddy_arc = Arc::clone(&self.0);
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ let init = try_pin_init!(AllocatedBlocks {
+ list <- CListHead::try_init(|list| {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy_arc.lock();
+
+ // SAFETY: guard provides exclusive access, list is initialized.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ params.start_range_address,
+ params.end_range_address,
+ params.size_bytes,
+ params.min_block_size_bytes,
+ list.as_raw(),
+ params.buddy_flags.as_raw(),
+ )
+ })
+ }),
+ buddy: Arc::clone(&buddy_arc),
+ flags: params.buddy_flags,
+ });
+
+ Arc::pin_init(init, GFP_KERNEL)
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated [`Block`] structures.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+/// - `buddy` references a valid [`GpuBuddyInner`].
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+ flags: BuddyFlags,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] references. The blocks
+ /// are only valid for the duration of the borrow of `self`.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
+ let clist = unsafe {
+ clist_create!(
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ )
+ };
+
+ clist
+ .iter()
+ .map(|block| AllocatedBlock { block, alloc: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ // CAST: BuddyFlags were validated to fit in u32 at construction.
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
+/// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+pub struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's offset in the address space.
+ pub(crate) fn offset(&self) -> u64 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ pub(crate) fn order(&self) -> u32 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely sent between threads.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is a transparent wrapper over `gpu_buddy_block` which is not
+// modified after allocation. It can be safely shared among threads.
+unsafe impl Sync for Block {}
+
+/// An allocated block with access to the buddy allocator.
+///
+/// This wrapper holds references to the block and the allocation list,
+/// enabling the `size()` method which requires the allocator.
+///
+/// # Invariants
+///
+/// - `block` is a valid reference to an allocated [`Block`].
+/// - `alloc` is a valid reference to the [`AllocatedBlocks`] that owns this block.
+pub struct AllocatedBlock<'a> {
+ block: &'a Block,
+ alloc: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ pub fn offset(&self) -> u64 {
+ self.block.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.block.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> u64 {
+ // Acquire guard to calculate block size since it is calculated from
+ // the chunk size, which requires access to the allocator. While the chunk size
+ // cannot change after initialization, we still need the guard to gain access
+ // to the allocator's pointer.
+ let guard = self.alloc.buddy.lock();
+ // SAFETY:
+ // - Guard provides exclusive access to the allocator.
+ // - `block.as_raw()` is a valid pointer per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_size(guard.as_raw(), self.block.as_raw()) }
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index b69cc5ed3b59..850cbbf4c3e7 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -95,6 +95,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY)]
+pub mod gpu;
pub mod id_pool;
pub mod init;
pub mod io;
--
2.34.1
^ permalink raw reply related [flat|nested] 5+ messages in thread
* Re: [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings
2025-12-04 21:51 [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes
` (2 preceding siblings ...)
2025-12-04 21:51 ` [PATCH v4 3/3] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
@ 2025-12-04 21:57 ` Joel Fernandes
3 siblings, 0 replies; 5+ messages in thread
From: Joel Fernandes @ 2025-12-04 21:57 UTC (permalink / raw)
To: linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
dri-devel@lists.freedesktop.org, Danilo Krummrich, David Airlie
Cc: Alexandre Courbot, Alistair Popple, Miguel Ojeda, Alex Gaynor,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Simona Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, John Hubbard,
Timur Tabi, joel@joelfernandes.org, Elle Rhumsaa, Daniel Almeida,
Andrea Righi, Philipp Stanner, nouveau@lists.freedesktop.org,
Zhi Wang, Christian König, Jonathan Corbet, Alex Deucher,
Jani Nikula, Joonas Lahtinen, Vivi Rodrigo, Tvrtko Ursulin,
Rui Huang, Matthew Auld, Matthew Brost, Lucas De Marchi,
Thomas Hellström, Helge Deller, Edwin Peer,
linux-doc@vger.kernel.org, amd-gfx@lists.freedesktop.org,
intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
linux-fbdev@vger.kernel.org
> On Dec 4, 2025, at 4:53 PM, Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
> This series combines a number of other series which build up to the same goal:
> to make it possible to use DRM buddy from nova-core rust code. See links to the
> different series below.
>
> The git tree with all patches can be found at:
> git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: clist-buddy-v4-20251204)
FYI - this series is rebased on linux-next to reduce conflicts, but I am happy to rebase on another tree if needed/requested.
Thanks,
- Joel
>
> Changes for v4:
> - Combined the clist and drm buddy series:
> - A rust module (CList) to access circular linked lists from rust code.
> - DRM buddy allocator bindings that were originally part of RFC.
> - DRM buddy allocator one level up to drivers/gpu/ so it can be used by
> GPU drivers (example, nova-core) that have usecases other than DRM.
> - Add Rust bindings for the GPU buddy allocator.
>
> Notes from past cover letters about CList:
>
> Introduction
> ============
> This patchset introduces an interface to iterate over doubly circular linked
> lists used in the kernel (allocated by C kernel code). The main usecase is
> iterating over the list of blocks provided by the GPU buddy allocator.
>
> The series also moves the DRM buddy allocator one level up and adds Rust
> bindings for it, enabling GPU drivers like nova-core to use it.
>
> A question may arise: Why not use rust list.rs for this?
> =========================================================
> Rust's list.rs is used to provide safe intrusive lists for Rust-allocated
> items. In doing so, it takes ownership of the items in the list and the links
> between list items. However, the usecase for GPU buddy allocator bindings, the
> C side allocates the items in the list, and also links the list together. Due
> to this, there is an ownership conflict making list.rs not the best abstraction
> for this usecase. What we need is a view of the list, not ownership of it.
> Further, the list links in a bindings usecase may come from C allocated
> objects, not from the Rust side.
>
> Link to v2 (clist only): https://lore.kernel.org/all/20251111171315.2196103-4-joelagnelf@nvidia.com/
>
> Notes and patches about DRM buddy code movement and DRM buddy bindings:
>
> Link to RFC: https://lore.kernel.org/all/20251030190613.1224287-1-joelagnelf@nvidia.com/
> Link to DRM buddy move discussion: https://lore.kernel.org/all/20251124234432.1988476-1-joelagnelf@nvidia.com/
>
> Joel Fernandes (3):
> rust: clist: Add support to interface with C linked lists
> gpu: Move DRM buddy allocator one level up
> rust: gpu: Add GPU buddy allocator bindings
>
> Documentation/gpu/drm-mm.rst | 10 +-
> MAINTAINERS | 7 +
> drivers/gpu/Kconfig | 13 +
> drivers/gpu/Makefile | 2 +
> drivers/gpu/buddy.c | 1310 +++++++++++++++++
> drivers/gpu/drm/Kconfig | 1 +
> drivers/gpu/drm/Kconfig.debug | 4 +-
> drivers/gpu/drm/amd/amdgpu/Kconfig | 1 +
> drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
> .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 80 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 20 +-
> drivers/gpu/drm/drm_buddy.c | 1287 +---------------
> drivers/gpu/drm/i915/Kconfig | 1 +
> drivers/gpu/drm/i915/i915_scatterlist.c | 10 +-
> drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
> drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 6 +-
> .../drm/i915/selftests/intel_memory_region.c | 20 +-
> drivers/gpu/drm/tests/Makefile | 1 -
> .../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 5 +-
> drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
> drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 4 +-
> drivers/gpu/drm/xe/Kconfig | 1 +
> drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
> drivers/gpu/drm/xe/xe_svm.c | 12 +-
> drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 73 +-
> drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 4 +-
> drivers/gpu/tests/Makefile | 3 +
> .../gpu_buddy_test.c} | 390 ++---
> drivers/gpu/tests/gpu_random.c | 48 +
> drivers/gpu/tests/gpu_random.h | 28 +
> drivers/video/Kconfig | 2 +
> include/drm/drm_buddy.h | 163 +-
> include/linux/gpu_buddy.h | 177 +++
> rust/bindings/bindings_helper.h | 11 +
> rust/helpers/gpu.c | 23 +
> rust/helpers/helpers.c | 2 +
> rust/helpers/list.c | 12 +
> rust/kernel/clist.rs | 357 +++++
> rust/kernel/gpu/buddy.rs | 527 +++++++
> rust/kernel/gpu/mod.rs | 5 +
> rust/kernel/lib.rs | 3 +
> 42 files changed, 2944 insertions(+), 1800 deletions(-)
> create mode 100644 drivers/gpu/Kconfig
> create mode 100644 drivers/gpu/buddy.c
> create mode 100644 drivers/gpu/tests/Makefile
> rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (68%)
> create mode 100644 drivers/gpu/tests/gpu_random.c
> create mode 100644 drivers/gpu/tests/gpu_random.h
> create mode 100644 include/linux/gpu_buddy.h
> create mode 100644 rust/helpers/gpu.c
> create mode 100644 rust/helpers/list.c
> create mode 100644 rust/kernel/clist.rs
> create mode 100644 rust/kernel/gpu/buddy.rs
> create mode 100644 rust/kernel/gpu/mod.rs
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 5+ messages in thread
end of thread, other threads:[~2025-12-04 21:57 UTC | newest]
Thread overview: 5+ messages
-- links below jump to the message on this page --
2025-12-04 21:51 [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 1/3] rust: clist: Add support to interface with C linked lists Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 2/3] gpu: Move DRM buddy allocator one level up Joel Fernandes
2025-12-04 21:51 ` [PATCH v4 3/3] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
2025-12-04 21:57 ` [PATCH v4 0/3] Introduce support for C linked list interfacing and GPU Buddy bindings Joel Fernandes