* [PATCH v10 0/8] Preparatory patches for nova-core memory management
@ 2026-02-18 20:54 Joel Fernandes
2026-02-18 20:54 ` [PATCH v10 1/8] gpu: Move DRM buddy allocator one level up (part one) Joel Fernandes
` (9 more replies)
0 siblings, 10 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:54 UTC (permalink / raw)
To: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Joel Fernandes, Nikola Djukic
These are initial preparatory patches needed for nova-core memory management
support. The series moves the DRM buddy allocator one level up so it can be
shared across GPU subsystems, adds Rust FFI and clist bindings, and creates
Rust GPU buddy allocator bindings.
The clist/ffi patches are ready and have been reviewed by Gary and Danilo.
Miguel, could you pull those via the rust tree?
The non-Rust DRM buddy patches are already being pulled upstream by Dave
Airlie, but I have included them here because the rest of the series depends
on them (thanks to Dave for reworking them so that they apply).
I will post the nova-core memory management patches as a separate follow-up
series just after this one.
The git tree with all these patches can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova/mm)
Joel Fernandes (7):
gpu: Move DRM buddy allocator one level up (part one)
gpu: Move DRM buddy allocator one level up (part two)
rust: ffi: Convert pub use to pub mod and create ffi module
rust: clist: Add support to interface with C linked lists
rust: gpu: Add GPU buddy allocator bindings
nova-core: mm: Select GPU_BUDDY for VRAM allocation
nova-core: Kconfig: Sort select statements alphabetically
Koen Koning (1):
gpu: Fix uninitialized buddy for built-in drivers
Documentation/gpu/drm-mm.rst | 10 +-
MAINTAINERS | 15 +-
drivers/gpu/Kconfig | 13 +
drivers/gpu/Makefile | 3 +-
drivers/gpu/buddy.c | 1322 +++++++++++++++++
drivers/gpu/drm/Kconfig | 5 +-
drivers/gpu/drm/Kconfig.debug | 1 -
drivers/gpu/drm/Makefile | 1 -
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 79 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 20 +-
drivers/gpu/drm/drm_buddy.c | 1277 +---------------
drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +-
drivers/gpu/drm/i915/i915_scatterlist.c | 10 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 4 +-
.../drm/i915/selftests/intel_memory_region.c | 20 +-
drivers/gpu/drm/tests/Makefile | 1 -
drivers/gpu/drm/tests/drm_exec_test.c | 2 -
drivers/gpu/drm/tests/drm_mm_test.c | 2 -
.../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 4 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 4 +-
drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
drivers/gpu/drm/xe/xe_svm.c | 12 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 71 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 4 +-
drivers/gpu/nova-core/Kconfig | 3 +-
drivers/gpu/tests/Makefile | 4 +
.../gpu_buddy_test.c} | 416 +++---
.../lib/drm_random.c => tests/gpu_random.c} | 18 +-
.../lib/drm_random.h => tests/gpu_random.h} | 18 +-
drivers/video/Kconfig | 1 +
include/drm/drm_buddy.h | 163 +-
include/linux/gpu_buddy.h | 177 +++
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 +
rust/helpers/helpers.c | 2 +
rust/helpers/list.c | 17 +
rust/kernel/ffi/clist.rs | 327 ++++
rust/kernel/ffi/mod.rs | 9 +
rust/kernel/gpu/buddy.rs | 537 +++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 5 +-
45 files changed, 2893 insertions(+), 1846 deletions(-)
create mode 100644 drivers/gpu/Kconfig
create mode 100644 drivers/gpu/buddy.c
create mode 100644 drivers/gpu/tests/Makefile
rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (67%)
rename drivers/gpu/{drm/lib/drm_random.c => tests/gpu_random.c} (59%)
rename drivers/gpu/{drm/lib/drm_random.h => tests/gpu_random.h} (53%)
create mode 100644 include/linux/gpu_buddy.h
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/ffi/clist.rs
create mode 100644 rust/kernel/ffi/mod.rs
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
Cc: Nikola Djukic <ndjukic@nvidia.com>
base-commit: 2961f841b025fb234860bac26dfb7fa7cb0fb122
--
2.34.1
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v10 1/8] gpu: Move DRM buddy allocator one level up (part one)
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
@ 2026-02-18 20:54 ` Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 2/8] gpu: Move DRM buddy allocator one level up (part two) Joel Fernandes
` (8 subsequent siblings)
9 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:54 UTC (permalink / raw)
To: linux-kernel, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Jonathan Corbet, Shuah Khan,
Matthew Auld, Arun Pravin, Christian Koenig, Alex Deucher,
Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
Huang Rui, Matthew Brost, Thomas Hellström
Cc: Danilo Krummrich, Miguel Ojeda, Dave Airlie, Gary Guo,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Joel Fernandes, linux-doc, amd-gfx, intel-gfx, intel-xe
Move the DRM buddy allocator one level up so that it can be used by GPU
drivers (for example, nova-core) that have use cases other than DRM (such
as VFIO vGPU support). Modify the API, structures and Kconfigs to use
"gpu_buddy" terminology, and adapt the drivers and tests to the new API.
The commit cannot be split further without breaking bisectability;
however, no functional change is intended. Verified by running KUnit
tests and by build-testing various configurations.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
[airlied: I've split this into two so git can find copies more easily.
I've also just nuked the drm_random library; that needs to be done
elsewhere, and only the buddy tests seem to be using it.]
Signed-off-by: Dave Airlie <airlied@redhat.com>
---
Documentation/gpu/drm-mm.rst | 6 +++---
drivers/gpu/Makefile | 2 +-
drivers/gpu/{drm/drm_buddy.c => buddy.c} | 2 +-
drivers/gpu/drm/Kconfig | 4 ----
drivers/gpu/drm/Kconfig.debug | 1 -
drivers/gpu/drm/Makefile | 3 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +-
drivers/gpu/drm/i915/i915_scatterlist.c | 2 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 2 +-
drivers/gpu/drm/tests/Makefile | 1 -
drivers/gpu/drm/tests/drm_exec_test.c | 2 --
drivers/gpu/drm/tests/drm_mm_test.c | 2 --
drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 2 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 2 +-
drivers/gpu/tests/Makefile | 4 ++++
.../{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} | 4 ++--
drivers/gpu/{drm/lib/drm_random.c => tests/gpu_random.c} | 2 +-
drivers/gpu/{drm/lib/drm_random.h => tests/gpu_random.h} | 0
include/{drm/drm_buddy.h => linux/gpu_buddy.h} | 0
20 files changed, 19 insertions(+), 26 deletions(-)
rename drivers/gpu/{drm/drm_buddy.c => buddy.c} (99%)
create mode 100644 drivers/gpu/tests/Makefile
rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (99%)
rename drivers/gpu/{drm/lib/drm_random.c => tests/gpu_random.c} (97%)
rename drivers/gpu/{drm/lib/drm_random.h => tests/gpu_random.h} (100%)
rename include/{drm/drm_buddy.h => linux/gpu_buddy.h} (100%)
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index f22433470c76..ceee0e663237 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -526,10 +526,10 @@ DRM GPUVM Function References
DRM Buddy Allocator
===================
-DRM Buddy Function References
------------------------------
+Buddy Allocator Function References (GPU buddy)
+-----------------------------------------------
-.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
+.. kernel-doc:: drivers/gpu/buddy.c
:export:
DRM Cache Handling and Fast WC memcpy()
diff --git a/drivers/gpu/Makefile b/drivers/gpu/Makefile
index 36a54d456630..c5292ee2c852 100644
--- a/drivers/gpu/Makefile
+++ b/drivers/gpu/Makefile
@@ -2,7 +2,7 @@
# drm/tegra depends on host1x, so if both drivers are built-in care must be
# taken to initialize them in the correct order. Link order is the only way
# to ensure this currently.
-obj-y += host1x/ drm/ vga/
+obj-y += host1x/ drm/ vga/ tests/
obj-$(CONFIG_IMX_IPUV3_CORE) += ipu-v3/
obj-$(CONFIG_TRACE_GPU_MEM) += trace/
obj-$(CONFIG_NOVA_CORE) += nova-core/
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/buddy.c
similarity index 99%
rename from drivers/gpu/drm/drm_buddy.c
rename to drivers/gpu/buddy.c
index fd34d3755f7c..4cc63d961d26 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/buddy.c
@@ -10,7 +10,7 @@
#include <linux/module.h>
#include <linux/sizes.h>
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_print.h>
enum drm_buddy_free_tree {
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index d3d52310c9cc..ca2a2801e77f 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -269,10 +269,6 @@ config DRM_SCHED
config DRM_PANEL_BACKLIGHT_QUIRKS
tristate
-config DRM_LIB_RANDOM
- bool
- default n
-
config DRM_PRIVACY_SCREEN
bool
default n
diff --git a/drivers/gpu/drm/Kconfig.debug b/drivers/gpu/drm/Kconfig.debug
index 05dc43c0b8c5..3b7886865335 100644
--- a/drivers/gpu/drm/Kconfig.debug
+++ b/drivers/gpu/drm/Kconfig.debug
@@ -69,7 +69,6 @@ config DRM_KUNIT_TEST
select DRM_EXPORT_FOR_TESTS if m
select DRM_GEM_SHMEM_HELPER
select DRM_KUNIT_TEST_HELPERS
- select DRM_LIB_RANDOM
select DRM_SYSFB_HELPER
select PRIME_NUMBERS
default KUNIT_ALL_TESTS
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index ec2c5ff82382..5c86bc908955 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -78,7 +78,6 @@ drm-$(CONFIG_DRM_CLIENT) += \
drm_client_event.o \
drm_client_modeset.o \
drm_client_sysrq.o
-drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_PANEL) += drm_panel.o
drm-$(CONFIG_OF) += drm_of.o
@@ -114,7 +113,7 @@ drm_gpusvm_helper-$(CONFIG_ZONE_DEVICE) += \
obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
-obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
+obj-$(CONFIG_DRM_BUDDY) += ../buddy.o
drm_dma_helper-y := drm_gem_dma_helper.o
drm_dma_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fbdev_dma.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
index 5f5fd9a911c2..874779618056 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
@@ -24,7 +24,7 @@
#ifndef __AMDGPU_VRAM_MGR_H__
#define __AMDGPU_VRAM_MGR_H__
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
struct amdgpu_vram_mgr {
struct ttm_resource_manager manager;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index f65fe86c02b5..eeda5daa544f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -5,7 +5,7 @@
#include <linux/shmem_fs.h>
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_print.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_tt.h>
diff --git a/drivers/gpu/drm/i915/i915_scatterlist.c b/drivers/gpu/drm/i915/i915_scatterlist.c
index 4d830740946d..30246f02bcfe 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.c
+++ b/drivers/gpu/drm/i915/i915_scatterlist.c
@@ -7,7 +7,7 @@
#include "i915_scatterlist.h"
#include "i915_ttm_buddy_manager.h"
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_mm.h>
#include <linux/slab.h>
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
index d5c6e6605086..6b256d95badd 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
@@ -5,7 +5,7 @@
#include <linux/slab.h>
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_print.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_bo.h>
diff --git a/drivers/gpu/drm/tests/Makefile b/drivers/gpu/drm/tests/Makefile
index 87d5d5f9332a..d2e2e3d8349a 100644
--- a/drivers/gpu/drm/tests/Makefile
+++ b/drivers/gpu/drm/tests/Makefile
@@ -7,7 +7,6 @@ obj-$(CONFIG_DRM_KUNIT_TEST) += \
drm_atomic_test.o \
drm_atomic_state_test.o \
drm_bridge_test.o \
- drm_buddy_test.o \
drm_cmdline_parser_test.o \
drm_connector_test.o \
drm_damage_helper_test.o \
diff --git a/drivers/gpu/drm/tests/drm_exec_test.c b/drivers/gpu/drm/tests/drm_exec_test.c
index 3a20c788c51f..2fc47f3b463b 100644
--- a/drivers/gpu/drm/tests/drm_exec_test.c
+++ b/drivers/gpu/drm/tests/drm_exec_test.c
@@ -16,8 +16,6 @@
#include <drm/drm_gem.h>
#include <drm/drm_kunit_helpers.h>
-#include "../lib/drm_random.h"
-
struct drm_exec_priv {
struct device *dev;
struct drm_device *drm;
diff --git a/drivers/gpu/drm/tests/drm_mm_test.c b/drivers/gpu/drm/tests/drm_mm_test.c
index aec9eccdeae9..e24a619059d8 100644
--- a/drivers/gpu/drm/tests/drm_mm_test.c
+++ b/drivers/gpu/drm/tests/drm_mm_test.c
@@ -16,8 +16,6 @@
#include <drm/drm_mm.h>
#include <drm/drm_print.h>
-#include "../lib/drm_random.h"
-
enum {
BEST,
BOTTOMUP,
diff --git a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
index e4c95f86a467..96ea8c9aae34 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
+++ b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
@@ -5,7 +5,7 @@
#ifndef TTM_MOCK_MANAGER_H
#define TTM_MOCK_MANAGER_H
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
struct ttm_mock_manager {
struct ttm_resource_manager man;
diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
index a71e14818ec2..babeec5511d9 100644
--- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
+++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
@@ -6,7 +6,7 @@
#ifndef _XE_TTM_VRAM_MGR_TYPES_H_
#define _XE_TTM_VRAM_MGR_TYPES_H_
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include <drm/ttm/ttm_device.h>
/**
diff --git a/drivers/gpu/tests/Makefile b/drivers/gpu/tests/Makefile
new file mode 100644
index 000000000000..8e7654e87d82
--- /dev/null
+++ b/drivers/gpu/tests/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+
+gpu_buddy_tests-y = gpu_buddy_test.o gpu_random.o
+obj-$(CONFIG_DRM_KUNIT_TEST) += gpu_buddy_tests.o
diff --git a/drivers/gpu/drm/tests/drm_buddy_test.c b/drivers/gpu/tests/gpu_buddy_test.c
similarity index 99%
rename from drivers/gpu/drm/tests/drm_buddy_test.c
rename to drivers/gpu/tests/gpu_buddy_test.c
index e6f8459c6c54..b905932da990 100644
--- a/drivers/gpu/drm/tests/drm_buddy_test.c
+++ b/drivers/gpu/tests/gpu_buddy_test.c
@@ -10,9 +10,9 @@
#include <linux/sched/signal.h>
#include <linux/sizes.h>
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
-#include "../lib/drm_random.h"
+#include "gpu_random.h"
static unsigned int random_seed;
diff --git a/drivers/gpu/drm/lib/drm_random.c b/drivers/gpu/tests/gpu_random.c
similarity index 97%
rename from drivers/gpu/drm/lib/drm_random.c
rename to drivers/gpu/tests/gpu_random.c
index 0e9dba1ef4af..ddd1f594b5d5 100644
--- a/drivers/gpu/drm/lib/drm_random.c
+++ b/drivers/gpu/tests/gpu_random.c
@@ -6,7 +6,7 @@
#include <linux/slab.h>
#include <linux/types.h>
-#include "drm_random.h"
+#include "gpu_random.h"
u32 drm_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
{
diff --git a/drivers/gpu/drm/lib/drm_random.h b/drivers/gpu/tests/gpu_random.h
similarity index 100%
rename from drivers/gpu/drm/lib/drm_random.h
rename to drivers/gpu/tests/gpu_random.h
diff --git a/include/drm/drm_buddy.h b/include/linux/gpu_buddy.h
similarity index 100%
rename from include/drm/drm_buddy.h
rename to include/linux/gpu_buddy.h
--
2.34.1
* [PATCH v10 2/8] gpu: Move DRM buddy allocator one level up (part two)
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
2026-02-18 20:54 ` [PATCH v10 1/8] gpu: Move DRM buddy allocator one level up (part one) Joel Fernandes
@ 2026-02-18 20:55 ` Joel Fernandes
2026-02-19 3:18 ` Alexandre Courbot
2026-02-18 20:55 ` [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers Joel Fernandes
` (7 subsequent siblings)
9 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:55 UTC (permalink / raw)
To: linux-kernel, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Jonathan Corbet, Shuah Khan,
Matthew Auld, Arun Pravin, Christian Koenig, Alex Deucher,
Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
Huang Rui, Matthew Brost, Thomas Hellström, Helge Deller
Cc: Danilo Krummrich, Miguel Ojeda, Dave Airlie, Gary Guo,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Joel Fernandes, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev
Move the DRM buddy allocator one level up so that it can be used by GPU
drivers (for example, nova-core) that have use cases other than DRM (such
as VFIO vGPU support). Modify the API, structures and Kconfigs to use
"gpu_buddy" terminology, and adapt the drivers and tests to the new API.
The commit cannot be split further without breaking bisectability;
however, no functional change is intended. Verified by running KUnit
tests and by build-testing various configurations.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
[airlied: I've split this into two so git can find copies more easily.
I've also just nuked the drm_random library; that needs to be done
elsewhere, and only the buddy tests seem to be using it.]
Signed-off-by: Dave Airlie <airlied@redhat.com>
---
Documentation/gpu/drm-mm.rst | 6 +
MAINTAINERS | 8 +-
drivers/gpu/Kconfig | 13 +
drivers/gpu/Makefile | 1 +
drivers/gpu/buddy.c | 556 +++++++++---------
drivers/gpu/drm/Kconfig | 1 +
drivers/gpu/drm/Makefile | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 79 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 18 +-
drivers/gpu/drm/drm_buddy.c | 77 +++
drivers/gpu/drm/i915/i915_scatterlist.c | 8 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 4 +-
.../drm/i915/selftests/intel_memory_region.c | 20 +-
.../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 4 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 2 +-
drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
drivers/gpu/drm/xe/xe_svm.c | 12 +-
drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 71 +--
drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 2 +-
drivers/gpu/tests/Makefile | 2 +-
drivers/gpu/tests/gpu_buddy_test.c | 412 ++++++-------
drivers/gpu/tests/gpu_random.c | 16 +-
drivers/gpu/tests/gpu_random.h | 18 +-
drivers/video/Kconfig | 1 +
include/drm/drm_buddy.h | 18 +
include/linux/gpu_buddy.h | 120 ++--
30 files changed, 853 insertions(+), 739 deletions(-)
create mode 100644 drivers/gpu/Kconfig
create mode 100644 drivers/gpu/drm/drm_buddy.c
create mode 100644 include/drm/drm_buddy.h
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index ceee0e663237..32fb506db05b 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -532,6 +532,12 @@ Buddy Allocator Function References (GPU buddy)
.. kernel-doc:: drivers/gpu/buddy.c
:export:
+DRM Buddy Specific Logging Function References
+----------------------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
+ :export:
+
DRM Cache Handling and Fast WC memcpy()
=======================================
diff --git a/MAINTAINERS b/MAINTAINERS
index dc82a6bd1a61..14b4f9af0e36 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8905,15 +8905,17 @@ T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/ttm/
F: include/drm/ttm/
-DRM BUDDY ALLOCATOR
+GPU BUDDY ALLOCATOR
M: Matthew Auld <matthew.auld@intel.com>
M: Arun Pravin <arunpravin.paneerselvam@amd.com>
R: Christian Koenig <christian.koenig@amd.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
-F: drivers/gpu/drm/drm_buddy.c
-F: drivers/gpu/drm/tests/drm_buddy_test.c
+F: drivers/gpu/drm_buddy.c
+F: drivers/gpu/buddy.c
+F: drivers/gpu/tests/gpu_buddy_test.c
+F: include/linux/gpu_buddy.h
F: include/drm/drm_buddy.h
DRM AUTOMATED TESTING
diff --git a/drivers/gpu/Kconfig b/drivers/gpu/Kconfig
new file mode 100644
index 000000000000..ebb2ad4b7ea0
--- /dev/null
+++ b/drivers/gpu/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0
+
+config GPU_BUDDY
+ bool
+ help
+ A page based buddy allocator for GPU memory.
+
+config GPU_BUDDY_KUNIT_TEST
+ tristate "KUnit tests for GPU buddy allocator" if !KUNIT_ALL_TESTS
+ depends on GPU_BUDDY && KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ KUnit tests for the GPU buddy allocator.
diff --git a/drivers/gpu/Makefile b/drivers/gpu/Makefile
index c5292ee2c852..5cd54d06e262 100644
--- a/drivers/gpu/Makefile
+++ b/drivers/gpu/Makefile
@@ -6,3 +6,4 @@ obj-y += host1x/ drm/ vga/ tests/
obj-$(CONFIG_IMX_IPUV3_CORE) += ipu-v3/
obj-$(CONFIG_TRACE_GPU_MEM) += trace/
obj-$(CONFIG_NOVA_CORE) += nova-core/
+obj-$(CONFIG_GPU_BUDDY) += buddy.o
diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
index 4cc63d961d26..603c59a2013a 100644
--- a/drivers/gpu/buddy.c
+++ b/drivers/gpu/buddy.c
@@ -11,27 +11,17 @@
#include <linux/sizes.h>
#include <linux/gpu_buddy.h>
-#include <drm/drm_print.h>
-
-enum drm_buddy_free_tree {
- DRM_BUDDY_CLEAR_TREE = 0,
- DRM_BUDDY_DIRTY_TREE,
- DRM_BUDDY_MAX_FREE_TREES,
-};
static struct kmem_cache *slab_blocks;
-#define for_each_free_tree(tree) \
- for ((tree) = 0; (tree) < DRM_BUDDY_MAX_FREE_TREES; (tree)++)
-
-static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
- struct drm_buddy_block *parent,
+static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
+ struct gpu_buddy_block *parent,
unsigned int order,
u64 offset)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
- BUG_ON(order > DRM_BUDDY_MAX_ORDER);
+ BUG_ON(order > GPU_BUDDY_MAX_ORDER);
block = kmem_cache_zalloc(slab_blocks, GFP_KERNEL);
if (!block)
@@ -43,30 +33,30 @@ static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
RB_CLEAR_NODE(&block->rb);
- BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED);
+ BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
return block;
}
-static void drm_block_free(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+static void gpu_block_free(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
kmem_cache_free(slab_blocks, block);
}
-static enum drm_buddy_free_tree
-get_block_tree(struct drm_buddy_block *block)
+static enum gpu_buddy_free_tree
+get_block_tree(struct gpu_buddy_block *block)
{
- return drm_buddy_block_is_clear(block) ?
- DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+ return gpu_buddy_block_is_clear(block) ?
+ GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
}
-static struct drm_buddy_block *
+static struct gpu_buddy_block *
rbtree_get_free_block(const struct rb_node *node)
{
- return node ? rb_entry(node, struct drm_buddy_block, rb) : NULL;
+ return node ? rb_entry(node, struct gpu_buddy_block, rb) : NULL;
}
-static struct drm_buddy_block *
+static struct gpu_buddy_block *
rbtree_last_free_block(struct rb_root *root)
{
return rbtree_get_free_block(rb_last(root));
@@ -77,33 +67,33 @@ static bool rbtree_is_empty(struct rb_root *root)
return RB_EMPTY_ROOT(root);
}
-static bool drm_buddy_block_offset_less(const struct drm_buddy_block *block,
- const struct drm_buddy_block *node)
+static bool gpu_buddy_block_offset_less(const struct gpu_buddy_block *block,
+ const struct gpu_buddy_block *node)
{
- return drm_buddy_block_offset(block) < drm_buddy_block_offset(node);
+ return gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node);
}
static bool rbtree_block_offset_less(struct rb_node *block,
const struct rb_node *node)
{
- return drm_buddy_block_offset_less(rbtree_get_free_block(block),
+ return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
rbtree_get_free_block(node));
}
-static void rbtree_insert(struct drm_buddy *mm,
- struct drm_buddy_block *block,
- enum drm_buddy_free_tree tree)
+static void rbtree_insert(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
+ enum gpu_buddy_free_tree tree)
{
rb_add(&block->rb,
- &mm->free_trees[tree][drm_buddy_block_order(block)],
+ &mm->free_trees[tree][gpu_buddy_block_order(block)],
rbtree_block_offset_less);
}
-static void rbtree_remove(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+static void rbtree_remove(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- unsigned int order = drm_buddy_block_order(block);
- enum drm_buddy_free_tree tree;
+ unsigned int order = gpu_buddy_block_order(block);
+ enum gpu_buddy_free_tree tree;
struct rb_root *root;
tree = get_block_tree(block);
@@ -113,42 +103,42 @@ static void rbtree_remove(struct drm_buddy *mm,
RB_CLEAR_NODE(&block->rb);
}
-static void clear_reset(struct drm_buddy_block *block)
+static void clear_reset(struct gpu_buddy_block *block)
{
- block->header &= ~DRM_BUDDY_HEADER_CLEAR;
+ block->header &= ~GPU_BUDDY_HEADER_CLEAR;
}
-static void mark_cleared(struct drm_buddy_block *block)
+static void mark_cleared(struct gpu_buddy_block *block)
{
- block->header |= DRM_BUDDY_HEADER_CLEAR;
+ block->header |= GPU_BUDDY_HEADER_CLEAR;
}
-static void mark_allocated(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+static void mark_allocated(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- block->header &= ~DRM_BUDDY_HEADER_STATE;
- block->header |= DRM_BUDDY_ALLOCATED;
+ block->header &= ~GPU_BUDDY_HEADER_STATE;
+ block->header |= GPU_BUDDY_ALLOCATED;
rbtree_remove(mm, block);
}
-static void mark_free(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+static void mark_free(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- enum drm_buddy_free_tree tree;
+ enum gpu_buddy_free_tree tree;
- block->header &= ~DRM_BUDDY_HEADER_STATE;
- block->header |= DRM_BUDDY_FREE;
+ block->header &= ~GPU_BUDDY_HEADER_STATE;
+ block->header |= GPU_BUDDY_FREE;
tree = get_block_tree(block);
rbtree_insert(mm, block, tree);
}
-static void mark_split(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+static void mark_split(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- block->header &= ~DRM_BUDDY_HEADER_STATE;
- block->header |= DRM_BUDDY_SPLIT;
+ block->header &= ~GPU_BUDDY_HEADER_STATE;
+ block->header |= GPU_BUDDY_SPLIT;
rbtree_remove(mm, block);
}
@@ -163,10 +153,10 @@ static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
return s1 <= s2 && e1 >= e2;
}
-static struct drm_buddy_block *
-__get_buddy(struct drm_buddy_block *block)
+static struct gpu_buddy_block *
+__get_buddy(struct gpu_buddy_block *block)
{
- struct drm_buddy_block *parent;
+ struct gpu_buddy_block *parent;
parent = block->parent;
if (!parent)
@@ -178,19 +168,19 @@ __get_buddy(struct drm_buddy_block *block)
return parent->left;
}
-static unsigned int __drm_buddy_free(struct drm_buddy *mm,
- struct drm_buddy_block *block,
+static unsigned int __gpu_buddy_free(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
bool force_merge)
{
- struct drm_buddy_block *parent;
+ struct gpu_buddy_block *parent;
unsigned int order;
while ((parent = block->parent)) {
- struct drm_buddy_block *buddy;
+ struct gpu_buddy_block *buddy;
buddy = __get_buddy(block);
- if (!drm_buddy_block_is_free(buddy))
+ if (!gpu_buddy_block_is_free(buddy))
break;
if (!force_merge) {
@@ -198,31 +188,31 @@ static unsigned int __drm_buddy_free(struct drm_buddy *mm,
* Check the block and its buddy clear state and exit
* the loop if they both have the dissimilar state.
*/
- if (drm_buddy_block_is_clear(block) !=
- drm_buddy_block_is_clear(buddy))
+ if (gpu_buddy_block_is_clear(block) !=
+ gpu_buddy_block_is_clear(buddy))
break;
- if (drm_buddy_block_is_clear(block))
+ if (gpu_buddy_block_is_clear(block))
mark_cleared(parent);
}
rbtree_remove(mm, buddy);
- if (force_merge && drm_buddy_block_is_clear(buddy))
- mm->clear_avail -= drm_buddy_block_size(mm, buddy);
+ if (force_merge && gpu_buddy_block_is_clear(buddy))
+ mm->clear_avail -= gpu_buddy_block_size(mm, buddy);
- drm_block_free(mm, block);
- drm_block_free(mm, buddy);
+ gpu_block_free(mm, block);
+ gpu_block_free(mm, buddy);
block = parent;
}
- order = drm_buddy_block_order(block);
+ order = gpu_buddy_block_order(block);
mark_free(mm, block);
return order;
}
-static int __force_merge(struct drm_buddy *mm,
+static int __force_merge(struct gpu_buddy *mm,
u64 start,
u64 end,
unsigned int min_order)
@@ -241,7 +231,7 @@ static int __force_merge(struct drm_buddy *mm,
struct rb_node *iter = rb_last(&mm->free_trees[tree][i]);
while (iter) {
- struct drm_buddy_block *block, *buddy;
+ struct gpu_buddy_block *block, *buddy;
u64 block_start, block_end;
block = rbtree_get_free_block(iter);
@@ -250,18 +240,18 @@ static int __force_merge(struct drm_buddy *mm,
if (!block || !block->parent)
continue;
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block) - 1;
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
if (!contains(start, end, block_start, block_end))
continue;
buddy = __get_buddy(block);
- if (!drm_buddy_block_is_free(buddy))
+ if (!gpu_buddy_block_is_free(buddy))
continue;
- WARN_ON(drm_buddy_block_is_clear(block) ==
- drm_buddy_block_is_clear(buddy));
+ WARN_ON(gpu_buddy_block_is_clear(block) ==
+ gpu_buddy_block_is_clear(buddy));
/*
* Advance to the next node when the current node is the buddy,
@@ -271,10 +261,10 @@ static int __force_merge(struct drm_buddy *mm,
iter = rb_prev(iter);
rbtree_remove(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
- order = __drm_buddy_free(mm, block, true);
+ order = __gpu_buddy_free(mm, block, true);
if (order >= min_order)
return 0;
}
@@ -285,9 +275,9 @@ static int __force_merge(struct drm_buddy *mm,
}
/**
- * drm_buddy_init - init memory manager
+ * gpu_buddy_init - init memory manager
*
- * @mm: DRM buddy manager to initialize
+ * @mm: GPU buddy manager to initialize
* @size: size in bytes to manage
* @chunk_size: minimum page size in bytes for our allocations
*
@@ -296,7 +286,7 @@ static int __force_merge(struct drm_buddy *mm,
* Returns:
* 0 on success, error code on failure.
*/
-int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
+int gpu_buddy_init(struct gpu_buddy *mm, u64 size, u64 chunk_size)
{
unsigned int i, j, root_count = 0;
u64 offset = 0;
@@ -318,9 +308,9 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
mm->chunk_size = chunk_size;
mm->max_order = ilog2(size) - ilog2(chunk_size);
- BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);
+ BUG_ON(mm->max_order > GPU_BUDDY_MAX_ORDER);
- mm->free_trees = kmalloc_array(DRM_BUDDY_MAX_FREE_TREES,
+ mm->free_trees = kmalloc_array(GPU_BUDDY_MAX_FREE_TREES,
sizeof(*mm->free_trees),
GFP_KERNEL);
if (!mm->free_trees)
@@ -340,7 +330,7 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
mm->n_roots = hweight64(size);
mm->roots = kmalloc_array(mm->n_roots,
- sizeof(struct drm_buddy_block *),
+ sizeof(struct gpu_buddy_block *),
GFP_KERNEL);
if (!mm->roots)
goto out_free_tree;
@@ -350,21 +340,21 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
* not itself a power-of-two.
*/
do {
- struct drm_buddy_block *root;
+ struct gpu_buddy_block *root;
unsigned int order;
u64 root_size;
order = ilog2(size) - ilog2(chunk_size);
root_size = chunk_size << order;
- root = drm_block_alloc(mm, NULL, order, offset);
+ root = gpu_block_alloc(mm, NULL, order, offset);
if (!root)
goto out_free_roots;
mark_free(mm, root);
BUG_ON(root_count > mm->max_order);
- BUG_ON(drm_buddy_block_size(mm, root) < chunk_size);
+ BUG_ON(gpu_buddy_block_size(mm, root) < chunk_size);
mm->roots[root_count] = root;
@@ -377,7 +367,7 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
out_free_roots:
while (root_count--)
- drm_block_free(mm, mm->roots[root_count]);
+ gpu_block_free(mm, mm->roots[root_count]);
kfree(mm->roots);
out_free_tree:
while (i--)
@@ -385,16 +375,16 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
kfree(mm->free_trees);
return -ENOMEM;
}
-EXPORT_SYMBOL(drm_buddy_init);
+EXPORT_SYMBOL(gpu_buddy_init);
/**
- * drm_buddy_fini - tear down the memory manager
+ * gpu_buddy_fini - tear down the memory manager
*
- * @mm: DRM buddy manager to free
+ * @mm: GPU buddy manager to free
*
* Cleanup memory manager resources and the freetree
*/
-void drm_buddy_fini(struct drm_buddy *mm)
+void gpu_buddy_fini(struct gpu_buddy *mm)
{
u64 root_size, size, start;
unsigned int order;
@@ -404,13 +394,13 @@ void drm_buddy_fini(struct drm_buddy *mm)
for (i = 0; i < mm->n_roots; ++i) {
order = ilog2(size) - ilog2(mm->chunk_size);
- start = drm_buddy_block_offset(mm->roots[i]);
+ start = gpu_buddy_block_offset(mm->roots[i]);
__force_merge(mm, start, start + size, order);
- if (WARN_ON(!drm_buddy_block_is_free(mm->roots[i])))
+ if (WARN_ON(!gpu_buddy_block_is_free(mm->roots[i])))
kunit_fail_current_test("buddy_fini() root");
- drm_block_free(mm, mm->roots[i]);
+ gpu_block_free(mm, mm->roots[i]);
root_size = mm->chunk_size << order;
size -= root_size;
@@ -423,31 +413,31 @@ void drm_buddy_fini(struct drm_buddy *mm)
kfree(mm->free_trees);
kfree(mm->roots);
}
-EXPORT_SYMBOL(drm_buddy_fini);
+EXPORT_SYMBOL(gpu_buddy_fini);
-static int split_block(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+static int split_block(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- unsigned int block_order = drm_buddy_block_order(block) - 1;
- u64 offset = drm_buddy_block_offset(block);
+ unsigned int block_order = gpu_buddy_block_order(block) - 1;
+ u64 offset = gpu_buddy_block_offset(block);
- BUG_ON(!drm_buddy_block_is_free(block));
- BUG_ON(!drm_buddy_block_order(block));
+ BUG_ON(!gpu_buddy_block_is_free(block));
+ BUG_ON(!gpu_buddy_block_order(block));
- block->left = drm_block_alloc(mm, block, block_order, offset);
+ block->left = gpu_block_alloc(mm, block, block_order, offset);
if (!block->left)
return -ENOMEM;
- block->right = drm_block_alloc(mm, block, block_order,
+ block->right = gpu_block_alloc(mm, block, block_order,
offset + (mm->chunk_size << block_order));
if (!block->right) {
- drm_block_free(mm, block->left);
+ gpu_block_free(mm, block->left);
return -ENOMEM;
}
mark_split(mm, block);
- if (drm_buddy_block_is_clear(block)) {
+ if (gpu_buddy_block_is_clear(block)) {
mark_cleared(block->left);
mark_cleared(block->right);
clear_reset(block);
@@ -460,34 +450,34 @@ static int split_block(struct drm_buddy *mm,
}
/**
- * drm_get_buddy - get buddy address
+ * gpu_get_buddy - get buddy address
*
- * @block: DRM buddy block
+ * @block: GPU buddy block
*
* Returns the corresponding buddy block for @block, or NULL
* if this is a root block and can't be merged further.
* Requires some kind of locking to protect against
* any concurrent allocate and free operations.
*/
-struct drm_buddy_block *
-drm_get_buddy(struct drm_buddy_block *block)
+struct gpu_buddy_block *
+gpu_get_buddy(struct gpu_buddy_block *block)
{
return __get_buddy(block);
}
-EXPORT_SYMBOL(drm_get_buddy);
+EXPORT_SYMBOL(gpu_get_buddy);
/**
- * drm_buddy_reset_clear - reset blocks clear state
+ * gpu_buddy_reset_clear - reset blocks clear state
*
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
* @is_clear: blocks clear state
*
* Reset the clear state based on @is_clear value for each block
* in the freetree.
*/
-void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
+void gpu_buddy_reset_clear(struct gpu_buddy *mm, bool is_clear)
{
- enum drm_buddy_free_tree src_tree, dst_tree;
+ enum gpu_buddy_free_tree src_tree, dst_tree;
u64 root_size, size, start;
unsigned int order;
int i;
@@ -495,60 +485,60 @@ void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
size = mm->size;
for (i = 0; i < mm->n_roots; ++i) {
order = ilog2(size) - ilog2(mm->chunk_size);
- start = drm_buddy_block_offset(mm->roots[i]);
+ start = gpu_buddy_block_offset(mm->roots[i]);
__force_merge(mm, start, start + size, order);
root_size = mm->chunk_size << order;
size -= root_size;
}
- src_tree = is_clear ? DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
- dst_tree = is_clear ? DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+ src_tree = is_clear ? GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
+ dst_tree = is_clear ? GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
for (i = 0; i <= mm->max_order; ++i) {
struct rb_root *root = &mm->free_trees[src_tree][i];
- struct drm_buddy_block *block, *tmp;
+ struct gpu_buddy_block *block, *tmp;
rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
rbtree_remove(mm, block);
if (is_clear) {
mark_cleared(block);
- mm->clear_avail += drm_buddy_block_size(mm, block);
+ mm->clear_avail += gpu_buddy_block_size(mm, block);
} else {
clear_reset(block);
- mm->clear_avail -= drm_buddy_block_size(mm, block);
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
}
rbtree_insert(mm, block, dst_tree);
}
}
}
-EXPORT_SYMBOL(drm_buddy_reset_clear);
+EXPORT_SYMBOL(gpu_buddy_reset_clear);
/**
- * drm_buddy_free_block - free a block
+ * gpu_buddy_free_block - free a block
*
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
* @block: block to be freed
*/
-void drm_buddy_free_block(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+void gpu_buddy_free_block(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- BUG_ON(!drm_buddy_block_is_allocated(block));
- mm->avail += drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail += drm_buddy_block_size(mm, block);
+ BUG_ON(!gpu_buddy_block_is_allocated(block));
+ mm->avail += gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail += gpu_buddy_block_size(mm, block);
- __drm_buddy_free(mm, block, false);
+ __gpu_buddy_free(mm, block, false);
}
-EXPORT_SYMBOL(drm_buddy_free_block);
+EXPORT_SYMBOL(gpu_buddy_free_block);
-static void __drm_buddy_free_list(struct drm_buddy *mm,
+static void __gpu_buddy_free_list(struct gpu_buddy *mm,
struct list_head *objects,
bool mark_clear,
bool mark_dirty)
{
- struct drm_buddy_block *block, *on;
+ struct gpu_buddy_block *block, *on;
WARN_ON(mark_dirty && mark_clear);
@@ -557,13 +547,13 @@ static void __drm_buddy_free_list(struct drm_buddy *mm,
mark_cleared(block);
else if (mark_dirty)
clear_reset(block);
- drm_buddy_free_block(mm, block);
+ gpu_buddy_free_block(mm, block);
cond_resched();
}
INIT_LIST_HEAD(objects);
}
-static void drm_buddy_free_list_internal(struct drm_buddy *mm,
+static void gpu_buddy_free_list_internal(struct gpu_buddy *mm,
struct list_head *objects)
{
/*
@@ -571,43 +561,43 @@ static void drm_buddy_free_list_internal(struct drm_buddy *mm,
* at this point. For example we might have just failed part of the
* allocation.
*/
- __drm_buddy_free_list(mm, objects, false, false);
+ __gpu_buddy_free_list(mm, objects, false, false);
}
/**
- * drm_buddy_free_list - free blocks
+ * gpu_buddy_free_list - free blocks
*
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
* @objects: input list head to free blocks
- * @flags: optional flags like DRM_BUDDY_CLEARED
+ * @flags: optional flags like GPU_BUDDY_CLEARED
*/
-void drm_buddy_free_list(struct drm_buddy *mm,
+void gpu_buddy_free_list(struct gpu_buddy *mm,
struct list_head *objects,
unsigned int flags)
{
- bool mark_clear = flags & DRM_BUDDY_CLEARED;
+ bool mark_clear = flags & GPU_BUDDY_CLEARED;
- __drm_buddy_free_list(mm, objects, mark_clear, !mark_clear);
+ __gpu_buddy_free_list(mm, objects, mark_clear, !mark_clear);
}
-EXPORT_SYMBOL(drm_buddy_free_list);
+EXPORT_SYMBOL(gpu_buddy_free_list);
-static bool block_incompatible(struct drm_buddy_block *block, unsigned int flags)
+static bool block_incompatible(struct gpu_buddy_block *block, unsigned int flags)
{
- bool needs_clear = flags & DRM_BUDDY_CLEAR_ALLOCATION;
+ bool needs_clear = flags & GPU_BUDDY_CLEAR_ALLOCATION;
- return needs_clear != drm_buddy_block_is_clear(block);
+ return needs_clear != gpu_buddy_block_is_clear(block);
}
-static struct drm_buddy_block *
-__alloc_range_bias(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+__alloc_range_bias(struct gpu_buddy *mm,
u64 start, u64 end,
unsigned int order,
unsigned long flags,
bool fallback)
{
u64 req_size = mm->chunk_size << order;
- struct drm_buddy_block *block;
- struct drm_buddy_block *buddy;
+ struct gpu_buddy_block *block;
+ struct gpu_buddy_block *buddy;
LIST_HEAD(dfs);
int err;
int i;
@@ -622,23 +612,23 @@ __alloc_range_bias(struct drm_buddy *mm,
u64 block_end;
block = list_first_entry_or_null(&dfs,
- struct drm_buddy_block,
+ struct gpu_buddy_block,
tmp_link);
if (!block)
break;
list_del(&block->tmp_link);
- if (drm_buddy_block_order(block) < order)
+ if (gpu_buddy_block_order(block) < order)
continue;
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block) - 1;
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
if (!overlaps(start, end, block_start, block_end))
continue;
- if (drm_buddy_block_is_allocated(block))
+ if (gpu_buddy_block_is_allocated(block))
continue;
if (block_start < start || block_end > end) {
@@ -654,17 +644,17 @@ __alloc_range_bias(struct drm_buddy *mm,
continue;
if (contains(start, end, block_start, block_end) &&
- order == drm_buddy_block_order(block)) {
+ order == gpu_buddy_block_order(block)) {
/*
* Find the free block within the range.
*/
- if (drm_buddy_block_is_free(block))
+ if (gpu_buddy_block_is_free(block))
return block;
continue;
}
- if (!drm_buddy_block_is_split(block)) {
+ if (!gpu_buddy_block_is_split(block)) {
err = split_block(mm, block);
if (unlikely(err))
goto err_undo;
@@ -684,19 +674,19 @@ __alloc_range_bias(struct drm_buddy *mm,
*/
buddy = __get_buddy(block);
if (buddy &&
- (drm_buddy_block_is_free(block) &&
- drm_buddy_block_is_free(buddy)))
- __drm_buddy_free(mm, block, false);
+ (gpu_buddy_block_is_free(block) &&
+ gpu_buddy_block_is_free(buddy)))
+ __gpu_buddy_free(mm, block, false);
return ERR_PTR(err);
}
-static struct drm_buddy_block *
-__drm_buddy_alloc_range_bias(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+__gpu_buddy_alloc_range_bias(struct gpu_buddy *mm,
u64 start, u64 end,
unsigned int order,
unsigned long flags)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
bool fallback = false;
block = __alloc_range_bias(mm, start, end, order,
@@ -708,12 +698,12 @@ __drm_buddy_alloc_range_bias(struct drm_buddy *mm,
return block;
}
-static struct drm_buddy_block *
-get_maxblock(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+get_maxblock(struct gpu_buddy *mm,
unsigned int order,
- enum drm_buddy_free_tree tree)
+ enum gpu_buddy_free_tree tree)
{
- struct drm_buddy_block *max_block = NULL, *block = NULL;
+ struct gpu_buddy_block *max_block = NULL, *block = NULL;
struct rb_root *root;
unsigned int i;
@@ -728,8 +718,8 @@ get_maxblock(struct drm_buddy *mm,
continue;
}
- if (drm_buddy_block_offset(block) >
- drm_buddy_block_offset(max_block)) {
+ if (gpu_buddy_block_offset(block) >
+ gpu_buddy_block_offset(max_block)) {
max_block = block;
}
}
@@ -737,25 +727,25 @@ get_maxblock(struct drm_buddy *mm,
return max_block;
}
-static struct drm_buddy_block *
-alloc_from_freetree(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+alloc_from_freetree(struct gpu_buddy *mm,
unsigned int order,
unsigned long flags)
{
- struct drm_buddy_block *block = NULL;
+ struct gpu_buddy_block *block = NULL;
struct rb_root *root;
- enum drm_buddy_free_tree tree;
+ enum gpu_buddy_free_tree tree;
unsigned int tmp;
int err;
- tree = (flags & DRM_BUDDY_CLEAR_ALLOCATION) ?
- DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+ tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
+ GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
- if (flags & DRM_BUDDY_TOPDOWN_ALLOCATION) {
+ if (flags & GPU_BUDDY_TOPDOWN_ALLOCATION) {
block = get_maxblock(mm, order, tree);
if (block)
/* Store the obtained block order */
- tmp = drm_buddy_block_order(block);
+ tmp = gpu_buddy_block_order(block);
} else {
for (tmp = order; tmp <= mm->max_order; ++tmp) {
/* Get RB tree root for this order and tree */
@@ -768,8 +758,8 @@ alloc_from_freetree(struct drm_buddy *mm,
if (!block) {
/* Try allocating from the other tree */
- tree = (tree == DRM_BUDDY_CLEAR_TREE) ?
- DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
+ tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
+ GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
for (tmp = order; tmp <= mm->max_order; ++tmp) {
root = &mm->free_trees[tree][tmp];
@@ -782,7 +772,7 @@ alloc_from_freetree(struct drm_buddy *mm,
return ERR_PTR(-ENOSPC);
}
- BUG_ON(!drm_buddy_block_is_free(block));
+ BUG_ON(!gpu_buddy_block_is_free(block));
while (tmp != order) {
err = split_block(mm, block);
@@ -796,18 +786,18 @@ alloc_from_freetree(struct drm_buddy *mm,
err_undo:
if (tmp != order)
- __drm_buddy_free(mm, block, false);
+ __gpu_buddy_free(mm, block, false);
return ERR_PTR(err);
}
-static int __alloc_range(struct drm_buddy *mm,
+static int __alloc_range(struct gpu_buddy *mm,
struct list_head *dfs,
u64 start, u64 size,
struct list_head *blocks,
u64 *total_allocated_on_err)
{
- struct drm_buddy_block *block;
- struct drm_buddy_block *buddy;
+ struct gpu_buddy_block *block;
+ struct gpu_buddy_block *buddy;
u64 total_allocated = 0;
LIST_HEAD(allocated);
u64 end;
@@ -820,31 +810,31 @@ static int __alloc_range(struct drm_buddy *mm,
u64 block_end;
block = list_first_entry_or_null(dfs,
- struct drm_buddy_block,
+ struct gpu_buddy_block,
tmp_link);
if (!block)
break;
list_del(&block->tmp_link);
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block) - 1;
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
if (!overlaps(start, end, block_start, block_end))
continue;
- if (drm_buddy_block_is_allocated(block)) {
+ if (gpu_buddy_block_is_allocated(block)) {
err = -ENOSPC;
goto err_free;
}
if (contains(start, end, block_start, block_end)) {
- if (drm_buddy_block_is_free(block)) {
+ if (gpu_buddy_block_is_free(block)) {
mark_allocated(mm, block);
- total_allocated += drm_buddy_block_size(mm, block);
- mm->avail -= drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
+ total_allocated += gpu_buddy_block_size(mm, block);
+ mm->avail -= gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
list_add_tail(&block->link, &allocated);
continue;
} else if (!mm->clear_avail) {
@@ -853,7 +843,7 @@ static int __alloc_range(struct drm_buddy *mm,
}
}
- if (!drm_buddy_block_is_split(block)) {
+ if (!gpu_buddy_block_is_split(block)) {
err = split_block(mm, block);
if (unlikely(err))
goto err_undo;
@@ -880,22 +870,22 @@ static int __alloc_range(struct drm_buddy *mm,
*/
buddy = __get_buddy(block);
if (buddy &&
- (drm_buddy_block_is_free(block) &&
- drm_buddy_block_is_free(buddy)))
- __drm_buddy_free(mm, block, false);
+ (gpu_buddy_block_is_free(block) &&
+ gpu_buddy_block_is_free(buddy)))
+ __gpu_buddy_free(mm, block, false);
err_free:
if (err == -ENOSPC && total_allocated_on_err) {
list_splice_tail(&allocated, blocks);
*total_allocated_on_err = total_allocated;
} else {
- drm_buddy_free_list_internal(mm, &allocated);
+ gpu_buddy_free_list_internal(mm, &allocated);
}
return err;
}
-static int __drm_buddy_alloc_range(struct drm_buddy *mm,
+static int __gpu_buddy_alloc_range(struct gpu_buddy *mm,
u64 start,
u64 size,
u64 *total_allocated_on_err,
@@ -911,13 +901,13 @@ static int __drm_buddy_alloc_range(struct drm_buddy *mm,
blocks, total_allocated_on_err);
}
-static int __alloc_contig_try_harder(struct drm_buddy *mm,
+static int __alloc_contig_try_harder(struct gpu_buddy *mm,
u64 size,
u64 min_block_size,
struct list_head *blocks)
{
u64 rhs_offset, lhs_offset, lhs_size, filled;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
unsigned int tree, order;
LIST_HEAD(blocks_lhs);
unsigned long pages;
@@ -943,8 +933,8 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
block = rbtree_get_free_block(iter);
/* Allocate blocks traversing RHS */
- rhs_offset = drm_buddy_block_offset(block);
- err = __drm_buddy_alloc_range(mm, rhs_offset, size,
+ rhs_offset = gpu_buddy_block_offset(block);
+ err = __gpu_buddy_alloc_range(mm, rhs_offset, size,
&filled, blocks);
if (!err || err != -ENOSPC)
return err;
@@ -954,18 +944,18 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
lhs_size = round_up(lhs_size, min_block_size);
/* Allocate blocks traversing LHS */
- lhs_offset = drm_buddy_block_offset(block) - lhs_size;
- err = __drm_buddy_alloc_range(mm, lhs_offset, lhs_size,
+ lhs_offset = gpu_buddy_block_offset(block) - lhs_size;
+ err = __gpu_buddy_alloc_range(mm, lhs_offset, lhs_size,
NULL, &blocks_lhs);
if (!err) {
list_splice(&blocks_lhs, blocks);
return 0;
} else if (err != -ENOSPC) {
- drm_buddy_free_list_internal(mm, blocks);
+ gpu_buddy_free_list_internal(mm, blocks);
return err;
}
/* Free blocks for the next iteration */
- drm_buddy_free_list_internal(mm, blocks);
+ gpu_buddy_free_list_internal(mm, blocks);
iter = rb_prev(iter);
}
@@ -975,9 +965,9 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
}
/**
- * drm_buddy_block_trim - free unused pages
+ * gpu_buddy_block_trim - free unused pages
*
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
* @start: start address to begin the trimming.
* @new_size: original size requested
* @blocks: Input and output list of allocated blocks.
@@ -993,13 +983,13 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
* Returns:
* 0 on success, error code on failure.
*/
-int drm_buddy_block_trim(struct drm_buddy *mm,
+int gpu_buddy_block_trim(struct gpu_buddy *mm,
u64 *start,
u64 new_size,
struct list_head *blocks)
{
- struct drm_buddy_block *parent;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *parent;
+ struct gpu_buddy_block *block;
u64 block_start, block_end;
LIST_HEAD(dfs);
u64 new_start;
@@ -1009,22 +999,22 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
return -EINVAL;
block = list_first_entry(blocks,
- struct drm_buddy_block,
+ struct gpu_buddy_block,
link);
- block_start = drm_buddy_block_offset(block);
- block_end = block_start + drm_buddy_block_size(mm, block);
+ block_start = gpu_buddy_block_offset(block);
+ block_end = block_start + gpu_buddy_block_size(mm, block);
- if (WARN_ON(!drm_buddy_block_is_allocated(block)))
+ if (WARN_ON(!gpu_buddy_block_is_allocated(block)))
return -EINVAL;
- if (new_size > drm_buddy_block_size(mm, block))
+ if (new_size > gpu_buddy_block_size(mm, block))
return -EINVAL;
if (!new_size || !IS_ALIGNED(new_size, mm->chunk_size))
return -EINVAL;
- if (new_size == drm_buddy_block_size(mm, block))
+ if (new_size == gpu_buddy_block_size(mm, block))
return 0;
new_start = block_start;
@@ -1043,9 +1033,9 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
list_del(&block->link);
mark_free(mm, block);
- mm->avail += drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail += drm_buddy_block_size(mm, block);
+ mm->avail += gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail += gpu_buddy_block_size(mm, block);
/* Prevent recursively freeing this node */
parent = block->parent;
@@ -1055,26 +1045,26 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL);
if (err) {
mark_allocated(mm, block);
- mm->avail -= drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
+ mm->avail -= gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
list_add(&block->link, blocks);
}
block->parent = parent;
return err;
}
-EXPORT_SYMBOL(drm_buddy_block_trim);
+EXPORT_SYMBOL(gpu_buddy_block_trim);
-static struct drm_buddy_block *
-__drm_buddy_alloc_blocks(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+__gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
u64 start, u64 end,
unsigned int order,
unsigned long flags)
{
- if (flags & DRM_BUDDY_RANGE_ALLOCATION)
+ if (flags & GPU_BUDDY_RANGE_ALLOCATION)
/* Allocate traversing within the range */
- return __drm_buddy_alloc_range_bias(mm, start, end,
+ return __gpu_buddy_alloc_range_bias(mm, start, end,
order, flags);
else
/* Allocate from freetree */
@@ -1082,15 +1072,15 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
}
/**
- * drm_buddy_alloc_blocks - allocate power-of-two blocks
+ * gpu_buddy_alloc_blocks - allocate power-of-two blocks
*
- * @mm: DRM buddy manager to allocate from
+ * @mm: GPU buddy manager to allocate from
* @start: start of the allowed range for this block
* @end: end of the allowed range for this block
* @size: size of the allocation in bytes
* @min_block_size: alignment of the allocation
* @blocks: output list head to add allocated blocks
- * @flags: DRM_BUDDY_*_ALLOCATION flags
+ * @flags: GPU_BUDDY_*_ALLOCATION flags
*
* alloc_range_bias() called on range limitations, which traverses
* the tree and returns the desired block.
@@ -1101,13 +1091,13 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
* Returns:
* 0 on success, error code on failure.
*/
-int drm_buddy_alloc_blocks(struct drm_buddy *mm,
+int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
u64 start, u64 end, u64 size,
u64 min_block_size,
struct list_head *blocks,
unsigned long flags)
{
- struct drm_buddy_block *block = NULL;
+ struct gpu_buddy_block *block = NULL;
u64 original_size, original_min_size;
unsigned int min_order, order;
LIST_HEAD(allocated);
@@ -1137,14 +1127,14 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
if (!IS_ALIGNED(start | end, min_block_size))
return -EINVAL;
- return __drm_buddy_alloc_range(mm, start, size, NULL, blocks);
+ return __gpu_buddy_alloc_range(mm, start, size, NULL, blocks);
}
original_size = size;
original_min_size = min_block_size;
/* Roundup the size to power of 2 */
- if (flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION) {
+ if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
size = roundup_pow_of_two(size);
min_block_size = size;
/* Align size value to min_block_size */
@@ -1157,8 +1147,8 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
min_order = ilog2(min_block_size) - ilog2(mm->chunk_size);
if (order > mm->max_order || size > mm->size) {
- if ((flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION) &&
- !(flags & DRM_BUDDY_RANGE_ALLOCATION))
+ if ((flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) &&
+ !(flags & GPU_BUDDY_RANGE_ALLOCATION))
return __alloc_contig_try_harder(mm, original_size,
original_min_size, blocks);
@@ -1171,7 +1161,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
BUG_ON(order < min_order);
do {
- block = __drm_buddy_alloc_blocks(mm, start,
+ block = __gpu_buddy_alloc_blocks(mm, start,
end,
order,
flags);
@@ -1182,7 +1172,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
/* Try allocation through force merge method */
if (mm->clear_avail &&
!__force_merge(mm, start, end, min_order)) {
- block = __drm_buddy_alloc_blocks(mm, start,
+ block = __gpu_buddy_alloc_blocks(mm, start,
end,
min_order,
flags);
@@ -1196,8 +1186,8 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
* Try contiguous block allocation through
* try harder method.
*/
- if (flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION &&
- !(flags & DRM_BUDDY_RANGE_ALLOCATION))
+ if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
+ !(flags & GPU_BUDDY_RANGE_ALLOCATION))
return __alloc_contig_try_harder(mm,
original_size,
original_min_size,
@@ -1208,9 +1198,9 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
} while (1);
mark_allocated(mm, block);
- mm->avail -= drm_buddy_block_size(mm, block);
- if (drm_buddy_block_is_clear(block))
- mm->clear_avail -= drm_buddy_block_size(mm, block);
+ mm->avail -= gpu_buddy_block_size(mm, block);
+ if (gpu_buddy_block_is_clear(block))
+ mm->clear_avail -= gpu_buddy_block_size(mm, block);
kmemleak_update_trace(block);
list_add_tail(&block->link, &allocated);
@@ -1221,7 +1211,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
} while (1);
/* Trim the allocated block to the required size */
- if (!(flags & DRM_BUDDY_TRIM_DISABLE) &&
+ if (!(flags & GPU_BUDDY_TRIM_DISABLE) &&
original_size != size) {
struct list_head *trim_list;
LIST_HEAD(temp);
@@ -1234,11 +1224,11 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
block = list_last_entry(&allocated, typeof(*block), link);
list_move(&block->link, &temp);
trim_list = &temp;
- trim_size = drm_buddy_block_size(mm, block) -
+ trim_size = gpu_buddy_block_size(mm, block) -
(size - original_size);
}
- drm_buddy_block_trim(mm,
+ gpu_buddy_block_trim(mm,
NULL,
trim_size,
trim_list);
@@ -1251,44 +1241,42 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
return 0;
err_free:
- drm_buddy_free_list_internal(mm, &allocated);
+ gpu_buddy_free_list_internal(mm, &allocated);
return err;
}
-EXPORT_SYMBOL(drm_buddy_alloc_blocks);
+EXPORT_SYMBOL(gpu_buddy_alloc_blocks);
/**
- * drm_buddy_block_print - print block information
+ * gpu_buddy_block_print - print block information
*
- * @mm: DRM buddy manager
- * @block: DRM buddy block
- * @p: DRM printer to use
+ * @mm: GPU buddy manager
+ * @block: GPU buddy block
*/
-void drm_buddy_block_print(struct drm_buddy *mm,
- struct drm_buddy_block *block,
- struct drm_printer *p)
+void gpu_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- u64 start = drm_buddy_block_offset(block);
- u64 size = drm_buddy_block_size(mm, block);
+ u64 start = gpu_buddy_block_offset(block);
+ u64 size = gpu_buddy_block_size(mm, block);
- drm_printf(p, "%#018llx-%#018llx: %llu\n", start, start + size, size);
+ pr_info("%#018llx-%#018llx: %llu\n", start, start + size, size);
}
-EXPORT_SYMBOL(drm_buddy_block_print);
+EXPORT_SYMBOL(gpu_buddy_block_print);
/**
- * drm_buddy_print - print allocator state
+ * gpu_buddy_print - print allocator state
*
- * @mm: DRM buddy manager
- * @p: DRM printer to use
+ * @mm: GPU buddy manager
*/
-void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
+void gpu_buddy_print(struct gpu_buddy *mm)
{
int order;
- drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
- mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
+ pr_info("chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
+ mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
for (order = mm->max_order; order >= 0; order--) {
- struct drm_buddy_block *block, *tmp;
+ struct gpu_buddy_block *block, *tmp;
struct rb_root *root;
u64 count = 0, free;
unsigned int tree;
@@ -1297,40 +1285,38 @@ void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
root = &mm->free_trees[tree][order];
rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
- BUG_ON(!drm_buddy_block_is_free(block));
+ BUG_ON(!gpu_buddy_block_is_free(block));
count++;
}
}
- drm_printf(p, "order-%2d ", order);
-
free = count * (mm->chunk_size << order);
if (free < SZ_1M)
- drm_printf(p, "free: %8llu KiB", free >> 10);
+ pr_info("order-%2d free: %8llu KiB, blocks: %llu\n",
+ order, free >> 10, count);
else
- drm_printf(p, "free: %8llu MiB", free >> 20);
-
- drm_printf(p, ", blocks: %llu\n", count);
+ pr_info("order-%2d free: %8llu MiB, blocks: %llu\n",
+ order, free >> 20, count);
}
}
-EXPORT_SYMBOL(drm_buddy_print);
+EXPORT_SYMBOL(gpu_buddy_print);
-static void drm_buddy_module_exit(void)
+static void gpu_buddy_module_exit(void)
{
kmem_cache_destroy(slab_blocks);
}
-static int __init drm_buddy_module_init(void)
+static int __init gpu_buddy_module_init(void)
{
- slab_blocks = KMEM_CACHE(drm_buddy_block, 0);
+ slab_blocks = KMEM_CACHE(gpu_buddy_block, 0);
if (!slab_blocks)
return -ENOMEM;
return 0;
}
-module_init(drm_buddy_module_init);
-module_exit(drm_buddy_module_exit);
+module_init(gpu_buddy_module_init);
+module_exit(gpu_buddy_module_exit);
-MODULE_DESCRIPTION("DRM Buddy Allocator");
+MODULE_DESCRIPTION("GPU Buddy Allocator");
MODULE_LICENSE("Dual MIT/GPL");
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index ca2a2801e77f..f48d00fe28cc 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -220,6 +220,7 @@ config DRM_GPUSVM
config DRM_BUDDY
tristate
depends on DRM
+ select GPU_BUDDY
help
A page based buddy allocator
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 5c86bc908955..6ff0f6f10b58 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -113,7 +113,7 @@ drm_gpusvm_helper-$(CONFIG_ZONE_DEVICE) += \
obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
-obj-$(CONFIG_DRM_BUDDY) += ../buddy.o
+obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
drm_dma_helper-y := drm_gem_dma_helper.o
drm_dma_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fbdev_dma.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index f582113d78b7..149f8f942eae 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -5663,7 +5663,7 @@ int amdgpu_ras_add_critical_region(struct amdgpu_device *adev,
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct amdgpu_vram_mgr_resource *vres;
struct ras_critical_region *region;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
int ret = 0;
if (!bo || !bo->tbo.resource)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
index be2e56ce1355..8908d9e08a30 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
@@ -55,7 +55,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
uint64_t start, uint64_t size,
struct amdgpu_res_cursor *cur)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct list_head *head, *next;
struct drm_mm_node *node;
@@ -71,7 +71,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
head = &to_amdgpu_vram_mgr_resource(res)->blocks;
block = list_first_entry_or_null(head,
- struct drm_buddy_block,
+ struct gpu_buddy_block,
link);
if (!block)
goto fallback;
@@ -81,7 +81,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
next = block->link.next;
if (next != head)
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
}
cur->start = amdgpu_vram_mgr_block_start(block) + start;
@@ -125,7 +125,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
*/
static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct drm_mm_node *node;
struct list_head *next;
@@ -146,7 +146,7 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
block = cur->node;
next = block->link.next;
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
cur->node = block;
cur->start = amdgpu_vram_mgr_block_start(block);
@@ -175,7 +175,7 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
*/
static inline bool amdgpu_res_cleared(struct amdgpu_res_cursor *cur)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
switch (cur->mem_type) {
case TTM_PL_VRAM:
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 9d934c07fa6b..cd94f6efb7cb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -25,6 +25,7 @@
#include <linux/dma-mapping.h>
#include <drm/ttm/ttm_range_manager.h>
#include <drm/drm_drv.h>
+#include <drm/drm_buddy.h>
#include "amdgpu.h"
#include "amdgpu_vm.h"
@@ -52,15 +53,15 @@ to_amdgpu_device(struct amdgpu_vram_mgr *mgr)
return container_of(mgr, struct amdgpu_device, mman.vram_mgr);
}
-static inline struct drm_buddy_block *
+static inline struct gpu_buddy_block *
amdgpu_vram_mgr_first_block(struct list_head *list)
{
- return list_first_entry_or_null(list, struct drm_buddy_block, link);
+ return list_first_entry_or_null(list, struct gpu_buddy_block, link);
}
static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 start, size;
block = amdgpu_vram_mgr_first_block(head);
@@ -71,7 +72,7 @@ static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
start = amdgpu_vram_mgr_block_start(block);
size = amdgpu_vram_mgr_block_size(block);
- block = list_entry(block->link.next, struct drm_buddy_block, link);
+ block = list_entry(block->link.next, struct gpu_buddy_block, link);
if (start + size != amdgpu_vram_mgr_block_start(block))
return false;
}
@@ -81,7 +82,7 @@ static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
static inline u64 amdgpu_vram_mgr_blocks_size(struct list_head *head)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 size = 0;
list_for_each_entry(block, head, link)
@@ -254,7 +255,7 @@ const struct attribute_group amdgpu_vram_mgr_attr_group = {
* Calculate how many bytes of the DRM BUDDY block are inside visible VRAM
*/
static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
- struct drm_buddy_block *block)
+ struct gpu_buddy_block *block)
{
u64 start = amdgpu_vram_mgr_block_start(block);
u64 end = start + amdgpu_vram_mgr_block_size(block);
@@ -279,7 +280,7 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct ttm_resource *res = bo->tbo.resource;
struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 usage = 0;
if (amdgpu_gmc_vram_full_visible(&adev->gmc))
@@ -299,15 +300,15 @@ static void amdgpu_vram_mgr_do_reserve(struct ttm_resource_manager *man)
{
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct amdgpu_device *adev = to_amdgpu_device(mgr);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
struct amdgpu_vram_reservation *rsv, *temp;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
uint64_t vis_usage;
list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, blocks) {
- if (drm_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
+ if (gpu_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
rsv->size, mm->chunk_size, &rsv->allocated,
- DRM_BUDDY_RANGE_ALLOCATION))
+ GPU_BUDDY_RANGE_ALLOCATION))
continue;
block = amdgpu_vram_mgr_first_block(&rsv->allocated);
@@ -403,7 +404,7 @@ int amdgpu_vram_mgr_query_address_block_info(struct amdgpu_vram_mgr *mgr,
uint64_t address, struct amdgpu_vram_block_info *info)
{
struct amdgpu_vram_mgr_resource *vres;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 start, size;
int ret = -ENOENT;
@@ -450,8 +451,8 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
struct amdgpu_vram_mgr_resource *vres;
u64 size, remaining_size, lpfn, fpfn;
unsigned int adjust_dcc_size = 0;
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
unsigned long pages_per_block;
int r;
@@ -493,17 +494,17 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
INIT_LIST_HEAD(&vres->blocks);
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ vres->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)
- vres->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ vres->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION;
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CLEARED)
- vres->flags |= DRM_BUDDY_CLEAR_ALLOCATION;
+ vres->flags |= GPU_BUDDY_CLEAR_ALLOCATION;
if (fpfn || lpfn != mgr->mm.size)
/* Allocate blocks in desired range */
- vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ vres->flags |= GPU_BUDDY_RANGE_ALLOCATION;
if (bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC &&
adev->gmc.gmc_funcs->get_dcc_alignment)
@@ -516,7 +517,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
dcc_size = roundup_pow_of_two(vres->base.size + adjust_dcc_size);
remaining_size = (u64)dcc_size;
- vres->flags |= DRM_BUDDY_TRIM_DISABLE;
+ vres->flags |= GPU_BUDDY_TRIM_DISABLE;
}
mutex_lock(&mgr->lock);
@@ -536,7 +537,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
BUG_ON(min_block_size < mm->chunk_size);
- r = drm_buddy_alloc_blocks(mm, fpfn,
+ r = gpu_buddy_alloc_blocks(mm, fpfn,
lpfn,
size,
min_block_size,
@@ -545,7 +546,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
if (unlikely(r == -ENOSPC) && pages_per_block == ~0ul &&
!(place->flags & TTM_PL_FLAG_CONTIGUOUS)) {
- vres->flags &= ~DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ vres->flags &= ~GPU_BUDDY_CONTIGUOUS_ALLOCATION;
pages_per_block = max_t(u32, 2UL << (20UL - PAGE_SHIFT),
tbo->page_alignment);
@@ -566,7 +567,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
list_add_tail(&vres->vres_node, &mgr->allocated_vres_list);
if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
- struct drm_buddy_block *dcc_block;
+ struct gpu_buddy_block *dcc_block;
unsigned long dcc_start;
u64 trim_start;
@@ -576,7 +577,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
roundup((unsigned long)amdgpu_vram_mgr_block_start(dcc_block),
adjust_dcc_size);
trim_start = (u64)dcc_start;
- drm_buddy_block_trim(mm, &trim_start,
+ gpu_buddy_block_trim(mm, &trim_start,
(u64)vres->base.size,
&vres->blocks);
}
@@ -614,7 +615,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
return 0;
error_free_blocks:
- drm_buddy_free_list(mm, &vres->blocks, 0);
+ gpu_buddy_free_list(mm, &vres->blocks, 0);
mutex_unlock(&mgr->lock);
error_fini:
ttm_resource_fini(man, &vres->base);
@@ -637,8 +638,8 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res);
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct amdgpu_device *adev = to_amdgpu_device(mgr);
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
uint64_t vis_usage = 0;
mutex_lock(&mgr->lock);
@@ -649,7 +650,7 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
list_for_each_entry(block, &vres->blocks, link)
vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
- drm_buddy_free_list(mm, &vres->blocks, vres->flags);
+ gpu_buddy_free_list(mm, &vres->blocks, vres->flags);
amdgpu_vram_mgr_do_reserve(man);
mutex_unlock(&mgr->lock);
@@ -688,7 +689,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
if (!*sgt)
return -ENOMEM;
- /* Determine the number of DRM_BUDDY blocks to export */
+ /* Determine the number of GPU_BUDDY blocks to export */
amdgpu_res_first(res, offset, length, &cursor);
while (cursor.remaining) {
num_entries++;
@@ -704,10 +705,10 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
sg->length = 0;
/*
- * Walk down DRM_BUDDY blocks to populate scatterlist nodes
- * @note: Use iterator api to get first the DRM_BUDDY block
+ * Walk down GPU_BUDDY blocks to populate scatterlist nodes
+ * @note: Use the iterator API to first get the GPU_BUDDY block
* and the number of bytes from it. Access the following
- * DRM_BUDDY block(s) if more buffer needs to exported
+ * GPU_BUDDY block(s) if more of the buffer needs to be exported
*/
amdgpu_res_first(res, offset, length, &cursor);
for_each_sgtable_sg((*sgt), sg, i) {
@@ -792,10 +793,10 @@ uint64_t amdgpu_vram_mgr_vis_usage(struct amdgpu_vram_mgr *mgr)
void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev)
{
struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
mutex_lock(&mgr->lock);
- drm_buddy_reset_clear(mm, false);
+ gpu_buddy_reset_clear(mm, false);
mutex_unlock(&mgr->lock);
}
@@ -815,7 +816,7 @@ static bool amdgpu_vram_mgr_intersects(struct ttm_resource_manager *man,
size_t size)
{
struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
/* Check each drm buddy block individually */
list_for_each_entry(block, &mgr->blocks, link) {
@@ -848,7 +849,7 @@ static bool amdgpu_vram_mgr_compatible(struct ttm_resource_manager *man,
size_t size)
{
struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
/* Check each drm buddy block individually */
list_for_each_entry(block, &mgr->blocks, link) {
@@ -877,7 +878,7 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
struct amdgpu_vram_reservation *rsv;
drm_printf(printer, " vis usage:%llu\n",
@@ -930,7 +931,7 @@ int amdgpu_vram_mgr_init(struct amdgpu_device *adev)
mgr->default_page_size = PAGE_SIZE;
man->func = &amdgpu_vram_mgr_func;
- err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
+ err = gpu_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
if (err)
return err;
@@ -965,11 +966,11 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
kfree(rsv);
list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, blocks) {
- drm_buddy_free_list(&mgr->mm, &rsv->allocated, 0);
+ gpu_buddy_free_list(&mgr->mm, &rsv->allocated, 0);
kfree(rsv);
}
if (!adev->gmc.is_app_apu)
- drm_buddy_fini(&mgr->mm);
+ gpu_buddy_fini(&mgr->mm);
mutex_unlock(&mgr->lock);
ttm_resource_manager_cleanup(man);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
index 874779618056..429a21a2e9b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
@@ -28,7 +28,7 @@
struct amdgpu_vram_mgr {
struct ttm_resource_manager manager;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
/* protects access to buffer objects */
struct mutex lock;
struct list_head reservations_pending;
@@ -57,19 +57,19 @@ struct amdgpu_vram_mgr_resource {
struct amdgpu_vres_task task;
};
-static inline u64 amdgpu_vram_mgr_block_start(struct drm_buddy_block *block)
+static inline u64 amdgpu_vram_mgr_block_start(struct gpu_buddy_block *block)
{
- return drm_buddy_block_offset(block);
+ return gpu_buddy_block_offset(block);
}
-static inline u64 amdgpu_vram_mgr_block_size(struct drm_buddy_block *block)
+static inline u64 amdgpu_vram_mgr_block_size(struct gpu_buddy_block *block)
{
- return (u64)PAGE_SIZE << drm_buddy_block_order(block);
+ return (u64)PAGE_SIZE << gpu_buddy_block_order(block);
}
-static inline bool amdgpu_vram_mgr_is_cleared(struct drm_buddy_block *block)
+static inline bool amdgpu_vram_mgr_is_cleared(struct gpu_buddy_block *block)
{
- return drm_buddy_block_is_clear(block);
+ return gpu_buddy_block_is_clear(block);
}
static inline struct amdgpu_vram_mgr_resource *
@@ -82,8 +82,8 @@ static inline void amdgpu_vram_mgr_set_cleared(struct ttm_resource *res)
{
struct amdgpu_vram_mgr_resource *ares = to_amdgpu_vram_mgr_resource(res);
- WARN_ON(ares->flags & DRM_BUDDY_CLEARED);
- ares->flags |= DRM_BUDDY_CLEARED;
+ WARN_ON(ares->flags & GPU_BUDDY_CLEARED);
+ ares->flags |= GPU_BUDDY_CLEARED;
}
int amdgpu_vram_mgr_query_address_block_info(struct amdgpu_vram_mgr *mgr,
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
new file mode 100644
index 000000000000..841f3de5f307
--- /dev/null
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <kunit/test-bug.h>
+
+#include <linux/export.h>
+#include <linux/kmemleak.h>
+#include <linux/module.h>
+#include <linux/sizes.h>
+
+#include <linux/gpu_buddy.h>
+#include <drm/drm_buddy.h>
+#include <drm/drm_print.h>
+
+/**
+ * drm_buddy_block_print - print block information
+ *
+ * @mm: DRM buddy manager
+ * @block: DRM buddy block
+ * @p: DRM printer to use
+ */
+void drm_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
+ struct drm_printer *p)
+{
+ u64 start = gpu_buddy_block_offset(block);
+ u64 size = gpu_buddy_block_size(mm, block);
+
+ drm_printf(p, "%#018llx-%#018llx: %llu\n", start, start + size, size);
+}
+EXPORT_SYMBOL(drm_buddy_block_print);
+
+/**
+ * drm_buddy_print - print allocator state
+ *
+ * @mm: GPU buddy manager
+ * @p: DRM printer to use
+ */
+void drm_buddy_print(struct gpu_buddy *mm, struct drm_printer *p)
+{
+ int order;
+
+ drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
+ mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
+
+ for (order = mm->max_order; order >= 0; order--) {
+ struct gpu_buddy_block *block, *tmp;
+ struct rb_root *root;
+ u64 count = 0, free;
+ unsigned int tree;
+
+ for_each_free_tree(tree) {
+ root = &mm->free_trees[tree][order];
+
+ rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
+ BUG_ON(!gpu_buddy_block_is_free(block));
+ count++;
+ }
+ }
+
+ drm_printf(p, "order-%2d ", order);
+
+ free = count * (mm->chunk_size << order);
+ if (free < SZ_1M)
+ drm_printf(p, "free: %8llu KiB", free >> 10);
+ else
+ drm_printf(p, "free: %8llu MiB", free >> 20);
+
+ drm_printf(p, ", blocks: %llu\n", count);
+ }
+}
+EXPORT_SYMBOL(drm_buddy_print);
+
+MODULE_DESCRIPTION("DRM-specific GPU Buddy Allocator Print Helpers");
+MODULE_LICENSE("Dual MIT/GPL");
diff --git a/drivers/gpu/drm/i915/i915_scatterlist.c b/drivers/gpu/drm/i915/i915_scatterlist.c
index 30246f02bcfe..6a34dae13769 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.c
+++ b/drivers/gpu/drm/i915/i915_scatterlist.c
@@ -167,9 +167,9 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
const u64 size = res->size;
const u32 max_segment = round_down(UINT_MAX, page_alignment);
- struct drm_buddy *mm = bman_res->mm;
+ struct gpu_buddy *mm = bman_res->mm;
struct list_head *blocks = &bman_res->blocks;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct i915_refct_sgt *rsgt;
struct scatterlist *sg;
struct sg_table *st;
@@ -202,8 +202,8 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
list_for_each_entry(block, blocks, link) {
u64 block_size, offset;
- block_size = min_t(u64, size, drm_buddy_block_size(mm, block));
- offset = drm_buddy_block_offset(block);
+ block_size = min_t(u64, size, gpu_buddy_block_size(mm, block));
+ offset = gpu_buddy_block_offset(block);
while (block_size) {
u64 len;
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
index 6b256d95badd..c5ca90088705 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
@@ -6,6 +6,7 @@
#include <linux/slab.h>
#include <linux/gpu_buddy.h>
+#include <drm/drm_buddy.h>
#include <drm/drm_print.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_bo.h>
@@ -16,7 +17,7 @@
struct i915_ttm_buddy_manager {
struct ttm_resource_manager manager;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
struct list_head reserved;
struct mutex lock;
unsigned long visible_size;
@@ -38,7 +39,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
{
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
struct i915_ttm_buddy_resource *bman_res;
- struct drm_buddy *mm = &bman->mm;
+ struct gpu_buddy *mm = &bman->mm;
unsigned long n_pages, lpfn;
u64 min_page_size;
u64 size;
@@ -57,13 +58,13 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
bman_res->mm = mm;
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- bman_res->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ bman_res->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (place->flags & TTM_PL_FLAG_CONTIGUOUS)
- bman_res->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ bman_res->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION;
if (place->fpfn || lpfn != man->size)
- bman_res->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ bman_res->flags |= GPU_BUDDY_RANGE_ALLOCATION;
GEM_BUG_ON(!bman_res->base.size);
size = bman_res->base.size;
@@ -89,7 +90,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
goto err_free_res;
}
- err = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
+ err = gpu_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
(u64)lpfn << PAGE_SHIFT,
(u64)n_pages << PAGE_SHIFT,
min_page_size,
@@ -101,15 +102,15 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
if (lpfn <= bman->visible_size) {
bman_res->used_visible_size = PFN_UP(bman_res->base.size);
} else {
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long start =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
if (start < bman->visible_size) {
unsigned long end = start +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
bman_res->used_visible_size +=
min(end, bman->visible_size) - start;
@@ -126,7 +127,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
return 0;
err_free_blocks:
- drm_buddy_free_list(mm, &bman_res->blocks, 0);
+ gpu_buddy_free_list(mm, &bman_res->blocks, 0);
mutex_unlock(&bman->lock);
err_free_res:
ttm_resource_fini(man, &bman_res->base);
@@ -141,7 +142,7 @@ static void i915_ttm_buddy_man_free(struct ttm_resource_manager *man,
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
mutex_lock(&bman->lock);
- drm_buddy_free_list(&bman->mm, &bman_res->blocks, 0);
+ gpu_buddy_free_list(&bman->mm, &bman_res->blocks, 0);
bman->visible_avail += bman_res->used_visible_size;
mutex_unlock(&bman->lock);
@@ -156,8 +157,8 @@ static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man,
{
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &bman->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -176,9 +177,9 @@ static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man,
/* Check each drm buddy block individually */
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (place->fpfn < lpfn && place->lpfn > fpfn)
return true;
@@ -194,8 +195,8 @@ static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man,
{
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &bman->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -209,9 +210,9 @@ static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man,
/* Check each drm buddy block individually */
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (fpfn < place->fpfn || lpfn > place->lpfn)
return false;
@@ -224,7 +225,7 @@ static void i915_ttm_buddy_man_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
mutex_lock(&bman->lock);
drm_printf(printer, "default_page_size: %lluKiB\n",
@@ -293,7 +294,7 @@ int i915_ttm_buddy_man_init(struct ttm_device *bdev,
if (!bman)
return -ENOMEM;
- err = drm_buddy_init(&bman->mm, size, chunk_size);
+ err = gpu_buddy_init(&bman->mm, size, chunk_size);
if (err)
goto err_free_bman;
@@ -333,7 +334,7 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev, unsigned int type)
{
struct ttm_resource_manager *man = ttm_manager_type(bdev, type);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
+ struct gpu_buddy *mm = &bman->mm;
int ret;
ttm_resource_manager_set_used(man, false);
@@ -345,8 +346,8 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev, unsigned int type)
ttm_set_driver_manager(bdev, type, NULL);
mutex_lock(&bman->lock);
- drm_buddy_free_list(mm, &bman->reserved, 0);
- drm_buddy_fini(mm);
+ gpu_buddy_free_list(mm, &bman->reserved, 0);
+ gpu_buddy_fini(mm);
bman->visible_avail += bman->visible_reserved;
WARN_ON_ONCE(bman->visible_avail != bman->visible_size);
mutex_unlock(&bman->lock);
@@ -371,15 +372,15 @@ int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man,
u64 start, u64 size)
{
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
- struct drm_buddy *mm = &bman->mm;
+ struct gpu_buddy *mm = &bman->mm;
unsigned long fpfn = start >> PAGE_SHIFT;
unsigned long flags = 0;
int ret;
- flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ flags |= GPU_BUDDY_RANGE_ALLOCATION;
mutex_lock(&bman->lock);
- ret = drm_buddy_alloc_blocks(mm, start,
+ ret = gpu_buddy_alloc_blocks(mm, start,
start + size,
size, mm->chunk_size,
&bman->reserved,
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
index d64620712830..1cff018c1689 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
@@ -13,7 +13,7 @@
struct ttm_device;
struct ttm_resource_manager;
-struct drm_buddy;
+struct gpu_buddy;
/**
* struct i915_ttm_buddy_resource
@@ -33,7 +33,7 @@ struct i915_ttm_buddy_resource {
struct list_head blocks;
unsigned long flags;
unsigned long used_visible_size;
- struct drm_buddy *mm;
+ struct gpu_buddy *mm;
};
/**
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 7b856b5090f9..8307390943a2 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -6,7 +6,7 @@
#include <linux/prime_numbers.h>
#include <linux/sort.h>
-#include <drm/drm_buddy.h>
+#include <linux/gpu_buddy.h>
#include "../i915_selftest.h"
@@ -371,7 +371,7 @@ static int igt_mock_splintered_region(void *arg)
struct drm_i915_private *i915 = mem->i915;
struct i915_ttm_buddy_resource *res;
struct drm_i915_gem_object *obj;
- struct drm_buddy *mm;
+ struct gpu_buddy *mm;
unsigned int expected_order;
LIST_HEAD(objects);
u64 size;
@@ -447,8 +447,8 @@ static int igt_mock_max_segment(void *arg)
struct drm_i915_private *i915 = mem->i915;
struct i915_ttm_buddy_resource *res;
struct drm_i915_gem_object *obj;
- struct drm_buddy_block *block;
- struct drm_buddy *mm;
+ struct gpu_buddy_block *block;
+ struct gpu_buddy *mm;
struct list_head *blocks;
struct scatterlist *sg;
I915_RND_STATE(prng);
@@ -487,8 +487,8 @@ static int igt_mock_max_segment(void *arg)
mm = res->mm;
size = 0;
list_for_each_entry(block, blocks, link) {
- if (drm_buddy_block_size(mm, block) > size)
- size = drm_buddy_block_size(mm, block);
+ if (gpu_buddy_block_size(mm, block) > size)
+ size = gpu_buddy_block_size(mm, block);
}
if (size < max_segment) {
pr_err("%s: Failed to create a huge contiguous block [> %u], largest block %lld\n",
@@ -527,14 +527,14 @@ static u64 igt_object_mappable_total(struct drm_i915_gem_object *obj)
struct intel_memory_region *mr = obj->mm.region;
struct i915_ttm_buddy_resource *bman_res =
to_ttm_buddy_resource(obj->mm.res);
- struct drm_buddy *mm = bman_res->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = bman_res->mm;
+ struct gpu_buddy_block *block;
u64 total;
total = 0;
list_for_each_entry(block, &bman_res->blocks, link) {
- u64 start = drm_buddy_block_offset(block);
- u64 end = start + drm_buddy_block_size(mm, block);
+ u64 start = gpu_buddy_block_offset(block);
+ u64 end = start + gpu_buddy_block_size(mm, block);
if (start < resource_size(&mr->io))
total += min_t(u64, end, resource_size(&mr->io)) - start;
diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
index 6d95447a989d..e32f3c8d7b84 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
@@ -251,7 +251,7 @@ static void ttm_bo_validate_basic(struct kunit *test)
NULL, &dummy_ttm_bo_destroy);
KUNIT_EXPECT_EQ(test, err, 0);
- snd_place = ttm_place_kunit_init(test, snd_mem, DRM_BUDDY_TOPDOWN_ALLOCATION);
+ snd_place = ttm_place_kunit_init(test, snd_mem, GPU_BUDDY_TOPDOWN_ALLOCATION);
snd_placement = ttm_placement_kunit_init(test, snd_place, 1);
err = ttm_bo_validate(bo, snd_placement, &ctx_val);
@@ -263,7 +263,7 @@ static void ttm_bo_validate_basic(struct kunit *test)
KUNIT_EXPECT_TRUE(test, ttm_tt_is_populated(bo->ttm));
KUNIT_EXPECT_EQ(test, bo->resource->mem_type, snd_mem);
KUNIT_EXPECT_EQ(test, bo->resource->placement,
- DRM_BUDDY_TOPDOWN_ALLOCATION);
+ GPU_BUDDY_TOPDOWN_ALLOCATION);
ttm_bo_fini(bo);
ttm_mock_manager_fini(priv->ttm_dev, snd_mem);
diff --git a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c
index dd395229e388..294d56d9067e 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c
@@ -31,7 +31,7 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man,
{
struct ttm_mock_manager *manager = to_mock_mgr(man);
struct ttm_mock_resource *mock_res;
- struct drm_buddy *mm = &manager->mm;
+ struct gpu_buddy *mm = &manager->mm;
u64 lpfn, fpfn, alloc_size;
int err;
@@ -47,14 +47,14 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man,
INIT_LIST_HEAD(&mock_res->blocks);
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- mock_res->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ mock_res->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (place->flags & TTM_PL_FLAG_CONTIGUOUS)
- mock_res->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+ mock_res->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION;
alloc_size = (uint64_t)mock_res->base.size;
mutex_lock(&manager->lock);
- err = drm_buddy_alloc_blocks(mm, fpfn, lpfn, alloc_size,
+ err = gpu_buddy_alloc_blocks(mm, fpfn, lpfn, alloc_size,
manager->default_page_size,
&mock_res->blocks,
mock_res->flags);
@@ -67,7 +67,7 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man,
return 0;
error_free_blocks:
- drm_buddy_free_list(mm, &mock_res->blocks, 0);
+ gpu_buddy_free_list(mm, &mock_res->blocks, 0);
ttm_resource_fini(man, &mock_res->base);
mutex_unlock(&manager->lock);
@@ -79,10 +79,10 @@ static void ttm_mock_manager_free(struct ttm_resource_manager *man,
{
struct ttm_mock_manager *manager = to_mock_mgr(man);
struct ttm_mock_resource *mock_res = to_mock_mgr_resource(res);
- struct drm_buddy *mm = &manager->mm;
+ struct gpu_buddy *mm = &manager->mm;
mutex_lock(&manager->lock);
- drm_buddy_free_list(mm, &mock_res->blocks, 0);
+ gpu_buddy_free_list(mm, &mock_res->blocks, 0);
mutex_unlock(&manager->lock);
ttm_resource_fini(man, res);
@@ -106,7 +106,7 @@ int ttm_mock_manager_init(struct ttm_device *bdev, u32 mem_type, u32 size)
mutex_init(&manager->lock);
- err = drm_buddy_init(&manager->mm, size, PAGE_SIZE);
+ err = gpu_buddy_init(&manager->mm, size, PAGE_SIZE);
if (err) {
kfree(manager);
@@ -142,7 +142,7 @@ void ttm_mock_manager_fini(struct ttm_device *bdev, u32 mem_type)
ttm_resource_manager_set_used(man, false);
mutex_lock(&mock_man->lock);
- drm_buddy_fini(&mock_man->mm);
+ gpu_buddy_fini(&mock_man->mm);
mutex_unlock(&mock_man->lock);
ttm_set_driver_manager(bdev, mem_type, NULL);
diff --git a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
index 96ea8c9aae34..08710756fd8e 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
+++ b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h
@@ -9,7 +9,7 @@
struct ttm_mock_manager {
struct ttm_resource_manager man;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
u64 default_page_size;
/* protects allocations of mock buffer objects */
struct mutex lock;
diff --git a/drivers/gpu/drm/xe/xe_res_cursor.h b/drivers/gpu/drm/xe/xe_res_cursor.h
index 4e00008b7081..5f4ab08c0686 100644
--- a/drivers/gpu/drm/xe/xe_res_cursor.h
+++ b/drivers/gpu/drm/xe/xe_res_cursor.h
@@ -58,7 +58,7 @@ struct xe_res_cursor {
/** @dma_addr: Current element in a struct drm_pagemap_addr array */
const struct drm_pagemap_addr *dma_addr;
/** @mm: Buddy allocator for VRAM cursor */
- struct drm_buddy *mm;
+ struct gpu_buddy *mm;
/**
* @dma_start: DMA start address for the current segment.
* This may be different to @dma_addr.addr since elements in
@@ -69,7 +69,7 @@ struct xe_res_cursor {
u64 dma_seg_size;
};
-static struct drm_buddy *xe_res_get_buddy(struct ttm_resource *res)
+static struct gpu_buddy *xe_res_get_buddy(struct ttm_resource *res)
{
struct ttm_resource_manager *mgr;
@@ -104,30 +104,30 @@ static inline void xe_res_first(struct ttm_resource *res,
case XE_PL_STOLEN:
case XE_PL_VRAM0:
case XE_PL_VRAM1: {
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct list_head *head, *next;
- struct drm_buddy *mm = xe_res_get_buddy(res);
+ struct gpu_buddy *mm = xe_res_get_buddy(res);
head = &to_xe_ttm_vram_mgr_resource(res)->blocks;
block = list_first_entry_or_null(head,
- struct drm_buddy_block,
+ struct gpu_buddy_block,
link);
if (!block)
goto fallback;
- while (start >= drm_buddy_block_size(mm, block)) {
- start -= drm_buddy_block_size(mm, block);
+ while (start >= gpu_buddy_block_size(mm, block)) {
+ start -= gpu_buddy_block_size(mm, block);
next = block->link.next;
if (next != head)
- block = list_entry(next, struct drm_buddy_block,
+ block = list_entry(next, struct gpu_buddy_block,
link);
}
cur->mm = mm;
- cur->start = drm_buddy_block_offset(block) + start;
- cur->size = min(drm_buddy_block_size(mm, block) - start,
+ cur->start = gpu_buddy_block_offset(block) + start;
+ cur->size = min(gpu_buddy_block_size(mm, block) - start,
size);
cur->remaining = size;
cur->node = block;
@@ -259,7 +259,7 @@ static inline void xe_res_first_dma(const struct drm_pagemap_addr *dma_addr,
*/
static inline void xe_res_next(struct xe_res_cursor *cur, u64 size)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct list_head *next;
u64 start;
@@ -295,18 +295,18 @@ static inline void xe_res_next(struct xe_res_cursor *cur, u64 size)
block = cur->node;
next = block->link.next;
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
- while (start >= drm_buddy_block_size(cur->mm, block)) {
- start -= drm_buddy_block_size(cur->mm, block);
+ while (start >= gpu_buddy_block_size(cur->mm, block)) {
+ start -= gpu_buddy_block_size(cur->mm, block);
next = block->link.next;
- block = list_entry(next, struct drm_buddy_block, link);
+ block = list_entry(next, struct gpu_buddy_block, link);
}
- cur->start = drm_buddy_block_offset(block) + start;
- cur->size = min(drm_buddy_block_size(cur->mm, block) - start,
+ cur->start = gpu_buddy_block_offset(block) + start;
+ cur->size = min(gpu_buddy_block_size(cur->mm, block) - start,
cur->remaining);
cur->node = block;
break;
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 213f0334518a..cda3bf7e2418 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -747,7 +747,7 @@ static u64 block_offset_to_pfn(struct drm_pagemap *dpagemap, u64 offset)
return PHYS_PFN(offset + xpagemap->hpa_base);
}
-static struct drm_buddy *vram_to_buddy(struct xe_vram_region *vram)
+static struct gpu_buddy *vram_to_buddy(struct xe_vram_region *vram)
{
return &vram->ttm.mm;
}
@@ -758,17 +758,17 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati
struct xe_bo *bo = to_xe_bo(devmem_allocation);
struct ttm_resource *res = bo->ttm.resource;
struct list_head *blocks = &to_xe_ttm_vram_mgr_resource(res)->blocks;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
int j = 0;
list_for_each_entry(block, blocks, link) {
struct xe_vram_region *vr = block->private;
- struct drm_buddy *buddy = vram_to_buddy(vr);
+ struct gpu_buddy *buddy = vram_to_buddy(vr);
u64 block_pfn = block_offset_to_pfn(devmem_allocation->dpagemap,
- drm_buddy_block_offset(block));
+ gpu_buddy_block_offset(block));
int i;
- for (i = 0; i < drm_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i)
+ for (i = 0; i < gpu_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i)
pfn[j++] = block_pfn + i;
}
@@ -1033,7 +1033,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
struct dma_fence *pre_migrate_fence = NULL;
struct xe_device *xe = vr->xe;
struct device *dev = xe->drm.dev;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
struct xe_validation_ctx vctx;
struct list_head *blocks;
struct drm_exec exec;
diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
index 6553a19f7cf2..d119217d566a 100644
--- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
+++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
@@ -6,6 +6,7 @@
#include <drm/drm_managed.h>
#include <drm/drm_drv.h>
+#include <drm/drm_buddy.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_range_manager.h>
@@ -16,16 +17,16 @@
#include "xe_ttm_vram_mgr.h"
#include "xe_vram_types.h"
-static inline struct drm_buddy_block *
+static inline struct gpu_buddy_block *
xe_ttm_vram_mgr_first_block(struct list_head *list)
{
- return list_first_entry_or_null(list, struct drm_buddy_block, link);
+ return list_first_entry_or_null(list, struct gpu_buddy_block, link);
}
-static inline bool xe_is_vram_mgr_blocks_contiguous(struct drm_buddy *mm,
+static inline bool xe_is_vram_mgr_blocks_contiguous(struct gpu_buddy *mm,
struct list_head *head)
{
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
u64 start, size;
block = xe_ttm_vram_mgr_first_block(head);
@@ -33,12 +34,12 @@ static inline bool xe_is_vram_mgr_blocks_contiguous(struct drm_buddy *mm,
return false;
while (head != block->link.next) {
- start = drm_buddy_block_offset(block);
- size = drm_buddy_block_size(mm, block);
+ start = gpu_buddy_block_offset(block);
+ size = gpu_buddy_block_size(mm, block);
- block = list_entry(block->link.next, struct drm_buddy_block,
+ block = list_entry(block->link.next, struct gpu_buddy_block,
link);
- if (start + size != drm_buddy_block_offset(block))
+ if (start + size != gpu_buddy_block_offset(block))
return false;
}
@@ -52,7 +53,7 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
{
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
struct xe_ttm_vram_mgr_resource *vres;
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
u64 size, min_page_size;
unsigned long lpfn;
int err;
@@ -79,10 +80,10 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
INIT_LIST_HEAD(&vres->blocks);
if (place->flags & TTM_PL_FLAG_TOPDOWN)
- vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+ vres->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
if (place->fpfn || lpfn != man->size >> PAGE_SHIFT)
- vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+ vres->flags |= GPU_BUDDY_RANGE_ALLOCATION;
if (WARN_ON(!vres->base.size)) {
err = -EINVAL;
@@ -118,27 +119,27 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
lpfn = max_t(unsigned long, place->fpfn + (size >> PAGE_SHIFT), lpfn);
}
- err = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
+ err = gpu_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
(u64)lpfn << PAGE_SHIFT, size,
min_page_size, &vres->blocks, vres->flags);
if (err)
goto error_unlock;
if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
- if (!drm_buddy_block_trim(mm, NULL, vres->base.size, &vres->blocks))
+ if (!gpu_buddy_block_trim(mm, NULL, vres->base.size, &vres->blocks))
size = vres->base.size;
}
if (lpfn <= mgr->visible_size >> PAGE_SHIFT) {
vres->used_visible_size = size;
} else {
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
list_for_each_entry(block, &vres->blocks, link) {
- u64 start = drm_buddy_block_offset(block);
+ u64 start = gpu_buddy_block_offset(block);
if (start < mgr->visible_size) {
- u64 end = start + drm_buddy_block_size(mm, block);
+ u64 end = start + gpu_buddy_block_size(mm, block);
vres->used_visible_size +=
min(end, mgr->visible_size) - start;
@@ -158,11 +159,11 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
* the object.
*/
if (vres->base.placement & TTM_PL_FLAG_CONTIGUOUS) {
- struct drm_buddy_block *block = list_first_entry(&vres->blocks,
+ struct gpu_buddy_block *block = list_first_entry(&vres->blocks,
typeof(*block),
link);
- vres->base.start = drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ vres->base.start = gpu_buddy_block_offset(block) >> PAGE_SHIFT;
} else {
vres->base.start = XE_BO_INVALID_OFFSET;
}
@@ -184,10 +185,10 @@ static void xe_ttm_vram_mgr_del(struct ttm_resource_manager *man,
struct xe_ttm_vram_mgr_resource *vres =
to_xe_ttm_vram_mgr_resource(res);
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
mutex_lock(&mgr->lock);
- drm_buddy_free_list(mm, &vres->blocks, 0);
+ gpu_buddy_free_list(mm, &vres->blocks, 0);
mgr->visible_avail += vres->used_visible_size;
mutex_unlock(&mgr->lock);
@@ -200,7 +201,7 @@ static void xe_ttm_vram_mgr_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
- struct drm_buddy *mm = &mgr->mm;
+ struct gpu_buddy *mm = &mgr->mm;
mutex_lock(&mgr->lock);
drm_printf(printer, "default_page_size: %lluKiB\n",
@@ -223,8 +224,8 @@ static bool xe_ttm_vram_mgr_intersects(struct ttm_resource_manager *man,
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
struct xe_ttm_vram_mgr_resource *vres =
to_xe_ttm_vram_mgr_resource(res);
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -234,9 +235,9 @@ static bool xe_ttm_vram_mgr_intersects(struct ttm_resource_manager *man,
list_for_each_entry(block, &vres->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (place->fpfn < lpfn && place->lpfn > fpfn)
return true;
@@ -253,8 +254,8 @@ static bool xe_ttm_vram_mgr_compatible(struct ttm_resource_manager *man,
struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
struct xe_ttm_vram_mgr_resource *vres =
to_xe_ttm_vram_mgr_resource(res);
- struct drm_buddy *mm = &mgr->mm;
- struct drm_buddy_block *block;
+ struct gpu_buddy *mm = &mgr->mm;
+ struct gpu_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
@@ -264,9 +265,9 @@ static bool xe_ttm_vram_mgr_compatible(struct ttm_resource_manager *man,
list_for_each_entry(block, &vres->blocks, link) {
unsigned long fpfn =
- drm_buddy_block_offset(block) >> PAGE_SHIFT;
+ gpu_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
- (drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
+ (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (fpfn < place->fpfn || lpfn > place->lpfn)
return false;
@@ -296,7 +297,7 @@ static void xe_ttm_vram_mgr_fini(struct drm_device *dev, void *arg)
WARN_ON_ONCE(mgr->visible_avail != mgr->visible_size);
- drm_buddy_fini(&mgr->mm);
+ gpu_buddy_fini(&mgr->mm);
ttm_resource_manager_cleanup(&mgr->manager);
@@ -327,7 +328,7 @@ int __xe_ttm_vram_mgr_init(struct xe_device *xe, struct xe_ttm_vram_mgr *mgr,
mgr->visible_avail = io_size;
ttm_resource_manager_init(man, &xe->ttm, size);
- err = drm_buddy_init(&mgr->mm, man->size, default_page_size);
+ err = gpu_buddy_init(&mgr->mm, man->size, default_page_size);
if (err)
return err;
@@ -375,7 +376,7 @@ int xe_ttm_vram_mgr_alloc_sgt(struct xe_device *xe,
if (!*sgt)
return -ENOMEM;
- /* Determine the number of DRM_BUDDY blocks to export */
+ /* Determine the number of GPU_BUDDY blocks to export */
xe_res_first(res, offset, length, &cursor);
while (cursor.remaining) {
num_entries++;
@@ -392,10 +393,10 @@ int xe_ttm_vram_mgr_alloc_sgt(struct xe_device *xe,
sg->length = 0;
/*
- * Walk down DRM_BUDDY blocks to populate scatterlist nodes
- * @note: Use iterator api to get first the DRM_BUDDY block
+ * Walk down GPU_BUDDY blocks to populate scatterlist nodes
+ * @note: Use the iterator API to first get the GPU_BUDDY block
* and the number of bytes from it. Access the following
- * DRM_BUDDY block(s) if more buffer needs to exported
+ * GPU_BUDDY block(s) if more buffer needs to be exported
*/
xe_res_first(res, offset, length, &cursor);
for_each_sgtable_sg((*sgt), sg, i) {
diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
index babeec5511d9..9106da056b49 100644
--- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
+++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
@@ -18,7 +18,7 @@ struct xe_ttm_vram_mgr {
/** @manager: Base TTM resource manager */
struct ttm_resource_manager manager;
/** @mm: DRM buddy allocator which manages the VRAM */
- struct drm_buddy mm;
+ struct gpu_buddy mm;
/** @visible_size: Proped size of the CPU visible portion */
u64 visible_size;
/** @visible_avail: CPU visible portion still unallocated */
diff --git a/drivers/gpu/tests/Makefile b/drivers/gpu/tests/Makefile
index 8e7654e87d82..4183e6e2de45 100644
--- a/drivers/gpu/tests/Makefile
+++ b/drivers/gpu/tests/Makefile
@@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
gpu_buddy_tests-y = gpu_buddy_test.o gpu_random.o
-obj-$(CONFIG_DRM_KUNIT_TEST) += gpu_buddy_tests.o
+obj-$(CONFIG_GPU_BUDDY_KUNIT_TEST) += gpu_buddy_tests.o
diff --git a/drivers/gpu/tests/gpu_buddy_test.c b/drivers/gpu/tests/gpu_buddy_test.c
index b905932da990..450e71deed90 100644
--- a/drivers/gpu/tests/gpu_buddy_test.c
+++ b/drivers/gpu/tests/gpu_buddy_test.c
@@ -21,9 +21,9 @@ static inline u64 get_size(int order, u64 chunk_size)
return (1 << order) * chunk_size;
}
-static void drm_test_buddy_fragmentation_performance(struct kunit *test)
+static void gpu_test_buddy_fragmentation_performance(struct kunit *test)
{
- struct drm_buddy_block *block, *tmp;
+ struct gpu_buddy_block *block, *tmp;
int num_blocks, i, ret, count = 0;
LIST_HEAD(allocated_blocks);
unsigned long elapsed_ms;
@@ -32,7 +32,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
LIST_HEAD(clear_list);
LIST_HEAD(dirty_list);
LIST_HEAD(free_list);
- struct drm_buddy mm;
+ struct gpu_buddy mm;
u64 mm_size = SZ_4G;
ktime_t start, end;
@@ -47,7 +47,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
* quickly the allocator can satisfy larger, aligned requests from a pool of
* highly fragmented space.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
num_blocks = mm_size / SZ_64K;
@@ -55,7 +55,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
start = ktime_get();
/* Allocate with maximum fragmentation - 8K blocks with 64K alignment */
for (i = 0; i < num_blocks; i++)
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
&allocated_blocks, 0),
"buddy_alloc hit an error size=%u\n", SZ_8K);
@@ -68,21 +68,21 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
}
/* Free with different flags to ensure no coalescing */
- drm_buddy_free_list(&mm, &clear_list, DRM_BUDDY_CLEARED);
- drm_buddy_free_list(&mm, &dirty_list, 0);
+ gpu_buddy_free_list(&mm, &clear_list, GPU_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &dirty_list, 0);
for (i = 0; i < num_blocks; i++)
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_64K, SZ_64K,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_64K, SZ_64K,
&test_blocks, 0),
"buddy_alloc hit an error size=%u\n", SZ_64K);
- drm_buddy_free_list(&mm, &test_blocks, 0);
+ gpu_buddy_free_list(&mm, &test_blocks, 0);
end = ktime_get();
elapsed_ms = ktime_to_ms(ktime_sub(end, start));
kunit_info(test, "Fragmented allocation took %lu ms\n", elapsed_ms);
- drm_buddy_fini(&mm);
+ gpu_buddy_fini(&mm);
/*
* Reverse free order under fragmentation
@@ -96,13 +96,13 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
* deallocation occurs in the opposite order of allocation, exposing the
* cost difference between a linear freelist scan and an ordered tree lookup.
*/
- ret = drm_buddy_init(&mm, mm_size, SZ_4K);
+ ret = gpu_buddy_init(&mm, mm_size, SZ_4K);
KUNIT_ASSERT_EQ(test, ret, 0);
start = ktime_get();
/* Allocate maximum fragmentation */
for (i = 0; i < num_blocks; i++)
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K,
&allocated_blocks, 0),
"buddy_alloc hit an error size=%u\n", SZ_8K);
@@ -111,28 +111,28 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test)
list_move_tail(&block->link, &free_list);
count++;
}
- drm_buddy_free_list(&mm, &free_list, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &free_list, GPU_BUDDY_CLEARED);
list_for_each_entry_safe_reverse(block, tmp, &allocated_blocks, link)
list_move(&block->link, &reverse_list);
- drm_buddy_free_list(&mm, &reverse_list, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &reverse_list, GPU_BUDDY_CLEARED);
end = ktime_get();
elapsed_ms = ktime_to_ms(ktime_sub(end, start));
kunit_info(test, "Reverse-ordered free took %lu ms\n", elapsed_ms);
- drm_buddy_fini(&mm);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_range_bias(struct kunit *test)
+static void gpu_test_buddy_alloc_range_bias(struct kunit *test)
{
u32 mm_size, size, ps, bias_size, bias_start, bias_end, bias_rem;
- DRM_RND_STATE(prng, random_seed);
+ GPU_RND_STATE(prng, random_seed);
unsigned int i, count, *order;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
unsigned long flags;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
LIST_HEAD(allocated);
bias_size = SZ_1M;
@@ -142,11 +142,11 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
kunit_info(test, "mm_size=%u, ps=%u\n", mm_size, ps);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
"buddy_init failed\n");
count = mm_size / bias_size;
- order = drm_random_order(count, &prng);
+ order = gpu_random_order(count, &prng);
KUNIT_EXPECT_TRUE(test, order);
/*
@@ -166,79 +166,79 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
/* internal round_up too big */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_size + ps, bias_size,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_size, bias_size);
/* size too big */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_size + ps, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_size + ps, ps);
/* bias range too small for size */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start + ps,
+ gpu_buddy_alloc_blocks(&mm, bias_start + ps,
bias_end, bias_size, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start + ps, bias_end, bias_size, ps);
/* bias misaligned */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start + ps,
+ gpu_buddy_alloc_blocks(&mm, bias_start + ps,
bias_end - ps,
bias_size >> 1, bias_size >> 1,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc h didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start + ps, bias_end - ps, bias_size >> 1, bias_size >> 1);
/* single big page */
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_size, bias_size,
&tmp,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc i failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_size, bias_size);
- drm_buddy_free_list(&mm, &tmp, 0);
+ gpu_buddy_free_list(&mm, &tmp, 0);
/* single page with internal round_up */
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, ps, bias_size,
&tmp,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, ps, bias_size);
- drm_buddy_free_list(&mm, &tmp, 0);
+ gpu_buddy_free_list(&mm, &tmp, 0);
/* random size within */
size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
if (size)
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&tmp,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, size, ps);
bias_rem -= size;
/* too big for current avail */
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, bias_rem + ps, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, bias_rem + ps, ps);
@@ -248,10 +248,10 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
size = max(size, ps);
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, size, ps);
/*
@@ -259,15 +259,15 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
* unallocated, and ideally not always on the bias
* boundaries.
*/
- drm_buddy_free_list(&mm, &tmp, 0);
+ gpu_buddy_free_list(&mm, &tmp, 0);
} else {
list_splice_tail(&tmp, &allocated);
}
}
kfree(order);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
/*
* Something more free-form. Idea is to pick a random starting bias
@@ -278,7 +278,7 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
* allocated nodes in the middle of the address space.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
"buddy_init failed\n");
bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps);
@@ -290,10 +290,10 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
u32 size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
bias_start, bias_end, size, ps);
bias_rem -= size;
@@ -319,24 +319,24 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
KUNIT_ASSERT_EQ(test, bias_start, 0);
KUNIT_ASSERT_EQ(test, bias_end, mm_size);
KUNIT_ASSERT_TRUE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start, bias_end,
+ gpu_buddy_alloc_blocks(&mm, bias_start, bias_end,
ps, ps,
&allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc passed with bias(%x-%x), size=%u\n",
bias_start, bias_end, ps);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
/*
- * Allocate cleared blocks in the bias range when the DRM buddy's clear avail is
+ * Allocate cleared blocks in the bias range when the GPU buddy's clear avail is
* zero. This will validate the bias range allocation in scenarios like system boot
* when no cleared blocks are available and exercise the fallback path too. The resulting
* blocks should always be dirty.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
"buddy_init failed\n");
bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps);
@@ -344,11 +344,11 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
bias_end = max(bias_end, bias_start + ps);
bias_rem = bias_end - bias_start;
- flags = DRM_BUDDY_CLEAR_ALLOCATION | DRM_BUDDY_RANGE_ALLOCATION;
+ flags = GPU_BUDDY_CLEAR_ALLOCATION | GPU_BUDDY_RANGE_ALLOCATION;
size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, bias_start,
+ gpu_buddy_alloc_blocks(&mm, bias_start,
bias_end, size, ps,
&allocated,
flags),
@@ -356,27 +356,27 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
bias_start, bias_end, size, ps);
list_for_each_entry(block, &allocated, link)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_clear(struct kunit *test)
+static void gpu_test_buddy_alloc_clear(struct kunit *test)
{
unsigned long n_pages, total, i = 0;
const unsigned long ps = SZ_4K;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
const int max_order = 12;
LIST_HEAD(allocated);
- struct drm_buddy mm;
+ struct gpu_buddy mm;
unsigned int order;
u32 mm_size, size;
LIST_HEAD(dirty);
LIST_HEAD(clean);
mm_size = SZ_4K << max_order;
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
@@ -389,11 +389,11 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
* is indeed all dirty pages and vice versa. Free it all again,
* keeping the dirty/clear status.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
5 * ps, ps, &allocated,
- DRM_BUDDY_TOPDOWN_ALLOCATION),
+ GPU_BUDDY_TOPDOWN_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 5 * ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
n_pages = 10;
do {
@@ -406,37 +406,37 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
flags = 0;
} else {
list = &clean;
- flags = DRM_BUDDY_CLEAR_ALLOCATION;
+ flags = GPU_BUDDY_CLEAR_ALLOCATION;
}
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
ps, ps, list,
flags),
"buddy_alloc hit an error size=%lu\n", ps);
} while (++i < n_pages);
list_for_each_entry(block, &clean, link)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), true);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), true);
list_for_each_entry(block, &dirty, link)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);
- drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
/*
* Trying to go over the clear limit for some allocation.
* The allocation should never fail with reasonable page-size.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
10 * ps, ps, &clean,
- DRM_BUDDY_CLEAR_ALLOCATION),
+ GPU_BUDDY_CLEAR_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 10 * ps);
- drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
- drm_buddy_free_list(&mm, &dirty, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &dirty, 0);
+ gpu_buddy_fini(&mm);
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
/*
* Create a new mm. Intentionally fragment the address space by creating
@@ -458,34 +458,34 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
else
list = &clean;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
ps, ps, list, 0),
"buddy_alloc hit an error size=%lu\n", ps);
} while (++i < n_pages);
- drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
- drm_buddy_free_list(&mm, &dirty, 0);
+ gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
+ gpu_buddy_free_list(&mm, &dirty, 0);
order = 1;
do {
size = SZ_4K << order;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
size, size, &allocated,
- DRM_BUDDY_CLEAR_ALLOCATION),
+ GPU_BUDDY_CLEAR_ALLOCATION),
"buddy_alloc hit an error size=%u\n", size);
total = 0;
list_for_each_entry(block, &allocated, link) {
if (size != mm_size)
- KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
- total += drm_buddy_block_size(&mm, block);
+ KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);
+ total += gpu_buddy_block_size(&mm, block);
}
KUNIT_EXPECT_EQ(test, total, size);
- drm_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_free_list(&mm, &allocated, 0);
} while (++order <= max_order);
- drm_buddy_fini(&mm);
+ gpu_buddy_fini(&mm);
/*
* Create a new mm with a non power-of-two size. Allocate a random size from each
@@ -494,44 +494,44 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
*/
mm_size = (SZ_4K << max_order) + (SZ_4K << (max_order - 2));
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
4 * ps, ps, &allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 4 * ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
2 * ps, ps, &allocated,
- DRM_BUDDY_CLEAR_ALLOCATION),
+ GPU_BUDDY_CLEAR_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 2 * ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, SZ_4K << max_order, mm_size,
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, SZ_4K << max_order, mm_size,
ps, ps, &allocated,
- DRM_BUDDY_RANGE_ALLOCATION),
+ GPU_BUDDY_RANGE_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", ps);
- drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_contiguous(struct kunit *test)
+static void gpu_test_buddy_alloc_contiguous(struct kunit *test)
{
const unsigned long ps = SZ_4K, mm_size = 16 * 3 * SZ_4K;
unsigned long i, n_pages, total;
- struct drm_buddy_block *block;
- struct drm_buddy mm;
+ struct gpu_buddy_block *block;
+ struct gpu_buddy mm;
LIST_HEAD(left);
LIST_HEAD(middle);
LIST_HEAD(right);
LIST_HEAD(allocated);
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
/*
* Idea is to fragment the address space by alternating block
* allocations between three different lists; one for left, middle and
* right. We can then free a list to simulate fragmentation. In
- * particular we want to exercise the DRM_BUDDY_CONTIGUOUS_ALLOCATION,
+ * particular we want to exercise the GPU_BUDDY_CONTIGUOUS_ALLOCATION,
* including the try_harder path.
*/
@@ -548,66 +548,66 @@ static void drm_test_buddy_alloc_contiguous(struct kunit *test)
else
list = &right;
KUNIT_ASSERT_FALSE_MSG(test,
- drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_alloc_blocks(&mm, 0, mm_size,
ps, ps, list, 0),
"buddy_alloc hit an error size=%lu\n",
ps);
} while (++i < n_pages);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 3 * ps);
- drm_buddy_free_list(&mm, &middle, 0);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_free_list(&mm, &middle, 0);
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 3 * ps);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
2 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 2 * ps);
- drm_buddy_free_list(&mm, &right, 0);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_free_list(&mm, &right, 0);
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc didn't error size=%lu\n", 3 * ps);
/*
* At this point we should have enough contiguous space for 2 blocks,
* however they are never buddies (since we freed middle and right) so
* will require the try_harder logic to find them.
*/
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
2 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 2 * ps);
- drm_buddy_free_list(&mm, &left, 0);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+ gpu_buddy_free_list(&mm, &left, 0);
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
3 * ps, ps, &allocated,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc hit an error size=%lu\n", 3 * ps);
total = 0;
list_for_each_entry(block, &allocated, link)
- total += drm_buddy_block_size(&mm, block);
+ total += gpu_buddy_block_size(&mm, block);
KUNIT_ASSERT_EQ(test, total, ps * 2 + ps * 3);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_pathological(struct kunit *test)
+static void gpu_test_buddy_alloc_pathological(struct kunit *test)
{
u64 mm_size, size, start = 0;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
const int max_order = 3;
unsigned long flags = 0;
int order, top;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
LIST_HEAD(blocks);
LIST_HEAD(holes);
LIST_HEAD(tmp);
@@ -620,7 +620,7 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
*/
mm_size = SZ_4K << max_order;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
@@ -630,18 +630,18 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
block = list_first_entry_or_null(&blocks, typeof(*block), link);
if (block) {
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
}
for (order = top; order--;) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start,
mm_size, size, size,
&tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d, top=%d\n",
order, top);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -649,45 +649,45 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
/* There should be one final page for this sub-allocation */
size = get_size(0, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM for hole\n");
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &holes);
size = get_size(top, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded at top-order %d/%d, it should be full!",
top, max_order);
}
- drm_buddy_free_list(&mm, &holes, 0);
+ gpu_buddy_free_list(&mm, &holes, 0);
/* Nothing larger than blocks of chunk_size now available */
for (order = 1; order <= max_order; order++) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded at order %d, it should be full!",
order);
}
list_splice_tail(&holes, &blocks);
- drm_buddy_free_list(&mm, &blocks, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &blocks, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
+static void gpu_test_buddy_alloc_pessimistic(struct kunit *test)
{
u64 mm_size, size, start = 0;
- struct drm_buddy_block *block, *bn;
+ struct gpu_buddy_block *block, *bn;
const unsigned int max_order = 16;
unsigned long flags = 0;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
unsigned int order;
LIST_HEAD(blocks);
LIST_HEAD(tmp);
@@ -699,19 +699,19 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
*/
mm_size = SZ_4K << max_order;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
for (order = 0; order < max_order; order++) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d\n",
order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -719,11 +719,11 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
/* And now the last remaining block available */
size = get_size(0, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM on final alloc\n");
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -731,58 +731,58 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
/* Should be completely full! */
for (order = max_order; order--;) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded, it should be full!");
}
block = list_last_entry(&blocks, typeof(*block), link);
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
/* As we free in increasing size, we make available larger blocks */
order = 1;
list_for_each_entry_safe(block, bn, &blocks, link) {
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d\n",
order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
+ gpu_buddy_free_block(&mm, block);
order++;
}
/* To confirm, now the whole mm should be available */
size = get_size(max_order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc (realloc) hit -ENOMEM with order=%d\n",
max_order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_del(&block->link);
- drm_buddy_free_block(&mm, block);
- drm_buddy_free_list(&mm, &blocks, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_block(&mm, block);
+ gpu_buddy_free_list(&mm, &blocks, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_optimistic(struct kunit *test)
+static void gpu_test_buddy_alloc_optimistic(struct kunit *test)
{
u64 mm_size, size, start = 0;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
unsigned long flags = 0;
const int max_order = 16;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
LIST_HEAD(blocks);
LIST_HEAD(tmp);
int order;
@@ -794,19 +794,19 @@ static void drm_test_buddy_alloc_optimistic(struct kunit *test)
mm_size = SZ_4K * ((1 << (max_order + 1)) - 1);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
for (order = 0; order <= max_order; order++) {
size = get_size(order, mm.chunk_size);
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc hit -ENOMEM with order=%d\n",
order);
- block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");
list_move_tail(&block->link, &blocks);
@@ -814,115 +814,115 @@ static void drm_test_buddy_alloc_optimistic(struct kunit *test)
/* Should be completely full! */
size = get_size(0, mm.chunk_size);
- KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+ KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
size, size, &tmp, flags),
"buddy_alloc unexpectedly succeeded, it should be full!");
- drm_buddy_free_list(&mm, &blocks, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &blocks, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_limit(struct kunit *test)
+static void gpu_test_buddy_alloc_limit(struct kunit *test)
{
u64 size = U64_MAX, start = 0;
- struct drm_buddy_block *block;
+ struct gpu_buddy_block *block;
unsigned long flags = 0;
LIST_HEAD(allocated);
- struct drm_buddy mm;
+ struct gpu_buddy mm;
- KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, size, SZ_4K));
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, size, SZ_4K));
- KUNIT_EXPECT_EQ_MSG(test, mm.max_order, DRM_BUDDY_MAX_ORDER,
+ KUNIT_EXPECT_EQ_MSG(test, mm.max_order, GPU_BUDDY_MAX_ORDER,
"mm.max_order(%d) != %d\n", mm.max_order,
- DRM_BUDDY_MAX_ORDER);
+ GPU_BUDDY_MAX_ORDER);
size = mm.chunk_size << mm.max_order;
- KUNIT_EXPECT_FALSE(test, drm_buddy_alloc_blocks(&mm, start, size, size,
+ KUNIT_EXPECT_FALSE(test, gpu_buddy_alloc_blocks(&mm, start, size, size,
mm.chunk_size, &allocated, flags));
- block = list_first_entry_or_null(&allocated, struct drm_buddy_block, link);
+ block = list_first_entry_or_null(&allocated, struct gpu_buddy_block, link);
KUNIT_EXPECT_TRUE(test, block);
- KUNIT_EXPECT_EQ_MSG(test, drm_buddy_block_order(block), mm.max_order,
+ KUNIT_EXPECT_EQ_MSG(test, gpu_buddy_block_order(block), mm.max_order,
"block order(%d) != %d\n",
- drm_buddy_block_order(block), mm.max_order);
+ gpu_buddy_block_order(block), mm.max_order);
- KUNIT_EXPECT_EQ_MSG(test, drm_buddy_block_size(&mm, block),
+ KUNIT_EXPECT_EQ_MSG(test, gpu_buddy_block_size(&mm, block),
BIT_ULL(mm.max_order) * mm.chunk_size,
"block size(%llu) != %llu\n",
- drm_buddy_block_size(&mm, block),
+ gpu_buddy_block_size(&mm, block),
BIT_ULL(mm.max_order) * mm.chunk_size);
- drm_buddy_free_list(&mm, &allocated, 0);
- drm_buddy_fini(&mm);
+ gpu_buddy_free_list(&mm, &allocated, 0);
+ gpu_buddy_fini(&mm);
}
-static void drm_test_buddy_alloc_exceeds_max_order(struct kunit *test)
+static void gpu_test_buddy_alloc_exceeds_max_order(struct kunit *test)
{
u64 mm_size = SZ_8G + SZ_2G, size = SZ_8G + SZ_1G, min_block_size = SZ_8G;
- struct drm_buddy mm;
+ struct gpu_buddy mm;
LIST_HEAD(blocks);
int err;
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
"buddy_init failed\n");
/* CONTIGUOUS allocation should succeed via try_harder fallback */
- KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, size,
+ KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, size,
SZ_4K, &blocks,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION),
"buddy_alloc hit an error size=%llu\n", size);
- drm_buddy_free_list(&mm, &blocks, 0);
+ gpu_buddy_free_list(&mm, &blocks, 0);
/* Non-CONTIGUOUS with large min_block_size should return -EINVAL */
- err = drm_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks, 0);
+ err = gpu_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks, 0);
KUNIT_EXPECT_EQ(test, err, -EINVAL);
/* Non-CONTIGUOUS + RANGE with large min_block_size should return -EINVAL */
- err = drm_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks,
- DRM_BUDDY_RANGE_ALLOCATION);
+ err = gpu_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks,
+ GPU_BUDDY_RANGE_ALLOCATION);
KUNIT_EXPECT_EQ(test, err, -EINVAL);
/* CONTIGUOUS + RANGE should return -EINVAL (no try_harder for RANGE) */
- err = drm_buddy_alloc_blocks(&mm, 0, mm_size, size, SZ_4K, &blocks,
- DRM_BUDDY_CONTIGUOUS_ALLOCATION | DRM_BUDDY_RANGE_ALLOCATION);
+ err = gpu_buddy_alloc_blocks(&mm, 0, mm_size, size, SZ_4K, &blocks,
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION | GPU_BUDDY_RANGE_ALLOCATION);
KUNIT_EXPECT_EQ(test, err, -EINVAL);
- drm_buddy_fini(&mm);
+ gpu_buddy_fini(&mm);
}
-static int drm_buddy_suite_init(struct kunit_suite *suite)
+static int gpu_buddy_suite_init(struct kunit_suite *suite)
{
while (!random_seed)
random_seed = get_random_u32();
- kunit_info(suite, "Testing DRM buddy manager, with random_seed=0x%x\n",
+ kunit_info(suite, "Testing GPU buddy manager, with random_seed=0x%x\n",
random_seed);
return 0;
}
-static struct kunit_case drm_buddy_tests[] = {
- KUNIT_CASE(drm_test_buddy_alloc_limit),
- KUNIT_CASE(drm_test_buddy_alloc_optimistic),
- KUNIT_CASE(drm_test_buddy_alloc_pessimistic),
- KUNIT_CASE(drm_test_buddy_alloc_pathological),
- KUNIT_CASE(drm_test_buddy_alloc_contiguous),
- KUNIT_CASE(drm_test_buddy_alloc_clear),
- KUNIT_CASE(drm_test_buddy_alloc_range_bias),
- KUNIT_CASE(drm_test_buddy_fragmentation_performance),
- KUNIT_CASE(drm_test_buddy_alloc_exceeds_max_order),
+static struct kunit_case gpu_buddy_tests[] = {
+ KUNIT_CASE(gpu_test_buddy_alloc_limit),
+ KUNIT_CASE(gpu_test_buddy_alloc_optimistic),
+ KUNIT_CASE(gpu_test_buddy_alloc_pessimistic),
+ KUNIT_CASE(gpu_test_buddy_alloc_pathological),
+ KUNIT_CASE(gpu_test_buddy_alloc_contiguous),
+ KUNIT_CASE(gpu_test_buddy_alloc_clear),
+ KUNIT_CASE(gpu_test_buddy_alloc_range_bias),
+ KUNIT_CASE(gpu_test_buddy_fragmentation_performance),
+ KUNIT_CASE(gpu_test_buddy_alloc_exceeds_max_order),
{}
};
-static struct kunit_suite drm_buddy_test_suite = {
- .name = "drm_buddy",
- .suite_init = drm_buddy_suite_init,
- .test_cases = drm_buddy_tests,
+static struct kunit_suite gpu_buddy_test_suite = {
+ .name = "gpu_buddy",
+ .suite_init = gpu_buddy_suite_init,
+ .test_cases = gpu_buddy_tests,
};
-kunit_test_suite(drm_buddy_test_suite);
+kunit_test_suite(gpu_buddy_test_suite);
MODULE_AUTHOR("Intel Corporation");
-MODULE_DESCRIPTION("Kunit test for drm_buddy functions");
+MODULE_DESCRIPTION("Kunit test for gpu_buddy functions");
MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/tests/gpu_random.c b/drivers/gpu/tests/gpu_random.c
index ddd1f594b5d5..6356372f7e52 100644
--- a/drivers/gpu/tests/gpu_random.c
+++ b/drivers/gpu/tests/gpu_random.c
@@ -8,26 +8,26 @@
#include "gpu_random.h"
-u32 drm_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
+u32 gpu_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
{
return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
}
-EXPORT_SYMBOL(drm_prandom_u32_max_state);
+EXPORT_SYMBOL(gpu_prandom_u32_max_state);
-void drm_random_reorder(unsigned int *order, unsigned int count,
+void gpu_random_reorder(unsigned int *order, unsigned int count,
struct rnd_state *state)
{
unsigned int i, j;
for (i = 0; i < count; ++i) {
BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));
- j = drm_prandom_u32_max_state(count, state);
+ j = gpu_prandom_u32_max_state(count, state);
swap(order[i], order[j]);
}
}
-EXPORT_SYMBOL(drm_random_reorder);
+EXPORT_SYMBOL(gpu_random_reorder);
-unsigned int *drm_random_order(unsigned int count, struct rnd_state *state)
+unsigned int *gpu_random_order(unsigned int count, struct rnd_state *state)
{
unsigned int *order, i;
@@ -38,7 +38,7 @@ unsigned int *drm_random_order(unsigned int count, struct rnd_state *state)
for (i = 0; i < count; i++)
order[i] = i;
- drm_random_reorder(order, count, state);
+ gpu_random_reorder(order, count, state);
return order;
}
-EXPORT_SYMBOL(drm_random_order);
+EXPORT_SYMBOL(gpu_random_order);
diff --git a/drivers/gpu/tests/gpu_random.h b/drivers/gpu/tests/gpu_random.h
index 9f827260a89d..b68cf3448264 100644
--- a/drivers/gpu/tests/gpu_random.h
+++ b/drivers/gpu/tests/gpu_random.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __DRM_RANDOM_H__
-#define __DRM_RANDOM_H__
+#ifndef __GPU_RANDOM_H__
+#define __GPU_RANDOM_H__
/* This is a temporary home for a couple of utility functions that should
* be transposed to lib/ at the earliest convenience.
@@ -8,21 +8,21 @@
#include <linux/prandom.h>
-#define DRM_RND_STATE_INITIALIZER(seed__) ({ \
+#define GPU_RND_STATE_INITIALIZER(seed__) ({ \
struct rnd_state state__; \
prandom_seed_state(&state__, (seed__)); \
state__; \
})
-#define DRM_RND_STATE(name__, seed__) \
- struct rnd_state name__ = DRM_RND_STATE_INITIALIZER(seed__)
+#define GPU_RND_STATE(name__, seed__) \
+ struct rnd_state name__ = GPU_RND_STATE_INITIALIZER(seed__)
-unsigned int *drm_random_order(unsigned int count,
+unsigned int *gpu_random_order(unsigned int count,
struct rnd_state *state);
-void drm_random_reorder(unsigned int *order,
+void gpu_random_reorder(unsigned int *order,
unsigned int count,
struct rnd_state *state);
-u32 drm_prandom_u32_max_state(u32 ep_ro,
+u32 gpu_prandom_u32_max_state(u32 ep_ro,
struct rnd_state *state);
-#endif /* !__DRM_RANDOM_H__ */
+#endif /* !__GPU_RANDOM_H__ */
diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
index 9884f003247d..a7144d275f54 100644
--- a/drivers/video/Kconfig
+++ b/drivers/video/Kconfig
@@ -37,6 +37,7 @@ source "drivers/char/agp/Kconfig"
source "drivers/gpu/vga/Kconfig"
+source "drivers/gpu/Kconfig"
source "drivers/gpu/host1x/Kconfig"
source "drivers/gpu/ipu-v3/Kconfig"
source "drivers/gpu/nova-core/Kconfig"
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
new file mode 100644
index 000000000000..3054369bebff
--- /dev/null
+++ b/include/drm/drm_buddy.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __DRM_BUDDY_H__
+#define __DRM_BUDDY_H__
+
+#include <linux/gpu_buddy.h>
+
+struct drm_printer;
+
+/* DRM-specific GPU Buddy Allocator print helpers */
+void drm_buddy_print(struct gpu_buddy *mm, struct drm_printer *p);
+void drm_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block,
+ struct drm_printer *p);
+#endif
diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
index b909fa8f810a..07ac65db6d2e 100644
--- a/include/linux/gpu_buddy.h
+++ b/include/linux/gpu_buddy.h
@@ -3,8 +3,8 @@
* Copyright © 2021 Intel Corporation
*/
-#ifndef __DRM_BUDDY_H__
-#define __DRM_BUDDY_H__
+#ifndef __GPU_BUDDY_H__
+#define __GPU_BUDDY_H__
#include <linux/bitops.h>
#include <linux/list.h>
@@ -12,38 +12,45 @@
#include <linux/sched.h>
#include <linux/rbtree.h>
-struct drm_printer;
+#define GPU_BUDDY_RANGE_ALLOCATION BIT(0)
+#define GPU_BUDDY_TOPDOWN_ALLOCATION BIT(1)
+#define GPU_BUDDY_CONTIGUOUS_ALLOCATION BIT(2)
+#define GPU_BUDDY_CLEAR_ALLOCATION BIT(3)
+#define GPU_BUDDY_CLEARED BIT(4)
+#define GPU_BUDDY_TRIM_DISABLE BIT(5)
-#define DRM_BUDDY_RANGE_ALLOCATION BIT(0)
-#define DRM_BUDDY_TOPDOWN_ALLOCATION BIT(1)
-#define DRM_BUDDY_CONTIGUOUS_ALLOCATION BIT(2)
-#define DRM_BUDDY_CLEAR_ALLOCATION BIT(3)
-#define DRM_BUDDY_CLEARED BIT(4)
-#define DRM_BUDDY_TRIM_DISABLE BIT(5)
+enum gpu_buddy_free_tree {
+ GPU_BUDDY_CLEAR_TREE = 0,
+ GPU_BUDDY_DIRTY_TREE,
+ GPU_BUDDY_MAX_FREE_TREES,
+};
-struct drm_buddy_block {
-#define DRM_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12)
-#define DRM_BUDDY_HEADER_STATE GENMASK_ULL(11, 10)
-#define DRM_BUDDY_ALLOCATED (1 << 10)
-#define DRM_BUDDY_FREE (2 << 10)
-#define DRM_BUDDY_SPLIT (3 << 10)
-#define DRM_BUDDY_HEADER_CLEAR GENMASK_ULL(9, 9)
+#define for_each_free_tree(tree) \
+ for ((tree) = 0; (tree) < GPU_BUDDY_MAX_FREE_TREES; (tree)++)
+
+struct gpu_buddy_block {
+#define GPU_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12)
+#define GPU_BUDDY_HEADER_STATE GENMASK_ULL(11, 10)
+#define GPU_BUDDY_ALLOCATED (1 << 10)
+#define GPU_BUDDY_FREE (2 << 10)
+#define GPU_BUDDY_SPLIT (3 << 10)
+#define GPU_BUDDY_HEADER_CLEAR GENMASK_ULL(9, 9)
/* Free to be used, if needed in the future */
-#define DRM_BUDDY_HEADER_UNUSED GENMASK_ULL(8, 6)
-#define DRM_BUDDY_HEADER_ORDER GENMASK_ULL(5, 0)
+#define GPU_BUDDY_HEADER_UNUSED GENMASK_ULL(8, 6)
+#define GPU_BUDDY_HEADER_ORDER GENMASK_ULL(5, 0)
u64 header;
- struct drm_buddy_block *left;
- struct drm_buddy_block *right;
- struct drm_buddy_block *parent;
+ struct gpu_buddy_block *left;
+ struct gpu_buddy_block *right;
+ struct gpu_buddy_block *parent;
void *private; /* owned by creator */
/*
- * While the block is allocated by the user through drm_buddy_alloc*,
+ * While the block is allocated by the user through gpu_buddy_alloc*,
* the user has ownership of the link, for example to maintain within
* a list, if so desired. As soon as the block is freed with
- * drm_buddy_free* ownership is given back to the mm.
+ * gpu_buddy_free* ownership is given back to the mm.
*/
union {
struct rb_node rb;
@@ -54,15 +61,15 @@ struct drm_buddy_block {
};
/* Order-zero must be at least SZ_4K */
-#define DRM_BUDDY_MAX_ORDER (63 - 12)
+#define GPU_BUDDY_MAX_ORDER (63 - 12)
/*
* Binary Buddy System.
*
* Locking should be handled by the user, a simple mutex around
- * drm_buddy_alloc* and drm_buddy_free* should suffice.
+ * gpu_buddy_alloc* and gpu_buddy_free* should suffice.
*/
-struct drm_buddy {
+struct gpu_buddy {
/* Maintain a free list for each order. */
struct rb_root **free_trees;
@@ -73,7 +80,7 @@ struct drm_buddy {
* block. Nodes are either allocated or free, in which case they will
* also exist on the respective free list.
*/
- struct drm_buddy_block **roots;
+ struct gpu_buddy_block **roots;
/*
* Anything from here is public, and remains static for the lifetime of
@@ -90,82 +97,81 @@ struct drm_buddy {
};
static inline u64
-drm_buddy_block_offset(const struct drm_buddy_block *block)
+gpu_buddy_block_offset(const struct gpu_buddy_block *block)
{
- return block->header & DRM_BUDDY_HEADER_OFFSET;
+ return block->header & GPU_BUDDY_HEADER_OFFSET;
}
static inline unsigned int
-drm_buddy_block_order(struct drm_buddy_block *block)
+gpu_buddy_block_order(struct gpu_buddy_block *block)
{
- return block->header & DRM_BUDDY_HEADER_ORDER;
+ return block->header & GPU_BUDDY_HEADER_ORDER;
}
static inline unsigned int
-drm_buddy_block_state(struct drm_buddy_block *block)
+gpu_buddy_block_state(struct gpu_buddy_block *block)
{
- return block->header & DRM_BUDDY_HEADER_STATE;
+ return block->header & GPU_BUDDY_HEADER_STATE;
}
static inline bool
-drm_buddy_block_is_allocated(struct drm_buddy_block *block)
+gpu_buddy_block_is_allocated(struct gpu_buddy_block *block)
{
- return drm_buddy_block_state(block) == DRM_BUDDY_ALLOCATED;
+ return gpu_buddy_block_state(block) == GPU_BUDDY_ALLOCATED;
}
static inline bool
-drm_buddy_block_is_clear(struct drm_buddy_block *block)
+gpu_buddy_block_is_clear(struct gpu_buddy_block *block)
{
- return block->header & DRM_BUDDY_HEADER_CLEAR;
+ return block->header & GPU_BUDDY_HEADER_CLEAR;
}
static inline bool
-drm_buddy_block_is_free(struct drm_buddy_block *block)
+gpu_buddy_block_is_free(struct gpu_buddy_block *block)
{
- return drm_buddy_block_state(block) == DRM_BUDDY_FREE;
+ return gpu_buddy_block_state(block) == GPU_BUDDY_FREE;
}
static inline bool
-drm_buddy_block_is_split(struct drm_buddy_block *block)
+gpu_buddy_block_is_split(struct gpu_buddy_block *block)
{
- return drm_buddy_block_state(block) == DRM_BUDDY_SPLIT;
+ return gpu_buddy_block_state(block) == GPU_BUDDY_SPLIT;
}
static inline u64
-drm_buddy_block_size(struct drm_buddy *mm,
- struct drm_buddy_block *block)
+gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
{
- return mm->chunk_size << drm_buddy_block_order(block);
+ return mm->chunk_size << gpu_buddy_block_order(block);
}
-int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size);
+int gpu_buddy_init(struct gpu_buddy *mm, u64 size, u64 chunk_size);
-void drm_buddy_fini(struct drm_buddy *mm);
+void gpu_buddy_fini(struct gpu_buddy *mm);
-struct drm_buddy_block *
-drm_get_buddy(struct drm_buddy_block *block);
+struct gpu_buddy_block *
+gpu_get_buddy(struct gpu_buddy_block *block);
-int drm_buddy_alloc_blocks(struct drm_buddy *mm,
+int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
u64 start, u64 end, u64 size,
u64 min_page_size,
struct list_head *blocks,
unsigned long flags);
-int drm_buddy_block_trim(struct drm_buddy *mm,
+int gpu_buddy_block_trim(struct gpu_buddy *mm,
u64 *start,
u64 new_size,
struct list_head *blocks);
-void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear);
+void gpu_buddy_reset_clear(struct gpu_buddy *mm, bool is_clear);
-void drm_buddy_free_block(struct drm_buddy *mm, struct drm_buddy_block *block);
+void gpu_buddy_free_block(struct gpu_buddy *mm, struct gpu_buddy_block *block);
-void drm_buddy_free_list(struct drm_buddy *mm,
+void gpu_buddy_free_list(struct gpu_buddy *mm,
struct list_head *objects,
unsigned int flags);
-void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p);
-void drm_buddy_block_print(struct drm_buddy *mm,
- struct drm_buddy_block *block,
- struct drm_printer *p);
+void gpu_buddy_print(struct gpu_buddy *mm);
+void gpu_buddy_block_print(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block);
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
2026-02-18 20:54 ` [PATCH v10 1/8] gpu: Move DRM buddy allocator one level up (part one) Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 2/8] gpu: Move DRM buddy allocator one level up (part two) Joel Fernandes
@ 2026-02-18 20:55 ` Joel Fernandes
2026-02-19 10:09 ` Danilo Krummrich
2026-02-18 20:55 ` [PATCH v10 4/8] rust: ffi: Convert pub use to pub mod and create ffi module Joel Fernandes
` (6 subsequent siblings)
9 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:55 UTC (permalink / raw)
To: linux-kernel, Matthew Auld, Arun Pravin, Christian Koenig,
David Airlie, Simona Vetter, Dave Airlie, Joel Fernandes
Cc: Danilo Krummrich, Miguel Ojeda, Gary Guo, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, intel-xe,
Peter Senna Tschudin
From: Koen Koning <koen.koning@linux.intel.com>
Use subsys_initcall() instead of module_init() for the GPU buddy allocator,
so that its initialization code runs before any GPU drivers.
Otherwise, a built-in driver that tries to use the buddy allocator will
run into a kernel NULL pointer dereference because slab_blocks is
uninitialized.
Specifically, this fixes a kernel panic during boot with drm/xe built in,
which uses the buddy allocator during device probe.
Fixes: ba110db8e1bc ("gpu: Move DRM buddy allocator one level up (part two)")
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: intel-xe@lists.freedesktop.org
Cc: Peter Senna Tschudin <peter.senna@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Koen Koning <koen.koning@linux.intel.com>
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/buddy.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
index 603c59a2013a..81f57fdf913b 100644
--- a/drivers/gpu/buddy.c
+++ b/drivers/gpu/buddy.c
@@ -1315,7 +1315,7 @@ static int __init gpu_buddy_module_init(void)
return 0;
}
-module_init(gpu_buddy_module_init);
+subsys_initcall(gpu_buddy_module_init);
module_exit(gpu_buddy_module_exit);
MODULE_DESCRIPTION("GPU Buddy Allocator");
--
2.34.1
* [PATCH v10 4/8] rust: ffi: Convert pub use to pub mod and create ffi module
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
` (2 preceding siblings ...)
2026-02-18 20:55 ` [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers Joel Fernandes
@ 2026-02-18 20:55 ` Joel Fernandes
2026-02-19 3:18 ` Alexandre Courbot
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
` (5 subsequent siblings)
9 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:55 UTC (permalink / raw)
To: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Joel Fernandes
Convert `pub use ffi` to `pub mod ffi` in lib.rs and create the
corresponding `rust/kernel/ffi/mod.rs` module file. Also re-export all C
type definitions from the `ffi` crate so that existing paths such as
`kernel::ffi::c_int` continue to work.
This prepares the ffi module to host additional sub-modules in later
patches (clist).
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
rust/kernel/ffi/mod.rs | 7 +++++++
rust/kernel/lib.rs | 3 +--
2 files changed, 8 insertions(+), 2 deletions(-)
create mode 100644 rust/kernel/ffi/mod.rs
diff --git a/rust/kernel/ffi/mod.rs b/rust/kernel/ffi/mod.rs
new file mode 100644
index 000000000000..7d844e9cb339
--- /dev/null
+++ b/rust/kernel/ffi/mod.rs
@@ -0,0 +1,7 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! FFI infrastructure for interfacing with C code.
+
+// Re-export C type definitions from the `ffi` crate so that existing
+// `kernel::ffi::c_int` etc. paths continue to work.
+pub use ::ffi::*;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 3da92f18f4ee..0a77b4c0ffeb 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -62,8 +62,6 @@
// Allow proc-macros to refer to `::kernel` inside the `kernel` crate (this crate).
extern crate self as kernel;
-pub use ffi;
-
pub mod acpi;
pub mod alloc;
#[cfg(CONFIG_AUXILIARY_BUS)]
@@ -93,6 +91,7 @@
pub mod drm;
pub mod error;
pub mod faux;
+pub mod ffi;
#[cfg(CONFIG_RUST_FW_LOADER_ABSTRACTIONS)]
pub mod firmware;
pub mod fmt;
--
2.34.1
* [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
` (3 preceding siblings ...)
2026-02-18 20:55 ` [PATCH v10 4/8] rust: ffi: Convert pub use to pub mod and create ffi module Joel Fernandes
@ 2026-02-18 20:55 ` Joel Fernandes
2026-02-19 4:26 ` Alexandre Courbot
` (4 more replies)
2026-02-18 20:55 ` [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
` (4 subsequent siblings)
9 siblings, 5 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:55 UTC (permalink / raw)
To: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Joel Fernandes, Alexandre Courbot
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic
Add a new module `clist` for working with C's circular doubly linked
lists. It provides low-level iteration over list nodes.
Typed iteration over the containing items is provided via a `clist_create`
macro, which assists in creating the `CList` type.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Acked-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
MAINTAINERS | 7 +
rust/helpers/helpers.c | 1 +
rust/helpers/list.c | 17 ++
rust/kernel/ffi/clist.rs | 327 +++++++++++++++++++++++++++++++++++++++
rust/kernel/ffi/mod.rs | 2 +
5 files changed, 354 insertions(+)
create mode 100644 rust/helpers/list.c
create mode 100644 rust/kernel/ffi/clist.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 14b4f9af0e36..4647f4601038 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -23213,6 +23213,13 @@ S: Maintained
T: git https://github.com/Rust-for-Linux/linux.git rust-analyzer-next
F: scripts/generate_rust_analyzer.py
+RUST TO C LIST INTERFACES
+M: Joel Fernandes <joelagnelf@nvidia.com>
+M: Alexandre Courbot <acourbot@nvidia.com>
+L: rust-for-linux@vger.kernel.org
+S: Maintained
+F: rust/kernel/ffi/clist.rs
+
RXRPC SOCKETS (AF_RXRPC)
M: David Howells <dhowells@redhat.com>
M: Marc Dionne <marc.dionne@auristor.com>
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index a3c42e51f00a..724fcb8240ac 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -35,6 +35,7 @@
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
+#include "list.c"
#include "maple_tree.c"
#include "mm.c"
#include "mutex.c"
diff --git a/rust/helpers/list.c b/rust/helpers/list.c
new file mode 100644
index 000000000000..4c1f9c111ec8
--- /dev/null
+++ b/rust/helpers/list.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Helpers for the C circular doubly-linked list implementation.
+ */
+
+#include <linux/list.h>
+
+__rust_helper void rust_helper_INIT_LIST_HEAD(struct list_head *list)
+{
+ INIT_LIST_HEAD(list);
+}
+
+__rust_helper void rust_helper_list_add_tail(struct list_head *new, struct list_head *head)
+{
+ list_add_tail(new, head);
+}
diff --git a/rust/kernel/ffi/clist.rs b/rust/kernel/ffi/clist.rs
new file mode 100644
index 000000000000..a84f395875dc
--- /dev/null
+++ b/rust/kernel/ffi/clist.rs
@@ -0,0 +1,327 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! FFI interface for C circular doubly-linked intrusive lists.
+//!
+//! This module provides Rust abstractions for iterating over C `list_head`-based
+//! linked lists. It is intended for FFI use-cases where a C subsystem manages a
+//! circular linked list that Rust code needs to read. This is generally required
+//! only for special cases and should be avoided by drivers.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! bindings,
+//! clist_create,
+//! types::Opaque, //
+//! };
+//! # // Create test list with values (0, 10, 20) - normally done by C code but it is
+//! # // emulated here for doctests using the C bindings.
+//! # use core::mem::MaybeUninit;
+//! #
+//! # /// C struct with embedded `list_head` (typically will be allocated by C code).
+//! # #[repr(C)]
+//! # pub struct SampleItemC {
+//! # pub value: i32,
+//! # pub link: bindings::list_head,
+//! # }
+//! #
+//! # let mut head = MaybeUninit::<bindings::list_head>::uninit();
+//! #
+//! # let head = head.as_mut_ptr();
+//! # // SAFETY: head and all the items are test objects allocated in this scope.
+//! # unsafe { bindings::INIT_LIST_HEAD(head) };
+//! #
+//! # let mut items = [
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # MaybeUninit::<SampleItemC>::uninit(),
+//! # ];
+//! #
+//! # for (i, item) in items.iter_mut().enumerate() {
+//! # let ptr = item.as_mut_ptr();
+//! # // SAFETY: pointers are to allocated test objects with a list_head field.
+//! # unsafe {
+//! # (*ptr).value = i as i32 * 10;
+//! # // &raw mut computes address of link directly as link is uninitialized.
+//! # bindings::INIT_LIST_HEAD(&raw mut (*ptr).link);
+//! # bindings::list_add_tail(&mut (*ptr).link, head);
+//! # }
+//! # }
+//!
+//! // Rust wrapper for the C struct.
+//! // The list item struct in this example is defined in C code as:
+//! // struct SampleItemC {
+//! // int value;
+//! // struct list_head link;
+//! // };
+//! //
+//! #[repr(transparent)]
+//! pub struct Item(Opaque<SampleItemC>);
+//!
+//! impl Item {
+//! pub fn value(&self) -> i32 {
+//! // SAFETY: [`Item`] has same layout as [`SampleItemC`].
+//! unsafe { (*self.0.get()).value }
+//! }
+//! }
+//!
+//! // Create typed [`CList`] from sentinel head.
+//! // SAFETY: head is valid, items are [`SampleItemC`] with embedded `link` field.
+//! let list = unsafe { clist_create!(head, Item, SampleItemC, link) };
+//!
+//! // Iterate directly over typed items.
+//! let mut found_0 = false;
+//! let mut found_10 = false;
+//! let mut found_20 = false;
+//!
+//! for item in list.iter() {
+//! let val = item.value();
+//! if val == 0 { found_0 = true; }
+//! if val == 10 { found_10 = true; }
+//! if val == 20 { found_20 = true; }
+//! }
+//!
+//! assert!(found_0 && found_10 && found_20);
+//! ```
+
+use core::{
+ iter::FusedIterator,
+ marker::PhantomData, //
+};
+
+use crate::{
+ bindings,
+ types::Opaque, //
+};
+
+use pin_init::{
+ pin_data,
+ pin_init,
+ PinInit //
+};
+
+/// FFI wrapper for a C `list_head` object used in intrusive linked lists.
+///
+/// # Invariants
+///
+/// - [`CListHead`] represents an allocated and valid `list_head` structure.
+#[pin_data]
+#[repr(transparent)]
+pub struct CListHead {
+ #[pin]
+ inner: Opaque<bindings::list_head>,
+}
+
+impl CListHead {
+ /// Create a `&CListHead` reference from a raw `list_head` pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
+ /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+ /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
+ /// for the lifetime `'a`.
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
+ // SAFETY:
+ // - [`CListHead`] has same layout as `list_head`.
+ // - `ptr` is valid and unmodified for 'a per caller guarantees.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Get the raw `list_head` pointer.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::list_head {
+ self.inner.get()
+ }
+
+ /// Get the next [`CListHead`] in the list.
+ #[inline]
+ pub fn next(&self) -> &Self {
+ let raw = self.as_raw();
+ // SAFETY:
+ // - `self.as_raw()` is valid per type invariants.
+ // - The `next` pointer is guaranteed to be non-NULL.
+ unsafe { Self::from_raw((*raw).next) }
+ }
+
+ /// Check if this node is linked in a list (not isolated).
+ #[inline]
+ pub fn is_linked(&self) -> bool {
+ let raw = self.as_raw();
+ // SAFETY: self.as_raw() is valid per type invariants.
+ unsafe { (*raw).next != raw && (*raw).prev != raw }
+ }
+
+ /// Pin-initializer that initializes the list head.
+ pub fn new() -> impl PinInit<Self> {
+ pin_init!(Self {
+ // SAFETY: `INIT_LIST_HEAD` initializes `slot` to a valid empty list.
+ inner <- Opaque::ffi_init(|slot| unsafe { bindings::INIT_LIST_HEAD(slot) }),
+ })
+ }
+}
+
+// SAFETY: [`CListHead`] can be sent to any thread.
+unsafe impl Send for CListHead {}
+
+// SAFETY: [`CListHead`] can be shared among threads as it is not modified
+// by non-Rust code per safety requirements of [`CListHead::from_raw`].
+unsafe impl Sync for CListHead {}
+
+impl PartialEq for CListHead {
+ #[inline]
+ fn eq(&self, other: &Self) -> bool {
+ core::ptr::eq(self, other)
+ }
+}
+
+impl Eq for CListHead {}
+
+/// Low-level iterator over `list_head` nodes.
+///
+/// An iterator over a C intrusive linked list (`list_head`). The caller has to
+/// convert each returned [`CListHead`] to an item (using the `container_of` macro
+/// or similar).
+///
+/// # Invariants
+///
+/// [`CListHeadIter`] is iterating over an allocated, initialized and valid list.
+struct CListHeadIter<'a> {
+ /// Current position in the list.
+ current: &'a CListHead,
+ /// The sentinel head (used to detect end of iteration).
+ sentinel: &'a CListHead,
+}
+
+impl<'a> Iterator for CListHeadIter<'a> {
+ type Item = &'a CListHead;
+
+ #[inline]
+ fn next(&mut self) -> Option<Self::Item> {
+ // Check if we've reached the sentinel (end of list).
+ if self.current == self.sentinel {
+ return None;
+ }
+
+ let item = self.current;
+ self.current = item.next();
+ Some(item)
+ }
+}
+
+impl<'a> FusedIterator for CListHeadIter<'a> {}
+
+/// A typed C linked list with a sentinel head, intended for FFI use-cases where
+/// a C subsystem manages a linked list that Rust code needs to read. Generally
+/// required only for special cases.
+///
+/// A sentinel head [`CListHead`] represents the entire linked list and can be used
+/// for iteration over items of type `T`; it is not associated with a specific item.
+///
+/// The const generic `OFFSET` specifies the byte offset of the `list_head` field within
+/// the struct that `T` wraps.
+///
+/// # Invariants
+///
+/// - The [`CListHead`] is an allocated and valid sentinel C `list_head` structure.
+/// - `OFFSET` is the byte offset of the `list_head` field within the struct that `T` wraps.
+/// - All the list's `list_head` nodes are allocated and have valid next/prev pointers.
+#[repr(transparent)]
+pub struct CList<T, const OFFSET: usize>(CListHead, PhantomData<T>);
+
+impl<T, const OFFSET: usize> CList<T, OFFSET> {
+ /// Create a typed [`CList`] reference from a raw sentinel `list_head` pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure
+ /// representing a list sentinel.
+ /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
+ /// - The list must contain items where the `list_head` field is at byte offset `OFFSET`.
+ /// - `T` must be `#[repr(transparent)]` over the C struct.
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
+ // SAFETY:
+ // - [`CList`] has same layout as [`CListHead`] due to repr(transparent).
+ // - Caller guarantees `ptr` is a valid, sentinel `list_head` object.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Check if the list is empty.
+ #[inline]
+ pub fn is_empty(&self) -> bool {
+ !self.0.is_linked()
+ }
+
+ /// Create an iterator over typed items.
+ #[inline]
+ pub fn iter(&self) -> CListIter<'_, T, OFFSET> {
+ let head = &self.0;
+ CListIter {
+ head_iter: CListHeadIter {
+ current: head.next(),
+ sentinel: head,
+ },
+ _phantom: PhantomData,
+ }
+ }
+}
+
+/// High-level iterator over typed list items.
+pub struct CListIter<'a, T, const OFFSET: usize> {
+ head_iter: CListHeadIter<'a>,
+ _phantom: PhantomData<&'a T>,
+}
+
+impl<'a, T, const OFFSET: usize> Iterator for CListIter<'a, T, OFFSET> {
+ type Item = &'a T;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ let head = self.head_iter.next()?;
+
+ // Convert to the item using OFFSET.
+ // SAFETY: subtracting `OFFSET` (computed via `offset_of!`) from the
+ // `list_head` pointer yields a valid `T` per the type invariants.
+ Some(unsafe { &*head.as_raw().byte_sub(OFFSET).cast::<T>() })
+ }
+}
+
+impl<'a, T, const OFFSET: usize> FusedIterator for CListIter<'a, T, OFFSET> {}
+
+/// Create a C doubly-circular linked list interface `CList` from a raw `list_head` pointer.
+///
+/// This macro creates a `CList<T, OFFSET>` that can iterate over items of type `$rust_type`
+/// linked via the `$field` field in the underlying C struct `$c_type`.
+///
+/// # Arguments
+///
+/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
+/// - `$rust_type`: Each item's Rust wrapper type.
+/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
+/// - `$field`: The name of the `list_head` field within the C struct.
+///
+/// # Safety
+///
+/// This is an unsafe macro. The caller must ensure:
+///
+/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
+/// unmodified for the lifetime of the Rust `CList`.
+/// - The list contains items of type `$c_type` linked via an embedded `$field`.
+/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
+///
+/// # Examples
+///
+/// Refer to the examples in this module's documentation.
+#[macro_export]
+macro_rules! clist_create {
+ ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
+ // Compile-time check that field path is a list_head.
+ let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
+ |p| &raw const (*p).$($field).+;
+
+ // Calculate offset and create `CList`.
+ const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
+ $crate::ffi::clist::CList::<$rust_type, OFFSET>::from_raw($head)
+ }};
+}
diff --git a/rust/kernel/ffi/mod.rs b/rust/kernel/ffi/mod.rs
index 7d844e9cb339..8c235ca0d1e3 100644
--- a/rust/kernel/ffi/mod.rs
+++ b/rust/kernel/ffi/mod.rs
@@ -5,3 +5,5 @@
// Re-export C type definitions from the `ffi` crate so that existing
// `kernel::ffi::c_int` etc. paths continue to work.
pub use ::ffi::*;
+
+pub mod clist;
--
2.34.1
* [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
` (4 preceding siblings ...)
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
@ 2026-02-18 20:55 ` Joel Fernandes
2026-02-19 5:13 ` Alexandre Courbot
` (2 more replies)
2026-02-18 20:55 ` [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
` (3 subsequent siblings)
9 siblings, 3 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:55 UTC (permalink / raw)
To: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Joel Fernandes, Nikola Djukic
Add safe Rust abstractions over the Linux kernel's GPU buddy
allocator for physical memory management. The GPU buddy allocator
implements a binary buddy system, allocating physical memory in
power-of-two block sizes. nova-core will use it for VRAM allocation.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
rust/bindings/bindings_helper.h | 11 +
rust/helpers/gpu.c | 23 ++
rust/helpers/helpers.c | 1 +
rust/kernel/gpu/buddy.rs | 537 ++++++++++++++++++++++++++++++++
rust/kernel/gpu/mod.rs | 5 +
rust/kernel/lib.rs | 2 +
6 files changed, 579 insertions(+)
create mode 100644 rust/helpers/gpu.c
create mode 100644 rust/kernel/gpu/buddy.rs
create mode 100644 rust/kernel/gpu/mod.rs
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 083cc44aa952..dbb765a9fdbd 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -29,6 +29,7 @@
#include <linux/hrtimer_types.h>
#include <linux/acpi.h>
+#include <linux/gpu_buddy.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
@@ -146,6 +147,16 @@ const vm_flags_t RUST_CONST_HELPER_VM_MIXEDMAP = VM_MIXEDMAP;
const vm_flags_t RUST_CONST_HELPER_VM_HUGEPAGE = VM_HUGEPAGE;
const vm_flags_t RUST_CONST_HELPER_VM_NOHUGEPAGE = VM_NOHUGEPAGE;
+#if IS_ENABLED(CONFIG_GPU_BUDDY)
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_RANGE_ALLOCATION = GPU_BUDDY_RANGE_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TOPDOWN_ALLOCATION = GPU_BUDDY_TOPDOWN_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CONTIGUOUS_ALLOCATION =
+ GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEAR_ALLOCATION = GPU_BUDDY_CLEAR_ALLOCATION;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_CLEARED = GPU_BUDDY_CLEARED;
+const unsigned long RUST_CONST_HELPER_GPU_BUDDY_TRIM_DISABLE = GPU_BUDDY_TRIM_DISABLE;
+#endif
+
#if IS_ENABLED(CONFIG_ANDROID_BINDER_IPC_RUST)
#include "../../drivers/android/binder/rust_binder.h"
#include "../../drivers/android/binder/rust_binder_events.h"
diff --git a/rust/helpers/gpu.c b/rust/helpers/gpu.c
new file mode 100644
index 000000000000..38b1a4e6bef8
--- /dev/null
+++ b/rust/helpers/gpu.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/gpu_buddy.h>
+
+#ifdef CONFIG_GPU_BUDDY
+
+__rust_helper u64 rust_helper_gpu_buddy_block_offset(const struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_offset(block);
+}
+
+__rust_helper unsigned int rust_helper_gpu_buddy_block_order(struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_order(block);
+}
+
+__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
+ struct gpu_buddy_block *block)
+{
+ return gpu_buddy_block_size(mm, block);
+}
+
+#endif /* CONFIG_GPU_BUDDY */
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 724fcb8240ac..a53929ce52a3 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -32,6 +32,7 @@
#include "err.c"
#include "irq.c"
#include "fs.c"
+#include "gpu.c"
#include "io.c"
#include "jump_label.c"
#include "kunit.c"
diff --git a/rust/kernel/gpu/buddy.rs b/rust/kernel/gpu/buddy.rs
new file mode 100644
index 000000000000..5df7a2199671
--- /dev/null
+++ b/rust/kernel/gpu/buddy.rs
@@ -0,0 +1,537 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU buddy allocator bindings.
+//!
+//! C header: [`include/linux/gpu_buddy.h`](srctree/include/linux/gpu_buddy.h)
+//!
+//! This module provides Rust abstractions over the Linux kernel's GPU buddy
+//! allocator, which implements a binary buddy memory allocator.
+//!
+//! The buddy allocator manages a contiguous address space and allocates blocks
+//! in power-of-two sizes, useful for GPU physical memory management.
+//!
+//! # Examples
+//!
+//! ```
+//! use kernel::{
+//! gpu::buddy::{BuddyFlags, GpuBuddy, GpuBuddyAllocParams, GpuBuddyParams},
+//! prelude::*,
+//! sizes::*, //
+//! };
+//!
+//! // Create a 1GB buddy allocator with 4KB minimum chunk size.
+//! let buddy = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_1G as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Verify initial state.
+//! assert_eq!(buddy.size(), SZ_1G as u64);
+//! assert_eq!(buddy.chunk_size(), SZ_4K as u64);
+//! let initial_free = buddy.free_memory_bytes();
+//!
+//! // Base allocation params - mutated between calls for field overrides.
+//! let mut params = GpuBuddyAllocParams {
+//! start_range_address: 0,
+//! end_range_address: 0, // Entire range.
+//! size_bytes: SZ_16M as u64,
+//! min_block_size_bytes: SZ_16M as u64,
+//! buddy_flags: BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?,
+//! };
+//!
+//! // Test top-down allocation (allocates from highest addresses).
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::TOPDOWN_ALLOCATION)?;
+//! let topdown = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in topdown.iter() {
+//! assert_eq!(block.offset(), (SZ_1G - SZ_16M) as u64);
+//! assert_eq!(block.order(), 12); // 2^12 chunks of 4KB
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(topdown);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Allocate 16MB - should result in a single 16MB block at offset 0.
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::RANGE_ALLOCATION)?;
+//! let allocated = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_16M as u64);
+//!
+//! for block in allocated.iter() {
+//! assert_eq!(block.offset(), 0);
+//! assert_eq!(block.order(), 12); // 2^12 chunks of 4KB
+//! assert_eq!(block.size(), SZ_16M as u64);
+//! }
+//! drop(allocated);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test non-contiguous allocation with fragmented memory.
+//! // Create fragmentation by allocating 4MB blocks at [0,4M) and [8M,12M).
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! params.min_block_size_bytes = SZ_4M as u64;
+//! let frag1 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_4M as u64);
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let frag2 = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - SZ_8M as u64);
+//!
+//! // Allocate 8MB without CONTIGUOUS - should return 2 blocks from the holes.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_16M as u64;
+//! params.size_bytes = SZ_8M as u64;
+//! let fragmented = KBox::pin_init(buddy.alloc_blocks(¶ms), GFP_KERNEL)?;
+//! assert_eq!(buddy.free_memory_bytes(), initial_free - (SZ_16M) as u64);
+//!
+//! let (mut count, mut total) = (0u32, 0u64);
+//! for block in fragmented.iter() {
+//! // The 8MB allocation should return 2 blocks, each 4MB.
+//! assert_eq!(block.size(), SZ_4M as u64);
+//! total += block.size();
+//! count += 1;
+//! }
+//! assert_eq!(total, SZ_8M as u64);
+//! assert_eq!(count, 2);
+//! drop(fragmented);
+//! drop(frag2);
+//! drop(frag1);
+//! assert_eq!(buddy.free_memory_bytes(), initial_free);
+//!
+//! // Test CONTIGUOUS failure when only fragmented space available.
+//! // Create a small buddy allocator with only 16MB of memory.
+//! let small = GpuBuddy::new(GpuBuddyParams {
+//! base_offset_bytes: 0,
+//! physical_memory_size_bytes: SZ_16M as u64,
+//! chunk_size_bytes: SZ_4K as u64,
+//! })?;
+//!
+//! // Allocate 4MB blocks at [0,4M) and [8M,12M) to create fragmented memory.
+//! params.start_range_address = 0;
+//! params.end_range_address = SZ_4M as u64;
+//! params.size_bytes = SZ_4M as u64;
+//! let hole1 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! params.start_range_address = SZ_8M as u64;
+//! params.end_range_address = (SZ_8M + SZ_4M) as u64;
+//! let hole2 = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL)?;
+//!
+//! // 8MB contiguous should fail - only two non-contiguous 4MB holes exist.
+//! params.start_range_address = 0;
+//! params.end_range_address = 0;
+//! params.size_bytes = SZ_8M as u64;
+//! params.buddy_flags = BuddyFlags::try_new(BuddyFlags::CONTIGUOUS_ALLOCATION)?;
+//! let result = KBox::pin_init(small.alloc_blocks(¶ms), GFP_KERNEL);
+//! assert!(result.is_err());
+//! drop(hole2);
+//! drop(hole1);
+//!
+//! # Ok::<(), Error>(())
+//! ```
+
+use crate::{
+ bindings,
+ clist_create,
+ error::to_result,
+ ffi::clist::CListHead,
+ new_mutex,
+ prelude::*,
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+ types::Opaque,
+};
+
+/// Flags for GPU buddy allocator operations.
+///
+/// These flags control the allocation behavior of the buddy allocator.
+#[derive(Clone, Copy, Default, PartialEq, Eq)]
+pub struct BuddyFlags(usize);
+
+impl BuddyFlags {
+ /// Range-based allocation from start to end addresses.
+ pub const RANGE_ALLOCATION: usize = bindings::GPU_BUDDY_RANGE_ALLOCATION;
+
+ /// Allocate from top of address space downward.
+ pub const TOPDOWN_ALLOCATION: usize = bindings::GPU_BUDDY_TOPDOWN_ALLOCATION;
+
+ /// Allocate physically contiguous blocks.
+ pub const CONTIGUOUS_ALLOCATION: usize = bindings::GPU_BUDDY_CONTIGUOUS_ALLOCATION;
+
+ /// Request allocation from cleared (zeroed) memory. The zeroing is not
+ /// done by the allocator, but by the caller before freeing old blocks.
+ pub const CLEAR_ALLOCATION: usize = bindings::GPU_BUDDY_CLEAR_ALLOCATION;
+
+ /// Disable trimming of partially used blocks.
+ pub const TRIM_DISABLE: usize = bindings::GPU_BUDDY_TRIM_DISABLE;
+
+ /// Mark blocks as cleared (zeroed) when freeing. When set during free,
+ /// indicates that the caller has already zeroed the memory.
+ pub const CLEARED: usize = bindings::GPU_BUDDY_CLEARED;
+
+ /// Create [`BuddyFlags`] from a raw value with validation.
+ ///
+ /// Use `|` operator to combine flags if needed, before calling this method.
+ pub fn try_new(flags: usize) -> Result<Self> {
+ // Flags must not exceed u32::MAX to satisfy the GPU buddy allocator C API.
+ if flags > u32::MAX as usize {
+ return Err(EINVAL);
+ }
+
+ // `TOPDOWN_ALLOCATION` only works without `RANGE_ALLOCATION`. When both are
+ // set, `TOPDOWN_ALLOCATION` is silently ignored by the allocator. Reject this.
+ if (flags & Self::RANGE_ALLOCATION) != 0 && (flags & Self::TOPDOWN_ALLOCATION) != 0 {
+ return Err(EINVAL);
+ }
+
+ Ok(Self(flags))
+ }
+
+ /// Get raw value of the flags.
+ pub(crate) fn as_raw(self) -> usize {
+ self.0
+ }
+}
+
+/// Parameters for creating a GPU buddy allocator.
+pub struct GpuBuddyParams {
+ /// Base offset in bytes where the managed memory region starts.
+ /// Allocations will be offset by this value.
+ pub base_offset_bytes: u64,
+ /// Total physical memory size managed by the allocator in bytes.
+ pub physical_memory_size_bytes: u64,
+ /// Minimum allocation unit / chunk size in bytes, must be >= 4KB.
+ pub chunk_size_bytes: u64,
+}
+
+/// Parameters for allocating blocks from a GPU buddy allocator.
+pub struct GpuBuddyAllocParams {
+ /// Start of allocation range in bytes. Use 0 for beginning.
+ pub start_range_address: u64,
+ /// End of allocation range in bytes. Use 0 for entire range.
+ pub end_range_address: u64,
+ /// Total size to allocate in bytes.
+ pub size_bytes: u64,
+ /// Minimum block size for fragmented allocations in bytes.
+ pub min_block_size_bytes: u64,
+ /// Buddy allocator behavior flags.
+ pub buddy_flags: BuddyFlags,
+}
+
+/// Inner structure holding the actual buddy allocator.
+///
+/// # Synchronization
+///
+/// The C `gpu_buddy` API requires synchronization (see `include/linux/gpu_buddy.h`).
+/// The internal [`GpuBuddyGuard`] ensures that the lock is held for all
+/// allocator and free operations, preventing races between concurrent allocations
+/// and the freeing that occurs when [`AllocatedBlocks`] is dropped.
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, initialized buddy allocator.
+#[pin_data(PinnedDrop)]
+struct GpuBuddyInner {
+ #[pin]
+ inner: Opaque<bindings::gpu_buddy>,
+ // TODO: Replace `Mutex<()>` with `Mutex<Opaque<..>>` once `Mutex::new()`
+ // accepts `impl PinInit<T>`.
+ #[pin]
+ lock: Mutex<()>,
+ /// Base offset for all allocations (does not change after init).
+ base_offset: u64,
+ /// Cached chunk size (does not change after init).
+ chunk_size: u64,
+ /// Cached total size (does not change after init).
+ size: u64,
+}
+
+impl GpuBuddyInner {
+ /// Create a pin-initializer for the buddy allocator.
+ fn new(params: GpuBuddyParams) -> impl PinInit<Self, Error> {
+ let base_offset = params.base_offset_bytes;
+ let size = params.physical_memory_size_bytes;
+ let chunk_size = params.chunk_size_bytes;
+
+ try_pin_init!(Self {
+ inner <- Opaque::try_ffi_init(|ptr| {
+ // SAFETY: ptr points to valid uninitialized memory from the pin-init
+ // infrastructure. gpu_buddy_init will initialize the structure.
+ to_result(unsafe { bindings::gpu_buddy_init(ptr, size, chunk_size) })
+ }),
+ lock <- new_mutex!(()),
+ base_offset: base_offset,
+ chunk_size: chunk_size,
+ size: size,
+ })
+ }
+
+ /// Lock the mutex and return a guard for accessing the allocator.
+ fn lock(&self) -> GpuBuddyGuard<'_> {
+ GpuBuddyGuard {
+ inner: self,
+ _guard: self.lock.lock(),
+ }
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for GpuBuddyInner {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.lock();
+
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe {
+ bindings::gpu_buddy_fini(guard.as_raw());
+ }
+ }
+}
+
+// SAFETY: [`GpuBuddyInner`] can be sent between threads.
+unsafe impl Send for GpuBuddyInner {}
+
+// SAFETY: [`GpuBuddyInner`] is `Sync` because the internal [`GpuBuddyGuard`]
+// serializes all access to the C allocator, preventing data races.
+unsafe impl Sync for GpuBuddyInner {}
+
+/// Guard that proves the lock is held, enabling access to the allocator.
+///
+/// # Invariants
+///
+/// The inner `_guard` holds the lock for the duration of this guard's lifetime.
+pub(crate) struct GpuBuddyGuard<'a> {
+ inner: &'a GpuBuddyInner,
+ _guard: MutexGuard<'a, ()>,
+}
+
+impl GpuBuddyGuard<'_> {
+ /// Get a raw pointer to the underlying C `gpu_buddy` structure.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy {
+ self.inner.inner.get()
+ }
+}
+
+/// GPU buddy allocator instance.
+///
+/// This structure wraps the C `gpu_buddy` allocator using reference counting.
+/// The allocator is automatically cleaned up when all references are dropped.
+///
+/// # Invariants
+///
+/// The inner [`Arc`] points to a valid, initialized GPU buddy allocator.
+pub struct GpuBuddy(Arc<GpuBuddyInner>);
+
+impl GpuBuddy {
+ /// Create a new buddy allocator.
+ ///
+ /// Creates a buddy allocator that manages a contiguous address space of the given
+ /// size, with the specified minimum allocation unit (chunk_size must be at least 4KB).
+ pub fn new(params: GpuBuddyParams) -> Result<Self> {
+ Ok(Self(Arc::pin_init(GpuBuddyInner::new(params), GFP_KERNEL)?))
+ }
+
+ /// Get the base offset for allocations.
+ pub fn base_offset(&self) -> u64 {
+ self.0.base_offset
+ }
+
+ /// Get the chunk size (minimum allocation unit).
+ pub fn chunk_size(&self) -> u64 {
+ self.0.chunk_size
+ }
+
+ /// Get the total managed size.
+ pub fn size(&self) -> u64 {
+ self.0.size
+ }
+
+ /// Get the available (free) memory in bytes.
+ pub fn free_memory_bytes(&self) -> u64 {
+ let guard = self.0.lock();
+ // SAFETY: guard provides exclusive access to the allocator.
+ unsafe { (*guard.as_raw()).avail }
+ }
+
+ /// Allocate blocks from the buddy allocator.
+ ///
+ /// Returns a pin-initializer for [`AllocatedBlocks`].
+ ///
+ /// Takes `&self` instead of `&mut self` because the internal [`Mutex`] provides
+ /// synchronization; no external `&mut` exclusivity is needed.
+ pub fn alloc_blocks(
+ &self,
+ params: &GpuBuddyAllocParams,
+ ) -> impl PinInit<AllocatedBlocks, Error> {
+ let buddy_arc = Arc::clone(&self.0);
+ let start = params.start_range_address;
+ let end = params.end_range_address;
+ let size = params.size_bytes;
+ let min_block_size = params.min_block_size_bytes;
+ let flags = params.buddy_flags;
+
+ // Create pin-initializer that initializes list and allocates blocks.
+ try_pin_init!(AllocatedBlocks {
+ buddy: buddy_arc,
+ list <- CListHead::new(),
+ flags: flags,
+ _: {
+ // Lock while allocating to serialize with concurrent frees.
+ let guard = buddy.lock();
+
+ // SAFETY: `guard` provides exclusive access to the buddy allocator.
+ to_result(unsafe {
+ bindings::gpu_buddy_alloc_blocks(
+ guard.as_raw(),
+ start,
+ end,
+ size,
+ min_block_size,
+ list.as_raw(),
+ flags.as_raw(),
+ )
+ })?
+ }
+ })
+ }
+}
+
+/// Allocated blocks from the buddy allocator with automatic cleanup.
+///
+/// This structure owns a list of allocated blocks and ensures they are
+/// automatically freed when dropped. Use `iter()` to iterate over all
+/// allocated [`Block`] structures.
+///
+/// # Invariants
+///
+/// - `list` is an initialized, valid list head containing allocated blocks.
+/// - `buddy` references a valid [`GpuBuddyInner`].
+#[pin_data(PinnedDrop)]
+pub struct AllocatedBlocks {
+ #[pin]
+ list: CListHead,
+ buddy: Arc<GpuBuddyInner>,
+ flags: BuddyFlags,
+}
+
+impl AllocatedBlocks {
+ /// Check if the block list is empty.
+ pub fn is_empty(&self) -> bool {
+ // An empty list head points to itself.
+ !self.list.is_linked()
+ }
+
+ /// Iterate over allocated blocks.
+ ///
+ /// Returns an iterator yielding [`AllocatedBlock`] values. Each [`AllocatedBlock`]
+ /// borrows `self` and is only valid for the duration of that borrow.
+ pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
+ // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
+ let clist = unsafe {
+ clist_create!(
+ self.list.as_raw(),
+ Block,
+ bindings::gpu_buddy_block,
+ __bindgen_anon_1.link
+ )
+ };
+
+ clist
+ .iter()
+ .map(|block| AllocatedBlock { block, alloc: self })
+ }
+}
+
+#[pinned_drop]
+impl PinnedDrop for AllocatedBlocks {
+ fn drop(self: Pin<&mut Self>) {
+ let guard = self.buddy.lock();
+
+ // SAFETY:
+ // - list is valid per the type's invariants.
+ // - guard provides exclusive access to the allocator.
+ // CAST: BuddyFlags were validated to fit in u32 at construction.
+ unsafe {
+ bindings::gpu_buddy_free_list(
+ guard.as_raw(),
+ self.list.as_raw(),
+ self.flags.as_raw() as u32,
+ );
+ }
+ }
+}
+
+/// A GPU buddy block.
+///
+/// Transparent wrapper over the C `gpu_buddy_block` structure. This type is returned
+/// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
+///
+/// # Invariants
+///
+/// The inner [`Opaque`] contains a valid, allocated `gpu_buddy_block`.
+#[repr(transparent)]
+pub struct Block(Opaque<bindings::gpu_buddy_block>);
+
+impl Block {
+ /// Get a raw pointer to the underlying C block.
+ fn as_raw(&self) -> *mut bindings::gpu_buddy_block {
+ self.0.get()
+ }
+
+ /// Get the block's offset in the address space.
+ pub(crate) fn offset(&self) -> u64 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_offset(self.as_raw()) }
+ }
+
+ /// Get the block order.
+ pub(crate) fn order(&self) -> u32 {
+ // SAFETY: self.as_raw() is valid per the type's invariants.
+ unsafe { bindings::gpu_buddy_block_order(self.as_raw()) }
+ }
+}
+
+// SAFETY: `Block` is not modified after allocation for the lifetime
+// of `AllocatedBlock`.
+unsafe impl Send for Block {}
+
+// SAFETY: `Block` is not modified after allocation for the lifetime
+// of `AllocatedBlock`.
+unsafe impl Sync for Block {}
+
+/// An allocated block with access to the GPU buddy allocator.
+///
+/// It is returned by [`AllocatedBlocks::iter()`] and provides access to the
+/// GPU buddy allocator required for some accessors.
+///
+/// # Invariants
+///
+/// - `block` is a valid reference to an allocated [`Block`].
+/// - `alloc` is a valid reference to the [`AllocatedBlocks`] that owns this block.
+pub struct AllocatedBlock<'a> {
+ block: &'a Block,
+ alloc: &'a AllocatedBlocks,
+}
+
+impl AllocatedBlock<'_> {
+ /// Get the block's offset in the address space.
+ ///
+ /// Returns the absolute offset including the allocator's base offset.
+ /// This is the actual address to use for accessing the allocated memory.
+ pub fn offset(&self) -> u64 {
+ self.alloc.buddy.base_offset + self.block.offset()
+ }
+
+ /// Get the block order (size = chunk_size << order).
+ pub fn order(&self) -> u32 {
+ self.block.order()
+ }
+
+ /// Get the block's size in bytes.
+ pub fn size(&self) -> u64 {
+ self.alloc.buddy.chunk_size << self.block.order()
+ }
+}
diff --git a/rust/kernel/gpu/mod.rs b/rust/kernel/gpu/mod.rs
new file mode 100644
index 000000000000..8f25e6367edc
--- /dev/null
+++ b/rust/kernel/gpu/mod.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! GPU subsystem abstractions.
+
+pub mod buddy;
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 0a77b4c0ffeb..1cd6feff4f02 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -96,6 +96,8 @@
pub mod firmware;
pub mod fmt;
pub mod fs;
+#[cfg(CONFIG_GPU_BUDDY)]
+pub mod gpu;
#[cfg(CONFIG_I2C = "y")]
pub mod i2c;
pub mod id_pool;
--
2.34.1
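The accessor arithmetic in `AllocatedBlock` above (absolute offset = allocator base offset + block offset; size = chunk_size << order) can be illustrated outside the kernel as plain Rust. This is a standalone sketch only: `DemoBlock` and `DemoAllocator` are hypothetical stand-ins for the patch's types, and the concrete values are made up.

```rust
// Standalone illustration of the AllocatedBlock accessor arithmetic from the
// patch above; not kernel code, and all names/values here are hypothetical.
struct DemoBlock {
    offset: u64, // offset relative to the allocator's managed address space
    order: u32,  // block size is chunk_size << order
}

struct DemoAllocator {
    base_offset: u64, // added to each block offset to form an absolute address
    chunk_size: u64,  // minimum allocation granularity (a power of two)
}

impl DemoAllocator {
    // Mirrors AllocatedBlock::offset(): absolute address of the block.
    fn absolute_offset(&self, b: &DemoBlock) -> u64 {
        self.base_offset + b.offset
    }

    // Mirrors AllocatedBlock::size(): size in bytes derived from the order.
    fn size(&self, b: &DemoBlock) -> u64 {
        self.chunk_size << b.order
    }
}

fn main() {
    let alloc = DemoAllocator { base_offset: 0x1000_0000, chunk_size: 4096 };
    let block = DemoBlock { offset: 0x2000, order: 3 };

    // An order-3 block with a 4 KiB chunk size spans 4096 << 3 = 32768 bytes.
    assert_eq!(alloc.size(&block), 32768);
    assert_eq!(alloc.absolute_offset(&block), 0x1000_2000);
    println!("size={} abs=0x{:x}", alloc.size(&block), alloc.absolute_offset(&block));
}
```

The point of splitting `Block` (raw accessors) from `AllocatedBlock` (which also borrows the owning allocation) is visible here: the base offset and chunk size live on the allocator, so only a type with access to both can compute the absolute address and byte size.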
^ permalink raw reply related [flat|nested] 74+ messages in thread
* [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
` (5 preceding siblings ...)
2026-02-18 20:55 ` [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
@ 2026-02-18 20:55 ` Joel Fernandes
2026-02-19 0:44 ` Alexandre Courbot
2026-02-18 20:55 ` [PATCH v10 8/8] nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
` (2 subsequent siblings)
9 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:55 UTC (permalink / raw)
To: linux-kernel, Danilo Krummrich, Alice Ryhl, Alexandre Courbot,
David Airlie, Simona Vetter
Cc: Miguel Ojeda, Dave Airlie, Gary Guo, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Joel Fernandes, Nikola Djukic
nova-core will use the GPU buddy allocator for physical VRAM management.
Enable it in Kconfig.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 527920f9c4d3..809485167aff 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -5,6 +5,7 @@ config NOVA_CORE
depends on RUST
select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
+ select GPU_BUDDY
default n
help
Choose this if you want to build the Nova Core driver for Nvidia
--
2.34.1
* [PATCH v10 8/8] nova-core: Kconfig: Sort select statements alphabetically
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
` (6 preceding siblings ...)
2026-02-18 20:55 ` [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
@ 2026-02-18 20:55 ` Joel Fernandes
2026-02-18 20:59 ` [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
2026-02-18 22:24 ` Danilo Krummrich
9 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:55 UTC (permalink / raw)
To: linux-kernel, Danilo Krummrich, Alice Ryhl, Alexandre Courbot,
David Airlie, Simona Vetter
Cc: Miguel Ojeda, Dave Airlie, Gary Guo, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Joel Fernandes
Reorder the select statements in NOVA_CORE Kconfig to be in
alphabetical order.
Suggested-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 809485167aff..6513007bf66f 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -3,9 +3,9 @@ config NOVA_CORE
depends on 64BIT
depends on PCI
depends on RUST
- select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
select GPU_BUDDY
+ select RUST_FW_LOADER_ABSTRACTIONS
default n
help
Choose this if you want to build the Nova Core driver for Nvidia
--
2.34.1
* Re: [PATCH v10 0/8] Preparatory patches for nova-core memory management
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
` (7 preceding siblings ...)
2026-02-18 20:55 ` [PATCH v10 8/8] nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
@ 2026-02-18 20:59 ` Joel Fernandes
2026-02-18 22:24 ` Danilo Krummrich
9 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 20:59 UTC (permalink / raw)
To: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Jonathan Corbet,
Alex Deucher, Christian König, Jani Nikula, Joonas Lahtinen,
Rodrigo Vivi, Tvrtko Ursulin, Huang Rui, Matthew Auld,
Matthew Brost, Lucas De Marchi, Thomas Hellström,
Helge Deller, Danilo Krummrich, Alice Ryhl, Miguel Ojeda,
Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, Daniel Almeida, joel, nouveau,
dri-devel, rust-for-linux, linux-doc, amd-gfx, intel-gfx,
intel-xe, linux-fbdev, Joel Fernandes
My CC list missed a lot of folks, sorry about that. Adding more CCs to this
email to make people aware of the posting. Thankfully it was posted to the
archives, so those following the rust-for-linux, dri-devel and nouveau lore
lists will have received it.
Thanks,
--
Joel Fernandes
On 2/18/2026 3:54 PM, Joel Fernandes wrote:
> These are initial preparatory patches needed for nova-core memory management
> support. The series moves the DRM buddy allocator one level up so it can be
> shared across GPU subsystems, adds Rust FFI and clist bindings, and creates
> Rust GPU buddy allocator bindings.
>
> The clist/ffi patches are ready, reviewed by Gary and Danilo. Miguel, can you
> pull those via the rust tree?
>
> The non-Rust DRM buddy related patches are already being pulled into upstream
> by Dave Airlie but I have included them here as they are needed for the rest of
> the patches (thanks to Dave for reworking them so they applied).
>
> I will post the nova-core memory management patches as a separate follow-up
> series just after this one.
>
> The git tree with all these patches can be found at:
> git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova/mm)
>
> Joel Fernandes (7):
> gpu: Move DRM buddy allocator one level up (part one)
> gpu: Move DRM buddy allocator one level up (part two)
> rust: ffi: Convert pub use to pub mod and create ffi module
> rust: clist: Add support to interface with C linked lists
> rust: gpu: Add GPU buddy allocator bindings
> nova-core: mm: Select GPU_BUDDY for VRAM allocation
> nova-core: Kconfig: Sort select statements alphabetically
>
> Koen Koning (1):
> gpu: Fix uninitialized buddy for built-in drivers
>
> Documentation/gpu/drm-mm.rst | 10 +-
> MAINTAINERS | 15 +-
> drivers/gpu/Kconfig | 13 +
> drivers/gpu/Makefile | 3 +-
> drivers/gpu/buddy.c | 1322 +++++++++++++++++
> drivers/gpu/drm/Kconfig | 5 +-
> drivers/gpu/drm/Kconfig.debug | 1 -
> drivers/gpu/drm/Makefile | 1 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
> .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 79 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 20 +-
> drivers/gpu/drm/drm_buddy.c | 1277 +---------------
> drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +-
> drivers/gpu/drm/i915/i915_scatterlist.c | 10 +-
> drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
> drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 4 +-
> .../drm/i915/selftests/intel_memory_region.c | 20 +-
> drivers/gpu/drm/tests/Makefile | 1 -
> drivers/gpu/drm/tests/drm_exec_test.c | 2 -
> drivers/gpu/drm/tests/drm_mm_test.c | 2 -
> .../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 4 +-
> drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
> drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 4 +-
> drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
> drivers/gpu/drm/xe/xe_svm.c | 12 +-
> drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 71 +-
> drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 4 +-
> drivers/gpu/nova-core/Kconfig | 3 +-
> drivers/gpu/tests/Makefile | 4 +
> .../gpu_buddy_test.c} | 416 +++---
> .../lib/drm_random.c => tests/gpu_random.c} | 18 +-
> .../lib/drm_random.h => tests/gpu_random.h} | 18 +-
> drivers/video/Kconfig | 1 +
> include/drm/drm_buddy.h | 163 +-
> include/linux/gpu_buddy.h | 177 +++
> rust/bindings/bindings_helper.h | 11 +
> rust/helpers/gpu.c | 23 +
> rust/helpers/helpers.c | 2 +
> rust/helpers/list.c | 17 +
> rust/kernel/ffi/clist.rs | 327 ++++
> rust/kernel/ffi/mod.rs | 9 +
> rust/kernel/gpu/buddy.rs | 537 +++++++
> rust/kernel/gpu/mod.rs | 5 +
> rust/kernel/lib.rs | 5 +-
> 45 files changed, 2893 insertions(+), 1846 deletions(-)
> create mode 100644 drivers/gpu/Kconfig
> create mode 100644 drivers/gpu/buddy.c
> create mode 100644 drivers/gpu/tests/Makefile
> rename drivers/gpu/{drm/tests/drm_buddy_test.c => tests/gpu_buddy_test.c} (67%)
> rename drivers/gpu/{drm/lib/drm_random.c => tests/gpu_random.c} (59%)
> rename drivers/gpu/{drm/lib/drm_random.h => tests/gpu_random.h} (53%)
> create mode 100644 include/linux/gpu_buddy.h
> create mode 100644 rust/helpers/gpu.c
> create mode 100644 rust/helpers/list.c
> create mode 100644 rust/kernel/ffi/clist.rs
> create mode 100644 rust/kernel/ffi/mod.rs
> create mode 100644 rust/kernel/gpu/buddy.rs
> create mode 100644 rust/kernel/gpu/mod.rs
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> base-commit: 2961f841b025fb234860bac26dfb7fa7cb0fb122
--
Joel Fernandes
* Re: [PATCH v10 0/8] Preparatory patches for nova-core memory management
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
` (8 preceding siblings ...)
2026-02-18 20:59 ` [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
@ 2026-02-18 22:24 ` Danilo Krummrich
2026-02-18 23:46 ` Joel Fernandes
9 siblings, 1 reply; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-18 22:24 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
nouveau, rust-for-linux, Nikola Djukic, Alexandre Courbot
On Wed Feb 18, 2026 at 9:54 PM CET, Joel Fernandes wrote:
> The clist/ffi patches are ready, reviewed by Gary and Danilo. Miguel, can you
> pull those via the rust tree?
I requested changes in the last version and have yet to go through this one. I
also think that Alex still has some comments (Cc'd him).
Please note that if this goes through the Rust tree, we have to wait for the
full upcoming cycle before we can land the GPU buddy abstractions.
Alternatively, if it goes through the Rust tree, Miguel can provide a signed tag
for me to merge or we can simply take it through the drm-rust tree in the first
place, if Miguel agrees with that.
> The non-Rust DRM buddy related patches are already being pulled into upstream
They are in drm-misc-next, I will merge into drm-rust-next once they hit
drm-next and -rc1 is out.
> I will post the nova-core memory management patches as a separate follow-up
> series just after this one.
>
> The git tree with all these patches can be found at:
> git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova/mm)
This is now (at least) the third time I have to ask for a patch changelog.
"When sending a next version, add a patch changelog to the cover letter
or to individual patches explaining difference against previous
submission (see The canonical patch format)." [1, 2]
Please, add a patch changelog.
(This also goes for the nova-core MM series, which is flagged as v7 despite
actually being v2).
[1] https://docs.kernel.org/process/submitting-patches.html#respond-to-review-comments
[2] https://docs.kernel.org/process/submitting-patches.html#the-canonical-patch-format
* Re: [PATCH v10 0/8] Preparatory patches for nova-core memory management
2026-02-18 22:24 ` Danilo Krummrich
@ 2026-02-18 23:46 ` Joel Fernandes
2026-02-18 23:59 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 23:46 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
nouveau, rust-for-linux, Nikola Djukic, Alexandre Courbot
Hi, Danilo,
> On Feb 18, 2026, at 5:24 PM, Danilo Krummrich <dakr@kernel.org> wrote:
>
> On Wed Feb 18, 2026 at 9:54 PM CET, Joel Fernandes wrote:
>> The clist/ffi patches are ready, reviewed by Gary and Danilo. Miguel, can you
>> pull those via the rust tree?
>
> I requested changes in the last version and have yet to go through this one. I
> also think that Alex still has some comments (Cc'd him).
Sure.
>
> Please note that if this goes through the Rust tree, we have to wait for the
> full upcoming cycle before we can land the GPU buddy abstractions.
>
> Alternatively, if it goes through the Rust tree, Miguel can provide a signed tag
> for me to merge or we can simply take it through the drm-rust tree in the first
> place, if Miguel agrees with that.
Ok.
>
>> The non-Rust DRM buddy related patches are already being pulled into upstream
>
> They are in drm-misc-next, I will merge into drm-rust-next once they hit
> drm-next and -rc1 is out.
Ok.
>
>> I will post the nova-core memory management patches as a separate follow-up
>> series just after this one.
>>
>> The git tree with all these patches can be found at:
>> git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova/mm)
>
> This is now (at least) the third time I have to ask for a patch changelog.
>
> "When sending a next version, add a patch changelog to the cover letter
> or to individual patches explaining difference against previous
> submission (see The canonical patch format)." [1, 2]
>
> Please, add a patch changelog.
Ah, I think I did not understand what you meant because of my different
interpretation of the word changelog. I have used that term interchangeably in
the past to summarize what a set of patches does in the cover letter, rather
than what changed since the last revision.
Anyway, here is a changelog:
1. Moved the clist code to the rust ffi module.
2. Made some comment changes in the clist and gpu buddy bindings.
3. Included the movement of the C DRM buddy code.
For the other series:
- the main change is only for DRM fence signaling related stuff and some test
related changes.
If you want, I could provide a range-diff if that makes it easier. But yeah, I
did drop the ball a bit on the changelog here. Perhaps buying you a beer at the
next LPC could be penance?
>
> (This also goes for the nova-core MM series, which is flagged as v7 despite
> actually being v2).
No, it was RFC v6; that is when I included the full stack of these patches.
See:
https://lore.kernel.org/all/20260120204303.3229303-1-joelagnelf@nvidia.com/
I split it this way based on your request. I wanted to keep it all in one series
to reduce version number confusion.
Let me know if there's something else I need to do to make it easier; I can
include a proper changelog in future respins.
Best,
--
Joel Fernandes
* Re: [PATCH v10 0/8] Preparatory patches for nova-core memory management
2026-02-18 23:46 ` Joel Fernandes
@ 2026-02-18 23:59 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-18 23:59 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel@vger.kernel.org, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
rust-for-linux@vger.kernel.org, Nikola Djukic, Alexandre Courbot
> On Feb 18, 2026, at 6:46 PM, Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
> Hi, Danilo,
>
>> On Feb 18, 2026, at 5:24 PM, Danilo Krummrich <dakr@kernel.org> wrote:
>>
>>> On Wed Feb 18, 2026 at 9:54 PM CET, Joel Fernandes wrote:
>>> The clist/ffi patches are ready, reviewed by Gary and Danilo. Miguel, can you
>>> pull those via the rust tree?
>>
>> I requested changes in the last version and have yet to go through this one. I
>> also think that Alex still has some comments (Cc'd him).
>
> Sure.
>
>>
>> Please note that if this goes through the Rust tree, we have to wait for the
>> full upcoming cycle before we can land the GPU buddy abstractions.
>>
>> Alternatively, if it goes through the Rust tree, Miguel can provide a signed tag
>> for me to merge or we can simply take it through the drm-rust tree in the first
>> place, if Miguel agrees with that.
>
> Ok.
>
>>
>>> The non-Rust DRM buddy related patches are already being pulled into upstream
>>
>> They are in drm-misc-next, I will merge into drm-rust-next once they hit
>> drm-next and -rc1 is out.
>
> Ok.
>
>>
>>> I will post the nova-core memory management patches as a separate follow-up
>>> series just after this one.
>>>
>>> The git tree with all these patches can be found at:
>>> git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova/mm)
>>
>> This is now (at least) the third time I have to ask for a patch changelog.
>>
>> "When sending a next version, add a patch changelog to the cover letter
>> or to individual patches explaining difference against previous
>> submission (see The canonical patch format)." [1, 2]
>>
>> Please, add a patch changelog.
>
> Ah, I think I did not understand what you meant because of my different
> interpretation of the words changelog. I have used this term interchangeable in
> the past to summarize what a set of patches do in the cover letter, not what
> changed since the last revision.
>
> Anyway here is a changelog:
>
> 1. Moving of the clist code to rust ffi
> 2. Some comment changes in clist and gpu buddy bindings
> 3. Inclusion of the movement of code on C drm buddy.
> And to clarify, I will try to include this on a patch-by-patch basis in the cover letter henceforth, as suggested by the documentation, rather than just a summary of what changed since last time.
- Joel
* Re: [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-18 20:55 ` [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
@ 2026-02-19 0:44 ` Alexandre Courbot
2026-02-19 1:14 ` John Hubbard
` (2 more replies)
0 siblings, 3 replies; 74+ messages in thread
From: Alexandre Courbot @ 2026-02-19 0:44 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Danilo Krummrich, Alice Ryhl, David Airlie,
Simona Vetter, Miguel Ojeda, Dave Airlie, Gary Guo,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
> nova-core will use the GPU buddy allocator for physical VRAM management.
> Enable it in Kconfig.
Subject prefix should just be `nova-core:`, as this touches the module's
configuration.
I'd also suggest to select `GPU_BUDDY` in the series that actively
starts using it.
* Re: [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-19 0:44 ` Alexandre Courbot
@ 2026-02-19 1:14 ` John Hubbard
2026-02-19 15:31 ` Joel Fernandes
2026-02-19 2:06 ` Joel Fernandes
2026-02-19 15:31 ` Joel Fernandes
2 siblings, 1 reply; 74+ messages in thread
From: John Hubbard @ 2026-02-19 1:14 UTC (permalink / raw)
To: Alexandre Courbot, Joel Fernandes
Cc: linux-kernel, Danilo Krummrich, Alice Ryhl, David Airlie,
Simona Vetter, Miguel Ojeda, Dave Airlie, Gary Guo,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On 2/18/26 4:44 PM, Alexandre Courbot wrote:
> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>> nova-core will use the GPU buddy allocator for physical VRAM management.
>> Enable it in Kconfig.
>
> Subject prefix should just be `nova-core:`, as this touches the module's
> configuration.
Or "gpu: nova-core: ", actually.
That's the convention so far, where applicable of course.
>
> I'd also suggest to select `GPU_BUDDY` in the series that actively
> starts using it.
>
thanks,
--
John Hubbard
* Re: [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-19 0:44 ` Alexandre Courbot
2026-02-19 1:14 ` John Hubbard
@ 2026-02-19 2:06 ` Joel Fernandes
2026-02-19 15:31 ` Joel Fernandes
2 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 2:06 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel@vger.kernel.org, Danilo Krummrich, Alice Ryhl,
David Airlie, Simona Vetter, Miguel Ojeda, Dave Airlie, Gary Guo,
Daniel Almeida, Koen Koning, dri-devel@lists.freedesktop.org,
nouveau@lists.freedesktop.org, rust-for-linux@vger.kernel.org,
Nikola Djukic
> On Feb 18, 2026, at 7:44 PM, Alexandre Courbot <acourbot@nvidia.com> wrote:
>
> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>> nova-core will use the GPU buddy allocator for physical VRAM management.
>> Enable it in Kconfig.
>
> Subject prefix should just be `nova-core:`, as this touches the module's
> configuration.
>
> I'd also suggest to select `GPU_BUDDY` in the series that actively
> starts using it.
Both suggestions sound good to me.
--
Joel Fernandes
* Re: [PATCH v10 2/8] gpu: Move DRM buddy allocator one level up (part two)
2026-02-18 20:55 ` [PATCH v10 2/8] gpu: Move DRM buddy allocator one level up (part two) Joel Fernandes
@ 2026-02-19 3:18 ` Alexandre Courbot
2026-02-19 15:31 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Alexandre Courbot @ 2026-02-19 3:18 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Jonathan Corbet, Shuah Khan,
Matthew Auld, Arun Pravin, Christian Koenig, Alex Deucher,
Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
Huang Rui, Matthew Brost, Thomas Hellström, Helge Deller,
Danilo Krummrich, Miguel Ojeda, Dave Airlie, Gary Guo,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
> Move the DRM buddy allocator one level up so that it can be used by GPU
> drivers (example, nova-core) that have usecases other than DRM (such as
> VFIO vGPU support). Modify the API, structures and Kconfigs to use
> "gpu_buddy" terminology. Adapt the drivers and tests to use the new API.
>
> The commit cannot be split due to bisectability; however, no functional
> change is intended. Verified by running KUnit tests and build-testing
> various configurations.
Patches 1 and 2 have the exact same commit log, but each one does only
part of it. Let's only keep the part of the log that applies to each
patch.
>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> Reviewed-by: Dave Airlie <airlied@redhat.com>
> [airlied: I've split this into two so git can find copies easier.
> I've also just nuked drm_random library, that stuff needs to be done
> elsewhere and only the buddy tests seem to be using it].
> Signed-off-by: Dave Airlie <airlied@redhat.com>
> ---
> Documentation/gpu/drm-mm.rst | 6 +
> MAINTAINERS | 8 +-
> drivers/gpu/Kconfig | 13 +
> drivers/gpu/Makefile | 1 +
> drivers/gpu/buddy.c | 556 +++++++++---------
> drivers/gpu/drm/Kconfig | 1 +
> drivers/gpu/drm/Makefile | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 2 +-
> .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h | 12 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 79 +--
> drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 18 +-
> drivers/gpu/drm/drm_buddy.c | 77 +++
> drivers/gpu/drm/i915/i915_scatterlist.c | 8 +-
> drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 55 +-
> drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 4 +-
> .../drm/i915/selftests/intel_memory_region.c | 20 +-
> .../gpu/drm/ttm/tests/ttm_bo_validate_test.c | 4 +-
> drivers/gpu/drm/ttm/tests/ttm_mock_manager.c | 18 +-
> drivers/gpu/drm/ttm/tests/ttm_mock_manager.h | 2 +-
> drivers/gpu/drm/xe/xe_res_cursor.h | 34 +-
> drivers/gpu/drm/xe/xe_svm.c | 12 +-
> drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 71 +--
> drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h | 2 +-
> drivers/gpu/tests/Makefile | 2 +-
> drivers/gpu/tests/gpu_buddy_test.c | 412 ++++++-------
> drivers/gpu/tests/gpu_random.c | 16 +-
> drivers/gpu/tests/gpu_random.h | 18 +-
> drivers/video/Kconfig | 1 +
> include/drm/drm_buddy.h | 18 +
> include/linux/gpu_buddy.h | 120 ++--
> 30 files changed, 853 insertions(+), 739 deletions(-)
> create mode 100644 drivers/gpu/Kconfig
> create mode 100644 drivers/gpu/drm/drm_buddy.c
> create mode 100644 include/drm/drm_buddy.h
>
> diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
> index ceee0e663237..32fb506db05b 100644
> --- a/Documentation/gpu/drm-mm.rst
> +++ b/Documentation/gpu/drm-mm.rst
> @@ -532,6 +532,12 @@ Buddy Allocator Function References (GPU buddy)
> .. kernel-doc:: drivers/gpu/buddy.c
> :export:
>
> +DRM Buddy Specific Logging Function References
> +----------------------------------------------
> +
> +.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
> + :export:
> +
> DRM Cache Handling and Fast WC memcpy()
> =======================================
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index dc82a6bd1a61..14b4f9af0e36 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -8905,15 +8905,17 @@ T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
> F: drivers/gpu/drm/ttm/
> F: include/drm/ttm/
>
> -DRM BUDDY ALLOCATOR
> +GPU BUDDY ALLOCATOR
> M: Matthew Auld <matthew.auld@intel.com>
> M: Arun Pravin <arunpravin.paneerselvam@amd.com>
> R: Christian Koenig <christian.koenig@amd.com>
> L: dri-devel@lists.freedesktop.org
> S: Maintained
> T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
> -F: drivers/gpu/drm/drm_buddy.c
> -F: drivers/gpu/drm/tests/drm_buddy_test.c
> +F: drivers/gpu/drm_buddy.c
This line should be `drivers/gpu/drm/drm_buddy.c`.
> +F: drivers/gpu/buddy.c
> +F: drivers/gpu/tests/gpu_buddy_test.c
> +F: include/linux/gpu_buddy.h
These files have been moved in patch 1, so their MAINTAINERS entry
should also be modified there.
* Re: [PATCH v10 4/8] rust: ffi: Convert pub use to pub mod and create ffi module
2026-02-18 20:55 ` [PATCH v10 4/8] rust: ffi: Convert pub use to pub mod and create ffi module Joel Fernandes
@ 2026-02-19 3:18 ` Alexandre Courbot
0 siblings, 0 replies; 74+ messages in thread
From: Alexandre Courbot @ 2026-02-19 3:18 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux
On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
> Convert `pub use ffi` to `pub mod ffi` in lib.rs and create the
> corresponding `rust/kernel/ffi/mod.rs` module file. Also re-export all C
> type definitions from `ffi` crate so that existing `kernel::ffi::c_int`
> etc. paths continue to work.
>
> This prepares the ffi module to host additional sub-modules in later
> patches (clist).
>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Reviewed-by: Alexandre Courbot <acourbot@nvidia.com>
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
@ 2026-02-19 4:26 ` Alexandre Courbot
2026-02-19 15:27 ` Joel Fernandes
2026-02-19 9:58 ` Danilo Krummrich
` (3 subsequent siblings)
4 siblings, 1 reply; 74+ messages in thread
From: Alexandre Courbot @ 2026-02-19 4:26 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
<snip>
> +use core::{
> + iter::FusedIterator,
> + marker::PhantomData, //
> +};
> +
> +use crate::{
> + bindings,
> + types::Opaque, //
> +};
> +
> +use pin_init::{
> + pin_data,
> + pin_init,
> + PinInit //
`rustfmt` fixed this to
PinInit, //
<snip>
> +impl<'a> FusedIterator for CListHeadIter<'a> {}
> +
> +/// A typed C linked list with a sentinel head intended for FFI use-cases where
> +/// C subsystem manages a linked list that Rust code needs to read. Generally
> +/// required only for special cases.
> +///
> +/// A sentinel head [`ClistHead`] represents the entire linked list and can be used
Typo: `CListHead` (rustdoc complained about this).
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-18 20:55 ` [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
@ 2026-02-19 5:13 ` Alexandre Courbot
2026-02-19 8:54 ` Miguel Ojeda
2026-02-19 15:31 ` Joel Fernandes
2026-02-19 13:18 ` Danilo Krummrich
2026-02-20 8:22 ` Eliot Courtney
2 siblings, 2 replies; 74+ messages in thread
From: Alexandre Courbot @ 2026-02-19 5:13 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
Just a few things caught when building.
On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
<snip>
> +use crate::{
> + bindings,
> + clist_create,
> + error::to_result,
> + ffi::clist::CListHead,
> + new_mutex,
> + prelude::*,
> + sync::{
> + lock::mutex::MutexGuard,
> + Arc,
> + Mutex, //
> + },
> + types::Opaque,
Need a `//` or `rustfmt` will reformat.
<snip>
> +#[pinned_drop]
> +impl PinnedDrop for GpuBuddyInner {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.lock();
> +
> + // SAFETY: guard provides exclusive access to the allocator.
> + unsafe {
> + bindings::gpu_buddy_fini(guard.as_raw());
> + }
> + }
> +}
> +
> +// SAFETY: [`GpuBuddyInner`] can be sent between threads.
No need to link on non-doccomments.
> +unsafe impl Send for GpuBuddyInner {}
> +
> +// SAFETY: [`GpuBuddyInner`] is `Sync` because the internal [`GpuBuddyGuard`]
> +// serializes all access to the C allocator, preventing data races.
Here as well.
<snip>
> +/// Allocated blocks from the buddy allocator with automatic cleanup.
> +///
> +/// This structure owns a list of allocated blocks and ensures they are
> +/// automatically freed when dropped. Use `iter()` to iterate over all
> +/// allocated [`Block`] structures.
> +///
> +/// # Invariants
> +///
> +/// - `list` is an initialized, valid list head containing allocated blocks.
> +/// - `buddy` references a valid [`GpuBuddyInner`].
rustdoc complains that this links to a private item in a public doc - we
should not mention `GpuBuddyInner` here.
> +#[pin_data(PinnedDrop)]
> +pub struct AllocatedBlocks {
> + #[pin]
> + list: CListHead,
> + buddy: Arc<GpuBuddyInner>,
> + flags: BuddyFlags,
> +}
> +
> +impl AllocatedBlocks {
> + /// Check if the block list is empty.
> + pub fn is_empty(&self) -> bool {
> + // An empty list head points to itself.
> + !self.list.is_linked()
> + }
> +
> + /// Iterate over allocated blocks.
> + ///
> + /// Returns an iterator yielding [`AllocatedBlock`] values. Each [`AllocatedBlock`]
> + /// borrows `self` and is only valid for the duration of that borrow.
> + pub fn iter(&self) -> impl Iterator<Item = AllocatedBlock<'_>> + '_ {
> + // SAFETY: list contains gpu_buddy_block items linked via __bindgen_anon_1.link.
> + let clist = unsafe {
> + clist_create!(
> + self.list.as_raw(),
> + Block,
> + bindings::gpu_buddy_block,
> + __bindgen_anon_1.link
> + )
> + };
> +
> + clist
> + .iter()
> + .map(|block| AllocatedBlock { block, alloc: self })
> + }
> +}
> +
> +#[pinned_drop]
> +impl PinnedDrop for AllocatedBlocks {
> + fn drop(self: Pin<&mut Self>) {
> + let guard = self.buddy.lock();
> +
> + // SAFETY:
> + // - list is valid per the type's invariants.
> + // - guard provides exclusive access to the allocator.
> + // CAST: BuddyFlags were validated to fit in u32 at construction.
> + unsafe {
> + bindings::gpu_buddy_free_list(
> + guard.as_raw(),
> + self.list.as_raw(),
> + self.flags.as_raw() as u32,
> + );
> + }
> + }
> +}
> +
> +/// A GPU buddy block.
> +///
> +/// Transparent wrapper over C `gpu_buddy_block` structure. This type is returned
> +/// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
Link should be [`CListIter`](kernel::ffi::clist::CListIter) to resolve.
But maybe we don't need to share that detail in the public
documentation?
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-19 5:13 ` Alexandre Courbot
@ 2026-02-19 8:54 ` Miguel Ojeda
2026-02-19 15:31 ` Joel Fernandes
2026-02-19 15:31 ` Joel Fernandes
1 sibling, 1 reply; 74+ messages in thread
From: Miguel Ojeda @ 2026-02-19 8:54 UTC (permalink / raw)
To: Alexandre Courbot
Cc: Joel Fernandes, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 6:13 AM Alexandre Courbot <acourbot@nvidia.com> wrote:
>
> rustdoc complains that this links to a private item in a public doc - we
> should not mention `GpuBuddyInner` here.
If you all think something should be mentioned for practical reasons,
then please don't let `rustdoc` force you to not mention it, i.e.
please feel free to remove the square brackets if needed.
In other words, I don't want the intra-doc links convention we have to
make it harder for you to write certain things exceptionally.
I hope that helps.
Cheers,
Miguel
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
2026-02-19 4:26 ` Alexandre Courbot
@ 2026-02-19 9:58 ` Danilo Krummrich
2026-02-19 15:28 ` Joel Fernandes
2026-02-19 11:21 ` Danilo Krummrich
` (2 subsequent siblings)
4 siblings, 1 reply; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-19 9:58 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
> Add a new module `clist` for working with C's doubly circular linked
> lists. Provide low-level iteration over list nodes.
>
> Typed iteration over actual items is provided with a `clist_create`
> macro to assist in creation of the `CList` type.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
> Acked-by: Gary Guo <gary@garyguo.net>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
For reference: https://lore.kernel.org/rust-for-linux/DGIIMT4F1GWA.12UFBEUAC80VW@nvidia.com/
* Re: [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers
2026-02-18 20:55 ` [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers Joel Fernandes
@ 2026-02-19 10:09 ` Danilo Krummrich
2026-02-19 15:31 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-19 10:09 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Matthew Auld, Arun Pravin, Christian Koenig,
David Airlie, Simona Vetter, Dave Airlie, Miguel Ojeda, Gary Guo,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
intel-xe, Peter Senna Tschudin
On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
> From: Koen Koning <koen.koning@linux.intel.com>
>
> Use subsys_initcall instead of module_init for the GPU buddy allocator,
> so its initialization code runs before any gpu drivers.
> Otherwise, a built-in driver that tries to use the buddy allocator will
> run into a kernel NULL pointer dereference because slab_blocks is
> uninitialized.
>
> Specifically, this fixes drm/xe (as built-in) running into a kernel
> panic during boot, because it uses buddy during device probe.
>
> Fixes: ba110db8e1bc ("gpu: Move DRM buddy allocator one level up (part two)")
This Fixes: tag seems wrong. How is this code move related to this problem?
This should rather be:
Fixes: 6387a3c4b0c4 ("drm: move the buddy allocator from i915 into common drm")
Also, please add:
Cc: stable@vger.kernel.org
> Cc: Joel Fernandes <joelagnelf@nvidia.com>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: intel-xe@lists.freedesktop.org
> Cc: Peter Senna Tschudin <peter.senna@linux.intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Signed-off-by: Koen Koning <koen.koning@linux.intel.com>
> Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
I also think this patch should be sent separately and go through drm-misc-fixes.
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
2026-02-19 4:26 ` Alexandre Courbot
2026-02-19 9:58 ` Danilo Krummrich
@ 2026-02-19 11:21 ` Danilo Krummrich
2026-02-19 14:37 ` Gary Guo
2026-02-19 15:27 ` Joel Fernandes
2026-02-20 8:16 ` Eliot Courtney
2026-02-21 8:59 ` Alice Ryhl
4 siblings, 2 replies; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-19 11:21 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
> +RUST TO C LIST INTERFACES
Maybe this should just be "RUST [FFI]" instead (in case Alex and you want to
sign up for looking after FFI helper infrastructure in general)?
> +M: Joel Fernandes <joelagnelf@nvidia.com>
> +M: Alexandre Courbot <acourbot@nvidia.com>
> +L: rust-for-linux@vger.kernel.org
> +S: Maintained
> +F: rust/kernel/ffi/clist.rs
<snip>
> diff --git a/rust/kernel/ffi/clist.rs b/rust/kernel/ffi/clist.rs
> new file mode 100644
> index 000000000000..a84f395875dc
> --- /dev/null
> +++ b/rust/kernel/ffi/clist.rs
> @@ -0,0 +1,327 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! FFI interface for C doubly circular intrusive linked lists.
> +//!
> +//! This module provides Rust abstractions for iterating over C `list_head`-based
> +//! linked lists. It is intended for FFI use-cases where a C subsystem manages a
> +//! circular linked list that Rust code needs to read. This is generally required
> +//! only for special cases and should be avoided by drivers.
Maybe generalize the statement a bit and say that this should only be used for
cases where C and Rust code share direct access to the same linked list through
an FFI interface.
Additionally, add a separate note that this *must not* be used by Rust
components that just aim for a linked list primitive and instead refer to the
Rust linked list implementation with an intra-doc link.
> +//!
> +//! # Examples
> +//!
> +//! ```
> +//! use kernel::{
> +//! bindings,
> +//! clist_create,
> +//! types::Opaque, //
Examples don't necessarily need '//' at the end, as they are not automatically
formatted anyways.
(I hope that we will have a solution for import formatting before rustfmt
supports doc-comments. :)
> +//! };
> +//! # // Create test list with values (0, 10, 20) - normally done by C code but it is
> +//! # // emulated here for doctests using the C bindings.
> +//! # use core::mem::MaybeUninit;
> +//! #
> +//! # /// C struct with embedded `list_head` (typically will be allocated by C code).
> +//! # #[repr(C)]
> +//! # pub struct SampleItemC {
> +//! # pub value: i32,
> +//! # pub link: bindings::list_head,
> +//! # }
> +//! #
> +//! # let mut head = MaybeUninit::<bindings::list_head>::uninit();
> +//! #
> +//! # let head = head.as_mut_ptr();
> +//! # // SAFETY: head and all the items are test objects allocated in this scope.
> +//! # unsafe { bindings::INIT_LIST_HEAD(head) };
> +//! #
> +//! # let mut items = [
> +//! # MaybeUninit::<SampleItemC>::uninit(),
> +//! # MaybeUninit::<SampleItemC>::uninit(),
> +//! # MaybeUninit::<SampleItemC>::uninit(),
> +//! # ];
> +//! #
> +//! # for (i, item) in items.iter_mut().enumerate() {
> +//! # let ptr = item.as_mut_ptr();
> +//! # // SAFETY: pointers are to allocated test objects with a list_head field.
> +//! # unsafe {
I understand that this is just setup code for a doc-test, but I still think we
should hold it to the same standards, i.e. let's separate the different unsafe
calls into their own unsafe blocks and add proper safety comments.
> +//! # (*ptr).value = i as i32 * 10;
> +//! # // &raw mut computes address of link directly as link is uninitialized.
> +//! # bindings::INIT_LIST_HEAD(&raw mut (*ptr).link);
> +//! # bindings::list_add_tail(&mut (*ptr).link, head);
> +//! # }
> +//! # }
<snip>
> +use pin_init::{
> + pin_data,
> + pin_init,
> + PinInit //
Should be 'PinInit, //'.
> +};
> +
> +/// FFI wrapper for a C `list_head` object used in intrusive linked lists.
> +///
> +/// # Invariants
> +///
> +/// - [`CListHead`] represents an allocated and valid `list_head` structure.
What does "allocated" mean in this context? (Dynamic allocations, stack, .data
section of the binary, any of those?)
In case of the latter, I'd just remove "allocated".
> +#[pin_data]
> +#[repr(transparent)]
> +pub struct CListHead {
> + #[pin]
> + inner: Opaque<bindings::list_head>,
> +}
> +
> +impl CListHead {
> + /// Create a `&CListHead` reference from a raw `list_head` pointer.
> + ///
> + /// # Safety
> + ///
> + /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
Same here, what exactly is meant by "allocated"?
> + /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
> + /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
> + /// for the lifetime `'a`.
This is a bit vague, I think; concurrent modifications by (other) Rust code are
not OK either.
> + #[inline]
> + pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
> + // SAFETY:
> + // - [`CListHead`] has same layout as `list_head`.
> + // - `ptr` is valid and unmodified for 'a per caller guarantees.
> + unsafe { &*ptr.cast() }
> + }
> +
> + /// Get the raw `list_head` pointer.
> + #[inline]
> + pub fn as_raw(&self) -> *mut bindings::list_head {
> + self.inner.get()
> + }
> +
> + /// Get the next [`CListHead`] in the list.
> + #[inline]
> + pub fn next(&self) -> &Self {
> + let raw = self.as_raw();
> + // SAFETY:
> + // - `self.as_raw()` is valid per type invariants.
> + // - The `next` pointer is guaranteed to be non-NULL.
I'm not sure whether "valid" in the type invariant implies that the struct
list_head is initialized. From a language point of view it is also valid if the
pointers are NULL.
So, I think the invariant (and the safety requirements of from_raw()) have to
ensure that the struct list_head is initialized in the sense of
INIT_LIST_HEAD().
> + unsafe { Self::from_raw((*raw).next) }
> + }
<snip>
> +/// A typed C linked list with a sentinel head intended for FFI use-cases where
> +/// C subsystem manages a linked list that Rust code needs to read. Generally
> +/// required only for special cases.
> +///
> +/// A sentinel head [`ClistHead`] represents the entire linked list and can be used
> +/// for iteration over items of type `T`, it is not associated with a specific item.
> +///
> +/// The const generic `OFFSET` specifies the byte offset of the `list_head` field within
> +/// the struct that `T` wraps.
> +///
> +/// # Invariants
> +///
> +/// - The [`CListHead`] is an allocated and valid sentinel C `list_head` structure.
> +/// - `OFFSET` is the byte offset of the `list_head` field within the struct that `T` wraps.
> +/// - All the list's `list_head` nodes are allocated and have valid next/prev pointers.
> +#[repr(transparent)]
> +pub struct CList<T, const OFFSET: usize>(CListHead, PhantomData<T>);
> +
> +impl<T, const OFFSET: usize> CList<T, OFFSET> {
> + /// Create a typed [`CList`] reference from a raw sentinel `list_head` pointer.
> + ///
> + /// # Safety
> + ///
> + /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure
> + /// representing a list sentinel.
> + /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
> + /// - The list must contain items where the `list_head` field is at byte offset `OFFSET`.
> + /// - `T` must be `#[repr(transparent)]` over the C struct.
> + #[inline]
> + pub unsafe fn from_raw<'a>(ptr: *mut bindings::list_head) -> &'a Self {
> + // SAFETY:
> + // - [`CList`] has same layout as [`CListHead`] due to repr(transparent).
> + // - Caller guarantees `ptr` is a valid, sentinel `list_head` object.
> + unsafe { &*ptr.cast() }
> + }
Comments from CListHead also apply here.
> +/// Create a C doubly-circular linked list interface `CList` from a raw `list_head` pointer.
> +///
> +/// This macro creates a `CList<T, OFFSET>` that can iterate over items of type `$rust_type`
> +/// linked via the `$field` field in the underlying C struct `$c_type`.
> +///
> +/// # Arguments
> +///
> +/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
> +/// - `$rust_type`: Each item's rust wrapper type.
> +/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
> +/// - `$field`: The name of the `list_head` field within the C struct.
> +///
> +/// # Safety
> +///
> +/// This is an unsafe macro. The caller must ensure:
Given that, we should probably use the same (or a similar) trick as in [1].
[1] https://rust.docs.kernel.org/src/kernel/device.rs.html#665-688
> +///
> +/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
> +/// unmodified for the lifetime of the rust `CList`.
> +/// - The list contains items of type `$c_type` linked via an embedded `$field`.
> +/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
> +///
> +/// # Examples
> +///
> +/// Refer to the examples in this module's documentation.
> +#[macro_export]
> +macro_rules! clist_create {
> + ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
> + // Compile-time check that field path is a list_head.
> + let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
> + |p| &raw const (*p).$($field).+;
> +
> + // Calculate offset and create `CList`.
> + const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
> + $crate::ffi::clist::CList::<$rust_type, OFFSET>::from_raw($head)
> + }};
> +}
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-18 20:55 ` [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
2026-02-19 5:13 ` Alexandre Courbot
@ 2026-02-19 13:18 ` Danilo Krummrich
2026-02-19 15:31 ` Joel Fernandes
2026-02-20 8:22 ` Eliot Courtney
2 siblings, 1 reply; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-19 13:18 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
nouveau, rust-for-linux, Nikola Djukic, Arun Pravin,
Christian Koenig
(Cc: Arun, Christian)
On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
> Add safe Rust abstractions over the Linux kernel's GPU buddy
> allocator for physical memory management. The GPU buddy allocator
> implements a binary buddy system useful for GPU physical memory
> allocation. nova-core will use it for physical memory allocation.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
The patch should also update the MAINTAINERS file accordingly.
(I will go through the code later on.)
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 11:21 ` Danilo Krummrich
@ 2026-02-19 14:37 ` Gary Guo
2026-02-19 15:27 ` Joel Fernandes
1 sibling, 0 replies; 74+ messages in thread
From: Gary Guo @ 2026-02-19 14:37 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Joel Fernandes, linux-kernel, Miguel Ojeda, Boqun Feng,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On 2026-02-19 11:21, Danilo Krummrich wrote:
>
> Examples don't necessarily need '//' at the end, as they are not
> automatically formatted anyways.
>
> (I hope that we will have a solution for import formatting before
> rustfmt supports doc-comments. :)
There is format_code_in_doc_comments option in rustfmt, unfortunately
it's unstable.
Best,
Gary
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 11:21 ` Danilo Krummrich
2026-02-19 14:37 ` Gary Guo
@ 2026-02-19 15:27 ` Joel Fernandes
2026-02-19 15:44 ` Joel Fernandes
1 sibling, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:27 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 12:21:56PM +0100, Danilo Krummrich wrote:
> On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
> > +RUST TO C LIST INTERFACES
>
> Maybe this should just be "RUST [FFI]" instead (in case Alex and you want to
> sign up for looking after FFI helper infrastructure in general)?
Good idea, done.
> > +F: rust/kernel/ffi/clist.rs
>
> <snip>
>
> > +//! This module provides Rust abstractions for iterating over C `list_head`-based
> > +//! linked lists. It is intended for FFI use-cases where a C subsystem manages a
> > +//! circular linked list that Rust code needs to read. This is generally required
> > +//! only for special cases and should be avoided by drivers.
>
> Maybe generalize the statement a bit and say that this should only be used for
> cases where C and Rust code share direct access to the same linked list through
> an FFI interface.
>
> Additionally, add a separate note that this *must not* be used by Rust
> components that just aim for a linked list primitive and instead refer to the
> Rust linked list implementation with an intra-doc link.
Done. Updated the module doc to say "It should only be used for cases
where C and Rust code share direct access to the same linked list
through an FFI interface" and added a separate note:
Note: This *must not* be used by Rust components that just need a
linked list primitive. Use [`kernel::list::List`] instead.
> > +//! types::Opaque, //
>
> Examples don't necessarily need '//' at the end, as they are not automatically
> formatted anyways.
Removed from the example. Non-example imports keep the '//' as a
rustfmt guard.
> > +//! # // SAFETY: pointers are to allocated test objects with a list_head field.
> > +//! # unsafe {
>
> I understand that this is just setup code for a doc-test, but I still think we
> should hold it to the same standards, i.e. let's separate the different unsafe
> calls into their own unsafe blocks and add proper safety comments.
Done. Split into three separate unsafe blocks with individual SAFETY
comments for the value write, INIT_LIST_HEAD, and list_add_tail calls.
> > + PinInit //
>
> Should be 'PinInit, //'.
Fixed.
> > +/// - [`CListHead`] represents an allocated and valid `list_head` structure.
>
> What does "allocated" mean in this context? (Dynamic allocations, stack, .data
> section of the binary, any of those?)
>
> In case of the latter, I'd just remove "allocated".
Removed "allocated". The invariant now reads:
The underlying `list_head` has been initialized (e.g. via
`INIT_LIST_HEAD()`) and its `next`/`prev` pointers are valid and
non-NULL.
> > + /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
>
> Same here, what exactly is meant by "allocated"?
Removed "allocated" from from_raw() safety docs as well. Updated to:
`ptr` must be a valid pointer to an initialized `list_head` (e.g.
via `INIT_LIST_HEAD()`), with valid non-NULL `next`/`prev` pointers.
> > + /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
> > + /// for the lifetime `'a`.
>
> This is a bit vague I think, concurrent modifications of (other) Rust code are
> not OK either.
Fixed. Changed to "must not be concurrently modified for the lifetime
`'a`" which covers both Rust and C code.
> > + // SAFETY:
> > + // - `self.as_raw()` is valid per type invariants.
> > + // - The `next` pointer is guaranteed to be non-NULL.
>
> I'm not sure whether "valid" in the type invariant implies that the struct
> list_head is initialized. From a language point of view it is also valid if the
> pointers are NULL.
>
> So, I think the invariant (and the safety requirements of from_raw()) have to
> ensure that the struct list_head is initialized in the sense of
> INIT_LIST_HEAD().
Agreed. The invariant and from_raw() safety requirements now explicitly
require INIT_LIST_HEAD() initialization with valid non-NULL next/prev
pointers. The next() SAFETY comment now reads:
- `self.as_raw()` is valid and initialized per type invariants.
- The `next` pointer is valid and non-NULL per type invariants
(initialized via `INIT_LIST_HEAD()` or equivalent).
> > +/// - The [`CListHead`] is an allocated and valid sentinel C `list_head` structure.
> > +/// - `OFFSET` is the byte offset of the `list_head` field within the struct that `T` wraps.
> > +/// - All the list's `list_head` nodes are allocated and have valid next/prev pointers.
>
> Comments from CListHead also apply here.
Updated CList invariants and from_raw() safety docs to match the
CListHead pattern (removed "allocated", added INIT_LIST_HEAD, non-NULL
pointers, "concurrently modified").
> > +/// This is an unsafe macro. The caller must ensure:
>
> Given that, we should probably use the same (or a similar) trick as in [1].
>
> [1] https://rust.docs.kernel.org/src/kernel/device.rs.html#665-688
Done. Applied the device.rs pattern - the macro now requires
`clist_create!(unsafe { ... })` syntax, which forces callers to
acknowledge the safety requirements at the call site. The macro
internally wraps the `CList::from_raw` call in an unsafe block.
Thanks for the review!
Joel
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 4:26 ` Alexandre Courbot
@ 2026-02-19 15:27 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:27 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 01:26:48PM +0900, Alexandre Courbot wrote:
> > + PinInit //
>
> `rustfmt` fixed this to
>
> PinInit, //
Fixed, thanks.
> > +/// A sentinel head [`ClistHead`] represents the entire linked list and can be used
>
> Typo: `CListHead` (rustdoc complained about this).
Fixed, thanks.
Joel
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 9:58 ` Danilo Krummrich
@ 2026-02-19 15:28 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:28 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 10:58:42AM +0100, Danilo Krummrich wrote:
> For reference: https://lore.kernel.org/rust-for-linux/DGIIMT4F1GWA.12UFBEUAC80VW@nvidia.com/
Thanks for the pointer. I've addressed Alex's v9 review comments as
well. I'll reply to that thread separately with the details.
Joel
* Re: [PATCH v10 2/8] gpu: Move DRM buddy allocator one level up (part two)
2026-02-19 3:18 ` Alexandre Courbot
@ 2026-02-19 15:31 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:31 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Danilo Krummrich, Dave Airlie, Miguel Ojeda,
dri-devel, nouveau, rust-for-linux
On Thu, Feb 19, 2026 at 12:18:06PM +0900, Alexandre Courbot wrote:
> Patches 1 and 2 have the exact same commit log, but each one does only
> part of it. Let's only keep the part of the log that applies to each
> patch.
Good catch, will differentiate the commit logs for part one and part
two.
> > +F: drivers/gpu/drm_buddy.c
>
> This line should be `drivers/gpu/drm/drm_buddy.c`.
Good catch, will fix.
> These files have been moved in patch 1, so their MAINTAINERS entry
> should also be modified there.
Right, I'll move the MAINTAINERS changes to the appropriate patch
where the files are actually moved.
Thanks,
Joel
* Re: [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers
2026-02-19 10:09 ` Danilo Krummrich
@ 2026-02-19 15:31 ` Joel Fernandes
2026-02-19 16:24 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:31 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Matthew Auld, Arun Pravin, Christian Koenig,
Dave Airlie, Koen Koning, dri-devel, nouveau, rust-for-linux,
intel-xe, Peter Senna Tschudin
On Thu, Feb 19, 2026 at 11:09:42AM +0100, Danilo Krummrich wrote:
> > Fixes: ba110db8e1bc ("gpu: Move DRM buddy allocator one level up (part two)")
>
> This Fixes: tag seems wrong. How is this code move related to this problem?
>
> This should rather be:
>
> Fixes: 6387a3c4b0c4 ("drm: move the buddy allocator from i915 into common drm")
You're right, the bug existed since the original move to common drm.
Will update the Fixes tag.
> Also, please add:
>
> Cc: stable@vger.kernel.org
Will add.
> I also think this patch should be sent separately and go through drm-misc-fixes.
Agreed. I'll pull this patch out of the series and send it separately
targeting drm-misc-fixes.
Thanks,
Joel
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-19 5:13 ` Alexandre Courbot
2026-02-19 8:54 ` Miguel Ojeda
@ 2026-02-19 15:31 ` Joel Fernandes
2026-02-20 1:56 ` Alexandre Courbot
1 sibling, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:31 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Danilo Krummrich, Miguel Ojeda, Boqun Feng,
Gary Guo, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, dri-devel, nouveau,
rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 02:13:37PM +0900, Alexandre Courbot wrote:
> > + types::Opaque,
>
> Need a `//` or `rustfmt` will reformat.
Fixed, thanks.
> > +// SAFETY: [`GpuBuddyInner`] can be sent between threads.
>
> No need to link on non-doccomments.
Fixed. Removed the brackets from SAFETY comments for GpuBuddyInner
and GpuBuddyGuard.
> > +/// - `buddy` references a valid [`GpuBuddyInner`].
>
> rustdoc complains that this links to a private item in a public doc - we
> should not mention `GpuBuddyInner` here.
Per Miguel's reply, I've kept the mention but removed the square
brackets so it doesn't try to create a link. This way it's still
mentioned for practical reference without triggering the rustdoc
warning.
> > /// as references from [`CListIter`] during iteration over [`AllocatedBlocks`].
>
> Link should be [`CListIter`](kernel::ffi::clist::CListIter) to resolve.
> But maybe we don't need to share that detail in the public
> documentation?
Agreed, removed the CListIter reference from the public doc. It now
just says "as references during iteration over AllocatedBlocks".
Thanks,
Joel
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-19 13:18 ` Danilo Krummrich
@ 2026-02-19 15:31 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:31 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Dave Airlie,
Daniel Almeida, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Arun Pravin, Christian Koenig
On Thu, Feb 19, 2026 at 02:18:31PM +0100, Danilo Krummrich wrote:
> The patch should also update the MAINTAINERS file accordingly.
Will add the MAINTAINERS update for GPU buddy bindings.
Thanks,
Joel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-19 8:54 ` Miguel Ojeda
@ 2026-02-19 15:31 ` Joel Fernandes
2026-03-01 13:23 ` Gary Guo
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:31 UTC (permalink / raw)
To: Miguel Ojeda
Cc: linux-kernel, Alexandre Courbot, Danilo Krummrich, Boqun Feng,
Gary Guo, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, dri-devel, nouveau,
rust-for-linux, Nikola Djukic
On Wed, Feb 19, 2026 at 09:54:27AM +0100, Miguel Ojeda wrote:
> If you all think something should be mentioned for practical reasons,
> then please don't let `rustdoc` force you to not mention it, i.e.
> please feel free to remove the square brackets if needed.
>
> In other words, I don't want the intra-doc links convention we have to
> make it harder for you to write certain things exceptionally.
Thanks Miguel, that's helpful! I've kept the GpuBuddyInner mention in
the invariants but removed the square brackets to avoid the rustdoc
warning while still providing the reference for readers.
Joel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-19 0:44 ` Alexandre Courbot
2026-02-19 1:14 ` John Hubbard
2026-02-19 2:06 ` Joel Fernandes
@ 2026-02-19 15:31 ` Joel Fernandes
2 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:31 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Danilo Krummrich, John Hubbard, Alice Ryhl,
Miguel Ojeda, Dave Airlie, Gary Guo, Daniel Almeida, dri-devel,
nouveau, rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 09:44:28AM +0900, Alexandre Courbot wrote:
> Subject prefix should just be `nova-core:`, as this touches the module's
> configuration.
Will update to "gpu: nova-core:" per John's suggestion on the
convention.
> I'd also suggest to select `GPU_BUDDY` in the series that actively
> starts using it.
Makes sense, I'll move the GPU_BUDDY select to the patch that
actually starts using it (the mm patch).
Thanks,
Joel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation
2026-02-19 1:14 ` John Hubbard
@ 2026-02-19 15:31 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:31 UTC (permalink / raw)
To: John Hubbard
Cc: linux-kernel, Alexandre Courbot, Danilo Krummrich, Alice Ryhl,
Miguel Ojeda, Dave Airlie, Gary Guo, Daniel Almeida, dri-devel,
nouveau, rust-for-linux, Nikola Djukic
On Wed, Feb 18, 2026 at 05:14:18PM -0800, John Hubbard wrote:
> Or "gpu: nova-core: ", actually.
>
> That's the convention so far, where applicable of course.
Thanks John, will use "gpu: nova-core:" as the subject prefix.
Joel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 15:27 ` Joel Fernandes
@ 2026-02-19 15:44 ` Joel Fernandes
2026-02-19 16:24 ` Danilo Krummrich
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 15:44 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On 2/19/2026 10:27 AM, Joel Fernandes wrote:
> On Thu, Feb 19, 2026 at 12:21:56PM +0100, Danilo Krummrich wrote:
>> On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
>>> +RUST TO C LIST INTERFACES
>> Maybe this should just be "RUST [FFI]" instead (in case Alex and you want to
>> sign up for looking after FFI helper infrastructure in general)?
>
> Good idea, done.
Actually, I am not sure we want to commit to the entire RUST FFI infra, though
it's pretty tiny right now. Most of this infra right now is clist; shall we
start by keeping it as "RUST TO C LIST INTERFACES"? Or we could make it a
"C LIST INTERFACES [RUST]" section.
Let me know what you and everyone else think.
Thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 15:44 ` Joel Fernandes
@ 2026-02-19 16:24 ` Danilo Krummrich
2026-02-19 18:07 ` Joel Fernandes
2026-02-20 1:09 ` Gary Guo
0 siblings, 2 replies; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-19 16:24 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu Feb 19, 2026 at 4:44 PM CET, Joel Fernandes wrote:
>
>
> On 2/19/2026 10:27 AM, Joel Fernandes wrote:
>> On Thu, Feb 19, 2026 at 12:21:56PM +0100, Danilo Krummrich wrote:
>>> On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
>>>> +RUST TO C LIST INTERFACES
>>> Maybe this should just be "RUST [FFI]" instead (in case Alex and you want to
>>> sign up for looking after FFI helper infrastructure in general)?
>>
>> Good idea, done.
>
> Actually, I am not sure we want to commit to the entire RUST FFI infra, though
> it's pretty tiny right now. Most of this infra right now is clist; shall we
> start by keeping it as "RUST TO C LIST INTERFACES"? Or we could make it a
> "C LIST INTERFACES [RUST]" section.
I feel like it makes a bit more sense to have an entry for the entire class of
"RUST [FFI]" infrastructure.
I could imagine that we will find quite some more cases where an FFI abstraction
layer makes sense; at some point it might even go the other way around.
Once that happens, I think it would be good to have people looking after
intermediate FFI layers in general. But it does not have to be you of course.
Maybe we can create the "RUST [FFI]" entry already with the following
constraint:
RUST [FFI]
M: Joel Fernandes <joelagnelf@nvidia.com> (CLIST)
M: Alexandre Courbot <acourbot@nvidia.com> (CLIST)
L: rust-for-linux@vger.kernel.org
S: Maintained
F: rust/kernel/ffi/
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers
2026-02-19 15:31 ` Joel Fernandes
@ 2026-02-19 16:24 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 16:24 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Matthew Auld, Arun Pravin, Christian Koenig,
Dave Airlie, Koen Koning, dri-devel, nouveau, rust-for-linux,
intel-xe, Peter Senna Tschudin
On 2/19/2026 10:31 AM, Joel Fernandes wrote:
> On Thu, Feb 19, 2026 at 11:09:42AM +0100, Danilo Krummrich wrote:
>>> Fixes: ba110db8e1bc ("gpu: Move DRM buddy allocator one level up (part two)")
>>
>> This Fixes: tag seems wrong. How is this code move related to this problem?
>>
>> This should rather be:
>>
>> Fixes: 6387a3c4b0c4 ("drm: move the buddy allocator from i915 into common drm")
>
> You're right, the bug existed since the original move to common drm.
> Will update the Fixes tag.
>
>> Also, please add:
>>
>> Cc: stable@vger.kernel.org
>
> Will add.
>
>> I also think this patch should be sent separately and go through drm-misc-fixes.
>
> Agreed. I'll pull this patch out of the series and send it separately
> targeting drm-misc-fixes.
>
I just saw that Koen will send it separately, so I'll let him handle it. In
future postings, I will mark both this fix and the earlier 2 patches (part one
and two of the move) with the prefix [cherry-pick], since the only reason I am
carrying them is that they are a dependency not yet in the torvalds/master
branch. I should have made that clear. Thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 16:24 ` Danilo Krummrich
@ 2026-02-19 18:07 ` Joel Fernandes
2026-02-19 18:38 ` Miguel Ojeda
2026-02-20 1:56 ` Alexandre Courbot
2026-02-20 1:09 ` Gary Guo
1 sibling, 2 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 18:07 UTC (permalink / raw)
To: Danilo Krummrich
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On 2/19/2026 11:24 AM, Danilo Krummrich wrote:
> On Thu Feb 19, 2026 at 4:44 PM CET, Joel Fernandes wrote:
>>
>>
>> On 2/19/2026 10:27 AM, Joel Fernandes wrote:
>>> On Thu, Feb 19, 2026 at 12:21:56PM +0100, Danilo Krummrich wrote:
>>>> On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
>>>>> +RUST TO C LIST INTERFACES
>>>> Maybe this should just be "RUST [FFI]" instead (in case Alex and you want to
>>>> sign up for looking after FFI helper infrastructure in general)?
>>>
>>> Good idea, done.
>>
>> Actually, I am not sure we want to commit to the entire RUST FFI infra, though
>> it's pretty tiny right now. Most of this infra right now is clist; shall we
>> start by keeping it as "RUST TO C LIST INTERFACES"? Or we could make it a
>> "C LIST INTERFACES [RUST]" section.
>
> I feel like it makes a bit more sense to have an entry for the entire class of
> "RUST [FFI]" infrastructure.
>
> I could imagine that we will find quite some more cases where an FFI abstraction
> layer makes sense; at some point it might even go the other way around.
>
> Once that happens, I think it would be good to have people looking after
> intermediate FFI layers in general. But it does not have to be you of course.
>
> Maybe we can create the "RUST [FFI]" entry already with the following
> constraint:
>
> RUST [FFI]
> M: Joel Fernandes <joelagnelf@nvidia.com> (CLIST)
> M: Alexandre Courbot <acourbot@nvidia.com> (CLIST)
> L: rust-for-linux@vger.kernel.org
> S: Maintained
> F: rust/kernel/ffi/
Yeah, this is a good idea. I am Ok with that. Alex/Miguel, you're Ok with this too?
If all in agreement, I can make this change for next revision.
Thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 18:07 ` Joel Fernandes
@ 2026-02-19 18:38 ` Miguel Ojeda
2026-02-19 19:28 ` Joel Fernandes
2026-02-20 1:56 ` Alexandre Courbot
1 sibling, 1 reply; 74+ messages in thread
From: Miguel Ojeda @ 2026-02-19 18:38 UTC (permalink / raw)
To: Joel Fernandes, Alexandre Courbot
Cc: Danilo Krummrich, linux-kernel, Miguel Ojeda, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 7:07 PM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
> Yeah, this is a good idea. I am Ok with that. Alex/Miguel, you're Ok with this too?
>
> If all in agreement, I can make this change for next revision.
It would be very good to get you guys (and NVIDIA) more involved in
general, so thank you! :) -- (and Danilo for proposing it)
Would you like me to set up a branch for that, like `rust-ffi`? That is
what we have usually been doing lately for things like this, slowly
splitting things into more pieces. I see you both have already sent a
few GIT PULLs etc. in the past, so possibly this is not that
interesting for you, but we can still do it.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 18:38 ` Miguel Ojeda
@ 2026-02-19 19:28 ` Joel Fernandes
2026-02-19 22:55 ` Miguel Ojeda
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-19 19:28 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Alexandre Courbot, Danilo Krummrich, linux-kernel, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On Feb 19, 2026, at 1:38 PM, Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> wrote:
> On Thu, Feb 19, 2026 at 7:07 PM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>>
>> Yeah, this is a good idea. I am Ok with that. Alex/Miguel, you're Ok
>> with this too?
>>
>> If all in agreement, I can make this change for next revision.
>
> It would be very good to get you guys (and NVIDIA) more involved in
> general, so thank you! :) -- (and Danilo for proposing it)
Sounds good. :)
> Would you like me to set up a branch for that, like `rust-ffi`? That is
> what we have usually been doing lately for things like this, slowly
> splitting things into more pieces. I see you both have already sent a
> few GIT PULLs etc. in the past, so possibly this is not that
> interesting for you, but we can still do it.
I think let's see how it goes and what the volume is. If it is
light, then I/we could send you a pull request from a personal kernel.org
repository; if not, we can set up a branch at that time. What do
you think?
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 19:28 ` Joel Fernandes
@ 2026-02-19 22:55 ` Miguel Ojeda
2026-02-20 4:00 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Miguel Ojeda @ 2026-02-19 22:55 UTC (permalink / raw)
To: Joel Fernandes, Alexandre Courbot
Cc: Danilo Krummrich, linux-kernel, Miguel Ojeda, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Thu, Feb 19, 2026 at 8:29 PM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
> I think let's see how it goes and what the volume is. If it is
> light, then I/we could send you a pull request from a personal kernel.org
> repository; if not, we can set up a branch at that time. What do
> you think?
If it is very light, then we could just do Acked-bys, but setting up a
branch is easy:
https://github.com/Rust-for-Linux/linux/tree/ffi-next
Please feel free to use it (if so, please let me know your GitHub
handle -- I already have Alexandre's); otherwise, we can delete it.
Having all branches in the same place is good for others to have a
single place to look into, and has the advantage of being able to add
it there as a `T:` field already.
Thanks!
Cheers,
Miguel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 16:24 ` Danilo Krummrich
2026-02-19 18:07 ` Joel Fernandes
@ 2026-02-20 1:09 ` Gary Guo
2026-02-20 1:19 ` Miguel Ojeda
2026-02-20 16:48 ` Danilo Krummrich
1 sibling, 2 replies; 74+ messages in thread
From: Gary Guo @ 2026-02-20 1:09 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Joel Fernandes, linux-kernel, Miguel Ojeda, Boqun Feng,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On 2026-02-19 16:24, Danilo Krummrich wrote:
> On Thu Feb 19, 2026 at 4:44 PM CET, Joel Fernandes wrote:
>>
>>
>> On 2/19/2026 10:27 AM, Joel Fernandes wrote:
>>> On Thu, Feb 19, 2026 at 12:21:56PM +0100, Danilo Krummrich wrote:
>>>> On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
>>>>> +RUST TO C LIST INTERFACES
>>>> Maybe this should just be "RUST [FFI]" instead (in case Alex and you want to
>>>> sign up for looking after FFI helper infrastructure in general)?
>>>
>>> Good idea, done.
>>
>> Actually, I am not sure we want to commit to the entire RUST FFI infra, though
>> it's pretty tiny right now. Most of this infra right now is clist; shall we
>> start by keeping it as "RUST TO C LIST INTERFACES"? Or we could make it a
>> "C LIST INTERFACES [RUST]" section.
>
> I feel like it makes a bit more sense to have an entry for the entire class of
> "RUST [FFI]" infrastructure.
I don't think so. Most of the kernel crate is doing FFI. We have a `ffi` crate
defining FFI types, we have `CStr`/`CString` which in Rust std is inside `std::ffi`,
etc.
I feel that the FFI infra is the core responsibility of the top-level Rust entry,
while specific stuff can be split out.
Best,
Gary
>
> I could imagine that we will find quite some more cases where an FFI abstraction
> layer makes sense; at some point it might even go the other way around.
>
> Once that happens, I think it would be good to have people looking after
> intermediate FFI layers in general. But it does not have to be you of course.
>
> Maybe we can create the "RUST [FFI]" entry already with the following
> constraint:
>
> RUST [FFI]
> M: Joel Fernandes <joelagnelf@nvidia.com> (CLIST)
> M: Alexandre Courbot <acourbot@nvidia.com> (CLIST)
> L: rust-for-linux@vger.kernel.org
> S: Maintained
> F: rust/kernel/ffi/
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-20 1:09 ` Gary Guo
@ 2026-02-20 1:19 ` Miguel Ojeda
2026-02-20 16:48 ` Danilo Krummrich
1 sibling, 0 replies; 74+ messages in thread
From: Miguel Ojeda @ 2026-02-20 1:19 UTC (permalink / raw)
To: Gary Guo
Cc: Danilo Krummrich, Joel Fernandes, linux-kernel, Miguel Ojeda,
Boqun Feng, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On Fri, Feb 20, 2026 at 2:09 AM Gary Guo <gary@garyguo.net> wrote:
>
> I don't think so. Most of the kernel crate is doing FFI. We have a `ffi` crate
> defining FFI types, we have `CStr`/`CString` which in Rust std is inside `std::ffi`,
> etc.
Yeah, I don't love that name either, for similar reasons.
(The entry would still be a sub-entry, either way).
Thanks!
Cheers,
Miguel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 18:07 ` Joel Fernandes
2026-02-19 18:38 ` Miguel Ojeda
@ 2026-02-20 1:56 ` Alexandre Courbot
1 sibling, 0 replies; 74+ messages in thread
From: Alexandre Courbot @ 2026-02-20 1:56 UTC (permalink / raw)
To: Joel Fernandes
Cc: Danilo Krummrich, linux-kernel, Miguel Ojeda, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Fri Feb 20, 2026 at 3:07 AM JST, Joel Fernandes wrote:
>
>
> On 2/19/2026 11:24 AM, Danilo Krummrich wrote:
>> On Thu Feb 19, 2026 at 4:44 PM CET, Joel Fernandes wrote:
>>>
>>>
>>> On 2/19/2026 10:27 AM, Joel Fernandes wrote:
>>>> On Thu, Feb 19, 2026 at 12:21:56PM +0100, Danilo Krummrich wrote:
>>>>> On Wed Feb 18, 2026 at 9:55 PM CET, Joel Fernandes wrote:
>>>>>> +RUST TO C LIST INTERFACES
>>>>> Maybe this should just be "RUST [FFI]" instead (in case Alex and you want to
>>>>> sign up for looking after FFI helper infrastructure in general)?
>>>>
>>>> Good idea, done.
>>>
>>> Actually, I am not sure we want to commit to the entire RUST FFI infra, though
>>> it's pretty tiny right now. Most of this infra right now is clist; shall we
>>> start by keeping it as "RUST TO C LIST INTERFACES"? Or we could make it a
>>> "C LIST INTERFACES [RUST]" section.
>>
>> I feel like it makes a bit more sense to have an entry for the entire class of
>> "RUST [FFI]" infrastructure.
>>
>> I could imagine that we will find quite some more cases where an FFI abstraction
>> layer makes sense; at some point it might even go the other way around.
>>
>> Once that happens, I think it would be good to have people looking after
>> intermediate FFI layers in general. But it does not have to be you of course.
>>
>> Maybe we can create the "RUST [FFI]" entry already with the following
>> constraint:
>>
>> RUST [FFI]
>> M: Joel Fernandes <joelagnelf@nvidia.com> (CLIST)
>> M: Alexandre Courbot <acourbot@nvidia.com> (CLIST)
>> L: rust-for-linux@vger.kernel.org
>> S: Maintained
>> F: rust/kernel/ffi/
>
> Yeah, this is a good idea. I am Ok with that. Alex/Miguel, you're Ok with this too?
>
> If all in agreement, I can make this change for next revision.
Sure (once we agree on what the entry should be named); this should be
low-bandwidth anyway, as folks will be discouraged from using this module
whenever possible. :)
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-19 15:31 ` Joel Fernandes
@ 2026-02-20 1:56 ` Alexandre Courbot
2026-02-23 1:02 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Alexandre Courbot @ 2026-02-20 1:56 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Danilo Krummrich, Miguel Ojeda, Boqun Feng,
Gary Guo, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, dri-devel, nouveau,
rust-for-linux, Nikola Djukic
On Fri Feb 20, 2026 at 12:31 AM JST, Joel Fernandes wrote:
> On Thu, Feb 19, 2026 at 02:13:37PM +0900, Alexandre Courbot wrote:
>> > + types::Opaque,
>>
>> Need a `//` or `rustfmt` will reformat.
>
> Fixed, thanks.
>
>> > +// SAFETY: [`GpuBuddyInner`] can be sent between threads.
>>
>> No need to link on non-doccomments.
>
> Fixed. Removed the brackets from SAFETY comments for GpuBuddyInner
> and GpuBuddyGuard.
>
>> > +/// - `buddy` references a valid [`GpuBuddyInner`].
>>
>> rustdoc complains that this links to a private item in a public doc - we
>> should not mention `GpuBuddyInner` here.
>
> Per Miguel's reply, I've kept the mention but removed the square
> brackets so it doesn't try to create a link. This way it's still
> mentioned for practical reference without triggering the rustdoc
> warning.
Won't it be confusing for readers of the public documentation? If a type
is private, it shouldn't be relevant to folks who read the HTML.
This sounds like we instead want a regular `//` comment somewhere to
guide those who brave the code.
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-19 22:55 ` Miguel Ojeda
@ 2026-02-20 4:00 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-20 4:00 UTC (permalink / raw)
To: Miguel Ojeda, Alexandre Courbot
Cc: Danilo Krummrich, linux-kernel, Miguel Ojeda, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On 2/19/2026 5:55 PM, Miguel Ojeda wrote:
> On Thu, Feb 19, 2026 at 8:29 PM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>>
>> I think let us see how it goes and how much is the volume? If it is
>> light then I/we could send you a pull request from personal kernel.org
>> repository, if not then we can set up a branch at that time. What do
>> you think?
>
> If it is very light, then we could just do Acked-bys, but setting up a
> branch is easy:
>
> https://github.com/Rust-for-Linux/linux/tree/ffi-next
>
> Please feel free to use it (if so, please let me know your GitHub
> handle -- I already have Alexandre's); otherwise, we can delete it.
Sounds good, my GitHub handle is @joelagnel
>
> Having all branches in the same place is good for others to have a
> single place to look into, and has the advantage of being able to add
> it there as a `T:` field already.
Sure, I'll include it in `T:`.
Thanks!
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
` (2 preceding siblings ...)
2026-02-19 11:21 ` Danilo Krummrich
@ 2026-02-20 8:16 ` Eliot Courtney
2026-02-23 1:13 ` Joel Fernandes
2026-02-21 8:59 ` Alice Ryhl
4 siblings, 1 reply; 74+ messages in thread
From: Eliot Courtney @ 2026-02-20 8:16 UTC (permalink / raw)
To: Joel Fernandes, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Alexandre Courbot
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic, dri-devel
On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
> +/// Create a C doubly-circular linked list interface `CList` from a raw `list_head` pointer.
> +///
> +/// This macro creates a `CList<T, OFFSET>` that can iterate over items of type `$rust_type`
> +/// linked via the `$field` field in the underlying C struct `$c_type`.
> +///
> +/// # Arguments
> +///
> +/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
> +/// - `$rust_type`: Each item's rust wrapper type.
> +/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
> +/// - `$field`: The name of the `list_head` field within the C struct.
> +///
> +/// # Safety
> +///
> +/// This is an unsafe macro. The caller must ensure:
> +///
> +/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
> +/// unmodified for the lifetime of the rust `CList`.
> +/// - The list contains items of type `$c_type` linked via an embedded `$field`.
> +/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
> +///
> +/// # Examples
> +///
> +/// Refer to the examples in this module's documentation.
> +#[macro_export]
> +macro_rules! clist_create {
> + ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
> + // Compile-time check that field path is a list_head.
> + let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
> + |p| &raw const (*p).$($field).+;
> +
> + // Calculate offset and create `CList`.
> + const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
> + $crate::ffi::clist::CList::<$rust_type, OFFSET>::from_raw($head)
> + }};
> +}
This uses offset_of! in a way that requires the offset_of_nested
feature, so it doesn't build in rust 1.78.0. The feature is already
added to rust_allowed_features, so I think it's ok to add
#![feature(offset_of_nested)].
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-18 20:55 ` [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
2026-02-19 5:13 ` Alexandre Courbot
2026-02-19 13:18 ` Danilo Krummrich
@ 2026-02-20 8:22 ` Eliot Courtney
2026-02-20 14:54 ` Joel Fernandes
2 siblings, 1 reply; 74+ messages in thread
From: Eliot Courtney @ 2026-02-20 8:22 UTC (permalink / raw)
To: Joel Fernandes, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic, dri-devel
On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
> + struct gpu_buddy_block *block)
> +{
> + return gpu_buddy_block_size(mm, block);
> +}
> +
Will `rust_helper_gpu_buddy_block_size` be used in the future? It
doesn't appear to be used in buddy.rs.
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-20 8:22 ` Eliot Courtney
@ 2026-02-20 14:54 ` Joel Fernandes
2026-02-20 15:50 ` Joel Fernandes
2026-02-20 15:53 ` Danilo Krummrich
0 siblings, 2 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-20 14:54 UTC (permalink / raw)
To: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic, dri-devel
On 2/20/2026 3:22 AM, Eliot Courtney wrote:
> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
>> + struct gpu_buddy_block *block)
>> +{
>> + return gpu_buddy_block_size(mm, block);
>> +}
>> +
>
> Will `rust_helper_gpu_buddy_block_size` be used in the future? It
> doesn't appear to be used in buddy.rs.
I think it is worth keeping because it is a pretty basic API of the underlying
infrastructure. Finding the size of a block can be important in the future,
IMO. It is only a few lines, no?
Thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-20 14:54 ` Joel Fernandes
@ 2026-02-20 15:50 ` Joel Fernandes
2026-02-20 15:53 ` Danilo Krummrich
1 sibling, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-20 15:50 UTC (permalink / raw)
To: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic, dri-devel
On 2/20/2026 9:54 AM, Joel Fernandes wrote:
>
>
> On 2/20/2026 3:22 AM, Eliot Courtney wrote:
>> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>>> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
>>> + struct gpu_buddy_block *block)
>>> +{
>>> + return gpu_buddy_block_size(mm, block);
>>> +}
>>> +
>>
>> Will `rust_helper_gpu_buddy_block_size` be used in the future? It
>> doesn't appear to be used in buddy.rs.
>
> I think it is worth keeping because it is a pretty basic API the underlying
> infrastructure. Finding the size of a block can be important in the future
> IMO. It is only few lines, no?
By the way, this can become important for non-contiguous physical memory
allocations, where an allocation is split across different blocks. In that
case, we would need the size of each individual block, not just that of the
whole allocation. I could probably add a test case for that.
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-20 14:54 ` Joel Fernandes
2026-02-20 15:50 ` Joel Fernandes
@ 2026-02-20 15:53 ` Danilo Krummrich
2026-02-20 21:20 ` Joel Fernandes
1 sibling, 1 reply; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-20 15:53 UTC (permalink / raw)
To: Joel Fernandes
Cc: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
nouveau, rust-for-linux, Nikola Djukic, dri-devel
On Fri Feb 20, 2026 at 3:54 PM CET, Joel Fernandes wrote:
>
>
> On 2/20/2026 3:22 AM, Eliot Courtney wrote:
>> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>>> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
>>> + struct gpu_buddy_block *block)
>>> +{
>>> + return gpu_buddy_block_size(mm, block);
>>> +}
>>> +
>>
>> Will `rust_helper_gpu_buddy_block_size` be used in the future? It
>> doesn't appear to be used in buddy.rs.
>
> I think it is worth keeping because it is a pretty basic API of the underlying
> infrastructure. Finding the size of a block can be important in the future,
> IMO. It is only a few lines, no?
The helper should be added with the code using it.
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-20 1:09 ` Gary Guo
2026-02-20 1:19 ` Miguel Ojeda
@ 2026-02-20 16:48 ` Danilo Krummrich
2026-02-23 0:54 ` Joel Fernandes
2026-02-25 19:48 ` Boqun Feng
1 sibling, 2 replies; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-20 16:48 UTC (permalink / raw)
To: Gary Guo
Cc: Joel Fernandes, linux-kernel, Miguel Ojeda, Boqun Feng,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Fri Feb 20, 2026 at 2:09 AM CET, Gary Guo wrote:
> On 2026-02-19 16:24, Danilo Krummrich wrote:
>> I feel like it makes a bit more sense to have an entry for the entire class of
>> "RUST [FFI]" infrastructure.
>
> I don't think so. Most of the kernel crate is doing FFI. We have a `ffi` crate
> defining FFI types, we have `CStr`/`CString` which in Rust std is inside `std::ffi`,
> etc.
The idea is not that everything that somehow has an FFI interface falls under
this category, as this would indeed be the majority.
The idea is rather everything that is specifically designed as a helper to
implement FFI interactions. (Given that maybe just "RUST [FFI HELPER]"?)
For instance, this would also apply to Opaque and ForeignOwnable. But also CStr
and CString, as you say.
But there's also lots of stuff that does not fall under this category, such as
pin-init, alloc, syn, num, bits (genmask), fmt, slice, revocable, list, ptr, assert,
print, arc, etc.
There are also things that are more on the "partially" side of things, such as
transmute, error or aref.
> I feel that the FFI infra is the core responsibility of the top-level Rust entry,
> while specific stuff can be split out.
I think the core responsibilities are compiler and general design topics, such
as abstraction design, (safety) documentation, etc., as well as core language
infrastructure, such as pin-init, syn, alloc, arc, etc.
Given the definition "helper to implement FFI interactions" I feel like we have
much more infrastructure that is not for this specific purpose.
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-20 15:53 ` Danilo Krummrich
@ 2026-02-20 21:20 ` Joel Fernandes
2026-02-20 23:43 ` Danilo Krummrich
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-20 21:20 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
nouveau, rust-for-linux, Nikola Djukic, dri-devel
On 2/20/2026 10:53 AM, Danilo Krummrich wrote:
> On Fri Feb 20, 2026 at 3:54 PM CET, Joel Fernandes wrote:
>>
>>
>> On 2/20/2026 3:22 AM, Eliot Courtney wrote:
>>> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>>>> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
>>>> + struct gpu_buddy_block *block)
>>>> +{
>>>> + return gpu_buddy_block_size(mm, block);
>>>> +}
>>>> +
>>>
>>> Will `rust_helper_gpu_buddy_block_size` be used in the future? It
>>> doesn't appear to be used in buddy.rs.
>>
>> I think it is worth keeping because it is a pretty basic API of the
>> underlying infrastructure. Finding the size of a block can be important
>> in the future, IMO. It is only a few lines, no?
>
> The helper should be added with the code using it.
I will add this as a test case to exercise it and include it in that patch.
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-20 21:20 ` Joel Fernandes
@ 2026-02-20 23:43 ` Danilo Krummrich
2026-02-23 0:34 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Danilo Krummrich @ 2026-02-20 23:43 UTC (permalink / raw)
To: Joel Fernandes
Cc: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
nouveau, rust-for-linux, Nikola Djukic, dri-devel
On Fri Feb 20, 2026 at 10:20 PM CET, Joel Fernandes wrote:
>
>
> On 2/20/2026 10:53 AM, Danilo Krummrich wrote:
>> On Fri Feb 20, 2026 at 3:54 PM CET, Joel Fernandes wrote:
>>>
>>>
>>> On 2/20/2026 3:22 AM, Eliot Courtney wrote:
>>>> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>>>>> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
>>>>> + struct gpu_buddy_block *block)
>>>>> +{
>>>>> + return gpu_buddy_block_size(mm, block);
>>>>> +}
>>>>> +
>>>>
>>>> Will `rust_helper_gpu_buddy_block_size` be used in the future? It
>>>> doesn't appear to be used in buddy.rs.
>>>
>>> I think it is worth keeping because it is a pretty basic API of the
>>> underlying infrastructure. Finding the size of a block can be important
>>> in the future, IMO. It is only a few lines, no?
>>
>> The helper should be added with the code using it.
>
> I will add this as a test case to exercise it and include it in that patch.
A test case for a helper? Or do you mean you will add the actual abstraction?
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
` (3 preceding siblings ...)
2026-02-20 8:16 ` Eliot Courtney
@ 2026-02-21 8:59 ` Alice Ryhl
2026-02-23 0:41 ` Joel Fernandes
4 siblings, 1 reply; 74+ messages in thread
From: Alice Ryhl @ 2026-02-21 8:59 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Danilo Krummrich, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On Wed, Feb 18, 2026 at 03:55:03PM -0500, Joel Fernandes wrote:
> Add a new module `clist` for working with C's doubly circular linked
> lists. Provide low-level iteration over list nodes.
>
> Typed iteration over actual items is provided with a `clist_create`
> macro to assist in creation of the `CList` type.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
> Acked-by: Gary Guo <gary@garyguo.net>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
In general this looks like a useful tool to write other abstractions, so
that's good. A few nits below.
Also, I think it would make more sense to split this series into two
with titles like this:
* Add clist helper for writing abstractions using C lists
* Move buddy allocator one level up
That way, you can tell what the series actually does from its title.
Yes, the 'why' of a series is very important, and must be included in
the cover letter or commit messages, but I think the title of a series
should explain the 'what', not the 'why'.
> +impl CListHead {
> + /// Create a `&CListHead` reference from a raw `list_head` pointer.
> + ///
> + /// # Safety
> + ///
> + /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
> + /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
> + /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
> + /// for the lifetime `'a`.
I don't think C vs Rust is useful here. What you want is that the list
is not modified by random other code in ways you didn't expect. It
doesn't matter if it's C or Rust code that carries out the illegal
modification.
> +// SAFETY: [`CListHead`] can be sent to any thread.
> +unsafe impl Send for CListHead {}
> +
> +// SAFETY: [`CListHead`] can be shared among threads as it is not modified
> +// by non-Rust code per safety requirements of [`CListHead::from_raw`].
> +unsafe impl Sync for CListHead {}
Same here. If another piece of Rust code modifies the list in parallel
from another thread, you'll have a bad time too. C vs Rust does not
matter.
Alice
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-20 23:43 ` Danilo Krummrich
@ 2026-02-23 0:34 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-23 0:34 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, Koen Koning, dri-devel,
nouveau, rust-for-linux, Nikola Djukic, dri-devel
On 2/20/2026 6:43 PM, Danilo Krummrich wrote:
> On Fri Feb 20, 2026 at 10:20 PM CET, Joel Fernandes wrote:
>>
>>
>> On 2/20/2026 10:53 AM, Danilo Krummrich wrote:
>>> On Fri Feb 20, 2026 at 3:54 PM CET, Joel Fernandes wrote:
>>>>
>>>>
>>>> On 2/20/2026 3:22 AM, Eliot Courtney wrote:
>>>>> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>>>>>> +__rust_helper u64 rust_helper_gpu_buddy_block_size(struct gpu_buddy *mm,
>>>>>> + struct gpu_buddy_block *block)
>>>>>> +{
>>>>>> + return gpu_buddy_block_size(mm, block);
>>>>>> +}
>>>>>> +
>>>>>
>>>>> Will `rust_helper_gpu_buddy_block_size` be used in the future? It
>>>>> doesn't appear to be used in buddy.rs.
>>>>
>>>> I think it is worth keeping because it is a pretty basic API of the
>>>> underlying infrastructure. Finding the size of a block can be important
>>>> in the future, IMO. It is only a few lines, no?
>>>
>>> The helper should be added with the code using it.
>>
>> I will add this as a test case to exercise it and include it in that patch.
>
> A test case for a helper? Or do you mean you will add the actual abstraction?
Actual abstraction.
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-21 8:59 ` Alice Ryhl
@ 2026-02-23 0:41 ` Joel Fernandes
2026-02-23 9:38 ` Alice Ryhl
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-23 0:41 UTC (permalink / raw)
To: Alice Ryhl
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Danilo Krummrich, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
Hi Alice,
On 2/21/2026 3:59 AM, Alice Ryhl wrote:
> On Wed, Feb 18, 2026 at 03:55:03PM -0500, Joel Fernandes wrote:
>> Add a new module `clist` for working with C's doubly circular linked
>> lists. Provide low-level iteration over list nodes.
>>
>> Typed iteration over actual items is provided with a `clist_create`
>> macro to assist in creation of the `CList` type.
>>
>> Cc: Nikola Djukic <ndjukic@nvidia.com>
>> Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
>> Acked-by: Gary Guo <gary@garyguo.net>
>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>
> In general this looks like a useful tool to write other abstractions, so
> that's good. A few nits below.
>
> Also, I think it would make more sense to split this series into two
> with titles like this:
>
> * Add clist helper for writing abstractions using C lists
> * Move buddy allocator one level up
>
> That way, you can tell what the series actually does from its title.
> Yes, the 'why' of a series is very important, and must be included in
> the cover letter or commit messages, but I think the title of a series
> should explain the 'what', not the 'why'.
Sure, that makes sense. I can indeed move the buddy patches into a
different series.
>
>> +impl CListHead {
>> + /// Create a `&CListHead` reference from a raw `list_head` pointer.
>> + ///
>> + /// # Safety
>> + ///
>> + /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
>> + /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
>> + /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
>> + /// for the lifetime `'a`.
>
> I don't think C vs Rust is useful here. What you want is that the list
> is not modified by random other code in ways you didn't expect. It
> doesn't matter if it's C or Rust code that carries out the illegal
> modification.
Yeah, this is true. I will change it to the following then:
"The list and all linked `list_head` nodes must not be modified from
anywhere for the lifetime `'a`."
>
>> +// SAFETY: [`CListHead`] can be sent to any thread.
>> +unsafe impl Send for CListHead {}
>> +
>> +// SAFETY: [`CListHead`] can be shared among threads as it is not modified
>> +// by non-Rust code per safety requirements of [`CListHead::from_raw`].
>> +unsafe impl Sync for CListHead {}
>
> Same here. If another piece of Rust code modifies the list in parallel
> from another thread, you'll have a bad time too. C vs Rust does not
> matter.
Ack, will change it to:
// SAFETY: [`CListHead`] can be shared among threads as it is
// read-only per safety requirements of [`CListHead::from_raw`].
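As a standalone illustration of why the read-only contract is what makes
`Sync` sound here (an illustrative wrapper with hypothetical names, not the
actual `CListHead`):

```rust
// Standalone sketch of why "read-only per the safety contract" can
// justify Sync: a wrapper over a raw pointer whose API only reads.
// Illustrative only; not the kernel's CListHead.

use std::thread;

struct ReadOnlyView {
    ptr: *const u64,
}

// SAFETY: the wrapper only ever reads through `ptr`, and the safety
// contract of its constructor requires the pointee not be modified
// for the wrapper's lifetime, so concurrent shared access is fine.
unsafe impl Send for ReadOnlyView {}
unsafe impl Sync for ReadOnlyView {}

impl ReadOnlyView {
    /// # Safety
    ///
    /// `ptr` must be valid and unmodified for as long as the view lives.
    unsafe fn new(ptr: *const u64) -> Self {
        Self { ptr }
    }

    fn get(&self) -> u64 {
        // SAFETY: per the constructor's contract, reads are always valid.
        unsafe { *self.ptr }
    }
}

fn main() {
    let data = 42u64;
    // SAFETY: `data` outlives `view` and is never modified below.
    let view = unsafe { ReadOnlyView::new(&data) };
    // Shared, read-only access from two threads is sound here.
    let sum: u64 = thread::scope(|s| {
        let a = s.spawn(|| view.get());
        let b = s.spawn(|| view.get());
        a.join().unwrap() + b.join().unwrap()
    });
    println!("{sum}");
}
```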
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-20 16:48 ` Danilo Krummrich
@ 2026-02-23 0:54 ` Joel Fernandes
2026-02-24 16:15 ` Miguel Ojeda
2026-02-25 19:48 ` Boqun Feng
1 sibling, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-23 0:54 UTC (permalink / raw)
To: Danilo Krummrich, Gary Guo
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Alexandre Courbot, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic
On 2/20/2026 11:48 AM, Danilo Krummrich wrote:
> On Fri Feb 20, 2026 at 2:09 AM CET, Gary Guo wrote:
>> On 2026-02-19 16:24, Danilo Krummrich wrote:
>>> I feel like it makes a bit more sense to have an entry for the entire class of
>>> "RUST [FFI]" infrastructure.
>>
>> I don't think so. Most of the kernel crate is doing FFI. We have a `ffi` crate
>> defining FFI types, we have `CStr`/`CString` which in Rust std is inside `std::ffi`,
>> etc.
>
> The idea is not that everything that somehow has an FFI interface falls under
> this category, as this would indeed be the majority.
>
> The idea is rather everything that is specifically designed as a helper to
> implement FFI interactions. (Given that maybe just "RUST [FFI HELPER]"?)
I do tend to agree with Danilo on this. Unless someone yells, I will change
the maintainer entry to "RUST [FFI HELPER]" for the next spin.
thanks,
--
Joel Fernandes
>
> For instance, this would also apply to Opaque and ForeignOwnable. But also CStr
> and CString, as you say.
>
> But there's also lots of stuff that does not fall under this category, such as
> pin-init, alloc, syn, num, bits (genmask), fmt, slice, revocable, list, ptr, assert,
> print, arc, etc.
>
> There are also things that are more on the "partially" side of things, such as
> transmute, error or aref.
>
>> I feel that the FFI infra is the core responsibility of the top-level Rust entry,
>> while specific stuff can be split out.
>
> I think the core responsibilities are compiler and general design topics, such
> as abstraction design, (safety) documentation, etc., as well as core language
> infrastructure, such as pin-init, syn, alloc, arc, etc.
>
> Given the definition "helper to implement FFI interactions" I feel like we have
> much more infrastructure that is not for this specific purpose.
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-20 1:56 ` Alexandre Courbot
@ 2026-02-23 1:02 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-23 1:02 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Danilo Krummrich, Miguel Ojeda, Boqun Feng,
Gary Guo, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, dri-devel, nouveau,
rust-for-linux, Nikola Djukic
On 2/19/2026 8:56 PM, Alexandre Courbot wrote:
> On Fri Feb 20, 2026 at 12:31 AM JST, Joel Fernandes wrote:
>> On Thu, Feb 19, 2026 at 02:13:37PM +0900, Alexandre Courbot wrote:
>>>> + types::Opaque,
>>>
>>> Need a `//` or `rustfmt` will reformat.
>>
>> Fixed, thanks.
>>
>>>> +// SAFETY: [`GpuBuddyInner`] can be sent between threads.
>>>
>>> No need to link on non-doccomments.
>>
>> Fixed. Removed the brackets from SAFETY comments for GpuBuddyInner
>> and GpuBuddyGuard.
>>
>>>> +/// - `buddy` references a valid [`GpuBuddyInner`].
>>>
>>> rustdoc complains that this links to a private item in a public doc - we
>>> should not mention `GpuBuddyInner` here.
>>
>> Per Miguel's reply, I've kept the mention but removed the square
>> brackets so it doesn't try to create a link. This way it's still
>> mentioned for practical reference without triggering the rustdoc
>> warning.
>
> Won't it be confusing for readers of the public documentation? If a type
> is private, it shouldn't be needed to folks who read the HTML.
>
> This sounds like we instead want a regular `//` comment somewhere to
> guide those who brave the code.
You are right about the audience of the docs perhaps not requiring this
information. I will remove it then.
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-20 8:16 ` Eliot Courtney
@ 2026-02-23 1:13 ` Joel Fernandes
2026-02-24 2:08 ` Eliot Courtney
2026-02-24 7:28 ` Alice Ryhl
0 siblings, 2 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-23 1:13 UTC (permalink / raw)
To: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Alexandre Courbot
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic, dri-devel
Hi Eliot,
On 2/20/2026 3:16 AM, Eliot Courtney wrote:
> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>> +/// Create a C doubly-circular linked list interface `CList` from a raw `list_head` pointer.
>> +///
>> +/// This macro creates a `CList<T, OFFSET>` that can iterate over items of type `$rust_type`
>> +/// linked via the `$field` field in the underlying C struct `$c_type`.
>> +///
>> +/// # Arguments
>> +///
>> +/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
>> +/// - `$rust_type`: Each item's rust wrapper type.
>> +/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
>> +/// - `$field`: The name of the `list_head` field within the C struct.
>> +///
>> +/// # Safety
>> +///
>> +/// This is an unsafe macro. The caller must ensure:
>> +///
>> +/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
>> +/// unmodified for the lifetime of the rust `CList`.
>> +/// - The list contains items of type `$c_type` linked via an embedded `$field`.
>> +/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
>> +///
>> +/// # Examples
>> +///
>> +/// Refer to the examples in this module's documentation.
>> +#[macro_export]
>> +macro_rules! clist_create {
>> + ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
>> + // Compile-time check that field path is a list_head.
>> + let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
>> + |p| &raw const (*p).$($field).+;
>> +
>> + // Calculate offset and create `CList`.
>> + const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
>> + $crate::ffi::clist::CList::<$rust_type, OFFSET>::from_raw($head)
>> + }};
>> +}
>
> This uses offset_of! in a way that requires the offset_of_nested
> feature, so it doesn't build in rust 1.78.0. The feature is already
> added to rust_allowed_features, so I think it's ok to add
> #![feature(offset_of_nested)].
Maybe I am missing something, but why should the feature be gated behind
that if all compiler versions (>= 1.78) support it either in a stable way
or via an unstable feature flag?
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-23 0:41 ` Joel Fernandes
@ 2026-02-23 9:38 ` Alice Ryhl
2026-02-24 0:32 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Alice Ryhl @ 2026-02-23 9:38 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Danilo Krummrich, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On Sun, Feb 22, 2026 at 07:41:44PM -0500, Joel Fernandes wrote:
> Hi Alice,
>
> On 2/21/2026 3:59 AM, Alice Ryhl wrote:
> > On Wed, Feb 18, 2026 at 03:55:03PM -0500, Joel Fernandes wrote:
> >> +impl CListHead {
> >> + /// Create a `&CListHead` reference from a raw `list_head` pointer.
> >> + ///
> >> + /// # Safety
> >> + ///
> >> + /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
> >> + /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
> >> + /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
> >> + /// for the lifetime `'a`.
> >
> > I don't think C vs Rust is useful here. What you want is that the list
> > is not modified by random other code in ways you didn't expect. It
> > doesn't matter if it's C or Rust code that carries out the illegal
> > modification.
>
> Yeah, this is true. I will change it to the following then:
>
> "The list and all linked `list_head` nodes must not be modified from
> anywhere for the lifetime `'a`."
Ok. Perhaps you should say that it must not be modified except through
this CListHead? I guess it depends on whether you want to add methods
for changing the list via this API.
Alice
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-23 9:38 ` Alice Ryhl
@ 2026-02-24 0:32 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-24 0:32 UTC (permalink / raw)
To: Alice Ryhl
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Danilo Krummrich, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On 2/23/2026 4:38 AM, Alice Ryhl wrote:
> On Sun, Feb 22, 2026 at 07:41:44PM -0500, Joel Fernandes wrote:
>> Hi Alice,
>>
>> On 2/21/2026 3:59 AM, Alice Ryhl wrote:
>>> On Wed, Feb 18, 2026 at 03:55:03PM -0500, Joel Fernandes wrote:
>>>> +impl CListHead {
>>>> + /// Create a `&CListHead` reference from a raw `list_head` pointer.
>>>> + ///
>>>> + /// # Safety
>>>> + ///
>>>> + /// - `ptr` must be a valid pointer to an allocated and initialized `list_head` structure.
>>>> + /// - `ptr` must remain valid and unmodified for the lifetime `'a`.
>>>> + /// - The list and all linked `list_head` nodes must not be modified by non-Rust code
>>>> + /// for the lifetime `'a`.
>>>
>>> I don't think C vs Rust is useful here. What you want is that the list
>>> is not modified by random other code in ways you didn't expect. It
>>> doesn't matter if it's C or Rust code that carries out the illegal
>>> modification.
>>
>> Yeah, this is true. I will change it to the following then:
>>
>> "The list and all linked `list_head` nodes must not be modified from
>> anywhere for the lifetime `'a`."
>
> Ok. Perhaps you should say that it must not be modified except through
> this CListHead? I guess it depends on whether you want to add methods
> for changing the list via this API.
>
At the moment there isn't a use case for it, but I predict we will want one
for other such scenarios, so yes, I will change it to your suggestion to
future-proof it:
"The list and all linked `list_head` nodes must not be modified from
anywhere for the lifetime `'a`, unless done via the `CListHead` APIs."
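For context, the read-only traversal this requirement protects boils down to
an offset-based walk over a circular list. A standalone sketch with
illustrative types (the real code goes through `CList` and the kernel
bindings):

```rust
// Standalone model of iterating a C-style circular doubly linked list:
// walk `next` pointers from the sentinel, and recover each containing
// item by subtracting the embedded node's byte offset (container_of).
// Illustrative types only; not the actual kernel CList code.

use core::mem::offset_of;
use core::ptr;

#[repr(C)]
struct ListHead {
    next: *mut ListHead,
    prev: *mut ListHead,
}

#[repr(C)]
struct Item {
    value: u32,
    link: ListHead, // embedded list node
}

fn main() {
    let null = ptr::null_mut();
    // Build a sentinel plus two items linked in a circle:
    // sentinel -> a.link -> b.link -> sentinel.
    let mut sentinel = ListHead { next: null, prev: null };
    let mut a = Item { value: 1, link: ListHead { next: null, prev: null } };
    let mut b = Item { value: 2, link: ListHead { next: null, prev: null } };

    sentinel.next = &mut a.link;
    a.link.next = &mut b.link;
    b.link.next = &mut sentinel;
    // prev pointers elided; this walk only follows next.

    const OFFSET: usize = offset_of!(Item, link);

    let mut values = Vec::new();
    let mut cur = sentinel.next;
    // Stop once we come back around to the sentinel.
    while !ptr::eq(cur, &sentinel) {
        // container_of: step back from the embedded node to the item.
        let item = unsafe { &*(cur.cast::<u8>().sub(OFFSET) as *const Item) };
        values.push(item.value);
        cur = unsafe { (*cur).next };
    }
    println!("{:?}", values);
}
```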
Let me know if that looks good.
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-23 1:13 ` Joel Fernandes
@ 2026-02-24 2:08 ` Eliot Courtney
2026-02-24 7:28 ` Alice Ryhl
1 sibling, 0 replies; 74+ messages in thread
From: Eliot Courtney @ 2026-02-24 2:08 UTC (permalink / raw)
To: Joel Fernandes, Eliot Courtney, linux-kernel, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Alexandre Courbot
Cc: Dave Airlie, Daniel Almeida, Koen Koning, dri-devel, nouveau,
rust-for-linux, Nikola Djukic, dri-devel
On Mon Feb 23, 2026 at 10:13 AM JST, Joel Fernandes wrote:
>>> +macro_rules! clist_create {
>>> + ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
>>> + // Compile-time check that field path is a list_head.
>>> + let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
>>> + |p| &raw const (*p).$($field).+;
>>> +
>>> + // Calculate offset and create `CList`.
>>> + const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
>>> + $crate::ffi::clist::CList::<$rust_type, OFFSET>::from_raw($head)
>>> + }};
>>> +}
>>
>> This uses offset_of! in a way that requires the offset_of_nested
>> feature, so it doesn't build in rust 1.78.0. The feature is already
>> added to rust_allowed_features, so I think it's ok to add
>> #![feature(offset_of_nested)].
>
> Maybe I am missing something, but why should the feature be gated behind
> that if all compiler versions (>= 1.78) support it either in a stable way
> or via an unstable feature flag?
I think that's why it's in rust_allowed_features. IIUC that's where we
put unstable features that are stable enough in all supported compiler
versions to be used.
But, rust_allowed_features doesn't apply to the code here, so you'd have
to add an allow to rust/kernel/lib.rs.
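For what it's worth, here is a self-contained sketch of the nested
`offset_of!` pattern (plus the compile-time field-type check) that the macro
relies on, with illustrative `repr(C)` types rather than the real bindings;
the nested field path is what needs offset_of_nested on older compilers:

```rust
// Self-contained sketch of the pattern clist_create! relies on:
// a compile-time check that a (possibly nested) field is a list_head,
// plus offset_of! to find where the node lives inside the item.
// Types are illustrative, not the real kernel bindings.

use core::mem::offset_of;

#[repr(C)]
struct ListHead {
    next: *mut ListHead,
    prev: *mut ListHead,
}

#[repr(C)]
struct Inner {
    flags: u64,
    link: ListHead, // embedded list node, one level down
}

#[repr(C)]
struct Item {
    value: u64,
    inner: Inner,
}

fn main() {
    // Compile-time check that the field path really is a ListHead:
    // the closure only type-checks if `(*p).inner.link` is a ListHead.
    let _: fn(*const Item) -> *const ListHead =
        |p| unsafe { &raw const (*p).inner.link };

    // Nested field path; gated behind offset_of_nested on older
    // compilers, stable on recent ones.
    const OFFSET: usize = offset_of!(Item, inner.link);
    println!("{}", OFFSET);
}
```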
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-23 1:13 ` Joel Fernandes
2026-02-24 2:08 ` Eliot Courtney
@ 2026-02-24 7:28 ` Alice Ryhl
2026-02-24 16:00 ` Joel Fernandes
1 sibling, 1 reply; 74+ messages in thread
From: Alice Ryhl @ 2026-02-24 7:28 UTC (permalink / raw)
To: Joel Fernandes
Cc: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Danilo Krummrich, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic, dri-devel
On Mon, Feb 23, 2026 at 2:13 AM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
> Hi Eliot,
>
> On 2/20/2026 3:16 AM, Eliot Courtney wrote:
> > On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
> >> +/// Create a C doubly-circular linked list interface `CList` from a raw `list_head` pointer.
> >> +///
> >> +/// This macro creates a `CList<T, OFFSET>` that can iterate over items of type `$rust_type`
> >> +/// linked via the `$field` field in the underlying C struct `$c_type`.
> >> +///
> >> +/// # Arguments
> >> +///
> >> +/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
> >> +/// - `$rust_type`: Each item's rust wrapper type.
> >> +/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
> >> +/// - `$field`: The name of the `list_head` field within the C struct.
> >> +///
> >> +/// # Safety
> >> +///
> >> +/// This is an unsafe macro. The caller must ensure:
> >> +///
> >> +/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
> >> +/// unmodified for the lifetime of the rust `CList`.
> >> +/// - The list contains items of type `$c_type` linked via an embedded `$field`.
> >> +/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
> >> +///
> >> +/// # Examples
> >> +///
> >> +/// Refer to the examples in this module's documentation.
> >> +#[macro_export]
> >> +macro_rules! clist_create {
> >> + ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
> >> + // Compile-time check that field path is a list_head.
> >> + let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
> >> + |p| &raw const (*p).$($field).+;
> >> +
> >> + // Calculate offset and create `CList`.
> >> + const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
> >> + $crate::ffi::clist::CList::<$rust_type, OFFSET>::from_raw($head)
> >> + }};
> >> +}
> >
> > This uses offset_of! in a way that requires the offset_of_nested
> > feature, so it doesn't build in rust 1.78.0. The feature is already
> > added to rust_allowed_features, so I think it's ok to add
> > #![feature(offset_of_nested)].
>
> Maybe I am missing something, but why should the feature be gated behind
> that if all compiler versions (>= 1.78) support it either in a stable way
> or via an unstable feature flag?
The rust_allowed_features list only applies to drivers and such. It
doesn't apply to the Rust crates in the rust/ directory, which need to
use #![feature] annotations manually.
Alice
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-24 7:28 ` Alice Ryhl
@ 2026-02-24 16:00 ` Joel Fernandes
2026-02-24 16:11 ` Miguel Ojeda
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-24 16:00 UTC (permalink / raw)
To: Alice Ryhl
Cc: Eliot Courtney, linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Danilo Krummrich, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic, dri-devel
On 2/24/2026 2:28 AM, Alice Ryhl wrote:
> On Mon, Feb 23, 2026 at 2:13 AM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>>
>> Hi Eliot,
>>
>> On 2/20/2026 3:16 AM, Eliot Courtney wrote:
>>> On Thu Feb 19, 2026 at 5:55 AM JST, Joel Fernandes wrote:
>>>> +/// Create a C doubly-circular linked list interface `CList` from a raw `list_head` pointer.
>>>> +///
>>>> +/// This macro creates a `CList<T, OFFSET>` that can iterate over items of type `$rust_type`
>>>> +/// linked via the `$field` field in the underlying C struct `$c_type`.
>>>> +///
>>>> +/// # Arguments
>>>> +///
>>>> +/// - `$head`: Raw pointer to the sentinel `list_head` object (`*mut bindings::list_head`).
>>>> +/// - `$rust_type`: Each item's rust wrapper type.
>>>> +/// - `$c_type`: Each item's C struct type that contains the embedded `list_head`.
>>>> +/// - `$field`: The name of the `list_head` field within the C struct.
>>>> +///
>>>> +/// # Safety
>>>> +///
>>>> +/// This is an unsafe macro. The caller must ensure:
>>>> +///
>>>> +/// - `$head` is a valid, initialized sentinel `list_head` pointing to a list that remains
>>>> +/// unmodified for the lifetime of the rust `CList`.
>>>> +/// - The list contains items of type `$c_type` linked via an embedded `$field`.
>>>> +/// - `$rust_type` is `#[repr(transparent)]` over `$c_type` or has compatible layout.
>>>> +///
>>>> +/// # Examples
>>>> +///
>>>> +/// Refer to the examples in this module's documentation.
>>>> +#[macro_export]
>>>> +macro_rules! clist_create {
>>>> + ($head:expr, $rust_type:ty, $c_type:ty, $($field:tt).+) => {{
>>>> + // Compile-time check that field path is a list_head.
>>>> + let _: fn(*const $c_type) -> *const $crate::bindings::list_head =
>>>> + |p| &raw const (*p).$($field).+;
>>>> +
>>>> + // Calculate offset and create `CList`.
>>>> + const OFFSET: usize = ::core::mem::offset_of!($c_type, $($field).+);
>>>> + $crate::ffi::clist::CList::<$rust_type, OFFSET>::from_raw($head)
>>>> + }};
>>>> +}
>>>
>>> This uses offset_of! in a way that requires the offset_of_nested
>>> feature, so it doesn't build in rust 1.78.0. The feature is already
>>> added to rust_allowed_features, so I think it's ok to add
>>> #![feature(offset_of_nested)].
>>
>> Maybe I am missing something, but why should the feature be gated behind
>> that if all compiler versions (>= 1.78) support it either in a stable way
>> or via an unstable feature flag?
>
> The rust_allowed_features list only applies to drivers and such. It
> doesn't apply to the Rust crates in the rust/ directory, which need to
> use #![feature] annotations manually.
Ah fun! Ok, I will add it in. :)
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-24 16:00 ` Joel Fernandes
@ 2026-02-24 16:11 ` Miguel Ojeda
0 siblings, 0 replies; 74+ messages in thread
From: Miguel Ojeda @ 2026-02-24 16:11 UTC (permalink / raw)
To: Joel Fernandes
Cc: Alice Ryhl, Eliot Courtney, linux-kernel, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Danilo Krummrich,
Alexandre Courbot, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Tue, Feb 24, 2026 at 5:00 PM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
> Ah fun! Ok, I will add it in. :)
To clarify further, the reason is that the crates in `rust/` may (and
do) use more features (which still need justification), but the rest
of the kernel is restricted to those in that "allowed" list only.
https://rust-for-linux.com/unstable-features#usage-in-the-kernel
Cheers,
Miguel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-23 0:54 ` Joel Fernandes
@ 2026-02-24 16:15 ` Miguel Ojeda
0 siblings, 0 replies; 74+ messages in thread
From: Miguel Ojeda @ 2026-02-24 16:15 UTC (permalink / raw)
To: Joel Fernandes
Cc: Danilo Krummrich, Gary Guo, linux-kernel, Miguel Ojeda,
Boqun Feng, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Alexandre Courbot, Dave Airlie,
Daniel Almeida, Koen Koning, dri-devel, nouveau, rust-for-linux,
Nikola Djukic
On Mon, Feb 23, 2026 at 1:54 AM Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
> I do tend to agree with Danilo on this. Unless someone yells, I will change
> the maintainer entry to "RUST [FFI HELPER]" for the next spin.
Not sure I am following the "what FFI means" discussion, but in case
it clarifies:
The Rust subsystem is meant to be about anything related in some way
to Rust; typically meaning all the things that are not covered
elsewhere (including what both Gary and Danilo mention), sometimes
with overlap with other "global" infrastructure/subsystems (e.g.
Kbuild), sometimes acting as a fallback for Rust code out there if
really needed (i.e. similar to Andrew, but for Rust bits), and so on.
From what I understand, Joel and Alexandre want to focus on
maintaining the `clist` bits (at least for now), and if they are both
going to have the "(CLIST)" suffix, then it may be simpler to make the
entry just that for now, since `MAINTAINERS` is easy to change
anyway.
Now, what kind of things would we want to have inside such an `ffi`
module (apart from `clist`)? Does this mean the proposal is to
eventually move existing things like `CStr`?
Cheers,
Miguel
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-20 16:48 ` Danilo Krummrich
2026-02-23 0:54 ` Joel Fernandes
@ 2026-02-25 19:48 ` Boqun Feng
2026-02-25 20:20 ` Joel Fernandes
1 sibling, 1 reply; 74+ messages in thread
From: Boqun Feng @ 2026-02-25 19:48 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Gary Guo, Joel Fernandes, linux-kernel, Miguel Ojeda,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Alexandre Courbot, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic
On Fri, Feb 20, 2026 at 05:48:37PM +0100, Danilo Krummrich wrote:
> On Fri Feb 20, 2026 at 2:09 AM CET, Gary Guo wrote:
> > On 2026-02-19 16:24, Danilo Krummrich wrote:
> >> I feel like it makes a bit more sense to have an entry for the entire class of
> >> "RUST [FFI]" infrastructure.
> >
> > I don't think so. Most of the kernel crate is doing FFI. We have a `ffi` crate
> > defining FFI types, we have `CStr`/`CString` which in Rust std is inside `std::ffi`,
> > etc.
>
> The idea is not that everything that somehow has an FFI interface falls under
> this category, as this would indeed be the majority.
>
> The idea is rather everything that is specifically designed as a helper to
> implement FFI interactions. (Given that maybe just "RUST [FFI HELPER]"?)
>
I feel like you may want to call it "interop" then, because it's "Rust
doing something with interoperation on C data structures". If I
understand you correctly, the category you refer to here is the area
where we cannot simply call an FFI function to get the functionality
from the C side, but rather need to make sure that we interpret C
data structures correctly to work with the C side.
Regards,
Boqun
> For instance, this would also apply to Opaque and ForeignOwnable. But also CStr
> and CString, as you say.
>
> But there's also lots of stuff that does not fall under this category, such as
> pin-init, alloc, syn, num, bits (genmask), fmt, slice, revocable, list, ptr, assert,
> print, arc, etc.
>
> There are also things that are more on the "partially" side of things, such as
> transmute, error or aref.
>
> > I feel that the FFI infra is the core responsibility of the top-level Rust entry,
> > while specific stuff can be split out.
>
> I think the core responsibilities are compiler and general design topics, such
> as abstraction design, (safety) documentation, etc., as well as core language
> infrastructure, such as pin-init, syn, alloc, arc, etc.
>
> Given the definition "helper to implement FFI interactions" I feel like we have
> much more infrastructure that is not for this specific purpose.
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-25 19:48 ` Boqun Feng
@ 2026-02-25 20:20 ` Joel Fernandes
2026-02-26 0:32 ` Joel Fernandes
0 siblings, 1 reply; 74+ messages in thread
From: Joel Fernandes @ 2026-02-25 20:20 UTC (permalink / raw)
To: Boqun Feng, Danilo Krummrich
Cc: Gary Guo, linux-kernel, Miguel Ojeda, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Alexandre Courbot, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic
On 2/25/2026 2:48 PM, Boqun Feng wrote:
> On Fri, Feb 20, 2026 at 05:48:37PM +0100, Danilo Krummrich wrote:
>> On Fri Feb 20, 2026 at 2:09 AM CET, Gary Guo wrote:
>>> On 2026-02-19 16:24, Danilo Krummrich wrote:
>>>> I feel like it makes a bit more sense to have an entry for the entire class of
>>>> "RUST [FFI]" infrastructure.
>>>
>>> I don't think so. Most of the kernel crate is doing FFI. We have a `ffi` crate
>>> defining FFI types, we have `CStr`/`CString` which in Rust std is inside `std::ffi`,
>>> etc.
>>
>> The idea is not that everything that somehow has an FFI interface falls under
>> this category, as this would indeed be the majority.
>>
>> The idea is rather everything that is specifically designed as a helper to
>> implement FFI interactions. (Given that maybe just "RUST [FFI HELPER]"?)
>>
>
> I feel like you may want to call it "interop" then, because it's "Rust
> doing something with interoperation on C data structures". If I
> understand you correctly, the category you refer to here is the area
> where we cannot simply call an FFI function to get the functionality
> from the C side, but rather need to make sure that we interpret C
> data structures correctly to work with the C side.
Boqun has a point here: https://en.wikipedia.org/wiki/Language_interoperability
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists
2026-02-25 20:20 ` Joel Fernandes
@ 2026-02-26 0:32 ` Joel Fernandes
0 siblings, 0 replies; 74+ messages in thread
From: Joel Fernandes @ 2026-02-26 0:32 UTC (permalink / raw)
To: Boqun Feng, Danilo Krummrich
Cc: Gary Guo, linux-kernel, Miguel Ojeda, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Alexandre Courbot, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic
> On Feb 25, 2026, at 3:20 PM, Joel Fernandes <joelagnelf@nvidia.com> wrote:
>
>
>
>> On 2/25/2026 2:48 PM, Boqun Feng wrote:
>>> On Fri, Feb 20, 2026 at 05:48:37PM +0100, Danilo Krummrich wrote:
>>> On Fri Feb 20, 2026 at 2:09 AM CET, Gary Guo wrote:
>>>> On 2026-02-19 16:24, Danilo Krummrich wrote:
>>>>> I feel like it makes a bit more sense to have an entry for the entire class of
>>>>> "RUST [FFI]" infrastructure.
>>>>
>>>> I don't think so. Most of the kernel crate is doing FFI. We have a `ffi` crate
>>>> defining FFI types, we have `CStr`/`CString` which in Rust std is inside `std::ffi`,
>>>> etc.
>>>
>>> The idea is not that everything that somehow has an FFI interface falls under
>>> this category, as this would indeed be the majority.
>>>
>>> The idea is rather everything that is specifically designed as a helper to
>>> implement FFI interactions. (Given that maybe just "RUST [FFI HELPER]"?)
>>>
>>
>> I feel like you may want to call it "interop" then, because it's "Rust
>> doing something with interoperation on C data structures". If I
>> understand you correctly, the category you refer to here is the area
>> where we cannot simply call an FFI function to get the functionality
>> from the C side, but rather need to make sure that we interpret C
>> data structures correctly to work with the C side.
>
> Boqun has a point here: https://en.wikipedia.org/wiki/Language_interoperability
>
If we move forward with this wording, we would probably then have to
rename the directory to rust/interop. That's probably not worth it, I
think, since ffi and interop seem to be pretty closely related. I suggest
we keep the helper wording for now, as Danilo suggested. We can always
change it later if the need arises.
--
Joel Fernandes
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-02-19 15:31 ` Joel Fernandes
@ 2026-03-01 13:23 ` Gary Guo
2026-03-01 17:53 ` Miguel Ojeda
0 siblings, 1 reply; 74+ messages in thread
From: Gary Guo @ 2026-03-01 13:23 UTC (permalink / raw)
To: Joel Fernandes, Miguel Ojeda
Cc: linux-kernel, Alexandre Courbot, Danilo Krummrich, Boqun Feng,
Gary Guo, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, dri-devel, nouveau,
rust-for-linux, Nikola Djukic
On Thu Feb 19, 2026 at 3:31 PM GMT, Joel Fernandes wrote:
> On Wed, Feb 19, 2026 at 09:54:27AM +0100, Miguel Ojeda wrote:
>> If you all think something should be mentioned for practical reasons,
>> then please don't let `rustdoc` force you to not mention it, i.e.
>> please feel free to remove the square brackets if needed.
>>
>> In other words, I don't want the intra-doc links convention we have to
>> make it harder for you to write certain things exceptionally.
>
> Thanks Miguel, that's helpful! I've kept the GpuBuddyInner mention in
> the invariants but removed the square brackets to avoid the rustdoc
> warning while still providing the reference for readers.
>
> Joel
I have started to think that the way we document invariants is problematic. For
most types, the invariants mentioned do not make sense to end up in
public-facing docs.
Perhaps we should:
- Document invariants on specific fields as doc comments on the fields. So they
don't show up in the doc unless document-private-items is enabled.
- Invariants across multiple fields perhaps should either be documented as
normal (non-doc) comments, or we do something like:
struct Foo {
field_a: X,
field_b: Y,
// Put invariants here
_invariant: (),
}
This has an additional benefit: when you're constructing the type, you're
forced to write
Foo {
...,
_invariant: (),
}
which reminds you that invariants exist on the type, so you cannot forget to
write an invariant comment.
Best,
Gary
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings
2026-03-01 13:23 ` Gary Guo
@ 2026-03-01 17:53 ` Miguel Ojeda
0 siblings, 0 replies; 74+ messages in thread
From: Miguel Ojeda @ 2026-03-01 17:53 UTC (permalink / raw)
To: Gary Guo
Cc: Joel Fernandes, linux-kernel, Alexandre Courbot, Danilo Krummrich,
Boqun Feng, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Dave Airlie, Daniel Almeida, dri-devel, nouveau,
rust-for-linux, Nikola Djukic
On Sun, Mar 1, 2026 at 2:23 PM Gary Guo <gary@garyguo.net> wrote:
>
> I started to think that the way we document invariants is problematic. For most
> of the types, the invariants mentioned does not make sense to end up in public
> facing docs.
Yeah, it isn't ideal.
To give some context in case it helps, so far we said that the `#
Invariants` section is a special case where mentioning private fields
is OK-ish if needed/helpful for a reader, even if technically it may
not be "proper", i.e. leak details.
One reason was that sometimes private invariants may make the type
easier to understand, even if technically it is a private one. We also
discussed at some point splitting invariants into public and private
ones, and perhaps have `rustdoc` know about that and only render the
private ones when one toggles the private items rendering (runtime
toggle, which is another feature I requested -- if it is compile-time
as it is the normal flag, then reading the private docs becomes too
hard).
Another reason was to be able to have them use doc comments and get a
nice rendering output, and your suggestion of using a dummy field
would work for that. I like the fact that it makes one remember to
write the invariant "natively". On the other hand, it seems like
something we could somehow do with changes in tooling, i.e. in
`rustdoc` or Clippy (and perhaps it could be enforced rather than just
be a reminder).
Another bit was that sometimes, even if the invariant applies to a
private field, the type may expose e.g. a copy of the field
through a method, and thus it is useful to know about the invariant
anyway. Those cases could/should "properly" be a guarantee on the
return value of that method, keeping the invariant private, though.
In short, the current approach makes it easy to write for everyone,
but it does have downsides and ideally we would have something better
now that everyone is accustomed to writing them.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 74+ messages in thread
end of thread, other threads:[~2026-03-01 17:54 UTC | newest]
Thread overview: 74+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-18 20:54 [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
2026-02-18 20:54 ` [PATCH v10 1/8] gpu: Move DRM buddy allocator one level up (part one) Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 2/8] gpu: Move DRM buddy allocator one level up (part two) Joel Fernandes
2026-02-19 3:18 ` Alexandre Courbot
2026-02-19 15:31 ` Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 3/8] gpu: Fix uninitialized buddy for built-in drivers Joel Fernandes
2026-02-19 10:09 ` Danilo Krummrich
2026-02-19 15:31 ` Joel Fernandes
2026-02-19 16:24 ` Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 4/8] rust: ffi: Convert pub use to pub mod and create ffi module Joel Fernandes
2026-02-19 3:18 ` Alexandre Courbot
2026-02-18 20:55 ` [PATCH v10 5/8] rust: clist: Add support to interface with C linked lists Joel Fernandes
2026-02-19 4:26 ` Alexandre Courbot
2026-02-19 15:27 ` Joel Fernandes
2026-02-19 9:58 ` Danilo Krummrich
2026-02-19 15:28 ` Joel Fernandes
2026-02-19 11:21 ` Danilo Krummrich
2026-02-19 14:37 ` Gary Guo
2026-02-19 15:27 ` Joel Fernandes
2026-02-19 15:44 ` Joel Fernandes
2026-02-19 16:24 ` Danilo Krummrich
2026-02-19 18:07 ` Joel Fernandes
2026-02-19 18:38 ` Miguel Ojeda
2026-02-19 19:28 ` Joel Fernandes
2026-02-19 22:55 ` Miguel Ojeda
2026-02-20 4:00 ` Joel Fernandes
2026-02-20 1:56 ` Alexandre Courbot
2026-02-20 1:09 ` Gary Guo
2026-02-20 1:19 ` Miguel Ojeda
2026-02-20 16:48 ` Danilo Krummrich
2026-02-23 0:54 ` Joel Fernandes
2026-02-24 16:15 ` Miguel Ojeda
2026-02-25 19:48 ` Boqun Feng
2026-02-25 20:20 ` Joel Fernandes
2026-02-26 0:32 ` Joel Fernandes
2026-02-20 8:16 ` Eliot Courtney
2026-02-23 1:13 ` Joel Fernandes
2026-02-24 2:08 ` Eliot Courtney
2026-02-24 7:28 ` Alice Ryhl
2026-02-24 16:00 ` Joel Fernandes
2026-02-24 16:11 ` Miguel Ojeda
2026-02-21 8:59 ` Alice Ryhl
2026-02-23 0:41 ` Joel Fernandes
2026-02-23 9:38 ` Alice Ryhl
2026-02-24 0:32 ` Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 6/8] rust: gpu: Add GPU buddy allocator bindings Joel Fernandes
2026-02-19 5:13 ` Alexandre Courbot
2026-02-19 8:54 ` Miguel Ojeda
2026-02-19 15:31 ` Joel Fernandes
2026-03-01 13:23 ` Gary Guo
2026-03-01 17:53 ` Miguel Ojeda
2026-02-19 15:31 ` Joel Fernandes
2026-02-20 1:56 ` Alexandre Courbot
2026-02-23 1:02 ` Joel Fernandes
2026-02-19 13:18 ` Danilo Krummrich
2026-02-19 15:31 ` Joel Fernandes
2026-02-20 8:22 ` Eliot Courtney
2026-02-20 14:54 ` Joel Fernandes
2026-02-20 15:50 ` Joel Fernandes
2026-02-20 15:53 ` Danilo Krummrich
2026-02-20 21:20 ` Joel Fernandes
2026-02-20 23:43 ` Danilo Krummrich
2026-02-23 0:34 ` Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 7/8] nova-core: mm: Select GPU_BUDDY for VRAM allocation Joel Fernandes
2026-02-19 0:44 ` Alexandre Courbot
2026-02-19 1:14 ` John Hubbard
2026-02-19 15:31 ` Joel Fernandes
2026-02-19 2:06 ` Joel Fernandes
2026-02-19 15:31 ` Joel Fernandes
2026-02-18 20:55 ` [PATCH v10 8/8] nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
2026-02-18 20:59 ` [PATCH v10 0/8] Preparatory patches for nova-core memory management Joel Fernandes
2026-02-18 22:24 ` Danilo Krummrich
2026-02-18 23:46 ` Joel Fernandes
2026-02-18 23:59 ` Joel Fernandes