public inbox for intel-xe@lists.freedesktop.org
 help / color / mirror / Atom feed
* [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write
@ 2026-03-20 12:12 Satyanarayana K V P
  2026-03-20 12:12 ` [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support Satyanarayana K V P
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Satyanarayana K V P @ 2026-03-20 12:12 UTC (permalink / raw)
  To: intel-xe; +Cc: Satyanarayana K V P

The suballocator algorithm tracks a hole cursor at the last allocation
and tries to allocate after it. This is optimized for fence-ordered
progress, where older allocations are expected to become reusable first.

In fence-enabled mode, that ordering assumption holds. In fence-disabled
mode, allocations may be freed in arbitrary order, so limiting allocation
to the current hole window can miss valid free space and fail allocations
despite sufficient total space.

Use the DRM memory manager instead of the sub-allocator to avoid this
issue, as CCS read/write operations do not use fences.

Switched from drm_suballoc to drm_mm based on comments from
https://lore.kernel.org/all/bbf0d48d-a95a-46e1-ac8f-e8a0daa81365@amd.com/

Satyanarayana K V P (3):
  drm/xe/mm: add XE DRM MM manager with shadow support
  drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager
  drm/xe/vf: Use drm mm instead of drm sa for CCS read/write

 drivers/gpu/drm/xe/Makefile                |   1 +
 drivers/gpu/drm/xe/xe_bb.c                 |  67 +++++++
 drivers/gpu/drm/xe/xe_bb.h                 |   6 +
 drivers/gpu/drm/xe/xe_bo_types.h           |   3 +-
 drivers/gpu/drm/xe/xe_drm_mm.c             | 200 +++++++++++++++++++++
 drivers/gpu/drm/xe/xe_drm_mm.h             |  55 ++++++
 drivers/gpu/drm/xe/xe_drm_mm_types.h       |  42 +++++
 drivers/gpu/drm/xe/xe_migrate.c            |  56 +++---
 drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       |  39 ++--
 drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |   2 +-
 10 files changed, 424 insertions(+), 47 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
 create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
 create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h

-- 
2.43.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support
  2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
@ 2026-03-20 12:12 ` Satyanarayana K V P
  2026-03-26 19:48   ` Matthew Brost
  2026-03-26 19:57   ` Thomas Hellström
  2026-03-20 12:12 ` [PATCH 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager Satyanarayana K V P
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Satyanarayana K V P @ 2026-03-20 12:12 UTC (permalink / raw)
  To: intel-xe
  Cc: Satyanarayana K V P, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Michal Wajdeczko

Add an xe_drm_mm manager to allocate sub-ranges from a BO-backed pool
using drm_mm.

Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <dev@lankhorst.se>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/Makefile          |   1 +
 drivers/gpu/drm/xe/xe_drm_mm.c       | 200 +++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_drm_mm.h       |  55 ++++++++
 drivers/gpu/drm/xe/xe_drm_mm_types.h |  42 ++++++
 4 files changed, 298 insertions(+)
 create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
 create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
 create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index dab979287a96..6ab4e2392df1 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -41,6 +41,7 @@ xe-y += xe_bb.o \
 	xe_device_sysfs.o \
 	xe_dma_buf.o \
 	xe_drm_client.o \
+	xe_drm_mm.o \
 	xe_drm_ras.o \
 	xe_eu_stall.o \
 	xe_exec.o \
diff --git a/drivers/gpu/drm/xe/xe_drm_mm.c b/drivers/gpu/drm/xe/xe_drm_mm.c
new file mode 100644
index 000000000000..c5b1766fa75a
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_drm_mm.c
@@ -0,0 +1,200 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2026 Intel Corporation
+ */
+
+#include <drm/drm_managed.h>
+#include <linux/kernel.h>
+
+#include "xe_device_types.h"
+#include "xe_drm_mm_types.h"
+#include "xe_drm_mm.h"
+#include "xe_map.h"
+
+static void xe_drm_mm_manager_fini(struct drm_device *drm, void *arg)
+{
+	struct xe_drm_mm_manager *drm_mm_manager = arg;
+	struct xe_bo *bo = drm_mm_manager->bo;
+
+	if (!bo) {
+		drm_err(drm, "no bo for drm mm manager\n");
+		return;
+	}
+
+	drm_mm_takedown(&drm_mm_manager->base);
+
+	if (drm_mm_manager->is_iomem)
+		kvfree(drm_mm_manager->cpu_addr);
+
+	drm_mm_manager->bo = NULL;
+	drm_mm_manager->shadow = NULL;
+}
+
+/**
+ * xe_drm_mm_manager_init() - Create and initialize the DRM MM manager.
+ * @tile: the &xe_tile to allocate from.
+ * @size: number of bytes to allocate
+ * @guard: number of bytes to exclude from allocation for guard region
+ * @flags: additional flags for configuring the DRM MM manager.
+ *
+ * Initializes a DRM MM manager for managing memory allocations on a specific
+ * XE tile. The function allocates a buffer object to back the memory region
+ * managed by the DRM MM manager.
+ *
+ * Return: a pointer to the &xe_drm_mm_manager, or an error pointer on failure.
+ */
+struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
+						 u32 guard, u32 flags)
+{
+	struct xe_device *xe = tile_to_xe(tile);
+	struct xe_drm_mm_manager *drm_mm_manager;
+	u64 managed_size;
+	struct xe_bo *bo;
+	int ret;
+
+	xe_tile_assert(tile, size > guard);
+	managed_size = size - guard;
+
+	drm_mm_manager = drmm_kzalloc(&xe->drm, sizeof(*drm_mm_manager), GFP_KERNEL);
+	if (!drm_mm_manager)
+		return ERR_PTR(-ENOMEM);
+
+	bo = xe_managed_bo_create_pin_map(xe, tile, size,
+					  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+					  XE_BO_FLAG_GGTT |
+					  XE_BO_FLAG_GGTT_INVALIDATE |
+					  XE_BO_FLAG_PINNED_NORESTORE);
+	if (IS_ERR(bo)) {
+		drm_err(&xe->drm, "Failed to prepare %uKiB BO for DRM MM manager (%pe)\n",
+			size / SZ_1K, bo);
+		return ERR_CAST(bo);
+	}
+	drm_mm_manager->bo = bo;
+	drm_mm_manager->is_iomem = bo->vmap.is_iomem;
+
+	if (bo->vmap.is_iomem) {
+		drm_mm_manager->cpu_addr = kvzalloc(managed_size, GFP_KERNEL);
+		if (!drm_mm_manager->cpu_addr)
+			return ERR_PTR(-ENOMEM);
+	} else {
+		drm_mm_manager->cpu_addr = bo->vmap.vaddr;
+		memset(drm_mm_manager->cpu_addr, 0, bo->ttm.base.size);
+	}
+
+	if (flags & XE_DRM_MM_BO_MANAGER_FLAG_SHADOW) {
+		struct xe_bo *shadow;
+
+		ret = drmm_mutex_init(&xe->drm, &drm_mm_manager->swap_guard);
+		if (ret)
+			return ERR_PTR(ret);
+		if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
+			fs_reclaim_acquire(GFP_KERNEL);
+			might_lock(&drm_mm_manager->swap_guard);
+			fs_reclaim_release(GFP_KERNEL);
+		}
+
+		shadow = xe_managed_bo_create_pin_map(xe, tile, size,
+						      XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+						      XE_BO_FLAG_GGTT |
+						      XE_BO_FLAG_GGTT_INVALIDATE |
+						      XE_BO_FLAG_PINNED_NORESTORE);
+		if (IS_ERR(shadow)) {
+			drm_err(&xe->drm,
+				"Failed to prepare %uKiB shadow BO for DRM MM manager (%pe)\n",
+				size / SZ_1K, shadow);
+			return ERR_CAST(shadow);
+		}
+		drm_mm_manager->shadow = shadow;
+	}
+
+	drm_mm_init(&drm_mm_manager->base, 0, managed_size);
+	ret = drmm_add_action_or_reset(&xe->drm, xe_drm_mm_manager_fini, drm_mm_manager);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return drm_mm_manager;
+}
+
+/**
+ * xe_drm_mm_bo_swap_shadow() - Swap the primary BO with the shadow BO.
+ * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
+ *
+ * Swaps the primary buffer object with the shadow buffer object in the DRM MM
+ * manager.
+ *
+ * Return: None.
+ */
+void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager)
+{
+	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
+
+	xe_assert(xe, drm_mm_manager->shadow);
+	lockdep_assert_held(&drm_mm_manager->swap_guard);
+
+	swap(drm_mm_manager->bo, drm_mm_manager->shadow);
+	if (!drm_mm_manager->bo->vmap.is_iomem)
+		drm_mm_manager->cpu_addr = drm_mm_manager->bo->vmap.vaddr;
+}
+
+/**
+ * xe_drm_mm_sync_shadow() - Synchronize the shadow BO with the primary BO.
+ * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
+ * @node: the DRM MM node representing the region to synchronize.
+ *
+ * Copies the contents of the specified region from the primary buffer object to
+ * the shadow buffer object in the DRM MM manager.
+ *
+ * Return: None.
+ */
+void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
+			   struct drm_mm_node *node)
+{
+	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
+
+	xe_assert(xe, drm_mm_manager->shadow);
+	lockdep_assert_held(&drm_mm_manager->swap_guard);
+
+	xe_map_memcpy_to(xe, &drm_mm_manager->shadow->vmap,
+			 node->start,
+			 drm_mm_manager->cpu_addr + node->start,
+			 node->size);
+}
+
+/**
+ * xe_drm_mm_insert_node() - Insert a node into the DRM MM manager.
+ * @drm_mm_manager: the DRM MM manager to insert the node into.
+ * @node: the DRM MM node to insert.
+ * @size: the size of the node to insert.
+ *
+ * Inserts a node into the DRM MM manager and clears the corresponding memory region
+ * in both the primary and shadow buffer objects.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
+			  struct drm_mm_node *node, u32 size)
+{
+	struct drm_mm *mm = &drm_mm_manager->base;
+	int ret;
+
+	ret = drm_mm_insert_node(mm, node, size);
+	if (ret)
+		return ret;
+
+	memset((void *)drm_mm_manager->bo->vmap.vaddr + node->start, 0, node->size);
+	if (drm_mm_manager->shadow)
+		memset((void *)drm_mm_manager->shadow->vmap.vaddr + node->start, 0,
+		       node->size);
+	return 0;
+}
+
+/**
+ * xe_drm_mm_remove_node() - Remove a node from the DRM MM manager.
+ * @node: the DRM MM node to remove.
+ *
+ * Return: None.
+ */
+void xe_drm_mm_remove_node(struct drm_mm_node *node)
+{
+	drm_mm_remove_node(node);
+}
diff --git a/drivers/gpu/drm/xe/xe_drm_mm.h b/drivers/gpu/drm/xe/xe_drm_mm.h
new file mode 100644
index 000000000000..aeb7cab92d0b
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_drm_mm.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2026 Intel Corporation
+ */
+#ifndef _XE_DRM_MM_H_
+#define _XE_DRM_MM_H_
+
+#include <linux/sizes.h>
+#include <linux/types.h>
+
+#include "xe_bo.h"
+#include "xe_drm_mm_types.h"
+
+struct dma_fence;
+struct xe_tile;
+
+#define XE_DRM_MM_BO_MANAGER_FLAG_SHADOW    BIT(0)
+
+struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
+						 u32 guard, u32 flags);
+void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager);
+void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
+			   struct drm_mm_node *node);
+int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
+			  struct drm_mm_node *node, u32 size);
+void xe_drm_mm_remove_node(struct drm_mm_node *node);
+
+/**
+ * xe_drm_mm_manager_gpu_addr() - Retrieve GPU address of a back storage BO
+ * within a memory manager.
+ * @drm_mm_manager: The DRM MM memory manager.
+ *
+ * Returns: GGTT address of the back storage BO
+ */
+static inline u64
+xe_drm_mm_manager_gpu_addr(struct xe_drm_mm_manager *drm_mm_manager)
+{
+	return xe_bo_ggtt_addr(drm_mm_manager->bo);
+}
+
+/**
+ * xe_drm_mm_bo_swap_guard() - Retrieve the mutex used to guard swap operations
+ * on a memory manager.
+ * @drm_mm_manager: The DRM MM memory manager.
+ *
+ * Returns: Swap guard mutex.
+ */
+static inline struct mutex *
+xe_drm_mm_bo_swap_guard(struct xe_drm_mm_manager *drm_mm_manager)
+{
+	return &drm_mm_manager->swap_guard;
+}
+
+#endif
+
diff --git a/drivers/gpu/drm/xe/xe_drm_mm_types.h b/drivers/gpu/drm/xe/xe_drm_mm_types.h
new file mode 100644
index 000000000000..69e0937dd8de
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_drm_mm_types.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2026 Intel Corporation
+ */
+
+#ifndef _XE_DRM_MM_TYPES_H_
+#define _XE_DRM_MM_TYPES_H_
+
+#include <drm/drm_mm.h>
+
+struct xe_bo;
+
+struct xe_drm_mm_manager {
+	/** @base: Range allocator over [0, @size) in bytes */
+	struct drm_mm base;
+	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
+	struct xe_bo *bo;
+	/** @shadow: Shadow BO for atomic command updates. */
+	struct xe_bo *shadow;
+	/** @swap_guard: Mutex serializing swaps and syncs of @bo and @shadow. */
+	struct mutex swap_guard;
+	/** @cpu_addr: CPU virtual address of the active BO. */
+	void *cpu_addr;
+	/** @size: Total size of the managed address space. */
+	u64 size;
+	/** @is_iomem: Whether the managed address space is I/O memory. */
+	bool is_iomem;
+};
+
+struct xe_drm_mm_bb {
+	/** @node: Range node for this batch buffer. */
+	struct drm_mm_node node;
+	/** @manager: Manager this batch buffer belongs to. */
+	struct xe_drm_mm_manager *manager;
+	/** @cs: Command stream for this batch buffer. */
+	u32 *cs;
+	/** @len: Length of the CS in dwords. */
+	u32 len;
+};
+
+#endif
+
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager
  2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
  2026-03-20 12:12 ` [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support Satyanarayana K V P
@ 2026-03-20 12:12 ` Satyanarayana K V P
  2026-03-26 19:50   ` Matthew Brost
  2026-03-20 12:12 ` [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Satyanarayana K V P @ 2026-03-20 12:12 UTC (permalink / raw)
  To: intel-xe
  Cc: Satyanarayana K V P, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Michal Wajdeczko

Add new APIs xe_drm_mm_bb_alloc(), xe_drm_mm_bb_insert() and
xe_drm_mm_bb_free() to manage batch buffer allocations from the
xe_drm_mm manager.

Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <dev@lankhorst.se>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_bb.c | 67 ++++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_bb.h |  6 ++++
 2 files changed, 73 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_bb.c b/drivers/gpu/drm/xe/xe_bb.c
index b896b6f6615c..c366ec5b269a 100644
--- a/drivers/gpu/drm/xe/xe_bb.c
+++ b/drivers/gpu/drm/xe/xe_bb.c
@@ -8,6 +8,7 @@
 #include "instructions/xe_mi_commands.h"
 #include "xe_assert.h"
 #include "xe_device_types.h"
+#include "xe_drm_mm.h"
 #include "xe_exec_queue_types.h"
 #include "xe_gt.h"
 #include "xe_sa.h"
@@ -172,3 +173,69 @@ void xe_bb_free(struct xe_bb *bb, struct dma_fence *fence)
 	xe_sa_bo_free(bb->bo, fence);
 	kfree(bb);
 }
+
+/**
+ * xe_drm_mm_bb_alloc() - Allocate a new batch buffer structure for drm_mm
+ *
+ * Allocates a new xe_drm_mm_bb structure for use with xe_drm_mm memory management.
+ *
+ * Returns: Batch buffer structure or an ERR_PTR(-ENOMEM).
+ */
+struct xe_drm_mm_bb *xe_drm_mm_bb_alloc(void)
+{
+	struct xe_drm_mm_bb *bb = kzalloc(sizeof(*bb), GFP_KERNEL);
+
+	if (!bb)
+		return ERR_PTR(-ENOMEM);
+
+	return bb;
+}
+
+/**
+ * xe_drm_mm_bb_insert() - Initialize a batch buffer and reserve its node
+ * @bb: Batch buffer structure to initialize
+ * @bb_pool: drm_mm manager to allocate from
+ * @dwords: Number of dwords to be allocated
+ *
+ * Initializes the batch buffer by allocating memory from the specified
+ * drm_mm manager.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int xe_drm_mm_bb_insert(struct xe_drm_mm_bb *bb, struct xe_drm_mm_manager *bb_pool, u32 dwords)
+{
+	int err;
+
+	/*
+	 * We need to allocate space for the requested number of dwords plus
+	 * one additional MI_BATCH_BUFFER_END dword. Since the whole pool
+	 * is submitted to HW, we need to make sure that the last instruction
+	 * is not overwritten when the last chunk of the pool is allocated
+	 * for a BB. So, this extra DW acts as a guard here.
+	 */
+	err = xe_drm_mm_insert_node(bb_pool, &bb->node, 4 * (dwords + 1));
+	if (err)
+		return err;
+
+	bb->manager = bb_pool;
+	bb->cs = bb_pool->cpu_addr + bb->node.start;
+	bb->len = 0;
+
+	memset(bb->cs, MI_NOOP, 4 * (dwords + 1));
+
+	return 0;
+}
+
+/**
+ * xe_drm_mm_bb_free() - Free a batch buffer allocated with drm_mm
+ * @bb: Batch buffer structure to free
+ */
+void xe_drm_mm_bb_free(struct xe_drm_mm_bb *bb)
+{
+	if (!bb)
+		return;
+
+	xe_drm_mm_remove_node(&bb->node);
+	kfree(bb);
+}
+
diff --git a/drivers/gpu/drm/xe/xe_bb.h b/drivers/gpu/drm/xe/xe_bb.h
index 231870b24c2f..d5417005d09b 100644
--- a/drivers/gpu/drm/xe/xe_bb.h
+++ b/drivers/gpu/drm/xe/xe_bb.h
@@ -11,6 +11,8 @@
 struct dma_fence;
 
 struct xe_gt;
+struct xe_drm_mm_bb;
+struct xe_drm_mm_manager;
 struct xe_exec_queue;
 struct xe_sa_manager;
 struct xe_sched_job;
@@ -24,5 +26,9 @@ struct xe_sched_job *xe_bb_create_migration_job(struct xe_exec_queue *q,
 						struct xe_bb *bb, u64 batch_ofs,
 						u32 second_idx);
 void xe_bb_free(struct xe_bb *bb, struct dma_fence *fence);
+struct xe_drm_mm_bb *xe_drm_mm_bb_alloc(void);
+int xe_drm_mm_bb_insert(struct xe_drm_mm_bb *bb,
+			struct xe_drm_mm_manager *bb_pool, u32 dwords);
+void xe_drm_mm_bb_free(struct xe_drm_mm_bb *bb);
 
 #endif
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write
  2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
  2026-03-20 12:12 ` [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support Satyanarayana K V P
  2026-03-20 12:12 ` [PATCH 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager Satyanarayana K V P
@ 2026-03-20 12:12 ` Satyanarayana K V P
  2026-03-26 19:52   ` Matthew Brost
  2026-03-27 11:07   ` Michal Wajdeczko
  2026-03-20 12:17 ` ✗ CI.checkpatch: warning for USE drm mm instead of drm SA " Patchwork
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Satyanarayana K V P @ 2026-03-20 12:12 UTC (permalink / raw)
  To: intel-xe
  Cc: Satyanarayana K V P, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Michal Wajdeczko

The suballocator algorithm tracks a hole cursor at the last allocation
and tries to allocate after it. This is optimized for fence-ordered
progress, where older allocations are expected to become reusable first.

In fence-enabled mode, that ordering assumption holds. In fence-disabled
mode, allocations may be freed in arbitrary order, so limiting allocation
to the current hole window can miss valid free space and fail allocations
despite sufficient total space.

Use the DRM memory manager instead of the sub-allocator to avoid this
issue, as CCS read/write operations do not use fences.

Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Maarten Lankhorst <dev@lankhorst.se>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>

---
Switched from drm_suballoc to drm_mm based on comments from
https://lore.kernel.org/all/bbf0d48d-a95a-46e1-ac8f-e8a0daa81365@amd.com/
---
 drivers/gpu/drm/xe/xe_bo_types.h           |  3 +-
 drivers/gpu/drm/xe/xe_migrate.c            | 56 ++++++++++++----------
 drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 39 ++++++++-------
 drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  2 +-
 4 files changed, 53 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index d4fe3c8dca5b..4c4f15c5648e 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -18,6 +18,7 @@
 #include "xe_ggtt_types.h"
 
 struct xe_device;
+struct xe_drm_mm_bb;
 struct xe_vm;
 
 #define XE_BO_MAX_PLACEMENTS	3
@@ -88,7 +89,7 @@ struct xe_bo {
 	bool ccs_cleared;
 
 	/** @bb_ccs: BB instructions of CCS read/write. Valid only for VF */
-	struct xe_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
+	struct xe_drm_mm_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
 
 	/**
 	 * @cpu_caching: CPU caching mode. Currently only used for userspace
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index fc918b4fba54..2fefd306cb2e 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -22,6 +22,7 @@
 #include "xe_assert.h"
 #include "xe_bb.h"
 #include "xe_bo.h"
+#include "xe_drm_mm.h"
 #include "xe_exec_queue.h"
 #include "xe_ggtt.h"
 #include "xe_gt.h"
@@ -1166,11 +1167,12 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
 	u32 batch_size, batch_size_allocated;
 	struct xe_device *xe = gt_to_xe(gt);
 	struct xe_res_cursor src_it, ccs_it;
+	struct xe_drm_mm_manager *bb_pool;
 	struct xe_sriov_vf_ccs_ctx *ctx;
-	struct xe_sa_manager *bb_pool;
+	struct xe_drm_mm_bb *bb = NULL;
 	u64 size = xe_bo_size(src_bo);
-	struct xe_bb *bb = NULL;
 	u64 src_L0, src_L0_ofs;
+	struct xe_bb xe_bb_tmp;
 	u32 src_L0_pt;
 	int err;
 
@@ -1208,18 +1210,18 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
 		size -= src_L0;
 	}
 
-	bb = xe_bb_alloc(gt);
+	bb = xe_drm_mm_bb_alloc();
 	if (IS_ERR(bb))
 		return PTR_ERR(bb);
 
 	bb_pool = ctx->mem.ccs_bb_pool;
-	scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
-		xe_sa_bo_swap_shadow(bb_pool);
+	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
+		xe_drm_mm_bo_swap_shadow(bb_pool);
 
-		err = xe_bb_init(bb, bb_pool, batch_size);
+		err = xe_drm_mm_bb_insert(bb, bb_pool, batch_size);
 		if (err) {
 			xe_gt_err(gt, "BB allocation failed.\n");
-			xe_bb_free(bb, NULL);
+			kfree(bb);
 			return err;
 		}
 
@@ -1227,6 +1229,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
 		size = xe_bo_size(src_bo);
 		batch_size = 0;
 
+		xe_bb_tmp = (struct xe_bb){ .cs = bb->cs, .len = 0 };
 		/*
 		 * Emit PTE and copy commands here.
 		 * The CCS copy command can only support limited size. If the size to be
@@ -1255,24 +1258,27 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
 			xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
 			batch_size += EMIT_COPY_CCS_DW;
 
-			emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
+			emit_pte(m, &xe_bb_tmp, src_L0_pt, false, true, &src_it, src_L0, src);
 
-			emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
+			emit_pte(m, &xe_bb_tmp, ccs_pt, false, false, &ccs_it, ccs_size, src);
 
-			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
-			flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
+			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
+							      flush_flags);
+			flush_flags = xe_migrate_ccs_copy(m, &xe_bb_tmp, src_L0_ofs, src_is_pltt,
 							  src_L0_ofs, dst_is_pltt,
 							  src_L0, ccs_ofs, true);
-			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
+			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
+							      flush_flags);
 
 			size -= src_L0;
 		}
 
-		xe_assert(xe, (batch_size_allocated == bb->len));
+		xe_assert(xe, (batch_size_allocated == xe_bb_tmp.len));
+		bb->len = xe_bb_tmp.len;
 		src_bo->bb_ccs[read_write] = bb;
 
 		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
-		xe_sa_bo_sync_shadow(bb->bo);
+		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
 	}
 
 	return 0;
@@ -1297,10 +1303,10 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
 void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
 				  enum xe_sriov_vf_ccs_rw_ctxs read_write)
 {
-	struct xe_bb *bb = src_bo->bb_ccs[read_write];
+	struct xe_drm_mm_bb *bb = src_bo->bb_ccs[read_write];
 	struct xe_device *xe = xe_bo_device(src_bo);
+	struct xe_drm_mm_manager *bb_pool;
 	struct xe_sriov_vf_ccs_ctx *ctx;
-	struct xe_sa_manager *bb_pool;
 	u32 *cs;
 
 	xe_assert(xe, IS_SRIOV_VF(xe));
@@ -1308,17 +1314,17 @@ void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
 	ctx = &xe->sriov.vf.ccs.contexts[read_write];
 	bb_pool = ctx->mem.ccs_bb_pool;
 
-	guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
-	xe_sa_bo_swap_shadow(bb_pool);
-
-	cs = xe_sa_bo_cpu_addr(bb->bo);
-	memset(cs, MI_NOOP, bb->len * sizeof(u32));
-	xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
+	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
+		xe_drm_mm_bo_swap_shadow(bb_pool);
 
-	xe_sa_bo_sync_shadow(bb->bo);
+		cs = bb_pool->cpu_addr + bb->node.start;
+		memset(cs, MI_NOOP, bb->len * sizeof(u32));
+		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
 
-	xe_bb_free(bb, NULL);
-	src_bo->bb_ccs[read_write] = NULL;
+		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
+		xe_drm_mm_bb_free(bb);
+		src_bo->bb_ccs[read_write] = NULL;
+	}
 }
 
 /**
diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
index db023fb66a27..6fb4641c6f0f 100644
--- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
@@ -8,6 +8,7 @@
 #include "xe_bb.h"
 #include "xe_bo.h"
 #include "xe_device.h"
+#include "xe_drm_mm.h"
 #include "xe_exec_queue.h"
 #include "xe_exec_queue_types.h"
 #include "xe_gt_sriov_vf.h"
@@ -16,7 +17,6 @@
 #include "xe_lrc.h"
 #include "xe_migrate.h"
 #include "xe_pm.h"
-#include "xe_sa.h"
 #include "xe_sriov_printk.h"
 #include "xe_sriov_vf.h"
 #include "xe_sriov_vf_ccs.h"
@@ -141,8 +141,8 @@ static u64 get_ccs_bb_pool_size(struct xe_device *xe)
 
 static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
 {
+	struct xe_drm_mm_manager *drm_mm_manager;
 	struct xe_device *xe = tile_to_xe(tile);
-	struct xe_sa_manager *sa_manager;
 	u64 bb_pool_size;
 	int offset, err;
 
@@ -150,34 +150,33 @@ static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
 	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
 		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
 
-	sa_manager = __xe_sa_bo_manager_init(tile, bb_pool_size, SZ_4K, SZ_16,
-					     XE_SA_BO_MANAGER_FLAG_SHADOW);
-
-	if (IS_ERR(sa_manager)) {
-		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
-			     sa_manager);
-		err = PTR_ERR(sa_manager);
+	drm_mm_manager = xe_drm_mm_manager_init(tile, bb_pool_size, SZ_4K,
+						XE_DRM_MM_BO_MANAGER_FLAG_SHADOW);
+	if (IS_ERR(drm_mm_manager)) {
+		xe_sriov_err(xe, "XE_DRM_MM init failed with error: %pe\n",
+			     drm_mm_manager);
+		err = PTR_ERR(drm_mm_manager);
 		return err;
 	}
 
 	offset = 0;
-	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
+	xe_map_memset(xe, &drm_mm_manager->bo->vmap, offset, MI_NOOP,
 		      bb_pool_size);
-	xe_map_memset(xe, &sa_manager->shadow->vmap, offset, MI_NOOP,
+	xe_map_memset(xe, &drm_mm_manager->shadow->vmap, offset, MI_NOOP,
 		      bb_pool_size);
 
 	offset = bb_pool_size - sizeof(u32);
-	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
-	xe_map_wr(xe, &sa_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
+	xe_map_wr(xe, &drm_mm_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
+	xe_map_wr(xe, &drm_mm_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
 
-	ctx->mem.ccs_bb_pool = sa_manager;
+	ctx->mem.ccs_bb_pool = drm_mm_manager;
 
 	return 0;
 }
 
 static void ccs_rw_update_ring(struct xe_sriov_vf_ccs_ctx *ctx)
 {
-	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
+	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
 	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
 	u32 dw[10], i = 0;
 
@@ -388,7 +387,7 @@ int xe_sriov_vf_ccs_init(struct xe_device *xe)
 #define XE_SRIOV_VF_CCS_RW_BB_ADDR_OFFSET	(2 * sizeof(u32))
 void xe_sriov_vf_ccs_rw_update_bb_addr(struct xe_sriov_vf_ccs_ctx *ctx)
 {
-	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
+	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
 	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
 	struct xe_device *xe = gt_to_xe(ctx->mig_q->gt);
 
@@ -412,8 +411,8 @@ int xe_sriov_vf_ccs_attach_bo(struct xe_bo *bo)
 	struct xe_device *xe = xe_bo_device(bo);
 	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
 	struct xe_sriov_vf_ccs_ctx *ctx;
+	struct xe_drm_mm_bb *bb;
 	struct xe_tile *tile;
-	struct xe_bb *bb;
 	int err = 0;
 
 	xe_assert(xe, IS_VF_CCS_READY(xe));
@@ -445,7 +444,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
 {
 	struct xe_device *xe = xe_bo_device(bo);
 	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
-	struct xe_bb *bb;
+	struct xe_drm_mm_bb *bb;
 
 	xe_assert(xe, IS_VF_CCS_READY(xe));
 
@@ -471,8 +470,8 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
  */
 void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
 {
-	struct xe_sa_manager *bb_pool;
 	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
+	struct xe_drm_mm_manager *bb_pool;
 
 	if (!IS_VF_CCS_READY(xe))
 		return;
@@ -485,7 +484,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
 
 		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
 		drm_printf(p, "-------------------------\n");
-		drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
+		drm_mm_print(&bb_pool->base, p);
 		drm_puts(p, "\n");
 	}
 }
diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
index 22c499943d2a..f2af074578c9 100644
--- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
+++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
@@ -33,7 +33,7 @@ struct xe_sriov_vf_ccs_ctx {
 	/** @mem: memory data */
 	struct {
 		/** @mem.ccs_bb_pool: Pool from which batch buffers are allocated. */
-		struct xe_sa_manager *ccs_bb_pool;
+		struct xe_drm_mm_manager *ccs_bb_pool;
 	} mem;
 };
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* ✗ CI.checkpatch: warning for USE drm mm instead of drm SA for CCS read/write
  2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
                   ` (2 preceding siblings ...)
  2026-03-20 12:12 ` [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
@ 2026-03-20 12:17 ` Patchwork
  2026-03-20 12:19 ` ✓ CI.KUnit: success " Patchwork
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2026-03-20 12:17 UTC (permalink / raw)
  To: Satyanarayana K V P; +Cc: intel-xe

== Series Details ==

Series: USE drm mm instead of drm SA for CCS read/write
URL   : https://patchwork.freedesktop.org/series/163588/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
1f57ba1afceae32108bd24770069f764d940a0e4
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit d61f03c0378e9f320b3b24b5f48f67e461ce4d9f
Author: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Date:   Fri Mar 20 12:12:31 2026 +0000

    drm/xe/vf: Use drm mm instead of drm sa for CCS read/write
    
    The suballocator algorithm tracks a hole cursor at the last allocation
    and tries to allocate after it. This is optimized for fence-ordered
    progress, where older allocations are expected to become reusable first.
    
    In fence-enabled mode, that ordering assumption holds. In fence-disabled
    mode, allocations may be freed in arbitrary order, so limiting allocation
    to the current hole window can miss valid free space and fail allocations
    despite sufficient total space.
    
    Use DRM memory manager instead of sub-allocator to get rid of this issue
    as CCS read/write operations do not use fences.
    
    Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
    Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
    Cc: Matthew Brost <matthew.brost@intel.com>
    Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Cc: Maarten Lankhorst <dev@lankhorst.se>
    Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
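The failure mode described in the commit message above can be sketched in a few lines. This is a hypothetical Python model, not the actual drm_suballoc or drm_mm code: a cursor-bound allocator that only considers space at or after its hole cursor misses a hole left by an out-of-order free, while a whole-range first-fit search (drm_mm style) finds it.

```python
POOL = 8  # total pool size in arbitrary units

def free_gaps(allocs):
    """Return (start, length) free gaps in [0, POOL) for (start, len) allocs."""
    gaps, pos = [], 0
    for start, length in sorted(allocs):
        if start > pos:
            gaps.append((pos, start - pos))
        pos = start + length
    if pos < POOL:
        gaps.append((pos, POOL - pos))
    return gaps

def first_fit(allocs, size):
    """drm_mm style: consider every free gap in the range."""
    for start, length in free_gaps(allocs):
        if length >= size:
            return start
    return None

def cursor_fit(allocs, size, cursor):
    """Suballocator style: only consider space at or after the hole cursor."""
    for start, length in free_gaps(allocs):
        if start >= cursor and length >= size:
            return start
    return None

# Fill the pool with four 2-unit buffers, then free the second one out of
# fence order. The only hole now sits *behind* the cursor.
allocs = [(0, 2), (4, 2), (6, 2)]   # (2, 2) was freed out of order
cursor = 8                          # cursor sits after the last allocation

assert first_fit(allocs, 2) == 2              # whole-range search finds the hole
assert cursor_fit(allocs, 2, cursor) is None  # cursor-bound search fails
```

Under fence-ordered frees the hole behind the cursor would reopen in order and the cursor scheme is cheap and sufficient; without fences (as in the CCS read/write path) only the whole-range search is safe.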
+ /mt/dim checkpatch 7535044a2418d22b59be0eb64af0353971f16bd8 drm-intel
5538b0ed21e8 drm/xe/mm: add XE DRM MM manager with shadow support
-:31: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#31: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 304 lines checked
6c38cc4f5c7a drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager
d61f03c0378e drm/xe/vf: Use drm mm instead of drm sa for CCS read/write



^ permalink raw reply	[flat|nested] 20+ messages in thread

* ✓ CI.KUnit: success for USE drm mm instead of drm SA for CCS read/write
  2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
                   ` (3 preceding siblings ...)
  2026-03-20 12:17 ` ✗ CI.checkpatch: warning for USE drm mm instead of drm SA " Patchwork
@ 2026-03-20 12:19 ` Patchwork
  2026-03-20 13:08 ` ✓ Xe.CI.BAT: " Patchwork
  2026-03-21 11:52 ` ✗ Xe.CI.FULL: failure " Patchwork
  6 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2026-03-20 12:19 UTC (permalink / raw)
  To: Satyanarayana K V P; +Cc: intel-xe

== Series Details ==

Series: USE drm mm instead of drm SA for CCS read/write
URL   : https://patchwork.freedesktop.org/series/163588/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[12:17:54] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[12:17:58] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[12:18:29] Starting KUnit Kernel (1/1)...
[12:18:29] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[12:18:29] ================== guc_buf (11 subtests) ===================
[12:18:29] [PASSED] test_smallest
[12:18:29] [PASSED] test_largest
[12:18:29] [PASSED] test_granular
[12:18:29] [PASSED] test_unique
[12:18:29] [PASSED] test_overlap
[12:18:29] [PASSED] test_reusable
[12:18:29] [PASSED] test_too_big
[12:18:29] [PASSED] test_flush
[12:18:29] [PASSED] test_lookup
[12:18:29] [PASSED] test_data
[12:18:29] [PASSED] test_class
[12:18:29] ===================== [PASSED] guc_buf =====================
[12:18:29] =================== guc_dbm (7 subtests) ===================
[12:18:29] [PASSED] test_empty
[12:18:29] [PASSED] test_default
[12:18:29] ======================== test_size  ========================
[12:18:29] [PASSED] 4
[12:18:29] [PASSED] 8
[12:18:29] [PASSED] 32
[12:18:29] [PASSED] 256
[12:18:29] ==================== [PASSED] test_size ====================
[12:18:29] ======================= test_reuse  ========================
[12:18:29] [PASSED] 4
[12:18:29] [PASSED] 8
[12:18:29] [PASSED] 32
[12:18:29] [PASSED] 256
[12:18:29] =================== [PASSED] test_reuse ====================
[12:18:29] =================== test_range_overlap  ====================
[12:18:29] [PASSED] 4
[12:18:29] [PASSED] 8
[12:18:29] [PASSED] 32
[12:18:29] [PASSED] 256
[12:18:29] =============== [PASSED] test_range_overlap ================
[12:18:29] =================== test_range_compact  ====================
[12:18:29] [PASSED] 4
[12:18:29] [PASSED] 8
[12:18:29] [PASSED] 32
[12:18:29] [PASSED] 256
[12:18:29] =============== [PASSED] test_range_compact ================
[12:18:29] ==================== test_range_spare  =====================
[12:18:29] [PASSED] 4
[12:18:29] [PASSED] 8
[12:18:29] [PASSED] 32
[12:18:29] [PASSED] 256
[12:18:29] ================ [PASSED] test_range_spare =================
[12:18:29] ===================== [PASSED] guc_dbm =====================
[12:18:29] =================== guc_idm (6 subtests) ===================
[12:18:29] [PASSED] bad_init
[12:18:29] [PASSED] no_init
[12:18:29] [PASSED] init_fini
[12:18:29] [PASSED] check_used
[12:18:29] [PASSED] check_quota
[12:18:29] [PASSED] check_all
[12:18:29] ===================== [PASSED] guc_idm =====================
[12:18:29] ================== no_relay (3 subtests) ===================
[12:18:29] [PASSED] xe_drops_guc2pf_if_not_ready
[12:18:29] [PASSED] xe_drops_guc2vf_if_not_ready
[12:18:29] [PASSED] xe_rejects_send_if_not_ready
[12:18:29] ==================== [PASSED] no_relay =====================
[12:18:29] ================== pf_relay (14 subtests) ==================
[12:18:29] [PASSED] pf_rejects_guc2pf_too_short
[12:18:29] [PASSED] pf_rejects_guc2pf_too_long
[12:18:29] [PASSED] pf_rejects_guc2pf_no_payload
[12:18:29] [PASSED] pf_fails_no_payload
[12:18:29] [PASSED] pf_fails_bad_origin
[12:18:29] [PASSED] pf_fails_bad_type
[12:18:29] [PASSED] pf_txn_reports_error
[12:18:29] [PASSED] pf_txn_sends_pf2guc
[12:18:29] [PASSED] pf_sends_pf2guc
[12:18:29] [SKIPPED] pf_loopback_nop
[12:18:29] [SKIPPED] pf_loopback_echo
[12:18:29] [SKIPPED] pf_loopback_fail
[12:18:29] [SKIPPED] pf_loopback_busy
[12:18:29] [SKIPPED] pf_loopback_retry
[12:18:29] ==================== [PASSED] pf_relay =====================
[12:18:29] ================== vf_relay (3 subtests) ===================
[12:18:29] [PASSED] vf_rejects_guc2vf_too_short
[12:18:29] [PASSED] vf_rejects_guc2vf_too_long
[12:18:29] [PASSED] vf_rejects_guc2vf_no_payload
[12:18:29] ==================== [PASSED] vf_relay =====================
[12:18:29] ================ pf_gt_config (9 subtests) =================
[12:18:29] [PASSED] fair_contexts_1vf
[12:18:29] [PASSED] fair_doorbells_1vf
[12:18:29] [PASSED] fair_ggtt_1vf
[12:18:29] ====================== fair_vram_1vf  ======================
[12:18:29] [PASSED] 3.50 GiB
[12:18:29] [PASSED] 11.5 GiB
[12:18:29] [PASSED] 15.5 GiB
[12:18:29] [PASSED] 31.5 GiB
[12:18:29] [PASSED] 63.5 GiB
[12:18:29] [PASSED] 1.91 GiB
[12:18:29] ================== [PASSED] fair_vram_1vf ==================
[12:18:29] ================ fair_vram_1vf_admin_only  =================
[12:18:29] [PASSED] 3.50 GiB
[12:18:29] [PASSED] 11.5 GiB
[12:18:29] [PASSED] 15.5 GiB
[12:18:29] [PASSED] 31.5 GiB
[12:18:29] [PASSED] 63.5 GiB
[12:18:29] [PASSED] 1.91 GiB
[12:18:29] ============ [PASSED] fair_vram_1vf_admin_only =============
[12:18:29] ====================== fair_contexts  ======================
[12:18:29] [PASSED] 1 VF
[12:18:29] [PASSED] 2 VFs
[12:18:29] [PASSED] 3 VFs
[12:18:29] [PASSED] 4 VFs
[12:18:29] [PASSED] 5 VFs
[12:18:29] [PASSED] 6 VFs
[12:18:29] [PASSED] 7 VFs
[12:18:29] [PASSED] 8 VFs
[12:18:29] [PASSED] 9 VFs
[12:18:29] [PASSED] 10 VFs
[12:18:29] [PASSED] 11 VFs
[12:18:29] [PASSED] 12 VFs
[12:18:29] [PASSED] 13 VFs
[12:18:29] [PASSED] 14 VFs
[12:18:29] [PASSED] 15 VFs
[12:18:29] [PASSED] 16 VFs
[12:18:29] [PASSED] 17 VFs
[12:18:29] [PASSED] 18 VFs
[12:18:29] [PASSED] 19 VFs
[12:18:29] [PASSED] 20 VFs
[12:18:29] [PASSED] 21 VFs
[12:18:29] [PASSED] 22 VFs
[12:18:29] [PASSED] 23 VFs
[12:18:29] [PASSED] 24 VFs
[12:18:29] [PASSED] 25 VFs
[12:18:29] [PASSED] 26 VFs
[12:18:29] [PASSED] 27 VFs
[12:18:29] [PASSED] 28 VFs
[12:18:29] [PASSED] 29 VFs
[12:18:29] [PASSED] 30 VFs
[12:18:29] [PASSED] 31 VFs
[12:18:29] [PASSED] 32 VFs
[12:18:29] [PASSED] 33 VFs
[12:18:30] [PASSED] 34 VFs
[12:18:30] [PASSED] 35 VFs
[12:18:30] [PASSED] 36 VFs
[12:18:30] [PASSED] 37 VFs
[12:18:30] [PASSED] 38 VFs
[12:18:30] [PASSED] 39 VFs
[12:18:30] [PASSED] 40 VFs
[12:18:30] [PASSED] 41 VFs
[12:18:30] [PASSED] 42 VFs
[12:18:30] [PASSED] 43 VFs
[12:18:30] [PASSED] 44 VFs
[12:18:30] [PASSED] 45 VFs
[12:18:30] [PASSED] 46 VFs
[12:18:30] [PASSED] 47 VFs
[12:18:30] [PASSED] 48 VFs
[12:18:30] [PASSED] 49 VFs
[12:18:30] [PASSED] 50 VFs
[12:18:30] [PASSED] 51 VFs
[12:18:30] [PASSED] 52 VFs
[12:18:30] [PASSED] 53 VFs
[12:18:30] [PASSED] 54 VFs
[12:18:30] [PASSED] 55 VFs
[12:18:30] [PASSED] 56 VFs
[12:18:30] [PASSED] 57 VFs
[12:18:30] [PASSED] 58 VFs
[12:18:30] [PASSED] 59 VFs
[12:18:30] [PASSED] 60 VFs
[12:18:30] [PASSED] 61 VFs
[12:18:30] [PASSED] 62 VFs
[12:18:30] [PASSED] 63 VFs
[12:18:30] ================== [PASSED] fair_contexts ==================
[12:18:30] ===================== fair_doorbells  ======================
[12:18:30] [PASSED] 1 VF
[12:18:30] [PASSED] 2 VFs
[12:18:30] [PASSED] 3 VFs
[12:18:30] [PASSED] 4 VFs
[12:18:30] [PASSED] 5 VFs
[12:18:30] [PASSED] 6 VFs
[12:18:30] [PASSED] 7 VFs
[12:18:30] [PASSED] 8 VFs
[12:18:30] [PASSED] 9 VFs
[12:18:30] [PASSED] 10 VFs
[12:18:30] [PASSED] 11 VFs
[12:18:30] [PASSED] 12 VFs
[12:18:30] [PASSED] 13 VFs
[12:18:30] [PASSED] 14 VFs
[12:18:30] [PASSED] 15 VFs
[12:18:30] [PASSED] 16 VFs
[12:18:30] [PASSED] 17 VFs
[12:18:30] [PASSED] 18 VFs
[12:18:30] [PASSED] 19 VFs
[12:18:30] [PASSED] 20 VFs
[12:18:30] [PASSED] 21 VFs
[12:18:30] [PASSED] 22 VFs
[12:18:30] [PASSED] 23 VFs
[12:18:30] [PASSED] 24 VFs
[12:18:30] [PASSED] 25 VFs
[12:18:30] [PASSED] 26 VFs
[12:18:30] [PASSED] 27 VFs
[12:18:30] [PASSED] 28 VFs
[12:18:30] [PASSED] 29 VFs
[12:18:30] [PASSED] 30 VFs
[12:18:30] [PASSED] 31 VFs
[12:18:30] [PASSED] 32 VFs
[12:18:30] [PASSED] 33 VFs
[12:18:30] [PASSED] 34 VFs
[12:18:30] [PASSED] 35 VFs
[12:18:30] [PASSED] 36 VFs
[12:18:30] [PASSED] 37 VFs
[12:18:30] [PASSED] 38 VFs
[12:18:30] [PASSED] 39 VFs
[12:18:30] [PASSED] 40 VFs
[12:18:30] [PASSED] 41 VFs
[12:18:30] [PASSED] 42 VFs
[12:18:30] [PASSED] 43 VFs
[12:18:30] [PASSED] 44 VFs
[12:18:30] [PASSED] 45 VFs
[12:18:30] [PASSED] 46 VFs
[12:18:30] [PASSED] 47 VFs
[12:18:30] [PASSED] 48 VFs
[12:18:30] [PASSED] 49 VFs
[12:18:30] [PASSED] 50 VFs
[12:18:30] [PASSED] 51 VFs
[12:18:30] [PASSED] 52 VFs
[12:18:30] [PASSED] 53 VFs
[12:18:30] [PASSED] 54 VFs
[12:18:30] [PASSED] 55 VFs
[12:18:30] [PASSED] 56 VFs
[12:18:30] [PASSED] 57 VFs
[12:18:30] [PASSED] 58 VFs
[12:18:30] [PASSED] 59 VFs
[12:18:30] [PASSED] 60 VFs
[12:18:30] [PASSED] 61 VFs
[12:18:30] [PASSED] 62 VFs
[12:18:30] [PASSED] 63 VFs
[12:18:30] ================= [PASSED] fair_doorbells ==================
[12:18:30] ======================== fair_ggtt  ========================
[12:18:30] [PASSED] 1 VF
[12:18:30] [PASSED] 2 VFs
[12:18:30] [PASSED] 3 VFs
[12:18:30] [PASSED] 4 VFs
[12:18:30] [PASSED] 5 VFs
[12:18:30] [PASSED] 6 VFs
[12:18:30] [PASSED] 7 VFs
[12:18:30] [PASSED] 8 VFs
[12:18:30] [PASSED] 9 VFs
[12:18:30] [PASSED] 10 VFs
[12:18:30] [PASSED] 11 VFs
[12:18:30] [PASSED] 12 VFs
[12:18:30] [PASSED] 13 VFs
[12:18:30] [PASSED] 14 VFs
[12:18:30] [PASSED] 15 VFs
[12:18:30] [PASSED] 16 VFs
[12:18:30] [PASSED] 17 VFs
[12:18:30] [PASSED] 18 VFs
[12:18:30] [PASSED] 19 VFs
[12:18:30] [PASSED] 20 VFs
[12:18:30] [PASSED] 21 VFs
[12:18:30] [PASSED] 22 VFs
[12:18:30] [PASSED] 23 VFs
[12:18:30] [PASSED] 24 VFs
[12:18:30] [PASSED] 25 VFs
[12:18:30] [PASSED] 26 VFs
[12:18:30] [PASSED] 27 VFs
[12:18:30] [PASSED] 28 VFs
[12:18:30] [PASSED] 29 VFs
[12:18:30] [PASSED] 30 VFs
[12:18:30] [PASSED] 31 VFs
[12:18:30] [PASSED] 32 VFs
[12:18:30] [PASSED] 33 VFs
[12:18:30] [PASSED] 34 VFs
[12:18:30] [PASSED] 35 VFs
[12:18:30] [PASSED] 36 VFs
[12:18:30] [PASSED] 37 VFs
[12:18:30] [PASSED] 38 VFs
[12:18:30] [PASSED] 39 VFs
[12:18:30] [PASSED] 40 VFs
[12:18:30] [PASSED] 41 VFs
[12:18:30] [PASSED] 42 VFs
[12:18:30] [PASSED] 43 VFs
[12:18:30] [PASSED] 44 VFs
[12:18:30] [PASSED] 45 VFs
[12:18:30] [PASSED] 46 VFs
[12:18:30] [PASSED] 47 VFs
[12:18:30] [PASSED] 48 VFs
[12:18:30] [PASSED] 49 VFs
[12:18:30] [PASSED] 50 VFs
[12:18:30] [PASSED] 51 VFs
[12:18:30] [PASSED] 52 VFs
[12:18:30] [PASSED] 53 VFs
[12:18:30] [PASSED] 54 VFs
[12:18:30] [PASSED] 55 VFs
[12:18:30] [PASSED] 56 VFs
[12:18:30] [PASSED] 57 VFs
[12:18:30] [PASSED] 58 VFs
[12:18:30] [PASSED] 59 VFs
[12:18:30] [PASSED] 60 VFs
[12:18:30] [PASSED] 61 VFs
[12:18:30] [PASSED] 62 VFs
[12:18:30] [PASSED] 63 VFs
[12:18:30] ==================== [PASSED] fair_ggtt ====================
[12:18:30] ======================== fair_vram  ========================
[12:18:30] [PASSED] 1 VF
[12:18:30] [PASSED] 2 VFs
[12:18:30] [PASSED] 3 VFs
[12:18:30] [PASSED] 4 VFs
[12:18:30] [PASSED] 5 VFs
[12:18:30] [PASSED] 6 VFs
[12:18:30] [PASSED] 7 VFs
[12:18:30] [PASSED] 8 VFs
[12:18:30] [PASSED] 9 VFs
[12:18:30] [PASSED] 10 VFs
[12:18:30] [PASSED] 11 VFs
[12:18:30] [PASSED] 12 VFs
[12:18:30] [PASSED] 13 VFs
[12:18:30] [PASSED] 14 VFs
[12:18:30] [PASSED] 15 VFs
[12:18:30] [PASSED] 16 VFs
[12:18:30] [PASSED] 17 VFs
[12:18:30] [PASSED] 18 VFs
[12:18:30] [PASSED] 19 VFs
[12:18:30] [PASSED] 20 VFs
[12:18:30] [PASSED] 21 VFs
[12:18:30] [PASSED] 22 VFs
[12:18:30] [PASSED] 23 VFs
[12:18:30] [PASSED] 24 VFs
[12:18:30] [PASSED] 25 VFs
[12:18:30] [PASSED] 26 VFs
[12:18:30] [PASSED] 27 VFs
[12:18:30] [PASSED] 28 VFs
[12:18:30] [PASSED] 29 VFs
[12:18:30] [PASSED] 30 VFs
[12:18:30] [PASSED] 31 VFs
[12:18:30] [PASSED] 32 VFs
[12:18:30] [PASSED] 33 VFs
[12:18:30] [PASSED] 34 VFs
[12:18:30] [PASSED] 35 VFs
[12:18:30] [PASSED] 36 VFs
[12:18:30] [PASSED] 37 VFs
[12:18:30] [PASSED] 38 VFs
[12:18:30] [PASSED] 39 VFs
[12:18:30] [PASSED] 40 VFs
[12:18:30] [PASSED] 41 VFs
[12:18:30] [PASSED] 42 VFs
[12:18:30] [PASSED] 43 VFs
[12:18:30] [PASSED] 44 VFs
[12:18:30] [PASSED] 45 VFs
[12:18:30] [PASSED] 46 VFs
[12:18:30] [PASSED] 47 VFs
[12:18:30] [PASSED] 48 VFs
[12:18:30] [PASSED] 49 VFs
[12:18:30] [PASSED] 50 VFs
[12:18:30] [PASSED] 51 VFs
[12:18:30] [PASSED] 52 VFs
[12:18:30] [PASSED] 53 VFs
[12:18:30] [PASSED] 54 VFs
[12:18:30] [PASSED] 55 VFs
[12:18:30] [PASSED] 56 VFs
[12:18:30] [PASSED] 57 VFs
[12:18:30] [PASSED] 58 VFs
[12:18:30] [PASSED] 59 VFs
[12:18:30] [PASSED] 60 VFs
[12:18:30] [PASSED] 61 VFs
[12:18:30] [PASSED] 62 VFs
[12:18:30] [PASSED] 63 VFs
[12:18:30] ==================== [PASSED] fair_vram ====================
[12:18:30] ================== [PASSED] pf_gt_config ===================
[12:18:30] ===================== lmtt (1 subtest) =====================
[12:18:30] ======================== test_ops  =========================
[12:18:30] [PASSED] 2-level
[12:18:30] [PASSED] multi-level
[12:18:30] ==================== [PASSED] test_ops =====================
[12:18:30] ====================== [PASSED] lmtt =======================
[12:18:30] ================= pf_service (11 subtests) =================
[12:18:30] [PASSED] pf_negotiate_any
[12:18:30] [PASSED] pf_negotiate_base_match
[12:18:30] [PASSED] pf_negotiate_base_newer
[12:18:30] [PASSED] pf_negotiate_base_next
[12:18:30] [SKIPPED] pf_negotiate_base_older
[12:18:30] [PASSED] pf_negotiate_base_prev
[12:18:30] [PASSED] pf_negotiate_latest_match
[12:18:30] [PASSED] pf_negotiate_latest_newer
[12:18:30] [PASSED] pf_negotiate_latest_next
[12:18:30] [SKIPPED] pf_negotiate_latest_older
[12:18:30] [SKIPPED] pf_negotiate_latest_prev
[12:18:30] =================== [PASSED] pf_service ====================
[12:18:30] ================= xe_guc_g2g (2 subtests) ==================
[12:18:30] ============== xe_live_guc_g2g_kunit_default  ==============
[12:18:30] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[12:18:30] ============== xe_live_guc_g2g_kunit_allmem  ===============
[12:18:30] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[12:18:30] =================== [SKIPPED] xe_guc_g2g ===================
[12:18:30] =================== xe_mocs (2 subtests) ===================
[12:18:30] ================ xe_live_mocs_kernel_kunit  ================
[12:18:30] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[12:18:30] ================ xe_live_mocs_reset_kunit  =================
[12:18:30] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[12:18:30] ==================== [SKIPPED] xe_mocs =====================
[12:18:30] ================= xe_migrate (2 subtests) ==================
[12:18:30] ================= xe_migrate_sanity_kunit  =================
[12:18:30] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[12:18:30] ================== xe_validate_ccs_kunit  ==================
[12:18:30] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[12:18:30] =================== [SKIPPED] xe_migrate ===================
[12:18:30] ================== xe_dma_buf (1 subtest) ==================
[12:18:30] ==================== xe_dma_buf_kunit  =====================
[12:18:30] ================ [SKIPPED] xe_dma_buf_kunit ================
[12:18:30] =================== [SKIPPED] xe_dma_buf ===================
[12:18:30] ================= xe_bo_shrink (1 subtest) =================
[12:18:30] =================== xe_bo_shrink_kunit  ====================
[12:18:30] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[12:18:30] ================== [SKIPPED] xe_bo_shrink ==================
[12:18:30] ==================== xe_bo (2 subtests) ====================
[12:18:30] ================== xe_ccs_migrate_kunit  ===================
[12:18:30] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[12:18:30] ==================== xe_bo_evict_kunit  ====================
[12:18:30] =============== [SKIPPED] xe_bo_evict_kunit ================
[12:18:30] ===================== [SKIPPED] xe_bo ======================
[12:18:30] ==================== args (13 subtests) ====================
[12:18:30] [PASSED] count_args_test
[12:18:30] [PASSED] call_args_example
[12:18:30] [PASSED] call_args_test
[12:18:30] [PASSED] drop_first_arg_example
[12:18:30] [PASSED] drop_first_arg_test
[12:18:30] [PASSED] first_arg_example
[12:18:30] [PASSED] first_arg_test
[12:18:30] [PASSED] last_arg_example
[12:18:30] [PASSED] last_arg_test
[12:18:30] [PASSED] pick_arg_example
[12:18:30] [PASSED] if_args_example
[12:18:30] [PASSED] if_args_test
[12:18:30] [PASSED] sep_comma_example
[12:18:30] ====================== [PASSED] args =======================
[12:18:30] =================== xe_pci (3 subtests) ====================
[12:18:30] ==================== check_graphics_ip  ====================
[12:18:30] [PASSED] 12.00 Xe_LP
[12:18:30] [PASSED] 12.10 Xe_LP+
[12:18:30] [PASSED] 12.55 Xe_HPG
[12:18:30] [PASSED] 12.60 Xe_HPC
[12:18:30] [PASSED] 12.70 Xe_LPG
[12:18:30] [PASSED] 12.71 Xe_LPG
[12:18:30] [PASSED] 12.74 Xe_LPG+
[12:18:30] [PASSED] 20.01 Xe2_HPG
[12:18:30] [PASSED] 20.02 Xe2_HPG
[12:18:30] [PASSED] 20.04 Xe2_LPG
[12:18:30] [PASSED] 30.00 Xe3_LPG
[12:18:30] [PASSED] 30.01 Xe3_LPG
[12:18:30] [PASSED] 30.03 Xe3_LPG
[12:18:30] [PASSED] 30.04 Xe3_LPG
[12:18:30] [PASSED] 30.05 Xe3_LPG
[12:18:30] [PASSED] 35.10 Xe3p_LPG
[12:18:30] [PASSED] 35.11 Xe3p_XPC
[12:18:30] ================ [PASSED] check_graphics_ip ================
[12:18:30] ===================== check_media_ip  ======================
[12:18:30] [PASSED] 12.00 Xe_M
[12:18:30] [PASSED] 12.55 Xe_HPM
[12:18:30] [PASSED] 13.00 Xe_LPM+
[12:18:30] [PASSED] 13.01 Xe2_HPM
[12:18:30] [PASSED] 20.00 Xe2_LPM
[12:18:30] [PASSED] 30.00 Xe3_LPM
[12:18:30] [PASSED] 30.02 Xe3_LPM
[12:18:30] [PASSED] 35.00 Xe3p_LPM
[12:18:30] [PASSED] 35.03 Xe3p_HPM
[12:18:30] ================= [PASSED] check_media_ip ==================
[12:18:30] =================== check_platform_desc  ===================
[12:18:30] [PASSED] 0x9A60 (TIGERLAKE)
[12:18:30] [PASSED] 0x9A68 (TIGERLAKE)
[12:18:30] [PASSED] 0x9A70 (TIGERLAKE)
[12:18:30] [PASSED] 0x9A40 (TIGERLAKE)
[12:18:30] [PASSED] 0x9A49 (TIGERLAKE)
[12:18:30] [PASSED] 0x9A59 (TIGERLAKE)
[12:18:30] [PASSED] 0x9A78 (TIGERLAKE)
[12:18:30] [PASSED] 0x9AC0 (TIGERLAKE)
[12:18:30] [PASSED] 0x9AC9 (TIGERLAKE)
[12:18:30] [PASSED] 0x9AD9 (TIGERLAKE)
[12:18:30] [PASSED] 0x9AF8 (TIGERLAKE)
[12:18:30] [PASSED] 0x4C80 (ROCKETLAKE)
[12:18:30] [PASSED] 0x4C8A (ROCKETLAKE)
[12:18:30] [PASSED] 0x4C8B (ROCKETLAKE)
[12:18:30] [PASSED] 0x4C8C (ROCKETLAKE)
[12:18:30] [PASSED] 0x4C90 (ROCKETLAKE)
[12:18:30] [PASSED] 0x4C9A (ROCKETLAKE)
[12:18:30] [PASSED] 0x4680 (ALDERLAKE_S)
[12:18:30] [PASSED] 0x4682 (ALDERLAKE_S)
[12:18:30] [PASSED] 0x4688 (ALDERLAKE_S)
[12:18:30] [PASSED] 0x468A (ALDERLAKE_S)
[12:18:30] [PASSED] 0x468B (ALDERLAKE_S)
[12:18:30] [PASSED] 0x4690 (ALDERLAKE_S)
[12:18:30] [PASSED] 0x4692 (ALDERLAKE_S)
[12:18:30] [PASSED] 0x4693 (ALDERLAKE_S)
[12:18:30] [PASSED] 0x46A0 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46A1 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46A2 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46A3 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46A6 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46A8 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46AA (ALDERLAKE_P)
[12:18:30] [PASSED] 0x462A (ALDERLAKE_P)
[12:18:30] [PASSED] 0x4626 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x4628 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46B0 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46B1 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46B2 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46B3 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46C0 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46C1 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46C2 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46C3 (ALDERLAKE_P)
[12:18:30] [PASSED] 0x46D0 (ALDERLAKE_N)
[12:18:30] [PASSED] 0x46D1 (ALDERLAKE_N)
[12:18:30] [PASSED] 0x46D2 (ALDERLAKE_N)
[12:18:30] [PASSED] 0x46D3 (ALDERLAKE_N)
[12:18:30] [PASSED] 0x46D4 (ALDERLAKE_N)
[12:18:30] [PASSED] 0xA721 (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7A1 (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7A9 (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7AC (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7AD (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA720 (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7A0 (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7A8 (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7AA (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA7AB (ALDERLAKE_P)
[12:18:30] [PASSED] 0xA780 (ALDERLAKE_S)
[12:18:30] [PASSED] 0xA781 (ALDERLAKE_S)
[12:18:30] [PASSED] 0xA782 (ALDERLAKE_S)
[12:18:30] [PASSED] 0xA783 (ALDERLAKE_S)
[12:18:30] [PASSED] 0xA788 (ALDERLAKE_S)
[12:18:30] [PASSED] 0xA789 (ALDERLAKE_S)
[12:18:30] [PASSED] 0xA78A (ALDERLAKE_S)
[12:18:30] [PASSED] 0xA78B (ALDERLAKE_S)
[12:18:30] [PASSED] 0x4905 (DG1)
[12:18:30] [PASSED] 0x4906 (DG1)
[12:18:30] [PASSED] 0x4907 (DG1)
[12:18:30] [PASSED] 0x4908 (DG1)
[12:18:30] [PASSED] 0x4909 (DG1)
[12:18:30] [PASSED] 0x56C0 (DG2)
[12:18:30] [PASSED] 0x56C2 (DG2)
[12:18:30] [PASSED] 0x56C1 (DG2)
[12:18:30] [PASSED] 0x7D51 (METEORLAKE)
[12:18:30] [PASSED] 0x7DD1 (METEORLAKE)
[12:18:30] [PASSED] 0x7D41 (METEORLAKE)
[12:18:30] [PASSED] 0x7D67 (METEORLAKE)
[12:18:30] [PASSED] 0xB640 (METEORLAKE)
[12:18:30] [PASSED] 0x56A0 (DG2)
[12:18:30] [PASSED] 0x56A1 (DG2)
[12:18:30] [PASSED] 0x56A2 (DG2)
[12:18:30] [PASSED] 0x56BE (DG2)
[12:18:30] [PASSED] 0x56BF (DG2)
[12:18:30] [PASSED] 0x5690 (DG2)
[12:18:30] [PASSED] 0x5691 (DG2)
[12:18:30] [PASSED] 0x5692 (DG2)
[12:18:30] [PASSED] 0x56A5 (DG2)
[12:18:30] [PASSED] 0x56A6 (DG2)
[12:18:30] [PASSED] 0x56B0 (DG2)
[12:18:30] [PASSED] 0x56B1 (DG2)
[12:18:30] [PASSED] 0x56BA (DG2)
[12:18:30] [PASSED] 0x56BB (DG2)
[12:18:30] [PASSED] 0x56BC (DG2)
[12:18:30] [PASSED] 0x56BD (DG2)
[12:18:30] [PASSED] 0x5693 (DG2)
[12:18:30] [PASSED] 0x5694 (DG2)
[12:18:30] [PASSED] 0x5695 (DG2)
[12:18:30] [PASSED] 0x56A3 (DG2)
[12:18:30] [PASSED] 0x56A4 (DG2)
[12:18:30] [PASSED] 0x56B2 (DG2)
[12:18:30] [PASSED] 0x56B3 (DG2)
[12:18:30] [PASSED] 0x5696 (DG2)
[12:18:30] [PASSED] 0x5697 (DG2)
[12:18:30] [PASSED] 0xB69 (PVC)
[12:18:30] [PASSED] 0xB6E (PVC)
[12:18:30] [PASSED] 0xBD4 (PVC)
[12:18:30] [PASSED] 0xBD5 (PVC)
[12:18:30] [PASSED] 0xBD6 (PVC)
[12:18:30] [PASSED] 0xBD7 (PVC)
[12:18:30] [PASSED] 0xBD8 (PVC)
[12:18:30] [PASSED] 0xBD9 (PVC)
[12:18:30] [PASSED] 0xBDA (PVC)
[12:18:30] [PASSED] 0xBDB (PVC)
[12:18:30] [PASSED] 0xBE0 (PVC)
[12:18:30] [PASSED] 0xBE1 (PVC)
[12:18:30] [PASSED] 0xBE5 (PVC)
[12:18:30] [PASSED] 0x7D40 (METEORLAKE)
[12:18:30] [PASSED] 0x7D45 (METEORLAKE)
[12:18:30] [PASSED] 0x7D55 (METEORLAKE)
[12:18:30] [PASSED] 0x7D60 (METEORLAKE)
[12:18:30] [PASSED] 0x7DD5 (METEORLAKE)
[12:18:30] [PASSED] 0x6420 (LUNARLAKE)
[12:18:30] [PASSED] 0x64A0 (LUNARLAKE)
[12:18:30] [PASSED] 0x64B0 (LUNARLAKE)
[12:18:30] [PASSED] 0xE202 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE209 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE20B (BATTLEMAGE)
[12:18:30] [PASSED] 0xE20C (BATTLEMAGE)
[12:18:30] [PASSED] 0xE20D (BATTLEMAGE)
[12:18:30] [PASSED] 0xE210 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE211 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE212 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE216 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE220 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE221 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE222 (BATTLEMAGE)
[12:18:30] [PASSED] 0xE223 (BATTLEMAGE)
[12:18:30] [PASSED] 0xB080 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB081 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB082 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB083 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB084 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB085 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB086 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB087 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB08F (PANTHERLAKE)
[12:18:30] [PASSED] 0xB090 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB0A0 (PANTHERLAKE)
[12:18:30] [PASSED] 0xB0B0 (PANTHERLAKE)
[12:18:30] [PASSED] 0xFD80 (PANTHERLAKE)
[12:18:30] [PASSED] 0xFD81 (PANTHERLAKE)
[12:18:30] [PASSED] 0xD740 (NOVALAKE_S)
[12:18:30] [PASSED] 0xD741 (NOVALAKE_S)
[12:18:30] [PASSED] 0xD742 (NOVALAKE_S)
[12:18:30] [PASSED] 0xD743 (NOVALAKE_S)
[12:18:30] [PASSED] 0xD744 (NOVALAKE_S)
[12:18:30] [PASSED] 0xD745 (NOVALAKE_S)
[12:18:30] [PASSED] 0x674C (CRESCENTISLAND)
[12:18:30] [PASSED] 0xD750 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD751 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD752 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD753 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD754 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD755 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD756 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD757 (NOVALAKE_P)
[12:18:30] [PASSED] 0xD75F (NOVALAKE_P)
[12:18:30] =============== [PASSED] check_platform_desc ===============
[12:18:30] ===================== [PASSED] xe_pci ======================
[12:18:30] =================== xe_rtp (2 subtests) ====================
[12:18:30] =============== xe_rtp_process_to_sr_tests  ================
[12:18:30] [PASSED] coalesce-same-reg
[12:18:30] [PASSED] no-match-no-add
[12:18:30] [PASSED] match-or
[12:18:30] [PASSED] match-or-xfail
[12:18:30] [PASSED] no-match-no-add-multiple-rules
[12:18:30] [PASSED] two-regs-two-entries
[12:18:30] [PASSED] clr-one-set-other
[12:18:30] [PASSED] set-field
[12:18:30] [PASSED] conflict-duplicate
[12:18:30] [PASSED] conflict-not-disjoint
[12:18:30] [PASSED] conflict-reg-type
[12:18:30] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[12:18:30] ================== xe_rtp_process_tests  ===================
[12:18:30] [PASSED] active1
[12:18:30] [PASSED] active2
[12:18:30] [PASSED] active-inactive
[12:18:30] [PASSED] inactive-active
[12:18:30] [PASSED] inactive-1st_or_active-inactive
[12:18:30] [PASSED] inactive-2nd_or_active-inactive
[12:18:30] [PASSED] inactive-last_or_active-inactive
[12:18:30] [PASSED] inactive-no_or_active-inactive
[12:18:30] ============== [PASSED] xe_rtp_process_tests ===============
[12:18:30] ===================== [PASSED] xe_rtp ======================
[12:18:30] ==================== xe_wa (1 subtest) =====================
[12:18:30] ======================== xe_wa_gt  =========================
[12:18:30] [PASSED] TIGERLAKE B0
[12:18:30] [PASSED] DG1 A0
[12:18:30] [PASSED] DG1 B0
[12:18:30] [PASSED] ALDERLAKE_S A0
[12:18:30] [PASSED] ALDERLAKE_S B0
[12:18:30] [PASSED] ALDERLAKE_S C0
[12:18:30] [PASSED] ALDERLAKE_S D0
[12:18:30] [PASSED] ALDERLAKE_P A0
[12:18:30] [PASSED] ALDERLAKE_P B0
[12:18:30] [PASSED] ALDERLAKE_P C0
[12:18:30] [PASSED] ALDERLAKE_S RPLS D0
[12:18:30] [PASSED] ALDERLAKE_P RPLU E0
[12:18:30] [PASSED] DG2 G10 C0
[12:18:30] [PASSED] DG2 G11 B1
[12:18:30] [PASSED] DG2 G12 A1
[12:18:30] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[12:18:30] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[12:18:30] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[12:18:30] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[12:18:30] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[12:18:30] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[12:18:30] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[12:18:30] ==================== [PASSED] xe_wa_gt =====================
[12:18:30] ====================== [PASSED] xe_wa ======================
[12:18:30] ============================================================
[12:18:30] Testing complete. Ran 597 tests: passed: 579, skipped: 18
[12:18:30] Elapsed time: 35.601s total, 4.281s configuring, 30.652s building, 0.611s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[12:18:30] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[12:18:32] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[12:18:56] Starting KUnit Kernel (1/1)...
[12:18:56] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[12:18:56] ============ drm_test_pick_cmdline (2 subtests) ============
[12:18:56] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[12:18:56] =============== drm_test_pick_cmdline_named  ===============
[12:18:56] [PASSED] NTSC
[12:18:56] [PASSED] NTSC-J
[12:18:56] [PASSED] PAL
[12:18:56] [PASSED] PAL-M
[12:18:56] =========== [PASSED] drm_test_pick_cmdline_named ===========
[12:18:56] ============== [PASSED] drm_test_pick_cmdline ==============
[12:18:56] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[12:18:56] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[12:18:56] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[12:18:56] =========== drm_validate_clone_mode (2 subtests) ===========
[12:18:56] ============== drm_test_check_in_clone_mode  ===============
[12:18:56] [PASSED] in_clone_mode
[12:18:56] [PASSED] not_in_clone_mode
[12:18:56] ========== [PASSED] drm_test_check_in_clone_mode ===========
[12:18:56] =============== drm_test_check_valid_clones  ===============
[12:18:56] [PASSED] not_in_clone_mode
[12:18:56] [PASSED] valid_clone
[12:18:56] [PASSED] invalid_clone
[12:18:56] =========== [PASSED] drm_test_check_valid_clones ===========
[12:18:56] ============= [PASSED] drm_validate_clone_mode =============
[12:18:56] ============= drm_validate_modeset (1 subtest) =============
[12:18:56] [PASSED] drm_test_check_connector_changed_modeset
[12:18:56] ============== [PASSED] drm_validate_modeset ===============
[12:18:56] ====== drm_test_bridge_get_current_state (2 subtests) ======
[12:18:56] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[12:18:56] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[12:18:56] ======== [PASSED] drm_test_bridge_get_current_state ========
[12:18:56] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[12:18:56] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[12:18:56] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[12:18:56] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[12:18:56] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[12:18:56] ============== drm_bridge_alloc (2 subtests) ===============
[12:18:56] [PASSED] drm_test_drm_bridge_alloc_basic
[12:18:56] [PASSED] drm_test_drm_bridge_alloc_get_put
[12:18:56] ================ [PASSED] drm_bridge_alloc =================
[12:18:56] ============= drm_cmdline_parser (40 subtests) =============
[12:18:56] [PASSED] drm_test_cmdline_force_d_only
[12:18:56] [PASSED] drm_test_cmdline_force_D_only_dvi
[12:18:56] [PASSED] drm_test_cmdline_force_D_only_hdmi
[12:18:56] [PASSED] drm_test_cmdline_force_D_only_not_digital
[12:18:56] [PASSED] drm_test_cmdline_force_e_only
[12:18:56] [PASSED] drm_test_cmdline_res
[12:18:56] [PASSED] drm_test_cmdline_res_vesa
[12:18:56] [PASSED] drm_test_cmdline_res_vesa_rblank
[12:18:56] [PASSED] drm_test_cmdline_res_rblank
[12:18:56] [PASSED] drm_test_cmdline_res_bpp
[12:18:56] [PASSED] drm_test_cmdline_res_refresh
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[12:18:56] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[12:18:56] [PASSED] drm_test_cmdline_res_margins_force_on
[12:18:56] [PASSED] drm_test_cmdline_res_vesa_margins
[12:18:56] [PASSED] drm_test_cmdline_name
[12:18:56] [PASSED] drm_test_cmdline_name_bpp
[12:18:56] [PASSED] drm_test_cmdline_name_option
[12:18:56] [PASSED] drm_test_cmdline_name_bpp_option
[12:18:56] [PASSED] drm_test_cmdline_rotate_0
[12:18:56] [PASSED] drm_test_cmdline_rotate_90
[12:18:56] [PASSED] drm_test_cmdline_rotate_180
[12:18:56] [PASSED] drm_test_cmdline_rotate_270
[12:18:56] [PASSED] drm_test_cmdline_hmirror
[12:18:56] [PASSED] drm_test_cmdline_vmirror
[12:18:56] [PASSED] drm_test_cmdline_margin_options
[12:18:56] [PASSED] drm_test_cmdline_multiple_options
[12:18:56] [PASSED] drm_test_cmdline_bpp_extra_and_option
[12:18:56] [PASSED] drm_test_cmdline_extra_and_option
[12:18:56] [PASSED] drm_test_cmdline_freestanding_options
[12:18:56] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[12:18:56] [PASSED] drm_test_cmdline_panel_orientation
[12:18:56] ================ drm_test_cmdline_invalid  =================
[12:18:56] [PASSED] margin_only
[12:18:56] [PASSED] interlace_only
[12:18:56] [PASSED] res_missing_x
[12:18:56] [PASSED] res_missing_y
[12:18:56] [PASSED] res_bad_y
[12:18:56] [PASSED] res_missing_y_bpp
[12:18:56] [PASSED] res_bad_bpp
[12:18:56] [PASSED] res_bad_refresh
[12:18:56] [PASSED] res_bpp_refresh_force_on_off
[12:18:56] [PASSED] res_invalid_mode
[12:18:56] [PASSED] res_bpp_wrong_place_mode
[12:18:56] [PASSED] name_bpp_refresh
[12:18:56] [PASSED] name_refresh
[12:18:56] [PASSED] name_refresh_wrong_mode
[12:18:56] [PASSED] name_refresh_invalid_mode
[12:18:56] [PASSED] rotate_multiple
[12:18:56] [PASSED] rotate_invalid_val
[12:18:56] [PASSED] rotate_truncated
[12:18:56] [PASSED] invalid_option
[12:18:56] [PASSED] invalid_tv_option
[12:18:56] [PASSED] truncated_tv_option
[12:18:56] ============ [PASSED] drm_test_cmdline_invalid =============
[12:18:56] =============== drm_test_cmdline_tv_options  ===============
[12:18:56] [PASSED] NTSC
[12:18:56] [PASSED] NTSC_443
[12:18:56] [PASSED] NTSC_J
[12:18:56] [PASSED] PAL
[12:18:56] [PASSED] PAL_M
[12:18:56] [PASSED] PAL_N
[12:18:56] [PASSED] SECAM
[12:18:56] [PASSED] MONO_525
[12:18:56] [PASSED] MONO_625
[12:18:56] =========== [PASSED] drm_test_cmdline_tv_options ===========
[12:18:56] =============== [PASSED] drm_cmdline_parser ================
[12:18:56] ========== drmm_connector_hdmi_init (20 subtests) ==========
[12:18:56] [PASSED] drm_test_connector_hdmi_init_valid
[12:18:56] [PASSED] drm_test_connector_hdmi_init_bpc_8
[12:18:56] [PASSED] drm_test_connector_hdmi_init_bpc_10
[12:18:56] [PASSED] drm_test_connector_hdmi_init_bpc_12
[12:18:56] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[12:18:56] [PASSED] drm_test_connector_hdmi_init_bpc_null
[12:18:56] [PASSED] drm_test_connector_hdmi_init_formats_empty
[12:18:56] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[12:18:56] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[12:18:56] [PASSED] supported_formats=0x9 yuv420_allowed=1
[12:18:56] [PASSED] supported_formats=0x9 yuv420_allowed=0
[12:18:56] [PASSED] supported_formats=0x3 yuv420_allowed=1
[12:18:56] [PASSED] supported_formats=0x3 yuv420_allowed=0
[12:18:56] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[12:18:56] [PASSED] drm_test_connector_hdmi_init_null_ddc
[12:18:56] [PASSED] drm_test_connector_hdmi_init_null_product
[12:18:56] [PASSED] drm_test_connector_hdmi_init_null_vendor
[12:18:56] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[12:18:56] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[12:18:56] [PASSED] drm_test_connector_hdmi_init_product_valid
[12:18:56] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[12:18:56] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[12:18:56] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[12:18:56] ========= drm_test_connector_hdmi_init_type_valid  =========
[12:18:56] [PASSED] HDMI-A
[12:18:56] [PASSED] HDMI-B
[12:18:56] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[12:18:56] ======== drm_test_connector_hdmi_init_type_invalid  ========
[12:18:56] [PASSED] Unknown
[12:18:56] [PASSED] VGA
[12:18:56] [PASSED] DVI-I
[12:18:56] [PASSED] DVI-D
[12:18:56] [PASSED] DVI-A
[12:18:56] [PASSED] Composite
[12:18:56] [PASSED] SVIDEO
[12:18:56] [PASSED] LVDS
[12:18:56] [PASSED] Component
[12:18:56] [PASSED] DIN
[12:18:56] [PASSED] DP
[12:18:56] [PASSED] TV
[12:18:56] [PASSED] eDP
[12:18:56] [PASSED] Virtual
[12:18:56] [PASSED] DSI
[12:18:56] [PASSED] DPI
[12:18:56] [PASSED] Writeback
[12:18:56] [PASSED] SPI
[12:18:56] [PASSED] USB
[12:18:56] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[12:18:56] ============ [PASSED] drmm_connector_hdmi_init =============
[12:18:56] ============= drmm_connector_init (3 subtests) =============
[12:18:56] [PASSED] drm_test_drmm_connector_init
[12:18:56] [PASSED] drm_test_drmm_connector_init_null_ddc
[12:18:56] ========= drm_test_drmm_connector_init_type_valid  =========
[12:18:56] [PASSED] Unknown
[12:18:56] [PASSED] VGA
[12:18:56] [PASSED] DVI-I
[12:18:56] [PASSED] DVI-D
[12:18:56] [PASSED] DVI-A
[12:18:56] [PASSED] Composite
[12:18:56] [PASSED] SVIDEO
[12:18:56] [PASSED] LVDS
[12:18:56] [PASSED] Component
[12:18:56] [PASSED] DIN
[12:18:56] [PASSED] DP
[12:18:56] [PASSED] HDMI-A
[12:18:56] [PASSED] HDMI-B
[12:18:56] [PASSED] TV
[12:18:56] [PASSED] eDP
[12:18:56] [PASSED] Virtual
[12:18:56] [PASSED] DSI
[12:18:56] [PASSED] DPI
[12:18:56] [PASSED] Writeback
[12:18:56] [PASSED] SPI
[12:18:56] [PASSED] USB
[12:18:56] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[12:18:56] =============== [PASSED] drmm_connector_init ===============
[12:18:56] ========= drm_connector_dynamic_init (6 subtests) ==========
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_init
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_init_properties
[12:18:56] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[12:18:56] [PASSED] Unknown
[12:18:56] [PASSED] VGA
[12:18:56] [PASSED] DVI-I
[12:18:56] [PASSED] DVI-D
[12:18:56] [PASSED] DVI-A
[12:18:56] [PASSED] Composite
[12:18:56] [PASSED] SVIDEO
[12:18:56] [PASSED] LVDS
[12:18:56] [PASSED] Component
[12:18:56] [PASSED] DIN
[12:18:56] [PASSED] DP
[12:18:56] [PASSED] HDMI-A
[12:18:56] [PASSED] HDMI-B
[12:18:56] [PASSED] TV
[12:18:56] [PASSED] eDP
[12:18:56] [PASSED] Virtual
[12:18:56] [PASSED] DSI
[12:18:56] [PASSED] DPI
[12:18:56] [PASSED] Writeback
[12:18:56] [PASSED] SPI
[12:18:56] [PASSED] USB
[12:18:56] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[12:18:56] ======== drm_test_drm_connector_dynamic_init_name  =========
[12:18:56] [PASSED] Unknown
[12:18:56] [PASSED] VGA
[12:18:56] [PASSED] DVI-I
[12:18:56] [PASSED] DVI-D
[12:18:56] [PASSED] DVI-A
[12:18:56] [PASSED] Composite
[12:18:56] [PASSED] SVIDEO
[12:18:56] [PASSED] LVDS
[12:18:56] [PASSED] Component
[12:18:56] [PASSED] DIN
[12:18:56] [PASSED] DP
[12:18:56] [PASSED] HDMI-A
[12:18:56] [PASSED] HDMI-B
[12:18:56] [PASSED] TV
[12:18:56] [PASSED] eDP
[12:18:56] [PASSED] Virtual
[12:18:56] [PASSED] DSI
[12:18:56] [PASSED] DPI
[12:18:56] [PASSED] Writeback
[12:18:56] [PASSED] SPI
[12:18:56] [PASSED] USB
[12:18:56] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[12:18:56] =========== [PASSED] drm_connector_dynamic_init ============
[12:18:56] ==== drm_connector_dynamic_register_early (4 subtests) =====
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[12:18:56] ====== [PASSED] drm_connector_dynamic_register_early =======
[12:18:56] ======= drm_connector_dynamic_register (7 subtests) ========
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[12:18:56] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[12:18:56] ========= [PASSED] drm_connector_dynamic_register ==========
[12:18:56] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[12:18:56] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[12:18:56] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[12:18:56] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[12:18:56] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[12:18:56] ========== drm_test_get_tv_mode_from_name_valid  ===========
[12:18:56] [PASSED] NTSC
[12:18:56] [PASSED] NTSC-443
[12:18:56] [PASSED] NTSC-J
[12:18:56] [PASSED] PAL
[12:18:56] [PASSED] PAL-M
[12:18:56] [PASSED] PAL-N
[12:18:56] [PASSED] SECAM
[12:18:56] [PASSED] Mono
[12:18:56] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[12:18:56] [PASSED] drm_test_get_tv_mode_from_name_truncated
[12:18:56] ============ [PASSED] drm_get_tv_mode_from_name ============
[12:18:56] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[12:18:56] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[12:18:56] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[12:18:56] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[12:18:56] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[12:18:56] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[12:18:56] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[12:18:56] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[12:18:56] [PASSED] VIC 96
[12:18:56] [PASSED] VIC 97
[12:18:56] [PASSED] VIC 101
[12:18:56] [PASSED] VIC 102
[12:18:56] [PASSED] VIC 106
[12:18:56] [PASSED] VIC 107
[12:18:56] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[12:18:56] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[12:18:56] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[12:18:56] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[12:18:56] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[12:18:56] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[12:18:56] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[12:18:56] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[12:18:56] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[12:18:56] [PASSED] Automatic
[12:18:56] [PASSED] Full
[12:18:56] [PASSED] Limited 16:235
[12:18:56] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[12:18:56] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[12:18:56] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[12:18:56] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[12:18:56] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[12:18:56] [PASSED] RGB
[12:18:56] [PASSED] YUV 4:2:0
[12:18:56] [PASSED] YUV 4:2:2
[12:18:56] [PASSED] YUV 4:4:4
[12:18:56] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[12:18:56] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[12:18:56] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[12:18:56] ============= drm_damage_helper (21 subtests) ==============
[12:18:56] [PASSED] drm_test_damage_iter_no_damage
[12:18:56] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[12:18:56] [PASSED] drm_test_damage_iter_no_damage_src_moved
[12:18:56] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[12:18:56] [PASSED] drm_test_damage_iter_no_damage_not_visible
[12:18:56] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[12:18:56] [PASSED] drm_test_damage_iter_no_damage_no_fb
[12:18:56] [PASSED] drm_test_damage_iter_simple_damage
[12:18:56] [PASSED] drm_test_damage_iter_single_damage
[12:18:56] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[12:18:56] [PASSED] drm_test_damage_iter_single_damage_outside_src
[12:18:56] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[12:18:56] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[12:18:56] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[12:18:56] [PASSED] drm_test_damage_iter_single_damage_src_moved
[12:18:56] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[12:18:56] [PASSED] drm_test_damage_iter_damage
[12:18:56] [PASSED] drm_test_damage_iter_damage_one_intersect
[12:18:56] [PASSED] drm_test_damage_iter_damage_one_outside
[12:18:56] [PASSED] drm_test_damage_iter_damage_src_moved
[12:18:56] [PASSED] drm_test_damage_iter_damage_not_visible
[12:18:56] ================ [PASSED] drm_damage_helper ================
[12:18:56] ============== drm_dp_mst_helper (3 subtests) ==============
[12:18:56] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[12:18:56] [PASSED] Clock 154000 BPP 30 DSC disabled
[12:18:56] [PASSED] Clock 234000 BPP 30 DSC disabled
[12:18:56] [PASSED] Clock 297000 BPP 24 DSC disabled
[12:18:56] [PASSED] Clock 332880 BPP 24 DSC enabled
[12:18:56] [PASSED] Clock 324540 BPP 24 DSC enabled
[12:18:56] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[12:18:56] ============== drm_test_dp_mst_calc_pbn_div  ===============
[12:18:56] [PASSED] Link rate 2000000 lane count 4
[12:18:56] [PASSED] Link rate 2000000 lane count 2
[12:18:56] [PASSED] Link rate 2000000 lane count 1
[12:18:56] [PASSED] Link rate 1350000 lane count 4
[12:18:56] [PASSED] Link rate 1350000 lane count 2
[12:18:56] [PASSED] Link rate 1350000 lane count 1
[12:18:56] [PASSED] Link rate 1000000 lane count 4
[12:18:56] [PASSED] Link rate 1000000 lane count 2
[12:18:56] [PASSED] Link rate 1000000 lane count 1
[12:18:56] [PASSED] Link rate 810000 lane count 4
[12:18:56] [PASSED] Link rate 810000 lane count 2
[12:18:56] [PASSED] Link rate 810000 lane count 1
[12:18:56] [PASSED] Link rate 540000 lane count 4
[12:18:56] [PASSED] Link rate 540000 lane count 2
[12:18:56] [PASSED] Link rate 540000 lane count 1
[12:18:56] [PASSED] Link rate 270000 lane count 4
[12:18:56] [PASSED] Link rate 270000 lane count 2
[12:18:56] [PASSED] Link rate 270000 lane count 1
[12:18:56] [PASSED] Link rate 162000 lane count 4
[12:18:56] [PASSED] Link rate 162000 lane count 2
[12:18:56] [PASSED] Link rate 162000 lane count 1
[12:18:56] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[12:18:56] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[12:18:56] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[12:18:56] [PASSED] DP_POWER_UP_PHY with port number
[12:18:56] [PASSED] DP_POWER_DOWN_PHY with port number
[12:18:56] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[12:18:56] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[12:18:56] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[12:18:56] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[12:18:56] [PASSED] DP_QUERY_PAYLOAD with port number
[12:18:56] [PASSED] DP_QUERY_PAYLOAD with VCPI
[12:18:56] [PASSED] DP_REMOTE_DPCD_READ with port number
[12:18:56] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[12:18:56] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[12:18:56] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[12:18:56] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[12:18:56] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[12:18:56] [PASSED] DP_REMOTE_I2C_READ with port number
[12:18:56] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[12:18:56] [PASSED] DP_REMOTE_I2C_READ with transactions array
[12:18:56] [PASSED] DP_REMOTE_I2C_WRITE with port number
[12:18:56] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[12:18:56] [PASSED] DP_REMOTE_I2C_WRITE with data array
[12:18:56] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[12:18:56] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[12:18:56] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[12:18:56] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[12:18:56] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[12:18:56] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[12:18:56] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[12:18:56] ================ [PASSED] drm_dp_mst_helper ================
[12:18:56] ================== drm_exec (7 subtests) ===================
[12:18:56] [PASSED] sanitycheck
[12:18:56] [PASSED] test_lock
[12:18:56] [PASSED] test_lock_unlock
[12:18:56] [PASSED] test_duplicates
[12:18:56] [PASSED] test_prepare
[12:18:56] [PASSED] test_prepare_array
[12:18:56] [PASSED] test_multiple_loops
[12:18:56] ==================== [PASSED] drm_exec =====================
[12:18:56] =========== drm_format_helper_test (17 subtests) ===========
[12:18:56] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[12:18:56] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[12:18:56] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[12:18:56] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[12:18:56] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[12:18:56] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[12:18:56] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[12:18:56] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[12:18:56] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[12:18:56] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[12:18:56] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[12:18:56] ============== drm_test_fb_xrgb8888_to_mono  ===============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[12:18:56] ==================== drm_test_fb_swab  =====================
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ================ [PASSED] drm_test_fb_swab =================
[12:18:56] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[12:18:56] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[12:18:56] [PASSED] single_pixel_source_buffer
[12:18:56] [PASSED] single_pixel_clip_rectangle
[12:18:56] [PASSED] well_known_colors
[12:18:56] [PASSED] destination_pitch
[12:18:56] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[12:18:56] ================= drm_test_fb_clip_offset  =================
[12:18:56] [PASSED] pass through
[12:18:56] [PASSED] horizontal offset
[12:18:56] [PASSED] vertical offset
[12:18:56] [PASSED] horizontal and vertical offset
[12:18:56] [PASSED] horizontal offset (custom pitch)
[12:18:56] [PASSED] vertical offset (custom pitch)
[12:18:56] [PASSED] horizontal and vertical offset (custom pitch)
[12:18:56] ============= [PASSED] drm_test_fb_clip_offset =============
[12:18:56] =================== drm_test_fb_memcpy  ====================
[12:18:56] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[12:18:56] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[12:18:56] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[12:18:56] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[12:18:56] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[12:18:56] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[12:18:56] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[12:18:56] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[12:18:56] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[12:18:56] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[12:18:56] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[12:18:56] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[12:18:56] =============== [PASSED] drm_test_fb_memcpy ================
[12:18:56] ============= [PASSED] drm_format_helper_test ==============
[12:18:56] ================= drm_format (18 subtests) =================
[12:18:56] [PASSED] drm_test_format_block_width_invalid
[12:18:56] [PASSED] drm_test_format_block_width_one_plane
[12:18:56] [PASSED] drm_test_format_block_width_two_plane
[12:18:56] [PASSED] drm_test_format_block_width_three_plane
[12:18:56] [PASSED] drm_test_format_block_width_tiled
[12:18:56] [PASSED] drm_test_format_block_height_invalid
[12:18:56] [PASSED] drm_test_format_block_height_one_plane
[12:18:56] [PASSED] drm_test_format_block_height_two_plane
[12:18:56] [PASSED] drm_test_format_block_height_three_plane
[12:18:56] [PASSED] drm_test_format_block_height_tiled
[12:18:56] [PASSED] drm_test_format_min_pitch_invalid
[12:18:56] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[12:18:56] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[12:18:56] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[12:18:56] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[12:18:56] [PASSED] drm_test_format_min_pitch_two_plane
[12:18:56] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[12:18:56] [PASSED] drm_test_format_min_pitch_tiled
[12:18:56] =================== [PASSED] drm_format ====================
[12:18:56] ============== drm_framebuffer (10 subtests) ===============
[12:18:56] ========== drm_test_framebuffer_check_src_coords  ==========
[12:18:56] [PASSED] Success: source fits into fb
[12:18:56] [PASSED] Fail: overflowing fb with x-axis coordinate
[12:18:56] [PASSED] Fail: overflowing fb with y-axis coordinate
[12:18:56] [PASSED] Fail: overflowing fb with source width
[12:18:56] [PASSED] Fail: overflowing fb with source height
[12:18:56] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[12:18:56] [PASSED] drm_test_framebuffer_cleanup
[12:18:56] =============== drm_test_framebuffer_create  ===============
[12:18:56] [PASSED] ABGR8888 normal sizes
[12:18:56] [PASSED] ABGR8888 max sizes
[12:18:56] [PASSED] ABGR8888 pitch greater than min required
[12:18:56] [PASSED] ABGR8888 pitch less than min required
[12:18:56] [PASSED] ABGR8888 Invalid width
[12:18:56] [PASSED] ABGR8888 Invalid buffer handle
[12:18:56] [PASSED] No pixel format
[12:18:56] [PASSED] ABGR8888 Width 0
[12:18:56] [PASSED] ABGR8888 Height 0
[12:18:56] [PASSED] ABGR8888 Out of bound height * pitch combination
[12:18:56] [PASSED] ABGR8888 Large buffer offset
[12:18:56] [PASSED] ABGR8888 Buffer offset for inexistent plane
[12:18:56] [PASSED] ABGR8888 Invalid flag
[12:18:56] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[12:18:56] [PASSED] ABGR8888 Valid buffer modifier
[12:18:56] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[12:18:56] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[12:18:56] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[12:18:56] [PASSED] NV12 Normal sizes
[12:18:56] [PASSED] NV12 Max sizes
[12:18:56] [PASSED] NV12 Invalid pitch
[12:18:56] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[12:18:56] [PASSED] NV12 different  modifier per-plane
[12:18:56] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[12:18:56] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[12:18:56] [PASSED] NV12 Modifier for inexistent plane
[12:18:56] [PASSED] NV12 Handle for inexistent plane
[12:18:56] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[12:18:56] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[12:18:56] [PASSED] YVU420 Normal sizes
[12:18:56] [PASSED] YVU420 Max sizes
[12:18:56] [PASSED] YVU420 Invalid pitch
[12:18:56] [PASSED] YVU420 Different pitches
[12:18:56] [PASSED] YVU420 Different buffer offsets/pitches
[12:18:56] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[12:18:56] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[12:18:56] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[12:18:56] [PASSED] YVU420 Valid modifier
[12:18:56] [PASSED] YVU420 Different modifiers per plane
[12:18:56] [PASSED] YVU420 Modifier for inexistent plane
[12:18:56] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[12:18:56] [PASSED] X0L2 Normal sizes
[12:18:56] [PASSED] X0L2 Max sizes
[12:18:56] [PASSED] X0L2 Invalid pitch
[12:18:56] [PASSED] X0L2 Pitch greater than minimum required
[12:18:56] [PASSED] X0L2 Handle for inexistent plane
[12:18:56] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[12:18:56] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[12:18:56] [PASSED] X0L2 Valid modifier
[12:18:56] [PASSED] X0L2 Modifier for inexistent plane
[12:18:56] =========== [PASSED] drm_test_framebuffer_create ===========
[12:18:56] [PASSED] drm_test_framebuffer_free
[12:18:56] [PASSED] drm_test_framebuffer_init
[12:18:56] [PASSED] drm_test_framebuffer_init_bad_format
[12:18:56] [PASSED] drm_test_framebuffer_init_dev_mismatch
[12:18:56] [PASSED] drm_test_framebuffer_lookup
[12:18:56] [PASSED] drm_test_framebuffer_lookup_inexistent
[12:18:56] [PASSED] drm_test_framebuffer_modifiers_not_supported
[12:18:56] ================= [PASSED] drm_framebuffer =================
[12:18:56] ================ drm_gem_shmem (8 subtests) ================
[12:18:56] [PASSED] drm_gem_shmem_test_obj_create
[12:18:56] [PASSED] drm_gem_shmem_test_obj_create_private
[12:18:56] [PASSED] drm_gem_shmem_test_pin_pages
[12:18:56] [PASSED] drm_gem_shmem_test_vmap
[12:18:56] [PASSED] drm_gem_shmem_test_get_sg_table
[12:18:56] [PASSED] drm_gem_shmem_test_get_pages_sgt
[12:18:56] [PASSED] drm_gem_shmem_test_madvise
[12:18:56] [PASSED] drm_gem_shmem_test_purge
[12:18:56] ================== [PASSED] drm_gem_shmem ==================
[12:18:56] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[12:18:56] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[12:18:56] [PASSED] Automatic
[12:18:56] [PASSED] Full
[12:18:56] [PASSED] Limited 16:235
[12:18:56] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[12:18:56] [PASSED] drm_test_check_disable_connector
[12:18:56] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[12:18:56] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[12:18:56] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[12:18:56] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[12:18:56] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[12:18:56] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[12:18:56] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[12:18:56] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[12:18:56] [PASSED] drm_test_check_output_bpc_dvi
[12:18:56] [PASSED] drm_test_check_output_bpc_format_vic_1
[12:18:56] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[12:18:56] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[12:18:56] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[12:18:56] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[12:18:56] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[12:18:56] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[12:18:56] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[12:18:56] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[12:18:56] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[12:18:56] [PASSED] drm_test_check_broadcast_rgb_value
[12:18:56] [PASSED] drm_test_check_bpc_8_value
[12:18:56] [PASSED] drm_test_check_bpc_10_value
[12:18:56] [PASSED] drm_test_check_bpc_12_value
[12:18:56] [PASSED] drm_test_check_format_value
[12:18:56] [PASSED] drm_test_check_tmds_char_value
[12:18:56] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[12:18:56] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[12:18:56] [PASSED] drm_test_check_mode_valid
[12:18:56] [PASSED] drm_test_check_mode_valid_reject
[12:18:56] [PASSED] drm_test_check_mode_valid_reject_rate
[12:18:56] [PASSED] drm_test_check_mode_valid_reject_max_clock
[12:18:56] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[12:18:56] = drm_atomic_helper_connector_hdmi_infoframes (5 subtests) =
[12:18:56] [PASSED] drm_test_check_infoframes
[12:18:56] [PASSED] drm_test_check_reject_avi_infoframe
[12:18:56] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_8
[12:18:56] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_10
[12:18:56] [PASSED] drm_test_check_reject_audio_infoframe
[12:18:56] === [PASSED] drm_atomic_helper_connector_hdmi_infoframes ===
[12:18:56] ================= drm_managed (2 subtests) =================
[12:18:56] [PASSED] drm_test_managed_release_action
[12:18:56] [PASSED] drm_test_managed_run_action
[12:18:56] =================== [PASSED] drm_managed ===================
[12:18:56] =================== drm_mm (6 subtests) ====================
[12:18:56] [PASSED] drm_test_mm_init
[12:18:56] [PASSED] drm_test_mm_debug
[12:18:56] [PASSED] drm_test_mm_align32
[12:18:56] [PASSED] drm_test_mm_align64
[12:18:56] [PASSED] drm_test_mm_lowest
[12:18:56] [PASSED] drm_test_mm_highest
[12:18:56] ===================== [PASSED] drm_mm ======================
[12:18:56] ============= drm_modes_analog_tv (5 subtests) =============
[12:18:56] [PASSED] drm_test_modes_analog_tv_mono_576i
[12:18:56] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[12:18:56] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[12:18:56] [PASSED] drm_test_modes_analog_tv_pal_576i
[12:18:56] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[12:18:56] =============== [PASSED] drm_modes_analog_tv ===============
[12:18:56] ============== drm_plane_helper (2 subtests) ===============
[12:18:56] =============== drm_test_check_plane_state  ================
[12:18:56] [PASSED] clipping_simple
[12:18:56] [PASSED] clipping_rotate_reflect
[12:18:56] [PASSED] positioning_simple
[12:18:56] [PASSED] upscaling
[12:18:56] [PASSED] downscaling
[12:18:56] [PASSED] rounding1
[12:18:56] [PASSED] rounding2
[12:18:56] [PASSED] rounding3
[12:18:56] [PASSED] rounding4
[12:18:56] =========== [PASSED] drm_test_check_plane_state ============
[12:18:56] =========== drm_test_check_invalid_plane_state  ============
[12:18:56] [PASSED] positioning_invalid
[12:18:56] [PASSED] upscaling_invalid
[12:18:56] [PASSED] downscaling_invalid
[12:18:56] ======= [PASSED] drm_test_check_invalid_plane_state ========
[12:18:56] ================ [PASSED] drm_plane_helper =================
[12:18:56] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[12:18:56] ====== drm_test_connector_helper_tv_get_modes_check  =======
[12:18:56] [PASSED] None
[12:18:56] [PASSED] PAL
[12:18:56] [PASSED] NTSC
[12:18:56] [PASSED] Both, NTSC Default
[12:18:56] [PASSED] Both, PAL Default
[12:18:56] [PASSED] Both, NTSC Default, with PAL on command-line
[12:18:56] [PASSED] Both, PAL Default, with NTSC on command-line
[12:18:56] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[12:18:56] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[12:18:56] ================== drm_rect (9 subtests) ===================
[12:18:56] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[12:18:56] [PASSED] drm_test_rect_clip_scaled_not_clipped
[12:18:56] [PASSED] drm_test_rect_clip_scaled_clipped
[12:18:56] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[12:18:56] ================= drm_test_rect_intersect  =================
[12:18:56] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[12:18:56] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[12:18:56] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[12:18:56] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[12:18:56] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[12:18:56] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[12:18:56] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[12:18:56] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[12:18:56] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[12:18:56] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[12:18:56] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[12:18:56] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[12:18:56] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[12:18:56] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[12:18:56] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[12:18:56] ============= [PASSED] drm_test_rect_intersect =============
[12:18:56] ================ drm_test_rect_calc_hscale  ================
[12:18:56] [PASSED] normal use
[12:18:56] [PASSED] out of max range
[12:18:56] [PASSED] out of min range
[12:18:56] [PASSED] zero dst
[12:18:56] [PASSED] negative src
[12:18:56] [PASSED] negative dst
[12:18:56] ============ [PASSED] drm_test_rect_calc_hscale ============
[12:18:56] ================ drm_test_rect_calc_vscale  ================
[12:18:56] [PASSED] normal use
[12:18:56] [PASSED] out of max range
[12:18:56] [PASSED] out of min range
[12:18:56] [PASSED] zero dst
[12:18:56] [PASSED] negative src
[12:18:56] [PASSED] negative dst
[12:18:56] ============ [PASSED] drm_test_rect_calc_vscale ============
[12:18:56] ================== drm_test_rect_rotate  ===================
[12:18:56] [PASSED] reflect-x
[12:18:56] [PASSED] reflect-y
[12:18:56] [PASSED] rotate-0
[12:18:56] [PASSED] rotate-90
[12:18:56] [PASSED] rotate-180
[12:18:56] [PASSED] rotate-270
[12:18:56] ============== [PASSED] drm_test_rect_rotate ===============
[12:18:56] ================ drm_test_rect_rotate_inv  =================
[12:18:56] [PASSED] reflect-x
[12:18:56] [PASSED] reflect-y
[12:18:56] [PASSED] rotate-0
[12:18:56] [PASSED] rotate-90
[12:18:56] [PASSED] rotate-180
[12:18:56] [PASSED] rotate-270
[12:18:56] ============ [PASSED] drm_test_rect_rotate_inv =============
[12:18:56] ==================== [PASSED] drm_rect =====================
[12:18:56] ============ drm_sysfb_modeset_test (1 subtest) ============
[12:18:56] ============ drm_test_sysfb_build_fourcc_list  =============
[12:18:56] [PASSED] no native formats
[12:18:56] [PASSED] XRGB8888 as native format
[12:18:56] [PASSED] remove duplicates
[12:18:56] [PASSED] convert alpha formats
[12:18:56] [PASSED] random formats
[12:18:56] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[12:18:56] ============= [PASSED] drm_sysfb_modeset_test ==============
[12:18:56] ================== drm_fixp (2 subtests) ===================
[12:18:56] [PASSED] drm_test_int2fixp
[12:18:56] [PASSED] drm_test_sm2fixp
[12:18:56] ==================== [PASSED] drm_fixp =====================
[12:18:56] ============================================================
[12:18:56] Testing complete. Ran 621 tests: passed: 621
[12:18:56] Elapsed time: 25.954s total, 1.677s configuring, 24.106s building, 0.143s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[12:18:56] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[12:18:58] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[12:19:07] Starting KUnit Kernel (1/1)...
[12:19:07] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[12:19:07] ================= ttm_device (5 subtests) ==================
[12:19:07] [PASSED] ttm_device_init_basic
[12:19:07] [PASSED] ttm_device_init_multiple
[12:19:07] [PASSED] ttm_device_fini_basic
[12:19:07] [PASSED] ttm_device_init_no_vma_man
[12:19:07] ================== ttm_device_init_pools  ==================
[12:19:07] [PASSED] No DMA allocations, no DMA32 required
[12:19:07] [PASSED] DMA allocations, DMA32 required
[12:19:07] [PASSED] No DMA allocations, DMA32 required
[12:19:07] [PASSED] DMA allocations, no DMA32 required
[12:19:07] ============== [PASSED] ttm_device_init_pools ==============
[12:19:07] =================== [PASSED] ttm_device ====================
[12:19:07] ================== ttm_pool (8 subtests) ===================
[12:19:07] ================== ttm_pool_alloc_basic  ===================
[12:19:07] [PASSED] One page
[12:19:07] [PASSED] More than one page
[12:19:07] [PASSED] Above the allocation limit
[12:19:07] [PASSED] One page, with coherent DMA mappings enabled
[12:19:07] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[12:19:07] ============== [PASSED] ttm_pool_alloc_basic ===============
[12:19:07] ============== ttm_pool_alloc_basic_dma_addr  ==============
[12:19:07] [PASSED] One page
[12:19:07] [PASSED] More than one page
[12:19:07] [PASSED] Above the allocation limit
[12:19:07] [PASSED] One page, with coherent DMA mappings enabled
[12:19:07] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[12:19:07] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[12:19:07] [PASSED] ttm_pool_alloc_order_caching_match
[12:19:07] [PASSED] ttm_pool_alloc_caching_mismatch
[12:19:07] [PASSED] ttm_pool_alloc_order_mismatch
[12:19:07] [PASSED] ttm_pool_free_dma_alloc
[12:19:07] [PASSED] ttm_pool_free_no_dma_alloc
[12:19:07] [PASSED] ttm_pool_fini_basic
[12:19:07] ==================== [PASSED] ttm_pool =====================
[12:19:07] ================ ttm_resource (8 subtests) =================
[12:19:07] ================= ttm_resource_init_basic  =================
[12:19:07] [PASSED] Init resource in TTM_PL_SYSTEM
[12:19:07] [PASSED] Init resource in TTM_PL_VRAM
[12:19:07] [PASSED] Init resource in a private placement
[12:19:07] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[12:19:07] ============= [PASSED] ttm_resource_init_basic =============
[12:19:07] [PASSED] ttm_resource_init_pinned
[12:19:07] [PASSED] ttm_resource_fini_basic
[12:19:07] [PASSED] ttm_resource_manager_init_basic
[12:19:07] [PASSED] ttm_resource_manager_usage_basic
[12:19:07] [PASSED] ttm_resource_manager_set_used_basic
[12:19:07] [PASSED] ttm_sys_man_alloc_basic
[12:19:07] [PASSED] ttm_sys_man_free_basic
[12:19:07] ================== [PASSED] ttm_resource ===================
[12:19:07] =================== ttm_tt (15 subtests) ===================
[12:19:07] ==================== ttm_tt_init_basic  ====================
[12:19:07] [PASSED] Page-aligned size
[12:19:07] [PASSED] Extra pages requested
[12:19:07] ================ [PASSED] ttm_tt_init_basic ================
[12:19:07] [PASSED] ttm_tt_init_misaligned
[12:19:07] [PASSED] ttm_tt_fini_basic
[12:19:07] [PASSED] ttm_tt_fini_sg
[12:19:07] [PASSED] ttm_tt_fini_shmem
[12:19:07] [PASSED] ttm_tt_create_basic
[12:19:07] [PASSED] ttm_tt_create_invalid_bo_type
[12:19:07] [PASSED] ttm_tt_create_ttm_exists
[12:19:07] [PASSED] ttm_tt_create_failed
[12:19:07] [PASSED] ttm_tt_destroy_basic
[12:19:07] [PASSED] ttm_tt_populate_null_ttm
[12:19:07] [PASSED] ttm_tt_populate_populated_ttm
[12:19:07] [PASSED] ttm_tt_unpopulate_basic
[12:19:07] [PASSED] ttm_tt_unpopulate_empty_ttm
[12:19:07] [PASSED] ttm_tt_swapin_basic
[12:19:07] ===================== [PASSED] ttm_tt ======================
[12:19:07] =================== ttm_bo (14 subtests) ===================
[12:19:07] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[12:19:07] [PASSED] Cannot be interrupted and sleeps
[12:19:07] [PASSED] Cannot be interrupted, locks straight away
[12:19:07] [PASSED] Can be interrupted, sleeps
[12:19:07] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[12:19:07] [PASSED] ttm_bo_reserve_locked_no_sleep
[12:19:07] [PASSED] ttm_bo_reserve_no_wait_ticket
[12:19:07] [PASSED] ttm_bo_reserve_double_resv
[12:19:07] [PASSED] ttm_bo_reserve_interrupted
[12:19:07] [PASSED] ttm_bo_reserve_deadlock
[12:19:07] [PASSED] ttm_bo_unreserve_basic
[12:19:07] [PASSED] ttm_bo_unreserve_pinned
[12:19:07] [PASSED] ttm_bo_unreserve_bulk
[12:19:07] [PASSED] ttm_bo_fini_basic
[12:19:07] [PASSED] ttm_bo_fini_shared_resv
[12:19:07] [PASSED] ttm_bo_pin_basic
[12:19:07] [PASSED] ttm_bo_pin_unpin_resource
[12:19:07] [PASSED] ttm_bo_multiple_pin_one_unpin
[12:19:07] ===================== [PASSED] ttm_bo ======================
[12:19:07] ============== ttm_bo_validate (22 subtests) ===============
[12:19:07] ============== ttm_bo_init_reserved_sys_man  ===============
[12:19:07] [PASSED] Buffer object for userspace
[12:19:07] [PASSED] Kernel buffer object
[12:19:07] [PASSED] Shared buffer object
[12:19:07] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[12:19:07] ============== ttm_bo_init_reserved_mock_man  ==============
[12:19:07] [PASSED] Buffer object for userspace
[12:19:07] [PASSED] Kernel buffer object
[12:19:07] [PASSED] Shared buffer object
[12:19:07] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[12:19:07] [PASSED] ttm_bo_init_reserved_resv
[12:19:07] ================== ttm_bo_validate_basic  ==================
[12:19:07] [PASSED] Buffer object for userspace
[12:19:07] [PASSED] Kernel buffer object
[12:19:07] [PASSED] Shared buffer object
[12:19:07] ============== [PASSED] ttm_bo_validate_basic ==============
[12:19:07] [PASSED] ttm_bo_validate_invalid_placement
[12:19:07] ============= ttm_bo_validate_same_placement  ==============
[12:19:07] [PASSED] System manager
[12:19:07] [PASSED] VRAM manager
[12:19:07] ========= [PASSED] ttm_bo_validate_same_placement ==========
[12:19:07] [PASSED] ttm_bo_validate_failed_alloc
[12:19:07] [PASSED] ttm_bo_validate_pinned
[12:19:07] [PASSED] ttm_bo_validate_busy_placement
[12:19:07] ================ ttm_bo_validate_multihop  =================
[12:19:07] [PASSED] Buffer object for userspace
[12:19:07] [PASSED] Kernel buffer object
[12:19:07] [PASSED] Shared buffer object
[12:19:07] ============ [PASSED] ttm_bo_validate_multihop =============
[12:19:07] ========== ttm_bo_validate_no_placement_signaled  ==========
[12:19:07] [PASSED] Buffer object in system domain, no page vector
[12:19:07] [PASSED] Buffer object in system domain with an existing page vector
[12:19:07] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[12:19:07] ======== ttm_bo_validate_no_placement_not_signaled  ========
[12:19:07] [PASSED] Buffer object for userspace
[12:19:07] [PASSED] Kernel buffer object
[12:19:07] [PASSED] Shared buffer object
[12:19:07] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[12:19:07] [PASSED] ttm_bo_validate_move_fence_signaled
[12:19:07] ========= ttm_bo_validate_move_fence_not_signaled  =========
[12:19:07] [PASSED] Waits for GPU
[12:19:07] [PASSED] Tries to lock straight away
[12:19:07] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[12:19:07] [PASSED] ttm_bo_validate_swapout
[12:19:07] [PASSED] ttm_bo_validate_happy_evict
[12:19:07] [PASSED] ttm_bo_validate_all_pinned_evict
[12:19:07] [PASSED] ttm_bo_validate_allowed_only_evict
[12:19:07] [PASSED] ttm_bo_validate_deleted_evict
[12:19:07] [PASSED] ttm_bo_validate_busy_domain_evict
[12:19:07] [PASSED] ttm_bo_validate_evict_gutting
[12:19:07] [PASSED] ttm_bo_validate_recrusive_evict
[12:19:07] ================= [PASSED] ttm_bo_validate =================
[12:19:07] ============================================================
[12:19:07] Testing complete. Ran 102 tests: passed: 102
[12:19:08] Elapsed time: 11.551s total, 1.678s configuring, 9.606s building, 0.228s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply	[flat|nested] 20+ messages in thread

* ✓ Xe.CI.BAT: success for USE drm mm instead of drm SA for CCS read/write
  2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
                   ` (4 preceding siblings ...)
  2026-03-20 12:19 ` ✓ CI.KUnit: success " Patchwork
@ 2026-03-20 13:08 ` Patchwork
  2026-03-21 11:52 ` ✗ Xe.CI.FULL: failure " Patchwork
  6 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2026-03-20 13:08 UTC (permalink / raw)
  To: Satyanarayana K V P; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 1412 bytes --]

== Series Details ==

Series: USE drm mm instead of drm SA for CCS read/write
URL   : https://patchwork.freedesktop.org/series/163588/
State : success

== Summary ==

CI Bug Log - changes from xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8_BAT -> xe-pw-163588v1_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (14 -> 14)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in xe-pw-163588v1_BAT that come from known issues:

### IGT changes ###

#### Possible fixes ####

  * igt@xe_waitfence@engine:
    - bat-dg2-oem2:       [FAIL][1] ([Intel XE#6519]) -> [PASS][2]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/bat-dg2-oem2/igt@xe_waitfence@engine.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/bat-dg2-oem2/igt@xe_waitfence@engine.html

  
  [Intel XE#6519]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6519


Build changes
-------------

  * Linux: xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8 -> xe-pw-163588v1

  IGT_8814: 8814
  xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8: 7535044a2418d22b59be0eb64af0353971f16bd8
  xe-pw-163588v1: 163588v1

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/index.html

[-- Attachment #2: Type: text/html, Size: 1977 bytes --]

^ permalink raw reply	[flat|nested] 20+ messages in thread

* ✗ Xe.CI.FULL: failure for USE drm mm instead of drm SA for CCS read/write
  2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
                   ` (5 preceding siblings ...)
  2026-03-20 13:08 ` ✓ Xe.CI.BAT: " Patchwork
@ 2026-03-21 11:52 ` Patchwork
  6 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2026-03-21 11:52 UTC (permalink / raw)
  To: Satyanarayana K V P; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 51528 bytes --]

== Series Details ==

Series: USE drm mm instead of drm SA for CCS read/write
URL   : https://patchwork.freedesktop.org/series/163588/
State : failure

== Summary ==

CI Bug Log - changes from xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8_FULL -> xe-pw-163588v1_FULL
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-163588v1_FULL absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-163588v1_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (2 -> 2)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-163588v1_FULL:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-dp-2:
    - shard-bmg:          [PASS][1] -> [FAIL][2] +1 other test fail
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-9/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-dp-2.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-4/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-dp-2.html

  
Known issues
------------

  Here are the changes found in xe-pw-163588v1_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@core_hotunplug@hotunplug-rescan:
    - shard-bmg:          [PASS][3] -> [ABORT][4] ([Intel XE#7578])
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-6/igt@core_hotunplug@hotunplug-rescan.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-7/igt@core_hotunplug@hotunplug-rescan.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-bmg:          [PASS][5] -> [SKIP][6] ([Intel XE#6703]) +150 other tests skip
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@linear-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-bmg:          NOTRUN -> [SKIP][7] ([Intel XE#7059] / [Intel XE#7085])
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@kms_big_fb@linear-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@x-tiled-16bpp-rotate-180:
    - shard-bmg:          [PASS][8] -> [DMESG-FAIL][9] ([Intel XE#5545])
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_big_fb@x-tiled-16bpp-rotate-180.html
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_big_fb@x-tiled-16bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip:
    - shard-bmg:          NOTRUN -> [SKIP][10] ([Intel XE#1124]) +1 other test skip
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip.html

  * igt@kms_bw@linear-tiling-1-displays-2160x1440p:
    - shard-bmg:          NOTRUN -> [SKIP][11] ([Intel XE#367] / [Intel XE#7354]) +1 other test skip
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@kms_bw@linear-tiling-1-displays-2160x1440p.html

  * igt@kms_ccs@bad-pixel-format-y-tiled-gen12-rc-ccs-cc:
    - shard-bmg:          NOTRUN -> [SKIP][12] ([Intel XE#2887]) +4 other tests skip
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_ccs@bad-pixel-format-y-tiled-gen12-rc-ccs-cc.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-b-dp-2:
    - shard-bmg:          NOTRUN -> [SKIP][13] ([Intel XE#2652]) +3 other tests skip
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-b-dp-2.html

  * igt@kms_content_protection@uevent-hdcp14@pipe-a-dp-2:
    - shard-bmg:          NOTRUN -> [FAIL][14] ([Intel XE#6707] / [Intel XE#7439])
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_content_protection@uevent-hdcp14@pipe-a-dp-2.html

  * igt@kms_cursor_crc@cursor-offscreen-128x42:
    - shard-bmg:          NOTRUN -> [SKIP][15] ([Intel XE#2320]) +2 other tests skip
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_cursor_crc@cursor-offscreen-128x42.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy:
    - shard-bmg:          NOTRUN -> [SKIP][16] ([Intel XE#2291])
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html

  * igt@kms_cursor_legacy@2x-long-nonblocking-modeset-vs-cursor-atomic:
    - shard-bmg:          [PASS][17] -> [SKIP][18] ([Intel XE#2291]) +1 other test skip
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-2/igt@kms_cursor_legacy@2x-long-nonblocking-modeset-vs-cursor-atomic.html
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_cursor_legacy@2x-long-nonblocking-modeset-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-bmg:          [PASS][19] -> [FAIL][20] ([Intel XE#7586])
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-9/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_dp_aux_dev:
    - shard-bmg:          [PASS][21] -> [SKIP][22] ([Intel XE#3009])
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-2/igt@kms_dp_aux_dev.html
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_dp_aux_dev.html

  * igt@kms_flip@2x-flip-vs-rmfb-interruptible:
    - shard-bmg:          NOTRUN -> [SKIP][23] ([Intel XE#2316])
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html

  * igt@kms_flip@2x-flip-vs-wf_vblank-interruptible:
    - shard-bmg:          [PASS][24] -> [SKIP][25] ([Intel XE#2316]) +7 other tests skip
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-2/igt@kms_flip@2x-flip-vs-wf_vblank-interruptible.html
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_flip@2x-flip-vs-wf_vblank-interruptible.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-bmg:          [PASS][26] -> [INCOMPLETE][27] ([Intel XE#2049] / [Intel XE#2597]) +3 other tests incomplete
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_flip@flip-vs-suspend-interruptible.html
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling:
    - shard-bmg:          NOTRUN -> [SKIP][28] ([Intel XE#7178] / [Intel XE#7349])
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-upscaling:
    - shard-bmg:          NOTRUN -> [SKIP][29] ([Intel XE#7178] / [Intel XE#7351]) +2 other tests skip
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-upscaling.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-move:
    - shard-bmg:          NOTRUN -> [SKIP][30] ([Intel XE#2311]) +2 other tests skip
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-move.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-spr-indfb-draw-render:
    - shard-bmg:          NOTRUN -> [SKIP][31] ([Intel XE#2312]) +5 other tests skip
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-argb161616f-draw-render:
    - shard-bmg:          NOTRUN -> [SKIP][32] ([Intel XE#7061] / [Intel XE#7356])
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcpsr-argb161616f-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-render:
    - shard-bmg:          NOTRUN -> [SKIP][33] ([Intel XE#2313]) +3 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-render.html

  * igt@kms_hdr@static-toggle-suspend:
    - shard-bmg:          [PASS][34] -> [SKIP][35] ([Intel XE#1503]) +1 other test skip
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-2/igt@kms_hdr@static-toggle-suspend.html
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_hdr@static-toggle-suspend.html

  * igt@kms_joiner@basic-force-big-joiner:
    - shard-bmg:          [PASS][36] -> [SKIP][37] ([Intel XE#7086])
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-4/igt@kms_joiner@basic-force-big-joiner.html
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-5/igt@kms_joiner@basic-force-big-joiner.html

  * igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier-source-clamping:
    - shard-bmg:          NOTRUN -> [SKIP][38] ([Intel XE#7283])
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier-source-clamping.html

  * igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-sf-dmg-area:
    - shard-bmg:          NOTRUN -> [SKIP][39] ([Intel XE#1489])
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-sf-dmg-area.html

  * igt@kms_psr@psr-sprite-render:
    - shard-bmg:          NOTRUN -> [SKIP][40] ([Intel XE#2234] / [Intel XE#2850]) +1 other test skip
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_psr@psr-sprite-render.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
    - shard-bmg:          NOTRUN -> [SKIP][41] ([Intel XE#3414] / [Intel XE#3904] / [Intel XE#7342]) +1 other test skip
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html

  * igt@kms_scaling_modes@scaling-mode-full-aspect:
    - shard-bmg:          NOTRUN -> [SKIP][42] ([Intel XE#2413])
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@kms_scaling_modes@scaling-mode-full-aspect.html

  * igt@kms_vrr@flip-basic-fastset@pipe-a-edp-1:
    - shard-lnl:          [PASS][43] -> [FAIL][44] ([Intel XE#4227] / [Intel XE#7397]) +1 other test fail
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-lnl-7/igt@kms_vrr@flip-basic-fastset@pipe-a-edp-1.html
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-lnl-1/igt@kms_vrr@flip-basic-fastset@pipe-a-edp-1.html

  * igt@xe_eudebug_online@writes-caching-vram-bb-sram-target-vram:
    - shard-bmg:          NOTRUN -> [SKIP][45] ([Intel XE#4837] / [Intel XE#6665]) +1 other test skip
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@xe_eudebug_online@writes-caching-vram-bb-sram-target-vram.html

  * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-basic-defer-mmap:
    - shard-bmg:          NOTRUN -> [SKIP][46] ([Intel XE#2322] / [Intel XE#7372]) +1 other test skip
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-basic-defer-mmap.html

  * igt@xe_exec_multi_queue@many-execs-preempt-mode-close-fd:
    - shard-bmg:          NOTRUN -> [SKIP][47] ([Intel XE#6874]) +6 other tests skip
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-8/igt@xe_exec_multi_queue@many-execs-preempt-mode-close-fd.html

  * igt@xe_exec_threads@threads-multi-queue-cm-shared-vm-basic:
    - shard-bmg:          NOTRUN -> [SKIP][48] ([Intel XE#7138])
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@xe_exec_threads@threads-multi-queue-cm-shared-vm-basic.html

  * igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
    - shard-bmg:          NOTRUN -> [SKIP][49] ([Intel XE#2229])
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html

  
#### Possible fixes ####

  * igt@kms_atomic_transition@plane-all-transition-fencing:
    - shard-bmg:          [INCOMPLETE][50] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#6819]) -> [PASS][51]
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_atomic_transition@plane-all-transition-fencing.html
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_atomic_transition@plane-all-transition-fencing.html

  * igt@kms_atomic_transition@plane-all-transition-fencing@pipe-b-hdmi-a-3:
    - shard-bmg:          [DMESG-FAIL][52] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#6819]) -> [PASS][53]
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_atomic_transition@plane-all-transition-fencing@pipe-b-hdmi-a-3.html
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_atomic_transition@plane-all-transition-fencing@pipe-b-hdmi-a-3.html

  * igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p:
    - shard-bmg:          [SKIP][54] ([Intel XE#2314] / [Intel XE#2894] / [Intel XE#7373]) -> [PASS][55]
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-atomic:
    - shard-bmg:          [SKIP][56] ([Intel XE#2291]) -> [PASS][57] +5 other tests pass
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions:
    - shard-bmg:          [SKIP][58] ([Intel XE#2291] / [Intel XE#7343]) -> [PASS][59]
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-5/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions.html
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-1/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions.html

  * igt@kms_dither@fb-8bpc-vs-panel-6bpc:
    - shard-bmg:          [SKIP][60] ([Intel XE#1340]) -> [PASS][61]
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html

  * igt@kms_dp_link_training@non-uhbr-sst:
    - shard-bmg:          [SKIP][62] ([Intel XE#4354]) -> [PASS][63]
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_dp_link_training@non-uhbr-sst.html
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_dp_link_training@non-uhbr-sst.html

  * igt@kms_flip@2x-plain-flip-fb-recreate:
    - shard-bmg:          [SKIP][64] ([Intel XE#2316]) -> [PASS][65] +9 other tests pass
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-5/igt@kms_flip@2x-plain-flip-fb-recreate.html
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_flip@2x-plain-flip-fb-recreate.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible:
    - shard-lnl:          [FAIL][66] ([Intel XE#301]) -> [PASS][67] +1 other test pass
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-lnl-2/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
    - shard-bmg:          [FAIL][68] ([Intel XE#3149] / [Intel XE#7545]) -> [PASS][69]
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_flip@flip-vs-expired-vblank-interruptible.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@d-hdmi-a3:
    - shard-bmg:          [FAIL][70] ([Intel XE#3149]) -> [PASS][71]
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_flip@flip-vs-expired-vblank-interruptible@d-hdmi-a3.html
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_flip@flip-vs-expired-vblank-interruptible@d-hdmi-a3.html

  * igt@kms_setmode@clone-exclusive-crtc:
    - shard-bmg:          [SKIP][72] ([Intel XE#1435]) -> [PASS][73]
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_setmode@clone-exclusive-crtc.html
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_setmode@clone-exclusive-crtc.html

  * igt@kms_vrr@negative-basic:
    - shard-bmg:          [SKIP][74] ([Intel XE#1499]) -> [PASS][75]
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-5/igt@kms_vrr@negative-basic.html
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-1/igt@kms_vrr@negative-basic.html

  * igt@xe_evict@evict-beng-mixed-many-threads-small:
    - shard-bmg:          [INCOMPLETE][76] ([Intel XE#6321]) -> [PASS][77]
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@xe_evict@evict-beng-mixed-many-threads-small.html
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@xe_evict@evict-beng-mixed-many-threads-small.html

  
#### Warnings ####

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - shard-bmg:          [SKIP][78] ([Intel XE#2233]) -> [SKIP][79] ([Intel XE#6703])
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_big_fb@x-tiled-64bpp-rotate-270:
    - shard-bmg:          [SKIP][80] ([Intel XE#2327]) -> [SKIP][81] ([Intel XE#6703])
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_big_fb@x-tiled-64bpp-rotate-270.html
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_big_fb@x-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
    - shard-bmg:          [SKIP][82] ([Intel XE#607] / [Intel XE#7361]) -> [SKIP][83] ([Intel XE#6703])
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-bmg:          [SKIP][84] ([Intel XE#1124]) -> [SKIP][85] ([Intel XE#6703]) +1 other test skip
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p:
    - shard-bmg:          [SKIP][86] ([Intel XE#2314] / [Intel XE#2894] / [Intel XE#7373]) -> [SKIP][87] ([Intel XE#7621])
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-5/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-1/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html

  * igt@kms_bw@linear-tiling-4-displays-3840x2160p:
    - shard-bmg:          [SKIP][88] ([Intel XE#367] / [Intel XE#7354]) -> [SKIP][89] ([Intel XE#6703]) +1 other test skip
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_bw@linear-tiling-4-displays-3840x2160p.html
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_bw@linear-tiling-4-displays-3840x2160p.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs:
    - shard-bmg:          [SKIP][90] ([Intel XE#2887]) -> [SKIP][91] ([Intel XE#6703]) +2 other tests skip
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs.html
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs:
    - shard-bmg:          [SKIP][92] ([Intel XE#2652]) -> [SKIP][93] ([Intel XE#6703])
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs.html
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs.html

  * igt@kms_chamelium_color@ctm-max:
    - shard-bmg:          [SKIP][94] ([Intel XE#2325] / [Intel XE#7358]) -> [SKIP][95] ([Intel XE#6703])
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_chamelium_color@ctm-max.html
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_chamelium_color@ctm-max.html

  * igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode:
    - shard-bmg:          [SKIP][96] ([Intel XE#2252]) -> [SKIP][97] ([Intel XE#6703])
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode.html
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode.html

  * igt@kms_content_protection@atomic-dpms-hdcp14:
    - shard-bmg:          [FAIL][98] ([Intel XE#3304] / [Intel XE#7374]) -> [SKIP][99] ([Intel XE#7194])
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-4/igt@kms_content_protection@atomic-dpms-hdcp14.html
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-5/igt@kms_content_protection@atomic-dpms-hdcp14.html

  * igt@kms_content_protection@atomic-hdcp14:
    - shard-bmg:          [FAIL][100] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374]) -> [SKIP][101] ([Intel XE#6703])
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_content_protection@atomic-hdcp14.html
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_content_protection@atomic-hdcp14.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-bmg:          [SKIP][102] ([Intel XE#2390] / [Intel XE#6974]) -> [SKIP][103] ([Intel XE#6703])
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_content_protection@dp-mst-type-0.html
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_content_protection@legacy-hdcp14:
    - shard-bmg:          [FAIL][104] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374]) -> [SKIP][105] ([Intel XE#7194])
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-4/igt@kms_content_protection@legacy-hdcp14.html
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-5/igt@kms_content_protection@legacy-hdcp14.html

  * igt@kms_content_protection@lic-type-0:
    - shard-bmg:          [FAIL][106] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374]) -> [SKIP][107] ([Intel XE#2341])
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-2/igt@kms_content_protection@lic-type-0.html
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_content_protection@lic-type-0.html

  * igt@kms_content_protection@suspend-resume:
    - shard-bmg:          [FAIL][108] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374]) -> [SKIP][109] ([Intel XE#6705])
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-4/igt@kms_content_protection@suspend-resume.html
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-5/igt@kms_content_protection@suspend-resume.html

  * igt@kms_content_protection@uevent-hdcp14:
    - shard-bmg:          [SKIP][110] ([Intel XE#7194]) -> [FAIL][111] ([Intel XE#6707] / [Intel XE#7439])
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_content_protection@uevent-hdcp14.html
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_content_protection@uevent-hdcp14.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-bmg:          [SKIP][112] ([Intel XE#2244]) -> [SKIP][113] ([Intel XE#6703])
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_dsc@dsc-with-bpc.html
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests:
    - shard-bmg:          [SKIP][114] ([Intel XE#4422] / [Intel XE#7442]) -> [SKIP][115] ([Intel XE#6703])
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests.html
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests.html

  * igt@kms_flip_scaled_crc@flip-p016-linear-to-p016-linear-reflect-x:
    - shard-bmg:          [SKIP][116] ([Intel XE#7179]) -> [SKIP][117] ([Intel XE#6703])
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_flip_scaled_crc@flip-p016-linear-to-p016-linear-reflect-x.html
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_flip_scaled_crc@flip-p016-linear-to-p016-linear-reflect-x.html

  * igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-indfb-draw-render:
    - shard-bmg:          [SKIP][118] ([Intel XE#2311]) -> [SKIP][119] ([Intel XE#6703]) +5 other tests skip
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-indfb-draw-render.html
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw:
    - shard-bmg:          [SKIP][120] ([Intel XE#2312]) -> [SKIP][121] ([Intel XE#2311]) +22 other tests skip
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw.html
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-1/igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-move:
    - shard-bmg:          [SKIP][122] ([Intel XE#4141]) -> [SKIP][123] ([Intel XE#6703]) +3 other tests skip
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-move.html
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-move.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render:
    - shard-bmg:          [SKIP][124] ([Intel XE#2312]) -> [SKIP][125] ([Intel XE#4141]) +13 other tests skip
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
    - shard-bmg:          [SKIP][126] ([Intel XE#4141]) -> [SKIP][127] ([Intel XE#2312]) +8 other tests skip
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-msflip-blt:
    - shard-bmg:          [SKIP][128] ([Intel XE#2311]) -> [SKIP][129] ([Intel XE#2312]) +14 other tests skip
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-1/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-msflip-blt.html
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-pgflip-blt:
    - shard-bmg:          [SKIP][130] ([Intel XE#2313]) -> [SKIP][131] ([Intel XE#2312]) +19 other tests skip
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-pgflip-blt.html
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-abgr161616f-draw-render:
    - shard-bmg:          [SKIP][132] ([Intel XE#7061] / [Intel XE#7356]) -> [SKIP][133] ([Intel XE#6703]) +1 other test skip
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_frontbuffer_tracking@fbcpsr-abgr161616f-draw-render.html
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-abgr161616f-draw-render.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen:
    - shard-bmg:          [SKIP][134] ([Intel XE#2312]) -> [SKIP][135] ([Intel XE#2313]) +19 other tests skip
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-9/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-blt:
    - shard-bmg:          [SKIP][136] ([Intel XE#2313]) -> [SKIP][137] ([Intel XE#6703]) +5 other tests skip
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-blt.html
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-blt.html

  * igt@kms_hdr@brightness-with-hdr:
    - shard-bmg:          [SKIP][138] ([Intel XE#3374] / [Intel XE#3544]) -> [SKIP][139] ([Intel XE#3544])
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-1/igt@kms_hdr@brightness-with-hdr.html
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-10/igt@kms_hdr@brightness-with-hdr.html

  * igt@kms_plane@pixel-format-yf-tiled-ccs-modifier:
    - shard-bmg:          [SKIP][140] ([Intel XE#7283]) -> [SKIP][141] ([Intel XE#6703])
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier.html
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier.html

  * igt@kms_plane_multiple@2x-tiling-y:
    - shard-bmg:          [SKIP][142] ([Intel XE#4596]) -> [SKIP][143] ([Intel XE#5021] / [Intel XE#7377])
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-5/igt@kms_plane_multiple@2x-tiling-y.html
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-1/igt@kms_plane_multiple@2x-tiling-y.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75:
    - shard-bmg:          [SKIP][144] ([Intel XE#2763] / [Intel XE#6886]) -> [SKIP][145] ([Intel XE#6703])
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75.html
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75.html

  * igt@kms_pm_backlight@bad-brightness:
    - shard-bmg:          [SKIP][146] ([Intel XE#7376] / [Intel XE#870]) -> [SKIP][147] ([Intel XE#6703])
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_pm_backlight@bad-brightness.html
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_pm_backlight@bad-brightness.html

  * igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area:
    - shard-bmg:          [SKIP][148] ([Intel XE#1489]) -> [SKIP][149] ([Intel XE#6703]) +1 other test skip
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area.html
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area.html

  * igt@kms_psr@pr-sprite-render:
    - shard-bmg:          [SKIP][150] ([Intel XE#2234] / [Intel XE#2850]) -> [SKIP][151] ([Intel XE#6703]) +2 other tests skip
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_psr@pr-sprite-render.html
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_psr@pr-sprite-render.html

  * igt@kms_rotation_crc@primary-rotation-90:
    - shard-bmg:          [SKIP][152] ([Intel XE#3904] / [Intel XE#7342]) -> [SKIP][153] ([Intel XE#6703])
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_rotation_crc@primary-rotation-90.html
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_rotation_crc@primary-rotation-90.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-180:
    - shard-bmg:          [SKIP][154] ([Intel XE#2330] / [Intel XE#5813]) -> [SKIP][155] ([Intel XE#6703])
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
    - shard-bmg:          [SKIP][156] ([Intel XE#3904] / [Intel XE#7342]) -> [SKIP][157] ([Intel XE#3414] / [Intel XE#3904] / [Intel XE#7342])
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-2/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html

  * igt@kms_rotation_crc@sprite-rotation-90-pos-100-0:
    - shard-bmg:          [SKIP][158] ([Intel XE#3414] / [Intel XE#3904] / [Intel XE#7342]) -> [SKIP][159] ([Intel XE#3904] / [Intel XE#7342]) +1 other test skip
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-5/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-1/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-bmg:          [SKIP][160] ([Intel XE#2509] / [Intel XE#7437]) -> [SKIP][161] ([Intel XE#2426] / [Intel XE#5848])
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-2/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-3/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@xe_eudebug@basic-vms:
    - shard-bmg:          [SKIP][162] ([Intel XE#4837]) -> [SKIP][163] ([Intel XE#6703])
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_eudebug@basic-vms.html
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_eudebug@basic-vms.html

  * igt@xe_eudebug_online@interrupt-reconnect:
    - shard-bmg:          [SKIP][164] ([Intel XE#4837] / [Intel XE#6665]) -> [SKIP][165] ([Intel XE#6703])
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_eudebug_online@interrupt-reconnect.html
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_eudebug_online@interrupt-reconnect.html

  * igt@xe_eudebug_sriov@deny-sriov:
    - shard-bmg:          [SKIP][166] ([Intel XE#5793] / [Intel XE#7320] / [Intel XE#7464]) -> [SKIP][167] ([Intel XE#6703])
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_eudebug_sriov@deny-sriov.html
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_eudebug_sriov@deny-sriov.html

  * igt@xe_evict@evict-small-multi-queue-cm:
    - shard-bmg:          [SKIP][168] ([Intel XE#7140]) -> [SKIP][169] ([Intel XE#6703])
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_evict@evict-small-multi-queue-cm.html
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_evict@evict-small-multi-queue-cm.html

  * igt@xe_exec_basic@multigpu-no-exec-null-defer-bind:
    - shard-bmg:          [SKIP][170] ([Intel XE#2322] / [Intel XE#7372]) -> [SKIP][171] ([Intel XE#6703]) +1 other test skip
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_exec_basic@multigpu-no-exec-null-defer-bind.html
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_exec_basic@multigpu-no-exec-null-defer-bind.html

  * igt@xe_exec_fault_mode@many-multi-queue-userptr-invalidate-imm:
    - shard-bmg:          [SKIP][172] ([Intel XE#7136]) -> [SKIP][173] ([Intel XE#6703]) +1 other test skip
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_exec_fault_mode@many-multi-queue-userptr-invalidate-imm.html
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_exec_fault_mode@many-multi-queue-userptr-invalidate-imm.html

  * igt@xe_exec_multi_queue@few-execs-preempt-mode-priority:
    - shard-bmg:          [SKIP][174] ([Intel XE#6874]) -> [SKIP][175] ([Intel XE#6703]) +5 other tests skip
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_exec_multi_queue@few-execs-preempt-mode-priority.html
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_exec_multi_queue@few-execs-preempt-mode-priority.html

  * igt@xe_exec_threads@threads-multi-queue-mixed-userptr-invalidate:
    - shard-bmg:          [SKIP][176] ([Intel XE#7138]) -> [SKIP][177] ([Intel XE#6703]) +2 other tests skip
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_exec_threads@threads-multi-queue-mixed-userptr-invalidate.html
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_exec_threads@threads-multi-queue-mixed-userptr-invalidate.html

  * igt@xe_pxp@pxp-stale-bo-exec-post-suspend:
    - shard-bmg:          [SKIP][178] ([Intel XE#4733] / [Intel XE#7417]) -> [SKIP][179] ([Intel XE#6703])
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8/shard-bmg-10/igt@xe_pxp@pxp-stale-bo-exec-post-suspend.html
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/shard-bmg-2/igt@xe_pxp@pxp-stale-bo-exec-post-suspend.html

  


Build changes
-------------

  * Linux: xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8 -> xe-pw-163588v1

  IGT_8814: 8814
  xe-4752-7535044a2418d22b59be0eb64af0353971f16bd8: 7535044a2418d22b59be0eb64af0353971f16bd8
  xe-pw-163588v1: 163588v1

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-163588v1/index.html


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support
  2026-03-20 12:12 ` [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support Satyanarayana K V P
@ 2026-03-26 19:48   ` Matthew Brost
  2026-03-26 19:57   ` Thomas Hellström
  1 sibling, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2026-03-26 19:48 UTC (permalink / raw)
  To: Satyanarayana K V P
  Cc: intel-xe, Thomas Hellström, Maarten Lankhorst,
	Michal Wajdeczko

On Fri, Mar 20, 2026 at 12:12:29PM +0000, Satyanarayana K V P wrote:
> Add an xe_drm_mm manager to allocate sub-ranges from a BO-backed pool
> using drm_mm.
> 
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/Makefile          |   1 +
>  drivers/gpu/drm/xe/xe_drm_mm.c       | 200 +++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_drm_mm.h       |  55 ++++++++
>  drivers/gpu/drm/xe/xe_drm_mm_types.h |  42 ++++++
>  4 files changed, 298 insertions(+)
>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h
> 
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index dab979287a96..6ab4e2392df1 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -41,6 +41,7 @@ xe-y += xe_bb.o \
>  	xe_device_sysfs.o \
>  	xe_dma_buf.o \
>  	xe_drm_client.o \
> +	xe_drm_mm.o \
>  	xe_drm_ras.o \
>  	xe_eu_stall.o \
>  	xe_exec.o \
> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.c b/drivers/gpu/drm/xe/xe_drm_mm.c
> new file mode 100644
> index 000000000000..c5b1766fa75a
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_drm_mm.c
> @@ -0,0 +1,200 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +
> +#include <drm/drm_managed.h>
> +#include <linux/kernel.h>
> +
> +#include "xe_device_types.h"
> +#include "xe_drm_mm_types.h"
> +#include "xe_drm_mm.h"
> +#include "xe_map.h"
> +
> +static void xe_drm_mm_manager_fini(struct drm_device *drm, void *arg)
> +{
> +	struct xe_drm_mm_manager *drm_mm_manager = arg;
> +	struct xe_bo *bo = drm_mm_manager->bo;
> +
> +	if (!bo) {
> +		drm_err(drm, "no bo for drm mm manager\n");
> +		return;
> +	}
> +
> +	drm_mm_takedown(&drm_mm_manager->base);
> +
> +	if (drm_mm_manager->is_iomem)
> +		kvfree(drm_mm_manager->cpu_addr);
> +
> +	drm_mm_manager->bo = NULL;
> +	drm_mm_manager->shadow = NULL;
> +}
> +
> +/**
> + * xe_drm_mm_manager_init() - Create and initialize the DRM MM manager.
> + * @tile: the &xe_tile on which to allocate.
> + * @size: number of bytes to allocate
> + * @guard: number of bytes to exclude from allocation for guard region
> + * @flags: additional flags for configuring the DRM MM manager.
> + *
> + * Initializes a DRM MM manager for managing memory allocations on a specific
> + * XE tile. The function allocates a buffer object to back the memory region
> + * managed by the DRM MM manager.
> + *
> + * Return: a pointer to the &xe_drm_mm_manager, or an error pointer on failure.
> + */
> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> +						 u32 guard, u32 flags)
> +{
> +	struct xe_device *xe = tile_to_xe(tile);
> +	struct xe_drm_mm_manager *drm_mm_manager;
> +	u64 managed_size;
> +	struct xe_bo *bo;
> +	int ret;
> +
> +	xe_tile_assert(tile, size > guard);
> +	managed_size = size - guard;
> +
> +	drm_mm_manager = drmm_kzalloc(&xe->drm, sizeof(*drm_mm_manager), GFP_KERNEL);
> +	if (!drm_mm_manager)
> +		return ERR_PTR(-ENOMEM);
> +
> +	bo = xe_managed_bo_create_pin_map(xe, tile, size,
> +					  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> +					  XE_BO_FLAG_GGTT |
> +					  XE_BO_FLAG_GGTT_INVALIDATE |
> +					  XE_BO_FLAG_PINNED_NORESTORE);
> +	if (IS_ERR(bo)) {
> +		drm_err(&xe->drm, "Failed to prepare %uKiB BO for DRM MM manager (%pe)\n",
> +			size / SZ_1K, bo);
> +		return ERR_CAST(bo);
> +	}
> +	drm_mm_manager->bo = bo;
> +	drm_mm_manager->is_iomem = bo->vmap.is_iomem;
> +
> +	if (bo->vmap.is_iomem) {
> +		drm_mm_manager->cpu_addr = kvzalloc(managed_size, GFP_KERNEL);
> +		if (!drm_mm_manager->cpu_addr)
> +			return ERR_PTR(-ENOMEM);
> +	} else {
> +		drm_mm_manager->cpu_addr = bo->vmap.vaddr;
> +		memset(drm_mm_manager->cpu_addr, 0, bo->ttm.base.size);

I don't think you need this memset... As in alloc_bb_pool() (patch
#3), the first thing done after calling xe_drm_mm_manager_init() is a
xe_map_memset(..., MI_NOOP, ...) on both the primary BO and shadow.

> +	}
> +
> +	if (flags & XE_DRM_MM_BO_MANAGER_FLAG_SHADOW) {
> +		struct xe_bo *shadow;
> +
> +		ret = drmm_mutex_init(&xe->drm, &drm_mm_manager->swap_guard);
> +		if (ret)
> +			return ERR_PTR(ret);
> +		if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
> +			fs_reclaim_acquire(GFP_KERNEL);
> +			might_lock(&drm_mm_manager->swap_guard);
> +			fs_reclaim_release(GFP_KERNEL);
> +		}
> +
> +		shadow = xe_managed_bo_create_pin_map(xe, tile, size,
> +						      XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> +						      XE_BO_FLAG_GGTT |
> +						      XE_BO_FLAG_GGTT_INVALIDATE |
> +						      XE_BO_FLAG_PINNED_NORESTORE);
> +		if (IS_ERR(shadow)) {
> +			drm_err(&xe->drm,
> +				"Failed to prepare %uKiB shadow BO for DRM MM manager (%pe)\n",
> +				size / SZ_1K, shadow);
> +			return ERR_CAST(shadow);
> +		}
> +		drm_mm_manager->shadow = shadow;
> +	}
> +
> +	drm_mm_init(&drm_mm_manager->base, 0, managed_size);
> +	ret = drmm_add_action_or_reset(&xe->drm, xe_drm_mm_manager_fini, drm_mm_manager);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	return drm_mm_manager;
> +}
> +
> +/**
> + * xe_drm_mm_bo_swap_shadow() - Swap the primary BO with the shadow BO.
> + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> + *
> + * Swaps the primary buffer object with the shadow buffer object in the DRM MM
> + * manager.
> + *
> + * Return: None.
> + */
> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager)
> +{
> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> +
> +	xe_assert(xe, drm_mm_manager->shadow);
> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> +
> +	swap(drm_mm_manager->bo, drm_mm_manager->shadow);
> +	if (!drm_mm_manager->bo->vmap.is_iomem)
> +		drm_mm_manager->cpu_addr = drm_mm_manager->bo->vmap.vaddr;
> +}
> +
> +/**
> + * xe_drm_mm_sync_shadow() - Synchronize the shadow BO with the primary BO.
> + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> + * @node: the DRM MM node representing the region to synchronize.
> + *
> + * Copies the contents of the specified region from the primary buffer object to
> + * the shadow buffer object in the DRM MM manager.
> + *
> + * Return: None.
> + */
> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> +			   struct drm_mm_node *node)
> +{
> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> +
> +	xe_assert(xe, drm_mm_manager->shadow);
> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> +
> +	xe_map_memcpy_to(xe, &drm_mm_manager->shadow->vmap,
> +			 node->start,
> +			 drm_mm_manager->cpu_addr + node->start,
> +			 node->size);
> +}
> +
> +/**
> + * xe_drm_mm_insert_node() - Insert a node into the DRM MM manager.
> + * @drm_mm_manager: the DRM MM manager to insert the node into.
> + * @node: the DRM MM node to insert.
> + * @size: the size of the node to insert.
> + *
> + * Inserts a node into the DRM MM manager and clears the corresponding memory region
> + * in both the primary and shadow buffer objects.
> + *
> + * Return: 0 on success, or a negative error code on failure.
> + */
> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> +			  struct drm_mm_node *node, u32 size)
> +{
> +	struct drm_mm *mm = &drm_mm_manager->base;
> +	int ret;
> +
> +	ret = drm_mm_insert_node(mm, node, size);
> +	if (ret)
> +		return ret;
> +
> +	memset((void *)drm_mm_manager->bo->vmap.vaddr + node->start, 0, node->size);
> +	if (drm_mm_manager->shadow)
> +		memset((void *)drm_mm_manager->shadow->vmap.vaddr + node->start, 0,
> +		       node->size);

Likewise here, I don't think you need these memsets as both the primary
and shadow are initialized with MI_NOOP, then on each SA release set
back to MI_NOOP.

Other than that, patch looks good.

Matt

> +	return 0;
> +}
> +
> +/**
> + * xe_drm_mm_remove_node() - Remove a node from the DRM MM manager.
> + * @node: the DRM MM node to remove.
> + *
> + * Return: None.
> + */
> +void xe_drm_mm_remove_node(struct drm_mm_node *node)
> +{
> +	drm_mm_remove_node(node);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.h b/drivers/gpu/drm/xe/xe_drm_mm.h
> new file mode 100644
> index 000000000000..aeb7cab92d0b
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_drm_mm.h
> @@ -0,0 +1,55 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +#ifndef _XE_DRM_MM_H_
> +#define _XE_DRM_MM_H_
> +
> +#include <linux/sizes.h>
> +#include <linux/types.h>
> +
> +#include "xe_bo.h"
> +#include "xe_drm_mm_types.h"
> +
> +struct dma_fence;
> +struct xe_tile;
> +
> +#define XE_DRM_MM_BO_MANAGER_FLAG_SHADOW    BIT(0)
> +
> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> +						 u32 guard, u32 flags);
> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager);
> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> +			   struct drm_mm_node *node);
> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> +			  struct drm_mm_node *node, u32 size);
> +void xe_drm_mm_remove_node(struct drm_mm_node *node);
> +
> +/**
> + * xe_drm_mm_manager_gpu_addr() - Retrieve the GPU address of a memory
> + * manager's backing storage BO.
> + * @drm_mm_manager: The DRM MM memory manager.
> + *
> + * Returns: GGTT address of the backing storage BO
> + */
> +static inline u64 xe_drm_mm_manager_gpu_addr(struct xe_drm_mm_manager
> +					     *drm_mm_manager)
> +{
> +	return xe_bo_ggtt_addr(drm_mm_manager->bo);
> +}
> +
> +/**
> + * xe_drm_mm_bo_swap_guard() - Retrieve the mutex used to guard swap operations
> + * on a memory manager.
> + * @drm_mm_manager: The DRM MM memory manager.
> + *
> + * Returns: Swap guard mutex.
> + */
> +static inline struct mutex *xe_drm_mm_bo_swap_guard(struct xe_drm_mm_manager
> +						    *drm_mm_manager)
> +{
> +	return &drm_mm_manager->swap_guard;
> +}
> +
> +#endif
> +
> diff --git a/drivers/gpu/drm/xe/xe_drm_mm_types.h b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> new file mode 100644
> index 000000000000..69e0937dd8de
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> @@ -0,0 +1,42 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +
> +#ifndef _XE_DRM_MM_TYPES_H_
> +#define _XE_DRM_MM_TYPES_H_
> +
> +#include <drm/drm_mm.h>
> +
> +struct xe_bo;
> +
> +struct xe_drm_mm_manager {
> +	/** @base: Range allocator over [0, @size) in bytes */
> +	struct drm_mm base;
> +	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
> +	struct xe_bo *bo;
> +	/** @shadow: Shadow BO for atomic command updates. */
> +	struct xe_bo *shadow;
> +	/** @swap_guard: Mutex guarding swaps and updates of @bo and @shadow */
> +	struct mutex swap_guard;
> +	/** @cpu_addr: CPU virtual address of the active BO. */
> +	void *cpu_addr;
> +	/** @size: Total size of the managed address space. */
> +	u64 size;
> +	/** @is_iomem: Whether the managed address space is I/O memory. */
> +	bool is_iomem;
> +};
> +
> +struct xe_drm_mm_bb {
> +	/** @node: Range node for this batch buffer. */
> +	struct drm_mm_node node;
> +	/** @manager: Manager this batch buffer belongs to. */
> +	struct xe_drm_mm_manager *manager;
> +	/** @cs: Command stream for this batch buffer. */
> +	u32 *cs;
> +	/** @len: Length of the CS in dwords. */
> +	u32 len;
> +};
> +
> +#endif
> +
> -- 
> 2.43.0
> 


* Re: [PATCH 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager
  2026-03-20 12:12 ` [PATCH 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager Satyanarayana K V P
@ 2026-03-26 19:50   ` Matthew Brost
  0 siblings, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2026-03-26 19:50 UTC (permalink / raw)
  To: Satyanarayana K V P
  Cc: intel-xe, Thomas Hellström, Maarten Lankhorst,
	Michal Wajdeczko

On Fri, Mar 20, 2026 at 12:12:30PM +0000, Satyanarayana K V P wrote:
> New APIs xe_drm_mm_bb_alloc(), xe_drm_mm_bb_insert() and
> xe_drm_mm_bb_free() are created to manage allocations from the xe_drm_mm
> manager.
> 
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_bb.c | 67 ++++++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_bb.h |  6 ++++
>  2 files changed, 73 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bb.c b/drivers/gpu/drm/xe/xe_bb.c
> index b896b6f6615c..c366ec5b269a 100644
> --- a/drivers/gpu/drm/xe/xe_bb.c
> +++ b/drivers/gpu/drm/xe/xe_bb.c
> @@ -8,6 +8,7 @@
>  #include "instructions/xe_mi_commands.h"
>  #include "xe_assert.h"
>  #include "xe_device_types.h"
> +#include "xe_drm_mm.h"
>  #include "xe_exec_queue_types.h"
>  #include "xe_gt.h"
>  #include "xe_sa.h"
> @@ -172,3 +173,69 @@ void xe_bb_free(struct xe_bb *bb, struct dma_fence *fence)
>  	xe_sa_bo_free(bb->bo, fence);
>  	kfree(bb);
>  }
> +
> +/**
> + * xe_drm_mm_bb_alloc() - Allocate a new batch buffer structure for drm_mm
> + *
> + * Allocates a new xe_drm_mm_bb structure for use with xe_drm_mm memory management.
> + *
> + * Returns: Batch buffer structure or an ERR_PTR(-ENOMEM).
> + */
> +struct xe_drm_mm_bb *xe_drm_mm_bb_alloc(void)
> +{
> +	struct xe_drm_mm_bb *bb = kzalloc(sizeof(*bb), GFP_KERNEL);
> +
> +	if (!bb)
> +		return ERR_PTR(-ENOMEM);
> +
> +	return bb;
> +}
> +
> +/**
> + * xe_drm_mm_bb_insert() - Initialize a batch buffer and allocate its node
> + * @bb: Batch buffer structure to initialize
> + * @bb_pool: drm_mm manager to allocate from
> + * @dwords: Number of dwords to be allocated
> + *
> + * Initializes the batch buffer by allocating memory from the specified
> + * drm_mm manager.
> + *
> + * Return: 0 on success, negative error code on failure.
> + */
> +int xe_drm_mm_bb_insert(struct xe_drm_mm_bb *bb, struct xe_drm_mm_manager *bb_pool, u32 dwords)
> +{
> +	int err;
> +
> +	/*
> +	 * We need to allocate space for the requested number of dwords and
> +	 * one additional MI_BATCH_BUFFER_END dword. Since the whole pool
> +	 * is submitted to HW, we need to make sure that the last instruction
> +	 * is not overwritten when the last chunk of the pool is allocated
> +	 * for a BB. So, this extra DW acts as a guard here.
> +	 */
> +	err = xe_drm_mm_insert_node(bb_pool, &bb->node, 4 * (dwords + 1));
> +	if (err)
> +		return err;
> +
> +	bb->manager = bb_pool;
> +	bb->cs = bb_pool->cpu_addr + bb->node.start;
> +	bb->len = 0;
> +
> +	memset(bb->cs, MI_NOOP, 4 * (dwords + 1));

Again, same as in patch #1: this memset looks unnecessary, as the default
state upon alloc should be MI_NOOP, right? Then immediately in
xe_migrate_ccs_rw_copy, 'bb->cs' is written with instructions.

Other than this, patch LGTM.

Matt

> +
> +	return 0;
> +}
> +
> +/**
> + * xe_drm_mm_bb_free() - Free a batch buffer allocated with drm_mm
> + * @bb: Batch buffer structure to free
> + */
> +void xe_drm_mm_bb_free(struct xe_drm_mm_bb *bb)
> +{
> +	if (!bb)
> +		return;
> +
> +	xe_drm_mm_remove_node(&bb->node);
> +	kfree(bb);
> +}
> +
> diff --git a/drivers/gpu/drm/xe/xe_bb.h b/drivers/gpu/drm/xe/xe_bb.h
> index 231870b24c2f..d5417005d09b 100644
> --- a/drivers/gpu/drm/xe/xe_bb.h
> +++ b/drivers/gpu/drm/xe/xe_bb.h
> @@ -11,6 +11,8 @@
>  struct dma_fence;
>  
>  struct xe_gt;
> +struct xe_drm_mm_bb;
> +struct xe_drm_mm_manager;
>  struct xe_exec_queue;
>  struct xe_sa_manager;
>  struct xe_sched_job;
> @@ -24,5 +26,9 @@ struct xe_sched_job *xe_bb_create_migration_job(struct xe_exec_queue *q,
>  						struct xe_bb *bb, u64 batch_ofs,
>  						u32 second_idx);
>  void xe_bb_free(struct xe_bb *bb, struct dma_fence *fence);
> +struct xe_drm_mm_bb *xe_drm_mm_bb_alloc(void);
> +int xe_drm_mm_bb_insert(struct xe_drm_mm_bb *bb, struct xe_drm_mm_manager
> +		      *bb_pool, u32 dwords);
> +void xe_drm_mm_bb_free(struct xe_drm_mm_bb *bb);
>  
>  #endif
> -- 
> 2.43.0
> 


* Re: [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write
  2026-03-20 12:12 ` [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
@ 2026-03-26 19:52   ` Matthew Brost
  2026-03-27 11:07   ` Michal Wajdeczko
  1 sibling, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2026-03-26 19:52 UTC (permalink / raw)
  To: Satyanarayana K V P
  Cc: intel-xe, Thomas Hellström, Maarten Lankhorst,
	Michal Wajdeczko

On Fri, Mar 20, 2026 at 12:12:31PM +0000, Satyanarayana K V P wrote:
> The suballocator algorithm tracks a hole cursor at the last allocation
> and tries to allocate after it. This is optimized for fence-ordered
> progress, where older allocations are expected to become reusable first.
> 
> In fence-enabled mode, that ordering assumption holds. In fence-disabled
> mode, allocations may be freed in arbitrary order, so limiting allocation
> to the current hole window can miss valid free space and fail allocations
> despite sufficient total space.
> 
> Use DRM memory manager instead of sub-allocator to get rid of this issue
> as CCS read/write operations do not use fences.
> 
> Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> 
> ---
> Used drm mm instead of drm sa based on comments from
> https://lore.kernel.org/all/bbf0d48d-a95a-46e1-ac8f-e8a0daa81365@amd.com/
> ---
>  drivers/gpu/drm/xe/xe_bo_types.h           |  3 +-
>  drivers/gpu/drm/xe/xe_migrate.c            | 56 ++++++++++++----------
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 39 ++++++++-------
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  2 +-
>  4 files changed, 53 insertions(+), 47 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
> index d4fe3c8dca5b..4c4f15c5648e 100644
> --- a/drivers/gpu/drm/xe/xe_bo_types.h
> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
> @@ -18,6 +18,7 @@
>  #include "xe_ggtt_types.h"
>  
>  struct xe_device;
> +struct xe_drm_mm_bb;
>  struct xe_vm;
>  
>  #define XE_BO_MAX_PLACEMENTS	3
> @@ -88,7 +89,7 @@ struct xe_bo {
>  	bool ccs_cleared;
>  
>  	/** @bb_ccs: BB instructions of CCS read/write. Valid only for VF */
> -	struct xe_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
> +	struct xe_drm_mm_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>  
>  	/**
>  	 * @cpu_caching: CPU caching mode. Currently only used for userspace
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index fc918b4fba54..2fefd306cb2e 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -22,6 +22,7 @@
>  #include "xe_assert.h"
>  #include "xe_bb.h"
>  #include "xe_bo.h"
> +#include "xe_drm_mm.h"
>  #include "xe_exec_queue.h"
>  #include "xe_ggtt.h"
>  #include "xe_gt.h"
> @@ -1166,11 +1167,12 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  	u32 batch_size, batch_size_allocated;
>  	struct xe_device *xe = gt_to_xe(gt);
>  	struct xe_res_cursor src_it, ccs_it;
> +	struct xe_drm_mm_manager *bb_pool;
>  	struct xe_sriov_vf_ccs_ctx *ctx;
> -	struct xe_sa_manager *bb_pool;
> +	struct xe_drm_mm_bb *bb = NULL;
>  	u64 size = xe_bo_size(src_bo);
> -	struct xe_bb *bb = NULL;
>  	u64 src_L0, src_L0_ofs;
> +	struct xe_bb xe_bb_tmp;
>  	u32 src_L0_pt;
>  	int err;
>  
> @@ -1208,18 +1210,18 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  		size -= src_L0;
>  	}
>  
> -	bb = xe_bb_alloc(gt);
> +	bb = xe_drm_mm_bb_alloc();
>  	if (IS_ERR(bb))
>  		return PTR_ERR(bb);
>  
>  	bb_pool = ctx->mem.ccs_bb_pool;
> -	scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
> -		xe_sa_bo_swap_shadow(bb_pool);
> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>  
> -		err = xe_bb_init(bb, bb_pool, batch_size);
> +		err = xe_drm_mm_bb_insert(bb, bb_pool, batch_size);
>  		if (err) {
>  			xe_gt_err(gt, "BB allocation failed.\n");
> -			xe_bb_free(bb, NULL);
> +			kfree(bb);
>  			return err;
>  		}
>  
> @@ -1227,6 +1229,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  		size = xe_bo_size(src_bo);
>  		batch_size = 0;
>  
> +		xe_bb_tmp = (struct xe_bb){ .cs = bb->cs, .len = 0 };
>  		/*
>  		 * Emit PTE and copy commands here.
>  		 * The CCS copy command can only support limited size. If the size to be
> @@ -1255,24 +1258,27 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  			xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
>  			batch_size += EMIT_COPY_CCS_DW;
>  
> -			emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
> +			emit_pte(m, &xe_bb_tmp, src_L0_pt, false, true, &src_it, src_L0, src);
>  
> -			emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
> +			emit_pte(m, &xe_bb_tmp, ccs_pt, false, false, &ccs_it, ccs_size, src);
>  
> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> -			flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
> +							      flush_flags);
> +			flush_flags = xe_migrate_ccs_copy(m, &xe_bb_tmp, src_L0_ofs, src_is_pltt,
>  							  src_L0_ofs, dst_is_pltt,
>  							  src_L0, ccs_ofs, true);
> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
> +							      flush_flags);
>  
>  			size -= src_L0;
>  		}
>  
> -		xe_assert(xe, (batch_size_allocated == bb->len));
> +		xe_assert(xe, (batch_size_allocated == xe_bb_tmp.len));
> +		bb->len = xe_bb_tmp.len;
>  		src_bo->bb_ccs[read_write] = bb;
>  
>  		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> -		xe_sa_bo_sync_shadow(bb->bo);
> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
>  	}
>  
>  	return 0;
> @@ -1297,10 +1303,10 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>  				  enum xe_sriov_vf_ccs_rw_ctxs read_write)
>  {
> -	struct xe_bb *bb = src_bo->bb_ccs[read_write];
> +	struct xe_drm_mm_bb *bb = src_bo->bb_ccs[read_write];
>  	struct xe_device *xe = xe_bo_device(src_bo);
> +	struct xe_drm_mm_manager *bb_pool;
>  	struct xe_sriov_vf_ccs_ctx *ctx;
> -	struct xe_sa_manager *bb_pool;
>  	u32 *cs;
>  
>  	xe_assert(xe, IS_SRIOV_VF(xe));
> @@ -1308,17 +1314,17 @@ void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>  	ctx = &xe->sriov.vf.ccs.contexts[read_write];
>  	bb_pool = ctx->mem.ccs_bb_pool;
>  
> -	guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
> -	xe_sa_bo_swap_shadow(bb_pool);
> -
> -	cs = xe_sa_bo_cpu_addr(bb->bo);
> -	memset(cs, MI_NOOP, bb->len * sizeof(u32));
> -	xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>  
> -	xe_sa_bo_sync_shadow(bb->bo);
> +		cs = bb_pool->cpu_addr + bb->node.start;
> +		memset(cs, MI_NOOP, bb->len * sizeof(u32));
> +		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>  
> -	xe_bb_free(bb, NULL);
> -	src_bo->bb_ccs[read_write] = NULL;
> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
> +		xe_drm_mm_bb_free(bb);
> +		src_bo->bb_ccs[read_write] = NULL;
> +	}
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> index db023fb66a27..6fb4641c6f0f 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> @@ -8,6 +8,7 @@
>  #include "xe_bb.h"
>  #include "xe_bo.h"
>  #include "xe_device.h"
> +#include "xe_drm_mm.h"
>  #include "xe_exec_queue.h"
>  #include "xe_exec_queue_types.h"
>  #include "xe_gt_sriov_vf.h"
> @@ -16,7 +17,6 @@
>  #include "xe_lrc.h"
>  #include "xe_migrate.h"
>  #include "xe_pm.h"
> -#include "xe_sa.h"
>  #include "xe_sriov_printk.h"
>  #include "xe_sriov_vf.h"
>  #include "xe_sriov_vf_ccs.h"
> @@ -141,8 +141,8 @@ static u64 get_ccs_bb_pool_size(struct xe_device *xe)
>  
>  static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>  {
> +	struct xe_drm_mm_manager *drm_mm_manager;
>  	struct xe_device *xe = tile_to_xe(tile);
> -	struct xe_sa_manager *sa_manager;
>  	u64 bb_pool_size;
>  	int offset, err;
>  
> @@ -150,34 +150,33 @@ static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>  	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
>  		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
>  
> -	sa_manager = __xe_sa_bo_manager_init(tile, bb_pool_size, SZ_4K, SZ_16,
> -					     XE_SA_BO_MANAGER_FLAG_SHADOW);
> -
> -	if (IS_ERR(sa_manager)) {
> -		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
> -			     sa_manager);
> -		err = PTR_ERR(sa_manager);
> +	drm_mm_manager = xe_drm_mm_manager_init(tile, bb_pool_size, SZ_4K,
> +						XE_DRM_MM_BO_MANAGER_FLAG_SHADOW);
> +	if (IS_ERR(drm_mm_manager)) {
> +		xe_sriov_err(xe, "XE_DRM_MM init failed with error: %pe\n",
> +			     drm_mm_manager);
> +		err = PTR_ERR(drm_mm_manager);
>  		return err;
>  	}
>  
>  	offset = 0;
> -	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
> +	xe_map_memset(xe, &drm_mm_manager->bo->vmap, offset, MI_NOOP,
>  		      bb_pool_size);
> -	xe_map_memset(xe, &sa_manager->shadow->vmap, offset, MI_NOOP,
> +	xe_map_memset(xe, &drm_mm_manager->shadow->vmap, offset, MI_NOOP,
>  		      bb_pool_size);
>  
>  	offset = bb_pool_size - sizeof(u32);
> -	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> -	xe_map_wr(xe, &sa_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
> +	xe_map_wr(xe, &drm_mm_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> +	xe_map_wr(xe, &drm_mm_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
>  
> -	ctx->mem.ccs_bb_pool = sa_manager;
> +	ctx->mem.ccs_bb_pool = drm_mm_manager;
>  
>  	return 0;
>  }
>  
>  static void ccs_rw_update_ring(struct xe_sriov_vf_ccs_ctx *ctx)
>  {
> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>  	u32 dw[10], i = 0;
>  
> @@ -388,7 +387,7 @@ int xe_sriov_vf_ccs_init(struct xe_device *xe)
>  #define XE_SRIOV_VF_CCS_RW_BB_ADDR_OFFSET	(2 * sizeof(u32))
>  void xe_sriov_vf_ccs_rw_update_bb_addr(struct xe_sriov_vf_ccs_ctx *ctx)
>  {
> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>  	struct xe_device *xe = gt_to_xe(ctx->mig_q->gt);
>  
> @@ -412,8 +411,8 @@ int xe_sriov_vf_ccs_attach_bo(struct xe_bo *bo)
>  	struct xe_device *xe = xe_bo_device(bo);
>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>  	struct xe_sriov_vf_ccs_ctx *ctx;
> +	struct xe_drm_mm_bb *bb;
>  	struct xe_tile *tile;
> -	struct xe_bb *bb;
>  	int err = 0;
>  
>  	xe_assert(xe, IS_VF_CCS_READY(xe));
> @@ -445,7 +444,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>  {
>  	struct xe_device *xe = xe_bo_device(bo);
>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> -	struct xe_bb *bb;
> +	struct xe_drm_mm_bb *bb;
>  
>  	xe_assert(xe, IS_VF_CCS_READY(xe));
>  
> @@ -471,8 +470,8 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>   */
>  void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>  {
> -	struct xe_sa_manager *bb_pool;
>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +	struct xe_drm_mm_manager *bb_pool;
>  
>  	if (!IS_VF_CCS_READY(xe))
>  		return;
> @@ -485,7 +484,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>  
>  		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
>  		drm_printf(p, "-------------------------\n");
> -		drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
> +		drm_mm_print(&bb_pool->base, p);
>  		drm_puts(p, "\n");
>  	}
>  }
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> index 22c499943d2a..f2af074578c9 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> @@ -33,7 +33,7 @@ struct xe_sriov_vf_ccs_ctx {
>  	/** @mem: memory data */
>  	struct {
>  		/** @mem.ccs_bb_pool: Pool from which batch buffers are allocated. */
> -		struct xe_sa_manager *ccs_bb_pool;
> +		struct xe_drm_mm_manager *ccs_bb_pool;
>  	} mem;
>  };
>  
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support
  2026-03-20 12:12 ` [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support Satyanarayana K V P
  2026-03-26 19:48   ` Matthew Brost
@ 2026-03-26 19:57   ` Thomas Hellström
  2026-03-27 10:54     ` Michal Wajdeczko
  1 sibling, 1 reply; 20+ messages in thread
From: Thomas Hellström @ 2026-03-26 19:57 UTC (permalink / raw)
  To: Satyanarayana K V P, intel-xe
  Cc: Matthew Brost, Maarten Lankhorst, Michal Wajdeczko

On Fri, 2026-03-20 at 12:12 +0000, Satyanarayana K V P wrote:
> Add a xe_drm_mm manager to allocate sub-ranges from a BO-backed pool
> using drm_mm.

Just a comment on the naming. xe_drm_mm sounds like this is yet another
specialized range manager implementation.

Could we invent a better name? 

xe_mm_suballoc? Something even better?


Thanks,
Thomas



> 
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/Makefile          |   1 +
>  drivers/gpu/drm/xe/xe_drm_mm.c       | 200 +++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_drm_mm.h       |  55 ++++++++
>  drivers/gpu/drm/xe/xe_drm_mm_types.h |  42 ++++++
>  4 files changed, 298 insertions(+)
>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h
> 
> diff --git a/drivers/gpu/drm/xe/Makefile
> b/drivers/gpu/drm/xe/Makefile
> index dab979287a96..6ab4e2392df1 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -41,6 +41,7 @@ xe-y += xe_bb.o \
>  	xe_device_sysfs.o \
>  	xe_dma_buf.o \
>  	xe_drm_client.o \
> +	xe_drm_mm.o \
>  	xe_drm_ras.o \
>  	xe_eu_stall.o \
>  	xe_exec.o \
> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.c
> b/drivers/gpu/drm/xe/xe_drm_mm.c
> new file mode 100644
> index 000000000000..c5b1766fa75a
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_drm_mm.c
> @@ -0,0 +1,200 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +
> +#include <drm/drm_managed.h>
> +#include <linux/kernel.h>
> +
> +#include "xe_device_types.h"
> +#include "xe_drm_mm_types.h"
> +#include "xe_drm_mm.h"
> +#include "xe_map.h"
> +
> +static void xe_drm_mm_manager_fini(struct drm_device *drm, void *arg)
> +{
> +	struct xe_drm_mm_manager *drm_mm_manager = arg;
> +	struct xe_bo *bo = drm_mm_manager->bo;
> +
> +	if (!bo) {
> +		drm_err(drm, "no bo for drm mm manager\n");
> +		return;
> +	}
> +
> +	drm_mm_takedown(&drm_mm_manager->base);
> +
> +	if (drm_mm_manager->is_iomem)
> +		kvfree(drm_mm_manager->cpu_addr);
> +
> +	drm_mm_manager->bo = NULL;
> +	drm_mm_manager->shadow = NULL;
> +}
> +
> +/**
> + * xe_drm_mm_manager_init() - Create and initialize the DRM MM manager.
> + * @tile: the &xe_tile where to allocate.
> + * @size: number of bytes to allocate
> + * @guard: number of bytes to exclude from allocation for guard region
> + * @flags: additional flags for configuring the DRM MM manager.
> + *
> + * Initializes a DRM MM manager for managing memory allocations on a specific
> + * XE tile. The function allocates a buffer object to back the memory region
> + * managed by the DRM MM manager.
> + *
> + * Return: a pointer to the &xe_drm_mm_manager, or an error pointer on failure.
> + */
> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> +						 u32 guard, u32 flags)
> +{
> +	struct xe_device *xe = tile_to_xe(tile);
> +	struct xe_drm_mm_manager *drm_mm_manager;
> +	u64 managed_size;
> +	struct xe_bo *bo;
> +	int ret;
> +
> +	xe_tile_assert(tile, size > guard);
> +	managed_size = size - guard;
> +
> +	drm_mm_manager = drmm_kzalloc(&xe->drm, sizeof(*drm_mm_manager), GFP_KERNEL);
> +	if (!drm_mm_manager)
> +		return ERR_PTR(-ENOMEM);
> +
> +	bo = xe_managed_bo_create_pin_map(xe, tile, size,
> +					  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> +					  XE_BO_FLAG_GGTT |
> +					  XE_BO_FLAG_GGTT_INVALIDATE |
> +					  XE_BO_FLAG_PINNED_NORESTORE);
> +	if (IS_ERR(bo)) {
> +		drm_err(&xe->drm, "Failed to prepare %uKiB BO for DRM MM manager (%pe)\n",
> +			size / SZ_1K, bo);
> +		return ERR_CAST(bo);
> +	}
> +	drm_mm_manager->bo = bo;
> +	drm_mm_manager->is_iomem = bo->vmap.is_iomem;
> +
> +	if (bo->vmap.is_iomem) {
> +		drm_mm_manager->cpu_addr = kvzalloc(managed_size, GFP_KERNEL);
> +		if (!drm_mm_manager->cpu_addr)
> +			return ERR_PTR(-ENOMEM);
> +	} else {
> +		drm_mm_manager->cpu_addr = bo->vmap.vaddr;
> +		memset(drm_mm_manager->cpu_addr, 0, bo->ttm.base.size);
> +	}
> +
> +	if (flags & XE_DRM_MM_BO_MANAGER_FLAG_SHADOW) {
> +		struct xe_bo *shadow;
> +
> +		ret = drmm_mutex_init(&xe->drm, &drm_mm_manager->swap_guard);
> +		if (ret)
> +			return ERR_PTR(ret);
> +		if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
> +			fs_reclaim_acquire(GFP_KERNEL);
> +			might_lock(&drm_mm_manager->swap_guard);
> +			fs_reclaim_release(GFP_KERNEL);
> +		}
> +
> +		shadow = xe_managed_bo_create_pin_map(xe, tile, size,
> +						      XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> +						      XE_BO_FLAG_GGTT |
> +						      XE_BO_FLAG_GGTT_INVALIDATE |
> +						      XE_BO_FLAG_PINNED_NORESTORE);
> +		if (IS_ERR(shadow)) {
> +			drm_err(&xe->drm,
> +				"Failed to prepare %uKiB shadow BO for DRM MM manager (%pe)\n",
> +				size / SZ_1K, shadow);
> +			return ERR_CAST(shadow);
> +		}
> +		drm_mm_manager->shadow = shadow;
> +	}
> +
> +	drm_mm_init(&drm_mm_manager->base, 0, managed_size);
> +	ret = drmm_add_action_or_reset(&xe->drm, xe_drm_mm_manager_fini, drm_mm_manager);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	return drm_mm_manager;
> +}
> +
> +/**
> + * xe_drm_mm_bo_swap_shadow() - Swap the primary BO with the shadow BO.
> + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> + *
> + * Swaps the primary buffer object with the shadow buffer object in the DRM MM
> + * manager.
> + *
> + * Return: None.
> + */
> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager)
> +{
> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> +
> +	xe_assert(xe, drm_mm_manager->shadow);
> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> +
> +	swap(drm_mm_manager->bo, drm_mm_manager->shadow);
> +	if (!drm_mm_manager->bo->vmap.is_iomem)
> +		drm_mm_manager->cpu_addr = drm_mm_manager->bo->vmap.vaddr;
> +}
> +
> +/**
> + * xe_drm_mm_sync_shadow() - Synchronize the shadow BO with the primary BO.
> + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> + * @node: the DRM MM node representing the region to synchronize.
> + *
> + * Copies the contents of the specified region from the primary buffer object to
> + * the shadow buffer object in the DRM MM manager.
> + *
> + * Return: None.
> + */
> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> +			   struct drm_mm_node *node)
> +{
> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> +
> +	xe_assert(xe, drm_mm_manager->shadow);
> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> +
> +	xe_map_memcpy_to(xe, &drm_mm_manager->shadow->vmap,
> +			 node->start,
> +			 drm_mm_manager->cpu_addr + node->start,
> +			 node->size);
> +}
> +
> +/**
> + * xe_drm_mm_insert_node() - Insert a node into the DRM MM manager.
> + * @drm_mm_manager: the DRM MM manager to insert the node into.
> + * @node: the DRM MM node to insert.
> + * @size: the size of the node to insert.
> + *
> + * Inserts a node into the DRM MM manager and clears the corresponding memory region
> + * in both the primary and shadow buffer objects.
> + *
> + * Return: 0 on success, or a negative error code on failure.
> + */
> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> +			  struct drm_mm_node *node, u32 size)
> +{
> +	struct drm_mm *mm = &drm_mm_manager->base;
> +	int ret;
> +
> +	ret = drm_mm_insert_node(mm, node, size);
> +	if (ret)
> +		return ret;
> +
> +	memset((void *)drm_mm_manager->bo->vmap.vaddr + node->start, 0, node->size);
> +	if (drm_mm_manager->shadow)
> +		memset((void *)drm_mm_manager->shadow->vmap.vaddr + node->start, 0,
> +		       node->size);
> +	return 0;
> +}
> +
> +/**
> + * xe_drm_mm_remove_node() - Remove a node from the DRM MM manager.
> + * @node: the DRM MM node to remove.
> + *
> + * Return: None.
> + */
> +void xe_drm_mm_remove_node(struct drm_mm_node *node)
> +{
> +	return drm_mm_remove_node(node);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.h
> b/drivers/gpu/drm/xe/xe_drm_mm.h
> new file mode 100644
> index 000000000000..aeb7cab92d0b
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_drm_mm.h
> @@ -0,0 +1,55 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +#ifndef _XE_DRM_MM_H_
> +#define _XE_DRM_MM_H_
> +
> +#include <linux/sizes.h>
> +#include <linux/types.h>
> +
> +#include "xe_bo.h"
> +#include "xe_drm_mm_types.h"
> +
> +struct dma_fence;
> +struct xe_tile;
> +
> +#define XE_DRM_MM_BO_MANAGER_FLAG_SHADOW    BIT(0)
> +
> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> +						 u32 guard, u32 flags);
> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager);
> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> +			   struct drm_mm_node *node);
> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> +			  struct drm_mm_node *node, u32 size);
> +void xe_drm_mm_remove_node(struct drm_mm_node *node);
> +
> +/**
> + * xe_drm_mm_manager_gpu_addr() - Retrieve GPU address of a back storage BO
> + * within a memory manager.
> + * @drm_mm_manager: The DRM MM memory manager.
> + *
> + * Returns: GGTT address of the back storage BO
> + */
> +static inline u64 xe_drm_mm_manager_gpu_addr(struct xe_drm_mm_manager
> +					     *drm_mm_manager)
> +{
> +	return xe_bo_ggtt_addr(drm_mm_manager->bo);
> +}
> +
> +/**
> + * xe_drm_mm_bo_swap_guard() - Retrieve the mutex used to guard swap operations
> + * on a memory manager.
> + * @drm_mm_manager: The DRM MM memory manager.
> + *
> + * Returns: Swap guard mutex.
> + */
> +static inline struct mutex *xe_drm_mm_bo_swap_guard(struct xe_drm_mm_manager
> +						    *drm_mm_manager)
> +{
> +	return &drm_mm_manager->swap_guard;
> +}
> +
> +#endif
> +
> diff --git a/drivers/gpu/drm/xe/xe_drm_mm_types.h
> b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> new file mode 100644
> index 000000000000..69e0937dd8de
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> @@ -0,0 +1,42 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +
> +#ifndef _XE_DRM_MM_TYPES_H_
> +#define _XE_DRM_MM_TYPES_H_
> +
> +#include <drm/drm_mm.h>
> +
> +struct xe_bo;
> +
> +struct xe_drm_mm_manager {
> +	/** @base: Range allocator over [0, @size) in bytes */
> +	struct drm_mm base;
> +	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
> +	struct xe_bo *bo;
> +	/** @shadow: Shadow BO for atomic command updates. */
> +	struct xe_bo *shadow;
> +	/** @swap_guard: Timeline guard updating @bo and @shadow */
> +	struct mutex swap_guard;
> +	/** @cpu_addr: CPU virtual address of the active BO. */
> +	void *cpu_addr;
> +	/** @size: Total size of the managed address space. */
> +	u64 size;
> +	/** @is_iomem: Whether the managed address space is I/O memory. */
> +	bool is_iomem;
> +};
> +
> +struct xe_drm_mm_bb {
> +	/** @node: Range node for this batch buffer. */
> +	struct drm_mm_node node;
> +	/** @manager: Manager this batch buffer belongs to. */
> +	struct xe_drm_mm_manager *manager;
> +	/** @cs: Command stream for this batch buffer. */
> +	u32 *cs;
> +	/** @len: Length of the CS in dwords. */
> +	u32 len;
> +};
> +
> +#endif
> +

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support
  2026-03-26 19:57   ` Thomas Hellström
@ 2026-03-27 10:54     ` Michal Wajdeczko
  2026-03-27 11:06       ` Thomas Hellström
  2026-03-27 21:26       ` Matthew Brost
  0 siblings, 2 replies; 20+ messages in thread
From: Michal Wajdeczko @ 2026-03-27 10:54 UTC (permalink / raw)
  To: Thomas Hellström, Satyanarayana K V P, intel-xe
  Cc: Matthew Brost, Maarten Lankhorst



On 3/26/2026 8:57 PM, Thomas Hellström wrote:
> On Fri, 2026-03-20 at 12:12 +0000, Satyanarayana K V P wrote:
>> Add a xe_drm_mm manager to allocate sub-ranges from a BO-backed pool
>> using drm_mm.
> 
> Just a comment on the naming. xe_drm_mm sounds like this is yet another
> specialized range manager implementation.

well, in fact it looks like a *very* specialized MM, not much reusable elsewhere

> 
> Could we invent a better name? 
> 
> xe_mm_suballoc? Something even better?

xe_shadow_pool ?

or if we split this MM into "plain pool" and "shadow pool":

xe_pool		--> like xe_sa but works without fences (can be reused in xe_guc_buf)
xe_shadow_pool	--> built on top of plain, with shadow logic


more comments below

> 
> 
> Thanks,
> Thomas
> 
> 
> 
>>
>> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Maarten Lankhorst <dev@lankhorst.se>
>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> ---
>>  drivers/gpu/drm/xe/Makefile          |   1 +
>>  drivers/gpu/drm/xe/xe_drm_mm.c       | 200 +++++++++++++++++++++++++++
>>  drivers/gpu/drm/xe/xe_drm_mm.h       |  55 ++++++++
>>  drivers/gpu/drm/xe/xe_drm_mm_types.h |  42 ++++++
>>  4 files changed, 298 insertions(+)
>>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
>>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
>>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h
>>
>> diff --git a/drivers/gpu/drm/xe/Makefile
>> b/drivers/gpu/drm/xe/Makefile
>> index dab979287a96..6ab4e2392df1 100644
>> --- a/drivers/gpu/drm/xe/Makefile
>> +++ b/drivers/gpu/drm/xe/Makefile
>> @@ -41,6 +41,7 @@ xe-y += xe_bb.o \
>>  	xe_device_sysfs.o \
>>  	xe_dma_buf.o \
>>  	xe_drm_client.o \
>> +	xe_drm_mm.o \
>>  	xe_drm_ras.o \
>>  	xe_eu_stall.o \
>>  	xe_exec.o \
>> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.c
>> b/drivers/gpu/drm/xe/xe_drm_mm.c
>> new file mode 100644
>> index 000000000000..c5b1766fa75a
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xe/xe_drm_mm.c
>> @@ -0,0 +1,200 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2026 Intel Corporation
>> + */
>> +
>> +#include <drm/drm_managed.h>

nit: <linux> headers go first

>> +#include <linux/kernel.h>
>> +
>> +#include "xe_device_types.h"
>> +#include "xe_drm_mm_types.h"
>> +#include "xe_drm_mm.h"
>> +#include "xe_map.h"
>> +
>> +static void xe_drm_mm_manager_fini(struct drm_device *drm, void *arg)
>> +{
>> +	struct xe_drm_mm_manager *drm_mm_manager = arg;
>> +	struct xe_bo *bo = drm_mm_manager->bo;
>> +
>> +	if (!bo) {

not needed, we shouldn't be here if we failed to allocate a bo

>> +		drm_err(drm, "no bo for drm mm manager\n");

btw, our MM seems to be 'tile' oriented, so we should use xe_tile_err() 

>> +		return;
>> +	}
>> +
>> +	drm_mm_takedown(&drm_mm_manager->base);
>> +
>> +	if (drm_mm_manager->is_iomem)
>> +		kvfree(drm_mm_manager->cpu_addr);
>> +
>> +	drm_mm_manager->bo = NULL;
>> +	drm_mm_manager->shadow = NULL;
>> +}
>> +
>> +/**
>> + * xe_drm_mm_manager_init() - Create and initialize the DRM MM manager.
>> + * @tile: the &xe_tile where to allocate.
>> + * @size: number of bytes to allocate
>> + * @guard: number of bytes to exclude from allocation for guard region

do we really need this guard? it was already questionable on the xe_sa

>> + * @flags: additional flags for configuring the DRM MM manager.
>> + *
>> + * Initializes a DRM MM manager for managing memory allocations on a specific
>> + * XE tile. The function allocates a buffer object to back the memory region
>> + * managed by the DRM MM manager.
>> + *
>> + * Return: a pointer to the &xe_drm_mm_manager, or an error pointer on failure.
>> + */
>> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
>> +						 u32 guard, u32 flags)
>> +{
>> +	struct xe_device *xe = tile_to_xe(tile);
>> +	struct xe_drm_mm_manager *drm_mm_manager;
>> +	u64 managed_size;
>> +	struct xe_bo *bo;
>> +	int ret;
>> +
>> +	xe_tile_assert(tile, size > guard);
>> +	managed_size = size - guard;
>> +
>> +	drm_mm_manager = drmm_kzalloc(&xe->drm, sizeof(*drm_mm_manager), GFP_KERNEL);
>> +	if (!drm_mm_manager)
>> +		return ERR_PTR(-ENOMEM);

can't we make this manager a member of the tile and then use
container_of to get the parent tile pointer?

I guess we will have exactly one such MM per tile, no?

>> +
>> +	bo = xe_managed_bo_create_pin_map(xe, tile, size,
>> +					  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
>> +					  XE_BO_FLAG_GGTT |
>> +					  XE_BO_FLAG_GGTT_INVALIDATE |
>> +					  XE_BO_FLAG_PINNED_NORESTORE);
>> +	if (IS_ERR(bo)) {
>> +		drm_err(&xe->drm, "Failed to prepare %uKiB BO for

xe_tile_err(tile, ...

but maybe a nicer solution would be to add such an error message to
xe_managed_bo_create_pin_map() itself, to avoid duplicating these diag
messages in all callers

>> DRM MM manager (%pe)\n",
>> +			size / SZ_1K, bo);
>> +		return ERR_CAST(bo);
>> +	}
>> +	drm_mm_manager->bo = bo;
>> +	drm_mm_manager->is_iomem = bo->vmap.is_iomem;

do we need to cache this?

>> +
>> +	if (bo->vmap.is_iomem) {
>> +		drm_mm_manager->cpu_addr = kvzalloc(managed_size, GFP_KERNEL);
>> +		if (!drm_mm_manager->cpu_addr)
>> +			return ERR_PTR(-ENOMEM);
>> +	} else {
>> +		drm_mm_manager->cpu_addr = bo->vmap.vaddr;
>> +		memset(drm_mm_manager->cpu_addr, 0, bo->ttm.base.size);

btw, maybe we should consider adding XE_BO_FLAG_ZERO and let
the xe_create_bo do initial clear for us?

@Matt @Thomas ?

>> +	}
>> +
>> +	if (flags & XE_DRM_MM_BO_MANAGER_FLAG_SHADOW) {

hmm, so this is not a main feature of this MM?
then maybe we should have two components:

	* xe_pool (plain MM, like xe_sa but without fences)
	* xe_shadow (adds shadow BO on top of plain MM)

>> +		struct xe_bo *shadow;
>> +
>> +		ret = drmm_mutex_init(&xe->drm, &drm_mm_manager->swap_guard);
>> +		if (ret)
>> +			return ERR_PTR(ret);
>> +		if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
>> +			fs_reclaim_acquire(GFP_KERNEL);
>> +			might_lock(&drm_mm_manager->swap_guard);
>> +			fs_reclaim_release(GFP_KERNEL);
>> +		}
>> +
>> +		shadow = xe_managed_bo_create_pin_map(xe, tile, size,
>> +						      XE_BO_FLAG_VRAM_IF_DGFX(tile) |
>> +						      XE_BO_FLAG_GGTT |
>> +						      XE_BO_FLAG_GGTT_INVALIDATE |
>> +						      XE_BO_FLAG_PINNED_NORESTORE);
>> +		if (IS_ERR(shadow)) {
>> +			drm_err(&xe->drm,
>> +				"Failed to prepare %uKiB shadow BO for DRM MM manager (%pe)\n",
>> +				size / SZ_1K, shadow);
>> +			return ERR_CAST(shadow);
>> +		}
>> +		drm_mm_manager->shadow = shadow;
>> +	}
>> +
>> +	drm_mm_init(&drm_mm_manager->base, 0, managed_size);
>> +	ret = drmm_add_action_or_reset(&xe->drm, xe_drm_mm_manager_fini, drm_mm_manager);
>> +	if (ret)
>> +		return ERR_PTR(ret);
>> +
>> +	return drm_mm_manager;
>> +}
>> +
>> +/**
>> + * xe_drm_mm_bo_swap_shadow() - Swap the primary BO with the shadow
>> BO.

do we need _bo_ in the function name here?

>> + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
>> + *
>> + * Swaps the primary buffer object with the shadow buffer object in the DRM MM
>> + * manager.

say a word about required swap_guard mutex

and/or add the _locked suffix to the function name

>> + *
>> + * Return: None.
>> + */
>> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager)
>> +{
>> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
>> +
>> +	xe_assert(xe, drm_mm_manager->shadow);

use xe_tile_assert

>> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
>> +
>> +	swap(drm_mm_manager->bo, drm_mm_manager->shadow);
>> +	if (!drm_mm_manager->bo->vmap.is_iomem)
>> +		drm_mm_manager->cpu_addr = drm_mm_manager->bo->vmap.vaddr;
>> +}
>> +
>> +/**
>> + * xe_drm_mm_sync_shadow() - Synchronize the shadow BO with the
>> primary BO.
>> + * @drm_mm_manager: the DRM MM manager containing the primary and
>> shadow BOs.
>> + * @node: the DRM MM node representing the region to synchronize.
>> + *
>> + * Copies the contents of the specified region from the primary
>> buffer object to
>> + * the shadow buffer object in the DRM MM manager.
>> + *
>> + * Return: None.
>> + */
>> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
>> +			   struct drm_mm_node *node)
>> +{
>> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
>> +
>> +	xe_assert(xe, drm_mm_manager->shadow);
>> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
>> +
>> +	xe_map_memcpy_to(xe, &drm_mm_manager->shadow->vmap,
>> +			 node->start,
>> +			 drm_mm_manager->cpu_addr + node->start,
>> +			 node->size);

maybe I'm missing something, but if the primary BO has is_iomem==true,
then who updates the actual primary BO memory, and when? or is it
unused by design and only the shadow has the data ...

maybe some DOC section with theory-of-operation will help here?

>> +}
>> +
>> +/**
>> + * xe_drm_mm_insert_node() - Insert a node into the DRM MM manager.
>> + * @drm_mm_manager: the DRM MM manager to insert the node into.
>> + * @node: the DRM MM node to insert.

in recent changes to xe_ggtt we finally hid the implementation details
of the MM used by the xe_ggtt

why here we start again exposing impl detail as part of the API?
if we can't allocate xe_drm_mm_node here, maybe at least take it
as a parameter and update in place

>> + * @size: the size of the node to insert.
>> + *
>> + * Inserts a node into the DRM MM manager and clears the corresponding memory region
>> + * in both the primary and shadow buffer objects.
>> + *
>> + * Return: 0 on success, or a negative error code on failure.
>> + */
>> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
>> +			  struct drm_mm_node *node, u32 size)
>> +{
>> +	struct drm_mm *mm = &drm_mm_manager->base;
>> +	int ret;
>> +
>> +	ret = drm_mm_insert_node(mm, node, size);
>> +	if (ret)
>> +		return ret;
>> +
>> +	memset((void *)drm_mm_manager->bo->vmap.vaddr + node->start, 0, node->size);

iosys_map_memset(bo->vmap, start, 0, size) ?

>> +	if (drm_mm_manager->shadow)
>> +		memset((void *)drm_mm_manager->shadow->vmap.vaddr + node->start, 0,
>> +		       node->size);

what about clearing the drm_mm_manager->cpu_addr ?

>> +	return 0;
>> +}
>> +
>> +/**
>> + * xe_drm_mm_remove_node() - Remove a node from the DRM MM manager.
>> + * @node: the DRM MM node to remove.
>> + *
>> + * Return: None.
>> + */
>> +void xe_drm_mm_remove_node(struct drm_mm_node *node)
>> +{
>> +	return drm_mm_remove_node(node);
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.h
>> b/drivers/gpu/drm/xe/xe_drm_mm.h
>> new file mode 100644
>> index 000000000000..aeb7cab92d0b
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xe/xe_drm_mm.h
>> @@ -0,0 +1,55 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2026 Intel Corporation
>> + */
>> +#ifndef _XE_DRM_MM_H_
>> +#define _XE_DRM_MM_H_
>> +
>> +#include <linux/sizes.h>
>> +#include <linux/types.h>
>> +
>> +#include "xe_bo.h"
>> +#include "xe_drm_mm_types.h"
>> +
>> +struct dma_fence;

do we need this?

>> +struct xe_tile;
>> +
>> +#define XE_DRM_MM_BO_MANAGER_FLAG_SHADOW    BIT(0)
>> +
>> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
>> +						 u32 guard, u32 flags);
>> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager);
>> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
>> +			   struct drm_mm_node *node);
>> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
>> +			  struct drm_mm_node *node, u32 size);
>> +void xe_drm_mm_remove_node(struct drm_mm_node *node);
>> +
>> +/**
>> + * xe_drm_mm_manager_gpu_addr() - Retrieve GPU address of a back storage BO
>> + * within a memory manager.
>> + * @drm_mm_manager: The DRM MM memory manager.
>> + *
>> + * Returns: GGTT address of the back storage BO
>> + */
>> +static inline u64 xe_drm_mm_manager_gpu_addr(struct xe_drm_mm_manager
>> +					     *drm_mm_manager)
>> +{
>> +	return xe_bo_ggtt_addr(drm_mm_manager->bo);
>> +}
>> +
>> +/**
>> + * xe_drm_mm_bo_swap_guard() - Retrieve the mutex used to guard swap operations

hmm, do we need the _bo_ here?

>> + * on a memory manager.
>> + * @drm_mm_manager: The DRM MM memory manager.
>> + *
>> + * Returns: Swap guard mutex.
>> + */
>> +static inline struct mutex *xe_drm_mm_bo_swap_guard(struct xe_drm_mm_manager
>> +						    *drm_mm_manager)
>> +{
>> +	return &drm_mm_manager->swap_guard;
>> +}
>> +
>> +#endif
>> +
>> diff --git a/drivers/gpu/drm/xe/xe_drm_mm_types.h
>> b/drivers/gpu/drm/xe/xe_drm_mm_types.h
>> new file mode 100644
>> index 000000000000..69e0937dd8de
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xe/xe_drm_mm_types.h
>> @@ -0,0 +1,42 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2026 Intel Corporation
>> + */
>> +
>> +#ifndef _XE_DRM_MM_TYPES_H_
>> +#define _XE_DRM_MM_TYPES_H_
>> +
>> +#include <drm/drm_mm.h>
>> +
>> +struct xe_bo;
>> +

without a kernel-doc comment for the struct itself, the kernel-doc
comments for the members below are currently not recognized by the tool

>> +struct xe_drm_mm_manager {
>> +	/** @base: Range allocator over [0, @size) in bytes */
>> +	struct drm_mm base;
>> +	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
>> +	struct xe_bo *bo;
>> +	/** @shadow: Shadow BO for atomic command updates. */
>> +	struct xe_bo *shadow;
>> +	/** @swap_guard: Timeline guard updating @bo and @shadow */
>> +	struct mutex swap_guard;
>> +	/** @cpu_addr: CPU virtual address of the active BO. */
>> +	void *cpu_addr;
>> +	/** @size: Total size of the managed address space. */
>> +	u64 size;
>> +	/** @is_iomem: Whether the managed address space is I/O memory. */
>> +	bool is_iomem;
>> +};
>> +

ditto

>> +struct xe_drm_mm_bb {
>> +	/** @node: Range node for this batch buffer. */
>> +	struct drm_mm_node node;
>> +	/** @manager: Manager this batch buffer belongs to. */
>> +	struct xe_drm_mm_manager *manager;
>> +	/** @cs: Command stream for this batch buffer. */
>> +	u32 *cs;
>> +	/** @len: Length of the CS in dwords. */
>> +	u32 len;
>> +};

but we are not using this struct yet in this patch, correct?

>> +
>> +#endif
>> +


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support
  2026-03-27 10:54     ` Michal Wajdeczko
@ 2026-03-27 11:06       ` Thomas Hellström
  2026-03-27 19:54         ` Matthew Brost
  2026-03-27 21:26       ` Matthew Brost
  1 sibling, 1 reply; 20+ messages in thread
From: Thomas Hellström @ 2026-03-27 11:06 UTC (permalink / raw)
  To: Michal Wajdeczko, Satyanarayana K V P, intel-xe
  Cc: Matthew Brost, Maarten Lankhorst

On Fri, 2026-03-27 at 11:54 +0100, Michal Wajdeczko wrote:
> 
> 
> On 3/26/2026 8:57 PM, Thomas Hellström wrote:
> > On Fri, 2026-03-20 at 12:12 +0000, Satyanarayana K V P wrote:
> > > Add a xe_drm_mm manager to allocate sub-ranges from a BO-backed
> > > pool
> > > using drm_mm.
> > 
> > Just a comment on the naming. xe_drm_mm sounds like this is yet
> > another
> > specialized range manager implementation.
> 
> well, in fact it looks like *very* specialized MM, not much reusable
> elsewhere

True, what I meant was it's not *implementing* a new range manager,
like drm_mm, but rather using an existing one.

> 
> > 
> > Could we invent a better name? 
> > 
> > xe_mm_suballoc? Something even better?
> 
> xe_shadow_pool ?

I noticed a new patch was sent out renaming this to xe_mm_suballoc, but
xe_shadow_pool sounds much better.

Thanks,
Thomas



> 
> or if we split this MM into "plain pool" and "shadow pool":
> 
> xe_pool		--> like xe_sa but works without fences (can be
> reused in xe_guc_buf)
> xe_shadow_pool	--> built on top of plain, with shadow logic
> 
> 
> more comments below
> 
> > 
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> > > 
> > > Signed-off-by: Satyanarayana K V P
> > > <satyanarayana.k.v.p@intel.com>
> > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > Cc: Maarten Lankhorst <dev@lankhorst.se>
> > > Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/Makefile          |   1 +
> > >  drivers/gpu/drm/xe/xe_drm_mm.c       | 200
> > > +++++++++++++++++++++++++++
> > >  drivers/gpu/drm/xe/xe_drm_mm.h       |  55 ++++++++
> > >  drivers/gpu/drm/xe/xe_drm_mm_types.h |  42 ++++++
> > >  4 files changed, 298 insertions(+)
> > >  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
> > >  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
> > >  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h
> > > 
> > > diff --git a/drivers/gpu/drm/xe/Makefile
> > > b/drivers/gpu/drm/xe/Makefile
> > > index dab979287a96..6ab4e2392df1 100644
> > > --- a/drivers/gpu/drm/xe/Makefile
> > > +++ b/drivers/gpu/drm/xe/Makefile
> > > @@ -41,6 +41,7 @@ xe-y += xe_bb.o \
> > >  	xe_device_sysfs.o \
> > >  	xe_dma_buf.o \
> > >  	xe_drm_client.o \
> > > +	xe_drm_mm.o \
> > >  	xe_drm_ras.o \
> > >  	xe_eu_stall.o \
> > >  	xe_exec.o \
> > > diff --git a/drivers/gpu/drm/xe/xe_drm_mm.c
> > > b/drivers/gpu/drm/xe/xe_drm_mm.c
> > > new file mode 100644
> > > index 000000000000..c5b1766fa75a
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xe/xe_drm_mm.c
> > > @@ -0,0 +1,200 @@
> > > +// SPDX-License-Identifier: MIT
> > > +/*
> > > + * Copyright © 2026 Intel Corporation
> > > + */
> > > +
> > > +#include <drm/drm_managed.h>
> 
> nit: <linux> headers go first
> 
> > > +#include <linux/kernel.h>
> > > +
> > > +#include "xe_device_types.h"
> > > +#include "xe_drm_mm_types.h"
> > > +#include "xe_drm_mm.h"
> > > +#include "xe_map.h"
> > > +
> > > +static void xe_drm_mm_manager_fini(struct drm_device *drm, void *arg)
> > > +{
> > > +	struct xe_drm_mm_manager *drm_mm_manager = arg;
> > > +	struct xe_bo *bo = drm_mm_manager->bo;
> > > +
> > > +	if (!bo) {
> 
> not needed, we shouldn't be here if we failed to allocate a bo
> 
> > > +		drm_err(drm, "no bo for drm mm manager\n");
> 
> btw, our MM seems to be 'tile' oriented, so we should use
> xe_tile_err() 
> 
> > > +		return;
> > > +	}
> > > +
> > > +	drm_mm_takedown(&drm_mm_manager->base);
> > > +
> > > +	if (drm_mm_manager->is_iomem)
> > > +		kvfree(drm_mm_manager->cpu_addr);
> > > +
> > > +	drm_mm_manager->bo = NULL;
> > > +	drm_mm_manager->shadow = NULL;
> > > +}
> > > +
> > > +/**
> > > + * xe_drm_mm_manager_init() - Create and initialize the DRM MM manager.
> > > + * @tile: the &xe_tile where to allocate.
> > > + * @size: number of bytes to allocate
> > > + * @guard: number of bytes to exclude from allocation for the guard region
> 
> do we really need this guard ? it was already questionable on the
> xe_sa
> 
> > > + * @flags: additional flags for configuring the DRM MM manager.
> > > + *
> > > + * Initializes a DRM MM manager for managing memory allocations on a specific
> > > + * XE tile. The function allocates a buffer object to back the memory region
> > > + * managed by the DRM MM manager.
> > > + *
> > > + * Return: a pointer to the &xe_drm_mm_manager, or an error pointer on failure.
> > > + */
> > > +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> > > +						 u32 guard, u32 flags)
> > > +{
> > > +	struct xe_device *xe = tile_to_xe(tile);
> > > +	struct xe_drm_mm_manager *drm_mm_manager;
> > > +	u64 managed_size;
> > > +	struct xe_bo *bo;
> > > +	int ret;
> > > +
> > > +	xe_tile_assert(tile, size > guard);
> > > +	managed_size = size - guard;
> > > +
> > > +	drm_mm_manager = drmm_kzalloc(&xe->drm, sizeof(*drm_mm_manager), GFP_KERNEL);
> > > +	if (!drm_mm_manager)
> > > +		return ERR_PTR(-ENOMEM);
> 
> can't we make this manager a member of the tile and then use
> container_of to get parent tile pointer?
> 
> I guess we will have exactly one such MM per tile, no?
> 
> > > +
> > > +	bo = xe_managed_bo_create_pin_map(xe, tile, size,
> > > +					  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> > > +					  XE_BO_FLAG_GGTT |
> > > +					  XE_BO_FLAG_GGTT_INVALIDATE |
> > > +					  XE_BO_FLAG_PINNED_NORESTORE);
> > > +	if (IS_ERR(bo)) {
> > > +		drm_err(&xe->drm, "Failed to prepare %uKiB BO for DRM MM manager (%pe)\n",
> 
> xe_tile_err(tile, ...
> 
> but maybe a nicer solution would be to add such an error message to
> xe_managed_bo_create_pin_map() to avoid duplicating this diagnostic
> message in all callers
> 
> > > +			size / SZ_1K, bo);
> > > +		return ERR_CAST(bo);
> > > +	}
> > > +	drm_mm_manager->bo = bo;
> > > +	drm_mm_manager->is_iomem = bo->vmap.is_iomem;
> 
> do we need to cache this?
> 
> > > +
> > > +	if (bo->vmap.is_iomem) {
> > > +		drm_mm_manager->cpu_addr = kvzalloc(managed_size,
> > > +						    GFP_KERNEL);
> > > +		if (!drm_mm_manager->cpu_addr)
> > > +			return ERR_PTR(-ENOMEM);
> > > +	} else {
> > > +		drm_mm_manager->cpu_addr = bo->vmap.vaddr;
> > > +		memset(drm_mm_manager->cpu_addr, 0, bo->ttm.base.size);
> 
> btw, maybe we should consider adding XE_BO_FLAG_ZERO and let
> the xe_create_bo do initial clear for us?
> 
> @Matt @Thomas ?
> 
> > > +	}
> > > +
> > > +	if (flags & XE_DRM_MM_BO_MANAGER_FLAG_SHADOW) {
> 
> hmm, so this is not a main feature of this MM?
> then maybe we should have two components:
> 
> 	* xe_pool (plain MM, like xe_sa but without fences)
> 	* xe_shadow (adds shadow BO on top of plain MM)
> 
> > > +		struct xe_bo *shadow;
> > > +
> > > +		ret = drmm_mutex_init(&xe->drm, &drm_mm_manager->swap_guard);
> > > +		if (ret)
> > > +			return ERR_PTR(ret);
> > > +		if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
> > > +			fs_reclaim_acquire(GFP_KERNEL);
> > > +			might_lock(&drm_mm_manager->swap_guard);
> > > +			fs_reclaim_release(GFP_KERNEL);
> > > +		}
> > > +
> > > +		shadow = xe_managed_bo_create_pin_map(xe, tile, size,
> > > +						      XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> > > +						      XE_BO_FLAG_GGTT |
> > > +						      XE_BO_FLAG_GGTT_INVALIDATE |
> > > +						      XE_BO_FLAG_PINNED_NORESTORE);
> > > +		if (IS_ERR(shadow)) {
> > > +			drm_err(&xe->drm,
> > > +				"Failed to prepare %uKiB shadow BO for DRM MM manager (%pe)\n",
> > > +				size / SZ_1K, shadow);
> > > +			return ERR_CAST(shadow);
> > > +		}
> > > +		drm_mm_manager->shadow = shadow;
> > > +	}
> > > +
> > > +	drm_mm_init(&drm_mm_manager->base, 0, managed_size);
> > > +	ret = drmm_add_action_or_reset(&xe->drm,
> > > xe_drm_mm_manager_fini, drm_mm_manager);
> > > +	if (ret)
> > > +		return ERR_PTR(ret);
> > > +
> > > +	return drm_mm_manager;
> > > +}
> > > +
> > > +/**
> > > + * xe_drm_mm_bo_swap_shadow() - Swap the primary BO with the shadow BO.
> 
> do we need _bo_ in the function name here?
> 
> > > + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> > > + *
> > > + * Swaps the primary buffer object with the shadow buffer object in the DRM MM
> > > + * manager.
> 
> say a word about required swap_guard mutex
> 
> and/or add the _locked suffix to the function name
> 
> > > + *
> > > + * Return: None.
> > > + */
> > > +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager)
> > > +{
> > > +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> > > +
> > > +	xe_assert(xe, drm_mm_manager->shadow);
> 
> use xe_tile_assert
> 
> > > +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> > > +
> > > +	swap(drm_mm_manager->bo, drm_mm_manager->shadow);
> > > +	if (!drm_mm_manager->bo->vmap.is_iomem)
> > > +		drm_mm_manager->cpu_addr = drm_mm_manager->bo->vmap.vaddr;
> > > +}
> > > +
> > > +/**
> > > + * xe_drm_mm_sync_shadow() - Synchronize the shadow BO with the primary BO.
> > > + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> > > + * @node: the DRM MM node representing the region to synchronize.
> > > + *
> > > + * Copies the contents of the specified region from the primary buffer object to
> > > + * the shadow buffer object in the DRM MM manager.
> > > + *
> > > + * Return: None.
> > > + */
> > > +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> > > +			   struct drm_mm_node *node)
> > > +{
> > > +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> > > +
> > > +	xe_assert(xe, drm_mm_manager->shadow);
> > > +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> > > +
> > > +	xe_map_memcpy_to(xe, &drm_mm_manager->shadow->vmap,
> > > +			 node->start,
> > > +			 drm_mm_manager->cpu_addr + node->start,
> > > +			 node->size);
> 
> maybe I'm missing something, but if primary BO.is_iomem==true then
> who updates the actual primary BO memory, and when? or is it unused by
> design and only shadow has the data ...
> 
> maybe some DOC section with theory-of-operation will help here?
> 
> > > +}
> > > +
> > > +/**
> > > + * xe_drm_mm_insert_node() - Insert a node into the DRM MM manager.
> > > + * @drm_mm_manager: the DRM MM manager to insert the node into.
> > > + * @node: the DRM MM node to insert.
> 
> in recent changes to xe_ggtt we finally hid the implementation details
> of the MM used by the xe_ggtt
> 
> why here we start again exposing impl detail as part of the API?
> if we can't allocate xe_drm_mm_node here, maybe at least take it
> as a parameter and update in place
> 
> > > + * @size: the size of the node to insert.
> > > + *
> > > + * Inserts a node into the DRM MM manager and clears the corresponding
> > > + * memory region in both the primary and shadow buffer objects.
> > > + *
> > > + * Return: 0 on success, or a negative error code on failure.
> > > + */
> > > +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> > > +			  struct drm_mm_node *node, u32 size)
> > > +{
> > > +	struct drm_mm *mm = &drm_mm_manager->base;
> > > +	int ret;
> > > +
> > > +	ret = drm_mm_insert_node(mm, node, size);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	memset((void *)drm_mm_manager->bo->vmap.vaddr + node->start,
> > > +	       0, node->size);
> 
> iosys_map_memset(bo->vmap, start, 0, size) ?
> 
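[The suggested form would look roughly like the snippet below; untested sketch only. iosys_map_memset() is the real helper from <linux/iosys-map.h>, and it handles both iomem and system-memory mappings, which is the point of the suggestion.]

```c
/* sketch: clear the new node's range through the iosys_map helper
 * instead of dereferencing vmap.vaddr directly */
iosys_map_memset(&drm_mm_manager->bo->vmap, node->start, 0, node->size);
if (drm_mm_manager->shadow)
	iosys_map_memset(&drm_mm_manager->shadow->vmap, node->start,
			 0, node->size);
```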
> > > +	if (drm_mm_manager->shadow)
> > > +		memset((void *)drm_mm_manager->shadow->vmap.vaddr + node->start,
> > > +		       0, node->size);
> 
> what about clearing the drm_mm_manager->cpu_addr ?
> 
> > > +	return 0;
> > > +}
> > > +
> > > +/**
> > > + * xe_drm_mm_remove_node() - Remove a node from the DRM MM manager.
> > > + * @node: the DRM MM node to remove.
> > > + *
> > > + * Return: None.
> > > + */
> > > +void xe_drm_mm_remove_node(struct drm_mm_node *node)
> > > +{
> > > +	return drm_mm_remove_node(node);
> > > +}
> > > diff --git a/drivers/gpu/drm/xe/xe_drm_mm.h
> > > b/drivers/gpu/drm/xe/xe_drm_mm.h
> > > new file mode 100644
> > > index 000000000000..aeb7cab92d0b
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xe/xe_drm_mm.h
> > > @@ -0,0 +1,55 @@
> > > +/* SPDX-License-Identifier: MIT */
> > > +/*
> > > + * Copyright © 2026 Intel Corporation
> > > + */
> > > +#ifndef _XE_DRM_MM_H_
> > > +#define _XE_DRM_MM_H_
> > > +
> > > +#include <linux/sizes.h>
> > > +#include <linux/types.h>
> > > +
> > > +#include "xe_bo.h"
> > > +#include "xe_drm_mm_types.h"
> > > +
> > > +struct dma_fence;
> 
> do we need this?
> 
> > > +struct xe_tile;
> > > +
> > > +#define XE_DRM_MM_BO_MANAGER_FLAG_SHADOW    BIT(0)
> > > +
> > > +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> > > +						 u32 guard, u32 flags);
> > > +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager);
> > > +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> > > +			   struct drm_mm_node *node);
> > > +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> > > +			  struct drm_mm_node *node, u32 size);
> > > +void xe_drm_mm_remove_node(struct drm_mm_node *node);
> > > +
> > > +/**
> > > + * xe_drm_mm_manager_gpu_addr() - Retrieve GPU address of a back storage BO
> > > + * within a memory manager.
> > > + * @drm_mm_manager: The DRM MM memory manager.
> > > + *
> > > + * Returns: GGTT address of the back storage BO
> > > + */
> > > +static inline u64 xe_drm_mm_manager_gpu_addr(struct xe_drm_mm_manager
> > > +					     *drm_mm_manager)
> > > +{
> > > +	return xe_bo_ggtt_addr(drm_mm_manager->bo);
> > > +}
> > > +
> > > +/**
> > > + * xe_drm_mm_bo_swap_guard() - Retrieve the mutex used to guard swap operations
> 
> hmm, do we need the _bo_ here?
> 
> > > + * on a memory manager.
> > > + * @drm_mm_manager: The DRM MM memory manager.
> > > + *
> > > + * Returns: Swap guard mutex.
> > > + */
> > > +static inline struct mutex *xe_drm_mm_bo_swap_guard(struct xe_drm_mm_manager
> > > +						    *drm_mm_manager)
> > > +{
> > > +	return &drm_mm_manager->swap_guard;
> > > +}
> > > +
> > > +#endif
> > > +
> > > diff --git a/drivers/gpu/drm/xe/xe_drm_mm_types.h
> > > b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> > > new file mode 100644
> > > index 000000000000..69e0937dd8de
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> > > @@ -0,0 +1,42 @@
> > > +/* SPDX-License-Identifier: MIT */
> > > +/*
> > > + * Copyright © 2026 Intel Corporation
> > > + */
> > > +
> > > +#ifndef _XE_DRM_MM_TYPES_H_
> > > +#define _XE_DRM_MM_TYPES_H_
> > > +
> > > +#include <drm/drm_mm.h>
> > > +
> > > +struct xe_bo;
> > > +
> 
> without kernel-doc for the struct itself, the kernel-docs for the
> members below are currently not recognized by the tool
> 
> > > +struct xe_drm_mm_manager {
> > > +	/** @base: Range allocator over [0, @size) in bytes */
> > > +	struct drm_mm base;
> > > +	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
> > > +	struct xe_bo *bo;
> > > +	/** @shadow: Shadow BO for atomic command updates. */
> > > +	struct xe_bo *shadow;
> > > +	/** @swap_guard: Timeline guard updating @bo and @shadow */
> > > +	struct mutex swap_guard;
> > > +	/** @cpu_addr: CPU virtual address of the active BO. */
> > > +	void *cpu_addr;
> > > +	/** @size: Total size of the managed address space. */
> > > +	u64 size;
> > > +	/** @is_iomem: Whether the managed address space is I/O memory. */
> > > +	bool is_iomem;
> > > +};
> > > +
> 
> ditto
> 
> > > +struct xe_drm_mm_bb {
> > > +	/** @node: Range node for this batch buffer. */
> > > +	struct drm_mm_node node;
> > > +	/** @manager: Manager this batch buffer belongs to. */
> > > +	struct xe_drm_mm_manager *manager;
> > > +	/** @cs: Command stream for this batch buffer. */
> > > +	u32 *cs;
> > > +	/** @len: Length of the CS in dwords. */
> > > +	u32 len;
> > > +};
> 
> but we are not using this struct yet in this patch, correct?
> 
> > > +
> > > +#endif
> > > +

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write
  2026-03-20 12:12 ` [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
  2026-03-26 19:52   ` Matthew Brost
@ 2026-03-27 11:07   ` Michal Wajdeczko
  2026-03-27 11:17     ` K V P, Satyanarayana
  1 sibling, 1 reply; 20+ messages in thread
From: Michal Wajdeczko @ 2026-03-27 11:07 UTC (permalink / raw)
  To: Satyanarayana K V P, intel-xe
  Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst



On 3/20/2026 1:12 PM, Satyanarayana K V P wrote:
> The suballocator algorithm tracks a hole cursor at the last allocation
> and tries to allocate after it. This is optimized for fence-ordered
> progress, where older allocations are expected to become reusable first.
> 
> In fence-enabled mode, that ordering assumption holds. In fence-disabled
> mode, allocations may be freed in arbitrary order, so limiting allocation
> to the current hole window can miss valid free space and fail allocations
> despite sufficient total space.
> 
> Use DRM memory manager instead of sub-allocator to get rid of this issue
> as CCS read/write operations do not use fences.
> 
> Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> 
> ---
> Used drm mm instead of drm sa based on comments from
> https://lore.kernel.org/all/bbf0d48d-a95a-46e1-ac8f-e8a0daa81365@amd.com/
> ---
>  drivers/gpu/drm/xe/xe_bo_types.h           |  3 +-
>  drivers/gpu/drm/xe/xe_migrate.c            | 56 ++++++++++++----------
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 39 ++++++++-------
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  2 +-
>  4 files changed, 53 insertions(+), 47 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
> index d4fe3c8dca5b..4c4f15c5648e 100644
> --- a/drivers/gpu/drm/xe/xe_bo_types.h
> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
> @@ -18,6 +18,7 @@
>  #include "xe_ggtt_types.h"
>  
>  struct xe_device;
> +struct xe_drm_mm_bb;
>  struct xe_vm;
>  
>  #define XE_BO_MAX_PLACEMENTS	3
> @@ -88,7 +89,7 @@ struct xe_bo {
>  	bool ccs_cleared;
>  
>  	/** @bb_ccs: BB instructions of CCS read/write. Valid only for VF */
> -	struct xe_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
> +	struct xe_drm_mm_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>  
>  	/**
>  	 * @cpu_caching: CPU caching mode. Currently only used for userspace
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index fc918b4fba54..2fefd306cb2e 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -22,6 +22,7 @@
>  #include "xe_assert.h"
>  #include "xe_bb.h"
>  #include "xe_bo.h"
> +#include "xe_drm_mm.h"
>  #include "xe_exec_queue.h"
>  #include "xe_ggtt.h"
>  #include "xe_gt.h"
> @@ -1166,11 +1167,12 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  	u32 batch_size, batch_size_allocated;
>  	struct xe_device *xe = gt_to_xe(gt);
>  	struct xe_res_cursor src_it, ccs_it;
> +	struct xe_drm_mm_manager *bb_pool;
>  	struct xe_sriov_vf_ccs_ctx *ctx;
> -	struct xe_sa_manager *bb_pool;
> +	struct xe_drm_mm_bb *bb = NULL;
>  	u64 size = xe_bo_size(src_bo);
> -	struct xe_bb *bb = NULL;
>  	u64 src_L0, src_L0_ofs;
> +	struct xe_bb xe_bb_tmp;
>  	u32 src_L0_pt;
>  	int err;
>  
> @@ -1208,18 +1210,18 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  		size -= src_L0;
>  	}
>  
> -	bb = xe_bb_alloc(gt);
> +	bb = xe_drm_mm_bb_alloc();
>  	if (IS_ERR(bb))
>  		return PTR_ERR(bb);
>  
>  	bb_pool = ctx->mem.ccs_bb_pool;
> -	scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
> -		xe_sa_bo_swap_shadow(bb_pool);
> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>  
> -		err = xe_bb_init(bb, bb_pool, batch_size);
> +		err = xe_drm_mm_bb_insert(bb, bb_pool, batch_size);
>  		if (err) {
>  			xe_gt_err(gt, "BB allocation failed.\n");
> -			xe_bb_free(bb, NULL);
> +			kfree(bb);
>  			return err;
>  		}
>  
> @@ -1227,6 +1229,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  		size = xe_bo_size(src_bo);
>  		batch_size = 0;
>  
> +		xe_bb_tmp = (struct xe_bb){ .cs = bb->cs, .len = 0 };
>  		/*
>  		 * Emit PTE and copy commands here.
>  		 * The CCS copy command can only support limited size. If the size to be
> @@ -1255,24 +1258,27 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  			xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
>  			batch_size += EMIT_COPY_CCS_DW;
>  
> -			emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
> +			emit_pte(m, &xe_bb_tmp, src_L0_pt, false, true, &src_it, src_L0, src);
>  
> -			emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
> +			emit_pte(m, &xe_bb_tmp, ccs_pt, false, false, &ccs_it, ccs_size, src);
>  
> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> -			flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
> +							      flush_flags);
> +			flush_flags = xe_migrate_ccs_copy(m, &xe_bb_tmp, src_L0_ofs, src_is_pltt,
>  							  src_L0_ofs, dst_is_pltt,
>  							  src_L0, ccs_ofs, true);
> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
> +							      flush_flags);
>  
>  			size -= src_L0;
>  		}
>  
> -		xe_assert(xe, (batch_size_allocated == bb->len));
> +		xe_assert(xe, (batch_size_allocated == xe_bb_tmp.len));
> +		bb->len = xe_bb_tmp.len;
>  		src_bo->bb_ccs[read_write] = bb;
>  
>  		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> -		xe_sa_bo_sync_shadow(bb->bo);
> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
>  	}
>  
>  	return 0;
> @@ -1297,10 +1303,10 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>  				  enum xe_sriov_vf_ccs_rw_ctxs read_write)
>  {
> -	struct xe_bb *bb = src_bo->bb_ccs[read_write];
> +	struct xe_drm_mm_bb *bb = src_bo->bb_ccs[read_write];
>  	struct xe_device *xe = xe_bo_device(src_bo);
> +	struct xe_drm_mm_manager *bb_pool;
>  	struct xe_sriov_vf_ccs_ctx *ctx;
> -	struct xe_sa_manager *bb_pool;
>  	u32 *cs;
>  
>  	xe_assert(xe, IS_SRIOV_VF(xe));
> @@ -1308,17 +1314,17 @@ void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>  	ctx = &xe->sriov.vf.ccs.contexts[read_write];
>  	bb_pool = ctx->mem.ccs_bb_pool;
>  
> -	guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
> -	xe_sa_bo_swap_shadow(bb_pool);
> -
> -	cs = xe_sa_bo_cpu_addr(bb->bo);
> -	memset(cs, MI_NOOP, bb->len * sizeof(u32));
> -	xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>  
> -	xe_sa_bo_sync_shadow(bb->bo);
> +		cs = bb_pool->cpu_addr + bb->node.start;
> +		memset(cs, MI_NOOP, bb->len * sizeof(u32));
> +		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>  
> -	xe_bb_free(bb, NULL);
> -	src_bo->bb_ccs[read_write] = NULL;
> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
> +		xe_drm_mm_bb_free(bb);
> +		src_bo->bb_ccs[read_write] = NULL;
> +	}
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> index db023fb66a27..6fb4641c6f0f 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> @@ -8,6 +8,7 @@
>  #include "xe_bb.h"
>  #include "xe_bo.h"
>  #include "xe_device.h"
> +#include "xe_drm_mm.h"
>  #include "xe_exec_queue.h"
>  #include "xe_exec_queue_types.h"
>  #include "xe_gt_sriov_vf.h"
> @@ -16,7 +17,6 @@
>  #include "xe_lrc.h"
>  #include "xe_migrate.h"
>  #include "xe_pm.h"
> -#include "xe_sa.h"
>  #include "xe_sriov_printk.h"
>  #include "xe_sriov_vf.h"
>  #include "xe_sriov_vf_ccs.h"
> @@ -141,8 +141,8 @@ static u64 get_ccs_bb_pool_size(struct xe_device *xe)
>  
>  static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>  {
> +	struct xe_drm_mm_manager *drm_mm_manager;
>  	struct xe_device *xe = tile_to_xe(tile);
> -	struct xe_sa_manager *sa_manager;
>  	u64 bb_pool_size;
>  	int offset, err;
>  
> @@ -150,34 +150,33 @@ static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>  	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
>  		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
>  
> -	sa_manager = __xe_sa_bo_manager_init(tile, bb_pool_size, SZ_4K, SZ_16,
> -					     XE_SA_BO_MANAGER_FLAG_SHADOW);
> -
> -	if (IS_ERR(sa_manager)) {
> -		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
> -			     sa_manager);
> -		err = PTR_ERR(sa_manager);
> +	drm_mm_manager = xe_drm_mm_manager_init(tile, bb_pool_size, SZ_4K,
> +						XE_DRM_MM_BO_MANAGER_FLAG_SHADOW);
> +	if (IS_ERR(drm_mm_manager)) {
> +		xe_sriov_err(xe, "XE_DRM_MM init failed with error: %pe\n",
> +			     drm_mm_manager);
> +		err = PTR_ERR(drm_mm_manager);
>  		return err;
>  	}
>  
>  	offset = 0;
> -	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
> +	xe_map_memset(xe, &drm_mm_manager->bo->vmap, offset, MI_NOOP,
>  		      bb_pool_size);
> -	xe_map_memset(xe, &sa_manager->shadow->vmap, offset, MI_NOOP,
> +	xe_map_memset(xe, &drm_mm_manager->shadow->vmap, offset, MI_NOOP,
>  		      bb_pool_size);
>  
>  	offset = bb_pool_size - sizeof(u32);
> -	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> -	xe_map_wr(xe, &sa_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
> +	xe_map_wr(xe, &drm_mm_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> +	xe_map_wr(xe, &drm_mm_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);

this seems to break the new XE MM component isolation, as you are directly
touching the XE MM internals without XE MM being aware of this...

are we sure that XE MM will not overwrite this last dword from the pool BO?
maybe it should be exposed to the XE MM user as this 'trail guard' location?
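
[One possible shape for that, sketched against the real drm_mm_reserve_node() API. Untested and illustrative only: `guard_node` is a hypothetical new member of the manager, and this assumes the pool is initialized over the full BO size instead of carving the tail off via the `guard` parameter.]

```c
/* Hypothetical sketch: pin the trailing MI_BATCH_BUFFER_END dword as a
 * reserved node so the allocator can never hand that range out. */
static int xe_drm_mm_reserve_trail_guard(struct xe_drm_mm_manager *mgr,
					 u64 pool_size)
{
	/* guard_node would be a new, zero-initialized member */
	mgr->guard_node.start = pool_size - sizeof(u32);
	mgr->guard_node.size = sizeof(u32);

	/* drm_mm_reserve_node() claims an explicit range; it returns
	 * -ENOSPC if that range has already been allocated */
	return drm_mm_reserve_node(&mgr->base, &mgr->guard_node);
}
```

alloc_bb_pool() could then write the BBE dword through the manager rather than poking the pool BOs directly.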

>  
> -	ctx->mem.ccs_bb_pool = sa_manager;
> +	ctx->mem.ccs_bb_pool = drm_mm_manager;
>  
>  	return 0;
>  }
>  
>  static void ccs_rw_update_ring(struct xe_sriov_vf_ccs_ctx *ctx)
>  {
> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>  	u32 dw[10], i = 0;
>  
> @@ -388,7 +387,7 @@ int xe_sriov_vf_ccs_init(struct xe_device *xe)
>  #define XE_SRIOV_VF_CCS_RW_BB_ADDR_OFFSET	(2 * sizeof(u32))
>  void xe_sriov_vf_ccs_rw_update_bb_addr(struct xe_sriov_vf_ccs_ctx *ctx)
>  {
> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>  	struct xe_device *xe = gt_to_xe(ctx->mig_q->gt);
>  
> @@ -412,8 +411,8 @@ int xe_sriov_vf_ccs_attach_bo(struct xe_bo *bo)
>  	struct xe_device *xe = xe_bo_device(bo);
>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>  	struct xe_sriov_vf_ccs_ctx *ctx;
> +	struct xe_drm_mm_bb *bb;
>  	struct xe_tile *tile;
> -	struct xe_bb *bb;
>  	int err = 0;
>  
>  	xe_assert(xe, IS_VF_CCS_READY(xe));
> @@ -445,7 +444,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>  {
>  	struct xe_device *xe = xe_bo_device(bo);
>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> -	struct xe_bb *bb;
> +	struct xe_drm_mm_bb *bb;
>  
>  	xe_assert(xe, IS_VF_CCS_READY(xe));
>  
> @@ -471,8 +470,8 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>   */
>  void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>  {
> -	struct xe_sa_manager *bb_pool;
>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +	struct xe_drm_mm_manager *bb_pool;
>  
>  	if (!IS_VF_CCS_READY(xe))
>  		return;
> @@ -485,7 +484,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>  
>  		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
>  		drm_printf(p, "-------------------------\n");
> -		drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
> +		drm_mm_print(&bb_pool->base, p);
>  		drm_puts(p, "\n");
>  	}
>  }
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> index 22c499943d2a..f2af074578c9 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> @@ -33,7 +33,7 @@ struct xe_sriov_vf_ccs_ctx {
>  	/** @mem: memory data */
>  	struct {
>  		/** @mem.ccs_bb_pool: Pool from which batch buffers are allocated. */
> -		struct xe_sa_manager *ccs_bb_pool;
> +		struct xe_drm_mm_manager *ccs_bb_pool;
>  	} mem;
>  };
>  


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write
  2026-03-27 11:07   ` Michal Wajdeczko
@ 2026-03-27 11:17     ` K V P, Satyanarayana
  2026-03-27 11:47       ` Michal Wajdeczko
  0 siblings, 1 reply; 20+ messages in thread
From: K V P, Satyanarayana @ 2026-03-27 11:17 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-xe
  Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst



On 27-Mar-26 4:37 PM, Michal Wajdeczko wrote:
>
> On 3/20/2026 1:12 PM, Satyanarayana K V P wrote:
>> The suballocator algorithm tracks a hole cursor at the last allocation
>> and tries to allocate after it. This is optimized for fence-ordered
>> progress, where older allocations are expected to become reusable first.
>>
>> In fence-enabled mode, that ordering assumption holds. In fence-disabled
>> mode, allocations may be freed in arbitrary order, so limiting allocation
>> to the current hole window can miss valid free space and fail allocations
>> despite sufficient total space.
>>
>> Use DRM memory manager instead of sub-allocator to get rid of this issue
>> as CCS read/write operations do not use fences.
>>
>> Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
>> Signed-off-by: Satyanarayana K V P<satyanarayana.k.v.p@intel.com>
>> Cc: Matthew Brost<matthew.brost@intel.com>
>> Cc: Thomas Hellström<thomas.hellstrom@linux.intel.com>
>> Cc: Maarten Lankhorst<dev@lankhorst.se>
>> Cc: Michal Wajdeczko<michal.wajdeczko@intel.com>
>>
>> ---
>> Used drm mm instead of drm sa based on comments from
>> https://lore.kernel.org/all/bbf0d48d-a95a-46e1-ac8f-e8a0daa81365@amd.com/
>> ---
>>   drivers/gpu/drm/xe/xe_bo_types.h           |  3 +-
>>   drivers/gpu/drm/xe/xe_migrate.c            | 56 ++++++++++++----------
>>   drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 39 ++++++++-------
>>   drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  2 +-
>>   4 files changed, 53 insertions(+), 47 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
>> index d4fe3c8dca5b..4c4f15c5648e 100644
>> --- a/drivers/gpu/drm/xe/xe_bo_types.h
>> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
>> @@ -18,6 +18,7 @@
>>   #include "xe_ggtt_types.h"
>>   
>>   struct xe_device;
>> +struct xe_drm_mm_bb;
>>   struct xe_vm;
>>   
>>   #define XE_BO_MAX_PLACEMENTS	3
>> @@ -88,7 +89,7 @@ struct xe_bo {
>>   	bool ccs_cleared;
>>   
>>   	/** @bb_ccs: BB instructions of CCS read/write. Valid only for VF */
>> -	struct xe_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>> +	struct xe_drm_mm_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>>   
>>   	/**
>>   	 * @cpu_caching: CPU caching mode. Currently only used for userspace
>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
>> index fc918b4fba54..2fefd306cb2e 100644
>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>> @@ -22,6 +22,7 @@
>>   #include "xe_assert.h"
>>   #include "xe_bb.h"
>>   #include "xe_bo.h"
>> +#include "xe_drm_mm.h"
>>   #include "xe_exec_queue.h"
>>   #include "xe_ggtt.h"
>>   #include "xe_gt.h"
>> @@ -1166,11 +1167,12 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>   	u32 batch_size, batch_size_allocated;
>>   	struct xe_device *xe = gt_to_xe(gt);
>>   	struct xe_res_cursor src_it, ccs_it;
>> +	struct xe_drm_mm_manager *bb_pool;
>>   	struct xe_sriov_vf_ccs_ctx *ctx;
>> -	struct xe_sa_manager *bb_pool;
>> +	struct xe_drm_mm_bb *bb = NULL;
>>   	u64 size = xe_bo_size(src_bo);
>> -	struct xe_bb *bb = NULL;
>>   	u64 src_L0, src_L0_ofs;
>> +	struct xe_bb xe_bb_tmp;
>>   	u32 src_L0_pt;
>>   	int err;
>>   
>> @@ -1208,18 +1210,18 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>   		size -= src_L0;
>>   	}
>>   
>> -	bb = xe_bb_alloc(gt);
>> +	bb = xe_drm_mm_bb_alloc();
>>   	if (IS_ERR(bb))
>>   		return PTR_ERR(bb);
>>   
>>   	bb_pool = ctx->mem.ccs_bb_pool;
>> -	scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
>> -		xe_sa_bo_swap_shadow(bb_pool);
>> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
>> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>>   
>> -		err = xe_bb_init(bb, bb_pool, batch_size);
>> +		err = xe_drm_mm_bb_insert(bb, bb_pool, batch_size);
>>   		if (err) {
>>   			xe_gt_err(gt, "BB allocation failed.\n");
>> -			xe_bb_free(bb, NULL);
>> +			kfree(bb);
>>   			return err;
>>   		}
>>   
>> @@ -1227,6 +1229,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>   		size = xe_bo_size(src_bo);
>>   		batch_size = 0;
>>   
>> +		xe_bb_tmp = (struct xe_bb){ .cs = bb->cs, .len = 0 };
>>   		/*
>>   		 * Emit PTE and copy commands here.
>>   		 * The CCS copy command can only support limited size. If the size to be
>> @@ -1255,24 +1258,27 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>   			xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
>>   			batch_size += EMIT_COPY_CCS_DW;
>>   
>> -			emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
>> +			emit_pte(m, &xe_bb_tmp, src_L0_pt, false, true, &src_it, src_L0, src);
>>   
>> -			emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
>> +			emit_pte(m, &xe_bb_tmp, ccs_pt, false, false, &ccs_it, ccs_size, src);
>>   
>> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
>> -			flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
>> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
>> +							      flush_flags);
>> +			flush_flags = xe_migrate_ccs_copy(m, &xe_bb_tmp, src_L0_ofs, src_is_pltt,
>>   							  src_L0_ofs, dst_is_pltt,
>>   							  src_L0, ccs_ofs, true);
>> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
>> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
>> +							      flush_flags);
>>   
>>   			size -= src_L0;
>>   		}
>>   
>> -		xe_assert(xe, (batch_size_allocated == bb->len));
>> +		xe_assert(xe, (batch_size_allocated == xe_bb_tmp.len));
>> +		bb->len = xe_bb_tmp.len;
>>   		src_bo->bb_ccs[read_write] = bb;
>>   
>>   		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>> -		xe_sa_bo_sync_shadow(bb->bo);
>> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
>>   	}
>>   
>>   	return 0;
>> @@ -1297,10 +1303,10 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>   void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>>   				  enum xe_sriov_vf_ccs_rw_ctxs read_write)
>>   {
>> -	struct xe_bb *bb = src_bo->bb_ccs[read_write];
>> +	struct xe_drm_mm_bb *bb = src_bo->bb_ccs[read_write];
>>   	struct xe_device *xe = xe_bo_device(src_bo);
>> +	struct xe_drm_mm_manager *bb_pool;
>>   	struct xe_sriov_vf_ccs_ctx *ctx;
>> -	struct xe_sa_manager *bb_pool;
>>   	u32 *cs;
>>   
>>   	xe_assert(xe, IS_SRIOV_VF(xe));
>> @@ -1308,17 +1314,17 @@ void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>>   	ctx = &xe->sriov.vf.ccs.contexts[read_write];
>>   	bb_pool = ctx->mem.ccs_bb_pool;
>>   
>> -	guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
>> -	xe_sa_bo_swap_shadow(bb_pool);
>> -
>> -	cs = xe_sa_bo_cpu_addr(bb->bo);
>> -	memset(cs, MI_NOOP, bb->len * sizeof(u32));
>> -	xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
>> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>>   
>> -	xe_sa_bo_sync_shadow(bb->bo);
>> +		cs = bb_pool->cpu_addr + bb->node.start;
>> +		memset(cs, MI_NOOP, bb->len * sizeof(u32));
>> +		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>>   
>> -	xe_bb_free(bb, NULL);
>> -	src_bo->bb_ccs[read_write] = NULL;
>> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
>> +		xe_drm_mm_bb_free(bb);
>> +		src_bo->bb_ccs[read_write] = NULL;
>> +	}
>>   }
>>   
>>   /**
>> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>> index db023fb66a27..6fb4641c6f0f 100644
>> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>> @@ -8,6 +8,7 @@
>>   #include "xe_bb.h"
>>   #include "xe_bo.h"
>>   #include "xe_device.h"
>> +#include "xe_drm_mm.h"
>>   #include "xe_exec_queue.h"
>>   #include "xe_exec_queue_types.h"
>>   #include "xe_gt_sriov_vf.h"
>> @@ -16,7 +17,6 @@
>>   #include "xe_lrc.h"
>>   #include "xe_migrate.h"
>>   #include "xe_pm.h"
>> -#include "xe_sa.h"
>>   #include "xe_sriov_printk.h"
>>   #include "xe_sriov_vf.h"
>>   #include "xe_sriov_vf_ccs.h"
>> @@ -141,8 +141,8 @@ static u64 get_ccs_bb_pool_size(struct xe_device *xe)
>>   
>>   static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>>   {
>> +	struct xe_drm_mm_manager *drm_mm_manager;
>>   	struct xe_device *xe = tile_to_xe(tile);
>> -	struct xe_sa_manager *sa_manager;
>>   	u64 bb_pool_size;
>>   	int offset, err;
>>   
>> @@ -150,34 +150,33 @@ static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>>   	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
>>   		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
>>   
>> -	sa_manager = __xe_sa_bo_manager_init(tile, bb_pool_size, SZ_4K, SZ_16,
>> -					     XE_SA_BO_MANAGER_FLAG_SHADOW);
>> -
>> -	if (IS_ERR(sa_manager)) {
>> -		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
>> -			     sa_manager);
>> -		err = PTR_ERR(sa_manager);
>> +	drm_mm_manager = xe_drm_mm_manager_init(tile, bb_pool_size, SZ_4K,
>> +						XE_DRM_MM_BO_MANAGER_FLAG_SHADOW);
>> +	if (IS_ERR(drm_mm_manager)) {
>> +		xe_sriov_err(xe, "XE_DRM_MM init failed with error: %pe\n",
>> +			     drm_mm_manager);
>> +		err = PTR_ERR(drm_mm_manager);
>>   		return err;
>>   	}
>>   
>>   	offset = 0;
>> -	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
>> +	xe_map_memset(xe, &drm_mm_manager->bo->vmap, offset, MI_NOOP,
>>   		      bb_pool_size);
>> -	xe_map_memset(xe, &sa_manager->shadow->vmap, offset, MI_NOOP,
>> +	xe_map_memset(xe, &drm_mm_manager->shadow->vmap, offset, MI_NOOP,
>>   		      bb_pool_size);
>>   
>>   	offset = bb_pool_size - sizeof(u32);
>> -	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
>> -	xe_map_wr(xe, &sa_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
>> +	xe_map_wr(xe, &drm_mm_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
>> +	xe_map_wr(xe, &drm_mm_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
> this seems to break the new XE MM component isolation, as you are directly
> touching the XE MM internals without XE MM being aware of this...
>
> are we sure that XE MM will not overwrite this last dword from the pool BO?
> maybe it should be exposed to the XE MM user as this 'trail guard' location?

For CCS save/restore, we submit this complete MM to the GuC, and whenever the VM is
paused, the GuC submits it to the HW. While allocating BBs, we always allocate
size + 1 so that even if an allocation lands at the end of the MM, the
MI_BATCH_BUFFER_END instruction is not overwritten.
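To make the "size + 1" scheme concrete, here is a small user-space model (hypothetical names, not the driver code): the pool keeps MI_BATCH_BUFFER_END in its last dword, and each BB reserves one extra dword so its payload can never land on that terminator.

```c
#include <assert.h>
#include <stdint.h>

#define MI_NOOP             0x00000000u
#define MI_BATCH_BUFFER_END 0x05000000u

enum { POOL_DWORDS = 16 };

static uint32_t pool[POOL_DWORDS];

/* Fill the pool with NOOPs and park a terminator in the final dword,
 * mirroring what alloc_bb_pool() does with xe_map_wr(). */
static void pool_init(void)
{
	for (int i = 0; i < POOL_DWORDS; i++)
		pool[i] = MI_NOOP;
	pool[POOL_DWORDS - 1] = MI_BATCH_BUFFER_END;
}

/* Write a BB payload of 'len' dwords at 'offset' but reserve len + 1,
 * modelling the "size + 1" trick: the extra dword guarantees the
 * payload never overwrites the trailing MI_BATCH_BUFFER_END. */
static int pool_emit_bb(int offset, int len, uint32_t fill)
{
	if (offset + len + 1 > POOL_DWORDS)
		return -1;		/* would not fit with its guard dword */
	for (int i = 0; i < len; i++)
		pool[offset + i] = fill;
	return offset + len + 1;	/* next free offset */
}
```

This is only a sketch of the invariant being discussed; the real pool is a drm_mm-managed BO, not a bump allocator.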

>>   
>> -	ctx->mem.ccs_bb_pool = sa_manager;
>> +	ctx->mem.ccs_bb_pool = drm_mm_manager;
>>   
>>   	return 0;
>>   }
>>   
>>   static void ccs_rw_update_ring(struct xe_sriov_vf_ccs_ctx *ctx)
>>   {
>> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>>   	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>>   	u32 dw[10], i = 0;
>>   
>> @@ -388,7 +387,7 @@ int xe_sriov_vf_ccs_init(struct xe_device *xe)
>>   #define XE_SRIOV_VF_CCS_RW_BB_ADDR_OFFSET	(2 * sizeof(u32))
>>   void xe_sriov_vf_ccs_rw_update_bb_addr(struct xe_sriov_vf_ccs_ctx *ctx)
>>   {
>> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>>   	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>>   	struct xe_device *xe = gt_to_xe(ctx->mig_q->gt);
>>   
>> @@ -412,8 +411,8 @@ int xe_sriov_vf_ccs_attach_bo(struct xe_bo *bo)
>>   	struct xe_device *xe = xe_bo_device(bo);
>>   	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>>   	struct xe_sriov_vf_ccs_ctx *ctx;
>> +	struct xe_drm_mm_bb *bb;
>>   	struct xe_tile *tile;
>> -	struct xe_bb *bb;
>>   	int err = 0;
>>   
>>   	xe_assert(xe, IS_VF_CCS_READY(xe));
>> @@ -445,7 +444,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>>   {
>>   	struct xe_device *xe = xe_bo_device(bo);
>>   	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>> -	struct xe_bb *bb;
>> +	struct xe_drm_mm_bb *bb;
>>   
>>   	xe_assert(xe, IS_VF_CCS_READY(xe));
>>   
>> @@ -471,8 +470,8 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>>    */
>>   void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>>   {
>> -	struct xe_sa_manager *bb_pool;
>>   	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>> +	struct xe_drm_mm_manager *bb_pool;
>>   
>>   	if (!IS_VF_CCS_READY(xe))
>>   		return;
>> @@ -485,7 +484,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>>   
>>   		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
>>   		drm_printf(p, "-------------------------\n");
>> -		drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
>> +		drm_mm_print(&bb_pool->base, p);
>>   		drm_puts(p, "\n");
>>   	}
>>   }
>> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>> index 22c499943d2a..f2af074578c9 100644
>> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>> @@ -33,7 +33,7 @@ struct xe_sriov_vf_ccs_ctx {
>>   	/** @mem: memory data */
>>   	struct {
>>   		/** @mem.ccs_bb_pool: Pool from which batch buffers are allocated. */
>> -		struct xe_sa_manager *ccs_bb_pool;
>> +		struct xe_drm_mm_manager *ccs_bb_pool;
>>   	} mem;
>>   };
>>   



* Re: [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write
  2026-03-27 11:17     ` K V P, Satyanarayana
@ 2026-03-27 11:47       ` Michal Wajdeczko
  2026-03-27 20:07         ` Matthew Brost
  0 siblings, 1 reply; 20+ messages in thread
From: Michal Wajdeczko @ 2026-03-27 11:47 UTC (permalink / raw)
  To: K V P, Satyanarayana, intel-xe
  Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst



On 3/27/2026 12:17 PM, K V P, Satyanarayana wrote:
> 
> On 27-Mar-26 4:37 PM, Michal Wajdeczko wrote:
>> On 3/20/2026 1:12 PM, Satyanarayana K V P wrote:
>>> The suballocator algorithm tracks a hole cursor at the last allocation
>>> and tries to allocate after it. This is optimized for fence-ordered
>>> progress, where older allocations are expected to become reusable first.
>>>
>>> In fence-enabled mode, that ordering assumption holds. In fence-disabled
>>> mode, allocations may be freed in arbitrary order, so limiting allocation
>>> to the current hole window can miss valid free space and fail allocations
>>> despite sufficient total space.
>>>
>>> Use DRM memory manager instead of sub-allocator to get rid of this issue
>>> as CCS read/write operations do not use fences.
>>>
>>> Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
>>> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Cc: Maarten Lankhorst <dev@lankhorst.se>
>>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>>>
>>> ---
>>> Used drm mm instead of drm sa based on comments from
>>> https://lore.kernel.org/all/bbf0d48d-a95a-46e1-ac8f-e8a0daa81365@amd.com/
>>> ---
>>>  drivers/gpu/drm/xe/xe_bo_types.h           |  3 +-
>>>  drivers/gpu/drm/xe/xe_migrate.c            | 56 ++++++++++++----------
>>>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 39 ++++++++-------
>>>  drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  2 +-
>>>  4 files changed, 53 insertions(+), 47 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
>>> index d4fe3c8dca5b..4c4f15c5648e 100644
>>> --- a/drivers/gpu/drm/xe/xe_bo_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
>>> @@ -18,6 +18,7 @@
>>>  #include "xe_ggtt_types.h"
>>>  
>>>  struct xe_device;
>>> +struct xe_drm_mm_bb;
>>>  struct xe_vm;
>>>  
>>>  #define XE_BO_MAX_PLACEMENTS 3
>>> @@ -88,7 +89,7 @@ struct xe_bo {
>>>  	bool ccs_cleared;
>>>  
>>>  	/** @bb_ccs: BB instructions of CCS read/write. Valid only for VF */
>>> -	struct xe_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>>> +	struct xe_drm_mm_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>>>  
>>>  	/**
>>>  	 * @cpu_caching: CPU caching mode. Currently only used for userspace
>>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
>>> index fc918b4fba54..2fefd306cb2e 100644
>>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>>> @@ -22,6 +22,7 @@
>>>  #include "xe_assert.h"
>>>  #include "xe_bb.h"
>>>  #include "xe_bo.h"
>>> +#include "xe_drm_mm.h"
>>>  #include "xe_exec_queue.h"
>>>  #include "xe_ggtt.h"
>>>  #include "xe_gt.h"
>>> @@ -1166,11 +1167,12 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>>  	u32 batch_size, batch_size_allocated;
>>>  	struct xe_device *xe = gt_to_xe(gt);
>>>  	struct xe_res_cursor src_it, ccs_it;
>>> +	struct xe_drm_mm_manager *bb_pool;
>>>  	struct xe_sriov_vf_ccs_ctx *ctx;
>>> -	struct xe_sa_manager *bb_pool;
>>> +	struct xe_drm_mm_bb *bb = NULL;
>>>  	u64 size = xe_bo_size(src_bo);
>>> -	struct xe_bb *bb = NULL;
>>>  	u64 src_L0, src_L0_ofs;
>>> +	struct xe_bb xe_bb_tmp;
>>>  	u32 src_L0_pt;
>>>  	int err;
>>>  
>>> @@ -1208,18 +1210,18 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>>  		size -= src_L0;
>>>  	}
>>>  
>>> -	bb = xe_bb_alloc(gt);
>>> +	bb = xe_drm_mm_bb_alloc();
>>>  	if (IS_ERR(bb))
>>>  		return PTR_ERR(bb);
>>>  
>>>  	bb_pool = ctx->mem.ccs_bb_pool;
>>> -	scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
>>> -		xe_sa_bo_swap_shadow(bb_pool);
>>> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
>>> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>>>  
>>> -		err = xe_bb_init(bb, bb_pool, batch_size);
>>> +		err = xe_drm_mm_bb_insert(bb, bb_pool, batch_size);
>>>  		if (err) {
>>>  			xe_gt_err(gt, "BB allocation failed.\n");
>>> -			xe_bb_free(bb, NULL);
>>> +			kfree(bb);
>>>  			return err;
>>>  		}
>>>  
>>> @@ -1227,6 +1229,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>>  		size = xe_bo_size(src_bo);
>>>  		batch_size = 0;
>>>  
>>> +		xe_bb_tmp = (struct xe_bb){ .cs = bb->cs, .len = 0 };
>>>  		/*
>>>  		 * Emit PTE and copy commands here.
>>>  		 * The CCS copy command can only support limited size. If the size to be
>>> @@ -1255,24 +1258,27 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>>  			xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
>>>  			batch_size += EMIT_COPY_CCS_DW;
>>>  
>>> -			emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
>>> +			emit_pte(m, &xe_bb_tmp, src_L0_pt, false, true, &src_it, src_L0, src);
>>>  
>>> -			emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
>>> +			emit_pte(m, &xe_bb_tmp, ccs_pt, false, false, &ccs_it, ccs_size, src);
>>>  
>>> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
>>> -			flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
>>> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
>>> +							      flush_flags);
>>> +			flush_flags = xe_migrate_ccs_copy(m, &xe_bb_tmp, src_L0_ofs, src_is_pltt,
>>>  							  src_L0_ofs, dst_is_pltt,
>>>  							  src_L0, ccs_ofs, true);
>>> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
>>> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
>>> +							      flush_flags);
>>>  
>>>  			size -= src_L0;
>>>  		}
>>>  
>>> -		xe_assert(xe, (batch_size_allocated == bb->len));
>>> +		xe_assert(xe, (batch_size_allocated == xe_bb_tmp.len));
>>> +		bb->len = xe_bb_tmp.len;
>>>  		src_bo->bb_ccs[read_write] = bb;
>>>  
>>>  		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>>> -		xe_sa_bo_sync_shadow(bb->bo);
>>> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
>>>  	}
>>>  
>>>  	return 0;
>>> @@ -1297,10 +1303,10 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>>>  void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>>>  				  enum xe_sriov_vf_ccs_rw_ctxs read_write)
>>>  {
>>> -	struct xe_bb *bb = src_bo->bb_ccs[read_write];
>>> +	struct xe_drm_mm_bb *bb = src_bo->bb_ccs[read_write];
>>>  	struct xe_device *xe = xe_bo_device(src_bo);
>>> +	struct xe_drm_mm_manager *bb_pool;
>>>  	struct xe_sriov_vf_ccs_ctx *ctx;
>>> -	struct xe_sa_manager *bb_pool;
>>>  	u32 *cs;
>>>  
>>>  	xe_assert(xe, IS_SRIOV_VF(xe));
>>> @@ -1308,17 +1314,17 @@ void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
>>>  	ctx = &xe->sriov.vf.ccs.contexts[read_write];
>>>  	bb_pool = ctx->mem.ccs_bb_pool;
>>>  
>>> -	guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
>>> -	xe_sa_bo_swap_shadow(bb_pool);
>>> -
>>> -	cs = xe_sa_bo_cpu_addr(bb->bo);
>>> -	memset(cs, MI_NOOP, bb->len * sizeof(u32));
>>> -	xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>>> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
>>> +		xe_drm_mm_bo_swap_shadow(bb_pool);
>>>  
>>> -	xe_sa_bo_sync_shadow(bb->bo);
>>> +		cs = bb_pool->cpu_addr + bb->node.start;
>>> +		memset(cs, MI_NOOP, bb->len * sizeof(u32));
>>> +		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
>>>  
>>> -	xe_bb_free(bb, NULL);
>>> -	src_bo->bb_ccs[read_write] = NULL;
>>> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
>>> +		xe_drm_mm_bb_free(bb);
>>> +		src_bo->bb_ccs[read_write] = NULL;
>>> +	}
>>>  }
>>>  
>>>  /**
>>> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>>> index db023fb66a27..6fb4641c6f0f 100644
>>> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>>> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>>> @@ -8,6 +8,7 @@
>>>  #include "xe_bb.h"
>>>  #include "xe_bo.h"
>>>  #include "xe_device.h"
>>> +#include "xe_drm_mm.h"
>>>  #include "xe_exec_queue.h"
>>>  #include "xe_exec_queue_types.h"
>>>  #include "xe_gt_sriov_vf.h"
>>> @@ -16,7 +17,6 @@
>>>  #include "xe_lrc.h"
>>>  #include "xe_migrate.h"
>>>  #include "xe_pm.h"
>>> -#include "xe_sa.h"
>>>  #include "xe_sriov_printk.h"
>>>  #include "xe_sriov_vf.h"
>>>  #include "xe_sriov_vf_ccs.h"
>>> @@ -141,8 +141,8 @@ static u64 get_ccs_bb_pool_size(struct xe_device *xe)
>>>  
>>>  static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>>>  {
>>> +	struct xe_drm_mm_manager *drm_mm_manager;
>>>  	struct xe_device *xe = tile_to_xe(tile);
>>> -	struct xe_sa_manager *sa_manager;
>>>  	u64 bb_pool_size;
>>>  	int offset, err;
>>>  
>>> @@ -150,34 +150,33 @@ static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
>>>  	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
>>>  		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
>>>  
>>> -	sa_manager = __xe_sa_bo_manager_init(tile, bb_pool_size, SZ_4K, SZ_16,
>>> -					     XE_SA_BO_MANAGER_FLAG_SHADOW);
>>> -
>>> -	if (IS_ERR(sa_manager)) {
>>> -		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
>>> -			     sa_manager);
>>> -		err = PTR_ERR(sa_manager);
>>> +	drm_mm_manager = xe_drm_mm_manager_init(tile, bb_pool_size, SZ_4K,
>>> +						XE_DRM_MM_BO_MANAGER_FLAG_SHADOW);
>>> +	if (IS_ERR(drm_mm_manager)) {
>>> +		xe_sriov_err(xe, "XE_DRM_MM init failed with error: %pe\n",
>>> +			     drm_mm_manager);
>>> +		err = PTR_ERR(drm_mm_manager);
>>>  		return err;
>>>  	}
>>>  
>>>  	offset = 0;
>>> -	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
>>> +	xe_map_memset(xe, &drm_mm_manager->bo->vmap, offset, MI_NOOP,
>>>  		      bb_pool_size);
>>> -	xe_map_memset(xe, &sa_manager->shadow->vmap, offset, MI_NOOP,
>>> +	xe_map_memset(xe, &drm_mm_manager->shadow->vmap, offset, MI_NOOP,
>>>  		      bb_pool_size);
>>>  
>>>  	offset = bb_pool_size - sizeof(u32);
>>> -	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
>>> -	xe_map_wr(xe, &sa_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
>>> +	xe_map_wr(xe, &drm_mm_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
>>> +	xe_map_wr(xe, &drm_mm_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
>> this seems to break the new XE MM component isolation, as you are directly
>> touching the XE MM internals without XE MM being aware of this...
>>
>> are we sure that XE MM will not overwrite this last dword from the pool BO?
>> maybe it should be exposed to the XE MM user as this 'trail guard' location?
> 
> For CCS save/restore, we submit this complete MM to the GuC, and whenever the VM is paused, the GuC submits it to the HW.
> While allocating BBs, we always allocate size + 1

that sounds like a hack and relies on XEMM not zeroing the new BB,
but in v1 xe_drm_mm_insert_node it looks like the new node->size is zeroed...

btw, it also unnecessarily wastes a lot of dwords (1 per BB)

but this could be done in a way that gives XEMM some control over it,
so we would not need to deep dive into XEMM internals
 
assuming that we allocate the XEMM pool as size + trail/guard:


   <------------------- pool size -----> <-> trail size
  +-------------------------------------+---+
  |  ____    _______                    |   |
  +-------------------------------------+---+
     ^       ^                           ^
     |       |                           |
     bb      bb                          trail

XEMM could return a vmap of the trail (or a special trail node) where we
could write our BB_END, and then let XEMM flush it to the BOs

and XEMM could allocate all BBs (without any extra +1) within the real
pool size, with no risk of overwriting our data in the trail
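A rough user-space model of this trail-guard idea (illustrative names only, not a real XEMM API): the manager reserves the trail internally and writes the terminator itself, so allocations can use the full pool size without the per-BB "+1".

```c
#include <assert.h>
#include <stdint.h>

#define MI_NOOP             0x00000000u
#define MI_BATCH_BUFFER_END 0x05000000u

enum { TOTAL_DWORDS = 16 };

/* Hypothetical model of the proposal: the pool manager owns a trail
 * region past the allocatable range and writes the terminator itself,
 * so callers never touch pool internals or over-allocate by one. */
struct pool {
	uint32_t dwords[TOTAL_DWORDS];
	int size;	/* allocatable dwords (pool size minus trail) */
	int head;	/* bump-allocator stand-in for drm_mm */
};

static void pool_init(struct pool *p, int trail)
{
	p->size = TOTAL_DWORDS - trail;
	p->head = 0;
	for (int i = 0; i < TOTAL_DWORDS; i++)
		p->dwords[i] = MI_NOOP;
	/* manager-owned trail guard, outside the allocatable range */
	p->dwords[TOTAL_DWORDS - 1] = MI_BATCH_BUFFER_END;
}

static int pool_alloc(struct pool *p, int len)
{
	int off = p->head;

	if (off + len > p->size)
		return -1;	/* full real pool size is usable, no +1 */
	p->head = off + len;
	return off;
}
```

Because the guard lives past `size`, no allocation can reach it, and the manager (not the caller) is responsible for flushing it to the primary and shadow BOs.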

> so that even if the allocation happens at the end of the MM,
> the MI_BATCH_BUFFER_END instruction is not overwritten.
> 
>>>  
>>> -	ctx->mem.ccs_bb_pool = sa_manager;
>>> +	ctx->mem.ccs_bb_pool = drm_mm_manager;
>>>  
>>>  	return 0;
>>>  }
>>>  
>>>  static void ccs_rw_update_ring(struct xe_sriov_vf_ccs_ctx *ctx)
>>>  {
>>> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>>> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>>>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>>>  	u32 dw[10], i = 0;
>>>  
>>> @@ -388,7 +387,7 @@ int xe_sriov_vf_ccs_init(struct xe_device *xe)
>>>  #define XE_SRIOV_VF_CCS_RW_BB_ADDR_OFFSET	(2 * sizeof(u32))
>>>  void xe_sriov_vf_ccs_rw_update_bb_addr(struct xe_sriov_vf_ccs_ctx *ctx)
>>>  {
>>> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>>> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
>>>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
>>>  	struct xe_device *xe = gt_to_xe(ctx->mig_q->gt);
>>>  
>>> @@ -412,8 +411,8 @@ int xe_sriov_vf_ccs_attach_bo(struct xe_bo *bo)
>>>  	struct xe_device *xe = xe_bo_device(bo);
>>>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>>>  	struct xe_sriov_vf_ccs_ctx *ctx;
>>> +	struct xe_drm_mm_bb *bb;
>>>  	struct xe_tile *tile;
>>> -	struct xe_bb *bb;
>>>  	int err = 0;
>>>  
>>>  	xe_assert(xe, IS_VF_CCS_READY(xe));
>>> @@ -445,7 +444,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>>>  {
>>>  	struct xe_device *xe = xe_bo_device(bo);
>>>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>>> -	struct xe_bb *bb;
>>> +	struct xe_drm_mm_bb *bb;
>>>  
>>>  	xe_assert(xe, IS_VF_CCS_READY(xe));
>>>  
>>> @@ -471,8 +470,8 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
>>>   */
>>>  void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>>>  {
>>> -	struct xe_sa_manager *bb_pool;
>>>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
>>> +	struct xe_drm_mm_manager *bb_pool;
>>>  
>>>  	if (!IS_VF_CCS_READY(xe))
>>>  		return;
>>> @@ -485,7 +484,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
>>>  
>>>  		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
>>>  		drm_printf(p, "-------------------------\n");
>>> -		drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
>>> +		drm_mm_print(&bb_pool->base, p);
>>>  		drm_puts(p, "\n");
>>>  	}
>>>  }
>>> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>>> index 22c499943d2a..f2af074578c9 100644
>>> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>>> @@ -33,7 +33,7 @@ struct xe_sriov_vf_ccs_ctx {
>>>  	/** @mem: memory data */
>>>  	struct {
>>>  		/** @mem.ccs_bb_pool: Pool from which batch buffers are allocated. */
>>> -		struct xe_sa_manager *ccs_bb_pool;
>>> +		struct xe_drm_mm_manager *ccs_bb_pool;
>>>  	} mem;
>>>  };
>>>  



* Re: [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support
  2026-03-27 11:06       ` Thomas Hellström
@ 2026-03-27 19:54         ` Matthew Brost
  0 siblings, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2026-03-27 19:54 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Michal Wajdeczko, Satyanarayana K V P, intel-xe,
	Maarten Lankhorst

On Fri, Mar 27, 2026 at 12:06:21PM +0100, Thomas Hellström wrote:
> On Fri, 2026-03-27 at 11:54 +0100, Michal Wajdeczko wrote:
> > 
> > 
> > On 3/26/2026 8:57 PM, Thomas Hellström wrote:
> > > On Fri, 2026-03-20 at 12:12 +0000, Satyanarayana K V P wrote:
> > > > Add a xe_drm_mm manager to allocate sub-ranges from a BO-backed
> > > > pool
> > > > using drm_mm.
> > > 
> > > Just a comment on the naming. xe_drm_mm sounds like this is yet another
> > > specialized range manager implementation.
> > 
> > well, in fact it looks like *very* specialized MM, not much reusable
> > elsewhere
> 
> True, what I meant was it's not *implementing* a new range manager,
> like drm_mm, but rather using an existing.
> 
> > 
> > > 
> > > Could we invent a better name? 
> > > 
> > > xe_mm_suballoc? Something even better?
> > 
> > xe_shadow_pool ?
> 
> I noticed a new patch was sent out renaming to xe_mm_suballoc but 
> xe_shadow_pool sounds much better.

+1, I tend not to look at naming nits when reviewing. xe_shadow_pool is a
better name.

Matt

> 
> Thanks,
> Thomas
> 
> 
> 
> > 
> > or if we split this MM into "plain pool" and "shadow pool":
> > 
> > xe_pool		--> like xe_sa but works without fences (can be reused in xe_guc_buf)
> > xe_shadow_pool	--> built on top of plain, with shadow logic
> > 
> > 
> > more comments below
> > 
> > > 
> > > 
> > > Thanks,
> > > Thomas
> > > 
> > > 
> > > 
> > > > 
> > > > Signed-off-by: Satyanarayana K V P
> > > > <satyanarayana.k.v.p@intel.com>
> > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > Cc: Maarten Lankhorst <dev@lankhorst.se>
> > > > Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/xe/Makefile          |   1 +
> > > >  drivers/gpu/drm/xe/xe_drm_mm.c       | 200
> > > > +++++++++++++++++++++++++++
> > > >  drivers/gpu/drm/xe/xe_drm_mm.h       |  55 ++++++++
> > > >  drivers/gpu/drm/xe/xe_drm_mm_types.h |  42 ++++++
> > > >  4 files changed, 298 insertions(+)
> > > >  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
> > > >  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
> > > >  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h
> > > > 
> > > > diff --git a/drivers/gpu/drm/xe/Makefile
> > > > b/drivers/gpu/drm/xe/Makefile
> > > > index dab979287a96..6ab4e2392df1 100644
> > > > --- a/drivers/gpu/drm/xe/Makefile
> > > > +++ b/drivers/gpu/drm/xe/Makefile
> > > > @@ -41,6 +41,7 @@ xe-y += xe_bb.o \
> > > >  	xe_device_sysfs.o \
> > > >  	xe_dma_buf.o \
> > > >  	xe_drm_client.o \
> > > > +	xe_drm_mm.o \
> > > >  	xe_drm_ras.o \
> > > >  	xe_eu_stall.o \
> > > >  	xe_exec.o \
> > > > diff --git a/drivers/gpu/drm/xe/xe_drm_mm.c
> > > > b/drivers/gpu/drm/xe/xe_drm_mm.c
> > > > new file mode 100644
> > > > index 000000000000..c5b1766fa75a
> > > > --- /dev/null
> > > > +++ b/drivers/gpu/drm/xe/xe_drm_mm.c
> > > > @@ -0,0 +1,200 @@
> > > > +// SPDX-License-Identifier: MIT
> > > > +/*
> > > > + * Copyright © 2026 Intel Corporation
> > > > + */
> > > > +
> > > > +#include <drm/drm_managed.h>
> > 
> > nit: <linux> headers go first
> > 
> > > > +#include <linux/kernel.h>
> > > > +
> > > > +#include "xe_device_types.h"
> > > > +#include "xe_drm_mm_types.h"
> > > > +#include "xe_drm_mm.h"
> > > > +#include "xe_map.h"
> > > > +
> > > > +static void xe_drm_mm_manager_fini(struct drm_device *drm, void *arg)
> > > > +{
> > > > +	struct xe_drm_mm_manager *drm_mm_manager = arg;
> > > > +	struct xe_bo *bo = drm_mm_manager->bo;
> > > > +
> > > > +	if (!bo) {
> > 
> > not needed, we shouldn't be here if we failed to allocate a bo
> > 
> > > > +		drm_err(drm, "no bo for drm mm manager\n");
> > 
> > btw, our MM seems to be 'tile' oriented, so we should use
> > xe_tile_err() 
> > 
> > > > +		return;
> > > > +	}
> > > > +
> > > > +	drm_mm_takedown(&drm_mm_manager->base);
> > > > +
> > > > +	if (drm_mm_manager->is_iomem)
> > > > +		kvfree(drm_mm_manager->cpu_addr);
> > > > +
> > > > +	drm_mm_manager->bo = NULL;
> > > > +	drm_mm_manager->shadow = NULL;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_drm_mm_manager_init() - Create and initialize the DRM MM manager.
> > > > + * @tile: the &xe_tile where allocate.
> > > > + * @size: number of bytes to allocate
> > > > + * @guard: number of bytes to exclude from allocation for guard region
> > 
> > do we really need this guard ? it was already questionable on the
> > xe_sa
> > 
> > > > + * @flags: additional flags for configuring the DRM MM manager.
> > > > + *
> > > > + * Initializes a DRM MM manager for managing memory allocations on a specific
> > > > + * XE tile. The function allocates a buffer object to back the memory region
> > > > + * managed by the DRM MM manager.
> > > > + *
> > > > + * Return: a pointer to the &xe_drm_mm_manager, or an error pointer on failure.
> > > > + */
> > > > +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> > > > +						 u32 guard, u32 flags)
> > > > +{
> > > > +	struct xe_device *xe = tile_to_xe(tile);
> > > > +	struct xe_drm_mm_manager *drm_mm_manager;
> > > > +	u64 managed_size;
> > > > +	struct xe_bo *bo;
> > > > +	int ret;
> > > > +
> > > > +	xe_tile_assert(tile, size > guard);
> > > > +	managed_size = size - guard;
> > > > +
> > > > +	drm_mm_manager = drmm_kzalloc(&xe->drm, sizeof(*drm_mm_manager), GFP_KERNEL);
> > > > +	if (!drm_mm_manager)
> > > > +		return ERR_PTR(-ENOMEM);
> > 
> > can't we make this manager a member of the tile and then use
> > container_of to get parent tile pointer?
> > 
> > I guess we will have exactly one such MM per tile, no?
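As a sketch of the container_of approach suggested here (illustrative struct names, not the actual xe types), embedding the manager in the tile lets the parent be recovered without a back-pointer:

```c
#include <assert.h>
#include <stddef.h>

/* Classic container_of pattern, as used throughout the kernel. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct xe_drm_mm_manager {
	int dummy;			/* stand-in for the real fields */
};

struct xe_tile {
	int id;
	struct xe_drm_mm_manager mm;	/* one manager embedded per tile */
};

/* Recover the parent tile from an embedded manager pointer. */
static struct xe_tile *mm_to_tile(struct xe_drm_mm_manager *mgr)
{
	return container_of(mgr, struct xe_tile, mm);
}
```

With this layout the manager needs no stored tile pointer, and lifetime follows the tile automatically.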
> > 
> > > > +
> > > > +	bo = xe_managed_bo_create_pin_map(xe, tile, size,
> > > > +					  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> > > > +					  XE_BO_FLAG_GGTT |
> > > > +					  XE_BO_FLAG_GGTT_INVALIDATE |
> > > > +					  XE_BO_FLAG_PINNED_NORESTORE);
> > > > +	if (IS_ERR(bo)) {
> > > > +		drm_err(&xe->drm, "Failed to prepare %uKiB BO
> > > > for
> > 
> > xe_tile_err(tile, ...
> > 
> > but maybe nicer solution would be to add such error to the
> > xe_managed_bo_create_pin_map() to avoid duplicating this diag
> > messages in all callers
> > 
> > > > DRM MM manager (%pe)\n",
> > > > +			size / SZ_1K, bo);
> > > > +		return ERR_CAST(bo);
> > > > +	}
> > > > +	drm_mm_manager->bo = bo;
> > > > +	drm_mm_manager->is_iomem = bo->vmap.is_iomem;
> > 
> > do we need to cache this?
> > 
> > > > +
> > > > +	if (bo->vmap.is_iomem) {
> > > > +		drm_mm_manager->cpu_addr = kvzalloc(managed_size, GFP_KERNEL);
> > > > +		if (!drm_mm_manager->cpu_addr)
> > > > +			return ERR_PTR(-ENOMEM);
> > > > +	} else {
> > > > +		drm_mm_manager->cpu_addr = bo->vmap.vaddr;
> > > > +		memset(drm_mm_manager->cpu_addr, 0, bo->ttm.base.size);
> > 
> > btw, maybe we should consider adding XE_BO_FLAG_ZERO and let
> > xe_create_bo do the initial clear for us?
> > 
> > @Matt @Thomas ?
> > 
> > > > +	}
> > > > +
> > > > +	if (flags & XE_DRM_MM_BO_MANAGER_FLAG_SHADOW) {
> > 
> > hmm, so this is not a main feature of this MM?
> > then maybe we should have two components:
> > 
> > 	* xe_pool (plain MM, like xe_sa but without fences)
> > 	* xe_shadow (adds shadow BO on top of plain MM)
> > 
> > > > +		struct xe_bo *shadow;
> > > > +
> > > > +		ret = drmm_mutex_init(&xe->drm, &drm_mm_manager->swap_guard);
> > > > +		if (ret)
> > > > +			return ERR_PTR(ret);
> > > > +		if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
> > > > +			fs_reclaim_acquire(GFP_KERNEL);
> > > > +			might_lock(&drm_mm_manager->swap_guard);
> > > > +			fs_reclaim_release(GFP_KERNEL);
> > > > +		}
> > > > +
> > > > +		shadow = xe_managed_bo_create_pin_map(xe, tile, size,
> > > > +						      XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> > > > +						      XE_BO_FLAG_GGTT |
> > > > +						      XE_BO_FLAG_GGTT_INVALIDATE |
> > > > +						      XE_BO_FLAG_PINNED_NORESTORE);
> > > > +		if (IS_ERR(shadow)) {
> > > > +			drm_err(&xe->drm,
> > > > +				"Failed to prepare %uKiB shadow BO for DRM MM manager (%pe)\n",
> > > > +				size / SZ_1K, shadow);
> > > > +			return ERR_CAST(shadow);
> > > > +		}
> > > > +		drm_mm_manager->shadow = shadow;
> > > > +	}
> > > > +
> > > > +	drm_mm_init(&drm_mm_manager->base, 0, managed_size);
> > > > +	ret = drmm_add_action_or_reset(&xe->drm, xe_drm_mm_manager_fini, drm_mm_manager);
> > > > +	if (ret)
> > > > +		return ERR_PTR(ret);
> > > > +
> > > > +	return drm_mm_manager;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_drm_mm_bo_swap_shadow() - Swap the primary BO with the shadow BO.
> > 
> > do we need _bo_ in the function name here?
> > 
> > > > + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> > > > + *
> > > > + * Swaps the primary buffer object with the shadow buffer object in the DRM MM
> > > > + * manager.
> > 
> > say a word about required swap_guard mutex
> > 
> > and/or add the _locked suffix to the function name
> > 
> > > > + *
> > > > + * Return: None.
> > > > + */
> > > > +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager)
> > > > +{
> > > > +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> > > > +
> > > > +	xe_assert(xe, drm_mm_manager->shadow);
> > 
> > use xe_tile_assert
> > 
> > > > +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> > > > +
> > > > +	swap(drm_mm_manager->bo, drm_mm_manager->shadow);
> > > > +	if (!drm_mm_manager->bo->vmap.is_iomem)
> > > > +		drm_mm_manager->cpu_addr = drm_mm_manager->bo->vmap.vaddr;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_drm_mm_sync_shadow() - Synchronize the shadow BO with the primary BO.
> > > > + * @drm_mm_manager: the DRM MM manager containing the primary and shadow BOs.
> > > > + * @node: the DRM MM node representing the region to synchronize.
> > > > + *
> > > > + * Copies the contents of the specified region from the primary buffer object to
> > > > + * the shadow buffer object in the DRM MM manager.
> > > > + *
> > > > + * Return: None.
> > > > + */
> > > > +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> > > > +			   struct drm_mm_node *node)
> > > > +{
> > > > +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> > > > +
> > > > +	xe_assert(xe, drm_mm_manager->shadow);
> > > > +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> > > > +
> > > > +	xe_map_memcpy_to(xe, &drm_mm_manager->shadow->vmap,
> > > > +			 node->start,
> > > > +			 drm_mm_manager->cpu_addr + node->start,
> > > > +			 node->size);
> > 
> > maybe I'm missing something, but if primary BO.is_iomem==true then
> > who/when updates the actual primary BO memory? or is it unused by
> > design and only shadow has the data ...
> > 
> > maybe some DOC section with theory-of-operation will help here?
> > 
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_drm_mm_insert_node() - Insert a node into the DRM MM manager.
> > > > + * @drm_mm_manager: the DRM MM manager to insert the node into.
> > > > + * @node: the DRM MM node to insert.
> > 
> > in recent changes to xe_ggtt we finally hid the implementation details
> > of the MM used by the xe_ggtt
> > 
> > why here we start again exposing impl detail as part of the API?
> > if we can't allocate xe_drm_mm_node here, maybe at least take it
> > as a parameter and update in place
> > 
> > > > + * @size: the size of the node to insert.
> > > > + *
> > > > + * Inserts a node into the DRM MM manager and clears the corresponding memory
> > > > + * region in both the primary and shadow buffer objects.
> > > > + *
> > > > + * Return: 0 on success, or a negative error code on failure.
> > > > + */
> > > > +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> > > > +			  struct drm_mm_node *node, u32 size)
> > > > +{
> > > > +	struct drm_mm *mm = &drm_mm_manager->base;
> > > > +	int ret;
> > > > +
> > > > +	ret = drm_mm_insert_node(mm, node, size);
> > > > +	if (ret)
> > > > +		return ret;
> > > > +
> > > > +	memset((void *)drm_mm_manager->bo->vmap.vaddr + node->start, 0, node->size);
> > 
> > iosys_map_memset(bo->vmap, start, 0, size) ?
> > 
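> > e.g. a rough, untested sketch using the existing xe_map helpers (an
> > xe pointer would be needed in scope here, e.g. via
> > xe_bo_device(drm_mm_manager->bo)):
> > 
> > 	xe_map_memset(xe, &drm_mm_manager->bo->vmap, node->start, 0, node->size);
> > 	if (drm_mm_manager->shadow)
> > 		xe_map_memset(xe, &drm_mm_manager->shadow->vmap, node->start, 0, node->size);
> > 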
> > > > +	if (drm_mm_manager->shadow)
> > > > +		memset((void *)drm_mm_manager->shadow->vmap.vaddr + node->start, 0,
> > > > +		       node->size);
> > 
> > what about clearing the drm_mm_manager->cpu_addr ?
> > 
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_drm_mm_remove_node() - Remove a node from the DRM MM manager.
> > > > + * @node: the DRM MM node to remove.
> > > > + *
> > > > + * Return: None.
> > > > + */
> > > > +void xe_drm_mm_remove_node(struct drm_mm_node *node)
> > > > +{
> > > > +	return drm_mm_remove_node(node);
> > > > +}
> > > > diff --git a/drivers/gpu/drm/xe/xe_drm_mm.h b/drivers/gpu/drm/xe/xe_drm_mm.h
> > > > new file mode 100644
> > > > index 000000000000..aeb7cab92d0b
> > > > --- /dev/null
> > > > +++ b/drivers/gpu/drm/xe/xe_drm_mm.h
> > > > @@ -0,0 +1,55 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2026 Intel Corporation
> > > > + */
> > > > +#ifndef _XE_DRM_MM_H_
> > > > +#define _XE_DRM_MM_H_
> > > > +
> > > > +#include <linux/sizes.h>
> > > > +#include <linux/types.h>
> > > > +
> > > > +#include "xe_bo.h"
> > > > +#include "xe_drm_mm_types.h"
> > > > +
> > > > +struct dma_fence;
> > 
> > do we need this?
> > 
> > > > +struct xe_tile;
> > > > +
> > > > +#define XE_DRM_MM_BO_MANAGER_FLAG_SHADOW    BIT(0)
> > > > +
> > > > +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> > > > +						 u32 guard, u32 flags);
> > > > +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager *drm_mm_manager);
> > > > +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> > > > +			   struct drm_mm_node *node);
> > > > +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> > > > +			  struct drm_mm_node *node, u32 size);
> > > > +void xe_drm_mm_remove_node(struct drm_mm_node *node);
> > > > +
> > > > +/**
> > > > + * xe_drm_mm_manager_gpu_addr() - Retrieve GPU address of a back storage BO
> > > > + * within a memory manager.
> > > > + * @drm_mm_manager: The DRM MM memory manager.
> > > > + *
> > > > + * Returns: GGTT address of the back storage BO
> > > > + */
> > > > +static inline u64 xe_drm_mm_manager_gpu_addr(struct xe_drm_mm_manager *drm_mm_manager)
> > > > +{
> > > > +	return xe_bo_ggtt_addr(drm_mm_manager->bo);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_drm_mm_bo_swap_guard() - Retrieve the mutex used to guard swap operations
> > 
> > hmm, do we need the _bo_ here?
> > 
> > > > + * on a memory manager.
> > > > + * @drm_mm_manager: The DRM MM memory manager.
> > > > + *
> > > > + * Returns: Swap guard mutex.
> > > > + */
> > > > +static inline struct mutex *xe_drm_mm_bo_swap_guard(struct xe_drm_mm_manager *drm_mm_manager)
> > > > +{
> > > > +	return &drm_mm_manager->swap_guard;
> > > > +}
> > > > +
> > > > +#endif
> > > > +
> > > > diff --git a/drivers/gpu/drm/xe/xe_drm_mm_types.h b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> > > > new file mode 100644
> > > > index 000000000000..69e0937dd8de
> > > > --- /dev/null
> > > > +++ b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> > > > @@ -0,0 +1,42 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2026 Intel Corporation
> > > > + */
> > > > +
> > > > +#ifndef _XE_DRM_MM_TYPES_H_
> > > > +#define _XE_DRM_MM_TYPES_H_
> > > > +
> > > > +#include <drm/drm_mm.h>
> > > > +
> > > > +struct xe_bo;
> > > > +
> > 
> > without kernel-doc for the struct itself, below kernel-docs for the
> > members are currently not recognized by the tool
> > 
> > > > +struct xe_drm_mm_manager {
> > > > +	/** @base: Range allocator over [0, @size) in bytes */
> > > > +	struct drm_mm base;
> > > > +	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
> > > > +	struct xe_bo *bo;
> > > > +	/** @shadow: Shadow BO for atomic command updates. */
> > > > +	struct xe_bo *shadow;
> > > > +	/** @swap_guard: Timeline guard updating @bo and @shadow */
> > > > +	struct mutex swap_guard;
> > > > +	/** @cpu_addr: CPU virtual address of the active BO. */
> > > > +	void *cpu_addr;
> > > > +	/** @size: Total size of the managed address space. */
> > > > +	u64 size;
> > > > +	/** @is_iomem: Whether the managed address space is I/O memory. */
> > > > +	bool is_iomem;
> > > > +};
> > > > +
> > 
> > ditto
> > 
> > > > +struct xe_drm_mm_bb {
> > > > +	/** @node: Range node for this batch buffer. */
> > > > +	struct drm_mm_node node;
> > > > +	/** @manager: Manager this batch buffer belongs to. */
> > > > +	struct xe_drm_mm_manager *manager;
> > > > +	/** @cs: Command stream for this batch buffer. */
> > > > +	u32 *cs;
> > > > +	/** @len: Length of the CS in dwords. */
> > > > +	u32 len;
> > > > +};
> > 
> > but we are not using this struct yet in this patch, correct?
> > 
> > > > +
> > > > +#endif
> > > > +

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write
  2026-03-27 11:47       ` Michal Wajdeczko
@ 2026-03-27 20:07         ` Matthew Brost
  0 siblings, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2026-03-27 20:07 UTC (permalink / raw)
  To: Michal Wajdeczko
  Cc: K V P, Satyanarayana, intel-xe, Thomas Hellström,
	Maarten Lankhorst

On Fri, Mar 27, 2026 at 12:47:36PM +0100, Michal Wajdeczko wrote:
> 
> 
> On 3/27/2026 12:17 PM, K V P, Satyanarayana wrote:
> > 
> > On 27-Mar-26 4:37 PM, Michal Wajdeczko wrote:
> >> On 3/20/2026 1:12 PM, Satyanarayana K V P wrote:
> >>> The suballocator algorithm tracks a hole cursor at the last allocation
> >>> and tries to allocate after it. This is optimized for fence-ordered
> >>> progress, where older allocations are expected to become reusable first.
> >>>
> >>> In fence-enabled mode, that ordering assumption holds. In fence-disabled
> >>> mode, allocations may be freed in arbitrary order, so limiting allocation
> >>> to the current hole window can miss valid free space and fail allocations
> >>> despite sufficient total space.
> >>>
> >>> Use DRM memory manager instead of sub-allocator to get rid of this issue
> >>> as CCS read/write operations do not use fences.
> >>>
> >>> Fixes: 864690cf4dd62 ("drm/xe/vf: Attach and detach CCS copy commands with BO")
> >>> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> >>> Cc: Matthew Brost <matthew.brost@intel.com>
> >>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >>> Cc: Maarten Lankhorst <dev@lankhorst.se>
> >>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> >>>
> >>> ---
> >>> Used drm mm instead of drm sa based on comments from
> >>> https://lore.kernel.org/all/bbf0d48d-a95a-46e1-ac8f-e8a0daa81365@amd.com/
> >>> ---
> >>>  drivers/gpu/drm/xe/xe_bo_types.h           |  3 +-
> >>>  drivers/gpu/drm/xe/xe_migrate.c            | 56 ++++++++++++----------
> >>>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 39 ++++++++-------
> >>>  drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  2 +-
> >>>  4 files changed, 53 insertions(+), 47 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
> >>> index d4fe3c8dca5b..4c4f15c5648e 100644
> >>> --- a/drivers/gpu/drm/xe/xe_bo_types.h
> >>> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
> >>> @@ -18,6 +18,7 @@
> >>>  #include "xe_ggtt_types.h"
> >>>  
> >>>  struct xe_device;
> >>> +struct xe_drm_mm_bb;
> >>>  struct xe_vm;
> >>>  
> >>>  #define XE_BO_MAX_PLACEMENTS 3
> >>> @@ -88,7 +89,7 @@ struct xe_bo {
> >>>  	bool ccs_cleared;
> >>>  
> >>>  	/** @bb_ccs: BB instructions of CCS read/write. Valid only for VF */
> >>> -	struct xe_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
> >>> +	struct xe_drm_mm_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
> >>>  
> >>>  	/**
> >>>  	 * @cpu_caching: CPU caching mode. Currently only used for userspace
> >>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> >>> index fc918b4fba54..2fefd306cb2e 100644
> >>> --- a/drivers/gpu/drm/xe/xe_migrate.c
> >>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> >>> @@ -22,6 +22,7 @@
> >>>  #include "xe_assert.h"
> >>>  #include "xe_bb.h"
> >>>  #include "xe_bo.h"
> >>> +#include "xe_drm_mm.h"
> >>>  #include "xe_exec_queue.h"
> >>>  #include "xe_ggtt.h"
> >>>  #include "xe_gt.h"
> >>> @@ -1166,11 +1167,12 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> >>>  	u32 batch_size, batch_size_allocated;
> >>>  	struct xe_device *xe = gt_to_xe(gt);
> >>>  	struct xe_res_cursor src_it, ccs_it;
> >>> +	struct xe_drm_mm_manager *bb_pool;
> >>>  	struct xe_sriov_vf_ccs_ctx *ctx;
> >>> -	struct xe_sa_manager *bb_pool;
> >>> +	struct xe_drm_mm_bb *bb = NULL;
> >>>  	u64 size = xe_bo_size(src_bo);
> >>> -	struct xe_bb *bb = NULL;
> >>>  	u64 src_L0, src_L0_ofs;
> >>> +	struct xe_bb xe_bb_tmp;
> >>>  	u32 src_L0_pt;
> >>>  	int err;
> >>>  
> >>> @@ -1208,18 +1210,18 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> >>>  		size -= src_L0;
> >>>  	}
> >>>  
> >>> -	bb = xe_bb_alloc(gt);
> >>> +	bb = xe_drm_mm_bb_alloc();
> >>>  	if (IS_ERR(bb))
> >>>  		return PTR_ERR(bb);
> >>>  
> >>>  	bb_pool = ctx->mem.ccs_bb_pool;
> >>> -	scoped_guard(mutex, xe_sa_bo_swap_guard(bb_pool)) {
> >>> -		xe_sa_bo_swap_shadow(bb_pool);
> >>> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
> >>> +		xe_drm_mm_bo_swap_shadow(bb_pool);
> >>>  
> >>> -		err = xe_bb_init(bb, bb_pool, batch_size);
> >>> +		err = xe_drm_mm_bb_insert(bb, bb_pool, batch_size);
> >>>  		if (err) {
> >>>  			xe_gt_err(gt, "BB allocation failed.\n");
> >>> -			xe_bb_free(bb, NULL);
> >>> +			kfree(bb);
> >>>  			return err;
> >>>  		}
> >>>  
> >>> @@ -1227,6 +1229,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> >>>  		size = xe_bo_size(src_bo);
> >>>  		batch_size = 0;
> >>>  
> >>> +		xe_bb_tmp = (struct xe_bb){ .cs = bb->cs, .len = 0 };
> >>>  		/*
> >>>  		 * Emit PTE and copy commands here.
> >>>  		 * The CCS copy command can only support limited size. If the size to be
> >>> @@ -1255,24 +1258,27 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> >>>  			xe_assert(xe, IS_ALIGNED(ccs_it.start, PAGE_SIZE));
> >>>  			batch_size += EMIT_COPY_CCS_DW;
> >>>  
> >>> -			emit_pte(m, bb, src_L0_pt, false, true, &src_it, src_L0, src);
> >>> +			emit_pte(m, &xe_bb_tmp, src_L0_pt, false, true, &src_it, src_L0, src);
> >>>  
> >>> -			emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
> >>> +			emit_pte(m, &xe_bb_tmp, ccs_pt, false, false, &ccs_it, ccs_size, src);
> >>>  
> >>> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> >>> -			flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
> >>> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
> >>> +							      flush_flags);
> >>> +			flush_flags = xe_migrate_ccs_copy(m, &xe_bb_tmp, src_L0_ofs, src_is_pltt,
> >>>  							  src_L0_ofs, dst_is_pltt,
> >>>  							  src_L0, ccs_ofs, true);
> >>> -			bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> >>> +			xe_bb_tmp.len = emit_flush_invalidate(xe_bb_tmp.cs, xe_bb_tmp.len,
> >>> +							      flush_flags);
> >>>  
> >>>  			size -= src_L0;
> >>>  		}
> >>>  
> >>> -		xe_assert(xe, (batch_size_allocated == bb->len));
> >>> +		xe_assert(xe, (batch_size_allocated == xe_bb_tmp.len));
> >>> +		bb->len = xe_bb_tmp.len;
> >>>  		src_bo->bb_ccs[read_write] = bb;
> >>>  
> >>>  		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> >>> -		xe_sa_bo_sync_shadow(bb->bo);
> >>> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
> >>>  	}
> >>>  
> >>>  	return 0;
> >>> @@ -1297,10 +1303,10 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> >>>  void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
> >>>  				  enum xe_sriov_vf_ccs_rw_ctxs read_write)
> >>>  {
> >>> -	struct xe_bb *bb = src_bo->bb_ccs[read_write];
> >>> +	struct xe_drm_mm_bb *bb = src_bo->bb_ccs[read_write];
> >>>  	struct xe_device *xe = xe_bo_device(src_bo);
> >>> +	struct xe_drm_mm_manager *bb_pool;
> >>>  	struct xe_sriov_vf_ccs_ctx *ctx;
> >>> -	struct xe_sa_manager *bb_pool;
> >>>  	u32 *cs;
> >>>  
> >>>  	xe_assert(xe, IS_SRIOV_VF(xe));
> >>> @@ -1308,17 +1314,17 @@ void xe_migrate_ccs_rw_copy_clear(struct xe_bo *src_bo,
> >>>  	ctx = &xe->sriov.vf.ccs.contexts[read_write];
> >>>  	bb_pool = ctx->mem.ccs_bb_pool;
> >>>  
> >>> -	guard(mutex) (xe_sa_bo_swap_guard(bb_pool));
> >>> -	xe_sa_bo_swap_shadow(bb_pool);
> >>> -
> >>> -	cs = xe_sa_bo_cpu_addr(bb->bo);
> >>> -	memset(cs, MI_NOOP, bb->len * sizeof(u32));
> >>> -	xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> >>> +	scoped_guard(mutex, xe_drm_mm_bo_swap_guard(bb_pool)) {
> >>> +		xe_drm_mm_bo_swap_shadow(bb_pool);
> >>>  
> >>> -	xe_sa_bo_sync_shadow(bb->bo);
> >>> +		cs = bb_pool->cpu_addr + bb->node.start;
> >>> +		memset(cs, MI_NOOP, bb->len * sizeof(u32));
> >>> +		xe_sriov_vf_ccs_rw_update_bb_addr(ctx);
> >>>  
> >>> -	xe_bb_free(bb, NULL);
> >>> -	src_bo->bb_ccs[read_write] = NULL;
> >>> +		xe_drm_mm_sync_shadow(bb_pool, &bb->node);
> >>> +		xe_drm_mm_bb_free(bb);
> >>> +		src_bo->bb_ccs[read_write] = NULL;
> >>> +	}
> >>>  }
> >>>  
> >>>  /**
> >>> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> >>> index db023fb66a27..6fb4641c6f0f 100644
> >>> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> >>> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> >>> @@ -8,6 +8,7 @@
> >>>  #include "xe_bb.h"
> >>>  #include "xe_bo.h"
> >>>  #include "xe_device.h"
> >>> +#include "xe_drm_mm.h"
> >>>  #include "xe_exec_queue.h"
> >>>  #include "xe_exec_queue_types.h"
> >>>  #include "xe_gt_sriov_vf.h"
> >>> @@ -16,7 +17,6 @@
> >>>  #include "xe_lrc.h"
> >>>  #include "xe_migrate.h"
> >>>  #include "xe_pm.h"
> >>> -#include "xe_sa.h"
> >>>  #include "xe_sriov_printk.h"
> >>>  #include "xe_sriov_vf.h"
> >>>  #include "xe_sriov_vf_ccs.h"
> >>> @@ -141,8 +141,8 @@ static u64 get_ccs_bb_pool_size(struct xe_device *xe)
> >>>  
> >>>  static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
> >>>  {
> >>> +	struct xe_drm_mm_manager *drm_mm_manager;
> >>>  	struct xe_device *xe = tile_to_xe(tile);
> >>> -	struct xe_sa_manager *sa_manager;
> >>>  	u64 bb_pool_size;
> >>>  	int offset, err;
> >>>  
> >>> @@ -150,34 +150,33 @@ static int alloc_bb_pool(struct xe_tile *tile, struct xe_sriov_vf_ccs_ctx *ctx)
> >>>  	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
> >>>  		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
> >>>  
> >>> -	sa_manager = __xe_sa_bo_manager_init(tile, bb_pool_size, SZ_4K, SZ_16,
> >>> -					     XE_SA_BO_MANAGER_FLAG_SHADOW);
> >>> -
> >>> -	if (IS_ERR(sa_manager)) {
> >>> -		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
> >>> -			     sa_manager);
> >>> -		err = PTR_ERR(sa_manager);
> >>> +	drm_mm_manager = xe_drm_mm_manager_init(tile, bb_pool_size, SZ_4K,
> >>> +						XE_DRM_MM_BO_MANAGER_FLAG_SHADOW);
> >>> +	if (IS_ERR(drm_mm_manager)) {
> >>> +		xe_sriov_err(xe, "XE_DRM_MM init failed with error: %pe\n",
> >>> +			     drm_mm_manager);
> >>> +		err = PTR_ERR(drm_mm_manager);
> >>>  		return err;
> >>>  	}
> >>>  
> >>>  	offset = 0;
> >>> -	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
> >>> +	xe_map_memset(xe, &drm_mm_manager->bo->vmap, offset, MI_NOOP,
> >>>  		      bb_pool_size);
> >>> -	xe_map_memset(xe, &sa_manager->shadow->vmap, offset, MI_NOOP,
> >>> +	xe_map_memset(xe, &drm_mm_manager->shadow->vmap, offset, MI_NOOP,
> >>>  		      bb_pool_size);
> >>>  
> >>>  	offset = bb_pool_size - sizeof(u32);
> >>> -	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> >>> -	xe_map_wr(xe, &sa_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
> >>> +	xe_map_wr(xe, &drm_mm_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> >>> +	xe_map_wr(xe, &drm_mm_manager->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
> >> this seems to break the new XE MM component isolation, as you are directly
> >> touching the XE MM internals without XE MM being aware of this...
> >>
> >> are we sure that XE MM will not overwrite this last dword from the pool BO?
> >> maybe it should be exposed to the XE MM user as this 'trail guard' location?
> > 
> > For CCS save/restore, we submit this complete MM to GuC and whenever the VM is paused, GuC submits this MM to HW.
> > While allocating BBs, we always allocate size + 1
> 
> that sounds like a hack and relies on XEMM not zeroing the new BB
> but in v1 xe_drm_mm_insert_node it looks like the new node->size bytes are zeroed...
> 
> btw, it also unnecessarily wastes a lot of dwords (1 per each BB)
> 
> but this could be done in a way that XEMM has some control over it
> and we will not need to deep dive into XEMM internals
>  
> assuming that we will allocate XEMM as size + trail/guard:
> 
> 
>    <------------------- pool size -----> <-> trail size
>   +-------------------------------------+---+
>   |  ____    _______                    |   |
>   +-------------------------------------+---+
>      ^       ^                           ^
>      |       |                           |
>      bb      bb                          trail
> 
> XEMM may return vmap of the trail (or special trail node) where we
> could write our BB_END and then we can let XEMM flush it to the BOs
> 

All you should need to do is create a fixed-placement drm_mm node and
insert it with drm_mm_insert_node_in_range when xe_drm_mm_manager_init
(or whatever it is renamed to) is called, based on a trailing argument.
I think this is a good suggestion.
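
Roughly, as an untested sketch (assuming drm_mm_init() then covers the
full size rather than size - guard, and the manager grows a trail_node
member - these names are illustrative, not from the posted patch):

	/* pin the trail to [size - trail, size) so BBs can never claim it */
	ret = drm_mm_insert_node_in_range(&mgr->base, &mgr->trail_node,
					  trail, 0, 0,
					  size - trail, size,
					  DRM_MM_INSERT_HIGH);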

> and XEMM could allocate all BBs (without any extra +1) within real
> pool size without any risk that it will overwrite our data in trail
> 

Yes.

But I do think the caller should own the initialization values for the
BO, shadow, and tail, so that the allocator remains generic while the
upper layer defines how the contents are used. Of course, we could add
some helpers to extract things like drm_mm_manager->bo->vmap, etc.

Matt

> > so that even if the allocation happens at the end of the MM,
> > the MI_BATCH_BUFFER_END instruction is not overwritten.
> > 
> >>>  
> >>> -	ctx->mem.ccs_bb_pool = sa_manager;
> >>> +	ctx->mem.ccs_bb_pool = drm_mm_manager;
> >>>  
> >>>  	return 0;
> >>>  }
> >>>  
> >>>  static void ccs_rw_update_ring(struct xe_sriov_vf_ccs_ctx *ctx)
> >>>  {
> >>> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> >>> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> >>>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
> >>>  	u32 dw[10], i = 0;
> >>>  
> >>> @@ -388,7 +387,7 @@ int xe_sriov_vf_ccs_init(struct xe_device *xe)
> >>>  #define XE_SRIOV_VF_CCS_RW_BB_ADDR_OFFSET	(2 * sizeof(u32))
> >>>  void xe_sriov_vf_ccs_rw_update_bb_addr(struct xe_sriov_vf_ccs_ctx *ctx)
> >>>  {
> >>> -	u64 addr = xe_sa_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> >>> +	u64 addr = xe_drm_mm_manager_gpu_addr(ctx->mem.ccs_bb_pool);
> >>>  	struct xe_lrc *lrc = xe_exec_queue_lrc(ctx->mig_q);
> >>>  	struct xe_device *xe = gt_to_xe(ctx->mig_q->gt);
> >>>  
> >>> @@ -412,8 +411,8 @@ int xe_sriov_vf_ccs_attach_bo(struct xe_bo *bo)
> >>>  	struct xe_device *xe = xe_bo_device(bo);
> >>>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> >>>  	struct xe_sriov_vf_ccs_ctx *ctx;
> >>> +	struct xe_drm_mm_bb *bb;
> >>>  	struct xe_tile *tile;
> >>> -	struct xe_bb *bb;
> >>>  	int err = 0;
> >>>  
> >>>  	xe_assert(xe, IS_VF_CCS_READY(xe));
> >>> @@ -445,7 +444,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
> >>>  {
> >>>  	struct xe_device *xe = xe_bo_device(bo);
> >>>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> >>> -	struct xe_bb *bb;
> >>> +	struct xe_drm_mm_bb *bb;
> >>>  
> >>>  	xe_assert(xe, IS_VF_CCS_READY(xe));
> >>>  
> >>> @@ -471,8 +470,8 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
> >>>   */
> >>>  void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
> >>>  {
> >>> -	struct xe_sa_manager *bb_pool;
> >>>  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> >>> +	struct xe_drm_mm_manager *bb_pool;
> >>>  
> >>>  	if (!IS_VF_CCS_READY(xe))
> >>>  		return;
> >>> @@ -485,7 +484,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
> >>>  
> >>>  		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
> >>>  		drm_printf(p, "-------------------------\n");
> >>> -		drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
> >>> +		drm_mm_print(&bb_pool->base, p);
> >>>  		drm_puts(p, "\n");
> >>>  	}
> >>>  }
> >>> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> >>> index 22c499943d2a..f2af074578c9 100644
> >>> --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> >>> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> >>> @@ -33,7 +33,7 @@ struct xe_sriov_vf_ccs_ctx {
> >>>  	/** @mem: memory data */
> >>>  	struct {
> >>>  		/** @mem.ccs_bb_pool: Pool from which batch buffers are allocated. */
> >>> -		struct xe_sa_manager *ccs_bb_pool;
> >>> +		struct xe_drm_mm_manager *ccs_bb_pool;
> >>>  	} mem;
> >>>  };
> >>>  
> 


* Re: [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support
  2026-03-27 10:54     ` Michal Wajdeczko
  2026-03-27 11:06       ` Thomas Hellström
@ 2026-03-27 21:26       ` Matthew Brost
  1 sibling, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2026-03-27 21:26 UTC (permalink / raw)
  To: Michal Wajdeczko
  Cc: Thomas Hellström, Satyanarayana K V P, intel-xe,
	Maarten Lankhorst

On Fri, Mar 27, 2026 at 11:54:15AM +0100, Michal Wajdeczko wrote:
> 
> 
> On 3/26/2026 8:57 PM, Thomas Hellström wrote:
> > On Fri, 2026-03-20 at 12:12 +0000, Satyanarayana K V P wrote:
> >> Add a xe_drm_mm manager to allocate sub-ranges from a BO-backed pool
> >> using drm_mm.
> > 
> > Just a comment on the naming. xe_drm_mm sounds like this is yet another
> > specialized range manager implementation.
> 
> well, in fact it looks like a *very* specialized MM, not much reusable elsewhere
> 
> > 
> > Could we invent a better name? 
> > 
> > xe_mm_suballoc? Something even better?
> 
> xe_shadow_pool ?
> 
> or if we split this MM into "plain pool" and "shadow pool":
> 
> xe_pool		--> like xe_sa but works without fences (can be reused in xe_guc_buf)
> xe_shadow_pool	--> built on top of plain, with shadow logic
> 
> 
> more comments below
> 
> > 
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> >>
> >> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> >> Cc: Matthew Brost <matthew.brost@intel.com>
> >> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >> Cc: Maarten Lankhorst <dev@lankhorst.se>
> >> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> >> ---
> >>  drivers/gpu/drm/xe/Makefile          |   1 +
> >>  drivers/gpu/drm/xe/xe_drm_mm.c       | 200 +++++++++++++++++++++
> >>  drivers/gpu/drm/xe/xe_drm_mm.h       |  55 ++++++++
> >>  drivers/gpu/drm/xe/xe_drm_mm_types.h |  42 ++++++
> >>  4 files changed, 298 insertions(+)
> >>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.c
> >>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm.h
> >>  create mode 100644 drivers/gpu/drm/xe/xe_drm_mm_types.h
> >>
> >> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> >> index dab979287a96..6ab4e2392df1 100644
> >> --- a/drivers/gpu/drm/xe/Makefile
> >> +++ b/drivers/gpu/drm/xe/Makefile
> >> @@ -41,6 +41,7 @@ xe-y += xe_bb.o \
> >>  	xe_device_sysfs.o \
> >>  	xe_dma_buf.o \
> >>  	xe_drm_client.o \
> >> +	xe_drm_mm.o \
> >>  	xe_drm_ras.o \
> >>  	xe_eu_stall.o \
> >>  	xe_exec.o \
> >> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.c b/drivers/gpu/drm/xe/xe_drm_mm.c
> >> new file mode 100644
> >> index 000000000000..c5b1766fa75a
> >> --- /dev/null
> >> +++ b/drivers/gpu/drm/xe/xe_drm_mm.c
> >> @@ -0,0 +1,200 @@
> >> +// SPDX-License-Identifier: MIT
> >> +/*
> >> + * Copyright © 2026 Intel Corporation
> >> + */
> >> +
> >> +#include <drm/drm_managed.h>
> 
> nit: <linux> headers go first
> 
> >> +#include <linux/kernel.h>
> >> +
> >> +#include "xe_device_types.h"
> >> +#include "xe_drm_mm_types.h"
> >> +#include "xe_drm_mm.h"
> >> +#include "xe_map.h"
> >> +
> >> +static void xe_drm_mm_manager_fini(struct drm_device *drm, void *arg)
> >> +{
> >> +	struct xe_drm_mm_manager *drm_mm_manager = arg;
> >> +	struct xe_bo *bo = drm_mm_manager->bo;
> >> +
> >> +	if (!bo) {
> 
> not needed, we shouldn't be here if we failed to allocate a bo
> 

Agree this likely isn't needed.

> >> +		drm_err(drm, "no bo for drm mm manager\n");
> 
> btw, our MM seems to be 'tile' oriented, so we should use xe_tile_err() 
> 

I'd rather leave this as an independent component which we can reuse
anywhere in the future.

> >> +		return;
> >> +	}
> >> +
> >> +	drm_mm_takedown(&drm_mm_manager->base);
> >> +
> >> +	if (drm_mm_manager->is_iomem)
> >> +		kvfree(drm_mm_manager->cpu_addr);
> >> +
> >> +	drm_mm_manager->bo = NULL;
> >> +	drm_mm_manager->shadow = NULL;
> >> +}
> >> +
> >> +/**
> >> + * xe_drm_mm_manager_init() - Create and initialize the DRM MM manager.
> >> + * @tile: the &xe_tile where to allocate.
> >> + * @size: number of bytes to allocate
> >> + * @guard: number of bytes to exclude from allocation for the guard region
> 
> do we really need this guard ? it was already questionable on the xe_sa
> 

Isn’t this what you were suggesting in patch #3 with a tail region? It
looks like this is implementing your suggestion to avoid having each BB
allocate an extra DW — we’d just set this to 1 for the current use case.
It’s simpler than my suggestion to allocate a drm_mm node to reserve the
“tail” / “guard”.

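As a caller-side sketch of that one-dword guard (untested; mgr is an
illustrative name, and drm_mm_init() only covers [0, size - guard), so
no node can ever land on the terminator dword):

	mgr = xe_drm_mm_manager_init(tile, bb_pool_size, sizeof(u32),
				     XE_DRM_MM_BO_MANAGER_FLAG_SHADOW);
	...
	offset = bb_pool_size - sizeof(u32);
	xe_map_wr(xe, &mgr->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
	xe_map_wr(xe, &mgr->shadow->vmap, offset, u32, MI_BATCH_BUFFER_END);
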
> >> + * @flags: additional flags for configuring the DRM MM manager.
> >> + *
> >> + * Initializes a DRM MM manager for managing memory allocations on a specific
> >> + * XE tile. The function allocates a buffer object to back the memory region
> >> + * managed by the DRM MM manager.
> >> + *
> >> + * Return: a pointer to the &xe_drm_mm_manager, or an error pointer on failure.
> >> + */
> >> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile *tile, u32 size,
> >> +						 u32 guard, u32 flags)
> >> +{
> >> +	struct xe_device *xe = tile_to_xe(tile);
> >> +	struct xe_drm_mm_manager *drm_mm_manager;
> >> +	u64 managed_size;
> >> +	struct xe_bo *bo;
> >> +	int ret;
> >> +
> >> +	xe_tile_assert(tile, size > guard);
> >> +	managed_size = size - guard;
> >> +
> >> +	drm_mm_manager = drmm_kzalloc(&xe->drm, sizeof(*drm_mm_manager), GFP_KERNEL);
> >> +	if (!drm_mm_manager)
> >> +		return ERR_PTR(-ENOMEM);
> 
> can't we make this manager a member of the tile and then use
> container_of to get parent tile pointer?
> 
> I guess we will have exactly one this MM per tile, no?
> 

Again, see above - I strongly prefer that this be an independent
component (malloc'd object) rather than an embedded object, for future
reuse.

> >> +
> >> +	bo = xe_managed_bo_create_pin_map(xe, tile, size,
> >> +					 
> >> XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> >> +					  XE_BO_FLAG_GGTT |
> >> +					  XE_BO_FLAG_GGTT_INVALIDATE
> >> |
> >> +					 
> >> XE_BO_FLAG_PINNED_NORESTORE);
> >> +	if (IS_ERR(bo)) {
> >> +		drm_err(&xe->drm, "Failed to prepare %uKiB BO for
> 
> xe_tile_err(tile, ...
> 
> but maybe nicer solution would be to add such error to the
> xe_managed_bo_create_pin_map() to avoid duplicating this diag
> messages in all callers
> 
> >> DRM MM manager (%pe)\n",
> >> +			size / SZ_1K, bo);
> >> +		return ERR_CAST(bo);
> >> +	}
> >> +	drm_mm_manager->bo = bo;
> >> +	drm_mm_manager->is_iomem = bo->vmap.is_iomem;
> 
> do we need to cache this?
> 
> >> +
> >> +	if (bo->vmap.is_iomem) {
> >> +		drm_mm_manager->cpu_addr = kvzalloc(managed_size,
> >> GFP_KERNEL);
> >> +		if (!drm_mm_manager->cpu_addr)
> >> +			return ERR_PTR(-ENOMEM);
> >> +	} else {
> >> +		drm_mm_manager->cpu_addr = bo->vmap.vaddr;
> >> +		memset(drm_mm_manager->cpu_addr, 0, bo-
> >>> ttm.base.size);
> 
> btw, maybe we should consider adding XE_BO_FLAG_ZERO and let
> the xe_create_bo do initial clear for us?
> 

The memset isn't needed per my comments here [1].

[1] https://patchwork.freedesktop.org/patch/713119/?series=163588&rev=1#comment_1315262

> @Matt @Thomas ?
> 
> >> +	}
> >> +
> >> +	if (flags & XE_DRM_MM_BO_MANAGER_FLAG_SHADOW) {
> 
> hmm, so this is not a main feature of this MM?
> then maybe we should have two components:
> 
> 	* xe_pool (plain MM, like xe_sa but without fences)
> 	* xe_shadow (adds shadow BO on top of plain MM)
> 
> >> +		struct xe_bo *shadow;
> >> +
> >> +		ret = drmm_mutex_init(&xe->drm, &drm_mm_manager-
> >>> swap_guard);
> >> +		if (ret)
> >> +			return ERR_PTR(ret);
> >> +		if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
> >> +			fs_reclaim_acquire(GFP_KERNEL);
> >> +			might_lock(&drm_mm_manager->swap_guard);
> >> +			fs_reclaim_release(GFP_KERNEL);
> >> +		}
> >> +
> >> +		shadow = xe_managed_bo_create_pin_map(xe, tile,
> >> size,
> >> +						     
> >> XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> >> +						     
> >> XE_BO_FLAG_GGTT |
> >> +						     
> >> XE_BO_FLAG_GGTT_INVALIDATE |
> >> +						     
> >> XE_BO_FLAG_PINNED_NORESTORE);
> >> +		if (IS_ERR(shadow)) {
> >> +			drm_err(&xe->drm,
> >> +				"Failed to prepare %uKiB shadow BO
> >> for DRM MM manager (%pe)\n",
> >> +				size / SZ_1K, shadow);
> >> +			return ERR_CAST(shadow);
> >> +		}
> >> +		drm_mm_manager->shadow = shadow;
> >> +	}
> >> +
> >> +	drm_mm_init(&drm_mm_manager->base, 0, managed_size);
> >> +	ret = drmm_add_action_or_reset(&xe->drm,
> >> xe_drm_mm_manager_fini, drm_mm_manager);
> >> +	if (ret)
> >> +		return ERR_PTR(ret);
> >> +
> >> +	return drm_mm_manager;
> >> +}
> >> +
> >> +/**
> >> + * xe_drm_mm_bo_swap_shadow() - Swap the primary BO with the shadow
> >> BO.
> 
> do we need _bo_ in the function name here?
> 
> >> + * @drm_mm_manager: the DRM MM manager containing the primary and
> >> shadow BOs.
> >> + *
> >> + * Swaps the primary buffer object with the shadow buffer object in
> >> the DRM MM
> >> + * manager.
> 
> say a word about required swap_guard mutex
> 
> and/or add the _locked suffix to the function name
> 
> >> + *
> >> + * Return: None.
> >> + */
> >> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager
> >> *drm_mm_manager)
> >> +{
> >> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> >> +
> >> +	xe_assert(xe, drm_mm_manager->shadow);
> 
> use xe_tile_assert
> 
> >> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> >> +
> >> +	swap(drm_mm_manager->bo, drm_mm_manager->shadow);
> >> +	if (!drm_mm_manager->bo->vmap.is_iomem)
> >> +		drm_mm_manager->cpu_addr = drm_mm_manager->bo-
> >>> vmap.vaddr;
> >> +}
> >> +
> >> +/**
> >> + * xe_drm_mm_sync_shadow() - Synchronize the shadow BO with the
> >> primary BO.
> >> + * @drm_mm_manager: the DRM MM manager containing the primary and
> >> shadow BOs.
> >> + * @node: the DRM MM node representing the region to synchronize.
> >> + *
> >> + * Copies the contents of the specified region from the primary
> >> buffer object to
> >> + * the shadow buffer object in the DRM MM manager.
> >> + *
> >> + * Return: None.
> >> + */
> >> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> >> +			   struct drm_mm_node *node)
> >> +{
> >> +	struct xe_device *xe = tile_to_xe(drm_mm_manager->bo->tile);
> >> +
> >> +	xe_assert(xe, drm_mm_manager->shadow);
> >> +	lockdep_assert_held(&drm_mm_manager->swap_guard);
> >> +
> >> +	xe_map_memcpy_to(xe, &drm_mm_manager->shadow->vmap,
> >> +			 node->start,
> >> +			 drm_mm_manager->cpu_addr + node->start,
> >> +			 node->size);
> 
> maybe I'm missing something, but if primary BO.is_iomem==true then
> who/when updates the actual primary BO memory? or is it unused by
> design and only shadow has the data ...
> 
> maybe some DOC section with theory-of-operation will help here?
> 

Yes, I think this is an issue; this won't work as is...

I think we need an xe_drm_mm_flush call...

e.g.

xe_drm_mm_flush();
xe_sriov_vf_ccs_rw_update_bb_addr();
xe_sriov_vf_ccs_rw_update_bb_addr(); 
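To make the gap concrete: in the is_iomem case the CPU-side command writes land only in the kvzalloc'd cpu_addr staging buffer, so some explicit flush has to publish them to the BO before the GPU sees them. A rough userspace model of that staging-plus-flush idea (the names mm_emit/mm_flush are hypothetical, and plain memcpy stands in for xe_map_memcpy_to on bo->vmap):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Userspace model: "device" memory is reachable only via a copy helper,
 * so CPU-side command writes go to a staging buffer first. */
struct mm_model {
	unsigned char *device;  /* stands in for the iomem-backed BO */
	unsigned char *staging; /* stands in for drm_mm_manager->cpu_addr */
	size_t size;
};

static int mm_model_init(struct mm_model *m, size_t size)
{
	m->device = calloc(1, size);
	m->staging = calloc(1, size);
	m->size = size;
	return (m->device && m->staging) ? 0 : -1;
}

/* Emit commands CPU-side: touches only the staging copy. */
static void mm_emit(struct mm_model *m, size_t off, const void *src,
		    size_t len)
{
	memcpy(m->staging + off, src, len);
}

/* The proposed flush: publish one range to device memory.  In the
 * driver this would be an xe_map_memcpy_to() on bo->vmap. */
static void mm_flush(struct mm_model *m, size_t off, size_t len)
{
	memcpy(m->device + off, m->staging + off, len);
}

static void mm_model_fini(struct mm_model *m)
{
	free(m->device);
	free(m->staging);
}
```

Until mm_flush() runs for a range, the device-side copy still holds stale data, which is exactly what happens to the primary BO in the current patch when is_iomem is true.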

> >> +}
> >> +
> >> +/**
> >> + * xe_drm_mm_insert_node() - Insert a node into the DRM MM manager.
> >> + * @drm_mm_manager: the DRM MM manager to insert the node into.
> >> + * @node: the DRM MM node to insert.
> 
> in recent changes to xe_ggtt we finally hidden the implementation details
> of the MM used by the xe_ggtt
> 

We should probably privatize struct xe_drm_mm_manager as well. For
example, we could move the struct xe_drm_mm_manager definition into
xe_drm_mm.c rather than xe_drm_mm_types.h, given my suggestion to make
this a component rather than an embedded struct.
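To sketch that direction (a hypothetical pool type, mirroring how xe_ggtt now hides its MM): the header forward-declares the struct and exposes only constructors and accessors, while the full definition lives in the .c file. Compressed into one file for illustration:

```c
#include <assert.h>
#include <stdlib.h>

/* --- what would live in the header: only a forward declaration --- */
struct pool;				/* opaque to callers */
struct pool *pool_create(unsigned int size);
unsigned int pool_size(const struct pool *p);
void pool_destroy(struct pool *p);

/* --- what would live in the .c file: the actual definition --- */
struct pool {
	unsigned int size;
	/* drm_mm, BO pointers, swap_guard, etc. would hide here */
};

struct pool *pool_create(unsigned int size)
{
	struct pool *p = calloc(1, sizeof(*p));

	if (p)
		p->size = size;
	return p;
}

unsigned int pool_size(const struct pool *p)
{
	return p->size;
}

void pool_destroy(struct pool *p)
{
	free(p);
}
```

Callers then can't poke at members directly, so helpers like xe_drm_mm_manager_gpu_addr() would move out of the header and stop being static inline.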

> why here we start again exposing impl detail as part of the API?
> if we can't allocate xe_drm_mm_node here, maybe at least take it
> as a parameter and update in place
> 
> >> + * @size: the size of the node to insert.
> >> + *
> >> + * Inserts a node into the DRM MM manager and clears the
> >> corresponding memory region
> >> + * in both the primary and shadow buffer objects.
> >> + *
> >> + * Return: 0 on success, or a negative error code on failure.
> >> + */
> >> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> >> +			  struct drm_mm_node *node, u32 size)
> >> +{
> >> +	struct drm_mm *mm = &drm_mm_manager->base;
> >> +	int ret;
> >> +
> >> +	ret = drm_mm_insert_node(mm, node, size);
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	memset((void *)drm_mm_manager->bo->vmap.vaddr + node->start,
> >> 0, node->size);
> 
> iosys_map_memset(bo->vmap, start, 0, size) ?
> 
> >> +	if (drm_mm_manager->shadow)
> >> +		memset((void *)drm_mm_manager->shadow->vmap.vaddr +
> >> node->start, 0,
> >> +		       node->size);
> 
> what about clearing the drm_mm_manager->cpu_addr ?
> 

None of the memset calls are needed [1].

[1] https://patchwork.freedesktop.org/patch/713119/?series=163588&rev=1#comment_1315262

> >> +	return 0;
> >> +}
> >> +
> >> +/**
> >> + * xe_drm_mm_remove_node() - Remove a node from the DRM MM manager.
> >> + * @node: the DRM MM node to remove.
> >> + *
> >> + * Return: None.
> >> + */
> >> +void xe_drm_mm_remove_node(struct drm_mm_node *node)
> >> +{
> >> +	return drm_mm_remove_node(node);
> >> +}
> >> diff --git a/drivers/gpu/drm/xe/xe_drm_mm.h
> >> b/drivers/gpu/drm/xe/xe_drm_mm.h
> >> new file mode 100644
> >> index 000000000000..aeb7cab92d0b
> >> --- /dev/null
> >> +++ b/drivers/gpu/drm/xe/xe_drm_mm.h
> >> @@ -0,0 +1,55 @@
> >> +/* SPDX-License-Identifier: MIT */
> >> +/*
> >> + * Copyright © 2026 Intel Corporation
> >> + */
> >> +#ifndef _XE_DRM_MM_H_
> >> +#define _XE_DRM_MM_H_
> >> +
> >> +#include <linux/sizes.h>
> >> +#include <linux/types.h>
> >> +
> >> +#include "xe_bo.h"
> >> +#include "xe_drm_mm_types.h"
> >> +
> >> +struct dma_fence;
> 
> do we need this?
>

Nope.

Matt

> >> +struct xe_tile;
> >> +
> >> +#define XE_DRM_MM_BO_MANAGER_FLAG_SHADOW    BIT(0)
> >> +
> >> +struct xe_drm_mm_manager *xe_drm_mm_manager_init(struct xe_tile
> >> *tile, u32 size,
> >> +						 u32 guard, u32
> >> flags);
> >> +void xe_drm_mm_bo_swap_shadow(struct xe_drm_mm_manager
> >> *drm_mm_manager);
> >> +void xe_drm_mm_sync_shadow(struct xe_drm_mm_manager *drm_mm_manager,
> >> +			   struct drm_mm_node *node);
> >> +int xe_drm_mm_insert_node(struct xe_drm_mm_manager *drm_mm_manager,
> >> +			  struct drm_mm_node *node, u32 size);
> >> +void xe_drm_mm_remove_node(struct drm_mm_node *node);
> >> +
> >> +/**
> >> + * xe_drm_mm_manager_gpu_addr() - Retrieve GPU address of a back
> >> storage BO
> >> + * within a memory manager.
> >> + * @drm_mm_manager: The DRM MM memory manager.
> >> + *
> >> + * Returns: GGTT address of the back storage BO
> >> + */
> >> +static inline u64 xe_drm_mm_manager_gpu_addr(struct
> >> xe_drm_mm_manager
> >> +					     *drm_mm_manager)
> >> +{
> >> +	return xe_bo_ggtt_addr(drm_mm_manager->bo);
> >> +}
> >> +
> >> +/**
> >> + * xe_drm_mm_bo_swap_guard() - Retrieve the mutex used to guard swap
> >> operations
> 
> hmm, do we need the _bo_ here?
> 
> >> + * on a memory manager.
> >> + * @drm_mm_manager: The DRM MM memory manager.
> >> + *
> >> + * Returns: Swap guard mutex.
> >> + */
> >> +static inline struct mutex *xe_drm_mm_bo_swap_guard(struct
> >> xe_drm_mm_manager
> >> +						    *drm_mm_manager)
> >> +{
> >> +	return &drm_mm_manager->swap_guard;
> >> +}
> >> +
> >> +#endif
> >> +
> >> diff --git a/drivers/gpu/drm/xe/xe_drm_mm_types.h
> >> b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> >> new file mode 100644
> >> index 000000000000..69e0937dd8de
> >> --- /dev/null
> >> +++ b/drivers/gpu/drm/xe/xe_drm_mm_types.h
> >> @@ -0,0 +1,42 @@
> >> +/* SPDX-License-Identifier: MIT */
> >> +/*
> >> + * Copyright © 2026 Intel Corporation
> >> + */
> >> +
> >> +#ifndef _XE_DRM_MM_TYPES_H_
> >> +#define _XE_DRM_MM_TYPES_H_
> >> +
> >> +#include <drm/drm_mm.h>
> >> +
> >> +struct xe_bo;
> >> +
> 
> without kernel-doc for the struct itself, below kernel-docs for the
> members are currently not recognized by the tool
> 
> >> +struct xe_drm_mm_manager {
> >> +	/** @base: Range allocator over [0, @size) in bytes */
> >> +	struct drm_mm base;
> >> +	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
> >> +	struct xe_bo *bo;
> >> +	/** @shadow: Shadow BO for atomic command updates. */
> >> +	struct xe_bo *shadow;
> >> +	/** @swap_guard: Timeline guard updating @bo and @shadow */
> >> +	struct mutex swap_guard;
> >> +	/** @cpu_addr: CPU virtual address of the active BO. */
> >> +	void *cpu_addr;
> >> +	/** @size: Total size of the managed address space. */
> >> +	u64 size;
> >> +	/** @is_iomem: Whether the managed address space is I/O
> >> memory. */
> >> +	bool is_iomem;
> >> +};
> >> +
> 
> ditto
> 
> >> +struct xe_drm_mm_bb {
> >> +	/** @node: Range node for this batch buffer. */
> >> +	struct drm_mm_node node;
> >> +	/** @manager: Manager this batch buffer belongs to. */
> >> +	struct xe_drm_mm_manager *manager;
> >> +	/** @cs: Command stream for this batch buffer. */
> >> +	u32 *cs;
> >> +	/** @len: Length of the CS in dwords. */
> >> +	u32 len;
> >> +};
> 
> but we are not using this struct yet in this patch, correct?
> 
> >> +
> >> +#endif
> >> +
> 


end of thread, other threads:[~2026-03-27 21:26 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-03-20 12:12 [PATCH 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
2026-03-20 12:12 ` [PATCH 1/3] drm/xe/mm: add XE DRM MM manager with shadow support Satyanarayana K V P
2026-03-26 19:48   ` Matthew Brost
2026-03-26 19:57   ` Thomas Hellström
2026-03-27 10:54     ` Michal Wajdeczko
2026-03-27 11:06       ` Thomas Hellström
2026-03-27 19:54         ` Matthew Brost
2026-03-27 21:26       ` Matthew Brost
2026-03-20 12:12 ` [PATCH 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_drm_mm manager Satyanarayana K V P
2026-03-26 19:50   ` Matthew Brost
2026-03-20 12:12 ` [PATCH 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
2026-03-26 19:52   ` Matthew Brost
2026-03-27 11:07   ` Michal Wajdeczko
2026-03-27 11:17     ` K V P, Satyanarayana
2026-03-27 11:47       ` Michal Wajdeczko
2026-03-27 20:07         ` Matthew Brost
2026-03-20 12:17 ` ✗ CI.checkpatch: warning for USE drm mm instead of drm SA " Patchwork
2026-03-20 12:19 ` ✓ CI.KUnit: success " Patchwork
2026-03-20 13:08 ` ✓ Xe.CI.BAT: " Patchwork
2026-03-21 11:52 ` ✗ Xe.CI.FULL: failure " Patchwork
