* [PATCH v2 01/17] drm/xe/svm: Fix a debug printout
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-12 4:29 ` Ghimiray, Himal Prasad
2025-11-11 16:43 ` [PATCH v2 02/17] drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap Thomas Hellström
` (20 subsequent siblings)
21 siblings, 1 reply; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, Himal Prasad Ghimiray,
stable, dri-devel, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
Avoid spamming the log with drm_info(). Use drm_dbg() instead.
Fixes: cc795e041034 ("drm/xe/svm: Make xe_svm_range_needs_migrate_to_vram() public")
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: <stable@vger.kernel.org> # v6.17+
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 55c5a0eb82e1..894e8f092e3f 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -941,7 +941,7 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
xe_assert(vm->xe, IS_DGFX(vm->xe));
if (xe_svm_range_in_vram(range)) {
- drm_info(&vm->xe->drm, "Range is already in VRAM\n");
+ drm_dbg(&vm->xe->drm, "Range is already in VRAM\n");
return false;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH v2 01/17] drm/xe/svm: Fix a debug printout
2025-11-11 16:43 ` [PATCH v2 01/17] drm/xe/svm: Fix a debug printout Thomas Hellström
@ 2025-11-12 4:29 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 33+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-11-12 4:29 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: Matthew Brost, stable, dri-devel, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
On 11-11-2025 22:13, Thomas Hellström wrote:
> Avoid spamming the log with drm_info(). Use drm_dbg() instead.
>
> Fixes: cc795e041034 ("drm/xe/svm: Make xe_svm_range_needs_migrate_to_vram() public")
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: <stable@vger.kernel.org> # v6.17+
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 55c5a0eb82e1..894e8f092e3f 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -941,7 +941,7 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> xe_assert(vm->xe, IS_DGFX(vm->xe));
>
> if (xe_svm_range_in_vram(range)) {
> - drm_info(&vm->xe->drm, "Range is already in VRAM\n");
> + drm_dbg(&vm->xe->drm, "Range is already in VRAM\n");
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> return false;
> }
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 02/17] drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
2025-11-11 16:43 ` [PATCH v2 01/17] drm/xe/svm: Fix a debug printout Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-12 6:07 ` Ghimiray, Himal Prasad
2025-11-11 16:43 ` [PATCH v2 03/17] drm/pagemap: Add a refcounted drm_pagemap backpointer to struct drm_pagemap_zdd Thomas Hellström
` (19 subsequent siblings)
21 siblings, 1 reply; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
With the end goal of being able to free unused pagemaps
and allocate them on demand, add a refcount to struct drm_pagemap
and replace xe's embedded drm_pagemap with a pointer to one that
is explicitly allocated and freed.
v2:
- Make the drm_pagemap pointer in drm_gpusvm_pages reference-counted.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v1
---
drivers/gpu/drm/drm_gpusvm.c | 4 ++-
drivers/gpu/drm/drm_pagemap.c | 51 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_svm.c | 26 ++++++++++-----
drivers/gpu/drm/xe/xe_vram_types.h | 2 +-
include/drm/drm_pagemap.h | 36 +++++++++++++++++++++
5 files changed, 109 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 73e550c8ff8c..1f96375d1f2b 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1038,6 +1038,7 @@ static void __drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
flags.has_dma_mapping = false;
WRITE_ONCE(svm_pages->flags.__flags, flags.__flags);
+ drm_pagemap_put(svm_pages->dpagemap);
svm_pages->dpagemap = NULL;
}
}
@@ -1431,7 +1432,8 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
if (pagemap) {
flags.has_devmem_pages = true;
- svm_pages->dpagemap = dpagemap;
+ drm_pagemap_put(svm_pages->dpagemap);
+ svm_pages->dpagemap = drm_pagemap_get(dpagemap);
}
/* WRITE_ONCE pairs with READ_ONCE for opportunistic checks */
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 22c44807e3fe..4b8692f0b2a2 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -538,6 +538,57 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
return -ENOMEM;
}
+static void drm_pagemap_release(struct kref *ref)
+{
+ struct drm_pagemap *dpagemap = container_of(ref, typeof(*dpagemap), ref);
+
+ kfree(dpagemap);
+}
+
+/**
+ * drm_pagemap_create() - Create a struct drm_pagemap.
+ * @dev: Pointer to a struct device providing the device-private memory.
+ * @pagemap: Pointer to a pre-setup struct dev_pagemap providing the struct pages.
+ * @ops: Pointer to the struct drm_pagemap_ops.
+ *
+ * Allocate and initialize a struct drm_pagemap.
+ *
+ * Return: A refcounted pointer to a struct drm_pagemap on success.
+ * Error pointer on error.
+ */
+struct drm_pagemap *
+drm_pagemap_create(struct device *dev,
+ struct dev_pagemap *pagemap,
+ const struct drm_pagemap_ops *ops)
+{
+ struct drm_pagemap *dpagemap = kzalloc(sizeof(*dpagemap), GFP_KERNEL);
+
+ if (!dpagemap)
+ return ERR_PTR(-ENOMEM);
+
+ kref_init(&dpagemap->ref);
+ dpagemap->dev = dev;
+ dpagemap->ops = ops;
+ dpagemap->pagemap = pagemap;
+
+ return dpagemap;
+}
+EXPORT_SYMBOL(drm_pagemap_create);
+
+/**
+ * drm_pagemap_put() - Put a struct drm_pagemap reference
+ * @dpagemap: Pointer to a struct drm_pagemap object.
+ *
+ * Puts a struct drm_pagemap reference and frees the drm_pagemap object
+ * if the refcount reaches zero.
+ */
+void drm_pagemap_put(struct drm_pagemap *dpagemap)
+{
+ if (likely(dpagemap))
+ kref_put(&dpagemap->ref, drm_pagemap_release);
+}
+EXPORT_SYMBOL(drm_pagemap_put);
+
/**
* drm_pagemap_evict_to_ram() - Evict GPU SVM range to RAM
* @devmem_allocation: Pointer to the device memory allocation
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 894e8f092e3f..a3f97cf9c254 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -860,7 +860,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
struct mm_struct *mm,
unsigned long timeslice_ms)
{
- struct xe_vram_region *vr = container_of(dpagemap, typeof(*vr), dpagemap);
+ struct xe_vram_region *vr = container_of(dpagemap->pagemap, typeof(*vr), pagemap);
struct xe_device *xe = vr->xe;
struct device *dev = xe->drm.dev;
struct drm_buddy_block *block;
@@ -1371,7 +1371,7 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
{
- return &tile->mem.vram->dpagemap;
+ return tile->mem.vram->dpagemap;
}
/**
@@ -1481,6 +1481,15 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
return ret;
}
+ vr->dpagemap = drm_pagemap_create(dev, &vr->pagemap,
+ &xe_drm_pagemap_ops);
+ if (IS_ERR(vr->dpagemap)) {
+ drm_err(&xe->drm, "Failed to create drm_pagemap tile %d memory: %pe\n",
+ tile->id, vr->dpagemap);
+ ret = PTR_ERR(vr->dpagemap);
+ goto out_no_dpagemap;
+ }
+
vr->pagemap.type = MEMORY_DEVICE_PRIVATE;
vr->pagemap.range.start = res->start;
vr->pagemap.range.end = res->end;
@@ -1488,22 +1497,23 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
vr->pagemap.ops = drm_pagemap_pagemap_ops_get();
vr->pagemap.owner = xe_svm_devm_owner(xe);
addr = devm_memremap_pages(dev, &vr->pagemap);
-
- vr->dpagemap.dev = dev;
- vr->dpagemap.ops = &xe_drm_pagemap_ops;
-
if (IS_ERR(addr)) {
- devm_release_mem_region(dev, res->start, resource_size(res));
ret = PTR_ERR(addr);
drm_err(&xe->drm, "Failed to remap tile %d memory, errno %pe\n",
tile->id, ERR_PTR(ret));
- return ret;
+ goto out_failed_memremap;
}
vr->hpa_base = res->start;
drm_dbg(&xe->drm, "Added tile %d memory [%llx-%llx] to devm, remapped to %pr\n",
tile->id, vr->io_start, vr->io_start + vr->usable_size, res);
return 0;
+
+out_failed_memremap:
+ drm_pagemap_put(vr->dpagemap);
+out_no_dpagemap:
+ devm_release_mem_region(dev, res->start, resource_size(res));
+ return ret;
}
#else
int xe_svm_alloc_vram(struct xe_tile *tile,
diff --git a/drivers/gpu/drm/xe/xe_vram_types.h b/drivers/gpu/drm/xe/xe_vram_types.h
index 83772dcbf1af..c0d2c5ee8c10 100644
--- a/drivers/gpu/drm/xe/xe_vram_types.h
+++ b/drivers/gpu/drm/xe/xe_vram_types.h
@@ -72,7 +72,7 @@ struct xe_vram_region {
* @dpagemap: The struct drm_pagemap of the ZONE_DEVICE memory
* pages of this tile.
*/
- struct drm_pagemap dpagemap;
+ struct drm_pagemap *dpagemap;
/**
* @hpa_base: base host physical address
*
diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
index f6e7e234c089..2c7de928865b 100644
--- a/include/drm/drm_pagemap.h
+++ b/include/drm/drm_pagemap.h
@@ -129,11 +129,15 @@ struct drm_pagemap_ops {
* struct drm_pagemap: Additional information for a struct dev_pagemap
* used for device p2p handshaking.
* @ops: The struct drm_pagemap_ops.
+ * @ref: Reference count.
* @dev: The struct device owning the device-private memory.
+ * @pagemap: Pointer to the underlying dev_pagemap.
*/
struct drm_pagemap {
const struct drm_pagemap_ops *ops;
+ struct kref ref;
struct device *dev;
+ struct dev_pagemap *pagemap;
};
struct drm_pagemap_devmem;
@@ -202,6 +206,37 @@ struct drm_pagemap_devmem_ops {
unsigned long npages);
};
+struct drm_pagemap *drm_pagemap_create(struct device *dev,
+ struct dev_pagemap *pagemap,
+ const struct drm_pagemap_ops *ops);
+
+#if IS_ENABLED(CONFIG_DRM_GPUSVM)
+
+void drm_pagemap_put(struct drm_pagemap *dpagemap);
+
+#else
+
+static inline void drm_pagemap_put(struct drm_pagemap *dpagemap)
+{
+}
+
+#endif /* IS_ENABLED(CONFIG_DRM_GPUSVM) */
+
+/**
+ * drm_pagemap_get() - Obtain a reference on a struct drm_pagemap
+ * @dpagemap: Pointer to the struct drm_pagemap.
+ *
+ * Return: Pointer to the struct drm_pagemap.
+ */
+static inline struct drm_pagemap *
+drm_pagemap_get(struct drm_pagemap *dpagemap)
+{
+ if (likely(dpagemap))
+ kref_get(&dpagemap->ref);
+
+ return dpagemap;
+}
+
/**
* struct drm_pagemap_devmem - Structure representing a GPU SVM device memory allocation
*
@@ -246,3 +281,4 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
unsigned long timeslice_ms);
#endif
+
--
2.51.1
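The get/put pairing this patch introduces can be illustrated with a
minimal userspace sketch. This is a stand-in for the kernel's kref, not
the kernel API; the names (pagemap_create() and friends) are
illustrative only:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal userspace stand-in for struct kref inside struct drm_pagemap. */
struct pagemap {
	int ref;		/* kref in the kernel patch */
};

static struct pagemap *pagemap_create(void)
{
	struct pagemap *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;
	p->ref = 1;		/* kref_init() */
	return p;
}

static struct pagemap *pagemap_get(struct pagemap *p)
{
	if (p)			/* mirrors the NULL-tolerant drm_pagemap_get() */
		p->ref++;
	return p;
}

/* Returns 1 if the final reference was dropped and the object freed. */
static int pagemap_put(struct pagemap *p)
{
	if (!p)			/* mirrors the NULL-tolerant drm_pagemap_put() */
		return 0;
	if (--p->ref == 0) {	/* kref_put() -> drm_pagemap_release() */
		free(p);
		return 1;
	}
	return 0;
}
```

A holder such as drm_gpusvm_pages then stores the result of
pagemap_get() and calls pagemap_put() on unmap, which is exactly the
pairing the patch adds in __drm_gpusvm_unmap_pages() and
drm_gpusvm_get_pages().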
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH v2 02/17] drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap
2025-11-11 16:43 ` [PATCH v2 02/17] drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap Thomas Hellström
@ 2025-11-12 6:07 ` Ghimiray, Himal Prasad
2025-11-21 10:19 ` Thomas Hellström
0 siblings, 1 reply; 33+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-11-12 6:07 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: Matthew Brost, dri-devel, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
On 11-11-2025 22:13, Thomas Hellström wrote:
> With the end goal of being able to free unused pagemaps
> and allocate them on demand, add a refcount to struct drm_pagemap,
> remove the xe embedded drm_pagemap, allocating and freeing it
> explicitly.
>
> v2:
> - Make the drm_pagemap pointer in drm_gpusvm_pages reference-counted.
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v1
> ---
> drivers/gpu/drm/drm_gpusvm.c | 4 ++-
> drivers/gpu/drm/drm_pagemap.c | 51 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_svm.c | 26 ++++++++++-----
> drivers/gpu/drm/xe/xe_vram_types.h | 2 +-
> include/drm/drm_pagemap.h | 36 +++++++++++++++++++++
> 5 files changed, 109 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 73e550c8ff8c..1f96375d1f2b 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1038,6 +1038,7 @@ static void __drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
> flags.has_dma_mapping = false;
> WRITE_ONCE(svm_pages->flags.__flags, flags.__flags);
>
> + drm_pagemap_put(svm_pages->dpagemap);
> svm_pages->dpagemap = NULL;
> }
> }
> @@ -1431,7 +1432,8 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
>
> if (pagemap) {
> flags.has_devmem_pages = true;
> - svm_pages->dpagemap = dpagemap;
> + drm_pagemap_put(svm_pages->dpagemap);
Don't we risk a UAF on dpagemap if svm_pages->dpagemap is the same as dpagemap?
> + svm_pages->dpagemap = drm_pagemap_get(dpagemap);
> }
>
> /* WRITE_ONCE pairs with READ_ONCE for opportunistic checks */
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v2 02/17] drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap
2025-11-12 6:07 ` Ghimiray, Himal Prasad
@ 2025-11-21 10:19 ` Thomas Hellström
0 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-21 10:19 UTC (permalink / raw)
To: Ghimiray, Himal Prasad, intel-xe
Cc: Matthew Brost, dri-devel, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
Hi, Himal!
On Wed, 2025-11-12 at 11:37 +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 11-11-2025 22:13, Thomas Hellström wrote:
> > With the end goal of being able to free unused pagemaps
> > and allocate them on demand, add a refcount to struct drm_pagemap,
> > remove the xe embedded drm_pagemap, allocating and freeing it
> > explicitly.
> >
> > v2:
> > - Make the drm_pagemap pointer in drm_gpusvm_pages reference-
> > counted.
> >
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v1
> > ---
> > drivers/gpu/drm/drm_gpusvm.c | 4 ++-
> > drivers/gpu/drm/drm_pagemap.c | 51
> > ++++++++++++++++++++++++++++++
> > drivers/gpu/drm/xe/xe_svm.c | 26 ++++++++++-----
> > drivers/gpu/drm/xe/xe_vram_types.h | 2 +-
> > include/drm/drm_pagemap.h | 36 +++++++++++++++++++++
> > 5 files changed, 109 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_gpusvm.c
> > b/drivers/gpu/drm/drm_gpusvm.c
> > index 73e550c8ff8c..1f96375d1f2b 100644
> > --- a/drivers/gpu/drm/drm_gpusvm.c
> > +++ b/drivers/gpu/drm/drm_gpusvm.c
> > @@ -1038,6 +1038,7 @@ static void __drm_gpusvm_unmap_pages(struct
> > drm_gpusvm *gpusvm,
> > flags.has_dma_mapping = false;
> > WRITE_ONCE(svm_pages->flags.__flags,
> > flags.__flags);
> >
> > + drm_pagemap_put(svm_pages->dpagemap);
> > svm_pages->dpagemap = NULL;
> > }
> > }
> > @@ -1431,7 +1432,8 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > *gpusvm,
> >
> > if (pagemap) {
> > flags.has_devmem_pages = true;
> > - svm_pages->dpagemap = dpagemap;
> > + drm_pagemap_put(svm_pages->dpagemap);
>
> Dont we risk a UAF for dpagemap if svm_pages->dpagemap is same as
> dpagemap ?
Thanks for reviewing. Here the dpagemap refcount is kept from
dropping to zero by the presence of its pages in the CPU address
space and by the notifier lock we're holding. But I agree that this
looks bad from a reader's perspective. I'll fix that up.
Thanks,
Thomas
^ permalink raw reply [flat|nested] 33+ messages in thread
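The hazard discussed in this thread, dropping a reference before taking
a new one on possibly the same object, can be sketched in userspace.
This is a simplified stand-in (freed objects are flagged rather than
freed, so the sketch can observe the state); it is not the kernel API:

```c
#include <assert.h>
#include <stdlib.h>

struct obj {
	int ref;
	int freed;	/* set instead of free() so the state stays observable */
};

static void obj_put(struct obj *o)
{
	if (o && --o->ref == 0)
		o->freed = 1;	/* kernel: drm_pagemap_release() + kfree() */
}

static struct obj *obj_get(struct obj *o)
{
	if (o)
		o->ref++;
	return o;
}

/*
 * The pattern under discussion: drop the old reference, then take a
 * new one on (possibly) the same object. This is safe only while
 * something else pins the object -- here the caller's extra reference,
 * in the kernel the device-private page presence plus the notifier lock.
 */
static struct obj *replace_ref(struct obj *old, struct obj *new)
{
	obj_put(old);		/* if old == new and ref was 1: object gone */
	return obj_get(new);	/* ...and this would be a use-after-free    */
}
```

With an extra pinning reference the put/get pair is a no-op on the
count; without one, the final put frees the object and the following
get is the UAF the reviewer points out.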
* [PATCH v2 03/17] drm/pagemap: Add a refcounted drm_pagemap backpointer to struct drm_pagemap_zdd
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
2025-11-11 16:43 ` [PATCH v2 01/17] drm/xe/svm: Fix a debug printout Thomas Hellström
2025-11-11 16:43 ` [PATCH v2 02/17] drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-11 16:43 ` [PATCH v2 04/17] drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes Thomas Hellström
` (18 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
To be able to keep track of drm_pagemap usage, add a refcounted
backpointer to struct drm_pagemap_zdd. This will keep the drm_pagemap
reference count from dropping to zero as long as there are drm_pagemap
pages present in a CPU address space.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 4b8692f0b2a2..173b3ecb07d5 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -62,6 +62,7 @@
*
* @refcount: Reference count for the zdd
* @devmem_allocation: device memory allocation
+ * @dpagemap: Refcounted pointer to the underlying struct drm_pagemap.
* @device_private_page_owner: Device private pages owner
*
* This structure serves as a generic wrapper installed in
@@ -74,11 +75,13 @@
struct drm_pagemap_zdd {
struct kref refcount;
struct drm_pagemap_devmem *devmem_allocation;
+ struct drm_pagemap *dpagemap;
void *device_private_page_owner;
};
/**
* drm_pagemap_zdd_alloc() - Allocate a zdd structure.
+ * @dpagemap: Pointer to the underlying struct drm_pagemap.
* @device_private_page_owner: Device private pages owner
*
* This function allocates and initializes a new zdd structure. It sets up the
@@ -87,7 +90,7 @@ struct drm_pagemap_zdd {
* Return: Pointer to the allocated zdd on success, ERR_PTR() on failure.
*/
static struct drm_pagemap_zdd *
-drm_pagemap_zdd_alloc(void *device_private_page_owner)
+drm_pagemap_zdd_alloc(struct drm_pagemap *dpagemap, void *device_private_page_owner)
{
struct drm_pagemap_zdd *zdd;
@@ -98,6 +101,7 @@ drm_pagemap_zdd_alloc(void *device_private_page_owner)
kref_init(&zdd->refcount);
zdd->devmem_allocation = NULL;
zdd->device_private_page_owner = device_private_page_owner;
+ zdd->dpagemap = drm_pagemap_get(dpagemap);
return zdd;
}
@@ -127,6 +131,7 @@ static void drm_pagemap_zdd_destroy(struct kref *ref)
struct drm_pagemap_zdd *zdd =
container_of(ref, struct drm_pagemap_zdd, refcount);
struct drm_pagemap_devmem *devmem = zdd->devmem_allocation;
+ struct drm_pagemap *dpagemap = zdd->dpagemap;
if (devmem) {
complete_all(&devmem->detached);
@@ -134,6 +139,7 @@ static void drm_pagemap_zdd_destroy(struct kref *ref)
devmem->ops->devmem_release(devmem);
}
kfree(zdd);
+ drm_pagemap_put(dpagemap);
}
/**
@@ -366,7 +372,7 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
pagemap_addr = buf + (2 * sizeof(*migrate.src) * npages);
pages = buf + (2 * sizeof(*migrate.src) + sizeof(*pagemap_addr)) * npages;
- zdd = drm_pagemap_zdd_alloc(pgmap_owner);
+ zdd = drm_pagemap_zdd_alloc(devmem_allocation->dpagemap, pgmap_owner);
if (!zdd) {
err = -ENOMEM;
goto err_free;
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v2 04/17] drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (2 preceding siblings ...)
2025-11-11 16:43 ` [PATCH v2 03/17] drm/pagemap: Add a refcounted drm_pagemap backpointer to struct drm_pagemap_zdd Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-18 0:44 ` Matthew Brost
2025-11-11 16:43 ` [PATCH v2 05/17] drm/pagemap: Add a drm_pagemap cache and shrinker Thomas Hellström
` (17 subsequent siblings)
21 siblings, 1 reply; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
If a device holds a reference on a foreign device's drm_pagemap
and a device unbind is executed on the foreign device, that
foreign device would typically evict its device-private pages
and then continue its device-managed cleanup, eventually
releasing its drm device and possibly allowing for module
unload. However, since we're still holding a reference on the
drm_pagemap, if that reference is released after the provider
module has been unloaded, we'd be executing code in freed
memory.
Therefore keep a reference on the provider device and module
until the last drm_pagemap reference is gone.
Note that in theory, the drm_gpusvm_helper module may be
unloaded as soon as the final module_put() of the provider
driver module is executed, so we need to add a module_exit()
function that waits for the work item executing the
module_put() to complete.
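The lifetime scheme above can be sketched in userspace. This is a
simplified stand-in: a plain singly-linked list plays the llist, an
explicit drain call plays the deferred work item, and counters play
drm_dev_put()/module_put(); none of this is the kernel API:

```c
#include <assert.h>
#include <stdlib.h>

/* Counters standing in for the provider's drm_device and module refs. */
static int dev_refs, module_refs;

struct dev_hold {			/* models struct drm_pagemap_dev_hold */
	struct dev_hold *next;
};
static struct dev_hold *unhold_list;	/* models drm_pagemap_unhold_list */

/* Taken when a drm_pagemap is created: pin the provider dev + module. */
static struct dev_hold *dev_hold_create(void)
{
	struct dev_hold *h = calloc(1, sizeof(*h));

	if (!h)
		return NULL;
	dev_refs++;		/* drm_dev_get() */
	module_refs++;		/* try_module_get() */
	return h;
}

/* Final drm_pagemap reference dropped: defer the unhold to a "worker". */
static void pagemap_release(struct dev_hold *h)
{
	h->next = unhold_list;
	unhold_list = h;	/* llist_add() + schedule_work() */
}

/* Models drm_pagemap_dev_unhold_work(); flushed from "module_exit". */
static void drain_unhold_work(void)
{
	while (unhold_list) {
		struct dev_hold *h = unhold_list;

		unhold_list = h->next;
		dev_refs--;	/* drm_dev_put() */
		module_refs--;	/* module_put() */
		free(h);
	}
}
```

The point of the deferral is that the provider's device and module stay
pinned until the worker runs, and flushing the worker at module_exit
time keeps the helper module itself alive for the last module_put().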
v2:
- Better commit message (Matt Brost)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 101 ++++++++++++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_svm.c | 15 ++++-
include/drm/drm_pagemap.h | 10 +++-
3 files changed, 117 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 173b3ecb07d5..fb18a80d6a1c 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -8,6 +8,7 @@
#include <linux/pagemap.h>
#include <drm/drm_drv.h>
#include <drm/drm_pagemap.h>
+#include <drm/drm_print.h>
/**
* DOC: Overview
@@ -544,16 +545,92 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
return -ENOMEM;
}
+static void drm_pagemap_dev_unhold_work(struct work_struct *work);
+static LLIST_HEAD(drm_pagemap_unhold_list);
+static DECLARE_WORK(drm_pagemap_work, drm_pagemap_dev_unhold_work);
+
+/**
+ * struct drm_pagemap_dev_hold - Struct to aid in drm_device release.
+ * @link: Link into drm_pagemap_unhold_list for deferred reference releases.
+ * @drm: drm device to put.
+ *
+ * When a struct drm_pagemap is released, we also need to release the
+ * reference it holds on the drm device. However, typically that needs
+ * to be done separately from a system-wide workqueue.
+ * Each time a struct drm_pagemap is initialized
+ * (or re-initialized if cached), we therefore allocate a separate
+ * drm_pagemap_dev_hold item, from which we put the drm device and
+ * associated module.
+ */
+struct drm_pagemap_dev_hold {
+ struct llist_node link;
+ struct drm_device *drm;
+};
+
static void drm_pagemap_release(struct kref *ref)
{
struct drm_pagemap *dpagemap = container_of(ref, typeof(*dpagemap), ref);
-
+ struct drm_pagemap_dev_hold *dev_hold = dpagemap->dev_hold;
+
+ /*
+ * We know the pagemap provider is alive at this point, since
+ * the struct drm_pagemap_dev_hold holds a reference to the
+ * pagemap provider drm_device and its module.
+ */
+ dpagemap->dev_hold = NULL;
kfree(dpagemap);
+ llist_add(&dev_hold->link, &drm_pagemap_unhold_list);
+ schedule_work(&drm_pagemap_work);
+ /*
+ * Here, either the provider device is still alive, since if called from
+ * page_free(), the caller is holding a reference on the dev_pagemap,
+ * or if called from drm_pagemap_put(), the direct caller is still alive.
+ * This ensures we can't race with THIS module unload.
+ */
+}
+
+static void drm_pagemap_dev_unhold_work(struct work_struct *work)
+{
+ struct llist_node *node = llist_del_all(&drm_pagemap_unhold_list);
+ struct drm_pagemap_dev_hold *dev_hold, *next;
+
+ /*
+ * Deferred release of drm_pagemap provider device and module.
+ * THIS module is kept alive during the release by the
+ * flush_work() in the drm_pagemap_exit() function.
+ */
+ llist_for_each_entry_safe(dev_hold, next, node, link) {
+ struct drm_device *drm = dev_hold->drm;
+ struct module *module = drm->driver->fops->owner;
+
+ drm_dbg(drm, "Releasing reference on provider device and module.\n");
+ drm_dev_put(drm);
+ module_put(module);
+ kfree(dev_hold);
+ }
+}
+
+static struct drm_pagemap_dev_hold *
+drm_pagemap_dev_hold(struct drm_pagemap *dpagemap)
+{
+ struct drm_pagemap_dev_hold *dev_hold;
+ struct drm_device *drm = dpagemap->drm;
+
+ dev_hold = kzalloc(sizeof(*dev_hold), GFP_KERNEL);
+ if (!dev_hold)
+ return ERR_PTR(-ENOMEM);
+
+ init_llist_node(&dev_hold->link);
+ dev_hold->drm = drm;
+ (void)try_module_get(drm->driver->fops->owner);
+ drm_dev_get(drm);
+
+ return dev_hold;
}
/**
* drm_pagemap_create() - Create a struct drm_pagemap.
- * @dev: Pointer to a struct device providing the device-private memory.
+ * @drm: Pointer to a struct drm_device providing the device-private memory.
* @pagemap: Pointer to a pre-setup struct dev_pagemap providing the struct pages.
* @ops: Pointer to the struct drm_pagemap_ops.
*
@@ -563,20 +640,28 @@ static void drm_pagemap_release(struct kref *ref)
* Error pointer on error.
*/
struct drm_pagemap *
-drm_pagemap_create(struct device *dev,
+drm_pagemap_create(struct drm_device *drm,
struct dev_pagemap *pagemap,
const struct drm_pagemap_ops *ops)
{
struct drm_pagemap *dpagemap = kzalloc(sizeof(*dpagemap), GFP_KERNEL);
+ struct drm_pagemap_dev_hold *dev_hold;
if (!dpagemap)
return ERR_PTR(-ENOMEM);
kref_init(&dpagemap->ref);
- dpagemap->dev = dev;
+ dpagemap->drm = drm;
dpagemap->ops = ops;
dpagemap->pagemap = pagemap;
+ dev_hold = drm_pagemap_dev_hold(dpagemap);
+ if (IS_ERR(dev_hold)) {
+ kfree(dpagemap);
+ return ERR_CAST(dev_hold);
+ }
+ dpagemap->dev_hold = dev_hold;
+
return dpagemap;
}
EXPORT_SYMBOL(drm_pagemap_create);
@@ -937,3 +1022,11 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
return err;
}
EXPORT_SYMBOL(drm_pagemap_populate_mm);
+
+static void drm_pagemap_exit(void)
+{
+ flush_work(&drm_pagemap_work);
+ if (WARN_ON(!llist_empty(&drm_pagemap_unhold_list)))
+ disable_work_sync(&drm_pagemap_work);
+}
+module_exit(drm_pagemap_exit);
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index a3f97cf9c254..aab939fbcf80 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1436,7 +1436,7 @@ xe_drm_pagemap_device_map(struct drm_pagemap *dpagemap,
unsigned int order,
enum dma_data_direction dir)
{
- struct device *pgmap_dev = dpagemap->dev;
+ struct device *pgmap_dev = dpagemap->drm->dev;
enum drm_interconnect_protocol prot;
dma_addr_t addr;
@@ -1456,6 +1456,14 @@ static const struct drm_pagemap_ops xe_drm_pagemap_ops = {
.populate_mm = xe_drm_pagemap_populate_mm,
};
+static void xe_devm_release(void *data)
+{
+ struct xe_vram_region *vr = data;
+
+ drm_pagemap_put(vr->dpagemap);
+ vr->dpagemap = NULL;
+}
+
/**
* xe_devm_add: Remap and provide memmap backing for device memory
* @tile: tile that the memory region belongs to
@@ -1481,7 +1489,7 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
return ret;
}
- vr->dpagemap = drm_pagemap_create(dev, &vr->pagemap,
+ vr->dpagemap = drm_pagemap_create(&xe->drm, &vr->pagemap,
&xe_drm_pagemap_ops);
if (IS_ERR(vr->dpagemap)) {
drm_err(&xe->drm, "Failed to create drm_pagemap tile %d memory: %pe\n",
@@ -1489,6 +1497,9 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
ret = PTR_ERR(vr->dpagemap);
goto out_no_dpagemap;
}
+ ret = devm_add_action_or_reset(dev, xe_devm_release, vr);
+ if (ret)
+ goto out_no_dpagemap;
vr->pagemap.type = MEMORY_DEVICE_PRIVATE;
vr->pagemap.range.start = res->start;
diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
index 2c7de928865b..5cfe54331ba7 100644
--- a/include/drm/drm_pagemap.h
+++ b/include/drm/drm_pagemap.h
@@ -9,6 +9,7 @@
#define NR_PAGES(order) (1U << (order))
struct drm_pagemap;
+struct drm_pagemap_dev_hold;
struct drm_pagemap_zdd;
struct device;
@@ -130,14 +131,17 @@ struct drm_pagemap_ops {
* used for device p2p handshaking.
* @ops: The struct drm_pagemap_ops.
* @ref: Reference count.
- * @dev: The struct drevice owning the device-private memory.
+ * @drm: The struct drm device owning the device-private memory.
* @pagemap: Pointer to the underlying dev_pagemap.
+ * @dev_hold: Pointer to a struct drm_pagemap_dev_hold for
+ * device referencing.
*/
struct drm_pagemap {
const struct drm_pagemap_ops *ops;
struct kref ref;
- struct device *dev;
+ struct drm_device *drm;
struct dev_pagemap *pagemap;
+ struct drm_pagemap_dev_hold *dev_hold;
};
struct drm_pagemap_devmem;
@@ -206,7 +210,7 @@ struct drm_pagemap_devmem_ops {
unsigned long npages);
};
-struct drm_pagemap *drm_pagemap_create(struct device *dev,
+struct drm_pagemap *drm_pagemap_create(struct drm_device *drm,
struct dev_pagemap *pagemap,
const struct drm_pagemap_ops *ops);
--
2.51.1
^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 04/17] drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes
2025-11-11 16:43 ` [PATCH v2 04/17] drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes Thomas Hellström
@ 2025-11-18 0:44 ` Matthew Brost
0 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2025-11-18 0:44 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, himal.prasad.ghimiray, apopple, airlied,
Simona Vetter, felix.kuehling, Christian König, dakr,
Mrozek, Michal, Joonas Lahtinen
On Tue, Nov 11, 2025 at 05:43:54PM +0100, Thomas Hellström wrote:
> If a device holds a reference on a foreign device's drm_pagemap,
> and a device unbind is executed on the foreign device, that
> foreign device would typically evict its device-private pages
> and then continue its device-managed cleanup, eventually
> releasing its drm device and possibly allowing module unload.
> However, since we're still holding a reference on the drm_pagemap,
> when that reference is released after the provider module has been
> unloaded, we'd execute out of freed module memory.
>
> Therefore keep a reference on the provider device and module until
> the last drm_pagemap reference is gone.
>
> Note that in theory, the drm_gpusvm_helper module may be unloaded
> as soon as the final module_put() of the provider driver module is
> executed, so we need to add a module_exit() function that waits
> until the work item executing the module_put() has completed.
>
> v2:
> - Better commit message (Matt Brost)
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/drm_pagemap.c | 101 ++++++++++++++++++++++++++++++++--
> drivers/gpu/drm/xe/xe_svm.c | 15 ++++-
> include/drm/drm_pagemap.h | 10 +++-
> 3 files changed, 117 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index 173b3ecb07d5..fb18a80d6a1c 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -8,6 +8,7 @@
> #include <linux/pagemap.h>
> #include <drm/drm_drv.h>
> #include <drm/drm_pagemap.h>
> +#include <drm/drm_print.h>
>
> /**
> * DOC: Overview
> @@ -544,16 +545,92 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> return -ENOMEM;
> }
>
> +static void drm_pagemap_dev_unhold_work(struct work_struct *work);
> +static LLIST_HEAD(drm_pagemap_unhold_list);
> +static DECLARE_WORK(drm_pagemap_work, drm_pagemap_dev_unhold_work);
> +
> +/**
> + * struct drm_pagemap_dev_hold - Struct to aid in drm_device release.
> + * @link: Link into drm_pagemap_unhold_list for deferred reference releases.
> + * @drm: drm device to put.
> + *
> + * When a struct drm_pagemap is released, we also need to release the
> + * reference it holds on the drm device. However, typically that needs
> + * to be done separately from a system-wide workqueue.
> + * Each time a struct drm_pagemap is initialized
> + * (or re-initialized if cached) therefore allocate a separate
> + * drm_pagemap_dev_hold item, from which we put the drm device and
> + * associated module.
> + */
> +struct drm_pagemap_dev_hold {
> + struct llist_node link;
> + struct drm_device *drm;
> +};
> +
> static void drm_pagemap_release(struct kref *ref)
> {
> struct drm_pagemap *dpagemap = container_of(ref, typeof(*dpagemap), ref);
> -
> + struct drm_pagemap_dev_hold *dev_hold = dpagemap->dev_hold;
> +
> + /*
> + * We know the pagemap provider is alive at this point, since
> + * the struct drm_pagemap_dev_hold holds a reference to the
> + * pagemap provider drm_device and its module.
> + */
> + dpagemap->dev_hold = NULL;
> kfree(dpagemap);
> + llist_add(&dev_hold->link, &drm_pagemap_unhold_list);
> + schedule_work(&drm_pagemap_work);
> + /*
> + * Here, either the provider device is still alive, since if called from
> + * page_free(), the caller is holding a reference on the dev_pagemap,
> + * or if called from drm_pagemap_put(), the direct caller is still alive.
> + * This ensures we can't race with THIS module unload.
> + */
> +}
> +
> +static void drm_pagemap_dev_unhold_work(struct work_struct *work)
> +{
> + struct llist_node *node = llist_del_all(&drm_pagemap_unhold_list);
> + struct drm_pagemap_dev_hold *dev_hold, *next;
> +
> + /*
> + * Deferred release of drm_pagemap provider device and module.
> + * THIS module is kept alive during the release by the
> + * flush_work() in the drm_pagemap_exit() function.
> + */
> + llist_for_each_entry_safe(dev_hold, next, node, link) {
> + struct drm_device *drm = dev_hold->drm;
> + struct module *module = drm->driver->fops->owner;
> +
> + drm_dbg(drm, "Releasing reference on provider device and module.\n");
> + drm_dev_put(drm);
> + module_put(module);
> + kfree(dev_hold);
> + }
> +}
> +
> +static struct drm_pagemap_dev_hold *
> +drm_pagemap_dev_hold(struct drm_pagemap *dpagemap)
> +{
> + struct drm_pagemap_dev_hold *dev_hold;
> + struct drm_device *drm = dpagemap->drm;
> +
> + dev_hold = kzalloc(sizeof(*dev_hold), GFP_KERNEL);
> + if (!dev_hold)
> + return ERR_PTR(-ENOMEM);
> +
> + init_llist_node(&dev_hold->link);
> + dev_hold->drm = drm;
> + (void)try_module_get(drm->driver->fops->owner);
> + drm_dev_get(drm);
> +
> + return dev_hold;
> }
>
> /**
> * drm_pagemap_create() - Create a struct drm_pagemap.
> - * @dev: Pointer to a struct device providing the device-private memory.
> + * @drm: Pointer to a struct drm_device providing the device-private memory.
> * @pagemap: Pointer to a pre-setup struct dev_pagemap providing the struct pages.
> * @ops: Pointer to the struct drm_pagemap_ops.
> *
> @@ -563,20 +640,28 @@ static void drm_pagemap_release(struct kref *ref)
> * Error pointer on error.
> */
> struct drm_pagemap *
> -drm_pagemap_create(struct device *dev,
> +drm_pagemap_create(struct drm_device *drm,
> struct dev_pagemap *pagemap,
> const struct drm_pagemap_ops *ops)
> {
> struct drm_pagemap *dpagemap = kzalloc(sizeof(*dpagemap), GFP_KERNEL);
> + struct drm_pagemap_dev_hold *dev_hold;
>
> if (!dpagemap)
> return ERR_PTR(-ENOMEM);
>
> kref_init(&dpagemap->ref);
> - dpagemap->dev = dev;
> + dpagemap->drm = drm;
> dpagemap->ops = ops;
> dpagemap->pagemap = pagemap;
>
> + dev_hold = drm_pagemap_dev_hold(dpagemap);
> + if (IS_ERR(dev_hold)) {
> + kfree(dpagemap);
> + return ERR_CAST(dev_hold);
> + }
> + dpagemap->dev_hold = dev_hold;
> +
> return dpagemap;
> }
> EXPORT_SYMBOL(drm_pagemap_create);
> @@ -937,3 +1022,11 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> return err;
> }
> EXPORT_SYMBOL(drm_pagemap_populate_mm);
> +
> +static void drm_pagemap_exit(void)
> +{
> + flush_work(&drm_pagemap_work);
> + if (WARN_ON(!llist_empty(&drm_pagemap_unhold_list)))
> + disable_work_sync(&drm_pagemap_work);
> +}
> +module_exit(drm_pagemap_exit);
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index a3f97cf9c254..aab939fbcf80 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -1436,7 +1436,7 @@ xe_drm_pagemap_device_map(struct drm_pagemap *dpagemap,
> unsigned int order,
> enum dma_data_direction dir)
> {
> - struct device *pgmap_dev = dpagemap->dev;
> + struct device *pgmap_dev = dpagemap->drm->dev;
> enum drm_interconnect_protocol prot;
> dma_addr_t addr;
>
> @@ -1456,6 +1456,14 @@ static const struct drm_pagemap_ops xe_drm_pagemap_ops = {
> .populate_mm = xe_drm_pagemap_populate_mm,
> };
>
> +static void xe_devm_release(void *data)
> +{
> + struct xe_vram_region *vr = data;
> +
> + drm_pagemap_put(vr->dpagemap);
> + vr->dpagemap = NULL;
> +}
> +
> /**
> * xe_devm_add: Remap and provide memmap backing for device memory
> * @tile: tile that the memory region belongs to
> @@ -1481,7 +1489,7 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
> return ret;
> }
>
> - vr->dpagemap = drm_pagemap_create(dev, &vr->pagemap,
> + vr->dpagemap = drm_pagemap_create(&xe->drm, &vr->pagemap,
> &xe_drm_pagemap_ops);
> if (IS_ERR(vr->dpagemap)) {
> drm_err(&xe->drm, "Failed to create drm_pagemap tile %d memory: %pe\n",
> @@ -1489,6 +1497,9 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
> ret = PTR_ERR(vr->dpagemap);
> goto out_no_dpagemap;
> }
> + ret = devm_add_action_or_reset(dev, xe_devm_release, vr);
> + if (ret)
> + goto out_no_dpagemap;
>
> vr->pagemap.type = MEMORY_DEVICE_PRIVATE;
> vr->pagemap.range.start = res->start;
> diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> index 2c7de928865b..5cfe54331ba7 100644
> --- a/include/drm/drm_pagemap.h
> +++ b/include/drm/drm_pagemap.h
> @@ -9,6 +9,7 @@
> #define NR_PAGES(order) (1U << (order))
>
> struct drm_pagemap;
> +struct drm_pagemap_dev_hold;
> struct drm_pagemap_zdd;
> struct device;
>
> @@ -130,14 +131,17 @@ struct drm_pagemap_ops {
> * used for device p2p handshaking.
> * @ops: The struct drm_pagemap_ops.
> * @ref: Reference count.
> - * @dev: The struct drevice owning the device-private memory.
> + * @drm: The struct drm device owning the device-private memory.
> * @pagemap: Pointer to the underlying dev_pagemap.
> + * @dev_hold: Pointer to a struct drm_pagemap_dev_hold for
> + * device referencing.
> */
> struct drm_pagemap {
> const struct drm_pagemap_ops *ops;
> struct kref ref;
> - struct device *dev;
> + struct drm_device *drm;
> struct dev_pagemap *pagemap;
> + struct drm_pagemap_dev_hold *dev_hold;
> };
>
> struct drm_pagemap_devmem;
> @@ -206,7 +210,7 @@ struct drm_pagemap_devmem_ops {
> unsigned long npages);
> };
>
> -struct drm_pagemap *drm_pagemap_create(struct device *dev,
> +struct drm_pagemap *drm_pagemap_create(struct drm_device *drm,
> struct dev_pagemap *pagemap,
> const struct drm_pagemap_ops *ops);
>
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 33+ messages in thread
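The deferred-release pattern in the patch above (queue the device/module hold
on a lock-free list from drm_pagemap_release(), then drop the references from
a work item) can be sketched in plain userspace C. All names below are
illustrative stand-ins, not kernel APIs, and the single-threaded list stands
in for the llist/workqueue machinery:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for struct drm_pagemap_dev_hold. */
struct dev_hold {
	struct dev_hold *next;   /* stands in for the llist_node link */
	int *device_refs;        /* stands in for the drm_device/module refcounts */
};

static struct dev_hold *unhold_list; /* stands in for drm_pagemap_unhold_list */

/* Like drm_pagemap_release(): queue the hold, defer the actual puts. */
static void pagemap_release(struct dev_hold *hold)
{
	hold->next = unhold_list;
	unhold_list = hold;
}

/* Like drm_pagemap_dev_unhold_work(): drain the list and drop references. */
static int unhold_work(void)
{
	struct dev_hold *node = unhold_list;
	int freed = 0;

	unhold_list = NULL; /* llist_del_all() analogue */
	while (node) {
		struct dev_hold *next = node->next;

		(*node->device_refs)--; /* drm_dev_put() + module_put() analogue */
		free(node);
		freed++;
		node = next;
	}
	return freed;
}
```

The point of the split is visible in the sketch: pagemap_release() itself
never drops the provider references, so it is safe to call from a context
that must not trigger device teardown; the puts happen only when the
deferred work runs.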
* [PATCH v2 05/17] drm/pagemap: Add a drm_pagemap cache and shrinker
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (3 preceding siblings ...)
2025-11-11 16:43 ` [PATCH v2 04/17] drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-19 19:28 ` Matthew Brost
2025-11-11 16:43 ` [PATCH v2 06/17] drm/xe: Use the " Thomas Hellström
` (16 subsequent siblings)
21 siblings, 1 reply; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
Pagemaps are costly to set up and tear down, and they consume a lot
of system memory for the struct pages. Ideally they should be
created only when needed.
Add a caching mechanism to allow doing just that: Create the drm_pagemaps
when needed for migration. Keep them around to avoid destruction and
re-creation latencies and destroy inactive/unused drm_pagemaps on memory
pressure using a shrinker.
Only add the helper functions; they will be hooked up to the xe driver
in the next patch.
v2:
- Add lockdep checking for drm_pagemap_put(). (Matt Brost)
- Add a copyright notice. (Matt Brost)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/Makefile | 3 +-
drivers/gpu/drm/drm_pagemap.c | 83 +++++-
drivers/gpu/drm/drm_pagemap_util.c | 450 +++++++++++++++++++++++++++++
include/drm/drm_pagemap.h | 53 +++-
include/drm/drm_pagemap_util.h | 42 +++
5 files changed, 613 insertions(+), 18 deletions(-)
create mode 100644 drivers/gpu/drm/drm_pagemap_util.c
create mode 100644 include/drm/drm_pagemap_util.h
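The single-slot cache behavior described above (an active pagemap is handed
out directly; the last put parks it on the shrinker list instead of freeing
it; a later lookup revives it; the shrinker frees parked pagemaps under
memory pressure) can be sketched in userspace C. These are assumed,
simplified stand-ins, not the kernel API, and all locking is omitted:

```c
#include <assert.h>
#include <stdlib.h>

struct pagemap {
	int ref;
	int parked;             /* on the shrinker list, still revivable */
};

struct cache {
	struct pagemap *slot;   /* single cached pagemap, may be parked */
};

/* kref_get_unless_zero() analogue. */
static int get_unless_zero(struct pagemap *pm)
{
	if (!pm || pm->ref == 0)
		return 0;
	pm->ref++;
	return 1;
}

/* Lookup: active -> take a ref; parked -> revive; empty -> caller creates. */
static struct pagemap *cache_lookup(struct cache *c)
{
	struct pagemap *pm = c->slot;

	if (get_unless_zero(pm))
		return pm;
	if (pm && pm->parked) { /* drm_pagemap_shrinker_cancel() path */
		pm->parked = 0;
		pm->ref = 1;    /* drm_pagemap_reinit() analogue */
		return pm;
	}
	return NULL;            /* caller creates and inserts a new one */
}

static void cache_insert(struct cache *c, struct pagemap *pm)
{
	c->slot = pm;
}

/* Last put parks the pagemap instead of freeing it. */
static void pagemap_put(struct pagemap *pm)
{
	if (--pm->ref == 0)
		pm->parked = 1; /* drm_pagemap_shrinker_add() analogue */
}

/* Shrinker scan analogue: free a parked pagemap under memory pressure. */
static int cache_shrink(struct cache *c)
{
	struct pagemap *pm = c->slot;

	if (!pm || !pm->parked)
		return 0;
	c->slot = NULL;
	free(pm);
	return 1;
}
```

This mirrors the race the real code must handle: a lookup and the shrinker
both want a parked pagemap, and exactly one of them may take it, which is
what the completion/list-removal dance in the patch arbitrates.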
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 7789f42027ff..04ff0b3e55b0 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -107,7 +107,8 @@ obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o
drm_gpusvm_helper-y := \
drm_gpusvm.o\
- drm_pagemap.o
+ drm_pagemap.o\
+ drm_pagemap_util.o
obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index fb18a80d6a1c..50d3963ddbbc 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -8,6 +8,7 @@
#include <linux/pagemap.h>
#include <drm/drm_drv.h>
#include <drm/drm_pagemap.h>
+#include <drm/drm_pagemap_util.h>
#include <drm/drm_print.h>
/**
@@ -578,7 +579,7 @@ static void drm_pagemap_release(struct kref *ref)
* pagemap provider drm_device and its module.
*/
dpagemap->dev_hold = NULL;
- kfree(dpagemap);
+ drm_pagemap_shrinker_add(dpagemap);
llist_add(&dev_hold->link, &drm_pagemap_unhold_list);
schedule_work(&drm_pagemap_work);
/*
@@ -628,6 +629,58 @@ drm_pagemap_dev_hold(struct drm_pagemap *dpagemap)
return dev_hold;
}
+/**
+ * drm_pagemap_reinit() - Reinitialize a drm_pagemap
+ * @dpagemap: The drm_pagemap to reinitialize
+ *
+ * Reinitialize a drm_pagemap, for which drm_pagemap_release()
+ * has already been called. This interface is intended for the
+ * situation where the driver caches a destroyed drm_pagemap.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int drm_pagemap_reinit(struct drm_pagemap *dpagemap)
+{
+ dpagemap->dev_hold = drm_pagemap_dev_hold(dpagemap);
+ if (IS_ERR(dpagemap->dev_hold))
+ return PTR_ERR(dpagemap->dev_hold);
+
+ kref_init(&dpagemap->ref);
+ return 0;
+}
+EXPORT_SYMBOL(drm_pagemap_reinit);
+
+/**
+ * drm_pagemap_init() - Initialize a pre-allocated drm_pagemap
+ * @dpagemap: The drm_pagemap to initialize.
+ * @pagemap: The associated dev_pagemap providing the device
+ * private pages.
+ * @drm: The drm device. The drm_pagemap holds a reference on the
+ * drm_device and the module owning the drm_device until
+ * drm_pagemap_release(). This facilitates drm_pagemap exporting.
+ * @ops: The drm_pagemap ops.
+ *
+ * Initialize and take an initial reference on a drm_pagemap.
+ * After successful return, use drm_pagemap_put() to destroy.
+ *
+ * Return: 0 on success, negative error code on error.
+ */
+int drm_pagemap_init(struct drm_pagemap *dpagemap,
+ struct dev_pagemap *pagemap,
+ struct drm_device *drm,
+ const struct drm_pagemap_ops *ops)
+{
+ kref_init(&dpagemap->ref);
+ dpagemap->ops = ops;
+ dpagemap->pagemap = pagemap;
+ dpagemap->drm = drm;
+ dpagemap->cache = NULL;
+ INIT_LIST_HEAD(&dpagemap->shrink_link);
+
+ return drm_pagemap_reinit(dpagemap);
+}
+EXPORT_SYMBOL(drm_pagemap_init);
+
/**
* drm_pagemap_create() - Create a struct drm_pagemap.
* @drm: Pointer to a struct drm_device providing the device-private memory.
@@ -645,22 +698,14 @@ drm_pagemap_create(struct drm_device *drm,
const struct drm_pagemap_ops *ops)
{
struct drm_pagemap *dpagemap = kzalloc(sizeof(*dpagemap), GFP_KERNEL);
- struct drm_pagemap_dev_hold *dev_hold;
+ int err;
if (!dpagemap)
return ERR_PTR(-ENOMEM);
- kref_init(&dpagemap->ref);
- dpagemap->drm = drm;
- dpagemap->ops = ops;
- dpagemap->pagemap = pagemap;
-
- dev_hold = drm_pagemap_dev_hold(dpagemap);
- if (IS_ERR(dev_hold)) {
- kfree(dpagemap);
- return ERR_CAST(dev_hold);
- }
- dpagemap->dev_hold = dev_hold;
+ err = drm_pagemap_init(dpagemap, pagemap, drm, ops);
+ if (err)
+ return ERR_PTR(err);
return dpagemap;
}
@@ -675,8 +720,10 @@ EXPORT_SYMBOL(drm_pagemap_create);
*/
void drm_pagemap_put(struct drm_pagemap *dpagemap)
{
- if (likely(dpagemap))
+ if (likely(dpagemap)) {
+ drm_pagemap_shrinker_might_lock(dpagemap);
kref_put(&dpagemap->ref, drm_pagemap_release);
+ }
}
EXPORT_SYMBOL(drm_pagemap_put);
@@ -1023,6 +1070,14 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
}
EXPORT_SYMBOL(drm_pagemap_populate_mm);
+void drm_pagemap_destroy(struct drm_pagemap *dpagemap, bool is_atomic_or_reclaim)
+{
+ if (dpagemap->ops->destroy)
+ dpagemap->ops->destroy(dpagemap, is_atomic_or_reclaim);
+ else
+ kfree(dpagemap);
+}
+
static void drm_pagemap_exit(void)
{
flush_work(&drm_pagemap_work);
diff --git a/drivers/gpu/drm/drm_pagemap_util.c b/drivers/gpu/drm/drm_pagemap_util.c
new file mode 100644
index 000000000000..84a7a4807bef
--- /dev/null
+++ b/drivers/gpu/drm/drm_pagemap_util.c
@@ -0,0 +1,450 @@
+// SPDX-License-Identifier: GPL-2.0-only OR MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
+#include <drm/drm_pagemap.h>
+#include <drm/drm_pagemap_util.h>
+#include <drm/drm_print.h>
+
+/**
+ * struct drm_pagemap_cache - Lookup structure for pagemaps
+ *
+ * Structure to keep track of active (refcount > 1) and inactive
+ * (refcount == 0) pagemaps. Inactive pagemaps can be made active
+ * again by waiting for the @queued completion (indicating that the
+ * pagemap has been put on the @shrinker's list of shrinkable
+ * pagemaps), and then successfully removing it from @shrinker's
+ * list. The latter may fail if the shrinker is already in the
+ * process of freeing the pagemap. A struct drm_pagemap_cache can
+ * hold a single struct drm_pagemap.
+ */
+struct drm_pagemap_cache {
+ /** @lookup_mutex: Mutex making the lookup process atomic */
+ struct mutex lookup_mutex;
+ /** @lock: Lock protecting the @dpagemap pointer */
+ spinlock_t lock;
+ /** @shrinker: Pointer to the shrinker used for this cache. Immutable. */
+ struct drm_pagemap_shrinker *shrinker;
+ /** @dpagemap: Non-refcounted pointer to the drm_pagemap */
+ struct drm_pagemap *dpagemap;
+ /**
+ * @queued: Signals when an inactive drm_pagemap has been put on
+ * @shrinker's list.
+ */
+ struct completion queued;
+};
+
+/**
+ * struct drm_pagemap_shrinker - Shrinker to remove unused pagemaps
+ */
+struct drm_pagemap_shrinker {
+ /** @drm: Pointer to the drm device. */
+ struct drm_device *drm;
+ /** @lock: Spinlock to protect the @dpagemaps list. */
+ spinlock_t lock;
+ /** @dpagemaps: List of unused dpagemaps. */
+ struct list_head dpagemaps;
+ /** @num_dpagemaps: Number of unused dpagemaps in @dpagemaps. */
+ atomic_t num_dpagemaps;
+ /** @shrink: Pointer to the struct shrinker. */
+ struct shrinker *shrink;
+};
+
+static bool drm_pagemap_shrinker_cancel(struct drm_pagemap *dpagemap);
+
+static void drm_pagemap_cache_fini(void *arg)
+{
+ struct drm_pagemap_cache *cache = arg;
+ struct drm_pagemap *dpagemap;
+
+ drm_dbg(cache->shrinker->drm, "Destroying dpagemap cache.\n");
+ spin_lock(&cache->lock);
+ dpagemap = cache->dpagemap;
+ if (!dpagemap) {
+ spin_unlock(&cache->lock);
+ goto out;
+ }
+
+ if (drm_pagemap_shrinker_cancel(dpagemap)) {
+ cache->dpagemap = NULL;
+ spin_unlock(&cache->lock);
+ drm_pagemap_destroy(dpagemap, false);
+ }
+
+out:
+ mutex_destroy(&cache->lookup_mutex);
+ kfree(cache);
+}
+
+/**
+ * drm_pagemap_cache_create_devm() - Create a drm_pagemap_cache
+ * @shrinker: Pointer to a struct drm_pagemap_shrinker.
+ *
+ * Create a device-managed drm_pagemap cache. The cache is automatically
+ * destroyed on struct device removal, at which point any *inactive*
+ * drm_pagemap's are destroyed.
+ *
+ * Return: Pointer to a struct drm_pagemap_cache on success. Error pointer
+ * on failure.
+ */
+struct drm_pagemap_cache *drm_pagemap_cache_create_devm(struct drm_pagemap_shrinker *shrinker)
+{
+ struct drm_pagemap_cache *cache = kzalloc(sizeof(*cache), GFP_KERNEL);
+ int err;
+
+ if (!cache)
+ return ERR_PTR(-ENOMEM);
+
+ mutex_init(&cache->lookup_mutex);
+ spin_lock_init(&cache->lock);
+ cache->shrinker = shrinker;
+ init_completion(&cache->queued);
+ err = devm_add_action_or_reset(shrinker->drm->dev, drm_pagemap_cache_fini, cache);
+ if (err)
+ return ERR_PTR(err);
+
+ return cache;
+}
+EXPORT_SYMBOL(drm_pagemap_cache_create_devm);
+
+/**
+ * DOC: Cache lookup
+ *
+ * Cache lookup should be done under a locked mutex, so that a
+ * failed drm_pagemap_get_from_cache() and a following
+ * drm_pagemap_cache_setpagemap() are carried out as an atomic
+ * operation WRT other lookups. Otherwise, racing lookups may
+ * unnecessarily concurrently create pagemaps to fulfill a
+ * failed lookup. The API provides two functions to perform this lock,
+ * drm_pagemap_cache_lock_lookup() and drm_pagemap_cache_unlock_lookup(),
+ * and they should be used in the following way:
+ *
+ * .. code-block:: c
+ *
+ * drm_pagemap_cache_lock_lookup(cache);
+ * dpagemap = drm_pagemap_get_from_cache(cache);
+ * if (dpagemap)
+ * goto out_unlock;
+ *
+ * dpagemap = driver_create_new_dpagemap();
+ * if (!IS_ERR(dpagemap))
+ * drm_pagemap_cache_set_pagemap(cache, dpagemap);
+ *
+ * out_unlock:
+ * drm_pagemap_cache_unlock_lookup(cache);
+ */
+
+/**
+ * drm_pagemap_cache_lock_lookup() - Lock a drm_pagemap_cache for lookup
+ * @cache: The drm_pagemap_cache to lock.
+ *
+ * Return: %-EINTR if interrupted while blocking. %0 otherwise.
+ */
+int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache)
+{
+ return mutex_lock_interruptible(&cache->lookup_mutex);
+}
+EXPORT_SYMBOL(drm_pagemap_cache_lock_lookup);
+
+/**
+ * drm_pagemap_cache_unlock_lookup() - Unlock a drm_pagemap_cache after lookup
+ * @cache: The drm_pagemap_cache to unlock.
+ */
+void drm_pagemap_cache_unlock_lookup(struct drm_pagemap_cache *cache)
+{
+ mutex_unlock(&cache->lookup_mutex);
+}
+EXPORT_SYMBOL(drm_pagemap_cache_unlock_lookup);
+
+/**
+ * drm_pagemap_get_from_cache() - Lookup of drm_pagemaps.
+ * @cache: The cache used for lookup.
+ *
+ * If an active pagemap is present in the cache, it is immediately returned.
+ * If an inactive pagemap is present, it's removed from the shrinker list and
+ * an attempt is made to make it active.
+ * If no pagemap is present, or the attempt to make it active fails, %NULL is returned
+ * to indicate to the caller to create a new drm_pagemap and insert it into
+ * the cache.
+ *
+ * Return: A reference-counted pointer to a drm_pagemap if successful. An error
+ * pointer if an error occurred, or %NULL if no drm_pagemap was found and
+ * the caller should insert a new one.
+ */
+struct drm_pagemap *drm_pagemap_get_from_cache(struct drm_pagemap_cache *cache)
+{
+ struct drm_pagemap *dpagemap;
+ int err;
+
+ lockdep_assert_held(&cache->lookup_mutex);
+retry:
+ spin_lock(&cache->lock);
+ dpagemap = cache->dpagemap;
+ if (drm_pagemap_get_unless_zero(dpagemap)) {
+ spin_unlock(&cache->lock);
+ return dpagemap;
+ }
+
+ if (!dpagemap) {
+ spin_unlock(&cache->lock);
+ return NULL;
+ }
+
+ if (!try_wait_for_completion(&cache->queued)) {
+ spin_unlock(&cache->lock);
+ err = wait_for_completion_interruptible(&cache->queued);
+ if (err)
+ return ERR_PTR(err);
+ goto retry;
+ }
+
+ if (drm_pagemap_shrinker_cancel(dpagemap)) {
+ cache->dpagemap = NULL;
+ spin_unlock(&cache->lock);
+ err = drm_pagemap_reinit(dpagemap);
+ if (err) {
+ drm_pagemap_destroy(dpagemap, false);
+ return ERR_PTR(err);
+ }
+ drm_pagemap_cache_set_pagemap(cache, dpagemap);
+ } else {
+ cache->dpagemap = NULL;
+ spin_unlock(&cache->lock);
+ dpagemap = NULL;
+ }
+
+ return dpagemap;
+}
+EXPORT_SYMBOL(drm_pagemap_get_from_cache);
+
+/**
+ * drm_pagemap_cache_set_pagemap() - Assign a drm_pagemap to a drm_pagemap_cache
+ * @cache: The cache to assign the drm_pagemap to.
+ * @dpagemap: The drm_pagemap to assign.
+ *
+ * The function must be called to populate a drm_pagemap_cache only
+ * after a call to drm_pagemap_get_from_cache() returns NULL.
+ */
+void drm_pagemap_cache_set_pagemap(struct drm_pagemap_cache *cache, struct drm_pagemap *dpagemap)
+{
+ struct drm_device *drm = dpagemap->drm;
+
+ lockdep_assert_held(&cache->lookup_mutex);
+ spin_lock(&cache->lock);
+ dpagemap->cache = cache;
+ swap(cache->dpagemap, dpagemap);
+ reinit_completion(&cache->queued);
+ spin_unlock(&cache->lock);
+ drm_WARN_ON(drm, !!dpagemap);
+}
+EXPORT_SYMBOL(drm_pagemap_cache_set_pagemap);
+
+/**
+ * drm_pagemap_get_from_cache_if_active() - Quick lookup of active drm_pagemaps
+ * @cache: The cache to lookup from.
+ *
+ * Function that should be used to lookup a drm_pagemap that is already active.
+ * (refcount > 0).
+ *
+ * Return: A pointer to the cache's drm_pagemap if it's active; %NULL otherwise.
+ */
+struct drm_pagemap *drm_pagemap_get_from_cache_if_active(struct drm_pagemap_cache *cache)
+{
+ struct drm_pagemap *dpagemap;
+
+ spin_lock(&cache->lock);
+ dpagemap = drm_pagemap_get_unless_zero(cache->dpagemap);
+ spin_unlock(&cache->lock);
+
+ return dpagemap;
+}
+EXPORT_SYMBOL(drm_pagemap_get_from_cache_if_active);
+
+static bool drm_pagemap_shrinker_cancel(struct drm_pagemap *dpagemap)
+{
+ struct drm_pagemap_cache *cache = dpagemap->cache;
+ struct drm_pagemap_shrinker *shrinker = cache->shrinker;
+
+ spin_lock(&shrinker->lock);
+ if (list_empty(&dpagemap->shrink_link)) {
+ spin_unlock(&shrinker->lock);
+ return false;
+ }
+
+ list_del_init(&dpagemap->shrink_link);
+ atomic_dec(&shrinker->num_dpagemaps);
+ spin_unlock(&shrinker->lock);
+ return true;
+}
+
+#ifdef CONFIG_PROVE_LOCKING
+/**
+ * drm_pagemap_shrinker_might_lock() - lockdep test for drm_pagemap_shrinker_add()
+ * @dpagemap: The drm pagemap.
+ *
+ * The drm_pagemap_shrinker_add() function performs some locking.
+ * This function can be called in code-paths that might
+ * call drm_pagemap_shrinker_add() to detect any lockdep problems early.
+ */
+void drm_pagemap_shrinker_might_lock(struct drm_pagemap *dpagemap)
+{
+ int idx;
+
+ if (drm_dev_enter(dpagemap->drm, &idx)) {
+ struct drm_pagemap_cache *cache = dpagemap->cache;
+
+ if (cache)
+ might_lock(&cache->shrinker->lock);
+
+ drm_dev_exit(idx);
+ }
+}
+#endif
+
+/**
+ * drm_pagemap_shrinker_add() - Add a drm_pagemap to the shrinker list or destroy
+ * @dpagemap: The drm_pagemap.
+ *
+ * If @dpagemap is associated with a &struct drm_pagemap_cache AND the
+ * struct device backing the drm device is still alive, add @dpagemap to
+ * the &struct drm_pagemap_shrinker list of shrinkable drm_pagemaps.
+ *
+ * Otherwise destroy the pagemap directly using drm_pagemap_destroy().
+ *
+ * This is an internal function which is not intended to be exposed to drivers.
+ */
+void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap)
+{
+ struct drm_pagemap_cache *cache;
+ struct drm_pagemap_shrinker *shrinker;
+ int idx;
+
+ /*
+ * The pagemap cache and shrinker are disabled at
+ * pci device remove time. After that, dpagemaps
+ * are freed directly.
+ */
+ if (!drm_dev_enter(dpagemap->drm, &idx))
+ goto out_no_cache;
+
+ cache = dpagemap->cache;
+ if (!cache) {
+ drm_dev_exit(idx);
+ goto out_no_cache;
+ }
+
+ shrinker = cache->shrinker;
+ spin_lock(&shrinker->lock);
+ list_add_tail(&dpagemap->shrink_link, &shrinker->dpagemaps);
+ atomic_inc(&shrinker->num_dpagemaps);
+ spin_unlock(&shrinker->lock);
+ complete_all(&cache->queued);
+ drm_dev_exit(idx);
+ return;
+
+out_no_cache:
+ drm_pagemap_destroy(dpagemap, true);
+}
+
+static unsigned long
+drm_pagemap_shrinker_count(struct shrinker *shrink, struct shrink_control *sc)
+{
+ struct drm_pagemap_shrinker *shrinker = shrink->private_data;
+ unsigned long count = atomic_read(&shrinker->num_dpagemaps);
+
+ return count ? : SHRINK_EMPTY;
+}
+
+static unsigned long
+drm_pagemap_shrinker_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+ struct drm_pagemap_shrinker *shrinker = shrink->private_data;
+ struct drm_pagemap *dpagemap;
+ struct drm_pagemap_cache *cache;
+ unsigned long nr_freed = 0;
+
+ sc->nr_scanned = 0;
+ spin_lock(&shrinker->lock);
+ do {
+ dpagemap = list_first_entry_or_null(&shrinker->dpagemaps, typeof(*dpagemap),
+ shrink_link);
+ if (!dpagemap)
+ break;
+
+ atomic_dec(&shrinker->num_dpagemaps);
+ list_del_init(&dpagemap->shrink_link);
+ spin_unlock(&shrinker->lock);
+
+ sc->nr_scanned++;
+ nr_freed++;
+
+ cache = dpagemap->cache;
+ spin_lock(&cache->lock);
+ cache->dpagemap = NULL;
+ spin_unlock(&cache->lock);
+
+ drm_dbg(dpagemap->drm, "Shrinking dpagemap %p.\n", dpagemap);
+ drm_pagemap_destroy(dpagemap, true);
+ spin_lock(&shrinker->lock);
+ } while (sc->nr_scanned < sc->nr_to_scan);
+ spin_unlock(&shrinker->lock);
+
+ return sc->nr_scanned ? nr_freed : SHRINK_STOP;
+}
+
+static void drm_pagemap_shrinker_fini(void *arg)
+{
+ struct drm_pagemap_shrinker *shrinker = arg;
+
+ drm_dbg(shrinker->drm, "Destroying dpagemap shrinker.\n");
+ drm_WARN_ON(shrinker->drm, !!atomic_read(&shrinker->num_dpagemaps));
+ shrinker_free(shrinker->shrink);
+ kfree(shrinker);
+}
+
+/**
+ * drm_pagemap_shrinker_create_devm() - Create and register a pagemap shrinker
+ * @drm: The drm device
+ *
+ * Create and register a pagemap shrinker that shrinks unused pagemaps
+ * and thereby reduces memory footprint.
+ * The shrinker is drm_device managed and unregisters itself when
+ * the drm device is removed.
+ *
+ * Return: Pointer to a struct drm_pagemap_shrinker on success. Error
+ * pointer on failure.
+ */
+struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device *drm)
+{
+ struct drm_pagemap_shrinker *shrinker;
+ struct shrinker *shrink;
+ int err;
+
+ shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL);
+ if (!shrinker)
+ return ERR_PTR(-ENOMEM);
+
+ shrink = shrinker_alloc(0, "drm-drm_pagemap:%s", drm->unique);
+ if (!shrink) {
+ kfree(shrinker);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ spin_lock_init(&shrinker->lock);
+ INIT_LIST_HEAD(&shrinker->dpagemaps);
+ shrinker->drm = drm;
+ shrinker->shrink = shrink;
+ shrink->count_objects = drm_pagemap_shrinker_count;
+ shrink->scan_objects = drm_pagemap_shrinker_scan;
+ shrink->private_data = shrinker;
+ shrinker_register(shrink);
+
+ err = devm_add_action_or_reset(drm->dev, drm_pagemap_shrinker_fini, shrinker);
+ if (err)
+ return ERR_PTR(err);
+
+ return shrinker;
+}
+EXPORT_SYMBOL(drm_pagemap_shrinker_create_devm);
diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
index 5cfe54331ba7..4b9af5e785c6 100644
--- a/include/drm/drm_pagemap.h
+++ b/include/drm/drm_pagemap.h
@@ -9,6 +9,7 @@
#define NR_PAGES(order) (1U << (order))
struct drm_pagemap;
+struct drm_pagemap_cache;
struct drm_pagemap_dev_hold;
struct drm_pagemap_zdd;
struct device;
@@ -124,6 +125,25 @@ struct drm_pagemap_ops {
unsigned long start, unsigned long end,
struct mm_struct *mm,
unsigned long timeslice_ms);
+ /**
+ * @destroy: Destroy the drm_pagemap and associated resources.
+ * @dpagemap: The drm_pagemap to destroy.
+ * @is_atomic_or_reclaim: Whether the function is called from
+ * atomic or reclaim context.
+ *
+ * The implementation should take care not to attempt to
+ * destroy resources that may already have been destroyed
+ * using devm_ callbacks, since this function may be called
+ * after the underlying struct device has been unbound.
+ * If the implementation defers the execution to a work item
+ * to avoid locking issues, then it must make sure the work
+ * items are flushed before module exit. If the destroy call
+ * happens after the provider's pci_remove() callback has
+ * been executed, a module reference and a drm device reference
+ * are held across the destroy callback.
+ */
+ void (*destroy)(struct drm_pagemap *dpagemap,
+ bool is_atomic_or_reclaim);
};
/**
@@ -135,6 +155,10 @@ struct drm_pagemap_ops {
* @pagemap: Pointer to the underlying dev_pagemap.
* @dev_hold: Pointer to a struct drm_pagemap_dev_hold for
* device referencing.
+ * @cache: Back-pointer to the &struct drm_pagemap_cache used for this
+ * &struct drm_pagemap. May be NULL if no cache is used.
+ * @shrink_link: Link into the shrinker's list of drm_pagemaps. Only
+ * used if also using a pagemap cache.
*/
struct drm_pagemap {
const struct drm_pagemap_ops *ops;
@@ -142,6 +166,8 @@ struct drm_pagemap {
struct drm_device *drm;
struct dev_pagemap *pagemap;
struct drm_pagemap_dev_hold *dev_hold;
+ struct drm_pagemap_cache *cache;
+ struct list_head shrink_link;
};
struct drm_pagemap_devmem;
@@ -210,6 +236,11 @@ struct drm_pagemap_devmem_ops {
unsigned long npages);
};
+int drm_pagemap_init(struct drm_pagemap *dpagemap,
+ struct dev_pagemap *pagemap,
+ struct drm_device *drm,
+ const struct drm_pagemap_ops *ops);
+
struct drm_pagemap *drm_pagemap_create(struct drm_device *drm,
struct dev_pagemap *pagemap,
const struct drm_pagemap_ops *ops);
@@ -228,9 +259,9 @@ static inline void drm_pagemap_put(struct drm_pagemap *dpagemap)
/**
* drm_pagemap_get() - Obtain a reference on a struct drm_pagemap
- * @dpagemap: Pointer to the struct drm_pagemap.
+ * @dpagemap: Pointer to the struct drm_pagemap, or NULL.
*
- * Return: Pointer to the struct drm_pagemap.
+ * Return: Pointer to the struct drm_pagemap, or NULL.
*/
static inline struct drm_pagemap *
drm_pagemap_get(struct drm_pagemap *dpagemap)
@@ -241,6 +272,20 @@ drm_pagemap_get(struct drm_pagemap *dpagemap)
return dpagemap;
}
+/**
+ * drm_pagemap_get_unless_zero() - Obtain a reference on a struct drm_pagemap
+ * unless the current reference count is zero.
+ * @dpagemap: Pointer to the drm_pagemap or NULL.
+ *
+ * Return: A pointer to @dpagemap if the reference count was successfully
+ * incremented. NULL if @dpagemap was NULL, or its refcount was 0.
+ */
+static inline struct drm_pagemap * __must_check
+drm_pagemap_get_unless_zero(struct drm_pagemap *dpagemap)
+{
+ return (dpagemap && kref_get_unless_zero(&dpagemap->ref)) ? dpagemap : NULL;
+}
+
/**
* struct drm_pagemap_devmem - Structure representing a GPU SVM device memory allocation
*
@@ -284,5 +329,7 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
struct mm_struct *mm,
unsigned long timeslice_ms);
-#endif
+void drm_pagemap_destroy(struct drm_pagemap *dpagemap, bool is_atomic_or_reclaim);
+int drm_pagemap_reinit(struct drm_pagemap *dpagemap);
+#endif
diff --git a/include/drm/drm_pagemap_util.h b/include/drm/drm_pagemap_util.h
new file mode 100644
index 000000000000..924244d5b899
--- /dev/null
+++ b/include/drm/drm_pagemap_util.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _DRM_PAGEMAP_UTIL_H_
+#define _DRM_PAGEMAP_UTIL_H_
+
+struct drm_device;
+struct drm_pagemap;
+struct drm_pagemap_cache;
+struct drm_pagemap_shrinker;
+
+void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap);
+
+int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache);
+
+void drm_pagemap_cache_unlock_lookup(struct drm_pagemap_cache *cache);
+
+struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device *drm);
+
+struct drm_pagemap_cache *drm_pagemap_cache_create_devm(struct drm_pagemap_shrinker *shrinker);
+
+struct drm_pagemap *drm_pagemap_get_from_cache(struct drm_pagemap_cache *cache);
+
+void drm_pagemap_cache_set_pagemap(struct drm_pagemap_cache *cache, struct drm_pagemap *dpagemap);
+
+struct drm_pagemap *drm_pagemap_get_from_cache_if_active(struct drm_pagemap_cache *cache);
+
+#ifdef CONFIG_PROVE_LOCKING
+
+void drm_pagemap_shrinker_might_lock(struct drm_pagemap *dpagemap);
+
+#else
+
+static inline void drm_pagemap_shrinker_might_lock(struct drm_pagemap *dpagemap)
+{
+}
+
+#endif /* CONFIG_PROVE_LOCKING */
+
+#endif
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH v2 05/17] drm/pagemap: Add a drm_pagemap cache and shrinker
2025-11-11 16:43 ` [PATCH v2 05/17] drm/pagemap: Add a drm_pagemap cache and shrinker Thomas Hellström
@ 2025-11-19 19:28 ` Matthew Brost
0 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2025-11-19 19:28 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, himal.prasad.ghimiray, apopple, airlied,
Simona Vetter, felix.kuehling, Christian König, dakr,
Mrozek, Michal, Joonas Lahtinen
On Tue, Nov 11, 2025 at 05:43:55PM +0100, Thomas Hellström wrote:
> Pagemaps are costly to set up and tear down, and they consume a lot
> of system memory for the struct pages. Ideally they should be
> created only when needed.
>
> Add a caching mechanism to allow doing just that: Create the drm_pagemaps
> when needed for migration. Keep them around to avoid destruction and
> re-creation latencies and destroy inactive/unused drm_pagemaps on memory
> pressure using a shrinker.
>
> Only add the helper functions. They will be hooked up to the xe driver
> in the upcoming patch.
>
> v2:
> - Add lockdep checking for drm_pagemap_put(). (Matt Brost)
> - Add a copyright notice. (Matt Brost)
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/Makefile | 3 +-
> drivers/gpu/drm/drm_pagemap.c | 83 +++++-
> drivers/gpu/drm/drm_pagemap_util.c | 450 +++++++++++++++++++++++++++++
> include/drm/drm_pagemap.h | 53 +++-
> include/drm/drm_pagemap_util.h | 42 +++
> 5 files changed, 613 insertions(+), 18 deletions(-)
> create mode 100644 drivers/gpu/drm/drm_pagemap_util.c
> create mode 100644 include/drm/drm_pagemap_util.h
>
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index 7789f42027ff..04ff0b3e55b0 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -107,7 +107,8 @@ obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o
>
> drm_gpusvm_helper-y := \
> drm_gpusvm.o\
> - drm_pagemap.o
> + drm_pagemap.o\
> + drm_pagemap_util.o
> obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
>
> obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index fb18a80d6a1c..50d3963ddbbc 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -8,6 +8,7 @@
> #include <linux/pagemap.h>
> #include <drm/drm_drv.h>
> #include <drm/drm_pagemap.h>
> +#include <drm/drm_pagemap_util.h>
> #include <drm/drm_print.h>
>
> /**
> @@ -578,7 +579,7 @@ static void drm_pagemap_release(struct kref *ref)
> * pagemap provider drm_device and its module.
> */
> dpagemap->dev_hold = NULL;
> - kfree(dpagemap);
> + drm_pagemap_shrinker_add(dpagemap);
> llist_add(&dev_hold->link, &drm_pagemap_unhold_list);
> schedule_work(&drm_pagemap_work);
> /*
> @@ -628,6 +629,58 @@ drm_pagemap_dev_hold(struct drm_pagemap *dpagemap)
> return dev_hold;
> }
>
> +/**
> + * drm_pagemap_reinit() - Reinitialize a drm_pagemap
> + * @dpagemap: The drm_pagemap to reinitialize
> + *
> + * Reinitialize a drm_pagemap for which drm_pagemap_release()
> + * has already been called. This interface is intended for the
> + * situation where the driver caches a destroyed drm_pagemap.
> + *
> + * Return: 0 on success, negative error code on failure.
> + */
> +int drm_pagemap_reinit(struct drm_pagemap *dpagemap)
> +{
> + dpagemap->dev_hold = drm_pagemap_dev_hold(dpagemap);
> + if (IS_ERR(dpagemap->dev_hold))
> + return PTR_ERR(dpagemap->dev_hold);
> +
> + kref_init(&dpagemap->ref);
> + return 0;
> +}
> +EXPORT_SYMBOL(drm_pagemap_reinit);
> +
> +/**
> + * drm_pagemap_init() - Initialize a pre-allocated drm_pagemap
> + * @dpagemap: The drm_pagemap to initialize.
> + * @pagemap: The associated dev_pagemap providing the device
> + * private pages.
> + * @drm: The drm device. The drm_pagemap holds a reference on the
> + * drm_device and the module owning the drm_device until
> + * drm_pagemap_release(). This facilitates drm_pagemap exporting.
> + * @ops: The drm_pagemap ops.
> + *
> + * Initialize and take an initial reference on a drm_pagemap.
> + * After successful return, use drm_pagemap_put() to destroy.
> + *
> + * Return: 0 on success, negative error code on error.
> + */
> +int drm_pagemap_init(struct drm_pagemap *dpagemap,
> + struct dev_pagemap *pagemap,
> + struct drm_device *drm,
> + const struct drm_pagemap_ops *ops)
> +{
> + kref_init(&dpagemap->ref);
> + dpagemap->ops = ops;
> + dpagemap->pagemap = pagemap;
> + dpagemap->drm = drm;
> + dpagemap->cache = NULL;
> + INIT_LIST_HEAD(&dpagemap->shrink_link);
> +
> + return drm_pagemap_reinit(dpagemap);
> +}
> +EXPORT_SYMBOL(drm_pagemap_init);
> +
> /**
> * drm_pagemap_create() - Create a struct drm_pagemap.
> * @drm: Pointer to a struct drm_device providing the device-private memory.
> @@ -645,22 +698,14 @@ drm_pagemap_create(struct drm_device *drm,
> const struct drm_pagemap_ops *ops)
> {
> struct drm_pagemap *dpagemap = kzalloc(sizeof(*dpagemap), GFP_KERNEL);
> - struct drm_pagemap_dev_hold *dev_hold;
> + int err;
>
> if (!dpagemap)
> return ERR_PTR(-ENOMEM);
>
> - kref_init(&dpagemap->ref);
> - dpagemap->drm = drm;
> - dpagemap->ops = ops;
> - dpagemap->pagemap = pagemap;
> -
> - dev_hold = drm_pagemap_dev_hold(dpagemap);
> - if (IS_ERR(dev_hold)) {
> - kfree(dpagemap);
> - return ERR_CAST(dev_hold);
> - }
> - dpagemap->dev_hold = dev_hold;
> + err = drm_pagemap_init(dpagemap, pagemap, drm, ops);
> + if (err)
> + return ERR_PTR(err);
>
> return dpagemap;
> }
> @@ -675,8 +720,10 @@ EXPORT_SYMBOL(drm_pagemap_create);
> */
> void drm_pagemap_put(struct drm_pagemap *dpagemap)
> {
> - if (likely(dpagemap))
> + if (likely(dpagemap)) {
> + drm_pagemap_shrinker_might_lock(dpagemap);
> kref_put(&dpagemap->ref, drm_pagemap_release);
> + }
> }
> EXPORT_SYMBOL(drm_pagemap_put);
>
> @@ -1023,6 +1070,14 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> }
> EXPORT_SYMBOL(drm_pagemap_populate_mm);
>
> +void drm_pagemap_destroy(struct drm_pagemap *dpagemap, bool is_atomic_or_reclaim)
> +{
> + if (dpagemap->ops->destroy)
> + dpagemap->ops->destroy(dpagemap, is_atomic_or_reclaim);
> + else
> + kfree(dpagemap);
> +}
> +
> static void drm_pagemap_exit(void)
> {
> flush_work(&drm_pagemap_work);
> diff --git a/drivers/gpu/drm/drm_pagemap_util.c b/drivers/gpu/drm/drm_pagemap_util.c
> new file mode 100644
> index 000000000000..84a7a4807bef
> --- /dev/null
> +++ b/drivers/gpu/drm/drm_pagemap_util.c
> @@ -0,0 +1,450 @@
> +// SPDX-License-Identifier: GPL-2.0-only OR MIT
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#include <drm/drm_drv.h>
> +#include <drm/drm_managed.h>
> +#include <drm/drm_pagemap.h>
> +#include <drm/drm_pagemap_util.h>
> +#include <drm/drm_print.h>
> +
> +/**
> + * struct drm_pagemap_cache - Lookup structure for pagemaps
> + *
> + * Structure to keep track of active (refcount > 0) and inactive
> + * (refcount == 0) pagemaps. Inactive pagemaps can be made active
> + * again by waiting for the @queued completion (indicating that the
> + * pagemap has been put on the @shrinker's list of shrinkable
> + * pagemaps), and then successfully removing it from @shrinker's
> + * list. The latter may fail if the shrinker is already in the
> + * process of freeing the pagemap. A struct drm_pagemap_cache can
> + * hold a single struct drm_pagemap.
> + */
> +struct drm_pagemap_cache {
> + /** @lookup_mutex: Mutex making the lookup process atomic */
> + struct mutex lookup_mutex;
> + /** @lock: Lock protecting the @dpagemap pointer */
> + spinlock_t lock;
> + /** @shrinker: Pointer to the shrinker used for this cache. Immutable. */
> + struct drm_pagemap_shrinker *shrinker;
> + /** @dpagemap: Non-refcounted pointer to the drm_pagemap */
> + struct drm_pagemap *dpagemap;
> + /**
> + * @queued: Signals when an inactive drm_pagemap has been put on
> + * @shrinker's list.
> + */
> + struct completion queued;
> +};
> +
> +/**
> + * struct drm_pagemap_shrinker - Shrinker to remove unused pagemaps
> + */
> +struct drm_pagemap_shrinker {
> + /** @drm: Pointer to the drm device. */
> + struct drm_device *drm;
> + /** @lock: Spinlock to protect the @dpagemaps list. */
> + spinlock_t lock;
> + /** @dpagemaps: List of unused dpagemaps. */
> + struct list_head dpagemaps;
> + /** @num_dpagemaps: Number of unused dpagemaps in @dpagemaps. */
> + atomic_t num_dpagemaps;
> + /** @shrink: Pointer to the struct shrinker. */
> + struct shrinker *shrink;
> +};
> +
> +static bool drm_pagemap_shrinker_cancel(struct drm_pagemap *dpagemap);
> +
> +static void drm_pagemap_cache_fini(void *arg)
> +{
> + struct drm_pagemap_cache *cache = arg;
> + struct drm_pagemap *dpagemap;
> +
> + drm_dbg(cache->shrinker->drm, "Destroying dpagemap cache.\n");
> + spin_lock(&cache->lock);
> + dpagemap = cache->dpagemap;
> + if (!dpagemap) {
> + spin_unlock(&cache->lock);
> + goto out;
> + }
> +
> + if (drm_pagemap_shrinker_cancel(dpagemap)) {
> + cache->dpagemap = NULL;
> + spin_unlock(&cache->lock);
> + drm_pagemap_destroy(dpagemap, false);
> + }
> +
> +out:
> + mutex_destroy(&cache->lookup_mutex);
> + kfree(cache);
> +}
> +
> +/**
> + * drm_pagemap_cache_create_devm() - Create a drm_pagemap_cache
> + * @shrinker: Pointer to a struct drm_pagemap_shrinker.
> + *
> + * Create a device-managed drm_pagemap cache. The cache is automatically
> + * destroyed on struct device removal, at which point any *inactive*
> + * drm_pagemap's are destroyed.
> + *
> + * Return: Pointer to a struct drm_pagemap_cache on success. Error pointer
> + * on failure.
> + */
> +struct drm_pagemap_cache *drm_pagemap_cache_create_devm(struct drm_pagemap_shrinker *shrinker)
> +{
> + struct drm_pagemap_cache *cache = kzalloc(sizeof(*cache), GFP_KERNEL);
> + int err;
> +
> + if (!cache)
> + return ERR_PTR(-ENOMEM);
> +
> + mutex_init(&cache->lookup_mutex);
> + spin_lock_init(&cache->lock);
> + cache->shrinker = shrinker;
> + init_completion(&cache->queued);
> + err = devm_add_action_or_reset(shrinker->drm->dev, drm_pagemap_cache_fini, cache);
> + if (err)
> + return ERR_PTR(err);
> +
> + return cache;
> +}
> +EXPORT_SYMBOL(drm_pagemap_cache_create_devm);
> +
> +/**
> + * DOC: Cache lookup
> + *
> + * Cache lookup should be done under a locked mutex, so that a
> + * failed drm_pagemap_get_from_cache() and a following
> + * drm_pagemap_cache_set_pagemap() are carried out as an atomic
> + * operation with respect to other lookups. Otherwise, racing lookups
> + * may unnecessarily create pagemaps concurrently to fulfill a failed
> + * lookup. The API provides two functions to take this lock,
> + * drm_pagemap_cache_lock_lookup() and
> + * drm_pagemap_cache_unlock_lookup(), and they
> + * should be used in the following way:
> + *
> + * .. code-block:: c
> + *
> + * drm_pagemap_cache_lock_lookup(cache);
> + * dpagemap = drm_pagemap_get_from_cache(cache);
> + * if (dpagemap)
> + * goto out_unlock;
> + *
> + * dpagemap = driver_create_new_dpagemap();
> + * if (!IS_ERR(dpagemap))
> + * drm_pagemap_cache_set_pagemap(cache, dpagemap);
> + *
> + * out_unlock:
> + * drm_pagemap_cache_unlock_lookup(cache);
> + */
> +
> +/**
> + * drm_pagemap_cache_lock_lookup() - Lock a drm_pagemap_cache for lookup
> + * @cache: The drm_pagemap_cache to lock.
> + *
> + * Return: %-EINTR if interrupted while blocking. %0 otherwise.
> + */
> +int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache)
> +{
> + return mutex_lock_interruptible(&cache->lookup_mutex);
> +}
> +EXPORT_SYMBOL(drm_pagemap_cache_lock_lookup);
> +
> +/**
> + * drm_pagemap_cache_unlock_lookup() - Unlock a drm_pagemap_cache after lookup
> + * @cache: The drm_pagemap_cache to unlock.
> + */
> +void drm_pagemap_cache_unlock_lookup(struct drm_pagemap_cache *cache)
> +{
> + mutex_unlock(&cache->lookup_mutex);
> +}
> +EXPORT_SYMBOL(drm_pagemap_cache_unlock_lookup);
> +
> +/**
> + * drm_pagemap_get_from_cache() - Lookup of drm_pagemaps.
> + * @cache: The cache used for lookup.
> + *
> + * If an active pagemap is present in the cache, it is immediately returned.
> + * If an inactive pagemap is present, it's removed from the shrinker list and
> + * an attempt is made to make it active.
> + * If no pagemap is present or the attempt to make it active fails, %NULL is returned
> + * to indicate to the caller to create a new drm_pagemap and insert it into
> + * the cache.
> + *
> + * Return: A reference-counted pointer to a drm_pagemap if successful. An error
> + * pointer if an error occurred, or %NULL if no drm_pagemap was found and
> + * the caller should insert a new one.
> + */
> +struct drm_pagemap *drm_pagemap_get_from_cache(struct drm_pagemap_cache *cache)
> +{
> + struct drm_pagemap *dpagemap;
> + int err;
> +
> + lockdep_assert_held(&cache->lookup_mutex);
> +retry:
> + spin_lock(&cache->lock);
> + dpagemap = cache->dpagemap;
> + if (drm_pagemap_get_unless_zero(dpagemap)) {
> + spin_unlock(&cache->lock);
> + return dpagemap;
> + }
> +
> + if (!dpagemap) {
> + spin_unlock(&cache->lock);
> + return NULL;
> + }
> +
> + if (!try_wait_for_completion(&cache->queued)) {
> + spin_unlock(&cache->lock);
> + err = wait_for_completion_interruptible(&cache->queued);
> + if (err)
> + return ERR_PTR(err);
> + goto retry;
> + }
> +
> + if (drm_pagemap_shrinker_cancel(dpagemap)) {
> + cache->dpagemap = NULL;
> + spin_unlock(&cache->lock);
> + err = drm_pagemap_reinit(dpagemap);
> + if (err) {
> + drm_pagemap_destroy(dpagemap, false);
> + return ERR_PTR(err);
> + }
> + drm_pagemap_cache_set_pagemap(cache, dpagemap);
> + } else {
> + cache->dpagemap = NULL;
> + spin_unlock(&cache->lock);
> + dpagemap = NULL;
> + }
> +
> + return dpagemap;
> +}
> +EXPORT_SYMBOL(drm_pagemap_get_from_cache);
> +
> +/**
> + * drm_pagemap_cache_set_pagemap() - Assign a drm_pagemap to a drm_pagemap_cache
> + * @cache: The cache to assign the drm_pagemap to.
> + * @dpagemap: The drm_pagemap to assign.
> + *
> + * The function must be called to populate a drm_pagemap_cache only
> + * after a call to drm_pagemap_get_from_cache() returns NULL.
> + */
> +void drm_pagemap_cache_set_pagemap(struct drm_pagemap_cache *cache, struct drm_pagemap *dpagemap)
> +{
> + struct drm_device *drm = dpagemap->drm;
> +
> + lockdep_assert_held(&cache->lookup_mutex);
> + spin_lock(&cache->lock);
> + dpagemap->cache = cache;
> + swap(cache->dpagemap, dpagemap);
> + reinit_completion(&cache->queued);
> + spin_unlock(&cache->lock);
> + drm_WARN_ON(drm, !!dpagemap);
> +}
> +EXPORT_SYMBOL(drm_pagemap_cache_set_pagemap);
> +
> +/**
> + * drm_pagemap_get_from_cache_if_active() - Quick lookup of active drm_pagemaps
> + * @cache: The cache to lookup from.
> + *
> + * Function that should be used to look up a drm_pagemap that is
> + * already active (refcount > 0).
> + *
> + * Return: A pointer to the cache's drm_pagemap if it's active; %NULL otherwise.
> + */
> +struct drm_pagemap *drm_pagemap_get_from_cache_if_active(struct drm_pagemap_cache *cache)
> +{
> + struct drm_pagemap *dpagemap;
> +
> + spin_lock(&cache->lock);
> + dpagemap = drm_pagemap_get_unless_zero(cache->dpagemap);
> + spin_unlock(&cache->lock);
> +
> + return dpagemap;
> +}
> +EXPORT_SYMBOL(drm_pagemap_get_from_cache_if_active);
> +
> +static bool drm_pagemap_shrinker_cancel(struct drm_pagemap *dpagemap)
> +{
> + struct drm_pagemap_cache *cache = dpagemap->cache;
> + struct drm_pagemap_shrinker *shrinker = cache->shrinker;
> +
> + spin_lock(&shrinker->lock);
> + if (list_empty(&dpagemap->shrink_link)) {
> + spin_unlock(&shrinker->lock);
> + return false;
> + }
> +
> + list_del_init(&dpagemap->shrink_link);
> + atomic_dec(&shrinker->num_dpagemaps);
> + spin_unlock(&shrinker->lock);
> + return true;
> +}
> +
> +#ifdef CONFIG_PROVE_LOCKING
> +/**
> + * drm_pagemap_shrinker_might_lock() - lockdep test for drm_pagemap_shrinker_add()
> + * @dpagemap: The drm pagemap.
> + *
> + * The drm_pagemap_shrinker_add() function performs some locking.
> + * This function can be called in code-paths that might
> + * call drm_pagemap_shrinker_add() to detect any lockdep problems early.
> + */
> +void drm_pagemap_shrinker_might_lock(struct drm_pagemap *dpagemap)
> +{
> + int idx;
> +
> + if (drm_dev_enter(dpagemap->drm, &idx)) {
> + struct drm_pagemap_cache *cache = dpagemap->cache;
> +
> + if (cache)
> + might_lock(&cache->shrinker->lock);
> +
> + drm_dev_exit(idx);
> + }
> +}
> +#endif
> +
> +/**
> + * drm_pagemap_shrinker_add() - Add a drm_pagemap to the shrinker list or destroy
> + * @dpagemap: The drm_pagemap.
> + *
> + * If @dpagemap is associated with a &struct drm_pagemap_cache AND the
> + * struct device backing the drm device is still alive, add @dpagemap to
> + * the &struct drm_pagemap_shrinker list of shrinkable drm_pagemaps.
> + *
> + * Otherwise destroy the pagemap directly using drm_pagemap_destroy().
> + *
> + * This is an internal function which is not intended to be exposed to drivers.
> + */
> +void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap)
> +{
> + struct drm_pagemap_cache *cache;
> + struct drm_pagemap_shrinker *shrinker;
> + int idx;
> +
> + /*
> + * The pagemap cache and shrinker are disabled at
> + * pci device remove time. After that, dpagemaps
> + * are freed directly.
> + */
> + if (!drm_dev_enter(dpagemap->drm, &idx))
> + goto out_no_cache;
> +
> + cache = dpagemap->cache;
> + if (!cache) {
> + drm_dev_exit(idx);
> + goto out_no_cache;
> + }
> +
> + shrinker = cache->shrinker;
> + spin_lock(&shrinker->lock);
> + list_add_tail(&dpagemap->shrink_link, &shrinker->dpagemaps);
> + atomic_inc(&shrinker->num_dpagemaps);
> + spin_unlock(&shrinker->lock);
> + complete_all(&cache->queued);
> + drm_dev_exit(idx);
> + return;
> +
> +out_no_cache:
> + drm_pagemap_destroy(dpagemap, true);
> +}
> +
> +static unsigned long
> +drm_pagemap_shrinker_count(struct shrinker *shrink, struct shrink_control *sc)
> +{
> + struct drm_pagemap_shrinker *shrinker = shrink->private_data;
> + unsigned long count = atomic_read(&shrinker->num_dpagemaps);
> +
> + return count ? : SHRINK_EMPTY;
> +}
> +
> +static unsigned long
> +drm_pagemap_shrinker_scan(struct shrinker *shrink, struct shrink_control *sc)
> +{
> + struct drm_pagemap_shrinker *shrinker = shrink->private_data;
> + struct drm_pagemap *dpagemap;
> + struct drm_pagemap_cache *cache;
> + unsigned long nr_freed = 0;
> +
> + sc->nr_scanned = 0;
> + spin_lock(&shrinker->lock);
> + do {
> + dpagemap = list_first_entry_or_null(&shrinker->dpagemaps, typeof(*dpagemap),
> + shrink_link);
> + if (!dpagemap)
> + break;
> +
> + atomic_dec(&shrinker->num_dpagemaps);
> + list_del_init(&dpagemap->shrink_link);
> + spin_unlock(&shrinker->lock);
> +
> + sc->nr_scanned++;
> + nr_freed++;
> +
> + cache = dpagemap->cache;
> + spin_lock(&cache->lock);
> + cache->dpagemap = NULL;
> + spin_unlock(&cache->lock);
> +
> + drm_dbg(dpagemap->drm, "Shrinking dpagemap %p.\n", dpagemap);
> + drm_pagemap_destroy(dpagemap, true);
> + spin_lock(&shrinker->lock);
> + } while (sc->nr_scanned < sc->nr_to_scan);
> + spin_unlock(&shrinker->lock);
> +
> + return sc->nr_scanned ? nr_freed : SHRINK_STOP;
> +}
> +
> +static void drm_pagemap_shrinker_fini(void *arg)
> +{
> + struct drm_pagemap_shrinker *shrinker = arg;
> +
> + drm_dbg(shrinker->drm, "Destroying dpagemap shrinker.\n");
> + drm_WARN_ON(shrinker->drm, !!atomic_read(&shrinker->num_dpagemaps));
> + shrinker_free(shrinker->shrink);
> + kfree(shrinker);
> +}
> +
> +/**
> + * drm_pagemap_shrinker_create_devm() - Create and register a pagemap shrinker
> + * @drm: The drm device
> + *
> + * Create and register a pagemap shrinker that shrinks unused pagemaps
> + * and thereby reduces memory footprint.
> + * The shrinker is drm_device managed and unregisters itself when
> + * the drm device is removed.
> + *
> + * Return: Pointer to a struct drm_pagemap_shrinker on success. Error
> + * pointer on failure.
> + */
> +struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device *drm)
> +{
> + struct drm_pagemap_shrinker *shrinker;
> + struct shrinker *shrink;
> + int err;
> +
> + shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL);
> + if (!shrinker)
> + return ERR_PTR(-ENOMEM);
> +
> + shrink = shrinker_alloc(0, "drm-drm_pagemap:%s", drm->unique);
> + if (!shrink) {
> + kfree(shrinker);
> + return ERR_PTR(-ENOMEM);
> + }
> +
> + spin_lock_init(&shrinker->lock);
> + INIT_LIST_HEAD(&shrinker->dpagemaps);
> + shrinker->drm = drm;
> + shrinker->shrink = shrink;
> + shrink->count_objects = drm_pagemap_shrinker_count;
> + shrink->scan_objects = drm_pagemap_shrinker_scan;
> + shrink->private_data = shrinker;
> + shrinker_register(shrink);
> +
> + err = devm_add_action_or_reset(drm->dev, drm_pagemap_shrinker_fini, shrinker);
> + if (err)
> + return ERR_PTR(err);
> +
> + return shrinker;
> +}
> +EXPORT_SYMBOL(drm_pagemap_shrinker_create_devm);
> diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> index 5cfe54331ba7..4b9af5e785c6 100644
> --- a/include/drm/drm_pagemap.h
> +++ b/include/drm/drm_pagemap.h
> @@ -9,6 +9,7 @@
> #define NR_PAGES(order) (1U << (order))
>
> struct drm_pagemap;
> +struct drm_pagemap_cache;
> struct drm_pagemap_dev_hold;
> struct drm_pagemap_zdd;
> struct device;
> @@ -124,6 +125,25 @@ struct drm_pagemap_ops {
> unsigned long start, unsigned long end,
> struct mm_struct *mm,
> unsigned long timeslice_ms);
> + /**
> + * @destroy: Destroy the drm_pagemap and associated resources.
> + * @dpagemap: The drm_pagemap to destroy.
> + * @is_atomic_or_reclaim: Whether the function is called from
> + * atomic or reclaim context.
> + *
> + * The implementation should take care not to attempt to
> + * destroy resources that may already have been destroyed
> + * using devm_ callbacks, since this function may be called
> + * after the underlying struct device has been unbound.
> + * If the implementation defers the execution to a work item
> + * to avoid locking issues, then it must make sure the work
> + * items are flushed before module exit. If the destroy call
> + * happens after the provider's pci_remove() callback has
> + * been executed, a module reference and a drm device reference
> + * are held across the destroy callback.
> + */
> + void (*destroy)(struct drm_pagemap *dpagemap,
> + bool is_atomic_or_reclaim);
> };
>
> /**
> @@ -135,6 +155,10 @@ struct drm_pagemap_ops {
> * @pagemap: Pointer to the underlying dev_pagemap.
> * @dev_hold: Pointer to a struct drm_pagemap_dev_hold for
> * device referencing.
> + * @cache: Back-pointer to the &struct drm_pagemap_cache used for this
> + * &struct drm_pagemap. May be NULL if no cache is used.
> + * @shrink_link: Link into the shrinker's list of drm_pagemaps. Only
> + * used if also using a pagemap cache.
> */
> struct drm_pagemap {
> const struct drm_pagemap_ops *ops;
> @@ -142,6 +166,8 @@ struct drm_pagemap {
> struct drm_device *drm;
> struct dev_pagemap *pagemap;
> struct drm_pagemap_dev_hold *dev_hold;
> + struct drm_pagemap_cache *cache;
> + struct list_head shrink_link;
> };
>
> struct drm_pagemap_devmem;
> @@ -210,6 +236,11 @@ struct drm_pagemap_devmem_ops {
> unsigned long npages);
> };
>
> +int drm_pagemap_init(struct drm_pagemap *dpagemap,
> + struct dev_pagemap *pagemap,
> + struct drm_device *drm,
> + const struct drm_pagemap_ops *ops);
> +
> struct drm_pagemap *drm_pagemap_create(struct drm_device *drm,
> struct dev_pagemap *pagemap,
> const struct drm_pagemap_ops *ops);
> @@ -228,9 +259,9 @@ static inline void drm_pagemap_put(struct drm_pagemap *dpagemap)
>
> /**
> * drm_pagemap_get() - Obtain a reference on a struct drm_pagemap
> - * @dpagemap: Pointer to the struct drm_pagemap.
> + * @dpagemap: Pointer to the struct drm_pagemap, or NULL.
> *
> - * Return: Pointer to the struct drm_pagemap.
> + * Return: Pointer to the struct drm_pagemap, or NULL.
> */
> static inline struct drm_pagemap *
> drm_pagemap_get(struct drm_pagemap *dpagemap)
> @@ -241,6 +272,20 @@ drm_pagemap_get(struct drm_pagemap *dpagemap)
> return dpagemap;
> }
>
> +/**
> + * drm_pagemap_get_unless_zero() - Obtain a reference on a struct drm_pagemap
> + * unless the current reference count is zero.
> + * @dpagemap: Pointer to the drm_pagemap or NULL.
> + *
> + * Return: A pointer to @dpagemap if the reference count was successfully
> + * incremented. NULL if @dpagemap was NULL, or its refcount was 0.
> + */
> +static inline struct drm_pagemap * __must_check
> +drm_pagemap_get_unless_zero(struct drm_pagemap *dpagemap)
> +{
> + return (dpagemap && kref_get_unless_zero(&dpagemap->ref)) ? dpagemap : NULL;
> +}
> +
> /**
> * struct drm_pagemap_devmem - Structure representing a GPU SVM device memory allocation
> *
> @@ -284,5 +329,7 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> struct mm_struct *mm,
> unsigned long timeslice_ms);
>
> -#endif
> +void drm_pagemap_destroy(struct drm_pagemap *dpagemap, bool is_atomic_or_reclaim);
>
> +int drm_pagemap_reinit(struct drm_pagemap *dpagemap);
> +#endif
> diff --git a/include/drm/drm_pagemap_util.h b/include/drm/drm_pagemap_util.h
> new file mode 100644
> index 000000000000..924244d5b899
> --- /dev/null
> +++ b/include/drm/drm_pagemap_util.h
> @@ -0,0 +1,42 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#ifndef _DRM_PAGEMAP_UTIL_H_
> +#define _DRM_PAGEMAP_UTIL_H_
> +
> +struct drm_device;
> +struct drm_pagemap;
> +struct drm_pagemap_cache;
> +struct drm_pagemap_shrinker;
> +
> +void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap);
> +
> +int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache);
> +
> +void drm_pagemap_cache_unlock_lookup(struct drm_pagemap_cache *cache);
> +
> +struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device *drm);
> +
> +struct drm_pagemap_cache *drm_pagemap_cache_create_devm(struct drm_pagemap_shrinker *shrinker);
> +
> +struct drm_pagemap *drm_pagemap_get_from_cache(struct drm_pagemap_cache *cache);
> +
> +void drm_pagemap_cache_set_pagemap(struct drm_pagemap_cache *cache, struct drm_pagemap *dpagemap);
> +
> +struct drm_pagemap *drm_pagemap_get_from_cache_if_active(struct drm_pagemap_cache *cache);
> +
> +#ifdef CONFIG_PROVE_LOCKING
> +
> +void drm_pagemap_shrinker_might_lock(struct drm_pagemap *dpagemap);
> +
> +#else
> +
> +static inline void drm_pagemap_shrinker_might_lock(struct drm_pagemap *dpagemap)
> +{
> +}
> +
> +#endif /* CONFIG_PROVE_LOCKING */
> +
> +#endif
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 06/17] drm/xe: Use the drm_pagemap cache and shrinker
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (4 preceding siblings ...)
2025-11-11 16:43 ` [PATCH v2 05/17] drm/pagemap: Add a drm_pagemap cache and shrinker Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-11 16:43 ` [PATCH v2 07/17] drm/pagemap: Remove the drm_pagemap_create() interface Thomas Hellström
` (15 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
Define a struct xe_pagemap that embeds all pagemap-related
data used by xekmd, and use the drm_pagemap cache and
shrinker to manage its lifetime.
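As background for the patch below: embedding both the dev_pagemap and the
drm_pagemap in struct xe_pagemap means a device-private page can be converted
back to the driver structures with container_of(), and a host physical address
translated to a device physical one. A minimal userspace sketch of that pattern
(mock types and illustrative names, not the real xe structures):

```c
#include <assert.h>
#include <stddef.h>

/* Mock types mirroring the embedding introduced by this patch;
 * the field names follow the patch, everything else is illustrative. */
struct dev_pagemap { int type; };
struct drm_pagemap { int ref; };
struct xe_vram_region { unsigned long dpa_base; };

struct xe_pagemap {
	struct dev_pagemap pagemap;	/* provides the struct pages */
	struct drm_pagemap dpagemap;	/* manages allocation/migration */
	unsigned long hpa_base;		/* host physical base of the mapping */
	struct xe_vram_region *vr;	/* back-pointer to the VRAM region */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* As xe_page_to_pagemap(): recover the xe_pagemap from the
 * dev_pagemap pointer stored in the page. */
static struct xe_pagemap *to_xe_pagemap(struct dev_pagemap *pgmap)
{
	return container_of(pgmap, struct xe_pagemap, pagemap);
}

/* As xe_page_to_dpa(): host physical -> device physical address. */
static unsigned long hpa_to_dpa(struct xe_pagemap *xp, unsigned long hpa)
{
	return xp->vr->dpa_base + (hpa - xp->hpa_base);
}
```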
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 6 +
drivers/gpu/drm/xe/xe_device_types.h | 5 +
drivers/gpu/drm/xe/xe_svm.c | 354 +++++++++++++++++++++------
drivers/gpu/drm/xe/xe_svm.h | 38 ++-
drivers/gpu/drm/xe/xe_tile.c | 34 ++-
drivers/gpu/drm/xe/xe_tile.h | 21 ++
drivers/gpu/drm/xe/xe_vm_types.h | 1 +
drivers/gpu/drm/xe/xe_vram_types.h | 15 +-
8 files changed, 379 insertions(+), 95 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index c7d373c70f0f..ff598d0c68d7 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -16,6 +16,7 @@
#include <drm/drm_gem_ttm_helper.h>
#include <drm/drm_ioctl.h>
#include <drm/drm_managed.h>
+#include <drm/drm_pagemap_util.h>
#include <drm/drm_print.h>
#include <uapi/drm/xe_drm.h>
@@ -63,6 +64,7 @@
#include "xe_shrinker.h"
#include "xe_survivability_mode.h"
#include "xe_sriov.h"
+#include "xe_svm.h"
#include "xe_tile.h"
#include "xe_ttm_stolen_mgr.h"
#include "xe_ttm_sys_mgr.h"
@@ -466,6 +468,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
init_rwsem(&xe->usm.lock);
+ err = xe_pagemap_shrinker_create(xe);
+ if (err)
+ goto err;
+
xa_init_flags(&xe->usm.asid_to_vm, XA_FLAGS_ALLOC);
if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 7baf15f51575..6d1160dc3103 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -36,6 +36,7 @@
#endif
struct dram_info;
+struct drm_pagemap_shrinker;
struct intel_display;
struct intel_dg_nvm_dev;
struct xe_ggtt;
@@ -429,6 +430,10 @@ struct xe_device {
#define XE_PAGEFAULT_QUEUE_COUNT 4
/** @usm.pf_queue: Page fault queues */
struct xe_pagefault_queue pf_queue[XE_PAGEFAULT_QUEUE_COUNT];
+#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
+ /** @usm.pagemap_shrinker: Shrinker for unused pagemaps */
+ struct drm_pagemap_shrinker *dpagemap_shrinker;
+#endif
} usm;
/** @pinned: pinned BO state */
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index aab939fbcf80..025c0a3aed8b 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -4,6 +4,9 @@
*/
#include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
+#include <drm/drm_pagemap.h>
+#include <drm/drm_pagemap_util.h>
#include "xe_bo.h"
#include "xe_exec_queue_types.h"
@@ -19,6 +22,8 @@
#include "xe_vm_types.h"
#include "xe_vram_types.h"
+static int xe_svm_get_pagemaps(struct xe_vm *vm);
+
static bool xe_svm_range_in_vram(struct xe_svm_range *range)
{
/*
@@ -394,22 +399,34 @@ static void xe_svm_garbage_collector_work_func(struct work_struct *w)
#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
-static struct xe_vram_region *page_to_vr(struct page *page)
+static struct xe_vram_region *xe_pagemap_to_vr(struct xe_pagemap *xpagemap)
{
- return container_of(page_pgmap(page), struct xe_vram_region, pagemap);
+ return xpagemap->vr;
}
-static u64 xe_vram_region_page_to_dpa(struct xe_vram_region *vr,
- struct page *page)
+static struct xe_pagemap *xe_page_to_pagemap(struct page *page)
{
- u64 dpa;
+ return container_of(page_pgmap(page), struct xe_pagemap, pagemap);
+}
+
+static struct xe_vram_region *xe_page_to_vr(struct page *page)
+{
+ return xe_pagemap_to_vr(xe_page_to_pagemap(page));
+}
+
+static u64 xe_page_to_dpa(struct page *page)
+{
+ struct xe_pagemap *xpagemap = xe_page_to_pagemap(page);
+ struct xe_vram_region *vr = xe_pagemap_to_vr(xpagemap);
+ u64 hpa_base = xpagemap->hpa_base;
u64 pfn = page_to_pfn(page);
u64 offset;
+ u64 dpa;
xe_assert(vr->xe, is_device_private_page(page));
- xe_assert(vr->xe, (pfn << PAGE_SHIFT) >= vr->hpa_base);
+ xe_assert(vr->xe, (pfn << PAGE_SHIFT) >= hpa_base);
- offset = (pfn << PAGE_SHIFT) - vr->hpa_base;
+ offset = (pfn << PAGE_SHIFT) - hpa_base;
dpa = vr->dpa_base + offset;
return dpa;
@@ -513,11 +530,11 @@ static int xe_svm_copy(struct page **pages,
continue;
if (!vr && spage) {
- vr = page_to_vr(spage);
+ vr = xe_page_to_vr(spage);
gt = xe_migrate_exec_queue(vr->migrate)->gt;
xe = vr->xe;
}
- XE_WARN_ON(spage && page_to_vr(spage) != vr);
+ XE_WARN_ON(spage && xe_page_to_vr(spage) != vr);
/*
* CPU page and device page valid, capture physical address on
@@ -525,7 +542,7 @@ static int xe_svm_copy(struct page **pages,
* device pages.
*/
if (pagemap_addr[i].addr && spage) {
- __vram_addr = xe_vram_region_page_to_dpa(vr, spage);
+ __vram_addr = xe_page_to_dpa(spage);
if (vram_addr == XE_VRAM_ADDR_INVALID) {
vram_addr = __vram_addr;
pos = i;
@@ -671,9 +688,11 @@ static void xe_svm_devmem_release(struct drm_pagemap_devmem *devmem_allocation)
xe_pm_runtime_put(xe);
}
-static u64 block_offset_to_pfn(struct xe_vram_region *vr, u64 offset)
+static u64 block_offset_to_pfn(struct drm_pagemap *dpagemap, u64 offset)
{
- return PHYS_PFN(offset + vr->hpa_base);
+ struct xe_pagemap *xpagemap = container_of(dpagemap, typeof(*xpagemap), dpagemap);
+
+ return PHYS_PFN(offset + xpagemap->hpa_base);
}
static struct drm_buddy *vram_to_buddy(struct xe_vram_region *vram)
@@ -693,7 +712,8 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati
list_for_each_entry(block, blocks, link) {
struct xe_vram_region *vr = block->private;
struct drm_buddy *buddy = vram_to_buddy(vr);
- u64 block_pfn = block_offset_to_pfn(vr, drm_buddy_block_offset(block));
+ u64 block_pfn = block_offset_to_pfn(devmem_allocation->dpagemap,
+ drm_buddy_block_offset(block));
int i;
for (i = 0; i < drm_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i)
@@ -710,6 +730,11 @@ static const struct drm_pagemap_devmem_ops dpagemap_devmem_ops = {
.copy_to_ram = xe_svm_copy_to_ram,
};
+#else
+static int xe_svm_get_pagemaps(struct xe_vm *vm)
+{
+ return 0;
+}
#endif
static const struct drm_gpusvm_ops gpusvm_ops = {
@@ -724,6 +749,26 @@ static const unsigned long fault_chunk_sizes[] = {
SZ_4K,
};
+static void xe_pagemap_put(struct xe_pagemap *xpagemap)
+{
+ drm_pagemap_put(&xpagemap->dpagemap);
+}
+
+static void xe_svm_put_pagemaps(struct xe_vm *vm)
+{
+ struct xe_device *xe = vm->xe;
+ struct xe_tile *tile;
+ int id;
+
+ for_each_tile(tile, xe, id) {
+ struct xe_pagemap *xpagemap = vm->svm.pagemaps[id];
+
+ if (xpagemap)
+ xe_pagemap_put(xpagemap);
+ vm->svm.pagemaps[id] = NULL;
+ }
+}
+
/**
* xe_svm_init() - SVM initialize
* @vm: The VM.
@@ -742,12 +787,21 @@ int xe_svm_init(struct xe_vm *vm)
INIT_WORK(&vm->svm.garbage_collector.work,
xe_svm_garbage_collector_work_func);
+ err = xe_svm_get_pagemaps(vm);
+ if (err)
+ return err;
+
err = drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM", &vm->xe->drm,
current->mm, 0, vm->size,
xe_modparam.svm_notifier_size * SZ_1M,
&gpusvm_ops, fault_chunk_sizes,
ARRAY_SIZE(fault_chunk_sizes));
drm_gpusvm_driver_set_lock(&vm->svm.gpusvm, &vm->lock);
+
+ if (err) {
+ xe_svm_put_pagemaps(vm);
+ return err;
+ }
} else {
err = drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM (simple)",
&vm->xe->drm, NULL, 0, 0, 0, NULL,
@@ -767,6 +821,7 @@ void xe_svm_close(struct xe_vm *vm)
{
xe_assert(vm->xe, xe_vm_is_closed(vm));
flush_work(&vm->svm.garbage_collector.work);
+ xe_svm_put_pagemaps(vm);
}
/**
@@ -860,7 +915,8 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
struct mm_struct *mm,
unsigned long timeslice_ms)
{
- struct xe_vram_region *vr = container_of(dpagemap->pagemap, typeof(*vr), pagemap);
+ struct xe_pagemap *xpagemap = container_of(dpagemap, typeof(*xpagemap), dpagemap);
+ struct xe_vram_region *vr = xe_pagemap_to_vr(xpagemap);
struct xe_device *xe = vr->xe;
struct device *dev = xe->drm.dev;
struct drm_buddy_block *block;
@@ -1369,11 +1425,6 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
-static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
-{
- return tile->mem.vram->dpagemap;
-}
-
/**
* xe_vma_resolve_pagemap - Resolve the appropriate DRM pagemap for a VMA
* @vma: Pointer to the xe_vma structure containing memory attributes
@@ -1399,7 +1450,7 @@ struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *t
return NULL;
if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE)
- return IS_DGFX(tile_to_xe(tile)) ? tile_local_pagemap(tile) : NULL;
+ return IS_DGFX(tile_to_xe(tile)) ? xe_tile_local_pagemap(tile) : NULL;
/* TODO: Support multi-device with drm_pagemap_from_fd(fd) */
return NULL;
@@ -1422,7 +1473,7 @@ int xe_svm_alloc_vram(struct xe_tile *tile, struct xe_svm_range *range,
xe_assert(tile_to_xe(tile), range->base.pages.flags.migrate_devmem);
range_debug(range, "ALLOCATE VRAM");
- dpagemap = tile_local_pagemap(tile);
+ dpagemap = xe_tile_local_pagemap(tile);
return drm_pagemap_populate_mm(dpagemap, xe_svm_range_start(range),
xe_svm_range_end(range),
range->base.gpusvm->mm,
@@ -1441,7 +1492,7 @@ xe_drm_pagemap_device_map(struct drm_pagemap *dpagemap,
dma_addr_t addr;
if (pgmap_dev == dev) {
- addr = xe_vram_region_page_to_dpa(page_to_vr(page), page);
+ addr = xe_page_to_dpa(page);
prot = XE_INTERCONNECT_VRAM;
} else {
addr = DMA_MAPPING_ERROR;
@@ -1451,94 +1502,243 @@ xe_drm_pagemap_device_map(struct drm_pagemap *dpagemap,
return drm_pagemap_addr_encode(addr, prot, order, dir);
}
-static const struct drm_pagemap_ops xe_drm_pagemap_ops = {
- .device_map = xe_drm_pagemap_device_map,
- .populate_mm = xe_drm_pagemap_populate_mm,
-};
+static void xe_pagemap_destroy_work(struct work_struct *work)
+{
+ struct xe_pagemap *xpagemap = container_of(work, typeof(*xpagemap), destroy_work);
+ struct dev_pagemap *pagemap = &xpagemap->pagemap;
+ struct drm_device *drm = xpagemap->dpagemap.drm;
+ int idx;
-static void xe_devm_release(void *data)
+ /*
+ * Only unmap / release if the devm_ release actions haven't run yet.
+ * Otherwise the devm_ callbacks have already released the resources,
+ * or will do so shortly.
+ */
+ if (drm_dev_enter(drm, &idx)) {
+ devm_memunmap_pages(drm->dev, pagemap);
+ devm_release_mem_region(drm->dev, pagemap->range.start,
+ pagemap->range.end - pagemap->range.start + 1);
+ drm_dev_exit(idx);
+ }
+ kfree(xpagemap);
+}
+
+static void xe_pagemap_destroy(struct drm_pagemap *dpagemap, bool from_atomic_or_reclaim)
{
- struct xe_vram_region *vr = data;
+ struct xe_pagemap *xpagemap = container_of(dpagemap, typeof(*xpagemap), dpagemap);
+ struct xe_device *xe = to_xe_device(dpagemap->drm);
- drm_pagemap_put(vr->dpagemap);
- vr->dpagemap = NULL;
+ if (from_atomic_or_reclaim)
+ queue_work(xe->destroy_wq, &xpagemap->destroy_work);
+ else
+ xe_pagemap_destroy_work(&xpagemap->destroy_work);
}
+static const struct drm_pagemap_ops xe_drm_pagemap_ops = {
+ .device_map = xe_drm_pagemap_device_map,
+ .populate_mm = xe_drm_pagemap_populate_mm,
+ .destroy = xe_pagemap_destroy,
+};
+
/**
- * xe_devm_add: Remap and provide memmap backing for device memory
- * @tile: tile that the memory region belongs to
- * @vr: vram memory region to remap
+ * xe_pagemap_create() - Create a struct xe_pagemap object
+ * @xe: The xe device.
+ * @vr: Back-pointer to the struct xe_vram_region.
*
- * This remap device memory to host physical address space and create
- * struct page to back device memory
+ * Allocate and initialize a struct xe_pagemap. On successful
+ * return, drm_pagemap_put() on the embedded struct drm_pagemap
+ * should be used to unreference.
*
- * Return: 0 on success standard error code otherwise
+ * Return: Pointer to a struct xe_pagemap if successful. Error pointer
+ * on failure.
*/
-int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
+static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram_region *vr)
{
- struct xe_device *xe = tile_to_xe(tile);
- struct device *dev = &to_pci_dev(xe->drm.dev)->dev;
+ struct device *dev = xe->drm.dev;
+ struct xe_pagemap *xpagemap;
+ struct dev_pagemap *pagemap;
+ struct drm_pagemap *dpagemap;
struct resource *res;
void *addr;
- int ret;
+ int err;
+
+ xpagemap = kzalloc(sizeof(*xpagemap), GFP_KERNEL);
+ if (!xpagemap)
+ return ERR_PTR(-ENOMEM);
+
+ pagemap = &xpagemap->pagemap;
+ dpagemap = &xpagemap->dpagemap;
+ INIT_WORK(&xpagemap->destroy_work, xe_pagemap_destroy_work);
+ xpagemap->vr = vr;
+
+ err = drm_pagemap_init(dpagemap, pagemap, &xe->drm, &xe_drm_pagemap_ops);
+ if (err)
+ goto out_no_dpagemap;
res = devm_request_free_mem_region(dev, &iomem_resource,
vr->usable_size);
if (IS_ERR(res)) {
- ret = PTR_ERR(res);
- return ret;
+ err = PTR_ERR(res);
+ goto out_err;
}
- vr->dpagemap = drm_pagemap_create(&xe->drm, &vr->pagemap,
- &xe_drm_pagemap_ops);
- if (IS_ERR(vr->dpagemap)) {
- drm_err(&xe->drm, "Failed to create drm_pagemap tile %d memory: %pe\n",
- tile->id, vr->dpagemap);
- ret = PTR_ERR(vr->dpagemap);
- goto out_no_dpagemap;
+ pagemap->type = MEMORY_DEVICE_PRIVATE;
+ pagemap->range.start = res->start;
+ pagemap->range.end = res->end;
+ pagemap->nr_range = 1;
+ pagemap->owner = xe_svm_devm_owner(xe);
+ pagemap->ops = drm_pagemap_pagemap_ops_get();
+ addr = devm_memremap_pages(dev, pagemap);
+ if (IS_ERR(addr)) {
+ err = PTR_ERR(addr);
+ devm_release_mem_region(dev, res->start, res->end - res->start + 1);
+ goto out_err;
}
- ret = devm_add_action_or_reset(dev, xe_devm_release, vr);
- if (ret)
- goto out_no_dpagemap;
+ xpagemap->hpa_base = res->start;
+ return xpagemap;
- vr->pagemap.type = MEMORY_DEVICE_PRIVATE;
- vr->pagemap.range.start = res->start;
- vr->pagemap.range.end = res->end;
- vr->pagemap.nr_range = 1;
- vr->pagemap.ops = drm_pagemap_pagemap_ops_get();
- vr->pagemap.owner = xe_svm_devm_owner(xe);
- addr = devm_memremap_pages(dev, &vr->pagemap);
- if (IS_ERR(addr)) {
- ret = PTR_ERR(addr);
- drm_err(&xe->drm, "Failed to remap tile %d memory, errno %pe\n",
- tile->id, ERR_PTR(ret));
- goto out_failed_memremap;
+out_err:
+ drm_pagemap_put(dpagemap);
+ return ERR_PTR(err);
+
+out_no_dpagemap:
+ kfree(xpagemap);
+ return ERR_PTR(err);
+}
+
+/**
+ * xe_pagemap_find_or_create() - Find or create a struct xe_pagemap
+ * @xe: The xe device.
+ * @cache: The struct xe_pagemap_cache.
+ * @vr: The VRAM region.
+ *
+ * Check if there is an already used xe_pagemap for this tile, and in that case,
+ * return it.
+ * If not, check if there is a cached xe_pagemap for this tile, and in that case,
+ * cancel its destruction, re-initialize it and return it.
+ * Finally, if there is no cached or already used pagemap, create one and
+ * register it in the tile's pagemap cache.
+ *
+ * Note that this function is typically called from within an IOCTL, and waits
+ * are therefore carried out interruptibly if possible.
+ *
+ * Return: A pointer to a struct xe_pagemap if successful, Error pointer on failure.
+ */
+static struct xe_pagemap *
+xe_pagemap_find_or_create(struct xe_device *xe, struct drm_pagemap_cache *cache,
+ struct xe_vram_region *vr)
+{
+ struct drm_pagemap *dpagemap;
+ struct xe_pagemap *xpagemap;
+ int err;
+
+ err = drm_pagemap_cache_lock_lookup(cache);
+ if (err)
+ return ERR_PTR(err);
+
+ dpagemap = drm_pagemap_get_from_cache(cache);
+ if (IS_ERR(dpagemap)) {
+ xpagemap = ERR_CAST(dpagemap);
+ } else if (!dpagemap) {
+ xpagemap = xe_pagemap_create(xe, vr);
+ if (IS_ERR(xpagemap))
+ goto out_unlock;
+ drm_pagemap_cache_set_pagemap(cache, &xpagemap->dpagemap);
+ } else {
+ xpagemap = container_of(dpagemap, typeof(*xpagemap), dpagemap);
+ }
+
+out_unlock:
+ drm_pagemap_cache_unlock_lookup(cache);
+ return xpagemap;
+}
+
+static int xe_svm_get_pagemaps(struct xe_vm *vm)
+{
+ struct xe_device *xe = vm->xe;
+ struct xe_pagemap *xpagemap;
+ struct xe_tile *tile;
+ int id;
+
+ for_each_tile(tile, xe, id) {
+ struct xe_vram_region *vr;
+
+ if (!((BIT(id) << 1) & xe->info.mem_region_mask))
+ continue;
+
+ vr = xe_tile_to_vr(tile);
+ xpagemap = xe_pagemap_find_or_create(xe, vr->dpagemap_cache, vr);
+ if (IS_ERR(xpagemap))
+ break;
+ vm->svm.pagemaps[id] = xpagemap;
+ }
+
+ if (IS_ERR(xpagemap)) {
+ xe_svm_put_pagemaps(vm);
+ return PTR_ERR(xpagemap);
}
- vr->hpa_base = res->start;
- drm_dbg(&xe->drm, "Added tile %d memory [%llx-%llx] to devm, remapped to %pr\n",
- tile->id, vr->io_start, vr->io_start + vr->usable_size, res);
return 0;
+}
-out_failed_memremap:
- drm_pagemap_put(vr->dpagemap);
-out_no_dpagemap:
- devm_release_mem_region(dev, res->start, resource_size(res));
- return ret;
+/**
+ * xe_pagemap_shrinker_create() - Create a drm_pagemap shrinker
+ * @xe: The xe device
+ *
+ * Create a drm_pagemap shrinker and register with the xe device.
+ *
+ * Return: %0 on success, negative error code on failure.
+ */
+int xe_pagemap_shrinker_create(struct xe_device *xe)
+{
+ xe->usm.dpagemap_shrinker = drm_pagemap_shrinker_create_devm(&xe->drm);
+ return PTR_ERR_OR_ZERO(xe->usm.dpagemap_shrinker);
}
+
+/**
+ * xe_pagemap_cache_create() - Create a drm_pagemap cache
+ * @tile: The tile to register the cache with
+ *
+ * Create a drm_pagemap cache and register with the tile.
+ *
+ * Return: %0 on success, negative error code on failure.
+ */
+int xe_pagemap_cache_create(struct xe_tile *tile)
+{
+ struct xe_device *xe = tile_to_xe(tile);
+
+ if (IS_DGFX(xe)) {
+ struct drm_pagemap_cache *cache =
+ drm_pagemap_cache_create_devm(xe->usm.dpagemap_shrinker);
+
+ if (IS_ERR(cache))
+ return PTR_ERR(cache);
+
+ tile->mem.vram->dpagemap_cache = cache;
+ }
+
+ return 0;
+}
+
#else
-int xe_svm_alloc_vram(struct xe_tile *tile,
- struct xe_svm_range *range,
- const struct drm_gpusvm_ctx *ctx)
+
+int xe_pagemap_shrinker_create(struct xe_device *xe)
{
- return -EOPNOTSUPP;
+ return 0;
}
-int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
+int xe_pagemap_cache_create(struct xe_tile *tile)
{
return 0;
}
+int xe_svm_alloc_vram(struct xe_tile *tile,
+ struct xe_svm_range *range,
+ const struct drm_gpusvm_ctx *ctx)
+{
+ return -EOPNOTSUPP;
+}
+
struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
{
return NULL;
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 0955d2ac8d74..6166f5358d6d 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -27,8 +27,13 @@ static inline void *xe_svm_devm_owner(struct xe_device *xe)
#define XE_INTERCONNECT_VRAM DRM_INTERCONNECT_DRIVER
+struct drm_device;
+struct drm_file;
+
struct xe_bo;
struct xe_gt;
+struct xe_device;
+struct xe_vram_region;
struct xe_tile;
struct xe_vm;
struct xe_vma;
@@ -55,6 +60,22 @@ struct xe_svm_range {
u8 tile_invalidated;
};
+/**
+ * struct xe_pagemap - Manages xe device_private memory for SVM.
+ * @pagemap: The struct dev_pagemap providing the struct pages.
+ * @dpagemap: The drm_pagemap managing allocation and migration.
+ * @destroy_work: Handles asynchronous destruction and caching.
+ * @hpa_base: The host physical address base for the managed memory.
+ * @vr: Back-pointer to the xe_vram_region.
+ */
+struct xe_pagemap {
+ struct dev_pagemap pagemap;
+ struct drm_pagemap dpagemap;
+ struct work_struct destroy_work;
+ resource_size_t hpa_base;
+ struct xe_vram_region *vr;
+};
+
/**
* xe_svm_range_pages_valid() - SVM range pages valid
* @range: SVM range
@@ -171,6 +192,10 @@ static inline unsigned long xe_svm_range_size(struct xe_svm_range *range)
void xe_svm_flush(struct xe_vm *vm);
+int xe_pagemap_shrinker_create(struct xe_device *xe);
+
+int xe_pagemap_cache_create(struct xe_tile *tile);
+
#else
#include <linux/interval_tree.h>
#include "xe_vm.h"
@@ -179,7 +204,7 @@ struct drm_pagemap_addr;
struct drm_gpusvm_ctx;
struct drm_gpusvm_range;
struct xe_bo;
-struct xe_gt;
+struct xe_device;
struct xe_vm;
struct xe_vma;
struct xe_tile;
@@ -346,6 +371,17 @@ struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *t
static inline void xe_svm_flush(struct xe_vm *vm)
{
}
+
+static inline int xe_pagemap_shrinker_create(struct xe_device *xe)
+{
+ return 0;
+}
+
+static inline int xe_pagemap_cache_create(struct xe_tile *tile)
+{
+ return 0;
+}
+
#define xe_svm_range_has_dma_mapping(...) false
#endif /* CONFIG_DRM_XE_GPUSVM */
diff --git a/drivers/gpu/drm/xe/xe_tile.c b/drivers/gpu/drm/xe/xe_tile.c
index 4f4f9a5c43af..051b191377df 100644
--- a/drivers/gpu/drm/xe/xe_tile.c
+++ b/drivers/gpu/drm/xe/xe_tile.c
@@ -6,6 +6,7 @@
#include <linux/fault-inject.h>
#include <drm/drm_managed.h>
+#include <drm/drm_pagemap_util.h>
#include "xe_bo.h"
#include "xe_device.h"
@@ -180,17 +181,19 @@ ALLOW_ERROR_INJECTION(xe_tile_init_early, ERRNO); /* See xe_pci_probe() */
int xe_tile_init_noalloc(struct xe_tile *tile)
{
struct xe_device *xe = tile_to_xe(tile);
+ int err;
xe_wa_apply_tile_workarounds(tile);
- if (xe->info.has_usm && IS_DGFX(xe))
- xe_devm_add(tile, tile->mem.vram);
+ err = xe_pagemap_cache_create(tile);
+ if (err)
+ return err;
if (IS_DGFX(xe) && !ttm_resource_manager_used(&tile->mem.vram->ttm.manager)) {
- int err = xe_ttm_vram_mgr_init(xe, tile->mem.vram);
-
+ err = xe_ttm_vram_mgr_init(xe, tile->mem.vram);
if (err)
return err;
+
xe->info.mem_region_mask |= BIT(tile->mem.vram->id) << 1;
}
@@ -215,3 +218,26 @@ void xe_tile_migrate_wait(struct xe_tile *tile)
{
xe_migrate_wait(tile->migrate);
}
+
+#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
+/**
+ * xe_tile_local_pagemap() - Return a pointer to the tile's local drm_pagemap if any
+ * @tile: The tile.
+ *
+ * Return: A pointer to the tile's local drm_pagemap, or NULL if there is no
+ * active local pagemap or if pagemap support has been compiled out.
+ */
+struct drm_pagemap *xe_tile_local_pagemap(struct xe_tile *tile)
+{
+ struct drm_pagemap *dpagemap =
+ drm_pagemap_get_from_cache_if_active(xe_tile_to_vr(tile)->dpagemap_cache);
+
+ if (dpagemap) {
+ xe_assert(tile_to_xe(tile), kref_read(&dpagemap->ref) >= 2);
+ drm_pagemap_put(dpagemap);
+ }
+
+ return dpagemap;
+}
+#endif
+
diff --git a/drivers/gpu/drm/xe/xe_tile.h b/drivers/gpu/drm/xe/xe_tile.h
index dceb6297aa01..734132eddda5 100644
--- a/drivers/gpu/drm/xe/xe_tile.h
+++ b/drivers/gpu/drm/xe/xe_tile.h
@@ -8,6 +8,7 @@
#include "xe_device_types.h"
+struct xe_pagemap;
struct xe_tile;
int xe_tile_init_early(struct xe_tile *tile, struct xe_device *xe, u8 id);
@@ -23,4 +24,24 @@ static inline bool xe_tile_is_root(struct xe_tile *tile)
return tile->id == 0;
}
+/**
+ * xe_tile_to_vr() - Return the struct xe_vram_region pointer from a
+ * struct xe_tile pointer
+ * @tile: Pointer to the struct xe_tile.
+ *
+ * Return: Pointer to the struct xe_vram_region embedded in *@tile.
+ */
+static inline struct xe_vram_region *xe_tile_to_vr(struct xe_tile *tile)
+{
+ return tile->mem.vram;
+}
+
+#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
+struct drm_pagemap *xe_tile_local_pagemap(struct xe_tile *tile);
+#else
+static inline struct drm_pagemap *xe_tile_local_pagemap(struct xe_tile *tile)
+{
+ return NULL;
+}
+#endif
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index ccd6cc090309..fd9308426ac4 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -191,6 +191,7 @@ struct xe_vm {
*/
struct work_struct work;
} garbage_collector;
+ struct xe_pagemap *pagemaps[XE_MAX_TILES_PER_DEVICE];
} svm;
struct xe_device *xe;
diff --git a/drivers/gpu/drm/xe/xe_vram_types.h b/drivers/gpu/drm/xe/xe_vram_types.h
index c0d2c5ee8c10..646e3c12ae9f 100644
--- a/drivers/gpu/drm/xe/xe_vram_types.h
+++ b/drivers/gpu/drm/xe/xe_vram_types.h
@@ -66,19 +66,8 @@ struct xe_vram_region {
#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
/** @migrate: Back pointer to migrate */
struct xe_migrate *migrate;
- /** @pagemap: Used to remap device memory as ZONE_DEVICE */
- struct dev_pagemap pagemap;
- /**
- * @dpagemap: The struct drm_pagemap of the ZONE_DEVICE memory
- * pages of this tile.
- */
- struct drm_pagemap *dpagemap;
- /**
- * @hpa_base: base host physical address
- *
- * This is generated when remap device memory as ZONE_DEVICE
- */
- resource_size_t hpa_base;
+ /** @dpagemap_cache: drm_pagemap cache. */
+ struct drm_pagemap_cache *dpagemap_cache;
#endif
};
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread* [PATCH v2 07/17] drm/pagemap: Remove the drm_pagemap_create() interface
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (5 preceding siblings ...)
2025-11-11 16:43 ` [PATCH v2 06/17] drm/xe: Use the " Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-11 16:43 ` [PATCH v2 08/17] drm/pagemap_util: Add a utility to assign an owner to a set of interconnected gpus Thomas Hellström
` (14 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
With the drm_pagemap_init() interface, drm_pagemap_create() is not
used anymore.
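For reference, a minimal mock of the pattern that replaces it: drivers embed the
struct drm_pagemap and initialize it in place, so no separate allocation helper
is needed. Simplified types below, not the real drm_pagemap API (which also
takes the dev_pagemap and drm_device):

```c
#include <assert.h>
#include <string.h>

/* Simplified mocks of the init-in-place pattern. */
struct drm_pagemap_ops { int dummy; };
struct drm_pagemap {
	int ref;
	const struct drm_pagemap_ops *ops;
};

static int drm_pagemap_init_mock(struct drm_pagemap *dpagemap,
				 const struct drm_pagemap_ops *ops)
{
	memset(dpagemap, 0, sizeof(*dpagemap));
	dpagemap->ops = ops;
	dpagemap->ref = 1;	/* caller holds the initial reference */
	return 0;
}

/* Driver-side embedding: the drm_pagemap lives inside the driver
 * object, so there is no separate kzalloc() as in the removed
 * drm_pagemap_create(). */
struct xe_pagemap_mock {
	struct drm_pagemap dpagemap;
};
```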
v2:
- Slightly more verbose commit message. (Matt Brost)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 30 ------------------------------
1 file changed, 30 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 50d3963ddbbc..1477a2057a15 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -681,36 +681,6 @@ int drm_pagemap_init(struct drm_pagemap *dpagemap,
}
EXPORT_SYMBOL(drm_pagemap_init);
-/**
- * drm_pagemap_create() - Create a struct drm_pagemap.
- * @drm: Pointer to a struct drm_device providing the device-private memory.
- * @pagemap: Pointer to a pre-setup struct dev_pagemap providing the struct pages.
- * @ops: Pointer to the struct drm_pagemap_ops.
- *
- * Allocate and initialize a struct drm_pagemap.
- *
- * Return: A refcounted pointer to a struct drm_pagemap on success.
- * Error pointer on error.
- */
-struct drm_pagemap *
-drm_pagemap_create(struct drm_device *drm,
- struct dev_pagemap *pagemap,
- const struct drm_pagemap_ops *ops)
-{
- struct drm_pagemap *dpagemap = kzalloc(sizeof(*dpagemap), GFP_KERNEL);
- int err;
-
- if (!dpagemap)
- return ERR_PTR(-ENOMEM);
-
- err = drm_pagemap_init(dpagemap, pagemap, drm, ops);
- if (err)
- return ERR_PTR(err);
-
- return dpagemap;
-}
-EXPORT_SYMBOL(drm_pagemap_create);
-
/**
* drm_pagemap_put() - Put a struct drm_pagemap reference
* @dpagemap: Pointer to a struct drm_pagemap object.
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread* [PATCH v2 08/17] drm/pagemap_util: Add a utility to assign an owner to a set of interconnected gpus
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (6 preceding siblings ...)
2025-11-11 16:43 ` [PATCH v2 07/17] drm/pagemap: Remove the drm_pagemap_create() interface Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-11 16:43 ` [PATCH v2 09/17] drm/xe: Use the drm_pagemap_util helper to get a svm pagemap owner Thomas Hellström
` (13 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
The hmm_range_fault() and the migration helpers currently need a common
"owner" to identify pagemaps and clients with fast interconnect.
Add a drm_pagemap utility to set up such owners by registering
drm_pagemaps in a registry and, for each new drm_pagemap, querying
which existing drm_pagemaps have a fast interconnect with the new
drm_pagemap.
The "owner" scheme is limited in that it is static at drm_pagemap creation.
Ideally one would want the owner to be adjusted at run-time, but that
requires changes to hmm. If the proposed scheme becomes too limited,
we need to revisit.
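The scan in drm_pagemap_acquire_owner() below can be illustrated with a small
userspace model: peers are kept grouped by owner, and a new peer joins the
first group whose every member it shares an interconnect with, otherwise it
gets a fresh owner. All names, the list handling, and the demo interconnect
policy are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Userspace model of the owner-assignment scan. Peers are kept grouped
 * by owner in a singly-linked list. */
struct owner { int ref; };

struct peer {
	struct owner *owner;
	int node;		/* identity consulted by the callback */
	struct peer *next;
};

typedef bool (*interconnect_fn)(struct peer *a, struct peer *b);

static int acquire_owner(struct peer *peer, struct peer **list,
			 interconnect_fn has_interconnect)
{
	struct owner *owner = NULL;
	bool interconnect = false;
	struct peer *cur;

	/* Walk group by group; stop at the first group where every
	 * member has an interconnect with @peer. */
	for (cur = *list; cur; cur = cur->next) {
		if (cur->owner != owner) {
			if (owner && interconnect)
				break;	/* previous group fully matched */
			owner = cur->owner;
			interconnect = true;
		}
		if (interconnect && !has_interconnect(peer, cur))
			interconnect = false;
	}

	if (!interconnect) {	/* no matching group: new unique owner */
		owner = calloc(1, sizeof(*owner));
		if (!owner)
			return -1;
	}
	owner->ref++;
	peer->owner = owner;
	peer->next = *list;	/* demo only: new peers go to the head */
	*list = peer;
	return 0;
}

/* Example policy (pure assumption for the demo): nodes 0 and 1 have a
 * fast interconnect; everything else does not. */
static bool demo_linked(struct peer *a, struct peer *b)
{
	return a->node <= 1 && b->node <= 1;
}
```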
v2:
- Improve documentation of DRM_PAGEMAP_OWNER_LIST_DEFINE(). (Matt Brost)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_pagemap_util.c | 118 +++++++++++++++++++++++++++++
include/drm/drm_pagemap_util.h | 50 ++++++++++++
2 files changed, 168 insertions(+)
diff --git a/drivers/gpu/drm/drm_pagemap_util.c b/drivers/gpu/drm/drm_pagemap_util.c
index 84a7a4807bef..413183b2e871 100644
--- a/drivers/gpu/drm/drm_pagemap_util.c
+++ b/drivers/gpu/drm/drm_pagemap_util.c
@@ -3,6 +3,8 @@
* Copyright © 2025 Intel Corporation
*/
+#include <linux/slab.h>
+
#include <drm/drm_drv.h>
#include <drm/drm_managed.h>
#include <drm/drm_pagemap.h>
@@ -448,3 +450,119 @@ struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device
return shrinker;
}
EXPORT_SYMBOL(drm_pagemap_shrinker_create_devm);
+
+/**
+ * struct drm_pagemap_owner - Device interconnect group
+ * @kref: Reference count.
+ *
+ * A struct drm_pagemap_owner identifies a device interconnect group.
+ */
+struct drm_pagemap_owner {
+ struct kref kref;
+};
+
+static void drm_pagemap_owner_release(struct kref *kref)
+{
+ kfree(container_of(kref, struct drm_pagemap_owner, kref));
+}
+
+/**
+ * drm_pagemap_release_owner() - Stop participating in an interconnect group
+ * @peer: Pointer to the struct drm_pagemap_peer used when joining the group
+ *
+ * Stop participating in an interconnect group. This function is typically
+ * called when a pagemap is removed to indicate that it doesn't need to
+ * be taken into account.
+ */
+void drm_pagemap_release_owner(struct drm_pagemap_peer *peer)
+{
+ struct drm_pagemap_owner_list *owner_list = peer->list;
+
+ if (!owner_list)
+ return;
+
+ mutex_lock(&owner_list->lock);
+ list_del(&peer->link);
+ kref_put(&peer->owner->kref, drm_pagemap_owner_release);
+ peer->owner = NULL;
+ mutex_unlock(&owner_list->lock);
+}
+EXPORT_SYMBOL(drm_pagemap_release_owner);
+
+/**
+ * typedef interconnect_fn - Callback function to identify fast interconnects
+ * @peer1: First endpoint.
+ * @peer2: Second endpoint.
+ *
+ * The function returns %true iff @peer1 and @peer2 have a fast interconnect.
+ * Note that this is symmetrical. The function has no notion of client and provider,
+ * which may not be sufficient in some cases. However, since the callback is intended
+ * to guide in providing common pagemap owners, the notion of a common owner to
+ * indicate fast interconnects would then have to change as well.
+ *
+ * Return: %true iff @peer1 and @peer2 have a fast interconnect. Otherwise %false.
+ */
+typedef bool (*interconnect_fn)(struct drm_pagemap_peer *peer1, struct drm_pagemap_peer *peer2);
+
+/**
+ * drm_pagemap_acquire_owner() - Join an interconnect group
+ * @peer: A struct drm_pagemap_peer keeping track of the device interconnect
+ * @owner_list: Pointer to the owner_list, keeping track of all interconnects
+ * @has_interconnect: Callback function to determine whether two peers have a
+ * fast local interconnect.
+ *
+ * Repeatedly calls @has_interconnect for @peer and other peers on @owner_list to
+ * determine a set of peers for which @peer has a fast interconnect. That set will
+ * have common &struct drm_pagemap_owner, and upon successful return, @peer::owner
+ * will point to that struct, holding a reference, and @peer will be registered in
+ * @owner_list. If @peer doesn't have any fast interconnects to other peers, a
+ * new unique &struct drm_pagemap_owner will be allocated for it, and that
+ * may be shared with other peers that, at a later point, are determined to have
+ * a fast interconnect with @peer.
+ *
+ * When @peer no longer participates in an interconnect group,
+ * drm_pagemap_release_owner() should be called to drop the reference on the
+ * struct drm_pagemap_owner.
+ *
+ * Return: %0 on success, negative error code on failure.
+ */
+int drm_pagemap_acquire_owner(struct drm_pagemap_peer *peer,
+ struct drm_pagemap_owner_list *owner_list,
+ interconnect_fn has_interconnect)
+{
+ struct drm_pagemap_peer *cur_peer;
+ struct drm_pagemap_owner *owner = NULL;
+ bool interconnect = false;
+
+ mutex_lock(&owner_list->lock);
+ might_alloc(GFP_KERNEL);
+ list_for_each_entry(cur_peer, &owner_list->peers, link) {
+ if (cur_peer->owner != owner) {
+ if (owner && interconnect)
+ break;
+ owner = cur_peer->owner;
+ interconnect = true;
+ }
+ if (interconnect && !has_interconnect(peer, cur_peer))
+ interconnect = false;
+ }
+
+ if (!interconnect) {
+ owner = kmalloc(sizeof(*owner), GFP_KERNEL);
+ if (!owner) {
+ mutex_unlock(&owner_list->lock);
+ return -ENOMEM;
+ }
+ kref_init(&owner->kref);
+ list_add_tail(&peer->link, &owner_list->peers);
+ } else {
+ kref_get(&owner->kref);
+ list_add_tail(&peer->link, &cur_peer->link);
+ }
+ peer->owner = owner;
+ peer->list = owner_list;
+ mutex_unlock(&owner_list->lock);
+
+ return 0;
+}
+EXPORT_SYMBOL(drm_pagemap_acquire_owner);
diff --git a/include/drm/drm_pagemap_util.h b/include/drm/drm_pagemap_util.h
index 924244d5b899..19169b42b891 100644
--- a/include/drm/drm_pagemap_util.h
+++ b/include/drm/drm_pagemap_util.h
@@ -6,11 +6,55 @@
#ifndef _DRM_PAGEMAP_UTIL_H_
#define _DRM_PAGEMAP_UTIL_H_
+#include <linux/list.h>
+#include <linux/mutex.h>
+
struct drm_device;
struct drm_pagemap;
struct drm_pagemap_cache;
+struct drm_pagemap_owner;
struct drm_pagemap_shrinker;
+/**
+ * struct drm_pagemap_peer - Structure representing a fast interconnect peer
+ * @list: Pointer to a &struct drm_pagemap_owner_list used to keep track of peers
+ * @link: List link for @list's list of peers.
+ * @owner: Pointer to a &struct drm_pagemap_owner, common for a set of peers having
+ * fast interconnects.
+ * @private: Pointer private to the struct embedding this struct.
+ */
+struct drm_pagemap_peer {
+ struct drm_pagemap_owner_list *list;
+ struct list_head link;
+ struct drm_pagemap_owner *owner;
+ void *private;
+};
+
+/**
+ * struct drm_pagemap_owner_list - Keeping track of peers and owners
+ *
+ * The owner list defines the scope where we identify peers having fast interconnects
+ * and a common owner. Typically a driver has a single global owner list to
+ * keep track of common owners for the driver's pagemaps.
+ */
+struct drm_pagemap_owner_list {
+ /** @lock: Mutex protecting the @peers list. */
+ struct mutex lock;
+ /** @peers: List of peers. */
+ struct list_head peers;
+};
+
+/*
+ * Convenience macro to define an owner list.
+ * Typically the owner list is statically declared
+ * driver-wide.
+ */
+#define DRM_PAGEMAP_OWNER_LIST_DEFINE(_name) \
+ struct drm_pagemap_owner_list _name = { \
+ .lock = __MUTEX_INITIALIZER((_name).lock), \
+ .peers = LIST_HEAD_INIT((_name).peers) }
+
void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap);
int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache);
@@ -39,4 +83,10 @@ static inline void drm_pagemap_shrinker_might_lock(struct drm_pagemap *dpagemap)
#endif /* CONFIG_PROVE_LOCKING */
+void drm_pagemap_release_owner(struct drm_pagemap_peer *peer);
+
+int drm_pagemap_acquire_owner(struct drm_pagemap_peer *peer,
+ struct drm_pagemap_owner_list *owner_list,
+ bool (*has_interconnect)(struct drm_pagemap_peer *peer1,
+ struct drm_pagemap_peer *peer2));
#endif
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread* [PATCH v2 09/17] drm/xe: Use the drm_pagemap_util helper to get a svm pagemap owner
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (7 preceding siblings ...)
2025-11-11 16:43 ` [PATCH v2 08/17] drm/pagemap_util: Add a utility to assign an owner to a set of interconnected gpus Thomas Hellström
@ 2025-11-11 16:43 ` Thomas Hellström
2025-11-11 16:44 ` [PATCH v2 10/17] drm/xe: Pass a drm_pagemap pointer around with the memory advise attributes Thomas Hellström
` (12 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
Register a driver-wide owner list, provide a callback to identify
fast interconnects and use the drm_pagemap_util helper to allocate
or reuse a suitable owner struct. For now we consider pagemaps on
different tiles of the same device as having a fast interconnect and
thus the same owner.
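To illustrate the grouping scheme this patch builds on, here is a
userspace toy model, under the simplifying assumption that the
interconnect relation is an equivalence (as it is here: "same device").
All names (`peer`, `owner`, `acquire_owner`, `release_owner`) are
illustrative stand-ins, not the kernel API; the in-kernel helper
additionally verifies the whole owner group, which only matters when
the relation is not transitive.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy peer: dev_id stands in for the underlying struct device. */
struct owner {
	int refcount;			/* like struct drm_pagemap_owner's kref */
};

struct peer {
	int dev_id;
	struct owner *owner;		/* shared by peers with an interconnect */
	struct peer *next;
};

static struct peer *peer_list;		/* driver-wide list, like xe_owner_list */

/* Mirrors xe_has_interconnect(): same device => fast interconnect. */
static bool has_interconnect(const struct peer *a, const struct peer *b)
{
	return a->dev_id == b->dev_id;
}

/* Reuse the owner of any peer we interconnect with, else allocate one. */
static int acquire_owner(struct peer *p)
{
	struct owner *owner = NULL;
	const struct peer *cur;

	for (cur = peer_list; cur; cur = cur->next) {
		if (has_interconnect(p, cur)) {
			owner = cur->owner;
			break;
		}
	}
	if (!owner) {
		owner = calloc(1, sizeof(*owner));
		if (!owner)
			return -1;
	}
	owner->refcount++;
	p->owner = owner;
	p->next = peer_list;
	peer_list = p;
	return 0;
}

/* Mirrors drm_pagemap_release_owner(): drop the shared owner reference. */
static void release_owner(struct peer *p)
{
	struct peer **cur;

	for (cur = &peer_list; *cur; cur = &(*cur)->next) {
		if (*cur == p) {
			*cur = p->next;
			break;
		}
	}
	if (--p->owner->refcount == 0)
		free(p->owner);
	p->owner = NULL;
}
```

Two peers on the same device end up sharing one owner (so their pages
are mutually visible to hmm_range_fault()), while a peer on another
device gets a distinct owner.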
v2:
- Fix up the error onion unwind in xe_pagemap_create(). (Matt Brost)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 64 ++++++++++++++++++++++++++++----
drivers/gpu/drm/xe/xe_svm.h | 24 +++++-------
drivers/gpu/drm/xe/xe_userptr.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 2 +-
drivers/gpu/drm/xe/xe_vm_types.h | 3 ++
5 files changed, 71 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 025c0a3aed8b..7db9eafec66b 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -22,8 +22,17 @@
#include "xe_vm_types.h"
#include "xe_vram_types.h"
+/* Identifies subclasses of struct drm_pagemap_peer */
+#define XE_PEER_PAGEMAP ((void *)0ul)
+#define XE_PEER_VM ((void *)1ul)
+
static int xe_svm_get_pagemaps(struct xe_vm *vm);
+void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem)
+{
+ return force_smem ? NULL : vm->svm.peer.owner;
+}
+
static bool xe_svm_range_in_vram(struct xe_svm_range *range)
{
/*
@@ -769,6 +778,25 @@ static void xe_svm_put_pagemaps(struct xe_vm *vm)
}
}
+static struct device *xe_peer_to_dev(struct drm_pagemap_peer *peer)
+{
+ if (peer->private == XE_PEER_PAGEMAP)
+ return container_of(peer, struct xe_pagemap, peer)->dpagemap.drm->dev;
+
+ return container_of(peer, struct xe_vm, svm.peer)->xe->drm.dev;
+}
+
+static bool xe_has_interconnect(struct drm_pagemap_peer *peer1,
+ struct drm_pagemap_peer *peer2)
+{
+ struct device *dev1 = xe_peer_to_dev(peer1);
+ struct device *dev2 = xe_peer_to_dev(peer2);
+
+ return dev1 == dev2;
+}
+
+static DRM_PAGEMAP_OWNER_LIST_DEFINE(xe_owner_list);
+
/**
* xe_svm_init() - SVM initialize
* @vm: The VM.
@@ -787,10 +815,18 @@ int xe_svm_init(struct xe_vm *vm)
INIT_WORK(&vm->svm.garbage_collector.work,
xe_svm_garbage_collector_work_func);
- err = xe_svm_get_pagemaps(vm);
+ vm->svm.peer.private = XE_PEER_VM;
+ err = drm_pagemap_acquire_owner(&vm->svm.peer, &xe_owner_list,
+ xe_has_interconnect);
if (err)
return err;
+ err = xe_svm_get_pagemaps(vm);
+ if (err) {
+ drm_pagemap_release_owner(&vm->svm.peer);
+ return err;
+ }
+
err = drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM", &vm->xe->drm,
current->mm, 0, vm->size,
xe_modparam.svm_notifier_size * SZ_1M,
@@ -800,6 +836,7 @@ int xe_svm_init(struct xe_vm *vm)
if (err) {
xe_svm_put_pagemaps(vm);
+ drm_pagemap_release_owner(&vm->svm.peer);
return err;
}
} else {
@@ -822,6 +859,7 @@ void xe_svm_close(struct xe_vm *vm)
xe_assert(vm->xe, xe_vm_is_closed(vm));
flush_work(&vm->svm.garbage_collector.work);
xe_svm_put_pagemaps(vm);
+ drm_pagemap_release_owner(&vm->svm.peer);
}
/**
@@ -956,7 +994,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
xe_pm_runtime_get_noresume(xe);
err = drm_pagemap_migrate_to_devmem(&bo->devmem_allocation, mm,
start, end, timeslice_ms,
- xe_svm_devm_owner(xe));
+ xpagemap->pagemap.owner);
if (err)
xe_svm_devmem_release(&bo->devmem_allocation);
xe_bo_unlock(bo);
@@ -1071,7 +1109,6 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
.devmem_only = need_vram && devmem_possible,
.timeslice_ms = need_vram && devmem_possible ?
vm->xe->atomic_svm_timeslice_ms : 0,
- .device_private_page_owner = xe_svm_devm_owner(vm->xe),
};
struct xe_validation_ctx vctx;
struct drm_exec exec;
@@ -1095,8 +1132,8 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
return err;
dpagemap = xe_vma_resolve_pagemap(vma, tile);
- if (!dpagemap && !ctx.devmem_only)
- ctx.device_private_page_owner = NULL;
+ ctx.device_private_page_owner =
+ xe_svm_private_page_owner(vm, !dpagemap && !ctx.devmem_only);
range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
if (IS_ERR(range))
@@ -1520,6 +1557,8 @@ static void xe_pagemap_destroy_work(struct work_struct *work)
pagemap->range.end - pagemap->range.start + 1);
drm_dev_exit(idx);
}
+
+ drm_pagemap_release_owner(&xpagemap->peer);
kfree(xpagemap);
}
@@ -1570,6 +1609,7 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
dpagemap = &xpagemap->dpagemap;
INIT_WORK(&xpagemap->destroy_work, xe_pagemap_destroy_work);
xpagemap->vr = vr;
+ xpagemap->peer.private = XE_PEER_PAGEMAP;
err = drm_pagemap_init(dpagemap, pagemap, &xe->drm, &xe_drm_pagemap_ops);
if (err)
@@ -1582,21 +1622,29 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
goto out_err;
}
+ err = drm_pagemap_acquire_owner(&xpagemap->peer, &xe_owner_list,
+ xe_has_interconnect);
+ if (err)
+ goto out_no_owner;
+
pagemap->type = MEMORY_DEVICE_PRIVATE;
pagemap->range.start = res->start;
pagemap->range.end = res->end;
pagemap->nr_range = 1;
- pagemap->owner = xe_svm_devm_owner(xe);
+ pagemap->owner = xpagemap->peer.owner;
pagemap->ops = drm_pagemap_pagemap_ops_get();
addr = devm_memremap_pages(dev, pagemap);
if (IS_ERR(addr)) {
err = PTR_ERR(addr);
- devm_release_mem_region(dev, res->start, res->end - res->start + 1);
- goto out_err;
+ goto out_no_pages;
}
xpagemap->hpa_base = res->start;
return xpagemap;
+out_no_pages:
+ drm_pagemap_release_owner(&xpagemap->peer);
+out_no_owner:
+ devm_release_mem_region(dev, res->start, res->end - res->start + 1);
out_err:
drm_pagemap_put(dpagemap);
return ERR_PTR(err);
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 6166f5358d6d..e99d483e82c2 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -6,24 +6,11 @@
#ifndef _XE_SVM_H_
#define _XE_SVM_H_
-struct xe_device;
-
-/**
- * xe_svm_devm_owner() - Return the owner of device private memory
- * @xe: The xe device.
- *
- * Return: The owner of this device's device private memory to use in
- * hmm_range_fault()-
- */
-static inline void *xe_svm_devm_owner(struct xe_device *xe)
-{
- return xe;
-}
-
#if IS_ENABLED(CONFIG_DRM_XE_GPUSVM)
#include <drm/drm_pagemap.h>
#include <drm/drm_gpusvm.h>
+#include <drm/drm_pagemap_util.h>
#define XE_INTERCONNECT_VRAM DRM_INTERCONNECT_DRIVER
@@ -65,6 +52,7 @@ struct xe_svm_range {
* @pagemap: The struct dev_pagemap providing the struct pages.
* @dpagemap: The drm_pagemap managing allocation and migration.
+ * @destroy_work: Handles asynchronous destruction and caching.
+ * @peer: Used for pagemap owner computation.
+ * @hpa_base: The host physical address base for the managed memory.
* @vr: Backpointer to the xe_vram region.
*/
@@ -72,6 +60,7 @@ struct xe_pagemap {
struct dev_pagemap pagemap;
struct drm_pagemap dpagemap;
struct work_struct destroy_work;
+ struct drm_pagemap_peer peer;
resource_size_t hpa_base;
struct xe_vram_region *vr;
};
@@ -131,6 +120,8 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile);
+void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -368,6 +359,11 @@ struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *t
return NULL;
}
+static inline void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem)
+{
+ return NULL;
+}
+
static inline void xe_svm_flush(struct xe_vm *vm)
{
}
diff --git a/drivers/gpu/drm/xe/xe_userptr.c b/drivers/gpu/drm/xe/xe_userptr.c
index 0d9130b1958a..e120323c43bc 100644
--- a/drivers/gpu/drm/xe/xe_userptr.c
+++ b/drivers/gpu/drm/xe/xe_userptr.c
@@ -55,7 +55,7 @@ int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma)
struct xe_device *xe = vm->xe;
struct drm_gpusvm_ctx ctx = {
.read_only = xe_vma_read_only(vma),
- .device_private_page_owner = xe_svm_devm_owner(xe),
+ .device_private_page_owner = xe_svm_private_page_owner(vm, false),
.allow_mixed = true,
};
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 8fb5cc6a69ec..2321e7c8ae76 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2888,7 +2888,7 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
ctx.read_only = xe_vma_read_only(vma);
ctx.devmem_possible = devmem_possible;
ctx.check_pages_threshold = devmem_possible ? SZ_64K : 0;
- ctx.device_private_page_owner = xe_svm_devm_owner(vm->xe);
+ ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !tile);
/* TODO: Threading the migration */
xa_for_each(&op->prefetch_range.range, i, svm_range) {
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index fd9308426ac4..0d09a322199d 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -8,6 +8,7 @@
#include <drm/drm_gpusvm.h>
#include <drm/drm_gpuvm.h>
+#include <drm/drm_pagemap_util.h>
#include <linux/dma-resv.h>
#include <linux/kref.h>
@@ -192,6 +193,8 @@ struct xe_vm {
struct work_struct work;
} garbage_collector;
struct xe_pagemap *pagemaps[XE_MAX_TILES_PER_DEVICE];
+ /** @svm.peer: Used for pagemap connectivity computations. */
+ struct drm_pagemap_peer peer;
} svm;
struct xe_device *xe;
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread* [PATCH v2 10/17] drm/xe: Pass a drm_pagemap pointer around with the memory advise attributes
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (8 preceding siblings ...)
2025-11-11 16:43 ` [PATCH v2 09/17] drm/xe: Use the drm_pagemap_util helper to get a svm pagemap owner Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-11 16:44 ` [PATCH v2 11/17] drm/xe: Use the vma attribute drm_pagemap to select where to migrate Thomas Hellström
` (11 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
As a consequence, a struct xe_vma_mem_attr can't simply be assigned
or freed without taking the reference counts of individual members
into account. Also add helpers to do that.
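A userspace toy model of the copy/fini semantics these helpers provide
may make the refcounting clearer. The names (`mem_attr`, `attr_copy`,
`attr_fini`, `pagemap_get`/`pagemap_put`) are illustrative stand-ins
for the xe helpers and drm_pagemap_get()/put(); the point is only that
a structure copy must drop the destination's old reference and take a
new one on the copied pointer.

```c
#include <assert.h>
#include <stddef.h>

/* Toy refcounted object standing in for struct drm_pagemap. */
struct pagemap {
	int refcount;
};

static void pagemap_get(struct pagemap *p)
{
	if (p)
		p->refcount++;
}

static void pagemap_put(struct pagemap *p)
{
	if (p)
		p->refcount--;
}

/* Like struct xe_vma_mem_attr: one refcounted member, one plain one. */
struct mem_attr {
	struct pagemap *dpagemap;
	int pat_index;
};

/* Mirrors xe_vma_mem_attr_fini(): drop the reference held by @attr. */
static void attr_fini(struct mem_attr *attr)
{
	pagemap_put(attr->dpagemap);
}

/*
 * Mirrors xe_vma_mem_attr_copy(): release the destination's old
 * reference, structure-copy, then take a reference on the copied
 * pointer so source and destination each hold their own.
 */
static void attr_copy(struct mem_attr *to, const struct mem_attr *from)
{
	attr_fini(to);
	*to = *from;
	pagemap_get(to->dpagemap);
}
```

With these semantics, repeated copies are balanced and a final fini on
each holder returns the refcount to zero.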
v2:
- Move some calls to xe_vma_mem_attr_fini() to xe_vma_free(). (Matt Brost)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 34 +++++++++++++++++++++++++-----
drivers/gpu/drm/xe/xe_vm.h | 1 +
drivers/gpu/drm/xe/xe_vm_madvise.c | 1 +
drivers/gpu/drm/xe/xe_vm_types.h | 9 ++++++++
5 files changed, 41 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 7db9eafec66b..4a3853a5cd64 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -329,7 +329,7 @@ static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64
if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
default_attr.pat_index = vma->attr.default_pat_index;
default_attr.default_pat_index = vma->attr.default_pat_index;
- vma->attr = default_attr;
+ xe_vma_mem_attr_copy(&vma->attr, &default_attr);
} else {
vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
range_start, range_end);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 2321e7c8ae76..27669f80b7ff 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -957,14 +957,37 @@ struct dma_fence *xe_vm_range_unbind(struct xe_vm *vm,
return fence;
}
+static void xe_vma_mem_attr_fini(struct xe_vma_mem_attr *attr)
+{
+ drm_pagemap_put(attr->preferred_loc.dpagemap);
+}
+
static void xe_vma_free(struct xe_vma *vma)
{
+ xe_vma_mem_attr_fini(&vma->attr);
+
if (xe_vma_is_userptr(vma))
kfree(to_userptr_vma(vma));
else
kfree(vma);
}
+/**
+ * xe_vma_mem_attr_copy() - copy an xe_vma_mem_attr structure.
+ * @to: Destination.
+ * @from: Source.
+ *
+ * Copies an xe_vma_mem_attr structure taking care to get reference
+ * counting of individual members right.
+ */
+void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from)
+{
+ xe_vma_mem_attr_fini(to);
+ *to = *from;
+ if (to->preferred_loc.dpagemap)
+ drm_pagemap_get(to->preferred_loc.dpagemap);
+}
+
static struct xe_vma *xe_vma_create(struct xe_vm *vm,
struct xe_bo *bo,
u64 bo_offset_or_userptr,
@@ -1015,8 +1038,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
if (vm->xe->info.has_atomic_enable_pte_bit)
vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
- vma->attr = *attr;
-
+ xe_vma_mem_attr_copy(&vma->attr, attr);
if (bo) {
struct drm_gpuvm_bo *vm_bo;
@@ -4240,7 +4262,7 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
struct drm_gpuva_op *__op;
unsigned int vma_flags = 0;
bool remap_op = false;
- struct xe_vma_mem_attr tmp_attr;
+ struct xe_vma_mem_attr tmp_attr = {};
u16 default_pat;
int err;
@@ -4333,7 +4355,7 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
* VMA, so they can be assigned to newly MAP created vma.
*/
if (is_madvise)
- tmp_attr = vma->attr;
+ xe_vma_mem_attr_copy(&tmp_attr, &vma->attr);
xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
} else if (__op->op == DRM_GPUVA_OP_MAP) {
@@ -4343,12 +4365,13 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
* copy them to new vma.
*/
if (is_madvise)
- vma->attr = tmp_attr;
+ xe_vma_mem_attr_copy(&vma->attr, &tmp_attr);
}
}
xe_vm_unlock(vm);
drm_gpuva_ops_free(&vm->gpuvm, ops);
+ xe_vma_mem_attr_fini(&tmp_attr);
return 0;
unwind_ops:
@@ -4406,3 +4429,4 @@ int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t r
return xe_vm_alloc_vma(vm, &map_req, false);
}
+
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index ef8a5019574e..d328d31afe8e 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -411,4 +411,5 @@ static inline struct drm_exec *xe_vm_validation_exec(struct xe_vm *vm)
#define xe_vm_has_valid_gpu_mapping(tile, tile_present, tile_invalidated) \
((READ_ONCE(tile_present) & ~READ_ONCE(tile_invalidated)) & BIT((tile)->id))
+void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from);
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index cad3cf627c3f..9553008409d1 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -95,6 +95,7 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
*/
vmas[i]->attr.preferred_loc.migration_policy =
op->preferred_mem_loc.migration_policy;
+ vmas[i]->attr.preferred_loc.dpagemap = NULL;
}
}
}
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 0d09a322199d..ca489aa7c652 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -20,6 +20,8 @@
#include "xe_range_fence.h"
#include "xe_userptr.h"
+struct drm_pagemap;
+
struct xe_bo;
struct xe_svm_range;
struct xe_sync_entry;
@@ -65,6 +67,13 @@ struct xe_vma_mem_attr {
* closest device memory respectively.
*/
u32 devmem_fd;
+ /**
+ * @preferred_loc.dpagemap: Reference-counted pointer to the drm_pagemap preferred
+ * for migration on a SVM page-fault. The pointer is protected by the
+ * vm lock, and is %NULL if @devmem_fd should be consulted for special
+ * values.
+ */
+ struct drm_pagemap *dpagemap;
} preferred_loc;
/**
--
2.51.1
^ permalink raw reply related [flat|nested] 33+ messages in thread* [PATCH v2 11/17] drm/xe: Use the vma attribute drm_pagemap to select where to migrate
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (9 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 10/17] drm/xe: Pass a drm_pagemap pointer around with the memory advise attributes Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-12 5:22 ` kernel test robot
` (2 more replies)
2025-11-11 16:44 ` [PATCH v2 12/17] drm/xe: Simplify madvise_preferred_mem_loc() Thomas Hellström
` (10 subsequent siblings)
21 siblings, 3 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
Honor the drm_pagemap vma attribute when migrating SVM pages.
Ensure that when the desired placement is validated as device
memory, we also check that the requested drm_pagemap is
consistent with the current one.
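The resolution order this patch establishes can be sketched as a small
userspace model: an explicit per-VMA drm_pagemap pointer wins, and only
when it is NULL are the fd-encoded special values consulted. The
`PREF_LOC_*` macro values, `resolve_pagemap` and the `pagemap` struct
below are hypothetical stand-ins, not the uAPI constants or kernel
types.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the DRM_XE_PREFERRED_LOC_* special values. */
#define PREF_LOC_DEFAULT_SYSTEM 0
#define PREF_LOC_DEFAULT_DEVICE -1

struct pagemap {
	int id;
};

/* Per-VMA preference, like xe_vma_mem_attr.preferred_loc. */
struct preferred_loc {
	struct pagemap *dpagemap;	/* explicit preference: wins when set */
	int devmem_fd;			/* otherwise consulted for special values */
};

static struct pagemap local_pagemap = { .id = 1 };	/* tile-local memory */

/*
 * Mirrors the v2 xe_vma_resolve_pagemap() flow: an explicit dpagemap
 * pointer takes precedence; only when it is NULL do the fd-encoded
 * special values decide between system memory and the local device.
 */
static struct pagemap *resolve_pagemap(const struct preferred_loc *loc,
				       bool is_dgfx)
{
	if (loc->dpagemap)
		return loc->dpagemap;
	if (loc->devmem_fd == PREF_LOC_DEFAULT_SYSTEM)
		return NULL;
	if (loc->devmem_fd == PREF_LOC_DEFAULT_DEVICE)
		return is_dgfx ? &local_pagemap : NULL;
	return NULL;	/* real fd-based lookup not modelled here */
}
```

A NULL result means "stay in system memory", matching how the caller
then computes a NULL private-page owner.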
v2:
- Initialize a struct drm_pagemap pointer to NULL that could
otherwise be dereferenced uninitialized. (CI)
- Remove a redundant assignment (Matt Brost)
- Slightly improved commit message (Matt Brost)
- Extended drm_pagemap validation.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 86 ++++++++++++++++++++------------
drivers/gpu/drm/xe/xe_svm.h | 12 ++---
drivers/gpu/drm/xe/xe_vm.c | 24 ++++-----
drivers/gpu/drm/xe/xe_vm_types.h | 6 +--
4 files changed, 71 insertions(+), 57 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 4a3853a5cd64..006de141dfa7 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -875,13 +875,34 @@ void xe_svm_fini(struct xe_vm *vm)
drm_gpusvm_fini(&vm->svm.gpusvm);
}
+static bool xe_svm_range_has_pagemap_locked(const struct xe_svm_range *range,
+ const struct drm_pagemap *dpagemap)
+{
+ return range->base.pages.dpagemap == dpagemap;
+}
+
+static bool xe_svm_range_has_pagemap(struct xe_svm_range *range,
+ const struct drm_pagemap *dpagemap)
+{
+ struct xe_vm *vm = range_to_vm(&range->base);
+ bool ret;
+
+ xe_svm_notifier_lock(vm);
+ ret = xe_svm_range_has_pagemap_locked(range, dpagemap);
+ xe_svm_notifier_unlock(vm);
+
+ return ret;
+}
+
static bool xe_svm_range_is_valid(struct xe_svm_range *range,
struct xe_tile *tile,
- bool devmem_only)
+ bool devmem_only,
+ const struct drm_pagemap *dpagemap)
+
{
return (xe_vm_has_valid_gpu_mapping(tile, range->tile_present,
range->tile_invalidated) &&
- (!devmem_only || xe_svm_range_in_vram(range)));
+ (!devmem_only || xe_svm_range_has_pagemap(range, dpagemap)));
}
/** xe_svm_range_migrate_to_smem() - Move range pages from VRAM to SMEM
@@ -902,7 +923,8 @@ void xe_svm_range_migrate_to_smem(struct xe_vm *vm, struct xe_svm_range *range)
* @vm: xe_vm pointer
* @range: Pointer to the SVM range structure
* @tile_mask: Mask representing the tiles to be checked
- * @devmem_preferred : if true range needs to be in devmem
+ * @dpagemap: if !%NULL, the range is expected to be present
+ * in device memory identified by this parameter.
*
* The xe_svm_range_validate() function checks if a range is
* valid and located in the desired memory region.
@@ -911,14 +933,15 @@ void xe_svm_range_migrate_to_smem(struct xe_vm *vm, struct xe_svm_range *range)
*/
bool xe_svm_range_validate(struct xe_vm *vm,
struct xe_svm_range *range,
- u8 tile_mask, bool devmem_preferred)
+ u8 tile_mask, const struct drm_pagemap *dpagemap)
{
bool ret;
xe_svm_notifier_lock(vm);
- ret = (range->tile_present & ~range->tile_invalidated & tile_mask) == tile_mask &&
- (devmem_preferred == range->base.pages.flags.has_devmem_pages);
+ ret = (range->tile_present & ~range->tile_invalidated & tile_mask) == tile_mask;
+ if (dpagemap)
+ ret = ret && xe_svm_range_has_pagemap_locked(range, dpagemap);
xe_svm_notifier_unlock(vm);
@@ -1019,22 +1042,22 @@ static bool supports_4K_migration(struct xe_device *xe)
* xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
* @range: SVM range for which migration needs to be decided
* @vma: vma which has range
- * @preferred_region_is_vram: preferred region for range is vram
+ * @dpagemap: The preferred struct drm_pagemap to migrate to.
*
* Return: True for range needing migration and migration is supported else false
*/
bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
- bool preferred_region_is_vram)
+ const struct drm_pagemap *dpagemap)
{
struct xe_vm *vm = range_to_vm(&range->base);
u64 range_size = xe_svm_range_size(range);
- if (!range->base.pages.flags.migrate_devmem || !preferred_region_is_vram)
+ if (!range->base.pages.flags.migrate_devmem || !dpagemap)
return false;
xe_assert(vm->xe, IS_DGFX(vm->xe));
- if (xe_svm_range_in_vram(range)) {
+ if (xe_svm_range_has_pagemap(range, dpagemap)) {
drm_dbg(&vm->xe->drm, "Range is already in VRAM\n");
return false;
}
@@ -1131,9 +1154,9 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
if (err)
return err;
- dpagemap = xe_vma_resolve_pagemap(vma, tile);
- ctx.device_private_page_owner =
- xe_svm_private_page_owner(vm, !dpagemap && !ctx.devmem_only);
+ dpagemap = ctx.devmem_only ? xe_tile_local_pagemap(tile) :
+ xe_vma_resolve_pagemap(vma, tile);
+ ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !dpagemap);
range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
if (IS_ERR(range))
@@ -1146,7 +1169,7 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
goto out;
}
- if (xe_svm_range_is_valid(range, tile, ctx.devmem_only)) {
+ if (xe_svm_range_is_valid(range, tile, ctx.devmem_only, dpagemap)) {
xe_svm_range_valid_fault_count_stats_incr(gt, range);
range_debug(range, "PAGE FAULT - VALID");
goto out;
@@ -1155,16 +1178,11 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
range_debug(range, "PAGE FAULT");
if (--migrate_try_count >= 0 &&
- xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) {
+ xe_svm_range_needs_migrate_to_vram(range, vma, dpagemap)) {
ktime_t migrate_start = xe_svm_stats_ktime_get();
- /* TODO : For multi-device dpagemap will be used to find the
- * remote tile and remote device. Will need to modify
- * xe_svm_alloc_vram to use dpagemap for future multi-device
- * support.
- */
xe_svm_range_migrate_count_stats_incr(gt, range);
- err = xe_svm_alloc_vram(tile, range, &ctx);
+ err = xe_svm_alloc_vram(range, &ctx, dpagemap);
xe_svm_range_migrate_us_stats_incr(gt, range, migrate_start);
ctx.timeslice_ms <<= 1; /* Double timeslice if we have to retry */
if (err) {
@@ -1481,7 +1499,13 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
*/
struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
{
- s32 fd = (s32)vma->attr.preferred_loc.devmem_fd;
+ struct drm_pagemap *dpagemap = vma->attr.preferred_loc.dpagemap;
+ s32 fd;
+
+ if (dpagemap)
+ return dpagemap;
+
+ fd = (s32)vma->attr.preferred_loc.devmem_fd;
if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM)
return NULL;
@@ -1489,28 +1513,24 @@ struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *t
if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE)
return IS_DGFX(tile_to_xe(tile)) ? xe_tile_local_pagemap(tile) : NULL;
- /* TODO: Support multi-device with drm_pagemap_from_fd(fd) */
return NULL;
}
/**
* xe_svm_alloc_vram()- Allocate device memory pages for range,
* migrating existing data.
- * @tile: tile to allocate vram from
* @range: SVM range
* @ctx: DRM GPU SVM context
+ * @dpagemap: The struct drm_pagemap representing the memory to allocate.
*
* Return: 0 on success, error code on failure.
*/
-int xe_svm_alloc_vram(struct xe_tile *tile, struct xe_svm_range *range,
- const struct drm_gpusvm_ctx *ctx)
+int xe_svm_alloc_vram(struct xe_svm_range *range, const struct drm_gpusvm_ctx *ctx,
+ struct drm_pagemap *dpagemap)
{
- struct drm_pagemap *dpagemap;
-
- xe_assert(tile_to_xe(tile), range->base.pages.flags.migrate_devmem);
+ xe_assert(range_to_vm(&range->base)->xe, range->base.pages.flags.migrate_devmem);
range_debug(range, "ALLOCATE VRAM");
- dpagemap = xe_tile_local_pagemap(tile);
return drm_pagemap_populate_mm(dpagemap, xe_svm_range_start(range),
xe_svm_range_end(range),
range->base.gpusvm->mm,
@@ -1780,9 +1800,9 @@ int xe_pagemap_cache_create(struct xe_tile *tile)
return 0;
}
-int xe_svm_alloc_vram(struct xe_tile *tile,
- struct xe_svm_range *range,
- const struct drm_gpusvm_ctx *ctx)
+int xe_svm_alloc_vram(struct xe_svm_range *range,
+ const struct drm_gpusvm_ctx *ctx,
+ struct drm_pagemap *dpagemap)
{
return -EOPNOTSUPP;
}
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index e99d483e82c2..a0ec173c6bf0 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -94,8 +94,8 @@ int xe_svm_bo_evict(struct xe_bo *bo);
void xe_svm_range_debug(struct xe_svm_range *range, const char *operation);
-int xe_svm_alloc_vram(struct xe_tile *tile, struct xe_svm_range *range,
- const struct drm_gpusvm_ctx *ctx);
+int xe_svm_alloc_vram(struct xe_svm_range *range, const struct drm_gpusvm_ctx *ctx,
+ struct drm_pagemap *dpagemap);
struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
struct xe_vma *vma, struct drm_gpusvm_ctx *ctx);
@@ -104,13 +104,13 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
struct drm_gpusvm_ctx *ctx);
bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
- bool preferred_region_is_vram);
+ const struct drm_pagemap *dpagemap);
void xe_svm_range_migrate_to_smem(struct xe_vm *vm, struct xe_svm_range *range);
bool xe_svm_range_validate(struct xe_vm *vm,
struct xe_svm_range *range,
- u8 tile_mask, bool devmem_preferred);
+ u8 tile_mask, const struct drm_pagemap *dpagemap);
u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vma);
@@ -276,8 +276,8 @@ void xe_svm_range_debug(struct xe_svm_range *range, const char *operation)
}
static inline int
-xe_svm_alloc_vram(struct xe_tile *tile, struct xe_svm_range *range,
- const struct drm_gpusvm_ctx *ctx)
+xe_svm_alloc_vram(struct xe_svm_range *range, const struct drm_gpusvm_ctx *ctx,
+ struct drm_pagemap *dpagemap)
{
return -EOPNOTSUPP;
}
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 27669f80b7ff..85c2c1dea26f 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2332,7 +2332,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
struct xe_tile *tile;
struct xe_svm_range *svm_range;
struct drm_gpusvm_ctx ctx = {};
- struct drm_pagemap *dpagemap;
+ struct drm_pagemap *dpagemap = NULL;
u8 id, tile_mask = 0;
u32 i;
@@ -2350,23 +2350,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
xa_init_flags(&op->prefetch_range.range, XA_FLAGS_ALLOC);
op->prefetch_range.ranges_count = 0;
- tile = NULL;
if (prefetch_region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) {
dpagemap = xe_vma_resolve_pagemap(vma,
xe_device_get_root_tile(vm->xe));
- /*
- * TODO: Once multigpu support is enabled will need
- * something to dereference tile from dpagemap.
- */
- if (dpagemap)
- tile = xe_device_get_root_tile(vm->xe);
} else if (prefetch_region) {
tile = &vm->xe->tiles[region_to_mem_type[prefetch_region] -
XE_PL_VRAM0];
+ dpagemap = xe_tile_local_pagemap(tile);
}
- op->prefetch_range.tile = tile;
+ op->prefetch_range.dpagemap = dpagemap;
alloc_next_range:
svm_range = xe_svm_range_find_or_insert(vm, addr, vma, &ctx);
@@ -2385,7 +2379,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
goto unwind_prefetch_ops;
}
- if (xe_svm_range_validate(vm, svm_range, tile_mask, !!tile)) {
+ if (xe_svm_range_validate(vm, svm_range, tile_mask, dpagemap)) {
xe_svm_range_debug(svm_range, "PREFETCH - RANGE IS VALID");
goto check_next_range;
}
@@ -2897,7 +2891,7 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
{
bool devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
- struct xe_tile *tile = op->prefetch_range.tile;
+ struct drm_pagemap *dpagemap = op->prefetch_range.dpagemap;
int err = 0;
struct xe_svm_range *svm_range;
@@ -2910,15 +2904,15 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
ctx.read_only = xe_vma_read_only(vma);
ctx.devmem_possible = devmem_possible;
ctx.check_pages_threshold = devmem_possible ? SZ_64K : 0;
- ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !tile);
+ ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !dpagemap);
/* TODO: Threading the migration */
xa_for_each(&op->prefetch_range.range, i, svm_range) {
- if (!tile)
+ if (!dpagemap)
xe_svm_range_migrate_to_smem(vm, svm_range);
- if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, !!tile)) {
- err = xe_svm_alloc_vram(tile, svm_range, &ctx);
+ if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
+ err = xe_svm_alloc_vram(svm_range, &ctx, dpagemap);
if (err) {
drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index ca489aa7c652..392c4caf2a63 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -408,10 +408,10 @@ struct xe_vma_op_prefetch_range {
/** @ranges_count: number of svm ranges to map */
u32 ranges_count;
/**
- * @tile: Pointer to the tile structure containing memory to prefetch.
- * NULL if prefetch requested region is smem
+ * @dpagemap: Pointer to the dpagemap structure containing memory to prefetch.
+ * NULL if prefetch requested region is smem
*/
- struct xe_tile *tile;
+ struct drm_pagemap *dpagemap;
};
/** enum xe_vma_op_flags - flags for VMA operation */
--
2.51.1
^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate
2025-11-11 16:44 ` [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate Thomas Hellström
@ 2025-11-12 5:22 ` kernel test robot
2025-11-12 7:16 ` kernel test robot
2025-11-13 4:51 ` kernel test robot
2 siblings, 0 replies; 33+ messages in thread
From: kernel test robot @ 2025-11-12 5:22 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: oe-kbuild-all, Thomas Hellström, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Matthew Brost, Christian König, dakr,
Mrozek, Michal, Joonas Lahtinen
Hi Thomas,
kernel test robot noticed the following build warnings:
[auto build test WARNING on drm-xe/drm-xe-next]
[also build test WARNING on next-20251111]
[cannot apply to linus/master v6.18-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Thomas-Hellstr-m/drm-xe-svm-Fix-a-debug-printout/20251112-004643
base: https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link: https://lore.kernel.org/r/20251111164408.113070-12-thomas.hellstrom%40linux.intel.com
patch subject: [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate
config: arc-randconfig-001-20251112 (https://download.01.org/0day-ci/archive/20251112/202511121243.jhIjqQi8-lkp@intel.com/config)
compiler: arc-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251112/202511121243.jhIjqQi8-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511121243.jhIjqQi8-lkp@intel.com/
All warnings (new ones prefixed by >>):
drivers/gpu/drm/xe/xe_vm.c: In function 'prefetch_ranges':
>> drivers/gpu/drm/xe/xe_vm.c:2914:58: warning: passing argument 3 of 'xe_svm_range_needs_migrate_to_vram' makes integer from pointer without a cast [-Wint-conversion]
if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
^~~~~~~~
In file included from drivers/gpu/drm/xe/xe_res_cursor.h:38,
from drivers/gpu/drm/xe/xe_vm.c:36:
drivers/gpu/drm/xe/xe_svm.h:321:10: note: expected 'u32' {aka 'unsigned int'} but argument is of type 'struct drm_pagemap *'
u32 region)
~~~~^~~~~~
vim +/xe_svm_range_needs_migrate_to_vram +2914 drivers/gpu/drm/xe/xe_vm.c
2889
2890 static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
2891 {
2892 bool devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
2893 struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
2894 struct drm_pagemap *dpagemap = op->prefetch_range.dpagemap;
2895 int err = 0;
2896
2897 struct xe_svm_range *svm_range;
2898 struct drm_gpusvm_ctx ctx = {};
2899 unsigned long i;
2900
2901 if (!xe_vma_is_cpu_addr_mirror(vma))
2902 return 0;
2903
2904 ctx.read_only = xe_vma_read_only(vma);
2905 ctx.devmem_possible = devmem_possible;
2906 ctx.check_pages_threshold = devmem_possible ? SZ_64K : 0;
2907 ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !dpagemap);
2908
2909 /* TODO: Threading the migration */
2910 xa_for_each(&op->prefetch_range.range, i, svm_range) {
2911 if (!dpagemap)
2912 xe_svm_range_migrate_to_smem(vm, svm_range);
2913
> 2914 if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
2915 err = xe_svm_alloc_vram(svm_range, &ctx, dpagemap);
2916 if (err) {
2917 drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
2918 vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
2919 return -ENODATA;
2920 }
2921 xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
2922 }
2923
2924 err = xe_svm_range_get_pages(vm, svm_range, &ctx);
2925 if (err) {
2926 drm_dbg(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
2927 vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
2928 if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
2929 err = -ENODATA;
2930 return err;
2931 }
2932 xe_svm_range_debug(svm_range, "PREFETCH - RANGE GET PAGES DONE");
2933 }
2934
2935 return err;
2936 }
2937
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate
2025-11-11 16:44 ` [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate Thomas Hellström
2025-11-12 5:22 ` kernel test robot
@ 2025-11-12 7:16 ` kernel test robot
2025-11-13 4:51 ` kernel test robot
2 siblings, 0 replies; 33+ messages in thread
From: kernel test robot @ 2025-11-12 7:16 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: llvm, oe-kbuild-all, Thomas Hellström, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Matthew Brost, Christian König, dakr,
Mrozek, Michal, Joonas Lahtinen
Hi Thomas,
kernel test robot noticed the following build errors:
[auto build test ERROR on drm-xe/drm-xe-next]
[also build test ERROR on next-20251112]
[cannot apply to linus/master v6.18-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Thomas-Hellstr-m/drm-xe-svm-Fix-a-debug-printout/20251112-004643
base: https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link: https://lore.kernel.org/r/20251111164408.113070-12-thomas.hellstrom%40linux.intel.com
patch subject: [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate
config: arm-randconfig-002-20251112 (https://download.01.org/0day-ci/archive/20251112/202511121414.yVnBaDhb-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 996639d6ebb86ff15a8c99b67f1c2e2117636ae7)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251112/202511121414.yVnBaDhb-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511121414.yVnBaDhb-lkp@intel.com/
All errors (new ones prefixed by >>):
>> drivers/gpu/drm/xe/xe_vm.c:2914:58: error: incompatible pointer to integer conversion passing 'struct drm_pagemap *' to parameter of type 'u32' (aka 'unsigned int') [-Wint-conversion]
2914 | if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
| ^~~~~~~~
drivers/gpu/drm/xe/xe_svm.h:321:10: note: passing argument to parameter 'region' here
321 | u32 region)
| ^
1 error generated.
vim +2914 drivers/gpu/drm/xe/xe_vm.c
2889
2890 static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
2891 {
2892 bool devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
2893 struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
2894 struct drm_pagemap *dpagemap = op->prefetch_range.dpagemap;
2895 int err = 0;
2896
2897 struct xe_svm_range *svm_range;
2898 struct drm_gpusvm_ctx ctx = {};
2899 unsigned long i;
2900
2901 if (!xe_vma_is_cpu_addr_mirror(vma))
2902 return 0;
2903
2904 ctx.read_only = xe_vma_read_only(vma);
2905 ctx.devmem_possible = devmem_possible;
2906 ctx.check_pages_threshold = devmem_possible ? SZ_64K : 0;
2907 ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !dpagemap);
2908
2909 /* TODO: Threading the migration */
2910 xa_for_each(&op->prefetch_range.range, i, svm_range) {
2911 if (!dpagemap)
2912 xe_svm_range_migrate_to_smem(vm, svm_range);
2913
> 2914 if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
2915 err = xe_svm_alloc_vram(svm_range, &ctx, dpagemap);
2916 if (err) {
2917 drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
2918 vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
2919 return -ENODATA;
2920 }
2921 xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
2922 }
2923
2924 err = xe_svm_range_get_pages(vm, svm_range, &ctx);
2925 if (err) {
2926 drm_dbg(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
2927 vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
2928 if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
2929 err = -ENODATA;
2930 return err;
2931 }
2932 xe_svm_range_debug(svm_range, "PREFETCH - RANGE GET PAGES DONE");
2933 }
2934
2935 return err;
2936 }
2937
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate
2025-11-11 16:44 ` [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate Thomas Hellström
2025-11-12 5:22 ` kernel test robot
2025-11-12 7:16 ` kernel test robot
@ 2025-11-13 4:51 ` kernel test robot
2 siblings, 0 replies; 33+ messages in thread
From: kernel test robot @ 2025-11-13 4:51 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: oe-kbuild-all, Thomas Hellström, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Matthew Brost, Christian König, dakr,
Mrozek, Michal, Joonas Lahtinen
Hi Thomas,
kernel test robot noticed the following build errors:
[auto build test ERROR on drm-xe/drm-xe-next]
[also build test ERROR on next-20251112]
[cannot apply to linus/master v6.18-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Thomas-Hellstr-m/drm-xe-svm-Fix-a-debug-printout/20251112-004643
base: https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link: https://lore.kernel.org/r/20251111164408.113070-12-thomas.hellstrom%40linux.intel.com
patch subject: [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate
config: x86_64-buildonly-randconfig-005-20251112 (https://download.01.org/0day-ci/archive/20251113/202511130207.qwkzEI6l-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251113/202511130207.qwkzEI6l-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202511130207.qwkzEI6l-lkp@intel.com/
All errors (new ones prefixed by >>):
drivers/gpu/drm/xe/xe_vm.c: In function 'prefetch_ranges':
>> drivers/gpu/drm/xe/xe_vm.c:2914:72: error: passing argument 3 of 'xe_svm_range_needs_migrate_to_vram' makes integer from pointer without a cast [-Wint-conversion]
2914 | if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
| ^~~~~~~~
| |
| struct drm_pagemap *
In file included from drivers/gpu/drm/xe/xe_res_cursor.h:38,
from drivers/gpu/drm/xe/xe_vm.c:36:
drivers/gpu/drm/xe/xe_svm.h:321:45: note: expected 'u32' {aka 'unsigned int'} but argument is of type 'struct drm_pagemap *'
321 | u32 region)
| ~~~~^~~~~~
vim +/xe_svm_range_needs_migrate_to_vram +2914 drivers/gpu/drm/xe/xe_vm.c
2889
2890 static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
2891 {
2892 bool devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
2893 struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
2894 struct drm_pagemap *dpagemap = op->prefetch_range.dpagemap;
2895 int err = 0;
2896
2897 struct xe_svm_range *svm_range;
2898 struct drm_gpusvm_ctx ctx = {};
2899 unsigned long i;
2900
2901 if (!xe_vma_is_cpu_addr_mirror(vma))
2902 return 0;
2903
2904 ctx.read_only = xe_vma_read_only(vma);
2905 ctx.devmem_possible = devmem_possible;
2906 ctx.check_pages_threshold = devmem_possible ? SZ_64K : 0;
2907 ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !dpagemap);
2908
2909 /* TODO: Threading the migration */
2910 xa_for_each(&op->prefetch_range.range, i, svm_range) {
2911 if (!dpagemap)
2912 xe_svm_range_migrate_to_smem(vm, svm_range);
2913
> 2914 if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
2915 err = xe_svm_alloc_vram(svm_range, &ctx, dpagemap);
2916 if (err) {
2917 drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
2918 vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
2919 return -ENODATA;
2920 }
2921 xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
2922 }
2923
2924 err = xe_svm_range_get_pages(vm, svm_range, &ctx);
2925 if (err) {
2926 drm_dbg(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
2927 vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
2928 if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
2929 err = -ENODATA;
2930 return err;
2931 }
2932 xe_svm_range_debug(svm_range, "PREFETCH - RANGE GET PAGES DONE");
2933 }
2934
2935 return err;
2936 }
2937
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 12/17] drm/xe: Simplify madvise_preferred_mem_loc()
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (10 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 11/17] drm/xe: Use the vma attibute drm_pagemap to select where to migrate Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-11 16:44 ` [PATCH v2 13/17] drm/xe/uapi: Extend the madvise functionality to support foreign pagemap placement for svm Thomas Hellström
` (9 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
Simplify madvise_preferred_mem_loc by removing repetitive patterns
in favour of local variables.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 21 +++++++++++----------
drivers/gpu/drm/xe/xe_vm_types.h | 2 +-
2 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 9553008409d1..d6f47c8e146d 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -81,21 +81,22 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC);
for (i = 0; i < num_vmas; i++) {
+ struct xe_vma *vma = vmas[i];
+ struct xe_vma_preferred_loc *loc = &vma->attr.preferred_loc;
+
/*TODO: Extend attributes to bo based vmas */
- if ((vmas[i]->attr.preferred_loc.devmem_fd == op->preferred_mem_loc.devmem_fd &&
- vmas[i]->attr.preferred_loc.migration_policy ==
- op->preferred_mem_loc.migration_policy) ||
- !xe_vma_is_cpu_addr_mirror(vmas[i])) {
- vmas[i]->skip_invalidation = true;
+ if ((loc->devmem_fd == op->preferred_mem_loc.devmem_fd &&
+ loc->migration_policy == op->preferred_mem_loc.migration_policy) ||
+ !xe_vma_is_cpu_addr_mirror(vma)) {
+ vma->skip_invalidation = true;
} else {
- vmas[i]->skip_invalidation = false;
- vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
+ vma->skip_invalidation = false;
+ loc->devmem_fd = op->preferred_mem_loc.devmem_fd;
/* Till multi-device support is not added migration_policy
* is of no use and can be ignored.
*/
- vmas[i]->attr.preferred_loc.migration_policy =
- op->preferred_mem_loc.migration_policy;
- vmas[i]->attr.preferred_loc.dpagemap = NULL;
+ loc->migration_policy = op->preferred_mem_loc.migration_policy;
+ loc->dpagemap = NULL;
}
}
}
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 392c4caf2a63..5c76d75b224b 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -56,7 +56,7 @@ struct xe_vm_pgtable_update_op;
*/
struct xe_vma_mem_attr {
/** @preferred_loc: preferred memory_location */
- struct {
+ struct xe_vma_preferred_loc {
/** @preferred_loc.migration_policy: Pages migration policy */
u32 migration_policy;
--
2.51.1
^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 13/17] drm/xe/uapi: Extend the madvise functionality to support foreign pagemap placement for svm
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (11 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 12/17] drm/xe: Simplify madvise_preferred_mem_loc() Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-11 16:44 ` [PATCH v2 14/17] drm/xe: Support pcie p2p dma as a fast interconnect Thomas Hellström
` (8 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
Use device file descriptors and regions to represent pagemaps on
foreign or local devices.
The underlying files are type-checked at madvise time, and
references are kept on the drm_pagemap as long as there are
madvises pointing to it.
Extend the madvise preferred_location UAPI with a region
instance that identifies the foreign placement.
v2:
- Improve UAPI documentation. (Matt Brost)
- Sanitize preferred_mem_loc.region_instance madvise. (Matt Brost)
- Clarify madvise drm_pagemap vs xe_pagemap refcounting. (Matt Brost)
- Don't allow a foreign drm_pagemap madvise without a fast
interconnect.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 14 +++++
drivers/gpu/drm/xe/xe_device.h | 2 +
drivers/gpu/drm/xe/xe_svm.c | 78 +++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_svm.h | 7 +++
drivers/gpu/drm/xe/xe_vm_madvise.c | 86 ++++++++++++++++++++++++++----
include/uapi/drm/xe_drm.h | 18 +++++--
6 files changed, 191 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index ff598d0c68d7..2465c7a9a63e 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -373,6 +373,20 @@ static const struct file_operations xe_driver_fops = {
.fop_flags = FOP_UNSIGNED_OFFSET,
};
+/**
+ * xe_is_xe_file() - Is the file an xe device file?
+ * @file: The file.
+ *
+ * Checks whether the file is opened against
+ * an xe device.
+ *
+ * Return: %true if an xe file, %false if not.
+ */
+bool xe_is_xe_file(const struct file *file)
+{
+ return file->f_op == &xe_driver_fops;
+}
+
static struct drm_driver driver = {
/* Don't use MTRRs here; the Xserver or userspace app should
* deal with them for Intel hardware.
diff --git a/drivers/gpu/drm/xe/xe_device.h b/drivers/gpu/drm/xe/xe_device.h
index 32cc6323b7f6..475e2245c955 100644
--- a/drivers/gpu/drm/xe/xe_device.h
+++ b/drivers/gpu/drm/xe/xe_device.h
@@ -195,6 +195,8 @@ void xe_file_put(struct xe_file *xef);
int xe_is_injection_active(void);
+bool xe_is_xe_file(const struct file *file);
+
/*
* Occasionally it is seen that the G2H worker starts running after a delay of more than
* a second even after being queued and activated by the Linux workqueue subsystem. This
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 006de141dfa7..c0b17b548a00 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1788,6 +1788,78 @@ int xe_pagemap_cache_create(struct xe_tile *tile)
return 0;
}
+static struct drm_pagemap *xe_devmem_open(struct xe_device *xe, u32 region_instance)
+{
+ u32 tile_id = region_instance - 1;
+ struct xe_pagemap *xpagemap;
+ struct drm_pagemap *dpagemap;
+ struct xe_vram_region *vr;
+
+ if (tile_id >= xe->info.tile_count)
+ return ERR_PTR(-ENOENT);
+
+ if (!((BIT(tile_id) << 1) & xe->info.mem_region_mask))
+ return ERR_PTR(-ENOENT);
+
+ vr = xe_tile_to_vr(&xe->tiles[tile_id]);
+ xpagemap = xe_pagemap_find_or_create(xe, vr->dpagemap_cache, vr);
+ if (IS_ERR(xpagemap))
+ return ERR_CAST(xpagemap);
+
+ /* Below is for clarity only. The reference counter is the same. */
+ dpagemap = drm_pagemap_get(&xpagemap->dpagemap);
+ xe_pagemap_put(xpagemap);
+
+ return dpagemap;
+}
+
+/**
+ * xe_drm_pagemap_from_fd() - Return a drm_pagemap pointer from a
+ * (file_descriptor, region_instance) pair.
+ * @fd: An fd opened against an xe device.
+ * @region_instance: The region instance representing the device memory
+ * on the opened xe device.
+ *
+ * Opens a struct drm_pagemap pointer on the
+ * indicated device and region_instance.
+ *
+ * Return: A reference-counted struct drm_pagemap pointer on success,
+ * negative error pointer on failure.
+ */
+struct drm_pagemap *xe_drm_pagemap_from_fd(int fd, u32 region_instance)
+{
+ struct drm_pagemap *dpagemap;
+ struct file *file;
+ struct drm_file *fpriv;
+ struct drm_device *drm;
+ int idx;
+
+ if (fd <= 0)
+ return ERR_PTR(-EINVAL);
+
+ file = fget(fd);
+ if (!file)
+ return ERR_PTR(-ENOENT);
+
+ if (!xe_is_xe_file(file)) {
+ dpagemap = ERR_PTR(-ENOENT);
+ goto out;
+ }
+
+ fpriv = file->private_data;
+ drm = fpriv->minor->dev;
+ if (!drm_dev_enter(drm, &idx)) {
+ dpagemap = ERR_PTR(-ENODEV);
+ goto out;
+ }
+
+ dpagemap = xe_devmem_open(to_xe_device(drm), region_instance);
+ drm_dev_exit(idx);
+out:
+ fput(file);
+ return dpagemap;
+}
+
#else
int xe_pagemap_shrinker_create(struct xe_device *xe)
@@ -1811,6 +1883,12 @@ struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *t
{
return NULL;
}
+
+struct drm_pagemap *xe_drm_pagemap_from_fd(int fd, u32 region_instance)
+{
+ return ERR_PTR(-ENOENT);
+}
+
#endif
/**
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index a0ec173c6bf0..60eae01a4220 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -187,6 +187,8 @@ int xe_pagemap_shrinker_create(struct xe_device *xe);
int xe_pagemap_cache_create(struct xe_tile *tile);
+struct drm_pagemap *xe_drm_pagemap_from_fd(int fd, u32 region_instance);
+
#else
#include <linux/interval_tree.h>
#include "xe_vm.h"
@@ -378,6 +380,11 @@ static inline int xe_pagemap_cache_create(struct xe_tile *tile)
return 0;
}
+static inline struct drm_pagemap *xe_drm_pagemap_from_fd(int fd, u32 region_instance)
+{
+ return ERR_PTR(-ENOENT);
+}
+
#define xe_svm_range_has_dma_mapping(...) false
#endif /* CONFIG_DRM_XE_GPUSVM */
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index d6f47c8e146d..add9a6ca2390 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -22,6 +22,19 @@ struct xe_vmas_in_madvise_range {
bool has_svm_userptr_vmas;
};
+/**
+ * struct xe_madvise_details - Argument to madvise_funcs
+ * @dpagemap: Reference-counted pointer to a struct drm_pagemap.
+ *
+ * The madvise IOCTL handler may, in addition to the user-space
+ * args, have additional info to pass into the madvise_func that
+ * handles the madvise type. Use a struct_xe_madvise_details
+ * for that and extend the struct as necessary.
+ */
+struct xe_madvise_details {
+ struct drm_pagemap *dpagemap;
+};
+
static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
{
u64 addr = madvise_range->addr;
@@ -74,7 +87,8 @@ static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_r
static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
- struct drm_xe_madvise *op)
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details)
{
int i;
@@ -96,14 +110,18 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
* is of no use and can be ignored.
*/
loc->migration_policy = op->preferred_mem_loc.migration_policy;
+ drm_pagemap_put(loc->dpagemap);
loc->dpagemap = NULL;
+ if (details->dpagemap)
+ loc->dpagemap = drm_pagemap_get(details->dpagemap);
}
}
}
static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
- struct drm_xe_madvise *op)
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details)
{
struct xe_bo *bo;
int i;
@@ -144,7 +162,8 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
- struct drm_xe_madvise *op)
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details)
{
int i;
@@ -162,7 +181,8 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
- struct drm_xe_madvise *op);
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details);
static const madvise_func madvise_funcs[] = {
[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
@@ -246,11 +266,12 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
if (XE_IOCTL_DBG(xe, fd < DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM))
return false;
- if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
- DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
+ if (XE_IOCTL_DBG(xe, fd <= DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE &&
+ args->preferred_mem_loc.region_instance != 0))
return false;
- if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.pad))
+ if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
+ DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
return false;
if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.reserved))
@@ -296,6 +317,41 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
return true;
}
+static int xe_madvise_details_init(struct xe_vm *vm, const struct drm_xe_madvise *args,
+ struct xe_madvise_details *details)
+{
+ struct xe_device *xe = vm->xe;
+
+ memset(details, 0, sizeof(*details));
+
+ if (args->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC) {
+ int fd = args->preferred_mem_loc.devmem_fd;
+ struct drm_pagemap *dpagemap;
+
+ if (fd <= 0)
+ return 0;
+
+ dpagemap = xe_drm_pagemap_from_fd(args->preferred_mem_loc.devmem_fd,
+ args->preferred_mem_loc.region_instance);
+ if (XE_IOCTL_DBG(xe, IS_ERR(dpagemap)))
+ return PTR_ERR(dpagemap);
+
+ /* Don't allow a foreign placement without a fast interconnect! */
+ if (XE_IOCTL_DBG(xe, dpagemap->pagemap->owner != vm->svm.peer.owner)) {
+ drm_pagemap_put(dpagemap);
+ return -ENOLINK;
+ }
+ details->dpagemap = dpagemap;
+ }
+
+ return 0;
+}
+
+static void xe_madvise_details_fini(struct xe_madvise_details *details)
+{
+ drm_pagemap_put(details->dpagemap);
+}
+
static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
int num_vmas, u32 atomic_val)
{
@@ -349,6 +405,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
struct drm_xe_madvise *args = data;
struct xe_vmas_in_madvise_range madvise_range = {.addr = args->start,
.range = args->range, };
+ struct xe_madvise_details details;
struct xe_vm *vm;
struct drm_exec exec;
int err, attr_type;
@@ -373,13 +430,17 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
goto unlock_vm;
}
- err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
+ err = xe_madvise_details_init(vm, args, &details);
if (err)
goto unlock_vm;
+ err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
+ if (err)
+ goto madv_fini;
+
err = get_vmas(vm, &madvise_range);
if (err || !madvise_range.num_vmas)
- goto unlock_vm;
+ goto madv_fini;
if (madvise_range.has_bo_vmas) {
if (args->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC) {
@@ -387,7 +448,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
madvise_range.num_vmas,
args->atomic.val)) {
err = -EINVAL;
- goto unlock_vm;
+ goto madv_fini;
}
}
@@ -413,7 +474,8 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
}
attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
- madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args);
+ madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args,
+ &details);
err = xe_vm_invalidate_madvise_range(vm, args->start, args->start + args->range);
@@ -425,6 +487,8 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
drm_exec_fini(&exec);
kfree(madvise_range.vmas);
madvise_range.vmas = NULL;
+madv_fini:
+ xe_madvise_details_fini(&details);
unlock_vm:
up_write(&vm->lock);
put_vm:
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 47853659a705..34c69bcea203 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -2071,7 +2071,13 @@ struct drm_xe_madvise {
struct {
#define DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE 0
#define DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM -1
- /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+ /**
+ * @preferred_mem_loc.devmem_fd:
+ * Device file-descriptor of the device where the
+ * preferred memory is located, or one of the
+ * above special values. Please also see
+ * @preferred_mem_loc.region_instance below.
+ */
__u32 devmem_fd;
#define DRM_XE_MIGRATE_ALL_PAGES 0
@@ -2079,8 +2085,14 @@ struct drm_xe_madvise {
/** @preferred_mem_loc.migration_policy: Page migration policy */
__u16 migration_policy;
- /** @preferred_mem_loc.pad : MBZ */
- __u16 pad;
+ /**
+ * @preferred_mem_loc.region_instance : Region instance.
+ * MBZ if @devmem_fd <= &DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE.
+ * Otherwise should point to the desired device
+ * VRAM instance of the device indicated by
+ * @preferred_mem_loc.devmem_fd.
+ */
+ __u16 region_instance;
/** @preferred_mem_loc.reserved : Reserved */
__u64 reserved;
--
2.51.1
^ permalink raw reply related	[flat|nested] 33+ messages in thread
* [PATCH v2 14/17] drm/xe: Support pcie p2p dma as a fast interconnect
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (12 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 13/17] drm/xe/uapi: Extend the madvise functionality to support foreign pagemap placement for svm Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-11 16:44 ` [PATCH v2 15/17] drm/xe/vm: Add a couple of VM debug printouts Thomas Hellström
` (7 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
Mimic the dma-buf method using dma_[map|unmap]_resource to map
for pcie-p2p dma.
There's an ongoing area of work upstream to sort out how this best
should be done. One method proposed is to add an additional
pci_p2p_dma_pagemap aliasing the device_private pagemap and use
the corresponding pci_p2p_dma_pagemap page as input for
dma_map_page(). However, that would double both the memory and the
latency needed to set up the drm_pagemap, and given the huge
amount of memory present on modern GPUs, that would really not work.
Hence the simple approach used in this patch.
v2:
- Simplify xe_page_to_pcie(). (Matt Brost)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 34 +++++++++++++++++++++++++++++++---
drivers/gpu/drm/xe/xe_svm.h | 1 +
2 files changed, 32 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index c0b17b548a00..86d7b0882b60 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -3,6 +3,8 @@
* Copyright © 2024 Intel Corporation
*/
+#include <linux/pci-p2pdma.h>
+
#include <drm/drm_drv.h>
#include <drm/drm_managed.h>
#include <drm/drm_pagemap.h>
@@ -441,6 +443,14 @@ static u64 xe_page_to_dpa(struct page *page)
return dpa;
}
+static u64 xe_page_to_pcie(struct page *page)
+{
+ struct xe_pagemap *xpagemap = xe_page_to_pagemap(page);
+ struct xe_vram_region *vr = xe_pagemap_to_vr(xpagemap);
+
+ return xe_page_to_dpa(page) - vr->dpa_base + vr->io_start;
+}
+
enum xe_svm_copy_dir {
XE_SVM_COPY_TO_VRAM,
XE_SVM_COPY_TO_SRAM,
@@ -792,7 +802,10 @@ static bool xe_has_interconnect(struct drm_pagemap_peer *peer1,
struct device *dev1 = xe_peer_to_dev(peer1);
struct device *dev2 = xe_peer_to_dev(peer2);
- return dev1 == dev2;
+ if (dev1 == dev2)
+ return true;
+
+ return pci_p2pdma_distance(to_pci_dev(dev1), dev2, true) >= 0;
}
static DRM_PAGEMAP_OWNER_LIST_DEFINE(xe_owner_list);
@@ -1552,13 +1565,27 @@ xe_drm_pagemap_device_map(struct drm_pagemap *dpagemap,
addr = xe_page_to_dpa(page);
prot = XE_INTERCONNECT_VRAM;
} else {
- addr = DMA_MAPPING_ERROR;
- prot = 0;
+ addr = dma_map_resource(dev,
+ xe_page_to_pcie(page),
+ PAGE_SIZE << order, dir,
+ DMA_ATTR_SKIP_CPU_SYNC);
+ prot = XE_INTERCONNECT_P2P;
}
return drm_pagemap_addr_encode(addr, prot, order, dir);
}
+static void xe_drm_pagemap_device_unmap(struct drm_pagemap *dpagemap,
+ struct device *dev,
+ struct drm_pagemap_addr addr)
+{
+ if (addr.proto != XE_INTERCONNECT_P2P)
+ return;
+
+ dma_unmap_resource(dev, addr.addr, PAGE_SIZE << addr.order,
+ addr.dir, DMA_ATTR_SKIP_CPU_SYNC);
+}
+
static void xe_pagemap_destroy_work(struct work_struct *work)
{
struct xe_pagemap *xpagemap = container_of(work, typeof(*xpagemap), destroy_work);
@@ -1595,6 +1622,7 @@ static void xe_pagemap_destroy(struct drm_pagemap *dpagemap, bool from_atomic_or
static const struct drm_pagemap_ops xe_drm_pagemap_ops = {
.device_map = xe_drm_pagemap_device_map,
+ .device_unmap = xe_drm_pagemap_device_unmap,
.populate_mm = xe_drm_pagemap_populate_mm,
.destroy = xe_pagemap_destroy,
};
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 60eae01a4220..64971c9b2a1a 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -13,6 +13,7 @@
#include <drm/drm_pagemap_util.h>
#define XE_INTERCONNECT_VRAM DRM_INTERCONNECT_DRIVER
+#define XE_INTERCONNECT_P2P (XE_INTERCONNECT_VRAM + 1)
struct drm_device;
struct drm_file;
--
2.51.1
^ permalink raw reply related	[flat|nested] 33+ messages in thread
* [PATCH v2 15/17] drm/xe/vm: Add a couple of VM debug printouts
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (13 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 14/17] drm/xe: Support pcie p2p dma as a fast interconnect Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-11 16:44 ` [PATCH v2 16/17] drm/pagemap, drm/xe: Support migration over interconnect Thomas Hellström
` (6 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
Add debug printouts that are valuable for pagemap prefetch,
migration and page collection.
v2:
- Add additional debug printouts around migration and page collection.
- Require CONFIG_DRM_XE_DEBUG_VM.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v1
---
drivers/gpu/drm/xe/xe_svm.c | 10 ++++++++++
drivers/gpu/drm/xe/xe_vm.c | 7 +++++++
2 files changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 86d7b0882b60..0b39905c9312 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1246,6 +1246,10 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
if (err) {
range_debug(range, "PAGE FAULT - FAIL PAGE COLLECT");
goto out;
+ } else if (IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)) {
+ drm_dbg(&vm->xe->drm, "After page collect data location is %sin \"%s\".\n",
+ xe_svm_range_has_pagemap(range, dpagemap) ? "" : "NOT ",
+ dpagemap ? dpagemap->drm->unique : "System.");
}
xe_svm_range_get_pages_us_stats_incr(gt, range, get_pages_start);
@@ -1541,9 +1545,15 @@ struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *t
int xe_svm_alloc_vram(struct xe_svm_range *range, const struct drm_gpusvm_ctx *ctx,
struct drm_pagemap *dpagemap)
{
+ struct xe_device *xe = range_to_vm(&range->base)->xe;
+
xe_assert(range_to_vm(&range->base)->xe, range->base.pages.flags.migrate_devmem);
range_debug(range, "ALLOCATE VRAM");
+ if (IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM))
+ drm_dbg(&xe->drm, "Request migration to device memory on \"%s\".\n",
+ dpagemap->drm->unique);
+
return drm_pagemap_populate_mm(dpagemap, xe_svm_range_start(range),
xe_svm_range_end(range),
range->base.gpusvm->mm,
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 85c2c1dea26f..4c628c8b644b 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2911,6 +2911,13 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
if (!dpagemap)
xe_svm_range_migrate_to_smem(vm, svm_range);
+ if (IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)) {
+ drm_dbg(&vm->xe->drm,
+ "Prefetch pagemap is %s start 0x%016lx end 0x%016lx\n",
+ dpagemap ? dpagemap->drm->unique : "system",
+ xe_svm_range_start(svm_range), xe_svm_range_end(svm_range));
+ }
+
if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
err = xe_svm_alloc_vram(svm_range, &ctx, dpagemap);
if (err) {
--
2.51.1
^ permalink raw reply related	[flat|nested] 33+ messages in thread
* [PATCH v2 16/17] drm/pagemap, drm/xe: Support migration over interconnect
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (14 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 15/17] drm/xe/vm: Add a couple of VM debug printouts Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-11 16:44 ` [PATCH v2 17/17] drm/xe/svm: Document how xe keeps drm_pagemap references Thomas Hellström
` (5 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, dri-devel, himal.prasad.ghimiray, apopple,
airlied, Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
Support migration over interconnect when migrating from
device-private pages with the same dev_pagemap owner.
Since we now also collect device-private pages to migrate,
also abort migration if the range to migrate is already
fully populated with pages from the desired pagemap.
Finally return -EBUSY from drm_pagemap_populate_mm()
if the migration can't be completed without first migrating all
pages in the range to system. It is expected that the caller
will perform that before retrying the call to
drm_pagemap_populate_mm().
Assume for now that the drm_pagemap implementation is *not*
capable of migrating data within the pagemap itself. This
restriction will be configurable in upcoming patches.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 177 +++++++++++++++++++++++++---------
drivers/gpu/drm/xe/xe_svm.c | 20 ++--
2 files changed, 143 insertions(+), 54 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 1477a2057a15..e87676313ff9 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -210,6 +210,7 @@ static void drm_pagemap_get_devmem_page(struct page *page,
/**
* drm_pagemap_migrate_map_pages() - Map migration pages for GPU SVM migration
* @dev: The device for which the pages are being mapped
+ * @local_dpagemap: The drm_pagemap pointer of the local drm_pagemap.
* @pagemap_addr: Array to store DMA information corresponding to mapped pages
* @migrate_pfn: Array of migrate page frame numbers to map
* @npages: Number of pages to map
@@ -223,12 +224,14 @@ static void drm_pagemap_get_devmem_page(struct page *page,
* Returns: 0 on success, -EFAULT if an error occurs during mapping.
*/
static int drm_pagemap_migrate_map_pages(struct device *dev,
+ struct drm_pagemap *local_dpagemap,
struct drm_pagemap_addr *pagemap_addr,
unsigned long *migrate_pfn,
unsigned long npages,
enum dma_data_direction dir)
{
unsigned long i;
+ unsigned long num_peer_pages = 0;
for (i = 0; i < npages;) {
struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
@@ -239,31 +242,48 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
if (!page)
goto next;
- if (WARN_ON_ONCE(is_zone_device_page(page)))
- return -EFAULT;
-
folio = page_folio(page);
order = folio_order(folio);
- dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
- if (dma_mapping_error(dev, dma_addr))
- return -EFAULT;
+ if (is_zone_device_page(page)) {
+ struct drm_pagemap_zdd *zdd = page->zone_device_data;
+ struct drm_pagemap *dpagemap = zdd->dpagemap;
+ struct drm_pagemap_addr addr;
+
+ if (dpagemap == local_dpagemap)
+ goto next;
- pagemap_addr[i] =
- drm_pagemap_addr_encode(dma_addr,
- DRM_INTERCONNECT_SYSTEM,
- order, dir);
+ num_peer_pages += NR_PAGES(order);
+ addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
+ if (dma_mapping_error(dev, addr.addr))
+ return -EFAULT;
+ } else {
+ dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
+ if (dma_mapping_error(dev, dma_addr))
+ return -EFAULT;
+
+ pagemap_addr[i] =
+ drm_pagemap_addr_encode(dma_addr,
+ DRM_INTERCONNECT_SYSTEM,
+ order, dir);
+ }
next:
i += NR_PAGES(order);
}
+ if (num_peer_pages)
+ drm_dbg(local_dpagemap->drm, "Migrating %lu peer pages over interconnect.\n",
+ num_peer_pages);
+
return 0;
}
/**
* drm_pagemap_migrate_unmap_pages() - Unmap pages previously mapped for GPU SVM migration
* @dev: The device for which the pages were mapped
+ * @migrate_pfn: Array of migrate pfns set up for the mapped pages. Used to
+ * determine the drm_pagemap of a peer device private page.
* @pagemap_addr: Array of DMA information corresponding to mapped pages
* @npages: Number of pages to unmap
* @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
@@ -274,16 +294,27 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
*/
static void drm_pagemap_migrate_unmap_pages(struct device *dev,
struct drm_pagemap_addr *pagemap_addr,
+ unsigned long *migrate_pfn,
unsigned long npages,
enum dma_data_direction dir)
{
unsigned long i;
for (i = 0; i < npages;) {
- if (!pagemap_addr[i].addr || dma_mapping_error(dev, pagemap_addr[i].addr))
+ struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
+
+ if (!page || !pagemap_addr[i].addr || dma_mapping_error(dev, pagemap_addr[i].addr))
goto next;
- dma_unmap_page(dev, pagemap_addr[i].addr, PAGE_SIZE << pagemap_addr[i].order, dir);
+ if (is_zone_device_page(page)) {
+ struct drm_pagemap_zdd *zdd = page->zone_device_data;
+ struct drm_pagemap *dpagemap = zdd->dpagemap;
+
+ dpagemap->ops->device_unmap(dpagemap, dev, pagemap_addr[i]);
+ } else {
+ dma_unmap_page(dev, pagemap_addr[i].addr,
+ PAGE_SIZE << pagemap_addr[i].order, dir);
+ }
next:
i += NR_PAGES(pagemap_addr[i].order);
@@ -308,6 +339,7 @@ npages_in_range(unsigned long start, unsigned long end)
* @timeslice_ms: The time requested for the migrated pagemap pages to
* be present in @mm before being allowed to be migrated back.
* @pgmap_owner: Not used currently, since only system memory is considered.
+ * @mflags: Flags governing the migration.
*
* This function migrates the specified virtual address range to device memory.
* It performs the necessary setup and invokes the driver-specific operations for
@@ -333,13 +365,18 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
.start = start,
.end = end,
.pgmap_owner = pgmap_owner,
- .flags = MIGRATE_VMA_SELECT_SYSTEM,
+ .flags = MIGRATE_VMA_SELECT_SYSTEM |
+ MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
+ MIGRATE_VMA_SELECT_DEVICE_COHERENT,
};
unsigned long i, npages = npages_in_range(start, end);
+ unsigned long own_pages = 0, migrated_pages = 0;
struct vm_area_struct *vas;
struct drm_pagemap_zdd *zdd = NULL;
struct page **pages;
struct drm_pagemap_addr *pagemap_addr;
+ struct drm_pagemap *dpagemap = devmem_allocation->dpagemap;
+ struct dev_pagemap *pagemap = dpagemap->pagemap;
void *buf;
int err;
@@ -374,11 +411,13 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
pagemap_addr = buf + (2 * sizeof(*migrate.src) * npages);
pages = buf + (2 * sizeof(*migrate.src) + sizeof(*pagemap_addr)) * npages;
- zdd = drm_pagemap_zdd_alloc(devmem_allocation->dpagemap, pgmap_owner);
+ zdd = drm_pagemap_zdd_alloc(dpagemap, pgmap_owner);
if (!zdd) {
err = -ENOMEM;
- goto err_free;
+ kvfree(buf);
+ goto err_out;
}
+ zdd->devmem_allocation = devmem_allocation; /* Owns ref */
migrate.vma = vas;
migrate.src = buf;
@@ -389,54 +428,108 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
goto err_free;
if (!migrate.cpages) {
- err = -EFAULT;
+ /* No pages to migrate. Raced or unknown device pages. */
+ err = -EBUSY;
goto err_free;
}
if (migrate.cpages != npages) {
+ /*
+ * Some pages to migrate. But we want to migrate all or
+ * nothing. Raced or unknown device pages.
+ */
err = -EBUSY;
- goto err_finalize;
+ goto err_aborted_migration;
+ }
+
+ /* Count device-private pages to migrate */
+ for (i = 0; i < npages; ++i) {
+ struct page *src_page = migrate_pfn_to_page(migrate.src[i]);
+
+ if (src_page && is_zone_device_page(src_page)) {
+ if (page_pgmap(src_page) == pagemap)
+ own_pages++;
+ }
+ }
+
+ drm_dbg(dpagemap->drm, "Total pages %lu; Own pages: %lu.\n",
+ npages, own_pages);
+ if (own_pages == npages) {
+ err = 0;
+ drm_dbg(dpagemap->drm, "Migration wasn't necessary.\n");
+ goto err_aborted_migration;
+ } else if (own_pages) {
+ err = -EBUSY;
+ drm_dbg(dpagemap->drm, "Migration aborted due to fragmentation.\n");
+ goto err_aborted_migration;
}
err = ops->populate_devmem_pfn(devmem_allocation, npages, migrate.dst);
if (err)
goto err_finalize;
- err = drm_pagemap_migrate_map_pages(devmem_allocation->dev, pagemap_addr,
+ err = drm_pagemap_migrate_map_pages(devmem_allocation->dev,
+ devmem_allocation->dpagemap, pagemap_addr,
migrate.src, npages, DMA_TO_DEVICE);
- if (err)
+ if (err) {
+ drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr,
+ migrate.src, npages, DMA_TO_DEVICE);
+
goto err_finalize;
+ }
+ own_pages = 0;
for (i = 0; i < npages; ++i) {
struct page *page = pfn_to_page(migrate.dst[i]);
+ struct page *src_page = migrate_pfn_to_page(migrate.src[i]);
+ if (unlikely(src_page && is_zone_device_page(src_page) &&
+ page_pgmap(src_page) == pagemap)) {
+ migrate.dst[i] = 0;
+ pages[i] = NULL;
+ own_pages++;
+ continue;
+ }
pages[i] = page;
migrate.dst[i] = migrate_pfn(migrate.dst[i]);
drm_pagemap_get_devmem_page(page, zdd);
}
+ drm_WARN_ON(dpagemap->drm, !!own_pages);
err = ops->copy_to_devmem(pages, pagemap_addr, npages);
+ drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr,
+ migrate.src, npages, DMA_TO_DEVICE);
if (err)
goto err_finalize;
/* Upon success bind devmem allocation to range and zdd */
devmem_allocation->timeslice_expiration = get_jiffies_64() +
msecs_to_jiffies(timeslice_ms);
- zdd->devmem_allocation = devmem_allocation; /* Owns ref */
err_finalize:
if (err)
drm_pagemap_migration_unlock_put_pages(npages, migrate.dst);
+err_aborted_migration:
migrate_vma_pages(&migrate);
+
+ for (i = 0; i < npages; ++i)
+ if (migrate.src[i] & MIGRATE_PFN_MIGRATE)
+ migrated_pages++;
+
+ if (!err && migrated_pages < npages - own_pages) {
+ drm_dbg(dpagemap->drm, "Raced while finalizing migration.\n");
+ err = -EBUSY;
+ }
+
migrate_vma_finalize(&migrate);
- drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr, npages,
- DMA_TO_DEVICE);
err_free:
- if (zdd)
- drm_pagemap_zdd_put(zdd);
+ drm_pagemap_zdd_put(zdd);
kvfree(buf);
+ return err;
+
err_out:
+ devmem_allocation->ops->devmem_release(devmem_allocation);
return err;
}
EXPORT_SYMBOL_GPL(drm_pagemap_migrate_to_devmem);
@@ -747,7 +840,8 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
if (err || !mpages)
goto err_finalize;
- err = drm_pagemap_migrate_map_pages(devmem_allocation->dev, pagemap_addr,
+ err = drm_pagemap_migrate_map_pages(devmem_allocation->dev,
+ devmem_allocation->dpagemap, pagemap_addr,
dst, npages, DMA_FROM_DEVICE);
if (err)
goto err_finalize;
@@ -764,7 +858,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
drm_pagemap_migration_unlock_put_pages(npages, dst);
migrate_device_pages(src, dst, npages);
migrate_device_finalize(src, dst, npages);
- drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr, npages,
+ drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr, dst, npages,
DMA_FROM_DEVICE);
err_free:
kvfree(buf);
@@ -820,12 +914,10 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
void *buf;
int i, err = 0;
- if (page) {
- zdd = page->zone_device_data;
- if (time_before64(get_jiffies_64(),
- zdd->devmem_allocation->timeslice_expiration))
- return 0;
- }
+ zdd = page->zone_device_data;
+ if (time_before64(get_jiffies_64(),
+ zdd->devmem_allocation->timeslice_expiration))
+ return 0;
start = ALIGN_DOWN(fault_addr, size);
end = ALIGN(fault_addr + 1, size);
@@ -861,19 +953,6 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
if (!migrate.cpages)
goto err_free;
- if (!page) {
- for (i = 0; i < npages; ++i) {
- if (!(migrate.src[i] & MIGRATE_PFN_MIGRATE))
- continue;
-
- page = migrate_pfn_to_page(migrate.src[i]);
- break;
- }
-
- if (!page)
- goto err_finalize;
- }
- zdd = page->zone_device_data;
ops = zdd->devmem_allocation->ops;
dev = zdd->devmem_allocation->dev;
@@ -883,7 +962,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
if (err)
goto err_finalize;
- err = drm_pagemap_migrate_map_pages(dev, pagemap_addr, migrate.dst, npages,
+ err = drm_pagemap_migrate_map_pages(dev, zdd->dpagemap, pagemap_addr, migrate.dst, npages,
DMA_FROM_DEVICE);
if (err)
goto err_finalize;
@@ -901,8 +980,8 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
migrate_vma_pages(&migrate);
migrate_vma_finalize(&migrate);
if (dev)
- drm_pagemap_migrate_unmap_pages(dev, pagemap_addr, npages,
- DMA_FROM_DEVICE);
+ drm_pagemap_migrate_unmap_pages(dev, pagemap_addr, migrate.dst,
+ npages, DMA_FROM_DEVICE);
err_free:
kvfree(buf);
err_out:
@@ -938,10 +1017,12 @@ static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
struct drm_pagemap_zdd *zdd = vmf->page->zone_device_data;
int err;
+ drm_pagemap_zdd_get(zdd);
err = __drm_pagemap_migrate_to_ram(vmf->vma,
zdd->device_private_page_owner,
vmf->page, vmf->address,
zdd->devmem_allocation->size);
+ drm_pagemap_zdd_put(zdd);
return err ? VM_FAULT_SIGBUS : 0;
}
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 0b39905c9312..56bb3896b89a 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1028,11 +1028,10 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
/* Ensure the device has a pm ref while there are device pages active. */
xe_pm_runtime_get_noresume(xe);
+ /* Consumes the devmem allocation. */
err = drm_pagemap_migrate_to_devmem(&bo->devmem_allocation, mm,
start, end, timeslice_ms,
xpagemap->pagemap.owner);
- if (err)
- xe_svm_devmem_release(&bo->devmem_allocation);
xe_bo_unlock(bo);
xe_bo_put(bo);
}
@@ -1546,6 +1545,7 @@ int xe_svm_alloc_vram(struct xe_svm_range *range, const struct drm_gpusvm_ctx *c
struct drm_pagemap *dpagemap)
{
struct xe_device *xe = range_to_vm(&range->base)->xe;
+ int err, retries = 1;
xe_assert(range_to_vm(&range->base)->xe, range->base.pages.flags.migrate_devmem);
range_debug(range, "ALLOCATE VRAM");
@@ -1554,10 +1554,18 @@ int xe_svm_alloc_vram(struct xe_svm_range *range, const struct drm_gpusvm_ctx *c
drm_dbg(&xe->drm, "Request migration to device memory on \"%s\".\n",
dpagemap->drm->unique);
- return drm_pagemap_populate_mm(dpagemap, xe_svm_range_start(range),
- xe_svm_range_end(range),
- range->base.gpusvm->mm,
- ctx->timeslice_ms);
+ do {
+ err = drm_pagemap_populate_mm(dpagemap, xe_svm_range_start(range),
+ xe_svm_range_end(range),
+ range->base.gpusvm->mm,
+ ctx->timeslice_ms);
+
+ if (err == -EBUSY && retries)
+ drm_gpusvm_range_evict(range->base.gpusvm, &range->base);
+
+ } while (err == -EBUSY && retries--);
+
+ return err;
}
static struct drm_pagemap_addr
--
2.51.1
^ permalink raw reply related	[flat|nested] 33+ messages in thread
* [PATCH v2 17/17] drm/xe/svm: Document how xe keeps drm_pagemap references
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (15 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 16/17] drm/pagemap, drm/xe: Support migration over interconnect Thomas Hellström
@ 2025-11-11 16:44 ` Thomas Hellström
2025-11-18 0:49 ` Matthew Brost
2025-11-11 17:07 ` ✗ CI.checkpatch: warning for Dynamic drm_pagemaps and Initial multi-device SVM (rev2) Patchwork
` (4 subsequent siblings)
21 siblings, 1 reply; 33+ messages in thread
From: Thomas Hellström @ 2025-11-11 16:44 UTC (permalink / raw)
To: intel-xe
Cc: Thomas Hellström, Matthew Brost, dri-devel,
himal.prasad.ghimiray, apopple, airlied, Simona Vetter,
felix.kuehling, Christian König, dakr, Mrozek, Michal,
Joonas Lahtinen
As an aid to understanding the lifetime of the drm_pagemaps used
by the xe driver, document how the xe driver keeps the
drm_pagemap references.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 56bb3896b89a..c1d6eb2f97d1 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -28,6 +28,28 @@
#define XE_PEER_PAGEMAP ((void *)0ul)
#define XE_PEER_VM ((void *)1ul)
+/**
+ * DOC: drm_pagemap reference-counting in xe:
+ *
+ * In addition to the drm_pagemap internal reference counting by
+ * its zone device data, the xe driver holds the following
+ * long-time references:
+ *
+ * - struct xe_pagemap:
+ * The xe_pagemap struct derives from struct drm_pagemap and
+ * uses its reference count.
+ * - SVM-enabled VMs:
+ * SVM-enabled VMs look up and keep a reference to all
+ * xe_pagemaps on the same device.
+ * - VMAs:
+ * vmas keep a reference on the drm_pagemap indicated by a gpu_madvise()
+ * call.
+ *
+ * In addition, all drm_pagemap or xe_pagemap pointers where lifetime cannot
+ * be guaranteed by a vma reference under the vm lock should keep a reference.
+ * That includes the range->pages.dpagemap pointer.
+ */
+
static int xe_svm_get_pagemaps(struct xe_vm *vm);
void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem)
--
2.51.1
^ permalink raw reply	[flat|nested] 33+ messages in thread
* Re: [PATCH v2 17/17] drm/xe/svm: Document how xe keeps drm_pagemap references
2025-11-11 16:44 ` [PATCH v2 17/17] drm/xe/svm: Document how xe keeps drm_pagemap references Thomas Hellström
@ 2025-11-18 0:49 ` Matthew Brost
0 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2025-11-18 0:49 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, himal.prasad.ghimiray, apopple, airlied,
Simona Vetter, felix.kuehling, Christian König, dakr,
Mrozek, Michal, Joonas Lahtinen
On Tue, Nov 11, 2025 at 05:44:07PM +0100, Thomas Hellström wrote:
> As an aid to understanding the lifetime of the drm_pagemaps used
> by the xe driver, document how the xe driver keeps the
> drm_pagemap references.
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 56bb3896b89a..c1d6eb2f97d1 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -28,6 +28,28 @@
> #define XE_PEER_PAGEMAP ((void *)0ul)
> #define XE_PEER_VM ((void *)1ul)
>
> +/**
> + * DOC: drm_pagemap reference-counting in xe:
> + *
> + * In addition to the drm_pagemap internal reference counting by
> + * its zone device data, the xe driver holds the following
> + * long-time references:
> + *
> + * - struct xe_pagemap:
> + * The xe_pagemap struct derives from struct drm_pagemap and
> + * uses its reference count.
> + * - SVM-enabled VMs:
> + * SVM-enabled VMs look up and keep a reference to all
> + * xe_pagemaps on the same device.
Nit: I think the formatting looks slightly off with some too-early line wraps.
Aside from the nit, LGTM:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> + * - VMAs:
> + * vmas keep a reference on the drm_pagemap indicated by a gpu_madvise()
> + * call.
> + *
> + * In addition, all drm_pagemap or xe_pagemap pointers where lifetime cannot
> + * be guaranteed by a vma reference under the vm lock should keep a reference.
> + * That includes the range->pages.dpagemap pointer.
> + */
> +
> static int xe_svm_get_pagemaps(struct xe_vm *vm);
>
> void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem)
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* ✗ CI.checkpatch: warning for Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (16 preceding siblings ...)
2025-11-11 16:44 ` [PATCH v2 17/17] drm/xe/svm: Document how xe keeps drm_pagemap references Thomas Hellström
@ 2025-11-11 17:07 ` Patchwork
2025-11-11 17:08 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2025-11-11 17:07 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe
== Series Details ==
Series: Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
URL : https://patchwork.freedesktop.org/series/156525/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
d9120d4d84745cf011b4b3efb338747e69179dfb
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit bf57ab8e3fecc7fa12723f0101756c8519bfc637
Author: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Date: Tue Nov 11 17:44:07 2025 +0100
drm/xe/svm: Document how xe keeps drm_pagemap references
As an aid to understanding the lifetime of the drm_pagemaps used
by the xe driver, document how the xe driver keeps the
drm_pagemap references.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
+ /mt/dim checkpatch 52764bea2cf028d285b0f4d86ee1ebfd4e196486 drm-intel
a9338dcfda08 drm/xe/svm: Fix a debug printout
c24e70746ea6 drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap
bc7289a139cb drm/pagemap: Add a refcounted drm_pagemap backpointer to struct drm_pagemap_zdd
f9739c1f102f drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes
a91ca074671c drm/pagemap: Add a drm_pagemap cache and shrinker
-:176: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#176:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 725 lines checked
02eb0698c6c1 drm/xe: Use the drm_pagemap cache and shrinker
b4dce9064090 drm/pagemap: Remove the drm_pagemap_create() interface
72264b929d95 drm/pagemap_util: Add a utility to assign an owner to a set of interconnected gpus
-:213: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_name' - possible side-effects?
#213: FILE: include/drm/drm_pagemap_util.h:53:
+#define DRM_PAGEMAP_OWNER_LIST_DEFINE(_name) \
+ struct drm_pagemap_owner_list _name = { \
+ .lock = __MUTEX_INITIALIZER((_name).lock), \
+ .peers = LIST_HEAD_INIT((_name).peers) }
total: 0 errors, 0 warnings, 1 checks, 192 lines checked
8dbac40754f8 drm/xe: Use the drm_pagemap_util helper to get a svm pagemap owner
3ad68cf21e6c drm/xe: Pass a drm_pagemap pointer around with the memory advise attributes
8417912511fc drm/xe: Use the vma attibute drm_pagemap to select where to migrate
2647af07575f drm/xe: Simplify madvise_preferred_mem_loc()
535c0bc5fb15 drm/xe/uapi: Extend the madvise functionality to support foreign pagemap placement for svm
acc8cf334fb3 drm/xe: Support pcie p2p dma as a fast interconnect
596efd5f8f1d drm/xe/vm: Add a couple of VM debug printouts
085652b9ac6d drm/pagemap, drm/xe: Support migration over interconnect
bf57ab8e3fec drm/xe/svm: Document how xe keeps drm_pagemap references
^ permalink raw reply	[flat|nested] 33+ messages in thread

* ✓ CI.KUnit: success for Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (17 preceding siblings ...)
2025-11-11 17:07 ` ✗ CI.checkpatch: warning for Dynamic drm_pagemaps and Initial multi-device SVM (rev2) Patchwork
@ 2025-11-11 17:08 ` Patchwork
2025-11-11 17:45 ` ✓ Xe.CI.BAT: " Patchwork
` (2 subsequent siblings)
21 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2025-11-11 17:08 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe
== Series Details ==
Series: Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
URL : https://patchwork.freedesktop.org/series/156525/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[17:07:04] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[17:07:08] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
../drivers/gpu/drm/xe/xe_vm.c: In function ‘prefetch_ranges’:
../drivers/gpu/drm/xe/xe_vm.c:2921:72: warning: passing argument 3 of ‘xe_svm_range_needs_migrate_to_vram’ makes integer from pointer without a cast [-Wint-conversion]
2921 | if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, dpagemap)) {
| ^~~~~~~~
| |
| struct drm_pagemap *
In file included from ../drivers/gpu/drm/xe/xe_res_cursor.h:38,
from ../drivers/gpu/drm/xe/xe_vm.c:36:
../drivers/gpu/drm/xe/xe_svm.h:324:45: note: expected ‘u32’ {aka ‘unsigned int’} but argument is of type ‘struct drm_pagemap *’
324 | u32 region)
| ~~~~^~~~~~
[17:07:38] Starting KUnit Kernel (1/1)...
[17:07:38] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[17:07:39] ================== guc_buf (11 subtests) ===================
[17:07:39] [PASSED] test_smallest
[17:07:39] [PASSED] test_largest
[17:07:39] [PASSED] test_granular
[17:07:39] [PASSED] test_unique
[17:07:39] [PASSED] test_overlap
[17:07:39] [PASSED] test_reusable
[17:07:39] [PASSED] test_too_big
[17:07:39] [PASSED] test_flush
[17:07:39] [PASSED] test_lookup
[17:07:39] [PASSED] test_data
[17:07:39] [PASSED] test_class
[17:07:39] ===================== [PASSED] guc_buf =====================
[17:07:39] =================== guc_dbm (7 subtests) ===================
[17:07:39] [PASSED] test_empty
[17:07:39] [PASSED] test_default
[17:07:39] ======================== test_size ========================
[17:07:39] [PASSED] 4
[17:07:39] [PASSED] 8
[17:07:39] [PASSED] 32
[17:07:39] [PASSED] 256
[17:07:39] ==================== [PASSED] test_size ====================
[17:07:39] ======================= test_reuse ========================
[17:07:39] [PASSED] 4
[17:07:39] [PASSED] 8
[17:07:39] [PASSED] 32
[17:07:39] [PASSED] 256
[17:07:39] =================== [PASSED] test_reuse ====================
[17:07:39] =================== test_range_overlap ====================
[17:07:39] [PASSED] 4
[17:07:39] [PASSED] 8
[17:07:39] [PASSED] 32
[17:07:39] [PASSED] 256
[17:07:39] =============== [PASSED] test_range_overlap ================
[17:07:39] =================== test_range_compact ====================
[17:07:39] [PASSED] 4
[17:07:39] [PASSED] 8
[17:07:39] [PASSED] 32
[17:07:39] [PASSED] 256
[17:07:39] =============== [PASSED] test_range_compact ================
[17:07:39] ==================== test_range_spare =====================
[17:07:39] [PASSED] 4
[17:07:39] [PASSED] 8
[17:07:39] [PASSED] 32
[17:07:39] [PASSED] 256
[17:07:39] ================ [PASSED] test_range_spare =================
[17:07:39] ===================== [PASSED] guc_dbm =====================
[17:07:39] =================== guc_idm (6 subtests) ===================
[17:07:39] [PASSED] bad_init
[17:07:39] [PASSED] no_init
[17:07:39] [PASSED] init_fini
[17:07:39] [PASSED] check_used
[17:07:39] [PASSED] check_quota
[17:07:39] [PASSED] check_all
[17:07:39] ===================== [PASSED] guc_idm =====================
[17:07:39] ================== no_relay (3 subtests) ===================
[17:07:39] [PASSED] xe_drops_guc2pf_if_not_ready
[17:07:39] [PASSED] xe_drops_guc2vf_if_not_ready
[17:07:39] [PASSED] xe_rejects_send_if_not_ready
[17:07:39] ==================== [PASSED] no_relay =====================
[17:07:39] ================== pf_relay (14 subtests) ==================
[17:07:39] [PASSED] pf_rejects_guc2pf_too_short
[17:07:39] [PASSED] pf_rejects_guc2pf_too_long
[17:07:39] [PASSED] pf_rejects_guc2pf_no_payload
[17:07:39] [PASSED] pf_fails_no_payload
[17:07:39] [PASSED] pf_fails_bad_origin
[17:07:39] [PASSED] pf_fails_bad_type
[17:07:39] [PASSED] pf_txn_reports_error
[17:07:39] [PASSED] pf_txn_sends_pf2guc
[17:07:39] [PASSED] pf_sends_pf2guc
[17:07:39] [SKIPPED] pf_loopback_nop
[17:07:39] [SKIPPED] pf_loopback_echo
[17:07:39] [SKIPPED] pf_loopback_fail
[17:07:39] [SKIPPED] pf_loopback_busy
[17:07:39] [SKIPPED] pf_loopback_retry
[17:07:39] ==================== [PASSED] pf_relay =====================
[17:07:39] ================== vf_relay (3 subtests) ===================
[17:07:39] [PASSED] vf_rejects_guc2vf_too_short
[17:07:39] [PASSED] vf_rejects_guc2vf_too_long
[17:07:39] [PASSED] vf_rejects_guc2vf_no_payload
[17:07:39] ==================== [PASSED] vf_relay =====================
[17:07:39] ================ pf_gt_config (4 subtests) =================
[17:07:39] [PASSED] fair_contexts_1vf
[17:07:39] [PASSED] fair_doorbells_1vf
[17:07:39] ====================== fair_contexts ======================
[17:07:39] [PASSED] 1 VF
[17:07:39] [PASSED] 2 VFs
[17:07:39] [PASSED] 3 VFs
[17:07:39] [PASSED] 4 VFs
[17:07:39] [PASSED] 5 VFs
[17:07:39] [PASSED] 6 VFs
[17:07:39] [PASSED] 7 VFs
[17:07:39] [PASSED] 8 VFs
[17:07:39] [PASSED] 9 VFs
[17:07:39] [PASSED] 10 VFs
[17:07:39] [PASSED] 11 VFs
[17:07:39] [PASSED] 12 VFs
[17:07:39] [PASSED] 13 VFs
[17:07:39] [PASSED] 14 VFs
[17:07:39] [PASSED] 15 VFs
[17:07:39] [PASSED] 16 VFs
[17:07:39] [PASSED] 17 VFs
[17:07:39] [PASSED] 18 VFs
[17:07:39] [PASSED] 19 VFs
[17:07:39] [PASSED] 20 VFs
[17:07:39] [PASSED] 21 VFs
[17:07:39] [PASSED] 22 VFs
[17:07:39] [PASSED] 23 VFs
[17:07:39] [PASSED] 24 VFs
[17:07:39] [PASSED] 25 VFs
[17:07:39] [PASSED] 26 VFs
[17:07:39] [PASSED] 27 VFs
[17:07:39] [PASSED] 28 VFs
[17:07:39] [PASSED] 29 VFs
[17:07:39] [PASSED] 30 VFs
[17:07:39] [PASSED] 31 VFs
[17:07:39] [PASSED] 32 VFs
[17:07:39] [PASSED] 33 VFs
[17:07:39] [PASSED] 34 VFs
[17:07:39] [PASSED] 35 VFs
[17:07:39] [PASSED] 36 VFs
[17:07:39] [PASSED] 37 VFs
[17:07:39] [PASSED] 38 VFs
[17:07:39] [PASSED] 39 VFs
[17:07:39] [PASSED] 40 VFs
[17:07:39] [PASSED] 41 VFs
[17:07:39] [PASSED] 42 VFs
[17:07:39] [PASSED] 43 VFs
[17:07:39] [PASSED] 44 VFs
[17:07:39] [PASSED] 45 VFs
[17:07:39] [PASSED] 46 VFs
[17:07:39] [PASSED] 47 VFs
[17:07:39] [PASSED] 48 VFs
[17:07:39] [PASSED] 49 VFs
[17:07:39] [PASSED] 50 VFs
[17:07:39] [PASSED] 51 VFs
[17:07:39] [PASSED] 52 VFs
[17:07:39] [PASSED] 53 VFs
[17:07:39] [PASSED] 54 VFs
[17:07:39] [PASSED] 55 VFs
[17:07:39] [PASSED] 56 VFs
[17:07:39] [PASSED] 57 VFs
[17:07:39] [PASSED] 58 VFs
[17:07:39] [PASSED] 59 VFs
[17:07:39] [PASSED] 60 VFs
[17:07:39] [PASSED] 61 VFs
[17:07:39] [PASSED] 62 VFs
[17:07:39] [PASSED] 63 VFs
[17:07:39] ================== [PASSED] fair_contexts ==================
[17:07:39] ===================== fair_doorbells ======================
[17:07:39] [PASSED] 1 VF
[17:07:39] [PASSED] 2 VFs
[17:07:39] [PASSED] 3 VFs
[17:07:39] [PASSED] 4 VFs
[17:07:39] [PASSED] 5 VFs
[17:07:39] [PASSED] 6 VFs
[17:07:39] [PASSED] 7 VFs
[17:07:39] [PASSED] 8 VFs
[17:07:39] [PASSED] 9 VFs
[17:07:39] [PASSED] 10 VFs
[17:07:39] [PASSED] 11 VFs
[17:07:39] [PASSED] 12 VFs
[17:07:39] [PASSED] 13 VFs
[17:07:39] [PASSED] 14 VFs
[17:07:39] [PASSED] 15 VFs
[17:07:39] [PASSED] 16 VFs
[17:07:39] [PASSED] 17 VFs
[17:07:39] [PASSED] 18 VFs
[17:07:39] [PASSED] 19 VFs
[17:07:39] [PASSED] 20 VFs
[17:07:39] [PASSED] 21 VFs
[17:07:39] [PASSED] 22 VFs
[17:07:39] [PASSED] 23 VFs
[17:07:39] [PASSED] 24 VFs
[17:07:39] [PASSED] 25 VFs
[17:07:39] [PASSED] 26 VFs
[17:07:39] [PASSED] 27 VFs
[17:07:39] [PASSED] 28 VFs
[17:07:39] [PASSED] 29 VFs
[17:07:39] [PASSED] 30 VFs
[17:07:39] [PASSED] 31 VFs
[17:07:39] [PASSED] 32 VFs
[17:07:39] [PASSED] 33 VFs
[17:07:39] [PASSED] 34 VFs
[17:07:39] [PASSED] 35 VFs
[17:07:39] [PASSED] 36 VFs
[17:07:39] [PASSED] 37 VFs
[17:07:39] [PASSED] 38 VFs
[17:07:39] [PASSED] 39 VFs
[17:07:39] [PASSED] 40 VFs
[17:07:39] [PASSED] 41 VFs
[17:07:39] [PASSED] 42 VFs
[17:07:39] [PASSED] 43 VFs
[17:07:39] [PASSED] 44 VFs
[17:07:39] [PASSED] 45 VFs
[17:07:39] [PASSED] 46 VFs
[17:07:39] [PASSED] 47 VFs
[17:07:39] [PASSED] 48 VFs
[17:07:39] [PASSED] 49 VFs
[17:07:39] [PASSED] 50 VFs
[17:07:39] [PASSED] 51 VFs
[17:07:39] [PASSED] 52 VFs
[17:07:39] [PASSED] 53 VFs
[17:07:39] [PASSED] 54 VFs
[17:07:39] [PASSED] 55 VFs
[17:07:39] [PASSED] 56 VFs
[17:07:39] [PASSED] 57 VFs
[17:07:39] [PASSED] 58 VFs
[17:07:39] [PASSED] 59 VFs
[17:07:39] [PASSED] 60 VFs
[17:07:39] [PASSED] 61 VFs
[17:07:39] [PASSED] 62 VFs
[17:07:39] [PASSED] 63 VFs
[17:07:39] ================= [PASSED] fair_doorbells ==================
[17:07:39] ================== [PASSED] pf_gt_config ===================
[17:07:39] ===================== lmtt (1 subtest) =====================
[17:07:39] ======================== test_ops =========================
[17:07:39] [PASSED] 2-level
[17:07:39] [PASSED] multi-level
[17:07:39] ==================== [PASSED] test_ops =====================
[17:07:39] ====================== [PASSED] lmtt =======================
[17:07:39] ================= pf_service (11 subtests) =================
[17:07:39] [PASSED] pf_negotiate_any
[17:07:39] [PASSED] pf_negotiate_base_match
[17:07:39] [PASSED] pf_negotiate_base_newer
[17:07:39] [PASSED] pf_negotiate_base_next
[17:07:39] [SKIPPED] pf_negotiate_base_older
[17:07:39] [PASSED] pf_negotiate_base_prev
[17:07:39] [PASSED] pf_negotiate_latest_match
[17:07:39] [PASSED] pf_negotiate_latest_newer
[17:07:39] [PASSED] pf_negotiate_latest_next
[17:07:39] [SKIPPED] pf_negotiate_latest_older
[17:07:39] [SKIPPED] pf_negotiate_latest_prev
[17:07:39] =================== [PASSED] pf_service ====================
[17:07:39] ================= xe_guc_g2g (2 subtests) ==================
[17:07:39] ============== xe_live_guc_g2g_kunit_default ==============
[17:07:39] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[17:07:39] ============== xe_live_guc_g2g_kunit_allmem ===============
[17:07:39] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[17:07:39] =================== [SKIPPED] xe_guc_g2g ===================
[17:07:39] =================== xe_mocs (2 subtests) ===================
[17:07:39] ================ xe_live_mocs_kernel_kunit ================
[17:07:39] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[17:07:39] ================ xe_live_mocs_reset_kunit =================
[17:07:39] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[17:07:39] ==================== [SKIPPED] xe_mocs =====================
[17:07:39] ================= xe_migrate (2 subtests) ==================
[17:07:39] ================= xe_migrate_sanity_kunit =================
[17:07:39] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[17:07:39] ================== xe_validate_ccs_kunit ==================
[17:07:39] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[17:07:39] =================== [SKIPPED] xe_migrate ===================
[17:07:39] ================== xe_dma_buf (1 subtest) ==================
[17:07:39] ==================== xe_dma_buf_kunit =====================
[17:07:39] ================ [SKIPPED] xe_dma_buf_kunit ================
[17:07:39] =================== [SKIPPED] xe_dma_buf ===================
[17:07:39] ================= xe_bo_shrink (1 subtest) =================
[17:07:39] =================== xe_bo_shrink_kunit ====================
[17:07:39] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[17:07:39] ================== [SKIPPED] xe_bo_shrink ==================
[17:07:39] ==================== xe_bo (2 subtests) ====================
[17:07:39] ================== xe_ccs_migrate_kunit ===================
[17:07:39] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[17:07:39] ==================== xe_bo_evict_kunit ====================
[17:07:39] =============== [SKIPPED] xe_bo_evict_kunit ================
[17:07:39] ===================== [SKIPPED] xe_bo ======================
[17:07:39] ==================== args (11 subtests) ====================
[17:07:39] [PASSED] count_args_test
[17:07:39] [PASSED] call_args_example
[17:07:39] [PASSED] call_args_test
[17:07:39] [PASSED] drop_first_arg_example
[17:07:39] [PASSED] drop_first_arg_test
[17:07:39] [PASSED] first_arg_example
[17:07:39] [PASSED] first_arg_test
[17:07:39] [PASSED] last_arg_example
[17:07:39] [PASSED] last_arg_test
[17:07:39] [PASSED] pick_arg_example
[17:07:39] [PASSED] sep_comma_example
[17:07:39] ====================== [PASSED] args =======================
[17:07:39] =================== xe_pci (3 subtests) ====================
[17:07:39] ==================== check_graphics_ip ====================
[17:07:39] [PASSED] 12.00 Xe_LP
[17:07:39] [PASSED] 12.10 Xe_LP+
[17:07:39] [PASSED] 12.55 Xe_HPG
[17:07:39] [PASSED] 12.60 Xe_HPC
[17:07:39] [PASSED] 12.70 Xe_LPG
[17:07:39] [PASSED] 12.71 Xe_LPG
[17:07:39] [PASSED] 12.74 Xe_LPG+
[17:07:39] [PASSED] 20.01 Xe2_HPG
[17:07:39] [PASSED] 20.02 Xe2_HPG
[17:07:39] [PASSED] 20.04 Xe2_LPG
[17:07:39] [PASSED] 30.00 Xe3_LPG
[17:07:39] [PASSED] 30.01 Xe3_LPG
[17:07:39] [PASSED] 30.03 Xe3_LPG
[17:07:39] [PASSED] 30.04 Xe3_LPG
[17:07:39] [PASSED] 30.05 Xe3_LPG
[17:07:39] [PASSED] 35.11 Xe3p_XPC
[17:07:39] ================ [PASSED] check_graphics_ip ================
[17:07:39] ===================== check_media_ip ======================
[17:07:39] [PASSED] 12.00 Xe_M
[17:07:39] [PASSED] 12.55 Xe_HPM
[17:07:39] [PASSED] 13.00 Xe_LPM+
[17:07:39] [PASSED] 13.01 Xe2_HPM
[17:07:39] [PASSED] 20.00 Xe2_LPM
[17:07:39] [PASSED] 30.00 Xe3_LPM
[17:07:39] [PASSED] 30.02 Xe3_LPM
[17:07:39] [PASSED] 35.00 Xe3p_LPM
[17:07:39] [PASSED] 35.03 Xe3p_HPM
[17:07:39] ================= [PASSED] check_media_ip ==================
[17:07:39] =================== check_platform_desc ===================
[17:07:39] [PASSED] 0x9A60 (TIGERLAKE)
[17:07:39] [PASSED] 0x9A68 (TIGERLAKE)
[17:07:39] [PASSED] 0x9A70 (TIGERLAKE)
[17:07:39] [PASSED] 0x9A40 (TIGERLAKE)
[17:07:39] [PASSED] 0x9A49 (TIGERLAKE)
[17:07:39] [PASSED] 0x9A59 (TIGERLAKE)
[17:07:39] [PASSED] 0x9A78 (TIGERLAKE)
[17:07:39] [PASSED] 0x9AC0 (TIGERLAKE)
[17:07:39] [PASSED] 0x9AC9 (TIGERLAKE)
[17:07:39] [PASSED] 0x9AD9 (TIGERLAKE)
[17:07:39] [PASSED] 0x9AF8 (TIGERLAKE)
[17:07:39] [PASSED] 0x4C80 (ROCKETLAKE)
[17:07:39] [PASSED] 0x4C8A (ROCKETLAKE)
[17:07:39] [PASSED] 0x4C8B (ROCKETLAKE)
[17:07:39] [PASSED] 0x4C8C (ROCKETLAKE)
[17:07:39] [PASSED] 0x4C90 (ROCKETLAKE)
[17:07:39] [PASSED] 0x4C9A (ROCKETLAKE)
[17:07:39] [PASSED] 0x4680 (ALDERLAKE_S)
[17:07:39] [PASSED] 0x4682 (ALDERLAKE_S)
[17:07:39] [PASSED] 0x4688 (ALDERLAKE_S)
[17:07:39] [PASSED] 0x468A (ALDERLAKE_S)
[17:07:39] [PASSED] 0x468B (ALDERLAKE_S)
[17:07:39] [PASSED] 0x4690 (ALDERLAKE_S)
[17:07:39] [PASSED] 0x4692 (ALDERLAKE_S)
[17:07:39] [PASSED] 0x4693 (ALDERLAKE_S)
[17:07:39] [PASSED] 0x46A0 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46A1 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46A2 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46A3 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46A6 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46A8 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46AA (ALDERLAKE_P)
[17:07:39] [PASSED] 0x462A (ALDERLAKE_P)
[17:07:39] [PASSED] 0x4626 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x4628 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46B0 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46B1 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46B2 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46B3 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46C0 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46C1 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46C2 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46C3 (ALDERLAKE_P)
[17:07:39] [PASSED] 0x46D0 (ALDERLAKE_N)
[17:07:39] [PASSED] 0x46D1 (ALDERLAKE_N)
[17:07:39] [PASSED] 0x46D2 (ALDERLAKE_N)
[17:07:39] [PASSED] 0x46D3 (ALDERLAKE_N)
[17:07:39] [PASSED] 0x46D4 (ALDERLAKE_N)
[17:07:39] [PASSED] 0xA721 (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7A1 (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7A9 (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7AC (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7AD (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA720 (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7A0 (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7A8 (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7AA (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA7AB (ALDERLAKE_P)
[17:07:39] [PASSED] 0xA780 (ALDERLAKE_S)
[17:07:39] [PASSED] 0xA781 (ALDERLAKE_S)
[17:07:39] [PASSED] 0xA782 (ALDERLAKE_S)
[17:07:39] [PASSED] 0xA783 (ALDERLAKE_S)
[17:07:39] [PASSED] 0xA788 (ALDERLAKE_S)
[17:07:39] [PASSED] 0xA789 (ALDERLAKE_S)
[17:07:39] [PASSED] 0xA78A (ALDERLAKE_S)
[17:07:39] [PASSED] 0xA78B (ALDERLAKE_S)
[17:07:39] [PASSED] 0x4905 (DG1)
[17:07:39] [PASSED] 0x4906 (DG1)
[17:07:39] [PASSED] 0x4907 (DG1)
[17:07:39] [PASSED] 0x4908 (DG1)
[17:07:39] [PASSED] 0x4909 (DG1)
[17:07:39] [PASSED] 0x56C0 (DG2)
[17:07:39] [PASSED] 0x56C2 (DG2)
[17:07:39] [PASSED] 0x56C1 (DG2)
[17:07:39] [PASSED] 0x7D51 (METEORLAKE)
[17:07:39] [PASSED] 0x7DD1 (METEORLAKE)
[17:07:39] [PASSED] 0x7D41 (METEORLAKE)
[17:07:39] [PASSED] 0x7D67 (METEORLAKE)
[17:07:39] [PASSED] 0xB640 (METEORLAKE)
[17:07:39] [PASSED] 0x56A0 (DG2)
[17:07:39] [PASSED] 0x56A1 (DG2)
[17:07:39] [PASSED] 0x56A2 (DG2)
[17:07:39] [PASSED] 0x56BE (DG2)
[17:07:39] [PASSED] 0x56BF (DG2)
[17:07:39] [PASSED] 0x5690 (DG2)
[17:07:39] [PASSED] 0x5691 (DG2)
[17:07:39] [PASSED] 0x5692 (DG2)
[17:07:39] [PASSED] 0x56A5 (DG2)
[17:07:39] [PASSED] 0x56A6 (DG2)
[17:07:39] [PASSED] 0x56B0 (DG2)
[17:07:39] [PASSED] 0x56B1 (DG2)
[17:07:39] [PASSED] 0x56BA (DG2)
[17:07:39] [PASSED] 0x56BB (DG2)
[17:07:39] [PASSED] 0x56BC (DG2)
[17:07:39] [PASSED] 0x56BD (DG2)
[17:07:39] [PASSED] 0x5693 (DG2)
[17:07:39] [PASSED] 0x5694 (DG2)
[17:07:39] [PASSED] 0x5695 (DG2)
[17:07:39] [PASSED] 0x56A3 (DG2)
[17:07:39] [PASSED] 0x56A4 (DG2)
[17:07:39] [PASSED] 0x56B2 (DG2)
[17:07:39] [PASSED] 0x56B3 (DG2)
[17:07:39] [PASSED] 0x5696 (DG2)
[17:07:39] [PASSED] 0x5697 (DG2)
[17:07:39] [PASSED] 0xB69 (PVC)
[17:07:39] [PASSED] 0xB6E (PVC)
[17:07:39] [PASSED] 0xBD4 (PVC)
[17:07:39] [PASSED] 0xBD5 (PVC)
[17:07:39] [PASSED] 0xBD6 (PVC)
[17:07:39] [PASSED] 0xBD7 (PVC)
[17:07:39] [PASSED] 0xBD8 (PVC)
[17:07:39] [PASSED] 0xBD9 (PVC)
[17:07:39] [PASSED] 0xBDA (PVC)
[17:07:39] [PASSED] 0xBDB (PVC)
[17:07:39] [PASSED] 0xBE0 (PVC)
[17:07:39] [PASSED] 0xBE1 (PVC)
[17:07:39] [PASSED] 0xBE5 (PVC)
[17:07:39] [PASSED] 0x7D40 (METEORLAKE)
[17:07:39] [PASSED] 0x7D45 (METEORLAKE)
[17:07:39] [PASSED] 0x7D55 (METEORLAKE)
[17:07:39] [PASSED] 0x7D60 (METEORLAKE)
[17:07:39] [PASSED] 0x7DD5 (METEORLAKE)
[17:07:39] [PASSED] 0x6420 (LUNARLAKE)
[17:07:39] [PASSED] 0x64A0 (LUNARLAKE)
[17:07:39] [PASSED] 0x64B0 (LUNARLAKE)
[17:07:39] [PASSED] 0xE202 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE209 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE20B (BATTLEMAGE)
[17:07:39] [PASSED] 0xE20C (BATTLEMAGE)
[17:07:39] [PASSED] 0xE20D (BATTLEMAGE)
[17:07:39] [PASSED] 0xE210 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE211 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE212 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE216 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE220 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE221 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE222 (BATTLEMAGE)
[17:07:39] [PASSED] 0xE223 (BATTLEMAGE)
[17:07:39] [PASSED] 0xB080 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB081 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB082 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB083 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB084 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB085 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB086 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB087 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB08F (PANTHERLAKE)
[17:07:39] [PASSED] 0xB090 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB0A0 (PANTHERLAKE)
[17:07:39] [PASSED] 0xB0B0 (PANTHERLAKE)
[17:07:39] [PASSED] 0xD740 (NOVALAKE_S)
[17:07:39] [PASSED] 0xD741 (NOVALAKE_S)
[17:07:39] [PASSED] 0xD742 (NOVALAKE_S)
[17:07:39] [PASSED] 0xD743 (NOVALAKE_S)
[17:07:39] [PASSED] 0xD744 (NOVALAKE_S)
[17:07:39] [PASSED] 0xD745 (NOVALAKE_S)
[17:07:39] [PASSED] 0x674C (CRESCENTISLAND)
[17:07:39] [PASSED] 0xFD80 (PANTHERLAKE)
[17:07:39] [PASSED] 0xFD81 (PANTHERLAKE)
[17:07:39] =============== [PASSED] check_platform_desc ===============
[17:07:39] ===================== [PASSED] xe_pci ======================
[17:07:39] =================== xe_rtp (2 subtests) ====================
[17:07:39] =============== xe_rtp_process_to_sr_tests ================
[17:07:39] [PASSED] coalesce-same-reg
[17:07:39] [PASSED] no-match-no-add
[17:07:39] [PASSED] match-or
[17:07:39] [PASSED] match-or-xfail
[17:07:39] [PASSED] no-match-no-add-multiple-rules
[17:07:39] [PASSED] two-regs-two-entries
[17:07:39] [PASSED] clr-one-set-other
[17:07:39] [PASSED] set-field
[17:07:39] [PASSED] conflict-duplicate
[17:07:39] [PASSED] conflict-not-disjoint
[17:07:39] [PASSED] conflict-reg-type
[17:07:39] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[17:07:39] ================== xe_rtp_process_tests ===================
[17:07:39] [PASSED] active1
[17:07:39] [PASSED] active2
[17:07:39] [PASSED] active-inactive
[17:07:39] [PASSED] inactive-active
[17:07:39] [PASSED] inactive-1st_or_active-inactive
[17:07:39] [PASSED] inactive-2nd_or_active-inactive
[17:07:39] [PASSED] inactive-last_or_active-inactive
[17:07:39] [PASSED] inactive-no_or_active-inactive
[17:07:39] ============== [PASSED] xe_rtp_process_tests ===============
[17:07:39] ===================== [PASSED] xe_rtp ======================
[17:07:39] ==================== xe_wa (1 subtest) =====================
[17:07:39] ======================== xe_wa_gt =========================
[17:07:39] [PASSED] TIGERLAKE B0
[17:07:39] [PASSED] DG1 A0
[17:07:39] [PASSED] DG1 B0
[17:07:39] [PASSED] ALDERLAKE_S A0
[17:07:39] [PASSED] ALDERLAKE_S B0
[17:07:39] [PASSED] ALDERLAKE_S C0
[17:07:39] [PASSED] ALDERLAKE_S D0
[17:07:39] [PASSED] ALDERLAKE_P A0
[17:07:39] [PASSED] ALDERLAKE_P B0
[17:07:39] [PASSED] ALDERLAKE_P C0
[17:07:39] [PASSED] ALDERLAKE_S RPLS D0
[17:07:39] [PASSED] ALDERLAKE_P RPLU E0
[17:07:39] [PASSED] DG2 G10 C0
[17:07:39] [PASSED] DG2 G11 B1
[17:07:39] [PASSED] DG2 G12 A1
[17:07:39] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[17:07:39] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[17:07:39] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[17:07:39] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[17:07:39] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[17:07:39] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[17:07:39] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[17:07:39] ==================== [PASSED] xe_wa_gt =====================
[17:07:39] ====================== [PASSED] xe_wa ======================
[17:07:39] ============================================================
[17:07:39] Testing complete. Ran 446 tests: passed: 428, skipped: 18
[17:07:39] Elapsed time: 35.203s total, 4.199s configuring, 30.537s building, 0.422s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[17:07:39] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[17:07:41] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[17:08:05] Starting KUnit Kernel (1/1)...
[17:08:05] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[17:08:05] ============ drm_test_pick_cmdline (2 subtests) ============
[17:08:05] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[17:08:05] =============== drm_test_pick_cmdline_named ===============
[17:08:05] [PASSED] NTSC
[17:08:05] [PASSED] NTSC-J
[17:08:05] [PASSED] PAL
[17:08:05] [PASSED] PAL-M
[17:08:05] =========== [PASSED] drm_test_pick_cmdline_named ===========
[17:08:05] ============== [PASSED] drm_test_pick_cmdline ==============
[17:08:05] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[17:08:05] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[17:08:05] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[17:08:05] =========== drm_validate_clone_mode (2 subtests) ===========
[17:08:05] ============== drm_test_check_in_clone_mode ===============
[17:08:05] [PASSED] in_clone_mode
[17:08:05] [PASSED] not_in_clone_mode
[17:08:05] ========== [PASSED] drm_test_check_in_clone_mode ===========
[17:08:05] =============== drm_test_check_valid_clones ===============
[17:08:05] [PASSED] not_in_clone_mode
[17:08:05] [PASSED] valid_clone
[17:08:05] [PASSED] invalid_clone
[17:08:05] =========== [PASSED] drm_test_check_valid_clones ===========
[17:08:05] ============= [PASSED] drm_validate_clone_mode =============
[17:08:05] ============= drm_validate_modeset (1 subtest) =============
[17:08:05] [PASSED] drm_test_check_connector_changed_modeset
[17:08:05] ============== [PASSED] drm_validate_modeset ===============
[17:08:05] ====== drm_test_bridge_get_current_state (2 subtests) ======
[17:08:05] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[17:08:05] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[17:08:05] ======== [PASSED] drm_test_bridge_get_current_state ========
[17:08:05] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[17:08:05] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[17:08:05] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[17:08:05] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[17:08:05] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[17:08:05] ============== drm_bridge_alloc (2 subtests) ===============
[17:08:05] [PASSED] drm_test_drm_bridge_alloc_basic
[17:08:05] [PASSED] drm_test_drm_bridge_alloc_get_put
[17:08:05] ================ [PASSED] drm_bridge_alloc =================
[17:08:05] ================== drm_buddy (8 subtests) ==================
[17:08:05] [PASSED] drm_test_buddy_alloc_limit
[17:08:05] [PASSED] drm_test_buddy_alloc_optimistic
[17:08:05] [PASSED] drm_test_buddy_alloc_pessimistic
[17:08:05] [PASSED] drm_test_buddy_alloc_pathological
[17:08:05] [PASSED] drm_test_buddy_alloc_contiguous
[17:08:05] [PASSED] drm_test_buddy_alloc_clear
[17:08:05] [PASSED] drm_test_buddy_alloc_range_bias
[17:08:05] [PASSED] drm_test_buddy_fragmentation_performance
[17:08:05] ==================== [PASSED] drm_buddy ====================
[17:08:05] ============= drm_cmdline_parser (40 subtests) =============
[17:08:05] [PASSED] drm_test_cmdline_force_d_only
[17:08:05] [PASSED] drm_test_cmdline_force_D_only_dvi
[17:08:05] [PASSED] drm_test_cmdline_force_D_only_hdmi
[17:08:05] [PASSED] drm_test_cmdline_force_D_only_not_digital
[17:08:05] [PASSED] drm_test_cmdline_force_e_only
[17:08:05] [PASSED] drm_test_cmdline_res
[17:08:05] [PASSED] drm_test_cmdline_res_vesa
[17:08:05] [PASSED] drm_test_cmdline_res_vesa_rblank
[17:08:05] [PASSED] drm_test_cmdline_res_rblank
[17:08:05] [PASSED] drm_test_cmdline_res_bpp
[17:08:05] [PASSED] drm_test_cmdline_res_refresh
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[17:08:05] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[17:08:05] [PASSED] drm_test_cmdline_res_margins_force_on
[17:08:05] [PASSED] drm_test_cmdline_res_vesa_margins
[17:08:05] [PASSED] drm_test_cmdline_name
[17:08:05] [PASSED] drm_test_cmdline_name_bpp
[17:08:05] [PASSED] drm_test_cmdline_name_option
[17:08:05] [PASSED] drm_test_cmdline_name_bpp_option
[17:08:05] [PASSED] drm_test_cmdline_rotate_0
[17:08:05] [PASSED] drm_test_cmdline_rotate_90
[17:08:05] [PASSED] drm_test_cmdline_rotate_180
[17:08:05] [PASSED] drm_test_cmdline_rotate_270
[17:08:05] [PASSED] drm_test_cmdline_hmirror
[17:08:05] [PASSED] drm_test_cmdline_vmirror
[17:08:05] [PASSED] drm_test_cmdline_margin_options
[17:08:05] [PASSED] drm_test_cmdline_multiple_options
[17:08:05] [PASSED] drm_test_cmdline_bpp_extra_and_option
[17:08:05] [PASSED] drm_test_cmdline_extra_and_option
[17:08:05] [PASSED] drm_test_cmdline_freestanding_options
[17:08:05] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[17:08:05] [PASSED] drm_test_cmdline_panel_orientation
[17:08:05] ================ drm_test_cmdline_invalid =================
[17:08:05] [PASSED] margin_only
[17:08:05] [PASSED] interlace_only
[17:08:05] [PASSED] res_missing_x
[17:08:05] [PASSED] res_missing_y
[17:08:05] [PASSED] res_bad_y
[17:08:05] [PASSED] res_missing_y_bpp
[17:08:05] [PASSED] res_bad_bpp
[17:08:05] [PASSED] res_bad_refresh
[17:08:05] [PASSED] res_bpp_refresh_force_on_off
[17:08:05] [PASSED] res_invalid_mode
[17:08:05] [PASSED] res_bpp_wrong_place_mode
[17:08:05] [PASSED] name_bpp_refresh
[17:08:05] [PASSED] name_refresh
[17:08:05] [PASSED] name_refresh_wrong_mode
[17:08:05] [PASSED] name_refresh_invalid_mode
[17:08:05] [PASSED] rotate_multiple
[17:08:05] [PASSED] rotate_invalid_val
[17:08:05] [PASSED] rotate_truncated
[17:08:05] [PASSED] invalid_option
[17:08:05] [PASSED] invalid_tv_option
[17:08:05] [PASSED] truncated_tv_option
[17:08:05] ============ [PASSED] drm_test_cmdline_invalid =============
[17:08:05] =============== drm_test_cmdline_tv_options ===============
[17:08:05] [PASSED] NTSC
[17:08:05] [PASSED] NTSC_443
[17:08:05] [PASSED] NTSC_J
[17:08:05] [PASSED] PAL
[17:08:05] [PASSED] PAL_M
[17:08:05] [PASSED] PAL_N
[17:08:05] [PASSED] SECAM
[17:08:05] [PASSED] MONO_525
[17:08:05] [PASSED] MONO_625
[17:08:05] =========== [PASSED] drm_test_cmdline_tv_options ===========
[17:08:05] =============== [PASSED] drm_cmdline_parser ================
[17:08:05] ========== drmm_connector_hdmi_init (20 subtests) ==========
[17:08:05] [PASSED] drm_test_connector_hdmi_init_valid
[17:08:05] [PASSED] drm_test_connector_hdmi_init_bpc_8
[17:08:05] [PASSED] drm_test_connector_hdmi_init_bpc_10
[17:08:05] [PASSED] drm_test_connector_hdmi_init_bpc_12
[17:08:05] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[17:08:05] [PASSED] drm_test_connector_hdmi_init_bpc_null
[17:08:05] [PASSED] drm_test_connector_hdmi_init_formats_empty
[17:08:05] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[17:08:05] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[17:08:05] [PASSED] supported_formats=0x9 yuv420_allowed=1
[17:08:05] [PASSED] supported_formats=0x9 yuv420_allowed=0
[17:08:05] [PASSED] supported_formats=0x3 yuv420_allowed=1
[17:08:05] [PASSED] supported_formats=0x3 yuv420_allowed=0
[17:08:05] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[17:08:05] [PASSED] drm_test_connector_hdmi_init_null_ddc
[17:08:05] [PASSED] drm_test_connector_hdmi_init_null_product
[17:08:05] [PASSED] drm_test_connector_hdmi_init_null_vendor
[17:08:05] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[17:08:05] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[17:08:05] [PASSED] drm_test_connector_hdmi_init_product_valid
[17:08:05] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[17:08:05] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[17:08:05] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[17:08:05] ========= drm_test_connector_hdmi_init_type_valid =========
[17:08:05] [PASSED] HDMI-A
[17:08:05] [PASSED] HDMI-B
[17:08:05] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[17:08:05] ======== drm_test_connector_hdmi_init_type_invalid ========
[17:08:05] [PASSED] Unknown
[17:08:05] [PASSED] VGA
[17:08:05] [PASSED] DVI-I
[17:08:05] [PASSED] DVI-D
[17:08:05] [PASSED] DVI-A
[17:08:05] [PASSED] Composite
[17:08:05] [PASSED] SVIDEO
[17:08:05] [PASSED] LVDS
[17:08:05] [PASSED] Component
[17:08:05] [PASSED] DIN
[17:08:05] [PASSED] DP
[17:08:05] [PASSED] TV
[17:08:05] [PASSED] eDP
[17:08:05] [PASSED] Virtual
[17:08:05] [PASSED] DSI
[17:08:05] [PASSED] DPI
[17:08:05] [PASSED] Writeback
[17:08:05] [PASSED] SPI
[17:08:05] [PASSED] USB
[17:08:05] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[17:08:05] ============ [PASSED] drmm_connector_hdmi_init =============
[17:08:05] ============= drmm_connector_init (3 subtests) =============
[17:08:05] [PASSED] drm_test_drmm_connector_init
[17:08:05] [PASSED] drm_test_drmm_connector_init_null_ddc
[17:08:05] ========= drm_test_drmm_connector_init_type_valid =========
[17:08:05] [PASSED] Unknown
[17:08:05] [PASSED] VGA
[17:08:05] [PASSED] DVI-I
[17:08:05] [PASSED] DVI-D
[17:08:05] [PASSED] DVI-A
[17:08:05] [PASSED] Composite
[17:08:05] [PASSED] SVIDEO
[17:08:05] [PASSED] LVDS
[17:08:05] [PASSED] Component
[17:08:05] [PASSED] DIN
[17:08:05] [PASSED] DP
[17:08:05] [PASSED] HDMI-A
[17:08:05] [PASSED] HDMI-B
[17:08:05] [PASSED] TV
[17:08:05] [PASSED] eDP
[17:08:05] [PASSED] Virtual
[17:08:05] [PASSED] DSI
[17:08:05] [PASSED] DPI
[17:08:05] [PASSED] Writeback
[17:08:05] [PASSED] SPI
[17:08:05] [PASSED] USB
[17:08:05] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[17:08:05] =============== [PASSED] drmm_connector_init ===============
[17:08:05] ========= drm_connector_dynamic_init (6 subtests) ==========
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_init
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_init_properties
[17:08:05] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[17:08:05] [PASSED] Unknown
[17:08:05] [PASSED] VGA
[17:08:05] [PASSED] DVI-I
[17:08:05] [PASSED] DVI-D
[17:08:05] [PASSED] DVI-A
[17:08:05] [PASSED] Composite
[17:08:05] [PASSED] SVIDEO
[17:08:05] [PASSED] LVDS
[17:08:05] [PASSED] Component
[17:08:05] [PASSED] DIN
[17:08:05] [PASSED] DP
[17:08:05] [PASSED] HDMI-A
[17:08:05] [PASSED] HDMI-B
[17:08:05] [PASSED] TV
[17:08:05] [PASSED] eDP
[17:08:05] [PASSED] Virtual
[17:08:05] [PASSED] DSI
[17:08:05] [PASSED] DPI
[17:08:05] [PASSED] Writeback
[17:08:05] [PASSED] SPI
[17:08:05] [PASSED] USB
[17:08:05] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[17:08:05] ======== drm_test_drm_connector_dynamic_init_name =========
[17:08:05] [PASSED] Unknown
[17:08:05] [PASSED] VGA
[17:08:05] [PASSED] DVI-I
[17:08:05] [PASSED] DVI-D
[17:08:05] [PASSED] DVI-A
[17:08:05] [PASSED] Composite
[17:08:05] [PASSED] SVIDEO
[17:08:05] [PASSED] LVDS
[17:08:05] [PASSED] Component
[17:08:05] [PASSED] DIN
[17:08:05] [PASSED] DP
[17:08:05] [PASSED] HDMI-A
[17:08:05] [PASSED] HDMI-B
[17:08:05] [PASSED] TV
[17:08:05] [PASSED] eDP
[17:08:05] [PASSED] Virtual
[17:08:05] [PASSED] DSI
[17:08:05] [PASSED] DPI
[17:08:05] [PASSED] Writeback
[17:08:05] [PASSED] SPI
[17:08:05] [PASSED] USB
[17:08:05] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[17:08:05] =========== [PASSED] drm_connector_dynamic_init ============
[17:08:05] ==== drm_connector_dynamic_register_early (4 subtests) =====
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[17:08:05] ====== [PASSED] drm_connector_dynamic_register_early =======
[17:08:05] ======= drm_connector_dynamic_register (7 subtests) ========
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[17:08:05] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[17:08:05] ========= [PASSED] drm_connector_dynamic_register ==========
[17:08:05] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[17:08:05] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[17:08:05] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[17:08:05] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[17:08:05] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[17:08:05] ========== drm_test_get_tv_mode_from_name_valid ===========
[17:08:05] [PASSED] NTSC
[17:08:05] [PASSED] NTSC-443
[17:08:05] [PASSED] NTSC-J
[17:08:05] [PASSED] PAL
[17:08:05] [PASSED] PAL-M
[17:08:05] [PASSED] PAL-N
[17:08:05] [PASSED] SECAM
[17:08:05] [PASSED] Mono
[17:08:05] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[17:08:05] [PASSED] drm_test_get_tv_mode_from_name_truncated
[17:08:05] ============ [PASSED] drm_get_tv_mode_from_name ============
[17:08:05] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[17:08:05] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[17:08:05] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[17:08:05] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[17:08:05] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[17:08:05] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[17:08:05] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[17:08:05] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[17:08:05] [PASSED] VIC 96
[17:08:05] [PASSED] VIC 97
[17:08:05] [PASSED] VIC 101
[17:08:05] [PASSED] VIC 102
[17:08:05] [PASSED] VIC 106
[17:08:05] [PASSED] VIC 107
[17:08:05] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[17:08:05] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[17:08:05] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[17:08:05] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[17:08:05] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[17:08:05] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[17:08:05] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[17:08:05] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[17:08:05] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[17:08:05] [PASSED] Automatic
[17:08:05] [PASSED] Full
[17:08:05] [PASSED] Limited 16:235
[17:08:05] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[17:08:05] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[17:08:05] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[17:08:05] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[17:08:05] === drm_test_drm_hdmi_connector_get_output_format_name ====
[17:08:05] [PASSED] RGB
[17:08:05] [PASSED] YUV 4:2:0
[17:08:05] [PASSED] YUV 4:2:2
[17:08:05] [PASSED] YUV 4:4:4
[17:08:05] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[17:08:05] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[17:08:05] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[17:08:05] ============= drm_damage_helper (21 subtests) ==============
[17:08:05] [PASSED] drm_test_damage_iter_no_damage
[17:08:05] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[17:08:05] [PASSED] drm_test_damage_iter_no_damage_src_moved
[17:08:05] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[17:08:05] [PASSED] drm_test_damage_iter_no_damage_not_visible
[17:08:05] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[17:08:05] [PASSED] drm_test_damage_iter_no_damage_no_fb
[17:08:05] [PASSED] drm_test_damage_iter_simple_damage
[17:08:05] [PASSED] drm_test_damage_iter_single_damage
[17:08:05] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[17:08:05] [PASSED] drm_test_damage_iter_single_damage_outside_src
[17:08:05] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[17:08:05] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[17:08:05] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[17:08:05] [PASSED] drm_test_damage_iter_single_damage_src_moved
[17:08:05] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[17:08:05] [PASSED] drm_test_damage_iter_damage
[17:08:05] [PASSED] drm_test_damage_iter_damage_one_intersect
[17:08:05] [PASSED] drm_test_damage_iter_damage_one_outside
[17:08:05] [PASSED] drm_test_damage_iter_damage_src_moved
[17:08:05] [PASSED] drm_test_damage_iter_damage_not_visible
[17:08:05] ================ [PASSED] drm_damage_helper ================
[17:08:05] ============== drm_dp_mst_helper (3 subtests) ==============
[17:08:05] ============== drm_test_dp_mst_calc_pbn_mode ==============
[17:08:05] [PASSED] Clock 154000 BPP 30 DSC disabled
[17:08:05] [PASSED] Clock 234000 BPP 30 DSC disabled
[17:08:05] [PASSED] Clock 297000 BPP 24 DSC disabled
[17:08:05] [PASSED] Clock 332880 BPP 24 DSC enabled
[17:08:05] [PASSED] Clock 324540 BPP 24 DSC enabled
[17:08:05] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[17:08:05] ============== drm_test_dp_mst_calc_pbn_div ===============
[17:08:05] [PASSED] Link rate 2000000 lane count 4
[17:08:05] [PASSED] Link rate 2000000 lane count 2
[17:08:05] [PASSED] Link rate 2000000 lane count 1
[17:08:05] [PASSED] Link rate 1350000 lane count 4
[17:08:05] [PASSED] Link rate 1350000 lane count 2
[17:08:05] [PASSED] Link rate 1350000 lane count 1
[17:08:05] [PASSED] Link rate 1000000 lane count 4
[17:08:05] [PASSED] Link rate 1000000 lane count 2
[17:08:05] [PASSED] Link rate 1000000 lane count 1
[17:08:05] [PASSED] Link rate 810000 lane count 4
[17:08:05] [PASSED] Link rate 810000 lane count 2
[17:08:05] [PASSED] Link rate 810000 lane count 1
[17:08:05] [PASSED] Link rate 540000 lane count 4
[17:08:05] [PASSED] Link rate 540000 lane count 2
[17:08:05] [PASSED] Link rate 540000 lane count 1
[17:08:05] [PASSED] Link rate 270000 lane count 4
[17:08:05] [PASSED] Link rate 270000 lane count 2
[17:08:05] [PASSED] Link rate 270000 lane count 1
[17:08:05] [PASSED] Link rate 162000 lane count 4
[17:08:05] [PASSED] Link rate 162000 lane count 2
[17:08:05] [PASSED] Link rate 162000 lane count 1
[17:08:05] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[17:08:05] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[17:08:05] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[17:08:05] [PASSED] DP_POWER_UP_PHY with port number
[17:08:05] [PASSED] DP_POWER_DOWN_PHY with port number
[17:08:05] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[17:08:05] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[17:08:05] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[17:08:05] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[17:08:05] [PASSED] DP_QUERY_PAYLOAD with port number
[17:08:05] [PASSED] DP_QUERY_PAYLOAD with VCPI
[17:08:05] [PASSED] DP_REMOTE_DPCD_READ with port number
[17:08:05] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[17:08:05] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[17:08:05] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[17:08:05] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[17:08:05] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[17:08:05] [PASSED] DP_REMOTE_I2C_READ with port number
[17:08:05] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[17:08:05] [PASSED] DP_REMOTE_I2C_READ with transactions array
[17:08:05] [PASSED] DP_REMOTE_I2C_WRITE with port number
[17:08:05] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[17:08:05] [PASSED] DP_REMOTE_I2C_WRITE with data array
[17:08:05] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[17:08:05] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[17:08:05] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[17:08:05] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[17:08:05] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[17:08:05] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[17:08:05] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[17:08:05] ================ [PASSED] drm_dp_mst_helper ================
[17:08:05] ================== drm_exec (7 subtests) ===================
[17:08:05] [PASSED] sanitycheck
[17:08:05] [PASSED] test_lock
[17:08:05] [PASSED] test_lock_unlock
[17:08:05] [PASSED] test_duplicates
[17:08:05] [PASSED] test_prepare
[17:08:05] [PASSED] test_prepare_array
[17:08:05] [PASSED] test_multiple_loops
[17:08:05] ==================== [PASSED] drm_exec =====================
[17:08:05] =========== drm_format_helper_test (17 subtests) ===========
[17:08:05] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[17:08:05] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[17:08:05] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[17:08:05] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[17:08:05] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[17:08:05] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[17:08:05] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[17:08:05] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[17:08:05] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[17:08:05] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[17:08:05] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[17:08:05] ============== drm_test_fb_xrgb8888_to_mono ===============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[17:08:05] ==================== drm_test_fb_swab =====================
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ================ [PASSED] drm_test_fb_swab =================
[17:08:05] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[17:08:05] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[17:08:05] [PASSED] single_pixel_source_buffer
[17:08:05] [PASSED] single_pixel_clip_rectangle
[17:08:05] [PASSED] well_known_colors
[17:08:05] [PASSED] destination_pitch
[17:08:05] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[17:08:05] ================= drm_test_fb_clip_offset =================
[17:08:05] [PASSED] pass through
[17:08:05] [PASSED] horizontal offset
[17:08:05] [PASSED] vertical offset
[17:08:05] [PASSED] horizontal and vertical offset
[17:08:05] [PASSED] horizontal offset (custom pitch)
[17:08:05] [PASSED] vertical offset (custom pitch)
[17:08:05] [PASSED] horizontal and vertical offset (custom pitch)
[17:08:05] ============= [PASSED] drm_test_fb_clip_offset =============
[17:08:05] =================== drm_test_fb_memcpy ====================
[17:08:05] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[17:08:05] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[17:08:05] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[17:08:05] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[17:08:05] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[17:08:05] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[17:08:05] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[17:08:05] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[17:08:05] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[17:08:05] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[17:08:05] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[17:08:05] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[17:08:05] =============== [PASSED] drm_test_fb_memcpy ================
[17:08:05] ============= [PASSED] drm_format_helper_test ==============
[17:08:05] ================= drm_format (18 subtests) =================
[17:08:05] [PASSED] drm_test_format_block_width_invalid
[17:08:05] [PASSED] drm_test_format_block_width_one_plane
[17:08:05] [PASSED] drm_test_format_block_width_two_plane
[17:08:05] [PASSED] drm_test_format_block_width_three_plane
[17:08:05] [PASSED] drm_test_format_block_width_tiled
[17:08:05] [PASSED] drm_test_format_block_height_invalid
[17:08:05] [PASSED] drm_test_format_block_height_one_plane
[17:08:05] [PASSED] drm_test_format_block_height_two_plane
[17:08:05] [PASSED] drm_test_format_block_height_three_plane
[17:08:05] [PASSED] drm_test_format_block_height_tiled
[17:08:05] [PASSED] drm_test_format_min_pitch_invalid
[17:08:05] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[17:08:05] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[17:08:05] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[17:08:05] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[17:08:05] [PASSED] drm_test_format_min_pitch_two_plane
[17:08:05] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[17:08:05] [PASSED] drm_test_format_min_pitch_tiled
[17:08:05] =================== [PASSED] drm_format ====================
[17:08:05] ============== drm_framebuffer (10 subtests) ===============
[17:08:05] ========== drm_test_framebuffer_check_src_coords ==========
[17:08:05] [PASSED] Success: source fits into fb
[17:08:05] [PASSED] Fail: overflowing fb with x-axis coordinate
[17:08:05] [PASSED] Fail: overflowing fb with y-axis coordinate
[17:08:05] [PASSED] Fail: overflowing fb with source width
[17:08:05] [PASSED] Fail: overflowing fb with source height
[17:08:05] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[17:08:05] [PASSED] drm_test_framebuffer_cleanup
[17:08:05] =============== drm_test_framebuffer_create ===============
[17:08:05] [PASSED] ABGR8888 normal sizes
[17:08:05] [PASSED] ABGR8888 max sizes
[17:08:05] [PASSED] ABGR8888 pitch greater than min required
[17:08:05] [PASSED] ABGR8888 pitch less than min required
[17:08:05] [PASSED] ABGR8888 Invalid width
[17:08:05] [PASSED] ABGR8888 Invalid buffer handle
[17:08:05] [PASSED] No pixel format
[17:08:05] [PASSED] ABGR8888 Width 0
[17:08:05] [PASSED] ABGR8888 Height 0
[17:08:05] [PASSED] ABGR8888 Out of bound height * pitch combination
[17:08:05] [PASSED] ABGR8888 Large buffer offset
[17:08:05] [PASSED] ABGR8888 Buffer offset for inexistent plane
[17:08:05] [PASSED] ABGR8888 Invalid flag
[17:08:05] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[17:08:05] [PASSED] ABGR8888 Valid buffer modifier
[17:08:05] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[17:08:05] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[17:08:05] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[17:08:05] [PASSED] NV12 Normal sizes
[17:08:05] [PASSED] NV12 Max sizes
[17:08:05] [PASSED] NV12 Invalid pitch
[17:08:05] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[17:08:05] [PASSED] NV12 different modifier per-plane
[17:08:05] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[17:08:05] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[17:08:05] [PASSED] NV12 Modifier for inexistent plane
[17:08:05] [PASSED] NV12 Handle for inexistent plane
[17:08:05] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[17:08:05] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[17:08:05] [PASSED] YVU420 Normal sizes
[17:08:05] [PASSED] YVU420 Max sizes
[17:08:05] [PASSED] YVU420 Invalid pitch
[17:08:05] [PASSED] YVU420 Different pitches
[17:08:05] [PASSED] YVU420 Different buffer offsets/pitches
[17:08:05] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[17:08:05] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[17:08:05] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[17:08:05] [PASSED] YVU420 Valid modifier
[17:08:05] [PASSED] YVU420 Different modifiers per plane
[17:08:05] [PASSED] YVU420 Modifier for inexistent plane
[17:08:05] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[17:08:05] [PASSED] X0L2 Normal sizes
[17:08:05] [PASSED] X0L2 Max sizes
[17:08:05] [PASSED] X0L2 Invalid pitch
[17:08:05] [PASSED] X0L2 Pitch greater than minimum required
[17:08:05] [PASSED] X0L2 Handle for inexistent plane
[17:08:05] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[17:08:05] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[17:08:05] [PASSED] X0L2 Valid modifier
[17:08:05] [PASSED] X0L2 Modifier for inexistent plane
[17:08:05] =========== [PASSED] drm_test_framebuffer_create ===========
[17:08:05] [PASSED] drm_test_framebuffer_free
[17:08:05] [PASSED] drm_test_framebuffer_init
[17:08:05] [PASSED] drm_test_framebuffer_init_bad_format
[17:08:05] [PASSED] drm_test_framebuffer_init_dev_mismatch
[17:08:05] [PASSED] drm_test_framebuffer_lookup
[17:08:05] [PASSED] drm_test_framebuffer_lookup_inexistent
[17:08:05] [PASSED] drm_test_framebuffer_modifiers_not_supported
[17:08:05] ================= [PASSED] drm_framebuffer =================
[17:08:05] ================ drm_gem_shmem (8 subtests) ================
[17:08:05] [PASSED] drm_gem_shmem_test_obj_create
[17:08:05] [PASSED] drm_gem_shmem_test_obj_create_private
[17:08:05] [PASSED] drm_gem_shmem_test_pin_pages
[17:08:05] [PASSED] drm_gem_shmem_test_vmap
[17:08:05] [PASSED] drm_gem_shmem_test_get_pages_sgt
[17:08:05] [PASSED] drm_gem_shmem_test_get_sg_table
[17:08:05] [PASSED] drm_gem_shmem_test_madvise
[17:08:05] [PASSED] drm_gem_shmem_test_purge
[17:08:05] ================== [PASSED] drm_gem_shmem ==================
[17:08:05] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[17:08:05] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[17:08:05] [PASSED] Automatic
[17:08:05] [PASSED] Full
[17:08:05] [PASSED] Limited 16:235
[17:08:05] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[17:08:05] [PASSED] drm_test_check_disable_connector
[17:08:05] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[17:08:05] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[17:08:05] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[17:08:05] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[17:08:05] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[17:08:05] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[17:08:05] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[17:08:05] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[17:08:05] [PASSED] drm_test_check_output_bpc_dvi
[17:08:05] [PASSED] drm_test_check_output_bpc_format_vic_1
[17:08:05] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[17:08:05] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[17:08:05] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[17:08:05] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[17:08:05] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[17:08:05] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[17:08:05] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[17:08:05] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[17:08:05] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[17:08:05] [PASSED] drm_test_check_broadcast_rgb_value
[17:08:05] [PASSED] drm_test_check_bpc_8_value
[17:08:05] [PASSED] drm_test_check_bpc_10_value
[17:08:05] [PASSED] drm_test_check_bpc_12_value
[17:08:05] [PASSED] drm_test_check_format_value
[17:08:05] [PASSED] drm_test_check_tmds_char_value
[17:08:05] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[17:08:05] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[17:08:05] [PASSED] drm_test_check_mode_valid
[17:08:05] [PASSED] drm_test_check_mode_valid_reject
[17:08:05] [PASSED] drm_test_check_mode_valid_reject_rate
[17:08:05] [PASSED] drm_test_check_mode_valid_reject_max_clock
[17:08:05] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[17:08:05] ================= drm_managed (2 subtests) =================
[17:08:05] [PASSED] drm_test_managed_release_action
[17:08:05] [PASSED] drm_test_managed_run_action
[17:08:05] =================== [PASSED] drm_managed ===================
[17:08:05] =================== drm_mm (6 subtests) ====================
[17:08:05] [PASSED] drm_test_mm_init
[17:08:05] [PASSED] drm_test_mm_debug
[17:08:05] [PASSED] drm_test_mm_align32
[17:08:05] [PASSED] drm_test_mm_align64
[17:08:05] [PASSED] drm_test_mm_lowest
[17:08:05] [PASSED] drm_test_mm_highest
[17:08:05] ===================== [PASSED] drm_mm ======================
[17:08:05] ============= drm_modes_analog_tv (5 subtests) =============
[17:08:05] [PASSED] drm_test_modes_analog_tv_mono_576i
[17:08:05] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[17:08:05] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[17:08:05] [PASSED] drm_test_modes_analog_tv_pal_576i
[17:08:05] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[17:08:05] =============== [PASSED] drm_modes_analog_tv ===============
[17:08:05] ============== drm_plane_helper (2 subtests) ===============
[17:08:05] =============== drm_test_check_plane_state ================
[17:08:05] [PASSED] clipping_simple
[17:08:05] [PASSED] clipping_rotate_reflect
[17:08:05] [PASSED] positioning_simple
[17:08:05] [PASSED] upscaling
[17:08:05] [PASSED] downscaling
[17:08:05] [PASSED] rounding1
[17:08:05] [PASSED] rounding2
[17:08:05] [PASSED] rounding3
[17:08:05] [PASSED] rounding4
[17:08:05] =========== [PASSED] drm_test_check_plane_state ============
[17:08:05] =========== drm_test_check_invalid_plane_state ============
[17:08:05] [PASSED] positioning_invalid
[17:08:05] [PASSED] upscaling_invalid
[17:08:05] [PASSED] downscaling_invalid
[17:08:05] ======= [PASSED] drm_test_check_invalid_plane_state ========
[17:08:05] ================ [PASSED] drm_plane_helper =================
[17:08:05] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[17:08:05] ====== drm_test_connector_helper_tv_get_modes_check =======
[17:08:05] [PASSED] None
[17:08:05] [PASSED] PAL
[17:08:05] [PASSED] NTSC
[17:08:05] [PASSED] Both, NTSC Default
[17:08:05] [PASSED] Both, PAL Default
[17:08:05] [PASSED] Both, NTSC Default, with PAL on command-line
[17:08:05] [PASSED] Both, PAL Default, with NTSC on command-line
[17:08:05] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[17:08:05] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[17:08:05] ================== drm_rect (9 subtests) ===================
[17:08:05] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[17:08:05] [PASSED] drm_test_rect_clip_scaled_not_clipped
[17:08:05] [PASSED] drm_test_rect_clip_scaled_clipped
[17:08:05] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[17:08:05] ================= drm_test_rect_intersect =================
[17:08:05] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[17:08:05] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[17:08:05] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[17:08:05] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[17:08:05] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[17:08:05] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[17:08:05] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[17:08:05] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[17:08:05] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[17:08:05] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[17:08:05] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[17:08:05] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[17:08:05] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[17:08:05] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[17:08:05] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[17:08:05] ============= [PASSED] drm_test_rect_intersect =============
[17:08:05] ================ drm_test_rect_calc_hscale ================
[17:08:05] [PASSED] normal use
[17:08:05] [PASSED] out of max range
[17:08:05] [PASSED] out of min range
[17:08:05] [PASSED] zero dst
[17:08:05] [PASSED] negative src
[17:08:05] [PASSED] negative dst
[17:08:05] ============ [PASSED] drm_test_rect_calc_hscale ============
[17:08:05] ================ drm_test_rect_calc_vscale ================
[17:08:05] [PASSED] normal use
[17:08:05] [PASSED] out of max range
[17:08:05] [PASSED] out of min range
[17:08:05] [PASSED] zero dst
[17:08:05] [PASSED] negative src
[17:08:05] [PASSED] negative dst
[17:08:05] ============ [PASSED] drm_test_rect_calc_vscale ============
[17:08:05] ================== drm_test_rect_rotate ===================
[17:08:05] [PASSED] reflect-x
[17:08:05] [PASSED] reflect-y
[17:08:05] [PASSED] rotate-0
[17:08:05] [PASSED] rotate-90
[17:08:05] [PASSED] rotate-180
[17:08:05] [PASSED] rotate-270
[17:08:05] ============== [PASSED] drm_test_rect_rotate ===============
[17:08:05] ================ drm_test_rect_rotate_inv =================
[17:08:05] [PASSED] reflect-x
[17:08:05] [PASSED] reflect-y
[17:08:05] [PASSED] rotate-0
[17:08:05] [PASSED] rotate-90
[17:08:05] [PASSED] rotate-180
[17:08:05] [PASSED] rotate-270
[17:08:05] ============ [PASSED] drm_test_rect_rotate_inv =============
[17:08:05] ==================== [PASSED] drm_rect =====================
[17:08:05] ============ drm_sysfb_modeset_test (1 subtest) ============
[17:08:05] ============ drm_test_sysfb_build_fourcc_list =============
[17:08:05] [PASSED] no native formats
[17:08:05] [PASSED] XRGB8888 as native format
[17:08:05] [PASSED] remove duplicates
[17:08:05] [PASSED] convert alpha formats
[17:08:05] [PASSED] random formats
[17:08:05] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[17:08:05] ============= [PASSED] drm_sysfb_modeset_test ==============
[17:08:05] ============================================================
[17:08:05] Testing complete. Ran 622 tests: passed: 622
[17:08:06] Elapsed time: 26.540s total, 1.688s configuring, 24.434s building, 0.396s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[17:08:06] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[17:08:07] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[17:08:17] Starting KUnit Kernel (1/1)...
[17:08:17] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[17:08:17] ================= ttm_device (5 subtests) ==================
[17:08:17] [PASSED] ttm_device_init_basic
[17:08:17] [PASSED] ttm_device_init_multiple
[17:08:17] [PASSED] ttm_device_fini_basic
[17:08:17] [PASSED] ttm_device_init_no_vma_man
[17:08:17] ================== ttm_device_init_pools ==================
[17:08:17] [PASSED] No DMA allocations, no DMA32 required
[17:08:17] [PASSED] DMA allocations, DMA32 required
[17:08:17] [PASSED] No DMA allocations, DMA32 required
[17:08:17] [PASSED] DMA allocations, no DMA32 required
[17:08:17] ============== [PASSED] ttm_device_init_pools ==============
[17:08:17] =================== [PASSED] ttm_device ====================
[17:08:17] ================== ttm_pool (8 subtests) ===================
[17:08:17] ================== ttm_pool_alloc_basic ===================
[17:08:17] [PASSED] One page
[17:08:17] [PASSED] More than one page
[17:08:17] [PASSED] Above the allocation limit
[17:08:17] [PASSED] One page, with coherent DMA mappings enabled
[17:08:17] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[17:08:17] ============== [PASSED] ttm_pool_alloc_basic ===============
[17:08:17] ============== ttm_pool_alloc_basic_dma_addr ==============
[17:08:17] [PASSED] One page
[17:08:17] [PASSED] More than one page
[17:08:17] [PASSED] Above the allocation limit
[17:08:17] [PASSED] One page, with coherent DMA mappings enabled
[17:08:17] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[17:08:17] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[17:08:17] [PASSED] ttm_pool_alloc_order_caching_match
[17:08:17] [PASSED] ttm_pool_alloc_caching_mismatch
[17:08:17] [PASSED] ttm_pool_alloc_order_mismatch
[17:08:17] [PASSED] ttm_pool_free_dma_alloc
[17:08:17] [PASSED] ttm_pool_free_no_dma_alloc
[17:08:17] [PASSED] ttm_pool_fini_basic
[17:08:17] ==================== [PASSED] ttm_pool =====================
[17:08:17] ================ ttm_resource (8 subtests) =================
[17:08:17] ================= ttm_resource_init_basic =================
[17:08:17] [PASSED] Init resource in TTM_PL_SYSTEM
[17:08:17] [PASSED] Init resource in TTM_PL_VRAM
[17:08:17] [PASSED] Init resource in a private placement
[17:08:17] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[17:08:17] ============= [PASSED] ttm_resource_init_basic =============
[17:08:17] [PASSED] ttm_resource_init_pinned
[17:08:17] [PASSED] ttm_resource_fini_basic
[17:08:17] [PASSED] ttm_resource_manager_init_basic
[17:08:17] [PASSED] ttm_resource_manager_usage_basic
[17:08:17] [PASSED] ttm_resource_manager_set_used_basic
[17:08:17] [PASSED] ttm_sys_man_alloc_basic
[17:08:17] [PASSED] ttm_sys_man_free_basic
[17:08:17] ================== [PASSED] ttm_resource ===================
[17:08:17] =================== ttm_tt (15 subtests) ===================
[17:08:17] ==================== ttm_tt_init_basic ====================
[17:08:17] [PASSED] Page-aligned size
[17:08:17] [PASSED] Extra pages requested
[17:08:17] ================ [PASSED] ttm_tt_init_basic ================
[17:08:17] [PASSED] ttm_tt_init_misaligned
[17:08:17] [PASSED] ttm_tt_fini_basic
[17:08:17] [PASSED] ttm_tt_fini_sg
[17:08:17] [PASSED] ttm_tt_fini_shmem
[17:08:17] [PASSED] ttm_tt_create_basic
[17:08:17] [PASSED] ttm_tt_create_invalid_bo_type
[17:08:17] [PASSED] ttm_tt_create_ttm_exists
[17:08:17] [PASSED] ttm_tt_create_failed
[17:08:17] [PASSED] ttm_tt_destroy_basic
[17:08:17] [PASSED] ttm_tt_populate_null_ttm
[17:08:17] [PASSED] ttm_tt_populate_populated_ttm
[17:08:17] [PASSED] ttm_tt_unpopulate_basic
[17:08:17] [PASSED] ttm_tt_unpopulate_empty_ttm
[17:08:17] [PASSED] ttm_tt_swapin_basic
[17:08:17] ===================== [PASSED] ttm_tt ======================
[17:08:17] =================== ttm_bo (14 subtests) ===================
[17:08:17] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[17:08:17] [PASSED] Cannot be interrupted and sleeps
[17:08:17] [PASSED] Cannot be interrupted, locks straight away
[17:08:17] [PASSED] Can be interrupted, sleeps
[17:08:17] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[17:08:17] [PASSED] ttm_bo_reserve_locked_no_sleep
[17:08:17] [PASSED] ttm_bo_reserve_no_wait_ticket
[17:08:17] [PASSED] ttm_bo_reserve_double_resv
[17:08:17] [PASSED] ttm_bo_reserve_interrupted
[17:08:17] [PASSED] ttm_bo_reserve_deadlock
[17:08:17] [PASSED] ttm_bo_unreserve_basic
[17:08:17] [PASSED] ttm_bo_unreserve_pinned
[17:08:17] [PASSED] ttm_bo_unreserve_bulk
[17:08:17] [PASSED] ttm_bo_fini_basic
[17:08:17] [PASSED] ttm_bo_fini_shared_resv
[17:08:17] [PASSED] ttm_bo_pin_basic
[17:08:17] [PASSED] ttm_bo_pin_unpin_resource
[17:08:17] [PASSED] ttm_bo_multiple_pin_one_unpin
[17:08:17] ===================== [PASSED] ttm_bo ======================
[17:08:17] ============== ttm_bo_validate (21 subtests) ===============
[17:08:17] ============== ttm_bo_init_reserved_sys_man ===============
[17:08:17] [PASSED] Buffer object for userspace
[17:08:17] [PASSED] Kernel buffer object
[17:08:17] [PASSED] Shared buffer object
[17:08:17] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[17:08:17] ============== ttm_bo_init_reserved_mock_man ==============
[17:08:17] [PASSED] Buffer object for userspace
[17:08:17] [PASSED] Kernel buffer object
[17:08:17] [PASSED] Shared buffer object
[17:08:17] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[17:08:17] [PASSED] ttm_bo_init_reserved_resv
[17:08:17] ================== ttm_bo_validate_basic ==================
[17:08:17] [PASSED] Buffer object for userspace
[17:08:17] [PASSED] Kernel buffer object
[17:08:17] [PASSED] Shared buffer object
[17:08:17] ============== [PASSED] ttm_bo_validate_basic ==============
[17:08:17] [PASSED] ttm_bo_validate_invalid_placement
[17:08:17] ============= ttm_bo_validate_same_placement ==============
[17:08:17] [PASSED] System manager
[17:08:17] [PASSED] VRAM manager
[17:08:17] ========= [PASSED] ttm_bo_validate_same_placement ==========
[17:08:17] [PASSED] ttm_bo_validate_failed_alloc
[17:08:17] [PASSED] ttm_bo_validate_pinned
[17:08:17] [PASSED] ttm_bo_validate_busy_placement
[17:08:17] ================ ttm_bo_validate_multihop =================
[17:08:17] [PASSED] Buffer object for userspace
[17:08:17] [PASSED] Kernel buffer object
[17:08:17] [PASSED] Shared buffer object
[17:08:17] ============ [PASSED] ttm_bo_validate_multihop =============
[17:08:17] ========== ttm_bo_validate_no_placement_signaled ==========
[17:08:17] [PASSED] Buffer object in system domain, no page vector
[17:08:17] [PASSED] Buffer object in system domain with an existing page vector
[17:08:17] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[17:08:17] ======== ttm_bo_validate_no_placement_not_signaled ========
[17:08:17] [PASSED] Buffer object for userspace
[17:08:17] [PASSED] Kernel buffer object
[17:08:17] [PASSED] Shared buffer object
[17:08:17] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[17:08:17] [PASSED] ttm_bo_validate_move_fence_signaled
[17:08:17] ========= ttm_bo_validate_move_fence_not_signaled =========
[17:08:17] [PASSED] Waits for GPU
[17:08:17] [PASSED] Tries to lock straight away
[17:08:17] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[17:08:17] [PASSED] ttm_bo_validate_happy_evict
[17:08:17] [PASSED] ttm_bo_validate_all_pinned_evict
[17:08:17] [PASSED] ttm_bo_validate_allowed_only_evict
[17:08:17] [PASSED] ttm_bo_validate_deleted_evict
[17:08:17] [PASSED] ttm_bo_validate_busy_domain_evict
[17:08:17] [PASSED] ttm_bo_validate_evict_gutting
[17:08:17] [PASSED] ttm_bo_validate_recrusive_evict
[17:08:17] ================= [PASSED] ttm_bo_validate =================
[17:08:17] ============================================================
[17:08:17] Testing complete. Ran 101 tests: passed: 101
[17:08:17] Elapsed time: 11.433s total, 1.718s configuring, 9.499s building, 0.182s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 33+ messages in thread

* ✓ Xe.CI.BAT: success for Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (18 preceding siblings ...)
2025-11-11 17:08 ` ✓ CI.KUnit: success " Patchwork
@ 2025-11-11 17:45 ` Patchwork
2025-11-12 2:53 ` ✗ Xe.CI.Full: failure " Patchwork
2025-11-18 6:15 ` [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Alistair Popple
21 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2025-11-11 17:45 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 1864 bytes --]
== Series Details ==
Series: Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
URL : https://patchwork.freedesktop.org/series/156525/
State : success
== Summary ==
CI Bug Log - changes from xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486_BAT -> xe-pw-156525v2_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (13 -> 13)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-156525v2_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@xe_waitfence@engine:
- bat-dg2-oem2: [PASS][1] -> [FAIL][2] ([Intel XE#6519])
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/bat-dg2-oem2/igt@xe_waitfence@engine.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/bat-dg2-oem2/igt@xe_waitfence@engine.html
#### Possible fixes ####
* igt@xe_waitfence@reltime:
- bat-pvc-2: [FAIL][3] ([Intel XE#6520]) -> [PASS][4]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/bat-pvc-2/igt@xe_waitfence@reltime.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/bat-pvc-2/igt@xe_waitfence@reltime.html
[Intel XE#6519]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6519
[Intel XE#6520]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6520
Build changes
-------------
* Linux: xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486 -> xe-pw-156525v2
IGT_8620: 8620
xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486: 52764bea2cf028d285b0f4d86ee1ebfd4e196486
xe-pw-156525v2: 156525v2
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/index.html
[-- Attachment #2: Type: text/html, Size: 2451 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread

* ✗ Xe.CI.Full: failure for Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (19 preceding siblings ...)
2025-11-11 17:45 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-11-12 2:53 ` Patchwork
2025-11-18 6:15 ` [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Alistair Popple
21 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2025-11-12 2:53 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 51736 bytes --]
== Series Details ==
Series: Dynamic drm_pagemaps and Initial multi-device SVM (rev2)
URL : https://patchwork.freedesktop.org/series/156525/
State : failure
== Summary ==
CI Bug Log - changes from xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486_FULL -> xe-pw-156525v2_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-156525v2_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-156525v2_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-156525v2_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate:
- shard-dg2-set2: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-436/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-435/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate.html
New tests
---------
New tests have been introduced between xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486_FULL and xe-pw-156525v2_FULL:
### New IGT tests (117) ###
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
- Statuses : 1 pass(s) 3 skip(s)
- Exec time: [0.0, 26.71] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-a-dp-2:
- Statuses : 1 pass(s)
- Exec time: [3.41] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-a-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [3.26] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-b-dp-2:
- Statuses : 1 pass(s)
- Exec time: [3.30] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-b-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [3.32] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-c-dp-2:
- Statuses : 1 pass(s)
- Exec time: [3.29] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-c-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [3.23] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-d-dp-2:
- Statuses : 1 pass(s)
- Exec time: [3.44] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-d-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [3.45] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs:
- Statuses : 1 incomplete(s) 3 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-a-dp-4:
- Statuses : 1 pass(s)
- Exec time: [2.49] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [3.21] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-b-dp-4:
- Statuses : 1 pass(s)
- Exec time: [2.42] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-b-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [2.31] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-c-dp-4:
- Statuses : 1 pass(s)
- Exec time: [2.43] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-c-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [2.32] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4:
- Statuses : 1 incomplete(s)
- Exec time: [2.38] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [2.38] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs:
- Statuses : 1 pass(s) 3 skip(s)
- Exec time: [0.0, 11.12] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc:
- Statuses : 1 pass(s) 3 skip(s)
- Exec time: [0.0, 10.46] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-a-dp-4:
- Statuses : 1 pass(s)
- Exec time: [1.36] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.52] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-b-dp-4:
- Statuses : 1 pass(s)
- Exec time: [1.29] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-b-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.26] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-dp-4:
- Statuses : 1 pass(s)
- Exec time: [1.28] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.26] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-d-dp-4:
- Statuses : 1 pass(s)
- Exec time: [1.28] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.21] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-a-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.35] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-b-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.29] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-c-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.22] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-d-hdmi-a-6:
- Statuses : 1 pass(s)
- Exec time: [1.25] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs:
- Statuses :
- Exec time: [None] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-mc-ccs:
- Statuses : 3 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-mc-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-mc-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs:
- Statuses : 4 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc:
- Statuses : 4 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-a-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-b-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-c-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-d-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-d-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs:
- Statuses : 4 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-a-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-a-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-b-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-b-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-c-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-c-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-d-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs@pipe-d-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-mc-ccs:
- Statuses : 4 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-mc-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-mc-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-mc-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs:
- Statuses : 4 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc:
- Statuses : 3 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-a-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-a-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-b-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-b-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-c-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-c-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-d-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
- Statuses : 4 skip(s)
- Exec time: [0.0, 0.01] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-a-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-a-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-a-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-b-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-b-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-b-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-c-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-c-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-c-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-d-dp-4:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-d-hdmi-a-1:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-d-hdmi-a-6:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
Known issues
------------
Here are the changes found in xe-pw-156525v2_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-a-edp-1:
- shard-lnl: [PASS][3] -> [FAIL][4] ([Intel XE#5993])
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-a-edp-1.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-lnl-1/igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-a-edp-1.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
- shard-dg2-set2: NOTRUN -> [SKIP][5] ([Intel XE#1124]) +1 other test skip
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip:
- shard-bmg: NOTRUN -> [SKIP][6] ([Intel XE#1124]) +1 other test skip
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
* igt@kms_bw@linear-tiling-4-displays-1920x1080p:
- shard-dg2-set2: NOTRUN -> [SKIP][7] ([Intel XE#367])
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_bw@linear-tiling-4-displays-1920x1080p.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4 (NEW):
- shard-dg2-set2: [PASS][8] -> [INCOMPLETE][9] ([Intel XE#3862]) +1 other test incomplete
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-435/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-432/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc (NEW):
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#3432]) +1 other test skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc.html
* igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs-cc:
- shard-dg2-set2: NOTRUN -> [SKIP][11] ([Intel XE#455] / [Intel XE#787]) +1 other test skip
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs-cc@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][12] ([Intel XE#787]) +6 other tests skip
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs-cc@pipe-a-dp-4.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc:
- shard-dg2-set2: [PASS][13] -> [INCOMPLETE][14] ([Intel XE#2705] / [Intel XE#4212] / [Intel XE#4345])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-b-dp-4:
- shard-dg2-set2: [PASS][15] -> [INCOMPLETE][16] ([Intel XE#2705] / [Intel XE#4212])
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-b-dp-4.html
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-b-dp-4.html
* igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2:
- shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#2652] / [Intel XE#787]) +7 other tests skip
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2.html
* igt@kms_chamelium_edid@hdmi-edid-change-during-suspend:
- shard-bmg: NOTRUN -> [SKIP][18] ([Intel XE#2252])
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_chamelium_edid@hdmi-edid-change-during-suspend.html
* igt@kms_chamelium_hpd@dp-hpd-fast:
- shard-dg2-set2: NOTRUN -> [SKIP][19] ([Intel XE#373]) +1 other test skip
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_chamelium_hpd@dp-hpd-fast.html
* igt@kms_content_protection@lic-type-0@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][20] ([Intel XE#1178])
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-4/igt@kms_content_protection@lic-type-0@pipe-a-dp-2.html
* igt@kms_cursor_crc@cursor-rapid-movement-512x170:
- shard-dg2-set2: NOTRUN -> [SKIP][21] ([Intel XE#308])
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: NOTRUN -> [SKIP][22] ([Intel XE#2291])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
- shard-bmg: [PASS][23] -> [SKIP][24] ([Intel XE#2291]) +3 other tests skip
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-5/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
- shard-dg2-set2: NOTRUN -> [SKIP][25] ([Intel XE#323])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
* igt@kms_display_modes@extended-mode-basic:
- shard-bmg: [PASS][26] -> [SKIP][27] ([Intel XE#4302])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-5/igt@kms_display_modes@extended-mode-basic.html
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_display_modes@extended-mode-basic.html
* igt@kms_dsc@dsc-basic:
- shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#2244])
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_dsc@dsc-basic.html
* igt@kms_feature_discovery@psr2:
- shard-dg2-set2: NOTRUN -> [SKIP][29] ([Intel XE#1135])
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_feature_discovery@psr2.html
* igt@kms_flip@2x-flip-vs-expired-vblank:
- shard-dg2-set2: [PASS][30] -> [FAIL][31] ([Intel XE#301] / [Intel XE#6545])
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-436/igt@kms_flip@2x-flip-vs-expired-vblank.html
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_flip@2x-flip-vs-expired-vblank.html
* igt@kms_flip@2x-flip-vs-expired-vblank@ab-hdmi-a6-dp4:
- shard-dg2-set2: [PASS][32] -> [FAIL][33] ([Intel XE#301])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-436/igt@kms_flip@2x-flip-vs-expired-vblank@ab-hdmi-a6-dp4.html
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_flip@2x-flip-vs-expired-vblank@ab-hdmi-a6-dp4.html
* igt@kms_flip@2x-plain-flip-fb-recreate:
- shard-bmg: [PASS][34] -> [SKIP][35] ([Intel XE#2316]) +2 other tests skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-5/igt@kms_flip@2x-plain-flip-fb-recreate.html
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_flip@2x-plain-flip-fb-recreate.html
* igt@kms_flip@2x-wf_vblank-ts-check-interruptible:
- shard-bmg: NOTRUN -> [SKIP][36] ([Intel XE#2316]) +2 other tests skip
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html
* igt@kms_flip@flip-vs-suspend@b-hdmi-a1:
- shard-adlp: [PASS][37] -> [DMESG-WARN][38] ([Intel XE#4543]) +5 other tests dmesg-warn
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-adlp-3/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-adlp-6/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-render:
- shard-dg2-set2: NOTRUN -> [SKIP][39] ([Intel XE#651]) +6 other tests skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbc-stridechange:
- shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#5390])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-stridechange.html
* igt@kms_frontbuffer_tracking@fbcdrrs-rgb565-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#2311])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-rgb565-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-render:
- shard-dg2-set2: NOTRUN -> [SKIP][42] ([Intel XE#6312])
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][43] ([Intel XE#653]) +3 other tests skip
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw:
- shard-bmg: NOTRUN -> [SKIP][44] ([Intel XE#2312]) +3 other tests skip
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][45] ([Intel XE#2313]) +1 other test skip
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-mmap-wc.html
* igt@kms_joiner@basic-ultra-joiner:
- shard-bmg: NOTRUN -> [SKIP][46] ([Intel XE#2927])
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_joiner@basic-ultra-joiner.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5:
- shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#2763]) +4 other tests skip
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5.html
* igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-sf:
- shard-dg2-set2: NOTRUN -> [SKIP][48] ([Intel XE#1406] / [Intel XE#1489]) +1 other test skip
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-sf.html
* igt@kms_psr@pr-basic:
- shard-dg2-set2: NOTRUN -> [SKIP][49] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +1 other test skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_psr@pr-basic.html
* igt@kms_psr@pr-suspend:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +1 other test skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_psr@pr-suspend.html
* igt@kms_setmode@clone-exclusive-crtc:
- shard-bmg: [PASS][51] -> [SKIP][52] ([Intel XE#1435])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-5/igt@kms_setmode@clone-exclusive-crtc.html
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_setmode@clone-exclusive-crtc.html
* igt@kms_sharpness_filter@filter-scaler-upscale:
- shard-dg2-set2: NOTRUN -> [SKIP][53] ([Intel XE#455]) +2 other tests skip
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_sharpness_filter@filter-scaler-upscale.html
* igt@kms_vblank@ts-continuation-suspend:
- shard-adlp: [PASS][54] -> [DMESG-WARN][55] ([Intel XE#2953] / [Intel XE#4173]) +10 other tests dmesg-warn
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-adlp-4/igt@kms_vblank@ts-continuation-suspend.html
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-adlp-9/igt@kms_vblank@ts-continuation-suspend.html
* igt@xe_eudebug@discovery-race-vmbind:
- shard-dg2-set2: NOTRUN -> [SKIP][56] ([Intel XE#4837])
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@xe_eudebug@discovery-race-vmbind.html
* igt@xe_eudebug_online@breakpoint-not-in-debug-mode:
- shard-bmg: NOTRUN -> [SKIP][57] ([Intel XE#4837]) +1 other test skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@xe_eudebug_online@breakpoint-not-in-debug-mode.html
* igt@xe_exec_basic@multigpu-no-exec-null:
- shard-bmg: NOTRUN -> [SKIP][58] ([Intel XE#2322])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@xe_exec_basic@multigpu-no-exec-null.html
* igt@xe_exec_fault_mode@twice-invalid-fault:
- shard-dg2-set2: NOTRUN -> [SKIP][59] ([Intel XE#288]) +4 other tests skip
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@xe_exec_fault_mode@twice-invalid-fault.html
* igt@xe_exec_system_allocator@many-stride-new-prefetch:
- shard-bmg: [PASS][60] -> [INCOMPLETE][61] ([Intel XE#6480])
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-1/igt@xe_exec_system_allocator@many-stride-new-prefetch.html
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@xe_exec_system_allocator@many-stride-new-prefetch.html
* igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-remap-madvise:
- shard-dg2-set2: NOTRUN -> [SKIP][62] ([Intel XE#4915]) +54 other tests skip
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-remap-madvise.html
* igt@xe_exec_system_allocator@threads-many-execqueues-mmap-free-huge-nomemset:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#4943]) +2 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@xe_exec_system_allocator@threads-many-execqueues-mmap-free-huge-nomemset.html
* igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv:
- shard-dg2-set2: [PASS][64] -> [DMESG-WARN][65] ([Intel XE#5893])
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-436/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-435/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html
* igt@xe_oa@disabled-read-error:
- shard-dg2-set2: NOTRUN -> [SKIP][66] ([Intel XE#3573])
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@xe_oa@disabled-read-error.html
* igt@xe_pm@d3cold-basic:
- shard-dg2-set2: NOTRUN -> [SKIP][67] ([Intel XE#2284] / [Intel XE#366])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@xe_pm@d3cold-basic.html
* igt@xe_pmu@all-fn-engine-activity-load:
- shard-dg2-set2: NOTRUN -> [SKIP][68] ([Intel XE#4650])
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@xe_pmu@all-fn-engine-activity-load.html
* igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_compute0:
- shard-lnl: [PASS][69] -> [FAIL][70] ([Intel XE#6251]) +1 other test fail
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-lnl-4/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_compute0.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-lnl-5/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_compute0.html
* igt@xe_query@multigpu-query-gt-list:
- shard-dg2-set2: NOTRUN -> [SKIP][71] ([Intel XE#944])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@xe_query@multigpu-query-gt-list.html
#### Possible fixes ####
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
- shard-adlp: [DMESG-FAIL][72] ([Intel XE#4543]) -> [PASS][73]
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-adlp-9/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-adlp-9/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p:
- shard-bmg: [SKIP][74] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][75]
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
- shard-dg2-set2: [INCOMPLETE][76] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4345] / [Intel XE#6168]) -> [PASS][77]
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-hdmi-a-6:
- shard-dg2-set2: [INCOMPLETE][78] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#6168]) -> [PASS][79]
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-hdmi-a-6.html
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-hdmi-a-6.html
* igt@kms_color@legacy-gamma-reset:
- shard-adlp: [DMESG-WARN][80] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][81] +11 other tests pass
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-adlp-4/igt@kms_color@legacy-gamma-reset.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-adlp-4/igt@kms_color@legacy-gamma-reset.html
* igt@kms_cursor_legacy@cursora-vs-flipb-legacy:
- shard-bmg: [SKIP][82] ([Intel XE#2291]) -> [PASS][83] +4 other tests pass
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html
* igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
- shard-bmg: [DMESG-WARN][84] ([Intel XE#5354]) -> [PASS][85]
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-7/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-4/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
* igt@kms_cursor_legacy@flip-vs-cursor-legacy:
- shard-bmg: [FAIL][86] ([Intel XE#4633]) -> [PASS][87]
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-5/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
* igt@kms_dp_link_training@non-uhbr-sst:
- shard-bmg: [SKIP][88] ([Intel XE#4354]) -> [PASS][89]
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_dp_link_training@non-uhbr-sst.html
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_dp_link_training@non-uhbr-sst.html
* igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible:
- shard-bmg: [SKIP][90] ([Intel XE#2316]) -> [PASS][91] +9 other tests pass
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible.html
* igt@kms_flip@flip-vs-blocking-wf-vblank@a-dp2:
- shard-bmg: [FAIL][92] ([Intel XE#3098]) -> [PASS][93] +1 other test pass
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-1/igt@kms_flip@flip-vs-blocking-wf-vblank@a-dp2.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-7/igt@kms_flip@flip-vs-blocking-wf-vblank@a-dp2.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
- shard-lnl: [FAIL][94] ([Intel XE#301]) -> [PASS][95] +3 other tests pass
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-lnl-5/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-adlp: [DMESG-WARN][96] ([Intel XE#4543]) -> [PASS][97] +6 other tests pass
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-adlp-6/igt@kms_flip@flip-vs-suspend-interruptible.html
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-adlp-8/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_hdr@invalid-metadata-sizes:
- shard-bmg: [SKIP][98] ([Intel XE#1503]) -> [PASS][99]
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_hdr@invalid-metadata-sizes.html
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-4/igt@kms_hdr@invalid-metadata-sizes.html
* igt@xe_pm@s2idle-multiple-execs:
- shard-adlp: [DMESG-WARN][100] ([Intel XE#2953] / [Intel XE#4173] / [Intel XE#4504]) -> [PASS][101]
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-adlp-3/igt@xe_pm@s2idle-multiple-execs.html
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-adlp-3/igt@xe_pm@s2idle-multiple-execs.html
* igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0:
- shard-lnl: [FAIL][102] ([Intel XE#6251]) -> [PASS][103] +1 other test pass
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-lnl-4/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0.html
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-lnl-5/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0.html
#### Warnings ####
* igt@kms_content_protection@lic-type-0:
- shard-bmg: [SKIP][104] ([Intel XE#2341]) -> [FAIL][105] ([Intel XE#1178])
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_content_protection@lic-type-0.html
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-4/igt@kms_content_protection@lic-type-0.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][106] ([Intel XE#2311]) -> [SKIP][107] ([Intel XE#2312]) +9 other tests skip
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][108] ([Intel XE#2312]) -> [SKIP][109] ([Intel XE#2311]) +15 other tests skip
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-fullscreen:
- shard-bmg: [SKIP][110] ([Intel XE#5390]) -> [SKIP][111] ([Intel XE#2312]) +5 other tests skip
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-fullscreen.html
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-fullscreen.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
- shard-bmg: [SKIP][112] ([Intel XE#2312]) -> [SKIP][113] ([Intel XE#5390]) +4 other tests skip
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render:
- shard-bmg: [SKIP][114] ([Intel XE#2313]) -> [SKIP][115] ([Intel XE#2312]) +8 other tests skip
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render.html
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][116] ([Intel XE#2312]) -> [SKIP][117] ([Intel XE#2313]) +16 other tests skip
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_plane_multiple@2x-tiling-y:
- shard-bmg: [SKIP][118] ([Intel XE#4596]) -> [SKIP][119] ([Intel XE#5021])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-6/igt@kms_plane_multiple@2x-tiling-y.html
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-8/igt@kms_plane_multiple@2x-tiling-y.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-bmg: [SKIP][120] ([Intel XE#2426]) -> [FAIL][121] ([Intel XE#1729])
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-4/igt@kms_tiled_display@basic-test-pattern.html
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-5/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-bmg: [SKIP][122] ([Intel XE#2509]) -> [SKIP][123] ([Intel XE#2426])
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-bmg-1/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-bmg-7/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@xe_pmu@gt-frequency:
- shard-dg2-set2: [FAIL][124] ([Intel XE#5166]) -> [FAIL][125] ([Intel XE#4819]) +1 other test fail
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486/shard-dg2-436/igt@xe_pmu@gt-frequency.html
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/shard-dg2-435/igt@xe_pmu@gt-frequency.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1135]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1135
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2509]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2509
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2705
[Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#2927]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2927
[Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
[Intel XE#3098]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3098
[Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
[Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
[Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#3862]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3862
[Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
[Intel XE#4212]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4212
[Intel XE#4302]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4302
[Intel XE#4345]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4345
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4504]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4504
[Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
[Intel XE#4633]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4633
[Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650
[Intel XE#4819]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4819
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
[Intel XE#5166]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5166
[Intel XE#5354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5354
[Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
[Intel XE#5893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5893
[Intel XE#5993]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5993
[Intel XE#6168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6168
[Intel XE#6251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6251
[Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312
[Intel XE#6480]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6480
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#6545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6545
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
Build changes
-------------
* Linux: xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486 -> xe-pw-156525v2
IGT_8620: 8620
xe-4086-52764bea2cf028d285b0f4d86ee1ebfd4e196486: 52764bea2cf028d285b0f4d86ee1ebfd4e196486
xe-pw-156525v2: 156525v2
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156525v2/index.html
* Re: [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM
2025-11-11 16:43 [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
` (20 preceding siblings ...)
2025-11-12 2:53 ` ✗ Xe.CI.Full: failure " Patchwork
@ 2025-11-18 6:15 ` Alistair Popple
2025-11-18 9:31 ` Thomas Hellström
21 siblings, 1 reply; 33+ messages in thread
From: Alistair Popple @ 2025-11-18 6:15 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, himal.prasad.ghimiray, airlied,
Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
On 2025-11-12 at 03:43 +1100, Thomas Hellström <thomas.hellstrom@linux.intel.com> wrote...
> This series aims at providing an initial implementation of multi-device
> SVM, where communication with peers (migration and direct execution out
> of peer memory) uses some form of fast interconnect. In this series
> we're using pcie p2p.
>
> In a multi-device environment, the struct pages for device-private memory
> (the dev_pagemap) may take up a significant amount of system memory. We
> therefore want to provide a means of revoking / removing the dev_pagemaps
> not in use. In particular when a device is offlined, we want to block
> migrating *to* the device memory and migrate data already existing in the
> device's memory to system. The dev_pagemap then becomes unused and can be
> removed.
>
> Removing and setting up a large dev_pagemap is also quite time-consuming,
> so removal of unused dev_pagemaps only happens on system memory pressure
> using a shrinker.
Agreed, it is quite time-consuming; we have run into this problem as well,
including with the pcie p2p dma pages. On the mm side I've started looking
at whether/how we can remove the need for struct pages altogether for
supporting this. It doesn't help you at all right now, of course, but
hopefully one day we can avoid the need for this. I will be discussing this
at LPC if you happen to be there.
- Alistair
> Patch 1 is a small debug printout fix.
> Patches 2-7 deal with dynamic drm_pagemaps as described above.
> Patches 8-12 add infrastructure to handle remote drm_pagemaps with
> fast interconnects.
> Patch 13 extends the xe madvise() UAPI to handle remote drm_pagemaps.
> Patch 14 adds a pcie-p2p dma SVM interconnect to the xe driver.
> Patch 15 adds some SVM-related debug printouts for xe.
> Patch 16 adds direct interconnect migration.
> Patch 17 adds some documentation.
>
> What's still missing is implementation of migration policies.
> That will be implemented in follow-up series.
>
> v2:
> - Address review comments from Matt Brost.
> - Fix compilation issues reported by automated testing
> - Add patch 1, 17.
> - What's now patch 16 was extended to support p2p migration.
>
> Thomas Hellström (17):
> drm/xe/svm: Fix a debug printout
> drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap
> drm/pagemap: Add a refcounted drm_pagemap backpointer to struct
> drm_pagemap_zdd
> drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes
> drm/pagemap: Add a drm_pagemap cache and shrinker
> drm/xe: Use the drm_pagemap cache and shrinker
> drm/pagemap: Remove the drm_pagemap_create() interface
> drm/pagemap_util: Add a utility to assign an owner to a set of
> interconnected gpus
> drm/xe: Use the drm_pagemap_util helper to get a svm pagemap owner
> drm/xe: Pass a drm_pagemap pointer around with the memory advise
> attributes
> drm/xe: Use the vma attibute drm_pagemap to select where to migrate
> drm/xe: Simplify madvise_preferred_mem_loc()
> drm/xe/uapi: Extend the madvise functionality to support foreign
> pagemap placement for svm
> drm/xe: Support pcie p2p dma as a fast interconnect
> drm/xe/vm: Add a couple of VM debug printouts
> drm/pagemap, drm/xe: Support migration over interconnect
> drm/xe/svm: Document how xe keeps drm_pagemap references
>
> drivers/gpu/drm/Makefile | 3 +-
> drivers/gpu/drm/drm_gpusvm.c | 4 +-
> drivers/gpu/drm/drm_pagemap.c | 354 ++++++++++++---
> drivers/gpu/drm/drm_pagemap_util.c | 568 ++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_device.c | 20 +
> drivers/gpu/drm/xe/xe_device.h | 2 +
> drivers/gpu/drm/xe/xe_device_types.h | 5 +
> drivers/gpu/drm/xe/xe_svm.c | 631 ++++++++++++++++++++++-----
> drivers/gpu/drm/xe/xe_svm.h | 82 +++-
> drivers/gpu/drm/xe/xe_tile.c | 34 +-
> drivers/gpu/drm/xe/xe_tile.h | 21 +
> drivers/gpu/drm/xe/xe_userptr.c | 2 +-
> drivers/gpu/drm/xe/xe_vm.c | 65 ++-
> drivers/gpu/drm/xe/xe_vm.h | 1 +
> drivers/gpu/drm/xe/xe_vm_madvise.c | 106 ++++-
> drivers/gpu/drm/xe/xe_vm_types.h | 21 +-
> drivers/gpu/drm/xe/xe_vram_types.h | 15 +-
> include/drm/drm_pagemap.h | 91 +++-
> include/drm/drm_pagemap_util.h | 92 ++++
> include/uapi/drm/xe_drm.h | 18 +-
> 20 files changed, 1898 insertions(+), 237 deletions(-)
> create mode 100644 drivers/gpu/drm/drm_pagemap_util.c
> create mode 100644 include/drm/drm_pagemap_util.h
>
> --
> 2.51.1
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM
2025-11-18 6:15 ` [PATCH v2 00/17] Dynamic drm_pagemaps and Initial multi-device SVM Alistair Popple
@ 2025-11-18 9:31 ` Thomas Hellström
0 siblings, 0 replies; 33+ messages in thread
From: Thomas Hellström @ 2025-11-18 9:31 UTC (permalink / raw)
To: Alistair Popple
Cc: intel-xe, dri-devel, himal.prasad.ghimiray, airlied,
Simona Vetter, felix.kuehling, Matthew Brost,
Christian König, dakr, Mrozek, Michal, Joonas Lahtinen
On Tue, 2025-11-18 at 17:15 +1100, Alistair Popple wrote:
> On 2025-11-12 at 03:43 +1100, Thomas Hellström
> <thomas.hellstrom@linux.intel.com> wrote...
> > This series aims at providing an initial implementation of multi-
> > device
> > SVM, where communication with peers (migration and direct
> > execution out
> > of peer memory) uses some form of fast interconnect. In this series
> > we're using pcie p2p.
> >
> > In a multi-device environment, the struct pages for device-private
> > memory
> > (the dev_pagemap) may take up a significant amount of system
> > memory. We
> > therefore want to provide a means of revoking / removing the
> > dev_pagemaps
> > not in use. In particular when a device is offlined, we want to
> > block
> > migrating *to* the device memory and migrate data already existing
> > in the
> > device's memory to system. The dev_pagemap then becomes unused and
> > can be
> > removed.
> >
> > Removing and setting up a large dev_pagemap is also quite time-
> > consuming,
> > so removal of unused dev_pagemaps only happens on system memory
> > pressure
> > using a shrinker.
>
> Agreed, it is quite time-consuming; we have run into this problem as
> well, including with the pcie p2p dma pages. On the mm side I've
> started looking at whether/how we can remove the need for struct
> pages altogether for supporting this. It doesn't help you at all
> right now, of course, but hopefully one day we can avoid the need
> for this. I will be discussing this at LPC if you happen to be
> there.
Yeah that sounds great. Will not be at LPC in person but will make sure
to join remotely.
Thanks,
Thomas
>
> - Alistair
>
> > Patch 1 is a small debug printout fix.
> > Patches 2-7 deal with dynamic drm_pagemaps as described above.
> > Patches 8-12 add infrastructure to handle remote drm_pagemaps with
> > fast interconnects.
> > Patch 13 extends the xe madvise() UAPI to handle remote
> > drm_pagemaps.
> > Patch 14 adds a pcie-p2p dma SVM interconnect to the xe driver.
> > Patch 15 adds some SVM-related debug printouts for xe.
> > Patch 16 adds direct interconnect migration.
> > Patch 17 adds some documentation.
> >
> > What's still missing is implementation of migration policies.
> > That will be implemented in follow-up series.
> >
> > v2:
> > - Address review comments from Matt Brost.
> > - Fix compilation issues reported by automated testing
> > - Add patch 1, 17.
> > - What's now patch 16 was extended to support p2p migration.
> >
> > Thomas Hellström (17):
> > drm/xe/svm: Fix a debug printout
> > drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap
> > drm/pagemap: Add a refcounted drm_pagemap backpointer to struct
> > drm_pagemap_zdd
> > drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes
> > drm/pagemap: Add a drm_pagemap cache and shrinker
> > drm/xe: Use the drm_pagemap cache and shrinker
> > drm/pagemap: Remove the drm_pagemap_create() interface
> > drm/pagemap_util: Add a utility to assign an owner to a set of
> > interconnected gpus
> > drm/xe: Use the drm_pagemap_util helper to get a svm pagemap
> > owner
> > drm/xe: Pass a drm_pagemap pointer around with the memory advise
> > attributes
> > drm/xe: Use the vma attibute drm_pagemap to select where to
> > migrate
> > drm/xe: Simplify madvise_preferred_mem_loc()
> > drm/xe/uapi: Extend the madvise functionality to support foreign
> > pagemap placement for svm
> > drm/xe: Support pcie p2p dma as a fast interconnect
> > drm/xe/vm: Add a couple of VM debug printouts
> > drm/pagemap, drm/xe: Support migration over interconnect
> > drm/xe/svm: Document how xe keeps drm_pagemap references
> >
> > drivers/gpu/drm/Makefile | 3 +-
> > drivers/gpu/drm/drm_gpusvm.c | 4 +-
> > drivers/gpu/drm/drm_pagemap.c | 354 ++++++++++++---
> > drivers/gpu/drm/drm_pagemap_util.c | 568
> > ++++++++++++++++++++++++
> > drivers/gpu/drm/xe/xe_device.c | 20 +
> > drivers/gpu/drm/xe/xe_device.h | 2 +
> > drivers/gpu/drm/xe/xe_device_types.h | 5 +
> > drivers/gpu/drm/xe/xe_svm.c | 631 ++++++++++++++++++++++-
> > ----
> > drivers/gpu/drm/xe/xe_svm.h | 82 +++-
> > drivers/gpu/drm/xe/xe_tile.c | 34 +-
> > drivers/gpu/drm/xe/xe_tile.h | 21 +
> > drivers/gpu/drm/xe/xe_userptr.c | 2 +-
> > drivers/gpu/drm/xe/xe_vm.c | 65 ++-
> > drivers/gpu/drm/xe/xe_vm.h | 1 +
> > drivers/gpu/drm/xe/xe_vm_madvise.c | 106 ++++-
> > drivers/gpu/drm/xe/xe_vm_types.h | 21 +-
> > drivers/gpu/drm/xe/xe_vram_types.h | 15 +-
> > include/drm/drm_pagemap.h | 91 +++-
> > include/drm/drm_pagemap_util.h | 92 ++++
> > include/uapi/drm/xe_drm.h | 18 +-
> > 20 files changed, 1898 insertions(+), 237 deletions(-)
> > create mode 100644 drivers/gpu/drm/drm_pagemap_util.c
> > create mode 100644 include/drm/drm_pagemap_util.h
> >
> > --
> > 2.51.1
> >
^ permalink raw reply [flat|nested] 33+ messages in thread