* [PATCH 0/4] Rust GPUVM support
From: Alice Ryhl @ 2025-11-28 14:14 UTC (permalink / raw)
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Boris Brezillon, Steven Price, Liviu Dudau, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Frank Binns, Matt Coster,
Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
Rodrigo Vivi, Sumit Semwal, Christian König, dri-devel,
linux-kernel, rust-for-linux, linux-arm-msm, freedreno, nouveau,
intel-xe, linux-media, linaro-mm-sig, Alice Ryhl, Asahi Lina
This series makes a few changes to the way immediate mode works and then
implements a Rust immediate mode GPUVM abstraction on top of them.
Please see the following branch for example usage in Tyr:
https://gitlab.freedesktop.org/panfrost/linux/-/merge_requests/53
For context, please see this previous patch:
https://lore.kernel.org/rust-for-linux/20250621-gpuvm-v3-1-10203da06867@collabora.com/
and the commit message of the last patch.
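As a rough illustration of the intended driver-side split (all fallible
preparation up front, only infallible work inside the fence signalling
critical path), here is a sketch. It is not compile-tested; `bo`, `ctx`,
`iova` and `size` are hypothetical, the per-BO and per-VA data types are
assumed to be `()`, and the Tyr branch above shows real usage:

  // Outside the signalling critical path: pre-allocate everything that
  // can fail.
  let vm_bo = core.gpuvm().obtain(&bo, ())?;
  ctx.prealloc_va = Some(GpuVaAlloc::new(GFP_KERNEL)?);

  // Inside run_job(): apply the precomputed operation. The sm_step_*
  // callbacks consume the pre-allocated objects stashed in `ctx`, so
  // this path does not allocate.
  core.sm_map(OpMapRequest {
      addr: iova,
      range: size,
      offset: 0,
      vm_bo,
      context: &mut ctx,
  })?;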
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Alice Ryhl (4):
drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc()
drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode
drm/gpuvm: use const for drm_gpuva_op_* ptrs
rust: drm: add GPUVM immediate mode abstraction
MAINTAINERS | 1 +
drivers/gpu/drm/drm_gpuvm.c | 80 ++++--
drivers/gpu/drm/imagination/pvr_vm.c | 2 +-
drivers/gpu/drm/msm/msm_gem.h | 2 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
drivers/gpu/drm/panthor/panthor_mmu.c | 10 -
drivers/gpu/drm/xe/xe_vm.c | 4 +-
include/drm/drm_gpuvm.h | 12 +-
rust/bindings/bindings_helper.h | 2 +
rust/helpers/drm_gpuvm.c | 43 +++
rust/helpers/helpers.c | 1 +
rust/kernel/drm/gpuvm/mod.rs | 394 +++++++++++++++++++++++++++
rust/kernel/drm/gpuvm/sm_ops.rs | 469 +++++++++++++++++++++++++++++++++
rust/kernel/drm/gpuvm/va.rs | 148 +++++++++++
rust/kernel/drm/gpuvm/vm_bo.rs | 213 +++++++++++++++
rust/kernel/drm/mod.rs | 1 +
17 files changed, 1337 insertions(+), 49 deletions(-)
---
base-commit: 77b686f688126a5f758b51441a03186e9eb1b0f1
change-id: 20251128-gpuvm-rust-b719cac27ad6
Best regards,
--
Alice Ryhl <aliceryhl@google.com>
* [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc()
From: Alice Ryhl @ 2025-11-28 14:14 UTC (permalink / raw)
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Boris Brezillon, Steven Price, Liviu Dudau, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Frank Binns, Matt Coster,
Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
Rodrigo Vivi, Sumit Semwal, Christian König, dri-devel,
linux-kernel, rust-for-linux, linux-arm-msm, freedreno, nouveau,
intel-xe, linux-media, linaro-mm-sig, Alice Ryhl
In immediate mode, calling drm_gpuvm_bo_obtain_prealloc() may result in a
call to ops->vm_bo_free(vm_bo) while holding the GEM's gpuva mutex. This is
a problem if ops->vm_bo_free(vm_bo) performs any operations that are not
safe in the fence signalling critical path, and it turns out that Panthor
(the only current user of the method) calls drm_gem_shmem_unpin(), which
takes a resv lock internally.
This constitutes both a violation of fence signalling safety and a lock
inversion. To fix this, modify the method to take the GEM's gpuva mutex
internally, so that the mutex can be unlocked before freeing the
preallocated vm_bo.
Note that this change requires drivers calling drm_gpuvm_bo_obtain_prealloc()
to use immediate mode, as the function would otherwise take the wrong lock.
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
drivers/gpu/drm/drm_gpuvm.c | 58 ++++++++++++++++++++++-------------
drivers/gpu/drm/panthor/panthor_mmu.c | 10 ------
2 files changed, 37 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 936e6c1a60c16ed5a6898546bf99e23a74f6b58b..f08a5cc1d611f971862c1272987e5ecd6d97c163 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1601,14 +1601,37 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
}
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create);
+static void
+drm_gpuvm_bo_destroy_not_in_lists(struct drm_gpuvm_bo *vm_bo)
+{
+ struct drm_gpuvm *gpuvm = vm_bo->vm;
+ const struct drm_gpuvm_ops *ops = gpuvm->ops;
+ struct drm_gem_object *obj = vm_bo->obj;
+
+ if (ops && ops->vm_bo_free)
+ ops->vm_bo_free(vm_bo);
+ else
+ kfree(vm_bo);
+
+ drm_gpuvm_put(gpuvm);
+ drm_gem_object_put(obj);
+}
+
+static void
+drm_gpuvm_bo_destroy_not_in_lists_kref(struct kref *kref)
+{
+ struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+ kref);
+
+ drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
+}
+
static void
drm_gpuvm_bo_destroy(struct kref *kref)
{
struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
kref);
struct drm_gpuvm *gpuvm = vm_bo->vm;
- const struct drm_gpuvm_ops *ops = gpuvm->ops;
- struct drm_gem_object *obj = vm_bo->obj;
bool lock = !drm_gpuvm_resv_protected(gpuvm);
if (!lock)
@@ -1617,16 +1640,10 @@ drm_gpuvm_bo_destroy(struct kref *kref)
drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
drm_gpuvm_bo_list_del(vm_bo, evict, lock);
- drm_gem_gpuva_assert_lock_held(gpuvm, obj);
+ drm_gem_gpuva_assert_lock_held(gpuvm, vm_bo->obj);
list_del(&vm_bo->list.entry.gem);
- if (ops && ops->vm_bo_free)
- ops->vm_bo_free(vm_bo);
- else
- kfree(vm_bo);
-
- drm_gpuvm_put(gpuvm);
- drm_gem_object_put(obj);
+ drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
}
/**
@@ -1744,9 +1761,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put_deferred);
void
drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
{
- const struct drm_gpuvm_ops *ops = gpuvm->ops;
struct drm_gpuvm_bo *vm_bo;
- struct drm_gem_object *obj;
struct llist_node *bo_defer;
bo_defer = llist_del_all(&gpuvm->bo_defer);
@@ -1765,14 +1780,7 @@ drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
while (bo_defer) {
vm_bo = llist_entry(bo_defer, struct drm_gpuvm_bo, list.entry.bo_defer);
bo_defer = bo_defer->next;
- obj = vm_bo->obj;
- if (ops && ops->vm_bo_free)
- ops->vm_bo_free(vm_bo);
- else
- kfree(vm_bo);
-
- drm_gpuvm_put(gpuvm);
- drm_gem_object_put(obj);
+ drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
}
}
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup);
@@ -1860,6 +1868,9 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
* count is decreased. If not found @__vm_bo is returned without further
* increase of the reference count.
*
+ * The provided @__vm_bo must not already be in the gpuva, evict, or extobj
+ * lists prior to calling this method.
+ *
* A new &drm_gpuvm_bo is added to the GEMs gpuva list.
*
* Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing
@@ -1872,14 +1883,19 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo)
struct drm_gem_object *obj = __vm_bo->obj;
struct drm_gpuvm_bo *vm_bo;
+ drm_WARN_ON(gpuvm->drm, !drm_gpuvm_immediate_mode(gpuvm));
+
+ mutex_lock(&obj->gpuva.lock);
vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
if (vm_bo) {
- drm_gpuvm_bo_put(__vm_bo);
+ mutex_unlock(&obj->gpuva.lock);
+ kref_put(&__vm_bo->kref, drm_gpuvm_bo_destroy_not_in_lists_kref);
return vm_bo;
}
drm_gem_gpuva_assert_lock_held(gpuvm, obj);
list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list);
+ mutex_unlock(&obj->gpuva.lock);
return __vm_bo;
}
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 9f5f4ddf291024121f3fd5644f2fdeba354fa67c..be8811a70e1a3adec87ca4a85cad7c838f54bebf 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1224,17 +1224,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
goto err_cleanup;
}
- /* drm_gpuvm_bo_obtain_prealloc() will call drm_gpuvm_bo_put() on our
- * pre-allocated BO if the <BO,VM> association exists. Given we
- * only have one ref on preallocated_vm_bo, drm_gpuvm_bo_destroy() will
- * be called immediately, and we have to hold the VM resv lock when
- * calling this function.
- */
- dma_resv_lock(panthor_vm_resv(vm), NULL);
- mutex_lock(&bo->base.base.gpuva.lock);
op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
- mutex_unlock(&bo->base.base.gpuva.lock);
- dma_resv_unlock(panthor_vm_resv(vm));
op_ctx->map.bo_offset = offset;
--
2.52.0.487.g5c8c507ade-goog
* [PATCH 2/4] drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode
From: Alice Ryhl @ 2025-11-28 14:14 UTC (permalink / raw)
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Boris Brezillon, Steven Price, Liviu Dudau, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Frank Binns, Matt Coster,
Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
Rodrigo Vivi, Sumit Semwal, Christian König, dri-devel,
linux-kernel, rust-for-linux, linux-arm-msm, freedreno, nouveau,
intel-xe, linux-media, linaro-mm-sig, Alice Ryhl
The previous commit updated drm_gpuvm_bo_obtain_prealloc() to take locks
internally, which means that it is only usable in immediate mode.
Conversely, drm_gpuvm_bo_obtain() requires staged mode, so there is now one
obtain variant for each mode that GPUVM can be used in.
To reflect this, add a warning when drm_gpuvm_bo_obtain() is used in
immediate mode, and rename it with a _locked() suffix to make it clear that
the caller is required to take the locks.
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
drivers/gpu/drm/drm_gpuvm.c | 16 +++++++++++++---
drivers/gpu/drm/imagination/pvr_vm.c | 2 +-
drivers/gpu/drm/msm/msm_gem.h | 2 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 4 ++--
include/drm/drm_gpuvm.h | 4 ++--
7 files changed, 21 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f08a5cc1d611f971862c1272987e5ecd6d97c163..9cd06c7600dc32ceee0f0beb5e3daf31698a66b3 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1832,16 +1832,26 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find);
* count of the &drm_gpuvm_bo accordingly. If not found, allocates a new
* &drm_gpuvm_bo.
*
+ * Requires the lock for the GEMs gpuva list.
+ *
* A new &drm_gpuvm_bo is added to the GEMs gpuva list.
*
* Returns: a pointer to the &drm_gpuvm_bo on success, an ERR_PTR on failure
*/
struct drm_gpuvm_bo *
-drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
- struct drm_gem_object *obj)
+drm_gpuvm_bo_obtain_locked(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj)
{
struct drm_gpuvm_bo *vm_bo;
+ /*
+ * In immediate mode this would require the caller to hold the GEMs
+ * gpuva mutex, but it's not okay to allocate while holding that lock,
+ * and this method allocates. Immediate mode drivers should use
+ * drm_gpuvm_bo_obtain_prealloc() instead.
+ */
+ drm_WARN_ON(gpuvm->drm, drm_gpuvm_immediate_mode(gpuvm));
+
vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
if (vm_bo)
return vm_bo;
@@ -1855,7 +1865,7 @@ drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
return vm_bo;
}
-EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_locked);
/**
* drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 3d97990170bf6b1341116c5c8b9d01421944eda4..30ff9b84eb14f2455003e76108de6d489a13f61a 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -255,7 +255,7 @@ pvr_vm_bind_op_map_init(struct pvr_vm_bind_op *bind_op,
bind_op->type = PVR_VM_BIND_TYPE_MAP;
dma_resv_lock(obj->resv, NULL);
- bind_op->gpuvm_bo = drm_gpuvm_bo_obtain(&vm_ctx->gpuvm_mgr, obj);
+ bind_op->gpuvm_bo = drm_gpuvm_bo_obtain_locked(&vm_ctx->gpuvm_mgr, obj);
dma_resv_unlock(obj->resv);
if (IS_ERR(bind_op->gpuvm_bo))
return PTR_ERR(bind_op->gpuvm_bo);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index a4cf31853c5008e171c3ad72cde1004c60fe5212..26dfe3d22e3e847f7e63174481d03f72878a8ced 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -60,7 +60,7 @@ struct msm_gem_vm_log_entry {
* embedded in any larger driver structure. The GEM object holds a list of
* drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma
* holds a reference to the vm_bo, and drops it when the vma is unlinked.
- * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an
+ * So we just need to call drm_gpuvm_bo_obtain_locked() to return a ref to an
* existing vm_bo, or create a new one. Once the vma is linked, the ref
* to the vm_bo can be dropped (since the vma is holding one).
*/
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 8316af1723c227f919594446c3721e1a948cbc9e..239b6168a26e636b511187b4993945d1565d149f 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -413,7 +413,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
if (!obj)
return &vma->base;
- vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
+ vm_bo = drm_gpuvm_bo_obtain_locked(&vm->base, obj);
if (IS_ERR(vm_bo)) {
ret = PTR_ERR(vm_bo);
goto err_va_remove;
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 79eefdfd08a2678fedf69503ddf7e9e17ed14c6f..d8888bd29cccef4b8dad9eff2bf6e2b1fd1a7e4d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1207,7 +1207,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
return -ENOENT;
dma_resv_lock(obj->resv, NULL);
- op->vm_bo = drm_gpuvm_bo_obtain(&uvmm->base, obj);
+ op->vm_bo = drm_gpuvm_bo_obtain_locked(&uvmm->base, obj);
dma_resv_unlock(obj->resv);
if (IS_ERR(op->vm_bo))
return PTR_ERR(op->vm_bo);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index f602b874e0547591d9008333c18f3de0634c48c7..de52d01b0921cc8ac619deeed47b578e0ae69257 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1004,7 +1004,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
xe_bo_assert_held(bo);
- vm_bo = drm_gpuvm_bo_obtain(vma->gpuva.vm, &bo->ttm.base);
+ vm_bo = drm_gpuvm_bo_obtain_locked(vma->gpuva.vm, &bo->ttm.base);
if (IS_ERR(vm_bo)) {
xe_vma_free(vma);
return ERR_CAST(vm_bo);
@@ -2249,7 +2249,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
if (err)
return ERR_PTR(err);
- vm_bo = drm_gpuvm_bo_obtain(&vm->gpuvm, obj);
+ vm_bo = drm_gpuvm_bo_obtain_locked(&vm->gpuvm, obj);
if (IS_ERR(vm_bo)) {
xe_bo_unlock(bo);
return ERR_CAST(vm_bo);
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index fdfc575b260360611ff8ce16c327acede787929f..0d3fc1f6cac9966a42f3bc82b0b491bfefaf5b96 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -736,8 +736,8 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
struct drm_gem_object *obj);
struct drm_gpuvm_bo *
-drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
- struct drm_gem_object *obj);
+drm_gpuvm_bo_obtain_locked(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj);
struct drm_gpuvm_bo *
drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *vm_bo);
--
2.52.0.487.g5c8c507ade-goog
* [PATCH 3/4] drm/gpuvm: use const for drm_gpuva_op_* ptrs
From: Alice Ryhl @ 2025-11-28 14:14 UTC (permalink / raw)
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Boris Brezillon, Steven Price, Liviu Dudau, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Frank Binns, Matt Coster,
Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
Rodrigo Vivi, Sumit Semwal, Christian König, dri-devel,
linux-kernel, rust-for-linux, linux-arm-msm, freedreno, nouveau,
intel-xe, linux-media, linaro-mm-sig, Alice Ryhl
These functions only read the values behind the op pointers without
modifying them, so it is appropriate to make the pointers const.
This allows us to avoid const -> mut pointer casts in Rust.
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
drivers/gpu/drm/drm_gpuvm.c | 6 +++---
include/drm/drm_gpuvm.h | 8 ++++----
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 9cd06c7600dc32ceee0f0beb5e3daf31698a66b3..e06b58aabb8ea6ebd92c625583ae2852c9d2caf1 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2283,7 +2283,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_interval_empty);
void
drm_gpuva_map(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va,
- struct drm_gpuva_op_map *op)
+ const struct drm_gpuva_op_map *op)
{
drm_gpuva_init_from_op(va, op);
drm_gpuva_insert(gpuvm, va);
@@ -2303,7 +2303,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_map);
void
drm_gpuva_remap(struct drm_gpuva *prev,
struct drm_gpuva *next,
- struct drm_gpuva_op_remap *op)
+ const struct drm_gpuva_op_remap *op)
{
struct drm_gpuva *va = op->unmap->va;
struct drm_gpuvm *gpuvm = va->vm;
@@ -2330,7 +2330,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remap);
* Removes the &drm_gpuva associated with the &drm_gpuva_op_unmap.
*/
void
-drm_gpuva_unmap(struct drm_gpuva_op_unmap *op)
+drm_gpuva_unmap(const struct drm_gpuva_op_unmap *op)
{
drm_gpuva_remove(op->va);
}
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 0d3fc1f6cac9966a42f3bc82b0b491bfefaf5b96..655bd9104ffb24170fca14dfa034ee79f5400930 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1121,7 +1121,7 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
struct drm_gpuva_ops *ops);
static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
- struct drm_gpuva_op_map *op)
+ const struct drm_gpuva_op_map *op)
{
va->va.addr = op->va.addr;
va->va.range = op->va.range;
@@ -1265,13 +1265,13 @@ int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
void drm_gpuva_map(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va,
- struct drm_gpuva_op_map *op);
+ const struct drm_gpuva_op_map *op);
void drm_gpuva_remap(struct drm_gpuva *prev,
struct drm_gpuva *next,
- struct drm_gpuva_op_remap *op);
+ const struct drm_gpuva_op_remap *op);
-void drm_gpuva_unmap(struct drm_gpuva_op_unmap *op);
+void drm_gpuva_unmap(const struct drm_gpuva_op_unmap *op);
/**
* drm_gpuva_op_remap_to_unmap_range() - Helper to get the start and range of
--
2.52.0.487.g5c8c507ade-goog
* [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
From: Alice Ryhl @ 2025-11-28 14:14 UTC (permalink / raw)
To: Danilo Krummrich, Daniel Almeida
Cc: Matthew Brost, Thomas Hellström, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Boris Brezillon, Steven Price, Liviu Dudau, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Frank Binns, Matt Coster,
Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, Lyude Paul, Lucas De Marchi,
Rodrigo Vivi, Sumit Semwal, Christian König, dri-devel,
linux-kernel, rust-for-linux, linux-arm-msm, freedreno, nouveau,
intel-xe, linux-media, linaro-mm-sig, Alice Ryhl, Asahi Lina
Add a GPUVM abstraction to be used by Rust GPU drivers.
GPUVM keeps track of a GPU's virtual address (VA) space and manages the
corresponding virtual mappings represented by "GPU VA" objects. It also
keeps track of the gem::Object<T> used to back the mappings through
GpuVmBo<T>.
This abstraction is only usable by drivers that wish to use GPUVM in
immediate mode. This allows us to build the locking scheme into the API
design: the GEM's gpuva mutex protects the GEM gpuva list, and the resv
lock protects the extobj list. The evicted list is not yet used in this
version.
This abstraction provides a special handle called the GpuVmCore, which is a
wrapper around ARef<GpuVm> that provides access to the interval tree.
Generally, all changes to the address space require mutable access to this
unique handle.
Some of the safety comments are still somewhat WIP, but I think the API
should be sound as-is.
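To give a feel for the driver-facing trait, here is a rough sketch of an
implementation. It is not compile-tested and purely illustrative: MyDriver,
MyGem, MyCtx and the my_hw_map()/my_hw_unmap() page-table helpers are
hypothetical, and the per-BO/per-VA data types are left as () for brevity:

  struct MyVm;

  // Per-call context: drm_gpuva boxes pre-allocated before entering the
  // fence signalling critical path, consumed by the sm_step_* callbacks.
  struct MyCtx {
      vas: [Option<GpuVaAlloc<MyVm>>; 2],
  }

  impl DriverGpuVm for MyVm {
      type Driver = MyDriver;
      type Object = MyGem;
      type SharedData = ();
      type VmBoData = ();
      type VaData = ();
      type SmContext = MyCtx;

      fn sm_step_map<'op>(
          &mut self,
          op: OpMap<'op, Self>,
          ctx: &mut MyCtx,
      ) -> Result<OpMapped<'op, Self>, Error> {
          // Program the page tables (hypothetical helper), then record the
          // mapping in GPUVM using a pre-allocated drm_gpuva.
          my_hw_map(op.addr(), op.length(), op.obj(), op.gem_offset())?;
          let va = ctx.vas[0].take().ok_or(EINVAL)?;
          Ok(op.insert(va, ()))
      }

      fn sm_step_unmap<'op>(
          &mut self,
          op: OpUnmap<'op, Self>,
          _ctx: &mut MyCtx,
      ) -> Result<OpUnmapped<'op, Self>, Error> {
          my_hw_unmap(op.va().addr(), op.va().length())?;
          let (done, _removed_va) = op.remove();
          Ok(done)
      }

      fn sm_step_remap<'op>(
          &mut self,
          op: OpRemap<'op, Self>,
          ctx: &mut MyCtx,
      ) -> Result<OpRemapped<'op, Self>, Error> {
          // Tear down the old mapping, re-create the surviving prev/next
          // pieces, and hand GPUVM two pre-allocated VAs; the unused one
          // comes back in OpRemapRet.
          my_hw_unmap(op.va_to_unmap().addr(), op.va_to_unmap().length())?;
          if let Some(prev) = op.prev() {
              my_hw_map(prev.addr(), prev.length(), op.obj(), prev.gem_offset())?;
          }
          if let Some(next) = op.next() {
              my_hw_map(next.addr(), next.length(), op.obj(), next.gem_offset())?;
          }
          let vas = [
              ctx.vas[0].take().ok_or(EINVAL)?,
              ctx.vas[1].take().ok_or(EINVAL)?,
          ];
          let (done, _ret) = op.remap(vas, (), ());
          Ok(done)
      }
  }

The GpuVaAlloc boxes in MyCtx would be created with GpuVaAlloc::new()
before entering the fence signalling critical path, so none of the
callbacks above allocate.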
Co-developed-by: Asahi Lina <lina+kernel@asahilina.net>
Signed-off-by: Asahi Lina <lina+kernel@asahilina.net>
Co-developed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
MAINTAINERS | 1 +
rust/bindings/bindings_helper.h | 2 +
rust/helpers/drm_gpuvm.c | 43 ++++
rust/helpers/helpers.c | 1 +
rust/kernel/drm/gpuvm/mod.rs | 394 +++++++++++++++++++++++++++++++++
rust/kernel/drm/gpuvm/sm_ops.rs | 469 ++++++++++++++++++++++++++++++++++++++++
rust/kernel/drm/gpuvm/va.rs | 148 +++++++++++++
rust/kernel/drm/gpuvm/vm_bo.rs | 213 ++++++++++++++++++
rust/kernel/drm/mod.rs | 1 +
9 files changed, 1272 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 952aed4619c25d395c12962e559d6cd3362f64a7..946629eb9ebf19922bbe782fed37be07067d6bf2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8591,6 +8591,7 @@ S: Supported
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/drm_gpuvm.c
F: include/drm/drm_gpuvm.h
+F: rust/kernel/drm/gpuvm/
DRM LOG
M: Jocelyn Falempe <jfalempe@redhat.com>
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 2e43c66635a2c9f31bd99b9817bd2d6ab89fbcf2..c776ae198e1db91f010f88ff1d1c888a3036a87f 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -33,6 +33,7 @@
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
+#include <drm/drm_gpuvm.h>
#include <drm/drm_ioctl.h>
#include <kunit/test.h>
#include <linux/auxiliary_bus.h>
@@ -103,6 +104,7 @@ const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
const gfp_t RUST_CONST_HELPER___GFP_NOWARN = ___GFP_NOWARN;
const blk_features_t RUST_CONST_HELPER_BLK_FEAT_ROTATIONAL = BLK_FEAT_ROTATIONAL;
const fop_flags_t RUST_CONST_HELPER_FOP_UNSIGNED_OFFSET = FOP_UNSIGNED_OFFSET;
+const u32 RUST_CONST_HELPER_DRM_EXEC_INTERRUPTIBLE_WAIT = DRM_EXEC_INTERRUPTIBLE_WAIT;
const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
diff --git a/rust/helpers/drm_gpuvm.c b/rust/helpers/drm_gpuvm.c
new file mode 100644
index 0000000000000000000000000000000000000000..18b7dbd2e32c3162455b344e72ec2940c632cc6b
--- /dev/null
+++ b/rust/helpers/drm_gpuvm.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0 or MIT
+
+#ifdef CONFIG_DRM_GPUVM
+
+#include <drm/drm_gpuvm.h>
+
+struct drm_gpuvm *rust_helper_drm_gpuvm_get(struct drm_gpuvm *obj)
+{
+ return drm_gpuvm_get(obj);
+}
+
+void rust_helper_drm_gpuva_init_from_op(struct drm_gpuva *va, struct drm_gpuva_op_map *op)
+{
+ drm_gpuva_init_from_op(va, op);
+}
+
+struct drm_gpuvm_bo *rust_helper_drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
+{
+ return drm_gpuvm_bo_get(vm_bo);
+}
+
+void rust_helper_drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec)
+{
+ return drm_gpuvm_exec_unlock(vm_exec);
+}
+
+bool rust_helper_drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj)
+{
+ return drm_gpuvm_is_extobj(gpuvm, obj);
+}
+
+int rust_helper_dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx)
+{
+ return dma_resv_lock(obj, ctx);
+}
+
+void rust_helper_dma_resv_unlock(struct dma_resv *obj)
+{
+ dma_resv_unlock(obj);
+}
+
+#endif // CONFIG_DRM_GPUVM
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 551da6c9b5064c324d6f62bafcec672c6c6f5bee..91f45155eb9c2c4e92b56ee1abf7d45188873f3c 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -26,6 +26,7 @@
#include "device.c"
#include "dma.c"
#include "drm.c"
+#include "drm_gpuvm.c"
#include "err.c"
#include "irq.c"
#include "fs.c"
diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
new file mode 100644
index 0000000000000000000000000000000000000000..9834dbb938a3622e46048e9b8e06bc6bf03aa0d2
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/mod.rs
@@ -0,0 +1,394 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+//! DRM GPUVM in immediate mode
+//!
+//! Rust abstractions for using GPUVM in immediate mode. This is when the GPUVM state is updated
+//! during `run_job()`, i.e., in the DMA fence signalling critical path, to ensure that the GPUVM
+//! and the GPU's virtual address space have the same state at all times.
+//!
+//! C header: [`include/drm/drm_gpuvm.h`](srctree/include/drm/drm_gpuvm.h)
+
+use kernel::{
+ alloc::{AllocError, Flags as AllocFlags},
+ bindings, drm,
+ drm::gem::IntoGEMObject,
+ error::to_result,
+ prelude::*,
+ sync::aref::{ARef, AlwaysRefCounted},
+ types::Opaque,
+};
+
+use core::{
+ cell::UnsafeCell,
+ marker::PhantomData,
+ mem::{ManuallyDrop, MaybeUninit},
+ ops::{Deref, DerefMut, Range},
+ ptr::{self, NonNull},
+};
+
+mod sm_ops;
+pub use self::sm_ops::*;
+
+mod vm_bo;
+pub use self::vm_bo::*;
+
+mod va;
+pub use self::va::*;
+
+/// A DRM GPU VA manager.
+///
+/// This object is refcounted, but the "core" is only accessible using a special unique handle. The
+/// core consists of the `core` field and the GPUVM's interval tree.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVm<T: DriverGpuVm> {
+ #[pin]
+ vm: Opaque<bindings::drm_gpuvm>,
+ /// Accessed only through the [`GpuVmCore`] reference.
+ core: UnsafeCell<T>,
+ /// Shared data not protected by any lock.
+ #[pin]
+ shared_data: T::SharedData,
+}
+
+// SAFETY: `GpuVm<T>` embeds a refcounted `drm_gpuvm`; it is only freed via `vm_free()` once the refcount reaches zero.
+unsafe impl<T: DriverGpuVm> AlwaysRefCounted for GpuVm<T> {
+ fn inc_ref(&self) {
+ // SAFETY: `self.vm` points to a valid `drm_gpuvm` whose refcount is nonzero, so it may be incremented.
+ unsafe { bindings::drm_gpuvm_get(self.vm.get()) };
+ }
+
+ unsafe fn dec_ref(obj: NonNull<Self>) {
+ // SAFETY: The caller guarantees that `obj` is valid and owns a refcount that may be released.
+ unsafe { bindings::drm_gpuvm_put((*obj.as_ptr()).vm.get()) };
+ }
+}
+
+impl<T: DriverGpuVm> GpuVm<T> {
+ const fn vtable() -> &'static bindings::drm_gpuvm_ops {
+ &bindings::drm_gpuvm_ops {
+ vm_free: Some(Self::vm_free),
+ op_alloc: None,
+ op_free: None,
+ vm_bo_alloc: GpuVmBo::<T>::ALLOC_FN,
+ vm_bo_free: GpuVmBo::<T>::FREE_FN,
+ vm_bo_validate: None,
+ sm_step_map: Some(Self::sm_step_map),
+ sm_step_unmap: Some(Self::sm_step_unmap),
+ sm_step_remap: Some(Self::sm_step_remap),
+ }
+ }
+
+ /// Creates a GPUVM instance.
+ #[expect(clippy::new_ret_no_self)]
+ pub fn new<E>(
+ name: &'static CStr,
+ dev: &drm::Device<T::Driver>,
+ r_obj: &T::Object,
+ range: Range<u64>,
+ reserve_range: Range<u64>,
+ core: T,
+ shared: impl PinInit<T::SharedData, E>,
+ ) -> Result<GpuVmCore<T>, E>
+ where
+ E: From<AllocError>,
+ E: From<core::convert::Infallible>,
+ {
+ let obj = KBox::try_pin_init::<E>(
+ try_pin_init!(Self {
+ core <- UnsafeCell::new(core),
+ shared_data <- shared,
+ vm <- Opaque::ffi_init(|vm| {
+ // SAFETY: These arguments are valid. `vm` is valid until refcount drops to
+ // zero.
+ unsafe {
+ bindings::drm_gpuvm_init(
+ vm,
+ name.as_char_ptr(),
+ bindings::drm_gpuvm_flags_DRM_GPUVM_IMMEDIATE_MODE
+ | bindings::drm_gpuvm_flags_DRM_GPUVM_RESV_PROTECTED,
+ dev.as_raw(),
+ r_obj.as_raw(),
+ range.start,
+ range.end - range.start,
+ reserve_range.start,
+ reserve_range.end - reserve_range.start,
+ const { Self::vtable() },
+ )
+ }
+ }),
+ }? E),
+ GFP_KERNEL,
+ )?;
+ // SAFETY: This transfers the initial refcount to the ARef.
+ Ok(GpuVmCore(unsafe {
+ ARef::from_raw(NonNull::new_unchecked(KBox::into_raw(
+ Pin::into_inner_unchecked(obj),
+ )))
+ }))
+ }
+
+ /// Access this [`GpuVm`] from a raw pointer.
+ ///
+ /// # Safety
+ ///
+ /// For the duration of `'a`, the pointer must reference a valid [`GpuVm<T>`].
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm) -> &'a Self {
+ // SAFETY: `drm_gpuvm` is first field and `repr(C)`.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Get a raw pointer.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::drm_gpuvm {
+ self.vm.get()
+ }
+
+ /// Access the shared data.
+ #[inline]
+ pub fn shared(&self) -> &T::SharedData {
+ &self.shared_data
+ }
+
+ /// The start of the VA space.
+ #[inline]
+ pub fn va_start(&self) -> u64 {
+ // SAFETY: Safe by the type invariant of `GpuVm<T>`.
+ unsafe { (*self.as_raw()).mm_start }
+ }
+
+ /// The length of the address space.
+ #[inline]
+ pub fn va_length(&self) -> u64 {
+ // SAFETY: Safe by the type invariant of `GpuVm<T>`.
+ unsafe { (*self.as_raw()).mm_range }
+ }
+
+ /// Returns the range of the GPU virtual address space.
+ #[inline]
+ pub fn va_range(&self) -> Range<u64> {
+ let start = self.va_start();
+ let end = start + self.va_length();
+ Range { start, end }
+ }
+
+ /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
+ #[inline]
+ pub fn obtain(
+ &self,
+ obj: &T::Object,
+ data: impl PinInit<T::VmBoData>,
+ ) -> Result<GpuVmBoObtain<T>, AllocError> {
+ Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
+ }
+
+ /// Prepare this GPUVM.
+ #[inline]
+ pub fn prepare(&self, num_fences: u32) -> impl PinInit<GpuVmExec<'_, T>, Error> {
+ try_pin_init!(GpuVmExec {
+ exec <- Opaque::try_ffi_init(|exec: *mut bindings::drm_gpuvm_exec| {
+ // SAFETY: exec is valid but unused memory, so we can write.
+ unsafe {
+ ptr::write_bytes(exec, 0u8, 1usize);
+ ptr::write(&raw mut (*exec).vm, self.as_raw());
+ ptr::write(&raw mut (*exec).flags, bindings::DRM_EXEC_INTERRUPTIBLE_WAIT);
+ ptr::write(&raw mut (*exec).num_fences, num_fences);
+ }
+
+ // SAFETY: We can prepare the GPUVM.
+ to_result(unsafe { bindings::drm_gpuvm_exec_lock(exec) })
+ }),
+ _gpuvm: PhantomData,
+ })
+ }
+
+ /// Clean up buffer objects that are no longer used.
+ #[inline]
+ pub fn deferred_cleanup(&self) {
+ // SAFETY: Always safe to perform deferred cleanup.
+ unsafe { bindings::drm_gpuvm_bo_deferred_cleanup(self.as_raw()) }
+ }
+
+ /// Check if this GEM object is an external object for this GPUVM.
+ #[inline]
+ pub fn is_extobj(&self, obj: &T::Object) -> bool {
+ // SAFETY: We may call this with any GPUVM and GEM object.
+ unsafe { bindings::drm_gpuvm_is_extobj(self.as_raw(), obj.as_raw()) }
+ }
+
+ /// Free this GPUVM.
+ ///
+ /// # Safety
+ ///
+ /// Called when refcount hits zero.
+ unsafe extern "C" fn vm_free(me: *mut bindings::drm_gpuvm) {
+ // SAFETY: GPUVM was allocated with KBox and can now be freed.
+ drop(unsafe { KBox::<Self>::from_raw(me.cast()) })
+ }
+}
+
+/// The manager for a GPUVM.
+pub trait DriverGpuVm: Sized {
+ /// Parent `Driver` for this object.
+ type Driver: drm::Driver;
+
+ /// The kind of GEM object stored in this GPUVM.
+ type Object: IntoGEMObject;
+
+ /// Data stored in the [`GpuVm`] that is fully shared.
+ type SharedData;
+
+ /// Data stored with each `struct drm_gpuvm_bo`.
+ type VmBoData;
+
+ /// Data stored with each `struct drm_gpuva`.
+ type VaData;
+
+ /// The private data passed to callbacks.
+ type SmContext;
+
+ /// Indicates that a new mapping should be created.
+ fn sm_step_map<'op>(
+ &mut self,
+ op: OpMap<'op, Self>,
+ context: &mut Self::SmContext,
+ ) -> Result<OpMapped<'op, Self>, Error>;
+
+ /// Indicates that an existing mapping should be removed.
+ fn sm_step_unmap<'op>(
+ &mut self,
+ op: OpUnmap<'op, Self>,
+ context: &mut Self::SmContext,
+ ) -> Result<OpUnmapped<'op, Self>, Error>;
+
+ /// Indicates that an existing mapping should be split up.
+ fn sm_step_remap<'op>(
+ &mut self,
+ op: OpRemap<'op, Self>,
+ context: &mut Self::SmContext,
+ ) -> Result<OpRemapped<'op, Self>, Error>;
+}
+
+/// The core of the DRM GPU VA manager.
+///
+/// This object is the unique handle to the GPUVM core; it wraps an `ARef<GpuVm>` and grants access to the `core` field and the interval tree.
+///
+/// # Invariants
+///
+/// This object owns the core.
+pub struct GpuVmCore<T: DriverGpuVm>(ARef<GpuVm<T>>);
+
+impl<T: DriverGpuVm> GpuVmCore<T> {
+ /// Get a reference without access to `core`.
+ #[inline]
+ pub fn gpuvm(&self) -> &GpuVm<T> {
+ &self.0
+ }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmCore<T> {
+ type Target = T;
+ #[inline]
+ fn deref(&self) -> &T {
+ // SAFETY: By the type invariants we may access `core`.
+ unsafe { &*self.0.core.get() }
+ }
+}
+
+impl<T: DriverGpuVm> DerefMut for GpuVmCore<T> {
+ #[inline]
+ fn deref_mut(&mut self) -> &mut T {
+ // SAFETY: By the type invariants we may access `core`.
+ unsafe { &mut *self.0.core.get() }
+ }
+}
+
+/// The exec token for preparing the objects.
+#[pin_data(PinnedDrop)]
+pub struct GpuVmExec<'a, T: DriverGpuVm> {
+ #[pin]
+ exec: Opaque<bindings::drm_gpuvm_exec>,
+ _gpuvm: PhantomData<&'a mut GpuVm<T>>,
+}
+
+impl<'a, T: DriverGpuVm> GpuVmExec<'a, T> {
+ /// Add a fence.
+ ///
+ /// # Safety
+ ///
+ /// The `fence` pointer must reference a valid `dma_fence` for the duration of this call.
+ pub unsafe fn resv_add_fence(
+ &self,
+ // TODO: use a safe fence abstraction
+ fence: *mut bindings::dma_fence,
+ private_usage: DmaResvUsage,
+ extobj_usage: DmaResvUsage,
+ ) {
+ // SAFETY: Caller ensures fence is ok.
+ unsafe {
+ bindings::drm_gpuvm_resv_add_fence(
+ (*self.exec.get()).vm,
+ &raw mut (*self.exec.get()).exec,
+ fence,
+ private_usage as u32,
+ extobj_usage as u32,
+ )
+ }
+ }
+}
+
+#[pinned_drop]
+impl<'a, T: DriverGpuVm> PinnedDrop for GpuVmExec<'a, T> {
+ fn drop(self: Pin<&mut Self>) {
+ // SAFETY: We hold the lock, so it's safe to unlock.
+ unsafe { bindings::drm_gpuvm_exec_unlock(self.exec.get()) };
+ }
+}
+
+/// How the fence will be used.
+#[repr(u32)]
+pub enum DmaResvUsage {
+ /// For in kernel memory management only (e.g. copying, clearing memory).
+ Kernel = bindings::dma_resv_usage_DMA_RESV_USAGE_KERNEL,
+ /// Implicit write synchronization for userspace submissions.
+ Write = bindings::dma_resv_usage_DMA_RESV_USAGE_WRITE,
+ /// Implicit read synchronization for userspace submissions.
+ Read = bindings::dma_resv_usage_DMA_RESV_USAGE_READ,
+ /// No implicit sync (e.g. preemption fences, page table updates, TLB flushes).
+ Bookkeep = bindings::dma_resv_usage_DMA_RESV_USAGE_BOOKKEEP,
+}
+
+/// A lock guard for the GPUVM's resv lock.
+///
+/// This guard provides access to the extobj and evicted lists.
+///
+/// # Invariants
+///
+/// Holds the GPUVM resv lock.
+pub struct GpuvmResvLockGuard<'a, T: DriverGpuVm>(&'a GpuVm<T>);
+
+impl<T: DriverGpuVm> GpuVm<T> {
+ /// Lock the VM's resv lock.
+ #[inline]
+ pub fn resv_lock(&self) -> GpuvmResvLockGuard<'_, T> {
+ // SAFETY: It's always ok to lock the resv lock.
+ unsafe { bindings::dma_resv_lock(self.raw_resv_lock(), ptr::null_mut()) };
+ // INVARIANTS: We took the lock.
+ GpuvmResvLockGuard(self)
+ }
+
+ #[inline]
+ fn raw_resv_lock(&self) -> *mut bindings::dma_resv {
+ // SAFETY: `r_obj` is immutable and valid for duration of GPUVM.
+ unsafe { (*(*self.as_raw()).r_obj).resv }
+ }
+}
+
+impl<'a, T: DriverGpuVm> Drop for GpuvmResvLockGuard<'a, T> {
+ #[inline]
+ fn drop(&mut self) {
+ // SAFETY: We hold the lock so we can release it.
+ unsafe { bindings::dma_resv_unlock(self.0.raw_resv_lock()) };
+ }
+}
diff --git a/rust/kernel/drm/gpuvm/sm_ops.rs b/rust/kernel/drm/gpuvm/sm_ops.rs
new file mode 100644
index 0000000000000000000000000000000000000000..c0dbd4675de644a3b1cbe7d528194ca7fb471848
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/sm_ops.rs
@@ -0,0 +1,469 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+#![allow(clippy::tabs_in_doc_comments)]
+
+use super::*;
+
+struct SmData<'a, T: DriverGpuVm> {
+ gpuvm: &'a mut GpuVmCore<T>,
+ user_context: &'a mut T::SmContext,
+}
+
+#[repr(C)]
+struct SmMapData<'a, T: DriverGpuVm> {
+ sm_data: SmData<'a, T>,
+ vm_bo: GpuVmBoObtain<T>,
+}
+
+/// The argument for [`GpuVmCore::sm_map`].
+pub struct OpMapRequest<'a, T: DriverGpuVm> {
+ /// Address in GPU virtual address space.
+ pub addr: u64,
+ /// Length of mapping to create.
+ pub range: u64,
+ /// Offset in GEM object.
+ pub offset: u64,
+ /// The GEM object to map.
+ pub vm_bo: GpuVmBoObtain<T>,
+ /// The user-provided context type.
+ pub context: &'a mut T::SmContext,
+}
+
+impl<'a, T: DriverGpuVm> OpMapRequest<'a, T> {
+ fn raw_request(&self) -> bindings::drm_gpuvm_map_req {
+ bindings::drm_gpuvm_map_req {
+ map: bindings::drm_gpuva_op_map {
+ va: bindings::drm_gpuva_op_map__bindgen_ty_1 {
+ addr: self.addr,
+ range: self.range,
+ },
+ gem: bindings::drm_gpuva_op_map__bindgen_ty_2 {
+ offset: self.offset,
+ obj: self.vm_bo.obj().as_raw(),
+ },
+ },
+ }
+ }
+}
+
+/// ```text
+/// struct drm_gpuva_op_map {
+/// /**
+/// * @va: structure containing address and range of a map
+/// * operation
+/// */
+/// struct {
+/// /**
+/// * @va.addr: the base address of the new mapping
+/// */
+/// u64 addr;
+///
+/// /**
+/// * @va.range: the range of the new mapping
+/// */
+/// u64 range;
+/// } va;
+///
+/// /**
+/// * @gem: structure containing the &drm_gem_object and it's offset
+/// */
+/// struct {
+/// /**
+/// * @gem.offset: the offset within the &drm_gem_object
+/// */
+/// u64 offset;
+///
+/// /**
+/// * @gem.obj: the &drm_gem_object to map
+/// */
+/// struct drm_gem_object *obj;
+/// } gem;
+/// };
+/// ```
+pub struct OpMap<'op, T: DriverGpuVm> {
+ op: &'op bindings::drm_gpuva_op_map,
+ // Since these abstractions are designed for immediate mode, the VM BO needs to be
+ // pre-allocated, so we always have it available when we reach this point.
+ vm_bo: &'op GpuVmBo<T>,
+ _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpMap<'op, T> {
+ /// The base address of the new mapping.
+ pub fn addr(&self) -> u64 {
+ self.op.va.addr
+ }
+
+ /// The length of the new mapping.
+ pub fn length(&self) -> u64 {
+ self.op.va.range
+ }
+
+ /// The offset within the [`drm_gem_object`](crate::gem::Object).
+ pub fn gem_offset(&self) -> u64 {
+ self.op.gem.offset
+ }
+
+ /// The [`drm_gem_object`](crate::gem::Object) to map.
+ pub fn obj(&self) -> &T::Object {
+ // SAFETY: The `obj` pointer is guaranteed to be valid.
+ unsafe { <T::Object as IntoGEMObject>::from_raw(self.op.gem.obj) }
+ }
+
+ /// The [`GpuVmBo`] that the new VA will be associated with.
+ pub fn vm_bo(&self) -> &GpuVmBo<T> {
+ self.vm_bo
+ }
+
+ /// Use the pre-allocated VA to carry out this map operation.
+ pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit<T::VaData>) -> OpMapped<'op, T> {
+ let va = va.prepare(va_data);
+ // SAFETY: By the type invariants we may access the interval tree.
+ unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
+ // SAFETY: The GEM object is valid, so the mutex is properly initialized.
+ unsafe { bindings::mutex_lock(&raw mut (*self.op.gem.obj).gpuva.lock) };
+ // SAFETY: The va is prepared for insertion, and we hold the GEM lock.
+ unsafe { bindings::drm_gpuva_link(va, self.vm_bo.as_raw()) };
+ // SAFETY: We took the mutex above, so we may unlock it.
+ unsafe { bindings::mutex_unlock(&raw mut (*self.op.gem.obj).gpuva.lock) };
+ OpMapped {
+ _invariant: self._invariant,
+ }
+ }
+}
+
+/// Represents a completed [`OpMap`] operation.
+pub struct OpMapped<'op, T> {
+ _invariant: PhantomData<*mut &'op mut T>,
+}
+
+/// ```text
+/// struct drm_gpuva_op_unmap {
+/// /**
+/// * @va: the &drm_gpuva to unmap
+/// */
+/// struct drm_gpuva *va;
+///
+/// /**
+/// * @keep:
+/// *
+/// * Indicates whether this &drm_gpuva is physically contiguous with the
+/// * original mapping request.
+/// *
+/// * Optionally, if &keep is set, drivers may keep the actual page table
+/// * mappings for this &drm_gpuva, adding the missing page table entries
+/// * only and update the &drm_gpuvm accordingly.
+/// */
+/// bool keep;
+/// };
+/// ```
+pub struct OpUnmap<'op, T: DriverGpuVm> {
+ op: &'op bindings::drm_gpuva_op_unmap,
+ _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpUnmap<'op, T> {
+ /// Indicates whether this `drm_gpuva` is physically contiguous with the
+ /// original mapping request.
+ ///
+ /// Optionally, if `keep` is set, drivers may keep the actual page table
+ /// mappings for this `drm_gpuva`, adding the missing page table entries
+ /// only and update the `drm_gpuvm` accordingly.
+ pub fn keep(&self) -> bool {
+ self.op.keep
+ }
+
+ /// The [`GpuVa`] being unmapped.
+ pub fn va(&self) -> &GpuVa<T> {
+ // SAFETY: This is a valid va.
+ unsafe { GpuVa::<T>::from_raw(self.op.va) }
+ }
+
+ /// Remove the VA.
+ pub fn remove(self) -> (OpUnmapped<'op, T>, GpuVaRemoved<T>) {
+ // SAFETY: The op references a valid drm_gpuva in the GPUVM.
+ unsafe { bindings::drm_gpuva_unmap(self.op) };
+ // SAFETY: The va is no longer in the interval tree so we may unlink it.
+ unsafe { bindings::drm_gpuva_unlink_defer(self.op.va) };
+
+ // SAFETY: We just removed this va from the `GpuVm<T>`.
+ let va = unsafe { GpuVaRemoved::from_raw(self.op.va) };
+
+ (
+ OpUnmapped {
+ _invariant: self._invariant,
+ },
+ va,
+ )
+ }
+}
+
+/// Represents a completed [`OpUnmap`] operation.
+pub struct OpUnmapped<'op, T> {
+ _invariant: PhantomData<*mut &'op mut T>,
+}
+
+/// ```text
+/// struct drm_gpuva_op_remap {
+/// /**
+/// * @prev: the preceding part of a split mapping
+/// */
+/// struct drm_gpuva_op_map *prev;
+///
+/// /**
+/// * @next: the subsequent part of a split mapping
+/// */
+/// struct drm_gpuva_op_map *next;
+///
+/// /**
+/// * @unmap: the unmap operation for the original existing mapping
+/// */
+/// struct drm_gpuva_op_unmap *unmap;
+/// };
+/// ```
+pub struct OpRemap<'op, T: DriverGpuVm> {
+ op: &'op bindings::drm_gpuva_op_remap,
+ _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<'op, T: DriverGpuVm> OpRemap<'op, T> {
+ /// The preceding part of a split mapping.
+ #[inline]
+ pub fn prev(&self) -> Option<&OpRemapMapData> {
+ // SAFETY: When non-null, `prev` points to a valid `drm_gpuva_op_map` for the duration of the callback.
+ NonNull::new(self.op.prev).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
+ }
+
+ /// The subsequent part of a split mapping.
+ #[inline]
+ pub fn next(&self) -> Option<&OpRemapMapData> {
+ // SAFETY: When non-null, `next` points to a valid `drm_gpuva_op_map` for the duration of the callback.
+ NonNull::new(self.op.next).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
+ }
+
+ /// Indicates whether the `drm_gpuva` being removed is physically contiguous with the original
+ /// mapping request.
+ ///
+ /// Optionally, if `keep` is set, drivers may keep the actual page table mappings for this
+ /// `drm_gpuva`, adding the missing page table entries only and update the `drm_gpuvm`
+ /// accordingly.
+ #[inline]
+ pub fn keep(&self) -> bool {
+ // SAFETY: The unmap pointer is always valid.
+ unsafe { (*self.op.unmap).keep }
+ }
+
+ /// The [`GpuVa`] being unmapped.
+ #[inline]
+ pub fn va_to_unmap(&self) -> &GpuVa<T> {
+ // SAFETY: This is a valid va.
+ unsafe { GpuVa::<T>::from_raw((*self.op.unmap).va) }
+ }
+
+ /// The [`drm_gem_object`](crate::gem::Object) whose VA is being remapped.
+ #[inline]
+ pub fn obj(&self) -> &T::Object {
+ self.va_to_unmap().obj()
+ }
+
+ /// The [`GpuVmBo`] that is being remapped.
+ #[inline]
+ pub fn vm_bo(&self) -> &GpuVmBo<T> {
+ self.va_to_unmap().vm_bo()
+ }
+
+ /// Update the GPUVM to perform the remapping.
+ pub fn remap(
+ self,
+ va_alloc: [GpuVaAlloc<T>; 2],
+ prev_data: impl PinInit<T::VaData>,
+ next_data: impl PinInit<T::VaData>,
+ ) -> (OpRemapped<'op, T>, OpRemapRet<T>) {
+ let [va1, va2] = va_alloc;
+
+ let mut unused_va = None;
+ let mut prev_ptr = ptr::null_mut();
+ let mut next_ptr = ptr::null_mut();
+ if self.prev().is_some() {
+ prev_ptr = va1.prepare(prev_data);
+ } else {
+ unused_va = Some(va1);
+ }
+ if self.next().is_some() {
+ next_ptr = va2.prepare(next_data);
+ } else {
+ unused_va = Some(va2);
+ }
+
+ // SAFETY: the pointers are non-null when required
+ unsafe { bindings::drm_gpuva_remap(prev_ptr, next_ptr, self.op) };
+
+ // SAFETY: The GEM object is valid, so the mutex is properly initialized.
+ unsafe { bindings::mutex_lock(&raw mut (*self.obj().as_raw()).gpuva.lock) };
+ if !prev_ptr.is_null() {
+ // SAFETY: The prev_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
+ // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
+ unsafe { bindings::drm_gpuva_link(prev_ptr, self.vm_bo().as_raw()) };
+ }
+ if !next_ptr.is_null() {
+ // SAFETY: The next_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
+ // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
+ unsafe { bindings::drm_gpuva_link(next_ptr, self.vm_bo().as_raw()) };
+ }
+ // SAFETY: We took the mutex above, so we may unlock it.
+ unsafe { bindings::mutex_unlock(&raw mut (*self.obj().as_raw()).gpuva.lock) };
+ // SAFETY: The va is no longer in the interval tree so we may unlink it.
+ unsafe { bindings::drm_gpuva_unlink_defer((*self.op.unmap).va) };
+
+ (
+ OpRemapped {
+ _invariant: self._invariant,
+ },
+ OpRemapRet {
+ // SAFETY: We just removed this va from the `GpuVm<T>`.
+ unmapped_va: unsafe { GpuVaRemoved::from_raw((*self.op.unmap).va) },
+ unused_va,
+ },
+ )
+ }
+}
+
+/// Part of an [`OpRemap`] that represents a new mapping.
+#[repr(transparent)]
+pub struct OpRemapMapData(bindings::drm_gpuva_op_map);
+
+impl OpRemapMapData {
+ /// # Safety
+ /// Must reference a valid `drm_gpuva_op_map` for duration of `'a`.
+ unsafe fn from_raw<'a>(ptr: NonNull<bindings::drm_gpuva_op_map>) -> &'a Self {
+ // SAFETY: ok per safety requirements
+ unsafe { ptr.cast().as_ref() }
+ }
+
+ /// The base address of the new mapping.
+ pub fn addr(&self) -> u64 {
+ self.0.va.addr
+ }
+
+ /// The length of the new mapping.
+ pub fn length(&self) -> u64 {
+ self.0.va.range
+ }
+
+ /// The offset within the [`drm_gem_object`](crate::gem::Object).
+ pub fn gem_offset(&self) -> u64 {
+ self.0.gem.offset
+ }
+}
+
+/// Struct containing objects removed or not used by [`OpRemap::remap`].
+pub struct OpRemapRet<T: DriverGpuVm> {
+ /// The `drm_gpuva` that was removed.
+ pub unmapped_va: GpuVaRemoved<T>,
+ /// If the remap did not split the region into two pieces, then the unused `drm_gpuva` is
+ /// returned here.
+ pub unused_va: Option<GpuVaAlloc<T>>,
+}
+
+/// Represents a completed [`OpRemap`] operation.
+pub struct OpRemapped<'op, T> {
+ _invariant: PhantomData<*mut &'op mut T>,
+}
+
+impl<T: DriverGpuVm> GpuVmCore<T> {
+ /// Create a mapping, removing or remapping anything that overlaps.
+ #[inline]
+ pub fn sm_map(&mut self, req: OpMapRequest<'_, T>) -> Result {
+ let gpuvm = self.gpuvm().as_raw();
+ let raw_req = req.raw_request();
+ let mut p = SmMapData {
+ sm_data: SmData {
+ gpuvm: self,
+ user_context: req.context,
+ },
+ vm_bo: req.vm_bo,
+ };
+ // SAFETY:
+ // * raw_request() creates a valid request.
+ // * The private data is valid to be interpreted as both SmData and SmMapData since the
+ // first field of SmMapData is SmData.
+ to_result(unsafe {
+ bindings::drm_gpuvm_sm_map(gpuvm, (&raw mut p).cast(), &raw const raw_req)
+ })
+ }
+
+ /// Remove any mappings in the given region.
+ #[inline]
+ pub fn sm_unmap(&mut self, addr: u64, length: u64, context: &mut T::SmContext) -> Result {
+ let gpuvm = self.gpuvm().as_raw();
+ let mut p = SmData {
+ gpuvm: self,
+ user_context: context,
+ };
+ // SAFETY:
+ // * The gpuvm pointer is valid, and `addr`/`length` simply describe the range to unmap.
+ // * The private data is valid to be interpreted as only SmData, but drm_gpuvm_sm_unmap()
+ // never calls sm_step_map().
+ to_result(unsafe { bindings::drm_gpuvm_sm_unmap(gpuvm, (&raw mut p).cast(), addr, length) })
+ }
+}
+
+impl<T: DriverGpuVm> GpuVm<T> {
+ /// # Safety
+ /// Must be called from `sm_map`.
+ pub(super) unsafe extern "C" fn sm_step_map(
+ op: *mut bindings::drm_gpuva_op,
+ p: *mut c_void,
+ ) -> c_int {
+ // SAFETY: If we reach `sm_step_map` then we were called from `sm_map` which always passes
+ // an `SmMapData` as private data.
+ let p = unsafe { &mut *p.cast::<SmMapData<'_, T>>() };
+ let op = OpMap {
+ // SAFETY: sm_step_map is called with a map operation.
+ op: unsafe { &(*op).__bindgen_anon_1.map },
+ vm_bo: &p.vm_bo,
+ _invariant: PhantomData,
+ };
+ match p.sm_data.gpuvm.sm_step_map(op, p.sm_data.user_context) {
+ Ok(OpMapped { .. }) => 0,
+ Err(err) => err.to_errno(),
+ }
+ }
+ /// # Safety
+ /// Must be called from `sm_map` or `sm_unmap`.
+ pub(super) unsafe extern "C" fn sm_step_unmap(
+ op: *mut bindings::drm_gpuva_op,
+ p: *mut c_void,
+ ) -> c_int {
+ // SAFETY: If we reach `sm_step_unmap` then we were called from `sm_map` or `sm_unmap` which passes either
+ // an `SmMapData` or `SmData` as private data. Both cases can be cast to `SmData`.
+ let p = unsafe { &mut *p.cast::<SmData<'_, T>>() };
+ let op = OpUnmap {
+ // SAFETY: sm_step_unmap is called with an unmap operation.
+ op: unsafe { &(*op).__bindgen_anon_1.unmap },
+ _invariant: PhantomData,
+ };
+ match p.gpuvm.sm_step_unmap(op, p.user_context) {
+ Ok(OpUnmapped { .. }) => 0,
+ Err(err) => err.to_errno(),
+ }
+ }
+ /// # Safety
+ /// Must be called from `sm_map` or `sm_unmap`.
+ pub(super) unsafe extern "C" fn sm_step_remap(
+ op: *mut bindings::drm_gpuva_op,
+ p: *mut c_void,
+ ) -> c_int {
+ // SAFETY: If we reach `sm_step_remap` then we were called from `sm_map` or `sm_unmap` which passes either
+ // an `SmMapData` or `SmData` as private data. Both cases can be cast to `SmData`.
+ let p = unsafe { &mut *p.cast::<SmData<'_, T>>() };
+ let op = OpRemap {
+ // SAFETY: sm_step_remap is called with a remap operation.
+ op: unsafe { &(*op).__bindgen_anon_1.remap },
+ _invariant: PhantomData,
+ };
+ match p.gpuvm.sm_step_remap(op, p.user_context) {
+ Ok(OpRemapped { .. }) => 0,
+ Err(err) => err.to_errno(),
+ }
+ }
+}
diff --git a/rust/kernel/drm/gpuvm/va.rs b/rust/kernel/drm/gpuvm/va.rs
new file mode 100644
index 0000000000000000000000000000000000000000..a31122ff22282186a1d76d4bb085714f6465722b
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/va.rs
@@ -0,0 +1,148 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+use super::*;
+
+/// Represents that a range of a GEM object is mapped in this [`GpuVm`] instance.
+///
+/// Does not assume that GEM lock is held.
+///
+/// # Invariants
+///
+/// This is a valid `drm_gpuva` that is resident in the [`GpuVm`] instance.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVa<T: DriverGpuVm> {
+ #[pin]
+ inner: Opaque<bindings::drm_gpuva>,
+ #[pin]
+ data: T::VaData,
+}
+
+impl<T: DriverGpuVm> GpuVa<T> {
+ /// Access this [`GpuVa`] from a raw pointer.
+ ///
+ /// # Safety
+ ///
+ /// For the duration of `'a`, the pointer must reference a valid `drm_gpuva` associated with a
+ /// [`GpuVm<T>`].
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuva) -> &'a Self {
+ // SAFETY: `drm_gpuva` is first field and `repr(C)`.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Returns a raw pointer to underlying C value.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::drm_gpuva {
+ self.inner.get()
+ }
+
+ /// Returns the address of this mapping in the GPU virtual address space.
+ #[inline]
+ pub fn addr(&self) -> u64 {
+ // SAFETY: The `va.addr` field of `drm_gpuva` is immutable.
+ unsafe { (*self.as_raw()).va.addr }
+ }
+
+ /// Returns the length of this mapping.
+ #[inline]
+ pub fn length(&self) -> u64 {
+ // SAFETY: The `va.range` field of `drm_gpuva` is immutable.
+ unsafe { (*self.as_raw()).va.range }
+ }
+
+ /// Returns `addr..addr+length`.
+ #[inline]
+ pub fn range(&self) -> Range<u64> {
+ let addr = self.addr();
+ addr..addr + self.length()
+ }
+
+ /// Returns the offset within the GEM object.
+ #[inline]
+ pub fn gem_offset(&self) -> u64 {
+ // SAFETY: The `gem.offset` field of `drm_gpuva` is immutable.
+ unsafe { (*self.as_raw()).gem.offset }
+ }
+
+ /// Returns the GEM object.
+ #[inline]
+ pub fn obj(&self) -> &T::Object {
+ // SAFETY: The `gem.obj` field of `drm_gpuva` is immutable and points to a valid GEM object.
+ unsafe { <T::Object as IntoGEMObject>::from_raw((*self.as_raw()).gem.obj) }
+ }
+
+ /// Returns the underlying [`GpuVmBo`] object that backs this [`GpuVa`].
+ #[inline]
+ pub fn vm_bo(&self) -> &GpuVmBo<T> {
+ // SAFETY: The `vm_bo` field has been set and is immutable for the duration in which this
+ // `drm_gpuva` is resident in the VM.
+ unsafe { GpuVmBo::from_raw((*self.as_raw()).vm_bo) }
+ }
+}
+
+/// A pre-allocated [`GpuVa`] object.
+///
+/// # Invariants
+///
+/// The memory is zeroed.
+pub struct GpuVaAlloc<T: DriverGpuVm>(KBox<MaybeUninit<GpuVa<T>>>);
+
+impl<T: DriverGpuVm> GpuVaAlloc<T> {
+ /// Pre-allocate a [`GpuVa`] object.
+ pub fn new(flags: AllocFlags) -> Result<GpuVaAlloc<T>, AllocError> {
+ // INVARIANTS: Memory allocated with __GFP_ZERO.
+ Ok(GpuVaAlloc(KBox::new_uninit(flags | __GFP_ZERO)?))
+ }
+
+ /// Prepare this `drm_gpuva` for insertion into the GPUVM.
+ pub(super) fn prepare(mut self, va_data: impl PinInit<T::VaData>) -> *mut bindings::drm_gpuva {
+ let va_ptr = MaybeUninit::as_mut_ptr(&mut self.0);
+ // SAFETY: The `data` field is pinned.
+ let Ok(()) = unsafe { va_data.__pinned_init(&raw mut (*va_ptr).data) };
+ KBox::into_raw(self.0).cast()
+ }
+}
+
+/// A [`GpuVa`] object that has been removed.
+///
+/// # Invariants
+///
+/// The `drm_gpuva` is not resident in the [`GpuVm`].
+pub struct GpuVaRemoved<T: DriverGpuVm>(KBox<GpuVa<T>>);
+
+impl<T: DriverGpuVm> GpuVaRemoved<T> {
+ /// Convert a raw pointer into a [`GpuVaRemoved`].
+ ///
+ /// # Safety
+ ///
+ /// Must have been removed from a [`GpuVm<T>`].
+ pub(super) unsafe fn from_raw(ptr: *mut bindings::drm_gpuva) -> Self {
+ // SAFETY: Since it has been removed we can take ownership of allocation.
+ GpuVaRemoved(unsafe { KBox::from_raw(ptr.cast()) })
+ }
+
+ /// Take ownership of the VA data.
+ pub fn into_inner(self) -> T::VaData
+ where
+ T::VaData: Unpin,
+ {
+ KBox::into_inner(self.0).data
+ }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVaRemoved<T> {
+ type Target = T::VaData;
+ fn deref(&self) -> &T::VaData {
+ &self.0.data
+ }
+}
+
+impl<T: DriverGpuVm> DerefMut for GpuVaRemoved<T>
+where
+ T::VaData: Unpin,
+{
+ fn deref_mut(&mut self) -> &mut T::VaData {
+ &mut self.0.data
+ }
+}
diff --git a/rust/kernel/drm/gpuvm/vm_bo.rs b/rust/kernel/drm/gpuvm/vm_bo.rs
new file mode 100644
index 0000000000000000000000000000000000000000..f21aa17ea4f42c4a2b57b1f3a57a18dd2c3c8b7b
--- /dev/null
+++ b/rust/kernel/drm/gpuvm/vm_bo.rs
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+use super::*;
+
+/// Represents that a given GEM object has at least one mapping on this [`GpuVm`] instance.
+///
+/// Does not assume that GEM lock is held.
+#[repr(C)]
+#[pin_data]
+pub struct GpuVmBo<T: DriverGpuVm> {
+ #[pin]
+ inner: Opaque<bindings::drm_gpuvm_bo>,
+ #[pin]
+ data: T::VmBoData,
+}
+
+impl<T: DriverGpuVm> GpuVmBo<T> {
+ pub(super) const ALLOC_FN: Option<unsafe extern "C" fn() -> *mut bindings::drm_gpuvm_bo> = {
+ use core::alloc::Layout;
+ let base = Layout::new::<bindings::drm_gpuvm_bo>();
+ let rust = Layout::new::<Self>();
+ assert!(base.size() <= rust.size());
+ if base.size() != rust.size() || base.align() != rust.align() {
+ Some(Self::vm_bo_alloc)
+ } else {
+ // This causes GPUVM to allocate a `GpuVmBo<T>` with `kzalloc(sizeof(drm_gpuvm_bo))`.
+ None
+ }
+ };
+
+ pub(super) const FREE_FN: Option<unsafe extern "C" fn(*mut bindings::drm_gpuvm_bo)> = {
+ if core::mem::needs_drop::<Self>() {
+ Some(Self::vm_bo_free)
+ } else {
+ // This causes GPUVM to free a `GpuVmBo<T>` with `kfree`.
+ None
+ }
+ };
+
+ /// Custom function for allocating a `drm_gpuvm_bo`.
+ ///
+ /// # Safety
+ ///
+ /// This function is always safe to call; it is only `unsafe` to match the function pointer type in the C struct.
+ unsafe extern "C" fn vm_bo_alloc() -> *mut bindings::drm_gpuvm_bo {
+ KBox::<Self>::new_uninit(GFP_KERNEL | __GFP_ZERO)
+ .map(KBox::into_raw)
+ .unwrap_or(ptr::null_mut())
+ .cast()
+ }
+
+ /// Custom function for freeing a `drm_gpuvm_bo`.
+ ///
+ /// # Safety
+ ///
+ /// The pointer must have been allocated with [`GpuVmBo::ALLOC_FN`], and must not be used after
+ /// this call.
+ unsafe extern "C" fn vm_bo_free(ptr: *mut bindings::drm_gpuvm_bo) {
+ // SAFETY:
+ // * The ptr was allocated from kmalloc with the layout of `GpuVmBo<T>`.
+ // * `ptr->inner` has no destructor.
+ // * `ptr->data` contains a valid `T::VmBoData` that we can drop.
+ drop(unsafe { KBox::<Self>::from_raw(ptr.cast()) });
+ }
+
+ /// Access this [`GpuVmBo`] from a raw pointer.
+ ///
+ /// # Safety
+ ///
+ /// For the duration of `'a`, the pointer must reference a valid `drm_gpuvm_bo` associated with
+ /// a [`GpuVm<T>`].
+ #[inline]
+ pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm_bo) -> &'a Self {
+ // SAFETY: `drm_gpuvm_bo` is first field and `repr(C)`.
+ unsafe { &*ptr.cast() }
+ }
+
+ /// Returns a raw pointer to underlying C value.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+ self.inner.get()
+ }
+
+ /// The [`GpuVm`] that this GEM object is mapped in.
+ #[inline]
+ pub fn gpuvm(&self) -> &GpuVm<T> {
+ // SAFETY: The `vm` pointer is guaranteed to be valid.
+ unsafe { GpuVm::<T>::from_raw((*self.inner.get()).vm) }
+ }
+
+ /// The [`drm_gem_object`](crate::gem::Object) for these mappings.
+ #[inline]
+ pub fn obj(&self) -> &T::Object {
+ // SAFETY: The `obj` pointer is guaranteed to be valid.
+ unsafe { <T::Object as IntoGEMObject>::from_raw((*self.inner.get()).obj) }
+ }
+
+ /// The driver data associated with this buffer object.
+ #[inline]
+ pub fn data(&self) -> &T::VmBoData {
+ &self.data
+ }
+}
+
+/// A pre-allocated [`GpuVmBo`] object.
+///
+/// # Invariants
+///
+/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData`, has a refcount of one, and is
+/// absent from any gem, extobj, or evict lists.
+pub(super) struct GpuVmBoAlloc<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
+
+impl<T: DriverGpuVm> GpuVmBoAlloc<T> {
+ /// Create a new pre-allocated [`GpuVmBo`].
+ ///
+ /// It's intentional that the initializer is infallible: the vm_bo is freed via
+ /// `drm_gpuvm_bo_put`, which drops the data, so there is no way to free a vm_bo whose data
+ /// was never initialized.
+ #[inline]
+ pub(super) fn new(
+ gpuvm: &GpuVm<T>,
+ gem: &T::Object,
+ value: impl PinInit<T::VmBoData>,
+ ) -> Result<GpuVmBoAlloc<T>, AllocError> {
+ // SAFETY: The provided gpuvm and gem ptrs are valid for the duration of this call.
+ let raw_ptr = unsafe {
+ bindings::drm_gpuvm_bo_create(gpuvm.as_raw(), gem.as_raw()).cast::<GpuVmBo<T>>()
+ };
+ // CAST: `GpuVmBo::<T>::ALLOC_FN` ensures that this memory was allocated with the layout
+ // of `GpuVmBo<T>`.
+ let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
+ // SAFETY: `ptr->data` is a valid pinned location.
+ let Ok(()) = unsafe { value.__pinned_init(&raw mut (*raw_ptr).data) };
+ // INVARIANTS: We just created the vm_bo so it's absent from lists, and the data is valid
+ // as we just initialized it.
+ Ok(GpuVmBoAlloc(ptr))
+ }
+
+ /// Returns a raw pointer to underlying C value.
+ #[inline]
+ pub(super) fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+ // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
+ unsafe { (*self.0.as_ptr()).inner.get() }
+ }
+
+ /// Look up whether there is an existing [`GpuVmBo`] for this gem object.
+ #[inline]
+ pub(super) fn obtain(self) -> GpuVmBoObtain<T> {
+ let me = ManuallyDrop::new(self);
+ // SAFETY: Valid `drm_gpuvm_bo` not already in the lists.
+ let ptr = unsafe { bindings::drm_gpuvm_bo_obtain_prealloc(me.as_raw()) };
+
+ // If the vm_bo does not already exist, ensure that it's in the extobj list.
+ if ptr::eq(ptr, me.as_raw()) && me.gpuvm().is_extobj(me.obj()) {
+ let _resv_lock = me.gpuvm().resv_lock();
+ // SAFETY: We hold the GPUVMs resv lock.
+ unsafe { bindings::drm_gpuvm_bo_extobj_add(ptr) };
+ }
+
+ // INVARIANTS: Valid `drm_gpuvm_bo` in the GEM list.
+ // SAFETY: `drm_gpuvm_bo_obtain_prealloc` always returns a non-null ptr
+ GpuVmBoObtain(unsafe { NonNull::new_unchecked(ptr.cast()) })
+ }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmBoAlloc<T> {
+ type Target = GpuVmBo<T>;
+ #[inline]
+ fn deref(&self) -> &GpuVmBo<T> {
+ // SAFETY: By the type invariants we may deref while `Self` exists.
+ unsafe { self.0.as_ref() }
+ }
+}
+
+impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
+ #[inline]
+ fn drop(&mut self) {
+ // SAFETY: It's safe to perform a deferred put in any context.
+ unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
+ }
+}
+
+/// A [`GpuVmBo`] object in the GEM list.
+///
+/// # Invariants
+///
+/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData` and is present in the gem list.
+pub struct GpuVmBoObtain<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
+
+impl<T: DriverGpuVm> GpuVmBoObtain<T> {
+ /// Returns a raw pointer to underlying C value.
+ #[inline]
+ pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
+ // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
+ unsafe { (*self.0.as_ptr()).inner.get() }
+ }
+}
+
+impl<T: DriverGpuVm> Deref for GpuVmBoObtain<T> {
+ type Target = GpuVmBo<T>;
+ #[inline]
+ fn deref(&self) -> &GpuVmBo<T> {
+ // SAFETY: By the type invariants we may deref while `Self` exists.
+ unsafe { self.0.as_ref() }
+ }
+}
+
+impl<T: DriverGpuVm> Drop for GpuVmBoObtain<T> {
+ #[inline]
+ fn drop(&mut self) {
+ // SAFETY: It's safe to perform a deferred put in any context.
+ unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
+ }
+}
diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
index 1b82b6945edf25b947afc08300e211bd97150d6b..a4b6c5430198571ec701af2ef452cc9ac55870e6 100644
--- a/rust/kernel/drm/mod.rs
+++ b/rust/kernel/drm/mod.rs
@@ -6,6 +6,7 @@
pub mod driver;
pub mod file;
pub mod gem;
+pub mod gpuvm;
pub mod ioctl;
pub use self::device::Device;
--
2.52.0.487.g5c8c507ade-goog
^ permalink raw reply related [flat|nested] 24+ messages in thread
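As an illustration of the const layout check used by GpuVmBo::ALLOC_FN above, here is a
standalone sketch in plain Rust; `CBase` and `Wrapper` are made-up stand-ins for
drm_gpuvm_bo and GpuVmBo<T>, not kernel types:

use core::alloc::Layout;

#[repr(C)]
struct CBase { refcount: u32 }              // stand-in for the C struct

#[repr(C)]
struct Wrapper { base: CBase, extra: u64 }  // stand-in for the Rust wrapper

// Evaluated at compile time, like GpuVmBo::ALLOC_FN above: a custom allocator is
// only needed when the wrapper's layout differs from the C struct's layout.
const NEEDS_CUSTOM_ALLOC: bool = {
    let base = Layout::new::<CBase>();
    let wrapper = Layout::new::<Wrapper>();
    base.size() != wrapper.size() || base.align() != wrapper.align()
};

fn main() {
    // Wrapper adds a trailing field, so an allocation sized for CBase alone would
    // be too small and the custom allocation path would be selected.
    assert!(NEEDS_CUSTOM_ALLOC);
}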
* Re: [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc()
2025-11-28 14:14 ` [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc() Alice Ryhl
@ 2025-11-28 14:24 ` Boris Brezillon
2025-12-01 9:55 ` Alice Ryhl
2025-12-19 12:15 ` Danilo Krummrich
1 sibling, 1 reply; 24+ messages in thread
From: Boris Brezillon @ 2025-11-28 14:24 UTC (permalink / raw)
To: Alice Ryhl
Cc: Danilo Krummrich, Daniel Almeida, Matthew Brost,
Thomas Hellström, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Steven Price,
Liviu Dudau, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Frank Binns, Matt Coster, Rob Clark,
Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
Marijn Suijten, Lyude Paul, Lucas De Marchi, Rodrigo Vivi,
Sumit Semwal, Christian König, dri-devel, linux-kernel,
rust-for-linux, linux-arm-msm, freedreno, nouveau, intel-xe,
linux-media, linaro-mm-sig
On Fri, 28 Nov 2025 14:14:15 +0000
Alice Ryhl <aliceryhl@google.com> wrote:
> Calling drm_gpuvm_bo_obtain_prealloc() in immediate mode may result in
> a call to ops->vm_bo_free(vm_bo) while holding the
> GEMs gpuva mutex. This is a problem if ops->vm_bo_free(vm_bo) performs
> any operations that are not safe in the fence signalling critical path,
> and it turns out that Panthor (the only current user of the method)
> calls drm_gem_shmem_unpin() which takes a resv lock internally.
>
> This constitutes both a violation of fence signalling safety and a lock
> inversion. To fix this, we modify the method to internally take the GEMs
> gpuva mutex so that the mutex can be unlocked before freeing the
> preallocated vm_bo.
>
> Note that this modification introduces a requirement that the driver
> uses immediate mode to call drm_gpuvm_bo_obtain_prealloc() as it would
> otherwise take the wrong lock.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Should we add a Fixes tag?
> ---
> drivers/gpu/drm/drm_gpuvm.c | 58 ++++++++++++++++++++++-------------
> drivers/gpu/drm/panthor/panthor_mmu.c | 10 ------
> 2 files changed, 37 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index 936e6c1a60c16ed5a6898546bf99e23a74f6b58b..f08a5cc1d611f971862c1272987e5ecd6d97c163 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -1601,14 +1601,37 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create);
>
> +static void
> +drm_gpuvm_bo_destroy_not_in_lists(struct drm_gpuvm_bo *vm_bo)
> +{
> + struct drm_gpuvm *gpuvm = vm_bo->vm;
> + const struct drm_gpuvm_ops *ops = gpuvm->ops;
> + struct drm_gem_object *obj = vm_bo->obj;
> +
> + if (ops && ops->vm_bo_free)
> + ops->vm_bo_free(vm_bo);
> + else
> + kfree(vm_bo);
> +
> + drm_gpuvm_put(gpuvm);
> + drm_gem_object_put(obj);
> +}
> +
> +static void
> +drm_gpuvm_bo_destroy_not_in_lists_kref(struct kref *kref)
> +{
> + struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> + kref);
> +
> + drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
> +}
> +
> static void
> drm_gpuvm_bo_destroy(struct kref *kref)
> {
> struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> kref);
> struct drm_gpuvm *gpuvm = vm_bo->vm;
> - const struct drm_gpuvm_ops *ops = gpuvm->ops;
> - struct drm_gem_object *obj = vm_bo->obj;
> bool lock = !drm_gpuvm_resv_protected(gpuvm);
>
> if (!lock)
> @@ -1617,16 +1640,10 @@ drm_gpuvm_bo_destroy(struct kref *kref)
> drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
> drm_gpuvm_bo_list_del(vm_bo, evict, lock);
>
> - drm_gem_gpuva_assert_lock_held(gpuvm, obj);
> + drm_gem_gpuva_assert_lock_held(gpuvm, vm_bo->obj);
> list_del(&vm_bo->list.entry.gem);
>
> - if (ops && ops->vm_bo_free)
> - ops->vm_bo_free(vm_bo);
> - else
> - kfree(vm_bo);
> -
> - drm_gpuvm_put(gpuvm);
> - drm_gem_object_put(obj);
> + drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
> }
>
> /**
> @@ -1744,9 +1761,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put_deferred);
> void
> drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
> {
> - const struct drm_gpuvm_ops *ops = gpuvm->ops;
> struct drm_gpuvm_bo *vm_bo;
> - struct drm_gem_object *obj;
> struct llist_node *bo_defer;
>
> bo_defer = llist_del_all(&gpuvm->bo_defer);
> @@ -1765,14 +1780,7 @@ drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
> while (bo_defer) {
> vm_bo = llist_entry(bo_defer, struct drm_gpuvm_bo, list.entry.bo_defer);
> bo_defer = bo_defer->next;
> - obj = vm_bo->obj;
> - if (ops && ops->vm_bo_free)
> - ops->vm_bo_free(vm_bo);
> - else
> - kfree(vm_bo);
> -
> - drm_gpuvm_put(gpuvm);
> - drm_gem_object_put(obj);
> + drm_gpuvm_bo_destroy_not_in_lists(vm_bo);
> }
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup);
> @@ -1860,6 +1868,9 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
> * count is decreased. If not found @__vm_bo is returned without further
> * increase of the reference count.
> *
> + * The provided @__vm_bo must not already be in the gpuva, evict, or extobj
> + * lists prior to calling this method.
> + *
> * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
> *
> * Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing
> @@ -1872,14 +1883,19 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo)
> struct drm_gem_object *obj = __vm_bo->obj;
> struct drm_gpuvm_bo *vm_bo;
>
> + drm_WARN_ON(gpuvm->drm, !drm_gpuvm_immediate_mode(gpuvm));
> +
> + mutex_lock(&obj->gpuva.lock);
> vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
> if (vm_bo) {
> - drm_gpuvm_bo_put(__vm_bo);
> + mutex_unlock(&obj->gpuva.lock);
> + kref_put(&__vm_bo->kref, drm_gpuvm_bo_destroy_not_in_lists_kref);
> return vm_bo;
> }
>
> drm_gem_gpuva_assert_lock_held(gpuvm, obj);
> list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list);
> + mutex_unlock(&obj->gpuva.lock);
>
> return __vm_bo;
> }
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 9f5f4ddf291024121f3fd5644f2fdeba354fa67c..be8811a70e1a3adec87ca4a85cad7c838f54bebf 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -1224,17 +1224,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> goto err_cleanup;
> }
>
> - /* drm_gpuvm_bo_obtain_prealloc() will call drm_gpuvm_bo_put() on our
> - * pre-allocated BO if the <BO,VM> association exists. Given we
> - * only have one ref on preallocated_vm_bo, drm_gpuvm_bo_destroy() will
> - * be called immediately, and we have to hold the VM resv lock when
> - * calling this function.
> - */
> - dma_resv_lock(panthor_vm_resv(vm), NULL);
> - mutex_lock(&bo->base.base.gpuva.lock);
> op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
> - mutex_unlock(&bo->base.base.gpuva.lock);
> - dma_resv_unlock(panthor_vm_resv(vm));
>
> op_ctx->map.bo_offset = offset;
>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
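To make the ordering problem described in the commit message concrete, a hedged sketch of
the old calling convention, modeled on the removed Panthor lines quoted above (`obj`,
`preallocated_vm_bo` and `vm_bo` are illustrative locals, not code from any driver):

/* Before this patch the caller held the GEM's gpuva mutex across the call: */
mutex_lock(&obj->gpuva.lock);
vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
/*
 * If the <BO,VM> pair already existed, the put of the prealloc could call
 * ->vm_bo_free() right here, still under gpuva.lock; a free callback that
 * reaches drm_gem_shmem_unpin() -> dma_resv_lock() then inverts the lock
 * order and sleeps in the fence signalling critical path.
 */
mutex_unlock(&obj->gpuva.lock);

/* After this patch the helper takes gpuva.lock itself and frees the
 * prealloc only after dropping it, so the caller simply does: */
vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);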
* Re: [PATCH 2/4] drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode
2025-11-28 14:14 ` [PATCH 2/4] drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode Alice Ryhl
@ 2025-11-28 14:25 ` Boris Brezillon
2025-12-19 12:25 ` Danilo Krummrich
1 sibling, 0 replies; 24+ messages in thread
From: Boris Brezillon @ 2025-11-28 14:25 UTC (permalink / raw)
To: Alice Ryhl
Cc: Danilo Krummrich, Daniel Almeida, Matthew Brost,
Thomas Hellström, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Steven Price,
Liviu Dudau, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Frank Binns, Matt Coster, Rob Clark,
Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
Marijn Suijten, Lyude Paul, Lucas De Marchi, Rodrigo Vivi,
Sumit Semwal, Christian König, dri-devel, linux-kernel,
rust-for-linux, linux-arm-msm, freedreno, nouveau, intel-xe,
linux-media, linaro-mm-sig
On Fri, 28 Nov 2025 14:14:16 +0000
Alice Ryhl <aliceryhl@google.com> wrote:
> In the previous commit we updated drm_gpuvm_bo_obtain_prealloc() to take
> locks internally, which means that it's only usable in immediate mode.
> In this commit, we notice that drm_gpuvm_bo_obtain() requires you to use
> staged mode. This means that we now have one variant of obtain for each
> mode you might use gpuvm in.
>
> To reflect this, we add a drm_WARN_ON() that fires when the function is
> used in immediate mode, and to make the distinction clearer we rename the
> method with a _locked() suffix so that it's clear that the caller is
> required to take the locks.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 16 +++++++++++++---
The gpuvm changes are
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> drivers/gpu/drm/imagination/pvr_vm.c | 2 +-
> drivers/gpu/drm/msm/msm_gem.h | 2 +-
> drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
> drivers/gpu/drm/xe/xe_vm.c | 4 ++--
> include/drm/drm_gpuvm.h | 4 ++--
> 7 files changed, 21 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index f08a5cc1d611f971862c1272987e5ecd6d97c163..9cd06c7600dc32ceee0f0beb5e3daf31698a66b3 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -1832,16 +1832,26 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find);
> * count of the &drm_gpuvm_bo accordingly. If not found, allocates a new
> * &drm_gpuvm_bo.
> *
> + * Requires the lock for the GEMs gpuva list.
> + *
> * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
> *
> * Returns: a pointer to the &drm_gpuvm_bo on success, an ERR_PTR on failure
> */
> struct drm_gpuvm_bo *
> -drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
> - struct drm_gem_object *obj)
> +drm_gpuvm_bo_obtain_locked(struct drm_gpuvm *gpuvm,
> + struct drm_gem_object *obj)
> {
> struct drm_gpuvm_bo *vm_bo;
>
> + /*
> + * In immediate mode this would require the caller to hold the GEMs
> + * gpuva mutex, but it's not okay to allocate while holding that lock,
> + * and this method allocates. Immediate mode drivers should use
> + * drm_gpuvm_bo_obtain_prealloc() instead.
> + */
> + drm_WARN_ON(gpuvm->drm, drm_gpuvm_immediate_mode(gpuvm));
> +
> vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
> if (vm_bo)
> return vm_bo;
> @@ -1855,7 +1865,7 @@ drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
>
> return vm_bo;
> }
> -EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_locked);
>
> /**
> * drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo
> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
> index 3d97990170bf6b1341116c5c8b9d01421944eda4..30ff9b84eb14f2455003e76108de6d489a13f61a 100644
> --- a/drivers/gpu/drm/imagination/pvr_vm.c
> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
> @@ -255,7 +255,7 @@ pvr_vm_bind_op_map_init(struct pvr_vm_bind_op *bind_op,
> bind_op->type = PVR_VM_BIND_TYPE_MAP;
>
> dma_resv_lock(obj->resv, NULL);
> - bind_op->gpuvm_bo = drm_gpuvm_bo_obtain(&vm_ctx->gpuvm_mgr, obj);
> + bind_op->gpuvm_bo = drm_gpuvm_bo_obtain_locked(&vm_ctx->gpuvm_mgr, obj);
> dma_resv_unlock(obj->resv);
> if (IS_ERR(bind_op->gpuvm_bo))
> return PTR_ERR(bind_op->gpuvm_bo);
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index a4cf31853c5008e171c3ad72cde1004c60fe5212..26dfe3d22e3e847f7e63174481d03f72878a8ced 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -60,7 +60,7 @@ struct msm_gem_vm_log_entry {
> * embedded in any larger driver structure. The GEM object holds a list of
> * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma
> * holds a reference to the vm_bo, and drops it when the vma is unlinked.
> - * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an
> + * So we just need to call drm_gpuvm_bo_obtain_locked() to return a ref to an
> * existing vm_bo, or create a new one. Once the vma is linked, the ref
> * to the vm_bo can be dropped (since the vma is holding one).
> */
> diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
> index 8316af1723c227f919594446c3721e1a948cbc9e..239b6168a26e636b511187b4993945d1565d149f 100644
> --- a/drivers/gpu/drm/msm/msm_gem_vma.c
> +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
> @@ -413,7 +413,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
> if (!obj)
> return &vma->base;
>
> - vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
> + vm_bo = drm_gpuvm_bo_obtain_locked(&vm->base, obj);
> if (IS_ERR(vm_bo)) {
> ret = PTR_ERR(vm_bo);
> goto err_va_remove;
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index 79eefdfd08a2678fedf69503ddf7e9e17ed14c6f..d8888bd29cccef4b8dad9eff2bf6e2b1fd1a7e4d 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -1207,7 +1207,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
> return -ENOENT;
>
> dma_resv_lock(obj->resv, NULL);
> - op->vm_bo = drm_gpuvm_bo_obtain(&uvmm->base, obj);
> + op->vm_bo = drm_gpuvm_bo_obtain_locked(&uvmm->base, obj);
> dma_resv_unlock(obj->resv);
> if (IS_ERR(op->vm_bo))
> return PTR_ERR(op->vm_bo);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index f602b874e0547591d9008333c18f3de0634c48c7..de52d01b0921cc8ac619deeed47b578e0ae69257 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1004,7 +1004,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>
> xe_bo_assert_held(bo);
>
> - vm_bo = drm_gpuvm_bo_obtain(vma->gpuva.vm, &bo->ttm.base);
> + vm_bo = drm_gpuvm_bo_obtain_locked(vma->gpuva.vm, &bo->ttm.base);
> if (IS_ERR(vm_bo)) {
> xe_vma_free(vma);
> return ERR_CAST(vm_bo);
> @@ -2249,7 +2249,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> if (err)
> return ERR_PTR(err);
>
> - vm_bo = drm_gpuvm_bo_obtain(&vm->gpuvm, obj);
> + vm_bo = drm_gpuvm_bo_obtain_locked(&vm->gpuvm, obj);
> if (IS_ERR(vm_bo)) {
> xe_bo_unlock(bo);
> return ERR_CAST(vm_bo);
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index fdfc575b260360611ff8ce16c327acede787929f..0d3fc1f6cac9966a42f3bc82b0b491bfefaf5b96 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -736,8 +736,8 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
> struct drm_gem_object *obj);
>
> struct drm_gpuvm_bo *
> -drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
> - struct drm_gem_object *obj);
> +drm_gpuvm_bo_obtain_locked(struct drm_gpuvm *gpuvm,
> + struct drm_gem_object *obj);
> struct drm_gpuvm_bo *
> drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *vm_bo);
>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
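For staged-mode drivers the calling convention is unchanged apart from the name; a minimal
sketch of the pattern kept by the pvr/nouveau hunks above (error handling reduced to the
essentials, `gpuvm` and `obj` assumed to be in scope):

dma_resv_lock(obj->resv, NULL);
vm_bo = drm_gpuvm_bo_obtain_locked(gpuvm, obj);
dma_resv_unlock(obj->resv);
if (IS_ERR(vm_bo))
        return PTR_ERR(vm_bo);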
* Re: [PATCH 3/4] drm/gpuvm: use const for drm_gpuva_op_* ptrs
2025-11-28 14:14 ` [PATCH 3/4] drm/gpuvm: use const for drm_gpuva_op_* ptrs Alice Ryhl
@ 2025-11-28 14:27 ` Boris Brezillon
0 siblings, 0 replies; 24+ messages in thread
From: Boris Brezillon @ 2025-11-28 14:27 UTC (permalink / raw)
To: Alice Ryhl
Cc: Danilo Krummrich, Daniel Almeida, Matthew Brost,
Thomas Hellström, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Steven Price,
Liviu Dudau, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Frank Binns, Matt Coster, Rob Clark,
Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
Marijn Suijten, Lyude Paul, Lucas De Marchi, Rodrigo Vivi,
Sumit Semwal, Christian König, dri-devel, linux-kernel,
rust-for-linux, linux-arm-msm, freedreno, nouveau, intel-xe,
linux-media, linaro-mm-sig
On Fri, 28 Nov 2025 14:14:17 +0000
Alice Ryhl <aliceryhl@google.com> wrote:
> These methods just read the values stored in the op pointers without
> modifying them, so it is appropriate to use const ptrs here.
>
> This allows us to avoid const -> mut pointer casts in Rust.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 6 +++---
> include/drm/drm_gpuvm.h | 8 ++++----
> 2 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index 9cd06c7600dc32ceee0f0beb5e3daf31698a66b3..e06b58aabb8ea6ebd92c625583ae2852c9d2caf1 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -2283,7 +2283,7 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_interval_empty);
> void
> drm_gpuva_map(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *va,
> - struct drm_gpuva_op_map *op)
> + const struct drm_gpuva_op_map *op)
> {
> drm_gpuva_init_from_op(va, op);
> drm_gpuva_insert(gpuvm, va);
> @@ -2303,7 +2303,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_map);
> void
> drm_gpuva_remap(struct drm_gpuva *prev,
> struct drm_gpuva *next,
> - struct drm_gpuva_op_remap *op)
> + const struct drm_gpuva_op_remap *op)
> {
> struct drm_gpuva *va = op->unmap->va;
> struct drm_gpuvm *gpuvm = va->vm;
> @@ -2330,7 +2330,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remap);
> * Removes the &drm_gpuva associated with the &drm_gpuva_op_unmap.
> */
> void
> -drm_gpuva_unmap(struct drm_gpuva_op_unmap *op)
> +drm_gpuva_unmap(const struct drm_gpuva_op_unmap *op)
> {
> drm_gpuva_remove(op->va);
> }
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 0d3fc1f6cac9966a42f3bc82b0b491bfefaf5b96..655bd9104ffb24170fca14dfa034ee79f5400930 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1121,7 +1121,7 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
> struct drm_gpuva_ops *ops);
>
> static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
> - struct drm_gpuva_op_map *op)
> + const struct drm_gpuva_op_map *op)
> {
> va->va.addr = op->va.addr;
> va->va.range = op->va.range;
> @@ -1265,13 +1265,13 @@ int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
>
> void drm_gpuva_map(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *va,
> - struct drm_gpuva_op_map *op);
> + const struct drm_gpuva_op_map *op);
>
> void drm_gpuva_remap(struct drm_gpuva *prev,
> struct drm_gpuva *next,
> - struct drm_gpuva_op_remap *op);
> + const struct drm_gpuva_op_remap *op);
>
> -void drm_gpuva_unmap(struct drm_gpuva_op_unmap *op);
> +void drm_gpuva_unmap(const struct drm_gpuva_op_unmap *op);
>
> /**
> * drm_gpuva_op_remap_to_unmap_range() - Helper to get the start and range of
>
^ permalink raw reply [flat|nested] 24+ messages in thread
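To show the Rust-side benefit mentioned in the commit message: with the const parameter the
binding takes a `*const` pointer, so a shared reference coerces directly and the old
`as *const _ as *mut _` cast goes away. A hedged sketch (the exact bindgen signature is an
assumption based on this patch, not quoted from the tree):

// Before the change, the binding expected `*mut drm_gpuva_op_unmap`:
//     unsafe { bindings::drm_gpuva_unmap(op as *const _ as *mut _) };
//
// After the change a shared reference is enough:
unsafe fn unmap(op: &bindings::drm_gpuva_op_unmap) {
    // SAFETY: the caller upholds drm_gpuva_unmap()'s requirements.
    unsafe { bindings::drm_gpuva_unmap(op) };
}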
* ✗ CI.checkpatch: warning for Rust GPUVM support
2025-11-28 14:14 [PATCH 0/4] Rust GPUVM support Alice Ryhl
` (3 preceding siblings ...)
2025-11-28 14:14 ` [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction Alice Ryhl
@ 2025-11-28 15:38 ` Patchwork
2025-11-28 15:40 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
8 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-11-28 15:38 UTC (permalink / raw)
To: Alice Ryhl; +Cc: intel-xe
== Series Details ==
Series: Rust GPUVM support
URL : https://patchwork.freedesktop.org/series/158211/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
2de9a3901bc28757c7906b454717b64e2a214021
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit dfb7567469d0fd0201ffefc9bfc3cbf27a5da0ba
Author: Alice Ryhl <aliceryhl@google.com>
Date: Fri Nov 28 14:14:18 2025 +0000
rust: drm: add GPUVM immediate mode abstraction
Add a GPUVM abstraction to be used by Rust GPU drivers.
GPUVM keeps track of a GPU's virtual address (VA) space and manages the
corresponding virtual mappings represented by "GPU VA" objects. It also
keeps track of the gem::Object<T> used to back the mappings through
GpuVmBo<T>.
This abstraction is only usable by drivers that wish to use GPUVM in
immediate mode. This allows us to build the locking scheme into the API
design. It means that the GEM mutex is used for the GEM gpuva list, and
that the resv lock is used for the extobj list. The evicted list is not
yet used in this version.
This abstraction provides a special handle called the GpuVmCore, which
is a wrapper around ARef<GpuVm> that provides access to the interval
tree. Generally, all changes to the address space requires mutable
access to this unique handle.
Some of the safety comments are still somewhat WIP, but I think the API
should be sound as-is.
Co-developed-by: Asahi Lina <lina+kernel@asahilina.net>
Signed-off-by: Asahi Lina <lina+kernel@asahilina.net>
Co-developed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
+ /mt/dim checkpatch f0525ac2ab2d085a083fe1671aebb7b4dfc8ba67 drm-intel
1be72af2003c drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc()
edf793fabae6 drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode
3cd276e58bcd drm/gpuvm: use const for drm_gpuva_op_* ptrs
dfb7567469d0 rust: drm: add GPUVM immediate mode abstraction
-:66: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#66:
new file mode 100644
-:968: WARNING:LONG_LINE_COMMENT: line length of 114 exceeds 100 columns
#968: FILE: rust/kernel/drm/gpuvm/sm_ops.rs:437:
+ // SAFETY: If we reach `sm_step_unmap` then we were called from `sm_map` or `sm_unmap` which passes either
-:987: WARNING:LONG_LINE: line length of 114 exceeds 100 columns
#987: FILE: rust/kernel/drm/gpuvm/sm_ops.rs:456:
+ // SAFETY: If we reach `sm_step_remap` then we were called from `sm_map` or `sm_unmap` which passes either
total: 0 errors, 3 warnings, 0 checks, 1302 lines checked
^ permalink raw reply [flat|nested] 24+ messages in thread
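The "unique handle" idea described in the commit message quoted above can be pictured with
a generic, standalone sketch; `Vm` and `VmCore` here are illustrative stand-ins, not the
API added by this series:

use std::sync::Arc;

struct Vm {} // refcounted state that read-only users share via Arc<Vm>

// The single mutation handle: it wraps an Arc<Vm> but is deliberately not
// Clone, so only one such handle exists per VM.
struct VmCore(Arc<Vm>);

impl VmCore {
    // Changes to the address space take `&mut self`, so they can only be
    // made through the one VmCore handle.
    fn modify_va_space(&mut self) { /* ... */ }
}

fn main() {
    let mut core = VmCore(Arc::new(Vm {}));
    core.modify_va_space();
}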
* ✓ CI.KUnit: success for Rust GPUVM support
2025-11-28 14:14 [PATCH 0/4] Rust GPUVM support Alice Ryhl
` (4 preceding siblings ...)
2025-11-28 15:38 ` ✗ CI.checkpatch: warning for Rust GPUVM support Patchwork
@ 2025-11-28 15:40 ` Patchwork
2025-11-28 15:55 ` ✗ CI.checksparse: warning " Patchwork
` (2 subsequent siblings)
8 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-11-28 15:40 UTC (permalink / raw)
To: Alice Ryhl; +Cc: intel-xe
== Series Details ==
Series: Rust GPUVM support
URL : https://patchwork.freedesktop.org/series/158211/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[15:38:54] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:38:58] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[15:39:29] Starting KUnit Kernel (1/1)...
[15:39:29] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:39:30] ================== guc_buf (11 subtests) ===================
[15:39:30] [PASSED] test_smallest
[15:39:30] [PASSED] test_largest
[15:39:30] [PASSED] test_granular
[15:39:30] [PASSED] test_unique
[15:39:30] [PASSED] test_overlap
[15:39:30] [PASSED] test_reusable
[15:39:30] [PASSED] test_too_big
[15:39:30] [PASSED] test_flush
[15:39:30] [PASSED] test_lookup
[15:39:30] [PASSED] test_data
[15:39:30] [PASSED] test_class
[15:39:30] ===================== [PASSED] guc_buf =====================
[15:39:30] =================== guc_dbm (7 subtests) ===================
[15:39:30] [PASSED] test_empty
[15:39:30] [PASSED] test_default
[15:39:30] ======================== test_size ========================
[15:39:30] [PASSED] 4
[15:39:30] [PASSED] 8
[15:39:30] [PASSED] 32
[15:39:30] [PASSED] 256
[15:39:30] ==================== [PASSED] test_size ====================
[15:39:30] ======================= test_reuse ========================
[15:39:30] [PASSED] 4
[15:39:30] [PASSED] 8
[15:39:30] [PASSED] 32
[15:39:30] [PASSED] 256
[15:39:30] =================== [PASSED] test_reuse ====================
[15:39:30] =================== test_range_overlap ====================
[15:39:30] [PASSED] 4
[15:39:30] [PASSED] 8
[15:39:30] [PASSED] 32
[15:39:30] [PASSED] 256
[15:39:30] =============== [PASSED] test_range_overlap ================
[15:39:30] =================== test_range_compact ====================
[15:39:30] [PASSED] 4
[15:39:30] [PASSED] 8
[15:39:30] [PASSED] 32
[15:39:30] [PASSED] 256
[15:39:30] =============== [PASSED] test_range_compact ================
[15:39:30] ==================== test_range_spare =====================
[15:39:30] [PASSED] 4
[15:39:30] [PASSED] 8
[15:39:30] [PASSED] 32
[15:39:30] [PASSED] 256
[15:39:30] ================ [PASSED] test_range_spare =================
[15:39:30] ===================== [PASSED] guc_dbm =====================
[15:39:30] =================== guc_idm (6 subtests) ===================
[15:39:30] [PASSED] bad_init
[15:39:30] [PASSED] no_init
[15:39:30] [PASSED] init_fini
[15:39:30] [PASSED] check_used
[15:39:30] [PASSED] check_quota
[15:39:30] [PASSED] check_all
[15:39:30] ===================== [PASSED] guc_idm =====================
[15:39:30] ================== no_relay (3 subtests) ===================
[15:39:30] [PASSED] xe_drops_guc2pf_if_not_ready
[15:39:30] [PASSED] xe_drops_guc2vf_if_not_ready
[15:39:30] [PASSED] xe_rejects_send_if_not_ready
[15:39:30] ==================== [PASSED] no_relay =====================
[15:39:30] ================== pf_relay (14 subtests) ==================
[15:39:30] [PASSED] pf_rejects_guc2pf_too_short
[15:39:30] [PASSED] pf_rejects_guc2pf_too_long
[15:39:30] [PASSED] pf_rejects_guc2pf_no_payload
[15:39:30] [PASSED] pf_fails_no_payload
[15:39:30] [PASSED] pf_fails_bad_origin
[15:39:30] [PASSED] pf_fails_bad_type
[15:39:30] [PASSED] pf_txn_reports_error
[15:39:30] [PASSED] pf_txn_sends_pf2guc
[15:39:30] [PASSED] pf_sends_pf2guc
[15:39:30] [SKIPPED] pf_loopback_nop
[15:39:30] [SKIPPED] pf_loopback_echo
[15:39:30] [SKIPPED] pf_loopback_fail
[15:39:30] [SKIPPED] pf_loopback_busy
[15:39:30] [SKIPPED] pf_loopback_retry
[15:39:30] ==================== [PASSED] pf_relay =====================
[15:39:30] ================== vf_relay (3 subtests) ===================
[15:39:30] [PASSED] vf_rejects_guc2vf_too_short
[15:39:30] [PASSED] vf_rejects_guc2vf_too_long
[15:39:30] [PASSED] vf_rejects_guc2vf_no_payload
[15:39:30] ==================== [PASSED] vf_relay =====================
[15:39:30] ================ pf_gt_config (6 subtests) =================
[15:39:30] [PASSED] fair_contexts_1vf
[15:39:30] [PASSED] fair_doorbells_1vf
[15:39:30] [PASSED] fair_ggtt_1vf
[15:39:30] ====================== fair_contexts ======================
[15:39:30] [PASSED] 1 VF
[15:39:30] [PASSED] 2 VFs
[15:39:30] [PASSED] 3 VFs
[15:39:30] [PASSED] 4 VFs
[15:39:30] [PASSED] 5 VFs
[15:39:30] [PASSED] 6 VFs
[15:39:30] [PASSED] 7 VFs
[15:39:30] [PASSED] 8 VFs
[15:39:30] [PASSED] 9 VFs
[15:39:30] [PASSED] 10 VFs
[15:39:30] [PASSED] 11 VFs
[15:39:30] [PASSED] 12 VFs
[15:39:30] [PASSED] 13 VFs
[15:39:30] [PASSED] 14 VFs
[15:39:30] [PASSED] 15 VFs
[15:39:30] [PASSED] 16 VFs
[15:39:30] [PASSED] 17 VFs
[15:39:30] [PASSED] 18 VFs
[15:39:30] [PASSED] 19 VFs
[15:39:30] [PASSED] 20 VFs
[15:39:30] [PASSED] 21 VFs
[15:39:30] [PASSED] 22 VFs
[15:39:30] [PASSED] 23 VFs
[15:39:30] [PASSED] 24 VFs
[15:39:30] [PASSED] 25 VFs
[15:39:30] [PASSED] 26 VFs
[15:39:30] [PASSED] 27 VFs
[15:39:30] [PASSED] 28 VFs
[15:39:30] [PASSED] 29 VFs
[15:39:30] [PASSED] 30 VFs
[15:39:30] [PASSED] 31 VFs
[15:39:30] [PASSED] 32 VFs
[15:39:30] [PASSED] 33 VFs
[15:39:30] [PASSED] 34 VFs
[15:39:30] [PASSED] 35 VFs
[15:39:30] [PASSED] 36 VFs
[15:39:30] [PASSED] 37 VFs
[15:39:30] [PASSED] 38 VFs
[15:39:30] [PASSED] 39 VFs
[15:39:30] [PASSED] 40 VFs
[15:39:30] [PASSED] 41 VFs
[15:39:30] [PASSED] 42 VFs
[15:39:30] [PASSED] 43 VFs
[15:39:30] [PASSED] 44 VFs
[15:39:30] [PASSED] 45 VFs
[15:39:30] [PASSED] 46 VFs
[15:39:30] [PASSED] 47 VFs
[15:39:30] [PASSED] 48 VFs
[15:39:30] [PASSED] 49 VFs
[15:39:30] [PASSED] 50 VFs
[15:39:30] [PASSED] 51 VFs
[15:39:30] [PASSED] 52 VFs
[15:39:30] [PASSED] 53 VFs
[15:39:30] [PASSED] 54 VFs
[15:39:30] [PASSED] 55 VFs
[15:39:30] [PASSED] 56 VFs
[15:39:30] [PASSED] 57 VFs
[15:39:30] [PASSED] 58 VFs
[15:39:30] [PASSED] 59 VFs
[15:39:30] [PASSED] 60 VFs
[15:39:30] [PASSED] 61 VFs
[15:39:30] [PASSED] 62 VFs
[15:39:30] [PASSED] 63 VFs
[15:39:30] ================== [PASSED] fair_contexts ==================
[15:39:30] ===================== fair_doorbells ======================
[15:39:30] [PASSED] 1 VF
[15:39:30] [PASSED] 2 VFs
[15:39:30] [PASSED] 3 VFs
[15:39:30] [PASSED] 4 VFs
[15:39:30] [PASSED] 5 VFs
[15:39:30] [PASSED] 6 VFs
[15:39:30] [PASSED] 7 VFs
[15:39:30] [PASSED] 8 VFs
[15:39:30] [PASSED] 9 VFs
[15:39:30] [PASSED] 10 VFs
[15:39:30] [PASSED] 11 VFs
[15:39:30] [PASSED] 12 VFs
[15:39:30] [PASSED] 13 VFs
[15:39:30] [PASSED] 14 VFs
[15:39:30] [PASSED] 15 VFs
[15:39:30] [PASSED] 16 VFs
[15:39:30] [PASSED] 17 VFs
[15:39:30] [PASSED] 18 VFs
[15:39:30] [PASSED] 19 VFs
[15:39:30] [PASSED] 20 VFs
[15:39:30] [PASSED] 21 VFs
[15:39:30] [PASSED] 22 VFs
[15:39:30] [PASSED] 23 VFs
[15:39:30] [PASSED] 24 VFs
[15:39:30] [PASSED] 25 VFs
[15:39:30] [PASSED] 26 VFs
[15:39:30] [PASSED] 27 VFs
[15:39:30] [PASSED] 28 VFs
[15:39:30] [PASSED] 29 VFs
[15:39:30] [PASSED] 30 VFs
[15:39:30] [PASSED] 31 VFs
[15:39:30] [PASSED] 32 VFs
[15:39:30] [PASSED] 33 VFs
[15:39:30] [PASSED] 34 VFs
[15:39:30] [PASSED] 35 VFs
[15:39:30] [PASSED] 36 VFs
[15:39:30] [PASSED] 37 VFs
[15:39:30] [PASSED] 38 VFs
[15:39:30] [PASSED] 39 VFs
[15:39:30] [PASSED] 40 VFs
[15:39:30] [PASSED] 41 VFs
[15:39:30] [PASSED] 42 VFs
[15:39:30] [PASSED] 43 VFs
[15:39:30] [PASSED] 44 VFs
[15:39:30] [PASSED] 45 VFs
[15:39:30] [PASSED] 46 VFs
[15:39:30] [PASSED] 47 VFs
[15:39:30] [PASSED] 48 VFs
[15:39:30] [PASSED] 49 VFs
[15:39:30] [PASSED] 50 VFs
[15:39:30] [PASSED] 51 VFs
[15:39:30] [PASSED] 52 VFs
[15:39:30] [PASSED] 53 VFs
[15:39:30] [PASSED] 54 VFs
[15:39:30] [PASSED] 55 VFs
[15:39:30] [PASSED] 56 VFs
[15:39:30] [PASSED] 57 VFs
[15:39:30] [PASSED] 58 VFs
[15:39:30] [PASSED] 59 VFs
[15:39:30] [PASSED] 60 VFs
[15:39:30] [PASSED] 61 VFs
[15:39:30] [PASSED] 62 VFs
[15:39:30] [PASSED] 63 VFs
[15:39:30] ================= [PASSED] fair_doorbells ==================
[15:39:30] ======================== fair_ggtt ========================
[15:39:30] [PASSED] 1 VF
[15:39:30] [PASSED] 2 VFs
[15:39:30] [PASSED] 3 VFs
[15:39:30] [PASSED] 4 VFs
[15:39:30] [PASSED] 5 VFs
[15:39:30] [PASSED] 6 VFs
[15:39:30] [PASSED] 7 VFs
[15:39:30] [PASSED] 8 VFs
[15:39:30] [PASSED] 9 VFs
[15:39:30] [PASSED] 10 VFs
[15:39:30] [PASSED] 11 VFs
[15:39:30] [PASSED] 12 VFs
[15:39:30] [PASSED] 13 VFs
[15:39:30] [PASSED] 14 VFs
[15:39:30] [PASSED] 15 VFs
[15:39:30] [PASSED] 16 VFs
[15:39:30] [PASSED] 17 VFs
[15:39:30] [PASSED] 18 VFs
[15:39:30] [PASSED] 19 VFs
[15:39:30] [PASSED] 20 VFs
[15:39:30] [PASSED] 21 VFs
[15:39:30] [PASSED] 22 VFs
[15:39:30] [PASSED] 23 VFs
[15:39:30] [PASSED] 24 VFs
[15:39:30] [PASSED] 25 VFs
[15:39:30] [PASSED] 26 VFs
[15:39:30] [PASSED] 27 VFs
[15:39:30] [PASSED] 28 VFs
[15:39:30] [PASSED] 29 VFs
[15:39:30] [PASSED] 30 VFs
[15:39:30] [PASSED] 31 VFs
[15:39:30] [PASSED] 32 VFs
[15:39:30] [PASSED] 33 VFs
[15:39:30] [PASSED] 34 VFs
[15:39:30] [PASSED] 35 VFs
[15:39:30] [PASSED] 36 VFs
[15:39:30] [PASSED] 37 VFs
[15:39:30] [PASSED] 38 VFs
[15:39:30] [PASSED] 39 VFs
[15:39:30] [PASSED] 40 VFs
[15:39:30] [PASSED] 41 VFs
[15:39:30] [PASSED] 42 VFs
[15:39:30] [PASSED] 43 VFs
[15:39:30] [PASSED] 44 VFs
[15:39:30] [PASSED] 45 VFs
[15:39:30] [PASSED] 46 VFs
[15:39:30] [PASSED] 47 VFs
[15:39:30] [PASSED] 48 VFs
[15:39:30] [PASSED] 49 VFs
[15:39:30] [PASSED] 50 VFs
[15:39:30] [PASSED] 51 VFs
[15:39:30] [PASSED] 52 VFs
[15:39:30] [PASSED] 53 VFs
[15:39:30] [PASSED] 54 VFs
[15:39:30] [PASSED] 55 VFs
[15:39:30] [PASSED] 56 VFs
[15:39:30] [PASSED] 57 VFs
[15:39:30] [PASSED] 58 VFs
[15:39:30] [PASSED] 59 VFs
[15:39:30] [PASSED] 60 VFs
[15:39:30] [PASSED] 61 VFs
[15:39:30] [PASSED] 62 VFs
[15:39:30] [PASSED] 63 VFs
[15:39:30] ==================== [PASSED] fair_ggtt ====================
[15:39:30] ================== [PASSED] pf_gt_config ===================
[15:39:30] ===================== lmtt (1 subtest) =====================
[15:39:30] ======================== test_ops =========================
[15:39:30] [PASSED] 2-level
[15:39:30] [PASSED] multi-level
[15:39:30] ==================== [PASSED] test_ops =====================
[15:39:30] ====================== [PASSED] lmtt =======================
[15:39:30] ================= pf_service (11 subtests) =================
[15:39:30] [PASSED] pf_negotiate_any
[15:39:30] [PASSED] pf_negotiate_base_match
[15:39:30] [PASSED] pf_negotiate_base_newer
[15:39:30] [PASSED] pf_negotiate_base_next
[15:39:30] [SKIPPED] pf_negotiate_base_older
[15:39:30] [PASSED] pf_negotiate_base_prev
[15:39:30] [PASSED] pf_negotiate_latest_match
[15:39:30] [PASSED] pf_negotiate_latest_newer
[15:39:30] [PASSED] pf_negotiate_latest_next
[15:39:30] [SKIPPED] pf_negotiate_latest_older
[15:39:30] [SKIPPED] pf_negotiate_latest_prev
[15:39:30] =================== [PASSED] pf_service ====================
[15:39:30] ================= xe_guc_g2g (2 subtests) ==================
[15:39:30] ============== xe_live_guc_g2g_kunit_default ==============
[15:39:30] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[15:39:30] ============== xe_live_guc_g2g_kunit_allmem ===============
[15:39:30] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[15:39:30] =================== [SKIPPED] xe_guc_g2g ===================
[15:39:30] =================== xe_mocs (2 subtests) ===================
[15:39:30] ================ xe_live_mocs_kernel_kunit ================
[15:39:30] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[15:39:30] ================ xe_live_mocs_reset_kunit =================
[15:39:30] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[15:39:30] ==================== [SKIPPED] xe_mocs =====================
[15:39:30] ================= xe_migrate (2 subtests) ==================
[15:39:30] ================= xe_migrate_sanity_kunit =================
[15:39:30] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[15:39:30] ================== xe_validate_ccs_kunit ==================
[15:39:30] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[15:39:30] =================== [SKIPPED] xe_migrate ===================
[15:39:30] ================== xe_dma_buf (1 subtest) ==================
[15:39:30] ==================== xe_dma_buf_kunit =====================
[15:39:30] ================ [SKIPPED] xe_dma_buf_kunit ================
[15:39:30] =================== [SKIPPED] xe_dma_buf ===================
[15:39:30] ================= xe_bo_shrink (1 subtest) =================
[15:39:30] =================== xe_bo_shrink_kunit ====================
[15:39:30] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[15:39:30] ================== [SKIPPED] xe_bo_shrink ==================
[15:39:30] ==================== xe_bo (2 subtests) ====================
[15:39:30] ================== xe_ccs_migrate_kunit ===================
[15:39:30] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[15:39:30] ==================== xe_bo_evict_kunit ====================
[15:39:30] =============== [SKIPPED] xe_bo_evict_kunit ================
[15:39:30] ===================== [SKIPPED] xe_bo ======================
[15:39:30] ==================== args (11 subtests) ====================
[15:39:30] [PASSED] count_args_test
[15:39:30] [PASSED] call_args_example
[15:39:30] [PASSED] call_args_test
[15:39:30] [PASSED] drop_first_arg_example
[15:39:30] [PASSED] drop_first_arg_test
[15:39:30] [PASSED] first_arg_example
[15:39:30] [PASSED] first_arg_test
[15:39:30] [PASSED] last_arg_example
[15:39:30] [PASSED] last_arg_test
[15:39:30] [PASSED] pick_arg_example
[15:39:30] [PASSED] sep_comma_example
[15:39:30] ====================== [PASSED] args =======================
[15:39:30] =================== xe_pci (3 subtests) ====================
[15:39:30] ==================== check_graphics_ip ====================
[15:39:30] [PASSED] 12.00 Xe_LP
[15:39:30] [PASSED] 12.10 Xe_LP+
[15:39:30] [PASSED] 12.55 Xe_HPG
[15:39:30] [PASSED] 12.60 Xe_HPC
[15:39:30] [PASSED] 12.70 Xe_LPG
[15:39:30] [PASSED] 12.71 Xe_LPG
[15:39:30] [PASSED] 12.74 Xe_LPG+
[15:39:30] [PASSED] 20.01 Xe2_HPG
[15:39:30] [PASSED] 20.02 Xe2_HPG
[15:39:30] [PASSED] 20.04 Xe2_LPG
[15:39:30] [PASSED] 30.00 Xe3_LPG
[15:39:30] [PASSED] 30.01 Xe3_LPG
[15:39:30] [PASSED] 30.03 Xe3_LPG
[15:39:30] [PASSED] 30.04 Xe3_LPG
[15:39:30] [PASSED] 30.05 Xe3_LPG
[15:39:30] [PASSED] 35.11 Xe3p_XPC
[15:39:30] ================ [PASSED] check_graphics_ip ================
[15:39:30] ===================== check_media_ip ======================
[15:39:30] [PASSED] 12.00 Xe_M
[15:39:30] [PASSED] 12.55 Xe_HPM
[15:39:30] [PASSED] 13.00 Xe_LPM+
[15:39:30] [PASSED] 13.01 Xe2_HPM
[15:39:30] [PASSED] 20.00 Xe2_LPM
[15:39:30] [PASSED] 30.00 Xe3_LPM
[15:39:30] [PASSED] 30.02 Xe3_LPM
[15:39:30] [PASSED] 35.00 Xe3p_LPM
[15:39:30] [PASSED] 35.03 Xe3p_HPM
[15:39:30] ================= [PASSED] check_media_ip ==================
[15:39:30] =================== check_platform_desc ===================
[15:39:30] [PASSED] 0x9A60 (TIGERLAKE)
[15:39:30] [PASSED] 0x9A68 (TIGERLAKE)
[15:39:30] [PASSED] 0x9A70 (TIGERLAKE)
[15:39:30] [PASSED] 0x9A40 (TIGERLAKE)
[15:39:30] [PASSED] 0x9A49 (TIGERLAKE)
[15:39:30] [PASSED] 0x9A59 (TIGERLAKE)
[15:39:30] [PASSED] 0x9A78 (TIGERLAKE)
[15:39:30] [PASSED] 0x9AC0 (TIGERLAKE)
[15:39:30] [PASSED] 0x9AC9 (TIGERLAKE)
[15:39:30] [PASSED] 0x9AD9 (TIGERLAKE)
[15:39:30] [PASSED] 0x9AF8 (TIGERLAKE)
[15:39:30] [PASSED] 0x4C80 (ROCKETLAKE)
[15:39:30] [PASSED] 0x4C8A (ROCKETLAKE)
[15:39:30] [PASSED] 0x4C8B (ROCKETLAKE)
[15:39:30] [PASSED] 0x4C8C (ROCKETLAKE)
[15:39:30] [PASSED] 0x4C90 (ROCKETLAKE)
[15:39:30] [PASSED] 0x4C9A (ROCKETLAKE)
[15:39:30] [PASSED] 0x4680 (ALDERLAKE_S)
[15:39:30] [PASSED] 0x4682 (ALDERLAKE_S)
[15:39:30] [PASSED] 0x4688 (ALDERLAKE_S)
[15:39:30] [PASSED] 0x468A (ALDERLAKE_S)
[15:39:30] [PASSED] 0x468B (ALDERLAKE_S)
[15:39:30] [PASSED] 0x4690 (ALDERLAKE_S)
[15:39:30] [PASSED] 0x4692 (ALDERLAKE_S)
[15:39:30] [PASSED] 0x4693 (ALDERLAKE_S)
[15:39:30] [PASSED] 0x46A0 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46A1 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46A2 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46A3 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46A6 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46A8 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46AA (ALDERLAKE_P)
[15:39:30] [PASSED] 0x462A (ALDERLAKE_P)
[15:39:30] [PASSED] 0x4626 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x4628 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46B0 (ALDERLAKE_P)
stty: 'standard input': Inappropriate ioctl for device
[15:39:30] [PASSED] 0x46B1 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46B2 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46B3 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46C0 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46C1 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46C2 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46C3 (ALDERLAKE_P)
[15:39:30] [PASSED] 0x46D0 (ALDERLAKE_N)
[15:39:30] [PASSED] 0x46D1 (ALDERLAKE_N)
[15:39:30] [PASSED] 0x46D2 (ALDERLAKE_N)
[15:39:30] [PASSED] 0x46D3 (ALDERLAKE_N)
[15:39:30] [PASSED] 0x46D4 (ALDERLAKE_N)
[15:39:30] [PASSED] 0xA721 (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7A1 (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7A9 (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7AC (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7AD (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA720 (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7A0 (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7A8 (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7AA (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA7AB (ALDERLAKE_P)
[15:39:30] [PASSED] 0xA780 (ALDERLAKE_S)
[15:39:30] [PASSED] 0xA781 (ALDERLAKE_S)
[15:39:30] [PASSED] 0xA782 (ALDERLAKE_S)
[15:39:30] [PASSED] 0xA783 (ALDERLAKE_S)
[15:39:30] [PASSED] 0xA788 (ALDERLAKE_S)
[15:39:30] [PASSED] 0xA789 (ALDERLAKE_S)
[15:39:30] [PASSED] 0xA78A (ALDERLAKE_S)
[15:39:30] [PASSED] 0xA78B (ALDERLAKE_S)
[15:39:30] [PASSED] 0x4905 (DG1)
[15:39:30] [PASSED] 0x4906 (DG1)
[15:39:30] [PASSED] 0x4907 (DG1)
[15:39:30] [PASSED] 0x4908 (DG1)
[15:39:30] [PASSED] 0x4909 (DG1)
[15:39:30] [PASSED] 0x56C0 (DG2)
[15:39:30] [PASSED] 0x56C2 (DG2)
[15:39:30] [PASSED] 0x56C1 (DG2)
[15:39:30] [PASSED] 0x7D51 (METEORLAKE)
[15:39:30] [PASSED] 0x7DD1 (METEORLAKE)
[15:39:30] [PASSED] 0x7D41 (METEORLAKE)
[15:39:30] [PASSED] 0x7D67 (METEORLAKE)
[15:39:30] [PASSED] 0xB640 (METEORLAKE)
[15:39:30] [PASSED] 0x56A0 (DG2)
[15:39:30] [PASSED] 0x56A1 (DG2)
[15:39:30] [PASSED] 0x56A2 (DG2)
[15:39:30] [PASSED] 0x56BE (DG2)
[15:39:30] [PASSED] 0x56BF (DG2)
[15:39:30] [PASSED] 0x5690 (DG2)
[15:39:30] [PASSED] 0x5691 (DG2)
[15:39:30] [PASSED] 0x5692 (DG2)
[15:39:30] [PASSED] 0x56A5 (DG2)
[15:39:30] [PASSED] 0x56A6 (DG2)
[15:39:30] [PASSED] 0x56B0 (DG2)
[15:39:30] [PASSED] 0x56B1 (DG2)
[15:39:30] [PASSED] 0x56BA (DG2)
[15:39:30] [PASSED] 0x56BB (DG2)
[15:39:30] [PASSED] 0x56BC (DG2)
[15:39:30] [PASSED] 0x56BD (DG2)
[15:39:30] [PASSED] 0x5693 (DG2)
[15:39:30] [PASSED] 0x5694 (DG2)
[15:39:30] [PASSED] 0x5695 (DG2)
[15:39:30] [PASSED] 0x56A3 (DG2)
[15:39:30] [PASSED] 0x56A4 (DG2)
[15:39:30] [PASSED] 0x56B2 (DG2)
[15:39:30] [PASSED] 0x56B3 (DG2)
[15:39:30] [PASSED] 0x5696 (DG2)
[15:39:30] [PASSED] 0x5697 (DG2)
[15:39:30] [PASSED] 0xB69 (PVC)
[15:39:30] [PASSED] 0xB6E (PVC)
[15:39:30] [PASSED] 0xBD4 (PVC)
[15:39:30] [PASSED] 0xBD5 (PVC)
[15:39:30] [PASSED] 0xBD6 (PVC)
[15:39:30] [PASSED] 0xBD7 (PVC)
[15:39:30] [PASSED] 0xBD8 (PVC)
[15:39:30] [PASSED] 0xBD9 (PVC)
[15:39:30] [PASSED] 0xBDA (PVC)
[15:39:30] [PASSED] 0xBDB (PVC)
[15:39:30] [PASSED] 0xBE0 (PVC)
[15:39:30] [PASSED] 0xBE1 (PVC)
[15:39:30] [PASSED] 0xBE5 (PVC)
[15:39:30] [PASSED] 0x7D40 (METEORLAKE)
[15:39:30] [PASSED] 0x7D45 (METEORLAKE)
[15:39:30] [PASSED] 0x7D55 (METEORLAKE)
[15:39:30] [PASSED] 0x7D60 (METEORLAKE)
[15:39:30] [PASSED] 0x7DD5 (METEORLAKE)
[15:39:30] [PASSED] 0x6420 (LUNARLAKE)
[15:39:30] [PASSED] 0x64A0 (LUNARLAKE)
[15:39:30] [PASSED] 0x64B0 (LUNARLAKE)
[15:39:30] [PASSED] 0xE202 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE209 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE20B (BATTLEMAGE)
[15:39:30] [PASSED] 0xE20C (BATTLEMAGE)
[15:39:30] [PASSED] 0xE20D (BATTLEMAGE)
[15:39:30] [PASSED] 0xE210 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE211 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE212 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE216 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE220 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE221 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE222 (BATTLEMAGE)
[15:39:30] [PASSED] 0xE223 (BATTLEMAGE)
[15:39:30] [PASSED] 0xB080 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB081 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB082 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB083 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB084 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB085 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB086 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB087 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB08F (PANTHERLAKE)
[15:39:30] [PASSED] 0xB090 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB0A0 (PANTHERLAKE)
[15:39:30] [PASSED] 0xB0B0 (PANTHERLAKE)
[15:39:30] [PASSED] 0xD740 (NOVALAKE_S)
[15:39:30] [PASSED] 0xD741 (NOVALAKE_S)
[15:39:30] [PASSED] 0xD742 (NOVALAKE_S)
[15:39:30] [PASSED] 0xD743 (NOVALAKE_S)
[15:39:30] [PASSED] 0xD744 (NOVALAKE_S)
[15:39:30] [PASSED] 0xD745 (NOVALAKE_S)
[15:39:30] [PASSED] 0x674C (CRESCENTISLAND)
[15:39:30] [PASSED] 0xFD80 (PANTHERLAKE)
[15:39:30] [PASSED] 0xFD81 (PANTHERLAKE)
[15:39:30] =============== [PASSED] check_platform_desc ===============
[15:39:30] ===================== [PASSED] xe_pci ======================
[15:39:30] =================== xe_rtp (2 subtests) ====================
[15:39:30] =============== xe_rtp_process_to_sr_tests ================
[15:39:30] [PASSED] coalesce-same-reg
[15:39:30] [PASSED] no-match-no-add
[15:39:30] [PASSED] match-or
[15:39:30] [PASSED] match-or-xfail
[15:39:30] [PASSED] no-match-no-add-multiple-rules
[15:39:30] [PASSED] two-regs-two-entries
[15:39:30] [PASSED] clr-one-set-other
[15:39:30] [PASSED] set-field
[15:39:30] [PASSED] conflict-duplicate
[15:39:30] [PASSED] conflict-not-disjoint
[15:39:30] [PASSED] conflict-reg-type
[15:39:30] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[15:39:30] ================== xe_rtp_process_tests ===================
[15:39:30] [PASSED] active1
[15:39:30] [PASSED] active2
[15:39:30] [PASSED] active-inactive
[15:39:30] [PASSED] inactive-active
[15:39:30] [PASSED] inactive-1st_or_active-inactive
[15:39:30] [PASSED] inactive-2nd_or_active-inactive
[15:39:30] [PASSED] inactive-last_or_active-inactive
[15:39:30] [PASSED] inactive-no_or_active-inactive
[15:39:30] ============== [PASSED] xe_rtp_process_tests ===============
[15:39:30] ===================== [PASSED] xe_rtp ======================
[15:39:30] ==================== xe_wa (1 subtest) =====================
[15:39:30] ======================== xe_wa_gt =========================
[15:39:30] [PASSED] TIGERLAKE B0
[15:39:30] [PASSED] DG1 A0
[15:39:30] [PASSED] DG1 B0
[15:39:30] [PASSED] ALDERLAKE_S A0
[15:39:30] [PASSED] ALDERLAKE_S B0
[15:39:30] [PASSED] ALDERLAKE_S C0
[15:39:30] [PASSED] ALDERLAKE_S D0
[15:39:30] [PASSED] ALDERLAKE_P A0
[15:39:30] [PASSED] ALDERLAKE_P B0
[15:39:30] [PASSED] ALDERLAKE_P C0
[15:39:30] [PASSED] ALDERLAKE_S RPLS D0
[15:39:30] [PASSED] ALDERLAKE_P RPLU E0
[15:39:30] [PASSED] DG2 G10 C0
[15:39:30] [PASSED] DG2 G11 B1
[15:39:30] [PASSED] DG2 G12 A1
[15:39:30] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[15:39:30] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[15:39:30] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[15:39:30] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[15:39:30] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[15:39:30] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[15:39:30] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[15:39:30] ==================== [PASSED] xe_wa_gt =====================
[15:39:30] ====================== [PASSED] xe_wa ======================
[15:39:30] ============================================================
[15:39:30] Testing complete. Ran 510 tests: passed: 492, skipped: 18
[15:39:30] Elapsed time: 35.785s total, 4.250s configuring, 31.068s building, 0.442s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[15:39:30] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:39:32] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[15:39:57] Starting KUnit Kernel (1/1)...
[15:39:57] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:39:57] ============ drm_test_pick_cmdline (2 subtests) ============
[15:39:57] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[15:39:57] =============== drm_test_pick_cmdline_named ===============
[15:39:57] [PASSED] NTSC
[15:39:57] [PASSED] NTSC-J
[15:39:57] [PASSED] PAL
[15:39:57] [PASSED] PAL-M
[15:39:57] =========== [PASSED] drm_test_pick_cmdline_named ===========
[15:39:57] ============== [PASSED] drm_test_pick_cmdline ==============
[15:39:57] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[15:39:57] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[15:39:57] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[15:39:57] =========== drm_validate_clone_mode (2 subtests) ===========
[15:39:57] ============== drm_test_check_in_clone_mode ===============
[15:39:57] [PASSED] in_clone_mode
[15:39:57] [PASSED] not_in_clone_mode
[15:39:57] ========== [PASSED] drm_test_check_in_clone_mode ===========
[15:39:57] =============== drm_test_check_valid_clones ===============
[15:39:57] [PASSED] not_in_clone_mode
[15:39:57] [PASSED] valid_clone
[15:39:57] [PASSED] invalid_clone
[15:39:57] =========== [PASSED] drm_test_check_valid_clones ===========
[15:39:57] ============= [PASSED] drm_validate_clone_mode =============
[15:39:57] ============= drm_validate_modeset (1 subtest) =============
[15:39:57] [PASSED] drm_test_check_connector_changed_modeset
[15:39:57] ============== [PASSED] drm_validate_modeset ===============
[15:39:57] ====== drm_test_bridge_get_current_state (2 subtests) ======
[15:39:57] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[15:39:57] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[15:39:57] ======== [PASSED] drm_test_bridge_get_current_state ========
[15:39:57] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[15:39:57] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[15:39:57] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[15:39:57] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[15:39:57] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[15:39:57] ============== drm_bridge_alloc (2 subtests) ===============
[15:39:57] [PASSED] drm_test_drm_bridge_alloc_basic
[15:39:57] [PASSED] drm_test_drm_bridge_alloc_get_put
[15:39:57] ================ [PASSED] drm_bridge_alloc =================
[15:39:57] ================== drm_buddy (8 subtests) ==================
[15:39:57] [PASSED] drm_test_buddy_alloc_limit
[15:39:57] [PASSED] drm_test_buddy_alloc_optimistic
[15:39:57] [PASSED] drm_test_buddy_alloc_pessimistic
[15:39:57] [PASSED] drm_test_buddy_alloc_pathological
[15:39:57] [PASSED] drm_test_buddy_alloc_contiguous
[15:39:57] [PASSED] drm_test_buddy_alloc_clear
[15:39:57] [PASSED] drm_test_buddy_alloc_range_bias
[15:39:57] [PASSED] drm_test_buddy_fragmentation_performance
[15:39:57] ==================== [PASSED] drm_buddy ====================
[15:39:57] ============= drm_cmdline_parser (40 subtests) =============
[15:39:57] [PASSED] drm_test_cmdline_force_d_only
[15:39:57] [PASSED] drm_test_cmdline_force_D_only_dvi
[15:39:57] [PASSED] drm_test_cmdline_force_D_only_hdmi
[15:39:57] [PASSED] drm_test_cmdline_force_D_only_not_digital
[15:39:57] [PASSED] drm_test_cmdline_force_e_only
[15:39:57] [PASSED] drm_test_cmdline_res
[15:39:57] [PASSED] drm_test_cmdline_res_vesa
[15:39:57] [PASSED] drm_test_cmdline_res_vesa_rblank
[15:39:57] [PASSED] drm_test_cmdline_res_rblank
[15:39:57] [PASSED] drm_test_cmdline_res_bpp
[15:39:57] [PASSED] drm_test_cmdline_res_refresh
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[15:39:57] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[15:39:57] [PASSED] drm_test_cmdline_res_margins_force_on
[15:39:57] [PASSED] drm_test_cmdline_res_vesa_margins
[15:39:57] [PASSED] drm_test_cmdline_name
[15:39:57] [PASSED] drm_test_cmdline_name_bpp
[15:39:57] [PASSED] drm_test_cmdline_name_option
[15:39:57] [PASSED] drm_test_cmdline_name_bpp_option
[15:39:57] [PASSED] drm_test_cmdline_rotate_0
[15:39:57] [PASSED] drm_test_cmdline_rotate_90
[15:39:57] [PASSED] drm_test_cmdline_rotate_180
[15:39:57] [PASSED] drm_test_cmdline_rotate_270
[15:39:57] [PASSED] drm_test_cmdline_hmirror
[15:39:57] [PASSED] drm_test_cmdline_vmirror
[15:39:57] [PASSED] drm_test_cmdline_margin_options
[15:39:57] [PASSED] drm_test_cmdline_multiple_options
[15:39:57] [PASSED] drm_test_cmdline_bpp_extra_and_option
[15:39:57] [PASSED] drm_test_cmdline_extra_and_option
[15:39:57] [PASSED] drm_test_cmdline_freestanding_options
[15:39:57] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[15:39:57] [PASSED] drm_test_cmdline_panel_orientation
[15:39:57] ================ drm_test_cmdline_invalid =================
[15:39:57] [PASSED] margin_only
[15:39:57] [PASSED] interlace_only
[15:39:57] [PASSED] res_missing_x
[15:39:57] [PASSED] res_missing_y
[15:39:57] [PASSED] res_bad_y
[15:39:57] [PASSED] res_missing_y_bpp
[15:39:57] [PASSED] res_bad_bpp
[15:39:57] [PASSED] res_bad_refresh
[15:39:57] [PASSED] res_bpp_refresh_force_on_off
[15:39:57] [PASSED] res_invalid_mode
[15:39:57] [PASSED] res_bpp_wrong_place_mode
[15:39:57] [PASSED] name_bpp_refresh
[15:39:57] [PASSED] name_refresh
[15:39:57] [PASSED] name_refresh_wrong_mode
[15:39:57] [PASSED] name_refresh_invalid_mode
[15:39:57] [PASSED] rotate_multiple
[15:39:57] [PASSED] rotate_invalid_val
[15:39:57] [PASSED] rotate_truncated
[15:39:57] [PASSED] invalid_option
[15:39:57] [PASSED] invalid_tv_option
[15:39:57] [PASSED] truncated_tv_option
[15:39:57] ============ [PASSED] drm_test_cmdline_invalid =============
[15:39:57] =============== drm_test_cmdline_tv_options ===============
[15:39:57] [PASSED] NTSC
[15:39:57] [PASSED] NTSC_443
[15:39:57] [PASSED] NTSC_J
[15:39:57] [PASSED] PAL
[15:39:57] [PASSED] PAL_M
[15:39:57] [PASSED] PAL_N
[15:39:57] [PASSED] SECAM
[15:39:57] [PASSED] MONO_525
[15:39:57] [PASSED] MONO_625
[15:39:57] =========== [PASSED] drm_test_cmdline_tv_options ===========
[15:39:57] =============== [PASSED] drm_cmdline_parser ================
[15:39:57] ========== drmm_connector_hdmi_init (20 subtests) ==========
[15:39:57] [PASSED] drm_test_connector_hdmi_init_valid
[15:39:57] [PASSED] drm_test_connector_hdmi_init_bpc_8
[15:39:57] [PASSED] drm_test_connector_hdmi_init_bpc_10
[15:39:57] [PASSED] drm_test_connector_hdmi_init_bpc_12
[15:39:57] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[15:39:57] [PASSED] drm_test_connector_hdmi_init_bpc_null
[15:39:57] [PASSED] drm_test_connector_hdmi_init_formats_empty
[15:39:57] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[15:39:57] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[15:39:57] [PASSED] supported_formats=0x9 yuv420_allowed=1
[15:39:57] [PASSED] supported_formats=0x9 yuv420_allowed=0
[15:39:57] [PASSED] supported_formats=0x3 yuv420_allowed=1
[15:39:57] [PASSED] supported_formats=0x3 yuv420_allowed=0
[15:39:57] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[15:39:57] [PASSED] drm_test_connector_hdmi_init_null_ddc
[15:39:57] [PASSED] drm_test_connector_hdmi_init_null_product
[15:39:57] [PASSED] drm_test_connector_hdmi_init_null_vendor
[15:39:57] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[15:39:57] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[15:39:57] [PASSED] drm_test_connector_hdmi_init_product_valid
[15:39:57] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[15:39:57] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[15:39:57] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[15:39:57] ========= drm_test_connector_hdmi_init_type_valid =========
[15:39:57] [PASSED] HDMI-A
[15:39:57] [PASSED] HDMI-B
[15:39:57] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[15:39:57] ======== drm_test_connector_hdmi_init_type_invalid ========
[15:39:57] [PASSED] Unknown
[15:39:57] [PASSED] VGA
[15:39:57] [PASSED] DVI-I
[15:39:57] [PASSED] DVI-D
[15:39:57] [PASSED] DVI-A
[15:39:57] [PASSED] Composite
[15:39:57] [PASSED] SVIDEO
[15:39:57] [PASSED] LVDS
[15:39:57] [PASSED] Component
[15:39:57] [PASSED] DIN
[15:39:57] [PASSED] DP
[15:39:57] [PASSED] TV
[15:39:57] [PASSED] eDP
[15:39:57] [PASSED] Virtual
[15:39:57] [PASSED] DSI
[15:39:57] [PASSED] DPI
[15:39:57] [PASSED] Writeback
[15:39:57] [PASSED] SPI
[15:39:57] [PASSED] USB
[15:39:57] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[15:39:57] ============ [PASSED] drmm_connector_hdmi_init =============
[15:39:57] ============= drmm_connector_init (3 subtests) =============
[15:39:57] [PASSED] drm_test_drmm_connector_init
[15:39:57] [PASSED] drm_test_drmm_connector_init_null_ddc
[15:39:57] ========= drm_test_drmm_connector_init_type_valid =========
[15:39:57] [PASSED] Unknown
[15:39:57] [PASSED] VGA
[15:39:57] [PASSED] DVI-I
[15:39:57] [PASSED] DVI-D
[15:39:57] [PASSED] DVI-A
[15:39:57] [PASSED] Composite
[15:39:57] [PASSED] SVIDEO
[15:39:57] [PASSED] LVDS
[15:39:57] [PASSED] Component
[15:39:57] [PASSED] DIN
[15:39:57] [PASSED] DP
[15:39:57] [PASSED] HDMI-A
[15:39:57] [PASSED] HDMI-B
[15:39:57] [PASSED] TV
[15:39:57] [PASSED] eDP
[15:39:57] [PASSED] Virtual
[15:39:57] [PASSED] DSI
[15:39:57] [PASSED] DPI
[15:39:57] [PASSED] Writeback
[15:39:57] [PASSED] SPI
[15:39:57] [PASSED] USB
[15:39:57] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[15:39:57] =============== [PASSED] drmm_connector_init ===============
[15:39:57] ========= drm_connector_dynamic_init (6 subtests) ==========
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_init
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_init_properties
[15:39:57] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[15:39:57] [PASSED] Unknown
[15:39:57] [PASSED] VGA
[15:39:57] [PASSED] DVI-I
[15:39:57] [PASSED] DVI-D
[15:39:57] [PASSED] DVI-A
[15:39:57] [PASSED] Composite
[15:39:57] [PASSED] SVIDEO
[15:39:57] [PASSED] LVDS
[15:39:57] [PASSED] Component
[15:39:57] [PASSED] DIN
[15:39:57] [PASSED] DP
[15:39:57] [PASSED] HDMI-A
[15:39:57] [PASSED] HDMI-B
[15:39:57] [PASSED] TV
[15:39:57] [PASSED] eDP
[15:39:57] [PASSED] Virtual
[15:39:57] [PASSED] DSI
[15:39:57] [PASSED] DPI
[15:39:57] [PASSED] Writeback
[15:39:57] [PASSED] SPI
[15:39:57] [PASSED] USB
[15:39:57] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[15:39:57] ======== drm_test_drm_connector_dynamic_init_name =========
[15:39:57] [PASSED] Unknown
[15:39:57] [PASSED] VGA
[15:39:57] [PASSED] DVI-I
[15:39:57] [PASSED] DVI-D
[15:39:57] [PASSED] DVI-A
[15:39:57] [PASSED] Composite
[15:39:57] [PASSED] SVIDEO
[15:39:57] [PASSED] LVDS
[15:39:57] [PASSED] Component
[15:39:57] [PASSED] DIN
[15:39:57] [PASSED] DP
[15:39:57] [PASSED] HDMI-A
[15:39:57] [PASSED] HDMI-B
[15:39:57] [PASSED] TV
[15:39:57] [PASSED] eDP
[15:39:57] [PASSED] Virtual
[15:39:57] [PASSED] DSI
[15:39:57] [PASSED] DPI
[15:39:57] [PASSED] Writeback
[15:39:57] [PASSED] SPI
[15:39:57] [PASSED] USB
[15:39:57] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[15:39:57] =========== [PASSED] drm_connector_dynamic_init ============
[15:39:57] ==== drm_connector_dynamic_register_early (4 subtests) =====
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[15:39:57] ====== [PASSED] drm_connector_dynamic_register_early =======
[15:39:57] ======= drm_connector_dynamic_register (7 subtests) ========
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[15:39:57] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[15:39:57] ========= [PASSED] drm_connector_dynamic_register ==========
[15:39:57] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[15:39:57] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[15:39:57] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[15:39:57] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[15:39:57] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[15:39:57] ========== drm_test_get_tv_mode_from_name_valid ===========
[15:39:57] [PASSED] NTSC
[15:39:57] [PASSED] NTSC-443
[15:39:57] [PASSED] NTSC-J
[15:39:57] [PASSED] PAL
[15:39:57] [PASSED] PAL-M
[15:39:57] [PASSED] PAL-N
[15:39:57] [PASSED] SECAM
[15:39:57] [PASSED] Mono
[15:39:57] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[15:39:57] [PASSED] drm_test_get_tv_mode_from_name_truncated
[15:39:57] ============ [PASSED] drm_get_tv_mode_from_name ============
[15:39:57] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[15:39:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[15:39:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[15:39:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[15:39:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[15:39:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[15:39:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[15:39:57] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[15:39:57] [PASSED] VIC 96
[15:39:57] [PASSED] VIC 97
[15:39:57] [PASSED] VIC 101
[15:39:57] [PASSED] VIC 102
[15:39:57] [PASSED] VIC 106
[15:39:57] [PASSED] VIC 107
[15:39:57] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[15:39:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[15:39:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[15:39:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[15:39:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[15:39:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[15:39:57] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[15:39:57] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[15:39:57] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[15:39:57] [PASSED] Automatic
[15:39:57] [PASSED] Full
[15:39:57] [PASSED] Limited 16:235
[15:39:57] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[15:39:57] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[15:39:57] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[15:39:57] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[15:39:57] === drm_test_drm_hdmi_connector_get_output_format_name ====
[15:39:57] [PASSED] RGB
[15:39:57] [PASSED] YUV 4:2:0
[15:39:57] [PASSED] YUV 4:2:2
[15:39:57] [PASSED] YUV 4:4:4
[15:39:57] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[15:39:57] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[15:39:57] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[15:39:57] ============= drm_damage_helper (21 subtests) ==============
[15:39:57] [PASSED] drm_test_damage_iter_no_damage
[15:39:57] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[15:39:57] [PASSED] drm_test_damage_iter_no_damage_src_moved
[15:39:57] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[15:39:57] [PASSED] drm_test_damage_iter_no_damage_not_visible
[15:39:57] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[15:39:57] [PASSED] drm_test_damage_iter_no_damage_no_fb
[15:39:57] [PASSED] drm_test_damage_iter_simple_damage
[15:39:57] [PASSED] drm_test_damage_iter_single_damage
[15:39:57] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[15:39:57] [PASSED] drm_test_damage_iter_single_damage_outside_src
[15:39:57] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[15:39:57] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[15:39:57] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[15:39:57] [PASSED] drm_test_damage_iter_single_damage_src_moved
[15:39:57] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[15:39:57] [PASSED] drm_test_damage_iter_damage
[15:39:57] [PASSED] drm_test_damage_iter_damage_one_intersect
[15:39:57] [PASSED] drm_test_damage_iter_damage_one_outside
[15:39:57] [PASSED] drm_test_damage_iter_damage_src_moved
[15:39:57] [PASSED] drm_test_damage_iter_damage_not_visible
[15:39:57] ================ [PASSED] drm_damage_helper ================
[15:39:57] ============== drm_dp_mst_helper (3 subtests) ==============
[15:39:57] ============== drm_test_dp_mst_calc_pbn_mode ==============
[15:39:57] [PASSED] Clock 154000 BPP 30 DSC disabled
[15:39:57] [PASSED] Clock 234000 BPP 30 DSC disabled
[15:39:57] [PASSED] Clock 297000 BPP 24 DSC disabled
[15:39:57] [PASSED] Clock 332880 BPP 24 DSC enabled
[15:39:57] [PASSED] Clock 324540 BPP 24 DSC enabled
[15:39:57] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[15:39:57] ============== drm_test_dp_mst_calc_pbn_div ===============
[15:39:57] [PASSED] Link rate 2000000 lane count 4
[15:39:57] [PASSED] Link rate 2000000 lane count 2
[15:39:57] [PASSED] Link rate 2000000 lane count 1
[15:39:57] [PASSED] Link rate 1350000 lane count 4
[15:39:57] [PASSED] Link rate 1350000 lane count 2
[15:39:57] [PASSED] Link rate 1350000 lane count 1
[15:39:57] [PASSED] Link rate 1000000 lane count 4
[15:39:57] [PASSED] Link rate 1000000 lane count 2
[15:39:57] [PASSED] Link rate 1000000 lane count 1
[15:39:57] [PASSED] Link rate 810000 lane count 4
[15:39:57] [PASSED] Link rate 810000 lane count 2
[15:39:57] [PASSED] Link rate 810000 lane count 1
[15:39:57] [PASSED] Link rate 540000 lane count 4
[15:39:57] [PASSED] Link rate 540000 lane count 2
[15:39:57] [PASSED] Link rate 540000 lane count 1
[15:39:57] [PASSED] Link rate 270000 lane count 4
[15:39:57] [PASSED] Link rate 270000 lane count 2
[15:39:57] [PASSED] Link rate 270000 lane count 1
[15:39:57] [PASSED] Link rate 162000 lane count 4
[15:39:57] [PASSED] Link rate 162000 lane count 2
[15:39:57] [PASSED] Link rate 162000 lane count 1
[15:39:57] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[15:39:57] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[15:39:57] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[15:39:57] [PASSED] DP_POWER_UP_PHY with port number
[15:39:57] [PASSED] DP_POWER_DOWN_PHY with port number
[15:39:57] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[15:39:57] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[15:39:57] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[15:39:57] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[15:39:57] [PASSED] DP_QUERY_PAYLOAD with port number
[15:39:57] [PASSED] DP_QUERY_PAYLOAD with VCPI
[15:39:57] [PASSED] DP_REMOTE_DPCD_READ with port number
[15:39:57] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[15:39:57] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[15:39:57] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[15:39:57] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[15:39:57] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[15:39:57] [PASSED] DP_REMOTE_I2C_READ with port number
[15:39:57] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[15:39:57] [PASSED] DP_REMOTE_I2C_READ with transactions array
[15:39:57] [PASSED] DP_REMOTE_I2C_WRITE with port number
[15:39:57] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[15:39:57] [PASSED] DP_REMOTE_I2C_WRITE with data array
[15:39:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[15:39:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[15:39:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[15:39:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[15:39:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[15:39:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[15:39:57] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[15:39:57] ================ [PASSED] drm_dp_mst_helper ================
[15:39:57] ================== drm_exec (7 subtests) ===================
[15:39:57] [PASSED] sanitycheck
[15:39:57] [PASSED] test_lock
[15:39:57] [PASSED] test_lock_unlock
[15:39:57] [PASSED] test_duplicates
[15:39:57] [PASSED] test_prepare
[15:39:57] [PASSED] test_prepare_array
[15:39:57] [PASSED] test_multiple_loops
[15:39:57] ==================== [PASSED] drm_exec =====================
[15:39:57] =========== drm_format_helper_test (17 subtests) ===========
[15:39:57] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[15:39:57] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[15:39:57] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[15:39:57] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[15:39:57] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[15:39:57] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[15:39:57] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[15:39:57] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[15:39:57] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[15:39:57] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[15:39:57] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[15:39:57] ============== drm_test_fb_xrgb8888_to_mono ===============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[15:39:57] ==================== drm_test_fb_swab =====================
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ================ [PASSED] drm_test_fb_swab =================
[15:39:57] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[15:39:57] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[15:39:57] [PASSED] single_pixel_source_buffer
[15:39:57] [PASSED] single_pixel_clip_rectangle
[15:39:57] [PASSED] well_known_colors
[15:39:57] [PASSED] destination_pitch
[15:39:57] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[15:39:57] ================= drm_test_fb_clip_offset =================
[15:39:57] [PASSED] pass through
[15:39:57] [PASSED] horizontal offset
[15:39:57] [PASSED] vertical offset
[15:39:57] [PASSED] horizontal and vertical offset
[15:39:57] [PASSED] horizontal offset (custom pitch)
[15:39:57] [PASSED] vertical offset (custom pitch)
[15:39:57] [PASSED] horizontal and vertical offset (custom pitch)
[15:39:57] ============= [PASSED] drm_test_fb_clip_offset =============
[15:39:57] =================== drm_test_fb_memcpy ====================
[15:39:57] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[15:39:57] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[15:39:57] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[15:39:57] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[15:39:57] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[15:39:57] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[15:39:57] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[15:39:57] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[15:39:57] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[15:39:57] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[15:39:57] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[15:39:57] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[15:39:57] =============== [PASSED] drm_test_fb_memcpy ================
[15:39:57] ============= [PASSED] drm_format_helper_test ==============
[15:39:57] ================= drm_format (18 subtests) =================
[15:39:57] [PASSED] drm_test_format_block_width_invalid
[15:39:57] [PASSED] drm_test_format_block_width_one_plane
[15:39:57] [PASSED] drm_test_format_block_width_two_plane
[15:39:57] [PASSED] drm_test_format_block_width_three_plane
[15:39:57] [PASSED] drm_test_format_block_width_tiled
[15:39:57] [PASSED] drm_test_format_block_height_invalid
[15:39:57] [PASSED] drm_test_format_block_height_one_plane
[15:39:57] [PASSED] drm_test_format_block_height_two_plane
[15:39:57] [PASSED] drm_test_format_block_height_three_plane
[15:39:57] [PASSED] drm_test_format_block_height_tiled
[15:39:57] [PASSED] drm_test_format_min_pitch_invalid
[15:39:57] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[15:39:57] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[15:39:57] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[15:39:57] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[15:39:57] [PASSED] drm_test_format_min_pitch_two_plane
[15:39:57] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[15:39:57] [PASSED] drm_test_format_min_pitch_tiled
[15:39:57] =================== [PASSED] drm_format ====================
[15:39:57] ============== drm_framebuffer (10 subtests) ===============
[15:39:57] ========== drm_test_framebuffer_check_src_coords ==========
[15:39:57] [PASSED] Success: source fits into fb
[15:39:57] [PASSED] Fail: overflowing fb with x-axis coordinate
[15:39:57] [PASSED] Fail: overflowing fb with y-axis coordinate
[15:39:57] [PASSED] Fail: overflowing fb with source width
[15:39:57] [PASSED] Fail: overflowing fb with source height
[15:39:57] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[15:39:57] [PASSED] drm_test_framebuffer_cleanup
[15:39:57] =============== drm_test_framebuffer_create ===============
[15:39:57] [PASSED] ABGR8888 normal sizes
[15:39:57] [PASSED] ABGR8888 max sizes
[15:39:57] [PASSED] ABGR8888 pitch greater than min required
[15:39:57] [PASSED] ABGR8888 pitch less than min required
[15:39:57] [PASSED] ABGR8888 Invalid width
[15:39:57] [PASSED] ABGR8888 Invalid buffer handle
[15:39:57] [PASSED] No pixel format
[15:39:57] [PASSED] ABGR8888 Width 0
[15:39:57] [PASSED] ABGR8888 Height 0
[15:39:57] [PASSED] ABGR8888 Out of bound height * pitch combination
[15:39:57] [PASSED] ABGR8888 Large buffer offset
[15:39:57] [PASSED] ABGR8888 Buffer offset for inexistent plane
[15:39:57] [PASSED] ABGR8888 Invalid flag
[15:39:57] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[15:39:57] [PASSED] ABGR8888 Valid buffer modifier
[15:39:57] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[15:39:57] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[15:39:57] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[15:39:57] [PASSED] NV12 Normal sizes
[15:39:57] [PASSED] NV12 Max sizes
[15:39:57] [PASSED] NV12 Invalid pitch
[15:39:57] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[15:39:57] [PASSED] NV12 different modifier per-plane
[15:39:57] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[15:39:57] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[15:39:57] [PASSED] NV12 Modifier for inexistent plane
[15:39:57] [PASSED] NV12 Handle for inexistent plane
[15:39:57] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[15:39:57] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[15:39:57] [PASSED] YVU420 Normal sizes
[15:39:57] [PASSED] YVU420 Max sizes
[15:39:57] [PASSED] YVU420 Invalid pitch
[15:39:57] [PASSED] YVU420 Different pitches
[15:39:57] [PASSED] YVU420 Different buffer offsets/pitches
[15:39:57] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[15:39:57] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[15:39:57] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[15:39:57] [PASSED] YVU420 Valid modifier
[15:39:57] [PASSED] YVU420 Different modifiers per plane
[15:39:57] [PASSED] YVU420 Modifier for inexistent plane
[15:39:57] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[15:39:57] [PASSED] X0L2 Normal sizes
[15:39:57] [PASSED] X0L2 Max sizes
[15:39:57] [PASSED] X0L2 Invalid pitch
[15:39:57] [PASSED] X0L2 Pitch greater than minimum required
[15:39:57] [PASSED] X0L2 Handle for inexistent plane
[15:39:57] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[15:39:57] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[15:39:57] [PASSED] X0L2 Valid modifier
[15:39:57] [PASSED] X0L2 Modifier for inexistent plane
[15:39:57] =========== [PASSED] drm_test_framebuffer_create ===========
[15:39:57] [PASSED] drm_test_framebuffer_free
[15:39:57] [PASSED] drm_test_framebuffer_init
[15:39:57] [PASSED] drm_test_framebuffer_init_bad_format
[15:39:57] [PASSED] drm_test_framebuffer_init_dev_mismatch
[15:39:57] [PASSED] drm_test_framebuffer_lookup
[15:39:57] [PASSED] drm_test_framebuffer_lookup_inexistent
[15:39:57] [PASSED] drm_test_framebuffer_modifiers_not_supported
[15:39:57] ================= [PASSED] drm_framebuffer =================
[15:39:57] ================ drm_gem_shmem (8 subtests) ================
[15:39:57] [PASSED] drm_gem_shmem_test_obj_create
[15:39:57] [PASSED] drm_gem_shmem_test_obj_create_private
[15:39:57] [PASSED] drm_gem_shmem_test_pin_pages
[15:39:57] [PASSED] drm_gem_shmem_test_vmap
[15:39:57] [PASSED] drm_gem_shmem_test_get_pages_sgt
[15:39:57] [PASSED] drm_gem_shmem_test_get_sg_table
[15:39:57] [PASSED] drm_gem_shmem_test_madvise
[15:39:57] [PASSED] drm_gem_shmem_test_purge
[15:39:57] ================== [PASSED] drm_gem_shmem ==================
[15:39:57] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[15:39:57] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[15:39:57] [PASSED] Automatic
[15:39:57] [PASSED] Full
[15:39:57] [PASSED] Limited 16:235
[15:39:57] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[15:39:57] [PASSED] drm_test_check_disable_connector
[15:39:57] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[15:39:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[15:39:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[15:39:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[15:39:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[15:39:57] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[15:39:57] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[15:39:57] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[15:39:57] [PASSED] drm_test_check_output_bpc_dvi
[15:39:57] [PASSED] drm_test_check_output_bpc_format_vic_1
[15:39:57] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[15:39:57] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[15:39:57] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[15:39:57] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[15:39:57] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[15:39:57] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[15:39:57] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[15:39:57] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[15:39:57] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[15:39:57] [PASSED] drm_test_check_broadcast_rgb_value
[15:39:57] [PASSED] drm_test_check_bpc_8_value
[15:39:57] [PASSED] drm_test_check_bpc_10_value
[15:39:57] [PASSED] drm_test_check_bpc_12_value
[15:39:57] [PASSED] drm_test_check_format_value
[15:39:57] [PASSED] drm_test_check_tmds_char_value
[15:39:57] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[15:39:57] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[15:39:57] [PASSED] drm_test_check_mode_valid
[15:39:57] [PASSED] drm_test_check_mode_valid_reject
[15:39:57] [PASSED] drm_test_check_mode_valid_reject_rate
[15:39:57] [PASSED] drm_test_check_mode_valid_reject_max_clock
[15:39:57] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[15:39:57] ================= drm_managed (2 subtests) =================
[15:39:57] [PASSED] drm_test_managed_release_action
[15:39:57] [PASSED] drm_test_managed_run_action
[15:39:57] =================== [PASSED] drm_managed ===================
[15:39:57] =================== drm_mm (6 subtests) ====================
[15:39:57] [PASSED] drm_test_mm_init
[15:39:57] [PASSED] drm_test_mm_debug
[15:39:57] [PASSED] drm_test_mm_align32
[15:39:57] [PASSED] drm_test_mm_align64
[15:39:57] [PASSED] drm_test_mm_lowest
[15:39:57] [PASSED] drm_test_mm_highest
[15:39:57] ===================== [PASSED] drm_mm ======================
[15:39:57] ============= drm_modes_analog_tv (5 subtests) =============
[15:39:57] [PASSED] drm_test_modes_analog_tv_mono_576i
[15:39:57] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[15:39:57] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[15:39:57] [PASSED] drm_test_modes_analog_tv_pal_576i
[15:39:57] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[15:39:57] =============== [PASSED] drm_modes_analog_tv ===============
[15:39:57] ============== drm_plane_helper (2 subtests) ===============
[15:39:57] =============== drm_test_check_plane_state ================
[15:39:57] [PASSED] clipping_simple
[15:39:57] [PASSED] clipping_rotate_reflect
[15:39:57] [PASSED] positioning_simple
[15:39:57] [PASSED] upscaling
[15:39:57] [PASSED] downscaling
[15:39:57] [PASSED] rounding1
[15:39:57] [PASSED] rounding2
[15:39:57] [PASSED] rounding3
[15:39:57] [PASSED] rounding4
[15:39:57] =========== [PASSED] drm_test_check_plane_state ============
[15:39:57] =========== drm_test_check_invalid_plane_state ============
[15:39:57] [PASSED] positioning_invalid
[15:39:57] [PASSED] upscaling_invalid
[15:39:57] [PASSED] downscaling_invalid
[15:39:57] ======= [PASSED] drm_test_check_invalid_plane_state ========
[15:39:57] ================ [PASSED] drm_plane_helper =================
[15:39:57] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[15:39:57] ====== drm_test_connector_helper_tv_get_modes_check =======
[15:39:57] [PASSED] None
[15:39:57] [PASSED] PAL
[15:39:57] [PASSED] NTSC
[15:39:57] [PASSED] Both, NTSC Default
[15:39:57] [PASSED] Both, PAL Default
[15:39:57] [PASSED] Both, NTSC Default, with PAL on command-line
[15:39:57] [PASSED] Both, PAL Default, with NTSC on command-line
[15:39:57] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[15:39:57] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[15:39:57] ================== drm_rect (9 subtests) ===================
[15:39:57] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[15:39:57] [PASSED] drm_test_rect_clip_scaled_not_clipped
[15:39:57] [PASSED] drm_test_rect_clip_scaled_clipped
[15:39:57] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[15:39:57] ================= drm_test_rect_intersect =================
[15:39:57] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[15:39:57] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[15:39:57] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[15:39:57] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[15:39:57] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[15:39:57] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[15:39:57] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[15:39:57] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[15:39:57] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[15:39:57] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[15:39:57] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[15:39:57] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[15:39:57] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[15:39:57] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[15:39:57] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[15:39:57] ============= [PASSED] drm_test_rect_intersect =============
[15:39:57] ================ drm_test_rect_calc_hscale ================
[15:39:57] [PASSED] normal use
[15:39:57] [PASSED] out of max range
[15:39:57] [PASSED] out of min range
[15:39:57] [PASSED] zero dst
[15:39:57] [PASSED] negative src
[15:39:57] [PASSED] negative dst
[15:39:57] ============ [PASSED] drm_test_rect_calc_hscale ============
[15:39:57] ================ drm_test_rect_calc_vscale ================
[15:39:57] [PASSED] normal use
[15:39:57] [PASSED] out of max range
[15:39:57] [PASSED] out of min range
[15:39:57] [PASSED] zero dst
[15:39:57] [PASSED] negative src
[15:39:57] [PASSED] negative dst
[15:39:57] ============ [PASSED] drm_test_rect_calc_vscale ============
[15:39:57] ================== drm_test_rect_rotate ===================
[15:39:57] [PASSED] reflect-x
[15:39:57] [PASSED] reflect-y
[15:39:57] [PASSED] rotate-0
[15:39:57] [PASSED] rotate-90
[15:39:57] [PASSED] rotate-180
[15:39:57] [PASSED] rotate-270
[15:39:57] ============== [PASSED] drm_test_rect_rotate ===============
[15:39:57] ================ drm_test_rect_rotate_inv =================
[15:39:57] [PASSED] reflect-x
[15:39:57] [PASSED] reflect-y
[15:39:57] [PASSED] rotate-0
[15:39:57] [PASSED] rotate-90
[15:39:57] [PASSED] rotate-180
[15:39:57] [PASSED] rotate-270
[15:39:57] ============ [PASSED] drm_test_rect_rotate_inv =============
[15:39:57] ==================== [PASSED] drm_rect =====================
[15:39:57] ============ drm_sysfb_modeset_test (1 subtest) ============
[15:39:57] ============ drm_test_sysfb_build_fourcc_list =============
[15:39:57] [PASSED] no native formats
[15:39:57] [PASSED] XRGB8888 as native format
[15:39:57] [PASSED] remove duplicates
[15:39:57] [PASSED] convert alpha formats
[15:39:57] [PASSED] random formats
[15:39:57] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[15:39:57] ============= [PASSED] drm_sysfb_modeset_test ==============
[15:39:57] ================== drm_fixp (2 subtests) ===================
[15:39:57] [PASSED] drm_test_int2fixp
[15:39:57] [PASSED] drm_test_sm2fixp
[15:39:57] ==================== [PASSED] drm_fixp =====================
[15:39:57] ============================================================
[15:39:57] Testing complete. Ran 624 tests: passed: 624
[15:39:57] Elapsed time: 27.410s total, 1.703s configuring, 25.290s building, 0.395s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[15:39:58] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:39:59] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[15:40:08] Starting KUnit Kernel (1/1)...
[15:40:08] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:40:09] ================= ttm_device (5 subtests) ==================
[15:40:09] [PASSED] ttm_device_init_basic
[15:40:09] [PASSED] ttm_device_init_multiple
[15:40:09] [PASSED] ttm_device_fini_basic
[15:40:09] [PASSED] ttm_device_init_no_vma_man
[15:40:09] ================== ttm_device_init_pools ==================
[15:40:09] [PASSED] No DMA allocations, no DMA32 required
[15:40:09] [PASSED] DMA allocations, DMA32 required
[15:40:09] [PASSED] No DMA allocations, DMA32 required
[15:40:09] [PASSED] DMA allocations, no DMA32 required
[15:40:09] ============== [PASSED] ttm_device_init_pools ==============
[15:40:09] =================== [PASSED] ttm_device ====================
[15:40:09] ================== ttm_pool (8 subtests) ===================
[15:40:09] ================== ttm_pool_alloc_basic ===================
[15:40:09] [PASSED] One page
[15:40:09] [PASSED] More than one page
[15:40:09] [PASSED] Above the allocation limit
[15:40:09] [PASSED] One page, with coherent DMA mappings enabled
[15:40:09] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[15:40:09] ============== [PASSED] ttm_pool_alloc_basic ===============
[15:40:09] ============== ttm_pool_alloc_basic_dma_addr ==============
[15:40:09] [PASSED] One page
[15:40:09] [PASSED] More than one page
[15:40:09] [PASSED] Above the allocation limit
[15:40:09] [PASSED] One page, with coherent DMA mappings enabled
[15:40:09] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[15:40:09] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[15:40:09] [PASSED] ttm_pool_alloc_order_caching_match
[15:40:09] [PASSED] ttm_pool_alloc_caching_mismatch
[15:40:09] [PASSED] ttm_pool_alloc_order_mismatch
[15:40:09] [PASSED] ttm_pool_free_dma_alloc
[15:40:09] [PASSED] ttm_pool_free_no_dma_alloc
[15:40:09] [PASSED] ttm_pool_fini_basic
[15:40:09] ==================== [PASSED] ttm_pool =====================
[15:40:09] ================ ttm_resource (8 subtests) =================
[15:40:09] ================= ttm_resource_init_basic =================
[15:40:09] [PASSED] Init resource in TTM_PL_SYSTEM
[15:40:09] [PASSED] Init resource in TTM_PL_VRAM
[15:40:09] [PASSED] Init resource in a private placement
[15:40:09] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[15:40:09] ============= [PASSED] ttm_resource_init_basic =============
[15:40:09] [PASSED] ttm_resource_init_pinned
[15:40:09] [PASSED] ttm_resource_fini_basic
[15:40:09] [PASSED] ttm_resource_manager_init_basic
[15:40:09] [PASSED] ttm_resource_manager_usage_basic
[15:40:09] [PASSED] ttm_resource_manager_set_used_basic
[15:40:09] [PASSED] ttm_sys_man_alloc_basic
[15:40:09] [PASSED] ttm_sys_man_free_basic
[15:40:09] ================== [PASSED] ttm_resource ===================
[15:40:09] =================== ttm_tt (15 subtests) ===================
[15:40:09] ==================== ttm_tt_init_basic ====================
[15:40:09] [PASSED] Page-aligned size
[15:40:09] [PASSED] Extra pages requested
[15:40:09] ================ [PASSED] ttm_tt_init_basic ================
[15:40:09] [PASSED] ttm_tt_init_misaligned
[15:40:09] [PASSED] ttm_tt_fini_basic
[15:40:09] [PASSED] ttm_tt_fini_sg
[15:40:09] [PASSED] ttm_tt_fini_shmem
[15:40:09] [PASSED] ttm_tt_create_basic
[15:40:09] [PASSED] ttm_tt_create_invalid_bo_type
[15:40:09] [PASSED] ttm_tt_create_ttm_exists
[15:40:09] [PASSED] ttm_tt_create_failed
[15:40:09] [PASSED] ttm_tt_destroy_basic
[15:40:09] [PASSED] ttm_tt_populate_null_ttm
[15:40:09] [PASSED] ttm_tt_populate_populated_ttm
[15:40:09] [PASSED] ttm_tt_unpopulate_basic
[15:40:09] [PASSED] ttm_tt_unpopulate_empty_ttm
[15:40:09] [PASSED] ttm_tt_swapin_basic
[15:40:09] ===================== [PASSED] ttm_tt ======================
[15:40:09] =================== ttm_bo (14 subtests) ===================
[15:40:09] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[15:40:09] [PASSED] Cannot be interrupted and sleeps
[15:40:09] [PASSED] Cannot be interrupted, locks straight away
[15:40:09] [PASSED] Can be interrupted, sleeps
[15:40:09] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[15:40:09] [PASSED] ttm_bo_reserve_locked_no_sleep
[15:40:09] [PASSED] ttm_bo_reserve_no_wait_ticket
[15:40:09] [PASSED] ttm_bo_reserve_double_resv
[15:40:09] [PASSED] ttm_bo_reserve_interrupted
[15:40:09] [PASSED] ttm_bo_reserve_deadlock
[15:40:09] [PASSED] ttm_bo_unreserve_basic
[15:40:09] [PASSED] ttm_bo_unreserve_pinned
[15:40:09] [PASSED] ttm_bo_unreserve_bulk
[15:40:09] [PASSED] ttm_bo_fini_basic
[15:40:09] [PASSED] ttm_bo_fini_shared_resv
[15:40:09] [PASSED] ttm_bo_pin_basic
[15:40:09] [PASSED] ttm_bo_pin_unpin_resource
[15:40:09] [PASSED] ttm_bo_multiple_pin_one_unpin
[15:40:09] ===================== [PASSED] ttm_bo ======================
[15:40:09] ============== ttm_bo_validate (21 subtests) ===============
[15:40:09] ============== ttm_bo_init_reserved_sys_man ===============
[15:40:09] [PASSED] Buffer object for userspace
[15:40:09] [PASSED] Kernel buffer object
[15:40:09] [PASSED] Shared buffer object
[15:40:09] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[15:40:09] ============== ttm_bo_init_reserved_mock_man ==============
[15:40:09] [PASSED] Buffer object for userspace
[15:40:09] [PASSED] Kernel buffer object
[15:40:09] [PASSED] Shared buffer object
[15:40:09] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[15:40:09] [PASSED] ttm_bo_init_reserved_resv
[15:40:09] ================== ttm_bo_validate_basic ==================
[15:40:09] [PASSED] Buffer object for userspace
[15:40:09] [PASSED] Kernel buffer object
[15:40:09] [PASSED] Shared buffer object
[15:40:09] ============== [PASSED] ttm_bo_validate_basic ==============
[15:40:09] [PASSED] ttm_bo_validate_invalid_placement
[15:40:09] ============= ttm_bo_validate_same_placement ==============
[15:40:09] [PASSED] System manager
[15:40:09] [PASSED] VRAM manager
[15:40:09] ========= [PASSED] ttm_bo_validate_same_placement ==========
[15:40:09] [PASSED] ttm_bo_validate_failed_alloc
[15:40:09] [PASSED] ttm_bo_validate_pinned
[15:40:09] [PASSED] ttm_bo_validate_busy_placement
[15:40:09] ================ ttm_bo_validate_multihop =================
[15:40:09] [PASSED] Buffer object for userspace
[15:40:09] [PASSED] Kernel buffer object
[15:40:09] [PASSED] Shared buffer object
[15:40:09] ============ [PASSED] ttm_bo_validate_multihop =============
[15:40:09] ========== ttm_bo_validate_no_placement_signaled ==========
[15:40:09] [PASSED] Buffer object in system domain, no page vector
[15:40:09] [PASSED] Buffer object in system domain with an existing page vector
[15:40:09] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[15:40:09] ======== ttm_bo_validate_no_placement_not_signaled ========
[15:40:09] [PASSED] Buffer object for userspace
[15:40:09] [PASSED] Kernel buffer object
[15:40:09] [PASSED] Shared buffer object
[15:40:09] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[15:40:09] [PASSED] ttm_bo_validate_move_fence_signaled
[15:40:09] ========= ttm_bo_validate_move_fence_not_signaled =========
[15:40:09] [PASSED] Waits for GPU
[15:40:09] [PASSED] Tries to lock straight away
[15:40:09] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[15:40:09] [PASSED] ttm_bo_validate_happy_evict
[15:40:09] [PASSED] ttm_bo_validate_all_pinned_evict
[15:40:09] [PASSED] ttm_bo_validate_allowed_only_evict
[15:40:09] [PASSED] ttm_bo_validate_deleted_evict
[15:40:09] [PASSED] ttm_bo_validate_busy_domain_evict
[15:40:09] [PASSED] ttm_bo_validate_evict_gutting
[15:40:09] [PASSED] ttm_bo_validate_recrusive_evict
[15:40:09] ================= [PASSED] ttm_bo_validate =================
[15:40:09] ============================================================
[15:40:09] Testing complete. Ran 101 tests: passed: 101
[15:40:09] Elapsed time: 11.082s total, 1.684s configuring, 9.182s building, 0.187s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
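For anyone wanting to rerun these suites locally, the two KUnit passes above map to plain kunit.py invocations from a kernel checkout. This is a minimal sketch based only on the commands visible in this log; the drm .kunitconfig path is an assumption, mirroring the in-tree layout of the ttm one passed by the CI script above:
  # DRM core KUnit suite, built for UML as in the CI run above
  $ ./tools/testing/kunit/kunit.py run --kunitconfig drivers/gpu/drm/tests/.kunitconfig
  # TTM KUnit suite, same path as used by the CI script above
  $ ./tools/testing/kunit/kunit.py run --kunitconfig drivers/gpu/drm/ttm/tests/.kunitconfig
kunit.py performs the ARCH=um configure and build steps shown earlier in the log itself, so no separate make invocation is needed.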
^ permalink raw reply [flat|nested] 24+ messages in thread
* ✗ CI.checksparse: warning for Rust GPUVM support
2025-11-28 14:14 [PATCH 0/4] Rust GPUVM support Alice Ryhl
` (5 preceding siblings ...)
2025-11-28 15:40 ` ✓ CI.KUnit: success " Patchwork
@ 2025-11-28 15:55 ` Patchwork
2025-11-28 16:14 ` ✓ Xe.CI.BAT: success " Patchwork
2025-11-28 17:03 ` ✗ Xe.CI.Full: failure " Patchwork
8 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-11-28 15:55 UTC (permalink / raw)
To: Alice Ryhl; +Cc: intel-xe
== Series Details ==
Series: Rust GPUVM support
URL : https://patchwork.freedesktop.org/series/158211/
State : warning
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast f0525ac2ab2d085a083fe1671aebb7b4dfc8ba67
Sparse version: 0.6.4 (Ubuntu: 0.6.4-4ubuntu3)
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display_types.h:2072:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_display_types.h:2072:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_psr.c: note: in included file:
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
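The "warning" state here comes from the dim sparse step of drm maintainer-tools. A rough local equivalent, sketched from nothing more than the commands in the trace above (sparse must be installed; <kernel-tree> and <commit> are placeholders for your checkout and the series head being checked):
  $ git clone https://gitlab.freedesktop.org/drm/maintainer-tools
  $ make -C maintainer-tools
  $ cd <kernel-tree>
  $ /path/to/maintainer-tools/dim sparse --fast <commit>
In fast mode dim does not check each commit separately; the -/+ lines above appear to be the resulting diff of sparse warnings against the baseline tree.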
^ permalink raw reply [flat|nested] 24+ messages in thread
* ✓ Xe.CI.BAT: success for Rust GPUVM support
2025-11-28 14:14 [PATCH 0/4] Rust GPUVM support Alice Ryhl
` (6 preceding siblings ...)
2025-11-28 15:55 ` ✗ CI.checksparse: warning " Patchwork
@ 2025-11-28 16:14 ` Patchwork
2025-11-28 17:03 ` ✗ Xe.CI.Full: failure " Patchwork
8 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-11-28 16:14 UTC (permalink / raw)
To: Alice Ryhl; +Cc: intel-xe
== Series Details ==
Series: Rust GPUVM support
URL : https://patchwork.freedesktop.org/series/158211/
State : success
== Summary ==
CI Bug Log - changes from xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670_BAT -> xe-pw-158211v1_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (11 -> 11)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-158211v1_BAT that come from known issues:
### IGT changes ###
#### Possible fixes ####
* igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind:
- bat-ptl-1: [FAIL][1] ([Intel XE#5625]) -> [PASS][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/bat-ptl-1/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/bat-ptl-1/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind.html
[Intel XE#5625]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5625
Build changes
-------------
* IGT: IGT_8644 -> IGT_8645
* Linux: xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670 -> xe-pw-158211v1
IGT_8644: 069c5ee6eb658181e7264883c6c4fba41fc917a4 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8645: 8645
xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670: e1c1b3e03e356d1e20432dcb0d38ad44d5e92670
xe-pw-158211v1: 158211v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/index.html
^ permalink raw reply [flat|nested] 24+ messages in thread
* ✗ Xe.CI.Full: failure for Rust GPUVM support
2025-11-28 14:14 [PATCH 0/4] Rust GPUVM support Alice Ryhl
` (7 preceding siblings ...)
2025-11-28 16:14 ` ✓ Xe.CI.BAT: success " Patchwork
@ 2025-11-28 17:03 ` Patchwork
8 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-11-28 17:03 UTC (permalink / raw)
To: Alice Ryhl; +Cc: intel-xe
== Series Details ==
Series: Rust GPUVM support
URL : https://patchwork.freedesktop.org/series/158211/
State : failure
== Summary ==
CI Bug Log - changes from xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670_FULL -> xe-pw-158211v1_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-158211v1_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-158211v1_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-158211v1_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@kms_colorop@plane-xr24-xr24-bt2020_inv_oetf-bt2020_oetf:
- shard-bmg: NOTRUN -> [SKIP][1] +7 other tests skip
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_colorop@plane-xr24-xr24-bt2020_inv_oetf-bt2020_oetf.html
- shard-adlp: NOTRUN -> [SKIP][2] +8 other tests skip
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_colorop@plane-xr24-xr24-bt2020_inv_oetf-bt2020_oetf.html
* igt@kms_colorop@plane-xr24-xr24-multiply_inv_125:
- shard-dg2-set2: NOTRUN -> [SKIP][3] +7 other tests skip
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-464/igt@kms_colorop@plane-xr24-xr24-multiply_inv_125.html
* igt@kms_colorop@plane-xr30-xr30-ctm_3x4_bt709_enc_dec:
- shard-lnl: NOTRUN -> [SKIP][4] +10 other tests skip
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_colorop@plane-xr30-xr30-ctm_3x4_bt709_enc_dec.html
* igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling:
- shard-adlp: NOTRUN -> [FAIL][5] +2 other tests fail
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling.html
#### Suppressed ####
The following results come from untrusted machines, tests, or statuses.
They do not affect the overall result.
* {igt@kms_colorop@plane-xr24-xr24-gamma_2_2_inv_oetf}:
- shard-bmg: NOTRUN -> [SKIP][6]
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_colorop@plane-xr24-xr24-gamma_2_2_inv_oetf.html
- shard-adlp: NOTRUN -> [SKIP][7]
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@kms_colorop@plane-xr24-xr24-gamma_2_2_inv_oetf.html
- shard-dg2-set2: NOTRUN -> [SKIP][8]
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-433/igt@kms_colorop@plane-xr24-xr24-gamma_2_2_inv_oetf.html
* {igt@kms_colorop@plane-xr30-xr30-gamma_2_2_inv_oetf}:
- shard-lnl: NOTRUN -> [SKIP][9] +2 other tests skip
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_colorop@plane-xr30-xr30-gamma_2_2_inv_oetf.html
Known issues
------------
Here are the changes found in xe-pw-158211v1_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@intel_hwmon@hwmon-read:
- shard-lnl: NOTRUN -> [SKIP][10] ([Intel XE#1125])
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@intel_hwmon@hwmon-read.html
* igt@kms_async_flips@async-flip-hang@pipe-a-edp-1:
- shard-lnl: NOTRUN -> [FAIL][11] ([Intel XE#6676]) +4 other tests fail
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_async_flips@async-flip-hang@pipe-a-edp-1.html
* igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-c-edp-1:
- shard-lnl: [PASS][12] -> [FAIL][13] ([Intel XE#5993])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-c-edp-1.html
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-c-edp-1.html
* igt@kms_async_flips@test-cursor-atomic:
- shard-lnl: NOTRUN -> [SKIP][14] ([Intel XE#664])
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_async_flips@test-cursor-atomic.html
* igt@kms_async_flips@test-time-stamp:
- shard-lnl: NOTRUN -> [FAIL][15] ([Intel XE#6677]) +5 other tests fail
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@kms_async_flips@test-time-stamp.html
* igt@kms_big_fb@4-tiled-32bpp-rotate-270:
- shard-dg2-set2: NOTRUN -> [SKIP][16] ([Intel XE#316]) +4 other tests skip
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@kms_big_fb@4-tiled-32bpp-rotate-270.html
* igt@kms_big_fb@4-tiled-64bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#2327]) +7 other tests skip
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_big_fb@4-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@4-tiled-8bpp-rotate-180:
- shard-adlp: NOTRUN -> [SKIP][18] ([Intel XE#1124]) +15 other tests skip
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_big_fb@4-tiled-8bpp-rotate-180.html
* igt@kms_big_fb@4-tiled-addfb-size-overflow:
- shard-adlp: NOTRUN -> [SKIP][19] ([Intel XE#610])
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@kms_big_fb@4-tiled-addfb-size-overflow.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip:
- shard-lnl: NOTRUN -> [SKIP][20] ([Intel XE#1407]) +8 other tests skip
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
* igt@kms_big_fb@linear-8bpp-rotate-270:
- shard-adlp: NOTRUN -> [SKIP][21] ([Intel XE#316]) +3 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_big_fb@linear-8bpp-rotate-270.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-180:
- shard-adlp: NOTRUN -> [FAIL][22] ([Intel XE#1874])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html
* igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
- shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#607])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@y-tiled-addfb-size-overflow:
- shard-lnl: NOTRUN -> [SKIP][24] ([Intel XE#1428])
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_big_fb@y-tiled-addfb-size-overflow.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-adlp: NOTRUN -> [FAIL][25] ([Intel XE#6699])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#1124]) +15 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
- shard-adlp: NOTRUN -> [FAIL][27] ([Intel XE#1231]) +2 other tests fail
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
* igt@kms_big_fb@yf-tiled-64bpp-rotate-180:
- shard-dg2-set2: NOTRUN -> [SKIP][28] ([Intel XE#1124]) +17 other tests skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@kms_big_fb@yf-tiled-64bpp-rotate-180.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-0:
- shard-lnl: NOTRUN -> [SKIP][29] ([Intel XE#1124]) +18 other tests skip
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_big_fb@yf-tiled-8bpp-rotate-0.html
* igt@kms_big_fb@yf-tiled-addfb:
- shard-adlp: NOTRUN -> [SKIP][30] ([Intel XE#619])
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_big_fb@yf-tiled-addfb.html
- shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#2328]) +1 other test skip
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-4/igt@kms_big_fb@yf-tiled-addfb.html
- shard-dg2-set2: NOTRUN -> [SKIP][32] ([Intel XE#619])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-434/igt@kms_big_fb@yf-tiled-addfb.html
- shard-lnl: NOTRUN -> [SKIP][33] ([Intel XE#1467])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_big_fb@yf-tiled-addfb.html
* igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p:
- shard-lnl: NOTRUN -> [SKIP][34] ([Intel XE#2191]) +3 other tests skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
* igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p:
- shard-adlp: NOTRUN -> [SKIP][35] ([Intel XE#2191]) +2 other tests skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
* igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p:
- shard-bmg: [PASS][36] -> [SKIP][37] ([Intel XE#2314] / [Intel XE#2894])
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-8/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#2314] / [Intel XE#2894])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
- shard-dg2-set2: NOTRUN -> [SKIP][39] ([Intel XE#2191])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-433/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
* igt@kms_bw@linear-tiling-1-displays-1920x1080p:
- shard-dg2-set2: NOTRUN -> [SKIP][40] ([Intel XE#367]) +6 other tests skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-466/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-1-displays-2560x1440p:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#367]) +7 other tests skip
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_bw@linear-tiling-1-displays-2560x1440p.html
* igt@kms_bw@linear-tiling-2-displays-2560x1440p:
- shard-adlp: NOTRUN -> [SKIP][42] ([Intel XE#367]) +9 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_bw@linear-tiling-2-displays-2560x1440p.html
- shard-lnl: NOTRUN -> [SKIP][43] ([Intel XE#367]) +2 other tests skip
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@kms_bw@linear-tiling-2-displays-2560x1440p.html
* igt@kms_bw@linear-tiling-4-displays-2160x1440p:
- shard-lnl: NOTRUN -> [SKIP][44] ([Intel XE#1512]) +1 other test skip
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@kms_bw@linear-tiling-4-displays-2160x1440p.html
* igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs-cc:
- shard-lnl: NOTRUN -> [SKIP][45] ([Intel XE#2887]) +27 other tests skip
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@crc-primary-basic-4-tiled-bmg-ccs@pipe-c-edp-1:
- shard-lnl: NOTRUN -> [SKIP][46] ([Intel XE#2669]) +3 other tests skip
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_ccs@crc-primary-basic-4-tiled-bmg-ccs@pipe-c-edp-1.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs:
- shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#2887]) +29 other tests skip
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][48] ([Intel XE#787]) +86 other tests skip
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-1.html
* igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][49] ([Intel XE#455] / [Intel XE#787]) +57 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-1.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
- shard-adlp: NOTRUN -> [SKIP][50] ([Intel XE#3442])
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
- shard-dg2-set2: NOTRUN -> [SKIP][51] ([Intel XE#3442])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-a-edp-1:
- shard-lnl: NOTRUN -> [SKIP][52] ([Intel XE#2669] / [Intel XE#3433]) +3 other tests skip
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-a-edp-1.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs:
- shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#3432])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][54] ([Intel XE#787]) +160 other tests skip
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-464/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6.html
* igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs@pipe-d-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][55] ([Intel XE#455] / [Intel XE#787]) +45 other tests skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs@pipe-d-dp-4.html
* igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][56] ([Intel XE#2907]) +2 other tests skip
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs:
- shard-adlp: NOTRUN -> [SKIP][57] ([Intel XE#2907]) +4 other tests skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2:
- shard-bmg: NOTRUN -> [SKIP][58] ([Intel XE#2652] / [Intel XE#787]) +31 other tests skip
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2.html
* igt@kms_chamelium_color@ctm-red-to-blue:
- shard-adlp: NOTRUN -> [SKIP][59] ([Intel XE#306]) +1 other test skip
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_chamelium_color@ctm-red-to-blue.html
- shard-bmg: NOTRUN -> [SKIP][60] ([Intel XE#2325]) +2 other tests skip
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_chamelium_color@ctm-red-to-blue.html
* igt@kms_chamelium_color@gamma:
- shard-dg2-set2: NOTRUN -> [SKIP][61] ([Intel XE#306]) +1 other test skip
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@kms_chamelium_color@gamma.html
- shard-lnl: NOTRUN -> [SKIP][62] ([Intel XE#306])
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@kms_chamelium_color@gamma.html
* igt@kms_chamelium_frames@dp-crc-single:
- shard-adlp: NOTRUN -> [SKIP][63] ([Intel XE#373]) +15 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_chamelium_frames@dp-crc-single.html
- shard-bmg: NOTRUN -> [SKIP][64] ([Intel XE#2252]) +13 other tests skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@kms_chamelium_frames@dp-crc-single.html
* igt@kms_chamelium_hpd@vga-hpd:
- shard-dg2-set2: NOTRUN -> [SKIP][65] ([Intel XE#373]) +13 other tests skip
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@kms_chamelium_hpd@vga-hpd.html
- shard-lnl: NOTRUN -> [SKIP][66] ([Intel XE#373]) +15 other tests skip
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_chamelium_hpd@vga-hpd.html
* igt@kms_colorop@plane-xr24-xr24-ctm_3x4_50_desat:
- shard-adlp: NOTRUN -> [SKIP][67] ([Intel XE#6704]) +4 other tests skip
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@kms_colorop@plane-xr24-xr24-ctm_3x4_50_desat.html
* igt@kms_colorop@plane-xr24-xr24-ctm_3x4_bt709_enc:
- shard-dg2-set2: NOTRUN -> [SKIP][68] ([Intel XE#6704]) +6 other tests skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-435/igt@kms_colorop@plane-xr24-xr24-ctm_3x4_bt709_enc.html
* igt@kms_colorop@plane-xr30-xr30-pq_125_inv_eotf:
- shard-lnl: NOTRUN -> [SKIP][69] ([Intel XE#6704]) +3 other tests skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_colorop@plane-xr30-xr30-pq_125_inv_eotf.html
* igt@kms_colorop@plane-xr30-xr30-srgb_inv_eotf_lut:
- shard-bmg: NOTRUN -> [SKIP][70] ([Intel XE#6704]) +5 other tests skip
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_colorop@plane-xr30-xr30-srgb_inv_eotf_lut.html
* igt@kms_content_protection@dp-mst-suspend-resume:
- shard-bmg: NOTRUN -> [SKIP][71] ([Intel XE#6692])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_content_protection@dp-mst-suspend-resume.html
- shard-adlp: NOTRUN -> [SKIP][72] ([Intel XE#6692])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_content_protection@dp-mst-suspend-resume.html
- shard-lnl: NOTRUN -> [SKIP][73] ([Intel XE#6692])
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_content_protection@dp-mst-suspend-resume.html
* igt@kms_content_protection@legacy:
- shard-adlp: NOTRUN -> [SKIP][74] ([Intel XE#455]) +37 other tests skip
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_content_protection@legacy.html
- shard-bmg: NOTRUN -> [FAIL][75] ([Intel XE#1178]) +1 other test fail
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_content_protection@legacy.html
- shard-dg2-set2: NOTRUN -> [FAIL][76] ([Intel XE#1178]) +1 other test fail
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@kms_content_protection@legacy.html
- shard-lnl: NOTRUN -> [SKIP][77] ([Intel XE#3278]) +1 other test skip
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_content_protection@legacy.html
* igt@kms_cursor_crc@cursor-offscreen-512x512:
- shard-lnl: NOTRUN -> [SKIP][78] ([Intel XE#2321]) +2 other tests skip
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_cursor_crc@cursor-offscreen-512x512.html
* igt@kms_cursor_crc@cursor-onscreen-512x170:
- shard-dg2-set2: NOTRUN -> [SKIP][79] ([Intel XE#308]) +5 other tests skip
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@kms_cursor_crc@cursor-onscreen-512x170.html
* igt@kms_cursor_crc@cursor-random-32x32:
- shard-bmg: NOTRUN -> [SKIP][80] ([Intel XE#2320]) +9 other tests skip
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_cursor_crc@cursor-random-32x32.html
* igt@kms_cursor_crc@cursor-random-512x512:
- shard-adlp: NOTRUN -> [SKIP][81] ([Intel XE#308]) +4 other tests skip
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@kms_cursor_crc@cursor-random-512x512.html
- shard-bmg: NOTRUN -> [SKIP][82] ([Intel XE#2321]) +3 other tests skip
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_cursor_crc@cursor-random-512x512.html
* igt@kms_cursor_crc@cursor-rapid-movement-128x42:
- shard-lnl: NOTRUN -> [SKIP][83] ([Intel XE#1424]) +10 other tests skip
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_cursor_crc@cursor-rapid-movement-128x42.html
* igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic:
- shard-bmg: [PASS][84] -> [SKIP][85] ([Intel XE#2291])
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-8/igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic.html
* igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy:
- shard-lnl: NOTRUN -> [SKIP][86] ([Intel XE#309]) +7 other tests skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html
* igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy:
- shard-bmg: NOTRUN -> [SKIP][87] ([Intel XE#2291]) +1 other test skip
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-adlp: NOTRUN -> [SKIP][88] ([Intel XE#309]) +5 other tests skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic:
- shard-bmg: [PASS][89] -> [FAIL][90] ([Intel XE#4633])
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-7/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
* igt@kms_dirtyfb@drrs-dirtyfb-ioctl:
- shard-lnl: NOTRUN -> [SKIP][91] ([Intel XE#1508])
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@kms_dirtyfb@drrs-dirtyfb-ioctl.html
* igt@kms_dirtyfb@psr-dirtyfb-ioctl:
- shard-bmg: NOTRUN -> [SKIP][92] ([Intel XE#1508]) +1 other test skip
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-4/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html
* igt@kms_display_modes@extended-mode-basic:
- shard-adlp: NOTRUN -> [SKIP][93] ([Intel XE#4302])
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_display_modes@extended-mode-basic.html
- shard-bmg: NOTRUN -> [SKIP][94] ([Intel XE#4302])
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_display_modes@extended-mode-basic.html
* igt@kms_dp_link_training@non-uhbr-mst:
- shard-dg2-set2: NOTRUN -> [SKIP][95] ([Intel XE#4354])
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-464/igt@kms_dp_link_training@non-uhbr-mst.html
- shard-adlp: NOTRUN -> [SKIP][96] ([Intel XE#4354])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_dp_link_training@non-uhbr-mst.html
* igt@kms_dp_link_training@uhbr-mst:
- shard-lnl: NOTRUN -> [SKIP][97] ([Intel XE#4354]) +2 other tests skip
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_dp_link_training@uhbr-mst.html
- shard-adlp: NOTRUN -> [SKIP][98] ([Intel XE#4356])
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@kms_dp_link_training@uhbr-mst.html
- shard-bmg: NOTRUN -> [SKIP][99] ([Intel XE#4354]) +1 other test skip
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_dp_link_training@uhbr-mst.html
- shard-dg2-set2: NOTRUN -> [SKIP][100] ([Intel XE#4356])
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@kms_dp_link_training@uhbr-mst.html
* igt@kms_dp_linktrain_fallback@dp-fallback:
- shard-lnl: NOTRUN -> [SKIP][101] ([Intel XE#4294])
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_dp_linktrain_fallback@dp-fallback.html
* igt@kms_dp_linktrain_fallback@dsc-fallback:
- shard-bmg: NOTRUN -> [SKIP][102] ([Intel XE#4331])
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_dp_linktrain_fallback@dsc-fallback.html
- shard-adlp: NOTRUN -> [SKIP][103] ([Intel XE#4331])
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@kms_dp_linktrain_fallback@dsc-fallback.html
- shard-dg2-set2: NOTRUN -> [SKIP][104] ([Intel XE#4331])
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@kms_dp_linktrain_fallback@dsc-fallback.html
- shard-lnl: NOTRUN -> [SKIP][105] ([Intel XE#4331])
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@kms_dp_linktrain_fallback@dsc-fallback.html
* igt@kms_dsc@dsc-fractional-bpp:
- shard-bmg: NOTRUN -> [SKIP][106] ([Intel XE#2244])
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@kms_dsc@dsc-fractional-bpp.html
- shard-lnl: NOTRUN -> [SKIP][107] ([Intel XE#2244])
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@kms_dsc@dsc-fractional-bpp.html
* igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests:
- shard-bmg: NOTRUN -> [SKIP][108] ([Intel XE#4422])
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests.html
- shard-dg2-set2: NOTRUN -> [SKIP][109] ([Intel XE#4422])
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests.html
- shard-lnl: NOTRUN -> [SKIP][110] ([Intel XE#4422])
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests.html
* igt@kms_fbcon_fbt@psr:
- shard-bmg: NOTRUN -> [SKIP][111] ([Intel XE#776])
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_fbcon_fbt@psr.html
- shard-adlp: NOTRUN -> [SKIP][112] ([Intel XE#776])
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_fbcon_fbt@psr.html
* igt@kms_feature_discovery@chamelium:
- shard-bmg: NOTRUN -> [SKIP][113] ([Intel XE#2372])
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_feature_discovery@chamelium.html
- shard-dg2-set2: NOTRUN -> [SKIP][114] ([Intel XE#701])
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@kms_feature_discovery@chamelium.html
- shard-lnl: NOTRUN -> [SKIP][115] ([Intel XE#701])
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_feature_discovery@chamelium.html
- shard-adlp: NOTRUN -> [SKIP][116] ([Intel XE#701])
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_feature_discovery@chamelium.html
* igt@kms_feature_discovery@dp-mst:
- shard-adlp: NOTRUN -> [SKIP][117] ([Intel XE#1137])
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_feature_discovery@dp-mst.html
- shard-bmg: NOTRUN -> [SKIP][118] ([Intel XE#2375])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_feature_discovery@dp-mst.html
- shard-lnl: NOTRUN -> [SKIP][119] ([Intel XE#1137])
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@kms_feature_discovery@dp-mst.html
* igt@kms_flip@2x-flip-vs-rmfb-interruptible:
- shard-lnl: NOTRUN -> [SKIP][120] ([Intel XE#1421]) +12 other tests skip
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html
* igt@kms_flip@2x-nonexisting-fb-interruptible:
- shard-adlp: NOTRUN -> [SKIP][121] ([Intel XE#310]) +10 other tests skip
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@kms_flip@2x-nonexisting-fb-interruptible.html
* igt@kms_flip@2x-wf_vblank-ts-check:
- shard-bmg: NOTRUN -> [SKIP][122] ([Intel XE#2316]) +1 other test skip
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_flip@2x-wf_vblank-ts-check.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling:
- shard-bmg: NOTRUN -> [SKIP][123] ([Intel XE#2293] / [Intel XE#2380]) +6 other tests skip
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-upscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][124] ([Intel XE#1401]) +7 other tests skip
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-upscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling:
- shard-lnl: NOTRUN -> [SKIP][125] ([Intel XE#1397] / [Intel XE#1745]) +2 other tests skip
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][126] ([Intel XE#1397]) +2 other tests skip
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][127] ([Intel XE#2293]) +6 other tests skip
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling:
- shard-lnl: NOTRUN -> [SKIP][128] ([Intel XE#1401] / [Intel XE#1745]) +7 other tests skip
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
* igt@kms_force_connector_basic@force-edid:
- shard-lnl: NOTRUN -> [SKIP][129] ([Intel XE#352])
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_force_connector_basic@force-edid.html
* igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-blt:
- shard-adlp: NOTRUN -> [SKIP][130] ([Intel XE#6312]) +2 other tests skip
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-blt.html
- shard-dg2-set2: NOTRUN -> [SKIP][131] ([Intel XE#6312]) +4 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-434/igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-blt.html
- shard-lnl: NOTRUN -> [SKIP][132] ([Intel XE#6312]) +1 other test skip
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-shrfb-draw-render:
- shard-lnl: NOTRUN -> [SKIP][133] ([Intel XE#651]) +16 other tests skip
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render:
- shard-adlp: NOTRUN -> [SKIP][134] ([Intel XE#656]) +70 other tests skip
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-suspend:
- shard-dg2-set2: NOTRUN -> [SKIP][135] ([Intel XE#651]) +42 other tests skip
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@kms_frontbuffer_tracking@drrs-suspend.html
* igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-render:
- shard-adlp: NOTRUN -> [FAIL][136] ([Intel XE#5671]) +1 other test fail
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-pgflip-blt:
- shard-adlp: [PASS][137] -> [FAIL][138] ([Intel XE#5671]) +1 other test fail
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-1/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-pgflip-blt.html
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-mmap-wc:
- shard-lnl: NOTRUN -> [SKIP][139] ([Intel XE#656]) +68 other tests skip
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][140] ([Intel XE#4141]) +23 other tests skip
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-tiling-4:
- shard-adlp: NOTRUN -> [SKIP][141] ([Intel XE#1151])
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_frontbuffer_tracking@fbc-tiling-4.html
* igt@kms_frontbuffer_tracking@fbc-tiling-y:
- shard-lnl: NOTRUN -> [SKIP][142] ([Intel XE#1469])
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-indfb-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][143] ([Intel XE#2311]) +39 other tests skip
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][144] ([Intel XE#2312]) +16 other tests skip
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-suspend:
- shard-adlp: NOTRUN -> [SKIP][145] ([Intel XE#651]) +19 other tests skip
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_frontbuffer_tracking@fbcdrrs-suspend.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-onoff:
- shard-adlp: NOTRUN -> [SKIP][146] ([Intel XE#653]) +24 other tests skip
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][147] ([Intel XE#2313]) +43 other tests skip
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-plflip-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][148] ([Intel XE#653]) +50 other tests skip
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-plflip-blt.html
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: NOTRUN -> [SKIP][149] ([Intel XE#3374] / [Intel XE#3544])
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_hdr@brightness-with-hdr.html
* igt@kms_joiner@basic-big-joiner:
- shard-adlp: NOTRUN -> [SKIP][150] ([Intel XE#2925] / [Intel XE#346])
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_joiner@basic-big-joiner.html
- shard-dg2-set2: NOTRUN -> [SKIP][151] ([Intel XE#2925] / [Intel XE#346]) +1 other test skip
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-434/igt@kms_joiner@basic-big-joiner.html
- shard-lnl: NOTRUN -> [SKIP][152] ([Intel XE#2925] / [Intel XE#346])
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_joiner@basic-big-joiner.html
* igt@kms_joiner@invalid-modeset-big-joiner:
- shard-bmg: NOTRUN -> [SKIP][153] ([Intel XE#346])
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_joiner@invalid-modeset-big-joiner.html
* igt@kms_joiner@invalid-modeset-force-big-joiner:
- shard-adlp: NOTRUN -> [SKIP][154] ([Intel XE#2925] / [Intel XE#3012])
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_joiner@invalid-modeset-force-big-joiner.html
- shard-bmg: NOTRUN -> [SKIP][155] ([Intel XE#3012])
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-2/igt@kms_joiner@invalid-modeset-force-big-joiner.html
* igt@kms_joiner@invalid-modeset-force-ultra-joiner:
- shard-adlp: NOTRUN -> [SKIP][156] ([Intel XE#2925])
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html
- shard-bmg: NOTRUN -> [SKIP][157] ([Intel XE#2934])
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html
- shard-dg2-set2: NOTRUN -> [SKIP][158] ([Intel XE#2925])
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html
- shard-lnl: NOTRUN -> [SKIP][159] ([Intel XE#2925])
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html
* igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
- shard-lnl: NOTRUN -> [SKIP][160] ([Intel XE#356])
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
* igt@kms_plane@pixel-format-source-clamping@pipe-a-plane-0:
- shard-lnl: NOTRUN -> [FAIL][161] ([Intel XE#5195]) +2 other tests fail
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_plane@pixel-format-source-clamping@pipe-a-plane-0.html
* igt@kms_plane@pixel-format-source-clamping@pipe-b-plane-0:
- shard-adlp: NOTRUN -> [FAIL][162] ([Intel XE#5195]) +4 other tests fail
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@kms_plane@pixel-format-source-clamping@pipe-b-plane-0.html
* igt@kms_plane_lowres@tiling-y:
- shard-lnl: NOTRUN -> [SKIP][163] ([Intel XE#599]) +1 other test skip
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_plane_lowres@tiling-y.html
* igt@kms_plane_lowres@tiling-yf:
- shard-bmg: NOTRUN -> [SKIP][164] ([Intel XE#2393]) +1 other test skip
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_plane_lowres@tiling-yf.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-adlp: NOTRUN -> [SKIP][165] ([Intel XE#4596])
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_plane_multiple@tiling-x@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [FAIL][166] ([Intel XE#4658]) +3 other tests fail
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@kms_plane_multiple@tiling-x@pipe-b-edp-1.html
* igt@kms_plane_scaling@plane-downscale-factor-0-5-with-modifiers:
- shard-lnl: NOTRUN -> [SKIP][167] ([Intel XE#6691]) +7 other tests skip
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-modifiers.html
* igt@kms_pm_backlight@brightness-with-dpms:
- shard-bmg: NOTRUN -> [SKIP][168] ([Intel XE#2938])
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_pm_backlight@brightness-with-dpms.html
- shard-adlp: NOTRUN -> [SKIP][169] ([Intel XE#2938])
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_pm_backlight@brightness-with-dpms.html
- shard-dg2-set2: NOTRUN -> [SKIP][170] ([Intel XE#2938])
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@kms_pm_backlight@brightness-with-dpms.html
* igt@kms_pm_dc@dc3co-vpb-simulation:
- shard-bmg: NOTRUN -> [SKIP][171] ([Intel XE#2391])
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_pm_dc@dc3co-vpb-simulation.html
- shard-adlp: NOTRUN -> [SKIP][172] ([Intel XE#1122])
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_pm_dc@dc3co-vpb-simulation.html
- shard-dg2-set2: NOTRUN -> [SKIP][173] ([Intel XE#1122])
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-466/igt@kms_pm_dc@dc3co-vpb-simulation.html
- shard-lnl: NOTRUN -> [SKIP][174] ([Intel XE#736])
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_pm_dc@dc3co-vpb-simulation.html
* igt@kms_pm_dc@dc5-dpms-negative:
- shard-lnl: NOTRUN -> [SKIP][175] ([Intel XE#1131])
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_pm_dc@dc5-dpms-negative.html
* igt@kms_pm_dc@dc5-psr:
- shard-lnl: NOTRUN -> [FAIL][176] ([Intel XE#718])
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_pm_dc@dc5-psr.html
* igt@kms_pm_dc@dc9-dpms:
- shard-adlp: NOTRUN -> [SKIP][177] ([Intel XE#734])
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@kms_pm_dc@dc9-dpms.html
* igt@kms_pm_lpsp@kms-lpsp:
- shard-bmg: NOTRUN -> [SKIP][178] ([Intel XE#2499])
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_pm_lpsp@kms-lpsp.html
* igt@kms_pm_rpm@dpms-lpsp:
- shard-bmg: NOTRUN -> [SKIP][179] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#836])
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_pm_rpm@dpms-lpsp.html
* igt@kms_pm_rpm@modeset-non-lpsp-stress:
- shard-adlp: NOTRUN -> [SKIP][180] ([Intel XE#836])
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_pm_rpm@modeset-non-lpsp-stress.html
- shard-lnl: NOTRUN -> [SKIP][181] ([Intel XE#1439] / [Intel XE#3141])
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_pm_rpm@modeset-non-lpsp-stress.html
* igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area:
- shard-lnl: NOTRUN -> [SKIP][182] ([Intel XE#1406] / [Intel XE#2893] / [Intel XE#4608])
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area.html
* igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [SKIP][183] ([Intel XE#1406] / [Intel XE#4608]) +1 other test skip
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area@pipe-b-edp-1.html
* igt@kms_psr2_sf@pr-cursor-plane-update-sf:
- shard-dg2-set2: NOTRUN -> [SKIP][184] ([Intel XE#1406] / [Intel XE#1489]) +9 other tests skip
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-434/igt@kms_psr2_sf@pr-cursor-plane-update-sf.html
- shard-lnl: NOTRUN -> [SKIP][185] ([Intel XE#1406] / [Intel XE#2893]) +4 other tests skip
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_psr2_sf@pr-cursor-plane-update-sf.html
* igt@kms_psr2_sf@pr-overlay-plane-move-continuous-exceed-fully-sf:
- shard-adlp: NOTRUN -> [SKIP][186] ([Intel XE#1406] / [Intel XE#1489]) +10 other tests skip
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_psr2_sf@pr-overlay-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-sf:
- shard-bmg: NOTRUN -> [SKIP][187] ([Intel XE#1406] / [Intel XE#1489]) +10 other tests skip
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-sf.html
* igt@kms_psr2_su@page_flip-p010:
- shard-adlp: NOTRUN -> [SKIP][188] ([Intel XE#1122] / [Intel XE#1406] / [Intel XE#5580])
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_psr2_su@page_flip-p010.html
- shard-bmg: NOTRUN -> [SKIP][189] ([Intel XE#1406] / [Intel XE#2387])
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-2/igt@kms_psr2_su@page_flip-p010.html
- shard-dg2-set2: NOTRUN -> [SKIP][190] ([Intel XE#1122] / [Intel XE#1406])
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@kms_psr2_su@page_flip-p010.html
- shard-lnl: NOTRUN -> [SKIP][191] ([Intel XE#1128] / [Intel XE#1406])
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@kms_psr2_su@page_flip-p010.html
* igt@kms_psr@fbc-psr2-cursor-plane-move:
- shard-adlp: NOTRUN -> [SKIP][192] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +20 other tests skip
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@kms_psr@fbc-psr2-cursor-plane-move.html
* igt@kms_psr@fbc-psr2-primary-blt@edp-1:
- shard-lnl: NOTRUN -> [SKIP][193] ([Intel XE#1406] / [Intel XE#4609]) +2 other tests skip
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@kms_psr@fbc-psr2-primary-blt@edp-1.html
* igt@kms_psr@fbc-psr2-sprite-plane-move:
- shard-dg2-set2: NOTRUN -> [SKIP][194] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +15 other tests skip
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-466/igt@kms_psr@fbc-psr2-sprite-plane-move.html
- shard-lnl: NOTRUN -> [SKIP][195] ([Intel XE#1406]) +10 other tests skip
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_psr@fbc-psr2-sprite-plane-move.html
* igt@kms_psr@psr-primary-page-flip:
- shard-bmg: NOTRUN -> [SKIP][196] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +17 other tests skip
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@kms_psr@psr-primary-page-flip.html
* igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
- shard-adlp: NOTRUN -> [SKIP][197] ([Intel XE#1406] / [Intel XE#2939] / [Intel XE#5585])
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
- shard-bmg: NOTRUN -> [SKIP][198] ([Intel XE#1406] / [Intel XE#2414])
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
- shard-dg2-set2: NOTRUN -> [SKIP][199] ([Intel XE#1406] / [Intel XE#2939])
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
- shard-lnl: NOTRUN -> [SKIP][200] ([Intel XE#1406] / [Intel XE#4692])
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
* igt@kms_rotation_crc@primary-rotation-270:
- shard-bmg: NOTRUN -> [SKIP][201] ([Intel XE#3414] / [Intel XE#3904])
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_rotation_crc@primary-rotation-270.html
- shard-adlp: NOTRUN -> [SKIP][202] ([Intel XE#3414]) +1 other test skip
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_rotation_crc@primary-rotation-270.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-180:
- shard-bmg: NOTRUN -> [SKIP][203] ([Intel XE#2330]) +1 other test skip
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html
- shard-dg2-set2: NOTRUN -> [SKIP][204] ([Intel XE#1127])
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-464/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html
- shard-lnl: NOTRUN -> [SKIP][205] ([Intel XE#1127]) +1 other test skip
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180:
- shard-adlp: NOTRUN -> [SKIP][206] ([Intel XE#1127])
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180.html
* igt@kms_rotation_crc@sprite-rotation-270:
- shard-dg2-set2: NOTRUN -> [SKIP][207] ([Intel XE#3414]) +2 other tests skip
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-434/igt@kms_rotation_crc@sprite-rotation-270.html
- shard-lnl: NOTRUN -> [SKIP][208] ([Intel XE#3414] / [Intel XE#3904]) +2 other tests skip
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_rotation_crc@sprite-rotation-270.html
* igt@kms_setmode@basic:
- shard-bmg: NOTRUN -> [FAIL][209] ([Intel XE#6361]) +6 other tests fail
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@kms_setmode@basic.html
* igt@kms_setmode@basic@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [FAIL][210] ([Intel XE#6361]) +2 other tests fail
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@kms_setmode@basic@pipe-b-edp-1.html
* igt@kms_setmode@clone-exclusive-crtc:
- shard-lnl: NOTRUN -> [SKIP][211] ([Intel XE#1435])
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_setmode@clone-exclusive-crtc.html
* igt@kms_setmode@invalid-clone-single-crtc-stealing:
- shard-bmg: [PASS][212] -> [SKIP][213] ([Intel XE#1435])
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-4/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
* igt@kms_sharpness_filter@invalid-filter-with-scaling-mode:
- shard-bmg: NOTRUN -> [SKIP][214] ([Intel XE#6503]) +4 other tests skip
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_sharpness_filter@invalid-filter-with-scaling-mode.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-adlp: NOTRUN -> [SKIP][215] ([Intel XE#362])
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
- shard-bmg: NOTRUN -> [SKIP][216] ([Intel XE#2426])
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
- shard-dg2-set2: NOTRUN -> [SKIP][217] ([Intel XE#1500])
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-466/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@kms_vblank@ts-continuation-dpms-suspend@pipe-a-edp-1:
- shard-lnl: [PASS][218] -> [ABORT][219] ([Intel XE#6675])
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@kms_vblank@ts-continuation-dpms-suspend@pipe-a-edp-1.html
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_vblank@ts-continuation-dpms-suspend@pipe-a-edp-1.html
* igt@kms_vrr@cmrr:
- shard-adlp: NOTRUN -> [SKIP][220] ([Intel XE#2168])
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@kms_vrr@cmrr.html
- shard-bmg: NOTRUN -> [SKIP][221] ([Intel XE#2168])
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_vrr@cmrr.html
* igt@kms_vrr@flip-dpms:
- shard-dg2-set2: NOTRUN -> [SKIP][222] ([Intel XE#455]) +26 other tests skip
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-434/igt@kms_vrr@flip-dpms.html
* igt@kms_vrr@negative-basic:
- shard-lnl: NOTRUN -> [SKIP][223] ([Intel XE#1499]) +2 other tests skip
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@kms_vrr@negative-basic.html
* igt@kms_vrr@seamless-rr-switch-drrs:
- shard-bmg: NOTRUN -> [SKIP][224] ([Intel XE#1499]) +3 other tests skip
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_vrr@seamless-rr-switch-drrs.html
* igt@sriov_basic@enable-vfs-autoprobe-off:
- shard-lnl: NOTRUN -> [SKIP][225] ([Intel XE#1091] / [Intel XE#2849])
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@sriov_basic@enable-vfs-autoprobe-off.html
* igt@xe_ccs@ctrl-surf-copy-new-ctx:
- shard-adlp: NOTRUN -> [SKIP][226] ([Intel XE#455] / [Intel XE#488] / [Intel XE#5607]) +1 other test skip
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_ccs@ctrl-surf-copy-new-ctx.html
* igt@xe_compute@ccs-mode-basic:
- shard-bmg: NOTRUN -> [SKIP][227] ([Intel XE#6599]) +1 other test skip
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@xe_compute@ccs-mode-basic.html
- shard-adlp: NOTRUN -> [SKIP][228] ([Intel XE#6599]) +1 other test skip
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_compute@ccs-mode-basic.html
- shard-lnl: NOTRUN -> [SKIP][229] ([Intel XE#1447])
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@xe_compute@ccs-mode-basic.html
* igt@xe_compute@eu-busy-10s:
- shard-dg2-set2: NOTRUN -> [SKIP][230] ([Intel XE#6598])
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@xe_compute@eu-busy-10s.html
- shard-lnl: NOTRUN -> [SKIP][231] ([Intel XE#6592] / [Intel XE#6645])
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_compute@eu-busy-10s.html
* igt@xe_compute_preempt@compute-preempt:
- shard-adlp: NOTRUN -> [SKIP][232] ([Intel XE#6360])
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_compute_preempt@compute-preempt.html
* igt@xe_compute_preempt@compute-preempt-many-vram-evict:
- shard-dg2-set2: NOTRUN -> [SKIP][233] ([Intel XE#6360]) +1 other test skip
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@xe_compute_preempt@compute-preempt-many-vram-evict.html
- shard-lnl: NOTRUN -> [SKIP][234] ([Intel XE#5191])
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_compute_preempt@compute-preempt-many-vram-evict.html
- shard-adlp: NOTRUN -> [SKIP][235] ([Intel XE#5191])
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_compute_preempt@compute-preempt-many-vram-evict.html
* igt@xe_configfs@survivability-mode:
- shard-dg2-set2: NOTRUN -> [SKIP][236] ([Intel XE#6010])
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@xe_configfs@survivability-mode.html
* igt@xe_copy_basic@mem-copy-linear-0x369:
- shard-adlp: NOTRUN -> [SKIP][237] ([Intel XE#1123]) +1 other test skip
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_copy_basic@mem-copy-linear-0x369.html
- shard-dg2-set2: NOTRUN -> [SKIP][238] ([Intel XE#1123]) +1 other test skip
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@xe_copy_basic@mem-copy-linear-0x369.html
* igt@xe_copy_basic@mem-matrix-copy-2x2:
- shard-adlp: NOTRUN -> [SKIP][239] ([Intel XE#5300]) +1 other test skip
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_copy_basic@mem-matrix-copy-2x2.html
- shard-dg2-set2: NOTRUN -> [SKIP][240] ([Intel XE#5300])
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-433/igt@xe_copy_basic@mem-matrix-copy-2x2.html
* igt@xe_eu_stall@invalid-gt-id:
- shard-dg2-set2: NOTRUN -> [SKIP][241] ([Intel XE#5626])
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@xe_eu_stall@invalid-gt-id.html
- shard-adlp: NOTRUN -> [SKIP][242] ([Intel XE#5626])
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_eu_stall@invalid-gt-id.html
* igt@xe_eudebug@basic-vm-bind-ufence-delay-ack:
- shard-dg2-set2: NOTRUN -> [SKIP][243] ([Intel XE#4837]) +20 other tests skip
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-435/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html
- shard-lnl: NOTRUN -> [SKIP][244] ([Intel XE#4837]) +21 other tests skip
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html
* igt@xe_eudebug_online@interrupt-other:
- shard-adlp: NOTRUN -> [SKIP][245] ([Intel XE#4837] / [Intel XE#5565]) +17 other tests skip
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_eudebug_online@interrupt-other.html
- shard-bmg: NOTRUN -> [SKIP][246] ([Intel XE#4837]) +19 other tests skip
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@xe_eudebug_online@interrupt-other.html
* igt@xe_eudebug_online@pagefault-read-stress:
- shard-adlp: NOTRUN -> [SKIP][247] ([Intel XE#6665])
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_eudebug_online@pagefault-read-stress.html
- shard-bmg: NOTRUN -> [SKIP][248] ([Intel XE#6681])
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@xe_eudebug_online@pagefault-read-stress.html
* igt@xe_evict@evict-beng-mixed-threads-small-multi-vm:
- shard-adlp: NOTRUN -> [SKIP][249] ([Intel XE#261] / [Intel XE#688]) +3 other tests skip
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_evict@evict-beng-mixed-threads-small-multi-vm.html
* igt@xe_evict@evict-beng-small:
- shard-adlp: NOTRUN -> [SKIP][250] ([Intel XE#261] / [Intel XE#5564] / [Intel XE#688]) +1 other test skip
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_evict@evict-beng-small.html
* igt@xe_evict@evict-beng-threads-large-multi-vm:
- shard-lnl: NOTRUN -> [SKIP][251] ([Intel XE#688]) +13 other tests skip
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_evict@evict-beng-threads-large-multi-vm.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-adlp: NOTRUN -> [SKIP][252] ([Intel XE#261]) +6 other tests skip
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_evict@evict-mixed-many-threads-small.html
- shard-bmg: NOTRUN -> [INCOMPLETE][253] ([Intel XE#6321] / [Intel XE#6606])
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_evict_ccs@evict-overcommit-parallel-instantfree-samefd:
- shard-adlp: NOTRUN -> [SKIP][254] ([Intel XE#688]) +4 other tests skip
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_evict_ccs@evict-overcommit-parallel-instantfree-samefd.html
* igt@xe_exec_basic@multigpu-no-exec-userptr-invalidate:
- shard-lnl: NOTRUN -> [SKIP][255] ([Intel XE#1392]) +11 other tests skip
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_exec_basic@multigpu-no-exec-userptr-invalidate.html
* igt@xe_exec_basic@multigpu-once-basic-defer-bind:
- shard-adlp: NOTRUN -> [SKIP][256] ([Intel XE#1392] / [Intel XE#5575]) +5 other tests skip
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html
- shard-bmg: NOTRUN -> [SKIP][257] ([Intel XE#2322]) +10 other tests skip
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html
* igt@xe_exec_fault_mode@many-bindexecqueue-userptr-imm:
- shard-adlp: NOTRUN -> [SKIP][258] ([Intel XE#288] / [Intel XE#5561]) +44 other tests skip
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_exec_fault_mode@many-bindexecqueue-userptr-imm.html
* igt@xe_exec_fault_mode@twice-userptr-invalidate-race:
- shard-dg2-set2: NOTRUN -> [SKIP][259] ([Intel XE#288]) +37 other tests skip
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-466/igt@xe_exec_fault_mode@twice-userptr-invalidate-race.html
* igt@xe_exec_mix_modes@exec-simple-batch-store-lr:
- shard-dg2-set2: NOTRUN -> [SKIP][260] ([Intel XE#2360])
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-464/igt@xe_exec_mix_modes@exec-simple-batch-store-lr.html
- shard-adlp: NOTRUN -> [SKIP][261] ([Intel XE#2360])
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_exec_mix_modes@exec-simple-batch-store-lr.html
* igt@xe_exec_reset@cm-close-fd:
- shard-adlp: NOTRUN -> [DMESG-WARN][262] ([Intel XE#3868])
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_exec_reset@cm-close-fd.html
* igt@xe_exec_system_allocator@madvise-no-range-invalidate-same-attr:
- shard-lnl: NOTRUN -> [WARN][263] ([Intel XE#5786])
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_exec_system_allocator@madvise-no-range-invalidate-same-attr.html
* igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset:
- shard-lnl: NOTRUN -> [SKIP][264] ([Intel XE#5007])
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset.html
- shard-bmg: NOTRUN -> [SKIP][265] ([Intel XE#5007])
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset.html
* igt@xe_exec_system_allocator@once-mmap-remap-ro-dontunmap:
- shard-adlp: NOTRUN -> [SKIP][266] ([Intel XE#4915]) +475 other tests skip
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_exec_system_allocator@once-mmap-remap-ro-dontunmap.html
* igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-single-vma:
- shard-lnl: [PASS][267] -> [FAIL][268] ([Intel XE#5625])
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-7/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-single-vma.html
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-single-vma.html
* igt@xe_exec_system_allocator@process-many-mmap-new-huge-nomemset:
- shard-lnl: NOTRUN -> [SKIP][269] ([Intel XE#4943]) +36 other tests skip
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_exec_system_allocator@process-many-mmap-new-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-new-huge:
- shard-bmg: NOTRUN -> [SKIP][270] ([Intel XE#4943]) +44 other tests skip
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-new-huge.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-eocheck:
- shard-dg2-set2: NOTRUN -> [SKIP][271] ([Intel XE#4915]) +424 other tests skip
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-eocheck.html
* igt@xe_fault_injection@exec-queue-create-fail-xe_pxp_exec_queue_add:
- shard-adlp: NOTRUN -> [SKIP][272] ([Intel XE#6281])
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_fault_injection@exec-queue-create-fail-xe_pxp_exec_queue_add.html
* igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv:
- shard-dg2-set2: NOTRUN -> [ABORT][273] ([Intel XE#5466])
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-433/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
- shard-lnl: NOTRUN -> [ABORT][274] ([Intel XE#5466])
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
- shard-adlp: NOTRUN -> [ABORT][275] ([Intel XE#5466] / [Intel XE#5530])
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
- shard-bmg: NOTRUN -> [ABORT][276] ([Intel XE#5466] / [Intel XE#5530])
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
* igt@xe_gt_freq@freq_suspend:
- shard-bmg: [PASS][277] -> [ABORT][278] ([Intel XE#6675])
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-2/igt@xe_gt_freq@freq_suspend.html
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@xe_gt_freq@freq_suspend.html
* igt@xe_live_ktest@xe_bo:
- shard-adlp: NOTRUN -> [SKIP][279] ([Intel XE#2229] / [Intel XE#455]) +1 other test skip
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_live_ktest@xe_bo.html
* igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
- shard-bmg: NOTRUN -> [SKIP][280] ([Intel XE#2229])
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
- shard-adlp: NOTRUN -> [SKIP][281] ([Intel XE#2229])
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
* igt@xe_media_fill@media-fill:
- shard-dg2-set2: NOTRUN -> [SKIP][282] ([Intel XE#560])
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-466/igt@xe_media_fill@media-fill.html
* igt@xe_mmap@pci-membarrier-parallel:
- shard-adlp: NOTRUN -> [SKIP][283] ([Intel XE#5100])
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_mmap@pci-membarrier-parallel.html
* igt@xe_mmap@small-bar:
- shard-adlp: NOTRUN -> [SKIP][284] ([Intel XE#512])
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_mmap@small-bar.html
- shard-bmg: NOTRUN -> [SKIP][285] ([Intel XE#586])
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@xe_mmap@small-bar.html
- shard-dg2-set2: NOTRUN -> [SKIP][286] ([Intel XE#512])
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@xe_mmap@small-bar.html
- shard-lnl: NOTRUN -> [SKIP][287] ([Intel XE#512])
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_mmap@small-bar.html
* igt@xe_module_load@load:
- shard-lnl: ([PASS][288], [PASS][289], [PASS][290], [PASS][291], [PASS][292], [PASS][293], [PASS][294], [PASS][295], [PASS][296], [PASS][297], [PASS][298], [PASS][299], [PASS][300], [PASS][301], [PASS][302], [PASS][303], [PASS][304], [PASS][305], [PASS][306], [PASS][307], [PASS][308], [PASS][309], [PASS][310], [PASS][311], [PASS][312]) -> ([PASS][313], [SKIP][314], [PASS][315], [PASS][316], [PASS][317], [PASS][318], [PASS][319], [PASS][320], [PASS][321], [PASS][322], [PASS][323], [PASS][324], [PASS][325], [PASS][326], [PASS][327], [PASS][328], [PASS][329], [PASS][330], [PASS][331], [PASS][332], [PASS][333], [PASS][334], [PASS][335], [PASS][336], [PASS][337], [PASS][338]) ([Intel XE#378])
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-2/igt@xe_module_load@load.html
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-8/igt@xe_module_load@load.html
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-8/igt@xe_module_load@load.html
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-5/igt@xe_module_load@load.html
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-7/igt@xe_module_load@load.html
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-5/igt@xe_module_load@load.html
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-5/igt@xe_module_load@load.html
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-7/igt@xe_module_load@load.html
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-7/igt@xe_module_load@load.html
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-5/igt@xe_module_load@load.html
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-4/igt@xe_module_load@load.html
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-1/igt@xe_module_load@load.html
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-2/igt@xe_module_load@load.html
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@xe_module_load@load.html
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-2/igt@xe_module_load@load.html
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-4/igt@xe_module_load@load.html
[304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@xe_module_load@load.html
[305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-4/igt@xe_module_load@load.html
[306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@xe_module_load@load.html
[307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@xe_module_load@load.html
[308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-1/igt@xe_module_load@load.html
[309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-1/igt@xe_module_load@load.html
[310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-4/igt@xe_module_load@load.html
[311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-8/igt@xe_module_load@load.html
[312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-8/igt@xe_module_load@load.html
[313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_module_load@load.html
[314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_module_load@load.html
[315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_module_load@load.html
[316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_module_load@load.html
[317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_module_load@load.html
[318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@xe_module_load@load.html
[319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_module_load@load.html
[320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@xe_module_load@load.html
[321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@xe_module_load@load.html
[322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@xe_module_load@load.html
[323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_module_load@load.html
[324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_module_load@load.html
[325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_module_load@load.html
[326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_module_load@load.html
[327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_module_load@load.html
[328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_module_load@load.html
[329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_module_load@load.html
[330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_module_load@load.html
[331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@xe_module_load@load.html
[332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_module_load@load.html
[333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-2/igt@xe_module_load@load.html
[334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@xe_module_load@load.html
[335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@xe_module_load@load.html
[336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@xe_module_load@load.html
[337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@xe_module_load@load.html
[338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_module_load@load.html
- shard-bmg: ([PASS][339], [PASS][340], [PASS][341], [PASS][342], [PASS][343], [PASS][344], [PASS][345], [PASS][346], [PASS][347], [PASS][348], [PASS][349], [PASS][350], [PASS][351], [PASS][352], [PASS][353], [PASS][354], [PASS][355], [PASS][356], [PASS][357], [PASS][358], [PASS][359], [PASS][360], [PASS][361], [PASS][362], [PASS][363]) -> ([PASS][364], [SKIP][365], [PASS][366], [PASS][367], [PASS][368], [PASS][369], [PASS][370], [PASS][371], [PASS][372], [PASS][373], [PASS][374], [PASS][375], [PASS][376], [PASS][377], [PASS][378], [PASS][379], [PASS][380], [PASS][381], [PASS][382], [PASS][383], [PASS][384], [PASS][385], [PASS][386], [PASS][387], [PASS][388]) ([Intel XE#2457])
[339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-4/igt@xe_module_load@load.html
[340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-5/igt@xe_module_load@load.html
[341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-5/igt@xe_module_load@load.html
[342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-5/igt@xe_module_load@load.html
[343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-7/igt@xe_module_load@load.html
[344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-7/igt@xe_module_load@load.html
[345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-7/igt@xe_module_load@load.html
[346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-6/igt@xe_module_load@load.html
[347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-7/igt@xe_module_load@load.html
[348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-3/igt@xe_module_load@load.html
[349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-5/igt@xe_module_load@load.html
[350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-3/igt@xe_module_load@load.html
[351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-6/igt@xe_module_load@load.html
[352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-8/igt@xe_module_load@load.html
[353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-4/igt@xe_module_load@load.html
[354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-3/igt@xe_module_load@load.html
[355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-4/igt@xe_module_load@load.html
[356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-8/igt@xe_module_load@load.html
[357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-8/igt@xe_module_load@load.html
[358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-1/igt@xe_module_load@load.html
[359]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-2/igt@xe_module_load@load.html
[360]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-2/igt@xe_module_load@load.html
[361]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-1/igt@xe_module_load@load.html
[362]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-1/igt@xe_module_load@load.html
[363]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-6/igt@xe_module_load@load.html
[364]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-2/igt@xe_module_load@load.html
[365]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@xe_module_load@load.html
[366]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@xe_module_load@load.html
[367]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@xe_module_load@load.html
[368]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@xe_module_load@load.html
[369]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-2/igt@xe_module_load@load.html
[370]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-2/igt@xe_module_load@load.html
[371]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-2/igt@xe_module_load@load.html
[372]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@xe_module_load@load.html
[373]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@xe_module_load@load.html
[374]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@xe_module_load@load.html
[375]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@xe_module_load@load.html
[376]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@xe_module_load@load.html
[377]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@xe_module_load@load.html
[378]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@xe_module_load@load.html
[379]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@xe_module_load@load.html
[380]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@xe_module_load@load.html
[381]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@xe_module_load@load.html
[382]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@xe_module_load@load.html
[383]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@xe_module_load@load.html
[384]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-4/igt@xe_module_load@load.html
[385]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@xe_module_load@load.html
[386]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@xe_module_load@load.html
[387]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@xe_module_load@load.html
[388]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-4/igt@xe_module_load@load.html
- shard-adlp: ([PASS][389], [PASS][390], [PASS][391], [PASS][392], [PASS][393], [PASS][394], [PASS][395], [PASS][396], [PASS][397], [PASS][398], [PASS][399], [PASS][400], [PASS][401], [PASS][402], [PASS][403], [PASS][404], [PASS][405], [PASS][406], [PASS][407], [PASS][408], [PASS][409], [PASS][410], [PASS][411], [PASS][412], [PASS][413]) -> ([PASS][414], [PASS][415], [PASS][416], [PASS][417], [PASS][418], [PASS][419], [PASS][420], [PASS][421], [PASS][422], [PASS][423], [PASS][424], [PASS][425], [PASS][426], [PASS][427], [PASS][428], [PASS][429], [PASS][430], [PASS][431], [PASS][432], [PASS][433], [SKIP][434], [PASS][435], [PASS][436], [PASS][437], [PASS][438], [PASS][439]) ([Intel XE#378] / [Intel XE#5612])
[389]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-4/igt@xe_module_load@load.html
[390]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-1/igt@xe_module_load@load.html
[391]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-1/igt@xe_module_load@load.html
[392]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-3/igt@xe_module_load@load.html
[393]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-8/igt@xe_module_load@load.html
[394]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-3/igt@xe_module_load@load.html
[395]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-2/igt@xe_module_load@load.html
[396]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-2/igt@xe_module_load@load.html
[397]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-2/igt@xe_module_load@load.html
[398]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-4/igt@xe_module_load@load.html
[399]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-4/igt@xe_module_load@load.html
[400]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-8/igt@xe_module_load@load.html
[401]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-9/igt@xe_module_load@load.html
[402]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-9/igt@xe_module_load@load.html
[403]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-9/igt@xe_module_load@load.html
[404]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-9/igt@xe_module_load@load.html
[405]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-6/igt@xe_module_load@load.html
[406]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-6/igt@xe_module_load@load.html
[407]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-4/igt@xe_module_load@load.html
[408]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-8/igt@xe_module_load@load.html
[409]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-1/igt@xe_module_load@load.html
[410]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-1/igt@xe_module_load@load.html
[411]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-6/igt@xe_module_load@load.html
[412]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-3/igt@xe_module_load@load.html
[413]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-3/igt@xe_module_load@load.html
[414]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_module_load@load.html
[415]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_module_load@load.html
[416]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_module_load@load.html
[417]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_module_load@load.html
[418]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_module_load@load.html
[419]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_module_load@load.html
[420]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_module_load@load.html
[421]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_module_load@load.html
[422]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_module_load@load.html
[423]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_module_load@load.html
[424]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_module_load@load.html
[425]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_module_load@load.html
[426]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_module_load@load.html
[427]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_module_load@load.html
[428]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_module_load@load.html
[429]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_module_load@load.html
[430]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_module_load@load.html
[431]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_module_load@load.html
[432]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_module_load@load.html
[433]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_module_load@load.html
[434]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_module_load@load.html
[435]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_module_load@load.html
[436]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_module_load@load.html
[437]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_module_load@load.html
[438]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_module_load@load.html
[439]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_module_load@load.html
* igt@xe_noexec_ping_pong@basic:
- shard-adlp: NOTRUN -> [SKIP][440] ([Intel XE#6259])
[440]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_noexec_ping_pong@basic.html
- shard-lnl: NOTRUN -> [SKIP][441] ([Intel XE#6259])
[441]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@xe_noexec_ping_pong@basic.html
* igt@xe_oa@buffer-fill:
- shard-adlp: NOTRUN -> [SKIP][442] ([Intel XE#3573]) +10 other tests skip
[442]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_oa@buffer-fill.html
* igt@xe_oa@closed-fd-and-unmapped-access:
- shard-dg2-set2: NOTRUN -> [SKIP][443] ([Intel XE#3573]) +12 other tests skip
[443]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-434/igt@xe_oa@closed-fd-and-unmapped-access.html
* igt@xe_oa@mmio-triggered-reports-read:
- shard-adlp: NOTRUN -> [SKIP][444] ([Intel XE#6032])
[444]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-3/igt@xe_oa@mmio-triggered-reports-read.html
- shard-dg2-set2: NOTRUN -> [SKIP][445] ([Intel XE#6032])
[445]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@xe_oa@mmio-triggered-reports-read.html
* igt@xe_oa@non-zero-reason-all:
- shard-dg2-set2: NOTRUN -> [SKIP][446] ([Intel XE#6377])
[446]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-463/igt@xe_oa@non-zero-reason-all.html
- shard-adlp: NOTRUN -> [SKIP][447] ([Intel XE#6377])
[447]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_oa@non-zero-reason-all.html
* igt@xe_pat@pat-index-xelp:
- shard-bmg: NOTRUN -> [SKIP][448] ([Intel XE#2245])
[448]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@xe_pat@pat-index-xelp.html
* igt@xe_pat@pat-index-xelpg:
- shard-bmg: NOTRUN -> [SKIP][449] ([Intel XE#2236])
[449]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@xe_pat@pat-index-xelpg.html
- shard-dg2-set2: NOTRUN -> [SKIP][450] ([Intel XE#979])
[450]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-464/igt@xe_pat@pat-index-xelpg.html
- shard-lnl: NOTRUN -> [SKIP][451] ([Intel XE#979])
[451]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_pat@pat-index-xelpg.html
* igt@xe_pm@d3cold-basic-exec:
- shard-dg2-set2: NOTRUN -> [SKIP][452] ([Intel XE#2284] / [Intel XE#366]) +1 other test skip
[452]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-433/igt@xe_pm@d3cold-basic-exec.html
* igt@xe_pm@d3cold-i2c:
- shard-lnl: NOTRUN -> [SKIP][453] ([Intel XE#5694])
[453]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_pm@d3cold-i2c.html
- shard-bmg: NOTRUN -> [SKIP][454] ([Intel XE#5694])
[454]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-8/igt@xe_pm@d3cold-i2c.html
- shard-adlp: NOTRUN -> [SKIP][455] ([Intel XE#5694])
[455]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_pm@d3cold-i2c.html
* igt@xe_pm@d3cold-mocs:
- shard-adlp: NOTRUN -> [SKIP][456] ([Intel XE#2284])
[456]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@xe_pm@d3cold-mocs.html
- shard-bmg: NOTRUN -> [SKIP][457] ([Intel XE#2284]) +2 other tests skip
[457]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@xe_pm@d3cold-mocs.html
- shard-dg2-set2: NOTRUN -> [SKIP][458] ([Intel XE#2284])
[458]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@xe_pm@d3cold-mocs.html
- shard-lnl: NOTRUN -> [SKIP][459] ([Intel XE#2284])
[459]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_pm@d3cold-mocs.html
* igt@xe_pm@d3hot-mmap-vram:
- shard-adlp: NOTRUN -> [SKIP][460] ([Intel XE#1948])
[460]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_pm@d3hot-mmap-vram.html
- shard-lnl: NOTRUN -> [SKIP][461] ([Intel XE#1948])
[461]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-5/igt@xe_pm@d3hot-mmap-vram.html
* igt@xe_pm@s2idle-basic-exec:
- shard-bmg: NOTRUN -> [ABORT][462] ([Intel XE#6675]) +16 other tests abort
[462]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@xe_pm@s2idle-basic-exec.html
* igt@xe_pm@s2idle-exec-after:
- shard-lnl: NOTRUN -> [ABORT][463] ([Intel XE#6675]) +17 other tests abort
[463]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@xe_pm@s2idle-exec-after.html
* igt@xe_pm@s3-basic-exec:
- shard-dg2-set2: NOTRUN -> [ABORT][464] ([Intel XE#6675]) +16 other tests abort
[464]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@xe_pm@s3-basic-exec.html
- shard-lnl: NOTRUN -> [SKIP][465] ([Intel XE#584]) +1 other test skip
[465]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_pm@s3-basic-exec.html
* igt@xe_pm@s3-d3cold-basic-exec:
- shard-adlp: NOTRUN -> [SKIP][466] ([Intel XE#2284] / [Intel XE#366]) +1 other test skip
[466]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_pm@s3-d3cold-basic-exec.html
- shard-lnl: NOTRUN -> [SKIP][467] ([Intel XE#2284] / [Intel XE#366])
[467]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_pm@s3-d3cold-basic-exec.html
* igt@xe_pm@s3-d3hot-basic-exec:
- shard-adlp: NOTRUN -> [ABORT][468] ([Intel XE#6675]) +11 other tests abort
[468]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_pm@s3-d3hot-basic-exec.html
* igt@xe_pm@vram-d3cold-threshold:
- shard-lnl: NOTRUN -> [SKIP][469] ([Intel XE#579])
[469]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_pm@vram-d3cold-threshold.html
* igt@xe_pm_residency@cpg-basic:
- shard-adlp: [PASS][470] -> [ABORT][471] ([Intel XE#6675]) +3 other tests abort
[470]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-3/igt@xe_pm_residency@cpg-basic.html
[471]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-8/igt@xe_pm_residency@cpg-basic.html
* igt@xe_pmu@all-fn-engine-activity-load:
- shard-lnl: NOTRUN -> [SKIP][472] ([Intel XE#4650])
[472]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@xe_pmu@all-fn-engine-activity-load.html
* igt@xe_pmu@engine-activity-accuracy-90:
- shard-lnl: NOTRUN -> [FAIL][473] ([Intel XE#6251]) +1 other test fail
[473]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-7/igt@xe_pmu@engine-activity-accuracy-90.html
* igt@xe_pxp@display-black-pxp-fb:
- shard-adlp: NOTRUN -> [SKIP][474] ([Intel XE#4733])
[474]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@xe_pxp@display-black-pxp-fb.html
* igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq:
- shard-adlp: NOTRUN -> [SKIP][475] ([Intel XE#4733] / [Intel XE#5594]) +5 other tests skip
[475]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-4/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
- shard-bmg: NOTRUN -> [SKIP][476] ([Intel XE#4733]) +6 other tests skip
[476]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
* igt@xe_pxp@pxp-termination-key-update-post-rpm:
- shard-dg2-set2: NOTRUN -> [SKIP][477] ([Intel XE#4733]) +4 other tests skip
[477]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@xe_pxp@pxp-termination-key-update-post-rpm.html
* igt@xe_query@multigpu-query-invalid-cs-cycles:
- shard-bmg: NOTRUN -> [SKIP][478] ([Intel XE#944]) +2 other tests skip
[478]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@xe_query@multigpu-query-invalid-cs-cycles.html
- shard-dg2-set2: NOTRUN -> [SKIP][479] ([Intel XE#944]) +2 other tests skip
[479]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-433/igt@xe_query@multigpu-query-invalid-cs-cycles.html
* igt@xe_query@multigpu-query-invalid-extension:
- shard-adlp: NOTRUN -> [SKIP][480] ([Intel XE#944]) +2 other tests skip
[480]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-6/igt@xe_query@multigpu-query-invalid-extension.html
* igt@xe_query@multigpu-query-topology:
- shard-lnl: NOTRUN -> [SKIP][481] ([Intel XE#944]) +2 other tests skip
[481]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@xe_query@multigpu-query-topology.html
* igt@xe_spin_batch@spin-mem-copy:
- shard-dg2-set2: NOTRUN -> [SKIP][482] ([Intel XE#4821])
[482]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@xe_spin_batch@spin-mem-copy.html
- shard-adlp: NOTRUN -> [SKIP][483] ([Intel XE#4821])
[483]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-2/igt@xe_spin_batch@spin-mem-copy.html
* igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling:
- shard-dg2-set2: NOTRUN -> [SKIP][484] ([Intel XE#4130]) +1 other test skip
[484]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-466/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html
* igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs:
- shard-lnl: NOTRUN -> [SKIP][485] ([Intel XE#4130]) +1 other test skip
[485]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs.html
* igt@xe_sriov_flr@flr-each-isolation:
- shard-dg2-set2: NOTRUN -> [SKIP][486] ([Intel XE#3342])
[486]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@xe_sriov_flr@flr-each-isolation.html
* igt@xe_sriov_flr@flr-vfs-parallel:
- shard-dg2-set2: NOTRUN -> [SKIP][487] ([Intel XE#4273])
[487]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-436/igt@xe_sriov_flr@flr-vfs-parallel.html
- shard-lnl: NOTRUN -> [SKIP][488] ([Intel XE#4273]) +1 other test skip
[488]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@xe_sriov_flr@flr-vfs-parallel.html
* igt@xe_sriov_scheduling@nonpreempt-engine-resets:
- shard-dg2-set2: NOTRUN -> [SKIP][489] ([Intel XE#4351]) +1 other test skip
[489]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-432/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html
- shard-lnl: NOTRUN -> [SKIP][490] ([Intel XE#4351]) +1 other test skip
[490]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html
#### Possible fixes ####
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
- shard-adlp: [FAIL][491] ([Intel XE#1231]) -> [PASS][492]
[491]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-3/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
[492]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-1/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p:
- shard-bmg: [SKIP][493] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][494]
[493]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
[494]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
* igt@kms_cursor_legacy@cursora-vs-flipb-toggle:
- shard-bmg: [SKIP][495] ([Intel XE#2291]) -> [PASS][496]
[495]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-2/igt@kms_cursor_legacy@cursora-vs-flipb-toggle.html
[496]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-3/igt@kms_cursor_legacy@cursora-vs-flipb-toggle.html
* igt@kms_flip@2x-wf_vblank-ts-check-interruptible:
- shard-bmg: [SKIP][497] ([Intel XE#2316]) -> [PASS][498] +1 other test pass
[497]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-6/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html
[498]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html
* igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b:
- shard-adlp: [ABORT][499] ([Intel XE#6675]) -> [PASS][500] +1 other test pass
[499]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-adlp-9/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
[500]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-adlp-9/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
* igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1:
- shard-lnl: [FAIL][501] ([Intel XE#2142]) -> [PASS][502] +1 other test pass
[501]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-8/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
[502]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-4/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
* igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma:
- shard-lnl: [FAIL][503] ([Intel XE#5625]) -> [PASS][504]
[503]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-8/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma.html
[504]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma.html
* igt@xe_pm@s3-multiple-execs:
- shard-bmg: [ABORT][505] ([Intel XE#6675]) -> [PASS][506]
[505]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-6/igt@xe_pm@s3-multiple-execs.html
[506]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-1/igt@xe_pm@s3-multiple-execs.html
- shard-dg2-set2: [ABORT][507] ([Intel XE#6675]) -> [PASS][508]
[507]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-dg2-433/igt@xe_pm@s3-multiple-execs.html
[508]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-dg2-435/igt@xe_pm@s3-multiple-execs.html
* igt@xe_pxp@pxp-stale-queue-post-suspend:
- shard-lnl: [ABORT][509] ([Intel XE#6675]) -> [PASS][510] +2 other tests pass
[509]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-5/igt@xe_pxp@pxp-stale-queue-post-suspend.html
[510]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@xe_pxp@pxp-stale-queue-post-suspend.html
#### Warnings ####
* igt@kms_async_flips@async-flip-with-page-flip-events-linear:
- shard-lnl: [FAIL][511] ([Intel XE#6676]) -> [FAIL][512] ([Intel XE#5993] / [Intel XE#6676])
[511]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html
[512]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][513] ([Intel XE#2312]) -> [SKIP][514] ([Intel XE#2311]) +2 other tests skip
[513]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-2/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-plflip-blt.html
[514]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-onoff:
- shard-bmg: [SKIP][515] ([Intel XE#2312]) -> [SKIP][516] ([Intel XE#4141])
[515]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-onoff.html
[516]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-plflip-blt:
- shard-bmg: [SKIP][517] ([Intel XE#4141]) -> [SKIP][518] ([Intel XE#2312])
[517]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-plflip-blt.html
[518]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-spr-indfb-draw-blt:
- shard-bmg: [SKIP][519] ([Intel XE#2311]) -> [SKIP][520] ([Intel XE#2312]) +1 other test skip
[519]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-spr-indfb-draw-blt.html
[520]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-onoff:
- shard-bmg: [SKIP][521] ([Intel XE#2313]) -> [SKIP][522] ([Intel XE#2312]) +2 other tests skip
[521]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-onoff.html
[522]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][523] ([Intel XE#2312]) -> [SKIP][524] ([Intel XE#2313]) +2 other tests skip
[523]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-cur-indfb-draw-mmap-wc.html
[524]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b:
- shard-lnl: [INCOMPLETE][525] ([Intel XE#1035]) -> [ABORT][526] ([Intel XE#6675])
[525]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
[526]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-8/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
* igt@kms_vblank@ts-continuation-dpms-suspend@pipe-c-edp-1:
- shard-lnl: [ABORT][527] ([Intel XE#6675]) -> [INCOMPLETE][528] ([Intel XE#4488])
[527]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670/shard-lnl-3/igt@kms_vblank@ts-continuation-dpms-suspend@pipe-c-edp-1.html
[528]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/shard-lnl-1/igt@kms_vblank@ts-continuation-dpms-suspend@pipe-c-edp-1.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#1035]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1035
[Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
[Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
[Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1125]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1125
[Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
[Intel XE#1128]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1128
[Intel XE#1131]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1131
[Intel XE#1137]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1137
[Intel XE#1151]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1151
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1231]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1231
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1397]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1397
[Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1428]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1428
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1447]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1447
[Intel XE#1467]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1467
[Intel XE#1469]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1469
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1500]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1500
[Intel XE#1508]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1508
[Intel XE#1512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1512
[Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
[Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
[Intel XE#1948]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1948
[Intel XE#2142]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2142
[Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2236]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2236
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2245]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2245
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2328]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2328
[Intel XE#2330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2330
[Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
[Intel XE#2372]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2372
[Intel XE#2375]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2375
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2387]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2387
[Intel XE#2391]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2391
[Intel XE#2393]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2393
[Intel XE#2414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2414
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457
[Intel XE#2499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2499
[Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2669]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2669
[Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
[Intel XE#2925]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2925
[Intel XE#2934]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2934
[Intel XE#2938]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2938
[Intel XE#2939]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2939
[Intel XE#3012]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3012
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
[Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
[Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
[Intel XE#3141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3141
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#3278]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3278
[Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
[Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
[Intel XE#3433]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3433
[Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
[Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
[Intel XE#352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/352
[Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
[Intel XE#356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/356
[Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
[Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362
[Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
[Intel XE#3868]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3868
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4273]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4273
[Intel XE#4294]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4294
[Intel XE#4302]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4302
[Intel XE#4331]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4331
[Intel XE#4351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4351
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4356
[Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422
[Intel XE#4488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4488
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
[Intel XE#4608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4608
[Intel XE#4609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4609
[Intel XE#4633]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4633
[Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650
[Intel XE#4658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4658
[Intel XE#4692]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4692
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4821]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4821
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/488
[Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5007]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5007
[Intel XE#5100]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5100
[Intel XE#512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/512
[Intel XE#5191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5191
[Intel XE#5195]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5195
[Intel XE#5300]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5300
[Intel XE#5466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5466
[Intel XE#5530]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5530
[Intel XE#5561]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5561
[Intel XE#5564]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5564
[Intel XE#5565]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5565
[Intel XE#5575]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5575
[Intel XE#5580]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5580
[Intel XE#5585]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5585
[Intel XE#5594]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5594
[Intel XE#560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/560
[Intel XE#5607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5607
[Intel XE#5612]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5612
[Intel XE#5625]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5625
[Intel XE#5626]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5626
[Intel XE#5671]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5671
[Intel XE#5694]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5694
[Intel XE#5786]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5786
[Intel XE#579]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/579
[Intel XE#584]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/584
[Intel XE#586]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/586
[Intel XE#599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/599
[Intel XE#5993]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5993
[Intel XE#6010]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6010
[Intel XE#6032]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6032
[Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
[Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
[Intel XE#619]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/619
[Intel XE#6251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6251
[Intel XE#6259]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6259
[Intel XE#6281]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6281
[Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312
[Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321
[Intel XE#6360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6360
[Intel XE#6361]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6361
[Intel XE#6377]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6377
[Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#6592]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6592
[Intel XE#6598]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6598
[Intel XE#6599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6599
[Intel XE#6606]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6606
[Intel XE#664]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/664
[Intel XE#6645]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6645
[Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
[Intel XE#6675]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6675
[Intel XE#6676]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6676
[Intel XE#6677]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6677
[Intel XE#6681]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6681
[Intel XE#6691]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6691
[Intel XE#6692]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6692
[Intel XE#6699]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6699
[Intel XE#6704]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6704
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[Intel XE#701]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/701
[Intel XE#718]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/718
[Intel XE#734]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/734
[Intel XE#736]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/736
[Intel XE#776]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/776
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
[Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979
Build changes
-------------
* IGT: IGT_8644 -> IGT_8645
* Linux: xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670 -> xe-pw-158211v1
IGT_8644: 069c5ee6eb658181e7264883c6c4fba41fc917a4 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8645: 8645
xe-4166-e1c1b3e03e356d1e20432dcb0d38ad44d5e92670: e1c1b3e03e356d1e20432dcb0d38ad44d5e92670
xe-pw-158211v1: 158211v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158211v1/index.html
[-- Attachment #2: Type: text/html, Size: 150294 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc()
2025-11-28 14:24 ` Boris Brezillon
@ 2025-12-01 9:55 ` Alice Ryhl
0 siblings, 0 replies; 24+ messages in thread
From: Alice Ryhl @ 2025-12-01 9:55 UTC (permalink / raw)
To: Boris Brezillon
Cc: Danilo Krummrich, Daniel Almeida, Matthew Brost,
Thomas Hellström, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Steven Price,
Liviu Dudau, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Trevor Gross, Frank Binns, Matt Coster, Rob Clark,
Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang, Sean Paul,
Marijn Suijten, Lyude Paul, Lucas De Marchi, Rodrigo Vivi,
Sumit Semwal, Christian König, dri-devel, linux-kernel,
rust-for-linux, linux-arm-msm, freedreno, nouveau, intel-xe,
linux-media, linaro-mm-sig
On Fri, Nov 28, 2025 at 03:24:03PM +0100, Boris Brezillon wrote:
> On Fri, 28 Nov 2025 14:14:15 +0000
> Alice Ryhl <aliceryhl@google.com> wrote:
>
> > When calling drm_gpuvm_bo_obtain_prealloc() and using immediate mode,
> > this may result in a call to ops->vm_bo_free(vm_bo) while holding the
> > GEM's gpuva mutex. This is a problem if ops->vm_bo_free(vm_bo) performs
> > any operations that are not safe in the fence signalling critical path,
> > and it turns out that Panthor (the only current user of the method)
> > calls drm_gem_shmem_unpin() which takes a resv lock internally.
> >
> > This constitutes both a violation of signalling safety and lock
> > inversion. To fix this, we modify the method to internally take the GEM's
> > gpuva mutex so that the mutex can be unlocked before freeing the
> > preallocated vm_bo.
> >
> > Note that this modification introduces a requirement that the driver
> > uses immediate mode to call drm_gpuvm_bo_obtain_prealloc() as it would
> > otherwise take the wrong lock.
> >
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
>
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
>
> Should we add a Fixes tag?
Yeah, let's add:
Fixes: 63e919a31625 ("panthor: use drm_gpuva_unlink_defer()")
Alice
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-11-28 14:14 ` [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction Alice Ryhl
@ 2025-12-01 15:16 ` Daniel Almeida
2025-12-02 8:39 ` Alice Ryhl
2025-12-19 15:35 ` Danilo Krummrich
1 sibling, 1 reply; 24+ messages in thread
From: Daniel Almeida @ 2025-12-01 15:16 UTC (permalink / raw)
To: Alice Ryhl
Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul,
Lucas De Marchi, Rodrigo Vivi, Sumit Semwal, Christian König,
dri-devel, linux-kernel, rust-for-linux, linux-arm-msm, freedreno,
nouveau, intel-xe, linux-media, linaro-mm-sig, Asahi Lina
Hi Alice,
I find it a bit weird that we reverted to v1, given that the previous gpuvm
attempt was v3. No big deal though.
> On 28 Nov 2025, at 11:14, Alice Ryhl <aliceryhl@google.com> wrote:
>
> Add a GPUVM abstraction to be used by Rust GPU drivers.
>
> GPUVM keeps track of a GPU's virtual address (VA) space and manages the
> corresponding virtual mappings represented by "GPU VA" objects. It also
> keeps track of the gem::Object<T> used to back the mappings through
> GpuVmBo<T>.
>
> This abstraction is only usable by drivers that wish to use GPUVM in
> immediate mode. This allows us to build the locking scheme into the API
> design. It means that the GEM mutex is used for the GEM gpuva list, and
> that the resv lock is used for the extobj list. The evicted list is not
> yet used in this version.
>
> This abstraction provides a special handle called the GpuVmCore, which
> is a wrapper around ARef<GpuVm> that provides access to the interval
> tree. Generally, all changes to the address space require mutable
> access to this unique handle.
>
> Some of the safety comments are still somewhat WIP, but I think the API
> should be sound as-is.
>
> Co-developed-by: Asahi Lina <lina+kernel@asahilina.net>
> Signed-off-by: Asahi Lina <lina+kernel@asahilina.net>
> Co-developed-by: Daniel Almeida <daniel.almeida@collabora.com>
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
> MAINTAINERS | 1 +
> rust/bindings/bindings_helper.h | 2 +
> rust/helpers/drm_gpuvm.c | 43 ++++
> rust/helpers/helpers.c | 1 +
> rust/kernel/drm/gpuvm/mod.rs | 394 +++++++++++++++++++++++++++++++++
> rust/kernel/drm/gpuvm/sm_ops.rs | 469 ++++++++++++++++++++++++++++++++++++++++
> rust/kernel/drm/gpuvm/va.rs | 148 +++++++++++++
> rust/kernel/drm/gpuvm/vm_bo.rs | 213 ++++++++++++++++++
> rust/kernel/drm/mod.rs | 1 +
> 9 files changed, 1272 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 952aed4619c25d395c12962e559d6cd3362f64a7..946629eb9ebf19922bbe782fed37be07067d6bf2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -8591,6 +8591,7 @@ S: Supported
> T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
> F: drivers/gpu/drm/drm_gpuvm.c
> F: include/drm/drm_gpuvm.h
> +F: rust/kernel/drm/gpuvm/
>
> DRM LOG
> M: Jocelyn Falempe <jfalempe@redhat.com>
> diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
> index 2e43c66635a2c9f31bd99b9817bd2d6ab89fbcf2..c776ae198e1db91f010f88ff1d1c888a3036a87f 100644
> --- a/rust/bindings/bindings_helper.h
> +++ b/rust/bindings/bindings_helper.h
> @@ -33,6 +33,7 @@
> #include <drm/drm_drv.h>
> #include <drm/drm_file.h>
> #include <drm/drm_gem.h>
> +#include <drm/drm_gpuvm.h>
> #include <drm/drm_ioctl.h>
> #include <kunit/test.h>
> #include <linux/auxiliary_bus.h>
> @@ -103,6 +104,7 @@ const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
> const gfp_t RUST_CONST_HELPER___GFP_NOWARN = ___GFP_NOWARN;
> const blk_features_t RUST_CONST_HELPER_BLK_FEAT_ROTATIONAL = BLK_FEAT_ROTATIONAL;
> const fop_flags_t RUST_CONST_HELPER_FOP_UNSIGNED_OFFSET = FOP_UNSIGNED_OFFSET;
> +const u32 RUST_CONST_HELPER_DRM_EXEC_INTERRUPTIBLE_WAIT = DRM_EXEC_INTERRUPTIBLE_WAIT;
>
> const xa_mark_t RUST_CONST_HELPER_XA_PRESENT = XA_PRESENT;
>
> diff --git a/rust/helpers/drm_gpuvm.c b/rust/helpers/drm_gpuvm.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..18b7dbd2e32c3162455b344e72ec2940c632cc6b
> --- /dev/null
> +++ b/rust/helpers/drm_gpuvm.c
> @@ -0,0 +1,43 @@
> +// SPDX-License-Identifier: GPL-2.0 or MIT
> +
> +#ifdef CONFIG_DRM_GPUVM
> +
> +#include <drm/drm_gpuvm.h>
> +
> +struct drm_gpuvm *rust_helper_drm_gpuvm_get(struct drm_gpuvm *obj)
> +{
> + return drm_gpuvm_get(obj);
> +}
> +
> +void rust_helper_drm_gpuva_init_from_op(struct drm_gpuva *va, struct drm_gpuva_op_map *op)
> +{
> + drm_gpuva_init_from_op(va, op);
> +}
> +
> +struct drm_gpuvm_bo *rust_helper_drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
> +{
> + return drm_gpuvm_bo_get(vm_bo);
> +}
> +
> +void rust_helper_drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec)
> +{
> + return drm_gpuvm_exec_unlock(vm_exec);
> +}
> +
> +bool rust_helper_drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
> + struct drm_gem_object *obj)
> +{
> + return drm_gpuvm_is_extobj(gpuvm, obj);
> +}
> +
> +int rust_helper_dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx)
> +{
> + return dma_resv_lock(obj, ctx);
> +}
> +
> +void rust_helper_dma_resv_unlock(struct dma_resv *obj)
> +{
> + dma_resv_unlock(obj);
> +}
> +
> +#endif // CONFIG_DRM_GPUVM
> diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
> index 551da6c9b5064c324d6f62bafcec672c6c6f5bee..91f45155eb9c2c4e92b56ee1abf7d45188873f3c 100644
> --- a/rust/helpers/helpers.c
> +++ b/rust/helpers/helpers.c
> @@ -26,6 +26,7 @@
> #include "device.c"
> #include "dma.c"
> #include "drm.c"
> +#include "drm_gpuvm.c"
> #include "err.c"
> #include "irq.c"
> #include "fs.c"
> diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
> new file mode 100644
> index 0000000000000000000000000000000000000000..9834dbb938a3622e46048e9b8e06bc6bf03aa0d2
> --- /dev/null
> +++ b/rust/kernel/drm/gpuvm/mod.rs
> @@ -0,0 +1,394 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +
> +//! DRM GPUVM in immediate mode
> +//!
> +//! Rust abstractions for using GPUVM in immediate mode. This is when the GPUVM state is updated
> +//! during `run_job()`, i.e., in the DMA fence signalling critical path, to ensure that the GPUVM
IMHO: We should initially target synchronous VM_BINDS, which are the opposite
of what you described above.
> +//! and the GPU's virtual address space have the same state at all times.
> +//!
> +//! C header: [`include/drm/drm_gpuvm.h`](srctree/include/drm/drm_gpuvm.h)
> +
> +use kernel::{
> + alloc::{AllocError, Flags as AllocFlags},
> + bindings, drm,
> + drm::gem::IntoGEMObject,
> + error::to_result,
> + prelude::*,
> + sync::aref::{ARef, AlwaysRefCounted},
> + types::Opaque,
> +};
> +
> +use core::{
> + cell::UnsafeCell,
> + marker::PhantomData,
> + mem::{ManuallyDrop, MaybeUninit},
> + ops::{Deref, DerefMut, Range},
> + ptr::{self, NonNull},
> +};
> +
> +mod sm_ops;
> +pub use self::sm_ops::*;
> +
> +mod vm_bo;
> +pub use self::vm_bo::*;
> +
> +mod va;
> +pub use self::va::*;
> +
> +/// A DRM GPU VA manager.
> +///
> +/// This object is refcounted, but the "core" is only accessible using a special unique handle. The
I wonder if `Owned<T>` is a good fit here? IIUC, Owned<T> can be refcounted,
but there is only ever one handle on the Rust side? If so, this seems to be
what we want here?
> +/// core consists of the `core` field and the GPUVM's interval tree.
> +#[repr(C)]
> +#[pin_data]
> +pub struct GpuVm<T: DriverGpuVm> {
> + #[pin]
> + vm: Opaque<bindings::drm_gpuvm>,
> + /// Accessed only through the [`GpuVmCore`] reference.
> + core: UnsafeCell<T>,
This UnsafeCell has been here since Lina’s version. I must say I never
understood why, and perhaps now is a good time to clarify it given the changes
we’re making w.r.t. the “unique handle” thing.
This is just some driver private data. It’s never shared with C. I am not
sure why we need this wrapper.
> + /// Shared data not protected by any lock.
> + #[pin]
> + shared_data: T::SharedData,
Should we deref to this?
> +}
> +
> +// SAFETY: dox
> +unsafe impl<T: DriverGpuVm> AlwaysRefCounted for GpuVm<T> {
> + fn inc_ref(&self) {
> + // SAFETY: dox
> + unsafe { bindings::drm_gpuvm_get(self.vm.get()) };
> + }
> +
> + unsafe fn dec_ref(obj: NonNull<Self>) {
> + // SAFETY: dox
> + unsafe { bindings::drm_gpuvm_put((*obj.as_ptr()).vm.get()) };
> + }
> +}
> +
> +impl<T: DriverGpuVm> GpuVm<T> {
> + const fn vtable() -> &'static bindings::drm_gpuvm_ops {
> + &bindings::drm_gpuvm_ops {
> + vm_free: Some(Self::vm_free),
> + op_alloc: None,
> + op_free: None,
> + vm_bo_alloc: GpuVmBo::<T>::ALLOC_FN,
> + vm_bo_free: GpuVmBo::<T>::FREE_FN,
> + vm_bo_validate: None,
> + sm_step_map: Some(Self::sm_step_map),
> + sm_step_unmap: Some(Self::sm_step_unmap),
> + sm_step_remap: Some(Self::sm_step_remap),
> + }
> + }
> +
> + /// Creates a GPUVM instance.
> + #[expect(clippy::new_ret_no_self)]
> + pub fn new<E>(
> + name: &'static CStr,
> + dev: &drm::Device<T::Driver>,
> + r_obj: &T::Object,
Can we call this “reservation_object”, or similar?
We should probably briefly explain what it does, perhaps linking to the C docs.
> + range: Range<u64>,
> + reserve_range: Range<u64>,
> + core: T,
> + shared: impl PinInit<T::SharedData, E>,
> + ) -> Result<GpuVmCore<T>, E>
> + where
> + E: From<AllocError>,
> + E: From<core::convert::Infallible>,
> + {
> + let obj = KBox::try_pin_init::<E>(
> + try_pin_init!(Self {
> + core <- UnsafeCell::new(core),
> + shared_data <- shared,
> + vm <- Opaque::ffi_init(|vm| {
> + // SAFETY: These arguments are valid. `vm` is valid until refcount drops to
> + // zero.
> + unsafe {
> + bindings::drm_gpuvm_init(
> + vm,
> + name.as_char_ptr(),
> + bindings::drm_gpuvm_flags_DRM_GPUVM_IMMEDIATE_MODE
> + | bindings::drm_gpuvm_flags_DRM_GPUVM_RESV_PROTECTED,
> + dev.as_raw(),
> + r_obj.as_raw(),
> + range.start,
> + range.end - range.start,
> + reserve_range.start,
> + reserve_range.end - reserve_range.start,
> + const { Self::vtable() },
> + )
> + }
> + }),
> + }? E),
> + GFP_KERNEL,
> + )?;
> + // SAFETY: This transfers the initial refcount to the ARef.
> + Ok(GpuVmCore(unsafe {
> + ARef::from_raw(NonNull::new_unchecked(KBox::into_raw(
> + Pin::into_inner_unchecked(obj),
> + )))
> + }))
> + }
> +
> + /// Access this [`GpuVm`] from a raw pointer.
> + ///
> + /// # Safety
> + ///
> + /// For the duration of `'a`, the pointer must reference a valid [`GpuVm<T>`].
> + #[inline]
> + pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm) -> &'a Self {
> + // SAFETY: `drm_gpuvm` is first field and `repr(C)`.
> + unsafe { &*ptr.cast() }
> + }
> +
> + /// Get a raw pointer.
> + #[inline]
> + pub fn as_raw(&self) -> *mut bindings::drm_gpuvm {
> + self.vm.get()
> + }
> +
> + /// Access the shared data.
> + #[inline]
> + pub fn shared(&self) -> &T::SharedData {
> + &self.shared_data
> + }
> +
> + /// The start of the VA space.
> + #[inline]
> + pub fn va_start(&self) -> u64 {
> + // SAFETY: Safe by the type invariant of `GpuVm<T>`.
> + unsafe { (*self.as_raw()).mm_start }
> + }
> +
> + /// The length of the address space
> + #[inline]
> + pub fn va_length(&self) -> u64 {
> + // SAFETY: Safe by the type invariant of `GpuVm<T>`.
> + unsafe { (*self.as_raw()).mm_range }
> + }
> +
> + /// Returns the range of the GPU virtual address space.
> + #[inline]
> + pub fn va_range(&self) -> Range<u64> {
> + let start = self.va_start();
> + let end = start + self.va_length();
> + Range { start, end }
> + }
> +
I wonder if we should expose the methods below at this moment. We will not need
them in Tyr until we start submitting jobs. This is still a bit in the future.
I say this for a few reasons:
a) Philipp is still working on the fence abstractions,
b) As a result of the above, we are taking raw fence pointers,
c) Onur is working on a WW Mutex abstraction [0] that includes a Rust
implementation of drm_exec (under another name, and useful in other contexts
outside of DRM). Should we use them here?
I think your current design with the ExecToken is also ok and perhaps we should
stick to it, but it's good to at least discuss this with the others.
> + /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
> + #[inline]
> + pub fn obtain(
> + &self,
> + obj: &T::Object,
> + data: impl PinInit<T::VmBoData>,
> + ) -> Result<GpuVmBoObtain<T>, AllocError> {
Perhaps this should be called GpuVmBo? That’s what you want to “obtain” in the first place.
This is indeed a question, by the way.
> + Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
> + }
> +
> + /// Prepare this GPUVM.
> + #[inline]
> + pub fn prepare(&self, num_fences: u32) -> impl PinInit<GpuVmExec<'_, T>, Error> {
> + try_pin_init!(GpuVmExec {
> + exec <- Opaque::try_ffi_init(|exec: *mut bindings::drm_gpuvm_exec| {
> + // SAFETY: exec is valid but unused memory, so we can write.
> + unsafe {
> + ptr::write_bytes(exec, 0u8, 1usize);
> + ptr::write(&raw mut (*exec).vm, self.as_raw());
> + ptr::write(&raw mut (*exec).flags, bindings::DRM_EXEC_INTERRUPTIBLE_WAIT);
> + ptr::write(&raw mut (*exec).num_fences, num_fences);
> + }
> +
> + // SAFETY: We can prepare the GPUVM.
> + to_result(unsafe { bindings::drm_gpuvm_exec_lock(exec) })
> + }),
> + _gpuvm: PhantomData,
> + })
> + }
> +
> + /// Clean up buffer objects that are no longer used.
> + #[inline]
> + pub fn deferred_cleanup(&self) {
> + // SAFETY: Always safe to perform deferred cleanup.
> + unsafe { bindings::drm_gpuvm_bo_deferred_cleanup(self.as_raw()) }
> + }
> +
> + /// Check if this GEM object is an external object for this GPUVM.
> + #[inline]
> + pub fn is_extobj(&self, obj: &T::Object) -> bool {
> + // SAFETY: We may call this with any GPUVM and GEM object.
> + unsafe { bindings::drm_gpuvm_is_extobj(self.as_raw(), obj.as_raw()) }
> + }
> +
> + /// Free this GPUVM.
> + ///
> + /// # Safety
> + ///
> + /// Called when refcount hits zero.
> + unsafe extern "C" fn vm_free(me: *mut bindings::drm_gpuvm) {
> + // SAFETY: GPUVM was allocated with KBox and can now be freed.
> + drop(unsafe { KBox::<Self>::from_raw(me.cast()) })
> + }
> +}
> +
> +/// The manager for a GPUVM.
> +pub trait DriverGpuVm: Sized {
> + /// Parent `Driver` for this object.
> + type Driver: drm::Driver;
> +
> + /// The kind of GEM object stored in this GPUVM.
> + type Object: IntoGEMObject;
> +
> + /// Data stored in the [`GpuVm`] that is fully shared.
> + type SharedData;
> +
> + /// Data stored with each `struct drm_gpuvm_bo`.
> + type VmBoData;
> +
> + /// Data stored with each `struct drm_gpuva`.
> + type VaData;
> +
> + /// The private data passed to callbacks.
> + type SmContext;
> +
> + /// Indicates that a new mapping should be created.
> + fn sm_step_map<'op>(
> + &mut self,
> + op: OpMap<'op, Self>,
> + context: &mut Self::SmContext,
> + ) -> Result<OpMapped<'op, Self>, Error>;
> +
> + /// Indicates that an existing mapping should be removed.
> + fn sm_step_unmap<'op>(
> + &mut self,
> + op: OpUnmap<'op, Self>,
> + context: &mut Self::SmContext,
> + ) -> Result<OpUnmapped<'op, Self>, Error>;
> +
> + /// Indicates that an existing mapping should be split up.
> + fn sm_step_remap<'op>(
> + &mut self,
> + op: OpRemap<'op, Self>,
> + context: &mut Self::SmContext,
> + ) -> Result<OpRemapped<'op, Self>, Error>;
> +}
> +
> +/// The core of the DRM GPU VA manager.
> +///
> +/// This object is the reference to the GPUVM that
> +///
> +/// # Invariants
> +///
> +/// This object owns the core.
> +pub struct GpuVmCore<T: DriverGpuVm>(ARef<GpuVm<T>>);
> +
> +impl<T: DriverGpuVm> GpuVmCore<T> {
> + /// Get a reference without access to `core`.
> + #[inline]
> + pub fn gpuvm(&self) -> &GpuVm<T> {
> + &self.0
> + }
> +}
> +
> +impl<T: DriverGpuVm> Deref for GpuVmCore<T> {
> + type Target = T;
> + #[inline]
> + fn deref(&self) -> &T {
> + // SAFETY: By the type invariants we may access `core`.
> + unsafe { &*self.0.core.get() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> DerefMut for GpuVmCore<T> {
> + #[inline]
> + fn deref_mut(&mut self) -> &mut T {
> + // SAFETY: By the type invariants we may access `core`.
> + unsafe { &mut *self.0.core.get() }
> + }
> +}
> +
> +/// The exec token for preparing the objects.
> +#[pin_data(PinnedDrop)]
> +pub struct GpuVmExec<'a, T: DriverGpuVm> {
> + #[pin]
> + exec: Opaque<bindings::drm_gpuvm_exec>,
> + _gpuvm: PhantomData<&'a mut GpuVm<T>>,
> +}
> +
> +impl<'a, T: DriverGpuVm> GpuVmExec<'a, T> {
> + /// Add a fence.
> + ///
> + /// # Safety
> + ///
> + /// `fence` arg must be valid.
> + pub unsafe fn resv_add_fence(
> + &self,
> + // TODO: use a safe fence abstraction
> + fence: *mut bindings::dma_fence,
> + private_usage: DmaResvUsage,
> + extobj_usage: DmaResvUsage,
> + ) {
> + // SAFETY: Caller ensures fence is ok.
> + unsafe {
> + bindings::drm_gpuvm_resv_add_fence(
> + (*self.exec.get()).vm,
> + &raw mut (*self.exec.get()).exec,
> + fence,
> + private_usage as u32,
> + extobj_usage as u32,
> + )
> + }
> + }
> +}
> +
> +#[pinned_drop]
> +impl<'a, T: DriverGpuVm> PinnedDrop for GpuVmExec<'a, T> {
> + fn drop(self: Pin<&mut Self>) {
> + // SAFETY: We hold the lock, so it's safe to unlock.
> + unsafe { bindings::drm_gpuvm_exec_unlock(self.exec.get()) };
> + }
> +}
> +
> +/// How the fence will be used.
> +#[repr(u32)]
> +pub enum DmaResvUsage {
> + /// For in kernel memory management only (e.g. copying, clearing memory).
> + Kernel = bindings::dma_resv_usage_DMA_RESV_USAGE_KERNEL,
> + /// Implicit write synchronization for userspace submissions.
> + Write = bindings::dma_resv_usage_DMA_RESV_USAGE_WRITE,
> + /// Implicit read synchronization for userspace submissions.
> + Read = bindings::dma_resv_usage_DMA_RESV_USAGE_READ,
> + /// No implicit sync (e.g. preemption fences, page table updates, TLB flushes).
> + Bookkeep = bindings::dma_resv_usage_DMA_RESV_USAGE_BOOKKEEP,
> +}
> +
> +/// A lock guard for the GPUVM's resv lock.
> +///
> +/// This guard provides access to the extobj and evicted lists.
Should we bother with evicted objects at this stage?
> +///
> +/// # Invariants
> +///
> +/// Holds the GPUVM resv lock.
> +pub struct GpuvmResvLockGuard<'a, T: DriverGpuVm>(&'a GpuVm<T>);
> +
> +impl<T: DriverGpuVm> GpuVm<T> {
> + /// Lock the VM's resv lock.
More docs here would be nice.
> + #[inline]
> + pub fn resv_lock(&self) -> GpuvmResvLockGuard<'_, T> {
> + // SAFETY: It's always ok to lock the resv lock.
> + unsafe { bindings::dma_resv_lock(self.raw_resv_lock(), ptr::null_mut()) };
> + // INVARIANTS: We took the lock.
> + GpuvmResvLockGuard(self)
> + }
You can call this more than once and deadlock. Perhaps we should warn about this, or forbid it?
i.e.:
if self.resv_is_locked {
return Err(EALREADY)
}
> +
> + #[inline]
> + fn raw_resv_lock(&self) -> *mut bindings::dma_resv {
> + // SAFETY: `r_obj` is immutable and valid for duration of GPUVM.
> + unsafe { (*(*self.as_raw()).r_obj).resv }
> + }
> +}
> +
> +impl<'a, T: DriverGpuVm> Drop for GpuvmResvLockGuard<'a, T> {
> + #[inline]
> + fn drop(&mut self) {
> + // SAFETY: We hold the lock so we can release it.
> + unsafe { bindings::dma_resv_unlock(self.0.raw_resv_lock()) };
> + }
> +}
> diff --git a/rust/kernel/drm/gpuvm/sm_ops.rs b/rust/kernel/drm/gpuvm/sm_ops.rs
> new file mode 100644
> index 0000000000000000000000000000000000000000..c0dbd4675de644a3b1cbe7d528194ca7fb471848
> --- /dev/null
> +++ b/rust/kernel/drm/gpuvm/sm_ops.rs
> @@ -0,0 +1,469 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +#![allow(clippy::tabs_in_doc_comments)]
> +
> +use super::*;
> +
> +struct SmData<'a, T: DriverGpuVm> {
> + gpuvm: &'a mut GpuVmCore<T>,
> + user_context: &'a mut T::SmContext,
> +}
> +
> +#[repr(C)]
> +struct SmMapData<'a, T: DriverGpuVm> {
> + sm_data: SmData<'a, T>,
> + vm_bo: GpuVmBoObtain<T>,
> +}
> +
> +/// The argument for [`GpuVmCore::sm_map`].
> +pub struct OpMapRequest<'a, T: DriverGpuVm> {
> + /// Address in GPU virtual address space.
> + pub addr: u64,
> + /// Length of mapping to create.
> + pub range: u64,
> + /// Offset in GEM object.
"in the GEM object."
> + pub offset: u64,
> + /// The GEM object to map.
> + pub vm_bo: GpuVmBoObtain<T>,
> + /// The user-provided context type.
> + pub context: &'a mut T::SmContext,
> +}
> +
> +impl<'a, T: DriverGpuVm> OpMapRequest<'a, T> {
> + fn raw_request(&self) -> bindings::drm_gpuvm_map_req {
> + bindings::drm_gpuvm_map_req {
> + map: bindings::drm_gpuva_op_map {
> + va: bindings::drm_gpuva_op_map__bindgen_ty_1 {
> + addr: self.addr,
> + range: self.range,
> + },
> + gem: bindings::drm_gpuva_op_map__bindgen_ty_2 {
> + offset: self.offset,
> + obj: self.vm_bo.obj().as_raw(),
> + },
> + },
> + }
> + }
> +}
> +
> +/// ```
> +/// struct drm_gpuva_op_map {
> +/// /**
> +/// * @va: structure containing address and range of a map
> +/// * operation
> +/// */
> +/// struct {
> +/// /**
> +/// * @va.addr: the base address of the new mapping
> +/// */
> +/// u64 addr;
> +///
> +/// /**
> +/// * @va.range: the range of the new mapping
> +/// */
> +/// u64 range;
> +/// } va;
> +///
> +/// /**
> +/// * @gem: structure containing the &drm_gem_object and it's offset
> +/// */
> +/// struct {
> +/// /**
> +/// * @gem.offset: the offset within the &drm_gem_object
> +/// */
> +/// u64 offset;
> +///
> +/// /**
> +/// * @gem.obj: the &drm_gem_object to map
> +/// */
> +/// struct drm_gem_object *obj;
> +/// } gem;
> +/// };
> +/// ```
I think we can improve the docs above a bit.
> +pub struct OpMap<'op, T: DriverGpuVm> {
> + op: &'op bindings::drm_gpuva_op_map,
> + // Since these abstractions are designed for immediate mode, the VM BO needs to be
> + // pre-allocated, so we always have it available when we reach this point.
> + vm_bo: &'op GpuVmBo<T>,
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
> +
> +impl<'op, T: DriverGpuVm> OpMap<'op, T> {
> + /// The base address of the new mapping.
> + pub fn addr(&self) -> u64 {
> + self.op.va.addr
> + }
> +
> + /// The length of the new mapping.
> + pub fn length(&self) -> u64 {
> + self.op.va.range
> + }
> +
> + /// The offset within the [`drm_gem_object`](crate::gem::Object).
> + pub fn gem_offset(&self) -> u64 {
> + self.op.gem.offset
> + }
> +
> + /// The [`drm_gem_object`](crate::gem::Object) to map.
> + pub fn obj(&self) -> &T::Object {
> + // SAFETY: The `obj` pointer is guaranteed to be valid.
> + unsafe { <T::Object as IntoGEMObject>::from_raw(self.op.gem.obj) }
> + }
> +
> + /// The [`GpuVmBo`] that the new VA will be associated with.
> + pub fn vm_bo(&self) -> &GpuVmBo<T> {
> + self.vm_bo
> + }
> +
> + /// Use the pre-allocated VA to carry out this map operation.
> + pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit<T::VaData>) -> OpMapped<'op, T> {
> + let va = va.prepare(va_data);
> + // SAFETY: By the type invariants we may access the interval tree.
> + unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
> + // SAFETY: The GEM object is valid, so the mutex is properly initialized.
> + unsafe { bindings::mutex_lock(&raw mut (*self.op.gem.obj).gpuva.lock) };
Should we use Fujita’s might_sleep() support here?
> + // SAFETY: The va is prepared for insertion, and we hold the GEM lock.
> + unsafe { bindings::drm_gpuva_link(va, self.vm_bo.as_raw()) };
> + // SAFETY: We took the mutex above, so we may unlock it.
> + unsafe { bindings::mutex_unlock(&raw mut (*self.op.gem.obj).gpuva.lock) };
> + OpMapped {
> + _invariant: self._invariant,
> + }
> + }
> +}
> +
> +/// Represents a completed [`OpMap`] operation.
> +pub struct OpMapped<'op, T> {
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
> +
> +/// ```
> +/// struct drm_gpuva_op_unmap {
> +/// /**
> +/// * @va: the &drm_gpuva to unmap
> +/// */
> +/// struct drm_gpuva *va;
> +///
> +/// /**
> +/// * @keep:
> +/// *
> +/// * Indicates whether this &drm_gpuva is physically contiguous with the
> +/// * original mapping request.
> +/// *
> +/// * Optionally, if &keep is set, drivers may keep the actual page table
> +/// * mappings for this &drm_gpuva, adding the missing page table entries
> +/// * only and update the &drm_gpuvm accordingly.
> +/// */
> +/// bool keep;
> +/// };
> +/// ```
I think the docs could improve here ^
> +pub struct OpUnmap<'op, T: DriverGpuVm> {
> + op: &'op bindings::drm_gpuva_op_unmap,
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
> +
> +impl<'op, T: DriverGpuVm> OpUnmap<'op, T> {
> + /// Indicates whether this `drm_gpuva` is physically contiguous with the
> + /// original mapping request.
> + ///
> + /// Optionally, if `keep` is set, drivers may keep the actual page table
> + /// mappings for this `drm_gpuva`, adding the missing page table entries
> + /// only and update the `drm_gpuvm` accordingly.
> + pub fn keep(&self) -> bool {
> + self.op.keep
> + }
> +
> + /// The range being unmapped.
> + pub fn va(&self) -> &GpuVa<T> {
> + // SAFETY: This is a valid va.
> + unsafe { GpuVa::<T>::from_raw(self.op.va) }
> + }
> +
> + /// Remove the VA.
> + pub fn remove(self) -> (OpUnmapped<'op, T>, GpuVaRemoved<T>) {
> + // SAFETY: The op references a valid drm_gpuva in the GPUVM.
> + unsafe { bindings::drm_gpuva_unmap(self.op) };
> + // SAFETY: The va is no longer in the interval tree so we may unlink it.
> + unsafe { bindings::drm_gpuva_unlink_defer(self.op.va) };
> +
> + // SAFETY: We just removed this va from the `GpuVm<T>`.
> + let va = unsafe { GpuVaRemoved::from_raw(self.op.va) };
> +
> + (
> + OpUnmapped {
> + _invariant: self._invariant,
> + },
> + va,
> + )
> + }
> +}
> +
> +/// Represents a completed [`OpUnmap`] operation.
> +pub struct OpUnmapped<'op, T> {
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
> +
> +/// ```
> +/// struct drm_gpuva_op_remap {
> +/// /**
> +/// * @prev: the preceding part of a split mapping
> +/// */
> +/// struct drm_gpuva_op_map *prev;
> +///
> +/// /**
> +/// * @next: the subsequent part of a split mapping
> +/// */
> +/// struct drm_gpuva_op_map *next;
> +///
> +/// /**
> +/// * @unmap: the unmap operation for the original existing mapping
> +/// */
> +/// struct drm_gpuva_op_unmap *unmap;
> +/// };
> +/// ```
> +pub struct OpRemap<'op, T: DriverGpuVm> {
> + op: &'op bindings::drm_gpuva_op_remap,
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
> +
> +impl<'op, T: DriverGpuVm> OpRemap<'op, T> {
> + /// The preceding part of a split mapping.
> + #[inline]
> + pub fn prev(&self) -> Option<&OpRemapMapData> {
> + // SAFETY: We checked for null, so the pointer must be valid.
> + NonNull::new(self.op.prev).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
> + }
> +
> + /// The subsequent part of a split mapping.
> + #[inline]
> + pub fn next(&self) -> Option<&OpRemapMapData> {
> + // SAFETY: We checked for null, so the pointer must be valid.
> + NonNull::new(self.op.next).map(|ptr| unsafe { OpRemapMapData::from_raw(ptr) })
> + }
> +
> + /// Indicates whether the `drm_gpuva` being removed is physically contiguous with the original
> + /// mapping request.
> + ///
> + /// Optionally, if `keep` is set, drivers may keep the actual page table mappings for this
> + /// `drm_gpuva`, adding the missing page table entries only and update the `drm_gpuvm`
> + /// accordingly.
> + #[inline]
> + pub fn keep(&self) -> bool {
> + // SAFETY: The unmap pointer is always valid.
> + unsafe { (*self.op.unmap).keep }
> + }
> +
> + /// The range being unmapped.
> + #[inline]
> + pub fn va_to_unmap(&self) -> &GpuVa<T> {
> + // SAFETY: This is a valid va.
> + unsafe { GpuVa::<T>::from_raw((*self.op.unmap).va) }
> + }
> +
> + /// The [`drm_gem_object`](crate::gem::Object) whose VA is being remapped.
> + #[inline]
> + pub fn obj(&self) -> &T::Object {
> + self.va_to_unmap().obj()
> + }
> +
> + /// The [`GpuVmBo`] that is being remapped.
> + #[inline]
> + pub fn vm_bo(&self) -> &GpuVmBo<T> {
> + self.va_to_unmap().vm_bo()
> + }
> +
> + /// Update the GPUVM to perform the remapping.
> + pub fn remap(
> + self,
> + va_alloc: [GpuVaAlloc<T>; 2],
> + prev_data: impl PinInit<T::VaData>,
> + next_data: impl PinInit<T::VaData>,
> + ) -> (OpRemapped<'op, T>, OpRemapRet<T>) {
> + let [va1, va2] = va_alloc;
> +
> + let mut unused_va = None;
> + let mut prev_ptr = ptr::null_mut();
> + let mut next_ptr = ptr::null_mut();
> + if self.prev().is_some() {
> + prev_ptr = va1.prepare(prev_data);
> + } else {
> + unused_va = Some(va1);
> + }
> + if self.next().is_some() {
> + next_ptr = va2.prepare(next_data);
> + } else {
> + unused_va = Some(va2);
> + }
> +
> + // SAFETY: the pointers are non-null when required
> + unsafe { bindings::drm_gpuva_remap(prev_ptr, next_ptr, self.op) };
> +
> + // SAFETY: The GEM object is valid, so the mutex is properly initialized.
> + unsafe { bindings::mutex_lock(&raw mut (*self.obj().as_raw()).gpuva.lock) };
> + if !prev_ptr.is_null() {
> + // SAFETY: The prev_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
> + // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
> + unsafe { bindings::drm_gpuva_link(prev_ptr, self.vm_bo().as_raw()) };
> + }
> + if !next_ptr.is_null() {
> + // SAFETY: The next_ptr is a valid drm_gpuva prepared for insertion. The vm_bo is still
> + // valid as the not-yet-unlinked gpuva holds a refcount on the vm_bo.
> + unsafe { bindings::drm_gpuva_link(next_ptr, self.vm_bo().as_raw()) };
> + }
> + // SAFETY: We took the mutex above, so we may unlock it.
> + unsafe { bindings::mutex_unlock(&raw mut (*self.obj().as_raw()).gpuva.lock) };
> + // SAFETY: The va is no longer in the interval tree so we may unlink it.
> + unsafe { bindings::drm_gpuva_unlink_defer((*self.op.unmap).va) };
> +
> + (
> + OpRemapped {
> + _invariant: self._invariant,
> + },
> + OpRemapRet {
> + // SAFETY: We just removed this va from the `GpuVm<T>`.
> + unmapped_va: unsafe { GpuVaRemoved::from_raw((*self.op.unmap).va) },
> + unused_va,
> + },
> + )
> + }
> +}
> +
> +/// Part of an [`OpRemap`] that represents a new mapping.
> +#[repr(transparent)]
> +pub struct OpRemapMapData(bindings::drm_gpuva_op_map);
> +
> +impl OpRemapMapData {
> + /// # Safety
> + /// Must reference a valid `drm_gpuva_op_map` for duration of `'a`.
> + unsafe fn from_raw<'a>(ptr: NonNull<bindings::drm_gpuva_op_map>) -> &'a Self {
> + // SAFETY: ok per safety requirements
> + unsafe { ptr.cast().as_ref() }
> + }
> +
> + /// The base address of the new mapping.
> + pub fn addr(&self) -> u64 {
> + self.0.va.addr
> + }
> +
> + /// The length of the new mapping.
> + pub fn length(&self) -> u64 {
> + self.0.va.range
> + }
> +
> + /// The offset within the [`drm_gem_object`](crate::gem::Object).
> + pub fn gem_offset(&self) -> u64 {
> + self.0.gem.offset
> + }
> +}
> +
> +/// Struct containing objects removed or not used by [`OpRemap::remap`].
> +pub struct OpRemapRet<T: DriverGpuVm> {
> + /// The `drm_gpuva` that was removed.
> + pub unmapped_va: GpuVaRemoved<T>,
> + /// If the remap did not split the region into two pieces, then the unused `drm_gpuva` is
> + /// returned here.
> + pub unused_va: Option<GpuVaAlloc<T>>,
> +}
> +
> +/// Represents a completed [`OpRemap`] operation.
> +pub struct OpRemapped<'op, T> {
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
> +
> +impl<T: DriverGpuVm> GpuVmCore<T> {
> + /// Create a mapping, removing or remapping anything that overlaps.
> + #[inline]
> + pub fn sm_map(&mut self, req: OpMapRequest<'_, T>) -> Result {
I wonder if we should keep this “sm” prefix. Perhaps
“map_region” or “map_range” would be better names IMHO.
> + let gpuvm = self.gpuvm().as_raw();
> + let raw_req = req.raw_request();
> + let mut p = SmMapData {
> + sm_data: SmData {
> + gpuvm: self,
> + user_context: req.context,
> + },
> + vm_bo: req.vm_bo,
> + };
> + // SAFETY:
> + // * raw_request() creates a valid request.
> + // * The private data is valid to be interpreted as both SmData and SmMapData since the
> + // first field of SmMapData is SmData.
> + to_result(unsafe {
> + bindings::drm_gpuvm_sm_map(gpuvm, (&raw mut p).cast(), &raw const raw_req)
> + })
> + }
> +
> + /// Remove any mappings in the given region.
> + #[inline]
> + pub fn sm_unmap(&mut self, addr: u64, length: u64, context: &mut T::SmContext) -> Result {
Same here
> + let gpuvm = self.gpuvm().as_raw();
> + let mut p = SmData {
> + gpuvm: self,
> + user_context: context,
> + };
> + // SAFETY:
> + // * raw_request() creates a valid request.
> + // * The private data is valid to be interpreted as only SmData, but drm_gpuvm_sm_unmap()
> + // never calls sm_step_map().
> + to_result(unsafe { bindings::drm_gpuvm_sm_unmap(gpuvm, (&raw mut p).cast(), addr, length) })
> + }
> +}
> +
> +impl<T: DriverGpuVm> GpuVm<T> {
> + /// # Safety
> + /// Must be called from `sm_map`.
> + pub(super) unsafe extern "C" fn sm_step_map(
> + op: *mut bindings::drm_gpuva_op,
> + p: *mut c_void,
> + ) -> c_int {
> + // SAFETY: If we reach `sm_step_map` then we were called from `sm_map` which always passes
> + // an `SmMapData` as private data.
> + let p = unsafe { &mut *p.cast::<SmMapData<'_, T>>() };
> + let op = OpMap {
> + // SAFETY: sm_step_map is called with a map operation.
> + op: unsafe { &(*op).__bindgen_anon_1.map },
> + vm_bo: &p.vm_bo,
> + _invariant: PhantomData,
> + };
> + match p.sm_data.gpuvm.sm_step_map(op, p.sm_data.user_context) {
> + Ok(OpMapped { .. }) => 0,
> + Err(err) => err.to_errno(),
> + }
> + }
> + /// # Safety
> + /// Must be called from `sm_map` or `sm_unmap`.
> + pub(super) unsafe extern "C" fn sm_step_unmap(
> + op: *mut bindings::drm_gpuva_op,
> + p: *mut c_void,
> + ) -> c_int {
> + // SAFETY: If we reach `sm_step_unmap` then we were called from `sm_map` or `sm_unmap` which passes either
> + // an `SmMapData` or `SmData` as private data. Both cases can be cast to `SmData`.
> + let p = unsafe { &mut *p.cast::<SmData<'_, T>>() };
> + let op = OpUnmap {
> + // SAFETY: sm_step_unmap is called with an unmap operation.
> + op: unsafe { &(*op).__bindgen_anon_1.unmap },
> + _invariant: PhantomData,
> + };
> + match p.gpuvm.sm_step_unmap(op, p.user_context) {
> + Ok(OpUnmapped { .. }) => 0,
> + Err(err) => err.to_errno(),
> + }
> + }
> + /// # Safety
> + /// Must be called from `sm_map` or `sm_unmap`.
> + pub(super) unsafe extern "C" fn sm_step_remap(
> + op: *mut bindings::drm_gpuva_op,
> + p: *mut c_void,
> + ) -> c_int {
> + // SAFETY: If we reach `sm_step_remap` then we were called from `sm_map` or `sm_unmap` which passes either
> + // an `SmMapData` or `SmData` as private data. Both cases can be cast to `SmData`.
> + let p = unsafe { &mut *p.cast::<SmData<'_, T>>() };
> + let op = OpRemap {
> + // SAFETY: sm_step_remap is called with a remap operation.
> + op: unsafe { &(*op).__bindgen_anon_1.remap },
> + _invariant: PhantomData,
> + };
> + match p.gpuvm.sm_step_remap(op, p.user_context) {
> + Ok(OpRemapped { .. }) => 0,
> + Err(err) => err.to_errno(),
> + }
> + }
> +}
> diff --git a/rust/kernel/drm/gpuvm/va.rs b/rust/kernel/drm/gpuvm/va.rs
> new file mode 100644
> index 0000000000000000000000000000000000000000..a31122ff22282186a1d76d4bb085714f6465722b
> --- /dev/null
> +++ b/rust/kernel/drm/gpuvm/va.rs
> @@ -0,0 +1,148 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +
> +use super::*;
> +
> +/// Represents that a range of a GEM object is mapped in this [`GpuVm`] instance.
> +///
> +/// Does not assume that GEM lock is held.
> +///
> +/// # Invariants
> +///
> +/// This is a valid `drm_gpuva` that is resident in the [`GpuVm`] instance.
> +#[repr(C)]
> +#[pin_data]
> +pub struct GpuVa<T: DriverGpuVm> {
> + #[pin]
> + inner: Opaque<bindings::drm_gpuva>,
> + #[pin]
> + data: T::VaData,
> +}
> +
> +impl<T: DriverGpuVm> GpuVa<T> {
> + /// Access this [`GpuVa`] from a raw pointer.
> + ///
> + /// # Safety
> + ///
> + /// For the duration of `'a`, the pointer must reference a valid `drm_gpuva` associated with a
> + /// [`GpuVm<T>`].
> + #[inline]
> + pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuva) -> &'a Self {
> + // SAFETY: `drm_gpuva` is first field and `repr(C)`.
> + unsafe { &*ptr.cast() }
> + }
> +
> + /// Returns a raw pointer to underlying C value.
> + #[inline]
> + pub fn as_raw(&self) -> *mut bindings::drm_gpuva {
> + self.inner.get()
> + }
> +
> + /// Returns the address of this mapping in the GPU virtual address space.
> + #[inline]
> + pub fn addr(&self) -> u64 {
> + // SAFETY: The `va.addr` field of `drm_gpuva` is immutable.
> + unsafe { (*self.as_raw()).va.addr }
> + }
> +
> + /// Returns the length of this mapping.
> + #[inline]
> + pub fn length(&self) -> u64 {
> + // SAFETY: The `va.range` field of `drm_gpuva` is immutable.
> + unsafe { (*self.as_raw()).va.range }
> + }
> +
> + /// Returns `addr..addr+length`.
> + #[inline]
> + pub fn range(&self) -> Range<u64> {
> + let addr = self.addr();
> + addr..addr + self.length()
> + }
> +
> + /// Returns the offset within the GEM object.
> + #[inline]
> + pub fn gem_offset(&self) -> u64 {
> + // SAFETY: The `gem.offset` field of `drm_gpuva` is immutable.
> + unsafe { (*self.as_raw()).gem.offset }
> + }
> +
> + /// Returns the GEM object.
> + #[inline]
> + pub fn obj(&self) -> &T::Object {
> + // SAFETY: The `gem.obj` field of `drm_gpuva` is immutable.
> + unsafe { <T::Object as IntoGEMObject>::from_raw((*self.as_raw()).gem.obj) }
> + }
> +
> + /// Returns the underlying [`GpuVmBo`] object that backs this [`GpuVa`].
> + #[inline]
> + pub fn vm_bo(&self) -> &GpuVmBo<T> {
> + // SAFETY: The `vm_bo` field has been set and is immutable for the duration in which this
> + // `drm_gpuva` is resident in the VM.
> + unsafe { GpuVmBo::from_raw((*self.as_raw()).vm_bo) }
> + }
> +}
> +
> +/// A pre-allocated [`GpuVa`] object.
> +///
> +/// # Invariants
> +///
> +/// The memory is zeroed.
> +pub struct GpuVaAlloc<T: DriverGpuVm>(KBox<MaybeUninit<GpuVa<T>>>);
> +
> +impl<T: DriverGpuVm> GpuVaAlloc<T> {
> + /// Pre-allocate a [`GpuVa`] object.
> + pub fn new(flags: AllocFlags) -> Result<GpuVaAlloc<T>, AllocError> {
> + // INVARIANTS: Memory allocated with __GFP_ZERO.
> + Ok(GpuVaAlloc(KBox::new_uninit(flags | __GFP_ZERO)?))
> + }
> +
> + /// Prepare this `drm_gpuva` for insertion into the GPUVM.
> + pub(super) fn prepare(mut self, va_data: impl PinInit<T::VaData>) -> *mut bindings::drm_gpuva {
> + let va_ptr = MaybeUninit::as_mut_ptr(&mut self.0);
> + // SAFETY: The `data` field is pinned.
> + let Ok(()) = unsafe { va_data.__pinned_init(&raw mut (*va_ptr).data) };
> + KBox::into_raw(self.0).cast()
> + }
> +}
> +
> +/// A [`GpuVa`] object that has been removed.
> +///
> +/// # Invariants
> +///
> +/// The `drm_gpuva` is not resident in the [`GpuVm`].
> +pub struct GpuVaRemoved<T: DriverGpuVm>(KBox<GpuVa<T>>);
> +
> +impl<T: DriverGpuVm> GpuVaRemoved<T> {
> + /// Convert a raw pointer into a [`GpuVaRemoved`].
> + ///
> + /// # Safety
> + ///
> + /// Must have been removed from a [`GpuVm<T>`].
> + pub(super) unsafe fn from_raw(ptr: *mut bindings::drm_gpuva) -> Self {
> + // SAFETY: Since it has been removed we can take ownership of allocation.
> + GpuVaRemoved(unsafe { KBox::from_raw(ptr.cast()) })
> + }
> +
> + /// Take ownership of the VA data.
> + pub fn into_inner(self) -> T::VaData
> + where
> + T::VaData: Unpin,
> + {
> + KBox::into_inner(self.0).data
> + }
> +}
> +
> +impl<T: DriverGpuVm> Deref for GpuVaRemoved<T> {
> + type Target = T::VaData;
> + fn deref(&self) -> &T::VaData {
> + &self.0.data
> + }
> +}
> +
> +impl<T: DriverGpuVm> DerefMut for GpuVaRemoved<T>
> +where
> + T::VaData: Unpin,
> +{
> + fn deref_mut(&mut self) -> &mut T::VaData {
> + &mut self.0.data
> + }
> +}
> diff --git a/rust/kernel/drm/gpuvm/vm_bo.rs b/rust/kernel/drm/gpuvm/vm_bo.rs
> new file mode 100644
> index 0000000000000000000000000000000000000000..f21aa17ea4f42c4a2b57b1f3a57a18dd2c3c8b7b
> --- /dev/null
> +++ b/rust/kernel/drm/gpuvm/vm_bo.rs
> @@ -0,0 +1,213 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +
> +use super::*;
> +
> +/// Represents that a given GEM object has at least one mapping on this [`GpuVm`] instance.
> +///
> +/// Does not assume that GEM lock is held.
> +#[repr(C)]
> +#[pin_data]
> +pub struct GpuVmBo<T: DriverGpuVm> {
Oh, we already have GpuVmBo, and GpuVmBoObtain. I see.
> + #[pin]
> + inner: Opaque<bindings::drm_gpuvm_bo>,
> + #[pin]
> + data: T::VmBoData,
> +}
> +
> +impl<T: DriverGpuVm> GpuVmBo<T> {
> + pub(super) const ALLOC_FN: Option<unsafe extern "C" fn() -> *mut bindings::drm_gpuvm_bo> = {
> + use core::alloc::Layout;
> + let base = Layout::new::<bindings::drm_gpuvm_bo>();
> + let rust = Layout::new::<Self>();
> + assert!(base.size() <= rust.size());
We should default to something else instead of panicking IMHO.
> + if base.size() != rust.size() || base.align() != rust.align() {
> + Some(Self::vm_bo_alloc)
> + } else {
> + // This causes GPUVM to allocate a `GpuVmBo<T>` with `kzalloc(sizeof(drm_gpuvm_bo))`.
> + None
> + }
> + };
> +
> + pub(super) const FREE_FN: Option<unsafe extern "C" fn(*mut bindings::drm_gpuvm_bo)> = {
> + if core::mem::needs_drop::<Self>() {
> + Some(Self::vm_bo_free)
> + } else {
> + // This causes GPUVM to free a `GpuVmBo<T>` with `kfree`.
> + None
> + }
> + };
> +
> + /// Custom function for allocating a `drm_gpuvm_bo`.
> + ///
> + /// # Safety
> + ///
> + /// Always safe to call. Unsafe to match function pointer type in C struct.
> + unsafe extern "C" fn vm_bo_alloc() -> *mut bindings::drm_gpuvm_bo {
> + KBox::<Self>::new_uninit(GFP_KERNEL | __GFP_ZERO)
> + .map(KBox::into_raw)
> + .unwrap_or(ptr::null_mut())
> + .cast()
> + }
> +
> + /// Custom function for freeing a `drm_gpuvm_bo`.
> + ///
> + /// # Safety
> + ///
> + /// The pointer must have been allocated with [`GpuVmBo::ALLOC_FN`], and must not be used after
> + /// this call.
> + unsafe extern "C" fn vm_bo_free(ptr: *mut bindings::drm_gpuvm_bo) {
> + // SAFETY:
> + // * The ptr was allocated from kmalloc with the layout of `GpuVmBo<T>`.
> + // * `ptr->inner` has no destructor.
> + // * `ptr->data` contains a valid `T::VmBoData` that we can drop.
> + drop(unsafe { KBox::<Self>::from_raw(ptr.cast()) });
> + }
> +
> + /// Access this [`GpuVmBo`] from a raw pointer.
> + ///
> + /// # Safety
> + ///
> + /// For the duration of `'a`, the pointer must reference a valid `drm_gpuvm_bo` associated with
> + /// a [`GpuVm<T>`].
> + #[inline]
> + pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm_bo) -> &'a Self {
> + // SAFETY: `drm_gpuvm_bo` is first field and `repr(C)`.
> + unsafe { &*ptr.cast() }
> + }
> +
> + /// Returns a raw pointer to underlying C value.
> + #[inline]
> + pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
> + self.inner.get()
> + }
> +
> + /// The [`GpuVm`] that this GEM object is mapped in.
> + #[inline]
> + pub fn gpuvm(&self) -> &GpuVm<T> {
> + // SAFETY: The `obj` pointer is guaranteed to be valid.
> + unsafe { GpuVm::<T>::from_raw((*self.inner.get()).vm) }
> + }
> +
> + /// The [`drm_gem_object`](crate::gem::Object) for these mappings.
> + #[inline]
> + pub fn obj(&self) -> &T::Object {
> + // SAFETY: The `obj` pointer is guaranteed to be valid.
> + unsafe { <T::Object as IntoGEMObject>::from_raw((*self.inner.get()).obj) }
> + }
> +
> + /// The driver data with this buffer object.
> + #[inline]
> + pub fn data(&self) -> &T::VmBoData {
> + &self.data
> + }
> +}
> +
> +/// A pre-allocated [`GpuVmBo`] object.
> +///
> +/// # Invariants
> +///
> +/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData`, has a refcount of one, and is
> +/// absent from any gem, extobj, or evict lists.
> +pub(super) struct GpuVmBoAlloc<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
> +
> +impl<T: DriverGpuVm> GpuVmBoAlloc<T> {
> + /// Create a new pre-allocated [`GpuVmBo`].
> + ///
> + /// It's intentional that the initializer is infallible because `drm_gpuvm_bo_put` will call
> + /// drop on the data, so we don't have a way to free it when the data is missing.
> + #[inline]
> + pub(super) fn new(
> + gpuvm: &GpuVm<T>,
> + gem: &T::Object,
> + value: impl PinInit<T::VmBoData>,
> + ) -> Result<GpuVmBoAlloc<T>, AllocError> {
> + // SAFETY: The provided gpuvm and gem ptrs are valid for the duration of this call.
> + let raw_ptr = unsafe {
> + bindings::drm_gpuvm_bo_create(gpuvm.as_raw(), gem.as_raw()).cast::<GpuVmBo<T>>()
> + };
> + // CAST: `GpuVmBoAlloc::vm_bo_alloc` ensures that this memory was allocated with the layout
> + // of `GpuVmBo<T>`.
> + let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
> + // SAFETY: `ptr->data` is a valid pinned location.
> + let Ok(()) = unsafe { value.__pinned_init(&raw mut (*raw_ptr).data) };
> + // INVARIANTS: We just created the vm_bo so it's absent from lists, and the data is valid
> + // as we just initialized it.
> + Ok(GpuVmBoAlloc(ptr))
> + }
> +
> + /// Returns a raw pointer to underlying C value.
> + #[inline]
> + pub(super) fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
> + // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
> + unsafe { (*self.0.as_ptr()).inner.get() }
> + }
> +
> + /// Look up whether there is an existing [`GpuVmBo`] for this gem object.
> + #[inline]
> + pub(super) fn obtain(self) -> GpuVmBoObtain<T> {
> + let me = ManuallyDrop::new(self);
> + // SAFETY: Valid `drm_gpuvm_bo` not already in the lists.
> + let ptr = unsafe { bindings::drm_gpuvm_bo_obtain_prealloc(me.as_raw()) };
> +
> + // If the vm_bo does not already exist, ensure that it's in the extobj list.
> + if ptr::eq(ptr, me.as_raw()) && me.gpuvm().is_extobj(me.obj()) {
> + let _resv_lock = me.gpuvm().resv_lock();
> + // SAFETY: We hold the GPUVMs resv lock.
> + unsafe { bindings::drm_gpuvm_bo_extobj_add(ptr) };
> + }
> +
> + // INVARIANTS: Valid `drm_gpuvm_bo` in the GEM list.
> + // SAFETY: `drm_gpuvm_bo_obtain_prealloc` always returns a non-null ptr
> + GpuVmBoObtain(unsafe { NonNull::new_unchecked(ptr.cast()) })
> + }
> +}
> +
> +impl<T: DriverGpuVm> Deref for GpuVmBoAlloc<T> {
> + type Target = GpuVmBo<T>;
> + #[inline]
> + fn deref(&self) -> &GpuVmBo<T> {
> + // SAFETY: By the type invariants we may deref while `Self` exists.
> + unsafe { self.0.as_ref() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
> + #[inline]
> + fn drop(&mut self) {
> + // SAFETY: It's safe to perform a deferred put in any context.
> + unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
> + }
> +}
> +
> +/// A [`GpuVmBo`] object in the GEM list.
> +///
> +/// # Invariants
> +///
> +/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData` and is present in the gem list.
> +pub struct GpuVmBoObtain<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
> +
> +impl<T: DriverGpuVm> GpuVmBoObtain<T> {
> + /// Returns a raw pointer to underlying C value.
> + #[inline]
> + pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
> + // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
> + unsafe { (*self.0.as_ptr()).inner.get() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> Deref for GpuVmBoObtain<T> {
> + type Target = GpuVmBo<T>;
> + #[inline]
> + fn deref(&self) -> &GpuVmBo<T> {
> + // SAFETY: By the type invariants we may deref while `Self` exists.
> + unsafe { self.0.as_ref() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> Drop for GpuVmBoObtain<T> {
> + #[inline]
> + fn drop(&mut self) {
> + // SAFETY: It's safe to perform a deferred put in any context.
> + unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
> + }
> +}
> diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
> index 1b82b6945edf25b947afc08300e211bd97150d6b..a4b6c5430198571ec701af2ef452cc9ac55870e6 100644
> --- a/rust/kernel/drm/mod.rs
> +++ b/rust/kernel/drm/mod.rs
> @@ -6,6 +6,7 @@
> pub mod driver;
> pub mod file;
> pub mod gem;
> +pub mod gpuvm;
> pub mod ioctl;
>
> pub use self::device::Device;
>
> --
> 2.52.0.487.g5c8c507ade-goog
>
>
My overall opinion is that we’re adding a lot of things that will only be
relevant when we’re more advanced on the job submission front. This
includes the things that Philipp is working on (i.e.: Fences + JobQueue).
Perhaps we should keep this iteration downstream (so we’re sure it works
when the time comes) and focus on synchronous VM_BINDS upstream.
The Tyr demo that you’ve tested this on is very helpful for this purpose.
Thoughts?
— Daniel
[0]: https://lore.kernel.org/rust-for-linux/20251201102855.4413-1-work@onurozkan.dev/T/#m43dc9256c4b18dc8955df968bf0712fc7a9d24c6
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-12-01 15:16 ` Daniel Almeida
@ 2025-12-02 8:39 ` Alice Ryhl
2025-12-02 13:42 ` Daniel Almeida
0 siblings, 1 reply; 24+ messages in thread
From: Alice Ryhl @ 2025-12-02 8:39 UTC (permalink / raw)
To: Daniel Almeida
Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul,
Lucas De Marchi, Rodrigo Vivi, Sumit Semwal, Christian König,
dri-devel, linux-kernel, rust-for-linux, linux-arm-msm, freedreno,
nouveau, intel-xe, linux-media, linaro-mm-sig, Asahi Lina
On Mon, Dec 01, 2025 at 12:16:09PM -0300, Daniel Almeida wrote:
> Hi Alice,
>
> I find it a bit weird that we reverted to v1, given that the previous gpuvm
> attempt was v3. No big deal though.
>
>
> > On 28 Nov 2025, at 11:14, Alice Ryhl <aliceryhl@google.com> wrote:
> >
> > Add a GPUVM abstraction to be used by Rust GPU drivers.
> >
> > GPUVM keeps track of a GPU's virtual address (VA) space and manages the
> > corresponding virtual mappings represented by "GPU VA" objects. It also
> > keeps track of the gem::Object<T> used to back the mappings through
> > GpuVmBo<T>.
> >
> > This abstraction is only usable by drivers that wish to use GPUVM in
> > immediate mode. This allows us to build the locking scheme into the API
> > design. It means that the GEM mutex is used for the GEM gpuva list, and
> > that the resv lock is used for the extobj list. The evicted list is not
> > yet used in this version.
> >
> > This abstraction provides a special handle called the GpuVmCore, which
> > is a wrapper around ARef<GpuVm> that provides access to the interval
> > tree. Generally, all changes to the address space require mutable
> > access to this unique handle.
> >
> > Some of the safety comments are still somewhat WIP, but I think the API
> > should be sound as-is.
> >
> > Co-developed-by: Asahi Lina <lina+kernel@asahilina.net>
> > Signed-off-by: Asahi Lina <lina+kernel@asahilina.net>
> > Co-developed-by: Daniel Almeida <daniel.almeida@collabora.com>
> > Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> > +//! DRM GPUVM in immediate mode
> > +//!
> > +//! Rust abstractions for using GPUVM in immediate mode. This is when the GPUVM state is updated
> > +//! during `run_job()`, i.e., in the DMA fence signalling critical path, to ensure that the GPUVM
>
> IMHO: We should initially target synchronous VM_BINDS, which are the opposite
> of what you described above.
Immediate mode is a locking scheme. We have to pick one of them
regardless of whether we do async VM_BIND yet.
(Well ok immediate mode is not just a locking scheme: it also determines
whether vm_bo cleanup is postponed or not.)
> > +/// A DRM GPU VA manager.
> > +///
> > +/// This object is refcounted, but the "core" is only accessible using a special unique handle. The
>
> I wonder if `Owned<T>` is a good fit here? IIUC, Owned<T> can be refcounted,
> but there is only ever one handle on the Rust side? If so, this seems to be
> what we want here?
Yes, Owned<T> is probably a good fit.
> > +/// core consists of the `core` field and the GPUVM's interval tree.
> > +#[repr(C)]
> > +#[pin_data]
> > +pub struct GpuVm<T: DriverGpuVm> {
> > + #[pin]
> > + vm: Opaque<bindings::drm_gpuvm>,
> > + /// Accessed only through the [`GpuVmCore`] reference.
> > + core: UnsafeCell<T>,
>
> This UnsafeCell has been here since Lina’s version. I must say I never
> understood why, and perhaps now is a good time to clarify it given the changes
> we’re making w.r.t. the “unique handle” thing.
>
> This is just some driver private data. It’s never shared with C. I am not
> sure why we need this wrapper.
The sm_step_* methods receive a `&mut T`. This is UB if other code has
an `&GpuVm<T>` and the `T` is not wrapped in an `UnsafeCell` because
`&GpuVm<T>` implies that the data is not modified.
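Roughly, the aliasing problem looks like this (minimal sketch with made-up
names, not the actual patch code):
	use core::cell::UnsafeCell;
	struct Vm<T> {
		// Without the UnsafeCell, handing out a `&mut T` while any
		// `&Vm<T>` is live would be UB: a shared reference promises
		// that nothing behind it is mutated.
		core: UnsafeCell<T>,
	}
	impl<T> Vm<T> {
		/// # Safety
		///
		/// The caller must guarantee that no other reference to the
		/// inner `T` is live, e.g. by holding the single core handle.
		unsafe fn core_mut(&self) -> &mut T {
			// SAFETY: The caller guarantees unique access, and the
			// UnsafeCell makes mutation through `&self` legal.
			unsafe { &mut *self.core.get() }
		}
	}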
> > + /// Shared data not protected by any lock.
> > + #[pin]
> > + shared_data: T::SharedData,
>
> Should we deref to this?
We can do that.
> > + /// Creates a GPUVM instance.
> > + #[expect(clippy::new_ret_no_self)]
> > + pub fn new<E>(
> > + name: &'static CStr,
> > + dev: &drm::Device<T::Driver>,
> > + r_obj: &T::Object,
>
> Can we call this “reservation_object”, or similar?
>
> We should probably briefly explain what it does, perhaps linking to the C docs.
Yeah agreed, more docs are probably warranted here.
> I wonder if we should expose the methods below at this moment. We will not need
> them in Tyr until we start submitting jobs. This is still a bit in the future.
>
> I say this for a few reasons:
>
> a) Philipp is still working on the fence abstractions,
>
> b) As a result from the above, we are taking raw fence pointers,
>
> c) Onur is working on a WW Mutex abstraction [0] that includes a Rust
> implementation of drm_exec (under another name, and useful in other contexts
> outside of DRM). Should we use them here?
>
> I think your current design with the ExecToken is also ok and perhaps we should
> stick to it, but it's good to at least discuss this with the others.
I don't think we can postpone adding the "obtain" method. It's required
to call sm_map, which is needed for VM_BIND.
> > + /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
> > + #[inline]
> > + pub fn obtain(
> > + &self,
> > + obj: &T::Object,
> > + data: impl PinInit<T::VmBoData>,
> > + ) -> Result<GpuVmBoObtain<T>, AllocError> {
>
> Perhaps this should be called GpuVmBo? That’s what you want to “obtain” in the first place.
>
> This is indeed a question, by the way.
One could possibly use Owned<_> here.
> > +/// A lock guard for the GPUVM's resv lock.
> > +///
> > +/// This guard provides access to the extobj and evicted lists.
>
> Should we bother with evicted objects at this stage?
The abstractions don't actually support them right now. The resv lock is
currently only here because it's used internally in these abstractions.
It won't be useful to drivers until we add evicted objects.
> > +///
> > +/// # Invariants
> > +///
> > +/// Holds the GPUVM resv lock.
> > +pub struct GpuvmResvLockGuard<'a, T: DriverGpuVm>(&'a GpuVm<T>);
> > +
> > +impl<T: DriverGpuVm> GpuVm<T> {
> > + /// Lock the VM's resv lock.
>
> More docs here would be nice.
>
> > + #[inline]
> > + pub fn resv_lock(&self) -> GpuvmResvLockGuard<'_, T> {
> > + // SAFETY: It's always ok to lock the resv lock.
> > + unsafe { bindings::dma_resv_lock(self.raw_resv_lock(), ptr::null_mut()) };
> > + // INVARIANTS: We took the lock.
> > + GpuvmResvLockGuard(self)
> > + }
>
> You can call this more than once and deadlock. Perhaps we should warn about this, or forbid it?
Same as any other lock. I don't think we need to do anything special.
> > + /// Use the pre-allocated VA to carry out this map operation.
> > + pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit<T::VaData>) -> OpMapped<'op, T> {
> > + let va = va.prepare(va_data);
> > + // SAFETY: By the type invariants we may access the interval tree.
> > + unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
> > + // SAFETY: The GEM object is valid, so the mutex is properly initialized.
>
> > + unsafe { bindings::mutex_lock(&raw mut (*self.op.gem.obj).gpuva.lock) };
>
> Should we use Fujita’s might_sleep() support here?
Could make sense yeah.
> > +/// ```
> > +/// struct drm_gpuva_op_unmap {
> > +/// /**
> > +/// * @va: the &drm_gpuva to unmap
> > +/// */
> > +/// struct drm_gpuva *va;
> > +///
> > +/// /**
> > +/// * @keep:
> > +/// *
> > +/// * Indicates whether this &drm_gpuva is physically contiguous with the
> > +/// * original mapping request.
> > +/// *
> > +/// * Optionally, if &keep is set, drivers may keep the actual page table
> > +/// * mappings for this &drm_gpuva, adding the missing page table entries
> > +/// * only and update the &drm_gpuvm accordingly.
> > +/// */
> > +/// bool keep;
> > +/// };
> > +/// ```
>
> I think the docs could improve here ^
Yeah I can look at it.
> > +impl<T: DriverGpuVm> GpuVmCore<T> {
> > + /// Create a mapping, removing or remapping anything that overlaps.
> > + #[inline]
> > + pub fn sm_map(&mut self, req: OpMapRequest<'_, T>) -> Result {
>
> I wonder if we should keep this “sm” prefix. Perhaps
> “map_region” or “map_range” would be better names IMHO.
I'll wait for Danilo to weigh in on this. I'm not sure where "sm"
actually comes from.
> > +/// Represents that a given GEM object has at least one mapping on this [`GpuVm`] instance.
> > +///
> > +/// Does not assume that GEM lock is held.
> > +#[repr(C)]
> > +#[pin_data]
> > +pub struct GpuVmBo<T: DriverGpuVm> {
>
> Oh, we already have GpuVmBo, and GpuVmBoObtain. I see.
Yeah, GpuVmBoObtain and GpuVmBoAlloc are pointers to GpuVmBo.
> > + #[pin]
> > + inner: Opaque<bindings::drm_gpuvm_bo>,
> > + #[pin]
> > + data: T::VmBoData,
> > +}
> > +
> > +impl<T: DriverGpuVm> GpuVmBo<T> {
> > + pub(super) const ALLOC_FN: Option<unsafe extern "C" fn() -> *mut bindings::drm_gpuvm_bo> = {
> > + use core::alloc::Layout;
> > + let base = Layout::new::<bindings::drm_gpuvm_bo>();
> > + let rust = Layout::new::<Self>();
> > + assert!(base.size() <= rust.size());
>
> We should default to something else instead of panicking IMHO.
This is in const context, which makes it a build-time assertion.
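For reference, the same pattern in isolation (toy types instead of the real
ones):
	const _: () = {
		use core::alloc::Layout;
		let base = Layout::new::<u32>(); // stand-in for drm_gpuvm_bo
		let rust = Layout::new::<u64>(); // stand-in for GpuVmBo<T>
		// Evaluated at compile time, so a failure aborts the build
		// rather than panicking at runtime.
		assert!(base.size() <= rust.size());
	};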
> My overall opinion is that we’re adding a lot of things that will only be
> relevant when we’re more advanced on the job submission front. This
> includes the things that Philipp is working on (i.e.: Fences + JobQueue).
>
> Perhaps we should keep this iteration downstream (so we’re sure it works
> when the time comes) and focus on synchronous VM_BINDS upstream.
> The Tyr demo that you’ve tested this on is very helpful for this purpose.
Yeah let's split out the prepare, GpuVmExec, and resv_add_fence stuff to
a separate patch.
I don't think sync vs async VM_BIND changes much in which methods or
structs are required here. Only difference is whether you call the
methods from a workqueue or not.
Alice
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-12-02 8:39 ` Alice Ryhl
@ 2025-12-02 13:42 ` Daniel Almeida
0 siblings, 0 replies; 24+ messages in thread
From: Daniel Almeida @ 2025-12-02 13:42 UTC (permalink / raw)
To: Alice Ryhl
Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul,
Lucas De Marchi, Rodrigo Vivi, Sumit Semwal, Christian König,
dri-devel, linux-kernel, rust-for-linux, linux-arm-msm, freedreno,
nouveau, intel-xe, linux-media, linaro-mm-sig, Asahi Lina
> On 2 Dec 2025, at 05:39, Alice Ryhl <aliceryhl@google.com> wrote:
>
> On Mon, Dec 01, 2025 at 12:16:09PM -0300, Daniel Almeida wrote:
>> Hi Alice,
>>
>> I find it a bit weird that we reverted to v1, given that the previous gpuvm
>> attempt was v3. No big deal though.
>>
>>
>>> On 28 Nov 2025, at 11:14, Alice Ryhl <aliceryhl@google.com> wrote:
>>>
>>> Add a GPUVM abstraction to be used by Rust GPU drivers.
>>>
>>> GPUVM keeps track of a GPU's virtual address (VA) space and manages the
>>> corresponding virtual mappings represented by "GPU VA" objects. It also
>>> keeps track of the gem::Object<T> used to back the mappings through
>>> GpuVmBo<T>.
>>>
>>> This abstraction is only usable by drivers that wish to use GPUVM in
>>> immediate mode. This allows us to build the locking scheme into the API
>>> design. It means that the GEM mutex is used for the GEM gpuva list, and
>>> that the resv lock is used for the extobj list. The evicted list is not
>>> yet used in this version.
>>>
>>> This abstraction provides a special handle called the GpuVmCore, which
>>> is a wrapper around ARef<GpuVm> that provides access to the interval
>>> tree. Generally, all changes to the address space require mutable
>>> access to this unique handle.
>>>
>>> Some of the safety comments are still somewhat WIP, but I think the API
>>> should be sound as-is.
>>>
>>> Co-developed-by: Asahi Lina <lina+kernel@asahilina.net>
>>> Signed-off-by: Asahi Lina <lina+kernel@asahilina.net>
>>> Co-developed-by: Daniel Almeida <daniel.almeida@collabora.com>
>>> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
>>> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
>
>>> +//! DRM GPUVM in immediate mode
>>> +//!
>>> +//! Rust abstractions for using GPUVM in immediate mode. This is when the GPUVM state is updated
>>> +//! during `run_job()`, i.e., in the DMA fence signalling critical path, to ensure that the GPUVM
>>
>> IMHO: We should initially target synchronous VM_BINDS, which are the opposite
>> of what you described above.
>
> Immediate mode is a locking scheme. We have to pick one of them
> regardless of whether we do async VM_BIND yet.
>
> (Well ok immediate mode is not just a locking scheme: it also determines
> whether vm_bo cleanup is postponed or not.)
>
>>> +/// A DRM GPU VA manager.
>>> +///
>>> +/// This object is refcounted, but the "core" is only accessible using a special unique handle. The
>>
>> I wonder if `Owned<T>` is a good fit here? IIUC, Owned<T> can be refcounted,
>> but there is only ever one handle on the Rust side? If so, this seems to be
>> what we want here?
>
> Yes, Owned<T> is probably a good fit.
>
>>> +/// core consists of the `core` field and the GPUVM's interval tree.
>>> +#[repr(C)]
>>> +#[pin_data]
>>> +pub struct GpuVm<T: DriverGpuVm> {
>>> + #[pin]
>>> + vm: Opaque<bindings::drm_gpuvm>,
>>> + /// Accessed only through the [`GpuVmCore`] reference.
>>> + core: UnsafeCell<T>,
>>
>> This UnsafeCell has been here since Lina’s version. I must say I never
>> understood why, and perhaps now is a good time to clarify it given the changes
>> we’re making w.r.t. the “unique handle” thing.
>>
>> This is just some driver private data. It’s never shared with C. I am not
>> sure why we need this wrapper.
>
> The sm_step_* methods receive a `&mut T`. This is UB if other code has
> an `&GpuVm<T>` and the `T` is not wrapped in an `UnsafeCell` because
> `&GpuVm<T>` implies that the data is not modified.
>
>>> + /// Shared data not protected by any lock.
>>> + #[pin]
>>> + shared_data: T::SharedData,
>>
>> Should we deref to this?
>
> We can do that.
>
>>> + /// Creates a GPUVM instance.
>>> + #[expect(clippy::new_ret_no_self)]
>>> + pub fn new<E>(
>>> + name: &'static CStr,
>>> + dev: &drm::Device<T::Driver>,
>>> + r_obj: &T::Object,
>>
>> Can we call this “reservation_object”, or similar?
>>
>> We should probably briefly explain what it does, perhaps linking to the C docs.
>
> Yeah agreed, more docs are probably warranted here.
>
>> I wonder if we should expose the methods below at this moment. We will not need
>> them in Tyr until we start submitting jobs. This is still a bit in the future.
>>
>> I say this for a few reasons:
>>
>> a) Philipp is still working on the fence abstractions,
>>
>> b) As a result from the above, we are taking raw fence pointers,
>>
>> c) Onur is working on a WW Mutex abstraction [0] that includes a Rust
>> implementation of drm_exec (under another name, and useful in other contexts
>> outside of DRM). Should we use them here?
>>
>> I think your current design with the ExecToken is also ok and perhaps we should
>> stick to it, but it's good to at least discuss this with the others.
>
> I don't think we can postpone adding the "obtain" method. It's required
> to call sm_map, which is needed for VM_BIND.
>
>>> + /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
>>> + #[inline]
>>> + pub fn obtain(
>>> + &self,
>>> + obj: &T::Object,
>>> + data: impl PinInit<T::VmBoData>,
>>> + ) -> Result<GpuVmBoObtain<T>, AllocError> {
>>
>> Perhaps this should be called GpuVmBo? That’s what you want to “obtain” in the first place.
>>
>> This is indeed a question, by the way.
>
> One could possibly use Owned<_> here.
>
>>> +/// A lock guard for the GPUVM's resv lock.
>>> +///
>>> +/// This guard provides access to the extobj and evicted lists.
>>
>> Should we bother with evicted objects at this stage?
>
> The abstractions don't actually support them right now. The resv lock is
> currently only here because it's used internally in these abstractions.
> It won't be useful to drivers until we add evicted objects.
>
>>> +///
>>> +/// # Invariants
>>> +///
>>> +/// Holds the GPUVM resv lock.
>>> +pub struct GpuvmResvLockGuard<'a, T: DriverGpuVm>(&'a GpuVm<T>);
>>> +
>>> +impl<T: DriverGpuVm> GpuVm<T> {
>>> + /// Lock the VM's resv lock.
>>
>> More docs here would be nice.
>>
>>> + #[inline]
>>> + pub fn resv_lock(&self) -> GpuvmResvLockGuard<'_, T> {
>>> + // SAFETY: It's always ok to lock the resv lock.
>>> + unsafe { bindings::dma_resv_lock(self.raw_resv_lock(), ptr::null_mut()) };
>>> + // INVARIANTS: We took the lock.
>>> + GpuvmResvLockGuard(self)
>>> + }
>>
>> You can call this more than once and deadlock. Perhaps we should warn about this, or forbid it?
>
> Same as any other lock. I don't think we need to do anything special.
>
>>> + /// Use the pre-allocated VA to carry out this map operation.
>>> + pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit<T::VaData>) -> OpMapped<'op, T> {
>>> + let va = va.prepare(va_data);
>>> + // SAFETY: By the type invariants we may access the interval tree.
>>> + unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
>>> + // SAFETY: The GEM object is valid, so the mutex is properly initialized.
>>
>>> + unsafe { bindings::mutex_lock(&raw mut (*self.op.gem.obj).gpuva.lock) };
>>
>> Should we use Fujita’s might_sleep() support here?
>
> Could make sense yeah.
>
>>> +/// ```
>>> +/// struct drm_gpuva_op_unmap {
>>> +/// /**
>>> +/// * @va: the &drm_gpuva to unmap
>>> +/// */
>>> +/// struct drm_gpuva *va;
>>> +///
>>> +/// /**
>>> +/// * @keep:
>>> +/// *
>>> +/// * Indicates whether this &drm_gpuva is physically contiguous with the
>>> +/// * original mapping request.
>>> +/// *
>>> +/// * Optionally, if &keep is set, drivers may keep the actual page table
>>> +/// * mappings for this &drm_gpuva, adding the missing page table entries
>>> +/// * only and update the &drm_gpuvm accordingly.
>>> +/// */
>>> +/// bool keep;
>>> +/// };
>>> +/// ```
>>
>> I think the docs could improve here ^
>
> Yeah I can look at it.
>
>>> +impl<T: DriverGpuVm> GpuVmCore<T> {
>>> + /// Create a mapping, removing or remapping anything that overlaps.
>>> + #[inline]
>>> + pub fn sm_map(&mut self, req: OpMapRequest<'_, T>) -> Result {
>>
>> I wonder if we should keep this “sm” prefix. Perhaps
>> “map_region” or “map_range” would be better names IMHO.
>
> I'll wait for Danilo to weigh in on this. I'm not sure where "sm"
> actually comes from.
sm probably is a reference to “split/merge”.
>
>>> +/// Represents that a given GEM object has at least one mapping on this [`GpuVm`] instance.
>>> +///
>>> +/// Does not assume that GEM lock is held.
>>> +#[repr(C)]
>>> +#[pin_data]
>>> +pub struct GpuVmBo<T: DriverGpuVm> {
>>
>> Oh, we already have GpuVmBo, and GpuVmBoObtain. I see.
>
> Yeah, GpuVmBoObtain and GpuVmBoAlloc are pointers to GpuVmBo.
>
>>> + #[pin]
>>> + inner: Opaque<bindings::drm_gpuvm_bo>,
>>> + #[pin]
>>> + data: T::VmBoData,
>>> +}
>>> +
>>> +impl<T: DriverGpuVm> GpuVmBo<T> {
>>> + pub(super) const ALLOC_FN: Option<unsafe extern "C" fn() -> *mut bindings::drm_gpuvm_bo> = {
>>> + use core::alloc::Layout;
>>> + let base = Layout::new::<bindings::drm_gpuvm_bo>();
>>> + let rust = Layout::new::<Self>();
>>> + assert!(base.size() <= rust.size());
>>
>> We should default to something else instead of panicking IMHO.
>
> This is in const context, which makes it a build-time assertion.
>
>> My overall opinion is that we’re adding a lot of things that will only be
>> relevant when we’re more advanced on the job submission front. This
>> includes the things that Philipp is working on (i.e.: Fences + JobQueue).
>>
>> Perhaps we should keep this iteration downstream (so we’re sure it works
>> when the time comes) and focus on synchronous VM_BINDS upstream.
>> The Tyr demo that you’ve tested this on is very helpful for this purpose.
>
> Yeah let's split out the prepare, GpuVmExec, and resv_add_fence stuff to
> a separate patch.
Ack
>
> I don't think sync vs async VM_BIND changes much in which methods or
> structs are required here. Only difference is whether you call the
> methods from a workqueue or not.
>
> Alice
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc()
2025-11-28 14:14 ` [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc() Alice Ryhl
2025-11-28 14:24 ` Boris Brezillon
@ 2025-12-19 12:15 ` Danilo Krummrich
1 sibling, 0 replies; 24+ messages in thread
From: Danilo Krummrich @ 2025-12-19 12:15 UTC (permalink / raw)
To: Alice Ryhl
Cc: Daniel Almeida, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul,
Lucas De Marchi, Rodrigo Vivi, Sumit Semwal, Christian König,
dri-devel, linux-kernel, rust-for-linux, linux-arm-msm, freedreno,
nouveau, intel-xe, linux-media, linaro-mm-sig
On Fri Nov 28, 2025 at 3:14 PM CET, Alice Ryhl wrote:
> +static void
> +drm_gpuvm_bo_destroy_not_in_lists(struct drm_gpuvm_bo *vm_bo)
> +{
> + struct drm_gpuvm *gpuvm = vm_bo->vm;
> + const struct drm_gpuvm_ops *ops = gpuvm->ops;
> + struct drm_gem_object *obj = vm_bo->obj;
> +
> + if (ops && ops->vm_bo_free)
> + ops->vm_bo_free(vm_bo);
> + else
> + kfree(vm_bo);
> +
> + drm_gpuvm_put(gpuvm);
> + drm_gem_object_put(obj);
> +}
I think to us it seems obvious, but for new people it might not be. Can
you please add a comment that mentions that this is about the evict and extobj
lists and explains how this is related to locking?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 2/4] drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode
2025-11-28 14:14 ` [PATCH 2/4] drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode Alice Ryhl
2025-11-28 14:25 ` Boris Brezillon
@ 2025-12-19 12:25 ` Danilo Krummrich
1 sibling, 0 replies; 24+ messages in thread
From: Danilo Krummrich @ 2025-12-19 12:25 UTC (permalink / raw)
To: Alice Ryhl
Cc: Daniel Almeida, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul,
Lucas De Marchi, Rodrigo Vivi, Sumit Semwal, Christian König,
dri-devel, linux-kernel, rust-for-linux, linux-arm-msm, freedreno,
nouveau, intel-xe, linux-media, linaro-mm-sig
On Fri Nov 28, 2025 at 3:14 PM CET, Alice Ryhl wrote:
> In the previous commit we updated drm_gpuvm_bo_obtain_prealloc() to take
> locks internally, which means that it's only usable in immediate mode.
> In this commit, we notice that drm_gpuvm_bo_obtain() requires you to use
> staged mode. This means that we now have one variant of obtain for each
> mode you might use gpuvm in.
>
> To reflect this information, we add a warning about using it in
> immediate mode, and to make the distinction clearer we rename the method
> with a _locked() suffix so that it's clear that it requires the caller
> to take the locks.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Ultimately, the two different approaches to obtaining a VM_BO have always been
designed for the two different modes of operation -- great to see this refined!
Given that, I think it would be great to update the "Locking" section of the
GPUVM's documentation and expand it with a new section "Modes of Operation".
Mind sending a follow-up patch / series for this?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-11-28 14:14 ` [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction Alice Ryhl
2025-12-01 15:16 ` Daniel Almeida
@ 2025-12-19 15:35 ` Danilo Krummrich
2025-12-20 9:48 ` Alice Ryhl
1 sibling, 1 reply; 24+ messages in thread
From: Danilo Krummrich @ 2025-12-19 15:35 UTC (permalink / raw)
To: Alice Ryhl
Cc: Daniel Almeida, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul,
Lucas De Marchi, Rodrigo Vivi, Sumit Semwal, Christian König,
dri-devel, linux-kernel, rust-for-linux, linux-arm-msm, freedreno,
nouveau, intel-xe, linux-media, linaro-mm-sig, Asahi Lina
On Fri Nov 28, 2025 at 3:14 PM CET, Alice Ryhl wrote:
> diff --git a/rust/helpers/drm_gpuvm.c b/rust/helpers/drm_gpuvm.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..18b7dbd2e32c3162455b344e72ec2940c632cc6b
> --- /dev/null
> +++ b/rust/helpers/drm_gpuvm.c
> @@ -0,0 +1,43 @@
> +// SPDX-License-Identifier: GPL-2.0 or MIT
> +
> +#ifdef CONFIG_DRM_GPUVM
> +
> +#include <drm/drm_gpuvm.h>
> +
> +struct drm_gpuvm *rust_helper_drm_gpuvm_get(struct drm_gpuvm *obj)
> +{
> + return drm_gpuvm_get(obj);
> +}
> +
> +void rust_helper_drm_gpuva_init_from_op(struct drm_gpuva *va, struct drm_gpuva_op_map *op)
> +{
> + drm_gpuva_init_from_op(va, op);
> +}
> +
> +struct drm_gpuvm_bo *rust_helper_drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
> +{
> + return drm_gpuvm_bo_get(vm_bo);
> +}
> +
> +void rust_helper_drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec)
> +{
> + return drm_gpuvm_exec_unlock(vm_exec);
> +}
> +
> +bool rust_helper_drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
> + struct drm_gem_object *obj)
> +{
> + return drm_gpuvm_is_extobj(gpuvm, obj);
> +}
> +
> +int rust_helper_dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx)
> +{
> + return dma_resv_lock(obj, ctx);
> +}
> +
> +void rust_helper_dma_resv_unlock(struct dma_resv *obj)
> +{
> + dma_resv_unlock(obj);
> +}
The dma_resv_*() helpers should go into their own file and should not depend on
CONFIG_DRM_GPUVM.
> +
> +#endif // CONFIG_DRM_GPUVM
> diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
> index 551da6c9b5064c324d6f62bafcec672c6c6f5bee..91f45155eb9c2c4e92b56ee1abf7d45188873f3c 100644
> --- a/rust/helpers/helpers.c
> +++ b/rust/helpers/helpers.c
> @@ -26,6 +26,7 @@
> #include "device.c"
> #include "dma.c"
> #include "drm.c"
> +#include "drm_gpuvm.c"
> #include "err.c"
> #include "irq.c"
> #include "fs.c"
> diff --git a/rust/kernel/drm/gpuvm/mod.rs b/rust/kernel/drm/gpuvm/mod.rs
> new file mode 100644
> index 0000000000000000000000000000000000000000..9834dbb938a3622e46048e9b8e06bc6bf03aa0d2
> --- /dev/null
> +++ b/rust/kernel/drm/gpuvm/mod.rs
> @@ -0,0 +1,394 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +
> +//! DRM GPUVM in immediate mode
> +//!
> +//! Rust abstractions for using GPUVM in immediate mode. This is when the GPUVM state is updated
> +//! during `run_job()`, i.e., in the DMA fence signalling critical path, to ensure that the GPUVM
> +//! and the GPU's virtual address space has the same state at all times.
Just a note: once we've got the modes of operation section in place on the C side,
we should refer to it from here.
> +//!
> +//! C header: [`include/drm/drm_gpuvm.h`](srctree/include/drm/drm_gpuvm.h)
> +
> +use kernel::{
> + alloc::{AllocError, Flags as AllocFlags},
> + bindings, drm,
> + drm::gem::IntoGEMObject,
> + error::to_result,
> + prelude::*,
> + sync::aref::{ARef, AlwaysRefCounted},
> + types::Opaque,
> +};
> +
> +use core::{
> + cell::UnsafeCell,
> + marker::PhantomData,
> + mem::{ManuallyDrop, MaybeUninit},
> + ops::{Deref, DerefMut, Range},
> + ptr::{self, NonNull},
> +};
Kernel vertical style.
> +mod sm_ops;
> +pub use self::sm_ops::*;
> +
> +mod vm_bo;
> +pub use self::vm_bo::*;
> +
> +mod va;
> +pub use self::va::*;
> +
> +/// A DRM GPU VA manager.
> +///
> +/// This object is refcounted, but the "core" is only accessible using a special unique handle. The
> +/// core consists of the `core` field and the GPUVM's interval tree.
I think this is a bit confusing: the 'core' field seems to be the driver's
private data that is protected by the same lock as the GPUVM's interval tree,
so I'd just call it 'data', or 'protected_data', etc.
Establishing the term 'core' for the state in which the private data and the
interval tree are accessible makes sense to me.
> +#[repr(C)]
> +#[pin_data]
> +pub struct GpuVm<T: DriverGpuVm> {
> + #[pin]
> + vm: Opaque<bindings::drm_gpuvm>,
> + /// Accessed only through the [`GpuVmCore`] reference.
> + core: UnsafeCell<T>,
> + /// Shared data not protected by any lock.
> + #[pin]
> + shared_data: T::SharedData,
I think having two separate driver private data fields deserves some
documentation.
> +}
> +
> +// SAFETY: dox
> +unsafe impl<T: DriverGpuVm> AlwaysRefCounted for GpuVm<T> {
> + fn inc_ref(&self) {
> + // SAFETY: dox
> + unsafe { bindings::drm_gpuvm_get(self.vm.get()) };
> + }
> +
> + unsafe fn dec_ref(obj: NonNull<Self>) {
> + // SAFETY: dox
> + unsafe { bindings::drm_gpuvm_put((*obj.as_ptr()).vm.get()) };
> + }
> +}
> +
> +impl<T: DriverGpuVm> GpuVm<T> {
> + const fn vtable() -> &'static bindings::drm_gpuvm_ops {
> + &bindings::drm_gpuvm_ops {
> + vm_free: Some(Self::vm_free),
> + op_alloc: None,
> + op_free: None,
> + vm_bo_alloc: GpuVmBo::<T>::ALLOC_FN,
> + vm_bo_free: GpuVmBo::<T>::FREE_FN,
> + vm_bo_validate: None,
> + sm_step_map: Some(Self::sm_step_map),
> + sm_step_unmap: Some(Self::sm_step_unmap),
> + sm_step_remap: Some(Self::sm_step_remap),
> + }
> + }
> +
> + /// Creates a GPUVM instance.
> + #[expect(clippy::new_ret_no_self)]
> + pub fn new<E>(
> + name: &'static CStr,
> + dev: &drm::Device<T::Driver>,
> + r_obj: &T::Object,
> + range: Range<u64>,
> + reserve_range: Range<u64>,
> + core: T,
> + shared: impl PinInit<T::SharedData, E>,
> + ) -> Result<GpuVmCore<T>, E>
> + where
> + E: From<AllocError>,
> + E: From<core::convert::Infallible>,
> + {
> + let obj = KBox::try_pin_init::<E>(
> + try_pin_init!(Self {
> + core <- UnsafeCell::new(core),
> + shared_data <- shared,
> + vm <- Opaque::ffi_init(|vm| {
> + // SAFETY: These arguments are valid. `vm` is valid until refcount drops to
> + // zero.
> + unsafe {
> + bindings::drm_gpuvm_init(
> + vm,
> + name.as_char_ptr(),
> + bindings::drm_gpuvm_flags_DRM_GPUVM_IMMEDIATE_MODE
> + | bindings::drm_gpuvm_flags_DRM_GPUVM_RESV_PROTECTED,
> + dev.as_raw(),
> + r_obj.as_raw(),
> + range.start,
> + range.end - range.start,
> + reserve_range.start,
> + reserve_range.end - reserve_range.start,
> + const { Self::vtable() },
> + )
> + }
> + }),
> + }? E),
> + GFP_KERNEL,
> + )?;
> + // SAFETY: This transfers the initial refcount to the ARef.
> + Ok(GpuVmCore(unsafe {
> + ARef::from_raw(NonNull::new_unchecked(KBox::into_raw(
> + Pin::into_inner_unchecked(obj),
> + )))
> + }))
There are some other intentionally incomplete safety comments that just say
"dox", as mentioned in the commit message. Given that this one does have a
comment, just a quick reminder to rework it as well.
> + }
> +
> + /// Access this [`GpuVm`] from a raw pointer.
> + ///
> + /// # Safety
> + ///
> + /// For the duration of `'a`, the pointer must reference a valid [`GpuVm<T>`].
The pointer must reference a valid struct drm_gpuvm that is embedded within a
GpuVm<T>.
> + #[inline]
> + pub unsafe fn from_raw<'a>(ptr: *mut bindings::drm_gpuvm) -> &'a Self {
> + // SAFETY: `drm_gpuvm` is first field and `repr(C)`.
Reminder: This needs some expansion.
> + unsafe { &*ptr.cast() }
> + }
> +
> + /// Get a raw pointer.
I assume you intend to expand some of the comments a bit; here I'd say something
like "Returns a raw pointer to the embedded `struct drm_gpuvm`."
> + #[inline]
> + pub fn as_raw(&self) -> *mut bindings::drm_gpuvm {
> + self.vm.get()
> + }
> +
> + /// Access the shared data.
> + #[inline]
> + pub fn shared(&self) -> &T::SharedData {
> + &self.shared_data
> + }
> +
> + /// The start of the VA space.
> + #[inline]
> + pub fn va_start(&self) -> u64 {
> + // SAFETY: Safe by the type invariant of `GpuVm<T>`.
> + unsafe { (*self.as_raw()).mm_start }
> + }
> +
> + /// The length of the address space
Missing period. I'd also say "The length of the GPU's virtual address space.".
> + #[inline]
> + pub fn va_length(&self) -> u64 {
> + // SAFETY: Safe by the type invariant of `GpuVm<T>`.
> + unsafe { (*self.as_raw()).mm_range }
> + }
> +
> + /// Returns the range of the GPU virtual address space.
> + #[inline]
> + pub fn va_range(&self) -> Range<u64> {
> + let start = self.va_start();
> + let end = start + self.va_length();
> + Range { start, end }
> + }
> +
> + /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
> + #[inline]
> + pub fn obtain(
> + &self,
> + obj: &T::Object,
> + data: impl PinInit<T::VmBoData>,
> + ) -> Result<GpuVmBoObtain<T>, AllocError> {
> + Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
> + }
Does this method make sense? We usually preallocate a VM_BO, then enter the
fence signalling critical path and then obtain the VM_BO.
> +
> + /// Prepare this GPUVM.
> + #[inline]
> + pub fn prepare(&self, num_fences: u32) -> impl PinInit<GpuVmExec<'_, T>, Error> {
> + try_pin_init!(GpuVmExec {
> + exec <- Opaque::try_ffi_init(|exec: *mut bindings::drm_gpuvm_exec| {
> + // SAFETY: exec is valid but unused memory, so we can write.
> + unsafe {
> + ptr::write_bytes(exec, 0u8, 1usize);
> + ptr::write(&raw mut (*exec).vm, self.as_raw());
> + ptr::write(&raw mut (*exec).flags, bindings::DRM_EXEC_INTERRUPTIBLE_WAIT);
> + ptr::write(&raw mut (*exec).num_fences, num_fences);
> + }
> +
> + // SAFETY: We can prepare the GPUVM.
> + to_result(unsafe { bindings::drm_gpuvm_exec_lock(exec) })
> + }),
> + _gpuvm: PhantomData,
> + })
> + }
> +
> + /// Clean up buffer objects that are no longer used.
> + #[inline]
> + pub fn deferred_cleanup(&self) {
> + // SAFETY: Always safe to perform deferred cleanup.
> + unsafe { bindings::drm_gpuvm_bo_deferred_cleanup(self.as_raw()) }
> + }
> +
> + /// Check if this GEM object is an external object for this GPUVM.
> + #[inline]
> + pub fn is_extobj(&self, obj: &T::Object) -> bool {
> + // SAFETY: We may call this with any GPUVM and GEM object.
> + unsafe { bindings::drm_gpuvm_is_extobj(self.as_raw(), obj.as_raw()) }
> + }
> +
> + /// Free this GPUVM.
> + ///
> + /// # Safety
> + ///
> + /// Called when refcount hits zero.
> + unsafe extern "C" fn vm_free(me: *mut bindings::drm_gpuvm) {
> + // SAFETY: GPUVM was allocated with KBox and can now be freed.
> + drop(unsafe { KBox::<Self>::from_raw(me.cast()) })
> + }
> +}
> +
> +/// The manager for a GPUVM.
> +pub trait DriverGpuVm: Sized {
> + /// Parent `Driver` for this object.
> + type Driver: drm::Driver;
> +
> + /// The kind of GEM object stored in this GPUVM.
> + type Object: IntoGEMObject;
> +
> + /// Data stored in the [`GpuVm`] that is fully shared.
> + type SharedData;
> +
> + /// Data stored with each `struct drm_gpuvm_bo`.
> + type VmBoData;
> +
> + /// Data stored with each `struct drm_gpuva`.
> + type VaData;
> +
> + /// The private data passed to callbacks.
> + type SmContext;
> +
> + /// Indicates that a new mapping should be created.
> + fn sm_step_map<'op>(
> + &mut self,
> + op: OpMap<'op, Self>,
> + context: &mut Self::SmContext,
> + ) -> Result<OpMapped<'op, Self>, Error>;
> +
> + /// Indicates that an existing mapping should be removed.
> + fn sm_step_unmap<'op>(
> + &mut self,
> + op: OpUnmap<'op, Self>,
> + context: &mut Self::SmContext,
> + ) -> Result<OpUnmapped<'op, Self>, Error>;
> +
> + /// Indicates that an existing mapping should be split up.
> + fn sm_step_remap<'op>(
> + &mut self,
> + op: OpRemap<'op, Self>,
> + context: &mut Self::SmContext,
> + ) -> Result<OpRemapped<'op, Self>, Error>;
> +}
> +
> +/// The core of the DRM GPU VA manager.
> +///
> +/// This object is the reference to the GPUVM that
I think you forgot to complete the sentence.
> +///
> +/// # Invariants
> +///
> +/// This object owns the core.
> +pub struct GpuVmCore<T: DriverGpuVm>(ARef<GpuVm<T>>);
> +
> +impl<T: DriverGpuVm> GpuVmCore<T> {
> + /// Get a reference without access to `core`.
> + #[inline]
> + pub fn gpuvm(&self) -> &GpuVm<T> {
> + &self.0
> + }
> +}
> +
> +impl<T: DriverGpuVm> Deref for GpuVmCore<T> {
> + type Target = T;
> + #[inline]
> + fn deref(&self) -> &T {
> + // SAFETY: By the type invariants we may access `core`.
> + unsafe { &*self.0.core.get() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> DerefMut for GpuVmCore<T> {
> + #[inline]
> + fn deref_mut(&mut self) -> &mut T {
> + // SAFETY: By the type invariants we may access `core`.
> + unsafe { &mut *self.0.core.get() }
> + }
> +}
Hm..it seems more natural to me to deref to &GpuVm<T> and provide data() and
data_mut().
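Something along these lines, I'd imagine (untested sketch against the types in
this patch):
	impl<T: DriverGpuVm> Deref for GpuVmCore<T> {
		type Target = GpuVm<T>;
		#[inline]
		fn deref(&self) -> &GpuVm<T> {
			&self.0
		}
	}
	impl<T: DriverGpuVm> GpuVmCore<T> {
		/// Access the driver's private data.
		#[inline]
		pub fn data(&self) -> &T {
			// SAFETY: By the type invariants we may access `core`.
			unsafe { &*self.0.core.get() }
		}
		/// Mutably access the driver's private data.
		#[inline]
		pub fn data_mut(&mut self) -> &mut T {
			// SAFETY: By the type invariants we may access `core`.
			unsafe { &mut *self.0.core.get() }
		}
	}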
> +
> +/// The exec token for preparing the objects.
> +#[pin_data(PinnedDrop)]
> +pub struct GpuVmExec<'a, T: DriverGpuVm> {
> + #[pin]
> + exec: Opaque<bindings::drm_gpuvm_exec>,
> + _gpuvm: PhantomData<&'a mut GpuVm<T>>,
> +}
> +
> +impl<'a, T: DriverGpuVm> GpuVmExec<'a, T> {
> + /// Add a fence.
> + ///
> + /// # Safety
> + ///
> + /// `fence` arg must be valid.
> + pub unsafe fn resv_add_fence(
> + &self,
> + // TODO: use a safe fence abstraction
> + fence: *mut bindings::dma_fence,
> + private_usage: DmaResvUsage,
> + extobj_usage: DmaResvUsage,
> + ) {
> + // SAFETY: Caller ensures fence is ok.
> + unsafe {
> + bindings::drm_gpuvm_resv_add_fence(
> + (*self.exec.get()).vm,
> + &raw mut (*self.exec.get()).exec,
> + fence,
> + private_usage as u32,
> + extobj_usage as u32,
> + )
> + }
> + }
> +}
> +
> +#[pinned_drop]
> +impl<'a, T: DriverGpuVm> PinnedDrop for GpuVmExec<'a, T> {
> + fn drop(self: Pin<&mut Self>) {
> + // SAFETY: We hold the lock, so it's safe to unlock.
> + unsafe { bindings::drm_gpuvm_exec_unlock(self.exec.get()) };
> + }
> +}
> +
> +/// How the fence will be used.
> +#[repr(u32)]
> +pub enum DmaResvUsage {
> + /// For in kernel memory management only (e.g. copying, clearing memory).
> + Kernel = bindings::dma_resv_usage_DMA_RESV_USAGE_KERNEL,
> + /// Implicit write synchronization for userspace submissions.
> + Write = bindings::dma_resv_usage_DMA_RESV_USAGE_WRITE,
> + /// Implicit read synchronization for userspace submissions.
> + Read = bindings::dma_resv_usage_DMA_RESV_USAGE_READ,
> + /// No implicit sync (e.g. preemption fences, page table updates, TLB flushes).
> + Bookkeep = bindings::dma_resv_usage_DMA_RESV_USAGE_BOOKKEEP,
> +}
That belongs in a dma_resv abstraction instead.
> +
> +/// A lock guard for the GPUVM's resv lock.
> +///
> +/// This guard provides access to the extobj and evicted lists.
> +///
> +/// # Invariants
> +///
> +/// Holds the GPUVM resv lock.
> +pub struct GpuvmResvLockGuard<'a, T: DriverGpuVm>(&'a GpuVm<T>);
> +
> +impl<T: DriverGpuVm> GpuVm<T> {
> + /// Lock the VM's resv lock.
> + #[inline]
> + pub fn resv_lock(&self) -> GpuvmResvLockGuard<'_, T> {
> + // SAFETY: It's always ok to lock the resv lock.
> + unsafe { bindings::dma_resv_lock(self.raw_resv_lock(), ptr::null_mut()) };
> + // INVARIANTS: We took the lock.
> + GpuvmResvLockGuard(self)
> + }
> +
> + #[inline]
> + fn raw_resv_lock(&self) -> *mut bindings::dma_resv {
> + // SAFETY: `r_obj` is immutable and valid for duration of GPUVM.
> + unsafe { (*(*self.as_raw()).r_obj).resv }
> + }
> +}
> +
> +impl<'a, T: DriverGpuVm> Drop for GpuvmResvLockGuard<'a, T> {
> + #[inline]
> + fn drop(&mut self) {
> + // SAFETY: We hold the lock so we can release it.
> + unsafe { bindings::dma_resv_unlock(self.0.raw_resv_lock()) };
> + }
> +}
> diff --git a/rust/kernel/drm/gpuvm/sm_ops.rs b/rust/kernel/drm/gpuvm/sm_ops.rs
> new file mode 100644
> index 0000000000000000000000000000000000000000..c0dbd4675de644a3b1cbe7d528194ca7fb471848
> --- /dev/null
> +++ b/rust/kernel/drm/gpuvm/sm_ops.rs
> @@ -0,0 +1,469 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +#![allow(clippy::tabs_in_doc_comments)]
> +
> +use super::*;
> +
> +struct SmData<'a, T: DriverGpuVm> {
> + gpuvm: &'a mut GpuVmCore<T>,
> + user_context: &'a mut T::SmContext,
> +}
> +
> +#[repr(C)]
> +struct SmMapData<'a, T: DriverGpuVm> {
> + sm_data: SmData<'a, T>,
> + vm_bo: GpuVmBoObtain<T>,
> +}
> +
> +/// The argument for [`GpuVmCore::sm_map`].
> +pub struct OpMapRequest<'a, T: DriverGpuVm> {
> + /// Address in GPU virtual address space.
> + pub addr: u64,
> + /// Length of mapping to create.
> + pub range: u64,
> + /// Offset in GEM object.
> + pub offset: u64,
> + /// The GEM object to map.
> + pub vm_bo: GpuVmBoObtain<T>,
> + /// The user-provided context type.
> + pub context: &'a mut T::SmContext,
> +}
> +
> +impl<'a, T: DriverGpuVm> OpMapRequest<'a, T> {
> + fn raw_request(&self) -> bindings::drm_gpuvm_map_req {
> + bindings::drm_gpuvm_map_req {
> + map: bindings::drm_gpuva_op_map {
> + va: bindings::drm_gpuva_op_map__bindgen_ty_1 {
> + addr: self.addr,
> + range: self.range,
> + },
> + gem: bindings::drm_gpuva_op_map__bindgen_ty_2 {
> + offset: self.offset,
> + obj: self.vm_bo.obj().as_raw(),
> + },
> + },
> + }
> + }
> +}
> +
> +/// ```
> +/// struct drm_gpuva_op_map {
> +/// /**
> +/// * @va: structure containing address and range of a map
> +/// * operation
> +/// */
> +/// struct {
> +/// /**
> +/// * @va.addr: the base address of the new mapping
> +/// */
> +/// u64 addr;
> +///
> +/// /**
> +/// * @va.range: the range of the new mapping
> +/// */
> +/// u64 range;
> +/// } va;
> +///
> +/// /**
> +/// * @gem: structure containing the &drm_gem_object and it's offset
> +/// */
> +/// struct {
> +/// /**
> +/// * @gem.offset: the offset within the &drm_gem_object
> +/// */
> +/// u64 offset;
> +///
> +/// /**
> +/// * @gem.obj: the &drm_gem_object to map
> +/// */
> +/// struct drm_gem_object *obj;
> +/// } gem;
> +/// };
> +/// ```
> +pub struct OpMap<'op, T: DriverGpuVm> {
> + op: &'op bindings::drm_gpuva_op_map,
> + // Since these abstractions are designed for immediate mode, the VM BO needs to be
> + // pre-allocated, so we always have it available when we reach this point.
> + vm_bo: &'op GpuVmBo<T>,
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
> +
> +impl<'op, T: DriverGpuVm> OpMap<'op, T> {
> + /// The base address of the new mapping.
> + pub fn addr(&self) -> u64 {
> + self.op.va.addr
> + }
> +
> + /// The length of the new mapping.
> + pub fn length(&self) -> u64 {
> + self.op.va.range
> + }
> +
> + /// The offset within the [`drm_gem_object`](crate::gem::Object).
> + pub fn gem_offset(&self) -> u64 {
> + self.op.gem.offset
> + }
> +
> + /// The [`drm_gem_object`](crate::gem::Object) to map.
> + pub fn obj(&self) -> &T::Object {
> + // SAFETY: The `obj` pointer is guaranteed to be valid.
> + unsafe { <T::Object as IntoGEMObject>::from_raw(self.op.gem.obj) }
> + }
> +
> + /// The [`GpuVmBo`] that the new VA will be associated with.
> + pub fn vm_bo(&self) -> &GpuVmBo<T> {
> + self.vm_bo
> + }
> +
> + /// Use the pre-allocated VA to carry out this map operation.
> + pub fn insert(self, va: GpuVaAlloc<T>, va_data: impl PinInit<T::VaData>) -> OpMapped<'op, T> {
> + let va = va.prepare(va_data);
> + // SAFETY: By the type invariants we may access the interval tree.
> + unsafe { bindings::drm_gpuva_map(self.vm_bo.gpuvm().as_raw(), va, self.op) };
> + // SAFETY: The GEM object is valid, so the mutex is properly initialized.
> + unsafe { bindings::mutex_lock(&raw mut (*self.op.gem.obj).gpuva.lock) };
This seems to be used at least twice; maybe a helper that takes a closure and
runs it between the raw mutex_lock() and mutex_unlock() would be appropriate?
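E.g. something like this (hypothetical, name made up):
	/// Runs `f` with the GEM object's gpuva mutex held.
	///
	/// # Safety
	///
	/// `obj` must point to a valid GEM object, so that its gpuva mutex is
	/// initialized.
	unsafe fn with_gpuva_lock<R>(
		obj: *mut bindings::drm_gem_object,
		f: impl FnOnce() -> R,
	) -> R {
		// SAFETY: The caller guarantees that `obj` is valid, so the
		// mutex is properly initialized.
		unsafe { bindings::mutex_lock(&raw mut (*obj).gpuva.lock) };
		let ret = f();
		// SAFETY: We took the mutex above, so we may unlock it.
		unsafe { bindings::mutex_unlock(&raw mut (*obj).gpuva.lock) };
		ret
	}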
> + // SAFETY: The va is prepared for insertion, and we hold the GEM lock.
> + unsafe { bindings::drm_gpuva_link(va, self.vm_bo.as_raw()) };
> + // SAFETY: We took the mutex above, so we may unlock it.
> + unsafe { bindings::mutex_unlock(&raw mut (*self.op.gem.obj).gpuva.lock) };
> + OpMapped {
> + _invariant: self._invariant,
> + }
> + }
> +}
> +
> +/// Represents a completed [`OpMap`] operation.
Can you please add a brief comment on what this type is used for?
Also, we have lots of new types to represent a certain state. Can you please
list all of them in a global documentation section explaining the states?
I think it would be nice if we could use the type state pattern, but it seems
it would be quite unergonomic.
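By "type state" I mean the generic pattern below (toy types, unrelated to the
GPUVM structs):
	struct Unmapped;
	struct Mapped;
	struct Va<State> {
		addr: u64,
		_state: core::marker::PhantomData<State>,
	}
	impl Va<Unmapped> {
		// Consuming `self` makes "map an already mapped VA"
		// unrepresentable at compile time, at the cost of threading the
		// state parameter through every API.
		fn map(self) -> Va<Mapped> {
			Va { addr: self.addr, _state: core::marker::PhantomData }
		}
	}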
> +pub struct OpMapped<'op, T> {
> + _invariant: PhantomData<*mut &'op mut T>,
> +}
<snip>
> +/// A pre-allocated [`GpuVmBo`] object.
> +///
> +/// # Invariants
> +///
> +/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData`, has a refcount of one, and is
> +/// absent from any gem, extobj, or evict lists.
> +pub(super) struct GpuVmBoAlloc<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
> +
> +impl<T: DriverGpuVm> GpuVmBoAlloc<T> {
> + /// Create a new pre-allocated [`GpuVmBo`].
> + ///
> + /// It's intentional that the initializer is infallible because `drm_gpuvm_bo_put` will call
> + /// drop on the data, so we don't have a way to free it when the data is missing.
> + #[inline]
> + pub(super) fn new(
> + gpuvm: &GpuVm<T>,
> + gem: &T::Object,
> + value: impl PinInit<T::VmBoData>,
> + ) -> Result<GpuVmBoAlloc<T>, AllocError> {
> + // SAFETY: The provided gpuvm and gem ptrs are valid for the duration of this call.
> + let raw_ptr = unsafe {
> + bindings::drm_gpuvm_bo_create(gpuvm.as_raw(), gem.as_raw()).cast::<GpuVmBo<T>>()
> + };
> + // CAST: `GpuVmBoAlloc::vm_bo_alloc` ensures that this memory was allocated with the layout
> + // of `GpuVmBo<T>`.
> + let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
> + // SAFETY: `ptr->data` is a valid pinned location.
> + let Ok(()) = unsafe { value.__pinned_init(&raw mut (*raw_ptr).data) };
> + // INVARIANTS: We just created the vm_bo so it's absent from lists, and the data is valid
> + // as we just initialized it.
> + Ok(GpuVmBoAlloc(ptr))
> + }
> +
> + /// Returns a raw pointer to underlying C value.
> + #[inline]
> + pub(super) fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
> + // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
> + unsafe { (*self.0.as_ptr()).inner.get() }
> + }
> +
> + /// Look up whether there is an existing [`GpuVmBo`] for this gem object.
> + #[inline]
> + pub(super) fn obtain(self) -> GpuVmBoObtain<T> {
> + let me = ManuallyDrop::new(self);
> + // SAFETY: Valid `drm_gpuvm_bo` not already in the lists.
> + let ptr = unsafe { bindings::drm_gpuvm_bo_obtain_prealloc(me.as_raw()) };
> +
> + // If the vm_bo does not already exist, ensure that it's in the extobj list.
> + if ptr::eq(ptr, me.as_raw()) && me.gpuvm().is_extobj(me.obj()) {
> + let _resv_lock = me.gpuvm().resv_lock();
> + // SAFETY: We hold the GPUVMs resv lock.
> + unsafe { bindings::drm_gpuvm_bo_extobj_add(ptr) };
> + }
> +
> + // INVARIANTS: Valid `drm_gpuvm_bo` in the GEM list.
> + // SAFETY: `drm_gpuvm_bo_obtain_prealloc` always returns a non-null ptr
> + GpuVmBoObtain(unsafe { NonNull::new_unchecked(ptr.cast()) })
> + }
> +}
> +
> +impl<T: DriverGpuVm> Deref for GpuVmBoAlloc<T> {
> + type Target = GpuVmBo<T>;
> + #[inline]
> + fn deref(&self) -> &GpuVmBo<T> {
> + // SAFETY: By the type invariants we may deref while `Self` exists.
> + unsafe { self.0.as_ref() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
> + #[inline]
> + fn drop(&mut self) {
> + // SAFETY: It's safe to perform a deferred put in any context.
> + unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
This does not need to be deferred, no?
> + }
> +}
> +
> +/// A [`GpuVmBo`] object in the GEM list.
> +///
> +/// # Invariants
> +///
> +/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData` and is present in the gem list.
> +pub struct GpuVmBoObtain<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
How is this different from GpuVmBo? The only object that is not in the GEM list
should be GpuVmBoAlloc, i.e. the preallocated one.
> +impl<T: DriverGpuVm> GpuVmBoObtain<T> {
> + /// Returns a raw pointer to underlying C value.
> + #[inline]
> + pub fn as_raw(&self) -> *mut bindings::drm_gpuvm_bo {
> + // SAFETY: The pointer references a valid `drm_gpuvm_bo`.
> + unsafe { (*self.0.as_ptr()).inner.get() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> Deref for GpuVmBoObtain<T> {
> + type Target = GpuVmBo<T>;
> + #[inline]
> + fn deref(&self) -> &GpuVmBo<T> {
> + // SAFETY: By the type invariants we may deref while `Self` exists.
> + unsafe { self.0.as_ref() }
> + }
> +}
> +
> +impl<T: DriverGpuVm> Drop for GpuVmBoObtain<T> {
> + #[inline]
> + fn drop(&mut self) {
> + // SAFETY: It's safe to perform a deferred put in any context.
> + unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
> + }
> +}
> diff --git a/rust/kernel/drm/mod.rs b/rust/kernel/drm/mod.rs
> index 1b82b6945edf25b947afc08300e211bd97150d6b..a4b6c5430198571ec701af2ef452cc9ac55870e6 100644
> --- a/rust/kernel/drm/mod.rs
> +++ b/rust/kernel/drm/mod.rs
> @@ -6,6 +6,7 @@
> pub mod driver;
> pub mod file;
> pub mod gem;
> +pub mod gpuvm;
> pub mod ioctl;
>
> pub use self::device::Device;
>
> --
> 2.52.0.487.g5c8c507ade-goog
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-12-19 15:35 ` Danilo Krummrich
@ 2025-12-20 9:48 ` Alice Ryhl
2025-12-20 10:01 ` Alice Ryhl
2025-12-20 10:05 ` Alice Ryhl
0 siblings, 2 replies; 24+ messages in thread
From: Alice Ryhl @ 2025-12-20 9:48 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Daniel Almeida, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Jessica Zhang, Sean Paul, Marijn Suijten, Lyude Paul,
Lucas De Marchi, Rodrigo Vivi, Sumit Semwal, Christian König,
dri-devel, linux-kernel, rust-for-linux, linux-arm-msm, freedreno,
nouveau, intel-xe, linux-media, linaro-mm-sig, Asahi Lina
On Fri, Dec 19, 2025 at 04:35:00PM +0100, Danilo Krummrich wrote:
> On Fri Nov 28, 2025 at 3:14 PM CET, Alice Ryhl wrote:
> > + /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
> > + #[inline]
> > + pub fn obtain(
> > + &self,
> > + obj: &T::Object,
> > + data: impl PinInit<T::VmBoData>,
> > + ) -> Result<GpuVmBoObtain<T>, AllocError> {
> > + Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
> > + }
>
> Does this method make sense? We usually preallocate a VM_BO, then enter the
> fence signalling critical path and then obtain the VM_BO.
Hmm, but there is something tricky here. When do we add it to the extobj
list, then? If we add it before starting the critical path, then we must
also call drm_gpuvm_bo_obtain_prealloc() before starting the critical
path because obtain must happen before drm_gpuvm_bo_extobj_add(). And
adding it to extobj after signalling the fence seems error prone.
And besides, adding it to the extobj list before the critical path
means that we can have drm_gpuvm_exec_lock() lock the new BO without
having to do anything special - it's simply in the extobj list by the
time we call drm_gpuvm_exec_lock().
> > +impl<T: DriverGpuVm> DerefMut for GpuVmCore<T> {
> > + #[inline]
> > + fn deref_mut(&mut self) -> &mut T {
> > + // SAFETY: By the type invariants we may access `core`.
> > + unsafe { &mut *self.0.core.get() }
> > + }
> > +}
>
> Hm..it seems more natural to me to deref to &GpuVm<T> and provide data() and
> data_mut().
That's fair.
> > +impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
> > + #[inline]
> > + fn drop(&mut self) {
> > + // SAFETY: It's safe to perform a deferred put in any context.
> > + unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
>
> This does not need to be deferred, no?
I think what I *actually* want to call here is
kref_put(&self->kref, drm_gpuvm_bo_destroy_not_in_lists_kref);
like what drm_gpuvm_bo_obtain_prealloc() does as of the first patch in
this series.
> > + }
> > +}
> > +
> > +/// A [`GpuVmBo`] object in the GEM list.
> > +///
> > +/// # Invariants
> > +///
> > +/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData` and is present in the gem list.
> > +pub struct GpuVmBoObtain<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
>
> How is this different from GpuVmBo? The only object that is not in the GEM list
> should be GpuVmBoAlloc, i.e. the preallocated one.
The difference is the pointer indirection: GpuVmBo is the object itself, while
GpuVmBoObtain is an owning pointer to it.
This type is morally an ARef<GpuVmBo<T>>, except I don't expose any way
to increment the refcount.
Alice
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-12-20 9:48 ` Alice Ryhl
@ 2025-12-20 10:01 ` Alice Ryhl
2025-12-20 10:05 ` Alice Ryhl
1 sibling, 0 replies; 24+ messages in thread
From: Alice Ryhl @ 2025-12-20 10:01 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Daniel Almeida, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau,
Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Frank Binns,
Matt Coster, Rob Clark, Dmitry Baryshkov, Abhinav Kumar,
Sean Paul, Marijn Suijten, Lyude Paul, Rodrigo Vivi, Sumit Semwal,
Christian König, dri-devel, linux-kernel, rust-for-linux,
linux-arm-msm, freedreno, nouveau, intel-xe, linux-media,
Asahi Lina
On Sat, Dec 20, 2025 at 09:48:17AM +0000, Alice Ryhl wrote:
Test email
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-12-20 9:48 ` Alice Ryhl
2025-12-20 10:01 ` Alice Ryhl
@ 2025-12-20 10:05 ` Alice Ryhl
2025-12-20 10:18 ` Alice Ryhl
1 sibling, 1 reply; 24+ messages in thread
From: Alice Ryhl @ 2025-12-20 10:05 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Daniel Almeida, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, David Airlie, Simona Vetter,
Boris Brezillon, Steven Price, Liviu Dudau, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Frank Binns, Matt Coster,
Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Sean Paul,
Marijn Suijten, Lyude Paul, Lucas De Marchi, Rodrigo Vivi,
Sumit Semwal, Christian König, dri-devel, linux-kernel,
rust-for-linux, linux-arm-msm, freedreno, intel-xe, linux-media,
linaro-mm-sig, Asahi Lina
On Sat, Dec 20, 2025 at 09:48:17AM +0000, Alice Ryhl wrote:
> On Fri, Dec 19, 2025 at 04:35:00PM +0100, Danilo Krummrich wrote:
> > On Fri Nov 28, 2025 at 3:14 PM CET, Alice Ryhl wrote:
> > > + /// Returns a [`GpuVmBoObtain`] for the provided GEM object.
> > > + #[inline]
> > > + pub fn obtain(
> > > + &self,
> > > + obj: &T::Object,
> > > + data: impl PinInit<T::VmBoData>,
> > > + ) -> Result<GpuVmBoObtain<T>, AllocError> {
> > > + Ok(GpuVmBoAlloc::new(self, obj, data)?.obtain())
> > > + }
> >
> > Does this method make sense? We usually preallocate a VM_BO, then enter the
> > fence signalling critical path and then obtain the VM_BO.
>
> Hmm, but there is something tricky here. When do we add it to the extobj
> list, then? If we add it before starting the critical path, then we must
> also call drm_gpuvm_bo_obtain_prealloc() before starting the critical
> path because obtain must happen before drm_gpuvm_bo_extobj_add(). And
> adding it to extobj after signalling the fence seems error prone.
>
> And besides, adding it to the extobj list before the critical path
> means that we can have drm_gpuvm_exec_lock() lock the new BO without
> having to do anything special - it's simply in the extobj list by the
> time we call drm_gpuvm_exec_lock().
>
> > > +impl<T: DriverGpuVm> DerefMut for GpuVmCore<T> {
> > > + #[inline]
> > > + fn deref_mut(&mut self) -> &mut T {
> > > + // SAFETY: By the type invariants we may access `core`.
> > > + unsafe { &mut *self.0.core.get() }
> > > + }
> > > +}
> >
> > Hm..it seems more natural to me to deref to &GpuVm<T> and provide data() and
> > data_mut().
>
> That's fair.
>
> > > +impl<T: DriverGpuVm> Drop for GpuVmBoAlloc<T> {
> > > + #[inline]
> > > + fn drop(&mut self) {
> > > + // SAFETY: It's safe to perform a deferred put in any context.
> > > + unsafe { bindings::drm_gpuvm_bo_put_deferred(self.as_raw()) };
> >
> > This does not need to be deferred, no?
>
> I think what I *actually* want to call here is
>
> kref_put(&self->kref, drm_gpuvm_bo_destroy_not_in_lists_kref);
>
> like what drm_gpuvm_bo_obtain_prealloc() does as of the first patch in
> this series.
>
> > > + }
> > > +}
> > > +
> > > +/// A [`GpuVmBo`] object in the GEM list.
> > > +///
> > > +/// # Invariants
> > > +///
> > > +/// Points at a `drm_gpuvm_bo` that contains a valid `T::VmBoData` and is present in the gem list.
> > > +pub struct GpuVmBoObtain<T: DriverGpuVm>(NonNull<GpuVmBo<T>>);
> >
> > How is this different from GpuVmBo? The only object that is not in the GEM list
> > should be GpuVmBoAlloc, i.e. the preallocated one.
>
> The difference is the pointer indirection: GpuVmBo is the object itself,
> while GpuVmBoObtain is an owning pointer to it.
>
> This type is morally an ARef<GpuVmBo<T>>, except I don't expose any way
> to increment the refcount.
>
> Alice
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction
2025-12-20 10:05 ` Alice Ryhl
@ 2025-12-20 10:18 ` Alice Ryhl
0 siblings, 0 replies; 24+ messages in thread
From: Alice Ryhl @ 2025-12-20 10:18 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Daniel Almeida, Matthew Brost, Thomas Hellström,
Maarten Lankhorst, Maxime Ripard, David Airlie, Simona Vetter,
Boris Brezillon, Steven Price, Liviu Dudau, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Frank Binns, Matt Coster,
Rob Clark, Dmitry Baryshkov, Abhinav Kumar, Sean Paul,
Marijn Suijten, Lyude Paul, Lucas De Marchi, Rodrigo Vivi,
Sumit Semwal, Christian König, dri-devel, linux-kernel,
rust-for-linux, linux-arm-msm, freedreno, intel-xe, linux-media,
linaro-mm-sig, Asahi Lina
On Sat, Dec 20, 2025 at 10:05:35AM +0000, Alice Ryhl wrote:
Aha! This one didn't get duplicated on lore. It's the nouveau list that
is broken.
Alice
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2025-12-20 10:18 UTC | newest]
Thread overview: 24+ messages
2025-11-28 14:14 [PATCH 0/4] Rust GPUVM support Alice Ryhl
2025-11-28 14:14 ` [PATCH 1/4] drm/gpuvm: take GEM lock inside drm_gpuvm_bo_obtain_prealloc() Alice Ryhl
2025-11-28 14:24 ` Boris Brezillon
2025-12-01 9:55 ` Alice Ryhl
2025-12-19 12:15 ` Danilo Krummrich
2025-11-28 14:14 ` [PATCH 2/4] drm/gpuvm: drm_gpuvm_bo_obtain() requires lock and staged mode Alice Ryhl
2025-11-28 14:25 ` Boris Brezillon
2025-12-19 12:25 ` Danilo Krummrich
2025-11-28 14:14 ` [PATCH 3/4] drm/gpuvm: use const for drm_gpuva_op_* ptrs Alice Ryhl
2025-11-28 14:27 ` Boris Brezillon
2025-11-28 14:14 ` [PATCH 4/4] rust: drm: add GPUVM immediate mode abstraction Alice Ryhl
2025-12-01 15:16 ` Daniel Almeida
2025-12-02 8:39 ` Alice Ryhl
2025-12-02 13:42 ` Daniel Almeida
2025-12-19 15:35 ` Danilo Krummrich
2025-12-20 9:48 ` Alice Ryhl
2025-12-20 10:01 ` Alice Ryhl
2025-12-20 10:05 ` Alice Ryhl
2025-12-20 10:18 ` Alice Ryhl
2025-11-28 15:38 ` ✗ CI.checkpatch: warning for Rust GPUVM support Patchwork
2025-11-28 15:40 ` ✓ CI.KUnit: success " Patchwork
2025-11-28 15:55 ` ✗ CI.checksparse: warning " Patchwork
2025-11-28 16:14 ` ✓ Xe.CI.BAT: success " Patchwork
2025-11-28 17:03 ` ✗ Xe.CI.Full: failure " Patchwork