* [PATCH v5 00/25] MADVISE FOR XE
@ 2025-07-30 13:00 Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
` (29 more replies)
0 siblings, 30 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
-v6
Rebase on gpuvm patches
Address review comments
-v5
Restore attributes to default after free from userspace
Add defragment worker to merge CPU mirror VMAs with default attributes
Avoid using VMA in UAPI
Address review comments
-v4
Fix atomic policies
Fix attribute copy
Address review comments
Provides a user API to assign attributes such as pat_index, atomic
operation type, and preferred location to SVM ranges.
The Kernel Mode Driver (KMD) may split existing VMAs to cover the input
range, assign the user-provided attributes, and invalidate existing PTEs
so that the next page fault/prefetch uses the new attributes.
Boris Brezillon (2):
drm/gpuvm: Pass map arguments through a struct
drm/gpuvm: Kill drm_gpuva_init()
Himal Prasad Ghimiray (23):
drm/gpuvm: Support flags in drm_gpuva_op_map
drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
drm/xe/uapi: Add madvise interface
drm/xe/vm: Add attributes struct as member of vma
drm/xe/vma: Move pat_index to vma attributes
drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as
parameter
drm/gpusvm: Make drm_gpusvm_for_each_* macros public
drm/xe/svm: Split system allocator VMA in case of madvise call
drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for
madvise
drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping
drm/xe: Implement madvise ioctl for xe
drm/xe/svm: Add SVM ranges migration policy on atomic access
drm/xe/madvise: Update migration policy based on preferred location
drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute
drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
drm/xe/svm: Consult madvise preferred location in prefetch
drm/xe/bo: Add attributes field to xe_bo
drm/xe/bo: Update atomic_access attribute on madvise
drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
drm/xe/vm: Add helper to check for default VMA memory attributes
drm/xe: Reset VMA attributes to default in SVM garbage collector
drm/xe: Enable madvise ioctl for xe
drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
drivers/gpu/drm/drm_gpusvm.c | 122 ++-----
drivers/gpu/drm/drm_gpuvm.c | 191 ++++++-----
drivers/gpu/drm/imagination/pvr_vm.c | 15 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 33 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 +-
drivers/gpu/drm/panthor/panthor_mmu.c | 13 +-
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_bo.c | 29 +-
drivers/gpu/drm/xe/xe_bo_types.h | 8 +
drivers/gpu/drm/xe/xe_device.c | 4 +
drivers/gpu/drm/xe/xe_gt_pagefault.c | 35 +-
drivers/gpu/drm/xe/xe_pt.c | 39 ++-
drivers/gpu/drm/xe/xe_svm.c | 238 ++++++++++++-
drivers/gpu/drm/xe/xe_svm.h | 23 ++
drivers/gpu/drm/xe/xe_vm.c | 419 +++++++++++++++++++++--
drivers/gpu/drm/xe/xe_vm.h | 10 +-
drivers/gpu/drm/xe/xe_vm_madvise.c | 449 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 15 +
drivers/gpu/drm/xe/xe_vm_types.h | 50 ++-
include/drm/drm_gpusvm.h | 70 ++++
include/drm/drm_gpuvm.h | 47 ++-
include/uapi/drm/xe_drm.h | 274 +++++++++++++++
22 files changed, 1812 insertions(+), 284 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
--
2.34.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 23:23 ` kernel test robot
` (2 more replies)
2025-07-30 13:00 ` [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init() Himal Prasad Ghimiray
` (28 subsequent siblings)
29 siblings, 3 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel, Danilo Krummrich, Himal Prasad Ghimiray
From: Boris Brezillon <boris.brezillon@collabora.com>
We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create](),
so, before we do that, let's pass arguments through a struct instead
of changing each call site every time a new optional argument is added.
v5
- Use drm_gpuva_op_map, same as drm_gpuvm_map_req (Danilo)
- Rebase changes for drm_gpuvm_sm_map_exec_lock()
- Fix kernel-docs
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: Caterina Shablia <caterina.shablia@collabora.com>
Cc: Rob Clark <robin.clark@oss.qualcomm.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Acked-by: Danilo Krummrich <dakr@kernel.org> (#v4)
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/drm_gpuvm.c | 106 ++++++++++---------------
drivers/gpu/drm/imagination/pvr_vm.c | 15 ++--
drivers/gpu/drm/msm/msm_gem_vma.c | 33 ++++++--
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 ++-
drivers/gpu/drm/panthor/panthor_mmu.c | 13 ++-
drivers/gpu/drm/xe/xe_vm.c | 13 ++-
include/drm/drm_gpuvm.h | 10 +--
7 files changed, 112 insertions(+), 89 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index bbc7fecb6f4a..f04d80a3a63b 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -486,13 +486,18 @@
* u64 addr, u64 range,
* struct drm_gem_object *obj, u64 offset)
* {
+ * struct drm_gpuva_op_map op_map = {
+ * .va.addr = addr,
+ * .va.range = range,
+ * .gem.obj = obj,
+ * .gem.offset = offset,
+ * };
* struct drm_gpuva_ops *ops;
* struct drm_gpuva_op *op
* struct drm_gpuvm_bo *vm_bo;
*
* driver_lock_va_space();
- * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
- * obj, offset);
+ * ops = drm_gpuvm_sm_map_ops_create(gpuvm, &op_map);
* if (IS_ERR(ops))
* return PTR_ERR(ops);
*
@@ -2054,16 +2059,15 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
static int
op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
- u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset)
+ const struct drm_gpuva_op_map *req)
{
struct drm_gpuva_op op = {};
op.op = DRM_GPUVA_OP_MAP;
- op.map.va.addr = addr;
- op.map.va.range = range;
- op.map.gem.obj = obj;
- op.map.gem.offset = offset;
+ op.map.va.addr = req->va.addr;
+ op.map.va.range = req->va.range;
+ op.map.gem.obj = req->gem.obj;
+ op.map.gem.offset = req->gem.offset;
return fn->sm_step_map(&op, priv);
}
@@ -2102,17 +2106,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
static int
__drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
const struct drm_gpuvm_ops *ops, void *priv,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ const struct drm_gpuva_op_map *req)
{
struct drm_gpuva *va, *next;
- u64 req_end = req_addr + req_range;
+ u64 req_end = req->va.addr + req->va.range;
int ret;
- if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
+ if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->va.addr, req->va.range)))
return -EINVAL;
- drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
+ drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req->va.addr, req_end) {
struct drm_gem_object *obj = va->gem.obj;
u64 offset = va->gem.offset;
u64 addr = va->va.addr;
@@ -2120,9 +2123,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u64 end = addr + range;
bool merge = !!va->gem.obj;
- if (addr == req_addr) {
- merge &= obj == req_obj &&
- offset == req_offset;
+ if (addr == req->va.addr) {
+ merge &= obj == req->gem.obj &&
+ offset == req->gem.offset;
if (end == req_end) {
ret = op_unmap_cb(ops, priv, va, merge);
@@ -2141,9 +2144,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
if (end > req_end) {
struct drm_gpuva_op_map n = {
.va.addr = req_end,
- .va.range = range - req_range,
+ .va.range = range - req->va.range,
.gem.obj = obj,
- .gem.offset = offset + req_range,
+ .gem.offset = offset + req->va.range,
};
struct drm_gpuva_op_unmap u = {
.va = va,
@@ -2155,8 +2158,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
return ret;
break;
}
- } else if (addr < req_addr) {
- u64 ls_range = req_addr - addr;
+ } else if (addr < req->va.addr) {
+ u64 ls_range = req->va.addr - addr;
struct drm_gpuva_op_map p = {
.va.addr = addr,
.va.range = ls_range,
@@ -2165,8 +2168,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
};
struct drm_gpuva_op_unmap u = { .va = va };
- merge &= obj == req_obj &&
- offset + ls_range == req_offset;
+ merge &= obj == req->gem.obj &&
+ offset + ls_range == req->gem.offset;
u.keep = merge;
if (end == req_end) {
@@ -2189,7 +2192,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
.va.range = end - req_end,
.gem.obj = obj,
.gem.offset = offset + ls_range +
- req_range,
+ req->va.range,
};
ret = op_remap_cb(ops, priv, &p, &n, &u);
@@ -2197,10 +2200,10 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
return ret;
break;
}
- } else if (addr > req_addr) {
- merge &= obj == req_obj &&
- offset == req_offset +
- (addr - req_addr);
+ } else if (addr > req->va.addr) {
+ merge &= obj == req->gem.obj &&
+ offset == req->gem.offset +
+ (addr - req->va.addr);
if (end == req_end) {
ret = op_unmap_cb(ops, priv, va, merge);
@@ -2236,9 +2239,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
}
}
- return op_map_cb(ops, priv,
- req_addr, req_range,
- req_obj, req_offset);
+ return op_map_cb(ops, priv, req);
}
static int
@@ -2303,10 +2304,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
* drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
* @gpuvm: the &drm_gpuvm representing the GPU VA space
* @priv: pointer to a driver private data structure
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: ptr to drm_gpuva_op_map struct
*
* This function iterates the given range of the GPU VA space. It utilizes the
* &drm_gpuvm_ops to call back into the driver providing the split and merge
@@ -2333,8 +2331,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
*/
int
drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ const struct drm_gpuva_op_map *req)
{
const struct drm_gpuvm_ops *ops = gpuvm->ops;
@@ -2343,9 +2340,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
ops->sm_step_unmap)))
return -EINVAL;
- return __drm_gpuvm_sm_map(gpuvm, ops, priv,
- req_addr, req_range,
- req_obj, req_offset);
+ return __drm_gpuvm_sm_map(gpuvm, ops, priv, req);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
@@ -2421,10 +2416,7 @@ static const struct drm_gpuvm_ops lock_ops = {
* @gpuvm: the &drm_gpuvm representing the GPU VA space
* @exec: the &drm_exec locking context
* @num_fences: for newly mapped objects, the # of fences to reserve
- * @req_addr: the start address of the range to unmap
- * @req_range: the range of the mappings to unmap
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: ptr to drm_gpuva_op_map struct
*
* This function locks (drm_exec_lock_obj()) objects that will be unmapped/
* remapped, and locks+prepares (drm_exec_prepare_object()) objects that
@@ -2442,12 +2434,10 @@ static const struct drm_gpuvm_ops lock_ops = {
* for_each_vm_bind_operation {
* switch (op->op) {
* case DRIVER_OP_UNMAP:
- * ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
+ * ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->va.addr, op->va.range);
* break;
* case DRIVER_OP_MAP:
- * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
- * op->addr, op->range,
- * obj, op->obj_offset);
+ * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences, op);
* break;
* }
*
@@ -2478,18 +2468,16 @@ static const struct drm_gpuvm_ops lock_ops = {
int
drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
struct drm_exec *exec, unsigned int num_fences,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ struct drm_gpuva_op_map *req)
{
- if (req_obj) {
- int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
+ if (req->gem.obj) {
+ int ret = drm_exec_prepare_obj(exec, req->gem.obj, num_fences);
if (ret)
return ret;
}
return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
- req_addr, req_range,
- req_obj, req_offset);
+ req);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
@@ -2611,10 +2599,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
/**
* drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
* @gpuvm: the &drm_gpuvm representing the GPU VA space
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: ptr to drm_gpuva_op_map struct
*
* This function creates a list of operations to perform splitting and merging
* of existent mapping(s) with the newly requested one.
@@ -2642,8 +2627,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
*/
struct drm_gpuva_ops *
drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ const struct drm_gpuva_op_map *req)
{
struct drm_gpuva_ops *ops;
struct {
@@ -2661,9 +2645,7 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
args.vm = gpuvm;
args.ops = ops;
- ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
- req_addr, req_range,
- req_obj, req_offset);
+ ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args, req);
if (ret)
goto err_free_ops;
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 2896fa7501b1..57116709de81 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -185,12 +185,17 @@ struct pvr_vm_bind_op {
static int pvr_vm_bind_op_exec(struct pvr_vm_bind_op *bind_op)
{
switch (bind_op->type) {
- case PVR_VM_BIND_TYPE_MAP:
+ case PVR_VM_BIND_TYPE_MAP: {
+ const struct drm_gpuva_op_map map_req = {
+ .va.addr = bind_op->device_addr,
+ .va.range = bind_op->size,
+ .gem.obj = gem_from_pvr_gem(bind_op->pvr_obj),
+ .gem.offset = bind_op->offset,
+ };
+
return drm_gpuvm_sm_map(&bind_op->vm_ctx->gpuvm_mgr,
- bind_op, bind_op->device_addr,
- bind_op->size,
- gem_from_pvr_gem(bind_op->pvr_obj),
- bind_op->offset);
+ bind_op, &map_req);
+ }
case PVR_VM_BIND_TYPE_UNMAP:
return drm_gpuvm_sm_unmap(&bind_op->vm_ctx->gpuvm_mgr,
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 3cd8562a5109..59a9b41bc967 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -371,6 +371,12 @@ struct drm_gpuva *
msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
u64 offset, u64 range_start, u64 range_end)
{
+ struct drm_gpuva_op_map op_map = {
+ .va.addr = range_start,
+ .va.range = range_end - range_start,
+ .gem.obj = obj,
+ .gem.offset = offset,
+ };
struct msm_gem_vm *vm = to_msm_vm(gpuvm);
struct drm_gpuvm_bo *vm_bo;
struct msm_gem_vma *vma;
@@ -399,7 +405,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
if (obj)
GEM_WARN_ON((range_end - range_start) > obj->size);
- drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
+ drm_gpuva_init_from_op(&vma->base, &op_map);
vma->mapped = false;
ret = drm_gpuva_insert(&vm->base, &vma->base);
@@ -1172,10 +1178,17 @@ vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
break;
case MSM_VM_BIND_OP_MAP:
case MSM_VM_BIND_OP_MAP_NULL:
- ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1,
- op->iova, op->range,
- op->obj, op->obj_offset);
+ {
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = op->iova,
+ .va.range = op->range,
+ .gem.obj = op->obj,
+ .gem.offset = op->obj_offset,
+ };
+
+ ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req);
break;
+ }
default:
/*
* lookup_op() should have already thrown an error for
@@ -1283,9 +1296,17 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
arg.flags |= MSM_VMA_DUMP;
fallthrough;
case MSM_VM_BIND_OP_MAP_NULL:
- ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova,
- op->range, op->obj, op->obj_offset);
+ {
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = op->iova,
+ .va.range = op->range,
+ .gem.obj = op->obj,
+ .gem.offset = op->obj_offset,
+ };
+
+ ret = drm_gpuvm_sm_map(job->vm, &arg, &map_req);
break;
+ }
default:
/*
* lookup_op() should have already thrown an error for
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index ddfc46bc1b3e..b74054b0a476 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1276,6 +1276,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
break;
case OP_MAP: {
struct nouveau_uvma_region *reg;
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = op->va.addr,
+ .va.range = op->va.range,
+ .gem.obj = op->gem.obj,
+ .gem.offset = op->gem.offset,
+ };
reg = nouveau_uvma_region_find_first(uvmm,
op->va.addr,
@@ -1301,10 +1307,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
}
op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
- op->va.addr,
- op->va.range,
- op->gem.obj,
- op->gem.offset);
+ &map_req);
if (IS_ERR(op->ops)) {
ret = PTR_ERR(op->ops);
goto unwind_continue;
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 4140f697ba5a..5fd4245a57b9 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2169,15 +2169,22 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
mutex_lock(&vm->op_lock);
vm->op_ctx = op;
switch (op_type) {
- case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
+ case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
+ const struct drm_gpuva_op_map map_req = {
+ .va.addr = op->va.addr,
+ .va.range = op->va.range,
+ .gem.obj = op->map.vm_bo->obj,
+ .gem.offset = op->map.bo_offset,
+ };
+
if (vm->unusable) {
ret = -EINVAL;
break;
}
- ret = drm_gpuvm_sm_map(&vm->base, vm, op->va.addr, op->va.range,
- op->map.vm_bo->obj, op->map.bo_offset);
+ ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
break;
+ }
case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:
ret = drm_gpuvm_sm_unmap(&vm->base, vm, op->va.addr, op->va.range);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 432ea325677d..4b3e78745363 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2316,10 +2316,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
switch (operation) {
case DRM_XE_VM_BIND_OP_MAP:
- case DRM_XE_VM_BIND_OP_MAP_USERPTR:
- ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
- obj, bo_offset_or_userptr);
+ case DRM_XE_VM_BIND_OP_MAP_USERPTR: {
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = addr,
+ .va.range = range,
+ .gem.obj = obj,
+ .gem.offset = bo_offset_or_userptr,
+ };
+
+ ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
break;
+ }
case DRM_XE_VM_BIND_OP_UNMAP:
ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
break;
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 274532facfd6..892ffe75a62f 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1060,8 +1060,8 @@ struct drm_gpuva_ops {
struct drm_gpuva_ops *
drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
- u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset);
+ const struct drm_gpuva_op_map *req);
+
struct drm_gpuva_ops *
drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
u64 addr, u64 range);
@@ -1205,16 +1205,14 @@ struct drm_gpuvm_ops {
};
int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
- u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset);
+ const struct drm_gpuva_op_map *req);
int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
u64 addr, u64 range);
int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
struct drm_exec *exec, unsigned int num_fences,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *obj, u64 offset);
+ struct drm_gpuva_op_map *req);
int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
u64 req_addr, u64 req_range);
--
2.34.1
* [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init()
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 3:45 ` Matthew Brost
2025-08-05 9:35 ` Danilo Krummrich
2025-07-30 13:00 ` [PATCH v5 03/25] drm/gpuvm: Support flags in drm_gpuva_op_map Himal Prasad Ghimiray
` (27 subsequent siblings)
29 siblings, 2 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Boris Brezillon,
Caterina Shablia, Danilo Krummrich
From: Boris Brezillon <boris.brezillon@collabora.com>
drm_gpuva_init() only has one internal user, and given we are about to
add new optional fields, it only adds maintenance burden for no real
benefit, so let's kill the thing now.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
Acked-by: Danilo Krummrich <dakr@kernel.org>
---
include/drm/drm_gpuvm.h | 15 ++++-----------
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 892ffe75a62f..2d24d000f2ee 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -160,15 +160,6 @@ struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
-static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset)
-{
- va->va.addr = addr;
- va->va.range = range;
- va->gem.obj = obj;
- va->gem.offset = offset;
-}
-
/**
* drm_gpuva_invalidate() - sets whether the backing GEM of this &drm_gpuva is
* invalidated
@@ -1079,8 +1070,10 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
struct drm_gpuva_op_map *op)
{
- drm_gpuva_init(va, op->va.addr, op->va.range,
- op->gem.obj, op->gem.offset);
+ va->va.addr = op->va.addr;
+ va->va.range = op->va.range;
+ va->gem.obj = op->gem.obj;
+ va->gem.offset = op->gem.offset;
}
/**
--
2.34.1
* [PATCH v5 03/25] drm/gpuvm: Support flags in drm_gpuva_op_map
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init() Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 3:58 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 04/25] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
` (26 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Danilo Krummrich, Boris Brezillon, Caterina Shablia
This change adds support for passing flags to drm_gpuvm_sm_map() and
sm_map_ops_create(), enabling future extensions that affect split/merge
logic in drm_gpuvm.
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: Caterina Shablia <caterina.shablia@collabora.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
include/drm/drm_gpuvm.h | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 2d24d000f2ee..75c616fdc119 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -810,6 +810,12 @@ enum drm_gpuva_op_type {
DRM_GPUVA_OP_DRIVER,
};
+/** DOC: flags for struct drm_gpuva_op_map
+ * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE DEFAULT split and merge,
+ * It cannot be combined with other flags.
+ */
+#define DRM_GPUVM_SM_MAP_OPS_FLAG_NONE 0
+
/**
* struct drm_gpuva_op_map - GPU VA map operation
*
@@ -847,6 +853,13 @@ struct drm_gpuva_op_map {
*/
struct drm_gem_object *obj;
} gem;
+
+ /**
+ * @flags: Bitmask of DRM_GPUVM_SM_MAP_* flags.
+ * Use DRM_GPUVM_SM_MAP_OPS_FLAG_NONE (0) for default split merge.
+ * It cannot be combined with other flags.
+ */
+ u32 flags;
};
/**
--
2.34.1
* [PATCH v5 04/25] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (2 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 03/25] drm/gpuvm: Support flags in drm_gpuva_op_map Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 19:24 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 05/25] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
` (25 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Danilo Krummrich, Boris Brezillon, dri-devel
- DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
drm_gpuvm_sm_map_ops_create to iterate over GPU VMAs in the
user-provided range and split an existing non-GEM-object VMA if the
start or end of the input range lies within it. The operations can
create up to 2 REMAPs and 2 MAPs. The purpose of this operation is to
let the Xe driver assign attributes to GPU VMAs within the
user-defined range. Unlike the default split/merge mode, an operation
list created with this flag never contains UNMAPs or merges, and may
contain no operations at all.
v2
- use drm_gpuvm_sm_map_ops_create with flags instead of defining new
ops_create (Danilo)
- Add doc (Danilo)
v3
- Fix doc
- Fix unmapping check
v4
- Fix mapping for non madvise ops
v5
- Fix mapping (Matthew Brost)
- Rebase on top of struct changes
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/drm_gpuvm.c | 87 +++++++++++++++++++++++++++++++------
include/drm/drm_gpuvm.h | 11 ++++-
2 files changed, 83 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f04d80a3a63b..2aeae8c2296f 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2110,6 +2110,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
{
struct drm_gpuva *va, *next;
u64 req_end = req->va.addr + req->va.range;
+ bool is_madvise_ops = (req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
+ bool needs_map = !is_madvise_ops;
int ret;
if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->va.addr, req->va.range)))
@@ -2122,26 +2124,35 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u64 range = va->va.range;
u64 end = addr + range;
bool merge = !!va->gem.obj;
+ bool skip_madvise_ops = is_madvise_ops && merge;
+ needs_map = !is_madvise_ops;
if (addr == req->va.addr) {
merge &= obj == req->gem.obj &&
offset == req->gem.offset;
if (end == req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
break;
}
if (end < req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = range - req->va.range,
@@ -2156,6 +2167,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, NULL, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
break;
}
} else if (addr < req->va.addr) {
@@ -2173,20 +2187,45 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u.keep = merge;
if (end == req_end) {
+ if (skip_madvise_ops)
+ break;
+
ret = op_remap_cb(ops, priv, &p, NULL, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
+
break;
}
if (end < req_end) {
+ if (skip_madvise_ops)
+ continue;
+
ret = op_remap_cb(ops, priv, &p, NULL, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops) {
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = req->va.addr,
+ .va.range = end - req->va.addr,
+ };
+
+ ret = op_map_cb(ops, priv, &map_req);
+ if (ret)
+ return ret;
+ }
+
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = end - req_end,
@@ -2198,6 +2237,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, &p, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
break;
}
} else if (addr > req->va.addr) {
@@ -2206,20 +2248,29 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
(addr - req->va.addr);
if (end == req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
+
break;
}
if (end < req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
+
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = end - req_end,
@@ -2234,12 +2285,20 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, NULL, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops) {
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = addr,
+ .va.range = req_end - addr,
+ };
+
+ return op_map_cb(ops, priv, &map_req);
+ }
break;
}
}
}
-
- return op_map_cb(ops, priv, req);
+ return needs_map ? op_map_cb(ops, priv, req) : 0;
}
static int
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 75c616fdc119..a8e9f70501ef 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -811,10 +811,19 @@ enum drm_gpuva_op_type {
};
/** DOC: flags for struct drm_gpuva_op_map
- * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE DEFAULT split and merge,
+ * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT split and merge,
* It cannot be combined with other flags.
+ *
+ * %DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
+ * drm_gpuvm_sm_map_ops_create to iterate over GPUVMA's in the user-provided
+ * range and split the existing non-GEM object VMA if the start or end of
+ * the input range lies within it. The operations can create up to 2 REMAPS
+ * and 2 MAPs. Unlike DRM_GPUVM_SM_MAP_OPS_FLAG_NONE flag, the operation with
+ * this flag will never have UNMAPs and merges, and can be without any final
+ * operations.
*/
#define DRM_GPUVM_SM_MAP_OPS_FLAG_NONE 0
+#define DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE BIT(0)
/**
* struct drm_gpuva_op_map - GPU VA map operation
--
2.34.1
* [PATCH v5 05/25] drm/xe/uapi: Add madvise interface
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (3 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 04/25] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 06/25] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
` (24 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
This commit introduces a new madvise interface to support
driver-specific ioctl operations. The madvise interface allows for more
efficient memory management by providing hints to the driver about the
expected memory usage and PTE update policy for a GPU VMA.
v2 (Matthew/Thomas)
- Drop num_ops support
- Drop purgeable support
- Add kernel-docs
- IOWR/IOW
v3 (Matthew/Thomas)
- Reorder attributes
- use __u16 for migration_policy
- use __u64 for reserved in unions
- Avoid usage of vma
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
include/uapi/drm/xe_drm.h | 130 ++++++++++++++++++++++++++++++++++++++
1 file changed, 130 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index c721e130c1d2..4e6e9a9164ee 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -81,6 +81,7 @@ extern "C" {
* - &DRM_IOCTL_XE_EXEC
* - &DRM_IOCTL_XE_WAIT_USER_FENCE
* - &DRM_IOCTL_XE_OBSERVATION
+ * - &DRM_IOCTL_XE_MADVISE
*/
/*
@@ -102,6 +103,7 @@ extern "C" {
#define DRM_XE_EXEC 0x09
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
+#define DRM_XE_MADVISE 0x0c
/* Must be kept compact -- no holes */
@@ -117,6 +119,7 @@ extern "C" {
#define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
/**
* DOC: Xe IOCTL Extensions
@@ -1978,6 +1981,133 @@ struct drm_xe_query_eu_stall {
__u64 sampling_rates[];
};
+/**
+ * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
+ *
+ * This structure is used to set memory attributes for a virtual address range
+ * in a VM. The type of attribute is specified by @type, and the corresponding
+ * union member is used to provide additional parameters for @type.
+ *
+ * Supported attribute types:
+ * - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory location.
+ * - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
+ * - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ *
+ * struct drm_xe_madvise madvise = {
+ * .vm_id = vm_id,
+ * .start = 0x100000,
+ * .range = 0x2000,
+ * .type = DRM_XE_MEM_RANGE_ATTR_ATOMIC,
+ * .atomic_val = DRM_XE_ATOMIC_DEVICE,
+ * };
+ *
+ * ioctl(fd, DRM_IOCTL_XE_MADVISE, &madvise);
+ *
+ */
+struct drm_xe_madvise {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+#define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
+#define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
+#define DRM_XE_MEM_RANGE_ATTR_PAT 2
+ /** @type: type of attribute */
+ __u32 type;
+
+ union {
+ /**
+ * @preferred_mem_loc: preferred memory location
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC
+ *
+ * Supported values for @preferred_mem_loc.devmem_fd:
+ * - DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE: set vram of faulting tile as preferred loc
+ * - DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM: set smem as preferred loc
+ *
+ * Supported values for @preferred_mem_loc.migration_policy:
+ * - DRM_XE_MIGRATE_ALL_PAGES
+ * - DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES
+ */
+ struct {
+#define DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE 0
+#define DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM -1
+ /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+ __u32 devmem_fd;
+
+#define DRM_XE_MIGRATE_ALL_PAGES 0
+#define DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES 1
+ /** @preferred_mem_loc.migration_policy: Page migration policy */
+ __u16 migration_policy;
+
+ /** @preferred_mem_loc.pad: MBZ */
+ __u16 pad;
+
+ /** @preferred_mem_loc.reserved: Reserved */
+ __u64 reserved;
+ } preferred_mem_loc;
+
+ /**
+ * @atomic: Atomic access policy
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_ATOMIC.
+ *
+ * Supported values for @atomic.val:
+ * - DRM_XE_ATOMIC_UNDEFINED: Undefined or default behaviour.
+ * Supports both GPU and CPU atomic operations for the system allocator.
+ * Supports GPU atomic operations for the normal (BO) allocator.
+ * - DRM_XE_ATOMIC_DEVICE: Support GPU atomic operations
+ * - DRM_XE_ATOMIC_GLOBAL: Support both GPU and CPU atomic operations
+ * - DRM_XE_ATOMIC_CPU: Support CPU atomic
+ */
+ struct {
+#define DRM_XE_ATOMIC_UNDEFINED 0
+#define DRM_XE_ATOMIC_DEVICE 1
+#define DRM_XE_ATOMIC_GLOBAL 2
+#define DRM_XE_ATOMIC_CPU 3
+ /** @atomic.val: value of atomic operation */
+ __u32 val;
+
+ /** @atomic.pad: MBZ */
+ __u32 pad;
+
+ /** @atomic.reserved: Reserved */
+ __u64 reserved;
+ } atomic;
+
+ /**
+ * @pat_index: Page attribute table index
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_PAT.
+ */
+ struct {
+ /** @pat_index.val: PAT index value */
+ __u32 val;
+
+ /** @pat_index.pad: MBZ */
+ __u32 pad;
+
+ /** @pat_index.reserved: Reserved */
+ __u64 reserved;
+ } pat_index;
+ };
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 06/25] drm/xe/vm: Add attributes struct as member of vma
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (4 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 05/25] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 07/25] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
` (23 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The attribute of xe_vma will determine the migration policy and the
encoding of the page table entries (PTEs) for that vma.
This attribute helps manage how memory pages are moved and how their
addresses are translated. It will be used by madvise to set the
behavior of the vma.
v2 (Matthew Brost)
- Add docs
v3 (Matthew Brost)
- Add uapi references
- 80 characters line wrap
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_vm_types.h | 33 ++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index bed6088e1bb3..5777b0e0c6a9 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -77,6 +77,33 @@ struct xe_userptr {
#endif
};
+/**
+ * struct xe_vma_mem_attr - memory attributes associated with vma
+ */
+struct xe_vma_mem_attr {
+ /** @preferred_loc: preferred memory location */
+ struct {
+ /** @preferred_loc.migration_policy: Pages migration policy */
+ u32 migration_policy;
+
+ /**
+ * @preferred_loc.devmem_fd: used for determining the pagemap_fd
+ * requested by the user. DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM and
+ * DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE mean system memory or the
+ * closest device memory, respectively.
+ */
+ u32 devmem_fd;
+ } preferred_loc;
+
+ /**
+ * @atomic_access: The atomic access type for the vma
+ * See %DRM_XE_ATOMIC_UNDEFINED, %DRM_XE_ATOMIC_DEVICE,
+ * %DRM_XE_ATOMIC_GLOBAL, and %DRM_XE_ATOMIC_CPU for possible
+ * values. These are defined in uapi/drm/xe_drm.h.
+ */
+ u32 atomic_access;
+};
+
struct xe_vma {
/** @gpuva: Base GPUVA object */
struct drm_gpuva gpuva;
@@ -135,6 +162,12 @@ struct xe_vma {
* Needs to be signalled before UNMAP can be processed.
*/
struct xe_user_fence *ufence;
+
+ /**
+ * @attr: The attributes of vma which determines the migration policy
+ * and encoding of the PTEs for this vma.
+ */
+ struct xe_vma_mem_attr attr;
};
/**
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 07/25] drm/xe/vma: Move pat_index to vma attributes
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (5 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 06/25] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 08/25] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
` (22 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The PAT index determines how PTEs are encoded and can be modified by
madvise. Therefore, it is now part of the vma attributes.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 6 +++---
drivers/gpu/drm/xe/xe_vm_types.h | 10 +++++-----
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 330cc0f54a3f..9128b35ccb3b 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -518,7 +518,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
{
struct xe_pt_stage_bind_walk *xe_walk =
container_of(walk, typeof(*xe_walk), base);
- u16 pat_index = xe_walk->vma->pat_index;
+ u16 pat_index = xe_walk->vma->attr.pat_index;
struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), base);
struct xe_vm *vm = xe_walk->vm;
struct xe_pt *xe_child;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 4b3e78745363..6a712a962d21 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1223,7 +1223,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
if (vm->xe->info.has_atomic_enable_pte_bit)
vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
- vma->pat_index = pat_index;
+ vma->attr.pat_index = pat_index;
if (bo) {
struct drm_gpuvm_bo *vm_bo;
@@ -2679,7 +2679,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.prev) {
vma = new_vma(vm, op->base.remap.prev,
- old->pat_index, flags);
+ old->attr.pat_index, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2709,7 +2709,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.next) {
vma = new_vma(vm, op->base.remap.next,
- old->pat_index, flags);
+ old->attr.pat_index, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 5777b0e0c6a9..c30f404a00e3 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -102,6 +102,11 @@ struct xe_vma_mem_attr {
* values. These are defined in uapi/drm/xe_drm.h.
*/
u32 atomic_access;
+
+ /**
+ * @pat_index: The pat index to use when encoding the PTEs for this vma.
+ */
+ u16 pat_index;
};
struct xe_vma {
@@ -152,11 +157,6 @@ struct xe_vma {
/** @tile_staged: bind is staged for this VMA */
u8 tile_staged;
- /**
- * @pat_index: The pat index to use when encoding the PTEs for this vma.
- */
- u16 pat_index;
-
/**
* @ufence: The user fence that was provided with MAP.
* Needs to be signalled before UNMAP can be processed.
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 08/25] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (6 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 07/25] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 09/25] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
` (21 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
This change simplifies the logic by ensuring that remapped previous or
next VMAs are created with the same memory attributes as the original VMA.
By passing struct xe_vma_mem_attr as a parameter, we maintain consistency
in memory attributes.
-v2
*dst = *src (Matthew Brost)
-v3 (Matthew Brost)
Drop unnecessary helper
pass attr ptr as input to new_vma and vma_create
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 6a712a962d21..4440557a3233 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1168,7 +1168,8 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
struct xe_bo *bo,
u64 bo_offset_or_userptr,
u64 start, u64 end,
- u16 pat_index, unsigned int flags)
+ struct xe_vma_mem_attr *attr,
+ unsigned int flags)
{
struct xe_vma *vma;
struct xe_tile *tile;
@@ -1223,7 +1224,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
if (vm->xe->info.has_atomic_enable_pte_bit)
vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
- vma->attr.pat_index = pat_index;
+ vma->attr = *attr;
if (bo) {
struct drm_gpuvm_bo *vm_bo;
@@ -2450,7 +2451,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
- u16 pat_index, unsigned int flags)
+ struct xe_vma_mem_attr *attr, unsigned int flags)
{
struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
struct drm_exec exec;
@@ -2479,7 +2480,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
}
vma = xe_vma_create(vm, bo, op->gem.offset,
op->va.addr, op->va.addr +
- op->va.range - 1, pat_index, flags);
+ op->va.range - 1, attr, flags);
if (IS_ERR(vma))
goto err_unlock;
@@ -2622,6 +2623,15 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
switch (op->base.op) {
case DRM_GPUVA_OP_MAP:
{
+ struct xe_vma_mem_attr default_attr = {
+ .preferred_loc = {
+ .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
+ .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
+ },
+ .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ .pat_index = op->map.pat_index,
+ };
+
flags |= op->map.read_only ?
VMA_CREATE_FLAG_READ_ONLY : 0;
flags |= op->map.is_null ?
@@ -2631,7 +2641,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
flags |= op->map.is_cpu_addr_mirror ?
VMA_CREATE_FLAG_IS_SYSTEM_ALLOCATOR : 0;
- vma = new_vma(vm, &op->base.map, op->map.pat_index,
+ vma = new_vma(vm, &op->base.map, &default_attr,
flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2679,7 +2689,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.prev) {
vma = new_vma(vm, op->base.remap.prev,
- old->attr.pat_index, flags);
+ &old->attr, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2709,7 +2719,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.next) {
vma = new_vma(vm, op->base.remap.next,
- old->attr.pat_index, flags);
+ &old->attr, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 09/25] drm/gpusvm: Make drm_gpusvm_for_each_* macros public
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (7 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 08/25] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 10/25] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
` (20 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The drm_gpusvm_for_each_notifier, drm_gpusvm_for_each_notifier_safe and
drm_gpusvm_for_each_range_safe macros are useful for locating notifiers
and ranges within a user-specified range. Making these macros public
allows drivers to use them in their own implementations.
v2 (Matthew Brost)
- drop inline __drm_gpusvm_range_find
- /s/notifier_iter_first/drm_gpusvm_notifier_find
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 122 +++++++----------------------------
include/drm/drm_gpusvm.h | 70 ++++++++++++++++++++
2 files changed, 95 insertions(+), 97 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 5bb4c77db2c3..647b49ff2da5 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -271,107 +271,50 @@ npages_in_range(unsigned long start, unsigned long end)
}
/**
- * drm_gpusvm_range_find() - Find GPU SVM range from GPU SVM notifier
- * @notifier: Pointer to the GPU SVM notifier structure.
- * @start: Start address of the range
- * @end: End address of the range
+ * drm_gpusvm_notifier_find() - Find GPU SVM notifier from GPU SVM
+ * @gpusvm: Pointer to the GPU SVM structure.
+ * @start: Start address of the notifier
+ * @end: End address of the notifier
*
- * Return: A pointer to the drm_gpusvm_range if found or NULL
+ * Return: A pointer to the drm_gpusvm_notifier if found or NULL
*/
-struct drm_gpusvm_range *
-drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
- unsigned long end)
+struct drm_gpusvm_notifier *
+drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm, unsigned long start,
+ unsigned long end)
{
struct interval_tree_node *itree;
- itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
+ itree = interval_tree_iter_first(&gpusvm->root, start, end - 1);
if (itree)
- return container_of(itree, struct drm_gpusvm_range, itree);
+ return container_of(itree, struct drm_gpusvm_notifier, itree);
else
return NULL;
}
-EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
+EXPORT_SYMBOL_GPL(drm_gpusvm_notifier_find);
/**
- * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
- * @range__: Iterator variable for the ranges
- * @next__: Iterator variable for the ranges temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the range
- * @end__: End address of the range
- *
- * This macro is used to iterate over GPU SVM ranges in a notifier while
- * removing ranges from it.
- */
-#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
- for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
- (next__) = __drm_gpusvm_range_next(range__); \
- (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
- (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-
-/**
- * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
- * @notifier: a pointer to the current drm_gpusvm_notifier
+ * drm_gpusvm_range_find() - Find GPU SVM range from GPU SVM notifier
+ * @notifier: Pointer to the GPU SVM notifier structure.
+ * @start: Start address of the range
+ * @end: End address of the range
*
- * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
- * the current notifier is the last one or if the input notifier is
- * NULL.
+ * Return: A pointer to the drm_gpusvm_range if found or NULL
*/
-static struct drm_gpusvm_notifier *
-__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
-{
- if (notifier && !list_is_last(¬ifier->entry,
- ¬ifier->gpusvm->notifier_list))
- return list_next_entry(notifier, entry);
-
- return NULL;
-}
-
-static struct drm_gpusvm_notifier *
-notifier_iter_first(struct rb_root_cached *root, unsigned long start,
- unsigned long last)
+struct drm_gpusvm_range *
+drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
+ unsigned long end)
{
struct interval_tree_node *itree;
- itree = interval_tree_iter_first(root, start, last);
+ itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
if (itree)
- return container_of(itree, struct drm_gpusvm_notifier, itree);
+ return container_of(itree, struct drm_gpusvm_range, itree);
else
return NULL;
}
-
-/**
- * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
- */
-#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
- for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
- (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
- (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-
-/**
- * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @next__: Iterator variable for the notifiers temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
- * removing notifiers from it.
- */
-#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
- for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
- (next__) = __drm_gpusvm_notifier_next(notifier__); \
- (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
- (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
+EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
/**
* drm_gpusvm_notifier_invalidate() - Invalidate a GPU SVM notifier.
@@ -472,22 +415,6 @@ int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
}
EXPORT_SYMBOL_GPL(drm_gpusvm_init);
-/**
- * drm_gpusvm_notifier_find() - Find GPU SVM notifier
- * @gpusvm: Pointer to the GPU SVM structure
- * @fault_addr: Fault address
- *
- * This function finds the GPU SVM notifier associated with the fault address.
- *
- * Return: Pointer to the GPU SVM notifier on success, NULL otherwise.
- */
-static struct drm_gpusvm_notifier *
-drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm,
- unsigned long fault_addr)
-{
- return notifier_iter_first(&gpusvm->root, fault_addr, fault_addr + 1);
-}
-
/**
* to_drm_gpusvm_notifier() - retrieve the container struct for a given rbtree node
* @node: a pointer to the rbtree node embedded within a drm_gpusvm_notifier struct
@@ -943,7 +870,7 @@ drm_gpusvm_range_find_or_insert(struct drm_gpusvm *gpusvm,
if (!mmget_not_zero(mm))
return ERR_PTR(-EFAULT);
- notifier = drm_gpusvm_notifier_find(gpusvm, fault_addr);
+ notifier = drm_gpusvm_notifier_find(gpusvm, fault_addr, fault_addr + 1);
if (!notifier) {
notifier = drm_gpusvm_notifier_alloc(gpusvm, fault_addr);
if (IS_ERR(notifier)) {
@@ -1107,7 +1034,8 @@ void drm_gpusvm_range_remove(struct drm_gpusvm *gpusvm,
drm_gpusvm_driver_lock_held(gpusvm);
notifier = drm_gpusvm_notifier_find(gpusvm,
- drm_gpusvm_range_start(range));
+ drm_gpusvm_range_start(range),
+ drm_gpusvm_range_start(range) + 1);
if (WARN_ON_ONCE(!notifier))
return;
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index 4aedc5423aff..142fc2af1716 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -282,6 +282,10 @@ void drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
bool drm_gpusvm_has_mapping(struct drm_gpusvm *gpusvm, unsigned long start,
unsigned long end);
+struct drm_gpusvm_notifier *
+drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm, unsigned long start,
+ unsigned long end);
+
struct drm_gpusvm_range *
drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
unsigned long end);
@@ -434,4 +438,70 @@ __drm_gpusvm_range_next(struct drm_gpusvm_range *range)
(range__) && (drm_gpusvm_range_start(range__) < (end__)); \
(range__) = __drm_gpusvm_range_next(range__))
+/**
+ * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
+ * @range__: Iterator variable for the ranges
+ * @next__: Iterator variable for the ranges temporay storage
+ * @notifier__: Pointer to the GPU SVM notifier
+ * @start__: Start address of the range
+ * @end__: End address of the range
+ *
+ * This macro is used to iterate over GPU SVM ranges in a notifier while
+ * removing ranges from it.
+ */
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
+
+/**
+ * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
+ * @notifier: a pointer to the current drm_gpusvm_notifier
+ *
+ * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
+ * the current notifier is the last one or if the input notifier is
+ * NULL.
+ */
+static inline struct drm_gpusvm_notifier *
+__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
+{
+ if (notifier && !list_is_last(¬ifier->entry,
+ ¬ifier->gpusvm->notifier_list))
+ return list_next_entry(notifier, entry);
+
+ return NULL;
+}
+
+/**
+ * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @gpusvm__: Pointer to the GPU SVM structure
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
+ */
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
+
+/**
+ * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @next__: Iterator variable for the notifiers temporary storage
+ * @gpusvm__: Pointer to the GPU SVM structure
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
+ * removing notifiers from it.
+ */
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
+
#endif /* __DRM_GPUSVM_H__ */
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 10/25] drm/xe/svm: Split system allocator vma incase of madvise call
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (8 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 09/25] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 11/25] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
` (19 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
If the start or end of the input address range lies within a system
allocator VMA, split the VMA to create new VMAs covering the input range.
v2 (Matthew Brost)
- Add lockdep_assert_write for vm->lock
- Remove unnecessary page aligned checks
- Add kernel-doc and comments
- Remove unnecessary unwind_ops and return
v3
- Fix copying of attributes
v4
- Nit fixes
v5
- Squash identifier for madvise in xe_vma_ops to this patch
v6
- Rebase on drm_gpuvm changes
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 109 +++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 +
drivers/gpu/drm/xe/xe_vm_types.h | 1 +
3 files changed, 112 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 4440557a3233..62230283c384 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4178,3 +4178,112 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
}
kvfree(snap);
}
+
+/**
+ * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits an existing VMA to create new VMAs for the user-provided input range
+ *
+ * Return: 0 on success
+ */
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = start,
+ .va.range = range,
+ .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
+ };
+
+ struct xe_vma_ops vops;
+ struct drm_gpuva_ops *ops = NULL;
+ struct drm_gpuva_op *__op;
+ bool is_cpu_addr_mirror = false;
+ bool remap_op = false;
+ struct xe_vma_mem_attr tmp_attr;
+ int err;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
+ ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
+ if (IS_ERR(ops))
+ return PTR_ERR(ops);
+
+ if (list_empty(&ops->list)) {
+ err = 0;
+ goto free_ops;
+ }
+
+ drm_gpuva_for_each_op(__op, ops) {
+ struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+
+ if (__op->op == DRM_GPUVA_OP_REMAP) {
+ xe_assert(vm->xe, !remap_op);
+ remap_op = true;
+
+ if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
+ is_cpu_addr_mirror = true;
+ else
+ is_cpu_addr_mirror = false;
+ }
+
+ if (__op->op == DRM_GPUVA_OP_MAP) {
+ xe_assert(vm->xe, remap_op);
+ remap_op = false;
+
+ /* In case of madvise ops DRM_GPUVA_OP_MAP is always after
+ * DRM_GPUVA_OP_REMAP, so ensure we assign op->map.is_cpu_addr_mirror true
+ * if REMAP is for xe_vma_is_cpu_addr_mirror vma
+ */
+ op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
+ }
+
+ print_op(vm->xe, __op);
+ }
+
+ xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+ vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
+ err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
+ if (err)
+ goto unwind_ops;
+
+ xe_vm_lock(vm, false);
+
+ drm_gpuva_for_each_op(__op, ops) {
+ struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+ struct xe_vma *vma;
+
+ if (__op->op == DRM_GPUVA_OP_UNMAP) {
+ /* There should be no unmap */
+ XE_WARN_ON("UNEXPECTED UNMAP");
+ xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
+ } else if (__op->op == DRM_GPUVA_OP_REMAP) {
+ vma = gpuva_to_vma(op->base.remap.unmap->va);
+ /* Store attributes for REMAP UNMAPPED VMA, so they can be assigned
+ * to newly MAP created vma.
+ */
+ tmp_attr = vma->attr;
+ xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
+ } else if (__op->op == DRM_GPUVA_OP_MAP) {
+ vma = op->map.vma;
+			/* In case of a madvise call, MAP is always preceded by REMAP.
+			 * Therefore tmp_attr will always have sane values, making it safe to
+			 * copy them to the new vma.
+			 */
+ vma->attr = tmp_attr;
+ }
+ }
+
+ xe_vm_unlock(vm);
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return 0;
+
+unwind_ops:
+ vm_bind_ioctl_ops_unwind(vm, &ops, 1);
+free_ops:
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return err;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 3475a118f666..0d6b08cc4163 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+
/**
* to_userptr_vma() - Return a pointer to an embedding userptr vma
* @vma: Pointer to the embedded struct xe_vma
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index c30f404a00e3..cd94d8b5819d 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -495,6 +495,7 @@ struct xe_vma_ops {
struct xe_vm_pgtable_update_ops pt_update_ops[XE_MAX_TILES_PER_DEVICE];
/** @flag: signify the properties within xe_vma_ops*/
#define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH BIT(0)
+#define XE_VMA_OPS_FLAG_MADVISE BIT(1)
u32 flags;
#ifdef TEST_VM_OPS_ERROR
/** @inject_error: inject error to test error handling */
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 11/25] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (9 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 10/25] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 4:00 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 12/25] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping Himal Prasad Ghimiray
` (18 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
In the case of the MADVISE ioctl, if the start or end address falls
within a VMA and existing SVM ranges are present, remove the existing
SVM mappings. Then continue with ops_parse, which creates new VMAs by
REMAP-unmapping the old one.
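The overlap test that decides whether an SVM range has to be torn down can be sketched in plain C. This is an illustrative model, not driver code: a range must be unmapped when the madvise boundary cuts into it, i.e. the madvise interval does not fully cover the range, mirroring the check in xe_svm_unmap_address_range().

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the range check in xe_svm_unmap_address_range():
 * unmap a range when the madvise start or end address falls inside it. */
static bool range_needs_unmap(uint64_t range_start, uint64_t range_end,
			      uint64_t start, uint64_t end)
{
	return start > range_start || end < range_end;
}
```

A range fully enclosed by the madvise interval is left alone; only partially covered ranges are evicted and garbage-collected.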
v2 (Matthew Brost)
- Use vops flag to call unmapping of ranges in vm_bind_ioctl_ops_parse
- Rename the function
v3
- Fix doc
v4
- check if range is already in garbage collector (Matthew Brost)
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 35 +++++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_svm.h | 7 +++++++
drivers/gpu/drm/xe/xe_vm.c | 8 ++++++--
3 files changed, 48 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 10c8a1bcb86e..c2a5eda504bb 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -919,6 +919,41 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
}
+/**
+ * xe_svm_unmap_address_range - Unmap SVM mappings and ranges
+ * @vm: The VM
+ * @start: start addr
+ * @end: end addr
+ *
+ * This function unmaps svm ranges if the start or end address falls inside them.
+ */
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpusvm_notifier *notifier, *next;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
+ struct drm_gpusvm_range *range, *__next;
+
+ drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
+ if (start > drm_gpusvm_range_start(range) ||
+ end < drm_gpusvm_range_end(range)) {
+ if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
+ drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
+ drm_gpusvm_range_get(range);
+ __xe_svm_garbage_collector(vm, to_xe_range(range));
+ if (!list_empty(&to_xe_range(range)->garbage_collector_link)) {
+ spin_lock(&vm->svm.garbage_collector.lock);
+ list_del(&to_xe_range(range)->garbage_collector_link);
+ spin_unlock(&vm->svm.garbage_collector.lock);
+ }
+ drm_gpusvm_range_put(range);
+ }
+ }
+ }
+}
+
/**
* xe_svm_bo_evict() - SVM evict BO to system memory
* @bo: BO to evict
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index da9a69ea0bb1..754d56b4d255 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -90,6 +90,8 @@ bool xe_svm_range_validate(struct xe_vm *vm,
u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vma);
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -303,6 +305,11 @@ u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vm
return ULONG_MAX;
}
+static inline
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
+{
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 62230283c384..d039779412b3 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2669,8 +2669,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
end = op->base.remap.next->va.addr;
if (xe_vma_is_cpu_addr_mirror(old) &&
- xe_svm_has_mapping(vm, start, end))
- return -EBUSY;
+ xe_svm_has_mapping(vm, start, end)) {
+ if (vops->flags & XE_VMA_OPS_FLAG_MADVISE)
+ xe_svm_unmap_address_range(vm, start, end);
+ else
+ return -EBUSY;
+ }
op->remap.start = xe_vma_start(old);
op->remap.range = xe_vma_size(old);
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 12/25] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (10 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 11/25] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 13/25] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
` (17 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Introduce xe_svm_ranges_zap_ptes_in_range(), a function to zap page table
entries (PTEs) for all SVM ranges within a user-specified address range.
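The tile-mask bookkeeping the new function performs can be modeled in a few lines of standalone C. This is a toy sketch with illustrative names, not the driver's structures: each range that still has PTEs present on a tile contributes that tile's bit to the mask of tiles requiring TLB invalidation.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_TILES 4

/* Toy model of the tile_mask accumulation in
 * xe_svm_ranges_zap_ptes_in_range(): for every range, each tile with
 * live PTEs (a set bit in the per-range presence mask) is added to the
 * mask of tiles that need a TLB invalidation. */
static uint8_t zap_collect_tile_mask(const uint8_t *range_tile_present,
				     int num_ranges)
{
	uint8_t tile_mask = 0;
	int i, id;

	for (i = 0; i < num_ranges; i++)
		for (id = 0; id < MAX_TILES; id++)
			if (range_tile_present[i] & (1u << id))
				tile_mask |= 1u << id;

	return tile_mask;
}
```

The caller then issues a single ranged TLB invalidation for the accumulated mask instead of one per range.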
-v2 (Matthew Brost)
Lock should be called even for tlb_invalidation
v3(Matthew Brost)
- Update comment
- s/notifier->itree.start/drm_gpusvm_notifier_start
- s/notifier->itree.last + 1/drm_gpusvm_notifier_end
- use WRITE_ONCE
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 14 ++++++++++-
drivers/gpu/drm/xe/xe_svm.c | 50 +++++++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_svm.h | 8 ++++++
3 files changed, 71 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 9128b35ccb3b..593fef438cd8 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -950,7 +950,19 @@ bool xe_pt_zap_ptes_range(struct xe_tile *tile, struct xe_vm *vm,
struct xe_pt *pt = vm->pt_root[tile->id];
u8 pt_mask = (range->tile_present & ~range->tile_invalidated);
- xe_svm_assert_in_notifier(vm);
+ /*
+ * Locking rules:
+ *
+ * - notifier_lock (write): full protection against page table changes
+ * and MMU notifier invalidations.
+ *
+ * - notifier_lock (read) + vm_lock (write): combined protection against
+ * invalidations and concurrent page table modifications. (e.g., madvise)
+ *
+ */
+ lockdep_assert(lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 0) ||
+ (lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 1) &&
+ lockdep_is_held_type(&vm->lock, 0)));
if (!(pt_mask & BIT(tile->id)))
return false;
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index c2a5eda504bb..1d0b444bf2ae 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1018,6 +1018,56 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
return err;
}
+/**
+ * xe_svm_ranges_zap_ptes_in_range - clear ptes of svm ranges in input range
+ * @vm: Pointer to the xe_vm structure
+ * @start: Start of the input range
+ * @end: End of the input range
+ *
+ * This function removes the page table entries (PTEs) associated
+ * with the svm ranges within the given input start and end
+ *
+ * Return: tile_mask of tiles whose GTs need TLB invalidation.
+ */
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpusvm_notifier *notifier;
+ struct xe_svm_range *range;
+ u64 adj_start, adj_end;
+ struct xe_tile *tile;
+ u8 tile_mask = 0;
+ u8 id;
+
+ lockdep_assert(lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 1) &&
+ lockdep_is_held_type(&vm->lock, 0));
+
+ drm_gpusvm_for_each_notifier(notifier, &vm->svm.gpusvm, start, end) {
+ struct drm_gpusvm_range *r = NULL;
+
+ adj_start = max(start, drm_gpusvm_notifier_start(notifier));
+ adj_end = min(end, drm_gpusvm_notifier_end(notifier));
+ drm_gpusvm_for_each_range(r, notifier, adj_start, adj_end) {
+ range = to_xe_range(r);
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes_range(tile, vm, range)) {
+ tile_mask |= BIT(id);
+ /*
+ * WRITE_ONCE pairs with READ_ONCE in
+ * xe_vm_has_valid_gpu_mapping().
+ * Must not fail after setting
+ * tile_invalidated and before
+ * TLB invalidation.
+ */
+ WRITE_ONCE(range->tile_invalidated,
+ range->tile_invalidated | BIT(id));
+ }
+ }
+ }
+ }
+
+ return tile_mask;
+}
+
#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 754d56b4d255..b0da0e85f0b8 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -92,6 +92,8 @@ u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *v
void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -310,6 +312,12 @@ void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
{
}
+static inline
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ return 0;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 13/25] drm/xe: Implement madvise ioctl for xe
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (11 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 12/25] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 4:43 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
` (16 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Shuicheng Lin
This driver-specific ioctl enables UMDs to control the memory attributes
for GPU VMAs within a specified input range. If the start or end
addresses fall within an existing VMA, the VMA is split accordingly. The
attributes of the VMA are updated as specified by the user. The old
mappings of the VMAs are invalidated, and TLB invalidation is performed
if necessary.
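The attribute dispatch at the heart of the ioctl can be sketched as a validated function table. This is an illustrative stand-in (enum values and handler names are hypothetical, not the drm_xe uapi): the type is range-checked first, as madvise_args_are_sane() does, and the kernel additionally clamps it with array_index_nospec() before indexing.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical attribute types standing in for DRM_XE_MEM_RANGE_ATTR_*. */
enum attr_type {
	ATTR_PREFERRED_LOC,
	ATTR_ATOMIC,
	ATTR_PAT,
	ATTR_COUNT,
};

typedef int (*madvise_fn)(void);

static int do_preferred_loc(void) { return 1; }
static int do_atomic(void)        { return 2; }
static int do_pat(void)           { return 3; }

/* Function table indexed by the (validated) attribute type. */
static const madvise_fn handlers[ATTR_COUNT] = {
	[ATTR_PREFERRED_LOC] = do_preferred_loc,
	[ATTR_ATOMIC]        = do_atomic,
	[ATTR_PAT]           = do_pat,
};

static int dispatch_madvise(unsigned int type)
{
	if (type >= ATTR_COUNT)	/* rejected by argument sanitization */
		return -1;
	return handlers[type]();
}
```

Keeping each handler fail-safe (validation happens before the vfunc is called) is what lets the table stay free of per-entry error paths.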
v2(Matthew brost)
- xe_vm_in_fault_mode can't be enabled by Mesa, hence allow ioctl in non
fault mode too
- fix tlb invalidation skip for same ranges in multiple op
- use helper for tlb invalidation
- use xe_svm_notifier_lock/unlock helper
- s/lockdep_assert_held/lockdep_assert_held_write
- Add kernel-doc
v3(Matthew Brost)
- make vfunc fail safe
- Add sanitizing input args before vfunc
v4(Matthew Brost/Shuicheng)
- Make locks interruptable
- Error handling fixes
- vm_put fixes
v5(Matthew Brost)
- Flush garbage collector before any locking.
- Add check for null vma
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_vm_madvise.c | 308 +++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 15 ++
3 files changed, 324 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 8e0c3412a757..d0ea869fcd24 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -128,6 +128,7 @@ xe-y += xe_bb.o \
xe_uc.o \
xe_uc_fw.o \
xe_vm.o \
+ xe_vm_madvise.o \
xe_vram.o \
xe_vram_freq.o \
xe_vsec.o \
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
new file mode 100644
index 000000000000..b861c3349b0a
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -0,0 +1,308 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_vm_madvise.h"
+
+#include <linux/nospec.h>
+#include <drm/xe_drm.h>
+
+#include "xe_bo.h"
+#include "xe_pt.h"
+#include "xe_svm.h"
+
+struct xe_vmas_in_madvise_range {
+ u64 addr;
+ u64 range;
+ struct xe_vma **vmas;
+ int num_vmas;
+ bool has_svm_vmas;
+ bool has_bo_vmas;
+ bool has_userptr_vmas;
+};
+
+static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
+{
+ u64 addr = madvise_range->addr;
+ u64 range = madvise_range->range;
+
+ struct xe_vma **__vmas;
+ struct drm_gpuva *gpuva;
+ int max_vmas = 8;
+
+ lockdep_assert_held(&vm->lock);
+
+ madvise_range->num_vmas = 0;
+ madvise_range->vmas = kmalloc_array(max_vmas, sizeof(*madvise_range->vmas), GFP_KERNEL);
+ if (!madvise_range->vmas)
+ return -ENOMEM;
+
+ vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (xe_vma_bo(vma))
+ madvise_range->has_bo_vmas = true;
+ else if (xe_vma_is_cpu_addr_mirror(vma))
+ madvise_range->has_svm_vmas = true;
+ else if (xe_vma_is_userptr(vma))
+ madvise_range->has_userptr_vmas = true;
+
+ if (madvise_range->num_vmas == max_vmas) {
+ max_vmas <<= 1;
+ __vmas = krealloc(madvise_range->vmas,
+ max_vmas * sizeof(*madvise_range->vmas),
+ GFP_KERNEL);
+ if (!__vmas) {
+ kfree(madvise_range->vmas);
+ return -ENOMEM;
+ }
+ madvise_range->vmas = __vmas;
+ }
+
+ madvise_range->vmas[madvise_range->num_vmas] = vma;
+ (madvise_range->num_vmas)++;
+ }
+
+ if (!madvise_range->num_vmas)
+ kfree(madvise_range->vmas);
+
+	vm_dbg(&vm->xe->drm, "madvise_range->num_vmas = %d\n", madvise_range->num_vmas);
+
+ return 0;
+}
+
+static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op)
+{
+ /* Implementation pending */
+}
+
+static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op)
+{
+ /* Implementation pending */
+}
+
+static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op)
+{
+ /* Implementation pending */
+}
+
+typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op);
+
+static const madvise_func madvise_funcs[] = {
+ [DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
+ [DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
+ [DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
+};
+
+static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpuva *gpuva;
+ struct xe_tile *tile;
+ u8 id, tile_mask;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ /* Wait for pending binds */
+ if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
+ false, MAX_SCHEDULE_TIMEOUT) <= 0)
+ XE_WARN_ON(1);
+
+ tile_mask = xe_svm_ranges_zap_ptes_in_range(vm, start, end);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (xe_vma_is_cpu_addr_mirror(vma) || xe_vma_is_null(vma))
+ continue;
+
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes(tile, vma)) {
+ tile_mask |= BIT(id);
+
+ /*
+ * WRITE_ONCE pairs with READ_ONCE
+ * in xe_vm_has_valid_gpu_mapping()
+ */
+ WRITE_ONCE(vma->tile_invalidated,
+ vma->tile_invalidated | BIT(id));
+ }
+ }
+ }
+
+ return tile_mask;
+}
+
+static int xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ u8 tile_mask = xe_zap_ptes_in_madvise_range(vm, start, end);
+
+ if (!tile_mask)
+ return 0;
+
+ xe_device_wmb(vm->xe);
+
+ return xe_vm_range_tilemask_tlb_invalidation(vm, start, end, tile_mask);
+}
+
+static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madvise *args)
+{
+ if (XE_IOCTL_DBG(xe, !args))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->start, SZ_4K)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->range, SZ_4K)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->range < SZ_4K))
+ return false;
+
+ switch (args->type) {
+ case DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC:
+ if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
+ DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.pad))
+ return false;
+
+		if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.reserved))
+ return false;
+ break;
+ case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
+ if (XE_IOCTL_DBG(xe, args->atomic.val > DRM_XE_ATOMIC_CPU))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->atomic.pad))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->atomic.reserved))
+ return false;
+
+ break;
+ case DRM_XE_MEM_RANGE_ATTR_PAT:
+ /*TODO: Add valid pat check */
+ break;
+ default:
+ if (XE_IOCTL_DBG(xe, 1))
+ return false;
+ }
+
+ if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
+ return false;
+
+ return true;
+}
+
+/**
+ * xe_vm_madvise_ioctl - Handle madvise ioctl for a VM
+ * @dev: DRM device pointer
+ * @data: Pointer to ioctl data (drm_xe_madvise*)
+ * @file: DRM file pointer
+ *
+ * Handles the madvise ioctl to provide memory advice for VMAs within the
+ * input range.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_madvise *args = data;
+ struct xe_vmas_in_madvise_range madvise_range = {.addr = args->start,
+ .range = args->range, };
+ struct xe_vm *vm;
+ struct drm_exec exec;
+ int err, attr_type;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ if (!madvise_args_are_sane(vm->xe, args)) {
+ err = -EINVAL;
+ goto put_vm;
+ }
+
+ xe_svm_flush(vm);
+
+ err = down_write_killable(&vm->lock);
+ if (err)
+ goto put_vm;
+
+ if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
+ err = -ENOENT;
+ goto unlock_vm;
+ }
+
+ err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
+ if (err)
+ goto unlock_vm;
+
+ err = get_vmas(vm, &madvise_range);
+ if (err || !madvise_range.num_vmas)
+ goto unlock_vm;
+
+ if (madvise_range.has_bo_vmas) {
+ drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
+ drm_exec_until_all_locked(&exec) {
+ for (int i = 0; i < madvise_range.num_vmas; i++) {
+ struct xe_bo *bo = xe_vma_bo(madvise_range.vmas[i]);
+
+ if (!bo)
+ continue;
+ err = drm_exec_lock_obj(&exec, &bo->ttm.base);
+ drm_exec_retry_on_contention(&exec);
+ if (err)
+ goto err_fini;
+ }
+ }
+ }
+
+ if (madvise_range.has_userptr_vmas) {
+ err = down_read_interruptible(&vm->userptr.notifier_lock);
+ if (err)
+ goto err_fini;
+ }
+
+ if (madvise_range.has_svm_vmas) {
+ err = down_read_interruptible(&vm->svm.gpusvm.notifier_lock);
+ if (err)
+ goto unlock_userptr;
+ }
+
+ attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
+ madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args);
+
+ err = xe_vm_invalidate_madvise_range(vm, args->start, args->start + args->range);
+
+ if (madvise_range.has_svm_vmas)
+ xe_svm_notifier_unlock(vm);
+
+unlock_userptr:
+ if (madvise_range.has_userptr_vmas)
+ up_read(&vm->userptr.notifier_lock);
+err_fini:
+ if (madvise_range.has_bo_vmas)
+ drm_exec_fini(&exec);
+ kfree(madvise_range.vmas);
+ madvise_range.vmas = NULL;
+unlock_vm:
+ up_write(&vm->lock);
+put_vm:
+ xe_vm_put(vm);
+ return err;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
new file mode 100644
index 000000000000..b0e1fc445f23
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_VM_MADVISE_H_
+#define _XE_VM_MADVISE_H_
+
+struct drm_device;
+struct drm_file;
+
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file);
+
+#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (12 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 13/25] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 20:03 ` Matthew Brost
2025-08-05 20:10 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 15/25] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
` (15 subsequent siblings)
29 siblings, 2 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
If the platform does not support atomic access on system memory, the
ranges are in system memory, and the user requires atomic access on
the VMA, then migrate the ranges to VRAM. Apply this policy to prefetch
operations as well.
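The migration decision introduced here (xe_vma_need_vram_for_atomic() in the diff below) can be modeled as a pure function over the platform capabilities and the per-VMA atomic attribute. A minimal sketch, with illustrative enum names mirroring the DRM_XE_ATOMIC_* semantics; -1 stands in for the kernel's -EINVAL:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for DRM_XE_ATOMIC_*. */
enum atomic_access {
	ATOMIC_UNDEFINED,
	ATOMIC_DEVICE,
	ATOMIC_GLOBAL,
	ATOMIC_CPU,
};

/* Model of xe_vma_need_vram_for_atomic():
 *  1  -> migrate to VRAM,
 *  0  -> no migration needed,
 * -1  -> invalid access for the atomic attribute (-EINVAL in the kernel). */
static int need_vram_for_atomic(bool is_dgfx, bool atomics_on_smem,
				bool is_atomic_fault, enum atomic_access attr)
{
	if (!is_dgfx || !is_atomic_fault)
		return 0;

	switch (attr) {
	case ATOMIC_DEVICE:
		/* Device atomics work from SMEM only if the HW supports it. */
		return !atomics_on_smem;
	case ATOMIC_CPU:
		/* GPU atomic access is disallowed when the attr is CPU. */
		return -1;
	case ATOMIC_UNDEFINED:
	case ATOMIC_GLOBAL:
	default:
		return 1;
	}
}
```

On integrated parts or non-atomic faults the question never arises, which is why the function short-circuits to 0 before inspecting the attribute.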
v2
- Drop unnecessary vm_dbg
v3 (Matthew Brost)
- fix atomic policy
- prefetch shouldn't have any impact of atomic
- bo can be accessed from vma, avoid duplicate parameter
v4 (Matthew Brost)
- Remove TODO comment
- Fix comment
- Don't allow gpu atomic ops when user is setting atomic attr as CPU
v5 (Matthew Brost)
- Fix atomic checks
- Add userptr checks
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 23 ++++++++++--------
drivers/gpu/drm/xe/xe_svm.c | 8 ++++--
drivers/gpu/drm/xe/xe_vm.c | 39 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 ++
drivers/gpu/drm/xe/xe_vm_madvise.c | 15 +++++++++++-
5 files changed, 74 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 593fef438cd8..6f5b384991cd 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
* - In all other cases device atomics will be disabled with AE=0 until an application
* request differently using a ioctl like madvise.
*/
-static bool xe_atomic_for_vram(struct xe_vm *vm)
+static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
{
+ if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
+ return false;
+
return true;
}
-static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
+static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
{
struct xe_device *xe = vm->xe;
+ struct xe_bo *bo = xe_vma_bo(vma);
- if (!xe->info.has_device_atomics_on_smem)
+ if (!xe->info.has_device_atomics_on_smem ||
+ vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
return false;
+ if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
+ return true;
+
/*
* If a SMEM+LMEM allocation is backed by SMEM, a device
* atomics will cause a gpu page fault and which then
* gets migrated to LMEM, bind such allocations with
* device atomics enabled.
- *
- * TODO: Revisit this. Perhaps add something like a
- * fault_on_atomics_in_system UAPI flag.
- * Note that this also prohibits GPU atomics in LR mode for
- * userptr and system memory on DGFX.
*/
return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
(bo && xe_bo_has_single_placement(bo))));
@@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
goto walk_pt;
if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
- xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
- xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
+ xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
+ xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
XE_USM_PPGTT_PTE_AE : 0;
}
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 1d0b444bf2ae..5e78beebe114 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -793,14 +793,18 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
struct xe_gt *gt, u64 fault_addr,
bool atomic)
{
+ int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
+
+ if (need_vram < 0)
+ return need_vram;
+
struct drm_gpusvm_ctx ctx = {
.read_only = xe_vma_read_only(vma),
.devmem_possible = IS_DGFX(vm->xe) &&
IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
.check_pages_threshold = IS_DGFX(vm->xe) &&
IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
- .devmem_only = atomic && IS_DGFX(vm->xe) &&
- IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
+ .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
.timeslice_ms = atomic && IS_DGFX(vm->xe) &&
IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
vm->xe->atomic_svm_timeslice_ms : 0,
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index d039779412b3..463736db19d9 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4183,6 +4183,45 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
kvfree(snap);
}
+/**
+ * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
+ * @xe: Pointer to the XE device structure
+ * @vma: Pointer to the virtual memory area (VMA) structure
+ * @is_atomic: Whether this is an atomic access in the pagefault path
+ *
+ * This function determines whether the given VMA needs to be migrated to
+ * VRAM in order to perform atomic GPU operations.
+ *
+ * Return:
+ * 1 - Migration to VRAM is required
+ * 0 - Migration is not required
+ * -EINVAL - Invalid access for atomic memory attr
+ *
+ */
+int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
+{
+ if (!IS_DGFX(xe) || !is_atomic)
+ return 0;
+
+ /*
+ * NOTE: The checks implemented here are platform-specific. For
+ * instance, on a device supporting CXL atomics, these would ideally
+ * work universally without additional handling.
+ */
+ switch (vma->attr.atomic_access) {
+ case DRM_XE_ATOMIC_DEVICE:
+ return !xe->info.has_device_atomics_on_smem;
+
+ case DRM_XE_ATOMIC_CPU:
+ return -EINVAL;
+
+ case DRM_XE_ATOMIC_UNDEFINED:
+ case DRM_XE_ATOMIC_GLOBAL:
+ default:
+ return 1;
+ }
+}
+
/**
* xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
* @vm: Pointer to the xe_vm structure
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 0d6b08cc4163..05ac3118d9f4 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
+int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
+
int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
/**
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index b861c3349b0a..a53b63dd603d 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -85,7 +85,20 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
+ xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
+
+ for (i = 0; i < num_vmas; i++) {
+ if (xe_vma_is_userptr(vmas[i])) {
+ if (!(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
+ xe->info.has_device_atomics_on_smem))
+ continue;
+ }
+ vmas[i]->attr.atomic_access = op->atomic.val;
+ /*TODO: handle bo backed vmas */
+ }
}
static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 15/25] drm/xe/madvise: Update migration policy based on preferred location
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (13 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 16/25] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute Himal Prasad Ghimiray
` (14 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
When the user sets a valid devmem_fd as the preferred location, a GPU
fault will trigger migration to the tile of the device associated with
that devmem_fd. If the user sets an invalid devmem_fd, the preferred
location is the current placement (smem) only.
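The resolution rule can be sketched as a small pure function. This is a hedged model: the sentinel values below are hypothetical placeholders for DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM / DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE, whose real values live in the drm_xe uapi headers, and the pagemap kinds are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sentinels; the real values come from the drm_xe uapi. */
#define PREFERRED_LOC_DEFAULT_SYSTEM	(-1)
#define PREFERRED_LOC_DEFAULT_DEVICE	(0)

enum pagemap_kind { PAGEMAP_NONE, PAGEMAP_LOCAL_VRAM };

/* Model of xe_vma_resolve_pagemap(): the system sentinel means "no
 * pagemap, keep faulting into SMEM"; the device sentinel resolves to
 * the faulting tile's VRAM pagemap on discrete GPUs only. Arbitrary
 * fds are a future multi-device extension (drm_pagemap_from_fd()). */
static enum pagemap_kind resolve_pagemap(int32_t devmem_fd, bool is_dgfx)
{
	if (devmem_fd == PREFERRED_LOC_DEFAULT_SYSTEM)
		return PAGEMAP_NONE;

	if (devmem_fd == PREFERRED_LOC_DEFAULT_DEVICE)
		return is_dgfx ? PAGEMAP_LOCAL_VRAM : PAGEMAP_NONE;

	/* Unrecognized fd: fall back to system memory for now. */
	return PAGEMAP_NONE;
}
```

The fault handler then treats a non-NULL pagemap (here PAGEMAP_LOCAL_VRAM) as "this range wants to migrate to VRAM", feeding the same needs-migrate check the prefetch path uses.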
v2(Matthew Brost)
- Default should be faulting tile
- remove devmem_fd used as region
v3 (Matthew Brost)
- Add migration_policy
- Fix return condition
- fix migrate condition
v4
-Rebase
v5
- Add check for userptr and bo based vmas
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 45 +++++++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_svm.h | 8 ++++++
drivers/gpu/drm/xe/xe_vm_madvise.c | 25 ++++++++++++++++-
3 files changed, 76 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 5e78beebe114..aef76e08b460 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -811,6 +811,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
};
struct xe_svm_range *range;
struct dma_fence *fence;
+ struct drm_pagemap *dpagemap;
struct xe_tile *tile = gt_to_tile(gt);
int migrate_try_count = ctx.devmem_only ? 3 : 1;
ktime_t end = 0;
@@ -840,8 +841,14 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
range_debug(range, "PAGE FAULT");
+ dpagemap = xe_vma_resolve_pagemap(vma, tile);
if (--migrate_try_count >= 0 &&
- xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
+ xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) {
+ /* TODO : For multi-device dpagemap will be used to find the
+ * remote tile and remote device. Will need to modify
+ * xe_svm_alloc_vram to use dpagemap for future multi-device
+ * support.
+ */
err = xe_svm_alloc_vram(tile, range, &ctx);
ctx.timeslice_ms <<= 1; /* Double timeslice if we have to retry */
if (err) {
@@ -1079,6 +1086,37 @@ static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
return &tile->mem.vram->dpagemap;
}
+/**
+ * xe_vma_resolve_pagemap - Resolve the appropriate DRM pagemap for a VMA
+ * @vma: Pointer to the xe_vma structure containing memory attributes
+ * @tile: Pointer to the xe_tile structure used as fallback for VRAM mapping
+ *
+ * This function determines the correct DRM pagemap to use for a given VMA.
+ * It first checks if a valid devmem_fd is provided in the VMA's preferred
+ * location. If the devmem_fd is DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM, it
+ * returns NULL, indicating no pagemap is available and smem is to be
+ * used as the preferred location. If the devmem_fd is
+ * DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE, it returns the VRAM pagemap of
+ * the faulting tile on dGFX, or NULL otherwise.
+ *
+ * Future support for multi-device configurations may use drm_pagemap_from_fd()
+ * to resolve pagemaps from arbitrary file descriptors.
+ *
+ * Return: A pointer to the resolved drm_pagemap, or NULL if none is applicable.
+ */
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+ s32 fd = (s32)vma->attr.preferred_loc.devmem_fd;
+
+ if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM)
+ return NULL;
+
+ if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE)
+ return IS_DGFX(tile_to_xe(tile)) ? tile_local_pagemap(tile) : NULL;
+
+ /* TODO: Support multi-device with drm_pagemap_from_fd(fd) */
+ return NULL;
+}
+
/**
* xe_svm_alloc_vram()- Allocate device memory pages for range,
* migrating existing data.
@@ -1191,6 +1229,11 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
{
return 0;
}
+
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+ return NULL;
+}
#endif
/**
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index b0da0e85f0b8..494823afaa98 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -94,6 +94,8 @@ void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -318,6 +320,12 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
return 0;
}
+static inline
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+ return NULL;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index a53b63dd603d..207cb3a6e220 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -78,7 +78,23 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC);
+
+ for (i = 0; i < num_vmas; i++) {
+ /*TODO: Extend attributes to bo based vmas */
+ if (!xe_vma_is_cpu_addr_mirror(vmas[i]))
+ continue;
+
+ vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
+
+ /* Till multi-device support is not added migration_policy
+ * is of no use and can be ignored.
+ */
+ vmas[i]->attr.preferred_loc.migration_policy =
+ op->preferred_mem_loc.migration_policy;
+ }
}
static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
@@ -184,6 +200,12 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
switch (args->type) {
case DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC:
+ {
+ s32 fd = (s32)args->preferred_mem_loc.devmem_fd;
+
+ if (XE_IOCTL_DBG(xe, fd < DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM))
+ return false;
+
if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
return false;
@@ -194,6 +216,7 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
if (XE_IOCTL_DBG(xe, args->atomic.reserved))
return false;
break;
+ }
case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
if (XE_IOCTL_DBG(xe, args->atomic.val > DRM_XE_ATOMIC_CPU))
return false;
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 16/25] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (14 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 15/25] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 17/25] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
` (13 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
This attribute sets the pat_index for the SVM VMA range, which is
used to determine coherence.
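The pat_index sanity check added below can be illustrated with a
standalone sketch. The toy coherency table and the COH_* constants are
illustrative only; the real driver queries the per-platform PAT table
via xe_pat_index_get_coh_mode() and uses the XE_COH_* values:

```c
#include <assert.h>

/* Illustrative stand-ins for the XE_COH_* coherency modes. */
#define COH_NONE		1
#define COH_AT_LEAST_1WAY	2

/* Toy PAT table: index -> coherency mode, 0 meaning "invalid index". */
static unsigned short toy_coh_mode(unsigned int pat_index)
{
	static const unsigned short table[] = { COH_NONE, COH_NONE,
						COH_AT_LEAST_1WAY };

	if (pat_index >= sizeof(table) / sizeof(table[0]))
		return 0;
	return table[pat_index];
}

/*
 * Mirrors the DRM_XE_MEM_RANGE_ATTR_PAT branch of
 * madvise_args_are_sane(): the index must map to a known coherency
 * mode, and the pad/reserved fields must be zero.
 */
static int pat_args_are_sane(unsigned int pat_index, unsigned int pad,
			     unsigned long reserved)
{
	unsigned short coh = toy_coh_mode(pat_index);

	if (!coh || coh > COH_AT_LEAST_1WAY)
		return 0;

	return !pad && !reserved;
}
```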
v2 (Matthew Brost)
- Pat index sanity check
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 207cb3a6e220..51a9364abc72 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -9,6 +9,7 @@
#include <drm/xe_drm.h>
#include "xe_bo.h"
+#include "xe_pat.h"
#include "xe_pt.h"
#include "xe_svm.h"
@@ -121,7 +122,13 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PAT);
+
+ for (i = 0; i < num_vmas; i++)
+ vmas[i]->attr.pat_index = op->pat_index.val;
+
}
typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
@@ -229,8 +236,22 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
break;
case DRM_XE_MEM_RANGE_ATTR_PAT:
- /*TODO: Add valid pat check */
+ {
+ u16 coh_mode = xe_pat_index_get_coh_mode(xe, args->pat_index.val);
+
+ if (XE_IOCTL_DBG(xe, !coh_mode))
+ return false;
+
+ if (XE_WARN_ON(coh_mode > XE_COH_AT_LEAST_1WAY))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->pat_index.pad))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->pat_index.reserved))
+ return false;
break;
+ }
default:
if (XE_IOCTL_DBG(xe, 1))
return false;
--
2.34.1
* [PATCH v5 17/25] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (15 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 16/25] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 18/25] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
` (12 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Introduce the DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC flag to ensure
prefetching occurs in the memory region advised by madvise.
v2 (Matthew Brost)
- Add kernel-doc
v3 (Matthew Brost)
- Fix kernel-doc
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
include/uapi/drm/xe_drm.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 4e6e9a9164ee..115b9bca2a25 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1010,6 +1010,10 @@ struct drm_xe_vm_destroy {
* valid on VMs with DRM_XE_VM_CREATE_FLAG_FAULT_MODE set. The CPU address
* mirror flag are only valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
* handle MBZ, and the BO offset MBZ.
+ *
+ * The @prefetch_mem_region_instance for %DRM_XE_VM_BIND_OP_PREFETCH can also be:
+ * - %DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, which ensures prefetching occurs in
+ * the memory region advised by madvise.
*/
struct drm_xe_vm_bind_op {
/** @extensions: Pointer to the first extension struct, if any */
@@ -1115,6 +1119,7 @@ struct drm_xe_vm_bind_op {
/** @flags: Bind flags */
__u32 flags;
+#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC -1
/**
* @prefetch_mem_region_instance: Memory region to prefetch VMA to.
* It is a region instance, not a mask.
--
2.34.1
* [PATCH v5 18/25] drm/xe/svm: Consult madvise preferred location in prefetch
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (16 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 17/25] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 19/25] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
` (11 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
When the prefetch region is DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, prefetch
SVM ranges to the preferred location provided by madvise.
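The target-selection change in prefetch_ranges() can be sketched as a
standalone decision function. The constant and the enum are
illustrative stand-ins (the real code resolves a drm_pagemap and a
struct xe_tile, and today always uses the root tile when madvise
provides a pagemap):

```c
#include <assert.h>

/* Stand-in for DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC. */
#define CONSULT_MADVISE_PREF_LOC	(-1)

enum target { TARGET_SMEM, TARGET_ROOT_TILE, TARGET_REGION_TILE };

/*
 * Mirrors prefetch_ranges(): with the consult flag, the madvise
 * preferred location decides (no pagemap -> smem, otherwise the root
 * tile for now); with an explicit region, 0 means smem and a non-zero
 * region selects that VRAM region's tile.
 */
static enum target pick_prefetch_target(int region, int madvise_has_pagemap)
{
	if (region == CONSULT_MADVISE_PREF_LOC)
		return madvise_has_pagemap ? TARGET_ROOT_TILE : TARGET_SMEM;

	return region ? TARGET_REGION_TILE : TARGET_SMEM;
}
```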
v2 (Matthew Brost)
- Fix region, devmem_fd usages
- consult madvise is applicable for other vma's too.
v3
- Fix atomic handling
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 463736db19d9..d57fc1071142 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -38,6 +38,7 @@
#include "xe_res_cursor.h"
#include "xe_svm.h"
#include "xe_sync.h"
+#include "xe_tile.h"
#include "xe_trace_bo.h"
#include "xe_wa.h"
#include "xe_hmm.h"
@@ -2913,15 +2914,28 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
int err = 0;
struct xe_svm_range *svm_range;
+ struct drm_pagemap *dpagemap;
struct drm_gpusvm_ctx ctx = {};
- struct xe_tile *tile;
+ struct xe_tile *tile = NULL;
unsigned long i;
u32 region;
if (!xe_vma_is_cpu_addr_mirror(vma))
return 0;
- region = op->prefetch_range.region;
+ if (op->prefetch_range.region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) {
+ dpagemap = xe_vma_resolve_pagemap(vma, xe_device_get_root_tile(vm->xe));
+ /*
+ * TODO: Once multigpu support is enabled will need
+ * something to dereference tile from dpagemap.
+ */
+ if (dpagemap)
+ tile = xe_device_get_root_tile(vm->xe);
+ } else {
+ region = op->prefetch_range.region;
+ if (region)
+ tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
+ }
ctx.read_only = xe_vma_read_only(vma);
ctx.devmem_possible = devmem_possible;
@@ -2929,11 +2943,10 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
/* TODO: Threading the migration */
xa_for_each(&op->prefetch_range.range, i, svm_range) {
- if (!region)
+ if (!tile)
xe_svm_range_migrate_to_smem(vm, svm_range);
- if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
- tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
+ if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, !!tile)) {
err = xe_svm_alloc_vram(tile, svm_range, &ctx);
if (err) {
drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
@@ -3001,7 +3014,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
else
region = op->prefetch.region;
- xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
+ xe_assert(vm->xe, region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC ||
+ region <= ARRAY_SIZE(region_to_mem_type));
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.prefetch.va),
@@ -3419,8 +3433,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
op == DRM_XE_VM_BIND_OP_PREFETCH) ||
XE_IOCTL_DBG(xe, prefetch_region &&
op != DRM_XE_VM_BIND_OP_PREFETCH) ||
- XE_IOCTL_DBG(xe, !(BIT(prefetch_region) &
- xe->info.mem_region_mask)) ||
+ XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
+ !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
XE_IOCTL_DBG(xe, obj &&
op == DRM_XE_VM_BIND_OP_UNMAP)) {
err = -EINVAL;
--
2.34.1
* [PATCH v5 19/25] drm/xe/bo: Add attributes field to xe_bo
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (17 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 18/25] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 20/25] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
` (10 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
A single BO can be linked to multiple VMAs, making VMA attributes
insufficient for determining the placement and PTE update attributes
of the BO. To address this, an attributes field has been added to the
BO.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_bo_types.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index cf604adc13a3..314652afdca7 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -61,6 +61,14 @@ struct xe_bo {
*/
struct list_head client_link;
#endif
+ /** @attr: User controlled attributes for bo */
+ struct {
+ /**
+ * @atomic_access: type of atomic access bo needs
+ * protected by bo dma-resv lock
+ */
+ u32 atomic_access;
+ } attr;
/**
* @pxp_key_instance: PXP key instance this BO was created against. A
* 0 in this variable indicates that the BO does not use PXP encryption.
--
2.34.1
* [PATCH v5 20/25] drm/xe/bo: Update atomic_access attribute on madvise
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (18 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 19/25] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 20:06 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 21/25] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged Himal Prasad Ghimiray
` (9 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Update the BO's atomic_access attribute based on user-provided input and
determine whether migration to smem is needed during a CPU fault.
v2 (Matthew Brost)
- Avoid cpu unmapping if bo is already in smem
- check atomics on smem too for ioctl
- Add comments
v3
- Avoid migration in prefetch
v4 (Matthew Brost)
- make sanity check function bool
- add assert for smem placement
- fix doc
v5 (Matthew Brost)
- NACK atomic fault with DRM_XE_ATOMIC_CPU
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 29 ++++++++++++--
drivers/gpu/drm/xe/xe_gt_pagefault.c | 35 ++++++----------
drivers/gpu/drm/xe/xe_vm.c | 7 +++-
drivers/gpu/drm/xe/xe_vm_madvise.c | 60 +++++++++++++++++++++++++++-
4 files changed, 103 insertions(+), 28 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index ffca1cea5585..6ab297f94d12 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1709,6 +1709,18 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
}
}
+static bool should_migrate_to_smem(struct xe_bo *bo)
+{
+ /*
+ * NOTE: The following atomic checks are platform-specific. For example,
+ * if a device supports CXL atomics, these may not be necessary or
+ * may behave differently.
+ */
+
+ return bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL ||
+ bo->attr.atomic_access == DRM_XE_ATOMIC_CPU;
+}
+
static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
{
struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
@@ -1717,7 +1729,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
struct xe_bo *bo = ttm_to_xe_bo(tbo);
bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
vm_fault_t ret;
- int idx;
+ int idx, r = 0;
if (needs_rpm)
xe_pm_runtime_get(xe);
@@ -1729,8 +1741,19 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
if (drm_dev_enter(ddev, &idx)) {
trace_xe_bo_cpu_fault(bo);
- ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
- TTM_BO_VM_NUM_PREFAULT);
+ if (should_migrate_to_smem(bo)) {
+ xe_assert(xe, bo->flags & XE_BO_FLAG_SYSTEM);
+
+ r = xe_bo_migrate(bo, XE_PL_TT);
+ if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
+ ret = VM_FAULT_NOPAGE;
+ else if (r)
+ ret = VM_FAULT_SIGBUS;
+ }
+ if (!ret)
+ ret = ttm_bo_vm_fault_reserved(vmf,
+ vmf->vma->vm_page_prot,
+ TTM_BO_VM_NUM_PREFAULT);
drm_dev_exit(idx);
} else {
ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index ab43dec52776..4ea30fbce9bd 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -75,7 +75,7 @@ static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
}
static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
- bool atomic, struct xe_vram_region *vram)
+ bool need_vram_move, struct xe_vram_region *vram)
{
struct xe_bo *bo = xe_vma_bo(vma);
struct xe_vm *vm = xe_vma_vm(vma);
@@ -85,26 +85,13 @@ static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
if (err)
return err;
- if (atomic && vram) {
- xe_assert(vm->xe, IS_DGFX(vm->xe));
+ if (!bo)
+ return 0;
- if (xe_vma_is_userptr(vma)) {
- err = -EACCES;
- return err;
- }
+ err = need_vram_move ? xe_bo_migrate(bo, vram->placement) :
+ xe_bo_validate(bo, vm, true);
- /* Migrate to VRAM, move should invalidate the VMA first */
- err = xe_bo_migrate(bo, vram->placement);
- if (err)
- return err;
- } else if (bo) {
- /* Create backing store if needed */
- err = xe_bo_validate(bo, vm, true);
- if (err)
- return err;
- }
-
- return 0;
+ return err;
}
static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
@@ -115,10 +102,14 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
struct drm_exec exec;
struct dma_fence *fence;
ktime_t end = 0;
- int err;
+ int err, needs_vram;
lockdep_assert_held_write(&vm->lock);
+ needs_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
+ if (needs_vram < 0 || (needs_vram && xe_vma_is_userptr(vma)))
+ return needs_vram < 0 ? needs_vram : -EACCES;
+
xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_COUNT, 1);
xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_KB, xe_vma_size(vma) / 1024);
@@ -141,7 +132,7 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
/* Lock VM and BOs dma-resv */
drm_exec_init(&exec, 0, 0);
drm_exec_until_all_locked(&exec) {
- err = xe_pf_begin(&exec, vma, atomic, tile->mem.vram);
+ err = xe_pf_begin(&exec, vma, needs_vram == 1, tile->mem.vram);
drm_exec_retry_on_contention(&exec);
if (xe_vm_validate_should_retry(&exec, err, &end))
err = -EAGAIN;
@@ -576,7 +567,7 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc)
/* Lock VM and BOs dma-resv */
drm_exec_init(&exec, 0, 0);
drm_exec_until_all_locked(&exec) {
- ret = xe_pf_begin(&exec, vma, true, tile->mem.vram);
+ ret = xe_pf_begin(&exec, vma, IS_DGFX(vm->xe), tile->mem.vram);
drm_exec_retry_on_contention(&exec);
if (ret)
break;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index d57fc1071142..0774b40bc37b 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4214,15 +4214,18 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
*/
int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
{
+ u32 atomic_access = xe_vma_bo(vma) ? xe_vma_bo(vma)->attr.atomic_access :
+ vma->attr.atomic_access;
+
if (!IS_DGFX(xe) || !is_atomic)
- return 0;
+ return false;
/*
* NOTE: The checks implemented here are platform-specific. For
* instance, on a device supporting CXL atomics, these would ideally
* work universally without additional handling.
*/
- switch (vma->attr.atomic_access) {
+ switch (atomic_access) {
case DRM_XE_ATOMIC_DEVICE:
return !xe->info.has_device_atomics_on_smem;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 51a9364abc72..16ab1267ad21 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -102,6 +102,7 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
+ struct xe_bo *bo;
int i;
xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
@@ -113,8 +114,21 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
xe->info.has_device_atomics_on_smem))
continue;
}
+
vmas[i]->attr.atomic_access = op->atomic.val;
- /*TODO: handle bo backed vmas */
+
+ bo = xe_vma_bo(vmas[i]);
+ if (!bo)
+ continue;
+
+ xe_bo_assert_held(bo);
+ bo->attr.atomic_access = op->atomic.val;
+
+ /* Invalidate the CPU page table so the bo can migrate to smem on next access */
+ if (xe_bo_is_vram(bo) &&
+ (bo->attr.atomic_access == DRM_XE_ATOMIC_CPU ||
+ bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL))
+ ttm_bo_unmap_virtual(&bo->ttm);
}
}
@@ -263,6 +277,41 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
return true;
}
+static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
+ int num_vmas, u32 atomic_val)
+{
+ struct xe_device *xe = vm->xe;
+ struct xe_bo *bo;
+ int i;
+
+ for (i = 0; i < num_vmas; i++) {
+ bo = xe_vma_bo(vmas[i]);
+ if (!bo)
+ continue;
+ /*
+ * NOTE: The following atomic checks are platform-specific. For example,
+ * if a device supports CXL atomics, these may not be necessary or
+ * may behave differently.
+ */
+ if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_CPU &&
+ !(bo->flags & XE_BO_FLAG_SYSTEM)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_DEVICE &&
+ !(bo->flags & XE_BO_FLAG_VRAM0) &&
+ !(bo->flags & XE_BO_FLAG_VRAM1) &&
+ !(bo->flags & XE_BO_FLAG_SYSTEM &&
+ xe->info.has_device_atomics_on_smem)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_GLOBAL &&
+ (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
+ (!(bo->flags & XE_BO_FLAG_VRAM0) &&
+ !(bo->flags & XE_BO_FLAG_VRAM1)))))
+ return false;
+ }
+ return true;
+}
/**
* xe_vm_madvise_ioctl - Handle MADVise ioctl for a VM
* @dev: DRM device pointer
@@ -314,6 +363,15 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
goto unlock_vm;
if (madvise_range.has_bo_vmas) {
+ if (args->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC) {
+ if (!check_bo_args_are_sane(vm, madvise_range.vmas,
+ madvise_range.num_vmas,
+ args->atomic.val)) {
+ err = -EINVAL;
+ goto unlock_vm;
+ }
+ }
+
drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
drm_exec_until_all_locked(&exec) {
for (int i = 0; i < madvise_range.num_vmas; i++) {
--
2.34.1
* [PATCH v5 21/25] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (19 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 20/25] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 20:57 ` kernel test robot
2025-07-30 13:00 ` [PATCH v5 22/25] drm/xe/vm: Add helper to check for default VMA memory attributes Himal Prasad Ghimiray
` (8 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
If a VMA within the madvise input range already has the same memory
attribute as the one requested by the user, skip PTE zapping for that
VMA to avoid unnecessary invalidation.
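The compare-then-skip pattern applied by the madvise_* helpers can be
sketched with a simplified attribute bundle (the real struct
xe_vma_mem_attr carries more fields, and the flag lives on the VMA as
skip_invalidation):

```c
#include <assert.h>

/* Simplified attribute bundle for illustration only. */
struct attrs {
	int devmem_fd;
	unsigned int atomic_access;
	unsigned short pat_index;
};

/*
 * Mirrors the madvise_* helpers: only update (and later zap PTEs for)
 * a VMA whose stored attributes actually differ from the request.
 * Returns 1 when invalidation can be skipped.
 */
static int apply_attr(struct attrs *cur, const struct attrs *req)
{
	if (cur->devmem_fd == req->devmem_fd &&
	    cur->atomic_access == req->atomic_access &&
	    cur->pat_index == req->pat_index)
		return 1;	/* unchanged: skip_invalidation = true */

	*cur = *req;
	return 0;		/* changed: PTEs must be zapped */
}
```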
v2 (Matthew Brost)
- fix skip_invalidation for new attributes
- s/u32/bool
- Remove unnecessary assignment for kzalloc'ed
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 60 +++++++++++++++++++++---------
drivers/gpu/drm/xe/xe_vm_types.h | 6 +++
2 files changed, 49 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 16ab1267ad21..99fec6793e41 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -85,16 +85,24 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
for (i = 0; i < num_vmas; i++) {
/*TODO: Extend attributes to bo based vmas */
- if (!xe_vma_is_cpu_addr_mirror(vmas[i]))
+ if (!xe_vma_is_cpu_addr_mirror(vmas[i])) {
+ vmas[i]->skip_invalidation = true;
continue;
+ }
- vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
-
- /* Till multi-device support is not added migration_policy
- * is of no use and can be ignored.
- */
- vmas[i]->attr.preferred_loc.migration_policy =
+ if (vmas[i]->attr.preferred_loc.devmem_fd == op->preferred_mem_loc.devmem_fd &&
+ vmas[i]->attr.preferred_loc.migration_policy ==
+ op->preferred_mem_loc.migration_policy) {
+ vmas[i]->skip_invalidation = true;
+ } else {
+ vmas[i]->skip_invalidation = false;
+ vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
+ /* Till multi-device support is not added migration_policy
+ * is of no use and can be ignored.
+ */
+ vmas[i]->attr.preferred_loc.migration_policy =
op->preferred_mem_loc.migration_policy;
+ }
}
}
@@ -111,8 +119,17 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
for (i = 0; i < num_vmas; i++) {
if (xe_vma_is_userptr(vmas[i])) {
if (!(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
- xe->info.has_device_atomics_on_smem))
+ xe->info.has_device_atomics_on_smem)) {
+ vmas[i]->skip_invalidation = true;
continue;
+ }
+ }
+
+ if (vmas[i]->attr.atomic_access == op->atomic.val) {
+ vmas[i]->skip_invalidation = true;
+ } else {
+ vmas[i]->skip_invalidation = false;
+ vmas[i]->attr.atomic_access = op->atomic.val;
}
vmas[i]->attr.atomic_access = op->atomic.val;
@@ -140,9 +157,14 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PAT);
- for (i = 0; i < num_vmas; i++)
- vmas[i]->attr.pat_index = op->pat_index.val;
-
+ for (i = 0; i < num_vmas; i++) {
+ if (vmas[i]->attr.pat_index == op->pat_index.val) {
+ vmas[i]->skip_invalidation = true;
+ } else {
+ vmas[i]->skip_invalidation = false;
+ vmas[i]->attr.pat_index = op->pat_index.val;
+ }
+ }
}
typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
@@ -168,17 +190,20 @@ static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
false, MAX_SCHEDULE_TIMEOUT) <= 0)
XE_WARN_ON(1);
- tile_mask = xe_svm_ranges_zap_ptes_in_range(vm, start, end);
-
drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
struct xe_vma *vma = gpuva_to_vma(gpuva);
- if (xe_vma_is_cpu_addr_mirror(vma) || xe_vma_is_null(vma))
+ if (vma->skip_invalidation || xe_vma_is_null(vma))
continue;
- for_each_tile(tile, vm->xe, id) {
- if (xe_pt_zap_ptes(tile, vma)) {
- tile_mask |= BIT(id);
+ if (xe_vma_is_cpu_addr_mirror(vma)) {
+ tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
+ xe_vma_start(vma),
+ xe_vma_end(vma));
+ } else {
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes(tile, vma)) {
+ tile_mask |= BIT(id);
/*
* WRITE_ONCE pairs with READ_ONCE
@@ -186,6 +211,7 @@ static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
*/
WRITE_ONCE(vma->tile_invalidated,
vma->tile_invalidated | BIT(id));
+ }
}
}
}
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index cd94d8b5819d..81d92d886578 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -157,6 +157,12 @@ struct xe_vma {
/** @tile_staged: bind is staged for this VMA */
u8 tile_staged;
+ /**
+ * @skip_invalidation: Used in madvise to avoid invalidation
+ * if mem attributes don't change
+ */
+ bool skip_invalidation;
+
/**
* @ufence: The user fence that was provided with MAP.
* Needs to be signalled before UNMAP can be processed.
--
2.34.1
* [PATCH v5 22/25] drm/xe/vm: Add helper to check for default VMA memory attributes
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (20 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 21/25] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 23/25] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
` (7 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Introduce a new helper function `xe_vma_has_default_mem_attrs()` to
determine whether a VMA's memory attributes are set to their default
values. This includes checks for atomic access, PAT index, and preferred
location.
Also, add a new field `default_pat_index` to `struct xe_vma_mem_attr`
to track the initial PAT index set during the first bind. This helps
distinguish between the default and a user-modified pat_index, such as
one changed via madvise.
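The helper's check can be sketched as follows. The constants are
illustrative stand-ins for the real uAPI values (DRM_XE_ATOMIC_UNDEFINED,
DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE, DRM_XE_MIGRATE_ALL_PAGES):

```c
#include <assert.h>

/* Illustrative stand-ins for the xe_drm.h default values. */
#define ATOMIC_UNDEFINED	0
#define LOC_DEFAULT_DEVICE	0
#define MIGRATE_ALL_PAGES	0

struct mem_attr {
	unsigned int atomic_access;
	unsigned short pat_index;
	unsigned short default_pat_index;
	int devmem_fd;
	unsigned int migration_policy;
};

/*
 * Mirrors xe_vma_has_default_mem_attrs(): a VMA is "default" when no
 * madvise has changed anything, with default_pat_index remembering
 * what the first bind chose.
 */
static int has_default_mem_attrs(const struct mem_attr *a)
{
	return a->atomic_access == ATOMIC_UNDEFINED &&
	       a->pat_index == a->default_pat_index &&
	       a->devmem_fd == LOC_DEFAULT_DEVICE &&
	       a->migration_policy == MIGRATE_ALL_PAGES;
}
```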
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 24 ++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 ++
drivers/gpu/drm/xe/xe_vm_types.h | 6 ++++++
3 files changed, 32 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 0774b40bc37b..5ee38e9cf6c6 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2598,6 +2598,29 @@ static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
return err;
}
+/**
+ * xe_vma_has_default_mem_attrs - Check if a VMA has default memory attributes
+ * @vma: Pointer to the xe_vma structure to check
+ *
+ * This function determines whether the given VMA (Virtual Memory Area)
+ * has its memory attributes set to their default values. Specifically,
+ * it checks the following conditions:
+ *
+ * - `atomic_access` is `DRM_XE_VMA_ATOMIC_UNDEFINED`
+ * - `pat_index` is equal to `default_pat_index`
+ * - `preferred_loc.devmem_fd` is `DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE`
+ * - `preferred_loc.migration_policy` is `DRM_XE_MIGRATE_ALL_PAGES`
+ *
+ * Return: true if all attributes are at their default values, false otherwise.
+ */
+bool xe_vma_has_default_mem_attrs(struct xe_vma *vma)
+{
+ return (vma->attr.atomic_access == DRM_XE_ATOMIC_UNDEFINED &&
+ vma->attr.pat_index == vma->attr.default_pat_index &&
+ vma->attr.preferred_loc.devmem_fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE &&
+ vma->attr.preferred_loc.migration_policy == DRM_XE_MIGRATE_ALL_PAGES);
+}
+
static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
struct xe_vma_ops *vops)
{
@@ -2630,6 +2653,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
},
.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ .default_pat_index = op->map.pat_index,
.pat_index = op->map.pat_index,
};
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 05ac3118d9f4..f735d994806d 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -66,6 +66,8 @@ static inline bool xe_vm_is_closed_or_banned(struct xe_vm *vm)
struct xe_vma *
xe_vm_find_overlapping_vma(struct xe_vm *vm, u64 start, u64 range);
+bool xe_vma_has_default_mem_attrs(struct xe_vma *vma);
+
/**
* xe_vm_has_scratch() - Whether the vm is configured for scratch PTEs
* @vm: The vm
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 81d92d886578..351242c92c12 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -103,8 +103,14 @@ struct xe_vma_mem_attr {
*/
u32 atomic_access;
+ /**
+ * @default_pat_index: The PAT index set by the user during the first bind of the VMA.
+ */
+ u16 default_pat_index;
+
/**
* @pat_index: The pat index to use when encoding the PTEs for this vma.
+ * Same as @default_pat_index unless overwritten by madvise.
*/
u16 pat_index;
};
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 23/25] drm/xe: Reset VMA attributes to default in SVM garbage collector
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (21 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 22/25] drm/xe/vm: Add helper to check for default VMA memory attributes Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-06 4:06 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 24/25] drm/xe: Enable madvise ioctl for xe Himal Prasad Ghimiray
` (6 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Restore default memory attributes for VMAs during garbage collection
if they were modified by madvise. Reuse existing VMA if fully overlapping;
otherwise, allocate a new mirror VMA.
v2 (Matthew Brost)
- Add helper for vma split
- Add retry to get updated vma
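The reuse-or-split decision described above can be sketched with simplified stand-in types (the enum and `classify()` helper are hypothetical, not part of the driver): a freed range exactly covering the VMA lets the attributes be reset in place, while a partial overlap requires carving out a new CPU-address-mirror VMA.

```c
#include <stdbool.h>
#include <stdint.h>

struct range { uint64_t start, end; };

enum gc_action { GC_NONE, GC_RESET_IN_PLACE, GC_SPLIT_VMA };

/*
 * Mirrors the decision in xe_svm_range_set_default_attr(): nothing to
 * restore if the attributes are already default; reset in place on an
 * exact overlap; otherwise split out a new mirror VMA for the range.
 */
static enum gc_action classify(const struct range *vma,
			       const struct range *freed,
			       bool vma_has_default_attrs)
{
	if (vma_has_default_attrs)
		return GC_NONE;
	if (vma->start == freed->start && vma->end == freed->end)
		return GC_RESET_IN_PLACE;
	return GC_SPLIT_VMA;
}
```

In the driver both non-`GC_NONE` outcomes additionally signal -EAGAIN so the page-fault path re-looks-up the (possibly replaced) VMA.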
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 114 +++++++++++++++++++++-----
drivers/gpu/drm/xe/xe_vm.c | 155 ++++++++++++++++++++++++++----------
drivers/gpu/drm/xe/xe_vm.h | 2 +
3 files changed, 206 insertions(+), 65 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index aef76e08b460..9b3a3f61758c 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -253,9 +253,55 @@ static int __xe_svm_garbage_collector(struct xe_vm *vm,
return 0;
}
+static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64 range_end)
+{
+ struct xe_vma *vma;
+ struct xe_vma_mem_attr default_attr = {
+ .preferred_loc = {
+ .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
+ .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
+ },
+ .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ };
+ int err = 0;
+
+ vma = xe_vm_find_vma_by_addr(vm, range_start);
+ if (!vma)
+ return -EINVAL;
+
+ if (xe_vma_has_default_mem_attrs(vma))
+ return 0;
+
+ vm_dbg(&vm->xe->drm, "Existing VMA start=0x%016llx, vma_end=0x%016llx",
+ xe_vma_start(vma), xe_vma_end(vma));
+
+ if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
+ default_attr.pat_index = vma->attr.default_pat_index;
+ default_attr.default_pat_index = vma->attr.default_pat_index;
+ vma->attr = default_attr;
+ } else {
+ vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
+ range_start, range_end);
+ err = xe_vm_alloc_cpu_addr_mirror_vma(vm, range_start, range_end - range_start);
+ if (err) {
+ drm_warn(&vm->xe->drm, "VMA SPLIT failed: %pe\n", ERR_PTR(err));
+ xe_vm_kill(vm, true);
+ return err;
+ }
+ }
+
+ /*
+ * When called from xe_svm_handle_pagefault() the original VMA might
+ * have changed; signal this so the caller looks up the VMA again.
+ */
+ return -EAGAIN;
+}
+
static int xe_svm_garbage_collector(struct xe_vm *vm)
{
struct xe_svm_range *range;
+ u64 range_start;
+ u64 range_end;
int err;
lockdep_assert_held_write(&vm->lock);
@@ -271,6 +317,9 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
if (!range)
break;
+ range_start = xe_svm_range_start(range);
+ range_end = xe_svm_range_end(range);
+
list_del(&range->garbage_collector_link);
spin_unlock(&vm->svm.garbage_collector.lock);
@@ -283,6 +332,10 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
return err;
}
+ err = xe_svm_range_set_default_attr(vm, range_start, range_end);
+ if (err)
+ return err;
+
spin_lock(&vm->svm.garbage_collector.lock);
}
spin_unlock(&vm->svm.garbage_collector.lock);
@@ -793,40 +846,59 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
struct xe_gt *gt, u64 fault_addr,
bool atomic)
{
- int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
-
- if (need_vram < 0)
- return need_vram;
-
- struct drm_gpusvm_ctx ctx = {
- .read_only = xe_vma_read_only(vma),
- .devmem_possible = IS_DGFX(vm->xe) &&
- IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
- .check_pages_threshold = IS_DGFX(vm->xe) &&
- IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
- .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
- .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
- IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
- vm->xe->atomic_svm_timeslice_ms : 0,
- };
+ struct drm_gpusvm_ctx ctx = { };
+ struct drm_pagemap *dpagemap;
struct xe_svm_range *range;
struct dma_fence *fence;
- struct drm_pagemap *dpagemap;
struct xe_tile *tile = gt_to_tile(gt);
- int migrate_try_count = ctx.devmem_only ? 3 : 1;
+ bool vma_updated = false;
+ int need_vram;
+ int migrate_try_count;
ktime_t end = 0;
int err;
- lockdep_assert_held_write(&vm->lock);
+find_vma:
+ if (vma_updated) {
+ vma = xe_vm_find_vma_by_addr(vm, fault_addr);
+ if (!vma)
+ return -EINVAL;
+ }
+
xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
+ vma_updated = false;
+
+ need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
+ if (need_vram < 0)
+ return need_vram;
+
+ ctx.read_only = xe_vma_read_only(vma);
+ ctx.devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
+ ctx.check_pages_threshold = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
+ SZ_64K : 0;
+ ctx.devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
+ ctx.timeslice_ms = atomic && IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
+ vm->xe->atomic_svm_timeslice_ms : 0;
+ migrate_try_count = ctx.devmem_only ? 3 : 1;
+
+ lockdep_assert_held_write(&vm->lock);
xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
retry:
/* Always process UNMAPs first so view SVM ranges is current */
err = xe_svm_garbage_collector(vm);
- if (err)
- return err;
+ if (err) {
+ if (err == -EAGAIN) {
+ /*
+ * VMA might have changed due to garbage
+ * collection; retry lookup
+ */
+ vma_updated = true;
+ goto find_vma;
+ } else {
+ return err;
+ }
+ }
range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5ee38e9cf6c6..e77c04f92d0b 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4263,36 +4263,24 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
}
}
-/**
- * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
- * @vm: Pointer to the xe_vm structure
- * @start: Starting input address
- * @range: Size of the input range
- *
- * This function splits existing vma to create new vma for user provided input range
- *
- * Return: 0 if success
- */
-int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+static int xe_vm_alloc_vma(struct xe_vm *vm, struct drm_gpuva_op_map *map_req)
{
- struct drm_gpuva_op_map map_req = {
- .va.addr = start,
- .va.range = range,
- .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
- };
-
struct xe_vma_ops vops;
struct drm_gpuva_ops *ops = NULL;
struct drm_gpuva_op *__op;
bool is_cpu_addr_mirror = false;
bool remap_op = false;
+ bool is_madvise = (map_req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
struct xe_vma_mem_attr tmp_attr;
+ u16 default_pat;
int err;
lockdep_assert_held_write(&vm->lock);
- vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
- ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
+ vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
+ map_req->va.addr, map_req->va.range);
+
+ ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, map_req);
if (IS_ERR(ops))
return PTR_ERR(ops);
@@ -4303,33 +4291,56 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
drm_gpuva_for_each_op(__op, ops) {
struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+ struct xe_vma *vma = NULL;
- if (__op->op == DRM_GPUVA_OP_REMAP) {
- xe_assert(vm->xe, !remap_op);
- remap_op = true;
+ if (!is_madvise) {
+ if (__op->op == DRM_GPUVA_OP_UNMAP) {
+ vma = gpuva_to_vma(op->base.unmap.va);
+ XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
+ default_pat = vma->attr.default_pat_index;
+ }
- if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
- is_cpu_addr_mirror = true;
- else
- is_cpu_addr_mirror = false;
- }
+ if (__op->op == DRM_GPUVA_OP_REMAP) {
+ vma = gpuva_to_vma(op->base.remap.unmap->va);
+ default_pat = vma->attr.default_pat_index;
+ }
- if (__op->op == DRM_GPUVA_OP_MAP) {
- xe_assert(vm->xe, remap_op);
- remap_op = false;
+ if (__op->op == DRM_GPUVA_OP_MAP) {
+ op->map.is_cpu_addr_mirror = true;
+ op->map.pat_index = default_pat;
+ }
+ } else {
+ if (__op->op == DRM_GPUVA_OP_REMAP) {
+ vma = gpuva_to_vma(op->base.remap.unmap->va);
+ xe_assert(vm->xe, !remap_op);
+ remap_op = true;
- /* In case of madvise ops DRM_GPUVA_OP_MAP is always after
- * DRM_GPUVA_OP_REMAP, so ensure we assign op->map.is_cpu_addr_mirror true
- * if REMAP is for xe_vma_is_cpu_addr_mirror vma
- */
- op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
- }
+ if (xe_vma_is_cpu_addr_mirror(vma))
+ is_cpu_addr_mirror = true;
+ else
+ is_cpu_addr_mirror = false;
+ }
+ if (__op->op == DRM_GPUVA_OP_MAP) {
+ xe_assert(vm->xe, remap_op);
+ remap_op = false;
+ /*
+ * In case of madvise ops DRM_GPUVA_OP_MAP is
+ * always after DRM_GPUVA_OP_REMAP, so ensure
+ * we assign op->map.is_cpu_addr_mirror true
+ * if REMAP is for xe_vma_is_cpu_addr_mirror vma
+ */
+ op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
+ }
+ }
print_op(vm->xe, __op);
}
xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
- vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
+
+ if (is_madvise)
+ vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
+
err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
if (err)
goto unwind_ops;
@@ -4341,15 +4352,20 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
struct xe_vma *vma;
if (__op->op == DRM_GPUVA_OP_UNMAP) {
- /* There should be no unmap */
- XE_WARN_ON("UNEXPECTED UNMAP");
- xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
+ vma = gpuva_to_vma(op->base.unmap.va);
+ /* There should be no unmap for madvise */
+ if (is_madvise)
+ XE_WARN_ON("UNEXPECTED UNMAP");
+
+ xe_vma_destroy(vma, NULL);
} else if (__op->op == DRM_GPUVA_OP_REMAP) {
vma = gpuva_to_vma(op->base.remap.unmap->va);
- /* Store attributes for REMAP UNMAPPED VMA, so they can be assigned
- * to newly MAP created vma.
+ /* For madvise ops, store attributes of the REMAP-unmapped VMA so
+ * they can be assigned to the newly created MAP vma.
*/
- tmp_attr = vma->attr;
+ if (is_madvise)
+ tmp_attr = vma->attr;
+
xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
} else if (__op->op == DRM_GPUVA_OP_MAP) {
vma = op->map.vma;
@@ -4357,7 +4373,8 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
* Therefore temp_attr will always have sane values, making it safe to
* copy them to new vma.
*/
- vma->attr = tmp_attr;
+ if (is_madvise)
+ vma->attr = tmp_attr;
}
}
@@ -4371,3 +4388,53 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
drm_gpuva_ops_free(&vm->gpuvm, ops);
return err;
}
+
+/**
+ * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits an existing vma to create a new vma for the user-provided input range.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = start,
+ .va.range = range,
+ .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
+ };
+
+ lockdep_assert_held_write(&vm->lock);
+
+ vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
+
+ return xe_vm_alloc_vma(vm, &map_req);
+}
+
+/**
+ * xe_vm_alloc_cpu_addr_mirror_vma - Allocate CPU addr mirror vma
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits/merges existing vmas to create a new vma for the user-provided input range.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+ struct drm_gpuva_op_map map_req = {
+ .va.addr = start,
+ .va.range = range,
+ };
+
+ lockdep_assert_held_write(&vm->lock);
+
+ vm_dbg(&vm->xe->drm, "CPU_ADDR_MIRROR_VMA_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
+ start, range);
+
+ return xe_vm_alloc_vma(vm, &map_req);
+}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index f735d994806d..6538cddf158b 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -177,6 +177,8 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+
/**
* to_userptr_vma() - Return a pointer to an embedding userptr vma
* @vma: Pointer to the embedded struct xe_vma
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 24/25] drm/xe: Enable madvise ioctl for xe
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (22 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 23/25] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 25/25] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes Himal Prasad Ghimiray
` (5 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The ioctl enables setting memory attributes on a user-provided address range.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index c754374f1a8d..80a77488381a 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -63,6 +63,7 @@
#include "xe_ttm_stolen_mgr.h"
#include "xe_ttm_sys_mgr.h"
#include "xe_vm.h"
+#include "xe_vm_madvise.h"
#include "xe_vram.h"
#include "xe_vram_types.h"
#include "xe_vsec.h"
@@ -201,6 +202,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v5 25/25] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (23 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 24/25] drm/xe: Enable madvise ioctl for xe Himal Prasad Ghimiray
@ 2025-07-30 13:00 ` Himal Prasad Ghimiray
2025-08-05 19:29 ` Matthew Brost
2025-07-30 14:20 ` ✗ CI.checkpatch: warning for MADVISE FOR XE (rev5) Patchwork
` (4 subsequent siblings)
29 siblings, 1 reply; 54+ messages in thread
From: Himal Prasad Ghimiray @ 2025-07-30 13:00 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Shuicheng Lin
Introduce the DRM_IOCTL_XE_VM_QUERY_MEMORY_RANGE_ATTRS ioctl to allow
userspace to query memory attributes of VMAs within a user specified
virtual address range.
Userspace first calls the ioctl with num_mem_ranges = 0,
sizeof_mem_range_attr = 0 and vector_of_mem_attr = NULL to retrieve
the number of memory ranges (vmas) and the size of each memory range attribute.
Then, it allocates a buffer of that size and calls the ioctl again to fill
the buffer with memory range attributes.
This two-step interface allows userspace to first query the required
buffer size, then retrieve detailed attributes efficiently.
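The two-call pattern can be modeled in a self-contained sketch. The `mock_query()` function below is a hypothetical stand-in for the ioctl (real userspace would call `ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &q)` twice, and the attribute layout here is simplified):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct attr { uint64_t start, end; };   /* simplified mem-range attribute */

struct query {
	uint32_t num;      /* num_mem_ranges */
	uint64_t elem_sz;  /* sizeof_mem_range_attr */
	void *buf;         /* vector_of_mem_attr */
};

/* Stand-in for the ioctl: sizing pass when buf is NULL, fill pass otherwise. */
static int mock_query(struct query *q)
{
	static const struct attr ranges[] = {
		{ 0x100000, 0x101000 },
		{ 0x101000, 0x102000 },
	};

	if (!q->buf) {
		q->num = 2;
		q->elem_sz = sizeof(struct attr);
		return 0;
	}
	if (q->num < 2)
		return -1; /* kernel returns -ENOSPC if ranges changed */
	memcpy(q->buf, ranges, sizeof(ranges));
	return 0;
}
```

The caller does a sizing pass, allocates `num * elem_sz` bytes, then repeats the call with the buffer attached; on -ENOSPC it restarts from the sizing pass.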
v2 (Matthew Brost)
- Use same ioctl to overload functionality
v3
- Add kernel-doc
v4
- Make uapi future proof by passing struct size (Matthew Brost)
- make lock interruptible (Matthew Brost)
- set reserved bits to zero (Matthew Brost)
- s/__copy_to_user/copy_to_user (Matthew Brost)
- Avoid using VMA term in uapi (Thomas)
- xe_vm_put(vm) is missing (Shuicheng)
v5
- Nits
- Fix kernel-doc
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 2 +
drivers/gpu/drm/xe/xe_vm.c | 102 ++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 +-
include/uapi/drm/xe_drm.h | 139 +++++++++++++++++++++++++++++++++
4 files changed, 244 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 80a77488381a..1e4334f8bdf4 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -203,6 +203,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS, xe_vm_query_vmas_attrs_ioctl,
+ DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index e77c04f92d0b..a3ca3041e812 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2171,6 +2171,108 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
return err;
}
+static int xe_vm_query_vmas(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpuva *gpuva;
+ u32 num_vmas = 0;
+
+ lockdep_assert_held(&vm->lock);
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end)
+ num_vmas++;
+
+ return num_vmas;
+}
+
+static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
+ u64 end, struct drm_xe_mem_range_attr *attrs)
+{
+ struct drm_gpuva *gpuva;
+ int i = 0;
+
+ lockdep_assert_held(&vm->lock);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (i == *num_vmas)
+ return -ENOSPC;
+
+ attrs[i].start = xe_vma_start(vma);
+ attrs[i].end = xe_vma_end(vma);
+ attrs[i].atomic.val = vma->attr.atomic_access;
+ attrs[i].pat_index.val = vma->attr.pat_index;
+ attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
+ attrs[i].preferred_mem_loc.migration_policy =
+ vma->attr.preferred_loc.migration_policy;
+
+ i++;
+ }
+
+ *num_vmas = i;
+ return 0;
+}
+
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_mem_range_attr *mem_attrs;
+ struct drm_xe_vm_query_mem_range_attr *args = data;
+ u64 __user *attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
+ struct xe_vm *vm;
+ int err = 0;
+
+ if (XE_IOCTL_DBG(xe,
+ ((args->num_mem_ranges == 0 &&
+ (attrs_user || args->sizeof_mem_range_attr != 0)) ||
+ (args->num_mem_ranges > 0 &&
+ (!attrs_user ||
+ args->sizeof_mem_range_attr !=
+ sizeof(struct drm_xe_mem_range_attr))))))
+ return -EINVAL;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ err = down_read_interruptible(&vm->lock);
+ if (err)
+ goto put_vm;
+
+ attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
+
+ if (args->num_mem_ranges == 0 && !attrs_user) {
+ args->num_mem_ranges = xe_vm_query_vmas(vm, args->start, args->start + args->range);
+ args->sizeof_mem_range_attr = sizeof(struct drm_xe_mem_range_attr);
+ goto unlock_vm;
+ }
+
+ mem_attrs = kvmalloc_array(args->num_mem_ranges, args->sizeof_mem_range_attr,
+ GFP_KERNEL | __GFP_ACCOUNT |
+ __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+ if (!mem_attrs) {
+ err = args->num_mem_ranges > 1 ? -ENOBUFS : -ENOMEM;
+ goto unlock_vm;
+ }
+
+ memset(mem_attrs, 0, args->num_mem_ranges * args->sizeof_mem_range_attr);
+ err = get_mem_attrs(vm, &args->num_mem_ranges, args->start,
+ args->start + args->range, mem_attrs);
+ if (err)
+ goto free_mem_attrs;
+
+ err = copy_to_user(attrs_user, mem_attrs,
+ args->sizeof_mem_range_attr * args->num_mem_ranges);
+
+free_mem_attrs:
+ kvfree(mem_attrs);
+unlock_vm:
+ up_read(&vm->lock);
+put_vm:
+ xe_vm_put(vm);
+ return err;
+}
+
static bool vma_matches(struct xe_vma *vma, u64 page_addr)
{
if (page_addr > xe_vma_end(vma) - 1 ||
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 6538cddf158b..3953b3ee2955 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -199,7 +199,7 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
-
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
void xe_vm_close_and_put(struct xe_vm *vm);
static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 115b9bca2a25..6b03f319ab70 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -82,6 +82,7 @@ extern "C" {
* - &DRM_IOCTL_XE_WAIT_USER_FENCE
* - &DRM_IOCTL_XE_OBSERVATION
* - &DRM_IOCTL_XE_MADVISE
+ * - &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
*/
/*
@@ -104,6 +105,7 @@ extern "C" {
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
#define DRM_XE_MADVISE 0x0c
+#define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
/* Must be kept compact -- no holes */
@@ -120,6 +122,7 @@ extern "C" {
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
+#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
/**
* DOC: Xe IOCTL Extensions
@@ -2113,6 +2116,142 @@ struct drm_xe_madvise {
__u64 reserved[2];
};
+/**
+ * struct drm_xe_mem_range_attr - Output of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is provided by userspace and filled by KMD in response to the
+ * DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl. It describes memory attributes of
+ * memory ranges within a user-specified address range in a VM.
+ *
+ * The structure includes information such as atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ * Userspace allocates an array of these structures and passes a pointer to the
+ * ioctl to retrieve attributes for each memory range.
+ *
+ * @extensions: Pointer to the first extension struct, if any
+ * @start: Start address of the memory range
+ * @end: End address of the memory range
+ *
+ */
+struct drm_xe_mem_range_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @start: start of the memory range */
+ __u64 start;
+
+ /** @end: end of the memory range */
+ __u64 end;
+
+ /** @preferred_mem_loc: preferred memory location */
+ struct {
+ /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+ __u32 devmem_fd;
+
+ /** @preferred_mem_loc.migration_policy: Page migration policy */
+ __u32 migration_policy;
+ } preferred_mem_loc;
+
+ /** @atomic: Atomic access policy */
+ struct {
+ /** @atomic.val: atomic attribute */
+ __u32 val;
+
+ /** @atomic.reserved: Reserved */
+ __u32 reserved;
+ } atomic;
+
+ /** @pat_index: Page attribute table index */
+ struct {
+ /** @pat_index.val: PAT index */
+ __u32 val;
+
+ /** @pat_index.reserved: Reserved */
+ __u32 reserved;
+ } pat_index;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_vm_query_mem_range_attr - Input of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is used to query memory attributes of memory regions
+ * within a user specified address range in a VM. It provides detailed
+ * information about each memory range, including atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ *
+ * Userspace first calls the ioctl with @num_mem_ranges = 0,
+ * @sizeof_mem_range_attr = 0 and @vector_of_mem_attr = NULL to retrieve
+ * the number of memory regions and the size of each memory range attribute.
+ * Then, it allocates a buffer of that size and calls the ioctl again to fill
+ * the buffer with memory range attributes.
+ *
+ * If the second call fails with -ENOSPC, the memory ranges changed between
+ * the first call and now; retry with @num_mem_ranges = 0,
+ * @sizeof_mem_range_attr = 0 and @vector_of_mem_attr = NULL, followed by
+ * the second ioctl call again.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ * struct drm_xe_vm_query_mem_range_attr query = {
+ * .vm_id = vm_id,
+ * .start = 0x100000,
+ * .range = 0x2000,
+ * };
+ *
+ * // First ioctl call to get num of mem regions and sizeof each attribute
+ * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ * // Allocate buffer for the memory region attributes
+ * void *ptr = malloc(query.num_mem_ranges * query.sizeof_mem_range_attr);
+ *
+ * query.vector_of_mem_attr = (uintptr_t)ptr;
+ *
+ * // Second ioctl call to actually fill the memory attributes
+ * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ * // Iterate over the returned memory region attributes
+ * for (unsigned int i = 0; i < query.num_mem_ranges; ++i) {
+ * struct drm_xe_mem_range_attr *attr = (struct drm_xe_mem_range_attr *)ptr;
+ *
+ * // Do something with attr
+ *
+ * // Move pointer by one entry
+ * ptr += query.sizeof_mem_range_attr;
+ * }
+ *
+ * free(ptr);
+ */
+struct drm_xe_vm_query_mem_range_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+ /** @num_mem_ranges: number of mem_ranges in range */
+ __u32 num_mem_ranges;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ /** @sizeof_mem_range_attr: size of struct drm_xe_mem_range_attr */
+ __u64 sizeof_mem_range_attr;
+
+ /** @vector_of_mem_attr: userptr to array of struct drm_xe_mem_range_attr */
+ __u64 vector_of_mem_attr;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+
+};
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 54+ messages in thread
* ✗ CI.checkpatch: warning for MADVISE FOR XE (rev5)
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (24 preceding siblings ...)
2025-07-30 13:00 ` [PATCH v5 25/25] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes Himal Prasad Ghimiray
@ 2025-07-30 14:20 ` Patchwork
2025-07-30 14:21 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2025-07-30 14:20 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev5)
URL : https://patchwork.freedesktop.org/series/149550/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
c298eac5978c38dcc62a70c0d73c91765e7cc296
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit ed376919d991746f4fc601438c06b540eab6c0b8
Author: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Date: Wed Jul 30 18:30:50 2025 +0530
drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
Introduce the DRM_IOCTL_XE_VM_QUERY_MEMORY_RANGE_ATTRS ioctl to allow
userspace to query memory attributes of VMAs within a user specified
virtual address range.
Userspace first calls the ioctl with num_mem_ranges = 0,
sizeof_mem_ranges_attr = 0 and vector_of_vma_mem_attr = NULL to retrieve
the number of memory ranges (vmas) and size of each memory range attribute.
Then, it allocates a buffer of that size and calls the ioctl again to fill
the buffer with memory range attributes.
This two-step interface allows userspace to first query the required
buffer size, then retrieve detailed attributes efficiently.
v2 (Matthew Brost)
- Use same ioctl to overload functionality
v3
- Add kernel-doc
v4
- Make uapi future proof by passing struct size (Matthew Brost)
- make lock interruptible (Matthew Brost)
- set reserved bits to zero (Matthew Brost)
- s/__copy_to_user/copy_to_user (Matthew Brost)
- Avod using VMA term in uapi (Thomas)
- xe_vm_put(vm) is missing (Shuicheng)
v5
- Nits
- Fix kernel-doc
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
+ /mt/dim checkpatch e1805ad9a7175457902ae453ea67b76194e7d796 drm-intel
b91873c9552d drm/gpuvm: Pass map arguments through a struct
6de08aa91c86 drm/gpuvm: Kill drm_gpuva_init()
4058df0bfbdb drm/gpuvm: Support flags in drm_gpuva_op_map
9baac96aa776 drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
1f8111893bc9 drm/xe/uapi: Add madvise interface
-:51: WARNING:LONG_LINE: line length of 113 exceeds 100 columns
#51: FILE: include/uapi/drm/xe_drm.h:122:
+#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
total: 0 errors, 1 warnings, 0 checks, 154 lines checked
b09c57f105b4 drm/xe/vm: Add attributes struct as member of vma
2e4cae7de8cb drm/xe/vma: Move pat_index to vma attributes
ba2bfc3c4d6b drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
c06b0a942ee0 drm/gpusvm: Make drm_gpusvm_for_each_* macros public
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'range__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:258: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#258: FILE: include/drm/drm_gpusvm.h:485:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:258: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#258: FILE: include/drm/drm_gpusvm.h:485:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:274: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#274: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:274: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#274: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:274: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#274: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
total: 0 errors, 0 warnings, 8 checks, 248 lines checked
f595e88579fc drm/xe/svm: Split system allocator vma incase of madvise call
426634c2b7c0 drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
a0745f87acff drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping
38fc628e62e5 drm/xe: Implement madvise ioctl for xe
-:52: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#52:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 330 lines checked
ed837df2018b drm/xe/svm : Add svm ranges migration policy on atomic access
10051fcb8dcb drm/xe/madvise: Update migration policy based on preferred location
64abeb25843c drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute
9cca8fdeb6ff drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
6fea34204cc2 drm/xe/svm: Consult madvise preferred location in prefetch
2a73a874beab drm/xe/bo: Add attributes field to xe_bo
447833f531bd drm/xe/bo: Update atomic_access attribute on madvise
6c6998b45036 drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
11f1e62e3e8d drm/xe/vm: Add helper to check for default VMA memory attributes
5731e82f8be3 drm/xe: Reset VMA attributes to default in SVM garbage collector
c175f8da8e1d drm/xe: Enable madvise ioctl for xe
ed376919d991 drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
-:209: WARNING:LONG_LINE: line length of 147 exceeds 100 columns
#209: FILE: include/uapi/drm/xe_drm.h:125:
+#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
total: 0 errors, 1 warnings, 0 checks, 287 lines checked
^ permalink raw reply [flat|nested] 54+ messages in thread
* ✓ CI.KUnit: success for MADVISE FOR XE (rev5)
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (25 preceding siblings ...)
2025-07-30 14:20 ` ✗ CI.checkpatch: warning for MADVISE FOR XE (rev5) Patchwork
@ 2025-07-30 14:21 ` Patchwork
2025-07-30 14:36 ` ✗ CI.checksparse: warning " Patchwork
` (2 subsequent siblings)
29 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2025-07-30 14:21 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev5)
URL : https://patchwork.freedesktop.org/series/149550/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[14:20:46] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:20:50] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:21:18] Starting KUnit Kernel (1/1)...
[14:21:18] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:21:18] ================== guc_buf (11 subtests) ===================
[14:21:18] [PASSED] test_smallest
[14:21:18] [PASSED] test_largest
[14:21:18] [PASSED] test_granular
[14:21:18] [PASSED] test_unique
[14:21:18] [PASSED] test_overlap
[14:21:18] [PASSED] test_reusable
[14:21:18] [PASSED] test_too_big
[14:21:18] [PASSED] test_flush
[14:21:18] [PASSED] test_lookup
[14:21:18] [PASSED] test_data
[14:21:18] [PASSED] test_class
[14:21:18] ===================== [PASSED] guc_buf =====================
[14:21:18] =================== guc_dbm (7 subtests) ===================
[14:21:18] [PASSED] test_empty
[14:21:18] [PASSED] test_default
[14:21:18] ======================== test_size ========================
[14:21:18] [PASSED] 4
[14:21:18] [PASSED] 8
[14:21:18] [PASSED] 32
[14:21:18] [PASSED] 256
[14:21:18] ==================== [PASSED] test_size ====================
[14:21:18] ======================= test_reuse ========================
[14:21:18] [PASSED] 4
[14:21:18] [PASSED] 8
[14:21:18] [PASSED] 32
[14:21:18] [PASSED] 256
[14:21:18] =================== [PASSED] test_reuse ====================
[14:21:18] =================== test_range_overlap ====================
[14:21:18] [PASSED] 4
[14:21:18] [PASSED] 8
[14:21:18] [PASSED] 32
[14:21:18] [PASSED] 256
[14:21:18] =============== [PASSED] test_range_overlap ================
[14:21:18] =================== test_range_compact ====================
[14:21:18] [PASSED] 4
[14:21:18] [PASSED] 8
[14:21:18] [PASSED] 32
[14:21:18] [PASSED] 256
[14:21:18] =============== [PASSED] test_range_compact ================
[14:21:18] ==================== test_range_spare =====================
[14:21:18] [PASSED] 4
[14:21:18] [PASSED] 8
[14:21:18] [PASSED] 32
[14:21:18] [PASSED] 256
[14:21:18] ================ [PASSED] test_range_spare =================
[14:21:18] ===================== [PASSED] guc_dbm =====================
[14:21:18] =================== guc_idm (6 subtests) ===================
[14:21:18] [PASSED] bad_init
[14:21:18] [PASSED] no_init
[14:21:18] [PASSED] init_fini
[14:21:18] [PASSED] check_used
[14:21:18] [PASSED] check_quota
[14:21:18] [PASSED] check_all
[14:21:18] ===================== [PASSED] guc_idm =====================
[14:21:18] ================== no_relay (3 subtests) ===================
[14:21:18] [PASSED] xe_drops_guc2pf_if_not_ready
[14:21:18] [PASSED] xe_drops_guc2vf_if_not_ready
[14:21:18] [PASSED] xe_rejects_send_if_not_ready
[14:21:18] ==================== [PASSED] no_relay =====================
[14:21:18] ================== pf_relay (14 subtests) ==================
[14:21:18] [PASSED] pf_rejects_guc2pf_too_short
[14:21:18] [PASSED] pf_rejects_guc2pf_too_long
[14:21:18] [PASSED] pf_rejects_guc2pf_no_payload
[14:21:18] [PASSED] pf_fails_no_payload
[14:21:18] [PASSED] pf_fails_bad_origin
[14:21:18] [PASSED] pf_fails_bad_type
[14:21:18] [PASSED] pf_txn_reports_error
[14:21:18] [PASSED] pf_txn_sends_pf2guc
[14:21:18] [PASSED] pf_sends_pf2guc
[14:21:18] [SKIPPED] pf_loopback_nop
[14:21:18] [SKIPPED] pf_loopback_echo
[14:21:18] [SKIPPED] pf_loopback_fail
[14:21:18] [SKIPPED] pf_loopback_busy
[14:21:18] [SKIPPED] pf_loopback_retry
[14:21:18] ==================== [PASSED] pf_relay =====================
[14:21:18] ================== vf_relay (3 subtests) ===================
[14:21:18] [PASSED] vf_rejects_guc2vf_too_short
[14:21:18] [PASSED] vf_rejects_guc2vf_too_long
[14:21:18] [PASSED] vf_rejects_guc2vf_no_payload
[14:21:18] ==================== [PASSED] vf_relay =====================
[14:21:18] ===================== lmtt (1 subtest) =====================
[14:21:18] ======================== test_ops =========================
[14:21:18] [PASSED] 2-level
[14:21:18] [PASSED] multi-level
[14:21:18] ==================== [PASSED] test_ops =====================
[14:21:18] ====================== [PASSED] lmtt =======================
[14:21:18] ================= pf_service (11 subtests) =================
[14:21:18] [PASSED] pf_negotiate_any
[14:21:18] [PASSED] pf_negotiate_base_match
[14:21:18] [PASSED] pf_negotiate_base_newer
[14:21:18] [PASSED] pf_negotiate_base_next
[14:21:18] [SKIPPED] pf_negotiate_base_older
[14:21:18] [PASSED] pf_negotiate_base_prev
[14:21:18] [PASSED] pf_negotiate_latest_match
[14:21:18] [PASSED] pf_negotiate_latest_newer
[14:21:18] [PASSED] pf_negotiate_latest_next
[14:21:18] [SKIPPED] pf_negotiate_latest_older
[14:21:18] [SKIPPED] pf_negotiate_latest_prev
[14:21:18] =================== [PASSED] pf_service ====================
[14:21:18] =================== xe_mocs (2 subtests) ===================
[14:21:18] ================ xe_live_mocs_kernel_kunit ================
[14:21:18] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[14:21:18] ================ xe_live_mocs_reset_kunit =================
[14:21:18] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[14:21:18] ==================== [SKIPPED] xe_mocs =====================
[14:21:18] ================= xe_migrate (2 subtests) ==================
[14:21:18] ================= xe_migrate_sanity_kunit =================
[14:21:18] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[14:21:18] ================== xe_validate_ccs_kunit ==================
[14:21:18] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[14:21:18] =================== [SKIPPED] xe_migrate ===================
[14:21:18] ================== xe_dma_buf (1 subtest) ==================
[14:21:18] ==================== xe_dma_buf_kunit =====================
[14:21:18] ================ [SKIPPED] xe_dma_buf_kunit ================
[14:21:18] =================== [SKIPPED] xe_dma_buf ===================
[14:21:18] ================= xe_bo_shrink (1 subtest) =================
[14:21:18] =================== xe_bo_shrink_kunit ====================
[14:21:18] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[14:21:18] ================== [SKIPPED] xe_bo_shrink ==================
[14:21:18] ==================== xe_bo (2 subtests) ====================
[14:21:18] ================== xe_ccs_migrate_kunit ===================
[14:21:18] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[14:21:18] ==================== xe_bo_evict_kunit ====================
[14:21:18] =============== [SKIPPED] xe_bo_evict_kunit ================
[14:21:18] ===================== [SKIPPED] xe_bo ======================
[14:21:18] ==================== args (11 subtests) ====================
[14:21:18] [PASSED] count_args_test
[14:21:18] [PASSED] call_args_example
[14:21:18] [PASSED] call_args_test
[14:21:18] [PASSED] drop_first_arg_example
[14:21:18] [PASSED] drop_first_arg_test
[14:21:18] [PASSED] first_arg_example
[14:21:18] [PASSED] first_arg_test
[14:21:18] [PASSED] last_arg_example
[14:21:18] [PASSED] last_arg_test
[14:21:18] [PASSED] pick_arg_example
[14:21:18] [PASSED] sep_comma_example
[14:21:18] ====================== [PASSED] args =======================
[14:21:18] =================== xe_pci (3 subtests) ====================
[14:21:18] ==================== check_graphics_ip ====================
[14:21:18] [PASSED] 12.70 Xe_LPG
[14:21:18] [PASSED] 12.71 Xe_LPG
[14:21:18] [PASSED] 12.74 Xe_LPG+
[14:21:18] [PASSED] 20.01 Xe2_HPG
[14:21:18] [PASSED] 20.02 Xe2_HPG
[14:21:18] [PASSED] 20.04 Xe2_LPG
[14:21:18] [PASSED] 30.00 Xe3_LPG
[14:21:18] [PASSED] 30.01 Xe3_LPG
[14:21:18] [PASSED] 30.03 Xe3_LPG
[14:21:18] ================ [PASSED] check_graphics_ip ================
[14:21:18] ===================== check_media_ip ======================
[14:21:18] [PASSED] 13.00 Xe_LPM+
[14:21:18] [PASSED] 13.01 Xe2_HPM
[14:21:18] [PASSED] 20.00 Xe2_LPM
[14:21:18] [PASSED] 30.00 Xe3_LPM
[14:21:18] [PASSED] 30.02 Xe3_LPM
[14:21:18] ================= [PASSED] check_media_ip ==================
[14:21:18] ================= check_platform_gt_count =================
[14:21:18] [PASSED] 0x9A60 (TIGERLAKE)
[14:21:18] [PASSED] 0x9A68 (TIGERLAKE)
[14:21:18] [PASSED] 0x9A70 (TIGERLAKE)
[14:21:18] [PASSED] 0x9A40 (TIGERLAKE)
[14:21:18] [PASSED] 0x9A49 (TIGERLAKE)
[14:21:18] [PASSED] 0x9A59 (TIGERLAKE)
[14:21:18] [PASSED] 0x9A78 (TIGERLAKE)
[14:21:18] [PASSED] 0x9AC0 (TIGERLAKE)
[14:21:18] [PASSED] 0x9AC9 (TIGERLAKE)
[14:21:18] [PASSED] 0x9AD9 (TIGERLAKE)
[14:21:18] [PASSED] 0x9AF8 (TIGERLAKE)
[14:21:18] [PASSED] 0x4C80 (ROCKETLAKE)
[14:21:18] [PASSED] 0x4C8A (ROCKETLAKE)
[14:21:18] [PASSED] 0x4C8B (ROCKETLAKE)
[14:21:18] [PASSED] 0x4C8C (ROCKETLAKE)
[14:21:18] [PASSED] 0x4C90 (ROCKETLAKE)
[14:21:18] [PASSED] 0x4C9A (ROCKETLAKE)
[14:21:18] [PASSED] 0x4680 (ALDERLAKE_S)
[14:21:18] [PASSED] 0x4682 (ALDERLAKE_S)
[14:21:18] [PASSED] 0x4688 (ALDERLAKE_S)
[14:21:18] [PASSED] 0x468A (ALDERLAKE_S)
[14:21:18] [PASSED] 0x468B (ALDERLAKE_S)
[14:21:18] [PASSED] 0x4690 (ALDERLAKE_S)
[14:21:18] [PASSED] 0x4692 (ALDERLAKE_S)
[14:21:18] [PASSED] 0x4693 (ALDERLAKE_S)
[14:21:18] [PASSED] 0x46A0 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46A1 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46A2 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46A3 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46A6 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46A8 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46AA (ALDERLAKE_P)
[14:21:18] [PASSED] 0x462A (ALDERLAKE_P)
[14:21:18] [PASSED] 0x4626 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x4628 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46B0 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46B1 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46B2 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46B3 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46C0 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46C1 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46C2 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46C3 (ALDERLAKE_P)
[14:21:18] [PASSED] 0x46D0 (ALDERLAKE_N)
[14:21:18] [PASSED] 0x46D1 (ALDERLAKE_N)
[14:21:18] [PASSED] 0x46D2 (ALDERLAKE_N)
[14:21:18] [PASSED] 0x46D3 (ALDERLAKE_N)
[14:21:18] [PASSED] 0x46D4 (ALDERLAKE_N)
[14:21:18] [PASSED] 0xA721 (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7A1 (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7A9 (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7AC (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7AD (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA720 (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7A0 (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7A8 (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7AA (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA7AB (ALDERLAKE_P)
[14:21:18] [PASSED] 0xA780 (ALDERLAKE_S)
[14:21:18] [PASSED] 0xA781 (ALDERLAKE_S)
[14:21:18] [PASSED] 0xA782 (ALDERLAKE_S)
[14:21:18] [PASSED] 0xA783 (ALDERLAKE_S)
[14:21:18] [PASSED] 0xA788 (ALDERLAKE_S)
[14:21:18] [PASSED] 0xA789 (ALDERLAKE_S)
[14:21:18] [PASSED] 0xA78A (ALDERLAKE_S)
[14:21:18] [PASSED] 0xA78B (ALDERLAKE_S)
[14:21:18] [PASSED] 0x4905 (DG1)
[14:21:18] [PASSED] 0x4906 (DG1)
[14:21:18] [PASSED] 0x4907 (DG1)
[14:21:18] [PASSED] 0x4908 (DG1)
[14:21:18] [PASSED] 0x4909 (DG1)
[14:21:18] [PASSED] 0x56C0 (DG2)
[14:21:18] [PASSED] 0x56C2 (DG2)
[14:21:18] [PASSED] 0x56C1 (DG2)
[14:21:18] [PASSED] 0x7D51 (METEORLAKE)
[14:21:18] [PASSED] 0x7DD1 (METEORLAKE)
[14:21:18] [PASSED] 0x7D41 (METEORLAKE)
[14:21:18] [PASSED] 0x7D67 (METEORLAKE)
[14:21:18] [PASSED] 0xB640 (METEORLAKE)
[14:21:18] [PASSED] 0x56A0 (DG2)
[14:21:18] [PASSED] 0x56A1 (DG2)
[14:21:18] [PASSED] 0x56A2 (DG2)
[14:21:18] [PASSED] 0x56BE (DG2)
[14:21:18] [PASSED] 0x56BF (DG2)
[14:21:18] [PASSED] 0x5690 (DG2)
[14:21:18] [PASSED] 0x5691 (DG2)
[14:21:18] [PASSED] 0x5692 (DG2)
[14:21:18] [PASSED] 0x56A5 (DG2)
[14:21:18] [PASSED] 0x56A6 (DG2)
[14:21:18] [PASSED] 0x56B0 (DG2)
[14:21:18] [PASSED] 0x56B1 (DG2)
[14:21:18] [PASSED] 0x56BA (DG2)
[14:21:18] [PASSED] 0x56BB (DG2)
[14:21:18] [PASSED] 0x56BC (DG2)
[14:21:18] [PASSED] 0x56BD (DG2)
[14:21:18] [PASSED] 0x5693 (DG2)
[14:21:18] [PASSED] 0x5694 (DG2)
[14:21:18] [PASSED] 0x5695 (DG2)
[14:21:18] [PASSED] 0x56A3 (DG2)
[14:21:18] [PASSED] 0x56A4 (DG2)
[14:21:18] [PASSED] 0x56B2 (DG2)
[14:21:18] [PASSED] 0x56B3 (DG2)
[14:21:18] [PASSED] 0x5696 (DG2)
[14:21:18] [PASSED] 0x5697 (DG2)
[14:21:18] [PASSED] 0xB69 (PVC)
[14:21:18] [PASSED] 0xB6E (PVC)
[14:21:18] [PASSED] 0xBD4 (PVC)
[14:21:18] [PASSED] 0xBD5 (PVC)
[14:21:18] [PASSED] 0xBD6 (PVC)
[14:21:18] [PASSED] 0xBD7 (PVC)
[14:21:18] [PASSED] 0xBD8 (PVC)
[14:21:18] [PASSED] 0xBD9 (PVC)
[14:21:18] [PASSED] 0xBDA (PVC)
[14:21:18] [PASSED] 0xBDB (PVC)
[14:21:18] [PASSED] 0xBE0 (PVC)
[14:21:18] [PASSED] 0xBE1 (PVC)
[14:21:18] [PASSED] 0xBE5 (PVC)
[14:21:18] [PASSED] 0x7D40 (METEORLAKE)
[14:21:18] [PASSED] 0x7D45 (METEORLAKE)
[14:21:18] [PASSED] 0x7D55 (METEORLAKE)
[14:21:18] [PASSED] 0x7D60 (METEORLAKE)
[14:21:18] [PASSED] 0x7DD5 (METEORLAKE)
[14:21:18] [PASSED] 0x6420 (LUNARLAKE)
[14:21:18] [PASSED] 0x64A0 (LUNARLAKE)
[14:21:18] [PASSED] 0x64B0 (LUNARLAKE)
[14:21:18] [PASSED] 0xE202 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE209 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE20B (BATTLEMAGE)
[14:21:18] [PASSED] 0xE20C (BATTLEMAGE)
[14:21:18] [PASSED] 0xE20D (BATTLEMAGE)
[14:21:18] [PASSED] 0xE210 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE211 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE212 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE216 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE220 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE221 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE222 (BATTLEMAGE)
[14:21:18] [PASSED] 0xE223 (BATTLEMAGE)
[14:21:18] [PASSED] 0xB080 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB081 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB082 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB083 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB084 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB085 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB086 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB087 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB08F (PANTHERLAKE)
[14:21:18] [PASSED] 0xB090 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB0A0 (PANTHERLAKE)
[14:21:18] [PASSED] 0xB0B0 (PANTHERLAKE)
[14:21:18] [PASSED] 0xFD80 (PANTHERLAKE)
[14:21:18] [PASSED] 0xFD81 (PANTHERLAKE)
[14:21:18] ============= [PASSED] check_platform_gt_count =============
[14:21:18] ===================== [PASSED] xe_pci ======================
[14:21:18] =================== xe_rtp (2 subtests) ====================
[14:21:18] =============== xe_rtp_process_to_sr_tests ================
[14:21:18] [PASSED] coalesce-same-reg
[14:21:18] [PASSED] no-match-no-add
[14:21:18] [PASSED] match-or
[14:21:18] [PASSED] match-or-xfail
[14:21:18] [PASSED] no-match-no-add-multiple-rules
[14:21:18] [PASSED] two-regs-two-entries
[14:21:18] [PASSED] clr-one-set-other
[14:21:18] [PASSED] set-field
[14:21:18] [PASSED] conflict-duplicate
[14:21:18] [PASSED] conflict-not-disjoint
[14:21:18] [PASSED] conflict-reg-type
[14:21:18] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[14:21:18] ================== xe_rtp_process_tests ===================
[14:21:18] [PASSED] active1
[14:21:18] [PASSED] active2
[14:21:18] [PASSED] active-inactive
[14:21:18] [PASSED] inactive-active
[14:21:18] [PASSED] inactive-1st_or_active-inactive
[14:21:18] [PASSED] inactive-2nd_or_active-inactive
[14:21:18] [PASSED] inactive-last_or_active-inactive
[14:21:18] [PASSED] inactive-no_or_active-inactive
[14:21:18] ============== [PASSED] xe_rtp_process_tests ===============
[14:21:18] ===================== [PASSED] xe_rtp ======================
[14:21:18] ==================== xe_wa (1 subtest) =====================
[14:21:18] ======================== xe_wa_gt =========================
[14:21:18] [PASSED] TIGERLAKE (B0)
[14:21:18] [PASSED] DG1 (A0)
[14:21:18] [PASSED] DG1 (B0)
[14:21:18] [PASSED] ALDERLAKE_S (A0)
[14:21:18] [PASSED] ALDERLAKE_S (B0)
[14:21:18] [PASSED] ALDERLAKE_S (C0)
[14:21:18] [PASSED] ALDERLAKE_S (D0)
[14:21:18] [PASSED] ALDERLAKE_P (A0)
[14:21:18] [PASSED] ALDERLAKE_P (B0)
[14:21:18] [PASSED] ALDERLAKE_P (C0)
[14:21:18] [PASSED] ALDERLAKE_S_RPLS (D0)
[14:21:18] [PASSED] ALDERLAKE_P_RPLU (E0)
[14:21:18] [PASSED] DG2_G10 (C0)
[14:21:18] [PASSED] DG2_G11 (B1)
[14:21:18] [PASSED] DG2_G12 (A1)
[14:21:18] [PASSED] METEORLAKE (g:A0, m:A0)
[14:21:18] [PASSED] METEORLAKE (g:A0, m:A0)
[14:21:18] [PASSED] METEORLAKE (g:A0, m:A0)
[14:21:18] [PASSED] LUNARLAKE (g:A0, m:A0)
[14:21:18] [PASSED] LUNARLAKE (g:B0, m:A0)
[14:21:18] [PASSED] BATTLEMAGE (g:A0, m:A1)
[14:21:18] ==================== [PASSED] xe_wa_gt =====================
[14:21:18] ====================== [PASSED] xe_wa ======================
[14:21:18] ============================================================
[14:21:18] Testing complete. Ran 297 tests: passed: 281, skipped: 16
[14:21:18] Elapsed time: 31.729s total, 4.212s configuring, 27.201s building, 0.309s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[14:21:18] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:21:20] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:21:41] Starting KUnit Kernel (1/1)...
[14:21:41] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:21:41] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[14:21:41] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[14:21:41] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[14:21:41] =========== drm_validate_clone_mode (2 subtests) ===========
[14:21:41] ============== drm_test_check_in_clone_mode ===============
[14:21:41] [PASSED] in_clone_mode
[14:21:41] [PASSED] not_in_clone_mode
[14:21:41] ========== [PASSED] drm_test_check_in_clone_mode ===========
[14:21:41] =============== drm_test_check_valid_clones ===============
[14:21:41] [PASSED] not_in_clone_mode
[14:21:41] [PASSED] valid_clone
[14:21:41] [PASSED] invalid_clone
[14:21:41] =========== [PASSED] drm_test_check_valid_clones ===========
[14:21:41] ============= [PASSED] drm_validate_clone_mode =============
[14:21:41] ============= drm_validate_modeset (1 subtest) =============
[14:21:41] [PASSED] drm_test_check_connector_changed_modeset
[14:21:41] ============== [PASSED] drm_validate_modeset ===============
[14:21:41] ====== drm_test_bridge_get_current_state (2 subtests) ======
[14:21:41] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[14:21:41] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[14:21:41] ======== [PASSED] drm_test_bridge_get_current_state ========
[14:21:41] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[14:21:41] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[14:21:41] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[14:21:41] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[14:21:41] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[14:21:41] ============== drm_bridge_alloc (2 subtests) ===============
[14:21:41] [PASSED] drm_test_drm_bridge_alloc_basic
[14:21:41] [PASSED] drm_test_drm_bridge_alloc_get_put
[14:21:41] ================ [PASSED] drm_bridge_alloc =================
[14:21:41] ================== drm_buddy (7 subtests) ==================
[14:21:41] [PASSED] drm_test_buddy_alloc_limit
[14:21:41] [PASSED] drm_test_buddy_alloc_optimistic
[14:21:41] [PASSED] drm_test_buddy_alloc_pessimistic
[14:21:41] [PASSED] drm_test_buddy_alloc_pathological
[14:21:41] [PASSED] drm_test_buddy_alloc_contiguous
[14:21:41] [PASSED] drm_test_buddy_alloc_clear
[14:21:41] [PASSED] drm_test_buddy_alloc_range_bias
[14:21:41] ==================== [PASSED] drm_buddy ====================
[14:21:41] ============= drm_cmdline_parser (40 subtests) =============
[14:21:41] [PASSED] drm_test_cmdline_force_d_only
[14:21:41] [PASSED] drm_test_cmdline_force_D_only_dvi
[14:21:41] [PASSED] drm_test_cmdline_force_D_only_hdmi
[14:21:41] [PASSED] drm_test_cmdline_force_D_only_not_digital
[14:21:41] [PASSED] drm_test_cmdline_force_e_only
[14:21:41] [PASSED] drm_test_cmdline_res
[14:21:41] [PASSED] drm_test_cmdline_res_vesa
[14:21:41] [PASSED] drm_test_cmdline_res_vesa_rblank
[14:21:41] [PASSED] drm_test_cmdline_res_rblank
[14:21:41] [PASSED] drm_test_cmdline_res_bpp
[14:21:41] [PASSED] drm_test_cmdline_res_refresh
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[14:21:41] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[14:21:41] [PASSED] drm_test_cmdline_res_margins_force_on
[14:21:41] [PASSED] drm_test_cmdline_res_vesa_margins
[14:21:41] [PASSED] drm_test_cmdline_name
[14:21:41] [PASSED] drm_test_cmdline_name_bpp
[14:21:41] [PASSED] drm_test_cmdline_name_option
[14:21:41] [PASSED] drm_test_cmdline_name_bpp_option
[14:21:41] [PASSED] drm_test_cmdline_rotate_0
[14:21:41] [PASSED] drm_test_cmdline_rotate_90
[14:21:41] [PASSED] drm_test_cmdline_rotate_180
[14:21:41] [PASSED] drm_test_cmdline_rotate_270
[14:21:41] [PASSED] drm_test_cmdline_hmirror
[14:21:41] [PASSED] drm_test_cmdline_vmirror
[14:21:41] [PASSED] drm_test_cmdline_margin_options
[14:21:41] [PASSED] drm_test_cmdline_multiple_options
[14:21:41] [PASSED] drm_test_cmdline_bpp_extra_and_option
[14:21:41] [PASSED] drm_test_cmdline_extra_and_option
[14:21:41] [PASSED] drm_test_cmdline_freestanding_options
[14:21:41] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[14:21:41] [PASSED] drm_test_cmdline_panel_orientation
[14:21:41] ================ drm_test_cmdline_invalid =================
[14:21:41] [PASSED] margin_only
[14:21:41] [PASSED] interlace_only
[14:21:41] [PASSED] res_missing_x
[14:21:41] [PASSED] res_missing_y
[14:21:41] [PASSED] res_bad_y
[14:21:41] [PASSED] res_missing_y_bpp
[14:21:41] [PASSED] res_bad_bpp
[14:21:41] [PASSED] res_bad_refresh
[14:21:41] [PASSED] res_bpp_refresh_force_on_off
[14:21:41] [PASSED] res_invalid_mode
[14:21:41] [PASSED] res_bpp_wrong_place_mode
[14:21:41] [PASSED] name_bpp_refresh
[14:21:41] [PASSED] name_refresh
[14:21:41] [PASSED] name_refresh_wrong_mode
[14:21:41] [PASSED] name_refresh_invalid_mode
[14:21:41] [PASSED] rotate_multiple
[14:21:41] [PASSED] rotate_invalid_val
[14:21:41] [PASSED] rotate_truncated
[14:21:41] [PASSED] invalid_option
[14:21:41] [PASSED] invalid_tv_option
[14:21:41] [PASSED] truncated_tv_option
[14:21:41] ============ [PASSED] drm_test_cmdline_invalid =============
[14:21:41] =============== drm_test_cmdline_tv_options ===============
[14:21:41] [PASSED] NTSC
[14:21:41] [PASSED] NTSC_443
[14:21:41] [PASSED] NTSC_J
[14:21:41] [PASSED] PAL
[14:21:41] [PASSED] PAL_M
[14:21:41] [PASSED] PAL_N
[14:21:41] [PASSED] SECAM
[14:21:41] [PASSED] MONO_525
[14:21:41] [PASSED] MONO_625
[14:21:41] =========== [PASSED] drm_test_cmdline_tv_options ===========
[14:21:41] =============== [PASSED] drm_cmdline_parser ================
[14:21:41] ========== drmm_connector_hdmi_init (20 subtests) ==========
[14:21:41] [PASSED] drm_test_connector_hdmi_init_valid
[14:21:41] [PASSED] drm_test_connector_hdmi_init_bpc_8
[14:21:41] [PASSED] drm_test_connector_hdmi_init_bpc_10
[14:21:41] [PASSED] drm_test_connector_hdmi_init_bpc_12
[14:21:41] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[14:21:41] [PASSED] drm_test_connector_hdmi_init_bpc_null
[14:21:41] [PASSED] drm_test_connector_hdmi_init_formats_empty
[14:21:41] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[14:21:41] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[14:21:41] [PASSED] supported_formats=0x9 yuv420_allowed=1
[14:21:41] [PASSED] supported_formats=0x9 yuv420_allowed=0
[14:21:41] [PASSED] supported_formats=0x3 yuv420_allowed=1
[14:21:41] [PASSED] supported_formats=0x3 yuv420_allowed=0
[14:21:41] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[14:21:41] [PASSED] drm_test_connector_hdmi_init_null_ddc
[14:21:41] [PASSED] drm_test_connector_hdmi_init_null_product
[14:21:41] [PASSED] drm_test_connector_hdmi_init_null_vendor
[14:21:41] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[14:21:41] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[14:21:41] [PASSED] drm_test_connector_hdmi_init_product_valid
[14:21:41] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[14:21:41] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[14:21:41] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[14:21:41] ========= drm_test_connector_hdmi_init_type_valid =========
[14:21:41] [PASSED] HDMI-A
[14:21:41] [PASSED] HDMI-B
[14:21:41] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[14:21:41] ======== drm_test_connector_hdmi_init_type_invalid ========
[14:21:41] [PASSED] Unknown
[14:21:41] [PASSED] VGA
[14:21:41] [PASSED] DVI-I
[14:21:41] [PASSED] DVI-D
[14:21:41] [PASSED] DVI-A
[14:21:41] [PASSED] Composite
[14:21:41] [PASSED] SVIDEO
[14:21:41] [PASSED] LVDS
[14:21:41] [PASSED] Component
[14:21:41] [PASSED] DIN
[14:21:41] [PASSED] DP
[14:21:41] [PASSED] TV
[14:21:41] [PASSED] eDP
[14:21:41] [PASSED] Virtual
[14:21:41] [PASSED] DSI
[14:21:41] [PASSED] DPI
[14:21:41] [PASSED] Writeback
[14:21:41] [PASSED] SPI
[14:21:41] [PASSED] USB
[14:21:41] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[14:21:41] ============ [PASSED] drmm_connector_hdmi_init =============
[14:21:41] ============= drmm_connector_init (3 subtests) =============
[14:21:41] [PASSED] drm_test_drmm_connector_init
[14:21:41] [PASSED] drm_test_drmm_connector_init_null_ddc
[14:21:41] ========= drm_test_drmm_connector_init_type_valid =========
[14:21:41] [PASSED] Unknown
[14:21:41] [PASSED] VGA
[14:21:41] [PASSED] DVI-I
[14:21:41] [PASSED] DVI-D
[14:21:41] [PASSED] DVI-A
[14:21:41] [PASSED] Composite
[14:21:41] [PASSED] SVIDEO
[14:21:41] [PASSED] LVDS
[14:21:41] [PASSED] Component
[14:21:41] [PASSED] DIN
[14:21:41] [PASSED] DP
[14:21:41] [PASSED] HDMI-A
[14:21:41] [PASSED] HDMI-B
[14:21:41] [PASSED] TV
[14:21:41] [PASSED] eDP
[14:21:41] [PASSED] Virtual
[14:21:41] [PASSED] DSI
[14:21:41] [PASSED] DPI
[14:21:41] [PASSED] Writeback
[14:21:41] [PASSED] SPI
[14:21:41] [PASSED] USB
[14:21:41] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[14:21:41] =============== [PASSED] drmm_connector_init ===============
[14:21:41] ========= drm_connector_dynamic_init (6 subtests) ==========
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_init
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_init_properties
[14:21:41] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[14:21:41] [PASSED] Unknown
[14:21:41] [PASSED] VGA
[14:21:41] [PASSED] DVI-I
[14:21:41] [PASSED] DVI-D
[14:21:41] [PASSED] DVI-A
[14:21:41] [PASSED] Composite
[14:21:41] [PASSED] SVIDEO
[14:21:41] [PASSED] LVDS
[14:21:41] [PASSED] Component
[14:21:41] [PASSED] DIN
[14:21:41] [PASSED] DP
[14:21:41] [PASSED] HDMI-A
[14:21:41] [PASSED] HDMI-B
[14:21:41] [PASSED] TV
[14:21:41] [PASSED] eDP
[14:21:41] [PASSED] Virtual
[14:21:41] [PASSED] DSI
[14:21:41] [PASSED] DPI
[14:21:41] [PASSED] Writeback
[14:21:41] [PASSED] SPI
[14:21:41] [PASSED] USB
[14:21:41] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[14:21:41] ======== drm_test_drm_connector_dynamic_init_name =========
[14:21:41] [PASSED] Unknown
[14:21:41] [PASSED] VGA
[14:21:41] [PASSED] DVI-I
[14:21:41] [PASSED] DVI-D
[14:21:41] [PASSED] DVI-A
[14:21:41] [PASSED] Composite
[14:21:41] [PASSED] SVIDEO
[14:21:41] [PASSED] LVDS
[14:21:41] [PASSED] Component
[14:21:41] [PASSED] DIN
[14:21:41] [PASSED] DP
[14:21:41] [PASSED] HDMI-A
[14:21:41] [PASSED] HDMI-B
[14:21:41] [PASSED] TV
[14:21:41] [PASSED] eDP
[14:21:41] [PASSED] Virtual
[14:21:41] [PASSED] DSI
[14:21:41] [PASSED] DPI
[14:21:41] [PASSED] Writeback
[14:21:41] [PASSED] SPI
[14:21:41] [PASSED] USB
[14:21:41] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[14:21:41] =========== [PASSED] drm_connector_dynamic_init ============
[14:21:41] ==== drm_connector_dynamic_register_early (4 subtests) =====
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[14:21:41] ====== [PASSED] drm_connector_dynamic_register_early =======
[14:21:41] ======= drm_connector_dynamic_register (7 subtests) ========
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[14:21:41] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[14:21:41] ========= [PASSED] drm_connector_dynamic_register ==========
[14:21:41] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[14:21:41] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[14:21:41] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[14:21:41] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[14:21:41] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[14:21:41] ========== drm_test_get_tv_mode_from_name_valid ===========
[14:21:41] [PASSED] NTSC
[14:21:41] [PASSED] NTSC-443
[14:21:41] [PASSED] NTSC-J
[14:21:41] [PASSED] PAL
[14:21:41] [PASSED] PAL-M
[14:21:41] [PASSED] PAL-N
[14:21:41] [PASSED] SECAM
[14:21:41] [PASSED] Mono
[14:21:41] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[14:21:41] [PASSED] drm_test_get_tv_mode_from_name_truncated
[14:21:41] ============ [PASSED] drm_get_tv_mode_from_name ============
[14:21:41] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[14:21:41] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[14:21:41] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[14:21:41] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[14:21:41] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[14:21:41] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[14:21:41] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[14:21:41] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[14:21:41] [PASSED] VIC 96
[14:21:41] [PASSED] VIC 97
[14:21:41] [PASSED] VIC 101
[14:21:41] [PASSED] VIC 102
[14:21:41] [PASSED] VIC 106
[14:21:41] [PASSED] VIC 107
[14:21:41] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[14:21:41] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[14:21:41] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[14:21:41] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[14:21:41] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[14:21:41] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[14:21:41] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[14:21:41] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[14:21:41] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[14:21:41] [PASSED] Automatic
[14:21:41] [PASSED] Full
[14:21:41] [PASSED] Limited 16:235
[14:21:41] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[14:21:41] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[14:21:41] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[14:21:41] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[14:21:41] === drm_test_drm_hdmi_connector_get_output_format_name ====
[14:21:41] [PASSED] RGB
[14:21:41] [PASSED] YUV 4:2:0
[14:21:41] [PASSED] YUV 4:2:2
[14:21:41] [PASSED] YUV 4:4:4
[14:21:41] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[14:21:41] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[14:21:41] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[14:21:41] ============= drm_damage_helper (21 subtests) ==============
[14:21:41] [PASSED] drm_test_damage_iter_no_damage
[14:21:41] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[14:21:41] [PASSED] drm_test_damage_iter_no_damage_src_moved
[14:21:41] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[14:21:41] [PASSED] drm_test_damage_iter_no_damage_not_visible
[14:21:41] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[14:21:41] [PASSED] drm_test_damage_iter_no_damage_no_fb
[14:21:41] [PASSED] drm_test_damage_iter_simple_damage
[14:21:41] [PASSED] drm_test_damage_iter_single_damage
[14:21:41] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[14:21:41] [PASSED] drm_test_damage_iter_single_damage_outside_src
[14:21:41] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[14:21:41] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[14:21:41] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[14:21:41] [PASSED] drm_test_damage_iter_single_damage_src_moved
[14:21:41] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[14:21:41] [PASSED] drm_test_damage_iter_damage
[14:21:41] [PASSED] drm_test_damage_iter_damage_one_intersect
[14:21:41] [PASSED] drm_test_damage_iter_damage_one_outside
[14:21:41] [PASSED] drm_test_damage_iter_damage_src_moved
[14:21:41] [PASSED] drm_test_damage_iter_damage_not_visible
[14:21:41] ================ [PASSED] drm_damage_helper ================
[14:21:41] ============== drm_dp_mst_helper (3 subtests) ==============
[14:21:41] ============== drm_test_dp_mst_calc_pbn_mode ==============
[14:21:41] [PASSED] Clock 154000 BPP 30 DSC disabled
[14:21:41] [PASSED] Clock 234000 BPP 30 DSC disabled
[14:21:41] [PASSED] Clock 297000 BPP 24 DSC disabled
[14:21:41] [PASSED] Clock 332880 BPP 24 DSC enabled
[14:21:41] [PASSED] Clock 324540 BPP 24 DSC enabled
[14:21:41] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[14:21:41] ============== drm_test_dp_mst_calc_pbn_div ===============
[14:21:41] [PASSED] Link rate 2000000 lane count 4
[14:21:41] [PASSED] Link rate 2000000 lane count 2
[14:21:41] [PASSED] Link rate 2000000 lane count 1
[14:21:41] [PASSED] Link rate 1350000 lane count 4
[14:21:41] [PASSED] Link rate 1350000 lane count 2
[14:21:41] [PASSED] Link rate 1350000 lane count 1
[14:21:41] [PASSED] Link rate 1000000 lane count 4
[14:21:41] [PASSED] Link rate 1000000 lane count 2
[14:21:41] [PASSED] Link rate 1000000 lane count 1
[14:21:41] [PASSED] Link rate 810000 lane count 4
[14:21:41] [PASSED] Link rate 810000 lane count 2
[14:21:41] [PASSED] Link rate 810000 lane count 1
[14:21:41] [PASSED] Link rate 540000 lane count 4
[14:21:41] [PASSED] Link rate 540000 lane count 2
[14:21:41] [PASSED] Link rate 540000 lane count 1
[14:21:41] [PASSED] Link rate 270000 lane count 4
[14:21:41] [PASSED] Link rate 270000 lane count 2
[14:21:41] [PASSED] Link rate 270000 lane count 1
[14:21:41] [PASSED] Link rate 162000 lane count 4
[14:21:41] [PASSED] Link rate 162000 lane count 2
[14:21:41] [PASSED] Link rate 162000 lane count 1
[14:21:41] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[14:21:41] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[14:21:41] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[14:21:41] [PASSED] DP_POWER_UP_PHY with port number
[14:21:41] [PASSED] DP_POWER_DOWN_PHY with port number
[14:21:41] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[14:21:41] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[14:21:41] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[14:21:41] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[14:21:41] [PASSED] DP_QUERY_PAYLOAD with port number
[14:21:41] [PASSED] DP_QUERY_PAYLOAD with VCPI
[14:21:41] [PASSED] DP_REMOTE_DPCD_READ with port number
[14:21:41] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[14:21:41] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[14:21:41] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[14:21:41] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[14:21:41] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[14:21:41] [PASSED] DP_REMOTE_I2C_READ with port number
[14:21:41] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[14:21:41] [PASSED] DP_REMOTE_I2C_READ with transactions array
[14:21:41] [PASSED] DP_REMOTE_I2C_WRITE with port number
[14:21:41] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[14:21:41] [PASSED] DP_REMOTE_I2C_WRITE with data array
[14:21:41] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[14:21:41] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[14:21:41] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[14:21:41] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[14:21:41] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[14:21:41] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[14:21:41] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[14:21:41] ================ [PASSED] drm_dp_mst_helper ================
[14:21:41] ================== drm_exec (7 subtests) ===================
[14:21:41] [PASSED] sanitycheck
[14:21:41] [PASSED] test_lock
[14:21:41] [PASSED] test_lock_unlock
[14:21:41] [PASSED] test_duplicates
[14:21:41] [PASSED] test_prepare
[14:21:41] [PASSED] test_prepare_array
[14:21:41] [PASSED] test_multiple_loops
[14:21:41] ==================== [PASSED] drm_exec =====================
[14:21:41] =========== drm_format_helper_test (17 subtests) ===========
[14:21:41] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[14:21:41] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[14:21:41] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[14:21:41] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[14:21:41] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[14:21:41] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[14:21:41] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[14:21:41] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[14:21:41] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[14:21:41] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[14:21:41] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[14:21:41] ============== drm_test_fb_xrgb8888_to_mono ===============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[14:21:41] ==================== drm_test_fb_swab =====================
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ================ [PASSED] drm_test_fb_swab =================
[14:21:41] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[14:21:41] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[14:21:41] [PASSED] single_pixel_source_buffer
[14:21:41] [PASSED] single_pixel_clip_rectangle
[14:21:41] [PASSED] well_known_colors
[14:21:41] [PASSED] destination_pitch
[14:21:41] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[14:21:41] ================= drm_test_fb_clip_offset =================
[14:21:41] [PASSED] pass through
[14:21:41] [PASSED] horizontal offset
[14:21:41] [PASSED] vertical offset
[14:21:41] [PASSED] horizontal and vertical offset
[14:21:41] [PASSED] horizontal offset (custom pitch)
[14:21:41] [PASSED] vertical offset (custom pitch)
[14:21:41] [PASSED] horizontal and vertical offset (custom pitch)
[14:21:41] ============= [PASSED] drm_test_fb_clip_offset =============
[14:21:41] =================== drm_test_fb_memcpy ====================
[14:21:41] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[14:21:41] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[14:21:41] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[14:21:41] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[14:21:41] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[14:21:41] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[14:21:41] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[14:21:41] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[14:21:41] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[14:21:41] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[14:21:41] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[14:21:41] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[14:21:41] =============== [PASSED] drm_test_fb_memcpy ================
[14:21:41] ============= [PASSED] drm_format_helper_test ==============
[14:21:41] ================= drm_format (18 subtests) =================
[14:21:41] [PASSED] drm_test_format_block_width_invalid
[14:21:41] [PASSED] drm_test_format_block_width_one_plane
[14:21:41] [PASSED] drm_test_format_block_width_two_plane
[14:21:41] [PASSED] drm_test_format_block_width_three_plane
[14:21:41] [PASSED] drm_test_format_block_width_tiled
[14:21:41] [PASSED] drm_test_format_block_height_invalid
[14:21:41] [PASSED] drm_test_format_block_height_one_plane
[14:21:41] [PASSED] drm_test_format_block_height_two_plane
[14:21:41] [PASSED] drm_test_format_block_height_three_plane
[14:21:41] [PASSED] drm_test_format_block_height_tiled
[14:21:41] [PASSED] drm_test_format_min_pitch_invalid
[14:21:41] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[14:21:41] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[14:21:41] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[14:21:41] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[14:21:41] [PASSED] drm_test_format_min_pitch_two_plane
[14:21:41] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[14:21:41] [PASSED] drm_test_format_min_pitch_tiled
[14:21:41] =================== [PASSED] drm_format ====================
[14:21:41] ============== drm_framebuffer (10 subtests) ===============
[14:21:41] ========== drm_test_framebuffer_check_src_coords ==========
[14:21:41] [PASSED] Success: source fits into fb
[14:21:41] [PASSED] Fail: overflowing fb with x-axis coordinate
[14:21:41] [PASSED] Fail: overflowing fb with y-axis coordinate
[14:21:41] [PASSED] Fail: overflowing fb with source width
[14:21:41] [PASSED] Fail: overflowing fb with source height
[14:21:41] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[14:21:41] [PASSED] drm_test_framebuffer_cleanup
[14:21:41] =============== drm_test_framebuffer_create ===============
[14:21:41] [PASSED] ABGR8888 normal sizes
[14:21:41] [PASSED] ABGR8888 max sizes
[14:21:41] [PASSED] ABGR8888 pitch greater than min required
[14:21:41] [PASSED] ABGR8888 pitch less than min required
[14:21:41] [PASSED] ABGR8888 Invalid width
[14:21:41] [PASSED] ABGR8888 Invalid buffer handle
[14:21:41] [PASSED] No pixel format
[14:21:41] [PASSED] ABGR8888 Width 0
[14:21:41] [PASSED] ABGR8888 Height 0
[14:21:41] [PASSED] ABGR8888 Out of bound height * pitch combination
[14:21:41] [PASSED] ABGR8888 Large buffer offset
[14:21:41] [PASSED] ABGR8888 Buffer offset for inexistent plane
[14:21:41] [PASSED] ABGR8888 Invalid flag
[14:21:41] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[14:21:41] [PASSED] ABGR8888 Valid buffer modifier
[14:21:41] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[14:21:41] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[14:21:41] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[14:21:41] [PASSED] NV12 Normal sizes
[14:21:41] [PASSED] NV12 Max sizes
[14:21:41] [PASSED] NV12 Invalid pitch
[14:21:41] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[14:21:41] [PASSED] NV12 different modifier per-plane
[14:21:41] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[14:21:41] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[14:21:41] [PASSED] NV12 Modifier for inexistent plane
[14:21:41] [PASSED] NV12 Handle for inexistent plane
[14:21:41] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[14:21:41] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[14:21:41] [PASSED] YVU420 Normal sizes
[14:21:41] [PASSED] YVU420 Max sizes
[14:21:41] [PASSED] YVU420 Invalid pitch
[14:21:41] [PASSED] YVU420 Different pitches
[14:21:41] [PASSED] YVU420 Different buffer offsets/pitches
[14:21:41] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[14:21:41] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[14:21:41] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[14:21:41] [PASSED] YVU420 Valid modifier
[14:21:41] [PASSED] YVU420 Different modifiers per plane
[14:21:41] [PASSED] YVU420 Modifier for inexistent plane
[14:21:41] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[14:21:41] [PASSED] X0L2 Normal sizes
[14:21:41] [PASSED] X0L2 Max sizes
[14:21:41] [PASSED] X0L2 Invalid pitch
[14:21:41] [PASSED] X0L2 Pitch greater than minimum required
[14:21:41] [PASSED] X0L2 Handle for inexistent plane
[14:21:41] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[14:21:41] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[14:21:41] [PASSED] X0L2 Valid modifier
[14:21:41] [PASSED] X0L2 Modifier for inexistent plane
[14:21:41] =========== [PASSED] drm_test_framebuffer_create ===========
[14:21:41] [PASSED] drm_test_framebuffer_free
[14:21:41] [PASSED] drm_test_framebuffer_init
[14:21:41] [PASSED] drm_test_framebuffer_init_bad_format
[14:21:41] [PASSED] drm_test_framebuffer_init_dev_mismatch
[14:21:41] [PASSED] drm_test_framebuffer_lookup
[14:21:41] [PASSED] drm_test_framebuffer_lookup_inexistent
[14:21:41] [PASSED] drm_test_framebuffer_modifiers_not_supported
[14:21:41] ================= [PASSED] drm_framebuffer =================
[14:21:41] ================ drm_gem_shmem (8 subtests) ================
[14:21:41] [PASSED] drm_gem_shmem_test_obj_create
[14:21:41] [PASSED] drm_gem_shmem_test_obj_create_private
[14:21:41] [PASSED] drm_gem_shmem_test_pin_pages
[14:21:41] [PASSED] drm_gem_shmem_test_vmap
[14:21:41] [PASSED] drm_gem_shmem_test_get_pages_sgt
[14:21:41] [PASSED] drm_gem_shmem_test_get_sg_table
[14:21:41] [PASSED] drm_gem_shmem_test_madvise
[14:21:41] [PASSED] drm_gem_shmem_test_purge
[14:21:41] ================== [PASSED] drm_gem_shmem ==================
[14:21:41] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[14:21:41] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[14:21:41] [PASSED] Automatic
[14:21:41] [PASSED] Full
[14:21:41] [PASSED] Limited 16:235
[14:21:41] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[14:21:41] [PASSED] drm_test_check_disable_connector
[14:21:41] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[14:21:41] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[14:21:41] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[14:21:41] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[14:21:41] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[14:21:41] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[14:21:41] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[14:21:41] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[14:21:41] [PASSED] drm_test_check_output_bpc_dvi
[14:21:41] [PASSED] drm_test_check_output_bpc_format_vic_1
[14:21:41] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[14:21:41] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[14:21:41] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[14:21:41] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[14:21:41] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[14:21:41] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[14:21:41] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[14:21:41] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[14:21:41] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[14:21:41] [PASSED] drm_test_check_broadcast_rgb_value
[14:21:41] [PASSED] drm_test_check_bpc_8_value
[14:21:41] [PASSED] drm_test_check_bpc_10_value
[14:21:41] [PASSED] drm_test_check_bpc_12_value
[14:21:41] [PASSED] drm_test_check_format_value
[14:21:41] [PASSED] drm_test_check_tmds_char_value
[14:21:41] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[14:21:41] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[14:21:41] [PASSED] drm_test_check_mode_valid
[14:21:41] [PASSED] drm_test_check_mode_valid_reject
[14:21:41] [PASSED] drm_test_check_mode_valid_reject_rate
[14:21:41] [PASSED] drm_test_check_mode_valid_reject_max_clock
[14:21:41] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[14:21:41] ================= drm_managed (2 subtests) =================
[14:21:41] [PASSED] drm_test_managed_release_action
[14:21:41] [PASSED] drm_test_managed_run_action
[14:21:41] =================== [PASSED] drm_managed ===================
[14:21:41] =================== drm_mm (6 subtests) ====================
[14:21:41] [PASSED] drm_test_mm_init
[14:21:41] [PASSED] drm_test_mm_debug
[14:21:41] [PASSED] drm_test_mm_align32
[14:21:41] [PASSED] drm_test_mm_align64
[14:21:41] [PASSED] drm_test_mm_lowest
[14:21:41] [PASSED] drm_test_mm_highest
[14:21:41] ===================== [PASSED] drm_mm ======================
[14:21:41] ============= drm_modes_analog_tv (5 subtests) =============
[14:21:41] [PASSED] drm_test_modes_analog_tv_mono_576i
[14:21:41] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[14:21:41] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[14:21:41] [PASSED] drm_test_modes_analog_tv_pal_576i
[14:21:41] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[14:21:41] =============== [PASSED] drm_modes_analog_tv ===============
[14:21:41] ============== drm_plane_helper (2 subtests) ===============
[14:21:41] =============== drm_test_check_plane_state ================
[14:21:41] [PASSED] clipping_simple
[14:21:41] [PASSED] clipping_rotate_reflect
[14:21:41] [PASSED] positioning_simple
[14:21:41] [PASSED] upscaling
[14:21:41] [PASSED] downscaling
[14:21:41] [PASSED] rounding1
[14:21:41] [PASSED] rounding2
[14:21:41] [PASSED] rounding3
[14:21:41] [PASSED] rounding4
[14:21:41] =========== [PASSED] drm_test_check_plane_state ============
[14:21:41] =========== drm_test_check_invalid_plane_state ============
[14:21:41] [PASSED] positioning_invalid
[14:21:41] [PASSED] upscaling_invalid
[14:21:41] [PASSED] downscaling_invalid
[14:21:41] ======= [PASSED] drm_test_check_invalid_plane_state ========
[14:21:41] ================ [PASSED] drm_plane_helper =================
[14:21:41] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[14:21:41] ====== drm_test_connector_helper_tv_get_modes_check =======
[14:21:41] [PASSED] None
[14:21:41] [PASSED] PAL
[14:21:41] [PASSED] NTSC
[14:21:41] [PASSED] Both, NTSC Default
[14:21:41] [PASSED] Both, PAL Default
[14:21:41] [PASSED] Both, NTSC Default, with PAL on command-line
[14:21:41] [PASSED] Both, PAL Default, with NTSC on command-line
[14:21:41] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[14:21:41] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[14:21:41] ================== drm_rect (9 subtests) ===================
[14:21:41] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[14:21:41] [PASSED] drm_test_rect_clip_scaled_not_clipped
[14:21:41] [PASSED] drm_test_rect_clip_scaled_clipped
[14:21:41] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[14:21:41] ================= drm_test_rect_intersect =================
[14:21:41] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[14:21:41] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[14:21:41] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[14:21:41] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[14:21:41] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[14:21:41] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[14:21:41] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[14:21:41] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[14:21:41] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[14:21:41] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[14:21:41] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[14:21:41] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[14:21:41] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[14:21:41] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[14:21:41] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[14:21:41] ============= [PASSED] drm_test_rect_intersect =============
[14:21:41] ================ drm_test_rect_calc_hscale ================
[14:21:41] [PASSED] normal use
[14:21:41] [PASSED] out of max range
[14:21:41] [PASSED] out of min range
[14:21:41] [PASSED] zero dst
[14:21:41] [PASSED] negative src
[14:21:41] [PASSED] negative dst
[14:21:41] ============ [PASSED] drm_test_rect_calc_hscale ============
[14:21:41] ================ drm_test_rect_calc_vscale ================
[14:21:41] [PASSED] normal use
[14:21:41] [PASSED] out of max range
[14:21:41] [PASSED] out of min range
[14:21:41] [PASSED] zero dst
[14:21:41] [PASSED] negative src
[14:21:41] [PASSED] negative dst
[14:21:41] ============ [PASSED] drm_test_rect_calc_vscale ============
[14:21:41] ================== drm_test_rect_rotate ===================
[14:21:41] [PASSED] reflect-x
[14:21:41] [PASSED] reflect-y
[14:21:41] [PASSED] rotate-0
[14:21:41] [PASSED] rotate-90
[14:21:41] [PASSED] rotate-180
[14:21:41] [PASSED] rotate-270
[14:21:41] ============== [PASSED] drm_test_rect_rotate ===============
[14:21:41] ================ drm_test_rect_rotate_inv =================
[14:21:41] [PASSED] reflect-x
[14:21:41] [PASSED] reflect-y
[14:21:41] [PASSED] rotate-0
[14:21:41] [PASSED] rotate-90
[14:21:41] [PASSED] rotate-180
[14:21:41] [PASSED] rotate-270
[14:21:41] ============ [PASSED] drm_test_rect_rotate_inv =============
[14:21:41] ==================== [PASSED] drm_rect =====================
[14:21:41] ============ drm_sysfb_modeset_test (1 subtest) ============
[14:21:41] ============ drm_test_sysfb_build_fourcc_list =============
[14:21:41] [PASSED] no native formats
[14:21:41] [PASSED] XRGB8888 as native format
[14:21:41] [PASSED] remove duplicates
[14:21:41] [PASSED] convert alpha formats
[14:21:41] [PASSED] random formats
[14:21:41] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[14:21:41] ============= [PASSED] drm_sysfb_modeset_test ==============
[14:21:41] ============================================================
[14:21:41] Testing complete. Ran 616 tests: passed: 616
[14:21:41] Elapsed time: 23.181s total, 1.609s configuring, 21.402s building, 0.143s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[14:21:41] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:21:43] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:21:51] Starting KUnit Kernel (1/1)...
[14:21:51] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:21:51] ================= ttm_device (5 subtests) ==================
[14:21:51] [PASSED] ttm_device_init_basic
[14:21:51] [PASSED] ttm_device_init_multiple
[14:21:51] [PASSED] ttm_device_fini_basic
[14:21:51] [PASSED] ttm_device_init_no_vma_man
[14:21:51] ================== ttm_device_init_pools ==================
[14:21:51] [PASSED] No DMA allocations, no DMA32 required
[14:21:51] [PASSED] DMA allocations, DMA32 required
[14:21:51] [PASSED] No DMA allocations, DMA32 required
[14:21:51] [PASSED] DMA allocations, no DMA32 required
[14:21:51] ============== [PASSED] ttm_device_init_pools ==============
[14:21:51] =================== [PASSED] ttm_device ====================
[14:21:51] ================== ttm_pool (8 subtests) ===================
[14:21:51] ================== ttm_pool_alloc_basic ===================
[14:21:51] [PASSED] One page
[14:21:51] [PASSED] More than one page
[14:21:51] [PASSED] Above the allocation limit
[14:21:51] [PASSED] One page, with coherent DMA mappings enabled
[14:21:51] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[14:21:51] ============== [PASSED] ttm_pool_alloc_basic ===============
[14:21:51] ============== ttm_pool_alloc_basic_dma_addr ==============
[14:21:51] [PASSED] One page
[14:21:51] [PASSED] More than one page
[14:21:51] [PASSED] Above the allocation limit
[14:21:51] [PASSED] One page, with coherent DMA mappings enabled
[14:21:51] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[14:21:51] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[14:21:51] [PASSED] ttm_pool_alloc_order_caching_match
[14:21:51] [PASSED] ttm_pool_alloc_caching_mismatch
[14:21:51] [PASSED] ttm_pool_alloc_order_mismatch
[14:21:51] [PASSED] ttm_pool_free_dma_alloc
[14:21:51] [PASSED] ttm_pool_free_no_dma_alloc
[14:21:51] [PASSED] ttm_pool_fini_basic
[14:21:51] ==================== [PASSED] ttm_pool =====================
[14:21:51] ================ ttm_resource (8 subtests) =================
[14:21:51] ================= ttm_resource_init_basic =================
[14:21:51] [PASSED] Init resource in TTM_PL_SYSTEM
[14:21:51] [PASSED] Init resource in TTM_PL_VRAM
[14:21:51] [PASSED] Init resource in a private placement
[14:21:51] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[14:21:51] ============= [PASSED] ttm_resource_init_basic =============
[14:21:51] [PASSED] ttm_resource_init_pinned
[14:21:51] [PASSED] ttm_resource_fini_basic
[14:21:51] [PASSED] ttm_resource_manager_init_basic
[14:21:51] [PASSED] ttm_resource_manager_usage_basic
[14:21:51] [PASSED] ttm_resource_manager_set_used_basic
[14:21:51] [PASSED] ttm_sys_man_alloc_basic
[14:21:51] [PASSED] ttm_sys_man_free_basic
[14:21:51] ================== [PASSED] ttm_resource ===================
[14:21:51] =================== ttm_tt (15 subtests) ===================
[14:21:51] ==================== ttm_tt_init_basic ====================
[14:21:51] [PASSED] Page-aligned size
[14:21:51] [PASSED] Extra pages requested
[14:21:51] ================ [PASSED] ttm_tt_init_basic ================
[14:21:51] [PASSED] ttm_tt_init_misaligned
[14:21:51] [PASSED] ttm_tt_fini_basic
[14:21:51] [PASSED] ttm_tt_fini_sg
[14:21:51] [PASSED] ttm_tt_fini_shmem
[14:21:51] [PASSED] ttm_tt_create_basic
[14:21:51] [PASSED] ttm_tt_create_invalid_bo_type
[14:21:51] [PASSED] ttm_tt_create_ttm_exists
[14:21:51] [PASSED] ttm_tt_create_failed
[14:21:51] [PASSED] ttm_tt_destroy_basic
[14:21:51] [PASSED] ttm_tt_populate_null_ttm
[14:21:51] [PASSED] ttm_tt_populate_populated_ttm
[14:21:51] [PASSED] ttm_tt_unpopulate_basic
[14:21:51] [PASSED] ttm_tt_unpopulate_empty_ttm
[14:21:51] [PASSED] ttm_tt_swapin_basic
[14:21:51] ===================== [PASSED] ttm_tt ======================
[14:21:51] =================== ttm_bo (14 subtests) ===================
[14:21:51] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[14:21:51] [PASSED] Cannot be interrupted and sleeps
[14:21:51] [PASSED] Cannot be interrupted, locks straight away
[14:21:51] [PASSED] Can be interrupted, sleeps
[14:21:51] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[14:21:51] [PASSED] ttm_bo_reserve_locked_no_sleep
[14:21:51] [PASSED] ttm_bo_reserve_no_wait_ticket
[14:21:51] [PASSED] ttm_bo_reserve_double_resv
[14:21:51] [PASSED] ttm_bo_reserve_interrupted
[14:21:51] [PASSED] ttm_bo_reserve_deadlock
[14:21:51] [PASSED] ttm_bo_unreserve_basic
[14:21:51] [PASSED] ttm_bo_unreserve_pinned
[14:21:51] [PASSED] ttm_bo_unreserve_bulk
[14:21:51] [PASSED] ttm_bo_put_basic
[14:21:51] [PASSED] ttm_bo_put_shared_resv
[14:21:51] [PASSED] ttm_bo_pin_basic
[14:21:51] [PASSED] ttm_bo_pin_unpin_resource
[14:21:51] [PASSED] ttm_bo_multiple_pin_one_unpin
[14:21:51] ===================== [PASSED] ttm_bo ======================
[14:21:51] ============== ttm_bo_validate (21 subtests) ===============
[14:21:51] ============== ttm_bo_init_reserved_sys_man ===============
[14:21:51] [PASSED] Buffer object for userspace
[14:21:51] [PASSED] Kernel buffer object
[14:21:51] [PASSED] Shared buffer object
[14:21:51] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[14:21:51] ============== ttm_bo_init_reserved_mock_man ==============
[14:21:51] [PASSED] Buffer object for userspace
[14:21:51] [PASSED] Kernel buffer object
[14:21:51] [PASSED] Shared buffer object
[14:21:51] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[14:21:51] [PASSED] ttm_bo_init_reserved_resv
[14:21:51] ================== ttm_bo_validate_basic ==================
[14:21:51] [PASSED] Buffer object for userspace
[14:21:51] [PASSED] Kernel buffer object
[14:21:51] [PASSED] Shared buffer object
[14:21:51] ============== [PASSED] ttm_bo_validate_basic ==============
[14:21:51] [PASSED] ttm_bo_validate_invalid_placement
[14:21:51] ============= ttm_bo_validate_same_placement ==============
[14:21:51] [PASSED] System manager
[14:21:51] [PASSED] VRAM manager
[14:21:51] ========= [PASSED] ttm_bo_validate_same_placement ==========
[14:21:51] [PASSED] ttm_bo_validate_failed_alloc
[14:21:51] [PASSED] ttm_bo_validate_pinned
[14:21:51] [PASSED] ttm_bo_validate_busy_placement
[14:21:51] ================ ttm_bo_validate_multihop =================
[14:21:51] [PASSED] Buffer object for userspace
[14:21:51] [PASSED] Kernel buffer object
[14:21:51] [PASSED] Shared buffer object
[14:21:51] ============ [PASSED] ttm_bo_validate_multihop =============
[14:21:51] ========== ttm_bo_validate_no_placement_signaled ==========
[14:21:51] [PASSED] Buffer object in system domain, no page vector
[14:21:51] [PASSED] Buffer object in system domain with an existing page vector
[14:21:51] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[14:21:51] ======== ttm_bo_validate_no_placement_not_signaled ========
[14:21:51] [PASSED] Buffer object for userspace
[14:21:51] [PASSED] Kernel buffer object
[14:21:51] [PASSED] Shared buffer object
[14:21:51] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[14:21:51] [PASSED] ttm_bo_validate_move_fence_signaled
[14:21:51] ========= ttm_bo_validate_move_fence_not_signaled =========
[14:21:51] [PASSED] Waits for GPU
[14:21:51] [PASSED] Tries to lock straight away
[14:21:51] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[14:21:51] [PASSED] ttm_bo_validate_happy_evict
[14:21:51] [PASSED] ttm_bo_validate_all_pinned_evict
[14:21:51] [PASSED] ttm_bo_validate_allowed_only_evict
[14:21:51] [PASSED] ttm_bo_validate_deleted_evict
[14:21:51] [PASSED] ttm_bo_validate_busy_domain_evict
[14:21:51] [PASSED] ttm_bo_validate_evict_gutting
[14:21:51] [PASSED] ttm_bo_validate_recrusive_evict
[14:21:51] ================= [PASSED] ttm_bo_validate =================
[14:21:51] ============================================================
[14:21:51] Testing complete. Ran 101 tests: passed: 101
[14:21:51] Elapsed time: 9.552s total, 1.605s configuring, 7.731s building, 0.181s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 54+ messages in thread
* ✗ CI.checksparse: warning for MADVISE FOR XE (rev5)
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (26 preceding siblings ...)
2025-07-30 14:21 ` ✓ CI.KUnit: success " Patchwork
@ 2025-07-30 14:36 ` Patchwork
2025-07-30 15:36 ` ✓ Xe.CI.BAT: success " Patchwork
2025-07-30 17:51 ` ✗ Xe.CI.Full: failure " Patchwork
29 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2025-07-30 14:36 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev5)
URL : https://patchwork.freedesktop.org/series/149550/
State : warning
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast e1805ad9a7175457902ae453ea67b76194e7d796
Sparse version: 0.6.4 (Ubuntu: 0.6.4-4ubuntu3)
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display_types.h:2018:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_display_types.h:2018:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_psr.c: note: in included file:
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ Xe.CI.BAT: success for MADVISE FOR XE (rev5)
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (27 preceding siblings ...)
2025-07-30 14:36 ` ✗ CI.checksparse: warning " Patchwork
@ 2025-07-30 15:36 ` Patchwork
2025-07-30 17:51 ` ✗ Xe.CI.Full: failure " Patchwork
29 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2025-07-30 15:36 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev5)
URL : https://patchwork.freedesktop.org/series/149550/
State : success
== Summary ==
CI Bug Log - changes from xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796_BAT -> xe-pw-149550v5_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (8 -> 7)
------------------------------
Missing (1): bat-adlp-vm
Known issues
------------
Here are the changes found in xe-pw-149550v5_BAT that come from known issues:
### IGT changes ###
#### Possible fixes ####
* igt@kms_flip@basic-flip-vs-wf_vblank:
- bat-adlp-7: [DMESG-WARN][1] ([Intel XE#4543]) -> [PASS][2] +1 other test pass
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/bat-adlp-7/igt@kms_flip@basic-flip-vs-wf_vblank.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/bat-adlp-7/igt@kms_flip@basic-flip-vs-wf_vblank.html
[Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
Build changes
-------------
* IGT: IGT_8478 -> IGT_8479
* Linux: xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796 -> xe-pw-149550v5
IGT_8478: 3e7c2bd685397f852853878aef4d9c1e4889a28b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8479: 5fea7ca6493415ce108231b0ff29f02d293f9aa6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796: e1805ad9a7175457902ae453ea67b76194e7d796
xe-pw-149550v5: 149550v5
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/index.html
* ✗ Xe.CI.Full: failure for MADVISE FOR XE (rev5)
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
` (28 preceding siblings ...)
2025-07-30 15:36 ` ✓ Xe.CI.BAT: success " Patchwork
@ 2025-07-30 17:51 ` Patchwork
29 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2025-07-30 17:51 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev5)
URL : https://patchwork.freedesktop.org/series/149550/
State : failure
== Summary ==
CI Bug Log - changes from xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796_FULL -> xe-pw-149550v5_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-149550v5_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-149550v5_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-149550v5_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@xe_exec_reset@gt-reset-stress:
- shard-adlp: [PASS][1] -> [ABORT][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-3/igt@xe_exec_reset@gt-reset-stress.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@xe_exec_reset@gt-reset-stress.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-new-bo-map:
- shard-lnl: [PASS][3] -> [FAIL][4]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-lnl-4/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-new-bo-map.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-3/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-new-bo-map.html
Known issues
------------
Here are the changes found in xe-pw-149550v5_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@core_hotunplug@hotreplug:
- shard-bmg: [PASS][5] -> [SKIP][6] ([Intel XE#4963]) +1 other test skip
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@core_hotunplug@hotreplug.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@core_hotunplug@hotreplug.html
* igt@core_setmaster@master-drop-set-user:
- shard-bmg: [PASS][7] -> [FAIL][8] ([Intel XE#4674])
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@core_setmaster@master-drop-set-user.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@core_setmaster@master-drop-set-user.html
* igt@device_reset@unbind-reset-rebind:
- shard-bmg: [PASS][9] -> [SKIP][10] ([Intel XE#5547])
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@device_reset@unbind-reset-rebind.html
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@device_reset@unbind-reset-rebind.html
* igt@fbdev@nullptr:
- shard-bmg: [PASS][11] -> [SKIP][12] ([Intel XE#2134]) +1 other test skip
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-1/igt@fbdev@nullptr.html
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@fbdev@nullptr.html
* igt@kms_addfb_basic@bad-pitch-999:
- shard-bmg: [PASS][13] -> [SKIP][14] ([Intel XE#4950]) +93 other tests skip
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@kms_addfb_basic@bad-pitch-999.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_addfb_basic@bad-pitch-999.html
* igt@kms_addfb_basic@size-max:
- shard-bmg: NOTRUN -> [SKIP][15] ([Intel XE#4950]) +5 other tests skip
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_addfb_basic@size-max.html
* igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1:
- shard-lnl: [PASS][16] -> [FAIL][17] ([Intel XE#911]) +7 other tests fail
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-lnl-7/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-2/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html
* igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1:
- shard-adlp: [PASS][18] -> [FAIL][19] ([Intel XE#3884]) +1 other test fail
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-4/igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1.html
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1.html
* igt@kms_big_fb@4-tiled-64bpp-rotate-90:
- shard-dg2-set2: NOTRUN -> [SKIP][20] ([Intel XE#316])
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@kms_big_fb@4-tiled-64bpp-rotate-90.html
- shard-lnl: NOTRUN -> [SKIP][21] ([Intel XE#1407])
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_big_fb@4-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0:
- shard-adlp: NOTRUN -> [SKIP][22] ([Intel XE#1124]) +4 other tests skip
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_big_fb@linear-16bpp-rotate-180:
- shard-bmg: [PASS][23] -> [SKIP][24] ([Intel XE#4947]) +24 other tests skip
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_big_fb@linear-16bpp-rotate-180.html
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_big_fb@linear-16bpp-rotate-180.html
* igt@kms_big_fb@linear-32bpp-rotate-90:
- shard-adlp: NOTRUN -> [SKIP][25] ([Intel XE#316]) +2 other tests skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@kms_big_fb@linear-32bpp-rotate-90.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-adlp: [PASS][26] -> [DMESG-FAIL][27] ([Intel XE#4543]) +8 other tests dmesg-fail
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-1/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-6/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@y-tiled-32bpp-rotate-0:
- shard-lnl: NOTRUN -> [SKIP][28] ([Intel XE#1124]) +3 other tests skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_big_fb@y-tiled-32bpp-rotate-0.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-adlp: NOTRUN -> [DMESG-FAIL][29] ([Intel XE#4543]) +1 other test dmesg-fail
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
- shard-dg2-set2: NOTRUN -> [SKIP][30] ([Intel XE#1124]) +3 other tests skip
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
- shard-adlp: NOTRUN -> [SKIP][31] ([Intel XE#607])
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-9/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
- shard-dg2-set2: NOTRUN -> [SKIP][32] ([Intel XE#607])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
- shard-lnl: NOTRUN -> [SKIP][33] ([Intel XE#1477])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-2/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0:
- shard-bmg: NOTRUN -> [SKIP][34] ([Intel XE#1124])
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p:
- shard-bmg: [PASS][35] -> [SKIP][36] ([Intel XE#2314] / [Intel XE#2894])
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html
* igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p:
- shard-adlp: NOTRUN -> [SKIP][37] ([Intel XE#2191])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html
- shard-dg2-set2: NOTRUN -> [SKIP][38] ([Intel XE#2191])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html
* igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p:
- shard-lnl: NOTRUN -> [SKIP][39] ([Intel XE#2191]) +1 other test skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
* igt@kms_bw@linear-tiling-1-displays-2560x1440p:
- shard-dg2-set2: NOTRUN -> [SKIP][40] ([Intel XE#367]) +1 other test skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@kms_bw@linear-tiling-1-displays-2560x1440p.html
* igt@kms_ccs@bad-pixel-format-yf-tiled-ccs:
- shard-lnl: NOTRUN -> [SKIP][41] ([Intel XE#2887]) +5 other tests skip
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_ccs@bad-pixel-format-yf-tiled-ccs.html
* igt@kms_ccs@bad-rotation-90-4-tiled-dg2-rc-ccs-cc@pipe-b-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][42] ([Intel XE#787]) +20 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_ccs@bad-rotation-90-4-tiled-dg2-rc-ccs-cc@pipe-b-hdmi-a-1.html
* igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][43] ([Intel XE#2887]) +1 other test skip
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs@pipe-b-dp-2:
- shard-bmg: NOTRUN -> [SKIP][44] ([Intel XE#2652] / [Intel XE#787]) +7 other tests skip
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs@pipe-b-dp-2.html
* igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-mc-ccs@pipe-d-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][45] ([Intel XE#455] / [Intel XE#787]) +13 other tests skip
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-mc-ccs@pipe-d-hdmi-a-1.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4:
- shard-dg2-set2: [PASS][46] -> [INCOMPLETE][47] ([Intel XE#3862]) +1 other test incomplete
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-436/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-435/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs:
- shard-adlp: NOTRUN -> [SKIP][48] ([Intel XE#2907])
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs.html
- shard-dg2-set2: NOTRUN -> [SKIP][49] ([Intel XE#2907])
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-435/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][50] ([Intel XE#787]) +160 other tests skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-463/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs@pipe-d-dp-2:
- shard-dg2-set2: NOTRUN -> [SKIP][51] ([Intel XE#455] / [Intel XE#787]) +30 other tests skip
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs@pipe-d-dp-2.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4:
- shard-dg2-set2: [PASS][52] -> [INCOMPLETE][53] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124]) +1 other test incomplete
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4.html
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-d-dp-4:
- shard-dg2-set2: NOTRUN -> [INCOMPLETE][54] ([Intel XE#3124])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-d-dp-4.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [DMESG-WARN][55] ([Intel XE#1727] / [Intel XE#3113])
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-6.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [INCOMPLETE][56] ([Intel XE#2705] / [Intel XE#4212])
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-a-dp-4.html
* igt@kms_chamelium_audio@hdmi-audio-edid:
- shard-adlp: NOTRUN -> [SKIP][57] ([Intel XE#373]) +6 other tests skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-2/igt@kms_chamelium_audio@hdmi-audio-edid.html
* igt@kms_chamelium_edid@dp-edid-stress-resolution-4k:
- shard-bmg: NOTRUN -> [SKIP][58] ([Intel XE#2252]) +1 other test skip
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_chamelium_edid@dp-edid-stress-resolution-4k.html
* igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe:
- shard-dg2-set2: NOTRUN -> [SKIP][59] ([Intel XE#373]) +9 other tests skip
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html
- shard-lnl: NOTRUN -> [SKIP][60] ([Intel XE#373]) +7 other tests skip
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-2/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html
* igt@kms_content_protection@mei-interface:
- shard-lnl: NOTRUN -> [SKIP][61] ([Intel XE#1468])
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@kms_content_protection@mei-interface.html
* igt@kms_content_protection@srm@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [FAIL][62] ([Intel XE#1178]) +2 other tests fail
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-435/igt@kms_content_protection@srm@pipe-a-dp-4.html
* igt@kms_content_protection@type1:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#2341])
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_content_protection@type1.html
- shard-lnl: NOTRUN -> [SKIP][64] ([Intel XE#3278])
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-3/igt@kms_content_protection@type1.html
* igt@kms_content_protection@uevent@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][65] ([Intel XE#1188])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_content_protection@uevent@pipe-a-dp-2.html
* igt@kms_cursor_crc@cursor-onscreen-512x512:
- shard-adlp: NOTRUN -> [SKIP][66] ([Intel XE#308])
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-6/igt@kms_cursor_crc@cursor-onscreen-512x512.html
* igt@kms_cursor_crc@cursor-onscreen-max-size:
- shard-lnl: NOTRUN -> [SKIP][67] ([Intel XE#1424])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_cursor_crc@cursor-onscreen-max-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: [PASS][68] -> [SKIP][69] ([Intel XE#2291]) +1 other test skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-legacy:
- shard-adlp: NOTRUN -> [SKIP][70] ([Intel XE#309])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-9/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
- shard-dg2-set2: NOTRUN -> [SKIP][71] ([Intel XE#323])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
- shard-lnl: NOTRUN -> [SKIP][72] ([Intel XE#323])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-1/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
- shard-adlp: NOTRUN -> [SKIP][73] ([Intel XE#323])
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
* igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-3:
- shard-bmg: NOTRUN -> [SKIP][74] ([Intel XE#1340])
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-3.html
* igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][75] ([Intel XE#4494] / [i915#3804])
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-6.html
* igt@kms_fbcon_fbt@psr-suspend:
- shard-adlp: NOTRUN -> [SKIP][76] ([Intel XE#776])
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@kms_fbcon_fbt@psr-suspend.html
- shard-dg2-set2: NOTRUN -> [SKIP][77] ([Intel XE#776])
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@kms_fbcon_fbt@psr-suspend.html
* igt@kms_flip@2x-flip-vs-expired-vblank:
- shard-lnl: NOTRUN -> [SKIP][78] ([Intel XE#1421]) +1 other test skip
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-1/igt@kms_flip@2x-flip-vs-expired-vblank.html
* igt@kms_flip@2x-flip-vs-panning-vs-hang:
- shard-adlp: NOTRUN -> [SKIP][79] ([Intel XE#310]) +1 other test skip
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@kms_flip@2x-flip-vs-panning-vs-hang.html
* igt@kms_flip@2x-flip-vs-suspend-interruptible@cd-dp2-hdmi-a3:
- shard-bmg: [PASS][80] -> [INCOMPLETE][81] ([Intel XE#2049] / [Intel XE#2597]) +1 other test incomplete
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_flip@2x-flip-vs-suspend-interruptible@cd-dp2-hdmi-a3.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@kms_flip@2x-flip-vs-suspend-interruptible@cd-dp2-hdmi-a3.html
* igt@kms_flip@2x-flip-vs-wf_vblank-interruptible:
- shard-bmg: [PASS][82] -> [SKIP][83] ([Intel XE#2316])
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_flip@2x-flip-vs-wf_vblank-interruptible.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_flip@2x-flip-vs-wf_vblank-interruptible.html
* igt@kms_flip@bo-too-big-interruptible@a-edp1:
- shard-lnl: NOTRUN -> [TIMEOUT][84] ([Intel XE#1504]) +1 other test timeout
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_flip@bo-too-big-interruptible@a-edp1.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible:
- shard-adlp: [PASS][85] -> [DMESG-WARN][86] ([Intel XE#2953] / [Intel XE#4173])
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-8/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
* igt@kms_flip@flip-vs-expired-vblank@a-edp1:
- shard-lnl: [PASS][87] -> [FAIL][88] ([Intel XE#301]) +1 other test fail
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-3/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
* igt@kms_flip@flip-vs-suspend-interruptible@c-hdmi-a3:
- shard-bmg: NOTRUN -> [INCOMPLETE][89] ([Intel XE#2049] / [Intel XE#2597])
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_flip@flip-vs-suspend-interruptible@c-hdmi-a3.html
* igt@kms_flip@flip-vs-suspend@b-hdmi-a1:
- shard-adlp: [PASS][90] -> [DMESG-WARN][91] ([Intel XE#4543]) +9 other tests dmesg-warn
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-9/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-2/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
* igt@kms_flip@flip-vs-suspend@d-dp4:
- shard-dg2-set2: [PASS][92] -> [INCOMPLETE][93] ([Intel XE#2049] / [Intel XE#2597]) +1 other test incomplete
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-435/igt@kms_flip@flip-vs-suspend@d-dp4.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-436/igt@kms_flip@flip-vs-suspend@d-dp4.html
* igt@kms_flip@plain-flip-interruptible@b-hdmi-a1:
- shard-adlp: NOTRUN -> [DMESG-WARN][94] ([Intel XE#4543]) +1 other test dmesg-warn
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@kms_flip@plain-flip-interruptible@b-hdmi-a1.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling:
- shard-lnl: NOTRUN -> [SKIP][95] ([Intel XE#1401] / [Intel XE#1745])
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-1/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][96] ([Intel XE#1401])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-1/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][97] ([Intel XE#2293]) +3 other tests skip
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling@pipe-a-valid-mode.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-draw-mmap-wc:
- shard-adlp: NOTRUN -> [SKIP][98] ([Intel XE#651]) +1 other test skip
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-indfb-plflip-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][99] ([Intel XE#651]) +6 other tests skip
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-436/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-shrfb-draw-render:
- shard-lnl: NOTRUN -> [SKIP][100] ([Intel XE#651]) +2 other tests skip
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-3/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-draw-render:
- shard-bmg: NOTRUN -> [SKIP][101] ([Intel XE#2311]) +2 other tests skip
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-shrfb-draw-blt:
- shard-adlp: NOTRUN -> [SKIP][102] ([Intel XE#656]) +12 other tests skip
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-2/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw:
- shard-bmg: NOTRUN -> [SKIP][103] ([Intel XE#5390]) +1 other test skip
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-wc:
- shard-lnl: NOTRUN -> [SKIP][104] ([Intel XE#656]) +8 other tests skip
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-shrfb-scaledprimary:
- shard-bmg: NOTRUN -> [SKIP][105] ([Intel XE#4947]) +2 other tests skip
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-shrfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-wc:
- shard-adlp: NOTRUN -> [SKIP][106] ([Intel XE#653]) +3 other tests skip
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-move:
- shard-dg2-set2: NOTRUN -> [SKIP][107] ([Intel XE#653]) +11 other tests skip
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-move.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-msflip-blt:
- shard-bmg: NOTRUN -> [SKIP][108] ([Intel XE#2313])
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-tiling-y:
- shard-bmg: NOTRUN -> [SKIP][109] ([Intel XE#2352])
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
* igt@kms_joiner@basic-force-big-joiner:
- shard-bmg: [PASS][110] -> [SKIP][111] ([Intel XE#3012])
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_joiner@basic-force-big-joiner.html
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_joiner@basic-force-big-joiner.html
* igt@kms_joiner@invalid-modeset-force-ultra-joiner:
- shard-adlp: NOTRUN -> [SKIP][112] ([Intel XE#2925])
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html
* igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner:
- shard-bmg: NOTRUN -> [SKIP][113] ([Intel XE#4090])
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
- shard-dg2-set2: NOTRUN -> [SKIP][114] ([Intel XE#2925])
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-434/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
- shard-lnl: NOTRUN -> [SKIP][115] ([Intel XE#2925])
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
* igt@kms_plane_multiple@2x-tiling-x:
- shard-bmg: [PASS][116] -> [SKIP][117] ([Intel XE#4596])
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_plane_multiple@2x-tiling-x.html
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_plane_multiple@2x-tiling-x.html
* igt@kms_plane_multiple@tiling-4:
- shard-adlp: NOTRUN -> [SKIP][118] ([Intel XE#5020])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-6/igt@kms_plane_multiple@tiling-4.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5@pipe-a:
- shard-bmg: NOTRUN -> [SKIP][119] ([Intel XE#2763]) +3 other tests skip
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5@pipe-a.html
* igt@kms_pm_backlight@basic-brightness:
- shard-dg2-set2: NOTRUN -> [SKIP][120] ([Intel XE#870])
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-464/igt@kms_pm_backlight@basic-brightness.html
* igt@kms_pm_dc@dc5-dpms-negative:
- shard-lnl: NOTRUN -> [SKIP][121] ([Intel XE#1131])
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-2/igt@kms_pm_dc@dc5-dpms-negative.html
* igt@kms_pm_rpm@drm-resources-equal:
- shard-bmg: NOTRUN -> [SKIP][122] ([Intel XE#4962])
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_pm_rpm@drm-resources-equal.html
* igt@kms_pm_rpm@i2c:
- shard-bmg: [PASS][123] -> [SKIP][124] ([Intel XE#4962]) +1 other test skip
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_pm_rpm@i2c.html
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_pm_rpm@i2c.html
* igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area:
- shard-lnl: NOTRUN -> [SKIP][125] ([Intel XE#2893])
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area.html
* igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-sf-dmg-area@pipe-a-edp-1:
- shard-lnl: NOTRUN -> [SKIP][126] ([Intel XE#4608])
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-sf-dmg-area@pipe-a-edp-1.html
* igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf:
- shard-dg2-set2: NOTRUN -> [SKIP][127] ([Intel XE#1489]) +3 other tests skip
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-sf:
- shard-adlp: NOTRUN -> [SKIP][128] ([Intel XE#1489]) +5 other tests skip
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-sf.html
* igt@kms_psr@fbc-pr-dpms:
- shard-lnl: NOTRUN -> [SKIP][129] ([Intel XE#1406]) +1 other test skip
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-1/igt@kms_psr@fbc-pr-dpms.html
* igt@kms_psr@fbc-psr2-sprite-plane-move:
- shard-dg2-set2: NOTRUN -> [SKIP][130] ([Intel XE#2850] / [Intel XE#929]) +6 other tests skip
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@kms_psr@fbc-psr2-sprite-plane-move.html
* igt@kms_psr@psr-cursor-plane-onoff:
- shard-adlp: NOTRUN -> [SKIP][131] ([Intel XE#2850] / [Intel XE#929]) +4 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@kms_psr@psr-cursor-plane-onoff.html
* igt@kms_rotation_crc@primary-4-tiled-reflect-x-0:
- shard-adlp: NOTRUN -> [SKIP][132] ([Intel XE#1127])
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@kms_rotation_crc@primary-4-tiled-reflect-x-0.html
* igt@kms_rotation_crc@primary-rotation-270:
- shard-adlp: NOTRUN -> [SKIP][133] ([Intel XE#3414])
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@kms_rotation_crc@primary-rotation-270.html
- shard-dg2-set2: NOTRUN -> [SKIP][134] ([Intel XE#3414]) +1 other test skip
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-434/igt@kms_rotation_crc@primary-rotation-270.html
- shard-lnl: NOTRUN -> [SKIP][135] ([Intel XE#3414] / [Intel XE#3904])
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@kms_rotation_crc@primary-rotation-270.html
* igt@kms_rotation_crc@primary-x-tiled-reflect-x-180:
- shard-lnl: NOTRUN -> [FAIL][136] ([Intel XE#4689])
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@kms_rotation_crc@primary-x-tiled-reflect-x-180.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0:
- shard-dg2-set2: NOTRUN -> [SKIP][137] ([Intel XE#1127])
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-adlp: NOTRUN -> [SKIP][138] ([Intel XE#362])
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@kms_tiled_display@basic-test-pattern.html
- shard-dg2-set2: NOTRUN -> [FAIL][139] ([Intel XE#1729])
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@kms_tiled_display@basic-test-pattern.html
- shard-lnl: NOTRUN -> [SKIP][140] ([Intel XE#362])
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-7/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_vrr@flip-suspend:
- shard-adlp: NOTRUN -> [SKIP][141] ([Intel XE#455]) +6 other tests skip
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-2/igt@kms_vrr@flip-suspend.html
- shard-dg2-set2: NOTRUN -> [SKIP][142] ([Intel XE#455]) +6 other tests skip
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@kms_vrr@flip-suspend.html
* igt@kms_vrr@lobf:
- shard-adlp: NOTRUN -> [SKIP][143] ([Intel XE#2168])
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@kms_vrr@lobf.html
- shard-dg2-set2: NOTRUN -> [SKIP][144] ([Intel XE#2168])
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-435/igt@kms_vrr@lobf.html
- shard-lnl: NOTRUN -> [SKIP][145] ([Intel XE#1499])
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@kms_vrr@lobf.html
* igt@xe_compute@ccs-mode-compute-kernel:
- shard-adlp: NOTRUN -> [SKIP][146] ([Intel XE#1447] / [Intel XE#5596])
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-6/igt@xe_compute@ccs-mode-compute-kernel.html
* igt@xe_configfs@survivability-mode:
- shard-dg2-set2: NOTRUN -> [SKIP][147] ([Intel XE#5249])
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-463/igt@xe_configfs@survivability-mode.html
- shard-lnl: NOTRUN -> [SKIP][148] ([Intel XE#5249])
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@xe_configfs@survivability-mode.html
- shard-adlp: NOTRUN -> [SKIP][149] ([Intel XE#5249])
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@xe_configfs@survivability-mode.html
* igt@xe_copy_basic@mem-copy-linear-0x369:
- shard-adlp: NOTRUN -> [SKIP][150] ([Intel XE#1123])
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@xe_copy_basic@mem-copy-linear-0x369.html
* igt@xe_eudebug@basic-client-th:
- shard-bmg: NOTRUN -> [SKIP][151] ([Intel XE#4837])
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@xe_eudebug@basic-client-th.html
* igt@xe_eudebug@vma-ufence-faultable:
- shard-dg2-set2: NOTRUN -> [SKIP][152] ([Intel XE#4837]) +5 other tests skip
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@xe_eudebug@vma-ufence-faultable.html
- shard-lnl: NOTRUN -> [SKIP][153] ([Intel XE#4837]) +2 other tests skip
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-3/igt@xe_eudebug@vma-ufence-faultable.html
* igt@xe_eudebug_online@interrupt-other:
- shard-adlp: NOTRUN -> [SKIP][154] ([Intel XE#4837] / [Intel XE#5565]) +4 other tests skip
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-9/igt@xe_eudebug_online@interrupt-other.html
* igt@xe_eudebug_sriov@deny-sriov:
- shard-adlp: NOTRUN -> [SKIP][155] ([Intel XE#4519])
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@xe_eudebug_sriov@deny-sriov.html
* igt@xe_evict@evict-small-multi-vm:
- shard-adlp: NOTRUN -> [SKIP][156] ([Intel XE#261] / [Intel XE#5564] / [Intel XE#688])
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@xe_evict@evict-small-multi-vm.html
* igt@xe_exec_basic@many-bindexecqueue-rebind:
- shard-bmg: [PASS][157] -> [SKIP][158] ([Intel XE#4945]) +539 other tests skip
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@xe_exec_basic@many-bindexecqueue-rebind.html
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_exec_basic@many-bindexecqueue-rebind.html
* igt@xe_exec_basic@multigpu-no-exec-basic-defer-bind:
- shard-bmg: NOTRUN -> [SKIP][159] ([Intel XE#2322])
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@xe_exec_basic@multigpu-no-exec-basic-defer-bind.html
* igt@xe_exec_basic@multigpu-no-exec-bindexecqueue-rebind:
- shard-adlp: NOTRUN -> [SKIP][160] ([Intel XE#1392] / [Intel XE#5575]) +1 other test skip
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@xe_exec_basic@multigpu-no-exec-bindexecqueue-rebind.html
* igt@xe_exec_basic@multigpu-no-exec-null-defer-bind:
- shard-dg2-set2: [PASS][161] -> [SKIP][162] ([Intel XE#1392]) +6 other tests skip
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-463/igt@xe_exec_basic@multigpu-no-exec-null-defer-bind.html
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@xe_exec_basic@multigpu-no-exec-null-defer-bind.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race:
- shard-dg2-set2: NOTRUN -> [SKIP][163] ([Intel XE#1392])
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race.html
- shard-lnl: NOTRUN -> [SKIP][164] ([Intel XE#1392]) +1 other test skip
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-7/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race.html
* igt@xe_exec_fault_mode@many-userptr-invalidate-race-prefetch:
- shard-adlp: NOTRUN -> [SKIP][165] ([Intel XE#288] / [Intel XE#5561]) +8 other tests skip
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-4/igt@xe_exec_fault_mode@many-userptr-invalidate-race-prefetch.html
- shard-dg2-set2: NOTRUN -> [SKIP][166] ([Intel XE#288]) +12 other tests skip
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@xe_exec_fault_mode@many-userptr-invalidate-race-prefetch.html
* igt@xe_exec_system_allocator@many-execqueues-malloc-mlock-nomemset:
- shard-bmg: [PASS][167] -> [TIMEOUT][168] ([Intel XE#5686])
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@xe_exec_system_allocator@many-execqueues-malloc-mlock-nomemset.html
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_exec_system_allocator@many-execqueues-malloc-mlock-nomemset.html
* igt@xe_exec_system_allocator@process-many-execqueues-mmap-nomemset:
- shard-adlp: NOTRUN -> [SKIP][169] ([Intel XE#4915] / [Intel XE#5560]) +79 other tests skip
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@xe_exec_system_allocator@process-many-execqueues-mmap-nomemset.html
* igt@xe_exec_system_allocator@process-many-mmap-free-race-nomemset:
- shard-dg2-set2: NOTRUN -> [SKIP][170] ([Intel XE#4915]) +97 other tests skip
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-435/igt@xe_exec_system_allocator@process-many-mmap-free-race-nomemset.html
* igt@xe_exec_system_allocator@process-many-stride-mmap-huge:
- shard-lnl: NOTRUN -> [SKIP][171] ([Intel XE#4943]) +4 other tests skip
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-1/igt@xe_exec_system_allocator@process-many-stride-mmap-huge.html
- shard-bmg: NOTRUN -> [SKIP][172] ([Intel XE#4943])
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@xe_exec_system_allocator@process-many-stride-mmap-huge.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-stride-new-race:
- shard-bmg: NOTRUN -> [SKIP][173] ([Intel XE#4945]) +19 other tests skip
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-new-race.html
* igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv:
- shard-dg2-set2: NOTRUN -> [INCOMPLETE][174] ([Intel XE#5531])
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-463/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html
* igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
- shard-bmg: [PASS][175] -> [SKIP][176] ([Intel XE#2229])
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
* igt@xe_module_load@many-reload:
- shard-bmg: [PASS][177] -> [FAIL][178] ([Intel XE#5679]) +1 other test fail
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@xe_module_load@many-reload.html
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@many-reload.html
* igt@xe_oa@non-privileged-map-oa-buffer:
- shard-dg2-set2: NOTRUN -> [SKIP][179] ([Intel XE#3573]) +1 other test skip
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@xe_oa@non-privileged-map-oa-buffer.html
* igt@xe_oa@privileged-forked-access-vaddr:
- shard-adlp: NOTRUN -> [SKIP][180] ([Intel XE#3573]) +3 other tests skip
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@xe_oa@privileged-forked-access-vaddr.html
* igt@xe_pat@pat-index-xehpc:
- shard-dg2-set2: NOTRUN -> [SKIP][181] ([Intel XE#2838] / [Intel XE#979])
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@xe_pat@pat-index-xehpc.html
* igt@xe_pm@s3-exec-after:
- shard-adlp: [PASS][182] -> [DMESG-WARN][183] ([Intel XE#2953] / [Intel XE#4173] / [Intel XE#569])
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-3/igt@xe_pm@s3-exec-after.html
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@xe_pm@s3-exec-after.html
* igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq:
- shard-adlp: NOTRUN -> [SKIP][184] ([Intel XE#4733] / [Intel XE#5594]) +2 other tests skip
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
- shard-dg2-set2: NOTRUN -> [SKIP][185] ([Intel XE#4733]) +1 other test skip
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-436/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
* igt@xe_query@multigpu-query-engines:
- shard-dg2-set2: NOTRUN -> [SKIP][186] ([Intel XE#944]) +1 other test skip
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-432/igt@xe_query@multigpu-query-engines.html
* igt@xe_query@multigpu-query-gt-list:
- shard-adlp: NOTRUN -> [SKIP][187] ([Intel XE#944])
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-3/igt@xe_query@multigpu-query-gt-list.html
- shard-lnl: NOTRUN -> [SKIP][188] ([Intel XE#944])
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-5/igt@xe_query@multigpu-query-gt-list.html
#### Possible fixes ####
* igt@core_hotunplug@unbind-rebind:
- shard-bmg: [SKIP][189] ([Intel XE#4963]) -> [PASS][190]
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@core_hotunplug@unbind-rebind.html
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@core_hotunplug@unbind-rebind.html
* igt@core_setmaster@master-drop-set-root:
- shard-bmg: [FAIL][191] ([Intel XE#4672]) -> [PASS][192]
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@core_setmaster@master-drop-set-root.html
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@core_setmaster@master-drop-set-root.html
* igt@fbdev@write:
- shard-bmg: [SKIP][193] ([Intel XE#2134]) -> [PASS][194]
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@fbdev@write.html
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@fbdev@write.html
* igt@intel_hwmon@hwmon-write:
- shard-bmg: [SKIP][195] -> [PASS][196] +1 other test pass
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@intel_hwmon@hwmon-write.html
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@intel_hwmon@hwmon-write.html
* igt@kms_addfb_basic@legacy-format:
- shard-adlp: [DMESG-WARN][197] -> [PASS][198] +1 other test pass
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-4/igt@kms_addfb_basic@legacy-format.html
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@kms_addfb_basic@legacy-format.html
* igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-bmg: [SKIP][199] ([Intel XE#4947]) -> [PASS][200] +20 other tests pass
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip:
- shard-adlp: [DMESG-FAIL][201] ([Intel XE#4543]) -> [PASS][202] +9 other tests pass
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-3/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-a-hdmi-a-6:
- shard-dg2-set2: [INCOMPLETE][203] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124] / [Intel XE#4345]) -> [PASS][204]
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-a-hdmi-a-6.html
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-a-hdmi-a-6.html
* igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic:
- shard-bmg: [SKIP][205] ([Intel XE#2291]) -> [PASS][206] +3 other tests pass
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic.html
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic:
- shard-bmg: [FAIL][207] ([Intel XE#4633]) -> [PASS][208]
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
* igt@kms_flip@2x-blocking-absolute-wf_vblank-interruptible:
- shard-bmg: [SKIP][209] ([Intel XE#2316]) -> [PASS][210] +1 other test pass
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_flip@2x-blocking-absolute-wf_vblank-interruptible.html
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_flip@2x-blocking-absolute-wf_vblank-interruptible.html
* igt@kms_flip@flip-vs-rmfb-interruptible@b-hdmi-a1:
- shard-adlp: [DMESG-WARN][211] ([Intel XE#4543]) -> [PASS][212] +12 other tests pass
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-1/igt@kms_flip@flip-vs-rmfb-interruptible@b-hdmi-a1.html
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@kms_flip@flip-vs-rmfb-interruptible@b-hdmi-a1.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-dg2-set2: [INCOMPLETE][213] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][214] +1 other test pass
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-463/igt@kms_flip@flip-vs-suspend-interruptible.html
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-433/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling:
- shard-adlp: [DMESG-FAIL][215] ([Intel XE#4543] / [Intel XE#4921]) -> [PASS][216] +1 other test pass
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-8/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling.html
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-1/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling.html
* igt@kms_hdr@static-toggle:
- shard-bmg: [SKIP][217] ([Intel XE#1503]) -> [PASS][218]
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_hdr@static-toggle.html
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@kms_hdr@static-toggle.html
* igt@kms_joiner@invalid-modeset-force-big-joiner:
- shard-bmg: [SKIP][219] ([Intel XE#3012]) -> [PASS][220]
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_joiner@invalid-modeset-force-big-joiner.html
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_joiner@invalid-modeset-force-big-joiner.html
* igt@kms_plane_scaling@plane-upscale-20x20-with-modifiers:
- shard-bmg: [SKIP][221] ([Intel XE#4950]) -> [PASS][222] +77 other tests pass
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_plane_scaling@plane-upscale-20x20-with-modifiers.html
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_plane_scaling@plane-upscale-20x20-with-modifiers.html
* igt@kms_pm_dc@dc6-dpms:
- shard-adlp: [FAIL][223] ([Intel XE#718]) -> [PASS][224]
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-1/igt@kms_pm_dc@dc6-dpms.html
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_pm_dc@dc6-dpms.html
* igt@kms_pm_dc@dc6-psr:
- shard-lnl: [FAIL][225] ([Intel XE#718]) -> [PASS][226] +2 other tests pass
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-lnl-3/igt@kms_pm_dc@dc6-psr.html
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-8/igt@kms_pm_dc@dc6-psr.html
* igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait:
- shard-bmg: [SKIP][227] ([Intel XE#4962]) -> [PASS][228] +1 other test pass
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html
* igt@kms_vrr@cmrr@pipe-a-edp-1:
- shard-lnl: [FAIL][229] ([Intel XE#4459]) -> [PASS][230] +1 other test pass
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-lnl-2/igt@kms_vrr@cmrr@pipe-a-edp-1.html
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-3/igt@kms_vrr@cmrr@pipe-a-edp-1.html
* igt@kms_vrr@negative-basic:
- shard-bmg: [SKIP][231] ([Intel XE#1499]) -> [PASS][232]
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_vrr@negative-basic.html
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_vrr@negative-basic.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue:
- shard-dg2-set2: [SKIP][233] ([Intel XE#1392]) -> [PASS][234] +3 other tests pass
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-432/igt@xe_exec_basic@multigpu-once-bindexecqueue.html
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-436/igt@xe_exec_basic@multigpu-once-bindexecqueue.html
* igt@xe_exec_reset@parallel-gt-reset:
- shard-adlp: [DMESG-WARN][235] ([Intel XE#3876]) -> [PASS][236]
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-2/igt@xe_exec_reset@parallel-gt-reset.html
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@xe_exec_reset@parallel-gt-reset.html
- shard-dg2-set2: [DMESG-WARN][237] ([Intel XE#3876]) -> [PASS][238]
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-436/igt@xe_exec_reset@parallel-gt-reset.html
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-436/igt@xe_exec_reset@parallel-gt-reset.html
* igt@xe_exec_system_allocator@process-many-new-nomemset:
- shard-bmg: [SKIP][239] ([Intel XE#4945]) -> [PASS][240] +416 other tests pass
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_exec_system_allocator@process-many-new-nomemset.html
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_exec_system_allocator@process-many-new-nomemset.html
* igt@xe_live_ktest@xe_bo:
- shard-bmg: [SKIP][241] ([Intel XE#2229]) -> [PASS][242] +1 other test pass
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_live_ktest@xe_bo.html
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_live_ktest@xe_bo.html
* igt@xe_module_load@load:
- shard-bmg: ([PASS][243], [PASS][244], [PASS][245], [PASS][246], [PASS][247], [PASS][248], [PASS][249], [PASS][250], [PASS][251], [PASS][252], [PASS][253], [PASS][254], [PASS][255], [PASS][256], [PASS][257], [PASS][258], [PASS][259], [PASS][260], [PASS][261], [PASS][262], [PASS][263], [PASS][264], [PASS][265], [PASS][266], [SKIP][267], [PASS][268]) ([Intel XE#2457]) -> ([PASS][269], [PASS][270], [PASS][271], [PASS][272], [PASS][273], [PASS][274], [PASS][275], [PASS][276], [PASS][277], [PASS][278], [PASS][279], [PASS][280], [PASS][281], [PASS][282], [PASS][283], [PASS][284], [PASS][285], [PASS][286], [PASS][287], [PASS][288], [PASS][289], [PASS][290], [PASS][291], [PASS][292], [PASS][293])
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_module_load@load.html
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@xe_module_load@load.html
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_module_load@load.html
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@xe_module_load@load.html
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@xe_module_load@load.html
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_module_load@load.html
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@xe_module_load@load.html
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@xe_module_load@load.html
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@xe_module_load@load.html
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-1/igt@xe_module_load@load.html
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@xe_module_load@load.html
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@xe_module_load@load.html
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-1/igt@xe_module_load@load.html
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_module_load@load.html
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_module_load@load.html
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@xe_module_load@load.html
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@xe_module_load@load.html
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@xe_module_load@load.html
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@xe_module_load@load.html
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@xe_module_load@load.html
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@xe_module_load@load.html
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@xe_module_load@load.html
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@xe_module_load@load.html
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_module_load@load.html
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@xe_module_load@load.html
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@xe_module_load@load.html
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@load.html
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@xe_module_load@load.html
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@xe_module_load@load.html
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@xe_module_load@load.html
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@xe_module_load@load.html
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@load.html
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@xe_module_load@load.html
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@xe_module_load@load.html
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@xe_module_load@load.html
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@xe_module_load@load.html
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@load.html
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@xe_module_load@load.html
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@xe_module_load@load.html
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_module_load@load.html
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@xe_module_load@load.html
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@xe_module_load@load.html
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@xe_module_load@load.html
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@load.html
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@load.html
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@xe_module_load@load.html
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@xe_module_load@load.html
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_module_load@load.html
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_module_load@load.html
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@load.html
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_module_load@load.html
* igt@xe_module_load@reload-no-display:
- shard-bmg: [FAIL][294] ([Intel XE#5679]) -> [PASS][295]
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_module_load@reload-no-display.html
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@xe_module_load@reload-no-display.html
* igt@xe_pm@s2idle-basic-exec:
- shard-adlp: [DMESG-WARN][296] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][297] +6 other tests pass
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-9/igt@xe_pm@s2idle-basic-exec.html
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@xe_pm@s2idle-basic-exec.html
* igt@xe_pm@s3-d3hot-basic-exec:
- shard-adlp: [ABORT][298] ([Intel XE#5545]) -> [PASS][299]
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-2/igt@xe_pm@s3-d3hot-basic-exec.html
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-6/igt@xe_pm@s3-d3hot-basic-exec.html
#### Warnings ####
* igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
- shard-bmg: [SKIP][300] ([Intel XE#4950]) -> [SKIP][301] ([Intel XE#2370]) +1 other test skip
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
* igt@kms_big_fb@linear-32bpp-rotate-90:
- shard-bmg: [SKIP][302] ([Intel XE#4947]) -> [SKIP][303] ([Intel XE#2327]) +2 other tests skip
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_big_fb@linear-32bpp-rotate-90.html
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_big_fb@linear-32bpp-rotate-90.html
* igt@kms_big_fb@x-tiled-32bpp-rotate-90:
- shard-bmg: [SKIP][304] ([Intel XE#2327]) -> [SKIP][305] ([Intel XE#4947]) +3 other tests skip
[304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_big_fb@x-tiled-32bpp-rotate-90.html
[305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_big_fb@x-tiled-32bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-bmg: [SKIP][306] ([Intel XE#4947]) -> [SKIP][307] ([Intel XE#1124]) +10 other tests skip
[306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
[307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
- shard-bmg: [SKIP][308] ([Intel XE#4947]) -> [SKIP][309] ([Intel XE#607])
[308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
[309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip:
- shard-bmg: [SKIP][310] ([Intel XE#1124]) -> [SKIP][311] ([Intel XE#4947]) +11 other tests skip
[310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
[311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p:
- shard-bmg: [SKIP][312] ([Intel XE#2314] / [Intel XE#2894]) -> [SKIP][313] ([Intel XE#4950]) +1 other test skip
[312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
[313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
* igt@kms_bw@linear-tiling-1-displays-2560x1440p:
- shard-bmg: [SKIP][314] ([Intel XE#4950]) -> [SKIP][315] ([Intel XE#367]) +2 other tests skip
[314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_bw@linear-tiling-1-displays-2560x1440p.html
[315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_bw@linear-tiling-1-displays-2560x1440p.html
* igt@kms_bw@linear-tiling-4-displays-2160x1440p:
- shard-bmg: [SKIP][316] ([Intel XE#367]) -> [SKIP][317] ([Intel XE#4950]) +3 other tests skip
[316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_bw@linear-tiling-4-displays-2160x1440p.html
[317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_bw@linear-tiling-4-displays-2160x1440p.html
* igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc:
- shard-bmg: [SKIP][318] ([Intel XE#4947]) -> [SKIP][319] ([Intel XE#2887]) +12 other tests skip
[318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc.html
[319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs:
- shard-bmg: [SKIP][320] ([Intel XE#2652] / [Intel XE#787]) -> [SKIP][321] ([Intel XE#4947])
[320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs.html
[321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs:
- shard-bmg: [SKIP][322] ([Intel XE#4947]) -> [SKIP][323] ([Intel XE#2652] / [Intel XE#787])
[322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs.html
[323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc:
- shard-bmg: [SKIP][324] ([Intel XE#4947]) -> [SKIP][325] ([Intel XE#3432]) +1 other test skip
[324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc.html
[325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs:
- shard-bmg: [SKIP][326] ([Intel XE#3432]) -> [SKIP][327] ([Intel XE#4947]) +1 other test skip
[326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@kms_ccs@crc-primary-suspend-y-tiled-ccs.html
[327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_ccs@crc-primary-suspend-y-tiled-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc:
- shard-bmg: [SKIP][328] ([Intel XE#2887]) -> [SKIP][329] ([Intel XE#4947]) +20 other tests skip
[328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc.html
[329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
- shard-dg2-set2: [INCOMPLETE][330] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4345] / [Intel XE#4522]) -> [INCOMPLETE][331] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124] / [Intel XE#4345])
[330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
[331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
- shard-dg2-set2: [INCOMPLETE][332] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124] / [Intel XE#4345]) -> [INCOMPLETE][333] ([Intel XE#2705] / [Intel XE#4212] / [Intel XE#4345])
[332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
[333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
* igt@kms_cdclk@mode-transition:
- shard-bmg: [SKIP][334] ([Intel XE#2724]) -> [SKIP][335] ([Intel XE#4947])
[334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_cdclk@mode-transition.html
[335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_cdclk@mode-transition.html
* igt@kms_chamelium_color@degamma:
- shard-bmg: [SKIP][336] ([Intel XE#2325]) -> [SKIP][337] ([Intel XE#4950]) +1 other test skip
[336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_chamelium_color@degamma.html
[337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_chamelium_color@degamma.html
* igt@kms_chamelium_color@gamma:
- shard-bmg: [SKIP][338] ([Intel XE#4950]) -> [SKIP][339] ([Intel XE#2325]) +1 other test skip
[338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_chamelium_color@gamma.html
[339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_chamelium_color@gamma.html
* igt@kms_chamelium_edid@dp-edid-resolution-list:
- shard-bmg: [SKIP][340] ([Intel XE#4950]) -> [SKIP][341] ([Intel XE#2252]) +9 other tests skip
[340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_chamelium_edid@dp-edid-resolution-list.html
[341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_chamelium_edid@dp-edid-resolution-list.html
* igt@kms_chamelium_frames@vga-frame-dump:
- shard-bmg: [SKIP][342] ([Intel XE#2252]) -> [SKIP][343] ([Intel XE#4950]) +10 other tests skip
[342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_chamelium_frames@vga-frame-dump.html
[343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_chamelium_frames@vga-frame-dump.html
* igt@kms_content_protection@dp-mst-type-1:
- shard-bmg: [SKIP][344] ([Intel XE#4950]) -> [SKIP][345] ([Intel XE#2390])
[344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_content_protection@dp-mst-type-1.html
[345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_content_protection@dp-mst-type-1.html
* igt@kms_content_protection@lic-type-0:
- shard-bmg: [FAIL][346] ([Intel XE#1178]) -> [SKIP][347] ([Intel XE#2341]) +1 other test skip
[346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_content_protection@lic-type-0.html
[347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_content_protection@lic-type-0.html
* igt@kms_content_protection@mei-interface:
- shard-bmg: [SKIP][348] ([Intel XE#4950]) -> [SKIP][349] ([Intel XE#2341]) +1 other test skip
[348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_content_protection@mei-interface.html
[349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_content_protection@mei-interface.html
* igt@kms_content_protection@srm:
- shard-bmg: [FAIL][350] ([Intel XE#1178]) -> [SKIP][351] ([Intel XE#4950])
[350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-1/igt@kms_content_protection@srm.html
[351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_content_protection@srm.html
* igt@kms_content_protection@uevent:
- shard-bmg: [SKIP][352] ([Intel XE#2341]) -> [FAIL][353] ([Intel XE#1188])
[352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_content_protection@uevent.html
[353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_content_protection@uevent.html
* igt@kms_cursor_crc@cursor-onscreen-32x32:
- shard-bmg: [SKIP][354] ([Intel XE#4950]) -> [SKIP][355] ([Intel XE#2320]) +2 other tests skip
[354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_cursor_crc@cursor-onscreen-32x32.html
[355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_cursor_crc@cursor-onscreen-32x32.html
* igt@kms_cursor_crc@cursor-onscreen-512x512:
- shard-bmg: [SKIP][356] ([Intel XE#4950]) -> [SKIP][357] ([Intel XE#2321])
[356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_cursor_crc@cursor-onscreen-512x512.html
[357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@kms_cursor_crc@cursor-onscreen-512x512.html
* igt@kms_cursor_crc@cursor-random-32x32:
- shard-bmg: [SKIP][358] ([Intel XE#2320]) -> [SKIP][359] ([Intel XE#4950]) +7 other tests skip
[358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_cursor_crc@cursor-random-32x32.html
[359]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_cursor_crc@cursor-random-32x32.html
* igt@kms_cursor_crc@cursor-sliding-512x512:
- shard-bmg: [SKIP][360] ([Intel XE#2321]) -> [SKIP][361] ([Intel XE#4950]) +2 other tests skip
[360]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@kms_cursor_crc@cursor-sliding-512x512.html
[361]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_cursor_crc@cursor-sliding-512x512.html
* igt@kms_cursor_legacy@cursora-vs-flipb-legacy:
- shard-bmg: [SKIP][362] ([Intel XE#4950]) -> [SKIP][363] ([Intel XE#2291]) +1 other test skip
[362]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html
[363]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic:
- shard-bmg: [SKIP][364] ([Intel XE#2291]) -> [SKIP][365] ([Intel XE#4950]) +1 other test skip
[364]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic.html
[365]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic.html
* igt@kms_dirtyfb@fbc-dirtyfb-ioctl:
- shard-bmg: [SKIP][366] ([Intel XE#4947]) -> [SKIP][367] ([Intel XE#5428])
[366]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_dirtyfb@fbc-dirtyfb-ioctl.html
[367]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_dirtyfb@fbc-dirtyfb-ioctl.html
* igt@kms_dp_link_training@uhbr-mst:
- shard-bmg: [SKIP][368] ([Intel XE#4354]) -> [SKIP][369] ([Intel XE#4947]) +1 other test skip
[368]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@kms_dp_link_training@uhbr-mst.html
[369]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_dp_link_training@uhbr-mst.html
* igt@kms_dsc@dsc-fractional-bpp:
- shard-bmg: [SKIP][370] ([Intel XE#2244]) -> [SKIP][371] ([Intel XE#4947]) +2 other tests skip
[370]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_dsc@dsc-fractional-bpp.html
[371]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_dsc@dsc-fractional-bpp.html
* igt@kms_dsc@dsc-with-output-formats-with-bpc:
- shard-bmg: [SKIP][372] ([Intel XE#4947]) -> [SKIP][373] ([Intel XE#2244])
[372]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_dsc@dsc-with-output-formats-with-bpc.html
[373]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_dsc@dsc-with-output-formats-with-bpc.html
* igt@kms_feature_discovery@display-4x:
- shard-bmg: [SKIP][374] ([Intel XE#1138]) -> [SKIP][375] ([Intel XE#4950])
[374]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_feature_discovery@display-4x.html
[375]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_feature_discovery@display-4x.html
* igt@kms_feature_discovery@dp-mst:
- shard-bmg: [SKIP][376] ([Intel XE#4950]) -> [SKIP][377] ([Intel XE#2375])
[376]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_feature_discovery@dp-mst.html
[377]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_feature_discovery@dp-mst.html
* igt@kms_feature_discovery@psr2:
- shard-bmg: [SKIP][378] ([Intel XE#2374]) -> [SKIP][379] ([Intel XE#4950])
[378]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_feature_discovery@psr2.html
[379]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_feature_discovery@psr2.html
* igt@kms_flip@2x-plain-flip-ts-check-interruptible:
- shard-bmg: [SKIP][380] ([Intel XE#2316]) -> [SKIP][381] ([Intel XE#4950]) +1 other test skip
[380]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_flip@2x-plain-flip-ts-check-interruptible.html
[381]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_flip@2x-plain-flip-ts-check-interruptible.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-bmg: [SKIP][382] ([Intel XE#4950]) -> [INCOMPLETE][383] ([Intel XE#2049] / [Intel XE#2597])
[382]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_flip@flip-vs-suspend-interruptible.html
[383]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling:
- shard-bmg: [SKIP][384] ([Intel XE#4947]) -> [SKIP][385] ([Intel XE#2293] / [Intel XE#2380]) +3 other tests skip
[384]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling.html
[385]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling:
- shard-bmg: [SKIP][386] ([Intel XE#2293] / [Intel XE#2380]) -> [SKIP][387] ([Intel XE#4947]) +5 other tests skip
[386]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling.html
[387]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling.html
* igt@kms_frontbuffer_tracking@drrs-1p-offscren-pri-indfb-draw-blt:
- shard-bmg: [SKIP][388] ([Intel XE#2311]) -> [SKIP][389] ([Intel XE#4947]) +33 other tests skip
[388]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_frontbuffer_tracking@drrs-1p-offscren-pri-indfb-draw-blt.html
[389]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-1p-offscren-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw:
- shard-bmg: [SKIP][390] ([Intel XE#2312]) -> [SKIP][391] ([Intel XE#4947]) +5 other tests skip
[390]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw.html
[391]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][392] ([Intel XE#2311]) -> [SKIP][393] ([Intel XE#2312]) +5 other tests skip
[392]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
[393]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][394] ([Intel XE#4947]) -> [SKIP][395] ([Intel XE#2311]) +25 other tests skip
[394]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc.html
[395]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-mmap-wc:
- shard-bmg: [SKIP][396] ([Intel XE#2312]) -> [SKIP][397] ([Intel XE#2311]) +5 other tests skip
[396]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-mmap-wc.html
[397]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][398] ([Intel XE#2312]) -> [SKIP][399] ([Intel XE#5390]) +3 other tests skip
[398]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt.html
[399]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-rte:
- shard-bmg: [SKIP][400] ([Intel XE#5427]) -> [SKIP][401] ([Intel XE#4947])
[400]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-2p-rte.html
[401]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-2p-rte.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][402] ([Intel XE#4947]) -> [SKIP][403] ([Intel XE#5390]) +9 other tests skip
[402]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-wc.html
[403]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
- shard-bmg: [SKIP][404] ([Intel XE#5390]) -> [SKIP][405] ([Intel XE#2312]) +4 other tests skip
[404]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
[405]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary:
- shard-bmg: [SKIP][406] ([Intel XE#5390]) -> [SKIP][407] ([Intel XE#4947]) +15 other tests skip
[406]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary.html
[407]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbc-tiling-y:
- shard-bmg: [SKIP][408] ([Intel XE#4947]) -> [SKIP][409] ([Intel XE#2352])
[408]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
[409]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-7/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][410] ([Intel XE#4947]) -> [SKIP][411] ([Intel XE#2313]) +24 other tests skip
[410]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt.html
[411]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-blt:
- shard-bmg: [SKIP][412] ([Intel XE#4947]) -> [SKIP][413] ([Intel XE#2312]) +10 other tests skip
[412]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-blt.html
[413]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt:
- shard-bmg: [SKIP][414] ([Intel XE#2312]) -> [SKIP][415] ([Intel XE#2313]) +5 other tests skip
[414]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html
[415]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@pipe-fbc-rte:
- shard-bmg: [SKIP][416] ([Intel XE#5672]) -> [SKIP][417] ([Intel XE#4947])
[416]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html
[417]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][418] ([Intel XE#2313]) -> [SKIP][419] ([Intel XE#2312]) +6 other tests skip
[418]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
[419]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen:
- shard-bmg: [SKIP][420] ([Intel XE#2313]) -> [SKIP][421] ([Intel XE#4947]) +39 other tests skip
[420]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html
[421]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: [SKIP][422] ([Intel XE#3374] / [Intel XE#3544]) -> [SKIP][423] ([Intel XE#4950])
[422]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_hdr@brightness-with-hdr.html
[423]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_hdr@brightness-with-hdr.html
* igt@kms_hdr@static-toggle-dpms:
- shard-bmg: [SKIP][424] ([Intel XE#1503]) -> [SKIP][425] ([Intel XE#4950])
[424]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_hdr@static-toggle-dpms.html
[425]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_hdr@static-toggle-dpms.html
* igt@kms_joiner@invalid-modeset-ultra-joiner:
- shard-bmg: [SKIP][426] ([Intel XE#4947]) -> [SKIP][427] ([Intel XE#2927])
[426]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_joiner@invalid-modeset-ultra-joiner.html
[427]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_joiner@invalid-modeset-ultra-joiner.html
* igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
- shard-bmg: [SKIP][428] ([Intel XE#4950]) -> [SKIP][429] ([Intel XE#2501])
[428]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
[429]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
* igt@kms_plane_multiple@2x-tiling-y:
- shard-bmg: [SKIP][430] ([Intel XE#4950]) -> [SKIP][431] ([Intel XE#5021])
[430]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_plane_multiple@2x-tiling-y.html
[431]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_plane_multiple@2x-tiling-y.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-bmg: [SKIP][432] ([Intel XE#5021]) -> [SKIP][433] ([Intel XE#4950])
[432]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@kms_plane_multiple@2x-tiling-yf.html
[433]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_plane_scaling@planes-downscale-factor-0-5:
- shard-bmg: [SKIP][434] ([Intel XE#2763]) -> [SKIP][435] ([Intel XE#4950])
[434]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_plane_scaling@planes-downscale-factor-0-5.html
[435]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_plane_scaling@planes-downscale-factor-0-5.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5:
- shard-bmg: [SKIP][436] ([Intel XE#4950]) -> [SKIP][437] ([Intel XE#2763])
[436]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5.html
[437]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5.html
* igt@kms_pm_backlight@fade-with-dpms:
- shard-bmg: [SKIP][438] ([Intel XE#870]) -> [SKIP][439] ([Intel XE#4947]) +1 other test skip
[438]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_pm_backlight@fade-with-dpms.html
[439]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_pm_backlight@fade-with-dpms.html
* igt@kms_pm_dc@dc9-dpms:
- shard-adlp: [SKIP][440] ([Intel XE#734]) -> [FAIL][441] ([Intel XE#3325])
[440]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-adlp-3/igt@kms_pm_dc@dc9-dpms.html
[441]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-adlp-8/igt@kms_pm_dc@dc9-dpms.html
* igt@kms_pm_dc@deep-pkgc:
- shard-bmg: [SKIP][442] ([Intel XE#2505]) -> [SKIP][443] ([Intel XE#4947])
[442]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@kms_pm_dc@deep-pkgc.html
[443]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_pm_dc@deep-pkgc.html
* igt@kms_pm_lpsp@kms-lpsp:
- shard-bmg: [SKIP][444] ([Intel XE#4947]) -> [SKIP][445] ([Intel XE#2499])
[444]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_pm_lpsp@kms-lpsp.html
[445]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_pm_lpsp@kms-lpsp.html
* igt@kms_pm_rpm@modeset-lpsp:
- shard-bmg: [SKIP][446] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#836]) -> [SKIP][447] ([Intel XE#4962]) +1 other test skip
[446]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_pm_rpm@modeset-lpsp.html
[447]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_pm_rpm@modeset-lpsp.html
* igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf:
- shard-bmg: [SKIP][448] ([Intel XE#1489]) -> [SKIP][449] ([Intel XE#4947]) +9 other tests skip
[448]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf.html
[449]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf:
- shard-bmg: [SKIP][450] ([Intel XE#4947]) -> [SKIP][451] ([Intel XE#1489]) +8 other tests skip
[450]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf.html
[451]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_su@page_flip-p010:
- shard-bmg: [SKIP][452] ([Intel XE#2387]) -> [SKIP][453] ([Intel XE#4947]) +3 other tests skip
[452]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@kms_psr2_su@page_flip-p010.html
[453]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_psr2_su@page_flip-p010.html
* igt@kms_psr@psr-primary-page-flip:
- shard-bmg: [SKIP][454] ([Intel XE#4947]) -> [SKIP][455] ([Intel XE#2234] / [Intel XE#2850]) +9 other tests skip
[454]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_psr@psr-primary-page-flip.html
[455]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_psr@psr-primary-page-flip.html
* igt@kms_psr@psr2-suspend:
- shard-bmg: [SKIP][456] ([Intel XE#2234] / [Intel XE#2850]) -> [SKIP][457] ([Intel XE#4947]) +12 other tests skip
[456]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@kms_psr@psr2-suspend.html
[457]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_psr@psr2-suspend.html
* igt@kms_rotation_crc@primary-rotation-90:
- shard-bmg: [SKIP][458] ([Intel XE#4950]) -> [SKIP][459] ([Intel XE#3414] / [Intel XE#3904]) +1 other test skip
[458]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_rotation_crc@primary-rotation-90.html
[459]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@kms_rotation_crc@primary-rotation-90.html
* igt@kms_rotation_crc@sprite-rotation-90:
- shard-bmg: [SKIP][460] ([Intel XE#3414] / [Intel XE#3904]) -> [SKIP][461] ([Intel XE#4950])
[460]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@kms_rotation_crc@sprite-rotation-90.html
[461]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_rotation_crc@sprite-rotation-90.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-bmg: [SKIP][462] ([Intel XE#4950]) -> [SKIP][463] ([Intel XE#2426])
[462]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[463]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@kms_vrr@flip-suspend:
- shard-bmg: [SKIP][464] ([Intel XE#4950]) -> [SKIP][465] ([Intel XE#1499]) +1 other test skip
[464]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_vrr@flip-suspend.html
[465]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@kms_vrr@flip-suspend.html
* igt@kms_vrr@lobf:
- shard-bmg: [SKIP][466] ([Intel XE#4950]) -> [SKIP][467] ([Intel XE#2168])
[466]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@kms_vrr@lobf.html
[467]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@kms_vrr@lobf.html
* igt@kms_vrr@max-min:
- shard-bmg: [SKIP][468] ([Intel XE#1499]) -> [SKIP][469] ([Intel XE#4950]) +3 other tests skip
[468]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@kms_vrr@max-min.html
[469]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@kms_vrr@max-min.html
* igt@sriov_basic@enable-vfs-autoprobe-off:
- shard-bmg: [SKIP][470] ([Intel XE#4950]) -> [SKIP][471] ([Intel XE#1091] / [Intel XE#2849]) +1 other test skip
[470]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@sriov_basic@enable-vfs-autoprobe-off.html
[471]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@sriov_basic@enable-vfs-autoprobe-off.html
* igt@xe_create@multigpu-create-massive-size:
- shard-bmg: [SKIP][472] ([Intel XE#4945]) -> [SKIP][473] ([Intel XE#2504])
[472]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_create@multigpu-create-massive-size.html
[473]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@xe_create@multigpu-create-massive-size.html
* igt@xe_eudebug@read-metadata:
- shard-bmg: [SKIP][474] ([Intel XE#4837]) -> [SKIP][475] ([Intel XE#4945]) +11 other tests skip
[474]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-2/igt@xe_eudebug@read-metadata.html
[475]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_eudebug@read-metadata.html
* igt@xe_eudebug_online@single-step-one:
- shard-bmg: [SKIP][476] ([Intel XE#4945]) -> [SKIP][477] ([Intel XE#4837]) +12 other tests skip
[476]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_eudebug_online@single-step-one.html
[477]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@xe_eudebug_online@single-step-one.html
* igt@xe_eudebug_sriov@deny-sriov:
- shard-bmg: [SKIP][478] ([Intel XE#4945]) -> [SKIP][479] ([Intel XE#4518])
[478]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_eudebug_sriov@deny-sriov.html
[479]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@xe_eudebug_sriov@deny-sriov.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr:
- shard-bmg: [SKIP][480] ([Intel XE#2322]) -> [SKIP][481] ([Intel XE#4945]) +11 other tests skip
[480]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-1/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr.html
[481]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr.html
* igt@xe_exec_basic@multigpu-no-exec-rebind:
- shard-bmg: [SKIP][482] ([Intel XE#4945]) -> [SKIP][483] ([Intel XE#2322]) +10 other tests skip
[482]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_exec_basic@multigpu-no-exec-rebind.html
[483]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@xe_exec_basic@multigpu-no-exec-rebind.html
* igt@xe_exec_system_allocator@process-many-stride-mmap-new-huge:
- shard-bmg: [SKIP][484] ([Intel XE#4945]) -> [SKIP][485] ([Intel XE#4943]) +22 other tests skip
[484]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_exec_system_allocator@process-many-stride-mmap-new-huge.html
[485]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@xe_exec_system_allocator@process-many-stride-mmap-new-huge.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-large-execqueues-mmap-free-huge:
- shard-bmg: [SKIP][486] ([Intel XE#4943]) -> [SKIP][487] ([Intel XE#4945]) +25 other tests skip
[486]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@xe_exec_system_allocator@threads-shared-vm-many-large-execqueues-mmap-free-huge.html
[487]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_exec_system_allocator@threads-shared-vm-many-large-execqueues-mmap-free-huge.html
* igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv:
- shard-lnl: [ABORT][488] ([Intel XE#5466]) -> [ABORT][489] ([Intel XE#4917] / [Intel XE#5466])
[488]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-lnl-7/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
[489]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-lnl-4/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
- shard-bmg: [SKIP][490] ([Intel XE#4945]) -> [ABORT][491] ([Intel XE#4917] / [Intel XE#5466] / [Intel XE#5530])
[490]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
[491]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
* igt@xe_pat@pat-index-xehpc:
- shard-bmg: [SKIP][492] ([Intel XE#4945]) -> [SKIP][493] ([Intel XE#1420])
[492]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_pat@pat-index-xehpc.html
[493]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-8/igt@xe_pat@pat-index-xehpc.html
* igt@xe_pm@s3-d3cold-basic-exec:
- shard-bmg: [SKIP][494] ([Intel XE#4945]) -> [SKIP][495] ([Intel XE#2284]) +1 other test skip
[494]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_pm@s3-d3cold-basic-exec.html
[495]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-6/igt@xe_pm@s3-d3cold-basic-exec.html
* igt@xe_pm@vram-d3cold-threshold:
- shard-bmg: [SKIP][496] ([Intel XE#579]) -> [SKIP][497] ([Intel XE#4945])
[496]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-6/igt@xe_pm@vram-d3cold-threshold.html
[497]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_pm@vram-d3cold-threshold.html
* igt@xe_pmu@fn-engine-activity-load:
- shard-bmg: [SKIP][498] ([Intel XE#4650]) -> [SKIP][499] ([Intel XE#4945])
[498]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@xe_pmu@fn-engine-activity-load.html
[499]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_pmu@fn-engine-activity-load.html
* igt@xe_pxp@display-pxp-fb:
- shard-bmg: [SKIP][500] ([Intel XE#4945]) -> [SKIP][501] ([Intel XE#4733]) +2 other tests skip
[500]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_pxp@display-pxp-fb.html
[501]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-2/igt@xe_pxp@display-pxp-fb.html
* igt@xe_pxp@pxp-stale-queue-post-suspend:
- shard-bmg: [SKIP][502] ([Intel XE#4733]) -> [SKIP][503] ([Intel XE#4945]) +1 other test skip
[502]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-4/igt@xe_pxp@pxp-stale-queue-post-suspend.html
[503]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_pxp@pxp-stale-queue-post-suspend.html
* igt@xe_query@multigpu-query-cs-cycles:
- shard-bmg: [SKIP][504] ([Intel XE#4945]) -> [SKIP][505] ([Intel XE#944]) +1 other test skip
[504]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_query@multigpu-query-cs-cycles.html
[505]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-1/igt@xe_query@multigpu-query-cs-cycles.html
* igt@xe_query@multigpu-query-mem-usage:
- shard-bmg: [SKIP][506] ([Intel XE#944]) -> [SKIP][507] ([Intel XE#4945]) +3 other tests skip
[506]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-3/igt@xe_query@multigpu-query-mem-usage.html
[507]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_query@multigpu-query-mem-usage.html
* igt@xe_sriov_auto_provisioning@exclusive-ranges:
- shard-bmg: [SKIP][508] ([Intel XE#4945]) -> [SKIP][509] ([Intel XE#4130])
[508]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_sriov_auto_provisioning@exclusive-ranges.html
[509]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-4/igt@xe_sriov_auto_provisioning@exclusive-ranges.html
* igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling:
- shard-bmg: [SKIP][510] ([Intel XE#4130]) -> [SKIP][511] ([Intel XE#4945]) +1 other test skip
[510]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html
[511]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html
* igt@xe_sriov_flr@flr-twice:
- shard-bmg: [SKIP][512] ([Intel XE#4273]) -> [SKIP][513] ([Intel XE#4945])
[512]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-8/igt@xe_sriov_flr@flr-twice.html
[513]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_sriov_flr@flr-twice.html
* igt@xe_sriov_flr@flr-vf1-clear:
- shard-bmg: [SKIP][514] ([Intel XE#3342]) -> [SKIP][515] ([Intel XE#4945])
[514]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-7/igt@xe_sriov_flr@flr-vf1-clear.html
[515]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_sriov_flr@flr-vf1-clear.html
* igt@xe_sriov_scheduling@equal-throughput:
- shard-bmg: [SKIP][516] ([Intel XE#4351]) -> [SKIP][517] ([Intel XE#4945])
[516]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-1/igt@xe_sriov_scheduling@equal-throughput.html
[517]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-5/igt@xe_sriov_scheduling@equal-throughput.html
* igt@xe_sriov_scheduling@nonpreempt-engine-resets:
- shard-bmg: [SKIP][518] ([Intel XE#4945]) -> [SKIP][519] ([Intel XE#4351])
[518]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796/shard-bmg-5/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html
[519]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/shard-bmg-3/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
[Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
[Intel XE#1131]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1131
[Intel XE#1138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1138
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1188]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1188
[Intel XE#1340]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1340
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1420]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1420
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1447]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1447
[Intel XE#1468]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1468
[Intel XE#1477]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1477
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1504]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1504
Build changes
-------------
* IGT: IGT_8478 -> IGT_8479
* Linux: xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796 -> xe-pw-149550v5
IGT_8478: 3e7c2bd685397f852853878aef4d9c1e4889a28b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8479: 5fea7ca6493415ce108231b0ff29f02d293f9aa6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-3489-e1805ad9a7175457902ae453ea67b76194e7d796: e1805ad9a7175457902ae453ea67b76194e7d796
xe-pw-149550v5: 149550v5
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v5/index.html
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v5 21/25] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
2025-07-30 13:00 ` [PATCH v5 21/25] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged Himal Prasad Ghimiray
@ 2025-07-30 20:57 ` kernel test robot
0 siblings, 0 replies; 54+ messages in thread
From: kernel test robot @ 2025-07-30 20:57 UTC (permalink / raw)
To: Himal Prasad Ghimiray, intel-xe
Cc: llvm, oe-kbuild-all, Matthew Brost, Thomas Hellström,
Himal Prasad Ghimiray
Hi Himal,
kernel test robot noticed the following build warnings:
[auto build test WARNING on next-20250730]
[cannot apply to drm-xe/drm-xe-next linus/master v6.16 v6.16-rc7 v6.16-rc6 v6.16]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Himal-Prasad-Ghimiray/drm-gpuvm-Pass-map-arguments-through-a-struct/20250731-003813
base: next-20250730
patch link: https://lore.kernel.org/r/20250730130050.1001648-22-himal.prasad.ghimiray%40intel.com
patch subject: [PATCH v5 21/25] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
config: arm-randconfig-004-20250731 (https://download.01.org/0day-ci/archive/20250731/202507310458.N20slFKu-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 8f09b03aebb71c154f3bbe725c29e3f47d37c26e)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250731/202507310458.N20slFKu-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507310458.N20slFKu-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/gpu/drm/xe/xe_vm_madvise.c:200:4: warning: variable 'tile_mask' is uninitialized when used here [-Wuninitialized]
200 | tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
| ^~~~~~~~~
drivers/gpu/drm/xe/xe_vm_madvise.c:184:18: note: initialize the variable 'tile_mask' to silence this warning
184 | u8 id, tile_mask;
| ^
| = '\0'
1 warning generated.
vim +/tile_mask +200 drivers/gpu/drm/xe/xe_vm_madvise.c
179
180 static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
181 {
182 struct drm_gpuva *gpuva;
183 struct xe_tile *tile;
184 u8 id, tile_mask;
185
186 lockdep_assert_held_write(&vm->lock);
187
188 /* Wait for pending binds */
189 if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
190 false, MAX_SCHEDULE_TIMEOUT) <= 0)
191 XE_WARN_ON(1);
192
193 drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
194 struct xe_vma *vma = gpuva_to_vma(gpuva);
195
196 if (vma->skip_invalidation || xe_vma_is_null(vma))
197 continue;
198
199 if (xe_vma_is_cpu_addr_mirror(vma)) {
> 200 tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
201 xe_vma_start(vma),
202 xe_vma_end(vma));
203 } else {
204 for_each_tile(tile, vm->xe, id) {
205 if (xe_pt_zap_ptes(tile, vma)) {
206 tile_mask |= BIT(id);
207
208 /*
209 * WRITE_ONCE pairs with READ_ONCE
210 * in xe_vm_has_valid_gpu_mapping()
211 */
212 WRITE_ONCE(vma->tile_invalidated,
213 vma->tile_invalidated | BIT(id));
214 }
215 }
216 }
217 }
218
219 return tile_mask;
220 }
221
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-07-30 13:00 ` [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
@ 2025-07-30 23:23 ` kernel test robot
2025-08-05 3:56 ` Matthew Brost
2025-08-05 9:40 ` Danilo Krummrich
2 siblings, 0 replies; 54+ messages in thread
From: kernel test robot @ 2025-07-30 23:23 UTC (permalink / raw)
To: Himal Prasad Ghimiray, intel-xe
Cc: oe-kbuild-all, Matthew Brost, Thomas Hellström,
Boris Brezillon, Danilo Krummrich, Caterina Shablia, Rob Clark,
dri-devel, Himal Prasad Ghimiray
Hi Himal,
kernel test robot noticed the following build warnings:
[auto build test WARNING on next-20250730]
[cannot apply to drm-xe/drm-xe-next linus/master v6.16 v6.16-rc7 v6.16-rc6 v6.16]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Himal-Prasad-Ghimiray/drm-gpuvm-Pass-map-arguments-through-a-struct/20250731-003813
base: next-20250730
patch link: https://lore.kernel.org/r/20250730130050.1001648-2-himal.prasad.ghimiray%40intel.com
patch subject: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
config: i386-buildonly-randconfig-002-20250731 (https://download.01.org/0day-ci/archive/20250731/202507310715.d6MBnXvv-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250731/202507310715.d6MBnXvv-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507310715.d6MBnXvv-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> Warning: drivers/gpu/drm/drm_gpuvm.c:2471 function parameter 'req' not described in 'drm_gpuvm_sm_map_exec_lock'
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init()
2025-07-30 13:00 ` [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init() Himal Prasad Ghimiray
@ 2025-08-05 3:45 ` Matthew Brost
2025-08-05 9:35 ` Danilo Krummrich
1 sibling, 0 replies; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 3:45 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Thomas Hellström, Boris Brezillon,
Caterina Shablia, Danilo Krummrich
On Wed, Jul 30, 2025 at 06:30:27PM +0530, Himal Prasad Ghimiray wrote:
> From: Boris Brezillon <boris.brezillon@collabora.com>
>
> drm_gpuva_init() only has one internal user, and given we are about to
> add new optional fields, it only adds maintenance burden for no real
> benefit, so let's kill the thing now.
>
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
> Acked-by: Danilo Krummrich <dakr@kernel.org>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> include/drm/drm_gpuvm.h | 15 ++++-----------
> 1 file changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 892ffe75a62f..2d24d000f2ee 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -160,15 +160,6 @@ struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
> struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
>
> -static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range,
> - struct drm_gem_object *obj, u64 offset)
> -{
> - va->va.addr = addr;
> - va->va.range = range;
> - va->gem.obj = obj;
> - va->gem.offset = offset;
> -}
> -
> /**
> * drm_gpuva_invalidate() - sets whether the backing GEM of this &drm_gpuva is
> * invalidated
> @@ -1079,8 +1070,10 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
> static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
> struct drm_gpuva_op_map *op)
> {
> - drm_gpuva_init(va, op->va.addr, op->va.range,
> - op->gem.obj, op->gem.offset);
> + va->va.addr = op->va.addr;
> + va->va.range = op->va.range;
> + va->gem.obj = op->gem.obj;
> + va->gem.offset = op->gem.offset;
> }
>
> /**
> --
> 2.34.1
>
* Re: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-07-30 13:00 ` [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
2025-07-30 23:23 ` kernel test robot
@ 2025-08-05 3:56 ` Matthew Brost
2025-08-05 5:24 ` Ghimiray, Himal Prasad
2025-08-05 9:40 ` Danilo Krummrich
2 siblings, 1 reply; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 3:56 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel, Danilo Krummrich
On Wed, Jul 30, 2025 at 06:30:26PM +0530, Himal Prasad Ghimiray wrote:
> From: Boris Brezillon <boris.brezillon@collabora.com>
>
> We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create](),
> so, before we do that, let's pass arguments through a struct instead
> of changing each call site every time a new optional argument is added.
>
> v5
> - Use drm_gpuva_op_map—same as drm_gpuvm_map_req (Danilo)
> - Rebase changes for drm_gpuvm_sm_map_exec_lock()
> - Fix kernel-docs
>
> Cc: Danilo Krummrich <dakr@redhat.com>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: Caterina Shablia <caterina.shablia@collabora.com>
> Cc: Rob Clark <robin.clark@oss.qualcomm.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: <dri-devel@lists.freedesktop.org>
>
> Acked-by: Danilo Krummrich <dakr@kernel.org> (#v4)
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 106 ++++++++++---------------
> drivers/gpu/drm/imagination/pvr_vm.c | 15 ++--
> drivers/gpu/drm/msm/msm_gem_vma.c | 33 ++++++--
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 ++-
> drivers/gpu/drm/panthor/panthor_mmu.c | 13 ++-
> drivers/gpu/drm/xe/xe_vm.c | 13 ++-
> include/drm/drm_gpuvm.h | 10 +--
> 7 files changed, 112 insertions(+), 89 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index bbc7fecb6f4a..f04d80a3a63b 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -486,13 +486,18 @@
> * u64 addr, u64 range,
> * struct drm_gem_object *obj, u64 offset)
> * {
> + * struct drm_gpuva_op_map op_map = {
> + * .va.addr = addr,
> + * .va.range = range,
> + * .gem.obj = obj,
> + * .gem.offset = offset,
> + * };
> * struct drm_gpuva_ops *ops;
> * struct drm_gpuva_op *op
> * struct drm_gpuvm_bo *vm_bo;
> *
> * driver_lock_va_space();
> - * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
> - * obj, offset);
> + * ops = drm_gpuvm_sm_map_ops_create(gpuvm, &op_map);
> * if (IS_ERR(ops))
> * return PTR_ERR(ops);
> *
> @@ -2054,16 +2059,15 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
>
> static int
> op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
> - u64 addr, u64 range,
> - struct drm_gem_object *obj, u64 offset)
> + const struct drm_gpuva_op_map *req)
> {
> struct drm_gpuva_op op = {};
>
> op.op = DRM_GPUVA_OP_MAP;
> - op.map.va.addr = addr;
> - op.map.va.range = range;
> - op.map.gem.obj = obj;
> - op.map.gem.offset = offset;
> + op.map.va.addr = req->va.addr;
> + op.map.va.range = req->va.range;
> + op.map.gem.obj = req->gem.obj;
> + op.map.gem.offset = req->gem.offset;
>
> return fn->sm_step_map(&op, priv);
> }
> @@ -2102,17 +2106,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> static int
> __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> const struct drm_gpuvm_ops *ops, void *priv,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + const struct drm_gpuva_op_map *req)
> {
> struct drm_gpuva *va, *next;
> - u64 req_end = req_addr + req_range;
> + u64 req_end = req->va.addr + req->va.range;
> int ret;
>
> - if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
> + if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->va.addr, req->va.range)))
> return -EINVAL;
>
> - drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
> + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req->va.addr, req_end) {
> struct drm_gem_object *obj = va->gem.obj;
> u64 offset = va->gem.offset;
> u64 addr = va->va.addr;
> @@ -2120,9 +2123,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> u64 end = addr + range;
> bool merge = !!va->gem.obj;
>
> - if (addr == req_addr) {
> - merge &= obj == req_obj &&
> - offset == req_offset;
> + if (addr == req->va.addr) {
> + merge &= obj == req->gem.obj &&
> + offset == req->gem.offset;
>
> if (end == req_end) {
> ret = op_unmap_cb(ops, priv, va, merge);
> @@ -2141,9 +2144,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> if (end > req_end) {
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> - .va.range = range - req_range,
> + .va.range = range - req->va.range,
> .gem.obj = obj,
> - .gem.offset = offset + req_range,
> + .gem.offset = offset + req->va.range,
> };
> struct drm_gpuva_op_unmap u = {
> .va = va,
> @@ -2155,8 +2158,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> return ret;
> break;
> }
> - } else if (addr < req_addr) {
> - u64 ls_range = req_addr - addr;
> + } else if (addr < req->va.addr) {
> + u64 ls_range = req->va.addr - addr;
> struct drm_gpuva_op_map p = {
> .va.addr = addr,
> .va.range = ls_range,
> @@ -2165,8 +2168,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> };
> struct drm_gpuva_op_unmap u = { .va = va };
>
> - merge &= obj == req_obj &&
> - offset + ls_range == req_offset;
> + merge &= obj == req->gem.obj &&
> + offset + ls_range == req->gem.offset;
> u.keep = merge;
>
> if (end == req_end) {
> @@ -2189,7 +2192,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> .va.range = end - req_end,
> .gem.obj = obj,
> .gem.offset = offset + ls_range +
> - req_range,
> + req->va.range,
> };
>
> ret = op_remap_cb(ops, priv, &p, &n, &u);
> @@ -2197,10 +2200,10 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> return ret;
> break;
> }
> - } else if (addr > req_addr) {
> - merge &= obj == req_obj &&
> - offset == req_offset +
> - (addr - req_addr);
> + } else if (addr > req->va.addr) {
> + merge &= obj == req->gem.obj &&
> + offset == req->gem.offset +
> + (addr - req->va.addr);
>
> if (end == req_end) {
> ret = op_unmap_cb(ops, priv, va, merge);
> @@ -2236,9 +2239,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> }
> }
>
> - return op_map_cb(ops, priv,
> - req_addr, req_range,
> - req_obj, req_offset);
> + return op_map_cb(ops, priv, req);
> }
>
> static int
> @@ -2303,10 +2304,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
> * drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
> * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @priv: pointer to a driver private data structure
> - * @req_addr: the start address of the new mapping
> - * @req_range: the range of the new mapping
> - * @req_obj: the &drm_gem_object to map
> - * @req_offset: the offset within the &drm_gem_object
> + * @req: ptr to drm_gpuva_op_map struct
> *
> * This function iterates the given range of the GPU VA space. It utilizes the
> * &drm_gpuvm_ops to call back into the driver providing the split and merge
> @@ -2333,8 +2331,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
> */
> int
> drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + const struct drm_gpuva_op_map *req)
> {
> const struct drm_gpuvm_ops *ops = gpuvm->ops;
>
> @@ -2343,9 +2340,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> ops->sm_step_unmap)))
> return -EINVAL;
>
> - return __drm_gpuvm_sm_map(gpuvm, ops, priv,
> - req_addr, req_range,
> - req_obj, req_offset);
> + return __drm_gpuvm_sm_map(gpuvm, ops, priv, req);
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
>
> @@ -2421,10 +2416,7 @@ static const struct drm_gpuvm_ops lock_ops = {
> * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @exec: the &drm_exec locking context
> * @num_fences: for newly mapped objects, the # of fences to reserve
> - * @req_addr: the start address of the range to unmap
> - * @req_range: the range of the mappings to unmap
> - * @req_obj: the &drm_gem_object to map
> - * @req_offset: the offset within the &drm_gem_object
> + * @op: ptr to drm_gpuva_op_map struct
s/@op/@req/ - Kernel test robot.
Also I believe Danilo's suggestion here was to define drm_gpuvm_map_req
as the argument and then embed drm_gpuva_op_map within
drm_gpuvm_map_req. So in patch [1], flags would be added to
drm_gpuvm_map_req rather than drm_gpuva_op_map.
Matt
[1] https://patchwork.freedesktop.org/patch/666211/?series=149550&rev=5
> *
> * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
> * remapped, and locks+prepares (drm_exec_prepare_object()) objects that
> @@ -2442,12 +2434,10 @@ static const struct drm_gpuvm_ops lock_ops = {
> * for_each_vm_bind_operation {
> * switch (op->op) {
> * case DRIVER_OP_UNMAP:
> - * ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
> + * ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->va.addr, op->va.range);
> * break;
> * case DRIVER_OP_MAP:
> - * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
> - * op->addr, op->range,
> - * obj, op->obj_offset);
> + * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences, op);
> * break;
> * }
> *
> @@ -2478,18 +2468,16 @@ static const struct drm_gpuvm_ops lock_ops = {
> int
> drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
> struct drm_exec *exec, unsigned int num_fences,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + struct drm_gpuva_op_map *req)
> {
> - if (req_obj) {
> - int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
> + if (req->gem.obj) {
> + int ret = drm_exec_prepare_obj(exec, req->gem.obj, num_fences);
> if (ret)
> return ret;
> }
>
> return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
> - req_addr, req_range,
> - req_obj, req_offset);
> + req);
>
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
> @@ -2611,10 +2599,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
> /**
> * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
> * @gpuvm: the &drm_gpuvm representing the GPU VA space
> - * @req_addr: the start address of the new mapping
> - * @req_range: the range of the new mapping
> - * @req_obj: the &drm_gem_object to map
> - * @req_offset: the offset within the &drm_gem_object
> + * @req: ptr to drm_gpuva_op_map struct
> *
> * This function creates a list of operations to perform splitting and merging
> * of existent mapping(s) with the newly requested one.
> @@ -2642,8 +2627,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
> */
> struct drm_gpuva_ops *
> drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + const struct drm_gpuva_op_map *req)
> {
> struct drm_gpuva_ops *ops;
> struct {
> @@ -2661,9 +2645,7 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> args.vm = gpuvm;
> args.ops = ops;
>
> - ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
> - req_addr, req_range,
> - req_obj, req_offset);
> + ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args, req);
> if (ret)
> goto err_free_ops;
>
> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
> index 2896fa7501b1..57116709de81 100644
> --- a/drivers/gpu/drm/imagination/pvr_vm.c
> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
> @@ -185,12 +185,17 @@ struct pvr_vm_bind_op {
> static int pvr_vm_bind_op_exec(struct pvr_vm_bind_op *bind_op)
> {
> switch (bind_op->type) {
> - case PVR_VM_BIND_TYPE_MAP:
> + case PVR_VM_BIND_TYPE_MAP: {
> + const struct drm_gpuva_op_map map_req = {
> + .va.addr = bind_op->device_addr,
> + .va.range = bind_op->size,
> + .gem.obj = gem_from_pvr_gem(bind_op->pvr_obj),
> + .gem.offset = bind_op->offset,
> + };
> +
> return drm_gpuvm_sm_map(&bind_op->vm_ctx->gpuvm_mgr,
> - bind_op, bind_op->device_addr,
> - bind_op->size,
> - gem_from_pvr_gem(bind_op->pvr_obj),
> - bind_op->offset);
> + bind_op, &map_req);
> + }
>
> case PVR_VM_BIND_TYPE_UNMAP:
> return drm_gpuvm_sm_unmap(&bind_op->vm_ctx->gpuvm_mgr,
> diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
> index 3cd8562a5109..59a9b41bc967 100644
> --- a/drivers/gpu/drm/msm/msm_gem_vma.c
> +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
> @@ -371,6 +371,12 @@ struct drm_gpuva *
> msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
> u64 offset, u64 range_start, u64 range_end)
> {
> + struct drm_gpuva_op_map op_map = {
> + .va.addr = range_start,
> + .va.range = range_end - range_start,
> + .gem.obj = obj,
> + .gem.offset = offset,
> + };
> struct msm_gem_vm *vm = to_msm_vm(gpuvm);
> struct drm_gpuvm_bo *vm_bo;
> struct msm_gem_vma *vma;
> @@ -399,7 +405,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
> if (obj)
> GEM_WARN_ON((range_end - range_start) > obj->size);
>
> - drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
> + drm_gpuva_init_from_op(&vma->base, &op_map);
> vma->mapped = false;
>
> ret = drm_gpuva_insert(&vm->base, &vma->base);
> @@ -1172,10 +1178,17 @@ vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
> break;
> case MSM_VM_BIND_OP_MAP:
> case MSM_VM_BIND_OP_MAP_NULL:
> - ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1,
> - op->iova, op->range,
> - op->obj, op->obj_offset);
> + {
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = op->iova,
> + .va.range = op->range,
> + .gem.obj = op->obj,
> + .gem.offset = op->obj_offset,
> + };
> +
> + ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req);
> break;
> + }
> default:
> /*
> * lookup_op() should have already thrown an error for
> @@ -1283,9 +1296,17 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
> arg.flags |= MSM_VMA_DUMP;
> fallthrough;
> case MSM_VM_BIND_OP_MAP_NULL:
> - ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova,
> - op->range, op->obj, op->obj_offset);
> + {
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = op->iova,
> + .va.range = op->range,
> + .gem.obj = op->obj,
> + .gem.offset = op->obj_offset,
> + };
> +
> + ret = drm_gpuvm_sm_map(job->vm, &arg, &map_req);
> break;
> + }
> default:
> /*
> * lookup_op() should have already thrown an error for
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index ddfc46bc1b3e..b74054b0a476 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -1276,6 +1276,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
> break;
> case OP_MAP: {
> struct nouveau_uvma_region *reg;
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = op->va.addr,
> + .va.range = op->va.range,
> + .gem.obj = op->gem.obj,
> + .gem.offset = op->gem.offset,
> + };
>
> reg = nouveau_uvma_region_find_first(uvmm,
> op->va.addr,
> @@ -1301,10 +1307,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
> }
>
> op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
> - op->va.addr,
> - op->va.range,
> - op->gem.obj,
> - op->gem.offset);
> + &map_req);
> if (IS_ERR(op->ops)) {
> ret = PTR_ERR(op->ops);
> goto unwind_continue;
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 4140f697ba5a..5fd4245a57b9 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -2169,15 +2169,22 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
> mutex_lock(&vm->op_lock);
> vm->op_ctx = op;
> switch (op_type) {
> - case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
> + case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
> + const struct drm_gpuva_op_map map_req = {
> + .va.addr = op->va.addr,
> + .va.range = op->va.range,
> + .gem.obj = op->map.vm_bo->obj,
> + .gem.offset = op->map.bo_offset,
> + };
> +
> if (vm->unusable) {
> ret = -EINVAL;
> break;
> }
>
> - ret = drm_gpuvm_sm_map(&vm->base, vm, op->va.addr, op->va.range,
> - op->map.vm_bo->obj, op->map.bo_offset);
> + ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
> break;
> + }
>
> case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:
> ret = drm_gpuvm_sm_unmap(&vm->base, vm, op->va.addr, op->va.range);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 432ea325677d..4b3e78745363 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2316,10 +2316,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>
> switch (operation) {
> case DRM_XE_VM_BIND_OP_MAP:
> - case DRM_XE_VM_BIND_OP_MAP_USERPTR:
> - ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
> - obj, bo_offset_or_userptr);
> + case DRM_XE_VM_BIND_OP_MAP_USERPTR: {
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = addr,
> + .va.range = range,
> + .gem.obj = obj,
> + .gem.offset = bo_offset_or_userptr,
> + };
> +
> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
> break;
> + }
> case DRM_XE_VM_BIND_OP_UNMAP:
> ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
> break;
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 274532facfd6..892ffe75a62f 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1060,8 +1060,8 @@ struct drm_gpuva_ops {
>
> struct drm_gpuva_ops *
> drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> - u64 addr, u64 range,
> - struct drm_gem_object *obj, u64 offset);
> + const struct drm_gpuva_op_map *req);
> +
> struct drm_gpuva_ops *
> drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range);
> @@ -1205,16 +1205,14 @@ struct drm_gpuvm_ops {
> };
>
> int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> - u64 addr, u64 range,
> - struct drm_gem_object *obj, u64 offset);
> + const struct drm_gpuva_op_map *req);
>
> int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
> u64 addr, u64 range);
>
> int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
> struct drm_exec *exec, unsigned int num_fences,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *obj, u64 offset);
> + struct drm_gpuva_op_map *req);
>
> int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
> u64 req_addr, u64 req_range);
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 54+ messages in thread
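[Editor's note: the diff above converts the drm_gpuvm entry points from four scalar arguments to a single request struct. The motivation can be sketched in plain userspace C; the struct and function names below are illustrative stand-ins, not kernel code.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins; field layout mirrors drm_gpuva_op_map. */
struct gem_object { size_t size; };

struct map_req {
	struct { uint64_t addr, range; } va;
	struct { struct gem_object *obj; uint64_t offset; } gem;
};

/* Old-style signature: every new argument forces a change at each call site. */
static int sm_map_scalar(uint64_t addr, uint64_t range,
			 struct gem_object *obj, uint64_t offset)
{
	(void)addr;
	return (obj && offset + range > obj->size) ? -1 : 0;
}

/* New-style signature: extending struct map_req (e.g. with flags, as later
 * patches in this series do) leaves existing call sites untouched. */
static int sm_map_struct(const struct map_req *req)
{
	return sm_map_scalar(req->va.addr, req->va.range,
			     req->gem.obj, req->gem.offset);
}
```

The point of the refactor is visible in the second signature: adding an optional field to the request struct does not ripple through every driver, which is exactly what the series needs before introducing split/merge flags.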
* Re: [PATCH v5 03/25] drm/gpuvm: Support flags in drm_gpuva_op_map
2025-07-30 13:00 ` [PATCH v5 03/25] drm/gpuvm: Support flags in drm_gpuva_op_map Himal Prasad Ghimiray
@ 2025-08-05 3:58 ` Matthew Brost
2025-08-05 11:05 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 3:58 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Thomas Hellström, Danilo Krummrich,
Boris Brezillon, Caterina Shablia
On Wed, Jul 30, 2025 at 06:30:28PM +0530, Himal Prasad Ghimiray wrote:
> This change adds support for passing flags to drm_gpuvm_sm_map() and
> sm_map_ops_create(), enabling future extensions that affect split/merge
> logic in drm_gpuvm.
>
> Cc: Danilo Krummrich <dakr@redhat.com>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: Caterina Shablia <caterina.shablia@collabora.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> include/drm/drm_gpuvm.h | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 2d24d000f2ee..75c616fdc119 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -810,6 +810,12 @@ enum drm_gpuva_op_type {
> DRM_GPUVA_OP_DRIVER,
> };
>
> +/** DOC: flags for struct drm_gpuva_op_map
> + * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE DEFAULT split and merge,
> + * It cannot be combined with other flags.
> + */
> +#define DRM_GPUVM_SM_MAP_OPS_FLAG_NONE 0
> +
> /**
> * struct drm_gpuva_op_map - GPU VA map operation
> *
> @@ -847,6 +853,13 @@ struct drm_gpuva_op_map {
> */
> struct drm_gem_object *obj;
> } gem;
> +
> + /**
> + * @flags: Bitmask of DRM_GPUVM_SM_MAP_* flags.
> + * Use DRM_GPUVM_SM_MAP_OPS_FLAG_NONE (0) for default split merge.
> + * It cannot be combined with other flags.
> + */
> + u32 flags;
See my comment here [1], I think the flags should be in
drm_gpuvm_map_req rather than drm_gpuva_op_map, as the flags are only
used on the gpuvm side during op creation, not on the driver side when
consuming drm_gpuva_op_map.
Matt
[1] https://patchwork.freedesktop.org/patch/666205/?series=149550&rev=5#comment_1222150
> };
>
> /**
> --
> 2.34.1
>
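[Editor's note: Matt's suggestion of a wrapper struct can be sketched as below; the struct and flag names are hypothetical placeholders for drm_gpuvm_map_req / drm_gpuva_op_map, not the final kernel definitions.]

```c
#include <assert.h>
#include <stdint.h>

/* What the driver eventually consumes: no flags here. */
struct op_map {
	struct { uint64_t addr, range; } va;
};

#define MAP_REQ_FLAG_SPLIT_MADVISE (1u << 0)

/* The request wrapper: gpuvm reads the flags during op creation and
 * hands only the embedded op onward to the driver. */
struct map_req {
	struct op_map map;
	uint32_t flags;
};

/* gpuvm-side decision point, before ops are emitted to the driver. */
static int wants_madvise_split(const struct map_req *req)
{
	return !!(req->flags & MAP_REQ_FLAG_SPLIT_MADVISE);
}
```

The design question in the thread is exactly whether the flag belongs on the wrapper (consumed only at op-creation time, as above) or on the op itself (also visible to drivers), which Himal argues for in his reply below.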
* Re: [PATCH v5 11/25] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
2025-07-30 13:00 ` [PATCH v5 11/25] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
@ 2025-08-05 4:00 ` Matthew Brost
0 siblings, 0 replies; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 4:00 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Wed, Jul 30, 2025 at 06:30:36PM +0530, Himal Prasad Ghimiray wrote:
> In the case of the MADVISE ioctl, if the start or end addresses fall
> within a VMA and existing SVM ranges are present, remove the existing
> SVM mappings. Then continue with ops_parse, which creates new VMAs by
> REMAP-unmapping the old one.
>
> v2 (Matthew Brost)
> - Use vops flag to call unmapping of ranges in vm_bind_ioctl_ops_parse
> - Rename the function
>
> v3
> - Fix doc
>
> v4
> - check if range is already in garbage collector (Matthew Brost)
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 35 +++++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_svm.h | 7 +++++++
> drivers/gpu/drm/xe/xe_vm.c | 8 ++++++--
> 3 files changed, 48 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 10c8a1bcb86e..c2a5eda504bb 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -919,6 +919,41 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
> return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
> }
>
> +/**
> + * xe_svm_unmap_address_range - UNMAP SVM mappings and ranges
> + * @vm: The VM
> + * @start: start addr
> + * @end: end addr
> + *
> + * This function UNMAPS svm ranges if start or end address are inside them.
> + */
> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
> +{
> + struct drm_gpusvm_notifier *notifier, *next;
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
> + struct drm_gpusvm_range *range, *__next;
> +
> + drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
> + if (start > drm_gpusvm_range_start(range) ||
> + end < drm_gpusvm_range_end(range)) {
> + if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
> + drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
> + drm_gpusvm_range_get(range);
> + __xe_svm_garbage_collector(vm, to_xe_range(range));
> + if (!list_empty(&to_xe_range(range)->garbage_collector_link)) {
> + spin_lock(&vm->svm.garbage_collector.lock);
> + list_del(&to_xe_range(range)->garbage_collector_link);
> + spin_unlock(&vm->svm.garbage_collector.lock);
> + }
> + drm_gpusvm_range_put(range);
> + }
> + }
> + }
> +}
> +
> /**
> * xe_svm_bo_evict() - SVM evict BO to system memory
> * @bo: BO to evict
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index da9a69ea0bb1..754d56b4d255 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -90,6 +90,8 @@ bool xe_svm_range_validate(struct xe_vm *vm,
>
> u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vma);
>
> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
> +
> /**
> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> * @range: SVM range
> @@ -303,6 +305,11 @@ u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vm
> return ULONG_MAX;
> }
>
> +static inline
> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
> +{
> +}
> +
> #define xe_svm_assert_in_notifier(...) do {} while (0)
> #define xe_svm_range_has_dma_mapping(...) false
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 62230283c384..d039779412b3 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2669,8 +2669,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> end = op->base.remap.next->va.addr;
>
> if (xe_vma_is_cpu_addr_mirror(old) &&
> - xe_svm_has_mapping(vm, start, end))
> - return -EBUSY;
> + xe_svm_has_mapping(vm, start, end)) {
> + if (vops->flags & XE_VMA_OPS_FLAG_MADVISE)
> + xe_svm_unmap_address_range(vm, start, end);
> + else
> + return -EBUSY;
> + }
>
> op->remap.start = xe_vma_start(old);
> op->remap.range = xe_vma_size(old);
> --
> 2.34.1
>
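[Editor's note: the drm_gpusvm_for_each_*_safe iteration used by xe_svm_unmap_address_range() above follows the usual "fetch next before the body may free current" pattern. A self-contained userspace sketch on a plain singly linked list (not the kernel's interval tree, and with simplified removal semantics):]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct range {
	uint64_t start, end;
	struct range *next;
};

/* Grab the successor before the loop body runs, so freeing r is safe. */
#define for_each_range_safe(r, n, head) \
	for ((r) = (head); (r) && ((n) = (r)->next, 1); (r) = (n))

/* Drop every range that [start, end) only partially covers, mirroring
 * the "start or end falls inside the range" test in the patch. */
static struct range *drop_partial(struct range *head, uint64_t start,
				  uint64_t end)
{
	struct range *r, *n, **prev = &head;

	for_each_range_safe(r, n, head) {
		if (start > r->start || end < r->end) {
			*prev = n;
			free(r);
		} else {
			prev = &r->next;
		}
	}
	return head;
}
```

Without the `_safe` variant, advancing via `r->next` after `free(r)` would be a use-after-free, which is why the kernel helpers exist at all.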
* Re: [PATCH v5 13/25] drm/xe: Implement madvise ioctl for xe
2025-07-30 13:00 ` [PATCH v5 13/25] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
@ 2025-08-05 4:43 ` Matthew Brost
0 siblings, 0 replies; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 4:43 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström, Shuicheng Lin
On Wed, Jul 30, 2025 at 06:30:38PM +0530, Himal Prasad Ghimiray wrote:
> This driver-specific ioctl enables UMDs to control the memory attributes
> for GPU VMAs within a specified input range. If the start or end
> addresses fall within an existing VMA, the VMA is split accordingly. The
> attributes of the VMA are modified as provided by the user. The old
> mappings of the VMAs are invalidated, and TLB invalidation is performed
> if necessary.
>
> v2(Matthew brost)
> - xe_vm_in_fault_mode can't be enabled by Mesa, hence allow ioctl in non
> fault mode too
> - fix tlb invalidation skip for same ranges in multiple op
> - use helper for tlb invalidation
> - use xe_svm_notifier_lock/unlock helper
> - s/lockdep_assert_held/lockdep_assert_held_write
> - Add kernel-doc
>
> v3(Matthew Brost)
> - make vfunc fail safe
> - Add sanitizing input args before vfunc
>
> v4(Matthew Brost/Shuicheng)
> - Make locks interruptable
> - Error handling fixes
> - vm_put fixes
>
> v5(Matthew Brost)
> - Flush garbage collector before any locking.
> - Add check for null vma
>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Cc: Shuicheng Lin <shuicheng.lin@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/Makefile | 1 +
> drivers/gpu/drm/xe/xe_vm_madvise.c | 308 +++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm_madvise.h | 15 ++
> 3 files changed, 324 insertions(+)
> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index 8e0c3412a757..d0ea869fcd24 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -128,6 +128,7 @@ xe-y += xe_bb.o \
> xe_uc.o \
> xe_uc_fw.o \
> xe_vm.o \
> + xe_vm_madvise.o \
> xe_vram.o \
> xe_vram_freq.o \
> xe_vsec.o \
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> new file mode 100644
> index 000000000000..b861c3349b0a
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -0,0 +1,308 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#include "xe_vm_madvise.h"
> +
> +#include <linux/nospec.h>
> +#include <drm/xe_drm.h>
> +
> +#include "xe_bo.h"
> +#include "xe_pt.h"
> +#include "xe_svm.h"
> +
> +struct xe_vmas_in_madvise_range {
> + u64 addr;
> + u64 range;
> + struct xe_vma **vmas;
> + int num_vmas;
> + bool has_svm_vmas;
> + bool has_bo_vmas;
> + bool has_userptr_vmas;
> +};
> +
> +static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
> +{
> + u64 addr = madvise_range->addr;
> + u64 range = madvise_range->range;
> +
> + struct xe_vma **__vmas;
> + struct drm_gpuva *gpuva;
> + int max_vmas = 8;
> +
> + lockdep_assert_held(&vm->lock);
> +
> + madvise_range->num_vmas = 0;
> + madvise_range->vmas = kmalloc_array(max_vmas, sizeof(*madvise_range->vmas), GFP_KERNEL);
> + if (!madvise_range->vmas)
> + return -ENOMEM;
> +
> + vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
> +
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> + if (xe_vma_bo(vma))
> + madvise_range->has_bo_vmas = true;
> + else if (xe_vma_is_cpu_addr_mirror(vma))
> + madvise_range->has_svm_vmas = true;
> + else if (xe_vma_is_userptr(vma))
> + madvise_range->has_userptr_vmas = true;
> +
> + if (madvise_range->num_vmas == max_vmas) {
> + max_vmas <<= 1;
> + __vmas = krealloc(madvise_range->vmas,
> + max_vmas * sizeof(*madvise_range->vmas),
> + GFP_KERNEL);
> + if (!__vmas) {
> + kfree(madvise_range->vmas);
> + return -ENOMEM;
> + }
> + madvise_range->vmas = __vmas;
> + }
> +
> + madvise_range->vmas[madvise_range->num_vmas] = vma;
> + (madvise_range->num_vmas)++;
> + }
> +
> + if (!madvise_range->num_vmas)
> + kfree(madvise_range->vmas);
> +
> + vm_dbg(&vm->xe->drm, "madvise_range-num_vmas = %d\n", madvise_range->num_vmas);
> +
> + return 0;
> +}
> +
> +static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise *op)
> +{
> + /* Implementation pending */
> +}
> +
> +static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise *op)
> +{
> + /* Implementation pending */
> +}
> +
> +static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise *op)
> +{
> + /* Implementation pending */
> +}
> +
> +typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise *op);
> +
> +static const madvise_func madvise_funcs[] = {
> + [DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
> + [DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
> + [DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
> +};
> +
> +static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
> +{
> + struct drm_gpuva *gpuva;
> + struct xe_tile *tile;
> + u8 id, tile_mask;
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + /* Wait for pending binds */
> + if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
> + false, MAX_SCHEDULE_TIMEOUT) <= 0)
> + XE_WARN_ON(1);
> +
> + tile_mask = xe_svm_ranges_zap_ptes_in_range(vm, start, end);
> +
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> + if (xe_vma_is_cpu_addr_mirror(vma) || xe_vma_is_null(vma))
> + continue;
> +
> + for_each_tile(tile, vm->xe, id) {
> + if (xe_pt_zap_ptes(tile, vma)) {
> + tile_mask |= BIT(id);
> +
> + /*
> + * WRITE_ONCE pairs with READ_ONCE
> + * in xe_vm_has_valid_gpu_mapping()
> + */
> + WRITE_ONCE(vma->tile_invalidated,
> + vma->tile_invalidated | BIT(id));
> + }
> + }
> + }
> +
> + return tile_mask;
> +}
> +
> +static int xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
> +{
> + u8 tile_mask = xe_zap_ptes_in_madvise_range(vm, start, end);
> +
> + if (!tile_mask)
> + return 0;
> +
> + xe_device_wmb(vm->xe);
> +
> + return xe_vm_range_tilemask_tlb_invalidation(vm, start, end, tile_mask);
> +}
> +
> +static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madvise *args)
> +{
> + if (XE_IOCTL_DBG(xe, !args))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->start, SZ_4K)))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->range, SZ_4K)))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, args->range < SZ_4K))
> + return false;
> +
> + switch (args->type) {
> + case DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC:
> + if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
> + DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.pad))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, args->atomic.reserved))
> + return false;
> + break;
> + case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
> + if (XE_IOCTL_DBG(xe, args->atomic.val > DRM_XE_ATOMIC_CPU))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, args->atomic.pad))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, args->atomic.reserved))
> + return false;
> +
> + break;
> + case DRM_XE_MEM_RANGE_ATTR_PAT:
> + /*TODO: Add valid pat check */
> + break;
> + default:
> + if (XE_IOCTL_DBG(xe, 1))
> + return false;
> + }
> +
> + if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> + return false;
> +
> + return true;
> +}
> +
> +/**
> + * xe_vm_madvise_ioctl - Handle MADVISE ioctl for a VM
> + * @dev: DRM device pointer
> + * @data: Pointer to ioctl data (drm_xe_madvise*)
> + * @file: DRM file pointer
> + *
> + * Handles the MADVISE ioctl to provide memory advice for vma's within
> + * input range.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> +{
> + struct xe_device *xe = to_xe_device(dev);
> + struct xe_file *xef = to_xe_file(file);
> + struct drm_xe_madvise *args = data;
> + struct xe_vmas_in_madvise_range madvise_range = {.addr = args->start,
> + .range = args->range, };
> + struct xe_vm *vm;
> + struct drm_exec exec;
> + int err, attr_type;
> +
> + vm = xe_vm_lookup(xef, args->vm_id);
> + if (XE_IOCTL_DBG(xe, !vm))
> + return -EINVAL;
> +
> + if (!madvise_args_are_sane(vm->xe, args)) {
> + err = -EINVAL;
> + goto put_vm;
> + }
> +
> + xe_svm_flush(vm);
> +
> + err = down_write_killable(&vm->lock);
> + if (err)
> + goto put_vm;
> +
> + if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
> + err = -ENOENT;
> + goto unlock_vm;
> + }
> +
> + err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
> + if (err)
> + goto unlock_vm;
> +
> + err = get_vmas(vm, &madvise_range);
> + if (err || !madvise_range.num_vmas)
> + goto unlock_vm;
> +
> + if (madvise_range.has_bo_vmas) {
> + drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
> + drm_exec_until_all_locked(&exec) {
> + for (int i = 0; i < madvise_range.num_vmas; i++) {
> + struct xe_bo *bo = xe_vma_bo(madvise_range.vmas[i]);
> +
> + if (!bo)
> + continue;
> + err = drm_exec_lock_obj(&exec, &bo->ttm.base);
> + drm_exec_retry_on_contention(&exec);
> + if (err)
> + goto err_fini;
> + }
> + }
> + }
> +
> + if (madvise_range.has_userptr_vmas) {
> + err = down_read_interruptible(&vm->userptr.notifier_lock);
> + if (err)
> + goto err_fini;
> + }
> +
> + if (madvise_range.has_svm_vmas) {
> + err = down_read_interruptible(&vm->svm.gpusvm.notifier_lock);
> + if (err)
> + goto unlock_userptr;
> + }
> +
> + attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
> + madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args);
> +
> + err = xe_vm_invalidate_madvise_range(vm, args->start, args->start + args->range);
> +
> + if (madvise_range.has_svm_vmas)
> + xe_svm_notifier_unlock(vm);
> +
> +unlock_userptr:
> + if (madvise_range.has_userptr_vmas)
> + up_read(&vm->userptr.notifier_lock);
> +err_fini:
> + if (madvise_range.has_bo_vmas)
> + drm_exec_fini(&exec);
> + kfree(madvise_range.vmas);
> + madvise_range.vmas = NULL;
> +unlock_vm:
> + up_write(&vm->lock);
> +put_vm:
> + xe_vm_put(vm);
> + return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
> new file mode 100644
> index 000000000000..b0e1fc445f23
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#ifndef _XE_VM_MADVISE_H_
> +#define _XE_VM_MADVISE_H_
> +
> +struct drm_device;
> +struct drm_file;
> +
> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
> + struct drm_file *file);
> +
> +#endif
> --
> 2.34.1
>
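[Editor's note: get_vmas() in the patch above grows its VMA array by doubling it with krealloc() whenever it fills. The same strategy in standalone C, with realloc standing in for krealloc and ints standing in for struct xe_vma pointers; names are illustrative.]

```c
#include <assert.h>
#include <stdlib.h>

struct vec {
	int *items;
	int num, max;
};

static int vec_push(struct vec *v, int item)
{
	if (!v->items) {
		v->max = 8;	/* same initial capacity as max_vmas */
		v->items = malloc(v->max * sizeof(*v->items));
		if (!v->items)
			return -1;
	}
	if (v->num == v->max) {
		/* Double on demand; keep the old buffer if realloc fails,
		 * matching get_vmas() freeing vmas only on error. */
		int *tmp = realloc(v->items, 2 * v->max * sizeof(*v->items));

		if (!tmp)
			return -1;
		v->items = tmp;
		v->max *= 2;
	}
	v->items[v->num++] = item;
	return 0;
}
```

Doubling keeps the number of reallocations logarithmic in the number of VMAs collected, which matters because get_vmas() runs under vm->lock.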
* Re: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-08-05 3:56 ` Matthew Brost
@ 2025-08-05 5:24 ` Ghimiray, Himal Prasad
2025-08-05 10:10 ` Danilo Krummrich
0 siblings, 1 reply; 54+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-05 5:24 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel, Danilo Krummrich
On 05-08-2025 09:26, Matthew Brost wrote:
> On Wed, Jul 30, 2025 at 06:30:26PM +0530, Himal Prasad Ghimiray wrote:
>> From: Boris Brezillon <boris.brezillon@collabora.com>
>>
>> We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create](),
>> so, before we do that, let's pass arguments through a struct instead
>> of changing each call site every time a new optional argument is added.
>>
>> v5
>> - Use drm_gpuva_op_map—same as drm_gpuvm_map_req (Danilo)
>> - Rebase changes for drm_gpuvm_sm_map_exec_lock()
>> - Fix kernel-docs
>>
>> Cc: Danilo Krummrich <dakr@redhat.com>
>> Cc: Boris Brezillon <bbrezillon@kernel.org>
>> Cc: Caterina Shablia <caterina.shablia@collabora.com>
>> Cc: Rob Clark <robin.clark@oss.qualcomm.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: <dri-devel@lists.freedesktop.org>
>>
>> Acked-by: Danilo Krummrich <dakr@kernel.org> (#v4)
>> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
>> Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/drm_gpuvm.c | 106 ++++++++++---------------
>> drivers/gpu/drm/imagination/pvr_vm.c | 15 ++--
>> drivers/gpu/drm/msm/msm_gem_vma.c | 33 ++++++--
>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 ++-
>> drivers/gpu/drm/panthor/panthor_mmu.c | 13 ++-
>> drivers/gpu/drm/xe/xe_vm.c | 13 ++-
>> include/drm/drm_gpuvm.h | 10 +--
>> 7 files changed, 112 insertions(+), 89 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>> index bbc7fecb6f4a..f04d80a3a63b 100644
>> --- a/drivers/gpu/drm/drm_gpuvm.c
>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>> @@ -486,13 +486,18 @@
>> * u64 addr, u64 range,
>> * struct drm_gem_object *obj, u64 offset)
>> * {
>> + * struct drm_gpuva_op_map op_map = {
>> + * .va.addr = addr,
>> + * .va.range = range,
>> + * .gem.obj = obj,
>> + * .gem.offset = offset,
>> + * };
>> * struct drm_gpuva_ops *ops;
>> * struct drm_gpuva_op *op
>> * struct drm_gpuvm_bo *vm_bo;
>> *
>> * driver_lock_va_space();
>> - * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
>> - * obj, offset);
>> + * ops = drm_gpuvm_sm_map_ops_create(gpuvm, &op_map);
>> * if (IS_ERR(ops))
>> * return PTR_ERR(ops);
>> *
>> @@ -2054,16 +2059,15 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
>>
>> static int
>> op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
>> - u64 addr, u64 range,
>> - struct drm_gem_object *obj, u64 offset)
>> + const struct drm_gpuva_op_map *req)
>> {
>> struct drm_gpuva_op op = {};
>>
>> op.op = DRM_GPUVA_OP_MAP;
>> - op.map.va.addr = addr;
>> - op.map.va.range = range;
>> - op.map.gem.obj = obj;
>> - op.map.gem.offset = offset;
>> + op.map.va.addr = req->va.addr;
>> + op.map.va.range = req->va.range;
>> + op.map.gem.obj = req->gem.obj;
>> + op.map.gem.offset = req->gem.offset;
>>
>> return fn->sm_step_map(&op, priv);
>> }
>> @@ -2102,17 +2106,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
>> static int
>> __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> const struct drm_gpuvm_ops *ops, void *priv,
>> - u64 req_addr, u64 req_range,
>> - struct drm_gem_object *req_obj, u64 req_offset)
>> + const struct drm_gpuva_op_map *req)
>> {
>> struct drm_gpuva *va, *next;
>> - u64 req_end = req_addr + req_range;
>> + u64 req_end = req->va.addr + req->va.range;
>> int ret;
>>
>> - if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
>> + if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->va.addr, req->va.range)))
>> return -EINVAL;
>>
>> - drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
>> + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req->va.addr, req_end) {
>> struct drm_gem_object *obj = va->gem.obj;
>> u64 offset = va->gem.offset;
>> u64 addr = va->va.addr;
>> @@ -2120,9 +2123,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> u64 end = addr + range;
>> bool merge = !!va->gem.obj;
>>
>> - if (addr == req_addr) {
>> - merge &= obj == req_obj &&
>> - offset == req_offset;
>> + if (addr == req->va.addr) {
>> + merge &= obj == req->gem.obj &&
>> + offset == req->gem.offset;
>>
>> if (end == req_end) {
>> ret = op_unmap_cb(ops, priv, va, merge);
>> @@ -2141,9 +2144,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> if (end > req_end) {
>> struct drm_gpuva_op_map n = {
>> .va.addr = req_end,
>> - .va.range = range - req_range,
>> + .va.range = range - req->va.range,
>> .gem.obj = obj,
>> - .gem.offset = offset + req_range,
>> + .gem.offset = offset + req->va.range,
>> };
>> struct drm_gpuva_op_unmap u = {
>> .va = va,
>> @@ -2155,8 +2158,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> return ret;
>> break;
>> }
>> - } else if (addr < req_addr) {
>> - u64 ls_range = req_addr - addr;
>> + } else if (addr < req->va.addr) {
>> + u64 ls_range = req->va.addr - addr;
>> struct drm_gpuva_op_map p = {
>> .va.addr = addr,
>> .va.range = ls_range,
>> @@ -2165,8 +2168,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> };
>> struct drm_gpuva_op_unmap u = { .va = va };
>>
>> - merge &= obj == req_obj &&
>> - offset + ls_range == req_offset;
>> + merge &= obj == req->gem.obj &&
>> + offset + ls_range == req->gem.offset;
>> u.keep = merge;
>>
>> if (end == req_end) {
>> @@ -2189,7 +2192,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> .va.range = end - req_end,
>> .gem.obj = obj,
>> .gem.offset = offset + ls_range +
>> - req_range,
>> + req->va.range,
>> };
>>
>> ret = op_remap_cb(ops, priv, &p, &n, &u);
>> @@ -2197,10 +2200,10 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> return ret;
>> break;
>> }
>> - } else if (addr > req_addr) {
>> - merge &= obj == req_obj &&
>> - offset == req_offset +
>> - (addr - req_addr);
>> + } else if (addr > req->va.addr) {
>> + merge &= obj == req->gem.obj &&
>> + offset == req->gem.offset +
>> + (addr - req->va.addr);
>>
>> if (end == req_end) {
>> ret = op_unmap_cb(ops, priv, va, merge);
>> @@ -2236,9 +2239,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> }
>> }
>>
>> - return op_map_cb(ops, priv,
>> - req_addr, req_range,
>> - req_obj, req_offset);
>> + return op_map_cb(ops, priv, req);
>> }
>>
>> static int
>> @@ -2303,10 +2304,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
>> * drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
>> * @gpuvm: the &drm_gpuvm representing the GPU VA space
>> * @priv: pointer to a driver private data structure
>> - * @req_addr: the start address of the new mapping
>> - * @req_range: the range of the new mapping
>> - * @req_obj: the &drm_gem_object to map
>> - * @req_offset: the offset within the &drm_gem_object
>> + * @req: ptr to drm_gpuva_op_map struct
>> *
>> * This function iterates the given range of the GPU VA space. It utilizes the
>> * &drm_gpuvm_ops to call back into the driver providing the split and merge
>> @@ -2333,8 +2331,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
>> */
>> int
>> drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
>> - u64 req_addr, u64 req_range,
>> - struct drm_gem_object *req_obj, u64 req_offset)
>> + const struct drm_gpuva_op_map *req)
>> {
>> const struct drm_gpuvm_ops *ops = gpuvm->ops;
>>
>> @@ -2343,9 +2340,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
>> ops->sm_step_unmap)))
>> return -EINVAL;
>>
>> - return __drm_gpuvm_sm_map(gpuvm, ops, priv,
>> - req_addr, req_range,
>> - req_obj, req_offset);
>> + return __drm_gpuvm_sm_map(gpuvm, ops, priv, req);
>> }
>> EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
>>
>> @@ -2421,10 +2416,7 @@ static const struct drm_gpuvm_ops lock_ops = {
>> * @gpuvm: the &drm_gpuvm representing the GPU VA space
>> * @exec: the &drm_exec locking context
>> * @num_fences: for newly mapped objects, the # of fences to reserve
>> - * @req_addr: the start address of the range to unmap
>> - * @req_range: the range of the mappings to unmap
>> - * @req_obj: the &drm_gem_object to map
>> - * @req_offset: the offset within the &drm_gem_object
>> + * @op: ptr to drm_gpuva_op_map struct
>
> s/@op/@req/ - Kernel test robot.
>
> Also I believe Danilo's suggestion here was to define drm_gpuvm_map_req
> as the argument and then embed drm_gpuva_op_map within
> drm_gpuvm_map_req. So in patch [1], flags would be added to
> drm_gpuvm_map_req rather than drm_gpuva_op_map.
>
> Matt
>
> [1] https://patchwork.freedesktop.org/patch/666211/?series=149550&rev=5
Hi Matt,
Thanks for the review. Initially, I considered using a drm_gpuvm_map_req
struct instead of passing drm_gpuva_op_map directly to the gpuvm layer,
allowing it to handle split/merge decisions independently.
However, the upcoming patch [1] relies on this flag to determine
driver-side behavior, so drm_gpuva_op_map and drm_gpuvm_map_req might
end up identical. Based on that, and on Danilo's feedback on this patch
[2], I thought it would be better to keep a single op_map struct with
the flag included.
Boris, could you please confirm whether the flag will be useful on the
driver side [1]?
[1] https://patchwork.freedesktop.org/patch/662832/?series=151264&rev=2
[2] https://patchwork.freedesktop.org/patch/662819/?series=151264&rev=2
>
>> *
>> * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
>> * remapped, and locks+prepares (drm_exec_prepare_object()) objects that
>> @@ -2442,12 +2434,10 @@ static const struct drm_gpuvm_ops lock_ops = {
>> * for_each_vm_bind_operation {
>> * switch (op->op) {
>> * case DRIVER_OP_UNMAP:
>> - * ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
>> + * ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->va.addr, op->va.range);
>> * break;
>> * case DRIVER_OP_MAP:
>> - * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
>> - * op->addr, op->range,
>> - * obj, op->obj_offset);
>> + * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences, op);
>> * break;
>> * }
>> *
>> @@ -2478,18 +2468,16 @@ static const struct drm_gpuvm_ops lock_ops = {
>> int
>> drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
>> struct drm_exec *exec, unsigned int num_fences,
>> - u64 req_addr, u64 req_range,
>> - struct drm_gem_object *req_obj, u64 req_offset)
>> + struct drm_gpuva_op_map *req)
>> {
>> - if (req_obj) {
>> - int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
>> + if (req->gem.obj) {
>> + int ret = drm_exec_prepare_obj(exec, req->gem.obj, num_fences);
>> if (ret)
>> return ret;
>> }
>>
>> return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
>> - req_addr, req_range,
>> - req_obj, req_offset);
>> + req);
>>
>> }
>> EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
>> @@ -2611,10 +2599,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
>> /**
>> * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
>> * @gpuvm: the &drm_gpuvm representing the GPU VA space
>> - * @req_addr: the start address of the new mapping
>> - * @req_range: the range of the new mapping
>> - * @req_obj: the &drm_gem_object to map
>> - * @req_offset: the offset within the &drm_gem_object
>> + * @req: ptr to drm_gpuva_op_map struct
>> *
>> * This function creates a list of operations to perform splitting and merging
>> * of existent mapping(s) with the newly requested one.
>> @@ -2642,8 +2627,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
>> */
>> struct drm_gpuva_ops *
>> drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
>> - u64 req_addr, u64 req_range,
>> - struct drm_gem_object *req_obj, u64 req_offset)
>> + const struct drm_gpuva_op_map *req)
>> {
>> struct drm_gpuva_ops *ops;
>> struct {
>> @@ -2661,9 +2645,7 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
>> args.vm = gpuvm;
>> args.ops = ops;
>>
>> - ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
>> - req_addr, req_range,
>> - req_obj, req_offset);
>> + ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args, req);
>> if (ret)
>> goto err_free_ops;
>>
>> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
>> index 2896fa7501b1..57116709de81 100644
>> --- a/drivers/gpu/drm/imagination/pvr_vm.c
>> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
>> @@ -185,12 +185,17 @@ struct pvr_vm_bind_op {
>> static int pvr_vm_bind_op_exec(struct pvr_vm_bind_op *bind_op)
>> {
>> switch (bind_op->type) {
>> - case PVR_VM_BIND_TYPE_MAP:
>> + case PVR_VM_BIND_TYPE_MAP: {
>> + const struct drm_gpuva_op_map map_req = {
>> + .va.addr = bind_op->device_addr,
>> + .va.range = bind_op->size,
>> + .gem.obj = gem_from_pvr_gem(bind_op->pvr_obj),
>> + .gem.offset = bind_op->offset,
>> + };
>> +
>> return drm_gpuvm_sm_map(&bind_op->vm_ctx->gpuvm_mgr,
>> - bind_op, bind_op->device_addr,
>> - bind_op->size,
>> - gem_from_pvr_gem(bind_op->pvr_obj),
>> - bind_op->offset);
>> + bind_op, &map_req);
>> + }
>>
>> case PVR_VM_BIND_TYPE_UNMAP:
>> return drm_gpuvm_sm_unmap(&bind_op->vm_ctx->gpuvm_mgr,
>> diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
>> index 3cd8562a5109..59a9b41bc967 100644
>> --- a/drivers/gpu/drm/msm/msm_gem_vma.c
>> +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
>> @@ -371,6 +371,12 @@ struct drm_gpuva *
>> msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
>> u64 offset, u64 range_start, u64 range_end)
>> {
>> + struct drm_gpuva_op_map op_map = {
>> + .va.addr = range_start,
>> + .va.range = range_end - range_start,
>> + .gem.obj = obj,
>> + .gem.offset = offset,
>> + };
>> struct msm_gem_vm *vm = to_msm_vm(gpuvm);
>> struct drm_gpuvm_bo *vm_bo;
>> struct msm_gem_vma *vma;
>> @@ -399,7 +405,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
>> if (obj)
>> GEM_WARN_ON((range_end - range_start) > obj->size);
>>
>> - drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
>> + drm_gpuva_init_from_op(&vma->base, &op_map);
>> vma->mapped = false;
>>
>> ret = drm_gpuva_insert(&vm->base, &vma->base);
>> @@ -1172,10 +1178,17 @@ vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
>> break;
>> case MSM_VM_BIND_OP_MAP:
>> case MSM_VM_BIND_OP_MAP_NULL:
>> - ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1,
>> - op->iova, op->range,
>> - op->obj, op->obj_offset);
>> + {
>> + struct drm_gpuva_op_map map_req = {
>> + .va.addr = op->iova,
>> + .va.range = op->range,
>> + .gem.obj = op->obj,
>> + .gem.offset = op->obj_offset,
>> + };
>> +
>> + ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req);
>> break;
>> + }
>> default:
>> /*
>> * lookup_op() should have already thrown an error for
>> @@ -1283,9 +1296,17 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
>> arg.flags |= MSM_VMA_DUMP;
>> fallthrough;
>> case MSM_VM_BIND_OP_MAP_NULL:
>> - ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova,
>> - op->range, op->obj, op->obj_offset);
>> + {
>> + struct drm_gpuva_op_map map_req = {
>> + .va.addr = op->iova,
>> + .va.range = op->range,
>> + .gem.obj = op->obj,
>> + .gem.offset = op->obj_offset,
>> + };
>> +
>> + ret = drm_gpuvm_sm_map(job->vm, &arg, &map_req);
>> break;
>> + }
>> default:
>> /*
>> * lookup_op() should have already thrown an error for
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> index ddfc46bc1b3e..b74054b0a476 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> @@ -1276,6 +1276,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
>> break;
>> case OP_MAP: {
>> struct nouveau_uvma_region *reg;
>> + struct drm_gpuva_op_map map_req = {
>> + .va.addr = op->va.addr,
>> + .va.range = op->va.range,
>> + .gem.obj = op->gem.obj,
>> + .gem.offset = op->gem.offset,
>> + };
>>
>> reg = nouveau_uvma_region_find_first(uvmm,
>> op->va.addr,
>> @@ -1301,10 +1307,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
>> }
>>
>> op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
>> - op->va.addr,
>> - op->va.range,
>> - op->gem.obj,
>> - op->gem.offset);
>> + &map_req);
>> if (IS_ERR(op->ops)) {
>> ret = PTR_ERR(op->ops);
>> goto unwind_continue;
>> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
>> index 4140f697ba5a..5fd4245a57b9 100644
>> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
>> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
>> @@ -2169,15 +2169,22 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
>> mutex_lock(&vm->op_lock);
>> vm->op_ctx = op;
>> switch (op_type) {
>> - case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
>> + case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
>> + const struct drm_gpuva_op_map map_req = {
>> + .va.addr = op->va.addr,
>> + .va.range = op->va.range,
>> + .gem.obj = op->map.vm_bo->obj,
>> + .gem.offset = op->map.bo_offset,
>> + };
>> +
>> if (vm->unusable) {
>> ret = -EINVAL;
>> break;
>> }
>>
>> - ret = drm_gpuvm_sm_map(&vm->base, vm, op->va.addr, op->va.range,
>> - op->map.vm_bo->obj, op->map.bo_offset);
>> + ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
>> break;
>> + }
>>
>> case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:
>> ret = drm_gpuvm_sm_unmap(&vm->base, vm, op->va.addr, op->va.range);
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 432ea325677d..4b3e78745363 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2316,10 +2316,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>>
>> switch (operation) {
>> case DRM_XE_VM_BIND_OP_MAP:
>> - case DRM_XE_VM_BIND_OP_MAP_USERPTR:
>> - ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
>> - obj, bo_offset_or_userptr);
>> + case DRM_XE_VM_BIND_OP_MAP_USERPTR: {
>> + struct drm_gpuva_op_map map_req = {
>> + .va.addr = addr,
>> + .va.range = range,
>> + .gem.obj = obj,
>> + .gem.offset = bo_offset_or_userptr,
>> + };
>> +
>> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
>> break;
>> + }
>> case DRM_XE_VM_BIND_OP_UNMAP:
>> ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
>> break;
>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>> index 274532facfd6..892ffe75a62f 100644
>> --- a/include/drm/drm_gpuvm.h
>> +++ b/include/drm/drm_gpuvm.h
>> @@ -1060,8 +1060,8 @@ struct drm_gpuva_ops {
>>
>> struct drm_gpuva_ops *
>> drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
>> - u64 addr, u64 range,
>> - struct drm_gem_object *obj, u64 offset);
>> + const struct drm_gpuva_op_map *req);
>> +
>> struct drm_gpuva_ops *
>> drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
>> u64 addr, u64 range);
>> @@ -1205,16 +1205,14 @@ struct drm_gpuvm_ops {
>> };
>>
>> int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
>> - u64 addr, u64 range,
>> - struct drm_gem_object *obj, u64 offset);
>> + const struct drm_gpuva_op_map *req);
>>
>> int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
>> u64 addr, u64 range);
>>
>> int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
>> struct drm_exec *exec, unsigned int num_fences,
>> - u64 req_addr, u64 req_range,
>> - struct drm_gem_object *obj, u64 offset);
>> + struct drm_gpuva_op_map *req);
>>
>> int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
>> u64 req_addr, u64 req_range);
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init()
2025-07-30 13:00 ` [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init() Himal Prasad Ghimiray
2025-08-05 3:45 ` Matthew Brost
@ 2025-08-05 9:35 ` Danilo Krummrich
1 sibling, 0 replies; 54+ messages in thread
From: Danilo Krummrich @ 2025-08-05 9:35 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Caterina Shablia
On 7/30/25 3:00 PM, Himal Prasad Ghimiray wrote:
> From: Boris Brezillon <boris.brezillon@collabora.com>
>
> drm_gpuva_init() only has one internal user, and given we are about to
> add new optional fields, it only adds maintenance burden for no real
> benefit, so let's kill the thing now.
>
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
> Acked-by: Danilo Krummrich <dakr@kernel.org>
This also needs your Signed-off-by when you handle the patch.
* Re: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-07-30 13:00 ` [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
2025-07-30 23:23 ` kernel test robot
2025-08-05 3:56 ` Matthew Brost
@ 2025-08-05 9:40 ` Danilo Krummrich
2025-08-05 11:02 ` Ghimiray, Himal Prasad
2 siblings, 1 reply; 54+ messages in thread
From: Danilo Krummrich @ 2025-08-05 9:40 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Boris Brezillon, Caterina Shablia, Rob Clark, dri-devel
This series is a bit of a mess on my end; can you please Cc me on all GPUVM
patches with my kernel.org address? Currently, some patches go to my Red Hat
address, some to kernel.org, and some to both.
* Re: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-08-05 5:24 ` Ghimiray, Himal Prasad
@ 2025-08-05 10:10 ` Danilo Krummrich
2025-08-05 11:04 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 54+ messages in thread
From: Danilo Krummrich @ 2025-08-05 10:10 UTC (permalink / raw)
To: Ghimiray, Himal Prasad
Cc: Matthew Brost, intel-xe, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel
On Tue Aug 5, 2025 at 7:24 AM CEST, Himal Prasad Ghimiray wrote:
> On 05-08-2025 09:26, Matthew Brost wrote:
>> Also I believe Danilo's suggestion here was to define drm_gpuvm_map_req
>> as the argument and then embed drm_gpuva_op_map within
>> drm_gpuvm_map_req. So in patch [1], flags would be added to
>> drm_gpuvm_map_req rather than drm_gpuva_op_map.
>>
>> Matt
>>
>> [1] https://patchwork.freedesktop.org/patch/666211/?series=149550&rev=5
>
> Hi Matt,
>
> Thanks for the review. Initially, I considered using drm_gpuvm_map_req
> struct instead of passing drm_gpuva_op_map directly to the gpuvm layer,
> allowing it to handle split/merge decisions independently.
Generally, we should only have the flags field on struct drm_gpuva_op_map if we
need to let GPUVM pass flags for (re)map operations to drivers.
> However, the upcoming patch [1] relies on this flag to determine
> driver-side behavior. So at the end drm_gpuva_op_map and
> drm_gpuvm_map_req might end up identical. Based on that, and Danilo's
> feedback on this patch [2], I thought it would be better to keep a
> single op_map struct with the flag included.
Let's leave this to the upcoming patches; we can always adjust. For now, let's
go with what Matt summarized above, please.
> Boris, could you please confirm if the flag will be useful on the driver
> side [1]?
>
> [1] https://patchwork.freedesktop.org/patch/662832/?series=151264&rev=2
> [2] https://patchwork.freedesktop.org/patch/662819/?series=151264&rev=2
* Re: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-08-05 9:40 ` Danilo Krummrich
@ 2025-08-05 11:02 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 54+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-05 11:02 UTC (permalink / raw)
To: Danilo Krummrich
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Boris Brezillon, Caterina Shablia, Rob Clark, dri-devel
On 05-08-2025 15:10, Danilo Krummrich wrote:
> This series is a bit of a mess on my end, can you please Cc me on all GPUVM
> patches with my kernel.org address? Currently, some patches go to my Red
> Hat one, some to kernel.org and some to both.
Sure.
* Re: [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct
2025-08-05 10:10 ` Danilo Krummrich
@ 2025-08-05 11:04 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 54+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-05 11:04 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Matthew Brost, intel-xe, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel
On 05-08-2025 15:40, Danilo Krummrich wrote:
> On Tue Aug 5, 2025 at 7:24 AM CEST, Himal Prasad Ghimiray wrote:
>> On 05-08-2025 09:26, Matthew Brost wrote:
>>> Also I believe Danilo's suggestion here was to define drm_gpuvm_map_req
>>> as the argument and then embed drm_gpuva_op_map within
>>> drm_gpuvm_map_req. So in patch [1], flags would be added to
>>> drm_gpuvm_map_req rather than drm_gpuva_op_map.
>>>
>>> Matt
>>>
>>> [1] https://patchwork.freedesktop.org/patch/666211/?series=149550&rev=5
>>
>> Hi Matt,
>>
>> Thanks for the review. Initially, I considered using drm_gpuvm_map_req
>> struct instead of passing drm_gpuva_op_map directly to the gpuvm layer,
>> allowing it to handle split/merge decisions independently.
>
> Generally, we should only have the flags field on struct drm_gpuva_op_map if we
> need to let GPUVM pass flags for (re)map operations to drivers.
>
>> However, the upcoming patch [1] relies on this flag to determine
>> driver-side behavior. So at the end drm_gpuva_op_map and
>> drm_gpuvm_map_req might end up identical. Based on that, and Danilo's
>> feedback on this patch [2], I thought it would be better to keep a
>> single op_map struct with the flag included.
>
> Let's leave this to the upcoming patches, we can always adjust. For now, let's
> go with what Matt summarized above please.
Sure, thanks. Will update the next version to use drm_gpuvm_map_req.
>
>> Boris, could you please confirm if the flag will be useful on the driver
>> side [1]?
>>
>> [1] https://patchwork.freedesktop.org/patch/662832/?series=151264&rev=2
>> [2] https://patchwork.freedesktop.org/patch/662819/?series=151264&rev=2
* Re: [PATCH v5 03/25] drm/gpuvm: Support flags in drm_gpuva_op_map
2025-08-05 3:58 ` Matthew Brost
@ 2025-08-05 11:05 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 54+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-05 11:05 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, Thomas Hellström, Danilo Krummrich,
Boris Brezillon, Caterina Shablia
On 05-08-2025 09:28, Matthew Brost wrote:
> On Wed, Jul 30, 2025 at 06:30:28PM +0530, Himal Prasad Ghimiray wrote:
>> This change adds support for passing flags to drm_gpuvm_sm_map() and
>> sm_map_ops_create(), enabling future extensions that affect split/merge
>> logic in drm_gpuvm.
>>
>> Cc: Danilo Krummrich <dakr@redhat.com>
>> Cc: Boris Brezillon <bbrezillon@kernel.org>
>> Cc: Caterina Shablia <caterina.shablia@collabora.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> include/drm/drm_gpuvm.h | 13 +++++++++++++
>> 1 file changed, 13 insertions(+)
>>
>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>> index 2d24d000f2ee..75c616fdc119 100644
>> --- a/include/drm/drm_gpuvm.h
>> +++ b/include/drm/drm_gpuvm.h
>> @@ -810,6 +810,12 @@ enum drm_gpuva_op_type {
>> DRM_GPUVA_OP_DRIVER,
>> };
>>
>> +/** DOC: flags for struct drm_gpuva_op_map
>> + * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE DEFAULT split and merge,
>> + * It cannot be combined with other flags.
>> + */
>> +#define DRM_GPUVM_SM_MAP_OPS_FLAG_NONE 0
>> +
>> /**
>> * struct drm_gpuva_op_map - GPU VA map operation
>> *
>> @@ -847,6 +853,13 @@ struct drm_gpuva_op_map {
>> */
>> struct drm_gem_object *obj;
>> } gem;
>> +
>> + /**
>> + * @flags: Bitmask of DRM_GPUVM_SM_MAP_* flags.
>> + * Use DRM_GPUVM_SM_MAP_OPS_FLAG_NONE (0) for default split merge.
>> + * It cannot be combined with other flags.
>> + */
>> + u32 flags;
>
> See my comment here [1], I think the flags should be in
> drm_gpuvm_map_req rather than drm_gpuva_op_map as the flags are only
> used gpuvm side on op creation, not driver side when consuming
> drm_gpuva_op_map.
Will update it in the next version. Thanks.
>
> Matt
>
> [1] https://patchwork.freedesktop.org/patch/666205/?series=149550&rev=5#comment_1222150
>
>> };
>>
>> /**
>> --
>> 2.34.1
>>
* Re: [PATCH v5 04/25] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-07-30 13:00 ` [PATCH v5 04/25] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
@ 2025-08-05 19:24 ` Matthew Brost
0 siblings, 0 replies; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 19:24 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Thomas Hellström, Danilo Krummrich,
Boris Brezillon, dri-devel
On Wed, Jul 30, 2025 at 06:30:29PM +0530, Himal Prasad Ghimiray wrote:
> - DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
> drm_gpuvm_sm_map_ops_create to iterate over GPUVMA's in the
> user-provided range and split the existing non-GEM object VMA if the
> start or end of the input range lies within it. The operations can
> create up to 2 REMAPS and 2 MAPs. The purpose of this operation is to be
> used by the Xe driver to assign attributes to GPUVMA's within the
> user-defined range. Unlike drm_gpuvm_sm_map_ops_flags in default mode,
> the operation with this flag will never have UNMAPs and
> merges, and can be without any final operations.
>
> v2
> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
> ops_create (Danilo)
> - Add doc (Danilo)
>
> v3
> - Fix doc
> - Fix unmapping check
>
> v4
> - Fix mapping for non madvise ops
>
> v5
> - Fix mapping (Matthew Brost)
> - Rebase on top of struct changes
>
> Cc: Danilo Krummrich <dakr@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
I think this patch looks good to me, but will need a rebase based on
discussions in patch #1 of this series.
Going to hold off on the RB until the next rev.
Matt
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Himal Prasad Ghimiray<himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 87 +++++++++++++++++++++++++++++++------
> include/drm/drm_gpuvm.h | 11 ++++-
> 2 files changed, 83 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index f04d80a3a63b..2aeae8c2296f 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -2110,6 +2110,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> {
> struct drm_gpuva *va, *next;
> u64 req_end = req->va.addr + req->va.range;
> + bool is_madvise_ops = (req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
> + bool needs_map = !is_madvise_ops;
> int ret;
>
> if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->va.addr, req->va.range)))
> @@ -2122,26 +2124,35 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> u64 range = va->va.range;
> u64 end = addr + range;
> bool merge = !!va->gem.obj;
> + bool skip_madvise_ops = is_madvise_ops && merge;
>
> + needs_map = !is_madvise_ops;
> if (addr == req->va.addr) {
> merge &= obj == req->gem.obj &&
> offset == req->gem.offset;
>
> if (end == req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> break;
> }
>
> if (end < req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> continue;
> }
>
> if (end > req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> .va.range = range - req->va.range,
> @@ -2156,6 +2167,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, NULL, &n, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops)
> + needs_map = true;
> break;
> }
> } else if (addr < req->va.addr) {
> @@ -2173,20 +2187,45 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> u.keep = merge;
>
> if (end == req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> ret = op_remap_cb(ops, priv, &p, NULL, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops)
> + needs_map = true;
> +
> break;
> }
>
> if (end < req_end) {
> + if (skip_madvise_ops)
> + continue;
> +
> ret = op_remap_cb(ops, priv, &p, NULL, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops) {
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = req->va.addr,
> + .va.range = end - req->va.addr,
> + };
> +
> + ret = op_map_cb(ops, priv, &map_req);
> + if (ret)
> + return ret;
> + }
> +
> continue;
> }
>
> if (end > req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> .va.range = end - req_end,
> @@ -2198,6 +2237,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, &p, &n, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops)
> + needs_map = true;
> break;
> }
> } else if (addr > req->va.addr) {
> @@ -2206,20 +2248,29 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> (addr - req->va.addr);
>
> if (end == req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> +
> break;
> }
>
> if (end < req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> +
> continue;
> }
>
> if (end > req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> .va.range = end - req_end,
> @@ -2234,12 +2285,20 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, NULL, &n, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops) {
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = addr,
> + .va.range = req_end - addr,
> + };
> +
> + return op_map_cb(ops, priv, &map_req);
> + }
> break;
> }
> }
> }
> -
> - return op_map_cb(ops, priv, req);
> + return needs_map ? op_map_cb(ops, priv, req) : 0;
> }
>
> static int
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 75c616fdc119..a8e9f70501ef 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -811,10 +811,19 @@ enum drm_gpuva_op_type {
> };
>
> /** DOC: flags for struct drm_gpuva_op_map
> - * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE DEFAULT split and merge,
> + * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT split and merge,
> * It cannot be combined with other flags.
> + *
> + * %DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
> + * drm_gpuvm_sm_map_ops_create to iterate over GPUVMA's in the user-provided
> + * range and split the existing non-GEM object VMA if the start or end of
> + * the input range lies within it. The operations can create up to 2 REMAPS
> + * and 2 MAPs. Unlike DRM_GPUVM_SM_MAP_OPS_FLAG_NONE flag, the operation with
> + * this flag will never have UNMAPs and merges, and can be without any final
> + * operations.
> */
> #define DRM_GPUVM_SM_MAP_OPS_FLAG_NONE 0
> +#define DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE BIT(0)
>
> /**
> * struct drm_gpuva_op_map - GPU VA map operation
> --
> 2.34.1
>
* Re: [PATCH v5 25/25] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
2025-07-30 13:00 ` [PATCH v5 25/25] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes Himal Prasad Ghimiray
@ 2025-08-05 19:29 ` Matthew Brost
0 siblings, 0 replies; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 19:29 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström, Shuicheng Lin
On Wed, Jul 30, 2025 at 06:30:50PM +0530, Himal Prasad Ghimiray wrote:
> Introduce the DRM_IOCTL_XE_VM_QUERY_MEMORY_RANGE_ATTRS ioctl to allow
> userspace to query memory attributes of VMAs within a user specified
> virtual address range.
>
> Userspace first calls the ioctl with num_mem_ranges = 0,
> sizeof_mem_ranges_attr = 0 and vector_of_vma_mem_attr = NULL to retrieve
> the number of memory ranges (vmas) and size of each memory range attribute.
> Then, it allocates a buffer of that size and calls the ioctl again to fill
> the buffer with memory range attributes.
>
> This two-step interface allows userspace to first query the required
> buffer size, then retrieve detailed attributes efficiently.
>
> v2 (Matthew Brost)
> - Use same ioctl to overload functionality
>
> v3
> - Add kernel-doc
>
> v4
> - Make uapi future proof by passing struct size (Matthew Brost)
> - make lock interruptible (Matthew Brost)
> - set reserved bits to zero (Matthew Brost)
> - s/__copy_to_user/copy_to_user (Matthew Brost)
> - Avoid using VMA term in uapi (Thomas)
> - xe_vm_put(vm) is missing (Shuicheng)
>
> v5
> - Nits
> - Fix kernel-doc
>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Cc: Shuicheng Lin <shuicheng.lin@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_device.c | 2 +
> drivers/gpu/drm/xe/xe_vm.c | 102 ++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 2 +-
> include/uapi/drm/xe_drm.h | 139 +++++++++++++++++++++++++++++++++
> 4 files changed, 244 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 80a77488381a..1e4334f8bdf4 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -203,6 +203,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
> DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
> + DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS, xe_vm_query_vmas_attrs_ioctl,
> + DRM_RENDER_ALLOW),
> };
>
> static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index e77c04f92d0b..a3ca3041e812 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2171,6 +2171,108 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
> return err;
> }
>
> +static int xe_vm_query_vmas(struct xe_vm *vm, u64 start, u64 end)
> +{
> + struct drm_gpuva *gpuva;
> + u32 num_vmas = 0;
> +
> + lockdep_assert_held(&vm->lock);
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end)
> + num_vmas++;
> +
> + return num_vmas;
> +}
> +
> +static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
> + u64 end, struct drm_xe_mem_range_attr *attrs)
> +{
> + struct drm_gpuva *gpuva;
> + int i = 0;
> +
> + lockdep_assert_held(&vm->lock);
> +
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> + if (i == *num_vmas)
> + return -ENOSPC;
> +
> + attrs[i].start = xe_vma_start(vma);
> + attrs[i].end = xe_vma_end(vma);
> + attrs[i].atomic.val = vma->attr.atomic_access;
> + attrs[i].pat_index.val = vma->attr.pat_index;
> + attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
> + attrs[i].preferred_mem_loc.migration_policy =
> + vma->attr.preferred_loc.migration_policy;
> +
> + i++;
> + }
> +
> + *num_vmas = i;
> + return 0;
> +}
> +
> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> +{
> + struct xe_device *xe = to_xe_device(dev);
> + struct xe_file *xef = to_xe_file(file);
> + struct drm_xe_mem_range_attr *mem_attrs;
> + struct drm_xe_vm_query_mem_range_attr *args = data;
> + u64 __user *attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
> + struct xe_vm *vm;
> + int err = 0;
> +
> + if (XE_IOCTL_DBG(xe,
> + ((args->num_mem_ranges == 0 &&
> + (attrs_user || args->sizeof_mem_range_attr != 0)) ||
> + (args->num_mem_ranges > 0 &&
> + (!attrs_user ||
> + args->sizeof_mem_range_attr !=
> + sizeof(struct drm_xe_mem_range_attr))))))
> + return -EINVAL;
> +
> + vm = xe_vm_lookup(xef, args->vm_id);
> + if (XE_IOCTL_DBG(xe, !vm))
> + return -EINVAL;
> +
> + err = down_read_interruptible(&vm->lock);
> + if (err)
> + goto put_vm;
> +
> + attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
> +
> + if (args->num_mem_ranges == 0 && !attrs_user) {
> + args->num_mem_ranges = xe_vm_query_vmas(vm, args->start, args->start + args->range);
> + args->sizeof_mem_range_attr = sizeof(struct drm_xe_mem_range_attr);
> + goto unlock_vm;
> + }
> +
> + mem_attrs = kvmalloc_array(args->num_mem_ranges, args->sizeof_mem_range_attr,
> + GFP_KERNEL | __GFP_ACCOUNT |
> + __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
> + if (!mem_attrs) {
> + err = args->num_mem_ranges > 1 ? -ENOBUFS : -ENOMEM;
> + goto unlock_vm;
> + }
> +
> + memset(mem_attrs, 0, args->num_mem_ranges * args->sizeof_mem_range_attr);
> + err = get_mem_attrs(vm, &args->num_mem_ranges, args->start,
> + args->start + args->range, mem_attrs);
> + if (err)
> + goto free_mem_attrs;
> +
> + err = copy_to_user(attrs_user, mem_attrs,
> + args->sizeof_mem_range_attr * args->num_mem_ranges);
> +
> +free_mem_attrs:
> + kvfree(mem_attrs);
> +unlock_vm:
> + up_read(&vm->lock);
> +put_vm:
> + xe_vm_put(vm);
> + return err;
> +}
> +
> static bool vma_matches(struct xe_vma *vma, u64 page_addr)
> {
> if (page_addr > xe_vma_end(vma) - 1 ||
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 6538cddf158b..3953b3ee2955 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -199,7 +199,7 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
> int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
> -
> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
> void xe_vm_close_and_put(struct xe_vm *vm);
>
> static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 115b9bca2a25..6b03f319ab70 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -82,6 +82,7 @@ extern "C" {
> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> * - &DRM_IOCTL_XE_OBSERVATION
> * - &DRM_IOCTL_XE_MADVISE
> + * - &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
> */
>
> /*
> @@ -104,6 +105,7 @@ extern "C" {
> #define DRM_XE_WAIT_USER_FENCE 0x0a
> #define DRM_XE_OBSERVATION 0x0b
> #define DRM_XE_MADVISE 0x0c
> +#define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
>
> /* Must be kept compact -- no holes */
>
> @@ -120,6 +122,7 @@ extern "C" {
> #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> #define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> +#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
>
> /**
> * DOC: Xe IOCTL Extensions
> @@ -2113,6 +2116,142 @@ struct drm_xe_madvise {
> __u64 reserved[2];
> };
>
> +/**
> + * struct drm_xe_mem_range_attr - Output of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
> + *
> + * This structure is provided by userspace and filled by the KMD in response to
> + * the DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl. It describes the memory
> + * attributes of a memory range within a user-specified address range in a VM.
> + *
> + * The structure includes information such as the atomic access policy,
> + * page attribute table (PAT) index, and preferred memory location.
> + * Userspace allocates an array of these structures and passes a pointer to the
> + * ioctl to retrieve the attributes of each memory range.
> + */
> +struct drm_xe_mem_range_attr {
> + /** @extensions: Pointer to the first extension struct, if any */
> + __u64 extensions;
> +
> + /** @start: start of the memory range */
> + __u64 start;
> +
> + /** @end: end of the memory range */
> + __u64 end;
> +
> + /** @preferred_mem_loc: preferred memory location */
> + struct {
> + /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
> + __u32 devmem_fd;
> +
> + /** @preferred_mem_loc.migration_policy: Page migration policy */
> + __u32 migration_policy;
> + } preferred_mem_loc;
> +
> + /** @atomic: Atomic access policy */
> + struct {
> + /** @atomic.val: atomic attribute */
> + __u32 val;
> +
> + /** @atomic.reserved: Reserved */
> + __u32 reserved;
> + } atomic;
> +
> + /** @pat_index: Page attribute table index */
> + struct {
> + /** @pat_index.val: PAT index */
> + __u32 val;
> +
> + /** @pat_index.reserved: Reserved */
> + __u32 reserved;
> + } pat_index;
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +};
> +
> +/**
> + * struct drm_xe_vm_query_mem_range_attr - Input of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
> + *
> + * This structure is used to query the memory attributes of memory ranges
> + * within a user-specified address range in a VM. It provides detailed
> + * information about each memory range, including the atomic access policy,
> + * page attribute table (PAT) index, and preferred memory location.
> + *
> + * Userspace first calls the ioctl with @num_mem_ranges = 0,
> + * @sizeof_mem_range_attr = 0 and @vector_of_mem_attr = NULL to retrieve
> + * the number of memory ranges and the size of each memory range attribute.
> + * It then allocates a buffer of that size and calls the ioctl again to fill
> + * the buffer with memory range attributes.
> + *
> + * If the second call fails with -ENOSPC, the memory ranges changed between
> + * the two calls; retry with @num_mem_ranges = 0, @sizeof_mem_range_attr = 0
> + * and @vector_of_mem_attr = NULL, followed by the second ioctl call again.
> + *
> + * Example:
> + *
> + * .. code-block:: C
> + *
> + * struct drm_xe_vm_query_mem_range_attr query = {
> + * .vm_id = vm_id,
> + * .start = 0x100000,
> + * .range = 0x2000,
> + * };
> + *
> + * // First ioctl call to get num of mem regions and sizeof each attribute
> + * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
> + *
> + * // Allocate buffer for the memory region attributes
> + * void *ptr = malloc(query.num_mem_ranges * query.sizeof_mem_range_attr);
> + *
> + * query.vector_of_mem_attr = (uintptr_t)ptr;
> + *
> + * // Second ioctl call to actually fill the memory attributes
> + * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
> + *
> + * // Iterate over the returned memory range attributes
> + * for (unsigned int i = 0; i < query.num_mem_ranges; ++i) {
> + * struct drm_xe_mem_range_attr *attr = (struct drm_xe_mem_range_attr *)
> + * ((char *)ptr + i * query.sizeof_mem_range_attr);
> + *
> + * // Do something with attr
> + * }
> + *
> + * free(ptr);
> + */
> +struct drm_xe_vm_query_mem_range_attr {
> + /** @extensions: Pointer to the first extension struct, if any */
> + __u64 extensions;
> +
> + /** @vm_id: ID of the VM whose address range is queried */
> + __u32 vm_id;
> +
> + /** @num_mem_ranges: number of memory ranges in the address range */
> + __u32 num_mem_ranges;
> +
> + /** @start: start of the virtual address range */
> + __u64 start;
> +
> + /** @range: size of the virtual address range */
> + __u64 range;
> +
> + /** @sizeof_mem_range_attr: size of struct drm_xe_mem_range_attr */
> + __u64 sizeof_mem_range_attr;
> +
> + /** @vector_of_mem_attr: userptr to array of struct drm_xe_mem_range_attr */
> + __u64 vector_of_mem_attr;
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +
> +};
> +
> #if defined(__cplusplus)
> }
> #endif
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 54+ messages in thread
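[Editor's note: the two-call protocol documented in the patch above can be exercised end to end with the kernel side mocked out. The structs and `mock_query_ioctl()` below are trimmed, illustrative stand-ins for the real UAPI, not the driver's behavior:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Trimmed stand-in for struct drm_xe_mem_range_attr. */
struct mem_range_attr {
	uint64_t start, end;
	uint32_t pat_index;
};

/* Trimmed stand-in for struct drm_xe_vm_query_mem_range_attr. */
struct query_args {
	uint32_t num_mem_ranges;
	uint64_t sizeof_mem_range_attr;
	void *vector_of_mem_attr;	/* the real UAPI uses a __u64 user pointer */
};

/* Pretend the VM currently has three memory ranges. */
static int mock_query_ioctl(struct query_args *args)
{
	static const struct mem_range_attr ranges[] = {
		{ 0x100000, 0x101000, 3 },
		{ 0x101000, 0x101800, 0 },
		{ 0x101800, 0x102000, 3 },
	};

	if (args->num_mem_ranges == 0) {	/* first call: report count/size */
		args->num_mem_ranges = 3;
		args->sizeof_mem_range_attr = sizeof(struct mem_range_attr);
		return 0;
	}

	/* second call: fill the caller-allocated buffer */
	memcpy(args->vector_of_mem_attr, ranges,
	       args->num_mem_ranges * args->sizeof_mem_range_attr);
	return 0;
}
```

A caller zero-initializes the args, makes the counting call, allocates `num_mem_ranges * sizeof_mem_range_attr` bytes, and makes the fill call; on -ENOSPC it would start over from the counting call.]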
* Re: [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-07-30 13:00 ` [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
@ 2025-08-05 20:03 ` Matthew Brost
2025-08-06 5:30 ` Ghimiray, Himal Prasad
2025-08-05 20:10 ` Matthew Brost
1 sibling, 1 reply; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 20:03 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Wed, Jul 30, 2025 at 06:30:39PM +0530, Himal Prasad Ghimiray wrote:
> If the platform does not support atomic access on system memory, and the
> ranges are in system memory, but the user requires atomic accesses on
> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
> operations as well.
>
> v2
> - Drop unnecessary vm_dbg
>
> v3 (Matthew Brost)
> - fix atomic policy
> - prefetch shouldn't have any impact of atomic
> - bo can be accessed from vma, avoid duplicate parameter
>
> v4 (Matthew Brost)
> - Remove TODO comment
> - Fix comment
> - Dont allow gpu atomic ops when user is setting atomic attr as CPU
>
> v5 (Matthew Brost)
> - Fix atomic checks
> - Add userptr checks
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pt.c | 23 ++++++++++--------
> drivers/gpu/drm/xe/xe_svm.c | 8 ++++--
> drivers/gpu/drm/xe/xe_vm.c | 39 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 2 ++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 15 +++++++++++-
> 5 files changed, 74 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 593fef438cd8..6f5b384991cd 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
> * - In all other cases device atomics will be disabled with AE=0 until an application
> * request differently using a ioctl like madvise.
> */
> -static bool xe_atomic_for_vram(struct xe_vm *vm)
> +static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
> {
> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
> + return false;
> +
> return true;
> }
>
> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
> +static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
> {
> struct xe_device *xe = vm->xe;
> + struct xe_bo *bo = xe_vma_bo(vma);
>
> - if (!xe->info.has_device_atomics_on_smem)
> + if (!xe->info.has_device_atomics_on_smem ||
> + vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
> return false;
>
> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
> + return true;
> +
> /*
> * If a SMEM+LMEM allocation is backed by SMEM, a device
> * atomics will cause a gpu page fault and which then
> * gets migrated to LMEM, bind such allocations with
> * device atomics enabled.
> - *
> - * TODO: Revisit this. Perhaps add something like a
> - * fault_on_atomics_in_system UAPI flag.
> - * Note that this also prohibits GPU atomics in LR mode for
> - * userptr and system memory on DGFX.
> */
> return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
> (bo && xe_bo_has_single_placement(bo))));
> @@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
> goto walk_pt;
>
> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
> - xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
> + xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
> + xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
> XE_USM_PPGTT_PTE_AE : 0;
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 1d0b444bf2ae..5e78beebe114 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -793,14 +793,18 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> struct xe_gt *gt, u64 fault_addr,
> bool atomic)
> {
> + int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> +
> + if (need_vram < 0)
> + return need_vram;
> +
Does the compiler not complain about logic before declarations?
Either way how about...
xe_svm_handle_pagefault()
{
/* Logic above */
return __xe_svm_handle_pagefault(.., need_vram);
}
Matt
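[Editor's note: the suggestion above is the common validate-then-delegate split, which also keeps all declarations at the top of each function. A compilable toy version showing the shape — every name and error value here is illustrative, not the real xe code:

```c
#include <assert.h>

#define EINVAL_NEG (-22)	/* stand-in for -EINVAL */

/* Stub: returns 1 (needs VRAM), 0 (does not), or a negative errno. */
static int need_vram_for_atomic(int atomic, int cpu_only_attr)
{
	if (!atomic)
		return 0;

	return cpu_only_attr ? EINVAL_NEG : 1;
}

/* Inner helper: declarations (ctx setup etc.) stay at the top here. */
static int __handle_pagefault(int need_vram)
{
	(void)need_vram;	/* would feed drm_gpusvm_ctx.devmem_only */

	return 0;		/* placeholder for the real fault handling */
}

/* Outer function validates first, then delegates. */
static int handle_pagefault(int atomic, int cpu_only_attr)
{
	int need_vram = need_vram_for_atomic(atomic, cpu_only_attr);

	if (need_vram < 0)
		return need_vram;

	return __handle_pagefault(need_vram);
}
```

The error from the check propagates unchanged, and no statement precedes a declaration in either function.]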
> struct drm_gpusvm_ctx ctx = {
> .read_only = xe_vma_read_only(vma),
> .devmem_possible = IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> .check_pages_threshold = IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
> - .devmem_only = atomic && IS_DGFX(vm->xe) &&
> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> + .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
> vm->xe->atomic_svm_timeslice_ms : 0,
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index d039779412b3..463736db19d9 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4183,6 +4183,45 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> kvfree(snap);
> }
>
> +/**
> + * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
> + * @xe: Pointer to the XE device structure
> + * @vma: Pointer to the virtual memory area (VMA) structure
> + * @is_atomic: In pagefault path and atomic operation
> + *
> + * This function determines whether the given VMA needs to be migrated to
> + * VRAM in order to perform atomic GPU operations.
> + *
> + * Return:
> + * 1 - Migration to VRAM is required
> + * 0 - Migration is not required
> + * -EINVAL - Invalid access for atomic memory attr
> + *
> + */
> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
> +{
> + if (!IS_DGFX(xe) || !is_atomic)
> + return 0;
> +
> + /*
> + * NOTE: The checks implemented here are platform-specific. For
> + * instance, on a device supporting CXL atomics, these would ideally
> + * work universally without additional handling.
> + */
> + switch (vma->attr.atomic_access) {
> + case DRM_XE_ATOMIC_DEVICE:
> + return !xe->info.has_device_atomics_on_smem;
> +
> + case DRM_XE_ATOMIC_CPU:
> + return -EINVAL;
> +
> + case DRM_XE_ATOMIC_UNDEFINED:
> + case DRM_XE_ATOMIC_GLOBAL:
> + default:
> + return 1;
> + }
> +}
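[Editor's note: the switch above reduces to a small decision table. A self-contained restatement for reference — the enum ordering mirrors the UAPI values as far as this series shows them, and -22 stands in for -EINVAL:

```c
#include <assert.h>

enum { ATOMIC_UNDEFINED, ATOMIC_DEVICE, ATOMIC_GLOBAL, ATOMIC_CPU };

/* Standalone restatement of xe_vma_need_vram_for_atomic():
 * 1 = migrate to VRAM, 0 = no migration, negative = invalid access. */
static int need_vram_for_atomic(int is_dgfx, int atomics_on_smem,
				int atomic_access, int is_atomic)
{
	if (!is_dgfx || !is_atomic)
		return 0;

	switch (atomic_access) {
	case ATOMIC_DEVICE:
		/* device atomics in SMEM are fine if the HW supports them */
		return !atomics_on_smem;
	case ATOMIC_CPU:
		/* a GPU atomic fault on a CPU-only attribute is an error */
		return -22;
	case ATOMIC_UNDEFINED:
	case ATOMIC_GLOBAL:
	default:
		return 1;
	}
}
```

On integrated parts (or non-atomic faults) the answer is always "no migration"; everything else depends on the per-VMA attribute.]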
> +
> /**
> * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
> * @vm: Pointer to the xe_vm structure
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 0d6b08cc4163..05ac3118d9f4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>
> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>
> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
> +
> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index b861c3349b0a..a53b63dd603d 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -85,7 +85,20 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise *op)
> {
> - /* Implementation pending */
> + int i;
> +
> + xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
> + xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
> +
> + for (i = 0; i < num_vmas; i++) {
> + if (xe_vma_is_userptr(vmas[i])) {
> + if (!(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
> + xe->info.has_device_atomics_on_smem))
> + continue;
> + }
> + vmas[i]->attr.atomic_access = op->atomic.val;
> + /*TODO: handle bo backed vmas */
> + }
> }
>
> static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> --
> 2.34.1
>
* Re: [PATCH v5 20/25] drm/xe/bo: Update atomic_access attribute on madvise
2025-07-30 13:00 ` [PATCH v5 20/25] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
@ 2025-08-05 20:06 ` Matthew Brost
0 siblings, 0 replies; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 20:06 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Wed, Jul 30, 2025 at 06:30:45PM +0530, Himal Prasad Ghimiray wrote:
> Update bo->attr.atomic_access based on user-provided input and determine
> whether to migrate the BO to SMEM during a CPU fault.
>
> v2 (Matthew Brost)
> - Avoid cpu unmapping if bo is already in smem
> - check atomics on smem too for ioctl
> - Add comments
>
> v3
> - Avoid migration in prefetch
>
> v4 (Matthew Brost)
> - make sanity check function bool
> - add assert for smem placement
> - fix doc
>
> v5 (Matthew Brost)
> - NACK atomic fault with DRM_XE_ATOMIC_CPU
>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 29 ++++++++++++--
> drivers/gpu/drm/xe/xe_gt_pagefault.c | 35 ++++++----------
> drivers/gpu/drm/xe/xe_vm.c | 7 +++-
> drivers/gpu/drm/xe/xe_vm_madvise.c | 60 +++++++++++++++++++++++++++-
> 4 files changed, 103 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index ffca1cea5585..6ab297f94d12 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -1709,6 +1709,18 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
> }
> }
>
> +static bool should_migrate_to_smem(struct xe_bo *bo)
> +{
> + /*
> + * NOTE: The following atomic checks are platform-specific. For example,
> + * if a device supports CXL atomics, these may not be necessary or
> + * may behave differently.
> + */
> +
> + return bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL ||
> + bo->attr.atomic_access == DRM_XE_ATOMIC_CPU;
> +}
> +
> static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> {
> struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
> @@ -1717,7 +1729,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> struct xe_bo *bo = ttm_to_xe_bo(tbo);
> bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
> vm_fault_t ret;
> - int idx;
> + int idx, r = 0;
>
> if (needs_rpm)
> xe_pm_runtime_get(xe);
> @@ -1729,8 +1741,19 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> if (drm_dev_enter(ddev, &idx)) {
> trace_xe_bo_cpu_fault(bo);
>
> - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
> - TTM_BO_VM_NUM_PREFAULT);
> + if (should_migrate_to_smem(bo)) {
> + xe_assert(xe, bo->flags & XE_BO_FLAG_SYSTEM);
> +
> + r = xe_bo_migrate(bo, XE_PL_TT);
> + if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
> + ret = VM_FAULT_NOPAGE;
> + else if (r)
> + ret = VM_FAULT_SIGBUS;
> + }
> + if (!ret)
> + ret = ttm_bo_vm_fault_reserved(vmf,
> + vmf->vma->vm_page_prot,
> + TTM_BO_VM_NUM_PREFAULT);
> drm_dev_exit(idx);
> } else {
> ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
> diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> index ab43dec52776..4ea30fbce9bd 100644
> --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> @@ -75,7 +75,7 @@ static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
> }
>
> static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
> - bool atomic, struct xe_vram_region *vram)
> + bool need_vram_move, struct xe_vram_region *vram)
> {
> struct xe_bo *bo = xe_vma_bo(vma);
> struct xe_vm *vm = xe_vma_vm(vma);
> @@ -85,26 +85,13 @@ static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
> if (err)
> return err;
>
> - if (atomic && vram) {
> - xe_assert(vm->xe, IS_DGFX(vm->xe));
> + if (!bo)
> + return 0;
>
> - if (xe_vma_is_userptr(vma)) {
> - err = -EACCES;
> - return err;
> - }
> + err = need_vram_move ? xe_bo_migrate(bo, vram->placement) :
> + xe_bo_validate(bo, vm, true);
>
> - /* Migrate to VRAM, move should invalidate the VMA first */
> - err = xe_bo_migrate(bo, vram->placement);
> - if (err)
> - return err;
> - } else if (bo) {
> - /* Create backing store if needed */
> - err = xe_bo_validate(bo, vm, true);
> - if (err)
> - return err;
> - }
> -
> - return 0;
> + return err;
> }
>
> static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
> @@ -115,10 +102,14 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
> struct drm_exec exec;
> struct dma_fence *fence;
> ktime_t end = 0;
> - int err;
> + int err, needs_vram;
>
> lockdep_assert_held_write(&vm->lock);
>
> + needs_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> + if (needs_vram < 0 || (needs_vram && xe_vma_is_userptr(vma)))
> + return needs_vram < 0 ? needs_vram : -EACCES;
> +
> xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_COUNT, 1);
> xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_KB, xe_vma_size(vma) / 1024);
>
> @@ -141,7 +132,7 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
> /* Lock VM and BOs dma-resv */
> drm_exec_init(&exec, 0, 0);
> drm_exec_until_all_locked(&exec) {
> - err = xe_pf_begin(&exec, vma, atomic, tile->mem.vram);
> + err = xe_pf_begin(&exec, vma, needs_vram == 1, tile->mem.vram);
> drm_exec_retry_on_contention(&exec);
> if (xe_vm_validate_should_retry(&exec, err, &end))
> err = -EAGAIN;
> @@ -576,7 +567,7 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc)
> /* Lock VM and BOs dma-resv */
> drm_exec_init(&exec, 0, 0);
> drm_exec_until_all_locked(&exec) {
> - ret = xe_pf_begin(&exec, vma, true, tile->mem.vram);
> + ret = xe_pf_begin(&exec, vma, IS_DGFX(vm->xe), tile->mem.vram);
> drm_exec_retry_on_contention(&exec);
> if (ret)
> break;
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index d57fc1071142..0774b40bc37b 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4214,15 +4214,18 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> */
> int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
> {
> + u32 atomic_access = xe_vma_bo(vma) ? xe_vma_bo(vma)->attr.atomic_access :
> + vma->attr.atomic_access;
> +
> if (!IS_DGFX(xe) || !is_atomic)
> - return 0;
> + return false;
>
> /*
> * NOTE: The checks implemented here are platform-specific. For
> * instance, on a device supporting CXL atomics, these would ideally
> * work universally without additional handling.
> */
> - switch (vma->attr.atomic_access) {
> + switch (atomic_access) {
> case DRM_XE_ATOMIC_DEVICE:
> return !xe->info.has_device_atomics_on_smem;
>
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 51a9364abc72..16ab1267ad21 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -102,6 +102,7 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise *op)
> {
> + struct xe_bo *bo;
> int i;
>
> xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
> @@ -113,8 +114,21 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> xe->info.has_device_atomics_on_smem))
> continue;
> }
> +
> vmas[i]->attr.atomic_access = op->atomic.val;
> - /*TODO: handle bo backed vmas */
> +
> + bo = xe_vma_bo(vmas[i]);
> + if (!bo)
> + continue;
> +
> + xe_bo_assert_held(bo);
> + bo->attr.atomic_access = op->atomic.val;
> +
> + /* Invalidate cpu page table, so bo can migrate to smem in next access */
> + if (xe_bo_is_vram(bo) &&
> + (bo->attr.atomic_access == DRM_XE_ATOMIC_CPU ||
> + bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL))
> + ttm_bo_unmap_virtual(&bo->ttm);
> }
> }
>
> @@ -263,6 +277,41 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
> return true;
> }
>
> +static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
> + int num_vmas, u32 atomic_val)
> +{
> + struct xe_device *xe = vm->xe;
> + struct xe_bo *bo;
> + int i;
> +
> + for (i = 0; i < num_vmas; i++) {
> + bo = xe_vma_bo(vmas[i]);
> + if (!bo)
> + continue;
> + /*
> + * NOTE: The following atomic checks are platform-specific. For example,
> + * if a device supports CXL atomics, these may not be necessary or
> + * may behave differently.
> + */
> + if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_CPU &&
> + !(bo->flags & XE_BO_FLAG_SYSTEM)))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_DEVICE &&
> + !(bo->flags & XE_BO_FLAG_VRAM0) &&
> + !(bo->flags & XE_BO_FLAG_VRAM1) &&
> + !(bo->flags & XE_BO_FLAG_SYSTEM &&
> + xe->info.has_device_atomics_on_smem)))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_GLOBAL &&
> + (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
> + (!(bo->flags & XE_BO_FLAG_VRAM0) &&
> + !(bo->flags & XE_BO_FLAG_VRAM1)))))
> + return false;
> + }
> + return true;
> +}
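[Editor's note: the three XE_IOCTL_DBG() checks above read more easily as per-policy placement requirements. A standalone restatement for a single BO — the flag bits and enum values below are made up for illustration, not the real XE_BO_FLAG_* encoding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define FLAG_SYSTEM (1u << 0)	/* illustrative bits, not XE_BO_FLAG_* */
#define FLAG_VRAM0  (1u << 1)
#define FLAG_VRAM1  (1u << 2)

enum { ATOMIC_UNDEFINED, ATOMIC_DEVICE, ATOMIC_GLOBAL, ATOMIC_CPU };

/* Restates check_bo_args_are_sane() for one BO's placement flags. */
static bool bo_atomic_placement_ok(uint32_t flags, int atomic_val,
				   bool atomics_on_smem)
{
	bool vram = flags & (FLAG_VRAM0 | FLAG_VRAM1);
	bool smem = flags & FLAG_SYSTEM;

	switch (atomic_val) {
	case ATOMIC_CPU:	/* CPU atomics require a system placement */
		return smem;
	case ATOMIC_DEVICE:	/* device atomics need VRAM, or SMEM + HW support */
		return vram || (smem && atomics_on_smem);
	case ATOMIC_GLOBAL:	/* global atomics need both placements */
		return smem && vram;
	default:
		return true;
	}
}
```

The ioctl rejects the whole madvise with -EINVAL if any BO in the range fails its policy's requirement.]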
> /**
> * xe_vm_madvise_ioctl - Handle MADVise ioctl for a VM
> * @dev: DRM device pointer
> @@ -314,6 +363,15 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> goto unlock_vm;
>
> if (madvise_range.has_bo_vmas) {
> + if (args->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC) {
> + if (!check_bo_args_are_sane(vm, madvise_range.vmas,
> + madvise_range.num_vmas,
> + args->atomic.val)) {
> + err = -EINVAL;
> + goto unlock_vm;
> + }
> + }
> +
> drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
> drm_exec_until_all_locked(&exec) {
> for (int i = 0; i < madvise_range.num_vmas; i++) {
> --
> 2.34.1
>
* Re: [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-07-30 13:00 ` [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
2025-08-05 20:03 ` Matthew Brost
@ 2025-08-05 20:10 ` Matthew Brost
2025-08-06 5:29 ` Ghimiray, Himal Prasad
1 sibling, 1 reply; 54+ messages in thread
From: Matthew Brost @ 2025-08-05 20:10 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Wed, Jul 30, 2025 at 06:30:39PM +0530, Himal Prasad Ghimiray wrote:
> If the platform does not support atomic access on system memory, and the
> ranges are in system memory, but the user requires atomic accesses on
> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
> operations as well.
>
> v2
> - Drop unnecessary vm_dbg
>
> v3 (Matthew Brost)
> - fix atomic policy
> - prefetch shouldn't have any impact of atomic
> - bo can be accessed from vma, avoid duplicate parameter
>
> v4 (Matthew Brost)
> - Remove TODO comment
> - Fix comment
> - Dont allow gpu atomic ops when user is setting atomic attr as CPU
>
> v5 (Matthew Brost)
> - Fix atomic checks
> - Add userptr checks
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pt.c | 23 ++++++++++--------
> drivers/gpu/drm/xe/xe_svm.c | 8 ++++--
> drivers/gpu/drm/xe/xe_vm.c | 39 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 2 ++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 15 +++++++++++-
> 5 files changed, 74 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 593fef438cd8..6f5b384991cd 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
> * - In all other cases device atomics will be disabled with AE=0 until an application
> * request differently using a ioctl like madvise.
> */
> -static bool xe_atomic_for_vram(struct xe_vm *vm)
> +static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
> {
> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
> + return false;
> +
> return true;
> }
>
> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
> +static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
> {
> struct xe_device *xe = vm->xe;
> + struct xe_bo *bo = xe_vma_bo(vma);
>
> - if (!xe->info.has_device_atomics_on_smem)
> + if (!xe->info.has_device_atomics_on_smem ||
> + vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
> return false;
>
> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
> + return true;
> +
> /*
> * If a SMEM+LMEM allocation is backed by SMEM, a device
> * atomics will cause a gpu page fault and which then
> * gets migrated to LMEM, bind such allocations with
> * device atomics enabled.
> - *
> - * TODO: Revisit this. Perhaps add something like a
> - * fault_on_atomics_in_system UAPI flag.
> - * Note that this also prohibits GPU atomics in LR mode for
> - * userptr and system memory on DGFX.
> */
> return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
> (bo && xe_bo_has_single_placement(bo))));
> @@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
> goto walk_pt;
>
> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
> - xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
> + xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
> + xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
> XE_USM_PPGTT_PTE_AE : 0;
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 1d0b444bf2ae..5e78beebe114 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -793,14 +793,18 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> struct xe_gt *gt, u64 fault_addr,
> bool atomic)
> {
> + int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> +
> + if (need_vram < 0)
> + return need_vram;
> +
> struct drm_gpusvm_ctx ctx = {
> .read_only = xe_vma_read_only(vma),
> .devmem_possible = IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> .check_pages_threshold = IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
> - .devmem_only = atomic && IS_DGFX(vm->xe) &&
> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> + .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
> vm->xe->atomic_svm_timeslice_ms : 0,
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index d039779412b3..463736db19d9 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4183,6 +4183,45 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> kvfree(snap);
> }
>
> +/**
> + * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
> + * @xe: Pointer to the XE device structure
> + * @vma: Pointer to the virtual memory area (VMA) structure
> + * @is_atomic: In pagefault path and atomic operation
> + *
> + * This function determines whether the given VMA needs to be migrated to
> + * VRAM in order to perform atomic GPU operations.
> + *
> + * Return:
> + * 1 - Migration to VRAM is required
> + * 0 - Migration is not required
> + * -EINVAL - Invalid access for atomic memory attr
Also how about -EACCES here?
Matt
> + *
> + */
> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
> +{
> + if (!IS_DGFX(xe) || !is_atomic)
> + return 0;
> +
> + /*
> + * NOTE: The checks implemented here are platform-specific. For
> + * instance, on a device supporting CXL atomics, these would ideally
> + * work universally without additional handling.
> + */
> + switch (vma->attr.atomic_access) {
> + case DRM_XE_ATOMIC_DEVICE:
> + return !xe->info.has_device_atomics_on_smem;
> +
> + case DRM_XE_ATOMIC_CPU:
> + return -EINVAL;
> +
> + case DRM_XE_ATOMIC_UNDEFINED:
> + case DRM_XE_ATOMIC_GLOBAL:
> + default:
> + return 1;
> + }
> +}
> +
> /**
> * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
> * @vm: Pointer to the xe_vm structure
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 0d6b08cc4163..05ac3118d9f4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>
> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>
> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
> +
> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index b861c3349b0a..a53b63dd603d 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -85,7 +85,20 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise *op)
> {
> - /* Implementation pending */
> + int i;
> +
> + xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
> + xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
> +
> + for (i = 0; i < num_vmas; i++) {
> + if (xe_vma_is_userptr(vmas[i])) {
> + if (!(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
> + xe->info.has_device_atomics_on_smem))
> + continue;
> + }
> + vmas[i]->attr.atomic_access = op->atomic.val;
> + /*TODO: handle bo backed vmas */
> + }
> }
>
> static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> --
> 2.34.1
>
* Re: [PATCH v5 23/25] drm/xe: Reset VMA attributes to default in SVM garbage collector
2025-07-30 13:00 ` [PATCH v5 23/25] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
@ 2025-08-06 4:06 ` Matthew Brost
2025-08-06 5:32 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 54+ messages in thread
From: Matthew Brost @ 2025-08-06 4:06 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Wed, Jul 30, 2025 at 06:30:48PM +0530, Himal Prasad Ghimiray wrote:
> Restore default memory attributes for VMAs during garbage collection
> if they were modified by madvise. Reuse existing VMA if fully overlapping;
> otherwise, allocate a new mirror VMA.
>
> v2 (Matthew Brost)
> - Add helper for vma split
> - Add retry to get updated vma
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 114 +++++++++++++++++++++-----
> drivers/gpu/drm/xe/xe_vm.c | 155 ++++++++++++++++++++++++++----------
> drivers/gpu/drm/xe/xe_vm.h | 2 +
> 3 files changed, 206 insertions(+), 65 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index aef76e08b460..9b3a3f61758c 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -253,9 +253,55 @@ static int __xe_svm_garbage_collector(struct xe_vm *vm,
> return 0;
> }
>
> +static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64 range_end)
> +{
> + struct xe_vma *vma;
> + struct xe_vma_mem_attr default_attr = {
> + .preferred_loc = {
> + .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
> + .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
> + },
> + .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> + };
> + int err = 0;
> +
> + vma = xe_vm_find_vma_by_addr(vm, range_start);
> + if (!vma)
> + return -EINVAL;
> +
> + if (xe_vma_has_default_mem_attrs(vma))
> + return 0;
> +
> + vm_dbg(&vm->xe->drm, "Existing VMA start=0x%016llx, vma_end=0x%016llx",
> + xe_vma_start(vma), xe_vma_end(vma));
> +
> + if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
> + default_attr.pat_index = vma->attr.default_pat_index;
> + default_attr.default_pat_index = vma->attr.default_pat_index;
> + vma->attr = default_attr;
> + } else {
> + vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
> + range_start, range_end);
> + err = xe_vm_alloc_cpu_addr_mirror_vma(vm, range_start, range_end - range_start);
> + if (err) {
> + drm_warn(&vm->xe->drm, "VMA SPLIT failed: %pe\n", ERR_PTR(err));
> + xe_vm_kill(vm, true);
> + return err;
> + }
> + }
> +
> + /*
> + * On call from xe_svm_handle_pagefault original VMA might be changed
> + * signal this to lookup for VMA again.
> + */
> + return -EAGAIN;
> +}
> +
> static int xe_svm_garbage_collector(struct xe_vm *vm)
> {
> struct xe_svm_range *range;
> + u64 range_start;
> + u64 range_end;
> int err;
>
> lockdep_assert_held_write(&vm->lock);
> @@ -271,6 +317,9 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
> if (!range)
> break;
>
> + range_start = xe_svm_range_start(range);
> + range_end = xe_svm_range_end(range);
> +
> list_del(&range->garbage_collector_link);
> spin_unlock(&vm->svm.garbage_collector.lock);
>
> @@ -283,6 +332,10 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
> return err;
> }
>
> + err = xe_svm_range_set_default_attr(vm, range_start, range_end);
> + if (err)
> + return err;
You don't want to return on -EAGAIN here; rather, collect it, continue,
and return -EAGAIN once the garbage collector list is empty. No need to
continuously look up the VMA in xe_svm_handle_pagefault (in the next rev,
__xe_svm_handle_pagefault); this only needs to be done once.
> +
> spin_lock(&vm->svm.garbage_collector.lock);
> }
> spin_unlock(&vm->svm.garbage_collector.lock);
> @@ -793,40 +846,59 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> struct xe_gt *gt, u64 fault_addr,
> bool atomic)
> {
> - int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> -
> - if (need_vram < 0)
> - return need_vram;
> -
> - struct drm_gpusvm_ctx ctx = {
> - .read_only = xe_vma_read_only(vma),
> - .devmem_possible = IS_DGFX(vm->xe) &&
> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> - .check_pages_threshold = IS_DGFX(vm->xe) &&
> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
> - .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> - .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
> - vm->xe->atomic_svm_timeslice_ms : 0,
> - };
> + struct drm_gpusvm_ctx ctx = { };
> + struct drm_pagemap *dpagemap;
> struct xe_svm_range *range;
> struct dma_fence *fence;
> - struct drm_pagemap *dpagemap;
> struct xe_tile *tile = gt_to_tile(gt);
> - int migrate_try_count = ctx.devmem_only ? 3 : 1;
> + bool vma_updated = false;
> + int need_vram;
> + int migrate_try_count;
> ktime_t end = 0;
> int err;
>
> - lockdep_assert_held_write(&vm->lock);
> +find_vma:
> + if (vma_updated) {
> + vma = xe_vm_find_vma_by_addr(vm, fault_addr);
> + if (!vma)
> + return -EINVAL;
> + }
> +
> xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
> + vma_updated = false;
> +
> + need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> + if (need_vram < 0)
> + return need_vram;
This is a bit ugly. I think if you have __xe_svm_handle_pagefault and
xe_svm_handle_pagefault as here [1] this can be handled more cleanly (i.e.
still a static setup of drm_gpusvm_ctx).
If xe_svm_garbage_collector returns an error in __xe_svm_handle_pagefault,
kick it up to xe_svm_handle_pagefault: you catch -EAGAIN there, re-look up
the VMA, and call __xe_svm_handle_pagefault again. I think that would look
quite a bit better.
Matt
[1] https://patchwork.freedesktop.org/patch/666222/?series=149550&rev=5#comment_1222471
> +
> + ctx.read_only = xe_vma_read_only(vma);
> + ctx.devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
> + ctx.check_pages_threshold = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
> + SZ_64K : 0;
> + ctx.devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
> + ctx.timeslice_ms = atomic && IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
> + vm->xe->atomic_svm_timeslice_ms : 0;
>
> + migrate_try_count = ctx.devmem_only ? 3 : 1;
> +
> + lockdep_assert_held_write(&vm->lock);
> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
>
> retry:
> /* Always process UNMAPs first so view SVM ranges is current */
> err = xe_svm_garbage_collector(vm);
> - if (err)
> - return err;
> + if (err) {
> + if (err == -EAGAIN) {
> + /*
> + * VMA might have changed due to garbage
> + * collection; retry lookup
> + */
> + vma_updated = true;
> + goto find_vma;
> + } else {
> + return err;
> + }
> + }
>
> range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 5ee38e9cf6c6..e77c04f92d0b 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4263,36 +4263,24 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
> }
> }
>
> -/**
> - * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
> - * @vm: Pointer to the xe_vm structure
> - * @start: Starting input address
> - * @range: Size of the input range
> - *
> - * This function splits existing vma to create new vma for user provided input range
> - *
> - * Return: 0 if success
> - */
> -int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +static int xe_vm_alloc_vma(struct xe_vm *vm, struct drm_gpuva_op_map *map_req)
> {
> - struct drm_gpuva_op_map map_req = {
> - .va.addr = start,
> - .va.range = range,
> - .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
> - };
> -
> struct xe_vma_ops vops;
> struct drm_gpuva_ops *ops = NULL;
> struct drm_gpuva_op *__op;
> bool is_cpu_addr_mirror = false;
> bool remap_op = false;
> + bool is_madvise = (map_req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
> struct xe_vma_mem_attr tmp_attr;
> + u16 default_pat;
> int err;
>
> lockdep_assert_held_write(&vm->lock);
>
> - vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
> - ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
> + map_req->va.addr, map_req->va.range);
> +
> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, map_req);
> if (IS_ERR(ops))
> return PTR_ERR(ops);
>
> @@ -4303,33 +4291,56 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>
> drm_gpuva_for_each_op(__op, ops) {
> struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> + struct xe_vma *vma = NULL;
>
> - if (__op->op == DRM_GPUVA_OP_REMAP) {
> - xe_assert(vm->xe, !remap_op);
> - remap_op = true;
> + if (!is_madvise) {
> + if (__op->op == DRM_GPUVA_OP_UNMAP) {
> + vma = gpuva_to_vma(op->base.unmap.va);
> + XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
> + default_pat = vma->attr.default_pat_index;
> + }
>
> - if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
> - is_cpu_addr_mirror = true;
> - else
> - is_cpu_addr_mirror = false;
> - }
> + if (__op->op == DRM_GPUVA_OP_REMAP) {
> + vma = gpuva_to_vma(op->base.remap.unmap->va);
> + default_pat = vma->attr.default_pat_index;
> + }
>
> - if (__op->op == DRM_GPUVA_OP_MAP) {
> - xe_assert(vm->xe, remap_op);
> - remap_op = false;
> + if (__op->op == DRM_GPUVA_OP_MAP) {
> + op->map.is_cpu_addr_mirror = true;
> + op->map.pat_index = default_pat;
> + }
> + } else {
> + if (__op->op == DRM_GPUVA_OP_REMAP) {
> + vma = gpuva_to_vma(op->base.remap.unmap->va);
> + xe_assert(vm->xe, !remap_op);
> + remap_op = true;
>
> - /* In case of madvise ops DRM_GPUVA_OP_MAP is always after
> - * DRM_GPUVA_OP_REMAP, so ensure we assign op->map.is_cpu_addr_mirror true
> - * if REMAP is for xe_vma_is_cpu_addr_mirror vma
> - */
> - op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
> - }
> + if (xe_vma_is_cpu_addr_mirror(vma))
> + is_cpu_addr_mirror = true;
> + else
> + is_cpu_addr_mirror = false;
> + }
>
> + if (__op->op == DRM_GPUVA_OP_MAP) {
> + xe_assert(vm->xe, remap_op);
> + remap_op = false;
> + /*
> + * In case of madvise ops DRM_GPUVA_OP_MAP is
> + * always after DRM_GPUVA_OP_REMAP, so ensure
> + * we assign op->map.is_cpu_addr_mirror true
> + * if REMAP is for xe_vma_is_cpu_addr_mirror vma
> + */
> + op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
> + }
> + }
> print_op(vm->xe, __op);
> }
>
> xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
> - vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
> +
> + if (is_madvise)
> + vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
> +
> err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
> if (err)
> goto unwind_ops;
> @@ -4341,15 +4352,20 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> struct xe_vma *vma;
>
> if (__op->op == DRM_GPUVA_OP_UNMAP) {
> - /* There should be no unmap */
> - XE_WARN_ON("UNEXPECTED UNMAP");
> - xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
> + vma = gpuva_to_vma(op->base.unmap.va);
> + /* There should be no unmap for madvise */
> + if (is_madvise)
> + XE_WARN_ON("UNEXPECTED UNMAP");
> +
> + xe_vma_destroy(vma, NULL);
> } else if (__op->op == DRM_GPUVA_OP_REMAP) {
> vma = gpuva_to_vma(op->base.remap.unmap->va);
> - /* Store attributes for REMAP UNMAPPED VMA, so they can be assigned
> - * to newly MAP created vma.
> + /* In case of madvise ops Store attributes for REMAP UNMAPPED
> + * VMA, so they can be assigned to newly MAP created vma.
> */
> - tmp_attr = vma->attr;
> + if (is_madvise)
> + tmp_attr = vma->attr;
> +
> xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
> } else if (__op->op == DRM_GPUVA_OP_MAP) {
> vma = op->map.vma;
> @@ -4357,7 +4373,8 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> * Therefore temp_attr will always have sane values, making it safe to
> * copy them to new vma.
> */
> - vma->attr = tmp_attr;
> + if (is_madvise)
> + vma->attr = tmp_attr;
> }
> }
>
> @@ -4371,3 +4388,53 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> drm_gpuva_ops_free(&vm->gpuvm, ops);
> return err;
> }
> +
> +/**
> + * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
> + * @vm: Pointer to the xe_vm structure
> + * @start: Starting input address
> + * @range: Size of the input range
> + *
> + * This function splits existing vma to create new vma for user provided input range
> + *
> + * Return: 0 if success
> + */
> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +{
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = start,
> + .va.range = range,
> + .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
> + };
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
> +
> + return xe_vm_alloc_vma(vm, &map_req);
> +}
> +
> +/**
> + * xe_vm_alloc_cpu_addr_mirror_vma - Allocate CPU addr mirror vma
> + * @vm: Pointer to the xe_vm structure
> + * @start: Starting input address
> + * @range: Size of the input range
> + *
> + * This function splits/merges existing vma to create new vma for user provided input range
> + *
> + * Return: 0 if success
> + */
> +int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +{
> + struct drm_gpuva_op_map map_req = {
> + .va.addr = start,
> + .va.range = range,
> + };
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + vm_dbg(&vm->xe->drm, "CPU_ADDR_MIRROR_VMA_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
> + start, range);
> +
> + return xe_vm_alloc_vma(vm, &map_req);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index f735d994806d..6538cddf158b 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -177,6 +177,8 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
>
> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>
> +int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
> +
> /**
> * to_userptr_vma() - Return a pointer to an embedding userptr vma
> * @vma: Pointer to the embedded struct xe_vma
> --
> 2.34.1
>
* Re: [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-08-05 20:10 ` Matthew Brost
@ 2025-08-06 5:29 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 54+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-06 5:29 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, Thomas Hellström
On 06-08-2025 01:40, Matthew Brost wrote:
> On Wed, Jul 30, 2025 at 06:30:39PM +0530, Himal Prasad Ghimiray wrote:
>> If the platform does not support atomic access on system memory, and the
>> ranges are in system memory, but the user requires atomic accesses on
>> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
>> operations as well.
>>
>> v2
>> - Drop unnecessary vm_dbg
>>
>> v3 (Matthew Brost)
>> - fix atomic policy
>> - prefetch shouldn't have any impact of atomic
>> - bo can be accessed from vma, avoid duplicate parameter
>>
>> v4 (Matthew Brost)
>> - Remove TODO comment
>> - Fix comment
>> - Dont allow gpu atomic ops when user is setting atomic attr as CPU
>>
>> v5 (Matthew Brost)
>> - Fix atomic checks
>> - Add userptr checks
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pt.c | 23 ++++++++++--------
>> drivers/gpu/drm/xe/xe_svm.c | 8 ++++--
>> drivers/gpu/drm/xe/xe_vm.c | 39 ++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_vm.h | 2 ++
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 15 +++++++++++-
>> 5 files changed, 74 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index 593fef438cd8..6f5b384991cd 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
>> * - In all other cases device atomics will be disabled with AE=0 until an application
>> * request differently using a ioctl like madvise.
>> */
>> -static bool xe_atomic_for_vram(struct xe_vm *vm)
>> +static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
>> {
>> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
>> + return false;
>> +
>> return true;
>> }
>>
>> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
>> +static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
>> {
>> struct xe_device *xe = vm->xe;
>> + struct xe_bo *bo = xe_vma_bo(vma);
>>
>> - if (!xe->info.has_device_atomics_on_smem)
>> + if (!xe->info.has_device_atomics_on_smem ||
>> + vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
>> return false;
>>
>> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
>> + return true;
>> +
>> /*
>> * If a SMEM+LMEM allocation is backed by SMEM, a device
>> * atomics will cause a gpu page fault and which then
>> * gets migrated to LMEM, bind such allocations with
>> * device atomics enabled.
>> - *
>> - * TODO: Revisit this. Perhaps add something like a
>> - * fault_on_atomics_in_system UAPI flag.
>> - * Note that this also prohibits GPU atomics in LR mode for
>> - * userptr and system memory on DGFX.
>> */
>> return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
>> (bo && xe_bo_has_single_placement(bo))));
>> @@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
>> goto walk_pt;
>>
>> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
>> - xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
>> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
>> + xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
>> + xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
>> XE_USM_PPGTT_PTE_AE : 0;
>> }
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index 1d0b444bf2ae..5e78beebe114 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -793,14 +793,18 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> struct xe_gt *gt, u64 fault_addr,
>> bool atomic)
>> {
>> + int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
>> +
>> + if (need_vram < 0)
>> + return need_vram;
>> +
>> struct drm_gpusvm_ctx ctx = {
>> .read_only = xe_vma_read_only(vma),
>> .devmem_possible = IS_DGFX(vm->xe) &&
>> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> .check_pages_threshold = IS_DGFX(vm->xe) &&
>> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
>> - .devmem_only = atomic && IS_DGFX(vm->xe) &&
>> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> + .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
>> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
>> vm->xe->atomic_svm_timeslice_ms : 0,
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index d039779412b3..463736db19d9 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -4183,6 +4183,45 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
>> kvfree(snap);
>> }
>>
>> +/**
>> + * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
>> + * @xe: Pointer to the XE device structure
>> + * @vma: Pointer to the virtual memory area (VMA) structure
>> + * @is_atomic: In pagefault path and atomic operation
>> + *
>> + * This function determines whether the given VMA needs to be migrated to
>> + * VRAM in order to do atomic GPU operation.
>> + *
>> + * Return:
>> + * 1 - Migration to VRAM is required
>> + * 0 - Migration is not required
>> + * -EINVAL - Invalid access for atomic memory attr
>
> Also how about -EACCES here?
>
> Matt
Sure
>
>> + *
>> + */
>> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
>> +{
>> + if (!IS_DGFX(xe) || !is_atomic)
>> + return 0;
>> +
>> + /*
>> + * NOTE: The checks implemented here are platform-specific. For
>> + * instance, on a device supporting CXL atomics, these would ideally
>> + * work universally without additional handling.
>> + */
>> + switch (vma->attr.atomic_access) {
>> + case DRM_XE_ATOMIC_DEVICE:
>> + return !xe->info.has_device_atomics_on_smem;
>> +
>> + case DRM_XE_ATOMIC_CPU:
>> + return -EINVAL;
>> +
>> + case DRM_XE_ATOMIC_UNDEFINED:
>> + case DRM_XE_ATOMIC_GLOBAL:
>> + default:
>> + return 1;
>> + }
>> +}
>> +
>> /**
>> * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
>> * @vm: Pointer to the xe_vm structure
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index 0d6b08cc4163..05ac3118d9f4 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>>
>> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>>
>> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
>> +
>> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>>
>> /**
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index b861c3349b0a..a53b63dd603d 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -85,7 +85,20 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise *op)
>> {
>> - /* Implementation pending */
>> + int i;
>> +
>> + xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
>> + xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
>> +
>> + for (i = 0; i < num_vmas; i++) {
>> + if (xe_vma_is_userptr(vmas[i])) {
>> + if (!(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
>> + xe->info.has_device_atomics_on_smem))
>> + continue;
>> + }
>> + vmas[i]->attr.atomic_access = op->atomic.val;
>> + /*TODO: handle bo backed vmas */
>> + }
>> }
>>
>> static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>> --
>> 2.34.1
>>
* Re: [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-08-05 20:03 ` Matthew Brost
@ 2025-08-06 5:30 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 54+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-06 5:30 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, Thomas Hellström
On 06-08-2025 01:33, Matthew Brost wrote:
> On Wed, Jul 30, 2025 at 06:30:39PM +0530, Himal Prasad Ghimiray wrote:
>> If the platform does not support atomic access on system memory, and the
>> ranges are in system memory, but the user requires atomic accesses on
>> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
>> operations as well.
>>
>> v2
>> - Drop unnecessary vm_dbg
>>
>> v3 (Matthew Brost)
>> - fix atomic policy
>> - prefetch shouldn't have any impact of atomic
>> - bo can be accessed from vma, avoid duplicate parameter
>>
>> v4 (Matthew Brost)
>> - Remove TODO comment
>> - Fix comment
>> - Dont allow gpu atomic ops when user is setting atomic attr as CPU
>>
>> v5 (Matthew Brost)
>> - Fix atomic checks
>> - Add userptr checks
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pt.c | 23 ++++++++++--------
>> drivers/gpu/drm/xe/xe_svm.c | 8 ++++--
>> drivers/gpu/drm/xe/xe_vm.c | 39 ++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_vm.h | 2 ++
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 15 +++++++++++-
>> 5 files changed, 74 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index 593fef438cd8..6f5b384991cd 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
>> * - In all other cases device atomics will be disabled with AE=0 until an application
>> * request differently using a ioctl like madvise.
>> */
>> -static bool xe_atomic_for_vram(struct xe_vm *vm)
>> +static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
>> {
>> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
>> + return false;
>> +
>> return true;
>> }
>>
>> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
>> +static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
>> {
>> struct xe_device *xe = vm->xe;
>> + struct xe_bo *bo = xe_vma_bo(vma);
>>
>> - if (!xe->info.has_device_atomics_on_smem)
>> + if (!xe->info.has_device_atomics_on_smem ||
>> + vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
>> return false;
>>
>> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
>> + return true;
>> +
>> /*
>> * If a SMEM+LMEM allocation is backed by SMEM, a device
>> * atomics will cause a gpu page fault and which then
>> * gets migrated to LMEM, bind such allocations with
>> * device atomics enabled.
>> - *
>> - * TODO: Revisit this. Perhaps add something like a
>> - * fault_on_atomics_in_system UAPI flag.
>> - * Note that this also prohibits GPU atomics in LR mode for
>> - * userptr and system memory on DGFX.
>> */
>> return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
>> (bo && xe_bo_has_single_placement(bo))));
>> @@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
>> goto walk_pt;
>>
>> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
>> - xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
>> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
>> + xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
>> + xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
>> XE_USM_PPGTT_PTE_AE : 0;
>> }
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index 1d0b444bf2ae..5e78beebe114 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -793,14 +793,18 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> struct xe_gt *gt, u64 fault_addr,
>> bool atomic)
>> {
>> + int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
>> +
>> + if (need_vram < 0)
>> + return need_vram;
>> +
>
> Does the compiler not complain about logic before declarations?
Nope.
>
> Either way how about...
>
> xe_svm_handle_pagefault()
> {
> /* Logic above */
>
> return __xe_svm_handle_pagefault(.., need_vram);
> }
Will do it.
>
> Matt
>
>> struct drm_gpusvm_ctx ctx = {
>> .read_only = xe_vma_read_only(vma),
>> .devmem_possible = IS_DGFX(vm->xe) &&
>> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> .check_pages_threshold = IS_DGFX(vm->xe) &&
>> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
>> - .devmem_only = atomic && IS_DGFX(vm->xe) &&
>> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> + .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
>> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
>> vm->xe->atomic_svm_timeslice_ms : 0,
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index d039779412b3..463736db19d9 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -4183,6 +4183,45 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
>> kvfree(snap);
>> }
>>
>> +/**
>> + * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
>> + * @xe: Pointer to the XE device structure
>> + * @vma: Pointer to the virtual memory area (VMA) structure
>> + * @is_atomic: In pagefault path and atomic operation
>> + *
>> + * This function determines whether the given VMA needs to be migrated to
>> + * VRAM in order to do atomic GPU operation.
>> + *
>> + * Return:
>> + * 1 - Migration to VRAM is required
>> + * 0 - Migration is not required
>> + * -EINVAL - Invalid access for atomic memory attr
>> + *
>> + */
>> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
>> +{
>> + if (!IS_DGFX(xe) || !is_atomic)
>> + return 0;
>> +
>> + /*
>> + * NOTE: The checks implemented here are platform-specific. For
>> + * instance, on a device supporting CXL atomics, these would ideally
>> + * work universally without additional handling.
>> + */
>> + switch (vma->attr.atomic_access) {
>> + case DRM_XE_ATOMIC_DEVICE:
>> + return !xe->info.has_device_atomics_on_smem;
>> +
>> + case DRM_XE_ATOMIC_CPU:
>> + return -EINVAL;
>> +
>> + case DRM_XE_ATOMIC_UNDEFINED:
>> + case DRM_XE_ATOMIC_GLOBAL:
>> + default:
>> + return 1;
>> + }
>> +}
>> +
>> /**
>> * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
>> * @vm: Pointer to the xe_vm structure
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index 0d6b08cc4163..05ac3118d9f4 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>>
>> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>>
>> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
>> +
>> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>>
>> /**
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index b861c3349b0a..a53b63dd603d 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -85,7 +85,20 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise *op)
>> {
>> - /* Implementation pending */
>> + int i;
>> +
>> + xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
>> + xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
>> +
>> + for (i = 0; i < num_vmas; i++) {
>> + if (xe_vma_is_userptr(vmas[i])) {
>> + if (!(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
>> + xe->info.has_device_atomics_on_smem))
>> + continue;
>> + }
>> + vmas[i]->attr.atomic_access = op->atomic.val;
>> + /*TODO: handle bo backed vmas */
>> + }
>> }
>>
>> static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>> --
>> 2.34.1
>>
* Re: [PATCH v5 23/25] drm/xe: Reset VMA attributes to default in SVM garbage collector
2025-08-06 4:06 ` Matthew Brost
@ 2025-08-06 5:32 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 54+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-06 5:32 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, Thomas Hellström
On 06-08-2025 09:36, Matthew Brost wrote:
> On Wed, Jul 30, 2025 at 06:30:48PM +0530, Himal Prasad Ghimiray wrote:
>> Restore default memory attributes for VMAs during garbage collection
>> if they were modified by madvise. Reuse existing VMA if fully overlapping;
>> otherwise, allocate a new mirror VMA.
>>
>> v2 (Matthew Brost)
>> - Add helper for vma split
>> - Add retry to get updated vma
>>
>> Suggested-by: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 114 +++++++++++++++++++++-----
>> drivers/gpu/drm/xe/xe_vm.c | 155 ++++++++++++++++++++++++++----------
>> drivers/gpu/drm/xe/xe_vm.h | 2 +
>> 3 files changed, 206 insertions(+), 65 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index aef76e08b460..9b3a3f61758c 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -253,9 +253,55 @@ static int __xe_svm_garbage_collector(struct xe_vm *vm,
>> return 0;
>> }
>>
>> +static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64 range_end)
>> +{
>> + struct xe_vma *vma;
>> + struct xe_vma_mem_attr default_attr = {
>> + .preferred_loc = {
>> + .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
>> + .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
>> + },
>> + .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>> + };
>> + int err = 0;
>> +
>> + vma = xe_vm_find_vma_by_addr(vm, range_start);
>> + if (!vma)
>> + return -EINVAL;
>> +
>> + if (xe_vma_has_default_mem_attrs(vma))
>> + return 0;
>> +
>> + vm_dbg(&vm->xe->drm, "Existing VMA start=0x%016llx, vma_end=0x%016llx",
>> + xe_vma_start(vma), xe_vma_end(vma));
>> +
>> + if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
>> + default_attr.pat_index = vma->attr.default_pat_index;
>> + default_attr.default_pat_index = vma->attr.default_pat_index;
>> + vma->attr = default_attr;
>> + } else {
>> + vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
>> + range_start, range_end);
>> + err = xe_vm_alloc_cpu_addr_mirror_vma(vm, range_start, range_end - range_start);
>> + if (err) {
>> + drm_warn(&vm->xe->drm, "VMA SPLIT failed: %pe\n", ERR_PTR(err));
>> + xe_vm_kill(vm, true);
>> + return err;
>> + }
>> + }
>> +
>> +	/*
>> +	 * When called from xe_svm_handle_pagefault the original VMA might
>> +	 * have changed; signal this so the caller looks up the VMA again.
>> +	 */
>> + return -EAGAIN;
>> +}
>> +
>> static int xe_svm_garbage_collector(struct xe_vm *vm)
>> {
>> struct xe_svm_range *range;
>> + u64 range_start;
>> + u64 range_end;
>> int err;
>>
>> lockdep_assert_held_write(&vm->lock);
>> @@ -271,6 +317,9 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
>> if (!range)
>> break;
>>
>> + range_start = xe_svm_range_start(range);
>> + range_end = xe_svm_range_end(range);
>> +
>> list_del(&range->garbage_collector_link);
>> spin_unlock(&vm->svm.garbage_collector.lock);
>>
>> @@ -283,6 +332,10 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
>> return err;
>> }
>>
>> + err = xe_svm_range_set_default_attr(vm, range_start, range_end);
>> + if (err)
>> + return err;
>
> You don't want to return on -EAGAIN here; rather, collect it, continue,
> and return -EAGAIN once the garbage collector list is empty. No need to
> continuously look up the VMA in xe_svm_handle_pagefault (in the next rev
> __xe_svm_handle_pagefault); this only needs to be done once.
True, makes sense.
>
>> +
>> spin_lock(&vm->svm.garbage_collector.lock);
>> }
>> spin_unlock(&vm->svm.garbage_collector.lock);
>> @@ -793,40 +846,59 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> struct xe_gt *gt, u64 fault_addr,
>> bool atomic)
>> {
>> - int need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
>> -
>> - if (need_vram < 0)
>> - return need_vram;
>> -
>> - struct drm_gpusvm_ctx ctx = {
>> - .read_only = xe_vma_read_only(vma),
>> - .devmem_possible = IS_DGFX(vm->xe) &&
>> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> - .check_pages_threshold = IS_DGFX(vm->xe) &&
>> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
>> - .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
>> - .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
>> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
>> - vm->xe->atomic_svm_timeslice_ms : 0,
>> - };
>> + struct drm_gpusvm_ctx ctx = { };
>> + struct drm_pagemap *dpagemap;
>> struct xe_svm_range *range;
>> struct dma_fence *fence;
>> - struct drm_pagemap *dpagemap;
>> struct xe_tile *tile = gt_to_tile(gt);
>> - int migrate_try_count = ctx.devmem_only ? 3 : 1;
>> + bool vma_updated = false;
>> + int need_vram;
>> + int migrate_try_count;
>> ktime_t end = 0;
>> int err;
>>
>> - lockdep_assert_held_write(&vm->lock);
>> +find_vma:
>> + if (vma_updated) {
>> + vma = xe_vm_find_vma_by_addr(vm, fault_addr);
>> + if (!vma)
>> + return -EINVAL;
>> + }
>> +
>> xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
>> + vma_updated = false;
>> +
>> + need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
>> + if (need_vram < 0)
>> + return need_vram;
>
> This is a bit ugly. I think if you have __xe_svm_handle_pagefault and
> xe_svm_handle_pagefault as here [1] this can be handled cleaner (i.e.
> still a static setup of drm_gpusvm_ctx).
>
> If xe_svm_garbage_collector returns -EAGAIN in __xe_svm_handle_pagefault,
> kick it up to xe_svm_handle_pagefault, catch -EAGAIN there, re-look-up
> the VMA and call __xe_svm_handle_pagefault again. I think that would look
> quite a bit better.
Agreed. Will update in the next version.
Thanks
> Matt
>
> [1] https://patchwork.freedesktop.org/patch/666222/?series=149550&rev=5#comment_1222471
>
>> +
>> + ctx.read_only = xe_vma_read_only(vma);
>> + ctx.devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
>> + ctx.check_pages_threshold = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
>> + SZ_64K : 0;
>> + ctx.devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
>> + ctx.timeslice_ms = atomic && IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
>> + vm->xe->atomic_svm_timeslice_ms : 0;
>>
>> + migrate_try_count = ctx.devmem_only ? 3 : 1;
>> +
>> + lockdep_assert_held_write(&vm->lock);
>> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
>>
>> retry:
>> /* Always process UNMAPs first so view SVM ranges is current */
>> err = xe_svm_garbage_collector(vm);
>> - if (err)
>> - return err;
>> + if (err) {
>> + if (err == -EAGAIN) {
>> + /*
>> + * VMA might have changed due to garbage
>> + * collection; retry lookup
>> + */
>> + vma_updated = true;
>> + goto find_vma;
>> + } else {
>> + return err;
>> + }
>> + }
>>
>> range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 5ee38e9cf6c6..e77c04f92d0b 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -4263,36 +4263,24 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
>> }
>> }
>>
>> -/**
>> - * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
>> - * @vm: Pointer to the xe_vm structure
>> - * @start: Starting input address
>> - * @range: Size of the input range
>> - *
>> - * This function splits existing vma to create new vma for user provided input range
>> - *
>> - * Return: 0 if success
>> - */
>> -int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>> +static int xe_vm_alloc_vma(struct xe_vm *vm, struct drm_gpuva_op_map *map_req)
>> {
>> - struct drm_gpuva_op_map map_req = {
>> - .va.addr = start,
>> - .va.range = range,
>> - .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
>> - };
>> -
>> struct xe_vma_ops vops;
>> struct drm_gpuva_ops *ops = NULL;
>> struct drm_gpuva_op *__op;
>> bool is_cpu_addr_mirror = false;
>> bool remap_op = false;
>> + bool is_madvise = (map_req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
>> struct xe_vma_mem_attr tmp_attr;
>> + u16 default_pat;
>> int err;
>>
>> lockdep_assert_held_write(&vm->lock);
>>
>> - vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
>> - ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
>> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
>> + map_req->va.addr, map_req->va.range);
>> +
>> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, map_req);
>> if (IS_ERR(ops))
>> return PTR_ERR(ops);
>>
>> @@ -4303,33 +4291,56 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>>
>> drm_gpuva_for_each_op(__op, ops) {
>> struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>> + struct xe_vma *vma = NULL;
>>
>> - if (__op->op == DRM_GPUVA_OP_REMAP) {
>> - xe_assert(vm->xe, !remap_op);
>> - remap_op = true;
>> + if (!is_madvise) {
>> + if (__op->op == DRM_GPUVA_OP_UNMAP) {
>> + vma = gpuva_to_vma(op->base.unmap.va);
>> + XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
>> + default_pat = vma->attr.default_pat_index;
>> + }
>>
>> - if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
>> - is_cpu_addr_mirror = true;
>> - else
>> - is_cpu_addr_mirror = false;
>> - }
>> + if (__op->op == DRM_GPUVA_OP_REMAP) {
>> + vma = gpuva_to_vma(op->base.remap.unmap->va);
>> + default_pat = vma->attr.default_pat_index;
>> + }
>>
>> - if (__op->op == DRM_GPUVA_OP_MAP) {
>> - xe_assert(vm->xe, remap_op);
>> - remap_op = false;
>> + if (__op->op == DRM_GPUVA_OP_MAP) {
>> + op->map.is_cpu_addr_mirror = true;
>> + op->map.pat_index = default_pat;
>> + }
>> + } else {
>> + if (__op->op == DRM_GPUVA_OP_REMAP) {
>> + vma = gpuva_to_vma(op->base.remap.unmap->va);
>> + xe_assert(vm->xe, !remap_op);
>> + remap_op = true;
>>
>> - /* In case of madvise ops DRM_GPUVA_OP_MAP is always after
>> - * DRM_GPUVA_OP_REMAP, so ensure we assign op->map.is_cpu_addr_mirror true
>> - * if REMAP is for xe_vma_is_cpu_addr_mirror vma
>> - */
>> - op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
>> - }
>> + if (xe_vma_is_cpu_addr_mirror(vma))
>> + is_cpu_addr_mirror = true;
>> + else
>> + is_cpu_addr_mirror = false;
>> + }
>>
>> + if (__op->op == DRM_GPUVA_OP_MAP) {
>> + xe_assert(vm->xe, remap_op);
>> + remap_op = false;
>> + /*
>> + * In case of madvise ops DRM_GPUVA_OP_MAP is
>> + * always after DRM_GPUVA_OP_REMAP, so ensure
>> + * we assign op->map.is_cpu_addr_mirror true
>> + * if REMAP is for xe_vma_is_cpu_addr_mirror vma
>> + */
>> + op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
>> + }
>> + }
>> print_op(vm->xe, __op);
>> }
>>
>> xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
>> - vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
>> +
>> + if (is_madvise)
>> + vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
>> +
>> err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
>> if (err)
>> goto unwind_ops;
>> @@ -4341,15 +4352,20 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>> struct xe_vma *vma;
>>
>> if (__op->op == DRM_GPUVA_OP_UNMAP) {
>> - /* There should be no unmap */
>> - XE_WARN_ON("UNEXPECTED UNMAP");
>> - xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
>> + vma = gpuva_to_vma(op->base.unmap.va);
>> + /* There should be no unmap for madvise */
>> + if (is_madvise)
>> + XE_WARN_ON("UNEXPECTED UNMAP");
>> +
>> + xe_vma_destroy(vma, NULL);
>> } else if (__op->op == DRM_GPUVA_OP_REMAP) {
>> vma = gpuva_to_vma(op->base.remap.unmap->va);
>> - /* Store attributes for REMAP UNMAPPED VMA, so they can be assigned
>> - * to newly MAP created vma.
>> +		/* In case of madvise ops, store attributes for the REMAP
>> +		 * UNMAPPED VMA so they can be assigned to the newly created
>> +		 * MAP vma.
>> */
>> - tmp_attr = vma->attr;
>> + if (is_madvise)
>> + tmp_attr = vma->attr;
>> +
>> xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
>> } else if (__op->op == DRM_GPUVA_OP_MAP) {
>> vma = op->map.vma;
>> @@ -4357,7 +4373,8 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>> * Therefore temp_attr will always have sane values, making it safe to
>> * copy them to new vma.
>> */
>> - vma->attr = tmp_attr;
>> + if (is_madvise)
>> + vma->attr = tmp_attr;
>> }
>> }
>>
>> @@ -4371,3 +4388,53 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>> drm_gpuva_ops_free(&vm->gpuvm, ops);
>> return err;
>> }
>> +
>> +/**
>> + * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
>> + * @vm: Pointer to the xe_vm structure
>> + * @start: Starting input address
>> + * @range: Size of the input range
>> + *
>> + * This function splits existing vma to create new vma for user provided input range
>> + *
>> + * Return: 0 if success
>> + */
>> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>> +{
>> + struct drm_gpuva_op_map map_req = {
>> + .va.addr = start,
>> + .va.range = range,
>> + .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
>> + };
>> +
>> + lockdep_assert_held_write(&vm->lock);
>> +
>> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
>> +
>> + return xe_vm_alloc_vma(vm, &map_req);
>> +}
>> +
>> +/**
>> + * xe_vm_alloc_cpu_addr_mirror_vma - Allocate CPU addr mirror vma
>> + * @vm: Pointer to the xe_vm structure
>> + * @start: Starting input address
>> + * @range: Size of the input range
>> + *
>> + * This function splits/merges existing vma to create new vma for user provided input range
>> + *
>> + * Return: 0 if success
>> + */
>> +int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>> +{
>> + struct drm_gpuva_op_map map_req = {
>> + .va.addr = start,
>> + .va.range = range,
>> + };
>> +
>> + lockdep_assert_held_write(&vm->lock);
>> +
>> + vm_dbg(&vm->xe->drm, "CPU_ADDR_MIRROR_VMA_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
>> + start, range);
>> +
>> + return xe_vm_alloc_vma(vm, &map_req);
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index f735d994806d..6538cddf158b 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -177,6 +177,8 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
>>
>> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>>
>> +int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>> +
>> /**
>> * to_userptr_vma() - Return a pointer to an embedding userptr vma
>> * @vma: Pointer to the embedded struct xe_vma
>> --
>> 2.34.1
>>
end of thread, other threads:[~2025-08-06 5:32 UTC | newest]
Thread overview: 54+ messages
2025-07-30 13:00 [PATCH v5 00/25] MADVISE FOR XE Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 01/25] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
2025-07-30 23:23 ` kernel test robot
2025-08-05 3:56 ` Matthew Brost
2025-08-05 5:24 ` Ghimiray, Himal Prasad
2025-08-05 10:10 ` Danilo Krummrich
2025-08-05 11:04 ` Ghimiray, Himal Prasad
2025-08-05 9:40 ` Danilo Krummrich
2025-08-05 11:02 ` Ghimiray, Himal Prasad
2025-07-30 13:00 ` [PATCH v5 02/25] drm/gpuvm: Kill drm_gpuva_init() Himal Prasad Ghimiray
2025-08-05 3:45 ` Matthew Brost
2025-08-05 9:35 ` Danilo Krummrich
2025-07-30 13:00 ` [PATCH v5 03/25] drm/gpuvm: Support flags in drm_gpuva_op_map Himal Prasad Ghimiray
2025-08-05 3:58 ` Matthew Brost
2025-08-05 11:05 ` Ghimiray, Himal Prasad
2025-07-30 13:00 ` [PATCH v5 04/25] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
2025-08-05 19:24 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 05/25] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 06/25] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 07/25] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 08/25] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 09/25] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 10/25] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 11/25] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
2025-08-05 4:00 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 12/25] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 13/25] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
2025-08-05 4:43 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 14/25] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
2025-08-05 20:03 ` Matthew Brost
2025-08-06 5:30 ` Ghimiray, Himal Prasad
2025-08-05 20:10 ` Matthew Brost
2025-08-06 5:29 ` Ghimiray, Himal Prasad
2025-07-30 13:00 ` [PATCH v5 15/25] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 16/25] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 17/25] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 18/25] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 19/25] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 20/25] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
2025-08-05 20:06 ` Matthew Brost
2025-07-30 13:00 ` [PATCH v5 21/25] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged Himal Prasad Ghimiray
2025-07-30 20:57 ` kernel test robot
2025-07-30 13:00 ` [PATCH v5 22/25] drm/xe/vm: Add helper to check for default VMA memory attributes Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 23/25] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
2025-08-06 4:06 ` Matthew Brost
2025-08-06 5:32 ` Ghimiray, Himal Prasad
2025-07-30 13:00 ` [PATCH v5 24/25] drm/xe: Enable madvise ioctl for xe Himal Prasad Ghimiray
2025-07-30 13:00 ` [PATCH v5 25/25] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes Himal Prasad Ghimiray
2025-08-05 19:29 ` Matthew Brost
2025-07-30 14:20 ` ✗ CI.checkpatch: warning for MADVISE FOR XE (rev5) Patchwork
2025-07-30 14:21 ` ✓ CI.KUnit: success " Patchwork
2025-07-30 14:36 ` ✗ CI.checksparse: warning " Patchwork
2025-07-30 15:36 ` ✓ Xe.CI.BAT: success " Patchwork
2025-07-30 17:51 ` ✗ Xe.CI.Full: failure " Patchwork